Analytic Continuation
The principle of analytic continuation is one of the most essential properties of holomorphic functions. Even though it can be stated simply and precisely as a theorem, doing so would obscure many of its subtleties and how remarkable it is. It is perhaps more instructive to take a step back to real (analytic) functions and Taylor series, and to see why the complex numbers are the natural setting. Along the way, we shall encounter other fundamental concepts in complex analysis, such as branch cuts, isolated singularities (including poles), meromorphic functions, monodromy, and even Riemann surfaces.
This article may serve as a prologue to a formal study of complex analysis, assuming only a basic acquaintance with Taylor series and complex numbers. It largely follows the perspective of Weierstrass; for a more complete view, there are Cauchy's theory based on contour integration, Riemann's geometric theory, and the perspective of PDEs (partial differential equations).
The video Visualizing the Riemann zeta function and analytic continuation by 3Blue1Brown gives excellent geometric intuition, and this article was largely written to complement it.
From Real to Complex
Most functions that come up "in nature" — either in describing the (ideal) physical world or in pure mathematics — particularly those that are given a special symbol or a name, are in fact analytic: if we take the Taylor series of such a function at any point, which only uses data from arbitrarily close to that point, we can recover the function completely. For example, knowing the function \(\sin x\) for \(x\in \big[0, \frac{\pi}{2} \big] \), or even for a tiny interval, is enough to determine the entire function: simply take its Taylor series at \(x=0:\) \[x-\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,\] which converges for all \(x\in\mathbb R\) and agrees with the standard, periodic definition of \(\sin x\) over the reals. Taking the Taylor series at any other point will result in the exact same function (see Taylor's theorem). This is the simplest and best-case scenario of analytic continuation — from a small interval to the whole real line — for the radius of convergence is always infinite. Such functions are called entire functions, which include all polynomials, the exponential function, certain "special functions" (e.g., Bessel functions), and their sums, products, and compositions.
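As a quick numerical illustration (a short Python sketch; the sample points and the number of terms kept are arbitrary choices), the partial sums of this series recover \(\sin x\) far outside any small interval on which the function was initially "known":

```python
import math

def sin_taylor(x, terms=40):
    """Partial sum of the Taylor series of sin at 0: x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

for x in (0.1, 1.0, math.pi / 2, 5.0, 10.0):
    print(f"x = {x:6.3f}   series: {sin_taylor(x):+.9f}   math.sin: {math.sin(x):+.9f}")
```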
The problem becomes more subtle, and hence more interesting, if the radius of convergence is finite. Suppose we only know \( f(x)=\frac{1}{x} \) on a small neighborhood around \(x=1\). The Taylor series takes the form \[ \sum_{n=0}^\infty (-1)^{n} (x-1)^n, \qquad (*)\] which, being a geometric series, converges only for \(|x-1|<1\), i.e. \( 0<x<2 \), where it indeed converges to \( \frac{1}{x} \). Now, knowing the values of the function near \(x=1.5\), for instance, we may "Taylor expand" at \(x=1.5\), and the new Taylor series in fact converges for \( x\in (0, 3) \) while agreeing with the previous values on \( (0, 2) \). We can say that we have analytically continued the function to \( (0,3)\). Thus, by successive Taylor series expansions, we can "recover" the function \( \frac{1}{x} \) for all \(x>0\), but there is no way whatsoever to extend it to \(x<0\). The point \(x=0\) poses an insurmountable barrier, called a singularity of the function. It seems that we are free to define \(f(x)\) to be any (analytic) function on \(x<0\), and no criterion on the function could favor one choice over the countless others.
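Here is a minimal sketch of that re-expansion step (the number of coefficients retained and the evaluation point \(x=2.2\) are arbitrary choices): using only the coefficients of \((*)\) at \(x=1\), we compute coefficients at the new center \(x=1.5\) and evaluate the new series at a point outside \((0,2)\).

```python
from math import comb

# Pretend all we know about f is the series (*) at x = 1: coefficients c[n] = (-1)^n.
N = 200                                   # number of coefficients kept at the old center
c = [(-1) ** n for n in range(N + 1)]

h = 0.5                                   # shift of center: 1 -> 1.5, safely inside |x - 1| < 1

# Re-expansion ("Taylor shift"): d[k] = sum_{n >= k} C(n, k) * c[n] * h^(n - k)
K = 35                                    # number of coefficients computed at the new center
d = [sum(comb(n, k) * c[n] * h ** (n - k) for n in range(k, N + 1)) for k in range(K + 1)]

# The new series is centered at 1.5 and converges for |x - 1.5| < 1.5, i.e. on (0, 3),
# so it reaches points such as x = 2.2 that the original series could not.
x = 2.2
print(sum(d[k] * (x - 1.5) ** k for k in range(K + 1)))   # ~0.4545...
print(1 / x)                                              # the "true" value 1/2.2
```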
This is where complex numbers come into play: we might be able to "circumvent" the barrier by going into the complex plane. In fact, a Taylor series (or, more generally, a power series) makes perfect sense, as a series, when \(x\) is any complex number, so long as we know how to add and multiply complex numbers. Examining the derivation of the geometric series, we see that \[ 1 + r + r^2 + \cdots = \lim_{n\to\infty}\frac{1-r^{n+1}}{1-r}=\frac{1}{1-r}\] holds for all complex numbers \(r\) of modulus strictly less than 1. Thus, the series \((*)\) converges for all complex \(x\) with \(|x-1|<1\), which is a disk of radius \(1\) centered at \(x=1\). The radius of convergence is literally a radius, and this phenomenon holds true for all (convergent) power series. In particular, entire functions are naturally defined on the whole complex plane.
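Since only addition and multiplication of complex numbers are involved, this is easy to check directly; in the snippet below the particular values of \(r\) and \(x\) are arbitrary choices with \(|r|<1\) and \(|x-1|<1\):

```python
# The geometric series at a complex ratio r with |r| < 1:
r = 0.3 + 0.8j                                        # |r| is about 0.854
print(sum(r ** n for n in range(400)), 1 / (1 - r))

# The same for the series (*) at a complex x inside the disk |x - 1| < 1:
x = 1.0 + 0.6j
print(sum((-1) ** n * (x - 1) ** n for n in range(400)), 1 / x)
```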
Now, by taking the Taylor series at a point off from the real axis, we may get around the singularity at \(x=0\). There are two ways to get to the negative real axis: through the upper half of the complex plane, or through the lower half. It turns out that we'd end up with the exact same result, which as one might expect is simply \(\frac{1}{x}\) for \(x<0\).
In fact, as illustrated above, each Taylor series in the process of analytic continuation converges in an (open) disk just short of the singularity at \(x=0\). Indeed, for any \(a\in\mathbb C\setminus\{0\}\), we may expand \(\frac{1}{x}\) as a Taylor series centered at \(a\): \[\frac{1}{x} = \frac{1}{a+(x-a)} = \frac{1}{a}\cdot \frac{1}{1+\frac{x-a}a}=\frac{1}{a}\sum_{n=0}^\infty (-1)^n\left(\frac{x-a}{a}\right)^n=\sum_{n=0}^\infty \frac{(-1)^n}{a^{n+1}}(x-a)^n\] for \(|\frac{x-a}{a}|<1\), i.e. \(|x-a|<|a|\). This also illustrates that the precise procedure of analytic continuation (choices of the centers of Taylor series expansions) does not matter, and the end result is the same, namely \(f(x)=\frac{1}{x}\) on the punctured plane \(\mathbb C\setminus\{0\}\).

It should be noted right away that not all functions, when analytically continued around a singularity from above and below, have the same result. The two prototypical examples are \(\log x\) and \(\sqrt x\); they are not typically defined for negative \(x\) for this reason.
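Returning to \(\frac{1}{x}\), here is a minimal numerical check that the two routes around \(z=0\) agree (the two centers below lie on the unit circle, one in the upper and one in the lower half-plane, and are otherwise arbitrary choices; the coefficients come from the formula just derived):

```python
# Coefficients of 1/x at a center a, from the formula above: (-1)^n / a^(n+1),
# valid on the disk |x - a| < |a|.
def coeffs(a, terms=200):
    return [(-1) ** n / a ** (n + 1) for n in range(terms)]

def eval_series(c, center, x):
    return sum(cn * (x - center) ** n for n, cn in enumerate(c))

x = -0.6                                 # a point on the negative real axis
for a in (-0.6 + 0.8j, -0.6 - 0.8j):     # centers reached via the upper / lower half-plane
    # |a| = 1 and |x - a| = 0.8 < 1, so x lies inside the disk of convergence at a
    print(a, eval_series(coeffs(a), a, x))
print(1 / x)                             # both centers give the same value, 1/(-0.6)
```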
The complex numbers also provide more insight even in the case when we could analytically continue over the reals. For example, \[f(x)=\frac{1}{1+x^2}\] is defined and infinitely differentiable for all \(x\in\mathbb R\). The Taylor series at \(x=0\), however, has a radius of convergence of 1 (again by geometric series). If we take the complex perspective, we see that \(f(x)\) does have singularities at \(x=\pm i\), which are at a distance 1 from the origin, so it couldn't have a larger radius of convergence. In fact, it is true in general that the Taylor series of any analytic function converges to the function itself within a disk as large as possible (before hitting a "singularity"), when viewed as a complex function.
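A small numerical sketch of this (the sample points and the numbers of terms are arbitrary choices): the partial sums of \(\sum_k (-1)^k x^{2k}\) behave very differently on the two sides of \(|x|=1\), even though \(\frac{1}{1+x^2}\) itself is perfectly well behaved there.

```python
# Taylor series of 1/(1 + x^2) at 0:  sum over k of (-1)^k x^(2k).
def partial_sum(x, terms):
    return sum((-1) ** k * x ** (2 * k) for k in range(terms))

f = lambda x: 1 / (1 + x * x)
for x in (0.5, 0.9, 1.1):
    print(x, f(x), [partial_sum(x, n) for n in (10, 20, 40)])
# For x = 0.5 and 0.9 the partial sums settle down to f(x); for x = 1.1 they blow up,
# even though f(1.1) itself is perfectly finite: the culprit is the pair of
# singularities at x = +i and -i, at distance 1 from the origin.
```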
This may already be enough evidence that analytic functions, which include all the familiar functions, really should be regarded as living on the complex plane or on subsets or extensions thereof. They are not confined to a particular domain (per the modern concept of a function) but have the ability to extend, or continue, in all directions as far as possible, to what can be called their natural domains. Our modern definition of a function, an arbitrary assignment of a value \(y\) to each \(x\) in a prescribed domain, has a very different flavor: there is no canonical way to extend its domain, or rather there are infinitely many choices of extension, none of which is singled out over the others. If the original function happens to be continuous, one may require the extension to be continuous too, which would narrow down the choices but still leave infinitely many possibilities (unless the extension is just to one extra point); if the original function is differentiable, one may ask the same of the extension, which would further narrow down the choices. Analyticity is the strongest criterion of all, and it turns out to be enough to single out a unique choice of extension if one exists. That is the principle of analytic continuation.
To phrase the principle of analytic continuation differently: the identity of an analytic function is "encoded" at each and every point of its natural domain, in the sequence of Taylor series coefficients (or the derivatives) at that point, traditionally known as the germ of the function at the point (in the sense of the seed of a crop). One can easily write down the rules for the basic operations — addition, multiplication, division, inversion, differentiation, etc. — on the set of germs at the same point. To carry on the agrarian analogy, a collection \((\)often a \(\mathbb C\)-vector subspace\()\) of germs at a point is called a stalk, and putting all the stalks (of the same sort) over the various points of a domain together, endowed with a suitable topology, we get a sheaf, which is, fittingly, the word for a bundle (of stalks of grain). This is the beginning of sheaf theory.
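Those rules are concrete enough to put into code; here is a minimal sketch of germ arithmetic on truncated sequences of Taylor coefficients (the truncation order and the helper names are arbitrary choices):

```python
from math import factorial

N = 12   # keep the first N + 1 Taylor coefficients of each germ (all at the same point)

def add(f, g):
    return [a + b for a, b in zip(f, g)]

def mul(f, g):                          # Cauchy product of power series
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def derivative(f):                      # d/dz of sum a_n z^n is sum (n+1) a_{n+1} z^n
    return [(n + 1) * f[n + 1] for n in range(N)] + [0]

def reciprocal(f):                      # 1/f, which exists as a germ whenever f[0] != 0
    g = [1 / f[0]]
    for n in range(1, N + 1):
        g.append(-sum(f[i] * g[n - i] for i in range(1, n + 1)) / f[0])
    return g

# Germs of exp, sin, cos at 0, given by their familiar Taylor coefficients:
exp_ = [1 / factorial(n) for n in range(N + 1)]
sin_ = [0 if n % 2 == 0 else (-1) ** (n // 2) / factorial(n) for n in range(N + 1)]
cos_ = [(-1) ** (n // 2) / factorial(n) if n % 2 == 0 else 0 for n in range(N + 1)]

print(add(mul(sin_, sin_), mul(cos_, cos_)))   # the germ of the constant 1, up to rounding
print(mul(exp_, reciprocal(exp_)))             # also the germ of 1
print(derivative(sin_))                        # matches cos_ up to the truncation order
```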
From now on, we shall use \(z\) \(\big(\)or \(\zeta\), \(s\), etc.\(\big)\) instead of \(x\) for the variable of our functions.
Natural Domains
Despite the fact that an analytic function, by its very nature, is fully determined by a sequence of (complex) numbers, the general theory of functions in the complex domain is a vast subject that goes under many names: complex analysis, (complex) function theory, theory of functions of a (single) complex variable, etc. From the point of view of analytic continuation, the most natural question
Given a convergent power series \[f(z)=\sum_{n=0}^\infty a_n (z-z_0)^n, \] determine the largest domain in the complex plane to which \(f(z)\) can be analytically continued.
is hopelessly difficult. Nevertheless, it offers a panorama of a wide variety of functions, with connections to different areas of mathematics, if we are willing to look past some of the detailed justifications. In increasing order of "complexity" (by some measure), we have the following:
Entire functions: those that can be analytically continued to the whole complex plane. This class generalizes polynomials. For example, the Fourier (and Laplace) transform \[f(\zeta)=\int_{\mathbb R} e^{-ix\zeta}\phi(x)\,dx \qquad \zeta=\xi+i\eta\in\mathbb C \] of a compactly supported continuous function \(\phi\in C_0(\mathbb R)\) \(\big(\)or more generally a distribution \(\phi\in\mathcal E'(\mathbb R)\) of compact support\(\big)\) is entire, and furthermore the support of \(\phi\) is governed by the growth of \(f(\zeta)\) in the imaginary direction, i.e. as \(\eta\to\pm\infty\) (Paley-Wiener theorem).
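As a small sanity check of the first claim (a minimal sketch; the "tent" function, the quadrature, and the sample values of \(\zeta\) are arbitrary choices): for \(\phi(x)=\max(0,1-|x|)\) the transform works out to \(2(1-\cos\zeta)/\zeta^2\), which is entire, and its exponential growth rate as \(\eta\to\pm\infty\) is \(1\), matching the support radius as the Paley-Wiener theorem predicts.

```python
import cmath

def tent(x):                          # a continuous function supported on [-1, 1]
    return max(0.0, 1.0 - abs(x))

def transform(zeta, steps=20000):
    """Trapezoidal approximation of the integral of e^(-i x zeta) * tent(x) over [-1, 1]."""
    h = 2.0 / steps
    xs = [-1.0 + k * h for k in range(steps + 1)]
    vals = [cmath.exp(-1j * x * zeta) * tent(x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

exact = lambda z: 2 * (1 - cmath.cos(z)) / z ** 2     # closed form, an entire function of zeta

for zeta in (2.0 + 0j, 1.5 + 2.5j, 4j):
    print(zeta, transform(zeta), exact(zeta))
# Along the imaginary axis cos(i*eta) = cosh(eta), so |f(i*eta)| grows like e^|eta| / eta^2:
# the exponential rate 1 matches the support radius of the tent, as Paley-Wiener predicts.
```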
Meromorphic functions on \(\mathbb C\): the barriers are all isolated points (called singularities), but analytic continuation is possible around each singularity, and the result does not depend on which way we go around them. (One extra technical condition is often imposed so that all the singularities are "poles" instead of "essential singularities.") This class generalizes rational functions. For any non-constant polynomial \(P\) of \(n\) variables with \(P(x)\geq 0\) for all \(x\in\mathbb R^n\), and any compactly supported smooth \(\phi\in C^\infty_0(\mathbb R^n),\) \[ f(s) = \int_{\mathbb R^n} P(x)^s \phi(x)\,dx \qquad \operatorname{Re} s>0\] can be analytically continued to the whole complex plane except for isolated, albeit possibly infinitely many, points on the negative real axis (Bernstein's theorem). For another important class of examples, consider the so-called \(L\)-functions: the Dirichlet \(L\)-function \[L(s)=\sum_{n=1}^\infty \frac{\chi(n)}{n^s} \qquad \operatorname{Re} s>1\] associated to a Dirichlet character \(\chi:\mathbb Z\to\mathbb C\) can be analytically continued to all of \(\mathbb C\), except possibly for a pole at \(s=1\).
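To make such a continuation concrete in the simplest case \((\chi\equiv 1\), i.e. the Riemann zeta function\()\): the alternating series \(\sum_{n\ge 1}(-1)^{n-1}n^{-s}\) converges for \(\operatorname{Re} s>0\) and equals \((1-2^{1-s})\zeta(s)\), which extends \(\zeta\) well beyond the half-plane \(\operatorname{Re} s>1\). A minimal sketch (the acceleration scheme and the number of terms are arbitrary choices):

```python
import math

def eta(s, terms=60):
    """Dirichlet eta: sum over n >= 1 of (-1)^(n-1) / n^s, valid for Re s > 0.
    The slowly convergent partial sums are accelerated by repeated averaging
    (Euler's transformation for alternating series)."""
    partial, partials = 0.0, []
    for n in range(1, terms + 1):
        partial += (-1) ** (n - 1) / n ** s
        partials.append(partial)
    while len(partials) > 1:                  # iterated pairwise averaging
        partials = [(u + v) / 2 for u, v in zip(partials, partials[1:])]
    return partials[0]

def zeta(s):
    # eta(s) = (1 - 2^(1-s)) * zeta(s); the left side makes sense for Re s > 0,
    # which continues zeta beyond the half-plane Re s > 1 where its series converges.
    return eta(s) / (1 - 2 ** (1 - s))

print(zeta(2), math.pi ** 2 / 6)    # sanity check inside the original half-plane
print(zeta(0.5))                    # about -1.4603545088, far outside Re s > 1
```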
Functions such as \(\log z\) and \(\sqrt z\) can be analytically continued around the singularity at \(z=0,\) but the result depends on the path taken. To remove this ambiguity, one would need to agree on a continuous "borderline" or "cut" extending from \(z=0\) to infinity (e.g. the negative real axis), across which no analytic continuation is permitted. Due to the presence of the cut, \(z=0\) shall not be considered an isolated singularity even though it is the only "barrier" to analytic continuation. \((\)Note also that \(\sqrt z\) does not go to infinity as \(z\to 0.)\) Alternatively, we could analytically continue across the cut by "jumping" to another copy of the complex plane:
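Concretely, one can follow \(\sqrt z\) or \(\log z\) continuously along a loop around the origin and watch it land on the other copy. A minimal numerical sketch (the loop and the step count are arbitrary choices):

```python
import cmath

# Follow sqrt(z) and log(z) continuously along one counterclockwise loop around z = 0.
steps = 1000
path = [cmath.exp(2j * cmath.pi * k / steps) for k in range(steps + 1)]   # unit circle, 1 -> 1

sqrt_val = 1.0 + 0j      # start with the branch that has sqrt(1) = 1
log_val = 0.0 + 0j       # and log(1) = 0
for prev, z in zip(path, path[1:]):
    w = cmath.sqrt(z)
    # of the two square roots of z, keep the one closer to the previous value
    sqrt_val = w if abs(w - sqrt_val) < abs(w + sqrt_val) else -w
    # the ratio z/prev is close to 1, so its principal log is the correct small increment
    log_val += cmath.log(z / prev)

print(sqrt_val)   # ~ -1: sqrt comes back on the "other copy" (the other sheet)
print(log_val)    # ~ 2*pi*i = 6.283...i: log gains 2*pi*i after one loop around 0
```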
Thus the natural domain of \(\log z\) or \(\sqrt z\) is not a subset of the complex plane but consists of multiple copies of the complex plane properly glued together; this is an example of a Riemann surface. In a sense that can be made precise, the point \(z=0\) is no longer a singularity of the function.

The barriers to analytic continuation may not even be isolated points but may form a "wall." In fact, for any open, connected subset \(U\subsetneq\mathbb C\), there exists an analytic function on \(U\) that cannot be extended past any point of the boundary. In other words, \(U\) is the natural domain of that function. This class may seem exotic, but in fact it is as rich as the class of non-analytic functions of a real variable. To illustrate, consider \[f(z)=\int_{\mathbb R} \frac{\phi(x)}{x-z}dx \qquad z\in\mathbb C\setminus\operatorname{supp}\phi,\] where \(\phi\) only needs to be integrable \(\big(\phi\in L^1(\mathbb R)\big).\) When \(z\) approaches \(x_0\) on the real axis from above and below, the limits \(f(x_0\pm i\epsilon)\) differ by \(2\pi i\phi(x_0)\) (see the numerical sketch below); moreover, analytic continuation across the real axis is possible in a neighborhood of \(x_0\) if and only if \(\phi\) is (real) analytic at \(x_0\). Thus, by choosing appropriate \(\phi\), we can construct many functions on the upper (or lower) half plane that cannot be analytically continued across part or all of the real line. Another way for a function to fail to continue analytically past a boundary is when its values approach infinity along the boundary. An important class of functions of this sort is modular forms, which are defined on the upper half plane and have very stringent transformation properties; they have deep connections with \(L\)-functions, and are likewise important in many areas of mathematics, most notably number theory.
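Returning to the Cauchy-type integral above, here is the promised numerical sketch of the jump across the real axis (the tent-shaped \(\phi\), the point \(x_0\), the offset \(\epsilon\), and the quadrature are all arbitrary choices):

```python
import math

def phi(x):                            # an integrable density: a tent supported on [-1, 1]
    return max(0.0, 1.0 - abs(x))

def f(z, steps=200000):                # trapezoidal approximation of the integral of phi(x)/(x - z)
    h = 2.0 / steps
    total = 0.0 + 0j
    for k in range(steps + 1):
        x = -1.0 + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * phi(x) / (x - z)
    return h * total

x0, eps = 0.3, 1e-4
jump = f(x0 + 1j * eps) - f(x0 - 1j * eps)
print(jump)                            # roughly 2*pi*i * phi(0.3) = 4.398...i; improves as eps -> 0
print(2j * math.pi * phi(x0))
```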