# Analytic Continuation

The principle of **analytic continuation** is arguably the most essential feature of holomorphic functions. Even though it can be stated simply and precisely, doing so would obscure many of the subtleties and how remarkable it is. It may be more instructive, then, to take a step back to real (analytic) functions and Taylor series, and to see why the complex numbers are the natural setting. Along the way, we shall encounter other fundamental concepts in complex analysis, such as branch cuts, isolated singularities (especially poles), meromorphic functions, monodromy, and even Riemann surfaces.

This article may serve as a prologue to a formal study of complex analysis, assuming only basic acquaintance with Taylor series and complex numbers. This is largely the perspective of Weierstrass; for a more complete view, there are Cauchy's theory based on contour integration, Riemann's geometric theory, as well as the perspective of PDE (partial differential equations).

The YouTube video *Visualizing the Riemann zeta function and analytic continuation* by 3Blue1Brown is a must-see.


## From real to complex

Most functions that come up "in nature" — either in describing the physical world or in pure mathematics — particularly those that are given a special symbol or a name, are in fact *analytic*: if we take the Taylor series of such a function at any point, which uses only data from arbitrarily close to that point, we can recover the function completely. For example, knowing the function \(\sin x\) for \(x\in \big[0, \frac{\pi}{2} \big] \), or even for a tiny interval, is enough to determine the entire function: simply take its Taylor series at \(x=0\),
\[x-\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,\]
which converges for all \(x\in\mathbb R\), and agrees with the standard, periodic definition of \(\sin x\) over the reals. Taking the Taylor series at any other point will result in the exact same function (see Taylor's theorem). This is the simplest and best-case scenario of **analytic continuation** — from a small interval to the whole real line — for the radius of convergence is always infinite. Such functions are called **entire** functions, which include all polynomials, the exponential function, certain "special functions" (e.g., Bessel functions), and their sums, products, and compositions.
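This is easy to check numerically: a truncated Taylor series at \(x=0\) reproduces \(\sin x\) far outside any small interval. A minimal sketch (the helper name `sin_taylor` is ad hoc):

```python
import math

def sin_taylor(x, terms=40):
    """Partial sum of the Taylor series of sin at 0: x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

for x in (0.1, 2.0, 10.0):  # points far beyond any tiny interval around 0
    print(x, sin_taylor(x), math.sin(x))  # the two columns agree
```

Even at \(x=10\), forty terms of the series at \(0\) match the built-in sine to high accuracy, reflecting the infinite radius of convergence.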

The problem becomes more subtle, hence more interesting, if the radius of convergence is finite. Suppose we only know \( f(x)=\frac{1}{x} \) on a small neighborhood around \(x=1\). The Taylor series takes the form
\[ \sum_{n=0}^\infty (-1)^{n} (x-1)^n \qquad (*)\]
which, being a geometric series, converges only for \(|x-1|<1\), i.e. \( 0<x<2 \), where it indeed converges to \( \frac{1}{x} \). Now, knowing the values of the function near \(x=1.5\), for instance, we may "Taylor expand" at \(x=1.5\), and the new Taylor series in fact converges for \( x\in (0, 3) \) while agreeing with the previous values on \( (0, 2) \). We can say that we have **analytically continued** the function to \( (0,3)\). Thus, by successive Taylor series expansions, we can "recover" the function \( \frac{1}{x} \) for all \(x>0\), but there is no way whatsoever to extend it to \(x<0\). The point \(x=0\) poses an insurmountable barrier, called a *singularity* of the function. It seems that we are free to define \(f(x)\) to be any (analytic) function on \(x<0\), and no criterion on the function could favor one choice over the countless others.

This is where complex numbers come into play, so that we might be able to "circumvent" the barrier by going into the complex plane. In fact, the Taylor series in general (or power series) makes perfect sense, as a series, when \(x\) is any complex number, so long as we know how to add and multiply complex numbers. Examining the derivation of the geometric series, we see that
\[ 1 + r + r^2 + \cdots = \lim_{n\to\infty}\frac{1-r^{n+1}}{1-r}=\frac{1}{1-r}\]
holds for all complex numbers \(r\) *of modulus strictly less than* \(1\). Thus, the series \((*)\) converges for all complex \(x\) with \(|x-1|<1\), which is a **disk** of radius \(1\) centered at \(x=1\). The radius of convergence is *literally* a radius, and this phenomenon holds for all (convergent) power series. In particular, entire functions are naturally defined on the whole complex plane.
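One can watch this happen numerically: the partial sums of \((*)\) converge to \(1/x\) at complex points well off the real axis, so long as the point stays inside the disk \(|x-1|<1\) (the helper name `series_star` is ad hoc):

```python
def series_star(x, terms=200):
    """Partial sum of (*): sum of (-1)^n (x-1)^n, the Taylor series of 1/x at 1."""
    return sum((-1) ** n * (x - 1) ** n for n in range(terms))

# Complex points inside the disk |x-1| < 1: the series converges to 1/x.
for x in (1.5, 0.6 + 0.5j, 1.2 - 0.7j):
    print(x, series_star(x), 1 / x)
```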

Now, by taking the Taylor series at a point off from the real axis, we may get around the singularity at \(x=0\). There are two ways to get to the negative real axis: through the upper half of the complex plane, or through the lower half. It turns out that we'd end up with the exact same result, which as one might expect is simply \(\frac{1}{x}\) for \(x<0\).

In fact, as illustrated above, each Taylor series in the process of analytic continuation converges in an (open) disk just short of the singularity at \(x=0\). Indeed, for any \(a\in\mathbb C\setminus\{0\}\), we may expand \(\frac{1}{x}\) as a Taylor series centered at \(a\): \[\frac{1}{x} = \frac{1}{a+(x-a)} = \frac{1}{a}\cdot \frac{1}{1+\frac{x-a}a}=\frac{1}{a}\sum_{n=0}^\infty (-1)^n\left(\frac{x-a}{a}\right)^n=\sum_{n=0}^\infty \frac{(-1)^n}{a^{n+1}}(x-a)^n\] for \(|\frac{x-a}{a}|<1\), i.e., \(|x-a|<|a|\). This also illustrates that the precise procedure of analytic continuation (choices of the centers of Taylor series expansions) does not matter, and the end result is the same, namely \(f(x)=\frac{1}{x}\) on the punctured plane \(\mathbb C\setminus\{0\}\).

It should be noted right away that not all functions, when analytically continued around a singularity from above and below, have the same result. The two prototypical examples are \(\log x\) and \(\sqrt x\); they are not typically defined for negative \(x\) for this reason.
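The path-independence can also be observed numerically. Repeated truncated re-expansion is numerically delicate, so the sketch below uses a different (but equivalent) mechanism: it integrates the differential equation \(f'=-f^2\), which \(1/z\) satisfies, along a path avoiding \(z=0\); this carries the germ \(f(1)=1\) along the path. The RK4 scheme and the step count are arbitrary choices for this sketch:

```python
import cmath

# 1/z satisfies the ODE f' = -f**2, so integrating this equation along a path
# avoiding z = 0 carries out the analytic continuation along that path.
def continue_along(sign, steps=2000):
    z = lambda t: cmath.exp(sign * 1j * cmath.pi * t)   # unit semicircle, t in [0,1]
    dz = lambda t: sign * 1j * cmath.pi * z(t)          # its velocity
    F = lambda t, f: -f * f * dz(t)                     # chain rule: df/dt = f'(z(t)) z'(t)
    f, dt = 1.0 + 0j, 1.0 / steps                       # germ data: f(1) = 1
    for k in range(steps):
        t = k * dt
        k1 = F(t, f)
        k2 = F(t + dt / 2, f + dt * k1 / 2)
        k3 = F(t + dt / 2, f + dt * k2 / 2)
        k4 = F(t + dt, f + dt * k3)
        f += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return f

print(continue_along(+1), continue_along(-1))  # both ≈ -1: path-independent
```

Continuing through the upper half plane (`sign=+1`) and through the lower half plane (`sign=-1`) both land on \(f(-1)=-1\), as the text asserts.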

The complex numbers also provide more insight even in the case when we could analytically continue over the reals. For example,
\[f(x)=\frac{1}{1+x^2}\]
is defined and infinitely differentiable for all \(x\in\mathbb R\). The Taylor series at \(x=0\), however, has a radius of convergence of 1 (again by geometric series). If we take the complex perspective, we see that \(f(x)\) does have singularities at \(x=\pm i\), which are at a distance 1 from the origin, so it couldn't have a larger radius of convergence. In fact, it is true in general that the Taylor series of any analytic function converges to the function itself within a disk as large as possible (before hitting a "singularity"), *when viewed as a complex function*.
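A small experiment makes the finite radius visible: the partial sums converge for \(|x|<1\) and blow up for \(|x|>1\), even though \(f\) itself is perfectly smooth and bounded there (the helper name is ad hoc):

```python
def taylor_partial(x, terms):
    """Partial sum of the Taylor series of 1/(1+x^2) at 0: sum of (-1)^n x^(2n)."""
    return sum((-1) ** n * x ** (2 * n) for n in range(terms))

print(taylor_partial(0.5, 50), 1 / (1 + 0.5 ** 2))  # converges: both ≈ 0.8
print(abs(taylor_partial(1.5, 50)))  # blows up, although f(1.5) = 4/13 is finite
```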

It may already be enough evidence that analytic functions, which include all the familiar functions, really should be regarded as living on the complex plane, or subsets or extensions thereof. They are not confined to a particular domain (per the modern concept of a function) but have the ability to extend or continue in all directions as far as possible, to what can be called their **natural domain**. Our modern definition of a function, an arbitrary assignment of a value \(y\) for each \(x\) in a prescribed domain, has a very different flavor: there is no canonical way to extend its domain, or rather, there are infinitely many choices of extension with nothing to single out any particular one. If the original function happens to be continuous, one may require the extension to be continuous too, which would narrow down the choices but still leave infinitely many possibilities (unless the extension is just for one extra point); if the original function was differentiable, one may ask the same of the extension, which would further narrow down the choices. Analyticity is the strongest criterion of all, and it turns out to be enough to single out a unique choice of extension, if one exists. That is the **principle of analytic continuation**.

To phrase the principle of analytic continuation differently: the identity of an analytic function is "encoded" in each and every point of its natural domain, in the sequence of Taylor series coefficients (or the derivatives) at that point, traditionally known as the **germ** of the function at the point (in the sense of the seed of a crop). One could easily write down the rules for the basic operations — addition, multiplication, division, inversion, differentiation, etc. — on the set of germs at the *same* point. To carry on the agrarian analogy, a collection (often a \(\mathbb C\)-vector subspace) of germs at a point is called a **stalk**, and putting all the stalks (of the same sort) over various points of a domain together, endowed with some topology, we get a **sheaf**, which is semantically the same as a bundle. This is the beginning of sheaf theory.

From now on, we shall use \(z\) (or \(\zeta\), \(s\), etc.) instead of \(x\) for the variable of our functions.

## Natural domains

Despite the fact that an analytic function, by its very nature, is fully determined by a sequence of (complex) numbers, the general theory of functions in the complex domain is a vast subject that goes under many names: complex analysis, (complex) function theory, theory of functions of a (single) complex variable, etc. From the point of view of analytic continuation, the most natural question

> Given a convergent power series \[f(z)=\sum_{n=0}^\infty a_n (z-z_0)^n, \] determine the largest domain in the complex plane to which \(f(z)\) can be analytically continued.

is hopelessly difficult. Nevertheless, it offers a panorama of a wide variety of functions, with connections to different areas of mathematics, if we wish to look past some of the detailed justifications. In increasing level of "complexity" (by some measure), we have:

- **Entire functions**: those that can be analytically continued to the whole complex plane. This class generalizes polynomials. For example, the Fourier (and Laplace) transform \[f(\zeta)=\int_{\mathbb R} e^{-ix\zeta}\phi(x)\,dx \qquad \zeta=\xi+i\eta\in\mathbb C \] of a compactly supported continuous function \(\phi\in C_0(\mathbb R)\) (or more generally a distribution \(\phi\in\mathcal E'(\mathbb R)\) of compact support) is entire, and furthermore the support of \(\phi\) is governed by the growth of \(f(\zeta)\) in the imaginary direction, i.e., as \(\eta\to\pm\infty\) (Paley-Wiener theorem).

- **Meromorphic functions on \(\mathbb C\)**: the barriers are all *isolated* points (called singularities), but analytic continuation is possible around each singularity, and the result does not depend on which way to go around them. (One extra technical condition is often imposed so that all the singularities are "poles" instead of "essential singularities".) This class generalizes rational functions. For any non-constant polynomial \(P\) of \(n\) variables with \(P(x)\geq 0\) for all \(x\in\mathbb R^n\), and any compactly supported smooth \(\phi\in C^\infty_0(\mathbb R^n)\), \[ f(s) = \int_{\mathbb R^n} P(x)^s \phi(x)\,dx \qquad \operatorname{Re} s>0\] can be analytically continued to the whole complex plane except for isolated, albeit infinitely many, points on the negative real axis (Bernstein's theorem). For another important class of examples, the so-called \(L\)-functions, such as the Dirichlet \(L\)-function \[L(s)=\sum_{n=1}^\infty \frac{\chi(n)}{n^s} \qquad \operatorname{Re} s>1\] associated to a Dirichlet character \(\chi:\mathbb Z\to\mathbb C\), can be analytically continued to all of \(\mathbb C\) except possibly for a few points such as \(s=1\).

- Functions such as \(\log z\) and \(\sqrt z\), which can be analytically continued around the singularity at \(z=0\), but where the result depends on the path taken. To remove this ambiguity, one would need to agree on a continuous "borderline" or "cut" extending from \(z=0\) to infinity (e.g., the negative real axis), across which no analytic continuation is permitted. Due to the presence of the cut, \(z=0\) shall not be considered an *isolated* singularity even though it is the only "barrier" of analytic continuation (note also that \(\sqrt z\) does not go to infinity as \(z\to 0\)). Alternatively, we could analytically continue across the cut by "jumping" to another copy of the complex plane. Thus the natural domain of \(\log z\) or \(\sqrt z\) is not a subset of the complex plane, but consists of multiple copies of the complex plane properly glued together; this is an example of a **Riemann surface**. In a sense that can be made precise, the point \(z=0\) is then no longer a singularity of the function.

- Functions whose barriers of analytic continuation are not isolated points, but form a "wall". In fact, for any open connected subset \(U\subsetneq\mathbb C\), there exists an analytic function on \(U\) that cannot be extended past *any* point of the boundary. In other words, \(U\) is the natural domain of that function. This class may seem exotic, but in fact it is as rich as the class of *non-analytic* functions of a real variable. To illustrate, consider \[f(z)=\int_{\mathbb R} \frac{\phi(x)}{x-z}\,dx \qquad z\in\mathbb C\setminus\operatorname{supp}\phi\] where \(\phi\) only needs to be integrable (\(\phi\in L^1(\mathbb R)\)). When \(z\) approaches \(x_0\) on the real axis from above and below, the limits \(f(x_0\pm i\epsilon)\) differ by \(2\pi i\phi(x_0)\); moreover, analytic continuation across the real axis is possible in a neighborhood of \(x_0\) if and only if \(\phi\) is (real) analytic at \(x_0\). Thus, by choosing appropriate \(\phi\), we can construct many functions on the upper (or lower) half plane that cannot be analytically continued across part or all of the real line. Another way for a function to fail to continue past a boundary is for its values to approach infinity along the boundary. An important class of functions of this sort is modular forms, which are defined on the upper half plane and satisfy very stringent transformation properties; they have deep connections with \(L\)-functions, and are likewise important in many areas of mathematics, most notably number theory.
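The path-dependence of \(\sqrt z\) and \(\log z\) around \(z=0\), and the resulting jump across the cut, can be seen directly with Python's `cmath`, whose principal branches put the cut along the negative real axis:

```python
import cmath

# cmath uses principal branches with the cut along the negative real axis.
# Approaching z = -1 from just above vs. just below the cut: sqrt jumps by a
# sign, and log jumps by 2*pi*i -- the two continuations around 0 disagree.
above = complex(-1, 1e-12)
below = complex(-1, -1e-12)
print(cmath.sqrt(above), cmath.sqrt(below))  # ≈ 1j and -1j
print(cmath.log(above) - cmath.log(below))   # ≈ 2*pi*1j
```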

## Definitions of holomorphic and meromorphic functions

The process of analytic continuation by repeated Taylor series expansions is cumbersome in practice, unless, as in some of the examples above, we already knew what the function ought to be. It is most remarkable that there is a very simple criterion for a function on a complex domain to be analytic, so that if we can, by other means, extend a function satisfying this criterion, the extension must agree with the one obtained by repeated Taylor series expansions. Despite the global nature of analytic functions, the precise statement needs to fix a domain \(U\subseteq\mathbb C\), not necessarily the largest or natural one. (Here "domain" has the dual meaning of the set of "inputs" of a mapping, and of an open *connected* subset of \(\mathbb C\).)

Let \(f: U\to\mathbb C\) be a function on a domain \(U\subseteq\mathbb C\). The following conditions, seemingly in decreasing order of strength, are in fact equivalent.

1. \(f(z)\) is locally representable as a (convergent) power series: for each \(z_0\in U\), \[ f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n \qquad z\in D\] for some coefficients \(a_n\in\mathbb C\) and some (open) disk \(D\subseteq U\) centered at \(z_0\); in particular, \(f(z)\) is infinitely differentiable.

2. \(f(z)\) is continuously differentiable (i.e., \(f\in C^1(U)\)), and \[ \lim_{h\in\mathbb C\setminus\{0\}, h\to 0} \frac{f(z+h)-f(z)}{h} \] exists (as a complex number) for each \(z\in U\). This limit is called the **(complex) derivative** of \(f\) at the point \(z\), and is denoted \(f'(z)\).

3. \(f(z)\) has a complex derivative at each \(z\in U\).

4. \(f(z)\) satisfies the Cauchy-Riemann equations \[\frac{\partial}{\partial\bar z} f(z):= \frac{1}{2} \left( \frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right)f(x+iy)=0\] for all \(z=x+iy\in U\).

If \(f(z)\) satisfies any, hence all, of the conditions above, we say that \(f(z)\) is **holomorphic** on \(U\), often denoted \(f\in\mathcal O(U)\).

Note that it is meaningless to say that a function on a closed set is holomorphic. Technically one can talk about the set \(\mathcal O(U)\) of holomorphic functions on any open set \(U\), even when \(U\) is not connected. However, the function on each connected component does not determine the values on the other components, and should thus be regarded as a separate function. To rephrase: the ring \(\mathcal O(U)\) is an integral domain if and only if \(U\) is a domain.

The most common way to show that a function \(f(z)\) is holomorphic is by "computing" its derivative. Thus, polynomials and entire functions are holomorphic on \(\mathbb C\), and rational functions are holomorphic on \(\mathbb C\) minus a finite set (of singularities). By the same rules as in ordinary calculus, we see that sums, products, compositions, and quotients (if denominator is never zero) of holomorphic functions are holomorphic. For example, \( \tan z=\frac{\sin z}{\cos z}\) is holomorphic on \(\mathbb C\setminus \{\frac{\pi}{2}+k \pi, k\in \mathbb Z\} \).

The condition of holomorphy has a simple geometric interpretation: At each point \(z\) where \(f'(z)\neq 0\), the function \(f\) maps an infinitesimal disk onto an infinitesimal disk, by way of a rotation and a dilation. Therefore, if \(f'\) does not vanish anywhere on \(U\), we can say that \(f\) is *conformal* in the sense that it is angle-preserving, and its inverse (after restricting to a smaller subset of \(U\) if necessary) is automatically conformal, hence is also holomorphic.

The notation \(\frac{\partial}{\partial\bar z}\) in the Cauchy-Riemann equations is inspired by the intuitive idea that a holomorphic function is expressed solely in terms of \(z\) (see all the examples above), instead of a mixture of \(z\) and \(\bar z\) as in \(\operatorname{Re} z=\frac{1}{2}(z+\bar z)\) and \(|z|^2=z\bar z\). One could likewise define *anti-holomorphic* functions: those that depend only on \(\bar z\) but not on \(z\); locally they act by a rotation and a dilation composed with a reflection.

If \(f(z)\) is holomorphic on \(U\) except for a discrete set of points, it often exhibits many of the properties of rational functions. For example, the behavior near those singularities in many ways mirrors that of the zeros of a holomorphic function.

\(f(z)\) is **meromorphic** on a domain \(U\subseteq\mathbb C\) if one of the following equivalent conditions is satisfied:

1. \(f(z)\) is the quotient of two holomorphic functions on \(U\).

2. \(f(z)\) is locally representable by a **Laurent series**: for each \(z_0\in U\), \[f(z)=\sum_{n=-m}^\infty a_n(z-z_0)^n \qquad z\in D^*\] for some coefficients \(a_n\in\mathbb C\) and some punctured disk \(D^*\subset U\) centered at \(z_0\). The lowest index \(-m\) (for which \(a_{-m}\neq 0\)) may depend on the point \(z_0\), and (if \(m>0\)) \(z_0\) is called a **pole** of order \(m\) of the function.

3. For each \(z_0\in U\), there exists a power \(m\geq 0\) such that \((z-z_0)^m f(z)\) is holomorphic in a neighborhood of \(z_0\).

4. There exists a discrete set \(P\subset U\) such that \(f(z)\) is holomorphic on \(U\setminus P\), and for each \(z_0\in P\), \(|f(z)|\to \infty\) as \(z\to z_0\).

## Methods of analytic continuation

As noted above, analytic continuation by repeated Taylor series expansions could be rather tedious in practice, if at all feasible. For starters, it requires one to evaluate the sum of a convergent power series \[\sum_{n=0}^\infty a_n(z-z_0)^n,\] as well as its derivatives, at a point \(z_1\neq z_0\). In what amounts to the same thing (but illustrates the difficulty better), one could rewrite each power \((z-z_0)^n\) in terms of \((z-z_1)^m\), \(m=0, 1\ldots, n\), by binomial expansion, and assemble all the terms into a power series centered at \(z_1\). This can be justified so long as \(z_1\) was in the disk of convergence to begin with.
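Written out, the reassembly reads (assuming \(|z_1-z_0|\) is smaller than the radius of convergence, which justifies interchanging the order of summation):

\[\sum_{n=0}^\infty a_n(z-z_0)^n=\sum_{n=0}^\infty a_n\sum_{m=0}^n\binom{n}{m}(z_1-z_0)^{n-m}(z-z_1)^m=\sum_{m=0}^\infty b_m(z-z_1)^m,\qquad b_m=\sum_{n=m}^\infty\binom{n}{m}a_n(z_1-z_0)^{n-m}.\]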

What allows us to use other methods of analytic continuation is its uniqueness, *if we fix a particular domain* as the target. The uniqueness may be made precise in the following **identity theorem**:

If \(f(z)\) is a holomorphic function on a domain (i.e., open and connected) \(U\subseteq \mathbb C\), and \(f(z)=0\) on some non-empty open subset \(V\subset U\), then \(f(z)=0\) on \(U\). In other words, the restriction map \(\mathcal O(U)\to\mathcal O(V)\) is injective, provided that \(U\) is connected.

In fact, the subset \(V\) can be as small as an "infinitesimal interval", or at the very least a countable set with an accumulation point — though one needs to make sure the accumulation point itself is inside \(U\); consider \(f(z)=\sin\frac{\pi}{z}\), which vanishes on \( V=\{\frac{1}{n}, n\geq 1\}\) — for that is enough to determine the Taylor series at the accumulation point. Another way to rephrase the theorem is that a nonzero holomorphic function can only have *isolated zeros*.

In addition to allowing us to perform analytic continuation by other techniques (see below), the theorem is often used to establish an identity by checking it on a smaller subset; this is often referred to as the *principle* of analytic continuation, or the principle of persistence of functional relations. If an identity that involves a complex variable is established on a domain \(V\subset\mathbb C\), and we know both sides extend (holomorphically) to a bigger domain \(U\supset V\), then the identity must hold on all of \(U\). Simply apply the above theorem to the difference of the two sides. If an identity involves several variables, we may apply this argument to one variable at a time, holding the other variables fixed.

**By means of a functional equation.** The most famous examples are the Gamma function \(\Gamma(s)\) and the Riemann zeta function \(\zeta(s)\), which traditionally use the variable \(s\) instead of \(z\).

The Gamma function is defined (first by Euler) by \[\Gamma(s):=\int_0^\infty x^{s-1} e^{-x}\,dx \qquad \operatorname{Re} s>0\] It is holomorphic by (formally) differentiating under the integral sign. By a simple integration by parts, the Gamma function satisfies the functional equation \(\Gamma(s+1)=s\Gamma(s)\) for all \(s\) with \(\operatorname{Re} s>0\). We can then define \[\Gamma(s):=\frac{\Gamma(s+1)}{s} \qquad \operatorname{Re} s>-1, s\neq 0\] which first of all agrees with the previous definition for \(\operatorname{Re} s>0\), and is holomorphic on the new domain, hence is an analytic continuation. Likewise, \[\Gamma(s):=\frac{\Gamma(s+1)}{s}=\frac{\Gamma(s+2)}{s(s+1)} \qquad \operatorname{Re} s>-2, s\neq 0, -1\] furnishes the analytic continuation to the left by another strip, and so on and so forth.
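The strip-by-strip recursion is easy to implement for real \(s\) (a sketch: `gamma_continued` is a made-up name, and `math.gamma` supplies the values on the positive half-line):

```python
import math

def gamma_continued(s):
    """Gamma for real s (s not a nonpositive integer), extended to s <= 0
    by repeatedly applying the functional equation Gamma(s) = Gamma(s+1)/s."""
    prod = 1.0
    while s <= 0:
        prod *= s  # divide out one factor of s, s+1, ... per strip crossed
        s += 1
    return math.gamma(s) / prod

print(gamma_continued(-0.5))  # ≈ -2*sqrt(pi) ≈ -3.5449
```

For instance, \(\Gamma(-\tfrac12)=\Gamma(\tfrac12)/(-\tfrac12\cdot\tfrac12)=-2\sqrt\pi\), which the recursion reproduces.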

Alternatively, one could establish another functional equation: \[\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin(\pi s)}\] which extends \(\Gamma(s)\) to the whole complex plane in one stroke, with simple poles at \(s=0, -1, -2 \ldots\).

**Meromorphic continuation.** If any method of analytic continuation stops at a boundary, and the value of the function tends to infinity as we approach a point \(z_0\) on the boundary, we may multiply by some power of \(z-z_0\) so that the resulting function can be analytically continued past the point \(z_0\). We have to remember to divide by the same power of \(z-z_0\), leaving a pole at \(z_0\), before moving on to the next boundary singularity. By this procedure, we obtain a meromorphic function on as large a domain as possible. What amounts to the same thing is to allow Laurent series in the standard disk-by-disk analytic continuation.

**Schwarz reflection principle.** Suppose \(f(z)\) is holomorphic on a domain \(U\), and the boundary of \(U\) contains an interval on the real axis (more precisely, \(\partial U\cap \mathbb R = I\) for some open interval \(I\)). If the limit of \(f(z)\) as \(z\) approaches \(I\) exists, is continuous, and is real (\(f(I)=J\subseteq\mathbb R\)), then \(f(z)\) can be analytically continued to \(U \cup I \cup \bar U\) by "reflection" across the real axis: \[ f(z) := \overline{f(\bar z)} \qquad z\in\bar U.\] This can be generalized so that \(I\) (resp. \(J\)) is any line segment or circular arc, and complex conjugation becomes (geometric) reflection or inversion across \(I\) (resp. \(J\)).

**Removable singularity.** If \(f(z)\) is holomorphic on \(U\setminus\{z_0\}\), and if \(\lim_{z\to z_0} f(z)\) exists, then we may analytically continue \(f(z)\) to \(U\) simply by defining \(f(z_0)\) to be that limit. In fact, by a theorem of Riemann, this happens whenever \(|f(z)|\) is bounded as \(z\to z_0\).
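As a sanity check of the Schwarz reflection formula \(f(z):=\overline{f(\bar z)}\): for an entire function that is real on the real axis, such as the exponential, the reflection must reproduce the function itself, since both are holomorphic and agree on \(\mathbb R\):

```python
import cmath

# Schwarz reflection sketch: exp is entire and real on the real axis, so
# conj(exp(conj(z))) must agree with exp(z) itself.
z = 0.7 + 1.3j
lhs = cmath.exp(z)
rhs = cmath.exp(z.conjugate()).conjugate()
print(abs(lhs - rhs))  # ≈ 0: the reflected function is the same function
```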

**Cite as:** Analytic Continuation. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/analytical-continuation/