# Sums Of Divergent Series

Every calculus student learns that **divergent series** should not be manipulated in the same way as convergent series. For example, if forced to assign a value to the divergent series

$1-1+1-1+1-1+\cdots,$

the most obvious method is to group terms:

$(1-1)+(1-1)+(1-1)+\cdots=0+0+0+\cdots=0,$

but this produces a different answer if the terms are grouped differently:

$1+(-1+1)+(-1+1)+(-1+1)+\cdots = 1+0+0+\cdots = 1.$

Nevertheless, it is often useful to assign values to divergent series in "reasonable" or "consistent" ways. Indeed, mathematicians from Euler to Ramanujan used divergent series to derive many important results (though with varying degrees of rigorous justification). This wiki will discuss

(1) what makes a method of assigning values "reasonable,"

(2) some methods that are commonly used, and

(3) most importantly, why these methods are useful in practice.

The goal is to convince readers that Abel was incorrect when he famously said,

> "The divergent series are the invention of the devil, and it is a shame to base on them any demonstration whatsoever."


## Requirements for Divergent Series Sums

**Regularity**: A summation method for series is said to be *regular* if it gives the correct answer for convergent series (i.e. the limit of the sequence of partial sums).

**Linearity**: If $\sum a_n = A$ and $\sum b_n = B$, then $\sum(a_n+b_n)$ must equal $A+B$ and $\sum ca_n$, where $c$ is a constant, must equal $cA$.

**Stability**: If $\sum\limits_{n=1}^{\infty} a_n = A$, then $\sum\limits_{n=2}^{\infty} a_n = A-a_1$.

Not every useful method for summing series satisfies these requirements (in particular stability), but many do. Note that most methods for summing series do not work on every series; the goal is to find and use methods that sum as many interesting and important series as possible. The above requirements alone are often enough to determine what the value of the sum of a given series must be under any method that satisfies the requirements.

Find the sum of $1-1+1-1+1-1+\cdots$ under any summation method that is regular, linear, and stable, assuming the method provides a sum for this series.

If the sum of the series is $S$, then

$\begin{aligned} S &= 1-1+1-1+1-1+\cdots \\ &= 1+(-1+1-1+1-1+\cdots) &\qquad (\text{stability}) \\ &= 1+(-1)(1-1+1-1+1-\cdots) &\qquad (\text{linearity}) \\ &= 1+(-1)S, \end{aligned}$

so $S = 1-S$ and $S = \frac12$. $_\square$

Find the sum of $1+2+4+8+16+\cdots$ under any summation method that is regular, linear, and stable, assuming the method provides a sum for this series.

Same idea:

$\begin{aligned} S &= 1+2+4+8+\cdots \\ &= 1+(2+4+8+\cdots) &\qquad (\text{stability}) \\ &= 1+2(1+2+4+\cdots) &\qquad (\text{linearity}) \\ &= 1+2S, \end{aligned}$

so $S = 1+2S$ and $S=-1$. $_\square$
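The self-consistency equations from both examples can be solved mechanically. Here is a small sketch using `sympy` (an assumed dependency, not part of the original derivation):

```python
from sympy import symbols, Eq, solve

S = symbols('S')

# Grandi's series: stability and linearity give S = 1 + (-1)*S
grandi = solve(Eq(S, 1 - S), S)

# 1 + 2 + 4 + 8 + ...: stability and linearity give S = 1 + 2*S
powers_of_two = solve(Eq(S, 1 + 2 * S), S)

print(grandi, powers_of_two)  # [1/2] [-1]
```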

The answers to both these questions seem quite odd, but notice that they both represent a sort of *continuation* of a known formula for geometric series:

$\sum_{n=0}^{\infty} r^n = \frac1{1-r}.$

In calculus, one learns that this only converges for $r \in (-1,1)$. One way to get the right answer for the two examples above is to use this formula but plug in values outside the interval of convergence, namely $r = -1$ and $r=2$, respectively. Of course, this is not anywhere near a general summation method, but it does give an intuitive sense of where the answers are coming from. The idea of continuation will arise more formally below.
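Evaluating the closed form $\frac1{1-r}$ at the two out-of-range values reproduces both sums above; a minimal numeric sketch:

```python
# Closed form of the geometric series sum_{n>=0} r^n = 1/(1-r),
# evaluated outside the interval of convergence (-1, 1).
def geometric_closed_form(r):
    return 1 / (1 - r)

print(geometric_closed_form(-1))  # 0.5: matches 1 - 1 + 1 - ... = 1/2
print(geometric_closed_form(2))   # -1.0: matches 1 + 2 + 4 + ... = -1
```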

## Cesaro Summation

As a first example of a summation method, Cesaro summation works as follows: rather than taking the limit of the partial sums, take the limit of their averages. That is, given a series $\sum a_n$, let $s_k$ be the $k^\text{th}$ partial sum as usual and

$t_k = \frac1{k}(s_1+s_2+\cdots+s_k),$

and assign the value $\lim\limits_{k\to\infty} t_k$ to $\sum a_n$, if the limit exists.

It can be shown that Cesaro summation is regular, linear, and stable.

For the series $1-1+1-1+1-1+\cdots$, the partial sums are $1,0,1,0,1,0,\ldots$. The series is not convergent because the limit of this sequence does not exist. But the sequence of averages of the partial sums is

$1,\frac12,\frac23,\frac12,\frac35,\frac12,\ldots,$

which converges to $\frac12$. So the series is Cesaro summable, to $\frac12$.
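The averaging is easy to carry out numerically; a short sketch of the Cesaro means for Grandi's series:

```python
from itertools import accumulate

# Terms of Grandi's series 1 - 1 + 1 - 1 + ...
terms = [(-1) ** n for n in range(10000)]

# s_k: partial sums; t_k: averages of the first k partial sums
partial_sums = list(accumulate(terms))
cesaro_means = [s / (k + 1) for k, s in enumerate(accumulate(partial_sums))]

print(cesaro_means[-1])  # 0.5
```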

Cesaro summability allows certain series with oscillatory sequences of partial sums to be "smoothed out," but if the partial sums of the series go to $\infty$ instead (e.g. the harmonic series), the averages of the partial sums will also go to $\infty$, so series like $1+2+4+8+\cdots$ will not be Cesaro summable.

## Abel Summation

Abel summation involves limits of power series: define

$\sum_{n=0}^{\infty} a_n = \lim_{z\to 1^-} \sum_{n=0}^{\infty} a_nz^n$

if the limit exists. The idea is to extend the conclusion of Abel's theorem, which says (in part) that if $\sum a_n$ converges, then the limit on the right side of the above equation exists and equals the sum. (This is precisely the statement that Abel summation is regular.) Note that the example $1-1+1-1+1-1+\cdots$ is Abel summable, because

$1-z+z^2-z^3+\cdots = \frac1{1+z}$

for $|z|<1$, so the limit is $\lim\limits_{z\to 1^-} \frac1{1+z} = \frac12$.
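The Abel limit can be approximated by summing the power series at values of $z$ just below $1$; a numeric sketch:

```python
# Partial sums of sum_{n>=0} (-1)^n z^n for z slightly less than 1;
# enough terms are taken that the geometric tail is negligible.
def abel_approx(z, n_terms=100_000):
    return sum((-z) ** n for n in range(n_terms))

for z in (0.9, 0.99, 0.999):
    print(z, abel_approx(z))  # tends to 1/(1+z), hence to 1/2 as z -> 1
```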

In fact, any series that is Cesaro summable is also Abel summable, and the sums are the same. So Abel summability is stronger (though Cesaro summability is nevertheless still useful due to its relative ease of computation). Here is an example of a series that is Abel summable but not Cesaro summable.

$1-2+3-4+5-6+\cdots$


The Abel sum of the (divergent) series as shown above can be written as $\frac{a}{b}$, where $a$ and $b$ are coprime positive integers. Find $a+b$.

## Zeta Function Regularization

Some summation methods are defined using analytic continuation of complex-valued functions. An analytic continuation of a function $f$ is a function $g$ that is defined on a larger set than the domain of $f$, agrees with $f$ wherever $f$ is defined, and is (complex) differentiable everywhere on its domain.

The motivating example is the Riemann zeta function

$\zeta(s) = \sum_{n=1}^{\infty} \frac1{n^s}.$

This only converges for $\text{Re}(s) > 1$, but there is a functional equation which extends the $\zeta$ function to a function that is well-defined and differentiable everywhere except for $s =1$. This functional equation allows the computation

$\zeta(-1) = -\frac1{12}$

(In general, if $n$ is a positive integer, $\zeta(-n) = -\frac{B_{n+1}}{n+1}$, where the $B_k$ are the Bernoulli numbers.) So plugging in $s=-1$ to the series representation of the $\zeta$ function yields

$1+2+3+\cdots = -\frac1{12}.$
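The Bernoulli-number formula above can be evaluated with exact rational arithmetic; a minimal sketch using the standard recurrence for $B_k$ (with the convention $B_1 = -\frac12$):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """B_m via the recurrence sum_{k=0}^{n-1} C(n+1, k) B_k = -(n+1) B_n."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(Fraction(-sum(comb(n + 1, k) * B[k] for k in range(n)), n + 1))
    return B[m]

def zeta_neg(n):
    """zeta(-n) = -B_{n+1} / (n+1) for a positive integer n."""
    return -bernoulli(n + 1) / (n + 1)

print(zeta_neg(1))  # -1/12
print(zeta_neg(3))  # 1/120
```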

It turns out that this sum has practical applications in string theory and in a computation of the one-dimensional Casimir effect in quantum mechanics. In the latter context, the zeta function regularization corresponds to an assumption about the model that accounts for the "cancellation of the infinite part" of the sum.

The general idea is the same: the function $\sum a_n^{-s}$ might converge in some complex half-plane, but if it can be analytically continued to a function defined for $s = -1$, associate the value of the function at $-1$ with the sum of the series. Note that this method is stable but not linear.

## Dirichlet Series Regularization

Another idea that is sometimes (incorrectly) called zeta function regularization is to use the Dirichlet series

$f(s) = \sum_{n=0}^{\infty} \frac{a_n}{n^s}$

and assign the sum the value of $f(0)$, if $f$ can be analytically continued to $0$. This is a different method than zeta function regularization: indeed, it is linear but not stable. (Note that it also gives $1+2+3+\cdots=-\frac1{12}$.)

## Applications

Sums of divergent series often have applications in physics, as with the $1+2+3+\cdots$ example above. The general idea: a physical situation is described by a function $f$ defined by a series that converges only on some set of values not containing the point $s$ of interest. An analytic continuation $g$ of $f$ to a larger set containing $s$ is related closely enough to $f$ that $g(s)$ can carry a meaningful physical interpretation even though $f(s)$ is undefined.

There are less mysterious situations than this, in which divergent series give combinatorial insight as well. The following (long) example is related by Matt Noonan.

The Catalan numbers count (among many other objects) rooted left-right-ordered binary trees with $n$ vertices. Here a tree is a set of vertices, which are connected to children one level below and a parent one level above. "Rooted" means there is one vertex at the top, "left-right-ordered" means that every child is labeled either a left child or a right child, and "binary" means that every parent has at most two children--and if there are two, one must be a left child and the other a right child.

The generating function for the Catalan numbers is

$F(z) = \frac{1-\sqrt{1-4z}}{2z} = 1+z+2z^2+5z^3+\cdots+C_kz^k+\cdots$

(this is a straightforward application of the fractional binomial theorem). The series has radius of convergence $\frac 14,$ but $F(1) = \frac12-i\frac{\sqrt{3}}2$. Think of $F(1)$ as counting all of the rooted ordered binary trees, although of course the series defining $F(1)$ diverges. Notice that $F(1)^7 = F(1)$, and $F(z)^7$ is the generating function for 7-tuples of rooted ordered binary trees. So this suggests that there is a "natural" bijective way to identify a 7-tuple of rooted ordered binary trees with a unique rooted ordered binary tree, and this turns out to be the case! See this paper for details.
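The identity $F(1)^7 = F(1)$ is easy to verify directly with complex arithmetic; a sketch:

```python
import cmath

# F(1) = (1 - sqrt(1 - 4)) / 2, a primitive sixth root of unity
F1 = (1 - cmath.sqrt(1 - 4)) / 2

print(F1)                 # 0.5 - 0.866...j
print(abs(F1 ** 7 - F1))  # ~0, so F(1)^7 = F(1) up to rounding
```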

The point here is that the identity for divergent series sums has a straightforward and natural interpretation as a statement about a bijection between two equal-sized sets. This is how applications of sums of divergent series often work: instead of solving down-to-earth problems directly, they give clues to the correct answer, which can later be justified rigorously by other methods.

**Cite as:** Sums Of Divergent Series. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/sums-of-divergent-series/