# Convergence Tests

Recall that the sum of an infinite series \( \sum\limits_{n=1}^\infty a_n \) is defined to be the limit \( \lim\limits_{k\to\infty} s_k \), where \( s_k = \sum\limits_{n=1}^k a_n \). If the limit exists, the series **converges**; otherwise it **diverges**.

Many important series do not admit an easy closed-form formula for \( s_k \). In this situation, one can often determine whether a given series converges or diverges without explicitly calculating \( \lim\limits_{k\to\infty} s_k \), via one of the following tests for convergence.


## Divergence Test

The first and simplest test is not a convergence test at all: it can only establish divergence.

**Divergence test:** If \( \lim\limits_{n\to\infty} a_n \) does not exist, or exists and is nonzero, then \( \sum\limits_{n=1}^\infty a_n \) diverges.

The proof is easy: if the series converges, the partial sums \( s_k \) approach a limit \( L \). Then \[ \lim_{n\to\infty} a_n = \lim_{n\to\infty} (s_n-s_{n-1}) = L-L = 0. \] So if the terms do not approach \( 0 \), the series cannot converge.

The series \( \sum\limits_{n=1}^\infty \sin n \) diverges, because \( \lim\limits_{n\to\infty} \sin n \) does not exist.

The divergence test does not apply to the harmonic series \( \sum\limits_{n=1}^\infty \frac1{n} \), because \( \lim\limits_{n\to\infty} \frac1{n} = 0 \). In this case, the divergence test gives no information.

It is a common misconception that the "converse" of the divergence test holds, i.e. that if the terms go to \( 0 \), then the sum converges. This is false, and the harmonic series is a counterexample: it diverges, as will be shown in a later section.
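A quick numerical sketch (not a proof) makes the misconception vivid: the terms \( 1/n \) tend to \( 0 \), yet the partial sums of the harmonic series keep growing without bound. The helper name below is made up for illustration.

```python
# Numerical sketch: terms -> 0 does NOT imply convergence.
# The harmonic partial sums grow roughly like ln(k), passing any bound.
def harmonic_partial_sum(k):
    """Return s_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / n for n in range(1, k + 1))

# The terms 1/n shrink to 0, but the partial sums keep climbing.
print(harmonic_partial_sum(10), harmonic_partial_sum(100000))
```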

## Ratio Test

The intuition for the next two tests is the geometric series \( \sum ar^n\), which converges if and only if \( |r|<1 \). The precise statement of the test requires a concept that is used quite often in the study of infinite series.

A series \( \sum\limits_{n=1}^\infty a_n \) is **absolutely convergent** if \( \sum\limits_{n=1}^\infty |a_n|\) converges. If a series is convergent but not absolutely convergent, it is called **conditionally convergent**. \(_\square\)

**Ratio test:** Suppose \( \lim\limits_{n\to\infty} \left| \dfrac{a_{n+1}}{a_n} \right| = r \). If \( r<1 \), the series \( \sum a_n \) converges absolutely. If \( r>1 \), the series diverges. If \( r = 1 \) (or the limit does not exist), the test gives no information.

Consider the series \( \sum\limits_{n=0}^{\infty} \binom{2n}{n} x^n \). For which values of \( x\) does this series converge?

Partial Solution:

The ratio \( \left| \dfrac{a_{n+1}}{a_n} \right| \) is \[ \begin{align} \frac{\binom{2n+2}{n+1} |x|^{n+1}}{\binom{2n}{n}|x|^n} = \frac{(2n+2)(2n+1)}{(n+1)^2} |x| &= \frac{(2+2/n)(2+1/n)}{(1+1/n)^2} |x|, \end{align} \] which approaches \(4|x|\) as \(n\to\infty.\)

So if \( 4|x|<1 \), the series converges absolutely, and if \( 4|x|>1 \) the series diverges. For \( x = \pm 1/4 \), the question is more delicate. It turns out that the series converges for \( x=-1/4 \) but not \( x=1/4 \). Hence the answer is \( x \in [-1/4,1/4) \). \(_\square\)
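The algebra in the partial solution can be sanity-checked numerically; this is a sketch, with the function name invented for illustration.

```python
from math import comb, isclose

# Check the simplification C(2n+2, n+1)/C(2n, n) = (2n+2)(2n+1)/(n+1)^2
# used in the partial solution, then watch the ratio approach 4|x|.
def simplified_ratio(n, x):
    """|a_{n+1}/a_n| for a_n = C(2n, n) x^n, after simplification."""
    return (2 * n + 2) * (2 * n + 1) / (n + 1) ** 2 * abs(x)

for n in range(1, 20):
    direct = comb(2 * n + 2, n + 1) / comb(2 * n, n)
    assert isclose(direct, (2 * n + 2) * (2 * n + 1) / (n + 1) ** 2)

# At x = 0.1 the limiting ratio is 4 * 0.1 = 0.4.
print(simplified_ratio(1000, 0.1))
```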

The ratio test is quite useful for determining the interval of convergence of power series, along the lines of the above example. Note that at the endpoints of the interval, the ratio test fails.

## Root Test

The root test works whenever the ratio test does, but the limit involved in the root test is often more difficult to compute in practice.

**Root test:** Suppose \( \limsup\limits_{n\to\infty} \sqrt[n]{|a_n|} = r\). Then if \( r<1 \), the series \( \sum a_n \) converges absolutely; if \( r>1 \), it diverges; if \( r=1 \), the test is inconclusive.

Here, \( \limsup \) denotes the limit superior of a sequence, \( \lim\limits_{n\to\infty} \sup\limits_{m\ge n} \sqrt[m]{|a_m|} \) in this case. If we allow \( r \) to be \( \infty\) (which is taken to be \( >1 \) for purposes of the test), the \( \limsup \) always exists (while the limit might not); if the limit exists, then it equals the \( \limsup \). In practice, using the root test usually involves computing the limit.

A fact that is often useful in applications of the root test is that \( \lim\limits_{n\to\infty} n^{1/n} = 1. \) (This follows because \( \ln\big(n^{1/n}\big) = \frac{\ln n}{n} \to 0 \) by L'Hopital's rule.)

Does \( \sum\limits_{n=1}^{\infty} \frac{2^n n^{n^2+1}}{(n+1)^{n^2}} \) converge or diverge?

Take \( |a_n|^{1/n} \) and get \[ \begin{align} \frac{2 n^{n+1/n}}{(n+1)^n} &= \frac{2 n^{1/n}}{(1+1/n)^n} \to \frac{2\cdot 1}{e} < 1, \end{align} \] so the series converges (absolutely). \(_\square\)
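The computation above can be verified numerically; since the raw terms overflow long before the limit is visible, this sketch works with logarithms (the function name is invented for illustration).

```python
from math import exp, log

# Numerical sketch: |a_n|^(1/n) for a_n = 2^n n^(n^2+1) / (n+1)^(n^2),
# computed via logarithms to avoid overflow. Should approach 2/e ~ 0.7358.
def nth_root_of_term(n):
    log_a = n * log(2) + (n * n + 1) * log(n) - n * n * log(n + 1)
    return exp(log_a / n)

print(nth_root_of_term(10), nth_root_of_term(10 ** 6))
```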

## Integral Test

Often the terms \( a_n \) of a series can be extended to a nice function \( f(x) \) with \( f(n) = a_n \), and the integral of \( f(x) \) is "close" to the sum.

**Integral test:** If \( f(x) \) is a nonnegative, continuous, decreasing function on \( [1,\infty) \), then the series \( \sum\limits_{n=1}^\infty f(n) \) converges if and only if the improper integral \( \int_1^\infty f(x) \, dx \) converges.

Note that it is important that \( f(x) \) is decreasing and continuous, as otherwise it is conceivable that the values of \( f \) at integers might be unrelated to its values everywhere else (e.g. imagine an \( f\) that is 0 except very near integers, where it spikes to \( 1 \); such an \( f \) might have a convergent integral, but the series will diverge).

The \( p \)-series \( \sum\limits_{n=1}^\infty \frac1{n^p} \) is defined for any real number \( p \). For which \( p \) does the \( p\)-series converge?

For \( p\le 0\), the series diverges by the divergence test. For \( p > 0 \), \( f(x) = \frac1{x^p} \) is a nonnegative decreasing function on \( [1,\infty) \). For \( p \ne 1 \), \[ \int_1^\infty \frac1{x^p} \, dx = \frac1{1-p} x^{1-p} \biggr\rvert_1^\infty, \] which diverges for \( p < 1 \) and converges to \( \frac1{p-1} \) for \( p > 1 \). So the same is true of the associated series.

The case \( p = 1 \) is the harmonic series, which diverges because the associated integral \[ \int_1^\infty \frac1{x} \, dx = \ln x\biggr\rvert_1^\infty \] diverges. So the answer is that the \( p \)-series converges if and only if \( p>1 \). \(_\square\)
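The dichotomy can be illustrated numerically; this is a sketch, using the known value \( \pi^2/6 \) of the \( p=2 \) series (the Basel problem) as a reference point.

```python
from math import pi

# Numerical sketch of the p-series dichotomy: partial sums settle for
# p = 2 (toward pi^2/6) but keep growing like ln(k) for p = 1.
def partial_sum(p, k):
    return sum(1.0 / n ** p for n in range(1, k + 1))

print(partial_sum(2, 100000))   # settles near pi^2/6 ~ 1.6449
print(partial_sum(1, 100000))   # still climbing past 12
```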

## Comparison Test

This test can determine that a series converges by comparing it to a (simpler) convergent series.

**Comparison test:** If \( \sum b_n \) is absolutely convergent and \( |a_n|\le |b_n|\) for sufficiently large \( n \), then \( \sum a_n \) is absolutely convergent.

Note that it only makes sense to compare nonnegative terms, so this test will never help with conditionally convergent series.

Does \( \sum\limits_{n=1}^\infty \frac1{n^2+n+1} \) converge or diverge?

Since \( \frac1{n^2+n+1} < \frac1{n^2} \), and \( \sum \frac1{n^2} \) converges by the integral test (it is a \( p\)-series with \(p>1 \)), the series converges by the comparison test. \(_\square\)
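The comparison can be seen numerically in a short sketch (helper name invented for illustration): every partial sum of the smaller series is trapped below the corresponding partial sum of \( \sum 1/n^2 \).

```python
# Numerical sketch of the comparison: partial sums of 1/(n^2+n+1) stay
# below those of 1/n^2, which are themselves bounded (by about 1.645).
def partial(f, k):
    return sum(f(n) for n in range(1, k + 1))

small = partial(lambda n: 1 / (n * n + n + 1), 100000)
big = partial(lambda n: 1 / (n * n), 100000)
print(small, big)
```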

The comparison test can also determine that a series diverges:

Does \( \sum\limits_{n=1}^\infty \frac1{2n-1} \) converge or diverge?

Since \( \frac1{2n} \le \frac1{2n-1} \), if the series converges then so does \( \sum\limits_{n=1}^\infty \frac1{2n} = \frac12 \sum\limits_{n=1}^\infty\frac1n \). But the harmonic series diverges, so the original series must diverge as well. \(_\square\)

The comparison test is useful, but intuitively it feels limited. For instance, \( \frac1{n^2-n+1} \) is not \( \le \frac1{n^2} \), and yet the series \( \sum \frac1{n^2-n+1}\) ought to converge because the terms "behave like" \( \frac1{n^2} \) for large \( n \). A refinement of the comparison test, described in the next section, will handle series like this.

## Limit Comparison Test

Instead of comparing to a convergent series using an inequality, it is more flexible to compare to a convergent series using behavior of the terms in the limit.

**Limit comparison test:** If \( \sum b_n \) converges absolutely, and \( \lim\limits_{n\to\infty} \left| \frac{a_n}{b_n} \right| = c \) exists (and is finite), then \( \sum a_n \) converges absolutely.

More symmetrically, if \( x_n,y_n > 0 \) and \( \lim\limits_{n\to\infty} \frac{x_n}{y_n} \) exists and is nonzero, then \( \sum x_n \) and \( \sum y_n \) both converge or both diverge.

\( \sum\limits_{n=1}^{\infty} \frac1{n^2-n+1} \) converges, because \( \sum\limits_{n=1}^{\infty} \frac1{n^2} \) does and \[ \begin{align} \lim_{n\to\infty} \frac{\frac1{n^2-n+1}}{\frac1{n^2}} &= \lim_{n\to\infty} \frac{n^2}{n^2-n+1} \\ &= \lim_{n\to\infty} \frac1{1-1/n+1/n^2} \\&= 1. \end{align} \]

Comparing to \( p\)-series is often the right strategy.

Does \( \sum\limits_{n=1}^\infty \left(\sqrt[n]{2}-1\right) \) converge or diverge?

Apply the limit comparison test with \( 1/n \) and use L'Hopital's rule, since the derivative of \( 2^x \) is \( 2^x \ln 2 \): \[ \begin{align} \lim_{n\to\infty} \frac{2^{1/n}-1}{1/n} &= \lim_{n\to\infty} \frac{2^{1/n} \ln 2 \cdot -\frac1{n^2}}{-\frac1{n^2}}\\ &= \lim_{n\to\infty} \left(2^{1/n} \ln 2\right) \\&= \ln 2. \end{align} \] So the series diverges by limit comparison with the harmonic series. \(_\square\)
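The limit computed above is easy to check numerically; this is a sketch with an invented function name.

```python
from math import log

# Numerical sketch: (2^(1/n) - 1) / (1/n) should approach ln 2 ~ 0.6931,
# so the terms behave like a constant multiple of the harmonic terms.
def ratio_to_harmonic(n):
    return (2 ** (1 / n) - 1) * n

print(ratio_to_harmonic(10 ** 6))
```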

## Alternating Series Test

Alternating series arise naturally in many common situations, including evaluations of Taylor series at negative arguments. They furnish simple examples of conditionally convergent series as well. There is a special test for alternating series that detects conditional convergence:

**Alternating series test:** If \( a_n \) is a decreasing sequence of positive real numbers such that \( \lim\limits_{n\to\infty} a_n = 0 \), then \( \sum\limits_{n=1}^\infty (-1)^n a_n \) converges.

If \( a_n = 1/n \), the test immediately shows that the alternating harmonic series \( \sum\limits_{n=1}^\infty \frac{(-1)^n}n \) is (conditionally) convergent.

Note that it is enough for the \( a_n \) to be *eventually* decreasing (i.e. \( a_{n+1} \le a_n \) for sufficiently large \( n\)).

Show that \( \sum\limits_{n=1}^\infty (-1)^n \frac{n}{n^2+25} \) converges.

This follows directly from the alternating series test, if we can show that \( \frac{n}{n^2+25} \) is eventually decreasing. The easiest way to do this is to consider the function \( f(x) = \frac{x}{x^2+25} \) and take its derivative: \[ f'(x) = \frac{(x^2+25)-x(2x)}{(x^2+25)^2} = \frac{25-x^2}{(x^2+25)^2}. \] So \( f'(x) < 0 \) for \( x>5 \), which implies the sequence \( f(n) \) is decreasing for \( n > 5 \). \(_\square\)
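The derivative argument can be spot-checked directly on the sequence; a minimal sketch:

```python
# Spot-check of the derivative argument: f(n) = n/(n^2 + 25) rises up to
# n = 5 and is decreasing from there on, as f'(x) < 0 for x > 5 predicts.
def f(n):
    return n / (n ** 2 + 25)

assert f(4) < f(5)                                   # still rising at n = 5
assert all(f(n + 1) < f(n) for n in range(5, 200))   # decreasing afterwards
print(f(5), f(6), f(100))
```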

One interesting fact about the alternating series test is that it gives an effective error bound as well:

Let \( \sum (-1)^n a_n \) be a series that satisfies the conditions of the alternating series test, and suppose that \( a_n \) is decreasing for \( n \ge 1 \) (not just eventually decreasing). If the sum of the series is \( L \) and the \( k^\text{th}\) partial sum is denoted \( s_k \), then \[ |L-s_k| \le a_{k+1}. \]

Give an upper bound for the error in the estimate \( \pi \approx 4-\frac43+\frac45-\cdots -\frac4{399} \).

Assuming that \( \pi \) is the sum of the series \( \sum\limits_{n=0}^\infty (-1)^n \frac4{2n+1} \), the alternating series test says that this error is at most \( 4/401\), which is roughly \( 0.01\). In fact, the sum is \( 3.13659\ldots\), so the error is almost exactly half that, or \( 0.005 \). \(_\square\)
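The numbers in this example are easy to reproduce; a sketch (the last term \( -4/399 \) corresponds to \( n = 199 \), so the bound is \( a_{200} = 4/401 \)):

```python
from math import pi

# Numerical check of the error bound: the partial sum through -4/399
# uses terms n = 0..199, so the error is at most a_200 = 4/401 ~ 0.01.
partial = sum((-1) ** n * 4 / (2 * n + 1) for n in range(200))
error = abs(pi - partial)
print(partial, error, 4 / 401)
```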

(To show that \( \pi \) is in fact the sum of the series, one possibility is to derive the Taylor series \( \arctan x = x-\frac{x^3}3+\frac{x^5}5-\cdots\) which is valid on \( (-1,1)\), and then to use a theorem of Abel which shows that the identity can be extended to the endpoint \( 1 \) of the interval.)

The alternating series test is actually a special case of the Dirichlet test for convergence, presented in the next section.

## Dirichlet Test

**Dirichlet test:** Suppose \( a_n,b_n \) are sequences and \(M\) is a constant, and

(1) \(a_n \) is a decreasing sequence,

(2) \( \lim\limits_{n\to\infty} a_n = 0 \),

(3) if \( s_k \) is the \(k^\text{th}\) partial sum of the \( b_n\), then \( |s_k|\le M \) for all \( k \).

Then \( \sum\limits_{n=1}^\infty a_nb_n\) converges.

The alternating series test is the special case where \( b_n = (-1)^n \) (and \( M = 1 \)).

Let \(a_n\) be a decreasing sequence of real numbers such that \( \lim\limits_{n\to\infty} a_n = 0 \). Show that \[ \sum_{n=1}^\infty a_n \sin nx \] converges for all real numbers \( x \) which are not integer multiples of \( 2\pi\). (This is useful in the theory of Fourier series.)

This follows from the Dirichlet test and the identity

\[\sin x+\sin 2x+\cdots+\sin nx = \frac{\sin \frac{n}2 x \sin \frac{n+1}2 x}{\sin \frac12 x}\]

because the absolute value of the quantity on the right is \( \le \frac1{|\sin \frac12 x|} \), which is a constant real number as long as the denominator is not \( 0 \). (This is why we had to assume that \( x \) was not an integer multiple of \( 2\pi\).) \(_\square\)
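The boundedness of the partial sums can be observed numerically; this sketch takes \( x = 1 \), which is not a multiple of \( 2\pi \).

```python
from math import sin

# Numerical sketch: for x = 1, the partial sums of sin(nx) stay within
# the bound 1/|sin(x/2)| predicted by the identity above.
x = 1.0
bound = 1 / abs(sin(x / 2))
running, largest = 0.0, 0.0
for n in range(1, 10001):
    running += sin(n * x)
    largest = max(largest, abs(running))
print(largest, bound)
```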

## Abel Test

Abel's test is similar to Dirichlet's test, and is most useful for conditionally convergent series.

**Abel test:** Suppose \( a_n,b_n\) are sequences and \( M \) is a constant, and

(1) \( \sum a_n \) converges,

(2) \( b_n \) is a monotone (increasing or decreasing) sequence,

(3) \( |b_n|<M \) for all \( n \).

Then \( \sum a_nb_n \) converges.

Note that if \( a_n \) is positive (or \( \sum a_n \) is absolutely convergent), this follows immediately from the comparison test (without assumption (2)). So the interesting series to which this applies are conditionally convergent.

The series \( \sum\limits_{n=1}^\infty (-1)^n \frac{\arctan n}n \) converges by Abel's test (take \(b_n = \arctan n\), which is increasing and bounded above by \( \pi/2 \)).

**Cite as:** Convergence Tests. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/convergence-tests/