Series
A mathematical series is an infinite sum of the elements in some sequence. A series with terms \(a_n\), where \(n\) varies from \(1\) through all positive integers, is expressed as \[ \sum_{n= 1}^\infty a_n. \] The \(n^\text{th}\) partial sum \(S_n\) of the series is the value \[S_n = \sum_{i = 1}^n a_i. \]
For example, for the function \(f(n) = n^2,\) the sum of \(f(n)\) over all positive integers \(n\) can be expressed as \[ \sum_{n = 1}^\infty n^2 = 1^2 + 2^2 + 3^2 +\cdots.\] If we only want the sum of terms up to \(n=10,\) that partial sum is \[ S_{10} = \sum_{n = 1}^{10} n^2 = 1^2 + 2^2 + 3^2 +\cdots+ 10^2 = 385.\]
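As a quick check, this value also follows from the well-known closed form for the sum of the first \(n\) squares:
\[ \sum_{i = 1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}, \qquad S_{10} = \frac{10 \cdot 11 \cdot 21}{6} = 385. \]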
Series are useful throughout mathematics and science as a means of approximation, analytic continuation, and evaluation. For series that represent functions, the connection between the coefficients and the values of the function's derivatives (as in Taylor series) provides a powerful tool in every field that uses calculus.
Notation
The symbol \(\sum\) indicates a summation. It is interpreted by letting the parameter (usually designated below the summation sign) run through its prescribed, usually integer, values from the initial value up to the upper limit, and then adding the resulting expressions. For instance,
\[ \sum_{k = 1}^{200} f(k) = f(1) + f(2) + \dots + f(200).\]
The parameter \(k\) has initial value \(1\). It runs through all integer values up to (and including) its upper limit \(200\), and the resulting terms are added together.
The parameter \(k\) may be replaced (in all instances) by \(i\) or any other variable. Commonly used parameters include \(i\), \(j\), \(k\), \(m\), and \(n\). Another way of representing the same thing is with set notation, seen as
\[ \sum_{j \in \{1, \, 2, \dots, \, 200\}} f(j) = f(1) + f(2) + \dots + f(200).\]
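To make the iteration explicit, here is a minimal Python sketch (purely illustrative; the term function \(f\) is a placeholder) that mirrors both notations above:

```python
def f(k):
    # Placeholder term; any expression in the index k works here.
    return k ** 2

# sum_{k=1}^{200} f(k): iterate k from 1 through 200 and add up the terms.
range_sum = sum(f(k) for k in range(1, 201))

# The same sum written over an explicit index set, mirroring the set notation.
index_set = set(range(1, 201))
set_sum = sum(f(j) for j in index_set)

print(range_sum, set_sum)  # both print 2686700
```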
In general, a summation notation is acceptable if it is unambiguous and well-defined. In order to be well-defined, the summation must be either finite or absolutely convergent (or conditionally convergent with an ordering to the summation). It follows that convergence is one of the most important questions in the study of series.
Like other mathematical operations, summation may be used in the definition of a function and may contain variables within the series itself. Power series explore this idea further.
Convergence
Main Article: Convergence Tests
A series is said to converge to a value if the sequence of its partial sums approaches that value; that is, given an infinite sequence \(\{a_k\}\), the sum of the series is defined as the limit
\[ \sum_{k = 1}^\infty a_k = \lim_{n \to \infty} \sum_{k = 1}^n a_k,\]
provided this limit exists.
If the limit does not exist, the series is said to diverge. A sufficient condition for a series to diverge is the following:
Divergence Test
If \( \lim\limits_{n\to\infty} a_n \) does not exist, or exists and is non-zero, then \( \sum\limits_{n=1}^\infty a_n \) diverges.
Essentially, this says that if the terms \(a_n\) do not shrink to \(0\), then the partial sums keep changing by amounts that never die out, so they cannot settle on a finite value and the series diverges. The converse does not hold, however: terms tending to \(0\) do not guarantee that a summation converges. (The harmonic series \(\sum 1/n\) diverges even though \(1/n \to 0\).) In general, convergence tests are necessary for determining whether an infinite summation converges or diverges.
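A small numerical experiment (a sketch, not a proof) shows why terms tending to \(0\) is not enough: \(1/n\) and \(1/n^2\) both tend to \(0\), yet their partial sums behave very differently.

```python
# Compare partial sums of sum 1/n (diverges) and sum 1/n^2 (converges to pi^2/6).
for N in (10, 1_000, 100_000):
    harmonic = sum(1 / n for n in range(1, N + 1))
    squares = sum(1 / n ** 2 for n in range(1, N + 1))
    print(f"N = {N:>6}:  sum 1/n = {harmonic:8.4f}   sum 1/n^2 = {squares:.6f}")

# The 1/n partial sums keep growing (roughly like ln N), while the
# 1/n^2 partial sums level off near pi^2/6 = 1.644934...
```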
A series is said to converge absolutely if the series formed from the absolute value of its terms converges; that is, given an infinite sequence \(\{a_k\}\),
\[ \sum_{k = 1}^\infty |a_k| \text{ converges.}\]
A series is said to converge conditionally if it converges but does not converge absolutely.
For example, the sum \(\sum\limits_{n = 1}^\infty \frac{(-1)^{n+1}}{2^n}\) converges absolutely, while the sum \(\sum\limits_{n = 1}^\infty \frac{(-1)^{n+1}}{n}\) converges only conditionally. Knowing that both converge, we can decide between absolute and conditional convergence by examining the series formed from the absolute values of the terms:
\[\begin{align} \sum\limits_{n = 1}^\infty \left|\frac{(-1)^{n+1}}{2^{n}}\right| &= \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1 \quad\text{(converges)}\\ \sum\limits_{n = 1}^\infty \left|\frac{(-1)^{n+1}}{n}\right| &= 1 + \frac{1}{2} + \frac{1}{3} + \cdots \quad\text{(diverges)}. \end{align} \]
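The first series can even be evaluated exactly, since both it and its absolute version are geometric; using \(\sum_{n=1}^\infty a r^{n-1} = \frac{a}{1-r}\) for \(|r| < 1\),
\[ \sum_{n = 1}^\infty \frac{(-1)^{n+1}}{2^n} = \frac{\tfrac12}{1 - \left(-\tfrac12\right)} = \frac{1}{3}, \qquad \sum_{n = 1}^\infty \left|\frac{(-1)^{n+1}}{2^n}\right| = \frac{\tfrac12}{1 - \tfrac12} = 1. \]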
Absolute or conditional convergence is important because it gives more information about the behavior of the series itself. For instance, the Riemann series theorem implies that a conditionally convergent series may not be freely reordered: rearranging its terms can change the sum. With power series, it can help determine the behavior at the endpoints of the interval of convergence.
Power Series
Main Article: Power Series
A power series is an expression \({\displaystyle \sum_{n=0}^\infty} a_n x^n\) generated by an infinite sequence \(\{a_n\}\).
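For instance, the power series with \(a_0 = 0\) and \(a_n = \frac{1}{n}\) for \(n \ge 1\),
\[ \sum_{n = 1}^\infty \frac{x^n}{n}, \]
converges absolutely for \(|x| < 1\), converges conditionally at \(x = -1\) (up to sign, the alternating series \(\sum \frac{(-1)^{n+1}}{n}\) seen above), and diverges at \(x = 1\) (the harmonic series).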
Power series are used in calculus as local approximations of functions and in combinatorics as abstract tools for counting. In calculus, the issue of convergence is paramount, while it is not as central to combinatorial concerns. In combinatorics, power series may be manipulated symbolically without worrying about issues of convergence.
There exist multivariable extensions of power series. Convergence becomes a more complicated issue with these, but the Fubini-Tonelli theorem provides a guideline for algebraic manipulations.
Methods of Estimation
Because a series has infinitely many terms, its sum cannot be computed by adding them one at a time; it must be found by algebraic methods or approximated. Moreover, many series cannot be expressed in a closed form at all, so modern calculators and computers rely on some sort of estimation routine in order to provide a satisfactory decimal answer. Such approximation methods fall in the field of numerical analysis. Robust approximation tools are doubly important, as integrals are often evaluated numerically as a type of infinite sum.
An alternating series is a series representable in the form
\[ \sum_{k = 1}^\infty (-1)^k a_k \]
for some sequence \(\{a_k\}\) of nonnegative numbers. An alternating series is known as decreasing if \(a_n > a_{n+1}\) for all \(n\).
Let \(S_n\) be the \(n^\text{th}\) partial sum of a series, and let \(S\) be its actual sum. A decreasing alternating series whose terms tend to \(0\) converges (this is the alternating series test), and its sum obeys the simple error bound
\[ S_n - a_{n+1} < S < S_n + a_{n+1}. \]
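A quick numerical sketch (illustrative only) checks this bound for the decreasing alternating series \(\sum_{n = 1}^\infty \frac{(-1)^{n+1}}{n}\), using a very long partial sum as a stand-in for the true sum \(S\):

```python
def a(n):
    # Terms a_n of the decreasing alternating series sum (-1)^(n+1) / n.
    return 1 / n

def partial_sum(n):
    return sum((-1) ** (k + 1) * a(k) for k in range(1, n + 1))

n = 10
S_n = partial_sum(n)
S = partial_sum(1_000_000)  # proxy for the true sum (which equals ln 2)

# Error bound for decreasing alternating series: S_n - a_{n+1} < S < S_n + a_{n+1}.
print(S_n - a(n + 1) < S < S_n + a(n + 1))  # True
print(abs(S - S_n) <= a(n + 1))             # True: the error is at most 1/11
```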
The integral test can help estimate the sum of a decreasing positive series \( \sum_{k = 1}^\infty a_k \), where \(a_k = f(k)\) for all \(k\) and the improper integral \(\int_n^\infty f(x) \, dx\) converges. The error bound is
\[ S_n + \int_{n+1}^\infty f(x) \, dx < S < S_n + \int_n^\infty f(x) \, dx.\]
A twist on the ratio test can also be used to estimate the sum of a series with positive terms; a worked example follows the two cases below. Let \(L = \lim_{n \to \infty} \tfrac{a_{n+1}}{a_n}\), and suppose \(L < 1\).
- If the ratio \(\tfrac{a_{n+1}}{a_n}\) decreases to \(L\) as \(n\) increases, then \[ S_n + a_n \cdot \left( \frac{L}{1 - L} \right) < S < S_n + \frac{a_{n+1}}{1 - \tfrac{a_{n+1}}{a_n}}. \]
- If the ratio \(\tfrac{a_{n+1}}{a_n}\) increases to \(L\) as \(n\) increases, then \[ S_n + \frac{a_{n+1}}{1 - \tfrac{a_{n+1}}{a_n}} < S < S_n + a_n \cdot \left( \frac{L}{1 - L} \right). \]
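As a sketch of the first case, consider \(\sum_{n=1}^\infty \frac{n}{2^n}\), whose sum is known to equal \(2\). Here \(\frac{a_{n+1}}{a_n} = \frac{n+1}{2n}\) decreases to \(L = \frac{1}{2}\). Taking \(n = 5\) gives \(S_5 = \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{4}{16} + \frac{5}{32} = 1.78125\), \(a_5 = \frac{5}{32}\), and \(a_6 = \frac{6}{64}\), so the bounds become
\[ S_5 + a_5 \cdot \frac{\tfrac12}{1 - \tfrac12} = 1.9375 \quad\text{and}\quad S_5 + \frac{a_6}{1 - \tfrac{a_6}{a_5}} = 1.78125 + \frac{6/64}{1 - 0.6} = 2.015625, \]
that is, \(1.9375 < S < 2.015625\), consistent with \(S = 2\).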
Using the integral test, find the best possible estimation for the series \(\displaystyle\sum_{n=1}^\infty \frac{1}{n^3}\) using a partial sum with \(10\) terms.
Note that \( \displaystyle\sum_{n=1}^{10} \frac{1}{n^3} \approx 1.19753 \) and \(\tfrac{1}{11^2} \approx 0.00826\).
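A sketch of the computation: for \(f(x) = \frac{1}{x^3}\), the improper integral is \(\int_n^\infty \frac{dx}{x^3} = \frac{1}{2n^2}\), so with \(n = 10\) the error bound above gives
\[ 1.19753 + \frac{1}{2 \cdot 11^2} < S < 1.19753 + \frac{1}{2 \cdot 10^2}, \qquad\text{i.e.}\qquad 1.20166 < S < 1.20253. \]
Taking the midpoint of this interval gives \(S \approx 1.2021\), in agreement with the known value \(\sum\limits_{n=1}^\infty \frac{1}{n^3} \approx 1.20206\).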