Matrix Exponentiation
When solving a system of differential equations, it is often convenient to write the system in matrix form. The solution then typically involves an expression of the form \(Ce^{A},\) where \(A\) is a matrix. This wiki presents methods for exponentiating matrices.
Exponentiating a Diagonal Matrix
Exponentiating a diagonal matrix is the easiest case. Many other matrices can be factored so that a diagonal matrix appears as a factor, which makes this case especially useful.
If \(A\) is a diagonal matrix (i.e., all entries off the diagonal are 0):
\[A= \begin{bmatrix} a_{1,1}&0&\cdots&0\\0&a_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a_{n,n}\end{bmatrix},\]
then
\[e^{A}= \begin{bmatrix} e^{a_{1,1}}&0&\cdots&0\\0&e^{a_{2,2}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&e^{a_{n,n}}\end{bmatrix}.\]
Note that for diagonal matrices
\[A^k= \begin{bmatrix} a^k_{1,1}&0&\cdots&0\\0&a^k_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a^k_{n,n}\end{bmatrix}.\]
This can be shown easily by induction. So, using the Taylor series of the exponential,
\[e^A=\sum_{k=0}^\infty \dfrac{A^k}{k!}=\sum_{k=0}^\infty\dfrac{1}{k!}\begin{bmatrix} a^k_{1,1}&0&\cdots&0\\0&a^k_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a^k_{n,n}\end{bmatrix}.\]
Since matrix addition and scalar multiplication act entrywise, we can move the sum inside each entry to get
\[\begin{bmatrix}\sum_{k=0}^\infty\dfrac{a^k_{1,1}}{k!} &0&\cdots&0\\0&\sum_{k=0}^\infty\dfrac{a^k_{2,2}}{k!}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\sum_{k=0}^\infty\dfrac{a^k_{n,n}}{k!}\end{bmatrix}=\begin{bmatrix} e^{a_{1,1}}&0&\cdots&0\\0&e^{a_{2,2}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&e^{a_{n,n}}\end{bmatrix}.\]
This completes the proof. \(_\square\)
Some examples are given below.
If \(A=\begin{bmatrix} 1&0\\0&4\end{bmatrix}\), find \(e^A.\)
Since \(A\) is diagonal here,
\[e^A=\begin{bmatrix} e&0\\0&e^4\end{bmatrix}.\ _\square\]
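As a quick numerical check, here is a minimal NumPy/SciPy sketch of this example; it exponentiates the diagonal entrywise and compares against SciPy's general-purpose `expm`:

```python
import numpy as np
from scipy.linalg import expm

# The example above: A = diag(1, 4), so e^A should be diag(e, e^4).
A = np.diag([1.0, 4.0])
eA = np.diag(np.exp(np.diag(A)))  # exponentiate the diagonal entrywise

assert np.allclose(eA, expm(A))   # matches SciPy's general-purpose routine
print(eA)                         # [[e, 0], [0, e^4]]
```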
Show that \(\det\big(e^A\big)=e^{\operatorname{tr}(A)}\) for a diagonal matrix \(A\), where \(\operatorname{tr}(A)\) is the trace of \(A\), i.e. the sum of its diagonal entries.
Use the fact that the determinant of a diagonal matrix is the product of its diagonal entries:
\[ \begin{vmatrix} a_{1,1}&0&\cdots&0\\0&a_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a_{n,n}\end{vmatrix}=a_{1,1}a_{2,2}\cdots a_{n,n}.\]
So,
\[\begin{vmatrix} e^{a_{1,1}}&0&\cdots&0\\0&e^{a_{2,2}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&e^{a_{n,n}}\end{vmatrix}=e^{a_{1,1}}e^{a_{2,2}}\cdots e^{a_{n,n}}=e^{a_{1,1}+a_{2,2}+\cdots+a_{n,n}}=e^{\operatorname{tr}(A)}.\ _\square\]
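A minimal sketch checking this identity numerically; the diagonal entries below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

# Check det(e^A) = e^{tr(A)}; the diagonal entries are arbitrary choices.
A = np.diag([0.5, -1.0, 2.0])
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```

In fact, the identity \(\det\big(e^A\big)=e^{\operatorname{tr}(A)}\) holds for every square matrix, not only diagonal ones.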
Exponentiating Diagonalizable Matrices
Now that we know how to exponentiate a diagonal matrix, we can extend the method to any diagonalizable matrix.
If we write \(A\) in its eigenvector form, then
\[A=S\Lambda S^{-1}\implies e^{A}=S e^{\Lambda}S^{-1},\]
where \(S\) is the eigenvector matrix and \(\Lambda\) is the diagonal eigenvalue matrix.
First, we want to find an expression for \(A^k,\) which is
\[A^k=S \Lambda^k S^{-1}.\]
This can be proved by induction. We see the base case \(k=1\) is true for the equation, and the inductive step is
\[A^{k+1}=A^k A=S \Lambda^k S^{-1}S \Lambda S^{-1}=S \Lambda^k I\Lambda S^{-1}=S \Lambda^k \Lambda S^{-1}=S \Lambda^{k+1}S^{-1}.\]
Now we use the Taylor series again:
\[\begin{align} e^A &=\sum_{k=0}^\infty \dfrac{A^k}{k!}\\ &=\sum_{k=0}^\infty \dfrac{S \Lambda^k S^{-1}}{k!}\\ &=S\left(\sum_{k=0}^\infty \dfrac{\Lambda^k}{k!}\right)S^{-1}\\\\ e^A&=S e^{\Lambda}S^{-1}.\ _\square \end{align}\]
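Here is a minimal sketch of this recipe in NumPy; the helper name `expm_via_eig` is our own, and the code assumes \(A\) is diagonalizable:

```python
import numpy as np
from scipy.linalg import expm

def expm_via_eig(A):
    """Matrix exponential via diagonalization: e^A = S e^Lambda S^{-1}.

    Assumes A is diagonalizable. np.linalg.eig returns the eigenvalues
    and the eigenvector matrix S, with eigenvectors as columns.
    """
    eigvals, S = np.linalg.eig(A)
    return S @ np.diag(np.exp(eigvals)) @ np.linalg.inv(S)

A = np.array([[1.0, -1.0], [2.0, 4.0]])
assert np.allclose(expm_via_eig(A), expm(A))
```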
Find \(e^A\) for \(A=\begin{bmatrix} 1&-1\\2&4\end{bmatrix}.\)
First, we want to find its eigenvalues, for which we write
\[\begin{align} \det(A-\lambda I)=\begin{vmatrix} 1-\lambda&-1\\2&4-\lambda\end{vmatrix}=0\implies (1-\lambda)(4-\lambda)+2&=0\\ \lambda^2-5\lambda+6&=0\\ \lambda&= 2,3. \end{align}\]
We find eigenvectors for both eigenvalues:
- \(\lambda=2\implies \begin{bmatrix} -1&-1\\2&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=0.\) One solution, which we pick, is \( \begin{bmatrix}1\\-1\end{bmatrix}.\)
- \(\lambda=3\implies \begin{bmatrix} -2&-1\\2&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=0.\) One solution, which we pick, is \(\begin{bmatrix}-1\\2\end{bmatrix}.\)
Then, noting that \(\det S=1\) so that \(S^{-1}=\begin{bmatrix}2&1\\1&1\end{bmatrix},\) we have
\[A=S\Lambda S^{-1}=\begin{bmatrix}1&-1\\-1&2\end{bmatrix}\begin{bmatrix}2&0\\0&3\end{bmatrix}\begin{bmatrix}2&1\\1&1\end{bmatrix}\]
and
\[\begin{align} e^A &=\begin{bmatrix}1&-1\\-1&2\end{bmatrix}\begin{bmatrix}e^2&0\\0&e^3\end{bmatrix}\begin{bmatrix}2&1\\1&1\end{bmatrix}\\\\ &=\begin{bmatrix}1&-1\\-1&2\end{bmatrix}\begin{bmatrix}2e^2&e^2\\e^3&e^3\end{bmatrix}\\\\ &=\begin{bmatrix}2e^2-e^3&e^2-e^3\\-2e^2+2e^3&-e^2+2e^3\end{bmatrix}, \end{align}\]
and that is the answer. \(_\square\)
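To double-check the hand computation, here is a short sketch comparing the factored product with SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

# The factors computed by hand for A = [[1, -1], [2, 4]].
S     = np.array([[1.0, -1.0], [-1.0, 2.0]])
S_inv = np.array([[2.0,  1.0], [ 1.0, 1.0]])
e_Lam = np.diag([np.exp(2.0), np.exp(3.0)])

A = np.array([[1.0, -1.0], [2.0, 4.0]])
assert np.allclose(S @ e_Lam @ S_inv, expm(A))  # S e^Lambda S^{-1} = e^A
```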
Differential Equations
As stated in the introduction, matrices can indeed be used to solve differential equations. Below are some examples:
Given a system of differential equations
\[\begin{cases} \dfrac{dx_1}{dt}=x_2\\ \dfrac{dx_2}{dt}=x_1,\end{cases}\]
solve for all variables in terms of \(t.\)
The trick is to collect the unknowns into the vector \(U=\begin{bmatrix}x_1\\x_2\end{bmatrix}\). Then
\[ \dfrac{dU}{dt}=\begin{bmatrix}x_1'\\x_2'\end{bmatrix}=\begin{bmatrix}x_2\\x_1\end{bmatrix}=\begin{bmatrix}0&1\\1&0\end{bmatrix} \begin{bmatrix}x_1\\x_2\end{bmatrix}\implies \dfrac{dU}{dt}=\begin{bmatrix}0&1\\1&0\end{bmatrix}U.\]
We know that an equation of this form has the solution
\[\dfrac{dU}{dt}=AU\implies U= e^{At}U_0, \quad A=\begin{bmatrix}0&1\\1&0\end{bmatrix},\]
where \(U_0\) is the vector of initial values.
We can see the eigenvalues of this matrix are \(1\) and \(-1\), with eigenvectors \(\begin{bmatrix}1\\1\end{bmatrix}\) and \(\begin{bmatrix}1\\-1\end{bmatrix}\), so \(S=\begin{bmatrix}1&1\\1&-1\end{bmatrix}\) and \(S^{-1}=\frac{1}{2}S.\) We then have
\[\begin{align} e^{At} &=\dfrac{1}{2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\begin{bmatrix}e^t&0\\0&e^{-t}\end{bmatrix}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\\\\ &=\dfrac{1}{2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\begin{bmatrix}e^t&e^{t}\\e^{-t}&-e^{-t}\end{bmatrix}\\ &=\dfrac{1}{2} \begin{bmatrix}e^t+e^{-t}&e^t-e^{-t}\\e^t-e^{-t}&e^t+e^{-t}\end{bmatrix}. \end{align}\]
So, the solution is
\[U=\dfrac{1}{2} \begin{bmatrix}e^t+e^{-t}&e^t-e^{-t}\\e^t-e^{-t}&e^t+e^{-t}\end{bmatrix}U_0.\ _\square\]
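A short sketch of this solution in code; the initial vector \(U_0\) and the time \(t\) below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0], [1.0, 0.0]])
U0 = np.array([2.0, 5.0])   # arbitrary initial values, for illustration
t  = 0.7                    # arbitrary time

U = expm(A * t) @ U0        # U(t) = e^{At} U_0

# e^{At} should match the matrix derived above.
M = 0.5 * np.array([[np.exp(t) + np.exp(-t), np.exp(t) - np.exp(-t)],
                    [np.exp(t) - np.exp(-t), np.exp(t) + np.exp(-t)]])
assert np.allclose(expm(A * t), M)
```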
That was tedious. For a system of only two equations, the matrix approach is often not the fastest route. But as the number of equations grows, it scales far better than solving by hand. The next example adds initial values.
Given a system of differential equations
\[\begin{cases} \dfrac{dx_1}{dt}=x_2\\ \dfrac{dx_2}{dt}=x_1,\end{cases}\]
find the exact solutions when \(x_1(0)=1, x_2(0)=0.\)
We already know from the previous example that the solution is
\[U=\dfrac{1}{2} \begin{bmatrix}e^t+e^{-t}&e^t-e^{-t}\\e^t-e^{-t}&e^t+e^{-t}\end{bmatrix}U_0.\]
We have
\[\begin{align} U&=\dfrac{1}{2}\begin{bmatrix}e^t+e^{-t}&e^t-e^{-t}\\e^t-e^{-t}&e^t+e^{-t}\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\\\\ &=\dfrac{1}{2} \begin{bmatrix}e^t+e^{-t}\\e^t-e^{-t}\end{bmatrix}, \end{align}\]
that is, \(x_1=\cosh t\) and \(x_2=\sinh t,\) which indeed satisfies the equations and the initial conditions. \(_\square\)
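As a final check, here is a sketch verifying the closed form \(x_1=\cosh t,\ x_2=\sinh t\) against \(e^{At}U_0\) at several times:

```python
import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0], [1.0, 0.0]])
U0 = np.array([1.0, 0.0])   # x1(0) = 1, x2(0) = 0

for t in np.linspace(0.0, 2.0, 5):
    x1, x2 = expm(A * t) @ U0
    # Closed form from above: x1 = cosh t, x2 = sinh t.
    assert np.isclose(x1, np.cosh(t)) and np.isclose(x2, np.sinh(t))
```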