
Laplace Transforms and Homogeneous Differential Equations

If you've ever taken a class in ordinary differential equations, you've probably been introduced to the method of the "characteristic equation," used in solving linear, homogeneous, constant-coefficient ordinary differential equations. When I was first introduced to this technique, I had no idea where it came from. I thought, "How does that even work?" I spent about a year and a half just using the technique, having no real idea where it came from. Recently, I decided it was time to find out. As the reader will see, the method of the "characteristic equation" is an implicit result of the following proof(s). (Note that there are probably many ways to prove what we will prove below. I chose the methods we will use because I felt they are the most instructive. If anyone finds an alternative proof of the following propositions, I'd love to see it.) The following proofs assume the reader is familiar with ordinary differential equations, Laplace Transforms, and partial fraction decomposition.

We will prove:

(1) \(\mathcal{L}\left[ f^{(n)}(t)\right] = s^nF(s) - \displaystyle \sum_{i=1}^{n}f^{(n-i)}(0)s^{i-1}\) for \(n \geq 1\)

(2) All ordinary differential equations of the form \(\displaystyle \sum_{i=0}^{n}b_i f^{(i)}(t) = 0\) , where \(\{b_i\}\) is a set of fixed coefficients with \(b_i \in \mathbb{R}\) and \(b_n \neq 0\) , have solution:

\( f(t) = \displaystyle \sum_{i=1}^{n} A_i e^{r_it}\)

where \( A_i\) are arbitrary constants satisfying any initial conditions and \(r_i\) are the roots of the polynomial:

\(\displaystyle \sum_{i=0}^{n} b_it^i\)

Additionally, we restrict \(f(t)\) (and its derivatives) to functions of exponential order, so that the Laplace Transform \(\displaystyle \int_{0}^{\infty} f(t)e^{-st}\,dt\) converges for all sufficiently large \(s\) (this is the standard requirement for the Laplace Transform to exist).
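Before the proofs, here is a concrete illustration of (2). The specific equation below is an invented example, not part of the note: for \(f''(t) - 3f'(t) + 2f(t) = 0\), the polynomial is \(t^2 - 3t + 2\) with roots \(1\) and \(2\), so the claimed solution form is \(f(t) = A_1e^{t} + A_2e^{2t}\). A quick numerical sanity check in Python:

```python
import math

# Invented example: f'' - 3f' + 2f = 0, characteristic roots r = 1, 2.
# Candidate solution f(t) = A1*e^(r1*t) + A2*e^(r2*t) with arbitrary A1, A2.
A1, A2 = 0.7, -1.3
r1, r2 = 1.0, 2.0

def f(t):   return A1*math.exp(r1*t) + A2*math.exp(r2*t)
def fp(t):  return A1*r1*math.exp(r1*t) + A2*r2*math.exp(r2*t)        # f'
def fpp(t): return A1*r1**2*math.exp(r1*t) + A2*r2**2*math.exp(r2*t)  # f''

# The residual f'' - 3f' + 2f should vanish for every t and any A1, A2.
for t in [0.0, 0.5, 1.0, 2.0]:
    assert abs(fpp(t) - 3*fp(t) + 2*f(t)) < 1e-9
```

The residual vanishes identically because \(e^{rt}\) turns the differential equation into \(e^{rt}(r^2 - 3r + 2)\), which is zero exactly when \(r\) is a root of the polynomial.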

(1) To be proved: \(\mathcal{L}\left[ f^{(n)}(t)\right] = s^nF(s) - \displaystyle \sum_{i=1}^{n}f^{(n-i)}(0)s^{i-1}\) for \(n \geq 1\)

By definition: \(\mathcal{L}\left[ f^{(n)}(t)\right] = \displaystyle \int_{0}^{\infty} f^{(n)}(t) e^{-st}\,dt\)

Integrating by parts: \(u=e^{-st}\) and \(dv=f^{(n)}(t)dt\)

\(\Rightarrow\) \(du=-se^{-st}dt\) and \(v=f^{(n-1)}(t)\)

\(\Rightarrow\) \(\mathcal{L}\left[ f^{(n)}(t)\right] = e^{-st}f^{(n-1)}(t) \mid_{0}^{\infty} +s \displaystyle \int_{0}^{\infty} f^{(n-1)}(t) e^{-st}\,dt\)

Integrating by parts once more we have:

\(\mathcal{L}\left[ f^{(n)}(t)\right] = \left[e^{-st}f^{(n-1)}(t)+se^{-st}f^{(n-2)}(t)\right]_{0}^{\infty} +s^2 \displaystyle \int_{0}^{\infty} f^{(n-2)}(t) e^{-st}\,dt\)

It is clear that this process will repeat \(n\) times in total, ending when the remaining integrand is reduced to \(f(t)e^{-st}\). However, let us examine the end portion of this process, specifically \(\displaystyle \int_{0}^{\infty} f^{(1)}(t) e^{-st}\,dt\)

It is apparent from the observed pattern that this integral will have coefficient \(s^{(n-1)}\). Then we have:

\(s^{(n-1)}\displaystyle \int_{0}^{\infty} f^{(1)}(t) e^{-st}\,dt = s^{(n-1)}f(t)e^{-st} \mid_{0}^{\infty} +s^n \displaystyle \int_{0}^{\infty} f(t) e^{-st}\,dt\)

The integral on the far right is the definition of \(F(s)\). Hence:

\(= s^{(n-1)}f(t)e^{-st} \mid_{0}^{\infty} +s^nF(s)\)

Since this is just the end portion of an entire sum of terms, we combine all of said terms to arrive at:

\(\mathcal{L}\left[ f^{(n)}(t)\right] = s^nF(s) + \left[e^{-st}f^{(n-1)}(t)+se^{-st}f^{(n-2)}(t)+\cdots+s^{n-1}e^{-st}f(t)\right]_{0}^{\infty}\)

\(\Rightarrow\) \(\mathcal{L}\left[ f^{(n)}(t)\right] = s^nF(s) +\left[e^{-st}\displaystyle\sum_{i=1}^{n}f^{(n-i)}(t)s^{i-1}\right]_{0}^{\infty}\)

Now let us examine: \(\displaystyle\lim_{t\to\infty}e^{-st}\displaystyle\sum_{i=1}^{n}f^{(n-i)}(t)s^{i-1}\)

This is where the exponential-order restriction on \(f(t)\) and its derivatives comes in. It guarantees that each \(f^{(n-i)}(t)\) grows no faster than some exponential, so for sufficiently large \(s\) the factor \(e^{-st}\) dominates and the whole expression tends to \(0\).

Then we finally have, since \(\displaystyle\lim_{t\to 0^+}e^{-st} = 1\):

\(\mathcal{L}\left[ f^{(n)}(t)\right] = s^nF(s) - \displaystyle \sum_{i=1}^{n}f^{(n-i)}(0)s^{i-1}\)

Which proves (1).
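Formula (1) can be sanity-checked symbolically for a small case. The sketch below uses SymPy's `laplace_transform` (the choice of library and of \(f(t)=\sin t\) are my own, not part of the proof) to compare both sides of (1) with \(n = 2\):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(t)  # an illustrative f(t); here F(s) = 1/(s^2 + 1)

F = sp.laplace_transform(f, t, s, noconds=True)

# Left side of (1) with n = 2: L[f''](s), computed directly
lhs = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)

# Right side of (1): s^2 F(s) - f(0)*s - f'(0)
rhs = s**2 * F - f.subs(t, 0) * s - sp.diff(f, t).subs(t, 0)

assert sp.simplify(lhs - rhs) == 0  # both sides agree
```

Here \(\mathcal{L}[f''] = -1/(s^2+1)\), matching \(s^2F(s) - s\cdot 0 - 1\).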

(2) To be proved: All ordinary differential equations of the form \(\displaystyle \sum_{i=0}^{n}b_i f^{(i)}(t) = 0\) , where \(\{b_i\}\) is a set of fixed coefficients with \(b_i \in \mathbb{R}\) and \(b_n \neq 0\) , have solution:

\( f(t) = \displaystyle \sum_{i=1}^{n} A_i e^{r_it}\)

where \( A_i\) are arbitrary constants satisfying any initial conditions and \(r_i\) are the roots of the polynomial:

\(\displaystyle \sum_{i=0}^{n} b_it^i\)

We begin by applying the Laplace Transform to this equation:

\(\mathcal{L}\left[\displaystyle \sum_{i=0}^{n}b_i f^{(i)}(t) \right] = 0\)

Since the Laplace Transform is, by definition, just a definite integral, it is linear: we may move the operator inside the summation and pull the constant coefficients out of the transform to arrive at:

\(\displaystyle \sum_{i=0}^{n}b_i\mathcal{L}\left[f^{(i)}(t) \right] = 0\)

Using what we just proved in (1), it is clear that this is equivalent to:

\(\displaystyle \sum_{i=0}^{n}b_i\left[s^iF(s)-s^{i-1}f(0)-s^{i-2}f^{(1)}(0)-...-sf^{(i-2)}(0)-f^{(i-1)}(0)\right]=0\)

Rearranging terms and utilizing a more compact notation leads us to:

\(F(s)\displaystyle \sum_{i=0}^{n}b_is^i = \displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}\)

\(\Rightarrow\) \( F(s) = \frac{\displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}}{\displaystyle \sum_{i=0}^{n}b_is^i}\)

And again, by definition, we have:

\(f(t)=\mathcal{L}^{-1}\left[\frac{\displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}}{\displaystyle \sum_{i=0}^{n}b_is^i}\right]\)

Now, let us examine \(\displaystyle \sum_{i=0}^{n}b_is^i\). This is an \(n^{th}\) degree polynomial in \(s\) (since \(b_n \neq 0\)), and hence, by the fundamental theorem of algebra, it may be factored as:


\(\Rightarrow\) \(\displaystyle \sum_{i=0}^{n}b_is^i = b_n\displaystyle \prod_{i=1}^{n} (s-r_i)\)

We write each factor as \((s-r_i)\) so that the \(r_i\) are precisely the roots of the polynomial: substituting \(s = r_i\) makes the corresponding factor, and hence the whole product, vanish.


\(f(t) = \mathcal{L}^{-1}\left[\frac{\displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}}{b_n\displaystyle \prod_{i=1}^{n} (s-r_i)}\right]\)

Now, the numerator is a polynomial in \(s\) of degree at most \(n-1\), and the denominator is a product of \(n\) linear factors. Assuming the roots \(r_i\) are distinct, partial fraction decomposition guarantees that \(\exists\) constants \(A_1,A_2,...,A_n\) such that:

\(\frac{\displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}}{b_n\displaystyle \prod_{i=1}^{n} (s-r_i)} = \frac{A_1}{(s-r_1)} +\frac{A_2}{(s-r_2)} +...+\frac{A_n}{(s-r_n)}\)

\(\Rightarrow\) \(\mathcal{L}^{-1}\left[\frac{\displaystyle \sum_{i=1}^{n}b_i\sum_{j=1}^{i}f^{(i-j)}(0)\,s^{j-1}}{b_n\displaystyle \prod_{i=1}^{n} (s-r_i)}\right] = \mathcal{L}^{-1}\left[\displaystyle \sum_{i=1}^{n} \frac{A_i}{s-r_i}\right]\)

And this implies, since the inverse transform is linear and \(\mathcal{L}^{-1}\left[\frac{1}{s-r}\right] = e^{rt}\) (so we may invert each term in the sum individually):

\(f(t) = \displaystyle \sum_{i=1}^{n} A_i e^{r_it}\)
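The partial fraction step above can be checked mechanically. The rational function below is a made-up example (numerator \(s+3\), roots \(r_1=1\), \(r_2=2\)); SymPy's `apart` recovers the constants \(A_i\):

```python
import sympy as sp

s = sp.symbols('s')
expr = (s + 3) / ((s - 1) * (s - 2))   # hypothetical F(s) with distinct roots
decomp = sp.apart(expr, s)             # partial fraction decomposition

# Expect A1/(s-1) + A2/(s-2) with A1 = -4, A2 = 5
assert sp.simplify(decomp - (-4/(s - 1) + 5/(s - 2))) == 0
```

Each resulting term \(A_i/(s - r_i)\) then inverts to \(A_ie^{r_it}\), which is exactly how the sum above becomes the solution.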

Which is the solution we wished to prove. Of course, if initial conditions are given, the \(A_i\) must satisfy them. Next, direct your attention to the \(r_i\): these are the roots of \(\displaystyle \sum_{i=0}^{n} b_is^i\). If we replace \(s\) with any other variable, the roots remain unchanged, since the replacement changes neither the degree of the polynomial nor the coefficients \(b_i\). Finally, recognize the symmetry between:

\(\displaystyle \sum_{i=0}^{n} b_is^i\) and \(\displaystyle \sum_{i=0}^{n} b_if^{(i)}(t)\)

As a briefer way of solving a differential equation of this form, one may replace the equation itself with the polynomial \(\displaystyle \sum_{i=0}^{n} b_is^i\) and find its roots, all the while remembering that the solution will be of the form:

\(f(t) = \displaystyle \sum_{i=1}^{n} A_i e^{r_it}\)

This is the very definition of the "characteristic equation" method of solving these equations. Hence this method is an implicit result of our previous two proofs.
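The shortcut just described can be carried out mechanically. Here is a sketch (the third-order equation \(f''' - 6f'' + 11f' - 6f = 0\) is an invented example) that reads off the characteristic roots and verifies the resulting solution form with SymPy:

```python
import sympy as sp

t, r = sp.symbols('t r')
b = [-6, 11, -6, 1]   # b_0..b_3 for f''' - 6f'' + 11f' - 6f = 0

# Characteristic polynomial sum(b_i * r^i) and its roots
char_poly = sum(bi * r**i for i, bi in enumerate(b))
rts = sorted(sp.roots(char_poly, r))   # roots() maps root -> multiplicity
assert rts == [1, 2, 3]

# General solution form: sum of A_i * exp(r_i * t) with symbolic A_i
A = sp.symbols('A1:4')
f = sum(Ai * sp.exp(ri * t) for Ai, ri in zip(A, rts))

# Substitute back into the ODE; the residual must vanish identically
ode = sum(bi * sp.diff(f, t, i) for i, bi in enumerate(b))
assert sp.simplify(ode) == 0
```

The check works for arbitrary \(A_i\) because differentiating \(e^{r_it}\) just multiplies it by \(r_i\), so the ODE collapses to the characteristic polynomial evaluated at each root.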


It is important to remember that this solution ONLY APPLIES if all \(b_i\) are constant and the differential equation is ordinary, linear and homogeneous (at least, our proof only applies in this case). Also note that the \(r_i\) need not be real, and that the partial fraction step assumes the roots are distinct; repeated roots introduce terms of the form \(t^ke^{r_it}\), which our proof does not cover. One more thing: (2) does not imply that this is the ONLY solution, nor does it imply that this is the fundamental, general solution. All (2) implies is that every ODE of this form has at least one solution of this form.

Note by Ethan Robinett
3 years, 1 month ago
