Solving the time-dependent random walk

A simplified model for molecular diffusion, stock price changes, genetic drift, etc. is the random walk: a particle that, at each point in time, can move a given distance in a random direction. For instance, an object currently at position \(n\) on a 1d line can move to \(n-1\) or \(n+1\), each with probability \(\frac12\).

Of course, each individual particle will take a different trajectory, and so, is unpredictable. Despite this, exact statistical properties of the movement can be obtained without much work. We'll take the simple discretized case where particles move the same unit distance at each step. A diagram for the flow of particles in this process is shown below:
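Before doing any analysis, it helps to see the process empirically. Here is a minimal Monte Carlo sketch (the function name `walk` and the trial counts are my own choices, not part of the derivation) that steps many independent walkers and records where they end up:

```python
import random

def walk(steps, trials=20000, seed=1):
    """Simulate many independent unbiased 1d walks; return final positions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        n = 0
        for _ in range(steps):
            n += rng.choice((-1, 1))   # left or right with probability 1/2
        finals.append(n)
    return finals

finals = walk(100)
mean = sum(finals) / len(finals)
msd = sum(n * n for n in finals) / len(finals)
print(mean, msd)  # mean stays near 0; mean square grows like the step count
```

Individual trajectories are unpredictable, but the ensemble statistics are not: the mean final position hovers near zero while the mean squared displacement grows linearly with the number of steps, which is exactly what we derive below.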

\[\boxed{n-1}\longleftrightarrow \boxed{n}\longleftrightarrow \boxed{n+1}\]

This means that at each time step, a particle at position \(n\) can drift to the right, or the left. We can write this down as a simple differential equation for the change in the number of particles at position \(n\), \(N_n\), with time:

\[\frac{dN_n}{dt}=\frac12\left(N_{n+1}+N_{n-1}-2N_n\right)\]

The 2 reflects the fact that particles at \(n\) can drift to the right or the left, with equal probability.

Several simplifications are appropriate:

  • I won't premultiply everything by the probability one half; this amounts only to a rescaling of the time unit.
  • Instead of thinking about this in terms of particle number, we can divide the equation by the total number of diffusing particles and, in that way, speak of the probability that a given position is occupied, rather than a number of particles. This is more generalizable and allows us to think of probability flows.
  • Moreover, instead of thinking of many particles on a single lattice, we can think of the equation as describing an infinite number of particles, with the probabilities describing the bulk behavior of the system. In this way, we can calculate statistical properties of the process very easily.

Rewriting the differential equation in the suggested way, we find:

\[\frac{dp_n}{dt}=p_{n+1}+p_{n-1}-2p_n\]
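As a sanity check on this probability-flow picture, here is a small numerical sketch: a forward-Euler integration of \(\dot p_n = p_{n+1}+p_{n-1}-2p_n\) starting from a delta function at the origin. The lattice size, time step, and function name are my own choices.

```python
# Forward-Euler integration of the master equation
# dp_n/dt = p_{n+1} + p_{n-1} - 2 p_n, with all probability
# at the origin at t = 0.
def evolve(t=2.0, dt=1e-3, half_width=60):
    size = 2 * half_width + 1
    p = [0.0] * size
    p[half_width] = 1.0               # delta function at the origin
    for _ in range(int(round(t / dt))):
        q = p[:]
        for i in range(1, size - 1):  # walls stay (numerically) empty
            q[i] += dt * (p[i + 1] + p[i - 1] - 2.0 * p[i])
        p = q
    return p

p = evolve()
half = len(p) // 2
total = sum(p)
var = sum((i - half) ** 2 * pi for i, pi in enumerate(p))
print(total, var)  # total probability stays 1; variance grows like 2t (here 4)
```

Two features worth noticing: the total probability is conserved by the flow, and the variance of the distribution grows linearly in time, foreshadowing the results at the end of this note.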
It's not trivial to solve a differential equation like that just by staring at it, and it's not straightforward to get a nice recurrence relation either. In order to solve it, we can use a probability generating function \(F(z,t)=\sum_n z^np_n(t)\). Its usefulness is best seen by example, but first we can point out two of its properties that reveal how it can be used.

First of all, if we set \(z=1\), then every coefficient multiplying the \(p_n(t)\) in \(F(z,t)\) is equal to 1, and we have the sum of all probabilities at a given time. But at all times, the probabilities have to add up to 1. Therefore, \(F(1,t)=1\).

The next property is a beautiful formula of complex analysis. The variable \(z\) can be taken to be a complex number, so we can use the full array of complex analysis theorems to dissect it. If, in general, we had an explicit expression for \(F(z,t)\) and we wanted to know the probability of being at position \(n\), \(p_n(t)\), we could simply take the contour integral of \(\frac{F(z,t)}{z^{n+1}}\) around the origin. The reason is that \(p_n\) multiplies \(z^n\), and the only term that survives a contour integral about the origin is the one proportional to \(1/z\). This is known as Cauchy's integral formula.

Therefore, we have:

\[p_n(t)=\frac{1}{2\pi i}\oint\frac{F(z,t)}{z^{n+1}}\,dz\]
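To make the formula concrete, here is a small numerical illustration (the toy generating function \((1+z)^4\) is my own example, not part of the walk problem): discretizing the contour integral over the unit circle recovers the coefficient of \(z^2\), which we know from the binomial theorem is \(\binom42=6\).

```python
import cmath, math

def coefficient(F, n, samples=512):
    """Approximate (1/2πi) ∮ F(z)/z^(n+1) dz on the unit circle
    by the trapezoid rule, i.e. average F(z) z^(-n) over the circle."""
    total = 0.0 + 0.0j
    for k in range(samples):
        z = cmath.exp(2j * math.pi * k / samples)
        total += F(z) * z ** (-n)
    return total / samples

# toy generating function: (1+z)^4 = 1 + 4z + 6z^2 + 4z^3 + z^4
c2 = coefficient(lambda z: (1 + z) ** 4, 2)
print(c2.real)  # ≈ 6
```

For periodic analytic integrands the trapezoid rule is spectrally accurate, so even a modest number of samples recovers the coefficient essentially exactly.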

We want to find a simple expression for \(F\) to which we can apply this formula. If we take the differential equation of the probability flow for \(p_n\), multiply it by \(z^n\), and sum over all \(n\) (using \(\sum_n z^np_{n+1}=z^{-1}F\) and \(\sum_n z^np_{n-1}=zF\)), we get the following relation:

\[\frac{\partial F(z,t)}{\partial t}=\left(z+z^{-1}-2\right)F(z,t)\]
This can be integrated directly to produce the simple formula:

\[F(z,t)=K(z)\,e^{\left(z+z^{-1}-2\right)t}\]

We can choose an initial condition in which all the particles start at the origin, i.e., \(p_n=0\) for all \(n\) except for \(p_0\), which is equal to 1. This implies \(F(z,0)=1\), which shows that the integration constant \(K(z)\) is 1.

At this point, we have solved the problem in principle, but we are still wanting for a solution in \(n\). We could expand the expression for \(F\) in a Taylor series to find the coefficients of \(z^n\), which are the \(p_n(t)\). However, carrying this out results in ugly-looking sums and Bessel functions.
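For the curious, those Bessel functions can be made explicit: since \(e^{\frac{x}{2}\left(z+z^{-1}\right)}=\sum_n I_n(x)z^n\) is the generating function of the modified Bessel functions, the exact coefficients are \(p_n(t)=e^{-2t}I_n(2t)\). The sketch below (function names and term counts are my own choices) checks this against a direct numerical contour integral of \(F\):

```python
import cmath, math

def p_exact(n, t, terms=60):
    """p_n(t) = e^{-2t} I_n(2t), via the series
    I_n(x) = sum_k (x/2)^(n+2k) / (k! (n+k)!)."""
    x = 2.0 * t
    s = sum((x / 2.0) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
            for k in range(terms))
    return math.exp(-x) * s

def p_contour(n, t, samples=4096):
    """Coefficient of z^n in F(z,t) = e^{(z + 1/z - 2) t},
    by discretizing the contour integral over the unit circle."""
    total = 0j
    for k in range(samples):
        z = cmath.exp(2j * math.pi * k / samples)
        total += cmath.exp((z + 1.0 / z - 2.0) * t) * z ** (-n)
    return (total / samples).real

print(p_exact(3, 2.0), p_contour(3, 2.0))  # the two agree
```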

Instead, we can do a bit more complex analysis and make a useful approximation to get an analytic expression for \(p_n(t)\).

Recalling the contour integral described above, we have:

\[p_n=\frac{1}{2\pi i}\oint F(z,t)\,z^{-\left(n+1\right)}\,dz\]

The only pole of the integrand is at the origin, so we can take the contour to be the unit circle. In this case,

\[z=e^{i\theta}\quad\text{and}\quad dz=ie^{i\theta}\,d\theta\]

Also, it's clear that

\[z+z^{-1}=2\cos\theta\]

So that the expression for \(F\) becomes

\[F=e^{2t\left(\cos\theta-1\right)}\]
Writing this contour integral out in full form, we are faced with evaluating

\[p_n(t)=\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}e^{2t\left(\cos\theta-1\right)}e^{-in\theta}\,d\theta\]

which is complicated. However, we notice that for times \(t>1\), the exponential drops off very quickly with increasing \(\theta\), so that contributions to the integral become negligible well before \(\pm\pi\), as seen in the plot below.

With this realization, we can expand \(\cos\theta\) to second order, so that \(2t\left(\cos\theta-1\right)\approx-t\theta^2\), with very little error. The integral is therefore

\[p_n(t)\approx\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}e^{-t\theta^2-in\theta}\,d\theta\]

Further, as there is essentially no contribution to the integrand for \(\left|\theta\right|>\pi\), we can extend both limits of integration to \(\pm\infty\). This introduces a small error at small \(t\), but for \(t\) modestly large the error is negligible, and as \(t\rightarrow\infty\) it vanishes. Therefore

\[p_n(t)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}e^{-t\theta^2-in\theta}\,d\theta\]

Finally, this can be solved analytically through the Hubbard-Stratonovich transformation from field theory (or, simply, completing the square): writing \(-t\theta^2-in\theta=-t\left(\theta+\frac{in}{2t}\right)^2-\frac{n^2}{4t}\) and using the Gaussian integral \(\int_{-\infty}^{\infty}e^{-t\phi^2}\,d\phi=\sqrt{\pi/t}\), we arrive at

\[p_n(t)=\frac{1}{\sqrt{4\pi t}}e^{-\frac{n^2}{4t}}\]

This is a Gaussian distribution that spreads symmetrically to the left and right as time proceeds; the variance increases linearly with time.
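How good is the Gaussian approximation at modest times? A quick numerical check (function names and sample counts are my own choices) compares it against the exact integral \(\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{2t\left(\cos\theta-1\right)}\cos\left(n\theta\right)\,d\theta\) (the imaginary part of the integrand is odd and integrates to zero):

```python
import math

def p_gauss(n, t):
    """Long-time Gaussian approximation."""
    return math.exp(-n * n / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def p_int(n, t, samples=20000):
    """Exact p_n(t) = (1/2π) ∫_{-π}^{π} e^{2t(cosθ-1)} cos(nθ) dθ by the
    midpoint rule; the factor Δθ/(2π) reduces to 1/samples."""
    total = 0.0
    for k in range(samples):
        theta = -math.pi + 2.0 * math.pi * (k + 0.5) / samples
        total += math.exp(2.0 * t * (math.cos(theta) - 1.0)) * math.cos(n * theta)
    return total / samples

for n in (0, 2, 5):
    print(n, p_int(n, 5.0), p_gauss(n, 5.0))  # already close at t = 5
```

Even at \(t=5\), the two agree to a few parts in a hundred, and the agreement only improves as \(t\) grows.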

We show the probability distribution below for \(n\in[0,20]\) and \(t\in[0,100]\). Remember, we started with the initial condition that all particles are at the origin. We can see that as time progresses, the particles flow out from the origin to greater values of \(n\).

In another view, here is a movie of the same process.

Mean and variance

Now that we've done the difficult job of finding the time-dependent probability distribution, we can get some values for free. First, the mean position of the diffusing particles.

We can find this by evaluating the expectation value:

\[\left\langle n\right\rangle=\int\limits_{-\infty}^{\infty} n\,p_n(t)\,dn=\frac{1}{\sqrt{4\pi t}}\int\limits_{-\infty}^{\infty}n\,e^{-\frac{n^2}{4t}}\,dn\]

We can see that this is zero because the integrand is odd. On average, particles diffuse about the origin without any bias, and so remain centered about the initial position.

It is clear that the particles are spreading out with time, so we expect the typical particle to wander away from the origin. Specifically, we can find the mean squared displacement. We find this, again, by taking an expectation value:

\[\left\langle n^2\right\rangle=\int\limits_{-\infty}^{\infty}n^2\,p_n(t)\,dn=\frac{1}{\sqrt{4\pi t}}\int\limits_{-\infty}^{\infty}n^2e^{-\frac{n^2}{4t}}\,dn\]

This integral can be solved in a number of ways, most beautifully by differentiating under the integral sign. It turns out that the mean squared displacement grows linearly in time:

\[\left\langle n^2\right\rangle=2t\]

and therefore we have the classic behavior of the random walk: the typical particle diffuses \(\sqrt{2t}\) steps from the origin by time \(t\), which is in good agreement with real random walkers like \textit{E. coli} swimming in liquid, or gas diffusing through the air.
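We can close the loop by simulating the continuous-time version of the walk directly (jump rates of 1 to each side, matching the rescaled time unit chosen earlier; the function name and trial count are my own choices) and checking \(\left\langle n^2\right\rangle=2t\):

```python
import random

def ctrw_final(t, rng):
    """One continuous-time walker: jumps left/right each at rate 1
    (total jump rate 2, exponential waiting times), observed at time t."""
    n, clock = 0, 0.0
    while True:
        clock += rng.expovariate(2.0)
        if clock > t:
            return n
        n += rng.choice((-1, 1))

rng = random.Random(7)
t = 4.0
finals = [ctrw_final(t, rng) for _ in range(20000)]
msd = sum(n * n for n in finals) / len(finals)
print(msd)  # fluctuates around 2t = 8
```

The sample mean squared displacement lands close to \(2t\), as the analytic result predicts.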

Note by Josh Silverman
7 years, 2 months ago



Nice post! Would you recommend Lawler's "Random Walk and the Heat Equation" to students? I haven't read it. Also, problem for you to think about: what happens if you have multiple random walkers in one dimension who are too polite to scoot past each other or occupy the same position?

Eric Brattain - 7 years, 1 month ago


> Would you recommend Lawler's "Random Walk and the Heat Equation" to students? I haven't read it.

I haven't read that! My first experience with this kind of stuff was Stochastic Processes in Physics and Chemistry by van Kampen, which I love and think is great but more than a little dense. I would also recommend these course notes from Matt Scott.

> Also, problem for you to think about: what happens if you have multiple random walkers in one dimension who are too polite to scoot past each other or occupy the same position?

I've never tried that situation before, but it sounds like a humdinger, I will put it on my list.

Josh Silverman Staff - 7 years, 1 month ago


Nice note about random motion

Rajat Dwivedi - 6 years, 11 months ago


Is this related to Pólya's Random Walk Constants?

lionel doolan - 5 years, 1 month ago


Yes it is, but that problem (whether or not the particle returns to the origin) is a harder question about the same kind of trajectory.

Josh Silverman Staff - 5 years ago

