Often in the course of writing some piece of code for data analysis, or in making a simulation of a system, like a virus spreading through a population, gene expression in a cell, or the dynamics of the stock market, we'll want to sample random draws from a probability distribution. The problem is that most languages come equipped only with simple random number generators, capable of drawing uniformly from the real unit interval $[0, 1)$, or from a given range of integers. The uniform distribution is fine when we need to simulate something simple like rolling dice, but is inadequate when more sophisticated probabilities are crucial; we commonly require non-uniform distributions like the Gaussian, Poisson, and even exotic choices like the Gumbel distribution. Can we find a way to sample from arbitrary probability distributions using simple random number generators?
We wish to sample from a non-flat distribution $p(x)$ using a random number generator that draws from the uniform distribution. So we need some way to transform samples from the uniform distribution into samples from the arbitrary distribution $p(x)$.
Imagine having a representative sample $\{x_1, x_2, \ldots, x_N\}$ drawn from $p(x)$. We can rank-order this list and draw randomly on the index $i$; each value would have an equal chance of being drawn, which would give us the kind of sampling we're after. In the limit of large $N$, the set is increasingly representative of the true distribution $p(x)$.
For example, suppose we have the following representation of a binary distribution, which has an equal chance of generating 1 or 0. If we sample randomly on the index, it is easy to see that we transform random draws from the integers into draws from the binary distribution:
Now suppose we have a similar table for a representative sample from the arbitrary distribution $p(x)$. Let us collapse the sample into a table of pairs $(x_j, n_j)$, where the first entry is the value and the second is the number of times it appears, and sort the table entries by value:
If we want to sample randomly from $p(x)$, we can simply draw a random integer $i$ between 1 and $N$, then do a lookup in the table. For some $j$, we'll have $N_{j-1} < i \le N_j$, where $N_j = \sum_{k \le j} n_k$ is the cumulative count, and so we map $i \to x_j$.
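As a small sketch of this table lookup (the table and its counts here are hypothetical, invented purely for illustration):

```python
import random

# Hypothetical sorted table of (value, count) pairs; counts sum to N = 10.
table = [(0.5, 2), (1.0, 5), (2.0, 3)]
N = sum(count for _, count in table)

def draw(table, N):
    """Pick a random index i in 1..N and walk the cumulative counts
    until the index falls inside some entry's range."""
    i = random.randint(1, N)
    cumulative = 0
    for value, count in table:
        cumulative += count
        if i <= cumulative:
            return value
```

Each value is then drawn with probability $n_j / N$: here, 1.0 comes up about half the time, 2.0 about 30% of the time.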
If we take the limit of large samples $N \to \infty$, and normalize the counts so that $n_j / N \to p(x_j)$, then we can instead draw a random real number $u$ between zero and one, and map it to $x$ by finding the approximate solution of $\sum_{x_j \le x} n_j / N = u$.
In the continuous limit, we have $\sum_{x_j \le x} n_j / N \to \int_{-\infty}^{x} p(x')\,dx' = F(x)$, where $F$ is the cumulative distribution function (cdf) of $p$, and thus our sampling problem amounts to solving the integral equation $F(x) = u$.
Thus, if we wish to sample from an arbitrary distribution $p(x)$, we simply sample $u$ from the flat distribution on $[0, 1]$ and map it to the solution of $F(x) = u$, i.e. $x = F^{-1}(u)$.
For a visual illustration of how this works, consider the bimodal Gaussian shown below. We expect draws from this distribution to be centered around the two peaks, and for very few draws to come from the region between, or beside them.
As we showed with the table analogy, drawing randomly on the indices of the table representation of is the same as sampling the distribution . In the continuous case, drawing randomly from the range of the cdf of (and mapping to the associated value) is equivalent to drawing randomly from itself. Thus, if we draw a horizontal line at a random height, we expect to have high probability of mapping to the two Gaussians, and a very low probability of mapping to the regions outside them.
We see that the Gaussians occupy most of the vertical space in the range of the cdf, while the inter-Gaussian region occupies only a sliver. We can place our horizontal lines almost anywhere and have a good chance of mapping to one of the two Gaussians. Indeed, to map to the low probability regions, we'd have to randomly place our horizontal line precisely on the plateaus in the cdf (gold box above).
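This picture can be reproduced numerically. The mixture below is a hypothetical bimodal density (two equal-weight Gaussians at $\pm 2$ with variance $0.25$ — an illustrative choice, not the exact curve in the figure); we build its cdf on a grid and invert it by interpolation:

```python
import numpy as np

xs = np.linspace(-6, 6, 2001)
dx = xs[1] - xs[0]

# Equal mixture of two Gaussians centered at -2 and +2 (variance 0.25).
pdf = np.exp(-((xs + 2) ** 2) / 0.5) + np.exp(-((xs - 2) ** 2) / 0.5)
pdf /= pdf.sum() * dx                 # normalize numerically

cdf = np.cumsum(pdf) * dx
cdf /= cdf[-1]                        # force the cdf to end exactly at 1

# Draw uniform "heights" on the cdf's range, then map each back to x by
# interpolating x as a function of the cdf value.
u = np.random.random(100_000)
samples = np.interp(u, cdf, xs)
```

Almost all of the samples land in the two peaks near $\pm 2$, with very few in the valley between them, exactly as the horizontal-line argument predicts.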
As a concrete example, let's sample from the exponential distribution, $p(x) = \lambda e^{-\lambda x}$ for $x \ge 0$. Its cdf is given by $F(x) = \int_0^x \lambda e^{-\lambda x'}\,dx' = 1 - e^{-\lambda x}$.
Solving $F(x) = u$ for $x$, we find $x = \frac{1}{\lambda} \ln\!\left(\frac{1}{1 - u}\right)$.
Now let's show that it works. It would be straightforward to write our code completely from scratch; however, it is faster, more convenient, and above all, more reliable to use an open source project to handle some common operations. We'll use the numerical Python library numpy to generate some random numbers:
```python
import numpy as np
import matplotlib.pyplot as plt

uniform_rands = np.random.random(1000000)
```
For a sanity check, let's see what we have at this point. If we plot a histogram, we should see a flat distribution of real numbers between 0 and 1.
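A minimal version of that sanity check (using `np.histogram` to count directly rather than reading off the plot; the choice of 10 bins is arbitrary):

```python
import numpy as np

uniform_rands = np.random.random(1000000)

# Bin the draws into 10 equal-width bins on [0, 1); a flat distribution
# should put roughly 100,000 samples in each bin.
counts, _ = np.histogram(uniform_rands, bins=10, range=(0.0, 1.0))
print(counts)
```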
Now we map our sample from the flat distribution according to the cdf transform we found above.
```python
param = 1.8
transformed_rands = [1 / param * np.log(1 / (1 - x)) for x in uniform_rands]
plt.hist(transformed_rands, bins=500, density=True)
```
A co-plot of the desired distribution, alongside a normalized histogram of the data shows that our transformed sample matches our desired distribution very closely. In the limit of more points, the agreement will be exact.
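Beyond the eyeball test, we can also check the moments numerically (using the same `param = 1.8` as above): for an exponential with rate $\lambda$, both the mean and standard deviation equal $1/\lambda$.

```python
import numpy as np

param = 1.8
u = np.random.random(1_000_000)
samples = np.log(1 / (1 - u)) / param   # vectorized form of the transform above

# Both statistics should be close to 1 / param ≈ 0.5556.
print(samples.mean(), samples.std(), 1 / param)
```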
Seemingly insignificant points of disagreement between a true distribution and an approximating distribution can lead to harmful consequences. A compelling example is the so-called long tails of distributions, which describe the probability of rare events. Sometimes, mathematically convenient distributions are used to stand in for empirically determined distributions. For instance, Gaussian distributions and their close relatives are used to describe some kinds of behavior in financial markets.
In many cases, the distribution that is used is a very close approximation to the true distribution, differing only in the long tails. Though disagreement in the tails may seem like an innocuous discrepancy (it is a difference in an already sparse region of the distribution), it is actually very significant, and for purposes of risk management, perhaps the most significant.
By definition, these events happen very rarely, which makes them both psychologically convenient to ignore, and technically difficult to properly characterize. It is this combination that makes them so insidious. As a simple illustration of the danger of mis-appraising tail probability, consider the two distributions below, one of which is the Gaussian distribution, and one of which is a reverse Gumbel distribution.
In the view on the left, the distributions look very similar: they have the same mean, nearly the same profile out to 90% or more of the cumulative area of the two distributions, and both fall quickly to small values outside of the region of agreement. However, in the logarithmic view, we can see clearly that the Gaussian distribution puts the risk of a 12-fold loss almost 10 orders of magnitude lower than the reverse Gumbel distribution does.
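To put rough numbers on this kind of gap (the parameters here — a standard normal versus a minimum Gumbel with location 0 and scale 1 — are illustrative assumptions, not the ones behind the figure):

```python
from math import erfc, exp, sqrt

x = -12.0

# Left-tail probabilities P(X <= x) under each model.
gaussian_tail = 0.5 * erfc(-x / sqrt(2))   # standard normal cdf at x
gumbel_tail = 1 - exp(-exp(x))             # minimum-Gumbel cdf at x

# The two models disagree on this single event by many orders of magnitude.
print(gaussian_tail, gumbel_tail, gumbel_tail / gaussian_tail)
```

Both probabilities are tiny, but the ratio between them is astronomical — which is exactly the regime where risk estimates diverge.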
In real terms, this means that the analyst who uses the Gaussian believes a 12-fold drawdown to be roughly ten orders of magnitude rarer than the reverse-Gumbel-using analyst expects it to be. This divergence in belief will lead to completely different behavior as concerns trading strategies and risk management, and could eventually lead to the Gaussian firm having a blowup that it can't cover.
Though the full details of the situation are undoubtedly more complex than this simple example, just such use of close, but not exact stand-in distributions contributed to the famed blowup of Long Term Capital Management, an arbitrage fund backed by the academic might of two recipients of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. After 4 years of unmatched return on investment, the fund exploded in an incredible collapse over the course of a summer, in part due to gross misestimation of correlations between different events.
Trajectory of the value of LTCM's assets under management (March 1994 – October 1998)