The theory of distributions was introduced by Laurent Schwartz in the 1940s—based on the theory of (locally convex) topological vector spaces—to provide a firm foundation for a theory of generalized functions, such as the Dirac $\delta$-function. The main advantage is that the derivative of a distribution always exists, and is itself a distribution (hence is infinitely differentiable in the sense of distributions). It revolutionizes much of analysis that deals with derivatives, most notably with the concept of fundamental solution of a linear differential operator, which supplants (or subsumes) the classical notions of Green's function and elementary solution. Compared to other "revolutions" in analysis, such as the epsilon-delta argument, properties of real numbers, and Lebesgue's theory of integration, Schwartz's theory of distributions is conceptually rather simple, and has the appeal of the calculus of Newton and Leibniz: one can work with distributions without worrying too much about the (functional analytic) foundation.
There were many motivations that led to Schwartz's theory, going as far back as the general solution $u(x, t) = f(x + ct) + g(x - ct)$ to the (one-dimensional) wave equation, which curiously does not require that $f$ and $g$ be differentiable, an observation made by Euler. But the Dirac $\delta$-function is perhaps the most striking and the best-known example. It is often characterized by the properties
- $\delta(x) = 0$ for all $x \neq 0$, and $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$; and
- $\int_{-\infty}^{\infty} \delta(x)\,f(x)\,dx = f(0)$, or more generally $\int_{-\infty}^{\infty} \delta(x - a)\,f(x)\,dx = f(a)$ so long as $f$ is continuous at $a$,
which clearly is impossible within the framework of ordinary functions and any sensible notion of integration. At best it should be regarded as the "limit" of a sequence of actual functions with higher and narrower spikes at $x = 0$. In fact, mathematicians had used this kind of limiting procedure, often under the name of approximation to the identity (the Dirichlet kernel being one example), in the study of Fourier series.
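This limiting procedure is easy to watch numerically. A minimal sketch (assuming NumPy; `spike` and `pair` are ad hoc names for this illustration), integrating ever-narrower Gaussian spikes against a continuous $f$:

```python
import numpy as np

def spike(x, eps):
    """Gaussian approximation to the identity: total integral 1, width ~ eps."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def pair(f, eps, lo=-10.0, hi=10.0, n=200001):
    """Quadrature for the integral of spike_eps(x) * f(x) over the real line."""
    x = np.linspace(lo, hi, n)
    return np.sum(spike(x, eps) * f(x)) * (x[1] - x[0])

for eps in (1.0, 0.1, 0.01):
    print(eps, pair(np.cos, eps))   # approaches cos(0) = 1 as eps -> 0
```

For $\varepsilon = 1$ the integral is $e^{-1/2} \approx 0.61$; by $\varepsilon = 0.01$ it is within $10^{-4}$ of $f(0) = 1$.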
Physicists had been more liberal in treating $\delta$ as a bona fide function (of a real variable $x$). For instance, it can be translated: $\delta(x - a)$ describes a spike at $x = a$; and dilated: $\delta(cx) = \delta(x)/|c|$. Most importantly, it makes sense to say that $\delta$ is the derivative of the (unit) "step function", also called the Heaviside function: $H(x) = 0$ for $x < 0$ and $H(x) = 1$ for $x > 0$. To say $H' = \delta$ means that we can integrate the $\delta$-function: $\int_a^b \delta(x)\,dx = H(b) - H(a) = 1$ whenever $a < 0 < b$, as desired. (If $a = 0$ or $b = 0$ we have to say it's undefined.) Moreover, by "applying" the Fourier inversion formula, one obtains the curious formula $\delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ix\xi}\,d\xi$. A thorny issue is multiplication: while it is perfectly fine to multiply $\delta$ by an ordinary function $f$, namely $f(x)\,\delta(x) = f(0)\,\delta(x)$, so long as that function is continuous at $0$, it does not seem possible to make sense of products like $\delta(x)^2$ or $H(x)\,\delta(x)$.
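The dilation rule just mentioned can itself be "derived" by a formal change of variables $y = cx$:
$$\int_{-\infty}^{\infty} \delta(cx)\,f(x)\,dx = \int_{-\infty}^{\infty} \delta(y)\,f(y/c)\,\frac{dy}{|c|} = \frac{f(0)}{|c|}$$
(for $c < 0$ the reversal of the limits of integration accounts for the absolute value), so $\delta(cx)$ and $\delta(x)/|c|$ act the same way on every continuous $f$.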
The calculus of $\delta$-functions developed by physicists is not confined to one dimension. For example, the product $\delta(x)\,\delta(y)\,\delta(z)$ makes perfectly good sense: it represents the mass (or electric charge) density of a single point-mass (or point-charge) at the origin with total mass (or total charge) $= 1$. We shall for the moment denote it by $\delta(\mathbf{x})$, $\mathbf{x} = (x, y, z)$, though physicists prefer to write $\delta^3(\mathbf{x})$. By this and other expressions one can describe any "distribution" of electric charge—at various points, along a piece of curve, or on a sheet of surface—as a single "generalized function" $\rho$ on $\mathbb{R}^3$, which enters the equation of electrostatics (Poisson's equation) as the "source term": $$-\Delta u = \rho,$$ where $\Delta = \partial_x^2 + \partial_y^2 + \partial_z^2$ is the Laplacian on $\mathbb{R}^3$. The unknown $u$, the (electric) potential, is to be solved with the extra "boundary" condition that $u \to 0$ as $|\mathbf{x}| \to \infty$. For example, a single point-charge $\rho = \delta(\mathbf{x})$ ought to "produce" $u(\mathbf{x}) = \frac{1}{4\pi|\mathbf{x}|}$, which is to say $$-\Delta\,\frac{1}{4\pi|\mathbf{x}|} = \delta(\mathbf{x}),$$ a classical fact that could have been known to Newton (his "inverse square law"). This identity helps justify the general solution $$u(\mathbf{x}) = \int_{\mathbb{R}^3} \frac{\rho(\mathbf{y})}{4\pi|\mathbf{x} - \mathbf{y}|}\,d\mathbf{y}$$ for an arbitrary charge distribution $\rho$: simply let $-\Delta$ act inside the integral... which may also be interpreted as a "superposition" of potentials, one for each "charge" $\rho(\mathbf{y})\,d\mathbf{y}$ at the point $\mathbf{y}$, $d\mathbf{y}$ being the infinitesimal volume, or "volume form" $dy_1\,dy_2\,dy_3$. This and many other identities really make the $\delta$-function a worthy object of mathematical inquiry. As we shall see, the theory of distributions gives a very natural interpretation to all these identities.
The charge distribution of electrostatics may well be the source of the terminology for Schwartz, for it is even possible to represent an electric dipole—two equal and opposite point-charges very close to each other—as a distribution, but not as a (Radon) measure. It's a little unfortunate that the term distribution has often been confused with, or thought to be a generalization of, a probability distribution.
Distributions can be defined on any open set $\Omega \subseteq \mathbb{R}^n$ (and thus on any smooth manifold), and the space of all distributions on $\Omega$ is denoted by $\mathcal{D}'(\Omega)$. However, most of the naturally occurring distributions, including those mentioned above, are in fact defined on $\mathbb{R}^n$, so we shall state our definitions for $\mathbb{R}^n$, even though there is virtually no change for the general case.
Let $\mathcal{D}(\mathbb{R}^n)$ be the set, in fact $\mathbb{C}$-vector space, of all infinitely differentiable functions $\varphi : \mathbb{R}^n \to \mathbb{C}$ with compact support, i.e. there exists a compact (closed and bounded) set $K \subset \mathbb{R}^n$ such that $\varphi(x) = 0$ for all $x \notin K$. The space $\mathcal{D}(\mathbb{R}^n)$ is endowed with the topology that $\varphi_k \to \varphi$ in $\mathcal{D}(\mathbb{R}^n)$ as $k \to \infty$, if all the $\varphi_k$ are supported in the same compact set $K$, and for any multi-index $\alpha$, $\partial^\alpha \varphi_k$ converges to $\partial^\alpha \varphi$ uniformly.
There is an abundance of such functions: the intuition is that any "hand-drawn" function can be "smoothed out" to be infinitely differentiable, even though it's not easy to write one down with a formula. The space $\mathcal{D}(\mathbb{R}^n)$, not necessarily with the topology, is also denoted by $C_c^\infty(\mathbb{R}^n)$ or $C_0^\infty(\mathbb{R}^n)$. Also, there is no need to be alarmed by the appearance of $\mathbb{C}$: you are free to think of real-valued functions for the most part.
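One can in fact write such a function down, though the formula looks unlike anything from first-year calculus. A sketch (assuming NumPy; `bump` is an ad hoc name) of the standard example:

```python
import numpy as np

def bump(x):
    """The standard bump function: exp(-1/(1 - x^2)) for |x| < 1, zero outside.
    It is infinitely differentiable everywhere, including at x = +/- 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

print(bump(0.0)[0])                 # e^{-1} ~ 0.3679
print(bump(1.0)[0], bump(2.0)[0])   # both exactly 0: supported in [-1, 1]
```

All derivatives vanish at $x = \pm 1$, which is what glues the exponential smoothly to the zero function outside.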
A distribution on $\mathbb{R}^n$ is a continuous linear functional $u : \mathcal{D}(\mathbb{R}^n) \to \mathbb{C}$, i.e.,
- (linearity) $u(a\varphi + b\psi) = a\,u(\varphi) + b\,u(\psi)$ for all $\varphi, \psi \in \mathcal{D}(\mathbb{R}^n)$ and all $a, b \in \mathbb{C}$; and
- (continuity) $u(\varphi_k) \to u(\varphi)$ whenever $\varphi_k \to \varphi$ in $\mathcal{D}(\mathbb{R}^n)$.
The space of all distributions, endowed with the weak topology, is denoted by $\mathcal{D}'(\mathbb{R}^n)$.
As a matter of fact, any linear functional you can think of will turn out to be continuous, so from the practical point of view, one may ignore the topology altogether.
The Dirac $\delta$-function is a distribution defined by $\delta(\varphi) = \varphi(0)$, which one checks is linear (and takes continuity for granted).
For the sake of intuition, we can have a more general "$\delta$-function type" distribution: for any hypersurface $S \subset \mathbb{R}^n$, such as the sphere $S^{n-1}$ (or for that matter, a submanifold of any dimension), we have a distribution $\delta_S$ defined by $\delta_S(\varphi) = \int_S \varphi\,dS$, where $dS$ is the "surface element" on $S$.
A whole class of examples is given by actual functions, which in particular includes all continuous functions $f : \mathbb{R}^n \to \mathbb{C}$.
If $f$ is locally integrable, i.e. $\int_K |f|$ is finite for any compact set $K$, then $u_f(\varphi) = \int_{\mathbb{R}^n} f\,\varphi$ is a distribution and, by abuse of notation, is often denoted simply by $f$. Note that changing the values of $f$ on a set of measure zero does not alter the distribution $u_f$.
For $n = 1$, the Heaviside function $H$ is locally integrable and thus defines a distribution by the recipe above. Note we don't need to specify the value of $H(0)$. For a less trivial example, $|x|^{-a}$ defines a distribution on $\mathbb{R}$ if and only if $a < 1$ (or $\operatorname{Re} a < 1$ if we allow $a$ to be complex).
As defined, distributions are nothing like actual functions, for one cannot evaluate $u$ at a single point $x$. Nevertheless, it is meaningful to speak of the "values" of $u$ on an open set $V$ (small or big) "collectively": we say $u$ vanishes on an open set $V$ if $u(\varphi) = 0$ for all $\varphi$ supported in $V$. For example, $\delta$ vanishes on $\mathbb{R}^n \setminus \{0\}$. Similarly, we say $u$ agrees with a function $f$ on an open set $V$ if $u(\varphi) = \int f\,\varphi$ for all $\varphi$ supported in $V$. Incidentally, those functions $\varphi$ are often called test functions because one can imagine that they are being used to "test" or "detect" the values of a function (or a distribution) on an open set.
It may be remarked that distributions exemplify the dominant viewpoint of modern mathematics that may go under the term "formalism" or "functionalism": a mathematical object is defined by what it does (in relation to other objects), instead of what it is intrinsically.
The most important fact about distributions is that they can always be differentiated. The definition is such that it agrees with the usual notion of derivatives on actual functions (sometimes called regular functions for emphasis, though it's a terribly overloaded word in mathematics), and in this sense distributions are said to be generalized functions. To "derive" the definition, let $u = u_f$ for some function $f \in C^1(\mathbb{R}^n)$, so its partial derivative $\partial f/\partial x_j$, $j = 1, \dots, n$, is continuous and therefore defines a distribution $u_{\partial f/\partial x_j}$, which is what we want $\partial u_f/\partial x_j$ to be. By integration by parts, we have $$\int_{\mathbb{R}^n} \frac{\partial f}{\partial x_j}\,\varphi\,dx = -\int_{\mathbb{R}^n} f\,\frac{\partial \varphi}{\partial x_j}\,dx,$$ where the boundary term drops out because $\varphi$ is compactly supported. Thus, the distribution $u_{\partial f/\partial x_j}$ can also be defined by $\varphi \mapsto -u_f(\partial\varphi/\partial x_j)$, where the derivative is acting on $\varphi$ instead. So, we can extend this definition to all $u \in \mathcal{D}'(\mathbb{R}^n)$: to wit, $\varphi \mapsto -u(\partial\varphi/\partial x_j)$ is a distribution, and we shall denote it by $\partial u/\partial x_j$. It is often convenient to denote the value $u(\varphi)$ by $\langle u, \varphi \rangle$, the notation often used for the "pairing" of a finite-dimensional vector space with its dual. For instance, the $\delta$-function may be written as $\langle \delta, \varphi \rangle = \varphi(0)$ for all $\varphi \in \mathcal{D}(\mathbb{R}^n)$.
For any distribution $u$, its partial derivative in the direction $x_j$ is a distribution defined by $$\left\langle \frac{\partial u}{\partial x_j}, \varphi \right\rangle = -\left\langle u, \frac{\partial \varphi}{\partial x_j} \right\rangle.$$ Consequently, for any multi-index $\alpha$, $\partial^\alpha u$ is a distribution defined by $$\langle \partial^\alpha u, \varphi \rangle = (-1)^{|\alpha|}\,\langle u, \partial^\alpha \varphi \rangle.$$
The derivatives of the $\delta$-function (on $\mathbb{R}$) are given by $\langle \delta', \varphi \rangle = -\varphi'(0)$ and $\langle \delta'', \varphi \rangle = \varphi''(0)$ (and similarly for higher derivatives).
For $n = 1$, we can verify the identity $H' = \delta$ in the sense of distributions: $$\langle H', \varphi \rangle = -\langle H, \varphi' \rangle = -\int_0^\infty \varphi'(x)\,dx = \varphi(0) = \langle \delta, \varphi \rangle$$ for all $\varphi \in \mathcal{D}(\mathbb{R})$.
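The same identity can be tested numerically: pairing $H$ with $-\varphi'$ for a compactly supported test function should reproduce $\varphi(0)$. A sketch, assuming NumPy (the bump test function here is the standard example, an assumption of this illustration):

```python
import numpy as np

def phi(x):
    """Test function: the standard bump supported in [-1, 1]."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
H = (x > 0).astype(float)        # Heaviside (value at 0 is irrelevant)
dphi = np.gradient(phi(x), dx)   # numerical phi'

lhs = -np.sum(H * dphi) * dx     # <H', phi> := -<H, phi'>
rhs = np.exp(-1.0)               # <delta, phi> = phi(0) = e^{-1}
print(lhs, rhs)
```

The two printed numbers agree to several decimal places, as the distributional identity predicts.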
In fact, the procedure of "deriving" the definition works for many other operations, so long as the operation is well defined on the test functions (e.g. translations, Fourier transform, convolution). More precisely, if $Tf$ is well-defined for (some class of) locally integrable functions $f$, and $\langle Tf, \varphi \rangle = \langle f, T^*\varphi \rangle$ for some operation $T^*$ on test functions, we can define $Tu$ for an arbitrary distribution $u$ by the same expression: $\langle Tu, \varphi \rangle := \langle u, T^*\varphi \rangle$. For the case of derivatives, $T = \partial^\alpha$ and the "adjoint" is $T^* = (-1)^{|\alpha|}\partial^\alpha$. As exercises, come up with your own definition for translating a distribution, and more generally a "change of variables" on $\mathbb{R}^n$.
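For instance, one answer to the first exercise: for the translation $(\tau_h f)(x) = f(x - h)$ of an ordinary function, a change of variables gives
$$\int f(x - h)\,\varphi(x)\,dx = \int f(x)\,\varphi(x + h)\,dx,$$
so the adjoint is $T^*\varphi = \tau_{-h}\varphi$, and accordingly one defines $\langle \tau_h u, \varphi \rangle = \langle u, \tau_{-h}\varphi \rangle$ for any distribution $u$. In particular, $\tau_a \delta$ is the $\delta$-function at $a$, as it should be: $\langle \tau_a \delta, \varphi \rangle = \varphi(a)$.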
Having at first rid our distributions of any semblance of functions, we carefully recover all the properties that functions enjoy, with the sole exception of evaluation at a point. At the risk of serious notational confusion (for a first encounter), we may at times write distributions as if they were functions taking on a variable, as $u(x)$, but remember not to substitute $x$ by a particular number (or rather, a tuple of numbers). In particular, we now have a third way of writing the exact same thing: $$\langle u, \varphi \rangle = u(\varphi) = \int u(x)\,\varphi(x)\,dx \quad \text{(formally)}.$$ The word "formally" is to remind you that this is not an integral in Riemann's or Lebesgue's sense, but a shorthand—or rather, a longhand—for the evaluation of $u$ at $\varphi$. The advantage of writing $u$ as $u(x)$ is that we may express many naturally occurring distributions more easily, without constantly referring to the test functions $\varphi$. See examples below.
Now with the definition of distributional derivatives, we come to the first major achievement of the theory, the notion of fundamental solution of a linear differential equation, or rather a linear differential operator, on $\mathbb{R}^n$: $$P = P(x, \partial) = \sum_\alpha a_\alpha(x)\,\partial^\alpha,$$ where the sum is finite, and the coefficients $a_\alpha$ are functions on $\mathbb{R}^n$, so it acts on $C^\infty(\mathbb{R}^n)$. One very special class is when all the coefficients $a_\alpha$ are constants, for which we can define the fundamental solution more easily.
A fundamental solution (or Green's function) of a linear differential operator $P(\partial)$ with constant coefficients is a distribution $E \in \mathcal{D}'(\mathbb{R}^n)$ such that $P(\partial)E = \delta$.
Every non-zero linear differential operator with constant coefficients has a fundamental solution.
Remark: It is not unique, since one can always add a "homogeneous solution" (a solution $u$ of $P(\partial)u = 0$, such as a constant) to $E$. For certain types of $P$, one can impose extra conditions that will ensure a unique fundamental solution that, in some sense, is the best one. (The original proofs were nonconstructive; now there are rather explicit formulas, using e.g. the Fourier transform.) The fundamental solution is "fundamental" because it enables one to solve a variety of initial or boundary value problems, with any "source term". This was codified in precise terms as Duhamel's principle, but becomes much clearer in the language of distributions, after developing the notion of convolution. See Green's functions in physics for some applications.
The (Newtonian) potential $\frac{1}{|\mathbf{x}|}$ defines a distribution on $\mathbb{R}^3$ (being locally integrable), and, up to the factor $-\frac{1}{4\pi}$, it is the unique fundamental solution for the Laplacian that goes to $0$ as $|\mathbf{x}| \to \infty$. To see that $\Delta \frac{1}{|\mathbf{x}|} = -4\pi\,\delta$, one pairs with a test function and integrates by parts outside a small ball around the origin, picking up the boundary terms in the limit.
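In more detail, here is a sketch of the standard computation. For $\varphi \in \mathcal{D}(\mathbb{R}^3)$, since $1/|\mathbf{x}|$ is harmonic away from the origin, Green's second identity on the region $\{\epsilon < |\mathbf{x}| < R\}$ (with $R$ so large that $\varphi = 0$ near $|\mathbf{x}| = R$) gives
$$\left\langle \Delta\frac{1}{|\mathbf{x}|}, \varphi \right\rangle = \lim_{\epsilon \to 0} \int_{|\mathbf{x}| > \epsilon} \frac{\Delta\varphi(\mathbf{x})}{|\mathbf{x}|}\,d\mathbf{x} = \lim_{\epsilon \to 0} \int_{|\mathbf{x}| = \epsilon} \left( \frac{1}{|\mathbf{x}|}\frac{\partial \varphi}{\partial n} - \varphi\,\frac{\partial}{\partial n}\frac{1}{|\mathbf{x}|} \right) dS = -4\pi\,\varphi(0),$$
since the first boundary term is $O(\epsilon)$ (the sphere has area $4\pi\epsilon^2$), while $\partial_n(1/|\mathbf{x}|) = 1/\epsilon^2$ on the sphere (the normal $n$ pointing toward the origin), so the second term tends to $-4\pi\,\varphi(0)$.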
We close with a list of the ("best") fundamental solutions for the three classical equations, each giving rise to a vast subject (the so-called elliptic, parabolic, and hyperbolic PDEs, respectively):
- Laplace: $\Delta E = \delta$ on $\mathbb{R}^3$, with $E = -\dfrac{1}{4\pi|\mathbf{x}|}$;
- heat: $(\partial_t - \Delta)E = \delta$ on $\mathbb{R}^n \times \mathbb{R}$, with $E(x, t) = H(t)\,(4\pi t)^{-n/2}\,e^{-|x|^2/4t}$;
- wave: $(\partial_t^2 - \Delta)E = \delta$ on $\mathbb{R}^3 \times \mathbb{R}$, with $E(x, t) = \dfrac{\delta(t - |x|)}{4\pi|x|}$.
They serve as a reminder of how nontrivial, despite everything above, fundamental solutions—and distributions in general—can be, and as the background for the Malgrange-Ehrenpreis theorem, which may be regarded as a fundamental theorem of analysis.
By applying the formal properties of the $\delta$-function, one could begin to explore its Fourier transform: $$\hat\delta(\xi) = \int_{\mathbb{R}^n} \delta(x)\,e^{-ix\cdot\xi}\,dx = e^{-i0\cdot\xi} = 1.$$ That is, the Fourier transform of $\delta$, when properly defined, should be the constant function $1$. [Various conventions are in common use for the definition of Fourier transform, and for purposes of differential equations, the one given above, with no factor of $2\pi$ in front, is the most convenient.]
One of the main properties of Fourier transform that we want to preserve is the straightforward calculation $$\widehat{\frac{\partial f}{\partial x_j}}(\xi) = \int \frac{\partial f}{\partial x_j}(x)\,e^{-ix\cdot\xi}\,dx = i\xi_j\,\hat f(\xi)$$ (what kind of $f$ are we assuming here?), or more succinctly $\widehat{\partial^\alpha f} = (i\xi)^\alpha \hat f$. This is what makes Fourier transform so useful for differential equations: it converts differentiation into multiplication. For example, to find the fundamental solution, we may simply take Fourier transform of both sides of the defining equation $P(\partial)E = \delta$, which, by the property above, becomes $P(i\xi)\,\hat E = 1$. One can "easily" solve it: $\hat E = \frac{1}{P(i\xi)}$, and, taking the inverse Fourier transform, we have an explicit fundamental solution for $P(\partial)$, namely $E = \mathcal{F}^{-1}\left[\frac{1}{P(i\xi)}\right]$. There are two immediate difficulties and one puzzle:
- Is $\frac{1}{P(i\xi)}$ a distribution on $\mathbb{R}^n$? If the polynomial $P(i\xi)$ never vanishes for $\xi \in \mathbb{R}^n$, sure enough. But what about other $P$?
- How is the inverse Fourier transform defined, or computed?
- We know that the fundamental solution is not unique. How did we miss all the other solutions?
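Setting these questions aside for a moment, the key property $\widehat{\partial^\alpha f} = (i\xi)^\alpha \hat f$ itself is easy to check numerically. A sketch (assuming NumPy, with the convention $\hat f(\xi) = \int f(x)\,e^{-ix\xi}\,dx$ and a Gaussian as the test function):

```python
import numpy as np

# Convention: f-hat(xi) = integral of f(x) e^{-i x xi} dx (no 2*pi in front).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)      # a Schwartz function
df = -x * f                # its exact derivative f'(x)

xi = np.linspace(-5.0, 5.0, 101)
E = np.exp(-1j * np.outer(xi, x))   # quadrature kernel e^{-i x xi}
fhat = E @ f * dx                   # f-hat on the xi-grid
dfhat = E @ df * dx                 # (f')-hat on the xi-grid

err = np.max(np.abs(dfhat - 1j * xi * fhat))
print(err)   # differentiation has become multiplication by i*xi
```

The discrepancy `err` is at the level of quadrature round-off, since both sides approximate integrals that agree exactly by integration by parts.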
All these have satisfactory answers in Schwartz's theory. The heart of the matter is that Fourier transform cannot be defined for all distributions, but only for a subspace of tempered distributions that do not grow too fast at infinity (hence the name "tempered"). Here "not too fast" means (at most) polynomial growth, also called "moderate" growth. Thus, all polynomials are (or can be regarded as) tempered distributions, but functions such as $e^x$ are not. This subspace is denoted $\mathcal{S}'(\mathbb{R}^n)$, suggesting that it is formally defined as the (continuous) dual of a certain space $\mathcal{S}(\mathbb{R}^n)$, whose elements are often called Schwartz functions: infinitely differentiable functions on $\mathbb{R}^n$ that are rapidly decreasing at infinity along with all their derivatives (i.e. $x^\beta\,\partial^\alpha \varphi$ is bounded for all multi-indices $\alpha, \beta$); the prototypical example is the "Gaussian" $e^{-|x|^2}$. We shall not go into more details (especially the issue of topology), but turn straight to the definition. One may "derive" the definition by finding the adjoint of the Fourier transform on Schwartz functions: $\int \hat f\,\varphi = \int f\,\hat\varphi$, a consequence of Fubini's theorem.
Implicit in the definition above is that $\hat\varphi \in \mathcal{S}(\mathbb{R}^n)$ for all $\varphi \in \mathcal{S}(\mathbb{R}^n)$.
The Fourier transform of a tempered distribution $u \in \mathcal{S}'(\mathbb{R}^n)$ is a tempered distribution, denoted $\hat u$ or $\mathcal{F}u$, defined by $\langle \hat u, \varphi \rangle = \langle u, \hat\varphi \rangle$, where $\hat\varphi(\xi) = \int \varphi(x)\,e^{-ix\cdot\xi}\,dx$.
The Fourier transform $\mathcal{F} : \mathcal{S}'(\mathbb{R}^n) \to \mathcal{S}'(\mathbb{R}^n)$ is an isomorphism, and the inverse is given by $$(\mathcal{F}^{-1}v)(x) = \frac{1}{(2\pi)^n} \int v(\xi)\,e^{ix\cdot\xi}\,d\xi,$$ interpreted in the sense of distributions.
Exercise: spell out the interpretation.
Ignoring the topology, one can readily check the following (the calculations demonstrate particularly well how easy it is to work with distributions, and the places where more careful justifications may be needed):
1. $\hat{1} = (2\pi)^n\,\delta$;
2. $\hat\delta = 1$;
3. $\widehat{\partial^\alpha u} = (i\xi)^\alpha\,\hat u$, and similarly (3') $\widehat{x^\alpha u} = (i\partial_\xi)^\alpha\,\hat u$.
First, the definition agrees with the classical one when $u = u_f$ with $f \in L^1(\mathbb{R}^n)$, or more pedantically, $\widehat{u_f} = u_{\hat f}$: this is the "multiplication formula" $\int \hat f\,\varphi = \int f\,\hat\varphi$ from above.
For 2, by pairing with (arbitrary) $\varphi \in \mathcal{S}(\mathbb{R}^n)$,
$$\langle \hat\delta, \varphi \rangle = \langle \delta, \hat\varphi \rangle = \hat\varphi(0) = \int \varphi(x)\,dx = \langle 1, \varphi \rangle.$$
The property 3 can now be established for all $u \in \mathcal{S}'(\mathbb{R}^n)$:
$$\langle \widehat{\partial^\alpha u}, \varphi \rangle = (-1)^{|\alpha|}\langle u, \partial^\alpha \hat\varphi \rangle = (-1)^{|\alpha|}\langle u, \widehat{(-ix)^\alpha \varphi} \rangle = (-1)^{|\alpha|}\langle \hat u, (-i\xi)^\alpha \varphi \rangle = \langle (i\xi)^\alpha\,\hat u, \varphi \rangle.$$
A similar calculation shows 3'.
Now for 1, we would like to write $\hat 1(\xi) = \int e^{-ix\cdot\xi}\,dx$, but remark that it is to be interpreted in the sense of distributions, i.e. by "pairing with $\varphi$" and manipulating symbols formally until one arrives at a sensible expression, which is taken to be the definition:
$$\langle \hat 1, \varphi \rangle = \langle 1, \hat\varphi \rangle = \int \hat\varphi(\xi)\,d\xi = (2\pi)^n\,\varphi(0) = \langle (2\pi)^n\,\delta, \varphi \rangle,$$
the third equality being the Fourier inversion formula evaluated at $x = 0$.
From $\hat 1 = 2\pi\,\delta$ (in one dimension), we can deduce physicist's favorite formula $$\int_{-\infty}^{\infty} e^{ix\xi}\,d\xi = 2\pi\,\delta(x).$$ Combined with 3' above, we conclude the Fourier transform of any polynomial $p(x)$, regarded as a tempered distribution, is a derivative of the $\delta$-function at the origin: $\hat p = (2\pi)^n\,p(i\partial_\xi)\,\delta$.
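The pairing computation for $\hat 1 = 2\pi\,\delta$ can be imitated numerically: integrate $\hat\varphi$ over the line and compare with $2\pi\,\varphi(0)$. A sketch assuming NumPy, with the convention $\hat\varphi(\xi) = \int \varphi(x)\,e^{-ix\xi}\,dx$:

```python
import numpy as np

# Check <1-hat, phi> = <1, phi-hat> = integral of phi-hat = 2*pi*phi(0).
x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2)          # Schwartz function with phi(0) = 1

xi = np.linspace(-10.0, 10.0, 1001)
dxi = xi[1] - xi[0]
phat = np.exp(-1j * np.outer(xi, x)) @ phi * dx   # phi-hat on a xi-grid

lhs = (np.sum(phat) * dxi).real  # integral of phi-hat over the line
print(lhs, 2 * np.pi)            # both ~ 6.2832
```

The Gaussian tails outside $[-10, 10]$ are below machine precision, so the two numbers agree to high accuracy.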
Now, let's address the three issues for solving for the fundamental solution by Fourier transform, in the specific case of the Laplacian $\Delta$ on $\mathbb{R}^n$, $n \geq 3$, where $P(i\xi) = -|\xi|^2$.
- $\hat E = -\frac{1}{|\xi|^2}$ is indeed locally integrable for $n \geq 3$ (use polar coordinates); thus it is a distribution on $\mathbb{R}^n$.
- Yes, it is tempered, so its inverse Fourier transform is defined. In fact, $E = \mathcal{F}^{-1}\left[-\frac{1}{|\xi|^2}\right] = -\frac{c_n}{|x|^{n-2}}$ for some constant $c_n > 0$ (for $n = 3$, $c_3 = \frac{1}{4\pi}$, recovering the Newtonian potential).
- Division in the space of distributions is not uniquely defined. In this case, we can add to the distribution $\hat E$ any multiple of $\delta$, or of a (first) partial derivative of $\delta$—these are solutions to the "homogeneous equation" $-|\xi|^2\,\hat u = 0$, namely the Fourier transforms of the harmonic polynomials $a + b \cdot x$ that we could add to $E$.
In some sense, tempered distributions are the right setting for Fourier transform. Here is a list of common (elementary) functions on the real line (regarded as tempered distributions) and their Fourier transforms, with the factors of $2\pi$ tucked away on the right-hand side; it may be more convenient to read the table together with the observations below (here $\delta_a$ denotes the $\delta$-function at $a$):

| $f(x)$ | $\hat f(\xi)$ |
| --- | --- |
| $\delta$ | $1$ |
| $1$ | $2\pi\,\delta$ |
| $e^{iax}$ | $2\pi\,\delta_a$ |
| $\cos ax$ | $\pi(\delta_a + \delta_{-a})$ |
| $\sin ax$ | $-i\pi(\delta_a - \delta_{-a})$ |
| $H$ | $\pi\delta + \operatorname{pv}\dfrac{1}{i\xi}$ |
| $e^{-x^2/2}$ | $\sqrt{2\pi}\,e^{-\xi^2/2}$ |
- A function is periodic if and only if its Fourier transform consists of $\delta$-functions on an integral lattice. This is the case of Fourier series.
- A function is real if and only if its Fourier transform is symmetric under $\xi \mapsto -\xi$ followed by complex conjugation, i.e. $\hat f(-\xi) = \overline{\hat f(\xi)}$.
- Roughly speaking, a function is more "spread-out" if its Fourier transform is more "localized", and vice versa. This is related to the Heisenberg uncertainty principle in quantum mechanics.
Tempered distributions are especially well suited for linear differential operators with polynomial coefficients, whereby $x$ and $\frac{d}{dx}$ are being treated on an equal footing, since $\mathcal{S}$ and $\mathcal{S}'$ are closed under multiplication by polynomials, but not by general functions. As one might expect, many questions take on more of a flavor of algebra. For illustration, the "solution space" in $\mathcal{S}'(\mathbb{R})$ of each of the following operators (in one dimension) can be described, and each is a two-dimensional vector space:
$$\frac{d^2}{dx^2}: \quad a + bx; \qquad x^2: \quad a\,\delta + b\,\delta'; \qquad x\frac{d}{dx} + 1: \quad a\,\delta + b\,\operatorname{pv}\frac{1}{x}.$$
The last one is a description of homogeneous distributions of degree $-1$ on $\mathbb{R}$, where $x_+^{-1}$, $x_-^{-1}$ are defined by
$$\langle x_+^{-1}, \varphi \rangle = \int_0^\infty \frac{\varphi(x) - \varphi(0)\,H(1 - x)}{x}\,dx, \qquad \langle x_-^{-1}, \varphi \rangle = \int_{-\infty}^0 \frac{\varphi(x) - \varphi(0)\,H(1 + x)}{x}\,dx,$$
which agree with the function $1/x$ on $x > 0$ and $x < 0$ respectively. This is Hadamard's partie finie, and the combination $\operatorname{pv}\frac{1}{x} = x_+^{-1} + x_-^{-1}$, i.e. $\lim_{\epsilon \to 0^+} \int_{|x| > \epsilon} \varphi(x)/x\,dx$, is known as the Cauchy principal value. One can give a complete description of the homogeneous distributions of all degrees. It is tempting to note that the dimension of the (distributional) solution space is simply the "degree" of the operator, where $x$ and $\frac{d}{dx}$ both have degree 1. This is yet another way that distributions "complete" functions, in much the same way that complex numbers "complete" real numbers (when solving algebraic equations), or that the projective geometry completes Euclidean geometry (e.g. point-line duality in the plane).