Millennium Prize Problems
The Millennium Prize Problems are seven of the best-known and most important unsolved problems in mathematics. In 2000, the Clay Mathematics Institute, a private nonprofit foundation devoted to mathematical research, famously challenged the mathematical community to solve these seven problems, establishing a US$1,000,000 prize for the solver of each. One of the seven problems has been solved, and the other six are the subject of a great deal of current research.
The timing of the announcement of the Millennium Prize Problems at the turn of the century was an homage to a famous speech of David Hilbert to the International Congress of Mathematicians in Paris in 1900. The 23 unsolved problems posed by Hilbert were studied by countless \(20^\text{th}\)-century mathematicians, and this study led not only to solutions of some of the problems, but also to the development of new ideas and new research topics. Some of Hilbert's problems remain open--indeed, the most famous of Hilbert's problems, the Riemann hypothesis, is one of the seven Millennium Prize Problems as well.
The problems encompass a diverse group of topics, including theoretical computer science and physics, as well as pure mathematical areas such as number theory, algebraic geometry, and topology.
Poincaré Conjecture
The only Millennium Problem that has been solved to date is the Poincaré conjecture, a problem posed in 1904 about the topology of objects called manifolds.
A manifold of dimension \(n\) is a geometric object equipped with a topological structure such that every point has a neighborhood that looks like (is homeomorphic to) \(n\)-dimensional Euclidean space. The standard example is a sphere, the surface of a ball embedded in three-dimensional space. An ant on the surface of a sphere thinks that it is standing on flat ground, as the curvature of the sphere is not observable locally. So a sphere is a \(2\)-manifold; the flat ground looks like \(2\)-dimensional Euclidean space.
Another example of a \(2\)-manifold is a (one-holed) torus.
Two manifolds are considered to be different if one cannot be continuously deformed into the other. One way to see that the torus is different from the \(2\)-sphere is that loops on the sphere can all be contracted on the sphere to a point (imagine a rubber band on the surface of a sphere--it can be pulled to the top of the sphere without breaking the band or leaving the sphere), but some loops on a torus cannot (e.g. a loop that goes around the central hole of the torus, or one that goes around its tube, cannot be contracted without leaving the surface).
A fundamental question in the theory of manifolds is the classification problem: is there a way to characterize when two manifolds are the same, without having to explicitly write down the map that identifies them? That is, is there a set of properties such that any two manifolds that share all these properties must be the same?
The Poincaré conjecture states that any closed (boundaryless) \( n\)-manifold which is homotopy equivalent to the \(n\)-sphere must be the \(n\)-sphere. (Homotopy equivalence is a notion that is strictly weaker than being the same, in general.) This is relatively easy for \(n=1,2.\) It was proved for \( n\ge 5\) by Stephen Smale in the 1960s, and for \( n=4 \) by Michael Freedman in 1982. Both mathematicians were awarded Fields Medals, the highest honor a mathematician can receive.
The case \(n=3\) is equivalent to the following statement:
Any simply connected, closed 3-manifold is the same as the 3-sphere.
Here simply connected means intuitively that the manifold has no holes: every loop in the manifold can be contracted to a point. As \(n=3\) was the only case left to be proved, this was the statement of the Poincaré conjecture when it was posed as a Millennium Problem.
The conjecture was proved in 2003 by the Russian mathematician Grigori Perelman, building on ideas of Richard Hamilton from the early 1980s. Hamilton suggested attacking the problem using the Ricci flow, a process that deforms the geometry of a manifold over time in a way analogous to heat diffusion, and demonstrated its efficacy by proving special cases of Poincaré's conjecture. Perelman announced his solution of the problem in a series of papers in 2002 and 2003. Peer review confirmed that his proof was correct, and in 2006 he was offered the Fields Medal for his work.
Perelman turned down the Fields Medal and also refused to accept the Clay Millennium Prize when it was officially offered to him in 2010, saying that his contributions were no more significant than Hamilton's. His work is by all accounts quite original and accomplished; despite his apparent modesty and shunning of the spotlight, his proof of the Poincaré conjecture will be famous for a very long time.
P vs. NP
Main article: P vs. NP
The problem of determining whether \({\mathbf P} = \mathbf{NP}\) is the most important open problem in theoretical computer science. The question asks whether computational problems whose solutions can be verified quickly can also be solved quickly. Most experts in the field believe that this is not true in general, i.e. \( {\mathbf P}\ne \mathbf{NP},\) but there has been very little progress toward a proof.
The class \( \mathbf P\) consists of the problems for which a solution can be found in polynomial time. That is, the problem depends on a positive integer \(n \) representing the size of the input (more formally, the information in the problem can be encoded as a string of length \( n\)), and it is in \( \mathbf P\) if there is an algorithm that takes the problem as input and returns a valid solution, such that the running time of the algorithm is less than \( cn^k \) for some positive numbers \( c,k\) independent of \( n.\)
The problem of computing the greatest common divisor of two integers \( a,b\) is in \( \mathbf P;\) in fact the Euclidean algorithm uses at most \( 5n\) division steps, where \(n\) is the number of decimal digits of the smaller of the two integers (a classical result known as Lamé's theorem).
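As an illustration, here is a minimal Python sketch of the Euclidean algorithm (the function name is ours):

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm.

    The number of loop iterations is at most about 5 times the number
    of decimal digits of the smaller argument (Lame's theorem), so the
    running time is polynomial in the size of the input.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21, after three division steps
```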
Note that the constants \( c\) and \(k\) in the definition of polynomial time given above might be forbiddingly large in practice. For instance, the primality testing problem was shown in 2002 to be in \( {\mathbf P}\); the proof exhibited an explicit algorithm (now known as the AKS primality test), but that algorithm is not the fastest for practical purposes.
The class \( \mathbf{NP}\) consists of the problems for which a solution can be verified in polynomial time. That is, the problem depends on a positive integer \(n \) representing the size of the input (more formally, the information in the problem and the prospective solution can be encoded as a string of length \( n\)), and it is in \( \mathbf{NP}\) if there is an algorithm that takes the problem and a prospective solution as input and returns "yes" or "no" according to whether the prospective solution is in fact a solution to the problem, such that the running time of the algorithm is less than \( cn^k\) for some positive numbers \( c,k\) independent of \( n.\)
The problem of determining whether there is a Hamiltonian path on a given graph is in \( \mathbf{NP}.\) Indeed, it is quite easy to check whether a particular path on a graph is Hamiltonian: simply check whether it passes through each vertex exactly once. However, the problem of finding a Hamiltonian path is (conjecturally) much harder. Even the problem of determining whether a Hamiltonian path exists belongs to the class of \( \mathbf{NP}\)-complete problems; that is, any problem in \( \mathbf{NP}\) can be reduced in polynomial time to the Hamiltonian path problem. So if the Hamiltonian path problem is in \( \mathbf P,\) it follows that \( \mathbf{P}=\mathbf{NP}.\) An extension of the Hamiltonian path problem is the famous traveling salesperson problem.
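To make the verification step concrete, here is a minimal Python sketch of a polynomial-time verifier for the Hamiltonian path problem (the graph representation and function name are our own choices for illustration):

```python
def is_hamiltonian_path(graph, path):
    """Verify in polynomial time that `path` is a Hamiltonian path.

    `graph` maps each vertex to the set of its neighbors; `path` is a
    list of vertices. The check costs O(n) set lookups, so a prospective
    solution is verified quickly even though *finding* such a path is
    conjecturally hard.
    """
    # The path must visit every vertex exactly once ...
    if len(path) != len(graph) or set(path) != set(graph):
        return False
    # ... and consecutive vertices must be joined by an edge.
    return all(v in graph[u] for u, v in zip(path, path[1:]))

# A square with one diagonal: edges 0-1, 1-2, 2-3, 3-0, 0-2.
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(is_hamiltonian_path(graph, [1, 0, 3, 2]))  # True
print(is_hamiltonian_path(graph, [0, 1, 2, 0]))  # False: repeats 0, misses 3
```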
A proof that \( {\mathbf P} = {\mathbf{NP}} \) would have far-reaching implications, as it would show that many problems thought to be hard, including problems on which many cryptosystems are based, can be solved in polynomial time. Many problems in theoretical mathematics are in \({\mathbf{NP}}\) as well, so \( {\mathbf P} = {\mathbf{NP}} \) would imply that they could be proved or disproved "mechanically" in polynomial time. This does not necessarily mean that such solutions would be practical; in fact, a proof that \( {\mathbf P} = {\mathbf{NP}} \) might be non-constructive, i.e. it might establish that these problems can be solved in polynomial time without giving any indication of how to construct an explicit algorithm that accomplishes this.
Hodge Conjecture
The Hodge conjecture is a statement about geometric shapes cut out by polynomial equations over the complex numbers. These are called complex algebraic varieties. An extremely useful tool in the study of these varieties is the construction of groups called cohomology groups, which encode information about the structure of the varieties. The groups are constructed quite abstractly, but have many useful properties: for instance, a map between varieties induces maps between their cohomology groups. Since computations with groups are often more straightforward than computations with varieties, this gives a way to classify and study properties of complex algebraic varieties.
Some elements of these cohomology groups can be written down explicitly from geometric information about the variety, in particular from its subvarieties.
The unit sphere \(x^2+y^2+z^2=1\) in complex 3-space contains a curve cut out by \( z=0,\) namely the unit circle \( x^2+y^2=1\) in the \(xy\)-plane. This is the equator of the sphere, and is a subvariety.
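This containment can be checked symbolically; here is a trivial sketch using Python's sympy library (the variable names are ours):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
sphere = x**2 + y**2 + z**2 - 1  # defining polynomial of the sphere
circle = x**2 + y**2 - 1         # defining polynomial of the unit circle

# Restricting the sphere's equation to the plane z = 0 recovers exactly
# the circle's equation, so the equator is cut out by z = 0 on the sphere.
assert sp.simplify(sphere.subs(z, 0) - circle) == 0
```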
The Hodge conjecture states that certain cohomology groups studied by Hodge over certain nice complex varieties are generated by the classes of subvarieties. The cohomology groups in question are often called the groups of Hodge classes, and classes generated by subvarieties are often called algebraic. So in these terms, the conjecture becomes
Every Hodge class on a projective complex manifold is algebraic.
The conjecture was formulated by Hodge in 1950. It is known for varieties of dimension \( \le 3,\) and certain other special cases are known. A successful proof would give a useful indication of the interplay between algebra and geometry. Correspondences between geometric structures (varieties) and algebraic structures (groups) often yield very powerful results: for another example of this phenomenon, see Wiles' proof of Fermat's last theorem, which used the Taniyama-Shimura conjecture relating elliptic curves to modular forms.
Riemann Hypothesis
The Riemann hypothesis is perhaps the most famous unsolved problem in mathematics. It concerns the nontrivial zeroes of the Riemann zeta function, which is defined for \( \text{Re } s > 1 \) by the infinite sum \[ \zeta(s) = \sum_{n=1}^\infty \frac1{n^s}. \] The function \( \zeta \) can be analytically continued to a function that is defined and differentiable everywhere in the complex plane, except for a simple pole at \( s=1.\) This function has trivial zeroes on the negative real line, at \( s=-2,-4,-6,\ldots.\) The location of its other zeroes is more mysterious; the conjecture is that
The nontrivial zeroes of the zeta function lie on the line \( \text{Re }s=\frac12.\)
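A numerical illustration, assuming Python's mpmath library (which implements the analytic continuation of \( \zeta\)):

```python
from mpmath import mp, zeta, pi

mp.dps = 25  # working precision in decimal places

# For Re(s) > 1 the Dirichlet series converges; e.g. zeta(2) = pi^2/6:
print(zeta(2))      # 1.6449340668...
print(pi**2 / 6)    # the same value

# mpmath evaluates the analytic continuation, so zeta makes sense
# outside the region where the series converges:
print(zeta(-1))     # -1/12
print(zeta(-2))     # 0: a trivial zero on the negative real line
```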
The beauty of the Riemann hypothesis is that it has strong implications for the distribution of primes. In particular, it implies strong bounds on the error term in the prime number theorem, as well as many other results in number theory. For instance, the Riemann hypothesis is equivalent to each of the following three statements (a numerical check of the first is sketched after the list):
(1) \( \sigma(n) < e^{\gamma} n \log \log n\) for all \( n > 5040,\) where \( \sigma(n) \) is the sum of divisors of \(n\) and \( \gamma\) is the Euler-Mascheroni constant.
(2) \( \sigma(n) < H_n + e^{H_n} \log(H_n)\) for all \( n \ge 2,\) where \( H_n\) is the \(n^\text{th}\) harmonic number.
(3) \( \sum\limits_{n\le x} \mu(n) = O\big(x^{\frac12 + \epsilon}\big)\) for any \( \epsilon > 0,\) where \(\mu\) is the Möbius function. (See the wiki on big O notation for an explanation of the right side of the equation.)
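Here is the promised small numerical check of statement (1) in Python; a finite verification like this is supporting evidence, not a proof (the function and constant names are ours):

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma(n):
    """Sum of the divisors of n, by O(sqrt(n)) trial division."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d != n // d else 0)
        d += 1
    return total

# Statement (1) (Robin's criterion): under RH, sigma(n) < e^gamma n log log n
# for every n > 5040. Check a finite range:
assert all(sigma(n) < exp(GAMMA) * n * log(log(n)) for n in range(5041, 20001))
print("statement (1) holds for all 5040 < n <= 20000")
```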
The generalized Riemann hypothesis is a statement about the zeroes of certain functions known as \( L\)-functions, defined by Dirichlet series, which are generalizations of the Riemann zeta function. The generalized Riemann hypothesis can be used to prove many open questions in number theory, including Artin's conjecture on primitive roots and the so-called weak Goldbach conjecture that every odd number greater than 5 is the sum of three primes.
There are some known results about the nontrivial zeroes: they all lie in the critical strip \( 0 < \text{Re } s < 1;\) infinitely many of them lie on the critical line \( \text{Re } s = \frac12;\) and the first \( 10^{13} \) nontrivial zeroes, ordered by the size of their imaginary parts, all lie on the critical line. The Riemann hypothesis itself still appears to be quite difficult to attack in any meaningful way.
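A small-scale version of this verification can be reproduced with mpmath's zetazero routine (the computations of the first \( 10^{13}\) zeroes use far more sophisticated methods):

```python
from mpmath import mp, zeta, zetazero

mp.dps = 20

# zetazero(n) locates the nth nontrivial zero, ordered by imaginary part.
# Each lies on the critical line Re(s) = 1/2, and zeta vanishes there:
for n in range(1, 6):
    rho = zetazero(n)
    print(n, rho, abs(zeta(rho)))  # |zeta(rho)| is numerically 0
```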
Yang-Mills Existence and Mass Gap
A Yang-Mills theory in quantum physics is a generalization of Maxwell's work on electromagnetic forces to the strong and weak nuclear forces. It is a key ingredient in the so-called Standard Model of particle physics. The Standard Model provides a framework for explaining electromagnetic and nuclear forces and classifying subatomic particles. It has so far proved to be consistent with experimental evidence, but questions remain about its internal consistency.
In particular, successful applications of the theory to experiments and simplified models have involved a "mass gap," which is formally defined as the difference between the energy of the vacuum state and the energy of the next lowest state. So this quantity is the mass of the lightest particle predicted by the theory. A solution of the Millennium Problem must include both a set of formal axioms that characterize the theory and show that it is internally logically consistent, and a proof that there is some strictly positive lower bound on the masses of the particles the theory predicts.
Generally speaking, the current state of the problem is that researchers are successfully obtaining results consistent with experimental evidence by using ideas and models that come from Yang-Mills theory, but there is no rigorous, axiomatized theory that coherently explains the experimental data and successfully predicts results about nuclear forces. There is no compelling reason to believe that the problem will be solved soon, but it is of great interest to the physics and mathematics communities at large, and it will be the subject of extensive research in the coming decades.
Birch and Swinnerton-Dyer Conjecture
The Birch and Swinnerton-Dyer conjecture concerns the rational points (points with all coordinates rational numbers) on elliptic curves. Elliptic curves are, from a Diophantine perspective, the most interesting curves by far. Associated to every plane curve is a nonnegative integer called the genus. Genus-0 curves are well understood, and their points are easily parameterized. Curves of genus \( \ge 2 \) have only finitely many rational points, by an extremely deep theorem proved by Faltings in the 1980s. Curves of genus 1 with a rational point are precisely the elliptic curves, which have a myriad of applications and a very interesting structure on their sets of rational points. See the elliptic curves wiki for details.
In particular, a classical theorem of Mordell states that, given an elliptic curve \( E,\) there is a nonnegative integer \( n\) and a set of rational points \( P_1,\ldots,P_n\) on \( E\) such that every rational point on \(E\) can be written uniquely as an integer linear combination of the \( P_i \) plus a torsion point \( T.\) The torsion points are the points of finite order, and there are finitely many of them. Here the linear combination involves the group law on the elliptic curve, which is nontrivial to write down explicitly (but note that it is not the same thing as coordinate-wise addition). The integer \( n\) is called the rank of \( E,\) and half of the Birch and Swinnerton-Dyer conjecture concerns the computation of that rank.
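As a concrete illustration of the group law, here is a minimal Python sketch of the chord-and-tangent addition formulas, on the example curve \( y^2 = x^3 + 17\) (our choice of curve; the sketch omits the identity element and the inverse case):

```python
from fractions import Fraction

A, B = 0, 17  # the curve y^2 = x^3 + A*x + B, here y^2 = x^3 + 17

def on_curve(P):
    x, y = P
    return y * y == x**3 + A * x + B

def add(P, Q):
    """Chord-and-tangent addition of affine points (ignoring the
    identity element and the case Q = -P, for brevity)."""
    (x1, y1), (x2, y2) = P, Q
    # Slope of the tangent at P, or of the chord through P and Q:
    lam = (3 * x1 * x1 + A) / (2 * y1) if P == Q else (y2 - y1) / (x2 - x1)
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

P = (Fraction(2), Fraction(5))
Q = (Fraction(-1), Fraction(4))
assert on_curve(P) and on_curve(Q)

print(add(P, Q))  # (-8/9, -109/27): a new rational point on the curve
print(add(P, P))  # (-64/25, 59/125): doubling P also stays on the curve
assert on_curve(add(P, Q)) and on_curve(add(P, P))
```

On a curve of positive rank, iterating this addition produces infinitely many rational points.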
There is a function \( L(E,s)\) defined by a certain Dirichlet series, which is similar to the Riemann zeta function and, like it, can be analytically continued beyond the region where the series converges. The order of vanishing of \( L(E,s) \) at \( s=1\) is called the analytic rank of \( E,\) and the first half of the Birch and Swinnerton-Dyer conjecture is that
The rank of \( E\) equals its analytic rank.
The second half of the conjecture is more technical; it involves the coefficient of \( (s-1)^r \) in the Taylor series for \( L(E,s)\) around \( s=1,\) where \( r\) is the analytic rank. This coefficient is conjecturally equal to an expression involving products and quotients of several fundamental constants attached to the elliptic curve (for instance, one of them is the number of torsion points).
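The Dirichlet series defining \( L(E,s)\) is built from the number of points on \( E\) modulo each prime; a brute-force Python sketch (again on the illustrative curve \( y^2 = x^3 + 17\)) computes the first few coefficients \( a_p:\)

```python
def a_p(p, A=0, B=17):
    """Compute a_p = p + 1 - #E(F_p) for the curve y^2 = x^3 + A*x + B.

    #E(F_p) is the number of solutions mod p plus the point at infinity.
    This O(p^2) brute-force count is purely for illustration.
    """
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x**3 + A * x + B)) % p == 0)
    return p + 1 - (affine + 1)

# a_p for a few small primes of good reduction:
for p in [5, 7, 11, 13, 19, 23]:
    print(p, a_p(p))
```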
The first half of the conjecture has been proved in the case when the analytic rank is \( 0 \) or \( 1.\) The second half has been proved for certain special classes of elliptic curves with analytic rank \( 0.\) There is quite a lot of computational evidence for the conjecture (some of which dates back to computer computations done by Birch and Swinnerton-Dyer in the 1960s), but there is not very much progress toward a general proof. Establishing the conjecture would help with theoretical results about the structure of points on elliptic curves, as well as practical applications including finding generators \( P_1,\ldots,P_n\) of the set of rational points.