# Hilbert Space

A **Hilbert space** is a vector space $V$ equipped with an *inner product*, which can be thought of as a generalization of the dot product in Euclidean space, with the additional property that the metric coming from the inner product makes $V$ into a complete metric space. The basic example of a Hilbert space is ${\mathbb R}^n$ $\big($or ${\mathbb C}^n\big)$ with the standard dot product, but many other problems and structures in mathematics and physics turn out to be best described by other types of Hilbert spaces, most notably spaces of certain types of functions.


## Definition of Inner Product

An **inner product** on a vector space $V$ over a field $F = {\mathbb R}$ or $\mathbb C$ is a function $\langle \cdot, \cdot \rangle \colon V \times V \to F$ satisfying the following properties:

(1) $\langle x,y \rangle = {\overline{\langle y,x \rangle}}$ for all $x,y\in V.$

(2) It is linear in the first argument: $\langle ax_1+bx_2,y \rangle = a\langle x_1,y \rangle + b\langle x_2,y \rangle$ for all $a,b \in F, x_1,x_2,y \in V.$

(3) For $x\in V,$ the inner product of $x$ with itself is positive definite: $\langle x,x \rangle \ge 0,$ and equality holds if and only if $x = 0.$

Notes:

- (1) implies that $\langle x,x \rangle = \overline{\langle x,x \rangle},$ so $\langle x,x \rangle$ is a real number, so the inequality in (3) makes sense.
- (1) and (2) imply that the inner product is **antilinear** in the second argument: $\langle x, ay_1 + by_2 \rangle = {\overline{a}} \langle x,y_1 \rangle + {\overline{b}} \langle x,y_2 \rangle.$

A vector space with an inner product is called an **inner product space**.

The **norm** of a vector in an inner product space is $\| x \| = \sqrt{\langle x,x \rangle}.$ The **distance** between two elements $x,y$ in an inner product space is defined to be $\| x-y \|.$
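These definitions can be checked concretely for the standard inner product on ${\mathbb C}^n.$ The following sketch (using NumPy; note that `np.vdot` conjugates its *first* argument, so the arguments are swapped) verifies properties (1)–(3) and computes norms numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    """Standard inner product on C^n: <x, y> = sum_k x_k * conj(y_k)."""
    # np.vdot(a, b) conjugates its FIRST argument, so swap the order.
    return np.vdot(y, x)

def norm(x):
    # <x, x> is real and nonnegative by properties (1) and (3).
    return np.sqrt(inner(x, x).real)

def distance(x, y):
    return norm(x - y)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a, b = 2 - 1j, 0.5 + 3j

# (1) conjugate symmetry
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# (2) linearity in the first argument
assert np.isclose(inner(a * x + b * y, y), a * inner(x, y) + b * inner(y, y))
# (3) positive definiteness (and the norm of the zero vector is 0)
assert inner(x, x).real > 0 and np.isclose(norm(np.zeros(3)), 0)
```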

Which of the following are inner products?

I. On ${\mathbb R}^2,$ with vectors $x,y$ written as $2\times 1$ column vectors, define $\langle x,y \rangle = x^T A y,$ where $A = \begin{pmatrix} 0&-1 \\ 2&1 \end{pmatrix}.$

II. Same as in I, but $A = \begin{pmatrix} 1&2 \\ 2&1 \end{pmatrix}.$

III. The **Minkowski product**: on ${\mathbb R}^4,$ with vectors ${\bf v} = (x_1,y_1,z_1,t_1)$ and ${\bf w} = (x_2,y_2,z_2,t_2),$ write
$\langle {\bf v},{\bf w} \rangle = x_1x_2+y_1y_2+z_1z_2-t_1t_2.$
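For forms of the shape $\langle x,y \rangle = x^T A y$ on ${\mathbb R}^n,$ the axioms translate into conditions on $A$: symmetry of the form forces $A = A^T,$ and property (3) forces the eigenvalues of such a symmetric $A$ to be positive. A sketch of a checker along these lines (illustrated on the identity matrix and on an indefinite diagonal matrix, rather than on the matrices above, so as not to give the answers away):

```python
import numpy as np

def is_inner_product(A, tol=1e-12):
    """Check whether <x, y> = x^T A y defines an inner product on R^n."""
    A = np.asarray(A, dtype=float)
    symmetric = np.allclose(A, A.T, atol=tol)       # needed for property (1)
    # For symmetric A, positive definiteness <=> all eigenvalues > 0.
    pos_def = symmetric and bool(np.all(np.linalg.eigvalsh(A) > tol))
    return bool(symmetric and pos_def)              # property (2) is automatic

print(is_inner_product(np.eye(2)))            # True: the ordinary dot product
print(is_inner_product([[1, 0], [0, -1]]))    # False: indefinite
```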

## Metric Space Properties of Inner Product Space

The goal of this section is to show that any inner product space is a metric space. The proof will use the following fundamental theorem:

(Cauchy-Schwarz inequality): $|\langle x,y \rangle| \le \| x \| \| y \|,$ and equality holds if and only if $x,y$ are linearly dependent.

The proof follows the same path as the proof in ${\mathbb R}^n$. First note that the theorem is trivial when $x = 0$ or $y = 0,$ and also when $\langle x,y \rangle = 0,$ so assume $x,y \ne 0$ and $\langle x,y \rangle \ne 0.$ Now let $t = \frac{|\langle x,y \rangle|}{\langle x,y \rangle}.$ Let $u = \frac{x}{\|x\|}, v = \frac{y}{\|y\|}.$ Note that $u$ and $v$ are unit vectors (their norms are both $1$), and $|t|=1.$ Then $\begin{aligned} 0 \le \|tu-v\|^2 = \langle tu-v,tu-v\rangle &= \langle tu,tu\rangle - \langle v,tu\rangle - \langle tu,v \rangle + \langle v,v \rangle \\ &= |t|^2 \| u\|^2 - 2 \text{Re}(\langle tu,v\rangle ) + \|v\|^2 \\ &= 2 - 2\text{Re}(t \langle u,v \rangle) \end{aligned}$ but $\langle u,v \rangle = \dfrac{\langle x,y \rangle}{\|x\|\|y\|},$ so this becomes $0 \le 2-2\text{Re}\left( \frac{|\langle x,y \rangle|}{\|x\|\|y\|} \right)$ and since the quantity in parentheses is real, we can drop the $\text{Re}$: $\begin{aligned} 0 &\le 2-2\frac{|\langle x,y \rangle|}{\|x\|\|y\|} \\ \frac{|\langle x,y \rangle|}{\|x\|\|y\|} &\le 1 \\ |\langle x,y \rangle| &\le \|x\|\|y\|. \end{aligned}$ Equality holds if and only if $tu=v,$ which certainly implies that $x,y$ are linearly dependent; conversely, it is easy to check that equality holds if $x,y$ are dependent. $_\square$
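A quick numerical sanity check of the inequality and its equality case, sketched over random complex vectors with the standard inner product on ${\mathbb C}^n$:

```python
import numpy as np

rng = np.random.default_rng(1)

def inner(x, y):
    return np.vdot(y, x)  # <x, y> = sum_k x_k * conj(y_k)

def norm(x):
    return np.sqrt(inner(x, x).real)

# |<x, y>| <= ||x|| ||y|| on many random pairs:
for _ in range(1000):
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12

# Equality when x, y are linearly dependent, e.g. y = c x:
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = (2 - 3j) * x
assert np.isclose(abs(inner(x, y)), norm(x) * norm(y))
```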

The distance function $d(x,y) = \| x-y\|$ is a metric.

First, $d(x,y) = \|x-y\|$ is nonnegative by definition of the norm $($which makes sense because $\langle x-y,x-y\rangle$ is nonnegative$).$ And it is $0$ if and only if $\langle x-y,x-y\rangle = 0,$ which happens if and only if $x-y=0,$ or $x=y,$ by property (3) of the inner product.

Second, $d(x,y) = d(y,x)$ is immediately clear; in general $\| tu \| = |t| \|u\|$ for $t\in F$ and $u\in V,$ so $\|x-y\| = \|(-1)(y-x) \| = |-1| \|y-x\| = \|y-x\|.$

The triangle inequality is where Cauchy-Schwarz comes in: let $u = x-y,$ $v = y-z.$ Then $u+v = x-z.$ Now $\begin{aligned} (\|u\|+\|v\|)^2-\|u+v\|^2 &= \|u\|^2 + \|v\|^2 + 2\|u\|\|v\| - \langle u+v, u+v \rangle \\ &= \|u\|^2 + \|v\|^2 + 2\|u\|\|v\| - \|u\|^2 - \|v\|^2 - 2\text{Re}(\langle u,v \rangle) \\ &= 2\big(\|u\|\|v\| - \text{Re}(\langle u,v \rangle)\big) \\ &\ge 0 \end{aligned}$ by Cauchy-Schwarz. Note that equality holds for nonzero $u,v$ if and only if $u$ is a positive real multiple of $v.$

So $\|u\|+\|v\| \ge \|u+v\|;$ and substituting back gives $d(x,y)+d(y,z) \ge d(x,z).$ $_\square$
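The metric axioms can likewise be spot-checked numerically. A sketch, again with the standard inner product on ${\mathbb C}^n$:

```python
import numpy as np

rng = np.random.default_rng(2)

def dist(x, y):
    d = x - y
    return np.sqrt(np.vdot(d, d).real)  # ||x - y||

for _ in range(1000):
    x, y, z = (rng.standard_normal(4) + 1j * rng.standard_normal(4)
               for _ in range(3))
    # Triangle inequality: d(x, z) <= d(x, y) + d(y, z)
    assert dist(x, z) <= dist(x, y) + dist(y, z) + 1e-12
    # Symmetry: d(x, y) = d(y, x)
    assert np.isclose(dist(x, y), dist(y, x))
```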

## Example of Hilbert Space - Lebesgue Spaces

Let $L^2({\mathbb R})$ be the set of functions $f\colon {\mathbb R} \to {\mathbb C}$ such that $\int_{-\infty}^{\infty} |f(x)|^2 \, dx$ exists and is finite. Then there is an inner product defined on $L^2({\mathbb R})$ by $\langle f,g \rangle = \int_{-\infty}^{\infty} f(x){\overline{g(x)}} \, dx.$ $\big($So the $L^2$ requirement is precisely that $\|f\|$ is finite.$\big)$ It turns out that if the integral is the Lebesgue integral, then "enough" functions are integrable so that Cauchy sequences of functions will converge. So this makes $L^2({\mathbb R})$ into a Hilbert space.

Similarly, $L^2\big([0,1]\big)$ is a Hilbert space, where the definition is the same except that functions and integrals are taken over the interval $[0,1].$

The alert reader will have noted an apparent inaccuracy in the above discussion; it is possible for a nonzero function $f$ to have $L^2$-norm equal to 0, i.e. if $f$ is zero "almost everywhere," except on a set of measure zero. The solution is to let $L^2(X)$ consist of equivalence classes of functions, rather than functions, where two functions $f$ and $g$ are considered equivalent if $f-g$ is zero almost everywhere. This is an important detail to keep in mind, especially since it is (unfortunately) often suppressed in discussions of Hilbert spaces for convenience's sake.
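As a concrete sketch, the $L^2({\mathbb R})$ inner product of two Gaussians can be approximated by a Riemann sum and compared with the exact values $\int_{-\infty}^\infty e^{-3x^2/2}\,dx = \sqrt{2\pi/3}$ and $\int_{-\infty}^\infty e^{-2x^2}\,dx = \sqrt{\pi/2}$:

```python
import numpy as np

# Grid fine and wide enough for a simple Riemann-sum approximation.
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]

f = np.exp(-x**2)         # f(x) = e^{-x^2}
g = np.exp(-x**2 / 2)     # g(x) = e^{-x^2/2}

inner_fg = np.sum(f * np.conj(g)) * dx   # <f, g> = ∫ f(x) conj(g(x)) dx
norm_f_sq = np.sum(np.abs(f)**2) * dx    # ||f||^2 = ∫ |f(x)|^2 dx

assert np.isclose(inner_fg, np.sqrt(2 * np.pi / 3), atol=1e-6)
assert np.isclose(norm_f_sq, np.sqrt(np.pi / 2), atol=1e-6)
```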

## Example of Hilbert Space - Sequence Spaces

The discrete version of the previous example is as follows: define $\ell^2$ to be the set of sequences $(z_1,z_2,\ldots)$ of complex numbers such that $\sum_{n=1}^\infty |z_n|^2$ exists and is finite. Then there is an inner product on $\ell^2$ given by $\big\langle (z_n),(w_n)\big\rangle = \sum_{n=1}^\infty z_n{\overline{w_n}}.$ It is a standard exercise to show that $\ell^2$ is in fact a Hilbert space (i.e. that it is complete with respect to the metric induced by this inner product).
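For instance, the sequence $z_n = 1/n$ lies in $\ell^2,$ and $\big\langle (1/n),(1/n)\big\rangle = \sum_{n=1}^\infty 1/n^2 = \pi^2/6.$ A sketch approximating this inner product by partial sums:

```python
import numpy as np

n = np.arange(1, 1_000_001)
z = 1.0 / n                       # z_n = 1/n is square-summable

# Partial sum of <z, z> = sum |z_n|^2, truncated at N = 10^6 terms.
partial = np.sum(z * np.conj(z)).real

# The tail sum_{n>N} 1/n^2 is about 1/N, so the partial sum is close:
assert abs(partial - np.pi**2 / 6) < 2e-6
```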

## Orthonormal Bases in Hilbert Spaces

In the Hilbert space $\ell^2,$ let ${\bf e_k}$ be the sequence with all $0$'s except for a $1$ in the $k^\text{th}$ term. Then any sequence ${\bf z} = (z_n)$ can be written as an infinite sum ${\bf z} = \sum_{n=1}^\infty z_n {\bf e_n}.$ That is, the infinite sum on the right converges to ${\bf z},$ using the metric defined by the $\ell^2$-norm given in the previous section. Note that this representation is unique.
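The convergence claim can be sketched directly: the truncation error $\big\|{\bf z} - \sum_{n \le N} z_n {\bf e_n}\big\|$ equals the tail norm $\big(\sum_{n > N} |z_n|^2\big)^{1/2},$ which tends to $0$ as $N \to \infty.$ For the square-summable sequence $z_n = 1/n$:

```python
import numpy as np

N_total = 100_000
z = 1.0 / np.arange(1, N_total + 1)   # finite proxy for z_n = 1/n in l^2

def tail_norm(N):
    # ||z - sum_{n<=N} z_n e_n|| = sqrt(sum_{n>N} |z_n|^2)
    return np.sqrt(np.sum(np.abs(z[N:])**2))

errors = [tail_norm(N) for N in (10, 100, 1000, 10000)]
# The truncation error decreases toward 0 (roughly like 1/sqrt(N)):
assert all(a > b for a, b in zip(errors, errors[1:]))
assert errors[-1] < 0.01
```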

There is an obvious analogy with a basis of a vector space, which is a set of elements of the vector space such that every element can be written uniquely as a linear combination of *finitely many* members of the basis. For instance, the same construction gives a basis ${\bf e_1}, \ldots, {\bf e_n}$ of ${\mathbb C}^n.$

Since Hilbert spaces are vector spaces, they have regular vector space bases (by the axiom of choice). To avoid ambiguity, these are often referred to as *Hamel bases.* Hamel bases of Hilbert spaces are generally useless for computation and difficult to construct, unless they are finite $\big(\text{e.g. } V= {\mathbb C}^n\big).$ For "larger" Hilbert spaces, the more natural notion is that of an *orthonormal basis*, which generalizes the example given above.

A set $(y_n)$ of vectors in a Hilbert space is **orthonormal** if
$\langle y_i,y_j \rangle = \begin{cases} 1&\text{if } i=j \\ 0&\text{if } i \ne j. \end{cases}$

It is an **orthonormal basis** if, in addition, the only vector $x$ satisfying $\langle x,y_n \rangle = 0$ for all $n$ is the zero vector. (Equivalently, the span of the $y_n$ is dense.)

Note that an orthonormal basis is not necessarily a Hamel basis. For instance, the orthonormal basis $({\bf e_n})$ of $\ell^2$ is not a Hamel basis, since expressing an arbitrary element of $\ell^2$ as a linear combination of basis vectors requires an infinite (convergent) sum.

As in the motivating example, countable orthonormal bases have various nice properties.

Let $(x_n)_{n \in {\mathbb N}}$ be an orthonormal basis for a Hilbert space $X.$ Then

(1) $x = \sum\limits_{n=1}^\infty \langle x,x_n \rangle x_n$ for all $x \in X$; in particular, every $x$ has a unique representation as a (possibly infinite) linear combination of the $x_n$, with coefficients given by this formula.

(2) $\| x\|^2 = \sum\limits_{n=1}^\infty |\langle x,x_n \rangle|^2$ (**Parseval's identity**).

(3) $\langle x,y \rangle = \sum\limits_{n=1}^\infty \langle x,x_n \rangle {\overline{\langle y,x_n\rangle}}.$

The space $L^2\big([0,1]\big)$ has an orthonormal basis consisting of the functions $e^{2\pi i n \theta},$ for all integers $n.$ The coefficients in the sum $f(\theta) = \sum\limits_{n=-\infty}^\infty a_n e^{2\pi i n \theta}$ are called the Fourier coefficients of $f.$
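As a worked sketch, take $f(\theta) = \theta$ on $[0,1].$ A short computation gives $a_0 = \tfrac12$ and $a_n = \langle f, e^{2\pi i n \theta} \rangle = \tfrac{i}{2\pi n}$ for $n \ne 0,$ and Parseval's identity then reads $\sum_n |a_n|^2 = \int_0^1 \theta^2 \, d\theta = \tfrac13.$ The following approximates the coefficients by Riemann sums and checks this numerically:

```python
import numpy as np

N = 20000                    # grid points on [0, 1)
theta = np.arange(N) / N
f = theta                    # f(theta) = theta

def coeff(n):
    # a_n = <f, e_n> = ∫_0^1 f(θ) e^{-2πinθ} dθ  (Riemann sum)
    return np.sum(f * np.exp(-2j * np.pi * n * theta)) / N

# Compare with the exact coefficients a_0 = 1/2 and a_n = i/(2πn):
assert abs(coeff(0) - 0.5) < 1e-3
assert abs(coeff(1) - 1j / (2 * np.pi)) < 1e-3

# Parseval: the sum over |n| <= 50 should approach ||f||^2 = 1/3.
parseval = sum(abs(coeff(n))**2 for n in range(-50, 51))
assert abs(parseval - 1/3) < 5e-3
```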

The axiom of choice implies that every Hilbert space has an orthonormal basis.

## Applications of Hilbert Spaces

As mentioned in the previous section, the $L^2$ spaces are the settings for Fourier transforms and Fourier series. Hilbert spaces also arise naturally in quantum mechanics, where the set of possible states of a particle is a complex Hilbert space called the **state space**.

Other examples of Hilbert spaces in mathematics include Sobolev spaces, which are settings for computations in partial differential equations and the calculus of variations.