By now you should be aware that QM is very mathematical. As the mathematical complexity of physics grows, complicated equations demand economical representations. Over the past century, mathematicians devoted great labour to organizing and abstracting functions and algebraic structures, and physicists struggling with frightening equations began to take advantage of the new machinery. Not surprisingly, the founders of QM did just that: much of the formalism of QM is built around functional analysis and linear operators. I will not go into too much detail on either; instead, I will explain why functional analysis and linear operators are so important in quantum theory.

**Linear Operators in Hilbert Space**

The space that wavefunctions inhabit is determined by the four properties of wavefunctions outlined in Lecture 3. This is the Hilbert space. The Hilbert space is remarkable for many reasons, particularly because it generalizes the dot product (the inner product) to arbitrarily many dimensions (up to infinitely many). For technical reasons, a Hilbert space is also called a complete inner product space (here "complete" means that every Cauchy sequence of vectors converges to a limit inside the space). In QM, we need to know the following properties. I will write everything in Dirac notation (also known as bra-ket notation), which is a quick and tidy way of expressing inner products.

1) The inner product of two vectors $\left|a\right>=\begin{pmatrix} {a}_{1} \\ {a}_{2} \\ \vdots \\ {a}_{n} \end{pmatrix}$ and $\left|b\right>=\begin{pmatrix} {b}_{1} \\ {b}_{2} \\ \vdots \\ {b}_{n} \end{pmatrix}$ is $\left<a|b\right> = \sum_{i=1}^{n} {a}_{i}^{*}{b}_{i},$

where $\left<a\right|= \left({ a }_{ 1 }^{ * }\;{ a }_{ 2 }^{ * }\;\cdots\;{ a }_{ n }^{ * }\right)$ is the adjoint (complex-conjugate transpose) of the vector $\left|a\right>$.

**Note**: In mathematics, the adjoint of a matrix is often denoted ${A}^{H}$, but physicists find it much cooler to use the dagger ${A}^{\dagger}$. I should add that ${A}^{\dagger} = {({A}^{*})}^{T} ={({A}^{T})}^{*}$, where $*$ represents taking the complex conjugate of each entry and $T$ represents the transpose of a matrix. You can't transpose a function, so ${\Psi}^{\dagger} = {\Psi}^{*}$ at the end of the day.
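As a quick check in code (a toy NumPy matrix of my own, not from the lecture), the two orders of conjugating and transposing give the same dagger:

```python
import numpy as np

A = np.array([[1 + 2j, 3],
              [4j,     5 - 1j]])

# The dagger (conjugate transpose): both orders give the same matrix.
A_dag1 = A.conj().T   # (A*)^T
A_dag2 = A.T.conj()   # (A^T)*
print(np.allclose(A_dag1, A_dag2))  # True
```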

Similarly, the inner product for two functions $f(x), g(x)$ over the interval $[a,b]$ is

$\left<f|g\right> = \int_{a}^{b} {f}^{*}(x)g(x)dx.$
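Both forms of the inner product are easy to check numerically. The sketch below (my own toy vectors and functions) evaluates the vector sum with `np.vdot`, which conjugates its first argument, and approximates the integral by a Riemann sum on a grid:

```python
import numpy as np

# Vector inner product <a|b> = sum_i a_i* b_i
ket_a = np.array([1 + 1j, 2, 1j])
ket_b = np.array([0, 1, 1 - 1j])
inner_vec = np.vdot(ket_a, ket_b)   # np.vdot conjugates its first argument
print(inner_vec)                    # (1-1j)

# Function inner product <f|g> = ∫ f*(x) g(x) dx on [0, 2π]:
# sin and cos are orthogonal over a full period.
x = np.linspace(0, 2 * np.pi, 20001)
dx = x[1] - x[0]
inner_fun = np.sum(np.sin(x) * np.cos(x)) * dx
print(abs(inner_fun) < 1e-10)       # True
```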

2) A linear transformation is given by the matrix product $T \left|a \right>=Ta$, where $T$ is a square matrix acting on the column vector $a$.

3) A function is normalized if $\left<f|f \right>=1$. Two functions are orthogonal if $\left<f|g\right>=0$. A set of functions is orthonormal if $\left<{f}_{m}|{f}_{n}\right>={\delta}_{mn}$ where ${\delta}_{mn}$ is the Kronecker delta.

4) A set of functions is complete if any other function can be expressed as a linear combination of the set of functions $f(x)=\sum_{n=1}^{\infty}{c}_{n}{f}_{n}(x).$
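Completeness is the property we lean on whenever we expand a wavefunction in a basis. Here is a minimal numerical sketch (my own choices: the orthonormal set $f_n(x)=\sqrt{2/a}\,\sin(n\pi x/a)$ on $[0,a]$ and the test function $f(x)=x(a-x)$): compute the coefficients $c_n=\left<f_n|f\right>$ and check that the partial sums reconstruct $f$.

```python
import numpy as np

a = 1.0
x = np.linspace(0, a, 100001)
dx = x[1] - x[0]
f = x * (a - x)                      # test function to expand

def basis(n):
    # Orthonormal set on [0, a]: f_n(x) = sqrt(2/a) sin(nπx/a)
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

# c_n = <f_n|f> (real functions, so no conjugate needed); partial sum to N terms.
N = 50
approx = np.zeros_like(x)
for n in range(1, N + 1):
    c_n = np.sum(basis(n) * f) * dx
    approx += c_n * basis(n)

print(np.max(np.abs(f - approx)) < 1e-3)  # True: 50 terms already do well
```

The same grid also verifies orthonormality: $\left<f_m|f_n\right>$ comes out numerically as the Kronecker delta.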

**Hermitian Operators**

In QM, position, momentum, and energy are called observables, and each observable is represented by a special type of linear operator: a Hermitian operator. What makes a Hermitian operator special is that its adjoint is equal to the operator itself. This is why Hermitian operators are sometimes called self-adjoint operators.

Hermitian operators play a major role in QM because they conveniently describe the expectation values of observables: the expectation value of a Hermitian operator is always real.

But how do we know if an operator is Hermitian? Consider the integral definition of expectation of an operator $\hat{A}$: $\left<\hat{A}\right>=\int_{-\infty}^{\infty} {\Psi}^{*} \hat{A}\Psi dx.$

In Dirac notation, we denote the above integral $\left<\Psi|\hat{A}\Psi\right>$. To test if the operator is Hermitian, we must show that $\left<\Psi|\hat{A}\Psi\right> =\left<\hat{A}\Psi|\Psi\right>.$
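In a finite-dimensional space this test is easy to run in code. The sketch below (my own toy state and matrices, with `np.vdot` playing the role of the bra-ket) distinguishes a Hermitian operator from a non-Hermitian one and shows that the Hermitian expectation value is real:

```python
import numpy as np

# A normalized state and two operators (toy examples, not from the lecture).
psi = np.array([1, 1j]) / np.sqrt(2)

A = np.array([[0, -1j], [1j, 0]])   # Hermitian: A† = A (Pauli y)
B = np.array([[0, 1], [0, 0]])      # not Hermitian: B† ≠ B

# Test <Psi|A Psi> = <A Psi|Psi>; np.vdot conjugates its first argument.
print(np.isclose(np.vdot(psi, A @ psi), np.vdot(A @ psi, psi)))  # True
print(np.isclose(np.vdot(psi, B @ psi), np.vdot(B @ psi, psi)))  # False
print(abs(np.vdot(psi, A @ psi).imag) < 1e-12)                   # True: real expectation
```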

**The Commutator**

The commutator used in QM comes from ring theory and is denoted by square brackets; this bracket notation is unrelated to Dirac's bra-ket notation. In abstract terms, the commutator measures how badly two operations fail to commute. If two operators commute, then $AB=BA$, so the commutator $\left[A,B\right] = AB-BA$ vanishes. For example, multiplication of numbers is commutative; hence $[2,5]=2(5)-5(2)=0$. But what about the position and momentum operators? Recall from Lecture 2 that

$\hat{x} = x$ and $\hat{p} = -i\hbar\frac{\partial}{\partial x}=\frac{\hbar}{i}\frac{\partial}{\partial x}.$ Since these operators act on functions, let's introduce a test wavefunction $\psi(x)$.

$\left[ \hat { x } ,\hat { p } \right] \psi =\left( x\frac { \hbar }{ i } \frac { d\psi }{ dx } -\frac { \hbar }{ i } \frac { d\left( x\psi \right) }{ dx } \right)$

$\left[ \hat { x } ,\hat { p } \right] \psi = \frac{\hbar}{i}\left( x \frac { d\psi }{ dx } -x \frac { d \psi }{ dx } -\psi \right)$

$\left[ \hat { x } ,\hat { p } \right] \psi = i\hbar \psi$

Therefore $\left[ \hat { x } ,\hat { p } \right] = i\hbar$
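This result can be sanity-checked numerically. The sketch below (my own construction: $\hbar = 1$, a Gaussian test function, and second-order central differences via `np.gradient` for $d/dx$) applies $\hat{x}$ and $\hat{p}$ on a grid and verifies that $[\hat{x},\hat{p}]\psi \approx i\hbar\psi$ away from the grid edges:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 4001)
psi = np.exp(-x**2 / 2)          # Gaussian test wavefunction

def p(f):
    # p̂f = (ħ/i) df/dx, approximated by finite differences
    return (hbar / 1j) * np.gradient(f, x)

comm = x * p(psi) - p(x * psi)   # [x̂, p̂]ψ = x̂p̂ψ − p̂x̂ψ

# Away from the grid edges this should equal iħψ (up to O(h²) error):
interior = slice(10, -10)
print(np.allclose(comm[interior], 1j * hbar * psi[interior], atol=1e-4))  # True
```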

Although the commutator was introduced to study commutativity in abstract algebra, physics exploits it in a clever way. Derivatives are important in physics, and in QM we often take the time derivative of the expectation value of an observable. Let's see where the math takes us.

$\frac{d}{dt}\left<\hat{A}\right> =\frac{d}{dt}\left<\Psi|\hat{A}\Psi\right>.$ By the product rule (subscripts $t$ denote partial derivatives with respect to time),

$\frac{d}{dt}\left<\hat{A}\right> =\left<{\Psi}_{t}|\hat{A}\Psi\right>+\left<\Psi|{\hat{A}}_{t}\Psi\right>+\left<\Psi|\hat{A}{\Psi}_{t}\right>.$

Since the Schrödinger equation is $i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi$ we can simplify the above equation:

$\frac{d}{dt}\left<\hat{A}\right> =\frac{-1}{i\hbar}\left<\hat{H}\Psi|\hat{A}\Psi\right> +\frac{1}{i\hbar}\left<\Psi|\hat{A}\hat{H}\Psi\right>+\left<\frac{\partial \hat{A}}{\partial t}\right>.$

Now $\hat{H}$ is Hermitian, so $\left<\hat{H}\Psi|\hat{A}\Psi\right> = \left<\Psi|\hat{H}\hat{A}\Psi\right>$; therefore,

$\frac{d}{dt}\left<\hat{A}\right> =\frac{i}{\hbar}\left<[\hat{H},\hat{A}]\right> +\left<\frac{\partial \hat{A}}{\partial t}\right>.$
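Although we will use this result analytically, it can also be checked in a finite-dimensional toy model (my own construction: $\hbar = 1$, a random $4\times4$ Hermitian $\hat{H}$, and a time-independent Hermitian $\hat{A}$, so the $\partial\hat{A}/\partial t$ term drops out). Exact time evolution comes from diagonalizing $\hat{H}$, and a centered finite difference approximates $\frac{d}{dt}\left<\hat{A}\right>$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(4)
A = random_hermitian(4)          # time-independent, so <∂A/∂t> = 0
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

# Exact time evolution: ψ(t) = V exp(-iEt) V† ψ(0) from the eigensystem of H.
E, V = np.linalg.eigh(H)
def evolve(t):
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

def expval(op, psi):
    return np.vdot(psi, op @ psi).real

# Finite-difference d<A>/dt at t = 0.7 vs the commutator formula (ħ = 1).
t, dt = 0.7, 1e-5
lhs = (expval(A, evolve(t + dt)) - expval(A, evolve(t - dt))) / (2 * dt)
psi_t = evolve(t)
rhs = (1j * np.vdot(psi_t, (H @ A - A @ H) @ psi_t)).real
print(np.isclose(lhs, rhs, atol=1e-6))  # True
```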

This is an important result with wide applications in QM; we will hopefully encounter it again in the future.

Visit my set Lectures on Quantum Mechanics for more notes.

**Problems**

1) Use the commutator to prove the Ehrenfest theorem $\frac{d}{dt}\left<\hat{p}\right>=\left<-\frac{\partial \hat{V}}{\partial x}\right>.$ Hint: remember that $\hat{H} = \frac{{\hat{p}}^{2}}{2m}+\hat{V}$ and use the commutator identities.

2) Use the commutator to prove the virial theorem $\frac{d}{dt}\left<\hat{x}\hat{p}\right>=2\left<\hat{T}\right>-\left<x\frac{\partial \hat{V}}{\partial x}\right>.$

3) Consider the function $\Psi(x)=\sum_{k=1}^{n}{c}_{k}\sin\left(\frac{k\pi x}{a}\right).$ Prove that $\sum_{k=1}^{n}{|{c}_{k}|}^{2}=1$ is a necessary condition for $\Psi(x)$ to be normalized. Note that ${|{c}_{k}|}^{2}$ means ${c}_{k}^{*}{c}_{k}$; if the ${c}_{k}$ are real, this is just the square of the absolute value.

4) Use Dirac notation (bra-ket notation) to prove that the necessary condition in Problem 3 holds for any expansion in orthonormal functions.

5) Use Dirac notation (bra-ket notation) to prove $\left<\Psi|\hat{H}\Psi\right> = \sum_{k=1}^{n}{E}_{k}{|{c}_{k}|}^{2},$ where ${E}_{k}$ are the energy eigenvalues. Give a physical interpretation of this fact.


## Comments


Quite heavy mathematics! I just know enough calculus, but those Greek letters were driving me out!


If we use the Schrodinger wave equation as the starting point for the development of quantum mechanics, then Fourier analysis would be the natural vehicle for examining the solutions of the wave equation. It should be pointed out that a Hilbert space can be represented by Fourier series, i.e., a "vector" in Hilbert space is an infinite continuous sum of waves. Moreover, all of these infinite continuous sums of waves form an orthonormal basis, which is what makes for a linear vector space. I think the Hilbert space formalism of quantum mechanics is more easily understood once its Fourier series foundations are realized.


True. I think I will talk about $x, p$ being conjugate variables in Fourier analysis. Fourier integrals were used in Lecture 2 when I discussed the wave packet of a wavefunction.

Actually, I am trying to be very general here. As long as the functions are orthonormal then it should work out.
