Next: Quantum Computers Up: Quantum Physics in a Previous: Wave Mechanics   Contents


Algebraic Quantum Physics

While the Schrödinger Equation, in principle, allows one to compute all details of the particle distribution and the exact energy terms, dealing with partial differential equations, boundary conditions and normalization factors is usually very cumbersome and often not analytically possible at all.

Just as nobody would try to develop a color TV set by solving the Maxwell equations, the discussion of complex quantum systems requires a more abstract formalism.

The Hilbert Space

States as Vectors

The solutions $\psi_n(x)$ from the preceding examples are complex functions over the intervals ${\mathcal{I}}=[0,l]$ or ${\mathcal{I}}=[0,l]^3$, respectively. Let's introduce the following abbreviations1.8

\begin{displaymath}
{\vert n \rangle}\equiv{\vert\psi_n \rangle}\equiv\psi_n(x)
\quad\mathrm{and}\quad
{\langle n\vert}\equiv{\langle \psi_n\vert}\equiv{\psi}^*_n(x)
\end{displaymath} (1.24)

or, for the case of $k$ indices
\begin{displaymath}
{\vert n_1,n_2,\ldots n_k \rangle}\equiv\psi_{n_1\ldots n_k}(\vec{r})
\quad\mathrm{and}\quad
{\langle n_1,n_2,\ldots n_k\vert}\equiv{\psi}^*_{n_1\ldots n_k}(\vec{r})
\end{displaymath} (1.25)

and also introduce a scalar product ${\langle \phi\vert\chi \rangle}$ defined as
\begin{displaymath}
{\langle \phi\vert\chi \rangle}\equiv\int_{\mathcal{I}}{\phi}^*(x)\,\chi(x)\,dx
\end{displaymath} (1.26)

The scalar product ${\langle i\vert j \rangle}$ of the eigenfunctions $\psi_i$ and $\psi_j$ from the one-dimensional capacitor example gives
\begin{displaymath}
{\langle i\vert j \rangle}=\int_{\mathcal{I}}{\psi_i}^*(x)\,\psi_j(x)\,dx=
\frac{2}{l} \int_0^l \sin(\frac{\pi}{l}ix)\sin(\frac{\pi}{l}jx)\,dx
\end{displaymath} (1.27)

The substitution $\xi=\frac{\pi}{l}x$ leads to
\begin{displaymath}
{\langle i\vert j \rangle}=
\frac{2}{\pi} \int_0^{\pi} \sin(i\xi)\sin(j\xi)\,d\xi=\delta_{ij}
\end{displaymath} (1.28)

So the eigenfunctions of the Hamilton operator $H$ are orthonormal with respect to the scalar product (1.26) and therefore form an orthonormal base of the vector space $\mathcal{H}$ consisting of all possible linear combinations of $\{\psi_1,\psi_2,\ldots\}$. This space is the Hilbert space for this particular problem, and it can be shown that the eigenfunctions of any operator describing a physical observable form an orthogonal base.1.9
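The orthonormality relation (1.28) can be checked numerically. The sketch below evaluates the scalar product (1.26) for the capacitor eigenfunctions $\psi_n(x)=\sqrt{2/l}\,\sin(n\pi x/l)$ on a grid; the width $l$ and the grid resolution are arbitrary choices for illustration.

```python
import numpy as np

# Numerical check of (1.28): the eigenfunctions
# psi_n(x) = sqrt(2/l) sin(n pi x / l) of the one-dimensional
# capacitor are orthonormal under the scalar product (1.26).
l = 2.0                          # width of the interval [0, l]
x = np.linspace(0.0, l, 20001)   # integration grid
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = dx / 2            # trapezoidal integration weights

def psi(n):
    return np.sqrt(2.0 / l) * np.sin(n * np.pi * x / l)

def braket(f, g):
    # <f|g> = integral over [0,l] of f*(x) g(x) dx
    return np.sum(np.conj(f) * g * w)

gram = np.array([[braket(psi(i), psi(j)) for j in range(1, 5)]
                 for i in range(1, 5)])
print(np.round(gram, 6))         # approximately the 4x4 identity
```

The Gram matrix of the first few eigenfunctions comes out as the identity up to the quadrature error, i.e. ${\langle i\vert j \rangle}=\delta_{ij}$.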


Since the Schrödinger Equation is a linear differential equation, any linear combination of solutions is also a solution and thus a valid physical state. To calculate the expectation value ${\langle H \rangle}$ of the energy for a given state $\psi(x,t)$ we have to solve the integral

\begin{displaymath}
{\langle H \rangle}={\langle \psi\vert}H{\vert\psi \rangle}=
\int_{\mathcal{I}}{\psi}^*(x,t)\,H\,\psi(x,t)\,dx
\end{displaymath} (1.29)

If $\psi(x,t)$ is given as a sum of eigenfunctions as in equation 1.19, integration can be avoided, as
\begin{displaymath}
{\langle H \rangle}=\left(\sum_i {c_i}^* {\langle i\vert}\right) H \left(\sum_j c_j {\vert j \rangle}\right)=
\sum_{ij} {c_i}^* c_j {\langle i\vert}H{\vert j \rangle}
\end{displaymath} (1.30)

Since $H {\vert i \rangle}=E_i{\vert i \rangle}$ and ${\langle i\vert j \rangle}=\delta_{ij}$, ${\langle H \rangle}$ can be expressed as a weighted sum of eigenvalues:
\begin{displaymath}
{\langle H \rangle}=\sum_{i} \vert c_i\vert^2 E_i
\end{displaymath} (1.31)

Using the eigenfunctions of the one-dimensional capacitor, the complex amplitudes $c_i$ for an arbitrary continuous function $f(x)$ over $[0,l]$ are given by
\begin{displaymath}
c_i={\langle i\vert f \rangle}=
\sqrt{\frac{2}{l}} \int_0^l \sin(\frac{\pi}{l}ix)f(x)\,dx
\end{displaymath} (1.32)

This describes a standard Fourier sine transform. The original function can be reconstructed as a superposition of the eigenfunctions $\psi_i(x)$ weighted with the Fourier components $c_i$
\begin{displaymath}
f(x)=\sum_i c_i \psi_i(x)=
\sqrt{\frac{2}{l}} \sum_{i=1}^{\infty} c_i \sin(\frac{\pi}{l}ix)
\end{displaymath} (1.33)
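Equations (1.32) and (1.33) can be sketched numerically: compute the coefficients $c_i={\langle i\vert f \rangle}$ for a test function and rebuild $f$ from the truncated sum. The test function $f(x)=x(l-x)$ is an arbitrary choice that vanishes at the boundaries.

```python
import numpy as np

# Sine-Fourier expansion (1.32) and reconstruction (1.33) of an
# arbitrary test function f(x) = x(l-x) over [0, l], truncated
# after 50 eigenfunctions.
l = 1.0
x = np.linspace(0.0, l, 10001)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = dx / 2            # trapezoidal integration weights

def psi(i):
    return np.sqrt(2.0 / l) * np.sin(i * np.pi * x / l)

f = x * (l - x)                  # smooth, vanishes at the boundaries

# Fourier coefficients c_i = <i|f>, eq. (1.32)
c = np.array([np.sum(psi(i) * f * w) for i in range(1, 51)])

# reconstruction f(x) = sum_i c_i psi_i(x), eq. (1.33)
f_rec = sum(c[i - 1] * psi(i) for i in range(1, 51))
print(np.max(np.abs(f - f_rec)))  # small truncation error
```

The coefficients decay rapidly for a smooth $f$, so a few dozen terms already reproduce the function to high accuracy.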

As before, it can be shown that the eigenfunctions of any Hamilton operator form a complete orthonormal base, thus
\begin{displaymath}
I=\sum_i {\vert i \rangle}{\langle i\vert}\quad\mathrm{with}\quad I {\vert\psi \rangle}={\vert\psi \rangle}
\end{displaymath} (1.34)


A Hilbert space $\mathcal{H}$ is a linear vector space over the scalar field $\mathbf{C}$. Let ${\vert f \rangle},{\vert g \rangle},{\vert h \rangle}\in\mathcal{H}$ and $\alpha,\beta\in\mathbf{C}$, then the following operations are defined [23]:

${\vert f \rangle}+{\vert g \rangle}\in\mathcal{H}$ (linear combination) (1.35)
$\alpha{\vert f \rangle}\in\mathcal{H}$ (scalar multiplication) (1.36)
${\vert f \rangle}+{\vert \rangle}={\vert f \rangle}$ (zero element) (1.37)
${\vert f \rangle}+{\vert -f \rangle}={\vert \rangle}$ (inverse element) (1.38)

The inner product ${\langle \cdot\vert\cdot \rangle}$ meets the following conditions:
${\langle f\vert g+h \rangle} = {\langle f\vert g \rangle}+{\langle f\vert h \rangle}$ (1.39)
${\langle f\vert\alpha g \rangle} = \alpha {\langle f\vert g \rangle}$ (1.40)
${\langle f\vert g \rangle} = {({\langle g\vert f \rangle})}^*$ (1.41)
${\langle f\vert f \rangle}=0 \iff {\vert f \rangle}={\vert \rangle}$ (1.42)
$\vert\vert f\vert\vert \equiv \sqrt{{\langle f\vert f \rangle}} \ge 0$ (1.43)


Operators as Matrices

As we have shown above, all valid states $\psi$ can be expressed as a sum of eigenfunctions, i.e.

\begin{displaymath}
\psi(\vec{r},t)=\sum_{i=0}^{\infty} c_i \psi_i(\vec{r},t)
\end{displaymath} (1.44)

If we use $\{\psi_0,\psi_1,\ldots\}$ as unit vectors, we can write the bra- and ket-vectors of $\psi$ as infinite-dimensional row- and column-vectors
\begin{displaymath}
{\langle \psi\vert}\equiv\left({c_0}^*, {c_1}^*, \ldots\right)
\quad\mathrm{and}\quad
{\vert\psi \rangle}\equiv\left( \begin{array}{c} c_0 \\ c_1 \\ \vdots \end{array} \right)
\end{displaymath} (1.45)

The time independent Schrödinger equation can then be written as
\begin{displaymath}
\left( \begin{array}{cccc}
E_0 & 0 & 0 & \cdots \\
0 & E_1 & 0 & \cdots \\
0 & 0 & E_2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{array} \right)\,
{\vert\psi \rangle}=E\,{\vert\psi \rangle}
\end{displaymath} (1.46)

The Hamilton operator is the diagonal matrix $H=\mathrm{diag}(E_0, E_1, \ldots)$. In the case of multiple indices, a diagonal enumeration such as $\{\psi_{000},\psi_{100},\psi_{010},\psi_{001},\psi_{110},\ldots\}$ can be used to order the eigenfunctions. If such an enumeration exists for a Hilbert space $\mathcal{H}$, then every linear operator $O$ of $\mathcal{H}$ can be written in matrix form with the matrix elements $O_{ij}={\langle i\vert}O{\vert j \rangle}$.
\begin{displaymath}
O=\left( \begin{array}{ccc}
O_{00} & O_{01} & \cdots \\
O_{10} & O_{11} & \cdots \\
\vdots & \vdots & \ddots
\end{array} \right)
\quad\mathrm{with}\quad O_{ij}={\langle i\vert}O{\vert j \rangle}
\end{displaymath} (1.47)
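In a truncated eigenbasis the matrix picture becomes concrete: the Hamiltonian is a diagonal matrix, and the expectation value ${\langle \psi\vert}H{\vert\psi \rangle}$ reduces to the weighted sum (1.31). The energies and amplitudes below are arbitrary example values.

```python
import numpy as np

# Truncated matrix form of the Hamilton operator: H = diag(E_i)
# in the energy eigenbasis, and <H> = <psi|H|psi> = sum |c_i|^2 E_i.
E = np.array([1.0, 4.0, 9.0, 16.0])      # example eigenvalues
H = np.diag(E)                            # Hamiltonian as a matrix

c = np.array([0.8, 0.4, 0.4, 0.2])        # example amplitudes
c = c / np.linalg.norm(c)                 # normalize: sum |c_i|^2 = 1

expect_matrix = np.conj(c) @ H @ c        # <psi|H|psi>, eq. (1.29)
expect_sum = np.sum(np.abs(c)**2 * E)     # weighted sum, eq. (1.31)
print(expect_matrix, expect_sum)          # the two values agree
```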

Physical Observables

As has been mentioned above, in quantum physics a physical observable $\mathcal{O}$ is expressed as a linear operator $O$ (see table 1.1), while the classical value of $\mathcal{O}$ is the expectation value ${\langle O \rangle}$. Obviously, the value of an observable such as position or momentum must be real, as a length of $(1+\mathrm{i})$ meters would have no physical meaning, so we require ${\langle O \rangle}\in \mathbf{R}$.

${O}^\dagger $ is called adjoint operator to $O$ if

\begin{displaymath}
{\langle \hat{f}\vert g \rangle}={\langle f\vert}O{\vert g \rangle}
\quad\mathrm{with}\quad {\vert\hat{f} \rangle}={O}^\dagger {\vert f \rangle}
\end{displaymath} (1.48)

If $O$ is given in matrix form, then ${O}^\dagger$ is the conjugate transpose of $O$, i.e. ${O}^\dagger ={(O^\mathrm{T})}^*$. An operator $O$ with ${O}^\dagger =O$ is called self-adjoint or Hermitian.

All quantum observables are represented by Hermitian operators, as we can reformulate the requirement ${\langle O \rangle}\in \mathbf{R}$ as ${\langle O \rangle}={{\langle O \rangle}}^*$ or

\begin{displaymath}
{\langle \psi\vert}O{\vert\psi \rangle}={\left({\langle \psi\vert}O{\vert\psi \rangle}\right)}^*=
{\langle \psi\vert}{O}^\dagger {\vert\psi \rangle}
\end{displaymath} (1.49)
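The connection between Hermiticity and real expectation values is easy to verify with finite matrices. The sketch below symmetrizes a randomly chosen complex matrix to obtain a Hermitian example operator; both the matrix and the state are arbitrary illustrations.

```python
import numpy as np

# The adjoint of a matrix operator is the conjugate transpose (1.48).
# For a Hermitian O (O^dagger = O) the expectation value <psi|O|psi>
# is real, as required for a physical observable.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = A + np.conj(A.T)                 # O^dagger = O by construction

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)      # normalized example state

expect = np.conj(psi) @ O @ psi
print(expect.imag)                   # vanishes up to rounding
```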


In classical physics, the observables of a system, such as particle location, momentum, energy, etc., were thought to be well defined entities which change their values over time according to certain dynamic laws and which could -- technical difficulties aside -- in principle be observed without disturbing the system itself. It is a fundamental finding of quantum physics that this is not the case.

Consider a state ${\vert\psi \rangle}$ which is a composition of two eigenstates ${\vert\psi_1 \rangle}$ and ${\vert\psi_2 \rangle}$ of the time-independent Schrödinger equation with the associated energy eigenvalues $E_1$ and $E_2$

\begin{displaymath}
{\vert\psi \rangle}=c_1{\vert\psi_1 \rangle}+c_2{\vert\psi_2 \rangle}
\quad\mathrm{with}\quad \vert c_1\vert^2+\vert c_2\vert^2=1
\end{displaymath} (1.52)

The expectation value of the energy is ${\langle H \rangle}=\vert c_1\vert^2 E_1+\vert c_2\vert^2 E_2$, but if we actually perform the measurement, we will measure either $E_1$ or $E_2$ with the probabilities $\vert c_1\vert^2$ and $\vert c_2\vert^2$. However, if we measure the resulting state again, we will always get the same energy as in the first measurement, as the wave function has collapsed to either $\psi_1$ or $\psi_2$.
\begin{displaymath}
{\vert\psi \rangle}\to
\left\{ \begin{array}{cl}
{\vert\psi_1 \rangle} & \mbox{with probability}\ \vert c_1\vert^2 \\
{\vert\psi_2 \rangle} & \mbox{with probability}\ \vert c_2\vert^2
\end{array}\right.
\end{displaymath} (1.53)
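The statistics of the collapse (1.53) can be simulated directly: draw outcomes with probabilities $\vert c_1\vert^2$ and $\vert c_2\vert^2$ and estimate the frequencies. The amplitudes are arbitrary example values.

```python
import numpy as np

# Simulation of the projective measurement (1.53): the state
# c1|psi_1> + c2|psi_2> collapses to |psi_1> or |psi_2> with
# probabilities |c1|^2 and |c2|^2.  After the collapse the state
# is an eigenstate, so repeating the measurement is deterministic.
rng = np.random.default_rng(42)
c1, c2 = np.sqrt(0.3), np.sqrt(0.7)   # example amplitudes, |c1|^2 = 0.3

def measure(p1):
    # index of the eigenstate the wave function collapses to
    return 1 if rng.random() < p1 else 2

outcomes = [measure(abs(c1)**2) for _ in range(100000)]
p1_est = outcomes.count(1) / len(outcomes)
print(p1_est)          # close to |c1|^2 = 0.3
```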

The fact that ${\langle H \rangle}$ is only a statistical value raises the question of when it is reasonable to speak about the energy of a state (or any other observable, for that matter) or, in other words, whether a physical quality of a system exists by itself or is invariably tied to the process of measuring.

The Copenhagen interpretation of quantum physics argues that an observable $\mathcal{O}$ only exists if the system in question happens to be in an eigenstate of the according operator $O$ [22].

The Uncertainty Principle

The destructive nature of measurement raises the question whether two observables $\mathcal{A}$ and $\mathcal{B}$ can be measured simultaneously. This can only be the case if the post-measurement state $\psi'$ is an eigenfunction of both $A$ and $B$

\begin{displaymath}
A{\vert\psi' \rangle}=a{\vert\psi' \rangle} \quad\mathrm{and}\quad B{\vert\psi' \rangle}=b{\vert\psi' \rangle}
\end{displaymath} (1.54)

Using the commutator $[A,B]=AB-BA$, this is equivalent to the condition $[A,B]=0$. If $A$ and $B$ don't commute, then the uncertainty product $(\Delta A)(\Delta B)> 0$. To find a lower limit for $(\Delta A)(\Delta B)$, we introduce the operators $\delta A=A-{\langle A \rangle}$ and $\delta B=B-{\langle B \rangle}$ and can express the squared uncertainty product as
\begin{displaymath}
(\Delta A)^2(\Delta B)^2={\langle (\delta A)^2 \rangle}{\langle (\delta B)^2 \rangle}=
{\langle \psi\vert}(\delta A)(\delta A){\vert\psi \rangle}\,
{\langle \psi\vert}(\delta B)(\delta B){\vert\psi \rangle}
\end{displaymath} (1.55)

Since $\delta A$ and $\delta B$ are self-adjoint, we can express the above as $(\Delta A)^2(\Delta B)^2=\vert\vert\delta A \psi\vert\vert^2\, \vert\vert\delta B \psi\vert\vert^2$. Using Schwarz's inequality $\vert\vert f\vert\vert^2\vert\vert g\vert\vert^2\ge\vert{\langle f\vert g \rangle}\vert^2$ and the fact that $[A,B]=[\delta A,\delta B]$, we get
\begin{displaymath}
(\Delta A)(\Delta B) \ge \frac{1}{2} \vert\vert[A,B]\vert\vert
\end{displaymath} (1.56)
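The bound (1.56) can be checked for finite-dimensional operators. The sketch below uses the spin operators $S_x$, $S_y$ (Pauli matrices with $\hbar=1$), reading $\vert\vert[A,B]\vert\vert$ as $\vert{\langle \psi\vert}[A,B]{\vert\psi \rangle}\vert$; the test state is an arbitrary choice.

```python
import numpy as np

# Numerical check of the uncertainty relation (1.56) for the
# non-commuting spin operators Sx, Sy (hbar = 1).
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2

psi = np.array([0.6, 0.8], dtype=complex)      # normalized example state

def expect(O):
    return (np.conj(psi) @ O @ psi).real

def uncertainty(O):
    # Delta O = sqrt(<O^2> - <O>^2)
    return np.sqrt(expect(O @ O) - expect(O)**2)

lhs = uncertainty(Sx) * uncertainty(Sy)
comm = Sx @ Sy - Sy @ Sx                       # [Sx, Sy] = i Sz
rhs = 0.5 * abs(np.conj(psi) @ comm @ psi)
print(lhs >= rhs - 1e-12)                      # True
```

For this particular state the bound is even saturated, i.e. the two sides coincide.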

Observables with a nonzero commutator $[A,B]$ of the dimension of an action (i.e. a product of energy and time) are called canonically conjugate. If we take e.g. the location and momentum operators, we find that
\begin{displaymath}
(\Delta R_i)(\Delta P_j)\ge
\frac{1}{2} \vert\vert[r_i,-\mathrm{i}\hbar\frac{\partial }{\partial r_j}]\vert\vert=
\frac{\hbar}{2}\,\delta_{ij}
\end{displaymath} (1.57)

This means that it is impossible to determine both the location and the momentum for the same coordinate to arbitrary precision; it is, however, possible to measure the location in $x$-direction together with the momentum in $y$-direction.

Temporal Evolution

Above we have shown how the Schrödinger equation can be separated if the Hamilton operator is time independent.

If we have the initial value problem with $\psi(t=0)=\psi_0$ we can define an operator $U(t)$ such that

\begin{displaymath}
H\, U(t)\,{\vert\psi_0 \rangle} = \mathrm{i}\hbar \frac{\partial }{\partial t}\, U(t)\,{\vert\psi_0 \rangle}
\quad\mathrm{and}\quad U(0)\,{\vert\psi \rangle}={\vert\psi \rangle}
\end{displaymath} (1.58)

We get the operator equation $HU=i\hbar \frac{\partial }{\partial t}U$ with the solution
\begin{displaymath}
U(t)=\mathrm{e}^{-\frac{\mathrm{i}}{\hbar}Ht}=
\sum_{n=0}^{\infty} \frac{1}{n!}\,
\frac{(-\mathrm{i})^n t^n}{\hbar^n}\, H^n
\end{displaymath} (1.59)

$U$ is the operator of temporal evolution and satisfies the criterion
\begin{displaymath}
U(t)\,{\vert\psi(t_0) \rangle}={\vert\psi(t_0+t) \rangle}
\end{displaymath} (1.60)
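In the energy eigenbasis $H$ is diagonal, so $U(t)=\mathrm{e}^{-\mathrm{i}Ht/\hbar}$ acts as pure phase factors $\mathrm{e}^{-\mathrm{i}E_i t}$ on the amplitudes (with $\hbar=1$). The sketch below checks the group property (1.60) for example energies and amplitudes.

```python
import numpy as np

# Temporal evolution (1.59)-(1.60) in the energy eigenbasis (hbar = 1):
# U(t) = exp(-iHt) is diagonal and multiplies c_i by exp(-i E_i t).
E = np.array([1.0, 2.0, 5.0])                 # example eigenvalues
c = np.array([0.5, 0.5, np.sqrt(0.5)], dtype=complex)

def U(t):
    return np.diag(np.exp(-1j * E * t))       # unitary, diagonal in |i>

def psi(t):                                   # |psi(t)> = U(t)|psi(0)>
    return U(t) @ c

t0, t = 0.7, 1.3
lhs = U(t) @ psi(t0)                          # U(t)|psi(t0)>
rhs = psi(t0 + t)                             # |psi(t0 + t)>
print(np.allclose(lhs, rhs))                  # True: eq. (1.60)

# U is unitary, so the evolution is reversible: U(-t) undoes U(t)
print(np.allclose(U(-t) @ psi(t), psi(0)))    # True
```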

If ${\vert\psi \rangle}=\sum_i c_i {\vert i \rangle}$ is a solution of the time-independent Schrödinger equation, then
\begin{displaymath}
{\vert\psi(t) \rangle}=U(t)\,{\vert\psi \rangle}=
\sum_i c_i\, \mathrm{e}^{-\mathrm{i}\omega_i t}\, {\vert i \rangle} \quad\mathrm{with}\quad
\omega_i=\frac{E_i}{\hbar}
\end{displaymath} (1.61)

is the corresponding time dependent solution.

Unitary Operators

The operator of temporal evolution satisfies the condition

\begin{displaymath}
{U}^\dagger (t)U(t)=\mathrm{e}^{\frac{\mathrm{i}}{\hbar}Ht}\mathrm{e}^{-\frac{\mathrm{i}}{\hbar}Ht}=1
\end{displaymath} (1.62)

Operators $U$ with ${U}^\dagger =U^{-1}$ are called unitary. Since the temporal evolution of a quantum system is described by a unitary operator and ${U}^\dagger (t)=U(-t)$, it follows that the temporal behavior of a quantum system is reversible, as long as no measurement is performed.1.10

Unitary operators can also be used to describe abstract operations like rotations

\begin{displaymath}
R_{z}(\alpha)\,{\vert n_1,n_2,n_3 \rangle}=
\cos(\alpha)\,{\vert n_1,n_2,n_3 \rangle}+\mathrm{i}\sin(\alpha)\,{\vert n_2,n_1,n_3 \rangle}
\end{displaymath} (1.63)

or the flipping of two eigenstates
\begin{displaymath}
\mathrm{Not}\,{\vert n \rangle}=
\left\{ \begin{array}{cl}
{\vert 1 \rangle} & \mbox{if}\ n=0 \\
{\vert 0 \rangle} & \mbox{if}\ n=1 \\
{\vert n \rangle} & \mbox{otherwise}
\end{array}\right.
\end{displaymath} (1.64)

without the need to specify how these transformations are actually performed or having to deal with time-dependent Hamilton operators.
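A state-flipping operation of this kind is just a permutation matrix in a truncated eigenbasis. The sketch below builds a unitary that swaps two selected eigenstates, here $\vert 0 \rangle$ and $\vert 1 \rangle$ of an example four-state basis, and leaves all others unchanged.

```python
import numpy as np

# A "Not" operation that flips two selected eigenstates, written as
# a permutation matrix in the truncated basis {|0>,|1>,|2>,|3>}.
# Permutation matrices are unitary, so no time-dependent Hamiltonian
# needs to be specified.
Not = np.eye(4)
Not[[0, 1]] = Not[[1, 0]]        # swap the rows for |0> and |1>

print(np.allclose(Not.T @ Not, np.eye(4)))   # True: Not is unitary
print(Not @ np.array([1, 0, 0, 0]))          # |0> is mapped to |1>
```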

Mathematically, unitary operations can be described as base transformations between two orthonormal bases (just like rotations in $\mathbf{R}^3$). Let $A$ and $B$ be Hermitian operators with the orthonormal eigenfunctions $\psi_n$ and $\tilde{\psi}_n$ and ${\vert\psi \rangle}=\sum_i c_i {\vert\psi_i \rangle}=\sum_i \tilde{c}_i {\vert\tilde{\psi}_i \rangle}$, then the Fourier coefficients $\tilde{c}_i$ are given by

\begin{displaymath}
\left( \begin{array}{c} \tilde{c}_0 \\ \tilde{c}_1 \\ \vdots \end{array} \right)=
U \left( \begin{array}{c} c_0 \\ c_1 \\ \vdots \end{array} \right)
\quad\mathrm{with}\quad
U_{ij}={\langle \tilde{\psi}_i\vert\psi_j \rangle}
\end{displaymath} (1.65)

Composed systems


In the preceding section we have calculated the eigenstates $\psi_{n_1,n_2,n_3}$ for an electron in a three-dimensional trap. Real electrons are also characterized by the orientation of their spin, which can be either ``up'' ($\uparrow$) or ``down'' ($\downarrow$). The spin state ${\vert\chi \rangle}$ of an electron can therefore be written as

\begin{displaymath}
{\vert\chi \rangle}=
\alpha\,{\vert\uparrow \rangle}+\beta\,{\vert\downarrow \rangle}=
\left( \begin{array}{c} \alpha \\ \beta \end{array} \right)
\quad\mathrm{with}\quad \vert\alpha\vert^2+\vert\beta\vert^2=1
\end{displaymath} (1.66)

The spins also form a finite Hilbert space $\mathcal{H}_S=\mathbf{C}^2$ with the orthonormal base $\{{\vert\uparrow \rangle},{\vert\downarrow \rangle}\}$. If we combine $\mathcal{H}_S$ with the solution space $\mathcal{H}_R$ of the spinless problem (equation 1.22), we get a combined Hilbert space $\mathcal{H}=\mathcal{H}_R \times \mathcal{H}_S$ with the base vectors
\begin{displaymath}
{\vert n_1,n_2,n_3,s \rangle}={\vert\psi_{n_1,n_2,n_3} \rangle}{\vert s \rangle}
\quad\mathrm{with}\quad n_1,n_2,n_3\in\mathbf{N},\; s\in\{\uparrow,\downarrow\}
\end{displaymath} (1.67)

Product States

If we have two independent quantum systems A and B described by the Hamilton operators $H_A$ and $H_B$ with the orthonormal eigenvectors $\psi^A_i$ and $\psi^B_j$, which are in the states

\begin{displaymath}
{\vert\psi^A \rangle}=\sum_i a_i {\vert\psi^A_i \rangle} \quad\mathrm{and}\quad
{\vert\psi^B \rangle}=\sum_j b_j {\vert\psi^B_j \rangle}
\end{displaymath} (1.68)

then the common state ${\vert\Psi \rangle}$ is given by
\begin{displaymath}
{\vert\Psi \rangle}={\vert\psi^A \rangle}{\vert\psi^B \rangle}=
\sum_{i,j} a_i b_j {\vert\psi^A_i \rangle}{\vert\psi^B_j \rangle}=
\sum_{i,j} a_i b_j {\vert i,j \rangle}
\end{displaymath} (1.69)

Such states are called product states. Unitary transformations and measurements applied to only one subsystem don't affect the other as
\begin{displaymath}
U^A {\vert\Psi \rangle}=(U\times I)\,{\vert\psi^A \rangle}{\vert\psi^B \rangle}=
\left(U{\vert\psi^A \rangle}\right){\vert\psi^B \rangle}
\end{displaymath} (1.70)

and the probability $p^A_i$ to measure the energy $E^A_i$ in system A is given by1.11
\begin{displaymath}
p^A_i=\left\vert\left({\langle \psi^A_i\vert}{\langle \psi^B\vert}\right){\vert\Psi \rangle}\right\vert^2=
\left\vert a_i \sum_{j} {b}^*_j b_j\right\vert^2 = \vert a_i\vert^2
\end{displaymath} (1.71)


If ${\vert\Psi \rangle}$ is not a product state, then operations on one subsystem can affect the other. Consider two electrons with the common spin state

\begin{displaymath}
{\vert\Psi \rangle}=\frac{1}{\sqrt{2}}
\left( {\vert\uparrow\downarrow \rangle}+{\vert\downarrow\uparrow \rangle} \right)
\end{displaymath} (1.72)

If we measure the spin of the first electron, we get either ${\vert\uparrow \rangle}$ or ${\vert\downarrow \rangle}$ with equal probability $p=1/2$, with the resulting post-measurement states ${\vert\uparrow\downarrow \rangle}$ or ${\vert\downarrow\uparrow \rangle}$. Consequently, if we then measure the spin of the second electron, we will always find it to be anti-parallel to the first.

Two systems whose common wave function ${\vert\Psi \rangle}$ is not a product state are called entangled.
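The measurement correlations of the entangled state (1.72) can be simulated: the first spin is random with $p=1/2$, but the collapsed state fixes the second spin to be anti-parallel. Basis ordering and the measurement helper are illustrative choices.

```python
import numpy as np

# Measurement statistics for the entangled state (1.72),
# |Psi> = (|up,down> + |down,up>)/sqrt(2), in the basis
# ordering |uu>, |ud>, |du>, |dd>.
rng = np.random.default_rng(7)
Psi = np.array([0, 1, 0, 1]) / np.sqrt(2)

def measure_first_spin(Psi):
    # projective measurement of spin 1; returns outcome and the
    # renormalized post-measurement state
    p_up = np.abs(Psi[0])**2 + np.abs(Psi[1])**2
    if rng.random() < p_up:
        post = np.array([Psi[0], Psi[1], 0, 0]) / np.sqrt(p_up)
        return "up", post
    post = np.array([0, 0, Psi[2], Psi[3]]) / np.sqrt(1 - p_up)
    return "down", post

for _ in range(5):
    s1, post = measure_first_spin(Psi)
    # in the collapsed state spin 2 is deterministic
    s2 = "down" if np.abs(post[1]) > 0 else "up"
    print(s1, s2)    # always anti-parallel pairs
```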


1.8 This formalism is called braket notation and was introduced by Dirac: the ${\langle \cdot\vert}$ terms are referred to as ``bra''- and the ${\vert\cdot \rangle}$ terms as ``ket''-vectors.
1.9 As physical observables are real valued, their corresponding operators $O$ have to be self-adjoint, i.e. ${O}^\dagger =O$.
1.10 Since a measurement can result in a reduction of the wave function, it is generally impossible to reconstruct ${\vert\psi \rangle}$ from the post-measurement state ${\vert\psi' \rangle}$.
1.11 We assume here that the eigenvalue $E^A_i$ isn't degenerate; otherwise the solution is analogous to equation 1.50.


(c) Bernhard Ömer