Time-Dependent Quantum Theory

Part of the Graduate Texts in Physics book series (GTP)


The focus of our interest will be the coupling of atomic and molecular systems to laser fields, whose maximal strength is of the order of the field experienced by an electron in the ground state of the hydrogen atom. This restriction allows us to describe the field-matter interaction non-relativistically by using the time-dependent Schrödinger equation (TDSE) [1]. Analytical solutions of this linear partial differential equation are scarce, however, even in the case without external driving.

In this chapter, we continue laying the foundations for the later chapters by reviewing some basic properties of the time-dependent Schrödinger equation and the corresponding time-evolution operator. After the discussion of two analytically solvable cases, we will consider various ways to rewrite and/or solve the time-dependent Schrödinger equation. Formulating the solution with the help of the Feynman path integral will allow us to consider an intriguing approximate, so-called semiclassical approach to the solution of the time-dependent Schrödinger equation that uses classical trajectories. The last part of this chapter deals with numerical solution techniques for the time-dependent Schrödinger equation that will be referred to in later chapters.

2.1 The Time-Dependent Schrödinger Equation

In the heyday of quantum theory, Schrödinger postulated a differential equation for the wavefunction of a quantum particle. The properties of this partial differential equation, which is of first order in time, and the interpretation of the complex-valued wavefunction are the focus of this section. The importance of Gaussian wavepackets as (approximate) analytical solutions of the Schrödinger equation will show up for the first time when we consider the so-called Gaussian wavepacket dynamics.

2.1.1 Introduction

In position representation, the time-dependent Schrödinger equation for the wavefunction of a single particle of mass m, moving in three spatial dimensions (3D) with \(\varvec{r}=(x,y,z)\) is the linear partial differential equation
$$\begin{aligned} \mathrm{i}\hbar \dot{\varPsi }(\varvec{r},t)=\hat{H}(\varvec{r},t)\varPsi (\varvec{r},t). \end{aligned}$$
(2.1)
The Hamilton operator
$$\begin{aligned} \hat{H}(\varvec{r},t)=\hat{T}_\mathrm{k}+V(\varvec{r},t)= \frac{\hat{\varvec{p}}^2}{2m}+V(\varvec{r},t)= -\frac{\hbar ^2}{2m}\Delta +V(\varvec{r},t) \end{aligned}$$
(2.2)
is the sum of kinetic and potential energy, where
$$\begin{aligned} \Delta \equiv \frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}+\frac{\partial ^2}{\partial z^2} \end{aligned}$$
(2.3)
is the Laplace operator in Cartesian coordinates, and the potential energy may (and in the cases to be considered later will) be time-dependent.
In order to gain a physical interpretation of the wavefunction, one multiplies (2.1) by \(\varPsi ^*(\varvec{r},t)\) and from the resulting expression subtracts its complex conjugate. In the case of real valued potentials, \(V(\varvec{r},t)=V^*(\varvec{r},t)\), this procedure yields the equation of continuity
$$\begin{aligned} \dot{\rho }({\varvec{r}},t)=-\varvec{\nabla }\cdot {\varvec{j}}({\varvec{r}},t), \end{aligned}$$
(2.4)
with the so-called probability density
$$\begin{aligned} \rho ({\varvec{r}},t)=|\varPsi ({\varvec{r}},t)|^2 \end{aligned}$$
(2.5)
and the probability current density
$$\begin{aligned} {\varvec{j}}({\varvec{r}},t)=\frac{\hbar }{m}\mathrm{Im}\left\{ \varPsi ^*({\varvec{r}},t) \varvec{\nabla }\varPsi ({\varvec{r}},t)\right\} = \frac{1}{m}\mathrm{Re}\left\{ \varPsi ^*({\varvec{r}},t) \hat{{\varvec{p}}}\;\varPsi ({\varvec{r}},t)\right\} , \end{aligned}$$
(2.6)
and where
$$\begin{aligned} \varvec{\nabla }\equiv \varvec{e}_x\frac{\partial }{\partial x}+\varvec{e}_y\frac{\partial }{\partial y}+\varvec{e}_z\frac{\partial }{\partial z} \end{aligned}$$
(2.7)
is the nabla operator. We thus conclude that \(|\varPsi ({\varvec{r}},t)|^2 \mathrm{d}^3r\) is the probability to find the particle at time t in the volume element \(\mathrm{d}^3r\) around \(\varvec{r}\). It may change by probability density flowing in or out, which is expressed via \({\varvec{j}}\). Integration of (2.4) over all space yields that if \(\varPsi \) is normalized at \(t=t_0\) it will be normalized at all times, i.e.
$$\begin{aligned} \int _{-\infty }^{\infty }\mathrm{d}^3r|\varPsi ({\varvec{r}},t)|^2=1 \qquad \forall t \end{aligned}$$
(2.8)
holds in the case of real potential functions and provided that the current density falls to zero faster than \(1/r^2\) for \(r\rightarrow \infty \).
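Norm conservation can also be checked numerically. The following sketch (in Python, with \(\hbar =m=1\) and an arbitrarily chosen harmonic potential) propagates a Gaussian wavepacket with a split-operator scheme of the kind discussed at the end of this chapter; since every factor in the stepping is unitary, the normalization integral stays at unity to machine precision.

```python
import numpy as np

# Grid and initial Gaussian wavepacket (hbar = m = 1; all quantities dimensionless)
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)

psi = np.exp(-0.5 * (x + 5.0) ** 2 + 2j * x)     # displaced, moving Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalize at t = t0

V = 0.5 * x ** 2                                 # harmonic potential (illustrative)
dt, steps = 0.01, 500

for _ in range(steps):                           # split-operator time stepping
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)                                      # remains 1 for all times
```

Every exponential factor has unit modulus and the discrete Fourier transform pair is unitary, mirroring the continuum argument based on (2.4).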

Exercise 2.1

Derive the equation of continuity and prove that the norm is conserved in case that \(\varvec{j}\) falls to zero faster than \(1/r^2\) for \(r\rightarrow \infty \).

For time-independent (autonomous) potentials \(V({\varvec{r}},t)=V({\varvec{r}})\) equation (2.1) can be solved by separation of variables using the product Ansatz
$$\begin{aligned} \varPsi ({\varvec{r}},t)=\psi ({\varvec{r}})\varphi (t). \end{aligned}$$
(2.9)
After insertion into the time-dependent Schrödinger equation, we get
$$\begin{aligned} \mathrm{i}\hbar \frac{\dot{\varphi }(t)}{\varphi (t)}= \frac{\left\{ -\frac{\hbar ^2}{2m}\Delta +V({\varvec{r}})\right\} \psi ({\varvec{r}})}{\psi ({\varvec{r}})}. \end{aligned}$$
(2.10)
Since the LHS and the RHS of this equation depend on different variables, both must be equal to a constant, which we name E. We thus arrive at the two equations
$$\begin{aligned} \mathrm{i}\hbar \dot{\varphi }(t)= & {} E\varphi (t), \end{aligned}$$
(2.11)
$$\begin{aligned} \hat{H}({\varvec{r}})\psi _E({\varvec{r}})= & {} E\psi _E({\varvec{r}}). \end{aligned}$$
(2.12)
The first of these equations can be solved immediately by
$$\begin{aligned} \varphi (t)=\varphi _0\mathrm{e}^{-\mathrm{i}Et/\hbar }. \end{aligned}$$
(2.13)
The second equation is the so-called time-independent Schrödinger equation [2]. It can be solved after specification of the potential \(V({\varvec{r}} )\). An exact analytical solution, however, is possible only in special cases. The energies E and the corresponding wavefunctions \(\psi _E({\varvec{r}})\) are the eigenvalues and eigenfunctions of the problem. In their terms a particular solution of the time-dependent Schrödinger equation is given by
$$\begin{aligned} \varPsi ({\varvec{r}},t)=\psi _E({\varvec{r}})\varphi _0\mathrm{e}^{-\mathrm{i}Et/\hbar }, \end{aligned}$$
(2.14)
where the constant \(\varphi _0\) later on will be absorbed in \(\psi _E\).
Due to the linearity of the time-dependent Schrödinger equation, its most general solution in the case of a discrete spectrum of energies, \(\{E_n\}\), is a linear combination of eigenfunctions given by
$$\begin{aligned} \varPsi ({\varvec{r}},t)= & {} \sum _{n=0}^{\infty }a_n\psi _n({\varvec{r}})\mathrm{e}^{-\mathrm{i}E_nt/\hbar }, \end{aligned}$$
(2.15)
with
$$\begin{aligned} a_n=\int \mathrm{d}^3r\,\psi _n^*({\varvec{r}})\varPsi ({\varvec{r}},0)=\langle \psi _n|\varPsi (0)\rangle \end{aligned}$$
(2.16)
and where we have used that the eigenfunctions \(\psi _n\) corresponding to the discrete energies form a complete orthonormal set, i.e.,
$$\begin{aligned} \sum _n|\psi _n\rangle \langle \psi _n|= & {} \hat{1}, \end{aligned}$$
(2.17)
$$\begin{aligned} \langle \psi _n|\psi _m\rangle= & {} \delta _{nm}, \end{aligned}$$
(2.18)
formulated in Dirac’s bra-ket notation.
In the case of a purely continuous spectrum, the general solution is
$$\begin{aligned} \varPsi ({\varvec{r}},t)= & {} \int _0^{\infty }\mathrm{d}Ea(E)\psi _E({\varvec{r}})\mathrm{e}^{-\mathrm{i}Et/\hbar }. \end{aligned}$$
(2.19)
For spectra that consist of discrete as well as continuous parts, we get a corresponding sum of the two expressions given above.
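The expansion (2.15) with coefficients (2.16) can be made concrete numerically. The sketch below (Python, \(\hbar =m=1\)) uses the eigenfunctions of the infinite square well (given explicitly in Sect. 2.1.4.2 below) as a discrete orthonormal basis, projects a Gaussian initial state onto them, and propagates it by attaching the phase factors \(\mathrm{e}^{-\mathrm{i}E_nt/\hbar }\); the well width and wavepacket parameters are chosen arbitrarily for illustration.

```python
import numpy as np

# Expansion (2.15) with coefficients (2.16), using the infinite square-well
# eigenfunctions as a concrete orthonormal basis (hbar = m = 1, L = 1).
L, N = 1.0, 200
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
n = np.arange(1, N + 1)
psi_n = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)  # rows: psi_n(x)
E_n = 0.5 * (n * np.pi / L) ** 2                               # well eigenvalues

# Gaussian initial state well away from the walls (parameters are arbitrary)
Psi0 = np.exp(-(x - 0.3) ** 2 / (2 * 0.05 ** 2) + 1j * 30.0 * x)
Psi0 /= np.sqrt(np.sum(np.abs(Psi0) ** 2) * dx)

a_n = (psi_n @ Psi0) * dx                        # overlap integrals (2.16)
print(np.sum(np.abs(a_n) ** 2))                  # ~ 1: the basis resolves Psi0

Psi_t = (a_n * np.exp(-1j * E_n * 0.37)) @ psi_n # evolved state (2.15), t = 0.37
print(np.sum(np.abs(Psi_t) ** 2) * dx)           # the norm is conserved
```

That both printed numbers are unity reflects the completeness relation (2.17) and the norm conservation discussed above.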
For a Hamiltonian with a discrete spectrum of energy levels that can be arranged in ascending order, \(E_0\le E_1\le E_2\le ...\), the expectation value of the energy with a trial wavefunction \(|\phi \rangle \) is
$$\begin{aligned} \langle \phi |\hat{H}|\phi \rangle= & {} \sum _{n,m}\langle \phi |\psi _n\rangle \langle \psi _n|\hat{H}|\psi _m\rangle \langle \psi _m|\phi \rangle \nonumber \\= & {} \sum _{n}E_n|\langle \psi _n|\phi \rangle |^2 \nonumber \\\ge & {} E_0\sum _n|\langle \psi _n|\phi \rangle |^2=E_0\langle \phi |\phi \rangle . \end{aligned}$$
(2.20)
For any trial wavefunction one therefore gets
$$\begin{aligned} \frac{\langle \phi |\hat{H}|\phi \rangle }{\langle \phi |\phi \rangle }\ge E_0, \end{aligned}$$
(2.21)
the so-called Rayleigh-Ritz variational principle. In the variational method of Ritz, one chooses the trial state as an explicit function of a parameter \(\alpha \) and determines the minimum of the LHS of the equation above as a function of that parameter. In this way, an upper bound for the ground state energy can be determined.
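As a numerical illustration of the Ritz method, the sketch below (Python, \(\hbar =m=\omega =1\)) evaluates the Rayleigh quotient for the Gaussian trial function \(\phi (x)=\mathrm{e}^{-\alpha x^2}\) in a harmonic oscillator potential and scans \(\alpha \); the minimum reproduces the exact ground-state energy \(E_0=1/2\) at \(\alpha =1/2\).

```python
import numpy as np

# Ritz variational estimate for the 1D harmonic oscillator (hbar = m = omega = 1)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
V = 0.5 * x ** 2

def energy(alpha):
    phi = np.exp(-alpha * x ** 2)             # unnormalized Gaussian trial state
    dphi = np.gradient(phi, dx)
    kinetic = 0.5 * np.sum(dphi ** 2) * dx    # integral of |phi'|^2/2 (by parts)
    potential = np.sum(V * phi ** 2) * dx
    return (kinetic + potential) / (np.sum(phi ** 2) * dx)

alphas = np.linspace(0.1, 1.5, 281)
E = [energy(a) for a in alphas]
i = int(np.argmin(E))
print(alphas[i], E[i])                        # minimum near alpha = 1/2, E = E_0
```

For any other \(\alpha \) the quotient lies above \(E_0\), in accordance with (2.21).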

In the derivation above, we started from the time-dependent Schrödinger equation in order to derive the time-independent Schrödinger equation. Schrödinger, however, published them in reverse order. Furthermore, one can derive the time-dependent from the time-independent version, if one considers a composite system of many degrees of freedom and treats the “environmental” degrees of freedom classically, leading to the “emergence of time” for the subsystem [3].

2.1.2 Time-Evolution Operator

Conservation of the norm of the wavefunction can also be proven on a more abstract level by using the unitarity of the so-called time-evolution operator. This operator allows for the formal solution of the time-dependent Schrödinger equation without going to a specific representation, and we therefore continue the discussion using the bra-ket notation. Furthermore, we consider time-independent Hamiltonians to start with. The time-dependent Schrödinger equation then reads
$$\begin{aligned} \mathrm{i}\hbar |\dot{\varPsi }(t)\rangle =\hat{H}|\varPsi (t)\rangle . \end{aligned}$$
(2.22)
A formal solution of this equation is given by
$$\begin{aligned} |\varPsi (t)\rangle =\mathrm{e}^{-\mathrm{i}\hat{H}(t-t_0)/\hbar }|\varPsi (t_0)\rangle =:\hat{U}(t,t_0)|\varPsi (t_0)\rangle , \end{aligned}$$
(2.23)
where we have defined the time-evolution operator \(\hat{U}(t,t_0)\) which “evolves” the wavefunction from time \(t_0\) to time t. The solution above can be easily verified by inserting it into (2.22). However, we should be careful in differentiating an exponentiated operator, see also Exercise 2.3.
With the help of the formal solution of the time-dependent Schrödinger equation it can be shown that the normalization integral
$$\begin{aligned} \int _{-\infty }^{\infty }\mathrm{d}^3r|\varPsi ({\varvec{r}},t)|^2= & {} \langle \varPsi (t)|\varPsi (t)\rangle \nonumber \\= & {} \langle \varPsi (t_0)| \mathrm{e}^{\mathrm{i}\hat{H}^{\dagger }(t-t_0)/\hbar } \mathrm{e}^{-\mathrm{i}\hat{H}(t-t_0)/\hbar }|\varPsi (t_0)\rangle \nonumber \\= & {} \langle \varPsi (t_0)|\varPsi (t_0)\rangle \end{aligned}$$
(2.24)
is equal to unity for all times, if it was unity at the initial time \(t_0\). As in the previous subsection, this is true if the Hamiltonian is Hermitian
$$\begin{aligned} \hat{H}^{\dagger }=\hat{H}, \end{aligned}$$
(2.25)
which is equivalent to the time-evolution operator being unitary
$$\begin{aligned} \hat{U}^{\dagger }(t,t_0)=\hat{U}^{-1}(t,t_0), \end{aligned}$$
(2.26)
as can be inferred from the definition given in (2.23). Also the composition property of the time-evolution operator
$$\begin{aligned} \hat{U}(t,t_0)=\hat{U}(t,t')\hat{U}(t',t_0), \end{aligned}$$
(2.27)
where \(t'\in [t_0,t]\), can be deduced from its definition.
Things become much more involved as soon as the Hamiltonian is explicitly time-dependent. To investigate this case, it is very convenient to re-express the time-dependent Schrödinger equation in terms of an integral equation. It can be shown by insertion into (2.22) that
$$\begin{aligned} |\varPsi (t)\rangle =|\varPsi (t_0)\rangle -\frac{\mathrm{i}}{\hbar } \int _{t_0}^{t}\mathrm{d}t'\hat{H}(t')|\varPsi (t')\rangle \end{aligned}$$
(2.28)
is a formal solution of the time-dependent Schrödinger equation. The wavefunction, and thus the sought-for solution, however, also appears under the integral on the RHS. The equation above is therefore an implicit, so-called integral equation. It can be solved iteratively, starting with the zeroth iteration
$$\begin{aligned} |\varPsi ^0(t)\rangle =|\varPsi (t_0)\rangle \end{aligned}$$
(2.29)
of a constant wavefunction. For the first iteration, one uses the zeroth iteration for \(|\varPsi (t')\rangle \) on the RHS and finds
$$\begin{aligned} |\varPsi ^1(t)\rangle =|\varPsi (t_0)\rangle -\frac{\mathrm{i}}{\hbar } \int _{t_0}^{t}\mathrm{d}t'\hat{H}(t')|\varPsi (t_0)\rangle . \end{aligned}$$
(2.30)
After infinitely many iterations, the full solution for the time-evolution operator follows to be
$$\begin{aligned} \hat{U}(t,t_0)= & {} \hat{1}+\sum _{n=1}^{\infty }\left( \frac{-\mathrm{i}}{\hbar }\right) ^n \nonumber \\&\int _{t_0}^{t}\mathrm{d}t_n\int _{t_0}^{t_n}\mathrm{d}t_{n-1}\cdots \int _{t_0}^{t_{2}}\mathrm{d}t_{1} \hat{H}(t_n)\hat{H}(t_{n-1})\cdots \hat{H}(t_{1}), \end{aligned}$$
(2.31)
where the integration variables in the nested integrals fulfill the inequalities \(t_n\ge t_{n-1}\ge \dots \ge t_1 \ge t_0\) and in general, the order of the Hamiltonians taken at different times may not be interchanged.
One can confirm this Dyson series solution [4] for the time-evolution operator by inserting it into (2.23) and finally into the time-dependent Schrödinger equation. Alternatively, one could have derived (2.31) also by “time-slicing” the interval \([t_0,t]\) into N equal parts of length \(\Delta t\) and by successively applying the infinitesimal time-evolution operator
$$\begin{aligned} \hat{U}(t_\nu +\Delta t,t_\nu ) =\exp \left\{ -\frac{\mathrm{i}}{\hbar }\hat{H}(t_\nu )\Delta t \right\} {\mathop {=}\limits ^{\lim \Delta t\rightarrow 0}} \hat{1}-\frac{\mathrm{i}}{\hbar }\hat{H}(t_\nu )\Delta t, \end{aligned}$$
(2.32)
with constant Hamiltonians at the beginning of each time step. It is rewarding to explicitly check the equivalence of this procedure with the general formula above by working through Exercise 2.2. The fact that unitarity (and thus norm conservation) and the composition property of the time-evolution operator also hold in the time-dependent case can be shown most easily by using the decomposition in terms of infinitesimal evolution operators.
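The time-slicing procedure can be tried out directly. The following sketch (Python, \(\hbar =1\)) multiplies infinitesimal evolution operators (2.32) for a hypothetical driven two-level Hamiltonian \(\hat{H}(t)=\frac{1}{2}\hat{\sigma }_z+0.3\cos (2t)\,\hat{\sigma }_x\) (the parameters are chosen arbitrarily) and verifies that the product stays unitary.

```python
import numpy as np

# Time-slicing (2.32): successive infinitesimal evolution operators for a
# driven two-level system (hbar = 1; model parameters are illustrative).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, dt):
    """exp(-i H dt) for a Hermitian matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

t0, t1, N = 0.0, 10.0, 4000
dt = (t1 - t0) / N
U = np.eye(2, dtype=complex)
for nu in range(N):                      # H taken at the start of each slice
    t = t0 + nu * dt
    H = 0.5 * sz + 0.3 * np.cos(2.0 * t) * sx
    U = expm_herm(H, dt) @ U             # composition property (2.27)

unitary = np.allclose(U.conj().T @ U, np.eye(2))
print(unitary)                           # True: each factor, hence U, is unitary
```

Since each slice is generated by a Hermitian Hamiltonian, every factor is unitary and so is the full product, which is the discrete version of the norm-conservation argument given above.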

Exercise 2.2

Verify the first three terms in the series for the time-evolution operator by collecting terms up to \(\Delta t^2\) in the time-sliced expression
$$\begin{aligned} \hat{U}(t,t_0)=[\hat{1}-\frac{\mathrm{i}}{\hbar }\hat{H}(t_{N-1})\Delta t] [\hat{1}-\frac{\mathrm{i}}{\hbar }\hat{H}(t_{N-2})\Delta t]\cdots [\hat{1}-\frac{\mathrm{i}}{\hbar }\hat{H}(t_0)\Delta t] \end{aligned}$$
and taking the limit \(N\rightarrow \infty ,\Delta t \rightarrow 0\), such that \(N\Delta t=t-t_0\). Furthermore, show that \(\hat{U}(t,t_0)\) is unitary, using the expression above.
Although (2.31) gives the time-evolution operator only as an infinite series, this expression is the most convenient one to work with. For completeness, it shall be mentioned that a closed-form expression is also possible. With the definition
$$\begin{aligned} \hat{T}[\hat{A}(t_1)\hat{B}(t_2)]\equiv \left\{ \begin{array}{ll} \hat{B}(t_2)\hat{A}(t_1)&{}\mathrm{if}\quad t_2>t_1 \\ \hat{A}(t_1)\hat{B}(t_2)&{}\mathrm{if}\quad t_1>t_2 \end{array}\right. \end{aligned}$$
(2.33)
of the time-ordering operator, it can be shown that
$$\begin{aligned} \hat{U}(t,t_0)=\hat{T}\mathrm{e}^{-\mathrm{i}/\hbar \int _{t_0}^t\mathrm{d}t'\hat{H}(t')} \end{aligned}$$
(2.34)
holds for the time-evolution operator in the general case of time-dependent Hamiltonians. Furthermore, by working through Exercise 2.3, or even more easily from (2.31), it can be shown that the time-evolution operator fulfills
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\hat{U}(t)=-\frac{\mathrm{i}}{\hbar }\hat{H}\hat{U}(t), \end{aligned}$$
(2.35)
i.e., the appropriate time-dependent Schrödinger equation.

Exercise 2.3

For the verification of the formal solution of the TDSE, we need the time derivative of an operator of the form
$$\begin{aligned} \hat{U}(t)=\exp [\hat{B}(t)]. \end{aligned}$$
  1. (a)

    Calculate \(\frac{\mathrm{d}\hat{U}}{\mathrm{d}t}\) by using Taylor expansion of the exponential function (keep in mind that, in general, an operator does not commute with its time derivative).

     
  2. (b)

    Consider the special case \(\hat{B}(t)\equiv -\frac{\mathrm{i}}{\hbar }\hat{H}_0t\) and give a closed form solution for \(\frac{\mathrm{d}\hat{U}}{\mathrm{d}t}\).

     
  3. (c)

    Consider the special case \(\hat{B}(t)\equiv -\frac{\mathrm{i}}{\hbar } \int _0^{t}\mathrm{d}t'\hat{H}(t')\) and convince yourself that a simple closed-form expression for \(\frac{\mathrm{d}\hat{U}}{\mathrm{d}t}\) cannot be given!

     
  4. (d)
    Show that the construction \(\hat{U}(t)=\hat{T}\exp [\hat{B}(t)]\) with the time ordering operator and the operator \(\hat{B}\) from part \(\mathrm {(c)}\) allows for a closed form solution of the time-evolution operator as well as of its time derivative by proving that the relation
    $$\begin{aligned} \hat{T}\hat{B}^{n}= n!\left( \frac{-\mathrm{i}}{\hbar }\right) ^n \int _{0}^{t}\mathrm{d}t_n\int _{0}^{t_n}\mathrm{d}t_{n-1}\cdots \int _{0}^{t_{2}}\mathrm{d}t_{1} \hat{H}(t_n)\hat{H}(t_{n-1})\cdots \hat{H}(t_{1}) \end{aligned}$$
    holds.
     
In order to study some further properties of the time-evolution operator, we go into position representation again by multiplication of (2.23) from the left with \(\langle {\varvec{r}}\vert \) and insertion of unity in terms of position states. Setting \(t_0=0\), we find for the propagated wavefunction
$$\begin{aligned} \varPsi ({\varvec{r}},t)=\int \mathrm{d}^3r'\langle {\varvec{r}} \vert \hat{U}(t,0) \vert {\varvec{r}}' \rangle \varPsi ({\varvec{r}}',0). \end{aligned}$$
(2.36)
The position matrix element of the time-evolution operator
$$\begin{aligned} K({\varvec{r}},t;{\varvec{r}}',0):=\langle {\varvec{r}} \vert \hat{U}(t,0) \vert {\varvec{r}}' \rangle \end{aligned}$$
(2.37)
is frequently referred to as the propagator. As can be shown by differentiation of (2.36) with respect to time, \(K({\varvec{r}},t;{\varvec{r}}',0)\) itself is a solution of the time-dependent Schrödinger equation with initial condition \(K({\varvec{r}},0;{\varvec{r}}',0)=\delta ({\varvec{r}}-{\varvec{r}}')\). For this reason, and under the assumption that \(t\ge 0\), K is also termed time-dependent (retarded) Green’s function of the Schrödinger equation.
Again starting from (2.37), another important property of the propagator can be shown. Due to the fact that the propagator itself is a special wavefunction, the closure relation
$$\begin{aligned} K({\varvec{r}},t;{\varvec{r}}',0)=\int \mathrm{d}^3r'' K({\varvec{r}},t;{\varvec{r}}'',t'') K({\varvec{r}}'',t'';{\varvec{r}}',0), \end{aligned}$$
(2.38)
follows. It could have also been derived directly from (2.27) by going into position representation and inserting an additional unit operator in terms of position eigenstates.

2.1.3 Spectral Information

In the applications to be discussed in the following chapters, the initial state frequently is assumed to be the ground state of the undriven problem. In this section we will see how spectral information can, in general, be extracted from the propagator via Fourier transformation.

To extract spectral information of autonomous systems from time-series, we start from (2.37) in the case of a time-independent Hamiltonian and insert unity in terms of a complete set of orthonormal energy eigenstates \(\vert \psi _{n}\rangle =\vert n\rangle \) twice, in order to arrive at the spectral representation
$$\begin{aligned} K({\varvec{r}},t;{\varvec{r}}',0)=\sum _{n=0}^{\infty }\psi _{n}^{*}({\varvec{r}}')\psi _{n}({\varvec{r}}) \exp \left\{ -\frac{\mathrm{i}}{\hbar }E_{n}t\right\} \end{aligned}$$
(2.39)
of the propagator. Taking the trace (let \({\varvec{r}}'={\varvec{r}}\) and integrate over position) of this expression one arrives at
$$\begin{aligned} G(t,0):= & {} \int \mathrm{d}^3r\, K({\varvec{r}},t;{\varvec{r}} ,0) \nonumber \\= & {} \int \mathrm{d}^3r \sum _{n=0}^{\infty } \vert \psi _{n}({\varvec{r}}) \vert ^2 \exp \left\{ -\frac{\mathrm{i}}{\hbar }E_{n}t\right\} \nonumber \\= & {} \sum _{n=0}^{\infty } \exp \left\{ -\frac{\mathrm{i}}{\hbar }E_{n}t\right\} . \end{aligned}$$
(2.40)
For large negative imaginary times only the ground state contribution to the sum above survives. This observation leads to the so-called Feynman-Kac formula
$$\begin{aligned} E_0=-\lim _{\tau \rightarrow \infty }\frac{1}{\tau }\mathrm{ln} G(-\mathrm{i}\hbar \tau ,0). \end{aligned}$$
(2.41)
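For the harmonic oscillator with \(E_n=(n+1/2)\hbar \omega \) (here simply taken as given, with \(\hbar =\omega =1\)), the Feynman-Kac formula can be checked by summing the trace directly:

```python
import numpy as np

# Feynman-Kac formula (2.41) for the harmonic oscillator, E_n = n + 1/2
# (hbar = omega = 1), summing the imaginary-time trace G(-i*hbar*tau, 0).
def G_imag(tau, n_max=200):
    n = np.arange(n_max)
    return np.sum(np.exp(-tau * (n + 0.5)))

for tau in (1.0, 5.0, 20.0):
    print(tau, -np.log(G_imag(tau)) / tau)   # approaches E_0 = 0.5 as tau grows
```

Already at moderate \(\tau \) the excited-state contributions \(\mathrm{e}^{-\tau (E_n-E_0)}\) are negligible, so the limit in (2.41) converges quickly.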
If one performs a Laplace transform on G(t, 0), then the energy-dependent Green’s function
$$\begin{aligned} G(z)= & {} \frac{\mathrm{i}}{\hbar }\int _{0}^{\infty } \mathrm{d}t\, G(t,0) \exp \left\{ \frac{\mathrm{i}}{\hbar }zt\right\} \nonumber \\= & {} \sum _{n=0}^{\infty }\frac{1}{E_{n}-z} \end{aligned}$$
(2.42)
emerges. This function has poles at the energy levels of the underlying eigenvalue problem.
For numerical purposes it is often helpful to study the time-evolution of wavepackets. By considering the auto-correlation function of an initial wavefunction
$$\begin{aligned} |\varPsi _\alpha \rangle = \sum _n|n\rangle \langle n|\varPsi _\alpha \rangle =\sum _n c_n^\alpha |n\rangle , \end{aligned}$$
(2.43)
which is defined according to
$$\begin{aligned} c_{\alpha \alpha }(t) :=\langle \varPsi _\alpha |\mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }|\varPsi _\alpha \rangle =\sum _n|c_n^\alpha |^2\mathrm{e}^{-\mathrm{i}E_nt/\hbar }, \end{aligned}$$
(2.44)
one gains the local spectrum by Fourier transformation
$$\begin{aligned} S(\omega )= & {} \frac{1}{2\pi \hbar } \int \mathrm{d}t\, \mathrm{e}^{\mathrm{i}\omega t}{c_{\alpha \alpha }}(t) \nonumber \\= & {} \sum _{n=0}^\infty |c_n^\alpha |^{2}\delta (E_n-\hbar \omega ). \end{aligned}$$
(2.45)
This result is a series of peaks at the eigenvalues of the problem that are weighted with the absolute square of the overlap of the initial state with the corresponding eigenstate \(|n\rangle \). A recent development in this area is the use of so-called harmonic inversion techniques instead of Fourier transformation. This is a nonlinear procedure which allows for the use of rather short time series to extract spectral information [5].
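A small numerical experiment illustrates (2.44) and (2.45). The sketch below (Python, \(\hbar =1\)) synthesizes an autocorrelation function from three hypothetical levels and recovers the weights \(|c_n^\alpha |^2\) as peak heights of the finite-time Fourier transform; the energies and weights are invented for illustration, and the \(\delta \)-functions of (2.45) are broadened by the finite propagation time.

```python
import numpy as np

# Local spectrum (2.45) from the autocorrelation function (2.44), for three
# hypothetical energy levels and weights (hbar = 1; all values invented).
E = np.array([0.5, 1.5, 2.5])
w = np.array([0.5, 0.3, 0.2])                 # |c_n^alpha|^2, summing to one

T = 200.0                                     # finite propagation time
t = np.linspace(0.0, T, 4000, endpoint=False)
dt = t[1] - t[0]
c = (w * np.exp(-1j * np.outer(t, E))).sum(axis=1)   # c_alpha_alpha(t)

# finite-time transform: peaks of height |c_n|^2 appear at omega = E_n
omega = np.linspace(0.0, 3.5, 701)
S = np.array([np.real(np.sum(np.exp(1j * om * t) * c)) * dt / T
              for om in omega])
print(omega[np.argmax(S)])                    # strongest peak at E = 0.5
```

With longer time series the peaks sharpen toward the \(\delta \)-functions of (2.45); the harmonic-inversion techniques mentioned above achieve the same resolution from much shorter series.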
Not only the spectrum but also the eigenfunctions can be determined from time-series. To this end one considers the Fourier transform of the wavefunction at one of the energies just determined
$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{2T}\int _{-T}^{T}\mathrm{d}t\, \mathrm{e}^{\mathrm{i}E_mt/\hbar } |\varPsi _\alpha (t)\rangle= & {} \sum _{n=0}^\infty c_n^\alpha \lim _{T\rightarrow \infty }\frac{1}{2T}\int _{-T}^{T}\mathrm{d}t\, \mathrm{e}^{-\mathrm{i}(E_n-E_m)t/\hbar } |n\rangle \nonumber \\= & {} c_m^\alpha |m\rangle . \end{aligned}$$
(2.46)
This procedure filters out an eigenfunction if the overlap with the initial state is sufficiently high.
Alternatively, the concept of imaginary time propagation is again helpful if one wants to determine the ground state. To this end the time evolution of (2.43)
$$\begin{aligned} |\varPsi _\alpha (-\mathrm{i}\hbar \tau )\rangle =\sum _n c_n^\alpha |n\rangle \mathrm{e}^{-\tau E_n} \end{aligned}$$
(2.47)
is considered for large \(\tau \), when only the ground state contribution survives. At the end of the calculation the ground state has to be renormalized and can be subtracted from the initial wavefunction. Repeating the procedure with the modified initial state, the next highest energy state can in principle be determined.
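A sketch of the imaginary-time procedure (Python, \(\hbar =m=\omega =1\), using split-operator stepping with renormalization after every step) for the harmonic oscillator:

```python
import numpy as np

# Imaginary-time propagation (2.47) filtering out the harmonic-oscillator
# ground state (hbar = m = omega = 1).
x = np.linspace(-10.0, 10.0, 512, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
V = 0.5 * x ** 2
dtau = 0.01

psi = np.exp(-(x - 1.0) ** 2)            # arbitrary start, overlaps ground state
for _ in range(2000):                    # tau = 20: excited states are damped out
    psi = psi * np.exp(-0.5 * V * dtau)
    psi = np.fft.ifft(np.exp(-0.5 * k ** 2 * dtau) * np.fft.fft(psi)).real
    psi = psi * np.exp(-0.5 * V * dtau)
    psi /= np.sqrt(np.sum(psi ** 2) * dx)   # renormalize after each step

psik = np.fft.fft(psi)                   # energy expectation of the result
E0 = (0.5 * np.sum(np.abs(psik) ** 2 * k ** 2) / np.sum(np.abs(psik) ** 2)
      + np.sum(V * psi ** 2) * dx)
print(E0)                                # close to the exact E_0 = 1/2
```

Because every excited state decays as \(\mathrm{e}^{-\tau (E_n-E_0)}\) relative to the ground state, the renormalized wavefunction converges to \(|0\rangle \) regardless of the (nonorthogonal) starting guess.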

Exercise 2.4

Use the propagator of the 1D harmonic oscillator with \(V(x)=\frac{1}{2}m\omega _\mathrm{e}^2x^2\), given by [6]
$$\begin{aligned} K(x,t;x',0)=\sqrt{\frac{m\omega _\mathrm{e}}{2\pi \mathrm{i}\hbar \sin (\omega _\mathrm{e}t)}} \exp \left\{ \frac{\mathrm{i}m\omega _\mathrm{e}}{2\hbar \sin (\omega _\mathrm{e} t)} \left[ (x^2+x'^2)\cos (\omega _\mathrm{e}t)-2xx'\right] \right\} \end{aligned}$$
to derive the spectrum.

Hint: Use the geometric series.

2.1.4 Analytical Solutions for Wavepackets

To conclude this introductory section, we review an Ansatz for the solution of the time-dependent Schrödinger equation with the help of a Gaussian wavepacket and will derive equations of motion for the parameters of this wavepacket. We then continue with a review of the dynamics of a wavepacket in the box potential. Due to the specific nature of the spectrum in this case, insightful analytic predictions for the probability density as a function of time can be made.

2.1.4.1 Gaussian Wavepacket Dynamics (GWD)

Already in 1926, Schrödinger stressed the central importance of Gaussian wavepackets in the transition from “micro-” to “macro-mechanics” [7]. For this reason, we will now consider a wavefunction in the form of a Gaussian as the solution of the time-dependent Schrödinger equation.

In Heller’s GWD [8], one uses a complex-valued Gaussian in position representation (for reasons of simplicity here in 1D),
$$\begin{aligned} \varPsi (x,t) =\left( \frac{2\alpha _0}{\pi }\right) ^{1/4} \exp \left\{ -\alpha _t(x-q_t)^2+\frac{\mathrm{i}}{\hbar } p_t(x-q_t) +\frac{\mathrm{i}}{\hbar }{\delta _t}\right\} , \end{aligned}$$
(2.48)
as an Ansatz for the solution of (2.1). The expression above contains real-valued parameters \(q_t,p_t\in \mathbf{R}\) and complex-valued ones \(\alpha _t,\delta _t\in \mathbf{C}\) that are as yet undetermined. The initial parameters \(q_0,p_0,\alpha _0=\alpha _{t=0}\) shall be given and \(\delta _0=0\). Equations of motion for the four time-dependent parameters can be gained by a Taylor expansion of the potential around \(x=q_t\), according to
$$\begin{aligned} V(x,t)\approx V(q_t,t)+V'(q_t,t)(x-q_t)+\frac{1}{2!}V''(q_t,t)(x-q_t)^2 \end{aligned}$$
(2.49)
and by using the time-dependent Schrödinger equation.
After insertion of the time- and position-derivatives of the wavefunction
$$\begin{aligned} \dot{\varPsi }(x,t)= & {} \left\{ -\dot{\alpha _t}(x-q_t)^2+2\alpha _t\dot{q_t}(x-q_t) +\frac{\mathrm{i}}{\hbar }\dot{p_t}(x-q_t)-\frac{\mathrm{i}}{\hbar }p_t\dot{q_t} +\frac{\mathrm{i}}{\hbar }\dot{\delta _t}\right\} \nonumber \\&\varPsi (x,t), \end{aligned}$$
(2.50)
$$\begin{aligned} \varPsi '(x,t)= & {} \left[ -2\alpha _t(x-q_t) +\frac{\mathrm{i}}{\hbar } p_t\right] \varPsi (x,t), \end{aligned}$$
(2.51)
$$\begin{aligned} \varPsi ''(x,t)= & {} \left\{ -2\alpha _t+ \left[ -2\alpha _t(x-q_t)+\frac{\mathrm{i}}{\hbar } p_t\right] ^2\right\} \varPsi (x,t), \end{aligned}$$
(2.52)
the coefficients of equal powers of \((x-q_t)\) can be compared, as the time-dependent Schrödinger equation has to be fulfilled at every x. This leads to the following set of equations
$$\begin{aligned} (x-q_t)^2:&\quad -\mathrm{i}\hbar \dot{\alpha _t}= -\frac{2\hbar ^2}{m}\alpha _t^2+\frac{1}{2}V''(q_t,t), \end{aligned}$$
(2.53)
$$\begin{aligned} (x-q_t)^1:&\quad \mathrm{i}\hbar 2\alpha _t\dot{q_t}-\dot{p_t} =\mathrm{i}\hbar 2\alpha _t\frac{p_t}{m}+V'(q_t,t), \end{aligned}$$
(2.54)
$$\begin{aligned} (x-q_t)^0:&\quad p_t\dot{q_t}-\dot{\delta _t}=\frac{\hbar ^2}{m}\alpha _t +\frac{p_t^2}{2m}+V(q_t,t). \end{aligned}$$
(2.55)
From (2.54), after separation of real and imaginary part, we find
$$\begin{aligned} \dot{q_t}= & {} \frac{p_t}{m}, \end{aligned}$$
(2.56)
$$\begin{aligned} \dot{p_t}= & {} -V'(q_t,t). \end{aligned}$$
(2.57)
The real-valued parameters therefore fulfill Hamilton equations with initial conditions \(q_{t=0}=q_0\) and \(p_{t=0}=p_0\). From (2.55) and with the definition of the classical Lagrangian
$$\begin{aligned} L=T_\mathrm{k}-V, \end{aligned}$$
(2.58)
where
$$\begin{aligned} T_\mathrm{k}=\frac{p_t^2}{2m} \end{aligned}$$
(2.59)
is the classical kinetic energy, the differential equation
$$\begin{aligned} \dot{\delta _t}=L-\frac{\hbar ^2}{m}\alpha _t \end{aligned}$$
(2.60)
can be derived. The remaining equation for the inverse width parameter \(\alpha _t\) follows from (2.53). It is the nonlinear Riccati differential equation
$$\begin{aligned} \dot{\alpha _t}=-\frac{2\mathrm{i}\hbar }{m}\alpha _t^2 +\frac{\mathrm{i}}{2\hbar }V''(q_t,t). \end{aligned}$$
(2.61)
The width of the Gaussian is thus a function of time. In contrast to an approach to be discussed later in this chapter, in which the width of the Gaussians is fixed (i.e., “frozen”), the approach reviewed here is therefore also referred to as the “thawed” GWD.

In the cases of the free particle and of the harmonic oscillator, all equations of motion can be solved analytically and, because the Taylor expansion is also exact in these cases, the procedure above leads to an exact analytic solution of the time-dependent Schrödinger equation.
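For the harmonic oscillator, this exactness can be confirmed numerically. The following sketch (Python, \(\hbar =m=\omega _\mathrm{e}=1\)) integrates (2.56), (2.57), (2.60) and (2.61) with a standard Runge-Kutta scheme over one classical period; \(q_t\) and \(\alpha _t\) return to their initial values, the width oscillating at twice the oscillator frequency.

```python
import numpy as np

# Thawed GWD: (2.56), (2.57), (2.60), (2.61) for V(x) = x^2/2
# (hbar = m = omega_e = 1), integrated with a standard RK4 scheme.
def rhs(y):
    q, p, alpha, delta = y
    L = 0.5 * p.real ** 2 - 0.5 * q.real ** 2     # classical Lagrangian (2.58)
    return np.array([p,                           # dq/dt      (2.56)
                     -q,                          # dp/dt      (2.57), V' = q
                     -2j * alpha ** 2 + 0.5j,     # dalpha/dt  (2.61), V'' = 1
                     L - alpha],                  # ddelta/dt  (2.60)
                    dtype=complex)

y = np.array([1.0, 0.0, 0.3, 0.0], dtype=complex) # q0, p0, alpha0, delta0
steps = 6283
dt = 2 * np.pi / steps                            # one classical period
for _ in range(steps):
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(y[0].real, y[2])   # q and alpha back at q0 = 1 and alpha0 = 0.3
```

Note that the Riccati equation (2.61) has the fixed point \(\alpha =1/2\) (the coherent-state width of Exercise 2.5d); for any other initial \(\alpha _0\), the width parameter oscillates around it with period \(\pi \).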

Exercise 2.5

Use the GWD-Ansatz to solve the TDSE.

  1. (a)

    Use the differential equations for \(q_t,p_t,\alpha _t,\delta _t\) in order to show that the Gaussian wavepacket fulfills the equation of continuity.

     
  2. (b)

    Solve the differential equations for \(q_t,p_t,\alpha _t,\delta _t\) for the free particle case \(V(x)=0\).

     
  3. (c)

    Solve the differential equations for \(q_t,p_t,\alpha _t,\delta _t\) for the harmonic oscillator case \(V(x)=\frac{1}{2}m\omega _\mathrm{e}^2x^2.\)

     
  4. (d)

    Calculate \(\langle \hat{x}\rangle ,\langle \hat{p}\rangle \), and \(\Delta x=\sqrt{\langle \hat{x}^2\rangle -\langle \hat{x}\rangle ^2}\), \(\Delta p=\sqrt{\langle \hat{p}^2\rangle -\langle \hat{p}\rangle ^2}\) and from this the uncertainty product using the general Gaussian wavepacket. Discuss the special cases from \(\mathrm {(b)}\) and \(\mathrm {(c)}\). What do the results for the harmonic oscillator simplify to in the case \(\alpha _{t=0}=m\omega _\mathrm{e}/(2\hbar )\)?

     

The GWD method can also be applied to problems with nonlinear classical dynamics, where it is typically valid only for short times, however. In this context one often uses the notion of the Ehrenfest time, after which non-Gaussian distortions of a wavepacket become manifest. In the nonlinear case, the solutions of the equations of motion in general have to be determined numerically (see Sect. 2.3.4), and the Taylor expansion, and thus the final result, is only an approximation.

2.1.4.2 Particle in an Infinite Square Well

As another example for an exactly solvable problem in quantum dynamics, we now consider the evolution of an initial wavepacket in an infinite square well (box potential). This problem has been presented in the December 1995 issue of “Physikalische Blätter” [9] and leads to aesthetically pleasing space-time pictures, which are sometimes referred to as “quantum carpets”.

In the work of Kinzel, a Gaussian initial state localized close to the left wall has been investigated. For ease of analytic calculations, let us consider a wavefunction made up of a sum of eigenfunctions with equal weight in the following. The eigenfunctions and eigenvalues of the square well extending from 0 to L are given by
$$\begin{aligned} \psi _n(x)= & {} \sqrt{\frac{2}{L}}\sin \left\{ \frac{n\pi }{L}x\right\} , \end{aligned}$$
(2.62)
$$\begin{aligned} E_n= & {} n^2E_1,\qquad n=1,2,3,\ldots \end{aligned}$$
(2.63)
with the fundamental energy
$$\begin{aligned} E_1=\frac{1}{2m}\left( \frac{\hbar \pi }{L}\right) ^2. \end{aligned}$$
(2.64)
The corresponding frequency and period are
$$\begin{aligned} \omega =E_1/\hbar ;\qquad T=\frac{2\pi }{\omega }. \end{aligned}$$
(2.65)

Exercise 2.6

A particle shall be in the eigenstate \(\psi _n\) with energy \(E_n\) of an infinite potential well of width L (\(0\le x \le L\)). Let us assume that the width of the well is suddenly doubled.

  1. (a)

    Calculate the probability to find a particle in the eigenstate \(\psi _m'\) with energy \(E_m'\) of the new well.

     
  2. (b)

    Calculate the probability to find a particle in state \(\psi _m'\) whose energy \(E_m'\) is equal to \(E_n\).

     
  3. (c)

    Consider the time evolution for \(n=1\), i.e. at \(t=0\) the wavefunction is the lowest eigenfunction of the small well \(\varPsi (x,0)=\psi _1(x)\). Calculate the smallest time \(t_\mathrm{min}\) for which \(\varPsi (x,t_\mathrm{min})=\varPsi (x,0)\).

     
  4. (d)

    Draw a picture of the wavefunction \(\varPsi (x,t)\) at \(t=t_\mathrm{min}/2\).

     
In the following we will use dimensionless variables for position and time
$$\begin{aligned} \xi =\frac{x}{L}\;;\qquad \tau =\frac{t}{T} \end{aligned}$$
(2.66)
and will consider an initial wavefunction consisting of N eigenfunctions with equal weights. As can be seen in Fig. 1 of [10], for large N the wavefunctions have the same initial localization properties as the ones used in [9]. Each eigenfunction evolves with the exponential of the corresponding eigenenergy according to (2.14). The time evolution of the normalized wavepacket thus is
$$\begin{aligned} \varPsi (\xi ,\tau )= & {} \sqrt{\frac{2}{L N}} \sum _{n=1}^{N}\sin (n\pi \xi )\exp (2\pi \mathrm{i}n^2 \tau ). \end{aligned}$$
(2.67)
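A minimal numerical sketch of (2.67) (with \(L=1\); N and the grids are illustrative choices) confirms that the superposition stays normalized and is strictly periodic in \(\tau \) with period 1:

```python
import numpy as np

# Superposition (2.67) of N = 20 equal-weight box eigenfunctions, in the
# dimensionless variables xi = x/L and tau = t/T (L = 1 here).
N = 20
n = np.arange(1, N + 1)[:, None]

def psi(xi, tau):
    return np.sqrt(2.0 / N) * np.sum(
        np.sin(n * np.pi * xi) * np.exp(2j * np.pi * n ** 2 * tau), axis=0)

xi = np.linspace(0.0, 1.0, 2001)

# the wavepacket stays normalized at all times ...
norm = np.trapz(np.abs(psi(xi, 0.31)) ** 2, xi)

# ... and is periodic in tau with period 1, since exp(2*pi*i*n**2) = 1
diff = np.max(np.abs(psi(xi, 1.13) - psi(xi, 0.13)))
```

Evaluating \(|\varPsi (\xi ,\tau )|\) on a dense \(\xi \)-\(\tau \) grid with this function produces the "quantum carpet" of Fig. 2.1.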
In Fig. 2.1, we show the absolute value of the time-evolved wavefunction with \(N=20\) in the range \(0\le \xi \le 1;\; 0\le \tau \le 0.5\) of the \(\xi \)-\(\tau \) plane. In this density plot, dark areas correspond to low and bright areas to high values of the plotted function. A detailed explanation of the features of the time-evolved wavefunction has been given in [10]. Let us here concentrate on the suppression of the wavefunction along the lines \(\tau =\frac{\xi }{2k}\). In order to make progress, we express the sine function with the help of exponentials and arrive at terms of the form
Fig. 2.1

Time-evolution of the absolute value of a superposition of 20 eigenfunctions of the box potential [10]

$$\begin{aligned} \vartheta (\pm \xi ,\tau )=\sum _{n=1}^{N}q^{n^2}\mathrm{e}^{\pm \mathrm{i}n\pi \xi }, \end{aligned}$$
(2.68)
with \(q\equiv \mathrm{e}^{2\pi \mathrm{i}\tau }\). We now shift the argument \(\xi \) by \(2 k\tau \), with positive \(k=1,2,3,\ldots \). Using \(n=n+k/2-k/2\) we get
$$\begin{aligned} \vartheta (\xi +2k\tau ,\tau )= & {} \sum _{n=1}^{N}q^{n^2}\mathrm{e}^{\mathrm{i}n\pi (\xi +2k\tau )} \nonumber \\= & {} q^{-\left( \frac{k}{2}\right) ^2}\mathrm{e}^{-\mathrm{i}\frac{k}{2}\pi \xi } \sum _{n=1}^{N}q^{\left( n+\frac{k}{2}\right) ^2}\mathrm{e}^{\mathrm{i}\left( n+\frac{k}{2}\right) \pi \xi }, \end{aligned}$$
(2.69)
$$\begin{aligned} \vartheta (-(\xi +2k\tau ),\tau )= & {} \sum _{n=1}^{N}q^{n^2}\mathrm{e}^{-\mathrm{i}n\pi (\xi +2k\tau )} \nonumber \\= & {} q^{-\left( \frac{k}{2}\right) ^2}\mathrm{e}^{-\mathrm{i}\frac{k}{2}\pi \xi } \sum _{n=1}^{N}q^{\left( n-\frac{k}{2}\right) ^2}\mathrm{e}^{-\mathrm{i}\left( n-\frac{k}{2}\right) \pi \xi }. \end{aligned}$$
(2.70)
The wavefunction along the straight lines originating from \(\xi =0\) can thus be written as
$$\begin{aligned} \varPsi (2k\tau ,\tau )= & {} \sqrt{\frac{2}{L N}} \frac{1}{2\mathrm{i}} q^{-\left( \frac{k}{2}\right) ^2} \sum _{n=1}^{N}\{q^{\left( n+\frac{k}{2}\right) ^2}-q^{\left( n-\frac{k}{2}\right) ^2}\} \nonumber \\= & {} \sqrt{\frac{2}{L N}} \frac{1}{2\mathrm{i}} q^{-\left( \frac{k}{2}\right) ^2} \nonumber \\&\{q^{\left( N+\frac{k}{2}\right) ^2}\dots +q^{\left( N+1-\frac{k}{2}\right) ^2} -q^{\left( \frac{k}{2}\right) ^2}\dots -q^{\left( 1-\frac{k}{2}\right) ^2} \}. \end{aligned}$$
(2.71)
Using the analogy with two combs shifted against each other, as shown in Fig. 2.2, it is obvious that of the expression in curly brackets above, only 2k terms of order 1 survive, because the major part of the sum cancels term-wise. In the case \(k\ll \sqrt{N}\), we thus can conclude
$$\begin{aligned} \varPsi (2k\tau ,\tau )\sim k/\sqrt{N}\approx 0. \end{aligned}$$
(2.72)
These considerations explain the “channels” of near extinction of the wavefunction along lines of slope 1/2k, emanating from \((\xi =0,\tau =0)\) to the right. For a given value of N, the higher the value of k, the fewer terms cancel each other and the less visible the “channel effect” becomes.
Fig. 2.2

Comb analogy for \(N=20\) and \(k=2\). The overlapping parts of the two combs represent the terms that cancel each other. Only the terms without a partner (here 4) do not cancel
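The counting argument can be checked directly: along \(\xi =2k\tau \) the telescoping in (2.71) gives the rigorous bound \(|\varPsi |\le k\sqrt{2/N}\), while the root-mean-square of \(|\varPsi |\) over \(\xi \) equals 1 at any time. A sketch (the value of \(\tau \) is an arbitrary illustrative choice):

```python
import numpy as np

# Channel suppression along xi = 2*k*tau: only 2k of the terms in (2.71)
# survive, so |Psi| <= k*sqrt(2/N) there, while the rms of |Psi| over xi is 1.
N = 20
n = np.arange(1, N + 1)

def psi(xi, tau):
    return np.sqrt(2.0 / N) * np.sin(n * np.pi * xi) @ np.exp(2j * np.pi * n ** 2 * tau)

tau = 0.27
on_channel = abs(psi(2 * tau, tau))   # k = 1, i.e. the point xi = 2*tau
bound = 1 * np.sqrt(2.0 / N)          # k*sqrt(2/N) for k = 1

xi = np.linspace(0.0, 1.0, 2001)
rms = np.sqrt(np.trapz(np.array([abs(psi(z, tau)) ** 2 for z in xi]), xi))
```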

2.2 Analytical Approaches to Solve the TDSE

A review of some exactly analytically solvable cases in nonrelativistic quantum dynamics is given in [11], and we have seen two examples for wavepacket solutions at the end of the previous section. For most problems of interest, however, an exact analytical solution of the time-dependent Schrödinger equation cannot be found. It is therefore of considerable interest to devise alternative approaches to quantum dynamics, as well as approximate or exact Ansätze that are generally applicable and lead to viable approximate and/or numerical schemes.

A notable reformulation of the time-dependent Schrödinger equation is the Feynman path integral expression for the propagator [12]. This is of utmost importance in the following, because from the path integral a much used approximation can be derived: the time-dependent semiclassical formulation of quantum theory. Furthermore, in the case of small external perturbations, time-dependent perturbation theory may be the method of choice for the solution of the time-dependent Schrödinger equation. Moreover, for systems with many degrees of freedom, as a first approximation, the wavefunction can be factorized. We thus discuss the so-called Hartree Ansatz and for the first time also the Born-Oppenheimer method in this chapter. Finally, the exact analytical Floquet Ansatz for the treatment of periodically driven quantum systems is reviewed.

The discussion of the numerical implementation of some of these concepts will be postponed to Sect. 2.3.

2.2.1 Feynman’s Path Integral

For time-dependent quantum problems, which occur naturally if we want to describe the interaction of a system with a laser field, as we will see in Chap.  3, an approach based on the propagator is very well suited. With the propagator at hand, we can calculate the time evolution of every wavefunction according to (2.36).

2.2.1.1 The Propagator as a Path Integral

A very elegant approach that gives an explicit formula for the propagator goes back to an idea that can be found in later editions of the famous textbook by Dirac [13], and has finally been formulated by Feynman [12]. The derivation of the path integral is a prime example of the new quantum mechanical reasoning in terms of probability amplitudes, in contrast to the classical way of thinking in probabilities. The famous double slit experiment serves as the chief parable for understanding this new twist of quantum mechanical thinking. The postulates that form the basis for the derivation of the path integral are:
Postulate 1:

If, in the particle picture, an event (e.g., an electron hitting the screen after passing a double slit) can have occurred in two mutually exclusive ways, the corresponding amplitudes have to be added to find the total amplitude.

Postulate 2:

If an event consists of two successive events, the corresponding amplitudes multiply.

In the book by Feynman and Hibbs [6] it is shown how the use of the two postulates above leads to the composition (or semigroup) property of the propagator, which we have already stated in (2.38). Beyond this point, however, physical intuition is needed: how can the probability amplitude \(K({\varvec{r}}_f,t;{\varvec{r}}_i,0)\) itself be determined?
Inspired by an idea of Dirac, and using the first postulate above, Feynman in 1948 [12] expressed the propagator as a sum over all paths from \({\varvec{r}}_i\) to \({\varvec{r}}_f\) in time t. Each of these paths contributes with a phase factor to the sum. The phase is the ratio of the classical action of the respective path and \(\hbar \). Mathematically this can be written as
$$\begin{aligned} K({\varvec{r}}_f,t;{\varvec{r}}_i,0) = \int _{{\varvec{r}}(0)={\varvec{r}}_i}^{{\varvec{r}}(t)={\varvec{r}}_f} \mathrm{d}[{\varvec{r}}]\exp \left\{ \frac{\mathrm{i}}{\hbar }S[{\varvec{r}}]\right\} , \end{aligned}$$
(2.73)
with Hamilton’s principal function (being a functional of the path, which is expressed by the square brackets)
$$\begin{aligned} S[{\varvec{r}}]=\int _{0}^{t}\mathrm{d}t'L. \end{aligned}$$
(2.74)
The classical Lagrangian L has already appeared in (2.58). The symbol \(\int \mathrm{d}[{\varvec{r}}]\) denotes the integral over all paths (functional integral). In contrast to standard integration, where one sums a function over a certain range of a variable, in a path integral one sums a function of a function (a so-called functional) over a certain class of functions that are parametrized by t and obey the boundary conditions \({\varvec{r}}(0)={\varvec{r}}_{i},{\varvec{r}}(t)={\varvec{r}}_{f}\). Feynman’s path integral is therefore sometimes referred to as a “sum over histories”. In Fig. 2.3, for an arbitrary one-dimensional potential, we depict the classical path and some other, equally important nonclassical paths.
Fig. 2.3

Paths in space-time. Time-slicing and the classical path (thick red line) are also depicted

The exact analytic calculation of the path integral, apart from a few exceptions involving quadratic Lagrangians, is not possible, and one therefore frequently resorts to approximate solutions. Let us, however, hint at a principal way to calculate the path integral. In order to make progress, the time interval [0, t] is divided into N equal parts of length \(\Delta t\), analogously to the procedure in Sect. 2.1.2. This “time-slicing” is depicted in Fig. 2.3. In this way the path integral is discretized and, in the limit \(N\rightarrow \infty \), can be written as an \((N-1)\)-dimensional Riemann integral times a normalization constant \(B_{N}\). In one spatial dimension this reads
$$\begin{aligned} K(x_{f},t&;x_{i},0)=\lim _{N\rightarrow \infty } B_{N} \int \mathrm{d}x_{1} \int \cdots \int \mathrm{d}x_{N-1} \nonumber \\&\exp \left\{ \frac{\mathrm{i}}{\hbar } \sum _{j=1}^{N}\left[ \frac{m(x_{j}-x_{j-1})^2}{2\Delta t} -V\left( \frac{x_{j}+x_{j-1}}{2}\right) \Delta t\right] \right\} . \end{aligned}$$
(2.75)
Proving this expression and deriving \(B_N\) is most elegantly done by using a Weyl transformation and will be performed explicitly in Appendix 2.A.
Obviously, the expression above can be interpreted as the successive application of the closure relation (2.38), concatenating short-time propagators
$$\begin{aligned} K(x_{j},\Delta t;x_{j-1},0)\sim \exp \left\{ \frac{\mathrm{i}}{\hbar } \left[ \frac{m(x_{j}-x_{j-1})^2}{2\Delta t} -V\left( \frac{x_{j}+x_{j-1}}{2}\right) \Delta t\right] \right\} . \end{aligned}$$
(2.76)
It considerably deepens one’s understanding to derive the short-time propagator directly from the infinitesimal time-evolution operator of (2.32). The interesting question of how the Hamilton operator “mutates” into the classical Lagrangian is answered in Exercise 2.7. In the second part of this exercise, the time-dependent Schrödinger equation can be derived. To this end, a simplified version of the short-time propagator is sufficient, with a simple end-point rule for the discretization of the potential part of the action, i.e., with \(V\left( \frac{x_{j}+x_{j-1}}{2}\right) \) replaced by \(V(x_{j-1})\) in (2.76).
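The concatenation idea is easy to test numerically in the free case: for \(V=0\) the short-time kernel (2.76) is actually exact for any step size, so a single application to a Gaussian must reproduce the well-known analytic spreading of a free wavepacket. A sketch with \(\hbar =m=1\); the grids and the step size are illustrative choices:

```python
import numpy as np

# One application of the short-time propagator (2.76) with V = 0 to an
# initial Gaussian; the result is compared with the analytic free-particle
# spreading Gaussian (hbar = m = 1).
dt = 0.5
y = np.linspace(-12.0, 12.0, 12001)   # integration variable
x = np.linspace(-5.0, 5.0, 201)       # final positions

psi0 = np.pi ** -0.25 * np.exp(-y ** 2 / 2)          # initial state
K = np.sqrt(1.0 / (2j * np.pi * dt)) * np.exp(1j * (x[:, None] - y) ** 2 / (2 * dt))
psi_num = np.trapz(K * psi0, y, axis=1)              # Psi(x, dt)

# exact free evolution of the same Gaussian
psi_exact = np.pi ** -0.25 / np.sqrt(1 + 1j * dt) * np.exp(-x ** 2 / (2 * (1 + 1j * dt)))
err = np.max(np.abs(psi_num - psi_exact))
```

The remaining error is purely that of the quadrature; with a potential present, repeated application of the kernel with small \(\Delta t\) realizes the time-slicing of (2.75).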

2.7.

Study the short-time propagator and use it to derive the TDSE.

  1. (a)
    Derive the short-time propagator starting from
    $$\begin{aligned} \hat{U}(\Delta t)=\exp \{-\mathrm{i}\hat{H}\Delta t/\hbar \} \end{aligned}$$
    for the infinitesimal time-evolution operator.

    Hint: Use first order Taylor expansion of the exponential function.

     
  2. (b)
    Use the short-time propagator in order to propagate an arbitrary wavefunction \(\varPsi (x,t)\) over an infinitesimal time interval \(\Delta t\) via
    $$\begin{aligned} \varPsi (x,t+\Delta t)=\int \mathrm{d}y K(x,\Delta t;y,0)\varPsi (y,t) \end{aligned}$$
    and derive the TDSE!

Hint: Only a small interval of y centered around x contributes to this integral. Expansion of the expression above to first order in \(\Delta t\) and up to second order in \(\eta =y-x\) leads to a linear partial differential equation for \(\varPsi (x,t)\).

     

2.2.2 Stationary Phase Approximation

By inspection of (2.75), it is obvious that the calculation of the propagator for an arbitrary potential is highly complicated. In the case of potentials that are at most quadratic, however, all integrals are Gaussian integrals and thus can be done exactly analytically. There are some additional examples for which exact analytic results for the path integral are known. These are collected in the supplement section of the Dover edition of the textbook by Schulman [14].

In general, however, approximate solutions for the path integral are sought. The idea is to approximate the exponent in such a fashion that only quadratic terms survive. The corresponding approximation is the stationary phase approximation (SPA). It shall be introduced by first looking at a simple 1D integral of the form
$$\begin{aligned} \int _{-\infty }^{+\infty }\mathrm{d}x \exp \{\mathrm{i}f(x)/\delta \}g(x). \end{aligned}$$
(2.77)
To proceed, we perform a Taylor expansion of the function in the exponent up to second order according to \(f(x)\approx f(x_0)+f'(x_0)(x-x_0)+\frac{1}{2}f''(x_0)(x-x_0)^2\) under the condition of stationarity of the phase, i.e.,
$$\begin{aligned} f'(x_{0})=0. \end{aligned}$$
(2.78)
Then with the help of the formula7
$$\begin{aligned} \int _{-\infty }^{+\infty } \mathrm{d}x\exp (\mathrm{i}\alpha x^{2})= \sqrt{\frac{\mathrm{i}\pi }{\alpha }}, \end{aligned}$$
(2.79)
we get
$$\begin{aligned} \int _{-\infty }^{+\infty } \mathrm{d}x\exp \{\mathrm{i}f(x)/\delta \}g(x) {\mathop {=}\limits ^{\delta \rightarrow 0}}\sqrt{\frac{2\pi \mathrm{i}\delta }{f''(x_{0})}} \exp \{\mathrm{i}f(x_{0})/\delta \}g(x_{0}) \end{aligned}$$
(2.80)
for the integral above. This approximation becomes better the faster the exponent oscillates, i.e., the smaller the parameter \(\delta \) is. In order to demonstrate this fact, in Fig. 2.4 the function \(\exp \{\mathrm{i}x^2/\delta \}\) is displayed for \(\delta =0.01\). The fast oscillations of the function at x values farther away than \(\sqrt{\delta }\) from the point of stationary phase lead to the mutual cancellation of the positive and negative contributions to the integral. Around the stationary phase point (here \(x=0\)) this argument does not apply, and therefore the major contribution to the integral is determined by the properties of the function f(x) around that point.
Fig. 2.4

The function \(f(x)=x^2\) (solid black line) and the real part of \(\exp \{\mathrm{i}x^2/\delta \}\) with \(\delta =0.01\) (dashed red line)
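The quality of the SPA is easily quantified for the toy integral (2.77) with \(f(x)=x^2\) and, as an illustrative choice, \(g(x)=\mathrm{e}^{-x^2}\), for which the exact Gaussian-integral value is also available:

```python
import numpy as np

# SPA (2.80) for f(x) = x**2, g(x) = exp(-x**2), delta = 0.01 as in Fig. 2.4:
# brute-force quadrature vs. stationary-phase result vs. exact Gaussian integral.
delta = 0.01
x = np.linspace(-10.0, 10.0, 200001)
numeric = np.trapz(np.exp(1j * x ** 2 / delta - x ** 2), x)

spa = np.sqrt(2j * np.pi * delta / 2.0)       # f''(0) = 2, f(0) = 0, g(0) = 1
exact = np.sqrt(np.pi / (1.0 - 1j / delta))   # int exp(-a x^2) dx with a = 1 - i/delta

rel_err_spa = abs(numeric - spa) / abs(numeric)
```

For \(\delta =0.01\) the SPA deviates from the exact value only at the sub-percent level, in line with the cancellation argument above.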

In the next subsection, the notion of stationary phase integration will be extended to the path integral, which is an infinite-dimensional “normal” integral. Before doing so, a remark on the direct numerical approach to the path integral is in order. As can be seen already by looking at the integrand of our 1D toy problem, a direct numerical attempt to calculate the integral of a highly oscillatory function is problematic due to the near cancellation of terms. This is even more true for the full-fledged path integral, and the associated problem, sometimes referred to as the sign problem, is a topic at the forefront of present-day research. Much more well-behaved with respect to numerical treatment are imaginary-time path integrals, which will not be dealt with herein, however.

2.2.3 Semiclassical Approximation

The semiclassical approximation of the propagator goes back to van Vleck [15]. Its direct derivation from the path integral followed many years later, however, and finally led to the generalization of the van Vleck formula by Gutzwiller [16]. We will later on use semiclassical arguments quite frequently, because they allow for a qualitative, and often also a quantitative, explanation of many interesting quantum phenomena. For this reason we will go through the derivation of the so-called van Vleck-Gutzwiller (VVG) propagator in some detail.

In order to derive a time-dependent semiclassical expression, the SPA will be applied to the path integral (2.73). In this case the exponent S[x] is a functional. Therefore, we need the definition of the variation of a functional (see, e.g., [17]). The analog of (2.78) is
$$\begin{aligned} {\delta S}[x_\mathrm{cl}]=0. \end{aligned}$$
(2.81)
This, however, is Hamilton’s principle of classical mechanics. The SPA is thus based on the expansion of the exponent of the path integral around the classical path with boundary conditions \(x(0)=x_i\) and \(x(t)=x_f\). This is also the reason why we have highlighted the classical path in Fig. 2.3. By defining the deviation from the classical path as
$$\begin{aligned} \eta (t')=x(t')-x_\mathrm{cl}(t')\qquad \eta (0)=\eta (t)=0, \end{aligned}$$
(2.82)
the second order expansion needed for the SPA is given by
$$\begin{aligned} S[x]=S[x_\mathrm{cl}]+\frac{1}{2}\int _0^t \mathrm{d}t'\eta (t')\hat{O}\eta (t'), \end{aligned}$$
(2.83)
with the stability operator
$$\begin{aligned} \hat{O}=-m\frac{\mathrm{d}^2}{\mathrm{d}t^2}-\left. V''\right| _{x=x_\mathrm{cl}(t)}. \end{aligned}$$
(2.84)
More details about the underlying variational calculus can be found in Chap. 12 of [14] and in Appendix 2.B.
For the time being, we assume that only one single point of stationary phase exists. In SPA the propagator can then be written as
$$\begin{aligned} K(x_{f},t;x_{i},0)\approx & {} \int _{\eta (0)=0}^{\eta (t)=0}\mathrm{d}[\eta ] \exp \left\{ \frac{\mathrm{i}}{2\hbar } \int _{0}^{t}\mathrm{d}t'\eta (t')\hat{O}\eta (t') \right\} \nonumber \\&\exp \left\{ \frac{\mathrm{i}}{\hbar }S[x_\mathrm{cl}]\right\} . \end{aligned}$$
(2.85)
Due to the additive nature of the action in (2.83), the propagator factorizes into a trivial factor coming from the zeroth order term in the expansion and a so-called prefactor coming from the fluctuations around the classical path. Because of its boundary conditions, this prefactor is also referred to as the 0–0-propagator.
The condition for the applicability of the SPA is that the exponent oscillates rapidly. For the functional integral this means that \(\hbar \) must be small compared to the action of the classical trajectory. The main contribution to the propagator in the SPA thus stems from the classical path that solves the boundary value problem defined by the propagator labels and from a small region surrounding the classical path. In this context the SPA is therefore also called the semiclassical approximation. There are several ways to derive an explicit expression for the prefactor in (2.85), see, e.g., [14]. As shown in detail in this textbook the final expression for the propagator is given by
$$\begin{aligned} K(x_{f},t;x_{i},0)\approx \sqrt{\frac{\mathrm{i}}{2\pi \hbar } \frac{\partial ^2 S[x_\mathrm{cl}]}{\partial x_f\partial x_i}} \exp \left\{ \frac{\mathrm{i}}{\hbar }S[x_\mathrm{cl}]\right\} . \end{aligned}$$
(2.86)
The classical information that enters the expression above can be gained by solving the root search problem defined by the propagator labels and calculating the corresponding action and its second derivative with respect to the initial and final position.
As mentioned at the beginning of this section, van Vleck found the expression above in a more “heuristic” manner as early as 1928 [15]. His derivation starts from the observation that the insertion of the Ansatz8
$$\begin{aligned} K\sim \exp \{\mathrm{i}S(x,\alpha ,t)/\hbar \}, \end{aligned}$$
(2.87)
with an integration constant \(\alpha \), into the time-dependent Schrödinger equation, after cancellation of all \(\hbar \)-dependencies, leads to the classical mechanical Hamilton-Jacobi equation [18]
$$\begin{aligned} H\left( \frac{\partial S}{\partial x},x\right) +\frac{\partial S}{\partial t}=0. \end{aligned}$$
(2.88)
Invoking the correspondence principle, S must thus be a generator of a canonical transformation, the classical action functional, and we have found the exponential part of the propagator.
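This step can be checked symbolically for the free particle, whose principal function is \(S=m(x-x_i)^2/(2t)\); a sketch with \(H=p^2/(2m)\) and \(p=\partial S/\partial x\):

```python
import sympy as sp

# Symbolic check that the free-particle principal function
# S = m*(x - x_i)**2/(2*t) solves the Hamilton-Jacobi equation (2.88).
x, x_i, t, m = sp.symbols('x x_i t m', positive=True)
S = m * (x - x_i) ** 2 / (2 * t)

# H(dS/dx, x) + dS/dt with H = p**2/(2*m)
residual = sp.simplify(sp.diff(S, x) ** 2 / (2 * m) + sp.diff(S, t))
```

The residual vanishes identically, confirming (2.88) for this case.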
The yet undetermined prefactor of the propagator follows from a more involved reasoning: the classical probability density for reaching point \(x_f\) after starting from point \(x_i\) can be determined by integrating over all possible initial momenta (vertical line in the phase space plot in Fig. 2.5) and is given by
$$\begin{aligned} \rho _\mathrm{cl}(x_f,t;x_i,0)=\frac{1}{h}\int \mathrm{d}p_i\delta [x_f-x_t(x_i,p_i)] =\frac{1}{h}\left| \left. \frac{\partial x_f}{\partial p_i}\right| _{x_i}\right| ^{-1}, \end{aligned}$$
(2.89)
where \(|_{x_i}\) denotes that \(x_i\) is kept fixed and for the time being, we have assumed that there is just one single solution of the double-ended boundary value problem. Furthermore, Planck’s constant enters for dimensionality reasons in a similar fashion as in classical statistical mechanics. Converting to quantum mechanical amplitudes, the square root of the classical probability density has to be taken to arrive at the correct semiclassical prefactor. From basic classical mechanics we have the identity
$$\begin{aligned} \frac{\partial ^2 S[x_\mathrm{cl}]}{\partial x_f\partial x_i}= -\frac{\partial p_i}{\partial x_f}, \end{aligned}$$
(2.90)
for the so-called van Vleck determinant,9 however. Thus, up to the absolute value in (2.89), which reflects the fact that the probability density is a positive definite quantity, the semiclassical propagator of van Vleck and the one from the SPA to the path integral are proportional.
Fig. 2.5

The case of two classical solutions of the boundary value problem \(x(0)=x_i;\, x(t)=x_f\). The phase space manifold of initial conditions is indicated by the solid vertical line. This manifold evolves into the bent curve, whereby two of the initial conditions, indicated by the colored dots, fulfill the boundary value problem

Almost 40 years later, Gutzwiller extended the validity of the van Vleck expression to longer times [16]. First of all, for finite times there may be several solutions to the classical root search problem. Such a situation is depicted graphically in Fig. 2.5. In the case of multiple solutions, an additional summation therefore has to be introduced in (2.86). Using the path integral derivation together with the summation over several points of stationary phase, one finally arrives at the van Vleck-Gutzwiller expression
$$\begin{aligned} K^\mathrm{VVG}(x_{f},t;x_{i},0)\equiv \sum _{\mathrm{cl}} \sqrt{\frac{\mathrm{i}}{2\pi \hbar } \left| \frac{\partial ^2 S[x_{\mathrm{cl}}]}{\partial x_f\partial x_i}\right| } \exp \left\{ \frac{\mathrm{i}}{\hbar }S[x_{\mathrm{cl}}] -\mathrm{i}\nu \pi /2\right\} , \end{aligned}$$
(2.91)
with the so-called Maslov (or Morse) index \(\nu \), introduced into the semiclassical propagator by Gutzwiller and counting the caustics a path has gone through. The Maslov phase allows one to use the absolute value inside the square root.

This final expression has interference effects built in because of the summation over classical trajectories, and it is very elegant and intuitive because it relies solely on classical trajectories. However, it also has a major drawback, especially for systems with several degrees of freedom: the underlying root search problem then becomes extremely hard to solve, and a semiclassical propagator based on the solution of classical initial value problems would be much preferable. Such a reformulation of the semiclassical expression is possible and will be discussed in Sect. 2.3.4 on numerical methods.
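For the free particle the root search is trivial (a single straight-line trajectory, Maslov index 0), with \(S[x_\mathrm{cl}]=m(x_f-x_i)^2/(2t)\) and \(\partial ^2S/\partial x_f\partial x_i=-m/t\). The semiclassical propagator, here written in the signed-determinant form (2.86), then reproduces the exact quantum propagator, as expected for a potential that is at most quadratic. A sketch with \(\hbar =m=1\):

```python
import numpy as np

# Semiclassical propagator (2.86) for the free particle vs. the exact
# quantum propagator, hbar = m = 1; single trajectory, no caustics.
t = 0.7
xi, xf = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))

S = (xf - xi) ** 2 / (2 * t)        # classical action of the straight line
det = -1.0 / t                      # d^2 S / (dx_f dx_i)
K_sc = np.sqrt(1j * det / (2 * np.pi)) * np.exp(1j * S)

K_exact = np.sqrt(1.0 / (2j * np.pi * t)) * np.exp(1j * (xf - xi) ** 2 / (2 * t))
err = np.max(np.abs(K_sc - K_exact))
```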

2.2.4 Pictures of Quantum Mechanics and Time-Dependent Perturbation Theory

Another approximate way to solve the time-dependent Schrödinger equation starts directly from the infinite sum of time-ordered operator products in (2.31) for the time-evolution operator. Considering this series as a perturbation expansion and taking only a few terms into account will lead to reasonable results only for short times. In the case of additive Hamiltonians,
$$\begin{aligned} \hat{H}(t)=\hat{H}_0(t)+\hat{W}(t), \end{aligned}$$
(2.92)
we can, however, split the problem into parts and can possibly treat the first one analytically and the rest perturbatively. Please note that in (2.92) both the first and second term may depend on time. This will come in handy when we discuss the different pictures of quantum mechanics in a unified manner.
The time-evolution operator for the unperturbed problem is formally given by
$$\begin{aligned} \hat{U}_0(t,0)=\hat{T}\exp \left\{ -\frac{\mathrm{i}}{\hbar }\int _{0}^{t} \mathrm{d}t'\hat{H}_0(t')\right\} . \end{aligned}$$
(2.93)
With its help we define a wavefunction in the interaction picture, indicated by the index \(\mathrm{I}\),
$$\begin{aligned} |\varPsi _\mathrm{S}(t)\rangle =:\hat{U}_0(t,0)|\varPsi _\mathrm{I}(t)\rangle , \end{aligned}$$
(2.94)
where the wavefunction \(|\varPsi _\mathrm{S}(t)\rangle \) is the one in the Schrödinger picture, which we have considered up to now. Inserting (2.94) into the time-dependent Schrödinger equation and using
$$\begin{aligned} \mathrm{i}\hbar \dot{\hat{U}}_0(t,0)=\hat{H}_0(t)\hat{U}_0(t,0) \end{aligned}$$
(2.95)
yields
$$\begin{aligned} \mathrm{i}\hbar |\dot{\varPsi }_\mathrm{I}(t)\rangle =\hat{W}_\mathrm{I}(t)|\varPsi _\mathrm{I}(t)\rangle , \end{aligned}$$
(2.96)
i.e., the Schrödinger equation in the interaction picture, where the perturbation Hamiltonian in the interaction picture is given by
$$\begin{aligned} \hat{W}_\mathrm{I}(t):=\hat{U}_0^\dagger (t,0)\hat{W}(t)\hat{U}_0(t,0). \end{aligned}$$
(2.97)
The time-evolution operator in the interaction picture is
$$\begin{aligned} \hat{U}_\mathrm{I}(t,0):=\hat{T}\exp \left\{ -\frac{\mathrm{i}}{\hbar }\int _{0}^{t} \mathrm{d}t'\hat{W}_\mathrm{I}(t')\right\} . \end{aligned}$$
(2.98)
Using (2.94), we note that at \(t=0\) the wavefunctions are identical, i.e. \(|\varPsi _\mathrm{S}(0)\rangle =|\varPsi _\mathrm{I}(0)\rangle =|\varPsi (0)\rangle \). Furthermore, calculating an expectation value in the Schrödinger picture and using (2.94), we get
$$\begin{aligned} \langle \hat{A}\rangle (t)= & {} \langle \varPsi _\mathrm{S}(t)|\hat{A}_\mathrm{S}|\varPsi _\mathrm{S}(t)\rangle \nonumber \\= & {} \langle \varPsi _\mathrm{I}(t)|\hat{U}_0^\dagger (t,0)\hat{A}_\mathrm{S}\hat{U}_0(t,0)|\varPsi _\mathrm{I}(t)\rangle . \end{aligned}$$
(2.99)
With the definition of an operator in the interaction picture
$$\begin{aligned} \hat{A}_\mathrm{I}(t):=\hat{U}^\dagger _0(t,0)\hat{A}_\mathrm{S}\hat{U}_0(t,0), \end{aligned}$$
(2.100)
the expectation value becomes
$$\begin{aligned} \langle \hat{A}\rangle (t)=\langle \varPsi _\mathrm{I}(t)|\hat{A}_\mathrm{I}(t)|\varPsi _\mathrm{I}(t)\rangle , \end{aligned}$$
(2.101)
which is identical to the Schrödinger picture expectation value.
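The equality of (2.99) and (2.101) can be illustrated numerically for a two-level system; \(H_0\), \(\hat{W}\), \(\hat{A}\), the initial state, and t below are arbitrary illustrative choices (\(\hbar =1\)):

```python
import numpy as np

# Schroedinger- vs. interaction-picture expectation values, (2.99)-(2.101),
# for a two-level system with constant H0 and W (hbar = 1).
def expm(M):                        # matrix exponential via eigendecomposition
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H0, W, A = 0.5 * sz, 0.3 * sx, sx
t = 1.7

U = expm(-1j * (H0 + W) * t)        # full time evolution
U0 = expm(-1j * H0 * t)             # unperturbed evolution (2.93)

psi_S = U @ np.array([0.6, 0.8], dtype=complex)   # Schroedinger picture
psi_I = U0.conj().T @ psi_S                       # (2.94)
A_I = U0.conj().T @ A @ U0                        # (2.100)

ev_S = (psi_S.conj() @ A @ psi_S).real
ev_I = (psi_I.conj() @ A_I @ psi_I).real
```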

2.8.

Verify that the time-evolution operator in the interaction picture, \(\hat{U}_\mathrm{I}(t,0)=\hat{U}_0^\dagger (t,0)\hat{U}(t,0)\), fulfills the appropriate differential equation.

The third picture that is frequently applied is the one named after Heisenberg. By an appropriate choice of the unperturbed Hamiltonian in (2.93) and of the perturbation, all three pictures can be dealt with in the same framework:
  • \(\hat{H}=\hat{H}_0+\hat{W}\) leads to the interaction picture

  • \(\hat{H}_0=0\) and \(\hat{W}=\hat{H}\) leads to the Schrödinger picture

  • \(\hat{H}_0=\hat{H}\) and \(\hat{W}=0\) leads to the Heisenberg picture

The relations between the different cases are given in Table 2.1 for the wavefunctions and in Table 2.2 for the operators. In Schrödinger’s original representation, the wavefunction is time-dependent, whereas the operators are time-independent. In the Heisenberg picture it is the other way around. In the interaction picture, both the wavefunctions and the operators are time-dependent.
Table 2.1

Relations between the wavefunctions in the different pictures of quantum mechanics

  • \(|\varPsi _\mathrm{S}(t)\rangle =\hat{U}(t,0)|\varPsi _\mathrm{H}\rangle =\hat{U}_0(t,0)|\varPsi _\mathrm{I}(t)\rangle \)

  • \(|\varPsi _\mathrm{H}\rangle =\hat{U}^\dagger (t,0)|\varPsi _\mathrm{S}(t)\rangle =\hat{U}_\mathrm{I}^\dagger (t,0)|\varPsi _\mathrm{I}(t)\rangle \)

  • \(|\varPsi _\mathrm{I}(t)\rangle =\hat{U}_0^\dagger (t,0)|\varPsi _\mathrm{S}(t)\rangle =\hat{U}_\mathrm{I}(t,0)|\varPsi _\mathrm{H}\rangle \)

Table 2.2

Relations between the operators in the different pictures of quantum mechanics

  • \(\hat{A}_\mathrm{S}=\hat{U}(t,0)\hat{A}_\mathrm{H}(t)\hat{U}^\dagger (t,0)=\hat{U}_0(t,0)\hat{A}_\mathrm{I}(t)\hat{U}_0^\dagger (t,0)\)

  • \(\hat{A}_\mathrm{H}(t)=\hat{U}^\dagger (t,0)\hat{A}_\mathrm{S}\hat{U}(t,0)=\hat{U}_\mathrm{I}^\dagger (t,0)\hat{A}_\mathrm{I}(t)\hat{U}_\mathrm{I}(t,0)\)

  • \(\hat{A}_\mathrm{I}(t)=\hat{U}_0^\dagger (t,0)\hat{A}_\mathrm{S}\hat{U}_0(t,0)=\hat{U}_\mathrm{I}(t,0)\hat{A}_\mathrm{H}(t)\hat{U}_\mathrm{I}^\dagger (t,0)\)

With the interaction picture defined, we can now derive time-dependent perturbation theory for small perturbations \(\hat{W}(t)\). Iterative solution of the corresponding time-dependent Schrödinger equation leads to an expression for the propagator in the interaction picture
$$\begin{aligned} \hat{U}_\mathrm{I}(t,0)= \hat{1}+ & {} \sum _{n=1}^{\infty }\left( \frac{-\mathrm{i}}{\hbar }\right) ^n \int _{0}^{t}\mathrm{d}t_n\int _{0}^{t_n}\mathrm{d}t_{n-1}\cdots \int _{0}^{t_{2}}\mathrm{d}t_{1} \nonumber \\&\hat{W}_\mathrm{I}(t_n)\hat{W}_\mathrm{I}(t_{n-1})\cdots \hat{W}_\mathrm{I}(t_{1}), \end{aligned}$$
(2.102)
analogous to (2.31). In perturbation theory, the series is truncated at finite n; in the case of \(n=1\), we get
$$\begin{aligned} |\varPsi ^1_\mathrm{I}(t)\rangle =|\varPsi (0)\rangle -\frac{\mathrm{i}}{\hbar }\int _{0}^t\mathrm{d}t' \hat{U}_0^\dagger (t',0)\hat{W}(t')\hat{U}_0(t',0)|\varPsi (0)\rangle . \end{aligned}$$
(2.103)
Going back to the Schrödinger picture, by using (2.94), we get
$$\begin{aligned} |\varPsi ^1_\mathrm{S}(t)\rangle =\hat{U}_0(t,0)|\varPsi (0)\rangle -\frac{\mathrm{i}}{\hbar }\int _{0}^t\mathrm{d}t'\hat{U}_0(t,t')\hat{W}(t')\hat{U}_0(t',0) |\varPsi (0)\rangle \end{aligned}$$
(2.104)
in first order. We will interpret and use expressions of this kind in the discussion of pump-probe spectroscopy in Chap.  5. Terms of higher order will contain multiple, nested integrals but will not be needed there.
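As a numerical illustration of the first-order result, consider a two-level system with \(\hat{H}_0=(\omega _0/2)\sigma _z\) and \(\hat{W}(t)=\varepsilon \cos (\omega t)\sigma _x\) (\(\hbar =1\)); all parameter values below are illustrative choices. For a weak, off-resonant drive, the first-order transition probability agrees closely with a direct numerical integration of the TDSE:

```python
import numpy as np

# First-order perturbation theory vs. numerically exact TDSE for a driven
# two-level system; start in the lower level (hbar = 1).
w0, w, eps, T = 1.0, 0.8, 0.01, 5.0

# first-order amplitude of the upper level: c1 = -i int dt' W_12(t') e^{i w0 t'}
tp = np.linspace(0.0, T, 20001)
c1 = -1j * eps * np.trapz(np.cos(w * tp) * np.exp(1j * w0 * tp), tp)
p_pert = abs(c1) ** 2

# reference: RK4 integration of the TDSE in the Schroedinger picture
def H(t):
    off = eps * np.cos(w * t)
    return np.array([[w0 / 2, off], [off, -w0 / 2]], dtype=complex)

rhs = lambda t, y: -1j * (H(t) @ y)
psi = np.array([0.0, 1.0], dtype=complex)    # lower level
dt = 1e-3
for k in range(int(round(T / dt))):
    t = k * dt
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p_exact = abs(psi[0]) ** 2
```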

2.2.5 Magnus Expansion

Another approach to solving the time-dependent Schrödinger equation starting from the time-evolution operator is the so-called Magnus expansion. The basic idea of this method is resummation; it can be understood by considering a function A depending on a parameter \(\lambda \). Expanding this function in powers of the parameter leads to
$$\begin{aligned} A=A_0(1+\lambda A_1+\lambda ^2 A_2+\cdots )\;. \end{aligned}$$
(2.105)
Alternatively, the function can be written as a prefactor times an exponential
$$\begin{aligned} A=A_0\exp (F)\;. \end{aligned}$$
(2.106)
Also the exponent F can be expanded in powers of the parameter
$$\begin{aligned} F=\lambda F_1+\lambda ^2 F_2+\cdots \;. \end{aligned}$$
(2.107)
The exponential function can now be Taylor expanded and, after comparing the coefficients of equal powers of \(\lambda \), the \(F_n\) can be expressed in terms of the \(A_n\). If we now truncate the series in (2.105) after \(n=2\) and use the corresponding coefficients in the exponent via
$$\begin{aligned} A\approx A_0\exp (\lambda A_1+\lambda ^2(A_2-A_1^2/2))\;, \end{aligned}$$
(2.108)
we have gained an expression of infinite order in \(\lambda \).
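For commuting (scalar) quantities, the bookkeeping \(F_1=A_1\), \(F_2=A_2-A_1^2/2\) is easily verified symbolically; in the operator case, the ordering additionally produces the commutators appearing below in (2.111) and (2.112):

```python
import sympy as sp

# Resummation check: exponentiating lam*A1 + lam**2*(A2 - A1**2/2)
# reproduces the power series 1 + lam*A1 + lam**2*A2 through second order.
lam, A1, A2 = sp.symbols('lam A1 A2')
resummed = sp.exp(lam * A1 + lam ** 2 * (A2 - A1 ** 2 / 2))
series = sp.series(resummed, lam, 0, 3).removeO()
check = sp.expand(series - (1 + lam * A1 + lam ** 2 * A2))
```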
In quantum dynamics, the technique presented above is used for the infinite sum in (2.102), representing the time-evolution operator. The parameter \(\lambda \) is equal to \(-\mathrm{i}/\hbar \) in this case. The final result for the time-evolution operator is
$$\begin{aligned} \hat{U}_\mathrm{I}(t,0)= & {} \hat{T}\exp \left\{ -\frac{\mathrm{i}}{\hbar }\int _{0}^{t} \mathrm{d}t'\,\hat{W}_\mathrm{I}(t')\right\} \nonumber \\= & {} \exp \left\{ \sum _{n=1}^\infty \frac{1}{n!} \left( -\frac{\mathrm{i}}{\hbar }\right) ^n\hat{\mathrm{H}}_n(t,0)\right\} \;, \end{aligned}$$
(2.109)
where
$$\begin{aligned} \hat{\mathrm{H}}_1= & {} \int _{0}^{t}\mathrm{d}t'\,\hat{W}_\mathrm{I}(t'), \end{aligned}$$
(2.110)
$$\begin{aligned} \hat{\mathrm{H}}_2= & {} \int _{0}^{t}\mathrm{d}t_2\int _{0}^{t_2}\mathrm{d}t_1 [\hat{W}_\mathrm{I}(t_2),\hat{W}_\mathrm{I}(t_1)], \end{aligned}$$
(2.111)
$$\begin{aligned} \hat{\mathrm{H}}_3= & {} \int _{0}^{t}\mathrm{d}t_3\int _{0}^{t_3}\mathrm{d}t_2 \int _{0}^{t_2}\mathrm{d}t_1 \bigl ([\hat{W}_\mathrm{I}(t_3),[\hat{W}_\mathrm{I}(t_2),\hat{W}_\mathrm{I}(t_1)]] \nonumber \\&+\, [[\hat{W}_\mathrm{I}(t_3),\hat{W}_\mathrm{I}(t_2)],\hat{W}_\mathrm{I}(t_1)]\bigr ) \end{aligned}$$
(2.112)
are the first three terms in the expansion of the exponent.

2.9.

Verify the second order expression \(\hat{\mathrm{H}}_2\) of the Magnus expansion in the exponent of the time-evolution operator in the interaction picture.

The main advantage of the expression in (2.109) is that it is, in principle, an exact result that no longer contains the time-ordering operator. In numerical applications the summation in the exponent has to be truncated at finite n; this nevertheless leads to a unitary propagation scheme at any order. Truncating the expansion after \(n=1\) amounts to ignoring the time-ordering operator in (2.109) altogether. Although this seems to be a rather crude approximation, in Chap.  4 we will see that it leads to reasonable results in the case of atoms subject to extremely short pulses. Furthermore, it has turned out that truncating the Magnus expansion in the interaction picture, with a suitable choice of \(\hat{H}_0\), is a successful numerical approach [19].
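That a truncated Magnus propagator stays unitary can be checked directly. The following sketch approximates the first-order Magnus exponent for an assumed driven two-level Hamiltonian by a simple Riemann sum:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def H(t):
    # illustrative driven two-level Hamiltonian (all parameters are assumptions)
    return 0.5 * sz + 0.3 * np.cos(2.0 * t) * sx

# first-order Magnus propagator over [0, T]: U = exp(-(i/hbar) * int_0^T dt H(t))
T, N = 1.0, 1000
ts = np.linspace(0.0, T, N + 1)
dt = T / N
H1 = sum(H(t) for t in ts[:-1]) * dt  # Riemann approximation of the time integral

# H1 is Hermitian, so the exponential can be done spectrally
evals, evecs = np.linalg.eigh(H1)
U = evecs @ np.diag(np.exp(-1j * evals / hbar)) @ evecs.conj().T

print(np.max(np.abs(U.conj().T @ U - np.eye(2))))  # unitary to machine precision
```

Because the truncated exponent is Hermitian (times \(-\mathrm{i}/\hbar \)), the resulting propagator is exactly unitary, in contrast to a truncated Dyson series.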

2.2.6 Time-Dependent Hartree Method

Especially in Chap.  5, we will investigate systems with several coupled degrees of freedom. The factorization of the total wavefunction is a first, rather crude, approximate way to solve the time-dependent Schrödinger equation for such systems. It shall therefore be discussed here for the simplest case of two degrees of freedom corresponding to distinguishable particles.

The total Hamiltonian shall be of the form
$$\begin{aligned} \hat{H}=\sum _{n=1}^2\hat{H}_n(x_n)+V_{12}(x_1,x_2), \end{aligned}$$
(2.113)
with single particle operators
$$\begin{aligned} \hat{H}_n(x_n)=-\frac{\hbar ^2}{2m}\frac{\partial ^2}{\partial x_n^2}+V_n(x_n) \end{aligned}$$
(2.114)
and the coupling potential \(V_{12}\) depending on the two coordinates in a non-additive manner. The so-called Hartree Ansatz for the wavefunction is of the form
$$\begin{aligned} \varPsi (x_1,x_2,t)=\varPsi _1(x_1,t)\varPsi _2(x_2,t) \end{aligned}$$
(2.115)
i.e., it is a product of single-particle wavefunctions.
This Ansatz is exact in the case that the coupling \(V_{12}\) vanishes. The single particle functions then fulfill
$$\begin{aligned} \mathrm{i}\hbar \dot{\varPsi }_n(x_n,t)=\hat{H}_n\varPsi _n(x_n,t). \end{aligned}$$
(2.116)
We now plug the Hartree Ansatz into the full time-dependent Schrödinger equation and find
$$\begin{aligned} \mathrm{i}\hbar \left( \varPsi _2\dot{\varPsi }_1+\varPsi _1\dot{\varPsi }_2\right) =\varPsi _2\hat{H}_1\varPsi _1+\varPsi _1\hat{H}_2\varPsi _2+V_{12}\varPsi _1\varPsi _2\;. \end{aligned}$$
(2.117)
Multiplying this equation with \(\varPsi _2^*\) and integrating over the coordinate of particle 2 yields
$$\begin{aligned} \mathrm{i}\hbar \left( \dot{\varPsi }_1+\varPsi _1\langle \varPsi _2|\dot{\varPsi }_2\rangle _2\right) =\hat{H}_1\varPsi _1+\varPsi _1\langle \varPsi _2|\hat{H}_2|\varPsi _2\rangle _2 +\langle \varPsi _2|V_{12}|\varPsi _2\rangle _2\varPsi _1. \end{aligned}$$
(2.118)
By using the single particle equations of zeroth order with the index 2, the second terms on the LHS and the RHS cancel each other and one finds
$$\begin{aligned} \mathrm{i}\hbar \dot{\varPsi }_1(x_1,t)= \left( -\frac{\hbar ^2}{2m}\Delta _1+V_{1,\mathrm eff}(x_1,t)\right) \varPsi _1(x_1,t), \end{aligned}$$
(2.119)
with an effective, time-dependent potential
$$\begin{aligned} V_{1,\mathrm eff}(x_1,t)=V_1(x_1)+\langle \varPsi _2(t)|V_{12}|\varPsi _2(t)\rangle _2. \end{aligned}$$
(2.120)
An analogous equation can be derived for particle 2
$$\begin{aligned} \mathrm{i}\hbar \dot{\varPsi }_2(x_2,t)=\left( -\frac{\hbar ^2}{2m}\Delta _2+ V_{2,\mathrm eff}(x_2,t)\right) \varPsi _2(x_2,t), \end{aligned}$$
(2.121)
by multiplying the time-dependent Schrödinger equation with \(\varPsi _1^*\) and integrating over \(x_1\).

The particles move in effective “mean” fields that are determined by the dynamics of the respective other particle. The coupled equations have to be solved self-consistently, which is the reason that the Hartree method is sometimes called a TDSCF (time-dependent self-consistent field) method. The multi-configuration time-dependent Hartree (MCTDH) method [20] goes far beyond what has been presented here and in principle allows for an exact numerical solution of the time-dependent Schrödinger equation.
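A minimal numerical sketch of the TDSCF idea for two bilinearly coupled harmonic degrees of freedom may look as follows; the masses, frequencies, coupling strength, and grid parameters are all illustrative assumptions, and a simple split-step scheme (cf. Sect. 2.3.2) is used for the single-particle propagation:

```python
import numpy as np

# TDSCF sketch: two 1D oscillators coupled by V12 = lam*x1*x2 (parameters assumed)
hbar = m = 1.0
N = 128
x = np.linspace(-10.0, 10.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V1 = 0.5 * x**2             # V_1(x_1)
V2 = 0.5 * (1.3 * x)**2     # V_2(x_2), slightly different frequency
lam = 0.1                   # coupling strength (assumption)

def gauss(x0):
    g = np.exp(-(x - x0)**2 / 2).astype(complex)
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)

psi1, psi2 = gauss(1.0), gauss(-0.5)
dt, steps = 0.01, 200
ekin = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # kinetic factor in momentum space

def split_step(psi, Veff):
    psi = np.exp(-1j * Veff * dt / (2 * hbar)) * psi
    psi = np.fft.ifft(ekin * np.fft.fft(psi))
    return np.exp(-1j * Veff * dt / (2 * hbar)) * psi

for _ in range(steps):
    # effective potentials (2.120): for a bilinear coupling, the partner
    # wavefunction enters only via its position expectation value
    m1 = np.sum(x * np.abs(psi1)**2) * dx
    m2 = np.sum(x * np.abs(psi2)**2) * dx
    psi1 = split_step(psi1, V1 + lam * x * m2)
    psi2 = split_step(psi2, V2 + lam * x * m1)
```

At each step, particle 1 feels the effective potential \(V_1+\lambda x_1\langle x_2\rangle \) and vice versa, so the two equations are coupled only through expectation values, as in (2.119)-(2.121).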

2.2.7 Quantum-Classical Methods

In quantum-classical methods, the degrees of freedom are separated into a subset that shall be dealt with on the basis of classical mechanics and a subset to be described fully quantum mechanically. Analogously to the Hartree method, the classical degrees of freedom will move in an effective potential that is determined by the solution of the quantum problem.

For reasons of convenience we start the discussion with the time-independent Schrödinger equation  and restrict it to the case of two degrees of freedom: a light particle with coordinate x and mass m and a heavy one with X and M, respectively. Having the full solution of the time-independent Schrödinger equation
$$\begin{aligned} \hat{H}\psi _n(x,X)=E_n\psi _n(x,X), \end{aligned}$$
(2.122)
with the Hamiltonian
$$\begin{aligned} \hat{H}=\frac{\hat{p}^2}{2m}+\frac{\hat{P}^2}{2M}+v(x,X)+V(X) \end{aligned}$$
(2.123)
at hand would allow us to construct a time-dependent solution according to
$$\begin{aligned} \varPsi (x,X,t)=\sum _nc_n\exp \left[ -\frac{\mathrm{i}}{\hbar }E_nt\right] \psi _n(x,X). \end{aligned}$$
(2.124)
A way to treat the coupled system approximately is intimately related to the separation Ansatz of the previous section and will be discussed in much more detail later on in Sect.  5.3.1 on the Born-Oppenheimer approximation. The idea is simple: one replaces the coupled problem by a pair of uncoupled single-particle problems. In order to do so, first the light particle is dealt with under certain (fixed) conditions for the heavy particle (denoted by |X)
$$\begin{aligned} \hat{H}^{0}(x|X)\phi _j(x|X)=\epsilon _j^0(X)\phi _j(x|X), \end{aligned}$$
(2.125)
where
$$\begin{aligned} \hat{H}^{0}(x|X)=\frac{\hat{p}^2}{2m}+v(x,X) \end{aligned}$$
(2.126)
depends parametrically on X and j is the quantum number of the light particle.
Using the product Ansatz
$$\begin{aligned} \psi _n(x,X)\approx \phi _j(x|X)\chi _{l,j}(X) \end{aligned}$$
(2.127)
in the full time-independent Schrödinger equation and integrating out the coordinate of the light particle in the same way as in Sect. 2.2.6, one arrives at equations of the form
$$\begin{aligned} \hat{H}^{1}_j(X)\chi _{l,j}(X)=\epsilon _{l,j}^1\chi _{l,j}(X), \end{aligned}$$
(2.128)
with the Hamiltonian
$$\begin{aligned} \hat{H}^{1}_j(X)=\frac{\hat{P}^2}{2M}+V(X)+\epsilon _j^0(X) \end{aligned}$$
(2.129)
and eigenvalues \(\epsilon _{l,j}^1\), in terms of which the total energies are approximately given by
$$\begin{aligned} E_n\approx \epsilon _{l,j}^1, \end{aligned}$$
(2.130)
depending on j as well as on l, which is the quantum number of the heavy particle. The heavy particle is thus governed by an effective potential, \(V(X)+\epsilon _j^0(X)\), depending on the quantum state of the light particle.
Let us now turn to dynamics. In the Ehrenfest method, one postulates a classical treatment of the heavy particle. Analogous to the effective potential, an effective force can be derived, which reads
$$\begin{aligned} F_{\mathrm{eff}}=-\frac{\partial }{\partial X}\left\{ V +\int \mathrm{d}x\, \varPhi ^*\hat{H}^0\varPhi \right\} , \end{aligned}$$
(2.131)
and where the wavefunction of the light particle fulfills the time-dependent Schrödinger equation
$$\begin{aligned} \mathrm{i}\hbar \dot{\varPhi }(x|X(t),t)=\hat{H}^0(x|X(t))\varPhi (x|X(t),t). \end{aligned}$$
(2.132)
Expanding this wavefunction in eigenfunctions of the light particle
$$\begin{aligned} \varPhi (x|X(t),t)=\sum _jc_j(t)\phi _j(x|X(t)) \end{aligned}$$
(2.133)
yields coupled differential equations for the coefficients
$$\begin{aligned} \mathrm{i}\hbar \dot{c}_j(t)=\epsilon _j^0c_j-\mathrm{i}\hbar \dot{X}\sum _kd_{jk}c_k, \end{aligned}$$
(2.134)
where \(d_{jk}=\int \mathrm{d}x\phi _j\frac{\partial \phi _k}{\partial X}\). Together with the explicit expression
$$\begin{aligned} F_{\mathrm{eff}}=-\frac{\partial V}{\partial X}-\sum _j|c_j|^2\frac{\partial \epsilon _j^0}{\partial X} +\sum _{j,k<j}[c_j^*c_k+c_k^*c_j][\epsilon _j^0-\epsilon _k^0]d_{jk} \end{aligned}$$
(2.135)
for the effective force, this can be proven by solving Exercise 2.10.

2.10.

Verify the fundamental equations of the Ehrenfest method.

(a) First prove the validity of the coupled differential equations for the coefficients \(c_j\) (use the product and the chain rule of differentiation).

(b) Calculate the effective force by using \(d_{kj}=-d_{jk}\) (proof?).

The first term in the expression of the force is the so-called external force, whereas the second one describes adiabatic and the third one nonadiabatic dynamics.10 The last two terms have to be determined by solving the quantum problem of the light particle. An alternative quantum-classical approach is the so-called surface hopping technique. Its relation to the Ehrenfest approach, as well as the question which method is suited under which circumstances, is discussed in [21].
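The working of the Ehrenfest method can be sketched for a two-level quantum system coupled to a single classical coordinate. For simplicity, the sketch below works in a diabatic basis, where the effective force is simply \(-\partial V/\partial X-\langle \varPhi |\partial \hat{H}^0/\partial X|\varPhi \rangle \) and the coupling elements \(d_{jk}\) need not be computed explicitly; the model Hamiltonian and all parameters are assumptions:

```python
import numpy as np

# Ehrenfest sketch: classical coordinate X coupled to a two-level system,
# written in a diabatic basis (all parameters are illustrative assumptions)
hbar, M = 1.0, 100.0
c = 0.1                                    # diabatic coupling strength

def h(X):
    return np.array([[0.5 * X, c], [c, -0.5 * X]])

def dhdX(X):
    return np.array([[0.5, 0.0], [0.0, -0.5]])

def dVdX(X):
    return X                               # external potential V(X) = X**2/2

X, P = -2.0, 5.0
phi = np.array([1.0 + 0j, 0.0])            # start in diabatic state 1
dt, steps = 0.01, 500

for _ in range(steps):
    # quantum step, cf. (2.132): spectral exponentiation of h at the current X
    evals, evecs = np.linalg.eigh(h(X))
    phi = evecs @ (np.exp(-1j * evals * dt / hbar) * (evecs.conj().T @ phi))
    # classical step with the effective force, cf. (2.131)
    F = -dVdX(X) - np.real(phi.conj() @ dhdX(X) @ phi)
    P += F * dt
    X += P / M * dt
```

Since each quantum sub-step is a spectral exponentiation, the norm of \(\varPhi \) is conserved exactly; the quantum populations feed back into the classical force at every step.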

2.2.8 Floquet Theory

For the description of cw-laser driven systems, the problem of time-periodic Hamiltonians is of central importance. In this case we have
$$\begin{aligned} \hat{H}(t+T)=\hat{H}(t), \end{aligned}$$
(2.136)
with the period \(T=2\pi /\omega \) of the external perturbation.
As in the general time-dependent case, the time evolution operator can be used to propagate a wavefunction. In addition to the properties discussed in Sect. 2.1.2, we can now make use of the property
$$\begin{aligned} \hat{U}(t+T,s+T)=\hat{U}(t,s). \end{aligned}$$
(2.137)
In order to solve the time-dependent Schrödinger equation, we prove that the Hamiltonian extended by the time derivative
$$\begin{aligned} \hat{\mathcal{H}}(t)\equiv \hat{H}(t)-\mathrm{i}\hbar \partial _{t} \end{aligned}$$
(2.138)
commutes with the time-evolution operator over one period. The time-dependent Schrödinger equation can be rewritten by using the above definition and we apply the time-evolution operator from the left
$$\begin{aligned} \hat{U}(t+T,t)\hat{\mathcal{H}}(t)|\varPsi (t)\rangle= & {} 0, \end{aligned}$$
(2.139)
$$\begin{aligned} \hat{U}(t+T,t)\hat{\mathcal{H}}(t)\hat{U}^{-1}(t+T,t)\hat{U}(t+T,t) |\varPsi (t)\rangle= & {} 0, \end{aligned}$$
(2.140)
$$\begin{aligned} \hat{\mathcal{H}}(t)\hat{U}(t+T,t)|\varPsi (t)\rangle= & {} 0. \end{aligned}$$
(2.141)
The last, decisive step follows from the periodicity of the Hamiltonian and with the help of the chain rule.11 We can thus conclude that
$$\begin{aligned}{}[\hat{U}(t+T,t),\hat{\mathcal{H}}(t)]=0 \end{aligned}$$
(2.142)
holds. The two operators thus have a common system of eigenfunctions, which shall be denoted by \(|\varPsi _{\epsilon }(t)\rangle \).
From the composition property (2.27) and with (2.137) it follows that
$$\begin{aligned} \hat{U}(2T,0)=\hat{U}(2T,T)\hat{U}(T,0)=\hat{U}^{2}(T,0). \end{aligned}$$
(2.143)
The group of time-evolution operators over one period therefore is an Abelian group. Its eigenfunctions have to transform according to a one-dimensional irreducible representation [22]
$$\begin{aligned} \hat{U}(T,0)|\varPsi _{\epsilon }(0)\rangle = \exp \left\{ -\frac{\mathrm{i}}{\hbar }\epsilon T\right\} |\varPsi _{\epsilon }(0)\rangle . \end{aligned}$$
(2.144)
Comparing this equation with the time-evolution over one period
$$\begin{aligned} \hat{U}(T,0)|\varPsi _{\epsilon }(0)\rangle =|\varPsi _{\epsilon }(T)\rangle \end{aligned}$$
(2.145)
leads to the Floquet theorem for the solution of the time-dependent Schrödinger equation
$$\begin{aligned} \varPsi _{\epsilon }(x,t)= & {} \exp \left\{ -\frac{\mathrm{i}}{\hbar }\epsilon t\right\} \psi _{\epsilon }(x,t), \end{aligned}$$
(2.146)
$$\begin{aligned} \psi _{\epsilon }(x,t)= & {} \psi _{\epsilon }(x,t+T). \end{aligned}$$
(2.147)
The wavefunction is a product of an exponential factor and a periodic function.12 The factor \(\epsilon \) in the exponent of (2.146) is sometimes referred to as the Floquet exponent, and the corresponding periodic function \(\psi _{\epsilon }\) is called the Floquet function, in honor of the French mathematician Gaston Floquet, who worked on differential equations with periodic coefficients in the 19th century.
The product in (2.146) is formally analogous to the separation (2.9) in the stationary case. In order to use this analogy, we rewrite the time-dependent Schrödinger equation as in (2.139), to read
$$\begin{aligned} \hat{\mathcal{H}}(x,t)\varPsi (x,t)=0. \end{aligned}$$
(2.148)
Inserting the Floquet solution (2.146) and performing the time-derivative of the exponential part yields
$$\begin{aligned} \hat{\mathcal{H}}(x,t)\psi _{\alpha }(x,t)=\epsilon _{\alpha }\psi _{\alpha }(x,t), \end{aligned}$$
(2.149)
where the quantum number index \(\alpha \) has been introduced. This “Floquet type Schrödinger equation” has the same formal structure as the time-independent Schrödinger equation. Therefore the Floquet exponents are also called quasi-energies and the Floquet functions are referred to as quasi-eigenfunctions. In the case of a Hermitian Hamiltonian the quasi-energies are real.
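For a driven two-level model (with assumed parameters), the quasi-energies can be obtained directly from the eigenvalues \(\exp \{-\mathrm{i}\epsilon T/\hbar \}\) of the propagator over one period, according to (2.144):

```python
import numpy as np

# quasi-energies from the one-period propagator U(T,0); model parameters assumed
hbar = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
Delta, eps0, w = 1.0, 0.5, 2.0
T = 2 * np.pi / w

def H(t):
    return 0.5 * Delta * sz + eps0 * np.sin(w * t) * sx

# build U(T,0) from many short, piecewise-constant slices (midpoint rule)
Nslice = 2000
dt = T / Nslice
U = np.eye(2, dtype=complex)
for n in range(Nslice):
    evals, evecs = np.linalg.eigh(H((n + 0.5) * dt))
    U = evecs @ np.diag(np.exp(-1j * evals * dt / hbar)) @ evecs.conj().T @ U

ev = np.linalg.eigvals(U)
quasi = -hbar * np.angle(ev) / T   # quasi-energies, folded into one Brillouin zone
print(np.sort(quasi))
```

Since the assumed Hamiltonian is traceless, the two quasi-energies in the first Brillouin zone come out with equal magnitude and opposite sign.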

2.11.

Use the extended scalar product
$$\begin{aligned} \langle \langle u_{\alpha }|v_{\beta }\rangle \rangle := \frac{1}{T}\int _{0}^{T}\mathrm{d}t \int _{-\infty }^{\infty }\mathrm{d}x u_{\alpha }^{*}(x,t)v_{\beta }(x,t) \end{aligned}$$
and the extended Hamiltonian \( \hat{\mathcal{H}}= \hat{H}(t)-\mathrm{i}\hbar \partial _{t} \), with \(\hat{H}(t+T)=\hat{H}(t)\), in order to show that the Floquet energies \(\epsilon _\alpha \) are real in the case of Hermitian Hamiltonians \(\hat{H}(t)\).
In the case of a vanishing external field, the Hamiltonian becomes time-independent. This implies that \(\psi _{\alpha }\) is also time-independent. The index \(\alpha \) therefore is related to the quantum numbers of the unperturbed problem. It is a special feature of the Floquet solution (2.146) and (2.147) that the modified quasi-eigenfunctions and corresponding energies
$$\begin{aligned} \psi _{\alpha ^{\prime }}(x,t):= & {} \psi _{\alpha }(x,t) \exp (\mathrm{i}k\omega t), \end{aligned}$$
(2.150)
$$\begin{aligned} \epsilon _{\alpha ^{\prime }}:= & {} \epsilon _{\alpha }+k\hbar \omega , \end{aligned}$$
(2.151)
with \(k=0,\pm 1,\pm 2,\dots \) 13 are equivalent to \(\psi _{\alpha }(x,t),\epsilon _\alpha \), due to the fact that they correspond to the same total solution \(\varPsi _{\alpha }(x,t)\). The index
$$\begin{aligned} \alpha ^{\prime }:= (\alpha ,k)\quad k=0,\pm 1,\pm 2,\ldots \end{aligned}$$
(2.152)
denotes a class of infinitely many solutions. Out of each class, however, only one lies in a so-called Brillouin zone of width \(\hbar \omega \). The discussion above and more details on the underlying Hilbert space theory can be found in [23]. Without proving the completeness, we will use
$$\begin{aligned} \hat{1}=\sum _{\alpha '}|\psi _{\alpha '}(0)\rangle \langle \psi _{\alpha '}(0)| \end{aligned}$$
(2.153)
as the representation of unity in terms of (discrete) Floquet states. In order for this representation to be true also in the nondriven case, it is clear that only one member of the class of equivalent eigensolutions may contribute to the sum above. A solution of the time-dependent Schrödinger equation can therefore be written as a superposition of quasi-eigenfunctions with appropriate coefficients
$$\begin{aligned} |\varPsi (t)\rangle= & {} \sum _{\alpha '}c_{\alpha '}|\psi _{\alpha '}(t)\rangle \exp \left\{ -\frac{\mathrm{i}}{\hbar }\epsilon _{\alpha '}t\right\} , \end{aligned}$$
(2.154)
$$\begin{aligned} c_{\alpha '}= & {} \langle \psi _{\alpha '}(0)|\varPsi (0)\rangle . \end{aligned}$$
(2.155)
This equation clearly exhibits that the quasi-energies determine the long-term time-evolution of a periodically driven quantum system. The behavior of the quasi-energies as a function of an external parameter (e.g., the field strength or the frequency) will therefore be very important. In order to study this behavior, the symmetry of the Hamiltonian will be decisive. We will come back to this point in Appendix  3.A of Chap.  3.

2.3 Numerical Methods

Apart from special two-level problems that will be dealt with in the next chapter and systems with maximally quadratic potentials (and problems that can be mapped onto such cases) there are only a few exactly analytically solvable problems in quantum dynamics, as can be seen by studying the review by Kleber [11].

Almost all interesting problems of atomic and molecular physics with and without the presence of laser fields classically display nonlinear dynamics, however, and the Gaussian wavepacket dynamics of Sect. 2.1.4 will be valid only for short times. Exact numerical solutions of the quantum dynamics are therefore sought. Apart from time-dependent information that is, e.g., needed for the description of pump-probe experiments, to be discussed in Chap.  5, also spectral information for systems with autonomous Hamiltonians can be gained from time series, as was shown in Sect. 2.1.2.

In the following, different ways to solve the time-dependent Schrödinger equation numerically will be described. First, we will review some numerically exact methods, and in the end the implementation of the semiclassical theory outlined in Sect. 2.2.3 by so-called initial value methods will be discussed, thereby also touching on the numerical solution of the underlying classical equations of motion.

Most methods to solve the TDSE that we discuss can be characterized according to two formal criteria, which will be called problem (a) and problem (b) in the following:
(a) Which (finite) basis is used to represent the wavefunction?

(b) In which (approximate) way is the time-evolution performed?

We will distinguish the methods according to their different approach to the solution of the problems above.

2.3.1 Orthogonal Basis Expansion

All methods to solve the time-dependent Schrödinger equation numerically have to deal with a way to express the wavefunction in a certain representation. Here we shall consider the expansion of the wavefunction in a set of orthogonal basis functions, which are eigenfunctions of a certain (simple) Hamiltonian like, e.g., the 1D harmonic oscillator one
$$\begin{aligned} \hat{H}_\mathrm{HO}=-\frac{\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2}+ \frac{1}{2}m\omega _\mathrm{e}^2x^2. \end{aligned}$$
(2.156)
Its eigenfunctions are given by
$$\begin{aligned} \psi _n(x)=\sqrt{\frac{\sigma }{n!2^n\sqrt{\pi }}}H_n(\sigma x) \exp \left\{ -\frac{1}{2}\sigma ^2x^2\right\} , \end{aligned}$$
(2.157)
where \(H_n\) with \(n=0,1,2,\dots \) are Hermite polynomials [24], the first three of which are
$$\begin{aligned} H_0(x)= & {} 1, \end{aligned}$$
(2.158)
$$\begin{aligned} H_1(x)= & {} 2x, \end{aligned}$$
(2.159)
$$\begin{aligned} H_2(x)= & {} 4x^2-2, \end{aligned}$$
(2.160)
and \(\sigma =\sqrt{m\omega _\mathrm{e}/\hbar }\).
The alternative representation of the harmonic oscillator Hamiltonian
$$\begin{aligned} \hat{H}_\mathrm{HO}=\hbar \omega _\mathrm{e}\left( \hat{a}^\dagger \hat{a}+\frac{1}{2}\right) \end{aligned}$$
(2.161)
in terms of so-called creation and annihilation operators
$$\begin{aligned} \hat{a}^{\dagger }= & {} \frac{1}{\sqrt{2}}\left( \sigma \hat{x}-\frac{1}{\sigma } \frac{\partial }{\partial x}\right) , \end{aligned}$$
(2.162)
$$\begin{aligned} \hat{a}= & {} \frac{1}{\sqrt{2}}\left( \sigma \hat{x}+\frac{1}{\sigma } \frac{\partial }{\partial x}\right) \end{aligned}$$
(2.163)
is very helpful. These operators fulfill the commutation relation
$$\begin{aligned}{}[\hat{a},\hat{a}^\dagger ]=\hat{1} \end{aligned}$$
(2.164)
and have the properties
$$\begin{aligned} \hat{a}^{\dagger }|n\rangle= & {} \sqrt{n+1}|n+1\rangle \qquad n=0,1,2,\dots ,\end{aligned}$$
(2.165)
$$\begin{aligned} \hat{a}|n\rangle= & {} \sqrt{n}|n-1\rangle \qquad n=1,2,\dots , \end{aligned}$$
(2.166)
i.e., they allow one either to “climb up” or to “climb down” the ladder of harmonic oscillator states (in shorthand notation just denoted by the index n) and are therefore also referred to as ladder operators.
By resolving the definition of the ladder operators in terms of the position and the derivative operator
$$\begin{aligned} \hat{x}= & {} \frac{1}{\sqrt{2}\sigma }(\hat{a}+\hat{a}^\dagger ), \end{aligned}$$
(2.167)
$$\begin{aligned} \frac{\partial }{\partial x}= & {} \frac{\sigma }{\sqrt{2}}(\hat{a}-\hat{a}^\dagger ), \end{aligned}$$
(2.168)
one can express arbitrary powers of these operators in terms of products of \(\hat{a}^\dagger \) and \(\hat{a}\). Matrix elements of any operator between harmonic oscillator states can then be calculated by employing (2.165) and (2.166).
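In a truncated basis of size L (an assumed value below), the ladder operators become finite matrices, from which the matrix of any polynomial operator follows by matrix multiplication:

```python
import numpy as np

# ladder operators as finite matrices (basis size L and sigma are assumptions)
L, sigma = 10, 1.0
n = np.arange(L)
a = np.diag(np.sqrt(n[1:]), k=1)   # <n-1|a|n> = sqrt(n), cf. (2.166)
adag = a.T                         # <n+1|a^dag|n> = sqrt(n+1), cf. (2.165)

xmat = (a + adag) / (np.sqrt(2) * sigma)   # position operator, cf. (2.167)
x2 = xmat @ xmat

# diagonal of x^2: <n|x^2|n> = (n + 1/2)/sigma^2, exact away from the truncation edge
print(np.diag(x2)[:L - 1])
```

Only matrix elements near the truncation edge are affected by the finite basis size; e.g., the diagonal of \(\hat{x}^2\) equals \((n+1/2)/\sigma ^2\) for all but the last basis state.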
An arbitrary time-dependent wavefunction can now be expanded into eigenfunctions of the harmonic oscillator according to
$$\begin{aligned} |\varPsi (t)\rangle =\sum _{l=0}^\infty d_l(t)|l\rangle . \end{aligned}$$
(2.169)
After insertion of this expression into the time-dependent Schrödinger equation governed by the Hamiltonian \(\hat{H}\), and multiplication from the left with \(\langle n|\), an infinite linear system of coupled ordinary differential equations for the expansion coefficients
$$\begin{aligned} \mathrm{i}\hbar \dot{d}_n(t)=\sum _{l=0}^\infty d_l(t)\langle n|\hat{H}|l\rangle \end{aligned}$$
(2.170)
is gained. This system can (in principle) be solved once the initial conditions are known. At this point, however, we should address our “problems” as stated in the introduction to this section:
(a) The basis problem is solved by truncating the expansion at a large \(l=L-1\), which is determined by the initial state that shall be described. One thus uses a “Finite Basis Representation”. Convergence of the results can be checked by increasing the size L of the finite basis.

(b) The numerical integration of the linear system of differential equations could be performed with the help of a suitable integration routine like the Runge-Kutta method [25].
Solving the system of coupled differential equations can be circumvented, however, by finding the (first N) eigenvalues \(E_n\) and eigenfunctions \(|n_H\rangle \) of the Hamiltonian \(\hat{H}\) in case this is autonomous. By determining the (now time-independent) expansion coefficients of the wavefunction in terms of these eigenfunctions, the wavefunction is in principle exactly14 time-evolved by using the corresponding eigenenergies according to
$$\begin{aligned} |\varPsi (t)\rangle =\sum _{n=0}^{N-1}c_n|n_H\rangle \mathrm{e}^{-\mathrm{i}E_nt/\hbar }. \end{aligned}$$
(2.171)
We should keep in mind, however, that the solution of the eigenvalue problem requires a numerical effort of order \(L^3\) if \(L\times L\) is the size of the matrix to be diagonalized [25] and is therefore only suitable if L can be kept small.
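A sketch of propagation via diagonalization, (2.171), for a Hermitian Hamiltonian matrix in a finite basis (here randomly chosen, as an assumption):

```python
import numpy as np

# propagation via diagonalization (2.171); the Hamiltonian matrix is an assumed model
hbar = 1.0
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
Hmat = (A + A.T) / 2                 # some Hermitian Hamiltonian in a finite basis
psi0 = np.zeros(6)
psi0[0] = 1.0

evals, evecs = np.linalg.eigh(Hmat)
c = evecs.conj().T @ psi0            # coefficients c_n = <n_H|Psi(0)>

def psi(t):
    # (2.171): multiply each coefficient by its phase factor and transform back
    return evecs @ (np.exp(-1j * evals * t / hbar) * c)

print(np.linalg.norm(psi(3.7)))      # norm is conserved
```

The numerical effort is a single diagonalization of cost \(O(L^3)\); afterwards, the state at any time follows by multiplying the coefficients with phase factors.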

2.3.1.1 The Floquet Matrix

The alternative method to tackle problem (b) by solving the eigenvalue problem is fortunately not restricted to autonomous Hamiltonians. It also works for periodically driven systems that have been discussed in Sect. 2.2.8 and leads to the calculation of the quasi-energies and quasi-eigenfunctions. We start from the Floquet type Schrödinger equation in Dirac notation
$$\begin{aligned} \hat{\mathcal{H}}(t)|\psi _{\alpha }(t)\rangle = \epsilon _{\alpha }|\psi _{\alpha }(t)\rangle , \end{aligned}$$
(2.172)
with the extended Hamiltonian defined in (2.138). Due to the periodic time-dependence of the Floquet functions, they can be Fourier expanded according to
$$\begin{aligned} |\psi _{\alpha }(t)\rangle =\sum _{n=-\infty }^{\infty } |\psi _{\alpha }^{n}\rangle \mathrm{e}^{-\mathrm{i}n\omega t}. \end{aligned}$$
(2.173)
The Fourier coefficients on the RHS of this expression can in turn be expanded in a complete orthogonal system of basis functions \(\{|k\rangle \}\) via
$$\begin{aligned} |\psi _{\alpha }^{n}\rangle =\sum _{k=0}^{\infty }\psi _{k,\alpha }^{n} |k\rangle \end{aligned}$$
(2.174)
and the Schrödinger equation thus is given by
$$\begin{aligned} \sum _{n=-\infty }^{\infty }\sum _{k=0}^{\infty } \hat{\mathcal{H}}\psi _{k,\alpha }^{n} |k\rangle \mathrm{e}^{-\mathrm{i}n\omega t}= \sum _{n=-\infty }^{\infty }\sum _{k=0}^{\infty } \epsilon _{\alpha }\psi _{k,\alpha }^{n}|k\rangle \mathrm{e}^{-\mathrm{i}n\omega t}. \end{aligned}$$
(2.175)
Multiplying this equation with \( \langle l m|:=\langle \psi _l \mathrm{e}^{-\mathrm{i}m\omega t}| \) from the left and integrating over one period of the external force yields
$$\begin{aligned} \sum _{n=-\infty }^{\infty }\sum _{k=0}^{\infty } \langle \langle l m |\hat{\mathcal{H}}|k n\rangle \rangle \psi _{k,\alpha }^{n} = \epsilon _{\alpha }\psi _{l,\alpha }^{m}, \end{aligned}$$
(2.176)
where we have used the definition \( \langle \langle \cdots \rangle \rangle := \frac{1}{T}\int _0^T\mathrm{d}t\langle \cdots \rangle \) of the extended scalar product that has already been used in Exercise 2.11.
Equation (2.176) was first given by Shirley [26] for the case of a two-level system. Shirley then transformed the equations in order to recover an eigenvalue problem, whose solutions are the quasi-energies. This procedure can, however, also be used for an infinite-dimensional Hilbert space. In order to do so, one rewrites the equation above according to
$$\begin{aligned} \sum _{n=-\infty }^{\infty }\sum _{k=0}^{\infty } \left\{ \langle l|\hat{H}^{[m-n]}|k \rangle -n\hbar \omega \delta _{mn}\delta _{lk}\right\} \psi _{k,\alpha }^{n} =\epsilon _{\alpha }\psi _{l,\alpha }^{m}, \end{aligned}$$
(2.177)
where the definition
$$\begin{aligned} \hat{H}^{[m-n]}=\frac{1}{T}\int _{0}^{T}\mathrm{d}t\hat{H}(t)\exp (\mathrm{i}[m-n]\omega t) \end{aligned}$$
(2.178)
has been introduced and we have used
$$\begin{aligned} \frac{1}{T}\int _0^T\mathrm{d}t\exp (\mathrm{i}[m-n]\omega t)=\delta _{mn}. \end{aligned}$$
(2.179)
In the case of a monochromatic perturbation,
$$\begin{aligned} \hat{H}(t)\equiv \hat{H}_{0}+\hat{H}_1\sin (\omega t), \end{aligned}$$
(2.180)
time integration yields
$$\begin{aligned} \hat{H}^{[m-n]}=\hat{H}_{0}\delta _{mn}+\frac{\hat{H}_{1}}{2\mathrm{i}} \left\{ \delta _{m,n-1}-\delta _{m,n+1}\right\} . \end{aligned}$$
(2.181)
Equation (2.176) is the eigenvalue problem of the extended Hamiltonian \({\hat{\mathcal{H}}}\), whose matrix elements are given by
$$\begin{aligned} \langle l|\hat{H}^{[m-n]}|k\rangle -n\hbar \omega \delta _{mn}\delta _{lk}. \end{aligned}$$
(2.182)
The Fourier expansion (2.173) has rendered the problem time-independent. One has to cope with an additional “dimension” (\(n=0,\pm 1,\pm 2\dots \)), however.
After choosing a basis (e.g., the harmonic oscillator basis) and calculating the matrix elements, the quasi-energies are the eigenvalues and the quasi-eigenfunctions are the eigenvectors of (2.177). The Floquet matrix to be diagonalized is
$$\begin{aligned} \left( \begin{array}{ccccccc} \ddots &{} &{} &{} &{} &{} &{} \\ &{}\mathbf{H_{0}}-2\hbar \omega \mathbf{1}&{}\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{0}&{}\mathbf{0}&{}\mathbf{0}&{} \\ &{}-\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{H_{0}}-\hbar \omega \mathbf{1}&{}\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{0}&{}\mathbf{0}&{} \\ &{}\mathbf{0}&{}-\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{H_{0}}&{}\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{0}&{} \\ &{}\mathbf{0}&{}\mathbf{0}&{}-\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{H_{0}}+\hbar \omega \mathbf{1}&{}\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{} \\ &{}\mathbf{0}&{}\mathbf{0}&{}\mathbf{0}&{}-\frac{1}{2\mathrm{i}}\mathbf{H_{1}}&{}\mathbf{H_{0}}+2\hbar \omega \mathbf{1}&{} \\ &{} &{} &{} &{} &{} &{}\ddots \end{array} \right) . \end{aligned}$$
(2.183)
Here \(\mathbf{1},\mathbf{0}\) are unit and zero matrices, and in principle the block matrices have to be added ad infinitum, i.e., \(n\rightarrow \infty \). In the numerics, however, one uses matrices \(\mathbf{H_{0}}\) and \(\mathbf{H_{1}}\) of finite size \(L\times L\) as well as a finite number \(2M+1\) of Fourier terms. Convergence can be checked by increasing L as well as M.
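Setting up and diagonalizing the truncated Floquet matrix can be sketched as follows for a two-level system with the monochromatic perturbation (2.180); the level spacing, drive strength, frequency, and truncation size are assumptions:

```python
import numpy as np

# truncated Floquet matrix (2.183) for a driven two-level system (parameters assumed)
hbar = 1.0
H0 = np.array([[0.0, 0.0], [0.0, 1.0]])
H1 = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
w, M, L = 2.0, 12, 2                 # drive frequency; 2M+1 Fourier blocks of size L

dim = (2 * M + 1) * L
F = np.zeros((dim, dim), dtype=complex)
for mm in range(-M, M + 1):
    i = (mm + M) * L
    F[i:i + L, i:i + L] = H0 + mm * hbar * w * np.eye(L)   # diagonal blocks
    if mm < M:
        F[i:i + L, i + L:i + 2 * L] = H1 / (2 * 1j)        # upper off-diagonal blocks
        F[i + L:i + 2 * L, i:i + L] = -H1 / (2 * 1j)       # lower off-diagonal blocks

qs = np.sort(np.linalg.eigvalsh(F))  # quasi-energies (plus copies shifted by hbar*w)
```

Away from the truncation edges, the computed spectrum indeed repeats with period \(\hbar \omega \), reflecting the equivalence classes (2.152).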

In general the basis function expansion method is only a viable approach if the matrix elements of the Hamiltonian can be calculated easily. If the basis is the harmonic oscillator one, this is the case if the potential is given by a polynomial of low order. In other cases or if the potential is multidimensional, so-called “discrete variable representations” (DVR) [27] are frequently used. Finally, it should be noted that the diagonalization of the Floquet matrix becomes much more difficult, if the system under consideration contains a continuum of states. Then the method of complex rotation can be employed [28].

2.3.2 Split-Operator Method

The split-operator method for the solution of the time-dependent Schrödinger equation is based on the approximate representation of the time-evolution operator, i.e., the treatment of problem (b) by using the Zassenhaus formula [29]15
$$\begin{aligned} \mathrm{e}^{\hat{x}+\hat{y}}=\mathrm{e}^{\hat{x}}\mathrm{e}^{\hat{y}}\mathrm{e}^{-1/2[\hat{x},\hat{y}]} \mathrm{e}^{1/3[\hat{y},[\hat{x},\hat{y}]]+1/6[\hat{x},[\hat{x},\hat{y}]]}\dots . \end{aligned}$$
(2.184)
In the following, we restrict the discussion to a particle moving in one spatial dimension under a Hamiltonian of the usual form
$$\begin{aligned} \hat{H}=\hat{T}_\mathrm{k}(\hat{p})+\hat{V}(\hat{x}). \end{aligned}$$
(2.185)
For very short but finite time steps \(\Delta t\), one then finds from the Zassenhaus formula that
$$\begin{aligned} \mathrm{e}^{-\mathrm{i}\hat{H}\Delta t/\hbar }\approx \mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar } \mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/\hbar } \end{aligned}$$
(2.186)
is accurate to first order in \(\Delta t\). This approximation is also the basis for the (exact) Trotter product formula [14]
$$\begin{aligned} \mathrm{e}^{-\mathrm{i}\hat{H} t/\hbar }= \lim _{N\rightarrow \infty }\left[ \mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}t/(N \hbar )}\mathrm{e}^{-\mathrm{i}\hat{V}t/(N \hbar )}\right] ^N. \end{aligned}$$
(2.187)
By working through Exercise 2.12 one can prove that a Strang splitting according to
$$\begin{aligned} \mathrm{e}^{-\mathrm{i}\hat{H}\Delta t/\hbar }= \mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )} \mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar } \mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}+{O}(\Delta t^3) \end{aligned}$$
(2.188)
leads to an approximation of higher accuracy.
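The different orders of accuracy of (2.186) and (2.188) can be verified numerically; in the sketch below, random Hermitian matrices stand in for the kinetic and potential operators:

```python
import numpy as np

# order check of the splittings (2.186) and (2.188) with random Hermitian matrices
rng = np.random.default_rng(0)

def herm(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

def u(Hop, tau):
    # exact exponential exp(-i*Hop*tau) via spectral decomposition
    evals, evecs = np.linalg.eigh(Hop)
    return evecs @ np.diag(np.exp(-1j * evals * tau)) @ evecs.conj().T

Tk, V = herm(4), herm(4)
err_lie, err_strang = [], []
for dt in (0.02, 0.01):
    exact = u(Tk + V, dt)
    err_lie.append(np.linalg.norm(u(Tk, dt) @ u(V, dt) - exact))                      # (2.186)
    err_strang.append(np.linalg.norm(u(V, dt / 2) @ u(Tk, dt) @ u(V, dt / 2) - exact))  # (2.188)
    print(dt, err_lie[-1], err_strang[-1])
```

Halving the time step reduces the error of the simple splitting by a factor of about four and that of the Strang splitting by about eight, as expected for local errors \(O(\Delta t^2)\) and \(O(\Delta t^3)\), respectively.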

2.12.

Show that the Strang splitting of the time-evolution operator leads to a second order method.

Hint: use the Zassenhaus as well as the BCH formula.

Problem (a) is now dealt with by representing the wavefunction at \(t=0\) on an equidistant position space grid \(x_n\in [x_\mathrm{min},x_\mathrm{max}],n=1,\dots ,N\). The wavefunction propagated for a time \(\Delta t\) at the grid point \(x_n\) is then given by
$$\begin{aligned} \varPsi (x_n,\Delta t)= & {} \langle x_n|\mathrm{e}^{-\mathrm{i}\hat{H}\Delta t/\hbar }|\varPsi (0)\rangle \nonumber \\\approx & {} \langle x_n|\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )} \mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar } \mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}|\varPsi (0)\rangle . \end{aligned}$$
(2.189)
By inserting unity twice in terms of position states and once in terms of momentum states, the threefold integral (for the numerics, the integrations are discretized due to the grid-based representation of the wavefunction)
$$\begin{aligned} \varPsi (x_n,\Delta t)\approx & {} \int \mathrm{d}x' \int \mathrm{d}p' \int \mathrm{d}x'' \langle x_n|\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}|x''\rangle \nonumber \\&\langle x''|\mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar }|p'\rangle \langle p'|\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}|x'\rangle \langle x'|\varPsi (0)\rangle \end{aligned}$$
(2.190)
emerges. The integral over \(x''\) can be performed immediately due to the diagonal nature of the potential operator in position space and the \(\delta \)-function appearing in
$$\begin{aligned} \langle x_n|\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}|x''\rangle = \mathrm{e}^{-\mathrm{i}V(x_n)\Delta t/(2\hbar )}\delta (x''-x_n). \end{aligned}$$
(2.191)
Also the second exponentiated potential term simplifies, yielding
$$\begin{aligned} \langle p'|\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}|x'\rangle= & {} \langle p'|x'\rangle \mathrm{e}^{-\mathrm{i}V(x')\Delta t/(2\hbar )} \nonumber \\= & {} \frac{1}{\sqrt{2\pi \hbar }}\mathrm{e}^{-\mathrm{i}p'x'/\hbar }\mathrm{e}^{-\mathrm{i}V(x')\Delta t/(2\hbar )}. \end{aligned}$$
(2.192)
The \(x'\) integration is a Fourier transformation of the intermediate wavefunction
$$\begin{aligned} {\tilde{\varPsi }}(x',0)=\mathrm{e}^{-\mathrm{i}V(x')\Delta t/(2\hbar )}\varPsi (x',0) \end{aligned}$$
(2.193)
into momentum space. This leads to the fact that the exponentiated operator of the kinetic energy becomes a multiplicative factor and can be applied easily via
$$\begin{aligned} \langle x''|\mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar }|p'\rangle= & {} \langle x''|p'\rangle \mathrm{e}^{-\mathrm{i}T_\mathrm{k}(p')\Delta t/\hbar } \nonumber \\= & {} \frac{1}{\sqrt{2\pi \hbar }}\mathrm{e}^{\mathrm{i}p'x''/\hbar } \mathrm{e}^{-\mathrm{i}T_\mathrm{k}(p')\Delta t/\hbar }. \end{aligned}$$
(2.194)
The \(p'\) integration transforms the wavefunction back into position space.
The main numerical effort is the need to perform two Fourier transforms of the wavefunction during the propagation over one time step. These can, however, be performed by using the fast Fourier transform (FFT) algorithm [25]. The implementation of the split-operator based FFT method can therefore be summarized as follows:
  1. Represent the initial wavefunction on a position space grid
  2. Apply the operator \(\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}\)
  3. Perform an FFT into momentum space
  4. Apply the operator \(\mathrm{e}^{-\mathrm{i}\hat{T}_\mathrm{k}\Delta t/\hbar }\)
  5. Perform an inverse FFT back into position space
  6. Apply the operator \(\mathrm{e}^{-\mathrm{i}\hat{V}\Delta t/(2\hbar )}\).

This procedure propagates the wavefunction over a single small time step. For propagation over long times it is repeated many times, and if the intermediate values of the wavefunction are not needed, consecutive half steps of potential propagation can be combined (apart from the first and the last one). Furthermore, we stress that to propagate the wavefunction over the next time step, we need its value not only at \(x_n\) but at all grid points. This reflects the nonlocal nature of quantum theory: for the calculation of the new wavefunction, the old one is needed everywhere. This is in contrast to classical mechanics, where a trajectory depends only on its own initial conditions; classical mechanics is a local theory.
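A minimal sketch of the full cycle above, for a Gaussian wavepacket in a harmonic potential (an illustrative choice; \(\hbar =m=\omega =1\), grid and time step parameters are arbitrary), may look as follows:

```python
import numpy as np

# One full split-operator FFT cycle, repeated over many steps, for a
# Gaussian wavepacket in a harmonic potential (hbar = m = omega = 1).
hbar, m = 1.0, 1.0
N, X = 256, 20.0                                  # grid points, grid length
dx = X / N
x = -X / 2 + dx * np.arange(N)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)    # FFT-ordered momentum grid
V = 0.5 * x**2                                    # example potential
T = p**2 / (2 * m)
dt = 0.01

psi = (1.0 / np.pi) ** 0.25 * np.exp(-0.5 * (x - 1.0) ** 2)  # displaced ground state
psi = psi.astype(complex)

def step(psi):
    """Half potential step, full kinetic step in momentum space, half potential step."""
    psi = np.exp(-0.5j * V * dt / hbar) * psi
    psi = np.fft.ifft(np.exp(-1j * T * dt / hbar) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt / hbar) * psi

for _ in range(100):                              # propagate to t = 1
    psi = step(psi)

norm = np.sum(np.abs(psi) ** 2) * dx              # conserved (unitary steps)
```

Each factor is a unit-modulus multiplication and the FFT pair is unitary, so the discretized norm is conserved exactly; for this harmonic example the wavepacket center should closely follow the classical trajectory \(q_0\cos t\).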

A nice review of the details of FFT and a corresponding subroutine can be found in [25]. Some facts will be briefly repeated here. A function \(\varPhi (x_n)\) can be written as a discrete Fourier transform according to
$$\begin{aligned} \varPhi (x_n)=\sum _{k=-N/2+1}^{N/2}a_k\mathrm{e}^{2\pi \mathrm{i}k x_n/X}, \end{aligned}$$
(2.195)
with the Fourier coefficients
$$\begin{aligned} a_k=\frac{1}{N}\sum _{n=1}^{N}\varPhi (x_n)\mathrm{e}^{-2\pi \mathrm{i}k x_n/X}. \end{aligned}$$
(2.196)
For the implementation it is important to note that:
  • The number of grid points \(N=2^j\) has to be an integer power of 2

  • The grid length is \(X=x_\mathrm{max}-x_\mathrm{min}\) and the \(x_n\) are equidistant with spacing \(\Delta x=X/N\)

  • The numerical effort scales as \(N \ln N\) [25]

  • The maximal momentum that can be described is
    $$p_\mathrm{max}=h/(2\Delta x)=Nh/(2X),$$
    and \(p_\mathrm{min}=-p_\mathrm{max}\)

  • The covered phase space volume is \(V_P=2Xp_\mathrm{max}=Nh\)

  • The time step should fulfill \(\Delta t<\hbar \pi /(3V_\mathrm{max})\), with \(V_\mathrm{max}\) the maximal excursion of the potential [31]. For very long propagation times, see also [32].

  • If calculated using (2.45), the energy resolution is given by \(\Delta E_\mathrm{min}=\hbar \pi /T_\mathrm{t}\), with \(T_\mathrm{t}\) the total propagation time.

There are more recent FFT implementations that are not restricted to integer powers of 2 and that, through adaptation to the platform used for the calculations, can have considerable advantages in speed (FFTW: the Fastest Fourier Transform in the West [33]).
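The momentum-grid relations in the list above can be made concrete with NumPy's FFT frequency helper (illustrative grid parameters, \(\hbar =1\) so that \(h=2\pi \)):

```python
import numpy as np

# The momentum grid implied by an N-point position grid of length X
# (hbar = 1, h = 2*pi*hbar); parameters are illustrative.
hbar = 1.0
h = 2.0 * np.pi * hbar
N, X = 128, 10.0
dx = X / N
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # FFT-ordered momentum grid
p_max = N * h / (2 * X)                          # = h/(2*dx), see the list above
print(p.min(), p.max(), p_max)
```

The grid runs from \(-p_\mathrm{max}\) up to \(p_\mathrm{max}\) minus one momentum spacing \(2\pi \hbar /X\).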

The use of the time-evolution operator for constant Hamiltonians at the beginning of our discussion does not restrict the presented methodology to time-independent Hamiltonians. As in the case of the infinitesimal time-evolution operator (2.32), one can use a constant Hamiltonian for the propagation over a small time interval \(\Delta t\); at the beginning of the next time step, a slightly changed Hamiltonian is employed.

Finally, one drawback of the method shall be mentioned, although it will not come into play in the present book. The split-operator idea only succeeds in producing simple multiplicative exponentials if there are no products of \(\hat{p}\) and \(\hat{x}\) in the Hamiltonian. These would appear in the treatment of dissipative quantum problems, which are outside the scope of this presentation.

2.3.2.1 Negative Imaginary Absorbing Potentials

Another possible drawback of a grid-based method like the split-operator FFT method shall be dealt with in a bit more detail. It can be cast in the form of a question: What happens to a wavepacket when it hits the grid boundaries? It would reenter on the other side of the grid, leading to nonphysical results! This can be avoided by adding a negative imaginary potential of the form
$$\begin{aligned} V(x)=-\mathrm{i}f(x)\varTheta (x-x_a), \end{aligned}$$
(2.197)
which is nonzero for values \(x>x_a\), close to the right grid boundary \(x_\mathrm{max}\), and by a similar term at the left side of the grid. In Sect. 2.1 we have made use of the fact that the potential is real valued in order to show that the norm of any wavefunction is conserved. The fact that the total potential is now complex leads to a loss of norm. This may, however, not be as disturbing as the re-entrance phenomenon, especially in situations where the wavepacket would just move on in “free space”, as in a scattering situation after the scattering event is over.

The choice of the functional form of f(x) in (2.197) is crucial. It turns out that the potential has to rise smoothly and rather slowly, so that no unphysical reflections of the wavefunction are induced by the negative imaginary potential. A detailed study of several different functional forms of the imaginary potential can be found in [34].
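A minimal sketch of such an absorber, using a smooth quadratic ramp as one common functional form (the ramp shape, strength, and onset \(x_a\) are illustrative choices, not the specific forms studied in [34]):

```python
import numpy as np

# Sketch of a negative imaginary absorber near the right grid edge, with a
# smooth quadratic ramp f(x) (illustrative choice); over one full time step
# the absorber contributes the damping factor exp(-f(x) dt / hbar).
def absorber_factor(x, x_a, x_max, strength, dt, hbar=1.0):
    """Damping factor from V_abs(x) = -i*strength*((x - x_a)/(x_max - x_a))**2."""
    f = np.where(x > x_a, strength * ((x - x_a) / (x_max - x_a)) ** 2, 0.0)
    return np.exp(-f * dt / hbar)

x = np.linspace(-10.0, 10.0, 256)
mask = absorber_factor(x, x_a=7.0, x_max=10.0, strength=5.0, dt=0.05)
# mask equals 1 in the interior and decays smoothly toward the boundary
```

In a split-operator code, the mask (split over the two potential half steps) simply multiplies the wavefunction in position space in every step.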

2.3.3 Alternative Methods of Time-Evolution

In the material presented so far we have dealt with both problem (a) and problem (b). In the following, some alternative ways of treating the time-evolution, i.e., problem (b), shall be reviewed.

2.3.3.1 Method of Finite Differences

The discretization of the time-derivative in (2.22) with the help of the first order formula
$$\begin{aligned} |\dot{\varPsi }(t)\rangle \approx \frac{|\varPsi (t+\Delta t)\rangle -|\varPsi (t)\rangle }{\Delta t} \end{aligned}$$
(2.198)
leads to an explicit numerical method if the RHS is evaluated with the “old” wavefunction \(|\varPsi (t)\rangle \). This means that the wavefunction at a later time is explicitly given by the wavefunction at the earlier time. Unfortunately, however, this so-called Euler method is numerically unstable.
An at least conditionally stable method can be constructed by application of the second-order formula
$$\begin{aligned} |\dot{\varPsi }(t)\rangle \approx \frac{|\varPsi (t+\Delta t)\rangle -|\varPsi (t-\Delta t)\rangle }{2 \Delta t} \end{aligned}$$
(2.199)
for the time-derivative. The corresponding method is referred to as second-order differencing (SOD) and has been advocated for the solution of the time-dependent Schrödinger equation by Askar and Cakmak [35]. The method can be shown to be energy and norm conserving. The condition under which it is stable can be derived by considering the eigenvalues of the propagation matrix that appears when using the discrete form of the time-derivative
$$\begin{aligned} \left( \begin{array}{c} {|\varPsi _{n+1}\rangle } \\ {|\varPsi _n\rangle } \end{array} \right) = \left( \begin{array}{cc} \hat{1}-4\hat{H}^2\Delta t^2/\hbar ^2&{}-2\mathrm{i}\hat{H}\Delta t/\hbar \\ -2\mathrm{i}\hat{H}\Delta t/\hbar &{}\hat{1} \end{array} \right) \left( \begin{array}{c} {|\varPsi _{n-1}\rangle } \\ {|\varPsi _{n-2}\rangle } \end{array} \right) . \end{aligned}$$
(2.200)
The eigenvalues of the matrix are (replacing \(\hat{H}\) by E)
$$\begin{aligned} \lambda _{1,2}=1-2E^2\Delta t^2/\hbar ^2\pm \frac{2E\Delta t}{\hbar } \sqrt{\frac{E^2\Delta t^2}{\hbar ^2}-1}. \end{aligned}$$
(2.201)
The discrete mapping is norm conserving due to \(\lambda _1\lambda _2=1\). For stability, the radicand in the expression above has to be negative, such that the eigenvalues become complex; otherwise, numerical errors would grow exponentially with the number of iterations. Thus, for stability, \(\Delta t<\hbar /E_\mathrm{max}\) has to hold, where \(E_\mathrm{max}\) is the largest eigenvalue of \(\hat{H}\) taking part in the dynamics [36]. A slightly different look at the second order differencing method is taken in Exercise 2.13.
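A sketch of the SOD recursion for a simple finite-difference Hamiltonian (illustrative grid parameters, \(\hbar =m=1\); the first step is taken with a simple Euler step) looks as follows:

```python
import numpy as np

# Sketch of second-order differencing (SOD) for a finite-difference
# Hamiltonian (hbar = m = 1); the grid and potential are illustrative.
N, dx = 200, 0.1
x = (np.arange(N) - N // 2) * dx
V = 0.5 * x**2                                   # harmonic example potential

# Kinetic energy from the three-point finite-difference Laplacian
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

E_max = np.linalg.eigvalsh(H)[-1]
dt = 0.5 / E_max                                 # safely below the bound 1/E_max

psi_old = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi_old /= np.sqrt(np.sum(np.abs(psi_old) ** 2) * dx)
psi = psi_old - 1j * dt * (H @ psi_old)          # Euler start-up step

for _ in range(200):                             # SOD recursion from (2.199)
    psi_old, psi = psi, psi_old - 2j * dt * (H @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx             # stays close to 1
```

Note that the stability bound forces a rather small time step here, because \(E_\mathrm{max}\) of the discretized Hamiltonian grows with the inverse grid spacing squared.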

2.13.

Show that in the second order differencing method the following holds if \(\hat{H}\) is Hermitian (and time-independent)
  1. (a)

    \( \mathrm{Re}\langle \varPsi (t-\Delta t)|\varPsi (t)\rangle =\mathrm{Re}\langle \varPsi (t)|\varPsi (t+\Delta t)\rangle =\mathrm{const} \)

     
  2. (b)

    \( \mathrm{Re}\langle \varPsi (t-\Delta t)|\hat{H}|\varPsi (t)\rangle =\mathrm{Re}\langle \varPsi (t)|\hat{H}|\varPsi (t+\Delta t)\rangle =\mathrm{const} \)

     
  3. (c)

    Interpret the results gained above.

     
  4. (d)

    Consider the time-evolution of an eigenstate \(\psi \) of the Hamiltonian with eigenvalue E and derive a criterion for the maximally allowed time step \(\Delta t\).

    Hint: Insert the exact time-evolution into the SOD scheme and distinguish the exact eigenvalue from the approximate \(E_\mathrm{app}\) due to SOD time evolution.

     

2.3.3.2 Crank-Nicolson Method

An alternative possibility to circumvent the problem of instability of the Euler method is given by the Crank-Nicolson procedure. Here the first order formula
$$\begin{aligned} \hat{U}(\Delta t)\approx \hat{1}-\mathrm{i}\hat{H}\Delta t/\hbar , \end{aligned}$$
(2.202)
representing the short-time evolution operator is used forward as well as backward in time
$$\begin{aligned} |\varPsi _{n+1}\rangle= & {} \hat{U}(\Delta t)|\varPsi _{n}\rangle , \end{aligned}$$
(2.203)
$$\begin{aligned} |\varPsi _{n-1}\rangle= & {} \hat{U}(-\Delta t)|\varPsi _{n}\rangle . \end{aligned}$$
(2.204)
In order to make progress, one solves both equations for \(|\varPsi _{n}\rangle \) by multiplying with the corresponding inverse operators. Equating the resulting expressions yields
$$\begin{aligned} (\hat{1}+\mathrm{i}\hat{H}\Delta t/\hbar )|\varPsi _{n+1}\rangle = (\hat{1}-\mathrm{i}\hat{H}\Delta t/\hbar )|\varPsi _{n-1}\rangle . \end{aligned}$$
(2.205)
The procedure now is an implicit one that is stable as well as norm conserving.
Due to its implicit nature, the method requires a matrix inversion and formally leads to the Cayley approximation (see also Chap. 19.2 of [25])
$$\begin{aligned} |\varPsi _{n+1}\rangle = \frac{\hat{1}-\mathrm{i}\hat{H}\Delta t/(2\hbar )}{\hat{1}+\mathrm{i}\hat{H}\Delta t/(2\hbar )} |\varPsi _{n}\rangle \end{aligned}$$
(2.206)
for the propagated wavefunction. The CN scheme is second-order accurate in \(\Delta t\), as can be seen by a Taylor expansion of the final expression and comparison with the Taylor expansion of the exact time-evolution operator.
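The Cayley form can be sketched for the same type of finite-difference Hamiltonian as used for the SOD method (illustrative parameters, \(\hbar =m=1\)); each step requires the solution of a linear system:

```python
import numpy as np

# Sketch of Crank-Nicolson (Cayley) propagation for a finite-difference
# Hamiltonian (hbar = m = 1); grid, potential and time step are illustrative.
N, dx, dt = 200, 0.1, 0.05
x = (np.arange(N) - N // 2) * dx
V = 0.5 * x**2
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

A = np.eye(N) + 0.5j * dt * H                    # implicit (LHS) operator
B = np.eye(N) - 0.5j * dt * H                    # explicit (RHS) operator

psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)            # one Cayley step (2.206)

norm = np.sum(np.abs(psi) ** 2) * dx             # conserved for Hermitian H
```

For the tridiagonal H used here, the linear system could be solved in O(N) operations; the dense solver is used only for brevity. The step is unconditionally stable even for this comparatively large \(\Delta t\).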

2.3.3.3 Polynomial Methods

The idea behind polynomial methods is the expansion of the time-evolution operator in terms of polynomials, according to
$$\begin{aligned} \mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }=\sum _na_nP_n(\hat{H})\;. \end{aligned}$$
(2.207)
Two different approaches are commonly used:
  • In the Chebyshev method, the polynomials are fixed to be the complex-valued Chebyshev ones. A first application to the problem of wavefunction propagation has been presented by Tal-Ezer and Kosloff [37]. These authors have shown that the approach is up to six times more efficient than the SOD method presented above. It allows for evolution over relatively long time steps. Drawbacks are that intermediate time information is not readily available and, even worse in the present context, that time-dependent Hamiltonians cannot be treated.

  • In contrast to the first approach, in the Lanczos method, the polynomials are not fixed but are generated in the course of the propagation. A very profound introduction to the commonly applied short iterative Lanczos method can be found in [38].
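A sketch of a short iterative Lanczos step with NumPy (an illustrative minimal version with a fixed Krylov dimension and full reorthogonalization for robustness; the adaptive error control discussed in [38] and the handling of Krylov-space breakdown are omitted):

```python
import numpy as np

# Short iterative Lanczos propagation (sketch, hbar = 1): an M-dimensional
# Krylov basis is generated from psi, the small tridiagonal matrix is
# exponentiated exactly, and exp(-i H dt) psi is assembled from it.
def lanczos_expm(H, psi, dt, M=15):
    N = len(psi)
    Q = np.zeros((N, M), dtype=complex)
    alpha = np.zeros(M)
    beta = np.zeros(M - 1)
    Q[:, 0] = psi / np.linalg.norm(psi)
    for j in range(M):
        w = H @ Q[:, j]
        alpha[j] = np.vdot(Q[:, j], w).real
        # full reorthogonalization (cheap for small M, improves robustness)
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].conj().T @ w)
        if j < M - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    w_T, v_T = np.linalg.eigh(T)   # exponentiate the small matrix exactly
    small = v_T @ (np.exp(-1j * w_T * dt) * v_T[0])
    return np.linalg.norm(psi) * (Q @ small)
```

The Krylov dimension M is a convergence parameter; for moderate \(\Vert \hat{H}\Vert \Delta t\) the error decays faster than exponentially with M.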

2.3.4 Semiclassical Initial Value Representations

As the final prerequisite before we deal with the physics of laser-matter interaction, a reformulation of the semiclassical van Vleck-Gutzwiller propagator presented in Sect. 2.2.3 shall be discussed. We have already mentioned that the VVG method is based on the solution of classical boundary value (or root search) problems, which makes it hard to implement. A much more user-friendly approach is based on classical initial value solutions and is therefore termed initial value representation.

We start the discussion of a specific initial value representation of the semiclassical propagator with a short introduction to commonly used symplectic integration procedures for the solution of the underlying classical dynamics.

2.3.4.1 Symplectic Integration

Positions and momenta of a classical Hamiltonian system with N degrees of freedom obey the equations of motion
$$\begin{aligned} \dot{q}_n= & {} \frac{\partial H(\varvec{p},\varvec{q},t)}{\partial p_n}, \end{aligned}$$
(2.208)
$$\begin{aligned} \dot{p}_n= & {} -\frac{\partial H(\varvec{p},\varvec{q},t)}{\partial q_n}. \end{aligned}$$
(2.209)
Using the Poisson bracket
$$\begin{aligned} \{a,b\}=\sum _{n=1}^N \left( \frac{\partial a}{\partial q_n}\frac{\partial b}{\partial p_n} -\frac{\partial a}{\partial p_n}\frac{\partial b}{\partial q_n} \right) \;, \end{aligned}$$
(2.210)
the equations above can also be cast into an equation for the 2N-dimensional phase-space vector \(\varvec{\eta }^\mathrm{T}=({\varvec{q}}^\mathrm{T},{\varvec{p}}^\mathrm{T})\), reading
$$\begin{aligned} \dot{\varvec{\eta }}=-\{H,\varvec{\eta }\}=:-\hat{H}\varvec{\eta }\;. \end{aligned}$$
(2.211)
Although we are dealing with classical mechanics, an operator \(\hat{H}\) appears here. In the present subsection this operator stands for the application of the Poisson bracket with the Hamilton function.
Formally, the equation above can be integrated over a small time step, yielding
$$\begin{aligned} \varvec{\eta }(t+\Delta t)=\exp \{-\Delta t \hat{H}\}\varvec{\eta }(t). \end{aligned}$$
(2.212)
Now we can again use the split-operator method, i.e. an “effective Hamiltonian” can be introduced according to
$$\begin{aligned} \exp \{-\Delta t \hat{H}_\mathrm{eff}\}:=\exp \{-\Delta t \hat{T}_\mathrm{k}\} \exp \{-\Delta t \hat{V}\}. \end{aligned}$$
(2.213)
This effective operator is only an approximation to the true one. The total phase space volume is conserved, however, i.e., Liouville’s theorem holds also for the approximate dynamics [39]. More generally, it can be shown that symplectic integration methods preserve the N Poincaré invariants of the Hamiltonian system [40].

2.14.

By expanding up to second order in \(\Delta t\) show that there is a difference between \(\exp \{-\Delta t\hat{H}\}\) and \(\exp \{-\Delta t\hat{T}_\mathrm{k}\} \exp \{-\Delta t\hat{V}\}\).

Hint: The Jacobi identity \(\{A,\{B,C\}\}+\{B,\{C,A\}\}+\{C,\{A,B\}\}=0\) might be helpful.

In the numerics, the splitting of the exponentiated Hamiltonian into the kinetic and the potential part means that one first solves a problem in which only \(\hat{V}\) operates, i.e., the momentum is altered due to the effect of the potential. This is the so-called kick step. Then the position is shifted using the updated momentum, which is held constant during this shift. This is the so-called drift step. For very short times (expand the exponential to first order) one gets
$$\begin{aligned} {\varvec{p}}^1= & {} {\varvec{p}}^0+\Delta t {\varvec{F}}_{{\varvec{q}}={\varvec{q}}^0}, \end{aligned}$$
(2.214)
$$\begin{aligned} {\varvec{q}}^1= & {} {\varvec{q}}^0+\Delta t {\varvec{G}}_{{\varvec{p}}={\varvec{p}}^1}, \end{aligned}$$
(2.215)
where the superscript denotes the iteration step and the abbreviations
$$\begin{aligned} {\varvec{G}}= & {} \frac{\partial T_\mathrm{k}}{\partial {\varvec{p}}} \end{aligned}$$
(2.216)
$$\begin{aligned} {\varvec{F}}= & {} -\frac{\partial V}{\partial {\varvec{q}}} \end{aligned}$$
(2.217)
have been used. This procedure is a variant of the symplectic Euler method. It performs much better than the highly unstable “standard” Euler method, for which in the second line the old momentum \({\varvec{p}}^0\) is used.
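The difference between the two Euler variants is easily seen numerically. For a harmonic oscillator (\(m=\omega =1\), with illustrative time step and propagation length), the following sketch propagates both schemes and compares the energy drift:

```python
# Symplectic Euler (kick, then drift with the updated momentum) compared to
# the standard Euler method for a harmonic oscillator with
# H = p^2/2 + q^2/2 (m = omega = 1); dt and step number are illustrative.
dt, steps = 0.05, 2000

q_s, p_s = 1.0, 0.0          # symplectic Euler variables
q_e, p_e = 1.0, 0.0          # standard Euler variables
for _ in range(steps):
    p_s = p_s - dt * q_s                          # kick: F = -q
    q_s = q_s + dt * p_s                          # drift with the *new* momentum
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e     # both updates use old values

E0 = 0.5
drift_symplectic = abs(0.5 * (p_s**2 + q_s**2) - E0)
drift_euler = abs(0.5 * (p_e**2 + q_e**2) - E0)
```

The symplectic variant keeps the energy error bounded and small, whereas the standard Euler energy grows exponentially with the number of steps.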
Analogously to the discussion of the split-operator procedure in quantum mechanics in Sect. 2.3.2, a split-operator procedure of higher order can be used. By employing the Strang splitting
$$\begin{aligned} \exp \{-\Delta t \hat{H}_\mathrm{eff}\}:=\exp \{-\Delta t \hat{T}_\mathrm{k}/2\} \exp \{-\Delta t \hat{V}\}\exp \{-\Delta t \hat{T}_\mathrm{k}/2\}, \end{aligned}$$
(2.218)
the so-called leap frog method
$$\begin{aligned} {\varvec{q}}^1= & {} {\varvec{q}}^0+\frac{\Delta t}{2} {\varvec{G}}({\varvec{p}}={\varvec{p}}^0), \end{aligned}$$
(2.219)
$$\begin{aligned} {\varvec{p}}^2= & {} {\varvec{p}}^0+\Delta t {\varvec{F}}({\varvec{q}}={\varvec{q}}^1), \end{aligned}$$
(2.220)
$$\begin{aligned} {\varvec{q}}^2= & {} {\varvec{q}}^1+\frac{\Delta t}{2} {\varvec{G}}({\varvec{p}}={\varvec{p}}^2) \end{aligned}$$
(2.221)
arises. In general, any symplectic integration scheme (where the kick step comes first) can be cast into the following form:
$$\begin{aligned} {\varvec{p}}^k= & {} {\varvec{p}}^{k-1}+b_k\Delta t\, {\varvec{F}}({\varvec{q}}={\varvec{q}}^{k-1}), \nonumber \\ {\varvec{q}}^k= & {} {\varvec{q}}^{k-1}+a_k\Delta t\, {\varvec{G}}({\varvec{p}}={\varvec{p}}^{k}),\qquad k=1,\dots ,M. \end{aligned}$$
Coefficients \(a_k,b_k\) of different symplectic methods are gathered in Table 2.3. Additional coefficients can be found in [39]. In order to keep the numerical effort of the force calculation low, it is desirable to have as many b-coefficients equal to zero as possible.
Table 2.3

Coefficients for some symplectic integration methods of increasing order

Ruth's leap frog (position Verlet): \(a_1=1/2\), \(b_1=0\); \(a_2=1/2\), \(b_2=1\)

Fourth-order Gray [41]: \(a_1=(1-\sqrt{1/3})/2\), \(b_1=0\); \(a_2=\sqrt{1/3}\), \(b_2=(1/2+\sqrt{1/3})/2\); \(a_3=-a_2\), \(b_3=1/2\); \(a_4=(1+\sqrt{1/3})/2\), \(b_4=(1/2-\sqrt{1/3})/2\)

Sixth-order Yoshida [42]: \(a_1=0.78451361047756\), \(b_1=0.39225680523878\); \(a_2=0.23557321335936\), \(b_2=0.51004341191846\); \(a_3=-1.1776799841789\), \(b_3=-0.47105338540976\); \(a_4=1.3151863206839\), \(b_4=0.068753168252520\); \(a_5=a_3\), \(b_5=b_4\); \(a_6=a_2\), \(b_6=b_3\); \(a_7=a_1\), \(b_7=b_2\); \(a_8=0\), \(b_8=b_1\)
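The generic kick-drift recursion with coefficients from Table 2.3 can be sketched as follows (for a harmonic oscillator with \(F(q)=-q\) and \(G(p)=p\), \(m=\omega =1\); time step and propagation length are illustrative):

```python
import numpy as np

# Generic symplectic stepper: in substep k the momentum is kicked with
# b_k*dt*F(q), then the position is drifted with a_k*dt*G(p). Checked here
# for a harmonic oscillator with F(q) = -q and G(p) = p (m = omega = 1).
def sym_step(q, p, dt, a, b):
    for ak, bk in zip(a, b):
        p = p + bk * dt * (-q)       # kick step, F = -dV/dq
        q = q + ak * dt * p          # drift step, G = dT_k/dp
    return q, p

leapfrog = ([0.5, 0.5], [0.0, 1.0])                      # Table 2.3, first row
s = np.sqrt(1.0 / 3.0)
gray4 = ([(1 - s) / 2, s, -s, (1 + s) / 2],              # fourth-order Gray
         [0.0, (0.5 + s) / 2, 0.5, (0.5 - s) / 2])

def energy_drift(a, b, dt=0.1, steps=1000):
    q, p = 1.0, 0.0
    for _ in range(steps):
        q, p = sym_step(q, p, dt, a, b)
    return abs(0.5 * (p * p + q * q) - 0.5)

print(energy_drift(*leapfrog), energy_drift(*gray4))
```

With the leap frog coefficients, one call to `sym_step` reproduces the position Verlet update (2.219)-(2.221), and for all coefficient sets the energy error stays bounded, as expected for symplectic maps.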

In a thorough study of the numerical accuracy of symplectic integrators it has been found that they yield very stable trajectories and that, for autonomous Hamiltonians, the standard deviation of the energy and the energy drift are comparatively small [39].

2.3.4.2 Coherent States

Having set the stage with the discussion of the classical equations of motion, we now come to the central ingredients for the reformulation of the semiclassical propagator expression. These are the so-called coherent states, which are discussed in detail in the textbook of Louisell [43] and in Heller's Les Houches lecture notes [44]. In Dirac notation they are given by
$$\begin{aligned} |\varvec{z}\rangle =\mathrm{e}^{-1/2|z|^2}\mathrm{e}^{\varvec{z}\cdot \hat{\varvec{a}}^\dagger }|\varvec{0}\rangle , \end{aligned}$$
(2.222)
where \(|\varvec{0}\rangle \) is the ground state of N uncoupled 1D harmonic oscillators of mass m and frequency \(\omega _\mathrm{e}\). Furthermore, we have used the multidimensional analog of (2.162)
$$\begin{aligned} \hat{\varvec{a}}^\dagger =\frac{1}{\sqrt{2}}\left( \frac{\hat{\varvec{q}}}{b} -\mathrm{i}\frac{\hat{\varvec{p}}}{c}\right) , \end{aligned}$$
(2.223)
for the vector of creation operators with \(b=\sqrt{\hbar /m \omega _\mathrm{e}},c=\sqrt{\hbar m \omega _\mathrm{e}}\) and
$$\begin{aligned} \varvec{z}=\frac{1}{\sqrt{2}}\left( \frac{\varvec{q}}{b}+\mathrm{i}\frac{\varvec{p}}{c}\right) , \end{aligned}$$
(2.224)
with the expectation values \(\varvec{q}\) and \(\varvec{p}\) of the operators \(\hat{\varvec{q}}\) and \(\hat{\varvec{p}}\).
In position representation, the coherent states are N-dimensional Gaussian wavepackets of the form
$$\begin{aligned} \langle {\varvec{x}}|{\varvec{z}}\rangle =\left( \frac{1}{\pi b^2}\right) ^{N/4} \exp \left\{ -\frac{1}{2b^2}({\varvec{x}}-{\varvec{q}})^2 +\frac{\mathrm{i}}{bc} {\varvec{p}}\cdot \left( {\varvec{x}}-\frac{{\varvec{q}}}{2}\right) \right\} . \end{aligned}$$
(2.225)
Coherent states form an (over-)complete set of basis states [43] and can be used to represent unity according to
$$\begin{aligned} {\hat{1}}= \int \frac{\mathrm{d}^{2N} z}{\pi ^N} |{\varvec{z}}\rangle \langle {\varvec{z}}|= \int \frac{\mathrm{d}^N p\, \mathrm{d}^N q}{(2\pi \hbar )^N} |{\varvec{z}}\rangle \langle {\varvec{z}}|. \end{aligned}$$
(2.226)

2.15.

Restricting the discussion to \(N=1\), prove that the coherent states form a complete set by expressing them as a sum over harmonic oscillator eigenstates.

The basis for the reformulation of the semiclassical propagator is the matrix element of the time-evolution operator between coherent states
$$\begin{aligned} K({\varvec{z}}_f,t;{\varvec{z}}_i,0) \equiv \langle {\varvec{z}}_f|\mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }|{\varvec{z}}_i\rangle . \end{aligned}$$
(2.227)
The semiclassical approximation for this object can be performed quite analogously to the derivation of the van Vleck-Gutzwiller propagator by starting from the appropriate path integral [45]. However, it turns out that the final expression contains a classical over-determination problem, due to the fact that not only the position but also the momentum is fixed at the initial and the final time! This problem is solved by the complexification of phase space. We will not dwell on that rather involved topic any longer; fortunately, the over-determination problem will be resolved rather elegantly in the following.

2.3.4.3 Herman-Kluk Propagator

The next step is to consider the time-evolution operator in position representation. It can be expressed via the coherent state propagator by inserting unity in terms of coherent states twice, according to
$$\begin{aligned} K({\varvec{x}}_f,t;{\varvec{x}}_i,0)= & {} \langle {\varvec{x}}_f|\mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }|{\varvec{x}}_i\rangle \nonumber \\= & {} \int \frac{\mathrm{d}^{2N} z_f}{\pi ^N}\int \frac{\mathrm{d}^{2N} z_i}{\pi ^N} \langle {\varvec{x}}_f|{\varvec{z}}_f\rangle \langle {\varvec{z}}_f|\mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }|{\varvec{z}}_i\rangle \langle {\varvec{z}} _i|{\varvec{x}}_i\rangle . \end{aligned}$$
(2.228)
If we now replace the coherent state matrix element of the propagator by its semiclassical approximation and perform the final phase space integration in the stationary phase approximation, the over-determination problem is resolved and the semiclassical propagator is reformulated in terms of real classical initial value solutions [46]. This procedure yields
$$\begin{aligned} K^\mathrm{HK}({\varvec{x}}_f,t;{\varvec{x}}_i,0)\equiv & {} \int \frac{\mathrm{d}^{N} p_i\mathrm{d}^{N} q_i}{(2\pi \hbar )^N} \langle {\varvec{x}}_f|\tilde{{\varvec{z}}}_t\rangle R({\varvec{p}}_i,{\varvec{q}}_i,t) \nonumber \\&\exp \left\{ \frac{\mathrm{i}}{\hbar }S({\varvec{p}}_i,{\varvec{q}}_i,t)\right\} \langle \tilde{{\varvec{z}}}_i|{\varvec{x}}_i\rangle , \end{aligned}$$
(2.229)
which is the so-called Herman-Kluk propagator. It was first derived, by a different reasoning, by Herman and Kluk [47], based on previous work by Heller [48]. Among the definitions used in the expression above is the classical action functional, which depends on the initial phase space variables and on time, and for this reason is written (and denoted) as a function here, according to
$$\begin{aligned} S({\varvec{p}}_i,{\varvec{q}}_i,t)\equiv \int _0^t\mathrm{d}t'\left[ {\varvec{p}}_{t'}\cdot \dot{{\varvec{q}}}_{t'}-H\right] . \end{aligned}$$
(2.230)
Furthermore,
$$\begin{aligned} R({\varvec{p}}_i,{\varvec{q}}_i,t)\equiv \left| \frac{1}{2}\left( \mathbf{m}_{11}+\mathbf{m}_{22} -\mathrm{i}\hbar \gamma \mathbf{m}_{21}- \frac{1}{\mathrm{i}\hbar \gamma }{} \mathbf{m}_{12} \right) \right| ^{1/2}, \end{aligned}$$
(2.231)
with \(\gamma =m\omega _\mathrm{e}/\hbar \), denotes the Herman-Kluk determinantal prefactor, which contains classical stability (monodromy) block-matrices \(\mathbf{m}_{ij}\). They are solutions to the linearized Hamilton equations and for reference they are defined in Appendix 2.C.
The Gaussian wavepackets in (2.229) have a slightly different phase convention than the ones of (2.225). For this reason a new symbol with a tilde,
$$\begin{aligned} \langle {\varvec{x}}|\tilde{{\varvec{z}}}\rangle = \left( \frac{\gamma }{\pi }\right) ^{N/4} \exp \left\{ -\frac{\gamma }{2}({\varvec{x}}-{\varvec{q}})^2+ \frac{\mathrm{i}}{\hbar } {\varvec{p}}\cdot ({\varvec{x}}-{\varvec{q}})\right\} , \end{aligned}$$
(2.232)
has been introduced. To complete the explanation of all abbreviations, the centers of the final Gaussians in phase space are \(\{{\varvec{p}}_t({\varvec{p}}_i,{\varvec{q}}_i),{\varvec{q}}_t({\varvec{p}}_i, {\varvec{q}}_i)\}\), which are initial value solutions of the classical Hamilton equations.

In contrast to the VVG prefactor, the expression (2.231) does not exhibit singularities at caustics. Recently, it has been proven that the Herman-Kluk method is a uniform semiclassical method [49]. Furthermore, for the numerics it is important that the square root in the prefactor has to be taken in such a fashion that the result is continuous as a function of time [50]. This is reminiscent of the Maslov phase in the van Vleck-Gutzwiller expression (2.91), which does not have to be calculated explicitly, however.

A final remark on the connection between the two semiclassical expressions for the propagator, which we have discussed so far, shall be made. After performing the integration over the initial phase space variables in (2.229) in the stationary phase approximation, the van Vleck-Gutzwiller expression will emerge. One can also turn that reasoning around and derive the Herman-Kluk prefactor by demanding that the SPA applied to the phase space integral yields the van Vleck-Gutzwiller expression [50]. For the derivation of a more general form of the prefactor in this way, see [51]. An even simpler way to derive the VVG propagator from the Herman-Kluk expression, by taking the limit \(\gamma \rightarrow \infty \), is explicitly given in Appendix 2.D.

2.3.4.4 Semiclassical Propagation of Gaussian Wavepackets

The pure HK propagator is a clumsy object, due to the need to integrate over all of phase space. Fortunately, however, the focus of our interest will not be the bare propagator but its application to an initial Gaussian wavepacket. Let us therefore consider the mixed matrix element
$$\begin{aligned} K({\varvec{x}}_f,t;\tilde{{\varvec{z}}}_\alpha ,0)\equiv & {} \langle {\varvec{x}}_f|\mathrm{e}^{-\mathrm{i}\hat{H}t/\hbar }|\tilde{{\varvec{z}}}_\alpha \rangle \nonumber \\= & {} \int \mathrm{d}^N x_i\, K({\varvec{x}}_f,t;{\varvec{x}}_i,0) \langle {\varvec{x}}_i|\tilde{{\varvec{z}}}_\alpha \rangle \end{aligned}$$
(2.233)
of the time-evolution operator, where \(K({\varvec{x}}_f,t;{\varvec{x}}_i,0)\) shall be replaced by the HK approximation of (2.229).
The Gaussian to be propagated, \(\langle {\varvec{x}}_i|\tilde{{\varvec{z}}}_\alpha \rangle \), is determined by its phase space center \(({\varvec{q}}_\alpha ,{\varvec{p}}_\alpha )\) and shall have the same inverse width parameter \(\gamma \) as the coherent state basis functions. The calculation of the overlap in (2.233) can be done analytically and yields the simple result
$$\begin{aligned} \langle \tilde{{\varvec{z}}}_i|\tilde{{\varvec{z}}}_\alpha \rangle= & {} \exp \left\{ -\frac{\gamma }{4}({\varvec{q}}_i-{\varvec{q}}_\alpha )^2+\frac{\mathrm{i}}{2\hbar } ({\varvec{q}}_i-{\varvec{q}}_\alpha )\cdot ({\varvec{p}}_i+{\varvec{p}}_\alpha )\right. \nonumber \\- & {} \left. \frac{1}{4\gamma \hbar ^2}({\varvec{p}}_i-{\varvec{p}}_\alpha )^2\right\} . \end{aligned}$$
(2.234)
In (2.233), the integration over initial phase space still has to be done. It is, however, much more user friendly than in the case of the bare propagator, because the overlap just calculated effectively cuts off the integrand far away from the initial center in phase space. In numerical applications the phase space integration is often performed by using Monte Carlo methods [52]. Pictorially, the application of the Herman-Kluk propagator to a Gaussian can be represented as shown in Fig. 2.6.
Fig. 2.6

Pictorial representation of the semiclassical initial value procedure to propagate a Gaussian wavepacket à la Herman and Kluk in one dimension: The propagated wavepacket \(\langle x_f|\varPsi \rangle \) is a weighted sum over many Gaussians. The weights are the product of a prefactor times a complex exponential function \(R\exp \{\mathrm{i}S/\hbar \} \langle \tilde{z}_i|\tilde{z}_\alpha \rangle \);

adapted from [53]
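The overlap formula (2.234) is easily checked numerically in one dimension by discretizing the integral over \(x\) of two Gaussians of the form (2.232) (\(\hbar =\gamma =1\) and arbitrarily chosen phase-space centers):

```python
import numpy as np

# Check of the coherent-state overlap (2.234) for N = 1 (hbar = gamma = 1):
# the closed form is compared with the discretized x integral over two
# Gaussians of the form (2.232); the phase-space centers are arbitrary.
hbar, gamma = 1.0, 1.0
x = np.linspace(-15.0, 15.0, 4000)
dx = x[1] - x[0]

def gauss(q, p):
    """<x|z~> of (2.232) in one dimension."""
    return ((gamma / np.pi) ** 0.25
            * np.exp(-0.5 * gamma * (x - q) ** 2 + 1j / hbar * p * (x - q)))

qi, pi_, qa, pa = 0.7, -0.4, -0.3, 1.1   # (q_i, p_i) and (q_alpha, p_alpha)

overlap_num = np.sum(np.conj(gauss(qi, pi_)) * gauss(qa, pa)) * dx
overlap_ana = np.exp(-0.25 * gamma * (qi - qa) ** 2
                     + 0.5j / hbar * (qi - qa) * (pi_ + pa)
                     - (pi_ - pa) ** 2 / (4.0 * gamma * hbar**2))
```

The two complex numbers agree to the accuracy of the quadrature, which illustrates how the Gaussian overlap localizes the phase space integration around the initial center.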

How is all that related to the thawed GWD of Heller that we have used in Sect. 2.1.4? There the Gaussian wavepacket has been propagated using its center trajectory alone. The GWD therefore is much cruder than the HK method! There should be a way to derive the GWD from the more general expression, however. This is indeed the case. To this end one has to expand the exponent in the integral over initial phase space around the center \(({\varvec{p}}_\alpha ,{\varvec{q}}_\alpha )\) of the initial Gaussian up to second order. The integration is then a Gaussian integration and can be performed analytically. One finally gets [51]
$$\begin{aligned} K^\mathrm{GWD}({\varvec{x}}_f,t;\tilde{{\varvec{z}}}_\alpha ,0)\equiv & {} \left( \frac{\gamma }{\pi }\right) ^{N/4} \left| \left( \mathbf{m}_{22}+\mathrm{i}\hbar \gamma \mathbf{m}_{21}\right) \right| ^{-1/2} \nonumber \\&\exp \left\{ -\frac{1}{2} ({\varvec{x}}_f-{\varvec{q}}_{\alpha t})\cdot \gamma _t ({\varvec{x}}_f-{\varvec{q}}_{\alpha t})\right. \nonumber \\&+\left. \frac{\mathrm{i}}{\hbar } {\varvec{p}}_{\alpha t}\cdot ({\varvec{x}}_f-{\varvec{q}}_{\alpha t}) +\frac{\mathrm{i}}{\hbar } S\right\} , \end{aligned}$$
(2.235)
with the time-dependent \(N\times N\) inverse width parameter matrix
$$\begin{aligned} \gamma _t=\gamma \bigl (\mathbf{m}_{11}+\frac{1}{\mathrm{i}\gamma \hbar }\mathbf{m}_{12}\bigr ) \bigl (\mathbf{m}_{22}+\mathrm{i}\gamma \hbar \mathbf{m}_{21}\bigr )^{-1}. \end{aligned}$$
(2.236)
The width of the single Gaussian can thus change in the course of time, in contrast to the widths of the many Gaussians in the case of the HK propagator. This is the reason why the simpler single-Gaussian method is called “thawed” Gaussian wavepacket dynamics, whereas the more complex, multiple-Gaussian HK method applied to a Gaussian initial state is closely related to the “frozen” Gaussian wavepacket dynamics of Heller [48]. Strictly speaking, the term frozen Gaussian propagator refers to an HK-like approach, but with a unit prefactor [54].

Finally, it is worthwhile to check that the time-dependent parameter \(\gamma _t\) fulfills a nonlinear Riccati differential equation similar to (2.61). To this end, in Exercise 2.16, the equations of motion of the stability matrix elements given in Appendix 2.C should be used.

2.16.

The TGWD inverse width parameter allows for a reformulation of the HK prefactor. For simplicity, consider the 1D case.

  (a)
    Show that the nonlinear Riccati differential equation
    $$\begin{aligned} \dot{\gamma }_t=-\frac{\mathrm{i}\hbar }{m}\gamma _t^2-\frac{1}{\mathrm{i}\hbar }V'' \end{aligned}$$
    is fulfilled by the inverse width parameter \(\gamma _t\).

  (b)
    Writing the inverse width parameter in the log-derivative form
    $$\begin{aligned} \gamma _t=\frac{m}{\mathrm{i}\hbar }\frac{\dot{Q}}{Q}, \end{aligned}$$
    with \(Q=m_{22}+\mathrm{i}\hbar \gamma m_{21}\), show that the complex conjugate of the HK prefactor can be entirely formulated in terms of \(\gamma _t\) via
    $$\begin{aligned} R^*=\sqrt{\frac{1}{2}(1+\gamma _t/\gamma )}\exp \left\{ \frac{1}{2}\int _0^t\mathrm{d}t'\frac{\mathrm{i}\hbar }{m}\gamma _{t'}\right\} . \end{aligned}$$
     
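A numerical plausibility check of part (a), which is of course no substitute for the analytical proof, can be done for the harmonic oscillator with \(\hbar =m=1\) and \(V=\omega ^2x^2/2\), where the stability matrix elements are known in closed form. A finite-difference derivative of \(\gamma _t\) from (2.236) can then be compared with the Riccati right-hand side. The following Python sketch does exactly that; the parameter values are arbitrary:

```python
import numpy as np

# Parameters (hbar = m = 1); illustrative values only
gamma0 = 2.0   # initial inverse width parameter
omega = 1.3    # harmonic frequency, V(x) = omega^2 x^2 / 2, so V'' = omega^2

def gamma_t(t):
    """gamma_t from (2.236), using the analytic harmonic-oscillator
    stability matrix: m11 = m22 = cos(wt), m12 = -w sin(wt),
    m21 = sin(wt)/w."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    num = gamma0 * (c + (-omega * s) / (1j * gamma0))
    den = c + 1j * gamma0 * s / omega
    return num / den

# Compare a central finite difference of gamma_t with the Riccati
# right-hand side d(gamma_t)/dt = -i gamma_t^2 + i omega^2 (hbar = m = 1)
t, dt = 0.7, 1e-6
lhs = (gamma_t(t + dt) - gamma_t(t - dt)) / (2 * dt)
rhs = -1j * gamma_t(t) ** 2 + 1j * omega**2
print(abs(lhs - rhs))  # close to zero (finite-difference and roundoff error)
```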
The different levels of accuracy of the two approximations are illustrated in Fig. 2.7, where the multi-trajectory HK method and the single-trajectory GWD are contrasted with exact numerical results, obtained by using the split-operator FFT method of Sect. 2.3.2. The displayed quantity is the real part of the auto-correlation function
$$\begin{aligned} c_{\alpha \alpha }(t)\equiv \langle \varPsi _\alpha (0)|\varPsi _\alpha (t)\rangle \end{aligned}$$
(2.237)
of an initial Gaussian wavepacket in a Morse potential with dimensionless Hamiltonian
$$\begin{aligned} \hat{H}=\frac{\hat{p}^2}{2}+D(1-\exp \{-\lambda x\})^2, \end{aligned}$$
(2.238)
with parameters \(D=30,\lambda =0.08\).19 The GWD describes the envelope of the quantum curve rather well, but it completely misses the fine oscillations, which are captured almost perfectly by the multiple-trajectory method.
Fig. 2.7 Different trajectory-based and fully quantum mechanical auto-correlation functions of a Gaussian wavepacket with dimensionless parameters \(q_\alpha =p_\alpha =0\) and \(\gamma =12\) in a Morse potential: a thawed GWD (solid line) versus exact quantum mechanics (dashed line); b HK (solid red line) versus exact quantum mechanics (dashed line); adapted from [55]
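The exact reference curve can be generated with a few lines of code using the split-operator FFT method of Sect. 2.3.2. The following Python sketch computes the auto-correlation (2.237) for the Morse Hamiltonian (2.238) with \(\hbar =m=1\); the grid extent, grid size, time step, and total propagation time are illustrative choices, not taken from the reference calculation:

```python
import numpy as np

# Morse parameters from (2.238) and initial Gaussian of Fig. 2.7 (hbar = m = 1)
D, lam = 30.0, 0.08
gamma, q_a, p_a = 12.0, 0.0, 0.0

# Position grid and conjugate momenta (extent and size are illustrative)
n = 512
x = np.linspace(-10.0, 30.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

V = D * (1.0 - np.exp(-lam * x)) ** 2
psi0 = (gamma / np.pi) ** 0.25 * np.exp(-0.5 * gamma * (x - q_a) ** 2
                                        + 1j * p_a * (x - q_a))
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)  # enforce grid normalization

# Split-operator step: half potential, full kinetic, half potential
# (second-order accurate in dt, exactly unitary on the grid)
dt, n_steps = 0.01, 2000
expV = np.exp(-0.5j * V * dt)
expT = np.exp(-0.5j * k**2 * dt)

psi = psi0.copy()
c = [np.sum(psi0.conj() * psi) * dx]   # c(0) = 1 for a normalized state
for _ in range(n_steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    c.append(np.sum(psi0.conj() * psi) * dx)
c = np.array(c)

print(abs(c[0]))   # 1.0 (normalized initial state)
# Re c(t) then oscillates and partially recurs, as in Fig. 2.7
```

Plotting `c.real` against time reproduces the qualitative behavior of the exact (dashed) curve in Fig. 2.7: fast oscillations under a slowly decaying and recurring envelope.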

2.4 Notes and Further Reading

TDSE and Time-Evolution Operator

The restriction to the use of the nonrelativistic TDSE for the description of laser-matter interaction is due to the focus of this book on intermediate field strengths, maximally of the order of an atomic unit (see also Appendix 4.A), and wavelengths around the visible range. If stronger fields (and longer wavelengths) are considered, in the case of electrons, the Dirac equation has to be solved. A concise discussion of the Dirac equation and of relativistic corrections to the Schrödinger equation can be found in Appendix 7 of [56]. The spinor character of the wavefunction leads to a considerable complication of the problem, which may not be necessary if the dipole approximation (see Chap. 3) is still applicable [57].

The very detailed book by Schleich [58] contains a lot of additional information on Schrödinger-type time-evolution operators, and the time-evolution of the density operator is also discussed therein. More on time-dependent and energy-dependent Green's functions can be found in Economou's book [59]. The extraction of spectral information from time-dependent quantum dynamics goes back to work of Heller, which is reviewed, e.g., in [44] and is also covered in the book by Tannor [38]. Many additional references concerning Gaussian wavepacket dynamics can be found in both of these sources. A recent book by Schuch focuses on the nonlinear Riccati equation appearing in the GWD, with applications to quantum theory and irreversible processes [60].

Analytical Methods

A lot of additional material on the path integral formulation of quantum mechanics is contained in the tutorial article by Ingold [61] and the books by Feynman and Hibbs [6] and by Schulman [14]. The second book has more details on variational calculus and on exactly solvable path integrals (especially in the new Dover edition). It has been pointed out by Makri and Miller that a simple short-time propagator, based on the trapezoidal rule for the discretization of the potential part of the Lagrangian (or taking a midpoint rule), is not correct through first order in \(\varDelta t\), even in the case of the harmonic oscillator [62], although it leads to the correct propagator (2.75). Improved short-time propagators have been proposed there.

The significance of generating functions of canonical transformations in the (semi-)classical limit of quantum mechanics is discussed in depth by Miller [63]. The book by Reichl [64] contains chapters on semiclassical methods and on time-periodic systems, dealing with Floquet theory. These methods are then used in the context of quantum chaology. The book by Billing contains more material on semiclassics and on mixed quantum classical methods [65]. A book devoted to the semiclassical approach to the solution of the TDSE, as well as to the understanding of quantum mechanics, using this approach, is the one by Heller [66].

The Magnus expansion of the time-evolution operator is formally closely analogous to the so-called cumulant expansion, known from statistical physics [67].

Numerical Methods

The book by Tannor [38] contains more information on methodological and numerical approaches to solving the time-dependent Schrödinger equation. Polynomial and DVR methods are dealt with in detail there. A book that is fun to read, although it covers a seemingly dry topic, is the classic “Numerical Recipes” [25]. Among many other things, more details on FFT and on finite difference methods to solve the TDSE can be found therein. Methods for the solution of the TDSE in the case of strong-field driving are discussed in [68]. More recently, a phase space approach has been devised that is tailored for the solution of the TDSE for laser-driven electronic wavepacket propagation [69]. In [70], molecular quantum dynamics is discussed from the viewpoint of the MCTDH method.

A review of different semiclassical approximations based on Gaussian wavepackets is given in [55], whereas a combination of the Herman-Kluk method with thawed GWD for correlated many-particle systems, termed semiclassical hybrid dynamics (SCHD), is laid out in [51]. The coupled coherent states (CCS) method of Shalashilin and Child [71] allows in principle for an exact numerical solution of the TDSE and has the Herman-Kluk method as a limiting case, as shown very elegantly in [72]. Finally, we have not discussed Bohmian mechanics, which has recently been used not only as an interpretational tool but also in a synthetic way, in order to generate solutions of the TDSE using (nonclassical) trajectories [73]. For another very good discussion of this topic, see Chap. 4 of [38]. The nonlocality of quantum mechanics is especially apparent in Bohmian mechanics, due to the presence of the so-called quantum potential that couples the motion of individual trajectories.

Footnotes

  1. This is the atomic unit for the electric field, given by \(\mathcal{E}=5.14\times 10^9\) V/cm.

  2. We stress that the integral form given in (2.28) is equivalent to the differential form of the time-dependent Schrödinger equation (as can be shown by differentiation) and in addition it has the initial condition “built in”.

  3. This is how Dyson proceeded in [4].

  4. At \(t_1=t_2\) and for \(\hat{A}\ne \hat{B}\) additional assumptions on ordering would have to be made.

  5. In cases where no inverse group element exists this is called the semi-group property.

  6. In order to ensure convergence of the integral, one adds a small positive imaginary part to the energy, \(z=E+\mathrm{i}\epsilon \).

  7. This is a Fresnel integral, i.e., the Gaussian integral of (1.29) with purely imaginary a.

  8. This Ansatz is also the starting point of so-called Bohmian mechanics approaches to quantum dynamics.

  9. In general, \(x_f\) and \(x_i\) have to be replaced by vectors!

  10. The explanation of these terms follows in Chap. 5.

  11. We have used \(\hat{U}(t+T,t)\hat{\mathcal{H}}(t)\hat{U}^{-1}(t+T,t)= \hat{\mathcal{H}}(t+T)=\hat{H}(t+T)-\mathrm{i}\hbar \partial _{t+T} =\hat{H}(t)-\mathrm{i}\hbar \partial _t\).

  12. Formally this Ansatz is equivalent to the Bloch theorem of solid state physics.

  13. Note that k has to be an integer in order for the modified quasi-eigenfunction to be periodic.

  14. This would be true, if the energies were exact, which is prohibited by problem (a).

  15. The Baker-Campbell-Hausdorff (BCH) formula is the dual relation and reads \(\exp \{\hat{x}\}\exp \{\hat{y}\}=\exp \{\hat{x}+\hat{y}+1/2[\hat{x},\hat{y}]+1/12( [\hat{x},[\hat{x},\hat{y}]]+[\hat{y},[\hat{y},\hat{x}]])+\cdots \}\).

  16. Originally this approach was proposed by Fleck, Morris and Feit for the solution of the Maxwell wave equation [30].

  17. For notational convenience, we assume the Hamiltonian to be time-independent; the following results are also valid in the general case of a time-dependent Hamiltonian, however.

  18. Note that this procedure is more approximative than a stationary phase approximation.

  19. In Sect. 5.1.2 the physical background of the Morse oscillator will be elucidated.

  20. Be careful to use the formula \(\det \mathbf{M}=\det \mathbf{m_{22}}\det (\mathbf{m_{11}-m_{12}m_{22}^{-1}m_{21}})\), valid for block matrices!

References

  1. E. Schrödinger, Ann. Phys. (Leipzig) 81, 109 (1926)
  2. E. Schrödinger, Ann. Phys. (Leipzig) 79, 489 (1926)
  3. J.S. Briggs, J.M. Rost, Eur. Phys. J. D 10, 311 (2000)
  4. F. Dyson, Phys. Rev. 75, 486 (1949)
  5. V.A. Mandelshtam, J. Chem. Phys. 108, 9999 (1998)
  6. R.P. Feynman, A.R. Hibbs, Quantum Mechanics and Path Integrals, emended edn. (Dover, Mineola, 2010)
  7. E. Schrödinger, Die Naturwissenschaften 14, 664 (1926)
  8. E.J. Heller, J. Chem. Phys. 62, 1544 (1975)
  9. W. Kinzel, Physikalische Blätter 51, 1190 (1995)
  10. F. Grossmann, J.M. Rost, W.P. Schleich, J. Phys. A Math. Gen. 30, L277 (1997)
  11. M. Kleber, Phys. Rep. 236, 331 (1994)
  12. R.P. Feynman, Rev. Mod. Phys. 20, 367 (1948)
  13. P.A.M. Dirac, The Principles of Quantum Mechanics, 4th edn. (Oxford, London, 1958)
  14. L.S. Schulman, Techniques and Applications of Path Integration (Dover, Mineola, 2005)
  15. J.H. van Vleck, Proc. Acad. Nat. Sci. USA 14, 178 (1928)
  16. M.C. Gutzwiller, J. Math. Phys. 8, 1979 (1967)
  17. S. Grossmann, Funktionalanalysis II (Akademie Verlag, Wiesbaden, 1977)
  18. H. Goldstein, C. Poole, J. Safko, Classical Mechanics, 3rd edn. (Addison Wesley, San Francisco, 2002)
  19. W.R. Salzman, J. Chem. Phys. 85, 4605 (1986)
  20. M.H. Beck, A. Jäckle, G.A. Worth, H.D. Meyer, Phys. Rep. 324, 1 (2000)
  21. D. Kohen, F. Stillinger, J.C. Tully, J. Chem. Phys. 109, 4713 (1998)
  22. M. Tinkham, Group Theory and Quantum Mechanics (McGraw-Hill, New York, 1964)
  23. H. Sambe, Phys. Rev. A 7, 2203 (1973)
  24. M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions (Dover Publications, New York, 1965)
  25. W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran, 2nd edn. (Cambridge University Press, Cambridge, 1992)
  26. J.H. Shirley, Phys. Rev. 138, B979 (1965)
  27. J.C. Light, in Time-Dependent Quantum Molecular Dynamics, ed. by J. Broeckhove, L. Lathouwers (Plenum Press, New York, 1992), p. 185
  28. U. Peskin, N. Moiseyev, Phys. Rev. A 49, 3712 (1994)
  29. W. Witschel, J. Phys. A 8, 143 (1975)
  30. J.A. Fleck, J.R. Morris, M.D. Feit, Appl. Phys. 10, 129 (1976)
  31. M.D. Feit, J.A. Fleck, A. Steiger, J. Comp. Phys. 47, 412 (1982)
  32. M. Braun, C. Meier, V. Engel, Comp. Phys. Comm. 93, 152 (1996)
  33. M. Frigo, S.G. Johnson, Proc. IEEE 93(2), 216 (2005). Special issue on Program Generation, Optimization, and Platform Adaptation
  34. A. Vibok, G.G. Balint-Kurti, J. Phys. Chem. 96, 8712 (1992)
  35. A. Askar, A.S. Cakmak, J. Chem. Phys. 68, 2794 (1978)
  36. C. Leforestier, R.H. Bisseling, C. Cerjan, M.D. Feit, R. Friesner, A. Guldberg, A. Hammerich, G. Jolicard, W. Karrlein, H.D. Meyer, N. Lipkin, O. Roncero, R. Kosloff, J. Comp. Phys. 94, 59 (1991)
  37. H. Tal-Ezer, R. Kosloff, J. Chem. Phys. 81, 3967 (1984)
  38. D.J. Tannor, Introduction to Quantum Mechanics: A Time-Dependent Perspective (University Science Books, Sausalito, 2007)
  39. S.K. Gray, D.W. Noid, B.G. Sumpter, J. Chem. Phys. 101, 4062 (1994)
  40. J.D. Meiss, Rev. Mod. Phys. 64, 795 (1992)
  41. M.L. Brewer, J.S. Hulme, D.E. Manolopoulos, J. Chem. Phys. 106, 4832 (1997)
  42. H. Yoshida, Phys. Lett. A 150, 262 (1990)
  43. W.H. Louisell, Quantum Statistical Properties of Radiation (Wiley, New York, 1990)
  44. E.J. Heller, in Chaos and Quantum Physics, ed. by M.J. Giannoni, A. Voros, J. Zinn-Justin. Les Houches Session LII (Elsevier, Amsterdam, 1991), pp. 549–661
  45. J.R. Klauder, in Random Media, ed. by G. Papanicolauou (Springer, New York, 1987), p. 163
  46. F. Grossmann, J.A.L. Xavier, Phys. Lett. A 243, 243 (1998)
  47. M.F. Herman, E. Kluk, Chem. Phys. 91, 27 (1984)
  48. E.J. Heller, J. Chem. Phys. 75, 2923 (1981)
  49. K.G. Kay, Chem. Phys. 322, 3 (2006)
  50. K.G. Kay, J. Chem. Phys. 100, 4377 (1994)
  51. F. Grossmann, J. Chem. Phys. 125, 014111 (2006)
  52. E. Kluk, M.F. Herman, H.L. Davis, J. Chem. Phys. 84, 326 (1986)
  53. F. Grossmann, M.F. Herman, J. Phys. A Math. Gen. 35, 9489 (2002)
  54. S. Zhang, E. Pollak, J. Chem. Phys. 121(8), 3384 (2004)
  55. F. Grossmann, Comm. At. Mol. Phys. 34, 141 (1999)
  56. B.H. Bransden, C.J. Joachain, Physics of Atoms and Molecules, 2nd edn. (Pearson Education, Harlow, 2003)
  57. S. Selstø, E. Lindroth, J. Bengtsson, Phys. Rev. A 79, 043418 (2009)
  58. W.P. Schleich, Quantum Optics in Phase Space (Wiley-VCH, Berlin, 2001)
  59. E.N. Economou, Green’s Functions in Quantum Physics, 3rd edn. (Springer, Berlin, 2006)
  60. D. Schuch, Quantum Theory from a Nonlinear Perspective: Riccati Equations in Fundamental Physics (Springer International Publishing, 2018)
  61. G.L. Ingold, in Coherent Evolution in Noisy Environments, ed. by A. Buchleitner, K. Hornberger. Lecture Notes in Physics (Springer, Berlin, 2002), pp. 1–53
  62. N. Makri, W.H. Miller, Chem. Phys. Lett. 151, 1 (1988)
  63. W.H. Miller, Adv. Chem. Phys. 25, 69 (1974)
  64. L.E. Reichl, The Transition to Chaos, 2nd edn. (Springer, New York, 2004)
  65. G.D. Billing, The Quantum Classical Theory (Oxford University Press, New York, 2003)
  66. E.J. Heller, The Semiclassical Way to Dynamics and Spectroscopy (Princeton University Press, Princeton, 2018)
  67. A. Nitzan, Chemical Dynamics in Condensed Phases (Oxford University Press, Oxford, 2006)
  68. K.J. Schafer, in Strong Field Laser Physics, ed. by T. Brabec. Springer Series in Optical Sciences, vol. 134 (Springer, Berlin, 2009), chap. 6, pp. 111–145
  69. N. Takemoto, A. Shimshovitz, D.J. Tannor, J. Chem. Phys. 137, 011102 (2012)
  70. F. Gatti, B. Lasorne, H.D. Meyer, A. Nauts, Applications of Quantum Dynamics in Chemistry. Lecture Notes in Chemistry, vol. 98 (Springer International Publishing, 2017)
  71. D.V. Shalashilin, M.S. Child, J. Chem. Phys. 113, 10028 (2000)
  72. W.H. Miller, J. Phys. Chem. B 106, 8132 (2002)
  73. R.E. Wyatt, Quantum Dynamics with Trajectories: Introduction to Quantum Hydrodynamics. Interdisciplinary Applied Mathematics, vol. 28 (Springer, New York, 2005)
  74. M. Mizrahi, J. Math. Phys. 16, 2201 (1975)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Institut für Theoretische Physik, Technische Universität Dresden, Dresden, Saxony, Germany
