Abstract
We study the time-sliced thawed Gaussian propagation method, which was recently proposed for solving the time-dependent Schrödinger equation. We introduce a triplet of quadrature-based analysis, synthesis and reinitialization operators to give a rigorous mathematical formulation of the method. Further, we derive combined error bounds for the discretization of the wave packet transform and the time propagation of the thawed Gaussian basis functions. Numerical experiments in 1D illustrate the theoretical results.
Introduction
Algorithms for the simulation of quantum dynamics play a central role in numerical analysis, since they are nowadays a computational keystone in many research areas such as quantum chemistry. In this paper we consider the time-dependent Schrödinger equation
$$\begin{aligned} i\varepsilon \,\partial _t\psi (x,t) =H\psi (x,t), \end{aligned}$$(1.1)
where the function \(V:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is a smooth potential of subquadratic growth and the complex-valued wave function \(\psi :{\mathbb {R}}^d\times {\mathbb {R}}\rightarrow {\mathbb {C}}\) depends on \(x\in {\mathbb {R}}^d\) and \(t\in {\mathbb {R}}\). The right-hand side of (1.1) is given by the action of the semiclassical operator
$$\begin{aligned} H =-\frac{\varepsilon ^2}{2}\Delta +V(x), \end{aligned}$$
as it results for example from the time-dependent Born–Oppenheimer approximation, where the small positive parameter \(\varepsilon ^2\) represents a mass ratio of nuclei and electrons, see e.g. [31, Chapter II.2]. Since we assume that the potential is of subquadratic growth, H is a self-adjoint linear operator on \(L^2({\mathbb {R}}^d)\) and therefore the spectral theorem provides the unitary propagator
$$\begin{aligned} U(t) =e^{-iHt/\varepsilon },\qquad t\in {\mathbb {R}}, \end{aligned}$$
which guarantees existence and uniqueness of the solution \(\psi (t)=U(t)\psi _0\) for a given initial wave function \(\psi _0\in L^2({\mathbb {R}}^d)\).
Motivated by questions in physics and chemistry, various numerical algorithms for simulations of quantum dynamics have been developed during the last decades. For example, reduced models via variational approximations have been investigated, which include the multiconfiguration methods such as MCTDH, see [34], the variational multiconfiguration Gaussian wave packet (vMCG) method [39], or the variational Gaussian wave packets [4, 21]. Semiclassical approaches such as Hagedorn wave packets [8, 15], Gaussian beams [29, 30, 40] or the Herman–Kluk propagator [23, 28] have been developed to include quantum effects especially for high-dimensional systems, for which standard grid-based numerical methods are infeasible.
Recently, Kong et al. have proposed the time-sliced thawed Gaussian (TSTG) propagation method, see [25], in which Gaussian wave packets are decomposed into linear combinations of Gaussian basis functions without the need for multidimensional numerical integration. The resulting approximations of wave packets can be obtained by discretizing the inversion formula for the so-called FBI (Fourier–Bros–Iagolnitzer) transform, which is used in microlocal analysis to analyze the distribution of wave packets in position and momentum space simultaneously, see e.g. [32]. According to the FBI inversion formula, see e.g. [27, Proposition 5.1], any square-integrable function \(\psi \in L^2({\mathbb {R}}^d)\) can be decomposed as
$$\begin{aligned} \psi =(2\pi \varepsilon )^{-d}\int _{{\mathbb {R}}^{2d}}\langle g_z\mid \psi \rangle \,g_z\,\mathrm {d}z, \end{aligned}$$(1.2)
where the inner product in \(L^2({\mathbb {R}}^d)\) is taken antilinear in its first and linear in its second argument, and the semiclassically scaled wave packet \(g_z\in {\mathcal {S}}({\mathbb {R}}^d)\) is defined for a given Schwartz function \(g:{\mathbb {R}}^d\rightarrow {\mathbb {C}}\) of unit norm, which may, but need not, be a Gaussian, and a phase space center \(z=(q,p)\in {\mathbb {R}}^{2d}\) by
$$\begin{aligned} g_z(x) =\varepsilon ^{-d/4}\,g\big ((x-q)/\sqrt{\varepsilon }\big )\,e^{ip\cdot (x-q)/\varepsilon }. \end{aligned}$$(1.3)
Wave packets of this form are typically used for numerical computations in quantum molecular dynamics and have been extensively studied in the literature, sometimes with different conventions for the phase factor, e.g. \(e^{ip\cdot \left( x-q/2\right) /\varepsilon }\) in [5, Chapter 1.1.2].
A direct discretization of the integral in (1.2) using a multivariate quadrature formula in phase space yields an approximation of the form
$$\begin{aligned} \psi \approx \sum _{\mathbf {k}\in {\mathcal {K}}}c_{\mathbf {k}}(\psi )\,g_{\mathbf {k}}, \end{aligned}$$(1.4)
where \({\mathcal {K}}\subset {\mathbb {N}}^{2d}\) is a given finite multi-index set, e.g. a cube \(\{\mathbf {k}\in {\mathbb {N}}^{2d}\,:\,k_j\le K\}\) or a simplex \(\{\mathbf {k}\in {\mathbb {N}}^{2d}\,:\,\sum _{j=1}^{{2d}}k_j\le K\}\), the representation coefficients \(c_{\mathbf {k}}(\psi )\in {\mathbb {C}}\) are complex numbers, depending on \(\psi \) and the underlying discretization scheme, and the functions \(g_{\mathbf {k}}:=g_{z_\mathbf {k}}\) are wave packets centered at the grid points \(z_{\mathbf {k}}\in {\mathbb {R}}^{2d}\). In particular, if both the represented function \(\psi \) and the basis functions \(g_{\mathbf {k}}\) are Gaussian wave packets, then the coefficients \(c_{\mathbf {k}}(\psi )\), which in this case essentially sample inner products \(\langle g_{\mathbf {k}}\mid \psi \rangle \) of two Gaussians, can be calculated using a formula for multidimensional Gaussian integrals, see Lemma 1. The choice of Gaussian functions is particularly attractive for time propagation, since the time-dependent Schrödinger equation with a quadratic potential leaves the class of Gaussian wave packets invariant. This fact can be used to approximate the time-evolution of Gaussian wave packets in anharmonic potentials, where one distinguishes whether the width matrix is chosen constant in time (frozen) or time-dependent (thawed). We note that the wave packet transform (1.2) has been used for different approximation schemes such as the Herman–Kluk propagator (frozen) or Gaussian beams (thawed).
The discretization in (1.4) with Gaussian basis functions and uniform Riemann sums was used by Kong et al. and can be viewed as one of the main ingredients for the design of the TSTG method, which we investigate in the present paper.
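To make the discretization concrete, the following one-dimensional sketch (our own illustration, not taken from [25]) approximates a Gaussian wave packet by a superposition of the form (1.4), with the coefficients computed by uniform Riemann sums; the width matrix \(C=i\), the phase convention \(e^{ip(x-q)/\varepsilon }\), the box, and the spacings are example choices.

```python
import numpy as np

# Illustrative 1D sketch: discretize the FBI inversion formula (1.2) with
# uniform Riemann sums and check that the resulting superposition (1.4)
# reproduces a Gaussian wave packet.
eps = 0.1                                   # semiclassical parameter
x = np.linspace(-4.0, 4.0, 1601)            # spatial grid for the L2 norms
dx = x[1] - x[0]

def wave_packet(q, p):
    """Unit-norm Gaussian wave packet g_z with width matrix C = i (d = 1)."""
    return (np.pi * eps) ** -0.25 * np.exp(
        -(x - q) ** 2 / (2 * eps) + 1j * p * (x - q) / eps)

psi0 = wave_packet(0.5, 0.3)                # initial datum, itself a Gaussian

# uniform phase-space grid over the box [-2.5, 2.5]^2, spacing h in q and p
h = 0.15
centers = np.arange(-2.5, 2.5 + h / 2, h)
psi_approx = np.zeros_like(psi0)
for q in centers:
    for p in centers:
        g = wave_packet(q, p)
        # c_k = h^2 / (2*pi*eps) * <g_k | psi0>, inner product via quadrature
        c = h * h / (2 * np.pi * eps) * np.sum(np.conj(g) * psi0) * dx
        psi_approx += c * g

rel_err = np.linalg.norm(psi_approx - psi0) / np.linalg.norm(psi0)
```

With this strongly oversampled lattice the relative \(L^2\)-error of the reconstruction is small; refining the spacing h reduces the Riemann-sum error further.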
Remark 1
In the following we work with time-evolved basis functions. To distinguish between the original and the time-evolved basis functions, we write \(g_{\mathbf {k},0}\) for the former and \(g_{\mathbf {k}}(t)\) for the latter.
Starting from the representation of the initial wave function according to (1.4), the solution to the Schrödinger Eq. (1.1) is approximated in the TSTG method after a short propagation time \(\tau >0\) by the linear combination of time-evolved basis functions as
$$\begin{aligned} \psi (\tau ) =U(\tau )\psi _0 \approx \sum _{\mathbf {k}\in {\mathcal {K}}}c_{\mathbf {k}}(\psi _0)\,g_{\mathbf {k}}(\tau ), \end{aligned}$$
where we introduced the abbreviations \(\psi (\tau )\) and \(g_{\mathbf {k}}(\tau )\) for \(\psi (\bullet ,\tau )\) and \(g_{\mathbf {k}}(\bullet ,\tau )\). Using thawed Gaussians to approximate the time-evolution of each basis function, the discretization of the wave packet transform (1.4) is brought into play again to represent the individual thawed Gaussian approximants \(u_{\mathbf {k}}^\tau \approx g_{\mathbf {k}}(\tau )\) as
$$\begin{aligned} u_{\mathbf {k}}^\tau \approx \sum _{\mathbf {k}'\in {\mathcal {K}}}c_{\mathbf {k}'}(u_{\mathbf {k}}^\tau )\,g_{\mathbf {k}',0}, \end{aligned}$$
which makes it possible to approximate the solution \(\psi (\tau )\) directly in the original basis in terms of updated coefficients \(c^{1,\tau }_{\mathbf {k}}\) as
$$\begin{aligned} \psi (\tau )\approx \sum _{\mathbf {k}\in {\mathcal {K}}}c^{1,\tau }_{\mathbf {k}}\,g_{\mathbf {k},0}. \end{aligned}$$
The concatenation of TSTG propagation steps then results in approximations for larger times \(2\tau ,3\tau ,\ldots \), which are obtained (without additional time integration) by computing updated coefficients \(c^{2,\tau }_{\mathbf {k}},c^{3,\tau }_{\mathbf {k}},\ldots \) of higher order. Since all these coefficients have analytic representations, multidimensional numerical quadrature can be completely avoided, which means that the total error of the method is generated by three sources: (1) the discretization of the wave packet transform, (2) the thawed Gaussian approximations and (3) the numerical integration of the thawed equations of motion. The precise analysis of these errors is the subject of this paper.
As expected, our analysis confirms that also for the TSTG method the conventional grid-based approach results in an unacceptably large number of basis functions, since the total number of grid points required to achieve a given accuracy increases exponentially with the dimension d. One way to bypass the curse of dimensionality for the resulting tensors of basis functions and coefficients is to use low-rank approximation techniques. In our future research, we will explore the combination of the TSTG method with tensor-train (TT) approximations as introduced by Oseledets and Tyrtyshnikov, see [35, 36].
Main results and outline
The paper is organized as follows. In Sect. 2 we review the TSTG method and provide a detailed mathematical formulation of all subroutines. This includes the definition of quadrature-based analysis, synthesis and reinitialization operators, which are used later to investigate the discretization of the wave packet transform and allow for direct comparison with other methods that can also be used to solve the time-dependent Schrödinger equation. To the best of our knowledge, this is the first time that a rigorous mathematical formulation of the TSTG method is presented. Afterwards, we investigate the errors produced by the individual subroutines and their concatenation. In Sect. 3 we analyze the error for the discretization of the wave packet transform, whereas Sect. 4 deals with thawed Gaussian approximations, followed by an analysis of time discretization for both variationally and non-variationally evolving basis functions. Our main new result, Theorem 1, the first rigorous error bound for the TSTG method, is presented in Sect. 5. Finally, the one-dimensional numerical experiments in Sect. 6 support our theoretical results and illustrate the applicability of the TSTG method for simulations of quantum dynamics, including tunneling dynamics in a double-well potential.
The TSTG propagation method
In this section we present a detailed description of the TSTG method, which is accomplished by deriving a rigorous mathematical formulation of all subroutines. We introduce the analysis, synthesis and reinitialization operators and compare the method with other existing approaches.
Recall the definition of the wave packet \(g_z\) in (1.3). For a complex symmetric matrix \(C\in {\mathbb {C}}^{d\times d}\) with positive definite imaginary part (the set of all matrices with this property is known as the Siegel upper half-space, see [37], and is denoted by \(\mathfrak {S}^+(d)\) in this paper) and all \(x\in {\mathbb {R}}^d\), we set
from which we obtain
The dependency on C and \(\varepsilon \) is always assumed implicitly in the shorthand notation.
Based on the time-independent linear approximation space
$$\begin{aligned} {\mathcal {V}}_{{\mathcal {K}}} =\mathrm {span}\{g_{\mathbf {k},0}\,:\,\mathbf {k}\in {\mathcal {K}}\}, \end{aligned}$$
the TSTG method approximates the solution \(\psi \) to the Schrödinger Eq. (1.1) with time-dependent coefficients as follows:
$$\begin{aligned} \psi (t)\approx \sum _{\mathbf {k}\in {\mathcal {K}}}c_{\mathbf {k}}(t)\,g_{\mathbf {k},0}. \end{aligned}$$(2.2)
The time-dependent representation coefficients result from the concatenation of thawed Gaussian propagation steps for the basis functions with the reinitialization of the evolved basis in the time-independent approximation space \({\mathcal {V}}_{{\mathcal {K}}}\). To give the equations of motion for the coefficients, we introduce the quadrature-based pair of operators
$$\begin{aligned} {\mathcal {A}}_{{\mathcal {K}}}:L^2({\mathbb {R}}^d)\rightarrow {\mathbb {C}}^{{\mathcal {K}}},\qquad {\mathcal {S}}_{{\mathcal {K}}}:{\mathbb {C}}^{{\mathcal {K}}}\rightarrow L^2({\mathbb {R}}^d), \end{aligned}$$
where for a given quadrature formula the analysis operator \({\mathcal {A}}_{{\mathcal {K}}}\) maps a function \(\psi \in L^2({\mathbb {R}}^d)\) to the coefficient tensor \((c_{\mathbf {k}}(\psi ))\) according to the discretization of the wave packet transform (1.4), and the synthesis operator \({\mathcal {S}}_{{\mathcal {K}}}\) maps a given coefficient tensor \((c_{\mathbf {k}})\) to the Gaussian superposition
$$\begin{aligned} {\mathcal {S}}_{{\mathcal {K}}}(c_{\mathbf {k}}) =\sum _{\mathbf {k}\in {\mathcal {K}}}c_{\mathbf {k}}\,g_{\mathbf {k},0}. \end{aligned}$$
Furthermore, for a tensor \({\mathcal {C}}\in {\mathbb {C}}^{{\mathcal {K}}\times {\mathcal {K}}}\) we introduce the so-called reinitialization operator
$$\begin{aligned} {\mathcal {R}}_{{\mathcal {K}}}({\mathcal {C}}):{\mathbb {C}}^{{\mathcal {K}}}\rightarrow {\mathbb {C}}^{{\mathcal {K}}},\qquad \big ({\mathcal {R}}_{{\mathcal {K}}}({\mathcal {C}})\,c\big )_{\mathbf {k}} =\sum _{\mathbf {k}'\in {\mathcal {K}}}{\mathcal {C}}_{\mathbf {k},\mathbf {k}'}\,c_{\mathbf {k}'}, \end{aligned}$$
which can be viewed as a multidimensional version of the matrix–vector product. With these operators at hand we can formulate the TSTG method, which first runs through the following three subroutines once:

(s1)
Representation coefficients of the initial wave function:
The first subroutine computes the coefficients
$$\begin{aligned} (c_{\mathbf {k}}(\psi _0))={\mathcal {A}}_{{\mathcal {K}}}\psi _0, \end{aligned}$$which can be used to build the following approximation of a given initial wave function \(\psi _0\) in \({\mathcal {V}}_{{\mathcal {K}}}\):
$$\begin{aligned} \psi _0 \approx {\mathcal {S}}_{{\mathcal {K}}}{\mathcal {A}}_{{\mathcal {K}}}\psi _0 =\sum _{\mathbf {k}\in {\mathcal {K}}}c_{\mathbf {k}}(\psi _0)\,g_{\mathbf {k},0}. \end{aligned}$$ 
(s2)
Thawed Gaussian propagation of the basis functions:
In the second subroutine, each basis function \(g_{\mathbf {k},0}\) is propagated for a short propagation period \(\tau >0\). More precisely, each individual time-evolved basis function \(g_{\mathbf {k}}(\tau )\) is approximated by an element \(u_{\mathbf {k}}(\tau )\) in the manifold of complex Gaussian functions
$$\begin{aligned} {\mathcal {M}} =\Bigg \{u\in L^2({\mathbb {R}}^d)\,\Big |\, u(x) =g_z^{C,\varepsilon }(x)e^{iS/\varepsilon },\,z\in {\mathbb {R}}^{2d},\,C\in \mathfrak {S}^+(d),\,S\in {\mathbb {R}}\Bigg \}, \end{aligned}$$evolving according to the thawed Gaussian propagation method, see [20]. It is known that \(u_{\mathbf {k}}(\tau )\) is an accurate approximation only if the potential can be approximated as harmonic throughout the “support” of \(u_{\mathbf {k}}(\tau )\), i.e., as long as its width is not too large, see Lemma 5 for a precise estimate. Based on a numerical integrator for the corresponding equations of motion, let us introduce the approximate propagator
$$\begin{aligned} {\mathcal {U}}_{\mathbf {k}}^\tau :{\mathcal {M}}\rightarrow {\mathcal {M}},\quad g_{\mathbf {k},0}\mapsto u^\tau _{\mathbf {k}}, \end{aligned}$$(2.3)where we use the notation with the superscript to indicate that \(u^\tau _{\mathbf {k}}\in {\mathcal {M}}\) is the numerical approximation to \(u_{\mathbf {k}}(\tau )\) obtained by solving a system of ordinary differential equations (see also Sect. 4). Then, for all \(\mathbf {k}\in {\mathcal {K}}\), the second subroutine produces the numerical approximants
$$\begin{aligned} u^\tau _{\mathbf {k}} ={\mathcal {U}}_{\mathbf {k}}^\tau \,g_{\mathbf {k},0} \approx g_{\mathbf {k}}(\tau ). \end{aligned}$$ 
(s3)
Computation of coefficients for the reinitialization:
The approximants \(u^\tau _{\mathbf {k}}\) obtained in (s2) are now re-expanded in \({\mathcal {V}}_{{\mathcal {K}}}\) as follows. For all \(\mathbf {k}\in {\mathcal {K}}\), we apply the analysis operator \({\mathcal {A}}_{{\mathcal {K}}}\) to the wave packet \(u^\tau _{\mathbf {k}}\), which gives us the tensors
$$\begin{aligned} {\mathcal {A}}_{{\mathcal {K}}}u^\tau _{\mathbf {k}} =(c_{\mathbf {k}'}(u^\tau _{\mathbf {k}}))\in {\mathbb {C}}^{{\mathcal {K}}}. \end{aligned}$$The result of the third subroutine is then a tensor \({\mathcal {C}}^\tau \in {\mathbb {C}}^{{\mathcal {K}}\times {\mathcal {K}}}\) that contains the coefficients \({\mathcal {C}}^\tau _{\mathbf {k}', \mathbf {k}}:=c_{\mathbf {k}'}(u^\tau _{\mathbf {k}})\) for all \(\mathbf {k},\mathbf {k}'\in {\mathcal {K}}\). In particular, this tensor is obtained without numerical integration, because all coefficients sample inner products of two Gaussians.
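The reinitialization operator acting on such tensors is a multidimensional matrix–vector product, which can be sketched as a tensor contraction; the sizes below are arbitrary example choices, and the random arrays merely stand in for \({\mathcal {C}}^\tau \) and a coefficient tensor.

```python
import numpy as np

# Sketch: for multi-indices in {0,...,k-1}^{2d} (here d = 1, so 2d = 2),
# the reinitialization operator contracts a coefficient tensor with the
# tensor C over all 2d indices -- a multidimensional matrix-vector product.
k, two_d = 4, 2
rng = np.random.default_rng(0)
C = rng.standard_normal((k,) * (2 * two_d))   # stands in for C in C^{K x K}
c = rng.standard_normal((k,) * two_d)         # stands in for a coefficient tensor

c_new = np.tensordot(C, c, axes=two_d)        # contract the last 2d axes of C
# equivalent flattened matrix-vector product with K = k^{2d} rows and columns:
c_flat = C.reshape(k ** two_d, k ** two_d) @ c.reshape(-1)
```

The flattened form makes explicit that the contraction is an ordinary matrix–vector product after identifying each multi-index with a single index.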
Remark 2
Note that the corresponding re-expansion \(u^\tau _{\mathbf {k},{\mathcal {K}}}\) of \(u^\tau _{\mathbf {k}}\) in \({\mathcal {V}}_{{\mathcal {K}}}\) is given by the action of the synthesis operator:
$$\begin{aligned} u^\tau _{\mathbf {k},{\mathcal {K}}} ={\mathcal {S}}_{{\mathcal {K}}}{\mathcal {A}}_{{\mathcal {K}}}u^\tau _{\mathbf {k}} =\sum _{\mathbf {k}'\in {\mathcal {K}}}c_{\mathbf {k}'}(u^\tau _{\mathbf {k}})\,g_{\mathbf {k}',0}. \end{aligned}$$
Running through the above subroutines once, we are equipped with the tensor \((c_{\mathbf {k}}(\psi _0))\) for the approximation of the initial datum and the tensor \({\mathcal {C}}^\tau \in {\mathbb {C}}^{{\mathcal {K}}\times {\mathcal {K}}}\) containing the coefficients \(c_{\mathbf {k}'}(u^\tau _{\mathbf {k}})\). To now obtain an approximation of the solution at time \(\tau \), we use the reinitialization operator \({\mathcal {R}}_{{\mathcal {K}}}^\tau :={\mathcal {R}}_{{\mathcal {K}}}({\mathcal {C}}^\tau )\) to get
where we have changed the names of the indices \(\mathbf {k}\) and \(\mathbf {k}'\) to get to the third line. Furthermore, using that the unitary propagator can be decomposed for \(n>1\) as
$$\begin{aligned} U(n\tau ) =U(\tau )^n, \end{aligned}$$
single TSTG propagation steps can be concatenated to approximate the solution at times \(2\tau ,3\tau ,\ldots \), where for the \((n+1)\)th iteration we use the approximant \(\psi _{{\mathcal {K}}}^{n,\tau }\) of the nth iteration as new initial datum and therefore we arrive at the following approximation at time \(t_n=n\tau \),
$$\begin{aligned} \psi (t_n)\approx \psi _{{\mathcal {K}}}^{n,\tau } ={\mathcal {S}}_{{\mathcal {K}}}\big ({\mathcal {R}}_{{\mathcal {K}}}^\tau \big )^n{\mathcal {A}}_{{\mathcal {K}}}\psi _0, \end{aligned}$$
where we replaced the operator \({\mathcal {A}}_{{\mathcal {K}}} {\mathcal {S}}_{{\mathcal {K}}}\) in the intermediate steps with the identity, which reflects the fact that the representation coefficients from a previous step can be kept in memory. In particular, the reinitialization yields that the corresponding coefficients of \(\psi _{{\mathcal {K}}}^{n,\tau }\) are given for all \(\mathbf {k}\in {\mathcal {K}}\) by the recursion formula
$$\begin{aligned} c^{n+1,\tau }_{\mathbf {k}} =\sum _{\mathbf {k}'\in {\mathcal {K}}}{\mathcal {C}}^\tau _{\mathbf {k},\mathbf {k}'}\,c^{n,\tau }_{\mathbf {k}'},\qquad c^{0,\tau }_{\mathbf {k}}:=c_{\mathbf {k}}(\psi _0). \end{aligned}$$
Finally, let us emphasize that the coefficients (and thus also the approximants) are updated recursively on the discrete time grid \(2\tau ,3\tau ,\ldots \) and therefore (2.2) should be rewritten for a fixed propagation time \(\tau \) as
$$\begin{aligned} \psi (t_n)\approx \sum _{\mathbf {k}\in {\mathcal {K}}}c^{n,\tau }_{\mathbf {k}}\,g_{\mathbf {k},0}. \end{aligned}$$
Remark 3
The TSTG method as originally introduced by Kong et al. does not use a direct discretization of the wave packet transform. Instead, the authors present an equivalent approach using a basis of closely overlapping Gaussians to construct a partition of unity based on a summation curve that can be approximated by a constant in the support of all basis functions. We examined this approach in [1] and the discretization of the wave packet transform presented here gives a new perspective that enables a straightforward representation of the discretization error.
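As a one-dimensional consistency check of the recursive coefficient update (our own sketch, with illustrative parameters, not taken from [25]), consider the trivial case \(\tau =0\): then each propagated basis function equals \(g_{\mathbf {k},0}\), the transfer matrix samples plain Gaussian overlaps, and the coefficient vector of the initial datum should be an approximate fixed point of the recursion.

```python
import numpy as np

# 1D sketch of the coefficient recursion c^{n+1} = C^tau c^n in the trivial
# case tau = 0, where the transfer matrix contains sampled Gaussian overlaps;
# the coefficient vector should then be an approximate fixed point.
eps = 0.1
x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]
h = 0.2                                      # phase-space grid spacing
centers = np.arange(-2.0, 2.0 + h / 2, h)
Z = [(q, p) for q in centers for p in centers]

def wave_packet(q, p):                       # unit-norm Gaussian, C = i
    return (np.pi * eps) ** -0.25 * np.exp(
        -(x - q) ** 2 / (2 * eps) + 1j * p * (x - q) / eps)

G = np.array([wave_packet(q, p) for (q, p) in Z])   # basis on the x-grid
w = h * h / (2 * np.pi * eps)                       # uniform Riemann weight

psi0 = wave_packet(0.2, -0.1)
c = w * (np.conj(G) @ psi0) * dx                    # analysis of psi0
Ct = w * (np.conj(G) @ G.T) * dx                    # transfer matrix, tau = 0

for _ in range(3):                                  # three reinitializations
    c = Ct @ c
psi_n = G.T @ c                                     # synthesis of the result
rel_err = np.linalg.norm(psi_n - psi0) / np.linalg.norm(psi0)
```

The smallness of the error after several iterations reflects the fact that the representation coefficients, and not the functions themselves, are updated in each TSTG step.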
Comparison with other methods
Looking at the chosen ansatz in (2.2), one way to determine the corresponding time-dependent coefficient tensor \(c=(c_{\mathbf {k}})\) would be the standard Galerkin method, which yields a linear system of ordinary differential equations and is derived from the condition that
$$\begin{aligned} \Big \langle g_{\mathbf {k},0}\,\Big |\,i\varepsilon \,\partial _t\psi _{{\mathcal {K}}}-H\psi _{{\mathcal {K}}}\Big \rangle =0\qquad \text {for all }\mathbf {k}\in {\mathcal {K}}. \end{aligned}$$(2.5)
With the orthogonal projection \(P_{{\mathcal {K}}}:L^2({\mathbb {R}}^d)\rightarrow {\mathcal {V}}_{{\mathcal {K}}}\) onto the approximation space, the Galerkin condition (2.5) can also be written as
$$\begin{aligned} P_{{\mathcal {K}}}\big (i\varepsilon \,\partial _t\psi _{{\mathcal {K}}}-H\psi _{{\mathcal {K}}}\big ) =0. \end{aligned}$$
Let us therefore take a closer look at \({\mathcal {V}}_{{\mathcal {K}}}\). The approximation space is spanned by the non-orthogonal Gaussian basis functions \(g_{\mathbf {k},0}\). To achieve a given accuracy for the discretization of the wave packet transform, the grid points \(z_{\mathbf {k}}\) must be chosen sufficiently close to each other, which means that the basis functions have a large overlap and therefore the Gram matrix of the Galerkin method becomes ill-conditioned. This problem has been extensively studied in the literature, see e.g. [12, Section 3], and several stabilization algorithms have been proposed, see e.g. [13, 26]. Furthermore, it is worth noting that the Gram matrix becomes the identity if the Gaussians are replaced by an orthonormal basis, which suggests a comparison with the Galerkin method in [31, Chapter III.1.1], where the time-independent approximation space is spanned by the first \(K\ge 1\) Hermite functions
which are known to form an \(L^2\)-orthonormal set. Although this choice enables a remarkably simple representation of the orthogonal projection, namely
which is used in [31, Chapter III.1.1, Theorem 1.2] to derive the approximation error of the Galerkin method, in practical applications the dimension of \({\mathcal {V}}_{\mathcal {K}}\) must typically be chosen large in order to compute the evolution of the wave function with sufficient accuracy. For instance, for simulations of tunneling in double-well potentials (quartic potentials with two local minima separated by an energy barrier) as presented later in Sect. 6.3, the Hermite basis is expensive, since the Hermite functions are localized by a Gaussian envelope and therefore the degree of the polynomial prefactors must be large to capture both minima.
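The growth of the required Hermite degree can be observed numerically; the following sketch (our own illustration with example parameters) builds semiclassically scaled Hermite functions \(\varphi _n(x)=\varepsilon ^{-1/4}h_n(x/\sqrt{\varepsilon })\) via the standard three-term recurrence and counts how many of them are needed to expand a Gaussian displaced from the origin.

```python
import numpy as np

# Illustration: a Gaussian displaced from the origin needs many
# semiclassically scaled Hermite functions for an accurate expansion.
eps = 0.1
x = np.linspace(-4.0, 5.0, 1801)
dx = x[1] - x[0]
y = x / np.sqrt(eps)

# normalized Hermite functions h_n via the stable three-term recurrence,
# scaled to phi_n(x) = eps^{-1/4} h_n(x / sqrt(eps))
nmax = 80
phi = np.zeros((nmax, x.size))
phi[0] = eps ** -0.25 * np.pi ** -0.25 * np.exp(-y ** 2 / 2)
phi[1] = np.sqrt(2.0) * y * phi[0]
for n in range(1, nmax - 1):
    phi[n + 1] = (np.sqrt(2.0 / (n + 1)) * y * phi[n]
                  - np.sqrt(n / (n + 1.0)) * phi[n - 1])

q0 = 1.5                                    # displacement of the Gaussian
psi = (np.pi * eps) ** -0.25 * np.exp(-(x - q0) ** 2 / (2 * eps))
coef = phi @ psi * dx                       # L2 inner products <phi_n | psi>
residual = np.sqrt(np.maximum(1.0 - np.cumsum(coef ** 2), 0.0))
n_needed = int(np.argmax(residual < 1e-2))  # first n with residual below 1%
```

Even this modest displacement requires a few dozen Hermite functions for one-percent accuracy, which illustrates why a double-well simulation with two well-separated minima is expensive in this basis.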
Furthermore, we note that time-varying approximation spaces have also been studied in the past. Linear combinations of time-evolved frozen Gaussian functions have been proposed by Heller, see [22], and can be improved by taking time-dependent coefficients that are determined by the Dirac–Frenkel time-dependent variational principle, see [31, Chapter II.5.3]. We would also like to mention the Galerkin approximation for Hagedorn functions, a generalization of the Hermite functions based on a Gaussian amplitude with arbitrary width matrix in the Siegel half-space, see e.g. [27, Section 4.3] and [2, 15].
Summary
While the standard Galerkin condition yields a linear system of ordinary differential equations for the coefficients, which contains the ill-conditioned Gram matrix due to the closely overlapping basis functions, the TSTG method combines thawed Gaussians for the propagation of the basis with the operators \({\mathcal {A}}_{{\mathcal {K}}},{\mathcal {S}}_{{\mathcal {K}}}\) and \({\mathcal {R}}_{{\mathcal {K}}}^\tau \), which are based on the discretization of the wave packet transform and are obtained without numerical integration.
Discretizing the wave packet transform
In this section we discuss the discretization of the phase space integral
$$\begin{aligned} (2\pi \varepsilon )^{-d}\int _{{\mathbb {R}}^{2d}}\langle g_z\mid \psi \rangle \,g_z\,\mathrm {d}z \end{aligned}$$(3.1)
for the case of Gaussian basis functions and uniform Riemann sums. We present an analytical formula for the coefficients \(c_{\mathbf {k}}(\psi )\), proving that they are Gaussian wave packets in phase space. Moreover, we discuss the discretization error for (3.1).
Recall the inversion formula of the FBI transform in (1.2). A first attempt to obtain an approximation of the phase space integral might use a multivariate integration formula based on weighted point evaluations of the integrand, in which case the analysis operator takes the form
$$\begin{aligned} {\mathcal {A}}_{{\mathcal {K}}}\psi =\big (w_{\mathbf {k}}\,(2\pi \varepsilon )^{-d}\langle g_{\mathbf {k}}\mid \psi \rangle \big )_{\mathbf {k}\in {\mathcal {K}}}, \end{aligned}$$
where the numbers \(w_{\mathbf {k}}\ge 0\) are nonnegative weights. In particular, in “Appendix A” we prove that \({\mathcal {A}}_{{\mathcal {K}}}\) and \({\mathcal {S}}_{{\mathcal {K}}}\) are formally adjoint and therefore from now on we write \({\mathcal {S}}_{{\mathcal {K}}} ={\mathcal {A}}_{{\mathcal {K}}}^*\). Since the analysis operator has an analytic representation on the manifold \({\mathcal {M}}\subset L^2({\mathbb {R}}^d)\) of complex Gaussian functions, let us take a closer look at inner products of Gaussians.
Remark 4
The inversion formula of the FBI transform is known in the literature under different names, for instance as the inversion formula for the short-time Fourier transform in time–frequency analysis (the semiclassical parameter \(\varepsilon \) is not considered in this context), see e.g. [16, Corollary 3.2.3], or, in the presence of a Gaussian amplitude, as the inversion formula for the Gabor transform, see e.g. [10, Eq. 3.2.5]. Correspondingly, its discrete counterpart as considered here is related to Gabor frames. However, the coefficients as they result from a direct discretization of the phase space integral are not the exact Gabor coefficients and are obtained without computing the dual window of g. For a broader perspective on this theory we refer to [16, Chapter 5].
Inner products of Gaussians
The inner product of Gaussian wave packets has an explicit analytic expression and the next lemma shows that it can be written as a Gaussian in phase space.
Lemma 1
For \(C_1,C_2\in \mathfrak {S}^+(d)\) in the Siegel space and \(z_1,z_2\in {\mathbb {R}}^{2d}\) we have
where the matrix
is an element of the Siegel space \(\mathfrak {S}^+(2d)\) of \(2d\times 2d\) matrices and for \(B=C_2-\bar{C}_1\) the complex constant \(\beta \in {\mathbb {C}}\) is given by
Moreover, if the eigenvalues of the positive definite matrices \(\mathrm{Im}(C_k)\) and \(\mathrm{Im}(C_k^{-1})\), \(k=1,2\), are bounded from below by a constant \(\theta >0\), then the absolute value of the inner product is bounded by
where the constant \({\zeta }>0\) depends on \(\theta \) and an upper bound on the eigenvalues of \(\mathrm{Im}(C_k)\) and \(\mathrm{Im}(C_k^{-1})\), but is independent of \(\varepsilon \).
We present the proof in “Appendix B” and note that the bound in (3.4) can easily be improved if the lower bound on the eigenvalues of \(\mathrm{Im}(C_k)\) and \(\mathrm{Im}(C_k^{-1})\) is not chosen uniformly. We also refer to the proof for the dependence of \({\zeta }\) on the spectral parameters.
From Lemma 1 we learn that the inner product \(\langle g_{\mathbf {k}}\mid g_{z_0}^{C_0,\varepsilon }\rangle \), as it appears in (3.1) for the choice \(z=z_{\mathbf {k}}\) and \(\psi =g_{z_0}^{C_0,\varepsilon }\), is a Gaussian in phase space:
Lemma 2
For a Gaussian wave packet \(\psi \), the coefficients \(c_{\mathbf {k}}(\psi )\) that result from a discretization of the wave packet transform based on a multivariate quadrature formula are weighted Gaussian wave packets in phase space.
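For equal width matrices the Gaussian decay in phase space can be checked directly: assuming \(C_1=C_2=i\) in \(d=1\) (an equal-width special case, consistent with the phase-space Gaussian decay asserted in Lemma 1), the overlap magnitude reduces to the classical coherent-state formula \(|\langle g_{z_1}\mid g_{z_2}\rangle |=e^{-|z_1-z_2|^2/(4\varepsilon )}\), which the following sketch compares with brute-force quadrature.

```python
import numpy as np

# 1D sanity check: for equal width matrices C1 = C2 = i, the magnitude of
# the Gaussian overlap reduces to the coherent-state formula
# |<g_{z1} | g_{z2}>| = exp(-|z1 - z2|^2 / (4 eps)).
eps = 0.1
x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]

def wave_packet(q, p):                      # unit-norm packet, C = i
    return (np.pi * eps) ** -0.25 * np.exp(
        -(x - q) ** 2 / (2 * eps) + 1j * p * (x - q) / eps)

z1, z2 = (0.0, 0.0), (0.3, 0.2)
overlap = np.sum(np.conj(wave_packet(*z1)) * wave_packet(*z2)) * dx
dist2 = (z1[0] - z2[0]) ** 2 + (z1[1] - z2[1]) ** 2
predicted = np.exp(-dist2 / (4 * eps))
```

The phase convention only affects the argument of the overlap, not its magnitude, so the comparison is insensitive to the choice of phase factor discussed in Sect. 1.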
Due to the rapid decay of Gaussians, the (improper) phase space integral (3.1) can be approximated by a truncated integral, which itself can be approximated via different multivariate quadrature rules afterwards. In the next step we investigate these approximations.
Truncation and multivariate quadrature
We continue with the truncation error of the wave packet transform.
Lemma 3
(Truncation error) For a given phase space center \(z_0\in {\mathbb {R}}^{2d}\) and a positive parameter \(b>0\) consider the phase space box
Moreover, for \(C,C_0\in \mathfrak {S}^+(d)\) let \(g_z=g_z^{C,\varepsilon }\) and \(\psi _0=g_{z_0}^{C_0,\varepsilon }\) and assume that the eigenvalues of \(\mathrm{Im}(C),\mathrm{Im}(C_0)\) and \(\mathrm{Im}(C^{-1}),\mathrm{Im}(C_0^{-1})\) are bounded from below by \(\theta >0\) and from above by \(\Theta >0\). Then, there exists a positive constant \(c>0\), which is independent of \(\varepsilon \) but depends on the spectral parameters, such that
where \(B_q\subset {\mathbb {R}}^d\) denotes the projection of B onto the position space.
Proof
Recall the definition of the Gaussian wave packet \(g_z=g_z^{C,\varepsilon }\) in (2.1). A short calculation shows that in terms of the rescaled phase space box
the difference
satisfies the following equation for all \(x\in {\mathbb {R}}^d\):
which depends on \(\varepsilon \) only through the semiclassically scaled truncation box \(B^\varepsilon \). Since the scaling \(f\mapsto \varepsilon ^{d/4}f(\sqrt{\varepsilon }\bullet )\) is unitary and the Gaussian envelope \(g_{z'}^{C,1}=g^C(\bullet -q')\) has unit \(L^2\)-norm, it further follows that
and therefore the bound for the inner product of Gaussians in (3.4) yields
Furthermore, the symmetry of the integral and Fubini’s theorem yield that
Using the exponential-type bound \(\mathrm{erfc}(z)\le e^{-z^2}\), \(z>0\), for the complementary error function, see e.g. [3, Eq. (5)], we conclude that
and therefore we finally get
In particular, this shows that the constant c can be chosen as
\(\square \)
We note that Lemma 3 can be easily improved if separate boxes \(B_q\subset {\mathbb {R}}^d\) and \(B_p\subset {\mathbb {R}}^d\) are used in position and momentum space, which can also be aligned with the eigenvectors of the width matrix of the integrand, see e.g. [1, Lemma 3.4].
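The complementary-error-function bound \(\mathrm{erfc}(z)\le e^{-z^2}\) used in the proof is elementary to check numerically:

```python
import math

# quick numerical check of the bound erfc(z) <= exp(-z^2) for z > 0,
# as used in the proof of Lemma 3
zs = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]
ok = all(math.erfc(z) <= math.exp(-z * z) for z in zs)
```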
The truncated phase space integral in (3.6) can now easily be approximated by a multidimensional Riemann sum over sufficiently dense lattices in position and momentum space. This approach was used by Kong et al., who worked with uniform grids of size \(\Delta q_j>0\) and \(\Delta p_j>0\) in each coordinate direction \(j=1,\ldots ,d\), corresponding to the constant weights
$$\begin{aligned} w_{\mathbf {k}} =\prod _{j=1}^{d}\Delta q_j\,\Delta p_j. \end{aligned}$$
For a given phase space box B such as (3.5), the discretization error then depends not only on the number of grid points that are used to subdivide B, but also on the dimension of the phase space:
Lemma 4
Let \(f\in C^\infty ({\mathbb {R}}^{2d})\) and \(K=k^{2d}\) for some \(k\ge 1\). Then, there exists a positive constant \(c_f>0\), depending on the function f, such that
where \({\mathcal {K}}=\{1,2,\ldots ,k\}^{2d}\). In particular, \(c_f\) can be chosen as the total variation of the function f (in the sense of Hardy and Krause).
We formulated Lemma 4 as a special variant of a more general result that can be found in [6, Chapter 5.5.5]. Moreover, we note that the estimate in Lemma 4 can be improved to a bound of order \({\mathcal {O}}(k^{-2})\) if the composite midpoint rule is used instead of the composite rectangle rule.
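The difference between the two quadrature orders is easy to see in a one-dimensional toy computation (integrand and interval are illustrative choices): halving the step size roughly halves the rectangle-rule error but quarters the midpoint-rule error.

```python
import numpy as np

def rectangle(f, a, b, k):
    # composite left-endpoint rectangle rule, error of order 1/k
    h = (b - a) / k
    return h * np.sum(f(a + h * np.arange(k)))

def midpoint(f, a, b, k):
    # composite midpoint rule, error of order 1/k^2
    h = (b - a) / k
    return h * np.sum(f(a + h * (np.arange(k) + 0.5)))

f, exact = np.exp, np.e - 1.0               # integral of exp over [0, 1]
err_rect = [abs(rectangle(f, 0.0, 1.0, k) - exact) for k in (100, 200)]
err_mid = [abs(midpoint(f, 0.0, 1.0, k) - exact) for k in (100, 200)]
ratio_rect = err_rect[0] / err_rect[1]      # close to 2 (first order)
ratio_mid = err_mid[0] / err_mid[1]         # close to 4 (second order)
```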
The total error for the discretization of the wave packet transform is now obtained by combining the estimates in Lemmas 3 and 4. For a Gaussian wave packet \(\psi _0=g_{z_0}^{C_0,\varepsilon }\) and a given phase space box B centered in \(z_0\), let \(B_q\subset {\mathbb {R}}^d\) denote the projection of B onto the position space. Moreover, let us introduce the following notation for the spatial discretization error:
Note that this definition reflects the assumption that on \({\mathcal {V}}_{{\mathcal {K}}}\) the operator \({\mathcal {A}}_{{\mathcal {K}}}^*{\mathcal {A}}_{{\mathcal {K}}}\) is replaced by the identity, since representation coefficients can be kept in memory. We then arrive at the following error estimate:
Proposition 1
(Discretization error for uniform Riemann sums) Let \(C_0\in \mathfrak {S}^+(d)\) and \(z_0\in {{\mathbb {R}}^{2d}}\). For the discretization of the phase space integral
using the phase space box B in (3.5) and uniform Riemann sums with \(k\ge 1\) grid points in each coordinate direction, there exist constants \(c^{(\mathrm{T)}},c^{(\mathrm{RS})}>0\) such that
We learn from the previous discussions that the discretization of the phase space integral with conventional grid-based approaches such as Riemann sums in every coordinate direction results in an unacceptably large number of basis functions, since the total number of grid points increases exponentially with the dimension. Sparse grid methods can overcome this curse of dimensionality to a certain extent, and we refer to [14] for a comprehensive presentation of several methods based on Smolyak’s sparse grid construction and further developments. As already mentioned, we plan to use tensor-train approximations to extend the range of dimensions that can be simulated with the TSTG approach.
Remark 5
In [1] we study the discretization of the wave packet transform via different quadrature rules. Based on Gauss–Hermite quadrature, we introduce a representation of Gaussian wave packets in which the number of basis functions is significantly reduced and therefore offers an alternative to the approximation with Riemann sums according to Proposition 1.
Methods for propagating Gaussian wave packets
This section deals with the propagation of the basis functions. The main result is the error bound for a single TSTG step in Proposition 3, which combines an estimate for thawed Gaussian approximations with an estimate for the numerical integration of the underlying equations of motion.
Recall that in subroutine (s2) of the method the individual basis functions are propagated according to the (non-variational) thawed Gaussian equations, see [25, Eq. (17)]. The equations for the parameters \(z\in {\mathbb {R}}^{2d},\,C\in \mathfrak {S}^+(d)\) and \(S\in {\mathbb {R}}\) in the definition of the manifold \({\mathcal {M}}\) combine the Hamiltonian system
$$\begin{aligned} {\dot{z}}(t) =J\nabla h(z(t)),\qquad J=\begin{pmatrix} 0&{}\mathrm {Id}\\ -\mathrm {Id}&{}0 \end{pmatrix},\qquad h(q,p)=\tfrac{1}{2}|p|^2+V(q), \end{aligned}$$(4.1)
for the motion of the center z(t) with equations for C(t) and S(t) ensuring that in the presence of a quadratic potential we obtain exact solutions. In addition to the scheme used by Kong et al., other propagation methods are also possible as long as the approximants \(u_{\mathbf {k}}(\tau )\) lie in the Gaussian manifold \({\mathcal {M}}\), which ensures that the coefficients for the re-expansion can be calculated analytically. For instance, variationally evolving Gaussians offer an alternative which, like the non-variational Gaussians, provides approximations with order \({\mathcal {O}}(\sqrt{\varepsilon })\) accuracy. The approximate solution is then determined by the Dirac–Frenkel time-dependent variational approximation principle, see e.g. [27, Section 3], and the equations of motion for the parameters were first derived by Coalson and Karplus, see [4]. Using Hagedorn’s parametrization \(C=PQ^{-1}\), where the matrices \(P,Q\in {\mathbb {C}}^{d\times d}\) are invertible and satisfy the relations
these equations read
where we denote by \(\langle W\rangle _u=\langle u\mid Wu\rangle ,\,W\in \{V,\nabla _x V,\nabla ^2_x V\}\), the expectation values. In particular, for the propagation of the basis functions in the TSTG method the initial conditions are given by
$$\begin{aligned} z(0)=z_{\mathbf {k}},\qquad Q(0)=\mathrm {Im}(C_0)^{-1/2},\qquad P(0)=C_0\,\mathrm {Im}(C_0)^{-1/2},\qquad S(0)=0, \end{aligned}$$
where \({\mathrm{Im}(C_0)^{1/2}}\) is the unique positive definite square root of \({\mathrm{Im}(C_0)}\).
Remark 6
To obtain the equations of motion for the non-variational Gaussians as used by Kong et al., we replace the equations in (4.3) for (q(t), p(t), Q(t), P(t)) by the point evaluations
which are computationally less demanding than the variational equations of motion. This implies that \(Z(t)=(Q(t),P(t))\) is a solution to the linearization of the classical equations of motion,
where the function h and the symplectic matrix J are defined according to (4.1). Moreover, we note that in the presence of a quadratic potential the above equations coincide with those in (4.3). The parametrization in terms of \(Z=(Q,P)\) goes back to the work of Hagedorn, see [17, 18], and the matrix conditions in (4.2) ensure the correct normalization of the approximant \(u\in {\mathcal {M}}\).
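The equations of motion are not reproduced above; as an illustration, here is a minimal 1D sketch of the non-variational update, assuming the standard form \(\dot q = p\), \(\dot p = -V'(q)\), \(\dot Q = P\), \(\dot P = -V''(q(t))\,Q\) and the Störmer–Verlet scheme mentioned in §6.3 (function names and the harmonic test potential are illustrative):

```python
import numpy as np

def verlet_step(q, p, Q, P, h, dV, d2V):
    """One Stoermer-Verlet step for the classical center (q, p) coupled
    with the linearized flow (Q, P), i.e. Q' = P, P' = -V''(q(t)) Q."""
    p_half = p - 0.5 * h * dV(q)
    P_half = P - 0.5 * h * d2V(q) * Q
    q = q + h * p_half
    Q = Q + h * P_half
    p = p_half - 0.5 * h * dV(q)
    P = P_half - 0.5 * h * d2V(q) * Q
    return q, p, Q, P

# illustration: harmonic oscillator V(x) = x^2 / 2 (exact phase space rotation)
dV = lambda x: x
d2V = lambda x: 1.0

q, p = 1.0, 0.0                # initial center z_0 = (1, 0)
Q, P = 1.0 + 0j, 1j            # width C_0 = P_0 Q_0^{-1} = i
h, n = 1e-3, 1000              # integrate up to t = n h = 1
for _ in range(n):
    q, p, Q, P = verlet_step(q, p, Q, P, h, dV, d2V)
```

Each substep is a shear in \((Q,P)\) with a real coefficient, so the 1D analogue \(\bar{Q}P-\bar{P}Q=2\mathrm{i}\) of the matrix conditions in (4.2) is preserved exactly up to roundoff, while \((q,p)\) and \((Q,P)\) carry the usual second-order Verlet error.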
The next lemma quantifies the accuracy of the thawed Gaussian methods and extends the \(L^2\)-error bound for variational Gaussians in [27, Theorem 3.5] to non-variational Gaussians. We note that the first \(L^2\)-error bound for non-variational Gaussians was proved by Hagedorn, see [18, Theorem 2.9].
Lemma 5
Assume that

the eigenvalues of the positive definite width matrix \(\mathrm{Im}(C(t))\) are bounded from below by a constant \(\rho >0\), for all \(t\in [0,\tau ]\).

the potential function V is three times continuously differentiable with a polynomially bounded third derivative.
Moreover, assume that \(u(t)\in {\mathcal {M}}\) is an approximate solution to the Schrödinger equation obtained by the thawed Gaussian method (variational or non-variational). Then, there exists a positive constant \(c^{(1)}>0\) such that the error between the approximant u(t) and the solution \(\psi (t)\) is bounded in the \(L^2\)-norm by
where \(c^{(1)}\) is independent of \(\varepsilon \) and t but depends on \(\rho \).
The crucial ingredient for the proof is the fact that both the variational and the nonvariational approximation are exact, provided that the potential is quadratic, see [27, Proposition 3.2], and therefore the estimate in (4.5) follows from a bound on the defect for the cubic part of the potential.
Proof
Let \(U_q:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) denote the second-order Taylor polynomial of V at q and let \(W_q:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) be the corresponding remainder, i.e.,
Since the approximant \(u(t)\in {\mathcal {M}}\) is the exact solution to
we obtain
where
Moreover, using that \(W_q(x)\) is the non-quadratic remainder at q, an estimate for moments of Gaussian functions (see [27, Lemma 3.8]) yields the existence of a constant \(c^{(1)}>0\), depending on \(\rho \), such that
Consequently, since \(u-\psi \) satisfies the Schrödinger equation up to the defect
we finally conclude that
\(\square \)
Remark 7
We note that the equations of motion differ for the variational and the non-variational thawed Gaussian method, so that we obtain individual lower bounds on the eigenvalues of the width matrix and hence, although we have omitted this dependency in the notation of Lemma 5, individual constants for the two methods. In particular, the estimate of Lasser and Lubich for Gaussian moments shows that \(c^{(1)}\) depends on the third derivative of V and is of order \(\rho ^{-3/2}\) with respect to the spectral parameter \(\rho \). We also mention that, in contrast to the computation of the full wave function, the error in the expectation values of observables improves to order \({\mathcal {O}}(\varepsilon )\) accuracy, see [27, Theorem 3.5b].
The estimate in (4.5) shows that the thawed Gaussian approximations produce errors that increase linearly in t, where a small semiclassical parameter yields an improvement by a factor \(\sqrt{\varepsilon }\) for the corresponding constant. Since we want to use the thawed Gaussians in the TSTG method to approximate the time evolution of the basis functions \(g_{\mathbf {k},0}\), the propagation time \(\tau \) must be chosen such that we obtain accurate approximations for all \(\mathbf {k}\in {\mathcal {K}}\). A good choice of \(\tau \) therefore enables control of the error for the propagation of the basis, but small values result in more concatenation steps to reach a fixed final time (we present numerical experiments for the dependency on \(\tau \) in §6.2). With this in mind, let us note that frozen Gaussian approximations would also be possible, see [22]. On the one hand, this leads to simpler equations of motion since these approximations do not need information about the second derivative of the potential; on the other hand, with an eye on the parameter \(\varepsilon \), the frozen Gaussian method reduces the accuracy to order \({\mathcal {O}}(1)\).
We now turn to the numerical integration for the equations of motion.
Time discretization
For the integration of the equations of motion we need a suitable numerical integrator. In (2.3) we therefore introduced the approximate propagator \({\mathcal {U}}_{\mathbf {k}}^\tau :{\mathcal {M}}\rightarrow {\mathcal {M}}\), which has not yet been defined in detail except that it maps a Gaussian basis function \(g_{\mathbf {k},0}\) to some numerical approximation \(u^\tau _{\mathbf {k}}\approx g_{\mathbf {k}}(\tau )\). The construction of such integrators essentially relies on exponential operator splitting methods such as the first-order Lie splitting or the second-order Strang splitting. We say that the integrator is of order \(s\ge 1\) if there exists a constant \(c^{(2)}>0\) such that the error between the approximant \(u^\tau _{\mathbf {k}}\), obtained after \(m\ge 1\) steps of size \(h_\tau =\tau /m\), and the true solution \(u_{\mathbf {k}}(\tau )\) is bounded in the \(L^2\)-norm by
For example, the \(L^2\)-error of the Strang splitting is \({\mathcal {O}}(h_\tau ^2/\varepsilon )\), which implies that the step size \(h_\tau \) must be chosen well below \(\sqrt{\varepsilon }\); we refer to [7] for rigorous error bounds in the semiclassical scaling \(\varepsilon \ll 1\).
Equipped with a numerical integrator, we get the following error:
Proposition 2
For \(\tau >0\) and a uniform time grid of step size \(h_\tau >0\) let
Moreover, assume that \({\mathcal {U}}_{\mathbf {k}}^\tau :{\mathcal {M}}\rightarrow {\mathcal {M}}\) is a numerical integrator of order \(s\ge 1\). Then, under the hypotheses of Lemma 5, for all \(\mathbf {k}\in {\mathcal {K}}\) there exists a positive constant \(c_{\mathbf {k}}>0\) such that
Proof
Let \(\tau >0\) and \(m\ge 1\). For all \(\mathbf {k}\in {\mathcal {K}}\), we combine the estimate in (4.5) with the estimate in (4.6) to obtain
Consequently, the bound in (4.8) follows for the constant
\(\square \)
A practical second-order variational splitting algorithm was proposed and studied by Faou and Lubich, see [9]. In particular, it conserves the norm and the symplecticity relations of the matrices Q and P in (4.2). Moreover, we note that there are various higher-order splittings for the unitary propagator that can also be used; we refer the interested reader to [33] and [19, Chapter III].
We are now equipped with an error estimate for the discretization of the wave packet transform, for the thawed Gaussian approximations and for the numerical integration of the thawed equations of motion. We are therefore ready to analyze the error generated by a single TSTG step. Afterwards, in Theorem 1 we lift this error estimate to a global one.
Error after a single TSTG step
Recall that a single TSTG step consists of the following approximations:

1.
The approximation of the initial wave function \(\psi _0\) in the approximation space \({\mathcal {V}}_{{\mathcal {K}}}\) according to subroutine (s1)

2.
The propagation of the basis according to (s2)

3.
The reexpansion of the timeevolved basis in \({\mathcal {V}}_{{\mathcal {K}}}\) according to (s3)
For a tensor \((c_{\mathbf {k}})\in {\mathbb {C}}^{\mathcal {K}}\) let us introduce the following notation for its 1norm:
We then obtain the following result:
Proposition 3
(Error after a single TSTG step) For a given box \(B\subset {\mathbb {R}}^{2d}\) in phase space and a finite index set \({\mathcal {K}}\subset {\mathbb {N}}^{2d}\) recall the definition of the spatial discretization error \(E_{wp}\) in (3.7). Moreover, for \(\mathbf {k}\in {\mathcal {K}},\,\tau >0\) and \(h_\tau >0\) recall the definition of the time discretization error \(E_{\mathbf {k}}^\tau \) in (4.7) produced by a numerical propagator of order \(s\ge 1\) for the thawed equations of motion. Then, there exists a positive constant \(C>0\) such that
where \(E^{1,\tau }>0\) denotes the following bound for the total spatial discretization error:
Proof
In the following let \(\Vert \bullet \Vert \) denote the \(L^2\)-norm on the box \(B_q\) in position space. Using that the evolution operator \(U(\tau )=e^{-iH\tau /\varepsilon }\) is unitary, we have
For the second summand, the definition of the coefficients \(c^{1,\tau }_{\mathbf {k}}\) in (2.4) yields
In particular, as proved in “Appendix D”, for all \(\mathbf {k}\in {\mathcal {K}}\) there exists a positive constant \(\tilde{c}_{\mathbf {k}}>0\) such that
Consequently, using the bound for \(E_{\mathbf {k}}^\tau \) in (4.8) with the constant \(c_{\mathbf {k}}>0\), the estimate in (4.9) follows for the choice
\(\square \)
We note that (4.9) combines the 1-norm with the \(\max \)-norm to bound the last sum in (4.10). Since the spatial errors \(E_{wp}(u^\tau _{\mathbf {k}})\) increase at the boundary of the grid \(\{z_{\mathbf {k}}\}_{\mathbf {k} \in {\mathcal {K}}}\), while the coefficients \(c_{\mathbf {k}}(\psi _0)\) decrease exponentially with the distance \(\Vert z_{\mathbf {k}}-z_0\Vert _2\), other Hölder conjugate exponents, which reflect this grid-dependent interplay more accurately, could also be chosen.
In the next section we investigate the error that is produced by the concatenation of single TSTG steps.
Error estimates for the concatenation
As discussed in §2, approximations for larger times \(2\tau ,3\tau ,\ldots \) are based on the updated coefficients \(c^{2,\tau }_{\mathbf {k}},c^{3,\tau }_{\mathbf {k}},\ldots \) which are given by the recursion formula in (2.4). We therefore start to investigate the magnitude of these coefficients.
Recall that the time-evolved Gaussian approximants \(u_{\mathbf {k}}^\tau \in {\mathcal {M}}\) are re-expanded in the original basis of Gaussians \(g_{\mathbf {k},0}\), which gives us the updated coefficients
Since both factors \(c_{\mathbf {k}'}(\psi _0)\) and \(c_{\mathbf {k}}(u^\tau _{\mathbf {k}'})\) are Gaussian wave packets in phase space, the coefficients can be bounded by a Gaussian envelope (as a sum of Gaussians) and therefore, by induction on n, Gaussian bounds can be derived for all higher-order coefficients \(c^{n,\tau }_{\mathbf {k}},\,n>1\):
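Since the basis is reinitialized to the same functions \(g_{\mathbf {k},0}\) after every step, the recursion (2.4) amounts to the repeated action of one fixed transfer matrix on the coefficient tensor. A schematic numpy sketch, with random placeholder entries instead of actual re-expansion coefficients \(c_{\mathbf {k}}(u^\tau _{\mathbf {k}'})\):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
K = 8                                    # size of the index set (illustrative)

# M[k, k'] plays the role of c_k(u_{k'}^tau); here random placeholder data
M = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
c = rng.standard_normal(K) + 1j * rng.standard_normal(K)    # c^{0,tau}

n = 5                                    # number of concatenated TSTG steps
c_n = c.copy()
for _ in range(n):
    c_n = M @ c_n                        # update rule c^{l,tau} = M c^{l-1,tau}
```

Equivalently \(c^{n,\tau }=M^n c^{0,\tau }\), which is why the concatenation requires no numerical integration beyond the first time slice.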
Proposition 4
For \(z_0\in {\mathbb {R}}^{2d}\) and \(C_0\in \mathfrak {S}^+(d)\) let \(\psi _0=g_{z_0}^{C_0,\varepsilon }\) and \(\{z_{\mathbf {k}}\}_{\mathbf {k} \in {\mathcal {K}}}\) be an arbitrary grid in phase space. Then, for all \(n\ge 0\) and \(\tau >0\), there exist positive constants \({\zeta _n^{\varepsilon ,\tau }},\theta _n^\tau >0\) such that for all \(\mathbf {k}\in {\mathcal {K}}\) we have
For the proof of Proposition 4 we first derive an auxiliary result that allows us to bound the representation coefficients \(c_{\mathbf {k}}(u^\tau _{\mathbf {k}'})\) of the time-evolved Gaussian approximant \(u^\tau _{\mathbf {k}'}\), which according to Lemma 2 is a Gaussian in phase space centered at \(z_{\mathbf {k}'}(\tau )\), by a Gaussian envelope centered at \(z_{\mathbf {k}'}=z_{\mathbf {k}'}(0)\).
Lemma 6
Under the assumptions of Proposition 4, for all \(\mathbf {k}'\in {\mathcal {K}}\), there exist positive constants \({\zeta _{\mathbf {k}'}^\tau }>0\) and \(\theta _{\mathbf {k}'}^\tau >0\) such that for all \(\mathbf {k}\in {\mathcal {K}}\) we have
Proof
Let \(\mathbf {k}'\in {\mathcal {K}}\) and \(\tau >0\). The definition of the coefficients \({\mathcal {A}}_{{\mathcal {K}}}u^\tau _{\mathbf {k}'}\) implies
where the nonnegative weights \(w_{\mathbf {k}}\ge 0\) depend on the underlying quadrature rule and therefore, using Lemma 1, we find constants \(\beta _{\mathbf {k}'}^\tau ,\theta _{\mathbf {k}'}^\tau >0\) such that
where \(z_{\mathbf {k}'}(\tau )\in {\mathbb {R}}^{2d}\) is the center of the evolved basis function \(u^\tau _{\mathbf {k}'}\in {\mathcal {M}}\). To bound this Gaussian envelope by a reshifted envelope centered at the original point \(z_{\mathbf {k}'}\) instead of the evolved center \(z_{\mathbf {k}'}(\tau )\), we write the time-evolved grid in terms of the original grid as
and introduce the maximal phase space shift
Using the Cauchy–Schwarz inequality in \({\mathbb {R}}^d\), it then follows that
Hence, if we denote by \(D_{\max }>0\) the maximal distance \(\Vert z_{\mathbf {k}}-z_{\mathbf {k}'}\Vert _2\) between two grid points in phase space and
the bound in (5.2) follows for \({\zeta _{\mathbf {k}'}^\tau } =\beta _{\mathbf {k}'}^\tau \beta ^\tau \). \(\square \)
Proof of Proposition 4
We present a proof by induction on \(n\ge 0\). For \(n=0\), the bound in (5.1) follows from Lemma 6 if we replace \(u^\tau _{\mathbf {k}'}\) by \(\psi _0\). In particular, for this special case, the constants \({\zeta _0^{\varepsilon ,\tau }}\) and \(\theta _0^\tau \) do not depend on either \(\varepsilon \) or \(\tau \) and thus we could also write \({\zeta _0}\) and \(\theta _0\). Now, let \(n\ge 1\) and assume that the bound in (5.1) holds for \(n-1\). The recursion formula (2.4) yields
where the factor \(c^{n-1,\tau }_{\mathbf {k}'}\) can be estimated according to the induction hypothesis and the second factor \(c_{\mathbf {k}}(u^\tau _{\mathbf {k}'})\) according to Lemma 6. This means that we find constants \({\zeta _{n-1}^{\varepsilon ,\tau }},\theta _{n-1}^\tau >0\) and \({\zeta _{\mathbf {k}'}^\tau },\theta _{\mathbf {k}'}^\tau >0\) such that
and therefore we conclude that
where we introduced
as well as the shifted grid points \(\tilde{z}_{\mathbf {k}}:=z_{\mathbf {k}}-z_0\). In “Appendix C” we show that there exists a positive constant \(c>0\), depending on \(\theta _{n-1}^\tau ,\theta ^\tau ,\varepsilon \) and the phase space grid, such that for all one-dimensional components \(j=1,\ldots ,2d\) we have
Consequently, using the definition of the shifted grid \(\tilde{z}_{\mathbf {k}}=z_{\mathbf {k}}-z_0\), we finally get
which proves the bound in (5.1) for
\(\square \)
The last proposition provides a bound for the magnitude of the coefficients \(c^{n,\tau }_{\mathbf {k}}\). Together with the error bound for a single TSTG step in Proposition 3, we are now ready to present the error bound for the concatenation.
Global error estimate for the concatenation
From Proposition 3 we learn that the total error of a single TSTG propagation step can be decomposed into a time component and a spatial component. In particular, the error with respect to time consists of the error for the thawed Gaussian approximation of order \({\mathcal {O}}(\sqrt{\varepsilon })\) and the error for the numerical integration of order \({\mathcal {O}}(h_\tau ^s/\varepsilon )\), whereas the spatial error consists of the error for the approximation of the initial datum \(\psi _0\) in \({\mathcal {V}}_{{\mathcal {K}}}\) and the error for the re-expansion of the time-evolved approximant \(u_{\mathbf {k}}^\tau \) in \({\mathcal {V}}_{{\mathcal {K}}}\). Our final result generalizes this estimate to the concatenation of \(n>1\) TSTG steps:
Theorem 1
Under the hypotheses of Proposition 3, there exists a positive constant \(C>0\) such that the global error of the TSTG propagation method with \(n\ge 1\) concatenated steps at time \(t_n=n\tau \) is given by
where \(E^{n,\tau }>0\) denotes the following bound for the total spatial discretization error:
Remark 8
Recall that \(h_\tau \) is the step size of the numerical integrator for the underlying system of ODEs in Sect. 4, while \(\tau \) is the TSTG step size. In particular, one typically chooses \(h_\tau =\tau /m\) for a positive integer \(m\ge 1\). Moreover, we note that, in order to balance the error in
one obtains the condition \(h_\tau ={\mathcal {O}}(\varepsilon ^{3/(2s)})\), where \(s\ge 1\) is the order of the integrator. In particular, we get \(h_\tau ={\mathcal {O}}(\varepsilon ^{3/2})\) for \(s=1\) and \(h_\tau ={\mathcal {O}}(\varepsilon ^{3/4})\) for \(s=2\), which does not seem to be as efficient as the Gaussian beam method at \(h_\tau ={\mathcal {O}}(\sqrt{\varepsilon })\). However, for the TSTG method, numerical integration only needs to be performed for the time interval \([0,\tau ]\), since the numerical solution at time \(t_n=n\tau \) is obtained by concatenating \(n>1\) TSTG steps without additional numerical integration, only via the computation of the update coefficients \(c^{n,\tau }_{\mathbf {k}}\). Therefore, the total number \(N_{\mathrm{GB}}={\mathcal {O}}(t_n/\sqrt{\varepsilon })\) of time steps for the Gaussian beam method must be compared with \(N_{\mathrm{TSTG}} ={\mathcal {O}}(\tau /\varepsilon ^{3/(2s)})\).
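Up to constants (and the factor \(\tau \) common to both contributions), the balancing behind these exponents can be sketched as:

```latex
\underbrace{\sqrt{\varepsilon}}_{\text{thawed Gaussian error}}
\;\sim\;
\underbrace{h_\tau^{\,s}/\varepsilon}_{\text{integrator error}}
\quad\Longleftrightarrow\quad
h_\tau^{\,s}\sim\varepsilon^{3/2}
\quad\Longleftrightarrow\quad
h_\tau=\mathcal{O}\bigl(\varepsilon^{3/(2s)}\bigr).
```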
Proof
Again, let \(\Vert \bullet \Vert \) denote the \(L^2\)-norm on \(B_q\). For \(n\ge 1\) we define
Using that \(U(\tau )\) is unitary, we obtain the recursion
where the second summand is the local error of the nth step. Hence, the global error \(e_{n,\tau }\) after n steps can be expressed in terms of the local errors as
We note that \(e_{1,\tau }\) is the error after a single propagation step in Proposition 3. Furthermore, for \(1\le l\le n-1\) the definition of the coefficients \(c^{l,\tau }_{\mathbf {k}}\) in (2.4) yields
Consequently, using once more the bounds for \(E_{\mathbf {k}}^\tau \) in (4.8) and for \(E_{wp}(u^\tau _{\mathbf {k}})\) in “Appendix D” with corresponding constants \(c_{\mathbf {k}}\) and \(\tilde{c}_{\mathbf {k}}\), respectively, the bound in (5.3) follows for the constant
\(\square \)
The previous theorem proves that the error for the TSTG propagation increases linearly with the number n of propagation steps, where the corresponding constant depends on the errors introduced by the discretization of the wave packet transform, the thawed Gaussian approximation and the integration of the equations of motion. For the numerical experiments presented in the next section, we examine an error bound based on a direct computation of
for all \(l=1,2,\ldots ,n-1\), using the split-step Fourier method for the propagation of the basis functions. Future research will address the derivation of a practical a posteriori error bound to be used in (5.4) for implementing the TSTG method with adaptive step sizes or adaptive mesh refinements.
Numerical results
We demonstrate the capabilities of the TSTG method with a series of examples. We first examine the discretization of the wave packet transform that is used to decompose the initial wave function and to re-expand the time-evolved basis as described in Sect. 3. Afterwards, we test the method by computing the full wave function of the one-dimensional harmonic oscillator for different propagation times \(\tau \) and step sizes \(h_\tau \). Moreover, we reproduce the numerical results of Kong et al. for a one-dimensional double-well potential. In addition to Kong et al., who used non-variationally evolving Gaussians for the propagation of the basis functions, we also used variational Gaussians to compare both methods.
Remark 9
The following numerical examples support the main result presented in Theorem 1 and show that the estimate in (5.3) is indeed a workable error bound. Our experiments show how the errors depend on the underlying method for propagating the basis functions (variationally vs. non-variationally evolving thawed Gaussians). Since the capabilities of the TSTG method itself have already been presented by Kong et al., we concentrate on one-dimensional numerical experiments for the error analysis. For multidimensional numerical experiments on the TSTG method and comparisons with other methods, we refer to [25, Results].
Approximation of the initial wave function
We present numerical experiments for the approximation of a Gaussian wave function with uniform Riemann sums according to Proposition 1 for
which is later used in §6.3 as the initial wave function for the double-well potential. Figure 1 shows the reconstruction errors in the supremum norm as a function of the number of grid points for different truncation boxes \(B=[-b_q,b_q]\times [-b_p,b_p]\), where we used the same number of grid points for both intervals. For each column of Fig. 1 (the width C of the basis functions is fixed here) we compare the two choices \(\varepsilon =1\) (top) and \(\varepsilon =0.1\) (bottom). All panels show that larger phase space boxes yield a slower decay of the error, which is in accordance with Lemma 3. In particular, the upper two plots show that for the smallest box (solid lines) the truncation error is reached after approximately 64 grid points (plateaus), and we see that the number of grid points needed to achieve a given tolerance increases with decreasing \(\varepsilon \), since a small value of \(\varepsilon \) corresponds to a narrow Gaussian.
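The reconstruction studied here can be sketched in a few lines for the coherent-state width \(C=\mathrm{i}\), using the Riemann-sum approximation \(\psi \approx (2\pi \varepsilon )^{-1}\,\Delta q\,\Delta p\sum _{\mathbf {k}}\langle g_{z_{\mathbf {k}}},\psi \rangle \,g_{z_{\mathbf {k}}}\) in 1D; the grid sizes and the brute-force quadrature for the inner products are illustrative choices, not those of the experiment:

```python
import numpy as np

eps = 1.0
x = np.linspace(-12, 12, 1201)            # fine spatial grid
dx = x[1] - x[0]

def gaussian(q, p):
    """Normalized Gaussian wave packet g_z^{C,eps} with C = i, z = (q, p)."""
    return (np.pi * eps) ** (-0.25) * np.exp(
        -(x - q) ** 2 / (2 * eps) + 1j * p * (x - q) / eps)

psi0 = gaussian(0.0, 0.0)                 # wave function to be reconstructed

# uniform phase space grid on the truncation box [-8, 8] x [-8, 8]
dq = dp = 0.5
qs = np.arange(-8.0, 8.0 + dq / 2, dq)
ps = np.arange(-8.0, 8.0 + dp / 2, dp)

recon = np.zeros_like(psi0)
for q in qs:
    for p in ps:
        g = gaussian(q, p)
        c = np.sum(np.conj(g) * psi0) * dx    # coefficient <g_z, psi0>
        recon += c * g
recon *= dq * dp / (2 * np.pi * eps)

sup_error = np.max(np.abs(recon - psi0))      # supremum-norm reconstruction error
```

For such an oversampled grid the remaining error is dominated by the truncation of the phase space box, in line with the behavior described by Lemma 3.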
Onedimensional harmonic oscillator
In this example we consider the quantum harmonic oscillator, which corresponds to the quadratic potential \(V(x)=x^2/2\). For the initial datum we chose the Gaussian wave packet \(\psi _0=g_{z_0}^{C_0}\) with \(z_0=(1,0)^T, C_0=1i\) and \(\varepsilon \in \{0.1,1\}\). In particular, the analytic solution is known to be, see [18, Theorem 2.5],
where q(t), p(t) and S(t) are given by
The discretization of the wave packet transform was based on the phase space box \(B=[-8,8]\times [-8\pi ,8\pi ]\), where we used 64 grid points in position space, 32 grid points in momentum space and the width parameter \(C=4i\) for the basis functions. The propagation of the basis functions was implemented with the second-order variational splitting integrator in [27, Section 7.5]. Figure 2 shows the \(L^2\)-error between the TSTG method and the analytic solution on the spatial interval \(B_q=[-8,8]\) for \(\varepsilon =1\) and the two choices \(\tau =0.1\) (red) and \(\tau =0.01\) (black). The step size for the time integration was \(h_\tau =1\cdot 10^{-3}\). The dashed lines indicate the error bound of Theorem 1 based on a direct evaluation of the error bounds \(\mathrm{err}_{{\mathcal {K}}}^{l,\tau }\) in (5.4), where we used again the analytic solution to compute the errors \(E_{\mathbf {k}}^\tau \). We added the linear functions \(t\mapsto 2\cdot 10^{-6}t\) (dotted red) and \(t\mapsto 2\cdot 10^{-7}t\) (dotted black) to verify that the error increases linearly with the number of TSTG steps. We note that for \(\tau =0.01\) we need 10 times the number of concatenations compared to \(\tau = 0.1\), and therefore the slopes of the red and black lines differ by a factor of 10. To keep the number of TSTG steps and thus the total error small, the propagation time \(\tau \) should be chosen as large as possible.
Figure 3 shows the \(L^2\)-error for \(\varepsilon =0.1\). Computations were based on 128 grid points in position and momentum space and \(\tau =0.01\) for the two step sizes \(h_\tau =1\cdot 10^{-3}\) and \(h_\tau =1\cdot 10^{-4}\). For the larger choice of \(h_\tau \) (red curve) we see that the error increases faster, which is in accordance with our theoretical result in Proposition 2. For the black curve we can see a periodic pattern (due to the oscillations of the solution) and the linear increase of the error is imperceptible over the time range. We note that the errors in Fig. 3 also show periodic-like oscillations, and the linear increase becomes visible because of the long time range (with respect to \(\varepsilon \)).
Onedimensional doublewell potential
In our last numerical experiment we follow the presentation in [25, Results] by using the one-dimensional double-well potential
together with the initial wave function in (6.1) for \(\varepsilon =1\), which is a model for quantum tunneling. As for the harmonic oscillator potential, we used again the phase space box \(B=[-8,8]\times [-8\pi ,8\pi ]\) with 64 points in position and momentum space and \(C=4i\) for the basis functions. In addition to the variational Gaussians, we implemented the non-variational Gaussians based on the Störmer–Verlet method, see e.g. [19, Chapter I.1.4], which have also been used by Kong et al. For the reference solution we implemented the split-step Fourier method, using 256 points in the range \(B_q=[-8,8]\) with time increment \(\tau =0.01\). The step size \(h_\tau =0.001\) was used for both the variational and the non-variational Gaussian propagation.
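The reference propagator is the split-step Fourier (Strang splitting) scheme. A minimal self-contained 1D sketch, using the harmonic oscillator ground state instead of the double-well setup so that the result can be checked against an exact solution:

```python
import numpy as np

def split_step_fourier(psi, V, x, eps, h, n_steps):
    """Strang splitting: half potential step, full kinetic step (in Fourier
    space), half potential step, for H = -(eps^2/2) d^2/dx^2 + V."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # angular wavenumbers
    expV = np.exp(-0.5j * h * V(x) / eps)          # e^{-i V h / (2 eps)}
    expT = np.exp(-0.5j * h * eps * k ** 2)        # e^{-i eps k^2 h / 2}
    for _ in range(n_steps):
        psi = expV * psi
        psi = np.fft.ifft(expT * np.fft.fft(psi))
        psi = expV * psi
    return psi

# test case: harmonic oscillator ground state (energy eps/2), eps = 1
eps = 1.0
x = np.linspace(-10.0, 10.0, 256, endpoint=False)
V = lambda y: 0.5 * y ** 2
phi0 = np.pi ** (-0.25) * np.exp(-0.5 * x ** 2)
h, n_steps = 1e-3, 1000                            # propagate to t = 1
psi = split_step_fourier(phi0, V, x, eps, h, n_steps)
# exact solution up to splitting and spectral errors: e^{-i t / 2} * phi0
```

The FFT-based splitting is exactly norm-conserving on the grid, and its \(L^2\)-error scales as \({\mathcal {O}}(h^2/\varepsilon )\) in agreement with the discussion of the Strang splitting in §4.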
The upper panels of Fig. 4 show the \(L^2\)-error between the TSTG method and the reference solution for the variational Gaussians (left) and the non-variational Gaussians (right), together with the error bounds of Theorem 1 (dashed lines). The lower panels compare the TSTG method with the reference solution for the so-called survival amplitude (the overlap between \(\psi (x,t)\) and the mirror image of the initial state on the opposite side of the double well), which is defined by
and is a measure for the tunneling amplitude.
The results in Fig. 4 show that the TSTG method accurately reproduces the full wave function and the survival amplitude. The experiments also show that the \(L^2\)-error increases linearly (approximately as \(t\mapsto 10^{-6}t\) for the variational Gaussians), whereas for the non-variational Gaussians the rate is larger (approximately \(t\mapsto 5\cdot 10^{-4}t\)). Furthermore, in Fig. 5 we compare the TSTG method with the reference solution for the energy expectation values (top) and the relative errors (bottom). For better illustration we only plotted the time range of the last 4000 of a total of 16,000 TSTG propagation steps. We can see that the expectation values of the reference solution are very well approximated even after very long running times. In particular, the slopes of the blue lines in the lower panel show that the error for the non-variational Gaussians (upper curves) increases faster.
Conclusion and outlook
In the previous sections we derived a workable error bound for the time-sliced thawed Gaussian propagation method. The method combines the discretization of the wave packet transform with thawed Gaussian approximations for the propagation of the basis functions. To provide a mathematical formulation of the TSTG method, we introduced the quadrature-based analysis, synthesis and reinitialization operators \({\mathcal {A}}_{{\mathcal {K}}}, {\mathcal {S}}_{{\mathcal {K}}}\) and \({\mathcal {R}}^\tau _{{\mathcal {K}}}\), which allow us to write the approximate solution at time \(t_n=n\tau \) as
The algorithm has been implemented in MATLAB to underpin our theoretical results and to show that the global error of the method increases linearly with the number n of time steps, regardless of the thawed Gaussian method (variational or non-variational) and the order of the time integrator used. In the multidimensional setup the method could be improved to a certain extent by using different quadrature rules for the discretization of the wave packet transform. To make the method applicable especially to high-dimensional systems, the curse of dimensionality must be overcome, and the detailed mathematical formulation presented in this paper provides the theoretical foundation for combining the method with tensor-train (TT) techniques, which we plan to explore in future research.
References
Bergold, P., Lasser, C.: The Gaussian Wave Packet Transform via Quadrature Rules (2020). arXiv:2010.03478
Blanes, S., Gradinaru, V.: High order efficient splittings for the semiclassical time-dependent Schrödinger equation. J. Comput. Phys. 405, 109157 (2020)
Chiani, M., Dardari, D., Simon, M.K.: New exponential bounds and approximations for the computation of error probability in fading channels. IEEE Trans. Wirel. Commun. 2(4), 840–845 (2003)
Coalson, R.D., Karplus, M.: Multidimensional variational Gaussian wave packet dynamics with application to photodissociation spectroscopy. J. Chem. Phys. 93(6), 3919–3930 (1990)
Combescure, M., Robert, D.: Coherent States and Applications in Mathematical Physics, 2nd edn. Theoretical and Mathematical Physics. Springer Cham (2012)
Davis, P.J., Rabinowitz, P.: Methods of Numerical Integration, 2nd edn. Academic Press, Cambridge (2007)
Descombes, S., Thalhammer, M.: An exact local error representation of exponential operator splitting methods for evolutionary problems and applications to linear Schrödinger equations in the semiclassical regime. BIT Numer. Math. 50, 729–749 (2010)
Faou, E., Gradinaru, V., Lubich, C.: Computing semiclassical quantum dynamics with Hagedorn wavepackets. SIAM J. Sci. Comput. 31(4), 3027–3041 (2009)
Faou, E., Lubich, C.: A Poisson integrator for Gaussian wavepacket dynamics. Comput. Vis. Sci. 9(2), 45–55 (2006)
Feichtinger, H.G., Strohmer, T.: Gabor Analysis and Algorithms: Theory and Applications. Applied and Numerical Harmonic Analysis. Springer, Berlin (1998)
Folland, G.B.: Harmonic Analysis in Phase Space. Annals of Mathematics Studies. Princeton University Press, Princeton (1989)
Fornberg, B., Flyer, N.: Solving PDEs with radial basis functions. Acta Numer. 24, 215–258 (2015)
Fornberg, B., Larsson, E., Flyer, N.: Stable computations with Gaussian radial basis functions. SIAM J. Sci. Comput. 33(2), 869–892 (2011)
Gerstner, T., Griebel, M.: Numerical integration using sparse grids. Numer. Algorithms 18(3), 209–232 (1998)
Gradinaru, V., Hagedorn, G.A.: Convergence of a semiclassical wavepacket based time-splitting for the Schrödinger equation. Numer. Math. 126(1), 53–73 (2014)
Gröchenig, K.: Foundations of time-frequency analysis. In: Applied and Numerical Harmonic Analysis. Springer (2001)
Hagedorn, G.A.: Semiclassical quantum mechanics. I. The \(\hbar \rightarrow 0\) limit for coherent states. Commun. Math. Phys. 71(1), 77–93 (1980)
Hagedorn, G.A.: Raising and lowering operators for semiclassical wave packets. Ann. Phys. 269(1), 77–104 (1998)
Hairer, E., Lubich, C., Wanner, G.: Geometric numerical integration: structurepreserving algorithms for ordinary differential equations. In: Springer Series in Computational Mathematics, 2nd edn. Springer, Berlin (2006)
Heller, E.J.: Time-dependent approach to semiclassical dynamics. J. Chem. Phys. 62(4), 1544–1555 (1975)
Heller, E.J.: Time dependent variational approach to semiclassical dynamics. J. Chem. Phys. 64(1), 63–73 (1976)
Heller, E.J.: Frozen Gaussians: a very simple semiclassical approximation. J. Chem. Phys. 75(6), 2923–2931 (1981)
Herman, M.F., Kluk, E.: A semiclassical justification for the use of nonspreading wavepackets in dynamics calculations. Chem. Phys. 91(1), 27–34 (1984)
Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. Other Titles in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM) (2002)
Kong, X., Markmann, A., Batista, V.S.: Time-sliced thawed Gaussian propagation method for simulations of quantum dynamics. J. Phys. Chem. A 120(19), 3260–3269 (2016)
Kormann, K., Lasser, C., Yurova, A.: Stable interpolation with isotropic and anisotropic Gaussians using Hermite generating function. SIAM J. Sci. Comput. 41(6), A3839–A3859 (2019)
Lasser, C., Lubich, C.: Computing quantum dynamics in the semiclassical regime. Acta Numer 29, 229–401 (2020)
Lasser, C., Sattlegger, D.: Discretising the Herman–Kluk Propagator. Numer. Math. 137(1), 119–157 (2017)
Leung, S., Qian, J.: Eulerian Gaussian beams for Schrödinger equations in the semiclassical regime. J. Comput. Phys. 228(8), 2951–2977 (2009)
Liu, H., Runborg, O., Tanushev, N.M.: Error estimates for Gaussian beam superpositions. Math. Comput. 82(282), 919–952 (2013)
Lubich, C.: From quantum to classical molecular dynamics: reduced models and numerical analysis. In: Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS) (2008)
Martinez, A.: An introduction to semiclassical and microlocal analysis. In: Universitext. Springer, New York (2002)
McLachlan, R.I., Quispel, G.R.W.: Splitting methods. Acta Numer. 11, 341–434 (2002)
Meyer, H.D., Manthe, U., Cederbaum, L.: The multi-configurational time-dependent Hartree approach. Chem. Phys. Lett. 165(1), 73–78 (1990)
Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)
Oseledets, I.V., Tyrtyshnikov, E.E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31(5), 3744–3759 (2009)
Siegel, C.L.: Einführung in die Theorie der Modulfunktionen \(n\)-ten Grades. Math. Ann. 116(1), 617–657 (1939)
Swart, T.C.: Initial Value Representations. Dissertation, Freie Universität Berlin (2008)
Worth, G.A., Robb, M.A., Burghardt, I.: A novel algorithm for nonadiabatic direct dynamics using variational Gaussian wavepackets. Faraday Discuss. 127, 307–323 (2004)
Zheng, C.: Optimal error estimates for first-order Gaussian beam approximations to the Schrödinger equation. SIAM J. Numer. Anal. 52(6), 2905–2930 (2014)
Acknowledgements
Fruitful discussions with Victor S. Batista and Micheline B. Soley are gratefully acknowledged.
Appendices
Appendix A: Analysis and synthesis operator
Lemma 7
For \(x,y\in {\mathbb {C}}^{{\mathcal {K}}}\) and a positive weight \(w\in {\mathbb {R}}^{{\mathcal {K}}},\,w_{\mathbf {k}}>0\) for all \(\mathbf {k}\in {\mathcal {K}}\), we define the weighted inner product
Moreover, for a given phase space grid \(\{z_{\mathbf {k}}\}_{\mathbf {k}\in {\mathcal {K}}}\) let \({\mathcal {A}}_{{\mathcal {K}}}\) be the operator that maps a square-integrable function \(\psi \in L^2({\mathbb {R}}^d)\) to the coefficient tensor \((c_{\mathbf {k}}(\psi ))\), and let \({\mathcal {S}}_{{\mathcal {K}}}\) be the corresponding synthesis operator. Then,
Proof
Let \(c\in {\mathbb {C}}^{{\mathcal {K}}}\) and \(\psi \in L^2({\mathbb {R}}^d)\). By definition of the synthesis operator we have
\(\square \)
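The content of Lemma 7 can be illustrated by a minimal one-dimensional sketch (hypothetical code, not part of the paper). It assumes the analysis coefficients are the inner products \(c_{\mathbf {k}}(\psi )=\langle g_{z_{\mathbf {k}}},\psi \rangle \) and that synthesis forms the weighted superposition \(\sum _{\mathbf {k}} w_{\mathbf {k}} c_{\mathbf {k}} g_{z_{\mathbf {k}}}\); under these assumptions the adjoint relation \(\langle {\mathcal {S}}_{{\mathcal {K}}}c,\psi \rangle _{L^2}=\langle c,{\mathcal {A}}_{{\mathcal {K}}}\psi \rangle _w\) holds exactly, which the script checks numerically:

```python
import numpy as np

eps = 0.1                              # semiclassical parameter
x = np.linspace(-8.0, 8.0, 4001)       # fine spatial grid for L^2 inner products
dx = x[1] - x[0]

def gauss(q, p):
    """Normalized 1D Gaussian wave packet centred at the phase-space point z = (q, p)."""
    return (np.pi * eps) ** -0.25 * np.exp(-(x - q) ** 2 / (2 * eps)
                                           + 1j * p * (x - q) / eps)

def l2(f, g):
    """L^2 inner product, antilinear in the first argument."""
    return np.sum(np.conj(f) * g) * dx

# phase-space grid {z_k} with uniform quadrature weights w_k = (cell area)/(2*pi*eps)
qs, ps = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
zs = list(zip(qs.ravel(), ps.ravel()))
w = np.full(len(zs), 0.5 * 0.5 / (2 * np.pi * eps))

def analysis(psi):
    """A_K: map psi to the coefficient vector (<g_{z_k}, psi>)_k."""
    return np.array([l2(gauss(*z), psi) for z in zs])

def synthesis(c):
    """S_K: map a coefficient vector to the superposition sum_k w_k c_k g_{z_k}."""
    return sum(wk * ck * gauss(*z) for wk, ck, z in zip(w, c, zs))

# adjointness: <S_K c, psi>_{L^2} equals <c, A_K psi>_w
rng = np.random.default_rng(0)
c = rng.normal(size=len(zs)) + 1j * rng.normal(size=len(zs))
psi = gauss(0.3, -0.5)
lhs = l2(synthesis(c), psi)
rhs = np.sum(w * np.conj(c) * analysis(psi))
print(abs(lhs - rhs))   # agrees to machine precision
```

Note that the adjoint identity holds for any positive weights: both sides reduce to the same weighted sum of inner products, so the check is independent of the particular quadrature rule.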
Appendix B: Inner products of Gaussians
Proof of Lemma 1
The product of the functions \(\overline{g_{z_1}^{C_1,\varepsilon }}\) and \(g_{z_2}^{C_2,\varepsilon }\) is a Gaussian. To obtain an explicit representation, we rewrite the sum of the exponents
as a quadratic function
where a short calculation shows that \(B\in {\mathbb {C}}^{d\times d},\,b\in {\mathbb {C}}^d\) and \(c\in {\mathbb {C}}\) are given by
In particular, since \(\mathrm{Im}(-\bar{C}_1)=\mathrm{Im}(C_1)\) is positive definite and the sum of two real positive definite matrices is again positive definite, we conclude that B is an element of the Siegel space \(\mathfrak {S}^+(d)\). This yields the following representation for all \(x\in {\mathbb {R}}^d\):
where the positive constant \(\alpha >0\) is given by
Therefore, we conclude that
where a formula for multivariate Gaussian integrals (see e.g. [11, “Appendix A”, Theorem 1]) yields
We note that the branch of the square root is determined by the requirement
if \(-iB\) is real and positive definite. Moreover, using the formulas in Eq. (B.1), we obtain the following representation:
In the last line we have two Gaussians: one with respect to the difference \(p_2-p_1\) with width matrix \(B^{-1}\) and one for \(q_2-q_1\) with width matrix \(\bar{C}_1-\bar{C}_1B^{-1}\bar{C}_1\). In particular, the Woodbury matrix identity, see e.g. [24, Page 258], yields
Hence, since \(Z\in \mathfrak {S}^+(d)\) implies \(-Z^{-1}\in \mathfrak {S}^+(d)\) (see e.g. [11, Theorem 4.64]), we conclude that both width matrices
are in \(\mathfrak {S}^+(d)\) and therefore we conclude that the block diagonal matrix M in (3.3) is an element of \(\mathfrak {S}^+(2d)\). Putting together the above calculations we arrive at (3.2).
To prove the bound in (3.4), we follow the idea of [38, 11.4 Lemma] and assume that the eigenvalues of \(\mathrm{Im}(C_k)\) and \(\mathrm{Im}(-C_k^{-1})\) are bounded from below by \(\theta >0\) and from above by \(\Theta >0\). Furthermore, let us introduce the real-valued Gaussian function
Then, for all \(x\in {\mathbb {R}}^d\), the spectral bounds imply that
and therefore we obtain the following bound:
where the last equality follows by the formula in (3.2). Furthermore, combining Plancherel’s theorem for the \(\varepsilon \)-rescaled Fourier transform \({\mathcal {F}}_\varepsilon :L^2({\mathbb {R}}^d)\rightarrow L^2({\mathbb {R}}^d)\), defined for all \(p\in {\mathbb {R}}^d\) by
with a formula for the Fourier transform \({\mathcal {F}}_\varepsilon g_{z_k}^{C_k,\varepsilon }\) implies
Consequently, combining the bounds in (B.2) and (B.3) proves (3.4) for
\(\square \)
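As a numerical sanity check on Lemma 1, one can compare a quadrature evaluation of the overlap with a closed form. The sketch below (hypothetical code) treats the special case \(d=1\) with \(C_1=C_2=i\), i.e. standard coherent states, for which the well-known overlap magnitude is \(|\langle g_{z_1},g_{z_2}\rangle |=\exp (-|z_1-z_2|^2/(4\varepsilon ))\); this is only a special instance of the general formula (3.2), not the formula itself:

```python
import numpy as np

eps = 0.1
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]

def gauss(q, p):
    """Gaussian wave packet with width matrix C = i (d = 1)."""
    return (np.pi * eps) ** -0.25 * np.exp(-(x - q) ** 2 / (2 * eps)
                                           + 1j * p * (x - q) / eps)

def overlap(z1, z2):
    """<g_{z1}, g_{z2}> by quadrature on the fine grid."""
    return np.sum(np.conj(gauss(*z1)) * gauss(*z2)) * dx

z1, z2 = (0.2, -0.4), (0.9, 0.3)
num = abs(overlap(z1, z2))
# coherent-state overlap for C1 = C2 = i: |<g_{z1}, g_{z2}>| = exp(-|z1-z2|^2/(4*eps))
ref = np.exp(-((z1[0] - z2[0]) ** 2 + (z1[1] - z2[1]) ** 2) / (4 * eps))
print(num, ref)   # both ~ exp(-2.45) ≈ 0.0863
```

The Gaussian decay in \(|z_1-z_2|\) visible here is exactly the mechanism behind the off-diagonal bound (3.4).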
Appendix C: Discrete Gaussian convolution
Lemma 8
For \(\sigma >0\) consider the one-dimensional Gaussian function
For arbitrary grid points \(t_1<t_2<\cdots <t_N\) let
Then, for all \(\sigma _1,\sigma _2>0\), there exists a constant \(c>0\) such that for all \(s\in {\mathbb {R}}\) we have
where c depends on \(\sigma _1,\sigma _2\) and h, but not on N.
Proof
Let \(s\in {\mathbb {R}}\). A short calculation shows that
where we introduced the parameters
Consequently, the sum in (C.2) can be written as
where the sum on the right-hand side can be bounded independently of \(s'\) as
where the minimal distance h between consecutive grid points is defined in (C.1). In particular, since the last sum can be viewed as a Riemann sum approximation to the integral
we conclude that there exists a positive constant \(c>0\), depending on \(\sigma _3\) and h, such that
which completes the proof. \(\square \)
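The uniform bound of Lemma 8 can be observed numerically. The sketch below (hypothetical code) uses a uniform grid of spacing h, writes the product of the two Gaussians as an amplitude times a Gaussian of width \(\sigma _3=\sigma _1\sigma _2/\sqrt{\sigma _1^2+\sigma _2^2}\), and compares the worst case over s against the explicit Riemann-sum constant \(c=1+\sqrt{2\pi }\,\sigma _3/h\), which is one admissible choice of constant, not necessarily the one produced by the proof:

```python
import numpy as np

def gauss(t, sigma):
    """G_sigma(t) = exp(-t^2 / (2 sigma^2))."""
    return np.exp(-t ** 2 / (2 * sigma ** 2))

h, s1, s2 = 0.1, 0.7, 0.4                    # grid spacing and the two widths
s3 = s1 * s2 / np.hypot(s1, s2)              # width of the product Gaussian
bound = 1 + np.sqrt(2 * np.pi) * s3 / h      # Riemann-sum constant, independent of N

worst = {}
for N in (100, 1000, 100000):
    t = h * (np.arange(N) - N // 2)          # N grid points with minimal spacing h
    worst[N] = max(np.sum(gauss(s - t, s1) * gauss(t, s2))
                   for s in np.linspace(-3, 3, 61))
print(worst, bound)   # the worst-case sum stays below the bound for every N
```

Increasing N only adds points in the Gaussian tails, so the computed maxima are essentially identical for all three grid sizes, illustrating the N-independence claimed in the lemma.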
Appendix D: Reconstruction error
In the following, we prove that for all \(\mathbf {k}\in {\mathcal {K}}\) there exists a positive constant \(\tilde{c}_{\mathbf {k}}>0\) such that
To this end, let us fix \(\mathbf {k}\in {\mathcal {K}}\) and decompose the Gaussian \(u^\tau _{\mathbf {k}}\), which is the numerical approximation to the time-evolved Gaussian basis function \(g_{\mathbf {k}}(\tau )\) after a short TSTG propagation time \(\tau >0\), as follows:
where \(g_{\mathbf {k},0}\) is the initial basis function and \(u_{\mathbf {k}}(\tau )\in {\mathcal {M}}\) the approximation in the manifold of complex Gaussians. Using that \(g_{\mathbf {k},0}\) is an element of \({\mathcal {V}}_{{\mathcal {K}}}\), the definition of the discretization error in (3.7) yields that \(E_{wp}(g_{\mathbf {k},0})=0\) and therefore
Hence, it suffices to show that \(E_{wp}(R^j_\mathbf {k}(\tau ))={\mathcal {O}}(\tau )\). Firstly, we see that
where \(\Vert {\mathcal {A}}_{{\mathcal {K}}}^*{\mathcal {A}}_{{\mathcal {K}}}\Vert \) denotes the operator norm of the linear operator \({\mathcal {A}}_{{\mathcal {K}}}^*{\mathcal {A}}_{{\mathcal {K}}}\). In particular, a short calculation shows that
where \(w_{\mathbf {k}}\ge 0\) are the weights of the underlying quadrature rule. Hence, combining (D.2) with the bound for \(\Vert u^\tau _{\mathbf {k}}-u_{\mathbf {k}}(\tau )\Vert \) in (4.6), we conclude that
Finally, using that \(u_{\mathbf {k}}(t)\in {\mathcal {M}}\) is the exact solution to
where \(U_{q_{\mathbf {k}}}\) denotes the second-order Taylor polynomial of V at \(q_{\mathbf {k}}\), we conclude that \(\Vert u_{\mathbf {k}}(\tau )-g_{\mathbf {k},0}\Vert \) can be estimated in terms of the time-dependent Hamiltonian
We have
and a short calculation shows that
Hence, using an estimate for moments of Gaussians, see e.g. [27, Lemma 3.8], we obtain
and similarly
Altogether,
with \(\rho _{\mathbf {k}}(\tau )={\mathcal {O}}(\sqrt{\varepsilon })\) uniformly in \(\tau \). This shows that
and therefore the bound in (D.1) follows for
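For orientation, the exact solution \(u_{\mathbf {k}}(t)\) of the Schrödinger equation with the quadratic Taylor potential stays Gaussian, and its parameters obey Heller's thawed-Gaussian equations of motion. The sketch below (hypothetical code; the phase and normalization equation is omitted) integrates the \(d=1\) equations \(\dot{q}=p\), \(\dot{p}=-V'(q)\), \(\dot{C}=-C^2-V''(q)\) with a classical Runge–Kutta scheme for the harmonic oscillator, where the exact \((q,p)\)-flow is a rotation and \(C=i\) is a fixed point of the Riccati equation:

```python
import numpy as np

def rhs(y, dV, d2V):
    """Thawed-Gaussian parameter ODEs (Heller) in d = 1:
    q' = p,  p' = -V'(q),  C' = -C^2 - V''(q)."""
    q, p, C = y
    return np.array([p, -dV(q), -C ** 2 - d2V(q)], dtype=complex)

def rk4(y, dt, dV, d2V):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(y, dV, d2V)
    k2 = rhs(y + 0.5 * dt * k1, dV, d2V)
    k3 = rhs(y + 0.5 * dt * k2, dV, d2V)
    k4 = rhs(y + dt * k3, dV, d2V)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# harmonic oscillator V(x) = x^2/2: the exact (q, p)-flow is a rotation and
# C(0) = i is a stationary point of the width equation (coherent state)
dV, d2V = (lambda q: q), (lambda q: 1.0)
y = np.array([1.0, 0.0, 1j])
n = 200
dt = (np.pi / 2) / n
for _ in range(n):
    y = rk4(y, dt, dV, d2V)
q, p, C = y
print(q, p, C)   # quarter period: q ≈ 0, p ≈ -1, C = i
```

In the TSTG setting this local propagation is only accurate over a short time slice \(\tau \); the reconstruction step analyzed above then re-expands the propagated Gaussian in the frame, which is where the \({\mathcal {O}}(\tau )\) error of (D.1) enters.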
Cite this article
Bergold, P., Lasser, C. An error bound for the timesliced thawed Gaussian propagation method. Numer. Math. 152, 511–551 (2022). https://doi.org/10.1007/s00211022013197
Mathematics Subject Classification
 42A38
 65D32
 65P10
 65Z05
 81Q20