Abstract
Quantum computers are predicted to outperform classical ones for solving partial differential equations, perhaps exponentially. Here we consider a prototypical PDE—the heat equation in a rectangular region—and compare in detail the complexities of ten classical and quantum algorithms for solving it, in the sense of approximately computing the amount of heat in a given region. We find that, for spatial dimension \(d \ge 2\), there is an at most quadratic quantum speedup in terms of the allowable error \(\epsilon \) using an approach based on applying amplitude estimation to an accelerated classical random walk. However, an alternative approach based on a quantum algorithm for linear equations is never faster than the best classical algorithms.
1 Introduction
Quantum computers are predicted to solve certain problems substantially more efficiently than their classical counterparts. One area where quantum algorithms could significantly outperform classical ones is the approximate solution of partial differential equations (PDEs). This prospect is both exciting and plausible: exciting because of the ubiquity of PDEs in many fields of science and engineering, and plausible because some of the leading classical approaches to solving PDEs (e.g. via the finite difference or finite element methods) are based on discretising the PDE and reducing the problem to solving a system of linear equations. There are quantum algorithms that solve linear equations exponentially faster than classical algorithms (in a certain sense), via approaches that stem from the algorithm of Harrow, Hassidim and Lloyd (HHL) [1], so these algorithms could be applied to PDEs. There have been a succession of papers in this area which have developed new quantum algorithmic techniques [2,3,4,5,6,7,8,9,10] and applied quantum algorithms to particular problems [3, 11,12,13,14].
However, in order to determine if a genuine quantum speedup can be obtained, it is essential to take into account all complexity parameters, and to compare against the best classical algorithms. The quantum algorithm should be given the same task as the classical algorithm—to produce a classical solution to a classical problem, up to a certain level of accuracy—rather than (for example) being asked to produce a quantum superposition corresponding to the solution. This can sometimes lead to apparently exponential speedups being reduced substantially. For example, it was suggested that quantum algorithms for the finite element method could solve electromagnetic scattering cross-section problems exponentially more efficiently than classical algorithms [3], but it was later argued that the speedup can be at most polynomial [15] (in fixed spatial dimension). The true extent of the achievable speedup (or otherwise) by quantum algorithms for PDEs over their classical counterparts remains to be seen.
Here we aim to fix a benchmark problem to enable us to compare the complexities of classical and quantum algorithms for solving PDEs. The analysis of [15], for example, was not specific to a particular problem, and also focused only on the finite element method; here, by contrast, we aim to choose a specific problem and pin down whether quantum algorithms of various forms can solve it more quickly than standard classical algorithms. We will consider the heat equation, which has a number of desirable features in this context: it is a canonical problem which has been studied extensively; it has wide applications in many fields of science, such as describing particle diffusion in physics [16], modelling option pricing in finance [17], and serving as the theoretical foundation of the scale-space technique in image analysis [18]; and there are many methods known for solving it.
1.1 Our Results
We compare the complexity of five classical methods and five quantum methods for solving the heat equation:
$$\begin{aligned} \frac{\partial u}{\partial t} = \alpha \left( \frac{\partial ^2 u}{\partial x_1^2} + \dots + \frac{\partial ^2 u}{\partial x_d^2} \right) \end{aligned}$$ (1)
for some \(\alpha > 0\), in d spatial dimensions. We consider the hypercubic spatial region \(x_i \in [0,L]\) and the time region \(t \in [0,T]\), and let \(R = [0,L]^d \times [0,T]\). We use periodic boundary conditions for each \(x_i\), but not t. We fix the initial condition \(u(x_1,\dots ,x_d,0) = u_0(x_1,\dots ,x_d)\) for some “simple” function \(u_0:\mathbb {R}^d \rightarrow \mathbb {R}^{\ge 0}\) that is known in advance. We henceforth use boldface to denote vectors, and in particular let \(\mathbf {x}\) denote the vector \((x_1,\dots ,x_d)\). To get some intuition for “reasonable” relationships between some of the parameters, note that \(T \gg L^2/\alpha \) is a typical timescale for the distribution of heat to approach the uniform distribution.
Complexity model. We measure the complexity of classical algorithms in terms of the number of elementary operations executed, where we assume that an elementary arithmetic operation (addition or multiplication) on real numbers can be performed in constant time. This is a generous measure of complexity, but allows us to compute and compare the complexity of the classical algorithms we consider in a fairly straightforward way. We also assume that a classical algorithm can generate a random real number in the range [0, 1] in constant time. Although this is again a debatable assumption, it simplifies the analysis and is common in the literature on sampling from probability distributions and randomised algorithms. We measure the complexity of quantum algorithms by the number of gates used. In our bounds, we aim to compare the complexity of classical and quantum techniques for solving (1), while avoiding a dependence on the complexity of \(u_0\). Therefore, we assume that \(u_0(x_1,\dots ,x_d)\) can be computed exactly at no cost for all \(x_1,\dots ,x_d\), and further that \(\int _{S} u_0(x_1,\dots ,x_d) dx_1 \dots d{x_d}\) and \(\int _{S} u_0^2(x_1,\dots ,x_d) dx_1 \dots d{x_d}\) can be computed exactly at no cost for all regions S. Below, we will extend this assumption to being able to compute sums of simple functions of \(u_0(x_1,\dots ,x_d)\) values over discretised regions. (Note that all of the classical and quantum algorithms we consider have some requirement for an assumption of this form, so we are not giving one type of algorithm an unfair advantage over the other.)
We will additionally assume that, for all \(i,j \in \{1,\dots ,d\}\) and some smoothness bound \(\zeta \) of dimension (length)\(^{-4}\) if u is dimensionless,
The denominators in these bounds are chosen to be appropriate based on dimensional analysis considerations. Indeed, if one has a bound only on the 4th derivative and on u itself, this is sufficient to obtain similar scaling for the second and first derivative bounds [19].
There are many interpretations one could consider of what it means to “solve” the heat equation. Here we focus on solving the following problem: given \(\epsilon \in (0,1)\), a fixed \(t \in [0,T]\), and a subset \(S \subseteq [0,L]^d\), output \(\widetilde{H}\) such that
$$\begin{aligned} \left| \widetilde{H} - \int _S u(\mathbf {x},t) \, d\mathbf {x} \right| \le \epsilon \end{aligned}$$ (5)
with probability at least 0.99. That is, for a given time, and a given spatial region, we aim to approximate the total amount of heat within that region. The complexity of solving the heat equation depends on the desired accuracy \(\epsilon \) as well as all of the other parameters. We usually imagine that these other parameters are fixed first, and then consider the scaling of the complexity with respect to \(\epsilon \). This is not the only scaling parameter one could consider: for example, one could adjust the smoothness of the function and the complexity of the region being considered. However, focusing on accuracy enables us to compare the algorithms that we study in a unified way, in terms of a natural figure of merit. In the detailed bounds that we compute, we also include the dependence of the algorithmic complexity on other relevant quantities, such as smoothness.
One reason for considering the total heat in a region (5) is that it allows us to consider classical deterministic methods (which compute u for all—discretised—x and t), classical probabilistic methods and quantum methods in a unified way; all these methods allow one to compute (5). We remark that the classical deterministic literature frequently considers solving the heat equation to correspond to writing down the above quantity for a family of subsets S that partitions the whole space \([0,L]^d\). Our problem can be seen as a natural and mathematically rigorous way of solving a discretised version of this question, in a way that enables the possibility of comparison to quantum methods (where one is not typically able to compute u for all x and t). One application it allows is the computation of heat flow into or out of a region.
All of the algorithms we studied were based on the standard approach of discretising Eq. (1) via the finite difference method, leading to a system of linear equations. Specifically, we used the simple “forward time, central space” (FTCS) method with a uniform rectangular grid (footnote 1). We evaluated the following classical algorithms:
- Solving the corresponding system of linear equations using the conjugate gradient method.
- Iterating forward in time from the initial condition.
- Using the Fast Fourier Transform to solve the linear system.
- A random walk method based on the connection between the heat equation and random walk on a grid [20,21,22].
- An accelerated version of the random walk method, using efficient sampling from the binomial distribution (footnote 2).
We also evaluated the following quantum algorithms:
- Solving the linear system using the fastest quantum algorithms for solving linear equations [25].
- Diagonalising the linear system using the quantum Fourier transform and postselection.
- Applying amplitude estimation [28] to the classical random walk on a grid.
- Applying amplitude estimation to the fast classical random walk algorithm.
These methods vary in their flexibility. For example, the quantum and classical linear equations methods can be applied to much more general boundary conditions and spatial domains than those considered here (and to other PDEs), whereas the Fast Fourier Transform and coherent diagonalisation methods are only immediately applicable to solving the heat equation in a simple region.
There are still more solution methods that could be considered (e.g. the use of different discretisation techniques). One example is solving the heat equation by expressing it as a system of ODEs, by discretising only the right-hand side of (1). A high-precision quantum algorithm for systems of ODEs was given in [5]. However, applying it to the heat equation seems to give a complexity somewhat worse than solving the fully discretised system of linear equations using a quantum algorithm (see “Appendix A”). One can also solve the heat equation in the specific case of a hyperrectangular region by using the known explicit solution in terms of Fourier series. This requires computing integrals dependent on the initial condition \(u_0\), but for certain initial conditions, it may be more efficient (or even give an exact solution).
Our results are summarised in Table 1, where we display runtimes in terms of \(\epsilon \) alone, although we compute the complexity of the various algorithms in terms of the other parameters in detail below. The key points are as follows:
- For \(d =1\), the quantum methods are all outperformed by the classical Fast Fourier Transform method. For \(d \ge 2\), the fastest method is the quantum algorithm based on applying amplitude estimation to a “fast” classical random walk. For arbitrary d, the largest quantum speedup using this method is from \(\widetilde{O}(\epsilon ^{-2})\) to \(\widetilde{O}(\epsilon ^{-1})\).
- The Fast Fourier Transform and fast random walk amplitude estimation algorithms are specific to a rectangular region. Considering algorithms that could also be applied to more general regions, the fastest classical method for \(d \le 3\) is iterating the initial condition forward in time. This outperforms all quantum methods in \(d=1\), performs roughly as well as (standard) random walk amplitude estimation in \(d=2\), and is outperformed by random walk amplitude estimation for \(d \ge 3\).
- The quantum linear equation solving method is always outperformed by other quantum methods. In particular, it does not achieve an exponential speedup over classical methods, as one might have hoped. However, note that it provides more flexibility in terms of estimating other quantities, and allowing for more general boundary conditions, than the most efficient classical methods.
- Among the space-efficient methods—those which use space polylogarithmic in \(1/\epsilon \)—there is a quantum speedup in all dimensions (from \(\widetilde{O}(\epsilon ^{-2})\) to \(\widetilde{O}(\epsilon ^{-1})\)), because this criterion rules out the classical Fast Fourier Transform method.
These bounds do not assume the use of a preconditioner to improve the condition number of the relevant linear system. If a perfect preconditioner were available, then the complexity of the quantum linear equation solving method would be reduced to be comparable with that of the diagonalisation method, but would still not be competitive with other methods.
We conclude that, if our results for the heat equation are representative of the situation for more general PDEs, it is unclear whether quantum algorithms will offer a super-polynomial advantage over their classical counterparts for solving PDEs, but polynomial speedups may be available.
In the remainder of this work, we prove the results corresponding to the complexities reported in Table 1. We begin by describing the discretisation and numerical integration approach used, before going on to describe and determine the complexity of the various algorithms. To achieve this, we need to obtain several technical bounds (e.g. on the condition number of the relevant linear system; on the \(\ell _2\) norm of a solution to the heat equation; and on the complexity of approximating the heat in a region from a quantum state corresponding to a solution to the heat equation). We aim for a self-contained presentation wherever possible, rather than referring to results in the extensive literature on numerical solutions of PDEs; see [29,30,31] for further details.
2 Technical Ingredients
In this section we will discuss the key ingredients that are required for quantum and classical algorithms to solve the heat equation.
2.1 Discretisation
All of the algorithms that we will consider are based on discretising the PDE (1). Here we will consider the simplest method of discretisation, known as the forward-time, central-space (FTCS) method. This method is based on discretising using the following equalities (for one variable), which can be proved using Taylor’s theorem with remainder:
$$\begin{aligned} u(x,t+h) = u(x,t) + h \frac{\partial u}{\partial t}(x,t) + \frac{h^2}{2} \frac{\partial ^2 u}{\partial t^2}(x,\xi ) \end{aligned}$$ (6)

$$\begin{aligned} u(x+h,t)&= u(x,t) + h \frac{\partial u}{\partial x}(x,t) + \frac{h^2}{2} \frac{\partial ^2 u}{\partial x^2}(x,t) + \frac{h^3}{6} \frac{\partial ^3 u}{\partial x^3}(x,t) + \frac{h^4}{24} \frac{\partial ^4 u}{\partial x^4}(\xi ',t) \\ u(x-h,t)&= u(x,t) - h \frac{\partial u}{\partial x}(x,t) + \frac{h^2}{2} \frac{\partial ^2 u}{\partial x^2}(x,t) - \frac{h^3}{6} \frac{\partial ^3 u}{\partial x^3}(x,t) + \frac{h^4}{24} \frac{\partial ^4 u}{\partial x^4}(\xi '',t) \end{aligned}$$ (7)
where we assume that u is 4 times differentiable, and \(\xi \in [t,t+h]\), \(\xi ' \in [x,x+h]\), \(\xi '' \in [x-h,x]\). So
$$\begin{aligned} \frac{\partial u}{\partial t}(x,t) = \frac{u(x,t+h) - u(x,t)}{h} - \frac{h}{2} \frac{\partial ^2 u}{\partial t^2}(x,\xi ) \end{aligned}$$ (8)

$$\begin{aligned} \frac{\partial ^2 u}{\partial x^2}(x,t) = \frac{u(x+h,t) - 2u(x,t) + u(x-h,t)}{h^2} - \frac{h^2}{24} \left( \frac{\partial ^4 u}{\partial x^4}(\xi ',t) + \frac{\partial ^4 u}{\partial x^4}(\xi '',t) \right) \end{aligned}$$ (9)
We will apply these approximations to multivariate functions \(u(\mathbf {x},t)\) that satisfy, for all \(i,j \in \{1,\dots ,d\}\),
$$\begin{aligned} \left| \frac{\partial ^4 u}{\partial x_i^2 \partial x_j^2}(\mathbf {x},t) \right| \le \frac{\zeta }{L^d} \end{aligned}$$
for some \(\zeta \) and all \((\mathbf {x},t) \in R\). From (1), this implies that \(\max _{(x_1,\dots ,x_d,t) \in R}|\frac{\partial ^2 u}{\partial t^2}(\mathbf {x},t)| \le \zeta \alpha ^2 d^2 / L^d\). We note that this is dimensionally consistent as \(\alpha \) has dimensions (length)\(^2/\)time and u is a density.
We will use the sequence of discrete positions \(x_0 = 0, x_1 = \Delta x, \dots , x_n = n \Delta x\); \(t_0 = 0, t_1 = \Delta t, \dots , t_m = m \Delta t\), such that \(T = m \Delta t\), \(L = n \Delta x\). Let G (for “grid”) denote the set of points \((\mathbf {x},t) \in R\) such that the coordinates of \(\mathbf {x}\) are integer multiples of \(\Delta x\), and t is an integer multiple of \(\Delta t\). For any t, we use \(G_t\) to denote the set of points \(\mathbf {x}\) such that \((\mathbf {x},t)\in G\). We will let the vector \(\mathbf {u}\) denote the exact solution of (1) at points in G, and will use \(\widetilde{u}\) or \(\widetilde{\mathbf {u}}\) for the approximate solution to (1) found via discretisation, dependent on whether we are considering this as a function or a vector.
Considering points in G and using the approximations (8) and (9) gives the linear constraints
$$\begin{aligned} \frac{\widetilde{u}(\mathbf {x},t+\Delta t) - \widetilde{u}(\mathbf {x},t)}{\Delta t} = \frac{\alpha }{\Delta x^2} \sum _{i=1}^d \left( \widetilde{u}(\mathbf {x}+\Delta x \, \mathbf {e_i},t) - 2 \widetilde{u}(\mathbf {x},t) + \widetilde{u}(\mathbf {x}-\Delta x \, \mathbf {e_i},t) \right) , \end{aligned}$$ (11)

where \(\mathbf {e_i}\) denotes the unit vector in the i-th coordinate direction, and positions are interpreted modulo L by periodicity.
The following result can be shown using standard techniques.
Theorem 1
(Approximation up to small \(\ell _\infty \) error). If \(\Delta t \le \Delta x^2/(2d\alpha )\),
$$\begin{aligned} \max _{(\mathbf {x},t) \in G} | \widetilde{u}(\mathbf {x},t) - u(\mathbf {x},t) | \le \zeta \alpha d T L^{-d} \left( \frac{\alpha d \Delta t}{2} + \frac{\Delta x^2}{12} \right) . \end{aligned}$$ (12)
Proof
From (11),
$$\begin{aligned} \widetilde{u}(\mathbf {x},t+\Delta t) = \left( 1 - \frac{2 \alpha d \Delta t}{\Delta x^2} \right) \widetilde{u}(\mathbf {x},t) + \frac{\alpha \Delta t}{\Delta x^2} \sum _{i=1}^d \left( \widetilde{u}(\mathbf {x}+\Delta x \, \mathbf {e_i},t) + \widetilde{u}(\mathbf {x}-\Delta x \, \mathbf {e_i},t) \right) . \end{aligned}$$ (13)
Let \(\mathcal {L}\) be the linear operator defined by the right-hand side of (13). Letting \(\widetilde{\mathbf {u}_i}\) and \(\mathbf {u_i}\) denote the approximate and exact solutions at time \(t_i\) (i.e. the \(n^d\)-component vectors \(\widetilde{u}(\cdot ,t_i)\), \(u(\cdot ,t_i)\)), we have \(\widetilde{\mathbf {u}_{i+1}} = \mathcal {L} \widetilde{\mathbf {u}_i}\). \(\mathcal {L}\) is stochastic if
$$\begin{aligned} 1 - \frac{2 \alpha d \Delta t}{\Delta x^2} \ge 0, \end{aligned}$$
and this condition holds by assumption. By the discretisation error bounds (8), (9),
implying
i.e.
Writing \(\widetilde{\mathbf {u}_i} = \mathbf {u_i} + \mathbf {e_i}\) for some error vector \(\mathbf {e_i}\), we have
as \(\mathcal {L}\) is stochastic, \(\Vert \mathcal {L}\mathbf {e_1}\Vert _\infty \le \Vert \mathbf {e_1}\Vert _\infty \), so \(\Vert \widetilde{\mathbf {u}_2} - \mathbf {u_2}\Vert _\infty \le 2\zeta \alpha d \Delta t L^{-d} \left( \frac{\alpha d \Delta t}{2} + \frac{\Delta x^2}{12}\right) \). Repeating this argument,
$$\begin{aligned} \Vert \widetilde{\mathbf {u}_i} - \mathbf {u_i} \Vert _\infty \le i \zeta \alpha d \Delta t L^{-d} \left( \frac{\alpha d \Delta t}{2} + \frac{\Delta x^2}{12} \right) \le \zeta \alpha d T L^{-d} \left( \frac{\alpha d \Delta t}{2} + \frac{\Delta x^2}{12} \right) , \end{aligned}$$
as claimed. \(\square \)
Corollary 2
To estimate \(\mathbf {u}\) up to \(\ell _\infty \) accuracy \(\epsilon /L^d\), it is sufficient to take
$$\begin{aligned} \Delta t = \frac{3 \epsilon }{2 T d^2 \alpha ^2 \zeta }, \;\;\;\; \Delta x = \sqrt{\frac{3 \epsilon }{d \alpha \zeta T}}. \end{aligned}$$
This corresponds to taking \(m = \lceil 2T^2 d^2 \alpha ^2 \zeta / (3\epsilon )\rceil \), \(n = \lceil L \sqrt{d \alpha \zeta T/ (3\epsilon )}\rceil \).
Proof
By design, \(\Delta t = \Delta x^2 / (2d\alpha )\), so Theorem 1 can be applied. Insertion of the stated values into Theorem 1 gives the claimed result. \(\square \)
Note that the constant factors in \(\Delta t\) and \(\Delta x\) could be traded off against one another to some extent, and that the constraint that spatial 4th derivatives are upper-bounded by \(\zeta /L^d\) applies to the solution u to the heat equation, rather than the initial condition \(u_0\). However, for any t, \(\Vert \frac{\partial ^4u}{\partial x_i^4}(\mathbf {x},t)\Vert _\infty \le \Vert \frac{\partial ^4u_0}{\partial x_i^4}(\mathbf {x})\Vert _\infty \), so such a constraint on \(u_0\) implies an equivalent constraint on u at other times t. (This claim follows from the discretisation argument of Theorem 1: the linear time-evolution operator \(\mathcal {L}\) defined in the theorem cannot increase the infinity-norm, and discretised partial-derivative operators commute with \(\mathcal {L}\).)
We will make the choices for m and n specified in Corollary 2 throughout the rest of the paper. Observe that, with these choices, the operator \(\mathcal {L}\) is precisely a simple random walk on \(\mathbb {Z}_n^d\).
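With these choices, a single application of \(\mathcal {L}\) is just a nearest-neighbour average on the periodic grid. This can be sketched in a few lines of numpy (a minimal illustration; the helper name `ftcs_step` is ours):

```python
import numpy as np

def ftcs_step(u, d):
    """One application of L with Delta t = Delta x^2/(2 d alpha): each grid
    value becomes the average of its 2d periodic nearest neighbours, i.e.
    one step of a simple random walk on Z_n^d."""
    out = np.zeros_like(u)
    for axis in range(d):
        out += np.roll(u, 1, axis=axis) + np.roll(u, -1, axis=axis)
    return out / (2 * d)
```

Since \(\mathcal {L}\) is stochastic, each step preserves the \(\ell _1\) mass of a nonnegative grid vector.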
Now we have introduced the discretisation method, we can describe the normalisation used: we assume that
$$\begin{aligned} \Vert \mathbf {u_0}\Vert _1 = \left( \frac{n}{L} \right) ^d = \Delta x^{-d}. \end{aligned}$$ (23)
By stochasticity of \(\mathcal {L}\), this implies that \(\Vert \widetilde{\mathbf {u}_i}\Vert _1 = \Delta x^{-d}\) for all i. This assumption is approximately equivalent to assuming that \(\int _{[0,L]^d} u_0(\mathbf {x}) dx_1 \dots dx_d = 1\); we will discuss why at the end of the next section. As a quick check, note that taking \(u_0(\mathbf {x}) = L^{-d}\) gives \(\Vert \mathbf {u_0}\Vert _1 = \left( \frac{n}{L}\right) ^d\), \(\int _{[0,L]^d} u_0(\mathbf {x}) dx_1 \dots dx_d = 1\).
2.2 Numerical integration
Our goal will ultimately be to compute the integral defined in (5) giving the total amount of heat within a region S approximately, at a fixed time. Following the discretisation approach, we will have access to (approximate) evaluations of a function u at equally spaced grid points, and seek to compute the integral of u over S.
We will consider several numerical integration methods for achieving this goal. Each of them is based on a 1-dimensional approximation of the form
$$\begin{aligned} \int _a^b f(x) \, dx = \Delta x \sum _i w(i) f(x_i) + E, \end{aligned}$$ (29)
where w(i) are non-negative real weights, \(x_i\) are grid points between a and b with spacing \(\Delta x\), where \(b-a\) is an integer multiple of \(\Delta x\), and E is an error term. If we define \(\mathbf {w}\), \(\mathbf {f}\) to be the vectors corresponding to evaluations of w and f at grid points, we can write the approximation as \(\Delta x \mathbf {w} \cdot \mathbf {f}\). To extend an approximation of this form to d-variate functions, we simply apply it in each dimension, e.g. for \(d=2\):
where E(x) is the error term for x, and \(|E'| \le (b_1-a_1) \max _x |E(x)| \le L \max _x |E(x)|\). For arbitrary d, it is straightforward to see that we can interpret this approximation as computing the inner product \((\Delta x)^d \mathbf {w}^{\otimes d} \cdot \mathbf {f}\). The error bound becomes \(O(d L^{d-1} \max _{\mathbf {x}} |E(\mathbf {x})|)\) as we will always have \(\sum _i w(i) \le n\).
When applied to the heat equation, we seek to evaluate \(\int _S u(\mathbf {x},t) d\mathbf {x}\) for some subset \(S \subseteq [0,L]^d\) and a fixed time t. Applying the above approximation gives a weighted sum over \(\mathbf {x}\) of the form
$$\begin{aligned} (\Delta x)^d \sum _{\mathbf {x} \in G_t \cap S} w(\mathbf {x}) \, \widetilde{u}(\mathbf {x},t), \end{aligned}$$

where we write \(w(\mathbf {x})\) for the product of the 1-dimensional weights over the coordinates of \(\mathbf {x}\),
where \(G_t\) is a set of grid points of spacing \(\Delta x\) in spatial dimensions, and spacing \(\Delta t\) in time. Then
where \(\widetilde{E} = \max _{\mathbf {x}} |E(\mathbf {x})|\), the second inequality follows from the previous error analysis, and the final inequality follows from Theorem 1. As \(\Delta t = \Delta x^2 / (2d\alpha )\) from Corollary 2, this corresponds to a bound which is
$$\begin{aligned} O\left( \Delta x^2 \alpha d \zeta T + d L^{d-1} \widetilde{E} \right) . \end{aligned}$$ (30)
We will consider three numerical integration methods that fit into the above framework:
1. Simpson’s rule: \(x_i = a+i\Delta x\), \(a \le x_i \le b\), \(\mathbf {w} = \frac{1}{3}(1,4,2,4,2,\dots ,4,1)\),
$$\begin{aligned} |E| \le \frac{\Delta x^4}{180} (b-a) \max _{\xi \in [a,b]} \left| \frac{d^4f}{dx^4}(\xi ) \right| . \end{aligned}$$ (31)
Inserting into (30) and using \(|b-a| \le L\), \(\Vert \mathbf {w}\Vert _1 \le n = L/\Delta x\), we obtain an overall error bound of
$$\begin{aligned} O(\Delta x^2 \alpha d \zeta T+ d \Delta x^4 \zeta ) = O(d \Delta x^2 \zeta (\alpha T + \Delta x^2)). \end{aligned}$$ (32)
Assuming that \(\Delta x \rightarrow 0\), the second term is negligible. Choosing \(\Delta x\) as in Corollary 2, the final error introduced by numerical integration is \(O(\epsilon )\).
2. The midpoint rule: \(x_i = a+(i+\frac{1}{2})\Delta x\), \(a< x_i < b\), \(\mathbf {w} = (1,1,\dots ,1)\),
$$\begin{aligned} |E| \le \frac{\Delta x^2}{24} (b-a) \max _{\xi \in [a,b]} \left| \frac{d^2f}{dx^2}(\xi ) \right| = O(\Delta x^2 L^{3-d} \zeta ). \end{aligned}$$ (33)
Using a similar argument to the previous point, we obtain an overall error bound of
$$\begin{aligned} O(\Delta x^2 \alpha d \zeta T + d \Delta x^2 L^2 \zeta ) = O(d \Delta x^2 \zeta (\alpha T+ L^2)). \end{aligned}$$ (34)
The error increases with L, so we may need to choose \(\Delta x\) smaller than the choice made in Corollary 2. Indeed, working through the same argument, we obtain
$$\begin{aligned} m = O(T\alpha d^2 \zeta (\alpha T + L^2)/\epsilon ),\;\;\;\; n = O(L \sqrt{d\zeta (\alpha T + L^2)/\epsilon }). \end{aligned}$$ (35)
However, for fixed \(\alpha ,d,T,L\) the asymptotic scaling is the same as Simpson’s rule, and we will see below that this technique can be advantageous in two respects: the \(\ell _2\) and \(\ell _\infty \) norms of \(\mathbf {w}\) are lower, and its values are all equal.
3. The left Riemann sum: \(x_i = a+i\Delta x\), \(a \le x_i < b\), \(\mathbf {w} = (1,1,\dots ,1)\),
$$\begin{aligned} |E| \le \frac{\Delta x}{2} (b-a) \max _{\xi \in [a,b]} \left| \frac{df}{dx}(\xi ) \right| = O(\Delta x L^{4-d} \zeta ). \end{aligned}$$ (36)
By the same argument, we obtain an overall error bound of
$$\begin{aligned} O(\Delta x^2 \alpha d \zeta T + d \Delta x L^3 \zeta ) = O(d \Delta x \zeta (\Delta x \alpha T + L^3)). \end{aligned}$$ (37)
This is weaker than both of the previous bounds, but allows us to justify the normalisation assumption that we made that \(\sum _{\mathbf {x} \in G_0} u_0(\mathbf {x}) = (\Delta x)^{-d}\). This is equivalent to the approximate integral of \(u_0\) using the left Riemann sum in (29) equalling 1, which implies that for \(\Delta x \rightarrow 0\), \(\int _{\mathbf {x} \in [0,L]^d} u_0(\mathbf {x}) d\mathbf {x} \rightarrow 1\).
2.3 Condition number
Since \(\widetilde{\mathbf {u}_{i+1}} = \mathcal {L} \widetilde{\mathbf {u}_i}\) holds for \(i=0,1,\ldots ,m-1\), we can find a full approximate solution to the heat equation at all points in G by solving the following linear system:
$$\begin{aligned} \begin{pmatrix} I & & & \\ -\mathcal {L} & I & & \\ & \ddots & \ddots & \\ & & -\mathcal {L} & I \end{pmatrix} \begin{pmatrix} \widetilde{\mathbf {u}_1} \\ \widetilde{\mathbf {u}_2} \\ \vdots \\ \widetilde{\mathbf {u}_m} \end{pmatrix} = \begin{pmatrix} \mathcal {L} \mathbf {u_0} \\ 0 \\ \vdots \\ 0 \end{pmatrix} . \end{aligned}$$ (38)
An important quantity that determines the complexity of classical and quantum algorithms for solving a linear system \(A \mathbf {x} = \mathbf {b}\) is the condition number \(\kappa = \Vert A\Vert \Vert A^{-1}\Vert \), where \(\Vert \cdot \Vert \) denotes the operator norm, i.e., the maximal singular value. The proof of the following theorem is given in “Appendix B”.
Theorem 3
The coefficient matrix A in (38) satisfies \(\Vert A\Vert = \Theta (1)\), \(\Vert A^{-1}\Vert = \Theta (m)\). Hence the condition number is \(\Theta (m)\).
Also note that \(\mathcal {L}\) appears on the right-hand side of (38), raising the question of the complexity of preparing the vector (or quantum state) \(\mathcal {L}\widetilde{\mathbf {u}_0} = \mathcal {L}\mathbf {u_0}\). In the quantum case, this complexity depends on the condition number of \(\mathcal {L}\), which in general could be high; indeed, \(\mathcal {L}\) can sometimes be noninvertible. However, we have made the assumption that the initial vector \(\mathbf {u_0}\) is non-negative, and for all vectors of this form, \(\mathcal {L}\) is well-conditioned:
Lemma 4
Let \(\mathcal {L}\) be defined by (13), taking \(\Delta t = \Delta x^2/(2\alpha d)\) as in Corollary 2. Then for all nonnegative vectors \(\mathbf {u}\), \(\Vert \mathcal {L}\mathbf {u}\Vert _2^2 / \Vert \mathbf {u}\Vert _2^2 \ge 1/(2d)\).
The proof is included in “Appendix C”.
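As a quick numerical sanity check of Lemma 4, one can test the bound directly on random nonnegative vectors (a sketch; we take \(\mathcal {L}\) to be the nearest-neighbour averaging operator corresponding to \(\Delta t = \Delta x^2/(2\alpha d)\)):

```python
import numpy as np

# L u: average of the 2d periodic nearest neighbours (d = 2 here)
d, n = 2, 16
rng = np.random.default_rng(0)
worst = 1.0
for _ in range(200):
    u = rng.random((n,) * d)              # nonnegative grid vector
    Lu = sum(np.roll(u, s, axis=ax) for ax in range(d) for s in (1, -1)) / (2 * d)
    worst = min(worst, (Lu**2).sum() / (u**2).sum())
assert worst >= 1 / (2 * d) - 1e-12       # Lemma 4: the ratio is at least 1/(2d)
```

A point mass attains the bound: if \(\mathbf {u}\) is 1 at a single site, \(\mathcal {L}\mathbf {u}\) has 2d entries equal to 1/(2d), so \(\Vert \mathcal {L}\mathbf {u}\Vert _2^2/\Vert \mathbf {u}\Vert _2^2 = 1/(2d)\) exactly.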
3 Classical Methods
Next we determine the complexity of various classical methods for solving the heat equation, based on the analysis of the previous section.
3.1 Linear systems
A standard classical method for the heat equation (and more general PDEs) is simply to solve the system of linear equations defined in Sect. 2.1 directly. A leading approach for solving sparse systems of linear equations is the conjugate gradient method [32]. This can solve a system of N linear equations, each containing at most s unknowns, and corresponding to a matrix A with condition number \(\kappa \), up to accuracy \(\delta \) in the energy norm \(\Vert \cdot \Vert _A\) in time \(O(s \sqrt{\kappa } N \log (1/\delta ))\). The energy norm \(\Vert \mathbf {x}\Vert _A\) with respect to a positive semidefinite matrix A is defined as \(\Vert \mathbf {x}\Vert _A = \sqrt{\mathbf {x}^T A \mathbf {x}}\).
Note that as the dependence on \(1/\delta \) is logarithmic, using almost any reasonable norm would not change this complexity bound much. For example, we have
$$\begin{aligned} \Vert \mathbf {x}\Vert _2 \le \sqrt{\Vert A^{-1}\Vert }\, \Vert \mathbf {x}\Vert _A \end{aligned}$$ (39)

for any positive definite matrix A.
Theorem 5
(Classical linear equations method). There is a classical algorithm that outputs an approximate solution \(\widetilde{u}(\mathbf {x},t)\) such that \(|\widetilde{u}(\mathbf {x},t) - u(\mathbf {x},t)| \le \epsilon /L^d\) for all \((\mathbf {x},t) \in G\) in time
$$\begin{aligned} O\left( 3^{-d/2}\, T^{d/2+3} L^d \alpha ^{d/2+3} d^{d/2+4} (\zeta /\epsilon )^{d/2+3/2} \log (T d \alpha \zeta /\epsilon ) \right) . \end{aligned}$$
Proof
By Corollary 2 and Theorem 3, we can achieve discretisation accuracy \(\epsilon /L^d\) in the \(\infty \)-norm (which is sufficient to compute the amount of heat within a region up to accuracy \(\epsilon \) via numerical integration) with a system of \(N=O(mn^d)\) linear equations, each containing O(d) variables, with condition number \(\Theta (m)\), where \(m = 2T^2 d^2 \alpha ^2 \zeta / (3\epsilon )\), \(n = L \sqrt{d \alpha \zeta T/ (3\epsilon )}\). We can also calculate the vector on the right-hand side of (38) in time \(O(dn^d)\) by multiplying \(\mathbf {u_0}\) by \(\mathcal {L}\). Using the conjugate gradient method, this system can be solved up to accuracy \(\delta \) in the energy norm in time \(O(d m^{3/2} n^d \log (1/\delta ))\). Then, by (39) and Theorem 3, to achieve accuracy \(\epsilon \) in the \(\ell _2\) norm (and hence the \(\ell _\infty \) norm) it is sufficient to take \(\delta = \Theta (\epsilon /\sqrt{m})\), giving an overall complexity of \(O(d m^{3/2} n^d \log (m/\epsilon ))\). Inserting the expressions for m and n gives the claimed result. \(\square \)
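For reference, the core conjugate gradient iteration is short. The sketch below handles a symmetric positive definite matrix; the block matrix in (38) is not symmetric, so in practice one would apply the method to a suitably symmetrised system (the function name is ours):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive definite A; the residual
    shrinks by a constant factor every O(sqrt(kappa)) iterations."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # A-conjugate update of the direction
        rs = rs_new
    return x
```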
The above approach based on linear equations can be used both for the forwards-in-time and backwards-in-time discretisation methods, and indeed to solve much more general PDEs than the heat equation. In the case of the forwards-in-time approach which is our focus here, there is an even simpler method: compute \(\mathcal {L}^m \mathbf {u_0}\).
Theorem 6
(Classical time-stepping method). There is a classical algorithm that outputs an approximate solution \(\widetilde{u}(\mathbf {x},t)\) such that \(|\widetilde{u}(\mathbf {x},t) - u(\mathbf {x},t)| \le \epsilon /L^d\) for all \((\mathbf {x},t) \in G\) in time \(O(3^{-d/2} T^{{d}/{2}+2}L^d \alpha ^{d/2+2} d^{d/2+3} (\zeta /\epsilon )^{d/2+1})\).
Proof
We simply apply the linear operator \(\mathcal {L}\) defined in (13) m times to the initial vector \(\mathbf {u_0}\). Each matrix-vector multiplication can be carried out in time \(O(dn^d)\), so all required vectors \(\widetilde{\mathbf {u}_i}\) can be produced in \(O(dmn^d)\) steps. Inserting the bounds for m and n from Corollary 2 gives the claimed result. \(\square \)
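The procedure of Theorem 6, followed by numerical integration over a region, can be sketched as follows (the function name, and the use of an equal-weight Riemann sum, are our choices):

```python
import numpy as np

def heat_in_region(u0_grid, m, dx, mask):
    """Apply L m times to the initial grid values (as in Theorem 6), then
    estimate the heat in the region selected by the boolean `mask` via a
    Riemann sum with equal weights."""
    d = u0_grid.ndim
    u = u0_grid.astype(float)
    for _ in range(m):
        u = sum(np.roll(u, s, axis=ax) for ax in range(d) for s in (1, -1)) / (2 * d)
    return dx**d * u[mask].sum()
```

For example, for the uniform initial condition \(u_0 = L^{-d}\) (already the stationary state), the heat in half the region is 1/2.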
The time-evolution method described in Theorem 6 is simple and efficient; however, the method of Theorem 5 based on solving a full system of linear equations is more flexible. A natural alternative approach to compute \(\mathcal {L}^\tau \mathbf {u_0}\) for some integer \(\tau \) is to use the fast Fourier transform to diagonalise \(\mathcal {L}\).
We will first need a technical lemma, which will also be used later on, about the complexity of computing eigenvalues of \(\mathcal {L}^\tau \).
Lemma 7
For any \(\tau \in \{0,\dots ,m\}\), and any \(\delta > 0\), all of the eigenvalues of \(\mathcal {L}^\tau \) can be computed up to accuracy \(\delta \) in time \(O(d n^d + n \log (\tau /\delta ))\).
Proof
It is shown in (A6) and (B14) that
$$\begin{aligned} \mathcal {L} = I^{\otimes d} + \sum _{i=1}^d I^{\otimes (i-1)} \otimes H \otimes I^{\otimes (d-i)}, \end{aligned}$$ (41)
where H is a circulant matrix with eigenvalues
$$\begin{aligned} \lambda _j = -\frac{4 \alpha \Delta t}{\Delta x^2} \sin ^2 \left( \frac{\pi j}{n} \right) \end{aligned}$$
for \(j \in \{0,\dots ,n-1\}\). Eigenvalues of \(\mathcal {L}\) can be associated with strings \(j_1,\dots ,j_d\), where \(j_i\) corresponds to eigenvalue \(\lambda _{j_i}\) of H at position i. Assume that we have chosen \(\Delta t\) and \(\Delta x\) according to Corollary 2, such that \(\Delta t = \Delta x^2/(2d\alpha )\). Then in order to compute an eigenvalue of \(\mathcal {L}\) indexed by \(j_1,\dots ,j_d\) up to accuracy \(\delta '\), it is sufficient to compute each eigenvalue \(\lambda _{j_i}\) up to accuracy \(O(\delta ')\), take the sum, and add 1. Then for the corresponding eigenvalue of \(\mathcal {L}^\tau \) to be accurate up to \(\delta \), it is sufficient to achieve \(\delta '=\delta /\tau \). This follows from all \(\mathcal {L}\)’s eigenvalues \(\lambda \) being in the range \([-1,1]\), which implies that given an approximation \(\widetilde{\lambda } = \lambda \pm \delta '\), where \(\widetilde{\lambda } \in [-1,1]\), \(|\lambda ^\tau - \widetilde{\lambda }^\tau | \le \tau \delta '\).
Therefore, we need to compute each eigenvalue \(\lambda _j\) up to accuracy \(O(\delta /\tau )\). This can be achieved by Taylor-expanding the sine function up to \(O(\log (\tau /\delta ))\) terms, which is a comfortable upper bound to achieve the required accuracy. The n distinct eigenvalues can thus be pre-computed in overall cost \(O(n\log (\tau /\delta ))\) (footnote 3). There are \(n^d\) eigenvalues of \(\mathcal {L}\), each being a sum of d of the \(\lambda _j\)’s, so the complexity of computing all the eigenvalues is \(O(d n^d)\). Thus the total cost is this plus the “one-time” cost of computing the \(\lambda _j\)’s. \(\square \)
Theorem 8
(Classical diagonalisation method). There is a classical algorithm that outputs an approximate solution \(\widetilde{u}(\mathbf {x},t)\) such that \(|\widetilde{u}(\mathbf {x},t) - u(\mathbf {x},t)| \le \epsilon /L^d\) for all \((\mathbf {x},t) \in G\) in time
$$\begin{aligned} O\left( 3^{-d/2} L^d d^{d/2+1} (T \alpha \zeta /\epsilon )^{d/2} \log (L d \alpha \zeta T/\epsilon ) \right) . \end{aligned}$$
Proof
As \(\mathcal {L}\) is a sum of circulant matrices acting on d separate dimensions (see (41)), it is diagonalised by the d-th tensor power of the discrete Fourier transform (equivalently, the inverse quantum Fourier transform up to normalisation). So we use the following expression to approximately compute \(\widetilde{\mathbf {u}_i}\):
where \(\Lambda \) is the diagonal matrix whose entries are eigenvalues of \(\mathcal {L}\), and F is the discrete Fourier transform. The algorithm begins by writing down \(\mathbf {u_0}\) in time \(O(n^d)\), then applies the multidimensional fast Fourier transform to \(\mathbf {u_0}\) in time \(O(d n^d \log n)\). Next each entry of the resulting vector is multiplied by the corresponding eigenvalue of \(\mathcal {L}^i\), approximately computed up to accuracy \(\delta \) using Lemma 7. Thus we obtain a diagonal matrix \(\widetilde{\Lambda ^i}\) such that \(\Vert \widetilde{\Lambda ^i} - \Lambda ^i\Vert \le \delta \). Then
where we use \(\Vert \mathbf {u_0}\Vert _1 = \left( {n}/{L}\right) ^d\) as stated in (23). So it is sufficient to take \(\delta = \epsilon /n^d\). By Lemma 7, the complexity of the second step is
Notice that m and n are related by \(m=n^2 (2 T d \alpha /L^2)\), so \(\log m = O(\log n)\), and hence this bound simplifies to \(O(dn^d + dn\log n + n \log (1/\epsilon ))\). So the complexity is dominated by the fast Fourier transform steps, which have complexity \(O(dn^d\log n)\), and inserting the values for m and n, we obtain an overall complexity of
as claimed. \(\square \)
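As a concrete illustration of the diagonalisation method, the following is a minimal \(d=1\) sketch using the FFT; the algorithm of Theorem 8 applies the d-dimensional transform and the multidimensional eigenvalues, and the function name is ours.

```python
import numpy as np

def heat_step_matrix_free(u0, steps):
    """Apply u_i = F^{-1} Lambda^i F u_0 for the 1-D periodic walk
    L = (Q + Q^T)/2, whose eigenvalue at frequency j is cos(2 pi j / n).
    A d = 1 sketch of the diagonalisation method of Theorem 8."""
    n = len(u0)
    eig = np.cos(2 * np.pi * np.arange(n) / n)   # eigenvalues of L under the DFT
    return np.real(np.fft.ifft(eig ** steps * np.fft.fft(u0)))
```

The output agrees with applying the averaging operator \(\mathcal {L}\) directly, step by step, but costs \(O(n \log n)\) independently of the number of steps.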
Given a solution that is accurate up to \(\ell _\infty \) error \(\epsilon /L^d\) at all points in G via Theorems 5, 6 or 8, we can apply Simpson’s rule to achieve final error \(\epsilon \) in computing the amount of heat in any desired region via numerical integration. This does not increase the overall complexity of any of the above algorithms, as it requires time only \(O(n^d)\).
We see that, of all the “direct” methods for producing a solution to the heat equation classically, the most efficient is the fast Fourier transform method, which costs \(\widetilde{O}(3^{-d/2} L^d d^{d/2+1} (T\alpha \zeta /\epsilon )^{d/2})\). However, this only gives us the solution at a particular time t, and assumes that we are solving the heat equation in a (hyper)rectangular region.
We remark that it could be possible to find an alternative algorithm to that in Theorem 8 by taking the limit \(m \rightarrow \infty \) and replacing the matrix power with an exponential. A similar idea could be applied within the framework of other algorithms studied in this work. However, as the complexity of Theorem 8 turns out to be dominated by the Fourier transform part of the algorithm, it is unclear to what extent this would improve the complexity of this result.
3.2 Random walk method
The random walk method for solving the heat equation [20,21,22] is based on the observation that the linear operator \(\mathcal {L}\) corresponding to evolving in time by one step is stochastic, so this process can be understood as a random walk. This ultimately follows from the representation of the heat equation in terms of a Laplacian; it would also apply to heat flow within structures other than the simple rectangular region considered here. Given a sample from a distribution corresponding to the initial condition \(\mathbf {u}_0\), one can iterate the random walk m times to produce samples from distributions corresponding to each of the subsequent time steps.
Lemma 9
Assume that we have chosen particular values for m and n. Then there is a classical algorithm that outputs samples from distributions \(\overline{\mathbf {u}}_{\mathbf {i}}\) such that \(\Vert \overline{\mathbf {u}}_i - (\Delta x)^d \widetilde{\mathbf {u}_i}\Vert _\infty \le \epsilon \) for all \(i = 0,\dots ,m\) in time \(O(md\log n)\).
Proof
Let \(\overline{\mathbf {u}}_{\mathbf {0}} = (\Delta x)^d \mathbf {u_0}\). As \(\sum _{(\mathbf {x},0) \in G} u_0(\mathbf {x}) = (\Delta x)^{-d}\), \(\overline{\mathbf {u}}_{\mathbf {0}}\) is indeed a probability distribution. We have assumed that \(\sum _{\mathbf {x} \in S} u_0(\mathbf {x})\) can be computed without cost, which implies that arbitrary marginals of \(\overline{\mathbf {u}}_{\mathbf {0}}\) can be computed without cost. This allows us to sample from \(\overline{\mathbf {u}}_{\mathbf {0}}\) in time \(O(\log (n^d)) = O(d \log n)\) by a standard technique: split the domain in half and compute the total probability in each half; choose one half to split further, according to these probabilities; and repeat until the region is reduced to just one point \(\mathbf {x}\), which is a sample from \(\overline{\mathbf {u}}_{\mathbf {0}}\).
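The halving technique can be sketched as follows for a one-dimensional domain, with `mass_of_range` standing in for the assumed zero-cost marginal oracle; for a d-dimensional grid the same bisection is applied coordinate by coordinate.

```python
import random

def sample_by_bisection(mass_of_range, n):
    """Draw one sample from a distribution on {0,...,n-1}, given an oracle
    mass_of_range(lo, hi) returning the total probability of [lo, hi).
    Repeatedly halve the domain, descending into each half with probability
    proportional to its mass, as in the proof of Lemma 9."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = mass_of_range(lo, mid)
        whole = mass_of_range(lo, hi)
        if random.random() * whole < left:
            hi = mid
        else:
            lo = mid
    return lo
```

Each sample uses \(O(\log n)\) oracle queries, matching the \(O(d \log n)\) cost stated above.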
Given a sample \(\mathbf {x}\) from \(\overline{\mathbf {u}}_{\mathbf {i}}\), we can sample from \(\overline{\mathbf {u}}_{\mathbf {i+1}} = (\Delta x)^d \widetilde{\mathbf {u}_{i+1}}\) by applying the stochastic map \(\mathcal {L}\) to \(\mathbf {x}\) (in the sense of sampling from a distribution on new positions, rather than maintaining the entire vector), to update to a new position in time \(O(d \log n)\). So we can output one sample from each of the distributions \(\overline{\mathbf {u}}_{\mathbf {i}}\) in total time \(O(md\log n)\).
\(\square \)
We can now use this to approximate the total amount of heat in a given rectangular region at a given time t, via the midpoint rule.
Theorem 10
For any \(S \subseteq [0,L]^d\) such that the corners of S are all integer multiples of \(\Delta x\), shifted by \(\Delta x / 2\), and any \(t \in [0,T]\) that is an integer multiple of \(\Delta t\), there is a classical algorithm that outputs \(\overline{u}(S)\) such that \(|\overline{u}(S) - \int _S u(\mathbf {x},t) d\mathbf {x}| \le \epsilon \), with probability 0.99, in time
Proof
For any probability distribution P and any subset U, \(\sum _{\mathbf {x} \in U} P(\mathbf {x})\) can be estimated by choosing a sequence of k samples \(\mathbf {x}_i\) according to P, and outputting the fraction of samples that are contained within U. The expectation of this quantity is precisely \(\sum _{\mathbf {x} \in U} P(\mathbf {x})\), and by a standard Chernoff bound (or Chebyshev inequality) argument [33], it is sufficient to take \(k = O(1/\epsilon ^2)\) to estimate this expectation up to accuracy \(\epsilon \) with 99% probability of success. We use Lemma 9 to sample from the required distribution. Write \(t = i \Delta t\) for some integer i. Then, if we choose \(m = O(T\alpha d^2 \zeta (\alpha T + L^2)/\epsilon )\), \(n = O(L \sqrt{d\zeta (\alpha T + L^2)/\epsilon })\) (see (35)) and apply this technique to \(G_t \cap S\), we get precisely the midpoint rule formula for approximating \(\int _S u(\mathbf {x},t) d\mathbf {x}\). Thus we have
via the analysis of the midpoint rule in Sect. 2.2, noting that we have the normalisation \(\overline{\mathbf {u}}_{\mathbf {i}} = (\Delta x)^d \widetilde{\mathbf {u}_i}\). Inserting these choices for m and n into the bound of Lemma 9 and multiplying by \(O(1/\epsilon ^2)\) gives the claimed result. \(\square \)
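The estimator of Theorem 10 can be sketched as follows, abstracting the random walk of Lemma 9 behind a `sample_endpoint` callback; the function names and the constant in k are ours, chosen for illustration.

```python
import math

def estimate_heat(sample_endpoint, in_region, eps):
    """Estimate the probability mass of a region S at time t (the midpoint-rule
    approximation of the heat in S) as the fraction of k = O(1/eps^2) sampled
    walk endpoints that land in S, as in the proof of Theorem 10."""
    k = math.ceil(100.0 / eps ** 2)   # constant 100 chosen for illustration
    hits = sum(1 for _ in range(k) if in_region(sample_endpoint()))
    return hits / k
```

By the Chernoff bound argument above, the output is within \(\epsilon \) of the true mass with constant probability.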
The reader may wonder why we did not use a differently weighted sum in Theorem 10, corresponding to approximating the integral via Simpson’s rule, given that this rule apparently has better accuracy. The reason is that the weighting used for Simpson’s rule has components which are exponentially large in d, which would lead to an exponential dependence on d in the final complexity, coming from the Chernoff bound.
3.3 Fast random walk method
We can speed up the algorithm of the previous section by sampling from the final distribution of the random walk more efficiently than the naïve simulation method of Lemma 9.
Lemma 11
Assume that we have chosen particular values for m and n. Then there is a classical algorithm that outputs samples from a distribution \(\overline{\mathbf {u}}_{\mathbf {m}}\) such that \(\Vert \overline{\mathbf {u}}_m - (\Delta x)^d \widetilde{\mathbf {u}_m}\Vert _\infty \le \epsilon \) in expected time \(O(d\log n)\).
Proof
As in Lemma 9, we begin by sampling from \(\overline{\mathbf {u}}_{\mathbf {0}}\) in time \(O(d \log n)\). Next, given such a sample, we want to perform m steps of a random walk on \(\mathbb {Z}_n^d\). We can do this by simulating m steps of a random walk on \(\mathbb {Z}^d\) and reducing each element of the output modulo n. Next we show that this can be achieved without performing each step of the random walk in sequence (which would give a complexity scaling linearly with m). The random walk can be understood as follows: for each of m steps, choose a dimension uniformly at random, then increment or decrement the corresponding coordinate with equal probability of each. The number of steps taken in each dimension can be determined one at a time. For the i’th dimension (\(1 \le i \le d\)), if \(m'\) steps have been taken in total in the previous \(i-1\) dimensions, the number of steps taken in that dimension is distributed according to a binomial distribution with parameters \((m-m',1/(d-i+1))\). Once the number \(s_i\) of steps taken in each dimension i is known, the number of increments in that dimension is also binomially distributed with parameters \((s_i,1/2)\). So the problem reduces to sampling from binomial distributions with parameters (l, p) for arbitrary \(l \le m\), \(0<p<1\). This can be achieved in constant time [34, 35] if one assumes (as we do) that arithmetic operations on real numbers can be performed in constant time, and a random real number can be generated in constant time. Therefore the overall complexity is bounded by the initial cost of sampling. \(\square \)
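The fast-forwarding step of Lemma 11 can be sketched as follows, using NumPy's binomial sampler in place of the constant-time samplers of [34, 35]; the function name is ours.

```python
import numpy as np

def fast_walk_endpoint(start, m, n, rng):
    """Sample the endpoint of m steps of the random walk on Z_n^d in O(d)
    binomial draws rather than m sequential steps, as in Lemma 11: first the
    number of steps falling in each dimension, then the number of increments."""
    pos = np.array(start) % n
    d, remaining = len(pos), m
    for i in range(d):
        # Steps in dimension i: Binomial(m - m', 1/(d - i)) given m' earlier steps.
        s_i = rng.binomial(remaining, 1.0 / (d - i))
        ups = rng.binomial(s_i, 0.5)              # increments among those steps
        pos[i] = (pos[i] + 2 * ups - s_i) % n
        remaining -= s_i
    return pos
```

Combined with the initial sampling of Lemma 9, this yields one sample from \(\overline{\mathbf {u}}_{\mathbf {m}}\) in time \(O(d \log n)\) rather than \(O(md \log n)\).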
We can plug Lemma 11 into the argument of Theorem 10 to obtain the following improved result:
Theorem 12
For any \(S \subseteq [0,L]^d\) such that the corners of S are all integer multiples of \(\Delta x\), shifted by \(\Delta x / 2\), and any \(t \in [0,T]\) that is an integer multiple of \(\Delta t\), there is a classical algorithm that outputs \(\overline{u}(S)\) such that \(|\overline{u}(S) - \int _S u(\mathbf {x},t) d\mathbf {x}| \le \epsilon \), with probability 0.99, in time
Proof
The proof is the same as for Theorem 10, substituting the use of Lemma 11 for Lemma 9. The final complexity is \(O((d\log n)/\epsilon ^2)\), with \(n = O(L \sqrt{d\zeta (\alpha T + L^2)/\epsilon })\). \(\square \)
4 Quantum Methods
In this section we describe several quantum algorithms for solving the heat equation. We begin by stating some technical ingredients that we will require.
First, we describe a technical lemma that allows us to go from a quantum state corresponding to an approximate solution to the heat equation at one or more given times simultaneously, to an estimate of the heat in a given region.
Lemma 13
(Quantum numerical integration). Let \(\widetilde{\mathbf {u}}\) be the \(mn^d\)-component vector corresponding to some function \(\widetilde{u}(\mathbf {x},t)\) such that \(|\widetilde{u}(\mathbf {x},t)-u(\mathbf {x},t)| \le \epsilon /L^d\) for all \((\mathbf {x},t) \in G\), and let
be the corresponding normalised quantum state. Let \(| \widetilde{\widetilde{u}} \rangle \) be a normalised state that satisfies \(\Vert | \widetilde{\widetilde{u}} \rangle - | \widetilde{u} \rangle \Vert _2 \le \gamma \), where \(\gamma = O(\epsilon n^{d/2}/((\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}}\Vert _2))\). Also assume that we have an estimate \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\) such that \(|\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}-\Vert \widetilde{\mathbf {u}}\Vert _2| \le \gamma \Vert \widetilde{\mathbf {u}}\Vert _2\). Let S be a hyperrectangular region at a fixed time t such that the corners of S are in G. Then it is sufficient to use an algorithm that produces \(| \widetilde{\widetilde{u}} \rangle \) k times to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \) with 99% probability of success, where \(k = O( (\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}}\Vert _2 /( \epsilon n^{d/2}))\).
Proof
Let \(w(\mathbf {x})\) be a set of weights corresponding to a numerical integration rule as defined in Sect. 2.2 (we will use Simpson’s rule in what follows). We will attempt to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x}\) by approximately computing \((\Delta x)^d \sum _{\mathbf {x} \in G_t \cap S} w(\mathbf {x})\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2} \langle \mathbf {x},t|\widetilde{\widetilde{u}} \rangle \), where \(G_t\) is the set of \(\mathbf {x}\) such that \((\mathbf {x},t)\in G\). We first determine the level of accuracy that is required in computing \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\), \(| \widetilde{\widetilde{u}} \rangle \). By the triangle inequality we have
where in the last inequality we use the analysis of Sect. 2.2 and Cauchy–Schwarz.
To achieve a final bound of \(\epsilon \), we need to have \(\gamma = O(\epsilon /(\Vert \widetilde{\mathbf {u}}\Vert _2(\Delta x)^d \Vert w\Vert _2))\). To find a concrete expression for this requirement, we need to compute \(\Vert w\Vert _2\). In the case of Simpson’s rule, we have
Thus it is sufficient to take \(\gamma = O(\epsilon n^{d/2}/(\sqrt{10}L/3)^d) \Vert \widetilde{\mathbf {u}}\Vert _2^{-1}\) to achieve final accuracy \(\epsilon \).
Finally, we need to approximately compute \((\Delta x)^d \sum _{\mathbf {x} \in G_t \cap S} w(\mathbf {x})\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2} \langle \mathbf {x},t|\widetilde{\widetilde{u}} \rangle \) given an algorithm that produces copies of \(| \widetilde{\widetilde{u}} \rangle \). This can be achieved using amplitude estimation [28] to estimate the inner product between the state
and \(| \widetilde{\widetilde{u}} \rangle \), up to accuracy \(\epsilon / ((\Delta x)^d \Vert w\Vert _2 \widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2})\), and multiplying by \((\Delta x)^d \Vert w\Vert _2 \widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\). In order to achieve this level of accuracy, we need to use the algorithm for producing \(| \widetilde{\widetilde{u}} \rangle \) k times, where \(k = O( (\Delta x)^d \Vert w\Vert _2 \widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}/\epsilon )\) from amplitude estimation. Applying the previous calculation of \(\Vert w\Vert _2\), and using that \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2} \approx \Vert \widetilde{\mathbf {u}}\Vert _2\), gives the claimed result. \(\square \)
Observe that in fact Lemma 13 can be used to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x}\) given copies of states \(| \widetilde{\widetilde{u}} \rangle \) corresponding to an approximation to u which is accurate only within \(G_t \cap S\), rather than over all of S. We will use this later on to estimate the amount of heat in a region, given a state corresponding to a solution to the heat equation at a particular time t, rather than all times as stated in this lemma.
The midpoint rule could be used instead of Simpson’s rule in Lemma 13 to integrate over hyperrectangular regions S such that the corners of S are in G, shifted by \(\Delta x / 2\); this would lead to a similar complexity.
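The norm \(\Vert w\Vert _2\) that drives the \((\sqrt{10}L/3)^d\) factor in Lemma 13 can be checked numerically. The sketch below builds the one-dimensional composite Simpson weights, whose \(\ell _2\) norm grows as \(\sqrt{10n}/3\); the d-dimensional rule uses their d-th tensor power, giving \((\Delta x)^d \Vert w\Vert _2 = (\sqrt{10}L/3)^d n^{-d/2}\).

```python
import numpy as np

def simpson_weights(n):
    """Composite Simpson weights (1, 4, 2, 4, ..., 2, 4, 1)/3 on n + 1 points
    (n even); the d-dimensional rule uses their d-th tensor power."""
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w / 3.0

# ||w||_2^2 ~ n(16 + 4)/(2*9) = 10n/9, so ||w||_2 ~ sqrt(10 n)/3.
```

This is where the exponential dependence on d in the query count k originates.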
We will also need a technical result regarding the \(\ell _2\) norm of solutions to the heat equation.
Lemma 14
Let \(\mathcal {L}\) be defined by (13), taking \(\Delta t = \Delta x^2/(2d\alpha )\) as in Corollary 2. Then for any integer \(\tau \ge 1\),
In this lemma, and elsewhere, we use \(| 0 \rangle \) to denote the origin in \(\mathbb {R}^d\). The proof is deferred to “Appendix D”.
4.1 Quantum linear equation solving method
In this section we describe an approach to solve the heat equation using quantum algorithms for linear equations. The idea is analogous to the classical linear equations method: we use a quantum algorithm for solving linear equations to produce a quantum state that encodes a solution approximating \(u(\mathbf {x},t)\) for all times t, and then use Lemma 13 to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x}\). First we state the complexity of the quantum subroutines that we will use.
Theorem 15
(Solving linear equations [25, Theorem 10]). Let \(A\mathbf {y}=\mathbf {b}\) for an \(N \times N\) invertible matrix A with sparsity s and condition number \(\kappa \). Given an algorithm that constructs the state \(| b \rangle = \frac{1}{\Vert \mathbf {b}\Vert _2}\sum _i \mathbf {b}_i | i \rangle \) in time \(T_b\), there is a quantum algorithm that can output a state \(| \widetilde{y} \rangle \) such that
with probability at least 0.99, in time
where
Theorem 30 of [25] is stated only for Hermitian matrices, but as remarked in a footnote there, it also applies to non-Hermitian matrices by encoding as a submatrix of a Hermitian matrix. The bound on \(T_U\) comes from [27, Lemma 48], in which we set \(s_r=s_c=s\) and \(\varepsilon =\eta /(\kappa ^2 \log ^3(\kappa /\eta ))\). Note that a quantum algorithm by Childs, Kothari and Somma [36] for solving linear equations could also be used; this would achieve a similar complexity, but the lower-order terms are not stated explicitly in [36].
Theorem 16
(Linear equation norm estimation [25, Theorem 12]). Let \(A\mathbf {y}=\mathbf {b}\) for an \(N \times N\) invertible matrix A with sparsity s and condition number \(\kappa \). Given an algorithm that constructs the state \(| b \rangle = \frac{1}{\Vert \mathbf {b}\Vert _2} \sum _i \mathbf {b}_i | i \rangle \) in time \(T_b\), there is a quantum algorithm that outputs \(\widetilde{z}\) such that
with probability at least 0.99, in time
where
As the complexity bounds suggest, the algorithms of Theorems 15 and 16 are rather complicated.
Theorem 17
(Quantum linear equations method). Let \(S \subseteq [0,L]^d\) be a subset at a fixed time t. There is a quantum algorithm that produces an estimate \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \) with 0.99 probability of success in time
where
and \(C = 20^{1/2}3^{-5/4}\pi ^{-1/4}\), and we assume that \(\alpha T = O(\log (1/\epsilon ))\) if \(d=2\), and that \(Td\alpha /\zeta = O(L^6 \epsilon ^{-1})\) if \(d \ge 3\).
Proof
By Corollary 2 and Theorem 3, we can achieve discretisation accuracy \(\epsilon /L^d\) in the \(\infty \)-norm with a system of \(N=O(mn^d)\) linear equations (see (38)), each containing O(d) variables, with condition number \(\Theta (m)\), where \(m = \lceil 2 T^2 d^2 \alpha ^2 \zeta / (3\epsilon ) \rceil \), \(n = \lceil L \sqrt{d \alpha \zeta T/ (3\epsilon )} \rceil \). We will apply Theorem 15 to solve this system of equations.
First, we can produce the initial quantum state corresponding to the right-hand side of (38) as follows. We construct \(| u_0 \rangle \), which can be done in time \(O(d \log n)\) as we have assumed that we can compute marginals of \(u_0\) (and its powers) efficiently [37,38,39,40]. Then we apply the nonunitary operation \(\mathcal {L}\) to \(| u_0 \rangle \). This can be achieved in time \(\widetilde{O}(\kappa )\), where \(\kappa \) is the condition number of \(\mathcal {L}\), via the linear combination of unitaries technique of [36]. To be more precise, from (A3), we can decompose \(H=-2I+Q+Q^T\), where Q is a shift matrix. So \(\mathcal {L}=\frac{1}{2d}\sum _{j=1}^d(I^{\otimes (j-1)}\otimes Q\otimes I^{\otimes (d-j)}+I^{\otimes (j-1)}\otimes Q^T\otimes I^{\otimes (d-j)})\), which is a linear combination of 2d unitaries. The claimed result then follows directly from [36, Lemma 7]. The \(\widetilde{O}\) notation hides polylogarithmic terms in n and d. In fact, \(\kappa \) can be replaced with \(\Vert \mathcal {L}\Vert /\Vert \mathcal {L}| u_0 \rangle \Vert _2\) (see [15, Section IIIB] for a discussion). From Lemma 4, and noting that \(\Vert \mathcal {L}\Vert = O(1)\), this is upper-bounded by \(O(\sqrt{d})\). Therefore, the complexity of preparing a normalised version of \(\mathcal {L}| u_0 \rangle \) is \(\widetilde{O}(\sqrt{d})\); inspection of Theorem 15 shows that this is negligible compared with the complexity of other aspects of the algorithm.
Let \(| \widetilde{u} \rangle = \frac{1}{\Vert \widetilde{\mathbf {u}}\Vert _2} \sum _{(\mathbf {x},t) \in G} \widetilde{u}(\mathbf {x},t) | \mathbf {x},t \rangle \). Using Theorem 15, there is a quantum algorithm that can produce a state \(| \widetilde{\widetilde{u}} \rangle \) such that \(\Vert | \widetilde{\widetilde{u}} \rangle - | \widetilde{u} \rangle \Vert _2 \le \gamma \) in time
By Theorem 16, there is a quantum algorithm that produces an estimate \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\) of \(\Vert \widetilde{\mathbf {u}}\Vert _2\) satisfying
in time
In both of these estimates we use that \(\log N \gg \log ^{2.5}( (dm /\gamma ) \log (m/\gamma ))\) based on our estimation of \(\gamma \) below. Using Lemma 13 and inserting \(\gamma = O(\epsilon n^{d/2} /((\sqrt{10}L/3)^d\Vert \widetilde{\mathbf {u}}\Vert _2))\), the complexity of producing \(| \widetilde{\widetilde{u}} \rangle \) is
and the complexity of producing \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\) is
By Lemma 13, in order to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \) it is sufficient to use the algorithm for producing \(| \widetilde{\widetilde{u}} \rangle \)
times, giving an overall complexity for that part of
This implies that the overall complexity of the algorithm is dominated by the complexity of producing the estimate \(\widetilde{\Vert \widetilde{\mathbf {u}}\Vert _2}\). Defining
for conciseness, (76) can be rewritten as
To calculate B, it remains to upper-bound \(\Vert \widetilde{\mathbf {u}}\Vert _2\). A straightforward upper bound is
But we will obtain a tighter upper bound, for which it will be sufficient to consider the particular initial condition \(\mathbf {u_0}(0^d) = (n/L)^d\), \(\mathbf {u_0}(\mathbf {x}) = 0\) for \(\mathbf {x} \ne 0^d\). This initial condition can be seen to give a worst-case upper bound by convexity, as follows. Consider the operator \(\mathcal {L}\) occurring in (13) and an arbitrary initial condition \(\mathbf {u'}(\mathbf {x}) = p_{\mathbf {x}}\) such that \(\sum _{\mathbf {x}} p_{\mathbf {x}} = (n/L)^d\) (corresponding to the \(L_1\) norm of the initial condition being normalised to 1). Then \(\mathbf {u'}\) is a convex combination of point functions of the form \(\mathbf {u_{x_0}}(\mathbf {x_0}) = (n/L)^d\), \(\mathbf {u_{x_0}}(\mathbf {x}) = 0\) for \(\mathbf {x} \ne \mathbf {x_0}\). So \(\Vert \mathcal {L}^\tau \mathbf {u'}\Vert _2 \le \Vert \mathcal {L}^\tau \mathbf {u_0}\Vert _2\) by convexity of the \(\ell _2\) norm and shift-invariance of \(\mathcal {L}\).
By Lemma 14, for any \(\tau \ge 1\),
This gives an upper bound on the total \(\ell _2\) norm of
The first two summands under the square root are negligible compared with the others. For \(d=1\), the sum over \(\tau \) is O(n); for \(d=2\), it is \(O(\log n)\); and for \(d \ge 3\), it is O(1). The final summand is negligible for \(d \ge 2\) since \(m/n^2 = 2Td\alpha /L^2\) which is a constant.
This then gives us overall \(\ell _2\) norm bounds \(\Vert \widetilde{\mathbf {u}}\Vert _2 = O((n^{3/2} + \sqrt{mn})/ L)\) for \(d=1\), \(\Vert \widetilde{\mathbf {u}}\Vert _2 = O(n^2 \sqrt{\log n}/L^2)\) for \(d=2\), and \(\Vert \widetilde{\mathbf {u}}\Vert _2 = O((\sqrt{2}d^{1/4}n/(\pi ^{1/4} L))^d)\) for \(d\ge 3\). Compared with (81), this last bound is stronger by a factor of almost \(\sqrt{m}\). By the lower bound part of Lemma 14, the bounds are close to tight.
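The \(\tau ^{-d/4}\) decay of \(\Vert \mathcal {L}^\tau \mathbf {u_0}\Vert _2\) underlying these bounds can be checked numerically in \(d=1\); the sketch below computes the norm through the DFT diagonalisation used throughout Sect. 3 (the function name is ours).

```python
import numpy as np

def l2_after_tau_steps(n, tau):
    """||L^tau e_0||_2 for the 1-D walk L = (Q + Q^T)/2 on Z_n: the eigenvalue
    at frequency j is cos(2 pi j / n), and Parseval's identity gives the
    norm directly from the Fourier side."""
    eig = np.cos(2 * np.pi * np.arange(n) / n)
    return np.sqrt(np.mean(eig ** (2 * tau)))

# For d = 1 the norm decays as tau^{-1/4}; summing tau^{-1/2} up to
# m = Theta(n^2) then gives the O(n) behaviour quoted above.
```

Quadrupling \(\tau \) should therefore shrink the norm by a factor of about \(\sqrt{2}\).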
Inserting the values for m and n, and these bounds on \(\Vert \widetilde{\mathbf {u}}\Vert _2\), in the complexity bound (76), the final complexities are as stated in the theorem. In computing these, we use the bounds that
where \(C = 20^{1/2}3^{-5/4}\pi ^{-1/4}\). \(\square \)
Note that in this analysis, as in the classical case, we have assumed that arbitrary nonzero entries of the matrix A can be computed in time O(1). From the above proof, we see that a \(\gamma \)-approximation \(| \widetilde{\widetilde{u}} \rangle \) of the quantum state of the solution of the linear system (38) is obtained in time \(\widetilde{O}(dm) = \widetilde{O}(T^2d^3\alpha ^2\zeta /\epsilon )\) from (72); the dependence on \(\epsilon \) is linear, and the dependence on d is cubic. This is exponentially better than the classical algorithm given in Theorem 5. The above theorem therefore shows that the exponential dependence of the complexity on \(\epsilon \) and d comes from computing the amount of heat, rather than from the state preparation step. The exponential speedup disappears because the quantum state of the solution does not contain information about the norm of the solution. This norm is required when estimating the amount of heat in a given region, and by (81) it can be exponentially large. So when we use amplitude estimation to estimate the amount of heat in a region, we need to multiply by this norm, and the norm then appears in the complexity in order to guarantee the desired accuracy.
4.2 Fast-forwarded random walk method
We next consider alternative methods which directly produce a quantum state corresponding to the distribution of the random walk at time \(t = i \Delta t\): that is, a state \(| \psi _i \rangle \) close to \(\sum _{\mathbf {x}} \widetilde{u}_i(\mathbf {x}) | \mathbf {x} \rangle / \Vert \widetilde{\mathbf {u}}_i\Vert _2\). We can then estimate \(\int _S u(\mathbf {x},t)d\mathbf {x} \pm \epsilon \) using Lemma 13.
These methods start by producing an initial state \(| u_0 \rangle = \sum _{\mathbf {x}} u_0(\mathbf {x}) | \mathbf{x}\rangle / \Vert \mathbf {u_0}\Vert _2\). Given that we have assumed that we can compute sums of squares of \(u_0\) over arbitrary regions in time O(1), \(| u_0 \rangle \) can be constructed in time \(O(d \log n)\) via the techniques of [37, 39, 40]. This will turn out not to affect the overall complexity of the algorithms.
The first approach we consider can be viewed as a coherent version of the random walk method. Given the initial state \(| u_0 \rangle \), we attempt to produce a state approximating \(| u_i \rangle = | \mathcal {L}^i u_0 \rangle \) for some i.
Theorem 18
(Apers and Sarlette [26], Gilyén et al. [27]). Given a symmetric Markov chain with transition matrix \(\mathcal {L}\) and a quantum state \(| \psi _0 \rangle \), there is an algorithm which produces a state \(| \widetilde{\psi _i} \rangle \) such that
using
steps of the quantum walk corresponding to \(\mathcal {L}\).
Theorem 19
(Fast-forwarded random walk method). Let S be a subset at a fixed time \(t = i\Delta t\). There is a quantum algorithm based on fast-forwarding random walks that estimates \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \) in time
Proof
We use the algorithm of Theorem 18 to produce a state \(| \widetilde{\widetilde{u}}_i \rangle \) such that \(\Vert | \widetilde{\widetilde{u}}_i \rangle - | \widetilde{u}_i \rangle \Vert _2 \le \gamma \), where \(| \widetilde{u}_i \rangle = \widetilde{\mathbf {u}_i}/\Vert \widetilde{\mathbf {u}_i}\Vert _2\) and \(\gamma \) is defined in Lemma 13, which is applied at a single time. We need to use this algorithm k times, where k is also defined in Lemma 13. The complexity of implementing a quantum walk step is essentially the same as that of implementing a classical random walk step, which is \(O(d \log n)\). The complexity of producing the initial state \(| u_0 \rangle \) is also \(O(d \log n)\). Therefore, the complexity of the overall algorithm is
As \(k = O( (\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}_i}\Vert _2 / (\epsilon n^{d/2}))\), \(\gamma = O(\epsilon n^{d/2}/((\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}_i}\Vert _2))\) from Lemma 13, we see that the \(\Vert \widetilde{\mathbf {u}_i}\Vert _2\) terms cancel. Inserting the values for \(\gamma \) and k, using \(\Vert \mathbf {u_0}\Vert _2 \le (n/L)^d\) and inserting the values for n and m determined in Corollary 2, we obtain the claimed result. \(\square \)
4.3 Diagonalisation and postselection method
Similarly to the classical case (Theorem 8), we can find a more efficient algorithm than Theorem 19 (one without the factor of \(\sqrt{m}\)) in the special case we are considering of solving the heat equation in a hypercube, using the fact that the quantum Fourier transform diagonalises \(\mathcal {L}\). By contrast with the classical method, here we perform operations in superposition. As in the previous section, again the goal is to produce \(| u_i \rangle \) for some i; as we can diagonalise \(\mathcal {L}\) efficiently, all that remains is to implement the (non-unitary) operation \(\Lambda ^i\), where \(\Lambda \) is the diagonal matrix corresponding to the eigenvalues of \(\mathcal {L}\).
Theorem 20
(Quantum diagonalisation and postselection method). Let S be a hyperrectangular region at a fixed time \(t = i\Delta t\) such that the corners of S are in G. There is a quantum algorithm that estimates \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \) with 99% success probability in time
Proof
We start with the state \(| u_0 \rangle \), and apply the approximate quantum Fourier transform in time \(O(d \log n \log \log n)\) to produce a state \(| \psi \rangle \). Note that this is exponentially faster than the classical FFT. Then, similarly to Theorem 8, we want to apply the map \(\Lambda ^i\) to this state, where \(\Lambda \) is the diagonal matrix whose entries are eigenvalues of \(\mathcal {L}\), before applying the inverse quantum Fourier transform to produce \(| \widetilde{u}_i \rangle \). Recalling that eigenvalues \(\lambda _j\) of \(\mathcal {L}\) correspond to strings \(j = j_1,\dots ,j_d\), where \(j_1,\dots ,j_d \in \{0,\dots ,n-1\}\), we expand
Then applying \(\Lambda ^i\) can be achieved by attaching an ancilla qubit and performing the map
and measuring the ancilla qubit. If we receive the outcome 0, then the residual state is as desired, and we can apply the inverse quantum Fourier transform to produce \(\mathcal {L}^i | u_0 \rangle /\Vert \mathcal {L}^i | u_0 \rangle \Vert _2\). The probability that the measurement of the ancilla qubit succeeds is precisely \(\Vert \mathcal {L}^i | u_0 \rangle \Vert _2^2\). Using amplitude amplification, \(O(\Vert \mathcal {L}^i | u_0 \rangle \Vert _2^{-1})\) repetitions are enough to produce the desired state with success probability 0.99. We will also need to produce an estimate of \(\Vert \widetilde{\mathbf {u}_i}\Vert _2\). To do so, we can apply amplitude estimation to this procedure to produce an estimate of the square root of the probability of receiving outcome 0. This gives \(\Vert \widetilde{\mathbf {u}_i}\Vert _2 (1\pm \delta )\) (with success probability lower-bounded by a constant arbitrarily close to 1) at an additional multiplicative cost of \(O(\delta ^{-1})\) [28].
For any \(i \in \{0,\dots ,m\}\), and any \(\delta > 0\), by the same argument as Lemma 7 any desired eigenvalue of \(\mathcal {L}^i\) can be computed classically up to accuracy \(\delta \) in time O(d), given a precomputation cost of \(O(n\log (m/\delta ))\) at the start of the algorithm, which will turn out to be negligible. Then it has been shown by Sanders et al. [41] that, given a classical description of \(\lambda ^i_j\) for each j, one can perform the map (93) on the ancilla qubit up to accuracy \(O(\delta )\) using \(O(d + \log (1/\delta ))\) gates and some additional ancilla qubits which are reset to their original state.
Thus the overall cost of producing the state \(\mathcal {L}^i | u_0 \rangle /\Vert \mathcal {L}^i | u_0 \rangle \Vert _2\) is
In order to use Lemma 13, we will take \(\delta = \gamma = \Theta (\epsilon n^{d/2}/((\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}_i}\Vert _2))\). Inserting the value for n from Corollary 2 and using the upper bounds \(\Vert \widetilde{\mathbf {u}_i}\Vert _2 \le \Vert \mathbf {u_0}\Vert _2 \le \Vert \mathbf {u_0}\Vert _1 = (n/L)^d\), we get \(\log (1/\delta ) = O(d \log n + \log 1/\epsilon ) = O(d \log n)\), implying that the cost of implementing the QFT dominates the overall complexity.
Taking this sufficiently small choice of \(\delta \), by Lemma 13 we can use the above procedure k times to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x} \pm \epsilon \), where \(k = O( (\sqrt{10}L/3)^d \Vert \widetilde{\mathbf {u}_i}\Vert _2 /(\epsilon n^{d/2})) = O(1/\delta )\). So we see that the complexity of producing a sufficiently accurate estimate of \(\Vert \widetilde{\mathbf {u}_i}\Vert _2\) is asymptotically equivalent to that of performing the numerical integration.
The total cost is then k times the cost of (94). Simplifying by using \(| u_0 \rangle = \frac{1}{\Vert \mathbf {u_0}\Vert _2} \sum _{\mathbf {x}} u_0(\mathbf {x}) | \mathbf {x} \rangle \) and hence \(\Vert \mathcal {L}^i | u_0 \rangle \Vert = \Vert \widetilde{\mathbf {u}_i}\Vert _2 / \Vert \mathbf {u_0}\Vert _2\), a \(\Vert \widetilde{\mathbf {u}_i}\Vert _2\) term cancels, leaving a cost of
Once again inserting the values for m and n based on Corollary 2 and using the upper bounds \(\Vert \mathbf {u_0}\Vert _2 \le \Vert \mathbf {u_0}\Vert _1 = (n/L)^d\), we obtain an overall bound of
as claimed in the theorem. \(\square \)
4.4 Random walk amplitude estimation approach
In our final algorithms, we apply amplitude estimation to the classical random walk approach of Sects. 3.2 and 3.3. This is the simplest of all the quantum approaches, but turns out to achieve the most efficient results in most cases. We begin with the application to accelerating the “standard” random walk method.
Theorem 21
For any \(S \subseteq [0,L]^d\) such that the corners of S are all integer multiples of \(\Delta x\), shifted by \(\Delta x / 2\), and any \(t \in [0,T]\) such that \(t = i \Delta t\) for some integer i, there is a quantum algorithm that outputs \(\overline{u}(S)\) such that \(|\overline{u}(S) - \int _S u(\mathbf {x},t) d\mathbf {x}| \le \epsilon \), with probability 0.99, in time \(O((T\alpha d^3 \zeta (\alpha T + L^2)/\epsilon ^2)\log (L \sqrt{d\zeta (\alpha T + L^2)/\epsilon }))\).
Proof
The argument is the same as Theorem 10, except that we use amplitude estimation [28], rather than standard probability estimation. Given a classical Boolean function f that takes as input a sequence s of bits, amplitude estimation allows \(\Pr _s[f(s)=1]\) to be estimated up to accuracy \(\epsilon \), with success probability 0.99, using \(O(1/\epsilon )\) evaluations of f. In this case, we can think of s as the random seed input to a deterministic procedure which first produces a sample from \(\mathbf {\overline{u}}_0\), where \(\overline{\mathbf {u}}_{\mathbf {0}} = (\Delta x)^d \mathbf {u_0}\) as in Lemma 9, and then executes a sequence of i steps of the random walk. Then \(f(s)=1\) if the final position is within S, and \(f(s)=0\) otherwise. This can be used to estimate \(\int _S u(\mathbf {x},t) d\mathbf {x}\) in the same way as the proof of Theorem 10, except that the complexity is lower by a factor of \(\Theta (1/\epsilon )\). \(\square \)
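To make the structure of this estimation concrete, the following sketch (a minimal illustration in Python; the grid size, walk length and region S are hypothetical choices, and for simplicity the walk starts at the origin rather than from a sample of \(\overline{\mathbf {u}}_{\mathbf {0}}\)) shows a Boolean function f of the random seed of the kind whose acceptance probability amplitude estimation would estimate. Classically, accuracy \(\epsilon \) requires \(O(1/\epsilon ^2)\) evaluations of f; amplitude estimation reduces this to \(O(1/\epsilon )\).

```python
import random

def walk_in_S(seed, n=16, d=2, steps=50, S_lo=4, S_hi=8):
    """Deterministic Boolean function of the random seed: run `steps`
    steps of the simple random walk on the torus {0,...,n-1}^d (each
    step picks a dimension and a direction uniformly at random) and
    report whether the endpoint lies in the box [S_lo, S_hi)^d."""
    rng = random.Random(seed)
    pos = [0] * d
    for _ in range(steps):
        axis = rng.randrange(d)
        pos[axis] = (pos[axis] + rng.choice((-1, 1))) % n
    return all(S_lo <= x < S_hi for x in pos)

def classical_estimate(num_samples=2000):
    """Monte Carlo estimate of Pr_s[f(s)=1]; amplitude estimation would
    achieve the same accuracy with quadratically fewer calls to f."""
    hits = sum(walk_in_S(seed) for seed in range(num_samples))
    return hits / num_samples

p_hat = classical_estimate()
```

The essential point is that f is deterministic given the seed, so it can in principle be evaluated in superposition over seeds.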
Note that this approach as described in Theorem 21 uses space \(O(m) = O(T^2 d^2 \alpha ^2 \zeta /\epsilon )\) to store the sequence of movements of the random walk. This is substantially worse than the classical equivalent, which uses space \(O(d \log n) = O(d \log (L^2Td\alpha \zeta /\epsilon ))\). It has been an open problem since 2001 whether quantum algorithms can coherently simulate general classical random walk processes with little space overhead [42]. However, quadratic space overhead over the classical algorithm (which is sufficient to give a polylogarithmic space quantum algorithm) can be achieved using the pseudorandom number generator of Nisan [43] to replace the sequence of O(m) random bits specifying the movements of the walk.
4.5 Fast random walk amplitude estimation approach
Finally, we can also apply amplitude estimation to speed up the algorithm of Theorem 12.
Theorem 22
For any \(S \subseteq [0,L]^d\) such that the corners of S are all integer multiples of \(\Delta x\), shifted by \(\Delta x / 2\), and any \(t \in [0,T]\) such that \(t = i \Delta t\) for some integer i, there is a quantum algorithm that outputs \(\overline{u}(S)\) such that \(|\overline{u}(S) - \int _S u(\mathbf {x},t) d\mathbf {x}| \le \epsilon \), with probability 0.99, in time \( O((d/\epsilon ) \log (TL\alpha d^{5/2} \zeta ^{3/2} ((\alpha T + L^2)/\epsilon )^{3/2} ))\).
Proof
The argument is the same as the proof of Theorem 21. We apply amplitude estimation to the random seed used as input to a procedure for sampling from the initial distribution and the binomial distributions required for the corresponding classical random walk algorithm (Theorem 12). As in the case of Theorem 21, the complexity is lower than the corresponding classical algorithm by a factor of \(\Theta (1/\epsilon )\). \(\square \)
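The structure of the underlying sampling procedure can be sketched as follows (an illustration only: the parameter values are arbitrary, and the binomial samples are drawn naively here, whereas the fast algorithm relies on an efficient binomial sampler [33, 34]). The i steps are first split among the d dimensions, and the net displacement in each dimension is then drawn from a binomial distribution, without simulating individual steps.

```python
import random

def fast_walk_position(i, d, n, rng):
    """Sample the endpoint of i steps of the simple walk on the torus
    {0,...,n-1}^d without simulating individual steps: split the i steps
    among the d dimensions (a multinomial sample, drawn one binomial at
    a time), then draw the number of '+1' moves in each dimension from
    a binomial distribution."""
    pos, remaining = [], i
    for axis in range(d):
        # steps landing on this axis ~ Binomial(remaining, 1/(d - axis));
        # drawn naively here -- the fast algorithm uses an efficient sampler
        k = sum(rng.random() < 1.0 / (d - axis) for _ in range(remaining))
        ups = sum(rng.random() < 0.5 for _ in range(k))  # '+1' moves
        pos.append((2 * ups - k) % n)                    # net displacement
        remaining -= k
    return pos

endpoint = fast_walk_position(40, 3, 8, random.Random(7))
```

Since the procedure is deterministic given the stream of random bits consumed by the samplers, amplitude estimation can again treat those bits as the seed.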
5 Concluding Remarks
We have considered ten algorithms (five classical and five quantum) for solving the heat equation in a hyperrectangular region, and have found that the quantum algorithm for solving linear equations is never the fastest, but that for \(d\ge 2\), a quantum algorithm based on applying amplitude estimation to a classical random walk is the most efficient, achieving a speedup of up to quadratic over the fastest classical algorithm. However, quantum algorithms based on solving linear equations may have other advantages over the classical ones, such as flexibility for more complicated problems, and better space-efficiency.
The heat equation is of interest in itself, but also as a model for understanding the likely performance of quantum algorithms when applied to other PDEs. For example, it was claimed in [11] that a quantum algorithm for solving Poisson’s equation could achieve an exponential speedup over classical algorithms in terms of the spatial dimension d. However, Poisson’s equation can be solved using a classical random walk method which remains polynomial-time even for large d [44]; this method approximates the solution at a particular point, rather than giving the solution in a whole region. It seems likely that other classical approaches to solving PDEs may be able to compete with some apparent exponential quantum speedups, analogously to the “dequantization” approach in quantum machine learning (see [45] and references therein).
Notes
Another standard method for solving the heat equation is the Crank–Nicolson method, which is based on an alternative discretisation scheme to the FTCS method, and has stronger requirements on the smoothness of the solution. The use of this method could lead to a classical algorithm whose complexity is lower than the FTCS linear equations method by at most a factor of \(\epsilon \) (see “Appendix E”); however, this would still not beat the best classical algorithms presented in Table 1.
We would like to thank an anonymous referee for suggesting this point.
We would like to thank an anonymous referee for this suggestion.
References
Harrow, A., Hassidim, A., Lloyd, S.: Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 103, 150502 (2009). arXiv:0811.3171
Leyton, S., Osborne, T.: A quantum algorithm to solve nonlinear differential equations (2008). arXiv:0812.4423
Clader, B., Jacobs, B., Sprouse, C.: Preconditioned quantum linear system algorithm. Phys. Rev. Lett. 110, 250504 (2013). arXiv:1301.2340
Berry, D.: High-order quantum algorithm for solving linear differential equations. J. Phys. A: Math. Gen. 47, 105301 (2014). arXiv:1010.2745
Berry, D., Childs, A., Ostrander, A., Wang, G.: Quantum algorithm for linear differential equations with exponentially improved dependence on precision. Commun. Math. Phys. 356, 1057 (2017). arXiv:1701.03684
Arrazola, J., Kalajdzievski, T., Weedbrook, C., Lloyd, S.: Quantum algorithm for nonhomogeneous linear partial differential equations. Phys. Rev. A 100, 032306 (2019). arXiv:1809.02622
Childs, A., Liu, J.-P.: Quantum spectral methods for differential equations. Commun. Math. Phys. (2020). arXiv:1901.00961
Lubasch, M., Joo, J., Moinier, P., Kiffner, M., Jaksch, D.: Variational quantum algorithms for nonlinear problems (2019). arXiv:1907.09032
Childs, A., Liu, J.-P., Ostrander, A.: High-precision quantum algorithms for partial differential equations (2020). arXiv:2002.07868
Xin, T., Wei, S., Cui, J., Xiao, J., Arrazola, I., Lamata, L., Kong, X., Lu, D., Solano, E., Long, G.: Quantum algorithm for solving linear differential equations: theory and experiment. Phys. Rev. A 101 (2020). arXiv:1807.04553
Cao, Y., Papageorgiou, A., Petras, I., Traub, J., Kais, S.: Quantum algorithm and circuit design solving the Poisson equation. New J. Phys. 15, 013021 (2013). arXiv:1207.2485
Scherer, A., Valiron, B., Mau, S.-C., Alexander, S., van den Berg, E., Chapuran, T.: Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target. Quantum Inf. Process. 16, 1 (2017). arXiv:1505.06552
Wang, S., Wang, Z., Li, W., Fan, L., Wei, Z., Gu, Y.: Quantum fast Poisson solver: the algorithm and modular circuit design (2019). arXiv:1910.09756
Costa, P., Jordan, S., Ostrander, A.: Quantum algorithm for simulating the wave equation. Phys. Rev. A 99 (2019). arXiv:1711.05394
Montanaro, A., Pallister, S.: Quantum algorithms and the finite element method. Phys. Rev. A 93, 032324 (2016). arXiv:1512.05903
Carslaw, H.S., Jaeger, J.C.: Conduction of Heat in Solids. Clarendon Press, Oxford (1959)
Wilmott, P., Howison, S., Dewynne, J.: The Mathematics of Financial Derivatives: A Student Introduction. Cambridge University Press, Cambridge (1995)
Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12, 629 (1990)
Ore, O.: On functions with bounded derivatives. Trans. Am. Math. Soc. 43, 321 (1938)
Lawler, G.: Random Walk and the Heat Equation. American Mathematical Society, Providence (2010)
Kac, M.: Random walk and the theory of Brownian motion. Am. Math. Mon. 54, 369 (1947)
King, G.: Monte-Carlo method for solving diffusion problems. Ind. Eng. Chem. 43, 2475 (1951)
Cliffe, K., Giles, M., Scheichl, R., Teckentrup, A.: Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients. Comput. Vis. Sci. 14, 3 (2011)
Giles, M.: Multilevel Monte Carlo path simulation. Oper. Res. 56, 607 (2008)
Chakraborty, S., Gilyén, A., Jeffery, S.: The power of block-encoded matrix powers: improved regression techniques via faster Hamiltonian simulation. In: Proceedings of 46th International Colloquium on Automata, Languages, and Programming, pp. 33:1–33:14 (2019). arXiv:1804.01973
Apers, S., Sarlette, A.: Quantum fast-forwarding: Markov chains and graph property testing (2018). arXiv:1804.02321
Gilyén, A., Su, Y., Low, G., Wiebe, N.: Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In: Proceedings of 51st Annual ACM Symposium Theory of Computing, pp. 193–204 (2019). arXiv:1806.01838
Brassard, G., Høyer, P., Mosca, M., Tapp, A.: Quantum amplitude amplification and estimation. In: Quantum Computation and Information (Contemporary Mathematics, vol. 305), pp. 53–74 (2002). arXiv:quant-ph/0005055
Iserles, A.: A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press, Cambridge (2009)
LeVeque, R.: Finite Difference Methods for Ordinary and Partial Differential Equations. SIAM, Philadelphia (2007)
Trefethen, L.: Finite difference and spectral methods for ordinary and partial differential equations (1996). http://people.maths.ox.ac.uk/trefethen/pdetext.html
Shewchuk, J.: An introduction to the conjugate gradient method without the agonizing pain. Technical Report CMU-CS-TR-94-125 (Carnegie Mellon University, 1994). http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.ps
Dubhashi, D., Panconesi, A.: Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, Cambridge (2009)
Kachitvichyanukul, V., Schmeiser, B.: Binomial random variate generation. Commun. ACM 31, 216 (1988)
Devroye, L.: Non-uniform Random Variate Generation. Springer-Verlag, New York (1986)
Childs, A., Kothari, R., Somma, R.: Quantum linear systems algorithm with exponentially improved dependence on precision. SIAM J. Comput. 46, 1920 (2017). arXiv:1511.02306
Zalka, C.: Simulating quantum systems on a quantum computer. Proc. R. Soc. A Math. Phys. Eng. Sci. 454, 313 (1998) https://doi.org/10.1098/rspa.1998.0162
Long, G.-L., Sun, Y.: Efficient scheme for initializing a quantum register with an arbitrary superposed state. Phys. Rev. A 64 (2001). arXiv:quant-ph/0104030
Grover, L., Rudolph, T.: Creating superpositions that correspond to efficiently integrable probability distributions (2002). arXiv:quant-ph/0208112
Kaye, P., Mosca, M.: Quantum networks for generating arbitrary quantum states (2004). arXiv:quant-ph/0407102
Sanders, Y.R., Low, G.H., Scherer, A., Berry, D.W.: Black-box quantum state preparation without arithmetic. Phys. Rev. Lett. 122, 020502 (2019). https://doi.org/10.1103/PhysRevLett.122.020502, arXiv:1807.03206
Watrous, J.: Quantum simulations of classical random walks and undirected graph connectivity. J. Comput. Syst. Sci. 62, 376 (2001). arXiv:quant-ph/9812012
Nisan, N.: Pseudorandom generators for space-bounded computation. Combinatorica 12, 449 (1992)
Bauer, W.: The Monte Carlo method. J. Soc. Ind. Appl. Math. 6, 438 (1958)
Chia, N.-H., Gilyén, A., Li, T., Lin, H.-H., Tang, E., Wang, C.: Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning (2019). arXiv:1910.06151
Diaconis, P.: Group Representations in Probability and Statistics. Institute of Mathematical Statistics, Hayward (1988)
Acknowledgements
We would like to thank Jin-Peng Liu and Gui-Lu Long for comments on a previous version, and two anonymous referees for helpful suggestions which improved this work. We acknowledge support from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union’s Horizon 2020 Programme (QuantAlgo project), EPSRC grants EP/R043957/1 and EP/T001062/1, and EPSRC Early Career Fellowship EP/L021005/1. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 817581). No new data were created during this study.
Communicated by H-T. Yau.
Appendices
Appendix A: Runtime of applying a quantum algorithm for ODEs to the heat equation
In this appendix, we sketch the complexity obtained when using the algorithm of Berry et al. [5] to solve the heat equation as a system of ODEs. Note that [5] does not explicitly give a procedure to approximate the \(\ell _2\) norm of the solution vector, which is required to estimate its properties. We will show that, for generating the quantum state corresponding to the solution of the heat equation, the quantum algorithm based on [5] is somewhat worse than the quantum linear equations method of Theorem 17.
In the heat equation (1), if we just discretise \(x_1,\ldots ,x_d\) to the same level of accuracy as specified in Sect. 2.1, then we obtain a system of ODEs of the form
where \(\widetilde{\mathbf {u}}\) is the vector of \(\{u(j_1\Delta x,\ldots ,j_d\Delta x,t): ~ j_1,\ldots ,j_d\in \{0,1,\ldots ,n-1\}\}\),
and
is an \(n\times n\) matrix.
In [5], Berry et al. proposed a quantum algorithm to solve time-independent ODEs \(\frac{d\mathbf {x}}{dt} = A \mathbf {x} + \mathbf {b}\). They assumed that A is diagonalizable and the real parts of the eigenvalues are non-positive. This is satisfied for the heat equation (A1) as shown in the following lemma.
Lemma 23
The eigenvalues of A are \(\{ \lambda _{j_1}+\cdots +\lambda _{j_d}: j_1,\ldots ,j_d \in \{0,1,\ldots ,n-1\}\}\), where
Moreover, A is diagonalized by the d-th tensor product of the quantum Fourier transform.
Proof
Since H is a circulant matrix, it can be diagonalized by the quantum Fourier transform F. Let \(\Lambda =\mathrm{diag}\{\lambda _0,\ldots ,\lambda _{n-1}\}\) denote the diagonal matrix storing the eigenvalues of H; then \(\Lambda F^\dag = F^\dag H\). Set \(c_0=-2,c_1=1,c_2=\cdots =c_{n-2}=0,c_{n-1}=1\). Then \(\Lambda F^\dag |0\rangle = F^\dag H |0\rangle \) gives
For convenience, set \(\omega _{n}=e^{2\pi i/n}\), then
The claimed result follows easily from Eq. (A2). \(\square \)
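As a sanity check of the diagonalisation used in this proof, the following sketch (pure Python; the value of n is illustrative) computes the eigenvalues of the circulant matrix H as the discrete Fourier transform of its first column \((c_0,\ldots ,c_{n-1})\), and compares them with the closed form \(-2+2\cos (2\pi j/n) = -4\sin ^2(\pi j/n)\); the eigenvalues of A are then \(\alpha /\Delta x^2\) times d-fold sums of these values.

```python
import cmath, math

def circulant_eigs_via_dft(c):
    """Eigenvalues of a circulant matrix are the DFT of its first
    column: mu_j = sum_k c_k * omega^(j*k), omega = exp(2*pi*i/n)."""
    n = len(c)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(c[k] * w ** (j * k) for k in range(n)) for j in range(n)]

n = 8
c = [0.0] * n
c[0], c[1], c[n - 1] = -2.0, 1.0, 1.0          # first column of H
eigs = circulant_eigs_via_dft(c)
# closed form: mu_j = -2 + 2 cos(2 pi j / n) = -4 sin^2(pi j / n)
closed = [-4 * math.sin(math.pi * j / n) ** 2 for j in range(n)]
```

Since the first column is symmetric (\(c_1 = c_{n-1}\)), the computed eigenvalues are real up to floating-point error.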
Since we can determine the nonzero entries of H efficiently, we can determine the nonzero entries of A efficiently too. The sparsity of A is \(\Theta (d)\). By Theorem 9 of [5], the quantum state \(|\widetilde{\mathbf {u}}(T)\rangle \) of the ODE (A1) to precision \(\epsilon \) is obtained in time
where \(g=\max _{t\in [0,T]} \Vert \widetilde{\mathbf {u}}(t)\Vert /\Vert \widetilde{\mathbf {u}}(T)\Vert \). By Lemma 23 and Corollary 2,
Thus, the quantum state \(|\widetilde{\mathbf {u}}(T)\rangle \) is obtained in time
Note that in the proof of Theorem 17, equation (72) shows that we can obtain the state \(|\widetilde{u}\rangle \) in time
In comparison, the complexity of the algorithm of [5] has better dependence on d, but is increased by a multiplicative factor \(g \ge 1\). The complexity of obtaining the desired state using the quantum spectral method of Childs and Liu [7] also equals (A9).
Appendix B: Estimation of the condition number
For some of the classical and quantum methods we consider, the condition number of the relevant linear system will be an important component of the algorithms’ overall complexity.
Recall from Eq. (38) that this linear system is
In the following, we will estimate the condition number of the above linear system. For convenience, we let A denote the coefficient matrix.
First we consider the case \(d=1\). In this case
where H is the matrix defined in Eq. (A3). If we define \(\mathcal {T}\) to be the following \(m\times m\) matrix:
then
For convenience, denote
Then by Lemma 23, the eigenvalues of \(\mathcal {L}\) are \(1-\gamma _j\) for \(j=0,1,\ldots ,n-1\). Moreover,
where F is the quantum Fourier transform. It is easy to show that the set of singular values of A is the collection of the singular values of
for all j. Next, we focus on the calculation of the singular values of \(A_j\). Note that if \(\gamma _j=1\), then \(A_j=I\). This case is trivial, so we assume that \(\gamma _j\ne 1\) in the following. From Eq. (B3), it is easy to see that \(A_j\) is nonsingular.
Proposition 24
The eigenvalues of \(A_jA_j^\dag \) have the following form:
where \(\theta \) is nonzero and satisfies
Before proving the above result, we first show how to estimate the condition number of A from this proposition.
Proposition 25
Assuming that \(d=1\), the condition number \(\kappa \) of the linear system (38) is \(\kappa =\Theta (m)\). Moreover, \(\Vert A\Vert =\Theta (1),\Vert A^{-1}\Vert =\Theta (m)\).
Proof
Let \(\sigma _{\max },\sigma _{\min }\) be the maximal and minimal nonzero singular value of A respectively. If \(j=0\), then \(\gamma _j=0\) and \(A_j=\mathcal {T}\). The singular values of \(\mathcal {T}\) are
where \(k=1,\ldots ,m\). A proof of this will be given at the end of this appendix. If we choose \(k=m\), then
To compute the minimal nonzero value of \((\sin \theta /\sin m\theta )^2\) in the interval \([0,\pi ]\), it suffices to focus on the interval \(\theta \in [0,\pi /2]\), since \(|\sin m\theta |\) is periodic in the interval \([0,\pi /2]\), and the periods are \(\{[k\pi /m,(k+1)\pi /m]:k=0,\ldots ,m/2-1\}\). Also, in the interval \([0,\pi /2]\), \(\sin \theta \) is increasing. Since we want to compute the minimal value, we just need to consider the interval \([0,\pi /m]\). Actually, we only need to focus on \([0,\pi /2m]\) because \(|\sin m\theta |\) is symmetric along the line \(\theta =\pi /2m\). When \(\theta \) is small, \(\sin \theta \ge 2\theta /\pi \) and \(\sin m\theta \le m\theta \), so
Therefore, we have
Next, we estimate \(\sigma _{\max }\). Since \(\alpha \Delta t/\Delta x^2 \le 1/2\), we have \(0\le \gamma _j\le 2\). Thus, \((1-\gamma _j)^2 +2(1-\gamma _j)\cos \theta +1 \le 4\). When \(\gamma _j=1\), the eigenvalue is 1, so \(\sigma _{\max }\ge 1\). Note that in the case \(\alpha \Delta t/\Delta x^2=1/2\), \(\gamma _j=1\) is achieved by taking \(j=n/4\) in Eq. (B5). As a result, \(\sigma _{\max }=\Theta (1)\). Together with Eq. (B13), we obtain the claimed result. \(\square \)
Next, we consider the general case \(d>1\). It is easy to see that
The coefficient matrix of the linear system (38) is
Theorem
3 (restated). The largest and smallest singular values of the matrix in (38) satisfy \(\sigma _{\max } = \Theta (1)\), \(\sigma _{\min } = \Theta (1/m)\), respectively. Hence the condition number is \(\Theta (m)\).
Proof
The proof of this theorem is similar to that of Proposition 25. The calculation of the singular values of A can be reduced to calculating the singular values of
where \(j_1,\ldots ,j_d \in \{0,1,\ldots ,n-1\}\). The result of Proposition 24 also holds for \(A_{j_1,\ldots ,j_d} \) by changing \(\gamma _j\) into \(\gamma _{j_1}+\cdots +\gamma _{j_d}\). Let \(\sigma _{\max },\sigma _{\min }\) be the maximal and minimal nonzero singular value respectively.
The estimation of \(\sigma _{\min }\) is the same as that in the proof of Proposition 25. The upper bound is obtained by considering the special case \(\gamma _{j_1}=\cdots =\gamma _{j_d}=0\). Similarly to Eq. (B11), \(\sigma _{\min }\le \pi /(2m+1)\). As for the lower bound, the proof of that in Eq. (B12) is independent of \(\gamma _j\), so it is also true for \(A_{j_1,\ldots ,j_d}\). Thus \(\sigma _{\min }=\Theta (1/m)\).
As for \(\sigma _{\max }\), if we consider the special case \(\gamma _{j_1}=\cdots =\gamma _{j_d}=1/d\), then we obtain \(\sigma _{\max } \ge 1\). This special case is obtained by taking \(j=n/4\) in the case \(d\alpha \Delta t/\Delta x^2 = 1/2\). Since the eigenvalue of \(A_{j_1,\ldots ,j_d}\) also has the form (B8) by changing \(\gamma _j\) into \(\gamma _{j_1}+\cdots +\gamma _{j_d}\), \(\gamma _j = 4 \frac{\alpha \Delta t}{\Delta x^2} \sin ^2 \frac{j \pi }{n}\) and \(d\alpha \Delta t/\Delta x^2 \le 1/2\), we have \(\gamma _{j_1}+\cdots +\gamma _{j_d} \le 4 d\alpha \Delta t/\Delta x^2 \le 2.\) By Eq. (B8), \(\sigma _{\max } \le 4\). Thus \(\sigma _{\max }=\Theta (1)\), and \(\sigma _{\min }=\Theta (1/m)\). \(\square \)
Proof of Proposition 24
For convenience, set \(\beta _j=\gamma _j/(1-\gamma _j)\), then
where
and \(q_j=1/(1+\beta _j)=1-\gamma _j\). In the following, we need to compute the eigenvalues of \(Q_j\). The following lemma describes the characteristic polynomial of \(Q_j\). It is easy to calculate that \(\det (Q_j+2I)=m+1+mq_j\ne 0\) as \(-1\le q_j\le 1\). This means \(-2\) is not an eigenvalue of \(Q_j\). In the following analysis, we will not consider this case.
Lemma 26
Assume that \(\lambda \ne 2\). For any \(m\ge 1\), let
Then
where \(x_1 = \frac{1}{2} (\lambda + \sqrt{\lambda ^2-4}), x_2 = \frac{1}{2} (\lambda - \sqrt{\lambda ^2-4})\), and \(x_1\ne x_2\). Moreover,
Proof
By definition, \(f_m = \lambda f_{m-1} - f_{m-2}\), then \(f_m = \alpha _1 x_1^m + \alpha _2 x_2^m\) for some \(\alpha _1,\alpha _2\). Since \(f_1=\lambda , f_2 = \lambda ^2-1\), we have
Solving the linear system gives
So \(f_m = \frac{x_1^{m+1} - x_2^{m+1}}{x_1-x_2}\). Since \(\lambda \ne 2\), we obtain \(x_1\ne x_2\). By definition,
This completes the proof. \(\square \)
Now we have to solve for \(\lambda \) from Eq. (B22), i.e.,
Dividing both sides of the above equation by \(x_2^{m+1}\), we obtain
Since \(x_1x_2=1\), we have
If \(x_1\) is a solution, then \(x_2=1/x_1\) is also a solution of the above equation. Write \(x_1 = re^{i \theta }\) with \(r>0\). Since \(x_1+x_1^{-1} = \lambda \in \mathbb {R}\), if \(\theta \not \equiv 0 \bmod \pi \), then \(r=1\).
By (B17) and noting that in Lemma 26, \(-\lambda \) is the eigenvalue of \(Q_j\), we obtain that the eigenvalues of \(A_jA_j^\dag \) are of the form
where \(x_1\) runs over all solutions of Eq. (B27). By Eq. (B29) and \(q_j=1/(1+\beta _j)\), we know that \(x_1^{2m+1} (1+x_1 (1+\beta _j)) = x_1+(1+\beta _j)\). Thus \(\sigma /(1-\gamma _j)^2\) can be rewritten as
If \(x_1 \in \mathbb {R}\) with \(|x_1|> 1\), then the first expression of (B33) implies that \(\sigma /(1-\gamma _j)^2\) is exponentially large in m; however, the second expression shows that \(\sigma /(1-\gamma _j)^2\) tends to zero. The same contradiction also appears if \(0<|x_1|< 1\). So if \(x_1 \in \mathbb {R}\), then \(x_1=\pm 1\). We prove this more formally in the following lemma.
Lemma 27
If \(x_1\in \mathbb {R}, |x_1| \ge 1\) and \(x_1^{2m+1} (1+x_1(1+\beta _j)) = x_1+1+\beta _j\), then \(x_1=\pm 1\).
Proof
First assume \(x_1>1\). We have \(x_1^{2m} (1+x_1 (1+\beta _j)) = 1+\frac{1+\beta _j}{x_1}\). The left side is strictly greater than \(1+(1+\beta _j)\), while the right side is strictly smaller than \(1+(1+\beta _j)\), a contradiction. Next assume \(x_1<-1\). Set \(\tilde{x}_1=-x_1>1\); then we have \((1+\beta _j)-\tilde{x}_1=\tilde{x}_1^{2m+1} (\tilde{x}_1(1+\beta _j)-1) \ge \tilde{x}_1(1+\beta _j)-1 > (1+\beta _j)-1\). This means \(\tilde{x}_1<1\), a contradiction. \(\square \)
Due to the two equivalent expressions (B33) for the eigenvalues, a contradiction also arises if \(0<|x_1|<1\). Since \(x_1\ne x_2\), the above lemma means \(x_1 \notin \mathbb {R}\), so the only possibility is \(x_1=e^{i\theta }\) for some \(\theta \). Then \((x_1^{2m}-1)x_1 + (1+\beta _j)(x_1^{2(m+1)} - 1) = 0\) implies that
So \((e^{i m\theta }-e^{-i m\theta }) + (1+\beta _j)(e^{i (m+1)\theta } - e^{-i (m+1)\theta }) = 0\), that is
Thus,
where the last identity (B42) is derived from the identity (B35).
On the other hand,
Substituting \(\beta _j = \gamma _j/(1-\gamma _j)\) into (B35) and (B43) yields the claimed results. \(\square \)
Based on the above calculation, we next compute the singular values of \(\mathcal {T}\), as claimed in Eq. (B10). It suffices to choose \(j=0\) in (B33). If \(j=0\), then \(\gamma _j=\beta _j=0\), so \(x_1\) satisfies \(x_1^{2m+1}(1+x_1) = (1+x_1)\). Since \(x_1\ne -1\), we obtain \(x_1^{2m+1}=1\), i.e., \(e^{i(2m+1)\theta }=1\), thus \(\theta = \frac{2k\pi }{2m+1},\) where \(k=0,\pm 1,\ldots ,\pm m\). Note that \(x_1 \ne x_2\), so \(k\ne 0\). Also note that \(x_1x_2=1\), so we just need to choose \(k=1,2,\ldots ,m\) to determine \(x_1\). For these \(\theta \),
Therefore, the singular values of \(\mathcal {T}\) are \(2 \cos \frac{k\pi }{2m+1}\), where \(k=1,2,\ldots ,m\).
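This claim can be checked numerically; the following sketch makes the assumption (suggested by the time-stepping structure of (38)) that \(\mathcal {T}\) is the \(m\times m\) lower bidiagonal matrix with ones on the diagonal and \(-1\) on the subdiagonal, and compares moments of the claimed squared singular values \(4\cos ^2(k\pi /(2m+1))\) against \(\mathrm {tr}(\mathcal {T}\mathcal {T}^\dag )\), \(\mathrm {tr}((\mathcal {T}\mathcal {T}^\dag )^2)\) and \(\det (\mathcal {T})^2 = 1\).

```python
import math

def bidiag_T(m):
    """m x m lower bidiagonal matrix with 1 on the diagonal and -1 on
    the subdiagonal (assumed form of the single-mode block of the
    time-stepping system, with gamma_j = 0)."""
    return [[1 if i == j else (-1 if i == j + 1 else 0) for j in range(m)]
            for i in range(m)]

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

m = 6
T = bidiag_T(m)
G = matmul(T, transpose(T))                # Gram matrix T T^T
G2 = matmul(G, G)
sigma = [2 * math.cos(k * math.pi / (2 * m + 1)) for k in range(1, m + 1)]
sum_sq = sum(s * s for s in sigma)         # should equal tr(G) = 2m - 1
sum_quart = sum(s ** 4 for s in sigma)     # should equal tr(G^2)
prod_sq = math.prod(s * s for s in sigma)  # should equal det(T)^2 = 1
```

Matching the first two moments and the determinant is of course only a consistency check, not a proof; the proof is the argument above.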
Appendix C: \(\mathcal {L}\) is Well-Conditioned on Nonnegative Vectors
In this appendix, we show that \(\mathcal {L}\) cannot shrink nonnegative vectors too much, implying that the quantum algorithm for solving linear equations can construct a quantum state corresponding to \(\mathcal {L}\mathbf {u_0}\) efficiently, given a quantum state corresponding to \(\mathbf {u_0}\).
Lemma
4 (restated). Let \(\mathcal {L}\) be defined by (13), taking \(\Delta t = \Delta x^2/(2\alpha d)\) as in Corollary 2. Then for all nonnegative vectors \(\mathbf {u}\), \(\Vert \mathcal {L}\mathbf {u}\Vert _2^2 / \Vert \mathbf {u}\Vert _2^2 \ge 1/(2d)\).
Proof
Write \(\mathcal {L} = \sum _{i=1}^d \mathcal {L}_i\), where \(\mathcal {L}_i\) acts only on the i’th coordinate and
This operator corresponds to the matrix
Then
using non-negativity of \(\mathcal {L}_i\) and \(\mathbf {u}\). It is easy to see that the matrix for \(\mathcal {L}_i^2\) has entries all equal to \(\frac{1}{2d^2}\) on the main diagonal, and non-negative entries elsewhere. Therefore, for each i,
and hence \(\Vert \mathcal {L}\mathbf {u}\Vert _2^2 \ge \Vert \mathbf {u}\Vert _2^2/(2d)\). \(\square \)
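The bound of Lemma 4 can also be checked directly on random nonnegative vectors. The following sketch (with small illustrative values of n and d) applies \(\mathcal {L} = \frac{1}{2d}\sum _i (S_i + S_i^{-1})\), the simple random walk matrix obtained for this choice of \(\Delta t\), and records the ratio \(\Vert \mathcal {L}\mathbf {u}\Vert _2^2/\Vert \mathbf {u}\Vert _2^2\).

```python
import random

def apply_L(u, n, d):
    """Apply L = (1/(2d)) * sum_i (S_i + S_i^{-1}) on the torus
    {0,...,n-1}^d: each site sends weight u/(2d) to each of its 2d
    neighbours.  Vectors are indexed by base-n digit strings."""
    def idx(coords):
        v = 0
        for c in coords:
            v = v * n + (c % n)
        return v
    out = [0.0] * (n ** d)
    for flat in range(n ** d):
        coords, rem = [], flat
        for _ in range(d):
            coords.append(rem % n)
            rem //= n
        coords.reverse()
        for i in range(d):
            for s in (-1, 1):
                nb = list(coords)
                nb[i] += s
                out[idx(nb)] += u[flat] / (2 * d)
    return out

rng = random.Random(1)
results = []
for d, n in ((1, 8), (2, 5)):
    u = [rng.random() for _ in range(n ** d)]   # random nonnegative vector
    Lu = apply_L(u, n, d)
    ratio = sum(x * x for x in Lu) / sum(x * x for x in u)
    results.append((d, ratio))
```

By the lemma, every recorded ratio should be at least \(1/(2d)\), whatever nonnegative vector is drawn.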
Appendix D: Bounds on \(\ell _2\) Norm of Solutions to Heat Equation
In this appendix we prove Lemma 14, which gives upper and lower bounds on \(\Vert \mathcal {L}^\tau | 0 \rangle \Vert _2^2\) in the special case where \(\Delta t = \Delta x^2/(2d\alpha )\). To achieve this, we will use Fourier analysis (similarly to “Appendix B”). As in the previous appendix, write \(\mathcal {L} = \sum _{i=1}^d \mathcal {L}_i\), where \(\mathcal {L}_i\) acts only on the i’th coordinate and
Each operator \(\mathcal {L}_i\) is diagonalised by the quantum Fourier transform on \(\mathbb {Z}_n\) and has eigenvalues \(\frac{1}{d} \cos (2 \pi y / n)\) for \(y=0,\dots ,n-1\). Applying the quantum Fourier transform to \(| 0 \rangle \) gives a uniform superposition over all Fourier modes y, which we identify with elements of \(\mathbb {Z}_n\). Then
We also observe that \(\mathcal {L}\) describes a simple random walk on a periodic d-dimensional square lattice. As
where we use \(| 0 \rangle \) to denote the origin, we can interpret \(\Vert \mathcal {L}^\tau | 0 \rangle \Vert _2^2\) as the probability of returning to the origin after \(2\tau \) steps of the random walk.
To complete the proof of Lemma 14 and bound this quantity, we will first handle the simpler 1-dimensional case separately.
Lemma 28
Let \(d=1\) and let \(\mathcal {L}\) be defined by (13), taking \(\Delta t = \Delta x^2/(2\alpha )\) as in Corollary 2. Then
Proof
A lower bound
follows by observing that the probability of returning to 0 after \(2\tau \) steps is lower-bounded by the probability of a random walk on the integers (not considered modulo n) returning to 0 after \(2\tau \) steps, which is exactly \(\left( {\begin{array}{c}2\tau \\ \tau \end{array}}\right) /2^{2\tau }\). Next, we use (D2) to obtain
which is an exact statement for the walk modulo n, and observe that a lower bound of 1/n is immediate from considering only the \(y=0\) term.
For an upper bound, we start with the same expression, and use
The first inequality follows from splitting the sum up as
Using that \(\cos (\theta )^2 = \cos (k\pi \pm \theta )^2\) for \(k \in \mathbb {Z}\), each of the last three sums is upper-bounded by the first one. For example,
note that if n is not a multiple of 2, \(y' = n/2-y\) ranges over values of the form \(i + 1/2\) for integer i. As \(\cos \theta \) is decreasing in the range \(0 \le \theta \le \pi \), replacing the sum with a sum over integers in the range \(\{0,\dots ,n/4\}\) could not make it smaller. The second inequality uses that \(\cos \theta \le e^{-\theta ^2/2}\) for \(\theta \le \pi /2\) [46, Chapter 3, Theorem 2]. \(\square \)
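The quantities in this proof can be checked numerically. The following sketch (with illustrative values of n and \(\tau \)) evaluates the exact return probability \(\frac{1}{n}\sum _y \cos (2\pi y/n)^{2\tau }\) of the walk on \(\mathbb {Z}_n\), and confirms that it dominates both the walk-on-\(\mathbb {Z}\) value \(\binom{2\tau }{\tau }/2^{2\tau }\) and the \(y=0\) term 1/n, and that \(\binom{2\tau }{\tau }/2^{2\tau } \ge 1/(2\sqrt{\tau })\) fails in neither direction claimed (it is a standard Stirling-type lower bound).

```python
import math

def return_prob_cycle(n, tau):
    """Exact probability that the simple walk on Z_n returns to its
    start after 2*tau steps, via the Fourier formula."""
    return sum(math.cos(2 * math.pi * y / n) ** (2 * tau)
               for y in range(n)) / n

def return_prob_line(tau):
    """C(2*tau, tau) / 4^tau: return probability for the walk on Z."""
    return math.comb(2 * tau, tau) / 4 ** tau

checks = []
for n in (8, 16):
    for tau in (1, 2, 5, 10):
        p = return_prob_cycle(n, tau)
        checks.append(p >= return_prob_line(tau) - 1e-12)  # wrapping only helps
        checks.append(p >= 1.0 / n)                        # the y = 0 term alone
        checks.append(return_prob_line(tau) >= 1 / (2 * math.sqrt(tau)))
```

The first inequality reflects that wrapping around the cycle can only add returning paths relative to the walk on \(\mathbb {Z}\).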
Lemma
14 (restated). Let \(\mathcal {L}\) be defined by (13), taking \(\Delta t = \Delta x^2/(2d\alpha )\) as in Corollary 2. Then for any \(\tau \ge 1\),
Proof
We start by proving the upper bound, which is based on the interpretation of \(\langle 0|\mathcal {L}^{2\tau }|0 \rangle \) as the probability of returning to the origin after \(2\tau \) steps of a random walk. Each step corresponds to choosing one of d dimensions uniformly at random, then moving in one of two possible directions in that dimension. The walk returns to the origin after \(2\tau \) steps if it has done so in every dimension. To understand the probability of this event, we use Lemma 28.
Let \(s \in \{1,\dots ,d\}^{2\tau }\) denote the sequence of dimensions chosen by the walk, and let \(N_i(s)\) denote the number of i’s in s. Let p(N) denote the probability that a one-dimensional walk returns to the origin after N steps. Then
using independence of the random walks, conditioned on s. By Lemma 28, we have
By a Chernoff bound, for each \(i \in \{1,\dots ,d\}\),
so using a union bound over i,
as claimed. Next we prove the lower bound. Using
we get a lower bound of \(n^{-d}\) immediately by considering the term \(y_1=\dots =y_d=0\). For the remaining part of the lower bound, we use that from Lemma 28, the probability that a walk on \(\mathbb {Z}_n\) making 2k steps returns to the origin is lower-bounded by \(\frac{1}{2\sqrt{k}}\). So, if each of the d independent random walks makes an even number of steps, the probability that they all simultaneously return to the origin is at least \(\frac{1}{(2\sqrt{\tau })^d}\). It remains to lower-bound the probability that all of the walks make an even number of steps.
Let \(N_e(d,2\tau )\) denote the number of sequences of \(2\tau \) integers between 1 and d such that the number of times that each integer appears in the sequence is even. The probability that all the walks make an even number of steps is \(N_e(d,2\tau ) / d^{2\tau }\). We will show by induction on d that \(N_e(d,2\tau ) \ge d^{2\tau } / 2^d\). For the base case, \(N_e(1,2\tau ) = 1 \ge 1/2\) as required. Then for \(d \ge 2\),
Therefore, with probability at least \(1/2^d\), all of the walks make an even number of steps, and the probability that they all return to the origin after \(2\tau \) steps in total is at least \(\frac{1}{(4\sqrt{\tau })^d}\) as claimed. \(\square \)
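The two ingredients of this proof can be checked numerically on small instances. The sketch below (function names are ours; the walk is taken on \(\mathbb {Z}^d\) rather than \(\mathbb {Z}_n^d\), which agrees for \(2\tau < n\)) estimates the return probability of the d-dimensional walk by Monte Carlo and compares it with the lower bound \((4\sqrt{\tau })^{-d}\), and computes \(N_e(d,2\tau )\) by brute force to check the bound \(N_e(d,2\tau ) \ge d^{2\tau }/2^d\).

```python
import math
import random
from itertools import product

def return_probability(d, tau, samples=100_000, seed=1):
    """Monte Carlo estimate of the probability that a d-dimensional walk
    (pick one of d dimensions uniformly, then step +-1 in it) is back at
    the origin after 2*tau steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        pos = [0] * d
        for _ in range(2 * tau):
            pos[rng.randrange(d)] += rng.choice((-1, 1))
        if all(x == 0 for x in pos):
            hits += 1
    return hits / samples

def N_e(d, two_tau):
    """Count sequences in {1,...,d}^{2*tau} in which every symbol occurs
    an even number of times (brute force; small cases only)."""
    return sum(all(s.count(i) % 2 == 0 for i in range(d))
               for s in product(range(d), repeat=two_tau))

d, tau = 2, 4
est = return_probability(d, tau)
lower = 1 / (4 * math.sqrt(tau)) ** d   # the (4*sqrt(tau))^{-d} lower bound
print(est, lower)                        # the estimate comfortably exceeds the bound
print(N_e(2, 4), N_e(3, 4))             # 8 and 21, vs. bounds 16/4 = 4 and 81/8
```

For \(d=2\), \(\tau =4\) the true return probability is \(\big (2^{-8}\binom{8}{4}\big )^2 \approx 0.075\), well above the bound \(1/64 \approx 0.016\), reflecting the slack in the union-bound step.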
Appendix E: The Crank–Nicolson method for solving the heat equation
The Crank–Nicolson method is a commonly used numerical method for solving the heat equation. It is unconditionally stable, and combines the forward and backward Euler methods. In this appendix, we briefly review the Crank–Nicolson method and discuss how it may reduce the complexity of the linear-equations method for solving our problem. The improvement is at most a factor of \(\epsilon \), depending on the condition number of the induced system of linear equations; numerical evidence for the scaling of the condition number suggests an improvement by a factor of \(\epsilon ^{3/4}\). However, this method requires more stringent assumptions on the smoothness of the solution than the FTCS method, and still leads to an algorithm whose complexity is worse than that of the best classical algorithms given in Table 1, even under the most optimistic assumptions on the scaling of the condition number.
We first consider the dimension 1 case. Below, \(x_i = i\Delta x, t_i = i \Delta t\). By Taylor expanding, we have
where \(\xi \in [t_{j+1/2}, t_{j+1}], \xi '\in [t_{j}, t_{j+1/2}]\). So
Using the average of the second centered differences for \(\frac{\partial ^2 u}{\partial x^2}(x_i,t_{j+1}), \frac{\partial ^2 u}{\partial x^2}(x_i,t_{j})\), we have
The Crank–Nicolson method in dimension 1 is then based on the following approximation (writing \(u_{i,j}\) for the approximation to \(u(x_i,t_j)\)):
\[
\frac{u_{i,j+1}-u_{i,j}}{\Delta t} = \frac{\alpha }{2\Delta x^2}\Big [\big (u_{i+1,j+1}-2u_{i,j+1}+u_{i-1,j+1}\big ) + \big (u_{i+1,j}-2u_{i,j}+u_{i-1,j}\big )\Big ].
\]
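As a minimal numerical sketch of the dimension-1 scheme (function name and toy parameters are ours; we assume homogeneous Dirichlet boundary conditions, and test against the exact decay of the lowest sine mode \(u(x,t) = \sin (\pi x)e^{-\alpha \pi ^2 t}\)):

```python
import numpy as np

def crank_nicolson_1d(u0, alpha, dx, dt, steps):
    """Advance the 1D heat equation with homogeneous Dirichlet boundaries
    by the Crank-Nicolson scheme: (I - lam*T) u^{j+1} = (I + lam*T) u^j,
    where T is the second-difference matrix and lam = alpha*dt/(2*dx^2)."""
    n = len(u0)
    lam = alpha * dt / (2 * dx**2)
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - lam * T    # implicit (backward Euler) half
    B = np.eye(n) + lam * T    # explicit (forward Euler) half
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

n = 50
x = np.linspace(0, 1, n + 2)[1:-1]           # interior grid points
dx = x[1] - x[0]
u = crank_nicolson_1d(np.sin(np.pi * x), alpha=1.0, dx=dx, dt=1e-3, steps=100)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.1)
print(np.max(np.abs(u - exact)))             # small: CN error is O(dt^2 + dx^2)
```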
In the dimension d case, from the heat equation \( {\partial u}/{\partial t} = \alpha \sum _{i=1}^d {\partial ^2 u}/{\partial x_i^2}, \) we know that
Using assumptions (2)–(4) in the main paper, we can bound
by making a further boundedness assumption, consistent with the above, that
Denoting \(\lambda = \alpha \Delta t/(2\Delta x^2)\), the Crank–Nicolson method in dimension d is
\[
u_{\mathbf {x},j+1} - \lambda \sum _{i=1}^d \big (u_{\mathbf {x}+\mathbf {e}_i,j+1} - 2u_{\mathbf {x},j+1} + u_{\mathbf {x}-\mathbf {e}_i,j+1}\big ) = u_{\mathbf {x},j} + \lambda \sum _{i=1}^d \big (u_{\mathbf {x}+\mathbf {e}_i,j} - 2u_{\mathbf {x},j} + u_{\mathbf {x}-\mathbf {e}_i,j}\big ),
\]
where \(\{\mathbf {e}_1,\ldots ,\mathbf {e}_d\}\) is the standard basis of \(\mathbb {R}^d\). Therefore,
We can write this in matrix form as \(L\mathbf {u}_{i+1} = R\mathbf {u}_{i}\), where \(L\) and \(R\) correspond to the implicit and explicit halves of the scheme, and where the notation \(\tilde{\mathbf {u}}_{i}\) and \(\mathbf {u}_{i}\) below is the same as in Theorem 1. From the above analysis,
Assume that \(L\tilde{\mathbf {u}}_{i} = L\mathbf {u}_{i} + E_i\), where \(E_i\) is the error. Then
If \(\Vert R L^{-1} \Vert _\infty = \Theta (1)\), then \( \Vert L\tilde{\mathbf {u}}_{2} - L {\mathbf {u}}_{2}\Vert _\infty \le 2 E. \) Therefore, we have
Setting the above error bound to \(\epsilon /L^{d+2}\), we have
So
Compared with Corollary 2, m has a better dependence on \(\epsilon \).
From numerical tests, the condition number of the induced system of linear equations is of order m. So if we use the classical linear-equations method, similarly to the proof of Theorem 5, the complexity (here we show only the dependence on \(\epsilon \)) becomes
This is better than Theorem 5 (recall that the complexity in Theorem 5 is \(\widetilde{O}(\epsilon ^{-\frac{d}{2} - \frac{3}{2}})\)) by a factor of \(\epsilon ^{3/4}\). Moreover, even if we assume that the condition number is 1, the complexity is \(\widetilde{O}(mn^d)=\widetilde{O}(\epsilon ^{-\frac{d}{2} - \frac{1}{2}})\), which leads to a linear improvement in \(\epsilon \). But this is still worse than the best classical algorithms given in Table 1.
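The claim that the condition number of the induced system grows with m can be illustrated on a toy instance (construction and parameters are ours: dimension 1, fixed \(\lambda \), small grids). Stacking the m time steps \(A\mathbf {u}_{j+1} = B\mathbf {u}_{j}\), with \(A = I - \lambda T\) and \(B = I + \lambda T\), gives a block lower-bidiagonal space-time matrix whose condition number we compute directly:

```python
import numpy as np

def spacetime_matrix(n, m, lam):
    """Block system for m Crank-Nicolson steps on n interior grid points:
    unknowns (u_1, ..., u_m), equations A u_{j+1} - B u_j = 0 (u_0 given),
    with A = I - lam*T (implicit half) and B = I + lam*T (explicit half)."""
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - lam * T
    B = np.eye(n) + lam * T
    M = np.zeros((n * m, n * m))
    for j in range(m):
        M[j*n:(j+1)*n, j*n:(j+1)*n] = A
        if j > 0:
            M[j*n:(j+1)*n, (j-1)*n:j*n] = -B
    return M

n, lam = 8, 0.5
for m in (4, 8, 16, 32):
    print(m, np.linalg.cond(spacetime_matrix(n, m, lam)))  # grows with m
```

This is only a sketch: the paper's numerical tests concern the full parameter regime of the algorithm, not this fixed-\(\lambda \) toy model.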
In summary, the advantage of the Crank–Nicolson method is that the error in approximating \(\partial u/\partial t\) becomes \(O(\Delta t^2)\). Correspondingly, m can be improved from \(m\approx 1/\epsilon \) to \(m\approx 1/\sqrt{\epsilon }\) (see Corollary 2). As a result, the Crank–Nicolson method may improve on the classical linear-equations algorithm of Theorem 5 by at most a factor of \(\epsilon \), at the cost of an additional assumption on the smoothness of the solution. One may also ask whether the other algorithms studied in this work could be improved by using the Crank–Nicolson method rather than FTCS. When \(d\ge 3\), our best classical algorithm is based on a random walk; however, we do not know how to define a random-walk method for the heat equation based on the Crank–Nicolson scheme. When \(d\le 2\), the best classical algorithm is based on the fast Fourier transform, and the Crank–Nicolson scheme also admits a similar FFT-based method. Similarly to Theorem 8, the cost is close to \(n^d\); since in the Crank–Nicolson method we still have \(n\approx 1/\sqrt{\epsilon }\), the complexity does not change.
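The FFT-based variant mentioned above can be sketched as follows (our construction, assuming periodic boundary conditions, under which the second-difference operator is a circulant matrix and hence diagonal in the Fourier basis, so one Crank–Nicolson step is a pointwise multiplication of Fourier coefficients):

```python
import numpy as np

def cn_step_fft(u, lam):
    """One Crank-Nicolson step for the 1D heat equation with periodic
    boundaries, applied in Fourier space: the second-difference operator
    has eigenvalues -4 sin^2(pi*k/n) there, so the update multiplies
    mode k by the amplification factor (1 + lam*mu_k)/(1 - lam*mu_k)."""
    n = len(u)
    mu = -4 * np.sin(np.pi * np.arange(n) / n) ** 2
    g = (1 + lam * mu) / (1 - lam * mu)
    return np.fft.ifft(g * np.fft.fft(u)).real

def cn_step_dense(u, lam):
    """The same step via a dense (circulant) linear solve, for comparison."""
    n = len(u)
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    T[0, -1] = T[-1, 0] = 1.0   # periodic wrap-around
    return np.linalg.solve(np.eye(n) - lam * T, (np.eye(n) + lam * T) @ u)

rng = np.random.default_rng(0)
u = rng.standard_normal(64)
print(np.allclose(cn_step_fft(u, 0.3), cn_step_dense(u, 0.3)))  # True
```

Each step costs \(O(n^d \log n)\) via the FFT instead of a linear solve, matching the near-\(n^d\) cost discussed above.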
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Linden, N., Montanaro, A. & Shao, C. Quantum vs. Classical Algorithms for Solving the Heat Equation. Commun. Math. Phys. 395, 601–641 (2022). https://doi.org/10.1007/s00220-022-04442-6