A Chebyshev Polynomials
This appendix defines the Chebyshev polynomials and presents some of their properties that are useful for our analysis.
For any \(k \in {\mathbb {N}}\), the Chebyshev polynomial of the first kind can be defined as the function
$$\begin{aligned} T_k(x)=\cos (k\arccos x),\quad x\in [-1,1]. \end{aligned}$$
(A.1)
It can be shown that this is a polynomial of degree k in x. For example, we have
$$\begin{aligned} T_0(x)&=1,&T_1(x)&=x,&T_2(x)&=2x^2-1,&T_3(x)&=4x^3-3x,&T_4(x)&=8x^4-8x^2+1. \end{aligned}$$
(A.2)
Using the trigonometric addition formula \(\cos (k+1)\theta +\cos (k-1)\theta =2\cos \theta \cos k\theta \), we have the recurrence
$$\begin{aligned} T_{k+1}(x)=2xT_k(x)-T_{k-1}(x) \end{aligned}$$
(A.3)
(which also provides an alternative definition of the Chebyshev polynomials, starting from the initial conditions \(T_0(x)=1\) and \(T_1(x)=x\)). We also have the bounds
$$\begin{aligned} \begin{aligned} |T_k(x)|&\le 1 \text { for } |x|\le 1,&T_k(\pm 1)&=(\pm 1)^k. \end{aligned} \end{aligned}$$
(A.4)
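As a quick numerical sanity check, the recurrence (A.3) can be compared against the trigonometric definition (A.1). The following plain-Python sketch (illustrative only; the sample points are arbitrary) does so:

```python
import math

def T(k, x):
    """Chebyshev polynomial of the first kind via the recurrence (A.3):
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), with T_0(x) = 1 and T_1(x) = x."""
    a, b = 1.0, x
    for _ in range(k):
        a, b = b, 2 * x * b - a
    return a

# Agreement with the trigonometric definition (A.1) on [-1, 1]
for k in range(8):
    for x in [-1.0, -0.5, 0.0, 0.3, 1.0]:
        assert abs(T(k, x) - math.cos(k * math.acos(x))) < 1e-12
```

At the endpoints the recurrence reproduces the bound (A.4) exactly, since all intermediate values are integers.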
Chebyshev polynomials are orthogonal polynomials on \([-1,1]\) with the weight function \(w(x):=(1-x^2)^{-1/2}\). More concretely, defining an inner product on \(L_w^2(-1,1)\) by
$$\begin{aligned} (f,g)_w&:=\int _{-1}^1 f(x)g(x)\frac{\mathrm {d}{x}}{\sqrt{1-x^2}}, \end{aligned}$$
(A.5)
we have
$$\begin{aligned} (T_m,T_n)_w&=\int _0^{\pi }\cos m\theta \cos n\theta \, \mathrm {d}{\theta } \end{aligned}$$
(A.6)
$$\begin{aligned}&=\frac{\pi }{2}\sigma _n\delta _{m,n} \end{aligned}$$
(A.7)
where
$$\begin{aligned} \sigma _n:= {\left\{ \begin{array}{ll} 2 &{} n=0\\ 1 &{} n\ge 1. \end{array}\right. } \end{aligned}$$
(A.8)
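The orthogonality relation (A.6)–(A.7) is easy to verify numerically. The sketch below is illustrative (the quadrature size `M` is an arbitrary choice); it uses the midpoint rule in θ, which for these integrands is exact whenever m + n < 2M:

```python
import math

def inner_w(m, n, M=64):
    """Approximate (T_m, T_n)_w = ∫_0^π cos(mθ) cos(nθ) dθ by the midpoint
    rule in θ (equivalently, Gauss-Chebyshev quadrature in x)."""
    nodes = ((j + 0.5) * math.pi / M for j in range(M))
    return (math.pi / M) * sum(math.cos(m * t) * math.cos(n * t) for t in nodes)

# (A.7): (T_m, T_n)_w = (π/2) σ_n δ_{mn}, with σ_0 = 2 and σ_n = 1 otherwise
for m in range(5):
    for n in range(5):
        expect = math.pi * (1.0 if m == n == 0 else 0.5 if m == n else 0.0)
        assert abs(inner_w(m, n) - expect) < 1e-12
```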
By the Weierstrass approximation theorem, the set \(\{T_k(x) : k \in {\mathbb {N}}\}\) is complete in the space \(L_w^2(-1,1)\). In other words, we have the following:
Lemma 7
Any function \(u\in L_w^2(-1,1)\) can be expanded by a unique Chebyshev series as
$$\begin{aligned} u(x)=\sum _{k=0}^{\infty }{\hat{c}}_kT_k(x) \end{aligned}$$
(A.9)
where the coefficients are
$$\begin{aligned} {\hat{c}}_k=\frac{2}{\pi \sigma _k}(u,T_k)_w. \end{aligned}$$
(A.10)
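For instance, \(x^3=\frac{1}{4}(3T_1(x)+T_3(x))\), so its Chebyshev coefficients are \({\hat{c}}_1=3/4\) and \({\hat{c}}_3=1/4\). The sketch below recovers them from the inner-product formula (the helper name `cheb_coeff` and the quadrature size are illustrative choices); the \(1/\sigma_k\) factor matches the normalization (A.7), which weights the \(k=0\) term differently:

```python
import math

def cheb_coeff(u, k, M=64):
    """ĉ_k = (2/(π σ_k)) (u, T_k)_w, with the inner product (A.5) evaluated
    by the midpoint rule in θ = arccos x (Gauss-Chebyshev quadrature)."""
    sigma = 2 if k == 0 else 1
    nodes = [(j + 0.5) * math.pi / M for j in range(M)]
    return (2.0 / (sigma * M)) * sum(u(math.cos(t)) * math.cos(k * t)
                                     for t in nodes)

# x^3 = (3 T_1 + T_3)/4, so the coefficients should be [0, 3/4, 0, 1/4, 0]
coeffs = [cheb_coeff(lambda x: x ** 3, k) for k in range(5)]
expected = [0.0, 0.75, 0.0, 0.25, 0.0]
assert all(abs(c - e) < 1e-12 for c, e in zip(coeffs, expected))
```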
For any \(N \in {\mathbb {N}}\), we introduce the orthogonal projection \(P_N :L_w^2(-1,1)\rightarrow \mathbb {P}_N\) (where \(\mathbb {P}_N\) denotes the set of polynomials of degree at most N) by
$$\begin{aligned} P_N u(x)=\sum _{k=0}^N{\hat{c}}_kT_k(x). \end{aligned}$$
(A.11)
By the completeness of the Chebyshev polynomials, we have
$$\begin{aligned} (P_Nu(x),v(x))_w=(u(x),v(x))_w\quad \forall \, v\in \mathbb {P}_N \end{aligned}$$
(A.12)
and
$$\begin{aligned} \Vert P_Nu(x)-u(x)\Vert _w\rightarrow 0,\quad N\rightarrow \infty . \end{aligned}$$
(A.13)
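The convergence (A.13) can also be observed numerically. The sketch below is illustrative (the choice \(u(x)=e^x\) and the quadrature size are arbitrary); it estimates \(\Vert P_Nu-u\Vert _w\) for increasing N:

```python
import math

def proj_error(u, N, M=256):
    """Estimate ||P_N u - u||_w with Gauss-Chebyshev quadrature (midpoint
    rule in θ = arccos x); the 1/σ_k factor keeps the k = 0 coefficient
    from being double-counted."""
    nodes = [(j + 0.5) * math.pi / M for j in range(M)]
    c = [(2.0 / ((2 if k == 0 else 1) * M))
         * sum(u(math.cos(t)) * math.cos(k * t) for t in nodes)
         for k in range(N + 1)]
    def resid(t):
        return u(math.cos(t)) - sum(c[k] * math.cos(k * t) for k in range(N + 1))
    return math.sqrt((math.pi / M) * sum(resid(t) ** 2 for t in nodes))

errs = [proj_error(math.exp, N) for N in (2, 4, 8)]
assert errs[0] > errs[1] > errs[2]    # the error decays rapidly with N
```

For smooth functions such as \(e^x\) the decay is in fact spectral (faster than any polynomial rate in N).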
Finally, we compute the Chebyshev series of \(u'(x)\) in terms of the Chebyshev series of u(x). Since \(T_k(x)=\cos k\theta \) where \(\theta =\arccos x\), we have
$$\begin{aligned} T'_k(x)=\frac{k\sin k\theta }{\sin \theta }. \end{aligned}$$
(A.14)
Since
$$\begin{aligned} 2\cos k\theta =\frac{\sin (k+1)\theta }{\sin \theta }-\frac{\sin (k-1)\theta }{\sin \theta }, \end{aligned}$$
(A.15)
we obtain
$$\begin{aligned} 2T_k(x)=\frac{T'_{k+1}(x)}{k+1}-\frac{T'_{k-1}(x)}{k-1},\quad k\ge 2 \end{aligned}$$
(A.16)
and
$$\begin{aligned} T_1(x)=\frac{T'_2(x)}{4}. \end{aligned}$$
(A.17)
Since \(P_N u(x) \in \mathbb {P}_N\), the derivative of this projection lies in \(\mathbb {P}_{N-1}\). Indeed, we have
$$\begin{aligned} \begin{aligned} u'(x)&=\sum _{k=0}^{N-1}{\hat{c}}'_kT_k(x) \\&=\frac{1}{2}\sum _{k=1}^{N-1}{\hat{c}}'_k\frac{T'_{k+1}(x)}{k+1} -\frac{1}{2}\sum _{k=2}^{N-1}{\hat{c}}'_k\frac{T'_{k-1}(x)}{k-1} +{\hat{c}}'_0T_0(x)\\&=\sum _{k=2}^{N-2}({\hat{c}}'_{k-1}-{\hat{c}}'_{k+1}) \frac{T'_k(x)}{2k}-\frac{1}{2}{\hat{c}}'_{2}T'_1(x) +\frac{1}{2}{\hat{c}}'_{N-2}\frac{T'_{N-1}(x)}{N-1} +\frac{1}{2}{\hat{c}}'_{N-1}\frac{T'_N(x)}{N}+{\hat{c}}'_0T_0(x)\\&=\sum _{k=1}^N{\hat{c}}_kT'_k(x). \end{aligned} \end{aligned}$$
(A.18)
Comparing the coefficients of both sides, we find
$$\begin{aligned} \begin{aligned} \sigma _k{\hat{c}}'_k&={\hat{c}}'_{k+2}+2(k+1){\hat{c}}_{k+1},\quad k\in [{N}]_0\\ {\hat{c}}'_N&=0 \\ {\hat{c}}'_{N+1}&=0 \end{aligned} \end{aligned}$$
(A.19)
where \(\sigma _k\) is defined in (A.8).
Since \({\hat{c}}'_k=0\) for \(k\ge N\), we can calculate \({\hat{c}}'_{N-1}\) from \({\hat{c}}_N\) and then successively calculate \({\hat{c}}'_{N-2},\ldots ,{\hat{c}}'_1,{\hat{c}}'_0\). This recurrence gives
$$\begin{aligned} {\hat{c}}'_k=\frac{2}{\sigma _k}\sum _{\begin{array}{c} j=k+1 \\ j+k \text { odd} \end{array}}^Nj{\hat{c}}_j,\quad k\in [{N}]_0. \end{aligned}$$
(A.20)
Since \({\hat{c}}'_k\) only depends on \({\hat{c}}_j\) for \(j>k\), the transformation matrix \(D_N\) mapping the coefficients \({\hat{c}}_k\) to the derivative coefficients \({\hat{c}}'_k\) for \(k \in [{N+1}]_0\) is strictly upper triangular (in particular, its diagonal vanishes), namely
$$\begin{aligned}{}[D_N]_{kj}= {\left\{ \begin{array}{ll} \frac{2j}{\sigma _k} &{}j>k,~j+k \text { odd} \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(A.21)
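As a concrete check, (A.21) can be implemented directly and applied to the coefficients of \(x^3=\frac{1}{4}(3T_1+T_3)\), whose derivative is \(3x^2=\frac{3}{2}(T_0(x)+T_2(x))\). The sketch below is illustrative (plain Python, dense lists standing in for the matrix):

```python
def D_matrix(N):
    """Differentiation matrix from (A.21): [D_N]_{kj} = 2j/σ_k for j > k
    with j + k odd, and 0 otherwise (size (N+1) x (N+1))."""
    def sigma(k):
        return 2 if k == 0 else 1
    return [[(2.0 * j / sigma(k)) if (j > k and (j + k) % 2 == 1) else 0.0
             for j in range(N + 1)] for k in range(N + 1)]

# x^3 = (3 T_1 + T_3)/4 has coefficients c = [0, 3/4, 0, 1/4];
# its derivative 3x^2 = (3/2) T_0 + (3/2) T_2 should come out as D_3 c.
D = D_matrix(3)
c = [0.0, 0.75, 0.0, 0.25]
c_prime = [sum(D[k][j] * c[j] for j in range(4)) for k in range(4)]
assert c_prime == [1.5, 0.0, 1.5, 0.0]
```

Note that `D_matrix(N)` applied to a coefficient vector reproduces exactly the recurrence result (A.20).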
B An Example of the Quantum Spectral Method
Section 3 defines a linear system that implements the quantum spectral method for solving a system of d time-dependent differential equations. Here we present a simple example of this system for the case \(d=1\), namely
$$\begin{aligned} \frac{\mathrm {d}{x}}{\mathrm {d}{t}}=A(t)x(t)+f(t) \end{aligned}$$
(B.1)
where \(x(t),A(t),f(t)\in {\mathbb {C}}\), \(t\in [0,T]\), and we have the initial condition
$$\begin{aligned} x(0)=\gamma \in {\mathbb {C}}. \end{aligned}$$
(B.2)
In particular, we choose \(m=3\), \(n=2\), and \(p=1\) in the specification of the linear system. We divide [0, T] into \(m=3\) intervals \([0,\Gamma _1],[\Gamma _1,\Gamma _2],[\Gamma _2,T]\) with \(\Gamma _0=0\), \(\Gamma _m=T\), and map each interval onto \([-1,1]\) with the linear map \(K_h\) satisfying \(K_h(\Gamma _h)=1\) and \(K_h(\Gamma _{h+1})=-1\). We then substitute the truncated Chebyshev series of x(t) with \(n=2\) into the differential equation at the interpolation nodes \(\{t_l=\cos \frac{l\pi }{n} : l \in [{2}]\} = \{0,-1\}\) to obtain a linear system. Finally, we repeat the final state \(p=1\) time to increase the success probability.
With these choices, the linear system has the form
$$\begin{aligned} L= \begin{pmatrix} L_1+L_2(A_0) &{} &{} &{} &{} \\ L_3 &{} L_1+L_2(A_1) &{} &{} &{} \\ &{} L_3 &{} L_1+L_2(A_2) &{} &{} \\ &{} &{} L_3 &{} L_4 &{} \\ &{} &{} &{} L_5 &{} L_4 \\ \end{pmatrix} \end{aligned}$$
(B.3)
with
$$\begin{aligned} L_1&=|0\rangle \langle 0|P_n+\sum _{l=1}^n|l\rangle \langle l|P_nD_n= \begin{pmatrix} 1 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} -4 \\ \end{pmatrix} \end{aligned}$$
(B.4)
$$\begin{aligned} L_2(A_h)&=-\sum _{l=1}^nA_h(t_l)\otimes |l\rangle \langle l|P_n=- \begin{pmatrix} 0 &{} 0 &{} 0 \\ A_h(0) &{} 0 &{} -A_h(0) \\ A_h(-1) &{} -A_h(-1) &{} A_h(-1) \\ \end{pmatrix} \end{aligned}$$
(B.5)
$$\begin{aligned} L_3&=-\sum _{i=0}^{d}\sum _{k=0}^n(-1)^k|i0\rangle \langle ik|= \begin{pmatrix} -1 &{} 1 &{} -1 \\ 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{pmatrix} \end{aligned}$$
(B.6)
$$\begin{aligned} L_4&=-\sum _{i=0}^{d}\sum _{l=1}^n|il\rangle \langle il-1|+\sum _{i=0}^{d}\sum _{l=0}^n|il\rangle \langle il|= \begin{pmatrix} 1 &{} 0 &{} 0 \\ -1 &{} 1 &{} 0 \\ 0 &{} -1 &{} 1 \\ \end{pmatrix} \end{aligned}$$
(B.7)
$$\begin{aligned} L_5&=-\sum _{i=0}^{d}|i0\rangle \langle in|= \begin{pmatrix} 0 &{} 0 &{} -1 \\ 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ \end{pmatrix}. \end{aligned}$$
(B.8)
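To confirm the entries in (B.4), one can assemble the pieces for \(n=2\) directly. The sketch below is illustrative plain Python; it assumes `P` is the node-evaluation matrix \([P_n]_{lk}=T_k(t_l)\), a reading consistent with the displayed entries:

```python
import math

n = 2
t = [math.cos(l * math.pi / n) for l in range(n + 1)]   # nodes 1, 0, -1

def T(k, x):
    # Chebyshev recurrence (A.3)
    a, b = 1.0, x
    for _ in range(k):
        a, b = b, 2 * x * b - a
    return a

# Node-evaluation matrix [P]_{lk} = T_k(t_l) and the differentiation
# matrix D_n from (A.21)
P = [[T(k, t[l]) for k in range(n + 1)] for l in range(n + 1)]
D = [[2.0 * j / (2 if k == 0 else 1) if j > k and (j + k) % 2 else 0.0
      for j in range(n + 1)] for k in range(n + 1)]
PD = [[sum(P[l][i] * D[i][k] for i in range(n + 1)) for k in range(n + 1)]
      for l in range(n + 1)]

# L_1: evaluation at s = 1 in row 0, derivative rows at the remaining nodes
L1 = [P[0]] + [PD[l] for l in range(1, n + 1)]
ref = [[1, 1, 1], [0, 1, 0], [0, 1, -4]]
assert all(abs(L1[r][c] - ref[r][c]) < 1e-12
           for r in range(3) for c in range(3))
```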
The vector \(|X\rangle \) has the form
$$\begin{aligned} |X\rangle = \begin{pmatrix} c_0(\Gamma _1) \\ c_1(\Gamma _1) \\ c_2(\Gamma _1) \\ c_0(\Gamma _2) \\ c_1(\Gamma _2) \\ c_2(\Gamma _2) \\ c_0(\Gamma _3) \\ c_1(\Gamma _3) \\ c_2(\Gamma _3) \\ x \\ x \\ x \\ x \\ x \\ x \\ \end{pmatrix} \end{aligned}$$
(B.9)
where \(c_l(\Gamma _{h+1})\) are the Chebyshev series coefficients of the solution on the interval ending at \(\Gamma _{h+1}\), and x is the final state \(x(\Gamma _m)\) (i.e., the value at \(s=-1\) in the last interval).
Finally, the vector \(|B\rangle \) has the form
$$\begin{aligned} |B\rangle = \begin{pmatrix} \gamma \\ f_0(0) \\ f_0(-1) \\ 0 \\ f_1(0) \\ f_1(-1) \\ 0 \\ f_2(0) \\ f_2(-1) \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix} \end{aligned}$$
(B.10)
where \(\gamma \) comes from the initial condition and \(f_{h}(\cos \frac{l\pi }{n})\) is the value of \(f_h\) at the interpolation point \(t_l=\cos \frac{l\pi }{n} \in \{0,-1\}\).
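To see the collocation blocks in action, the sketch below solves a single reference interval directly in the rescaled variable \(s\in [-1,1]\), ignoring the t-to-s rescaling of A; the equation \(\mathrm {d}x/\mathrm {d}s=ax\) with \(a=0.1\) and the error tolerance are illustrative choices. The top row imposes the value at \(s=1\), and the remaining rows collocate the ODE at the interior nodes, mirroring the structure of \(L_1+L_2\):

```python
import math

def T(k, x):
    # Chebyshev recurrence (A.3)
    a, b = 1.0, x
    for _ in range(k):
        a, b = b, 2 * x * b - a
    return a

n = 2
a_coef = 0.1                      # illustrative: solve dx/ds = a_coef * x
t = [math.cos(l * math.pi / n) for l in range(n + 1)]
D = [[2.0 * j / (2 if k == 0 else 1) if j > k and (j + k) % 2 else 0.0
      for j in range(n + 1)] for k in range(n + 1)]

# Row 0: impose x(s=1) = 1; rows 1..n: collocate x'(t_l) - a_coef x(t_l) = 0
M = [[T(k, 1.0) for k in range(n + 1)]]
for l in range(1, n + 1):
    M.append([sum(T(i, t[l]) * D[i][k] for i in range(n + 1))
              - a_coef * T(k, t[l]) for k in range(n + 1)])
b = [1.0, 0.0, 0.0]

# Solve the small system by Gaussian elimination with partial pivoting
for col in range(n + 1):
    piv = max(range(col, n + 1), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n + 1):
        f = M[r][col] / M[col][col]
        for cc in range(col, n + 1):
            M[r][cc] -= f * M[col][cc]
        b[r] -= f * b[col]
c = [0.0] * (n + 1)
for r in range(n, -1, -1):
    c[r] = (b[r] - sum(M[r][cc] * c[cc] for cc in range(r + 1, n + 1))) / M[r][r]

x_end = sum(c[k] * T(k, -1.0) for k in range(n + 1))   # value at s = -1
assert abs(x_end - math.exp(-2 * a_coef)) < 1e-2       # exact: e^{a(s-1)}
```

Even with only \(n=2\), the computed endpoint value agrees with the exact solution to a few digits; larger n gives spectral accuracy.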