
Superconvergence of Ultra-Weak Discontinuous Galerkin Methods for the Linear Schrödinger Equation in One Dimension


Abstract

We analyze the superconvergence properties of ultra-weak discontinuous Galerkin (UWDG) methods with various choices of flux parameters for the one-dimensional linear Schrödinger equation. In our previous work (Chen et al. in J Sci Comput 78(2):772–815, 2019), stability and optimal convergence rates were established for a large class of flux parameters. In this paper, depending on the flux choices and on whether the polynomial degree k is even or odd, we prove superconvergence rates of order 2k or \((2k-1)\) for the cell averages and the numerical flux of the function, as well as order \((2k-1)\) or \((2k-2)\) for the numerical flux of the derivative. In addition, we prove superconvergence of order \((k+2)\) or \((k+3)\) of the DG solution towards a special projection. At a class of special points, the function values and the first and second order derivatives of the DG solution are superconvergent with orders \(k+2, k+1, k\), respectively. The proof relies on the correction function techniques initiated in Cao et al. (SIAM J Numer Anal 52(5):2555–2573, 2014) and applied in Cao et al. (Numer Methods Partial Differ Equ 33(1):290–317, 2017) to direct DG (DDG) methods for diffusion problems. Compared with Cao et al. (2017), the Schrödinger equation poses unique challenges for the superconvergence proof because the equation lacks a dissipation mechanism. One major highlight of our proof is that we introduce specially chosen test functions in the error equation and show superconvergence of the second derivative and of the jumps across cell interfaces of the difference between the numerical solution and the projected exact solution. This technique was originally proposed in Cheng and Shu (SIAM J Numer Anal 47(6):4044–4072, 2010) and is essential to elevate the convergence order in our analysis. Finally, by negative norm estimates, we apply the post-processing technique and show that the accuracy of our scheme can be enhanced to order 2k.
Theoretical results are verified by numerical experiments.


References

  1.

    Adjerid, S., Devine, K.D., Flaherty, J.E., Krivodonova, L.: A posteriori error estimation for discontinuous Galerkin solutions of hyperbolic problems. Comput. Methods Appl. Mech. Eng. 191(11–12), 1097–1112 (2002)

  2.

    Adjerid, S., Massey, T.C.: Superconvergence of discontinuous Galerkin solutions for a nonlinear scalar hyperbolic problem. Comput. Methods Appl. Mech. Eng. 195(25–28), 3331–3346 (2006)

  3.

    Bona, J., Chen, H., Karakashian, O., Xing, Y.: Conservative, discontinuous Galerkin-methods for the generalized Korteweg-de Vries equation. Math. Comput. 82(283), 1401–1432 (2013)

  4.

    Bramble, J.H., Schatz, A.H.: Higher order local accuracy by averaging in the finite element method. Math. Comput. 31(137), 94–111 (1977)

  5.

    Cao, W., Li, D., Yang, Y., Zhang, Z.: Superconvergence of discontinuous Galerkin methods based on upwind-biased fluxes for 1D linear hyperbolic equations. ESAIM: Math. Model. Numer. Anal. 51(2), 467–486 (2017)

  6.

    Cao, W., Liu, H., Zhang, Z.: Superconvergence of the direct discontinuous Galerkin method for convection-diffusion equations. Numer. Methods Partial Differ. Equ. 33(1), 290–317 (2017)

  7.

    Cao, W., Shu, C.-W., Yang, Y., Zhang, Z.: Superconvergence of discontinuous Galerkin method for scalar nonlinear hyperbolic equations. SIAM J. Numer. Anal. 56(2), 732–765 (2018)

  8.

    Cao, W., Zhang, Z., Zou, Q.: Superconvergence of discontinuous Galerkin methods for linear hyperbolic equations. SIAM J. Numer. Anal. 52(5), 2555–2573 (2014)

  9.

    Cessenat, O., Despres, B.: Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz problem. SIAM J. Numer. Anal. 35(1), 255–299 (1998)

  10.

    Chen, A., Li, F., Cheng, Y.: An ultra-weak discontinuous Galerkin method for Schrödinger equation in one dimension. J. Sci. Comput. 78(2), 772–815 (2019)

  11.

    Cheng, Y., Shu, C.-W.: A discontinuous Galerkin finite element method for time dependent partial differential equations with higher order derivatives. Math. Comput. 77(262), 699–730 (2008)

  12.

    Cheng, Y., Shu, C.-W.: Superconvergence of discontinuous Galerkin and local discontinuous Galerkin schemes for linear hyperbolic and convection-diffusion equations in one space dimension. SIAM J. Numer. Anal. 47(6), 4044–4072 (2010)

  13.

    Cockburn, B., Luskin, M., Shu, C.-W., Süli, E.: Enhanced accuracy by post-processing for finite element methods for hyperbolic equations. Math. Comput. 72(242), 577–606 (2003)

  14.

    Cockburn, B., Shu, C.-W.: Runge–Kutta discontinuous Galerkin methods for convection-dominated problems. J. Sci. Comput. 16(3), 173–261 (2001)

  15.

    Davis, P.J.: Circulant Matrices. Wiley, New York (1979)

  16.

    Ji, L., Xu, Y., Ryan, J.K.: Negative-order norm estimates for nonlinear hyperbolic conservation laws. J. Sci. Comput. 54(2), 531–548 (2013)

  17.

    Liang, X., Khaliq, A.Q.M., Xing, Y.: Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrödinger equations. Commun. Comput. Phys. 17(2), 510–541 (2015)

  18.

    Lu, W., Huang, Y., Liu, H.: Mass preserving discontinuous Galerkin methods for Schrödinger equations. J. Comput. Phys. 282, 210–226 (2015)

  19.

    Meng, X., Ryan, J.K.: Discontinuous Galerkin methods for nonlinear scalar hyperbolic conservation laws: divided difference estimates and accuracy enhancement. Numer. Math. 136(1), 27–73 (2017)

  20.

    Mock, M.S., Lax, P.D.: The computation of discontinuous solutions of linear hyperbolic equations. Commun. Pure Appl. Math. 31(4), 423–430 (1978)

  21.

    Reed, W.H., Hill, T.: Triangular mesh methods for the neutron transport equation. Technical report, Los Alamos Scientific Lab., N. Mex. (USA) (1973)

  22.

    Ryan, J., Shu, C.-W., Atkins, H.: Extension of a post processing technique for the discontinuous Galerkin method for hyperbolic equations with application to an aeroacoustic problem. SIAM J. Sci. Comput. 26(3), 821–843 (2005)

  23.

    Shen, J., Tang, T., Wang, L.-L.: Spectral Methods: Algorithms, Analysis and Applications, vol. 41. Springer, Berlin (2011)

  24.

    Shu, C.-W.: Discontinuous Galerkin methods for time-dependent convection dominated problems: Basics, recent developments and comparison with other methods. In: Barrenechea, A.C.G.R., Brezzi, F., Georgoulis, E. (eds.) Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, vol. 114, pp. 369–397. Springer, Switzerland (2016)

  25.

    Xu, Y., Shu, C.-W.: Local discontinuous Galerkin methods for nonlinear Schrödinger equations. J. Comput. Phys. 205(1), 72–97 (2005)

  26.

    Xu, Y., Shu, C.-W.: Optimal error estimates of the semidiscrete local discontinuous Galerkin methods for high order wave equations. SIAM J. Numer. Anal. 50(1), 79–104 (2012)

  27.

    Yang, Y., Shu, C.-W.: Analysis of optimal superconvergence of discontinuous Galerkin method for linear hyperbolic equations. SIAM J. Numer. Anal. 50(6), 3110–3133 (2012)

  28.

    Zhou, L., Xu, Y., Zhang, Z., Cao, W.: Superconvergence of local discontinuous Galerkin method for one-dimensional linear Schrödinger equations. J. Sci. Comput. 73(2–3), 1290–1315 (2017)


Author information

Correspondence to Yingda Cheng.

Additional information


Yingda Cheng: Research is supported by NSF Grants DMS-1453661 and DMS-1720023.

Mengping Zhang: Research supported by NSFC Grant 11871448.

Appendix


Collections of Intermediate Results

In this section, we list some results that will be used in the rest of the appendix. First, we gather some results from [10]. Equations (59)–(63) in [10] yield

$$\begin{aligned} \sum _{j=1}^N\Vert r_j\Vert _\infty \le C, \quad \text {if A2}, \end{aligned}$$
(66)

where \(r_j\) has been defined in (17). Equations (59) and (71) and the first equation in A.3.3 in [10] yield

$$\begin{aligned} \sum _{j=1}^N\Vert r_j\Vert _\infty \le Ch^{-2}, \quad \text {if A3}. \end{aligned}$$
(67)

Next, we provide estimates of the Legendre coefficients in neighboring cells of equal size.

If \(u \in W^{k+2+n,\infty }(I)\), then, expanding \({\hat{u}}_j(\xi )\) in (9) in a Taylor series at \(\xi = -1\), we have for \(m \ge k+1\) that there exists \(z\in [-1,1]\) such that

$$\begin{aligned} \begin{aligned} u_{j,m}&= C \int _{-1}^1 \frac{d^{k+1}}{d\xi ^{k+1}} \Big (\sum _{s=0}^n \frac{d^s {\hat{u}}_j}{d\xi ^s} (-1) \frac{(\xi +1)^s}{s!}\\&\quad + \frac{d^{n+1} {\hat{u}}_j}{d\xi ^{n+1}} (z) \frac{(\xi +1)^{n+1}}{(n+1)!} \Big ) \frac{d^{m-k-1}}{d\xi ^{m-k-1}} (\xi ^2 -1)^{m} d\xi , \\&= \sum _{s=0}^n \theta _s h_j^{k+1+s} u^{(k+1+s)}(x_{{j-\frac{1}{2}}}) + O (h_j^{k+2+n} |u|_{W^{k+2+n,\infty }(I_j)}), \end{aligned} \end{aligned}$$
(68)

where \(\theta _s\) are constants independent of u and \(h_j\).

Therefore, when \(h_j = h_{j+1}\), we apply the Taylor expansion again and compute the difference of the coefficients \(u_{j, m}\) from neighboring cells:

$$\begin{aligned} u_{j, m} - u_{j+1,m}&= -\sum _{s=1}^n \mu _s h_j^{k+1+s} u^{(k+1+s)}(x_{{j-\frac{1}{2}}}) + O (h_j^{k+2+n} |u|_{W^{k+2+n,\infty }(I_j \cup I_{j+1})}). \end{aligned}$$
(69)

Then we obtain the estimate

$$\begin{aligned} |u_{j, m} - u_{j+1,m} + \sum _{s=1}^n \mu _s h_j^{k+1+s} u^{(k+1+s)}(x_{{j-\frac{1}{2}}}) | \le C h^{k+2+n}|u|_{W^{k+2+n,\infty }(I_j \cup I_{j+1})}, \end{aligned}$$
(70)

where \(\mu _s\) are constants independent of u and \(h_j\).

Two Convolution-Like Operators

In the proofs of Lemmas 3.8 and 3.9 in [10], we used Fourier analysis for the error estimates. Here we extract the main ideas and generalize the results to facilitate the proofs of the superconvergence results in Lemmas 3.6 and 4.2.

We define two operators on a periodic function u in \(L^2(I)\):

$$\begin{aligned} \boxtimes _{\lambda }u (x)&= \frac{1}{1-\lambda ^N} \sum _{l=0}^{N-1}\lambda ^l u\left( x+L \frac{l}{N}\right) , \end{aligned}$$
(71a)
$$\begin{aligned} \boxplus u(x)&= \sum _{l=0}^{N-1}(-1)^l \frac{-N + 2l}{2} u\left( x+L \frac{l}{N}\right) , \end{aligned}$$
(71b)

where \(L = b-a\) is the size of I.

Expanding u in a Fourier series, i.e., \(u(x) = \sum _{n=-\infty }^{\infty }{\hat{f}}(n) e^{2\pi inx/L}\), we have

$$\begin{aligned} \begin{aligned} \boxtimes _{\lambda }u (x)&= \frac{1}{1-\lambda ^N} \sum _{l=0}^{N-1}\lambda ^l \sum _{n=-\infty }^{\infty }{\hat{f}}(n) e^{in (\frac{2\pi }{L} x + 2\pi \frac{l}{N})} \\&= \frac{1}{1-\lambda ^N} \sum _{n=-\infty }^{\infty }{\hat{f}}(n) e^{\frac{2\pi }{L} in x} \sum _{l=0}^{N-1}( \lambda e^{i2\pi \frac{n}{N}} )^l \\&= \sum _{n=-\infty }^{\infty }\frac{{\hat{f}}(n)}{1 - \lambda e^{2\pi i \frac{n}{N}}} e^{\frac{2\pi }{L} inx}, \\ \boxplus u(x)&= \sum _{l=0}^{N-1}(-1)^l \frac{-N + 2l}{2} \sum _{n=-\infty }^{\infty }{\hat{f}}(n) e^{in (\frac{2\pi }{L} x + 2\pi \frac{l}{N})} \\&= \sum _{n=-\infty }^{\infty }{\hat{f}}(n) e^{\frac{2\pi }{L} in x} \sum _{l=0}^{N-1}\frac{-N + 2l}{2} (-e^{i2\pi \frac{n}{N}})^l \\&= \sum _{n=-\infty }^{\infty }\frac{-2e^{2\pi i \frac{n}{N}}}{(1 + e^{2\pi i \frac{n}{N}})^2} {\hat{f}}(n) e^{\frac{2\pi }{L} in x}. \end{aligned} \end{aligned}$$
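The Fourier symbol of \(\boxtimes _{\lambda }\) computed above can be checked numerically on a single Fourier mode, since the geometric sum \(\sum _{l=0}^{N-1}(\lambda e^{2\pi i n/N})^l = (1-\lambda ^N)/(1-\lambda e^{2\pi i n/N})\) holds whenever \(\lambda ^N \ne 1\). A minimal Python sketch of this check (the values of N, n, \(\lambda \), and the sample point are arbitrary illustrative choices):

```python
import numpy as np

def box_lambda(u, lam, x, N, L=1.0):
    """Operator (71a): weighted sum of the N translates of u on a period L,
    normalized by 1/(1 - lam^N)."""
    total = sum(lam**l * u(x + L * l / N) for l in range(N))
    return total / (1.0 - lam**N)

N, n, L = 8, 3, 1.0
lam = np.exp(0.7j)                              # |lam| = 1 and lam^N != 1
u = lambda x: np.exp(2j * np.pi * n * x / L)    # single Fourier mode

x0 = 0.3
lhs = box_lambda(u, lam, x0, N, L)
rhs = u(x0) / (1.0 - lam * np.exp(2j * np.pi * n / N))   # Fourier symbol
assert np.isclose(lhs, rhs)

# applying the operator twice squares the symbol
lhs2 = box_lambda(lambda y: box_lambda(u, lam, y, N, L), lam, x0, N, L)
rhs2 = u(x0) / (1.0 - lam * np.exp(2j * np.pi * n / N))**2
assert np.isclose(lhs2, rhs2)
```

The repeated application squaring the symbol is exactly the mechanism behind the recursive identities for \((\boxtimes _{\lambda })^\nu \) below.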

In addition, the operators can be applied recursively to the same function; we have

$$\begin{aligned} \begin{aligned} \boxtimes _{\lambda _1}^{\nu _1} \cdots \boxtimes _{\lambda _n}^{\nu _n} u (x)&= \sum _{n=-\infty }^{\infty }\frac{1}{(1 - \lambda _1e^{2\pi i \frac{n}{N}})^{\nu _1}} \cdots \frac{1}{(1 - \lambda _n e^{2\pi i \frac{n}{N}})^{\nu _n}} {\hat{f}}(n) e^{\frac{2\pi }{L} inx}, \\ (\boxtimes _{\lambda })^\nu u (x)&= \sum _{n=-\infty }^{\infty }\frac{{\hat{f}}(n)}{(1 - \lambda e^{2\pi i \frac{n}{N}})^\nu } e^{\frac{2\pi }{L} inx}, \\ (\boxplus )^\nu u(x)&= \sum _{n=-\infty }^{\infty }\Big (\frac{-2e^{2\pi i \frac{n}{N}}}{(1 + e^{2\pi i \frac{n}{N}})^2} \Big )^\nu {\hat{f}}(n) e^{\frac{2\pi }{L} in x}. \end{aligned} \end{aligned}$$

As shown in the proofs of Lemmas 3.8 and 3.9 in [10], if each \(\lambda _i\), \(i \le n\), is a complex number with \(|\lambda _i| = 1\), independent of h, then

$$\begin{aligned} |\boxtimes _{\lambda _1}^{\nu _1} \cdots \boxtimes _{\lambda _n}^{\nu _n} u (x)| \le C | u |_{W^{1+\sum _{i=1}^n \nu _i,1}(I)}, \quad |(\boxplus )^\nu u(x)| \le C | u |_{W^{1+2\nu ,1}(I)}. \end{aligned}$$
(72)

Proof of Lemma 3.2

Proof

$$\begin{aligned} A_j+B_j= & {} G [L_{j,k-1}^-, L_{j,k}^-] + H [L_{j,k-1}^+, L_{j,k}^+] \\= & {} \frac{1}{2}\begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j}\end{bmatrix} M_+ + \begin{bmatrix} \alpha _1&{}\quad -\beta _2\\ -\beta _1&{}\quad -\alpha _1\end{bmatrix} \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j}\end{bmatrix} M_-, \end{aligned}$$

where

$$\begin{aligned} M_\pm= & {} \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad h_j \end{bmatrix} [L_{j,k-1}^- \pm L_{j,k-1}^+, L_{j,k}^- \pm L_{j,k}^+] \\= & {} \begin{bmatrix} 1\pm (-1)^{k-1} &{}\quad 1\pm (-1)^{k} \\ k(k-1)( 1\pm (-1)^k) &{}\quad k(k+1)(1\pm (-1)^{k+1}) \end{bmatrix}. \end{aligned}$$

Therefore,

$$\begin{aligned} (A_j+B_j)^{-1} = \frac{1}{D_1}M_-^{-1} \begin{bmatrix} -\alpha _1&{}\quad \beta _2- \frac{h_j}{2k(k+(-1)^k)} \\ \beta _1h_j - \frac{k(k - (-1)^k)}{2} &{}\quad \alpha _1h_j \end{bmatrix} \end{aligned}$$

where \(D_1 = \frac{(-1)^kh_j}{2k(k + (-1)^k)} ( (-1)^k \Gamma _j + \Lambda _j)\) is bounded by the definitions of \(\Gamma _j, \Lambda _j\) and the mesh regularity condition. Then

$$\begin{aligned} \begin{aligned} (A_j+B_j)^{-1} G \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix}&= \frac{1}{D_1}M_-^{-1} \begin{bmatrix} -\alpha _1&{}\quad {\tilde{\beta _2}} h h_j^{-1} - \frac{1}{2k(k + (-1)^k)} \\ {\tilde{\beta _1}} h^{-1} h_j - \frac{k(k -(-1)^k)}{2} &{}\quad \alpha _1\end{bmatrix} \\&\qquad \begin{bmatrix} \frac{1}{2}+ \alpha _1&{}\quad -{\tilde{\beta _2}} h h_j^{-1} \\ - {\tilde{\beta _1}} h^{-1} h_j &{}\quad \frac{1}{2}- \alpha _1\end{bmatrix}, \\ (A_j+B_j)^{-1} H \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix}&= \frac{1}{D_1}M_-^{-1} \begin{bmatrix} -\alpha _1&{}\quad {\tilde{\beta _2}} h h_j^{-1} - \frac{1}{2k(k + (-1)^k)} \\ {\tilde{\beta _1}} h^{-1} h_j - \frac{k(k - (-1)^k)}{2} &{}\quad \alpha _1\end{bmatrix} \\&\qquad \begin{bmatrix} \frac{1}{2}- \alpha _1&{}\quad {\tilde{\beta _2}} h h_j^{-1} \\ {\tilde{\beta _1}} h^{-1} h_j &{}\quad \frac{1}{2}+ \alpha _1\end{bmatrix} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {M}}_{j, m}= & {} (A_j+B_j)^{-1} G \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} 1 \\ m(m+1) \end{bmatrix} \\&+(-1)^m (A_j+B_j)^{-1} H \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} 1 \\ -m(m+1) \end{bmatrix}. \end{aligned}$$

By the mesh regularity condition, there exist \(\sigma _1, \sigma _2\) such that \(\sigma _1 h_j \le h \le \sigma _2 h_j\), and the proof is complete. \(\square \)

Proof of Lemma 3.3

By Definition 3.1, \(P^\star _hu |_{I_j}= \sum _{m=0}^{k-2} u_{j,m} L_{j,m} + \acute{u}_{j,k-1}L_{j,k-1} + \acute{u}_{j,k}L_{j,k}.\) We solve for the two coefficients \(\acute{u}_{j,k-1}, \acute{u}_{j,k}\) on every cell \(I_j\) according to definition (14).

If assumption A1 is satisfied, it has been shown in Lemma 3.1 in [10] that (14) is equivalent to (21). Substituting u and \(u_x\) by (8), we obtain the following equation:

$$\begin{aligned} (A_j+B_j) \begin{bmatrix} \acute{u}_{j,k-1} \\ \acute{u}_{j,k} \end{bmatrix} = (A_j+B_j) \begin{bmatrix} {u}_{j,k-1} \\ {u}_{j,k} \end{bmatrix} + \sum _{m=k+1}^\infty u_{j,m} (G L_{j,m}^- + H L_{j,m}^+), \end{aligned}$$
(73)

where the existence and uniqueness of the solution of the system above is ensured by assumption A1, that is, \(\det (A_j+B_j) = 2(-1)^k \Gamma _j \ne 0\). Thus, (22) is proven.

If any of the assumptions A2/A3 is satisfied, we obtain

$$\begin{aligned} A \begin{bmatrix} \acute{u}_{j,k-1} \\ \acute{u}_{j,k} \end{bmatrix} + B \begin{bmatrix} \acute{u}_{j+1,k-1} \\ \acute{u}_{j+1,k} \end{bmatrix} = \sum _{m=k-1}^\infty \big ( u_{j,m} G L_{m}^- + u_{j+1,m} H L_{m}^+ \big ), \end{aligned}$$

which, gathered over all j, forms a global linear system with coefficient matrix M. The solution is

$$\begin{aligned} \begin{aligned} \begin{bmatrix} \acute{u}_{j,k-1} \\ \acute{u}_{j,k} \end{bmatrix}&= \sum _{l=0}^{N-1}r_l A^{-1} \Big ( A \begin{bmatrix} {u}_{j+l,k-1} \\ {u}_{j+l,k} \end{bmatrix} + B \begin{bmatrix} {u}_{j+l+1,k-1} \\ {u}_{j+l+1,k} \end{bmatrix}\\&\quad + \sum _{m=k+1}^\infty u_{j+l,m} G L_{m}^- + u_{j+l+1,m} H L_{m}^+ \Big ), \\&= \sum _{l=0}^{N-1}r_l \Big (\begin{bmatrix} {u}_{j+l,k-1} \\ {u}_{j+l,k} \end{bmatrix} - Q\begin{bmatrix} {u}_{j+l+1,k-1} \\ {u}_{j+l+1,k} \end{bmatrix}\\&\quad + \sum _{m=k+1}^\infty u_{j+l,m} [ L_{k-1}^- , L_k^- ]^{-1} L_{m}^- - u_{j+l+1,m} Q [ L_{k-1}^+ , L_k^+ ]^{-1} L_{m}^+ \Big ) \\&= \begin{bmatrix} {u}_{j,k-1} \\ {u}_{j,k} \end{bmatrix}+ \sum _{m=k+1}^\infty \Big (\sum _{l=1}^{N-1} u_{j+l, m} V_{2,m} + u_{j, m} r_0 [ L_{k-1}^- , L_k^- ]^{-1} \\&\quad \quad L_{m}^- - u_{j+N, m} r_N [ L_{k-1}^- , L_k^- ]^{-1} L_{m}^- \Big ) \\&= \begin{bmatrix} {u}_{j,k-1} \\ {u}_{j,k} \end{bmatrix} +\sum _{m=k+1}^\infty \big (u_{j, m} V_{1,m} + \sum _{l=0}^{N-1}u_{j+l,m} r_l V_{2,m} \big ), \end{aligned} \end{aligned}$$

where \(r_N = Q^N (I_2 - Q^N)^{-1} = r_0 - I_2 \) is used in the third equality. Therefore, (23) is proven. The proof of (24) is given in Lemmas 3.2, 3.4, 3.8, 3.9 in [10].
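The identity \(r_N = Q^N (I_2 - Q^N)^{-1} = r_0 - I_2\) is pure matrix algebra once one notes \(r_0 = (I_2 - Q^N)^{-1}\): indeed \(Q^N (I_2 - Q^N)^{-1} = \big (I_2 - (I_2 - Q^N)\big )(I_2 - Q^N)^{-1} = (I_2 - Q^N)^{-1} - I_2\). A quick numerical sanity check in Python, with a random \(2\times 2\) matrix standing in for Q (any Q with \(I_2 - Q^N\) invertible works):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
Q = rng.standard_normal((2, 2))      # stand-in for Q = -A^{-1}B
I2 = np.eye(2)

QN = np.linalg.matrix_power(Q, N)
r0 = np.linalg.inv(I2 - QN)          # r_0 = (I_2 - Q^N)^{-1}
rN = QN @ r0                         # r_N = Q^N (I_2 - Q^N)^{-1}

# identity used in the third equality above: r_N = r_0 - I_2
assert np.allclose(rN, r0 - I2)
```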

If \(h_j = h_{j+1}\), denote

$$\begin{aligned} {\mathcal {U}}_j = \begin{bmatrix} \acute{u}_{j,k-1} - u_{j,k-1}\\ \acute{u}_{j,k}-u_{j,k} \end{bmatrix} - \begin{bmatrix} \acute{u}_{j+1,k-1} -u_{j+1,k-1} \\ \acute{u}_{j+1,k} -u_{j+1,k} \end{bmatrix} \end{aligned}$$

then by (22) and (23), we have

$$\begin{aligned} \begin{aligned} {\mathcal {U}}_j&= \sum _{m=k+1}^\infty (u_{j,m} - u_{j+1,m}) {\mathcal {M}}_{m}, \quad \text {if A1}, \\ {\mathcal {U}}_j&= \sum _{m=k+1}^\infty \big ( (u_{j, m} - u_{j+1, m}) V_{1,m} + \sum _{l=0}^{N-1}(u_{j+l,m} - u_{j+l+1,m}) r_l V_{2,m} \big ), \ \text {if A2/A3}. \end{aligned} \end{aligned}$$

When assumption A1 is satisfied, (25) is a direct result of (69).

When assumption A2 is satisfied, we have

$$\begin{aligned} \Vert {\mathcal {U}}_j\Vert _\infty \le C (1+\sum _{l=0}^{N-1}\Vert r_l\Vert _{\infty } ) \max _{j+l\in {\mathbb {Z}}_N} h^{k+2} | u^{(k+2)}(x_{j+l-\frac{1}{2}}) | \le Ch^{k+2} | u |_{W^{k+2,\infty }(I)}, \end{aligned}$$

where (66), (69), and the fact that \(V_{1,m}, V_{2,m}, \forall m \ge 0\), are constant matrices independent of h are used in the above inequalities.

When assumption A3 is satisfied, we perform a more detailed computation of \({\mathcal {U}}_j\) and use Fourier analysis to bound it, utilizing smoothness and periodicity. If \(u \in W^{k+2+n, \infty }(I)\),

$$\begin{aligned} \begin{aligned} {\mathcal {U}}_j&= \sum _{l=0}^{N-1}r_l \sum _{m=k+1}^\infty (u_{l+j,m} - u_{l+j+1,m}+ \sum _{s=1}^n \mu _s h_j^{k+1+s} u^{(k+1+s)}(x_{j+l-\frac{1}{2}}))V_{2,m} \\&\quad -\, \sum _{l=0}^{N-1}r_l \sum _{m=k+1}^\infty \sum _{s=1}^n \mu _s h_j^{k+1+s} u^{(k+1+s)}(x_{j+l-\frac{1}{2}}) V_{2,m} + \sum _{m=k+1}^\infty (u_{j, m} - u_{j+1, m}) V_{1,m}. \end{aligned} \end{aligned}$$

When \(\frac{|\Gamma |}{|\Lambda |} < 1\), \(Q=-A^{-1}B\) has two complex eigenvalues \(\lambda _1, \lambda _2\) with \(|\lambda _1| = |\lambda _2| =1\). By (59) of [10], we have \(r_l= \frac{\lambda _1^l}{1-\lambda _1^N} Q_1 + \frac{\lambda _2^l}{1-\lambda _2^N} (I_2 - Q_1)\), where \(Q_1\) is a constant matrix independent of h, defined in (60) and (61) in [10].

Thus, by (69) and (70)

$$\begin{aligned} \begin{aligned} \Vert {\mathcal {U}}_j \Vert _\infty&\le C \sum _{l=0}^{N-1}\Vert r_l\Vert _{\infty } h^{k+2+n} |u|_{W^{k+2+n,\infty }(I)} + C h^{k+2}|u|_{W^{k+2,\infty }(I)}\\&\quad +\, \Vert \sum _{m=k+1}^\infty \sum _{s = 1}^n \mu _s h^{k+1+s} (Q_1 \boxtimes _{\lambda _1} + (I_2-Q_1) \boxtimes _{\lambda _2})u^{(k+1+s)}(x_{j-\frac{1}{2}}) V_{2,m} \Vert _\infty . \end{aligned} \end{aligned}$$

By (72), we have

$$\begin{aligned} \Vert {\mathcal {U}}_j \Vert _\infty \le C_1 h^{k+2}. \end{aligned}$$

When \(\frac{|\Gamma |}{|\Lambda |} =1\), \(Q=-A^{-1}B\) has a repeated eigenvalue. By (71) of [10], we have \(r_l = \frac{(-1)^l}{2} I_2 + (-1)^l \frac{-N+2l}{4\Gamma } Q_2\), where \(Q_2/ \Gamma \) is a constant matrix. We estimate \({\mathcal {U}}_j\) by the same procedure as in the previous case and obtain

$$\begin{aligned} \begin{aligned} \Vert {\mathcal {U}}_j \Vert _\infty&\le C \sum _{l=0}^{N-1}\Vert r_l\Vert _{\infty } h^{k+2+n} |u|_{W^{k+2+n,\infty }(I)} + C h^{k+2}|u|_{W^{k+2,\infty }(I)}\\&\quad +\, \Vert \sum _{m=k+1}^\infty \sum _{s = 1}^n \mu _s h^{k+1+s} \frac{1}{2}(\boxtimes _{-1} + \frac{Q_2}{\Gamma } \boxplus ) u^{(k+1+s)}(x_{j-\frac{1}{2}}) V_{2,m} \Vert _\infty \le C_1 h^{k+2}. \end{aligned} \end{aligned}$$

Finally, the estimate of \({\mathcal {U}}_j\) is complete under all assumptions, and (25) is proven.

Proof of Lemma 3.6

Proof

By the definition of \(P^\dagger _h\), the coefficients \(\grave{u}_{j, k-1}, \grave{u}_{j, k}\) satisfy a linear algebraic system similar to (22). That is, under assumption A2 or A3, the existence and uniqueness condition is \(\det (A+B) = 2( (-1)^k \Gamma + \Lambda ) \ne 0\). Thus,

$$\begin{aligned} \begin{bmatrix} \grave{u}_{j,k-1} \\ \grave{u}_{j,k} \end{bmatrix} = \begin{bmatrix} u_{j,k-1} \\ u_{j,k} \end{bmatrix} + \sum _{m=k+1}^\infty u_{j,m} {\mathcal {M}}_{m}. \end{aligned}$$
(74)

Then, by (19) and (10), (28) is proven.

If any of the assumptions A2/A3 is satisfied, then the difference can be written as

$$\begin{aligned} Wu|_{I_j} = P^\star _hu|_{I_j} - P^\dagger _hu|_{I_j} = (\acute{u}_{j,k-1} - \grave{u}_{j,k-1} )L_{j,k-1} + (\acute{u}_{j,k} - \grave{u}_{j,k} ) L_{j,k}. \end{aligned}$$

The properties of \(P^\star _hu\) and \(P^\dagger _hu\) yield the following coupled system

$$\begin{aligned}&A \begin{bmatrix} \acute{u}_{j,k-1} - \grave{u}_{j,k-1} \\ \acute{u}_{j,k} - \grave{u}_{j,k} \end{bmatrix} + B \begin{bmatrix} \acute{u}_{j+1,k-1} - \grave{u}_{j+1,k-1} \\ \acute{u}_{j+1,k} - \grave{u}_{j+1,k} \end{bmatrix} = \begin{bmatrix} {\tau }_{j} \\ {\iota }_{j} \end{bmatrix}, \quad \forall j \in {\mathbb {Z}}_N,\\&\begin{bmatrix} \tau _j\\ \iota _j \end{bmatrix} = \begin{bmatrix} u \\ u_x \end{bmatrix} \bigg |_{x_{j + \frac{1}{2}}} - G \begin{bmatrix} P^\dagger _hu \\ (P^\dagger _hu)_x \end{bmatrix} \bigg |_{x_{j+\frac{1}{2}}}^- - H \begin{bmatrix} P^\dagger _hu \\ (P^\dagger _hu)_x \end{bmatrix} \bigg |_{x_{j+\frac{1}{2}}}^+ \\&= G \begin{bmatrix} (u -P^\dagger _hu)|_{x_{j+\frac{1}{2}}}^- - (u -P^\dagger _hu)|_{x_{j+\frac{3}{2}}}^- \\ (u -P^\dagger _hu)_x|_{x_{j+\frac{1}{2}}}^- - (u -P^\dagger _hu)_x|_{x_{j+\frac{3}{2}}}^- \end{bmatrix}, \end{aligned}$$

where the second equality follows from the definition (27b) of \(P^\dagger _hu\).

Gathering the relations above for all j results in a large \(2N \times 2N\) linear system whose coefficient matrix is the block circulant matrix M defined in (15); the solution is

$$\begin{aligned} \begin{bmatrix} \acute{u}_{j,k-1} - \grave{u}_{j,k-1} \\ \acute{u}_{j,k} - \grave{u}_{j,k} \end{bmatrix} = \sum _{l=0}^{N-1} r_l A^{-1} \begin{bmatrix} {\tau }_{l+j} \\ {\iota }_{l+j} \end{bmatrix} , \quad j \in {\mathbb {Z}}_N, \end{aligned}$$

where by periodicity, when \(l+j> N\), \(\tau _{l+j} = \tau _{l+j-N}, \iota _{l+j} = \iota _{l+j-N}\).

On a uniform mesh, by the definition of \(R_{j,m}\) in (31), \(R_{j,m}(1)\) and \((R_{j,m})_x(1)\) are independent of j. We denote the corresponding values by \(R_m(1)\) and \((R_m)_x(1)\) and let \(R^-_{m} = [R_m(1), (R_m)_x(1)]^T\). By (30), we have

$$\begin{aligned} \begin{bmatrix} (u -P^\dagger _hu)|_{x_{j+\frac{1}{2}}}^- - (u -P^\dagger _hu)|_{x_{j+\frac{3}{2}}}^- \\ (u -P^\dagger _hu)_x|_{x_{j+\frac{1}{2}}}^- - (u -P^\dagger _hu)_x|_{x_{j+\frac{3}{2}}}^- \end{bmatrix} = \sum _{m=k+1}^\infty (u_{j,m} - u_{j+1,m}) R^-_m \end{aligned}$$

and

$$\begin{aligned} \begin{bmatrix} \acute{u}_{j,k-1} - \grave{u}_{j,k-1} \\ \acute{u}_{j,k} - \grave{u}_{j,k} \end{bmatrix} = \sum _{l=0}^{N-1} r_l \big ( \sum _{m=k+1}^\infty (u_{l+j,m} - u_{l+j+1,m}) A^{-1} G R^-_m \big ) \doteq {\mathcal {S}}_j, \quad j \in {\mathbb {Z}}_N.\nonumber \\ \end{aligned}$$
(75)

We can estimate \({\mathcal {S}}_j\) along the same lines as the estimate of \({\mathcal {U}}_j\) in Appendix A.3, and (29) is proven. \(\square \)

Proof of Lemma 4.1

Proof

By the error equation, the symmetry of \(A(\cdot , \cdot )\), and the definition of \(s_h,\) we have

$$\begin{aligned} 0=a(e, v_h)= & {} a(\epsilon _h, v_h) + a(\zeta _h, v_h)=\int _I s_h v_h dx \nonumber \\&+\int _I (\zeta _h)_t v_h dx-iA(v_h,\zeta _h), \quad \forall v_h \in V_h^k. \end{aligned}$$
(76)

Now, we choose three special test functions to extract the superconvergence properties (35)–(37) of \(\zeta _h.\) We first prove (35). Due to the invertibility of the coefficient matrix M, there exists a nontrivial function \(v_1 \in V_h^k\), such that \(\forall j \in {\mathbb {Z}}_N, v_1|_{I_j} = \alpha _{j,k-1}L_{j,k-1} + \alpha _{j,k} L_{j,k} + \overline{(\zeta _h)_{xx}}\), \(\int _{I_j}v_1 (\zeta _h)_{xx} dx = \Vert (\zeta _h)_{xx}\Vert _{L^2(I_j)}^2\), \({\hat{v}}_1 |_{{j+\frac{1}{2}}} = 0\) and \(\widetilde{(v_1)_x}|_{{j+\frac{1}{2}}} = 0\). Thus \(A(v_1, \zeta _h) = \Vert (\zeta _h)_{xx}\Vert ^2\). Let \(v_h=v_1;\) then (76) becomes

$$\begin{aligned} 0=\int _I s_h v_1 dx +\int _I (\zeta _h)_t v_1 dx-i \Vert (\zeta _h)_{xx}\Vert ^2. \end{aligned}$$

Hence \(\Vert (\zeta _h)_{xx}\Vert ^2 \le \Vert s_h + (\zeta _h)_t\Vert \cdot \Vert v_1\Vert .\) In order to show the estimates for \(\Vert (\zeta _h)_{xx}\Vert ,\) it remains to estimate \(\Vert v_1\Vert .\)

When the assumption A1 holds, the definition of \(v_1\) yields the following local system for each pair of \(\alpha _{j,k-1}\) and \(\alpha _{j,k}\),

$$\begin{aligned} (A_j + B_j) \begin{bmatrix} \alpha _{j,k-1} \\ \alpha _{j,k} \end{bmatrix} = -G \begin{bmatrix} \overline{(\zeta _h)_{xx}^-} \\ \overline{(\zeta _h)_{xxx}^-} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} - H \begin{bmatrix} \overline{(\zeta _h)_{xx}^+} \\ \overline{(\zeta _h)_{xxx}^+} \end{bmatrix} \bigg |_{{j-\frac{1}{2}}}, \quad \forall j \in {\mathbb {Z}}_N. \end{aligned}$$

By simple algebra

$$\begin{aligned} \begin{bmatrix} \alpha _{j,k-1} \\ \alpha _{j,k} \end{bmatrix} = -(A_j + B_j)^{-1}G \begin{bmatrix} 1 &{} 0 \\ 0 &{} \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} \overline{(\zeta _h)_{xx}^-} \\ h_j \overline{(\zeta _h)_{xxx}^-} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} - (A_j + B_j)^{-1}H \begin{bmatrix} 1 &{} 0 \\ 0 &{} \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} \overline{(\zeta _h)_{xx}^+} \\ h_j \overline{(\zeta _h)_{xxx}^+} \end{bmatrix} \bigg |_{{j-\frac{1}{2}}}. \end{aligned}$$
(77)

By the orthogonality of the Legendre polynomials, it follows that

$$\begin{aligned} \begin{aligned} \Vert v_1\Vert ^2_{L^2(I_j)}&= |\alpha _{j,k-1}|^2 \int _{I_j}L_{j,k-1}^2 dx + |\alpha _{j,k}|^2 \int _{I_j}L_{j,k}^2 dx + \Vert (\zeta _h)_{xx}\Vert _{L^2(I_j)}^2\\&\le C( h_j \Vert (\zeta _h)_{xx} \Vert _{L^2(\partial I_j)}^2 + h_j^3 \Vert (\zeta _h)_{xxx} \Vert _{L^2(\partial I_j)}^2 + \Vert (\zeta _h)_{xx} \Vert _{L^2(I_j)}^2) \le C \Vert (\zeta _h)_{xx} \Vert _{L^2(I_j)}^2, \end{aligned} \end{aligned}$$

where Lemma 3.2, trace inequalities, and inverse inequalities are used in the above inequality. Therefore, (35) is proven when assumption A1 is satisfied.

Similarly, we define \(v_2 \in V_h^k\), such that \(\forall j \in {\mathbb {Z}}_N, v_2|_{I_j} = \alpha _{j,k-1}L_{j,k-1} + \alpha _{j,k} L_{j,k}\), \(\int _{I_j}v_2 (\zeta _h)_{xx} dx = 0\), \({\hat{v}}_2 |_{{j+\frac{1}{2}}} = 0\) and \(\widetilde{(v_2)_x}|_{{j+\frac{1}{2}}} = \overline{[\zeta _h]}|_{{j+\frac{1}{2}}}\). Thus \(A(v_2, \zeta _h) = -\sum _{j=1}^N|[\zeta _h]|_{j+\frac{1}{2}}^2\). When assumption A1 is satisfied, this definition yields the following local system for each pair of \(\alpha _{j,k-1}\) and \(\alpha _{j,k}\):

$$\begin{aligned} (A_j + B_j) \begin{bmatrix} \alpha _{j,k-1} \\ \alpha _{j,k} \end{bmatrix} = G \begin{bmatrix} 0 \\ \overline{[\zeta _h]} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} + H \begin{bmatrix} 0 \\ \overline{[\zeta _h]} \end{bmatrix} \bigg |_{{j-\frac{1}{2}}}, \quad \forall j \in {\mathbb {Z}}_N. \end{aligned}$$

By the same algebra as above, we have

$$\begin{aligned} \begin{bmatrix} \alpha _{j,k-1} \\ \alpha _{j,k} \end{bmatrix} = (A_j + B_j)^{-1}G \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} 0 \\ h_j \overline{[\zeta _h]} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} + (A_j + B_j)^{-1} H \begin{bmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \frac{1}{h_j} \end{bmatrix} \begin{bmatrix} 0 \\ h_j \overline{[\zeta _h]} \end{bmatrix} \bigg |_{{j-\frac{1}{2}}}. \end{aligned}$$

By Lemma 3.2, it follows directly that

$$\begin{aligned} \Vert v_2\Vert ^2_{L^2(I_j)} \le C h^3_j ( |{[\zeta _h]}|^2_{{j+\frac{1}{2}}} + |{[\zeta _h]}|^2_{{j-\frac{1}{2}}}). \end{aligned}$$

Plugging \(v_2\) into (76), we obtain

$$\begin{aligned} \sum _{j=1}^N|[\zeta _h]|_{{j+\frac{1}{2}}}^2 = i \int _I s_h v_2 dx +i \int _I (\zeta _h)_t v_2 dx \le \Vert s_h + (\zeta _h)_t\Vert \Vert v_2\Vert . \end{aligned}$$

Therefore, (36) is proven when assumption A1 is satisfied.

Finally, we choose \(v_3 \in V_h^k\), such that \(\forall j \in {\mathbb {Z}}_N, v_3|_{I_j} = \alpha _{j,k-1}L_{j,k-1} + \alpha _{j,k} L_{j,k}\), with \(\int _{I_j}v_3 (\zeta _h)_{xx} dx = 0\), \({\hat{v}}_3 |_{{j+\frac{1}{2}}} = \overline{[(\zeta _h)_x]}|_{{j+\frac{1}{2}}}\) and \(\widetilde{(v_3)_x}|_{{j+\frac{1}{2}}} = 0\). Thus \(A(v_3, \zeta _h) = \sum _{j=1}^N|[(\zeta _h)_x]|_{j+\frac{1}{2}}^2\). Following the same lines as the estimates for \(v_2\), we end up with the estimate

$$\begin{aligned} \Vert v_3 \Vert _{L^2(I_j)}^2 \le C h_j ( |{[(\zeta _h)_x]}|^2_{{j+\frac{1}{2}}} + |{[(\zeta _h)_x]}|^2_{{j-\frac{1}{2}}}). \end{aligned}$$

Plugging \(v_3\) into (76), we obtain (37) when assumption A1 is satisfied.

Under assumption A2, we need to compute \(\sum _{j=1}^N(|\alpha _{j,k-1}|^2 + |\alpha _{j,k}|^2)\) to estimate \(\Vert v_1 \Vert ^2\). The definition of \(v_1\) yields the following coupled system

$$\begin{aligned} A \begin{bmatrix} \alpha _{j,k-1} \\ \alpha _{j,k} \end{bmatrix} + B \begin{bmatrix} \alpha _{j+1,k-1} \\ \alpha _{j+1,k} \end{bmatrix} = -G \begin{bmatrix} \overline{(\zeta _h)_{xx}^-} \\ \overline{(\zeta _h)_{xxx}^-} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} - H \begin{bmatrix} \overline{(\zeta _h)_{xx}^+} \\ \overline{(\zeta _h)_{xxx}^+} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}}, \quad j \in {\mathbb {Z}}_N.\nonumber \\ \end{aligned}$$
(78)

Write it in matrix form

$$\begin{aligned} M \varvec{\alpha }= \varvec{b}, \quad \varvec{\alpha }= [ \varvec{\alpha }_1, \ldots , \varvec{\alpha }_N]^T, \end{aligned}$$

where M is defined in (15) and

$$\begin{aligned} \varvec{\alpha }_j = [\alpha _{j,k-1}, \alpha _{j, k}], \varvec{b} = [ \varvec{b}_1, \ldots , \varvec{b}_N]^T, \varvec{b}_j = -G \begin{bmatrix} \overline{(\zeta _h)_{xx}^-} \\ \overline{(\zeta _h)_{xxx}^-} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} - H \begin{bmatrix} \overline{(\zeta _h)_{xx}^+} \\ \overline{(\zeta _h)_{xxx}^+} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}}. \end{aligned}$$

Multiplying (78) by \(A^{-1}\) from the left, we get the equivalent system

$$\begin{aligned} M' \varvec{\alpha }= \varvec{b}', \quad M' = circ(I_2, A^{-1}B, 0_2, \cdots , 0_2), \varvec{b}' = [ \varvec{b}_1', \ldots , \varvec{b}_N']^T, \varvec{b}_j' = A^{-1} \varvec{b}_j, \end{aligned}$$

and \((M')^{-1} = circ(r_0, \ldots , r_{N-1})\). By Theorem 5.6.4 in [15], and similarly to the proof of Lemma 3.1 in [6],

$$\begin{aligned} M' = ({\mathcal {F}}_{N}^* \otimes I_2) \varvec{\Omega }({\mathcal {F}}_{N} \otimes I_2), \end{aligned}$$

where \({\mathcal {F}}_{N}\) is the \(N \times N\) discrete Fourier transform matrix defined by \(({\mathcal {F}}_N)_{ij} = \frac{1}{\sqrt{N}} {\overline{\omega }}^{(i-1)(j-1)}\), \(\omega = e^{i\frac{2\pi }{N}}\). The matrix \({\mathcal {F}}_{N}\) is symmetric and unitary, and

$$\begin{aligned} \varvec{\Omega }= \text {diag}( I_2 + A^{-1}B, I_2 + \omega A^{-1}B , \cdots , I_2 + \omega ^{N-1} A^{-1}B ). \end{aligned}$$

The assumption \(\frac{\left| \Gamma \right| }{\left| \Lambda \right| } > 1\) in A2 ensures that the eigenvalues of \(Q = -A^{-1}B\) (see (52) in [10]) are not 1; thus \(I_2 + \omega ^j A^{-1}B\) is nonsingular for all j, and \(\varvec{\Omega }\) is invertible. Then

$$\begin{aligned} | \rho ( (M')^{-1}) | = \Vert (M')^{-1} \Vert _2 \le \Vert {\mathcal {F}}_{N}^* \otimes I_2 \Vert _2 \Vert \varvec{\Omega }\Vert _2 \Vert {\mathcal {F}}_{N} \otimes I_2 \Vert _2 \le C. \end{aligned}$$
(79)

Therefore,

$$\begin{aligned}&\sum _{j=1}^N(|\alpha _{j,k-1}|^2 + |\alpha _{j,k}|^2) = \varvec{\alpha }^T \varvec{\alpha }= (\varvec{b}')^T (M')^{-T} (M')^{-1} \varvec{b}' \\&\quad \le \Vert (M')^{-1} \Vert _2^2 \Vert \varvec{b}'\Vert _2^2 \le C \sum _{j=1}^N\Vert \varvec{b}_j'\Vert _2^2. \end{aligned}$$

Since \(A^{-1}G \begin{bmatrix} 1 &{} 0 \\ 0 &{} \frac{1}{h} \end{bmatrix}, A^{-1}H \begin{bmatrix} 1 &{} 0 \\ 0 &{} \frac{1}{h} \end{bmatrix}\) are constant matrices, we have

$$\begin{aligned} \begin{aligned} \Vert \varvec{b}_j'\Vert _2^2&\le C \left( \left\| \begin{bmatrix} \overline{(\zeta _h)_{xx}^-} \\ h \overline{(\zeta _h)_{xxx}^-} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} \right\| _2^2 + \left\| \begin{bmatrix} \overline{(\zeta _h)_{xx}^+} \\ h \overline{(\zeta _h)_{xxx}^+} \end{bmatrix} \bigg |_{{j+\frac{1}{2}}} \right\| _2^2 \right) \\&\le C( \Vert (\zeta _h)_{xx} \Vert _{L^2(\partial I_j)}^2 + \Vert (\zeta _h)_{xx} \Vert _{L^2(\partial I_{j+1})}^2), \end{aligned} \end{aligned}$$

where the inverse inequality is used to obtain the last inequality. Finally, we obtain the estimate

$$\begin{aligned} \begin{aligned} \Vert v_1 \Vert ^2&= \sum _{j=1}^N|\alpha _{j,k-1}|^2 \Vert L_{j,k-1} \Vert _{L^2 (I_j)}^2 + \sum _{j=1}^N|\alpha _{j,k}|^2 \Vert L_{j,k} \Vert _{L^2 (I_j)}^2 + \Vert (\zeta _h)_{xx} \Vert ^2 \\&\le \Vert (\zeta _h)_{xx} \Vert ^2 + Ch \sum _{j=1}^N(|\alpha _{j,k-1}|^2 + |\alpha _{j,k}|^2) \\&\le \Vert (\zeta _h)_{xx} \Vert ^2 + Ch \sum _{j=1}^N( \Vert (\zeta _h)_{xx} \Vert _{L^2(\partial I_j)}^2 + \Vert (\zeta _h)_{xx} \Vert _{L^2(\partial I_{j+1})}^2) \\&\le \Vert (\zeta _h)_{xx} \Vert ^2 + C h \Vert (\zeta _h)_{xx} \Vert _{ L^2 (\partial {\mathcal {I}}_N)}^2 \le C \Vert (\zeta _h)_{xx} \Vert ^2, \end{aligned} \end{aligned}$$

where the inverse inequality is used to obtain the last inequality. Then the estimate (35) holds true. Estimates (36) and (37) can be proven by the same procedure when assumption A2 is satisfied; the steps are omitted for brevity. \(\square \)
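The inverse (trace) inequality invoked in these estimates bounds the endpoint values of a polynomial by its cell \(L^2\) norm, \(|v(x_{j\pm \frac{1}{2}})|^2 \le C(k)\, h_j^{-1} \Vert v \Vert ^2_{L^2(I_j)}\) for \(v \in P^k(I_j)\), with a constant depending only on k; this is why the factor h in front of \(\Vert (\zeta _h)_{xx} \Vert ^2_{L^2(\partial {\mathcal {I}}_N)}\) absorbs the boundary terms. A quick numerical sanity check of the h-independence of the constant (degree, cell width, and sample size are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Numerical check that the trace/inverse inequality constant is independent
# of h: for v in P^k on a cell of width h,
#     h * (|v(0)|^2 + |v(h)|^2)  <=  C(k) * int_0^h |v|^2 dx,
# with C(k) <= 2(k+1)^2.  All parameters below are illustrative.
rng = np.random.default_rng(1)
k, h = 3, 1e-2
nodes, weights = leg.leggauss(k + 1)      # exact for degree <= 2k+1
ratios = []
for _ in range(200):
    c = rng.standard_normal(k + 1)        # Legendre coefficients on [-1, 1]
    vals = leg.legval(nodes, c)
    l2sq = (h / 2) * np.sum(weights * vals ** 2)   # int_0^h v^2 after mapping
    bdry = leg.legval(1.0, c) ** 2 + leg.legval(-1.0, c) ** 2
    ratios.append(h * bdry / l2sq)
# the scaled ratio never exceeds 2(k+1)^2, whatever h is
assert max(ratios) < 2 * (k + 1) ** 2
```

Changing `h` leaves the ratios untouched, since both sides scale the same way under the affine map to the reference cell.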

Remark A.1

When assumption A3 is satisfied, the eigenvalues of Q are two complex numbers with magnitude 1, so a constant bound for \(\rho ((M')^{-1})\) as in (79) is not possible. Therefore, we cannot obtain similar results under assumption A3.

Proof of Lemma 4.2

Proof

Since \(w_q \in V_h^k\), we have

$$\begin{aligned} w_q |_{I_j} = \sum _{m=0}^k c_{j,m}^q L_{j,m}. \end{aligned}$$
(80)

Taking \(v_h = D^{-2} L_{j,m}\), \(m \le k - 2\), in (38a), we obtain

$$\begin{aligned} c_{j,m}^q = -i \frac{2m+1}{h_j} \frac{h_j^2}{4} \int _{I_j} \partial _t w_{q-1} D^{-2} L_{j,m} dx. \end{aligned}$$
(81)

Since \(D^{-2} L_{j,m} \in P_c^{m+2}(I_j)\), by the property \(u - P^\star _hu \perp V_h^{k-2}\) in the \(L^2\) inner product sense, we have

$$\begin{aligned} c_{j,m}^1 = {\left\{ \begin{array}{ll} -i\frac{2m+1}{h_j} \frac{h_j^2}{4} \int _{I_j} \partial _t (u - P^\star _hu) D^{-2} L_{j,m} dx = 0, &{} m \le k - 4, \\ -i\frac{2m+1}{h_j} \frac{h_j^2}{4} \int _{I_j} \partial _t ((u_{j,k-1} - \acute{u}_{j,k-1}) L_{j,k-1} + (u_{j,k} - \acute{u}_{j,k}) L_{j,k}) D^{-2} L_{j,m} dx, &{} m = k-3, k-2. \\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(82)

By induction using (80), (81) and (82), we have \(c_{j,m}^q = 0\) for \(0 \le m \le k - 2 - 2q\).

Furthermore, the first nonzero coefficient can be written in a simpler form related to \(u_{j,k-1}\) by induction.

When \(q=1\), we compute \(c_{j, k-3}^1\) by (82) and the definition of \(w_0\). That is

$$\begin{aligned} \begin{aligned} c_{j, k-3}^1&= -i \frac{2(k-3)+1}{h_j} \Big ( \frac{h_j}{2} \Big )^2 \partial _t (u_{j,k-1} - \acute{u}_{j,k-1}) \int _{I_j}D^{-2} L_{j,k-3} L_{j, k-1} dx \\&= C h_j^2 \partial _t (u_{j,k-1} - \acute{u}_{j,k-1}). \end{aligned} \end{aligned}$$

Suppose \(c_{j,k+1-2q}^{q-1} = Ch_j^{2q-2}\partial _t^{q-1} (u_{j,k-1} - \acute{u}_{j,k-1})\); then

$$\begin{aligned} \begin{aligned} c_{j, k-1-2q}^q&= -i \frac{2(k-1-2q)+1}{h_j} \Big ( \frac{h_j}{2} \Big )^2 \partial _t c_{j,k+1-2q}^{q-1} \int _{I_j}D^{-2} L_{j,k-1-2q} L_{j, k+1-2q} dx \\&= C {h_j}^{2q} \partial _t^{q} (u_{j,k-1} - \acute{u}_{j,k-1}). \end{aligned} \end{aligned}$$

The induction is completed and (42) is proven when \(r=0\).
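The repeated coefficient extraction in (80)–(81) rests on the \(L^2\) orthogonality of the scaled Legendre basis, \(\int _{I_j} L_{j,m} L_{j,n}\, dx = \frac{h_j}{2m+1}\delta _{mn}\). A small self-contained sketch of this recovery, where the cell endpoints and coefficients are illustrative choices rather than quantities from the paper:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Sketch of the coefficient extraction behind (80)-(81): on I_j = [a, b] with
# h = b - a, the scaled Legendre polynomials L_{j,m}(x) = P_m(2(x-a)/h - 1)
# satisfy int_{I_j} L_{j,m} L_{j,n} dx = h/(2m+1) delta_{mn}, so testing a
# polynomial against L_{j,m} recovers its m-th coefficient.  The cell [a, b]
# and the coefficients below are illustrative.
a, b = 0.25, 0.75
h = b - a
k = 4
rng = np.random.default_rng(0)
coeffs = rng.standard_normal(k + 1)                 # target c_{j,m}

nodes, weights = leg.leggauss(k + 1)                # exact for degree <= 2k+1
x = a + h * (nodes + 1) / 2                         # quadrature nodes on [a, b]
wts = weights * h / 2

def Ljm(m, pts):
    """Scaled Legendre polynomial of degree m on [a, b]."""
    return leg.legval(2 * (pts - a) / h - 1, [0.0] * m + [1.0])

wvals = sum(coeffs[m] * Ljm(m, x) for m in range(k + 1))   # w at the nodes
recovered = np.array([(2 * m + 1) / h * np.sum(wts * wvals * Ljm(m, x))
                      for m in range(k + 1)])
assert np.allclose(recovered, coeffs)
```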

Next, we estimate the coefficients \(c_{j, m}^q.\) By Hölder's inequality and (81), we have, for \(k-1-2q \le m \le k-2\),

$$\begin{aligned} \left| c_{j,m}^q \right| \le C h^{2- \frac{1}{s}} \Vert \partial _t w_{q-1} \Vert _{L^s(I_j)}. \end{aligned}$$

To estimate the coefficients \(c_{j, k-1}^q, c_{j, k}^q\), we need to distinguish several cases. If assumption A1 is satisfied, then (38b) and (38c) decouple and \(w_q\) is locally defined by (38). By (39), and following the same algebra as in solving for the kth and \((k+1)\)th coefficients in (73),

$$\begin{aligned} \begin{bmatrix} c_{j,k-1}^q \\ c_{j,k}^q \end{bmatrix} = -\sum _{m=0}^{k-2} {\mathcal {M}}_{j, m} c_{j,m}^q. \end{aligned}$$

By (19), for all \(j \in {\mathbb {Z}}_N\),

$$\begin{aligned} \begin{aligned} \left| c_{j,k-1}^q \right| ^2 + \left| c_{j,k}^q \right| ^2&\le C \sum _{m=k-2q-3}^{k-2} \left| c_{j,m}^q \right| ^2 \le C h^3 \Vert \partial _t w_{q-1} \Vert ^2_{L^2(I_j)}. \\ \max (\left| c_{j,k-1}^q \right| , \left| c_{j,k}^q \right| )&\le C \max _{k-2q-3 \le m \le k-2} \left| c_{j,m}^q \right| \le C h^2 \Vert \partial _t w_{q-1} \Vert _{L^\infty (I_j)}. \end{aligned} \end{aligned}$$

If assumption A2 or A3 is satisfied, (39) defines a coupled system. Following the same lines as in obtaining (23) in Appendix A.3, the solution for \( c_{j,k-1}^q, c_{j,k}^q\) is

$$\begin{aligned} \begin{aligned} \begin{bmatrix} c_{j,k-1}^q \\ c_{j,k}^q \end{bmatrix}&= -\sum _{m=k-1-2q}^{k-2} \sum _{l=0}^{N-1} r_l A^{-1} (G L_{m}^- c_{j+l,m}^q + H L_{m}^+ c_{j+l+1,m}^q ) \\&= - \sum _{m=k-1-2q}^{k-2}\Big ( c_{j,m}^q V_{1,m} + \sum _{l=0}^{N-1}c_{j+l,m}^q r_l V_{2,m}\Big ). \end{aligned} \end{aligned}$$
(83)

Under assumption A2, we can estimate \(c_{j,m}^q, m = k - 1, k,\) using (66), that is

$$\begin{aligned} \left\| \begin{bmatrix} c_{j,k-1}^q \\ c_{j,k}^q \end{bmatrix} \right\| _\infty \le C \left( 1+ \sum _{l=0}^{N-1}\Vert r_l\Vert _\infty \right) \max |c_{j+l, m}^q| \le C h^2 \Vert \partial _t w_{q-1} \Vert _{L^\infty ({\mathcal {I}}_N)}. \end{aligned}$$

Under assumption A3, \(\sum _{l=0}^{N-1}\Vert r_l\Vert _\infty \) is unbounded. Thus we use Fourier analysis to bound the coefficients, utilizing smoothness and periodicity, following a similar idea in [10]. In the rest of the proof, we make use of the two operators \(\boxtimes \) and \(\boxplus \) defined in (71a) and (71b).
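The obstruction can be isolated in a scalar model: replacing the matrix weights by a single eigenvalue \(\lambda \), one has \(r_l = \frac{\lambda ^l}{1-\lambda ^N}\), and \(\sum _l |r_l|\) is bounded uniformly in N exactly when \(|\lambda | \ne 1\). A short numerical illustration (the values of \(\lambda \) below are illustrative):

```python
import numpy as np

# Scalar caricature of r_l = lambda^l / (1 - lambda^N).  An eigenvalue with
# |lambda| < 1 (the A2-type situation) gives a sum of |r_l| that is bounded
# uniformly in N, while |lambda| = 1 (assumption A3) makes the sum grow
# with N, so no uniform bound as in (79) survives.  lambda values are
# illustrative only.
def r_sum(lam, N):
    return sum(abs(lam ** l / (1 - lam ** N)) for l in range(N))

lam_a2 = 0.6                        # |lambda| < 1
lam_a3 = np.exp(2.0j)               # |lambda| = 1, generic angle
sums_a2 = [r_sum(lam_a2, N) for N in (16, 64, 256)]
sums_a3 = [r_sum(lam_a3, N) for N in (16, 64, 256)]
assert max(sums_a2) < 2.6                 # stays at 1/(1 - 0.6) = 2.5
assert sums_a3[-1] > 10 * max(sums_a2)    # grows with N
```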

When \(\frac{|\Gamma |}{|\Lambda |} < 1\), \(Q=-A^{-1}B\) has two complex eigenvalues \(\lambda _1, \lambda _2\) with \(|\lambda _1| = |\lambda _2| =1\). By (59) of [10], we have \(r_l= \frac{\lambda _1^l}{1-\lambda _1^N} Q_1 + \frac{\lambda _2^l}{1-\lambda _2^N} (I_2 - Q_1)\), where \(Q_1\) is a constant matrix independent of h, defined in (60) and (61) of [10]. We now compute the coefficients in more detail. Plugging (23) into (82), for \(m = k-3, k-2\), when \(u_t \in W^{k+2+n,\infty }(I)\),

$$\begin{aligned} c_{j,m}^1&= i \frac{2m+1}{h} \frac{h^2}{4} \int _{I_j}[L_{j,k-1}, L_{j,k}] \partial _t \sum _{p=k+1}^\infty \Big ( u_{j,p} V_{1,p} + \sum _{l=0}^{N-1}u_{j+l,p} r_l V_{2,p} \Big ) D^{-2} L_{j,m} dx \\&= i \frac{2m+1}{2} \frac{h^2}{4} \sum _{p = k + 1}^\infty \partial _t \Big (u_{j,p} F_{p,m}^1 + \sum _{l=0}^{N-1}u_{j+l,p}r_l F_{p,m}^2 \Big ) \\&= i \frac{2m+1}{8}h^2 \sum _{p=k+1}^\infty \Big ( \sum _{s=0}^n \mu _s h^{k+1+s}u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) F_{p,m}^1 \\&\quad +\, \sum _{l=0}^{N-1}\left( \frac{\lambda _1^l}{1-\lambda _1^N} Q_1 + \frac{\lambda _2^l}{1-\lambda _2^N} (I_2 - Q_1) \right) \\&\sum _{s=0}^n \mu _s h^{k+1+s}u_t^{(k+1+s)}(x_{j+l-\frac{1}{2}}) F_{p,m}^2 + O(h^{k+n+1}|u_t|_{W^{k+2+n,\infty }(I)}) \Big ) \\&= i \frac{2m+1}{8}h^2 \sum _{p=k+1}^\infty \sum _{s=0}^n \mu _s h^{k+1+s} \big (u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) F_{p,m}^1 \\&\quad +\, (Q_1 \boxtimes _{\lambda _1} + (I_2-Q_1) \boxtimes _{\lambda _2}) u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}})F^2_{p,m} \big ) + O(h^{k+3+n}|u_t|_{W^{k+2+n,\infty }(I)}), \end{aligned}$$

where \(F_{p,m}^\nu = \frac{2}{h} \int _{I_j}[L_{j,k-1}, L_{j,k}] V_{\nu ,p} D^{-2} L_{j,m} dx, \nu = 1, 2,\) are constants independent of h and (68) is used in the third equality.

Plugging the formula above into (83) and performing a similar computation, we have

$$\begin{aligned} \begin{aligned} \begin{bmatrix} c_{j,k-1}^1 \\ c_{j,k}^1 \end{bmatrix}&= -\frac{ih^2}{8} \sum _{m=k-3}^{k-2} (2m+1) \sum _{p=k+1}^\infty \sum _{s=0}^n \mu _s h^{k+1+s} \big (u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}}) F_{p,m}^1 V_{1,m} \\&\quad +\, (Q_1 \boxtimes _{\lambda _1} + (I_2-Q_1) \boxtimes _{\lambda _2}) u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) (F_{p,m}^2 V_{1,m} + F_{p,m}^1 V_{2, m}) \\&\quad +\, (Q_1 \boxtimes _{\lambda _1} + (I_2-Q_1) \boxtimes _{\lambda _2})^2 u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}})F^2_{p,m} V_{2,m} \big ) + O(h^{k+2+n}|u_t|_{W^{k+2+n,\infty }(I)}). \end{aligned} \end{aligned}$$

By (72), we have

$$\begin{aligned} (Q_1 \boxtimes _{\lambda _1} + (I_2-Q_1) \boxtimes _{\lambda _2})^\nu u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}}) \le C |u_t|_{W^{k+2+s+\nu ,1}(I)} \le C |u|_{W^{k+4+s+\nu ,1}(I)}. \end{aligned}$$

Therefore,

$$\begin{aligned} |c_{j, m}^1 | \le C_2 h^{k+3}, \quad m = k-3,k-2, \ \text { and } |c_{j, m}^1 | \le C_3 h^{k+3}, \quad m = k-1,k. \end{aligned}$$

By induction and similar computations, we can obtain the formula for \(c_{j, m}^q\). For brevity, we omit the computation and directly state the estimate

$$\begin{aligned} |c_{j, m}^q | \le C_{3q} h^{k+1+2q}, \quad k-1-2q \le m \le k. \end{aligned}$$

When \(\frac{|\Gamma |}{|\Lambda |} =1\), \(Q=-A^{-1}B\) has two repeated eigenvalues. By (71) of [10], we have \(r_l = \frac{(-1)^l}{2} I_2 + (-1)^l \frac{-N+2l}{4\Gamma } Q_2\), where \(Q_2/ \Gamma \) is a constant matrix. Then, by (23) and (68), for \(m = k-3, k-2\), when \(u_t \in W^{k+2+n,\infty }(I)\), we compute \(c_{j, m}^1\) by the same procedure as in the previous case and obtain

$$\begin{aligned} \begin{aligned} c_{j, m}^1&= i \frac{2m+1}{8}h^2 \sum _{p=k+1}^\infty \sum _{s=0}^n \mu _s h^{k+1+s} \big (u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) F_{p,m}^1 \\&+ \frac{1}{2}(\boxtimes _{-1} + \frac{Q_2}{\Gamma } \boxplus ) u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}}) F^2_{p,m} \big ) + O(h^{k+3+n}|u_t|_{W^{k+2+n,\infty }(I)}). \end{aligned} \end{aligned}$$

Plugging the formula above into (83), we have

$$\begin{aligned} \begin{aligned} \begin{bmatrix} c_{j,k-1}^1 \\ c_{j,k}^1 \end{bmatrix}&= -\frac{ih^2}{8} \sum _{m=k-3}^{k-2} (2m+1) \sum _{p=k+1}^\infty \sum _{s=0}^n \mu _s h^{k+1+s} \big ( u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) F_{p,m}^1 V_{1,m} \\&\quad +\, \frac{1}{2}(\boxtimes _{-1} + \frac{Q_2}{\Gamma } \boxplus ) u_t^{(k+1+s)}(x_{{j-\frac{1}{2}}}) (F_{p,m}^2 V_{1,m} + F_{p,m}^1 V_{2, m}) \\&\quad +\, \frac{1}{4}(\boxtimes _{-1} + \frac{Q_2}{\Gamma } \boxplus )^2 u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}})F^2_{p,m} V_{2,m} \big ) + O(h^{k+2+n}|u_t|_{W^{k+2+n,\infty }(I)}). \end{aligned} \end{aligned}$$

By (72), we have

$$\begin{aligned} (\boxtimes _{-1} + \frac{Q_2}{\Gamma } \boxplus )^\nu u_t^{(k+1+s)} (x_{{j-\frac{1}{2}}}) \le C |u_t|_{W^{k+2+s+2\nu ,1}(I)} \le C |u|_{W^{k+4+s+2\nu ,1}(I)} \end{aligned}$$

and

$$\begin{aligned} |c_{j, m}^1 | \le C_2 h^{k+3}, \quad m = k-3,k-2, \ \text { and } |c_{j, m}^1 | \le C_4 h^{k+3}, \quad m = k-1,k. \end{aligned}$$

By induction and similar computations, we can obtain the formula for \(c_{j, m}^q\). For brevity, we omit the computation and directly state the estimate

$$\begin{aligned} |c_{j, m}^q | \le C_{4q} h^{k+1+2q}, \quad k-1-2q \le m \le k. \end{aligned}$$

All the analysis above remains valid when we change the definition of \(w_q\) to \(\partial _t^r w_q\) (and change \((w_{q-1})_t\) to \(\partial _t^{r+1} w_{q-1}\) accordingly) in (38). Summarizing the estimates for \(c_{j, m}^q\) under all three assumptions, for \(1 \le q \le \lfloor \frac{k-1}{2} \rfloor \), we have

$$\begin{aligned} |\partial _t^r c_{j,m}^q| \le C_{2r,q} h^{k+1+2q}, \quad \Vert \partial _t^r w_q \Vert \le C \left( \sum _{j=1}^N\sum _{m= k-2q -1}^k |\partial _t^r c_{j,m}^q|^2 h_j\right) ^{\frac{1}{2}} \le C_{2r,q}h^{k+1+2q}. \end{aligned}$$

Then (42) and (43) are proven, and (44) is a direct result of the above estimate and (41). \(\square \)

Cite this article

Chen, A., Cheng, Y., Liu, Y. et al. Superconvergence of Ultra-Weak Discontinuous Galerkin Methods for the Linear Schrödinger Equation in One Dimension. J Sci Comput 82, 22 (2020). doi:10.1007/s10915-020-01124-0

Keywords

  • Ultra-weak discontinuous Galerkin method
  • Superconvergence
  • Post-processing
  • Projection
  • One-dimensional Schrödinger equation