
Contraction and Optimality Properties of an Adaptive Legendre–Galerkin Method: The Multi-Dimensional Case

Journal of Scientific Computing

Abstract

We analyze the theoretical properties of an adaptive Legendre–Galerkin method in the multidimensional case. After the recent investigations for Fourier–Galerkin methods in a periodic box and for Legendre–Galerkin methods in the one dimensional setting, the present study represents a further step towards a mathematically rigorous understanding of adaptive spectral/\(hp\) discretizations of elliptic boundary-value problems. The main contribution of the paper is a careful construction of a multidimensional Riesz basis in \(H^1\), based on a quasi-orthonormalization procedure. This allows us to design an adaptive algorithm, to prove its convergence by a contraction argument, and to discuss its optimality properties (in the sense of non-linear approximation theory) in certain sparsity classes of Gevrey type.



References

  1. Adams, J.: On the expression of the product of any two Legendre's coefficients by means of a series of Legendre's coefficients. Proc. R. Soc. Lond. 27, 63–71 (1878)

  2. Baouendi, M.S., Goulaouic, C.: Régularité analytique et itérés d’opérateurs elliptiques dégénérés; applications. J. Funct. Anal. 9, 208–248 (1972)

  3. Benzi, M., Tůma, M.: Orderings for factorized sparse approximate inverse preconditioners. SIAM J. Sci. Comput. 21(5), 1851–1868 (2000). (electronic). Iterative methods for solving systems of algebraic equations (Copper Mountain, CO, 1998)

  4. Binev, P.: Tree approximation for \(hp\)-adaptivity. (in preparation)

  5. Binev, P.: Instance optimality for \(hp\)-type approximation. Technical Report 39, Mathematisches Forschungsinstitut Oberwolfach (2013)

  6. Binev, P., Dahmen, W., DeVore, R.: Adaptive finite element methods with convergence rates. Numer. Math. 97(2), 219–268 (2004)

  7. Bürg, M., Dörfler, W.: Convergence of an adaptive \(hp\) finite element strategy in higher space-dimensions. Appl. Numer. Math. 61(11), 1132–1146 (2011)

  8. Canuto, C., Nochetto, R., Stevenson, R., Verani, M.: An \(hp\) adaptive finite element method: convergence and optimality properties. (in preparation)

  9. Canuto, C., Nochetto, R., Verani, M.: Contraction and optimality properties of adaptive Legendre–Galerkin methods: the 1-dimensional case. Comput. Math. Appl. 67(4), 752–770 (2014)

  10. Canuto, C., Nochetto, R.H., Verani, M.: Adaptive Fourier–Galerkin methods. Math. Comput. 83, 1645–1687 (2014)

  11. Canuto, C., Simoncini, V., Verani, M.: On the decay of the inverse of matrices that are sum of Kronecker products. Linear Algebra Appl. 452, 21–39 (2014)

  12. Canuto, C., Verani, M.: On the numerical analysis of adaptive spectral/hp methods for elliptic problems. In: Brezzi, F. et al. (eds.) Analysis and Numerics of Partial Differential Equations, pp. 165–192. Springer INdAM series (2013)

  13. Cascon, J.M., Kreuzer, C., Nochetto, R.H., Siebert, K.G.: Quasi-optimal convergence rate for an adaptive finite element method. SIAM J. Numer. Anal. 46(5), 2524–2550 (2008)

  14. Cohen, A., Dahmen, W., DeVore, R.: Adaptive wavelet methods for elliptic operator equations: convergence rates. Math. Comput. 70, 27–75 (2001)

  15. Cohen, A., DeVore, R., Nochetto, R.H.: Convergence rates of AFEM with \(H^{-1}\) data. Found. Comput. Math. 12(5), 671–718 (2012)

  16. Dörfler, W.: A convergent adaptive algorithm for Poisson’s equation. SIAM J. Numer. Anal. 33(3), 1106–1124 (1996)

  17. Dörfler, W., Heuveline, V.: Convergence of an adaptive \(hp\) finite element strategy in one space dimension. Appl. Numer. Math. 57(10), 1108–1124 (2007)

  18. Duff, I.S., Erisman, A.M., Reid, J.K.: Direct Methods for Sparse Matrices. Clarendon Press, Oxford (1989)

  19. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD (1996)

  20. Gui, W., Babuška, I.: The \(h,\;p\) and \(h\)-\(p\) versions of the finite element method in 1 dimension. III. The adaptive \(h\)-\(p\) version. Numer. Math. 49(6), 659–683 (1986)

  21. Hall, P., Jin, J.: Innovated higher criticism for detecting sparse signals in correlated noise. Ann. Stat. 38(3), 1686–1732 (2010)

  22. Jaffard, S.: Propriétés des matrices “bien localisées” près de leur diagonale et quelques applications. Annales de l’I.H.P. 5, 461–476 (1990)

  23. Krishtal, I., Strohmer, T., Wertz, T.: Localization of matrix factorizations (2013). arXiv:1305.1618

  24. Maitre, J.-F., Pourquier, O.: Condition number and diagonal preconditioning: comparison of the \(p\)-version and the spectral element methods. Numer. Math. 74(1), 69–84 (1996)

  25. Mitchell, W.F., McClain, M.A.: A survey of \(hp\)-adaptive strategies for elliptic partial differential equations. In: Recent Advances in Computational and Applied Mathematics, pp. 227–258. Springer, Dordrecht (2011)

  26. Morin, P., Nochetto, R.H., Siebert, K.G.: Data oscillation and convergence of adaptive FEM. SIAM J. Numer. Anal. 38(2), 466–488 (2000). (electronic)

  27. Nochetto, R.H., Siebert, K.G., Veeser, A.: Theory of adaptive finite element methods: an introduction. In: Multiscale, Nonlinear and Adaptive Approximation, pp. 409–542. Springer, Berlin (2009)

  28. Schmidt, A., Siebert, K.G.: A posteriori estimators for the \(h\)-\(p\) version of the finite element method in 1D. Appl. Numer. Math. 35(1), 43–66 (2000)

  29. Stevenson, R.: Optimality of a standard adaptive finite element method. Found. Comput. Math. 7(2), 245–269 (2007)

  30. Stevenson, R.: Adaptive wavelet methods for solving operator equations: an overview. In: Multiscale, Nonlinear and Adaptive Approximation, pp. 543–597. Springer, Berlin (2009)

Acknowledgments

The authors would like to thank Michele Benzi for helpful discussions and for pointing to [23].

Author information

Corresponding author

Correspondence to Claudio Canuto.

Additional information

The first and third authors have been partially supported by the Italian research grant PRIN 2012 2012HBLYE4_004 “Metodologie innovative nella modellistica differenziale numerica”.

Appendix

Proof

(Proof of Lemma 2) In the following we will make extensive use of the following property of the product of univariate Legendre polynomials (see, e.g., [1]):

$$\begin{aligned} L_m(x_i)L_n(x_i)=\sum _{r=0}^{\min (m,n)} A_{m,n}^r L_{m+n-2r} (x_i)\qquad i=1,2 \end{aligned}$$
(5.1)

with

$$\begin{aligned} A_{m,n}^r:= \frac{A_{m-r} A_r A_{n-r}}{A_{n+m-r}}\frac{2n+2m-4r+1}{2n+2m-2r+1} \end{aligned}$$

and

$$\begin{aligned} A_0:=1\ , \qquad A_m:=\frac{1\cdot 3 \cdot 5 \ldots (2m-1)}{m !}=\frac{(2m)!}{2^m (m!)^2}. \end{aligned}$$
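
The identity (5.1) goes back to Adams [1] and is easy to verify numerically. The following minimal sketch (illustrative only, not part of the argument; it assumes nothing beyond the definitions of \(A_m\) and \(A_{m,n}^r\) given above and uses NumPy's Legendre utilities) compares the coefficients predicted by (5.1) with the exact Legendre expansion of the product \(L_mL_n\):

```python
# Numerical sanity check of the linearization formula (5.1); illustrative sketch only.
import numpy as np
from numpy.polynomial import legendre as leg
from math import factorial

def A(m):
    # A_m = (2m)! / (2^m (m!)^2), with A_0 = 1
    return factorial(2 * m) / (2 ** m * factorial(m) ** 2)

def A_r(m, n, r):
    # A_{m,n}^r as defined above
    return (A(m - r) * A(r) * A(n - r) / A(m + n - r)) \
        * (2 * n + 2 * m - 4 * r + 1) / (2 * n + 2 * m - 2 * r + 1)

def predicted_product(m, n):
    # Legendre coefficients of L_m * L_n according to (5.1)
    c = np.zeros(m + n + 1)
    for r in range(min(m, n) + 1):
        c[m + n - 2 * r] = A_r(m, n, r)
    return c

m, n = 7, 4
e_m = np.zeros(m + 1); e_m[m] = 1.0    # coefficient vector of L_m
e_n = np.zeros(n + 1); e_n[n] = 1.0    # coefficient vector of L_n
exact = leg.legmul(e_m, e_n)           # exact product, expanded in the Legendre basis
print(np.allclose(exact, predicted_product(m, n)))   # expected output: True
```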

Moreover we recall the following asymptotic estimates (see [9]):

  • Case \(0< r <\min (m,n)\):

    $$\begin{aligned} \frac{A_{m-r} A_r A_{n-r}}{A_{n+m-r}}\sim \frac{1}{\pi } \frac{\sqrt{n+m-r}}{\sqrt{m-r}\sqrt{n-r}\sqrt{r}}\ ; \end{aligned}$$
    (5.2)
  • Case \(r=0\):

    $$\begin{aligned} \frac{A_{m} A_{n}}{A_{n+m}}\sim \frac{1}{\sqrt{\pi }} \frac{\sqrt{n+m}}{\sqrt{nm}} \ ; \end{aligned}$$
    (5.3)
  • Case \(r=\min (m,n)\) and \(m\not = n\):

    $$\begin{aligned} \frac{A_{\min (m,n)} A_{\vert m -n \vert }}{{{A_{\max (m,n)}}}}\sim \frac{1}{\sqrt{\pi }} \frac{\sqrt{\max (m,n)}}{\sqrt{\min (m,n)}\sqrt{\vert m -n \vert }}. \end{aligned}$$
    (5.4)

    When \(m=n\) it is sufficient to use \(A_0=1\) to get \(\frac{A_{m} A_{0}}{A_{m}}=1\).
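
These asymptotic relations can also be checked numerically. The following minimal sketch (illustrative only; it relies solely on the definition of \(A_m\) above, and the log-Gamma evaluation is merely a device to avoid overflow for large indices) compares the left- and right-hand sides of (5.2):

```python
# Numerical illustration of the asymptotic estimate (5.2); not part of the proof.
import math

def logA(m):
    # log of A_m = (2m)! / (2^m (m!)^2); logA(0) = 0
    return math.lgamma(2 * m + 1) - m * math.log(2.0) - 2.0 * math.lgamma(m + 1)

def ratio(m, n, r):
    # A_{m-r} A_r A_{n-r} / A_{n+m-r}
    return math.exp(logA(m - r) + logA(r) + logA(n - r) - logA(n + m - r))

m, n, r = 400, 250, 60                   # a sample case with 0 < r < min(m, n)
rhs = math.sqrt(n + m - r) / (math.pi * math.sqrt((m - r) * (n - r) * r))
print(ratio(m, n, r) / rhs)              # expected output: close to 1
```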

We begin from the following expression:

$$\begin{aligned} a^\eta _{mn}=\int _{\varOmega } \nu (x)\nabla \eta _m(x)\nabla \eta _n(x) dx + \int _\varOmega \sigma (x)\eta _m(x)\eta _n(x) dx=: a_{mn}^{(1)} + a^{(0)}_{mn}. \end{aligned}$$
(5.5)

We first estimate \(a^{(1)}_{mn}\). Let \(m=(m_1,m_2)\) and \(n=(n_1,n_2)\); then, using the notation \(\nu :=\nu (x_1,x_2)\) and the relation \(\eta ^\prime _k(x_i)=-\sqrt{k-1/2}\, L_{k-1}(x_i)\), \(i=1,2\), we have

$$\begin{aligned} a^{(1)}_{mn}&= \int _\varOmega \nu \,\eta ^\prime _{m_1}(x_1)\eta ^\prime _{n_1}(x_1)\eta _{m_2}(x_2) \eta _{n_2}(x_2)\, dx_1 dx_2 \nonumber \\&\quad + \int _\varOmega \nu \,\eta _{m_1}(x_1)\eta _{n_1}(x_1)\eta ^\prime _{m_2}(x_2)\eta ^\prime _{n_2}(x_2)\, dx_1 dx_2\nonumber \\&= B^1_{m,n}\int _{\varOmega } \nu \, L_{m_1-1}(x_1)L_{n_1-1}(x_1)[L_{m_2-2}(x_2)-L_{m_2}(x_2)][L_{n_2-2}(x_2)-L_{n_2}(x_2)]\,dx_1dx_2\nonumber \\&\quad +B^2_{m,n}\int _{\varOmega } \nu \,[L_{m_1-2}(x_1)-L_{m_1}(x_1)][L_{n_1-2}(x_1)-L_{n_1}(x_1)]\, L_{m_2-1}(x_2)L_{n_2-1}(x_2)\,dx_1dx_2\nonumber \\&=: J_1 + J_2 \end{aligned}$$
(5.6)

where we set

$$\begin{aligned} B^1_{m,n}:=\frac{\sqrt{m_1-1/2}\sqrt{n_1-1/2}}{\sqrt{4m_2-2} \sqrt{4n_2-2}}\qquad B^2_{m,n}:= \frac{\sqrt{m_2-1/2}\sqrt{n_2-1/2}}{\sqrt{4m_1-2}\sqrt{4n_1-2}}. \end{aligned}$$
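
The factors \(B^1_{m,n}\) and \(B^2_{m,n}\) collect the normalizations of the basis functions of (2.2) together with the derivative relation \(\eta ^\prime _k(x_i)=-\sqrt{k-1/2}\,L_{k-1}(x_i)\). A minimal numerical check of this relation is sketched below; it assumes that (2.2) reads \(\eta _k=(L_{k-2}-L_k)/\sqrt{4k-2}\), which is not restated in this appendix but is consistent with the formulas above:

```python
# Check of eta_k' = -sqrt(k - 1/2) L_{k-1}, assuming eta_k = (L_{k-2} - L_k)/sqrt(4k - 2)
# as our reading of (2.2); illustrative sketch only.
import numpy as np
from numpy.polynomial import legendre as leg

def eta_coeffs(k):
    # Legendre coefficients of eta_k
    c = np.zeros(k + 1)
    c[k - 2], c[k] = 1.0, -1.0
    return c / np.sqrt(4 * k - 2)

k = 9
x = np.linspace(-1.0, 1.0, 1001)
lhs = leg.legval(x, leg.legder(eta_coeffs(k)))             # eta_k'(x)
rhs = -np.sqrt(k - 0.5) * leg.legval(x, np.eye(k)[k - 1])  # -sqrt(k - 1/2) L_{k-1}(x)
print(np.max(np.abs(lhs - rhs)))                           # expected: ~ machine precision
```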

Let us focus on the first term \(J_1\). Straightforward calculations yield

$$\begin{aligned} J_1&= B^1_{m,n}\Big \{ \int _{\varOmega } \nu L_{m_1-1}(x_1)L_{n_1-1}(x_1) L_{m_2-2}(x_2)L_{n_2-2}(x_2) dx_1dx_2\nonumber \\&- \int _{\varOmega } \nu L_{m_1-1}(x_1)L_{n_1-1}(x_1) L_{m_2}(x_2)L_{n_2-2}(x_2) dx_1dx_2\nonumber \\&- \int _{\varOmega } \nu L_{m_1-1}(x_1)L_{n_1-1}(x_1) L_{m_2-2}(x_2)L_{n_2}(x_2) dx_1dx_2\nonumber \\&+ \int _{\varOmega } \nu L_{m_1-1}(x_1)L_{n_1-1}(x_1) L_{m_2}(x_2)L_{n_2}(x_2) dx_1dx_2\Big \}\nonumber \\&=: J_1^1+ J_1^2+ J_1^3+ J_1^4. \end{aligned}$$

For ease of presentation we only show how to estimate \(J_1^1\), as the other terms can be worked out similarly. Employing (5.1) we obtain

$$\begin{aligned} J_1^1&= B_{m,n}^1 \int _\varOmega \nu \sum _{r_1=0}^{\min (m_1-1,n_1-1)} A_{m_1-1,n_1-1}^{r_1} L_{m_1+n_1-2-2r_1}(x_1) \\&\qquad \sum _{r_2=0}^{\min (m_2-2,n_2-2)} A_{m_2-2,n_2-2}^{r_2} L_{m_2+n_2-4-2r_2}(x_2) \,dx_1dx_2 \\&= B_{m,n}^1 \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} A_{m_1-1,n_1-1}^{r_1} A_{m_2-2,n_2-2}^{r_2}\\&\qquad \int _\varOmega \nu \, L_{m_1+n_1-2-2r_1}(x_1)L_{m_2+n_2-4-2r_2}(x_2)\,dx_1dx_2. \end{aligned}$$

Using the multidimensional Legendre expansion \(\nu (x)=\sum _{k\in \mathcal{K}} \nu _k L_k(x)\) we obtain

$$\begin{aligned} J_1^1\!=\!4 B_{m,n}^1 \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} \frac{A_{m_1-1,n_1-1}^{r_1} A_{m_2-2,n_2-2}^{r_2} \nu _{m_1+n_1-2-2r_1,m_2+n_2-4-2r_2}}{[2(m_1 +n_1-2-2r_1)+1][2(m_2+n_2-4-2r_2)+1]}. \end{aligned}$$

We now employ the asymptotic estimates (5.2)–(5.4) to bound the terms \(A_{m_1-1,n_1-1}^{r_1}\) and \(A_{m_2-2,n_2-2}^{r_2}\). Accordingly, we need to distinguish among several cases depending on the combination of the values assumed by \(r_1\) and \(r_2\). However, for ease of reading, we only consider the case \(0<r_1<\min (m_1-1,n_1-1)\) and \(0<r_2<\min (m_2-2,n_2-2)\), as the other ones can be treated similarly. In this case, (5.2) yields

$$\begin{aligned} A_{m_1-1,n_1-1}^{r_1}\simeq \frac{1}{\pi } \frac{\sqrt{n_1+m_1-2-r_1}}{\sqrt{m_1-1-r_1}\sqrt{n_1-1-r_1}\sqrt{r_1}}\, \frac{2(n_1-1)+2(m_1-1)-4r_1+1}{2(n_1-1)+2(m_1-1)-2r_1+1}. \end{aligned}$$
(5.7)

A similar estimate holds also for \(A_{m_2-2,n_2-2}^{r_2}\). Thus we have

$$\begin{aligned}&\frac{ B_{m,n}^1 A_{m_1-1,n_1-1}^{r_1} A_{m_2-2,n_2-2}^{r_2}}{[2(m_1+n_1-2-2r_1)+1][2(m_2+n_2-4-2r_2)+1]} \\&\quad \simeq \frac{\sqrt{2m_1-1}\sqrt{2n_1-1}}{\sqrt{2m_2-1}\sqrt{2n_2-1}}\, \frac{\sqrt{n_1+m_1-2-r_1}\,\sqrt{n_2+m_2-4-r_2}}{\sqrt{m_1-1-r_1}\sqrt{n_1-1-r_1}\sqrt{r_1}\, \sqrt{m_2-2-r_2}\sqrt{n_2-2-r_2}\sqrt{r_2}} \\&\qquad \times \frac{1}{(2m_1+2n_1-4r_1-3)(2m_2+2n_2-4r_2-7)}\\&\quad \simeq \frac{\sqrt{m_1n_1}}{\sqrt{m_1-1-r_1}\sqrt{n_1-1-r_1}\sqrt{r_1}}\, \frac{1}{\sqrt{n_1+m_1-2-r_1}}\, \frac{1}{\sqrt{n_2+m_2-4-r_2}}\, \frac{1}{\sqrt{2m_2-1}\sqrt{2n_2-1}}\\&\qquad \times \frac{1}{\sqrt{m_2-2-r_2}\sqrt{n_2-2-r_2}\sqrt{r_2}}\\&\quad \simeq \frac{1}{\sqrt{\min (m_1+n_1-2,\vert m_1-n_1\vert )}}\, \frac{1}{\sqrt{n_2+m_2-4-r_2}}\, \frac{1}{\sqrt{2m_2-1}\sqrt{2n_2-1}} \\&\qquad \times \frac{1}{\sqrt{m_2-2-r_2}\sqrt{n_2-2-r_2}\sqrt{r_2}}\\&\quad \lesssim 1. \end{aligned}$$

Thus we have

$$\begin{aligned} J_1^1 \lesssim \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-4-2r_2}. \end{aligned}$$
(5.8)

Similar estimates can be obtained for the terms \(J_1^2,\ldots ,J_1^4\), yielding

$$\begin{aligned} J_1&\lesssim \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)}\nu _{m_1+n_1-2-2r_1,m_2+n_2-4-2r_2}\\&+ \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2,n_2-2)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-2-2r_2}\\&+ \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-2-2r_2}\\&+ \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2,n_2)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-2r_2}. \end{aligned}$$

Analogously, we can prove the following estimate for the term \(J_2\):

$$\begin{aligned} J_2&\lesssim \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-1,n_2-1)} \nu _{m_1+n_1-4-2r_1,m_2+n_2-2-2r_2} \\&+ \sum _{r_1=0}^{\min (m_1,n_1-2)} \sum _{r_2=0}^{\min (m_2-1,n_2-1)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-2-2r_2} \\&+ \sum _{r_1=0}^{\min (m_1-2,n_1)} \sum _{r_2=0}^{\min (m_2-1,n_2-1)} \nu _{m_1+n_1-2-2r_1,m_2+n_2-2-2r_2} \\&+ \sum _{r_1=0}^{\min (m_1,n_1)} \sum _{r_2=0}^{\min (m_2-1,n_2-1)} \nu _{m_1+n_1-2r_1,m_2+n_2-2-2r_2}. \end{aligned}$$

Assuming \( \vert \nu _k \vert \le C_\eta e^{-\gamma \Vert k\Vert _{\ell ^{1}}}\) for every \(k\in \mathcal{K}\) and employing the above estimates for \(J_1\) and \(J_2\), we obtain

$$\begin{aligned} \vert a_{mn}^{(1)}\vert&\lesssim e^{-\gamma (\vert m_1-n_1\vert + \vert m_2-n_2\vert )}\\&\times \left\{ \sum _{r_1=0}^{\min (m_1-1,n_1-1)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} e^{-2\gamma (\min (m_1-1,n_1-1)-r_1)} e^{-2\gamma (\min (m_2-2,n_2-2)-r_2)}\right. \\&\left. +\cdots + \sum _{r_1=0}^{\min (m_1,n_1)} \sum _{r_2=0}^{\min (m_2-1,n_2-1)} e^{-2\gamma (\min (m_1,n_1)-r_1)} e^{-2\gamma (\min (m_2-1,n_2-1)-r_2)}\right\} \\&\lesssim e^{-\gamma (\vert m_1-n_1\vert + \vert m_2-n_2\vert )}= C e^{-\gamma \Vert m-n\Vert _{\ell ^{1}}}. \end{aligned}$$
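
The exponential off-diagonal decay of \(a^{(1)}_{mn}\) obtained above is easy to observe in a small numerical experiment. The sketch below is purely illustrative: it assumes \(\eta _k=(L_{k-2}-L_k)/\sqrt{4k-2}\) as before, chooses an arbitrary analytic coefficient \(\nu \), and evaluates a few entries by Gauss–Legendre quadrature:

```python
# Illustrative experiment (not part of the proof): entries of the gradient part
# a^{(1)}_{mn} decay exponentially in ||m - n||_{l^1} when nu is analytic.
# Assumes eta_k = (L_{k-2} - L_k)/sqrt(4k - 2); nu is a sample analytic coefficient.
import numpy as np
from numpy.polynomial import legendre as leg

x, w = leg.leggauss(60)                  # 1D Gauss-Legendre nodes and weights
X1, X2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)                       # tensor-product quadrature weights
nu = 1.0 / (2.0 + X1 * X2)               # analytic and nonvanishing on [-1, 1]^2

def eta(k, t):
    # eta_k(t) = (L_{k-2}(t) - L_k(t)) / sqrt(4k - 2)
    c = np.zeros(k + 1); c[k - 2], c[k] = 1.0, -1.0
    return leg.legval(t, c) / np.sqrt(4 * k - 2)

def deta(k, t):
    # eta_k'(t) = -sqrt(k - 1/2) L_{k-1}(t)
    return -np.sqrt(k - 0.5) * leg.legval(t, np.eye(k)[k - 1])

def a1(m, n):
    # a^{(1)}_{mn} = int_Omega nu grad(eta_m).grad(eta_n), with eta_m(x) = eta_{m1}(x1) eta_{m2}(x2)
    g1 = deta(m[0], X1) * deta(n[0], X1) * eta(m[1], X2) * eta(n[1], X2)
    g2 = eta(m[0], X1) * eta(n[0], X1) * deta(m[1], X2) * deta(n[1], X2)
    return np.sum(W * nu * (g1 + g2))

m = (6, 5)
for d in range(0, 9, 2):
    print(d, abs(a1(m, (6 + d, 5))))     # roughly geometric decay in d = ||m - n||_{l^1}
```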

We now estimate \(a^{(0)}_{mn}\). Let \(m=(m_1,m_2)\) and \(n=(n_1,n_2)\); then, recalling (2.2) and using the notation \(\sigma :=\sigma (x_1,x_2)\), we have

$$\begin{aligned} a^{(0)}_{mn}&= \int _\varOmega \sigma \,\eta _{m_1}(x_1)\eta _{n_1}(x_1)\eta _{m_2}(x_2)\eta _{n_2}(x_2)\,dx_1 dx_2 \\&= C_m^n \int _\varOmega \sigma \,[(L_{m_1-2}-L_{m_1}) (L_{n_1-2}-L_{n_1})](x_1)\,[(L_{m_2-2}-L_{m_2})(L_{n_2-2}-L_{n_2})](x_2)\, dx_1 dx_2 \\&= C_m^n \int _\varOmega \sigma \,[L_{m_1-2}L_{n_1-2}-L_{m_1-2}L_{n_1}-L_{m_1}L_{n_1-2}+L_{m_1}L_{n_1}](x_1)\\&\qquad \times [L_{m_2-2}L_{n_2-2}-L_{m_2-2}L_{n_2}-L_{m_2}L_{n_2-2}+L_{m_2}L_{n_2}](x_2)\, dx_1 dx_2 \end{aligned}$$

with \(C_m^n:=\frac{1}{\sqrt{(4m_1-2)(4m_2-2)(4n_1-2)(4n_2-2)}}\). Now employing (5.1) we obtain

$$\begin{aligned} a^{(0)}_{mn}&= C_m^n \int _\varOmega \sigma \left[ \sum _{r_1=0}^{\min (m_1-2,n_1-2)} A_{m_1-2,n_1-2}^{r_1} L_{m_1+n_1-4-2r_1} - \sum _{r_1=0}^{\min (m_1-2,n_1)} A_{m_1-2,n_1}^{r_1} L_{m_1+n_1-2-2r_1} \right. \\&\left. -\sum _{r_1=0}^{\min (m_1,n_1-2)} A_{m_1,n_1-2}^{r_1} L_{m_1+n_1-2-2r_1} +\sum _{r_1=0}^{\min (m_1,n_1)} A_{m_1,n_1}^{r_1} L_{m_1+n_1-2r_1} \right] (x_1) \\&\quad \times \left[ \sum _{r_2=0}^{\min (m_2-2,n_2-2)} A_{m_2-2,n_2-2}^{r_2} L_{m_2+n_2-4-2r_2} - \sum _{r_2=0}^{\min (m_2-2,n_2)} A_{m_2-2,n_2}^{r_2} L_{m_2+n_2-2-2r_2} \right. \\&\left. -\sum _{r_2=0}^{\min (m_2,n_2-2)} A_{m_2,n_2-2}^{r_2} L_{m_2+n_2-2-2r_2} +\sum _{r_2=0}^{\min (m_2,n_2)} A_{m_2,n_2}^{r_2} L_{m_2+n_2-2r_2} \right] (x_2)\, dx_1dx_2 \\&= I_1+\cdots +I_{16}. \end{aligned}$$

We now need to estimate \(I_1,\ldots ,I_{16}\). To simplify the exposition, we only show how to estimate \(I_1\), as the other terms can be treated similarly.

Using the multidimensional Legendre expansion \(\sigma (x)=\sum _{k\in \mathcal{K}} \sigma _k L_k(x)\) together with (2.1) we get

$$\begin{aligned} I_1\!&= \!C_m^n \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} A_{m_1-2,n_1-2}^{r_1} A_{m_2-2,n_2-2}^{r_2} \int _\varOmega \sigma L_{m_1+n_1-4-2r_1}L_{m_2+n_2-4-2r_2} dx_1dx_2 \\&= C_m^n \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)}\frac{A_{m_1-2,n_1-2}^{r_1} A_{m_2-2,n_2-2}^{r_2} \sigma _{m_1+n_1-4-2r_1,m_2+n_2-4-2r_2} }{[2(m_1+n_1-4-2r_1)+1][2(m_2+n_2-4-2r_2)+1]}. \end{aligned}$$

We now employ the asymptotic estimates (5.2)–(5.4) to bound the terms \(A_{m_1-2,n_1-2}^{r_1}\) and \(A_{m_2-2,n_2-2}^{r_2}\). Accordingly, we need to distinguish among several cases depending on the combination of the values assumed by \(r_1\) and \(r_2\). However, for ease of reading, we only consider the case \(0<r_1<\min (m_1-2,n_1-2)\) and \(0<r_2<\min (m_2-2,n_2-2)\), as the other ones can be treated similarly. In this case, (5.2) yields

$$\begin{aligned} A_{m_1-2,n_1-2}^{r_1}\simeq \frac{1}{\pi } \frac{\sqrt{n_1+m_1-4-r_1}}{\sqrt{m_1-2-r_1}\sqrt{n_1-2-r_1}\sqrt{r_1}} \frac{2(n_1-2)+2(m_1-2)-4r_1+1}{2(n_1-2)+2(m_1-2)-2r_1+1}. \end{aligned}$$
(5.9)

A similar estimate holds also for \(A_{m_2-2,n_2-2}^{r_2}\). Hence, we have

$$\begin{aligned} \frac{C_m^n A_{m_1-2,n_1-2}^{r_1} A_{m_2-2,n_2-2}^{r_2}}{[2(m_1+n_1-4-2r_1)+1][2(m_2+n_2-4-2r_2)+1]} \lesssim 1. \end{aligned}$$
(5.10)

Employing (5.3) and (5.4) yields similar estimates for the cases \(r_1=0,\min (m_1-2,n_1-2)\) and \(r_2=0,\min (m_2-2,n_2-2)\). In conclusion, we get

$$\begin{aligned} I_1\lesssim \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)}\sigma _{m_1+n_1-4-2r_1,m_2+n_2-4-2r_2}. \end{aligned}$$
(5.11)

Similar estimates can be obtained for \(I_2,\ldots ,I_{16}\), thus yielding

$$\begin{aligned} a_{mn}^{(0)}&\lesssim \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} \sigma _{m_1+n_1-4-2r_1,m_2+n_2-4-2r_2} \\&+ \cdots + \sum _{r_1=0}^{\min (m_1,n_1)} \sum _{r_2=0}^{\min (m_2,n_2)}\sigma _{m_1+n_1-2r_1,m_2+n_2-2r_2}. \end{aligned}$$

Assuming \( \vert \sigma _k \vert \le C_\eta e^{-\gamma \Vert k\Vert _{\ell ^{1}}}\) for every \(k\in \mathcal{K}\) and employing the above estimate, we obtain

$$\begin{aligned} \vert a_{mn}^{(0)}\vert&\lesssim e^{-\gamma (\vert m_1-n_1\vert + \vert m_2-n_2\vert )}\\&\times \left\{ \sum _{r_1=0}^{\min (m_1-2,n_1-2)} \sum _{r_2=0}^{\min (m_2-2,n_2-2)} e^{-2\gamma (\min (m_1-2,n_1-2)-r_1)} e^{-2\gamma (\min (m_2-2,n_2-2)-r_2)}\right. \\&\left. + \cdots + \sum _{r_1=0}^{\min (m_1,n_1)} \sum _{r_2=0}^{\min (m_2,n_2)} e^{-2\gamma (\min (m_1,n_1)-r_1)} e^{-2\gamma (\min (m_2,n_2)-r_2)}\right\} \\&\lesssim e^{-\gamma (\vert m_1-n_1\vert + \vert m_2-n_2\vert )}= C e^{-\gamma \Vert m-n\Vert _{\ell ^{1}}}. \end{aligned}$$

This concludes the proof. \(\square \)
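
As a complementary illustration (again only a sketch, under the same assumption on \(\eta _k\) as in the previous snippets and with an arbitrary analytic \(\sigma \)), the analogous decay of the entries \(a^{(0)}_{mn}\) can be observed numerically:

```python
# Companion check for the reaction part a^{(0)}_{mn} = int_Omega sigma eta_m eta_n;
# illustrative sketch under the assumption eta_k = (L_{k-2} - L_k)/sqrt(4k - 2).
import numpy as np
from numpy.polynomial import legendre as leg

x, w = leg.leggauss(60)
X1, X2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)
sigma = 1.0 / (2.0 + 0.5 * (X1 + X2))    # sample analytic reaction coefficient

def eta(k, t):
    c = np.zeros(k + 1); c[k - 2], c[k] = 1.0, -1.0
    return leg.legval(t, c) / np.sqrt(4 * k - 2)

def a0(m, n):
    return np.sum(W * sigma * eta(m[0], X1) * eta(n[0], X1) * eta(m[1], X2) * eta(n[1], X2))

m = (5, 4)
for d in range(0, 9, 2):
    print(d, abs(a0(m, (5 + d, 4))))     # entries decay roughly geometrically in d
```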

Cite this article

Canuto, C., Simoncini, V. & Verani, M. Contraction and Optimality Properties of an Adaptive Legendre–Galerkin Method: The Multi-Dimensional Case. J Sci Comput 63, 769–798 (2015). https://doi.org/10.1007/s10915-014-9912-3
