
On the \({H^1}\)-stability of the \({L_2}\)-projection onto finite element spaces

Numerische Mathematik

Abstract

We study the stability in the \(H^1\)-seminorm of the \(L_2\)-projection onto finite element spaces in the case of nonuniform but shape regular meshes in two and three dimensions and prove, in particular, stability for conforming triangular elements up to order twelve and conforming tetrahedral elements up to order seven.


References

  1. Bank, R.E.: PLTMG: A software package for solving elliptic partial differential equations, users’ guide 11.0. Tech. rep., Department of Mathematics, University of California at San Diego (2012)

  2. Bank, R.E., Dupont, T.: An optimal order process for solving finite element equations. Math. Comput. 36, 35–51 (1981)

  3. Bank, R.E., Nguyen, H.: \(hp\) adaptive finite elements based on derivative recovery and superconvergence. Comput. Vis. Sci. 14, 287–299 (2012)

  4. Bank, R.E., Sherman, A.H., Weiser, A.: Refinement algorithms and data structures for regular local mesh refinement. In: Stepleman, R.S. (ed.) Scientific Computing (Applications of Mathematics and Computing to the Physical Sciences), pp. 3–17. North Holland (1983)

  5. Bey, J.: Tetrahedral grid refinement. Computing 55, 355–378 (1995)

  6. Bramble, J., Pasciak, J., Steinbach, O.: On the stability of the \(L_2\)-projection in \(H^1(\Omega )\). Math. Comput. 71, 147–156 (2001)

  7. Carstensen, C.: Merging the Bramble–Pasciak–Steinbach and the Crouzeix–Thomée criterion for \(H^1\)-stability of the \(L_2\)-projection onto finite element spaces. Math. Comput. 71, 157–163 (2001)

  8. Carstensen, C.: An adaptive mesh-refining algorithm allowing for an \(H^1\) stable \(L_2\) projection onto Courant finite element spaces. Constr. Approx. 20, 549–564 (2004)

  9. Clément, P.: Approximation by finite element functions using local regularization. RAIRO Sér. Rouge Anal. Numér. R–2, 77–84 (1975)

  10. Crouzeix, M., Thomée, V.: The stability in \(L_p\) and \(W_p^1\) of the \(L_2\)-projection onto finite element function spaces. Math. Comput. 48, 521–532 (1987)

  11. Demko, S.: Inverses of band matrices and local convergence of spline projection. SIAM J. Numer. Anal. 14, 616–619 (1977)

  12. Demko, S., Moss, W.F., Smith, P.W.: Decay rates for inverses of band matrices. Math. Comput. 43, 491–499 (1984)

  13. Deuflhard, P., Hohmann, A.: Numerical analysis in modern scientific computing: an introduction. In: Texts in Applied Mathematics, vol. 43. Springer, Berlin (2003)

  14. Douglas, J., Dupont, T., Wahlbin, L.: The stability in \(L^q\) of the \(L^2\)-projection into finite element spaces. Numer. Math. 23, 193–197 (1975)

  15. Olver, F., Lozier, D., Boisvert, R., Clark, C. (eds.): NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge (2010). http://dlmf.nist.gov/

  16. Xu, J.: Iterative methods by space decomposition and subspace correction. SIAM Rev. 34, 581–613 (1992)

  17. Yserentant, H.: Two preconditioners based on the multi-level splitting of finite element spaces. Numer. Math. 58, 163–184 (1990)

  18. Yserentant, H.: Old and new convergence proofs for multigrid methods. Acta Numerica 2, 285–326 (1993)


Corresponding author

Correspondence to Harry Yserentant.

Additional information

Harry Yserentant was supported by the DFG-Priority Program 1324 and the DFG-Research Center Matheon.

Randolph E. Bank was supported by the Alexander von Humboldt Foundation through a Humboldt Research Award.

Appendix: The one-dimensional case

The one-dimensional case is particularly simple. The subspace \({\fancyscript{S}}_0\) can in this case be built up from a single type of basis function, assigned to the boundary points of the subintervals. Let \([-1,1]\) be the reference interval and let \(\varphi :[-1,1]\rightarrow \mathbb R \) be the polynomial of given degree \(n\) that takes the values \(\varphi (-1)=1\) and \(\varphi (1)=0\) and is \(L_2\)-orthogonal to all polynomials of degree \(n\) that vanish at \(-1\) and \(1\). The basis functions on the reference interval are then the functions

$$\begin{aligned} \varphi _1(x)=\varphi (x), \quad \varphi _2(x)=\varphi (-x). \end{aligned}$$

The reduced mass matrix of the overall problem becomes in this case a tridiagonal matrix. The condition number of its diagonally scaled counterpart can be estimated by the condition number \(\kappa \) of the \((2\times 2)\)-element matrix

$$\begin{aligned} \begin{pmatrix} (\varphi _1,\varphi _1) &amp; (\varphi _1,\varphi _2) \\ (\varphi _2,\varphi _1) &amp; (\varphi _2,\varphi _2) \end{pmatrix}. \end{aligned}$$

As the application of the operator \(C\) from Sect. 2 corresponds here to diagonal preconditioning of the reduced mass matrix, it determines the convergence rate

$$\begin{aligned} q=\frac{\sqrt{\kappa }-1}{\sqrt{\kappa }+1} \end{aligned}$$

of the iterative method studied in Sect. 2 and with that the upper bound for the factor \(\mu \) from Eq. (6.5) by which the lengths of neighboring subintervals may differ.

To calculate the above element matrix, we use the Jacobi polynomials \(P_k=P_k^{(\alpha ,\beta )}\) of degree \(k\) for the indices \(\alpha =\beta =2\). They are given by the Rodrigues formula

$$\begin{aligned} P_k(x)=\frac{(-1)^k}{2^k\,k!\,(1-x^2)^2}\; \Big (\frac{{\mathrm{d}}}{{\mathrm{d}}x}\Big )^k\Big \{(1-x^2)^{k+2}\Big \} \end{aligned}$$

and satisfy the orthogonality relations

$$\begin{aligned} \int _{-1}^1 (1-x)^2(1+x)^2\,P_k(x)P_\ell (x)\,\mathrm{d}x= \frac{32}{2k+5}\,\frac{(k+1)(k+2)}{(k+3)(k+4)}\,\delta _{k\ell }. \end{aligned}$$

For even \(k\), they are symmetric and for odd \(k\) antisymmetric:

$$\begin{aligned} P_k(-x)=(-1)^k\,P_k(x). \end{aligned}$$

More comprehensive information can be found in [15].
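As a numerical cross-check, not part of the original argument, the orthogonality relations can be verified directly. The sketch below assumes SciPy's `scipy.special.jacobi`, which returns the polynomials \(P_k^{(2,2)}\) as NumPy `poly1d` objects, so the products with the weight \((1-x)^2(1+x)^2\) can be integrated exactly via the antiderivative:

```python
import numpy as np
from scipy.special import jacobi

def weighted_inner(k, l):
    # exact integral of (1-x)^2 (1+x)^2 P_k(x) P_l(x) over [-1, 1]:
    # everything is polynomial, so we integrate the product symbolically
    w = np.poly1d([-1.0, 0.0, 1.0]) ** 2          # (1 - x^2)^2
    p = w * jacobi(k, 2, 2) * jacobi(l, 2, 2)
    P = p.integ()                                  # antiderivative
    return P(1.0) - P(-1.0)

for k in range(6):
    for l in range(6):
        expected = 0.0 if k != l else \
            32.0 / (2*k + 5) * (k + 1) * (k + 2) / ((k + 3) * (k + 4))
        assert abs(weighted_inner(k, l) - expected) < 1e-8
print("orthogonality relations confirmed for k, l < 6")
```

For \(k=\ell =0\) the integral is simply \(\int _{-1}^1(1-x^2)^2\,\mathrm{d}x=16/15\), which matches the stated normalization \(32/5\cdot 2/12\).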

Lemma

The above element matrix is a scalar multiple of the matrix

$$\begin{aligned} \begin{pmatrix} n+1 &amp; (-1)^{n+1} \\ (-1)^{n+1} &amp; n+1 \end{pmatrix}. \end{aligned}$$

Proof

We first expand the projection of the linear function that takes the value 1 at the left and the value 0 at the right endpoint onto the space of polynomials of degree at most \(n\) that vanish at both endpoints in terms of the \(L_2\)-orthonormal polynomials

$$\begin{aligned} \psi _k(x)= \bigg (\frac{2k+5}{32}\,\frac{(k+3)(k+4)}{(k+1)(k+2)}\bigg )^{1/2} (1-x^2)\,P_k(x). \end{aligned}$$

Since the boundary terms vanish, multiple integration by parts yields

$$\begin{aligned} \int _{-1}^1\frac{1-x}{2}\;(1-x^2)\,P_k(x)\,\mathrm{d}x= \frac{1}{2^{k+1}k!}\, \int _{-1}^1\Big (\frac{\mathrm{d}}{\mathrm{d}x}\Big )^k \Big \{\frac{1}{1+x}\Big \}(1-x^2)^{k+2}\,\mathrm{d}x \end{aligned}$$

and thus after some rather obvious intermediate steps the explicit representation

$$\begin{aligned} \int _{-1}^1\frac{1-x}{2}\;(1-x^2)\,P_k(x)\,\mathrm{d}x= (-1)^k\,\frac{8}{(k+3)(k+4)} \end{aligned}$$

of the integral. The expansion coefficients are therefore

$$\begin{aligned} c_k=(-1)^k\, \bigg (\frac{2k+5}{32}\;\frac{(k+3)(k+4)}{(k+1)(k+2)}\bigg )^{1/2}\! \frac{8}{(k+3)(k+4)} \end{aligned}$$

and the initially introduced shape function \(\varphi \) of polynomial order \(n\ge 2\) is given by

$$\begin{aligned} \varphi (x)=\frac{1-x}{2}-\sum _{k=0}^{n-2}c_k\psi _k(x) \end{aligned}$$

or directly in terms of the Jacobi polynomials by

$$\begin{aligned} \varphi (x)=\frac{1-x}{2}-\sum _{k=0}^{n-2}\, \frac{(-1)^k}{4}\,\frac{2k+5}{(k+1)(k+2)}\,(1-x^2)\,P_k(x). \end{aligned}$$

Using the \(L_2\)-orthonormality of the \(\psi _k\), the diagonal entries of the matrix become

$$\begin{aligned} \int _{-1}^1\varphi (x)^2\,\mathrm{d}x= \frac{2}{3}-\sum _{k=0}^{n-2}c_k^2= \frac{2}{n\,(n+2)}. \end{aligned}$$

As \(\psi _k(-x)=(-1)^k\psi _k(x)\), its off-diagonal entries are correspondingly

$$\begin{aligned} \int _{-1}^1\varphi (x)\varphi (-x)\,\mathrm{d}x= \frac{1}{3}-\sum _{k=0}^{n-2}(-1)^k\,c_k^2= \frac{2\,(-1)^{n+1}}{n\,(n+1)(n+2)}. \end{aligned}$$

As the final results remain true for \(n=1\), this proves the proposition. \(\square \)
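The explicit representation of \(\varphi \) lends itself to a direct numerical verification of the lemma. The following sketch, again relying on `scipy.special.jacobi` (an assumption for illustration, not part of the paper), rebuilds \(\varphi \) from the formula above and checks both matrix entries for the degrees \(n=1,\dots ,12\):

```python
import numpy as np
from scipy.special import jacobi

def shape_function(n):
    # phi(x) = (1-x)/2 - sum_{k=0}^{n-2} (-1)^k/4 * (2k+5)/((k+1)(k+2)) (1-x^2) P_k(x)
    phi = np.poly1d([-0.5, 0.5])                   # (1 - x)/2
    bubble = np.poly1d([-1.0, 0.0, 1.0])           # 1 - x^2
    for k in range(n - 1):
        c = (-1) ** k / 4.0 * (2*k + 5) / ((k + 1) * (k + 2))
        phi = phi - c * (bubble * jacobi(k, 2, 2))
    return phi

def exact_integral(p):
    # exact integral of the polynomial p over [-1, 1]
    P = p.integ()
    return P(1.0) - P(-1.0)

for n in range(1, 13):
    phi = shape_function(n)
    # boundary values phi(-1) = 1, phi(1) = 0
    assert abs(phi(-1.0) - 1.0) < 1e-8 and abs(phi(1.0)) < 1e-8
    # reflected polynomial phi(-x): flip the sign of the odd coefficients
    refl = np.poly1d(phi.coeffs * (-1.0) ** np.arange(phi.order, -1, -1))
    diag = exact_integral(phi * phi)
    off = exact_integral(phi * refl)
    assert abs(diag - 2.0 / (n * (n + 2))) < 1e-8
    assert abs(off - 2.0 * (-1) ** (n + 1) / (n * (n + 1) * (n + 2))) < 1e-8
print("element matrix entries confirmed for n = 1, ..., 12")
```

Dividing both entries by \(2/(n(n+1)(n+2))\) recovers exactly the scalar multiple stated in the lemma.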

The reduced total mass matrix is assembled from the element mass matrices, that is, it is their sum. If the polynomial order is everywhere the same, the following holds:

Lemma

If the polynomial order is \(n\) on all subintervals, the minimum and the maximum eigenvalue of the diagonally scaled reduced total mass matrix are

$$\begin{aligned} 1-\frac{1}{n+1}, \quad 1+\frac{1}{n+1}. \end{aligned}$$

Proof

We start from the generalized eigenvalue problem

$$\begin{aligned} \begin{pmatrix} (\varphi _1,\varphi _1) &amp; (\varphi _1,\varphi _2) \\ (\varphi _2,\varphi _1) &amp; (\varphi _2,\varphi _2) \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix} = \lambda \begin{pmatrix} (\varphi _1,\varphi _1) &amp; 0 \\ 0 &amp; (\varphi _2,\varphi _2) \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix} \end{aligned}$$

for the just considered element matrix and its diagonal. The minimum and the maximum eigenvalue \(\lambda \) are by our first lemma the two values given above. Therefore

$$\begin{aligned} \Big (1-\frac{1}{n+1}\Big )\,v^T\!Dv \le v^T\!M_0v \le \Big (1+\frac{1}{n+1}\Big )\,v^T\!Dv \end{aligned}$$

for all coefficient vectors \(v\) of functions in \({\fancyscript{S}}_0\), where \(M_0\) is the reduced total mass matrix and \(D\) its diagonal. The given values thus represent a lower and an upper bound, respectively, for the eigenvalues of the diagonally scaled reduced total mass matrix.

The eigenvectors of the element matrix are the multiples of the vectors

$$\begin{aligned} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \end{aligned}$$

which belong to the minimum and the maximum eigenvalue or to the maximum and the minimum eigenvalue, depending on the polynomial degree \(n\). Inserting above the coefficient vectors \(v\) of the functions in \({\fancyscript{S}}_0\) that take the value \(1\) at all intermediate points or, alternately, the values \(\pm 1\), one sees that both bounds are attained. This proves the proposition. \(\square \)

After diagonal scaling the condition number of the reduced total mass matrix is thus

$$\begin{aligned} \kappa =\frac{n+2}{n} \end{aligned}$$

if the polynomial order is \(n\) on all subintervals. In this case therefore

$$\begin{aligned} q=\frac{\sqrt{\kappa }-1}{\sqrt{\kappa }+1} <\frac{1}{2n}. \end{aligned}$$

The \(L_2\)-projection \(Q\) onto the full space \({\fancyscript{S}}\) thus remains \(H^1\)-stable as long as the lengths of neighboring subintervals differ at most by a factor of \(2n\). The larger the polynomial degree, the less stringent this condition becomes. By the same arguments as in Sect. 7 one can, however, show that the lengths of neighboring subintervals cannot be allowed to differ arbitrarily.
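These relations are easy to confirm numerically. The sketch below builds the scaled element matrix from the first lemma, checks the eigenvalues \(1\pm 1/(n+1)\) after diagonal scaling, and verifies the bound \(q<1/(2n)\):

```python
import math
import numpy as np

for n in range(1, 13):
    # scaled element matrix of the first lemma; its diagonal entries equal n + 1,
    # so diagonal scaling amounts to division by n + 1
    M = np.array([[n + 1.0, (-1.0) ** (n + 1)],
                  [(-1.0) ** (n + 1), n + 1.0]])
    lam = np.linalg.eigvalsh(M / (n + 1))          # ascending eigenvalues
    assert np.allclose(lam, [1 - 1/(n + 1), 1 + 1/(n + 1)])
    kappa = lam[1] / lam[0]
    assert abs(kappa - (n + 2) / n) < 1e-12        # condition number (n+2)/n
    q = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
    assert q < 1 / (2 * n)                          # claimed bound on the rate
print("kappa = (n+2)/n and q < 1/(2n) confirmed for n = 1, ..., 12")
```

For \(n=1\), for instance, \(\kappa =3\) and \(q=2-\sqrt{3}\approx 0.268<1/2\).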

Even more interesting than the case of fixed polynomial degree is the \(hp\)-case, in which the polynomial degree can change from subinterval to subinterval. Independently of this, \(1/2\) remains a lower and \(3/2\) an upper bound for the eigenvalues of the diagonally scaled reduced element matrices. Thus the estimate

$$\begin{aligned} \kappa \le 3 \end{aligned}$$

holds for the condition number of the diagonally scaled reduced total mass matrix. This implies that the \(L_2\)-projection onto such \(hp\)-spaces remains stable with respect to the weighted \(L_2\)-norm from Theorem 4.1, independently of the polynomial degrees involved, as long as the lengths of neighboring subintervals differ at most by a factor

$$\begin{aligned} \mu <\frac{\sqrt{3}+1}{\sqrt{3}-1}. \end{aligned}$$

The limiting factor in the \(hp\)-case is therefore the inverse inequality.
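The degree-independence of the \(hp\)-bound can be checked in the same spirit: the eigenvalues \(1\mp 1/(n+1)\) of the diagonally scaled element matrices stay within \([1/2,3/2]\) for every degree \(n\ge 1\), which yields \(\kappa \le 3\) and the admissible mesh-grading factor \(\mu <(\sqrt{3}+1)/(\sqrt{3}-1)=2+\sqrt{3}\):

```python
import math

# eigenvalue bounds 1/2 and 3/2 hold uniformly in the polynomial degree
for n in range(1, 101):
    assert 0.5 <= 1 - 1 / (n + 1)
    assert 1 + 1 / (n + 1) <= 1.5

# resulting mesh-grading bound; rationalizing gives 2 + sqrt(3)
mu_bound = (math.sqrt(3) + 1) / (math.sqrt(3) - 1)
assert abs(mu_bound - (2 + math.sqrt(3))) < 1e-12
print(f"mu < {mu_bound:.4f}")
```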

Cite this article

Bank, R.E., Yserentant, H. On the \({H^1}\)-stability of the \({L_2}\)-projection onto finite element spaces. Numer. Math. 126, 361–381 (2014). https://doi.org/10.1007/s00211-013-0562-4
