Abstract
We consider the preconditioned conjugate gradient method (PCG) with optimal preconditioner in the framework of the boundary element method for elliptic first-kind integral equations. Our adaptive algorithm steers the termination of PCG as well as the local mesh-refinement. Besides convergence with optimal algebraic rates, we also prove almost optimal computational complexity. In particular, we provide an additive Schwarz preconditioner which can be computed in linear complexity and which is optimal in the sense that the condition numbers of the preconditioned systems are uniformly bounded. The 2D and 3D Laplace operator and the associated weakly-singular integral equation with energy space \(\widetilde{H}^{-1/2}(\Gamma )\) serve as our model problem. The main results also hold for the hyper-singular integral equation with energy space \(H^{1/2}(\Gamma )\).
1 Introduction
1.1 Model problem
Let \(\Omega \subset \mathbb {R}^d\) with \(d=2,3\) be a bounded Lipschitz domain with polyhedral boundary \(\partial \Omega \). Let \(\Gamma \subseteq \partial \Omega \) be a (relatively) open and connected subset. Given \(f : \Gamma \rightarrow \mathbb {R}\), we seek the density \(\phi ^\star :\Gamma \rightarrow \mathbb {R}\) of the weakly-singular integral equation
$$\begin{aligned} V\phi ^\star (x) := \int _\Gamma G(x-y) \, \phi ^\star (y) \,\mathrm{d}y = f(x) \quad \text {for all } x \in \Gamma , \end{aligned}$$(1)
where \(G(\cdot )\) denotes the fundamental solution of the Laplace operator, i.e.,
$$\begin{aligned} G(z) := {\left\{ \begin{array}{ll} -\frac{1}{2\pi } \, \log |z| &{} \text {for } d = 2, \\ \frac{1}{4\pi } \, \frac{1}{|z|} &{} \text {for } d = 3. \end{array}\right. } \end{aligned}$$(2)
Given a triangulation \(\mathcal {T}_\bullet \) of \(\Gamma \), we employ a lowest-order Galerkin boundary element method (BEM) to compute a \(\mathcal {T}_\bullet \)-piecewise constant function \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) such that
$$\begin{aligned} \langle V\phi _\bullet ^\star \,,\,\psi _\bullet \rangle = \langle f\,,\,\psi _\bullet \rangle \quad \text {for all } \psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet ). \end{aligned}$$(3)
With the numbering \(\mathcal {T}_\bullet = \{T_1,\ldots ,T_N\}\), consider the standard basis \(\big \{\chi _{\bullet ,j}\,:\,j=1,\ldots ,N\big \}\) of \(\mathcal {P}^0(\mathcal {T}_\bullet )\) consisting of characteristic functions \(\chi _{\bullet ,j}\) of \(T_j\in \mathcal {T}_\bullet \). We make the ansatz
$$\begin{aligned} \phi _\bullet ^\star = \sum _{j=1}^N \varvec{x}_\bullet ^\star [j] \, \chi _{\bullet ,j} \quad \text {with coefficient vector } \varvec{x}_\bullet ^\star \in \mathbb {R}^N. \end{aligned}$$(4)
Then, the Galerkin formulation (3) is equivalent to the linear system
$$\begin{aligned} \varvec{A}_\bullet \varvec{x}_\bullet ^\star = \varvec{b}_\bullet \quad \text {with} \quad \varvec{A}_\bullet [j,k] := \langle V\chi _{\bullet ,k}\,,\,\chi _{\bullet ,j}\rangle \quad \text {and} \quad \varvec{b}_\bullet [j] := \langle f\,,\,\chi _{\bullet ,j}\rangle , \end{aligned}$$(5)
where the matrix \(\varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) is positive definite and symmetric. For a given initial triangulation \(\mathcal {T}_0\), we consider an adaptive mesh-refinement strategy of the type
$$\begin{aligned} \texttt {solve} \longrightarrow \texttt {estimate} \longrightarrow \texttt {mark} \longrightarrow \texttt {refine} \end{aligned}$$(6)
which generates a sequence of successively refined triangulations \(\mathcal {T}_\ell \) for all \(\ell \in \mathbb {N}_0\). We note that the condition number of the Galerkin matrix \(\varvec{A}_\ell \) from (5) depends on the number of elements of \(\mathcal {T}_\ell \) as well as on the minimal and maximal element diameter. Therefore, the step solve requires an efficient preconditioner as well as an appropriate iterative solver.
1.2 State of the art
In the last decade, the mathematical understanding of adaptive mesh-refinement has matured. We refer to [8, 13, 18, 21, 37, 41] for some milestones for adaptive finite element methods for second-order linear elliptic equations, [3, 19, 20, 24, 27] for adaptive BEM, and [11] for a general framework of rate-optimality of adaptive mesh-refining algorithms. The interplay between adaptive mesh-refinement, optimal convergence rates, and inexact solvers has been addressed and analyzed for adaptive FEM for linear problems in [4, 6, 41], for eigenvalue problems in [12], and recently also for strongly monotone nonlinearities in [28]. In particular, all available results for adaptive BEM [3, 19, 20, 24, 27] assume that the Galerkin system (5) is solved exactly. Instead, the present work analyzes an adaptive algorithm which steers both the local mesh-refinement and the iterations of the PCG algorithm.
In principle, it is known [11, Section 7] that convergence and optimal convergence rates are preserved if the linear system is solved inexactly, but with sufficient accuracy. The purpose of this work is to guarantee the latter by incorporating an appropriate stopping criterion for the PCG solver into the adaptive algorithm. Moreover, to prove that the proposed algorithm does not only lead to optimal algebraic convergence rates, but also to (almost) optimal computational costs, we provide an appropriate symmetric and positive definite preconditioner \(\varvec{P}_\ell \in \mathbb {R}^{N\times N}\) such that
-
first, the matrix-vector products with \(\varvec{P}_\ell ^{-1}\) can be computed at linear cost;
-
second, the system matrix \(\varvec{P}^{-1/2}_\ell \varvec{A}_\ell \varvec{P}_\ell ^{-1/2}\) of the preconditioned linear system
$$\begin{aligned} \varvec{P}^{-1/2}_\ell \varvec{A}_\ell \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \varvec{b}_\ell \end{aligned}$$(7)
has a uniformly bounded condition number which is independent of \(\mathcal {T}_\ell \).
Then, \(\varvec{x}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star \) solves the original system (5). To that end, we exploit the multilevel structure of adaptively generated meshes in the framework of adaptive Schwarz methods. For hyper-singular integral equations, such a multilevel additive Schwarz preconditioner has been proposed and analyzed in [22, 25] for \(d = 2,3\) and for weakly-singular integral equations in [23] for \(d = 2\). In particular, the present work closes this gap by analyzing an optimal additive Schwarz preconditioner for weakly-singular integral equations for \(d = 3\). Besides, we refer to the recent work [43] on optimal preconditioning in Hilbert spaces of negative order. We note that the proofs of [22, 23] do not transfer to weakly-singular integral equations for \(d=3\). Instead, we build on recent results for finite element discretizations [5, 33] which are then transferred to the present BEM setting by use of an abstract concept from [39].
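The back-transformation \(\varvec{x}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star \) can be checked directly. The following numpy sketch solves a small preconditioned system of the form (7) and maps the solution back; the random SPD matrix and the diagonal (Jacobi) preconditioner are generic stand-ins for illustration only, not the BEM matrices or the additive Schwarz preconditioner of this work.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)            # stands in for the SPD Galerkin matrix
P = np.diag(np.diag(A))                # illustrative SPD (Jacobi) preconditioner
b = rng.standard_normal(5)

P_inv_half = np.diag(np.diag(P) ** -0.5)
A_tilde = P_inv_half @ A @ P_inv_half  # system matrix of the preconditioned system
x_tilde = np.linalg.solve(A_tilde, P_inv_half @ b)
x = P_inv_half @ x_tilde               # back-transformation

assert np.allclose(A @ x, b)           # x solves the original (unpreconditioned) system
```

In practice, of course, neither \(\varvec{P}_\ell ^{-1/2}\) nor the preconditioned matrix is formed explicitly; PCG only requires the action of \(\varvec{P}_\ell ^{-1}\) on a vector.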
1.3 Outline and main results
Section 2 introduces the functional analytic framework and fixes the necessary notation. Section 3 states our main results. In Sect. 3.1, we define a local multilevel additive Schwarz preconditioner (24) for a sequence of locally refined meshes. Theorem 3 states that the \(\ell _2\)-condition number of the preconditioned systems is uniformly bounded for all these meshes, i.e., the preconditioner is optimal. In Sect. 3.2, we first state our adaptive algorithm which steers the local mesh-refinement as well as the stopping of the PCG iteration (Algorithm 5). Theorem 8 proves
-
that the overall error in the energy norm can be controlled a posteriori,
-
that the quasi-error (which consists of energy norm error plus error estimator) is linearly convergent in each step of the adaptive algorithm (i.e., independent of whether the algorithm decides for local mesh-refinement or for one step of the PCG iteration),
-
that the quasi-error even decays with optimal rate (i.e., with each possible algebraic rate) with respect to the degrees of freedom, i.e., Algorithm 5 is rate optimal in the sense of, e.g., [11, 13, 24, 41].
Finally, Sect. 3.3 considers the computational costs. Under realistic assumptions on the treatment of the arising discrete integral operators, Corollary 10 states that the quasi-error converges at almost optimal rate (i.e., with rate \(s-\varepsilon \) for any \(\varepsilon >0\) if rate \(s>0\) is possible for the exact Galerkin solution) with respect to computational costs, i.e., Algorithm 5 requires almost optimal computational time. Section 4 underpins our theoretical findings by some 2D and 3D experiments. The proof of Theorem 3 is given in Sect. 5, the proofs of Theorem 8 and Corollary 10 are given in Sect. 6. The final Sect. 7 shows that our main results also apply to the hyper-singular integral equation.
2 Preliminaries and notation
2.1 Functional analytic setting
We briefly recall the most important facts and refer to [36] for further details and proofs. With the Sobolev space \(H^\alpha (\partial \Omega )\) defined as in [36] for \(0 \le \alpha \le 1\), let \(H^\alpha (\Gamma ) := \big \{v|_\Gamma \,:\,v \in H^\alpha (\partial \Omega )\big \}\) be associated with the natural quotient norm. Let \(\widetilde{H}^{-\alpha }(\Gamma )\) be the dual space of \(H^\alpha (\Gamma )\) with respect to the extended \(L^2(\Gamma )\) scalar product \(\langle \psi \,,\,f\rangle = \int _\Gamma \psi (x) \, f(x) \,\mathrm{d}x\). Then, the single-layer potential V from (1) gives rise to a bounded linear operator \(V:\widetilde{H}^{-1/2+s}(\Gamma ) \rightarrow H^{1/2+s}(\Gamma )\) for all \(-1/2 \le s \le 1/2\), which is even an isomorphism for \(-1/2< s < 1/2\). For \(d=2\), the latter requires \(\mathrm{diam}(\Omega ) < 1\) which can always be ensured by scaling of \(\Omega \). For \(s=0\), the operator V is even symmetric and elliptic, i.e.,
$$\begin{aligned} \langle \langle \phi \,,\,\psi \rangle \rangle := \langle V\phi \,,\,\psi \rangle \quad \text {for all } \phi , \psi \in \widetilde{H}^{-1/2}(\Gamma ) \end{aligned}$$(8)
defines a scalar product and \(|||\,\phi \,|||^2 := \langle \langle \phi \,,\,\phi \rangle \rangle \) is an equivalent norm on \(\widetilde{H}^{-1/2}(\Gamma )\). For a given right-hand side \(f \in H^{1/2}(\Gamma )\), the weakly-singular integral equation (1) can thus equivalently be reformulated as
$$\begin{aligned} \langle \langle \phi ^\star \,,\,\psi \rangle \rangle = \langle f\,,\,\psi \rangle \quad \text {for all } \psi \in \widetilde{H}^{-1/2}(\Gamma ). \end{aligned}$$(9)
In particular, the Lax–Milgram theorem proves existence and uniqueness of the solution \(\phi ^\star \in \widetilde{H}^{-1/2}(\Gamma )\) to (9).
2.2 Boundary element method (BEM)
Given a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \), let
$$\begin{aligned} \mathcal {P}^0(\mathcal {T}_\bullet ) := \big \{\psi :\Gamma \rightarrow \mathbb {R}\,:\,\psi |_T \text { is constant for all } T \in \mathcal {T}_\bullet \big \} \end{aligned}$$(10)
be the space of \(\mathcal {T}_\bullet \)-piecewise constant functions. Note that \(\mathcal {P}^0(\mathcal {T}_\bullet ) \subset L^2(\Gamma ) \subset \widetilde{H}^{-1/2}(\Gamma )\). The Galerkin formulation (3) can be reformulated as
$$\begin{aligned} \langle \langle \phi _\bullet ^\star \,,\,\psi _\bullet \rangle \rangle = \langle f\,,\,\psi _\bullet \rangle \quad \text {for all } \psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet ). \end{aligned}$$(11)
Therefore, the Lax–Milgram theorem proves existence and uniqueness of the discrete solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\).
2.3 Mesh-refinement for 2D BEM
For \(d = 2\), a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \) is a partition into non-degenerate compact line segments. It is called \(\gamma \)-shape regular, if
$$\begin{aligned} \max _{\begin{array}{c} T, T' \in \mathcal {T}_\bullet \\ T \cap T' \ne \emptyset \end{array}} \frac{h_T}{h_{T'}} \le \gamma . \end{aligned}$$(12)
Here, \(h_T:=\mathrm{diam}(T) > 0\) denotes the Euclidean diameter of T, i.e., the length of the line segment.
We employ the extended bisection algorithm from [2]. For a mesh \(\mathcal {T}_\bullet \) and \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), let \(\mathcal {T}_\circ := \mathrm{refine}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) be the coarsest mesh such that all marked elements \(T\in \mathcal {M}_\bullet \) have been refined, i.e., \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \backslash \mathcal {T}_\circ \). We write \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\), if there exists \(n\in \mathbb {N}_0\), conforming triangulations \(\mathcal {T}_0,\ldots ,\mathcal {T}_n\) and corresponding sets of marked elements \(\mathcal {M}_j\subseteq \mathcal {T}_j\) such that
-
\(\mathcal {T}_\bullet = \mathcal {T}_0\),
-
\(\mathcal {T}_{j+1} = \mathrm{refine}(\mathcal {T}_j,\mathcal {M}_j)\) for all \(j=0,\ldots ,n-1\),
-
\(\mathcal {T}_\circ = \mathcal {T}_n\),
i.e., \(\mathcal {T}_\circ \) is obtained from \(\mathcal {T}_\bullet \) by finitely many steps of refinement. Note that the bisection algorithm from [2] guarantees, in particular, that all \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) are uniformly \(\gamma \)-shape regular, where \(\gamma \) depends only on \(\mathcal {T}_\bullet \).
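For intuition, one refinement step for \(d = 2\) can be sketched as follows, with a mesh represented simply as a list of element lengths. Note that this performs plain bisection only and omits the neighbourhood closure of the extended bisection algorithm from [2], which additionally refines certain neighbours to preserve uniform \(\gamma \)-shape regularity.

```python
def refine(mesh, marked):
    """One plain bisection step for d = 2: every marked line segment is halved.

    mesh   : list of element lengths h_T
    marked : set of indices of marked elements
    (Illustrative sketch only; the algorithm of [2] adds a closure step.)
    """
    new_mesh = []
    for j, h in enumerate(mesh):
        new_mesh.extend([h / 2.0, h / 2.0] if j in marked else [h])
    return new_mesh
```

Repeated application, e.g. `refine(refine(mesh, marked0), marked1)`, mirrors the definition of \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) by finitely many refinement steps.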
2.4 Mesh-refinement for 3D BEM
For \(d = 3\), a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \) is a conforming triangulation into non-degenerate compact surface triangles. In particular, we avoid hanging nodes. To ease the presentation, we suppose that the elements \(T \in \mathcal {T}_\bullet \) are flat. The triangulation is called \(\gamma \)-shape regular, if
$$\begin{aligned} \max _{T \in \mathcal {T}_\bullet } \frac{\mathrm{diam}(T)}{h_T} \le \gamma . \end{aligned}$$(13)
Here, \(\mathrm{diam}(T)\) denotes the Euclidean diameter of T and \(h_T:=|T|^{1/2}\) with |T| being the two-dimensional surface measure. Note that \(\gamma \)-shape regularity implies that \(h_T \le \mathrm{diam}(T) \le \gamma \, h_T\) and hence excludes anisotropic elements.
For 3D BEM, we employ 2D newest vertex bisection (NVB) to refine triangulations locally; see [35, 42] for details on the refinement algorithm and Fig. 1 for an illustration. For a mesh \(\mathcal {T}_\bullet \) and \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), we employ the same notation \(\mathcal {T}_\circ := \mathrm{refine}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) resp. \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) as for \(d = 2\).
Fig. 1: For newest vertex bisection (NVB) in 2D, each triangle \(T\in \mathcal {T}\) has one reference edge, indicated by the double line (left). Bisection of T is achieved by halving the reference edge (middle). The reference edges of the sons are always opposite to the new vertex. Recursive application of this refinement rule leads to conforming triangulations
2.5 A posteriori BEM error control
For \(\psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\mathcal {U}_\bullet \subseteq \mathcal {T}_\bullet \), define
$$\begin{aligned} \eta _\bullet (\mathcal {U}_\bullet , \psi _\bullet )^2 := \sum _{T \in \mathcal {U}_\bullet } \eta _\bullet (T, \psi _\bullet )^2, \quad \text {where} \quad \eta _\bullet (T, \psi _\bullet )^2 := h_T \, \Vert \nabla _\Gamma (f - V\psi _\bullet )\Vert _{L^2(T)}^2. \end{aligned}$$(14)
Here, \(\nabla _\Gamma (\cdot )\) denotes the arclength derivative for \(d = 2\) resp. the surface gradient for \(d = 3\). To abbreviate notation, let \(\eta _\bullet (\psi _\bullet ) := \eta _\bullet (\mathcal {T}_\bullet , \psi _\bullet )\). If \(\psi _\bullet = \phi _\bullet ^\star \) is the discrete solution to (11), then there holds the reliability estimate (i.e., the global upper bound)
$$\begin{aligned} |||\,\phi ^\star - \phi _\bullet ^\star \,||| \le C_{\mathrm{rel}} \, \eta _\bullet (\phi _\bullet ^\star ), \end{aligned}$$(15)
where \(C_{\mathrm{rel}} > 0\) depends only on \(\Gamma \) and \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \); see [10, 17] for \(d = 2\) resp. [15] for \(d = 3\). Provided that \(\phi ^\star \in L^2(\Gamma )\), the following weak efficiency
$$\begin{aligned} \eta _\bullet (\phi _\bullet ^\star ) \le C_{\mathrm{eff}} \, \Vert h_\bullet ^{1/2} (\phi ^\star - \phi _\bullet ^\star )\Vert _{L^2(\Gamma )} \end{aligned}$$(16)
has recently been proved in [3], where \(C_{\mathrm{eff}}>0\) depends only on \(\Gamma \) and \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \). We note that the weighted \(L^2\)-norm on the right-hand side of (16) is only slightly stronger than \(|||\,\cdot \,|||\simeq \Vert \cdot \Vert _{\widetilde{H}^{-1/2}(\Gamma )}\), so that one empirically observes \(\eta _\bullet (\phi _\bullet ^\star )\lesssim |||\,\phi ^\star -\phi _\bullet ^\star \,|||\) in practice, cf. [10, 15, 17]. In certain situations (e.g., the weakly-singular integral formulation of the interior 2D Dirichlet problem), one can rigorously prove the latter (strong) efficiency estimate up to higher-order data oscillations; see [2].
2.6 Preconditioned conjugate gradient method (PCG)
Suppose that \(\varvec{P}_\bullet , \varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) are symmetric and positive definite matrices. Given \(\varvec{b}_\bullet \in \mathbb {R}^N\) and an initial guess \(\varvec{x}_{\bullet 0}\), PCG (see [29, Algorithm 11.5.1]) aims to approximate the solution \(\varvec{x}_\bullet ^\star \in \mathbb {R}^N\) to (5). We note that each step of PCG has the following computational costs:
-
\(\mathcal {O}(N)\) cost for vector operations (e.g., assignment, addition, scalar product),
-
computation of one matrix-vector product with \(\varvec{A}_\bullet \),
-
computation of one matrix-vector product with \(\varvec{P}_\bullet ^{-1}\).
Let \(\widetilde{\varvec{x}}_\bullet ^\star \in \mathbb {R}^N\) be the solution to (7) and recall that \(\varvec{x}_\bullet ^\star = \varvec{P}_\bullet ^{-1/2} \widetilde{\varvec{x}}_\bullet ^\star \). We note that PCG formally applies the conjugate gradient method (CG, see [29, Algorithm 11.3.2]) for the matrix \(\widetilde{\varvec{A}}_\bullet := \varvec{P}_\bullet ^{-1/2} \varvec{A}_\bullet \varvec{P}_\bullet ^{-1/2}\) and the right-hand side \(\widetilde{\varvec{b}}_\bullet = \varvec{P}_\bullet ^{-1/2} \varvec{b}_\bullet \). The iterates \(\varvec{x}_{\bullet k} \in \mathbb {R}^N\) of PCG (applied to \(\varvec{P}_\bullet \), \(\varvec{A}_\bullet \), \(\varvec{b}_\bullet \), and the initial guess \(\varvec{x}_{\bullet 0}\)) and the iterates \(\widetilde{\varvec{x}}_{\bullet k}\) of CG (applied to \(\widetilde{\varvec{A}}_\bullet \), \(\widetilde{\varvec{b}}_\bullet \), and the initial guess \(\widetilde{\varvec{x}}_{\bullet 0} := \varvec{P}_\bullet ^{1/2} \varvec{x}_{\bullet 0}\)) are formally linked by
$$\begin{aligned} \varvec{x}_{\bullet k} = \varvec{P}_\bullet ^{-1/2} \, \widetilde{\varvec{x}}_{\bullet k} \quad \text {for all } k \in \mathbb {N}_0; \end{aligned}$$(17)
see [29, Section 11.5]. Moreover, direct computation proves that
$$\begin{aligned} \Vert \widetilde{\varvec{x}}_\bullet ^\star - \widetilde{\varvec{x}}_{\bullet k} \Vert _{\widetilde{\varvec{A}}_\bullet } = \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k} \Vert _{\varvec{A}_\bullet } \quad \text {for all } k \in \mathbb {N}_0. \end{aligned}$$
Consequently, [29, Theorem 11.3.3] for CG (applied to \(\widetilde{\varvec{A}}_\bullet \), \(\widetilde{\varvec{b}}_\bullet \), \(\widetilde{\varvec{x}}_{\bullet 0}\)) yields the following lemma for PCG (which follows from the implicit steepest descent approach of CG).
Lemma 1
Let \(\varvec{A}_\bullet , \varvec{P}_\bullet \in \mathbb {R}^{N \times N}\) be symmetric and positive definite, \(\varvec{b}_\bullet \in \mathbb {R}^N\), \(\varvec{x}_\bullet ^\star := \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \), and \(\varvec{x}_{\bullet 0} \in \mathbb {R}^N\). Suppose the \(\ell _2\)-condition number estimate
$$\begin{aligned} \mathrm{cond}_2\big ( \varvec{P}_\bullet ^{-1/2} \varvec{A}_\bullet \varvec{P}_\bullet ^{-1/2} \big ) \le C_{\mathrm{pcg}}. \end{aligned}$$(18)
Then, the iterates \(\varvec{x}_{\bullet k}\) of the PCG algorithm satisfy the contraction property
$$\begin{aligned} \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet (k+1)} \Vert _{\varvec{A}_\bullet } \le q_{\mathrm{pcg}} \, \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k} \Vert _{\varvec{A}_\bullet } \quad \text {for all } k \in \mathbb {N}_0, \end{aligned}$$(19)
where \(q_{\mathrm{pcg}} := (1-1/C_{\mathrm{pcg}})^{1/2} < 1\). \(\square \)
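The contraction property of Lemma 1 can be observed numerically. The sketch below implements textbook PCG (following the structure of [29, Algorithm 11.5.1]) and checks the energy-norm contraction with \(q_{\mathrm{pcg}} = (1-1/C_{\mathrm{pcg}})^{1/2}\); the random SPD matrix and the Jacobi preconditioner are illustrative stand-ins, not the BEM setting of this work.

```python
import numpy as np

def pcg(A, b, P_inv, x0, steps):
    """Preconditioned conjugate gradients; yields the iterate after each step."""
    x = x0.copy()
    r = b - A @ x
    z = P_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(steps):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p                  # update iterate
        r = r - alpha * Ap                 # update residual
        z = P_inv(r)                       # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # new search direction
        rz = rz_new
        yield x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)              # illustrative SPD matrix
b = rng.standard_normal(20)
d = np.diag(A)
P_inv = lambda r: r / d                    # Jacobi preconditioner (illustrative)
x_star = np.linalg.solve(A, b)

cond = np.linalg.cond(np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5))
q = np.sqrt(1 - 1 / cond)                  # contraction factor from Lemma 1

err = lambda x: np.sqrt((x_star - x) @ A @ (x_star - x))   # energy (A-)norm error
e_prev = err(np.zeros(20))
for x in pcg(A, b, P_inv, np.zeros(20), 10):
    e = err(x)
    assert e <= q * e_prev + 1e-12         # one-step contraction (19)
    e_prev = e
```

In exact arithmetic, the one-step contraction holds because each CG step reduces the energy-norm error at least as much as one steepest descent step would.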
If the matrix \(\varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) stems from the Galerkin discretization (5) for \(\mathcal {T}_\bullet = \{ T_1, \ldots , T_N \}\), there is a one-to-one correspondence of vectors \(\varvec{y}_\bullet \in \mathbb {R}^N\) and discrete functions \(\psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) via \(\psi _\bullet = \sum _{j=1}^N \varvec{y}_\bullet [j] \, \chi _{\bullet ,j}\). Let \(\phi _{\bullet k} \in \mathcal {P}^0(\mathcal {T}_\bullet )\) denote the discrete function corresponding to the PCG iterate \(\varvec{x}_{\bullet k} \in \mathbb {R}^N\), while the Galerkin solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) of (11) corresponds to \(\varvec{x}_\bullet ^\star = \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \). We note the elementary identity
$$\begin{aligned} |||\,\phi _\bullet ^\star - \phi _{\bullet k}\,||| = \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k} \Vert _{\varvec{A}_\bullet }, \quad \text {where } \Vert \varvec{y}_\bullet \Vert _{\varvec{A}_\bullet }^2 := \varvec{y}_\bullet \cdot \varvec{A}_\bullet \varvec{y}_\bullet . \end{aligned}$$(20)
2.7 Optimal preconditioners
We say that \(\varvec{P}_\bullet \) is an optimal preconditioner, if \(C_{\mathrm{pcg}} \ge 1\) in the \(\ell _2\)-condition number estimate (18) depends only on \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \) and the initial mesh \(\mathcal {T}_0\) (and is hence essentially independent of the mesh \(\mathcal {T}_\bullet \)).
3 Main results
3.1 Optimal additive Schwarz preconditioner
In this work, we consider multilevel additive Schwarz preconditioners that build on the adaptive mesh-hierarchy.
Let \(\mathcal {E}_\bullet \) denote the set of all nodes (\(d=2\)) resp. edges (\(d=3\)) of the mesh \(\mathcal {T}_\bullet \) which do not belong to the relative boundary \(\partial \Gamma \). Only for \(\Gamma =\partial \Omega \), \(\mathcal {E}_\bullet \) contains all nodes resp. edges of \(\mathcal {T}_\bullet \). For \(E\in \mathcal {E}_\bullet \), let \(T^\pm \in \mathcal {T}_\bullet \) denote the two unique elements with \(T^+\cap T^- = E\). We define the Haar-type function \(\varphi _{\bullet ,E}\in \mathcal {P}^0(\mathcal {T}_\bullet )\) (associated to \(E\in \mathcal {E}_\bullet \)) by
where \(|E| := 1\) for \(d=2\) and \(|E| :=\mathrm{diam}(E)\) for \(d=3\). Note that
For \(d=3\), we additionally suppose that the orientation of each edge E is arbitrary but fixed. We choose \(T^+ \in \mathcal {T}_\bullet \) such that \(\partial T^+\) and \(E \subset \partial T^+\) have the same orientation.
Given a mesh \(\mathcal {T}_0\), suppose that \(\mathcal {T}_\ell \) is a sequence of locally refined meshes, i.e., for all \(\ell \in \mathbb {N}_0\), there exists a set \(\mathcal {M}_\ell \subseteq \mathcal {T}_\ell \) such that \(\mathcal {T}_{\ell +1} = \mathrm{refine}(\mathcal {T}_\ell ,\mathcal {M}_\ell )\). Then, define
which consist of new (interior) nodes/edges plus some of their neighbours. We note the following subspace decomposition which is, in general, not direct.
Lemma 2
With \(\mathcal {X}_\bullet := \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\mathcal {X}_{\bullet ,E} := {\text {span}}\{\varphi _{\bullet ,E}\}\), it holds that
\(\square \)
Additive Schwarz preconditioners are based on (not necessarily direct) subspace decompositions. Following the standard theory (see, e.g., [44, Chapter 2]), (23) yields a (local multilevel) preconditioner. To provide its matrix formulation, let \(\varvec{I}_{k,\ell }\in \mathbb {R}^{\#\mathcal {T}_\ell \times \#\mathcal {T}_k}\) be the matrix representation of the canonical embedding \(\mathcal {P}^0(\mathcal {T}_k) \hookrightarrow \mathcal {P}^0(\mathcal {T}_\ell )\) for \(k<\ell \), i.e.,
Let \(\varvec{H}_\ell \in \mathbb {R}^{\#\mathcal {T}_\ell \times \#\mathcal {E}_\ell }\) denote the matrix that represents Haar-type functions, i.e.,
Since only two coefficients per column are non-zero, \(\varvec{H}_\ell \) is sparse, while \(\varvec{I}_{k,\ell }\) is non-sparse in general. Finally, define the (non-invertible) diagonal matrix \(\varvec{D}_\ell \in \mathbb {R}^{\#\mathcal {E}_\ell \times \#\mathcal {E}_\ell }\) by
Then, the matrix representation of the preconditioner associated to (23) reads
$$\begin{aligned} \varvec{P}_L^{-1} := \varvec{I}_{0,L} \varvec{A}_0^{-1} \varvec{I}_{0,L}^T + \sum _{\ell =0}^{L} \varvec{I}_{\ell ,L} \varvec{H}_\ell \varvec{D}_\ell \varvec{H}_\ell ^T \varvec{I}_{\ell ,L}^T, \end{aligned}$$(24)
where \(\varvec{I}_{L,L}\) denotes the identity matrix.
For \(d=2\), the subsequent Theorem 3 is already proved in [23, Section III.B] for \(\Gamma = \partial \Omega \) and in [26, Section 6.3] for \(\Gamma \subsetneqq \partial \Omega \). For \(d = 3\), we need the following additional assumptions:
-
First, suppose that \(\Omega \subset \mathbb {R}^3\) is simply connected and \(\Gamma = \partial \Omega \).
-
Second, let \(\widehat{\mathcal {T}}_0\) be a conforming triangulation of \(\Omega \) into non-degenerate compact simplices such that \(\mathcal {T}_0 = \widehat{\mathcal {T}}_0|_\Gamma \) is the induced boundary partition on \(\Gamma \).
Then, the following theorem is our first main result. The proof is given in Sect. 5.
Theorem 3
Under the foregoing assumptions, the preconditioner \(\varvec{P}_L\) from (24) is optimal, i.e., there holds (18), where \(C_{\mathrm{pcg}}\ge 1\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\), but is independent of \(L\in \mathbb {N}\).
We stress that the matrix in (24) will never be assembled in practice. The PCG algorithm only needs the action of \(\varvec{P}_L^{-1}\) on a vector. This can be done recursively by using the embeddings \(\varvec{I}_{\ell ,\ell +1}\) which are, in fact, sparse. Up to (storing and) inverting \(\varvec{A}_0\) on the coarse mesh, the evaluation of \(\varvec{P}_L^{-1}\varvec{x}\) can be done in \(\mathcal {O}(\#\mathcal {T}_L)\) operations; see, e.g., [22, Section 3.1] for a detailed discussion. If the mesh \(\mathcal {T}_L\) is fine compared to the initial mesh \(\mathcal {T}_0\) (or if \(\varvec{A}_0\) is realized with, e.g., \(\mathcal {H}\)-matrix techniques), then the computational costs and storage requirements associated with \(\varvec{A}_0\) can be neglected.
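The recursive evaluation of \(\varvec{P}_L^{-1}\varvec{x}\) amounts to two sweeps through the mesh hierarchy. The following Python sketch assumes the additive structure "exact solve on the coarsest level plus local corrections on each finer level"; the operator names (`coarse_solve`, `local_correction`) are hypothetical stand-ins for the concrete matrices \(\varvec{A}_0^{-1}\) and \(\varvec{H}_\ell \varvec{D}_\ell \varvec{H}_\ell ^T\), and dense identity matrices stand in for the sparse embeddings in the toy check.

```python
import numpy as np

def apply_P_inv(x, I, coarse_solve, local_correction):
    """Structural sketch: evaluate y = P_L^{-1} x in two sweeps.

    I[l]             : embedding matrix from level l to level l+1 (sparse in practice)
    coarse_solve     : applies A_0^{-1} on the coarsest level
    local_correction : dict of per-level local corrections, levels 1,...,L
    """
    L = len(I)
    # downward sweep: restrict x through all levels via transposed embeddings
    res = [x]
    for l in range(L - 1, -1, -1):
        res.insert(0, I[l].T @ res[0])
    # coarse solve on level 0, local corrections on levels 1,...,L
    corr = [coarse_solve(res[0])] + [local_correction[l](res[l]) for l in range(1, L + 1)]
    # upward sweep: prolongate level by level and add up the corrections
    y = corr[0]
    for l in range(L):
        y = I[l] @ y + corr[l + 1]
    return y

# toy check: one refinement level, trivial operators
I = [np.eye(3)]
y = apply_P_inv(np.ones(3), I, coarse_solve=lambda v: v,
                local_correction={1: lambda v: 2.0 * v})
assert np.allclose(y, 3.0 * np.ones(3))
```

Each sweep performs one sparse matrix-vector product per level, which is the mechanism behind the claimed \(\mathcal {O}(\#\mathcal {T}_L)\) evaluation cost.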
Remark 4
Our proof for \(d = 3\) requires additional assumptions on \(\Omega \), \(\Gamma = \partial \Omega \), and \(\mathcal {T}_0\). As stated above, the case \(d = 2\) allows for a different proof (which, however, does not transfer to \(d = 3\)) and can thus avoid these assumptions; see [23, 26]. We believe that Theorem 3 also holds for \(d = 3\) and \(\Gamma \subsetneqq \partial \Omega \). This is also underpinned by a numerical experiment in Sect. 4.4. The mathematical proof, however, remains open.
3.2 Optimal convergence of adaptive algorithm
We analyze the following adaptive strategy which is driven by the weighted-residual error estimator (14). We note that Algorithm 5 as well as the following results are independent of the precise preconditioning strategy as long as the employed preconditioners are optimal; see Sect. 2.7.
Algorithm 5
Input: Conforming triangulation \(\mathcal {T}_0\) of \(\Gamma \), adaptivity parameters \(0<\theta \le 1\), \(\lambda > 0\), and \(C_{\mathrm{mark}} > 0\), and an optimal preconditioning strategy \(\varvec{P}_\bullet \) for all \(\mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0)\).
Loop: With \(k := 0 =: j\) and \(\phi _{00} := 0\), iterate the following steps (i)–(vii):
-
(i)
Update counter \((j,k) \mapsto (j,k+1)\).
-
(ii)
Do one step of the PCG algorithm with the optimal preconditioner \(\varvec{P}_j\) to obtain \(\phi _{jk} \in \mathcal {P}^0(\mathcal {T}_j)\) from \(\phi _{j(k-1)} \in \mathcal {P}^0(\mathcal {T}_j)\).
-
(iii)
Compute the local contributions \(\eta _j(T,\phi _{jk})\) of the error estimator for all \(T \in \mathcal {T}_j\).
-
(iv)
If \(|||\,\phi _{jk} - \phi _{j(k-1)}\,||| > \lambda \, \eta _j(\phi _{jk})\), continue with (i).
-
(v)
Otherwise, define \(\underline{k}(j):=k\) and determine a set \(\mathcal {M}_j \subseteq \mathcal {T}_j\) of, up to the multiplicative factor \(C_{\mathrm{mark}}\), minimal cardinality such that the Dörfler marking criterion \(\theta \, \eta _j(\phi _{jk}) \le \eta _j(\mathcal {M}_j, \phi _{jk})\) is satisfied.
-
(vi)
Generate \(\mathcal {T}_{j+1} := \mathrm{refine}(\mathcal {T}_j,\mathcal {M}_j)\) and define \(\phi _{(j+1)0} := \phi _{jk}\).
-
(vii)
Update counter \((j, k) \mapsto (j+1, 0)\) and continue with (i).
Output: Sequences of successively refined triangulations \(\mathcal {T}_j\), discrete solutions \(\phi _{jk}\), and corresponding error estimators \(\eta _j(\phi _{jk})\), for all \(j \ge 0\) and \(k \ge 0\). \(\square \)
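The control flow of Algorithm 5 can be summarized in a short Python skeleton. All callbacks (`pcg_step`, `estimate`, `norm_diff`, `mark`, `refine`) are hypothetical placeholders for the concrete BEM building blocks, and a plain list of element sizes stands in for the mesh.

```python
def adaptive_pcg_loop(mesh0, pcg_step, estimate, norm_diff, mark, refine,
                      theta=0.5, lam=0.1, steps=50):
    """Skeleton of the adaptive loop: each outer iteration does one PCG step (ii),
    estimates (iii), and either keeps iterating PCG (iv) or marks (v) and
    refines (vi). Callback names are illustrative, not from the paper."""
    mesh, phi = mesh0, None                            # phi = None encodes phi_{00} := 0
    history = []
    for _ in range(steps):
        phi_old, phi = phi, pcg_step(mesh, phi)        # step (ii): one PCG step
        eta, indicators = estimate(mesh, phi)          # step (iii): error estimator
        history.append((len(mesh), eta))
        if norm_diff(phi, phi_old) > lam * eta:        # step (iv): solver not yet
            continue                                   # accurate enough -> iterate PCG
        mesh = refine(mesh, mark(indicators, theta))   # steps (v)+(vi)
        # phi is kept as initial guess phi_{(j+1)0} := phi_{j k(j)} on the new mesh
    return history
```

With toy callbacks (exact "solver", element sizes as indicators), the recorded estimator values decrease while the mesh grows, mimicking the interplay of solver steps and refinements.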
Remark 6
The choice \(\lambda = 0\) corresponds to the case that (11) is solved exactly, i.e., \(\phi _{(j+1)0} = \phi _j^\star \). Then, optimal convergence of Algorithm 5 has already been proved in [2, 19, 24, 27] for weakly-singular integral equations and [20, 27] for hyper-singular integral equations. The choice \(\theta = 1\) will generically lead to uniform mesh-refinement, where for each mesh all elements \(\mathcal {M}_j = \mathcal {T}_j\) are refined in step (vi) of Algorithm 5. Instead, small \(0 < \theta \ll 1\) will lead to highly adapted meshes.
Remark 7
Let \(\mathcal {Q} := \big \{(j, k) \in \mathbb {N}_0 \times \mathbb {N}_0\,:\,\text {index } (j,k) \text { is used in Algorithm 5}\big \}\). It holds that \((0,0) \in \mathcal {Q}\). Moreover, for \(j, k\in \mathbb {N}_0\), it holds that
-
for \(j\ge 1\), \((j, 0) \in \mathcal {Q}\) implies that \((j-1, 0) \in \mathcal {Q}\),
-
for \(k\ge 1\), \((j, k) \in \mathcal {Q}\) implies that \((j, k-1) \in \mathcal {Q}\).
If j is clear from the context, we abbreviate \(\underline{k} := \underline{k}(j)\), e.g., \(\phi _{j\underline{k}} := \phi _{j\underline{k}(j)}\). In particular, it holds that \(\phi _{j\underline{k}} = \phi _{(j+1)0}\). Since PCG (like any Krylov method) provides the exact solution after at most \(\#\mathcal {T}_j\) steps, it follows that \(1 \le \underline{k}(j) < \infty \). Finally, we define the ordering
$$\begin{aligned} (j,k) \le (j',k') \quad :\Longleftrightarrow \quad j < j' \text { or } \big ( j = j' \text { and } k \le k' \big ). \end{aligned}$$(25)
Moreover, let
$$\begin{aligned} |(j,k)| := k + \sum _{l=0}^{j-1} \underline{k}(l) \end{aligned}$$
be the total number of PCG iterations until the computation of \(\phi _{jk}\). Note that \(j'>j\) and \(|(j',k')| = |(j,k)|\) imply that \(j'=j+1\), \(k=\underline{k}(j)\), and \(k'=0\) and hence \(\phi _{j'k'} = \phi _{jk}\). \(\square \)
Theorem 8
The output of Algorithm 5 satisfies the following assertions (a)–(c). The constants \(C_{\mathrm{rel}}^\star , C_{\mathrm{eff}}^\star >0\) depend only on \(q_{\mathrm{pcg}}\), \(\Gamma \), and the uniform \(\gamma \)-shape regularity of \(\mathcal {T}_j \in \mathrm{refine}(\mathcal {T}_0)\), whereas \(C_{\mathrm{lin}}\ge 1\) and \(0<q_{\mathrm{lin}}<1\) depend additionally only on \(\theta \) and \(\lambda \), and \(C_{\mathrm{opt}}>0\) depends additionally only on s, \(\mathcal {T}_0\), and \(\Lambda _{0\underline{k}}\).
-
(a)
There exists a constant \(C_{\mathrm{rel}}^\star > 0\) such that
$$\begin{aligned} |||\,\phi ^\star - \phi _{jk}\,|||&\le C_{\mathrm{rel}}^\star \, \big ( \eta _j(\phi _{jk}) + |||\,\phi _{jk} - \phi _{j(k-1)}\,||| \big )\nonumber \\&\qquad \text {for all } (j,k) \in \mathcal {Q}\quad \text { with } k \ge 1. \end{aligned}$$(26)
There exists a constant \(C_{\mathrm{eff}}^\star > 0\) such that, provided that \(\phi ^\star \in L^2(\Gamma )\), it holds that
$$\begin{aligned} \eta _j(\phi _{jk})&\le C_{\mathrm{eff}}^\star \, \big ( \Vert h_j^{1/2}(\phi ^\star - \phi _{jk})\Vert _{L^2(\Gamma )} + |||\,\phi _{jk} - \phi _{j(k-1)}\,||| \big )\nonumber \\&\qquad \text {for all } (j,k) \in \mathcal {Q}, k \ge 1. \end{aligned}$$(27) -
(b)
For arbitrary \(0 < \theta \le 1\) and arbitrary \(\lambda > 0\), there exist constants \(C_{\mathrm{lin}} \ge 1\) and \(0< q_{\mathrm{lin}} < 1\) such that the quasi-error
$$\begin{aligned} \Lambda _{jk}^2 := |||\,\phi ^\star - \phi _{jk}\,|||^2 + \eta _{j}(\phi _{jk})^2 \end{aligned}$$(28)
is linearly convergent in the sense of
$$\begin{aligned} \Lambda _{j'k'} \le C_{\mathrm{lin}} \, q_{\mathrm{lin}}^{|(j',k')|-|(j,k)|} \, \Lambda _{jk} \quad \text {for all } (j,k), (j',k') \in \mathcal {Q} \text { with } (j',k') \ge (j,k). \end{aligned}$$(29) -
(c)
For \(s > 0\), define the approximation class
$$\begin{aligned} \Vert \phi ^\star \Vert _{\mathbb {A}_s} := \sup _{N \in \mathbb {N}_0} \left( (N+1)^s \, \min _{\begin{array}{c} \mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0) \\ \#\mathcal {T}_\bullet - \#\mathcal {T}_0 \le N \end{array}} \eta _\bullet (\phi _\bullet ^\star )\right) . \end{aligned}$$(30)
Then, for sufficiently small \(0 < \theta \ll 1\) and \(0 < \lambda \ll 1\), cf. Assumption (68) below, and all \(s>0\), it holds that
$$\begin{aligned}&\Vert \phi ^\star \Vert _{\mathbb {A}_s}< \infty \Longleftrightarrow \exists \, C_{\mathrm{opt}} > 0: \sup _{(j,k) \in \mathcal {Q}} \big ( \#\mathcal {T}_j - \#\mathcal {T}_0 + 1 \big )^{s} \, \Lambda _{jk}\le C_{\mathrm{opt}}\,\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty . \end{aligned}$$(31)
Remark 9
By definition, it holds that \(\eta _{j}(\phi _{jk}) \le \Lambda _{jk}\) for all \((j,k) \in \mathcal {Q}\). If \(\phi _{jk} \in \{\phi _j^\star ,\phi _{j\underline{k}}\}\), then there also holds the converse inequality \(\eta _{j}(\phi _{jk}) \simeq \Lambda _{jk}\). To see this, note that \(\phi _{jk} = \phi _j^\star \) and (15) prove that \(\Lambda _{jk} \le (1+C_{\mathrm{rel}}) \, \eta _{j}(\phi _{jk})\). If \(\phi _{jk} = \phi _{j\underline{k}}\), then Theorem 8(a) and Step (iv) of Algorithm 5 prove that \(\Lambda _{j\underline{k}} \le (1+C_{\mathrm{rel}}^\star ) \, \eta _{j}(\phi _{j\underline{k}}) + |||\,\phi _{j\underline{k}} - \phi _{j(\underline{k}-1)}\,||| \le (1 + C_{\mathrm{rel}}^\star + \lambda ) \, \eta _{j}(\phi _{j\underline{k}})\).
3.3 Almost optimal computational complexity
Suppose that we use \(\mathcal {H}^2\)-matrices for the efficient treatment of the discrete single-layer integral operator. Recall that the storage requirements (resp. the cost for one matrix-vector multiplication) of an \(\mathcal {H}^2\)-matrix are of order \(\mathcal {O}(Np^2)\), where N is the matrix size and \(p \in \mathbb {N}\) is the local block rank. For \(\mathcal {H}^2\)-matrices (unlike \(\mathcal {H}\)-matrices), these costs are, in particular, independent of a possibly unbalanced binary tree which underlies the hierarchical data structure [30].
For a mesh \(\mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0)\), we employ the local block rank \(p = \mathcal {O}(\log (1+\#\mathcal {T}_\bullet ))\) to ensure that the matrix compression is asymptotically exact as \(N = \#\mathcal {T}_\bullet \rightarrow \infty \), i.e., the error between the exact matrix and the \(\mathcal {H}^2\)-matrix decays exponentially fast; see [30]. We stress that we neglect this error in the following and assume that the matrix-vector multiplication (based on the \(\mathcal {H}^2\)-matrix) yields the exact matrix-vector product.
The computational cost for storing \(\varvec{A}_\bullet \) (as well as for one matrix-vector multiplication) is \(\mathcal {O}((\#\mathcal {T}_\bullet )\log ^2(1+\#\mathcal {T}_\bullet ))\). In an idealized optimal case, the computation of \(\phi _\bullet ^\star \) is hence (at least) of cost \(\mathcal {O}((\#\mathcal {T}_\bullet )\log ^2(1+\#\mathcal {T}_\bullet ))\).
We consider the computational costs for one step of Algorithm 5:
-
We assume that one step of the PCG algorithm with the employed optimal preconditioner is of cost \(\mathcal {O}\big ((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\); cf. the preconditioner from Sect. 3.1.
-
We assume that we can compute \(\eta _j(\psi _j)\) for any \(\psi _j \in \mathcal {P}^0(\mathcal {T}_j)\) (by means of numerical quadrature) with \(\mathcal {O}\big ((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\) operations.
-
Clearly, the Dörfler marking in Step (v) can be done in \(\mathcal {O}\big ((\#\mathcal {T}_j)\log (1+\#\mathcal {T}_j)\big )\) operations by sorting. Moreover, for \(C_{\mathrm{mark}} = 2\), Stevenson [41] proposed a realization of the Dörfler marking based on binning, which can be performed at linear cost \(\mathcal {O}(\#\mathcal {T}_j)\).
-
Finally, the mesh-refinement in Step (vi) can be done in linear complexity \(\mathcal {O}(\#\mathcal {T}_j)\) if the data structure is appropriate.
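To illustrate the sorting-based realization of the Dörfler marking in Step (v), the following sketch returns (the indices of) a minimal set \(\mathcal {M}_j\) with \(\eta _j(\mathcal {M}_j)^2 \ge \theta ^2\,\eta _j^2\) in \(\mathcal {O}(N\log N)\) operations; the function name and data layout are ours. Stevenson's binning variant would replace the sort by linear-cost bucketing.

```python
import numpy as np

def doerfler_marking(eta_sq, theta):
    """Greedy Doerfler marking by sorting: return the indices of a
    minimal set M with sum_{T in M} eta_T^2 >= theta^2 * sum_T eta_T^2."""
    order = np.argsort(eta_sq)[::-1]                 # largest indicators first
    cumulative = np.cumsum(eta_sq[order])
    m = int(np.searchsorted(cumulative, theta**2 * eta_sq.sum())) + 1
    return order[:m]

# Hypothetical squared element indicators eta_j(T)^2:
marked = doerfler_marking(np.array([4.0, 1.0, 9.0, 0.25]), theta=0.8)
```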
Overall, one step of Algorithm 5 is thus done in \(\mathcal {O}((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j))\) operations. However, an adaptive step \((j',k') \in \mathcal {Q}\) depends on the full history of previous steps.
-
Hence, the cumulative computational complexity for the adaptive step \((j',k') \in \mathcal {Q}\) is of order \(\mathcal {O}\big (\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\).
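The following sketch illustrates why the cumulative cost remains manageable: under an (assumed, idealized) geometric growth \(\#\mathcal {T}_{j+1} \approx 2\,\#\mathcal {T}_j\), the sum \(\sum _j (\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\) stays bounded by a fixed multiple of the cost of the finest step alone.

```python
import math

def step_cost(N):
    # Single-step cost model O(N log^2(1 + N)) from above.
    return N * math.log(1 + N) ** 2

# Assumed geometric growth of the mesh sizes, N_{j+1} ~ 2 N_j
# (an idealization; actual adaptive meshes need not double exactly).
Ns = [100 * 2**j for j in range(15)]
cumulative = sum(step_cost(N) for N in Ns)
# The whole history costs only a bounded multiple of the finest step:
ratio = cumulative / step_cost(Ns[-1])
```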
The following corollary proves that Algorithm 5 leads to convergence of the quasi-error \(\Lambda _{jk}\) not only with optimal rate with respect to the degrees of freedom (see Theorem 8), but also with almost optimal rate with respect to the computational costs.
Corollary 10
For \(j \in \mathbb {N}_0\), let \(\widehat{\mathcal {T}}_{j+1} = \mathrm{refine}(\widehat{\mathcal {T}}_j,\widehat{\mathcal {M}}_j)\) with arbitrary \(\widehat{\mathcal {M}}_j \subseteq \widehat{\mathcal {T}}_j\) and \(\widehat{\mathcal {T}}_0 = \mathcal {T}_0\). Let \(s > 0\) and suppose that the corresponding error estimator \({\widehat{\eta }}_j(\widehat{\phi }_j^\star )\) converges at rate s with respect to the single-step computational costs, i.e.,
Suppose that \(\lambda \) and \(\theta \) satisfy the assumptions of Theorem 8(c). Then, the quasi-errors \(\Lambda _{jk}\) generated by Algorithm 5 converge almost at rate s with respect to the cumulative computational costs, i.e.,
4 Numerical experiments
In this section, we present numerical experiments that underpin our theoretical findings. We use lowest-order BEM for direct and indirect formulations in 2D as well as 3D. For each problem, we compare the performance of Algorithm 5 for
-
different values of \(\lambda \in \{1,10^{-1},10^{-2},10^{-3},10^{-4}\}\),
-
different values of \(\theta \in \{0.2,0.4,0.6,0.8,1\}\),
where \(\theta = 1\) corresponds to uniform mesh-refinement. In particular, we monitor the condition numbers of the arising BEM systems for diagonal preconditioning [7], the proposed additive Schwarz preconditioning from Sect. 3.1, and no preconditioning. The 2D implementation is based on our MATLAB implementation Hilbert [1], while the 3D implementation relies on an extension of the BEM++ library [40].
4.1 Slit problem in 2D
Let \(\Gamma :=(-1,1)\times \{0\}\), cf. Fig. 2. We consider
The unique exact solution of (34) reads \(\phi ^\star (x,0):=-2x/\sqrt{1-x^2}\). For uniform mesh-refinement, we thus expect a convergence order of \(\mathcal {O}(N^{-1/2})\), while the optimal rate is \(\mathcal {O}(N^{-3/2})\) with respect to the number of elements.
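The convergence orders quoted here can be read off from a log-log least-squares fit of the estimator against the number of elements. The following sketch (with synthetic error sequences that follow the expected decays, for illustration only) shows how such a rate is measured:

```python
import math

def fitted_rate(Ns, errors):
    """Least-squares slope of log(error) vs log(N), i.e. the s in error ~ C*N^s."""
    xs = [math.log(N) for N in Ns]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic error sequences following the expected decays (illustration only):
Ns = [2**j for j in range(5, 15)]
uniform_errs = [N ** -0.5 for N in Ns]   # O(N^{-1/2}): uniform refinement
adaptive_errs = [N ** -1.5 for N in Ns]  # O(N^{-3/2}): adaptive refinement
```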
Example 4.1: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the left end point (left) and for the matrices arising from Algorithm 5 (right)
Example 4.1: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)
In Fig. 3, we compare Algorithm 5 for different values of \(\theta \) and \(\lambda \) as well as uniform mesh-refinement. Uniform mesh-refinement leads only to the rate \(\mathcal {O}(N^{-1/2})\), while adaptivity, independently of the values of \(\theta \) and \(\lambda \), regains the optimal rate \(\mathcal {O}(N^{-3/2})\). A naive initial guess in Step (vi) of Algorithm 5 (i.e., \(\phi _{(j+1)0}:=0\)) leads to logarithmic growth of the number of PCG iterations, whereas for nested iteration \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) (as formulated in Algorithm 5) the number of PCG iterations stays uniformly bounded, cf. Fig. 4. Finally, Fig. 2 shows the condition numbers for an artificial refinement towards the left end point and for Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Example 4.1: number of PCG iterations in Algorithm 5 for nested iteration (dashed lines), i.e., \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) in Step (vi), and naive initial guess (solid lines), i.e., \(\phi _{(j+1)0}:=0\). We compare fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},10^{-2},10^{-3}\}\)
4.2 Z-shaped domain in 2D
Let \(\Gamma :=\partial \Omega \) be the boundary of the Z-shaped domain with reentrant corner at the origin (0, 0), cf. Fig. 5. The right-hand side is given by \(f=(K+1/2)g\) with the double-layer operator \(K:H^{1/2}(\Gamma )\rightarrow H^{1/2}(\Gamma )\). We note that the weakly-singular integral equation (1) is then equivalent to the Dirichlet problem
We prescribe the exact solution in 2D polar coordinates as
Then, u admits a generic singularity at the reentrant corner. The exact solution \(\phi ^\star \) of (1) is just the normal derivative of the solution u.
We expect a convergence order of \(\mathcal {O}(N^{-4/7})\) for uniform mesh-refinement, and the optimal rate \(\mathcal {O}(N^{-3/2})\) for the adaptive strategy, which is seen in Fig. 6 for different values of \(\theta \) and \(\lambda \). A naive initial guess in Step (vi) of Algorithm 5 (i.e., \(\phi _{(j+1)0}:=0\)) leads to logarithmic growth of the number of PCG iterations, whereas for nested iteration \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) the number of PCG iterations stays uniformly bounded, cf. Fig. 7. Figure 5 shows the condition numbers for an artificial refinement towards the reentrant corner as well as the condition numbers for Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Example 4.2: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the reentrant corner (top) and for Algorithm 5 (bottom)
Example 4.2: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)
Example 4.2: number of PCG iterations in Algorithm 5 for nested iteration (dashed lines), i.e., \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) in Step (vi), and naive initial guess (solid lines), i.e., \(\phi _{(j+1)0}:=0\). We compare fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},10^{-2},10^{-3}\}\)
Example 4.3: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards one reentrant corner (left) or edge (middle), and for Algorithm 5 (right)
Example 4.3: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)
4.3 L-shaped domain in 3D
Let \(\Gamma :=\partial \Omega \) be the boundary of the L-shaped domain \(\Omega = (-1,1)^3 \backslash ([-1,0] \times [0,1] \times [-1,1])\), cf. Fig. 8. The right-hand side is given by \(f=(K+1/2)g\) with the double-layer operator \(K:H^{1/2}(\Gamma )\rightarrow H^{1/2}(\Gamma )\). Again, the weakly-singular integral equation (1) is then equivalent to the Dirichlet problem (35). We prescribe the exact solution in 3D cylindrical coordinates as
Note that u admits a singularity along the reentrant edge. The exact solution \(\phi ^\star \) of (1) is just the normal derivative of the exact solution u.
In Fig. 9, we compare Algorithm 5 with different values of \(\theta \) and \(\lambda \) to uniform mesh-refinement. Uniform mesh-refinement leads only to a reduced rate of \(\mathcal {O}(N^{-1/2})\), while adaptivity, independently of \(\theta \) and \(\lambda \), leads to the improved rate of approximately \(\mathcal {O}(N^{-2/3})\). While one would expect \(\mathcal {O}(N^{-3/4})\) for smooth exact solutions \(\phi ^\star \), this would require anisotropic elements along the reentrant edge for the present solution \(\phi ^\star =\partial _n u\). Since NVB guarantees uniform \(\gamma \)-shape regularity of the meshes, such anisotropic refinement is not possible, which leads to a reduced optimal rate. Finally, Fig. 8 shows the condition numbers for (diagonal or additive Schwarz) preconditioning and no preconditioning for artificial refinements towards one reentrant corner or the reentrant edge as well as the condition numbers of the matrices arising from Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Example 4.4: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the right corner (left), the right edges (middle), and for Algorithm 5 (right)
Example 4.4: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)
4.4 Screen problem in 3D
Let \(\Gamma :=((-1,1)^2 {\setminus } [0,1]^2) \times \{0\}\), rotated by \(3\pi /4\), cf. Fig. 10. We consider the weakly-singular integral equation \(V \phi ^\star =1\) on \(\Gamma \). The exact solution \(\phi ^\star \in \widetilde{H}^{-1/2}(\Gamma )\) is unknown.
For the numerical solution of the Galerkin system, we employ PCG with the additive Schwarz preconditioner from Sect. 3.1. We note that Theorem 3 does not cover this setting. Nevertheless, the proposed additive Schwarz preconditioner from Sect. 3.1 appears to be optimal (cf. Fig. 10), while a mathematical optimality proof still remains open for screens.
In Fig. 11, we compare Algorithm 5 with different values of \(\theta \) and \(\lambda \) to uniform mesh-refinement. We see that uniform mesh-refinement leads only to a reduced rate of \(\mathcal {O}(N^{-1/4})\), while adaptivity, independently of \(\theta \) and \(\lambda \), leads to the improved rate of approximately \(\mathcal {O}(N^{-1/2})\).
To underline the quasi-optimal computational complexity of Algorithm 5, we plot the error estimator \(\eta _j(\phi _{j\underline{k}})\) in the different experiments over the cumulative quantities \(\sum _{(j,k) \le (j',k')} \#\mathcal {T}_j\), \(\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log (\#\mathcal {T}_j)\) and \(\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(\#\mathcal {T}_j)\) for \(\theta = 0.4\) and \(\lambda \in \{1,10^{-3}\}\)
4.5 Computational complexity
With Fig. 12, we aim to underpin the almost optimal computational complexity of Algorithm 5 (see Corollary 10). To this end, we plot the error estimator \(\eta _j(\phi _{jk})\) over the cumulative sums
for \(\theta = 0.4\) and \(\lambda \in \{1,10^{-3}\}\). The negative impact of the logarithmic terms on the (preasymptotic) convergence rate is clearly visible.
5 Proof of Theorem 3 (optimal multilevel preconditioner)
For \(d = 2\), we refer to [23, 26] and thus focus only on \(d = 3\) and \(\Gamma = \partial \Omega \). Due to our additional assumption, \(\mathcal {T}_0 = \widehat{\mathcal {T}}_0|_\Gamma \) is the restriction of a conforming simplicial triangulation \(\widehat{\mathcal {T}}_0\) of \(\Omega \) to the boundary \(\Gamma \). Moreover, 2D NVB refinement of \(\mathcal {T}_0\) (on the boundary \(\Gamma \)) is a special case of 3D NVB refinement of \(\widehat{\mathcal {T}}_0\) (in the volume \(\Omega \)) plus restriction to the boundary; see, e.g., [42]. Hence, each mesh \(\mathcal {T}_\bullet \in \mathbb {T} = \mathrm{refine}(\mathcal {T}_0)\) is the restriction of a conforming NVB refinement \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}} := \mathrm{refine}(\widehat{\mathcal {T}}_0)\), i.e., \(\mathcal {T}_\bullet = \widehat{\mathcal {T}}_\bullet |_\Gamma \). Throughout, let \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\) be the coarsest extension of \(\mathcal {T}_\bullet \in \mathbb {T}\). Recall that NVB is a binary refinement rule. Therefore, \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) also implies that \(\widehat{\mathcal {T}}_\circ \in \mathrm{refine}(\widehat{\mathcal {T}}_\bullet )\). Finally, we note that all triangulations \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\) are uniformly \(\gamma \)-shape regular, i.e.,
where \(\gamma \) depends only on \(\widehat{\mathcal {T}}_0\).
Our argument adapts ideas from [32], where a subspace decomposition for the lowest-order Nédélec space \(\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) (see, e.g., [34]) in \(\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )\) implies a decomposition of the corresponding discrete trace space. While the original idea dates back to [39], a nice summary of the argument is found in [32, Section 2].
Remark 11
-
(i)
Our proof is based on the construction of an extension operator from \(\mathcal {P}_*^0(\mathcal {T}_\bullet )\) to \(\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\), see Lemma 13 below. It is unclear whether such an operator can be constructed for the case \(\Gamma \subsetneqq \partial \Omega \).
-
(ii)
In [31], a subspace decomposition of the lowest-order Raviart–Thomas space \(\varvec{\mathcal {R}}\varvec{\mathcal {T}}^0(\widehat{\mathcal {T}}_\bullet )\) (see, e.g., [45]) in \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\) implies a decomposition of the corresponding normal trace space \(\mathcal {P}^0(\mathcal {T}_\bullet )\). Due to different scaling properties of the Raviart–Thomas basis functions [in the \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\) norm] and their normal trace (in the \(H^{-1/2}(\Gamma )\) norm), this argument does not apply in our case.
5.1 Discrete spaces and extensions
Let \(\widehat{\mathcal {E}}_\bullet \) (resp. \(\widehat{\mathcal {N}}_\bullet \)) denote the set of all edges (resp. all nodes) of \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\). For each node \(\varvec{x}\in \widehat{\mathcal {N}}_\bullet \), let \(\eta _{\bullet ,\varvec{x}} \in \mathcal {S}^1(\widehat{\mathcal {T}}_\bullet )\) be the corresponding hat function, i.e., \(\eta _{\bullet ,\varvec{x}}\) is \(\widehat{\mathcal {T}}_\bullet \)-piecewise affine and globally continuous with \(\eta _{\bullet ,\varvec{x}}(\varvec{y}) = \delta _{\varvec{x}\varvec{y}}\) for all \(\varvec{x}, \varvec{y} \in \widehat{\mathcal {N}}_\bullet \). For \(E \in \widehat{\mathcal {E}}_\bullet \), let \({\varvec{u}}_{\bullet ,E} \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) denote the corresponding Nédélec basis function, i.e., for \(K \in \widehat{\mathcal {T}}_\bullet \) with \(E = \mathrm{conv}\{\varvec{x},\varvec{y}\} \subset \partial K\), it holds that
where \(C > 0\) is chosen such that \(\int _{E'} {\varvec{u}}_{\bullet ,E} \,ds = |E| \, \delta _{E E'}\) for all \(E,E'\in \widehat{\mathcal {E}}_\bullet \). Scaling arguments yield the next lemma. The proof follows the lines of [32, Lemma 5.7].
Lemma 12
For \(E\in \mathcal {E}_\bullet \), recall the Haar function \(\varphi _{\bullet ,E} \in \mathcal {P}^0(\mathcal {T}_\bullet )\) from (21). Let \({\varvec{u}}_{\bullet ,E} \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) denote the corresponding Nédélec basis function; see (38). Then,
where \(C>0\) depends only on \(\Omega \) and the \(\gamma \)-shape regularity of \(\widehat{\mathcal {T}}_\bullet \).
Proof
A straightforward calculation shows that \(\varphi _{\bullet ,E} = \varvec{{\text {curl}}}\, {\varvec{u}}_{\bullet ,E} \cdot \varvec{n}|_\Gamma \). Then, continuity of the normal trace operator yields that
Furthermore, scaling arguments prove that
where we have finally applied an inverse estimate. This concludes the proof. \(\square \)
The following lemma holds for (simply) connected Lipschitz domains \(\Omega \) and follows essentially from [5]. Recall \(\mathcal {P}^0_*(\mathcal {T}_\bullet )\) from (22).
Lemma 13
There exists a linear operator \(\varvec{E}_\bullet : \mathcal {P}_*^0(\mathcal {T}_\bullet ) \rightarrow \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) such that
The constant \(C > 0\) depends only on \(\Omega \) and \(\gamma \)-shape regularity of \(\widehat{\mathcal {T}}_\bullet \).
Proof
Let \(\psi _\bullet \in \mathcal {P}_*^0(\mathcal {T}_\bullet )\). First, [5, Theorem 2.1] provides \({\varvec{\sigma }}_\bullet \in \varvec{\mathcal {R}}\varvec{\mathcal {T}}^0(\widehat{\mathcal {T}}_\bullet )\) with
Then, [5, Lemma 4.3] provides \(\varvec{E}_\bullet \psi _\bullet := {\varvec{v}}_\bullet \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) such that
Combining these results, we conclude the proof. \(\square \)
5.2 Abstract additive Schwarz preconditioners
Let \(\mathcal {X}\) denote some finite dimensional Hilbert space with norm \(\Vert \cdot \Vert _{\mathcal {X}}\) and subspace decomposition
where \(\mathcal {I}\) is a finite index set. The additive Schwarz operator is given by \(\mathcal {S} = \sum _{i\in \mathcal I} \mathcal {S}_i\), where \(\mathcal {S}_i\) is the \(\mathcal {X}\)-orthogonal projection onto \(\mathcal {X}_i\), i.e.,
where \(\langle \cdot \,,\,\cdot \rangle _{\mathcal {X}}\) denotes the scalar product on \(\mathcal {X}\). Then, the operator \(\mathcal {S}\) is positive definite and symmetric (with respect to \(\langle \cdot \,,\,\cdot \rangle _{\mathcal {X}}\)). Define the multilevel norm
It is proved, e.g., in [38, Theorem 16] that \(\langle \mathcal {S}^{-1}x\,,\,x\rangle _{\mathcal {X}} = |||\,x\,|||_\mathcal {X}^2\). Let \(C > 0\). If
then the extreme eigenvalues of \(\mathcal {S}^{-1}\) (and hence those of \(\mathcal {S}\)) are bounded (from above and below). In particular, the additive Schwarz operator \(\mathcal {S}\) is optimal in the sense that its condition number (ratio of largest and smallest eigenvalues) depends only on \(C>0\).
Let \(\varvec{S}\) denote the matrix representation of \(\mathcal {S}\). Then, the norm equivalence from above and the latter observations imply that the condition number of \(\varvec{S}\) is bounded. The abstract theory on additive Schwarz operators given in [44, Chapter 2] shows that \(\varvec{S}\) has the form \(\varvec{S} = \varvec{P}^{-1}\varvec{A}\), where \(\varvec{A}\) is the Galerkin matrix of \(\langle \cdot \,,\,\cdot \rangle _\mathcal {X}\). Therefore, boundedness of the condition number of \(\varvec{S}\) implies optimality of the preconditioner \(\varvec{P}^{-1}\).
We shortly discuss the matrix representation (24) of the additive Schwarz preconditioner \(\varvec{P}^{-1}\). Following [44, Chapter 2], let \(\varvec{A}_i\) denote the Galerkin matrix of \(\langle \cdot \,,\,\cdot \rangle _\mathcal {X}\) restricted to \(\mathcal {X}_i\), and let \(\varvec{I}_i\) denote the matrix that realizes the embedding \(\mathcal {X}_i\rightarrow \mathcal {X}\). We consider the matrix representation of \(\mathcal {S}_i : \mathcal {X}\rightarrow \mathcal {X}_i\subset \mathcal {X}\). Let \(x\in \mathcal {X}\) with coordinate vector \(\varvec{x}\), and let \(x_i\in \mathcal {X}_i\) be arbitrary with coordinate vector \(\varvec{x}_i\). The defining relation
of \(\mathcal {S}_i\) then reads in matrix-vector form (with \(\varvec{S}_i\) being the matrix representation of \(\mathcal {S}_i\)) as
or equivalently
Since \(\varvec{A}_i\) is invertible, we have that
Note that the range of the operator \(\mathcal {S}_i\) is \(\mathcal {X}_i\) and correspondingly for the matrix representation \(\varvec{S}_i\). We therefore apply the embedding \(\varvec{I}_i\) and obtain the representation
Finally, note that for one-dimensional subspaces \(\mathcal {X}_i\), the matrix \(\varvec{A}_i\) reduces to the corresponding diagonal entry of \(\varvec{A}\). Overall, we thus derive the matrix representation (24).
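To illustrate the derived representation \(\varvec{P}^{-1} = \sum _i \varvec{I}_i \varvec{A}_i^{-1} \varvec{I}_i^\top \), the following toy sketch (a 1D Laplacian stencil stands in for \(\varvec{A}\), with one-dimensional subspaces only and no claim of optimality) assembles the preconditioner and confirms that \(\varvec{S} = \varvec{P}^{-1}\varvec{A}\) is symmetric and positive definite with respect to the \(\varvec{A}\) inner product:

```python
import numpy as np

# Toy SPD matrix standing in for the Galerkin matrix A (illustration
# only, not the BEM matrix of the paper).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# One-dimensional subspaces X_i = span{e_i}: the embedding matrix I_i is
# the i-th unit vector, and A_i = I_i^T A I_i is the 1x1 matrix (A[i,i]).
Pinv = np.zeros((n, n))
for i in range(n):
    Ii = np.zeros((n, 1))
    Ii[i, 0] = 1.0
    Ai = Ii.T @ A @ Ii
    Pinv += Ii @ np.linalg.inv(Ai) @ Ii.T   # P^{-1} = sum_i I_i A_i^{-1} I_i^T

# S = P^{-1} A is the matrix representation of the additive Schwarz
# operator; for these subspaces it reduces to the Jacobi-preconditioned
# matrix, and its spectrum equals that of P^{-1/2} A P^{-1/2}.
S = Pinv @ A
D = np.sqrt(Pinv)                           # P^{-1/2}, since Pinv is diagonal
eigs = np.linalg.eigvalsh(D @ A @ D)
```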
5.3 Subspace decomposition of \(\varvec{\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )}\) in \(\varvec{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}\)
The following result is taken from [33, Theorem 4.1]; see also the references therein. In particular, we note that their proof requires the assumption that \(\Omega \) is simply connected.
Proposition 14
Let \(\mathcal {Y}_\bullet := \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\), \(\mathcal {Y}_{\bullet ,E} := {\text {span}}\{{\varvec{u}}_{\bullet ,E}\}\), \(\mathcal {Y}_{\bullet ,\varvec{x}} := {\text {span}}\{\nabla \eta _{\bullet ,\varvec{x}}\}\), and
Then, it holds that
Moreover, it holds that
where \(C > 0\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\). \(\square \)
5.4 Subspace decomposition of \(\varvec{\mathcal {P}^0(\mathcal {T}_\bullet )}\) in \(\varvec{H^{-1/2}(\Gamma )}\)
It remains to prove the following proposition to conclude the proof of Theorem 3.
Proposition 15
The multilevel norm \(|||\,\cdot \,|||_{\mathcal {X}_L}\) associated with the decomposition (23) satisfies the equivalence
where \(C>0\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\).
Proof of lower estimate in (44). Let \(\psi \in \mathcal {P}^0(\mathcal {T}_L)\) with arbitrary decomposition
Note that \(\mathcal {X}_{\ell ,E} \subset \mathcal {P}^0_*(\mathcal {T}_\ell )\). Recall the extension operator \(\varvec{E}_\ell \) from Lemma 13. Define \({\varvec{v}}_* := \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \varvec{E}_\ell \psi _{\ell ,E} \in \mathcal {Y}_L\). Then, \(\varvec{{\text {curl}}}\,{\varvec{v}}_* \cdot \varvec{n}|_\Gamma = \psi _*\) and hence
from the continuity of the trace operator in \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\). Moreover, the triangle inequality, the definition of the multilevel norm \(|||\,\cdot \,|||_{\mathcal {Y}_L}\), and Lemma 13 show that
Taking the infimum over all possible decompositions (45), we derive the lower estimate in (44) by definition (41) of the multilevel norm. \(\square \)
Proof of upper estimate in (44). Let \(\psi \in \mathcal {P}^0(\mathcal {T}_L)\). Define \(\psi _{00} := \langle \psi \,,\,1\rangle _\Gamma /|\Gamma |\) and \(\psi _* := \psi - \psi _{00} \in \mathcal {P}^0_*(\mathcal {T}_L)\). Note that
With Lemma 13, choose \({\varvec{v}} = \varvec{E}_L \psi _* \in \mathcal {Y}_L = \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_L)\). Note that \(\psi _* = \varvec{{\text {curl}}}\, {\varvec{v}} \cdot \varvec{n}|_\Gamma \) and \(\Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \lesssim \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )}\). The upper bound in Proposition 14 further provides \({\varvec{v}}_0 \in \mathcal {Y}_0\), \({\varvec{v}}_{\ell ,E} \in \mathcal {Y}_{\ell ,E}\), and \({\varvec{v}}_{\ell ,\varvec{x}} \in \mathcal {Y}_{\ell ,\varvec{x}}\) such that
as well as
Observe that \(\varvec{{\text {curl}}}\, {\varvec{v}}_{\ell ,\varvec{x}} = 0\), since \({\varvec{v}}_{\ell ,\varvec{x}} \in \mathcal {Y}_{\ell ,\varvec{x}} = {\text {span}}\{\nabla \eta _{\ell ,\varvec{x}}\}\). Thus, we see that
Due to the restriction \((\cdot )|_\Gamma \) to the boundary, the latter sum reduces to a sum over all \(E \in \mathcal {E}_\ell \) (instead of all \(E \in \widehat{\mathcal {E}}_\ell ^\star \)). Note that \(\psi _{*0} := \varvec{{\text {curl}}}\, {\varvec{v}}_0 \cdot \varvec{n}|_\Gamma \in \mathcal {X}_0 = \mathcal {P}^0(\mathcal {T}_0)\) and hence \(\psi _{00} + \psi _{*0} \in \mathcal {X}_0\). Note that
Due to Lemma 12 and \({\varvec{v}}_{\ell ,E} \in \mathcal {Y}_{\ell ,E} = {\text {span}}\{{\varvec{u}}_{\ell ,E}\}\), it holds that \(\psi _{\ell ,E} := \varvec{{\text {curl}}}\, {\varvec{v}}_{\ell ,E} \cdot \varvec{n}|_\Gamma \in \mathcal {X}_{\ell ,E} = {\text {span}}\{\varphi _{\ell ,E}\}\) with \(\Vert \psi _{\ell ,E}\Vert _{H^{-1/2}(\Gamma )} \simeq \Vert {\varvec{v}}_{\ell ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}\). We hence see that
with
This concludes the proof. \(\square \)
6 Proof of Theorem 8 (rate optimality of adaptive algorithm)
In the spirit of [11], we give an abstract analysis, where the precise problem and discretization (i.e., Galerkin BEM with piecewise constants for the weakly-singular integral equation for the 2D and 3D Laplacian) enter only through certain properties of the error estimator. These properties are explicitly stated in Sect. 6.1, before Sect. 6.2 provides general PCG estimates. The remaining sections (Sects. 6.3–6.6) then only exploit this abstract framework to prove Theorem 8 and Corollary 10.
6.1 Axioms of adaptivity
In this section, we recall some structural properties of the residual error estimator (14) which have been identified in [11] to be important and sufficient for the numerical analysis of Algorithm 5. For the proof, we refer to [19, 24]. We only note that (A4) already implies (A3) with \(C_{\mathrm{rel}} \le C_{\mathrm{drl}}\) in general; see [11, Section 3.3].
For ease of notation, let \(\mathcal {T}_0\) be the fixed initial mesh of Algorithm 5. Let \(\mathbb {T}:=\mathrm{refine}(\mathcal {T}_0)\) be the set of all possible meshes that can be obtained by successively refining \(\mathcal {T}_0\).
Proposition 16
There exist constants \(C_{\mathrm{stb}},C_{\mathrm{red}},C_{\mathrm{rel}}>0\) and \(0<q_{\mathrm{red}}<1\) which depend only on \(\Gamma \) and the \(\gamma \)-shape regularity, such that the following properties (A1)–(A4) hold:
- (A1) :
-
Stability on non-refined element domains For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), arbitrary discrete functions \(v_\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(v_\circ \in \mathcal {P}^0(\mathcal {T}_\circ )\), and an arbitrary set \(\mathcal {U}_\bullet \subseteq \mathcal {T}_\bullet \cap \mathcal {T}_\circ \) of non-refined elements, it holds that
$$\begin{aligned} |\eta _\circ (\mathcal {U}_\bullet ,v_\circ )-\eta _\bullet (\mathcal {U}_\bullet ,v_\bullet )| \le C_{\mathrm{stb}}\,|||\,v_\circ -v_\bullet \,|||. \end{aligned}$$ - (A2) :
-
Reduction on refined element domains For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), and arbitrary \(v_\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(v_\circ \in \mathcal {P}^0(\mathcal {T}_\circ )\), it holds that
$$\begin{aligned} \eta _\circ (\mathcal {T}_\circ \backslash \mathcal {T}_\bullet ,v_\circ )^2 \le q_{\mathrm{red}}\,\eta _\bullet (\mathcal {T}_\bullet \backslash \mathcal {T}_\circ ,v_\bullet )^2 + C_{\mathrm{red}}\,|||\,v_\circ -v_\bullet \,|||^2. \end{aligned}$$ - (A3) :
-
Reliability For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), the error of the exact discrete solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) of (11) is controlled by
$$\begin{aligned} |||\,\phi ^\star -\phi _\bullet ^\star \,||| \le C_{\mathrm{rel}}\,\eta _\bullet (\phi _\bullet ^\star ). \end{aligned}$$ - (A4) :
-
Discrete reliability For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\) and all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), there exists a set \(\mathcal {R}_{\bullet ,\circ }\subseteq \mathcal {T}_\bullet \) with \(\mathcal {T}_\bullet \backslash \mathcal {T}_\circ \subseteq \mathcal {R}_{\bullet ,\circ }\) as well as \(\#\mathcal {R}_{\bullet ,\circ } \le C_{\mathrm{drl}}\,\#(\mathcal {T}_\bullet \backslash \mathcal {T}_\circ )\) such that the difference of \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\phi _\circ ^\star \in \mathcal {P}^0(\mathcal {T}_\circ )\) is controlled by
$$\begin{aligned} |||\,\phi _\circ ^\star -\phi _\bullet ^\star \,||| \le C_{\mathrm{drl}}\,\eta _\bullet (\mathcal {R}_{\bullet ,\circ },\phi _\bullet ^\star ). \end{aligned}$$
\(\square \)
6.2 Energy estimates for the PCG solver
This section collects some auxiliary results which rely on the use of PCG and, in particular, PCG with an optimal preconditioner. We first note the following Pythagoras identity.
Lemma 17
Let \(\varvec{A}_\bullet , \varvec{P}_\bullet \in \mathbb {R}^{N \times N}\) be symmetric and positive definite, \(\varvec{b}_\bullet \in \mathbb {R}^N\), and \(\varvec{x}_\bullet ^\star := \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \). Let \(\varvec{x}_{\bullet 0} \in \mathbb {R}^N\) be an initial guess and let \(\varvec{x}_{\bullet k}\) denote the corresponding iterates of the PCG algorithm.
There holds the Pythagoras identity
Proof
According to the definition of PCG (and CG), it holds that
where \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0}):=\mathrm{span}\{\widetilde{\varvec{r}}_{\bullet 0},\widetilde{\varvec{A}}_\bullet \widetilde{\varvec{r}}_{\bullet 0},\ldots ,\widetilde{\varvec{A}}_\bullet ^{k-1} \widetilde{\varvec{r}}_{\bullet 0}\}\) with \(\widetilde{\varvec{r}}_{\bullet 0}:=\widetilde{\varvec{b}}_\bullet -\widetilde{\varvec{A}}_\bullet \widetilde{\varvec{x}}_{\bullet 0}\). By standard linear algebra, \(\widetilde{\varvec{x}}_{\bullet k}\) is the orthogonal projection of \(\widetilde{\varvec{x}}_\bullet ^\star \) onto \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\) with respect to the matrix norm \(\Vert \cdot \Vert _{\widetilde{\varvec{A}}_\bullet }\). From the nestedness \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\subseteq \mathcal {K}_{k+1}(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\), it thus follows that
Together with (17) and (20), this proves (49). \(\square \)
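A minimal numerical sketch of Lemma 17 (our own toy setup with a random SPD matrix and a diagonal preconditioner, not the BEM system): a textbook PCG loop whose iterates satisfy the Pythagoras identity, checked here per step in the energy norm \(\Vert \cdot \Vert _{\varvec{A}_\bullet }\).

```python
import numpy as np

def pcg(A, b, Pinv, x0, steps):
    """Standard preconditioned CG; returns all iterates x_0, ..., x_steps."""
    x = x0.copy()
    r = b - A @ x
    z = Pinv @ r
    p = z.copy()
    iterates = [x.copy()]
    for _ in range(steps):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = Pinv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        iterates.append(x.copy())
    return iterates

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))
A = M @ M.T + 10 * np.eye(10)            # SPD system matrix
Pinv = np.diag(1.0 / np.diag(A))         # diagonal preconditioner
b = rng.standard_normal(10)
x_star = np.linalg.solve(A, b)

norm2 = lambda v: v @ A @ v              # squared energy norm
xs = pcg(A, b, Pinv, np.zeros(10), steps=5)
# Per-step Pythagoras identity in the energy norm:
for xk_prev, xk in zip(xs, xs[1:]):
    lhs = norm2(x_star - xk_prev)
    rhs = norm2(x_star - xk) + norm2(xk - xk_prev)
    assert abs(lhs - rhs) <= 1e-10 * max(1.0, lhs)
```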
The following lemma collects some estimates which follow from the contraction property (19) of PCG.
Lemma 18
Algorithm 5 guarantees the following estimates for all \((j,k) \in \mathcal {Q}\) with \(k \ge 1\):
-
(i)
\(|||\,\phi _j^\star - \phi _{jk}\,||| \le q_{\mathrm{pcg}} \, |||\,\phi _j^\star - \phi _{j(k-1)}\,|||\)
-
(ii)
\(|||\,\phi _{jk}-\phi _{j(k-1)}\,||| \le (1+q_{\mathrm{pcg}}) \, |||\,\phi _j^\star -\phi _{j(k-1)}\,|||\)
-
(iii)
\(|||\,\phi _j^\star -\phi _{j(k-1)}\,||| \le (1-q_{\mathrm{pcg}})^{-1} \, |||\,\phi _{jk}-\phi _{j(k-1)}\,|||\)
-
(iv)
\(|||\,\phi _j^\star -\phi _{jk}\,||| \le q_{\mathrm{pcg}} (1-q_{\mathrm{pcg}})^{-1} \, |||\,\phi _{jk}-\phi _{j(k-1)}\,|||\)
Proof
According to (20), estimate (19) proves (i). The estimates (ii)–(iv) follow from (i) and the triangle inequality. \(\square \)
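For completeness, the triangle-inequality argument behind (iii) can be spelled out as follows, abbreviating \(a := |||\,\phi _j^\star -\phi _{j(k-1)}\,|||\):

```latex
a \;\le\; |||\,\phi_{jk}-\phi_{j(k-1)}\,||| + |||\,\phi_j^\star-\phi_{jk}\,|||
  \;\overset{\text{(i)}}{\le}\; |||\,\phi_{jk}-\phi_{j(k-1)}\,||| + q_{\mathrm{pcg}}\,a,
\quad\text{and hence}\quad
a \;\le\; (1-q_{\mathrm{pcg}})^{-1}\,|||\,\phi_{jk}-\phi_{j(k-1)}\,|||.
```

Estimate (iv) then follows by applying (i) to the left-hand side of (iii), and (ii) is the triangle inequality combined with (i).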
6.3 Proof of Theorem 8(a)
With reliability (A3) and stability (A1), we see that
With Lemma 18(iv), we hence prove the reliability estimate (26).
According to [3], it holds that
Let \(\mathbb {G}_j: \widetilde{H}^{-1/2}(\Gamma ) \rightarrow \mathcal {P}^0(\mathcal {T}_j)\) be the Galerkin projection. Let \(\Pi _j: L^2(\Gamma ) \rightarrow \mathcal {P}^0(\mathcal {T}_j)\) be the \(L^2\)-orthogonal projection. With the Céa lemma and a duality argument (see, e.g., [16, Theorem 4.1]), we see that
Hence, for \(\psi =\phi ^\star -\phi _{jk}\), it follows that
Combining the latter estimates, we see that
With Lemma 18(iv), we hence prove the efficiency estimate (27). \(\square \)
6.4 Proof of Theorem 8(b)
The following lemma is the heart of the proof of Theorem 8(b).
Lemma 19
Consider Algorithm 5 for arbitrary parameters \(0 < \theta \le 1\) and \(\lambda > 0\). There exist constants \(0<\mu ,q_{\mathrm{ctr}}<1\) such that
satisfies, for all \(j \in \mathbb {N}_0\), that
as well as
Moreover, for all \((j',k'),(j,k)\in \mathcal {Q}\), it holds that
The constants \(0< \mu , q_{\mathrm{ctr}} < 1\) depend only on \(\lambda \), \(\theta \), \(q_{\mathrm{pcg}}\), and the constants in (A1)–(A3).
Proof
The proof is split into five steps.
Step 1 We fix some constants which are needed below. All of these constants depend on \(0 < \theta \le 1\) and \(\lambda > 0\), but their choice does not impose any additional constraint on these parameters. First, define
Second, choose \(\gamma >0\) such that
Third, choose \(\mu >0\) such that
Fourth, choose \(\varepsilon >0\) such that
Fifth, choose \(\kappa >0\) such that
With (55)–(57), we finally define
Step 2 Due to reliability (A3), stability (A1), and Lemma 18(iii), it follows that
Step 3 We consider the case \(k+1<\underline{k}(j)\). Step (iv) of Algorithm 5 yields that
Moreover, the Pythagoras identity (49) implies that
Further, we note the Pythagoras identity
Combining (59)–(61) and applying Lemma 18(iii), we see that
Step 2 yields that
Using (55)–(56) and (61), we thus see that
This concludes the proof of (50).
Step 4 We use the definition \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) from Step (vi) of Algorithm 5 to see that
For the first summand of (62), we use stability (A1) and reduction (A2). Together with the Dörfler marking strategy in Step (v) of Algorithm 5 and \(\mathcal {M}_j \subseteq \mathcal {T}_j \backslash \mathcal {T}_{j+1}\), we see that
With this and stability (A1), the Young inequality and Lemma 18(ii) yield that
For the second summand of (62), we apply the Pythagoras identity (61) together with Lemma 18(i) and obtain that
Combining (62)–(65), we end up with
Using the same arguments as in Step 2, we get that
This concludes the proof of (51).
Step 5 Inequality (52) follows by induction. This concludes the proof. \(\square \)
Proof of Theorem 8(b)
The proof is split into three steps.
Step 1 Let \(j \in \mathbb {N}\). Recall the Pythagoras identity (61). We use stability (A1) and Step (iv) of Algorithm 5 to see that
With the Pythagoras identity (49), we may argue similarly to obtain that
Hence, it follows that \(\Delta _{j\underline{k}}\simeq \Delta _{j(\underline{k}-1)}\).
Step 2 For \(0\le j\le j^\prime \), define \({\widehat{k}}(j) := {\widehat{k}} \in \mathbb {N}_0\) by
From Step 1, Lemma 19, and the geometric series (for the sum over k), it follows that
For \(k^\prime <\underline{k}(j^\prime )\), inequality (52) and the geometric series (for the sum over j) yield that
For \(k^\prime =\underline{k}(j^\prime )\), inequality (52), the geometric series, and Step 1 yield that
Overall, it follows that
Step 3 According to the proof of [11, Lemma 4.9], estimate (66) guarantees (and is even equivalent to) the existence of \(0<q_{\mathrm{lin}}<1\) such that
Clearly, it holds that \(\Lambda _{jk} \simeq \Delta _{jk}^{1/2}\) for all \((j,k) \in \mathcal {Q}\). This and (67) conclude the proof. \(\square \)
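The mechanism behind [11, Lemma 4.9] can be checked numerically. For a synthetic non-negative sequence (a stand-in for the quasi-errors along the ordering of \(\mathcal {Q}\)), tail summability with constant \(C\) implies R-linear decay with \(q_{\mathrm{lin}} = C/(1+C)\): with \(S_n := \sum _{m \ge n} a_m \le (1+C)\,a_n\) and \(S_{n+1} = S_n - a_n \le S_n \, C/(1+C)\), one obtains \(a_{n+\ell } \le (1+C)\, q_{\mathrm{lin}}^\ell \, a_n\). The sequence below is synthetic; the constant is computed from it, not assumed.

```python
import random

random.seed(2)
# synthetic stand-in for the quasi-errors along the ordering of Q
a = [1.0]
for _ in range(80):
    a.append(a[-1] * random.uniform(0.3, 0.7))

# smallest constant C with sum_{m > n} a_m <= C * a_n for all n
C = max(sum(a[n + 1:]) / a[n] for n in range(len(a)))

# R-linear decay with q_lin = C/(1+C) and C_lin = 1+C, as derived above
q_lin, C_lin = C / (1 + C), 1 + C
for n in range(len(a)):
    for l in range(len(a) - n):
        assert a[n + l] <= C_lin * q_lin ** l * a[n] * (1 + 1e-9)
```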
6.5 Proof of Theorem 8(c)
The proof of optimal convergence rates requires the following additional properties of the mesh-refinement strategy. For 3D BEM (and 2D NVB from Sect. 2.4) these properties are verified in [8, 41, 42], and any assumption on \(\mathcal {T}_0\) is removed in [35]. For 2D BEM (and the extended 1D bisection from Sect. 2.3), these properties are verified in [2].
(R1) Splitting property: Each refined element is split into at least 2 and at most \(C_{\mathrm{son}}\ge 2\) sons, i.e., for all \(\mathcal {T}_\bullet \in \mathbb {T}\) and all \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), the refined mesh \(\mathcal {T}_\circ = {\text {refine}}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) satisfies that
$$\begin{aligned} \# (\mathcal {T}_\bullet {\setminus } \mathcal {T}_\circ ) + \# \mathcal {T}_\bullet \le \# \mathcal {T}_\circ \le C_{\mathrm{son}} \, \# (\mathcal {T}_\bullet {\setminus } \mathcal {T}_\circ ) + \# (\mathcal {T}_\bullet \cap \mathcal {T}_\circ ). \end{aligned}$$
(R2) Overlay estimate: For all meshes \(\mathcal {T} \in \mathbb {T}\) and \(\mathcal {T}_\bullet ,\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T})\), there exists a common refinement \(\mathcal {T}_\bullet \oplus \mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet ) \cap {\text {refine}}(\mathcal {T}_\circ ) \subseteq {\text {refine}}(\mathcal {T})\) with
$$\begin{aligned} \# (\mathcal {T}_\bullet \oplus \mathcal {T}_\circ ) \le \# \mathcal {T}_\bullet + \# \mathcal {T}_\circ - \# \mathcal {T}. \end{aligned}$$
(R3) Mesh-closure estimate: There exists \(C_{\mathrm{mesh}}>0\) such that the sequence \(\mathcal {T}_j\) with corresponding \(\mathcal {M}_j \subseteq \mathcal {T}_j\), which is generated by Algorithm 5, satisfies that
$$\begin{aligned} \# \mathcal {T}_j - \# \mathcal {T}_0 \le C_{\mathrm{mesh}} \sum _{\ell =0}^{j-1} \# \mathcal {M}_\ell . \end{aligned}$$
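For illustration, even plain 1D bisection (each marked element split into exactly \(C_{\mathrm{son}} = 2\) sons, omitting the additional closure steps of the extended bisection from Sect. 2.3 that keep neighbouring mesh-sizes comparable) satisfies the splitting property (R1); the sketch below checks both inequalities on a small example.

```python
def refine(mesh, marked):
    """Bisect every marked element of a 1D mesh given as a list of intervals.
    Note: the closure step of the extended 1D bisection (Sect. 2.3) is
    deliberately omitted in this toy version."""
    refined_mesh = []
    for T in mesh:
        if T in marked:
            a, b = T
            m = 0.5 * (a + b)
            refined_mesh += [(a, m), (m, b)]   # exactly C_son = 2 sons
        else:
            refined_mesh.append(T)
    return refined_mesh

mesh = [(i / 4.0, (i + 1) / 4.0) for i in range(4)]
fine = refine(mesh, marked=[mesh[0], mesh[2]])

C_son = 2
n_refined = len([T for T in mesh if T not in fine])   # #(T_bullet \ T_circ)
n_common = len([T for T in mesh if T in fine])        # #(T_bullet & T_circ)
# splitting property (R1): lower and upper bound on #T_circ
assert n_refined + len(mesh) <= len(fine) <= C_son * n_refined + n_common
```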
Recall the constants \(C_{\mathrm{stb}} > 0\) from (A1) and \(C_{\mathrm{drl}}>0\) from (A4). Suppose that \(0<\theta \le 1\) and \(\lambda > 0 \) are sufficiently small such that
In particular, it holds that \(0< \theta < \theta _{\mathrm{opt}}\) and \(0< \lambda < \lambda _{\mathrm{opt}}\). We need the following comparison lemma which is found in [11, Lemma 4.14].
Lemma 20
Suppose (R2), (A1), (A2), and (A4). Recall the assumption (68). There exist constants \(C_1, C_2 > 0\) such that for all \(s>0\) with \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \) and all \(j \in \mathbb {N}_0\), there exists \(\mathcal {R}_j \subseteq \mathcal {T}_j \) which satisfies
as well as the Dörfler marking criterion
The constants \(C_1, C_2\) depend only on the constants of (A1), (A2), and (A4). \(\square \)
Another lemma, which we need for the proof of Theorem 8(c), shows that the iterates \(\phi _{\bullet k} \) of Algorithm 5 are close to the exact Galerkin approximation \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\).
Lemma 21
Let \(0<\lambda <\lambda _{\mathrm{opt}}\). For all \(j \in \mathbb {N}_0\), it holds that
Moreover, there holds equivalence
Proof
Stability (A1) yields that \(|\eta _j(\phi ^\star _j)-\eta _j(\phi _{j\underline{k}})|\le C_{\mathrm{stb}}\,|||\,\phi ^\star _j - \phi _{j\underline{k}}\,||| \). Therefore, Lemma 18(iv) and the assumption on the PCG iterate in Step (iv) of Algorithm 5 imply that
Since \(0<\lambda <\lambda _{\mathrm{opt}}\) and hence \(\lambda C_{\mathrm{stb}}\,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} = \lambda /\lambda _{\mathrm{opt}}<1\), this yields that
Altogether, this proves (71). Moreover, with stability (A1), we see that
as well as
This concludes the proof. \(\square \)
Finally, we need the following lemma which immediately shows “\(\Longleftarrow \)” in (31).
Lemma 22
Suppose (R1). For \(j \in \mathbb {N}_0\), let \(\widehat{\mathcal {T}}_{j+1} = \mathrm{refine}(\widehat{\mathcal {T}}_j,\widehat{\mathcal {M}}_j)\) with arbitrary, but non-empty \(\widehat{\mathcal {M}}_j \subseteq \widehat{\mathcal {T}}_j\) and \(\widehat{\mathcal {T}}_0 = \mathcal {T}_0\). Let \(\widehat{\mathcal {Q}} \subseteq \mathbb {N}_0 \times \mathbb {N}_0\) be an index set and \(\widehat{\phi }_{jk} \in \mathcal {P}^0(\widehat{\mathcal {T}}_j)\) for all \((j,k) \in \widehat{\mathcal {Q}}\). Let \(s > 0\) and suppose that the corresponding quasi-errors \(\widehat{\Lambda }_{jk}^2 := |||\,\phi ^\star - \widehat{\phi }_{jk}\,|||^2 + \widehat{\eta }_j(\widehat{\phi }_j^\star )^2\) satisfy that
Then, it follows that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \).
Proof
Due to the Pythagoras identity (61) and stability (A1), it holds that
Additionally, [9, Lemma 22] shows that
Given \(N\in \mathbb {N}_0\), there exists an index \(j\in \mathbb {N}_0\) such that
With (74)–(76), it follows that
Since the upper bound is finite and independent of N, this implies that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \). \(\square \)
Proof of Theorem 8(c)
With Lemma 22, it only remains to prove the implication “\(\Longrightarrow \)” in (31). The proof is split into three steps, where we may suppose that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \).
Step 1 By Assumption (68), Lemma 20 provides a set \(\mathcal {R}_j\subseteq \mathcal {T}_j\) with (69)–(70). Due to stability (A1) and \(\lambda _{\mathrm{opt}}^{-1} = C_{\mathrm{stb}}\,\frac{q_{\mathrm{pcg}}}{1-q_{\mathrm{pcg}}}\), it holds that
Together with \(\theta '' \eta _j(\phi _j^\star ) \le \eta _j(\mathcal {R}_j,\phi _j^\star )\), this proves that
and results in
Hence, \(\mathcal {R}_j\) satisfies the Dörfler marking for \(\phi _{j\underline{k}}\) with parameter \(\theta \). By choice of \(\mathcal {M}_j\) in Step (v) of Algorithm 5, we thus infer that
The mesh-closure estimate (R3) guarantees that
Step 2 For \(j=0\) it holds that \(1\lesssim \Lambda _{0\underline{k}}^{-1/s}\). For \(j>0\), we proceed as follows: Remark 9 yields that \(\eta _\ell (\phi _{\ell \underline{k}}) \simeq \Lambda _{\ell \underline{k}}\). Theorem 8(b) and the geometric series prove that
Combining this with (78) and including the estimate for \(j=0\), we derive that
Step 3 Arguing as in (76) and employing Theorem 8(b), we see that
Since \(\underline{k}(0) \le \#\mathcal {T}_0< \infty \), we hence conclude that \(\displaystyle \sup _{(j,k) \in \mathcal {Q}} (\#\mathcal {T}_{j}-\#\mathcal {T}_0+1)^s \, \Lambda _{jk} < \infty . \)\(\square \)
6.6 Proof of Corollary 10
For all \(\delta > 0\), it holds that
where the hidden constant depends only on \(\delta \). From (32), it thus follows that
From Lemma 22, we derive that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \). Hence, Theorem 8(c) yields that
Let \(0< \varepsilon < s\) and choose \(\delta > 0\) such that
This leads to
From Theorem 8(b) and the geometric series, it follows that
Combining the last two estimates, we see that
This concludes the proof. \(\square \)
7 Hyper-singular integral equation
We only sketch the setting and refer to [36] for further details and proofs. Given \(f: \Gamma \rightarrow \mathbb {R}\), the hyper-singular integral equation seeks \(u^\star :\Gamma \rightarrow \mathbb {R}\) such that
where \(\partial _{\varvec{n}}\) denotes the normal derivative with the outer unit normal vector \(\varvec{n}(\cdot )\) on \(\Gamma \subseteq \partial \Omega \). For \(0 \le \alpha \le 1\), define \(\widetilde{H}^\alpha (\Gamma ) := \big \{v \in H^\alpha (\Gamma )\,:\,{\text {supp}}(v) \subseteq {\overline{\Gamma }}\big \}\) and let \(H^{-\alpha }(\Gamma )\) be its dual space with respect to \(\langle \cdot \,,\,\cdot \rangle \). Note that \(\widetilde{H}^{\pm \alpha }(\Gamma ) = H^{\pm \alpha }(\Gamma )\) for \(\Gamma = \partial \Omega \). The hyper-singular integral operator \(W : \widetilde{H}^{1/2+s}(\Gamma ) \rightarrow H^{-1/2+s}(\Gamma )\) is a bounded linear operator for all \(-1/2 \le s \le 1/2\), which is even an isomorphism for \(-1/2< s < 1/2\). For \(s=0\), the operator W is symmetric and (since \(\Gamma \) is connected) positive semi-definite with kernel consisting of the constant functions. For \(\Gamma \subsetneqq \partial \Omega \), the operator \(W : \widetilde{H}^{1/2}(\Gamma ) \rightarrow H^{-1/2}(\Gamma )\) is hence an elliptic isomorphism. Moreover, for \(\Gamma = \partial \Omega \) and \(H^{\pm 1/2}_{*}(\Gamma ) := \big \{\psi \in H^{\pm 1/2}(\Gamma )\,:\,\langle \psi \,,\,1\rangle =0\big \}\), the operator \(W : H^{1/2}_{*}(\Gamma ) \rightarrow H^{-1/2}_{*}(\Gamma )\) is an elliptic isomorphism. Therefore,
defines a scalar product on \(\widetilde{H}^{1/2}(\Gamma )\), and the induced norm \(|||\,u\,||| := \langle \langle u\,,\,u\rangle \rangle ^{1/2}\) is an equivalent norm on \(\widetilde{H}^{1/2}(\Gamma )\). Let \(f \in H^{-1/2}(\Gamma )\). If \(\Gamma \subsetneqq \partial \Omega \), suppose additionally that \(f \in H^{-1/2}_{*}(\partial \Omega )\). Then, (81) admits a unique solution \(u^\star \in \widetilde{H}^{1/2}(\Gamma )\) resp. \(u^\star \in H^{1/2}_{*}(\partial \Omega )\), which is also the unique solution \(u^\star \in \widetilde{H}^{1/2}(\Gamma )\) of the variational formulation
Given a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \), let
The Lax–Milgram theorem yields existence and uniqueness of \(u_\bullet ^\star \in \widetilde{\mathcal {S}}^1(\mathcal {T}_\bullet )\) such that
With the corresponding weighted-residual error estimator, it holds that
see [10, 17] for \(d = 2\) resp. [14] for \(d = 3\).
In [22, 26], optimal additive Schwarz preconditioners are derived for this setting. Hence, Algorithm 5 can also be used in the present setting. We refer to [20, Section 3.3] for the fact that the axioms of adaptivity (A1)–(A4) from Proposition 16 remain valid for the hyper-singular integral equation. All other arguments in Section 6 rely only on general properties of the PCG algorithm (Sect. 6.2), the properties (A1)–(A4), and the Hilbert space setting of \(|||\,\cdot \,|||\). Overall, this proves that our main results (Theorem 8 and Corollary 10) also cover the hyper-singular integral equation.
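Schematically, Algorithm 5 interleaves an inner PCG loop with estimator-driven mesh-refinement. The toy loop below only mimics the structure of Steps (i)–(vi): the weighted-residual indicators are replaced by the local mesh-size, and the inner PCG iteration by an abstract contraction with factor \(q_{\mathrm{pcg}}\), so all quantities are synthetic stand-ins rather than actual BEM computations.

```python
import math

def adaptive_loop(theta=0.5, lam=0.1, q_pcg=0.5, tol=0.1):
    """Structural sketch of the adaptive loop (cf. Algorithm 5); toy model."""
    mesh = [(0.0, 1.0)]
    history = []
    for _ in range(200):                              # outer loop over j
        eta_T = {T: T[1] - T[0] for T in mesh}        # synthetic indicators
        eta = math.sqrt(sum(e * e for e in eta_T.values()))
        # Step (iv): iterate the solver until the increment is <= lam * eta
        err, increment = eta, eta
        while increment > lam * eta:
            increment = (1 + q_pcg) * err             # cf. Lemma 18(ii)
            err *= q_pcg                              # contraction (19)
        history.append(eta)
        if eta <= tol:                                # termination
            break
        # Step (v): Doerfler marking with parameter theta
        marked, acc = [], 0.0
        for T in sorted(mesh, key=lambda T: eta_T[T], reverse=True):
            marked.append(T)
            acc += eta_T[T] ** 2
            if acc >= theta ** 2 * eta ** 2:
                break
        # Step (vi): refine the marked elements by bisection
        new_mesh = []
        for T in mesh:
            if T in marked:
                a, b = T
                new_mesh += [(a, 0.5 * (a + b)), (0.5 * (a + b), b)]
            else:
                new_mesh.append(T)
        mesh = new_mesh
    return mesh, history

mesh, history = adaptive_loop()
```

In this toy model, each marking step reduces \(\eta ^2\) at least by the factor \(1 - \theta ^2/2\), so the estimator history decreases monotonically and the loop terminates.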
References
Aurada, M., Ebner, M., Feischl, M., Ferraz-Leite, S., Führer, T., Goldenits, P., Karkulik, M., Mayr, M., Praetorius, D.: A Matlab implementation of adaptive 2D-BEM. Numer. Algorithms 67, 1–32 (2014)
Aurada, M., Feischl, M., Führer, T., Karkulik, M., Praetorius, D.: Efficiency and optimality of some weighted-residual error estimator for adaptive 2D boundary element methods. Comput. Methods Appl. Math. 13(3), 305–332 (2013)
Aurada, M., Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Local inverse estimates for non-local boundary integral operators. Math. Comput. 86(308), 2651–2686 (2017)
Arioli, M., Georgoulis, E.H., Loghin, D.: Stopping criteria for adaptive finite element solvers. SIAM J. Sci. Comput. 35(3), A1537–A1559 (2013)
Ainsworth, M., Guzmán, J., Sayas, F.-J.: Discrete extension operators for mixed finite element spaces on locally refined meshes. Math. Comput. 85(302), 2639–2650 (2016)
Arioli, M., Liesen, J., Międlar, A., Strakoš, Z.: Interplay between discretization and algebraic computation in adaptive numerical solution of elliptic PDE problems. GAMM Mitt. 36(1), 102–129 (2013)
Ainsworth, M., McLean, W., Tran, T.: The conditioning of boundary element equations on locally refined meshes and preconditioning by diagonal scaling. SIAM J. Numer. Anal. 36(6), 1901–1932 (1999)
Binev, P., Dahmen, W., DeVore, R.: Adaptive finite element methods with convergence rates. Numer. Math. 97(2), 219–268 (2004)
Bespalov, A., Haberl, A., Praetorius, D.: Adaptive FEM with coarse initial mesh guarantees optimal convergence rates for compactly perturbed elliptic problems. Comput. Methods Appl. Mech. Eng. 317, 318–340 (2017)
Carstensen, C.: An a posteriori error estimate for a first-kind integral equation. Math. Comput. 66(217), 139–155 (1997)
Carstensen, C., Feischl, M., Page, M., Praetorius, D.: Axioms of adaptivity. Comput. Math. Appl. 67(6), 1195–1253 (2014)
Carstensen, C., Gedicke, J.: An adaptive finite element eigenvalue solver of asymptotic quasi-optimal computational complexity. SIAM J. Numer. Anal. 50(3), 1029–1057 (2012)
Cascon, J.M., Kreuzer, C., Nochetto, R.H., Siebert, K.G.: Quasi-optimal convergence rate for an adaptive finite element method. SIAM J. Numer. Anal. 46(5), 2524–2550 (2008)
Carstensen, C., Maischak, M., Praetorius, D., Stephan, E.P.: Residual-based a posteriori error estimate for hypersingular equation on surfaces. Numer. Math. 97(3), 397–425 (2004)
Carstensen, C., Maischak, M., Stephan, E.P.: A posteriori error estimate and \(h\)-adaptive algorithm on surfaces for Symm’s integral equation. Numer. Math. 90(2), 197–213 (2001)
Carstensen, C., Praetorius, D.: Averaging techniques for the effective numerical solution of Symm’s integral equation of the first kind. SIAM J. Sci. Comput. 27(4), 1226–1260 (2006)
Carstensen, C., Stephan, E.P.: A posteriori error estimates for boundary element methods. Math. Comput. 64(210), 483–500 (1995)
Dörfler, W.: A convergent adaptive algorithm for Poisson’s equation. SIAM J. Numer. Anal. 33(3), 1106–1124 (1996)
Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rates for adaptive boundary element methods with data approximation. Part I: weakly-singular integral equation. Calcolo 51, 531–562 (2014)
Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rates for adaptive boundary element methods with data approximation. Part II: hyper-singular integral equation. Electron. Trans. Numer. Anal. 44, 153–176 (2015)
Feischl, M., Führer, T., Praetorius, D.: Adaptive FEM with optimal convergence rates for a certain class of nonsymmetric and possibly nonlinear problems. SIAM J. Numer. Anal. 52(2), 601–625 (2014)
Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal additive Schwarz preconditioning for hypersingular integral equations on locally refined triangulations. Calcolo 54(1), 367–399 (2017)
Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal preconditioning for the symmetric and nonsymmetric coupling of adaptive finite elements and boundary elements. Numer. Methods Partial Differ. Equ. 33(3), 603–632 (2017)
Feischl, M., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rate for an adaptive boundary element method. SIAM J. Numer. Anal. 51, 1327–1348 (2013)
Führer, T., Melenk, J.M., Praetorius, D., Rieder, A.: Optimal additive Schwarz methods for the hp-BEM: the hypersingular integral operator in 3D on locally refined meshes. Comput. Math. Appl. 70, 1583–1605 (2015)
Führer, T.: Zur Kopplung von finiten Elementen und Randelementen. Ph.D. thesis, TU Wien (2014)
Gantumur, T.: Adaptive boundary element methods with convergence rates. Numer. Math. 124(3), 471–516 (2013)
Gantner, G., Haberl, A., Praetorius, D., Stiftner, B.: Rate optimal adaptive FEM with inexact solver for nonlinear operators. IMA J. Numer. Anal. 38, 1797–1831 (2018)
Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences, 4th edn. Johns Hopkins University Press, Baltimore (2013)
Hackbusch, W.: Hierarchical Matrices: Algorithms and Analysis, Volume 49 of Springer Series in Computational Mathematics. Springer, Heidelberg (2015)
Hiptmair, R., Jerez-Hanckes, C., Mao, S.: Extension by zero in discrete trace spaces: inverse estimates. Math. Comput. 84(296), 2589–2615 (2015)
Hiptmair, R., Mao, S.: Stable multilevel splittings of boundary edge element spaces. BIT 52(3), 661–685 (2012)
Hiptmair, R., Wu, H., Zheng, W.: Uniform convergence of adaptive multigrid methods for elliptic problems and Maxwell’s equations. Numer. Math. Theory Methods Appl. 5(3), 297–332 (2012)
Hiptmair, R., Zheng, W.: Local multigrid in \({ H}(\bf{curl})\). J. Comput. Math. 27(5), 573–603 (2009)
Karkulik, M., Pavlicek, D., Praetorius, D.: On 2D newest vertex bisection: optimality of mesh-closure and \(H^1\)-stability of \(L_2\)-projection. Constr. Approx. 38, 213–234 (2013)
McLean, W.: Strongly Elliptic Systems and Boundary Integral Equations. Cambridge University Press, Cambridge (2000)
Morin, P., Nochetto, R.H., Siebert, K.G.: Data oscillation and convergence of adaptive FEM. SIAM J. Numer. Anal. 38(2), 466–488 (2000)
Oswald, P.: Multilevel finite element approximation. Teubner Skripten zur Numerik [Teubner Scripts on Numerical Mathematics]. Theory and Applications. B.G. Teubner, Stuttgart (1994)
Oswald, P.: Interface preconditioners and multilevel extension operators. In: Eleventh International Conference on Domain Decomposition Methods (London, 1998), pp. 97–104. DDM.org, Augsburg (1999)
Śmigaj, W., Betcke, T., Arridge, S., Phillips, J., Schweiger, M.: Solving Boundary Integral Problems with BEM++. ACM, New York (2013)
Stevenson, R.: Optimality of a standard adaptive finite element method. Found. Comput. Math. 7(2), 245–269 (2007)
Stevenson, R.: The completion of locally refined simplicial partitions created by bisection. Math. Comput. 77(261), 227–241 (2008)
Stevenson, R., van Venetië, R.: Optimal preconditioning for problems of negative order. ArXiv e-prints, March 2018
Toselli, A., Widlund, O.: Domain Decomposition Methods-Algorithms and Theory, Volume 34 of Springer Series in Computational Mathematics. Springer, Berlin (2005)
Xu, J., Chen, L., Nochetto, R.H.: Optimal multilevel methods for \(H({\rm grad})\), \(H({\rm curl})\), and \(H({\rm div})\) systems on graded and unstructured grids. In: DeVore, R., Kunoth, A. (eds.) Multiscale, Nonlinear and Adaptive Approximation, pp. 599–659. Springer, Berlin (2009)
Acknowledgements
Open access funding provided by Austrian Science Fund (FWF). The authors thankfully acknowledge the support by the Austrian Science Fund (FWF) through Grant P27005 (AH, DP, SS) as well as Grant F65 (DP) and by CONICYT through FONDECYT project P11170050 (TF).
Führer, T., Haberl, A., Praetorius, D. et al. Adaptive BEM with inexact PCG solver yields almost optimal computational costs. Numer. Math. 141, 967–1008 (2019). https://doi.org/10.1007/s00211-018-1011-1