Numerische Mathematik, Volume 141, Issue 4, pp 967–1008

Adaptive BEM with inexact PCG solver yields almost optimal computational costs

  • Thomas Führer
  • Alexander Haberl
  • Dirk Praetorius
  • Stefan Schimanko
Open Access


We consider the preconditioned conjugate gradient method (PCG) with optimal preconditioner in the framework of the boundary element method for elliptic first-kind integral equations. Our adaptive algorithm steers the termination of PCG as well as the local mesh-refinement. Besides convergence with optimal algebraic rates, we also prove almost optimal computational complexity. In particular, we provide an additive Schwarz preconditioner which can be computed in linear complexity and which is optimal in the sense that the condition numbers of the preconditioned systems are uniformly bounded. Our model problem is the 2D or 3D Laplace operator and the associated weakly-singular integral equation with energy space \(\widetilde{H}^{-1/2}(\Gamma )\). The main results also hold for the hyper-singular integral equation with energy space \(H^{1/2}(\Gamma )\).

Mathematics Subject Classification

65N38 65N22 65F08 65N50 41A25 65Y20 

1 Introduction

1.1 Model problem

Let \(\Omega \subset \mathbb {R}^d\) with \(d=2,3\) be a bounded Lipschitz domain with polyhedral boundary \(\partial \Omega \). Let \(\Gamma \subseteq \partial \Omega \) be a (relatively) open and connected subset. Given \(f : \Gamma \rightarrow \mathbb {R}\), we seek the density \(\phi ^\star :\Gamma \rightarrow \mathbb {R}\) of the weakly-singular integral equation
$$\begin{aligned} (V \phi ^\star )(x) := \int _\Gamma G(x-y) \phi ^\star (y) \,\mathrm{d}y = f(x) \quad \text {for all } x \in \Gamma , \end{aligned}$$
where \(G(\cdot )\) denotes the fundamental solution of the Laplace operator, i.e.,
$$\begin{aligned} G(z) = - \frac{1}{2\pi } \, \log |z| \quad \text {for } d = 2 \quad \text {resp.} \quad G(z) = \frac{1}{4\pi } \, \frac{1}{|z|} \quad \text {for } d = 3. \end{aligned}$$
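For concreteness, both kernels are straightforward to evaluate directly (a minimal Python sketch; the function name is ours):

```python
import math

def fundamental_solution(z, d):
    """Evaluate G(z): -log|z|/(2*pi) for d = 2, and 1/(4*pi*|z|) for d = 3."""
    r = math.sqrt(sum(c * c for c in z))
    if r == 0.0:
        raise ValueError("G is singular at z = 0")
    if d == 2:
        return -math.log(r) / (2.0 * math.pi)
    if d == 3:
        return 1.0 / (4.0 * math.pi * r)
    raise ValueError("d must be 2 or 3")
```

Note that for \(d = 2\) the kernel changes sign at \(|z| = 1\), which is related to the scaling assumption \(\mathrm{diam}(\Omega ) < 1\) made below for ellipticity.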
Given a triangulation \(\mathcal {T}_\bullet \) of \(\Gamma \), we employ a lowest-order Galerkin boundary element method (BEM) to compute a \(\mathcal {T}_\bullet \)-piecewise constant function \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) such that
$$\begin{aligned} \int _\Gamma (V \phi _\bullet ^\star )(x) \, \psi _\bullet (x) \,\mathrm{d}x = \int _\Gamma f(x) \, \psi _\bullet (x) \,\mathrm{d}x \quad \text {for all } \psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet ). \end{aligned}$$
With the numbering \(\mathcal {T}_\bullet = \{T_1,\ldots ,T_N\}\), consider the standard basis \(\big \{\chi _{\bullet ,j}\,:\,j=1,\ldots ,N\big \}\) of \(\mathcal {P}^0(\mathcal {T}_\bullet )\) consisting of characteristic functions \(\chi _{\bullet ,j}\) of \(T_j\in \mathcal {T}_\bullet \). We make the ansatz
$$\begin{aligned} \phi _\bullet ^\star = \sum _{k=1}^N \varvec{x}_\bullet ^\star [k] \, \chi _{\bullet ,k} \quad \text {with coefficient vector } \varvec{x}_\bullet ^\star =(\varvec{x}_\bullet ^\star [1], \ldots , \varvec{x}_\bullet ^\star [N]) \in \mathbb {R}^N. \end{aligned}$$
Then, the Galerkin formulation (3) is equivalent to the linear system
$$\begin{aligned} \varvec{A}_\bullet \varvec{x}_\bullet ^\star = \varvec{b}_\bullet \text { with } \varvec{A}_\bullet [j,k] := \int _{T_j} (V \chi _{\bullet ,k})(x) \,\mathrm{d}x, \quad \varvec{b}_\bullet [j] := \int _{T_j} f(x) \,\mathrm{d}x, \end{aligned}$$
where the matrix \(\varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) is positive definite and symmetric. For a given initial triangulation \(\mathcal {T}_0\), we consider an adaptive mesh-refinement strategy of the type
$$\begin{aligned} \boxed {~solve~} \longrightarrow \boxed {~estimate~} \longrightarrow \boxed {~mark~} \longrightarrow \boxed {~refine~} \end{aligned}$$
which generates a sequence of successively refined triangulations \(\mathcal {T}_\ell \) for all \(\ell \in \mathbb {N}_0\). We note that the condition number of the Galerkin matrix \(\varvec{A}_\ell \) from (5) depends on the number of elements of \(\mathcal {T}_\ell \) as well as on the minimal and maximal element diameter. Therefore, the step solve requires an efficient preconditioner as well as an appropriate iterative solver.

1.2 State of the art

In the last decade, the mathematical understanding of adaptive mesh-refinement has matured. We refer to [8, 13, 18, 21, 37, 41] for some milestones for adaptive finite element methods for second-order linear elliptic equations, [3, 19, 20, 24, 27] for adaptive BEM, and [11] for a general framework of rate-optimality of adaptive mesh-refining algorithms. The interplay between adaptive mesh-refinement, optimal convergence rates, and inexact solvers has been addressed and analyzed for adaptive FEM for linear problems in [4, 6, 41], for eigenvalue problems in [12], and recently also for strongly monotone nonlinearities in [28]. In particular, all available results for adaptive BEM [3, 19, 20, 24, 27] assume that the Galerkin system (5) is solved exactly. Instead, the present work analyzes an adaptive algorithm which steers both the local mesh-refinement and the iterations of the PCG algorithm.

In principle, it is known [11, Section 7] that convergence and optimal convergence rates are preserved if the linear system is solved inexactly, but with sufficient accuracy. The purpose of this work is to guarantee the latter by incorporating an appropriate stopping criterion for the PCG solver into the adaptive algorithm. Moreover, to prove that the proposed algorithm not only leads to optimal algebraic convergence rates, but also to (almost) optimal computational costs, we provide an appropriate symmetric and positive definite preconditioner \(\varvec{P}_\ell \in \mathbb {R}^{N\times N}\) such that
  • first, the matrix-vector products with \(\varvec{P}_\ell ^{-1}\) can be computed at linear cost;

  • second, the system matrix \(\varvec{P}^{-1/2}_\ell \varvec{A}_\ell \varvec{P}_\ell ^{-1/2}\) of the preconditioned linear system
    $$\begin{aligned} \varvec{P}^{-1/2}_\ell \varvec{A}_\ell \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \varvec{b}_\ell \end{aligned}$$
    has a uniformly bounded condition number which is independent of \(\mathcal {T}_\ell \).
Then, \(\varvec{x}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star \) solves the original system (5). To that end, we exploit the multilevel structure of adaptively generated meshes in the framework of adaptive Schwarz methods. For hyper-singular integral equations, such a multilevel additive Schwarz preconditioner has been proposed and analyzed in [22, 25] for \(d = 2,3\) and for weakly-singular integral equations in [23] for \(d = 2\). The present work closes the remaining gap by analyzing an optimal additive Schwarz preconditioner for weakly-singular integral equations for \(d = 3\). Besides, we refer to the recent work [43] on optimal preconditioning in Hilbert spaces of negative order. We note that the proofs of [22, 23] do not transfer to weakly-singular integral equations for \(d=3\). Instead, we build on recent results for finite element discretizations [5, 33] which are then transferred to the present BEM setting by use of an abstract concept from [39].
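The mechanism behind these two requirements can be illustrated with a generic SPD matrix. The sketch below uses a simple Jacobi (diagonal) preconditioner purely as a stand-in for \(\varvec{P}_\ell \), not the additive Schwarz preconditioner constructed later; it shows both the drop in the condition number and the recovery of the original solution from the preconditioned system:

```python
import numpy as np

# Toy SPD matrix standing in for the Galerkin matrix; Jacobi scaling is used
# as an illustrative preconditioner only (the paper's P_l is multilevel).
rng = np.random.default_rng(0)
N = 50
M = rng.standard_normal((N, N))
A = M @ M.T + np.diag(np.linspace(1.0, 1e4, N))  # SPD, badly scaled

P_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(A)))  # P^{-1/2} for P = diag(A)
A_tilde = P_inv_sqrt @ A @ P_inv_sqrt            # preconditioned system matrix

# the condition number drops substantially after scaling out the diagonal
cond_A, cond_tilde = np.linalg.cond(A), np.linalg.cond(A_tilde)

# solving the preconditioned system and mapping back recovers x with A x = b
b = rng.standard_normal(N)
x_tilde = np.linalg.solve(A_tilde, P_inv_sqrt @ b)
x = P_inv_sqrt @ x_tilde
```

The last three lines are exactly the identity \(\varvec{x}_\ell ^\star = \varvec{P}_\ell ^{-1/2} \widetilde{\varvec{x}}_\ell ^\star \) stated above.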

1.3 Outline and main results

Section 2 introduces the functional analytic framework and fixes the necessary notation. Section 3 states our main results. In Sect. 3.1, we define a local multilevel additive Schwarz preconditioner (24) for a sequence of locally refined meshes. Theorem 3 states that the \(\ell _2\)-condition number of the preconditioned systems is uniformly bounded for all these meshes, i.e., the preconditioner is optimal. In Sect. 3.2, we first state our adaptive algorithm which steers the local mesh-refinement as well as the stopping of the PCG iteration (Algorithm 5). Theorem 8 proves
  • that the overall error in the energy norm can be controlled a posteriori,

  • that the quasi-error (which consists of energy norm error plus error estimator) is linearly convergent in each step of the adaptive algorithm (i.e., independent of whether the algorithm decides for local mesh-refinement or for one step of the PCG iteration),

  • that the quasi-error even decays with optimal rate (i.e., with each possible algebraic rate) with respect to the degrees of freedom, i.e., Algorithm 5 is rate optimal in the sense of, e.g., [11, 13, 24, 41].

Finally, Sect. 3.3 considers the computational costs. Under realistic assumptions on the treatment of the arising discrete integral operators, Corollary 10 states that the quasi-error converges at almost optimal rate (i.e., with rate \(s-\varepsilon \) for any \(\varepsilon >0\) if rate \(s>0\) is possible for the exact Galerkin solution) with respect to computational costs, i.e., Algorithm 5 requires almost optimal computational time. Section 4 underpins our theoretical findings by some 2D and 3D experiments. The proof of Theorem 3 is given in Sect. 5, the proofs of Theorem 8 and Corollary 10 are given in Sect. 6. The final Sect. 7 shows that our main results also apply to the hyper-singular integral equation.

2 Preliminaries and notation

2.1 Functional analytic setting

We briefly recall the most important facts and refer to [36] for further details and proofs. With the Sobolev space \(H^\alpha (\partial \Omega )\) defined as in [36] for \(0 \le \alpha \le 1\), let \(H^\alpha (\Gamma ) := \big \{v|_\Gamma \,:\,v \in H^\alpha (\partial \Omega )\big \}\) be associated with the natural quotient norm. Let \(\widetilde{H}^{-\alpha }(\Gamma )\) be the dual space of \(H^\alpha (\Gamma )\) with respect to the extended \(L^2(\Gamma )\) scalar product \(\langle \psi \,,\,f\rangle = \int _\Gamma \psi (x) \, f(x) \,\mathrm{d}x\). Then, the single-layer potential V from (1) gives rise to a bounded linear operator \(V:\widetilde{H}^{-1/2+s}(\Gamma ) \rightarrow H^{1/2+s}(\Gamma )\) for all \(-1/2 \le s \le 1/2\) which is even an isomorphism for \(-1/2< s < 1/2\). For \(d=2\), the latter requires \(\mathrm{diam}(\Omega ) < 1\), which can always be ensured by scaling \(\Omega \). For \(s=0\), the operator V is additionally symmetric and elliptic, i.e.,
$$\begin{aligned} \langle \langle \phi \,,\,\psi \rangle \rangle := \int _\Gamma (V\phi )(x) \, \psi (x) \,\mathrm{d}x\qquad \text {for all }\phi ,\psi \in \widetilde{H}^{-1/2}(\Gamma ) \end{aligned}$$
defines a scalar product and \(|||\,\phi \,|||^2 := \langle \langle \phi \,,\,\phi \rangle \rangle \) is an equivalent norm on \(\widetilde{H}^{-1/2}(\Gamma )\). For a given right-hand side \(f \in H^{1/2}(\Gamma )\), the weakly-singular integral equation (1) can thus equivalently be reformulated as
$$\begin{aligned} \langle \langle \phi ^\star \,,\,\psi \rangle \rangle = \langle f\,,\,\psi \rangle \quad \text { for all } \psi \in \widetilde{H}^{-1/2}(\Gamma ). \end{aligned}$$
In particular, the Lax–Milgram theorem proves existence and uniqueness of the solution \(\phi ^\star \in \widetilde{H}^{-1/2}(\Gamma )\) to (9).

2.2 Boundary element method (BEM)

Given a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \), let
$$\begin{aligned} \mathcal {P}^0(\mathcal {T}_\bullet ) := \big \{\psi _\bullet : \Gamma \rightarrow \mathbb {R}\,:\,\forall T \in \mathcal {T}_\bullet \quad \psi _\bullet |_T \text { is constant}\big \} \end{aligned}$$
be the space of \(\mathcal {T}_\bullet \)-piecewise constant functions. Note that \(\mathcal {P}^0(\mathcal {T}_\bullet ) \subset L^2(\Gamma ) \subset \widetilde{H}^{-1/2}(\Gamma )\). The Galerkin formulation (3) can be reformulated as
$$\begin{aligned} \langle \langle \phi _\bullet ^\star \,,\,\psi _\bullet \rangle \rangle = \langle f\,,\,\psi _\bullet \rangle \quad \text {for all } \psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet ). \end{aligned}$$
Therefore, the Lax–Milgram theorem proves existence and uniqueness of the discrete solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\).

2.3 Mesh-refinement for 2D BEM

For \(d = 2\), a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \) is a partition into non-degenerate compact line segments. It is called \(\gamma \)-shape regular, if
$$\begin{aligned} \max \big \{h_T / h_{T'}\,:\,T,T' \in \mathcal {T}_\bullet \text { with } T\cap T' \ne \emptyset \big \} \le \gamma . \end{aligned}$$
Here, \(h_T:=\mathrm{diam}(T) > 0\) denotes the Euclidean diameter of T, i.e., the length of the line segment.
We employ the extended bisection algorithm from [2]. For a mesh \(\mathcal {T}_\bullet \) and \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), let \(\mathcal {T}_\circ := \mathrm{refine}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) be the coarsest mesh such that all marked elements \(T\in \mathcal {M}_\bullet \) have been refined, i.e., \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \backslash \mathcal {T}_\circ \). We write \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\), if there exists \(n\in \mathbb {N}_0\), conforming triangulations \(\mathcal {T}_0,\ldots ,\mathcal {T}_n\) and corresponding sets of marked elements \(\mathcal {M}_j\subseteq \mathcal {T}_j\) such that
  • \(\mathcal {T}_\bullet = \mathcal {T}_0\),

  • \(\mathcal {T}_{j+1} = \mathrm{refine}(\mathcal {T}_j,\mathcal {M}_j)\) for all \(j=0,\ldots ,n-1\),

  • \(\mathcal {T}_\circ = \mathcal {T}_n\),

i.e., \(\mathcal {T}_\circ \) is obtained from \(\mathcal {T}_\bullet \) by finitely many steps of refinement. Note that the bisection algorithm from [2] guarantees, in particular, that all \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) are uniformly \(\gamma \)-shape regular, where \(\gamma \) depends only on \(\mathcal {T}_\bullet \).

2.4 Mesh-refinement for 3D BEM

For \(d = 3\), a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \) is a conforming triangulation into non-degenerate compact surface triangles. In particular, we avoid hanging nodes. To ease the presentation, we suppose that the elements \(T \in \mathcal {T}_\bullet \) are flat. The triangulation is called \(\gamma \)-shape regular, if
$$\begin{aligned} \max _{T\in \mathcal {T}_\bullet } \frac{\mathrm{diam}(T)}{h_T} \le \gamma . \end{aligned}$$
Here, \(\mathrm{diam}(T)\) denotes the Euclidean diameter of T and \(h_T:=|T|^{1/2}\) with |T| being the two-dimensional surface measure. Note that \(\gamma \)-shape regularity implies that \(h_T \le \mathrm{diam}(T) \le \gamma \, h_T\) and hence excludes anisotropic elements.
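This definition is easy to evaluate on a concrete surface mesh (a small sketch; the function names are ours):

```python
import math

def _sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _norm(v):
    return math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)

def gamma_shape_regularity(triangles):
    """max_T diam(T)/h_T with h_T = |T|^(1/2); each triangle is a tuple of
    three 3D vertices, and diam(T) is the length of its longest edge."""
    gamma = 0.0
    for a, b, c in triangles:
        diam = max(_norm(_sub(p, q)) for p, q in ((a, b), (b, c), (c, a)))
        u, v = _sub(b, a), _sub(c, a)
        cross = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        area = 0.5 * _norm(cross)  # two-dimensional surface measure |T|
        gamma = max(gamma, diam / math.sqrt(area))
    return gamma
```

A right isosceles triangle gives \(\mathrm{diam}(T)/h_T = \sqrt{2}/\sqrt{1/2} = 2\), while flattening a triangle drives the ratio to infinity, which is how the definition excludes anisotropic elements.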
For 3D BEM, we employ 2D newest vertex bisection (NVB) to refine triangulations locally; see [35, 42] for details on the refinement algorithm and Fig. 1 for an illustration. For a mesh \(\mathcal {T}_\bullet \) and \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), we employ the same notation \(\mathcal {T}_\circ := \mathrm{refine}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) resp. \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) as for \(d = 2\).
Fig. 1

For newest vertex bisection (NVB) in 2D, each triangle \(T\in \mathcal {T}\) has one reference edge, indicated by the double line (left). Bisection of T is achieved by halving the reference edge (middle). The reference edges of the sons are always opposite to the new vertex. Recursive application of this refinement rule leads to conforming triangulations
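The bisection rule of Fig. 1 can be written down directly. In the sketch below (our own storage convention, not taken from [35, 42]), a triangle is stored as (a, b, c) with reference edge (a, b), and each son again lists its reference edge first, so that it lies opposite the new vertex:

```python
def midpoint(p, q):
    return tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))

def bisect(tri):
    """One NVB step: halve the reference edge (a, b) at its midpoint m; the
    sons' reference edges, (c, a) and (b, c), are opposite the new vertex m."""
    a, b, c = tri
    m = midpoint(a, b)
    return ((c, a, m), (b, c, m))

def area(tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
```

Each son has half the area of its parent, so repeated bisection refines locally without degenerating the triangles.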

2.5 A posteriori BEM error control

For \(\psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\mathcal {U}_\bullet \subseteq \mathcal {T}_\bullet \), define
$$\begin{aligned} \eta _\bullet (\mathcal {U}_\bullet , \psi _\bullet )^2&:= \sum _{T\in \mathcal {U}_\bullet } \eta _\bullet (T, \psi _\bullet )^2,\nonumber \\ \text {where} \quad \eta _\bullet (T, \psi _\bullet )^2&:= h_T \, \Vert \nabla _\Gamma (f-V\psi _\bullet )\Vert _{L^2(T)}^2. \end{aligned}$$
Here \(\nabla _\Gamma (\cdot )\) denotes the arclength derivative for \(d = 2\) resp. the surface gradient for \(d = 3\). To abbreviate notation, let \(\eta _\bullet (\psi _\bullet ) := \eta _\bullet (\mathcal {T}_\bullet , \psi _\bullet )\). If \(\psi _\bullet = \phi _\bullet ^\star \) is the discrete solution to (11), then there holds the reliability estimate (i.e., the global upper bound)
$$\begin{aligned} |||\,\phi ^\star - \phi _\bullet ^\star \,||| \le C_{\mathrm{rel}} \, \eta _\bullet (\phi _\bullet ^\star ), \end{aligned}$$
where \(C_{\mathrm{rel}} > 0\) depends only on \(\Gamma \) and \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \); see [10, 17] for \(d = 2\) resp. [15] for \(d = 3\). Provided that \(\phi ^\star \in L^2(\Gamma )\), the following weak efficiency
$$\begin{aligned} |||\,\phi ^\star -\phi _\bullet ^\star \,|||+\eta _\bullet (\phi _\bullet ^\star )\le C_{\mathrm{eff}}\,\Vert h_\bullet ^{1/2}\,(\phi ^\star -\phi _\bullet ^\star )\Vert _{L^2(\Gamma )} \end{aligned}$$
has recently been proved in [3], where \(C_{\mathrm{eff}}>0\) depends only on \(\Gamma \) and \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \). We note that the weighted \(L^2\)-norm on the right-hand side of (16) is only slightly stronger than \(|||\,\cdot \,|||\simeq \Vert \cdot \Vert _{\widetilde{H}^{-1/2}(\Gamma )}\), so that one empirically observes \(\eta _\bullet (\phi _\bullet ^\star )\lesssim |||\,\phi ^\star -\phi _\bullet ^\star \,|||\) in practice, cf. [10, 15, 17]. In certain situations (e.g., the weakly-singular integral formulation of the interior 2D Dirichlet problem), one can rigorously prove the latter (strong) efficiency estimate up to higher-order data oscillations; see [2].

2.6 Preconditioned conjugate gradient method (PCG)

Suppose that \(\varvec{P}_\bullet , \varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) are symmetric and positive definite matrices. Given \(\varvec{b}_\bullet \in \mathbb {R}^N\) and an initial guess \(\varvec{x}_{\bullet 0}\), PCG (see [29, Algorithm 11.5.1]) aims to approximate the solution \(\varvec{x}_\bullet ^\star \in \mathbb {R}^N\) to (5). We note that each step of PCG has the following computational costs:
  • \(\mathcal {O}(N)\) cost for vector operations (e.g., assignment, addition, scalar product),

  • computation of one matrix-vector product with \(\varvec{A}_\bullet \),

  • computation of one matrix-vector product with \(\varvec{P}_\bullet ^{-1}\).

Let \(\widetilde{\varvec{x}}_\bullet ^\star \in \mathbb {R}^N\) be the solution to (7) and recall that \(\varvec{x}_\bullet ^\star = \varvec{P}_\bullet ^{-1/2} \widetilde{\varvec{x}}_\bullet ^\star \). We note that PCG formally applies the conjugate gradient method (CG, see [29, Algorithm 11.3.2]) for the matrix \(\widetilde{\varvec{A}}_\bullet := \varvec{P}_\bullet ^{-1/2} \varvec{A}_\bullet \varvec{P}_\bullet ^{-1/2}\) and the right-hand side \(\widetilde{\varvec{b}}_\bullet = \varvec{P}_\bullet ^{-1/2} \varvec{b}_\bullet \). The iterates \(\varvec{x}_{\bullet k} \in \mathbb {R}^N\) of PCG (applied to \(\varvec{P}_\bullet \), \(\varvec{A}_\bullet \), \(\varvec{b}_\bullet \), and the initial guess \(\varvec{x}_{\bullet 0}\)) and the iterates \(\widetilde{\varvec{x}}_{\bullet k}\) of CG (applied to \(\widetilde{\varvec{A}}_\bullet \), \(\widetilde{\varvec{b}}_\bullet \), and the initial guess \(\widetilde{\varvec{x}}_{\bullet 0} := \varvec{P}_\bullet ^{1/2} \varvec{x}_{\bullet 0}\)) are formally linked by
$$\begin{aligned} \varvec{x}_{\bullet k} = \varvec{P}_\bullet ^{-1/2} \widetilde{\varvec{x}}_{\bullet k}; \end{aligned}$$
see [29, Section 11.5]. Moreover, direct computation proves that
$$\begin{aligned} \Vert \widetilde{\varvec{y}}_\bullet \Vert _{\widetilde{\varvec{A}}_\bullet }^2 := \widetilde{\varvec{y}}_\bullet \cdot \widetilde{\varvec{A}}_\bullet \widetilde{\varvec{y}}_\bullet = \varvec{y} _\bullet \cdot \varvec{A}_\bullet \varvec{y}_\bullet =: \Vert \varvec{y}_\bullet \Vert _{\varvec{A}_\bullet }^2 \quad \text {for all }\widetilde{\varvec{y}}_\bullet \in \mathbb {R}^N\text { and }\varvec{y}_\bullet = \varvec{P}_\bullet ^{-1/2} \widetilde{\varvec{y}}_\bullet . \end{aligned}$$
Consequently, [29, Theorem 11.3.3] for CG (applied to \(\widetilde{\varvec{A}}_\bullet \), \(\widetilde{\varvec{b}}_\bullet \), \(\widetilde{\varvec{x}}_{\bullet 0}\)) yields the following lemma for PCG (which follows from the implicit steepest descent approach of CG).

Lemma 1

Let \(\varvec{A}_\bullet , \varvec{P}_\bullet \in \mathbb {R}^{N \times N}\) be symmetric and positive definite, \(\varvec{b}_\bullet \in \mathbb {R}^N\), \(\varvec{x}_\bullet ^\star := \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \), and \(\varvec{x}_{\bullet 0} \in \mathbb {R}^N\). Suppose the \(\ell _2\)-condition number estimate
$$\begin{aligned} \mathrm{cond}_2(\varvec{P}_\bullet ^{-1/2} \varvec{A}_\bullet \varvec{P}_\bullet ^{-1/2}) \le C_{\mathrm{pcg}}. \end{aligned}$$
Then, the iterates \(\varvec{x}_{\bullet k}\) of the PCG algorithm satisfy the contraction property
$$\begin{aligned} \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet (k+1)}\Vert _{\varvec{A}_\bullet } \le q_{\mathrm{pcg}} \, \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k}\Vert _{\varvec{A}_\bullet } \quad \text {for all } k \in \mathbb {N}_0, \end{aligned}$$
where \(q_{\mathrm{pcg}} := (1-1/C_{\mathrm{pcg}})^{1/2} < 1\). \(\square \)
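The contraction property of Lemma 1 can be checked numerically. The sketch below implements plain PCG on a generic SPD system, again with a Jacobi preconditioner as an illustrative stand-in for \(\varvec{P}_\bullet \), and compares the per-step reduction of the energy-norm error with \(q_{\mathrm{pcg}}\):

```python
import numpy as np

def pcg(A, b, P_inv, x0, steps):
    """Preconditioned CG; P_inv applies the preconditioner inverse to a
    vector. Returns the list of iterates x_0, ..., x_steps."""
    x, r = x0.copy(), b - A @ x0
    z = P_inv(r)
    p = z.copy()
    iterates = [x.copy()]
    for _ in range(steps):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = P_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        iterates.append(x.copy())
    return iterates

# generic SPD test system with Jacobi preconditioning (illustrative only)
rng = np.random.default_rng(1)
N = 40
M = rng.standard_normal((N, N))
A = M @ M.T + np.diag(np.linspace(1.0, 100.0, N))
d = np.diag(A)
b = rng.standard_normal(N)
x_star = np.linalg.solve(A, b)

# C_pcg from (18) and the contraction factor q_pcg of Lemma 1
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
C_pcg = np.linalg.cond(D_inv_sqrt @ A @ D_inv_sqrt)
q_pcg = np.sqrt(1.0 - 1.0 / C_pcg)

iterates = pcg(A, b, lambda v: v / d, np.zeros(N), 10)
err = [np.sqrt((x_star - x) @ A @ (x_star - x)) for x in iterates]
ratios = [err[k + 1] / err[k] for k in range(len(err) - 1)]
```

In exact arithmetic, every ratio is bounded by \(q_{\mathrm{pcg}}\); in practice, CG typically contracts much faster than this worst-case bound.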
If the matrix \(\varvec{A}_\bullet \in \mathbb {R}^{N \times N}\) stems from the Galerkin discretization (5) for \(\mathcal {T}_\bullet = \{ T_1, \ldots , T_N \}\), there is a one-to-one correspondence of vectors \(\varvec{y}_\bullet \in \mathbb {R}^N\) and discrete functions \(\psi _\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) via \(\psi _\bullet = \sum _{j=1}^N \varvec{y}_\bullet [j] \, \chi _{\bullet ,j}\). Let \(\phi _{\bullet k} \in \mathcal {P}^0(\mathcal {T}_\bullet )\) denote the discrete function corresponding to the PCG iterate \(\varvec{x}_{\bullet k} \in \mathbb {R}^N\), while the Galerkin solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) of (11) corresponds to \(\varvec{x}_\bullet ^\star = \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \). We note the elementary identity
$$\begin{aligned} |||\,\phi _\bullet ^\star - \phi _{\bullet k}\,|||^2 = (\varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k})\cdot \varvec{A}_\bullet (\varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k}) = \Vert \varvec{x}_\bullet ^\star - \varvec{x}_{\bullet k}\Vert _{\varvec{A}_\bullet }^2. \end{aligned}$$

2.7 Optimal preconditioners

We say that \(\varvec{P}_\bullet \) is an optimal preconditioner, if \(C_{\mathrm{pcg}} \ge 1\) in the \(\ell _2\)-condition number estimate (18) depends only on \(\gamma \)-shape regularity of \(\mathcal {T}_\bullet \) and the initial mesh \(\mathcal {T}_0\) (and is hence essentially independent of the mesh \(\mathcal {T}_\bullet \)).

3 Main results

3.1 Optimal additive Schwarz preconditioner

In this work, we consider multilevel additive Schwarz preconditioners that build on the adaptive mesh-hierarchy.

Let \(\mathcal {E}_\bullet \) denote the set of all nodes (\(d=2\)) resp. edges (\(d=3\)) of the mesh \(\mathcal {T}_\bullet \) which do not belong to the relative boundary \(\partial \Gamma \). For \(\Gamma =\partial \Omega \), and only then, \(\mathcal {E}_\bullet \) contains all nodes resp. edges of \(\mathcal {T}_\bullet \). For \(E\in \mathcal {E}_\bullet \), let \(T^\pm \in \mathcal {T}_\bullet \) denote the two unique elements with \(T^+\cap T^- = E\). We define the Haar-type function \(\varphi _{\bullet ,E}\in \mathcal {P}^0(\mathcal {T}_\bullet )\) (associated to \(E\in \mathcal {E}_\bullet \)) by
$$\begin{aligned} \varphi _{\bullet ,E}|_T := {\left\{ \begin{array}{ll} \pm \frac{|E|}{|T^\pm |} &{}\quad \text {for } T\in \{T^+,T^-\}, \\ 0 &{}\quad \text {else}, \end{array}\right. } \end{aligned}$$
where \(|E| := 1\) for \(d=2\) and \(|E| :=\mathrm{diam}(E)\) for \(d=3\). Note that
$$\begin{aligned} \varphi _{\bullet ,E} \in \mathcal {P}_*^0(\mathcal {T}_\bullet ) :=\Big \{\psi \in \mathcal {P}^0(\mathcal {T}_\bullet )\,:\,\int _\Gamma \psi \,\mathrm{d}x = 0\Big \}. \end{aligned}$$
For \(d=3\), we additionally suppose that the orientation of each edge E is arbitrary but fixed. We choose \(T^+ \in \mathcal {T}_\bullet \) such that \(\partial T^+\) and \(E \subset \partial T^+\) have the same orientation.
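For \(d = 2\), where \(\Gamma \) is partitioned into line segments and \(|E| = 1\), the zero-mean property defining \(\mathcal {P}_*^0(\mathcal {T}_\bullet )\) is easy to verify on a concrete mesh (a small sketch with our own helper names):

```python
def haar_coefficients(nodes, i):
    """Coefficient vector of the Haar-type function phi_E for the interior
    node E = nodes[i] of a 1D mesh with elements T_j = [nodes[j], nodes[j+1]].
    For d = 2 the text sets |E| = 1, so phi_E = +1/|T^+| on T^+ and
    -1/|T^-| on T^- (here T^+ is fixed as the element right of E)."""
    n_elems = len(nodes) - 1
    if not 1 <= i <= n_elems - 1:
        raise ValueError("E must be an interior node")
    coeffs = [0.0] * n_elems
    coeffs[i] = 1.0 / (nodes[i + 1] - nodes[i])       # +|E| / |T^+|
    coeffs[i - 1] = -1.0 / (nodes[i] - nodes[i - 1])  # -|E| / |T^-|
    return coeffs

def integrate(nodes, coeffs):
    """Integral of a piecewise-constant function over the mesh."""
    return sum(c * (nodes[j + 1] - nodes[j]) for j, c in enumerate(coeffs))
```

By construction, the two non-zero contributions cancel, so \(\int _\Gamma \varphi _{\bullet ,E}\,\mathrm{d}x = 0\) independently of the local mesh-sizes.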
Given a mesh \(\mathcal {T}_0\), suppose that \(\mathcal {T}_\ell \) is a sequence of locally refined meshes, i.e., for all \(\ell \in \mathbb {N}_0\), there exists a set \(\mathcal {M}_\ell \subseteq \mathcal {T}_\ell \) such that \(\mathcal {T}_{\ell +1} = \mathrm{refine}(\mathcal {T}_\ell ,\mathcal {M}_\ell )\). Then, define
$$\begin{aligned} \mathcal {E}_\ell ^\star := \big (\mathcal {E}_\ell \backslash \mathcal {E}_{\ell -1}\big ) \cup \big \{E\in \mathcal {E}_\ell \,:\,{\text {supp}}(\varphi _{\ell ,E})\subsetneqq {\text {supp}}(\varphi _{\ell -1,E})\big \} \quad \text {for all }\ell \ge 1, \end{aligned}$$
which consists of the new (interior) nodes/edges plus some of their neighbours. We note the following subspace decomposition which is, in general, not direct.

Lemma 2

With \(\mathcal {X}_\bullet := \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\mathcal {X}_{\bullet ,E} := {\text {span}}\{\varphi _{\bullet ,E}\}\), it holds that
$$\begin{aligned} \mathcal {X}_L = \mathcal {X}_0 + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \mathcal {X}_{\ell ,E} \quad \text {for all } L \in \mathbb {N}_0. \end{aligned}$$
\(\square \)
Additive Schwarz preconditioners are based on (not necessarily direct) subspace decompositions. Following the standard theory (see, e.g., [44, Chapter 2]), (23) yields a (local multilevel) preconditioner. To provide its matrix formulation, let \(\varvec{I}_{k,\ell }\in \mathbb {R}^{\#\mathcal {T}_\ell \times \#\mathcal {T}_k}\) be the matrix representation of the canonical embedding \(\mathcal {P}^0(\mathcal {T}_k) \hookrightarrow \mathcal {P}^0(\mathcal {T}_\ell )\) for \(k<\ell \), i.e.,
$$\begin{aligned} \sum _{i=1}^{\#\mathcal {T}_k}\varvec{x}_k[i]\,\chi _{k,i}=\sum _{i=1}^{\#\mathcal {T}_\ell }\varvec{x}_\ell [i]\,\chi _{\ell ,i}\quad \text {for all }\varvec{x}_k\in \mathbb {R}^{\#\mathcal {T}_k}\text { and } \varvec{x}_\ell :=\varvec{I}_{k,\ell }\varvec{x}_k\in \mathbb {R}^{\#\mathcal {T}_{\ell }}. \end{aligned}$$
Let \(\varvec{H}_\ell \in \mathbb {R}^{\#\mathcal {T}_\ell \times \#\mathcal {E}_\ell }\) denote the matrix that represents Haar-type functions, i.e.,
$$\begin{aligned} \varphi _{\ell ,E_j}=\sum _{i=1}^{\#\mathcal {T}_\ell }\varvec{H}_\ell [i,j]\chi _{\ell ,i}\quad \text {for all }E_j\in \mathcal {E}_\ell . \end{aligned}$$
Since only two coefficients per column are non-zero, \(\varvec{H}_\ell \) is sparse, while \(\varvec{I}_{k,\ell }\) is non-sparse in general. Finally, define the (non-invertible) diagonal matrix \(\varvec{D}_\ell \in \mathbb {R}^{\#\mathcal {E}_\ell \times \#\mathcal {E}_\ell }\) by
$$\begin{aligned} (\varvec{D}_\ell )_{jk} := {\left\{ \begin{array}{ll} |||\,\varphi _{\ell ,E_j}\,|||^{-2} &{}\quad E_j\in \mathcal {E}_\ell ^\star \text { and }j=k, \\ 0 &{}\quad \text {else}. \end{array}\right. } \end{aligned}$$
Then, the matrix representation of the preconditioner associated to (23) reads
$$\begin{aligned} \varvec{P}_L^{-1} := \varvec{I}_{0,L}\varvec{A}_0^{-1}\varvec{I}_{0,L}^T + \sum _{\ell =1}^L \varvec{I}_{\ell ,L} \varvec{H}_\ell \varvec{D}_\ell \varvec{H}_\ell ^T \varvec{I}_{\ell ,L}^T. \end{aligned}$$
For \(d=2\), the subsequent Theorem 3 is already proved in [23, Section III.B] for \(\Gamma = \partial \Omega \) and in [26, Section 6.3] for \(\Gamma \subsetneqq \partial \Omega \). For \(d = 3\), we need the following additional assumptions:
  • First, suppose that \(\Omega \subset \mathbb {R}^3\) is simply connected and \(\Gamma = \partial \Omega \).

  • Second, let \(\widehat{\mathcal {T}}_0\) be a conforming triangulation of \(\Omega \) into non-degenerate compact simplices such that \(\mathcal {T}_0 = \widehat{\mathcal {T}}_0|_\Gamma \) is the induced boundary partition on \(\Gamma \).

Then, the following theorem is our first main result. The proof is given in Sect. 5.

Theorem 3

Under the foregoing assumptions, the preconditioner \(\varvec{P}_L\) from (24) is optimal, i.e., there holds (18), where \(C_{\mathrm{pcg}}\ge 1\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\), but is independent of \(L\in \mathbb {N}\).

We stress that the matrix in (24) will never be assembled in practice. The PCG algorithm only needs the action of \(\varvec{P}_L^{-1}\) on a vector. This can be done recursively by using the embeddings \(\varvec{I}_{\ell ,\ell +1}\) which are, in fact, sparse. Up to (storing and) inverting \(\varvec{A}_0\) on the coarse mesh, the evaluation of \(\varvec{P}_L^{-1}\varvec{x}\) can be done in \(\mathcal {O}(\#\mathcal {T}_L)\) operations; see, e.g., [22, Section 3.1] for a detailed discussion. If the mesh \(\mathcal {T}_L\) is fine compared to the initial mesh \(\mathcal {T}_0\) (or if \(\varvec{A}_0\) is realized with, e.g., \(\mathcal {H}\)-matrix techniques), then the computational costs and storage requirements associated with \(\varvec{A}_0\) can be neglected.
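This recursive evaluation can be sketched as follows: restrict the input vector level by level via the transposed one-level embeddings, then accumulate the coarse solve and the level-wise Haar corrections on the way back up. The sketch uses dense numpy arrays of the right shapes purely to verify the algebra against the assembled formula (24); in practice the one-level embeddings are sparse and \(\varvec{D}_\ell \) is applied entrywise:

```python
import numpy as np

def apply_P_inv(x, A0_inv, embed, haar, diag):
    """y = P_L^{-1} x from (24), using only the one-level embeddings
    embed[l]: level l -> level l+1 (so I_{l,L} is never formed).
    haar[l] and diag[l] are H and the diagonal of D of level l+1."""
    L = len(embed)
    # restriction: r[l] = I_{l,L}^T x via successive transposed embeddings
    r = [None] * (L + 1)
    r[L] = x
    for l in range(L - 1, -1, -1):
        r[l] = embed[l].T @ r[l + 1]
    # accumulation: coarse solve, then prolongate and add Haar corrections
    y = A0_inv @ r[0]
    for l in range(L):
        y = embed[l] @ y + haar[l] @ (diag[l] * (haar[l].T @ r[l + 1]))
    return y

# shape-compatible random data for a two-level hierarchy (sizes 2 -> 3 -> 5),
# purely to check the recursion against the assembled matrix from (24)
rng = np.random.default_rng(2)
embed = [rng.standard_normal((3, 2)), rng.standard_normal((5, 3))]
haar = [rng.standard_normal((3, 2)), rng.standard_normal((5, 4))]
diag = [rng.random(2), rng.random(4)]
A0_inv = np.eye(2)

I02 = embed[1] @ embed[0]  # I_{0,2}, never formed inside apply_P_inv
P_inv = (I02 @ A0_inv @ I02.T
         + embed[1] @ haar[0] @ np.diag(diag[0]) @ haar[0].T @ embed[1].T
         + haar[1] @ np.diag(diag[1]) @ haar[1].T)
x = rng.standard_normal(5)
```

With sparse embeddings, each level contributes cost proportional to its number of elements, which is the source of the \(\mathcal {O}(\#\mathcal {T}_L)\) bound discussed above.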

Remark 4

Our proof for \(d = 3\) requires additional assumptions on \(\Omega \), \(\Gamma = \partial \Omega \), and \(\mathcal {T}_0\). As stated above, the case \(d = 2\) allows for a different proof (which, however, does not transfer to \(d = 3\)) and can thus avoid these assumptions; see [23, 26]. We believe that Theorem 3 also holds for \(d = 3\) and \(\Gamma \subsetneqq \partial \Omega \). This is also underpinned by a numerical experiment in Sect. 4.4. The mathematical proof, however, remains open.

3.2 Optimal convergence of adaptive algorithm

We analyze the following adaptive strategy which is driven by the weighted-residual error estimator (14). We note that Algorithm 5 as well as the following results are independent of the precise preconditioning strategy as long as the employed preconditioners are optimal; see Sect. 2.7.

Algorithm 5

Input: Conforming triangulation \(\mathcal {T}_0\) of \(\Gamma \), adaptivity parameters \(0<\theta \le 1\), \(\lambda > 0\), and \(C_{\mathrm{mark}} > 0\), and an optimal preconditioning strategy \(\varvec{P}_\bullet \) for all \(\mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0)\).

Loop: With \(k := 0 =: j\) and \(\phi _{00} := 0\), iterate the following steps (i)–(vii):
  • (i) Update counter \((j,k) \mapsto (j,k+1)\).

  • (ii) Do one step of the PCG algorithm with the optimal preconditioner \(\varvec{P}_j\) to obtain \(\phi _{jk} \in \mathcal {P}^0(\mathcal {T}_j)\) from \(\phi _{j(k-1)} \in \mathcal {P}^0(\mathcal {T}_j)\).

  • (iii) Compute the local contributions \(\eta _j(T,\phi _{jk})\) of the error estimator for all \(T \in \mathcal {T}_j\).

  • (iv) If \(|||\,\phi _{jk} - \phi _{j(k-1)}\,||| > \lambda \, \eta _j(\phi _{jk})\), continue with (i).

  • (v) Otherwise, define \(\underline{k}(j):=k\) and determine some set \(\mathcal {M}_j \subseteq \mathcal {T}_j\) with up to the multiplicative factor \(C_{\mathrm{mark}}\) minimal cardinality such that \(\theta \, \eta _j(\phi _{jk}) \le \eta _j(\mathcal {M}_j, \phi _{jk})\).

  • (vi) Generate \(\mathcal {T}_{j+1} := \mathrm{refine}(\mathcal {T}_j,\mathcal {M}_j)\) and define \(\phi _{(j+1)0} := \phi _{jk}\).

  • (vii) Update counter \((j, k) \mapsto (j+1, 0)\) and continue with (i).

Output: Sequences of successively refined triangulations \(\mathcal {T}_j\), discrete solutions \(\phi _{jk}\), and corresponding error estimators \(\eta _j(\phi _{jk})\), for all \(j \ge 0\) and \(k \ge 0\). \(\square \)
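Step (v) is the Dörfler marking criterion. For \(C_{\mathrm{mark}} = 1\), a set of minimal cardinality is obtained greedily by sorting the squared local indicators (a minimal sketch; the function name is ours):

```python
def doerfler_marking(eta_sq, theta):
    """Greedy Doerfler marking: a smallest set M (up to ties) with
    theta^2 * sum_T eta_sq[T] <= sum_{T in M} eta_sq[T], which is
    theta * eta_j(phi_jk) <= eta_j(M_j, phi_jk) as in step (v).
    eta_sq maps each element to its squared indicator eta_j(T, phi_jk)^2."""
    total = sum(eta_sq.values())
    marked, acc = [], 0.0
    for T in sorted(eta_sq, key=eta_sq.get, reverse=True):
        if acc >= theta**2 * total:
            break
        marked.append(T)
        acc += eta_sq[T]
    return marked
```

For \(\theta = 1\) every element is marked, which corresponds to uniform mesh-refinement; for small \(\theta \) only the elements carrying the dominant part of the estimator are refined.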

Remark 6

The choice \(\lambda = 0\) corresponds to the case that (11) is solved exactly, i.e., \(\phi _{(j+1)0} = \phi _j^\star \). Then, optimal convergence of Algorithm 5 has already been proved in [2, 19, 24, 27] for weakly-singular integral equations and in [20, 27] for hyper-singular integral equations. The choice \(\theta = 1\) will generically lead to uniform mesh-refinement, where for each mesh all elements \(\mathcal {M}_j = \mathcal {T}_j\) are refined in step (vi) of Algorithm 5. Instead, small \(0 < \theta \ll 1\) will lead to highly adapted meshes.

Remark 7

Let \(\mathcal {Q} := \big \{(j, k) \in \mathbb {N}_0 \times \mathbb {N}_0\,:\,\text {index } (j,k) \text { is used in Algorithm 5}\big \}\). It holds that \((0,0) \in \mathcal {Q}\). Moreover, for \(j, k\in \mathbb {N}_0\), it holds that
  • for \(j\ge 1\), \((j, 0) \in \mathcal {Q}\) implies that \((j-1, 0) \in \mathcal {Q}\),

  • for \(k\ge 1\), \((j, k) \in \mathcal {Q}\) implies that \((j, k-1) \in \mathcal {Q}\).

If j is clear from the context, we abbreviate \(\underline{k} := \underline{k}(j)\), e.g., \(\phi _{j\underline{k}} := \phi _{j\underline{k}(j)}\). In particular, it holds that \(\phi _{j\underline{k}} = \phi _{(j+1)0}\). Since PCG (like any Krylov method) provides the exact solution after at most \(\#\mathcal {T}_j\) steps, it follows that \(1 \le \underline{k}(j) < \infty \). Finally, we define the ordering
$$\begin{aligned}&(j',k')< (j,k) \quad {\mathop {\Longleftrightarrow }\limits ^{\mathrm{def}}}\quad \left\{ \begin{array}{ll} \text {either:}&{}\quad j'< j\\ \text {or:} &{}\quad j' = j \text { and } k' < k \end{array}\right\} \qquad \\&\quad \quad \text {for all } (j', k'), (j, k) \in \mathcal {Q}. \end{aligned}$$
Moreover, let
$$\begin{aligned} |(j,k)| := {\left\{ \begin{array}{ll} 0, &{}\quad \text {if } j = 0 = k, \\ \#\big \{(j',k') \in \mathcal {Q}\,:\,(j',k')< (j,k) \text { and } k'<\underline{k}(j^\prime )\big \}, &{}\quad \text {if } j> 0 \text { or } k > 0, \end{array}\right. } \end{aligned}$$
be the total number of PCG iterations until the computation of \(\phi _{jk}\). Note that \(j'>j\) and \(|(j',k')| = |(j,k)|\) imply that \(j'=j+1\), \(k=\underline{k}(j)\), and \(k'=0\) and hence \(\phi _{j'k'} = \phi _{jk}\). \(\square \)
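For illustration, the counting \(|(j,k)|\) can be sketched as follows, assuming the values \(\underline{k}(0), \underline{k}(1), \ldots \) are given (the function name is illustrative):

```python
def total_pcg_iterations(j, k, kbar):
    """Return |(j,k)|, the total number of PCG iterations performed in
    Algorithm 5 before phi_{jk} is available; kbar[i] = underline-k(i).
    Assumes (j,k) is a valid index of Q, i.e., 0 <= k <= kbar[j]."""
    if j == 0 and k == 0:
        return 0
    # each earlier level j' < j contributes its kbar[j'] completed PCG steps;
    # the index (j'+1, 0) is not counted (condition k' < underline-k(j')),
    # since phi_{(j+1)0} = phi_{j underline-k(j)} requires no additional step
    return sum(kbar[i] for i in range(j)) + k

# e.g. with underline-k = (2, 3, 1): phi_{10} = phi_{02} after 2 iterations
assert total_pcg_iterations(1, 0, (2, 3, 1)) == 2
assert total_pcg_iterations(1, 0, (2, 3, 1)) == total_pcg_iterations(0, 2, (2, 3, 1))
```

The last two lines reflect the final observation of Remark 7: indices with equal counter \(|(\cdot ,\cdot )|\) correspond to the same discrete function.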

Theorem 8

The output of Algorithm 5 satisfies the following assertions (a)–(c). The constants \(C_{\mathrm{rel}}^\star , C_{\mathrm{eff}}^\star >0\) depend only on \(q_{\mathrm{pcg}}\), \(\Gamma \), and the uniform \(\gamma \)-shape regularity of \(\mathcal {T}_j \in \mathrm{refine}(\mathcal {T}_0)\), whereas \(C_{\mathrm{lin}}>1\) and \(0<q_{\mathrm{lin}}<1\) depend additionally only on \(\theta \) and \(\lambda \), and \(C_{\mathrm{opt}}>0\) depends additionally only on s, \(\mathcal {T}_0\), and \(\Lambda _{0\underline{k}}\).
  (a) There exists a constant \(C_{\mathrm{rel}}^\star > 0\) such that
    $$\begin{aligned} |||\,\phi ^\star - \phi _{jk}\,|||&\le C_{\mathrm{rel}}^\star \, \big ( \eta _j(\phi _{jk}) + |||\,\phi _{jk} - \phi _{j(k-1)}\,||| \big )\nonumber \\&\qquad \text {for all } (j,k) \in \mathcal {Q}\quad \text { with } k \ge 1. \end{aligned}$$
    There exists a constant \(C_{\mathrm{eff}}^\star > 0\) such that, provided that \(\phi ^\star \in L^2(\Gamma )\), it holds that
    $$\begin{aligned} \eta _j(\phi _{jk})&\le C_{\mathrm{eff}}^\star \, \big ( \Vert h_j^{1/2}(\phi ^\star - \phi _{jk})\Vert _{L^2(\Gamma )} + |||\,\phi _{jk} - \phi _{j(k-1)}\,||| \big )\nonumber \\&\qquad \text {for all } (j,k) \in \mathcal {Q}, k \ge 1. \end{aligned}$$
  (b) For arbitrary \(0 < \theta \le 1\) and arbitrary \(\lambda > 0\), there exist constants \(C_{\mathrm{lin}} \ge 1\) and \(0< q_{\mathrm{lin}} < 1\) such that the quasi-error
    $$\begin{aligned} \Lambda _{jk}^2 := |||\,\phi ^\star - \phi _{jk}\,|||^2 + \eta _{j}(\phi _{jk})^2 \end{aligned}$$
    is linearly convergent in the sense of
    $$\begin{aligned} \Lambda _{j'k'} \le C_{\mathrm{lin}} \, q_{\mathrm{lin}}^{|(j',k')|-|(j,k)|} \, \Lambda _{jk} \quad \text {for all } (j,k), (j',k') \in \mathcal {Q} \text { with } (j',k') \ge (j,k). \end{aligned}$$
  (c) For \(s > 0\), define the approximation class
    $$\begin{aligned} \Vert \phi ^\star \Vert _{\mathbb {A}_s} := \sup _{N \in \mathbb {N}_0} \left( (N+1)^s \, \min _{\begin{array}{c} \mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0) \\ \#\mathcal {T}_\bullet - \#\mathcal {T}_0 \le N \end{array}} \eta _\bullet (\phi _\bullet ^\star )\right) . \end{aligned}$$
    Then, for sufficiently small \(0 < \theta \ll 1\) and \(0 < \lambda \ll 1\), cf. Assumption (68) below, and all \(s>0\), it holds that
    $$\begin{aligned}&\Vert \phi ^\star \Vert _{\mathbb {A}_s}< \infty \Longleftrightarrow \exists \, C_{\mathrm{opt}} > 0: \sup _{(j,k) \in \mathcal {Q}} \big ( \#\mathcal {T}_j - \#\mathcal {T}_0 + 1 \big )^{s} \, \Lambda _{jk}\le C_{\mathrm{opt}}\,\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty . \end{aligned}$$

Remark 9

By definition, it holds that \(\eta _{j}(\phi _{jk}) \le \Lambda _{jk}\) for all \((j,k) \in \mathcal {Q}\). If \(\phi _{jk} \in \{\phi _j^\star ,\phi _{j\underline{k}}\}\), then there also holds the converse inequality \(\eta _{j}(\phi _{jk}) \simeq \Lambda _{jk}\). To see this, note that \(\phi _{jk} = \phi _j^\star \) together with (15) proves that \(\Lambda _{jk} \le (1+C_{\mathrm{rel}}) \, \eta _{j}(\phi _{jk})\). If \(\phi _{jk} = \phi _{j\underline{k}}\), then Theorem 8(a) and Step (iv) of Algorithm 5 prove that \(\Lambda _{j\underline{k}} \le (1+C_{\mathrm{rel}}^\star ) \, \eta _{j}(\phi _{j\underline{k}}) + |||\,\phi _{j\underline{k}} - \phi _{j(\underline{k}-1)}\,||| \le (1 + C_{\mathrm{rel}}^\star + \lambda ) \, \eta _{j}(\phi _{j\underline{k}})\).

3.3 Almost optimal computational complexity

Suppose that we use \(\mathcal {H}^2\)-matrices for the efficient treatment of the discrete single-layer integral operator. Recall that the storage requirements (resp. the cost for one matrix-vector multiplication) of an \(\mathcal {H}^2\)-matrix are of order \(\mathcal {O}(Np^2)\), where N is the matrix size and \(p \in \mathbb {N}\) is the local block rank. For \(\mathcal {H}^2\)-matrices (unlike \(\mathcal {H}\)-matrices), these costs are, in particular, independent of a possibly unbalanced binary tree which underlies the hierarchical data structure [30].

For a mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), we employ the local block rank \(p = \mathcal {O}(\log (1+\#\mathcal {T}_\bullet ))\) to ensure that the matrix compression is asymptotically exact as \(N = \#\mathcal {T}_\bullet \rightarrow \infty \), i.e., the error between the exact matrix and the \(\mathcal {H}^2\)-matrix decays exponentially fast; see [30]. We stress that we neglect this error in the following and assume that the matrix-vector multiplication (based on the \(\mathcal {H}^2\)-matrix) yields the exact matrix-vector product.

The computational cost for storing \(\varvec{A}_\bullet \) (as well as for one matrix-vector multiplication) is \(\mathcal {O}((\#\mathcal {T}_\bullet )\log ^2(1+\#\mathcal {T}_\bullet ))\). In an idealized optimal case, the computation of \(\phi _\bullet ^\star \) is hence (at least) of cost \(\mathcal {O}((\#\mathcal {T}_\bullet )\log ^2(1+\#\mathcal {T}_\bullet ))\).

We consider the computational costs for one step of Algorithm 5:
  • We assume that one step of the PCG algorithm with the employed optimal preconditioner is of cost \(\mathcal {O}\big ((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\); cf. the preconditioner from Sect. 3.1.

  • We assume that we can compute \(\eta _j(\psi _j)\) for any \(\psi _j \in \mathcal {P}^0(\mathcal {T}_j)\) (by means of numerical quadrature) with \(\mathcal {O}\big ((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\) operations.

  • Clearly, the Dörfler marking in Step (v) can be done in \(\mathcal {O}\big ((\#\mathcal {T}_j)\log (1+\#\mathcal {T}_j)\big )\) operations by sorting. Moreover, for \(C_{\mathrm{mark}} = 2\), Stevenson [41] proposed a realization of the Dörfler marking based on binning, which can be performed at linear cost \(\mathcal {O}(\#\mathcal {T}_j)\).

  • Finally, the mesh-refinement in Step (vi) can be done in linear complexity \(\mathcal {O}(\#\mathcal {T}_j)\) if the data structure is appropriate.

Overall, one step of Algorithm 5 is thus done in \(\mathcal {O}((\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j))\) operations. However, an adaptive step \((j',k') \in \mathcal {Q}\) depends on the full history of previous steps.
  • Hence, the cumulative computational complexity for the adaptive step \((j',k') \in \mathcal {Q}\) is of order \(\mathcal {O}\big (\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\big )\).
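The sorting-based realization of the Dörfler marking in step (v) can be sketched as follows (a greedy sketch that yields minimal cardinality up to ties; Stevenson's binning variant would replace the sort by bucketing to obtain linear cost):

```python
def doerfler_mark(eta_local, theta):
    """Greedy Doerfler marking: return indices of a minimal set M with
    theta * eta_j(phi_jk) <= eta_j(M, phi_jk), i.e., in terms of the
    squared local contributions eta_local[i] = eta_j(T_i, phi_jk):
    theta^2 * sum_i eta_local[i]^2 <= sum_{i in M} eta_local[i]^2.
    Sorting makes this O(N log N)."""
    total = sum(e * e for e in eta_local)
    order = sorted(range(len(eta_local)), key=lambda i: -eta_local[i])
    marked, acc = [], 0.0
    for i in order:
        if acc >= theta**2 * total:
            break  # marked set already satisfies the Doerfler criterion
        marked.append(i)
        acc += eta_local[i] ** 2
    return marked

marked = doerfler_mark([0.1, 0.7, 0.2, 0.7], theta=0.6)  # marks index 1 only
```

With \(\theta = 1\), all elements are marked, which corresponds to uniform refinement as noted in Remark 6.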

The following corollary proves that Algorithm 5 leads to convergence of the quasi-error \(\Lambda _{jk}\) not only with optimal rate with respect to the degrees of freedom (see Theorem 8), but also with almost optimal rate with respect to the computational costs.

Corollary 10

For \(j \in \mathbb {N}_0\), let \(\widehat{\mathcal {T}}_{j+1} = \mathrm{refine}(\widehat{\mathcal {T}}_j,\widehat{\mathcal {M}}_j)\) with arbitrary \(\widehat{\mathcal {M}}_j \subseteq \widehat{\mathcal {T}}_j\) and \(\widehat{\mathcal {T}}_0 = \mathcal {T}_0\). Let \(s > 0\) and suppose that the corresponding error estimator \({\widehat{\eta }}_j(\widehat{\phi }_j^\star )\) converges at rate s with respect to the single-step computational costs, i.e.,
$$\begin{aligned} \sup _{j \in \mathbb {N}_0} \big [(\#\widehat{\mathcal {T}}_j) \, \log ^2(1+\#\widehat{\mathcal {T}}_j)\big ]^s \, \widehat{\eta }_j(\widehat{\phi }_j^\star ) < \infty . \end{aligned}$$
Suppose that \(\lambda \) and \(\theta \) satisfy the assumptions of Theorem 8(c). Then, the quasi-errors \(\Lambda _{jk}\) generated by Algorithm 5 converge almost at rate s with respect to the cumulative computational costs, i.e.,
$$\begin{aligned} \sup _{(j',k') \in \mathcal {Q}} \left[ \sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\right] ^{s-\varepsilon } \, \Lambda _{j'k'} < \infty \quad \text {for all } \varepsilon > 0. \end{aligned}$$

4 Numerical experiments

In this section, we present numerical experiments that underpin our theoretical findings. We use lowest-order BEM for direct and indirect formulations in 2D as well as 3D. For each problem, we compare the performance of Algorithm 5 for
  • different values of \(\lambda \in \{1,10^{-1},10^{-2},10^{-3},10^{-4}\}\),

  • different values of \(\theta \in \{0.2,0.4,0.6,0.8,1\}\),

where \(\theta = 1\) corresponds to uniform mesh-refinement. In particular, we monitor the condition numbers of the arising BEM systems for diagonal preconditioning [7], the proposed additive Schwarz preconditioning from Sect. 3.1, and no preconditioning. The 2D implementation is based on our MATLAB implementation Hilbert [1], while the 3D implementation relies on an extension of the BEM++ library [40].

4.1 Slit problem in 2D

Let \(\Gamma :=(-1,1)\times \{0\}\), cf. Fig. 2. We consider
$$\begin{aligned} V \phi ^\star =1\quad \text { on }\Gamma . \end{aligned}$$
The unique exact solution of (34) reads \(\phi ^\star (x,0):=-2x/\sqrt{1-x^2}\). For uniform mesh-refinement, we thus expect a convergence order of \(\mathcal {O}(N^{-1/2})\), while the optimal rate is \(\mathcal {O}(N^{-3/2})\) with respect to the number of elements.
Fig. 2

Example 4.1: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the left end point (left) and for the matrices arising from Algorithm 5 (right)

Fig. 3

Example 4.1: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)

In Fig. 3, we compare Algorithm 5 for different values of \(\theta \) and \(\lambda \) as well as uniform mesh-refinement. Uniform mesh-refinement leads only to the rate \(\mathcal {O}(N^{-1/2})\), while adaptivity, independently of the value of \(\theta \) and \(\lambda \), regains the optimal rate \(\mathcal {O}(N^{-3/2})\). A naive initial guess in Step (vi) of Algorithm 5 (i.e., if \(\phi _{(j+1)0}:=0\)) leads to a logarithmic growth of the number of PCG iterations, whereas for nested iteration \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) (as formulated in Algorithm 5) the number of PCG iterations stays uniformly bounded, cf. Fig. 4. Finally, Fig. 2 shows the condition numbers for an artificial refinement towards the left end point and for Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Fig. 4

Example 4.1: number of PCG iterations in Algorithm 5 for nested iteration (dashed lines), i.e., \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) in Step (vi), and naive initial guess (solid lines), i.e., \(\phi _{(j+1)0}:=0\). We compare fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},10^{-2},10^{-3}\}\)

4.2 Z-shaped domain in 2D

Let \(\Gamma :=\partial \Omega \) be the boundary of the Z-shaped domain with reentrant corner at the origin (0, 0), cf. Fig. 5. The right-hand side is given by \(f=(K+1/2)g\) with the double-layer operator \(K:H^{1/2}(\Gamma )\rightarrow H^{1/2}(\Gamma )\). We note that the weakly-singular integral equation (1) is then equivalent to the Dirichlet problem
$$\begin{aligned} -\Delta u&= 0\quad \text { in }\Omega \qquad \text {subject to}\qquad u = g\quad \text { on }\Gamma . \end{aligned}$$
We prescribe the exact solution in 2D polar coordinates as
$$\begin{aligned} u(x)=r^{4/7}\cos (4\varphi /7)\quad \text {with }x=r(\cos \varphi ,\sin \varphi ). \end{aligned}$$
Then, u admits a generic singularity at the reentrant corner. The exact solution \(\phi ^\star \) of (1) is just the normal derivative of the solution u.
We expect a convergence order of \(\mathcal {O}(N^{-4/7})\) for uniform mesh-refinement, and the optimal rate \(\mathcal {O}(N^{-3/2})\) for the adaptive strategy, which is seen in Fig. 6 for different values of \(\theta \) and \(\lambda \). A naive initial guess in Step (vi) of Algorithm 5 (i.e., if \(\phi _{(j+1)0}:=0\)) leads to a logarithmic growth of the number of PCG iterations, whereas for nested iteration \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) the number of PCG iterations stays uniformly bounded, cf. Fig. 7. Figure 5 shows the condition numbers for an artificial refinement towards the reentrant corner as well as the condition numbers for Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Fig. 5

Example 4.2: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the reentrant corner (top) and for Algorithm 5 (bottom)

Fig. 6

Example 4.2: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)

Fig. 7

Example 4.2: number of PCG iterations in Algorithm 5 for nested iteration (dashed lines), i.e., \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) in Step (vi), and naive initial guess (solid lines), i.e., \(\phi _{(j+1)0}:=0\). We compare fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},10^{-2},10^{-3}\}\)

Fig. 8

Example 4.3: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards one reentrant corner (left) or edge (middle), and for Algorithm 5 (right)

Fig. 9

Example 4.3: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)

4.3 L-shaped domain in 3D

Let \(\Gamma :=\partial \Omega \) be the boundary of the L-shaped domain \(\Omega = (-1,1)^3 \backslash ([-1,0] \times [0,1] \times [-1,1])\), cf. Fig. 8. The right-hand side is given by \(f=(K+1/2)g\) with the double-layer operator \(K:H^{1/2}(\Gamma )\rightarrow H^{1/2}(\Gamma )\). Again, the weakly-singular integral equation (1) is then equivalent to the Dirichlet problem (35). We prescribe the exact solution in 3D cylindrical coordinates as
$$\begin{aligned} u(x)=z\,r^{2/3}\cos (2/3 \, (\varphi -\pi /4) )\quad \text {with }x=(r\cos \varphi ,r\sin \varphi ,z). \end{aligned}$$
Note that u admits a singularity along the reentrant edge. The exact solution \(\phi ^\star \) of (1) is just the normal derivative of the exact solution u.
In Fig. 9, we compare Algorithm 5 with different values of \(\theta \) and \(\lambda \) to uniform mesh-refinement. Uniform mesh-refinement leads only to a reduced rate of \(\mathcal {O}(N^{-1/2})\), while adaptivity, independently of \(\theta \) and \(\lambda \), leads to the improved rate of approximately \(\mathcal {O}(N^{-2/3})\). While one would expect \(\mathcal {O}(N^{-3/4})\) for smooth exact solutions \(\phi ^\star \), this would require anisotropic elements along the reentrant edge for the present solution \(\phi ^\star =\partial _n u\). Since NVB guarantees uniform \(\gamma \)-shape regularity of the meshes, such elements cannot arise, which leads to a reduced optimal rate. Finally, Fig. 8 shows the condition numbers for (diagonal or additive Schwarz) preconditioning and no preconditioning for artificial refinements towards one reentrant corner or the reentrant edge as well as the condition numbers of the matrices arising from Algorithm 5 with \(\lambda = 10^{-3}\) and \(\theta =0.5\).
Fig. 10

Example 4.4: condition numbers of the preconditioned and non-preconditioned Galerkin matrix for an artificial refinement towards the right corner (left), the right edges (middle), and for Algorithm 5 (right)

Fig. 11

Example 4.4: estimator convergence for fixed values of \(\lambda \) (left: \(\lambda =1\), right: \(\lambda =10^{-3}\)) and \(\theta \in \{0.2,0.4,0.6,0.8\}\) (top) and for fixed values of \(\theta \) (left: \(\theta =0.4\), right: \(\theta =0.6\)) and \(\lambda \in \{1,10^{-1},\ldots ,10^{-4}\}\) (bottom)

4.4 Screen problem in 3D

Let \(\Gamma :=((-1,1)^2 {\setminus } [0,1]^2) \times \{0\}\), rotated by \(3\pi /4\), cf. Fig. 10. We consider the weakly-singular integral equation \(V \phi ^\star =1\) on \(\Gamma \). The exact solution \(\phi ^\star \in \widetilde{H}^{-1/2}(\Gamma )\) is unknown.

For the numerical solution of the Galerkin system, we employ PCG with the additive Schwarz preconditioner from Sect. 3.1. We note that Theorem 3 does not cover this setting: the proposed additive Schwarz preconditioner appears to be optimal in practice (cf. Fig. 10), while the mathematical optimality proof remains open for screens.

In Fig. 11, we compare Algorithm 5 with different values for \(\theta \) and \(\lambda \) to uniform mesh-refinement. We see that uniform mesh-refinement leads only to a reduced rate of \(\mathcal {O}(N^{-1/4})\), while adaptivity, independently of \(\theta \) and \(\lambda \), leads to the improved rate of approximately \(\mathcal {O}(N^{-1/2})\).
Fig. 12

To underline the quasi-optimal computational complexity of Algorithm 5, we plot the error estimator \(\eta _j(\phi _{j\underline{k}})\) in the different experiments over the cumulative quantities \(\sum _{(j,k) \le (j',k')} \#\mathcal {T}_j\), \(\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log (\#\mathcal {T}_j)\) and \(\sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(\#\mathcal {T}_j)\) for \(\theta = 0.4\) and \(\lambda \in \{1,10^{-3}\}\)

4.5 Computational complexity

With Fig. 12, we aim to underpin the almost optimal computational complexity of Algorithm 5 (see Corollary 10). To this end, we plot the error estimator \(\eta _j(\phi _{jk})\) over the cumulative sums
$$\begin{aligned} \sum _{(j,k) \le (j',k')} \mathrm{cost}(\mathcal {T}_j) \quad \text {with} \quad \mathrm{cost}(\mathcal {T}_j) \in \big \{ \#\mathcal {T}_j \,,\, (\#\mathcal {T}_j)\, \log (\#\mathcal {T}_j) \,,\, (\#\mathcal {T}_j)\, \log ^2(\#\mathcal {T}_j) \big \} \end{aligned}$$
for \(\theta = 0.4\) and \(\lambda \in \{1,10^{-3}\}\). The negative impact of the logarithmic terms on the (preasymptotic) convergence rate is clearly visible.
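For illustration, the cumulative cost sums can be computed as follows (a sketch; the list `num_elements` is an assumed input holding \(\#\mathcal {T}_j\) for the PCG steps in the order of Remark 7):

```python
import math

def cumulative_costs(num_elements, cost):
    """Running sums of cost(#T_j) over all PCG steps up to (j', k')."""
    sums, acc = [], 0.0
    for n in num_elements:
        acc += cost(n)
        sums.append(acc)
    return sums

# the three cost measures used in Fig. 12
linear = lambda n: float(n)
logcost = lambda n: n * math.log(n)
log2cost = lambda n: n * math.log(n) ** 2

# e.g. two PCG steps on a mesh with 100 elements, then one on 200 elements:
costs = cumulative_costs([100, 100, 200], linear)  # [100.0, 200.0, 400.0]
```

Plotting the estimator over these sums (instead of over \(\#\mathcal {T}_j\)) makes the logarithmic degradation of the rate in Corollary 10 visible.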

5 Proof of Theorem 3 (optimal multilevel preconditioner)

For \(d = 2\), we refer to [23, 26] and thus focus only on \(d = 3\) and \(\Gamma = \partial \Omega \). Due to our additional assumption, \(\mathcal {T}_0 = \widehat{\mathcal {T}}_0|_\Gamma \) is the restriction of a conforming simplicial triangulation \(\widehat{\mathcal {T}}_0\) of \(\Omega \) to the boundary \(\Gamma \). Moreover, 2D NVB refinement of \(\mathcal {T}_0\) (on the boundary \(\Gamma \)) is a special case of 3D NVB refinement of \(\widehat{\mathcal {T}}_0\) (in the volume \(\Omega \)) plus restriction to the boundary; see, e.g., [42]. Hence, each mesh \(\mathcal {T}_\bullet \in \mathbb {T} = \mathrm{refine}(\mathcal {T}_0)\) is the restriction of a conforming NVB refinement \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}} := \mathrm{refine}(\widehat{\mathcal {T}}_0)\), i.e., \(\mathcal {T}_\bullet = \widehat{\mathcal {T}}_\bullet |_\Gamma \). Throughout, let \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\) be the coarsest extension of \(\mathcal {T}_\bullet \in \mathbb {T}\). Recall that NVB is a binary refinement rule. Therefore, \(\mathcal {T}_\circ \in \mathrm{refine}(\mathcal {T}_\bullet )\) also implies that \(\widehat{\mathcal {T}}_\circ \in \mathrm{refine}(\widehat{\mathcal {T}}_\bullet )\). Finally, we note that all triangulations \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\) are uniformly \(\gamma \)-shape regular, i.e.,
$$\begin{aligned} \max _{{\widehat{T}} \in \widehat{\mathcal {T}}_\bullet } \frac{\mathrm{diam}({\widehat{T}})}{|\widehat{T}|^{1/3}} \le \gamma < \infty , \end{aligned}$$
where \(\gamma \) depends only on \(\widehat{\mathcal {T}}_0\).

Our argument adapts ideas from [32], where a subspace decomposition for the lowest-order Nédélec space \(\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) (see, e.g., [34]) in \(\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )\) implies a decomposition of the corresponding discrete trace space. While the original idea dates back to [39], a concise summary of the argument can be found in [32, Section 2].

Remark 11

  (i) Our proof is based on the construction of an extension operator from \(\mathcal {P}_*^0(\mathcal {T}_\bullet )\) to \(\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\); see Lemma 13 below. It is not clear if such an operator can be constructed for the case \(\Gamma \subsetneqq \partial \Omega \).

  (ii) In [31], a subspace decomposition of the lowest-order Raviart–Thomas space \(\varvec{\mathcal {R}}\varvec{\mathcal {T}}^0(\widehat{\mathcal {T}}_\bullet )\) (see, e.g., [45]) in \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\) implies a decomposition of the corresponding normal trace space \(\mathcal {P}^0(\mathcal {T}_\bullet )\). Due to different scaling properties of the Raviart–Thomas basis functions [in the \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\) norm] and their normal trace (in the \(H^{-1/2}(\Gamma )\) norm), this argument does not apply in our case.


5.1 Discrete spaces and extensions

Let \(\widehat{\mathcal {E}}_\bullet \) (resp. \(\widehat{\mathcal {N}}_\bullet \)) denote the set of all edges (resp. all nodes) of \(\widehat{\mathcal {T}}_\bullet \in \widehat{\mathbb {T}}\). For each node \(\varvec{x}\in \widehat{\mathcal {N}}_\bullet \), let \(\eta _{\bullet ,\varvec{x}} \in \mathcal {S}^1(\widehat{\mathcal {T}}_\bullet )\) be the corresponding hat function, i.e., \(\eta _{\bullet ,\varvec{x}}\) is \(\widehat{\mathcal {T}}_\bullet \)-piecewise affine and globally continuous with \(\eta _{\bullet ,\varvec{x}}(\varvec{y}) = \delta _{\varvec{x}\varvec{y}}\) for all \(\varvec{x}, \varvec{y} \in \widehat{\mathcal {N}}_\bullet \). For \(E \in \widehat{\mathcal {E}}_\bullet \), let \({\varvec{u}}_{\bullet ,E} \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) denote the corresponding Nédélec basis function, i.e., for \(K \in \widehat{\mathcal {T}}_\bullet \) with \(E = \mathrm{conv}\{\varvec{x},\varvec{y}\} \subset \partial K\), it holds that
$$\begin{aligned} {\varvec{u}}_{\bullet ,E}|_K = C(\eta _{\bullet ,\varvec{x}}\nabla \eta _{\bullet ,\varvec{y}}-\eta _{\bullet ,\varvec{y}}\nabla \eta _{\bullet ,\varvec{x}}), \end{aligned}$$
where \(C > 0\) is chosen such that \(\int _{E'} {\varvec{u}}_{\bullet ,E} \,ds = |E| \, \delta _{E E'}\) for all \(E,E'\in \widehat{\mathcal {E}}_\bullet \). Scaling arguments yield the next lemma. The proof follows the lines of [32, Lemma 5.7].

Lemma 12

For \(E\in \mathcal {E}_\bullet \), recall the Haar function \(\varphi _{\bullet ,E} \in \mathcal {P}^0(\mathcal {T}_\bullet )\) from (21). Let \({\varvec{u}}_{\bullet ,E} \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) denote the corresponding Nédélec basis function; see (38). Then,
$$\begin{aligned} \varphi _{\bullet ,E}&= \varvec{{\text {curl}}}\, {\varvec{u}}_{\bullet ,E} \cdot \varvec{n}|_\Gamma \text { and }\nonumber \\ C^{-1}\Vert \varphi _{\bullet ,E}\Vert _{H^{-1/2}(\Gamma )}&\le \Vert {\varvec{u}}_{\bullet ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \le C \, \Vert \varphi _{\bullet ,E}\Vert _{H^{-1/2}(\Gamma )}, \end{aligned}$$
where \(C>0\) depends only on \(\Omega \) and the \(\gamma \)-shape regularity of \(\widehat{\mathcal {T}}_\bullet \).


Proof

A straightforward calculation shows that \(\varphi _{\bullet ,E} = \varvec{{\text {curl}}}\, {\varvec{u}}_{\bullet ,E} \cdot \varvec{n}|_\Gamma \). Then, continuity of the normal trace operator yields that
$$\begin{aligned} \Vert \varphi _{\bullet ,E}\Vert _{H^{-1/2}(\Gamma )} \lesssim \Vert \varvec{{\text {curl}}}\,{\varvec{u}}_{\bullet ,E}\Vert _{\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )} = \Vert \varvec{{\text {curl}}}\,{\varvec{u}}_{\bullet ,E}\Vert _{L^2(\Omega )} \le \Vert {\varvec{u}}_{\bullet ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}. \end{aligned}$$
Furthermore, scaling arguments prove that
$$\begin{aligned} \Vert {\varvec{u}}_{\bullet ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}&\simeq \Vert \varvec{{\text {curl}}}\,{\varvec{u}}_{\bullet ,E}\Vert _{L^2(\Omega )}\simeq |E|^{1/2} \\&\simeq |E|^{1/2} \Vert \varphi _{\bullet ,E}\Vert _{L^2(\Gamma )} \lesssim \Vert \varphi _{\bullet ,E}\Vert _{H^{-1/2}(\Gamma )}, \end{aligned}$$
where we have finally applied an inverse estimate. This concludes the proof. \(\square \)

The following lemma holds for (simply) connected Lipschitz domains \(\Omega \) and follows essentially from [5]. Recall \(\mathcal {P}^0_*(\mathcal {T}_\bullet )\) from (22).

Lemma 13

There exists a linear operator \(\varvec{E}_\bullet : \mathcal {P}_*^0(\mathcal {T}_\bullet ) \rightarrow \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) such that
$$\begin{aligned} \varvec{{\text {curl}}}\,(\varvec{E}_\bullet \psi _\bullet ) \cdot \varvec{n}|_\Gamma&= \psi _\bullet \text { and } \Vert \varvec{E}_\bullet \psi _\bullet \Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \nonumber \\&\le C \, \Vert \psi _\bullet \Vert _{H^{-1/2}(\Gamma )} \text { for all } \psi _\bullet \in \mathcal {P}_*^0(\mathcal {T}_\bullet ). \end{aligned}$$
The constant \(C > 0\) depends only on \(\Omega \) and \(\gamma \)-shape regularity of \(\widehat{\mathcal {T}}_\bullet \).


Proof

Let \(\psi _\bullet \in \mathcal {P}_*^0(\mathcal {T}_\bullet )\). First, [5, Theorem 2.1] provides \({\varvec{\sigma }}_\bullet \in \varvec{\mathcal {R}}\varvec{\mathcal {T}}^0(\widehat{\mathcal {T}}_\bullet )\) with
$$\begin{aligned} {\varvec{\sigma }}_\bullet \cdot \varvec{n}|_\Gamma = \psi _\bullet , \quad \varvec{{\text {div}}}\,{\varvec{\sigma }}_\bullet = 0, \quad \text {and} \quad \Vert {\varvec{\sigma }}_\bullet \Vert _{\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )} \lesssim \Vert \psi _\bullet \Vert _{H^{-1/2}(\partial \Omega )}. \end{aligned}$$
Then, [5, Lemma 4.3] provides \(\varvec{E}_\bullet \psi _\bullet := {\varvec{v}}_\bullet \in \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\) such that
$$\begin{aligned} \varvec{{\text {curl}}}\,{\varvec{v}}_\bullet = {\varvec{\sigma }}_\bullet \quad \text {and} \quad \Vert {\varvec{v}}_\bullet \Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}\lesssim \Vert {\varvec{\sigma }}_\bullet \Vert _{\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )}. \end{aligned}$$
Combining these results, we conclude the proof. \(\square \)

5.2 Abstract additive Schwarz preconditioners

Let \(\mathcal {X}\) denote a finite-dimensional Hilbert space with norm \(\Vert \cdot \Vert _{\mathcal {X}}\) and subspace decomposition
$$\begin{aligned} \mathcal {X} = \sum _{i\in \mathcal I} \mathcal {X}_i, \end{aligned}$$
where \(\mathcal {I}\) is a finite index set. The additive Schwarz operator is given by \(\mathcal {S} = \sum _{i\in \mathcal I} \mathcal {S}_i\), where \(\mathcal {S}_i\) is the \(\mathcal {X}\)-orthogonal projection onto \(\mathcal {X}_i\), i.e.,
$$\begin{aligned} \langle \mathcal {S}_i x\,,\,x_i\rangle _{\mathcal {X}} = \langle x\,,\,x_i\rangle _{\mathcal {X}} \quad \text {for all } x_i\in \mathcal {X}_i \text { and all }x\in \mathcal {X}, \end{aligned}$$
where \(\langle \cdot \,,\,\cdot \rangle _{\mathcal {X}}\) denotes the scalar product on \(\mathcal {X}\). Then, the operator \(\mathcal {S}\) is positive definite and symmetric (with respect to \(\langle \cdot \,,\,\cdot \rangle _{\mathcal {X}}\)). Define the multilevel norm
$$\begin{aligned} |||\,x\,|||_\mathcal {X}^2 := \inf \left\{ {\sum _{i\in \mathcal I} \Vert x_i\Vert _{\mathcal {X}}^2}:{x=\sum _{i\in \mathcal {I}} x_i \quad \text {with } x_i\in \mathcal {X}_i \text { for all } i \in \mathcal {I}}\right\} . \end{aligned}$$
It is proved, e.g., in [38, Theorem 16] that \(\langle \mathcal {S}^{-1}x\,,\,x\rangle _{\mathcal {X}} = |||\,x\,|||_\mathcal {X}^2\). Let \(C > 0\). If
$$\begin{aligned} C^{-1} \Vert x\Vert _{\mathcal {X}} \le |||\,x\,|||_\mathcal {X} \le C \, \Vert x\Vert _{\mathcal {X}} \quad \text {for all } x \in \mathcal {X}, \end{aligned}$$
then the extreme eigenvalues of \(\mathcal {S}^{-1}\) (and hence those of \(\mathcal {S}\)) are bounded (from above and below). In particular, the additive Schwarz operator \(\mathcal {S}\) is optimal in the sense that its condition number (ratio of largest and smallest eigenvalues) depends only on \(C>0\).

Let \(\varvec{S}\) denote the matrix representation of \(\mathcal {S}\). Then, the norm equivalence from above and the latter observations imply that the condition number of \(\varvec{S}\) is bounded. The abstract theory on additive Schwarz operators given in [44, Chapter 2] shows that \(\varvec{S}\) has the form \(\varvec{S} = \varvec{P}^{-1}\varvec{A}\), where \(\varvec{A}\) is the Galerkin matrix of \(\langle \cdot \,,\,\cdot \rangle _\mathcal {X}\). Therefore, boundedness of the condition number of \(\varvec{S}\) implies optimality of the preconditioner \(\varvec{P}^{-1}\).

We briefly discuss the matrix representation (24) of the additive Schwarz preconditioner \(\varvec{P}^{-1}\). Following [44, Chapter 2], let \(\varvec{A}_i\) denote the Galerkin matrix of \(\langle \cdot \,,\,\cdot \rangle _\mathcal {X}\) restricted to \(\mathcal {X}_i\), and let \(\varvec{I}_i\) denote the matrix that realizes the embedding \(\mathcal {X}_i\rightarrow \mathcal {X}\). We consider the matrix representation of \(\mathcal {S}_i : \mathcal {X}\rightarrow \mathcal {X}_i\subset \mathcal {X}\). Let \(x\in \mathcal {X}\) with coordinate vector \(\varvec{x}\), and let \(x_i\in \mathcal {X}_i\) be arbitrary with coordinate vector \(\varvec{x}_i\). The defining relation
$$\begin{aligned} \langle \mathcal {S}_i x\,,\,x_i\rangle _{\mathcal {X}} = \langle x\,,\,x_i\rangle _{\mathcal {X}} \quad \text {for all } x_i\in \mathcal {X}_i \end{aligned}$$
of \(\mathcal {S}_i\) then reads in matrix-vector form (with \(\varvec{S}_i\) being the matrix representation of \(\mathcal {S}_i\)) as
$$\begin{aligned} \varvec{x}_i\cdot (\varvec{A}_i \varvec{S}_i\varvec{x}) = (\varvec{I}_i\varvec{x}_i)\cdot (\varvec{A}\varvec{x}) \quad \text {for all coefficient vectors } \varvec{x}_i, \end{aligned}$$
or equivalently
$$\begin{aligned} \varvec{A}_i \varvec{S}_i\varvec{x} = \varvec{I}_i^{T}\varvec{A}\varvec{x}. \end{aligned}$$
Since \(\varvec{A}_i\) is invertible, we have that
$$\begin{aligned} \varvec{S}_i = \varvec{A}_i^{-1} \varvec{I}_i^T\varvec{A}. \end{aligned}$$
Note that the operator \(\mathcal {S}_i\) maps into \(\mathcal {X}_i\), so that \(\varvec{S}_i\varvec{x}\) is a coefficient vector with respect to \(\mathcal {X}_i\). We therefore apply the embedding \(\varvec{I}_i\) and obtain the representation
$$\begin{aligned} \varvec{S} = \varvec{P}^{-1}\varvec{A}, \quad \text {where} \quad \varvec{P}^{-1} = \sum _{i\in \mathcal I} \varvec{I}_i \varvec{A}_i^{-1} \varvec{I}_i^T. \end{aligned}$$
To finally prove (24), note that for one-dimensional subspaces \(\mathcal {X}_i\), the matrix \(\varvec{A}_i\) reduces to the corresponding diagonal entry of \(\varvec{A}\). This yields the matrix representation (24).
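To illustrate the assembly, the following minimal Python sketch (a hypothetical \(3\times 3\) Galerkin matrix; all names are ours) builds \(\varvec{P}^{-1} = \sum _i \varvec{I}_i \varvec{A}_i^{-1} \varvec{I}_i^T\) for a single-level decomposition into one-dimensional coordinate subspaces, where the preconditioner reduces to the inverse diagonal of \(\varvec{A}\), i.e., a Jacobi preconditioner. The multilevel decomposition of the paper is not reproduced here.

```python
# Sketch: matrix form of the additive Schwarz preconditioner
# P^{-1} = sum_i I_i A_i^{-1} I_i^T for one-dimensional subspaces
# X_i = span{e_i}. Example matrix is hypothetical.

def asm_preconditioner(A):
    """Assemble P^{-1} for the decomposition into 1D coordinate subspaces."""
    n = len(A)
    P_inv = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # A_i is the 1x1 Galerkin matrix of X_i, i.e. the diagonal entry
        # A[i][i]; the embedding I_i places its inverse at position (i, i).
        P_inv[i][i] = 1.0 / A[i][i]
    return P_inv

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]  # symmetric positive definite toy example
P_inv = asm_preconditioner(A)
```

For general (higher-dimensional) subspaces, `A_i` would be the local Galerkin block and `1.0 / A[i][i]` its inverse, applied between the restriction \(\varvec{I}_i^T\) and the embedding \(\varvec{I}_i\).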

5.3 Subspace decomposition of \(\varvec{\varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )}\) in \(\varvec{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}\)

The following result is taken from [33, Theorem 4.1]; see also the references therein. In particular, we note that their proof requires the assumption that \(\Omega \) is simply connected.

Proposition 14

Let \(\mathcal {Y}_\bullet := \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_\bullet )\), \(\mathcal {Y}_{\bullet ,E} := {\text {span}}\{{\varvec{u}}_{\bullet ,E}\}\), \(\mathcal {Y}_{\bullet ,\varvec{x}} := {\text {span}}\{\nabla \eta _{\bullet ,\varvec{x}}\}\), and
$$\begin{aligned} \widehat{\mathcal {E}}_\ell ^\star&:= (\widehat{\mathcal {E}}_\ell {\setminus } \widehat{\mathcal {E}}_{\ell -1}) \cup \big \{E\in \widehat{\mathcal {E}}_\ell \,:\,{\text {supp}}\,{\varvec{u}}_{\ell ,E}\subsetneqq {\text {supp}}\,{\varvec{u}}_{\ell -1,E}\big \}, \\ \widehat{\mathcal {N}}_\ell ^\star&:= (\widehat{\mathcal {N}}_\ell {\setminus }\widehat{\mathcal {N}}_{\ell -1}) \cup \big \{\varvec{x}\in \widehat{\mathcal {N}}_\ell \,:\,{\text {supp}}\,\eta _{\ell ,\varvec{x}}\subsetneqq {\text {supp}}\,\eta _{\ell -1,\varvec{x}}\big \}. \end{aligned}$$
Then, it holds that
$$\begin{aligned} \mathcal {Y}_L = \mathcal {Y}_0 + \sum _{\ell =1}^L \left( \sum _{E\in \widehat{\mathcal {E}}_\ell ^\star } \mathcal {Y}_{\ell ,E} + \sum _{\varvec{x}\in \widehat{\mathcal {N}}_\ell ^\star } \mathcal {Y}_{\ell ,\varvec{x}}\right) . \end{aligned}$$
Moreover, it holds that
$$\begin{aligned} C^{-1}\Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \le |||\,{\varvec{v}}\,|||_{\mathcal {Y}_L} \le C \, \Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \quad \text {for all } {\varvec{v}} \in \mathcal {Y}_L, \end{aligned}$$
where \(C > 0\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\). \(\square \)

5.4 Subspace decomposition of \(\varvec{\mathcal {P}^0(\mathcal {T}_\bullet )}\) in \(\varvec{H^{-1/2}(\Gamma )}\)

It remains to prove the following proposition to conclude the proof of Theorem 3.

Proposition 15

The multilevel norm \(|||\,\cdot \,|||_{\mathcal {X}_L}\) associated with the decomposition (23) satisfies the equivalence
$$\begin{aligned} C^{-1}\Vert \psi \Vert _{H^{-1/2}(\Gamma )} \le |||\,\psi \,|||_{\mathcal {X}_L} \le C \, \Vert \psi \Vert _{H^{-1/2}(\Gamma )} \quad \text {for all } \psi \in \mathcal {P}^0(\mathcal {T}_L), \end{aligned}$$
where \(C>0\) depends only on \(\Omega \) and \(\widehat{\mathcal {T}}_0\).
Proof of lower estimate in (44). Let \(\psi \in \mathcal {P}^0(\mathcal {T}_L)\) with arbitrary decomposition
$$\begin{aligned} \psi = \psi _0 + \psi _*, \quad \psi _*=\sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \psi _{\ell ,E} \quad \text {with} \quad \psi _0\in \mathcal {X}_0 \text { and } \psi _{\ell ,E} \in \mathcal {X}_{\ell ,E}. \end{aligned}$$
Note that \(\mathcal {X}_{\ell ,E} \subset \mathcal {P}^0_*(\mathcal {T}_\ell )\). Recall the extension operator \(\varvec{E}_\ell \) from Lemma 13. Define \({\varvec{v}}_* := \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \varvec{E}_\ell \psi _{\ell ,E} \in \mathcal {Y}_L\). Then, \(\varvec{{\text {curl}}}\,{\varvec{v}}_* \cdot \varvec{n}|_\Gamma = \psi _*\) and hence
$$\begin{aligned} \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )} \lesssim \Vert \varvec{{\text {curl}}}\,{\varvec{v}}_*\Vert _{\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )} = \Vert \varvec{{\text {curl}}}\,{\varvec{v}}_*\Vert _{L^2(\Omega )} {\mathop {\lesssim }\limits ^{(43)}}|||\,{\varvec{v}}_*\,|||_{\mathcal {Y}_L} \end{aligned}$$
from the continuity of the trace operator in \(\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )\). Moreover, the triangle inequality, the definition of the multilevel norm \(|||\,\cdot \,|||_{\mathcal {Y}_L}\), and Lemma 13 show that
$$\begin{aligned} \Vert \psi \Vert _{H^{-1/2}(\Gamma )}^2 &\lesssim \Vert \psi _0\Vert _{H^{-1/2}(\Gamma )}^2 + \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )}^2 \lesssim \Vert \psi _0\Vert _{H^{-1/2}(\Gamma )}^2 + |||\,{\varvec{v}}_*\,|||_{\mathcal {Y}_L}^2 \\ &{\mathop {\le }\limits ^{(41)}} \Vert \psi _0\Vert _{H^{-1/2}(\Gamma )}^2 + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \Vert \varvec{E}_\ell \psi _{\ell ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 \\ &{\mathop {\lesssim }\limits ^{(40)}} \Vert \psi _0\Vert _{H^{-1/2}(\Gamma )}^2 + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \Vert \psi _{\ell ,E}\Vert _{H^{-1/2}(\Gamma )}^2. \end{aligned}$$
Taking the infimum over all possible decompositions (45), we derive the lower estimate in (44) by definition (41) of the multilevel norm. \(\square \)
Proof of upper estimate in (44). Let \(\psi \in \mathcal {P}^0(\mathcal {T}_L)\). Define \(\psi _{00} := \langle \psi \,,\,1\rangle _\Gamma /|\Gamma |\) and \(\psi _* := \psi - \psi _{00} \in \mathcal {P}^0_*(\mathcal {T}_L)\). Note that
$$\begin{aligned} \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )} \le \Vert \psi \Vert _{H^{-1/2}(\Gamma )} + \Vert \psi _{00}\Vert _{H^{-1/2}(\Gamma )} \le \big ( 1 + \Vert 1/|\Gamma |\Vert _{H^{1/2}(\Gamma )} \big ) \, \Vert \psi \Vert _{H^{-1/2}(\Gamma )}. \end{aligned}$$
With Lemma 13, choose \({\varvec{v}} = \varvec{E}_L \psi _* \in \mathcal {Y}_L = \varvec{\mathcal {N}}\varvec{\mathcal {D}}^1(\widehat{\mathcal {T}}_L)\). Note that \(\psi _* = \varvec{{\text {curl}}}\, {\varvec{v}} \cdot \varvec{n}|_\Gamma \) and \(\Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )} \lesssim \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )}\). The upper bound in Proposition 14 further provides \({\varvec{v}}_0 \in \mathcal {Y}_0\), \({\varvec{v}}_{\ell ,E} \in \mathcal {Y}_{\ell ,E}\), and \({\varvec{v}}_{\ell ,\varvec{x}} \in \mathcal {Y}_{\ell ,\varvec{x}}\) such that
$$\begin{aligned} {\varvec{v}} = {\varvec{v}}_0 + \sum _{\ell =1}^L\left( \sum _{E\in \widehat{\mathcal {E}}_\ell ^\star } {\varvec{v}}_{\ell ,E} + \sum _{\varvec{x}\in \widehat{\mathcal {N}}_\ell ^\star } {\varvec{v}}_{\ell ,\varvec{x}}\right) \end{aligned}$$
as well as
$$\begin{aligned}&\Vert {\varvec{v}}_0\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 + \sum _{\ell =1}^L \left( \sum _{E\in \widehat{\mathcal {E}}_\ell ^\star } \Vert {\varvec{v}}_{\ell ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 \right. \nonumber \\&\left. \quad + \sum _{\varvec{x}\in \widehat{\mathcal {N}}_\ell ^\star } \Vert {\varvec{v}}_{\ell ,\varvec{x}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 \right) {\mathop {\lesssim }\limits ^{(43)}}\Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2. \end{aligned}$$
Observe that \(\varvec{{\text {curl}}}\, {\varvec{v}}_{\ell ,\varvec{x}} = 0\), since \({\varvec{v}}_{\ell ,\varvec{x}} \in \mathcal {Y}_{\ell ,\varvec{x}} = {\text {span}}\{\nabla \eta _{\ell ,\varvec{x}}\}\). Thus, we see that
$$\begin{aligned} \psi&= \psi _{00} + \psi _* = \psi _{00} + \varvec{{\text {curl}}}\, {\varvec{v}} \cdot \varvec{n}|_\Gamma \\&= \psi _{00} + \varvec{{\text {curl}}}\, {\varvec{v}}_0 \cdot \varvec{n}|_\Gamma + \sum _{\ell =1}^L \sum _{E\in \widehat{\mathcal {E}}_\ell ^\star } \varvec{{\text {curl}}}\, {\varvec{v}}_{\ell ,E} \cdot \varvec{n}|_\Gamma . \end{aligned}$$
Due to the restriction \((\cdot )|_\Gamma \) to the boundary, the latter sum reduces to a sum over all \(E \in \mathcal {E}_\ell ^\star \) (instead of all \(E \in \widehat{\mathcal {E}}_\ell ^\star \)). Note that \(\psi _{*0} := \varvec{{\text {curl}}}\, {\varvec{v}}_0 \cdot \varvec{n}|_\Gamma \in \mathcal {X}_0 = \mathcal {P}^0(\mathcal {T}_0)\) and hence \(\psi _{00} + \psi _{*0} \in \mathcal {X}_0\). Note that
$$\begin{aligned}&\Vert \psi _{00} + \psi _{*0}\Vert _{H^{-1/2}(\Gamma )} \le \Vert \psi _{00}\Vert _{H^{-1/2}(\Gamma )} + \Vert \varvec{{\text {curl}}}\, {\varvec{v}}_0 \cdot \varvec{n}\Vert _{H^{-1/2}(\Gamma )} \nonumber \\&\quad \lesssim \Vert \psi \Vert _{H^{-1/2}(\Gamma )} + \Vert \varvec{{\text {curl}}}\, {\varvec{v}}_0\Vert _{\varvec{H}(\varvec{{\text {div}}}\,;\,\Omega )} = \Vert \psi \Vert _{H^{-1/2}(\Gamma )} + \Vert \varvec{{\text {curl}}}\, {\varvec{v}}_0\Vert _{L^2(\Omega )}. \end{aligned}$$
Due to Lemma 12 and \({\varvec{v}}_{\ell ,E} \in \mathcal {Y}_{\ell ,E} = {\text {span}}\{{\varvec{u}}_{\ell ,E}\}\), it holds that \(\psi _{\ell ,E} := \varvec{{\text {curl}}}\, {\varvec{v}}_{\ell ,E} \cdot \varvec{n}|_\Gamma \in \mathcal {X}_{\ell ,E} = {\text {span}}\{\varphi _{\ell ,E}\}\) with \(\Vert \psi _{\ell ,E}\Vert _{H^{-1/2}(\Gamma )} \simeq \Vert {\varvec{v}}_{\ell ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}\). We hence see that
$$\begin{aligned} \psi = (\psi _{00} + \psi _{*0}) + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \psi _{\ell ,E} \end{aligned}$$
and therefore
$$\begin{aligned} |||\,\psi \,|||_{\mathcal {P}^0(\mathcal {T}_L)}^2&{\mathop {\le }\limits ^{(41)}}\Vert \psi _{00} + \psi _{*0}\Vert _{H^{-1/2}(\Gamma )}^2 + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \Vert \psi _{\ell ,E}\Vert _{H^{-1/2}(\Gamma )}^2 \\&{\mathop {\lesssim }\limits ^{(48)}}\Vert \psi \Vert _{H^{-1/2}(\Gamma )}^2 + \Vert {\varvec{v}}_0\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 + \sum _{\ell =1}^L \sum _{E\in \mathcal {E}_\ell ^\star } \Vert {\varvec{v}}_{\ell ,E}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2 \\&{\mathop {\lesssim }\limits ^{(47)}}\Vert \psi \Vert _{H^{-1/2}(\Gamma )}^2 + \Vert {\varvec{v}}\Vert _{\varvec{H}(\varvec{{\text {curl}}}\,;\,\Omega )}^2\\&{\mathop {\lesssim }\limits ^{(40)}}\Vert \psi \Vert _{H^{-1/2}(\Gamma )}^2 + \Vert \psi _*\Vert _{H^{-1/2}(\Gamma )}^2 {\mathop {\lesssim }\limits ^{(46)}}\Vert \psi \Vert _{H^{-1/2}(\Gamma )}^2. \end{aligned}$$
This concludes the proof. \(\square \)

6 Proof of Theorem 8 (rate optimality of adaptive algorithm)

In the spirit of [11], we give an abstract analysis, where the precise problem and discretization (i.e., Galerkin BEM with piecewise constants for the weakly-singular integral equation for the 2D and 3D Laplacian) enter only through certain properties of the error estimator. These properties are explicitly stated in Sect. 6.1, before Sect. 6.2 provides general PCG estimates. The remaining sections (Sects. 6.3–6.6) then only exploit these abstract frameworks to prove Theorem 8 and Corollary 10.

6.1 Axioms of adaptivity

In this section, we recall some structural properties of the residual error estimator (14) which have been identified in [11] to be important and sufficient for the numerical analysis of Algorithm 5. For the proof, we refer to [19, 24]. We only note that (A4) already implies (A3) with \(C_{\mathrm{rel}} \le C_{\mathrm{drl}}\) in general; see [11, Section 3.3].

For ease of notation, let \(\mathcal {T}_0\) be the fixed initial mesh of Algorithm 5. Let \(\mathbb {T}:=\mathrm{refine}(\mathcal {T}_0)\) be the set of all possible meshes that can be obtained by successively refining \(\mathcal {T}_0\).

Proposition 16

There exist constants \(C_{\mathrm{stb}},C_{\mathrm{red}},C_{\mathrm{rel}},C_{\mathrm{drl}}>0\) and \(0<q_{\mathrm{red}}<1\) which depend only on \(\Gamma \) and the \(\gamma \)-shape regularity, such that the following properties (A1)–(A4) hold:
(A1) Stability on non-refined element domains. For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), arbitrary discrete functions \(v_\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(v_\circ \in \mathcal {P}^0(\mathcal {T}_\circ )\), and an arbitrary set \(\mathcal {U}_\bullet \subseteq \mathcal {T}_\bullet \cap \mathcal {T}_\circ \) of non-refined elements, it holds that
$$\begin{aligned} |\eta _\circ (\mathcal {U}_\bullet ,v_\circ )-\eta _\bullet (\mathcal {U}_\bullet ,v_\bullet )| \le C_{\mathrm{stb}}\,|||\,v_\circ -v_\bullet \,|||. \end{aligned}$$
(A2) Reduction on refined element domains. For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), and arbitrary \(v_\bullet \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(v_\circ \in \mathcal {P}^0(\mathcal {T}_\circ )\), it holds that
$$\begin{aligned} \eta _\circ (\mathcal {T}_\circ \backslash \mathcal {T}_\bullet ,v_\circ )^2 \le q_{\mathrm{red}}\,\eta _\bullet (\mathcal {T}_\bullet \backslash \mathcal {T}_\circ ,v_\bullet )^2 + C_{\mathrm{red}}\,|||\,v_\circ -v_\bullet \,|||^2. \end{aligned}$$
(A3) Reliability. For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\), the error of the exact discrete solution \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) of (11) is controlled by
$$\begin{aligned} |||\,\phi ^\star -\phi _\bullet ^\star \,||| \le C_{\mathrm{rel}}\,\eta _\bullet (\phi _\bullet ^\star ). \end{aligned}$$
(A4) Discrete reliability. For each mesh \(\mathcal {T}_\bullet \in \mathbb {T}\) and all refinements \(\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet )\), there exists a set \(\mathcal {R}_{\bullet ,\circ }\subseteq \mathcal {T}_\bullet \) with \(\mathcal {T}_\bullet \backslash \mathcal {T}_\circ \subseteq \mathcal {R}_{\bullet ,\circ }\) as well as \(\#\mathcal {R}_{\bullet ,\circ } \le C_{\mathrm{drl}}\,\#(\mathcal {T}_\bullet \backslash \mathcal {T}_\circ )\) such that the difference of \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\) and \(\phi _\circ ^\star \in \mathcal {P}^0(\mathcal {T}_\circ )\) is controlled by
$$\begin{aligned} |||\,\phi _\circ ^\star -\phi _\bullet ^\star \,||| \le C_{\mathrm{drl}}\,\eta _\bullet (\mathcal {R}_{\bullet ,\circ },\phi _\bullet ^\star ). \end{aligned}$$
\(\square \)

6.2 Energy estimates for the PCG solver

This section collects some auxiliary results which rely on the use of PCG and, in particular, PCG with an optimal preconditioner. We first note the following Pythagoras identity.

Lemma 17

Let \(\varvec{A}_\bullet , \varvec{P}_\bullet \in \mathbb {R}^{N \times N}\) be symmetric and positive definite, \(\varvec{b}_\bullet \in \mathbb {R}^N\), and \(\varvec{x}_\bullet ^\star := \varvec{A}_\bullet ^{-1} \varvec{b}_\bullet \). Given an initial guess \(\varvec{x}_{\bullet 0} \in \mathbb {R}^N\), let \(\varvec{x}_{\bullet k}\) denote the iterates of the PCG algorithm.

There holds the Pythagoras identity
$$\begin{aligned} |||\,\phi _\bullet ^\star -\phi _{\bullet (k+1)}\,|||^2+|||\,\phi _{\bullet (k+1)}-\phi _{\bullet k}\,|||^2=|||\,\phi _\bullet ^\star -\phi _{\bullet k}\,|||^2\quad \text {for all }k\in \mathbb {N}_0. \end{aligned}$$


Proof   According to the definition of PCG (and CG), it holds that
$$\begin{aligned} \Vert \widetilde{\varvec{x}}_\bullet ^\star -\widetilde{\varvec{x}}_{\bullet k}\Vert _{\widetilde{\varvec{A}}_\bullet } = \min _{\widetilde{\varvec{y}}_\bullet \in \mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})}\Vert \widetilde{\varvec{x}}_\bullet ^\star -\widetilde{\varvec{y}}_\bullet \Vert _{\widetilde{\varvec{A}}_\bullet }, \end{aligned}$$
where \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0}):=\mathrm{span}\{\widetilde{\varvec{r}}_{\bullet 0},\widetilde{\varvec{A}}_\bullet \widetilde{\varvec{r}}_{\bullet 0},\ldots ,\widetilde{\varvec{A}}_\bullet ^{k-1} \widetilde{\varvec{r}}_{\bullet 0}\}\) with \(\widetilde{\varvec{r}}_{\bullet 0}:=\widetilde{\varvec{b}}_\bullet -\widetilde{\varvec{A}}_\bullet \widetilde{\varvec{x}}_{\bullet 0}\). By standard linear algebra, \(\widetilde{\varvec{x}}_{\bullet k}\) is the orthogonal projection of \(\widetilde{\varvec{x}}_\bullet ^\star \) onto \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\) with respect to the matrix norm \(\Vert \cdot \Vert _{\widetilde{\varvec{A}}_\bullet }\). From the nestedness \(\mathcal {K}_k(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\subseteq \mathcal {K}_{k+1}(\widetilde{\varvec{A}}_\bullet ,\widetilde{\varvec{b}}_\bullet ,\widetilde{\varvec{x}}_{\bullet 0})\), it thus follows that
$$\begin{aligned} \Vert \widetilde{\varvec{x}}_\bullet ^\star -\widetilde{\varvec{x}}_{\bullet k}\Vert _{\widetilde{\varvec{A}}_\bullet }^2=\Vert \widetilde{\varvec{x}}_\bullet ^\star -\widetilde{\varvec{x}}_{\bullet (k+1)}\Vert _{\widetilde{\varvec{A}}_\bullet }^2+\Vert \widetilde{\varvec{x}}_{\bullet (k+1)}-\widetilde{\varvec{x}}_{\bullet k}\Vert _{\widetilde{\varvec{A}}_\bullet }^2. \end{aligned}$$
Together with (17) and (20), this proves (49). \(\square \)
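The orthogonality argument can be checked numerically. The following pure-Python sketch runs plain CG (i.e., the special case where \(\varvec{P}_\bullet \) is the identity) on a hypothetical \(3\times 3\) SPD system and verifies the Pythagoras identity (49) in the energy norm for the first iterates; matrix and right-hand side are our own toy data.

```python
# Numerical check of the Pythagoras identity for CG iterates:
# ||x*-x_k||_A^2 = ||x*-x_{k+1}||_A^2 + ||x_{k+1}-x_k||_A^2.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_iterates(A, b, x0, steps):
    """Return the list [x0, x1, ..., x_steps] of plain CG iterates."""
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    p = list(r)
    out = [list(x)]
    for _ in range(steps):
        Ap = matvec(A, p)
        rr = dot(r, r)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r, r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        out.append(list(x))
    return out

def energy_norm_sq(A, v):
    return dot(v, matvec(A, v))

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD toy matrix
b = [1.0, 2.0, 3.0]
xs = cg_iterates(A, b, [0.0, 0.0, 0.0], 3)
x_star = xs[-1]  # exact up to round-off after n = 3 steps

max_defect = 0.0
for k in range(2):
    e_k = [s - c for s, c in zip(x_star, xs[k])]
    e_k1 = [s - c for s, c in zip(x_star, xs[k + 1])]
    d = [n - c for n, c in zip(xs[k + 1], xs[k])]
    defect = abs(energy_norm_sq(A, e_k)
                 - energy_norm_sq(A, e_k1) - energy_norm_sq(A, d))
    max_defect = max(max_defect, defect)
```

The identity holds exactly in exact arithmetic, since each iterate is the \(\varvec{A}\)-orthogonal projection onto the nested Krylov spaces; `max_defect` measures only round-off.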

The following lemma collects some estimates which follow from the contraction property (19) of PCG.

Lemma 18

Algorithm 5 guarantees the following estimates for all \((j,k) \in \mathcal {Q}\) with \(k \ge 1\):
  1. (i) \(|||\,\phi _j^\star - \phi _{jk}\,||| \le q_{\mathrm{pcg}} \, |||\,\phi _j^\star - \phi _{j(k-1)}\,|||\)
  2. (ii) \(|||\,\phi _{jk}-\phi _{j(k-1)}\,||| \le (1+q_{\mathrm{pcg}}) \, |||\,\phi _j^\star -\phi _{j(k-1)}\,|||\)
  3. (iii) \(|||\,\phi _j^\star -\phi _{j(k-1)}\,||| \le (1-q_{\mathrm{pcg}})^{-1} \, |||\,\phi _{jk}-\phi _{j(k-1)}\,|||\)
  4. (iv) \(|||\,\phi _j^\star -\phi _{jk}\,||| \le q_{\mathrm{pcg}} (1-q_{\mathrm{pcg}})^{-1} \, |||\,\phi _{jk}-\phi _{j(k-1)}\,|||\)



Proof   According to (20), estimate (19) proves (i). The estimates (ii)–(iv) follow from (i) and the triangle inequality. \(\square \)
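For completeness, we sketch how (iii) follows from (i) and the triangle inequality (the arguments for (ii) and (iv) are analogous):

```latex
\begin{aligned}
|||\,\phi_j^\star - \phi_{j(k-1)}\,|||
  &\le |||\,\phi_j^\star - \phi_{jk}\,||| + |||\,\phi_{jk} - \phi_{j(k-1)}\,||| \\
  &\overset{\text{(i)}}{\le} q_{\mathrm{pcg}}\,|||\,\phi_j^\star - \phi_{j(k-1)}\,|||
     + |||\,\phi_{jk} - \phi_{j(k-1)}\,|||.
\end{aligned}
```

Absorbing the first term on the right-hand side gives (iii), and combining (i) with (iii) then yields (iv).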

6.3 Proof of Theorem 8(a)

With reliability (A3) and stability (A1), we see that
$$\begin{aligned}&|||\,\phi ^\star - \phi _{jk}\,||| \le |||\,\phi ^\star - \phi _j^\star \,||| + |||\,\phi _j^\star - \phi _{jk}\,||| \\&\quad {{\mathop {\lesssim }\limits ^{(\mathrm{A}3)}}} \eta _j(\phi _j^\star ) + |||\,\phi _j^\star - \phi _{jk}\,||| \\&\quad {{\mathop {\lesssim }\limits ^{(\mathrm{A}1)}}} \eta _j(\phi _{jk}) + |||\,\phi _j^\star - \phi _{jk}\,||| \quad \text {for all } (j,k) \in \mathcal {Q}. \end{aligned}$$
With Lemma 18(iv), we hence prove the reliability estimate (26).
According to [3], it holds that
$$\begin{aligned} \eta _j(\phi _{jk})&\lesssim \Vert h_j^{1/2}(\phi ^\star - \phi _{jk})\Vert _{L^2(\Gamma )} + |||\,\phi ^\star - \phi _{jk}\,||| \\ {}&\le \Vert h_j^{1/2}(\phi ^\star - \phi _{jk})\Vert _{L^2(\Gamma )} + |||\,\phi ^\star - \phi _j^\star \,||| + |||\,\phi _j^\star - \phi _{jk}\,|||. \end{aligned}$$
Let \(\mathbb {G}_j: \widetilde{H}^{-1/2}(\Gamma ) \rightarrow \mathcal {P}^0(\mathcal {T}_j)\) be the Galerkin projection. Let \(\Pi _j: L^2(\Gamma ) \rightarrow \mathcal {P}^0(\mathcal {T}_j)\) be the \(L^2\)-orthogonal projection. With the Céa lemma and a duality argument (see, e.g., [16, Theorem 4.1]), we see that
$$\begin{aligned} |||\,(1-\mathbb {G}_j)\psi \,|||\le |||\,(1-\Pi _j)\psi \,|||\lesssim \Vert h_j^{1/2}\,\psi \Vert _{L^2(\Gamma )}\quad \text {for all }~\psi \in L^2(\Gamma ). \end{aligned}$$
Hence, for \(\psi =\phi ^\star -\phi _{jk}\), it follows that
$$\begin{aligned} |||\,\phi ^\star -\phi _j^\star \,||| = |||\,(1-\mathbb {G}_j)\phi ^\star \,||| = |||\,(1-\mathbb {G}_j)(\phi ^\star -\phi _{jk})\,||| \lesssim \Vert h_j^{1/2}(\phi ^\star -\phi _{jk})\Vert _{L^2(\Gamma )}. \end{aligned}$$
Combining the latter estimates, we see that
$$\begin{aligned} \eta _j(\phi _{jk})&\lesssim \Vert h_j^{1/2}(\phi ^\star - \phi _{jk})\Vert _{L^2(\Gamma )} + |||\,\phi _j^\star - \phi _{jk}\,|||. \end{aligned}$$
With Lemma 18(iv), we hence prove the efficiency estimate (27). \(\square \)

6.4 Proof of Theorem 8(b)

The following lemma is the heart of the proof of Theorem 8(b).

Lemma 19

Consider Algorithm 5 for arbitrary parameters \(0 < \theta \le 1\) and \(\lambda > 0\). There exist constants \(0<\mu ,q_{\mathrm{ctr}}<1\) such that
$$\begin{aligned} \Delta _{jk}:=\mu \,\eta _j(\phi _{jk})^2+|||\,\phi ^\star -\phi _{jk}\,|||^2 \quad \text {for } (j, k) \in \mathcal {Q} \end{aligned}$$
satisfies, for all \(j \in \mathbb {N}_0\), that
$$\begin{aligned} \Delta _{j(k+1)}\le q_\mathrm{ctr}\,\Delta _{jk}\quad \text {for all }0\le k<k+1<\underline{k} \end{aligned}$$
as well as
$$\begin{aligned} \Delta _{(j+1)0}\le q_\mathrm{ctr}\,\Delta _{j(\underline{k}-1)}. \end{aligned}$$
Moreover, for all \((j',k'),(j,k)\in \mathcal {Q}\), it holds that
$$\begin{aligned}&\Delta _{j'k'}\le q_\mathrm{ctr}^{|(j',k')|-|(j,k)|}\,\Delta _{jk}\quad \text {provided that }\nonumber \\&\quad (j',k')>(j,k), k'<\underline{k}(j'),\text { and }k<\underline{k}(j). \end{aligned}$$
The constants \(0< \mu , q_{\mathrm{ctr}} < 1\) depend only on \(\lambda \), \(\theta \), \(q_{\mathrm{pcg}}\), and the constants in (A1)–(A3).


Proof   The proof is split into five steps.

Step 1   We fix some constants, which are needed below. We note that all these constants depend on \(0 < \theta \le 1\) and \(\lambda > 0\), but no additional constraints on \(\theta \) and \(\lambda \) are required. First, define
$$\begin{aligned} 0< q_{\mathrm{est}} := 1-(1-q_{\mathrm{red}})\theta ^2 < 1. \end{aligned}$$
Second, choose \(\gamma >0\) such that
$$\begin{aligned} (1+\gamma )\,q_{\mathrm{est}}&{\mathop {<}\limits ^{(53)}}1. \end{aligned}$$
Third, choose \(\mu >0\) such that
$$\begin{aligned} \mu \,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2(1+q_{\mathrm{pcg}})^2 ~<~ \frac{1-q_{\mathrm{pcg}}^2}{2} \qquad \text {and} \quad \mu \,\lambda ^{-2} ~\le ~ \frac{1}{2}. \end{aligned}$$
Fourth, choose \(\varepsilon >0\) such that
$$\begin{aligned} \varepsilon \,(1-q_{\mathrm{pcg}})^{-2}+2\,\varepsilon \,C_{\mathrm{rel}}^2\,C_{\mathrm{stb}}^2\,(1-q_{\mathrm{pcg}})^{-2} ~\le ~ \frac{1}{2} \qquad \text {and} \quad 2\,\varepsilon \,C_{\mathrm{rel}}^2 ~\le ~ (1-\varepsilon )\mu . \end{aligned}$$
Fifth, choose \(\kappa >0\) such that
$$\begin{aligned} 2\,\kappa \,C_{\mathrm{rel}}^2 {\mathop {<}\limits ^{(54)}} \big (1-(1+\gamma )\,q_{\mathrm{est}}\big )\,\mu \qquad \text {and} \quad 2\,\kappa \,C_{\mathrm{rel}}^2C_{\mathrm{stb}}^2~<~\frac{1-q_{\mathrm{pcg}}^2}{2}. \end{aligned}$$
With (55)–(57), we finally define
$$\begin{aligned} \begin{aligned} 0<q_{\mathrm{ctr}}:=\max \Bigg \{&1-\varepsilon \,,\, \big (\mu \,(1+\gamma )\,q_{\mathrm{est}}+2\,\kappa \,C_{\mathrm{rel}}^2\big )\,\mu ^{-1}\,,\,1-\kappa ,\\&\big (\mu \,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2\,(1+q_{\mathrm{pcg}})^2+q_{\mathrm{pcg}}^2+2\,\kappa \,C_{\mathrm{rel}}^2C_{\mathrm{stb}}^2\big )\Bigg \}<1. \end{aligned} \end{aligned}$$
Step 2   Due to reliability (A3), stability (A1), and Lemma 18(iii), it follows that
$$\begin{aligned}&|||\,\phi ^\star -\phi _j^\star \,|||^2=(1-\varepsilon )|||\,\phi ^\star -\phi _j^\star \,|||^2+\varepsilon \,|||\,\phi ^\star -\phi _j^\star \,|||^2 \\&\quad {\mathop {\le }\limits ^{(\mathrm{A}3)}}(1-\varepsilon )|||\,\phi ^\star -\phi _j^\star \,|||^2+\varepsilon \,C_{\mathrm{rel}}^2\,\eta _j(\phi _j^\star )^2 \\&\quad {\mathop {\le }\limits ^{(\mathrm{A}1)}}(1-\varepsilon )|||\,\phi ^\star -\phi _j^\star \,|||^2+2\,\varepsilon \,C_{\mathrm{rel}}^2\,\big (\eta _j(\phi _{jk})^2+C_{\mathrm{stb}}^2\,|||\,\phi _j^\star -\phi _{jk}\,|||^2\big ) \\&\quad {\mathop {\le }\limits ^{18\mathrm{(iii)}}} (1-\varepsilon )|||\,\phi ^\star -\phi _j^\star \,|||^2+2\,\varepsilon \,C_{\mathrm{rel}}^2\,\eta _j(\phi _{jk})^2\\&\quad \qquad +2\,\varepsilon \,C_{\mathrm{rel}}^2\,C_{\mathrm{stb}}^2\,(1-q_{\mathrm{pcg}})^{-2}\,|||\,\phi _{j(k+1)}-\phi _{jk}\,|||^2. \end{aligned}$$
Step 3   We consider the case \(k+1<\underline{k}(j)\). Step (iv) of Algorithm 5 yields that
$$\begin{aligned} \eta _j(\phi _{j(k+1)})^2<\lambda ^{-2}|||\,\phi _{j(k+1)}-\phi _{jk}\,|||^2. \end{aligned}$$
Moreover, the Pythagoras identity (49) implies that
$$\begin{aligned} \begin{aligned} |||\,\phi _j^\star -\phi _{j(k+1)}\,|||^2&=(1-\varepsilon ) \, |||\,\phi _j^\star -\phi _{jk}\,|||^2\\&\quad +\,\varepsilon \,|||\,\phi _j^\star -\phi _{jk}\,|||^2-|||\,\phi _{j(k+1)}-\phi _{jk}\,|||^2. \end{aligned} \end{aligned}$$
Further, we note the Pythagoras identity
$$\begin{aligned} |||\,\phi ^\star - \phi _{j}^\star \,|||^2 + |||\,\phi _{j}^\star - \psi _{j}\,|||^2 = |||\,\phi ^\star - \psi _{j}\,|||^2 \quad \text {for all } \psi _{j} \in \mathcal {P}^0(\mathcal {T}_{j}). \end{aligned}$$
Combining (59)–(61) and applying Lemma 18(iii), we see that
$$\begin{aligned}&\Delta _{j(k+1)} =\mu \,\eta _j(\phi _{j(k+1)})^2+|||\,\phi _j^\star -\phi _{j(k+1)}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2 \\&\quad <(1-\varepsilon )|||\,\phi _j^\star -\phi _{jk}\,|||^2+\varepsilon \,|||\,\phi _j^\star -\phi _{jk}\,|||^2 +(\mu \,\lambda ^{-2}-1)|||\,\phi _{j(k+1)}\\&\qquad -\,\phi _{jk}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2 \\&\quad {\mathop {\le }\limits ^{18\mathrm{(iii)}}} (1-\varepsilon )\,|||\,\phi _j^\star -\phi _{jk}\,|||^2+\big (\varepsilon \,(1-q_{\mathrm{pcg}})^{-2}+\mu \,\lambda ^{-2}-1\big )|||\,\phi _{j(k+1)}\\&\qquad -\,\phi _{jk}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2. \end{aligned}$$
Step 2 yields that
$$\begin{aligned}&\le (1-\varepsilon )\big (|||\,\phi _j^\star -\phi _{jk}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2\big )+2 \,\varepsilon \,C_{\mathrm{rel}}^2\,\eta _j(\phi _{jk})^2 \\&\qquad +\big (\varepsilon \,(1-q_{\mathrm{pcg}})^{-2}+\mu \,\lambda ^{-2}-1+2\,\varepsilon \,C_{\mathrm{rel}}^2\,C_{\mathrm{stb}}^2\,(1-q_{\mathrm{pcg}})^{-2}\big )|||\,\phi _{j(k+1)}-\phi _{jk}\,|||^2. \end{aligned}$$
Using (55)–(56) and (61), we thus see that
$$\begin{aligned} \Delta _{j(k+1)} \le (1-\varepsilon )\big (\mu \,\eta _j(\phi _{jk})^2 + |||\,\phi ^\star -\phi _{jk}\,|||^2 \big ) {\mathop {\le }\limits ^{(58)}}q_{\mathrm{ctr}} \, \Delta _{jk} \quad \text {if }k+1 < \underline{k}(j). \end{aligned}$$
This concludes the proof of (50).
Step 4   We use the definition \(\phi _{(j+1)0}:=\phi _{j\underline{k}}\) from Step (vi) of Algorithm 5 to see that
$$\begin{aligned} \begin{aligned} \Delta _{(j+1)0}&= \mu \,\eta _{j+1}(\phi _{(j+1)0})^2 + |||\,\phi ^\star -\phi _{(j+1)0}\,|||^2\\&=\mu \,\eta _{j+1}(\phi _{j\underline{k}})^2 + |||\,\phi ^\star -\phi _{j\underline{k}}\,|||^2. \end{aligned} \end{aligned}$$
For the first summand of (62), we use stability (A1) and reduction (A2). Together with the Dörfler marking strategy in Step (v) of Algorithm 5 and \(\mathcal {M}_j \subseteq \mathcal {T}_j \backslash \mathcal {T}_{j+1}\), we see that
$$\begin{aligned} \begin{aligned} \eta _{j+1}(\phi _{j\underline{k}})^2&= \eta _{j+1}(\mathcal {T}_{j+1} \backslash \mathcal {T}_j, \phi _{j\underline{k}})^2 + \eta _{j+1}(\mathcal {T}_{j+1} \cap \mathcal {T}_j, \phi _{j\underline{k}})^2 \\&\le q_{\mathrm{red}} \, \eta _{j}(\mathcal {T}_j \backslash \mathcal {T}_{j+1}, \phi _{j\underline{k}})^2 + \eta _{j}(\mathcal {T}_{j+1} \cap \mathcal {T}_j, \phi _{j\underline{k}})^2 \\&= \eta _{j}(\phi _{j\underline{k}})^2 - (1-q_{\mathrm{red}}) \, \eta _{j}(\mathcal {T}_j \backslash \mathcal {T}_{j+1}, \phi _{j\underline{k}})^2 \\&\le \eta _{j}(\phi _{j\underline{k}})^2 - (1-q_{\mathrm{red}})\theta ^2 \, \eta _{j}(\phi _{j\underline{k}})^2 {\mathop {=}\limits ^{(53)}} q_{\mathrm{est}} \, \eta _{j}(\phi _{j\underline{k}})^2. \end{aligned} \end{aligned}$$
With this and stability (A1), the Young inequality and Lemma 18(ii) yield that
$$\begin{aligned} \eta _{j+1}(\phi _{j\underline{k}})^2&{\mathop {\le }\limits ^{(63)}}q_{\mathrm{est}}\,\eta _j(\phi _{j\underline{k}})^2 {\mathop {\le }\limits ^{(\mathrm{A}1)}}(1+\gamma )\,q_{\mathrm{est}}\,\eta _j(\phi _{j(\underline{k}-1)})^2\nonumber \\&\quad +\,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2|||\,\phi _{j\underline{k}}-\phi _{j(\underline{k}-1)}\,|||^2\nonumber \\&\, {\mathop {\le }\limits ^{\mathrm{(ii)}}} (1+\gamma )\,q_{\mathrm{est}}\,\eta _j(\phi _{j(\underline{k}-1)})^2\nonumber \\&\quad +\,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2\,(1+q_{\mathrm{pcg}})^2|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2. \end{aligned}$$
For the second summand of (62), we apply the Pythagoras identity (61) together with Lemma 18(i) and obtain that
$$\begin{aligned}&|||\,\phi ^\star -\phi _{j\underline{k}}\,|||^2 {\mathop {=}\limits ^{(61)}} |||\,\phi ^\star -\phi _j^\star \,|||^2 + |||\,\phi _j^\star -\phi _{j\underline{k}}\,|||^2\nonumber \\&\quad {\mathop {\le }\limits ^{18\mathrm{(i)}}}|||\,\phi ^\star -\phi _j^\star \,|||^2 + q_{\mathrm{pcg}}^2 \, |||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2. \end{aligned}$$
Combining (62)–(65), we end up with
$$\begin{aligned} \Delta _{(j+1)0}&\le \mu \,(1+\gamma )\,q_{\mathrm{est}}\,\eta _j(\phi _{j(\underline{k}-1)})^2\\&\quad +\,\big (\mu \,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2\,(1+q_{\mathrm{pcg}})^2+q_{\mathrm{pcg}}^2\big )|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2\\&\quad +\,|||\,\phi ^\star -\phi _j^\star \,|||^2. \end{aligned}$$
Using the same arguments as in Step 2, we get that
$$\begin{aligned} \Delta _{(j+1)0}&\le \mu \,(1+\gamma )\,q_{\mathrm{est}}\,\eta _j(\phi _{j(\underline{k}-1)})^2\\&\quad +\,\big (\mu \,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2\,(1+q_{\mathrm{pcg}})^2+q_{\mathrm{pcg}}^2\big )|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2\\&\quad +\,(1-\kappa )|||\,\phi ^\star -\phi _j^\star \,|||^2+2\,\kappa \,C_{\mathrm{rel}}^2\,\eta _j(\phi _{j(\underline{k}-1)})^2\\&\quad +\,2\,\kappa \,C_{\mathrm{rel}}^2\,C_{\mathrm{stb}}^2\,|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2\\&=\big (\mu \,(1+\gamma )\,q_{\mathrm{est}}+2\,\kappa \,C_{\mathrm{rel}}^2\big )\,\eta _j(\phi _{j(\underline{k}-1)})^2+(1-\kappa )|||\,\phi ^\star -\phi _j^\star \,|||^2\\&\quad +\,\big (\mu \,(1+\gamma ^{-1})\,q_{\mathrm{est}}\,C_{\mathrm{stb}}^2\,(1+q_{\mathrm{pcg}})^2+q_{\mathrm{pcg}}^2\\&\quad +\,2\,\kappa \,C_{\mathrm{rel}}^2C_{\mathrm{stb}}^2\big )|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2 \\&{\mathop {\le }\limits ^{(58)}}q_{\mathrm{ctr}}\,\mu \,\eta _j(\phi _{j(\underline{k}-1)})^2+q_{\mathrm{ctr}}\,|||\,\phi ^\star -\phi _j^\star \,|||^2+q_{\mathrm{ctr}}\,|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2\\&{\mathop {=}\limits ^{(61)}} q_{\mathrm{ctr}}\,\Delta _{j(\underline{k}-1)}. \end{aligned}$$
This concludes the proof of (51).

Step 5   Inequality (52) follows by induction. This concludes the proof. \(\square \)
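The Dörfler marking invoked in Step 4 admits a short algorithmic sketch. The following hypothetical pure-Python implementation (element names and the greedy realization are ours, not Step (v) of Algorithm 5 verbatim) selects a set \(\mathcal {M}_j\) of minimal cardinality with \(\eta _j(\mathcal {M}_j)^2 \ge \theta ^2\,\eta _j(\phi _{j\underline{k}})^2\).

```python
# Sketch of Doerfler (bulk) marking: given local indicators eta_T and a
# parameter 0 < theta <= 1, select a set M of minimal cardinality with
# eta(M)^2 >= theta^2 * eta(T)^2. Data below is a toy example.

def doerfler_marking(indicators, theta):
    """indicators: dict mapping element -> local indicator eta_T."""
    total = sum(eta ** 2 for eta in indicators.values())
    marked, acc = [], 0.0
    # Greedily marking in order of decreasing indicator yields a set of
    # minimal cardinality satisfying the bulk criterion.
    for elem, eta in sorted(indicators.items(), key=lambda it: -it[1]):
        if acc >= theta ** 2 * total:
            break
        marked.append(elem)
        acc += eta ** 2
    return marked

eta = {"T1": 0.5, "T2": 0.1, "T3": 0.4, "T4": 0.2}
M = doerfler_marking(eta, theta=0.5)
```

In practice, sorting costs \(\mathcal {O}(N\log N)\); binning strategies reduce this to linear complexity, which matters for the overall cost analysis.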

Proof of Theorem 8(b)

The proof is split into three steps.

Step 1   Let \(j \in \mathbb {N}\). Recall the Pythagoras identity (61). We use stability (A1) and Step (iv) of Algorithm 5 to see that
$$\begin{aligned} \Delta _{j(\underline{k}-1)}&{\mathop {=}\limits ^{(61)}}\mu \,\eta _j(\phi _{j(\underline{k}-1)})^2+|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2\\&{{\mathop {\lesssim }\limits ^{(\mathrm{A}1)}}}\eta _j(\phi _{j\underline{k}})^2+|||\,\phi _{j\underline{k}}-\phi _{j(\underline{k}-1)}\,|||^2+|||\,\phi _j^\star -\phi _{j\underline{k}}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2\\&{\mathop {\lesssim }\limits ^{5\mathrm{(iv)}}} \eta _j(\phi _{j\underline{k}})^2+|||\,\phi _j^\star -\phi _{j\underline{k}}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2 {\mathop {\simeq }\limits ^{(61)}}\Delta _{j\underline{k}}. \end{aligned}$$
With the Pythagoras identity (49), we may argue similarly to obtain that
$$\begin{aligned} \Delta _{j\underline{k}}&{\mathop {=}\limits ^{(61)}}\mu \,\eta _j(\phi _{j\underline{k}})^2+|||\,\phi _j^\star -\phi _{j\underline{k}}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2\\&{{\mathop {\lesssim }\limits ^{(\mathrm{A}1)}}}\eta _j(\phi _{j(\underline{k}-1)})^2+|||\,\phi _{j\underline{k}}-\phi _{j(\underline{k}-1)}\,|||^2+|||\,\phi _j^\star -\phi _{j\underline{k}}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2\\&{\mathop {=}\limits ^{(49)}} \eta _j(\phi _{j(\underline{k}-1)})^2+|||\,\phi _j^\star -\phi _{j(\underline{k}-1)}\,|||^2+|||\,\phi ^\star -\phi _j^\star \,|||^2 {\mathop {\simeq }\limits ^{(61)}}\Delta _{j(\underline{k}-1)}. \end{aligned}$$
Hence, it follows that \(\Delta _{j\underline{k}}\simeq \Delta _{j(\underline{k}-1)}\).
Step 2   For \(0\le j\le j^\prime \), define \({\widehat{k}}(j) := {\widehat{k}} \in \mathbb {N}_0\) by
$$\begin{aligned} {\widehat{k}}:={\left\{ \begin{array}{ll} \underline{k}(j)&{}\quad \text {if } j<j^\prime ,\\ k^\prime &{}\quad \text {if }j=j^\prime . \end{array}\right. } \end{aligned}$$
From Step 1, Lemma 19, and the geometric series (for the sum over k), it follows that
$$\begin{aligned}&\sum _{j=0}^{j^\prime } \sum _{k=0}^{\widehat{k}(j)} \Delta _{jk}^{-1} \lesssim \Delta _{j'k'}^{-1} + \sum _{j=0}^{j^\prime }\sum _{k=0}^{\widehat{k}(j)-1}\Delta _{jk}^{-1}\\&\quad {\mathop {\le }\limits ^{(52)}} \Delta _{j'k'}^{-1} + \sum _{j=0}^{j^\prime } \sum _{k=0}^{\widehat{k}(j)-1}q_{\mathrm{ctr}}^{|(j,{\widehat{k}}-1)| - |(j,k)|}\,\Delta _{j({\widehat{k}}-1)}^{-1} \\&\quad \lesssim \Delta _{j'k'}^{-1} + \sum _{j=0}^{j^\prime }\Delta _{j(\widehat{k}-1)}^{-1}. \end{aligned}$$
For \(k^\prime <\underline{k}(j^\prime )\), inequality (52) and the geometric series (for the sum over j) yield that
$$\begin{aligned} \sum _{j=0}^{j^\prime }\Delta _{j(\widehat{k}-1)}^{-1} {\mathop {\lesssim }\limits ^{(52)}}\sum _{j=0}^{j^\prime }q_{\mathrm{ctr}}^{|(j^\prime ,k^\prime )|-|(j,\widehat{k}-1)|}\,\Delta _{j^\prime k^\prime }^{-1} \lesssim \Delta _{j^\prime k^\prime }^{-1}. \end{aligned}$$
For \(k^\prime =\underline{k}(j^\prime )\), inequality (52), the geometric series, and Step 1 yield that
$$\begin{aligned} \sum _{j=0}^{j^\prime }\Delta _{j(\widehat{k}-1)}^{-1} = \Delta _{j^\prime (\underline{k}-1)}^{-1} + \sum _{j=0}^{j^\prime -1}\Delta _{j(\underline{k}-1)}^{-1}&{\mathop {\lesssim }\limits ^{(52)}}\left( 1 + \sum _{j=0}^{j^\prime -1}q_{\mathrm{ctr}}^{|(j^\prime ,\underline{k}-1)|-|(j,\underline{k}-1)|} \right) \, \Delta _{j^\prime (\underline{k}-1)}^{-1} \\&\lesssim \Delta _{j^\prime (\underline{k}-1)}^{-1} \simeq \Delta _{j^\prime \underline{k}}^{-1} = \Delta _{j^\prime k'}^{-1}. \end{aligned}$$
Overall, it follows that
$$\begin{aligned} \sum _{j=0}^{j^\prime }\sum _{k=0}^{\widehat{k}(j)}\Delta _{jk}^{-1}&\lesssim \Delta _{j^\prime k^\prime }^{-1} \quad \text {for all } (j^\prime ,k^\prime )\in \mathcal {Q}. \end{aligned}$$
Step 3   According to the proof of [11, Lemma 4.9], estimate (66) guarantees (and is even equivalent to) the existence of \(0<q_{\mathrm{lin}}<1\) such that
$$\begin{aligned} \Delta _{j^\prime k^\prime }^{1/2} \lesssim q_{\mathrm{lin}}^{|(j^\prime ,k^\prime )|-|(j,k)|} \, \Delta _{jk}^{1/2}\quad \text {for all }(j,k),(j^\prime ,k^\prime )\in \mathcal {Q}\text { with }(j',k') \ge (j,k). \end{aligned}$$
Clearly, it holds that \(\Lambda _{jk} \simeq \Delta _{jk}^{1/2}\) for all \((j,k) \in \mathcal {Q}\). This and (67) conclude the proof. \(\square \)
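The mechanism behind [11, Lemma 4.9] is elementary: if \(S_n := \sum _{\ell \le n} a_\ell ^{-1} \le C\, a_n^{-1}\) for all n, then \(C\,S_{n-1} \le (C-1)\,S_n\), so the partial sums grow geometrically and \(a_n \le C\, q^{n-m}\, a_m\) with \(q = (C-1)/C\). The following sketch is a purely numerical illustration of this implication on a hypothetical model sequence; it is not part of the proof.

```python
# Numerical illustration: summability of the inverses implies geometric decay.
# If sum_{l<=n} 1/a_l <= C/a_n for all n, then a_n <= C * q**(n-m) * a_m
# with q = (C-1)/C (the argument behind [11, Lemma 4.9]).

def decay_constant(a):
    """Smallest C with sum_{l<=n} 1/a_l <= C/a_n for all n."""
    inv = [1.0 / x for x in a]
    partial, C = 0.0, 0.0
    for v in inv:
        partial += v
        C = max(C, partial / v)
    return C

# Hypothetical model sequence with geometric decay.
a = [0.8 ** n for n in range(40)]

C = decay_constant(a)
q = (C - 1.0) / C
ok = all(
    a[n] <= C * q ** (n - m) * a[m] + 1e-12
    for m in range(len(a))
    for n in range(m, len(a))
)
print(C, q, ok)
```

For this sequence the summability constant C stays below \(r/(r-1) = 5\) (with \(r = 1/0.8\)), and the derived geometric bound holds for all index pairs.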

6.5 Proof of Theorem 8(c)

The proof of optimal convergence rates requires the following additional properties of the mesh-refinement strategy. For 3D BEM (and 2D NVB from Sect. 2.4) these properties are verified in [8, 41, 42], and any assumption on \(\mathcal {T}_0\) is removed in [35]. For 2D BEM (and the extended 1D bisection from Sect. 2.3), these properties are verified in [2].
Splitting property   Each refined element is split into at least 2 and at most \(C_{\mathrm{son}}\ge 2\) sons, i.e., for all \(\mathcal {T}_\bullet \in \mathbb {T}\) and all \(\mathcal {M}_\bullet \subseteq \mathcal {T}_\bullet \), the refined mesh \(\mathcal {T}_\circ = {\text {refine}}(\mathcal {T}_\bullet , \mathcal {M}_\bullet )\) satisfies that
$$\begin{aligned} \# (\mathcal {T}_\bullet {\setminus } \mathcal {T}_\circ ) + \# \mathcal {T}_\bullet \le \# \mathcal {T}_\circ \le C_{\mathrm{son}} \, \# (\mathcal {T}_\bullet {\setminus } \mathcal {T}_\circ ) + \# (\mathcal {T}_\bullet \cap \mathcal {T}_\circ ). \end{aligned}$$
Overlay estimate For all meshes \(\mathcal {T} \in \mathbb {T}\) and \(\mathcal {T}_\bullet ,\mathcal {T}_\circ \in {\text {refine}}(\mathcal {T})\) there exists a common refinement \(\mathcal {T}_\bullet \oplus \mathcal {T}_\circ \in {\text {refine}}(\mathcal {T}_\bullet ) \cap {\text {refine}}(\mathcal {T}_\circ ) \subseteq {\text {refine}}(\mathcal {T})\) with
$$\begin{aligned} \# (\mathcal {T}_\bullet \oplus \mathcal {T}_\circ ) \le \# \mathcal {T}_\bullet + \# \mathcal {T}_\circ - \# \mathcal {T}. \end{aligned}$$
Mesh-closure estimate There exists \(C_{\mathrm{mesh}}>0\) such that the sequence \(\mathcal {T}_j\) with corresponding \(\mathcal {M}_j \subseteq \mathcal {T}_j\), which is generated by Algorithm 5, satisfies that
$$\begin{aligned} \# \mathcal {T}_j - \# \mathcal {T}_0 \le C_{\mathrm{mesh}} \sum _{\ell =0}^{j -1} \# \mathcal {M}_\ell . \end{aligned}$$
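As a toy illustration of the splitting property (plain 1D bisection with \(C_{\mathrm{son}} = 2\), not the paper's 2D/3D refinement strategies), the following sketch refines a partition of \([0,1]\) and checks both inequalities; the helper functions are hypothetical.

```python
# Toy check of the splitting property for 1D bisection (C_son = 2).
# A mesh is a sorted list of breakpoints; elements are the intervals between them.

def refine(mesh, marked):
    """Bisect every interval whose index is in `marked` (hypothetical helper)."""
    new = [mesh[0]]
    for i, (a, b) in enumerate(zip(mesh, mesh[1:])):
        if i in marked:
            new.append((a + b) / 2.0)
        new.append(b)
    return new

def elements(mesh):
    return list(zip(mesh, mesh[1:]))

C_son = 2
T_old = [0.0, 0.25, 0.5, 0.75, 1.0]      # 4 elements
T_new = refine(T_old, marked={0, 2})     # bisect elements 0 and 2

old_el, new_el = set(elements(T_old)), set(elements(T_new))
refined = old_el - new_el                # T_bullet \ T_circ
kept = old_el & new_el                   # T_bullet \cap T_circ

# splitting property: #(refined) + #T_old <= #T_new <= C_son*#(refined) + #(kept)
assert len(refined) + len(old_el) <= len(new_el) <= C_son * len(refined) + len(kept)
print(len(old_el), len(new_el), len(refined), len(kept))
```

Here both bounds are even attained with equality, since every marked interval produces exactly two sons.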
Recall the constants \(C_{\mathrm{stb}} > 0\) from (A1) and \(C_{\mathrm{drl}}>0\) from (A4). Suppose that \(0<\theta \le 1\) and \(\lambda > 0 \) are sufficiently small such that
$$\begin{aligned}&0< \theta '':=\frac{\theta +\lambda /\lambda _{\mathrm{opt}}}{1-\lambda /\lambda _{\mathrm{opt}}}<\theta _{\mathrm{opt}}:=\big (1+C_{\mathrm{stb}}^2\,C_{\mathrm{drl}}^2\big )^{-1/2},\nonumber \\&\quad \quad \text { where}~~\lambda _{\mathrm{opt}} := \left( C_{\mathrm{stb}} \,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}}\right) ^{-1}. \end{aligned}$$
In particular, it holds that \(0< \theta < \theta _{\mathrm{opt}}\) and \(0< \lambda < \lambda _{\mathrm{opt}}\). We need the following comparison lemma which is found in [11, Lemma 4.14].

Lemma 20

Suppose (R2), (A1), (A2), and (A4). Recall the assumption (68). There exist constants \(C_1, C_2 > 0\) such that for all \(s>0\) with \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \) and all \(j \in \mathbb {N}_0\), there exists \(\mathcal {R}_j \subseteq \mathcal {T}_j \) which satisfies
$$\begin{aligned} \# \mathcal {R}_j \le C_1 C_2^{-1/s} \, \Vert \phi ^\star \Vert _{\mathbb {A}_s}^{1/s} \eta _j(\phi _j^\star )^{-1/s}, \end{aligned}$$
as well as the Dörfler marking criterion
$$\begin{aligned} \theta '' \eta _j(\phi _j^\star ) \le \eta _j(\mathcal {R}_j, \phi _j^\star ). \end{aligned}$$
The constants \(C_1, C_2\) depend only on the constants of (A1), (A2), and (A4). \(\square \)

Another lemma, which we need for the proof of Theorem 8(c), shows that the iterates \(\phi _{\bullet k} \) of Algorithm 5 are close to the exact Galerkin approximation \(\phi _\bullet ^\star \in \mathcal {P}^0(\mathcal {T}_\bullet )\).

Lemma 21

Let \(0<\lambda <\lambda _{\mathrm{opt}}\). For all \(j \in \mathbb {N}_0\), it holds that
$$\begin{aligned} |||\,\phi ^\star _j-\phi _{j\underline{k}}\,||| \le \lambda \,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}}\,\min \Big \{\eta _j(\phi _{j\underline{k}})\,,\,\frac{1}{1-\lambda /\lambda _{\mathrm{opt}}}\,\eta _j(\phi ^\star _j)\Big \}. \end{aligned}$$
Moreover, there holds equivalence
$$\begin{aligned} (1-\lambda /\lambda _{\mathrm{opt}})\,\eta _j(\phi _{j\underline{k}}) \le \eta _j(\phi ^\star _j) \le (1+\lambda /\lambda _{\mathrm{opt}})\,\eta _j(\phi _{j\underline{k}}). \end{aligned}$$


Proof

Stability (A1) yields that \(|\eta _j(\phi ^\star _j)-\eta _j(\phi _{j\underline{k}})|\le C_{\mathrm{stb}}\,|||\,\phi ^\star _j - \phi _{j\underline{k}}\,||| \). Therefore, Lemma 18(iv) and the assumption on the PCG iterate in Step (iv) of Algorithm 5 imply that
$$\begin{aligned}&|||\,\phi ^\star _j -\phi _{j\underline{k}} \,||| {\mathop {\le }\limits ^{18\text {(iv)}}} \frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} \,|||\,\phi _{j\underline{k}}-\phi _{j (\underline{k}-1)}\,|||\\&\quad \le \lambda \,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} \,\eta _j(\phi _{j\underline{k}}) \\&\quad {{\mathop {\le }\limits ^{(\mathrm{A}1)}}} \lambda \, \frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} \,\big (\eta _j(\phi ^\star _j) + C_{\mathrm{stb}}\,|||\,\phi ^\star _j -\phi _{j\underline{k}} \,||| \big ). \end{aligned}$$
Since \(0<\lambda <\lambda _{\mathrm{opt}}\) and hence \(\lambda C_{\mathrm{stb}}\,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} = \lambda /\lambda _{\mathrm{opt}}<1\), this yields that
$$\begin{aligned} |||\,\phi ^\star _j -\phi _{j\underline{k}} \,||| \le \frac{ \lambda \,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} }{ 1 - \lambda C_{\mathrm{stb}}\,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} }\,\eta _j(\phi ^\star _j) = \lambda \,\frac{q_{\mathrm{pcg}}}{ 1-q_{\mathrm{pcg}}} \,\frac{1}{1-\lambda /\lambda _{\mathrm{opt}}}\,\eta _j(\phi ^\star _j). \end{aligned}$$
Altogether, this proves (71). Moreover, with stability (A1), we see that
$$\begin{aligned} \eta _j(\phi ^\star _j) {{\mathop {\le }\limits ^{(\mathrm{A}1)}}} \eta _j(\phi _{j\underline{k}}) + C_{\mathrm{stb}}\,|||\,\phi ^\star _j -\phi _{j\underline{k}} \,||| {\mathop {\le }\limits ^{(71)}}(1+\lambda /\lambda _{\mathrm{opt}})\,\eta _j(\phi _{j\underline{k}}) \end{aligned}$$
as well as
$$\begin{aligned}&\eta _j(\phi _{j\underline{k}}) {{\mathop {\le }\limits ^{(\mathrm{A}1)}}} \eta _j(\phi ^\star _j) + C_{\mathrm{stb}}\,|||\,\phi ^\star _j -\phi _{j\underline{k}} \,||| {\mathop {\le }\limits ^{(71)}}\Big (1+ \,\frac{\lambda /\lambda _{\mathrm{opt}}}{1-\lambda /\lambda _{\mathrm{opt}}}\Big )\,\eta _j(\phi ^\star _j) \\&\quad = \frac{1}{1-\lambda /\lambda _{\mathrm{opt}}}\,\eta _j(\phi ^\star _j). \end{aligned}$$
This concludes the proof. \(\square \)
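The increment-based termination behind Lemma 21 can be mimicked for a generic SPD linear system: stop the (P)CG iteration once the energy norm of the increment drops below a prescribed tolerance, which plays the role of \(\lambda \,\eta _j(\phi _{j\underline{k}})\) in Step (iv) of Algorithm 5. The following sketch uses plain CG (i.e., the identity preconditioner) on a hypothetical well-conditioned matrix; all names and tolerances are illustrative, not the paper's implementation.

```python
import numpy as np

def cg_increment_stop(A, b, tol, max_iter=1000):
    """Conjugate gradients, terminated once the energy norm of the
    increment ||x_k - x_{k-1}||_A drops below `tol` (the role played
    by lambda * eta_j in Step (iv) of Algorithm 5)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        dx = alpha * p
        increment = np.sqrt(dx @ (A @ dx))   # |||x_k - x_{k-1}|||
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        x, r, p = x + dx, r_new, r_new + beta * p
        if increment <= tol:
            break
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # SPD, moderate condition number
x_star = rng.standard_normal(50)
b = A @ x_star

x = cg_increment_stop(A, b, tol=1e-8)
err = np.sqrt((x - x_star) @ (A @ (x - x_star)))
print(err)
```

Since CG contracts the energy error with some factor \(q < 1\), the final error is bounded by \(\frac{q}{1-q}\) times the last increment, which is exactly the a posteriori bound used in the proof above.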

Finally, we need the following lemma which immediately shows “\(\Longleftarrow \)” in (31).

Lemma 22

Suppose (R1). For \(j \in \mathbb {N}_0\), let \(\widehat{\mathcal {T}}_{j+1} = \mathrm{refine}(\widehat{\mathcal {T}}_j,\widehat{\mathcal {M}}_j)\) with arbitrary, but non-empty \(\widehat{\mathcal {M}}_j \subseteq \widehat{\mathcal {T}}_j\) and \(\widehat{\mathcal {T}}_0 = \mathcal {T}_0\). Let \(\widehat{\mathcal {Q}} \subseteq \mathbb {N}_0 \times \mathbb {N}_0\) be an index set and \(\widehat{\phi }_{jk} \in \mathcal {P}^0(\widehat{\mathcal {T}}_j)\) for all \((j,k) \in \widehat{\mathcal {Q}}\). Let \(s > 0\) and suppose that the corresponding quasi-errors \(\widehat{\Lambda }_{jk}^2 := |||\,\phi ^\star - \widehat{\phi }_{jk}\,|||^2 + \widehat{\eta }_j(\widehat{\phi }_{jk})^2\) satisfy that
$$\begin{aligned} \sup _{(j,k) \in \widehat{\mathcal {Q}}} \big (\#\widehat{\mathcal {T}}_j - \#\mathcal {T}_0 +1\big )^s \, \widehat{\Lambda }_{jk} < \infty . \end{aligned}$$
Then, it follows that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \).


Proof

Due to the Pythagoras identity (61) and stability (A1), it holds that
$$\begin{aligned} \begin{aligned} \widehat{\Lambda }_{jk}^2&=|||\,\phi ^\star - \widehat{\phi }_{jk}\,|||^2 + \widehat{\eta }_{j}(\widehat{\phi }_{jk})^2\\&{\mathop {=}\limits ^{(61)}}|||\,\phi ^\star - \widehat{\phi }_j^\star \,|||^2+|||\,\widehat{\phi }_j^\star -\widehat{\phi }_{jk}\,|||^2+ \widehat{\eta }_{j}(\widehat{\phi }_{jk})^2 {{\mathop {\gtrsim }\limits ^{(\mathrm{A}1)}}} \widehat{\eta }_j(\widehat{\phi }_j^\star )^2. \end{aligned} \end{aligned}$$
Additionally, [9, Lemma 22] shows that
$$\begin{aligned} \#\mathcal {T}_\bullet - \#\mathcal {T}_0 + 1 \le \#\mathcal {T}_\bullet \le \#\mathcal {T}_0 \, \big (\#\mathcal {T}_\bullet - \#\mathcal {T}_0 + 1\big ) \quad \text {for all } \mathcal {T}_\bullet \in \mathbb {T}. \end{aligned}$$
Given \(N\in \mathbb {N}_0\), there exists an index \(j\in \mathbb {N}_0\) such that
$$\begin{aligned} \#\widehat{\mathcal {T}}_j - \#\mathcal {T}_0 \le N < N+1 \le \#\widehat{\mathcal {T}}_{j+1} - \#\mathcal {T}_0 + 1 {\mathop {\le }\limits ^{(75)}}\#\widehat{\mathcal {T}}_{j+1} {\mathop {\lesssim }\limits ^{(\mathrm{R}1)}} \#\widehat{\mathcal {T}}_j {\mathop {\lesssim }\limits ^{(75)}}\#\widehat{\mathcal {T}}_j- \#\mathcal {T}_0 + 1. \end{aligned}$$
With (74)–(76), it follows that
$$\begin{aligned}&(N+1)^s \min _{\begin{array}{c} \mathcal {T}_\bullet \in \mathrm{refine}(\mathcal {T}_0) \\ \#\mathcal {T}_\bullet - \#\mathcal {T}_0 \le N \end{array}} \eta _\bullet (\phi _\bullet ^\star ) {\mathop {\lesssim }\limits ^{(76)}}\big (\#\widehat{\mathcal {T}}_j- \#\mathcal {T}_0 + 1\big )^s \, \widehat{\eta }_j(\widehat{\phi }_j^\star )\\&\quad {\mathop {\lesssim }\limits ^{(74)}} \sup _{(j,k) \in \widehat{\mathcal {Q}}} \big (\#\widehat{\mathcal {T}}_j- \#\mathcal {T}_0 + 1\big )^s \, \widehat{\Lambda }_{jk} {\mathop {<}\limits ^{(73)}} \infty . \end{aligned}$$
Since the upper bound is finite and independent of N, this implies that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \). \(\square \)

Proof of Theorem 8(c)

With Lemma 22, it only remains to prove the implication “\(\Longrightarrow \)” in (31). The proof is split into three steps, where we may suppose that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \).

Step 1   By Assumption (68), Lemma 20 provides a set \(\mathcal {R}_j\subseteq \mathcal {T}_j\) with (69)–(70). Due to stability (A1) and \(\lambda _{\mathrm{opt}}^{-1} = C_{\mathrm{stb}}\,\frac{q_{\mathrm{pcg}}}{1-q_{\mathrm{pcg}}}\), it holds that
$$\begin{aligned} \eta _j(\mathcal {R}_j,\phi _j^\star ) {\mathop {\le }\limits ^{(\mathrm{A}1)}} \eta _j(\mathcal {R}_j,\phi _{j\underline{k}}) + C_{\mathrm{stb}}\,|||\,\phi _j^\star -\phi _{j\underline{k}}\,||| {\mathop {\le }\limits ^{(71)}}\eta _j(\mathcal {R}_j,\phi _{j\underline{k}}) + \lambda /\lambda _{\mathrm{opt}}\,\eta _j(\phi _{j\underline{k}}). \end{aligned}$$
Together with the Dörfler marking (70), i.e., \(\theta '' \eta _j(\phi _j^\star ) \le \eta _j(\mathcal {R}_j,\phi _j^\star )\), this proves that
$$\begin{aligned}&(1-\lambda /\lambda _{\mathrm{opt}})\theta ''\,\eta _j(\phi _{j\underline{k}}) {\mathop {\le }\limits ^{(72)}}\theta ''\,\eta _j(\phi _j^\star ) \le \eta _j(\mathcal {R}_j,\phi _j^\star )\\&\quad \le \eta _j(\mathcal {R}_j,\phi _{j\underline{k}}) + \lambda /\lambda _{\mathrm{opt}}\,\eta _j(\phi _{j\underline{k}}) \end{aligned}$$
and results in
$$\begin{aligned} \theta \,\eta _j(\phi _{j\underline{k}}) {\mathop {=}\limits ^{(68)}} \Big ((1-\lambda /\lambda _{\mathrm{opt}})\theta '' - \lambda /\lambda _{\mathrm{opt}}\Big )\,\eta _j(\phi _{j\underline{k}}) \le \eta _j(\mathcal {R}_j,\phi _{j\underline{k}}). \end{aligned}$$
Hence, \(\mathcal {R}_j\) satisfies the Dörfler marking for \(\phi _{j\underline{k}}\) with parameter \(\theta \). By choice of \(\mathcal {M}_j\) in Step (v) of Algorithm 5, we thus infer that
$$\begin{aligned} \# \mathcal {M}_j {\mathop {\lesssim }\limits ^{(77)}} \# \mathcal {R}_j {\mathop {\lesssim }\limits ^{(69)}} \eta _j(\phi _j^\star )^{-1/s} {\mathop {\simeq }\limits ^{(72)}}\eta _j(\phi _{j\underline{k}})^{-1/s} \quad \text {for all } j\in \mathbb {N}_0. \end{aligned}$$
The mesh-closure estimate (R3) guarantees that
$$\begin{aligned} \# \mathcal {T}_j - \# \mathcal {T}_0 +1 {\mathop {\lesssim }\limits ^{(\mathrm{R}3)}} \sum _{\ell =0}^{j-1} \# \mathcal {M}_\ell \lesssim \sum _{\ell =0}^{j-1} \eta _\ell (\phi _{\ell \underline{k}})^{-1/s} \quad \text {for all } j>0. \end{aligned}$$
Step 2   For \(j=0\) it holds that \(1\lesssim \Lambda _{0\underline{k}}^{-1/s}\). For \(j>0\), we proceed as follows: Remark 9 yields that \(\eta _\ell (\phi _{\ell \underline{k}}) \simeq \Lambda _{\ell \underline{k}}\). Theorem 8(b) and the geometric series prove that
$$\begin{aligned} \sum _{\ell =0}^{j-1} \eta _\ell (\phi _{\ell \underline{k}})^{-1/s} \simeq \sum _{\ell =0}^{j-1} \Lambda _{\ell \underline{k}}^{-1/s} {\mathop {\lesssim }\limits ^{(29)}}\sum _{\ell =0}^{j-1} (q_{\mathrm{lin}}^{1/s})^{|(j,\underline{k})|-|(\ell ,\underline{k})|} \, \Lambda _{j\underline{k}}^{-1/s} \lesssim \Lambda _{j\underline{k}}^{-1/s}. \end{aligned}$$
Combining this with (78) and including the estimate for \(j=0\), we derive that
$$\begin{aligned} \# \mathcal {T}_j - \# \mathcal {T}_0 +1 \lesssim \Lambda _{j\underline{k}}^{-1/s} \quad \text {for all } j \in \mathbb {N}_0. \end{aligned}$$
Step 3   Arguing as in (76) and employing Theorem 8(b), we see that
$$\begin{aligned}&\#\mathcal {T}_{j}-\#\mathcal {T}_0+1 {\mathop {\simeq }\limits ^{(76)}}\#\mathcal {T}_{j-1}-\#\mathcal {T}_0+1 {\mathop {\lesssim }\limits ^{(79)}}\Lambda _{(j-1)\underline{k}}^{-1/s}\\&\qquad {\mathop {\lesssim }\limits ^{(29)}}\Lambda _{jk}^{-1/s} \text { for all } (j,k) \in \mathcal {Q} \text { with } j > 0. \end{aligned}$$
Since \(\underline{k}(0) \le \#\mathcal {T}_0< \infty \), we hence conclude that \(\displaystyle \sup _{(j,k) \in \mathcal {Q}} (\#\mathcal {T}_{j}-\#\mathcal {T}_0+1)^s \, \Lambda _{jk} < \infty . \)\(\square \)

6.6 Proof of Corollary 10

For all \(\delta > 0\), it holds that
$$\begin{aligned} \#\mathcal {T}_\bullet - \#\mathcal {T}_0 + 1 {\mathop {\simeq }\limits ^{(75)}}\#\mathcal {T}_\bullet \le (\#\mathcal {T}_\bullet )\log ^2(1+\#\mathcal {T}_\bullet ) \lesssim (\#\mathcal {T}_\bullet )^{1+\delta } \quad \text {for all } \mathcal {T}_\bullet \in \mathbb {T}, \end{aligned}$$
where the hidden constant depends only on \(\delta \). From (32), it thus follows that
$$\begin{aligned} \sup _{j \in \mathbb {N}_0} \big [ \#\widehat{\mathcal {T}}_j - \#\mathcal {T}_0 + 1\big ]^s \, \widehat{\eta }_j(\widehat{\phi }_j^\star ) \lesssim \sup _{j \in \mathbb {N}_0} \big [(\#\widehat{\mathcal {T}}_j) \, \log ^2(1+\#\widehat{\mathcal {T}}_j)\big ]^s \, \widehat{\eta }_j(\widehat{\phi }_j^\star ) < \infty . \end{aligned}$$
From Lemma 22, we derive that \(\Vert \phi ^\star \Vert _{\mathbb {A}_s} < \infty \). Hence, Theorem 8(c) yields that
$$\begin{aligned} \sup _{(j,k) \in \mathcal {Q}} \big [\#\mathcal {T}_j\big ]^s \, \Lambda _{jk} \simeq \sup _{(j,k) \in \mathcal {Q}} \big [ \#\mathcal {T}_j - \#\mathcal {T}_0 + 1 \big ]^s \, \Lambda _{jk} < \infty . \end{aligned}$$
Let \(0< \varepsilon < s\) and choose \(\delta > 0\) such that
$$\begin{aligned} 0 < s - \varepsilon = \frac{s}{1+\delta } =: t. \end{aligned}$$
This leads to
$$\begin{aligned}&(\#\mathcal {T}_j) \log ^2(1+\#\mathcal {T}_j) \lesssim (\#\mathcal {T}_j)^{1+\delta } {\mathop {\lesssim }\limits ^{(80)}}\Lambda _{jk}^{-(1+\delta )/s} \\&\quad = \Lambda _{jk}^{-1/t} \quad \text {for all } (j,k) \in \mathcal {Q}. \end{aligned}$$
From Theorem 8(b) and the geometric series, it follows that
$$\begin{aligned}&\sum _{(j,k) \le (j',k')} \Lambda _{jk}^{-1/t} {\mathop {\lesssim }\limits ^{(29)}}\sum _{(j,k) \le (j',k')} (q_{\mathrm{lin}}^{1/t})^{|(j',k')|-|(j,k)|} \Lambda _{j'k'}^{-1/t} \\&\quad \lesssim \Lambda _{j'k'}^{-1/t} \quad \text {for all } (j',k') \in \mathcal {Q}. \end{aligned}$$
Combining the last two estimates, we see that
$$\begin{aligned} \left[ \sum _{(j,k) \le (j',k')} (\#\mathcal {T}_j)\log ^2(1+\#\mathcal {T}_j)\right] ^{s-\varepsilon } \lesssim \Lambda _{j'k'}^{-(s-\varepsilon )/t} = \Lambda _{j'k'}^{-1} \text { for all } (j',k') \in \mathcal {Q}. \end{aligned}$$
This concludes the proof. \(\square \)

7 Hyper-singular integral equation

We only sketch the setting and refer to [36] for further details and proofs. Given \(f: \Gamma \rightarrow \mathbb {R}\), the hyper-singular integral equation seeks \(u^\star :\Gamma \rightarrow \mathbb {R}\) such that
$$\begin{aligned} (W u^\star )(x) := -\partial _{\varvec{n}(x)} \int _\Gamma \partial _{\varvec{n}(y)} G(x-y) u^\star (y) \,\mathrm{d}y = f(x) \quad \text {for all } x \in \Gamma , \end{aligned}$$
where \(\partial _{\varvec{n}}\) denotes the normal derivative with the outer unit normal vector \(\varvec{n}(\cdot )\) on \(\Gamma \subseteq \partial \Omega \). For \(0 \le \alpha \le 1\), define \(\widetilde{H}^\alpha (\Gamma ) := \big \{v \in H^\alpha (\Gamma )\,:\,{\text {supp}}(v) \subseteq {\overline{\Gamma }}\big \}\) and let \(H^{-\alpha }(\Gamma )\) be its dual space with respect to \(\langle \cdot \,,\,\cdot \rangle \). Note that \(\widetilde{H}^{\pm \alpha }(\Gamma ) = H^{\pm \alpha }(\Gamma )\) for \(\Gamma = \partial \Omega \). The hyper-singular integral operator \(W : \widetilde{H}^{1/2+s}(\Gamma ) \rightarrow H^{-1/2+s}(\Gamma )\) is a bounded linear operator for all \(-1/2 \le s \le 1/2\), which is even an isomorphism for \(-1/2< s < 1/2\). For \(s=0\), the operator W is symmetric and (since \(\Gamma \) is connected) positive semi-definite with kernel consisting of the constant functions. For \(\Gamma \subsetneqq \partial \Omega \), the operator \(W : \widetilde{H}^{1/2}(\Gamma ) \rightarrow H^{-1/2}(\Gamma )\) is hence an elliptic isomorphism. Moreover, for \(\Gamma = \partial \Omega \) and \(H^{\pm 1/2}_{*}(\Gamma ) := \big \{\psi \in H^{\pm 1/2}(\Gamma )\,:\,\langle \psi \,,\,1\rangle =0\big \}\), \(W : H^{1/2}_{*}(\Gamma ) \rightarrow H^{-1/2}_{*}(\Gamma )\) is an elliptic isomorphism. Therefore,
$$\begin{aligned} \langle \langle u\,,\,v\rangle \rangle := {\left\{ \begin{array}{ll} \langle Wu\,,\,v\rangle , &{}\quad \text {if } \Gamma \subsetneqq \partial \Omega ,\\ \langle Wu\,,\,v\rangle + \langle u\,,\,v\rangle , &{}\quad \text {if } \Gamma = \partial \Omega \end{array}\right. } \end{aligned}$$
defines a scalar product on \(\widetilde{H}^{1/2}(\Gamma )\), and the induced norm \(|||\,u\,||| := \langle \langle u\,,\,u\rangle \rangle ^{1/2}\) is an equivalent norm on \(\widetilde{H}^{1/2}(\Gamma )\). Let \(f \in H^{-1/2}(\Gamma )\). If \(\Gamma = \partial \Omega \), suppose additionally that \(f \in H^{-1/2}_{*}(\partial \Omega )\). Then, (81) admits a unique solution \(u^\star \in \widetilde{H}^{1/2}(\Gamma )\) resp. \(u^\star \in H^{1/2}_{*}(\partial \Omega )\), which is also the unique solution \(u^\star \in \widetilde{H}^{1/2}(\Gamma )\) of the variational formulation
$$\begin{aligned} \langle \langle u^\star \,,\,v\rangle \rangle = \langle f\,,\,v\rangle \quad \text { for all } v \in \widetilde{H}^{1/2}(\Gamma ). \end{aligned}$$
Given a mesh \(\mathcal {T}_\bullet \) of \(\Gamma \), let
$$\begin{aligned} \widetilde{\mathcal {S}}^1(\mathcal {T}_\bullet ) := \big \{v \in \widetilde{H}^{1/2}(\Gamma )\,:\,\forall T \in \mathcal {T}_\bullet \quad v|_T \text { is affine}\big \}. \end{aligned}$$
The Lax–Milgram theorem yields existence and uniqueness of \(u_\bullet ^\star \in \widetilde{\mathcal {S}}^1(\mathcal {T}_\bullet )\) such that
$$\begin{aligned} \langle \langle u_\bullet ^\star \,,\,v_\bullet \rangle \rangle = \langle f\,,\,v_\bullet \rangle \quad \text { for all } v_\bullet \in \widetilde{\mathcal {S}}^1(\mathcal {T}_\bullet ). \end{aligned}$$
With the corresponding weighted-residual error estimator, it holds that
$$\begin{aligned}&|||\,u^\star - u_\bullet ^\star \,||| \le C_{\mathrm{rel}} \, \eta _\bullet (u_\bullet ^\star ) := \bigg ( \sum _{T \in \mathcal {T}_\bullet } \eta _\bullet (T,u_\bullet ^\star )^2 \bigg )^{1/2} , \\&\quad \quad \text { where } \eta _\bullet (T,u_\bullet ^\star )^2 := h_T \, \Vert f - W u_\bullet ^\star \Vert _{L^2(T)}^2; \end{aligned}$$
see [10, 17] for \(d = 2\) resp. [14] for \(d = 3\).

In [22, 26], optimal additive Schwarz preconditioners are derived for this setting. Hence, Algorithm 5 can also be used in the present setting. We refer to [20, Section 3.3] for the fact that the axioms of adaptivity (A1)–(A4) from Proposition 16 remain valid for the hyper-singular integral equation. All other arguments in Section 6 rely only on general properties of the PCG algorithm (Sect. 6.2), the properties (A1)–(A4), and the Hilbert space setting of \(|||\,\cdot \,|||\). Overall, this proves that our main results (Theorem 8 and Corollary 10) also cover the hyper-singular integral equation.



Acknowledgements

Open access funding provided by Austrian Science Fund (FWF). The authors thankfully acknowledge the support by the Austrian Science Fund (FWF) through Grant P27005 (AH, DP, SS) as well as Grant F65 (DP) and by CONICYT through FONDECYT project P11170050 (TF).


References

  1. Aurada, M., Ebner, M., Feischl, M., Ferraz-Leite, S., Führer, T., Goldenits, P., Karkulik, M., Mayr, M., Praetorius, D.: A Matlab implementation of adaptive 2D-BEM. Numer. Algorithms 67, 1–32 (2014)
  2. Aurada, M., Feischl, M., Führer, T., Karkulik, M., Praetorius, D.: Efficiency and optimality of some weighted-residual error estimator for adaptive 2D boundary element methods. Comput. Methods Appl. Math. 13(3), 305–332 (2013)
  3. Aurada, M., Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Local inverse estimates for non-local boundary integral operators. Math. Comput. 86(308), 2651–2686 (2017)
  4. Arioli, M., Georgoulis, E.H., Loghin, D.: Stopping criteria for adaptive finite element solvers. SIAM J. Sci. Comput. 35(3), A1537–A1559 (2013)
  5. Ainsworth, M., Guzmán, J., Sayas, F.-J.: Discrete extension operators for mixed finite element spaces on locally refined meshes. Math. Comput. 85(302), 2639–2650 (2016)
  6. Arioli, M., Liesen, J., Międlar, A., Strakoš, Z.: Interplay between discretization and algebraic computation in adaptive numerical solution of elliptic PDE problems. GAMM Mitt. 36(1), 102–129 (2013)
  7. Ainsworth, M., McLean, W., Tran, T.: The conditioning of boundary element equations on locally refined meshes and preconditioning by diagonal scaling. SIAM J. Numer. Anal. 36(6), 1901–1932 (1999)
  8. Binev, P., Dahmen, W., DeVore, R.: Adaptive finite element methods with convergence rates. Numer. Math. 97(2), 219–268 (2004)
  9. Bespalov, A., Haberl, A., Praetorius, D.: Adaptive FEM with coarse initial mesh guarantees optimal convergence rates for compactly perturbed elliptic problems. Comput. Methods Appl. Mech. Eng. 317, 318–340 (2017)
  10. Carstensen, C.: An a posteriori error estimate for a first-kind integral equation. Math. Comput. 66(217), 139–155 (1997)
  11. Carstensen, C., Feischl, M., Page, M., Praetorius, D.: Axioms of adaptivity. Comput. Math. Appl. 67(6), 1195–1253 (2014)
  12. Carstensen, C., Gedicke, J.: An adaptive finite element eigenvalue solver of asymptotic quasi-optimal computational complexity. SIAM J. Numer. Anal. 50(3), 1029–1057 (2012)
  13. Cascon, J.M., Kreuzer, C., Nochetto, R.H., Siebert, K.G.: Quasi-optimal convergence rate for an adaptive finite element method. SIAM J. Numer. Anal. 46(5), 2524–2550 (2008)
  14. Carstensen, C., Maischak, M., Praetorius, D., Stephan, E.P.: Residual-based a posteriori error estimate for hypersingular equation on surfaces. Numer. Math. 97(3), 397–425 (2004)
  15. Carstensen, C., Maischak, M., Stephan, E.P.: A posteriori error estimate and \(h\)-adaptive algorithm on surfaces for Symm’s integral equation. Numer. Math. 90(2), 197–213 (2001)
  16. Carstensen, C., Praetorius, D.: Averaging techniques for the effective numerical solution of Symm’s integral equation of the first kind. SIAM J. Sci. Comput. 27(4), 1226–1260 (2006)
  17. Carstensen, C., Stephan, E.P.: A posteriori error estimates for boundary element methods. Math. Comput. 64(210), 483–500 (1995)
  18. Dörfler, W.: A convergent adaptive algorithm for Poisson’s equation. SIAM J. Numer. Anal. 33(3), 1106–1124 (1996)
  19. Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rates for adaptive boundary element methods with data approximation. Part I: weakly-singular integral equation. Calcolo 51, 531–562 (2014)
  20. Feischl, M., Führer, T., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rates for adaptive boundary element methods with data approximation. Part II: hyper-singular integral equation. Electron. Trans. Numer. Anal. 44, 153–176 (2015)
  21. Feischl, M., Führer, T., Praetorius, D.: Adaptive FEM with optimal convergence rates for a certain class of nonsymmetric and possibly nonlinear problems. SIAM J. Numer. Anal. 52(2), 601–625 (2014)
  22. Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal additive Schwarz preconditioning for hypersingular integral equations on locally refined triangulations. Calcolo 54(1), 367–399 (2017)
  23. Feischl, M., Führer, T., Praetorius, D., Stephan, E.P.: Optimal preconditioning for the symmetric and nonsymmetric coupling of adaptive finite elements and boundary elements. Numer. Methods Partial Differ. Equ. 33(3), 603–632 (2017)
  24. Feischl, M., Karkulik, M., Melenk, J.M., Praetorius, D.: Quasi-optimal convergence rate for an adaptive boundary element method. SIAM J. Numer. Anal. 51, 1327–1348 (2013)
  25. Führer, T., Melenk, J.M., Praetorius, D., Rieder, A.: Optimal additive Schwarz methods for the hp-BEM: the hypersingular integral operator in 3D on locally refined meshes. Comput. Math. Appl. 70, 1583–1605 (2015)
  26. Führer, T.: Zur Kopplung von finiten Elementen und Randelementen. Ph.D. thesis, TU Wien (2014)
  27. Gantumur, T.: Adaptive boundary element methods with convergence rates. Numer. Math. 124(3), 471–516 (2013)
  28. Gantner, G., Haberl, A., Praetorius, D., Stiftner, B.: Rate optimal adaptive FEM with inexact solver for nonlinear operators. IMA J. Numer. Anal. 38, 1797–1831 (2018)
  29. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences, 4th edn. Johns Hopkins University Press, Baltimore (2013)
  30. Hackbusch, W.: Hierarchical Matrices: Algorithms and Analysis. Springer Series in Computational Mathematics, vol. 49. Springer, Heidelberg (2015)
  31. Hiptmair, R., Jerez-Hanckes, C., Mao, S.: Extension by zero in discrete trace spaces: inverse estimates. Math. Comput. 84(296), 2589–2615 (2015)
  32. Hiptmair, R., Mao, S.: Stable multilevel splittings of boundary edge element spaces. BIT 52(3), 661–685 (2012)
  33. Hiptmair, R., Wu, H., Zheng, W.: Uniform convergence of adaptive multigrid methods for elliptic problems and Maxwell’s equations. Numer. Math. Theory Methods Appl. 5(3), 297–332 (2012)
  34. Hiptmair, R., Zheng, W.: Local multigrid in \({H}(\mathbf{curl})\). J. Comput. Math. 27(5), 573–603 (2009)
  35. Karkulik, M., Pavlicek, D., Praetorius, D.: On 2D newest vertex bisection: optimality of mesh-closure and \(H^1\)-stability of \(L_2\)-projection. Constr. Approx. 38, 213–234 (2013)
  36. McLean, W.: Strongly Elliptic Systems and Boundary Integral Equations. Cambridge University Press, Cambridge (2000)
  37. Morin, P., Nochetto, R.H., Siebert, K.G.: Data oscillation and convergence of adaptive FEM. SIAM J. Numer. Anal. 38(2), 466–488 (2000)
  38. Oswald, P.: Multilevel Finite Element Approximation: Theory and Applications. Teubner Skripten zur Numerik. B.G. Teubner, Stuttgart (1994)
  39. Oswald, P.: Interface preconditioners and multilevel extension operators. In: Eleventh International Conference on Domain Decomposition Methods (London, 1998), pp. 97–104, Augsburg (1999)
  40. Śmigaj, W., Betcke, T., Arridge, S., Phillips, J., Schweiger, M.: Solving Boundary Integral Problems with BEM++. ACM, New York (2013)
  41. Stevenson, R.: Optimality of a standard adaptive finite element method. Found. Comput. Math. 7(2), 245–269 (2007)
  42. Stevenson, R.: The completion of locally refined simplicial partitions created by bisection. Math. Comput. 77(261), 227–241 (2008)
  43. Stevenson, R., van Venetië, R.: Optimal preconditioning for problems of negative order. ArXiv e-prints (2018)
  44. Toselli, A., Widlund, O.: Domain Decomposition Methods-Algorithms and Theory. Springer Series in Computational Mathematics, vol. 34. Springer, Berlin (2005)
  45. Xu, J., Chen, L., Nochetto, R.H.: Optimal multilevel methods for \(H({\rm grad})\), \(H({\rm curl})\), and \(H({\rm div})\) systems on graded and unstructured grids. In: DeVore, R., Kunoth, A. (eds.) Multiscale, Nonlinear and Adaptive Approximation, pp. 599–659. Springer, Berlin (2009)

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Thomas Führer (1)
  • Alexander Haberl (2)
  • Dirk Praetorius (2)
  • Stefan Schimanko (2), Email author
  1. Facultad de Matemáticas, Pontificia Universidad Católica de Chile, Santiago, Chile
  2. Institute for Analysis and Scientific Computing, TU Wien, Vienna, Austria
