1 Introduction

Additive Schwarz methods are among the most popular domain decomposition methods for solving partial differential equations because of their simplicity and their inherently parallel nature, cf. [8, 25, 33, 36]. In this paper we consider variants of the overlapping additive Schwarz method as preconditioners for solving scalar elliptic equations with highly varying coefficients in \(\mathbb {R}^3\). Based on the idea of adaptively constructing the coarse space by solving generalized eigenvalue problems in lower dimensions, we propose two variants of the algorithm in three dimensions. The resulting preconditioned system has a condition number bound which is inversely proportional to \(\lambda ^{*}\), where \(\lambda ^{*}\) is the threshold below which eigenfunctions are selected for the coarse space; the threshold serves as the parameter of the coarse space in the algorithm. The term “lower dimensions” refers to the fact that the eigenvalue problems are solved either on subdomain faces or on subdomain edges. The methods are effective, inherently parallel, and simple to construct. An additive Schwarz method with an adaptive coarse space based on a different idea, namely solving generalized eigenvalue problems in the overlaps, was recently considered in [8, 9, 26].

To see the motivation behind such an approach, we take a brief look at the steps of deriving convergence estimates for two-level overlapping Schwarz methods in the abstract Schwarz framework (cf. [25, 33, 36]). The framework rests on the assumption that any function u in the finite element space can be split into a coarse component \(u_0\) and local components associated with the subdomains, and that this splitting is stable with respect to the energy norm. The condition number bound of the preconditioned system then depends explicitly on the constant \(C_0^2\) entering the stability estimate, cf. [25, 33, 36]. Consequently, how robust the method is with respect to the mesh parameters and the varying coefficients depends on how robust \(C_0^2\) is with respect to those parameters and coefficients. It is already known, cf. e.g. [15], that even for highly varying coefficients this constant does not depend on the variation as long as the variations are strictly inside the subdomains. The difficulty arises when we have to deal with coefficients which vary along the subdomain boundaries. In that case, the crucial term which needs to be bounded in the stability estimate is the boundary term \(\sum _i C/h^2\Vert \sqrt{\alpha }(u-u_0)\Vert ^2_{L^2(\varOmega ^h_i)}\). Here \(\varOmega ^h_i\) denotes the layer of all elements along the subdomain boundary \(\partial \varOmega _i\), \(\alpha \) the varying coefficient, and C a constant. Because \(\alpha \) varies, estimating the \(L^2\) term, say using the Poincaré or a weighted Poincaré inequality, introduces the contrast into the estimate, i.e., the ratio between the largest and the smallest values of \(\alpha \), unless strong assumptions on the distribution of the coefficient are made, cf. [31]. A standard multiscale coarse space alone, cf. [15, 29], cannot make the method robust with respect to the contrast; some form of enrichment of the coarse space is needed that captures the strong variations along the subdomain boundary and improves the approximation. Including selected eigenfunctions of properly defined eigenvalue problems over the subdomain boundary into the coarse space enables us to capture those variations and provide estimates with constants independent of the contrast, which is the primary motivation behind the methods presented here.

The idea of adaptively constructing coarse spaces using eigenfunctions of certain eigenvalue problems has attracted much interest in recent years. Some earlier works in this direction are found in [1, 2] around the Neumann-Neumann type substructuring domain decomposition method, and in [6] around the algebraic multigrid method, although they were not aimed at multiscale problems. In the context of multiscale problems this idea has only recently started to emerge, with its first appearance in [12, 13], as well as in [26], and later on in [7, 9, 10, 11, 14, 35] in the additive Schwarz framework. The idea of adaptively constructing the coarse space, in other words the primal constraints, for the FETI-DP and BDDC substructuring domain decomposition methods has also been developed and extensively analyzed, cf. [17, 19, 20, 24, 34] in 2D and [5, 16, 18, 27, 30] in 3D.

The methods above are preconditioners to iterative methods, designed to obtain fine-scale solutions to elliptic equations. In this recent development of coarse space enrichment, it is important to take note of the parallel development in methods designed to solve elliptic equations directly by a coarse approximation, such as the Reduced Basis method, see, e.g., [4, 28], cf. also [32], and the Localized Orthogonal Decomposition method, see, e.g., [21, 23]. The constructions of these coarse approximations and the constructions of the coarse spaces in the iterative methods show many similarities.

The rest of this paper is organized as follows: In Sect. 2 we define our problem and its discrete formulation. In Sect. 3 we describe the overlapping Schwarz preconditioner, in Sect. 4 we introduce the two new coarse spaces, and in Sect. 5 we establish the convergence estimate for the preconditioned system. Finally, in Sect. 6, we present some numerical results of our method.

2 Differential problem and discrete formulation

Here we present our continuous test problem, the scalar elliptic equation with a coefficient that is piecewise constant on each fine-scale element, together with its discrete formulation. The continuous problem reads: find \(u \in H^1_0(\varOmega )\) such that

$$\begin{aligned} a(u,v)=f(v),\quad v\in H^1_0(\varOmega ), \end{aligned}$$
(1)

where

$$\begin{aligned} a(u,v):=(\alpha (\cdot )\nabla u,\nabla v)_{L^2(\varOmega )} \quad \text{ and } \quad f(v):=\int _\varOmega fv dx. \end{aligned}$$
(2)

We assume that \(\alpha \in L^\infty (\varOmega )\), \(\alpha (x)\ge \alpha _0 > 0\), and \(f \in L^2(\varOmega )\) with \(\varOmega \) being a polyhedral region in the space \(\mathbb {R}^3\).

Let \(\mathcal {T}_h(\varOmega )\) be a quasi-uniform triangulation of \(\varOmega \) into fine tetrahedral elements \(\tau \), where \(h=\max _{\tau \in \mathcal {T}_h(\varOmega )}\mathrm {diam}(\tau )\) is the mesh parameter of \(\mathcal {T}_h(\varOmega )\), cf. e.g. [3]. Keeping in mind that a tetrahedral element has four triangular faces and six edges, we denote a fine element face by \(\tau _t\) and a fine element edge by \(\tau _e\). Let \(V_h=V_h^0(\varOmega )\) be the finite element solution space of piecewise linear continuous functions:

$$\begin{aligned} V^h=V^0_h(\varOmega ):=\left\{ v\in C_0(\varOmega ): \; v_{|\tau }\in P_1(x), \quad v_{|\partial \varOmega }=0\right\} , \end{aligned}$$

where \(v_{|\tau }\) is the function restricted to an element \(\tau \in \mathcal {T}^h(\varOmega )\) and \(P_1(x)\) is the set of linear polynomials. Note that since the gradients of functions in \(V^h\) are piecewise constant, we have \((\alpha \nabla u, \nabla v)_{L^2(\tau )}=\nabla u \cdot \nabla v \int _\tau \alpha \,dx\). Thus, without any loss of generality, we assume that \(\alpha (x)=\alpha _\tau \) for \(x\in \tau \), where \(\alpha _\tau \) is a positive constant.

The discrete problem is then defined as: Find \(u_h \in V^0_h(\varOmega )\) such that

$$\begin{aligned} a(u_h,v)=f(v),\quad v\in V^0_h(\varOmega ). \end{aligned}$$
(3)

This problem has a unique solution by the Lax-Milgram theorem. Using the standard nodal basis functions \(\phi _i\), \(i=1,\ldots ,N_\eta \), where \(N_\eta \) is the number of vertex nodes of the tetrahedra in \(\varOmega \), the above equation may be stated as a system of algebraic equations

$$\begin{aligned} Au_h=f_h, \end{aligned}$$
(4)

where \((A)_{i,j}=a(\phi _j,\phi _i)\), \({(f_{h})}_i=f(\phi _i)\) and \({(u_h)}_j=u_h(x_{j})\). Here \(x_j\) is the coordinate of node j. The resulting symmetric system is in general very ill-conditioned; any standard iterative method, such as the conjugate gradient method, cf. e.g. [22], may perform poorly due to the ill-conditioning. The aim is to introduce an additive Schwarz preconditioner for the original problem (3) to obtain a well-conditioned system for which the convergence of the conjugate gradient method is independent of any variations in the coefficient, thereby improving the overall performance.
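As a brief illustration, the sketch below (not the paper's MATLAB implementation) runs a standard preconditioned conjugate gradient iteration on a small SPD system in Python. The matrix is only a toy stand-in for the stiffness matrix in (4) with a strongly varying coefficient, and the Jacobi preconditioner passed as apply_M is merely a placeholder for the additive Schwarz preconditioner developed in the following sections.

```python
# A minimal sketch of preconditioned CG for an SPD system like (4).
# The matrix is a 1D weighted Laplacian toy problem, not the 3D FEM matrix.
import numpy as np
import scipy.sparse as sp

def pcg(A, f, apply_M, tol=1e-6, maxit=1000):
    """Preconditioned CG; stops when the residual norm drops by the factor tol."""
    u = np.zeros_like(f)
    r = f - A @ u
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    r0 = np.linalg.norm(r)
    for it in range(maxit):
        Ap = A @ p
        a = rz / (p @ Ap)
        u += a * p
        r -= a * Ap
        if np.linalg.norm(r) <= tol * r0:
            return u, it + 1
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u, maxit

# Toy data: 1D weighted stiffness matrix with a highly varying "coefficient".
n = 200
alpha = np.where(np.arange(n + 1) % 7 == 0, 1e6, 1.0)   # per "element"
main = alpha[:-1] + alpha[1:]
A = sp.diags([-alpha[1:-1], main, -alpha[1:-1]], [-1, 0, 1], format='csr')
f = np.full(n, 100.0)

# Jacobi placeholder; replace by the additive Schwarz preconditioner below.
u, iters = pcg(A, f, apply_M=lambda r: r / A.diagonal())
print(f"PCG converged in {iters} iterations")
```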

3 Additive Schwarz method

The two-level additive Schwarz method is well known and well understood in the literature, and we refer to [36, Chapter 3] for an overview of the method.

3.1 Geometric structures

Let \(\varOmega \) be partitioned into a set of N nonoverlapping subdomains (or generalized subdomains), \(\{\varOmega _i\}_{i=1}^N\), such that each \(\overline{\varOmega }_i\) (the closure of \({\varOmega }_i\)) is a sum of elements (fine elements) from \(\mathcal {T}_h\), \({\varOmega }_i\cap {\varOmega }_j = \emptyset \) for \(i\ne j\), and \(\overline{\varOmega } = \cup _{i\in I}\overline{\varOmega }_i\). The intersection between two closed subdomains is either an empty set, a closed face (or a closed generalized face which is a sum of closures of fine element faces), a closed edge (or a closed generalized edge which is a sum of closures of fine element edges), or a vertex. Thus, the triangulation \(\mathcal {T}^h(\varOmega )\) is aligned with the subdomains \(\varOmega _i\). Each subdomain \(\varOmega _i\) inherits its own local triangulation \(\mathcal {T}^h(\varOmega _i)=\{\tau \in \mathcal {T}^h(\varOmega ):\tau \subset \overline{\varOmega }_i\}\) such that \(\bigcup _i\mathcal {T}^h(\varOmega _i)=\mathcal {T}^h(\varOmega )\). The corresponding set of overlapping subdomains \(\{\varOmega _i'\}_{i\in I}\) is then defined as follows: extend each subdomain \(\varOmega _i\) to \(\varOmega _i'\), by adding to \(\varOmega _i\) a layer of elements, i.e. sum of \(\overline{\tau }_k\in \mathcal {T}^h(\varOmega )\) such that \(\overline{\tau }_k\cap \partial \varOmega _{i}\ne \emptyset \).

Since the subdomains inherit their triangulation from the global triangulation \(\mathcal {T}^h(\varOmega )\), the nodes, the edges, and the faces of the tetrahedral elements along the interface \(\varGamma =\bigcup _i\partial \varOmega _i{\setminus }\partial \varOmega \) match across subdomain boundaries. The interface is composed of three basic structures: open (generalized) faces, open (generalized) edges, and subdomain vertices, cf. Fig. 1. The set of all open edges and subdomain vertices constitutes the structure which we call the wire basket, denoted by \(\mathcal {W}\). The set of all faces is given by \(\mathcal {F}=\{\mathcal {F}_{kl} : \ \mathcal {\overline{F}}_{kl}=\partial \varOmega _k\cap \partial \varOmega _l, \ |\mathcal {F}_{kl}|>0, \ k\in I, \ l\in I, \ \text{ and } \ k>l\}\), where \(|\cdot |\) is the natural surface measure. Note that since the subdomain vertices match across the interface, we have \(\varGamma =\bigcup _{kl} \mathcal {\overline{F}}_{kl}{\setminus }\partial \varOmega \). A closed edge is an intersection between two closed faces, consisting of closed fine element edges. The set of all edges is denoted by \(\mathcal {E}\), and the set of subdomain vertices by \(\mathcal {V}\). The wire basket is defined as \(\mathcal {W}=\bigcup _k\mathcal {\overline{E}}_k{\setminus }\partial \varOmega \).

Fig. 1

The unit cube domain, decomposed into 8 subdomains. The subdomain in the back at the top is meshed into a fine mesh. The faces that the subdomain \(\varOmega _1\) shares with its neighbours, \(\varOmega _2\) and \(\varOmega _3\) have been meshed. On the face, \(\mathcal {F}_{I_{13}}\), only the internal nodes have been meshed, while on the face \(\mathcal {F}_{12}\) all the nodes have been meshed. Also, an edge \(\mathcal {E}_1\) is indicated where the edge nodes have been dotted. The center red node is a vertex node (color figure online)

In the same way as each subdomain inherits a 3D triangulation, each face \(\mathcal {F}_{k l}\) inherits a 2D triangulation which we denote by \(\mathcal {T}_h(\mathcal {F}_{k l})\), and each edge \(\mathcal {E}_k\) inherits a 1D triangulation which we denote by \(\mathcal {T}_h(\mathcal {E}_k)\). For each of the structures, \(\varOmega \), \(\overline{\varOmega }\), \(\varOmega _k\), \(\overline{\varOmega }_k\), \(\mathcal {F}\), \(\mathcal {E}\), and \(\mathcal {W}\), we use \(\varOmega _h\), \(\overline{\varOmega }_h\), \(\varOmega _{k,h}\), \(\overline{\varOmega }_{k,h}\), \(\mathcal {F}_h\), \(\mathcal {E}_h\), and \(\mathcal {W}_h\), respectively, to denote the corresponding set of nodal points (vertices of the elements of \(\mathcal {T}_h\)) which are on the structure.

3.2 Space decomposition, subproblems, and preconditioned system

Let the two local subspaces on \(\varOmega _k\) be defined as

$$\begin{aligned} V_h(\varOmega _k):= & {} \{u_{|\overline{\varOmega }_k}: u \in V_h\}, \nonumber \\ V^0_h(\varOmega _k):= & {} V_h(\varOmega _k)\cap H^1_0(\varOmega _k). \end{aligned}$$
(5)

Functions of \(V^0_h(\varOmega _k)\) are extended by zero to the rest of the domain \({\overline{\varOmega }}{\setminus }{\overline{\varOmega }}_k\). For ease of presentation, we denote the extended space by the same symbol \(V^0_h(\varOmega _k)\). The same definitions apply to the extended subdomains \(\varOmega _i'\). Let us decompose the finite element solution space into a coarse space and N local subspaces

$$\begin{aligned} V^h=V_0^{(k)}+\sum _{i=1}^N V_i. \end{aligned}$$
(6)

Here \(V_i=V^0_h(\varOmega _i')\) are the local function spaces associated with the overlapping subdomains \(\varOmega _i'\) for \(i=1, \ldots , N\), extended by zero to the rest of \(\varOmega \), cf. (5). Further, \(V_0^{(k)}\), \(k=1,2\), is the coarse space. In the next section, we introduce the two coarse spaces, the wire basket based coarse space \(V_0^{(1)}=V_{\mathcal W}\), cf. (14), and the vertex based coarse space \(V_0^{(2)}=V_{{\mathcal {V}}}\), cf. (21). In both cases, the coarse space is a relatively small subspace of the finite element space \(V_h\).

We define projection-like operators, \(P_0^{(k)}\) for \(k=1, 2\), and \(P_i\) for \(i=1,\ldots ,N\), as follows,

$$\begin{aligned} a(P_0^{(k)}u,v)=a(u,v)\quad \forall v\in V_0^{(k)},\quad k=1,2,\\ a(P_iu,v)=a(u,v)\quad \forall v\in V_i,\quad i=1,\ldots ,N. \end{aligned}$$

Defining the additive operator \(P^{(k)}=P_0^{(k)} + \sum _i P_i\), we get the following system of equations, equivalent to the original problem (3), in operator form,

$$\begin{aligned} P^{(k)}u_h=g^{(k)}, \end{aligned}$$
(7)

where \(g^{(k)}=g_0^{(k)}+\sum _i g_i\) with \(g_0^{(k)}=P_0^{(k)}u_h\) and \(g_i=P_iu_h\), for \(u_h\) the solution of (3). The right-hand side functions \(g_0^{(k)},g_i\) can be computed in parallel without knowing \(u_h\), cf. e.g. [33, 36].
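In matrix terms, applying the inverse of the additive Schwarz preconditioner amounts to one coarse solve plus independent local solves on the overlapping subdomains, which is what makes the method inherently parallel. The following sketch is a simplified illustration (not the paper's implementation); the index lists subdomain_dofs and the matrix R0, whose columns represent a basis of the coarse space \(V_0^{(k)}\), are assumed to be given, and their construction is the subject of Sect. 4.

```python
# A simplified sketch of the action of the two-level additive Schwarz
# preconditioner behind P^{(k)}: one coarse solve plus independent local solves.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_as_preconditioner(A, subdomain_dofs, R0):
    """Return a function r -> M^{-1} r for the two-level additive Schwarz method."""
    # Factorize each local Dirichlet matrix A_i once.
    local = []
    for dofs in subdomain_dofs:
        Ai = sp.csc_matrix(A[dofs, :][:, dofs])
        local.append((np.asarray(dofs), spla.factorized(Ai)))
    # Factorize the coarse matrix A_0 = R0^T A R0 once.
    A0 = sp.csc_matrix(R0.T @ (A @ R0))
    coarse_solve = spla.factorized(A0)

    def apply_M(r):
        z = R0 @ coarse_solve(R0.T @ r)        # coarse correction
        for dofs, solve in local:              # local corrections (parallelizable)
            z[dofs] += solve(r[dofs])
        return z

    return apply_M
```

Passing the returned function as the preconditioner of a conjugate gradient iteration then corresponds to solving the preconditioned system (7).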

4 Coarse spaces with enrichment

For our additive Schwarz method we introduce two alternative coarse spaces. The first one is based on enriching a wire basket coarse space, and the second one is based on enriching a vertex based coarse space.

Discrete harmonic extensions are used to extend our functions from subdomain boundaries into subdomains. We define our discrete harmonic extension operator below.

Definition 1

Let \(u|_{\partial \varOmega _k}\) be u restricted to \(\partial \varOmega _k\). We define the discrete harmonic extension operator \(\mathcal {H}_k:V_h(\varOmega _k)\rightarrow V_h(\varOmega _k)\) in \(\varOmega _k\) as follows,

$$\begin{aligned} \left\{ \begin{array}{ll} a_{|\varOmega _k}(\mathcal {H}_k u,v)=0&{}\quad \forall v\in V^0_h(\varOmega _k),\\ \mathcal {H}_ku=u_{|\partial \varOmega _k} &{}\quad \mathrm {on} \quad \partial \varOmega _k. \end{array}\right. \end{aligned}$$
(8)

A function \(u\in V_h(\varOmega _k)\) is locally discrete harmonic in \(\varOmega _k\) if \(\mathcal {H}_ku=u\). If all restrictions of \(u\in V_h\) to the subdomains are locally discrete harmonic, then the function is said to be (piecewise) discrete harmonic in \(\varOmega \).

Functions that are locally discrete harmonic have the minimum energy property locally. This property is well known, but for completeness we restate it in the following lemma, cf. [36] for further details on discrete harmonic extensions.

Lemma 1

Let \(u\in V_h(\varOmega _k)\) be given such that \(u=\mathcal {H}_k u\) in the sense of Definition 1, then it follows that

$$\begin{aligned} a_{|\varOmega _k}(u,u)=\min _{\{v| v=u \text { on } \partial \varOmega _k\}} a_{|\varOmega _k}(v,v). \end{aligned}$$
(9)
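Algebraically, the discrete harmonic extension is obtained by eliminating the interior unknowns of the subdomain. The sketch below illustrates this under the assumption that the local stiffness matrix A_k of \(a_{|\varOmega _k}(\cdot ,\cdot )\) and the index arrays of interior and boundary nodes of \(\varOmega _k\) are available; it is an illustration only, not the paper's implementation.

```python
# A minimal algebraic sketch of Definition 1: given boundary values u_B, the
# interior values of the discrete harmonic extension solve A_II u_I = -A_IB u_B.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def discrete_harmonic_extension(A_k, interior, boundary, u_B):
    """Return the nodal vector of H_k u on Omega_k (boundary values kept)."""
    A_II = sp.csc_matrix(A_k[interior, :][:, interior])
    A_IB = A_k[interior, :][:, boundary]
    u = np.zeros(A_k.shape[0])
    u[boundary] = u_B
    u[interior] = spla.spsolve(A_II, -A_IB @ u_B)   # minimal-energy interior values
    return u
```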

4.1 Wire basket based coarse space

The wire basket based coarse space consists of basis functions, one for each node in the wire basket \(\mathcal {W}\), plus eigenfunctions corresponding to the first few eigenvalues of generalized eigenvalue problems associated with the faces, cf. Definition 3.

For any face \(\mathcal {F}_{k l}\), let \(V_h(\mathcal {F}_{k l})\) be the space of piecewise linear continuous functions on the face \(\mathcal {F}_{k l}\in \mathcal {F}\), and \(V_h^0(\mathcal {F}_{k l})\) the corresponding subspace of functions with zero boundary values, that is

$$\begin{aligned} V_h^0(\mathcal {F}_{k l}) := \{v\in V_h(\mathcal {F}_{k l}): \ v(x)=0, \quad \ \forall x\in \partial \mathcal {F}_{k l}\}. \end{aligned}$$

We need the following interpolation operator \(I_\mathcal {W}:V_h\rightarrow V_h\).

Definition 2

For any \(u\in V_h\), let \(I_\mathcal {W}u\) be discrete harmonic (cf. Definition 1) such that

$$\begin{aligned} (I_{\mathcal {W}} u) (x)= & {} u(x) \quad \forall x\in \mathcal {W}_h, \\ a_{\mathcal {F}_{k l}}((I_{\mathcal {W}}u)_{|\mathcal {F}_{k l}},v)= & {} 0 \qquad \forall v\in V_h^0(\mathcal {F}_{k l}) \quad \forall \mathcal {F}_{k l}\subset \varGamma , \end{aligned}$$

where

$$\begin{aligned} a_{\mathcal {F}_{k l}}(u,v):= & {} \sum _{\tau _t\in \mathcal {T}_h(\mathcal {F}_{k l})}\overline{\alpha }_{\tau _t}\int _{\tau _t }\nabla u \nabla v dx, \qquad u,v\in V_h(\mathcal {F}_{k l}), \end{aligned}$$
(10)

and \(\overline{\alpha }_{\tau _t}=\max \{\alpha _{\tau _-},\alpha _{\tau _+}\}\) for a triangle \(\tau _t\) which is an element of the 2D triangulation of the face \(\mathcal {F}_{k l}\), such that \(\overline{\tau }_t=\partial \tau _-\cap \partial \tau _+\) for \(\tau _+\in \mathcal {T}_h(\varOmega _k)\) and \(\tau _-\in \mathcal {T}_h(\varOmega _l)\).

For each face \(\mathcal {F}_{k l}\), we define the following generalized eigenvalue problem.

Definition 3

For \(i=1,\ldots ,{\hat{M}}\), where \({\hat{M}}=\dim (V_h^0(\mathcal {F}_{k l}))\), find \((\lambda ^i_{\mathcal {F}_{k l}}, \xi ^i_{\mathcal {F}_{k l}})\in \left( \mathbb {R}\times V^0_h(\mathcal {F}_{k l})\right) \) such that \(\lambda _{\mathcal {F}_{k l}}^1 \le \cdots \le \lambda _{\mathcal {F}_{k l}}^{\hat{M}}\) and

$$\begin{aligned} a_{\mathcal {F}_{k l}}(\xi ^i_{\mathcal {F}_{k l}},v)=\lambda _{\mathcal {F}_{k l}}^i b_{\mathcal {F}_{k l}}(\xi ^i_{\mathcal {F}_{k l}}, v)\quad \forall v\in V^0_h(\mathcal {F}_{k l}), \end{aligned}$$
(11)

where \(a_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) is defined in (10), and \(b_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) as follows,

$$\begin{aligned} b_{\mathcal {F}_{k l}}(u,v):=\sum _{x\in \mathcal {F}_{k l,h}}\overline{\alpha }_xu(x)v(x), \qquad u,v\in V_h(\mathcal {F}_{k l}), \end{aligned}$$
(12)

where \(\overline{\alpha }_x=\max \{\alpha _\tau : x\in \partial \tau , \tau \in \mathcal {T}_h\}\). Here \(V^0_h(\mathcal {F}_{k l})\) is the space of piecewise linear continuous functions on \(\mathcal {T}_h(\mathcal {F}_{k l})\) which are equal to zero on \( \partial \mathcal {F}_{k l}\).

Note that the bilinear forms \(a_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) and \(b_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) in (11) are symmetric and positive definite on \(V^0_h(\mathcal {F}_{k l})\). We extend \(\xi ^i_{\mathcal {F}_{k l}}\) by zero to the whole interface \(\varGamma \), and then as a discrete harmonic function inside each subdomain, denoting the extended function by the same symbol. Now, let

$$\begin{aligned} V_{\mathcal {F}_{k l}}^{en}:= \mathrm {span} \{ \xi ^i_{\mathcal {F}_{k l}} \}_{i=1}^{m_{k l}} \end{aligned}$$
(13)

be the space of eigenfunctions, where \(m_{k l}\) is a nonnegative integer smaller than or equal to \({\hat{M}}\), a number which is either prescribed or decided adaptively.
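As an illustration of the adaptive choice of \(m_{k l}\), the sketch below solves a generalized eigenvalue problem of the form (11) and keeps the eigenpairs with eigenvalues below a threshold \(\lambda ^{*}\). The matrices are a small one-dimensional toy stand-in for the assembled face forms \(a_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) and \(b_{\mathcal {F}_{k l}}(\cdot ,\cdot )\); in the actual method they are assembled on the 2D face triangulation.

```python
# A minimal sketch of the adaptive selection in Definition 3, assuming A_F and
# B_F (representing a_F in (10) and b_F in (12) on the interior face nodes)
# are already assembled.  Eigenpairs below lambda_star enrich the coarse space.
import numpy as np
import scipy.linalg as sla

def select_eigenfunctions(A_F, B_F, lambda_star):
    """Solve A_F x = lambda B_F x; keep eigenvectors with lambda < lambda_star."""
    lam, X = sla.eigh(A_F, B_F)            # eigenvalues in ascending order
    m = int(np.sum(lam < lambda_star))     # this is m_kl
    return lam[:m], X[:, :m]

# 1D toy stand-in for a face problem: element coefficients with one inclusion.
w = np.ones(26); w[10:13] = 1e6
A_F = np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)
B_F = np.diag(np.maximum(w[:-1], w[1:]))   # nodal maxima, in the spirit of (12)
lam, X = select_eigenfunctions(A_F, B_F, lambda_star=0.05)
print(f"selected m_kl = {X.shape[1]} eigenfunctions; smallest eigenvalues {lam}")
```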

The wire basket based coarse space is then defined as

$$\begin{aligned} V_{{\mathcal {W}}}=I_{\mathcal {W}}V_h + \sum _{\mathcal {F}_{k l} \in \mathcal {F}} V_{\mathcal {F}_{k l}}^{en}. \end{aligned}$$
(14)

The space \(I_{\mathcal {W}}V_h\) is a natural extension of the 2D multiscale coarse space introduced in [15] to 3D, and the face enrichment spaces \(V_{\mathcal {F}_{k l}}^{en}\) are natural extensions of the edge enrichment spaces introduced in [14] to 3D.

4.2 Vertex based coarse space

The vertex based coarse space consists of basis functions, one for each node in \(\mathcal {V}\), plus eigenfunctions corresponding to the first few eigenvalues of generalized eigenvalue problems associated with the edges, cf. Definition 5, and the faces, cf. Definition 6.

For any edge \(\mathcal {E}_i\), let \(V_h(\mathcal {E}_i)\) be the space of piecewise linear continuous functions on the edge \(\mathcal {E}_i\in \mathcal {E}\), and \(V_h^0(\mathcal {E}_i)\) the corresponding subspace of functions with zero boundary values, that is

$$\begin{aligned} V_h^0(\mathcal {E}_i) := \{v\in V_h(\mathcal {E}_i): \ v(x)=0, \ \forall x\in \partial \mathcal {E}_i\}. \end{aligned}$$

We need the following interpolation operator \(I_{\mathcal V}:V_h\rightarrow V_h\).

Definition 4

For any \(u\in V_h\), let \(I_{{\mathcal {V}}}u\) be discrete harmonic (cf. Definition 1) such that

$$\begin{aligned} (I_{{\mathcal {V}}} u) (x)= & {} u(x) \quad \forall x\in \mathcal {V}, \\ (I_{{\mathcal {V}}} u) (x)= & {} 0 \quad \forall x\in \varGamma _h{\setminus } \mathcal {W}_h, \\ a_{\mathcal {E}_i}((I_{{\mathcal {V}}}u)_{|\mathcal {E}_i},v)= & {} 0 \quad \forall v\in V_h^0(\mathcal {E}_i) \quad \forall \mathcal {E}_i\subset \mathcal {E}, \end{aligned}$$

where

$$\begin{aligned} a_{\mathcal {E}_i}(u,v):= & {} \sum _{e\in \mathcal {T}_h(\mathcal {E}_i)}\overline{\alpha }_e\int _e u' v' d x, \qquad u,v\in V_h(\mathcal {E}_i), \end{aligned}$$
(15)

and \(\overline{\alpha }_e=\max \{\alpha _\tau : \tau \in \mathcal {T}_h, \ e\subset \partial \tau \}\).

For each edge \(\mathcal {E}_k\), we define the following generalized eigenvalue problem.

Definition 5

For \(i=1,\ldots ,M_{{\mathcal {E}}_k}\), where \(M_{\mathcal E_k}=\dim (V_h^0(\mathcal {E}_k))\), find \((\lambda ^i_{\mathcal {E}_k}, \xi ^i_{\mathcal {E}_k})\in \left( \mathbb {R}\times V^0_h(\mathcal {E}_k)\right) \) such that \(\lambda _{\mathcal {E}_k}^1 \le \cdots \le \lambda _{\mathcal {E}_k}^{M_{{\mathcal {E}}_k}} \) and

$$\begin{aligned} a_{\mathcal {E}_k}(\xi ^i_{\mathcal {E}_k},v)=\lambda _{\mathcal {E}_k}^i b_{\mathcal {E}_{k}}(\xi ^i_{\mathcal {E}_k},v)\quad \forall v\in V^0_h(\mathcal {E}_k), \end{aligned}$$
(16)

where \(a_{\mathcal {E}_k}(\cdot ,\cdot )\) is defined in (15), and

$$\begin{aligned} b_{\mathcal {E}_k}(u,v):= & {} h^{-1}\sum _{x\in \mathcal {E}_{k,h}}\overline{\alpha }_x u(x)v(x), \qquad u,v\in V_h(\mathcal {E}_k), \end{aligned}$$
(17)

with \(\overline{\alpha }_x\) from Definition 3.

Note that, by definition, the bilinear forms \(a_{\mathcal {E}_{k}}(\cdot ,\cdot )\) and \(b_{\mathcal {E}_k}(\cdot ,\cdot )\) in (16) are both symmetric and positive definite on \(V^0_h(\mathcal {E}_k)\).
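The edge forms in (15) and (17) are particularly simple to assemble, since each edge carries a one-dimensional mesh. The following sketch is an illustration under the stated assumptions (not the paper's code): it assembles \(a_{\mathcal {E}_k}(\cdot ,\cdot )\) as a weighted 1D stiffness matrix and \(b_{\mathcal {E}_k}(\cdot ,\cdot )\) as a scaled diagonal matrix on the interior edge nodes, where the arrays alpha_bar_e and alpha_bar_x are assumed to hold the maxima \(\overline{\alpha }_e\) and \(\overline{\alpha }_x\) over the adjacent tetrahedra.

```python
# A minimal sketch assembling the edge forms (15) and (17) for one edge E_k
# with nodes x_0 < ... < x_{M+1}, the end nodes lying on subdomain vertices.
import numpy as np

def edge_forms(x, alpha_bar_e, alpha_bar_x, h):
    """Return (A_E, B_E) on the interior edge nodes x_1 .. x_M."""
    M = len(x) - 2
    A_E = np.zeros((M, M))
    for e in range(M + 1):                      # loop over 1D edge elements
        k = alpha_bar_e[e] / (x[e + 1] - x[e])  # element stiffness weight
        for (i, j) in [(e - 1, e - 1), (e - 1, e), (e, e - 1), (e, e)]:
            if 0 <= i < M and 0 <= j < M:
                A_E[i, j] += k if i == j else -k
    B_E = np.diag(alpha_bar_x) / h              # b_E from (17)
    return A_E, B_E
```

Solving the resulting generalized eigenvalue problem \(A_E\xi =\lambda B_E\xi \) and keeping the eigenvectors with eigenvalues below the threshold then gives the edge enrichment space \(V_{{\mathcal {E}}_k}^{en}\) in (20).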

For each face \(\mathcal {F}_{k l}\), let

$$\begin{aligned} \overline{\mathcal {F}}_{k l}^B =\bigcup _{\{\tau _t : \ \overline{\tau }_t\cap \partial \mathcal {F}_{k l}\ne \emptyset \}} \overline{\tau }_t \end{aligned}$$

be the sum of closed fine triangles on the face \(\mathcal {F}_{k l}\) that touch the wire basket, and \(\mathcal {F}_{k l}^I=\mathcal {F}_{k l} {\setminus } \overline{\mathcal {F}}_{k l}^B\) the interior of the sum of closed triangles that are lying strictly inside the face. Obviously, \(\overline{\mathcal {F}}_{k l} = \overline{\mathcal {F}}_{k l}^B \cup \mathcal {F}_{k l}^I\).

For each face \(\mathcal {F}_{k l}\), we now define the following generalized eigenvalue problem.

Definition 6

For \(i=1,\ldots ,\hat{M}\), where \({\hat{M}} =\dim (V_h^0(\mathcal {F}_{k l}))\), find \((\lambda ^i_{\mathcal {F}_{k l,I}}, \xi ^i_{\mathcal {F}_{k l,I}}) \in \left( \mathbb {R}\times V_h^0(\mathcal {F}_{k l})\right) \) such that \(\lambda _{\mathcal {F}_{k l, I}}^1\le \cdots \le \lambda _{\mathcal {F}_{k l, I}}^{\hat{M}}\) and

$$\begin{aligned} a_{\mathcal {F}_{k l,I}}(\xi ^i_{\mathcal {F}_{k l,I}},v)=\lambda _{\mathcal {F}_{k l,I}}^i b_{\mathcal {F}_{k l}}(\xi ^i_{\mathcal {F}_{k l,I}}, v)\quad \forall v\in V_h^0(\mathcal {F}_{k l}), \end{aligned}$$
(18)

where

$$\begin{aligned} a_{\mathcal {F}_{k l,I}}(u,v):= & {} \sum _{\tau _t\in \mathcal {F}_{k l}^I}\overline{\alpha }_{\tau _t}\int _{\tau _t}\nabla u \nabla v dx,\qquad u,v\in V_h(\mathcal {F}_{k l}), \end{aligned}$$
(19)

where \(\overline{\alpha }_{\tau _t}\) is defined in Definition 2.

Note that, by definition, the bilinear form \(b_{\mathcal {F}_{k l}}(\cdot ,\cdot )\) is symmetric and positive definite on \(V_h^0(\mathcal {F}_{k l})\), while the bilinear form \(a_{\mathcal {F}_{k l,I}}(\cdot ,\cdot )\) in (18) is symmetric and positive semidefinite on this space. Its kernel is known: it is the one-dimensional space of functions that are constant over \(\mathcal {F}_{k l}^I\); consequently, we have \(0=\lambda _{\mathcal {F}_{k l, I}}^1< \lambda _{\mathcal {F}_{k l, I}}^2 \le \cdots \le \lambda _{\mathcal {F}_{k l, I}}^{\hat{M}}\).

Analogously to the wire basket based coarse space, we extend \(\xi ^i_{\mathcal {E}_k}\) and \(\xi ^i_{\mathcal {F}_{k l,I}}\) by zero to the whole of \(\varGamma \), and then as discrete harmonic functions inside each subdomain, denoting the extended functions by the same symbols. Now, define the two spaces of eigenfunctions, one associated with the edges and one associated with the faces, as

$$\begin{aligned} V_{\mathcal {F}_{k l,I}}^{en}:= \mathrm {span} \{ \xi ^i_{\mathcal {F}_{k l,I}} \}_{i=1}^{n_{k l}} \quad \text{ and } \quad V_{{\mathcal {E}}_k}^{en}:= \mathrm {span} \{ \xi ^i_{{\mathcal {E}}_k} \}_{i=1}^{m_k}, \end{aligned}$$
(20)

respectively. Both \(n_{k l}\) and \(m_k\) are integers such that \(1\le n_{k l}\le {\hat{M}}\) and \(0 \le m_k \le M_{{\mathcal {E}}_k}\), and are either prescribed or decided adaptively.

The vertex based coarse space is then defined as

$$\begin{aligned} V_{{\mathcal {V}}}=I_{\mathcal {V}}V_h + \sum _{\mathcal {F}_{k l} \in \mathcal {F}} V_{\mathcal {F}_{k l,I}}^{en} +\sum _{\mathcal {E}_k \in \mathcal {E}} V_{{\mathcal {E}}_k}^{en}. \end{aligned}$$
(21)

Remark 1

It should be pointed out here that many of the calculations indicated by the use of piecewise discrete harmonic extensions in the definitions above are redundant since, in practice, they are often trivial extensions of zero on subdomain boundaries.

5 Convergence estimate for the preconditioned system

In this section, we prove that the condition number of our preconditioned system can be kept low, and independent of the contrast if our coarse space enrichments are appropriately chosen. The main result is stated in Theorem 1.

Theorem 1

Let \(k=1\) and \(k=2\) in the superscript refer to the two coarse spaces: the wire basket based coarse space and the vertex based coarse space, respectively. Then, for \(P^{(k)}\), cf. (7), we have

$$\begin{aligned} \left( C_0^{(k)}\right) ^{-2} a(u,u)\preceq a(P^{(k)}u,u)\preceq a(u,u), \quad u \in V^h, \end{aligned}$$
(22)

with

$$\begin{aligned} \left( C_0^{(1)}\right) ^2= & {} 1+\max _{\mathcal {F}_{k l} \in \mathcal {F}} \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}}, \\ \left( C_0^{(2)}\right) ^2= & {} 1+ \max \left\{ \max _{\mathcal {F}_{k l} \in \mathcal {F}} \left( \lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}\right) ^{-1}, \max _{\mathcal {E}_k \in \mathcal {E}} \left( \lambda _{\mathcal {E}_k}^{m_k+1}\right) ^{-1}\right\} . \end{aligned}$$

Here \(\lambda _{\mathcal {F}_{k l}}^{i}\) is defined in (11), \(\lambda _{\mathcal {E}_k}^i\) in (16), and \(\lambda _{\mathcal {F}_{k l,I}}^i\) in (18), and the integer parameters \(m_{k l},n_{k l}, m_k\) are defined in (13) and (20), respectively.

The proof is based on the abstract Schwarz framework, cf. e.g. [25, 33, 36] and is given at the end of the section. The following lemmas are required for the proof.

Remark 2

In the case of a constant \(\alpha \) and a regular mesh, the minimal eigenvalues of the problems are of \(\mathcal {O}((\frac{h}{H})^2)\) and the maximal eigenvalues are of \(\mathcal {O}(1)\).

Lemma 2

Let \(u\in V_h\) be such that \(u_k := u_{|\overline{\varOmega }_k}\) is either discrete harmonic in \(\varOmega _k\) with respect to \(a_{|\varOmega _k}(\cdot ,\cdot )\), i.e. \(u_k=\mathcal {H}_k u\), or equal to zero at all interior nodes in \(\varOmega _{k,h}\). Then it follows that

$$\begin{aligned} a_{|\varOmega _k}(u,u)\preceq h \sum _{x \in \partial \varOmega _{k,h}} \overline{\alpha }_x |u(x)|^2. \end{aligned}$$
(23)

In particular if u is zero at \(\mathcal {W}_h\), then

$$\begin{aligned} a_{|\varOmega _k}(u,u)&\preceq h\sum \limits _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(u,u), \end{aligned}$$
(24)

and if u is zero at all vertices of \(\partial \varOmega _k\) then

$$\begin{aligned} a_{|\varOmega _k}(u,u)&\preceq h^2\sum \limits _{\mathcal {E}_i\subset \partial \varOmega _k} b_{\mathcal {E}_i}(u,u)+h\sum \limits _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(u,u). \end{aligned}$$
(25)

Proof

The first part of the proof follows from the fact that a discrete harmonic function has the minimal energy among all functions taking the same values on the boundary. Hence, \(a_{|\varOmega _k}(u,u)\le a_{|\varOmega _k}(\hat{u},\hat{u})\) for any \(\hat{u}\in V_h(\varOmega _k)\) which is equal to u on \(\partial \varOmega _k\) and zero at the interior nodes \(\varOmega _{k,h}\). In the second case of the lemma we simply have \(u=\hat{u}\). Consequently,

$$\begin{aligned} a_{|\varOmega _k}(u,u)\le & {} a_{|\varOmega _k}(\hat{u},\hat{u}) = \sum _{\tau \in \mathcal {T}_h(\varOmega _k)} \alpha _\tau \int _\tau |\nabla \hat{u}|^2 \, d x \preceq h^{-2} \sum _{\tau \in \mathcal {T}_h(\varOmega _k)} \alpha _\tau \int _\tau |\hat{u}|^2 d x \\\preceq & {} h \sum _{\tau \in \varOmega _k^h} \alpha _{\tau } \sum _{x\in \partial \tau } |\hat{u}(x)|^2, \end{aligned}$$

where \(\varOmega _k^h\) is the boundary layer of width h, that is the sum of elements of \(\mathcal {T}_h(\varOmega _k)\) that touch (have a vertex on) the boundary \(\partial \varOmega _k\). We used a local inverse inequality and the discrete equivalence of the \(L^2\) norm on each \(\tau \). Finally, utilizing the fact that \(\hat{u}\) is zero at the interior nodal points and taking the maximum over \(\alpha _\tau \) such that \(x\in \partial \tau \), we get

$$\begin{aligned} a_{|\varOmega _k}(u,u) \preceq h \sum _{x \in \partial \varOmega _{k,h}} \overline{\alpha }_x |u(x)|^2. \end{aligned}$$

The last two statements of the lemma follow directly from the first statement and the definitions of the bilinear forms in (12) and (17). \(\square \)

Remark 3

The constant in the estimate of Lemma 2 equals \(C_1\,C_2\,C_3\), where \(C_1\) is the constant of the inverse inequality \(|u|_{H^1(\tau )}^2\le C_1 h^{-2} \Vert u\Vert _{L^2(\tau )}^2, \; u\in V^h, \;\tau \in T_h\), \(C_2\) is the squared constant of the inequality stating local equivalence of the \(L^2\) norm to the discrete local nodal \(l_2\) norm, i.e. \(\Vert u\Vert _{L^2(\tau )}^2 \le C_2 h \sum _{x_k \in \partial \tau } |u(x_k)|^2, \; u\in V^h,\; \tau \in T_h\), and \(C_3\) is the maximum over all \(x\in \partial \varOmega _{k,h}\) of the number of \(\tau \in T_h(\varOmega _k)\) such that x is a vertex of \(\tau \).

We also see from the proof that we could define the coefficient \(\overline{\alpha }_x\) as the sum of the \(\alpha _\tau \) over the elements \(\tau \) containing x, instead of taking their maximum.

We restate [14, Lemma 2.2] which contains important estimates for the eigenfunctions found in (11) and (16).

Lemma 3

Let V be a finite dimensional real space and consider the generalized eigenvalue problem: Find the eigenpair \((\lambda _k,\xi _k) \in \mathbb {R}\times V\) such that \(b(\xi _k,\xi _k)=1\) and

$$\begin{aligned} a(\xi _k,v)=\lambda _k b(\xi _k,v) \quad \forall v\in V, \end{aligned}$$

where the bilinear form \(b(\cdot ,\cdot )\) is symmetric positive definite, and the bilinear form \(a(\cdot ,\cdot )\) is symmetric positive semi-definite. Then there exist \(M=dim(V)\) eigenpairs with real eigenvalues ordered as follows \(0\le \lambda _1\le \cdots \le \lambda _M\). If \(\lambda _k\) is the smallest positive eigenvalue then the operator \(\varPi _m:V\rightarrow V\), which is defined for \(k-1\le m < M\) as

$$\begin{aligned} \varPi _mu=\sum _{k=1}^m b(u,\xi _k)\xi _k, \end{aligned}$$

is the \(b(\cdot ,\cdot )\)-orthogonal projection such that

$$\begin{aligned} |\varPi _mv|_a\le |v|_a\quad \text {and}\quad |v-\varPi _mv|_a \le |v|_a \quad \forall v\in V, \end{aligned}$$
(26)

and

$$\begin{aligned} \Vert v-\varPi _mv\Vert _b^2\le \frac{1}{\lambda _{m+1}}|v-\varPi _mv|_a^2 \quad \forall v\in V, \end{aligned}$$
(27)

where \(|v|_a^2=a(v,v)\) and \(\Vert v\Vert _b^2=b(v,v)\).

See [14, 35] for the proof.
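At the matrix level, with the eigenvectors of the pencil stored column-wise in a matrix X normalized so that \(X^TBX=I\) (as returned, for instance, by a standard generalized symmetric eigensolver), the projection takes the simple form \(\varPi _m u = X_mX_m^TBu\), where \(X_m\) collects the first m eigenvectors. The sketch below illustrates this and numerically checks the bound (27); the matrices are random SPD toy data used only for the check.

```python
# A matrix-level sketch of the projection Pi_m in Lemma 3.
import numpy as np
import scipy.linalg as sla

def b_orthogonal_projection(A, B, m):
    lam, X = sla.eigh(A, B)                 # B-orthonormal eigenvectors
    Xm = X[:, :m]
    proj = lambda u: Xm @ (Xm.T @ (B @ u))  # Pi_m u = sum_k b(u, xi_k) xi_k
    return proj, lam

rng = np.random.default_rng(0)
n, m = 30, 4
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                 # SPD stand-in for a(.,.)
B = np.diag(rng.uniform(1.0, 2.0, n))       # SPD stand-in for b(.,.)
proj, lam = b_orthogonal_projection(A, B, m)
u = rng.standard_normal(n)
w = u - proj(u)
# Check (27): ||u - Pi_m u||_b^2 <= |u - Pi_m u|_a^2 / lambda_{m+1}
assert w @ B @ w <= (w @ A @ w) / lam[m] + 1e-10
print("estimate (27) verified on the toy data")
```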

We introduce the wire basket based coarse space interpolation operator \(I_0^{{\mathcal {W}}}:V_h \rightarrow V_{{\mathcal {W}}} \subset V_h\) as

$$\begin{aligned} I_0^{{\mathcal {W}}}u:= & {} I_{{\mathcal {W}}} u +\sum _{\mathcal {F}_{k l}\in \mathcal {F}} \varPi ^{\mathcal {F}_{k l}}_{m_{k l}}(u - I_{{\mathcal {W}}} u), \end{aligned}$$
(28)

where \(\varPi ^{\mathcal {F}_{k l}}_{m_{k l}}:V_h \rightarrow V_{\mathcal {F}_{k l}}^{en} \subset V_h\) is defined as follows,

$$\begin{aligned} \varPi ^{\mathcal {F}_{k l}}_{m_{k l}} u= \sum _{i=1}^{m_{k l}} b_{\mathcal {F}_{k l}}(u|_{\mathcal {F}_{k l}},\xi ^i_{\mathcal {F}_{k l}})\xi ^i_{\mathcal {F}_{k l}}, \end{aligned}$$

cf. also (13) and (12).

We have the following lemma estimating the coarse space interpolant.

Lemma 4

Let the wire basket based coarse space interpolator \(I_0^{\mathcal W}\) be defined in (28). Then for \(u\in V_h\)

$$\begin{aligned} a(u - I_0^{{\mathcal {W}}} u,u - I_0^{{\mathcal {W}}} u) \preceq \left( 1+ \max _{\mathcal {F}_{k l} \in \mathcal {F}} \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} \right) a(u,u), \end{aligned}$$
(29)

where \(\lambda _{\mathcal {F}_{k l}}^{m_{kl}+1}\) is the \((m_{kl}+1)\)th smallest eigenvalue of the generalized eigenvalue problem in Definition 3.

Proof

Note that if we restrict \(w=u - I_0^{{\mathcal {W}}} u\) to \(\overline{\varOmega }_k\) then \(\mathcal {H}_k w= \mathcal {H}_k u - I_0^{{\mathcal {W}}} u\), as \(I_0^{\mathcal {W}}u\) is discrete harmonic. Thus, by the fact that \(\mathcal {H}_k\) and \(I-\mathcal {H}_k\) are \(a_{|\varOmega _k}\)-orthogonal projections in \(V_h(\varOmega _k)\), cf. Definition 1, we have

$$\begin{aligned} a_{|\varOmega _k}(w,w)= & {} a_{|\varOmega _k}(w-\mathcal {H}_kw,w-\mathcal {H}_kw)+ a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw) \\\le & {} a_{|\varOmega _k}(u,u)+ a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw). \end{aligned}$$

Thus it remains to estimate \(a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw) \). By Lemma 2 and the fact that \(\mathcal {H}_k w=w\) on \(\partial \varOmega _k\) we get

$$\begin{aligned} a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw) \preceq h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

Let us now consider one such face \(\mathcal {F}_{k l}\subset \partial \varOmega _k\). We have \(w=u-I_{\mathcal {W}}u - \varPi ^{\mathcal {F}_{k l}}_{m_{k l}}(u-I_{\mathcal {W}}u)\) on the face, cf. (28). Using Lemma 3 it follows that

$$\begin{aligned} b_{\mathcal {F}_{k l}}(w,w)= & {} \Vert (I-\varPi ^{\mathcal {F}_{k l}}_{m_{k l}})(u-I_{\mathcal {W}}u)\Vert _{b_{\mathcal {F}_{k l}}}^2 \preceq \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} |(I-\varPi ^{\mathcal {F}_{k l}}_{m_{k l}})(u-I_{\mathcal {W}}u)|_{a_{\mathcal {F}_{k l}}}^2 \\\preceq & {} \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} |u-I_{\mathcal {W}}u|_{a_{\mathcal {F}_{k l}}}^2. \end{aligned}$$

Here \(\Vert u\Vert _{b_{\mathcal {F}_{k l}}}^2:=b_{\mathcal {F}_{k l}}(u,u)\) and \(|u|_{a_{\mathcal {F}_{k l}}}^2:=a_{\mathcal {F}_{k l}}(u,u)\). Since by Definition 2 \((I_{\mathcal {W}}u)_{|\mathcal {F}_{k l}}\) is orthogonal to \((u-I_{\mathcal {W}}u)_{|\mathcal {F}_{k l}} \in V_h^0(\mathcal {F}_{k l})\) with respect to the bilinear form \(a_{\mathcal {F}_{k l}}(\cdot ,\cdot )\), we have

$$\begin{aligned} |u-I_{\mathcal {W}}u|_{a_{\mathcal {F}_{k l}}}^2 \le |u|_{a_{\mathcal {F}_{k l}}}^2. \end{aligned}$$

From the last two estimates, it follows that

$$\begin{aligned} h b_{\mathcal {F}_{k l}}(w,w) \preceq \frac{h}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} |u|_{a_{\mathcal {F}_{k l}}}^2 \preceq \frac{h}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}}\sum _{\tau _t \in \mathcal {T}_h(\mathcal {F}_{k l})} \sum _{x,y\in \partial \tau _t} \overline{\alpha }_{\tau _t}|u(x)-u(y)|^2. \end{aligned}$$
(30)

Here the last sum is over all pairs of vertices of a 2D face element \(\tau _t\). Using the definition of \(\overline{\alpha }_{\tau _t}\) and the fact that \(|u|_{H^1(\tau )}^2\asymp \mathrm {diam}(\tau )\sum _{x,y \in \partial \tau }|u(x)-u(y)|^2\) (here x, y are vertices of the 3D element \(\tau \)) we get

$$\begin{aligned} h b_{\mathcal {F}_{k l}}(w,w) \preceq \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}}\left( a_{|\varOmega _k}(u,u)+ a_{|\varOmega _l}(u,u)\right) . \end{aligned}$$
(31)

Finally, summing over the faces yields that

$$\begin{aligned} a_{|\varOmega _k}(u - I_0^{{\mathcal {W}}} u,u - I_0^{{\mathcal {W}}} u)\preceq \sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} \left( a_{|\varOmega _k}(u,u)+ a_{|\varOmega _l}(u,u)\right) , \end{aligned}$$
(32)

and then summing over the subdomains ends the proof. \(\square \)

For the next lemma we need a discrete partition of unity, i.e., for each subdomain we define \(\theta _i\) as a continuous function which is piecewise linear on \({{\mathcal {T}}}_h\) such that:

$$\begin{aligned} \theta _i(x)= \left\{ \begin{array}{ll} 1\quad &{}\quad \forall x\in \varOmega _{i,h}, \\ \frac{1}{N_x} &{} \quad \forall x\in \partial \varOmega _{i,h}{\setminus } \partial \varOmega _h, \\ 0&{}\quad {\text {otherwise}},\\ \end{array} \right. \end{aligned}$$
(33)

where \(N_x\) is the number of subdomains \(\varOmega _j\) such that \(x\in \overline{\varOmega }_{j,h}\). As an example, \(N_x=2\) for any nodal point x on a subdomain face.
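The nodal values of \(\theta _i\) can be computed directly from the subdomain membership of each node. The sketch below is a simple illustration under the assumption that, for each subdomain, the node sets of its closure and of its interior are available, together with the set of nodes on \(\partial \varOmega \).

```python
# A minimal sketch of the discrete partition of unity (33).
import numpy as np

def partition_of_unity(n_nodes, closure_nodes, interior_nodes, dirichlet_nodes):
    """Return a list of nodal vectors theta_i, one per subdomain."""
    # N_x = number of subdomain closures containing node x.
    N = np.zeros(n_nodes)
    for nodes in closure_nodes:
        N[nodes] += 1.0
    thetas = []
    for nodes, inner in zip(closure_nodes, interior_nodes):
        theta = np.zeros(n_nodes)
        theta[nodes] = 1.0 / N[nodes]      # 1/N_x on the subdomain boundary
        theta[inner] = 1.0                 # 1 at interior nodes
        theta[dirichlet_nodes] = 0.0       # 0 on the outer boundary of Omega
        thetas.append(theta)
    return thetas
```

By construction, the functions \(\theta _i\) sum to one at every node in \(\varOmega _h{\setminus }\partial \varOmega _h\).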

Lemma 5

Let the wire basket based coarse space interpolator \(I_0^{\mathcal W}\) be defined in (28), and let \(v_k=I_h(\theta _k (u - I_0^{{\mathcal {W}}} u))\) for any \(u\in V_h\), where \(I_h:C(\overline{\varOmega })\rightarrow V^h\) is the standard nodal piecewise linear interpolant. Then

$$\begin{aligned} a(v_k,v_k) \preceq \left( 1+ \max _{\mathcal {F}_{k l} \subset \partial \varOmega _k} \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}} \right) \sum \limits _{\partial \varOmega _l\cap \partial \varOmega _k=\mathcal {F}_{k l}} a_{|\varOmega _l}(u,u). \end{aligned}$$
(34)

The last sum is taken over all subdomains which share a face with \(\varOmega _k\).

Proof

Let \(w=u - I_0^{{\mathcal {W}}} u\). Note that

$$\begin{aligned} v_k(x)=I_h \theta _k w(x)= \left\{ \begin{array}{lr} w(x) \quad x\in \varOmega _{k,h} ,\\ \frac{1}{2} w(x) \quad x\in \mathcal {F}_{k l, h}, \ \mathcal {F}_{k l} \subset \partial \varOmega _k,\\ 0 \quad x \in \mathcal {W}_h\cap \partial \varOmega _k,\\ 0 \quad \mathrm {otherwise}. \end{array} \right. \end{aligned}$$
(35)

Thus

$$\begin{aligned} a(v_k,v_k)=a_{|\varOmega _k}(v_k,v_k)+ \sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k}a_{|\varOmega _l}(v_k,v_k). \end{aligned}$$
(36)

We first estimate the second term, that is the sum of the face terms. Note that \(v_{k|\varOmega _l}\) is zero at the interior nodes \(\varOmega _{l,h}\) and the boundary nodes \(\partial \varOmega _{l,h}{\setminus } \mathcal {F}_{k l,h}\). Thus by Lemma 2 and (35) we have

$$\begin{aligned} \sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} a_{|\varOmega _l}(v_k,v_k) \preceq h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(v_k,v_k)= \frac{h}{4}\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

This term has already been estimated in the proof of Lemma 4, cf. (31), that is

$$\begin{aligned} h b_{\mathcal {F}_{k l}}(w,w)\preceq \frac{1}{\lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}}\left( a_{|\varOmega _k}(u,u)+ a_{|\varOmega _l}(u,u)\right) . \end{aligned}$$

We now estimate the first term in (36), that is the restriction of the bilinear form to \(\varOmega _k\). By a triangle inequality, we can write

$$\begin{aligned} a_{|\varOmega _k}(v_k,v_k)\preceq a_{|\varOmega _k}(w,w)+ a_{|\varOmega _k}(w-v_k,w-v_k). \end{aligned}$$

The first term has already been estimated in the proof of Lemma 4, cf. (32). Also, note that \((w-v_k)(x)\) equals \(\frac{1}{2} w(x)\) when x is a face node, and zero when \(x\in \partial \varOmega _k\cap \mathcal {W}_h\) or \(x\in \varOmega _{k,h}\). By Lemma 2 and (35) we thus get

$$\begin{aligned} a_{|\varOmega _k}(w-v_k,w-v_k)\preceq h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w-v_k,w-v_k)= \frac{h}{4} \sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

Again, this term has been estimated in the proof of Lemma 4, cf. (31). Summing all those estimates ends the proof. \(\square \)

Analogous to the wire basket case, we now define the vertex based coarse space interpolator \(I_0^{{\mathcal {V}}}:V_h \rightarrow V_{{\mathcal {V}}}\subset V_h\) as

$$\begin{aligned} I_0^{{\mathcal {V}}}u:= & {} I_{{\mathcal {V}}} u + \sum _{\mathcal {F}_{k l}\in \mathcal {F}} \varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}} u +\sum _{\mathcal {E}_k\in \mathcal {E}} \varPi _{m_k}^{\mathcal {E}_k}(u - I_{{\mathcal {V}}} u), \end{aligned}$$
(37)

where \(\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}}:V_h \rightarrow V_{\mathcal {F}_{k l,I}}^{en} \subset V_h\) and \(\varPi ^{\mathcal {E}_k}_{m_k}:V_h \rightarrow V_{\mathcal {E}_k}^{en} \subset V_h\) are defined as follows,

$$\begin{aligned} \varPi _{n_{k l}}^{\mathcal {F}_{k l,I}}(u):= & {} \sum _{i=1}^{n_{k l}} b_{\mathcal {F}_{k l}} (u|_{\mathcal {F}_{k l}},\xi ^i_{\mathcal {F}_{k l,I}})\xi ^i_{\mathcal {F}_{k l,I}},\\ \varPi ^{\mathcal {E}_k}_{m_k} (u):= & {} \sum _{i=1}^{m_k} b_{\mathcal {E}_k}(u|_{\mathcal {E}_k},\xi ^i_{\mathcal {E}_k})\xi ^i_{\mathcal {E}_k}, \end{aligned}$$

cf. also (20), (12) and (17).

We have the following lemma.

Lemma 6

Let the vertex based coarse space interpolator \(I_0^{{\mathcal {V}}}\) be defined in (37), then for any \(u \in V_h\)

$$\begin{aligned} a(u - I_0^{{\mathcal {V}}} u,u - I_0^{{\mathcal {V}}} u) \preceq \left( 1+ \max \left\{ \max _{\mathcal {F}_{k l} \in \mathcal {F}} \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}}, \max _{\mathcal {E}_k \in \mathcal {E}} \frac{1}{\lambda _{\mathcal {E}_k}^{m_k+1}} \right\} \right) a(u,u), \end{aligned}$$
(38)

where \(\lambda _{\mathcal {F}_{k l,I}}^{n_{kl}+1}\) and \(\lambda _{\mathcal {E}_k}^{m_k+1}\) are respectively the \((n_{kl}+1)\)th and the \((m_k+1)\)th smallest eigenvalues of the generalized eigenvalue problems in Definitions 6 and 5.

Proof

Let \(w=u - I_0^{{\mathcal {V}}} u\). In the same way as in the proof of Lemma 4, we get

$$\begin{aligned} a_{|\varOmega _k}(w,w)\le & {} a_{|\varOmega _k}(u,u)+ a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw). \end{aligned}$$

Next we bound \(a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw) \). By Lemma 2, cf. (23), together with the definitions (12) and (17), we get

$$\begin{aligned} a_{|\varOmega _k}(\mathcal {H}_kw,\mathcal {H}_kw)\preceq & {} h^2\sum _{\mathcal {E}_j\subset \partial \varOmega _k} b_{\mathcal {E}_j}(w,w)+ h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

Note that by (21) and Definition 4 we have \(w_{|\mathcal {E}_j}=u- I_{{\mathcal {V}}} u - \varPi _{m_j}^{\mathcal {E}_j}(u - I_{{\mathcal {V}}} u)\) for any edge \(\mathcal {E}_j \subset \mathcal {W}\cap \partial \varOmega _k\), and \(w_{|\mathcal {F}_{k l}}=u- \varPi _{n_{k l}}^{\mathcal {F}_{k l,I}}(u)\) for any face \(\mathcal {F}_{k l} \subset \partial \varOmega _k\).

Now, consider the term \(b_{\mathcal {E}_{j}}(w,w)\) related to an edge \(\mathcal {E}_j\). By Lemma 3 we get

$$\begin{aligned} b_{\mathcal {E}_j}(w,w)= & {} \Vert (I-\varPi ^{\mathcal {E}_j}_{m_j})(u-I_{\mathcal {V}}u)\Vert _{b_{\mathcal {E}_j}}^2 \preceq \frac{1}{\lambda _{\mathcal {E}_j}^{m_j+1}} |(I-\varPi ^{\mathcal {E}_j}_{m_j})(u-I_{\mathcal {V}}u)|_{a_{\mathcal {E}_j}}^2 \\\preceq & {} \frac{1}{\lambda _{\mathcal {E}_j}^{m_j+1}} |u-I_{\mathcal {V}}u|_{a_{\mathcal {E}_j}}^2 \preceq \frac{1}{\lambda _{\mathcal {E}_j}^{m_j+1}} |u|_{a_{\mathcal {E}_j}}^2. \end{aligned}$$

Here \(\Vert u\Vert _{b_{\mathcal {E}_j}}^2:=b_{\mathcal {E}_j}(u,u)\) and \(|u|_{a_{\mathcal {E}_j}}^2:=a_{\mathcal {E}_j}(u,u)\). We also used the fact that \((I_{\mathcal {V}}u)_{|\mathcal {E}_j}\) is orthogonal to \((u-I_{\mathcal {V}}u)_{|\mathcal {E}_j} \in V_h^0(\mathcal {E}_{j})\) with respect to the bilinear form \(a_{\mathcal {E}_j}(\cdot ,\cdot )\), cf. Definition 4.

Further, using the fact that, for u linear, \(|u|_{H^1(\tau _e)}^2\) is equivalent to \(h^{-1}|u(x)-u(y)|^2\) (where x, y are the endpoints of the 1D element \(\tau _e\)), we get

$$\begin{aligned} h^2b_{\mathcal {E}_j}(w,w)\preceq \frac{h^2}{\lambda _{\mathcal {E}_j}^{m_j+1}} |u|_{a_{\mathcal {E}_j}}^2 \preceq \frac{h}{\lambda _{\mathcal {E}_j}^{m_j+1}}\sum _{\tau _e \in \mathcal {T}_h(\mathcal {E}_j)} \sum _{x,y\in \partial \tau _e} \overline{\alpha }_e|u(x)-u(y)|^2. \end{aligned}$$

Here the last sum is over the endpoints x, y of a 1D edge element \(\tau _e\). Note that \(h|u(x)-u(y)|^2 \preceq \int _\tau |\nabla u|^2\, dx \) if x, y are vertices of \(\tau \in \mathcal {T}_h\). Thus we get

$$\begin{aligned} h^2 b_{\mathcal {E}_j}(w,w)\preceq \sum _{\partial \tau \cap \mathcal {E}_j=\overline{\tau }_e \subset \mathcal {E}_j} \int _{\tau }\alpha |\nabla u|^2\;d x, \end{aligned}$$

where the sum is over all 3D fine elements such that one of its edges is contained in \(\mathcal {E}_j\).

Now, consider the term \(b_{\mathcal {F}_{k l}}(w,w)\) related to a face \(\mathcal {F}_{k l}\). Note that \(u_{|\partial \mathcal {F}_{k l}}\) does not have to be equal to zero in general, but if we define a function \(\hat{u}\) such that \(\hat{u}(x)=u(x)\) for \(x\in \mathcal {F}_{kl,h}\) and \(\hat{u}(x)=0\) for \(x\in \partial \mathcal {F}_{k l,h}\), then we have

$$\begin{aligned} b_{\mathcal {F}_{k l}}(u,u)=b_{\mathcal {F}_{k l}}(\hat{u},\hat{u}), \qquad a_{\mathcal {F}_{k l,I}}(u,u)=a_{\mathcal {F}_{k l,I}}(\hat{u},\hat{u}), \end{aligned}$$
(39)

cf. (12) and (19). Thus we see that \(\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}}u=\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}}\hat{u}\) and we can apply Lemma 3 replacing u with \(\hat{u}\) and get

$$\begin{aligned} b_{\mathcal {F}_{k l}}(w,w)= & {} \Vert (I-\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}})u\Vert _{b_{\mathcal {F}_{k l}}}^2= \Vert (I-\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}})\hat{u}\Vert _{b_{\mathcal {F}_{k l}}}^2 \\\preceq & {} \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}} |(I-\varPi ^{\mathcal {F}_{k l,I}}_{n_{k l}})\hat{u}|_{a_{\mathcal {F}_{k l,I}}}^2 \\\preceq & {} \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}} |\hat{u}|_{a_{\mathcal {F}_{k l,I}}}^2= \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}} |u|_{a_{\mathcal {F}_{k l,I}}}^2 \le \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}} |u|_{a_{\mathcal {F}_{k l}}}^2. \end{aligned}$$

Here \(|u|_{a_{\mathcal {F}_{k l,I}}}^2:=a_{\mathcal {F}_{k l,I}}(u,u)\). The last inequality follows from the fact that

$$\begin{aligned} a_{\mathcal {F}_{k l,I}}(v,v)\le a_{\mathcal {F}_{k l}}(v,v) \qquad \forall v\in V_h^0(\mathcal {F}_{k l}), \end{aligned}$$

cf. (10) and (19). Further, analogously as in the proof of Lemma 4, cf. (30) and (31), we get

$$\begin{aligned} hb_{\mathcal {F}_{k l}}(w,w)\preceq \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}}\left( a_{|\varOmega _k}(u,u)+ a_{|\varOmega _l}(u,u)\right) . \end{aligned}$$

Finally, by summing over the edges and faces, and then over the subdomains we end the proof. \(\square \)

Lemma 7

Let the vertex based coarse space interpolator \(I_0^{{\mathcal {V}}}\) be defined in (37), and \(v_k=I_h(\theta _k (u - I_0^{{\mathcal {V}}}u))\) for any \(u\in V_h\). Then

$$\begin{aligned} \begin{array}{l} a(v_k,v_k) \preceq \\ \;\left( 1+ \max \left\{ \max _{\mathcal {F}_{k l} \subset \partial \varOmega _k} \frac{1}{\lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}}, \max _{\mathcal {E}_k \subset \partial \varOmega _k} \frac{1}{\lambda _{\mathcal {E}_k}^{m_k+1}} \right\} \right) \sum _{\mathcal {E}_l\subset \partial \varOmega _k} a_{|\varOmega _l}(u,u). \end{array} \end{aligned}$$

The last sum is taken over all subdomains which share an edge with \(\varOmega _k\).

Proof

Let \(w=u-I_0^{{\mathcal {V}}}u\), then we have \(I_h\theta _k w\) equal to w at interior nodes \(\varOmega _{k,h}\), \(\frac{1}{2} w\) at the nodes \(\mathcal {F}_{k l,h}\) on each face \(\mathcal {F}_{k l}\) of \(\varOmega _k\), \(\frac{1}{n(\mathcal {E}_i)}w\) at the nodes \(\mathcal {E}_{i,h}\) on each edge \(\mathcal {E}_i\) of \(\varOmega _k\), and zero at all remaining nodal points of \(\varOmega _h\). Here \(n(\mathcal {E}_i)\) is the number of subdomains that share the edge \(\mathcal {E}_i\). As in the proof of Lemma 5, we can write

$$\begin{aligned} a(v_k,v_k)= & {} a_{|\varOmega _k}(v_k,v_k)+ \sum _{\mathcal {E}_i\subset \partial \varOmega _k}a_{|\varOmega _i}(v_k,v_k) + \sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} a_{|\varOmega _l}(v_k,v_k) \\\preceq & {} a_{|\varOmega _k}(v_k,v_k)+ h^2\sum _{\mathcal {E}_i\subset \partial \varOmega _k} b_{\mathcal {E}_i}(w,w)+ h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

The face and the edge terms can be estimated following the lines in the proof of Lemma 6.

The first term, on the other hand, can be estimated following the lines in the proof of Lemma 5, that is using Lemma 2, as follows:

$$\begin{aligned} a_{|\varOmega _k}(v_k,v_k)\preceq & {} a_{|\varOmega _k}(w,w)+ a_{|\varOmega _k}(w-v_k,w-v_k) \\\preceq & {} a_{|\varOmega _k}(w,w) \\&+ h^2\sum _{\mathcal {E}_i\subset \partial \varOmega _k} b_{\mathcal {E}_i}(v_k-w,v_k-w)+h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(v_k-w,v_k-w) \\\preceq & {} a_{|\varOmega _k}(w,w) + h^2\sum _{\mathcal {E}_i\subset \partial \varOmega _k} b_{\mathcal {E}_i}(w,w)+h\sum _{\mathcal {F}_{k l}\subset \partial \varOmega _k} b_{\mathcal {F}_{k l}}(w,w). \end{aligned}$$

Again, these terms can be estimated in the same way as in the proof of Lemma 6.

Combining those estimates, we get the proof. \(\square \)

The next and final lemma provides estimates for the stability of decomposition for the two preconditioners presented in this paper, which are required in the proof of Theorem 1.

Lemma 8

(Stable Decomposition) Let \(k=1\) and \(k=2\) in the superscript refer to the two coarse spaces: the wire basket based coarse space and the vertex based coarse space, respectively. Then, for all \(u\in V_h\) there exists a representation \(u=u_0^{(k)}+\sum ^N_{i=1} u_i^{(k)}\) such that \(u_0^{(k)}\in V_0^{(k)}\), and \(u_i^{(k)}\in V_i\), \(i=1,\ldots ,N\), and

$$\begin{aligned} a(u_0^{(k)},u_0^{(k)})+\sum _{i=1}^N a(u_i^{(k)},u_i^{(k)}) \preceq \left( C_0^{(k)}\right) ^2 a(u,u), \end{aligned}$$
(40)

for \(k=1, 2\), with

$$\begin{aligned} \left( C_0^{(1)}\right) ^2= & {} 1+\max _{\mathcal {F}_{k l} \in \mathcal {F}} \left( \lambda _{\mathcal {F}_{k l}}^{m_{k l}+1}\right) ^{-1}, \\ \left( C_0^{(2)}\right) ^2= & {} 1+ \max \left\{ \max _{\mathcal {F}_{k l} \in \mathcal {F}} \left( \lambda _{\mathcal {F}_{k l,I}}^{n_{k l}+1}\right) ^{-1}, \max _{\mathcal {E}_k \in \mathcal {E}} \left( \lambda _{\mathcal {E}_k}^{m_k+1}\right) ^{-1}\right\} . \end{aligned}$$

Proof

For the wire basket based coarse space, let \(u_0^{(1)}=I_0^{\mathcal W}u\) and \(u_i^{(1)}=I_h(\theta _i(u-I_0^{{\mathcal {W}}}u))\). The corresponding statement of the lemma then follows immediately from Lemmas 4 and 5.

For the vertex based coarse space, let \(u_0^{(2)}=I_0^{{\mathcal {V}}}u\) and \(u_i^{(2)}=I_h(\theta _i(u-I_0^{{\mathcal {V}}}u))\). Its statement then follows immediately from Lemmas 6 and 7. \(\square \)

Proof of Theorem 1

The convergence theory of the abstract Schwarz framework indicates that, under three assumptions, the condition number of our method can be bounded as follows:

$$\begin{aligned} \kappa (P^{(k)})\le (C_0^{(k)})^2\omega (\rho +1). \end{aligned}$$
(41)

The three assumptions are concerned with estimating the three constants \(\omega \), \(\rho \) and \(C^2_0\). It is easy to see that \(\omega =1\) here, as we have nested subspaces and are using exact solvers for the subproblems, and that \(\rho \le N_c\), where \(N_c\) is the maximum number of subspaces that cover any \(x\in \varOmega \). We refer to [33, Sect. 5.2] or [36, Sect. 2.3] for details. An estimate of the last parameter \((C_0^{(k)})^2\), for \(k=1\) or \(k=2\), is given in Lemma 8. The proof of the theorem now follows. \(\square \)
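In particular (this step is implicit in the above), if all eigenfunctions with eigenvalues below the threshold \(\lambda ^{*}\) are included in the coarse space, as in the adaptive strategy used in Sect. 6, then the first excluded eigenvalue of each local problem is at least \(\lambda ^{*}\), and the bound can be written more explicitly as

$$\begin{aligned} \kappa (P^{(k)})\le \left( C_0^{(k)}\right) ^2\omega (\rho +1) \preceq \left( 1+\frac{1}{\lambda ^{*}}\right) (N_c+1), \qquad \omega =1,\quad \rho \le N_c, \end{aligned}$$

up to constants independent of the mesh parameters and the contrast. This makes precise the statement in the introduction that the condition number bound is inversely proportional to the threshold \(\lambda ^{*}\).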

6 Numerical results

In this section, we present four numerical tests. The test problems are defined on a unit cube domain, with a Dirichlet boundary condition and a constant right-hand side \(f(x) = 100\). For all the test problems, we use coefficient distributions with inclusions across subdomain boundaries (faces and edges), as illustrated in Figs. 2 and 3, and channels, as illustrated in Fig. 4. The coefficient either takes a background value of 1 or a large value of \(10^6\). For any given distribution, the region with the high coefficient value is independent of the mesh parameters. We use the PCG method as a solver, stopping when the residual norm has been reduced by a factor of \(10^{-6}\).

The algorithms have been implemented in MATLAB, using the functions meshgrid and delaunayTriangulation for discretization, and routines from PDEToolbox for assembling the stiffness and mass matrices. For the iterative solver, we used pcgeig, an extension of MATLAB’s pcg routine, available at m2matlabdb.ma.tum.de. The pcgeig solver returns an estimate of the condition number for the preconditioned system. We use the built-in function eigs for solving the eigenvalue problems.

The spectral components of our coarse spaces consist of local eigenvectors with corresponding eigenvalues below a given threshold \(\lambda ^*\) (cf. Theorem 1). This threshold parameter makes the coarse spaces adaptive, determining the size of the coarse spaces and the performance of the methods. In our experiments, we choose the thresholds as \(c\frac{h}{H}\), for some constant c. Another reasonable choice of threshold is the smallest nonzero eigenvalue of the eigenvalue problems with the uniform background-value distribution. We refer to [18] for other threshold choices.
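For the large sparse problems arising on the faces, only a few of the smallest eigenpairs are needed, which is why a Lanczos-type solver such as MATLAB's eigs is used. The sketch below shows the analogous computation in Python with a shift-invert eigensolver and the threshold \(\lambda ^*=c\,h/H\); the matrices are again a 1D toy stand-in for an assembled face problem, and the values of c, h and H are only illustrative.

```python
# A sketch (SciPy analogue of the MATLAB 'eigs' call used in the paper) of
# computing a few of the smallest eigenpairs of a sparse problem like (11)
# and counting how many fall below the adaptive threshold lambda_star = c*h/H.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smallest_eigenpairs(A_F, B_F, k=10):
    # sigma=0 with which='LM' returns the eigenvalues closest to zero,
    # i.e. the smallest ones of the SPD pencil (A_F, B_F), via shift-invert.
    lam, X = spla.eigsh(A_F, k=k, M=B_F, sigma=0, which='LM')
    order = np.argsort(lam)
    return lam[order], X[:, order]

# Toy 1D stand-in for an assembled face problem with one strong inclusion.
w = np.ones(200); w[80:120] = 1e6
A_F = sp.diags([w[:-1] + w[1:], -w[1:-1], -w[1:-1]], [0, 1, -1], format='csc')
B_F = sp.diags(np.maximum(w[:-1], w[1:]), format='csc')

c, h_over_H = 1.0, 1.0 / 200                   # illustrative values only
lam, X = smallest_eigenpairs(A_F, B_F, k=10)
m = int(np.sum(lam < c * h_over_H))
print(f"threshold {c*h_over_H:.3g}: {m} eigenfunctions enter the coarse space")
```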

Fig. 2

The distributions of Test 1. Regions with coefficient value \(\alpha =10^6\) are shown in red. The inclusions are in the faces in the xz-plane. From left to right in the first row, the distributions are called Face 1, Face 2, Face 3, and Face 4. The distribution called Face 1–4 is shown from two different angles in the second row (color figure online)

Table 1 Condition number estimates and iteration counts (inside brackets) for Test 1

In Test 1 the coefficient distributions are inclusions on faces, as shown in Fig. 2. The figure indicates five variants where the high conductivity regions (cells) are shown in red. The distributions vary both in geometrical shape and in the number of separate inclusions. The condition number estimate and the number of iterations are presented in Table 1. Each row corresponds to a particular distribution. The first column lists the condition number estimates when the coarse spaces have no enrichment, while the next three columns have estimates corresponding to solutions with adaptive coarse spaces for varying h. We observe in the first column that the jump in coefficient is reflected in the condition numbers. From the last three columns, we observe that both our suggested preconditioners improve the performance of the PCG method.

Table 2 The smallest eigenvalues (column-wise) of the eigenvalue problems on the four different faces, with inclusions, for \(H/h=32\)

To get an idea of the number of eigenfunctions added to the coarse spaces in Test 1, we have listed the lowest eigenvalues of the generalized eigenvalue problems for the case \(\frac{H}{h} = 32\) in Table 2. Each distribution in the first row of Fig. 2 represents inclusions on one face. The eigenvalues in each column of Table 2 are from the eigenvalue problems on each of these faces, respectively. Any eigenvalue printed in boldface lies above the threshold. The first few eigenvalues in each column are several orders of magnitude lower than the threshold. In all observed cases, the number of such eigenvalues equals the number of separate inclusions on that particular face. The number of eigenfunctions included in the coarse space is low in each case.

In Test 2 the distributions in Fig. 2 are slightly modified. The distribution Face 1 is modified by changing the value of the coefficient in the region \(1/16<x<7/16\), \(4/16<y<5/16\) and \(1/16<z<7/16\) from 1 to \(10^6\). This new distribution Face \(1^*\) connects the separate inclusions of Face 1 inside a subdomain. Similarly, the distributions Face 2, Face 3 and Face 4 are modified such that the separate inclusions in each distribution are connected inside a subdomain. These modifications do not change the local eigenvalue problems, and the eigenvalues in Table 2 also apply to Test 2. In Test 2 the mesh parameters are fixed at \(H=1/2\) and \(H/h=32\). The threshold is the same as in Test 1. Additionally, we cap the number of basis functions selected from each local eigenvalue problem.

Table 3 Condition number estimates and iteration counts (inside brackets) for Test 2

The numerical results of Test 2 are presented in Table 3. Each row corresponds to a specific distribution. The first column lists the results for the methods with no enrichment. The second and third columns list the results for the methods with capped adaptive coarse spaces. The last column lists the results for the methods with adaptive coarse spaces without any cap beyond the threshold. The condition numbers in the first column of Table 3 are notably lower than the condition numbers in the first column of Table 1. From the last column of Table 3 we see that our method improves the performance of PCG.

In Test 3 the distribution has inclusions on edges. The inclusions are two horizontally placed rectangular slabs, as illustrated in Fig. 3. The slabs trigger the solution of generalized eigenvalue problems both on edges and on faces. The corresponding eigenvalues from the generalized eigenvalue problems with \(\frac{H}{h} = 32\) are presented in Table 4. Due to the symmetry of the distribution, the eigenvalue problems on the edges are identical. Moreover, there are only two distinct eigenvalue problems on the faces.

The results of Test 3 are given in Table 5. The first row lists results for the edge distribution shown in Fig. 3. The second row lists results for the distribution where Edge and Face 1–4 are combined. The first column in the table lists the condition number estimates for the coarse space with no enrichment, while the next three columns list the results from solving the test problems with an adaptive coarse space for different mesh sizes. From the results in the last three columns, we see that our preconditioner improves the performance of PCG.

Fig. 3

In Test 3 the distributions are the rectangular slabs with the large coefficient, \(\alpha =10^6\), shown in red. The slabs are inclusions on the vertical edges. Here they are illustrated by their projections onto the xz plane (left figure) and the yz plane (right figure) (color figure online)

Table 4 The smallest eigenvalues (column-wise) of the eigenvalue problems on the edge and the two face types, with inclusions, for \(H/h=32\)
Table 5 Condition number estimates and iteration counts (inside brackets) for the two preconditioners applied in Test 3

Up to now, we have seen that our method can handle various challenging distributions on a small number of subdomains. In Test 4, we divide the unit cube into 64 subdomains. This decomposition leads to 27 vertex nodes, 108 edges, and 144 faces, excluding all vertices, edges, and faces that are entirely part of the boundary of the domain. The distribution in this test, see Fig. 4, consists of channels that are parallel to the y axis and run through the entire domain, touching the boundary on both sides. A third of all the faces have 4 inclusions each. For this test problem, we use a heterogeneous right-hand side \(f(x,y,z)=10^5e^{-5\sqrt{(x-0.25)^2+(y-0.25)^2+(z-0.25)^2}}\).

The results of Test 4 are provided in Table 6. The two rows present results for \(\frac{H}{h}=8\) and \(\frac{H}{h}=16\). The columns list the thresholds, the number of coarse functions in the non-spectral part, the number of coarse functions in the spectral part, and the condition number estimates for the adaptive coarse spaces. The first 4 columns are for the wire basket based preconditioner, and the last 4 columns are for the vertex based preconditioner. Both methods show condition numbers that are independent of the coefficient value in the test problem.

Table 6 Condition number estimates and iteration counts (inside brackets) for Test 4
Fig. 4

The distribution in Test 4 shown from two angles. The domain is divided into 64 subdomains. Through all the subdomains there are long channels along the y-axis touching the boundary on both sides. The elements indicated in red have a coefficient of \(10^6\); in the rest of the domain the coefficient is 1 (color figure online)