## 1 Introduction

Consider a linear problem of the form $$A u = f$$, which we assume to be well posed in a vector space V. To define a two-level method for the solution of this problem, a one-level method and a coarse-correction step are required. One-level Schwarz methods are generally based on a splitting technique: the operator $$A : V \rightarrow V$$ is decomposed as $$A = M - N$$, where $$M : V \rightarrow V$$ is assumed invertible and represents the preconditioner. This splitting leads to a stationary iteration, namely $$u^{k+1} = M^{-1}N u^k + M^{-1} f$$, for $$k=0,1,\ldots$$, and to a preconditioned system $$M^{-1}A u = M^{-1}f$$. The two are strongly related, since the stationary iteration, if it converges, produces the solution of the preconditioned system; see, e.g., [1] and references therein. Therefore, domain decomposition (DD) methods can be used as stationary iterations or as preconditioners; see, e.g., [2,3,4,5,6]. Unfortunately, one-level DD methods are in general not scalable, and a coarse-correction step is often desirable. See, e.g., [7,8,9,10,11,12] for exceptions and for detailed scalability and non-scalability analyses.
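To make the splitting concrete, here is a minimal Python sketch of the stationary iteration; the 1D Laplacian test matrix and the Jacobi-type choice $$M = \mathrm{diag}(A)$$ are hypothetical stand-ins for illustration only, not the Schwarz preconditioners discussed in this paper.

```python
import numpy as np

# Hypothetical model problem: 1D Laplacian stencil as the operator A.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# Splitting A = M - N with M = diag(A) (a Jacobi-type choice, for illustration).
M = np.diag(np.diag(A))
N = M - A
Minv = np.linalg.inv(M)

# Stationary iteration u^{k+1} = M^{-1} N u^k + M^{-1} f.
u = np.zeros(n)
for _ in range(20000):
    u = Minv @ (N @ u + f)

print(np.linalg.norm(A @ u - f))  # small: the fixed point solves A u = f
```

If the iteration converges, its fixed point satisfies $$u = M^{-1}Nu + M^{-1}f$$, i.e., $$Au=f$$, which is exactly the relation between the stationary method and the preconditioned system mentioned above.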

A two-level method combines a one-level method, defined on V, with a coarse-correction step, performed on a coarse space $$V_c$$. The coarse space $$V_c$$ is finite dimensional and must satisfy $$\dim V_c \ll \dim V$$. The mappings between V and $$V_c$$ are realized by a restriction operator $$R : V \rightarrow V_c$$ and a prolongation operator $$P : V_c \rightarrow V$$. In general, the restriction of $$A : V \rightarrow V$$ to $$V_c$$ is defined as $$A_c=RAP$$, which is assumed to be an invertible matrix.

Now, we distinguish two cases: a two-level stationary method and a two-level preconditioning method. In the first case, a stationary method is used as the first-level method. After each stationary iteration, which produces an approximation $$u_{app}$$, the residual $$r = f - Au_{app}$$ is mapped from V to $$V_c$$, the coarse problem $$A_c e = R r$$ is solved to obtain $$e \in V_c$$, and the coarse-correction step $$u_{new} = u_{app} + P e$$ provides the new approximation $$u_{new}$$. Repeating these operations iteratively yields a two-level stationary method. The preconditioner corresponding to this method is denoted by $$M_{s,2L}$$. Notice that this idea is very closely related to two-grid methods. In the second case, the first-level method is purely a preconditioner $$M^{-1}$$. The corresponding two-level preconditioner, denoted by $$M_{2L}$$, is generally obtained in an additive way: the one-level preconditioner $$M^{-1}$$ is added to the coarse-correction matrix $$PA_c^{-1} R$$. With appropriate implementations, the two preconditioners $$M_{2L}$$ and $$M_{s,2L}$$ require about the same computational effort per Krylov iteration. However, their different structures can lead to different performance of Krylov methods.
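The two-level stationary cycle just described can be sketched as follows. The damped Jacobi smoother and the geometric (linear-interpolation) prolongation are illustrative assumptions chosen to keep the example self-contained; the steps residual, coarse solve $$A_c e = Rr$$, and correction $$u_{new}=u_{app}+Pe$$ follow the text.

```python
import numpy as np

# Model problem: 1D Laplacian (a hypothetical stand-in for A u = f).
m = 31                     # coarse dimension, dim V_c << dim V
n = 2 * m + 1              # fine dimension
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
u_exact = np.linalg.solve(A, f)

# One-level method: damped Jacobi as an illustrative smoother.
omega = 2.0 / 3.0
Minv = omega * np.diag(1.0 / np.diag(A))

# Geometric prolongation P (linear interpolation) and restriction R = P^T;
# any choice of coarse space fits the same template.
P = np.zeros((n, m))
for j in range(m):
    P[2 * j, j] = 0.5
    P[2 * j + 1, j] = 1.0
    P[2 * j + 2, j] = 0.5
R = P.T
A_c = R @ A @ P            # Galerkin coarse matrix

u = np.zeros(n)
for _ in range(60):
    u = u + Minv @ (f - A @ u)        # one-level stationary step -> u_app
    r = f - A @ u                     # residual r = f - A u_app
    e = np.linalg.solve(A_c, R @ r)   # coarse problem A_c e = R r
    u = u + P @ e                     # correction u_new = u_app + P e

print(np.linalg.norm(u - u_exact))
```

Note that scaling R leaves the correction unchanged, since $$A_c^{-1}$$ absorbs the factor; only the range of P matters.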

The literature on two-level DD methods is very rich. See, e.g., [7, 12,13,14,15,16,17,18,19] for references considering two-level Schwarz stationary methods, [20,21,22,23,24,25,26,27,28,29,30,31,32] for references considering two-level Schwarz preconditioners, and [31, 33,34,35] for references considering two-level substructuring preconditioners. See also classical general references such as [4,5,6] and [36, 37].

For any given one-level Schwarz method (stationary or preconditioning), the choices of $$V_c$$, P and R strongly influence the convergence behavior of the corresponding two-level method. A common choice is to use as coarse space the span of the dominant eigenfunctions of the one-level iteration operator $$G:=M^{-1}N$$. Such a coarse space, and more generally coarse spaces obtained as the span of some given functions, are usually called spectral coarse spaces, to distinguish them from geometric coarse spaces built implicitly using coarser meshes, as in a multigrid framework.

In a more general context, fundamental results are presented in [38]: for a symmetric and positive definite A, it is proved that the coarse space of size m that minimizes the energy norm of the two-level iteration operator is exactly the spectral coarse space spanned by the first m dominant eigenfunctions of G. This sharp result provides a concrete (optimal) choice of $$V_c$$ minimizing the energy norm of the two-level operator associated with a symmetric and positive definite A.
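The following sketch illustrates the effect of a spectral coarse space in a simple symmetric setting (a Jacobi-type splitting of a 1D Laplacian, a hypothetical stand-in for the abstract framework): the coarse correction annihilates the m dominant eigenfunctions of G, so the spectral radius of the two-level operator drops to the (m+1)-st largest eigenvalue modulus of G.

```python
import numpy as np

# Hypothetical symmetric setting: Jacobi-type splitting of a 1D Laplacian.
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Minv = np.diag(1.0 / np.diag(A))
G = np.eye(n) - Minv @ A          # one-level iteration operator G = M^{-1} N

# Spectral coarse space: the m dominant eigenfunctions of G.
m = 5
w, V = np.linalg.eig(G)
idx = np.argsort(-np.abs(w))[:m]  # m eigenvalues of largest modulus
P = np.real(V[:, idx])            # prolongation: coarse basis as columns
R = P.T
A_c = R @ A @ P                   # Galerkin coarse matrix

# Two-level iteration operator with n1 = 1, n2 = 0 smoothing steps:
# T = (I - P A_c^{-1} R A) G.
T = (np.eye(n) - P @ np.linalg.solve(A_c, R @ A)) @ G

# The m dominant modes of G are removed by the coarse correction.
print(np.max(np.abs(np.linalg.eigvals(T))), np.max(np.abs(w)))
```

In this symmetric example minimizing the norm and the spectral radius coincide; the point made later in the paper is that for non-symmetric A this need not be the case.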

Unfortunately, computing the eigenfunctions of the one-level method is often unfeasible, and thus several works have proposed other spectral coarse spaces which are cheaper to obtain, but still contain information about the slow eigenspace of the one-level method.

In a Schwarz iterative procedure, the error and the residual generally have very special forms. The error is harmonic, in the sense of the underlying PDE operator, in the interior of the subdomains (excluding the interfaces), and it is predominant in the overlap. The residual is predominant on the interfaces and zero outside the overlap. For examples and more details, see, e.g., [13, 17]. This difference has motivated, sometimes implicitly, the construction of different coarse spaces. On the one hand, many references use different techniques to define coarse functions in the overlap (where the error is predominant), and then extend them to the remaining parts of the neighboring subdomains; see, e.g., [22,23,24,25,26, 28,29,30, 33]. On the other hand, in other works the coarse space is created by first defining basis functions on the interfaces, and then extending them (in different ways) to the portions of the neighboring subdomains; see, e.g., [7, 12, 13, 15, 17,18,19,20,21, 27, 28]. For a compact and complete overview of several of these coarse spaces, we refer to [28, Section 5]. For other techniques and related discussions, see, e.g., [4, 14,15,16, 31, 39].

This work differs from the existing literature in that we introduce, for the first time, two-level substructured Schwarz methods. These are two-level stationary iterative methods, based on the Schwarz iteration, and the term “substructured” indicates that both the one-level iteration and the coarse spaces are defined on the interfaces (or skeletons). As in this manuscript we consider coarse spaces obtained as the span of certain interface functions, we call our two-level substructured Schwarz methods Spectral 2-level Substructured (S2S) methods. In this respect, they are defined in the same spirit as two-level methods whose coarse spaces are extensions in volume of interface basis functions.

Our S2S framework can accommodate several choices of coarse space, e.g., spans of functions generated by the one-level (substructured) Schwarz iteration operator, or the choices proposed in several papers such as [17, 18, 27, 28]. From a numerical point of view, the S2S framework has several advantages compared to a classical two-level Schwarz method defined in volume. Since the coarse space functions are defined on the interfaces, less memory storage is required: for a three-dimensional problem with mesh size h, a discrete interface coarse function is an array of size $$O(1/h^2)$$, much smaller than the size $$O(1/h^3)$$ of an array corresponding to a coarse function in volume. For this reason, the resulting interface restriction and prolongation operators are smaller matrices, and the corresponding interpolation operations are cheaper to perform. Therefore, assuming that the one-level stationary iteration step and the dimension of the coarse space are the same for an S2S method and a method in volume, each S2S iteration is generally computationally less expensive. In terms of iteration counts, our S2S methods perform similarly to or faster than other two-level methods that use the same DD smoother. Notice also that the pre-computation part, which consists mainly in constructing the coarse space $$V_c$$ and assembling the operators P, R and $$A_c$$, requires the same computational effort as for a method in volume. Moreover, the substructured nature of the S2S framework allows us to introduce two new procedures, based on principal component analysis (PCA) and on neural networks, to numerically build an efficient coarse space $$V_c$$. Direct numerical experiments will show that the coarse spaces generated by these two approaches either outperform the spectral coarse space and other commonly used coarse spaces, or lead to a very similar convergence behavior.

In our substructured framework, the matrix A is non-symmetric, and thus the optimality result of [38] does not hold. Relying on our previous work [41], we provide a new general convergence analysis, based on an infinite-matrix representation of the two-level operator, which covers our substructured formulation. This analysis has rather general applicability: it can be used to tackle non-symmetric problems, and it allows us to show, perhaps surprisingly, in which cases a spectral coarse space is not (asymptotically) optimal. Indeed, even for symmetric matrices, [38] provides an optimality result for the norm of the iteration operator, which is in general only an upper bound for the asymptotic convergence factor. Specifically, we show that a spectral coarse space made of the first m dominant eigenfunctions of G is not necessarily the coarse space of dimension m minimizing the spectral radius of the two-level operator based on the Schwarz iteration. We show this both theoretically and numerically, using PCA and deep neural networks to build numerically more efficient coarse spaces.

This paper is organized as follows. In Sect. 2, we formulate the classical parallel Schwarz method in a substructured form. This is done at the continuous level and represents the starting point for the S2S method introduced in Sect. 3. A detailed convergence analysis is presented in Sect. 4. Section 5 discusses both PCA-based and deep neural network approaches to numerically create an efficient coarse space. Extensive numerical experiments are presented in Sect. 6, where the robustness of the proposed methods with respect to mesh refinement and physical (jumping) parameters is studied. We present our conclusions in Sect. 7. Finally, important implementation details are discussed in the Appendix.

## 2 Substructured Schwarz Methods

Consider a bounded Lipschitz domain $$\Omega \subset {\mathbb {R}}^d$$ for $$d\in \{2,3\}$$, a general second-order linear elliptic operator $${\mathcal {L}}$$ and a function $$f \in L^2(\Omega )$$. Our goal is to introduce new domain decomposition methods for the efficient numerical solution of the general linear elliptic problem

\begin{aligned} {\mathcal {L}} u = f \text { in }\Omega ,\ u = 0\text { on }\partial \Omega , \end{aligned}
(1)

which we assume admits a unique solution $$u \in H^1_0(\Omega )$$.

To formulate our methods we need to fix some notation. Given a bounded set $$\Gamma$$ with boundary $$\partial \Gamma$$, we denote by $$\rho _{\Gamma }(x)$$ the distance of $$x \in \Gamma$$ from $$\partial \Gamma$$. We can then introduce the space

\begin{aligned} H_{00}^{1/2}(\Gamma ) := \{ v \in H^{1/2}(\Gamma ) \, : \, v/\rho _{\Gamma }^{1/2} \in L^2(\Gamma ) \}, \end{aligned}
(2)

which is also known as the Lions–Magenes space; see, e.g., [5, 42, 43]. Notice that $$H_{00}^{1/2}(\Gamma )$$ can be equivalently defined as the space of functions in $$H^{1/2}(\Gamma )$$ such that their extensions by zero to a superset $${\widetilde{\Gamma }}$$ of $$\Gamma$$ are in $$H^{1/2}({\widetilde{\Gamma }})$$ [43].

Next, consider a decomposition of $$\Omega$$ into N overlapping Lipschitz subdomains $$\Omega _j$$, that is, $$\Omega = \cup _{j \in {\mathcal {I}}} \Omega _j$$ with $${\mathcal {I}}:=\{1,2,\ldots ,N\}$$. For any $$j \in {\mathcal {I}}$$, we define the set of neighboring indices $${\mathcal {N}}_j :=\{ \ell \in {\mathcal {I}} \, : \, \Omega _j \cap \partial \Omega _\ell \ne \emptyset \}$$. Given a $$j \in {\mathcal {I}}$$, we introduce the substructure of $$\Omega _j$$ defined as $${\mathcal {S}}_j := \cup _{\ell \in {\mathcal {N}}_j} \bigl (\Omega _j \cap \partial \Omega _\ell \bigr )$$, that is, the union of all the portions of the boundaries $$\partial \Omega _\ell$$, $$\ell \in {\mathcal {N}}_j$$, that lie in $$\Omega _j$$. Notice that the sets $${\mathcal {S}}_j$$ are open and their closures are $$\overline{{\mathcal {S}}_j} = {\mathcal {S}}_j \cup \partial {\mathcal {S}}_j$$, with $$\partial {\mathcal {S}}_j := \cup _{\ell \in {\mathcal {N}}_j} \bigl (\partial \Omega _j \cap \partial \Omega _\ell \bigr )$$. Figure 1 provides an illustration of the substructures corresponding to a commonly used decomposition of a rectangular domain. The substructure of $$\Omega$$ is defined as $${\mathcal {S}}:=\cup _{j \in {\mathcal {I}}}\overline{{\mathcal {S}}_j}$$. We denote by $${\mathcal {E}}_j^0 : L^2({\mathcal {S}}_j) \rightarrow L^2({\mathcal {S}})$$ the extension-by-zero operator. Now, we consider a set of continuous functions $$\chi _j : \overline{{\mathcal {S}}_j} \rightarrow [0,1]$$, $$j=1,\ldots ,N$$, such that

\begin{aligned} \chi _j(x) \in {\left\{ \begin{array}{ll} (0,1] &{}\text {for }x \in {\mathcal {S}}_j, \\ \{1\} &{}\text {for }x \in \overline{{\mathcal {S}}_j \setminus \cup _{\ell \in {\mathcal {N}}_j} {\mathcal {S}}_\ell },\\ \{0\} &{}\text {for }x \in \partial {\mathcal {S}}_j \setminus \partial \Omega , \end{array}\right. } \end{aligned}

and $$\sum _{j\in {\mathcal {I}}} {\mathcal {E}}_j^0 \chi _j \equiv 1$$, which means that the functions $$\chi _j$$ form a partition of unity on $${\mathcal {S}}$$. Further, we assume that the functions $$\chi _j$$, $$j \in {\mathcal {I}}$$, satisfy the condition $$\chi _j / \rho _{{\mathcal {S}}_j}^{1/2}~\in ~L^{\infty }({\mathcal {S}}_j)$$.

For any $$j \in {\mathcal {I}}$$, we define $$\Gamma _j^{\mathrm{int}} := \partial \Omega _j \cap \bigl ( \cup _{ \ell \in {\mathcal {N}}_j} \Omega _\ell \bigr )$$ and introduce the following trace and restriction operators

\begin{aligned} \tau _j : H^1(\Omega _j) \rightarrow H^{1/2}({\mathcal {S}}_j) \text { and } \tau _j^{\mathrm{int}} : H^{1/2}({\mathcal {S}}) \rightarrow H^{1/2}(\Gamma _j^{\mathrm{int}}). \end{aligned}

It is well known that (1) is equivalent to the domain decomposition system

\begin{aligned} \begin{aligned} {\mathcal {L}}u_j&= f_j \text { in }\Omega _j, \, u_j = \sum _{\ell \in {\mathcal {N}}_j} {\mathcal {E}}_\ell ^0 (\chi _\ell \tau _\ell u_\ell ) \text { on }\Gamma _j^{\mathrm{int}}, \, u_j = 0 \text { on }\partial \Omega _j \setminus \Gamma _j^{\mathrm{int}}, \end{aligned} \end{aligned}
(3)

where $$f_j \in L^2(\Omega _j)$$ is the restriction of f on $$\Omega _j$$ (see, e.g., [5]). Notice that, since $$\tau _\ell u_\ell \in H^{1/2}({\mathcal {S}}_\ell )$$, the properties of the partition of unity functions $$\chi _\ell$$ guarantee that $$\chi _\ell \tau _\ell u_\ell$$ lies in $$H_{00}^{1/2}({\mathcal {S}}_\ell )$$ and $${\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H_{00}^{1/2}({\mathcal {S}})$$. Moreover, for $$\ell \in {\mathcal {N}}_j$$ it holds that $$\tau _j^{\mathrm{int}} {\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H_{00}^{1/2}(\Gamma _j^{\mathrm{int}})$$ if $$\Gamma _j^{\mathrm{int}} \subsetneq \partial \Omega _j$$, and $$\tau _j^{\mathrm{int}} {\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H^{1/2}(\Gamma _j^{\mathrm{int}})$$ if $$\Gamma _j^{\mathrm{int}} = \partial \Omega _j$$.

Given a $$j \in {\mathcal {I}}$$ such that $$\partial \Omega _j \setminus \Gamma _j^{\mathrm{int}} \ne \emptyset$$, we define the extension operator $${\mathcal {E}}_j : H_{00}^{1/2}(\Gamma _j^{\mathrm{int}}) \times L^2(\Omega _j) \rightarrow H^1(\Omega _j)$$ as $$w={\mathcal {E}}_j(v,f_j)$$, where w solves

\begin{aligned} {\mathcal {L}}w=f_j \text { in }\Omega _j, \, w=v \text { on } \Gamma _j^{\mathrm{int}}, \, w=0 \text { on } \partial \Omega _j\setminus \Gamma _j^{\mathrm{int}} \end{aligned}
(4)

for $$v \in H_{00}^{1/2}(\Gamma _j^{\mathrm{int}})$$. Otherwise, if $$\Gamma _j^{\mathrm{int}} \equiv \partial \Omega _j$$, we define $${\mathcal {E}}_j : H^{1/2}(\Gamma _j^{\mathrm{int}}) \times L^2(\Omega _j) \rightarrow H^1(\Omega _j)$$ as $$w={\mathcal {E}}_j(v,f_j)$$, where w solves

\begin{aligned} {\mathcal {L}}w=f_j \text { in }\Omega _j, \, w=v \text { on } \Gamma _j^{\mathrm{int}}, \, \end{aligned}
(5)

for $$v \in H^{1/2}(\Gamma _j^{\mathrm{int}})$$. The DD system (3) can then be written as

\begin{aligned} u_j = {\mathcal {E}}_j(0,f_j) + {\mathcal {E}}_j\Bigl (\tau _j^{\mathrm{int}}\sum _{\ell \in {\mathcal {N}}_j} {\mathcal {E}}_\ell ^0 (\chi _\ell \tau _\ell u_\ell ),0\Bigr ), \, j\in {\mathcal {I}}. \end{aligned}
(6)

If we define $$v_j := \chi _j \tau _j u_j$$, $$j\in {\mathcal {I}}$$, then system (6) becomes

\begin{aligned} v_j = g_j +\sum _{\ell \in {\mathcal {N}}_j}G_{j,\ell }(v_\ell ), \, j\in {\mathcal {I}}, \end{aligned}
(7)

where $$g_j := \chi _j \tau _j{\mathcal {E}}_{j}(0,f_j)$$ and the operators $$G_{j,\ell } : H_{00}^{1/2}({\mathcal {S}}_\ell ) \rightarrow H_{00}^{1/2}({\mathcal {S}}_j)$$ are defined as

\begin{aligned} G_{j,\ell }(\cdot ) := \chi _j \tau _j{\mathcal {E}}_j \bigl ( \tau _j^{\mathrm{int}} {\mathcal {E}}_\ell ^0 (\cdot ), 0 \bigr ). \end{aligned}
(8)

System (7) is the substructured form of (3). The equivalence between (3) and (7) is explained by the following theorem.

### Theorem 1

(Equivalence between (3) and (7)) Let $$u_j \in H^1(\Omega _j)$$, $$j\in {\mathcal {I}}$$, solve (3), then $$v_{j} := \chi _j\tau _j(u_j)$$, $$j\in {\mathcal {I}}$$, solve (7). Let $$v_j \in H^{1/2}({\mathcal {S}}_j)$$, $$j\in {\mathcal {I}}$$, solve (7), then $$u_j := {\mathcal {E}}_j(\tau _j^{\mathrm{int}}\sum _{\ell \in {\mathcal {N}}_j} {\mathcal {E}}_\ell ^0 (v_\ell ),f_j)$$, $$j\in {\mathcal {I}}$$, solve (3).

### Proof

The first statement follows from the derivation of the substructured system (7) preceding the theorem. To obtain the second statement, we use (7) and the definition of $$u_j$$ to write $$v_j = \chi _j \tau _j {\mathcal {E}}_j(\tau _j^{\mathrm{int}} \sum _{\ell \in {\mathcal {N}}_j } {\mathcal {E}}_\ell ^0 (v_\ell ),f_j) = \chi _j \tau _j u_j$$. The claim follows by using this equality together with the definitions of $$u_j$$ and $${\mathcal {E}}_j$$. $$\square$$

Take any function $$w \in H^1_0(\Omega )$$ and consider the initialization $$u_j^0:= w|_{\Omega _j}$$, $$j \in {\mathcal {I}}$$. The parallel Schwarz method (PSM) is given by

\begin{aligned} \begin{aligned} {\mathcal {L}}u_j^n&= f_j \text { in }\Omega _j, \, u_j^n = \sum _{\ell \in {\mathcal {N}}_j} {\mathcal {E}}_\ell ^0 (\chi _\ell \tau _\ell u_\ell ^{n-1}) \text { on }\Gamma _j^{\mathrm{int}}, \, u_j^n = 0 \text { on } \partial \Omega _j \setminus \Gamma _j^{\mathrm{int}}, \end{aligned} \end{aligned}
(9)

for $$n \in {\mathbb {N}}^+$$, and has the substructured form

\begin{aligned} v_j^n = g_j +\sum _{\ell \in {\mathcal {N}}_j}G_{j,\ell }(v_\ell ^{n-1}), \, j \in {\mathcal {I}}, \end{aligned}
(10)

initialized by $$v_{j}^0 := \chi _j \tau _j(u_j^0) \in H_{00}^{1/2}({\mathcal {S}}_j)$$. Notice that the iteration (10) is well posed in the sense that $$v_{j}^n \in H_{00}^{1/2}({\mathcal {S}}_j)$$ for $$j \in {\mathcal {I}}$$ and $$n \in {\mathbb {N}}$$. Equations (10) and (7) allow us to obtain the substructured PSM in error form, that is

\begin{aligned} e_j^n = \sum _{\ell \in {\mathcal {N}}_j}G_{j,\ell }(e_\ell ^{n-1}), \, j \in {\mathcal {I}}, \end{aligned}
(11)

for $$n \in {\mathbb {N}}^+$$, where $$e_{j}^n:=v_j-v_j^n$$, for $$j \in {\mathcal {I}}$$ and $$n\in {\mathbb {N}}$$. Equation (7) can be written in the matrix form $$A\mathbf{v}=\mathbf{b}$$, where $$\mathbf{v}=[v_1,\ldots ,v_N]^\top$$, $$\mathbf{b}=[g_1,\ldots ,g_N]^\top$$ and the entries of A are

\begin{aligned} [A]_{j,j}=I_{d,j} \text { and } [A]_{j,\ell }=-G_{j,\ell }, \, j,\ell \in {\mathcal {I}}, \, j \ne \ell , \end{aligned}
(12)

where $$I_{d,j}$$ are the identities on $$L^2({\mathcal {S}}_j)$$, $$j \in {\mathcal {I}}$$. Similarly, we define G as

\begin{aligned} [G]_{j,j}=0 \text { and } [G]_{j,\ell }=G_{j,\ell }, \, j,\ell \in {\mathcal {I}}, \, j \ne \ell , \end{aligned}

and hence write (10) and (11) as $$\mathbf{v}^n=G\mathbf{v}^{n-1}+\mathbf{b}$$ and $$\mathbf{e}^n=G\mathbf{e}^{n-1}$$, respectively, where $$\mathbf{v}^n :=[v_1^n,\ldots ,v_N^n]^\top$$ and $$\mathbf{e}^n :=[e_1^n,\ldots ,e_N^n]^\top$$. Notice that $$G=I-A$$, where $$I:=\text {diag}_{j=1,\ldots ,N}(I_{d,j})$$. Moreover, if we define

\begin{aligned}{\mathcal {H}} := H_{00}^{1/2}({\mathcal {S}}_1) \times \cdots \times H_{00}^{1/2}({\mathcal {S}}_N),\end{aligned}

then one can clearly see that $$A : {\mathcal {H}} \rightarrow {\mathcal {H}}$$ and $$G : {\mathcal {H}} \rightarrow {\mathcal {H}}$$.

It is a standard result that the PSM iteration $$\mathbf{v}^n=G\mathbf{v}^{n-1}+\mathbf{b}$$ converges; see, e.g., [11] for a convergence result of the PSM in a substructured form, [8,9,10, 12, 40] for other convergence results and [4, 6] for standard references. The corresponding limit is the solution to the problem $$A\mathbf{v}=\mathbf{b}$$.

## 3 S2S: Spectral Two-Level Substructured Schwarz Method

The idea of the S2S method is to use a coarse space $$V_c$$ spanned by certain linearly independent functions defined on the skeletons of the subdomains $$\Omega _j$$, $$j \in {\mathcal {I}}$$. Consider the space $${\mathcal {H}}$$, endowed with an inner product $$\langle \cdot , \cdot \rangle$$, and a set of $$m>0$$ linearly independent functions $${\pmb \psi }_k$$, $$k=1,\ldots ,m$$. Notice that each $${\pmb \psi }_k$$ has the form $${\pmb \psi }_k = [ \, \psi _k^1 , \ldots , \psi _k^N \, ]^\top$$, where $$\psi _k^j \in H_{00}^{1/2}({\mathcal {S}}_j)$$ for $$j \in {\mathcal {I}}$$. We define the coarse space $$V_c$$ as

\begin{aligned}V_c := \mathrm{span}\{ {\pmb \psi }_1 , \ldots , {\pmb \psi }_m \}.\end{aligned}

To define a two-level method, we need restriction and prolongation operators. Once the coarse space $$V_c$$ is constructed, the choice of these operators follows naturally. We define the prolongation operator $$P : {\mathbb {R}}^{m} \rightarrow {\mathcal {H}}$$ and the restriction operator $$R : {\mathcal {H}} \rightarrow {\mathbb {R}}^{m}$$ as

\begin{aligned} P \mathbf{v}:= \sum \limits _{k=1}^{m} v_k{\pmb \psi }_k,\quad \text { and }\quad R \mathbf{{h}} :=\begin{bmatrix} \langle {\pmb \psi }_1, \mathbf{{h}}\rangle ,&\cdots ,&\langle {\pmb \psi }_m, \mathbf{{h}}\rangle \end{bmatrix}^\top , \end{aligned}
(13)

for any $$\mathbf{v}=(v_1,\ldots ,v_m)^\top \in {\mathbb {R}}^m$$ and $${\mathbf{h}} \in {\mathcal {H}}$$. Notice that, if the functions $${\pmb \psi }_k$$ are orthogonal, P is the adjoint operator of R and we have that $$RP=I_m$$, where $$I_m$$ is the identity matrix in $${\mathbb {R}}^{m \times m}$$. The restriction of the operator A on $$V_c$$ is the matrix $$A_c \in {\mathbb {R}}^{m \times m}$$ obtained in a Galerkin manner, $$A_c = RAP$$.
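In a discrete setting, with $${\mathcal {H}}$$ replaced by $${\mathbb {R}}^n$$ and the inner product by the Euclidean one, the definitions in (13) take the following form; the random orthonormal basis and the matrix A below are hypothetical placeholders for actual coarse functions and for the substructured operator.

```python
import numpy as np

# Discrete stand-in for H: R^n with the Euclidean inner product, and a
# hypothetical random orthonormal coarse basis psi_1, ..., psi_m.
rng = np.random.default_rng(1)
n, m = 30, 4
Psi, _ = np.linalg.qr(rng.standard_normal((n, m)))  # columns: psi_k

P = Psi        # P v = sum_k v_k psi_k
R = Psi.T      # (R h)_k = <psi_k, h>

# For orthonormal psi_k, P is the adjoint of R and R P = I_m.
print(np.allclose(R @ P, np.eye(m)))

# Galerkin coarse matrix A_c = R A P for a (hypothetical) operator A.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_c = R @ A @ P
print(A_c.shape)  # (m, m)
```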

With the operators P, R and $$A_c$$ in hand, our two-level method is defined as a classical two-level strategy applied to the substructured problem (7), using the domain decomposition iteration (10) as a smoother. This results in Algorithm 1, where $$n_1$$ and $$n_2$$ are the numbers of pre- and post-smoothing steps. The well-posedness of Algorithm 1 is proved in the next lemma.

### Lemma 1

(Well posedness of S2S) Consider the inner product space $$({\mathcal {H}},\langle \cdot , \cdot \rangle )$$, a set of linearly independent functions $$\{ {\pmb \psi }_k \}_{k=1,\ldots ,m}$$, for some $$m>0$$, and let $$V_c := \mathrm{span}\{ {\pmb \psi }_1 , \ldots , {\pmb \psi }_m \}$$ be a finite-dimensional subspace of $${\mathcal {H}}$$. Let P and R be defined as in (13) (with $$\langle \cdot , \cdot \rangle$$). If $$A_c = RAP$$ is invertible and the initialization vector $$\mathbf{u}^0$$ is chosen in $${\mathcal {H}}$$, then $$\mathbf{u}^{n_2}$$ (computed at Step 5 of Algorithm 1) is in $${\mathcal {H}}$$.

### Proof

It is sufficient to show that for a given $$\mathbf{u}^0 \in {\mathcal {H}}$$ all the steps of Algorithm 1 are well posed. Since $$\mathbf{b} \in {\mathcal {H}}$$, $$G : {\mathcal {H}} \rightarrow {\mathcal {H}}$$ and $$A : {\mathcal {H}} \rightarrow {\mathcal {H}}$$, Step 1 and Step 2 produce $$\mathbf{u}^{n_1}$$ and $$\mathbf{r}$$ in $${\mathcal {H}}$$. Step 3 is well posed because $$A_c$$ is assumed to be invertible. Since $$V_c$$ is a subset of $${\mathcal {H}}$$, $$P \mathbf{u}_c$$ and $$\mathbf{u}^0$$ in Step 4 lie in $${\mathcal {H}}$$. Clearly, the element $$\mathbf{u}^{n_2}$$ produced by Step 5 is also in $${\mathcal {H}}$$. Therefore, by induction Algorithm 1 is well posed in $${\mathcal {H}}$$. $$\square$$

The key hypothesis of Lemma 1 is the invertibility of the coarse matrix $$A_c$$. An equivalent characterization of this property is proved in Sect. 4.1. This result (and the discussion thereafter) allows us to obtain the invertibility of $$A_c$$ if, e.g., $$V_c$$ is a spectral coarse space. Moreover, it is worth remarking that the pseudo-inverse of $$A_c$$ can be used when $$A_c$$ is not invertible; see [38].

Let us now turn our attention to the coarse space $$V_c$$. We distinguish two general classes of coarse space functions: global and local coarse functions. Global coarse functions are defined directly on the global skeleton of $$\Omega$$. An ideal choice of global coarse functions would be to define $$V_c$$ as the span of the dominant eigenfunctions of the one-level operator G. In the context of multigrid methods, this choice is extensively discussed in [38], where the authors prove that, if A and the preconditioner corresponding to the one-level iteration are symmetric, the spectral coarse space minimizes the energy norm of the two-level iteration operator T, defined in (16) below. This sharp result provides a concrete optimal choice of $$V_c$$ minimizing the energy norm of T, but the latter is in general only an upper bound for the asymptotic convergence factor $$\rho (T)$$, and the result does not extend to the non-symmetric case (the substructured matrix A is non-symmetric), as we will see in Sect. 4.2. Moreover, in Sect. 5 we will present two numerical approaches, based on PCA and on neural networks, for the construction of global coarse space functions. These are generally different from the spectral ones and may lead to better convergence.

Another possibility is to build local coarse functions using eigenfunctions of the local operators $$G_j$$. However, the eigenfunctions of $$G_j$$ (or G) are known only in very special cases, and their numerical computation can be quite expensive. To overcome this problem, one can define $$V_c$$ as the span of some Fourier basis functions, which can be obtained by solving a Laplace–Beltrami eigenvalue problem on each interface (or skeleton); see, e.g., [27, 28]. In this case, assuming that local basis functions $$\psi _k^j \in H_{00}^{1/2}({\mathcal {S}}_j)$$ (endowed with inner product $$\langle \cdot , \cdot \rangle _j$$) are available, the coarse space $$V_c$$ can be constructed as

\begin{aligned}V_c := \mathrm{span}_{j \in {\mathcal {I}} , k=1,\ldots ,{\widehat{m}}} \left\{ \mathbf{e}_j \otimes \psi _k^j \right\} ,\end{aligned}

for some positive integer $${\widehat{m}}$$, where $$\otimes$$ denotes the standard Kronecker product and $$\mathbf{e}_j$$, $$j \in {\mathcal {I}}$$, are the canonical basis vectors of $${\mathbb {R}}^N$$. In this case, the prolongation and restriction operators defined in (13) become

\begin{aligned} \begin{aligned} P\begin{bmatrix} \mathbf{v}^1\\ \vdots \\ \mathbf{v}^N\\ \end{bmatrix}&=\begin{bmatrix} \sum \limits _{k=1}^{{\widehat{m}}} v^1_k\psi _k^1,&\ldots&\sum \limits _{k=1}^{{\widehat{m}}} v^N_k\psi _k^N \end{bmatrix}^\top , \\ \quad R\begin{bmatrix} {h}_1\\ \vdots \\ {h}_N \\ \end{bmatrix}&=\begin{bmatrix} \langle \psi _1^1,{h}_1\rangle _1,&\ldots ,&\langle \psi _{{\widehat{m}}}^1,{h}_1\rangle _1,&\ldots&\langle \psi _1^N,{h}_N\rangle _N,&\ldots ,&\langle \psi _{{\widehat{m}}}^N,{h}_N\rangle _N \end{bmatrix}^\top , \end{aligned} \end{aligned}
(14)

for any $$\mathbf{v}^1,\ldots ,\mathbf{v}^N \in {\mathbb {R}}^{{\widehat{m}}}$$ and any $$({h}_1,\ldots ,{h}_N) \in {\mathcal {H}}$$. We wish to remark that the choice of the inner product $$\langle \cdot , \cdot \rangle$$ (or $$\langle \cdot , \cdot \rangle _j$$ for $$j\in {\mathcal {I}}$$) in the definition of P and R is arbitrary. One possible choice is the classical $$H^{1/2}$$ inner product; however, this can be too expensive from a numerical point of view. Another possibility is the classical $$L^2$$ inner product, which is the choice we make in our implementations.
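For local coarse functions, the operators in (14) are block-diagonal: each substructure contributes its own $${\widehat{m}}$$ basis functions. A discrete sketch, with hypothetical block sizes and random orthonormal local bases standing in for actual interface functions:

```python
import numpy as np

# Hypothetical discrete substructures: N blocks with given sizes, each
# carrying m_hat local basis functions (random orthonormal placeholders).
rng = np.random.default_rng(2)
N, m_hat = 3, 2
sizes = [10, 12, 8]  # dimensions of the discretized S_j

blocks = [np.linalg.qr(rng.standard_normal((nj, m_hat)))[0] for nj in sizes]

# Global prolongation matching span{ e_j (x) psi_k^j }: block diagonal.
n = sum(sizes)
P = np.zeros((n, N * m_hat))
row = 0
for j, B in enumerate(blocks):
    P[row:row + B.shape[0], j * m_hat:(j + 1) * m_hat] = B
    row += B.shape[0]
R = P.T  # restriction: local inner products with the psi_k^j

print(np.allclose(R @ P, np.eye(N * m_hat)))
```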

A detailed convergence analysis that covers our S2S method is presented in Sect. 4. It is based on the general structure of a two-level iteration operator. A direct calculation reveals that one iteration of the S2S method can be written as

\begin{aligned} \mathbf{u}^{\mathrm{new}}=G^{n_2}(I-PA_c^{-1}RA)G^{n_1}{} \mathbf{u}^{\mathrm{old}} + {\widetilde{M}} \mathbf{b}, \end{aligned}
(15)

where I is the identity operator over $${\mathcal {H}}$$; see also [13, 15, 37]. Here, $${\widetilde{M}}$$ is an operator which acts on the right-hand side vector $$\mathbf{b}$$ and can be regarded as the preconditioner corresponding to our two-level method. In error form, the iteration (15) becomes

\begin{aligned} \mathbf{e}^{\mathrm{new}}=T\mathbf{e}^{\mathrm{old}} \text { with } T:=G^{n_2}(I-PA_c^{-1}RA)G^{n_1}, \end{aligned}
(16)

where $$\mathbf{e}^{\mathrm{new}}:= \mathbf{u}-\mathbf{u}^{\mathrm{new}}$$ and $$\mathbf{e}^{\mathrm{old}}:= \mathbf{u}-\mathbf{u}^{\mathrm{old}}$$. Hence, to prove convergence of the S2S method we study the operator T.
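One can verify the factored form (16) numerically: applying $$n_1$$ smoothing steps, the coarse correction, and $$n_2$$ smoothing steps to an error vector reproduces the action of T. The small random contraction G below is a hypothetical stand-in for the substructured Schwarz operator, with $$A = I - G$$ as in Sect. 2.

```python
import numpy as np

# Hypothetical setting: a contractive random G and A = I - G, mirroring the
# substructured relation between iteration operator and system matrix.
rng = np.random.default_rng(3)
n, m, n1, n2 = 20, 3, 2, 1
G = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
A = np.eye(n) - G
P, _ = np.linalg.qr(rng.standard_normal((n, m)))
R = P.T
A_c = R @ A @ P

# Iteration operator T = G^{n2} (I - P A_c^{-1} R A) G^{n1} as in (16).
C = np.eye(n) - P @ np.linalg.solve(A_c, R @ A)
T = np.linalg.matrix_power(G, n2) @ C @ np.linalg.matrix_power(G, n1)

# One two-level sweep applied to an error vector: pre-smoothing, coarse
# correction of the error, post-smoothing.
e = rng.standard_normal(n)
e1 = np.linalg.matrix_power(G, n1) @ e
e2 = e1 - P @ np.linalg.solve(A_c, R @ (A @ e1))
e3 = np.linalg.matrix_power(G, n2) @ e2

print(np.allclose(e3, T @ e))  # the sweep reproduces the action of T
```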

## 4 Convergence Analysis

In this section, we provide convergence results for two-level iterative methods in a general framework that covers the setting of the S2S domain decomposition method presented in Sect. 3.

Let $$({{\mathcal {X}}}, \langle \cdot , \cdot \rangle )$$ be a complex inner-product space and $${{\mathbb {A}}}x = b$$ a linear problem, where the operator $${{\mathbb {A}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}$$ is bijective and $$b \in {{\mathcal {X}}}$$ is a given vector. Consider a set of $$m>0$$ linearly independent functions $$\{{\pmb \psi }_k\}_{k=1,\ldots ,m}$$, and denote by $$V_c$$ the finite-dimensional subspace of $${\mathcal {X}}$$ defined as the span of the functions $$\{{\pmb \psi }_k\}_{k=1,\ldots ,m}$$. We denote by $$P : {{\mathbb {C}}}^m \rightarrow {{\mathcal {X}}}$$ and $$R : {{\mathcal {X}}}\rightarrow {{\mathbb {C}}}^m$$ the prolongation and restriction operators defined as in (13), and define the matrix $${{\mathbb {A}}}_c := R {{\mathbb {A}}}P \in {{\mathbb {C}}}^{m \times m}$$. Given a smoothing operator $${{\mathbb {G}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}$$, a two-level iterative method (such as the one defined in Algorithm 1) is characterized by the iteration operator $${{\mathbb {T}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}$$ defined by

\begin{aligned} {{\mathbb {T}}}:= {{\mathbb {G}}}^{n_2} ( {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}) {{\mathbb {G}}}^{n_1}, \end{aligned}
(17)

where $${{\mathbb {I}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}$$ is the identity operator. In what follows the properties of $${{\mathbb {T}}}$$ are analyzed. In particular, the invertibility of $${{\mathbb {A}}}_c$$ is characterized in Sect. 4.1, the convergence (spectral) properties of $${{\mathbb {T}}}$$ are discussed in the case of global coarse functions in Sect. 4.2 and in the case of local coarse functions in Sect. 4.3.

### 4.1 Invertibility of the Coarse Matrix

The well-posedness of a two-level method (like S2S) is essentially related to the invertibility of the coarse operator $${{\mathbb {A}}}_c$$. Even though one could replace the inverse of $${{\mathbb {A}}}_c$$ with its pseudo-inverse, as discussed in, e.g., [38], in our analysis we assume that $${{\mathbb {A}}}_c$$ is invertible. The next lemma provides an equivalent characterization of the invertibility of $${{\mathbb {A}}}_c$$.

### Lemma 2

(Invertibility of a coarse operator $${{\mathbb {A}}}_c$$) Let $${\mathbb {P}}_{V_c} : {{\mathcal {X}}}\rightarrow V_c$$ be the orthogonal projection onto $$V_c$$. The coarse matrix $${{\mathbb {A}}}_c = R {{\mathbb {A}}}P$$ has full rank if and only if $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0 \, \forall \mathbf{v} \in V_c\setminus \{0\}$$.

### Proof

We first show that if $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0$$ for every $$\mathbf{v} \in V_c\setminus \{0\}$$, then $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P$$ has full rank. By the rank-nullity theorem, it suffices to show that the only element in the kernel of $${{\mathbb {A}}}_c$$ is the zero vector. To do so, we recall the definitions of P and R given in (13) and consider a vector $$\mathbf{z}\in {{\mathbb {C}}}^m$$. Clearly, $$P \mathbf{z}=0$$ if and only if $$\mathbf{z}=0$$, and for any $$\mathbf{z} \in {{\mathbb {C}}}^m$$ the function $$P \mathbf{z}$$ lies in $$V_c$$. Since $${{\mathbb {A}}}$$ is invertible, $${{\mathbb {A}}}P\mathbf{z}=0$$ if and only if $$\mathbf{z}=0$$, and by our assumption it holds that $${\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) \ne 0$$ for every nonzero $$\mathbf{z}$$. Now, we notice that $$R\mathbf{w} \ne 0$$ for all $$\mathbf{w} \in V_c \setminus \{0\}$$, and $$R\mathbf{w} = 0$$ for all $$\mathbf{w} \in V_c^\perp$$, where $$V_c^\perp$$ denotes the orthogonal complement of $$V_c$$ in $${\mathcal {X}}$$ with respect to $$\langle \cdot , \cdot \rangle$$. Since $$({\mathcal {X}},\langle \cdot , \cdot \rangle )$$ is an inner-product space, we have $${{\mathbb {A}}}P\mathbf{z} = {\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) + ({{\mathbb {I}}}-{\mathbb {P}}_{V_c})({{\mathbb {A}}}P\mathbf{z})$$ with $$({{\mathbb {I}}}-{\mathbb {P}}_{V_c})({{\mathbb {A}}}P\mathbf{z}) \in V_c^\perp$$. Hence, $$R{{\mathbb {A}}}P\mathbf{z} = R{\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) \ne 0$$ for any nonzero $$\mathbf{z}$$.

Now we show that, if $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P$$ has full rank, then $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0$$ for any $$\mathbf{v} \in V_c\setminus \{0\}$$. We proceed by contraposition and prove that if there exists a $$\mathbf{v} \in V_c\setminus \{0\}$$ such that $${{\mathbb {A}}}\mathbf{v} \in V_c^\perp$$, then $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P$$ does not have full rank. Assume that there is a $$\mathbf{v} \in V_c\setminus \{0\}$$ such that $${{\mathbb {A}}}\mathbf{v} \in V_c^\perp$$. Since $$\mathbf{v}$$ is in $$V_c$$, there exists a nonzero vector $$\mathbf{z}$$ such that $$\mathbf{v}=P\mathbf{z}$$. Hence $${{\mathbb {A}}}P\mathbf{z} \in V_c^\perp$$. We can now write $${{\mathbb {A}}}_c \mathbf{z}= R({{\mathbb {A}}}P\mathbf{z})=0$$, which implies that $${{\mathbb {A}}}_c$$ does not have full rank. $$\square$$

The following example shows that the invertibility of $${{\mathbb {A}}}$$ does not necessarily imply the invertibility of $${{\mathbb {A}}}_c$$.

### Example 1

Consider the invertible matrix $${{\mathbb {A}}}:= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$. Let us denote by $$\mathbf{e}_1$$ and $$\mathbf{e}_2$$ the canonical vectors in $${\mathbb {R}}^2$$, define $$V_c := \mathrm{span}\{ \mathbf{e}_1 \}$$, and consider the classical scalar product for $${\mathbb {R}}^2$$. This gives $$V_c^\perp = \mathrm{span}\{ \mathbf{e}_2 \}$$. The prolongation and restriction operators are $$P=\mathbf{e}_1$$ and $$R=P^\top$$. Clearly, we have that $${{\mathbb {A}}}\mathbf{e}_1 = \mathbf{e}_2$$, which implies that $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) = 0$$ for all $$\mathbf{v} \in V_c$$. Moreover, in this case we get $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P = 0$$, which shows that $${{\mathbb {A}}}_c$$ is not invertible.
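This example is easy to reproduce numerically; a minimal NumPy check:

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 0.]])        # invertible: det(A) = -1
P = np.array([[1.], [0.]])      # prolongation onto V_c = span{e_1}
R = P.T
Ac = R @ A @ P                  # coarse matrix A_c = R A P
print(np.linalg.det(A))         # -1.0: A is invertible
print(Ac)                       # [[0.]]: A_c is singular
```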

Notice that, if $${{\mathbb {A}}}(V_c) \subseteq V_c$$, then it holds that $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0$$ $$\forall \mathbf{v} \in V_c \setminus \{ 0\}$$, and $${{\mathbb {A}}}_c$$ is invertible. The condition $${{\mathbb {A}}}(V_c) \subseteq V_c$$ is satisfied for operators of the form $${{\mathbb {A}}}= {{\mathbb {I}}}- {{\mathbb {G}}}$$, as for instance those defined in (12), if the functions $${\pmb \psi }_k$$ are eigenfunctions of $${{\mathbb {G}}}$$. However, it is only a sufficient condition for the invertibility of $${{\mathbb {A}}}_c$$: as the following example shows, there exist invertible operators $${{\mathbb {A}}}$$ that do not satisfy this condition, but still lead to an invertible $${{\mathbb {A}}}_c$$.

### Example 2

Consider the invertible matrix $${{\mathbb {A}}}:= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$. Let us denote by $$\mathbf{e}_1$$, $$\mathbf{e}_2$$ and $$\mathbf{e}_3$$ the three canonical vectors in $${\mathbb {R}}^3$$, define $$V_c := \mathrm{span}\{ \mathbf{e}_1 , \mathbf{e}_2 \}$$, and consider the classical scalar product for $${\mathbb {R}}^3$$. This gives $$V_c^\perp = \mathrm{span}\{ \mathbf{e}_3 \}$$. The prolongation and restriction operators are $$P=[\mathbf{e}_1 , \mathbf{e}_2]$$ and $$R=P^\top$$, and we get $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P = I$$, where I is the $$2\times 2$$ identity matrix. Now, we notice that $${{\mathbb {A}}}\mathbf{e}_2 = \mathbf{e}_2 + \mathbf{e}_3$$, which implies that $${\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{e}_2 )\ne 0$$ and $${\mathbb {P}}_{V_c^\perp }( {{\mathbb {A}}}\mathbf{e}_2 )\ne 0$$. Hence $$V_c$$ is not invariant under $${{\mathbb {A}}}$$, but $${{\mathbb {A}}}_c$$ is invertible.
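A short NumPy check of this second example: the coarse matrix is the identity even though $$V_c$$ is not invariant under $${{\mathbb {A}}}$$.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 1.],
              [0., 1., 0.]])
P = np.eye(3)[:, :2]            # V_c = span{e_1, e_2}
Ac = P.T @ A @ P                # coarse matrix A_c = R A P with R = P^T
print(Ac)                       # 2x2 identity: A_c is invertible
print(A @ np.eye(3)[:, 1])      # A e_2 = e_2 + e_3: V_c is not invariant under A
```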

### 4.2 Global Coarse Functions

In this section, we study general convergence properties of the operator $${{\mathbb {T}}}$$. The first theorem characterizes the relation between the kernel of $${{\mathbb {T}}}$$ and the coarse space $$V_c$$.

### Theorem 2

(Kernel of $${{\mathbb {T}}}$$, coarse space $$V_c$$) Let P and R be defined as in (13) by linearly independent functions $${\pmb \psi }_1,\ldots ,{\pmb \psi }_m$$ such that $${{\mathbb {A}}}_c = R{{\mathbb {A}}}P$$ is invertible. For any $${\pmb \psi }\in {{\mathcal {X}}}$$ it holds that

\begin{aligned} \left[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}\right] {\pmb \psi }= 0 \; \Leftrightarrow \; {\pmb \psi }\in V_c := \mathrm{span}\{ {\pmb \psi }_1,\ldots ,{\pmb \psi }_m \} . \end{aligned}
(18)

### Proof

Assume that $${\pmb \psi }\in V_c$$. This implies that there exists a vector $$\mathbf{z}$$ such that $${\pmb \psi }= P \mathbf{z}$$. Hence, we can compute

\begin{aligned} \left[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}\right] {\pmb \psi }= {\pmb \psi }- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}{\pmb \psi }= P \mathbf{z} - P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}P\mathbf{z}=P\mathbf{z}-P\mathbf{z}=0. \end{aligned}

Let us now prove the reverse implication, that is, $$[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }= 0 \Rightarrow {\pmb \psi }\in V_c$$. We proceed by contraposition and assume that $${\pmb \psi }\notin V_c$$, that is, $${\pmb \psi }= {\pmb \psi }_a + {\pmb \psi }_b$$ with $${\pmb \psi }_a \in V_c$$ and a nonzero $${\pmb \psi }_b \in V_c^\perp$$. Since $${\pmb \psi }_a \in V_c$$, we already know that $$[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }_a = 0$$. Hence, it holds that

\begin{aligned} \left[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}\right] {\pmb \psi }= \left[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}\right] ({\pmb \psi }_a + {\pmb \psi }_b) = \underbrace{{\pmb \psi }_b}_{ \in V_c^\perp }-\underbrace{P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}{\pmb \psi }_b}_{ \in V_c }\ne 0. \end{aligned}
(19)

$$\square$$
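Theorem 2 can be illustrated numerically: the operator $${{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}$$ annihilates exactly the elements of $$V_c$$. Below is a small randomized sketch of ours, assuming $${{\mathbb {A}}}_c$$ invertible (which holds almost surely for this random construction).

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 8, 3
A = rng.standard_normal((N, N)) + N*np.eye(N)     # diagonally shifted: invertible
P = rng.standard_normal((N, m))                   # columns span V_c
R = P.T
C = np.eye(N) - P @ np.linalg.solve(R @ A @ P, R @ A)   # I - P A_c^{-1} R A
psi_in = P @ rng.standard_normal(m)               # an element of V_c
psi_out = rng.standard_normal(N)                  # generic element, not in V_c (a.s.)
print(np.linalg.norm(C @ psi_in))                 # ~ 0: V_c is in the kernel
print(np.linalg.norm(C @ psi_out))                # nonzero: nothing else is annihilated
```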

To continue our analysis we construct a matrix representation of the operator $${{\mathbb {T}}}$$ defined in (17). From now on, for the sake of simplicity, we set $$n_1=1$$ and $$n_2=0$$. We further consider the following assumptions:

1. (H1) $$V_c$$ is the span of m linearly independent functions $$\{ \mathbf{p}_k\}_{k=1}^m \subset {{\mathcal {X}}}$$, which are used to define the operators P and R as in (13).

2. (H2) The operators $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ have the same linearly independent eigenvectors $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$. The corresponding eigenvalues of $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are denoted by $${{\widetilde{\lambda }}}_k$$ and $$\lambda _k$$, respectively.

3. (H3) The eigenvalues $$\lambda _k$$ satisfy $$|\lambda _k |\in (0,1)$$ and $$|\lambda _k|\le |\lambda _{k-1}|$$ for all $$k \ge 2$$.

4. (H4) There exists an index $${\widetilde{m}}\ge m$$ such that $$V_c \subseteq \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}.$$

### Remark 1

Notice that hypothesis (H2) is valid in the context of our S2S method, where the operators $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ satisfy the relation $${{\mathbb {A}}}= {{\mathbb {I}}}- {{\mathbb {G}}}$$ and hence have the same eigenvectors. Moreover, hypothesis (H3) is satisfied if $${{\mathbb {G}}}$$ corresponds to a classical parallel Schwarz method, as in the case of our S2S method. The classical damped Jacobi method is another important instance satisfying (H2) and (H3). Moreover, if one supposes that the vectors $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ in (H2) are orthogonal, or that $${\mathcal {X}}$$ is finite dimensional, then (H2) and (H4) imply $$V_c \cap \mathrm{span}\,\{ {\pmb \psi }_k \}_{k={\widetilde{m}}+1}^\infty =\{0\}.$$
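As a concrete instance of this remark, the following check (an illustrative damped Jacobi setup of our own choosing, with $$D$$ the diagonal of $$A$$) confirms that $$G = I - \omega D^{-1} A$$ shares the eigenvectors of $$A$$ and satisfies (H3):

```python
import numpy as np

N, omega = 8, 0.7
A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # SPD model operator, D = 2I
G = np.eye(N) - omega * np.diag(1/np.diag(A)) @ A    # damped Jacobi: G = I - omega D^{-1} A
wA, VA = np.linalg.eigh(A)
# (H2): A v = lam v implies G v = (1 - omega*lam/2) v, i.e. same eigenvectors
print(np.linalg.norm(G @ VA - VA @ np.diag(1 - omega*wA/2)))   # ~ 0
mu = np.abs(1 - omega*wA/2)                          # moduli of the eigenvalues of G
print(mu.min() > 0, mu.max() < 1)                    # (H3): |lambda_k| in (0,1)
```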

Let us now construct a matrix representation of the operator $${{\mathbb {T}}}$$. Since $$V_c \subseteq \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$, the structure of $${{\mathbb {T}}}$$ implies that $$\mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$ is invariant under $${{\mathbb {T}}}$$, that is, $${{\mathbb {T}}}{\pmb \psi }_j \in \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$ for any $$j=1,\ldots ,{\widetilde{m}}$$. Moreover, a direct calculation reveals that $${{\mathbb {T}}}{\pmb \psi }_j = \lambda _j {\pmb \psi }_j - \sum _{\ell =1}^{{\widetilde{m}}} x_{j-{\widetilde{m}},\ell } {\pmb \psi }_\ell$$ for $$j \ge {\widetilde{m}}+1$$ and some coefficients $$x_{j-{\widetilde{m}},\ell }$$. Therefore, for any $${\pmb \psi }_j$$ there exist at most $${\widetilde{m}}+1$$ nonzero coefficients $${\widetilde{t}}_{j,\ell }$$ such that $${{\mathbb {T}}}{\pmb \psi }_j = {\widetilde{t}}_{j,j} {\pmb \psi }_j + \sum _{\ell =1,\ell \ne j}^{{\widetilde{m}}} {\widetilde{t}}_{j,\ell }{\pmb \psi }_\ell$$. Ordering the coefficients $${\widetilde{t}}_{j,\ell }$$ into an infinite matrix denoted by $${\widetilde{T}}$$, we obtain that

\begin{aligned} {\widetilde{T}}= \begin{bmatrix} {\widetilde{T}}_{{\widetilde{m}}} & 0 \\ X & \Lambda _{{\widetilde{m}}} \end{bmatrix}, \qquad \begin{aligned}&\Lambda _{{\widetilde{m}}} = \mathrm{diag}\, (\lambda _{{\widetilde{m}}+1},\lambda _{{\widetilde{m}}+2},\ldots ), \quad {\widetilde{T}}_{{\widetilde{m}}} \in {\mathbb {C}}^{{\widetilde{m}}\times {\widetilde{m}}}, \\&[X]_{j,\ell } = x_{j,\ell }, \quad \ell =1,\ldots ,{\widetilde{m}}, \; j=1,2,\ldots \end{aligned} \end{aligned}
(20)

The infinite matrix $${\widetilde{T}}$$ can be regarded as a linear operator acting on the space of sequences. The matrix representation (20) turns out to be very useful for analyzing the convergence properties of the operator $${{\mathbb {T}}}$$. An induction argument now yields

\begin{aligned} {\widetilde{T}}^n = \begin{bmatrix} {\widetilde{T}}_{{\widetilde{m}}}^n & 0 \\ P_n & \Lambda _{{\widetilde{m}}}^n \end{bmatrix} \quad \text {with} \quad P_n = \sum _{j=1}^n \Lambda _{{\widetilde{m}}}^{n-j} X {\widetilde{T}}_{{\widetilde{m}}}^{j-1}. \end{aligned}
(21)

If the matrix $${\widetilde{T}}_{{\widetilde{m}}}$$ is nilpotent with degree $$q \in {\mathbb {N}}_+$$, that is $${\widetilde{T}}_{{\widetilde{m}}}^p=0$$ for all $$p\ge q$$, then we get for $$n>q$$ that

\begin{aligned}P_n = \sum _{j=1}^q \Lambda _{{\widetilde{m}}}^{n-j} X {\widetilde{T}}_{{\widetilde{m}}}^{j-1} + \underbrace{\sum _{j=q+1}^n \Lambda _{{\widetilde{m}}}^{n-j} X {\widetilde{T}}_{{\widetilde{m}}}^{j-1}}_{=0} = \Lambda _{{\widetilde{m}}}^n \sum _{j=1}^{q} \Lambda _{{\widetilde{m}}}^{-j}X {\widetilde{T}}_{{\widetilde{m}}}^{j-1}.\end{aligned}

Thus, by defining $$X_{q} := \sum _{j=1}^{q} \Lambda _{{\widetilde{m}}}^{-j}X {\widetilde{T}}_{{\widetilde{m}}}^{j-1}$$, one gets for $$n>q$$ that

\begin{aligned} {\widetilde{T}}^n = T_a^{n-1} T_b \quad \text {with} \quad T_a := \begin{bmatrix} 0 & 0 \\ 0 & \Lambda _{{\widetilde{m}}} \end{bmatrix}, \quad T_b := \begin{bmatrix} 0 & 0 \\ \Lambda _{{\widetilde{m}}} X_{q} & \Lambda _{{\widetilde{m}}} \end{bmatrix}. \end{aligned}
(22)
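The identities (20)–(22) can be sanity-checked on a finite truncation of $${\widetilde{T}}$$ with a synthetic strictly triangular (hence nilpotent) block; the sizes and random entries below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
mt, k = 3, 5                                    # sizes of the T_m block and truncated tail
Tm = np.triu(rng.standard_normal((mt, mt)), 1)  # strictly upper triangular: nilpotent, q <= mt
X = rng.standard_normal((k, mt))
Lam = np.diag(rng.uniform(0.1, 0.9, k))         # diagonal, invertible tail block
Tt = np.block([[Tm, np.zeros((mt, k))], [X, Lam]])
q = mt                                          # Tm^p = 0 for all p >= mt
Xq = sum(np.linalg.matrix_power(Lam, -j) @ X @ np.linalg.matrix_power(Tm, j-1)
         for j in range(1, q+1))                # X_q as defined before (22)
Ta = np.block([[np.zeros((mt, mt)), np.zeros((mt, k))], [np.zeros((k, mt)), Lam]])
Tb = np.block([[np.zeros((mt, mt)), np.zeros((mt, k))], [Lam @ Xq, Lam]])
n = 7                                           # any n > q
lhs = np.linalg.matrix_power(Tt, n)
rhs = np.linalg.matrix_power(Ta, n-1) @ Tb
print(np.allclose(lhs, rhs))                    # factorization (22) holds
```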

Let us begin with a case where the linear operators $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are bounded and self-adjoint, and the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ form an orthonormal basis with respect to an inner product $$(\cdot ,\cdot )$$ (not necessarily equal to $$\langle \cdot , \cdot \rangle$$) such that $$({{\mathcal {X}}},(\cdot ,\cdot ))$$ is a Hilbert space. We denote by $$\Vert \cdot \Vert _{{{\mathcal {X}}}}$$ the norm induced by $$(\cdot ,\cdot )$$, and by

\begin{aligned} \Vert S\Vert _{{{\mathcal {X}}}}:=\sup \limits _{\Vert \mathbf{v}\Vert _{{{\mathcal {X}}}}=1}\Vert S\mathbf{v}\Vert _{{{\mathcal {X}}}} \text { for any }S \in {\mathcal {L}}({{\mathcal {X}}}), \end{aligned}
(23)

the corresponding operator norm. Notice that, since $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are bounded, $${{\mathbb {T}}}$$ is bounded as well. Thus, we can study the asymptotic convergence factor $$\rho ({{\mathbb {T}}})$$ defined as $$\lim \limits _{n\rightarrow \infty } \Vert {{\mathbb {T}}}^n\Vert _{{{\mathcal {X}}}}^{1/n}=\rho ({{\mathbb {T}}})$$; see, e.g., [44, Chapter 17]. Since the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ are assumed orthonormal with respect to this inner product, a direct calculation allows one to prove that $$\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}$$, where

\begin{aligned} \Vert S \Vert _{\ell ^2}:=\sup \limits _{\Vert \mathbf{v}\Vert _{\ell ^2}=1}\Vert S\mathbf{v}\Vert _{\ell ^2} \text { for any }S \in {\mathcal {L}}(\ell ^2). \end{aligned}
(24)

Hence, we obtain $$\rho ({{\mathbb {T}}}) = \lim \limits _{n\rightarrow \infty } \Vert {{\mathbb {T}}}^n\Vert _{{{\mathcal {X}}}}^{1/n} = \lim \limits _{n\rightarrow \infty } \Vert {\widetilde{T}}^n\Vert _{\ell ^2}^{1/n}$$. Notice that since $${{\mathbb {T}}}$$ is a bounded operator and $$\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}$$, the operator $${\widetilde{T}}$$ is bounded in the $$\Vert \cdot \Vert _{\ell ^2}$$ norm. Thus, the submatrices X, $$\Lambda _{{\widetilde{m}}}$$ and $${\widetilde{T}}_{{\widetilde{m}}}$$ are bounded in the $$\Vert \cdot \Vert _{\ell ^2}$$ norm as well. Therefore, $$T_a$$ and $$T_b$$ are also bounded in the $$\Vert \cdot \Vert _{\ell ^2}$$ norm. Thus, Eq. (22) allows us to estimate $$\rho ({{\mathbb {T}}})$$:

\begin{aligned} \begin{aligned} \rho ({{\mathbb {T}}})&= \lim \limits _{n\rightarrow \infty } \Vert {\widetilde{T}}^n\Vert _{\ell ^2}^{1/n} \le \lim \limits _{n\rightarrow \infty } (\Vert T_a^{n-1}\Vert _{\ell ^2} \Vert T_b\Vert _{\ell ^2})^{1/n}\le \lim \limits _{n\rightarrow \infty } |\lambda _{{\widetilde{m}}+1}|^{(n-1)/n} \Vert T_b\Vert _{\ell ^2}^{1/n} \\&= |\lambda _{{\widetilde{m}}+1}|. \end{aligned} \end{aligned}

Now, recalling (24), one obtains for $$n>q$$ that

\begin{aligned} \Vert {\widetilde{T}}^n \Vert _{\ell ^2}=\sup \limits _{\Vert \mathbf{v}\Vert _{\ell ^2}=1}\Vert {\widetilde{T}}^n\mathbf{v}\Vert _{\ell ^2} \ge \Vert {\widetilde{T}}^n\mathbf{e}_{{\widetilde{m}}+1}\Vert _{\ell ^2} = |\lambda _{{\widetilde{m}}+1}|^n, \end{aligned}

where $$\mathbf{e}_{{\widetilde{m}}+1} \in \ell ^2$$ is the $$({\widetilde{m}}+1)$$-th canonical vector. This estimate implies that $$\rho ({{\mathbb {T}}}) = \lim \limits _{n\rightarrow \infty } \Vert {\widetilde{T}}^n\Vert _{\ell ^2}^{1/n} \ge |\lambda _{{\widetilde{m}}+1}|$$, and thus $$\rho ({{\mathbb {T}}}) = |\lambda _{{\widetilde{m}}+1}|$$. Using Theorem 2, one can see that the matrix $${\widetilde{T}}_{{\widetilde{m}}}$$ is nilpotent with degree $$q=1$$ if $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$. In this case $${\widetilde{m}}=m$$, so that $$|\lambda _{{\widetilde{m}}+1}|=|\lambda _{m+1}|$$. We can summarize these findings in the next theorem.

### Theorem 3

(Convergence of a two-level method) Let the hypotheses (H1), (H2), (H3) and (H4) be satisfied, $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ be self-adjoint, and assume that the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ form an orthonormal basis with respect to an inner product $$(\cdot ,\cdot )$$ such that $$({{\mathcal {X}}},(\cdot ,\cdot ))$$ is a Hilbert space. If $${\widetilde{T}}_{{\widetilde{m}}}$$ is nilpotent (e.g., if $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$), then

\begin{aligned} \rho ({{\mathbb {T}}}) = \lim \limits _{n\rightarrow \infty } \Vert {{\mathbb {T}}}^n\Vert _{{{\mathcal {X}}}}^{1/n} = |\lambda _{{\widetilde{m}}+1} |< 1, \end{aligned}

where $$\Vert \cdot \Vert _{{{\mathcal {X}}}}$$ is the operator norm defined in (23).
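Theorem 3 can be observed numerically. The sketch below (our illustrative finite-dimensional setup: damped Jacobi on a 1D Laplacian, with the coarse space spanned by the $$m$$ dominant eigenvectors of $$G$$) confirms $$\rho ({{\mathbb {T}}}) = |\lambda _{m+1}|$$.

```python
import numpy as np

N, m, omega = 12, 3, 0.6
A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
G = np.eye(N) - (omega/2)*A                       # shares eigenvectors with A (H2)
w, V = np.linalg.eigh(G)
idx = np.argsort(-np.abs(w))                      # order |lambda_1| >= |lambda_2| >= ... (H3)
P = V[:, idx[:m]]                                 # spectral coarse space V_c
T = (np.eye(N) - P @ np.linalg.solve(P.T@A@P, P.T@A)) @ G   # T with n1 = 1, n2 = 0
rho = max(abs(np.linalg.eigvals(T)))
print(rho, abs(w[idx[m]]))                        # rho(T) = |lambda_{m+1}|
```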

In the case of a spectral coarse space, the expression of $${\widetilde{T}}$$ in (20) simplifies. The following result holds.

### Theorem 4

(The matrix $${\widetilde{T}}$$ for self-adjoint $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ and a spectral coarse space) Let the hypotheses (H1), (H2), (H3) and (H4) be satisfied. If the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ form an orthonormal basis with respect to an inner product making $${{\mathcal {X}}}$$ a Hilbert space, the operators $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are self-adjoint, and $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$ with $${\widetilde{m}}=m$$, then

\begin{aligned} {\widetilde{T}}= \begin{bmatrix} 0 & 0 \\ 0 & \Lambda _{{\widetilde{m}}} \end{bmatrix}, \end{aligned}

where $$\Lambda _{{\widetilde{m}}}$$ is defined in (20).

### Proof

Since $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$, Theorem 2 implies that $${\widetilde{T}}_{{\widetilde{m}}}=0$$. Thus, to obtain the result, it is sufficient to show that all the components of the submatrix X (see (20)) are zero. These components are $$x_{j,\ell } = {\widetilde{t}}_{j,\ell }$$ for $$j>{\widetilde{m}}$$ and $$\ell \le {\widetilde{m}}$$. Thus, we assume that $$j>{\widetilde{m}}$$ and $$\ell \le {\widetilde{m}}$$, recall the formula $${{\mathbb {T}}}{\pmb \psi }_j = {\widetilde{t}}_{j,j} {\pmb \psi }_j + \sum _{k=1,k\ne j}^{{\widetilde{m}}} {\widetilde{t}}_{j,k}{\pmb \psi }_k$$, and multiply this by $${\pmb \psi }_\ell$$ to obtain $$({{\mathbb {T}}}{\pmb \psi }_j , {\pmb \psi }_\ell ) = {\widetilde{t}}_{j,\ell }$$. Since $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are self-adjoint, one obtains by a direct calculation that $$[ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}]^* = [ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ]$$. Using this property and recalling the structure of $${{\mathbb {T}}}$$, we can compute

\begin{aligned} ({{\mathbb {T}}}{\pmb \psi }_j , {\pmb \psi }_\ell ) = ([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {{\mathbb {G}}}{\pmb \psi }_j , {\pmb \psi }_\ell ) = \lambda _j ( {\pmb \psi }_j , [ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ] {\pmb \psi }_\ell ). \end{aligned}

Now, since $$[ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ] {\pmb \psi }_\ell \in \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}$$ as $$\ell \le {\widetilde{m}}$$, the orthogonality of the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ and the hypothesis (H4) imply that $$( {\pmb \psi }_j , [ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ] {\pmb \psi }_\ell ) = 0$$, and thus $${\widetilde{t}}_{j,\ell }=0$$. Hence, the result follows. $$\square$$

Theorem 4 implies directly that

\begin{aligned} \Vert {{\mathbb {T}}}\Vert _{{\mathcal {X}}}= \rho ({{\mathbb {T}}}) = |\lambda _{m+1}|. \end{aligned}

Let us now assume that $${{\mathbb {A}}}$$ is positive definite, and thus there exists a unique positive square root operator $${{\mathbb {A}}}^{1/2}$$ such that $${{\mathbb {A}}}^{1/2}{\pmb \psi }_j={{\widetilde{\lambda }}}_j^{1/2}{\pmb \psi }_j$$; see [45, Theorem 6.6.4]. A straightforward calculation leads to $$\Vert S\Vert _{{{\mathbb {A}}}} = \Vert {{\mathbb {A}}}^{1/2} S {{\mathbb {A}}}^{-1/2} \Vert _{{{\mathcal {X}}}}$$ (see, e.g., [46, Section C.1.3] for a finite-dimensional matrix counterpart). Notice that, as for $${{\mathbb {T}}}$$ and $${\widetilde{T}}$$, we can obtain the matrix representation $${\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2}$$ of $${{\mathbb {A}}}^{1/2} {{\mathbb {T}}}{{\mathbb {A}}}^{-1/2}$$, where $${\widetilde{T}}$$ is defined in (20) and $${\widetilde{\Lambda }}=\mathrm{diag}\, ({{\widetilde{\lambda }}}_{1},{{\widetilde{\lambda }}}_{2},\ldots )$$. Thus, as for $$\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}$$, one can prove that $$\Vert {{\mathbb {A}}}^{1/2} {{\mathbb {T}}}{{\mathbb {A}}}^{-1/2} \Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \Vert _{\ell ^2}$$. Hence, we get $$\Vert {{\mathbb {T}}}\Vert _{{{\mathbb {A}}}} = \Vert {\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \Vert _{\ell ^2}$$. Now, if it holds that $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$, then Theorem 4 implies that

\begin{aligned} \Vert {{\mathbb {T}}}\Vert _{{{\mathbb {A}}}} = \Vert {\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \Vert _{\ell ^2} = \Vert \Lambda _{{\widetilde{m}}} \Vert _{\ell ^2} = |\lambda _{m+1} |= \rho ({{\mathbb {T}}}). \end{aligned}

It has been proved in [38, Theorem 5.5] that this result is optimal in the sense that, if $${{\mathbb {A}}}$$ and $${{\mathbb {G}}}$$ are symmetric and positive (semi-)definite, then the coarse space $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$ minimizes the energy norm of the two-level operator $${{\mathbb {T}}}$$. Clearly, if $${{\mathbb {A}}}$$ has positive and negative eigenvalues (even though it remains symmetric), this result is no longer valid. In this case, as we are going to see in Theorem 6, the coarse space $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$ is not necessarily (asymptotically) optimal.
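For a symmetric positive definite setting, the identity $$\Vert {{\mathbb {T}}}\Vert _{{{\mathbb {A}}}} = \rho ({{\mathbb {T}}}) = |\lambda _{m+1}|$$ can be checked directly; the sketch below (our illustrative damped-Jacobi setup, forming $$A^{1/2}$$ via the spectral decomposition) computes the energy norm as $$\Vert A^{1/2} T A^{-1/2}\Vert _2$$.

```python
import numpy as np

N, m, omega = 10, 2, 0.5
A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # SPD
G = np.eye(N) - (omega/2)*A
w, V = np.linalg.eigh(G)
idx = np.argsort(-np.abs(w))
P = V[:, idx[:m]]                                    # spectral coarse space
T = (np.eye(N) - P @ np.linalg.solve(P.T@A@P, P.T@A)) @ G
wA, VA = np.linalg.eigh(A)
Ah  = VA @ np.diag(np.sqrt(wA)) @ VA.T               # A^{1/2}
Ahi = VA @ np.diag(wA**-0.5)    @ VA.T               # A^{-1/2}
energy_norm = np.linalg.norm(Ah @ T @ Ahi, 2)        # ||T||_A = ||A^{1/2} T A^{-1/2}||
rho = max(abs(np.linalg.eigvals(T)))
print(energy_norm, rho, abs(w[idx[m]]))              # all three coincide
```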

The situation is very different if the functions $$\{ {\pmb \psi }_k \}_{k=1}^\infty$$ are not orthogonal and $${{\mathbb {A}}}$$ is not symmetric. To study this case, we work in a finite-dimensional setting and assume that $${{\mathcal {X}}}= {\mathbb {C}}^N = \mathrm{span}\, \{ {\pmb \psi }_k \}_{k=1}^N$$. Thus, both $${{\mathbb {T}}}$$ and $${\widetilde{T}}$$ are matrices in $${\mathbb {C}}^{N \times N}$$ and it holds that $${{\mathbb {T}}}V = V {\widetilde{T}}^\top$$, where $$V=[{\pmb \psi }_1,\ldots ,{\pmb \psi }_N]$$. This means that $${{\mathbb {T}}}$$ and $${\widetilde{T}}$$ are similar matrices and, thus, have the same spectrum. Hence, using Theorem 2 we obtain a finite-dimensional counterpart of Theorem 3, which does not require the orthogonality of $$\{ {\pmb \psi }_k \}_{k=1}^N$$.

### Theorem 5

(Convergence of a two-level method in finite dimensions) Assume that $${{\mathcal {X}}}= {{\mathbb {C}}}^N$$ and let the hypotheses (H1), (H2), (H3) and (H4) be satisfied. If $$V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}$$ (with $$m={\widetilde{m}}<N$$), then

\begin{aligned} \rho ({{\mathbb {T}}}) = \rho ({\widetilde{T}}) = |\lambda _{m+1} |<1. \end{aligned}
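Theorem 5 does not require orthogonality of the eigenvectors. The following randomized sketch (our own construction with prescribed non-orthogonal eigenvectors and the S2S-type relation $$A = I - G$$; $$A_c$$ is invertible almost surely here) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 6, 2
lam = np.array([0.9, 0.8, 0.6, 0.4, 0.3, 0.2])    # (H3): |lambda_k| decreasing, in (0,1)
V = rng.standard_normal((N, N)) + 2*np.eye(N)     # non-orthogonal eigenvectors (invertible a.s.)
G = V @ np.diag(lam) @ np.linalg.inv(V)
A = np.eye(N) - G                                 # (H2): A = I - G, same eigenvectors
P = V[:, :m]                                      # spectral coarse space V_c
T = (np.eye(N) - P @ np.linalg.solve(P.T@A@P, P.T@A)) @ G
rho = max(abs(np.linalg.eigvals(T)))
print(rho)                                        # = |lambda_{m+1}| = 0.6
```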

The coarse space $$V_c = \mathrm{span} \, \{ {\pmb \psi }_k \}_{k=1}^{m}$$ is not necessarily (asymptotically) optimal. A different choice can lead to better asymptotic convergence, or even to a divergent two-level method. To show these results, we consider an analysis based on perturbations of the functions spanning the coarse space $$V_c = \mathrm{span} \, \{ {\pmb \psi }_k \}_{k=1}^{m}$$. We have seen in Theorem 2 that an eigenvector of $${{\mathbb {G}}}$$ is in the kernel of the two-level operator $${{\mathbb {T}}}$$ if and only if it belongs to $$V_c$$. Suppose the coarse space cannot represent exactly some eigenvector $${\pmb \psi }$$ of $${{\mathbb {G}}}$$: how is the convergence of the method affected? Let us perturb the coarse space $$V_c$$ using the eigenvector $${\pmb \psi }_{m+1}$$, that is, $$V_c(\varepsilon ) := \mathrm {span}\, \{ {\pmb \psi }_j + \varepsilon \, {\pmb \psi }_{m+1} \}_{j=1}^m$$. Clearly, $$\text {dim}\, V_c(\varepsilon ) = m$$ for any $$\varepsilon \in {\mathbb {R}}$$. In this case, (20) holds with $${\widetilde{m}}= m+1$$ and $${\widetilde{T}}\in {\mathbb {C}}^{N \times N}$$ becomes

\begin{aligned} {\widetilde{T}}(\varepsilon ) = \begin{bmatrix} {\widetilde{T}}_{{\widetilde{m}}}(\varepsilon ) & 0 \\ X(\varepsilon ) & \Lambda _{{\widetilde{m}}} \end{bmatrix}, \end{aligned}
(25)

where we make explicit the dependence on $$\varepsilon$$. Notice that $$\varepsilon =0$$ leads to $${\widetilde{T}}_{{\widetilde{m}}}(0)=\mathrm{diag}\, (0,\ldots ,0,\lambda _{m+1}) \in {\mathbb {C}}^{{\widetilde{m}}\times {\widetilde{m}}}$$, and we are back to the unperturbed case, with $${\widetilde{T}}(0)={\widetilde{T}}$$ having spectrum $$\{0,\lambda _{m+1},\ldots ,\lambda _{N}\}$$. Moreover, $$\min _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) \le \rho ({\widetilde{T}}(0)) = |\lambda _{m+1} |$$. It is then natural to ask: is this inequality strict? Can one find an $${\widetilde{\varepsilon }}\ne 0$$ such that $$\rho ({\widetilde{T}}({\widetilde{\varepsilon }}))=\min _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon ))<\rho ({\widetilde{T}}(0))$$ holds? If the answer is positive, then choosing the coarse vectors equal to the dominant eigenvectors of $${{\mathbb {G}}}$$ is not an optimal choice. One could also ask the opposite question: can one find a perturbation of the eigenvectors that leads to a divergent method, i.e., $$\rho ({\widetilde{T}}(\varepsilon ))>1$$? The next key result provides precise answers to these questions in the case $$m=1$$.

### Theorem 6

(Perturbation of $$V_c$$) Let $$({\pmb \psi }_1,\lambda _1)$$, $$({\pmb \psi }_2,\lambda _2)$$ and $$({\pmb \psi }_3,\lambda _3)$$ be three eigenpairs of $${{\mathbb {G}}}$$, $${{\mathbb {G}}}{\pmb \psi }_j = \lambda _j {\pmb \psi }_j$$, such that $$0<|\lambda _3|<|\lambda _2|\le |\lambda _1|$$ and $$\Vert {\pmb \psi }_j \Vert _2 =1$$, $$j=1,2$$, and denote by $${\widetilde{\lambda }}_j$$ the eigenvalues of $${{\mathbb {A}}}$$ corresponding to $${\pmb \psi }_j$$. Assume that both $$\lambda _j$$ and $${{\widetilde{\lambda }}}_j$$ are real for $$j=1,2$$ and $${\widetilde{\lambda }}_1{\widetilde{\lambda }}_2>0$$. Define $$V_c := \mathrm {span}\,\{ {\pmb \psi }_1 + \varepsilon {\pmb \psi }_2 \}$$ with $$\varepsilon \in {\mathbb {R}}$$, and $$\gamma := \langle {\pmb \psi }_1 , {\pmb \psi }_2 \rangle \in [-1,1]$$. Then

1. (A) The spectral radius of $${\widetilde{T}}(\varepsilon )$$ is $$\rho ({\widetilde{T}}(\varepsilon ))=\max \{ |\lambda (\varepsilon ,\gamma )|, |\lambda _3 |\}$$, where

\begin{aligned} \lambda (\varepsilon ,\gamma ) = \frac{\lambda _1 {\widetilde{\lambda }}_2 \varepsilon ^2 + \gamma (\lambda _1 {\widetilde{\lambda }}_2 + \lambda _2 {\widetilde{\lambda }}_1)\varepsilon + \lambda _2 {\widetilde{\lambda }}_1}{{\widetilde{\lambda }}_2 \varepsilon ^2 + \gamma ({\widetilde{\lambda }}_1+{\widetilde{\lambda }}_2)\varepsilon + {\widetilde{\lambda }}_1}. \end{aligned}
(26)

2. (B) Let $$\gamma =0$$. If $$\lambda _1>\lambda _2>0$$ or $$0>\lambda _2>\lambda _1$$, then $$\min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) = \rho ({\widetilde{T}}(0))$$.

3. (C) Let $$\gamma =0$$. If $$\lambda _2>0>\lambda _1$$ or $$\lambda _1>0>\lambda _2$$, then there exists an $${\widetilde{\varepsilon }}\ne 0$$ such that $$\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = |\lambda _3|= \min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))$$.

4. (D) Let $$\gamma \ne 0$$. If $$\lambda _1>\lambda _2>0$$ or $$0>\lambda _2>\lambda _1$$, then there exists an $${\widetilde{\varepsilon }}\ne 0$$ such that $$|\lambda ({\widetilde{\varepsilon }},\gamma )|<|\lambda _2|$$ and hence $$\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = \max \{|\lambda ({\widetilde{\varepsilon }},\gamma )|,|\lambda _3|\} < \rho ({\widetilde{T}}(0))$$.

5. (E) Let $$\gamma \ne 0$$. If $$\lambda _2>0>\lambda _1$$ or $$\lambda _1>0>\lambda _2$$, then there exists an $${\widetilde{\varepsilon }}\ne 0$$ such that $$\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = |\lambda _3|= \min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))$$.

6. (F) The map $$\gamma \mapsto \lambda (\varepsilon ,\gamma )$$ has a vertical asymptote at $$\gamma ^*(\varepsilon )=-\frac{\varepsilon ^2 {{\widetilde{\lambda }}}_2 + {{\widetilde{\lambda }}}_1}{\varepsilon ({{\widetilde{\lambda }}}_1+{{\widetilde{\lambda }}}_2)}$$ for any $$\varepsilon ^2 \ne - \frac{(\lambda _2 {{\widetilde{\lambda }}}_1)({{\widetilde{\lambda }}}_1+{{\widetilde{\lambda }}}_2)}{\lambda _1{{\widetilde{\lambda }}}_2^2+{{\widetilde{\lambda }}}_1^2 \lambda _2}$$. Thus, there exists a neighborhood $$I(\gamma ^*)$$ such that $$\lambda (\varepsilon ,\gamma )\notin (-1,1)$$ for all $$\gamma \in I(\gamma ^*)$$.

### Proof

Since $$m=1$$, a direct calculation allows us to compute the matrix

\begin{aligned}{\widetilde{T}}_{{\widetilde{m}}}(\varepsilon )=\begin{bmatrix} \lambda _1 - \frac{\lambda _1{\widetilde{\lambda }}_1(1+\varepsilon \gamma )}{g} & -\varepsilon \frac{\lambda _1{\widetilde{\lambda }}_1(1+\varepsilon \gamma )}{g} \\ - \frac{\lambda _2{\widetilde{\lambda }}_2(\varepsilon + \gamma )}{g} & \lambda _2 - \frac{\varepsilon \lambda _2{\widetilde{\lambda }}_2(\varepsilon + \gamma )}{g} \end{bmatrix},\end{aligned}

where $$g={\widetilde{\lambda }}_1 + \varepsilon \gamma [ {\widetilde{\lambda }}_1+{\widetilde{\lambda }}_2] + \varepsilon ^2 {\widetilde{\lambda }}_2$$. The spectrum of this matrix is $$\{0, \lambda (\varepsilon ,\gamma )\}$$, with $$\lambda (\varepsilon ,\gamma )$$ given in (26). Hence, point $$\mathrm{(A)}$$ follows recalling (25).

To prove points $$\mathrm{(B)}$$, $$\mathrm{(C)}$$, $$\mathrm{(D)}$$ and $$\mathrm{(E)}$$ we use some properties of the map $$\varepsilon \mapsto \lambda (\varepsilon ,\gamma )$$. First, we notice that

\begin{aligned} \lambda (0,\gamma )=\lambda _2, \; \lim _{\varepsilon \rightarrow \pm \infty } \lambda (\varepsilon ,\gamma ) = \lambda _1, \; \lambda (\varepsilon ,\gamma )=\lambda (-\varepsilon ,-\gamma ). \end{aligned}
(27)

Second, the derivative of $$\lambda (\varepsilon ,\gamma )$$ with respect to $$\varepsilon$$ is

\begin{aligned} \frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon } = \frac{(\lambda _1-\lambda _2){\widetilde{\lambda }}_1{\widetilde{\lambda }}_2(\varepsilon ^2+2\varepsilon /\gamma +1)\gamma }{({\widetilde{\lambda }}_2 \varepsilon ^2+\gamma ({\widetilde{\lambda }}_1+{\widetilde{\lambda }}_2)\varepsilon +{\widetilde{\lambda }}_1)^2}. \end{aligned}
(28)

Because of $$\lambda (\varepsilon ,\gamma )=\lambda (-\varepsilon ,-\gamma )$$ in (27), we can assume without loss of generality that $$\gamma \ge 0$$.

Let us now consider the case $$\gamma =0$$. In this case, the derivative (28) becomes $$\frac{d \lambda (\varepsilon ,0)}{d \varepsilon } = \frac{2(\lambda _1-\lambda _2){\widetilde{\lambda }}_1{\widetilde{\lambda }}_2 \varepsilon }{({\widetilde{\lambda }}_2 \varepsilon ^2+{\widetilde{\lambda }}_1)^2}$$. Moreover, since $$\lambda (\varepsilon ,0)=\lambda (-\varepsilon ,0)$$, we can assume that $$\varepsilon \ge 0$$.

Case $$\mathrm{(B)}$$. If $$\lambda _1>\lambda _2>0$$, then $$\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }>0$$ for all $$\varepsilon >0$$. Hence, $$\varepsilon \mapsto \lambda (\varepsilon ,0)$$ is monotonically increasing, $$\lambda (\varepsilon ,0) \ge 0$$ for all $$\varepsilon >0$$ and, thus, the minimum of $$\varepsilon \mapsto |\lambda (\varepsilon ,0)|$$ is attained at $$\varepsilon = 0$$ with $$|\lambda (0,0)|=|\lambda _2|>|\lambda _3|$$, and the result follows. Analogously, if $$0>\lambda _2>\lambda _1$$, then $$\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }<0$$ for all $$\varepsilon >0$$. Hence, $$\varepsilon \mapsto \lambda (\varepsilon ,0)$$ is monotonically decreasing, $$\lambda (\varepsilon ,0) < 0$$ for all $$\varepsilon >0$$ and the minimum of $$\varepsilon \mapsto |\lambda (\varepsilon ,0)|$$ is attained at $$\varepsilon = 0$$.

Case $$\mathrm{(C)}$$. If $$\lambda _1>0>\lambda _2$$, then $$\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }>0$$ for all $$\varepsilon >0$$. Hence, $$\varepsilon \mapsto \lambda (\varepsilon ,0)$$ is monotonically increasing and such that $$\lambda (0,0)=\lambda _2<0$$ and $$\lim _{\varepsilon \rightarrow \infty } \lambda (\varepsilon ,0) = \lambda _1>0$$. Thus, the continuity of the map $$\varepsilon \mapsto \lambda (\varepsilon ,0)$$ guarantees the existence of an $${\widetilde{\varepsilon }}>0$$ such that $$\lambda ({\widetilde{\varepsilon }},0)=0$$. Analogously, if $$\lambda _2>0>\lambda _1$$, then $$\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }<0$$ for all $$\varepsilon >0$$ and the result follows by the continuity of $$\varepsilon \mapsto \lambda (\varepsilon ,0)$$.

Let us now consider the case $$\gamma >0$$. The sign of $$\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }$$ is affected by the term $$f(\varepsilon ):=\varepsilon ^2+2\varepsilon /\gamma +1$$, which appears in the numerator of (28). The function $$f(\varepsilon )$$ is strictly convex, attains its minimum at $$\varepsilon =-\frac{1}{\gamma }$$, and is negative in $$({\bar{\varepsilon }}_1,{\bar{\varepsilon }}_2)$$ and positive in $$(-\infty ,{\bar{\varepsilon }}_1)\cup ({\bar{\varepsilon }}_2,\infty )$$, with $${\bar{\varepsilon }}_1,{\bar{\varepsilon }}_2=-\frac{1\mp \sqrt{1-\gamma ^2}}{\gamma }$$.

Case $$\mathrm{(D)}$$. If $$\lambda _1>\lambda _2>0$$, then $$\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }>0$$ for all $$\varepsilon > {\bar{\varepsilon }}_2$$. Hence, $$\frac{d \lambda (0,\gamma )}{d \varepsilon }>0$$, which means that there exists an $${\widetilde{\varepsilon }}<0$$ such that $$|\lambda ({\widetilde{\varepsilon }},\gamma )|<|\lambda (0,\gamma )|=|\lambda _2|$$. The case $$0>\lambda _2>\lambda _1$$ follows analogously.

Case $$\mathrm{(E)}$$. If $$\lambda _1>0>\lambda _2$$, then $$\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }>0$$ for all $$\varepsilon >0$$. Hence, by the continuity of $$\varepsilon \mapsto \lambda (\varepsilon ,\gamma )$$ (for $$\varepsilon \ge 0$$) there exists an $${\widetilde{\varepsilon }}>0$$ such that $$\lambda ({\widetilde{\varepsilon }},\gamma )=0$$. The case $$\lambda _2>0>\lambda _1$$ follows analogously.

Case $$\mathrm{(F)}$$. It is sufficient to observe that the denominator of $$\lambda (\varepsilon ,\gamma )$$ is equal to zero for $$\gamma =\gamma ^*$$, while the numerator is nonzero and finite. Hence, $$\lim _{\gamma \rightarrow \gamma ^*}|\lambda (\varepsilon ,\gamma )|=+\infty$$. As the map $$\gamma \mapsto \lambda (\varepsilon ,\gamma )$$ is continuous in $$(-\infty ,\gamma ^*) \cup (\gamma ^*,+\infty )$$, the result follows. $$\square$$

Theorem 6 and its proof show that, if the two eigenvalues $$\lambda _1$$ and $$\lambda _2$$ have opposite signs (but possibly equal modulus), then it is always possible to find an $$\varepsilon \ne 0$$ such that the coarse space $$V_c := \mathrm {span}\{ {\pmb \psi }_1 + \varepsilon {\pmb \psi }_2 \}$$ leads to a faster method than $$V_c := \mathrm {span}\{ {\pmb \psi }_1 \}$$, even though both are one-dimensional subspaces. In addition, if $$\lambda _3 \ne 0$$, the former leads to a two-level operator T with a larger kernel than the one corresponding to the latter. The situation is completely different if $$\lambda _1$$ and $$\lambda _2$$ have the same sign. In this case, the orthogonality parameter $$\gamma$$ is crucial. If $${\pmb \psi }_1$$ and $${\pmb \psi }_2$$ are orthogonal ($$\gamma =0$$), then one cannot improve $$V_c:= \mathrm {span}\{ {\pmb \psi }_1 \}$$ by a simple perturbation using $${\pmb \psi }_2$$. However, if $${\pmb \psi }_1$$ and $${\pmb \psi }_2$$ are not orthogonal ($$\gamma \ne 0$$), then one can still find an $$\varepsilon \ne 0$$ such that $$\rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))$$.

Notice that, if $$|\lambda _3|=|\lambda _2|$$, Theorem 6 shows that one cannot obtain a $$\rho (T)$$ smaller than $$|\lambda _2|$$ using a one-dimensional perturbation. However, if one optimizes the entire coarse space $$V_c$$ (keeping m fixed), then one can find coarse spaces leading to a better contraction factor of the two-level iteration, even though $$|\lambda _3|=|\lambda _2|$$.

Theorem 6 has another important meaning. If the eigenvectors $${\pmb \psi }_j$$ are not orthogonal and one defines the coarse space $$V_c$$ using approximations to $${\pmb \psi }_j$$, then the two-level method is not necessarily convergent. Even though the one-level iteration characterized by $${{\mathbb {G}}}$$ is convergent, a wrong choice of coarse functions can lead to a divergent iteration. This phenomenon is observed numerically in Sect. 6. However, the analysis performed in Theorem 6 suggests a remedy to this situation.

### Corollary 1

(Correction of perturbed coarse space functions) Let the hypotheses of Theorem 6 be satisfied. For any $$r \in {\mathbb {N}}$$ it holds that

\begin{aligned} {{\mathbb {G}}}^r V_c=\mathrm{span}\left\{ {\pmb \psi }_1 +\varepsilon _r {\pmb \psi }_2\right\} ,\end{aligned}

with $$\varepsilon _r = \frac{\lambda _2^r}{\lambda _1^r}\varepsilon$$. Moreover, if the coarse space $$V_c$$ is replaced by $${{\mathbb {G}}}^r V_c$$ (hence $$\varepsilon$$ is replaced by $$\varepsilon _r$$), there exists an $${\widehat{r}} \in {\mathbb {N}}$$ such that $$\rho (\varepsilon _{{\widehat{r}}},\gamma )<1$$ for any $$\gamma \in [-1,1]$$.

### Proof

By computing

\begin{aligned} {{\mathbb {G}}}^r V_c={{\mathbb {G}}}^r\mathrm{span}\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_2\right\} =\text {span}\left\{ {\pmb \psi }_1 +\frac{\lambda _2^r}{\lambda _1^r}\varepsilon {\pmb \psi }_2\right\} ,\end{aligned}

one obtains the first statement. The second statement follows from Theorem 6, which guarantees that $$\rho (0,\gamma )=|\lambda _2|<1$$. Since the map $$\varepsilon \mapsto \rho (\varepsilon ,\gamma )$$ is continuous and $$|\lambda _2|/|\lambda _1|<1$$, there exists a sufficiently large $$r\in {\mathbb {N}}$$ such that $$\rho (\varepsilon _r,\gamma )<1$$ holds. $$\square$$

Corollary 1 has the following important consequence. If some “bad-convergent” eigenvectors of $${{\mathbb {G}}}$$ are not sufficiently well represented by the coarse space functions, one can apply r smoothing steps to the coarse space functions. The new space $${{\mathbb {G}}}^r V_c$$ is a better approximation to the “bad-convergent” eigenfunctions of $${{\mathbb {G}}}$$. Therefore, one can replace $$V_c$$ by $${{\mathbb {G}}}^r V_c$$ to improve the convergence properties of the two-level method.
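The mechanism behind Corollary 1 can be illustrated with a small numerical sketch. The $$3\times 3$$ diagonalizable matrix G below, with non-orthogonal eigenvectors and illustrative eigenvalues (all values are assumptions chosen only for this example), confirms that r smoothing steps replace the perturbation $$\varepsilon$$ by $$\varepsilon _r=(\lambda _2/\lambda _1)^r\varepsilon$$:

```python
import numpy as np

# Hypothetical smoother G with prescribed (non-orthogonal) eigenvectors psi_j
# and eigenvalues lam_j; any diagonalizable contraction would do.
lam = np.array([0.9, 0.6, 0.3])
V = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.2],
              [0.0, 0.0, 1.0]])          # columns are psi_1, psi_2, psi_3
G = V @ np.diag(lam) @ np.linalg.inv(V)

eps, r = 0.8, 5
v = V[:, 0] + eps * V[:, 1]              # perturbed coarse function psi_1 + eps*psi_2
w = np.linalg.matrix_power(G, r) @ v     # r smoothing steps: G^r v

# Expand G^r v in the eigenbasis: coefficients (lam_1^r, eps*lam_2^r, 0)
c = np.linalg.solve(V, w)
eps_r = c[1] / c[0]                      # effective perturbation after smoothing
print(eps_r, (lam[1] / lam[0])**r * eps) # both equal (lam2/lam1)^r * eps
```

Since $$|\lambda _2/\lambda _1|<1$$, the smoothed coarse function aligns with $${\pmb \psi }_1$$ as r grows, exactly as the corollary states.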

### 4.3 Local Coarse Functions

In this section, we consider an operator $${{\mathbb {G}}}$$ having the block form

\begin{aligned} {{\mathbb {G}}}= \begin{bmatrix} 0 &{} {{\mathbb {G}}}_1 \\ {{\mathbb {G}}}_2 &{} 0 \\ \end{bmatrix} \end{aligned}

and defined on the space $${{\mathcal {X}}}:= {\widehat{{\mathcal {X}}}}\times {\widehat{{\mathcal {X}}}}$$, where $${\widehat{{\mathcal {X}}}}$$ is a Hilbert space endowed with an inner product $$\langle \cdot ,\cdot \rangle$$. The corresponding operator $${{\mathbb {A}}}$$ is $${{\mathbb {A}}}= {{\mathbb {I}}}- {{\mathbb {G}}}$$. Moreover, we assume that the operators $${{\mathbb {G}}}_j$$, $$j=1,2$$, have the same eigenvectors $$\{\psi _k\}_{k=1}^\infty$$, forming an orthonormal basis of $${\widehat{{\mathcal {X}}}}$$ with respect to $$\langle \cdot ,\cdot \rangle$$. The eigenvalues of $${{\mathbb {G}}}_j$$, for $$j=1,2$$, are denoted by $$\theta _j(k)$$. This is exactly the structure of the substructured domain decomposition problem introduced in Sect. 2 and corresponding to two subdomains, as the following examples show.

### Example 3

Consider a rectangle $$\Omega :=(-L_1,L_2)\times (0,{\widetilde{L}})$$, $${\widetilde{L}},L_1,L_2>0$$ that is decomposed as $$\Omega =\Omega _1 \cup \Omega _2$$ by two overlapping subdomains $$\Omega _1:=(-L_1,\delta )\times (0,{\widetilde{L}})$$ and $$\Omega _2:=(-\delta ,L_2)\times (0,{\widetilde{L}})$$ for some $$0<\delta <\min (L_1,L_2)$$. The two interfaces are $$\Gamma _1:=\{\delta \}\times (0,{\widetilde{L}})$$ and $$\Gamma _2:=\{-\delta \}\times (0,{\widetilde{L}})$$. If $${\mathcal {L}}=-\Delta$$, then the Schwarz operators $${{\mathbb {G}}}_1$$ and $${{\mathbb {G}}}_2$$ are diagonalized by the sine-Fourier functions $$\psi _k(y)=\sin (k y \pi /{\widetilde{L}})$$, for $$k=1,2,\ldots$$ The eigenvalues of $${{\mathbb {G}}}_j$$ are $$\theta _j(k)=\sinh \left( \frac{k\pi }{{\widetilde{L}}}(L_j-\delta )\right) / \sinh \left( \frac{k\pi }{{\widetilde{L}}}(L_j+\delta )\right)$$, for $$j=1,2$$; see, e.g., [1, 47].

### Example 4

Consider a disc $$\Omega$$ of radius r and centered in the origin. One can decompose $$\Omega$$ as the union of two overlapping subdomains $$\Omega _1$$ and $$\Omega _2$$, where $$\Omega _1$$ is a disc of radius $$r_1 < r$$ and centered in the origin, and $$\Omega _2$$ is an annulus of external radius equal to r and internal radius $$r_2 \in (r_1,r)$$. If $${\mathcal {L}}=-\Delta +\eta$$ with $$\eta >0$$, then the two Schwarz operators $${{\mathbb {G}}}_1$$ and $${{\mathbb {G}}}_2$$ are diagonalized by periodic Fourier functions defined on circles; see, e.g., [48].

Now, we assume that $$V_c:=(\mathrm{span}\{\psi _1,\psi _2,\ldots ,\psi _m\})^2\subset {{\mathcal {X}}}$$. Prolongation and restriction operators are given (as in (14)) by

\begin{aligned} P\begin{bmatrix} \mathbf{c}\\ \mathbf{d}\end{bmatrix}=\begin{bmatrix} \sum _{k=1}^m c_k \psi _k\\ \sum _{k=1}^m d_k \psi _k\end{bmatrix}, \qquad R\begin{bmatrix} v_1\\ v_2\end{bmatrix}=\begin{bmatrix} \left( \langle v_1,\psi _k\rangle \right) _{k=1}^m\\ \left( \langle v_2,\psi _k\rangle \right) _{k=1}^m\end{bmatrix}. \end{aligned}
(29)

The restriction of $${{\mathbb {A}}}$$ onto the coarse space $$V_c$$ is $${{\mathbb {A}}}_c=R{{\mathbb {A}}}P$$. Notice that, since in this case $${{\mathbb {A}}}(V_c) \subseteq V_c$$, Theorem 2 guarantees that the operator $${{\mathbb {A}}}_c$$ is invertible. Now, we study the spectral properties of $${{\mathbb {T}}}$$ defined in (17).

### Theorem 7

(Convergence of the two-level method with local coarse space functions) Consider the coarse space $$V_c=(\mathrm{span}\{\psi _1,\psi _2,\ldots ,\psi _m\})^2$$ and the operators P and R defined in (29). All pairs $$(\psi _k,\psi _{\ell })$$ with $$k,\ell \le m$$ are in the kernel of the operator $${{\mathbb {T}}}$$. Moreover, for any $$S \in {\mathcal {L}}({\mathcal {X}})$$ define $$\Vert S\Vert _{\mathrm{op}}:=\sup \limits _{\Vert \mathbf{v}\Vert _{\infty }=1}\Vert S\mathbf{v}\Vert _{\infty }$$, where $$\Vert \mathbf{v}\Vert _{\infty }:=\max _{j=1,2} \Vert v_j\Vert$$, with $$\mathbf{v}=[v_1,v_2]^\top$$, $$v_j \in {\widehat{{\mathcal {X}}}}$$, and $$\Vert \cdot \Vert$$ the norm induced by the inner product of $${\widehat{{\mathcal {X}}}}$$. If the eigenvalues $$\theta _{j}(k)$$, $$j=1,2$$, are non-increasing in absolute value as functions of k, then the spectral radius of $${{\mathbb {T}}}$$, $$\rho ({{\mathbb {T}}}):=~\lim \limits _{n\rightarrow \infty }\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}}^{\frac{1}{n}}$$, is given by

\begin{aligned} \rho ({{\mathbb {T}}})= {\left\{ \begin{array}{ll} |\theta _1(m+1)\theta _2(m+1)|^{\frac{n_1+n_2}{2}}, \text { if } n_1, n_2 \text { are both even or odd},\\ |\theta _1(m+1)\theta _2(m+1)|^{\frac{n_1+n_2-1}{2}}\max \{|\theta _1(m+1)|,|\theta _2(m+1)|\}, \text { otherwise.} \end{array}\right. } \end{aligned}

### Proof

Let us suppose that both $$n_1$$ and $$n_2$$ are even. The other cases can be treated similarly. For $$n_1$$ even we define $$\pi ^{n_1}(k):=\theta ^{\frac{n_1}{2}}_1(k)\theta ^{\frac{n_1}{2}}_2(k)$$ and study the action of the operator $${{\mathbb {T}}}$$ on a vector $$\left[ \psi _k, \psi _{\ell } \right] ^\top$$:

\begin{aligned} {{\mathbb {T}}}\begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix} ={{\mathbb {G}}}^{n_2}({{\mathbb {I}}}-P{{\mathbb {A}}}_c^{-1}R{{\mathbb {A}}}){{\mathbb {G}}}^{n_1} \begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix}. \end{aligned}

We begin with the case $$k\le m$$ and $$\ell \le m$$. First, let us compute the action of the operator $$R{{\mathbb {A}}}{{\mathbb {G}}}^{n_1}$$ on $$\left[ \psi _k, \psi _{\ell } \right] ^\top$$. Since the operators $${{\mathbb {G}}}_j$$ are diagonalized by the basis $$\{ \psi _k\}_k$$ one obtains $${{\mathbb {G}}}^{n_1} \begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}$$. The action of $${{\mathbb {A}}}$$ on $$\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top$$ is

\begin{aligned} {{\mathbb {A}}}\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}=\begin{bmatrix} I_d &{} -{{\mathbb {G}}}_1\\ -{{\mathbb {G}}}_2 &{} I_d \end{bmatrix}\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}=\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-\begin{bmatrix} \pi ^{n_1}(\ell )\theta _1(\ell )\psi _{\ell }\\ \pi ^{n_1}(k)\theta _2(k)\psi _k\end{bmatrix}. \end{aligned}

Since $${{\mathbb {A}}}$$ is invertible and has the form $${{\mathbb {A}}}={{\mathbb {I}}}- {{\mathbb {G}}}$$, the eigenvalues $$\theta _j(k)$$ must be different from one. Hence, the product $${{\mathbb {A}}}\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top \ne 0$$. Now, the application of the restriction operator R to $${{\mathbb {A}}}\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top$$ gives

\begin{aligned} R{{\mathbb {A}}}\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}= \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}-\begin{bmatrix} \pi ^{n_1}(\ell )\theta _1(\ell )\mathbf{e}_{\ell }\\ \pi ^{n_1}(k)\theta _2(k)\mathbf{e}_{k}\end{bmatrix}=\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}, \end{aligned}

where $$\mathbf{e}_{k}$$ and $$\mathbf{e}_{\ell }$$ are canonical vectors in $${\mathbb {R}}^m$$ and $$\Lambda := \begin{bmatrix} I &{} -\theta _1(\ell )I\\ -\theta _2(k)I &{} I \end{bmatrix}$$, with I the $$m\times m$$ identity matrix. We have then obtained

\begin{aligned} R{{\mathbb {A}}}{{\mathbb {G}}}^{n_1} \begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix}=\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}. \end{aligned}
(30)

Now, by computing

\begin{aligned} \begin{aligned} {{\mathbb {A}}}_c\begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix} =&R{{\mathbb {A}}}\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix} =R \begin{bmatrix} \pi ^{n_1}(k)\psi _k-\pi ^{n_1}(\ell )\theta _1(\ell )\psi _{\ell }\\ \pi ^{n_1}(\ell )\psi _{\ell }-\pi ^{n_1}(k)\theta _2(k)\psi _k \end{bmatrix}\\ =&\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix} \end{aligned} \end{aligned}

one obtains the action of $${{\mathbb {A}}}_c^{-1}$$ on $$\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}$$, that is

\begin{aligned} \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}={{\mathbb {A}}}_c^{-1} \Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}. \end{aligned}
(31)

Using (30) and (31) we have

\begin{aligned} \begin{aligned} ({{\mathbb {I}}}-P {{\mathbb {A}}}_c^{-1}R{{\mathbb {A}}}) {{\mathbb {G}}}^{n_1}\begin{bmatrix} \psi _k \\ \psi _{\ell } \end{bmatrix}&=\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-P {{\mathbb {A}}}_c^{-1}\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix} \\&=\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-P \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix} \\&=\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}=0. \end{aligned} \end{aligned}
(32)

This means that all the pairs $$(\psi _k,\psi _{\ell })$$ with $$k\le m$$ and $$\ell \le m$$ are in the kernel of $${{\mathbb {T}}}$$. The result for $$n_1$$ odd follows by similar calculations.

Next, let us consider the case $$k>m$$ and $$\ell \le m$$. Recalling that the basis $$\{\psi _k\}_k$$ is orthonormal, one has

\begin{aligned} R{{\mathbb {A}}}{{\mathbb {G}}}^{n_1}\begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix}= & {} R\left( \begin{bmatrix} \pi ^{n_1}(k)\psi _{k}\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-\begin{bmatrix} \pi ^{n_1}(\ell )\theta _1(\ell )\psi _{\ell }\\ \pi ^{n_1}(k)\theta _2(k)\psi _{k}\end{bmatrix}\right) \\= & {} \begin{bmatrix} 0 &{} -\theta _1(\ell )I\\ 0 &{} I \end{bmatrix} \begin{bmatrix} 0\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}. \end{aligned}

Similarly as before, we compute

\begin{aligned} {{\mathbb {A}}}_c\begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}= & {} R{{\mathbb {A}}}\begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix} =R\begin{bmatrix} -\pi ^{n_1}(\ell )\theta _1(\ell )\psi _{\ell }\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix} \\= & {} \begin{bmatrix} 0 &{} -\theta _1(\ell )I\\ 0 &{} I \end{bmatrix}\begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}, \end{aligned}

which implies that

\begin{aligned} \begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}={{\mathbb {A}}}_c^{-1}\begin{bmatrix} 0 &{} -\theta _1(\ell )I\\ 0 &{} I \end{bmatrix} \begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}. \end{aligned}

Thus, we have

\begin{aligned} \begin{aligned} {{\mathbb {T}}}\begin{bmatrix} \psi _k \\ \psi _{\ell } \end{bmatrix}&={{\mathbb {G}}}^{n_2}\left( \begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-P{{\mathbb {A}}}_c^{-1} \begin{bmatrix} 0 &{} -\theta _1(\ell )I\\ 0 &{} I\\ \end{bmatrix} \begin{bmatrix} 0\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}\right) \\&={{\mathbb {G}}}^{n_2}\left( \begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}-P \begin{bmatrix} 0 \\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}\right) =\begin{bmatrix} \pi ^{n_1+n_2}(k)\psi _k \\ 0 \end{bmatrix}. \end{aligned} \end{aligned}
(33)

For the remaining case $$k>m$$ and $$\ell >m$$, the same arguments as before imply that

\begin{aligned} {{\mathbb {T}}}\begin{bmatrix} \psi _k \\ \psi _{\ell } \end{bmatrix}&={{\mathbb {G}}}^{n_2}({\mathbb {I}}-P {{\mathbb {A}}}_c^{-1}R {{\mathbb {A}}}){{\mathbb {G}}}^{n_1} \begin{bmatrix} \psi _k \\ \psi _{\ell } \end{bmatrix} ={{\mathbb {G}}}^{n_2}{{\mathbb {G}}}^{n_1}\begin{bmatrix} \psi _k \\ \psi _{\ell } \end{bmatrix}\nonumber \\&=\begin{bmatrix} \pi ^{n_1+n_2}(k)\psi _k \\ \pi ^{n_1+n_2}(\ell )\psi _{\ell } \end{bmatrix}. \end{aligned}
(34)

We can now study the norm of $${{\mathbb {T}}}$$. To do so, we first use (32), (33) and (34), together with the fact that $$\{ \psi _k,\psi _{\ell }\}_{k,\ell }$$ is a basis of $${{\mathcal {X}}}$$, to write

\begin{aligned} {{\mathbb {T}}}\mathbf{v} = {{\mathbb {T}}}\begin{bmatrix} \sum _{k=1}^{\infty } \mathbf{c}_k \psi _k \\ \sum _{\ell =1}^{\infty } \mathbf{d}_\ell \psi _\ell \\ \end{bmatrix} =\begin{bmatrix} \sum _{k=m+1}^{\infty } \pi ^{n_1+n_2}(k) \mathbf{c}_k \psi _k \\ \sum _{\ell =m+1}^{\infty } \pi ^{n_1+n_2}(\ell ) \mathbf{d}_\ell \psi _\ell \\ \end{bmatrix}, \end{aligned}

for any $$\mathbf{v} \in {{\mathcal {X}}}$$. Since $$|\theta _1(k)|$$ and $$|\theta _2(k)|$$ are non-increasing functions of k, $$|\pi (k)|$$ is also a non-increasing function of k. Therefore, using that the basis $$\{ \psi _k,\psi _{\ell }\}_{k,\ell }$$ is orthonormal, we get

\begin{aligned} \Vert {{\mathbb {T}}}\Vert _{\mathrm{op}} =\sup \limits _{\Vert \mathbf{v}\Vert _{\infty }=1}\Vert {{\mathbb {T}}}\mathbf{v}\Vert _{\infty } \le \sup _{k>m} |\pi ^{n_1+n_2}(k)| =|\pi ^{n_1+n_2}(m+1)|. \end{aligned}

This upper bound is achieved at $$\mathbf{v}=[\psi _{m+1} , 0]^\top$$. Hence, $$\Vert {{\mathbb {T}}}\Vert _{\mathrm{op}} = |\pi ^{n_1+n_2}(m+1)|$$. Now, a similar direct calculation leads to $$\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}} = |\pi ^{n(n_1+n_2)}(m+1)|$$, which implies that $$\rho ({{\mathbb {T}}})=\lim \limits _{n\rightarrow \infty } (\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}})^{1/n}=|\pi ^{n_1+n_2}(m+1)|$$. $$\square$$
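Theorem 7 can be checked directly on a finite truncation. Working in the coefficient space of the orthonormal basis $$\{\psi _k\}$$, the blocks $${{\mathbb {G}}}_j$$ become diagonal matrices; the truncation size, the values $$\theta _j(k)$$ and the choice $$n_1=n_2=1$$ below are illustrative assumptions:

```python
import numpy as np

# Coefficient-space truncation: G_1, G_2 are diagonal with entries theta_j(k).
N, m = 8, 3                           # truncation size and coarse-space dimension
k = np.arange(1, N + 1)
th1, th2 = 0.9 / k, 0.8 / k           # |theta_j(k)| non-increasing in k
G = np.block([[np.zeros((N, N)), np.diag(th1)],
              [np.diag(th2),     np.zeros((N, N))]])
A = np.eye(2 * N) - G

E = np.eye(N)[:, :m]                  # first m basis vectors in each component
P = np.block([[E, np.zeros((N, m))],
              [np.zeros((N, m)), E]])
R = P.T
Ac = R @ A @ P

# Two-level operator with n1 = n2 = 1 (both odd, so the exponent is (n1+n2)/2 = 1)
T = G @ (np.eye(2 * N) - P @ np.linalg.inv(Ac) @ R @ A) @ G
rho = np.abs(np.linalg.eigvals(T)).max()
print(rho, abs(th1[m] * th2[m]))      # Theorem 7: rho = |theta1(m+1)*theta2(m+1)|
```

The computed spectral radius matches the predicted value $$|\theta _1(m+1)\theta _2(m+1)|$$, and all coarse modes are annihilated, consistent with the kernel statement of the theorem.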

Theorems 2–5 and 7 show that the choice of the basis functions used to construct $$V_c$$ can drastically affect the convergence of the method. On the one hand, an inappropriate choice of $$V_c$$ can lead to a two-level method that performs like the corresponding one-level method. On the other hand, a good choice of $$V_c$$ can even turn a divergent stationary method into a convergent one; see, e.g., [13].

## 5 Numerical Construction of the Coarse Space

The construction of a good coarse space $$V_c$$ is not an easy task. Several works rely on the solution of generalized eigenvalue problems on the interfaces; see, e.g., [13, 17, 18, 21, 28]. Although one could reuse these techniques to build a coarse space for the S2S method (see the S2S-HEM method discussed in Sect. 6), we now present two alternative numerical approaches for the generation of coarse space functions. The first relies on principal component analysis (PCA) and shares some similarities with the strategies presented in [38, 49]. The second approach is based on modeling the two-level iteration operator as a deep neural network in which the coarse space functions are regarded as variables to be optimized. A similar approach has been presented in the context of multigrid methods in [50]. We refer to [51] for a recent review of alternative ways to combine machine learning and DD methods.

We remark that the S2S framework facilitates the use of these two numerical techniques, which could even be computationally unfeasible if applied to a two-level volume method. Indeed, at the discrete level, the substructured coarse functions are much shorter vectors than the corresponding volume ones. This means that, for the PCA approach, one has to compute the SVD of a much smaller matrix, while for the deep neural network approach, the neural net has far fewer parameters to optimize.

### 5.1 A PCA Approach for Coarse Space Generation

The idea that we present in this section is to construct an approximation of the image of the smoother $$G^r$$, for some positive integer r. In fact, the image of $$G^r$$ contains information about the “bad converging” eigenvectors of G. Notice that $$\mathrm{im}(G^r)=\mathrm{im}(G^rX)$$ for any surjective matrix X. Therefore, the idea is to construct a coarse space using the information contained in $$G^rX$$, for some randomly chosen matrix X. Clearly, if $$\rho (G)<1$$ and r is large, then one expects the slowest-converging eigenvectors to be predominant in $$G^rX$$. Notice also the relation of this idea with the perturbation result of Theorem 6 and with Corollary 1.

Motivated by these observations, we use a principal component analysis (PCA), also known as proper orthogonal decomposition (POD); see, e.g., [52] and references therein. We consider the following procedure.

1. Consider a set of q linearly independent randomly generated vectors $$\{ \mathbf{s}_k \}_{k=1}^q \subset {\mathbb {R}}^{N^s}$$, where $$N^s$$ is the number of degrees of freedom on the interfaces, and define the matrix $$S = [ \mathbf{s}_1 ,\ldots , \mathbf{s}_q ]$$. Here, $$q \approx m$$ and m is the desired dimension of the coarse space.

2. Use the vectors $$\mathbf{s}_k$$ as initial vectors and perform r smoothing steps to create the matrix $$W = G^r S$$. This computation can be performed in parallel, applying G to each column of S separately. Further, we assume that r is “small”.

3. Compute the SVD of W: $$W=U \Sigma V^\top$$. This is cheap ($$O(N^s q^2)$$) because $$W\in {\mathbb {R}}^{N^s \times q}$$ is “small”, since q is “small” and the $$\mathbf{s}_k$$ are interface vectors.

4. Since the left-singular vectors (corresponding to the non-zero singular values) span the image of W, we define $$V_c:= \mathrm{span}\{ \mathbf{u}_j \}_{j=1}^{m}$$ and $$P:=[\mathbf{u}_1,\ldots ,\mathbf{u}_{m}]$$.

We wish to remark that, in light of Theorem 6 and Corollary 1, one can also use approximations of the eigenfunctions of G (if available) in the matrix S (in step 1 above). A numerical study of the above procedure is presented in Sect. 6. To qualitatively describe the obtained coarse space, we prove the following bound.

### Lemma 3

(Approximation of the randomly generated coarse space) Consider an orthogonal matrix $$X \in {\mathbb {R}}^{N^s \times N^s}$$ and its decomposition $$X=[S,{\widetilde{S}}]$$. Let $$W=G^r [S,0]$$ and let $$P_\ell = U_\ell \Sigma _\ell V_\ell ^\top$$ be the rank-$$\ell$$ SVD of W ($$\ell \le m$$), where $$(\Sigma _\ell )_{j,j} = \sigma _j$$, $$j=1,\ldots ,\ell$$, are the $$\ell$$ largest singular values of W. Then, it holds that

\begin{aligned} \Vert P_\ell - G^r X \Vert _2 \le \sigma _{\ell +1} + \Vert G^r \Vert _2. \end{aligned}

### Proof

Using the triangle inequality, we get

\begin{aligned} \Vert P_\ell - G^r X \Vert _2 \le \Vert P_\ell - G^r [S,0] \Vert _2 + \Vert G^r [S,0] - G^r X \Vert _2. \end{aligned}

The first term on the right-hand side is equal to $$\sigma _{\ell +1}$$ by the best approximation properties of the SVD. The second term can be bounded as $$\Vert G^r [S,0] - G^r X \Vert _2\le \Vert G^r \Vert _2 \Vert [S,0] - X \Vert _2$$, and a direct calculation of $$\Vert [S,0] - X \Vert _2=\Vert [0,{\widetilde{S}}] \Vert _2$$ leads to the result, since $${\widetilde{S}}^\top {\widetilde{S}}=I_{N^s-q}$$. $$\square$$

Despite its very simple proof, Lemma 3 allows us to describe the quality of the constructed coarse space. Larger values of q and $$\ell$$ lead to a smaller error in the approximation of the image of $$G^r$$. Moreover, a smoother G with good contraction properties, namely $$\Vert G \Vert _2 \ll 1$$, leads to a better approximation. Clearly, one can improve the approximation by increasing r at the cost of extra subdomain solves.
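The bound of Lemma 3 is easy to verify numerically. In the sketch below, the matrix G and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, q, r, l = 30, 8, 2, 4              # l plays the role of ell in the lemma

# Orthogonal X = [S, S_tilde] and the zero-padded probe block [S, 0]
X, _ = np.linalg.qr(rng.standard_normal((Ns, Ns)))
S0 = np.hstack([X[:, :q], np.zeros((Ns, Ns - q))])

G = 0.9 * rng.standard_normal((Ns, Ns)) / np.sqrt(Ns)  # generic smoother stand-in
Gr = np.linalg.matrix_power(G, r)
W = Gr @ S0

# Rank-l truncated SVD P_l of W; sig[l] is sigma_{l+1} in 0-based indexing
U, sig, Vt = np.linalg.svd(W)
Pl = U[:, :l] @ np.diag(sig[:l]) @ Vt[:l, :]

lhs = np.linalg.norm(Pl - Gr @ X, 2)   # spectral norms, as in the lemma
rhs = sig[l] + np.linalg.norm(Gr, 2)
print(lhs <= rhs)                      # the bound of Lemma 3 holds
```

The inequality holds for any realization, since the two terms of the triangle-inequality argument are computed exactly.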

### 5.2 Generating the Coarse Space by Deep Neural Networks

Theorem 6 shows that the spectral coarse space obtained from the first dominant eigenvector of $${{\mathbb {G}}}$$ is not necessarily the one-dimensional coarse space minimizing $$\rho ({{\mathbb {T}}})$$. Now, we wish to go beyond this one-dimensional analysis and optimize the entire coarse space $$V_c$$, keeping its dimension m fixed. This is equivalent to optimizing the prolongation operator P whose columns span $$V_c$$. Thus, we consider the optimization problem

\begin{aligned} \min _{P \in {\mathbb {R}}^{N^s\times m}} \rho (T(P)). \end{aligned}
(35)

To solve (35) approximately, we follow the approach proposed in [50]. Due to the Gelfand formula $$\rho (T)=\lim _{k\rightarrow \infty } \root k \of {\Vert T^k\Vert _F}$$, we replace (35) with the simpler optimization problem $$\min _{P} \Vert T(P)^k\Vert ^2_F$$ for some positive integer k. Here, $$\Vert \cdot \Vert _F$$ is the Frobenius norm. We then consider the unbiased stochastic estimator [53]

\begin{aligned}\Vert T^k\Vert ^2_F=\text {trace}\left( (T^k)^\top T^k\right) ={\mathbb {E}}_{{\mathbf {z}}}\left[ {\mathbf {z}}^\top (T^k)^\top T^k {\mathbf {z}}\right] ={\mathbb {E}}_{{\mathbf {z}}}\left[ \Vert T^k {\mathbf {z}}\Vert ^2_2\right] ,\end{aligned}

where $${\mathbf {z}}\in {\mathbb {R}}^{N^s}$$ is a random vector with Rademacher distribution, i.e. $${\mathbb {P}}({\mathbf {z}}_i=\pm 1)=1/2$$. Finally, we rely on a sample average approach, replacing the unbiased stochastic estimator with its empirical mean such that (35) is approximated by

\begin{aligned} \min _{P \in {\mathbb {R}}^{N^s\times m}} \frac{1}{N}\sum _{i=1}^N\Vert T(P)^k {\mathbf {z}}_i\Vert ^2_2, \end{aligned}
(36)

where $${\mathbf {z}}_i$$ are independent, Rademacher-distributed random vectors. The action of T on the vectors $${\mathbf {z}}_i$$ can be interpreted as the feed-forward process of a neural net, where each layer represents one specific step of the two-level method, that is, the smoothing step, the residual computation, the coarse correction, and the prolongation/restriction operations. In our setting, the weights of most layers are fixed and given, and the optimization is performed only on the weights of the layer representing the prolongation step. The restriction layer is constrained to have as weights the transpose of the weights of the prolongation layer. To solve (36), we rely on the stochastic gradient algorithm, which at each iteration requires computing the action of T k times. The optimization parameters are heuristically tuned to a learning rate of 0.1, a mini-batch of ten vectors $${\mathbf {z}}_i$$, and a maximum of 20,000 epochs, and the optimization is stopped when the loss function increases for 5 consecutive updates. Each stochastic gradient iteration is expensive, as it is equivalent to performing k iterations of the two-level method. Hence, the proposed deep neural network approach is not computationally efficient for building coarse spaces, unless one considers an offline-online paradigm or a many-query context. We will use this approach in Sect. 6 to go beyond the result of Theorem 6 and show numerically that, given an integer m, a spectral coarse space made of the first m dominant eigenvectors of G is not necessarily the asymptotically optimal coarse space of dimension m.
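As a deliberately tiny sketch of this optimization, one can minimize the empirical objective (36) by plain finite-difference gradient descent with a backtracking line search, instead of a neural-network framework with stochastic gradients. All sizes, the stand-in smoother G, and the choice $$n_1=1$$, $$n_2=0$$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, m, k, N = 6, 2, 2, 8               # interface size, coarse dim, power k, samples

# Stand-in smoother G with rho(G) < 1 and substructured matrix A = I - G
Q, _ = np.linalg.qr(rng.standard_normal((Ns, Ns)))
G = Q @ np.diag(np.linspace(0.9, 0.1, Ns)) @ Q.T
A = np.eye(Ns) - G
Z = rng.integers(0, 2, size=(Ns, N)) * 2.0 - 1.0   # Rademacher samples z_i

def loss(P):
    """Empirical objective (36): mean_i ||T(P)^k z_i||_2^2, with R = P^T."""
    Ac = P.T @ A @ P
    T = (np.eye(Ns) - P @ np.linalg.solve(Ac, P.T @ A)) @ G   # n1 = 1, n2 = 0
    Tk = np.linalg.matrix_power(T, k)
    return np.mean(np.sum((Tk @ Z) ** 2, axis=0))

def fd_grad(P, h=1e-6):
    g, f0 = np.zeros_like(P), loss(P)
    for idx in np.ndindex(*P.shape):   # forward finite differences, entry by entry
        Pp = P.copy(); Pp[idx] += h
        g[idx] = (loss(Pp) - f0) / h
    return g

P = rng.standard_normal((Ns, m))
l0 = loss(P)
for _ in range(50):                    # descent with backtracking keeps loss monotone
    g, f, step = fd_grad(P), loss(P), 0.1
    while step > 1e-12 and not (loss(P - step * g) < f):
        step /= 2
    if loss(P - step * g) < f:
        P = P - step * g
print(loss(P) <= l0)                   # True: the optimized P reduces the objective
```

The backtracking step plays the role of the learning-rate tuning described above; updates are accepted only when they decrease the loss, so the objective is non-increasing by construction.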

## 6 Numerical Experiments

This section is concerned with the numerical validation of the framework proposed in this manuscript. We first consider a Poisson equation in 2D and 3D rectangular boxes and we show the convergence behavior of the S2S method with different coarse spaces and of the SHEM method [17]. In this simplified setting, we also report the computational time and memory storage requirements of the S2S and SHEM methods. We then solve a Poisson problem with many-subdomain decompositions and discuss a further way to build a substructured coarse space, that is, using the SHEM interface functions. Next, we focus on a diffusion problem with highly jumping coefficients and validate Theorem 6 showing how a perturbed coarse space can affect the convergence of the methods.

For the sake of completeness, let us recall that the SHEM coarse space is defined as the union of a multiscale and a spectral coarse space. The multiscale coarse space is built by solving, for each internal vertex $$v^k$$ of the nonoverlapping decomposition $$\Omega =\cup _{i=1}^N {\widetilde{\Omega }}_i$$, a homogeneous boundary value problem on each of the edges $$\Gamma _{i,j}:=\partial {\widetilde{\Omega }}_i\cap \partial {\widetilde{\Omega }}_j$$ that has the vertex $$v^k$$ as an endpoint, imposing a Dirichlet boundary condition equal to 1 at the vertex $$v^k$$ and zero at the opposite vertex. The solutions $$\phi ^k_{i,j}$$ of these interface boundary value problems are then extended (through the PDE operator, see eq. (14) in [27]) to both subdomains $${\widetilde{\Omega }}_i$$ and $${\widetilde{\Omega }}_j$$ and set to zero on the rest of $$\Omega$$.

The spectral coarse space is obtained by solving localized eigenvalue problems on each of the internal edges $$\Gamma _{i,j}$$, then extending a certain number of the eigenfunctions harmonically to both $${\widetilde{\Omega }}_i$$ and $${\widetilde{\Omega }}_j$$ and setting them to zero elsewhere (see eq. (15) in [27]).

### 6.1 Poisson Equation in 2D and 3D Rectangular Boxes

Let us consider a rectangular domain $$\Omega =\Omega _1\cup \Omega _2$$, where $$\Omega _1=(-1,\delta )\times (0,1)$$ and $$\Omega _2=(-\delta ,1)\times (0,1)$$, and a Poisson equation $$-\Delta u=f$$ with homogeneous Dirichlet boundary condition on $$\partial \Omega$$. Given an integer $$\ell \ge 2$$, we discretize each subdomain with a standard second order finite difference scheme with $$N_y=2^\ell -1$$ points in the y direction and $$N_x=N_y$$ points in the x direction. The overlap has size $$2\delta$$ with $$\delta =N_{ov}h$$, where h is the mesh size and $$N_{ov}\in {\mathbb {N}}$$. In our computations, we consider $$f=1$$ and initialize the iterations with a random initial guess.
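The subdomain discretization described above can be sketched with the usual Kronecker-product construction of the second-order 5-point Laplacian; lexicographic ordering and homogeneous Dirichlet conditions are assumed, and the grid size is illustrative:

```python
import numpy as np

def laplacian_2d(nx, ny, h):
    """5-point finite-difference Laplacian on an nx-by-ny interior grid,
    homogeneous Dirichlet boundary conditions, lexicographic ordering."""
    def lap_1d(n):
        return (np.diag(2.0 * np.ones(n))
                - np.diag(np.ones(n - 1), 1)
                - np.diag(np.ones(n - 1), -1))
    Ix, Iy = np.eye(nx), np.eye(ny)
    # 2D operator as a Kronecker sum of the two 1D second-difference operators
    return (np.kron(Iy, lap_1d(nx)) + np.kron(lap_1d(ny), Ix)) / h**2

ell = 4
Ny = 2**ell - 1                  # N_y = 2^ell - 1 interior points, as in the text
A = laplacian_2d(Ny, Ny, 1.0 / (Ny + 1))
print(A.shape)                   # (225, 225): symmetric positive definite
```

Dense matrices are used here only for brevity; a practical implementation would of course use a sparse format.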

Figure 2 shows the relative error decay for several methods. Specifically, we compare the one-level parallel Schwarz method ($$G_s$$ (Schwarz) in the figures), a S2S method with a coarse space made by eigenfunctions of G (S2S-G), a S2S method with a coarse space made of eigenfunctions of the operators $$G_j$$ (S2S-$$G_j$$), a S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), a S2S method with coarse space obtained using deep neural networks (S2S-DNN), and the spectral volume method based on the SHEM coarse space (SHEM), see [27]. For the PCA coarse space, we average the relative error decay over 30 realizations and the parameters for the PCA procedure are $$q=2 \, \text {dim}V_c$$ and $$r=2$$, where $$\text {dim}V_c$$ is the desired size of the spectral coarse space. For the deep neural network approach, the parameters are $$N=N^s$$ and $$k=4$$.

Figure 2 shows that most of the spectral methods exhibit very similar convergence. Indeed, we have numerically observed that the S2S-G, S2S-$$G_j$$ and SHEM methods all have the same spectral radius in this simplified setting. We remark that the S2S-PCA method has on average the same convergence behavior as the other two-level methods, even though it can occasionally be slightly faster or slower. The S2S-DNN method outperforms the others. In this particular setting, the eigenvalues of G are $$\lambda _j=\pm \sqrt{\mu _j}$$, where $$\mu _j>0,\; \forall j=1,\ldots ,N^s$$ are the eigenvalues of $$G_1=G_2$$, and A is symmetric. Hence, we are under the assumptions of point (C) of Theorem 6, and Fig. 2 confirms that a spectral coarse space is not necessarily the coarse space leading to the fastest convergence.

As we claimed that the deep neural network approach is computationally expensive, it is worth remarking that the PCA approach builds a coarse space as efficient as the spectral ones by performing $$q\cdot r$$ subdomain solves in parallel, instead of solving eigenvalue problems as required by all other two-level methods, either locally (as in the S2S-$$G_j$$ and SHEM methods) or on the whole skeleton (as in the S2S-G method).
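The PCA construction can be sketched as a randomized range approximation of the smoother G; the sketch below uses our own illustrative names, assumes G is symmetric and available only through matrix-vector products (each costing one round of parallel subdomain solves), and is not the exact algorithm of the paper.

```python
import numpy as np

# Hedged sketch of a PCA-type coarse-space construction: probe G with q
# random vectors, apply G r times (q*r subdomain solves in total, all
# parallel), then keep the dominant principal directions of the sample.
def pca_coarse_space(apply_G, n, dim_Vc, q=None, r=2, rng=None):
    rng = np.random.default_rng(rng)
    q = 2 * dim_Vc if q is None else q        # q = 2 dim V_c as in the text
    X = rng.standard_normal((n, q))           # random probing vectors
    for _ in range(r):                        # r applications of G
        X = apply_G(X)
    # PCA step: dim_Vc dominant left singular vectors of the sample matrix
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim_Vc]                      # orthonormal basis of V_c
```

When the spectrum of G decays quickly, the returned basis approximates the dominant eigenvectors of G; as discussed in Sect. 6.3, clustered eigenvalues close to 1 force larger values of r.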

Next, we compare the computational costs required by the S2S method and a spectral volume method in Table 1. For simplicity we assume $$n_1=1$$, $$n_2=0$$. Let $$A_v=M-N$$ be a volume matrix of size $$N^v\times N^v$$ and A the substructured matrix of size $$N^s \times N^s$$, with P and R the substructured prolongation and restriction operators, while $$P_v$$ and $$R_v$$ are the corresponding volume operators. On each subdomain, we suppose there are $$N_{\text {sub}}$$ unknowns, and m is the dimension of the coarse space. The cost of the smoothing step is equal in both cases to $$\gamma _s(N_{\text {sub}})$$, where $$\gamma _s$$ depends on the choice of the linear solver; e.g., for a Poisson problem, $${\gamma _s}(N_{\mathrm{sub}})=N_{\mathrm{sub}}\log (N_{\mathrm{sub}})$$ if a fast Poisson solver is used, or $${\gamma _s}(N_{\mathrm{sub}})=b N_{\mathrm{sub}}$$ for sparse banded matrices with bandwidth b; see, e.g., [54]. Further, the cost of solving the coarse problem is identical as well, equal to $$\gamma _c(m)$$, where m is the size of the coarse space and $$\gamma _c$$ depends on the linear solver used. The coarse matrices are usually small, fully dense matrices, so it is reasonable to factorize them using an LU decomposition. In a standard implementation, the S2S method requires subdomain solves when computing the residual, since the matrix-vector multiplication with the matrix A is needed. To avoid this extra cost, in the “Appendix 8” we present two alternative algorithms that implement the S2S method efficiently, where the residual is computed cheaply and the two applications of the smoothing operators per iteration are avoided. We further show that these two alternatives have the same convergence behavior as Algorithm 1.
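The two-level sweep whose costs are itemized above can be sketched as follows; this is a minimal dense-algebra illustration with our own names, not the efficient implementation of "Appendix 8" (in practice $$M^{-1}$$ is applied via parallel subdomain solves and $$A_c=RAP$$ is factorized once by LU).

```python
import numpy as np

# One sweep of a generic two-level stationary method:
# smoothing, residual, coarse solve, coarse correction.
def two_level_step(A, Minv, R, P, Ac, u, f):
    u = u + Minv @ (f - A @ u)          # smoothing step, cost gamma_s
    r = f - A @ u                       # residual
    e = np.linalg.solve(Ac, R @ r)      # coarse solve, cost gamma_c(m)
    return u + P @ e                    # coarse correction u_new = u + P e
```

With a spectral coarse space, i.e. P spanning the slowest eigenvectors of the error propagation operator, the correction removes exactly the modes on which the smoother stagnates.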

The main advantage of the S2S method is that the restriction and prolongation operators act on the substructures, i.e., on objects which are smaller than the corresponding volume counterparts. Thus the S2S method naturally requires less memory storage. For instance, given a coarse space of dimension m, the restriction and prolongation operators are matrices of size $$m\times N^s$$ and $$N^s\times m$$, where $$N^s$$ is the number of unknowns on the substructures. The corresponding volume objects have size $$m\times N^v$$ and $$N^v\times m$$, where $$N^v$$ is the number of unknowns on the whole domain. Thus the S2S method presents advantages both in memory storage and in computational time, due to the smaller number of floating-point operations.

We now discuss the cost of the off-line computation phases. To build prolongation and restriction operators in the volume case, one needs to define some functions, usually by solving eigenvalue problems, along the interfaces between nonoverlapping subdomains or in the overlap region between overlapping subdomains. These functions are then extended into the interior of the subdomains, and this extension costs $$\gamma _s(N_{\text {sub}})$$. Notice that the way of extending these functions is not unique, and we refer to [28, Section 5] for an overview. In the substructured framework, we have analyzed theoretically several approaches, among which a global eigenvalue problem (S2S-G), local eigenvalue problems (S2S-$$G_j$$), and randomized approaches using either PCA (S2S-PCA) or deep neural networks (S2S-DNN). The relative costs of these approaches with respect to the volume ones are difficult to estimate, as they depend on the features of the problem at hand. Nevertheless, whatever method is used to generate the interface functions, no extension step is needed in the substructured framework. Besides the approaches studied theoretically, we emphasize that one can use the interface functions computed in a volume method as a basis for the S2S coarse space. In this way one avoids the extension step and fully exploits the intrinsic substructured nature of the S2S method. In the next section we show numerical results where we used the SHEM interface functions as a basis for the S2S method (called the S2S-HEM method).

To conclude, we consider a Poisson equation on a three-dimensional box $$\Omega = (-1, 1) \times (0, 1) \times (0, 1)$$ decomposed into two overlapping subdomains $$\Omega _1 = (-1, \delta ) \times (0, 1) \times (0, 1)$$ and $$\Omega _2 = (-\delta , 1) \times (0, 1) \times (0, 1)$$. Table 2 shows the computational times to reach a relative error smaller than $$10^{-8}$$, and the memory required to store the restriction and interpolation operators in a sparse format in Matlab for the S2S method and the SHEM method. The experiments have been performed on a workstation with an 8-core Intel Core i7-6700 CPU at 3.40GHz and 32 GB of RAM. We remark that the S2S method requires drastically less memory than the SHEM method, which becomes inefficient for large problems from the memory point of view. Concerning computational times, we observe that the two methods are equivalent in this setting. The substructured restriction and prolongation operators are faster than the volume ones: computing the action of, for instance, the substructured prolongation operator on the largest problem takes about $$7\cdot 10^{-4}$$ seconds, compared to $$3\cdot 10^{-2}$$ seconds for the volume prolongation. However, the bottleneck here is represented by the two, very large, subdomain solves. A decomposition into many subdomains and a parallel implementation in a high-performance programming language should make the advantage of using substructured coarse spaces in terms of computational time more evident.

### 6.2 Decomposition into Many Subdomains

In this section, we consider a Poisson equation in a square domain $$\Omega$$ decomposed into $$M\times M$$ nonoverlapping square subdomains $${\widetilde{\Omega }}_j$$, $$j=1,\ldots ,M^2=N$$. Each subdomain $${\widetilde{\Omega }}_j$$ contains $$N_{\text {sub}}:=(2^\ell -1)^2$$ interior degrees of freedom. The subdomains $${\widetilde{\Omega }}_j$$ are extended by $$N_{ov}$$ points to obtain subdomains $$\Omega _j$$, which form an overlapping decomposition of $$\Omega$$. Each discrete local substructure is made of one-dimensional segments. Figure 3 provides a graphical representation.

Figure 4 compares several versions of the S2S method to the SHEM method. Specifically, we consider an S2S method with a coarse space made of eigenfunctions of G (S2S-G), an S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), an S2S method with a coarse space inspired by the SHEM coarse space (S2S-HEM, that is, S2S Harmonically Enriched Multiscale), and an S2S method with a coarse space obtained with the deep neural network approach (S2S-DNN).

In more detail, we create the HEM coarse space by computing harmonic functions and solving interface eigenvalue problems on each one-dimensional segment that forms the local discrete substructure, similarly to the SHEM coarse space [27]. Notice, however, that the SHEM method extends these interface functions into the interior of the nonoverlapping subdomains, as the method is naturally defined in volume. We do not need to perform this extra step of extending the functions into the neighboring subdomains. We report that we have also tried to build a coarse space by simply restricting the volume functions of the SHEM coarse space onto the substructures, and we observed a behavior similar to that of the HEM coarse space. For the PCA approach, we generated $$q=2\,\text {dim}V_c$$ random vectors and set $$r=2$$. The result we plot is averaged over 30 different random coarse spaces. For the deep neural network, we used $$k=4$$ and $$N=N^s$$.

The size of the coarse space is set by the SHEM coarse space. In the top-left panel, we consider only multiscale functions, without solving any eigenvalue problem along the interfaces. In the top-right panel, we include the first eigenfunction on each interface, and in the bottom panel we include the first and second eigenfunctions. In all cases we observe that the methods have similar convergence, which is slightly faster for the substructured methods with smaller coarse spaces. As we already remarked, S2S-G is not necessarily the fastest.

### 6.3 Diffusion Problem with Jumping Diffusion Coefficients

In this paragraph, we test the S2S method for the solution of a diffusion equation $$-\text {div}(\alpha \nabla u) = f$$ in the square domain $$\Omega :=(0,1)^2$$ with $$f:=\sin (4\pi x)\sin (2\pi y)\sin (2\pi xy)$$. The domain $$\Omega$$ is decomposed into 16 non-overlapping subdomains, and we suppose $$\alpha =1$$ everywhere except in some channels where $$\alpha$$ takes large values. Each non-overlapping subdomain is discretized with $$N_{\text {sub}}=2^{2\ell }$$ cells and enlarged by $$N_{ov}$$ cells to create an overlapping decomposition with overlap $$\delta =2N_{ov} h$$. We use a finite-volume scheme and assume that the jumps of the diffusion coefficient are aligned with the cell edges. We consider the two configurations represented in Fig. 5.
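When the jumps of $$\alpha$$ are aligned with the cell edges, a cell-centered finite-volume scheme handles them through harmonic averaging of the coefficient on the faces. The following is an illustrative 1D cross-section of such an assembly (names and boundary treatment are ours, not taken from the paper's code).

```python
import numpy as np

# Hedged sketch: cell-centered finite volumes in 1D. The face diffusivity is
# the harmonic mean of the two neighboring cell values, which keeps the
# numerical flux continuous across coefficient jumps.
def diffusion_matrix_1d(alpha, h):
    n = len(alpha)
    # harmonic mean of alpha on the n-1 interior faces
    a_face = 2.0 * alpha[:-1] * alpha[1:] / (alpha[:-1] + alpha[1:])
    A = np.zeros((n, n))
    for i in range(n - 1):              # assemble the flux across each face
        A[i, i] += a_face[i] / h**2
        A[i + 1, i + 1] += a_face[i] / h**2
        A[i, i + 1] -= a_face[i] / h**2
        A[i + 1, i] -= a_face[i] / h**2
    # homogeneous Dirichlet at both ends (boundary faces at distance h/2)
    A[0, 0] += 2.0 * alpha[0] / h**2
    A[-1, -1] += 2.0 * alpha[-1] / h**2
    return A
```

The 2D scheme used in the experiments applies the same harmonic averaging face by face on the Cartesian grid.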

We first aim to validate Theorem 6 numerically. We consider the two-channel configuration with $$\alpha =5\cdot 10^3$$, $$\ell =4$$, $$N=4$$. The first five eigenvalues are $$\lambda _1=0.999$$, $$\lambda _2=-0.9989$$, $$\lambda _3=-0.99863$$, $$\lambda _4=0.99861$$ and $$\lambda _5=0.2392$$. We consider the coarse space $$V_c:=\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_2\right\}$$, where $$\gamma =\langle {\pmb \psi }_1,{\pmb \psi }_2\rangle \approx 10^{-15}$$, so that the two eigenvectors are orthogonal. As $$\lambda _1$$ and $$\lambda _2$$ have opposite signs, point (C) of Theorem 6 guarantees the existence of an $${\widetilde{\varepsilon }}$$ such that $$\rho (T({\widetilde{\varepsilon }}))=|\lambda _3|<|\lambda _2|$$. The left panel of Fig. 6 confirms that $$|\lambda (\varepsilon ,0)|$$ reaches zero for two values of $$\varepsilon$$. The right panel clearly shows that for several values of $$\varepsilon$$, $$\rho (T(\varepsilon ))=|\lambda _3|$$. It is interesting to remark that, in this setting, choosing $$\varepsilon =0$$ (that is, a standard spectral coarse space) is actually the worst choice, since $$\rho (T(\varepsilon ))\le |\lambda _2|$$ for any $$\varepsilon \ne 0$$, as the proof of point (C) of Theorem 6 shows.

Second, we build the coarse space $$V_c:=\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_5\right\}$$, where $${\pmb \psi }_i$$, $$i=1,5$$, are the first and fifth eigenfunctions of G and $$\varepsilon =0.01$$. Figure 7 shows that the S2S method with this specific choice of coarse space diverges. Computing the spectral radius, we obtain $$\rho (T)= 1.2322$$. In this setting, we have $$\gamma =-0.5628$$, which, substituted into the expression of $$\lambda (\gamma ,\varepsilon )$$ together with the values of $$\lambda _1$$ and $$\lambda _5$$, leads to $$\lambda (\gamma ,\varepsilon )=1.2322=\rho (T)$$. Indeed, $$\lambda (\gamma ,\varepsilon )$$ has a vertical asymptote at $$\gamma ^*=-0.1404$$, as shown in the right panel of Fig. 7. We can restore the convergence of the S2S method by a sufficient decrease of $$\varepsilon$$, that is, by reducing the perturbation in the coarse space. In a numerical implementation, this is obtained by performing $$r\ge 1$$ iterations of the smoother G on the coarse space (see Corollary 1). Indeed, it holds that

\begin{aligned} G^r V_c=G^r\text {span}\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_5\right\} =\text {span}\left\{ {\pmb \psi }_1 +\frac{\lambda _5^r}{\lambda _1^r}\varepsilon {\pmb \psi }_5\right\} .\end{aligned}

Applying the smoother twice in the case at hand, we get a new “smoothed” coarse space where the perturbation has size $$\varepsilon ^*=\frac{\lambda _5^2}{\lambda _1^2}\varepsilon =5.73\cdot 10^{-4}$$, so that now $$\lambda (\gamma ,\varepsilon ^*)=-0.0080$$. We remark that $$\lambda (\gamma ,\varepsilon ^*)$$ is the convergence factor of T on $$\text {span}\left\{ {\pmb \psi }_1+\varepsilon ^* {\pmb \psi }_5\right\}$$, so that the convergence of the S2S method is now determined by the second largest eigenvalue of T, i.e. $$\lambda _2=-0.9990$$, as Fig. 7 shows.

We now investigate the performance of the S2S methods and compare them with the SHEM coarse space in the multiple-channel configuration. We set $$\ell =4$$, $$N=16$$, which corresponds to $$N^v=4096$$ degrees of freedom, and $$N_{ov}=2$$. Table 3 shows the number of iterations to reach a relative error smaller than $$10^{-8}$$ for the S2S-G, S2S-PCA, S2S-HEM and SHEM methods. The relative error is computed with respect to the first iterate. We consider coarse spaces of dimensions 84, 132 and 180, which, for the SHEM and S2S-HEM methods, correspond to multiscale coarse spaces enriched by respectively the first, the first two, and the first three eigenvectors of the interface eigenvalue problems. For the PCA coarse space, we set $$q=2N_c$$ and $$r=6$$ if $$\alpha =10^6$$, $$r=4$$ if $$\alpha =10^4$$, and $$r=2$$ if $$\alpha =10^2$$. We remark that for smaller values of r, the S2S-PCA method diverges. This increase in the value of r can be explained by noticing that, for the multichannel configuration, the smoother G has several eigenvalues approximately equal to 1 for large values of $$\alpha$$. Thus the PCA procedure, which essentially relies on a power-method idea to approximate the image of G, suffers from the presence of several clustered eigenvalues, and hence does not provide accurate approximations of the eigenfunctions of G. Similarly, the HEM coarse space obtained by solving an eigenvalue problem on each segment of the skeleton could also lead to a divergent method. Thus, to improve this coarse space, we apply a few iterations of the smoother to obtain a better $$V_c$$. Tables 3 and 4 report the number of iterations to reach a tolerance of $$10^{-8}$$ when the algorithms are used either as stationary methods or as preconditioners. We remark that all spectral methods have very similar performance, and all methods are robust with respect to the strength of the jumps.

## 7 Conclusions

In this work we introduced a new computational framework of two-level substructured Schwarz methods, called S2S, based on coarse spaces defined exclusively on interfaces provided by the overlapping decomposition of the domain. We presented a broader convergence analysis for two-level iterative methods, which covers the proposed substructured framework as a special case. The analysis pushes forward the current understanding of the asymptotic optimality of coarse spaces. From the computational point of view, we discussed approaches based on PCA and deep neural networks for the numerical computation of efficient coarse spaces. Finally, the effectiveness of our new methods is confirmed by extensive numerical experiments, in which stationary elliptic problems (with possibly strongly jumping diffusion coefficients) are efficiently solved.