Abstract
The parallel Schwarz method (PSM) is an overlapping domain decomposition (DD) method to solve partial differential equations (PDEs). Similarly to classical nonoverlapping DD methods, the PSM admits a substructured formulation, that is, it can be formulated as an iteration acting on variables defined exclusively on the interfaces of the overlapping decomposition. In this manuscript, spectral coarse spaces are considered to improve the convergence and robustness of the substructured PSM. In this framework, the coarse space functions are defined exclusively on the interfaces. This is in contrast to classical two-level volume methods, where the coarse functions are defined in volume, though with local support. The approach presented in this work has several advantages. First, it allows one to use some of the well-known efficient coarse spaces proposed in the literature, and facilitates the numerical construction of efficient coarse spaces. Second, the computational work is comparable to or lower than that of standard volume two-level methods. Third, it opens new interesting perspectives, as the analysis of the new two-level substructured method requires the development of a new convergence analysis of general two-level iterative methods. The new analysis casts light on the optimality of coarse spaces: given a fixed dimension m, the spectral coarse space made by the first m dominant eigenvectors is not necessarily the minimizer of the asymptotic convergence factor. Numerical experiments demonstrate the effectiveness of the proposed new numerical framework.
1 Introduction
Consider a linear problem of the form \(A u = f\), which we assume well posed in a vector space V. To define a two-level method for the solution to this problem, a one-level method and a coarse-correction step are required. One-level Schwarz methods are generally based on a splitting technique: the operator \(A : V \rightarrow V\) is decomposed as \(A = M - N\), where \(M : V \rightarrow V\) is assumed invertible and represents the preconditioner. This splitting leads to a stationary iteration, namely \(u^{k+1} = M^{-1}N u^k + M^{-1} f\), for \(k=0,1,\ldots \), and to a preconditioned system \(M^{-1}A u = M^{-1}f\). These are strongly related, since the stationary iteration, if it converges, produces the solution of the preconditioned system; see, e.g., [1] and references therein. Therefore, DD methods can be used as stationary iterations or preconditioners; see, e.g., [2,3,4,5,6]. Unfortunately, one-level DD methods are in general not scalable, and a coarse correction step is often desirable. See, e.g., [7,8,9,10,11,12] for exceptions and detailed scalability and nonscalability analyses.
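The splitting iteration can be sketched in a few lines. The following is a minimal Python sketch, assuming an illustrative Jacobi splitting of a small symmetric matrix; neither the matrix nor the splitting is taken from the paper:

```python
import numpy as np

# Minimal sketch of the splitting iteration u^{k+1} = M^{-1} N u^k + M^{-1} f
# for A = M - N, assuming an illustrative Jacobi splitting (M = diag(A))
# of a small symmetric matrix; nothing here is taken from the paper.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
f = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))   # preconditioner M
N = M - A                 # so that A = M - N

u = np.zeros(3)
for _ in range(200):
    u = np.linalg.solve(M, N @ u + f)   # u^{k+1} = M^{-1}(N u^k + f)

# If the iteration converges, its fixed point solves the preconditioned
# system M^{-1} A u = M^{-1} f, i.e., the original problem A u = f.
print(np.allclose(u, np.linalg.solve(A, f)))
```

For this diagonally dominant matrix the Jacobi splitting converges, so the fixed point coincides with the solution of \(Au=f\).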
A two-level method is characterized by the combination of a one-level method, defined on V, and a coarse correction step, performed on a coarse space \(V_c\). The coarse space \(V_c\) is finite dimensional and must satisfy the condition \(\dim V_c~\ll ~\dim V\). The mappings between V and \(V_c\) are realized by a restriction operator \(R : V \rightarrow V_c\) and a prolongation operator \(P : V_c \rightarrow V\). In general, the restriction of \(A : V \rightarrow V\) on \(V_c\) is defined as \(A_c=RAP\), which is assumed to be an invertible matrix.
Now, we distinguish two cases: a two-level stationary method and a two-level preconditioning method. In the first case, a stationary method is used as the first-level method. After each stationary iteration, which produces an approximation \(u_{app}\), the residual \(r = f - Au_{app}\) is mapped from V to \(V_c\), the coarse problem \(A_c e = R r\) is solved to get \(e \in V_c\), and the coarse correction step is defined as \(u_{new} = u_{app} + P e\). This correction provides the new approximation \(u_{new}\). By repeating these operations iteratively, one gets a two-level stationary method. The preconditioner corresponding to this method is denoted by \(M_{s,2L}\). Notice that this idea is very closely related to two-grid methods. In the second case, the first-level method is purely a preconditioner \(M^{-1}\). The corresponding two-level preconditioner, denoted by \(M_{2L}\), is generally obtained in an additive way: the one-level preconditioner \(M^{-1}\) is added to the coarse correction matrix \(P A_c^{-1} R\). When used with appropriate implementations, the two preconditioners \(M_{2L}\) and \(M_{s,2L}\) require about the same computational effort per Krylov iteration. However, their different structures can lead to different performance of Krylov methods.
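The two-level stationary cycle just described (smoothing, residual restriction, coarse solve, correction) can be sketched as follows; the 1D Poisson matrix, the damped Jacobi smoother, and the aggregation-based prolongation are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Minimal sketch of a two-level stationary method, assuming a 1D Poisson
# matrix, a damped Jacobi smoother and an aggregation-based prolongation
# (all illustrative choices, not the paper's setting).
n, m = 15, 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

P = np.kron(np.eye(m), np.ones((n // m, 1)))   # piecewise-constant prolongation
R = P.T                                        # restriction
A_c = R @ A @ P                                # Galerkin coarse matrix A_c = R A P

Minv = (2.0 / 3.0) * np.diag(1.0 / np.diag(A))  # damped Jacobi, omega = 2/3

u = np.zeros(n)
for _ in range(2000):
    u = u + Minv @ (f - A @ u)        # one-level smoothing -> u_app
    r = f - A @ u                     # residual r = f - A u_app
    e = np.linalg.solve(A_c, R @ r)   # coarse problem A_c e = R r
    u = u + P @ e                     # coarse correction u_new = u_app + P e

print(np.linalg.norm(f - A @ u))      # the residual decays to zero
```

Since the coarse correction is an A-orthogonal projection here (R is the transpose of P and A is symmetric positive definite), the cycle converges whenever the smoother does.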
The literature about two-level DD methods is very rich. See, e.g., [7, 12,13,14,15,16,17,18,19] for references considering two-level Schwarz stationary methods, e.g., [20,21,22,23,24,25,26,27,28,29,30,31,32] for references considering two-level Schwarz preconditioners, and, e.g., [31, 33,34,35] for references considering two-level substructuring preconditioners. See also general classical references such as [4,5,6] and [36, 37].
For any given one-level Schwarz method (stationary or preconditioning), the choices of \(V_c\), P and R influence very strongly the convergence behavior of the corresponding two-level method. A common choice would be to use as a coarse space the span of the dominant eigenfunctions of the one-level iteration operator \(G:=M^{-1}N\). Such a coarse space, and more generally coarse spaces obtained as the span of some given functions, are usually called spectral coarse spaces to distinguish them from geometric coarse spaces built implicitly using coarser meshes as in a multigrid framework.
In a more general context, fundamental results are presented in [38]: for a symmetric and positive definite A, it is proved that the coarse space of size m that minimizes the energy norm of the two-level iteration operator is exactly the spectral coarse space made by the first m dominant eigenfunctions of G. The sharp result of [38] provides a concrete (optimal) choice of \(V_c\) minimizing the energy norm of the two-level operator associated to a symmetric and positive definite A.
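A small numerical illustration of a spectral coarse space: taking \(V_c\) as the span of the m dominant eigenvectors of \(G = M^{-1}N\) removes the dominant modes from the two-level error operator. The matrix (1D Poisson) and the Jacobi smoother are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of a spectral coarse space: V_c is spanned by the m
# dominant eigenvectors of the one-level iteration operator G = M^{-1} N.
# The matrix (1D Poisson) and the Jacobi smoother are illustrative.
n, m = 20, 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Minv = np.diag(1.0 / np.diag(A))      # Jacobi: M = diag(A)
G = np.eye(n) - Minv @ A              # G = M^{-1} N = I - M^{-1} A

# Sort the eigenpairs of G by decreasing modulus; V_c = span of the first m.
w, V = np.linalg.eig(G)
idx = np.argsort(-np.abs(w))
P = np.real(V[:, idx[:m]])            # prolongation: columns span V_c
R = P.T
A_c = R @ A @ P

# Two-level error operator: coarse correction composed with one smoothing step.
T = (np.eye(n) - P @ np.linalg.solve(A_c, R @ A)) @ G
rho_G = np.max(np.abs(w))
rho_T = np.max(np.abs(np.linalg.eigvals(T)))
print(rho_T < rho_G)   # the dominant modes of G are removed by the correction
```

In this symmetric example \(V_c\) is invariant under A, so the coarse correction annihilates the selected eigenvectors exactly and the spectral radius drops to that of the first discarded mode.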
Unfortunately, computing the eigenfunctions of the one-level method is often unfeasible, and thus several works have proposed other spectral coarse spaces which are cheaper to obtain, but still contain information about the slow eigenspace of the one-level method.
As a matter of fact, focusing on a Schwarz iterative procedure, the error and the residual generally have very special forms. The error is harmonic, in the sense of the underlying PDE operator, in the interior of the subdomains (excluding the interfaces). Moreover, it is predominant in the overlap. The residual is predominant on the interfaces and zero outside the overlap. For examples and more details, see, e.g., [13, 17]. This difference motivated, sometimes implicitly, the construction of different coarse spaces. On the one hand, many references use different techniques to define coarse functions in the overlap (where the error is predominant), and then extend them to the remaining part of the neighboring subdomains; see, e.g., [22,23,24,25,26, 28,29,30, 33]. On the other hand, in other works the coarse space is created by first defining basis functions on the interfaces, and then extending them (in different ways) to the portions of the neighboring subdomains; see, e.g., [7, 12, 13, 15, 17,18,19,20,21, 27, 28]. For a good, compact and complete overview of several of the different coarse spaces, we refer to [28, Section 5]. For other different techniques and related discussions, see, e.g., [4, 14,15,16, 31, 39].
This work differs from the existing literature as we introduce for the first time two-level Schwarz substructured methods. These are two-level stationary iterative methods, based on the Schwarz iteration, and the term “substructured” indicates that both the one-level iteration and the coarse spaces are defined on the interfaces (or skeletons).^{Footnote 1} As in this manuscript we consider coarse spaces obtained as the span of certain interface functions, we call our two-level substructured Schwarz methods Spectral 2-Level Substructured (S2S) methods. In this respect, they are defined in the same spirit as two-level methods whose coarse spaces are extensions in volume of interface basis functions.
Our S2S framework can accommodate several choices for the coarse space, e.g., the span of dominant eigenfunctions of the one-level (substructured) Schwarz iteration operator, or the ones proposed in several papers such as [17, 18, 27, 28]. From a numerical point of view, the S2S framework has several advantages compared to classical two-level Schwarz methods defined in volume. Since the coarse space functions are defined on the interfaces, less memory storage is required. For a three-dimensional problem with mesh size h, a discrete interface coarse function is an array of size \(O(1/h^2)\). This is much smaller than \(O(1/h^3)\), which is the size of an array corresponding to a coarse function in volume. For this reason the resulting interface restriction and prolongation operators are smaller matrices, and thus the corresponding interpolation operations are cheaper to perform. Therefore, assuming that the one-level stationary iteration step and the dimension of the coarse space are the same for an S2S method and a method in volume, each S2S iteration is generally computationally less expensive. In terms of iteration number, our S2S methods perform similarly to or faster than other two-level methods that use the same DD smoother. Notice also that the precomputation part, which consists mainly of constructing the coarse space \(V_c\) and assembling the operators P, R and \(A_c\), requires the same computational effort as a method in volume. Moreover, the substructured feature of the S2S framework allows us to introduce two new procedures, based on PCA and neural networks, to numerically build an efficient coarse space \(V_c\). Direct numerical experiments will show that the coarse spaces generated by these two approaches either outperform the spectral coarse space and other commonly used coarse spaces, or they lead to a very similar convergence behavior.
In our substructured framework, the matrix A is nonsymmetric. Thus the optimality result of [38] does not hold. Relying on our previous work [41], we provide a new general convergence analysis, based on an infinite-matrix representation of the two-level operator, which covers our substructured formulation. This analysis has a rather general applicability, can be used to tackle nonsymmetric problems, and allows us to show, surprisingly, in which cases a spectral coarse space is not (asymptotically) optimal. Indeed, even for symmetric matrices, [38] provides an optimality result for the norm of the iteration operator, which is generally only an upper bound for the asymptotic convergence factor. Specifically, we show that a spectral coarse space made of the first m dominant eigenfunctions of G is not necessarily the coarse space of dimension m minimizing the spectral radius of the two-level operator based on the Schwarz iteration. We show this both theoretically and numerically, using principal component analysis (PCA) and deep neural networks to build numerically more efficient coarse spaces.
This paper is organized as follows. In Sect. 2, we formulate the classical parallel Schwarz method in a substructured form. This is done at the continuous level and represents the starting point for the S2S method introduced in Sect. 3. A detailed convergence analysis is presented in Sect. 4. Section 5 discusses both PCA-based and deep neural network approaches to numerically create an efficient coarse space. Extensive numerical experiments are presented in Sect. 6, where the robustness of the proposed methods with respect to mesh refinement and physical (jumping) parameters is studied. We present our conclusions in Sect. 7. Finally, in the “Appendix” important implementation details are discussed.
2 Substructured Schwarz Methods
Consider a bounded Lipschitz domain \(\Omega \subset {\mathbb {R}}^d\) for \(d\in \{2,3\}\), a general secondorder linear elliptic operator \({\mathcal {L}}\) and a function \(f \in L^2(\Omega )\). Our goal is to introduce new domain decomposition methods for the efficient numerical solution of the general linear elliptic problem
which we assume to be uniquely solved by a \(u \in H^1_0(\Omega )\).
To formulate our methods we need to fix some notation. Given a bounded set \(\Gamma \) with boundary \(\partial \Gamma \), we denote by \(\rho _{\Gamma }(x)\) the function representing the distance of \(x \in \Gamma \) from \(\partial \Gamma \). We can then introduce the space \(H_{00}^{1/2}(\Gamma )\)
which is also known as the Lions–Magenes space; see, e.g., [5, 42, 43]. Notice that \(H_{00}^{1/2}(\Gamma )\) can be equivalently defined as the space of functions in \(H^{1/2}(\Gamma )\) such that their extensions by zero to a superset \({\widetilde{\Gamma }}\) of \(\Gamma \) are in \(H^{1/2}({\widetilde{\Gamma }})\) [43].
Next, consider a decomposition of \(\Omega \) into N overlapping Lipschitz subdomains \(\Omega _j\), that is \(\Omega = \cup _{j \in {\mathcal {I}}} \Omega _j\) with \({\mathcal {I}}:=\{1,2,\ldots ,N\}\). For any \(j \in {\mathcal {I}}\), we define the set of neighboring indexes \({\mathcal {N}}_j :=\{ \ell \in {\mathcal {I}} \, : \, \Omega _j \cap \partial \Omega _\ell \ne \emptyset \}\). Given a \(j \in {\mathcal {I}}\), we introduce the substructure of \(\Omega _j\) defined as \({\mathcal {S}}_j := \cup _{\ell \in {\mathcal {N}}_j} \bigl (\Omega _j \cap \partial \Omega _\ell \bigr )\), that is the union of all the portions of \(\partial \Omega _\ell \) with \(\ell \in {\mathcal {N}}_j\).^{Footnote 2} Notice that the sets \({\mathcal {S}}_j\) are open and their closures are \(\overline{{\mathcal {S}}_j} = {\mathcal {S}}_j \cup \partial {\mathcal {S}}_j\), with \(\partial {\mathcal {S}}_j := \cup _{\ell \in {\mathcal {N}}_j} \bigl (\partial \Omega _j \cap \partial \Omega _\ell \bigr )\). Figure 1 provides an illustration of substructures corresponding to a commonly used decomposition of a rectangular domain. The substructure of \(\Omega \) is defined as \({\mathcal {S}}:=\cup _{j \in {\mathcal {I}}}\overline{{\mathcal {S}}_j}\). We denote by \({\mathcal {E}}_j^0 : L^2({\mathcal {S}}_j) \rightarrow L^2({\mathcal {S}})\) the extension by zero operator. Now, we consider a set of continuous functions \(\chi _j : \overline{{\mathcal {S}}_j} \rightarrow [0,1]\), \(j=1,\ldots ,N\), such that
and \(\sum _{j\in {\mathcal {I}}} {\mathcal {E}}_j^0 \chi _j \equiv 1\), which means that the functions \(\chi _j\) form a partition of unity on \({\mathcal {S}}\). Further, we assume that the functions \(\chi _j\), \(j \in {\mathcal {I}}\), satisfy the condition \(\chi _j / \rho _{{\mathcal {S}}_j}^{1/2}~\in ~L^{\infty }({\mathcal {S}}_j)\).
For any \(j \in {\mathcal {I}}\), we define \(\Gamma _j^{\mathrm{int}} := \partial \Omega _j \cap \bigl ( \cup _{ \ell \in {\mathcal {N}}_j} \Omega _\ell \bigr )\) and introduce the following trace and restriction operators
It is well known that (1) is equivalent to the domain decomposition system
where \(f_j \in L^2(\Omega _j)\) is the restriction of f on \(\Omega _j\) (see, e.g., [5]). Notice that, since \(\tau _\ell u_\ell \in H^{1/2}({\mathcal {S}}_\ell )\), the properties of the partition of unity functions \(\chi _\ell \) guarantee that \(\chi _\ell \tau _\ell u_\ell \) lies in \(H_{00}^{1/2}({\mathcal {S}}_\ell )\) and \({\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H_{00}^{1/2}({\mathcal {S}})\). Moreover, for \(\ell \in {\mathcal {N}}_j\) it holds that \(\tau _j^{\mathrm{int}} {\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H_{00}^{1/2}(\Gamma _j^{\mathrm{int}})\) if \(\Gamma _j^{\mathrm{int}} \subsetneq \partial \Omega _j\), and \(\tau _j^{\mathrm{int}} {\mathcal {E}}_\ell ^0(\chi _\ell \tau _\ell u_\ell ) \in H^{1/2}(\Gamma _j^{\mathrm{int}})\) if \(\Gamma _j^{\mathrm{int}} = \partial \Omega _j\).
Given a \(j \in {\mathcal {I}}\) such that \(\partial \Omega _j \setminus \Gamma _j^{\mathrm{int}} \ne \emptyset \), we define the extension operator \({\mathcal {E}}_j : H_{00}^{1/2}(\Gamma _j^{\mathrm{int}}) \times L^2(\Omega _j) \rightarrow H^1(\Omega _j)\) as \(w={\mathcal {E}}_j(v,f_j)\), where w solves
for \(v \in H_{00}^{1/2}(\Gamma _j^{\mathrm{int}})\). Otherwise, if \(\Gamma _j^{\mathrm{int}} \equiv \partial \Omega _j\), we define \({\mathcal {E}}_j : H^{1/2}(\Gamma _j^{\mathrm{int}}) \times L^2(\Omega _j) \rightarrow H^1(\Omega _j)\) as \(w={\mathcal {E}}_j(v,f_j)\), where w solves
for \(v \in H^{1/2}(\Gamma _j^{\mathrm{int}})\). The DD system (3) can be then written as
If we define \(v_j := \chi _j \tau _j u_j\), \(j\in {\mathcal {I}}\), then system (6) becomes
where \(g_j := \chi _j \tau _j{\mathcal {E}}_{j}(0,f_j)\) and the operators \(G_{j,\ell } : H_{00}^{1/2}({\mathcal {S}}_\ell ) \rightarrow H_{00}^{1/2}({\mathcal {S}}_j)\) are defined as
System (7) is the substructured form of (3). The equivalence between (3) and (7) is explained by the following theorem.
Theorem 1
(Equivalence between (3) and (7)) Let \(u_j \in H^1(\Omega _j)\), \(j\in {\mathcal {I}}\), solve (3), then \(v_{j} := \chi _j\tau _j(u_j)\), \(j\in {\mathcal {I}}\), solve (7). Let \(v_j \in H^{1/2}({\mathcal {S}}_j)\), \(j\in {\mathcal {I}}\), solve (7), then \(u_j := {\mathcal {E}}_j(\tau _j^{\mathrm{int}}\sum _{\ell \in {\mathcal {N}}_j} {\mathcal {E}}_\ell ^0 (v_\ell ),f_j)\), \(j\in {\mathcal {I}}\), solve (3).
Proof
The first statement is proved before Theorem 1, where the substructured system (7) is derived. To obtain the second statement, we use (7) and the definition of \(u_j\) to write \(v_j = \chi _j \tau _j {\mathcal {E}}_j(\tau _j^{\mathrm{int}} \sum _{\ell \in {\mathcal {N}}_j } {\mathcal {E}}_\ell ^0 (v_\ell ),f_j) = \chi _j \tau _j u_j\). The claim follows by using this equality together with the definitions of \(u_j\) and \({\mathcal {E}}_j\). \(\square \)
Take any function \(w \in H^1_0(\Omega )\) and consider the initialization \(u_j^0:= w_{\Omega _j}\), \(j \in {\mathcal {I}}\). The parallel Schwarz method (PSM) is given by
for \(n \in {\mathbb {N}}^+\), and has the substructured form
initialized by \(v_{j}^0 := \chi _j \tau _j(u_j^0) \in H_{00}^{1/2}({\mathcal {S}}_j)\). Notice that the iteration (10) is well posed in the sense that \(v_{j}^n \in H_{00}^{1/2}({\mathcal {S}}_j)\) for \(j \in {\mathcal {I}}\) and \(n \in {\mathbb {N}}\). Equations (10) and (7) allow us to obtain the substructured PSM in error form, that is
for \(n \in {\mathbb {N}}^+\), where \(e_{j}^n:=v_j-v_j^n\), for \(j \in {\mathcal {I}}\) and \(n\in {\mathbb {N}}\). Equation (7) can be written in the matrix form \(A\mathbf{v}=\mathbf{b}\), where \(\mathbf{v}=[v_1,\ldots ,v_N]^\top \), \(\mathbf{b}=[g_1,\ldots ,g_N]^\top \) and the entries of A are
where \(I_{d,j}\) are the identities on \(L^2({\mathcal {S}}_j)\), \(j \in {\mathcal {I}}\). Similarly, we define G as
and hence write (10) and (11) as \(\mathbf{v}^n=G\mathbf{v}^{n-1}+\mathbf{b}\) and \(\mathbf{e}^n=G\mathbf{e}^{n-1}\), respectively, where \(\mathbf{v}^n :=[v_1^n,\ldots ,v_N^n]^\top \) and \(\mathbf{e}^n :=[e_1^n,\ldots ,e_N^n]^\top \). Notice that \(G=I-A\), where \(I:=\text {diag}_{j=1,\ldots ,N}(I_{d,j})\). Moreover, if we define
then one can clearly see that \(A : {\mathcal {H}} \rightarrow {\mathcal {H}}\) and \(G : {\mathcal {H}} \rightarrow {\mathcal {H}}\).
It is a standard result that the PSM iteration \(\mathbf{v}^n=G\mathbf{v}^{n-1}+\mathbf{b}\) converges; see, e.g., [11] for a convergence result of the PSM in a substructured form, [8,9,10, 12, 40] for other convergence results and [4, 6] for standard references. The corresponding limit is the solution to the problem \(A\mathbf{v}=\mathbf{b}\).
3 S2S: Spectral TwoLevel Substructured Schwarz Method
The idea of the S2S method is to use a coarse space \(V_c\) defined as the span of certain linearly independent functions defined on the skeletons of the subdomains \(\Omega _j\), for \(j \in {\mathcal {I}}\). Consider the space \({\mathcal {H}}\), endowed with an inner product \(\langle \cdot , \cdot \rangle \), and a set of \(m>0\) linearly independent functions \({\pmb \psi }_k\), \(k=1,\ldots ,m\). Notice that each \({\pmb \psi }_k\) has the form \({\pmb \psi }_k = [ \, \psi _k^1 , \ldots , \psi _k^N \, ]^\top \), where \(\psi _k^j \in H_{00}^{1/2}({\mathcal {S}}_j)\) for \(j \in {\mathcal {I}}\). We define the coarse space \(V_c\) as
To define a twolevel method, we need restriction and prolongation operators. Once the coarse space \(V_c\) is constructed, the choice of these operators follows naturally. We define the prolongation operator \(P : {\mathbb {R}}^{m} \rightarrow {\mathcal {H}}\) and the restriction operator \(R : {\mathcal {H}} \rightarrow {\mathbb {R}}^{m}\) as
for any \(\mathbf{v}=(v_1,\ldots ,v_m)^\top \in {\mathbb {R}}^m\) and \({\mathbf{h}} \in {\mathcal {H}}\). Notice that, if the functions \({\pmb \psi }_k\) are orthogonal, P is the adjoint operator of R and we have that \(RP=I_m\), where \(I_m\) is the identity matrix in \({\mathbb {R}}^{m \times m}\). The restriction of the operator A on \(V_c\) is the matrix \(A_c \in {\mathbb {R}}^{m \times m}\) obtained in a Galerkin manner, \(A_c = RAP\).
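The operators in (13) can be sketched in the discrete setting as follows, assuming the \({\pmb \psi }_k\) are represented by orthonormalized random vectors (a stand-in for discrete interface functions) and the Euclidean (discrete \(L^2\)) inner product:

```python
import numpy as np

# Minimal sketch of the operators in (13), assuming the psi_k are
# represented by orthonormalized random vectors (a stand-in for discrete
# interface functions) and the Euclidean (discrete L2) inner product.
rng = np.random.default_rng(0)
n, m = 50, 4
Psi, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal psi_1..psi_m

P = Psi       # P v = sum_k v_k psi_k
R = Psi.T     # (R h)_k = <psi_k, h>

# With orthonormal psi_k, P is the adjoint of R and R P = I_m;
# the Galerkin coarse matrix is then A_c = R A P.
print(np.allclose(R @ P, np.eye(m)))
```

With a non-orthogonal basis, \(RP\) would instead be the Gram matrix of the \({\pmb \psi }_k\), which is invertible as long as the basis is linearly independent.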
With the operators P, R and \(A_c\) in hand, our two-level method is defined as a classical two-level strategy applied to the substructured problem (7), using the domain decomposition iteration (10) as a smoother. This results in Algorithm 1, where \(n_1\) and \(n_2\) are the numbers of pre- and post-smoothing steps. The well-posedness of Algorithm 1 is proved in the next lemma.
Lemma 1
(Well posedness of S2S) Consider the inner product space \(({\mathcal {H}},\langle \cdot , \cdot \rangle )\), a set of linearly independent functions \(\{ {\pmb \psi }_k \}_{k=1,\ldots ,m}\), for some \(m>0\), and let \(V_c := \mathrm{span}\{ {\pmb \psi }_1 , \ldots , {\pmb \psi }_m \}\) be a finitedimensional subspace of \({\mathcal {H}}\). Let P and R be defined as in (13) (with \(\langle \cdot , \cdot \rangle \)). If \(A_c = RAP\) is invertible and the initialization vector \(\mathbf{u}^0\) is chosen in \({\mathcal {H}}\), then \(\mathbf{u}^{n_2}\) (computed at Step 5 of Algorithm 1) is in \({\mathcal {H}}\).
Proof
It is sufficient to show that for a given \(\mathbf{u}^0 \in {\mathcal {H}}\) all the steps of Algorithm 1 are well posed. Since \(\mathbf{b} \in {\mathcal {H}}\), \(G : {\mathcal {H}} \rightarrow {\mathcal {H}}\) and \(A : {\mathcal {H}} \rightarrow {\mathcal {H}}\), Step 1 and Step 2 produce \(\mathbf{u}^{n_1}\) and \(\mathbf{r}\) in \({\mathcal {H}}\). Step 3 is well posed because \(A_c\) is assumed to be invertible. Since \(V_c\) is a subset of \({\mathcal {H}}\), \(P \mathbf{u}_c\) and \(\mathbf{u}^0\) in Step 4 lie in \({\mathcal {H}}\). Clearly, the element \(\mathbf{u}^{n_2}\) produced by Step 5 is also in \({\mathcal {H}}\). Therefore, by induction Algorithm 1 is well posed in \({\mathcal {H}}\). \(\square \)
The key hypothesis of Lemma 1 is the invertibility of the coarse matrix \(A_c\). An equivalent characterization of this property is proved in Sect. 4.1. This result (and the discussion thereafter) allows us to obtain the invertibility of \(A_c\) if, e.g., \(V_c\) is a spectral coarse space. Moreover, it is worth remarking that the pseudo-inverse of \(A_c\) can be used in case \(A_c\) is not invertible; see [38].
Let us now turn our attention to the coarse space \(V_c\). We distinguish two general classes of coarse space functions: global and local coarse functions. Global coarse functions refer to functions defined directly on the global skeleton of \(\Omega \). An ideal choice of global coarse functions would be to define \(V_c\) as the span of the dominant eigenfunctions of the one-level operator G. In the context of multigrid methods, this choice is extensively discussed in [38], where the authors prove that, if A and the preconditioner corresponding to the one-level iteration are symmetric, the spectral coarse space minimizes the energy norm of T. This sharp result provides a concrete optimal choice of \(V_c\) minimizing the energy norm of T, but this norm is generally only an upper bound for the asymptotic convergence factor \(\rho (T)\), and the result does not extend to the nonsymmetric case (the substructured matrix A is nonsymmetric), as we will see in Sect. 4.2. Moreover, we will show in Sect. 5 two numerical approaches, based on PCA and neural networks, for the construction of global coarse space functions. These are generally different from the spectral ones and may lead to better convergence.
Another possibility is to build local coarse functions using eigenfunctions of the local operators \(G_j\). However, the eigenfunctions of \(G_j\) (or G) are known only in very special cases and their numerical computation could be quite expensive. To overcome this problem one could define \(V_c\) as the span of some Fourier basis functions, which could be obtained by solving a Laplace–Beltrami eigenvalue problem on each interface (or skeleton); see, e.g., [27, 28]. In this case, assuming that local basis functions \(\psi _k^j \in H_{00}^{1/2}({\mathcal {S}}_j)\) (endowed with inner product \(\langle \cdot , \cdot \rangle _j\)) are available, the coarse space \(V_c\) can be constructed as
for some positive integer \({\widehat{m}}\), where \(\otimes \) denotes the standard Kronecker product and \(\mathbf{e}_j\), for \(j \in {\mathcal {I}}\), are the canonical vectors in \({\mathbb {R}}^N\). In this case prolongation and restriction operators defined in (13) are
for any \(\mathbf{v}^1,\ldots ,\mathbf{v}^N \in {\mathbb {R}}^{{\widehat{m}}}\) and any \(({h}_1,\ldots ,{h}_N) \in {\mathcal {H}}\). We wish to remark that the choice of the inner product \(\langle \cdot , \cdot \rangle \) (or \(\langle \cdot , \cdot \rangle _j\) for \(j\in {\mathcal {I}}\)) in the definition of P and R is arbitrary. One possible choice is the classical \(H^{1/2}\) inner product. However, this could be too expensive from a numerical point of view. Another possibility is to consider the classical \(L^2\) inner product, which is the choice we make in our implementations.
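The local construction above yields a block-diagonal prolongation operator: each coarse function is supported on a single substructure. A minimal sketch, with random orthonormal local bases standing in for the \(\psi _k^j\):

```python
import numpy as np

# Minimal sketch of the local construction: each substructure contributes
# its own interface basis, so the prolongation is block diagonal. The
# random orthonormal local bases are illustrative stand-ins for psi_k^j.
rng = np.random.default_rng(2)
N, n_loc, m_hat = 3, 10, 2      # subdomains, local dofs, local basis size

local_bases = [np.linalg.qr(rng.standard_normal((n_loc, m_hat)))[0]
               for _ in range(N)]

# Coarse function (j, k) is e_j (x) psi_k^j, i.e. psi_k^j extended by zero
# to the other substructures: the blocks of P sit on the diagonal.
P = np.zeros((N * n_loc, N * m_hat))
for j, Psi_j in enumerate(local_bases):
    P[j * n_loc:(j + 1) * n_loc, j * m_hat:(j + 1) * m_hat] = Psi_j
R = P.T

print(np.allclose(R @ P, np.eye(N * m_hat)))  # orthonormal blocks: R P = I
```

The block structure is exactly what the Kronecker products \(\mathbf{e}_j \otimes \psi _k^j\) in (14) encode: zero extension to the other substructures.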
A detailed convergence analysis that covers our S2S method is presented in Sect. 4. It is based on the general structure of a two-level iteration operator. A direct calculation reveals that one iteration of the S2S method can be written as
where I is the identity operator over \({\mathcal {H}}\); see also [13, 15, 37]. Here, \({\widetilde{M}}\) is an operator which acts on the right-hand side vector \(\mathbf{b}\). Such an operator can be regarded as the preconditioner corresponding to our two-level method. In error form, the iteration (15) becomes
where \(\mathbf{e}^{\mathrm{new}}:= \mathbf{u}-\mathbf{u}^{\mathrm{new}}\) and \(\mathbf{e}^{\mathrm{old}}:= \mathbf{u}-\mathbf{u}^{\mathrm{old}}\). Hence, to prove convergence of the S2S method we study the operator T.
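The error form can be checked numerically for the standard two-level error operator \(T = G^{n_2}(I - PA_c^{-1}RA)G^{n_1}\) with \(n_1\) pre- and \(n_2\) post-smoothing steps; the concrete matrices below (1D Poisson, damped Jacobi, random orthonormal coarse basis) are illustrative assumptions, not the substructured operators of the paper:

```python
import numpy as np

# Numerical check of the error form e_new = T e_old, with the standard
# two-level error operator T = G^{n2} (I - P A_c^{-1} R A) G^{n1} for n1
# pre- and n2 post-smoothing steps. The matrices (1D Poisson, damped
# Jacobi, random orthonormal coarse basis) are illustrative assumptions.
rng = np.random.default_rng(1)
n, m, n1, n2 = 12, 3, 2, 1
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Minv = 0.5 * np.diag(1.0 / np.diag(A))   # damped Jacobi
G = np.eye(n) - Minv @ A                 # smoothing iteration operator
P, _ = np.linalg.qr(rng.standard_normal((n, m)))
R = P.T
A_c = R @ A @ P

b = rng.standard_normal(n)
u_exact = np.linalg.solve(A, b)

def cycle(u):
    for _ in range(n1):                                  # pre-smoothing
        u = u + Minv @ (b - A @ u)
    u = u + P @ np.linalg.solve(A_c, R @ (b - A @ u))    # coarse correction
    for _ in range(n2):                                  # post-smoothing
        u = u + Minv @ (b - A @ u)
    return u

T = np.linalg.matrix_power(G, n2) \
    @ (np.eye(n) - P @ np.linalg.solve(A_c, R @ A)) \
    @ np.linalg.matrix_power(G, n1)

u0 = rng.standard_normal(n)
print(np.allclose(u_exact - cycle(u0), T @ (u_exact - u0)))
```

Each smoothing step multiplies the error by G, and the coarse correction multiplies it by \(I - PA_c^{-1}RA\), so one full cycle applies T to the error exactly.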
4 Convergence Analysis
In this section, we provide convergence results for twolevel iterative methods in a general framework that covers the setting of the S2S domain decomposition method presented in Sect. 3.
Let \(({{\mathcal {X}}}, \langle \cdot , \cdot \rangle )\) be a complex^{Footnote 3} inner-product space and \({{\mathbb {A}}}x = b\) a linear problem, where the operator \({{\mathbb {A}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}\) is bijective and \(b \in {{\mathcal {X}}}\) is a given vector. Consider a set of \(m>0\) linearly independent functions \(\{{\pmb \psi }_k\}_{k=1,\ldots ,m}\), and denote by \(V_c\) the finite-dimensional subspace of \({\mathcal {X}}\) defined as the span of the functions \(\{{\pmb \psi }_k\}_{k=1,\ldots ,m}\). We denote by \(P : {{\mathbb {C}}}^m \rightarrow {{\mathcal {X}}}\) and \(R : {{\mathcal {X}}}\rightarrow {{\mathbb {C}}}^m\) the prolongation and restriction operators defined as in (13), and define the matrix \({{\mathbb {A}}}_c := R {{\mathbb {A}}}P \in {{\mathbb {C}}}^{m \times m}\). Given a smoothing operator \({{\mathbb {G}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}\), a two-level iterative method (as the one defined in Algorithm 1) is characterized by the iteration operator \({{\mathbb {T}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}\) defined by
where \({{\mathbb {I}}}: {{\mathcal {X}}}\rightarrow {{\mathcal {X}}}\) is the identity operator. In what follows the properties of \({{\mathbb {T}}}\) are analyzed. In particular, the invertibility of \({{\mathbb {A}}}_c\) is characterized in Sect. 4.1, the convergence (spectral) properties of \({{\mathbb {T}}}\) are discussed in the case of global coarse functions in Sect. 4.2 and in the case of local coarse functions in Sect. 4.3.
4.1 Invertibility of the Coarse Matrix
The well-posedness of a two-level method (like the S2S) is essentially related to the invertibility of the coarse operator \({{\mathbb {A}}}_c\). Even though one could replace the inverse of \({{\mathbb {A}}}_c\) with its pseudo-inverse, as discussed in, e.g., [38], in our analysis we will assume that \({{\mathbb {A}}}_c\) is invertible. The next lemma provides an equivalent characterization for the invertibility of \({{\mathbb {A}}}_c\).
Lemma 2
(Invertibility of a coarse operator \({{\mathbb {A}}}_c\)) Let \({\mathbb {P}}_{V_c} : {{\mathcal {X}}}\rightarrow V_c\) be a projection operator onto \(V_c\). The coarse matrix \({{\mathbb {A}}}_c = R {{\mathbb {A}}}P\) has full rank if and only if \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0 \, \forall \mathbf{v} \in V_c\setminus \{0\}\).
Proof
We first show that if \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0\) for any \(\mathbf{v} \in V_c\setminus \{0\}\), then \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P\) has full rank. This result follows from the rank-nullity theorem, if we show that the only element in the kernel of \({{\mathbb {A}}}_c\) is the zero vector. To do so, we recall the definitions of P and R given in (13). Let us now consider a vector \(\mathbf{z}\in {{\mathbb {C}}}^m\). Clearly, \(P \mathbf{z}=0\) if and only if \(\mathbf{z}=0\). Moreover, for any \(\mathbf{z} \in {{\mathbb {C}}}^m\) the function \(P \mathbf{z}\) is in \(V_c\). Since \({{\mathbb {A}}}\) is invertible, \({{\mathbb {A}}}P\mathbf{z}=0\) if and only if \(\mathbf{z}=0\). Moreover, by our assumption it holds that \({\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) \ne 0\). Now, we notice that \(R\mathbf{w} \ne 0\) for all \(\mathbf{w} \in V_c \setminus \{0\}\), and \(R\mathbf{w} = 0\) for all \(\mathbf{w} \in V_c^\perp \), where \(V_c^\perp \) denotes the orthogonal complement of \(V_c\) in \({\mathcal {X}}\) with respect to \(\langle \cdot , \cdot \rangle \). Since \(({\mathcal {X}},\langle \cdot , \cdot \rangle )\) is an inner-product space, we have \({{\mathbb {A}}}P\mathbf{z} = {\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) + ({{\mathbb {I}}}-{\mathbb {P}}_{V_c})({{\mathbb {A}}}P\mathbf{z})\) with \(({{\mathbb {I}}}-{\mathbb {P}}_{V_c})({{\mathbb {A}}}P\mathbf{z}) \in V_c^\perp \). Hence, \(R{{\mathbb {A}}}P\mathbf{z} = R{\mathbb {P}}_{V_c}({{\mathbb {A}}}P\mathbf{z}) \ne 0\) for any nonzero \(\mathbf{z}\).
Now we show that, if \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P\) has full rank, then \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0\) for any \(\mathbf{v} \in V_c\setminus \{0\}\). We proceed by contraposition and prove that if there exists a \(\mathbf{v} \in V_c\setminus \{0\}\) such that \({{\mathbb {A}}}\mathbf{v} \in V_c^\perp \), then \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P\) does not have full rank. Assume that there is a \(\mathbf{v} \in V_c\setminus \{0\}\) such that \({{\mathbb {A}}}\mathbf{v} \in V_c^\perp \). Since \(\mathbf{v}\) is in \(V_c\), there exists a nonzero vector \(\mathbf{z}\) such that \(\mathbf{v}=P\mathbf{z}\). Hence \({{\mathbb {A}}}P\mathbf{z} \in V_c^\perp \). We can now write \({{\mathbb {A}}}_c \mathbf{z}= R({{\mathbb {A}}}P\mathbf{z})=0\), which implies that \({{\mathbb {A}}}_c\) does not have full rank. \(\square \)
The following example shows that the invertibility of \({{\mathbb {A}}}\) does not necessarily imply the invertibility of \({{\mathbb {A}}}_c\).
Example 1
Consider the invertible matrix \({{\mathbb {A}}}:= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \). Let us denote by \(\mathbf{e}_1\) and \(\mathbf{e}_2\) the canonical vectors in \({\mathbb {R}}^2\), define \(V_c := \mathrm{span}\{ \mathbf{e}_1 \}\), and consider the classical scalar product for \({\mathbb {R}}^2\). This gives \(V_c^\perp = \mathrm{span}\{ \mathbf{e}_2 \}\). The prolongation and restriction operators are \(P=\mathbf{e}_1\) and \(R=P^\top \). Clearly, we have that \({{\mathbb {A}}}\mathbf{e}_1 = \mathbf{e}_2\), which implies that \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) = 0\) for all \(\mathbf{v} \in V_c\). Moreover, in this case we get \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P = 0\), which shows that \({{\mathbb {A}}}_c\) is not invertible.
Notice that, if \({{\mathbb {A}}}(V_c) \subseteq V_c\), then it holds that \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{v} ) \ne 0\) \(\forall \mathbf{v} \in V_c \setminus \{ 0\}\), and \({{\mathbb {A}}}_c\) is invertible. The condition \({{\mathbb {A}}}(V_c) \subseteq V_c\) is satisfied for operators of the form \({{\mathbb {A}}}= {{\mathbb {I}}}-{{\mathbb {G}}}\), as for instance those defined in (12), if the functions \({\pmb \psi }_k\) are eigenfunctions of \({{\mathbb {G}}}\). However, it is only a sufficient condition for the invertibility of \({{\mathbb {A}}}_c\). As the following example shows, there exist invertible operators \({{\mathbb {A}}}\) that do not satisfy this condition, but still lead to an invertible \({{\mathbb {A}}}_c\).
Example 2
Consider the invertible matrix \({{\mathbb {A}}}:= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix} \). Let us denote by \(\mathbf{e}_1\), \(\mathbf{e}_2\) and \(\mathbf{e}_3\) the three canonical vectors in \({\mathbb {R}}^3\), define \(V_c := \mathrm{span}\{ \mathbf{e}_1 , \mathbf{e}_2 \}\), and consider the classical scalar product for \({\mathbb {R}}^3\). This gives \(V_c^\perp = \mathrm{span}\{ \mathbf{e}_3 \}\). The prolongation and restriction operators are \(P=[\mathbf{e}_1 , \mathbf{e}_2]\) and \(R=P^\top \), and we get \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P = I\), where I is the \(2\times 2\) identity matrix. Now, we notice that \({{\mathbb {A}}}\mathbf{e}_2 = \mathbf{e}_2 + \mathbf{e}_3\), which implies that \({\mathbb {P}}_{V_c}( {{\mathbb {A}}}\mathbf{e}_2 )\ne 0\) and \({\mathbb {P}}_{V_c^\perp }( {{\mathbb {A}}}\mathbf{e}_2 )\ne 0\). Hence \(V_c\) is not invariant under \({{\mathbb {A}}}\), but \({{\mathbb {A}}}_c\) is invertible.
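Both examples are easy to check numerically. The following minimal NumPy sketch (an illustration, with \(R=P^\top \) as in the examples) verifies that \({{\mathbb {A}}}_c=0\) in Example 1 and \({{\mathbb {A}}}_c=I\) in Example 2.

```python
import numpy as np

# Example 1: A is invertible, but the coarse operator A_c = R A P vanishes,
# because A maps V_c = span{e1} into V_c^perp = span{e2}.
A1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
P1 = np.array([[1.0],
               [0.0]])          # P = e1
Ac1 = P1.T @ A1 @ P1            # R = P^T
assert np.linalg.matrix_rank(A1) == 2
print(Ac1)                      # [[0.]]  ->  A_c is singular

# Example 2: V_c = span{e1, e2} is not invariant under A
# (A e2 has a component along e3), yet A_c = R A P = I is invertible.
A2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0],
               [0.0, 1.0, 0.0]])
P2 = np.eye(3)[:, :2]           # columns e1, e2
Ac2 = P2.T @ A2 @ P2
print(Ac2)                      # 2x2 identity
```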
4.2 Global Coarse Functions
In this section, we study general convergence properties of the operator \({{\mathbb {T}}}\). The first theorem characterizes the relation between the kernel of \({{\mathbb {T}}}\) and the coarse space \(V_c\).
Theorem 2
(Kernel of \({{\mathbb {T}}}\), coarse space \(V_c\)) Let P and R be defined as in (13) by linearly independent functions \({\pmb \psi }_1,\ldots ,{\pmb \psi }_m\) such that \({{\mathbb {A}}}_c = R{{\mathbb {A}}}P\) is invertible. For any \({\pmb \psi }\in {{\mathcal {X}}}\) it holds that
Proof
Assume that \({\pmb \psi }\in V_c\). This implies that there exists a vector \(\mathbf{z}\) such that \({\pmb \psi }= P \mathbf{z}\). Hence, we can compute \([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }= P\mathbf{z} - P {{\mathbb {A}}}_c^{-1} (R {{\mathbb {A}}}P)\mathbf{z} = P\mathbf{z} - P\mathbf{z} = 0.\)
Let us now prove the reverse implication, that is, \([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }= 0 \Rightarrow {\pmb \psi }\in V_c\). We proceed by contraposition and assume that \({\pmb \psi }\notin V_c\), that is, there exists a nonzero \({\pmb \psi }_b \in V_c^\perp \) such that \({\pmb \psi }= {\pmb \psi }_a + {\pmb \psi }_b\) with \({\pmb \psi }_a \in V_c\). Since \({\pmb \psi }_a \in V_c\), we already know that \([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }_a = 0\). Hence, it holds that \([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}] {\pmb \psi }= {\pmb \psi }_b - P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}{\pmb \psi }_b \ne 0\), since \({\pmb \psi }_b \ne 0\) lies in \(V_c^\perp \) while \(P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}{\pmb \psi }_b\) lies in \(V_c\).
\(\square \)
To continue our analysis we construct a matrix representation of the operator \({{\mathbb {T}}}\) defined in (17). From now on, for the sake of simplicity, we set \(n_1=1\) and \(n_2=0\). We further consider the following assumptions:

(H1)
\(V_c\) is the span of m linearly independent functions \(\{ \mathbf{p }_k\}_{k=1}^m \subset {{\mathcal {X}}}\), which are used to define the operators P and R as in (13).

(H2)
The operators \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) have the same linearly independent eigenvectors \(\{ {\pmb \psi }_k \}_{k=1}^\infty \). The corresponding eigenvalues of \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are denoted by \({{\widetilde{\lambda }}}_k\) and \(\lambda _k\), respectively.

(H3)
The eigenvalues \(\lambda _k\) satisfy \(\lambda _k \in (0,1)\) and \(\lambda _k\le \lambda _{k-1}\) for all k.

(H4)
There exists an index \({\widetilde{m}}\ge m\) such that \(V_c \subseteq \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}.\)
Remark 1
Notice that the hypothesis (H2) is valid in the context of our S2S method, where the operators \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) satisfy the relation \({{\mathbb {A}}}= {{\mathbb {I}}}-{{\mathbb {G}}}\). Hence, they have the same eigenvectors. Moreover, the hypothesis (H3) is satisfied if \({{\mathbb {G}}}\) corresponds to a classical parallel Schwarz method, as in the case of our S2S method. The classical damped Jacobi method is another important instance that satisfies (H2) and (H3). Moreover, if one supposes that the vectors \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) in (H2) are orthogonal or \({\mathcal {X}}\) is finite dimensional, then (H2) and (H4) imply \(V_c \cap \mathrm{span}\,\{ {\pmb \psi }_k \}_{k={\widetilde{m}}+1}^\infty =\{0\}.\)
Let us now construct a matrix representation of the operator \({{\mathbb {T}}}\). Since \(V_c \subseteq \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\), the structure of \({{\mathbb {T}}}\) implies that the set \(\mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\) is invariant, that is, \({{\mathbb {T}}}{\pmb \psi }_j \in \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\) for any \(j=1,\ldots ,{\widetilde{m}}\). Similarly, a direct calculation reveals that \({{\mathbb {T}}}{\pmb \psi }_j = \lambda _j {\pmb \psi }_j - \sum _{\ell =1}^{{\widetilde{m}}} x_{j-{\widetilde{m}},\ell } {\pmb \psi }_\ell \) for \(j \ge {\widetilde{m}}+1\) and some coefficients \(x_{j-{\widetilde{m}},\ell }\). Therefore, for any \({\pmb \psi }_j\) there exist at most \({\widetilde{m}}+1\) nonzero coefficients \({\widetilde{t}}_{j,\ell }\) such that \({{\mathbb {T}}}{\pmb \psi }_j = {\widetilde{t}}_{j,j} {\pmb \psi }_j + \sum _{\ell =1,\ell \ne j}^{{\widetilde{m}}} {\widetilde{t}}_{j,\ell }{\pmb \psi }_\ell \). If we order the coefficients \({\widetilde{t}}_{j,\ell }\) into an infinite matrix denoted by \({\widetilde{T}}\), we obtain that
The infinite matrix \({\widetilde{T}}\) can be regarded as a linear operator acting on the space of sequences. The matrix representation (20) turns out to be very useful for analyzing the convergence properties of the operator \({{\mathbb {T}}}\). Now, we can compute by an induction argument that
If the matrix \({\widetilde{T}}_{{\widetilde{m}}}\) is nilpotent with degree \(q \in {\mathbb {N}}_+\), that is \({\widetilde{T}}_{{\widetilde{m}}}^p=0\) for all \(p\ge q\), then we get for \(n>q\) that
Thus, by defining \(X_{q} := \sum _{j=1}^{q} \Lambda _{{\widetilde{m}}}^{-j}X {\widetilde{T}}_{{\widetilde{m}}}^{j-1}\), one gets for \(n>q\) that
Let us begin with a case where the linear operators \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are bounded and self-adjoint, and the functions \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) form an orthonormal basis with respect to an inner product \(\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}}\) (not necessarily equal to \(\langle \cdot , \cdot \rangle \)) such that \(({{\mathcal {X}}},\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}})\) is a Hilbert space. We denote by \(\Vert \cdot \Vert _{{{\mathcal {X}}}}\) the norm induced by \(\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}}\), and by
the corresponding operator norm. Notice that, since \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are bounded, \({{\mathbb {T}}}\) is bounded as well. Thus, we can study the asymptotic convergence factor \(\rho ({{\mathbb {T}}})\) defined as \(\lim \limits _{n\rightarrow \infty } \Vert {{\mathbb {T}}}^n\Vert _{{{\mathcal {X}}}}^{1/n}=\rho ({{\mathbb {T}}})\); see, e.g., [44, Chapter 17]. Since we assumed that \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) are orthonormal with respect to \(\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}}\), a direct calculation allows one to prove that \(\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}\), where
Hence, we obtain \(\rho ({{\mathbb {T}}}) = \lim \limits _{n\rightarrow \infty } \Vert {{\mathbb {T}}}^n\Vert _{{{\mathcal {X}}}}^{1/n} = \lim \limits _{n\rightarrow \infty } \Vert {\widetilde{T}}^n\Vert _{\ell ^2}^{1/n}\). Notice that since \({{\mathbb {T}}}\) is a bounded operator and \(\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}\), the operator \({\widetilde{T}}\) is bounded in the \(\Vert \cdot \Vert _{\ell ^2}\) norm. Thus, the submatrices X, \(\Lambda _{{\widetilde{m}}}\) and \({\widetilde{T}}_{{\widetilde{m}}}\) are bounded in the \(\Vert \cdot \Vert _{\ell ^2}\) norm as well. Therefore, \(T_a\) and \(T_b\) are also bounded in the \(\Vert \cdot \Vert _{\ell ^2}\) norm. Thus, Eq. (22) allows us to estimate \(\rho ({{\mathbb {T}}})\):
Now, recalling (24), one obtains for \(n>q\) that
where \(\mathbf{e}_{{\widetilde{m}}+1} \in \ell ^2\) is the \(({\widetilde{m}}+1)\)-th canonical vector. This estimate implies that \(\rho ({{\mathbb {T}}}) = \lim \limits _{n\rightarrow \infty } \Vert {\widetilde{T}}^n\Vert _{\ell ^2}^{1/n} \ge \lambda _{{\widetilde{m}}+1}\), and thus \(\rho ({{\mathbb {T}}}) = \lambda _{{\widetilde{m}}+1}\). Using Theorem 2, it is possible to see that the matrix \({\widetilde{T}}_{{\widetilde{m}}}\) is nilpotent with degree \(q=1\) if \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\). In this case \(\lambda _{{\widetilde{m}}+1}=\lambda _{m+1}\). We summarize these findings in the next theorem.
Theorem 3
(Convergence of a two-level method) Let the hypotheses (H1), (H2), (H3) and (H4) be satisfied, let \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) be self-adjoint, and assume that the functions \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) form an orthonormal basis with respect to an inner product \(\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}}\) such that \(({{\mathcal {X}}},\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}})\) is a Hilbert space. If \({\widetilde{T}}_{{\widetilde{m}}}\) is nilpotent (e.g., if \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\)), then
where \(\Vert \cdot \Vert _{{{\mathcal {X}}}}\) is the operator norm defined in (23).
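In finite dimensions, the statement of Theorem 3 can be reproduced numerically. The following sketch is an illustration with hypothetical data: a random symmetric \({{\mathbb {G}}}\) with eigenvalues in (0,1), \({{\mathbb {A}}}={{\mathbb {I}}}-{{\mathbb {G}}}\), \(n_1=1\), \(n_2=0\), and the spectral coarse space spanned by the m dominant eigenvectors, for which \(\rho ({{\mathbb {T}}})=\lambda _{m+1}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional sketch of Theorem 3 (an illustration, not the proof):
# G symmetric with orthonormal eigenvectors psi_k and eigenvalues in (0,1),
# A = I - G, spectral coarse space V_c = span{psi_1,...,psi_m}.
N, m = 12, 4
lam = np.sort(rng.uniform(0.05, 0.95, N))[::-1]      # lambda_1 >= ... >= lambda_N
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))     # orthonormal eigenvectors
G = Q @ np.diag(lam) @ Q.T
A = np.eye(N) - G

P = Q[:, :m]                     # coarse functions = dominant eigenvectors
R = P.T
Ac = R @ A @ P
T = (np.eye(N) - P @ np.linalg.inv(Ac) @ R @ A) @ G  # n1 = 1, n2 = 0

rho = max(abs(np.linalg.eigvals(T)))
print(rho, lam[m])               # spectral radius equals lambda_{m+1}
```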
In the case of a spectral coarse space, the expression of \({\widetilde{T}}\) in (20) simplifies. The following result holds.
Theorem 4
(The matrix \({\widetilde{T}}\) for self-adjoint \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) and a spectral coarse space) Let the hypotheses (H1), (H2), (H3) and (H4) be satisfied. If the functions \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) form an orthonormal basis for \(({{\mathcal {X}}},\langle \cdot , \cdot \rangle _{{{\mathcal {X}}}})\), the operators \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are self-adjoint, and \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\) with \({\widetilde{m}}=m\), then
where \(\Lambda _{{\widetilde{m}}}\) is defined in (20).
Proof
Since \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\), Theorem 2 implies that \({\widetilde{T}}_{{\widetilde{m}}}=0\). Thus, to obtain the result, it is sufficient to show that all the components of the submatrix X (see (20)) are zero. These components are \(x_{j,\ell } = {\widetilde{t}}_{j,\ell }\) for \(j>{\widetilde{m}}\) and \(\ell \le {\widetilde{m}}\). Thus, we assume that \(j>{\widetilde{m}}\) and \(\ell \le {\widetilde{m}}\), recall the formula \({{\mathbb {T}}}{\pmb \psi }_j = {\widetilde{t}}_{j,j} {\pmb \psi }_j + \sum _{k=1,k\ne j}^{{\widetilde{m}}} {\widetilde{t}}_{j,k}{\pmb \psi }_k\), and multiply it by \({\pmb \psi }_\ell \) to obtain \({\widetilde{t}}_{j,\ell } = \langle {{\mathbb {T}}}{\pmb \psi }_j , {\pmb \psi }_\ell \rangle _{{{\mathcal {X}}}}\). Since \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are self-adjoint, one obtains by a direct calculation that \([ {{\mathbb {I}}}- P {{\mathbb {A}}}_c^{-1} R {{\mathbb {A}}}]^* = [ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ]\). Using this property and recalling the structure of \({{\mathbb {T}}}\), we can compute
Now, since \([ {{\mathbb {I}}}- {{\mathbb {A}}}P {{\mathbb {A}}}_c^{-1} R ] {\pmb \psi }_\ell \in \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{{\widetilde{m}}}\) as \(\ell \le {\widetilde{m}}\), the orthogonality of the functions \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) and the hypothesis (H4) imply that \({\widetilde{t}}_{j,\ell }=0\). Hence, the result follows. \(\square \)
Theorem 4 implies directly that
Let us now assume that \({{\mathbb {A}}}\) is positive definite, and thus there exists a unique positive square root operator \({{\mathbb {A}}}^{1/2}\) such that \({{\mathbb {A}}}^{1/2}{\pmb \psi }_j={{\widetilde{\lambda }}}_j^{1/2}{\pmb \psi }_j\); see [45, Theorem 6.6.4]. A straightforward calculation leads to \(\Vert S\Vert _{{{\mathbb {A}}}} = \Vert {{\mathbb {A}}}^{1/2} S {{\mathbb {A}}}^{-1/2} \Vert _{{{\mathcal {X}}}}\) (see, e.g., [46, Section C.1.3] for a finite-dimensional matrix counterpart). Notice that, as for \({{\mathbb {T}}}\) and \({\widetilde{T}}\), we can obtain the matrix representation \({\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \) of \({{\mathbb {A}}}^{1/2} {{\mathbb {T}}}{{\mathbb {A}}}^{-1/2}\), where \({\widetilde{T}}\) is defined in (20) and \({\widetilde{\Lambda }}=\mathrm{diag}\, ({{\widetilde{\lambda }}}_{1},{{\widetilde{\lambda }}}_{2},\ldots )\). Thus, as for \(\Vert {{\mathbb {T}}}\Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{T}}\Vert _{\ell ^2}\), one can prove that \(\Vert {{\mathbb {A}}}^{1/2} {{\mathbb {T}}}{{\mathbb {A}}}^{-1/2} \Vert _{{{\mathcal {X}}}} = \Vert {\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \Vert _{\ell ^2}\). Hence, we get \(\Vert {{\mathbb {T}}}\Vert _{{{\mathbb {A}}}} = \Vert {\widetilde{\Lambda }}^{1/2} {\widetilde{T}}{\widetilde{\Lambda }}^{-1/2} \Vert _{\ell ^2}\). Now, if it holds that \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\), then Theorem 4 implies that
It has been proved in [38, Theorem 5.5] that this result is optimal in the sense that, if \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\) are symmetric and positive (semi)definite, then the coarse space \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\) minimizes the energy norm of the two-level operator \({{\mathbb {T}}}\). Clearly, if \({{\mathbb {A}}}\) has both positive and negative eigenvalues (even though it remains symmetric), this result is no longer valid. In this case, as we are going to see in Theorem 6, the coarse space \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\) is not necessarily (asymptotically) optimal.
The situation is very different if the functions \(\{ {\pmb \psi }_k \}_{k=1}^\infty \) are not orthogonal and \({{\mathbb {A}}}\) is not symmetric. To study this case, we work in a finite-dimensional setting and assume that \({{\mathcal {X}}}= {\mathbb {C}}^N = \mathrm{span}\, \{ {\pmb \psi }_k \}_{k=1}^N\). Thus, both \({{\mathbb {T}}}\) and \({\widetilde{T}}\) are matrices in \({\mathbb {C}}^{N \times N}\) and it holds that \({{\mathbb {T}}}V = V {\widetilde{T}}^\top \), where \(V=[{\pmb \psi }_1,\ldots ,{\pmb \psi }_N]\). This means that \({{\mathbb {T}}}\) and \({\widetilde{T}}\) are similar matrices and, thus, have the same spectrum. Hence, using Theorem 2 we obtain a finite-dimensional counterpart of Theorem 3, which does not require the orthogonality of \(\{ {\pmb \psi }_k \}_{k=1}^N\).
Theorem 5
(Convergence of a two-level method in finite dimensions) Assume that \({{\mathcal {X}}}= {{\mathbb {C}}}^N\) and let the hypotheses (H1), (H2), (H3) and (H4) be satisfied. If \(V_c = \mathrm{span}\,\{ {\pmb \psi }_k \}_{k=1}^{m}\) (with \(m={\widetilde{m}}<N\)), then
The coarse space \(V_c = \mathrm{span} \, \{ {\pmb \psi }_k \}_{k=1}^{m}\) is not necessarily (asymptotically) optimal. A different choice can lead to better asymptotic convergence or even to a divergent two-level method. To show this, we consider an analysis based on the perturbation of functions belonging to the coarse space \(V_c = \mathrm{span} \, \{ {\pmb \psi }_k \}_{k=1}^{m}\). We have seen in Theorem 2 that an eigenvector of \({{\mathbb {G}}}\) is in the kernel of the two-level operator \({{\mathbb {T}}}\) if and only if it belongs to \(V_c\). Assume that the coarse space cannot represent exactly one eigenvector \({\pmb \psi }\) of \({{\mathbb {G}}}\). How is the convergence of the method affected? Let us perturb the coarse space \(V_c\) using the eigenvector \({\pmb \psi }_{m+1}\), that is, \(V_c(\varepsilon ) := \mathrm {span}\, \{ {\pmb \psi }_j + \varepsilon \, {\pmb \psi }_{m+1} \}_{j=1}^m\). Clearly, \(\text {dim}\, V_c(\varepsilon ) = m\) for any \(\varepsilon \in {\mathbb {R}}\). In this case, (21) holds with \({\widetilde{m}}= m+1\) and \({\widetilde{T}}\in {\mathbb {C}}^{N \times N}\) becomes
where we make explicit the dependence on \(\varepsilon \). Notice that \(\varepsilon =0\) clearly leads to \({\widetilde{T}}_{{\widetilde{m}}}(0)=\mathrm{diag}\, (0,\ldots ,0,\lambda _{m+1}) \in {\mathbb {C}}^{{\widetilde{m}}\times {\widetilde{m}}}\), and we are back to the unperturbed case with \({\widetilde{T}}(0)={\widetilde{T}}\) having spectrum \(\{0,\lambda _{m+1},\ldots ,\lambda _{N}\}\). Now, notice that \(\min _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) \le \rho ({\widetilde{T}}(0)) = \lambda _{m+1} \). Thus, it is natural to ask: is this inequality strict? Can one find an \({\widetilde{\varepsilon }}\ne 0\) such that \(\rho ({\widetilde{T}}({\widetilde{\varepsilon }}))=\min _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon ))<\rho ({\widetilde{T}}(0))\) holds? If the answer is positive, then we can conclude that choosing the coarse vectors equal to the dominant eigenvectors of \({{\mathbb {G}}}\) is not an optimal choice. Moreover, one could ask the opposite question: can one find a perturbation of the eigenvectors that leads to a divergent method (\(\rho ({\widetilde{T}}(\varepsilon ))>1\))? The next key result provides precise answers to these questions in the case \(m=1\).
Theorem 6
(Perturbation of \(V_c\)) Let \(({\pmb \psi }_1,\lambda _1)\), \(({\pmb \psi }_2,\lambda _2)\) and \(({\pmb \psi }_3,\lambda _3)\) be three eigenpairs of \({{\mathbb {G}}}\), \({{\mathbb {G}}}{\pmb \psi }_j = \lambda _j {\pmb \psi }_j\), such that \(0<\lambda _3<\lambda _2\le \lambda _1\), \(\Vert {\pmb \psi }_j \Vert _2 =1\), \(j=1,2\), and denote by \({\widetilde{\lambda }}_j\) the eigenvalues of \({{\mathbb {A}}}\) corresponding to \({\pmb \psi }_j\). Assume that both \(\lambda _j\) and \({{\widetilde{\lambda }}}_j\) are real for \(j=1,2\) and \({\widetilde{\lambda }}_1{\widetilde{\lambda }}_2>0\).^{Footnote 5} Define \(V_c := \mathrm {span}\,\{ {\pmb \psi }_1 + \varepsilon {\pmb \psi }_2 \}\) with \(\varepsilon \in {\mathbb {R}}\), and \(\gamma := \langle {\pmb \psi }_1 , {\pmb \psi }_2 \rangle \in [-1,1]\). Then

(A)
The spectral radius of \({\widetilde{T}}(\varepsilon )\) is \(\rho ({\widetilde{T}}(\varepsilon ))=\max \{ \lambda (\varepsilon ,\gamma ), \lambda _3 \}\), where
$$\begin{aligned} \lambda (\varepsilon ,\gamma ) = \frac{\lambda _1 {\widetilde{\lambda }}_2 \varepsilon ^2 + \gamma (\lambda _1 {\widetilde{\lambda }}_2 + \lambda _2 {\widetilde{\lambda }}_1)\varepsilon + \lambda _2 {\widetilde{\lambda }}_1}{{\widetilde{\lambda }}_2 \varepsilon ^2 + \gamma ({\widetilde{\lambda }}_1+{\widetilde{\lambda }}_2)\varepsilon + {\widetilde{\lambda }}_1}. \end{aligned}$$(26) 
(B)
Let \(\gamma =0\). If \(\lambda _1>\lambda _2>0\) or \(0>\lambda _2>\lambda _1\), then \(\min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) = \rho ({\widetilde{T}}(0))\).

(C)
Let \(\gamma =0\). If \(\lambda _2>0>\lambda _1\) or \(\lambda _1>0>\lambda _2\), then there exists an \({\widetilde{\varepsilon }}\ne 0\) such that \(\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = \lambda _3= \min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))\).

(D)
Let \(\gamma \ne 0\). If \(\lambda _1>\lambda _2>0\) or \(0>\lambda _2>\lambda _1\), then there exists an \({\widetilde{\varepsilon }}\ne 0\) such that \(\lambda ({\widetilde{\varepsilon }},\gamma )<\lambda _2\) and hence \(\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = \max \{\lambda ({\widetilde{\varepsilon }},\gamma ),\lambda _3\} < \rho ({\widetilde{T}}(0))\).

(E)
Let \(\gamma \ne 0\). If \(\lambda _2>0>\lambda _1\) or \(\lambda _1>0>\lambda _2\), then there exists an \({\widetilde{\varepsilon }}\ne 0\) such that \(\rho ({\widetilde{T}}({\widetilde{\varepsilon }})) = \lambda _3= \min \limits _{\varepsilon \in {\mathbb {R}}} \rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))\).

(F)
The map \(\gamma \mapsto \lambda (\varepsilon ,\gamma )\) has a vertical asymptote at \(\gamma ^*(\varepsilon )=-\frac{\varepsilon ^2 {{\widetilde{\lambda }}}_2 + {{\widetilde{\lambda }}}_1}{\varepsilon ({{\widetilde{\lambda }}}_1+{{\widetilde{\lambda }}}_2)}\) for any \(\varepsilon ^2 \ne - \frac{\lambda _2 {{\widetilde{\lambda }}}_1({{\widetilde{\lambda }}}_1+{{\widetilde{\lambda }}}_2)}{\lambda _1{{\widetilde{\lambda }}}_2^2+{{\widetilde{\lambda }}}_1^2 \lambda _2}\). Thus there exists a neighborhood \(I(\gamma ^*)\) such that \(\forall \gamma \in I(\gamma ^*)\), \(\lambda (\varepsilon ,\gamma )\notin (-1,1)\).
Proof
Since \(m=1\), a direct calculation allows us to compute the matrix
where \(g={\widetilde{\lambda }}_1 + \varepsilon \gamma [ {\widetilde{\lambda }}_1+{\widetilde{\lambda }}_2] + \varepsilon ^2 {\widetilde{\lambda }}_2\). The spectrum of this matrix is \(\{0, \lambda (\varepsilon ,\gamma )\}\), with \(\lambda (\varepsilon ,\gamma )\) given in (26). Hence, point \(\mathrm{(A)}\) follows recalling (25).
To prove points \(\mathrm{(B)}\), \(\mathrm{(C)}\), \(\mathrm{(D)}\) and \(\mathrm{(E)}\) we use some properties of the map \(\varepsilon \mapsto \lambda (\varepsilon ,\gamma )\). First, we notice that
Second, the derivative of \(\lambda (\varepsilon ,\gamma )\) with respect to \(\varepsilon \) is
Because of \(\lambda (-\varepsilon ,-\gamma )=\lambda (\varepsilon ,\gamma )\) in (27), we can assume without loss of generality that \(\gamma \ge 0\).
Let us now consider the case \(\gamma =0\). In this case, the derivative (28) becomes \(\frac{d \lambda (\varepsilon ,0)}{d \varepsilon } = \frac{2(\lambda _1-\lambda _2){\widetilde{\lambda }}_1{\widetilde{\lambda }}_2 \,\varepsilon }{({\widetilde{\lambda }}_2 \varepsilon ^2+{\widetilde{\lambda }}_1)^2}\). Moreover, since \(\lambda (-\varepsilon ,0)=\lambda (\varepsilon ,0)\) we can assume that \(\varepsilon \ge 0\).
Case \(\mathrm{(B)}\). If \(\lambda _1>\lambda _2>0\), then \(\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }>0\) for all \(\varepsilon >0\). Hence, \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) is monotonically increasing, \(\lambda (\varepsilon ,0) \ge 0\) for all \(\varepsilon >0\) and, thus, the minimum of \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) is attained at \(\varepsilon = 0\) with \(\lambda (0,0)=\lambda _2>\lambda _3\), and the result follows. Analogously, if \(0>\lambda _2>\lambda _1\), then \(\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }<0\) for all \(\varepsilon >0\). Hence, \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) is monotonically decreasing, \(\lambda (\varepsilon ,0) < 0\) for all \(\varepsilon >0\) and the minimum of \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) is attained at \(\varepsilon = 0\).
Case \(\mathrm{(C)}\). If \(\lambda _1>0>\lambda _2\), then \(\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }>0\) for all \(\varepsilon >0\). Hence, \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) is monotonically increasing and such that \(\lambda (0,0)=\lambda _2<0\) and \(\lim _{\varepsilon \rightarrow \infty } \lambda (\varepsilon ,0) = \lambda _1>0\). Thus, the continuity of the map \(\varepsilon \mapsto \lambda (\varepsilon ,0)\) guarantees the existence of an \({\widetilde{\varepsilon }}>0\) such that \(\lambda ({\widetilde{\varepsilon }},0)=0\). Analogously, if \(\lambda _2>0>\lambda _1\), then \(\frac{d \lambda (\varepsilon ,0)}{d \varepsilon }<0\) for all \(\varepsilon >0\) and the result follows by the continuity of \(\varepsilon \mapsto \lambda (\varepsilon ,0)\).
Let us now consider the case \(\gamma >0\). The sign of \(\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }\) is affected by the term \(f(\varepsilon ):=\varepsilon ^2+2\varepsilon /\gamma +1\), which appears in the numerator of (28). The function \(f(\varepsilon )\) is strictly convex, attains its minimum at \(\varepsilon =-\frac{1}{\gamma }\), and is negative in \(({\bar{\varepsilon }}_1,{\bar{\varepsilon }}_2)\) and positive in \((-\infty ,{\bar{\varepsilon }}_1)\cup ({\bar{\varepsilon }}_2,\infty )\), with \({\bar{\varepsilon }}_{1,2}=\frac{-1\mp \sqrt{1-\gamma ^2}}{\gamma }\).
Case \(\mathrm{(D)}\). If \(\lambda _1>\lambda _2>0\), then \(\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }>0\) for all \(\varepsilon > {\bar{\varepsilon }}_2\). Hence, \(\frac{d \lambda (0,\gamma )}{d \varepsilon }>0\), which means that there exists an \({\widetilde{\varepsilon }}<0\) such that \(\lambda ({\widetilde{\varepsilon }},\gamma )<\lambda (0,\gamma )=\lambda _2\). The case \(0>\lambda _2>\lambda _1\) follows analogously.
Case \(\mathrm{(E)}\). If \(\lambda _1>0>\lambda _2\), then \(\frac{d \lambda (\varepsilon ,\gamma )}{d \varepsilon }>0\) for all \(\varepsilon >0\). Hence, by the continuity of \(\varepsilon \mapsto \lambda (\varepsilon ,\gamma )\) (for \(\varepsilon \ge 0\)) there exists an \({\widetilde{\varepsilon }}>0\) such that \(\lambda ({\widetilde{\varepsilon }},\gamma )=0\). The case \(\lambda _2>0>\lambda _1\) follows analogously.
Case \(\mathrm{(F)}\). It is sufficient to observe that the denominator of \(\lambda (\varepsilon ,\gamma )\) is equal to zero for \(\gamma =\gamma ^*\), while the numerator is nonzero and finite. Hence, \(\lim _{\gamma \rightarrow \gamma ^*}\lambda (\varepsilon ,\gamma )=+\infty \). As the map \(\gamma \mapsto \lambda (\varepsilon ,\gamma )\) is continuous in \((-\infty ,\gamma ^*) \cup (\gamma ^*,+\infty )\), the result follows. \(\square \)
Theorem 6 and its proof show that, if the two eigenvalues \(\lambda _1\) and \(\lambda _2\) have opposite signs (even if they are equal in modulus), then it is always possible to find an \(\varepsilon \ne 0\) such that the coarse space \(V_c := \mathrm {span}\{ {\pmb \psi }_1 + \varepsilon {\pmb \psi }_2 \}\) leads to a faster method than \(V_c := \mathrm {span}\{ {\pmb \psi }_1 \}\), even though both are one-dimensional subspaces. In addition, if \(\lambda _3 \ne 0\) the former leads to a two-level operator \({{\mathbb {T}}}\) with a larger kernel than the one corresponding to the latter. The situation is completely different if \(\lambda _1\) and \(\lambda _2\) have the same sign. In this case, the orthogonality parameter \(\gamma \) is crucial. If \({\pmb \psi }_1\) and \({\pmb \psi }_2\) are orthogonal (\(\gamma =0\)), then one cannot improve \(V_c:= \mathrm {span}\{ {\pmb \psi }_1 \}\) by a simple perturbation using \({\pmb \psi }_2\). However, if \({\pmb \psi }_1\) and \({\pmb \psi }_2\) are not orthogonal (\(\gamma \ne 0\)), then one can still find an \(\varepsilon \ne 0\) such that \(\rho ({\widetilde{T}}(\varepsilon )) < \rho ({\widetilde{T}}(0))\).
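Formula (26) is straightforward to verify numerically. The following sketch is an illustration with hypothetical values (\(\lambda _1=0.9\), \(\lambda _2=0.8\), \(\lambda _3=0.3\), \(\gamma =0.4\), \(\varepsilon =0.5\), and \({{\mathbb {A}}}={{\mathbb {I}}}-{{\mathbb {G}}}\), so that \({\widetilde{\lambda }}_j=1-\lambda _j\)): the spectral radius of the two-level matrix coincides with \(\max \{\lambda (\varepsilon ,\gamma ),\lambda _3\}\) from point (A).

```python
import numpy as np

# Numerical check of formula (26) with hypothetical values:
# lambda = (0.9, 0.8, 0.3), gamma = 0.4, epsilon = 0.5, and A = I - G.
lam = np.array([0.9, 0.8, 0.3])
lt = 1.0 - lam                                   # eigenvalues of A
gamma, eps = 0.4, 0.5

# Non-orthogonal unit eigenvectors with <psi_1, psi_2> = gamma.
V = np.array([[1.0, gamma, 0.0],
              [0.0, np.sqrt(1 - gamma**2), 0.0],
              [0.0, 0.0, 1.0]])
G = V @ np.diag(lam) @ np.linalg.inv(V)
A = np.eye(3) - G

p = (V[:, 0] + eps * V[:, 1]).reshape(-1, 1)     # V_c = span{psi_1 + eps*psi_2}
Ac = p.T @ A @ p
T = (np.eye(3) - p @ np.linalg.inv(Ac) @ p.T @ A) @ G

# lambda(eps, gamma) as in (26)
lam_eps = (lam[0]*lt[1]*eps**2 + gamma*(lam[0]*lt[1] + lam[1]*lt[0])*eps
           + lam[1]*lt[0]) / (lt[1]*eps**2 + gamma*(lt[0] + lt[1])*eps + lt[0])
rho = max(abs(np.linalg.eigvals(T)))
print(rho, max(abs(lam_eps), lam[2]))            # the two values coincide
```

With these values \(\lambda (\varepsilon ,\gamma )>\lambda _2\), illustrating that a perturbation with the wrong sign of \(\varepsilon \) can also worsen the contraction factor, in agreement with case (D).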
Notice that, if \(\lambda _3=\lambda _2\), Theorem 6 shows that one cannot obtain a \(\rho ({\widetilde{T}})\) smaller than \(\lambda _2\) using a one-dimensional perturbation. However, if one optimizes the entire coarse space \(V_c\) (keeping m fixed), then one can find coarse spaces leading to a better contraction factor of the two-level iteration, even though \(\lambda _3=\lambda _2\).
Theorem 6 has another important implication. If the eigenvectors \({\pmb \psi }_j\) are not orthogonal and one defines the coarse space \(V_c\) using approximations of \({\pmb \psi }_j\), then the two-level method is not necessarily convergent. Even though the one-level iteration characterized by \({{\mathbb {G}}}\) is convergent, a wrong choice of coarse functions can lead to a divergent iteration. This phenomenon is observed numerically in Sect. 6. However, the analysis performed in Theorem 6 suggests a remedy to this situation.
Corollary 1
(Correction of perturbed coarse space functions) Let the hypotheses of Theorem 6 be satisfied. For any \(r \in {\mathbb {N}}\) it holds that \({{\mathbb {G}}}^r ({\pmb \psi }_1 + \varepsilon {\pmb \psi }_2) = \lambda _1^r \left( {\pmb \psi }_1 + \varepsilon _r {\pmb \psi }_2 \right) \),
with \(\varepsilon _r = \frac{\lambda _2^r}{\lambda _1^r}\varepsilon \). Moreover, if the coarse space \(V_c\) is replaced by \({{\mathbb {G}}}^r V_c\) (hence \(\varepsilon \) is replaced by \(\varepsilon _r \)), there exists an \({\widehat{r}} \in {\mathbb {N}}\) such that \(\rho (\varepsilon _{{\widehat{r}}},\gamma )<1\) for any \(\gamma \in [1,1]\).
Proof
By computing
one obtains the first statement. The second statement follows from Theorem 6, which guarantees that \(\rho (0,\gamma )=\lambda _2<1\). Since the map \(\varepsilon \mapsto \rho (\varepsilon ,\gamma )\) is continuous and \(\lambda _2/\lambda _1<1\), there exists a sufficiently large \(r\in {\mathbb {N}}\) such that \(\rho (\varepsilon _r,\gamma )<1\) holds. \(\square \)
Corollary 1 has the following important consequence. If some “bad-convergent” eigenvectors of \({{\mathbb {G}}}\) are not sufficiently well represented by the coarse space functions, one can apply r smoothing steps to the coarse space functions. The new space \({{\mathbb {G}}}^r V_c\) is a better approximation of the “bad-convergent” eigenfunctions of \({{\mathbb {G}}}\). Therefore, one can replace \(V_c\) by \({{\mathbb {G}}}^r V_c\) to improve the convergence properties of the two-level method.
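The smoothing mechanism of Corollary 1 can be sketched in a few lines. The following illustration uses hypothetical values \(\lambda _1=0.9\), \(\lambda _2=0.6\), \(\varepsilon =0.5\) and a diagonal \({{\mathbb {G}}}\) (so that \({\pmb \psi }_j=\mathbf{e}_j\)): applying \({{\mathbb {G}}}^r\) to the perturbed coarse function shrinks the effective perturbation to \(\varepsilon _r = (\lambda _2/\lambda _1)^r \varepsilon \).

```python
import numpy as np

# Sketch of Corollary 1: smoothing the coarse function psi_1 + eps*psi_2 with
# G^r shrinks the perturbation to eps_r = (lambda_2/lambda_1)^r * eps.
# (Hypothetical values: lambda_1 = 0.9, lambda_2 = 0.6, eps = 0.5.)
lam1, lam2, eps = 0.9, 0.6, 0.5
G = np.diag([lam1, lam2])           # diagonal for simplicity: psi_j = e_j
v = np.array([1.0, eps])            # coordinates of psi_1 + eps*psi_2

r = 5
w = np.linalg.matrix_power(G, r) @ v
eps_r = w[1] / w[0]                 # effective perturbation after smoothing
print(eps_r, (lam2/lam1)**r * eps)  # the two values coincide
```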
4.3 Local Coarse Functions
In this section, we consider an operator \({{\mathbb {G}}}\) having the block form
and defined on the space \({{\mathcal {X}}}:= {\widehat{{\mathcal {X}}}}\times {\widehat{{\mathcal {X}}}}\), where \({\widehat{{\mathcal {X}}}}\) is a Hilbert space endowed with an inner product \(\langle \cdot , \cdot \rangle _{{\widehat{{\mathcal {X}}}}}\). The corresponding operator \({{\mathbb {A}}}\) is \({{\mathbb {A}}}= {{\mathbb {I}}}-{{\mathbb {G}}}\). Moreover, we assume that the operators \({{\mathbb {G}}}_j\), \(j=1,2\), have the same eigenvectors \(\{\psi _k\}_{k=1}^\infty \), forming an orthonormal basis of \({\widehat{{\mathcal {X}}}}\) with respect to \(\langle \cdot , \cdot \rangle _{{\widehat{{\mathcal {X}}}}}\). The eigenvalues of \({{\mathbb {G}}}_j\), for \(j=1,2\), are denoted by \(\theta _j(k)\). This is exactly the structure of the substructured domain decomposition problem introduced in Sect. 2 and corresponding to two subdomains, as the following examples show.
Example 3
Consider a rectangle \(\Omega :=(-L_1,L_2)\times (0,{\widetilde{L}})\), \({\widetilde{L}},L_1,L_2>0\), that is decomposed as \(\Omega =\Omega _1 \cup \Omega _2\) by the two overlapping subdomains \(\Omega _1:=(-L_1,\delta )\times (0,{\widetilde{L}})\) and \(\Omega _2:=(-\delta ,L_2)\times (0,{\widetilde{L}})\) for some \(0<\delta <\min (L_1,L_2)\). The two interfaces are \(\Gamma _1:=\{\delta \}\times (0,{\widetilde{L}})\) and \(\Gamma _2:=\{-\delta \}\times (0,{\widetilde{L}})\). If \({\mathcal {L}}=-\Delta \), then the Schwarz operators \({{\mathbb {G}}}_1\) and \({{\mathbb {G}}}_2\) are diagonalized by the sine-Fourier functions \(\psi _k(y)=\sin (k y \pi /{\widetilde{L}})\), for \(k=1,2,\ldots \) The eigenvalues of \({{\mathbb {G}}}_j\) are \(\theta _j(k)=\sinh \left( \frac{k\pi }{{\widetilde{L}}}(L_j-\delta )\right) / \sinh \left( \frac{k\pi }{{\widetilde{L}}}(L_j+\delta )\right) \), for \(j=1,2\); see, e.g., [1, 47].
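The contraction factors \(\theta _j(k)\) of Example 3 are easy to tabulate. The following sketch (an illustration with a hypothetical geometry: \(L_1=L_2=1\), \({\widetilde{L}}=1\), overlap parameter \(\delta =0.1\)) shows that they lie in (0,1) and decay monotonically in the frequency k, which is why a spectral coarse space should target the low frequencies.

```python
import numpy as np

# Contraction factors theta_j(k) from Example 3 (hypothetical geometry:
# L1 = L2 = 1, Ltilde = 1, overlap parameter delta = 0.1).
L = np.array([1.0, 1.0])
Lt, delta = 1.0, 0.1

k = np.arange(1, 21)
theta = [np.sinh(k*np.pi/Lt*(Lj - delta)) / np.sinh(k*np.pi/Lt*(Lj + delta))
         for Lj in L]

# The factors lie in (0,1) and decay monotonically in k.
print(theta[0][:3])
```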
Example 4
Consider a disc \(\Omega \) of radius r centered at the origin. One can decompose \(\Omega \) as the union of two overlapping subdomains \(\Omega _1\) and \(\Omega _2\), where \(\Omega _1\) is a disc of radius \(r_1 < r\) centered at the origin, and \(\Omega _2\) is an annulus of external radius equal to r and internal radius \(r_2 \in (0,r_1)\). If \({\mathcal {L}}=-\Delta +\eta \) with \(\eta >0\), then the two Schwarz operators \({{\mathbb {G}}}_1\) and \({{\mathbb {G}}}_2\) are diagonalized by periodic Fourier functions defined on circles; see, e.g., [48].
Now, we assume that \(V_c:=(\mathrm{span}\{\psi _1,\psi _2,\ldots ,\psi _m\})^2\subset {{\mathcal {X}}}\). Prolongation and restriction operators are given (as in (14)) by
\[ P\begin{bmatrix} \mathbf{v}\\ \mathbf{w} \end{bmatrix} = \begin{bmatrix} \sum _{j=1}^m v_j\psi _j\\ \sum _{j=1}^m w_j\psi _j \end{bmatrix}, \qquad R\begin{bmatrix} u\\ w \end{bmatrix} = \begin{bmatrix} \left( \langle \psi _j,u\rangle \right) _{j=1}^m\\ \left( \langle \psi _j,w\rangle \right) _{j=1}^m \end{bmatrix}, \tag{29} \]
for \(\mathbf{v},\mathbf{w}\in {\mathbb {R}}^m\) and \(u,w\in {\widehat{{\mathcal {X}}}}\).
The restriction of \({{\mathbb {A}}}\) onto the coarse space \(V_c\) is \({{\mathbb {A}}}_c=R{{\mathbb {A}}}P\). Notice that, since in this case \({{\mathbb {A}}}(V_c) \subseteq V_c\), Theorem 2 guarantees that the operator \({{\mathbb {A}}}_c\) is invertible. Now, we study the spectral properties of \({{\mathbb {T}}}\) defined in (17).
Theorem 7
(Convergence of the two-level method with local coarse space functions) Consider the coarse space \(V_c=(\mathrm{span}\{\psi _1,\psi _2,\ldots ,\psi _m\})^2\) and the operators P and R defined in (29). All pairs \((\psi _k,\psi _{\ell })\) with \(k,\ell \le m\) are in the kernel of the operator \({{\mathbb {T}}}\). Moreover, for any \(S \in {\mathcal {L}}({\mathcal {X}})\) denote \(\Vert S\Vert _{\mathrm{op}}:=\sup \limits _{\Vert \mathbf{v}\Vert _{\infty }=1}\Vert S\mathbf{v}\Vert _{\infty }\), where \(\Vert \mathbf{v}\Vert _{\infty }:=\max _{j=1,2} \Vert v_j\Vert \), with \(\mathbf{v}=(v_1,v_2)^\top \) and \(v_j\in {\widehat{{\mathcal {X}}}}\). If the eigenvalues \(\theta _{j}(k)\), \(j=1,2\), are, in absolute value, nonincreasing functions of k, then the spectral radius of \({{\mathbb {T}}}\), \(\rho ({{\mathbb {T}}}):=~\lim \limits _{n\rightarrow \infty }\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}}^{\frac{1}{n}}\), is given by
\[ \rho ({{\mathbb {T}}})=\pi ^{n_1+n_2}(m+1), \quad \text {where } \pi ^{n}(k):=\left( \theta _1(k)\theta _2(k)\right) ^{\frac{n}{2}}. \]
Proof
Let us suppose that both \(n_1\) and \(n_2\) are even. The other cases can be treated similarly. For \(n_1\) even we define \(\pi ^{n_1}(k):=\theta ^{\frac{n_1}{2}}_1(k)\theta ^{\frac{n_1}{2}}_2(k)\) and study the action of the operator \({{\mathbb {T}}}\) on a vector \(\left[ \psi _k, \psi _{\ell } \right] ^\top \):
We begin with the case \(k\le m\) and \(\ell \le m\). First, let us compute the action of the operator \(R{{\mathbb {A}}}{{\mathbb {G}}}^{n_1}\) on \(\left[ \psi _k, \psi _{\ell } \right] ^\top \). Since the operators \({{\mathbb {G}}}_j\) are diagonalized by the basis \(\{ \psi _k\}_k\), one obtains \({{\mathbb {G}}}^{n_1} \begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix}\). The action of \({{\mathbb {A}}}\) on \(\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top \) is
\[ {{\mathbb {A}}}\begin{bmatrix} \pi ^{n_1}(k)\psi _k\\ \pi ^{n_1}(\ell )\psi _{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1}(k)\psi _k-\theta _1(\ell )\pi ^{n_1}(\ell )\psi _{\ell }\\ \pi ^{n_1}(\ell )\psi _{\ell }-\theta _2(k)\pi ^{n_1}(k)\psi _k \end{bmatrix}. \]
Since \({{\mathbb {A}}}\) is invertible and has the form \({{\mathbb {A}}}={{\mathbb {I}}}-{{\mathbb {G}}}\), the eigenvalues \(\theta _j(k)\) must be different from one. Hence, the product \({{\mathbb {A}}}\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top \ne 0\). Now, the application of the restriction operator R on \({{\mathbb {A}}}\left[ \pi ^{n_1}(k)\psi _k, \pi ^{n_1}(\ell )\psi _{\ell } \right] ^\top \) gives us
where \(\mathbf{e}_{k}\) and \(\mathbf{e}_{\ell }\) are canonical vectors in \({\mathbb {R}}^m\) and \(\Lambda := \begin{bmatrix} I & -\theta _1(\ell )I\\ -\theta _2(k)I & I \end{bmatrix}\), with I the \(m\times m\) identity matrix. We have then obtained
Now, by computing
one obtains the action of \({{\mathbb {A}}}_c^{-1}\) on \(\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}\), that is
\[ {{\mathbb {A}}}_c^{-1}\Lambda \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1}(k)\mathbf{e}_{k}\\ \pi ^{n_1}(\ell )\mathbf{e}_{\ell } \end{bmatrix}. \]
This means that all the pairs \((\psi _k,\psi _{\ell })\) with \(k\le m\) and \(\ell \le m\) are in the kernel of \({{\mathbb {T}}}\). The result for \(n_1\) odd follows by similar calculations.
Next, let us consider the case \(k>m\) and \(\ell \le m\). Recalling that the basis \(\{\psi _k\}_k\) is orthonormal, one has
Similarly as before, we compute
which implies that
Thus, we have
\[ {{\mathbb {T}}}\begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1+n_2}(k)\psi _k\\ 0 \end{bmatrix}. \]
For the remaining case \(k>m\) and \(\ell >m\), the same arguments as before imply that
\[ {{\mathbb {T}}}\begin{bmatrix} \psi _k\\ \psi _{\ell } \end{bmatrix} = \begin{bmatrix} \pi ^{n_1+n_2}(k)\psi _k\\ \pi ^{n_1+n_2}(\ell )\psi _{\ell } \end{bmatrix}. \]
We can now study the norm of \({{\mathbb {T}}}\). To do so, we first use (32), (33) and (34), and that \(\{ \psi _k,\psi _{\ell }\}_{k,\ell }\) is a basis of \({{\mathcal {X}}}\), to write
for any \(\mathbf{v} \in {{\mathcal {X}}}\). Since \(\theta _1(k)\) and \(\theta _2(k)\) are, in absolute value, nonincreasing functions of k, \(\pi (k)\) is also, in absolute value, a nonincreasing function of k. Therefore, using that the basis \(\{ \psi _k,\psi _{\ell }\}_{k,\ell }\) is orthonormal, we get
\[ \Vert {{\mathbb {T}}}\mathbf{v}\Vert _{\infty }\le \pi ^{n_1+n_2}(m+1)\Vert \mathbf{v}\Vert _{\infty }. \]
This upper bound is achieved at \(\mathbf{v}=[\psi _{m+1} , 0]^\top \). Hence, \(\Vert {{\mathbb {T}}}\Vert _{\mathrm{op}} = \pi ^{n_1+n_2}(m+1)\). Now, a similar direct calculation leads to \(\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}} = \pi ^{n(n_1+n_2)}(m+1)\), which implies that \(\rho ({{\mathbb {T}}})=\lim \limits _{n\rightarrow \infty } (\Vert {{\mathbb {T}}}^n\Vert _{\mathrm{op}})^{1/n}=\pi ^{n_1+n_2}(m+1)\). \(\square \)
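The statement of Theorem 7 can be checked on a small discrete analogue. The sketch below assumes the two-level operator has the form \({{\mathbb {T}}}=G^{n_2}(I-PA_c^{-1}RA)G^{n_1}\) (our reading of (17)) and uses stand-in diagonal eigenvalue sequences instead of the Schwarz operators of Examples 3 and 4; all sizes are hypothetical:

```python
import numpy as np

# Toy discrete model: two interface components with N frequencies each, m coarse modes.
N, m, n1, n2 = 5, 2, 2, 0
theta1 = 0.9 * 0.5 ** np.arange(N)   # |theta_j(k)| nonincreasing in k
theta2 = 0.8 * 0.5 ** np.arange(N)

Z = np.zeros((N, N))
G = np.block([[Z, np.diag(theta1)], [np.diag(theta2), Z]])  # G couples the two components
A = np.eye(2 * N) - G

E = np.eye(N)[:, :m]                 # first m basis vectors per component
P = np.block([[E, Z[:, :m]], [Z[:, :m], E]])
R = P.T
Ac = R @ A @ P                       # coarse operator (invertible here)

C = np.eye(2 * N) - P @ np.linalg.solve(Ac, R @ A)          # coarse correction
T = np.linalg.matrix_power(G, n2) @ C @ np.linalg.matrix_power(G, n1)

rho = np.max(np.abs(np.linalg.eigvals(T)))
predicted = (theta1[m] * theta2[m]) ** ((n1 + n2) / 2)      # pi^{n1+n2}(m+1)
```

In this model the coarse correction annihilates the first m frequencies exactly, and the spectral radius of T matches the predicted value \(\pi ^{n_1+n_2}(m+1)\).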
Theorems 2, 5 and 7 show that the choice of the basis functions used to construct \(V_c\) can affect drastically the convergence of the method. On the one hand, an inappropriate choice of \(V_c\) can lead to a two-level method that performs like the corresponding one-level method. On the other hand, a good choice of \(V_c\) can even turn a divergent stationary method into a convergent one; see, e.g., [13].
5 Numerical Construction of the Coarse Space
The construction of a good coarse space \(V_c\) is not an easy task. Several works rely on the solution of generalized eigenvalue problems on the interfaces; see, e.g., [13, 17, 18, 21, 28]. Although one could reuse these techniques to build a coarse space for the S2S method (see the S2S-HEM method discussed in Sect. 6), we now present two alternative numerical approaches for the generation of coarse space functions. The first relies on principal component analysis (PCA) and shares some similarities with the strategies presented in [38, 49]. The second is based on modeling the two-level iteration operator as a deep neural network, where the coarse space functions are regarded as variables to be optimized. A similar approach has been presented in the context of multigrid methods in [50]. We refer to [51] for a recent review of alternative ways to combine machine learning and DD methods.
We remark that the S2S framework facilitates the use of these two numerical techniques, which could even be computationally unfeasible if applied to a two-level volume method. Indeed, at the discrete level, the substructured coarse functions are much shorter vectors than the corresponding volume ones. This means that, for the PCA approach, one has to compute the singular value decomposition (SVD) of a much smaller matrix, while for the deep neural network approach, the neural net has far fewer parameters to optimize.
5.1 A PCA Approach for Coarse Space Generation
The idea that we present in this section is to construct an approximation of the image of the smoother \(G^r\), for some positive integer r. In fact, the image of \(G^r\) contains information about the “bad converging” eigenvectors of G. Notice that \(\mathrm{im}(G^r)=\mathrm{im}(G^rX)\) for any surjective matrix X. Therefore, the idea is to construct a coarse space using the information contained in \(G^rX\), for some randomly chosen matrix X. Clearly, if \(\rho (G)<1\) and r is large, then one expects the slowest-converging eigenvectors to be predominant in \(G^rX\). Notice also the relation of this idea with the perturbation Theorem 6 and Corollary 1.
Motivated by these observations, we use a principal component analysis (PCA), also known as proper orthogonal decomposition (POD); see, e.g., [52] and references therein. We consider the following procedure.

1.
Consider a set of q linearly independent randomly generated vectors \(\{ \mathbf{s}_k \}_{k=1}^q \subset {\mathbb {R}}^{N^s}\), where \(N^s\) is the number of degrees of freedom on the interfaces, and define the matrix \(S = [ \mathbf{s}_1 ,\ldots , \mathbf{s}_q ]\). Here, \(q \approx m\) and m is the desired dimension of the coarse space.

2.
Use the vectors \(\mathbf{s}_k\) as initial vectors and perform r smoothing steps to create the matrix \(W = G^r S\). This computation can be performed in parallel, applying G to each column of S separately. Further, we assume that r is “small”.

3.
Compute the SVD of W: \(W=U \Sigma V^\top \). This is cheap (\(O(N^s q^2)\) operations) because \(W\in {\mathbb {R}}^{N^s \times q}\) is “small”, since q is “small” and the \(\mathbf{s}_k\) are interface vectors.

4.
Since the left singular vectors (corresponding to the nonzero singular values) span the image of W, we define \(V_c:= \mathrm{span}\{ \mathbf{u}_j \}_{j=1}^{m}\) and \(P:=[\mathbf{u}_1,\ldots ,\mathbf{u}_{m}]\).
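The four steps above can be sketched as follows; here G is replaced by a random stand-in contraction, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, q, m, r = 200, 12, 8, 2   # interface dofs, samples, coarse dim, smoothing steps

# Stand-in smoother: any contraction works; a random matrix rescaled to rho(G) < 1.
G = rng.standard_normal((Ns, Ns))
G *= 0.9 / np.max(np.abs(np.linalg.eigvals(G)))

# Steps 1-2: random samples, then r smoothing steps (columns are independent,
# so in practice they can be processed in parallel).
S = rng.standard_normal((Ns, q))
W = np.linalg.matrix_power(G, r) @ S

# Steps 3-4: thin SVD of the small Ns x q matrix; the left singular vectors
# span im(W), and the first m of them form the prolongation operator.
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
P = U[:, :m]                  # columns are an orthonormal basis of V_c
```

Note that the SVD acts only on an \(N^s\times q\) interface matrix, which is the computational advantage of the substructured setting discussed above.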
We wish to remark that, in light of Theorem 6 and Corollary 1, one can also use approximations of the eigenfunctions of G (if available) in the matrix S (in step 1 above). A numerical study of the above procedure is presented in Sect. 6. To qualitatively describe the obtained coarse space, we prove the following bound.
Lemma 3
(Approximation of the randomly generated coarse space) Consider an orthogonal matrix \(X \in {\mathbb {R}}^{N^s \times N^s}\) and its decomposition \(X=[S,{\widetilde{S}}]\). Let \(W=G^r [S,0]\) and \(P_\ell = U_\ell \Sigma _\ell V_\ell ^\top \) be the rank-\(\ell \) SVD of W (\(\ell \le m\)), where \((\Sigma _\ell )_{j,j} = \sigma _j\), \(j=1,\ldots ,\ell \), are the singular values of W. Then, it holds that
\[ \Vert G^r X - P_\ell \Vert _2 \le \sigma _{\ell +1} + \Vert G^r \Vert _2. \]
Proof
Using the triangle inequality, we get
\[ \Vert G^r X - P_\ell \Vert _2 \le \Vert W - P_\ell \Vert _2 + \Vert G^r [S,0] - G^r X \Vert _2. \]
The first term on the right-hand side is equal to \(\sigma _{\ell +1}\) by the best approximation properties of the SVD. The second term can be bounded as \(\Vert G^r [S,0] - G^r X \Vert _2\le \Vert G^r \Vert _2 \Vert [S,0] - X \Vert _2\), and a direct calculation of \(\Vert [S,0] - X \Vert _2=\Vert [0,{\widetilde{S}}] \Vert _2=1\) leads to the result, as \({\widetilde{S}}^\top {\widetilde{S}}=I_{N^s-q}\). \(\square \)
Despite its very simple proof, Lemma 3 allows us to describe the quality of the created coarse space. Larger values of q and \(\ell \) lead to a smaller error in the approximation of the image of \(G^r\). Moreover, a smoother G with good contraction properties, namely \(\Vert G \Vert _2 \ll 1\), leads to a better approximation. Clearly, one can improve the approximation by increasing r, at the cost of extra subdomain solves.
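A minimal numerical check of the bound in Lemma 3, with random stand-ins for G and X and hypothetical sizes, can be written as:

```python
import numpy as np

rng = np.random.default_rng(2)
Ns, q, r, ell = 60, 20, 2, 10

# Stand-in contraction G; orthogonal X whose first q columns play the role of S.
G = rng.standard_normal((Ns, Ns))
G *= 0.5 / np.max(np.abs(np.linalg.eigvals(G)))
X, _ = np.linalg.qr(rng.standard_normal((Ns, Ns)))
S = X[:, :q]

Gr = np.linalg.matrix_power(G, r)
W = Gr @ np.hstack([S, np.zeros((Ns, Ns - q))])   # W = G^r [S, 0]
U, s, Vt = np.linalg.svd(W)
P_ell = (U[:, :ell] * s[:ell]) @ Vt[:ell]         # best rank-ell approximation of W

lhs = np.linalg.norm(Gr @ X - P_ell, 2)           # ||G^r X - P_ell||_2
rhs = s[ell] + np.linalg.norm(Gr, 2)              # sigma_{ell+1} + ||G^r||_2
```

The assertion `lhs <= rhs` reproduces the inequality of the lemma for this random instance.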
5.2 Generating the Coarse Space by Deep Neural Networks
Theorem 6 shows that the spectral coarse space obtained from the first dominant eigenvector of \({{\mathbb {G}}}\) is not necessarily the one-dimensional coarse space minimizing \(\rho ({{\mathbb {T}}})\). Now, we wish to go beyond this one-dimensional analysis and optimize the entire coarse space \(V_c\), keeping its dimension m fixed. This is equivalent to optimizing the prolongation operator P whose columns span \(V_c\). Thus, we consider the optimization problem
\[ \min _{P\in {\mathbb {R}}^{N^s\times m}} \rho \left( T(P)\right) . \tag{35} \]
To solve (35) approximately, we follow the approach proposed in [50]. Due to the Gelfand formula \(\rho (T)=\lim _{k\rightarrow \infty } \Vert T^k\Vert _F^{1/k}\), we replace (35) with the simpler optimization problem \(\min _{P} \Vert T(P)^k\Vert ^2_F\) for some positive integer k. Here, \(\Vert \cdot \Vert _F\) is the Frobenius norm. We then consider the unbiased stochastic estimator [53] \(\Vert T(P)^k{\mathbf {z}}\Vert _2^2\), which satisfies \({\mathbb {E}}\left[ \Vert T(P)^k{\mathbf {z}}\Vert _2^2\right] =\Vert T(P)^k\Vert _F^2\),
where \({\mathbf {z}}\in {\mathbb {R}}^{N^s}\) is a random vector with Rademacher distribution, i.e. \({\mathbb {P}}({\mathbf {z}}_i=\pm 1)=1/2\). Finally, we rely on a sample average approach, replacing the unbiased stochastic estimator with its empirical mean, so that (35) is approximated by
\[ \min _{P\in {\mathbb {R}}^{N^s\times m}} \frac{1}{N}\sum _{i=1}^{N}\Vert T(P)^k{\mathbf {z}}_i\Vert _2^2, \tag{36} \]
where the \({\mathbf {z}}_i\) are a set of independent, Rademacher-distributed random vectors. The action of T on the vectors \({\mathbf {z}}_i\) can be interpreted as the feedforward process of a neural net, where each layer represents one specific step of the two-level method, that is, the smoothing step, the residual computation, the coarse correction, and the prolongation/restriction operations. In our setting, the weights of most layers are fixed and given, and the optimization is performed only on the weights of the layer representing the prolongation step. The restriction layer is constrained to have as weights the transpose of the weights of the prolongation layer. To solve (36), we rely on the stochastic gradient algorithm, which at each iteration requires computing the action of T k times. The optimization parameters are heuristically tuned to a learning rate of 0.1, a mini-batch of ten vectors \({\mathbf {z}}_i\), and a maximum of 20,000 epochs, and the optimization is stopped when the loss function increases for 5 consecutive updates. Each stochastic gradient iteration is expensive, as it is equivalent to performing k iterations of the two-level method. Hence, the proposed deep neural network approach is not computationally efficient to build coarse spaces, unless one considers an offline-online paradigm or a many-query context. We will use this approach in Sect. 6 to go beyond the result of Theorem 6 and show numerically that, given an integer m, a spectral coarse space made of the first m dominant eigenvectors of G is not necessarily the asymptotically optimal coarse space of dimension m.
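The Rademacher estimator at the heart of this construction is easy to verify in isolation. The sketch below uses a random stand-in matrix for \(T(P)^k\) (all sizes hypothetical) and checks that the empirical mean of \(\Vert T^k{\mathbf {z}}\Vert _2^2\) approaches \(\Vert T^k\Vert _F^2\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, nsamples = 50, 4, 2000

# Random stand-in for the two-level iteration matrix T(P).
T = rng.standard_normal((n, n)) / np.sqrt(n)
Tk = np.linalg.matrix_power(T, k)

# Hutchinson-type estimator: E[ ||T^k z||_2^2 ] = ||T^k||_F^2 for Rademacher z.
Z = rng.choice([-1.0, 1.0], size=(n, nsamples))
estimate = np.mean(np.sum((Tk @ Z) ** 2, axis=0))
exact = np.linalg.norm(Tk, 'fro') ** 2
```

In the actual method the matrix \(T(P)^k\) is of course never formed; only its action on the mini-batch of vectors \({\mathbf {z}}_i\) is evaluated through the layers of the network.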
6 Numerical Experiments
This section is concerned with the numerical validation of the framework proposed in this manuscript. We first consider a Poisson equation in 2D and 3D rectangular boxes and show the convergence behavior of the S2S method with different coarse spaces and of the SHEM method [17]. In this simplified setting, we also report the computational time and memory storage requirements of the S2S and SHEM methods. We then solve a Poisson problem with many-subdomain decompositions and discuss a further way to build a substructured coarse space, that is, using the SHEM interface functions. Next, we focus on a diffusion problem with highly jumping coefficients and validate Theorem 6, showing how a perturbed coarse space can affect the convergence of the methods.
For the sake of completeness, let us recall that the SHEM coarse space is defined as the union of a multiscale and a spectral coarse space. The multiscale coarse space is built by solving, for each internal vertex \(v^k\) of the nonoverlapping decomposition \(\Omega =\cup _{i=1}^N {\widetilde{\Omega }}_i\), a homogeneous boundary value problem on each of the edges \(\Gamma _{i,j}:=\partial {\widetilde{\Omega }}_i\cap \partial {\widetilde{\Omega }}_j\) that has the vertex \(v^k\) as an endpoint, imposing a Dirichlet boundary condition equal to 1 at the vertex \(v^k\) and zero at the opposite vertex. The solutions \(\phi ^k_{i,j}\) of these interface boundary value problems are then extended (through the PDE operator, see eq (14) in [27]) to both subdomains \({\widetilde{\Omega }}_i\) and \({\widetilde{\Omega }}_j\), and set to zero on the rest of \(\Omega \).
The spectral coarse space is obtained by solving localized eigenvalue problems on each of the internal edges \(\Gamma _{i,j}\), and then extending a certain number of the eigenfunctions harmonically to both \({\widetilde{\Omega }}_i\) and \({\widetilde{\Omega }}_j\), setting them to zero elsewhere (see eq (15) in [27]).
6.1 Poisson Equation in 2D and 3D Rectangular Boxes
Let us consider a rectangular domain \(\Omega =\Omega _1\cup \Omega _2\), where \(\Omega _1=(-1,\delta )\times (0,1)\) and \(\Omega _2=(-\delta ,1)\times (0,1)\), and a Poisson equation \(\Delta u=f\) with homogeneous Dirichlet boundary conditions on \(\partial \Omega \). Given an integer \(\ell \ge 2\), we discretize each subdomain with a standard second order finite difference scheme with \(N_y=2^\ell -1\) points in the y direction and \(N_x=N_y\) points in the x direction. The overlap has size \(2\delta \) with \(\delta =N_{ov}h\), where h is the mesh size and \(N_{ov}\in {\mathbb {N}}\). In our computations, we consider \(f=1\) and initialize the iterations with a random initial guess.
Figure 2 shows the relative error decay for several methods. Specifically, we compare the one-level parallel Schwarz method (\(G_s\) (Schwarz) in the figures), a S2S method with a coarse space made of eigenfunctions of G (S2S-G), a S2S method with a coarse space made of eigenfunctions of the operators \(G_j\) (S2S-\(G_j\)), a S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), a S2S method with a coarse space obtained using deep neural networks (S2S-DNN), and the spectral volume method based on the SHEM coarse space (SHEM); see [27]. For the PCA coarse space, we average the relative error decay over 30 realizations, and the parameters for the PCA procedure are \(q=2\, \mathrm{dim}\,V_c\) and \(r=2\), where \(\mathrm{dim}\,V_c\) is the desired size of the spectral coarse space. For the deep neural network approach, the parameters are \(N=N^s\) and \(k=4\).
Figure 2 shows that most of the spectral methods have a very similar convergence. Indeed, we have numerically observed that the S2S-G, S2S-\(G_j\) and SHEM methods all have the same spectral radius in this simplified setting. We remark that the S2S-PCA method has on average the same convergence behavior as the other two-level methods, even though it can sometimes be slightly different (faster or slower). The S2S-DNN method outperforms the others. In this particular setting, the eigenvalues of G are \(\lambda _j=\pm \sqrt{\mu _j}\), where \(\mu _j>0,\; \forall j=1,\ldots ,N^s\), are the eigenvalues of \(G_1=G_2\), and A is symmetric. Hence, we are under the assumptions of point (C) of Theorem 6, and Fig. 2 confirms that a spectral coarse space is not necessarily the coarse space leading to the fastest convergence.
As we claimed that the deep neural network approach is computationally expensive, it is worth remarking that the PCA approach builds a coarse space as efficient as the spectral ones by performing \(q\cdot r\) subdomain solves in parallel, instead of solving eigenvalue problems as required by all other two-level methods, either locally (as the S2S-\(G_j\) and SHEM methods) or on the whole skeleton (as the S2S-G method).
Next, we compare the computational costs required by the S2S method and a spectral volume method in Table 1. For simplicity, we assume \(n_1=1\), \(n_2=0\). Let \(A_v=M-N\) be a volume matrix of size \(N^v\times N^v\) and A be the substructured matrix of size \(N^s \times N^s\), P and R the substructured prolongation and restriction operators, while \(P_v\) and \(R_v\) are the corresponding volume operators. On each subdomain, we assume there are \(N_{\text {sub}}\) unknowns, and m is the dimension of the coarse space. The cost of the smoothing step is in both cases equal to \(\gamma _s(N_{\text {sub}})\), where \(\gamma _s\) depends on the choice of the linear solver: e.g., for a Poisson problem, \({\gamma _s}(N_{\mathrm{sub}})=N_{\mathrm{sub}}\log (N_{\mathrm{sub}})\) if a fast Poisson solver is used, or \({\gamma _s}(N_{\mathrm{sub}})=b N_{\mathrm{sub}}\) for sparse banded matrices with bandwidth b; see, e.g., [54]. Further, the cost of solving the coarse problem is identical as well, equal to \(\gamma _c(m)\), where m is the size of the coarse space and \(\gamma _c\) depends on the linear solver used. The coarse matrices are usually small, fully dense matrices, so it is reasonable to factorize them using an LU decomposition. In a standard implementation, the S2S method requires performing subdomain solves when computing the residual, as the matrix-vector multiplication with the matrix A is needed. To avoid this extra cost, in the “Appendix 8” we show two alternative algorithms that implement the S2S method efficiently, where the residual is computed cheaply and the two applications of the smoothing operators per iteration are avoided. We further show that these two alternatives have the same convergence behavior as Algorithm 1.
The main advantage of the S2S method is that the restriction and prolongation operations are performed on the substructures, with objects which are smaller than the corresponding volume counterparts. Thus, the S2S method naturally requires less memory storage. For instance, given a coarse space of dimension m, the restriction and prolongation operators are matrices of size \(m\times N^s\) and \(N^s\times m\), where \(N^s\) is the number of unknowns on the substructures. The corresponding volume objects have size \(m\times N^v\) and \(N^v\times m\), where \(N^v\) is the number of unknowns on the whole domain. Thus, the S2S method presents advantages both from the memory storage and from the computational time point of view, due to the smaller number of floating-point operations.
We now discuss the cost of the offline computation phases. To build prolongation and restriction operators in the volume case, one needs to define some functions, usually by solving eigenvalue problems, along the interfaces between nonoverlapping subdomains or in the overlap region between overlapping subdomains. These functions are then extended into the interior of the subdomains, and this extension costs \(\gamma _s(N_{\text {sub}})\). Notice that the way of extending these functions is not unique, and we refer to [28, Section 5] for an overview. In the substructured framework, we have analyzed theoretically several approaches, among which a global eigenvalue problem (S2S-G), local eigenvalue problems (S2S-\(G_j\)), and randomized approaches using either PCA (S2S-PCA) or deep neural networks (S2S-DNN). The relative costs of these approaches with respect to the volume ones are difficult to estimate, as they depend on the features of the problem at hand. Nevertheless, for any method used to generate the interface functions, we do not need to perform any extension step in the substructured framework. Besides the approaches studied theoretically, we emphasize that one can use the interface functions computed in a volume method as a basis for the S2S coarse space. In this way one avoids the extension step and fully exploits the intrinsic substructured nature of the S2S method. In the next section we show numerical results where we used the SHEM interface functions as a basis for the S2S method (called the S2S-HEM method).
To conclude, we consider a Poisson equation on a three-dimensional box \(\Omega = (-1, 1) \times (0, 1) \times (0, 1)\) decomposed into two overlapping subdomains \(\Omega _1 = (-1, \delta ) \times (0, 1) \times (0, 1)\) and \(\Omega _2 = (-\delta , 1) \times (0, 1) \times (0, 1)\). Table 2 shows the computational times to reach a relative error smaller than \(10^{-8}\), and the memory required to store the restriction and interpolation operators in a sparse format in Matlab for the S2S method and the SHEM method. The experiments have been performed on a workstation with 8 processor cores (Intel Core i7-6700 CPU, 3.40GHz) and 32 GB of RAM. We remark that the S2S method requires drastically less memory than the SHEM method, which becomes inefficient for large problems from the memory point of view. Concerning computational times, we observe that the two methods are equivalent in this setting. The substructured restriction and prolongation operators are faster than the volume ones: computing the action of, for instance, the substructured prolongation operator on the largest problem takes about \(7\cdot 10^{-4}\) seconds, compared to \(3\cdot 10^{-2}\) seconds for the volume prolongation. However, the bottleneck here is represented by the two, very large, subdomain solves. A many-subdomain decomposition and a parallel implementation in a high-performance programming language should make the advantage of using substructured coarse spaces in terms of computational time more evident.
6.2 Decomposition into Many Subdomains
In this section, we consider a Poisson equation in a square domain \(\Omega \) decomposed into \(M\times M\) nonoverlapping square subdomains \({\widetilde{\Omega }}_j\), \(j=1,\ldots ,M^2=N\). Each subdomain \({\widetilde{\Omega }}_j\) contains \(N_{\text {sub}}:=(2^\ell -1)^2\) interior degrees of freedom. The subdomains \({\widetilde{\Omega }}_j\) are extended by \(N_{ov}\) points to obtain subdomains \(\Omega _j\), which form an overlapping decomposition of \(\Omega \). Each discrete local substructure is made of one-dimensional segments. Figure 3 provides a graphical representation.
Figure 4 compares several versions of the S2S method to the SHEM method. Specifically, we consider a S2S method with a coarse space made of eigenfunctions of G (S2S-G), a S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), a S2S method with a coarse space inspired by the SHEM coarse space (S2S-HEM, that is, S2S Harmonically Enriched Multiscale), and a S2S method with a coarse space obtained with the deep neural network approach (S2S-DNN).
In more detail, we create the HEM coarse space by computing harmonic functions and solving interface eigenvalue problems on each one-dimensional segment that forms the local discrete substructure, similarly to the SHEM coarse space [27]. Notice, however, that the SHEM method extends these interface functions into the interior of the nonoverlapping subdomains, as the method is naturally defined in volume. We do not need to perform this extra step of extending the functions into the neighboring subdomains. We report that we have also tried to build a coarse space by simply restricting the volume functions of the SHEM coarse space onto the substructures, and we observed a behavior similar to that of the HEM coarse space. For the PCA approach, we generated \(q=2\times \mathrm{dim}\,V_c\) random vectors and we set \(r=2\). The result we plot is averaged over 30 different random coarse spaces. For the deep neural network, we used \(k=4\) and \(N=N^s\).
The size of the coarse space is set by the SHEM coarse space. In the top-left panel, we consider only multiscale functions, without solving any eigenvalue problem along the interfaces. In the top-right panel, we include the first eigenfunctions on each interface, and in the bottom-central panel we include the first and the second eigenfunctions. In all cases we observe that the methods have a similar convergence, which is slightly faster for the substructured methods for smaller coarse spaces. As we already remarked, S2S-G is not necessarily the fastest.
6.3 Diffusion Problem with Jumping Diffusion Coefficients
In this paragraph, we test the S2S method for the solution of a diffusion equation \(-\text {div}(\alpha \nabla u) = f\) in a square domain \(\Omega :=(0,1)^2\) with \(f:=\sin (4\pi x)\sin (2\pi y)\sin (2\pi xy)\). The domain \(\Omega \) is decomposed into 16 nonoverlapping subdomains, and we suppose \(\alpha =1\) everywhere except in some channels, where \(\alpha \) takes large values. Each nonoverlapping subdomain is discretized with \(N_{\text {sub}}=2^{2\ell }\) cells and enlarged by \(N_{ov}\) cells to create an overlapping decomposition with overlap \(\delta =2N_{ov} h\). We use a finite-volume scheme and assume that the jumps of the diffusion coefficients are aligned with the cell edges. We consider the two configurations represented in Fig. 5.
We first aim to validate Theorem 6 numerically. We consider the two-channel configuration with \(\alpha =5\cdot 10^3\), \(\ell =4\), \(N=4\). The first five eigenvalues are \(\lambda _1=-0.999\), \(\lambda _2=0.9989\), \(\lambda _3=0.99863\), \(\lambda _4=0.99861\) and \(\lambda _5=0.2392\). We consider the coarse space \(V_c:=\mathrm{span}\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_2\right\} \), where \(\gamma =\langle {\pmb \psi }_1,{\pmb \psi }_2\rangle \approx 10^{-15}\), so that the two eigenvectors are orthogonal. As \(\lambda _1\) and \(\lambda _2\) have opposite signs, point (C) of Theorem 6 guarantees the existence of an \({\widetilde{\varepsilon }}\) such that \(\rho (T({\widetilde{\varepsilon }}))=\lambda _3<\lambda _2\). The left panel of Fig. 6 confirms that \(\lambda (\varepsilon ,0)\) vanishes for two values of \(\varepsilon \). The right panel clearly shows that for several values of \(\varepsilon \), \(\rho (T(\varepsilon ))=\lambda _3\). It is interesting to remark that, in this setting, choosing \(\varepsilon =0\) (that is, a standard spectral coarse space) is actually the worst choice, since for any \(\varepsilon \ne 0\), \(\rho (T(\varepsilon ))\le \lambda _2\), as the proof of point (C) of Theorem 6 shows.
Second, we build the coarse space \(V_c:=\mathrm{span}\left\{ {\pmb \psi }_1+\varepsilon {\pmb \psi }_5\right\} \), where \({\pmb \psi }_i\), \(i=1,5\), are the first and fifth eigenfunctions of G, and \(\varepsilon =0.01\). Figure 7 shows that the S2S method with this specific choice of coarse space diverges. Computing the spectral radius, we obtain \(\rho (T)= 1.2322\). In this setting, we have \(\gamma =0.5628\), which, substituted into the expression of \(\lambda (\gamma ,\varepsilon )\) together with the values of \(\lambda _1\) and \(\lambda _5\), leads to \(\lambda (\gamma ,\varepsilon )=1.2322=\rho (T)\). Indeed, \(\lambda (\gamma ,\varepsilon )\) has a vertical asymptote at \(\gamma ^*=0.1404\), as shown in the right panel of Fig. 7. We can restore the convergence of the S2S method by a sufficient decrease of \(\varepsilon \), that is, by reducing the perturbation in the coarse space. In a numerical implementation, this is obtained by performing \(r\ge 1\) iterations of the smoother G on the coarse space functions (see Corollary 1). Indeed, it holds that
\[ G^r\left( {\pmb \psi }_1+\varepsilon {\pmb \psi }_5\right) = \lambda _1^r\left( {\pmb \psi }_1+\varepsilon \frac{\lambda _5^r}{\lambda _1^r}{\pmb \psi }_5\right) . \]
Applying the smoother twice in the case at hand, we get a new “smoothed” coarse space where the perturbation has size \(\varepsilon ^*=\frac{\lambda _5^2}{\lambda _1^2}\varepsilon =5.73\cdot 10^{-4}\), so that now \(\lambda (\gamma ,\varepsilon ^*)=0.0080\). We remark that \(\lambda (\gamma ,\varepsilon ^*)\) is the convergence factor of T on \(\text {span}\left\{ {\pmb \psi }_1+\varepsilon ^* {\pmb \psi }_5\right\} \), so that the convergence of the S2S method is now determined by the second largest eigenvalue of T, i.e. \(\lambda _2=0.9990\), as Fig. 7 shows.
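The effect of pre-smoothing the perturbed coarse function can be reproduced with the eigenvalues reported above, working directly in eigen-coordinates (a diagonal stand-in for G, with the eigenvectors taken as canonical basis vectors for illustration):

```python
import numpy as np

# Magnitudes of lambda_1 and lambda_5 from the text; eps is the initial perturbation.
lam1, lam5, eps = 0.999, 0.2392, 0.01
G = np.diag([lam1, lam5])                 # G restricted to span{psi_1, psi_5}

v = np.array([1.0, eps])                  # psi_1 + eps * psi_5 in eigen-coordinates
w = np.linalg.matrix_power(G, 2) @ v      # two smoothing steps, r = 2

eps_star = w[1] / w[0]                    # perturbation size after smoothing
```

Two applications of the smoother shrink the perturbation by the factor \((\lambda _5/\lambda _1)^2\), reproducing the value \(\varepsilon ^*\approx 5.73\cdot 10^{-4}\) quoted above.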
We then investigate the performance of the S2S methods and compare them with the SHEM coarse space in the multiple-channel configuration. We set \(\ell =4\), \(N=16\), which corresponds to \(N^v=4096\) degrees of freedom, and \(N_{ov}=2\). Table 3 shows the number of iterations needed to reach a relative error smaller than \(10^{-8}\) for the S2S-G, S2S-PCA, S2S-HEM and SHEM methods. The relative error is computed with respect to the first iterate. We consider coarse spaces of dimensions 84, 132 and 180, which, for the SHEM and S2S-HEM methods, correspond to multiscale coarse spaces enriched by respectively the first, second and third eigenvectors of the interface eigenvalue problems. For the PCA coarse space, we set \(q=2N_c\) and \(r=6\) if \(\alpha =10^6\), \(r=4\) if \(\alpha =10^4\) and \(r=2\) if \(\alpha =10^2\). We remark that for smaller values of r, the S2S-PCA method diverges. This increase in the value of r can be explained by noticing that, for the multichannel configuration, the smoother G has several eigenvalues approximately equal to 1 for large values of \(\alpha \). Thus the PCA procedure, which essentially relies on a power method idea to approximate the image of G, suffers from the presence of several clustered eigenvalues, and hence does not provide accurate approximations of the eigenfunctions of G. Similarly, the HEM coarse space, obtained by solving an eigenvalue problem on each segment of the skeleton, could lead to a divergent method. Thus, to improve this coarse space, we apply a few iterations of the smoother to obtain a better \(V_c\). Tables 3 and 4 report the number of iterations needed to reach a tolerance of \(10^{-8}\) when the algorithms are used either as stationary methods or as preconditioners. We remark that all spectral methods have very similar performance, and all methods are robust with respect to the strength of the jumps.
7 Conclusions
In this work, we introduced a new computational framework of two-level substructured Schwarz methods, called S2S, based on coarse spaces defined exclusively on interfaces provided by the overlapping decomposition of the domain. We presented a broader convergence analysis for two-level iterative methods, which covers the proposed substructured framework as a special case. The analysis pushes forward the current understanding of the asymptotic optimality of coarse spaces. From the computational point of view, we have discussed approaches based on PCA and deep neural networks for the numerical computation of efficient coarse spaces. Finally, the effectiveness of our new methods is confirmed by extensive numerical experiments, where stationary elliptic problems (possibly with highly jumping diffusion coefficients) are efficiently solved.
Data Availability
No datasets were generated or analysed during the current study.
Notes
Notice that the term “substructured” very often refers to DD methods defined on nonoverlapping subdomains; see, e.g., [5, 6]. In this work, however, it indicates methods that are defined purely on the interfaces, independently of the type of (overlapping or nonoverlapping) decomposition of the domain; see, e.g., [40, Section 5].
Notice that the substructure of a subdomain is sometimes called “skeleton”; see, e.g., [11].
The hypothesis of a complex inner-product space is general and has the goal of dealing with possibly complex eigenvectors of nonsymmetric \({{\mathbb {A}}}\) and \({{\mathbb {G}}}\). Nevertheless, the analysis presented in this section is also valid in the (more commonly used) real case.
The hypothesis \({\widetilde{\lambda }}_1{\widetilde{\lambda }}_2>0\) is not restrictive. The same calculations can be performed for \({\widetilde{\lambda }}_1{\widetilde{\lambda }}_2<0\), as the sign of the product only influences the sign of the derivative \(\frac{d \lambda (\varepsilon ,0)}{d\varepsilon }.\)
Given two matrices A and B, AB and BA share the same nonzero eigenvalues.
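This fact admits a quick numerical confirmation with arbitrary rectangular matrices, for which AB and BA even have different sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 4))

eig_AB = np.linalg.eigvals(A @ B)   # 4 eigenvalues (AB is 4x4)
eig_BA = np.linalg.eigvals(B @ A)   # 6 eigenvalues (BA is 6x6)

# The nonzero eigenvalues coincide; the two extra eigenvalues of BA
# are (numerically) zero.
nz_AB = np.sort_complex(eig_AB[np.abs(eig_AB) > 1e-10])
nz_BA = np.sort_complex(eig_BA[np.abs(eig_BA) > 1e-10])
```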
References
Saad, Y.: Iterative Methods for Sparse Linear Systems. Other Titles in Applied Mathematics, 2nd edn. SIAM, Philadelphia, PA (2003)
Lions, P.L.: On the Schwarz alternating method. I. In: First International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 1–42. Paris, France (1988)
Lions, P.L.: On the Schwarz alternating method. II. Stochastic interpretation and other properties. In: Second International Symposium on Domain Decomposition Methods for Partial Differential Equations, pp. 47–70 (1989)
Dolean, V., Jolivet, P., Nataf, F.: An Introduction to Domain Decomposition Methods. SIAM, Philadelphia, PA (2015)
Quarteroni, A., Valli, A.: Domain Decomposition Methods for Partial Differential Equations. Numerical Mathematics and Scientific Computation. Oxford Science Publications, Oxford (1999)
Toselli, A., Widlund, O.: Domain Decomposition Methods: Algorithms and Theory. Series in Computational Mathematics, vol. 34. Springer, Berlin (2005)
Chaouqui, F., Gander, M.J., Santugini-Repiquet, K.: A continuous analysis of Neumann–Neumann methods: scalability and new coarse spaces. SIAM J. Sci. Comput. 42(6), 3785–3811 (2020)
Ciaramella, G., Gander, M.J.: Analysis of the parallel Schwarz method for growing chains of fixedsized subdomains: part I. SIAM J. Numer. Anal. 55(3), 1330–1356 (2017)
Ciaramella, G., Gander, M.J.: Analysis of the parallel Schwarz method for growing chains of fixedsized subdomains: part II. SIAM J. Numer. Anal. 56(3), 1498–1524 (2018)
Ciaramella, G., Gander, M.J.: Analysis of the parallel Schwarz method for growing chains of fixedsized subdomains: part III. Electron. Trans. Numer. Anal. 49, 201–243 (2018)
Ciaramella, G., Hassan, M., Stamm, B.: On the scalability of the Schwarz method. SMAI J. Comput. Appl. Math. 6, 33–68 (2020)
Chaouqui, F., Ciaramella, G., Gander, M.J., Vanzan, T.: On the scalability of classical onelevel domaindecomposition methods. Vietnam J. Math. 46(4), 1053–1088 (2018)
Ciaramella, G., Gander, M.J., Halpern, L., Salomon, J.: Methods of reflections: relations with Schwarz methods and classical stationary iterations, scalability and preconditioning. SMAI J. Comput. Appl. Math. 5, 161–193 (2019)
Dubois, O., Gander, M.J., Loisel, S., St-Cyr, A., Szyld, D.B.: The optimized Schwarz method with a coarse grid correction. SIAM J. Sci. Comput. 34(1), 421–458 (2012)
Gander, M.J., Halpern, L., Santugini-Repiquet, K.: A new coarse grid correction for RAS/AS. In: Domain Decomposition Methods in Science and Engineering XXI, pp. 275–283. Springer, Cham (2014)
Gander, M.J., Halpern, L., Santugini-Repiquet, K.: On optimal coarse spaces for domain decomposition and their approximation. In: Domain Decomposition Methods in Science and Engineering XXIV, pp. 271–280. Springer, Cham (2018)
Gander, M.J., Loneland, A.: SHEM: An optimal coarse space for RAS and its multiscale approximation. In: Domain Decomposition Methods in Science and Engineering XXIII, pp. 313–321. Springer, Cham (2017)
Gander, M.J., Song, B.: Complete, optimal and optimized coarse spaces for additive Schwarz. In: Domain Decomposition Methods in Science and Engineering XXIV, pp. 301–309. Springer, Cham (2018)
Gander, M.J., Vanzan, T.: Multilevel optimized Schwarz methods. SIAM J. Sci. Comput. 42(5), 3180–3209 (2020)
Aarnes, J., Hou, T.Y.: Multiscale domain decomposition methods for elliptic problems with high aspect ratios. Acta Math. Appl. Sin. 18(1), 63–76 (2002)
Bjørstad, P., Gander, M.J., Loneland, A., Rahman, T.: Does SHEM for additive Schwarz work better than predicted by its condition number estimate? In: Domain Decomposition Methods in Science and Engineering XXIV, pp. 129–138. Springer, Cham (2018)
Dohrmann, C.R., Klawonn, A., Widlund, O.B.: A family of energy minimizing coarse spaces for overlapping Schwarz preconditioners. In: Domain Decomposition Methods in Science and Engineering XVII, pp. 247–254 (2008)
Dolean, V., Nataf, F., Scheichl, R., Spillane, N.: Analysis of a twolevel Schwarz method with coarse spaces based on local DirichlettoNeumann maps. Comput. Methods Appl. Math. 12(4), 391–414 (2012)
Efendiev, Y., Galvis, J., Lazarov, R., Willems, J.: Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms. ESAIM Math. Model. Numer. Anal. 46(5), 1175–1199 (2012)
Galvis, J., Efendiev, Y.: Domain decomposition preconditioners for multiscale flows in highcontrast media. Multiscale Model. Sim. 8(4), 1461–1483 (2010)
Galvis, J., Efendiev, Y.: Domain decomposition preconditioners for multiscale flows in high contrast media: reduced dimension coarse spaces. Multiscale Model. Sim. 8(5), 1621–1644 (2010)
Gander, M.J., Loneland, A., Rahman, T.: Analysis of a new harmonically enriched multiscale coarse space for domain decomposition methods. Preprint arXiv:1512.05285 (2015)
Heinlein, A., Klawonn, A., Knepper, J., Rheinbach, O.: Multiscale coarse spaces for overlapping Schwarz methods based on the ACMS space in 2D. Electron. Trans. Numer. Anal. 48, 156–182 (2018)
Spillane, N., Dolean, V., Hauret, P., Nataf, F., Pechstein, C., Scheichl, R.: A robust twolevel domain decomposition preconditioner for systems of PDEs. C. R. Math. 349(23), 1255–1259 (2011)
Spillane, N., Dolean, V., Hauret, P., Nataf, F., Pechstein, C., Scheichl, R.: Abstract robust coarse spaces for systems of PDEs via generalized eigenproblems in the overlaps. Numer. Math. 126(4), 741–770 (2014)
Zampini, S., Tu, X.: Multilevel balancing domain decomposition by constraints deluxe algorithms with adaptive coarse spaces for flow in porous media. SIAM J. Sci. Comput. 39(4), 1389–1415 (2017)
Bastian, P., Scheichl, R.S., Seelinger, L., Strehlow, A.: Multilevel spectral domain decomposition. arXiv preprint arXiv:2106.06404 (2021)
Klawonn, A., Radtke, P., Rheinbach, O.: FETI-DP methods with an adaptive coarse space. SIAM J. Numer. Anal. 53(1), 297–320 (2015)
Heinlein, A., Klawonn, A., Lanser, M., Weber, J.: A frugal FETI-DP and BDDC coarse space for heterogeneous problems. Electron. Trans. Numer. Anal. 53, 562–591 (2020)
Klawonn, A., Kühn, M., Rheinbach, O.: Coarse spaces for FETI-DP and BDDC methods for heterogeneous problems: connections of deflation and a generalized transformation-of-basis approach. Electron. Trans. Numer. Anal. 52, 43–76 (2020)
Greenbaum, A.: Iterative Methods for Solving Linear Systems. Frontiers in Applied Mathematics, SIAM, Philadelphia, PA (1997)
Hackbusch, W.: Multigrid Methods and Applications. Series in Computational Mathematics, Springer, Berlin (2013)
Xu, J., Zikatanov, L.: Algebraic multigrid methods. Acta Numer. 26, 591–721 (2017)
Graham, I.G., Lechner, P.O., Scheichl, R.: Domain decomposition for multiscale PDEs. Numer. Math. 106(4), 589–626 (2007)
Gander, M.J.: Optimized Schwarz methods. SIAM J. Numer. Anal. 44(2), 699–731 (2006)
Ciaramella, G., Vanzan, T.: On the asymptotic optimality of spectral coarse spaces. In: Domain Decomposition Methods in Science and Engineering XXVI (accepted) (2021)
Lions, J.L., Magenes, E.: Nonhomogeneous Boundary Value Problems and Applications. Die Grundlehren der mathematischen Wissenschaften, vol. I. Springer, Berlin (1972)
Tartar, L.: An Introduction to Sobolev Spaces and Interpolation Spaces. Lecture Notes of the Unione Matematica Italiana, Springer, Berlin (2007)
Lax, P.: Functional Analysis. A Wiley Series of Texts, Monographs and Tracts. Pure and Applied Mathematics, Wiley, New York (2002)
Friedman, A.: Foundations of Modern Analysis. Dover Books on Mathematics Series, Dover, New York (1982)
Hackbusch, W.: Iterative Solution of Large Sparse Systems of Equations. Applied Mathematical Sciences, vol. 95, 2nd edn. Springer, Cham (2016)
Gander, M.J.: On the influence of geometry on optimized Schwarz methods. SeMA J. 53(1), 71–78 (2011)
Gander, M.J., Xu, Y.: Optimized Schwarz methods for circular domain decompositions with overlap. SIAM J. Numer. Anal. 52(4), 1981–2004 (2014)
Brezina, M., Falgout, R., MacLachlan, S., Manteuffel, T., McCormick, S., Ruge, J.: Adaptive smoothed aggregation (\(\alpha \) SA) multigrid. SIAM Rev. 47(2), 317–346 (2005)
Katrutsa, A., Daulbaev, T., Oseledets, I.: Deep multigrid: learning prolongation and restriction matrices. arXiv preprint arXiv:1711.03825 (2017)
Heinlein, A., Klawonn, A., Lanser, M., Weber, J.: Combining machine learning and domain decomposition methods for the solution of partial differential equations—a review. GAMMMitteilungen 44(1), 202100001 (2021)
Gubisch, M., Volkwein, S.: Chapter 1: Proper Orthogonal Decomposition for LinearQuadratic Optimal Control. Computational Science & Engineering, pp. 3–63. SIAM, Philadelphia, PA (2017)
Hutchinson, M.F.: A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Commun. Stat. Simul. C. 18(3), 1059–1076 (1989)
Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences, 4th edn. Johns Hopkins University Press, Baltimore, MD (2013)
Acknowledgements
Gabriele Ciaramella and Tommaso Vanzan are members of GNCS (Gruppo Nazionale per il Calcolo Scientifico) of INdAM.
Funding
Open access funding provided by EPFL Lausanne. No funding was received.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Appendix
In this Appendix, important implementation details of our substructured two-level methods are discussed. We reformulate Algorithm 1 in equivalent forms that are computationally more efficient. This is essential to make our methods computationally as efficient as, or more efficient than, other existing strategies.
As already remarked in Sect. 6, a naive implementation of Algorithm 1 would lead to a rather expensive method, as the computation of the residual involves a matrix multiplication with A, which requires performing subdomain solves. Hence, one would need two subdomain solves per iteration. To avoid this extra cost, we use the special form of the matrix \(A=I-G\) and propose two new versions of Algorithm 1. These are called S2S-B1 and S2S-B2 and are given by Algorithms 2 and 3. The relations between S2S, S2S-B1 and S2S-B2 are given in the following theorem.
Theorem 8
(Equivalence between S2S, S2S-B2 and S2S-B1)
(a)
Algorithm 2 is equivalent to Algorithm 1.
(b)
Algorithm 3 corresponds to the stationary iterative method \(\mathbf{u}^n=G({\mathbb {I}}-PA_c^{-1}RA)\mathbf{u}^{n-1} + {\widetilde{M}}\mathbf{b},\) where \(G({\mathbb {I}}-PA_c^{-1}RA)\) is the iteration matrix and \({\widetilde{M}}\) the corresponding preconditioner. Moreover, Algorithms 2 and 3 have the same convergence behavior.
Proof
For simplicity, we work with the error equation, and thus \(\mathbf{b}=0\). We denote by \(\widetilde{\mathbf{u}}^0\) the output of the first five steps of Algorithm 2 and by \(\widehat{\mathbf{u}}^0\) the output of Algorithm 1. Then, given an initial guess \(\mathbf{u}^0\), we have
Similar calculations show that steps 6–10 of S2S-B1 are also equivalent to an iteration of Algorithm 1. For the second part of the theorem, we write the iteration matrix of Algorithm 3 as
Hence, Algorithm 3 performs a post-smoothing step instead of a pre-smoothing step as Algorithm 2 does. The method still has the same convergence behavior, since the matrices \(G({\mathbb {I}}-PA_c^{-1}RA)\) and \(({\mathbb {I}}-PA_c^{-1}RA)G\) have the same eigenvalues (see Footnote 6). \(\square \)
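The footnote argument is easy to verify numerically. In the sketch below, G, R and P are random stand-ins (not the actual Schwarz operators), and the pre- and post-smoothed iteration matrices turn out to have the same spectral radius:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = np.eye(n) - G                     # A = I - G, as in the S2S setting
P = rng.standard_normal((n, m))       # hypothetical prolongation
R = rng.standard_normal((m, n))       # hypothetical restriction
A_c = R @ A @ P                       # coarse matrix

C = np.eye(n) - P @ np.linalg.solve(A_c, R @ A)  # I - P A_c^{-1} R A
rho_post = max(abs(np.linalg.eigvals(G @ C)))    # post-smoothing (Algorithm 3)
rho_pre = max(abs(np.linalg.eigvals(C @ G)))     # pre-smoothing (Algorithm 2)
```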
Notice that the first iteration of Algorithm 2 requires two applications of the smoothing operator G, namely two subdomain solves. The subsequent iterations, given by Steps 6–10, need only one application of G. Theorem 8 (a) shows that Algorithm 2 is equivalent to Algorithm 1. This means that each iteration of Algorithm 2 after the first one is computationally less expensive than one iteration of a volume two-level DD method. Since two-level DD methods generally perform few iterations, it can be important to get rid of the expensive first iteration. For this reason, we introduce Algorithm 3, which overcomes the problem of the first iteration. Theorem 8 (b) guarantees that Algorithm 3 is exactly an S2S method with no pre-smoothing and one post-smoothing step, and that it has the same convergence behavior as Algorithm 2.
We remark that the reformulations S2S-B1 and S2S-B2 require storing the matrix \({\widetilde{P}}:= GP\), which is needed anyway in the assembly phase of the coarse matrix; hence no extra cost is incurred compared to a volume two-level DD method. Finally, we stress that these implementation tricks readily generalize to an arbitrary number of pre- and post-smoothing steps.
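The mechanism can be sketched in a generic linear-algebra setting (G, R and P below are random stand-ins, not the actual Schwarz operators). Since \(A = I - G\), the residual \(\mathbf{b} - A\mathbf{u} = \mathbf{b} - \mathbf{u} + G\mathbf{u}\) reuses the already computed \(G\mathbf{u}\), and the post-smoothed update \(G(\mathbf{u} + P\mathbf{e}) + \mathbf{b} = G\mathbf{u} + \widetilde{P}\mathbf{e} + \mathbf{b}\) reuses the stored \(\widetilde{P} = GP\), so each iteration applies G exactly once:

```python
import numpy as np

def assemble_coarse(G, R, P):
    """Assemble A_c = R A P for A = I - G, returning also Pt = G P.
    The smoother acts on the coarse basis once, at assembly time,
    and Pt is reused by every iteration at no extra cost."""
    Pt = G @ P                        # smoother applied to the coarse basis
    return R @ P - R @ Pt, Pt         # A_c = R (I - G) P

def s2s_step(G, Pt, R, A_c, b, u, Gu):
    """One post-smoothed two-level step for A = I - G with Pt = G P:
    the residual reuses Gu, the update reuses Pt, so G is applied
    exactly once (one round of subdomain solves) per iteration."""
    r = b - u + Gu                    # residual b - A u, since A = I - G
    e = np.linalg.solve(A_c, R @ r)   # coarse correction
    u_new = Gu + Pt @ e + b           # = G (u + P e) + b
    return u_new, G @ u_new           # the single application of G

# Random stand-ins: symmetric G with spectral radius 1/2, so that
# A = I - G is SPD and the corrected iteration contracts.
rng = np.random.default_rng(0)
n, m = 20, 4
M = rng.standard_normal((n, n))
M = M + M.T
G = 0.5 * M / np.linalg.norm(M, 2)
A = np.eye(n) - G
P, _ = np.linalg.qr(rng.standard_normal((n, m)))
R = P.T
A_c, Pt = assemble_coarse(G, R, P)
b = rng.standard_normal(n)

u, Gu = np.zeros(n), np.zeros(n)      # u = 0 makes G u = 0 available for free
for _ in range(60):
    u, Gu = s2s_step(G, Pt, R, A_c, b, u, Gu)
```

Starting from u = 0 gives Gu for free, mirroring how Algorithm 3 avoids the extra smoother application of the first iteration.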
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Ciaramella, G., Vanzan, T.: Spectral Coarse Spaces for the Substructured Parallel Schwarz Method. J. Sci. Comput. 91, 69 (2022). https://doi.org/10.1007/s10915-022-01840-9
Keywords
 Domain decomposition methods
 Schwarz methods
 Substructured methods
 Twolevel methods
 Coarse correction
 Coarse spaces
Mathematics Subject Classification
 65N55
 65F10
 65N22
 35J55