Substructured two-grid and multi-grid domain decomposition methods

Two-level Schwarz domain decomposition methods are powerful techniques for the efficient numerical solution of partial differential equations (PDEs). A two-level domain decomposition method requires two main components: a one-level preconditioner (or its corresponding smoothing iterative method), based on domain decomposition techniques, and a coarse correction step, which relies on a coarse space. The coarse space must properly represent the error components that the chosen one-level method is not capable of dealing with. In the literature, most works introduce efficient coarse spaces obtained as the span of functions defined on the entire space domain of the considered PDE; the corresponding two-level preconditioners and iterative methods are therefore defined in volume. In this paper, we use the excellent smoothing properties of Schwarz domain decomposition methods to define, for general elliptic problems, a new class of substructured two-level methods, for which both the Schwarz smoothers and the coarse correction steps are defined on the interfaces (except for the application of the smoother, which requires volumetric subdomain solves). This approach has several advantages. On the one hand, the required computational effort is lower than that of classical volumetric two-level methods. On the other hand, like classical multi-grid methods, our approach does not require the explicit construction of coarse spaces, and it permits a multilevel extension, which is desirable when the high dimension of the problem or the poor quality of the coarse space prevents an efficient numerical solution. Numerical experiments demonstrate the effectiveness of the proposed new numerical framework.


Introduction
Domain decomposition (DD) methods are powerful divide-and-conquer strategies that permit the solution of linear systems of equations, generally arising from discretized partial differential equation (PDE) problems, by efficient parallelization processes [20,48,52]. Over the course of time, several different parallel DD strategies have been developed; see [28] for an elegant review with a historical flavor. The first idea of parallelizing a Schwarz method goes back to P.L. Lions, who introduced the classical parallel Schwarz method [47]. This method is based on Dirichlet transmission conditions, and its discrete version is proved to be equivalent to the famous Restricted Additive Schwarz (RAS) method [28], which was discovered much later in [6]. Similar to RAS is the famous Additive Schwarz (AS) method introduced in [54]. Even though both RAS and AS are based on Dirichlet transmission conditions, they are not equivalent methods; see, e.g., [24] for a detailed comparison analysis. If different transmission conditions are used, one obtains different DD methods, like the optimized Schwarz method [27,36], the Neumann-Neumann (or FETI) method [9,52], the Dirichlet-Neumann method [7,48], etc. The main drawback of these classical one-level DD methods is that they are not (weakly) scalable, since their convergence generally deteriorates when the number of subdomains increases; see, e.g., [7,20,52]. Only in a few cases has a particular scalability behavior been proved and investigated [7,10-12,16,17]. To overcome this scalability issue, a coarse correction step is usually used. This leads to "two-level DD methods". In the literature, the term "two-level DD method" refers either to a two-level preconditioner [1-4,19,21,23,25,26,33,39,44,45,49,50,55] or to a two-level stationary method [8,9,14,22,30-32,34,35,37].
While in the first class one seeks a coarse space matrix to add to the one-level DD preconditioner, in the second class the goal is to design a correction step in which the residual equation is solved on a coarse space. This second class follows an idea similar to that of multi-grid methods [42]. For any given one-level DD method (stationary or preconditioning), the choice of the coarse space influences very strongly the convergence behavior of the corresponding two-level method. For this reason, the main focus of all the references mentioned above is the definition of different coarse spaces and of new strategies to build coarse space functions, leading to efficient two-level DD stationary and preconditioning methods. Although the mentioned references consider several one-level DD methods and different PDE problems, it is still possible to classify them into two main groups, depending on the idea governing the definition of the coarse space. To explain it, let us consider a DD iterative method (e.g., RAS) applied to a well-posed PDE problem. Errors and residuals of the DD iterative procedure generally have very particular forms. The errors are predominant in the overlaps and are harmonic, in the sense of the underlying PDE operator, in the interior of the subdomains (excluding the interfaces). The residuals are predominant on the interfaces and zero outside the overlap. For examples and more details, see, e.g., [14,15,32]. This difference motivated, sometimes implicitly, the construction of different coarse spaces. On the one hand, many references use different techniques to define coarse functions in the overlap (where the error is predominant), and then extend them to the remaining parts of the neighboring subdomains; see, e.g., [19,21,23,25,26,44,45,49,50].
On the other hand, in other works the coarse space is created by first defining basis functions on the interfaces (where the residual is nonzero), and then extending them (in different ways) to portions of the neighboring subdomains; see, e.g., [1,2,8,9,14,30,32-35,37,44]. For a compact and complete overview of several of the different coarse spaces, we refer to [44, Section 5]. For Helmholtz and time-harmonic equations, we refer to [3,4,39], where the coarse space is based on a (volumetric) coarse mesh. For other techniques and related discussions, see, e.g., [20,22,30,31,40,55].
The starting point of this work is an important property of Schwarz methods: one-level Schwarz iterative methods are generally very efficient smoothers; see, e.g., [10,13,27,37] and references therein. This property was already discussed in [42, Chapter 15], where Schwarz methods are used as classical smoothers in the context of multi-grid methods or to define local defect corrections; see, e.g., [37,41]. In these frameworks, Schwarz methods are used to smooth and correct the approximation in some subdomains of the entire computational domain. However, as we already mentioned, after one Schwarz smoothing iteration the residuals are generally predominant on the interfaces, and in some cases even zero away from the interfaces. This remark is the key starting point of our work. We introduce for the first time so-called two-level and multilevel DD substructured methods. We call these methods the Geometric 2-level Substructured (G2S) method and the Geometric Multilevel Substructured (GMS) method. The term "substructured" indicates that iterations and coarse correction steps are defined on the interfaces (or, more precisely, on the substructures) of the domain decomposition (note that volumetric subdomain solves are still required to apply the smoother). In this respect, our methods are defined in the same spirit as two-level methods whose coarse spaces are extensions in volume of interface basis functions: they attempt to correct the residual only where it is truly necessary. The G2S method is essentially a two-grid parallel Schwarz method defined on the substructures, for which the coarse correction is performed on coarser interface grids. The GMS method is the extension of the G2S method to a multilevel framework. In other words, with the G2S and GMS methods we propose a new methodology that attempts to make the best use of Schwarz smoothers in the context of two-grid and multi-grid methods.
Direct numerical experiments show that these methods converge in fewer iterations than classical two-grid methods defined in volume and using a Schwarz smoother. In many cases, this improvement in terms of iteration count is significant. Moreover, our new methods have further advantages. On the one hand, like classical multi-grid methods, the G2S method does not require the explicit construction of coarse spaces, and it permits a multilevel extension, which is desirable when the dimension of the coarse space becomes too large. On the other hand, since the entire solution process is defined on the substructures, less memory storage is required, and it is not necessary to store the entire approximation array at each point of the (discrete) domain. For a three-dimensional problem with mesh size h, a discrete substructure array is of size O(1/h^2). This is much smaller than O(1/h^3), which is the size of an array corresponding to an approximation in volume. For this reason, the resulting interface restriction and prolongation operations are generally much cheaper, and the dimension of the coarse space is much smaller. This paper is organized as follows. In Section 2, we formulate the classical parallel Schwarz method in a substructured form. This is done at the continuous level and represents the starting point for the G2S method introduced in Section 3, where a convergence analysis for two subdomains in 2D is also presented. In Section 4, we discuss implementation details and multilevel extensions of the G2S method. Extensive numerical experiments are presented in Section 5, where the robustness of the proposed methods with respect to mesh refinement and physical parameters is studied. Finally, we present our conclusions in Section 6.

Substructured Schwarz domain decomposition methods
Consider a bounded Lipschitz domain Ω ⊂ R^d for d ∈ {2, 3}, a general second-order linear elliptic operator L and a function f ∈ L^2(Ω). Our goal is to introduce new domain decomposition-based methods for the efficient numerical solution of the general linear elliptic problem

L u = f in Ω,  u = 0 on ∂Ω,  (1)

which we assume to be uniquely solvable with u ∈ H^1_0(Ω). To formulate our methods, we need to fix some notation. Given a bounded set Γ with boundary ∂Γ, we denote by ρ_Γ(x) the distance of x ∈ Γ from ∂Γ. The space H^{1/2}_{00}(Γ) is then defined as

H^{1/2}_{00}(Γ) := { v ∈ H^{1/2}(Γ) : ρ_Γ^{-1/2} v ∈ L^2(Γ) },

and it is also known as the Lions-Magenes space; see, e.g., [46,48,51]. Equivalently, H^{1/2}_{00}(Γ) can be defined as the space of functions in H^{1/2}(Γ) whose extensions by zero to a superset Γ' of Γ are in H^{1/2}(Γ'); see, e.g., [51].
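The role of the weight ρ_Γ is made explicit by the standard norm of the Lions-Magenes space (see, e.g., [51]):

```latex
\|v\|_{H^{1/2}_{00}(\Gamma)}^2
  \;=\; \|v\|_{H^{1/2}(\Gamma)}^2
  \;+\; \int_{\Gamma} \frac{|v(x)|^2}{\rho_\Gamma(x)}\,\mathrm{d}x .
```

The second term is the reason why functions in this space can be extended by zero without losing H^{1/2} regularity.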
Next, consider a decomposition of Ω into N overlapping Lipschitz subdomains Ω_j, that is Ω = ∪_{j∈I} Ω_j with I := {1, 2, . . . , N}. For any j ∈ I, we define the set of neighboring indices N_j := {ℓ ∈ I : Ω_j ∩ ∂Ω_ℓ ≠ ∅}. Notice that j ∉ N_j and ∪_{j∈I} N_j = I. Given a j ∈ I, we introduce the substructure of Ω_j defined as S_j := ∪_{ℓ∈N_j} (∂Ω_ℓ ∩ Ω_j), that is the union of all portions of the boundaries ∂Ω_ℓ, ℓ ∈ N_j, intersecting Ω_j. The sets S_j are open, with closures S_j ∪ ∂S_j. Figure 1 provides an illustration of the substructures corresponding to a commonly used decomposition of a rectangular domain. We denote by E^0_j : L^2(S_j) → L^2(S), with S := ∪_{j∈I} S_j, the extension-by-zero operator. Now, we consider a set of continuous functions χ_j : S_j → [0, 1], j = 1, . . . , N, such that χ_j(x) = 0 for x ∈ ∂S_j \ ∂Ω, and Σ_{j∈I} E^0_j χ_j ≡ 1, which means that the functions χ_j form a partition of unity. Further, we assume that the functions χ_j, j ∈ I, satisfy a suitable regularity condition. This is satisfied, for example, in the case of Fig. 1 with piecewise linear partition of unity functions χ_j.
For any j ∈ I, we define Γ^int_j := ∂Ω_j ∩ (∪_{ℓ∈N_j} Ω_ℓ) and introduce corresponding trace and restriction operators τ_j. It is well known that (1) is equivalent to the domain decomposition system (3) (see, e.g., [48]), where f_j ∈ L^2(Ω_j) is the restriction of f to Ω_j. Notice that χ_j τ_j u_j lies in H^{1/2}_{00}(S_j). Given a j ∈ I such that ∂Ω_j \ Γ^int_j ≠ ∅, we define an extension operator E_j acting on pairs (v, f_j) with v ∈ H^{1/2}(Γ^int_j). The domain decomposition system (3) can then be written as (6). If we define v_j := χ_j τ_j u_j, j ∈ I, then system (6) becomes (7), where g_j := χ_j τ_j E(0, f_j) and the operators G_{j,ℓ} map between the trace spaces H^{1/2}_{00}(S_ℓ) and H^{1/2}_{00}(S_j). System (7) is the substructured form of (3). The equivalence between (3) and (7) is made precise in the following theorem.
Theorem 1 (Relation between (3) and (7)) Let u_j ∈ H^1(Ω_j), j ∈ I, solve (3); then v_j := χ_j τ_j u_j, j ∈ I, solve (7). Conversely, let v_j ∈ H^{1/2}_{00}(S_j), j ∈ I, solve (7); then the functions u_j recovered from the v_j via the extension operators E_j solve (3). Proof The first statement is proved before Theorem 1, where the substructured system (7) is derived. To obtain the second statement, we use (7) and the definition of the operators G_{j,ℓ}. The claim follows by using the resulting equality together with the definitions of u_j and E_j.
Take any function w ∈ H^1_0(Ω) and consider the initialization u^0_j := w|_{Ω_j}, j ∈ I. The parallel Schwarz method (PSM) is then given by (9) for n ∈ N_+, and has the substructured form (10), initialized by v^0_j := χ_j τ_j u^0_j ∈ H^{1/2}_{00}(S_j). Notice that the iteration (10) is well posed in the sense that v^n_j ∈ H^{1/2}_{00}(S_j) for j ∈ I and n ∈ N. Equations (10) and (7) allow us to obtain the substructured PSM in error form (11) for n ∈ N_+, where e^n_j := v_j − v^n_j, for j ∈ I and n ∈ N. Equation (7) can be written in the matrix form (12), A v = b, where I_{d,j} are the identities on L^2(S_j), j ∈ I. Similarly, we define the block operator G collecting the operators G_{j,ℓ}, so that A = I − G. Moreover, we wish to remark that neither the operator A nor G is necessarily symmetric.
If the iteration v n = Gv n−1 + b converges, then the limit is the solution to the problem Av = b. From a numerical point of view, this is not necessarily true if the (discretized) subproblems (9) are not solved exactly. For this reason, we assume in what follows that the subproblems (9) are always solved exactly.
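To make the fixed-point structure concrete, here is a minimal numerical sketch. The matrix G below is a generic contraction standing in for the substructured operator (whose action would, in practice, require exact subdomain solves); all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A generic contraction standing in for the substructured operator:
# scale a random matrix so that its spectral radius is 0.5 < 1.
G = rng.standard_normal((n, n))
G *= 0.5 / max(abs(np.linalg.eigvals(G)))
b = rng.standard_normal(n)

# Stationary iteration v^n = G v^{n-1} + b ...
v = np.zeros(n)
for _ in range(200):
    v = G @ v + b

# ... converges to the solution of A v = b with A = I - G.
assert np.allclose(v, np.linalg.solve(np.eye(n) - G, b))
```

The assertion at the end checks exactly the statement above: the limit of the iteration solves A v = b.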

G2S: geometric two-level substructured DD method
In this section, we introduce our G2S method. The main drawback of many two-level DD methods (including our two-level G2S method) is that the dimension of the coarse space can grow as the number of subdomains increases. This situation becomes even worse if the basis functions are not "good enough", a fact that would require an even larger coarse space. In this case, an extension from the two-level to a multilevel framework is desirable. These comments lead to the following questions. Is it possible to avoid the explicit construction of a coarse space? Is there any practical way to define a coarse space implicitly? Can one define a framework in which an extension of the two-level method to a multilevel framework is possible and easy?
In this section, we answer the above questions by introducing the so-called Geometric 2-level Substructured (G2S) method, which is a two-grid-type method (allowing a multi-grid generalization). This is detailed in Section 3.1. The corresponding convergence analysis for a two-subdomain case is presented in Section 3.2.2.

Description of the G2S method
Let us consider a discretization of the substructures such that S j is approximated by a mesh of N j points, j ∈ I. The discrete substructures are denoted by S N j j , j ∈ I. An example is given in Fig. 2 (left).
Moreover, we set N_s := Σ_{j∈I} N_j. The corresponding finite-dimensional discretizations of the operators G_{j,ℓ} in (8) are denoted by G_{h,j,ℓ} ∈ R^{N_j×N_ℓ}. Similarly as in (12), we define the block operators A_h ∈ R^{N_s×N_s} and G_h ∈ R^{N_s×N_s}, where I_{h,j} ∈ R^{N_j×N_j} are identity matrices. Notice that A_h = I_h − G_h, where I_h = diag(I_{h,1}, . . . , I_{h,N}). Therefore, the substructured problem A v = b becomes A_h v = b_h, and the PSM is then the stationary iteration (14). The matrices G_h and A_h = I_h − G_h are not necessarily symmetric and are never assembled explicitly. Instead, their action on given vectors is computed directly. Notice that the computation of the action of G_{h,j,ℓ} on a given vector requires a subdomain solve. We insist on the fact that this subdomain solve is performed exactly. Furthermore, if the discrete PSM (14) converges, then ρ(G_h) < 1 and the matrix A_h is invertible. Next, we introduce coarser discretizations S^{M_j}_j, j ∈ I, where the j-th substructure is discretized with M_j < N_j points. An example is given in Fig. 2 (right). The total number of discrete coarse points is M_s := Σ_{j∈I} M_j. For each j ∈ I we introduce restriction and prolongation matrices R_j ∈ R^{M_j×N_j} and P_j ∈ R^{N_j×M_j}. These could be classical interpolation operators used, e.g., in multi-grid methods. If, for example, S_j is a one-dimensional interval, then the prolongation matrix can be chosen as the linear interpolation matrix given in (15), and the corresponding restriction matrix is the full weighting restriction operator R_j := (1/2) P_j^T. The global restriction and prolongation matrices are defined as R := diag(R_1, . . . , R_N) ∈ R^{M_s×N_s} and P := diag(P_1, . . . , P_N) ∈ R^{N_s×M_s}. The restriction of A_h to the coarse level is then defined as A_{2h} := R A_h P. Notice that this matrix can be either precomputed exactly or assembled in an approximate way. For more details see Section 4.1.
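As an illustration, the following sketch builds the one-dimensional linear interpolation prolongation of the kind given in (15) and the full weighting restriction R_j = (1/2) P_j^T, for nested interface grids with N_j = 2 M_j + 1; the function name and the sizes are illustrative, not the paper's:

```python
import numpy as np

def prolongation_1d(m):
    """Linear interpolation from m coarse interface points to n = 2*m + 1
    fine interface points (nested 1D grids, boundary points excluded)."""
    n = 2 * m + 1
    P = np.zeros((n, m))
    for j in range(m):
        i = 2 * j + 1              # fine index of the j-th coarse point
        P[i - 1, j] = 0.5
        P[i, j] = 1.0
        P[i + 1, j] = 0.5
    return P

P1 = prolongation_1d(3)            # M_j = 3 coarse, N_j = 7 fine points
R1 = 0.5 * P1.T                    # full weighting restriction R_j = P_j^T / 2

# Block-diagonal global operators for N = 2 substructures, as in the text.
P = np.kron(np.eye(2), P1)
R = np.kron(np.eye(2), R1)
```

Note that the global operators act interface-by-interface, which is why they remain much smaller than their volumetric counterparts.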

Remark 1
Notice that, for the definition of the G2S method, fine and coarse meshes need not be nested and the sets S^{M_j}_j need not coincide on overlapping areas. In this manuscript, we work with nested meshes. The case of non-nested meshes is beyond the scope of this paper and will be the subject of future work. Moreover, it is natural to include cross points in both fine and coarse discrete substructures, since these are generally the corners of the (overlapping) subdomains (see, e.g., Figs. 2 and 7).
The G2S procedure is defined by Algorithm 1, where n_1 and n_2 are the numbers of pre- and post-smoothing steps. This is a classical two-grid type iteration, but instead of the classical grids in volume, we consider two discrete levels on the substructures. This has the advantage that all restriction and interpolation operations involve much smaller coarse problems. More details are given in Section 4.1. We insist on the fact that the G2S method does not require the explicit construction of a coarse space V_c, but exploits directly a discretization of the interfaces. Moreover, it is clear that a simple recursion allows us to embed the G2S method into a multi-grid framework. Further implementation details are discussed in Section 4.
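A generic two-grid cycle of the kind used in Algorithm 1 (pre-smoothing, coarse residual correction with the Galerkin matrix A_{2h} = R A_h P, post-smoothing) can be sketched as follows. The 1D Laplacian and the damped Richardson smoother below are illustrative stand-ins for the substructured operator and the Schwarz smoother, not the paper's operators:

```python
import numpy as np

def two_grid_cycle(A, A2h, R, P, smooth, b, v, n1=1, n2=1):
    """One two-grid cycle: n1 pre-smoothing steps, a coarse residual
    correction with A2h = R @ A @ P, then n2 post-smoothing steps."""
    for _ in range(n1):
        v = smooth(v, b)                          # pre-smoothing
    e_c = np.linalg.solve(A2h, R @ (b - A @ v))   # coarse correction
    v = v + P @ e_c
    for _ in range(n2):
        v = smooth(v, b)                          # post-smoothing
    return v

# Illustrative fine problem: 1D Laplacian stencil.
m, n = 7, 15
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

P = np.zeros((n, m))                              # linear interpolation
for j in range(m):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
R = 0.5 * P.T                                     # full weighting
A2h = R @ A @ P

smooth = lambda v, b: v + 0.25 * (b - A @ v)      # damped Richardson stand-in
b, v = np.ones(n), np.zeros(n)
for _ in range(50):
    v = two_grid_cycle(A, A2h, R, P, smooth, b, v)
assert np.allclose(v, np.linalg.solve(A, b))
```

Replacing the recursive coarse solve `np.linalg.solve(A2h, ...)` by another two-grid cycle on the coarse level is exactly the recursion mentioned above that yields a multi-grid (GMS-type) method.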
This iteration can be written in the form (16), where M is a matrix which acts on the right-hand side vector b_h and which can be regarded as the preconditioner corresponding to our two-level method. In error form, the iteration (16) is governed by an iteration matrix, denoted by T_h in what follows.

Analysis of the G2S method
In this section, we analyze the convergence of the G2S method. To do so, we recall our model problem (1) and assume a two-subdomain decomposition Ω = Ω 1 ∪ Ω 2 such that the two substructures S 1 and S 2 are two segments of the same length L. Notice that in this case the substructures coincide with the interfaces. An example for Ω equal to a rectangle is given in Fig. 3.
For a given ℓ ∈ N_+, ℓ ≥ 2, we discretize (1) using a uniform grid of N_h = 2^ℓ − 1 points on each substructure (without counting the two points on ∂Ω), so that the mesh size is h = L/(N_h + 1); here N_1 = N_2 = N_h in the notation of Section 3.1. We also introduce a coarser mesh of N_c = 2^{ℓ−1} − 1 points on each substructure, with mesh size h_c = L/(N_c + 1). We define the geometric prolongation operator P ∈ R^{2N_h×2N_c} as P := diag(P_1, P_2), where P_1 = P_2 is the matrix given in (15), and the restriction operator R := diag(R_1, R_2), where R_1 = R_2 := (1/2) P_1^T is the full weighting restriction matrix. Due to the special decomposition into two subdomains, let us simplify the notation by defining G_{h,1} := G_{h,1,2} and G_{h,2} := G_{h,2,1}; that is, the action of G_{h,j} represents a subdomain solve in the j-th subdomain. We suppose that the operators G_{h,1} and G_{h,2} have eigenvectors ψ_k with eigenvalues ρ_j(k), k = 1, . . . , N_h, j = 1, 2. Here, ψ_k are discrete Fourier modes given by (ψ_k)_j = sin(kπhj), for j, k = 1, . . . , N_h. Notice that ψ_ℓ^T ψ_k = δ_{ℓ,k} (N_h + 1)/2, with δ_{ℓ,k} the Kronecker delta. It is well known that the actions of R and P on the combination of a low-frequency mode ψ_k with its high-frequency companion ψ_{k'}, where k' = N_h − k + 1, are given by the relations (17), where the vectors φ_k are Fourier modes on the coarse grid (notice that the two points on ∂Ω are excluded); see, e.g., [13,42]. As before, the coarse matrix is A_{2h} = R A_h P, and the G2S iteration matrix is denoted by T_h. In the following subsections, we prove that the G2S method is well posed and convergent. Well-posedness follows from the invertibility of the coarse matrix A_{2h}, which is proved in Section 3.2.1 along with an interpretation of the G2S method. A detailed convergence analysis is presented in Section 3.2.2, where sharp estimates of the spectral radius of T_h are derived under certain assumptions. Finally, in Section 3.2.3 we discuss the relations between our G2S method and a classical two-grid method in volume using the PSM as a smoother.
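The aliasing relations of the type collected in (17) can be checked numerically. The sketch below verifies, on one interface, that full weighting maps ψ_k and its companion ψ_{k'} onto multiples of the coarse mode φ_k, and that prolongation maps φ_k onto a combination of the pair; the weights cos^2(kπh/2) and sin^2(kπh/2) are the standard ones from multi-grid Fourier analysis, stated here as an assumption:

```python
import numpy as np

l = 4
Nh, Nc = 2**l - 1, 2**(l - 1) - 1
h = 1.0 / (Nh + 1)

P = np.zeros((Nh, Nc))                 # linear interpolation
for j in range(Nc):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
R = 0.5 * P.T                          # full weighting restriction

k = 3                                  # a low-frequency mode ...
kk = Nh + 1 - k                        # ... and its high-frequency companion
x_f = np.arange(1, Nh + 1) * h         # fine grid (endpoints excluded)
x_c = np.arange(1, Nc + 1) * 2 * h     # coarse grid
psi_k = np.sin(k * np.pi * x_f)
psi_kk = np.sin(kk * np.pi * x_f)
phi_k = np.sin(k * np.pi * x_c)

c2 = np.cos(k * np.pi * h / 2) ** 2    # aliasing weights (assumed form)
s2 = np.sin(k * np.pi * h / 2) ** 2
assert np.allclose(R @ psi_k, c2 * phi_k)
assert np.allclose(R @ psi_kk, -s2 * phi_k)
assert np.allclose(P @ phi_k, c2 * psi_k - s2 * psi_kk)
```

The three assertions express the key fact behind the analysis: on the coarse level, a mode pair (ψ_k, ψ_{k'}) collapses onto a single coarse mode φ_k, so the iteration can be studied pairwise.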

Interpretation of G2S as a general two-level method
Let us begin by considering any invertible matrix U ∈ R^{2N_c×2N_c} and the transformed operators P_U := P U, R_U := U^{−1} R and A_{2h,U} := R_U A_h P_U. Notice that the columns of P_U = P U span the same space as the columns of P, namely the coarse space V_c defined in (19), where the relation (17) is used. This means that the G2S method can be written as a two-level method characterized by an iteration operator T_h defined via the prolongation and restriction operators P_U and R_U. Moreover, in this case the actions of P_U and R_U on two vectors can be expressed, for any v, w ∈ R^{N_c} and any f, g ∈ R^{N_h}, in terms of the usual Euclidean scalar product. Now, we turn our attention to the matrix A_{2h}, whose invertibility is proved in the following lemmas.

Lemma 1 (Invertibility of a coarse operator) Let A be an invertible matrix, let P and R be the prolongation and restriction matrices defined in (21), and let V_c be the space spanned by the columns of P. Then A_c := RAP has full rank if and only if P_{V_c}(Av) ≠ 0 for any v ∈ V_c \ {0}, where P_{V_c} denotes the orthogonal projection onto V_c.

Proof We first show that if P_{V_c}(Av) ≠ 0 for any v ∈ V_c \ {0}, then A_c = RAP has full rank. This result follows from the rank-nullity theorem, if we show that the only element in the kernel of A_c is the zero vector. To do so, we recall the definitions of P and R given in (21). Clearly, Pz = 0 if and only if z = 0. For any z ∈ R^{2m} \ {0}, the vector Pz is in V_c \ {0}. Since A is invertible, APz = 0 if and only if z = 0. Moreover, by our assumption it holds that P_{V_c}(APz) ≠ 0 for z ≠ 0. Now, we notice that A_c z = R(APz), and R annihilates only vectors whose projection onto V_c vanishes; hence A_c z ≠ 0 for all z ≠ 0, that is, A_c has full rank. Now we show that, if A_c = RAP has full rank, then P_{V_c}(Av) ≠ 0 for any v ∈ V_c \ {0}. We proceed by contraposition and prove that if there exists a v ∈ V_c \ {0} such that P_{V_c}(Av) = 0, then A_c does not have full rank. Since v ∈ V_c \ {0}, there is a z ≠ 0 with v = Pz. We can now write that A_c z = R(APz) = 0, which implies that A_c does not have full rank.

Lemma 2 (Invertibility of A_{2h}) Under the assumptions of this section on the eigenvalues ρ_j(k) of the operators G_{h,j}, the coarse matrix A_{2h} = R A_h P is invertible.

Proof We recall that A_{2h} = R A_h P, and we wish to prove that for any z ∈ V_c \ {0} (with V_c defined in (19)) it holds that P_{V_c}(A_h z) ≠ 0; we then invoke Lemma 1. Here the orthogonality is understood with respect to the classical scalar product of R^{2N_h}. First, it is possible to characterize the orthogonal complement of V_c explicitly. Since the vectors spanning V_c in (19) are orthogonal, we have P_{V_c}(w) = P (P^T P)^{−1} P^T w for any w ∈ R^{2N_h}. Since P has full rank, to prove that P_{V_c}(A_h z) ≠ 0 for any z ∈ V_c \ {0}, it is sufficient to show that P^T A_h v ≠ 0 holds for the basis elements v of V_c, that is, elements of the form [(P_1 φ_k)^T, (P_1 φ_ℓ)^T]^T. Therefore, we use (17) and compute the corresponding products for any k, ℓ = 1, . . . , N_c. Inserting the resulting equalities into (22), multiplying from the left with [(P_1 φ_k)^T, (P_1 φ_ℓ)^T] and using (17) together with the orthogonality relation ψ_ℓ^T ψ_k = δ_{ℓ,k} (N_h + 1)/2, we obtain explicit expressions in the two cases k ≠ ℓ and k = ℓ. A direct calculation using the assumptions on ρ_j(k) shows that these are nonzero, and the claim follows from Lemma 1.

Convergence of the G2S method
The previous section focused on the well-posedness of the method. In particular, we proved Lemma 2, which guarantees that A_{2h} is invertible and hence that the G2S method is well posed. In this section, we turn our attention to the analysis of the G2S convergence behavior. This is done by studying the spectral properties of the G2S iteration operator. Our first key result is the following technical lemma.

Lemma 3 (Block representation of T_h) Consider the G2S iteration matrix T_h. Its action on the subspaces spanned by the pairs of modes (ψ_k, ψ_{k'}), with k' = N_h − k + 1, is represented by matrices G_k ∈ R^{4×4} whose entries are determined by n_1, n_2 and the eigenvalues ρ_1(k), ρ_2(k); the corresponding diagonal factors D_n(k) take different forms for n even and for n odd, respectively.

Proof We consider the case in which both n_1 and n_2 are even; the other cases can be obtained by similar arguments. Since n_1 is even, the action of G_h^{n_1} on each pair (ψ_k, ψ_{k'}) can be computed explicitly and is diagonal in the subdomain index. Similarly, we obtain the action of G_h^{n_2}.

Moreover, direct calculations based on (17) reveal the action of the smoothing steps combined with R and P on the mode pairs. Let us now study the action of the coarse matrix A_{2h} on the vectors [φ_k^T, 0]^T and [0, φ_k^T]^T, using (17), (24) and the definition A_{2h} = R A_h P. A direct calculation reveals that the eigenvalues of the resulting 2×2 blocks Λ_2(k) are nonzero for k = 1, . . . , N_c. Hence, Λ_2(k) is invertible and, using (26), we can invert the coarse correction on each pair of modes. Summarizing our results and using the definition of T_h, we conclude that T_h acts on each mode pair through the matrices G_k, and our claim follows.
Using Lemma 3, it is possible to factorize the iteration matrix T h . This factorization is obtained in the following theorem.

Theorem 2 (Factorization of the iteration matrix T_h) There exists an invertible matrix Q such that Q^{−1} T_h Q is block diagonal, with blocks given by the matrices G_k ∈ R^{4×4} defined in Lemma 3 and by scalar factors γ_j((N_h+1)/2), j = 1, 2, which depend on n_1, n_2 and the eigenvalues ρ_j((N_h+1)/2) of G_{h,j}. Notice that the middle frequency k = (N_h+1)/2 coincides with its own companion k' = N_h − k + 1 and is therefore treated separately through the factors γ_j.

Proof We define the invertible matrix Q whose columns are built from the mode pairs (ψ_k, ψ_{k'}); the expressions of γ_1((N_h+1)/2) and γ_2((N_h+1)/2) depend on n_1 and n_2 and take a particularly simple form if n_1 + n_2 is an even number. A direct computation then shows that T_h Q = Q B, where B denotes the block diagonal matrix above, and our claim follows.
The factorization of T_h proved in Theorem 2 allows one to obtain accurate convergence results for the G2S method. Clearly, an optimal result would be a direct calculation of the spectral radii of the matrices G_k. However, this is in general a difficult task that requires cumbersome calculations. Nevertheless, in Theorem 3 we obtain an explicit expression for the spectral radii of the G_k under some reasonable assumptions that are in general satisfied in the case of Schwarz methods and symmetric decompositions; see, e.g., [29, Section 3]. Notice also that Theorem 3 guarantees that only one (pre- or post-) smoothing step is necessary for the G2S method to converge.

Theorem 3 (Convergence factor of the G2S method) Assume that ρ_1(k) = ρ_2(k) =: ρ(k) ≥ 0 for any k and that ρ(k) is a decreasing function of k. Then the convergence factor of the G2S method can be expressed explicitly in terms of ρ and n_1 + n_2.

Proof The convergence factor of the G2S method is given by the spectral radius of the iteration matrix T_h. Theorem 2 implies that ρ(T_h) is the maximum over k of the spectral radii of the blocks G_k. Regardless of the values of n_1 and n_2, direct calculations show that the matrices G_k have four eigenvalues λ_1(k), . . . , λ_4(k). Using the monotonicity of ρ(k), these eigenvalues can be compared, and the result follows by identifying λ_3 as the eigenvalue of largest modulus.

Two-level substructured and volumetric methods
At this stage, it is fair to pose the following questions: What is the difference between our G2S and other two-level DD methods? Is our G2S different from a classical two-grid method that uses a PSM as smoother? Is there any relation between these two apparently similar approaches? The answers are given in this section. Let A_v u = f be a discretization of our problem (1). In particular, A_v ∈ R^{N_v×N_v} is the discretization of the elliptic operator L, while u ∈ R^{N_v} and f ∈ R^{N_v} are the discrete counterparts of the solution u and the right-hand side function f. Consider splittings of the matrix A_v involving subdomain matrices A_j ∈ R^{N_{a_j}×N_{a_j}} for j = 1, 2. Notice that these correspond to a two-subdomain decomposition. We assume that A_v, A_1 and A_2 are invertible. The splittings also involve restriction operators that take as input vectors of sizes N_v − N_{a_1} and N_v − N_{a_2} and return as output substructure vectors of sizes N_1 (substructure S_1) and N_2 (substructure S_2). The two matrices E_1 ∈ R^{N_{a_1}×N_1} and E_2 ∈ R^{N_{a_2}×N_2} are extension-by-zero operators. In order to obtain a discrete substructured problem, we introduce the augmented system (27), where u_j, f_j ∈ R^{N_{a_j}}, for j = 1, 2. The matrices R_1 ∈ R^{N_1×N_{a_2}} and R_2 ∈ R^{N_2×N_{a_1}} are restriction operators that map volume vectors, of sizes N_{a_2} (second subdomain) and N_{a_1} (first subdomain), respectively, to substructure vectors, of sizes N_1 (substructure S_1) and N_2 (substructure S_2), respectively. Notice that R_j R_j^T = I_{N_j}, with I_{N_j} the identity of size N_j, for j = 1, 2. Moreover, we define N_s := N_1 + N_2 and N_a := N_{a_1} + N_{a_2}. The substructure vectors v_{21} := R_1 u_2 and v_{12} := R_2 u_1 solve the discrete substructured system (28). The vectors v_{12} and v_{21} are restrictions to the substructures S_2 and S_1 of the solution vectors u_1 and u_2, and (28) is the substructured form of (27). Notice that (28) is the discrete counterpart of the substructured problem (10).
The block-Jacobi method applied to (27) and (28) leads to the iteration matrices G_a and G_h, respectively, where G_h is the discretization of G introduced in Sections 3.1 and 3.2.
Let us now introduce the matrices T, D and E, where T ∈ R^{N_s×N_a} restricts augmented volume vectors to the substructures. It is easy to verify the relations

T T^T = I_{N_s},   A_h T = T D A_a,   G_a = −D E T   and   G_h T = T G_a.
In particular, the relation T T^T = I_{N_s} is trivial, and A_h T = T D A_a can be obtained by a direct calculation. A similar calculation allows us to obtain that G_h T = T G_a. Since the matrices G_h and G_a are two different representations of the PSM, one expects that their spectra coincide up to zero eigenvalues. This is shown in the next lemma.

Lemma 4 The matrices G h ∈ R N s ×N s and G a ∈ R N a ×N a have the same nonzero eigenvalues, that is σ (G h ) = σ (G a ) \ {0}.
Proof Recalling the structure of G_a, one can clearly see that rank(G_a) = N_s, because the matrices E_j R_j have rank N_j for j = 1, 2. Hence, G_a has at most N_s nonzero eigenvalues. Take any eigenvector v ∈ R^{N_a} of G_a with eigenvalue λ ≠ 0. We note that T v ≠ 0, since otherwise we would have G_a v = −D E T v = 0, which contradicts the hypothesis λ ≠ 0. Using the last relation in (29), we compute G_h (T v) = T G_a v = λ T v. Hence (T v, λ) is an eigenpair of G_h. Since this holds for any eigenpair (v, λ) of G_a with λ ≠ 0, the result follows.
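The mechanism of this proof, namely an intertwining relation G_h T = T G_a with T T^T = I transferring nonzero eigenpairs, can be illustrated numerically with synthetic matrices; the construction below (G_a := T^T X T with a random X) is purely illustrative and is not the paper's operators:

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Na = 4, 10

# T in R^{Ns x Na} with orthonormal rows, so that T @ T.T = I_{Ns};
# X is arbitrary.  Setting G_a := T.T @ X @ T and G_h := X gives the
# intertwining relation G_h @ T = T @ G_a used in the proof.
T = np.linalg.qr(rng.standard_normal((Na, Ns)))[0].T
X = rng.standard_normal((Ns, Ns))
G_a = T.T @ X @ T
G_h = X
assert np.allclose(T @ T.T, np.eye(Ns))
assert np.allclose(G_h @ T, T @ G_a)

# Every nonzero eigenvalue of G_a is an eigenvalue of G_h (cf. Lemma 4).
ev_a = np.linalg.eigvals(G_a)
ev_h = np.linalg.eigvals(G_h)
for lam in ev_a[np.abs(ev_a) > 1e-10]:
    assert np.min(np.abs(ev_h - lam)) < 1e-8
```

The loop at the end checks exactly the statement of Lemma 4 for this synthetic pair: the nonzero spectrum of the large matrix G_a is contained in the spectrum of the small matrix G_h.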
Let us now consider arbitrary restriction and prolongation operators R_s and P_s (with R_s = P_s^T). The corresponding discrete substructured two-level iteration matrix G^{2L}_h is then given by (30). Our goal is to find a volumetric two-level iteration operator G^{2L}_a that has the same spectrum as G^{2L}_h (up to zero eigenvalues). Such a volumetric operator must be formulated for the augmented system (27) and based on the iteration matrix G_a. Let us recall (29) and compute

G^{2L}_h T = (I_{N_s} − P_s (R_s A_h P_s)^{−1} R_s A_h) G_h T
          = (I_{N_s} − P_s (R_s A_h P_s)^{−1} R_s A_h) T G_a
          = (T − P_s (R_s A_h P_s)^{−1} R_s A_h T) G_a
          = T (I_{N_a} − T^T P_s (R_s A_h P_s)^{−1} R_s A_h T) G_a
          = T (I_{N_a} − T^T P_s (R_s A_h T T^T P_s)^{−1} R_s T D A_a) G_a
          = T (I_{N_a} − T^T P_s (R_s T D A_a T^T P_s)^{−1} R_s T D A_a) G_a,
where P_a := T^T P_s, R_a := R_s T (so that R_a = P_a^T) and G^{2L}_a := (I_{N_a} − P_a (R_a D A_a P_a)^{−1} R_a D A_a) G_a.
We have obtained that G^{2L}_h T = T G^{2L}_a. Similarly as in the proof of Lemma 4, one can show that σ(G^{2L}_h) = σ(G^{2L}_a) \ {0}. This means that we have found a two-level volumetric iteration operator that is spectrally equivalent to our substructured two-level operator. Moreover, for any invertible matrix U of matching coarse dimension we can repeat the calculations done in (18) to obtain an equivalent operator in which P_a is replaced by P_a U and R_a by U^{−1} R_a (with U^{−1} R_a = (P_a U)^T if U is orthogonal). This means that there exist many two-level DD methods in volume that are equivalent to our substructured two-level methods. We can summarize the obtained result in the following theorem.

Theorem 4 (Volumetric formulation of substructured methods) Consider the substructured two-level iteration operator G^{2L}_h given in (30) and denote its spectrum by σ(G^{2L}_h). For any invertible matrix U of matching coarse dimension, the spectrum of the volumetric two-level matrix obtained from G^{2L}_a by replacing P_a with P_a U and R_a with U^{−1} R_a satisfies σ(G^{2L}_h) = σ(G^{2L}_a) \ {0}.

The matrix G^{2L}_a has a special structure. Since D is the block-Jacobi preconditioner for the augmented system (27), one can say that G^{2L}_a corresponds to a two-level method applied to the preconditioned system D A_a u_a = D f_a, in a similar spirit to the smoothed aggregation method defined in [5, Section 2].
Let us now focus on the question: what is the relation between our G2S method and a two-grid (volumetric) method that uses the same smoother (PSM)? A two-grid method in volume applied to the augmented system (27) would correspond to an iteration operator G^{2L}_{a,v} of the form G^{2L}_{a,v} = (I_{N_a} − P_{a,v} (R_{a,v} A_a P_{a,v})^{−1} R_{a,v} A_a) G_a, where P_{a,v} and R_{a,v} denote volumetric prolongation and restriction operators.
Natural choices for $\widehat P_a$ and $\widehat R_a$ are the usual (volumetric) prolongation and restriction operators. For example, for a one-dimensional problem a natural choice is the prolongation matrix $\widehat P_a$ given in (15) and $\widehat R_a = \frac{1}{2}\widehat P_a^\top$. On the other hand, our prolongation operator $P_a := T^\top P_s$ is an extension by zero of a coarse substructure vector to a fine volumetric vector. Moreover, $R_a := R_s T$ restricts a fine volumetric vector $v$ to a coarse substructure vector by interpolating only the components of $v$ belonging to the (fine) substructures. Another crucial difference is that $G_a^{2L}$ is constructed on $D A_a$, while $\widehat G_a^{2L}$ is obtained using the matrix $A_a$. Therefore, $\widehat G_a^{2L}$ is constructed on the original augmented system $A_a u_a = f_a$, while $G_a^{2L}$ is defined over the preconditioned system $D A_a u_a = D f_a$.
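The structure of $P_a$ and $R_a$ can be illustrated with a small sketch (toy sizes and a hypothetical set of substructure indices), where $T$ selects the substructure rows of a volumetric vector, so that $T^\top$ extends by zero:

```python
import numpy as np

# Toy augmented space of size N_a = 10; suppose the (fine) substructure
# occupies indices 3, 4, 8 (a hypothetical choice, for illustration only).
N_a, sub = 10, np.array([3, 4, 8])
T = np.eye(N_a)[sub]                 # trace operator: volume -> substructure
assert np.allclose(T @ T.T, np.eye(len(sub)))  # T has orthonormal rows

P_s = np.array([[1.0], [1.0], [1.0]])  # toy substructured prolongation (M_s = 1)
P_a = T.T @ P_s                      # extension by zero into the volume

# Every row of P_a outside the substructure is identically zero.
mask = np.ones(N_a, dtype=bool); mask[sub] = False
assert np.all(P_a[mask] == 0.0)

R_s = P_s.T                          # with R_s = P_s^T, we get R_a = R_s T = P_a^T
R_a = R_s @ T
assert np.allclose(R_a, P_a.T)
```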
These facts indicate clearly that our method is quite different from a classical volumetric two-grid method that uses the PSM as smoother. This is also confirmed by the numerical results shown in Fig. 4, where the spectral radii of three different two-level iteration matrices are depicted.
In particular, we consider the Laplace problem defined on a unit square Ω (of side L = 1). This domain is decomposed into two overlapping rectangles of width $\frac{L}{2} + \delta$; hence, the length of the overlap is $2\delta$. This problem is discretized using a classical second-order finite difference scheme with a uniform grid of size $h = \frac{1}{N_h + 1}$, where $N_h = 2^\ell - 1$. The length of the overlap is $\delta = (N_{ov} + 1)h$, for some positive odd integer $N_{ov}$. We consider three different two-level iteration matrices: $G_h^{2L}$, $\widehat G_a^{2L}$ and $G_{RAS}^{2L}$. The first one, $G_h^{2L}$, is the iteration matrix corresponding to our G2S method. The second one, $\widehat G_a^{2L}$, is the iteration matrix of a two-level method applied to the augmented volumetric system (27). In both cases, the same classical Schwarz method is used as smoother. The third matrix, $G_{RAS}^{2L}$, is the iteration operator of a classical two-grid method applied to the volumetric system $A_v u = f$ and using the RAS method as smoother. In all cases, prolongation and restriction operators correspond to linear interpolation matrices (as in (15)) and to full weighting restriction matrices, respectively. Note that for our G2S method these are one-dimensional operators, while for the other two methods they are two-dimensional operators. In particular, for the augmented system these interpolation and restriction operators take into account the nonzero values of the discrete functions on the substructures. For the two-level RAS method, they are obtained by a two-dimensional extension of (15).
In Fig. 4, we show the spectral radii of $G_h^{2L}$, $\widehat G_a^{2L}$ and $G_{RAS}^{2L}$, obtained by direct numerical computation, as functions of $N_{ov}$, and hence of the size of the overlap. The two figures correspond to two different discretizations. It is clear that our G2S method outperforms the other two methods, which nevertheless also have very small contraction factors. Moreover, by comparing the two plots, we observe that the coarse correction makes all the methods very robust with respect to the number of discretization points.
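The spectral radii in Fig. 4 are obtained by assembling the iteration matrices explicitly and computing their eigenvalues. A minimal one-dimensional analogue (a sketch, not the paper's code: 1D Laplacian, two overlapping subdomains, RAS as the stationary iteration) illustrates both the computation and the benefit of a larger overlap:

```python
import numpy as np

def ras_radius(N=63, n_ov=3):
    """Spectral radius of the RAS iteration matrix for the 1D Laplacian
    with two overlapping subdomains (a 1D analogue of the 2D experiment)."""
    A = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # h^2 * Laplacian
    mid = N // 2
    idx1 = np.arange(0, mid + n_ov)          # overlapping subdomain 1
    idx2 = np.arange(mid - n_ov, N)          # overlapping subdomain 2
    own1 = np.arange(0, mid)                 # nonoverlapping partition
    own2 = np.arange(mid, N)
    Minv = np.zeros((N, N))
    for idx, own in ((idx1, own1), (idx2, own2)):
        R = np.eye(N)[idx]                   # restriction to the subdomain
        E = np.eye(N)[own]                   # "restricted" (owned) rows
        Ainv_loc = np.linalg.inv(R @ A @ R.T)
        # RAS: solve on the overlapping subdomain, keep only the owned part.
        Minv += E.T @ E @ R.T @ Ainv_loc @ R
    G = np.eye(N) - Minv @ A                 # stationary RAS iteration matrix
    return max(abs(np.linalg.eigvals(G)))

# Convergence improves (the radius decreases) as the overlap grows.
r_small, r_big = ras_radius(n_ov=2), ras_radius(n_ov=8)
assert 0.0 < r_big < r_small < 1.0
```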

Implementation details and multilevel algorithm
In Section 4.1, after having explained the pros and cons of substructured and volumetric two-level methods, we reformulate Algorithm 1 in equivalent forms that are essential to make our method computationally efficient. In Section 4.2, we explain how to extend our G2S method to a multi-grid strategy.

A practical form of two-level substructured methods
One of the advantages of our new substructured framework is that a large part of the computations is performed with objects (vectors, matrices, arrays, etc.) that are defined on the substructures and hence have very small sizes compared to their volumetric counterparts. This is clear if one carefully studies Algorithm 1, where, for example, the products $Rr$ and $P v_c$ are performed on substructured vectors. In volumetric two-level methods, the same prolongation and restriction operators involve volumetric quantities; thus their application is more costly, and they are generally more difficult to implement due to the higher dimension.
We now compare the computational costs of one iteration of the G2S method and of a two-grid method in volume that uses the same smoother. Let $N_v$ be the size of the volumetric problem and $N_s$ the size of the substructured problem ($N_s \ll N_v$). The size of each subdomain volumetric problem is $N_{sub}$. The coarse spaces have dimension $M_s$ for the G2S method and $M_v$ for the volumetric method. The restriction and prolongation operators in volume are denoted by $R_v$ and $P_v$. For simplicity, we assume $n_1 = 1$, $n_2 = 0$, that is, one pre-smoothing step and no post-smoothing step.
The computational costs of one iteration are reported in Table 1.

Table 1 Computational cost (C.C.) per iteration. Notice that the smoother in volume is written as a standard stationary method based on a matrix splitting.

The first row of this table corresponds to the smoothing step performed by G2S and by a DD method in volume. Since we assumed that both strategies use the same DD smoother, their computational costs coincide and are equal to $O(\gamma_c(N_{sub}))$, where $\gamma_c$ depends on the choice of the linear solver. For example, for a Poisson problem, one has $\gamma_c(N_{sub}) = N_{sub}\log(N_{sub})$ if a fast Poisson solver is used, or $\gamma_c(N_{sub}) = b N_{sub}$ for sparse banded matrices with bandwidth $b$; see, e.g., [38]. For a general problem, the complexity of sparse direct solvers is a power of $N_{sub}$, which depends on the dimension. Moreover, one could precompute the factorization of the subdomain matrices and use only forward and backward substitutions along the iterations.
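The precomputation of subdomain factorizations mentioned above can be sketched as follows. We use a Cholesky factorization of a toy tridiagonal subdomain matrix (an illustrative stand-in, not the paper's solver), so that each subsequent subdomain solve costs only two triangular substitutions:

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper-triangular U."""
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Offline: factor the (SPD) subdomain matrix once.
n = 40
A_sub = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # toy subdomain matrix
L = np.linalg.cholesky(A_sub)        # A_sub = L L^T

# Online: each smoothing step reuses the factorization (two triangular solves).
rng = np.random.default_rng(1)
for _ in range(3):
    rhs = rng.standard_normal(n)
    x = backward_sub(L.T, forward_sub(L, rhs))
    assert np.allclose(A_sub @ x, rhs)
```

For banded matrices, the factorization and the substitutions inherit the band structure, which is what makes the reuse attractive in practice.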
For simplicity, we assume that restriction and prolongation operations are classical nodal operations whose computational costs grow linearly with the dimension of the problem. Notice that, since $N_s \ll N_v$, and assuming that the same interpolation method is used, the cost in the substructured case is much lower than the corresponding cost in volume; see the last row of Table 1.
Let us now discuss the third row of Table 1, which corresponds to the solution of the coarse problems. Since the dimension of the substructured coarse space is smaller, the G2S method can require much less computational effort for the solution of the coarse problem. We remark that, on the one hand, the coarse matrix $A_{2h}$ is typically block sparse, where the block structure is related to the connectivity among the subdomains (namely, the $j$th block-row of $G_h$ has a sparsity pattern governed by the set $N_j$). Furthermore, for a large class of PDE problems, these blocks admit very accurate low-rank approximations that can make the solution process more efficient; see, e.g., [43] and references therein. On the other hand, $A_{vc}$ is typically a sparse matrix, whose sparsity pattern depends on the discretization method used (e.g., finite differences, finite elements, etc.). In both cases there exist sophisticated algorithms for the solution of the corresponding linear systems; see, e.g., [18,38,43] and references therein. For this reason, we use the two functions $\gamma_s$ and $\gamma_v$ to indicate the computational cost of the coarse solvers. A general direct comparison is problem dependent, can be very complicated, and is beyond the scope of this paper. Nevertheless, we provide in Section 5.1 a detailed analysis for a specific test case.
Let us now turn our attention to the second row of Table 1, which corresponds to the computation of the residual. Here, a volumetric method requires $O(N_v^{\gamma_m})$ operations, where $\gamma_m$ depends on the sparsity structure of $A_v$. For example, if $A_v$ is a second-order finite difference matrix, then $\gamma_m = 1$. In contrast to this favorable situation, the computation of the residual for a G2S method requires the action of $A_h$ on a vector $v^{n+\frac12}$, which in turn requires a subdomain solve that is assumed to cost $O(\gamma_c(N_{sub}))$ (as discussed above). Hence, the G2S method needs two smoothing applications per iteration. If we could avoid this extra cost, then all the other steps of the G2S method would be cheaper, since they are performed on arrays of much smaller sizes. Moreover, we wish to remark that, as we are going to see in Section 5, the G2S method requires in general fewer iterations than the corresponding method in volume. Hence, if we could avoid one of the two smoothing applications in each iteration, we would obtain a method which is faster both in terms of iterations and in terms of computational cost per iteration. To avoid one of the two applications of the smoother in the G2S method, we exploit the special form of the matrix $A_h = I_h - G_h$ and propose two new versions of Algorithm 1. These are called G2S-B1 and G2S-B2 and are given in Algorithms 2 and 3.
These substructured algorithms require only one smoothing step per iteration. Hence, they are potentially cheaper than a two-grid method using the same smoother. Moreover, it turns out that G2S and G2S-B1 are equivalent, and they have the same spectral properties as G2S-B2. These relations are proved in the following theorem.

(b) Algorithm 3 corresponds to a stationary iterative method, whose iteration matrix and corresponding preconditioner $M$ can be identified explicitly. Moreover, Algorithm 3 and Algorithm 2 have the same convergence behavior.
Proof For simplicity, we work with the error equation and thus $b_h = 0$. We call $\tilde v^0$ the output of the first five steps of Algorithm 2 and $v^0$ the output of Algorithm 1. Then, given an initial guess, a direct computation shows that the two outputs coincide. To verify that steps 6-10 of G2S-B1 are equivalent to an iteration of Algorithm 1, let $v^{0,k-1} := v^{1,k-1} + P v_c$ be the output on line 10 of the $(k-1)$-th iteration of the G2S-B1 algorithm. The smoothing step on line 6 of the $k$-th iteration then follows, where we use the definition of $\widetilde P$ and the quantity $w^{k-1}$ computed at the previous iteration. Steps 7-8 are just the residual computation using the identity $A_h = I - G_h$, and the remaining steps are standard. For the second part of the theorem, we write one iteration of Algorithm 3 as a stationary iteration and identify its iteration matrix and preconditioner.

Notice that Algorithm 2 requires two applications of the smoothing operator $G_h$ for the first iteration. The subsequent iterations, given by Steps 6-10, need only one application of $G_h$. Theorem 5 (a) shows that Algorithm 2 is equivalent to Algorithm 1. This means that each iteration of Algorithm 2 after the first one is computationally less expensive than one iteration of a volumetric two-level DD method. Since two-level DD methods generally perform few iterations, it can be important to get rid of the expensive first iteration. For this reason, we introduce Algorithm 3, which overcomes the problem of the first iteration. Theorem 5 (b) guarantees that Algorithm 3 is exactly a G2S method with no pre-smoothing and one post-smoothing step. Moreover, it has the same convergence behavior as Algorithm 2.
We wish to remark that the reformulations G2S-B1 and G2S-B2 require storing the (substructured) matrix $\widetilde P := G_h P$. This matrix is in any case computed in a precomputation phase to assemble the coarse matrix $A_{2h} = R A_h P = R P - R G_h P = R P - R \widetilde P$. Hence, no extra cost is required. These implementation tricks can be readily generalized to an arbitrary number of pre- and post-smoothing steps.
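Assuming the smoother is available as a matrix-vector product with $G_h$, the bookkeeping behind G2S-B1/B2 can be sketched as follows (toy random operators, not the paper's implementation). Each loop iteration applies $G_h$ exactly once, reusing the product both for the residual, via $A_h = I - G_h$, and, together with the stored $\widetilde P = G_h P$, for the next smoothed iterate; the sketch checks the equivalence with the plain two-level iteration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_s, M_s = 30, 5
G_h = 0.4 * rng.standard_normal((N_s, N_s)) / np.sqrt(N_s)  # stand-in smoother
A_h = np.eye(N_s) - G_h
P = rng.standard_normal((N_s, M_s)); R = P.T
b = rng.standard_normal(N_s)
PG = G_h @ P                       # stored once; also used to assemble A_2h
A2h = R @ P - R @ PG               # = R A_h P, since A_h = I - G_h

def plain_iter(v):
    """One reference two-level step: smooth, then coarse-correct."""
    w = G_h @ v + b
    vc = np.linalg.solve(A2h, R @ (b - A_h @ w))
    return w + P @ vc

# G2S-B style: carry the *smoothed* iterate w; one G_h application per step.
v = rng.standard_normal(N_s)
w = G_h @ v + b                    # first smoothing step
for _ in range(4):
    v = plain_iter(v)              # reference (two G_h products per step)
    g = G_h @ w                    # the single smoother application
    vc = np.linalg.solve(A2h, R @ (b - w + g))   # residual b - (I - G_h) w
    w_corr = w + P @ vc            # corrected iterate (equals the reference v)
    assert np.allclose(w_corr, v)
    w = g + PG @ vc + b            # next smoothed iterate, reuses g and PG
```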
Concerning the specific implementation details of the G2S method, we remark that one can lighten the off-line assembly of the matrix $A_{2h} = R A_h P$ by using instead the matrix defined in (33), which corresponds to a direct discretization of $A$ on the coarse level. Moreover, since our two-level method works directly on the interfaces, we have more freedom in the discretization of the smoothing operators on each level. For instance, one could keep the corresponding volumetric mesh fine close to the substructures, while having a coarser grid away from them. This strategy follows an idea similar to the methods discussed in, e.g., [15] and references therein.

GMS: extension to multilevel framework
The solution of large problems can be challenging for classical two-level methods in volume. This is mainly due to the dimension of the coarse space, which in volume can still be too large for an exact solve. In our substructured framework, the size of the substructured coarse matrix corresponds to the number of degrees of freedom on the coarse substructures, and it is thus already much smaller than in the volumetric case (see Section 5.1 for a comparison of their sizes in a concrete model problem). However, there might be problems for which the direct solution of the coarse problem is inconvenient also in our substructured framework. For instance, with many subdomains we have many substructures, and the size of the substructured coarse matrix grows accordingly. The G2S method is suited to a multilevel generalization following a classical multigrid strategy [42]. Given a sequence of grids on the substructures labeled from the coarsest to the finest by $\{\ell_{\min}, \ell_{\min}+1, \dots, \ell_{\max}\}$, we denote by $P_{\ell-1}^{\ell}$ and $R_{\ell}^{\ell-1}$ the interpolation and restriction operators between grids $\ell$ and $\ell - 1$. To build the substructured matrices on the different grids we have two possible choices. The first one corresponds to the standard Galerkin projection: with $A_{\ell_{\max}}$ the substructured matrix on the finest grid, we define the coarse matrices $A_\ell := R_{\ell+1}^{\ell} A_{\ell+1} P_{\ell}^{\ell+1}$, for $\ell \in \{\ell_{\min}, \ell_{\min}+1, \dots, \ell_{\max}-1\}$. The second choice consists in defining $A_\ell$ directly as the discretization of (12) on the grid labeled by $\ell$, and corresponds exactly to (33) in the two-grid case. The two choices are not equivalent. On the one hand, the Galerkin approach leads to a faster method in terms of iterations. However, the Galerkin matrices $A_\ell$ do not have the block structure of (12): for instance, the identity matrix is replaced by the sparse matrix $R_{\ell_{\max}}^{\ell_{\max}-1} P_{\ell_{\max}-1}^{\ell_{\max}}$.
On the other hand, defining $A_\ell$ directly on the current grid as in (33) leads to a small increase of the iteration count, but it preserves the original block-diagonal structure (which is important if one wants to use G2S-B1 and G2S-B2). The difference between the two approaches is also studied through numerical experiments in Section 5.
Regardless of the choice of $A_\ell$, we define the geometric multilevel substructured DD method (GMS) in Algorithm 4, which is a substructured multi-grid V-cycle.
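A generic V-cycle with Galerkin coarse matrices, in the spirit of Algorithm 4, can be sketched as follows. This is an illustrative geometric analogue on a 1D Poisson problem, with a weighted Jacobi smoother standing in for the Schwarz smoother; it is not the substructured GMS itself:

```python
import numpy as np

def interp(nc):
    """1D linear interpolation from nc coarse to 2*nc+1 fine interior points."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j, j] = 0.5; P[2*j+1, j] = 1.0; P[2*j+2, j] = 0.5
    return P

def v_cycle(A, b, v, lvl, smooth_steps=2, omega=2.0/3.0):
    """Galerkin V-cycle; weighted Jacobi stands in for the DD smoother."""
    n = len(b)
    if lvl == 0 or n < 3:
        return np.linalg.solve(A, b)        # coarsest level: direct solve
    D = np.diag(A)
    for _ in range(smooth_steps):           # pre-smoothing
        v = v + omega * (b - A @ v) / D
    P = interp((n - 1) // 2); R = 0.5 * P.T
    Ac = R @ A @ P                          # Galerkin coarse matrix
    v = v + P @ v_cycle(Ac, R @ (b - A @ v), np.zeros(Ac.shape[0]), lvl - 1)
    for _ in range(smooth_steps):           # post-smoothing
        v = v + omega * (b - A @ v) / D
    return v

n = 2**7 - 1
A = (n + 1)**2 * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones(n)
u = np.linalg.solve(A, b)                   # reference solution
v = np.zeros(n)
for _ in range(12):
    v = v_cycle(A, b, v, lvl=4)
assert np.linalg.norm(v - u) < 1e-6 * np.linalg.norm(u)
```

Replacing the line `Ac = R @ A @ P` by a direct discretization on the coarse grid would give the second (non-Galerkin) variant discussed above.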

Numerical experiments
In this section, we demonstrate the effectiveness of our new computational framework by extensive numerical experiments. These experiments have two main purposes. On the one hand, we wish to validate the theoretical results of Section 3.1, while discussing the implementation details of Section 4.1 and comparing our new method with other classical existing methods, like a two-grid method in volume using RAS as smoother. This is the focus of Section 5.1, where a Poisson problem on two-dimensional and three-dimensional boxes is considered. Both convergence rates and computational times are studied.
On the other hand, we wish to show the effectiveness of our new methods in the case of multiple-subdomain decompositions and for classical test problems. In particular, in Section 5.2 we consider a multiple-subdomain decomposition for a classical Poisson problem, while in Section 5.3 a diffusion problem with highly jumping diffusion coefficients is solved. The efficient solution of these two problems requires different discretization methods: the finite difference method for the classical Poisson problem, and the finite-volume method in the case of jumping diffusion coefficients. These two methods require different definitions of the restriction and prolongation operators, and we thus sketch some implementation details of our algorithms for a regular decomposition. In both cases, the robustness of our methods for an increasing number of subdomains is studied and compared to classical two-grid and multi-grid methods defined in volume and using RAS as smoother. The obtained numerical results show clearly, particularly in the jumping diffusion coefficient case, that our methods converge in fewer iterations than the classical two-level RAS method. In Sections 5.1 and 5.2, all the methods are used as iterative solvers, without any Krylov acceleration, while in Section 5.3 we test their efficiency as preconditioners for GMRES.

Laplace equation on 2D and 3D boxes
We first consider the Poisson equation $-\Delta u = f$ with homogeneous Dirichlet boundary conditions. The geometry of the domain and its decomposition are shown in Fig. 3, where Ω is decomposed into two overlapping rectangles $\Omega_1 = (-1, \delta) \times (0, 1)$ and $\Omega_2 = (-\delta, 1) \times (0, 1)$. The length of the overlap is $2\delta$. On each subdomain, we use a standard second-order finite difference scheme based on a uniform grid of $N_y = 2^\ell - 1$ interior points in direction $y$ and $N_x = 2N_y + 1$ interior points in direction $x$ (the domain is twice as wide as it is tall). Here, $\ell$ is a positive integer. The grid size is denoted by $h$. The overlap is assumed to be $\delta = h N_{ov}$, where $N_{ov}$ represents the number of interior points in the overlap in direction $x$.
The results of our numerical experiments are shown in Fig. 5. All figures show the decay of the relative errors with respect to the number of iterations. To study the asymptotic convergence behavior of the G2S method and compare it with the theoretical results of Section 3.2, all methods are stopped when the relative error is smaller than the very low tolerance $10^{-12}$.
The problem is solved by the classical parallel Schwarz method, a classical two-grid and a three-grid method using RAS as smoother ("2L-RAS" and "3L-RAS" in the figures), the G2S method, and its three-grid extension denoted by G3S. For the G2S method, we further distinguish two cases: "G2S" indicates the G2S method using the coarse matrix $A_{2h} := R A_h P$, while "$\widetilde{\text{G2S}}$" refers to the G2S method using the coarse matrix obtained by a direct discretization of $A$ on the coarse grid (instead of $A_{2h} := R A_h P$); see (33) and the discussion in Section 4.2. For the G2S and $\widetilde{\text{G2S}}$ methods, we use the one-dimensional linear interpolation operator $P = \mathrm{diag}(P_1, P_2)$, where the expression of $P_j$, $j = 1, 2$, is given in (15), and $R = \frac{1}{2} P^\top$ (as explained in Section 3.1 and Fig. 3). For 2L-RAS, we use the classical full weighting restriction and interpolation operators, $P_V = \mathrm{kron}(P_x, P_y)$, $R_V = \frac{1}{4} P_V^\top$, where $P_x$, $P_y$ are one-dimensional interpolation operators of the same form as (15).
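Assuming (15) is the standard one-dimensional linear interpolation matrix, the transfer operators used here can be sketched as follows; the sketch also checks the classical fact that the Galerkin projection $R A_h P$ of the 1D Laplacian with these operators reproduces the coarse-grid Laplacian:

```python
import numpy as np

def prolongation_1d(nc):
    """Linear interpolation from nc coarse to nf = 2*nc+1 fine interior points
    (the structure assumed for (15); the exact matrix in the paper may differ)."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j:2*j+3, j] = (0.5, 1.0, 0.5)
    return P

def laplacian_1d(n, h):
    """Second-order finite difference Laplacian on n interior points."""
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

nc = 31; nf = 2*nc + 1
h, H = 1.0/(nf + 1), 1.0/(nc + 1)      # fine and coarse grid sizes, H = 2h
P = prolongation_1d(nc)
R = 0.5 * P.T                          # full weighting restriction

# Galerkin projection of the fine 1D Laplacian IS the coarse 1D Laplacian:
assert np.allclose(R @ laplacian_1d(nf, h) @ P, laplacian_1d(nc, H))

# 2D operators for the volumetric method, built by tensor products:
P_V = np.kron(P, P); R_V = 0.25 * P_V.T
assert P_V.shape == (nf*nf, nc*nc)
```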
The left panels of Fig. 5 validate the theoretical convergence factor obtained in Theorem 3. The center panels compare the classical one-level PSM, the G2S and $\widetilde{\text{G2S}}$ methods, and the 2L-RAS method. The slower performance of 2L-RAS with respect to G2S can be traced back to the interpolation step. This operation breaks the harmonicity of the obtained correction, which therefore no longer lies in the space of the error; see, e.g., [34]. One could use interpolators which extend the correction harmonically inside the overlapping subdomains, although this would significantly increase the computational cost of each iteration. We refer also to [37] for a similar observation. Further notice that, while the G2S coarse space has dimension about $N_y$, the one corresponding to the 2L-RAS method has dimension about $N_x N_y / 4 \approx N_y^2 / 2 \gg N_y$. In the setting of Fig. 5, the dimensions of the coarse spaces of G2S and 2L-RAS are about 60 and 1900, respectively. Notice that the convergence of $\widetilde{\text{G2S}}$ is comparable to 2L-RAS, hence a little slower than G2S, but the assembly of its coarse matrix is cheaper.
The right panels compare the convergence behavior of the two-grid methods, G2S and 2L-RAS, with their three-level variants. We remark that the addition of a third level does not result in a noticeable convergence deterioration.
Next, we are interested in computational times and in numerically validating the computational cost presented in Table 1. To do so, we consider a three-dimensional box Ω = (−1, 1) × (0, 1) × (0, 1) decomposed into two overlapping subdomains Ω1 = (−1, δ) × (0, 1) × (0, 1) and Ω2 = (−δ, 1) × (0, 1) × (0, 1). We solve the problem (up to a tolerance of $10^{-6}$ on the relative error) using the G2S method, its equivalent forms G2S-B1 and G2S-B2 introduced in Section 4.1, and 2L-RAS. The length of the overlap is $\delta = h N_{ov}$, where $h$ is the grid size and $N_{ov}$ is fixed to 4; hence the overlap is proportional to the grid size. The experiments have been performed on a workstation with an Intel Core i9-10900X CPU (3.7 GHz) and 32 GB of RAM. The subdomain problems are solved sequentially using the MATLAB backslash command, which calls a direct solver for sparse banded matrices (with a small band density threshold) with almost linear complexity. The smoothing steps have the same cost in both the 2L-RAS and G2S implementations, which permits us to better highlight the advantages of the substructured methods in the coarse step and in the prolongation/restriction steps. The results are shown in Tables 2 and 3.
The G2S method outperforms 2L-RAS in terms of iteration counts and computational times. To better understand why the G2S method is faster, and to validate the computational cost analysis presented in Table 1, Fig. 6 shows the computational time spent by the G2S and 2L-RAS methods in the different steps of a two-level method. As expected, the smoothing step requires the same effort in both methods; this is shown in Fig. 6 (left). In Fig. 6 (right) we compare the computational times required by one coarse correction step in the two methods. The two curves correspond to the same volumetric problem dimensions (as in Table 3), but the coarse space dimensions of G2S and 2L-RAS differ. This means that the $k$th point (circle) from the left of the G2S curve has to be compared with the $k$th point (cross) from the left of the 2L-RAS curve. Note that in both cases we use the MATLAB backslash command; this choice clearly favors the 2L-RAS coarse problem (which is sparse and banded). A different and more appropriate solver for the G2S coarse matrix, exploiting its block-sparse structure (see, e.g., [38]), could further improve these computational times.

Decompositions into many subdomains
In this section, we consider a square domain Ω decomposed into $M \times M$ nonoverlapping square subdomains $\Omega_j$, $j = 1, \dots, M^2 = N$, which are then enlarged to obtain an overlapping decomposition. On each subdomain $\Omega_j$, we locate the discrete substructure $S_j$, marked with blue lines in Fig. 7, which is made of four (one-dimensional) segments.
For each discrete substructure $S_j$, the interpolation operator $P_j$ acts block-wise on each one-dimensional segment, i.e., $P_j = \mathrm{diag}\{\widetilde P_1, \widetilde P_2, \widetilde P_3, \widetilde P_4\}$, where each $\widetilde P_k$, $k = 1, \dots, 4$, corresponds to the prolongation matrix (15). We remark that using $\widetilde P_k$ implies assuming that the function vanishes on the boundary of $S_j$. This holds since on each substructure $v_j \in H^{1/2}_{00}(S_j)$, due to the partition of unity.

The results of our numerical experiments are reported in Fig. 8. The left panel shows the dependence of the spectral radius on the size of the overlap for the different methods and $N = 16$, $\ell = 5$. We then study the robustness of the method with respect to an increasing number of subdomains. We first keep the size of each subdomain fixed, $N_{sub} = (2^5 - 1)^2$, and thus consider larger global problems as $N$ grows. Then, we fix a global domain Ω with approximately $17 \cdot 10^3$ interior degrees of freedom, and we get smaller subdomains as $N$ grows. In both cases, we observe that the spectral radius of both 2L-RAS and G2S does not deteriorate as the number of subdomains increases. We further compare G2S, 2L-RAS and Geometric Multigrid (GMG) for the solution of the Poisson equation $-\Delta u = 1$. We decompose Ω into $N = 9$ and $N = 16$ subdomains and set $N_{ov} = 2$. Table 4 reports the number of iterations and the computational times needed to reach a relative tolerance of $10^{-6}$. For the G2S method, we preassembled the coarse matrix.
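The block-diagonal action of $P_j$ over the four segments of a substructure can be sketched as follows (segment sizes are illustrative; the zero end values encoded in (15)-type interpolation reflect the $H^{1/2}_{00}$ setting mentioned above):

```python
import numpy as np

def prolong_segment(nc):
    """Linear interpolation on one substructure segment (structure of (15));
    the stencil implicitly assumes zero values at the segment endpoints."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j:2*j+3, j] = (0.5, 1.0, 0.5)
    return P

def block_diag(blocks):
    """Assemble a block-diagonal matrix from a list of blocks."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    M = np.zeros((rows, cols)); r = c = 0
    for b in blocks:
        M[r:r+b.shape[0], c:c+b.shape[1]] = b
        r += b.shape[0]; c += b.shape[1]
    return M

nc = 7                                        # coarse points per segment (toy)
P_j = block_diag([prolong_segment(nc)] * 4)   # one block per segment of S_j
assert P_j.shape == (4*(2*nc + 1), 4*nc)
```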
Concerning GMG, we implemented a V-cycle with two pre- and post-smoothing steps using a damped Jacobi smoother with optimal damping parameter ω = 4/5 [53]. The coarsest level of GMG corresponds to $\ell = 3$ and the size of the coarse matrix is 961. Concerning the implementation of the DD methods, the subdomain problems are solved in parallel, using the MATLAB Parallel Computing Toolbox. The G2S method is implemented according to Algorithm 3. The sizes of the G2S coarse matrices are 3096, 6168 and 12312 for $\ell = 7, 8, 9$, respectively. For both G2S and GMG, we compute once and for all the LU decompositions of the corresponding coarse matrices, as their size is small. The cost of the LU decompositions is included in the reported computational times. Table 4 shows that the G2S method is competitive with GMG.

Diffusion problem with jumping diffusion coefficients
In this section, we test our method on the solution of a diffusion equation $-\mathrm{div}(\alpha \nabla u) = f$ in a square domain $\Omega := (0, 1)^2$ with $f := \sin(4\pi x)\sin(2\pi y)\sin(2\pi x y)$. The domain Ω is decomposed into 16 non-overlapping subdomains.
We suppose $\alpha = 1$ everywhere except in some channels, where α takes the values $10^2$, $10^4$ and $10^6$. We consider the two configurations represented in Fig. 9.
We use a finite-volume discretization, where each non-overlapping subdomain is discretized with $N_{sub} = 2^{2\ell}$ cells and is enlarged by $N_{ov}$ cells to create an overlapping decomposition with overlap $\delta = 2 N_{ov} h$. We further assume that the discontinuities of the diffusion coefficient are aligned with the edges of the cells and do not cross any cell. The mapping between the fine and the coarse mesh is illustrated in Fig. 10.
At the volume level, the restriction operator maps four fine cells to a single coarse cell by averaging the four cell values, and the interpolation operator is its transpose. At the substructured level, the restriction operator maps two fine cells to a single coarser cell by averaging, and the interpolation operator splits one coarse cell into two fine cells, assigning the coarse value to each new cell. It still holds that the interpolation operator is the transpose of the restriction operator.

Fig. 11 Convergence curves for $\ell = 5$, $N_{ov} = 2$ for the two-channel configuration. The parameter α is equal to $10^2$ (left), $10^4$ (center), $10^6$ (right)
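The cell-averaging restriction described above can be sketched as follows (illustrative indexing, not the paper's code). Note that with the unit-average normalization chosen here, the interpolation is the transpose of the restriction up to a scaling factor, namely 4 in the 2D volumetric case:

```python
import numpy as np

def fv_restriction_2d(nc):
    """Cell-average restriction: each coarse cell is the mean of its 2x2 fine
    children. Cells are indexed row-major on a (2*nc)x(2*nc) fine mesh."""
    nf = 2*nc
    R = np.zeros((nc*nc, nf*nf))
    for I in range(nc):
        for J in range(nc):
            for di in (0, 1):
                for dj in (0, 1):
                    R[I*nc + J, (2*I + di)*nf + (2*J + dj)] = 0.25
    return R

nc = 4
R = fv_restriction_2d(nc)
P = 4.0 * R.T          # injection: each fine child receives the coarse value
                       # (transpose of R up to our scaling choice)

# Restricting a constant field returns the same constant ...
assert np.allclose(R @ np.ones((2*nc)**2), 1.0)
# ... and restriction after interpolation preserves piecewise-constant fields.
v_coarse = np.arange(nc*nc, dtype=float)
assert np.allclose(R @ (P @ v_coarse), v_coarse)
```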
In this setting, we study the robustness of the G2S method with respect to the mesh size and the amplitude of the jumps of α, and we compare it to the 2L-RAS method. In Table 5 we report the number of iterations needed to reach a relative error of $\mathrm{Tol} = 10^{-6}$.
Both methods are used as iterative solvers. The iterations performed by the G2S method are the numbers on the left in each cell of the table, while the iterations of 2L-RAS are the numbers in brackets on the right. We can observe that the G2S method outperforms 2L-RAS. Figure 11 shows the convergence curves for a fixed mesh size and three different values of α.
These results show that the G2S method is robust with respect to both the jumps of the diffusion coefficient and the mesh size, and that it outperforms the 2L-RAS method. Now, we study the performance of G2S and 2L-RAS as preconditioners for GMRES. Table 6 reports the number of iterations when both methods are used to accelerate GMRES. We further specify the final size of the Krylov subspaces. GMRES preconditioned by the G2S method builds a much smaller Krylov subspace, since the system and the preconditioner have dimensions equal to the size of the substructured space.

Conclusions
In this work, we introduced a new framework of two-level and multilevel substructured DD methods, namely the G2S method and its multilevel extension, the GMS method. These are formulated on the substructures of the considered overlapping domain decomposition. Under certain reasonable hypotheses, for two subdomains in 2D, we proved that the G2S method is well posed and convergent, and we estimated the corresponding convergence factor. The effectiveness of our new methods is confirmed by extensive numerical experiments, where elliptic PDE problems, possibly with highly jumping diffusion coefficients, are efficiently solved.