Super-localization of spatial network models

Spatial network models are used as simplified discrete representations in a wide range of applications, e.g., flow in blood vessels, elasticity of fiber-based materials, and pore network models of porous materials. However, the resulting linear systems are typically large and poorly conditioned, and their numerical solution is challenging. This paper proposes a numerical homogenization technique for spatial network models based on the Super-Localized Orthogonal Decomposition (SLOD), recently introduced for elliptic multiscale partial differential equations. It provides accurate coarse solution spaces with approximation properties independent of the smoothness of the material data. A unique selling point of the SLOD is that it constructs an almost local basis of these coarse spaces, requiring fewer computations on the fine scale and achieving improved sparsity on the coarse scale compared to other state-of-the-art methods. We provide an a-posteriori analysis of the proposed method and numerically confirm the method's unique localization properties. In addition, we demonstrate its applicability to high-contrast channeled material data.


Introduction
Spatial networks are a useful tool for constructing simplified discrete representations of complex geometric structures. Blood vessels may, for example, be modeled as connected one-dimensional line segments forming a network of nodes and edges, see [FKO+22]. Paper consists of a web of wooden fibers that may be modeled as one-dimensional beams forming a network, see [KMM+20]. Also porous materials, such as sandstone, may be represented by a pore network model, see [CEPT12]. Such simplifications reduce the complexity from a full three-dimensional geometry to a discrete model for which computer simulations can be performed more easily. Nevertheless, highly heterogeneous materials and complicated geometries typically cause the underlying linear systems to be poorly conditioned.
This paper considers spatial network models that can be described by weighted graph Laplacians, arising from applications modeled by elliptic partial differential equations (PDEs). There already exists a variety of well-established iterative multilevel solvers for such problems. A prominent example is algebraic multigrid, see e.g. [LB12, XZ17, HWZ21] and the references therein. In a purely algebraic setting, the construction of multiple discretization levels can be challenging. For spatial networks with sufficient structure, it is possible to embed the network into a domain Ω ⊂ R^d and introduce scales by interpolating the network onto a family of finite element meshes, see e.g. [GHM22]. On coarser scales, the spatial network behaves like a continuous material and therefore inherits many advantageous properties from the continuous setting. This can be utilized for transferring successful algorithms for solving PDEs to spatial network models. We will focus on numerical homogenization, with the goal of computing an accurate effective coarse-scale model of the full network.
Numerical homogenization is concerned with the construction of problem-adapted, optimally approximating ansatz spaces possessing almost local computable bases. Near-optimal numerical homogenization is achieved by the Localized Orthogonal Decomposition (LOD) [MP14, HP13, KPY18, MP20, BGS20, AHP21] and Gamblets [Owh17, OS19], which construct a fixed number of basis functions per mesh entity that decay exponentially relative to the mesh. For the computation of the basis, a localization of the basis functions to element patches is performed. The number of element layers in the element patches needs to be increased logarithmically with the desired accuracy. An alternative approach is taken by the AL-basis [GGS12] and G(ms)FEM methods [BL11, MSD22], which solve local spectral problems and construct the global ansatz space using a partition of unity. Here, the support of the basis functions is fixed by the choice of partition of unity, and the number of local eigenfunctions needs to be increased logarithmically with the desired accuracy.
Recently, the Super-Localized Orthogonal Decomposition (SLOD) has been proposed in [HP22]. It performs a localization of the same space as the LOD but utilizes a novel localization strategy allowing for super-exponentially decaying basis functions. This improved localization results in smaller local patch problems for the basis computation and a sparser coarse system matrix. The method has proved its effectiveness also beyond elliptic multiscale problems, see [FHP21, BFP22].
There are several contributions specifically targeting numerical homogenization of spatial network models, see [EIL+09, ILW10, KMM+20, EGH+22]. In particular, a spatial network version of the LOD has been developed in [EGH+22]. Therein, a rigorous proof of the exponential decay of the LOD basis functions in the spatial network setting is provided, utilizing the techniques developed in [KY16] and [GHM22].
In this paper, we develop a SLOD for spatial network models. As model problem, we consider a weighted graph Laplacian K on the spatial network G = (N, E), i.e., we seek the solution to the possibly poorly conditioned linear system

(1.1) Ku = Mf,

where u fulfills homogeneous Dirichlet boundary conditions, M is a diagonal mass matrix, and f is a given source term. For a depiction of a possible spatial network, see Figure 2.1. Let the spatial network G be embedded into a domain Ω; for ease of presentation, let Ω = [0, 1]^d. We consider coarse finite element meshes T_H of Ω and define problem-adapted ansatz spaces by applying the solution operator K^{-1} of (1.1) to T_H-piecewise constants. Such ansatz spaces have uniform approximation rates, independent of the smoothness of the material data. The SLOD then identifies local T_H-piecewise constant right-hand sides that yield rapidly decaying responses under the operator K^{-1}. These responses are then localized to element patches and used as basis functions of the SLOD.
We derive an a-posteriori error bound for the SLOD error in terms of the size of the element patches and the coarse mesh size, which holds under mild assumptions on the underlying network. In a series of numerical experiments, we illustrate the network assumptions, the super-locality of the SLOD basis, and the method's performance in the presence of high-contrast data, in particular channeled data.
The outline of the paper is as follows. In Section 2, we introduce the spatial network model. A coarse-scale discretization together with assumptions on the underlying network is presented in Section 3. Section 4 then constructs a prototypical problem-adapted ansatz space. We identify rapidly decaying basis functions and perform a localization of the basis in Section 5. Section 6 states the SLOD method along with an a-posteriori error estimate and, finally, Section 7 presents numerical experiments.

A spatial network model
We consider a connected graph G = (N, E) of nodes N and edges E = {{x, y} : an edge connects nodes x, y ∈ N} that is embedded into the unit hypercube Ω = [0, 1]^d, d ∈ N. The presented methodology can also be generalized to polygonal and polyhedral domains; however, for ease of presentation, we only consider hypercubes. We write x ∼ y if two nodes x, y ∈ N are connected by an edge in E and denote by |x − y| the Euclidean distance between the nodes. By N(ω), we denote all nodes that are contained in the subset ω ⊂ Ω. We impose homogeneous Dirichlet boundary conditions for nodes on the boundary segment Γ ⊂ ∂Ω. For a depiction of an example of a spatial network, see Figure 2.1. For the subsequent presentation, we introduce function spaces on the network. Let V denote the space of all real-valued functions defined on the set of nodes N, and denote by V̊ the subspace of V satisfying homogeneous Dirichlet boundary conditions on the boundary segment Γ. Furthermore, let V_ω and V̊_ω denote the spaces of functions in V and V̊, respectively, that are supported in the subdomain ω.
2.1. Linear operators. We define a diagonal edge-length weighted mass operator M : V → V by the following sum of node contributions:

(2.1) M := Σ_{x∈N} M_x,  (M_x v, v) := (v(x))² (1/2) Σ_{y∼x} |x − y|.

It shall be noted that the node contributions M_x are uniquely defined by their associated quadratic forms. For subdomains ω ⊂ Ω, we define a local version of M as M_ω := Σ_{x∈N(ω)} M_x. The bilinear form (M·, ·) is an inner product on the space V with induced norm ‖·‖_M and localized counterpart ‖v‖²_{M,ω} := (M_ω v, v); in particular, ‖1‖²_{M,ω} can be interpreted as the mass of the network in the subdomain ω.
Furthermore, we define a reciprocal edge-length weighted graph Laplacian L : V → V by

(2.2) L := Σ_{x∈N} L_x,  (L_x v, v) := (1/2) Σ_{y∼x} (v(x) − v(y))² / |x − y|,

where the node contributions L_x are symmetric and are again uniquely defined by their associated quadratic forms. A local version of L is given by

(2.3) L_ω := Σ_{x∈N(ω)} L_x.

It shall be noted that (L_ω v)(x) is nonzero also for nodes x outside of ω that are adjacent to nodes in ω. Since the graph G is assumed to be connected, the kernel of L consists only of the globally constant functions. Hence, |·|²_L := (L·, ·) defines a semi-norm on V. By restriction to ω, we obtain |·|²_{L,ω} := (L_ω ·, ·).

Remark 2.1 (Weighting of M and L). The non-standard weightings in (2.1) and (2.2) are chosen such that, in one dimension, the operators L and M resemble the first-order finite element stiffness matrix of the Poisson equation and the corresponding lumped mass matrix, respectively. This weighting enables an analysis which is similar in style to well-established PDE analysis, see e.g. [KMM+20].
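The weightings in (2.1) and (2.2) can be made concrete with a small numerical sketch. The following Python snippet (illustrative only; the function name and the example chain network are not part of the paper's implementation) assembles M and L under the quadratic forms stated above and reproduces, for a uniform one-dimensional chain, the finite element stiffness matrix and lumped mass matrix mentioned in Remark 2.1.

```python
import numpy as np
import scipy.sparse as sp

def assemble_operators(coords, edges):
    """Assemble the diagonal edge-length weighted mass matrix M and the
    reciprocal edge-length weighted graph Laplacian L of a spatial network."""
    n = coords.shape[0]
    M_diag = np.zeros(n)
    rows, cols, vals = [], [], []
    for x, y in edges:
        h = np.linalg.norm(coords[x] - coords[y])  # Euclidean edge length |x - y|
        M_diag[x] += h / 2.0  # each edge contributes half its length per endpoint
        M_diag[y] += h / 2.0
        w = 1.0 / h  # reciprocal edge-length weight
        rows += [x, y, x, y]; cols += [x, y, y, x]; vals += [w, w, -w, -w]
    M = sp.diags(M_diag)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return M, L

# uniform 1D chain: L coincides with the P1 finite element stiffness matrix
coords = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
edges = [(i, i + 1) for i in range(4)]
M, L = assemble_operators(coords, edges)
print(L.toarray()[1])   # interior row: [-4.  8. -4.  0.  0.]
print(M.diagonal())     # lumped mass: [0.125 0.25  0.25  0.25  0.125]
```

Note that the rows of L sum to zero, reflecting that constants lie in its kernel, and that the total mass equals the total edge length of the network.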
By combining the mass matrix and the graph Laplacian, we obtain another inner product on V, namely ((L + M)·, ·). For its induced norm, we write ‖·‖_V, and the restriction of the norm to a subdomain ω is defined by ‖v‖²_{V,ω} := ((L_ω + M_ω)v, v).

2.2. Model problem. For demonstrating the extension of the SLOD to the spatial network setting, this paper considers a weighted graph Laplacian as model problem. It is defined as follows:

(2.4) K := Σ_{x∈N} K_x,  (K_x v, v) := (1/2) Σ_{y∼x} γ_xy (v(x) − v(y))² / |x − y|,

with edge weights γ_xy determining the edge's conductivity, which we assume to be uniformly bounded, i.e., there are constants α, β such that, for all edges {x, y} ∈ E,

(2.5) 0 < α ≤ γ_xy ≤ β < ∞.

For subdomains ω, local versions K_ω of K can be defined analogously to (2.3).
Given a right-hand side f ∈ V, we seek u ∈ V̊ such that, for all v ∈ V̊,

(2.6) (Ku, v) = (Mf, v).

The unique solvability of this problem can be ensured under the minimal requirements that the underlying network G is connected and that there exists at least one Dirichlet node on Γ, i.e., N(Γ) ≠ ∅. Subsequently, these assumptions are always assumed to hold.
Using the bounds (2.5) on the edge weights, we obtain that, for all v ∈ V,

(2.7) α |v|²_L ≤ |v|²_K := (Kv, v) ≤ β |v|²_L,

which, in particular, implies that on V̊, the energy norm |·|_K is equivalent to |·|_L.

2.3. Global Friedrichs' inequality. It is possible to derive a Friedrichs' inequality in the spatial network setting.
Lemma 2.2 (Friedrichs' inequality). There exists C_fr > 0 such that, for all v ∈ V̊,

(2.8) ‖v‖_M ≤ C_fr |v|_L.

Proof. We consider the generalized eigenvalue problem Lv = λMv posed in the space V̊ with λ_1 ≤ λ_2 ≤ ... denoting its eigenvalues. The connectivity of the spatial network G and N(Γ) ≠ ∅ ensure that λ_1 > 0. The first eigenvalue λ_1 can be characterized by the min-max theorem as

λ_1 = min_{0≠v∈V̊} (Lv, v)/(Mv, v),

which yields ‖v‖²_M ≤ λ_1^{-1} |v|²_L, i.e., the assertion with C_fr := λ_1^{-1/2}.

Note that Lemma 2.2 does not provide an explicit characterization of C_fr in terms of the properties of the underlying network. However, one can establish a connection to the first eigenvalue of the well-studied normalized graph Laplacian, for which such explicit characterizations exist, see e.g. [CY95, Chu05].
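The proof's characterization of C_fr as λ_1^{−1/2} suggests a direct numerical computation. A minimal sketch, assuming a uniform chain network with Dirichlet nodes at both ends (for which the continuous analogue gives the Friedrichs constant 1/π on the unit interval), could read:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 200  # interior nodes of a uniform chain on [0, 1] with Dirichlet ends
h = 1.0 / (n + 1)
# reciprocal edge-length Laplacian restricted to the free (non-Dirichlet) nodes
L = sp.diags([np.full(n - 1, -1.0 / h), np.full(n, 2.0 / h),
              np.full(n - 1, -1.0 / h)], [-1, 0, 1])
M = sp.diags(np.full(n, h))  # edge-length weighted (lumped) mass matrix

# smallest eigenvalue of L v = lambda M v characterizes Friedrichs' constant
lam1 = eigsh(L.tocsc(), k=1, M=M.tocsc(), sigma=0.0)[0][0]
C_fr = 1.0 / np.sqrt(lam1)
print(C_fr)  # close to 1/pi, the continuous Friedrichs constant on (0, 1)
```

The shift-invert mode (`sigma=0.0`) targets the smallest eigenvalue of the symmetric positive definite pencil, which is the relevant one here.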

2.4. Stability estimates.
Let us also introduce a local variant of (2.6) which, for a subdomain ω ⊂ Ω and a given f ∈ V̊_ω, seeks u_ω ∈ V̊_ω such that, for all v ∈ V̊_ω,

(2.9) (K_ω u_ω, v) = (Mf, v).

Existence and uniqueness of the solution u_ω follow since K_ω, by construction, is also a weighted graph Laplacian and V̊_ω ⊂ V̊. Friedrichs' inequality then allows us to show the stability of the model problem (2.6) and its local counterpart (2.9) with respect to |·|_L and |·|_{L,ω}, which are norms on V̊ and V̊_ω, respectively.
Lemma 2.3 (Stable solvability). The solution to (2.6) is stable in the sense that

|u|_L ≤ α^{-1} C_fr ‖f‖_M,

with C_fr from Lemma 2.2. This also holds, up to a constant, for the local version (2.9).

Proof. Using the uniform lower bound in (2.5), definition (2.6), and Lemma 2.2, we obtain

α |u|²_L ≤ (Ku, u) = (Mf, u) ≤ ‖f‖_M ‖u‖_M ≤ C_fr ‖f‖_M |u|_L.

Dividing by |u|_L, the global stability estimate follows.
The local stability estimate can be obtained similarly, noting that u_ω, in particular, is also an element of V̊, for which we can apply Lemma 2.2. Using that, for functions supported in ω, the semi-norms |·|_L and |·|_{L,ω} are comparable, the local stability estimate can be concluded.
For later use, we define the solution operator K^{-1} : V → V̊, f ↦ u, mapping a right-hand side f to the corresponding solution u of (2.6). On subsets ω ⊂ Ω, we also define the local solution operator K_ω^{-1} : V̊_ω → V̊_ω, f ↦ u_ω, mapping a local right-hand side to the local solution u_ω satisfying (2.9).

Coarse scale discretization and network assumptions
3.1. Coarse mesh. The proposed method constructs its problem-adapted basis functions with respect to some coarse mesh T_H. For simplicity, we restrict ourselves to Cartesian meshes of mesh size H, which we define, unlike classical textbooks on finite element theory, with elements that are neither closed nor open: each element is a product of half-open intervals, where intervals touching the boundary of Ω are closed on the corresponding side. This definition ensures that the elements form a true partition of Ω, meaning that any point in Ω is contained in exactly one element. We also introduce the concept of patches, which is based on neighborhood relations of elements. The first-order patch of a subset ω ⊂ Ω is defined as

N(ω) := ∪ {T ∈ T_H : cl(T) ∩ cl(ω) ≠ ∅},

where cl denotes the closure. By recursion, we can then define the ℓ-th order patch as N^ℓ(ω) := N^{ℓ−1}(N(ω)) with N^1 := N.
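For a Cartesian mesh, the recursive patch definition reduces to simple index arithmetic. A minimal sketch for d = 2 (the indexing of elements by pairs (i, j) is an assumption made for illustration only):

```python
def patch(elem, ell, n):
    """Indices of elements in the ell-th order patch N^ell(T) of element
    `elem` on an n-by-n Cartesian mesh (elements indexed by (i, j)).
    Since closures of Cartesian elements touch diagonally, the patch is the
    full (2*ell + 1)-box around the element, truncated at the domain boundary."""
    i0, j0 = elem
    return {(i, j)
            for i in range(max(i0 - ell, 0), min(i0 + ell, n - 1) + 1)
            for j in range(max(j0 - ell, 0), min(j0 + ell, n - 1) + 1)}

# interior element: |N^ell(T)| = (2*ell + 1)^2; patches near the boundary shrink
print(len(patch((4, 4), 1, 16)))  # 9
print(len(patch((0, 0), 2, 16)))  # 9 (corner patch is truncated)
```

The recursion N^ℓ = N ∘ N^{ℓ−1} is equivalent to growing the index box by one layer per step, which is what the range bounds implement directly.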

3.2. Network assumption. The proposed method relies on the assumption that the mesh T_H is coarse compared to the underlying network. On the length scale of the coarse mesh, the network then inherits many properties known from the continuous setting, e.g., an element-wise Poincaré inequality.
Hence, we restrict ourselves to meshes with mesh sizes larger than some parameter H_0 > 0. More precisely, we consider a hierarchy of meshes

(3.1) {T_H : H ∈ H},

with H denoting a finite set of positive mesh size parameters whose smallest element is H_0. The parameter H_0 can be interpreted as the smallest mesh size for which the desired properties from the continuous case still carry over. The following assumption provides insight into the choice of H_0.
Assumption 3.1 (Network connectivity). Assume that the smallest mesh size H_0 of the hierarchy of meshes (3.1) is chosen such that, for any element T ∈ T_H, H ∈ H, there exists a connected subgraph Ḡ = (N̄, Ē) of G containing
• all edges with one or both endpoints in T,
• only edges with endpoints contained within N(T).
3.3. Element-wise Poincaré inequality. Under the previous assumption, it is possible to prove the following element-wise Poincaré inequality.
Lemma 3.2 (Element-wise Poincaré inequality). Let Assumption 3.1 be satisfied. Then there exists a constant C_po > 0 such that, for all T ∈ T_H, H ∈ H, and all v ∈ V, there exists a constant c ∈ R such that

(3.2) ‖v − c‖_{M,T} ≤ C_po |v|_{L,N(T)}.

The constant c resembles the element average in the continuous case.
Proof. The proof can be found in [GHM22, Lemma 3.5]. For the sake of completeness, it is also presented here. Let M̄, L̄ be the operators defined on the subgraph Ḡ from Assumption 3.1. We consider the generalized eigenvalue problem L̄v = λM̄v posed in the space V(N̄) with λ_1 ≤ λ_2 ≤ ... denoting its eigenvalues. Due to Assumption 3.1, the subgraph Ḡ is connected and we have that λ_1 = 0 (corresponding to constant eigenfunctions) and λ_2 > 0. By the min-max theorem, the second eigenvalue admits the characterization

λ_2 = min { (L̄v, v)/(M̄v, v) : 0 ≠ v ∈ V(N̄), (M̄v, 1) = 0 }.

Denoting with c the M̄-orthogonal projection of v onto the constant functions, we obtain

‖v − c‖²_{M,T} ≤ ‖v − c‖²_{M̄} ≤ λ_2^{-1} |v|²_{L̄} ≤ λ_2^{-1} |v|²_{L,N(T)},

which is the desired inequality.
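The eigenvalue characterization in the proof can be reproduced numerically; the experiment in Section 7.1 follows the same pattern. The following sketch, assuming a uniform free-ended chain as a stand-in for the subgraph Ḡ (for which the continuous Neumann analogue on the unit interval gives the Poincaré constant 1/π), computes λ_2 and C_po = λ_2^{−1/2}:

```python
import numpy as np
from scipy.linalg import eigh

n, h = 40, 1.0 / 40  # uniform chain on a unit-size element, free ends
L = (np.diag(np.full(n + 1, 2.0)) + np.diag(np.full(n, -1.0), 1)
     + np.diag(np.full(n, -1.0), -1)) / h
L[0, 0] = L[-1, -1] = 1.0 / h        # free (Neumann-like) end nodes
M = np.diag(np.full(n + 1, h))
M[0, 0] = M[-1, -1] = h / 2          # end nodes carry half an edge length

lam = eigh(L, M, eigvals_only=True)  # generalized eigenvalues, ascending
assert abs(lam[0]) < 1e-8            # constants are in the kernel
C_po = 1.0 / np.sqrt(lam[1])         # element-wise Poincaré constant
print(C_po)  # close to 1/pi, the continuous Neumann Poincaré constant
```

The first eigenvalue vanishes because the chain is connected and has no Dirichlet nodes, exactly as stated for Ḡ in the proof.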
This lemma states, similarly to Lemma 2.2, only the existence of a constant C_po and not its quantitative behavior in dependence of H. In Section 7.1, a numerical experiment confirms that C_po is proportional to H, provided that the considered meshes are sufficiently coarse, see Assumption 3.1. Theoretically, such a scaling can be proved under the assumption of a d-dimensional isoperimetric inequality, cf. [GHM22, Lemma 3.6].
This leads us to the following assumption.

Assumption 3.3 (Scaling of the Poincaré constant). Assume that there exists a constant C > 0, independent of H, such that, for all H ∈ H, the constant C_po from Lemma 3.2 satisfies C_po ≤ C H.

Prototypical approximation
In this section, we construct prototypical problem-adapted ansatz spaces with uniform approximation properties independent of the material data. The word prototypical shall emphasize that, without modification, the constructed ansatz spaces are not feasible for a practical implementation.
A common technique for the construction of such problem-adapted ansatz spaces is the application of the inverse operator K^{-1} to some classical finite element space. Here, we adapt this technique to the spatial network setting. Let us consider the space of T_H-piecewise constants given by

P_0(T_H) := span{1_T : T ∈ T_H},

with the indicator function 1_T equaling one for x ∈ N contained in T and zero else.

4.1. L²-projection. We introduce the L²-projection Π_H onto P_0(T_H), defined as the (M·, ·)-orthogonal projection, i.e., for all v ∈ V and T ∈ T_H,

(4.1) (Π_H v)|_T := (M_T v, 1)/(M_T 1, 1).

The following lemma states global stability and approximation estimates for Π_H.
Lemma 4.1 (Properties of the L²-projection). The L²-projection is stable, i.e., for all v ∈ V, it holds that

(4.2) ‖Π_H v‖_M ≤ ‖v‖_M.

Further, if Assumption 3.1 and Assumption 3.3 are satisfied, then there is a constant C_Π > 0, independent of H, such that the following approximation estimate holds for all v ∈ V:

(4.3) ‖v − Π_H v‖_M ≤ C_Π H |v|_L.

Proof. For the proof of (4.2), we use the local stability of Π_H, which follows immediately from definition (4.1). This yields

‖Π_H v‖²_M = Σ_{T∈T_H} ‖Π_H v‖²_{M,T} ≤ Σ_{T∈T_H} ‖v‖²_{M,T} = ‖v‖²_M.

For proving (4.3), we again split the norm into a sum of element contributions and employ (3.2) and Assumption 3.3 to obtain

‖v − Π_H v‖²_M = Σ_{T∈T_H} ‖v − Π_H v‖²_{M,T} ≤ C²_po Σ_{T∈T_H} |v|²_{L,N(T)} ≤ 3^d C²_po |v|²_L,

where the constant 3^d reflects the finite overlap of the patches N(T).
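On each element, the (M·, ·)-orthogonal projection onto constants is a mass-weighted average, which suggests the following sketch (the names and random test data are illustrative); it checks idempotency, M-orthogonality, and the stability estimate (4.2) numerically:

```python
import numpy as np

def l2_projection(v, m_diag, elem_of_node, num_elems):
    """M-weighted projection of a node vector v onto T_H-piecewise constants:
    on each element, Pi_H v equals the mass-weighted average of v there."""
    mass = np.zeros(num_elems)
    mom = np.zeros(num_elems)
    np.add.at(mass, elem_of_node, m_diag)        # (M_T 1, 1) per element
    np.add.at(mom, elem_of_node, m_diag * v)     # (M_T v, 1) per element
    return (mom / mass)[elem_of_node]            # constant broadcast to nodes

rng = np.random.default_rng(0)
elem_of_node = rng.integers(0, 4, size=100)  # each node lies in one element
m_diag = rng.uniform(0.5, 1.5, size=100)     # diagonal of the mass matrix M
v = rng.normal(size=100)

Pv = l2_projection(v, m_diag, elem_of_node, 4)
PPv = l2_projection(Pv, m_diag, elem_of_node, 4)
print(np.allclose(Pv, PPv))                  # projection: Pi_H Pi_H = Pi_H
print(m_diag @ (Pv**2) <= m_diag @ (v**2))   # M-norm stability (4.2)
```

Both checks print True; the second is exactly the global stability estimate, which holds element-wise and hence globally after summation.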
The prototypical problem-adapted ansatz space is obtained by applying the solution operator to the piecewise constants, i.e.,

(4.4) V_H := K^{-1} P_0(T_H).

The corresponding Galerkin method seeks a discrete approximation u_H ∈ V_H such that, for all v_H ∈ V_H,

(4.5) (Ku_H, v_H) = (Mf, v_H).

When using problem-adapted ansatz spaces as V_H, the approximation problem of the solution u in V_H is transformed into an approximation problem of the right-hand side f in P_0(T_H). This allows one to show the desired uniform approximation properties without pre-asymptotic effects.
Lemma 4.2 (Uniform approximation). Let the network satisfy Assumption 3.1 and Assumption 3.3. Then, for any f ∈ V, the prototypical approximation u_H defined in (4.5) converges quadratically in H, i.e.,

(4.6) |u − u_H|_K ≤ C H ‖f − Π_H f‖_M ≤ C' H² |f|_L.

Proof. This is the spatial network counterpart of [HP22, Lemma 3.2]. For the error estimate, we introduce ū_H := K^{-1} Π_H f ∈ V_H and employ Céa's lemma to estimate the energy approximation error of u_H by the one for ū_H, i.e.,

|u − u_H|_K ≤ |u − ū_H|_K.

Denoting e := u − ū_H, we further obtain, using (4.3) and the M-orthogonality of f − Π_H f to P_0(T_H),

|e|²_K = (M(f − Π_H f), e) = (M(f − Π_H f), e − Π_H e) ≤ ‖f − Π_H f‖_M ‖e − Π_H e‖_M ≤ C_Π H ‖f − Π_H f‖_M |e|_L.

Dividing by |e|_K, inferring from (2.7) that |e|_L ≤ α^{-1/2} |e|_K, and applying (4.3) once more to f, the assertion follows.
Remark 4.3 (T_H-piecewise constant right-hand sides). For f ∈ P_0(T_H), the prototypical method is exact, as for T_H-piecewise constant right-hand sides, it holds that ‖f − Π_H f‖_M = 0 in (4.6).

Localization
This section provides a localization strategy that turns the prototypical multiscale method introduced in the previous section into a practical method. Inspired by [HP22], we introduce a localization strategy for spatial network models that identifies local T_H-piecewise constant source terms with rapidly decaying responses under the solution operator K^{-1}. This rapid decay enables a localization of the basis functions to element patches, which paves the way to an efficiently computable localized ansatz space.
In order to simplify the notation in the subsequent derivation, we fix an element T ∈ T_H and its surrounding ℓ-th order patch ω := N^ℓ(T). Furthermore, let T_{H,ω} denote the submesh of T_H consisting of the elements contained in ω.

5.1. Localization ansatz.
For the construction of an almost local basis of the prototypical ansatz space V_H defined in (4.4), we assign one basis function to each element T ∈ T_H. The (in general global) basis function ψ ∈ V_H associated with element T is determined by the ansatz

(5.1) ψ := K^{-1} g,  g := Σ_{T'∈T_{H,ω}} g_{T'} 1_{T'} ∈ P_0(T_{H,ω}),

with coefficients (g_{T'})_{T'∈T_{H,ω}} to be determined subsequently. A local counterpart of ψ, which is supported on the patch ω, can be defined by applying the patch-local solution operator K_ω^{-1} instead of K^{-1}, i.e., ϕ := K_ω^{-1} g. In general, the locally supported function ϕ is a poor approximation of its global counterpart ψ. However, we aim to choose the coefficients g_{T'} such that ψ is well approximated by its patch-local counterpart ϕ.

5.2. Conormal derivatives.
For this purpose, we transfer the concept of conormal derivatives to the spatial network setting. Let Ṽ_ω denote the subset of V consisting of functions that are supported on the nodes in ω or their neighboring nodes.
The following preliminary result is essential for the definition of conormal derivatives.
Lemma 5.1 (Inner product on Ṽ_ω). The bilinear form ((L_ω + M_ω)·, ·) is an inner product on the space Ṽ_ω.
Proof. We only need to show the positive definiteness of the bilinear form, which we do by contradiction. Assume that there exists 0 ≠ v ∈ Ṽ_ω such that ((L_ω + M_ω)v, v) = 0. Let G̃ denote the subgraph of G within ω, extended by its neighboring nodes and the respective edges.
We can decompose G̃ into a finite number of connected components G̃_i. On each connected component, v equals a constant c_i (otherwise (L_ω v, v) > 0). Denoting with M_i the mass matrix with respect to G̃_i, we have that (M_i c_i, c_i) = c_i² (M_i 1, 1), which is only zero if c_i = 0. Here, it shall be noted that a connected component cannot consist solely of nodes outside of ω since, by definition, these nodes are connected to nodes within ω. The assertion follows immediately.
In the spatial network setting, we define the conormal derivative of ϕ, denoted by B_ω ϕ ∈ Ṽ'_ω, as the functional that satisfies, for all v ∈ Ṽ_ω,

(5.2) ⟨B_ω ϕ, v⟩ := (K_ω ϕ, v) − (Mg, v).

Further, we define its operator norm as

‖B_ω ϕ‖_{Ṽ'_ω} := sup_{0≠v∈Ṽ_ω} ⟨B_ω ϕ, v⟩ / ‖v‖_{V,ω}.

The following lemma shows that the operator norm of B_ω ϕ can be bounded in terms of ϕ and its corresponding right-hand side g.
Lemma 5.2 (Continuity of the conormal derivative). The operator norm of B_ω ϕ can be bounded as

‖B_ω ϕ‖_{Ṽ'_ω} ≤ C (β |ϕ|_{L,ω} + ‖g‖_M),

with a constant C > 0 stemming from a norm equivalence on Ṽ_ω.

Proof. We denote by G̃ = (Ñ, Ẽ) the subgraph of G within ω extended by its neighboring nodes and the respective edges, and define a semi-norm |·|_{L̃} on Ṽ_ω by the graph Laplacian L̃ of G̃. More precisely, for all v ∈ Ṽ_ω, it holds that |v|_{L,ω} ≤ |v|_{L̃}, and the norms induced by L_ω + M_ω and L̃ + M̃ are equivalent on Ṽ_ω. Using that g ∈ V̊_ω, ϕ ∈ V̊_ω, and v ∈ Ṽ_ω, the norm equivalence, and the discrete Cauchy–Schwarz inequality, we obtain

⟨B_ω ϕ, v⟩ = (K_ω ϕ, v) − (Mg, v) ≤ β |ϕ|_{L,ω} |v|_{L,ω} + ‖g‖_M ‖v‖_{M,ω} ≤ C (β |ϕ|_{L,ω} + ‖g‖_M) ‖v‖_{V,ω},

which is the boundedness of the conormal derivative.
The following result enables the computation of the Ṽ'_ω-norm of the conormal derivative and is key for the method's practical implementation.
Lemma 5.3 (Computation of the conormal derivative's norm). The operator norm of the conormal derivative B_ω ϕ ∈ Ṽ'_ω can be computed as

‖B_ω ϕ‖_{Ṽ'_ω} = ((L_ω + M_ω)τ, τ)^{1/2},

where τ ∈ Ṽ_ω solves, for all v ∈ Ṽ_ω,

(5.3) ((L_ω + M_ω)τ, v) = ⟨B_ω ϕ, v⟩.

Proof. Lemma 5.1 yields the unique existence of a solution τ to problem (5.3). Furthermore, using (5.3), we can compute the operator norm of B_ω ϕ as

‖B_ω ϕ‖_{Ṽ'_ω} = sup_{0≠v∈Ṽ_ω} ((L_ω + M_ω)τ, v) / ‖v‖_{V,ω} = ((L_ω + M_ω)τ, τ)^{1/2},

where we used that the supremum is attained for v = τ.
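Lemma 5.3 is a Riesz-representation argument: the dual norm of a functional with respect to an SPD inner product is obtained from a single linear solve. A generic sketch with a small SPD stand-in for L_ω + M_ω (the matrix and the functional b below are illustrative, not taken from the paper):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def dual_norm(A, b):
    """Operator norm sup_v <b, v>/||v||_A of a functional b with respect to
    the inner product induced by the SPD matrix A: solve A tau = b, then
    ||b||_* = sqrt(<b, tau>), mirroring the argument of Lemma 5.3."""
    tau = spsolve(A.tocsc(), b)
    return np.sqrt(b @ tau)

# tiny SPD example standing in for L_omega + M_omega on a patch
A = sp.diags([[-1.0] * 4, [3.0] * 5, [-1.0] * 4], [-1, 0, 1]).tocsr()
b = np.array([0.0, 1.0, -1.0, 0.5, 0.0])  # functional <b, v> = b^T v

val = dual_norm(A, b)
# brute-force check: the supremum over random directions stays below val
rng = np.random.default_rng(1)
sup = max((b @ v) / np.sqrt(v @ (A @ v)) for v in rng.normal(size=(2000, 5)))
print(val, sup)  # sup approaches val from below
```

The supremum is attained at v = τ, so random sampling can only approach the computed value from below, which the check illustrates.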

5.3. Choice of local basis.
It turns out that ϕ approximates ψ well, provided that the coefficients (g_{T'})_{T'∈T_{H,ω}} are chosen such that the conormal derivative of ϕ is small, since, for all v ∈ V̊, it holds that

(K(ψ − ϕ), v) = (Mg, v) − (Kϕ, v).

Hence, denoting by R : P_0(T_{H,ω}) → Ṽ_ω, g ↦ τ, the linear mapping that maps a right-hand side g to its corresponding function τ defined in (5.3), we choose g as a minimizer of the quotient

(5.4) g ∈ arg min_{0≠g∈P_0(T_{H,ω})} ((L_ω + M_ω)Rg, Rg) / ‖g‖²_M.

Writing g = Σ_i g_i 1_{T_i}, this minimization is equivalent to computing an eigenvector corresponding to the smallest eigenvalue of the generalized eigenvalue problem

(5.5) A g = λ B g,  A_ij := ((L_ω + M_ω)R1_{T_i}, R1_{T_j}),  B_ij := (M1_{T_i}, 1_{T_j}),

where {T_i : i = 1, ..., N} is some numbering of the elements in T_{H,ω}.
As the notation in (5.4) already indicates, the solution to the minimization problem is, in general, non-unique. Indeed, for large oversampling parameters ℓ and for patches ω close to the boundary ∂Ω, there might be several (linearly independent) g with Rayleigh quotients of similar size. In such cases, the appropriate choice of g can be difficult, cf. Section 6.1 for an implementation resolving this issue in practice.

5.4. Localization error. We define the quantity σ_T as the norm of the conormal derivative of the selected basis function, i.e.,

(5.6) σ_T := ‖B_ω ϕ‖_{Ṽ'_ω}.

The quantity σ_T determines the local localization error, i.e., the error when approximating ψ by its local counterpart ϕ.
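The minimization (5.4) amounts to computing a smallest generalized eigenpair. A generic sketch with dense stand-in matrices (A playing the role of the quadratic form on the responses and B the mass Gram matrix; both are synthetic and illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def minimize_rayleigh(A, B):
    """Coefficient vector g minimizing (A g, g)/(B g, g): the eigenvector of
    the generalized eigenvalue problem A g = lambda B g associated with the
    smallest eigenvalue, as in the selection step (5.4)/(5.5)."""
    lam, vecs = eigh(A, B, subset_by_index=[0, 0])  # smallest eigenpair only
    return lam[0], vecs[:, 0]

rng = np.random.default_rng(2)
Q = rng.normal(size=(6, 6))
A = Q @ Q.T + 1e-3 * np.eye(6)  # SPD stand-in for the localized quadratic form
B = np.eye(6)                   # stand-in for the mass Gram matrix

lam_min, g = minimize_rayleigh(A, B)
# no other vector achieves a smaller Rayleigh quotient
samples = rng.normal(size=(1000, 6))
quotients = np.einsum('ij,jk,ik->i', samples, A, samples) / (samples**2).sum(1)
print(lam_min <= quotients.min())  # True
```

As noted above, when several eigenvalues of (5.5) are of similar size, the returned eigenvector is essentially arbitrary within the corresponding near-invariant subspace, which is exactly the non-uniqueness issue addressed in Section 6.1.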

Super-localized approximation
Using the localized basis functions introduced in the previous section, this section transforms the prototypical method (4.5) into a practically feasible method. For a fixed oversampling parameter ℓ ∈ N, we define the localized ansatz space as

V_{H,ℓ} := span{ϕ_{T,ℓ} : T ∈ T_H}.

Note that, in the case of ambiguity, we write ϕ_{T,ℓ}, ψ_{T,ℓ}, and g_{T,ℓ} for ϕ, ψ, and g in order to emphasize their dependence on T and ℓ.
The SLOD determines the Galerkin approximation of the solution u to (2.6) in the space V_{H,ℓ}, i.e., it seeks u_{H,ℓ} ∈ V_{H,ℓ} such that, for all v_{H,ℓ} ∈ V_{H,ℓ},

(6.1) (Ku_{H,ℓ}, v_{H,ℓ}) = (Mf, v_{H,ℓ}).

6.1. Riesz basis property of right-hand sides. The choice of right-hand sides (5.4) does, in general, not guarantee their linear independence. For the analysis, we assume that the right-hand sides {g_{T,ℓ} : T ∈ T_H} span P_0(T_H) in a stable way. Subsequently, we also present an implementation strategy that ensures the basis stability in practice.
Assumption 6.1 (Riesz stability). The set {g_{T,ℓ} : T ∈ T_H} is a Riesz basis of P_0(T_H), i.e., there is C_r(H, ℓ) > 0, depending polynomially on H and ℓ, such that, for all (c_T)_{T∈T_H},

(6.2) C_r^{-1}(H, ℓ) Σ_{T∈T_H} c_T² ≤ ‖Σ_{T∈T_H} c_T g_{T,ℓ}‖²_M ≤ C_r(H, ℓ) Σ_{T∈T_H} c_T².

The Riesz basis property is closely related to the eigenvalues of the matrix containing the inner products of the right-hand sides g_{T,ℓ}, as the following remark shows.

Remark 6.2 (Riesz constant). The constant C_r is determined by the smallest and largest eigenvalues of the matrix G ∈ R^{M×M}, M := #T_H, defined as

G_ij := (M g_{T_i,ℓ}, g_{T_j,ℓ}),

where {T_i : i = 1, ..., M} is some numbering of the elements in T_H. Denoting its eigenvalues by λ_1 ≤ λ_2 ≤ ... ≤ λ_M, the constant in the lower (resp. upper) bound of (6.2) is the smallest (resp. largest) eigenvalue of G. Thus, we can set C_r := max{λ_1^{-1}, λ_M}.

The stability issue of the basis of right-hand sides appears similarly in the continuous setting for the SLOD from [HP22]. Therein, an algorithm is proposed that selects the right-hand sides in a stable and efficient manner. The algorithm is based on the observation that stability issues only occur for patches close to the boundary ∂Ω. By grouping certain troubled patches and solving one minimization problem of the type (5.4) for such a group of patches, the stability of the right-hand sides within this group can be ensured by construction. As this procedure only needs to be applied close to the global boundary for certain groups of patches, the algorithm introduces only minimal communication between the patches. For more information and a detailed illustrative description of the algorithm, see [HP22, Appendix B].

6.2. A-posteriori error estimate. We derive an error estimate for the SLOD solution (6.1) which is explicit in the quantity

(6.3) σ := σ(H, ℓ) := max_{T∈T_H} σ_T,

with σ_T defined in (5.6). The quantity σ determines the size of the method's localization error. Remark 6.4 presents a summary of existing decay results for σ in the continuous setting and the spatial network setting.
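Following Remark 6.2, the Riesz constant can be read off the extreme eigenvalues of the Gram matrix of the right-hand sides. A minimal sketch (the Gram matrices below are synthetic examples, not computed from actual SLOD right-hand sides):

```python
import numpy as np

def riesz_constant(G):
    """Riesz constant of a family of vectors from their Gram matrix
    G_ij = (M g_i, g_j): the bounds in (6.2) are governed by the extreme
    eigenvalues of G; a large value signals near-linear-dependence."""
    lam = np.linalg.eigvalsh(G)  # eigenvalues in ascending order
    assert lam[0] > 0, "right-hand sides are linearly dependent"
    return max(1.0 / lam[0], lam[-1])

# Gram matrix of an M-orthonormal family is the identity: C_r = 1
print(riesz_constant(np.eye(4)))  # 1.0

# nearly parallel right-hand sides inflate the Riesz constant
g1, g2 = np.array([1.0, 0.0]), np.array([0.999, np.sqrt(1 - 0.999**2)])
V = np.stack([g1, g2])
print(riesz_constant(V @ V.T))  # large
```

A blow-up of this quantity is exactly the boundary-patch instability that the grouping algorithm of [HP22, Appendix B] is designed to prevent.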
Theorem 6.3 (Uniform localized approximation). Let the network satisfy Assumption 3.1 and Assumption 3.3. Further, let {g_{T,ℓ} : T ∈ T_H} be stable in the sense of Assumption 6.1. Then the SLOD approximation defined in (6.1) converges quadratically in H up to a localization error, i.e., there exist C, C̃ > 0 such that, for all f ∈ V,

(6.4) |u − u_{H,ℓ}|_K ≤ C H ‖f − Π_H f‖_M + C̃ C_r^{1/2}(H, ℓ) ℓ^{d/2} σ ‖f‖_M.

Proof. Using Céa's lemma, we can estimate the energy error of u_{H,ℓ} approximating u by the respective energy error for v_{H,ℓ}, for any v_{H,ℓ} ∈ V_{H,ℓ}. This yields

|u − u_{H,ℓ}|_K ≤ |u − v_{H,ℓ}|_K.

Next, we add and subtract ū_H := K^{-1} Π_H f and obtain with the triangle inequality

|u − v_{H,ℓ}|_K ≤ |u − ū_H|_K + |ū_H − v_{H,ℓ}|_K.

The first term has already been estimated in the proof of Lemma 4.2. Therein, it was shown that

|u − ū_H|_K ≤ C H ‖f − Π_H f‖_M.

For the second term, we use that the prototypical method (4.5) is exact for piecewise constant right-hand sides, in particular for Π_H f. This enables us to represent ū_H using the functions ψ_{T,ℓ} = K^{-1} g_{T,ℓ} from (5.1) as

ū_H = Σ_{T∈T_H} c_T ψ_{T,ℓ},

where (c_T)_{T∈T_H} are the coefficients of the expansion of Π_H f in terms of the right-hand sides g_{T,ℓ}. This is possible as the g_{T,ℓ} are linearly independent, which is, in particular, guaranteed by Assumption 6.1. For the particular choice v_{H,ℓ} := Σ_{T∈T_H} c_T ϕ_{T,ℓ}, we obtain, with e := ū_H − v_{H,ℓ},
(6.5) |ū_H − v_{H,ℓ}|²_K = Σ_{T∈T_H} c_T (K(ψ_{T,ℓ} − ϕ_{T,ℓ}), e).

Writing N^ℓ(T) instead of ω, we obtain, using Lemma 5.3 and the definitions of ψ_{T,ℓ}, the conormal derivative B_{N^ℓ(T)} in (5.2), and σ_T from (5.6), that

(6.6) (K(ψ_{T,ℓ} − ϕ_{T,ℓ}), e) = (M g_{T,ℓ}, e) − (K ϕ_{T,ℓ}, e) = −⟨B_{N^ℓ(T)} ϕ_{T,ℓ}, ẽ⟩ ≤ σ_T ‖ẽ‖_{V,N^ℓ(T)} = σ_T ‖e‖_{V,N^ℓ(T)}.

Here, we extended definition (5.2) to test functions in V̊ and use the notation ẽ for the restriction of e ∈ V̊ to the space Ṽ_{N^ℓ(T)}. In the last step, we utilized ‖ẽ‖_{V,N^ℓ(T)} = ‖e‖_{V,N^ℓ(T)}, which holds as the norm only considers nodes in the support of ẽ. The combination of (6.5), (6.6), Assumption 6.1, the discrete Cauchy–Schwarz inequality, the finite overlap of the patches, and (4.2) yields

|ū_H − v_{H,ℓ}|²_K ≤ σ Σ_{T∈T_H} |c_T| ‖e‖_{V,N^ℓ(T)} ≤ C_ol^{1/2} ℓ^{d/2} σ (Σ_{T∈T_H} c_T²)^{1/2} ‖e‖_V ≤ C_ol^{1/2} ℓ^{d/2} σ C_r^{1/2}(H, ℓ) ‖f‖_M ‖e‖_V,

with a constant C_ol > 0 reflecting the overlap of the patches N^ℓ(T). In the last step, we applied Friedrichs' inequality (2.8) on the full network to bound ‖e‖_V by |e|_K up to a constant. Putting together the previous estimates and using (2.5), the assertion can be concluded.

Remark 6.4 (Decay of localization error). In the continuous setting, [HP22] conjectures that σ decays super-exponentially in ℓ. More precisely, there exist C_σ > 0, depending polynomially on H and ℓ, and C_d > 0, independent of H and ℓ, such that, for all ℓ,

(6.7) σ(H, ℓ) ≤ C_σ exp(−C_d ℓ^{d/(d−1)}).

In [HP22, Section 7], this result is justified theoretically using a conjecture from spectral geometry. Numerical experiments in [HP22, Section 8] confirm the super-exponential decay numerically. Furthermore, [HP22, Lemma 6.4] provides a rigorous proof that σ decays at least exponentially in ℓ.
In the spatial network setting, we also observe a rapid decay of the quantity σ as ℓ is increased. Qualitatively, the decay behavior is similar to the one in the continuous setting, see the numerical experiments in Section 7.2 and Section 7.3. Using techniques from [EGH+22], one can show, similarly as in the continuous setting, that σ decays at least exponentially.

Remark 6.5 (T_H-piecewise constant right-hand sides). For f ∈ P_0(T_H), only the localization error in (6.4) is present, as for T_H-piecewise constant right-hand sides, it holds that ‖f − Π_H f‖_M = 0.

Numerical experiments
For the subsequent numerical experiments, we utilize a spatial network constructed as follows. First, we sample 20,000 lines of length 0.05 with uniformly distributed rotations and with midpoints uniformly distributed in [−0.025, 1.025]². Next, we remove all line segments outside of the unit square so that all lines are contained in the domain Ω = [0, 1]². We then define the network nodes as the line segments' endpoints and intersections. Two nodes are connected by an edge if they share a line segment. We only consider the largest connected component of the network in order to ensure that the network is connected. We also remove all hanging nodes (nodes of degree one) in the interior of the domain, along with the respective edges. All nodes on the boundary ∂Ω are Dirichlet nodes. The total number of nodes is around 80,000.

7.1. Poincaré constant. This numerical experiment shall justify Assumption 3.3 numerically. Given T ∈ T_H, H ∈ H, we construct the subgraph Ḡ = (N̄, Ē) by means of a breadth-first search algorithm. Then, we compute the second eigenvalue λ_2 of the generalized eigenvalue problem L̄v = λM̄v posed in the space V(N̄), with L̄, M̄ being defined on Ḡ, see also the proof of Lemma 3.2.
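The fiber network construction described at the beginning of this section can be sketched as follows (with far fewer and longer fibers than in the paper, and omitting the clipping to the unit square, the largest-component extraction, and the removal of hanging nodes for brevity):

```python
import numpy as np
from collections import defaultdict

rng = rng_fibers = np.random.default_rng(0)
n_lines, length = 200, 0.2  # illustrative parameters, not the paper's
mid = rng.uniform(-length / 2, 1 + length / 2, size=(n_lines, 2))
ang = rng.uniform(0, np.pi, size=n_lines)
d = np.stack([np.cos(ang), np.sin(ang)], axis=1)
p, q = mid - length / 2 * d, mid + length / 2 * d  # segment endpoints

def intersect(p1, q1, p2, q2):
    """Parameters (s, t) with p1 + s (q1 - p1) = p2 + t (q2 - p2), or None."""
    A = np.column_stack([q1 - p1, p2 - q2])
    det = np.linalg.det(A)
    if abs(det) < 1e-12:  # (nearly) parallel segments
        return None
    s, t = np.linalg.solve(A, p2 - p1)
    return (s, t) if 0 <= s <= 1 and 0 <= t <= 1 else None

# collect, per line, the parameters of its endpoints and intersection points
params = defaultdict(set)
for i in range(n_lines):
    params[i] |= {0.0, 1.0}
    for j in range(i + 1, n_lines):
        st = intersect(p[i], q[i], p[j], q[j])
        if st is not None:
            params[i].add(st[0]); params[j].add(st[1])

# nodes are endpoints and intersections (rounding merges coincident points);
# consecutive points along a line are connected by an edge
nodes, index, edges = [], {}, set()
for i in range(n_lines):
    prev = None
    for s in sorted(params[i]):
        pt = tuple(np.round(p[i] + s * (q[i] - p[i]), 9))
        k = index.setdefault(pt, len(nodes))
        if k == len(nodes):
            nodes.append(pt)
        if prev is not None and prev != k:
            edges.add((min(prev, k), max(prev, k)))
        prev = k

print(len(nodes), len(edges))  # intersections create far more nodes than lines
```

The quadratic-cost pairwise intersection test is sufficient for this small example; the networks of the size used in the paper would require a spatial search structure.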
In Figure 7.1, the reciprocal square root of the numerically computed eigenvalues λ_2 corresponding to elements T ∈ T_H is depicted for different mesh sizes H ∈ H. The dotted black line interpolates, for all mesh sizes H, the averaged eigenvalue for the respective H. Note that the scaling of the y-axis is chosen such that the black line should appear linear if the desired scaling from Assumption 3.3 holds. We note that, for smaller mesh sizes, there is a considerable variation in the eigenvalues. This happens when we reach the critical mesh size, which is H_0 ≈ 2^{−5} in this example.

7.2. Decay of σ. In this experiment, we numerically investigate the decay of σ defined in (6.3), which is the maximum of the local quantities σ_T from (5.6). For illustration purposes, we pick an element T ∈ T_{2^{−4}} whose fourth-order patch has no intersection with the global boundary ∂Ω. In Figure 7.2, one observes a rapid decay of σ_T as ℓ is increased. Note that, for ℓ ≥ 3, the difference in magnitude between the smallest and largest eigenvalues of (5.5) is at least of order 10^{14}, which means that the respective matrices have a large condition number. This affects the accuracy of the eigenvalue solver and explains the flattening of the decay for large oversampling parameters observed in Figure 7.2 and Figure 7.3.

7.3. Super-localization. For this numerical experiment, we consider a weighted graph Laplacian (2.4) with edge weights γ_xy independently and uniformly distributed in the interval [0.01, 1]. We choose the right-hand side f ≡ 1, as for f ∈ P_0(T_H), the error is bounded solely by the localization error, see Remark 6.5. In Figure 7.3 (left), the localization errors for the SLOD are shown. The localization errors are plotted for several coarse meshes T_H with respect to ℓ. We additionally depict the localization errors when using the LOD for localizing the same prototypical ansatz space V_H from (4.4). As reference, we indicate lines showing the expected rates of decay of the localization errors for the SLOD
and the LOD, which is super-exponential decay for the SLOD, cf. (6.7), and exponential decay for the LOD. In Figure 7.3 (right), we again depict the localization errors for the SLOD, but this time together with the values of its estimator

(7.1) est(H, ℓ) := C_r^{1/2}(H, ℓ) ℓ^{d/2} σ(H, ℓ)

appearing in the error estimate in Theorem 6.3. Note that the scaling of the x-axis in Figure 7.3 is chosen such that a super-exponentially decaying curve appears as a straight line.
Figure 7.3 (left) numerically confirms the super-exponential decay rates of the localization errors as known for the continuous setting [HP22]. Note that, similarly as for the decay of σ in Section 7.2, we can also observe a flattening of the decay for ℓ ≥ 3. This might again be explained by the high condition number of the matrices in (5.5), see Section 7.2. The localization error of the LOD, depicted in Figure 7.3 (left), decays only exponentially, see e.g. [AHP21, EGH+22], and much larger values of ℓ are necessary in order to reach the accuracy level of the SLOD. Figure 7.3 (right) shows that the error estimator is quite well aligned with the localization error and thereby underlines the sharpness of the estimator. For ℓ = 4, we observe that also the estimator is slightly affected by the aforementioned conditioning issue.

7.4. Optimal convergence. For demonstrating the convergence of the SLOD for spatial networks, we consider the edge weights from the previous numerical example and the right-hand side f(x_1, x_2) = sin(x_1) sin(x_2). Figure 7.4 shows the convergence plots for the SLOD (left) and the LOD (right) for different oversampling parameters ℓ as the coarse mesh T_H is refined. Note that we only consider combinations of H, ℓ for which no patch N^ℓ(T) coincides with the whole domain Ω. As reference, lines of slope 2 are depicted.
Figure 7.4 confirms the method's convergence properties as stated in Theorem 6.3, i.e., provided that the oversampling parameter ℓ is chosen sufficiently large, optimal convergence of order 2 can be observed. Note that in Figure 7.4 (left), the yellow line is partially below the purple line and thus is difficult to see. For the LOD, the optimal order convergence is difficult to see. The reason is that, for all considered localization parameters, the localization error still dominates the optimal order error.

7.5. High-contrast example. In this numerical experiment, we use edge weights γ_xy that are independently and uniformly distributed in [0.01, 1] and add several channels of high conductivity with edge weights of 10^4. Hence, the contrast in this numerical example is of order 10^6. For an illustration of the setup, we refer to Figure 7.5 (left), where the high-conductivity channels are marked in green. The high conductivity effectively extends the homogeneous Dirichlet boundary conditions also to the channels. We consider this experiment as particularly challenging, as the channels are not aligned with the considered Cartesian meshes T_H and thus artificial boundary-like conditions are imposed on the channels. For this experiment, the SLOD is again able to achieve very accurate approximations for much smaller oversampling parameters than the LOD. For example, for the mesh T_{2^{-4}} and ℓ = 3, the SLOD achieves a relative L-norm error of 2.75 × 10^{-3}, while the LOD, for the same discretization parameters, only achieves an error of 1.73 × 10^{-1}. For reaching a similar accuracy as the SLOD, for the LOD, we would need to choose ℓ so large that many patch problems are already global problems.
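The high-contrast weight distribution described above can be sketched as follows. The edge midpoints, the channel positions, and the channel width are illustrative assumptions standing in for the paper's setup shown in Figure 7.5 (left); only the weight ranges [0.01, 1] and 10^4 are taken from the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the network's edges: random edge midpoints in the unit
# square (in the paper these come from the fiber network of Section 7).
n_edges = 2000
midpts = rng.uniform(0.0, 1.0, size=(n_edges, 2))

# Baseline conductivities gamma_xy i.i.d. uniform in [0.01, 1].
gamma = rng.uniform(0.01, 1.0, size=n_edges)

# Raise the weights inside a few horizontal channels to 1e4. The channel
# positions and the width 0.02 are hypothetical; the paper's channels
# are not aligned with the Cartesian meshes T_H.
in_channel = np.zeros(n_edges, dtype=bool)
for y0 in (0.25, 0.5, 0.75):
    in_channel |= np.abs(midpts[:, 1] - y0) < 0.01
gamma[in_channel] = 1e4

contrast = gamma.max() / gamma.min()  # of order 1e4 / 1e-2 = 1e6
```

With the minimum weight bounded below by 0.01 and the channel weight 10^4, the resulting contrast is of the order 10^6 stated in the text.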

Figure 7.1 confirms Assumption 3.3 numerically.
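The eigenvalue computation behind Figure 7.1 can be illustrated with the following Python sketch, which assembles L and M for a toy path graph and extracts λ_2 from the generalized eigenvalue problem. The lumped diagonal mass matrix and the unit weights and lengths are illustrative assumptions; the paper defines L and M on the subgraphs Ḡ.

```python
import numpy as np
from scipy.linalg import eigh

def graph_matrices(edges, weights, lengths, n):
    """Assemble the weighted graph Laplacian L and a diagonal matrix M.

    M lumps half of each edge length onto its two endpoints -- an
    illustrative stand-in for the matrices of Lemma 3.2."""
    L = np.zeros((n, n))
    M = np.zeros((n, n))
    for (x, y), g, h in zip(edges, weights, lengths):
        L[x, x] += g; L[y, y] += g
        L[x, y] -= g; L[y, x] -= g
        M[x, x] += 0.5 * h
        M[y, y] += 0.5 * h
    return L, M

# Toy subgraph: a path with n nodes, unit edge weights and lengths.
n = 32
edges = [(i, i + 1) for i in range(n - 1)]
L, M = graph_matrices(edges, np.ones(n - 1), np.ones(n - 1), n)

# Generalized eigenvalue problem L v = lambda M v. For a connected graph
# the smallest eigenvalue is 0 (constant eigenvector), so the quantity
# plotted in Figure 7.1 is the reciprocal square root of lambda_2.
lam = eigh(L, M, eigvals_only=True)  # ascending order
lam2 = lam[1]
poincare = 1.0 / np.sqrt(lam2)
```

For subgraphs of increasing diameter, λ_2 decreases and 1/√λ_2 grows, which is the H-dependence that Assumption 3.3 quantifies.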
Figure 7.2 then depicts the square roots of the eigenvalues of the eigenvalue problem (5.5) for the patches N^ℓ(T) with ℓ = 1, ..., 4. By definition, the square root of the smallest eigenvalue coincides with σ_T. The values of σ_T are marked using dashed horizontal lines.

Figure 7.2. Eigenvalues of the eigenvalue problem (5.5) for patches of orders ℓ. The dashed lines mark the values of σ_T for the respective patches.

Figure 7.3. Plot of the relative localization errors in dependence of the oversampling parameter ℓ for the SLOD and the LOD (left). Depiction of the relative localization errors of the SLOD together with the estimator (7.1) (right).


Figure 7.4. Plot of the relative errors |u − u_{H,ℓ}|_L / |u|_L in dependence of the mesh size H for the SLOD (left) and the LOD (right).