A Reduced Basis Method For Fractional Diffusion Operators I

We propose and analyze new numerical methods to evaluate fractional norms and apply fractional powers of elliptic operators. By means of a reduced basis method, we project to a low-dimensional subspace, where explicit diagonalization via the eigensystem is feasible. The method relies on several independent evaluations of $(\mathrm{I}-t_i^2\Delta)^{-1}f$, which can be computed in parallel. We prove exponential convergence rates for the optimal choice of sampling points $t_i$, provided by the so-called Zolotarëv points. Numerical experiments confirm the analysis and demonstrate the efficiency of our algorithm.


Introduction
Fractional powers of differential operators are a field of substantial interest in different branches of mathematics. Their increasingly frequent appearance in real-world problems, such as ecology [9], finance [5], image processing [19], material science [7], and porous media flow [8], has given rise to several approaches for understanding and analyzing problems of this kind. Choosing the fractional exponent $s$ such that the observed data match the mathematical model is an issue of major concern in applied science, requiring multiple evaluations with respect to $s$.
Typically, direct computations rely on a matrix approximation $L$ of the desired operator, whose $s$-th power is computed subsequently. This procedure requires diagonalization of $L$, which amounts to solving a large number of time-consuming eigenvalue problems, making this approach inapplicable for general purposes. The demand for suitable methods addressing these problems has increased substantially in recent years.
Fractional powers of the Laplace operator are of particular interest. Widely varying definitions of $(-\Delta)^s$ have emerged, e.g., as a pseudodifferential operator defined by the Fourier transform, by means of the eigensystem, as a singular integral operator, or as the inverse of the Riesz potential operator. All these definitions turn out to be equivalent on $\mathbb{R}^n$, see [21]. This result no longer holds once bounded domains are considered. A detailed survey of the various definitions, together with a comparison of both existing and newly proposed numerical schemes, is given in [23].
A difficulty that all fractional operators have in common is their nonlocal character. Caffarelli and Silvestre avoided this inconvenience in [16] by relating any fractional power of the Laplacian on $\mathbb{R}^n$ to a Dirichlet-to-Neumann map of an associated harmonic extension problem on $\mathbb{R}^n \times \mathbb{R}_+$, providing a local realization of $(-\Delta)^s$. Adaptations to bounded domains $\Omega$ have been conducted in [14], [15], and [17], yielding a boundary value problem on the semi-infinite cylinder $\mathcal{C} := \Omega \times \mathbb{R}_+$. Extensions to a more general class of operators have been presented in [32].
A large number of methods exploit the structure of harmonic extension techniques to approximate fractional differential operators and their inverses, see [2], [4], [6], and [18]. In [27], the solution of the aforementioned boundary value problem is computed on the truncated cylinder $\mathcal{C}_\gamma := \Omega \times [0, \gamma)$, with $\gamma > 0$ of moderate size, by standard finite element techniques, at the cost of one additional space dimension. Truncation is justified by the fact that the solution decays exponentially in the extended direction. Other strategies rely on integral representations of the underlying operator and utilize sophisticated quadrature rules, see [12]. A block-wise low-rank approximation of the associated stiffness matrix has been investigated in [25]. Of particular interest are fractional elliptic operators in the context of parabolic equations. Tackling problems of this kind has been the subject of several recent publications, e.g., [3], [11], [26], and [31].
In this article, we contribute to the vast field of approximation techniques by means of a reduced basis method. The matrix approximation $L$ of the operator is projected to a lower-dimensional space, where its fractional power can be determined directly according to the small eigensystem. The projection is based on several independent evaluations of type $(M + t_i^2 A)^{-1}f$, where $M$ and $A$ represent finite element matrices, $f$ a suitable vector, and $t_i$ a real parameter. The decoupled structure of this problem trivially admits an efficient implementation in parallel. Multigrid preconditioners whose convergence rates are bounded independently of the shift parameter $t_i$ and the uniform mesh size $h$ can be utilized. The proposed method can be interpreted as a model order reduction of the approach devised in [27], without requiring truncation of the domain. Among other things, it provides accurate approximations for evaluations of both types $s \mapsto (-\Delta)^s u$ and $u \mapsto (-\Delta)^s u$ with considerably reduced computational expense. The arising operator incorporates a nonlinear dependency on $u$. This inconvenience is compensated by its analytically affirmed exponential convergence property, while, at the same time, the computational effort grows only logarithmically in the condition number. The emerging estimates rely on the theory of K-functionals and their attendant interpolation spaces. The realization of both the inverse operator and its appearance in the context of parabolic problems is the subject of a subsequent paper.
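The computational kernel just described can be sketched in a few lines. In the hedged stand-in below, a 1D finite-difference Laplacian plays the role of the stiffness matrix $A$ and the identity plays the role of $M$; the shifts and right-hand side are illustrative, not the paper's data.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
# 1D finite-difference Laplacian as a stand-in for a FE stiffness matrix A,
# and the identity as a stand-in for the mass matrix M.
A = (n + 1) ** 2 * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = sp.identity(n, format="csc")

f = np.ones(n)
shifts = [0.01, 0.1, 1.0, 10.0]

# Each solve (M + t_i^2 A) v_i = M f is independent of the others,
# so this loop can be distributed across processes without communication.
snapshots = [spla.spsolve(sp.csc_matrix(M + t ** 2 * A), M @ f) for t in shifts]
```

Since the shifted systems are symmetric positive definite and well conditioned uniformly in $t_i$, each solve could equally be performed by a preconditioned iterative method in a large-scale setting.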
The paper is organized as follows. In Section 2, we introduce three different concepts of Hilbert space interpolation. They serve as an abstract template to provide a setting that is suitable for the study of our problem. One of these interpolation techniques can be regarded as a conceptual enhancement of the well-known Caffarelli-Silvestre extension and its variants. The equivalence of all three methods is the main result of this section. A model order reduction in terms of a reduced basis strategy is devised in Section 3. Feasible choices of the reduced space and the efficient implementation of the arising reduced basis interpolation norms are taken into account. Having understood the underlying structure, we proceed, in Section 4, to deduce the induced fractional operator, providing the essential definition of this paper. In order to justify the former course of action, we perform the involved numerical analysis in Section 5. The optimal choice of the reduced space is elaborated, yielding exponential decay of the error for both norm and operator approximation. The core of this paper is summarized in Theorem 5.20. Thereupon, in Section 6, we conduct several numerical examples that illustrate the performance of our method. Eventually, in the appendix, we prove two technical results that are referred to within Section 2.

Notation and Preliminaries
In this section, we establish the notation and terminology we utilize throughout this paper and introduce several function spaces addressed in what follows.
Throughout what follows, let the induced norm $\|\cdot\|$ of a Hilbert space $(V, \langle\cdot,\cdot\rangle)$ be defined as
$$\|v\| := \sqrt{\langle v, v\rangle} \quad \text{for all } v \in V. \tag{2.1}$$
Conversely, given a Banach space $(V, \|\cdot\|)$ such that $\|\cdot\|$ satisfies the parallelogram law, we define the induced scalar product of $\|\cdot\|$ as the unique scalar product $\langle\cdot,\cdot\rangle$ on $V$ that satisfies (2.1), obtained by the polarization identity. Whenever we refer to a Banach space $(V, \|\cdot\|)$ as a Hilbert space, we mean that $\|\cdot\|$ induces a scalar product $\langle\cdot,\cdot\rangle$ such that $(V, \langle\cdot,\cdot\rangle)$ is a Hilbert space.

Hilbert space interpolation.
Throughout what follows, let $(V_0, \langle\cdot,\cdot\rangle_0)$ and $(V_1, \langle\cdot,\cdot\rangle_1)$ denote two real Hilbert spaces such that $V_1 \subseteq V_0$ is dense with compact embedding. It is well-known that there exists an orthonormal basis $(\varphi_k)_{k=1}^\infty$ of $V_0$ and a sequence of positive real numbers $(\lambda_k)_{k=1}^\infty$ with $\lambda_k \to \infty$ as $k \to \infty$, satisfying
$$\forall w \in V_1: \quad \langle \varphi_k, w\rangle_1 = \lambda_k^2 \langle \varphi_k, w\rangle_0 \quad \text{for all } k \in \mathbb{N}.$$
Along with these premises, we introduce, based on [10], [13], and [22], the first of three space interpolation techniques. For each $s \in (0,1)$ we define an interpolation space between $V_0$ and $V_1$ by
$$[V_0, V_1]_s^H := \Big\{ u \in V_0 : \sum_{k=1}^\infty \lambda_k^{2s} \langle u, \varphi_k\rangle_0^2 < \infty \Big\},$$
equipped with its Hilbert interpolation norm
$$\|u\|_{H^s(V_0,V_1)}^2 := \sum_{k=1}^\infty \lambda_k^{2s} \langle u, \varphi_k\rangle_0^2.$$
This space incorporates a Hilbert space structure and satisfies $V_1 \subseteq [V_0,V_1]_s^H \subseteq V_0$. Another approach is provided by the real method of interpolation in terms of the K-functional. It was first published by Peetre [28] and Lions and Magenes [22] and also works for Banach spaces. Let $\|\cdot\|_0$ and $\|\cdot\|_1$ denote the induced norms on $V_0$ and $V_1$, respectively. We define for all $t > 0$ and $u \in V_0$ the K-functional as
$$K^2(t; u) := \min_{v \in V_1} \|u - v\|_0^2 + t^2 \|v\|_1^2$$
to obtain the K-norm
$$\|u\|_{K^s(V_0,V_1)}^2 := \int_0^\infty t^{-2s} K^2(t; u) \, \frac{dt}{t}.$$
Along with its inner product, obtained by the parallelogram law, this norm induces a Hilbert space $[V_0,V_1]_s^K$, which again turns out to be intermediate. Based on the work of Peetre and Lions, it has been shown that $[V_0,V_1]_s^K$ can be characterized as a space of traces, see [33] and [34]. The arising norm, which turns the trace space into a Banach space, is known to be equivalent to the K-norm. We sharpen this observation by proving that these norms are not only equivalent but coincide up to a multiplicative constant. This result is well-established for some particular choices of $V_0$ and $V_1$, see e.g. [17, Proposition 2.1], but to the best of our knowledge has not been recorded in its most general setting. To make matters precise, we first establish some technical results.
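The relation between the spectral and the K-norm can be checked numerically on small matrix stand-ins. The sketch below assumes the normalization $\|u\|_{K^s}^2 = \int_0^\infty t^{-2s-1} K^2(t;u)\,dt$, for which the ratio of the squared norms is the constant $\pi/(2\sin(\pi s))$; the matrices and the vector $u$ are illustrative, not the paper's data.

```python
import numpy as np
from scipy.linalg import eigh, solve
from scipy.integrate import quad

n, s = 20, 0.5
# Stiffness and mass stand-ins (1D finite-difference Laplacian, identity).
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = np.eye(n)
u = np.linspace(0, 1, n) ** 2

# Hilbert interpolation norm via the generalized eigensystem A phi = lam^2 M phi.
lam2, Phi = eigh(A, M)                 # columns of Phi are M-orthonormal
coeff = Phi.T @ M @ u
h_norm2 = np.sum(lam2 ** s * coeff ** 2)

def K2(t):
    # K-functional: its minimizer solves (M + t^2 A) v = M u.
    v = solve(M + t ** 2 * A, M @ u)
    return (u - v) @ M @ (u - v) + t ** 2 * (v @ A @ v)

k_norm2 = (quad(lambda t: t ** (-2 * s - 1) * K2(t), 0, 1)[0]
           + quad(lambda t: t ** (-2 * s - 1) * K2(t), 1, np.inf)[0])
ratio = k_norm2 / h_norm2              # expected: pi / (2 sin(pi s))
```

The per-mode computation behind this constant is elementary: substituting $\tau = t\lambda_k$ in the integral yields $\lambda_k^{2s} \int_0^\infty \tau^{1-2s}(1+\tau^2)^{-1}\,d\tau = \lambda_k^{2s}\,\pi/(2\sin \pi s)$.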
For each $s \in (0,1)$ let $\alpha := 1 - 2s$ henceforth. For $i = 0, 1$, we define the weighted space
$$L^2_\alpha(\mathbb{R}_+; V_i) := \Big\{ w : \mathbb{R}_+ \to V_i : \int_0^\infty y^\alpha \|w(y)\|_i^2 \, dy < \infty \Big\},$$
and further
$$\mathcal{V}(V_0, V_1; y^\alpha) := \big\{ \mathcal{U} \in L^2_\alpha(\mathbb{R}_+; V_1) : \partial_y \mathcal{U} \in L^2_\alpha(\mathbb{R}_+; V_0) \big\},$$
equipped with the norm
$$\|\mathcal{U}\|_{\mathcal{V}(V_0,V_1;y^\alpha)}^2 := \int_0^\infty y^\alpha \big( \|\mathcal{U}(y)\|_1^2 + \|\partial_y \mathcal{U}(y)\|_0^2 \big) \, dy.$$
This space is amenable to trace evaluations, as the following theorem shows.
Theorem 2.1. There exists a linear, surjective trace operator $\mathrm{tr} : \mathcal{V}(V_0, V_1; y^\alpha) \to [V_0, V_1]_s^H$, satisfying
$$d_s \, \|\mathrm{tr}\, \mathcal{U}\|_{H^s(V_0,V_1)} \le \|\mathcal{U}\|_{\mathcal{V}(V_0,V_1;y^\alpha)}. \tag{2.2}$$
By $d_s$ we refer to a positive constant whose value can be specified by means of the Gamma function. Proof. See Appendix.
Theorem 2.1 justifies the introduction of an interpolation space $[V_0,V_1]_s^E := \mathrm{tr}\,\mathcal{V}(V_0,V_1;y^\alpha)$, equipped with the norm
$$\|u\|_{E^s(V_0,V_1)} := \inf\big\{ \|\mathcal{U}\|_{\mathcal{V}(V_0,V_1;y^\alpha)} : \mathrm{tr}\,\mathcal{U} = u \big\}, \tag{2.4}$$
which is well-defined by standard arguments from the calculus of variations. Due to surjectivity, the infimum in (2.4) is attained. By means of the Euler-Lagrange formalism, we observe that the minimizer in (2.4) is the unique solution of an associated variational formulation. Thereupon, we introduce the following definition.
Definition 2.2. The α-harmonic extension $\mathcal{U}$ of $u \in [V_0,V_1]_s^E$ is defined as the unique solution of the variational formulation: find $\mathcal{U} \in \mathcal{V}(V_0,V_1;y^\alpha)$ with $\mathrm{tr}\,\mathcal{U} = u$, such that for all $\mathcal{W} \in \mathcal{V}(V_0,V_1;y^\alpha)$ with $\mathrm{tr}\,\mathcal{W} = 0$ there holds
$$\int_0^\infty y^\alpha \big( \langle \mathcal{U}(y), \mathcal{W}(y)\rangle_1 + \langle \partial_y \mathcal{U}(y), \partial_y \mathcal{W}(y)\rangle_0 \big) \, dy = 0. \tag{2.5}$$
Lemma 2.3. The α-harmonic extension of $u$ attains the infimum in (2.4). Proof. Follows directly from the fact that (2.5) is the Euler-Lagrange equation of the minimization problem in (2.4).
As the following Theorem shows, all three interpolation methods coincide.
Theorem 2.4. Let $V_0$, $V_1$ denote two Hilbert spaces such that $V_1 \subseteq V_0$ is dense with compact embedding. Then there holds
$$[V_0,V_1]_s^E = [V_0,V_1]_s^H = [V_0,V_1]_s^K,$$
and the corresponding norms coincide up to multiplicative constants depending only on $s$. Throughout what follows, we denote by $[V_0,V_1]_s$ the unique interpolation space emerging from one and hence from all three interpolation methods. The norms $\|\cdot\|_{H^s(V_0,V_1)}$ and $\|\cdot\|_{K^s(V_0,V_1)}$ satisfy the parallelogram law on $[V_0,V_1]_s$. By virtue of Theorem 2.4, this property also applies to $\|\cdot\|_{E^s(V_0,V_1)}$. We summarize these observations in the following corollary. Corollary 2.5. All three interpolation norms, $\|\cdot\|_{E^s(V_0,V_1)}$, $\|\cdot\|_{H^s(V_0,V_1)}$, and $\|\cdot\|_{K^s(V_0,V_1)}$, induce a respective scalar product, $\langle\cdot,\cdot\rangle_{E^s(V_0,V_1)}$, $\langle\cdot,\cdot\rangle_{H^s(V_0,V_1)}$, and $\langle\cdot,\cdot\rangle_{K^s(V_0,V_1)}$, turning $[V_0,V_1]_s$ into a Hilbert space in each case. Definition 2.6. Let $\|\cdot\|_s \in \{\|\cdot\|_{E^s(V_0,V_1)}, \|\cdot\|_{H^s(V_0,V_1)}, \|\cdot\|_{K^s(V_0,V_1)}\}$. By means of the Riesz representation theorem, we define the induced operator of $\|\cdot\|_s$ as the unique linear function $L_s$ satisfying
$$\forall u, w \in [V_0,V_1]_s: \quad \langle L_s u, w\rangle_0 = \langle u, w\rangle_s,$$
where $\langle\cdot,\cdot\rangle_s$ refers to the induced scalar product of $\|\cdot\|_s$. Corollary 2.5 immediately reveals the following result. Corollary 2.7. The three induced operators coincide up to multiplicative constants depending only on $s$.
2.2. The finite element framework. Results from Section 2.1 are also valid in a discretized setting. Given two fixed Hilbert spaces $V_0$ and $V_1$ which satisfy the premises from Section 2.1, we denote by $V_h \subseteq V_1$ a conforming finite element space of dimension $N$ henceforth. Further, let $(b_k)_{k=1}^N \subseteq V_h$ denote an arbitrary basis of $V_h$. By $M, A \in \mathbb{R}^{N \times N}$ we refer to the mass and stiffness matrix of $V_h$, arising from the finite element discretization in terms of
$$M_{jk} := \langle b_k, b_j\rangle_0, \qquad A_{jk} := \langle b_k, b_j\rangle_1. \tag{2.6}$$
Due to their finite dimensional nature, the spaces $(V_h, \|\cdot\|_0)$ and $(V_h, \|\cdot\|_1)$ satisfy the conditions from Section 2.1, such that the discrete interpolation norms
$$\|\cdot\|_{E^s}, \quad \|\cdot\|_{H^s}, \quad \|\cdot\|_{K^s} \tag{2.7}$$
on $V_h$ are well-defined. The finite element space, equipped with each of these norms, is a Hilbert space, inducing both scalar products, $\langle\cdot,\cdot\rangle_{E^s}$, $\langle\cdot,\cdot\rangle_{H^s}$, $\langle\cdot,\cdot\rangle_{K^s}$, and operators, $L_{E^s}$, $L_{H^s}$, $L_{K^s}$, respectively.
The aim of this paper is to provide an accurate approximation of these operators with considerably reduced computational expense. By virtue of Corollary 2.7, it suffices to address this problem in any of those three interpolation settings. Each of them comes with its own benefits and difficulties attached. In the following section, we exploit the advantages of all three strategies to derive a computationally beneficial norm approximation, such that the induced operator satisfies the desired properties.

Approximation of the interpolation norms
The goal of this section is to devise an accurate approximation of the discrete interpolation norms, introduced in (2.7), with downsized computational effort. For convenience, we omit the subscript $h$ for all finite element functions $u_h \in V_h$ and simply write $u$ henceforth. Furthermore, by $(\varphi_k, \lambda_k^2)_{k=1}^N \subseteq V_h \times \mathbb{R}_+$ we refer from now on to the 0-orthonormal eigenpairs of $(V_h, \|\cdot\|_0)$ and $(V_h, \|\cdot\|_1)$, such that
$$\forall w \in V_h: \quad \langle \varphi_k, w\rangle_1 = \lambda_k^2 \langle \varphi_k, w\rangle_0 \quad \text{for all } k = 1, \dots, N. \tag{3.1}$$
The eigenvalues are assumed to be arranged in ascending order, such that $\lambda_1 \le \lambda_2 \le \dots \le \lambda_N$.
3.1. The reduced basis approach. For each $u \in V_h$ we define its approximate interpolation norms as follows.

Definition 3.1 (Reduced basis interpolation norms). For each $t \in \mathbb{R}_+$ let $v_N(t) \in V_h$ denote the unique solution of
$$\langle v_N(t), w\rangle_0 + t^2 \langle v_N(t), w\rangle_1 = \langle u, w\rangle_0 \tag{3.2}$$
for all $w \in V_h$. Given some real parameters $0 = t_0 < t_1 < \dots < t_r$, specified in Section 5, we introduce the reduced space
$$V_r := \mathrm{span}\{ v_N(t_0), \dots, v_N(t_r) \}. \tag{3.3}$$
The reduced basis interpolation norms on $V_r$ are defined by either of the three equivalent definitions
$$\|u\|_{E^s_r} := \|u\|_{E^s(V_r)}, \qquad \|u\|_{H^s_r} := \|u\|_{H^s(V_r)}, \qquad \|u\|_{K^s_r} := \|u\|_{K^s(V_r)}, \tag{3.4a-c}$$
where the interpolation norms on the right-hand side are taken with respect to $(V_r, \|\cdot\|_0)$ and $(V_r, \|\cdot\|_1)$. Remark 3.2. Definition 3.1 incorporates a nonlinear dependency on $u$. For simplicity, we neglect this relation in both terminology and notation throughout our discussions. We point out, however, that all $V_r$-connected constructions are subject to this dependency.
In analogy to (3.1), we denote by $(\varphi_k^r, (\lambda_k^r)^2)_{k=0}^r$ the eigenpairs of $(V_r, \|\cdot\|_0)$ and $(V_r, \|\cdot\|_1)$. The choice of $V_r$ is motivated by means of the K-method. The variational problem (3.2), which is uniquely solvable according to Lax-Milgram, is the Euler-Lagrange equation of $K^2(t; u)$, such that $v_N(t)$ coincides with the minimizer of $K^2(t; u)$. Based on a sophisticated selection of $t_1, \dots, t_r$, (3.3) aims to provide a both accurate and efficient approximation to the family of solutions $(v_N(t))_{t \in \mathbb{R}_+}$. The choice $t_0 = 0$ yields $v_N(t_0) = u \in V_r$ and is indispensable to ensure that (3.4a)-(3.4c) are well-defined. In general, the construction of $V_r$ yields an $(r+1)$-dimensional space. Lemma 3.4. Let $m$ denote the number of eigenspaces of (3.1) to which $u$ has a nontrivial contribution. Then $\dim V_r = \min\{r+1, m\}$. The proof is carried out in two steps.
The proof of Lemma 3.4 immediately reveals that the set $\{v_N(t_0), \dots, v_N(t_r)\}$ becomes linearly dependent as $r + 1 > m$. In practice, we observe two possible constellations. In the common case, where $u$ provides contributions from multiple basis vectors $\varphi_k$, solutions of (3.2) for different shift parameters $t_i$ indeed lead to an enrichment of $V_r$, as long as $r$ is small enough. If the number of nonzero Fourier components of $u$ is rather small, we might be confronted with the case where augmenting $r$ does not affect the dimension of $V_r$ any further. In this case, however, enlargement of $V_r$ is no longer necessary, as the following theorem shows.
Theorem 3.5. Let $r + 1 > m$. Then the reduced basis interpolation norms are exact, i.e., $\|u\|_{E^s_r} = \|u\|_{E^s}$, $\|u\|_{H^s_r} = \|u\|_{H^s}$, and $\|u\|_{K^s_r} = \|u\|_{K^s}$. Proof. We validate the claim with respect to the Hilbert space interpolation norm.
Thus, each eigenfunction $\varphi_j^r$ of $V_r$ is contained in one of the orthogonal subspaces from (3.7). We make matters more precise by proving the following claim: each $\varphi_j^r$ belongs to exactly one of the eigenspaces $E^u_{\lambda_i^2}$, $i \in I$, where $m = \dim(V_r)$ according to Lemma 3.4. Uniqueness is shown by contradiction. Presume the existence of $k, l \in \{0, \dots, m-1\}$ such that $\varphi_k^r, \varphi_l^r \in E^u_{\lambda_i^2}$ for some $i \in I$. Then there exists an index $i^* \in I$ such that (3.9) holds. Thus, (3.9) is contradictory to $u \in V_r$, affirming the uniqueness of the proposed index $j$. Making use of $\dim V_r = m$, we also obtain its existence. Let now $u = \sum_{i \in I} u_i$ refer to the orthogonal decomposition of $u$ according to the eigenspaces and define $\eta_0, \dots, \eta_{m-1}$ accordingly. Based on (3.8) and $u \in V_r$, we conclude that $\eta_j \in V_r$ for all $j = 0, \dots, m-1$. Moreover, the family $(\eta_j)_{j=0}^{m-1}$ is maximally linearly independent in $V_r$ and therefore a basis itself. By definition, each $\eta_j$ is a 0-orthonormal eigenfunction to the corresponding eigenvalue. With the exception of Theorem 4.6, we only consider the case $r + 1 \le m$ for the rest of this paper, such that the dimension of $V_r$ coincides with $r + 1$.

3.2. Computational aspects. The remainder of this section reviews the major ingredients to supply the reduced basis interpolation norms with a computationally applicable form. Before addressing this issue explicitly, we specify some further notation. Throughout what follows, for each $v \in V_h$ we denote by $\underline{v} \in \mathbb{R}^N$ its uniquely assigned coefficient vector, such that $v = \sum_{k=1}^N \underline{v}_k b_k$, where $(b_k)_{k=1}^N$ denotes the finite element basis from (2.6). Moreover, we introduce the $s$-th power of any symmetric matrix $Q \in \mathbb{R}^{l \times l}$, $l \in \mathbb{N}$, by diagonalization, i.e.,
$$Q^s := \Phi \Lambda^s \Phi^T,$$
where $\Phi \in \mathbb{R}^{l \times l}$ denotes the orthogonal matrix of eigenvectors of $Q$ and $\Lambda^s$ the diagonal matrix containing the $s$-th powers of all corresponding eigenvalues. If $Q$ is also positive definite, we set $Q^{-s} := (Q^s)^{-1}$. Based on these definitions, we devise an accurate procedure to compute (3.4a)-(3.4c) for one and hence for all norms efficiently. The structure of $\|u\|_{E^s_r}$ and $\|u\|_{K^s_r}$ is less amenable to direct computations, since this would require quadrature rules on the unbounded domain $\mathbb{R}_+$. A more convenient setting is provided by the eigensystem. Aiming at the computation of $V_r$, we introduce the snapshot matrix
$$\widetilde{V}_r := \big[\, \underline{v_N(t_0)}, \dots, \underline{v_N(t_r)} \,\big] \in \mathbb{R}^{N \times (r+1)}.$$
$\widetilde{V}_r$ appears to be unsuitable for direct computations: whenever $t_{j-1} \approx t_j$ for some $j = 1, \dots, r$, the matrix becomes almost singular, as $v_N(t)$ depends continuously on $t$. To circumvent this numerical difficulty, we introduce the following terminology.
Definition 3.6. The reduced basis matrix $V_r \in \mathbb{R}^{N \times (r+1)}$ is defined as the unique matrix that arises from Gram-Schmidt orthonormalization, chronologically applied to the columns of the snapshot matrix $\widetilde{V}_r$ with respect to the scalar product $\langle\cdot,\cdot\rangle_M$.
Remark 3.7. The chronological performance of the Gram-Schmidt orthonormalization in Definition 3.6 yields that the first column of $V_r$ coincides with $\beta^{-1}\underline{u}$, where $\beta := \|u\|_0$.
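A minimal sketch of the M-orthonormalization in Definition 3.6, using modified Gram-Schmidt; the matrix sizes and data are illustrative, with the identity standing in for the mass matrix.

```python
import numpy as np

def m_orthonormalize(snapshots, M):
    """Return a matrix with M-orthonormal columns, preserving column order."""
    V = snapshots.astype(float).copy()
    for j in range(V.shape[1]):
        for i in range(j):                       # remove earlier directions
            V[:, j] -= (V[:, i] @ M @ V[:, j]) * V[:, i]
        V[:, j] /= np.sqrt(V[:, j] @ M @ V[:, j])
    return V

rng = np.random.default_rng(0)
n = 12
M = np.eye(n)                                    # mass-matrix stand-in
u = rng.standard_normal(n)
# Chronological order: u (i.e. v_N(t_0) with t_0 = 0) comes first.
snapshots = np.column_stack([u] + [rng.standard_normal(n) for _ in range(3)])
Vr = m_orthonormalize(snapshots, M)
beta = np.sqrt(u @ M @ u)
```

Because the orthonormalization is chronological, the first column of the result is exactly $\beta^{-1}\underline{u}$, as stated in Remark 3.7.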
The reduced basis matrix suggests a canonical basis of $V_r$ by referring to the unique functions $b_0^r, \dots, b_r^r \in V_r \subseteq V_h$ whose assigned coefficient vectors coincide with the columns of $V_r$. By virtue of $V_r \subseteq V_h$, each $v_r \in V_r$ admits two coefficient vectors, $\underline{v}_r \in \mathbb{R}^N$ and $\widetilde{v}_r \in \mathbb{R}^{r+1}$, depending on which basis we refer to. The relation between them reads
$$\underline{v}_r = V_r \widetilde{v}_r, \qquad \widetilde{v}_r = V_r^T M \underline{v}_r,$$
where the last equality follows from the orthonormality $V_r^T M V_r = I$. We introduce the following definition.
Definition 3.8. The projected stiffness matrix $A_r \in \mathbb{R}^{(r+1) \times (r+1)}$ is defined by $A_r := V_r^T A V_r$. Theorem 3.9. Let $e_1 \in \mathbb{R}^{r+1}$ denote the first unit vector and $\beta = \|u\|_0$. Then there holds
$$\|u\|_{H^s_r}^2 = \beta^2 \, e_1^T A_r^s e_1.$$
The proof is postponed to Section 4. Theorem 3.9 highlights the beneficial structure of the proposed reduced basis algorithm. The arising problem size is of much smaller magnitude $r \ll N$, making direct computation of the eigensystem affordable.
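The reduced norm computation, and the exactness predicted when $r + 1 \ge m$, can be verified on a small stand-in problem. The sketch below assumes $M = I$ (so that M-orthonormalization reduces to a QR factorization) and an argument $u$ with $m = 3$ active modes; all names and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import eigh, solve, qr

n, s = 40, 0.3
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = np.eye(n)                          # mass stand-in; M-orthonormal = orthonormal

lam2, Phi = eigh(A)                    # full eigensystem, for the reference value
u = Phi[:, 0] + 0.5 * Phi[:, 4] + 0.25 * Phi[:, 9]        # m = 3 active modes
exact = lam2[0] ** s + 0.25 * lam2[4] ** s + 0.0625 * lam2[9] ** s

ts = [0.0, 1.0, 0.01]                  # t_0 = 0, so u itself is a snapshot
S = np.column_stack([solve(M + t ** 2 * A, M @ u) for t in ts])
Vr, _ = qr(S, mode="economic")         # reduced basis matrix (Definition 3.6)
Ar = Vr.T @ A @ Vr                     # projected stiffness (Definition 3.8)

mu, Psi = eigh(Ar)                     # small (r+1)-dimensional eigensystem
ured = Vr.T @ (M @ u)                  # = +-beta e_1 by the chronological ordering
rb_norm2 = float(ured @ (Psi @ (mu ** s * (Psi.T @ ured))))  # beta^2 e1^T Ar^s e1
```

Here $r + 1 = 3 = m$, so the reduced space coincides with the active eigenspace and the reduced norm reproduces the exact interpolation norm up to roundoff.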

Approximation of the operators
All three reduced basis interpolation norms (3.4a)-(3.4c) induce an operator, $L_{E^s}^r$, $L_{H^s}^r$, and $L_{K^s}^r$, on $V_r$, which all coincide up to $s$-dependent constants. In the same way as $L_{H^s}^r$ serves as a reduced basis surrogate for the spectral finite element operator $L_{H^s}$, the operator $L_{E^s}^r$ can be interpreted as a model order reduction of the generalized Dirichlet-to-Neumann map from [32]. The latter coincides with $L_{E^s(V_0,V_1)}$ if $V_0$ and $V_1$ are chosen appropriately. The purpose of $L_{K^s}^r$ is more of a technical than a theoretical kind: we make use of the K-method as a vital tool to prove convergence for one and hence for all three reduced basis operators. However, due to its computationally beneficial form, we stick to the equivalent spectral setting at first and refer to $L_{H^s}^r$ as our truth reduced basis approximation. In dependency of $u \in V_h$, we state the essential definition of this paper.
Definition 4.1 (Reduced basis operator). For $u \in V_h$, the reduced basis operator $L_{H^s}^r$ is defined as the induced operator of $\|\cdot\|_{H^s_r}$, i.e.,
$$L_{H^s}^r(u) := \sum_{k=0}^r (\lambda_k^r)^{2s} \langle u, \varphi_k^r\rangle_0 \, \varphi_k^r.$$
Theorem 4.3. Let $\widetilde{u} := V_r^T M \underline{u} \in \mathbb{R}^{r+1}$ denote the reduced coefficient vector of $u$. Then the reduced coefficient vector of $L_{H^s}^r(u)$ is given by $A_r^s \widetilde{u}$. Proof. Equation (3.5) combined with (3.11) and (3.12) yields the claimed representation. We catch up on the postponed proof from the previous section.
Proof of Theorem 3.9. Follows directly from Theorem 4.3 and Remark 3.7 by utilizing the substitution $\underline{u} = \beta V_r e_1$.
Recalling the inclusion $V_r \subseteq V_h$ by means of the embedding $V_r \ni v_r \mapsto v_r \in V_h$, we state the following result.
Proof. Follows directly from Theorem 4.3.
Corollary 4.5. The matrix representation of $L_{H^s}^r$ is given by $V_r A_r^s V_r^T M$. Proof. Follows directly from Corollary 4.4.
$L_{H^s}^r$ serves as an efficient approximation of $(M^{-1}A)^s$. Ahead of investigating its accuracy, we examine the arising computational costs, requiring knowledge of the map $r \mapsto (t_1, \dots, t_r)$ and its involved complexity. We address this problem in Section 5, indicating for now that its computational expense amounts to finding a lower and an upper bound for the spectrum of $M^{-1}A$. The overall complexity has to be regarded from two different perspectives, the so-called offline and online phase, and depends on the particular problem.
At first, we consider evaluations of type $u \mapsto L_{H^s}^r(u)$, which are of nonlinear character. The underlying offline phase encompasses the computation of the spectral bounds as a one-time investment. The online phase has to be performed for each argument separately. It incorporates the computation of $r$ finite element solutions $v_N(t_j)$ of the original, expensive problem size, followed by orthonormalization of the snapshot matrix to obtain the reduced basis matrix $V_r \in \mathbb{R}^{N \times (r+1)}$. Furthermore, the projected matrix $A_r = V_r^T A V_r$ has to be established in order to determine its eigensystem. The assembly of first $A_r^s$ and second $V_r A_r^s V_r^T M \underline{u}$ completes the computations. Despite its nonlinear nature, the savings gained substantially outweigh the arising inaccuracy if $r$ is of moderate size.
The offline-online decomposition is of particular interest if we aim at approximations of type $s \mapsto L_{H^s}^r(u)$ for fixed $u$ and several values of $s \in (0,1)$. In this case, the online phase reduces to the assembly of $A_r^s$ and $V_r A_r^s V_r^T M \underline{u}$, while all remaining computations are a one-time investment within the offline phase.
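The offline/online split for a sweep over $s$ can be sketched as follows. This is a hedged illustration with stand-in matrices ($M = I$ and a 1D Laplacian) and hypothetical names such as `apply_Ls`; the point is that the small eigensystem is formed once, after which each additional exponent costs only reduced-dimensional work.

```python
import numpy as np
from scipy.linalg import eigh, solve, qr

n = 30
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = np.eye(n)
x = np.arange(1, n + 1) / (n + 1)
u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # two active modes

# Offline phase: snapshots, reduced basis, small eigensystem -- computed once.
ts = [0.0, 10.0, 1.0, 0.1, 0.01]
S = np.column_stack([solve(M + t ** 2 * A, M @ u) for t in ts])
Vr, _ = qr(S, mode="economic")        # M = I, so QR yields the reduced basis
mu, Psi = eigh(Vr.T @ A @ Vr)         # eigensystem of the small matrix Ar
proj = Psi.T @ (Vr.T @ (M @ u))       # reduced coefficients of u

def apply_Ls(s):
    """Online phase per exponent s: only O(r^2) work plus one N x (r+1) product."""
    return Vr @ (Psi @ (mu ** s * proj))

actions = {s: apply_Ls(s) for s in (0.25, 0.5, 0.75)}
```

As a consistency check, for $s = 1$ the construction reproduces the full operator action $A u$ here, since $u$ and $Au$ lie in the reduced space for this choice of $u$.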
Several properties of the reduced basis interpolation norms also apply to Definition 4.1. For example, the operator counterpart of Theorem 3.5 is provided in the following.

Convergence Analysis
The goal of this section is to specify the choice of the snapshot parameters in Definition 3.1 so as to obtain optimal convergence properties. We affirm that there exists a tuple of positive numbers $t_1, \dots, t_r$, naturally arising from the analysis, such that exponential decay of the error for both norm and operator approximation is obtained. While computations are carried out in the spectral setting, the approach involving the K-functional turns out to provide a more beneficial environment for the analysis. We adopt Corollary 4.4 in this context and take advantage of the, up to a multiplicative constant, interchangeable role of the reduced basis scalar products. Before going further into detail, we state two fundamental definitions that are based on [35] and [20], involving the theory of elliptic integrals and Jacobi elliptic functions, see [1, Sections 16 & 17].
Definition 5.2. Let $0 < a < b \in \mathbb{R}_+$. For each $r \in \mathbb{N}$ we define the transformed Zolotarëv points $\bar{Z}_1, \dots, \bar{Z}_r$ on $[a, b]$ by $\bar{Z}_j := b Z_j$, $j = 1, \dots, r$, where $Z_1, \dots, Z_r$ refer to the Zolotarëv points on $[\frac{a}{b}, 1]$. As shown in the further course of action, the transformed Zolotarëv points on $[\lambda_N^{-2}, \lambda_1^{-2}]$ turn out to be perfectly tailored to our reduced basis strategy. We agree on the following nomenclature.
Definition 5.3 (Zolotarëv space). Let $0 < \lambda_L \le \lambda_1$ and $\lambda_U \ge \lambda_N$. We call $V_r$ a Zolotarëv space if $t_0 = 0$ and the squared snapshot parameters $t_1^2, \dots, t_r^2$ coincide with the transformed Zolotarëv points on $\sigma_{inv} := [\lambda_U^{-2}, \lambda_L^{-2}]$. We call $\sigma := [\lambda_L^2, \lambda_U^2]$ the spectral interval of $V_r$.
5.1. Error of the reduced basis interpolation norms. We specify some further notation. By $a \lesssim b$ we mean that there exists a constant $C \in \mathbb{R}_+$, independent of $a$, $b$, and $r$, such that $a \le Cb$. Along with this premise, we prove that Definition 5.3 provides an optimal choice for the reduced space.

Theorem 5.4 (Exponential convergence of the reduced basis interpolation norms).
Let $u \in V_h$ and let $V_r \subseteq V_h$ be a Zolotarëv space with $\sigma = [\lambda_L^2, \lambda_U^2]$ and $\delta = \lambda_L^2/\lambda_U^2$. Then the exponential error bound (5.1) holds. The constant $C$ only depends on $\delta$; its precise value coincides with $2C_*$, where $C_*$ refers to the constant from Remark 5.16.
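In practice the (transformed) Zolotarëv points can be generated with standard special-function routines. The elliptic parametrization below (Jacobi `dn` evaluated on a uniform grid in the elliptic argument) is one common form found in the literature; it is an assumption of this sketch and may differ from the exact convention of Definitions 5.1-5.2 in ordering or scaling.

```python
import numpy as np
from scipy.special import ellipk, ellipj

def zolotarev_points(delta, r):
    """r points in (delta, 1), geometrically clustered toward delta."""
    m = 1.0 - delta ** 2              # scipy uses the parameter m, not the modulus
    K = ellipk(m)
    uu = (2 * np.arange(1, r + 1) - 1) * K / (2 * r)
    _, _, dn, _ = ellipj(uu, m)       # dn decreases from ~1 to ~delta on [0, K]
    return dn

def transformed_zolotarev_points(a, b, r):
    """Transformed points on [a, b] as in Definition 5.2: b * Z_j, Z_j on [a/b, 1]."""
    return b * zolotarev_points(a / b, r)

Z = zolotarev_points(0.01, 6)
T2 = transformed_zolotarev_points(1e-4, 1.0, 6)   # e.g. sigma_inv = [lamU^-2, lamL^-2]
```

The resulting points accumulate geometrically near the left endpoint, which matches the behaviour required of the squared shifts $t_1^2, \dots, t_r^2$ on $\sigma_{inv}$.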
Verifying the claim of Theorem 5.4 is challenging, which is why we conduct the proof in several steps. Under the prescribed assumptions, we start with the first inequality of (5.1).
Corollary 5.6. There holds $\|u\|_{K^s_r} \ge \|u\|_{K^s}$. Proof. Follows directly from Lemma 5.5. In view of Corollary 5.6, the error of the norms can be traced back to the error of the K-functionals. To make matters precise, we conduct some technical preparations.
Definition 5.7. For all $t \in \mathbb{R}_+$ we denote by $v_r(t) \in V_r$ the unique minimizer of $K_r^2(t; u)$. Remark 5.8. Similarly to $v_N(t)$, utilizing the Euler-Lagrange formalism, the minimizer $v_r(t) \in V_r$ is the unique solution of the variational problem
$$\forall w_r \in V_r: \quad \langle v_r(t), w_r\rangle_0 + t^2 \langle v_r(t), w_r\rangle_1 = \langle u, w_r\rangle_0,$$
or equivalently, its reduced coefficient vector $\widetilde{v}_r(t) \in \mathbb{R}^{r+1}$ solves the linear system of equations
$$(I_r + t^2 A_r)\, \widetilde{v}_r(t) = \beta e_1,$$
where $I_r \in \mathbb{R}^{(r+1) \times (r+1)}$ represents the identity matrix.
Lemma 5.9. For all $t \in \mathbb{R}_+$ and $w_r \in V_r$ there holds
$$\|u - w_r\|_0^2 + t^2 \|w_r\|_1^2 = K^2(t; u) + \|v_N(t) - w_r\|_0^2 + t^2 \|v_N(t) - w_r\|_1^2.$$
Proof. For convenience, we omit the dependency on $t$ in the consecutive elaborations. One observes the identity by expanding both sides and invoking the variational characterization (3.2) of $v_N(t)$. The accuracy of $K_r^2(t; u)$ rests upon the approximation quality of the minimizer $v_r(t) \approx v_N(t)$, as the following result shows. Corollary 5.10. For all $t \in \mathbb{R}_+$ there holds
$$K_r^2(t; u) - K^2(t; u) = \|v_N(t) - v_r(t)\|_0^2 + t^2 \|v_N(t) - v_r(t)\|_1^2. \tag{5.3}$$
Proof. Follows directly from Lemma 5.9 with $w_r = v_r(t)$.
Dealing with (5.3) is challenging. We derive an upper bound for the error that is more amenable to analytical considerations.
Corollary 5.11. For all $t \in \mathbb{R}_+$ and $w_r \in V_r$ there holds
$$\|v_N(t) - v_r(t)\|_0^2 + t^2 \|v_N(t) - v_r(t)\|_1^2 \le \|v_N(t) - w_r\|_0^2 + t^2 \|v_N(t) - w_r\|_1^2. \tag{5.4}$$
Proof. Due to the minimization property of $v_r(t)$, there holds for all $t \in \mathbb{R}_+$ and $w_r \in V_r$
$$\|u - v_r(t)\|_0^2 + t^2 \|v_r(t)\|_1^2 \le \|u - w_r\|_0^2 + t^2 \|w_r\|_1^2.$$
Utilizing Lemma 5.9 on both sides concludes the proof.
In the subsequent, we aim to choose $w_r$ in Corollary 5.11 in a clever way, such that the upper bound in (5.4) becomes small. To this end, we make the ansatz
$$w_r := \sum_{j=0}^r \alpha_j \, v_N(t_j) \in V_r. \tag{5.5}$$
The idea of how to choose the coefficients $\alpha_j$ emerges from the following investigation.
Theorem 5.12. For all $\alpha_0, \dots, \alpha_r \in \mathbb{R}$ there holds
$$\|v_N(t) - w_r\|_0^2 + t^2 \|v_N(t) - w_r\|_1^2 = \sum_{k=1}^N (1 + t^2\lambda_k^2) \Big( \frac{1}{1 + t^2\lambda_k^2} - \sum_{j=0}^r \frac{\alpha_j}{1 + t_j^2\lambda_k^2} \Big)^2 \langle u, \varphi_k\rangle_0^2.$$
Proof. For any function of type (5.5), the representation of $v_N(t)$ in the eigenbasis together with the orthogonality of $(\varphi_k)_{k=1}^N$ yields the claimed identity for the 1-norm contribution; computations in the 0-norm can be concluded analogously, proving the claim.
Theorem 5.12 reveals that (5.5) has to be chosen in a way such that for all $\lambda_1, \dots, \lambda_N$, or more generally, for all values of $\lambda \in [\lambda_1, \lambda_N]$, the difference
$$\Big| \frac{1}{1 + t^2\lambda^2} - \sum_{j=0}^r \frac{\alpha_j}{1 + t_j^2\lambda^2} \Big| \tag{5.6}$$
becomes small. Typically, neither $\lambda_1$ nor $\lambda_N$ is known a priori, which is why we consider (5.6) with respect to the spectral interval from Definition 5.3, admitting $\lambda \in [\lambda_L, \lambda_U]$ instead. Any possible bound for (5.6) then trivially also holds on $[\lambda_1, \lambda_N]$.
In the further course of action, we derive two different candidates for the coefficients $(\alpha_j)_{j=0}^r$ in dependency of $t$. The first ensures that (5.6) becomes small for $t \ge 1$, while the second achieves the same for $t < 1$. To this end, we make a first ansatz and set $\alpha_0 = 0$. The remaining coefficients are determined by means of a rational interpolation problem, which we investigate in the subsequent lemma.
Lemma 5.13. Assume that $\kappa \in \mathbb{R}_+$, $\kappa_1, \dots, \kappa_r \in \sigma_{inv}$, and $\kappa_i \ne \kappa_j$ for $i \ne j$. Consider the space $\mathcal{R}$ of all rational functions $R$ which admit a representation with coefficients $\alpha_1, \dots, \alpha_r \in \mathbb{R}$, and define the interpolation data accordingly. Then the rational interpolation problem: find $q \in \mathcal{R}$ such that (5.7) holds for all $j \in \{1, \dots, r\}$, admits a unique solution. Proof. Let $q$ denote the unique solution of (5.7). Then there holds a representation in terms of a suitable polynomial $p$ of degree $r$. The interpolation property, together with the fundamental theorem of algebra, affirms the existence of a constant $c \in \mathbb{R}$ such that the interpolation error is a multiple of $\prod_{j=1}^r (\kappa - \kappa_j)/(\kappa + \kappa_j)$.
Altogether, we obtain for all $x \in \sigma$ the bound (5.8). Minimizing the maximal deviation of the upper bound in (5.8) leads to a min-max problem of the following kind: find $\kappa_1, \dots, \kappa_r \in \sigma_{inv}$ such that the maximal deviation in (5.10) is minimized. Closely related problems have been investigated in [35] and [20]. We summarize the essential results. Consider the slightly modified problem: find $\kappa_1, \dots, \kappa_r \in [\delta, 1]$ minimizing
$$\max_{x \in [\delta, 1]} \prod_{j=1}^r \Big| \frac{x - \kappa_j}{x + \kappa_j} \Big|. \tag{5.11}$$
Zolotarëv and Gonchar showed that its unique solution is given by the Zolotarëv points $Z_1, \dots, Z_r$ on $[\delta, 1]$. They further showed that there exists a positive constant $C$, depending only on $\delta$, such that the exponential bound (5.12) holds. The product in (5.12), considered as a function of $x$, features $r + 1$ points of alternance and has the least deviation from zero on $[\delta, 1]$ among all functions of this form. We set the results for problem (5.11) in correspondence with (5.10).
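The effect described by (5.11)-(5.12) is easy to observe numerically. In the hedged sketch below, the clustered points come from the elliptic `dn` parametrization, which is assumed (not guaranteed) to match the convention of [35]; even so, they yield a far smaller deviation of the product on $[\delta, 1]$ than equispaced points.

```python
import numpy as np
from scipy.special import ellipk, ellipj

delta, r = 1e-3, 8
m = 1.0 - delta ** 2
K = ellipk(m)
# Candidate Zolotarev points versus naive equispaced points on [delta, 1].
zolo = ellipj((2 * np.arange(1, r + 1) - 1) * K / (2 * r), m)[2]
unif = np.linspace(delta, 1.0, r)

x = np.geomspace(delta, 1.0, 20001)
def deviation(points):
    """max over [delta, 1] of prod_j |(x - k_j)/(x + k_j)|."""
    prod = np.ones_like(x)
    for k in points:
        prod *= np.abs((x - k) / (x + k))
    return float(prod.max())

dev_zolo, dev_unif = deviation(zolo), deviation(unif)
```

For these (illustrative) parameters the elliptically distributed points reduce the maximal deviation by more than an order of magnitude, reflecting the exponential bound (5.12).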
Within the proof of Theorem 5.14, we have additionally shown an identity which, due to (5.12), immediately reveals the following result.
Let $\mu$ denote the modulus and $K$ the elliptic integral from Definition 5.1. The asymptotic formula $K(\mu) \sim \ln\big(4/\sqrt{1-\mu^2}\big)$ as $\mu \to 1$ is known to hold, see e.g. [1, Section 17]. This yields the asymptotic behaviour of $C_*$ in dependency of $\delta$. Along with the choice $\lambda_L := \lambda_1$ and $\lambda_U := \lambda_N$, this reveals that the constant $C_*$ deteriorates only at a logarithmic rate as the condition number $\lambda_N^2/\lambda_1^2 = \delta^{-1}$ increases.
We eventually return to the original problem of interest in (5.6).
Assembling results from above finally enables us to derive an upper bound for the error in the reduced basis K-functional.
Proof. Theorem 5.12 combined with Lemma 5.17 yields the claim. As it turns out in the further course of action, the upper bound derived in Theorem 5.18 is only sharp enough in the case $t \ge 1$, but not for $t < 1$. We overcome this inconvenience by subtle adjustments of the interpolation problem (5.7), such that its arising solution leads to the desired properties.
Theorem 5.19. Let $C_*$ denote the constant from Remark 5.16. Then an exponential bound analogous to Theorem 5.18 holds for all $t \in \mathbb{R}_+$. Proof. In analogy to Lemma 5.13, we consider the following rational interpolation problem: for $\kappa \in \mathbb{R}_+$ and $\kappa_1, \dots, \kappa_r \in \sigma_{inv}$ pairwise distinct, find $q \in \bar{\mathcal{R}}$ such that the interpolation conditions hold, where $\bar{\mathcal{R}}$ denotes the linear span of $\mathcal{R}$ enriched by constant functions. We remark that $\alpha_0 = 0$ is no longer enforced. Similarly to the proof of Lemma 5.13, one affirms the corresponding error representation. The transformed Zolotarëv points on $\sigma_{inv}$ ensure exponential convergence of the product. Proceeding in the same manner as before, one concludes the proof.
We now have all the required tools to prove exponential convergence of the reduced basis interpolation norms in the K-setting.

5.2. Error of the reduced basis operator. The reduced basis interpolation norms provide exponential decay of the error, granting good chances that similar results are valid with respect to the induced operators. Indeed, convergence of the operators is based on the results from Section 5.1. The core of this paper is summarized in Theorem 5.20. Proof. Due to (3.2), there holds the asserted identity, whence the claim follows. Proof. Follows directly from Lemma 5.21.
There holds, for all admissible parameters, the stated bound, which validates the conjecture by virtue of Corollary 5.10.
Theorem 5.27. Let $C_*$ denote the constant from Remark 5.16. Then the bound (5.16) holds. Moreover, if $s \in (0, \frac{1}{2})$, the 2-norm of $u$ in (5.16) can be replaced by $\|u\|_1$. Proof. We prove that for any $w \in V_h$ and sufficiently small $\varepsilon > 0$ satisfying $4s + 2\varepsilon < 4$, there holds the asserted estimate, which directly implies (5.16). Moreover, if $s < \frac{1}{2}$, we can choose $\varepsilon < 1 - 2s$ to verify the latter claim and conclude the proof. To this end, let $\varepsilon \in (0, 2 - 2s)$ if $s \ge \frac{1}{2}$, and $\varepsilon \in (0, 1 - 2s)$ otherwise. Applying Theorem 5.19 followed by the Cauchy-Schwarz inequality yields the desired bound. Define $i^* := \min\{k \in \{1, \dots, N\} : \lambda_k \ge 1\}$ to observe the estimate for the modes below this threshold; similarly, we obtain the bound for the rest of the sum. Remark 5.29. The results of Theorems 5.4 and 5.20 can be improved in the sense that the lower and upper bounds, $\lambda_L^2$ and $\lambda_U^2$, only have to be chosen with respect to the minimal and maximal eigenvalues whose corresponding eigenfunctions contribute nontrivially to the linear combination of the argument $u$. This leads to improvements of the constant $C_*$ and hence to faster convergence.

Numerical examples
In the subsequent, we present several numerical examples in order to validate the results from Section 5. To make matters precise, let $\Omega \subseteq \mathbb{R}^2$ be an open, bounded domain with Lipschitz boundary $\partial\Omega$. We define $V_0 := (L_2(\Omega), \|\cdot\|_{L_2})$ and $V_1 := (H_0^1(\Omega), \|\nabla\cdot\|_{L_2})$ and consider the arising interpolation space $[V_0, V_1]_s$. It turns out that the induced operator $L_{H^s(V_0,V_1)}$ of $\|\cdot\|_{H^s(V_0,V_1)}$ coincides with the spectral fractional Laplacian subject to homogeneous Dirichlet boundary conditions, i.e.,
$$(-\Delta)^s u = \sum_{k=1}^\infty \lambda_k^{2s} \langle u, \varphi_k\rangle_{L_2} \varphi_k,$$
where $(\varphi_k, \lambda_k^2)_{k=1}^\infty \subseteq H_0^1(\Omega) \times \mathbb{R}_+$ denote the $L_2$-orthonormal eigenfunctions and eigenvalues of $V_0$ and $V_1$. This setting is utilized to adequately study and analyze the performance of our method. All tests were implemented within the open source finite element packages Netgen and NGSolve, see [29] and [30]. Example 1. Consider the unit square $\Omega = (0,1)^2$ and a finite element space $V_h \subseteq H_0^1(\Omega)$ of polynomial order $p = 3$ on a quasi-uniform, triangular mesh $\mathcal{T}_h$, together with its arising eigenbasis $(\varphi_k)_{k=1}^N$ and $N = 1753$. We randomly choose $u \in \mathrm{span}\{\varphi_1, \dots, \varphi_n\}$, $n = 300$, such that the exact interpolation norm is given by $\|u\|_{H^s}^2 = \sum_{k=1}^n \lambda_k^{2s} \langle u, \varphi_k\rangle_{L_2}^2$. On $\Omega$, the exact eigenvalues $(\nu_k^2)_{k=1}^\infty$ of $-\Delta$ are given in closed form in terms of $\nu_k^2 = \nu_{i,j}^2 = \pi^2(i^2 + j^2)$. Consistent with Remark 5.29, we set $\lambda_L^2 := \nu_1^2 \le \lambda_1^2$ and $\lambda_U^2 := \nu_n^2 + 1$, such that $\lambda_U^2 \ge \lambda_n^2$. In accordance with Theorem 5.4, Figure 1 affirms the exponential decay of the error $\|u\|_{H^s_r}^2 - \|u\|_{H^s}^2$ in $r$. Furthermore, we observe, as indicated in Section 5, that the error increases as $s$ is augmented. Figure 2A illustrates the corresponding error of the reduced basis operator, $\|L_{H^s}^r(u_h) - L_{H^s} u_h\|_{L_2}$, from Theorem 5.20. Here, the exact operator action $L_{H^s} u_h$ has been replaced by $L_{H^s}^{r_*}(u_h)$, where $r_* \in \mathbb{N}$ is taken large enough to neglect the arising inaccuracy. Again, the predicted exponential convergence rates become visible.
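The qualitative behaviour of Example 1 can be reproduced with a dependency-free stand-in. Everything below is an assumption for illustration: the paper's experiments use NGSolve on the unit square, whereas here a 1D finite-difference Laplacian plays the role of the stiffness matrix, $M = I$, and the shifts come from the elliptic `dn` parametrization of the Zolotarëv points.

```python
import numpy as np
from scipy.linalg import eigh, solve, qr
from scipy.special import ellipk, ellipj

n, s = 60, 0.5
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = np.eye(n)
lam2, Phi = eigh(A, M)                           # discrete eigenpairs
rng = np.random.default_rng(1)
u = Phi[:, :30] @ rng.standard_normal(30)        # random combination of modes
exact2 = float(((Phi.T @ u) ** 2 * lam2 ** s).sum())   # exact squared H^s norm

lamL2, lamU2 = lam2[0], lam2[-1]                 # spectral bounds

def rb_norm2(r):
    """Squared reduced basis H^s norm with Zolotarev-distributed shifts."""
    delta = lamL2 / lamU2
    m = 1.0 - delta ** 2
    Kc = ellipk(m)
    dn = ellipj((2 * np.arange(1, r + 1) - 1) * Kc / (2 * r), m)[2]
    ts = np.concatenate([[0.0], np.sqrt(dn / lamL2)])   # t_0 = 0 plus r shifts
    S = np.column_stack([solve(M + t ** 2 * A, M @ u) for t in ts])
    Vr, _ = qr(S, mode="economic")               # M = I: QR = M-orthonormalization
    mu, Psi = eigh(Vr.T @ A @ Vr)
    c = Psi.T @ (Vr.T @ (M @ u))
    return float((mu ** s * c ** 2).sum())

errs = [abs(rb_norm2(r) - exact2) for r in (2, 4, 8)]
```

With these (assumed) parameters the error typically drops by orders of magnitude between $r = 2$ and $r = 8$, qualitatively mirroring the exponential decay reported in Figure 1.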
It is evident that the performance of our method relies on the quality of $\sigma$. However, as Theorem 5.20 shows, the exponential approximation property of the reduced basis operator responds rather insensitively to degraded choices of the spectral bounds. Therefore, very rough estimates for the spectrum of $-\Delta$ on the respective domain suffice to achieve satisfying results, as illustrated in Figure 2B.

Appendix

Proof of Theorem 2.1. For each $k \in \mathbb{N}$, consider the minimization of
$$\int_0^\infty y^\alpha \big( \lambda_k^2 \psi_k(y)^2 + \psi_k'(y)^2 \big) \, dy \tag{7.1}$$
over all admissible profiles. It is well-known that the minimizer $\psi_k$ coincides with the solution of a Bessel-type differential equation, which admits the representation $\psi_k(y) = c_k y^s K_s(\lambda_k y)$.
Integration by parts and incorporating the asymptotic behaviour of $K_s$ yields the bound (2.2). We conclude that $\mathrm{tr}$ defines a linear, bounded operator that satisfies (2.2), such that its range is contained in $[V_0, V_1]_s^H$. To evidence surjectivity, we observe that for each $u \in [V_0, V_1]_s^H$ the function $\mathcal{U}(y) := \sum_{k=1}^\infty u_k \varphi_k \psi(\lambda_k y)$ is contained in $\mathcal{V}(V_0, V_1; y^\alpha)$ and satisfies $\mathrm{tr}\, \mathcal{U} = u$.
Moreover, $\mathcal{U}$ satisfies (2.5) by construction of $\psi$. It follows that $\mathcal{U}$ is the α-harmonic extension of $u$. Due to its minimization property from Lemma 2.3, we further have
$$\|u\|_{E^s(V_0,V_1)} = \|\mathcal{U}\|_{\mathcal{V}(V_0,V_1;y^\alpha)} = d_s \|u\|_{H^s(V_0,V_1)}.$$
To evidence the second equality, we refer to [13,Theorem A.2].