Inverses of SBP-SAT finite difference operators approximating the first and second derivative

The scalar, one-dimensional advection equation and heat equation are considered. These equations are discretized in space, using a finite difference method satisfying summation-by-parts (SBP) properties. To impose the boundary conditions, we use a penalty method called simultaneous approximation term (SAT). Together, this gives rise to two semi-discrete schemes where the discretization matrices approximate the first and the second derivative operators, respectively. The discretization matrices depend on free parameters from the SAT treatment. We derive the inverses of the discretization matrices, interpreting them as discrete Green's functions. In this direct way, we also find out precisely which choices of SAT parameters make the discretization matrices singular. In the second derivative case, it is shown that if the penalty parameters are chosen such that the semi-discrete scheme is dual consistent, the discretization matrix can become singular even when the scheme is energy stable. The inverse formulas hold for SBP-SAT operators of arbitrary order of accuracy. For second and fourth order accurate operators, the inverses are provided explicitly.


Introduction
Consider the time-dependent partial differential equation (1a) below, where L represents a linear differential operator and f(x) is a forcing function. We assume that some suitable initial condition and, for the moment homogeneous, boundary conditions are given such that we have a well-posed problem. Applying the method of lines, that is, discretizing first in space while keeping time continuous, yields a system of ordinary differential equations (1b), where we refer to L as the discretization matrix.
We first look at the scalar advection equation and thereafter at the heat equation, both in one spatial dimension. Thus L approximates either the first or the second derivative operator, including boundary treatments. In this paper, L is obtained using the SBP-SAT finite difference method. This class of finite difference methods is based on difference operators fulfilling summation-by-parts (SBP) properties, and is modified by the penalty technique simultaneous approximation term (SAT) for treating the boundary conditions. The SBP operators were first developed for first derivatives [20,26] and later for second derivatives [7,23], and are designed to facilitate the derivation of energy estimates. A means to impose boundary conditions without destroying these properties is to use SAT [6]. The SATs included in L contain free parameters. We follow the common practice of determining these parameters using the energy method, such that (1b) is guaranteed to be time-stable. Thereafter, any remaining degrees of freedom in the SATs can be used to make the scheme dual consistent. Dual consistency is advantageous when computing functionals of the solution, since the order of accuracy of functionals from dual consistent schemes can be higher compared to those from non-dual consistent schemes [17]. For more details about SBP-SAT, see [28,13].
Thanks to the SBP-SAT properties, the discretization matrix can be factorized as L = H^{-1}K, where H is a symmetric, positive definite matrix that has the role of a quadrature rule, see [18]. Now consider the steady version of (1a), Lu = f. Its solution u(x) may be represented as in (2a) below, where G is the Green's function. The steady version of (1b) is Lv = f. Solving for v yields (2b).
With H's role as a quadrature rule in mind, we can see a clear similarity between (2a) and (2b), and realize that K −1 resembles the Green's function. It makes sense to refer to K −1 as a discrete Green's function.
A finite difference analogue of the Green's function was introduced already in the fundamental article [9]. Since then, discrete Green's functions have appeared sporadically in the literature, see for example [10,8] and references therein. For example, in [4] (and correspondingly in [9] for two-dimensional problems) the discrete formula approximating (2a) is scaled with the spatial mesh size h, which then corresponds closely to (2b). However, since traditional finite difference stencils usually do not have an assigned quadrature rule in the same sense as the SBP operators, the term "discrete Green's function" often refers to L^{-1} rather than to K^{-1}, for example in [8,25,5].
In the above-mentioned articles, the standard way of enforcing boundary conditions, injection, has been used instead of SAT (for descriptions of these two boundary methods, see for example [28]). In [12], the first and second derivatives were approximated using an SBP-SAT finite volume method, and the inverses analogous to K^{-1} were derived and used for analysing errors. Here, we derive formulas for K^{-1} corresponding to the first and second derivatives as well. However, as an extension of the results in [12], our formulas hold for arbitrary orders of accuracy, and in the second derivative case we consider general Robin boundary conditions instead of only Dirichlet boundary conditions.
The inverses are full matrices and are therefore probably not competitive for solving systems Lv = f directly, compared to fast solvers for banded matrices. It is, however, often advisable to use preconditioning to improve the convergence of iterative methods [15]. A preconditioning matrix P should ideally approximate the inverse of L in some sense, and knowledge about the structure of the inverses could, speculatively, be used when designing preconditioning matrices. If P is a sparse approximate inverse, the computations are cheap, but preconditioners P may also be essentially dense matrices, as for example the fundamental solution preconditioners considered in [5].
The paper is organized as follows: In Section 2, we look at the semi-discrete scheme approximating the advection equation. The matrix K associated with ∂/∂x is denoted Q̃, and its inverse is presented in Theorem 2.1. In Section 3, we consider the heat equation, thus approximating ∂²/∂x². The related matrix K, denoted A, is inverted in Theorem 3.1. The SAT parameters are chosen to give stability and dual consistency, and additionally it is of interest to know if some choices of SAT parameters result in a singular discretization matrix L. In the second derivative case, it turns out that an energy stable scheme can actually have a singular L if the scheme is also dual consistent. Some relations between stability, dual consistency and a singular discretization matrix are discussed in Section 3.3. We also discuss the relations between two different ways of showing energy stability, in Section 3.4. The paper is summarized in Section 4.

The first derivative
Consider the scalar advection equation with a Dirichlet boundary condition at the inflow boundary, that is valid for t ≥ 0, with initial condition u(x, 0) = u 0 (x). The forcing function f (x, t), the initial data u 0 (x) and the boundary data g L (t) are known functions.
We call (3) well-posed if it has a unique solution and is stable (can be bounded by data). Techniques for showing existence and uniqueness can be found in for example [19,16]. We focus on showing stability, since we will derive a corresponding stable discrete problem later. We use the energy method: we multiply the partial differential equation in (3) by u and integrate over the spatial domain. Thereafter, we use integration by parts and apply the boundary condition. For simplicity, we consider the homogeneous case, that is, with the data f = 0 and g_L = 0. This yields the growth rate, where ‖u‖² = ∫₀^ℓ u² dx and where we have used that (u²)_t = 2uu_t. In the homogeneous case, the growth rate thus becomes d/dt ‖u‖² ≤ 0. Integrating this in time yields the energy estimate ‖u‖² ≤ ‖u₀‖², and the solution is thus bounded. Since (3) is a one-dimensional hyperbolic problem, it is also possible to show strong well-posedness, i.e., that u is bounded by the data f, g_L and u₀. See [19,16] for different definitions of well-posedness.

The semi-discrete scheme
We first discretize in space, on the interval x ∈ [0, ℓ], using n + 1 equidistant grid points x_i = ih, where h = ℓ/n and i = 0, 1, . . . , n. Using the SBP-SAT finite difference method, we obtain a semi-discrete scheme (4) approximating (3), where v(t) = [v_0, v_1, . . . , v_n]^T is the approximation of the continuous solution u(x, t), and where f = [f(x_0, t), f(x_1, t), . . . , f(x_n, t)]^T is the restriction of f(x, t) to the grid. In the same way, we let the initial data be v(0) = [u_0(x_0), u_0(x_1), . . . , u_0(x_n)]^T. The matrix D_1 approximates the first derivative operator ∂/∂x and fulfills the SBP-properties (5), where e_L = [1, 0, . . . , 0]^T and e_R = [0, . . . , 0, 1]^T. By the notation H > 0, we mean that the matrix H is positive definite. As mentioned in the introduction, H has the role of a quadrature rule and ‖v‖²_H ≡ v^T Hv approximates the L²-norm of u(x, t), see [18]. The scalar σ_L determines the strength of the SAT, and will be chosen below such that the scheme (4) is energy stable and dual consistent.
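As a concrete illustration of these ingredients, the following sketch builds the diagonal-norm (2,1) operator (the standard second order accurate operator from [26]; the construction is well known, but the variable names are ours) and verifies the SBP-properties numerically:

```python
import numpy as np

def sbp21_first_derivative(n, ell=1.0):
    """(2,1) order accurate SBP first-derivative operator D1 = H^{-1} Q
    on n + 1 equidistant grid points over [0, ell]."""
    h = ell / n
    # Diagonal norm H: the trapezoidal quadrature rule
    H = h * np.eye(n + 1)
    H[0, 0] = H[n, n] = h / 2
    # Q: central differences in the interior, one-sided boundary closures
    Q = 0.5 * (np.eye(n + 1, k=1) - np.eye(n + 1, k=-1))
    Q[0, 0], Q[n, n] = -0.5, 0.5
    return np.linalg.solve(H, Q), H, Q

n = 8
D1, H, Q = sbp21_first_derivative(n)
eL = np.eye(n + 1)[:, 0]
eR = np.eye(n + 1)[:, n]
# SBP property: Q + Q^T = e_R e_R^T - e_L e_L^T, with H symmetric positive definite
assert np.allclose(Q + Q.T, np.outer(eR, eR) - np.outer(eL, eL))
assert np.linalg.eigvalsh(H).min() > 0
# Consistency: D1 differentiates constants and linear functions exactly
x = np.linspace(0.0, 1.0, n + 1)
assert np.allclose(D1 @ np.ones(n + 1), 0.0)
assert np.allclose(D1 @ x, 1.0)
```

Higher order operators follow the same pattern with wider boundary closures; see [26] for their coefficients.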

Stability and dual consistency
To show energy stability, we multiply (4) by v^T H from the left and use the relations (5). We thereafter add the transpose, and we consider f = 0 and g_L = 0, just as in the continuous case. This yields the discrete energy rate. For a dual consistent scheme, we need σ_L = −1, see [17,3].
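The stability condition can be inspected numerically. The sketch below assumes that the SAT enters such that the homogeneous energy rate is d/dt ‖v‖²_H = −v^T(Q̃ + Q̃^T)v with Q̃ = Q − σ_L e_L e_L^T (our notation), so that energy stability corresponds to Q̃ + Q̃^T being positive semi-definite; the classical bound σ_L ≤ −1/2 for this scheme then appears directly:

```python
import numpy as np

def rate_matrix(n, sigma_L):
    """M = Qt + Qt^T with Qt = Q - sigma_L eL eL^T for the (2,1) operator.
    The homogeneous energy rate is assumed to be d/dt ||v||_H^2 = -v^T M v."""
    Q = 0.5 * (np.eye(n + 1, k=1) - np.eye(n + 1, k=-1))
    Q[0, 0], Q[n, n] = -0.5, 0.5
    eL = np.eye(n + 1)[:, 0]
    Qt = Q - sigma_L * np.outer(eL, eL)
    return Qt + Qt.T

n = 6
# Energy stable: sigma_L <= -1/2 gives a positive semi-definite rate matrix
for s in (-0.5, -1.0, -5.0):
    assert np.linalg.eigvalsh(rate_matrix(n, s)).min() >= -1e-12
# Not energy stable: sigma_L > -1/2 gives a negative eigenvalue
assert np.linalg.eigvalsh(rate_matrix(n, 0.0)).min() < 0
```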

The inverse of the discretization matrix
We first rewrite (4) as a system involving the matrix Q̃ found in (7). We identify Q̃ as the first derivative version of K discussed in the introduction. The second order accurate version of Q̃ was inverted in [12]. Inspired by those results, we make a similar ansatz and derive Q̃^{-1} for arbitrary orders of accuracy. The result is given in Theorem 2.1.
Theorem 2.1. Consider the (n + 1) × (n + 1) matrices Q from (5) and Q̃ found in (7). The structures of Q and Q̃ are displayed in (8), where q is an n × 1 vector and Q̄ is an n × n matrix. The inverse of Q̃ is given in (9), with the components defined in (10). Proof of Theorem 2.1. We aim to show that Q̃ Q̃^{-1} = I, where I is the (n + 1) × (n + 1) identity matrix. Using Q̃ from (7) and Q̃^{-1} from (9), we compute the product. Note that D_1 1 = 0, since D_1 in (5) is a consistent difference operator. Hence, Q1 = 0. Furthermore, e_L^T G_1 = 0 since the first row of G_1 consists of zeros. These relations, the fact that e_L^T 1 = 1, and the structures of the components in (8) and (10) yield the desired identity (here Ī denotes the n × n identity matrix).
The existence of G_1 and b in (10), and consequently the validity of Theorem 2.1, rely on the assumption that Q̄ is invertible. In the (2,1) order accurate case (where by "(2,1) order accurate" we refer to a matrix D_1 with second order accuracy in the interior finite difference stencil and first order accuracy at the boundaries), the inverse of Q̄ is derived and presented in Section A.1, which directly proves its existence. The same is done for the inverse of the (4,2) order accurate operator, which is presented in Section A.2.
Higher order operators, on the other hand, have free parameters. For example, for the diagonal norm (6,3) order accurate version of D_1 described in [26], x_1 is a free parameter. In this case, we find numerically that Q̄ is singular when x_1 ≈ 0.69. Remark 2.3. For the steady version of (3), that is u_x = f with u(0) = g_L, we have a representation where G is a Green's function. Starting from the steady version of the scheme, using (7) and (9) as well as the relations b^T e_L = 1 and G_1 e_L = 0 deduced from (10), we obtain v = g_L 1 + Q̃^{-1}Hf.
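Remark 2.3 can be illustrated numerically. The sketch below assumes that the steady scheme reads Q̃v = Hf − σ_L e_L g_L with Q̃ = Q − σ_L e_L e_L^T (our reading of (7)), uses the (2,1) operator with the dual consistent SAT, and checks the error against a manufactured solution:

```python
import numpy as np

def solve_steady_advection(n, f, gL, sigma_L=-1.0, ell=1.0):
    """Solve Qt v = H f(x) - sigma_L * gL * eL, the assumed steady SBP-SAT
    discretization of u_x = f, u(0) = gL, with the (2,1) operator."""
    h = ell / n
    x = np.linspace(0.0, ell, n + 1)
    H = h * np.eye(n + 1)
    H[0, 0] = H[n, n] = h / 2
    Q = 0.5 * (np.eye(n + 1, k=1) - np.eye(n + 1, k=-1))
    Q[0, 0], Q[n, n] = -0.5, 0.5
    eL = np.eye(n + 1)[:, 0]
    Qt = Q - sigma_L * np.outer(eL, eL)
    return x, np.linalg.solve(Qt, H @ f(x) - sigma_L * gL * eL)

# Manufactured solution u = sin(x) on [0, 1]: f = cos, gL = 0
errors = []
for n in (20, 40, 80):
    x, v = solve_steady_advection(n, np.cos, 0.0)
    errors.append(np.abs(v - np.sin(x)).max())
assert errors[0] < 0.1                     # small error on the coarsest grid
assert errors[2] < errors[1] < errors[0]   # error shrinks under refinement
```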
Recall from the introduction that K^{-1} = Q̃^{-1} resembles G. The version of Q̃^{-1} found in (34) in Appendix A.1 (which corresponds to the second order accurate operator) shows this explicitly. The dual consistent choice σ_L = −1 is optimal in the sense that it cancels the oscillations such that (Q̃^{-1})_{i,j} = 1 for j ≤ i; however, (Q̃^{-1})_{i,j} = (−1)^{i+j} ≠ 0 for i ≤ j. Letting σ_L → −∞ instead, which can be interpreted as mimicking the injection treatment, results in Q̃^{-1} = G_1.
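The structure just described can be confirmed in code: with σ_L = −1, the matrix with ones on and below the diagonal and (−1)^{i+j} strictly above it is exactly the inverse of Q̃ for the (2,1) operator (a check of our reading of (34)):

```python
import numpy as np

n = 9
# (2,1) operator: Q with one-sided closures, SAT with sigma_L = -1
Q = 0.5 * (np.eye(n + 1, k=1) - np.eye(n + 1, k=-1))
Q[0, 0], Q[n, n] = -0.5, 0.5
eL = np.eye(n + 1)[:, 0]
Qt = Q + np.outer(eL, eL)              # Qt = Q - sigma_L eL eL^T, sigma_L = -1
# Conjectured inverse: 1 for j <= i, (-1)^(i+j) for j > i
i, j = np.indices((n + 1, n + 1))
G = np.where(j <= i, 1.0, (-1.0) ** (i + j))
assert np.allclose(Qt @ G, np.eye(n + 1))
assert np.allclose(G @ Qt, np.eye(n + 1))
```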

The second derivative
Now consider the scalar heat equation (11) with Robin boundary conditions, valid for t ≥ 0, with initial condition u(x, 0) = u_0(x). The forcing function f(x, t), the initial data u_0(x) and the boundary data g_{L,R}(t) are known functions. We multiply the partial differential equation in (11) by u and integrate the result over the spatial domain, with the data set to f = 0 and g_{L,R} = 0. Thereafter, using integration by parts and the boundary conditions yields the growth rate. For a decaying growth rate, we need α_{L,R} β_{L,R} ≥ 0.

The semi-discrete scheme
Using the SBP-SAT finite difference method, we obtain a scheme (12) approximating (11), where v, f, H and e_{L,R} are described as in Section 2.1. The matrix D_2 approximates the second derivative operator and fulfills the SBP-properties (13). The vectors d_L and d_R are consistent finite difference stencils approximating the first derivative, see [7]. Two common categories of D_2 operators are wide-stencil and narrow-stencil operators. Wide-stencil operators can be factorized as D_2 = D_1², and the term "narrow" describes finite difference schemes with a minimal stencil width [24].
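For concreteness, the sketch below assembles the narrow-stencil (2,0) operator described in Appendix D.1 (the simplest D_2 fulfilling the SBP-properties; its boundary rows are inconsistent, as noted in Appendix B.1) and verifies the properties used throughout this section:

```python
import numpy as np

def sbp20_second_derivative(n, ell=1.0):
    """Narrow-stencil (2,0) second-derivative SBP operator: returns
    (D2, H, A, dL, dR) with H D2 = -A + eR dR^T - eL dL^T and A = A^T >= 0."""
    h = ell / n
    H = h * np.eye(n + 1)
    H[0, 0] = H[n, n] = h / 2
    # A: (1/h) * tridiag(-1, 2, -1) with corner entries 1/h
    A = (2 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)) / h
    A[0, 0] = A[n, n] = 1 / h
    dL = np.zeros(n + 1); dL[:2] = [-1 / h, 1 / h]    # forward difference
    dR = np.zeros(n + 1); dR[-2:] = [-1 / h, 1 / h]   # backward difference
    eL = np.eye(n + 1)[:, 0]
    eR = np.eye(n + 1)[:, n]
    D2 = np.linalg.solve(H, -A + np.outer(eR, dR) - np.outer(eL, dL))
    return D2, H, A, dL, dR

n = 8
D2, H, A, dL, dR = sbp20_second_derivative(n)
x = np.linspace(0.0, 1.0, n + 1)
one = np.ones(n + 1)
# A is symmetric positive semi-definite, as needed for the energy method
assert np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() >= -1e-12
# Interior rows of D2 are the standard (1, -2, 1)/h^2 stencil
assert np.allclose(D2[1:-1] @ x**2, 2.0)
# dL, dR consistently approximate the first derivative at the boundaries
assert np.isclose(dL @ one, 0.0) and np.isclose(dL @ x, 1.0)
assert np.isclose(dR @ one, 0.0) and np.isclose(dR @ x, 1.0)
# Relations used in Appendix B: D2 annihilates constants and linears here
assert np.allclose(D2 @ one, 0.0) and np.allclose(D2 @ x, 0.0)
```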
The penalty parameters σ_{L,R} and τ_{L,R} in (12) are scalars that will be further specified and discussed in the next sections. Now, we use (13) to rewrite (12) as (14), with the components given in (15). We identify A as the second derivative version of the matrix K from the introduction.

Stability
To show energy stability, we multiply (12) by v^T H from the left and use the relations (13). We thereafter add the transpose, and let f = 0 and g_{L,R} = 0. This yields (16), where we need to show that d/dt ‖v‖²_H ≤ 0. We will determine the stability limits of σ_{L,R} and τ_{L,R} using a procedure sometimes called the borrowing technique [7,14,2,22,27,29,1]. The idea is to "borrow" a maximum amount γ of "positivity" from A, more precisely as in (17). Inserting the relation in (17) into (16), we obtain two quadratic forms. For stability, we need both of the matrices in these quadratic forms to be negative semi-definite. This is fulfilled if the conditions in (18) hold.
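The borrowing parameter γ can be computed numerically. The sketch below assumes that (17) has the form Ã_γ = A − hγ(d_L d_L^T + d_R d_R^T), consistent with the proof in Appendix C.1, and brackets the largest admissible γ by bisection on the smallest eigenvalue, using the (2,0) operator of Appendix D.1 as a test case (for which the simple structure of A gives γ = 1):

```python
import numpy as np

def borrow_gamma(A, dL, dR, h, iters=80):
    """Largest gamma such that A - h*gamma*(dL dL^T + dR dR^T) >= 0,
    found by bisection on the smallest eigenvalue (assumed form of (17))."""
    B = h * (np.outer(dL, dL) + np.outer(dR, dR))
    min_eig = lambda g: np.linalg.eigvalsh(A - g * B).min()
    lo, hi = 0.0, 1.0
    while min_eig(hi) >= -1e-9:        # grow an upper bracket
        lo, hi = hi, 2.0 * hi
    for _ in range(iters):             # bisect between feasible and infeasible
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if min_eig(mid) >= -1e-9 else (lo, mid)
    return lo

# (2,0) operator ingredients (cf. Appendix D.1)
n = 8
h = 1.0 / n
A = (2 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)) / h
A[0, 0] = A[n, n] = 1 / h
dL = np.zeros(n + 1); dL[:2] = [-1 / h, 1 / h]
dR = np.zeros(n + 1); dR[-2:] = [-1 / h, 1 / h]

gamma = borrow_gamma(A, dL, dR, h)
B = h * (np.outer(dL, dL) + np.outer(dR, dR))
assert gamma > 0
assert abs(gamma - 1.0) < 1e-3                         # the exact value here
assert np.linalg.eigvalsh(A - 1.1 * gamma * B).min() < -1e-9  # beyond gamma
```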

Dual consistency
To make the scheme (12) dual consistent, we first note that the operator ∂²/∂x² (including boundary conditions) is a symmetric operator and that the matrix A must be symmetric to mimic this. From (15), this leads to the conditions (19), where δ_{L,R} = 0 for dual consistent choices of penalty parameters. The relations in (19), with δ_{L,R} = 0, can also be derived from the penalty parameters of the scalar problem in [11]. For a background and more thorough descriptions of dual consistency, see [17].
Note that now, using the dual consistency parameters δ_{L,R} defined in (19), the three stability requirements in (18) can be reformulated as in (20).

The inverse of the discretization matrix
We consider the steady version of (14), that is, Av = Hf. We derive the inverse of A and present the result in Theorem 3.1. Theorem 3.1. Consider the matrix A from (15), which depends on Ā and d_{L,R} in (13) and on the boundary related scalars σ_{L,R}, τ_{L,R}, α_{L,R} and β_{L,R}. Let the parts of A be denoted as in (21), where a_L, a_R and a_C are scalars. The inverse of A is given by (22), which depends on α_{L,R} and β_{L,R} (that is, on the choices of boundary conditions in (11)), on the choices of penalty parameters σ_{L,R} and τ_{L,R} in (12) and on the duality parameters δ_{L,R} in (19), as well as on the scalars defined in (23)-(25). Proof of Theorem 3.1. The proof is given in Appendix B.
Note that the quantities in (23), and thus the validity of Theorem 3.1, rely on the existence of Ā^{-1}. In Appendix D, the explicit values of Ā^{-1}, as well as of G_2, b_{L,R}, ξ_{L,R} and ξ_C, are provided for the (2,0), (2,1) and (4,2) order accurate narrow-stencil operators and the (2,0) order accurate wide-stencil operator. This directly proves the existence of Ā^{-1} for these operators. Higher order accurate operators have free parameters, but empirically we can draw the conclusion that Ā^{-1} must exist at least for the parameter choices in [23], since the operators therein have been applied successfully for many years.
Given the existence of Ā^{-1}, we note that A in (22) is singular if and only if Σ in (24) is singular. The matrix Σ is in turn singular if either of two relations holds. The first condition is related to the continuous boundary conditions, and makes the matrix singular if Neumann boundary conditions are imposed on both boundaries, i.e. if α_L = α_R = 0. The second condition has to do with the choice of penalty parameters, and leads us to the following corollary of Theorem 3.1: Corollary 3.2. The matrix A, defined in (15), is singular when the penalty parameters fulfill (27). Proof of Corollary 3.2. We make the ansatz σ_{L,R} = −τ_{L,R} ξ_{L,R} − ε_{L,R} with some unknown scalars ε_{L,R}. Inserting this into (27) above gives ε_L ε_R = τ_L τ_R ξ_C², which is fulfilled for all pairs ε_L = τ_L |ξ_C| ζ and ε_R = τ_R |ξ_C|/ζ with arbitrary choices of ζ ≠ 0. If ξ_C, τ_L or τ_R is equal to zero, it is enough if either ε_L = 0 or ε_R = 0.
The requirements on A and d_{L,R} in Theorem 3.1 are only that A is symmetric, that Ā^{-1} exists (as discussed above) and that D_2 and d_{L,R} in (13) are consistent, such that the relations (43) and (44) in Appendix B hold. In addition, we will assume that D_2 is constructed such that the left and right boundary closures are equivalent. This implies that A is a centrosymmetric matrix. This additional assumption leads to ξ_L = ξ_R (most easily seen by expressing the quantities in (25) and thereafter using the fact that the inverse of a centrosymmetric matrix is also centrosymmetric). For later reference we define ξ_T in (28), and we make the following assumption: Assumption 3.3. The penalty is chosen to be equally strong on both boundaries. Choosing an equal penalty strength on both boundaries corresponds to having ζ = 1 in Corollary 3.2. If in addition equivalent boundary closures are assumed, such that ξ_L = ξ_R, we can use ξ_T ≡ ξ_{L,R} + |ξ_C| from (28). This simplifies the condition of singularity in Corollary 3.2 to σ_{L,R} = −ξ_T τ_{L,R}.
Remark 3.4. The inverse of A mimics a fundamental solution; compare the Green's function of the corresponding continuous problem. Recalling that the matrix H has the role of a quadrature rule, we see the clear similarity to the time-independent, homogeneous version of (14), v = A^{-1}Hf. The resemblance is more obvious if the penalty dependent part in (22) is ignored, since then v = G_2 Hf. For the second order accurate approximation, G_2 is exact in the grid points, see (64). This is identical to the result noted for the classical finite difference method using injection instead of SAT, compare [4,25].

Relations between stability, singularity and dual consistency
We take a look at the relation between the stability requirements on the scheme (12) and the conditions that make its discretization matrix singular. First, we note the following: Theorem 3.5. Consider γ in (17) and ξ_T in (28). It holds that hγ = 1/ξ_T.
Proof. Theorem 3.5 is proven in Appendix C.1.
A consequence of Theorem 3.5 is that the stability demands in (20) can be written as in (29), with δ_{L,R} from (19). We will see that the penalty can be chosen such that we have energy stability and a singular discretization matrix at the same time: From Assumption 3.3 we know that the matrix A is singular when σ_{L,R} = −τ_{L,R} ξ_T. Inserting this into (29), the third stability demand becomes δ²_{L,R} ≤ 0, which is only fulfilled if the penalty parameters are chosen in a dual consistent way. This means that if (12) is an energy stable scheme, it must also be dual consistent to risk having a singular discretization matrix. Note though that even if the scheme is dual consistent, a singular discretization matrix is avoided by choosing σ_{L,R} ≠ −τ_{L,R} ξ_T. To be precise, simultaneously having σ_{L,R} = −ξ_T/(β_{L,R} ξ_T + α_{L,R}) and τ_{L,R} = 1/(β_{L,R} ξ_T + α_{L,R}) should be avoided, since this particular choice makes δ_{L,R} = 0, fulfills the stability demands, but at the same time makes A singular.
In Assumption 3.3, one can argue that ζ = −1 gives just as equal a penalty strength as ζ = 1, which would instead simplify Corollary 3.2 to σ_{L,R} = −(ξ_{L,R} − |ξ_C|) τ_{L,R}. However, this choice does not give energy stability and is therefore not interesting for our further discussions. Besides, |ξ_C| tends to be very small, so in practice it does not make much of a difference.

3.4 Relations to the stability demands in [11]

In Section 3.1.1, the "borrowing technique" is used for deriving the stability restrictions on the penalty parameters. In [11], a different approach (inspired by [17,3], where wide-stencil discretizations are rewritten as first order systems) is used for showing stability, and here we are going to comment on some connections between the two methods.
In [11], it is assumed that A can be decomposed as in [7], that is, as in (30), and the strategy for showing stability is to modify the approximation of u_x from Sv to the auxiliary variable w = Sv + M^{-1}e_L ρ_L + M^{-1}e_R ρ_R. In [11], ρ_{L,R} are penalty-like terms proportional to the solution deviations from boundary data, but other options are possible. Computing w^T M w makes additional terms available to the boundary terms in (16), where q_{L,R}, q_C and q_T are defined in (31). The "borrowing technique", on the other hand, makes the terms −hγ v^T(d_L d_L^T + d_R d_R^T)v available for the boundary terms in (16).
Although these two approaches of showing stability are different, they are closely related. In Lemma 3.6 we formalize this relation and show that q_T = 1/(hγ). Lemma 3.6. Assume that A in (13) can be factorized as in (30) with M > 0, and define q_T as stated in (31). Next, consider (17), where the parameter γ is defined as the maximum number such that Ã_γ ≥ 0 still holds. Then it holds that hγ = 1/q_T.
Proof. Lemma 3.6 is proven in Appendix C.2.
For wide-stencil operators, S = D_1 and M = H in (30), and the parameters q_{L,R} and q_C in (31) are easily obtained since M is known. For narrow-stencil operators, on the other hand, M and the interior of S are not uniquely defined. In [11], the strategy was (under the contrary assumption that S is non-singular and M is singular) to compute the quantities q̄_{L,R} and q̄_C in (32) instead, where M̄ ≡ S^{-T}(A + p e_L e_L^T)S^{-1}, with p ≠ 0 being a perturbation parameter. For wide-stencil operators though, it can easily be checked numerically that q̄_{L,R} ≠ q_{L,R} and q̄_C ≠ q_C. This is somewhat alarming, but it can as easily be checked that it still holds that q̄_T = q_T. We confirm this analytically in Theorem 3.8 below, and the use of q̄_T in [11] is thus justified. First though, we note the following: Lemma 3.7. The quantities q̄_{L,R} and q̄_C defined in (32) are identical to the quantities ξ_{L,R} and ξ_C in (25).
Proof. Lemma 3.7 is proven in Appendix C.3.
Thus, in summary, we have the following: Theorem 3.8. Assume that A in (13) can be factorized as in (30) with M > 0, and define q_T as stated in (31). Next, assume that M is singular instead, with M ≥ 0, and define q̄_T as stated in (32). Then it holds that q̄_T = q_T.
Proof. From Lemma 3.6 we have that q_T = 1/(hγ), and from Theorem 3.5 we have that 1/(hγ) = ξ_T. Combining Lemma 3.7 with the definitions in (32) and (28), we deduce that ξ_T = q̄_T. All in all, this gives q_T = 1/(hγ) = ξ_T = q̄_T, concluding the proof.
For an example, see the derived values of q_{L,R,C} and q̄_{L,R,C} for the wide-stencil (2,0) order operator in Appendix D.4. As a numerical confirmation, in Table 1 we compare the values of h q̄_T from [11] to the values of γ computed in [22,29]. In Table 1 though, it appears that h q̄_T ≥ 1/γ. This is because the listed γ are computed for n → ∞, and are as such slightly too large for very coarse meshes.

Conclusions
We discretize the scalar advection equation and the heat equation in one spatial dimension, using the SBP-SAT finite difference method. This gives rise to two semi-discrete schemes of the form v_t + Lv = f, where the discretization matrix L approximates either the first derivative or the second derivative, including the treatment of the boundary conditions. The matrix L is, due to properties of the SBP-SAT method, associated with a positive definite matrix H such that L = H^{-1}K, where the inverse of K is interpreted as a discrete Green's function. We derive the general forms of these inverses, and provide explicit examples of K^{-1} for some operators L of second and fourth order accuracy. The boundary treatment SAT induces free parameters in L. We first determine these parameters such that the semi-discrete schemes are energy stable. Any remaining degrees of freedom can be used to make the schemes dual consistent. Another important question is whether the discretization matrices L are invertible. Conveniently, the formula for K^{-1} reveals precisely which combinations of SAT parameters make L singular.
In the second derivative case, it turns out that for one very particular choice of SAT parameters, L can become singular even when the scheme is energy stable. Here, we can avoid this and instead choose the parameters such that the scheme is energy stable, dual consistent and guaranteed to have an invertible discretization matrix (and consequently a unique solution). However, for more complex problems it might not be feasible to prove that the discretization matrix is invertible, not even for energy stable schemes.
Last, we take a look at two supposedly different approaches to proving energy stability. Curiously, they are closely related, leading to the same demands on the SAT parameters.

A.2 The (4,2) order accurate operator
In [26], we find D_1 with fourth order interior accuracy and the associated H. Together with (5) and (7), this gives us Q̃ and q, which we identify as indicated in (8). We are now looking for a matrix G such that Q̄G = I. Let G be composed of columns as G = [g_1, g_2, . . . , g_n], with g_j = [g_{1,j}, g_{2,j}, . . . , g_{n,j}]^T.

A.2.1 The inverse with an even number of grid points n
To make the expressions manageable, we simplify by assuming that n is an even number. In this particular case, when solving the 7 × 7 system above, we obtain the entries g_{i,1}, where C_n and D_n are integers given in (37) below. Note that D_n ≥ 1 for even n, so there is no risk of division by zero. Moreover, we obtain the quantities B_j, which are integers for integer j and are specified below, together with

D_n = (ν_{n/2−1} + ν_{n/2−2})/10,  C_n = (9ν_{n/2−1} + 4ν_{n/2−2})/10,  (37)

where ν_j = φ^j + φ^{−j}. For convenience, all the g_{i,1} presented above will be restated in (38) and (39), wherein we will also make use of A_j defined above. We use the same strategy for the other columns j > 1. For 2 ≤ j ≤ n − 2, we need two different versions of the constants c_{1,2,3,4}, depending on whether we consider g_{i,j} for i ≤ j or for i ≥ j. Thus, for every 2 ≤ j ≤ n − 2, we have eight unknown constants, as well as the three remaining unknowns g_{1,j}, g_{n−1,j} and g_{n,j}. The first three and the last four rows in the system above give us seven conditions. From the rows i = j − 1, j, j + 1, we get three more conditions, and in addition we demand that the two versions of g_{j,j} are identical. All in all, this gives a linear system with eleven unknowns g_{1,j}, c_1^u, c_2^u, c_3^u, c_4^u, c_1^l, c_2^l, c_3^l, c_4^l, g_{n−1,j} and g_{n,j}, and eleven conditions. We still consider even n. Solving for the unknowns and inserting c_{1,2,3,4}^u and c_{1,2,3,4}^l into their respective ansatz, we eventually end up with g_{i,j} for the inner columns, presented below in (40) and (41). Furthermore, repeating the procedure for the last two columns, we obtain g_{i,j} for j = n − 1 and j = n, given in (38) and (39). To simplify the expressions in (39), (40) and (41), we have used A_j in (37). In summary, when n is even, the inverse of Q̄ is given by Q̄^{-1} = (g_{i,j})_{n×n} with g_{i,j} as described in (38)-(41), for example g_{n,1} = 12C_n/(59D_n), g_{n,n−1} = 12C_n/(59D_n) and g_{n,n} = 0.
For 2 ≤ i ≤ n − 2, we obtain the corresponding entries, while for 2 ≤ j ≤ n − 2 we have their counterparts; finally, the interior elements follow. In the expressions above we have used D_n, C_n, B_j and A_j defined in (37). Next, we recall the structure in (8), identify q in (35), and compute q^T Q̄^{-1} as (q^T Q̄^{-1})_j = (59/96) g_{1,j} − (1/12) g_{2,j} − (1/32) g_{3,j}. This gives (42), where we have used the structures of g_{i,j} in (38), (39), (40) and (41), together with a few auxiliary relations. Inserting Q̄^{-1} from (38), (39), (40) and (41), and (42) into (9) and (10) yields the inverse of Q̃ in the (4,2) order accurate case (for n even).

B Proof of Theorem 3.1
Theorem 3.1 states that the inverse of A from (15) is equal to the expression in (22). This is shown in Section B.2; first, however, we present some useful relations.

B.1 Preliminaries
Note that D_2 1 = 0 and D_2 x = 0, since D_2 approximates the second derivative operator (these two relations actually hold also for the inconsistent (2,0) order accurate operators in Sections D.1 and D.4). Furthermore, d_{L,R}^T consistently approximate the first derivative, so that d_{L,R}^T 1 = 0 and d_{L,R}^T x = 1. Hence the relations (43) hold. Combining the above relations with A = −HD_2 + e_R d_R^T − e_L d_L^T from (13) gives (44). Now, we define the additional (n − 1) × 1-vectors 1̄ = [1, 1, . . . , 1]^T and x̄ = h[1, 2, . . . , n − 1]^T (they are shorter versions of 1 and x in Theorem 3.1). With these new variables and with the notation from (21), the relations (44) can be expressed as (45). Given that A is correctly constructed, such that Ā is invertible, this leads to the relations (46). Now, multiplying Ā from (21) by G_2 from (23) and using the relations (45), we get (47), where Ī is the (n − 1) × (n − 1) identity matrix. From (23) we have b_L = 1 − x/ℓ − G_2 d_L and b_R = x/ℓ + G_2 d_R, and using the relations (44), (47) and (43), we arrive at (48). The vectors e_{L,R} pick out the first and last elements of the vectors they multiply, such that (49) holds. Finally, from (23) we have (50). We are now ready to prove Theorem 3.1.

B.2 Confirmation of Equation (22) with (23), (24) and (25)

We multiply A in (15) by the expression for A^{-1} in (22), with the aim of showing that A A^{-1} = I indeed holds. In the first step, (22) yields (51).
We start by looking at the first term in (51). First using (15), followed by the relations in (47) and (50), and thereafter just rearranging the terms, we arrive at (52). Next, we look at the part Γ in (51). After rewriting A using (15), we use the relations in (48), (44), (49), (43) and (25). Thereafter, the resulting terms are rearranged. These steps are shown below in (53).
We note that the last 4 × 4-matrix is nothing but Σ from (24). Inserting the results from (52) and (53) into (51) gives us A A^{-1} = I, concluding the proof.
C Proofs of the relations between ξ_T, γ, q_T and q̄_T

Below we present the proofs of Theorem 3.5 and the Lemmas 3.6 and 3.7.

C.1 Proof of Theorem 3.5
We aim to relate γ in (17) to ξ_T in (28). Note that the latter quantity relies on ξ_L = ξ_R in (25). To emphasize this, we introduce ξ_D = ξ_{L,R}. We start by defining the quadratic form (54), using (48) and (25). The (n + 1) × 1-vector v is arbitrary, and for the scalars ρ_{L,R} we make an ansatz in terms of d_{L,R}^T v, where t_{L,R} and s_{L,R} are scalars yet to be determined. Inserted into (54), this yields (55), where we have defined the quantities in (56). Using the "borrowing technique", γ is the maximum value such that Ã_γ ≥ 0 still holds, referring to γ and Ã_γ from (17). For (55) to correspond to (17), we need z_L = z_R and z_C = 0, and under these constraints we must minimize z_{L,R}. To get there, we first define new variables. Inserted into z_L and z_R in (56), these relations give us expressions in which we have used that ξ_D = ξ_L = ξ_R. Note that for fixed values of z_L and z_R, the pairs (x_L, y_L) and (x_R, y_R) describe ellipses. Reformulated in parametric form, they are given by (57). To enforce z_L = z_R, we simply let r_L = r_R = r. This gives us (58). Next, we need to fulfill the requirement z_C = 0. Inserting the relations (56), and thereafter using (57) with r_{L,R} = r, leads to an equation for z_C. Now, we want z_C = 0 while keeping r² to a minimum (in order to in turn minimize z_{L,R}). We achieve this by a particular choice of r². It can be shown that ξ_D² − ξ_C² ≥ 0 (by inserting (48) into (25) and using that A^T = A ≥ 0); therefore, the absolute value is only needed for ξ_C. Inserting this choice of r² into z_{L,R} in (58), and thereafter using (28) with ξ_{L,R} = ξ_D, we obtain z_{L,R} = −1/ξ_T. We have thereby shown that, with z_C = 0 and z_L = z_R in (55), 1/ξ_T is the maximum amount of "positivity" in the form of (d_L d_L^T + d_R d_R^T) that we can extract from A. Inserting z_C = 0 and z_{L,R} = −1/ξ_T into (55), noting that v is arbitrary, and comparing with (17), we deduce that hγ = 1/ξ_T.

C.2 Proof of Lemma 3.6
We define w = Sv + M^{-1} e_L ρ_L + M^{-1} e_R ρ_R and use the relations in (30) to compute, where q_{L,R,C} are defined in (31) and where ρ_{L,R} are any scalars. It is assumed that M > 0, and hence that w^T M w ≥ 0. Note that the right-hand side of (60) has the same form as (54), but with ξ_{L,R,C} replaced by q_{L,R,C}. Thus, by following the same procedure, we obtain the relation corresponding to (59), with q_T defined in (31). Comparing with (17), we see that hγ = 1/q_T.

C.3 Proof of Lemma 3.7
In [11], it was shown that q̃_{L,R} and q̃_C in (32) can be computed as, with K_0 defined (using our notation from (21)) as. Now we want to show that the quantities in (61) are equal to those in (25). Applying the formula for inverses of block matrices to the above definition of K_0, and thereafter using the relation for a_R in (46), we obtain. Comparing (62) with (23) and (45), we note that K_0 = G_2 + x x^T/ℓ. Inserting this into (61), and thereafter using (50) together with d_{L,R}^T 1 = 0 and d_{L,R}^T x = 1, yields exactly the relations in (25).

D Explicit inverses of the second derivative operator
We provide the explicit expressions of Ā^{-1}, b_{L,R}, ξ_{L,R} and ξ_C for the (2,0), (2,1) and (4,2) order accurate narrow-stencil operators and the (2,0) order accurate wide-stencil operator. By the notation "(2,0) order accurate operator", we refer to a matrix D_2 which has order 2 in the interior finite difference stencil and order 0 at the boundaries.

D.1 The narrow-stencil (2,0) order operator
The simplest possible example of a second derivative operator D_2 fulfilling the SBP properties in (13) is the narrow-stencil (2,0) order operator; its corresponding matrix A was inverted already in [12] for the special case α_{L,R} = 1, β_{L,R} = 0 and τ_{L,R} = 0. It is given below, together with its associated vectors d_{L,R}.
The operator D_2 is also associated with H = h diag(1/2, 1, 1, ..., 1, 1, 1/2), and using (13) we obtain the (n+1) × (n+1) matrix A given below. The (n−1) × (n−1) matrix Ā is identified using (21). Gauss-Jordan elimination then leads to Ā^{-1} as given below. Inserting this Ā^{-1} into (23), and using that x_i = ih, yields G_2. Note the striking similarity to the continuous Green's function in Remark 3.4. Next, by noting the structure of d_{L,R} in (63) and identifying the first and last columns of Ā^{-1} as h(1 − x/ℓ) and h x/ℓ, we can compute G_2 d_{L,R} and consequently b_{L,R} in (23). Furthermore, inserting these b_{L,R} and d_{L,R} from (63) into (25), we obtain ξ_{L,R} and ξ_C.
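The closed-form inverse can be checked numerically. The sketch below (illustration only, not part of the paper; n, ℓ and all variable names are chosen freely) builds the interior matrix Ā = (1/h) tridiag(−1, 2, −1) of the narrow-stencil (2,0) operator and verifies that its inverse equals the discrete Green's function x_i(ℓ − x_j)/ℓ for i ≤ j:

```python
import numpy as np

# Numerical check (illustration only; n and ell are arbitrary) that the
# interior matrix of the narrow-stencil (2,0) operator,
# Abar = (1/h) * tridiag(-1, 2, -1), has the inverse
# (Abar^{-1})_{ij} = x_i (ell - x_j) / ell for i <= j,
# i.e. the discrete analogue of the continuous Green's function.
n, ell = 8, 1.0
h = ell / n
x = h * np.arange(1, n)  # interior grid points x_1, ..., x_{n-1}

Abar = (1 / h) * (2 * np.eye(n - 1)
                  - np.eye(n - 1, k=1)
                  - np.eye(n - 1, k=-1))

# Closed-form candidate: G_ij = min(x_i, x_j) * (ell - max(x_i, x_j)) / ell
G = np.minimum.outer(x, x) * (ell - np.maximum.outer(x, x)) / ell

assert np.allclose(Abar @ G, np.eye(n - 1))
# First column of G equals h * (1 - x/ell), as stated in the text
assert np.allclose(G[:, 0], h * (1 - x / ell))
```

The min/max form is just a symmetric way of writing x_i(ℓ − x_j)/ℓ for i ≤ j.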

D.2 The narrow-stencil (2,1) order operator
The narrow-stencil (2,1) order operator (see Section C.1 in [23]) has the same matrices H and A as the (2,0) order operator, and hence its G_2 is given by (64). However, the vectors d_{L,R} differ; for the (2,1) order operator they are given below. We can compute G_2 d_L as, and repeating the procedure for G_2 d_R and thereafter using (23), we arrive at b_{L,R}. Finally, we use (25) to compute ξ_{L,R} and ξ_C, where ξ_C = 0 holds for n ≥ 4.

D.3 The narrow-stencil (4,2) order operator
The operator D_2 with fourth order interior accuracy and diagonal norm H, see Section C.2 in [23], is associated with the difference operators given below. Using (13) and identifying the interior of A according to (21), we obtain Ā. We now look for a matrix Ḡ such that Ḡ = Ā^{-1}, and make the ansatz Ḡ = [g_1 g_2 ... g_{n−1}], g_j = [g_{1,j} g_{2,j} ... g_{n−1,j}]^T.
For ĀḠ = Ī to hold, Ā g_j = e_j must be fulfilled for all j = 1, 2, ..., n−1, where the vector e_j = [0 ... 0 1 0 ... 0]^T is non-zero only in its jth element. From the middle rows of Ā g_j, given the inner structure of Ā, we thus need, where δ_{i,j} is the Kronecker delta. Hence, the fourth order linear homogeneous recurrence relation g_{i−2,j} − 16 g_{i−1,j} + 30 g_{i,j} − 16 g_{i+1,j} + g_{i+2,j} = 0 has to be fulfilled by almost all g_{i,j}.
The explicit solution to this recurrence relation has the form g_{i,j} = c_1 + c_2 i + c_3 ψ^i + c_4 ψ^{−i}, where ψ = 7 + √48 ≈ 13.9 and where c_{1,2,3,4} are j-dependent constants. To be precise, g_{i,j} has this form for 2 ≤ i ≤ n−2, and we need two versions of the j-dependent constants, that is, g_{i,j} = c^u_1 + c^u_2 i + c^u_3 ψ^i + c^u_4 ψ^{−i} for 2 ≤ i ≤ j and g_{i,j} = c^l_1 + c^l_2 i + c^l_3 ψ^i + c^l_4 ψ^{−i} for j ≤ i ≤ n−2. For each j = 2, 3, ..., n−2, we thus have eight unknown constants c^u_{1,2,3,4} and c^l_{1,2,3,4}, as well as the two remaining unknowns g_{1,j} and g_{n−1,j}. These are determined by the first three and the last three rows of the requirement Ā g_j = e_j, which gives us six conditions. From the rows i = j−1, j, j+1 we get three more conditions, and in addition we demand that the two versions of g_{j,j} are identical. Altogether, this leads to a 10 × 10 system of equations which we solve using Gauss-Jordan elimination. The boundary columns j = 1 and j = n−1 must be treated separately, in a similar manner. All in all, these steps lead to the elements of the inverse (Ā^{-1})_{i,j} = g_{i,j}, which is thus similar to the second order version of Ā^{-1}, plus an additional term κ_{i,j}. This additional correction term is, for 2 ≤ i, j ≤ n−2, given by, where
P_i = ((51 − 2ψ^{−1})ψ^{i−2} − (51 − 2ψ)ψ^{2−i}) / (ψ − ψ^{−1}),
Q_n = (ψ^{n−4}(2ψ^{−1} − 51)² − ψ^{4−n}(2ψ − 51)²) / (ψ − ψ^{−1}).
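As a quick sanity check (not from the paper), one can verify numerically that ψ = 7 + √48 and ψ^{−1} are roots of the characteristic polynomial of the recurrence, and that the stated solution form satisfies the recurrence for arbitrary constants:

```python
import numpy as np

# Sanity check (not from the paper): psi = 7 + sqrt(48) and 1/psi are roots
# of the characteristic polynomial z^4 - 16 z^3 + 30 z^2 - 16 z + 1
# = (z - 1)^2 (z^2 - 14 z + 1) of the recurrence relation above.
psi = 7 + np.sqrt(48)
for z in (1.0, psi, 1 / psi):
    assert abs(z**4 - 16 * z**3 + 30 * z**2 - 16 * z + 1) < 1e-9

# Hence g_i = c1 + c2*i + c3*psi**i + c4*psi**(-i) satisfies the recurrence
# for any constants c1..c4 (checked here for one arbitrary choice).
c1, c2, c3, c4 = 0.3, -1.2, 1e-6, 5.0
g = lambda i: c1 + c2 * i + c3 * psi**i + c4 * psi**(-i)
for i in range(2, 7):
    res = g(i - 2) - 16 * g(i - 1) + 30 * g(i) - 16 * g(i + 1) + g(i + 2)
    assert abs(res) < 1e-8
```

The double root z = 1 produces the c_1 + c_2 i part of the solution, and the pair ψ, ψ^{−1} the exponentially growing and decaying parts.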
From (23), we have that the interior of G_2 is given by the Ā^{-1} described above. Next, we use d_L from (65) to compute G_2 d_L and thereafter b_L from (23), where we have used that Q_n + 2P_{n−3} = 51P_{n−2}. Then b_R is given by (b_R)_i = (b_L)_{n−i}. We also compute the scalars from (25). Evaluating hξ_{L,R} and hξ_C explicitly for some values of n, see Table 2, we see that these numbers correspond exactly (to machine precision) to q_L h and q_C h tabulated in [11]. This serves as a numerical verification of Lemma 3.7 and, indirectly, of Theorem 3.1.
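The identity Q_n + 2P_{n−3} = 51P_{n−2} used above is easy to confirm numerically. The sketch below (not from the paper) assumes P_i and Q_n are the displayed fractions with common denominator ψ − ψ^{−1}:

```python
import numpy as np

# Numerical confirmation (not from the paper) of the identity
# Q_n + 2 P_{n-3} = 51 P_{n-2}, assuming P_i and Q_n denote the displayed
# fractions with common denominator psi - psi^{-1}.
psi = 7 + np.sqrt(48)

def P(i):
    return ((51 - 2 / psi) * psi**(i - 2)
            - (51 - 2 * psi) * psi**(2 - i)) / (psi - 1 / psi)

def Q(n):
    return (psi**(n - 4) * (2 / psi - 51)**2
            - psi**(4 - n) * (2 * psi - 51)**2) / (psi - 1 / psi)

for n in range(6, 16):
    assert np.isclose(Q(n) + 2 * P(n - 3), 51 * P(n - 2))
```

Matching the coefficients of ψ^{n−4} and ψ^{4−n} on both sides shows the identity reduces to 51 − 2ψ^{−1} + 2ψ^{−1} = 51 and 51 − 2ψ + 2ψ = 51, so it holds for all n.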

D.4 The wide-stencil (2,0) order operator
The wide-stencil (2,0) order accurate operator D_2, which is obtained by squaring the (2,1) order accurate operator D_1 from (33), is given below together with d_{L,R} = D_1^T e_{L,R}. The operator is also associated with the same H = h diag(1/2, 1, 1, ..., 1, 1, 1/2) as the other operators with second order accuracy, and from this we can compute the (n+1) × (n+1) matrix A. Identifying the parts of A according to (21) gives us the (n−1) × (n−1) matrix Ā. The inverse of this matrix Ā is given below; that is, the discrete Green's function in (23) becomes the expression shown. Thus the discrete Green's function produced by the wide operator oscillates, jumping between 0 and 2 times the exact value. Next, using (23), we obtain the vectors b_{L,R}. Last, we compute the (2,0) order wide-stencil version of (25), as ξ_L = ξ_R = 2/h − 1/ℓ, ξ_C = −(−1)^n/ℓ.
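The oscillation has a simple explanation: the wide interior stencil (u_{i−2} − 2u_i + u_{i+2})/(4h²), obtained by squaring the central first-derivative stencil, annihilates the highest-frequency grid mode u_i = (−1)^i, whereas the narrow stencil damps it strongly. A minimal numerical illustration (not from the paper; grid parameters arbitrary):

```python
import numpy as np

# Illustration (not from the paper; grid parameters arbitrary) of why the
# wide-stencil Green's function oscillates: the wide second-derivative
# stencil (u_{i-2} - 2 u_i + u_{i+2}) / (4 h^2) annihilates the
# highest-frequency grid mode u_i = (-1)^i, while the narrow stencil
# (u_{i-1} - 2 u_i + u_{i+1}) / h^2 does not.
h = 0.1
i = np.arange(2, 10)          # interior indices, away from the boundaries
u = (-1.0) ** np.arange(12)   # the oscillatory pi-mode

wide = (u[i - 2] - 2 * u[i] + u[i + 2]) / (4 * h**2)
narrow = (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2

assert np.allclose(wide, 0.0)                  # invisible to the wide stencil
assert np.allclose(narrow, -4 * u[i] / h**2)   # strongly damped by the narrow one
```

Because the wide operator cannot see this mode, its inverse can (and does) carry an O(1) component of it, which is the jumping between 0 and twice the exact Green's function.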
In the wide-stencil case, q_{L,R} = e_{L,R}^T H^{−1} e_{L,R} = 2/h and q_C = e_{L,R}^T H^{−1} e_{R,L} = 0 can be computed directly. We recall that q̃_{L,R,C} = ξ_{L,R,C} and note that q̃_{L,R} ≠ q_{L,R} and q̃_C ≠ q_C, but still q̃_T = q_T = 2/h. Compare with the discussion in Section 3.4.
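The direct computation of q_{L,R} and q_C is easy to reproduce; a small sketch (not from the paper, with n and h chosen arbitrarily) using H = h diag(1/2, 1, ..., 1, 1/2):

```python
import numpy as np

# Direct computation (illustration only; n and h arbitrary) of
# q_{L,R} = e_{L,R}^T H^{-1} e_{L,R} = 2/h and q_C = e_L^T H^{-1} e_R = 0
# for the diagonal norm H = h * diag(1/2, 1, ..., 1, 1/2).
n, h = 10, 0.1
Hdiag = h * np.ones(n + 1)
Hdiag[0] = Hdiag[-1] = h / 2

eL = np.zeros(n + 1); eL[0] = 1.0   # left boundary selection vector
eR = np.zeros(n + 1); eR[-1] = 1.0  # right boundary selection vector

qL = eL @ (eL / Hdiag)   # H is diagonal, so H^{-1} v = v / Hdiag
qR = eR @ (eR / Hdiag)
qC = eL @ (eR / Hdiag)

assert np.isclose(qL, 2 / h) and np.isclose(qR, 2 / h) and qC == 0.0
```

Only the boundary weights h/2 of H enter, which is why q_{L,R} = 2/h regardless of n.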