Continuum and Discrete Initial-Boundary Value Problems and Einstein’s Field Equations
Abstract
Many evolution problems in physics are described by partial differential equations on an infinite domain; therefore, one is interested in the solutions to such problems for a given initial dataset. A prominent example is the binary black-hole problem within Einstein’s theory of gravitation, in which one computes the gravitational radiation emitted from the inspiral of the two black holes, their merger and ringdown. Powerful mathematical tools can be used to establish qualitative statements about the solutions, such as their existence, uniqueness, continuous dependence on the initial data, or their asymptotic behavior over large time scales. However, one is often interested in computing the solution itself, and unless the partial differential equation is very simple, or the initial data possesses a high degree of symmetry, this computation requires approximation by numerical discretization. When solving such discrete problems on a machine, one is faced with a finite limit to computational resources, which leads to the replacement of the infinite continuum domain with a finite computer grid. This, in turn, leads to a discrete initial-boundary value problem. The hope is to recover, with high accuracy, the exact solution in the limit where the grid spacing converges to zero with the boundary being pushed to infinity.
The goal of this article is to review some of the theory necessary to understand the continuum and discrete initial-boundary value problems arising from hyperbolic partial differential equations and to discuss its applications to numerical relativity; in particular, we present well-posed initial and initial-boundary value formulations of Einstein’s equations, and we discuss multi-domain high-order finite difference and spectral methods to solve them.
1 Introduction
This review discusses fundamental tools from the analytical and numerical theory underlying the Einstein field equations as an evolution problem on a finite computational domain. The process of reaching the current status of numerical relativity after decades of effort has driven the community not only to use state-of-the-art techniques, but also to extend and work out new approaches and methodologies of its own. We discuss some of the theory involved in setting up the problem and numerical approaches for solving it. Its scope is rather broad: it ranges from analytical aspects related to the well-posedness of the Cauchy problem to numerical discretization schemes guaranteeing stability and convergence to the exact solution.
At the continuum level, emphasis is placed on setting up the initial-boundary value problem (IBVP) for Einstein’s equations properly, by which we mean obtaining a well-posed formulation that is flexible enough to incorporate coordinate conditions allowing for long-term, accurate and stable numerical evolutions. Here, the well-posedness property is essential, in that it guarantees the existence of a unique solution, which depends continuously on the initial and boundary data. In particular, this ensures that small perturbations in the data do not get arbitrarily amplified. Since such small perturbations do appear in numerical simulations because of discretization errors or finite machine precision, if such unbounded growth were allowed, the numerical solution would not converge to the exact one as resolution is increased. This picture is at the core of Lax’s historical theorem, which implies that consistency of a numerical scheme is not sufficient for its solution to converge to the exact one. Instead, the scheme also needs to be numerically stable, a property which is the discrete counterpart of well-posedness of the continuum problem.
While the well-posedness of the Cauchy problem in general relativity in the absence of boundaries was established a long time ago, only relatively recently has the IBVP been addressed and well-posed problems formulated. This is mainly due to the fact that the IBVP presents several new challenges, related to constraint preservation, the minimization of spurious reflections, and well-posedness. In fact, it is only very recently that such a well-posed problem has been found for a metric-based formulation used in numerical relativity, and there are still open issues that need to be sorted out. It is interesting to point out that the IBVP in general relativity has driven research which has led to well-posedness results for second-order systems with a new large class of boundary conditions, which, in addition to Einstein’s equations, are also applicable to Maxwell’s equations in their potential formulation.
At the discrete level, the focus of this review is mainly on designing numerical schemes for which fast convergence to the exact solution is guaranteed. Unfortunately, few or no general results are known for nonlinear equations and, therefore, we concentrate on schemes for which stability and convergence can be shown at least at the linear level. If the exact solution is smooth, as expected for vacuum solutions of Einstein’s field equations with smooth initial data and appropriate gauge conditions, at least as long as no curvature singularities form, it is not unreasonable to expect that schemes guaranteeing stability at the linearized level, perhaps with some additional filtering, are also stable for the nonlinear problem. Furthermore, since the solutions are expected to be smooth, emphasis is placed here on fast-converging space discretizations, such as high-order finite-difference or spectral methods, especially those which can be applied to multi-domain implementations.
The organization of this review is as follows. Section 3 starts with a discussion of well-posedness of initial-value problems for evolution equations in general, with special emphasis on hyperbolic ones, including their algebraic characterization. Next, in Section 4 we review some formulations of Einstein’s equations which yield a well-posed initial-value problem. Here, we mainly focus on the harmonic and BSSN formulations, which are the two most widely used ones in numerical relativity, as well as on the ADM formulation with different gauge conditions. Actual numerical simulations always involve the presence of computational boundaries, which raises the need to analyze the well-posedness of the IBVP. For this reason, the theory of IBVPs for hyperbolic problems is reviewed in Section 5, followed by a presentation of the state of the art of boundary conditions for the harmonic and BSSN formulations of Einstein’s equations in Section 6, where open problems related to gauge uniqueness are also described.
Section 7 reviews some of the numerical stability theory, including necessary eigenvalue conditions. These are quite useful in practice for analyzing complicated systems or discretizations. We also discuss necessary and sufficient conditions for stability within the method of lines, and Runge-Kutta methods. Sections 8 and 9 are devoted to two classes of spatial approximations: finite differences and spectral methods. Finite differences are rather standard and widespread, so in Section 8 we mostly focus on the construction of optimized operators of arbitrarily high order satisfying the summation-by-parts property, which is useful in stability analyses. We also briefly mention classical polynomial interpolation and how to systematically construct finite-difference operators from it. In Section 9 we present the main elements and theory of spectral methods, including spectral convergence from solutions to Sturm-Liouville problems, expansions in orthogonal polynomials, Gauss quadratures, spectral differentiation, and spectral viscosity. We present several explicit formulae for the families of polynomials most widely used: Legendre and Chebyshev. Section 10 describes boundary closures, which in the present context refer to procedures for imposing boundary conditions leading to stability results. We emphasize the penalty technique, which applies both to finite-difference methods of arbitrarily high order and to spectral ones, as well as to outer and interface boundaries, such as those appearing when there are multiple grids, as in domain decompositions for complex geometries. We also discuss absorbing boundary conditions for Einstein’s equations. Finally, Section 11 presents a sample of approaches in numerical relativity using multiple, semi-structured grids and/or curvilinear coordinates. In particular, some of these examples illustrate many of the methods discussed in this review in realistic simulations.
There are many topics related to numerical relativity which are not covered by this review. It does not include discussions of physical results in general relativity obtained through numerical simulations, such as critical phenomena or gravitational waveforms computed from binary black-hole mergers. For reviews on these topics we refer the reader to [223] and [337, 122], respectively. See also [9, 45] for recent books on numerical relativity. Next, we do not discuss setting up initial data and solving the Einstein constraints, and refer to [133]. For reviews on the characteristic and conformal approaches, which are only briefly mentioned in Section 6.4, we refer the reader to [432] and [172], respectively. Most of the results specific to Einstein’s field equations in Sections 4 and 6 apply to four-dimensional gravity only, though it should be possible to generalize some of them to higher-dimensional theories. Also, as we have already mentioned, the results described here mostly apply to the vacuum field equations, in which case the solutions are expected to be smooth. For aspects involving the presence of shocks, such as those present in relativistic hydrodynamics, we refer the reader to [165, 295]. See [352] for a more detailed review on hyperbolic formulations of Einstein’s equations, and [351] for one on global existence theorems in general relativity. Spectral methods in numerical relativity are discussed in detail in [215]. The 3+1 approach to general relativity is thoroughly reviewed in [214]. Finally, we refer the reader to [126] for a recent book on general relativity and the Einstein equations, which, among many other topics, discusses local and global aspects of the Cauchy problem, the constraint equations, and self-gravitating matter fields such as relativistic fluids and the relativistic kinetic theory of gases.
Except for a few historical remarks, this review does not discuss much of the historical path to the techniques and tools presented, but rather describes the state of the art of a subset of those which appear to be useful. Our choice of topics is mostly influenced by those for which some analysis is available or possible.
We have tried to make each section as self-consistent as possible within the scope of a manageable review, so that they can be read separately, though each of them builds on the previous ones. Numerous examples are included.
2 Notation and Conventions
For a differentiable function u, we denote by u_{ t }, u_{ x }, u_{ y }, u_{ z } its partial derivatives with respect to t, x, y, z.
3 The Initial-Value Problem
We start here with a discussion of hyperbolic evolution problems on the infinite domain ℝ^{ n }. This is usually the situation one encounters in the mathematical description of isolated systems, where some strong-field phenomena take place “near the origin” and generate waves, which are emitted toward “infinity”. Therefore, the goal of this section is to analyze the well-posedness of the Cauchy problem for quasilinear hyperbolic evolution equations without boundaries. The case with boundaries is the subject of Section 5. As mentioned in the introduction (Section 1), well-posedness results are fundamental in the sense that they give existence (at least local in time if the problem is nonlinear) and uniqueness of solutions and show that these depend continuously on the initial data. Of course, how the solution behaves in detail needs to be established by more sophisticated mathematical tools or by numerical experiments, but it is clear that it does not make sense to speak about “the solution” if the problem is not well posed.
Our presentation starts with the simplest case of linear, constant coefficient problems in Section 3.1, where solutions can be constructed explicitly using the Fourier transform. Then, we consider in Section 3.2 linear problems with variable coefficients, which we reduce to the constant coefficient case using the localization principle. Next, in Section 3.3, we treat first-order quasilinear equations, which we reduce to the previous case by the principle of linearization. Finally, in Section 3.4 we summarize some basic results about abstract evolution operators, which give the general framework for treating evolution problems, including not only those described by local partial differential operators, but also more general ones.
Much of the material from the first three subsections is taken from the book by Kreiss and Lorenz [259]. However, our summary also includes recent results concerning second-order equations, examples of wave systems on curved spacetimes, and a very brief review of semigroup theory.
3.1 Linear, constant coefficient problems
Example 1. The advection equation u_{ t }(t, x) = λu_{ x }(t, x) with speed λ ∈ ℝ in the negative x direction.
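As a numerical aside (an assumed check, not part of the original text), the advection example can be verified in Fourier space: the symbol is P(ik) = iλk, so each mode is multiplied by the unimodular factor e^{iλkt}, and the result is the transported profile u(t, x) = f(x + λt):

```python
import numpy as np

# Hypothetical check of Example 1: for u_t = lam * u_x the symbol is
# P(ik) = i*lam*k, the propagator e^{P(ik)t} has modulus one, and applying it
# mode by mode reproduces the exact solution u(t, x) = f(x + lam*t).
lam, t = 1.5, 0.7
N = 128
x = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers

f = np.exp(np.sin(x))                     # smooth periodic initial data
u = np.real(np.fft.ifft(np.exp(1j * lam * k * t) * np.fft.fft(f)))

exact = np.exp(np.sin(x + lam * t))       # profile moved in the -x direction
assert np.max(np.abs(u - exact)) < 1e-10
```

Since |e^{iλkt}| = 1 for every k, no mode is amplified, in agreement with well-posedness of this example.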
Example 3. The Schrödinger equation u_{ t }(t, x) = iΔu(t, x).
3.1.1 Well-posedness

The space of admissible initial data is very restrictive. Indeed, since \(f \in {{\mathcal S}^\omega}\) is necessarily analytic, it is not possible to consider nontrivial data with, say, compact support, and study the propagation of the support for such data.

For fixed t > 0, the solution may grow without bound when perturbations with arbitrarily small amplitude but higher and higher frequency components are considered. Such an effect is illustrated in Example 6 below.

The function space \({{\mathcal S}^\omega}\) does not seem to be useful as a solution space when considering linear variable coefficient or quasilinear problems, since, for such problems, the different k modes do not decouple from each other. Hence, mode coupling can lead to components with arbitrarily high frequencies.^{2}
The importance of this definition relies on the property that, for each fixed time t > 0, the norm \(\vert e^{P(ik)t}\vert\) of the propagator is bounded by the constant C(t) := Ke^{ αt }, which is independent of the wave vector k. The definition does not state anything about the growth of the solution with time, other than that this growth is bounded by an exponential. In this sense, unless one can choose α ≤ 0 or α > 0 arbitrarily small, well-posedness is not a statement about stability in time, but rather about stability with respect to mode fluctuations.
Let us illustrate the meaning of Definition 1 with a few examples:
Example 5. The heat equation u_{ t }(t, x) = Δu(t, x).
Fourier transformation converts this equation into û_{ t }(t, k) = −|k|^{2}û(t, k). Hence, the symbol is \(P(ik) = -\vert k{\vert ^2}\) and \(\vert {e^{P(ik)t}}\vert = {e^{-\vert k{\vert ^2}t}} \leq 1\). The problem is well posed.
Example 6. The backwards heat equation u_{ t }(t, x) = −Δu(t, x).
In this case the symbol is \(P(ik) = + \vert k{\vert ^2}\) and \(\vert {e^{P(ik)t}}\vert = {e^{+\vert k{\vert ^2}t}}\). In contrast to the previous case, e^{P(ik)t} exhibits exponential frequency-dependent growth for each fixed t > 0, and the problem is not well posed. Notice that small initial perturbations with large k are amplified by a factor that becomes larger and larger as k increases. Therefore, after an arbitrarily small time, the solution is contaminated by high-frequency modes.
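A back-of-the-envelope illustration of this amplification (an assumed one-dimensional setup, not from the text):

```python
import numpy as np

# Hypothetical illustration of Example 6 in one space dimension: under the
# backwards heat equation each Fourier mode evolves as e^{+k^2 t}, so a tiny
# perturbation eps*sin(k x) of the initial data grows by the factor e^{k^2 t}.
eps, t = 1e-8, 0.5
amplitudes = {k: eps * np.exp(k**2 * t) for k in (2, 4, 8)}

# the higher the frequency, the larger the amplification at the same time t:
assert amplitudes[2] < amplitudes[4] < amplitudes[8]
# already for k = 8, the 1e-8 perturbation has grown to order 1e5 at t = 0.5:
assert amplitudes[8] > 1e5
```

No bound of the form Ke^{αt} uniform in k can hold, which is precisely the failure of Definition 1 for this example.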
Example 7. The Schrödinger equation u_{ t }(t, x) = iΔu(t, x).
In this case we have \(P(ik) = -i\vert k\vert^2\) and \(\vert e^{P(ik)t}\vert = 1\). The problem is well posed. Furthermore, the evolution is unitary, and we can evolve forward and backwards in time. When compared to the previous example, it is the factor i in front of the Laplace operator that saves the situation and allows the evolution backwards in time.
More generally one can show (see Theorem 2.1.2 in [259]):
Lemma 1. The Cauchy problem for the first-order equation u_{ t } = Au_{ x } + Bu with complex m × m matrices A and B is well posed if and only if A is diagonalizable and has only real eigenvalues.
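A minimal sketch of testing this criterion numerically (the helper below is hypothetical, not from the text):

```python
import numpy as np

# A sketch of checking the condition in Lemma 1 for u_t = A u_x + B u:
# well-posedness requires A to be diagonalizable with only real eigenvalues
# (the lower-order term B plays no role in the criterion).
def first_order_well_posed(A, tol=1e-10):
    A = np.asarray(A, dtype=complex)
    w, V = np.linalg.eig(A)
    if np.max(np.abs(w.imag)) > tol:                 # eigenvalues must be real
        return False
    # A is diagonalizable iff the eigenvector matrix has full rank
    return np.linalg.matrix_rank(V, tol=1e-8) == A.shape[0]

assert first_order_well_posed([[0.0, 1.0], [1.0, 0.0]])       # speeds +/- 1
assert not first_order_well_posed([[0.0, 1.0], [-1.0, 0.0]])  # eigenvalues +/- i
assert not first_order_well_posed([[1.0, 1.0], [0.0, 1.0]])   # Jordan block
```

The rank test with an explicit tolerance catches the Jordan-block case, where the numerically computed eigenvectors are nearly parallel.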
By considering the eigenvalues of the symbol P(ik) we obtain the following simple necessary condition for wellposedness:
Although the Petrovskii condition is a very simple necessary condition, we stress that it is not sufficient in general. Counterexamples are first-order systems, which are weakly, but not strongly, hyperbolic; see Example 10 below.
3.1.2 Extension of solutions
3.1.3 Algebraic characterization
In order to extend the solution concept to initial data more general than analytic, we have introduced the concept of well-posedness in Definition 1. However, given a symbol P(ik), it is not always a simple task to determine whether or not constants K ≥ 0 and α ∈ ℝ exist such that \(\vert e^{P(ik)t}\vert \leq K{e^{\alpha t}}\) for all t ≥ 0 and k ∈ ℝ^{ n }. Fortunately, the matrix theorem by Kreiss [257] provides necessary and sufficient conditions on the symbol P(ik) for well-posedness.
 (i) There exists a constant K ≥ 0 such that $$\vert{e^{P(ik)t}}\vert \leq K{e^{\alpha t}}$$ (3.24) for all t ≥ 0 and k ∈ ℝ^{ n }.
 (ii) There exist a constant M > 0 and a family H(k) of m × m Hermitian matrices such that $${M^{-1}}I \leq H(k) \leq MI,\quad H(k)P(ik) + P{(ik)^{\ast}}H(k) \leq 2\alpha H(k)$$ (3.25) for all k ∈ ℝ^{ n }.
3.1.4 First-order systems
 A necessary condition for the problem to be well posed is that for each k ∈ ℝ^{ n } with |k| = 1 the symbol P_{0}(ik) is diagonalizable and has only purely imaginary eigenvalues. To see this, we require the inequality $$\vert {e^{\vert k\vert {P_0}(ik\prime)t + Bt}}\vert \leq K{e^{\alpha t}},\quad k\prime := {k \over {\vert k\vert}},$$ (3.29) for all t ≥ 0 and k ∈ ℝ^{ n }, k ≠ 0, replace t by t/|k|, and take the limit |k| → ∞, which yields \(\vert{e^{{P_0}(i{k\prime})t}}\vert\, \leq K\) for all k′ ∈ ℝ^{ n } with |k′| = 1. Therefore, there must exist for each such k′ a complex m × m matrix S(k′) such that S(k′)^{−1}P_{0}(ik′)S(k′) = iΛ(k′), where Λ(k′) is a diagonal real matrix (cf. Lemma 1).
 In this case the family of Hermitian m × m matrices H(k′) := (S(k′)^{−1})*S(k′)^{−1} satisfies $$H(k\prime){P_0}(ik\prime) + {P_0}{(ik\prime)^{\ast}}H(k\prime) = 0$$ (3.30) for all k′ ∈ ℝ^{ n } with |k′| = 1.

However, in order to obtain the energy estimate, one also needs the condition M^{−1}I ≤ H(k′) ≤ MI, that is, H(k′) must be uniformly bounded and positive. This follows automatically if H(k′) depends continuously on k′, since k′ varies over the (n − 1)-dimensional unit sphere, which is compact.^{3} In turn, it follows that H(k′) depends continuously on k′ if S(k′) does. However, although this may hold in many situations, continuous dependence of S(k′) on k′ cannot always be established; see Example 12 for a counterexample.
These observations motivate the following three notions of hyperbolicity, each of them being a stronger condition than the previous one:
 (i)
weakly hyperbolic, if all the eigenvalues of its principal symbol P_{0}(ik) are purely imaginary.
 (ii) strongly hyperbolic, if there exist a constant M > 0 and a family of Hermitian m × m matrices H(k), k ∈ S^{n−1}, satisfying $${M^{-1}}I \leq H(k) \leq MI,\quad H(k){P_0}(ik) + {P_0}{(ik)^{\ast}}H(k) = 0,$$ (3.31) for all k ∈ S^{n−1}, where S^{n−1} := {k ∈ ℝ^{ n } : |k| = 1} denotes the unit sphere.
 (iii) symmetric hyperbolic, if there exists a Hermitian, positive definite m × m matrix H (which is independent of k) such that $$H{P_0}(ik) + {P_0}{(ik)^{\ast}}H = 0,$$ (3.32) for all k ∈ S^{n−1}.
 Strongly and symmetric hyperbolic systems give rise to a well-posed Cauchy problem. According to Theorem 1, their principal symbol satisfies $$\vert {e^{{P_0}(ik)t}}\vert \leq K,\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R},$$ (3.33) and this property is stable with respect to lower-order perturbations, $$\vert {e^{P(ik)t}}\vert = \vert {e^{{P_0}(ik)t + Bt}}\vert \leq K{e^{K\vert B\vert t}},\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R}.$$ (3.34) The last inequality can be proven by applying Duhamel’s formula (3.23) to the function \(\hat u(t): = {e^{P(ik)t}}\hat f\), which satisfies û_{ t }(t) = P_{0}(ik)û(t) + F(t) with F(t) = Bû(t). The solution formula (3.23) then gives \(\vert \hat u(t)\vert \, \leq K(\vert \hat f\vert + \vert B\vert \int\nolimits_0^t {\vert \hat u(s)\vert ds)}\), which yields \(\vert \hat u(t)\vert \, \leq K{e^{K\vert B\vert t}}\vert \hat f\vert\) by Gronwall’s lemma.

As we have anticipated above, a necessary condition for well-posedness is the existence of a complex m × m matrix S(k) for each k ∈ S^{n−1} on the unit sphere, which brings the principal symbol P_{0}(ik) into diagonal, purely imaginary form. If, furthermore, S(k) can be chosen such that S(k) and S(k)^{−1} are uniformly bounded for all k ∈ S^{n−1}, then H(k) := (S(k)^{−1})*S(k)^{−1} satisfies the conditions (3.31) for strong hyperbolicity. If the system is well posed, Theorem 2.4.1 in [259] shows that it is always possible to construct a symmetrizer H(k) satisfying the conditions (3.31) in this manner, and hence, strong hyperbolicity is also a necessary condition for well-posedness. The symmetrizer construction H(k) := (S(k)^{−1})*S(k)^{−1} is useful for applications, since S(k) is easily constructed from the eigenvectors and S(k)^{−1} from the eigenfields of the principal symbol; see Example 15.
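The construction is easy to carry out concretely. A minimal sketch in Python (the 2 × 2 system below is an assumed example, not one from the text): for u_t = A u_x with A = [[0, 1], [c², 0]], a first-order reduction of the wave equation with speeds ±c, the columns of S are the eigenvectors of A, and H = (S^{−1})*S^{−1} satisfies the conditions (3.31):

```python
import numpy as np

# Sketch (assumed example) of the symmetrizer construction
# H(k) = (S(k)^{-1})^* S(k)^{-1} for the 1D system u_t = A u_x with
# A = [[0, 1], [c^2, 0]]; the principal symbol is P_0(ik) = ik A.
c = 2.0
A = np.array([[0.0, 1.0], [c**2, 0.0]])

lam, S = np.linalg.eig(A)          # columns of S: eigenvectors of A
assert np.allclose(lam.imag, 0.0)  # real speeds +/- c: strongly hyperbolic

Sinv = np.linalg.inv(S)
H = Sinv.conj().T @ Sinv           # candidate symmetrizer

for k in (+1.0, -1.0):             # the unit "sphere" in one dimension
    P0 = 1j * k * A
    # condition (3.31): H P_0(ik) + P_0(ik)^* H = 0
    assert np.allclose(H @ P0 + P0.conj().T @ H, 0.0)

# H is Hermitian and positive definite
assert np.allclose(H, H.conj().T) and np.all(np.linalg.eigvalsh(H) > 0)
```

In one space dimension the unit sphere is just {±1}, so a single S suffices; in higher dimensions S(k) and H(k) generally depend on the direction k.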

Weakly hyperbolic systems are not well posed in general because \(\vert {e^{{P_0}(ik)t}}\vert\) might exhibit polynomial growth in |k|t. Although one might consider such polynomial growth as acceptable, such systems are unstable with respect to lower-order perturbations. As the next example shows, it is possible that \(\vert e^{P(ik)t}\vert\) grows exponentially in |k| if the system is weakly hyperbolic.
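This mechanism can be illustrated numerically with a standard weakly hyperbolic model (an assumed example, not the one referred to in the text):

```python
import numpy as np

# Assumed weakly hyperbolic model: u_t = A u_x + B u with A = [[0,1],[0,0]]
# a Jordan block and B = [[0,0],[1,0]].  Without B, e^{ikAt} = [[1, ikt],[0,1]]
# grows only like |k|t, but P(ik) = ikA + B has eigenvalues +/- sqrt(ik) with
# real part sqrt(|k|/2), so |e^{P(ik)t}| grows exponentially in |k|.
def propagator_norm(k, t=1.0):
    P = np.array([[0.0, 1j * k], [1.0, 0.0]])      # P(ik) = ikA + B
    w, V = np.linalg.eig(P)                        # distinct eigenvalues
    E = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)
    return np.linalg.norm(E, 2)

norms = [propagator_norm(k) for k in (10.0, 100.0, 1000.0)]
assert norms[0] < norms[1] < norms[2]              # no bound uniform in k
assert norms[2] > np.exp(np.sqrt(1000.0 / 2.0)) / 100.0
```

The norms grow roughly like e^{√(|k|/2) t}, so no estimate of the form Ke^{αt} with K and α independent of k can hold.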
Of course, A(k) is symmetric and so S(k) can be chosen to be unitary, which yields the trivial symmetrizer H(k) = I. Therefore, the system is symmetric hyperbolic and yields a wellposed Cauchy problem; however, this example shows that it is not always possible to choose S(k) as a continuous function of k.
Yet another way of reducing second-order equations to first-order ones without introducing constraints will be discussed in Section 3.1.5.
3.1.5 Second-order systems
Conversely, suppose that the problem is well posed with symmetrizer H(k). Then, the vanishing of H(k)Q_{0}(ik) + Q_{0}(ik)*H(k) yields the conditions H_{11}(k) = H_{22}(k)R(k) = R(k)*H_{22}(k) and the conditions (3.63) are satisfied for h(k) := H_{22}(k). □
Remark: The conditions (3.63) imply that R(k) is symmetric and positive with respect to the scalar product defined by h(k). Hence it is diagonalizable, and all its eigenvalues are positive. A practical way of finding h(k) is to construct T(k), which diagonalizes R(k), T(k)^{−1} R(k)T(k) = P(k) with P(k) diagonal and positive. Then, h(k) := (T(k)^{−1})*T(k)^{−1} is the candidate for satisfying the conditions (3.63).
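The diagonalization recipe just described can be sketched in a few lines (the matrix R below is an assumed numerical example, not one from the text):

```python
import numpy as np

# Sketch of the recipe above: given R(k), diagonalize it,
# T^{-1} R T = D with D diagonal and positive, and take
# h(k) := (T^{-1})^* T^{-1} as the candidate symmetrizer.
R = np.array([[2.0, 1.0], [3.0, 4.0]])   # diagonalizable, eigenvalues 1 and 5

d, T = np.linalg.eig(R)
assert np.all(d.real > 0) and np.allclose(d.imag, 0.0)

Tinv = np.linalg.inv(T)
h = Tinv.conj().T @ Tinv

# h R is Hermitian and positive definite, i.e., R is symmetric and positive
# with respect to the scalar product defined by h:
hR = h @ R
assert np.allclose(hR, hR.conj().T)
assert np.all(np.linalg.eigvalsh(hR) > 0)
```

Since hR = (T^{−1})*DT^{−1} with D positive diagonal, Hermiticity and positivity hold by construction whenever R is diagonalizable with positive eigenvalues.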
Let us give some examples and applications:
Example 16. The Klein-Gordon equation v_{ tt } = Δv − m^{2}v on flat spacetime. In this case, A^{ ij } = δ^{ ij } and B^{ j } = 0, and R(k) = |k|^{2} trivially satisfies the conditions of Theorem 2.
3.2 Linear problems with variable coefficients

In the constant coefficient case, inequality (3.75) is equivalent to inequality (3.11), and in this sense Definition 3 is a generalization of Definition 1.
 If u_{1} and u_{2} are the solutions corresponding to the initial data \({f_1},{f_2} \in C_0^\infty ({{\mathbb R}^n})\), then the difference u = u_{2} − u_{1} satisfies the Cauchy problem (3.73, 3.74) with f = f_{2} − f_{1}, and the estimate (3.75) implies that $$\Vert {u_2}(t, \cdot) - {u_1}(t, \cdot)\Vert \leq K{e^{\alpha t}}\Vert {f_2} - {f_1}\Vert ,\quad t \geq 0.$$ (3.76) In particular, this implies that u_{2}(t, ·) converges to u_{1}(t, ·) if f_{2} converges to f_{1} in the L^{2} sense. In this sense, the solution depends continuously on the initial data. This property is important for the convergence of a numerical approximation, as discussed in Section 7.

Estimate (3.75) also implies uniqueness of the solution, because for two solutions u_{1} and u_{2} with the same initial data \({f_1} = {f_2} \in C_0^\infty ({{\mathbb R}^n})\), the inequality (3.76) implies u_{1} = u_{2}.

As in the constant coefficient case, it is possible to extend the solution concept to weak ones by taking sequences of C^{ ∞ } elements. This defines a propagator U(t, s) : L^{2}(ℝ^{ n }) → L^{2}(ℝ^{ n }), which maps the solution at time s ≥ 0 to the solution at time t ≥ s and satisfies properties similar to the ones described in Section 3.1.2: (i) U(t, t) = I for all t ≥ 0, (ii) U(t, s)U(s, r) = U(t, r) for all t ≥ s ≥ r ≥ 0, (iii) for \(f \in C_0^\infty ({{\mathbb R}^n})\), U(t, 0)f is the unique solution of the Cauchy problem (3.73, 3.74), (iv) ‖U(t, s)f‖ ≤ Ke^{α(t−s)}‖f‖ for all f ∈ L^{2}(ℝ^{ n }) and all t ≥ s ≥ 0. Furthermore, the Duhamel formula (3.23) holds with the replacement U(t − s) ↦ U(t, s).
3.2.1 The localization principle
As in the constant coefficient case, we would like to have a criterion for well-posedness that is based on the coefficients A_{ ν }(t, x) of the differential operator alone. As we have seen in the constant coefficient case, well-posedness is essentially a statement about high frequencies. Therefore, we are led to consider solutions with very high frequency or, equivalently, with very short wavelength. In this regime we can consider small neighborhoods, and since the coefficients A_{ ν }(t, x) are smooth, they are approximately constant in such neighborhoods. Therefore, intuitively, the question of well-posedness for the variable coefficient problem can be reduced to a frozen coefficient problem, where the values of the matrix coefficients A_{ ν }(t, x) are frozen to their values at a given point.
This leads us to the following statement: a necessary condition for the linear, variable coefficient Cauchy problem for the equation u_{ t } = P(t, x, ∂/∂x)u to be well posed is that all the corresponding problems for the frozen coefficient equations v_{ t } = P_{0}(t_{0}, x_{0}, ∂/∂x)v are well posed. For a rigorous proof of this statement for the case in which P(t, x, ∂/∂x) is time-independent, see [397]. We stress that it is important to replace P(t, x, ∂/∂x) by its principal part P_{0}(t, x, ∂/∂x) when freezing the coefficients. The statement is false if lower-order terms are retained; see [259, 397] for counterexamples.
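As a rough illustration of this frozen-coefficient test (the coefficient matrix A(x) below is hypothetical), one can sample the principal symbol at several points and verify the necessary condition at each:

```python
import numpy as np

# Sketch of the frozen-coefficient test for u_t = A(x) u_x: freeze x at
# sample points x0 and check that each frozen matrix A(x0) is diagonalizable
# with real eigenvalues, i.e., that each frozen problem is well posed.
def A(x):
    # hypothetical smooth coefficient matrix; off-diagonal entries stay > 0
    return np.array([[0.0, 1.0 + 0.5 * np.sin(x)],
                     [1.0 + 0.5 * np.cos(x), 0.0]])

def frozen_ok(A0, tol=1e-10):
    w, V = np.linalg.eig(np.asarray(A0, dtype=complex))
    real = np.max(np.abs(w.imag)) <= tol
    diagonalizable = np.linalg.matrix_rank(V, tol=1e-8) == A0.shape[0]
    return real and diagonalizable

assert all(frozen_ok(A(x0)) for x0 in np.linspace(0.0, 2 * np.pi, 25))
```

For this A(x) the frozen eigenvalues are ±√(ab) with a, b > 0, so every frozen problem passes the test; in general, sampling cannot replace the smooth-symmetrizer requirement of Definition 4 below.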
 (i)
weakly hyperbolic if all the eigenvalues of its principal symbol P_{0}(t, x, ik) are purely imaginary.
 (ii) strongly hyperbolic if there exist M > 0 and a family of positive definite, Hermitian m × m matrices H(t, x, k), (t, x, k) ∈ Ω × S^{n−1}, whose coefficients belong to the class \(C_b^\infty (\Omega \times {S^{n - 1}})\), such that $${M^{-1}}I \leq H(t,x,k) \leq MI,\quad H(t,x,k){P_0}(t,x,ik) + {P_0}{(t,x,ik)^{\ast}}H(t,x,k) = 0,$$ (3.81) for all (t, x, k) ∈ Ω × S^{n−1}, where Ω := [0, ∞) × ℝ^{ n }.
 (iii)
symmetric hyperbolic if it is strongly hyperbolic and the symmetrizer H(t, x, k) can be chosen independent of k.
We see that these definitions are straightforward extrapolations of the corresponding definitions (see Definition 2) in the constant coefficient case, except for the smoothness requirements on the symmetrizer H(t, x, k).^{8} There are examples of ill-posed Cauchy problems for which a Hermitian, positive-definite symmetrizer H(t, x, k) exists but is not smooth [397], showing that these requirements are necessary in general.
Theorem 3. If the first-order system (3.77) is strongly or symmetric hyperbolic in the sense of Definition 4, then the Cauchy problem ( 3.73 , 3.74 ) is well posed in the sense of Definition 3.
For a proof of this theorem, see, for instance, Proposition 7.1 and the comments following its formulation in Chapter 7 of [411]. Let us look at some examples:
One can still show that the system is well posed if one takes into account the constraint ∇ · B = 0, which is preserved by the evolution equation (3.85). In Fourier space, this constraint forces B_{1} = 0, which eliminates the first row and column in the principal symbol and yields a strongly hyperbolic symbol. However, at the numerical level, this means that special care needs to be taken when discretizing the system (3.85), since any discretization which does not preserve ∇ · B = 0 will push the solution away from the constraint manifold, where the system is weakly hyperbolic. For numerical schemes which explicitly preserve (divergence-transport) or enforce (divergence-cleaning) the constraints, see [159] and [136], respectively. For alternative formulations which are strongly hyperbolic without imposing the constraint, see [120].
In particular, it follows that the Cauchy problem for the Klein-Gordon equation on a globally-hyperbolic spacetime M = [0, ∞) × ℝ^{ n } with \(\alpha, {\beta ^i},{\gamma _{ij}} \in C_b^\infty ([0,\infty) \times {{\mathbb R}^n})\) is well posed, provided that α^{2}γ^{ ij } is uniformly positive definite; see Example 17.
3.2.2 Characteristic speeds and fields
Example 21. In the formulation of Maxwell’s equations discussed in Example 15, the characteristic speeds are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, and the corresponding characteristic fields are the components of the vector on the righthand side of Eq. (3.54).
3.2.3 Energy estimates and finite speed of propagation
Here we focus our attention on first-order linear systems, which are symmetric hyperbolic. In this case it is not difficult to derive a priori energy estimates based on integration by parts. Such estimates assume the existence of a sufficiently smooth solution and bound an appropriate norm of the solution at some time t > 0 in terms of the same norm of the solution at the initial time t = 0. As we will illustrate here, such estimates already yield quite a lot of information on the qualitative behavior of the solutions. In particular, they give uniqueness, continuous dependence on the initial data, and finite speed of propagation.
The word “energy” stems from the fact that for many problems the squared norm satisfying the estimate is directly or indirectly related to the physical energy of the system, although for many other problems the squared norm does not have a physical interpretation of any kind.
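As a simple numerical aside (an assumed model problem, not from the text), the sharpest form of such an energy estimate, exact conservation, can be seen for the constant-coefficient system u_t = A u_x with A symmetric: each Fourier-mode propagator e^{ikAt} is unitary, so the L² energy is conserved:

```python
import numpy as np

# Assumed model problem: u_t = A u_x with A = A^T symmetric hyperbolic.
# Each Fourier mode evolves with the unitary propagator e^{ikAt}, so the
# discrete L^2 "energy" ||u(t,.)||^2 is exactly conserved.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wave numbers

f = np.array([np.sin(x), np.exp(np.cos(x))])   # smooth periodic initial data
f_hat = np.fft.fft(f, axis=1)

w, V = np.linalg.eigh(A)                       # real speeds +/- 1
t = 0.8
u_hat = np.empty_like(f_hat)
for j in range(N):
    E = V @ np.diag(np.exp(1j * k[j] * w * t)) @ V.T   # e^{ik_j A t}, unitary
    u_hat[:, j] = E @ f_hat[:, j]
u = np.real(np.fft.ifft(u_hat, axis=1))

def energy(v):
    return np.sum(v**2) * (2 * np.pi / N)      # discrete L^2 norm squared

assert abs(energy(u) - energy(f)) < 1e-10 * energy(f)
```

For variable coefficients the integration-by-parts argument below yields the weaker, but still sufficient, bound with growth rate e^{2αt}.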
 Finite speed of propagation. Let p_{0} = (t_{0}, x_{0}) ∈ Ω be a given event, and set $$v({t_0}): = \sup \left\{{{{{u^\ast}H(t,\,x){P_0}(t,\,x,\,s)u} \over {{u^\ast}H(t,\,x)u}}:0 \leq t \leq {t_0},\,x \in {{\mathbb R}^n},\,s \in {S^{n - 1}},\,u \in {{\mathbb C}^m},\,u \neq 0} \right\}.$$ (3.110) Define the past cone at p_{0} as^{9} $${C^ -}({p_0}): = \{(t,\,x) \in \Omega :\vert x - {x_0}\vert \leq v({t_0})({t_0} - t)\}.$$ (3.111) The unit outward normal to its boundary is e_{ µ }dx^{ µ } = N[v(t_{0})dt + (x − x_{0})·dx/|x − x_{0}|], which satisfies the condition (3.109). It follows from the estimate (3.107) applied to the domain Ω_{ T } = C^{−}(p_{0}) that the solution is zero on C^{−}(p_{0}) if the initial data is zero on the intersection of the cone C^{−}(p_{0}) with the initial surface t = 0. In other words, a perturbation in the initial data outside the ball |x − x_{0}| ≤ v(t_{0})t_{0} does not alter the solution inside the cone C^{−}(p_{0}). Using this argument, it also follows that if f has compact support, the corresponding solution u(t, ·) also has compact support for all t > 0.
 Continuous dependence on the initial data. Let \(f \in C_0^\infty ({{\mathbb R}^n})\) be smooth initial data with compact support. As we have seen above, the corresponding smooth solution u(t, ·) also has compact support for each t ≥ 0. Therefore, applying the estimate (3.107) to the case Σ_{ t } := {t} × ℝ^{ n }, the boundary integral vanishes and we obtain $$E({\Sigma _t}) \leq {e^{2\alpha t}}E({\Sigma _0}),\quad t \geq 0.$$ (3.112) In view of the definition of E(Σ_{ t }), see Eq. (3.104), and the properties (3.81) of the symmetrizer, it follows that $$\Vert u(t, \cdot)\Vert \, \leq M{e^{\alpha t}}\Vert f \Vert ,\quad t \geq 0,$$ (3.113) which is of the required form; see Definition 3. In particular, we have uniqueness and continuous dependence on the initial data.
 The statements about finite speed of propagation and continuous dependence on the data can easily be generalized to the case of a first-order symmetric hyperbolic inhomogeneous equation u_{ t } = P(t, x, ∂/∂x)u + F(t, x), with F : Ω → ℂ^{ m } a bounded, C^{ ∞ }-function with bounded derivatives. In this case, the inequality (3.113) is replaced by$$\Vert u(t, \cdot)\Vert \, \leq \,M{e^{\alpha t}}\left[ {\Vert f \Vert + \int\limits_0^t {{e^{-\alpha s}}}\Vert F(s, \cdot)\Vert ds} \right],\quad \quad t \geq 0.$$(3.114)
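The speed v(t₀) entering these statements has a simple finite-dimensional interpretation: for a constant-coefficient symmetric hyperbolic system with symmetrizer H = I, it reduces to the largest eigenvalue of the principal symbol over unit directions s. A minimal numerical sketch (our own illustration; the example system and function names are ours, not from the text):

```python
import numpy as np

# Sketch: for a constant-coefficient symmetric hyperbolic system
# u_t = sum_j A^j du/dx^j with symmetrizer H = I, the speed v in Eq. (3.110)
# is the supremum over unit directions s of the largest eigenvalue of the
# principal symbol P0(s) = sum_j s_j A^j.

def max_speed(A_list, n_dirs=400):
    """Estimate v = sup_s max eig(P0(s)) over unit directions s (2D here)."""
    best = 0.0
    for theta in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        s = np.array([np.cos(theta), np.sin(theta)])
        P0 = sum(sk * Ak for sk, Ak in zip(s, A_list))
        best = max(best, np.max(np.linalg.eigvalsh(P0)))
    return best

# First-order reduction of the 2D wave equation u_tt = u_xx + u_yy in the
# variables (u_t, u_x, u_y): the characteristic speeds are 0 and +-1.
A1 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
A2 = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
v = max_speed([A1, A2])   # close to 1, the speed of light in these units
```

Here the past cone C⁻(p₀) computed from v coincides with the past light cone, consistent with Example 22 below.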

If the boundary surface \({\mathcal T}\) does not satisfy the condition (3.109) for the boundary integral to be positive, then suitable boundary conditions need to be specified in order to control the sign of this term. This will be discussed in Section 5.2.

Although different techniques have to be used to prove them, very similar results hold for strongly hyperbolic systems [353].

For definitions of hyperbolicity of a geometric PDE on a manifold, which do not require a 3+1 decomposition of spacetime, see, for instance, [205, 353], for firstorder systems and [47] for secondorder ones.
Example 22. We have seen that for the KleinGordon equation propagating on a globallyhyperbolic spacetime, the characteristic speeds are the speed of light. Therefore, in the case of a constant metric (i.e., Minkowksi space), the past cone C^{−}(p_{0}) defined in Eq. (3.111) coincides with the past light cone at the event p_{0}. A slight refinement of the above argument shows that the statement remains true for a KleinGordon field propagating on any hyperbolic spacetime.
Example 23. In Example 21 we have seen that the characteristic speeds of the system given in Example 15 are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, where αβ > 0 is assumed for strong hyperbolicity. Therefore, the past cone C^{−}(p_{0}) corresponds to the past light cone provided that 0 < αβ ≤ 1. For αβ > 1, the formulation has superluminal constraintviolating modes, and an initial perturbation emanating from a region outside the past light cone at p_{0} could affect the solution at p_{0}. In this case, the past light cone at p_{0} is a proper subset of C^{−}(p_{0}).
3.3 Quasilinear equations

The nonlinear term F(t, x, u) may induce blowup of the solutions in finite time. This is already the case for the simple example where m = 1, all the matrices A^{ j } vanish identically and F(t, x, u) = u^{2}, in which case Eq. (3.115) reduces to u_{ t } = u^{2}. In the context of Einstein’s equations such a blowup is expected when a curvature singularity forms, or it could also occur in the presence of a coordinate singularity due to a “bad” gauge condition.
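For this model equation the blow-up is explicit: with u(0) = u₀ > 0 the solution is u(t) = u₀/(1 − u₀t), which diverges at t* = 1/u₀. A small numerical sketch (our own illustration; the function names are ours):

```python
# Sketch: the ODE u_t = u^2 with u(0) = u0 > 0 has the exact solution
# u(t) = u0 / (1 - u0*t), which blows up in finite time at t* = 1/u0.

def u_exact(t, u0):
    return u0 / (1.0 - u0 * t)

def blowup_time(u0):
    return 1.0 / u0

def euler(u0, t_end, n):
    """Forward-Euler integration of u_t = u^2; tracks the exact solution
    well before t*, but no scheme can continue past the blow-up time."""
    u, dt = u0, t_end / n
    for _ in range(n):
        u += dt * u * u
    return u
```

For u₀ = 1 the blow-up occurs at t* = 1, and integrating only to t = 0.5 reproduces the exact value u(0.5) = 2 to high accuracy.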

In contrast to the linear case, the matrix functions A^{ j } in front of the derivative operator now depend pointwise on the state vector itself, which implies, in particular, that the characteristic speeds and fields depend on u. This can lead to the formation of shocks where characteristics cross each other, as in the simple example of Burgers' equation u_{ t } = uu_{ x }, corresponding to the case m = n = 1, A^{1}(t, x, u) = u and F(t, x, u) = 0. In general, shocks may form when the system is not linearly degenerate or genuinely nonlinear [250]. The Einstein vacuum equations, on the other hand, can be written in linearly degenerate form (see, for example, [6, 7, 348, 8]) and are therefore expected to be free of physical shocks.
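The shock-formation mechanism can be made explicit by the method of characteristics: for u_t = uu_x the solution is constant along the curves x(t) = x₀ − f(x₀)t, and the characteristics emanating from x₀ < x₁ cross at t = (x₁ − x₀)/(f(x₁) − f(x₀)). A small sketch (our own illustration; the initial data and function names are ours):

```python
import numpy as np

# Sketch: for Burgers' equation u_t = u u_x the solution is constant along
# characteristics x(t) = x0 - f(x0) t.  The first crossing (shock-formation)
# time is t* = 1 / max f'(x0).

def crossing_time(f, x0, x1):
    """Time at which the characteristics from x0 and x1 intersect
    (positive whenever f(x1) > f(x0))."""
    return (x1 - x0) / (f(x1) - f(x0))

def shock_time(f, xs):
    """First crossing time t* = 1/max f', approximated on the grid xs."""
    fp = np.gradient(f(xs), xs)
    return 1.0 / fp.max()

f = np.tanh                       # smooth data with maximal slope f'(0) = 1
xs = np.linspace(-5.0, 5.0, 2001)
t_star = shock_time(f, xs)        # close to 1 for tanh initial data
```

For the tanh profile the steepest characteristics are those near the origin, and the predicted shock time t* ≈ 1 agrees with the analytic value 1/f′(0).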
Under such restrictions, it is possible to prove well-posedness of the Cauchy problem. The idea is to linearize the problem and to apply Banach's fixed-point theorem. This is discussed next.
3.3.1 The principle of linearization
Here, the norms ‖·‖_{ X } and ‖·‖_{ Y } appearing on both sides of Eq. (3.119) are different from each other because ‖u − u^{(0)}‖_{ X } controls the function u − u^{(0)} over the spacetime region [0, T] × ℝ^{ n }, while ‖δƒ‖_{ Y } is a norm controlling the function δƒ on ℝ^{ n }.
3.4 Abstract evolution operators
 (i)
P(0) = I,
 (ii)
P(t + s) = P(t)P(s) for all t, s ≥ 0,
 (iii)
\(\underset {t \rightarrow 0} {\lim} P(t)u = u\) for all \(u \in X\),
 (iv)
\(D(A) = \left\{{u \in X:\underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u]\;{\rm{exists\;in\;}}X} \right\}\) and \(Au = \underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u],\;u \in D(A)\).
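In finite dimensions these axioms are realized by the matrix exponential P(t) = e^{tA}. A minimal sketch (our own illustration, with X = ℝ² and a rotation generator of our own choosing):

```python
import numpy as np

# Sketch: the rotation group P(t) = exp(tA) with generator A = [[0,-1],[1,0]]
# satisfies the semigroup axioms (i)-(iv): P(0) = I, P(t+s) = P(t)P(s),
# strong continuity, and recovery of A as the limit (P(t)u - u)/t.

A = np.array([[0.0, -1.0], [1.0, 0.0]])

def P(t):
    # exp(tA) in closed form for this generator (a rotation by angle t)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def generator_applied(u, h=1e-6):
    """Finite-time difference quotient approximating Au, axiom (iv)."""
    return (P(h) @ u - u) / h
```

The same construction with an unbounded operator A (e.g., a spatial derivative on L²) is exactly the setting of the abstract theory above; only the verification of the axioms becomes nontrivial.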
There are several results giving necessary and sufficient conditions for the linear operator A to generate a strongly continuous semigroup; see, for instance, [327, 51]. One useful result, which we formulate for Hilbert spaces, is the following:
 (i)
A is the infinitesimal generator of a strongly continuous semigroup P(t) such that ‖P(t)‖ ≤ e^{ αt } for all t ≥ 0.
 (ii)
A − αI is dissipative, that is, Re(u, Au − αu) ≤ 0 for all u ∈ D(A), and the range of A − λI is equal to X for some λ > α.
Finding the correct domain D(A) for the infinitesimal generator A is not always a trivial task, especially for equations involving singular coefficients. Fortunately, there are weaker versions of the Lumer-Phillips theorem, which only require checking conditions on a subspace D ⊂ D(A), which is dense in X. It is also possible to formulate the Lumer-Phillips theorem on Banach spaces. See [327, 152, 51] for more details.
The semigroup theory can be generalized to time-dependent operators A(t), and to quasilinear equations where A(u) depends on the solution u itself. We refer the reader to [51] for these generalizations and for applications to examples from mathematical physics including general relativity. The theory of strongly continuous semigroups has also been used for formulating well-posed initial-boundary value formulations for the Maxwell equations [354] and the linearized Einstein equations [309] with elliptic gauge conditions.
4 Initial-Value Formulations for Einstein’s Equations
In this section, we apply the theory discussed in Section 3 to well-posed Cauchy formulations of Einstein’s vacuum equations. The first such formulation dates back to the 1950s [169] and will be discussed in Section 4.1. Since then, there has been a plethora of new formulations, which distinguish themselves by the choice of variables (metric vs. tetrad, Christoffel symbols vs. connection coefficients, inclusion or not of curvature components as independent variables, etc.), the choice of gauges and the use of the constraint equations in order to modify the evolution equations off the constraint surface. Many of these new formulations have been motivated by numerical calculations, which try to solve a given physical problem in a stable way.
By far the most successful formulations for numerically evolving compact-object binaries have been the harmonic system, which is based on the original work of [169], and that of Baumgarte-Shapiro-Shibata-Nakamura (BSSN) [390, 44]. For this reason, we review these two formulations in detail in Sections 4.1 and 4.3, respectively. In Section 4.2 we also review the Arnowitt-Deser-Misner (ADM) formulation [30], which is based on a Hamiltonian approach to general relativity and serves as a starting point for many hyperbolic systems, including the BSSN one. A list of references for hyperbolic reductions of Einstein’s equations not discussed here is given in Section 4.4.
4.1 The harmonic formulation
4.1.1 Hyperbolicity
However, as indicated above, the first-order symmetric hyperbolic reduction (4.9, 4.10, 4.11) is not unique. A different reduction is based on the variables ũ = (h_{ αβ }, Π_{ αβ }, Φ_{ jαβ }), where \({\Pi _{\alpha \beta}}: = {n^\mu}{\overset \circ \nabla _\mu}{h_{\alpha \beta}}\) is the derivative of g_{ αβ } in the direction of the future-directed unit normal n^{ µ } to the time-slices t = const, and \({\Phi _{j\alpha \beta}}: = {\overset \circ \nabla _j}{h_{\alpha \beta}}\). This yields a first-order system, which is symmetric hyperbolic as long as the t = const slices are spacelike, independent of whether or not ∂_{ t } is timelike [18, 286].
4.1.2 Constraint propagation and damping
The hyperbolicity results described above guarantee that unique solutions of the nonlinear wave system (4.5) exist, at least for short times, and that they depend continuously on the initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\). However, in order to obtain a solution of Einstein’s field equations one has to ensure that the harmonic constraint (4.3) is identically satisfied.
For a discussion of possible effects due to nonlinearities in the constraint propagation system, see [185].
4.1.3 Geometric issues
The next issue is the question of geometric uniqueness. Let g^{(1)} and g^{(2)} be two solutions of Einstein’s equations with the same initial data on t = 0, i.e., \(g_{\alpha \beta}^{(1)}(0,x) = g_{\alpha \beta}^{(2)}(0,x),{\partial _t}g_{\alpha \beta}^{(1)}(0,x) = {\partial _t}g_{\alpha \beta}^{(2)}(0,x)\). Are these solutions related, at least for small time, by a diffeomorphism? Again, the answer is affirmative [169, 164] because one can transform both solutions to harmonic coordinates using the above diffeomorphism ϕ without changing their initial data. It then follows by the uniqueness property of the nonlinear wave system (4.5) that the transformed solutions must be identical, at least on some sufficiently small time interval. Note that this geometric uniqueness property also implies that the solutions are, at least locally, independent of the background metric. For further results on geometric uniqueness involving only the first and second fundamental forms of the initial surface, see [127], where it is shown that every such initial-data set satisfying the Hamiltonian and momentum constraints possesses a unique maximal Cauchy development.
Finally, we mention that results about the nonlinear stability of Minkowski spacetime with respect to vacuum and vacuum-scalar perturbations have been established based on the harmonic system [283, 284], offering an alternative proof to the one of [129].
4.2 The ADM formulation
4.2.1 Algebraic gauge conditions
These arguments were used in [308] to show that the evolution system (4.20, 4.21) with fixed lapse and shift is weakly but not strongly hyperbolic. The results in [308] also analyze modifications of the equations for which the lapse is densitized and the Hamiltonian constraint is used to modify the trace of Eq. (4.21). The conclusion is that such changes cannot make the evolution equations (4.20, 4.21) strongly hyperbolic. Therefore, these equations, with given shift and densitized lapse, are not suited for numerical evolutions.^{14}
4.2.2 Dynamical gauge conditions leading to a wellposed formulation
 If instead of imposing the dynamical shift condition (4.28), β is a priori specified, then the resulting evolution system, consisting of Eqs. (4.27, 4.20, 4.21), is weakly hyperbolic for any choice of ƒ. Indeed, in that case the symbol (4.36) in the vector sector reduces to the Jordan block$${Q^{(vector)}}(ik)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} 0 & 1 \\ 0 & 0 \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right),$$(4.38)which cannot be diagonalized.

When linearized about Minkowski spacetime, it is possible to classify the characteristic fields into physical, constraint-violating and gauge fields; see [106]. For the system (4.29–4.32) the physical fields are the ones in the tensor sector, \({{\hat l}_{ij}} \pm {{\hat p}_{ij}}\), the constraint-violating ones are \({{\bar p}_j}\) and \({{\bar l}{\prime}} \pm {{\bar p}{\prime}}\), and the gauge fields are the remaining characteristic variables. Observe that the constraint-violating fields are governed by a strongly hyperbolic system (see also Section 4.2.4 below), and that in this particular formulation of the ADM equations the gauge fields are coupled to the constraint-violating ones. This coupling is one of the properties that make it possible to cast the system as a strongly hyperbolic one.
We conclude that the evolution system (4.27, 4.28, 4.20, 4.21) is strongly hyperbolic if and only if ƒ > 0 and ƒ ≠ 1. Although the full harmonic gauge condition (4.3) is excluded from these restrictions,^{15} there is still a large family of evolution equations for the lapse and shift that give rise to a strongly hyperbolic problem together with the standard evolution equations (4.20, 4.21) from the 3+1 decomposition.
4.2.3 Elliptic gauge conditions leading to a wellposed formulation
4.2.4 Constraint propagation
4.3 The BSSN formulation
The BSSN formulation is based on the 3+1 decomposition of Einstein’s field equations. Unlike the harmonic formulation, which has been motivated by the mathematical structure of the equations and the understanding of the Cauchy formulation in general relativity, this system has been mainly developed and improved based on its capability of numerically evolving spacetimes containing compact objects in a stable way. Interestingly, in spite of the fact that the BSSN formulation is based on an entirely different motivation, mathematical questions like the well-posedness of its Cauchy problem can be answered, at least for most gauge conditions.
4.3.1 The hyperbolicity of the BSSN evolution equations
In fact, the ADM formulation in the spatial harmonic gauge described in Section 4.2.3 and the BSSN formulation are based on some common ideas. In the covariant reformulation of BSSN just mentioned, the variable \({{\tilde \Gamma}^i}\) is just the quantity V^{ i } defined in Eq. (4.40), where γ_{ ij } is replaced by the conformal metric \({\overset \circ \gamma _{ij}}\). Instead of requiring \({{\tilde \Gamma}^i}\) to vanish, which would convert the operator on the right-hand side of Eq. (4.60) into a quasilinear elliptic operator, one promotes this quantity to an independent field satisfying the evolution equation (4.59) (see also the discussion below Equation (2.18) in [390]). In this way, the \({{\tilde \gamma}_{ij}} - {{\tilde A}_{ij}}\)-block of the evolution equations forms a wave system. However, this system is coupled through its principal terms to the evolution equations of the remaining variables, and so one needs to analyze the complete system. As follows from the discussion below, it is crucial to add the momentum constraint to Eq. (4.59) with an appropriate factor m in order to obtain a hyperbolic system.
Yet a different approach to analyzing the hyperbolicity of BSSN has been given in [219, 220] based on a new definition of strongly and symmetric hyperbolicity for evolution systems, which are first order in time and second order in space. Based on this definition, it has been verified that the BSSN system (4.69, 4.53, 4.56–4.59) is strongly hyperbolic for σ > 0 and m > 1/4 and symmetric hyperbolic for 6σ = 4m − 1 > 0. (Note that this generalizes the original result in [373] where, in addition, m > 1 was required.) The results in [220] also discuss more general 3+1 formulations, including the one in [308] and construct constraintpreserving boundary conditions. The relation between the different approaches to analyzing hyperbolicity of evolution systems, which are first order in time and second order in space, has been analyzed in [221].
The structure of a Butcher table.

  0      |
  c_{2}  | a_{21}
  c_{3}  | a_{31}  a_{32}
  ⋮      | ⋮       ⋮       ⋱
  c_{s}  | a_{s1}  a_{s2}  ⋯  a_{s,s−1}
  -------+------------------------------
         | b_{1}   b_{2}   ⋯  b_{s−1}   b_{s}
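A generic explicit Runge-Kutta step can be driven directly by the coefficients of such a table. The following sketch (our own illustration; the function name is ours) implements this, shown with the classical RK4 tableau, though any explicit table of this shape can be substituted:

```python
import numpy as np

# Sketch: one explicit Runge-Kutta step for y' = f(t, y), driven by the
# Butcher coefficients (c_i | a_ij / b_i) laid out in the table above.

def rk_step(f, t, y, h, a, b, c):
    """Advance y by one step of size h using an explicit Butcher table."""
    s = len(b)
    k = [np.asarray(f(t, y))]                       # first stage, c_1 = 0
    for i in range(1, s):
        yi = y + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(np.asarray(f(t + c[i] * h, yi)))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical RK4 Butcher table
a = [[0,   0,   0, 0],
     [0.5, 0,   0, 0],
     [0,   0.5, 0, 0],
     [0,   0,   1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]
```

Integrating y′ = y from y(0) = 1 to t = 1 with ten such steps reproduces e to fourth-order accuracy.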
4.3.2 Constraint propagation
The hyperbolicity of the constraint propagation system (4.74–4.76) has been analyzed in [220, 52, 81, 80], and [315] and shown to be reducible to a symmetric hyperbolic first-order system for m > 1/4. Furthermore, there are no superluminal characteristic fields if 1/4 < m ≤ 1. Because of finite speed of propagation, this means that BSSN with 1/4 < m ≤ 1 (which includes the standard choice m = 1) does not possess superluminal constraint-violating modes. This is an important property, for it shows that constraint violations that originate inside black-hole regions (which usually dominate the constraint errors due to high gradients at the punctures or stuffing of the black-hole singularities in the turducken approach [156, 81, 80]) cannot propagate to the exterior region.
In [353] a general result is derived, showing that under a mild assumption on the form of the constraints, strong hyperbolicity of the main evolution system implies strong hyperbolicity of the constraint propagation system, with the characteristic speeds of the latter being a subset of those of the former. The result does not hold in general if “strong” is replaced by “symmetric”, since there are known examples for which the main evolution system is symmetric hyperbolic, while the constraint propagation system is only strongly hyperbolic [108].
4.4 Other hyperbolic formulations
There exist many other hyperbolic reductions of Einstein’s field equations. In particular, there has been a large amount of work on casting the evolution equations into first-order symmetric [2, 182, 195, 3, 21, 155, 248, 443, 22, 74, 234, 254, 383, 377, 18, 285, 86] and strongly hyperbolic [62, 63, 12, 59, 60, 13, 64, 367, 222, 78, 58, 82] form; see [182, 352, 188, 353] for reviews. For systems involving wave equations for the extrinsic curvature, see [128, 2]; see also [424] and [20, 75, 374, 379, 436] for applications to perturbation theory and the linear stability of solitons and hairy black holes.
Recently, there has also been work deriving strongly or symmetric hyperbolic formulations from an action principle [79, 58, 243].
5 Boundary Conditions: The Initial-Boundary Value Problem
In Section 3 we discussed the general Cauchy problem for quasilinear hyperbolic evolution equations on the unbounded domain ℝ^{ n }. However, in the numerical modeling of such problems one is faced with the finiteness of computer resources. A common approach for dealing with this problem is to truncate the domain via an artificial boundary, thus forming a finite computational domain with outer boundary. Absorbing boundary conditions must then be specified at the boundary such that the resulting IBVP is well posed and such that the amount of spurious reflection is minimized.
 For a smooth solution to exist, the data f and g must satisfy appropriate compatibility conditions at the intersection S := {0} × ∂Σ between the initial and boundary surface [344]. Assuming that u is continuous, for instance, Eqs. (5.2, 5.3) imply that g(0, x) = b(0, x, f(x))f(x) for all x ∈ ∂Σ. If u is continuously differentiable, then taking a time derivative of Eq. (5.3) and using Eqs. (5.1, 5.2) leads to$${g_t}(0,x) = c(x)\;\left[ {\sum\limits_{j = 1}^n {{A^j}} (0,x,f(x)){{\partial f} \over {\partial {x^j}}}(x) + F(0,x,f(x))} \right] + {b_t}(0,x,f(x))f(x),\qquad x \in \partial \Sigma ,$$(5.4)where c(x) is the complex r × m matrix with coefficients$$c{(x)^A}_{\;B} = b{(0,x,f(x))^A}_{\;B} + \sum\limits_{C = 1}^m {{{\partial {b^A}_C} \over {\partial {u^B}}}} (0,x,f(x))f{(x)^C},\qquad A = 1, \ldots ,r,\quad B = 1, \ldots ,m.$$(5.5)Assuming higher regularity of u, one obtains additional compatibility conditions by taking further time derivatives of Eq. (5.3). In particular, for an infinitely-differentiable solution u, one has an infinite family of such compatibility conditions at S, and one must make sure that the data f, g satisfies each of them if the solution u is to be reproduced by the IBVP. If an exact solution u^{(0)} of the partial differential equation (5.1) is known, a convenient way of satisfying these conditions is to choose the data such that, in a neighborhood of S, f and g agree with the corresponding values for u^{(0)}, i.e., such that f(x) = u^{(0)}(0, x) and g(t, x) = b(t, x, u^{(0)}(t, x))u^{(0)}(t, x) for (t, x) in a neighborhood of S. However, depending on the problem at hand, this might be too restrictive.

The next issue is the question of what class of boundary conditions (5.3) leads to a well-posed problem. In particular, one would like to know which restrictions on the matrix b(t, x, u) imply existence of a unique solution, provided the compatibility conditions hold. In order to illustrate this issue with a very simple example, consider the advection equation u_{ t } = u_{ x } on the interval [−1, 1]. The most general solution has the form u(t, x) = h(t + x) for some differentiable function h: (−1, ∞) → ℂ. The function h is determined on the interval [−1, 1] by the initial data alone, and so the initial data alone fixes the solution on the strip −1 − t ≤ x ≤ 1 − t. Therefore, one is not allowed to specify any boundary conditions at x = −1, whereas data must be specified for u at x = 1 in order to uniquely determine the function h on the interval (1, ∞).
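This example can be discretized directly. The sketch below (our own illustration; the function name and grid parameters are ours) uses the one-sided upwind difference for u_t = u_x on [−1, 1] and imposes data only at x = +1, mirroring the argument above:

```python
import numpy as np

# Sketch: upwind discretization of u_t = u_x on [-1, 1].  Information flows
# in the -x direction, so boundary data is imposed only at x = +1; the
# right-sided difference needs no condition at x = -1.

def advect(u, dx, dt, g, t0, nsteps):
    """Evolve u_t = u_x with boundary data u(t, 1) = g(t)."""
    u = u.copy()
    for n in range(nsteps):
        u[:-1] += dt / dx * (u[1:] - u[:-1])   # right-sided (upwind) difference
        u[-1] = g(t0 + (n + 1) * dt)           # boundary condition at x = +1 only
    return u
```

With the exact solution u(t, x) = sin(t + x), initial data sin(x) and boundary data g(t) = sin(t + 1), the scheme converges to the analytic profile at first order in the grid spacing.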

Additional difficulties appear when the system has constraints, like in the case of electromagnetism and general relativity. In the previous Section 4, we saw in the case of Einstein’s equations that it is usually sufficient to solve these constraints on an initial Cauchy surface, since the Bianchi identities and the evolution equations imply that the constraints propagate. However, in the presence of boundaries one can only guarantee that the constraints remain satisfied inside the future domain of dependence of the initial surface Σ_{0}:= {0} × Σ unless the boundary conditions are chosen with care. Methods for constructing constraintpreserving boundary conditions, which make sure that the constraints propagate correctly on the whole spacetime domain [0, T] × Σ will be discussed in Section 6.
There are two common techniques for analyzing an IBVP. The first, discussed in Section 5.1, is based on the linearization and localization principles, and reduces the problem to linear, constant coefficient IBVPs which can be explicitly solved using Fourier transformations, similar to the case without boundaries. This approach, called the Laplace method, is very useful for finding necessary conditions for the well-posedness of linear, constant coefficient IBVPs. Likely, these conditions are also necessary for the quasilinear IBVP, since small-amplitude high-frequency perturbations are essentially governed by the corresponding linearized, frozen coefficient problem. Based on the Kreiss symmetrizer construction [258] and the theory of pseudodifferential operators, the Laplace method also gives sufficient conditions for the linear, variable coefficient problem to be well posed; however, the general theory is rather technical. For a discussion and interpretation of this approach in terms of wave propagation we refer to [241].
The second method, which is discussed in Section 5.2, is based on energy inequalities obtained from integration by parts and does not require the use of pseudodifferential operators. It provides a class of boundary conditions, called maximal dissipative, which leads to a well-posed IBVP. Essentially, these boundary conditions specify data to the incoming normal characteristic fields, or to an appropriate linear combination of the in- and outgoing normal characteristic fields. Although technically less involved than the Laplace one, this method requires the evolution equations (5.1) to be symmetric hyperbolic in order to be applicable, and it gives sufficient, but not necessary, conditions for well-posedness.
In Section 5.3 we also discuss absorbing boundary conditions, which are designed to minimize spurious reflections from the boundary surface.
5.1 The Laplace method
5.1.1 Necessary conditions for well-posedness and the Lopatinsky condition

The condition (5.27) implies that we must specify exactly as many linearlyindependent boundary conditions as there are incoming characteristic fields, since q is the number of negative eigenvalues of the boundary matrix A = A^{1}.
 The violation of condition (5.27) at some (s_{0}, k_{0}) with Re(s_{0}) > 0 and k_{0} ∈ ℝ^{n−1} gives rise to the simple wave solutions$$u(t,{x_1},y) = {e^{{s_0}t + i{k_0}\cdot y}}\tilde u({s_0},{x_1},{k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$(5.28)where ũ(s_{0},·, k_{0}) = T(s_{0}, k_{0})ṽ(s_{0},·, k_{0}) ∈ L^{2}(0, ∞) is a nontrivial solution of the problem (5.23, 5.24) with homogeneous data \(\tilde F = 0\) and \(\tilde g = 0\). Therefore, an equivalent necessary condition for well-posedness is that no such simple wave solutions exist. This is known as the Lopatinsky condition.
 If such a simple wave solution exists for some (s_{0}, k_{0}), then the homogeneity of the problem implies the existence of a whole family$${u_\alpha}(t,{x_1},y) = {e^{\alpha ({s_0}t + i{k_0}\cdot y)}}\tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$(5.29)of such solutions, parametrized by α > 0. In particular, it follows that$$\vert {u_\alpha}(t,{x_1},y)\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert \tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0})\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert {u_\alpha}(0,{x_1},y)\vert ,$$(5.30)such that$${{\vert {u_\alpha}(t,{x_1},y)\vert} \over {\vert {u_\alpha}(0,{x_1},y)\vert}} = {e^{\alpha {\rm{Re}}({s_0})t}} \rightarrow \infty$$(5.31)for all t > 0, as α → ∞. Therefore, one has solutions growing exponentially in time at an arbitrarily large rate.^{18}
We conclude that the strongly hyperbolic evolution system (3.50, 3.51) with αβ = 1 and incoming normal characteristic fields set to zero at the boundary does not give rise to a wellposed IBVP when α > 0 or α < −2. This excludes the parameter range −3/2 < α < 0 for which the system is symmetric hyperbolic. This case is covered by the results in Section 5.2, which utilize energy estimates and show that symmetric hyperbolic problems with zero incoming normal characteristic fields are well posed.
5.1.2 Sufficient conditions for well-posedness and boundary stability
The relevant concept of wellposedness is the following one.
Since boundary stability only requires considering solutions for trivial source terms, F = 0, it is a much simpler condition than Eq. (5.55). Clearly, strong well-posedness in the generalized sense implies boundary stability. The main result is that, modulo technical assumptions, the converse is also true: boundary stability implies strong well-posedness in the generalized sense.
Theorem 5. [258, 340] Consider the linear, constant coefficient IBVP ( 5.50 , 5.51 , 5.52 ) on the half space Σ = {(x_{1}, x_{2}, …, x_{ n }) ∈ ℝ^{ n }: x_{1} > 0}. Assume that equation (5.50) is strictly hyperbolic, meaning that the eigenvalues of the principal symbol P_{0}(ik) are distinct for all k ∈ S^{n−1}. Assume that the boundary matrix A = A^{1} is invertible. Then, the problem is strongly well posed in the generalized sense if and only if it is boundary stable.
Maybe the importance of Theorem 5 is not so much its statement, which concerns only the linear, constant coefficient case for which the solutions can also be constructed explicitly, but rather the method for its proof, which is based on the construction of a smooth symmetrizer symbol, and which is amenable to generalizations to the variable coefficient case using pseudodifferential operators.
In order to formulate the result of this construction, define \(\rho := \sqrt {\vert s{\vert ^2} + \vert k{\vert ^2}}, {s{\prime}}: = s/\rho, {k{\prime}}: = k/\rho\), such that \(({s{\prime}},{k{\prime}}) \in S_ + ^n\) lies on the half sphere \(S_ + ^n: = \{({s{\prime}},{k{\prime}}) \in {\mathbb C} \times {{\mathbb R}^n}:\vert {s{\prime}}{\vert ^2} + \vert {k{\prime}}{\vert ^2} = 1,\,{\rm Re} ({s{\prime}}) > 0\}\) for Re(s) > 0 and k ∈ ℝ^{n−1}. Then, we have,
 (i)
H(s′, k′) = H(s′, k′)* is Hermitian.
 (ii)
H(s′, k′)M(s′, k′) + M(s′, k′)*H(s′, k′) ≥ 2Re(s′)I for all \(({s{\prime}},{k{\prime}}) \in S_ + ^n\).
 (iii) There is a constant C > 0 such that$${\tilde u^{\ast}}H(s\prime ,k\prime)\tilde u + C\vert b\tilde u{\vert ^2}\; \geq \;\vert \tilde u{\vert ^2}$$(5.58)for all ũ ∈ ℂ^{ m } and all (s′, k′) \(\in S_ + ^n\).
This, together with the results obtained in Example 25, yields the following conclusions: the IBVP (5.32, 5.33, 5.34) gives rise to an ill-posed problem if b = 0 or if ∣a/b∣ > 1 and a/b ∈ ℝ, and to a problem, which is strongly well posed in the generalized sense, if b ≠ 0 and ∣a/b∣ < 1. The case ∣a∣ = ∣b∣ ≠ 0 is covered by the energy method discussed in Section 5.2. For the case ∣a/b∣ > 1 with a/b ∈ ℝ, see Section 10.5 in [228].
 The boundary stability condition (5.57) is often called the Kreiss condition. Provided the eigenvalues of the matrix M(s, k) are suitably normalized, it can be shown [258, 228, 241] that the determinant det(b_{−}(s, k)) in Eq. (5.27) can be extended to a continuous function defined for all Re(s) ≥ 0 and k ∈ ℝ^{n−1}, and condition (5.57) can be restated as the following algebraic condition:$$\det ({b_-}(s,k)) \neq 0$$(5.64)for all Re(s) ≥ 0 and k ∈ ℝ^{n−1}. This is a strengthened version of the Lopatinsky condition, since it requires the determinant to be different from zero also for s on the imaginary axis.

As anticipated above, the importance of the symmetrizer construction in Theorem 6 relies on the fact that, based on the theory of pseudodifferential operators, it can be used to treat the linear, variable coefficient IBVP [258]. Therefore, the localization principle holds: if all the frozen coefficient IBVPs are boundary stable and satisfy the assumptions of Theorem 5, then the variable coefficient problem is strongly well posed in the generalized sense.

If the problem is boundary stable, it is also possible to estimate higher-order derivatives of the solutions. For example, if we multiply both sides of the inequality (5.59) by ∣k∣^{2}, integrate over ξ = Im(s) and k and use Parseval’s identity as before, we obtain the estimate (5.55) with u, F and g replaced by their tangential derivatives u_{ y }, F_{ y } and g_{ y }, respectively. Similarly, one obtains the estimate (5.55) with u, F and g replaced by their time derivatives u_{ t }, F_{ t } and g_{ t } if we multiply both sides of the inequality (5.59) by ∣s∣^{2} and assume that u_{ t }(0, x) = 0 for all x ∈ Σ.^{21} Then, a similar estimate follows for the partial derivative, ∂_{1}u, in the x_{1}-direction using the evolution equation (5.6) and the fact that the boundary matrix A^{1} is invertible. Estimates for higher-order derivatives of u follow by an analogous process.

Theorem 5 assumes that the initial data f is trivial, which is not an important restriction since one can always achieve f = 0 by transforming the source term F and the boundary data g, as described below Eq. (5.52). Since the transformed F involves derivatives of f, this means that derivatives of f would appear on the right-hand side of the inequality (5.55), and at first sight it looks like one “loses a derivative” in the sense that one needs to control the derivatives of f to one degree higher than the ones of u. However, the results in [341, 342] improve the statement of Theorem 5 by allowing nontrivial initial data and by showing that the same hypotheses lead to a stronger concept of well-posedness (strong well-posedness, defined below in Definition 9, as opposed to strong well-posedness in the generalized sense).

The results mentioned so far assume strict hyperbolicity and an invertible boundary matrix, which are conditions that are too restrictive for many applications. Unfortunately, there does not seem to exist a general theory, which removes these two assumptions. Partial results include [5], which treats strongly hyperbolic problems with an invertible boundary matrix that are not necessarily strictly hyperbolic, and [293], which discusses symmetric hyperbolic problems with a singular boundary matrix.
5.1.3 Second-order systems
For a more geometric derivation of these results based on estimates derived from the stress-energy tensor associated to the scalar field v, which shows that the above construction for L is sufficient for strong well-posedness, see Appendix B in [263]. For a generalization to the shifted wave equation, see [369].
Example 28. As an application of the theory for systems of wave equations, which are coupled through the boundary conditions, we discuss Maxwell’s equations in their potential formulation on the half space Σ [267]. In the Lorentz gauge and the absence of sources, this system is described by four wave equations ∂^{ μ }∂_{ μ }A_{ν} = 0 for the components (A_{ t }, A_{ x }, A_{ y }, A_{ z }) of the vector potential A_{ μ }, which are subject to the constraint C:= ∂^{ μ }A_{ μ } = 0, where we use the Einstein summation convention.
For a recent development based on the Laplace method, which allows the treatment of second-order IBVPs with more general classes of boundary conditions, including those admitting boundary phenomena like glancing and surface waves, see [262].
5.2 Maximal dissipative boundary conditions
 (i)
u*H(t, x)P_{0}(t, x, s)u ≤ 0 for all u ∈ V_{ p },
 (ii)
V_{ p } is maximal with respect to condition (i); that is, if W_{ p } ⊃ V_{ p } is a linear subspace of ℂ^{ m } containing V_{ p }, which satisfies (i), then W_{ p } = V_{ p }.
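Conditions (i) and (ii) can be made concrete on the simplest possible example. The following sketch, not taken from the text, assumes the 1D wave system in characteristic variables with trivial symmetrizer H = I on the half-line x ≥ 0: it checks that the boundary space defined by coupling the incoming field to the outgoing one with constant q is nonpositive precisely when |q| ≤ 1, and that its dimension matches the count of nonpositive eigenvalues of the boundary matrix, which is the maximality condition (ii) in this setting:

```python
import numpy as np

# Illustrative system (not from the text): u_t = A u_x on x >= 0 with
# A = diag(1, -1) in characteristic variables, symmetrizer H = I,
# boundary at x = 0, unit outward normal s = -1.
A = np.diag([1.0, -1.0])
s = -1.0
P0 = s * A        # boundary matrix P0 = A^j s_j, eigenvalues (-1, +1)

def is_nonpositive_on(q, P0):
    """Condition (i) on V_p = span{(1, q)}: incoming field = q * outgoing."""
    v = np.array([1.0, q])
    return v @ P0 @ v <= 1e-14

# Condition (i) holds exactly for coupling constants |q| <= 1.
assert all(is_nonpositive_on(q, P0) for q in [0.0, 0.5, -1.0, 1.0])
assert not is_nonpositive_on(1.5, P0)

# Condition (ii): maximality forces dim V_p to equal the number of
# nonpositive eigenvalues of H P0 (here one), i.e., exactly one boundary
# condition per incoming field.
n_nonpos = int(np.sum(np.linalg.eigvalsh(P0) <= 0))
assert n_nonpos == 1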
Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:
This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time integral of the L^{2}(Σ)-norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:
Theorem 7. Consider the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ), where the matrix functions A^{ j }(t, x) and b(t, x) and the vector function F(t, x) do not depend on u. Suppose the system is symmetric hyperbolic, and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix P_{0}(t, x, s) is constant in \((t,x) \in {\mathcal T}\).
Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix P_{0}(t, x, s) is invertible.
This theorem was first proven in [189, 275, 344] for the case where the boundary surface \({\mathcal T}\) is noncharacteristic, that is, the boundary matrix P_{0}(t, x, s) is invertible for all \((t,x) \in {\mathcal T}\). A difficulty with the characteristic case is the loss of derivatives of u in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating with the regularity theorem in [387], which is based on special function spaces that control the L^{2}-norms of 2k tangential derivatives and k normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case, see [218, 388].
In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which describes a linear coupling of the outgoing characteristic fields u_{−} to the incoming ones, u_{+}. In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero speed fields. The simplest choice for q is the trivial one, q = 0, in which case data for the incoming fields is specified. A nonzero value of q would be chosen if the boundary is to incorporate some reflecting properties, like the case of a perfect conducting surface in electromagnetism, for example.

q = 0: Sommerfeld boundary condition,

q = −1: Dirichlet boundary condition,

q = 1: Neumann boundary condition.

q = −1, g_{∥} = 0: The boundary condition E_{∥} = 0 describes a perfectly conducting boundary surface.
q = 0, g_{∥} = 0: This is a Sommerfeld-type boundary condition, which, locally, is transparent to outgoing plane waves traveling in the normal direction s,$$E(t,x) = {\mathcal E}{e^{i(\omega t - k\cdot x)}},\qquad B(t,x) = s \wedge E(t,x),$$(5.99)where ω is the frequency, k = ωs the wave vector, and \({\mathcal E}\) the polarization vector, which is orthogonal to k. The generalization of this boundary condition to inhomogeneous data g_{∥} ≠ 0 allows one to specify data on the incoming field E_{∥} + s ∧ B_{∥} at the boundary surface, which is equal to \(2{\mathcal E}{e^{i\omega t}}\) for plane waves traveling in the normal inward direction −s.
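The transparency claim is a one-line vector identity: for the outgoing plane wave with B = s ∧ E and E orthogonal to k = ωs, one has s ∧ (s ∧ E) = −E, so the incoming combination E_∥ + s ∧ B_∥ vanishes. A tiny numerical sketch (the specific normal and polarization vectors are illustrative choices):

```python
import numpy as np

# Outgoing plane wave traveling along the unit normal s, with B = s x E
# and polarization E orthogonal to s; the common phase factor drops out.
s = np.array([0.0, 0.0, 1.0])   # outward normal / propagation direction
E = np.array([1.0, 0.0, 0.0])   # polarization, orthogonal to s
B = np.cross(s, E)

# Incoming combination: E + s x B = E + s x (s x E) = E - E = 0,
# since s.E = 0 and |s| = 1; its tangential part vanishes a fortiori.
incoming = E + np.cross(s, B)
tangential = incoming - np.dot(incoming, s) * s
assert np.allclose(tangential, 0.0)
```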
At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason for this is that there exist many first-order reductions that are inequivalent to each other, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.
5.2.1 Application to systems of wave equations
As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVPs for systems of wave equations coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness; that is, an a priori estimate that controls the first derivatives of the fields in the bulk and at the boundary.
Summarizing, we have seen that the most straightforward first-order reduction of the Klein-Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of Φ is replaced by its derivative Φ_{t} − bΦ_{ x } along the timelike vector (1, −b), which points outside the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed timelike vector field u, which points outside the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], it is possible to generalize this result to systems of quasilinear wave equations on curved backgrounds [264].
Theorem 8. [264] The IBVP ( 5.113 , 5.114 , 5.115 ) is well posed. Given T > 0 and sufficiently small and smooth initial and boundary data \(\Phi _0^A,\Pi _0^A\) and G^{ A } satisfying the compatibility conditions at the edge S = {0} × ∂Σ, there exists a unique smooth solution on M satisfying the evolution equation (5.113) , the initial condition (5.114) and the boundary condition (5.115) . Furthermore, the solution depends continuously on the initial and boundary data.
Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.
5.2.2 Existence of weak solutions and the adjoint problem
Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary space V_{ p }.
Lemma 4. Let \(p \in {\mathcal T}\) be a boundary point. Then, V_{ p } is maximal nonpositive if and only if V*_{ p } is maximal nonnegative.
The lemma implies that solving the original problem −Lu = F with u ∈ D(L) is equivalent to solving the adjoint problem L*v = F with v ∈ D(L*), which, since v(T, x) = 0 is held fixed at Σ_{ T }, corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:
In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint are unique. Since L and L* are closable operators [345], their closures \(\overline L\) and \(\overline {{L^\ast}}\) satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:
If u ∈ X is a weak solution that is sufficiently smooth, it follows from the Green-type identity (5.120) that u has vanishing initial data and satisfies the required boundary conditions, and hence is a solution to the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to be made. See [189, 275, 344, 343, 387] for such “weak=strong” results.
5.3 Absorbing boundary conditions
When modeling isolated systems, the boundary conditions have to be chosen such that they minimize spurious reflections from the boundary surface. This means that, inside the computational domain, the solution of the IBVP should lie as close as possible to the true solution of the Cauchy problem on the unbounded domain. In this sense, the dynamics outside the computational domain is replaced by appropriate conditions on a finite, artificial boundary. Clearly, this can only work in particular situations, where the solutions outside the domain are sufficiently simple that they can be computed and used to construct boundary conditions that are, at least, approximately compatible with them. Boundary conditions that give rise to a well-posed IBVP and achieve this goal are called absorbing, nonreflecting or radiation boundary conditions in the literature, and there has been a substantial amount of work on the construction of such conditions for wave problems in acoustics, electromagnetism, meteorology, and solid geophysics (see [206] for a review). Some recent applications to general relativity are mentioned in Sections 6 and 10.3.1.
One approach to the construction of absorbing boundary conditions is based on suitable series or Fourier expansions of the solution, and derives a hierarchy of local boundary conditions with increasing order of accuracy [153, 46, 240]. Typically, such higher-order local boundary conditions involve solving differential equations at the boundary surface, where the order of the differential equation increases with the order of accuracy. This problem can be dealt with by introducing auxiliary variables at the boundary surface [207, 208].
The starting point for a slightly different approach is an exact nonlocal boundary condition, which involves a convolution with an appropriate integral kernel. A method based on an efficient approximation of this integral kernel is then implemented; see, for instance, [16, 17] for the case of the 2D and 3D flat wave equations and [271, 270, 272] for the Regge-Wheeler [347] and Zerilli [453] equations describing linear gravitational waves on a Schwarzschild background. Although this method is robust, very accurate and stable, it is based on detailed knowledge of the solutions, which might not always be available in more general situations.
In the following, we illustrate some aspects of the problem of constructing absorbing boundary conditions on some simple examples [372]. Specifically, we construct local absorbing boundary conditions for the wave equation with a spherical outer boundary at radius R > 0.
5.3.1 The one-dimensional wave equation
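In one dimension the Sommerfeld condition is exactly absorbing, since outgoing waves are purely transported along a single characteristic. The following minimal finite-difference sketch (an illustration, not the construction from the text) evolves the wave equation on [0, 1] in characteristic variables R = u_t − u_x (right-moving) and L = u_t + u_x (left-moving) with an exact upwind shift at unit CFL; the right boundary imposes the coupling L = qR, with q = 0 (Sommerfeld) absorbing an outgoing pulse and q = −1 (Dirichlet) reflecting it:

```python
import numpy as np

def evolve(q_right, steps=150, n=200):
    """Wave equation on [0, 1] in characteristic variables:
    R = u_t - u_x moves right, L = u_t + u_x moves left.
    Exact upwind shift at unit CFL; right boundary sets L = q_right * R."""
    x = np.linspace(0.0, 1.0, n)
    R = np.exp(-200.0 * (x - 0.5) ** 2)   # purely right-moving pulse
    L = np.zeros(n)
    for _ in range(steps):
        Rn, Ln = R.copy(), L.copy()
        R[1:] = Rn[:-1]                   # shift right by one grid point
        L[:-1] = Ln[1:]                   # shift left by one grid point
        R[0] = 0.0                        # Sommerfeld (q = 0) at x = 0
        L[-1] = q_right * Rn[-1]          # coupling L = q R at x = 1
    return np.sum(R**2 + L**2) / n        # discrete energy

e0 = evolve(0.0, steps=0)                 # initial energy
e_sommerfeld = evolve(0.0)                # q = 0: pulse leaves the domain
e_dirichlet = evolve(-1.0)                # q = -1: pulse is reflected

assert e_sommerfeld < 1e-6 * e0           # absorbed, up to Gaussian tails
assert e_dirichlet > 0.9 * e0             # reflected, energy retained
```

At unit CFL the upwind scheme is an exact shift, so the comparison isolates the effect of the boundary coupling q from any interior numerical dissipation; note that |q| ≤ 1 is precisely the maximal dissipativity condition discussed in Section 5.2.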
5.3.2 The three-dimensional wave equation
5.3.3 The wave equation on a curved background
When the background is curved, it is not always possible to construct in- and outgoing solutions explicitly, as in the previous example. Therefore, it is not even clear how a hierarchy of absorbing boundary conditions should be formulated. However, in many applications the spacetime is asymptotically flat, and if the boundary surface is placed sufficiently far from the strong field region, one can assume that the metric is a small deformation of the flat, Minkowski metric. To first order in M/R, with M the ADM mass and R the areal radius of the outer boundary, these correction terms are given by those of the Schwarzschild metric, and approximate in- and outgoing solutions for all (ℓ,m) modes can again be computed [372].^{25} The M/R terms in the background metric induce two kinds of corrections in the in- and outgoing solutions u_{ℓm}. The first is a curvature correction term, which just adds M/R terms to the coefficients in the sum of Eq. (5.144). This term is local and still obeys Huygens’ principle. The second term is fast decaying (it decays as R/r^{ℓ+1}) and describes the backscatter off the curvature of the background. As a consequence, it is nonlocal (it depends on the past history of the unperturbed solution) and violates Huygens’ principle.
By construction, the boundary conditions \({{\mathcal B}_L}\) are perfectly absorbing for outgoing waves with angular momentum number ℓ ≤ L, including their curvature corrections to first order in M/R. If the firstorder correction terms responsible for the backscatter are taken into account, then \({{\mathcal B}_L}\) are not perfectly absorbing anymore, but the spurious reflections arising from these correction terms have been estimated in [372] to decay at least as fast as (M/R)(kR)^{−2} for monochromatic waves with wave number k satisfying M ≪ k^{−1} ≪ R.
The well-posedness of higher-order absorbing boundary conditions for wave equations on a curved background can be established using the localization principle and the Laplace method [369]. Some applications to general relativity are discussed in Sections 6 and 10.3.1.
6 Boundary Conditions for Einstein’s Equations
The subject of this section is the discussion of the IBVP for Einstein’s field equations. There are at least three difficulties when formulating Einstein’s equations on a finite domain with artificial outer boundaries. First, as we have seen in Section 4, the evolution equations are subject to constraints, which, in general, propagate with nontrivial characteristic speeds. As a consequence, in general there are incoming constraint fields at the boundary that need to be controlled in order to make sure that the constraints propagate correctly, i.e., that constraint-satisfying initial data yields a solution of the evolution equations and the constraints on the complete computational domain, and not just on its domain of dependence. The control of these incoming constraint fields leads to constraint-preserving boundary conditions, and a nontrivial task is to fit these conditions into one of the admissible classes of boundary conditions discussed in the previous Section 5, for which well-posedness can be shown.
A second issue is the construction of absorbing boundary conditions. Unlike the simple examples considered in Section 5.3, for which the fields evolve on a fixed background and in- and outgoing solutions can be represented explicitly, or at least characterized precisely, in general relativity it is not even clear how to define in- and outgoing gravitational radiation, since there are no local expressions for the gravitational energy density and flux. Therefore, the best one can hope for is to construct boundary conditions that approximately control the incoming gravitational radiation in certain regimes, like, for example, the weak field limit, where the field equations can be linearized around, say, a Schwarzschild or Minkowski spacetime.
Finally, the third issue is related to the diffeomorphism invariance of the theory. Ideally, one would like to formulate a geometric version of the IBVP, for which the data given on the initial and boundary surfaces Σ_{0} and \({\mathcal T}\) can be characterized in terms of geometric quantities such as the first and second fundamental forms of these surfaces as embedded in the yet unknown spacetime (M, g). In particular, this means that one should be able to identify equivalent data sets, i.e., those which are related to each other by a diffeomorphism of M, leaving Σ_{0} and \({\mathcal T}\) invariant, by local transformations on Σ_{0} and \({\mathcal T}\), without knowing the solution (M, g). It is currently not even clear if such a geometric uniqueness property does exist; see [186, 355] for further discussions on these points.
A well-posed IBVP for Einstein’s vacuum field equations was first formulated by Friedrich and Nagy [187] based on a tetrad formalism, which incorporates the Weyl curvature tensor as an independent field. This formulation exploits the freedom of choosing local coordinates and the tetrad orientation in order to impose very precise gauge conditions, which are adapted to the boundary surface \({\mathcal T}\) and tailored to the IBVP. These gauge conditions, together with a suitable modification of the evolution equations for the Weyl curvature tensor using the constraints (cf. Example 32), lead to a first-order symmetric hyperbolic system in which all the constraint fields propagate tangentially to \({\mathcal T}\) at the boundary. As a consequence, no constraint-preserving boundary conditions need to be specified, and the only incoming fields are related to the gravitational radiation, at least in the context of the approximations mentioned above. With this, the problem can be shown to be well posed using the techniques described in Section 5.2.
After the pioneering work of [187], there was much effort in formulating a well-posed IBVP for metric formulations of general relativity, on which most numerical calculations are based. However, with the exception of particular cases in spherical symmetry [249], the linearized field equations [309] or the restriction to flat, totally reflecting boundaries [404, 405, 106, 98, 219, 220, 410, 29, 15], not much progress had been made towards obtaining a manifestly well-posed IBVP with nonreflecting, constraint-preserving boundary conditions. The difficulties encountered were similar to those described in Examples 31 and 32. Namely, controlling the incoming constraint fields usually resulted in boundary conditions for the main system involving either derivatives of its characteristic fields or fields propagating with zero speed, when it was written in first-order symmetric hyperbolic form. Therefore, the theory of maximal dissipative boundary conditions could not be applied in these attempts. Instead, boundary conditions controlling the incoming characteristic constraint fields were specified and combined with more or less ad hoc conditions controlling the gauge and gravitational degrees of freedom, and verified to satisfy the Lopatinsky condition (5.27) using the Laplace method; see [395, 108, 378, 220, 363, 368].
The breakthrough in the metric case came with the work of Kreiss and Winicour [267], who formulated a well-posed IBVP for the linearized Einstein vacuum field equations with harmonic coordinates. Their method is based on the pseudodifferential first-order reduction of the wave equation described in Section 5.1.3, which, when combined with Sommerfeld boundary conditions, yields a problem that is strongly well posed in the generalized sense and, when applied to systems of equations, allows a certain hierarchical coupling in the boundary conditions. This work was then generalized to shifted wave equations and higher-order absorbing boundary conditions in [369]. Later, it was recognized that the results in [267] could also be established based on the usual a priori energy estimates obtained from integration by parts [263]. Finally, it was found that the boundary conditions imposed were actually maximal dissipative for a specific nonstandard class of first-order symmetric hyperbolic reductions of the wave system; see Section 5.2.1. Unlike the reductions considered in earlier work, this nonstandard class has the property that the boundary surface is noncharacteristic, which implies that no zero speed fields are present, and yields a strongly well-posed system. Based on this reduction and the theory of quasilinear symmetric hyperbolic formulations with maximal dissipative boundary conditions [218, 388], it was possible to extend the results in [267, 263] and formulate a well-posed IBVP for quasilinear systems of wave equations [264] with a certain class of boundary conditions (see Theorem 8), which was sufficiently flexible to treat the Einstein equations. Furthermore, the new reduction also offers the interesting possibility of extending the proof to the discretized case using finite difference operators satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.
In order to parallel the presentation in Section 4, here we focus on the IBVP for Einstein’s equations in generalized harmonic coordinates and the IBVP for the BSSN system. The first case, which is discussed in Section 6.1, is an application of Theorem 8. In the BSSN case, only partial results have been obtained so far, but since the BSSN system is widely used, we nevertheless present some of these results in Section 6.2. In Section 6.3 we discuss some of the problems encountered when trying to formulate a geometric uniqueness theorem and, finally, in Section 6.4 we briefly mention alternative approaches to the IBVP, which do not require an artificial boundary.
For an alternative approach to treating the IBVP, which is based on the imposition of the Gauss-Codazzi equations at \({\mathcal T}\), see [191, 192, 194, 193]. For numerical studies, see [249, 104, 40, 404, 405, 98, 287, 244, 378, 253, 61, 362, 35, 33, 368, 57, 56], especially [366] and [369] for a comparison between different boundary conditions used in numerical relativity and [365] for a numerical implementation of higher-order absorbing boundary conditions. For review articles on the IBVP in general relativity, see [372, 355, 435].
At present, there are no numerical simulations that are based directly on the well-posed IBVP for the tetrad formulation [187] or the well-posed IBVP for the harmonic formulation [267, 263, 264] described in Section 6.1, nor is there a numerical implementation of the constraint-preserving boundary conditions for the BSSN system presented in Section 6.2. The closest example is the harmonic approach described in [286, 363, 366], which has been shown to be well posed in the generalized sense in the high-frequency limit [369]. However, as mentioned above, the well-posed IBVP in [264] opens the door for a numerical discretization based on the energy method, which can be proven to be stable, at least in the linearized case.
6.1 The harmonic formulation
Here, we discuss the IBVP formulated in [264] for the Einstein vacuum equations in generalized harmonic coordinates. The starting point is a manifold of the form M = [0, T] × Σ, with Σ a threedimensional compact manifold with C^{∞}boundary ∂Σ, and a given, fixed smooth background metric \({\overset \circ g _{\alpha \beta}}\) with corresponding LeviCivita connection \(\overset \circ \nabla\), as in Section 4.1. We assume that the time slices Σ_{ t }:= {t} × Σ are spacelike and that the boundary surface \({\mathcal T}: = [0,T] \times \partial \Sigma\) is timelike with respect to \({\overset \circ g_{\alpha \beta}}\).
6.1.1 Wellposedness of the IBVP
This result also applies to the modified system (4.16), since the constraint damping terms, which are added, modify neither the principal part of the main evolution system nor that of the constraint propagation system.
6.2 Boundary conditions for BSSN
Here we discuss boundary conditions for the BSSN system (4.52–4.59), which is used extensively in numerical calculations of spacetimes describing dynamic black holes and neutron stars. Unfortunately, to date, this system lacks an initial-boundary value formulation for which well-posedness in the full nonlinear case has been proven. Without doubt, the reason for this lies in the structure of the evolution equations, which are mixed first/second order in space, and whose principal part is much more complicated than in the harmonic case, where one deals with a system of wave equations.
A first step towards formulating a well-posed IBVP for the BSSN system was performed in [52], where the evolution equations (4.52, 4.53, 4.56–4.59) with a fixed shift and the relation f = μ ≡ (4m − 1)/3 were reduced to a first-order symmetric hyperbolic system. Then, a set of six boundary conditions consistent with this system could be formulated based on the theory of maximal dissipative boundary conditions. Although this gives rise to a well-posed IBVP, the boundary conditions specified in [52] are not compatible with the constraints, and therefore, one does not necessarily obtain a solution to the full set of Einstein’s equations beyond the domain of dependence of the initial data surface. In a second step, constraint-preserving boundary conditions for BSSN with a fixed shift were formulated in [220], and cast into maximal dissipative form for the linearized system (see also [15]). However, even at the linearized level, these boundary conditions are too restrictive because they constitute a combination of Dirichlet and Neumann boundary conditions on the metric components, and in this sense they are totally reflecting instead of absorbing. More general constraint-preserving boundary conditions were also considered in [220] and, based on the Laplace method, shown to satisfy the Lopatinsky condition (5.27).

Boundary conditions on the gauge variables
There are four conditions that must be imposed on the gauge functions, namely the lapse and shift. These conditions are motivated by the linearized analysis, where the gauge propagation system, consisting of the evolution equations for lapse and shift obtained from the BSSN equations (4.52–4.55, 4.59), decouples from the remaining evolution equations. Surprisingly, this gauge propagation system can be cast into symmetric hyperbolic form [315], for which maximal dissipative boundary conditions can be specified, as described in Section 5.2. It is remarkable that the gauge propagation system has such a nice mathematical structure, since the equations (4.52, 4.54, 4.55) have been specified by hand and mostly motivated by numerical experiments instead of mathematical analysis.
In terms of the operator \({\Pi ^i}_j = {\delta ^i}_j - {s^i}{s_j}\) projecting onto vectors tangential to the boundary, the four conditions on the gauge variables can be written as$${s^i}{\partial _i}\alpha = 0,$$(6.10)$${\beta ^s} = 0,$$(6.11)$${\Pi ^i}_j\,\left({{\partial _t} + {{\sqrt {3\kappa}} \over 2}{s^k}{\partial _k}} \right){\beta ^j} = {\kappa \over {f - \kappa}}{\Pi ^i}_j\,{\tilde \gamma ^{jk}}{\partial _k}\alpha .$$(6.12)Eq. (6.10) is a Neumann boundary condition on the lapse, and Eq. (6.11) sets the normal component of the shift to zero, as explained above. Geometrically, this implies that the boundary surface \({\mathcal T}\) is orthogonal to the time slices Σ_{ t }. The other two conditions in Eq. (6.12) are Sommerfeld-like boundary conditions involving the tangential components of the shift and the tangential derivatives of the lapse; they arise from the analysis of the characteristic structure of the gauge propagation system. An alternative to Eq. (6.12), also described in [315], is to set the tangential components of the shift to zero, which, together with Eq. (6.11), is equivalent to setting β^{ i } = 0 at the boundary. This alternative may be better suited for IBVPs with nonsmooth boundaries, such as cubes, where additional compatibility conditions must be enforced at the edges.
Constraintpreserving boundary conditions
Next, there are three conditions requiring that the momentum constraint be satisfied at the boundary. In terms of the BSSN variables this implies$${\tilde D^j}{\tilde A_{ij}} - {2 \over 3}{\tilde D_i}K + 6{\tilde A_{ij}}{\tilde D^j}\phi = 8\pi {G_N}{j_i}.$$(6.13)As shown in [315], Eq. (6.13) yields homogeneous maximal dissipative boundary conditions for a symmetric hyperbolic first-order reduction of the constraint propagation system (4.74, 4.75, 4.76). Since this system is also linear and its boundary matrix has constant rank if β^{ s } = 0, it follows from Theorem 7 that the propagation of constraint violations is governed by a well-posed IBVP. This implies, in particular, that solutions whose initial data satisfy the constraints exactly automatically satisfy the constraints on each time slice Σ_{ t }. Furthermore, small initial constraint violations, which are usually present in numerical applications, yield solutions for which the growth of the constraint violations can be bounded in terms of the initial violations.
Radiation controlling boundary conditions
Finally, the last two boundary conditions are intended to control the incoming gravitational radiation, at least approximately, and specify the complex Weyl scalar Ψ_{0}, cf. Example 32. In order to describe this boundary condition we first define the quantities \({{\bar {\mathcal E}}_{ij}}: = {{\tilde R}_{ij}} + R_{ij}^\phi + {e^{4\phi}}({1 \over 3}K{{\tilde A}_{ij}} - {{\tilde A}_{il}}\tilde A_j^l) - 4\pi {G_N}{\sigma _{ij}}\) and \({{\bar {\mathcal B}}_{kij}}: = {e^{4\phi}}\left[ {{{\tilde D}_k}{{\tilde A}_{ij}} - 4\left({{{\tilde D}_{(i}}\phi} \right){{\tilde A}_{j)k}}} \right]\), which determine the electric and magnetic parts of the Weyl tensor through \({E_{ij}} = {{\bar {\mathcal E}}_{ij}} - {1 \over 3}{\gamma _{ij}}{\gamma ^{kl}}{{\bar {\mathcal E}}_{kl}}\) and \({B_{ij}} = {\varepsilon _{kl(i}}{{\bar {\mathcal B}}^{kl}}{\,_{j)}}\), respectively. Here, ε_{ kij } denotes the volume form with respect to the three metric γ_{ ij }. In terms of the operator \({P^{ij}}_{lm} = {\Pi ^i}_{(l}{\Pi ^j}_{m)} - {1 \over 2}{\Pi ^{ij}}{\Pi _{lm}}\) projecting onto symmetric traceless tensors tangential to the boundary, the boundary condition reads$${P^{ij}}_{lm}{\bar {\mathcal E} _{ij}} + \left({{s^k}{P^{ij}}_{lm} - {s^i}{P^{kj}}_{lm}} \right){\bar {\mathcal B} _{kij}} = {P^{ij}}_{lm}{G_{ij}},$$(6.14)with G_{ ij } a given smooth tensor field on the boundary surface \({\mathcal T}\). The relation between G_{ ij } and Ψ_{0} is the following: if n = α^{−1}(∂_{ t } − β^{ i }∂_{ i }) denotes the future-directed unit normal to the time slices, we may construct an adapted Newman-Penrose null tetrad \(\{K,L,Q,\bar Q\}\) at the boundary by defining K := n + s, L := n − s, and by choosing Q to be a complex null vector orthogonal to K and L, normalized such that \({Q^\mu}{{\bar Q}_\mu} = 2\). Then, we have Ψ_{0} = (E_{ kl } − iB_{ kl })Q^{ k }Q^{ l } = G_{ kl }Q^{ k }Q^{ l }. For typical applications involving the modeling of isolated systems one may set G_{ ij } to zero. However, since this is in general not compatible with the initial data (see the discussion in Section 10.3), an alternative is to freeze the value of G_{ ij } to the one computed from the initial data.
The boundary condition (6.14) can be partially motivated by considering an isolated system, which, globally, is described by an asymptotically-flat spacetime. Therefore, if the outer boundary is placed far enough away from the strong field region, one may linearize the field equations on a Minkowski background to a first approximation. In this case, one is in the same situation as in Example 32, where the Weyl scalar Ψ_{0} is an outgoing characteristic field when constructed from the adapted null tetrad. Furthermore, one can also appeal to the peeling behavior of the Weyl tensor [328], in which Ψ_{0} is the fastest decaying component along an outgoing null geodesic and describes the incoming radiation at past null infinity. While Ψ_{0} can only be defined in an unambiguous way at null infinity, where a preferred null tetrad exists, the boundary condition (6.14) has been successfully implemented and tested numerically for truncated domains with artificial boundaries in the context of the harmonic formulation; see, for example, [366]. Estimates on the amount of spurious reflection introduced by this condition have also been derived in [88, 89]; see also [135].
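The algebra behind the adapted null tetrad and the contraction Ψ_{0} = (E_{kl} − iB_{kl})Q^{k}Q^{l} can be sketched in a few lines. The following example assumes flat-space (Euclidean) index algebra, a boundary normal along e_z, and arbitrary sample values for the symmetric trace-free tensors E_{ij} and B_{ij} (all hypothetical, for illustration only):

```python
import numpy as np

# Adapted complex null vector Q for a boundary with unit normal s = e_z:
# Q is null (Q.Q = 0) and normalized so that Q.conj(Q) = 2.
s = np.array([0.0, 0.0, 1.0])
Q = np.array([1.0, 1.0j, 0.0])

assert abs(np.dot(Q, Q)) < 1e-14                # null
assert abs(np.dot(Q, np.conj(Q)) - 2.0) < 1e-14 # normalization

# Hypothetical symmetric trace-free electric/magnetic Weyl parts E_ij, B_ij.
E = np.array([[1.0, 0.5, 0.0], [0.5, -1.0, 0.0], [0.0, 0.0, 0.0]])
B = np.array([[0.2, 0.0, 0.0], [0.0, -0.2, 0.0], [0.0, 0.0, 0.0]])
assert abs(np.trace(E)) < 1e-14 and abs(np.trace(B)) < 1e-14

# Psi0 = (E_kl - i B_kl) Q^k Q^l; since Q is tangential to the boundary,
# only the tangential trace-free parts of E and B contribute, which is
# exactly what the projector in the boundary condition singles out.
psi0 = np.einsum('kl,k,l->', E - 1j * B, Q, Q)
```

Setting G_{ij} in the boundary condition then amounts to prescribing this contraction, with G_{ij} = 0 the no-incoming-radiation choice discussed above.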
6.3 Geometric existence and uniqueness

Geometric existence. Let (M, g) be any smooth solution of Einstein’s vacuum field equations on the manifold M = [0, T] × Σ corresponding to initial data (h, k) on Σ_{0} and boundary data ψ on \({\mathcal T}\), where h and k represent, respectively, the first and second fundamental forms of the initial surface Σ_{0} as embedded in (M, g). Is it possible to reproduce this solution with any of the well-posed IBVPs mentioned so far, at least on a submanifold M′ = [0, T′] × Σ with 0 < T′ ≤ T? That is, does there exist initial data f and boundary data q for this IBVP and a diffeomorphism ϕ: M′ → ϕ(M′) ⊂ M, which leaves Σ_{0} and \({{\mathcal T}\prime}\) invariant, such that the metric constructed from this IBVP is equal to ϕ*g on M′?

Geometric uniqueness. Is the solution (M, g) uniquely determined by the data (h, k, ψ)? Given a well-posed IBVP for which geometric existence holds, the question about geometric uniqueness can be reduced to the analysis of this particular IBVP in the following way: let u_{1} and u_{2} be two solutions of the IBVP on the manifold M = [0, T] × Σ with corresponding data (f_{1}, q_{1}) and (f_{2}, q_{2}). Suppose the two solutions induce the same data (h, k) on Σ_{0} and ψ on \({\mathcal T}\). Does there exist a diffeomorphism ϕ: M′ = [0, T′] × Σ → ϕ(M′) ⊂ M, which leaves Σ_{0} and \({{\mathcal T}\prime}\) invariant, such that the metrics g_{1} and g_{2} corresponding to u_{1} and u_{2} are related to each other by g_{2} = ϕ*g_{1} on M′?
 (i)
It is a priori not clear what the boundary data ψ should represent geometrically. Unlike the case of the initial surface, where the data represents the first and second fundamental forms of Σ_{0} as a spatial surface embedded in the constructed spacetime (M, g), it is less clear what the geometric meaning of ψ should be since it is restricted by the characteristic structure of the evolution equations, as discussed in Section 5.
 (ii)
The boundary data (q_{ K }, q_{ L }, q_{ Q }, q_{ QQ }) in the boundary conditions (6.2, 6.3, 6.4, 6.5) for the harmonic formulation and the boundary data G_{ ij } in the boundary condition (6.14) for the BSSN formulation ultimately depend on the specific choice of a future-directed timelike vector field T at the boundary surface \({\mathcal T}\). Together with the unit outward normal N to \({\mathcal T}\), this vector defines the preferred null directions K = T + N and L = T − N, which are used to construct the boundary-adapted null tetrad in the harmonic case and the projection operators \({\Pi ^\mu}_\nu = {\delta ^\mu}_\nu + {T^\mu}{T_\nu} - {N^\mu}{N_\nu}\) and \({P^{\mu \nu}}_{\alpha \beta} = {\Pi ^\mu}_\alpha {\Pi ^\nu}_\beta - {1 \over 2}{\Pi ^{\mu \nu}}{\Pi _{\alpha \beta}}\) in the BSSN one. Although it is tempting to define T as the unit, future-directed timelike vector tangent to \({\mathcal T}\), which is orthogonal to the cross sections ∂Σ_{ t }, this definition would depend on the particular foliation Σ_{ t } the formulation is based on, and so the resulting vector T would be gauge-dependent. A similar issue arises in the tetrad formulation of [187].
 (iii)
When addressing the geometric uniqueness issue, an interesting question is whether or not it is possible to determine from the data sets (f_{1}, q_{1}) and (f_{2}, q_{2}) alone if they are equivalent in the sense that their solutions u_{1} and u_{2} induce the same geometric data (h, k, ψ). Therefore, the question is whether or not one can identify equivalent data sets by considering only transformations on the initial and boundary surfaces Σ_{0} and \({\mathcal T}\), without knowing the solutions u_{1} and u_{2}.
Although a complete answer to these questions remains a difficult task, there has been some recent progress towards their understanding. In [186] a method was proposed to geometrically single out a preferred time direction T at the boundary surface \({\mathcal T}\). This is done by considering the trace-free part of the second fundamental form, and proving that under certain conditions, which are stable under perturbations, the corresponding linear map on the tangent space possesses a unique timelike eigenvector. Together with the unit outward normal vector N, the vector field T defines a distinguished adapted null tetrad at the boundary, from which geometrically meaningful boundary data could be defined. For instance, the complex Weyl scalar Ψ_{0} can then be defined as the contraction Ψ_{0} = C_{ αβγδ }K^{ α }Q^{ β }K^{ γ }Q^{ δ } of the Weyl tensor C_{ αβγδ } associated with the metric g_{ μν } along the null vectors K and Q, and the definition is unique up to the usual spin rotational freedom Q ↦ e^{ iφ }Q; therefore, the Weyl scalar Ψ_{0} is a good candidate for forming part of the boundary data ψ.
In [355] it was suggested that the unique specification of a vector field T may not be a fundamental problem, but rather the manifestation of the inability to specify a non-incoming radiation condition correctly. In the linearized case, for example, setting to zero the Weyl scalar Ψ_{0} computed from the boundary-adapted tetrad is transparent to gravitational plane waves traveling along the specific null direction K = T + N, see Example 32, but it induces spurious reflections for outgoing plane waves traveling in other null directions. Therefore, a genuine non-incoming radiation condition should be, in fact, independent of any specific null or timelike direction at the boundary, and can only depend on the normal vector N. This is indeed the case for much simpler systems like the scalar wave equation on a Minkowski background [153], where perfectly absorbing boundary conditions are formulated as a nonlocal condition, which is independent of a preferred time direction at the boundary.
Aside from controlling the incoming gravitational degrees of freedom, the boundary data ψ should also comprise information related to the geometric evolution of the boundary surface. In [187] this was achieved by specifying the mean curvature of \({\mathcal T}\) as part of the boundary data. In the harmonic formulation described in Section 6.1 this information is presumably contained in the functions q_{ K }, q_{ L } and q_{ Q }, but their geometric interpretation is not clear.
In order to illustrate some of the issues related to the geometric existence and uniqueness problem in a simpler context, in what follows we analyze the IBVP for linearized gravitational waves propagating on a Minkowski background. Before analyzing this case, however, we make two remarks. First, it should be noted [186] that the geometric uniqueness problem, especially an understanding of point (iii), also has practical interest, since in long-term evolutions the gauge may threaten to break down at some point, requiring a redefinition. The second remark concerns the formulation of the Einstein IBVP in generalized harmonic coordinates, described in Sections 4.1 and 6.1, where general covariance was maintained by introducing a background metric \({\overset \circ g _{\mu \nu}}\) on the manifold M. IBVPs based on this approach have been formulated in [369] and [264] and further developed in [434] and [433]. However, one has to emphasize that this approach does not automatically solve the geometric existence and uniqueness problems described here: although it is true that the IBVP is invariant with respect to any diffeomorphism ϕ: M → M, which acts on the dynamical and the background metric at the same time, the question of the dependency of the solution on the background metric remains.
6.3.1 Geometric existence and uniqueness in the linearized case
Theorem 9. The initial-value problem ( 6.15 , 6.18 ) possesses a smooth solution h_{ αβ }, which is unique up to an infinitesimal coordinate transformation \({{\tilde h}_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field ξ^{ α }.
Proof. We first show the existence of a solution in the linearized harmonic gauge \({C_\beta} = {\nabla ^\mu}{h_{\beta \mu}} - {1 \over 2}{\nabla _\beta}h = 0\), for which Eq. (6.15) reduces to the system of wave equations ∇^{ μ }∇_{ μ }h_{ αβ } = 0. The initial data, \(({h_{\alpha \beta}}{\vert _{{\Sigma _0}}},{\partial _t}{h_{\alpha \beta}}{\vert _{{\Sigma _0}}})\), for this system is chosen such that \({h_{ij}}{\vert _{{\Sigma _0}}} = h_{ij}^{(0)}\), \({\partial _t}{h_{ij}}{\vert _{{\Sigma _0}}} = 2{\partial _{(i}}{h_{j)0}}{\vert _{{\Sigma _0}}} - 2k_{ij}^{(0)}\) and \({\partial _t}{h_{00}}{\vert _{{\Sigma _0}}} = 2{\delta ^{ij}}k_{ij}^{(0)}\), \({\partial _t}{h_{0j}}{\vert _{{\Sigma _0}}} = {\partial ^i}(h_{ij}^{(0)} - {1 \over 2}{\delta _{ij}}{\delta ^{kl}}h_{kl}^{(0)}) + {1 \over 2}{\partial _j}{h_{00}}{\vert _{{\Sigma _0}}}\), where \((h_{ij}^{(0)},k_{ij}^{(0)})\) satisfy the constraint equations (6.17) and where the initial data for h_{00} and h_{0j} is chosen smooth but otherwise arbitrary. This choice implies the satisfaction of Eq. (6.18) with X_{ j } = 0 and f = 0 and the initial conditions \({C_\beta}{\vert _{{\Sigma _0}}} = 0\) and \({\partial _t}{C_\beta}{\vert _{{\Sigma _0}}} = 0\) on the constraint fields C_{ β }. Therefore, solving the wave equation ∇^{ μ }∇_{ μ }h_{ αβ } = 0 with such data, we obtain a solution of the linearized Einstein equations (6.15) in the harmonic gauge with initial data satisfying (6.18) with X_{ j } = 0 and f = 0. This shows geometric existence for the linearized harmonic formulation.
It follows from the existence part of the proof that the quantities \({h_{00}}{\vert _{{\Sigma _0}}}\) and \({h_{0j}}{\vert_{{\Sigma _0}}}\), corresponding to linearized lapse and shift, parametrize pure gauge modes in the linearized harmonic formulation.
Theorem 10. [355] The IBVP ( 6.15 , 6.18 , 6.21 ) possesses a smooth solution h_{ αβ }, which is unique up to an infinitesimal coordinate transformation \({{\tilde h}_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field ξ^{ α }.
In conclusion, we can say that, in the simple case of linear gravitational waves propagating on a Minkowski background, we have resolved the issues (i–iii). Correct boundary data is given through the linearized Weyl scalar Ψ_{0} computed from the boundary-adapted tetrad. To linear order, Ψ_{0} is invariant with respect to coordinate transformations, and the timelike vector field T appearing in its definition can be defined geometrically by taking the future-directed unit normal to the initial surface Σ_{0} and parallel-transporting it along the geodesics orthogonal to Σ_{0}.
Whether or not this result can be generalized to the full nonlinear case is not immediately clear. In our linearized analysis we have imposed no restrictions on the normal component ξ_{N} of the vector field generating the infinitesimal coordinate transformation. However, such a restriction is necessary in order to keep the boundary surface fixed under a diffeomorphism. Unfortunately, it does not seem possible to restrict ξ_{N} in a natural way with the boundary conditions constructed so far.
6.4 Alternative approaches
Although the formulation of Einstein’s equations on a finite space domain with an artificial timelike boundary is currently the most used approach in numerical simulations, there are a number of difficulties associated with it. First, as discussed above, spurious reflections from the boundary surface may contaminate the solution unless the boundary conditions are chosen with great care. Second, in principle there is a problem with wave extraction, since gravitational waves can only be defined in an unambiguous (gauge-invariant) way at future null infinity. Third, there is an efficiency problem, since in the far zone the waves propagate along outgoing null geodesics so that hyperboloidal surfaces, which are asymptotically null, should be better adapted to the problem. These issues have become more apparent as numerical simulations have achieved higher accuracy to the point that boundary and wave extraction artifacts are noticeable, and have driven a number of other approaches.
One of them is that of compactification schemes, which include spacelike or null infinity into the computational domain. For schemes compactifying spacelike infinity, see [335, 336]. Conformal compactifications are reviewed in [172, 183], and a partial list of references to date includes [328, 176, 177, 180, 179, 170, 245, 172, 247, 100, 446, 447, 316, 87, 451, 452, 448, 449, 450, 305, 364, 42].
Another approach is Cauchy-characteristic matching (CCM) [99, 392, 401, 143, 148, 53], which combines a Cauchy approach in the strong-field regime (thereby avoiding the problems that the presence of caustics would cause on characteristic evolutions) with a characteristic one in the wave zone. Data from the Cauchy evolution is used as inner boundary conditions for the characteristic one and, vice-versa, the latter provides outer boundary conditions for the Cauchy IBVP. An understanding of the Cauchy IBVP is still a requisite. CCM is reviewed in [432]. A related idea is Cauchy-perturbative matching [455, 356, 4, 370], where the Cauchy code is instead coupled to one solving gauge-invariant perturbations of Schwarzschild black holes or flat spacetime. The multipole decomposition in the Regge-Wheeler-Zerilli equations [347, 453, 376, 294, 307] implies that the resulting equations are 1+1 dimensional, so the region of integration can be extended to very large distances from the source. As in CCM, an understanding of the IBVP for the Cauchy sector is still a requisite.
One way of dealing with the ambiguity of extracting gravitational waves from Cauchy evolutions at finite radii is by extrapolation procedures; see, for example, [72, 331] for some approaches and quantification of their accuracies. Another approach is Cauchy-characteristic extraction (CCE) [350, 37, 349, 32, 34, 54]. In CCE a Cauchy IBVP is solved, and the numerical data on a world tube is used to provide inner boundary conditions for a characteristic evolution that “transports” the data to null infinity. The difference with CCM is that in CCE there is no “feedback” from the characteristic evolution to the Cauchy one, and the extraction is done as a post-processing step.
7 Numerical Stability
In the previous sections we have discussed continuum initial and IBVPs. In this section we start with the study of the discretization of such problems. In the same way that a PDE can have a unique solution yet be ill posed^{27}, a numerical scheme can be consistent yet not convergent due to the unbounded growth of small perturbations as resolution is increased. The definition of numerical stability is the discrete version of well-posedness. One wants to ensure that small initial perturbations in the numerical solution, which naturally appear due to discretization errors and finite precision, remain bounded for all resolutions at any given time t > 0. Due to the classical Lax-Richtmyer theorem [276], this property, combined with consistency of the scheme, is equivalent in the linear case to convergence of the numerical solution, in which case the latter approaches the continuum one as resolution is increased (at least within exact arithmetic). Convergence of a scheme is in general difficult to prove directly, especially because the exact solution is in general not known. Instead, one shows stability.
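In practice, convergence is often verified numerically rather than proven. As an illustration (our sketch, not taken from the text), the following code evolves the advection equation u_t + u_x = 0 with a first-order upwind scheme on a periodic grid and estimates the observed convergence rate from two resolutions; the function names are ours.

```python
import numpy as np

def upwind_error(n, t_final=1.0, cfl=0.5):
    """Evolve u_t + u_x = 0 (periodic; exact solution u0(x - t)) with the
    first-order upwind scheme and return the maximum error at t_final."""
    h = 2.0 * np.pi / n
    steps = int(round(t_final / (cfl * h)))
    dt = t_final / steps                       # land exactly on t_final
    x = h * np.arange(n)
    u = np.sin(x)
    for _ in range(steps):
        # one-sided difference taken in the upwind direction (stable for dt/h <= 1)
        u = u - (dt / h) * (u - np.roll(u, 1))
    return np.max(np.abs(u - np.sin(x - t_final)))

# A stable, consistent scheme converges: halving h roughly halves the error
# for this first-order method, so the observed order is close to one.
order = np.log2(upwind_error(100) / upwind_error(200))
```

Running a test like this at several resolutions is the standard practical substitute for a direct convergence proof.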
The different definitions of numerical stability follow those of wellposedness, with the L^{2} norm in space replaced by a discrete one, which is usually motivated by the spatial approximation. For example, discrete norms under which the summation by parts property holds are natural in the context of some finite difference approximations and collocation spectral methods (see Sections 8 and 9).
We start with a general discussion of some aspects of stability, and explicit analyses of simple, low-order schemes for test models. There follows a discussion of different variations of the von Neumann condition, including an eigenvalue version, which can be used to analyze in practice necessary conditions for IBVPs. Next, we discuss a rather general stability approach for the method of lines, the notion of time-stability, Runge-Kutta methods, and we close the section with some references to other approaches not covered here, as well as some discussion in the context of numerical relativity.
7.1 Definitions and examples
 The previous definition applies both to the semi-discrete case (where space but not time is discretized) as well as the fully-discrete one. In the latter case, Eq. (7.3) is to be interpreted at fixed time. For example, if the timestep discretization is constant,$${t_k} = k\Delta t,\qquad k = 0,1,2 \ldots$$(7.4)then Eq. (7.3) needs to hold for fixed t_{ k } and arbitrarily large k. In other words, the solution is allowed to grow with time, but not with the number of timesteps at fixed time when resolution is increased.

The norm ∥ · ∥_{d} in general depends on the spatial approximation, and in Sections 8 and 9 we discuss some definitions for the finite difference and spectral cases.

From Definition 11, one can see that an ill-posed problem cannot have a stable discretization, since otherwise one could take the continuum limit in (7.3) and reach the contradiction that the original system was well posed.

As in the continuum, Eq. (7.3) implies uniqueness of the numerical solution v.
 In Section 3 we discussed that if, in a well-posed homogeneous Cauchy problem, a forcing term is added to Eq. (7.1),$${u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x)\qquad \mapsto \qquad {u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x) + F(t,x),$$(7.5)then the new problem admits another estimate, related to the original one via Duhamel’s formula, Eq. (3.23). A similar concept holds at the semi-discrete level, and the discrete estimates change accordingly (in the fully-discrete case the integral in time is replaced by a discrete sum),$$\Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\Vert f \Vert_{\rm{d}}\qquad \mapsto \qquad \Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}} \left(\Vert f \Vert_{\rm{d}} + \int\limits_0^t \Vert F(s, \cdot) \Vert_{\rm{d}}\, ds \right).$$(7.6)In other words, the addition of a lower-order term does not affect numerical stability, and without loss of generality one can restrict stability analyses to the homogeneous case.
 The difference w := u − v between the exact solution and its numerical approximation satisfies an equation analogous to (7.5), where F is related to the truncation error of the approximation. If the scheme is numerically stable, then in the linear and semi-discrete cases Eq. (7.6) implies$${\Vert {w(t,\cdot)} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\int\limits_0^t {{{\Vert {F(s,\cdot)} \Vert}_{\rm{d}}}}\, ds\,.$$(7.7)If the approximation is consistent, the truncation error converges to zero as resolution is increased, and Eq. (7.7) implies that so does the norm of the error ∥w(t, ·)∥_{d}. That is, stability implies convergence. The converse is also true, and this equivalence between convergence and stability is the celebrated Lax-Richtmyer theorem. The equivalence also holds in the fully-discrete case.

In the quasilinear case, one follows the principle of linearization, as described in Section 3.3. One linearizes the problem, and constructs a stable numerical scheme for the linearization. The expectation, then, is that the scheme also converges for the nonlinear problem. For particular problems and discretizations this expectation can be rigorously proven (see, for example, [259]).
Example 33. The one-sided Euler scheme.
Next we consider a scheme very similar to the previous one, but which turns out to be unconditionally unstable for a ≠ 0, regardless of the direction of propagation.
Example 34. A centered Euler scheme.
The semi-discrete centered approximation (7.20) and the fully-discrete centered Euler scheme (7.21) constitute the simplest example of an approximation, which is not fully-discrete stable, even though its semi-discrete version is. This is related to the fact that the Euler time integration is not locally stable, as discussed in Section 7.3.2.
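A short experiment (our code, not from the text) makes the contrast between the two Euler examples concrete: with the same Euler time integrator, the one-sided difference taken in the direction of propagation keeps the discrete norm bounded at a CFL number of 1/2, while the centered operator D_0 amplifies every mode, since its amplification factor satisfies |q|² = 1 + (λ sin θ)² > 1.

```python
import numpy as np

def norm_growth(scheme, n=50, steps=200, cfl=0.5):
    """Evolve v_t = v_x with Euler time stepping on a periodic grid;
    return the growth factor ||v(T)||_d / ||v(0)||_d of the discrete norm."""
    h = 2.0 * np.pi / n
    dt = cfl * h
    x = h * np.arange(n)
    v = np.sin(x)
    norm0 = np.sqrt(h * np.sum(v**2))
    for _ in range(steps):
        if scheme == "one-sided":
            dv = (np.roll(v, -1) - v) / h                     # D_+: the stable direction for v_t = v_x
        else:
            dv = (np.roll(v, -1) - np.roll(v, 1)) / (2 * h)   # centered difference D_0
        v = v + dt * dv
    return np.sqrt(h * np.sum(v**2)) / norm0

# one-sided: bounded (in fact slightly damped); centered: unconditional growth
```

The centered version keeps growing at any fixed CFL number as more steps are taken, in line with the statement above that the scheme is not fully-discrete stable.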
The previous two examples were one-step methods, where v^{k+1} can be computed in terms of v^{ k }. The following is an example of a two-step method.
Example 35. Leapfrog.
The previous examples were explicit methods, where the solution \(\upsilon _j^{k + 1}\) (or \(w_j^{k + 1}\)) can be explicitly computed from the one at the previous timestep, without inverting any matrices.
Example 36. Crank-Nicholson.
Example 37. Iterated Crank-Nicholson.
 First iteration: an intermediate variable ^{(1)}ṽ is calculated using a second-order-in-space centered difference (7.32) and an Euler, first-order forward-time approximation,$${1 \over {\Delta t}}\left({{}^{(1)}\tilde v_j^{n + 1} - v_j^n} \right) = {D_0}\,v_j^n\,.$$(7.35)Next, a second intermediate variable is computed through averaging,$$^{(1)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(1)}\tilde v_j^{n + 1} + v_j^n} \right).$$(7.36)The full time step for this first iteration is$${1 \over {\Delta t}}\left({v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2}.$$(7.37)
 Second iteration: it follows the same steps. Namely, the intermediate variables$$\begin{array}{*{20}c} {{1 \over {\Delta t}}\left({{}^{(2)}\tilde v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2},} \\ {{}^{(2)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(2)}\tilde v_j^{n + 1} + v_j^n} \right),} \\ \end{array}$$are computed, and the full step is obtained from$${1 \over {\Delta t}}\left({v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(2)}}\bar v_j^{n + 1/2}\,.$$(7.38)

Further iterations proceed in the same way.
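The iterations above can be written compactly. The following sketch (our code, for the advection model v_t = v_x with the centered operator D_0 on a periodic grid) performs one full iterated Crank-Nicholson step with a configurable number of iterations.

```python
import numpy as np

def icn_step(v, lam, iterations=2):
    """One iterated Crank-Nicholson step for v_t = v_x; lam = dt/dx."""
    d0 = lambda w: 0.5 * (np.roll(w, -1) - np.roll(w, 1))  # dx * D_0, periodic grid
    vbar = v
    for _ in range(iterations):
        vtilde = v + lam * d0(vbar)   # Euler predictor, Eq. (7.35) and its analogues
        vbar = 0.5 * (vtilde + v)     # averaging, Eq. (7.36)
    return v + lam * d0(vbar)         # full step, Eqs. (7.37)/(7.38)
```

With two iterations the amplification factor is q = 1 + z + z²/2 + z³/4 with z = iλ sin θ, so |q|² = 1 − μ⁴/4 + μ⁶/16 with μ = λ sin θ, and the scheme is stable for λ ≤ 2 (a standard result for two-iteration ICN).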
Similar definitions to the one of Definition 11 are introduced for the IBVP. For simplicity we explicitly discuss the semi-discrete case. In analogy with the definition of a strongly well-posed IBVP (Definition 9) one has
In addition, the semi-discrete version of Definitions 6 and 7 lead to the concepts of strong stability in the generalized sense and boundary stability, respectively, which we do not write down explicitly here. The definitions for the fully-discrete case are similar, with time integrals such as those in Eq. (7.39) replaced by discrete sums.
7.2 The von Neumann condition
7.2.1 The periodic, scalar case
7.2.2 The general, linear, time-independent case
Next, we discuss two examples where the von Neumann condition is satisfied but the resulting scheme is unconditionally unstable. The first one is for a well-posed underlying continuum problem and the second one for an ill-posed one.
Example 38. An unstable discretization, which satisfies the von Neumann condition for a trivially well-posed problem [228].
The von Neumann condition is clearly not sufficient for stability in this example because the amplification matrix not only cannot be uniformly diagonalized, but it cannot be diagonalized at all because of the Jordan block structure in (7.65).
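The effect of such a Jordan block can be seen directly in a small numerical check (ours, not from the text): a 2 × 2 amplification matrix with a defective eigenvalue of unit modulus satisfies the von Neumann condition, yet the norms of its powers grow linearly with the number of steps.

```python
import numpy as np

# Spectral radius is 1, so the von Neumann condition holds ...
q_hat = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

# ... but the matrix is not diagonalizable (a Jordan block), and ||q_hat^k|| ~ k,
# so the scheme it represents is unstable: the solution grows with the number of steps.
growth = [np.linalg.norm(np.linalg.matrix_power(q_hat, k), 2) for k in (10, 100, 1000)]
```

Bounding the eigenvalues therefore controls only the spectral radius, not the norm of the solution operator, which is what Definition 11 requires.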
Example 39. Ill-posed problems are unconditionally unstable, even if they satisfy the von Neumann condition. The following example is drawn from [107].
7.3 The method of lines
A convenient approach both from an implementation point of view as well as for analyzing numerical stability or constructing numerically-stable schemes is to decouple spatial and time discretizations. That is, one first analyzes stability under some spatial approximation assuming time to be continuous (semi-discrete stability) and then finds conditions for time integrators to preserve stability in the fully-discrete case.
In general, this method provides only a subclass of numerically-stable approximations. However, it is a very practical one, since spatial and time stability are analyzed separately and stable semi-discrete approximations and appropriate time integrators can then be combined at will, leading to modularity in implementations.
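This modularity is easy to express in code. The sketch below (ours, not from the text) separates the semi-discrete spatial operator from the time integrator so that either piece can be swapped independently; it uses the centered operator D_0 for u_t = u_x together with classical fourth-order Runge-Kutta.

```python
import numpy as np

def make_rhs(h):
    """Semi-discrete right-hand side for u_t = u_x: centered D_0 on a periodic grid."""
    return lambda u: (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)

def rk4_step(u, dt, rhs):
    """Classical fourth-order Runge-Kutta step; any rhs can be plugged in."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Method of lines: discretize space first, then integrate the ODE system in time.
n = 64
h = 2.0 * np.pi / n
x = h * np.arange(n)
rhs = make_rhs(h)
u = np.sin(x)
dt = 0.25 * h
for _ in range(int(round(2.0 * np.pi / dt))):   # one full period; exact solution returns to sin(x)
    u = rk4_step(u, dt, rhs)
```

Replacing `make_rhs` with a different spatial approximation, or `rk4_step` with another integrator, requires no changes elsewhere, which is the modularity referred to above.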
7.3.1 Semidiscrete stability
For a large class of problems, which can be shown to be well posed using the energy estimate, one can construct semi-bounded operators L by satisfying the discrete counterpart of the properties of the differential operator P in Eq. (7.1) that were used to show well-posedness. This leads to the construction of spatial differential approximations satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.
7.3.2 Fully-discrete stability
In the previous Section 7.3.1 we derived necessary conditions for semi-discrete stability of such systems. Namely, the von Neumann one in its weak (7.80) and strong (7.82) forms. Below we shall derive necessary conditions for fully-discrete stability for a large class of time integration methods, including Runge-Kutta ones. Upon time discretization, stability analyses of (7.85) require the introduction of the notion of the region of absolute stability of ODE solvers. Part of the subtlety in the stability analysis of fully-discrete systems is that the size N of the system of ODEs is not fixed; instead, it depends on the spatial resolution. However, the obtained necessary conditions for fully-discrete stability will also turn out to be sufficient when combined with additional assumptions. We will also discuss sufficient conditions for fully-discrete stability using the energy method.
The necessary condition (7.92) can then be restated as:
In the absence of lower-order terms and under the already assumed conditions (7.89), the strong von Neumann condition (7.82) then implies that \({{\mathcal S}_{\rm{R}}}\) must overlap the half plane {z ∈ ℂ: Re(z) ≤ 0}. In particular, this is guaranteed by locally-stable schemes, defined as follows.
As usual, the von Neumann condition is not sufficient for numerical stability, and we now discuss an example, drawn from [268], showing that the particular version given in Lemma 6 is not sufficient either.
Since L is triangular, its eigenvalues are the elements of the diagonal; namely, {ℓ_{ i }} = {−1/Δx}, i.e., there is a single, degenerate eigenvalue q = −1/Δx.
Sufficient conditions. Under additional assumptions, fullydiscrete stability does follow from semidiscrete stability if the time integration is locally stable:
 (i)
A consistent semi-discrete approximation to a constant-coefficient, first-order IBVP is stable in the generalized sense (see Definition 12 and the following remarks).
 (ii)
The resulting system of ODEs is integrated with a locally-stable method of the form (7.89) , with stability radius r > 0.
 (iii)
If α ∈ ℝ is such that ∣α∣ < r and$${\rm{R}}(i\alpha) = {e^{i\phi}}\,,\quad \phi \in {\mathbb R}\,,$$(7.104)then there is no β ∈ ℝ such that ∣β∣ < r, R(iβ) = e^{ iϕ }, and β ≠ α.

Condition (iii) can be shown to hold for any consistent approximation, if r is sufficiently small [268].
 Explicit, one-step Runge-Kutta (RK) methods, which will be discussed in Section 7.5, are in particular of the form (7.89) when applied to linear, time-independent problems. In fact, consider an arbitrary, consistent, one-step, explicit ODE solver (7.87) of the form given in Eq. (7.89),$${\bf{Q}} = {\rm{R}}(\Delta t{\bf{L}}) = \sum\limits_{j = 0}^s {{\alpha _j}} {{{{\left({\Delta t{\bf{L}}} \right)}^j}} \over {j!}}\quad {\rm{with}}\;{\alpha _s} \neq 0;$$(7.106)the integer s is referred to as the number of stages. Since the exact solution to Eq. (7.85) is v(t) = e^{ t }^{ L }v(0) and, in particular,$${\bf{v}}({t_{k + 1}}) = {e^{\Delta t{\bf{L}}}}{\bf{v}}({t_k}) = \sum\limits_{j = 0}^\infty {{{{{\left({\Delta t{\bf{L}}} \right)}^j}} \over {j!}}} {\bf{v}}({t_k}),$$(7.107)Eq. (7.106) must agree with the first n terms of the Taylor expansion of e^{ΔtL}, where n is the order of the global^{29} truncation error of the ODE solver, defined through$${\rm{R}}(\Delta t{\bf{L}}) - {e^{\Delta t{\bf{L}}}} = {\mathcal O}{\left({\Delta t{\bf{L}}} \right)^{n + 1}}.$$(7.108)Therefore, we must have$${\alpha _j} = 1\;\;{\rm{for}}\;0 \leq j \leq n.$$(7.109)We then see that a scheme of order n needs at least n stages, s ≥ n, and$${\rm{R}}(\Delta t{\bf{L}}) = \sum\limits_{j = 0}^n {{{{{\left({\Delta t{\bf{L}}} \right)}^j}} \over {j!}}} + \sum\limits_{j = n + 1}^s {{\alpha _j}} {{{{\left({\Delta t{\bf{L}}} \right)}^j}} \over {j!}}.$$(7.110)The above expression in particular shows that when s = n (i.e., when the second sum on the right-hand side is zero) the scheme is unique, with coefficients given by Eq. (7.109). In particular, for n = 1 = s, such a scheme corresponds to the Euler one discussed in the example above; see Eq. (7.99).

As we will discuss in Section 7.5, in the nonlinear case it is possible to choose RK methods with s = n if and only if n < 5.

When s = n, first- and second-order RK are not locally stable, while third- and fourth-order are. The fifth-order Dormand-Prince scheme (also introduced in Section 7.5) is also locally stable.
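These statements can be checked numerically from the stability function alone. In the sketch below (ours, not from the text), the stability function is the truncated exponential of Eq. (7.110) with s = n: for n = 1, 2 one finds |R(iβ)| > 1 for every β ≠ 0, while for n = 4 the segment |β| ≤ 2√2 of the imaginary axis lies inside the region of absolute stability.

```python
import math

def stability_R(n, z):
    """Stability function of the unique s = n stage, order-n RK scheme:
    the degree-n truncation of exp(z)."""
    return sum(z**j / math.factorial(j) for j in range(n + 1))

betas = [0.01 * k for k in range(1, 101)]  # sample 0 < beta <= 1
# Orders 1 and 2: |R(i*beta)| > 1 for all beta != 0  ->  not locally stable.
rk1_grows = all(abs(stability_R(1, 1j * b)) > 1.0 for b in betas)
rk2_grows = all(abs(stability_R(2, 1j * b)) > 1.0 for b in betas)
# Order 4: |R(i*beta)| <= 1 up to beta = 2*sqrt(2)  ->  locally stable.
rk4_ok = all(abs(stability_R(4, 1j * b)) <= 1.0 + 1e-12
             for b in [2.0 * math.sqrt(2.0) * k / 100 for k in range(101)])
```

For n = 2, for instance, |R(iβ)|² = 1 + β⁴/4, which exceeds one for any nonzero β, however small the timestep.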
Using the energy method, fully-discrete stability can be shown (resulting in a more restrictive CFL limit) for third-order Runge-Kutta integration and arbitrary dissipative operators L [282, 409]:
Notice that the restriction α_{d} = 0 is not so severe, since one can always achieve it by replacing L with L − α_{d}I. A generalization of Theorem 12 to higher-order Runge-Kutta methods does not seem to be known.
7.4 Strict or time-stability
Similar definitions hold in the fully-discrete case and/or when the spatial approximation is not a finite difference one. Essentially, (7.115) attempts to capture the notion that the numerical solution should not have, at a fixed resolution, growth in time, which is not present at the continuum. However, the problem with the definition is that it is not useful if the estimate (7.113) is not sharp, since neither will be the estimate (7.114), and the numerical solution can still exhibit artificial growth.
The right panel of Figure 3 shows a comparison between discretizations (7.122) and (7.124), as well as (7.122) with the addition of numerical dissipation (see Section 8.5), in all cases at the same fixed resolution. Even though numerical dissipation does stabilize the spurious growth in time, the strictly-stable discretization (7.124) is considerably more accurate. Technically, according to Definition 15, the approximation (7.122) is also strictly stable, but it is more useful to reserve the term to the cases in which the estimate is sharp. The approximation (7.124), on the other hand, is (modulo the flux at boundaries, discussed in Section 10) energy preserving or conservative.
In order to construct conservative or time-stable semi-discrete schemes, one essentially needs to write the approximation by grouping terms in such a way that, when deriving at the semi-discrete level what would be the conservation law at the continuum, the need of using the Leibniz rule is avoided. In addition, the numerical imposition of boundary conditions also plays a role (see Section 10).
In many application areas, conservation or time-stability play an important role in the design of numerical schemes. That is not so much (at least so far) the case for numerical solutions of Einstein’s equations, because in general relativity there is no gauge-invariant local notion of conserved energy, unlike many other nonlinear hyperbolic systems (most notably, in Newtonian or special relativistic Computational Fluid Dynamics); see, however, [400]. In addition, there are no generic sharp estimates for the growth of the solution that can guide the design of numerical schemes. However, in simpler settings such as fields propagating on some stationary, fixed background geometry, there is a notion of conserved local energy and accurate conservative schemes are possible. Interestingly, in several cases such as Klein-Gordon or Maxwell fields in stationary background spacetimes the resulting conservation of the semi-discrete approximations follows regardless of the constraints being satisfied (see, for example, [278]). A local conservation law in stationary spacetimes can also guide the construction of schemes to guarantee stability in the presence of coordinate singularities [105, 375, 225, 310], as discussed in Section 7.6.
In addition, there has been work done on variational, symplectic or mimetic integration techniques for Einstein’s equations, which aim at exactly or approximately preserving the discrete constraints, while solving the discrete evolution equations. See, for example, [304, 139, 201, 200, 76, 110, 359, 173, 358, 174, 360, 357].
7.5 Runge-Kutta methods
Next we present a few examples, in increasing order of accuracy. The simplest one is a forward finitedifference scheme [cf. Eq. (7.70)].
From left to right: The Euler method, second- and third-order RK, third-order Heun.
The standard fourthorder RungeKutta method.
0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
-----+--------------------
     | 1/6  1/3  1/3  1/6
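A generic explicit RK step can be written directly from such a tableau. The sketch below (our code, not from the text) encodes the fourth-order coefficients above and applies them to an arbitrary right-hand side f(t, y).

```python
import numpy as np

# Butcher tableau of the classical fourth-order Runge-Kutta method.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

def rk_step(f, t, y, dt):
    """One explicit Runge-Kutta step defined by the tableau (A, b, c)."""
    k = []
    for i in range(len(b)):
        yi = y + dt * sum(A[i, j] * k[j] for j in range(i))  # stage value
        k.append(f(t + c[i] * dt, yi))
    return y + dt * sum(bi * ki for bi, ki in zip(b, k))
```

Because only the arrays (A, b, c) encode the method, swapping in a different explicit tableau changes the integrator without touching the stepping logic.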
The above examples explicitly show that up to, and including, fourth-order accuracy there are Runge-Kutta methods of order p with s = p stages. It is interesting that, even though the first RK methods date back to the end of the 19th century, the question of whether s = p can be achieved at higher order remained open until the following result was shown by Butcher in 1963 [93]: s = p cannot be achieved anymore starting with fifth-order accurate schemes, and there are a number of further barriers.
Theorem 13. For p ≥ 5 there are no Runge-Kutta methods with s = p stages.
However, there are fifth- and sixth-order RK methods with six and seven stages, respectively. Butcher in 1965 [94] and 1985 [95], respectively, showed the following barriers.
Theorem 14. For p ≥ 7 there are no Runge-Kutta methods with s = p + 1 stages.
Theorem 15. For p ≥ 8 there are no Runge-Kutta methods with s = p + 2 stages.
Seventh- and eighth-order methods with s = 9 and s = 11 stages, respectively, have been constructed, as well as a tenth-order one with s = 17 stages.
7.5.1 Embedded methods
In practice, many approaches in numerical relativity use an adaptive timestep method. One way of doing so is to evolve the system of equations two steps with timestep Δt and one step with timestep 2Δt. The difference between the two solutions at t + 2Δt can be used, along with Richardson extrapolation, to estimate the timestep needed to achieve any given error tolerance.
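Schematically: advance two steps of size Δt and one step of size 2Δt from the same state; for a scheme of order p, Richardson extrapolation turns their difference into a local-error estimate, from which the next timestep follows. A sketch using the classical fourth-order Runge-Kutta integrator (the name `step_doubling_error` and the tolerance value are ours):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    # Classical fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def step_doubling_error(f, t, y, dt, p=4):
    """Two steps of size dt versus one of size 2*dt; by Richardson
    extrapolation, their difference divided by (2^p - 1) estimates the
    local error of the fine solution for a method of order p."""
    y_fine = rk4_step(f, t + dt, rk4_step(f, t, y, dt), dt)
    y_coarse = rk4_step(f, t, y, 2 * dt)
    return y_fine, abs(y_fine - y_coarse) / (2**p - 1)

# Estimate the error for y' = -y starting from y(0) = 1, and pick the
# next timestep for a (hypothetical) per-step tolerance of 1e-10.
y_fine, err = step_doubling_error(lambda t, y: -y, 0.0, 1.0, 0.01)
dt_new = 0.01 * (1e-10 / err)**(1.0 / 5.0)
```

The exponent 1/(p + 1) in the last line reflects that the local error of a p-th order method scales as Δt^{p+1}.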
The structure of embedded methods.
0      |
c_{2}  |  a_{21}
c_{3}  |  a_{31}   a_{32}
⋮      |  ⋮        ⋮        ⋱
c_{s}  |  a_{s1}   a_{s2}   ⋯   a_{s,s−1}
-------+------------------------------------------
       |  b_{1}    b_{2}    ⋯   b_{s−1}    b_{s}
       |  b′_{1}   b′_{2}   ⋯   b′_{s−1}   b′_{s}
Embedded methods are denoted by p(p′), where p is the order of the scheme that advances the solution and p′ the order of the embedded scheme used to estimate the error. For example, a 5(4) method advances the solution with a fifth-order scheme, while an embedded fourth-order scheme, sharing the same function evaluations, provides the error estimate.
The 5(4) Dormand-Prince method.
0  |
\({1 \over 5}\)  |  \({1 \over 5}\)
\({3 \over 10}\)  |  \({3 \over 40}\)  \({9 \over 40}\)
\({4 \over 5}\)  |  \({44 \over 45}\)  \(-{56 \over 15}\)  \({32 \over 9}\)
\({8 \over 9}\)  |  \({19372 \over 6561}\)  \(-{25360 \over 2187}\)  \({64448 \over 6561}\)  \(-{212 \over 729}\)
1  |  \({9017 \over 3168}\)  \(-{355 \over 33}\)  \({46732 \over 5247}\)  \({49 \over 176}\)  \(-{5103 \over 18656}\)
1  |  \({35 \over 384}\)  0  \({500 \over 1113}\)  \({125 \over 192}\)  \(-{2187 \over 6784}\)  \({11 \over 84}\)
y_{1}  |  \({35 \over 384}\)  0  \({500 \over 1113}\)  \({125 \over 192}\)  \(-{2187 \over 6784}\)  \({11 \over 84}\)  0
y′_{1}  |  \({5179 \over 57600}\)  0  \({7571 \over 16695}\)  \({393 \over 640}\)  \(-{92097 \over 339200}\)  \({187 \over 2100}\)  \({1 \over 40}\)
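As an illustration, the standard Dormand-Prince 5(4) coefficients can be coded directly; the sketch below takes one step and forms the error estimate as the difference between the fifth- and fourth-order updates (the function name `dopri_step` is ours):

```python
import numpy as np

# Butcher tableau of the Dormand-Prince 5(4) method.
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [44/45, -56/15, 32/9],
    [19372/6561, -25360/2187, 64448/6561, -212/729],
    [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],
    [35/384, 0.0, 500/1113, 125/192, -2187/6784, 11/84],
]
C = [0, 1/5, 3/10, 4/5, 8/9, 1, 1]
B5 = [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0]                 # 5th order
B4 = [5179/57600, 0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40]  # 4th order

def dopri_step(f, t, y, dt):
    """One Dormand-Prince step: returns the fifth-order update and an
    error estimate from the embedded fourth-order weights.  The last
    stage is evaluated at the fifth-order solution itself (the
    'first same as last' property), so it can be reused next step."""
    k = []
    for i in range(7):
        yi = y + dt * sum(a * kj for a, kj in zip(A[i], k))
        k.append(f(t + C[i] * dt, yi))
    y5 = y + dt * sum(b * kj for b, kj in zip(B5, k))
    y4 = y + dt * sum(b * kj for b, kj in zip(B4, k))
    return y5, abs(y5 - y4)

y5, err = dopri_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```

For this smooth test problem, y' = −y, the fifth-order update is accurate to roughly the size of the embedded error estimate raised by one order in Δt.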
7.6 Remarks
The classical reference for the stability theory of finite-difference methods for time-dependent problems is [361]. A modern account of stability theory for initial-boundary value discretizations is [228]. [227] includes a discussion of some of the main stability definitions and results, with emphasis on multiple aspects of high-order methods, and [415, 416] provide many examples at a more introductory level. We have omitted discussing the discrete version of the Laplace theory for IBVPs, developed by Gustafsson, Kreiss and Sundström (known as GKS theory or GKS stability) [229], since it has been used very little (if at all) in numerical relativity, where most stability analyses instead rely on the energy method.
The simplest stability analysis is that of a periodic, constant-coefficient test problem. An eigenvalue analysis can include boundary conditions and is typically used as a rule of thumb for determining CFL limits or for diagnosing instabilities. The eigenvalues are usually computed numerically for a number of different resolutions. See [171, 175] for some examples within numerical relativity.
Our discussion of Runge-Kutta methods follows [96] and [230], to which we refer, along with [231], for the rich area of methods for solving ordinary differential equations, in particular Runge-Kutta ones. We have only mentioned (one-step) explicit methods, which are the ones used most in numerical relativity, but they are certainly not the only ones. For example, stiff problems in general require implicit integration. [274, 322, 273] explored implicit-explicit (IMEX) time integration schemes in numerical relativity. Among the many topics that we have not included is that of dense output. This refers to methods that allow the evaluation of an approximation to the numerical solution at any time between two consecutive timesteps, at an order comparable or close to that of the integration scheme, and at low computational cost.
8 Spatial Approximations: Finite Differences
As mentioned in Section 7.6, a general stability theory (referred to as GKS) for IBVPs was developed by Gustafsson, Kreiss and Sundström [229]; a simpler approach, when applicable, is the energy method. The latter is considerably simpler than a GKS analysis for complicated systems such as Einstein’s field equations, high-order schemes, spectral methods, and/or complex geometries. The Einstein vacuum equations can be written in linearly-degenerate form and are therefore expected to be free of physical shocks (see the discussion at the beginning of Section 3.3); they are thus ideally suited for methods that exploit the smoothness of the solution to achieve fast convergence, such as high-order finite-difference and spectral methods. In addition, an increasing number of approaches in numerical relativity use some kind of multidomain or grid-structure approach (see Section 11). There are multidomain schemes for which numerical stability can relatively easily be established, through the energy method, for a large class of linear symmetric hyperbolic problems with maximal dissipative boundary conditions. In particular, such schemes can be applied to the symmetric hyperbolic formulations of Einstein’s equations discussed in Sections 4 and 6.
In this section we discuss spatial finite-difference (FD) approximations of arbitrarily high order for which the energy method can be applied, and in Section 10 boundary closures for them. We start by reviewing polynomial interpolation, followed by the systematic construction, through interpolation, of FD approximations and stencils of arbitrarily high order. Next, we introduce the concept of operators satisfying SBP, present a semi-discrete stability analysis, and describe the construction of high-order operators optimized with respect to both their boundary truncation error and their associated timestep (CFL) limits (more specifically, their spectral radius). Finally, we discuss numerical dissipation, with emphasis on the region near boundaries or grid interfaces.
8.1 Polynomial interpolation
Although interpolation is not strictly a finite differencing topic, we briefly present it here because it is used below and in Section 9, when discussing spectral methods.
Given a set of (N + 1) distinct points \(\{{x_j}\} _{j = 0}^N\) (sometimes referred to as nodal points or nodes) and arbitrary associated function values f(x_{ j }), the interpolation problem amounts to finding (in this case) a polynomial \({{\mathcal I}_N}[f](x)\) of degree less than or equal to N such that \({{\mathcal I}_N}[f]({x_j}) = f({x_j})\) for j = 0, 1, 2, …, N.
8.2 Finite differences through interpolation
Example 47. A first-order one-sided FD approximation for d/dx.
Example 48. A second-order centered finite-difference approximation for d/dx.
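The two approximations named in Examples 47 and 48 are the standard stencils (f(x+h) − f(x))/h and (f(x+h) − f(x−h))/(2h); a sketch verifying their convergence orders numerically (function names and the test point are ours):

```python
import numpy as np

def one_sided(f, x, h):
    """First-order one-sided approximation: (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def centered(f, x, h):
    """Second-order centered approximation: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Measure convergence rates for f = sin at an arbitrary point x = 0.3,
# where the exact derivative is cos(0.3).
x, exact = 0.3, np.cos(0.3)
r1 = np.log2(abs(one_sided(np.sin, x, 1e-2) - exact) /
             abs(one_sided(np.sin, x, 5e-3) - exact))   # close to 1
r2 = np.log2(abs(centered(np.sin, x, 1e-2) - exact) /
             abs(centered(np.sin, x, 5e-3) - exact))    # close to 2
```

Halving h halves the one-sided error but quarters the centered one, as the interpolation-based truncation analysis predicts.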
8.3 Summation by parts
Since numerical stability is, by definition, the discrete counterpart of well-posedness, one way to come up with schemes that are numerically stable by construction is to design them so that they satisfy the same properties used at the continuum when showing well-posedness through an energy estimate. As discussed in Section 3.2.3, one such property is integration by parts, or the application of Gauss’ theorem, which leads to its numerical counterpart: SBP [265, 266].
Consider a discrete grid consisting of points \(\{{x_i}\} _{i = 0}^N\) and uniform spacing Δx on some, possibly unbounded, domain [a, b].
If the interval is infinite, say (−∞, b) or (−∞, ∞), certain fall-off conditions are required and Eq. (8.22) is modified by dropping the corresponding boundary term(s).
Accuracy and Efficiency. As mentioned, in the absence of boundaries, standard centered FDs (which have even order of accuracy 2p) satisfy SBP with respect to the trivial (Σ = ΔxI) scalar product. In their presence, the operators can be modified at and near the boundaries so as to satisfy SBP; examples are given below. It can be seen that the accuracy at those points drops to p in the diagonal case and to 2p − 1 in the restricted full one. Therefore, the latter is more desirable from an accuracy perspective, but less so from a stability one, as we will discuss at the end of this subsection. Depending on the system, numerical dissipation might be enough to stabilize the discretization in the restricted full case. This is discussed below in Section 8.5.
When constructing SBP operators, the discrete scalar product cannot be arbitrarily fixed with the difference operator solved for afterward so that it satisfies the SBP property (8.22); in general this approach has no solutions. The coefficients of Σ and those of D have to be solved for simultaneously. The resulting systems of equations leave SBP operators in general non-unique, with increasing freedom as the accuracy order grows. In the diagonal case the resulting norm is automatically positive definite, but this is not so in the restricted full case.
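As a concrete check, the lowest-order diagonal-norm pair can be built and verified directly: the D_{2−1} operator (second-order centered in the interior, first-order one-sided at the two boundary points), together with the trapezoidal-rule scalar product, satisfies ΣD + (ΣD)ᵀ = B, where B carries only the two boundary terms, the discrete analogue of integration by parts. A sketch (the grid parameters are arbitrary):

```python
import numpy as np

N, dx = 20, 0.05   # grid x_0, ..., x_N with uniform spacing dx

# D_{2-1}: second-order centered in the interior, first-order
# one-sided at the two boundary points.
D = np.zeros((N + 1, N + 1))
D[0, 0], D[0, 1] = -1.0, 1.0
D[N, N - 1], D[N, N] = -1.0, 1.0
for i in range(1, N):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= dx

# Diagonal SBP scalar product: trapezoidal-rule weights.
sigma = dx * np.ones(N + 1)
sigma[0] = sigma[N] = dx / 2
Sigma = np.diag(sigma)

# SBP property: Sigma D + (Sigma D)^T equals the boundary matrix
# B = diag(-1, 0, ..., 0, 1), mimicking the boundary terms of
# integration by parts.
B = Sigma @ D + (Sigma @ D).T
```

The same check, with denser boundary blocks, applies to the higher-order operators discussed below.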
We label the operators by their order of accuracy in the interior and near the boundary points. For diagonal norms and restricted full ones these are D_{2p−p} and D_{2p−(2p−1)}, respectively.
The operator D_{4−2} and its associated scalar product are also unique in the diagonal norm case:
On the other hand, the operators D_{6−3}, D_{8−4} and D_{10−5} have one, three and ten free parameters, respectively. Up to D_{8−4} their associated scalar products are unique, while for D_{10−5} one of the free parameters enters in Σ. In the restricted full case, D_{4−3}, D_{6−5} and D_{8−7} have three, four and five free parameters, respectively, all of which appear in the corresponding scalar products.
A possibility [396] is to use the non-uniqueness of SBP operators to minimize the boundary stencil size s. If the difference operator in the interior is a standard centered difference of accuracy order 2p, then there are b points at and near each boundary where the accuracy is of order q (with q = p in the diagonal case and q = 2p − 1 in the restricted full one). The integer b is referred to as the boundary width. The boundary stencil size s is the number of gridpoints that the difference operator uses to evaluate its approximation at those b boundary points.
However, minimizing this size, as well as any naive or arbitrary choice of the free parameters, easily leads to a large spectral radius and, as a consequence, a restrictive CFL limit (see Section 7) in the case of explicit evolutions. Sometimes it also leads to rather large boundary truncation errors. Thus, an alternative is to numerically compute the spectral radius for these multi-parameter families of SBP operators and find in each case the parameter choice that leads to a minimum [399, 281]. It turns out that in this way the order of accuracy can be increased from the very low one of D_{2−1} to higher-order operators such as D_{10−5} or D_{8−7} with a very small change in the CFL limit. This involves some work, but since the SBP property (8.22) is independent of the system of equations one wants to solve, it only needs to be done once. In the restricted full case, when marching through parameter space minimizing the spectral radius, the minimization has to be constrained by the condition that the resulting norm is actually positive definite.
Comparison, for the D_{10−5} operator, of both the spectral radius and the average boundary truncation error (ABTE) when minimizing the bandwidth or a combination of the spectral radius and ABTE. For reference, the spectral radius and ABTE for the lowest-accuracy operator, D_{2−1} (which is unique), are 1.414 and 0.25, respectively. Note: the ABTE, as defined, is larger for the higher-order operator, but its convergence rate is faster.
Quantity          Min. bandwidth   Min. ABTE and spectral radius
Spectral radius   995.9            2.240
ABTE              20.534           0.766

The requirement of uniform spacing is not an actual restriction, since a coordinate transformation can always be used so that the computational grid is uniformly spaced, even though the physical distance between gridpoints varies. In fact, this is routinely done in the context of multiple domains or curvilinear coordinates (see Section 11). In that case, though, stability needs to be guaranteed for systems with variable coefficients, since these appear through the coordinate transformation(s) even if the original system had constant coefficients. This is relevant to the distinction between diagonal and block-diagonal SBP norms, as mentioned below.

A similar concept of SBP holds for discrete expansions into Legendre polynomials using Gauss-type quadratures, as discussed in Section 9.4.

The definition of SBP depends only on the computational domain, not on the system of equations being solved. This allows the construction of optimized SBP operators once and for all.

Difference operators satisfying SBP that are genuinely multi-dimensional can be explicitly constructed (see, for example, [103, 102]). However, they become rather complicated even for simple geometries as higher-order accuracy is sought. An easier approach, for the case in which the domain is the cross product of one-dimensional ones (say, topologically a cube in three dimensions), which is usually the case in many domain decompositions for complex geometries (Section 11), is to simply apply a one-dimensional operator satisfying SBP in each direction; this is the approach that we will discuss from now on. The question then is whether SBP holds in several dimensions; the answer is affirmative in the case of diagonal norms, but not necessarily otherwise.
8.4 Stability

A key ingredient used above to uniformly bound the norm of [D, A] is that the SBP scalar products used in practice have the form (8.29). In those cases, both the boundary width and the boundary stencil size (defined below Example 52) associated with the corresponding difference operators are independent of N. Therefore, the constants C_{1} and C_{2} can also be bounded independently of N.
 For the D_{2−1} operator defined in Example 51, for instance, Eq. (8.48) gives C_{1} = 3/2, C_{2} = 1, and we obtain the optimal estimate corresponding to the one in the continuum limit,$$\left\vert {\;\langle u,\left[ {{d \over {dx}},A} \right]u\rangle} \right\vert \leq \;\vert {A_x}{\vert _\infty}\Vert u\Vert ^{2}\,.$$(8.49)

For the D_{4−2} operator defined in Example 52, in turn, Eq. (8.48) gives C_{1} = 1770/731 ≈ 2.421 and C_{2} =42/17 ≈ 2.471.

For spectral methods the constants C_{1} and C_{2} typically grow with N as the coefficients d_{ jk } do not form a banded matrix anymore. This leads to difficulties when estimating the commutator; see [407] for a discussion on this point.
 It is also possible to avoid the estimate on the commutator between D and A altogether through skew-symmetric differencing [260], in which the problem is discretized according to$${u_t} = {1 \over 2}(AD + DA)\;u - {1 \over 2}{A_x}u.$$(8.50)A straightforward energy estimate shows that this leads to strict stability, after the imposition of appropriate boundary conditions.

SBP by itself is not enough to obtain an energy estimate since the boundary conditions still need to be imposed, and in a way such that the boundary terms in the estimate after SBP are under control. This is the topic of Section 10.
8.5 Numerical dissipation
The use of numerical dissipation consistent with the underlying system of equations is a standard way of filtering unresolved modes, stabilizing a scheme, or both, without spoiling the convergence order of the scheme. As an example of unresolved modes: for centered differences, the mode of highest frequency for any given resolution does not propagate at all, while the semi-discrete group velocity of the highest-frequency modes points exactly in the direction opposite to the continuum one. In addition, this speed increases with the order of the scheme. See, for example, [281] for more details.
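A common realization is Kreiss-Oliger dissipation; under one standard convention (for a second-order accurate base scheme), a fourth-difference term is added to the right-hand side, which annihilates locally cubic (well-resolved) data while strongly damping the highest-frequency grid mode. A sketch assuming periodic boundaries (the convention and parameter values are ours):

```python
import numpy as np

def ko_dissipation(u, dx, sigma=0.1):
    """Fourth-difference Kreiss-Oliger dissipation in one common
    convention for a second-order accurate base scheme:
        (Q u)_i = -(sigma / (16 dx)) (u_{i-2} - 4 u_{i-1} + 6 u_i
                                      - 4 u_{i+1} + u_{i+2}),
    implemented here with periodic boundaries (an assumption of
    this sketch)."""
    d4 = (np.roll(u, 2) - 4 * np.roll(u, 1) + 6 * u
          - 4 * np.roll(u, -1) + np.roll(u, -2))
    return -sigma / (16 * dx) * d4

N, dx = 32, 1.0 / 32
i = np.arange(N)
poly = (i * dx)**2      # locally quadratic data: 4th difference vanishes
nyquist = (-1.0)**i     # highest-frequency mode on the grid

q_poly = ko_dissipation(poly, dx)    # zero away from the wrap-around points
q_nyq = ko_dissipation(nyquist, dx)  # equals -(sigma/dx) * nyquist
```

Acting on the Nyquist mode, the operator returns −(σ/Δx) times the mode itself, so that mode is exponentially damped in time, while smooth data are affected only at the order of the truncation error.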
In the presence of boundaries, it is standard to simply set the dissipation operators (8.53) to zero near them. The result is, in general, not semi-negative definite as in (8.54), which not only may fail to cure instabilities but can actually trigger them. In many cases this is harmless in practice at an outer boundary, where the solution is weak, but not at inter-domain boundaries (see Section 10). For example, for a discretization of the standard wave equation on a multidomain, curvilinear grid setting using the D_{6−5} SBP operator, Kreiss-Oliger dissipation set to zero near interpatch boundaries does not lead to stability, while the more elaborate construction below does [141].
8.6 Going further
Besides the applications already mentioned, high-order FD operators satisfying SBP have been used, for example, in simulations of black-hole binaries immersed in an external magnetic field in the force-free approximation [321], orbiting binary black holes in vacuum [325], and for the metric sector in binary black-hole-neutron-star evolutions [124] and binary neutron-star evolutions including magnetohydrodynamics [23]. Other works are referred to in combination with multidomain interface numerical methods in Section 10.
In [398], the authors present a numerical spectrum stability analysis for block-diagonal-based SBP operators in the presence of curvilinear coordinates. However, the case of non-diagonal SBP norms and the full Einstein equations in multidomain scenarios, for orders higher than four in the interior, needs further development and analysis.
Efficient algorithms for computing the weights of generic FD operators (though not necessarily satisfying SBP or with proven stability) are given in [166].
Discretizing second-order time-dependent problems without reducing them to first order leads to a similar concept of SBP for operators approximating second derivatives. There is steady progress in the effort to combine SBP with penalty interface and outer boundary conditions for high-order multidomain simulations of second-order-in-space systems. At present, though, these tools have not yet reached the state of those for first-order systems, and they have not been used within numerical relativity except for the test case of a ‘shifted advection equation’ [302]. The difficulties appear in the variable-coefficient case. We discuss some of these difficulties and the state of the art in Section 10. In short, unlike the first-order case, SBP by itself does not imply an energy estimate in the variable-coefficient case, even when using diagonal norms, unless the operators are built taking the PDE into account as well. In [300] the authors explicitly constructed minimal-width, diagonal-norm SBP difference operators approximating d^{2}/dx^{2} up to eighth order in the interior, and in [118] non-minimal-width operators up to sixth order using full norms are given.
[440] presents a stability analysis around flat spacetime for a family of generalized BSSN-type formulations, along with numerical experiments, which include binary black-hole inspirals.
SBP operators have also been constructed to deal with coordinate singularities in specific systems of equations [105, 375, 225]. Since a sharp semi-discrete energy estimate is explicitly derived in these references, (strict) stability is guaranteed. In particular, in [225] schemes for which the truncation error converges pointwise everywhere, including the origin, are derived for wave equations in arbitrary space dimensions decomposed into spherical harmonics. Interestingly enough, popular schemes [158] for dealing with the singularity at the origin, which had not been explicitly designed to satisfy SBP, were found a posteriori to do so at the origin once closed appropriately at the outer boundary; see [225] for more details. In these cases the SBP operators are tailored to specific equations and coordinate singularities and are therefore problem dependent. For this reason their explicit construction has so far been restricted to second- and fourth-order operators (with diagonal scalar products), though the procedure conceptually extends to arbitrary orders. For higher-order operators, optimization of at least the spectral radius might become necessary.
In [166] the authors use SBP operators to design highorder quadratures. The reference also includes a detailed description of many properties of SBP operators.
Superconvergence of some estimates in the case of diagonal SBP operators is discussed in [239].
9 Spatial Approximations: Spectral Methods
In this section, we review some of the theory of spectral spatial approximations and their applications in numerical relativity. These are global representations, which display very fast convergence for sufficiently smooth functions. They are therefore very well suited to Einstein’s vacuum equations, where physical shocks are not expected, since the equations can be written in linearly-degenerate form, as discussed at the beginning of Section 3.3.
We start in Section 9.1 by discussing expansions in orthogonal polynomials, which are solutions to Sturm-Liouville problems. In those cases it is easy to see that, for smooth functions, the error in truncated expansions in general decays faster than any power of the number of polynomials, which is usually referred to as spectral convergence. Next, in Section 9.2, we discuss a few properties of general orthogonal polynomials, most importantly that they can be generated through a three-term recurrence formula. Section 9.3 follows with a discussion of the most commonly used families of polynomials on bounded domains, namely Legendre and Chebyshev ones, including the min-max property of Chebyshev points. Approximating integrals through a global interpolation with a careful choice of nodal points makes it possible to maximize the degree up to which they are exact for polynomials (Gauss quadratures). When applied to compute discrete truncated expansions, this leads to two remarkable features. One of them is SBP for Legendre polynomials, in analogy with the FD version discussed in Section 8.3. As in that case, SBP can sometimes be used to show semi-discrete stability when solving time-dependent partial differential equations (PDEs). The second one is an exact equivalence, for general Jacobi polynomials, between the resulting discrete expansions and interpolation at the Gauss points, a very useful property for collocation methods. Gauss quadratures and SBP are discussed in Section 9.4, followed by interpolation at Gauss points in Section 9.5. In Sections 9.6, 9.7 and 9.8 we discuss spectral differentiation, the collocation method for time-dependent PDEs, and applications to numerical relativity.
The results on orthogonal polynomials to be discussed are classical, but we present them because spectral methods are less widespread in the relativity community, at least compared to FDs. The proofs, and a detailed discussion of many other properties, can be found in, for example, [197] and references therein. [237] is a modern account of the use of spectral methods in time-dependent problems including the latest developments, while [70] discusses many issues that appear in applications, and [167] presents a very clear practical guide to spectral methods, in particular to the collocation approach. A good fraction of our presentation in this section follows [197] and [237], to which we refer when we do not provide any other references, or for further material.
9.1 Spectral convergence
9.1.1 Periodic functions
In fact, an estimate for the difference between u and its projection similar to (9.11) but on the infinity norm can also be obtained [237].
The main property that leads to spectral convergence is then the fast decay of the Fourier coefficients; see Eq. (9.16), provided the norm of \({{\mathcal D}^{(s)}}\) remains bounded for large s.
Before moving to the nonperiodic case we notice that in either the full or truncated expansions, the integrals (9.6) need to be computed. Numerically approximating the latter leads to discrete expansions and an appropriate choice of quadratures for doing so leads to a powerful connection between the discrete expansion and interpolation. We discuss this in Section 9.4, directly for the nonperiodic case.
9.1.2 Singular Sturm-Liouville problems
Next, consider non-periodic domains (a, b) (which may actually be unbounded; for example, (0, ∞) as in the case of Laguerre polynomials) on the real axis. We discuss how bases of orthogonal polynomials with spectral convergence properties arise as solutions to singular Sturm-Liouville problems.
Notice that the eigenvalues satisfy the asymptotic condition (9.27), which, roughly speaking, guarantees spectral convergence. More precisely, the following holds (see, for example, [197]), in analogy with Theorem 16 for the Fourier case, for the expansion \({{\mathcal P}_N}[f]\) of a function f in Jacobi polynomials:
Sturm-Liouville problems are discussed in, for example, [431]. Below we discuss some properties of general orthogonal polynomials.
9.2 Some properties of orthogonal polynomials
In order to obtain an orthonormal basis of P_{ N }, a Gram-Schmidt procedure could be applied to the standard basis \(\{{x^j}\} _{j = 0}^N\). However, exploiting properties of polynomials, a more efficient approach can be used, in which the first two polynomials p_{0}, p_{1} are constructed and then a three-term recurrence formula is applied.

The zeroth-order polynomial:
The conditions that p_{0} has degree zero and that it is monic leave only the choice$${p_0}(x) = 1.$$(9.38) 
The first-order one:
Writing p_{1}(x) = x + b_{1}, the condition 〈p_{0}, p_{1}〉_{ ω } = 0 yields$${p_1}(x) = x - {{{{\langle 1,x\rangle}_\omega}} \over {{{\langle 1,1\rangle}_\omega}}}\,.$$(9.39) 
The higher-order polynomials:
Theorem 19 (Three-term recurrence formula for orthogonal polynomials). For monic polynomials \(\{{p_k}\} _{k = 0}^N\), which are orthogonal with respect to the scalar product 〈·, ·〉_{ ω }, where each p_{ k } is of degree k, the following relation holds for k = 1, 2, …, N − 1:$${p_{k + 1}} = x{p_k} - {{{{\langle x{p_k},{p_k}\rangle}_\omega}} \over {{{\langle {p_k},{p_k}\rangle}_\omega}}}{p_k} - {{{{\langle x{p_k},{p_{k - 1}}\rangle}_\omega}} \over {{{\langle {p_{k - 1}},{p_{k - 1}}\rangle}_\omega}}}{p_{k - 1}}.$$(9.40)

Proof. Let 1 ≤ k ≤ N − 1. Since xp_{ k } is a polynomial of degree k + 1, it can be expanded as$$x{p_k}(x) = \sum\limits_{j = 0}^{k + 1} {{a_j}} {p_j}(x),$$(9.41)where the orthogonality of the polynomials \(\{{p_k}\} _{k = 0}^N\) implies that a_{ j } = 〈xp_{ k }, p_{ j }〉_{ ω }/〈p_{ j }, p_{ j }〉_{ ω } for j = 0, 1, 2, …, k + 1. However, since 〈xp_{ k }, p_{ j }〉_{ ω } = 〈p_{ k }, xp_{ j }〉_{ ω } and xp_{ j } can be expanded in terms of the polynomials p_{0}, p_{1}, …, p_{j+1}, it follows again by the orthogonality of \(\{{p_k}\} _{k = 0}^N\) that a_{ j } = 0 for j ≤ k − 2. Finally, a_{k+1} = 1 since p_{ k } and p_{k+1} are both monic. This proves Eq. (9.40). □

Notice that p_{k+1}, as defined in Eq. (9.40), remains monic and can therefore be automatically used for constructing p_{k+2}, without any rescaling.
Eqs. (9.38, 9.39, 9.40) allow one to compute orthogonal polynomials for any weight function ω, without the expense of a Gram-Schmidt procedure. For specific weights there are even more explicit recurrence formulae, such as those in Eqs. (9.43, 9.44) and (9.48, 9.49, 9.50) below for Legendre and Chebyshev polynomials, respectively.
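As an illustration, the recurrence can be run numerically for the Legendre weight ω = 1 on (−1, 1), reproducing the monic Legendre polynomials; a sketch (our own instantiation, using NumPy's `Polynomial` class and a quadrature order chosen to be exact for all inner products that appear):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Inner product <p, q>_w for w = 1 on (-1, 1), evaluated exactly for
# polynomial integrands of degree <= 59 by 30-point Gauss-Legendre
# quadrature.
nodes, weights = np.polynomial.legendre.leggauss(30)

def inner(p, q):
    return np.sum(weights * p(nodes) * q(nodes))

one = Polynomial([1.0])
x = Polynomial([0.0, 1.0])

# Eqs. (9.38)-(9.40): p_0 = 1, p_1 = x - <1,x>/<1,1>, then the
# three-term recurrence for the higher monic polynomials.
p = [one, x - (inner(one, x) / inner(one, one)) * one]
for k in range(1, 4):
    a = inner(x * p[k], p[k]) / inner(p[k], p[k])
    b = inner(x * p[k], p[k - 1]) / inner(p[k - 1], p[k - 1])
    p.append(x * p[k] - a * p[k] - b * p[k - 1])

# p[2] and p[3] should match the monic Legendre polynomials
# x^2 - 1/3 and x^3 - (3/5) x.
```

For this symmetric weight the middle recurrence coefficient vanishes, so only the second correction term contributes.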
9.3 Legendre and Chebyshev polynomials
For finite intervals, Legendre and Chebyshev polynomials are the most commonly used. In the Chebyshev case, the polynomials themselves, their roots and, in general, their quadrature points can be computed in closed form. They also satisfy a min-max property and lend themselves to the use of the fast Fourier transform (FFT).
9.3.1 Legendre
9.3.2 Chebyshev
9.3.3 The min-max property of Chebyshev points
Example 53. Runge phenomenon (see, for instance, [154]).
In other words, using Chebyshev points, that is, the roots of the Chebyshev polynomials, as interpolating nodes, minimizes the maximum error associated with the nodal polynomial term. Notice that, in this case, the nodal polynomial is given by T_{N+1}(x)/2^{ N }.
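The phenomenon is easy to reproduce; the sketch below (function and variable names are ours) interpolates the classic example f(x) = 1/(1 + 25x²) at N = 20 equispaced versus Chebyshev nodes:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def interp_max_error(nodes):
    """Fit the degree-N interpolating polynomial through the given
    nodes and measure its maximum error on a fine grid."""
    coeffs = np.polyfit(nodes, runge(nodes), len(nodes) - 1)
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polyval(coeffs, xs) - runge(xs)))

N = 20
equispaced = np.linspace(-1.0, 1.0, N + 1)
# Chebyshev points: the roots of T_{N+1}.
chebyshev = np.cos((2.0 * np.arange(N + 1) + 1.0) * np.pi / (2.0 * (N + 1)))

err_equi = interp_max_error(equispaced)   # large oscillations near the ends
err_cheb = interp_max_error(chebyshev)    # uniformly small
```

With equispaced nodes the maximum error of the interpolant grows with N for this function, while with Chebyshev nodes it decays: the min-max property at work.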
9.4 Gauss quadratures and summation by parts
Suppose now that, in addition to having the freedom to choose the coefficients {A_{ i }}, we can choose the nodal points {x_{ i }}. Then we have (N + 1) points and (N + 1) coefficients {A_{ i }}, i.e., (2N + 2) degrees of freedom, so we expect to be able to make the quadrature exact for all polynomials of degree at most (2N + 1). This is indeed true, and the result is referred to as Gauss quadratures. Furthermore, the optimal choice of the A_{ i } remains the same as in Eq. (9.65); only the nodal points need to be adjusted.
Theorem 20 (Gauss quadratures). Let ω be a weight function on the interval (a, b), as introduced in Eq. (9.17) , and let p_{N+1} be the associated orthogonal polynomial of degree N + 1. Then, the quadrature rule (9.63) with the choice (9.65) for the discrete weights, and as nodal points {x_{ j }} the roots of p_{N+1} is exact for all polynomials of degree at most (2N + 1).

The roots of p_{N+1}(x) are referred to as Gauss points or nodes.

Suppose that ω(x) = (1 − x^{2})^{−1/2}. Then the (N + 1) Gauss points, i.e., the roots of the Chebyshev polynomial T_{N+1}(x) [see Eq. (9.68)], are exactly the points that minimize the infinity norm of the nodal polynomial in the interpolation problem, as discussed in Section 9.3.3.
One can similarly enforce that only one of the end points coincides with a quadrature node, leading to Gauss-Radau quadratures. The proofs of Theorems 20 and 21 can be found in most numerical analysis books, in particular [242].
For Chebyshev polynomials there are closed-form expressions for the nodes and weights in Eqs. (9.63) and (9.65):
Summation by parts. For any two polynomials p(x), q(x) of degree N, SBP in the Legendre case follows for Gauss, Gauss-Lobatto or Gauss-Radau quadratures, in analogy with the FD case described in Section 8.3.
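The Chebyshev closed forms are simple enough to check directly; a sketch verifying that the (N+1)-point Chebyshev-Gauss rule, with nodes the roots of T_{N+1} and constant weights π/(N+1), integrates p(x)(1 − x²)^{−1/2} exactly for polynomials p of degree up to 2N + 1 (the test polynomial is an arbitrary choice of ours):

```python
import numpy as np

def chebyshev_gauss(N):
    """Closed-form Chebyshev-Gauss quadrature: the nodes are the N+1
    roots of T_{N+1}, x_j = cos((2j+1) pi / (2N+2)), and the weights
    are constant, w_j = pi / (N+1)."""
    j = np.arange(N + 1)
    x = np.cos((2 * j + 1) * np.pi / (2 * (N + 1)))
    w = np.full(N + 1, np.pi / (N + 1))
    return x, w

# Check with p(x) = x^6 and N = 3 (degree 6 <= 2N + 1 = 7); the exact
# value of the weighted integral over (-1, 1) is 5 pi / 16 (a Wallis
# integral).
x, w = chebyshev_gauss(3)
approx = np.sum(w * x**6)
```

A four-point rule thus integrates a degree-six weighted polynomial to machine precision, which no interpolatory rule with fixed nodes could achieve.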
9.5 Discrete expansions and interpolation
The above simple proof did not assume any special properties of the polynomial basis, but it does not hold for the Gauss-Lobatto case (for which the associated quadrature is exact only for polynomials of degree 2N − 1). However, the result still holds, at least for Jacobi polynomials; see, for example, [237].
Examples of Gauss-type nodal points {x_{ i }} are those given in Eq. (9.68) or Eq. (9.70). As we will see below, the identity (9.80) is very useful for spectral differentiation and collocation methods, among other things, since one can equivalently operate with the interpolant, which only requires knowledge of the function at the nodes.
9.6 Spectral collocation differentiation
9.7 The collocation approach
The weighted norm case ω ≠ 1 is more involved. In fact, already the advection equation is not well posed under the Chebyshev norm; see, for example, [237].
The Legendre and Chebyshev cases are similar and are discussed in [291, 292]. The webpage [406] keeps a selected list of publications on spectral viscosity.
9.8 Going further, applications in numerical relativity
Based on the minimum grid spacing between spectral collocation points, one would naively expect the CFL limit to scale as 1/N^{2}, where N is the number of points. The expectation indeed holds, but the reason is related to the \({\mathcal O}({N^2})\) scaling of the eigenvalues of Jacobi polynomials as solutions to Sturm-Liouville problems (in fact, the result holds for non-collocation spectral methods as well) [212].
There are relatively few rigorous results on convergence and stability of Chebyshev collocation methods for IBVPs; some of them are [211] and [210].
Even though this review is concerned with time-dependent problems, we note in passing that there is a significant number of efforts in relativity using spectral methods for the constraint equations; see [215]. The use of spectral methods in relativistic evolutions can be traced back to pioneering work in the mid-1980s [66] (see also [67, 68, 213]). Over the last decade they have gained popularity, with applications in scenarios as diverse as relativistic hydrodynamics [313, 427, 428], characteristic evolutions [43], absorbing and/or constraint-preserving boundary conditions [314, 369, 365, 363], constraint projection [244], late-time "tail" behavior of black-hole perturbations [382, 420], cosmological studies [19, 49, 50], extreme-mass-ratio inspirals within perturbation theory and self-forces [112, 162, 111, 425, 114, 113, 123] and, prominently, binary black-hole simulations (see, for example, [384, 329, 71, 381, 132, 288, 402, 131, 90, 289]) and black-hole-neutron-star ones [150, 168]. The method of lines (Section 7.3) is typically used with a timestep small enough that the time integration error is smaller than the one due to the spatial approximation, and spectral convergence is observed. Spectral collocation methods were first used in spherically-symmetric black-hole evolutions of the Einstein equations in [255], and in three dimensions in [254]. The latter work showed that some constraint violations in the Einstein-Christoffel [22] type of formulations do not go away with resolution but are a feature of the continuum evolution equations (though the point, namely that time instabilities are in some cases not a product of lack of resolution, applies to many other scenarios).
Most of these references use explicit symmetric hyperbolic first-order formulations. More recently, progress has been made towards using spectral methods for the BSSN formulation of the Einstein equations directly in second-order form in space [419, 163] and, more generally, on multi-domain inter-patch boundary conditions for second-order systems [413] (numerical boundary conditions are discussed in Section 10 below). A spectral spacetime approach (as opposed to spectral approximation in space combined with marching in time) for the 1+1 wave equation on compactified Minkowski spacetime was proposed in [233]; in higher dimensions and for dynamical spacetimes, though, the cost of such an approach might be prohibitive.
Reference [83] presents an implementation of the harmonic formulation of the Einstein equations on a spherical domain using a double Fourier expansion, reporting, in addition, significant speed-ups obtained using Graphics Processing Units (GPUs).
[215] presents a detailed review of spectral methods in numerical relativity.
A promising approach, which until recently has been largely unexplored within numerical relativity, is the use of discontinuous Galerkin methods [238, 457, 162, 163, 339].
10 Numerical Boundary Conditions
In most practical computations one inevitably deals with an IBVP, and numerical boundary conditions have to be imposed. Usually the boundary is artificial and, as discussed in Section 5.3, absorbing boundary conditions are imposed. In other cases the boundary of the computational domain may actually represent infinity via compactification; see Section 6.4. Here we discuss some approaches for imposing numerical boundary conditions, with emphasis on sufficient conditions for stability based on the energy method, simplicity, and applicability to high-order and spectral methods. In addition to outer boundaries, we also discuss interface boundaries, which appear when there are multiple grids.
General stability results through the energy method are available for symmetric hyperbolic first-order linear systems with maximal dissipative boundary conditions. Unfortunately, in many cases of physical interest the boundary conditions are neither in maximal dissipative form nor is the system linear. In particular, this is true for Einstein’s field equations, which are nonlinear and, as we have seen in Section 6, require constraint-preserving absorbing boundary conditions, which do not always result in algebraic conditions on the fields at the boundary. Therefore, in many cases one does “the best that one can”, implementing the outer boundary conditions using discretizations that are known to be stable, at least in the linearized, maximal dissipative case. Fortunately, since the outer boundaries are usually placed in the weak-field, wave zone, more often than not this approach works well in practice. At the same time, it should be noted that the IBVPs for general relativity formulated in [187] and [264] (discussed in Section 5) are actually based on a symmetric hyperbolic first-order reduction of Einstein’s field equations with maximal dissipative boundary conditions (including constraint-preserving ones). Therefore, it should be possible to construct numerical schemes that are provably stable, at least in the linearized regime, using the techniques described in the last two Sections 8 and 9, and in Section 10.1 below. A numerical implementation of the formulations of [187] and [264] has not yet been pursued.
The situation at interface boundaries between grids, which are at least partially contained in the strong field region, is more subtle. Fortunately, only the characteristic structure of the equations is in principle needed at such boundaries, and not constraintpreserving boundary conditions. Methods for dealing with interfaces are discussed in Section 10.2.
Finally, in Section 10.3 we give an overview of some applications to numerical relativity of the boundary treatments discussed in Sections 10.1 and 10.2. As mentioned above, most of the techniques that we discuss have been developed mainly for first-order symmetric hyperbolic systems with maximal dissipative boundary conditions. In Section 10.3 we also point out ongoing and prospective work for second-order systems, as well as the important topic of absorbing boundary conditions in general relativity.
Most of the methods reviewed below involve decomposing the principal part, its time derivative, or both, into characteristic variables, imposing the boundary conditions, and transforming back to the original variables. This can be done a priori, analytically, and the actual run-time computational cost of these operations is negligible.
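As a minimal illustration (a sketch of our own for an assumed first-order wave system, not an excerpt from any of the cited codes), the transformation to and from characteristic variables is just a constant matrix multiplication at each boundary point:

```python
import numpy as np

# First-order form of the 1+1 wave equation, U_t = A U_x with U = (Pi, Phi).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
speeds, T = np.linalg.eigh(A)      # characteristic speeds -1, +1; the columns
                                   # of T are the corresponding eigenvectors

U = np.array([0.3, -0.7])          # state vector at a boundary point
w = T.T @ U                        # decompose into characteristic variables
w[0] = 0.0                         # impose a (homogeneous) condition on the
                                   # mode with speed -1, incoming at this boundary
U_new = T @ w                      # transform back to the original variables
print(speeds, U_new)
```

The eigendecomposition can be computed once, analytically or numerically, before the evolution starts.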
10.1 Outer boundary conditions
10.1.1 Injection
Injection is presumably the simplest way to numerically impose boundary conditions. It amounts to simply overwriting, at every timestep or at some number of them, the numerical solution for each incoming characteristic variable, or its time derivative, with the conditions that it should satisfy.
Stability of the injection approach can be analyzed through GKS theory [229], since energy estimates are, in general, not available for it (the reason for this should become clearer when discussing the projection and penalty methods). Stability analyses not only depend on the spatial approximation (and on the time integration in the fully-discrete case) but are in general also equation-dependent. Therefore, stability is often difficult to establish, especially for high-order schemes and nontrivial systems. For this reason a semi-discrete eigenvalue analysis is often performed instead. Even though this only provides necessary conditions for stability (namely, the von Neumann condition (7.80)), it serves as a rule of thumb and allows one to discard obviously unstable schemes.
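Such a semi-discrete eigenvalue check is straightforward to set up. The sketch below (a toy example of our own, not taken from the references above) discretizes the advection equation u_t = u_x on [0, 1] with centered differences, a one-sided stencil at the outflow point, and injection of the homogeneous condition u_N = 0 at the inflow, and then verifies that no eigenvalue has a positive real part:

```python
import numpy as np

def injection_matrix(N):
    """Semi-discrete operator for u_t = u_x on [0,1] with u_N = 0 injected."""
    h = 1.0 / N
    M = np.zeros((N, N))                       # unknowns u_0, ..., u_{N-1}
    M[0, 0], M[0, 1] = -1.0, 1.0               # one-sided stencil at outflow x = 0
    for i in range(1, N - 1):
        M[i, i - 1], M[i, i + 1] = -0.5, 0.5   # centered in the interior
    M[N - 1, N - 2] = -0.5                     # centered; u_N = 0 has been injected
    return M / h

# Necessary (von Neumann-type) condition: Re(eigenvalues) <= 0.
eigs = np.linalg.eigvals(injection_matrix(40))
print(np.max(eigs.real))
```

For this particular scheme the condition is satisfied; in general, passing such a test does not by itself guarantee stability.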
There are other difficulties with the injection method, besides the fact that stability results are usually partial and incomplete for realistic systems and/or high-order or spectral methods. One of them is that it sometimes happens that a full GKS analysis can actually be carried out for a simple problem and scheme, and the result turns out to be stable but not time-stable (see Section 7.4 for a discussion of time-stability), or the scheme turns out not to be time-stable when applied to a more complicated system (see, for example, [116, 1]).
Seeking stable numerical boundary conditions for realistic systems, which preserve the accuracy of high-order finite-difference and spectral methods, has been a recurring theme in numerical methods for time-dependent partial differential equations for a long time, especially for nontrivial domains, with substantial progress over the last decade, particularly through the penalty method discussed below. Before doing so, we review another method, which improves on injection in that stability can be shown for rather general systems and arbitrary high-order FD schemes.
10.1.2 Projections
Assume that a given IBVP is well posed and admits an energy estimate, as discussed in Section 5.2. Furthermore, assume that, up to the control of boundary terms, a semi-discrete approximation to it also admits an energy estimate. The key idea of the projection method [317, 319, 318] is to impose the boundary conditions by projecting, at each time, the numerical solution onto the space of grid functions satisfying those conditions. The central aspect is that the projection is chosen to be orthogonal with respect to the scalar product under which a semi-discrete energy estimate in the absence of boundaries can be shown. The orthogonality of the projection then guarantees that the estimate, including the control of the boundary term, holds.
Details on how to explicitly construct the projection can be found in [227]. The orthogonal projection method guarantees stability for a large class of problems admitting a continuum energy estimate. However, its implementation is somewhat involved.
10.1.3 Penalty conditions
A simple and robust method for imposing numerical boundary conditions, either at outer or inter-patch boundaries such as those appearing in domain-decomposition approaches, is through penalty terms. The boundary conditions are imposed not strongly but weakly, preserving the convergence order of the spatial approximation and leading to numerical stability for a large class of problems. The method can be applied both to FD and to spectral approximations. In fact, its spirit can be traced back to finite-element/discontinuous Galerkin methods (see [147], and [28] for more recent results). Terms are added to the evolution equations at the boundaries, which consistently penalize the mismatch between the numerical solution and the boundary conditions that the exact solution is subject to.
Finite differences. In the FD context the method is known as the Simultaneous Approximation Term (SAT) technique [117]. For semi-discrete approximations of IBVPs of arbitrarily high order admitting an energy estimate, both the order of accuracy and the energy estimate are preserved when the boundary conditions are imposed through it.
In the case of diagonal SBP norms it is straightforward to derive similar energy estimates for general linear symmetric hyperbolic systems of equations in several dimensions, simply by working with one characteristic variable at a time at each boundary: a penalty term is applied to the evolution equation of each incoming characteristic variable, as in Eq. (10.18), where λ is replaced by the corresponding characteristic speed. In particular, edges and corners are dealt with by simply imposing the boundary conditions with respect to the normal to each boundary, and an energy estimate follows.
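The following Python sketch illustrates the SAT technique in its simplest setting: an example of our own for the advection equation with the D_{2−1} operator, where the penalty placement and scaling follow the pattern described above and the specific numerical values are illustrative:

```python
import numpy as np

def sbp_d21(N, h):
    """D_{2-1} SBP operator: 2nd-order centered interior, 1st-order boundaries."""
    D = np.zeros((N + 1, N + 1))
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[N, N - 1], D[N, N] = -1.0, 1.0
    for i in range(1, N):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    return D / h

def advection_sat_error(N, T=0.5, S=1.0):
    """Solve u_t + u_x = 0 on [0,1], inflow data imposed weakly at x = 0."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    D = sbp_d21(N, h)
    u = np.sin(2 * np.pi * x)                      # initial data
    g = lambda t: np.sin(-2 * np.pi * t)           # exact inflow trace

    def rhs(t, u):
        du = -D @ u
        du[0] -= (S / (0.5 * h)) * (u[0] - g(t))   # SAT; boundary weight 1/2
        return du

    nsteps = int(round(T * N / 0.2))               # dt = 0.2 h
    dt = T / nsteps
    t = 0.0
    for _ in range(nsteps):                        # classical RK4
        k1 = rhs(t, u)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1)
        k3 = rhs(t + dt / 2, u + dt / 2 * k2)
        k4 = rhs(t + dt, u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return np.max(np.abs(u - np.sin(2 * np.pi * (x - T))))

print(advection_sat_error(50), advection_sat_error(100))  # error drops with N
```

The penalty strength S = 1 satisfies the lower bound S ≥ λ/2 for the advection speed λ = 1, and halving the grid spacing reduces the error.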
 In special cases it is possible to improve the error estimate by exploiting the strong stability of the problem. Consider, for instance, the case of the D_{2−1} operator defined in Example 51 with the associated diagonal scalar product (σ_{ ij }) = diag(…, 1, 1, 1, 1/2). Then, Eq. (10.30) gives$$\begin{array}{*{20}c} {{d \over {dt}}\Vert e \Vert_\Sigma ^2 = (\lambda - 2S)e_0^2 - 2{{\langle e,F\rangle}_\Sigma} = (\lambda - 2S)e_0^2 - \Delta x\,{e_0}{F_0} - 2\Delta x\sum\limits_{i = -\infty}^{-1} {{e_i}} {F_i}} \\ {\leq \left[ {(\lambda - 2S) + {{{\varepsilon ^2}} \over 2}} \right]e_0^2 + {{\Delta {x^2}} \over {2{\varepsilon ^2}}}F_0^2 + \Delta x\sum\limits_{i = -\infty}^{-1} {\left({e_i^2 + F_i^2} \right)} \quad \;} \\ {\leq \Vert e \Vert_\Sigma ^2 + {\mathcal O}\left({{{(\Delta x)}^4}} \right),\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;} \\ \end{array}$$where we have chosen 0 < ε^{2}/2 ≤ 2S − λ. This implies that the error converges to zero in the Σ-norm with second order and pointwise with order 3/2.

At any fixed resolution, the error typically^{30} decreases with larger values of the penalty parameter S, but the spectral radius of the discretization grows quickly in the process, as increasing S effectively introduces strongly dissipative eigenvalues (on the left half plane) into the spectrum, leading to demanding CFL limits (see, for example, [281]). Because the method is usually applied along with high-order methods, decreasing the error at fixed resolution at the expense of a more restrictive CFL limit does not seem worthwhile. In practice, values of S in the range λ/2 < S < λ give reasonable CFL limits.
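The growth of the spectral radius with the penalty strength can be seen directly in the advection model problem (again a toy sketch of our own; the D_{2−1} operator and the penalty placement are as described above):

```python
import numpy as np

def sat_operator(N, S):
    """Semi-discrete operator for u_t + u_x = 0 with SAT penalty strength S."""
    h = 1.0 / N
    D = np.zeros((N + 1, N + 1))
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[N, N - 1], D[N, N] = -1.0, 1.0
    for i in range(1, N):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    A = -D / h
    A[0, 0] -= S / (0.5 * h)          # penalty against homogeneous inflow data
    return A

def spectral_radius(N, S):
    return np.max(np.abs(np.linalg.eigvals(sat_operator(N, S))))

for S in (0.5, 1.0, 5.0, 20.0):
    print(S, spectral_radius(50, S))  # grows roughly like 2 S / h for large S
```

Since explicit time integrators require the timestep times the spectral radius to stay within their stability region, a large S directly translates into a small CFL limit.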
Spectral methods. The penalty method for imposing boundary conditions was actually introduced for spectral methods, prior to FDs, in [198, 199]. In fact, as we will see below, the FD and spectral cases follow very similar paths. Here we only discuss its application to the collocation method. Furthermore, as discussed in Section 9.4, when solving an IBVP, Gauss-Lobatto collocation points are natural among Gauss-type nodes, because they include the end points of the interval. We restrict our review to them, but the penalty method applies equally well to the other nodes. We refer to [236] for a thorough analysis of spectral penalty methods.

The approach for linear symmetric hyperbolic systems is the same as the one discussed for FDs: the evolution equation for each incoming characteristic variable is penalized as in Eq. (10.33), where λ is replaced by the corresponding characteristic speed.
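A minimal Legendre collocation example may be helpful here. This is a sketch of our own: the node and differentiation-matrix formulas are the standard Legendre-Gauss-Lobatto ones, and the penalty scale τ = S/ω₀, with ω₀ the boundary quadrature weight, follows the pattern of Eq. (10.33); the specific values are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_diff(N):
    """Legendre-Gauss-Lobatto nodes and first-derivative matrix on [-1, 1]."""
    interior = leg.Legendre.basis(N).deriv().roots()   # zeros of P_N'
    x = np.concatenate(([-1.0], np.sort(np.real(interior)), [1.0]))
    LN = leg.legval(x, [0.0] * N + [1.0])              # P_N at the nodes
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = LN[i] / (LN[j] * (x[i] - x[j]))
    D[0, 0] = -N * (N + 1) / 4.0
    D[N, N] = N * (N + 1) / 4.0
    return x, D

def legendre_penalty_error(N=20, T=1.0):
    """u_t + u_x = 0 on [-1,1]; inflow data imposed via a penalty at x = -1."""
    x, D = lgl_nodes_diff(N)
    w0 = 2.0 / (N * (N + 1))                       # LGL weight at the endpoint
    tau = 1.0 / w0                                 # penalty strength, >= 1/(2 w0)
    u = np.sin(np.pi * x)
    g = lambda t: np.sin(np.pi * (-1.0 - t))       # exact inflow trace
    def rhs(t, u):
        du = -D @ u
        du[0] -= tau * (u[0] - g(t))
        return du
    nsteps = 1000
    dt = T / nsteps
    t = 0.0
    for _ in range(nsteps):                        # classical RK4
        k1 = rhs(t, u)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1)
        k3 = rhs(t + dt / 2, u + dt / 2 * k2)
        k4 = rhs(t + dt, u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return np.max(np.abs(u - np.sin(np.pi * (x - T))))

print(legendre_penalty_error())
```

With only 21 collocation points the error is dominated by the time integrator, illustrating the spectral convergence of the spatial approximation.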
Devising a Chebyshev penalty scheme that guarantees stability is more convoluted; already for the advection equation, for example, an energy estimate is not available in the Chebyshev norm (see [209]). Stability in the L_{2} norm can be established using the Chebyshev-Legendre method [144], where the Chebyshev-Gauss-Lobatto nodes are used for the approximation, but the Legendre ones for satisfying the equation. In this approach the penalty method is global, because it adds terms to the right-hand side of the equations not only at the endpoint, but at all other collocation points as well.
10.2 Interface boundary conditions
Interface boundary conditions are needed when there are multiple grids in the computational domain, as discussed in Section 11 below. This is the case for complex geometries, when multiple patches are used for computational efficiency or to adapt the domain decomposition to the wave zone, for mesh refinement, or for a combination of these.
A simple approach for exchanging the information between two or more grids is a combination of interpolation and extrapolation of the whole state vector at the intersection of the grids. This is the method of choice in mesh refinement, for example, and works well in practice. In the case of curvilinear grids and very-high-order FD or spectral methods, though, it is in general difficult not only to prove numerical stability, but even to find a scheme that exhibits stability from a practical point of view.
10.2.1 Penalty conditions
The penalty method discussed above for outer boundary conditions can also be used for multi-domain interface ones, including those present in complex geometries [115, 235, 118, 311, 312, 140]. It is simple to implement, robust, leads to stability for a very large class of problems, and preserves the accuracy of arbitrary high-order FD and spectral methods.
 Positive λ: the penalty strengths are chosen as$${S_l} = \lambda + \delta ,\quad {S_r} = \delta ,\quad {\rm{with}}\;\delta \geq -{\lambda \over 2},$$(10.42)and the estimate is$${d \over {dt}}{E_{\rm{d}}} = -{(u_0^l - u_0^r)^2}(\lambda + 2\delta) \leq 0.$$(10.43)
 Negative λ: this is obtained from the previous case after the transformation λ ↦ −λ,$${S_r} = -\lambda + \delta ,\quad {S_l} = \delta ,\quad {\rm{with}}\;\delta \geq {\lambda \over 2},$$(10.44)and$${d \over {dt}}{E_{\rm{d}}} = {(u_0^l - u_0^r)^2}(\lambda - 2\delta) \leq 0.$$(10.45)
 Vanishing λ: this can be seen as the limiting case of any of the above two, with$${d \over {dt}}{E_{\rm{d}}} = -{(u_0^l - u_0^r)^2}\,2\delta \leq 0.$$(10.46)

For the minimum values of δ allowed by the above inequalities, the energy estimate is the same as for the single-grid case with outer boundary conditions (see Section 10.1.3), and the discretization is time-stable (see Section 7.4), while for larger values of δ the energy is damped at a rate proportional to the square of the mismatch at the interface.

Except for the most natural choice, δ = 0, the evolution equations for the outgoing modes also need to be penalized in a consistent way in order to derive an energy estimate. However, the lack of an energy-type estimate does not mean that the scheme is unstable, since the energy method provides sufficient, but not always necessary, conditions for stability.
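The following two-patch sketch (an illustration of our own, with an assumed advection speed λ = 1 and the D_{2−1} operator; the penalty normalization is ours) penalizes only the incoming mode at the interface, corresponding to the choice δ = 0 discussed above, and propagates a pulse across the patch boundary:

```python
import numpy as np

def sbp_d21(N, h):
    """D_{2-1} SBP operator (2nd-order interior, 1st-order boundaries)."""
    D = np.zeros((N + 1, N + 1))
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[N, N - 1], D[N, N] = -1.0, 1.0
    for i in range(1, N):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    return D / h

# Two patches [-1,0] and [0,1] for u_t + u_x = 0; only the incoming mode on
# the right patch is penalized at the interface.
N = 100
h = 1.0 / N
xl = np.linspace(-1.0, 0.0, N + 1)
xr = np.linspace(0.0, 1.0, N + 1)
D = sbp_d21(N, h)
tau = 0.5 / (0.5 * h)                   # S = lambda/2 in our normalization

def rhs(ul, ur):
    dul, dur = -D @ ul, -D @ ur
    dul[0] -= tau * ul[0]               # physical inflow, homogeneous data
    dur[0] -= tau * (ur[0] - ul[-1])    # interface: match the left-patch value
    return dul, dur

pulse = lambda x: np.exp(-((x + 0.5) / 0.15) ** 2)
ul, ur = pulse(xl), pulse(xr)
dt, nsteps = 0.2 * h, int(round(1.0 / (0.2 * h)))
for _ in range(nsteps):                 # classical RK4 up to t = 1
    k1l, k1r = rhs(ul, ur)
    k2l, k2r = rhs(ul + dt / 2 * k1l, ur + dt / 2 * k1r)
    k3l, k3r = rhs(ul + dt / 2 * k2l, ur + dt / 2 * k2r)
    k4l, k4r = rhs(ul + dt * k3l, ur + dt * k3r)
    ul = ul + dt / 6 * (k1l + 2 * k2l + 2 * k3l + k4l)
    ur = ur + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)

err = np.max(np.abs(ur - pulse(xr - 1.0)))   # pulse advected to x = +0.5
print(err)
```

The pulse crosses the interface with only the truncation error of the second-order interior scheme.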

The general case of symmetric hyperbolic systems follows along the same lines: a decomposition into characteristic variables is performed, and the evolution equation for each of them is penalized as in the advection equation example. At least for diagonal norms, stability then also follows for general linear symmetric hyperbolic systems in several dimensions. With the standard caveats for non-diagonal norms, the procedure is similar, except that penalty terms are added to the evolution equations not only at the interface on each grid, but also near it. In practice, though, applying penalties just at the interfaces appears to work well in many situations.
Spectral methods. The standard procedure for interface spectral methods is to penalize each incoming characteristic variable, exactly as in the outer boundary condition case; namely, as in Eq. (10.33), with lower bounds for the penalty strengths given by Eqs. (10.34) and (10.36) for Legendre and Chebyshev polynomials, respectively. We know from the FD analysis above, though, that in general this does not imply an energy estimate; in order to obtain one, the outgoing modes also need to be penalized, with strengths that are coupled to those of the incoming modes. However, penalizing just the incoming modes at interfaces appears to work well in practice, so we analyze this case in some detail.
The figure also shows the spectral radius as a function of the penalty strength. Beyond S = 1 it grows very quickly and, even though, as mentioned, the timestep is usually determined by keeping the time integration error below the spatial discretization one, that might no longer be the case if the spectral radius is too large. Thus, it is probably good to keep S ≲ 1.
10.3 Going further, applications in numerical relativity
In numerical relativity, the projection method for outer boundary conditions has been used in references [102, 103, 421, 278, 225], the penalty FD one for multidomain boundary conditions in [281, 141, 385, 145, 324, 325, 425, 256, 454, 426], and for spectral methods in — among many others — [130, 289, 90, 168, 306, 450, 402, 131, 149, 290, 97, 91, 31, 381, 150, 384].
Many systems in numerical relativity, starting with Einstein’s equations themselves, are numerically solved by reducing them to first-order systems, because there is a large pool of advanced and mature analytical and numerical techniques for them. However, this comes at the expense of enlarging the system and, in particular, introducing extra constraints (though this seems to be less of a concern; see, for example, [286, 82]). It seems more natural to solve such equations directly in second-order (at least in space) form. It turns out, though, that it is considerably more complicated to ensure stability for such systems. Trying to “integrate back” an algorithm for a first-order reduction to its original second-order form is, in general, not possible; see, for example, [101, 413] for a discussion of this apparent paradox and of the difficulties associated with constructing boundary closures for second-order systems such that an energy estimate follows.
In [301] the projection method and high-order SBP operators approximating second derivatives were used to provide interface boundary conditions for a wave equation (directly in second-order-in-space form) in discontinuous media, while guaranteeing an energy estimate. The domain in this work is a rectangular multi-block domain with piecewise-constant coefficients, where the interfaces coincide with the location of the discontinuities. This approach was generalized to variable coefficients and complex geometries in [298], using the SAT for the jump discontinuities instead of projection.
The difficulty is not particular to FDs: in [413] a penalty multi-domain method was derived for second-order systems. For the Legendre and constant-coefficient case the method guarantees an energy estimate, but difficulties are reported in guaranteeing one in the variable-coefficient case. Nevertheless, it appears to work well in practice in the variable-coefficient case as well. An interesting aspect of the approach of [413] is that an energy estimate is obtained by applying the penalty in the whole domain (compare the discussion of the Chebyshev-Legendre penalty method in Section 10.1.3).
A recent generalization to more general penalty couplings, in which the penalty terms are not scalars but matrices (i.e., there is coupling between the penalties for different characteristic variables), valid both for FDs and, at least, Legendre collocation methods (as discussed, the underlying tool is the same: SBP), can be found in [119].
Energy estimates are in general lost when the different grids are not conforming (different types of domain decompositions are discussed in the next Section 11), and interpolation is needed. This is the case when using overlapping patches with curvilinear coordinates but also mesh refinement with Cartesian, nested boxes (see, for example, [277] and references therein). A recent promising development has been the introduction of a procedure for systematically constructing interpolation operators preserving the SBP property for arbitrary highorder cases; see [297]. Numerical tests are presented with a 2:1 refinement ratio, where the design convergence rate is observed. It is not clear whether reflections at refinement boundaries such as those reported in [39] would still need to be addressed or if they would be taken care of by the highorder accuracy.
10.3.1 Absorbing boundary conditions
Finally, we mention some results in numerical relativity concerning absorbing artificial boundaries. In [314], boundary conditions based on the work of [46], which are perfectly absorbing for quadrupolar solutions of the flat wave equation, were numerically implemented via spectral methods and proposed for use in a constrained evolution scheme for Einstein’s field equations [65]. See [271, 270, 272] for a different method, which provides exact, nonlocal outer boundary conditions for linearized gravitational waves propagating on a Schwarzschild background. A numerical implementation of the well-posed harmonic IBVP with Sommerfeld-type boundary conditions given in [267] was worked out in [33], where the accuracy of the boundary conditions was also tested.
In [366], various boundary treatments for the Einstein equations were compared to each other using the test problem of a Schwarzschild black hole perturbed by an outgoing gravitational wave. The solutions from different boundary algorithms were compared to a reference numerical solution obtained by placing the outer boundary at a distance large enough to be causally disconnected from the interior spacetime region where the comparison was performed. The numerical implementation in [366] was based on the harmonic formulation described in [286].
A similar comparison was performed for alternative boundary conditions, including spatial compactifications and sponge layers. The errors in the gravitational waves were also estimated in [366], by computing the complex Weyl scalar Ψ_{4} for the different boundary treatments; see Figure 10 in that reference.
For higher-order absorbing boundary conditions, which involve derivatives of the Weyl scalar Ψ_{0}, see [369], and [365] for their numerical implementation.
11 Domain Decomposition
Most three-dimensional codes solving the Einstein equations currently use several non-uniform grids/numerical domains. Adaptive mesh refinement (AMR) à la Berger & Oliger [48], where the computational domain is covered with a set of nested grids, usually taken to be Cartesian, is used by many efforts; see, for instance, [386, 338, 394, 277, 160, 24, 393, 38, 84, 109, 430, 442, 439, 157, 321]. Other approaches use multiple patches with curvilinear coordinates, or a combination of both. Typical simulations of Einstein’s equations do not fall into the category of complex geometries and usually require a fairly “simple” domain decomposition (in comparison to fully unstructured approaches in other fields).
Below we give a brief overview of some domain decomposition approaches. Our discussion is far from exhaustive, and only a few representative samples from the rich variety of efforts are mentioned. In the context of Cauchy evolutions, the use of multiple patches in numerical relativity was first advocated and pursued by Thornburg [417, 418].
11.1 The power and need of adaptivity
11.2 Adaptive mesh refinement for BBH in higher dimensional gravity
11.3 Adaptive mesh refinement and curvilinear grids
This hybrid approach has been used in several applications, including the validation of procedures for extrapolating gravitational waves, extracted from numerical simulations at finite radii, to large distances from the “sources” [331]. Since the outermost grid structure is well adapted to the wave zone, the outer boundary can be placed at large distances at a cost that grows only linearly with its location. Other applications include Cauchy-characteristic extraction (CCE) of gravitational waves [350, 55], the development of hybrid waveforms [371], and studies of the memory effect in gravitational waves [330]. The accuracy necessary to study small memory effects is enabled both by the grid structure, which allows the outer boundary to be located far away, and by CCE.
11.4 Spectral multidomain binary blackhole evolutions
Simulating non-vacuum systems, such as relativistic hydrodynamical ones, using spectral methods can be problematic, particularly when surfaces, shocks, or other non-smooth behavior appears in the fluid. Without further processing, the fast convergence is lost and Gibbs oscillations can destabilize the simulation. A method that has been successfully used to overcome this in general-relativistic hydrodynamics is evolving the spacetime metric and the fluid on two different grids, each using different numerical techniques. The spacetime is evolved spectrally, while the fluid is evolved using standard finite-difference/finite-volume shock-capturing techniques on a separate uniform grid. The first code adopting this approach was described in [142]: a stellar-collapse code assuming a conformally-flat three-metric, with the resulting elliptic equations being solved spectrally. The two-grid approach was adopted for full numerical-relativity simulations of black-hole-neutron-star binaries in [150, 149, 168]. The main advantage of this method when applied to binary systems is that at any given time the fluid evolution grid only needs to extend as far as the neutron-star matter. During the pre-merger phase, then, this grid can be chosen to be a small box around the neutron star, achieving very high resolution for the fluid evolution at low computational cost. More recently, in [168] an automated regridder was added, so that the fluid grid automatically adjusts itself at discrete times to accommodate expansion or contraction of the nuclear matter. The main disadvantage of the two-grid method is the large amount of interpolation required for the two grids to communicate with each other. Straightforward spectral interpolation would be prohibitively expensive, but a combination of spectral refinement and polynomial interpolation [69] reduces the cost to about 20–30 percent of the simulation time.
11.5 Multidomain studies of accretion disks around black holes
The freedom to choose arbitrary (smooth) coordinate transformations allows the design of sophisticated problem-fitted meshes to address a number of practical issues. In [256] the authors used a hybrid multi-block approach for a general-relativistic hydrodynamics code developed in [456] to study instabilities in accretion disks around black holes in the context of gamma-ray-burst central engines. They evolved the spacetime metric using the first-order form of the generalized harmonic formulation of the Einstein equations (see Section 4.1) on conforming grids, while using a high-resolution shock-capturing scheme for relativistic fluids on the same grid, but with additional overlapping boundary zones (see [456] for details on the method). The metric differentiation was performed using the optimized D_{8−4} FD operators satisfying the SBP property, as described in Section 8.3. The authors made extensive use of adapted curvilinear coordinates in order to achieve the desired resolutions in different parts of the domain and to make the coordinate lines conform to the shape of the solution. Maximal dissipative boundary conditions, as defined in Section 5.2, were applied to the incoming fields, and inter-domain boundary conditions for the metric were implemented using the finite-difference version of the penalty method described in Section 10.
11.6 Finite-difference multi-block orbiting binary black-hole simulations
Footnotes
 1.More precisely, it follows from the Paley-Wiener theorem (see Theorem IX.11 in [346]) that \(f \in {{\mathcal T}^\omega}\) with \(\hat f(k) = 0\) for |k| > R if and only if f possesses an analytic extension \(\bar f:{{\rm{{\mathbb C}}}^n} \rightarrow {\rm{{\mathbb C}}}\) such that for each N = 0, 1, 2,… there exists a constant C_{ N } with$$\vert \bar f(\zeta)\vert \; \leq {C_N}{{{e^{R\vert {\rm Im} (\zeta)\vert}}} \over {{{(1 + \vert \zeta \vert)}^N}}},\qquad \zeta \in {{\mathbb C}^n}.$$
 2.
In this regard we should mention the Cauchy-Kovalevskaya theorem (see, for example, [161]), which always provides unique local analytic solutions to the Cauchy problem for quasilinear partial differential equations with analytic coefficients and data on a non-characteristic surface. However, this theorem does not say anything about causal propagation or stability with respect to high-frequency perturbations.
 3.
In fact, we will see in Section 3.2 that in the variable coefficient case, smoothness of the symmetrizer in k′ is required.
 4.
Here, the factor m^{2} could, in principle, be replaced by any positive number, which shows that the symmetrizer is not always unique. The choice here is such that the expression u*Hu is proportional to the physical energy density of the system. Notice, however, that for the massless case m = 0 one must replace m^{2} = 0 by a positive constant in order for the symmetrizer to be positive definite.
 5.
Notice that Ē_{ i } has only two degrees of freedom, since it is orthogonal to k. Likewise, the quantities \({{\bar W}_i},{{\bar V}_i}\) have two degrees of freedom each, and \({{\bar W}_{ij}}\) has three, since it is orthogonal to k and trace-free.
 6.
Here, the advection term \(\sum\limits_{j = 1}^n {{B^j}{\partial \over {\partial {x^j}}}\upsilon}\) is subtracted from v_{ t } for convenience only.
 7.The Fourier transform of the function v(t, ·) = P(t, ·, ∂/∂x)u(t, ·) is formally given by$$\hat v(t, \cdot) = {1 \over {{{(2\pi)}^{n/2}}}}\left[ {\sum\limits_{j = 1}^n {{{\hat A}^j}} (t, \cdot){\ast}i{k_j}\hat u(t, \cdot) + \hat B(t, \cdot){\ast}\hat u(t, \cdot)} \right],$$where Â^{ j }(t, ·), \(\hat B(t, \cdot)\) denote the Fourier transforms of A^{ j }(t, ·) and B(t, ·), respectively, and where the star denotes the convolution operator. Unless A^{ j } and B are independent of x, the different k-modes couple to each other.
 8.
These smoothness requirements are sometimes omitted in the numericalrelativity literature.
 9.
In principle, the maximum propagation speed v(t_{0}) could be infinite. Finiteness can be guaranteed, for instance, by requiring the principal symbol P_{0}(t, x, s) to be independent of (t, x) for |x| > R, outside a large ball of radius R > 0.
 10.
See Theorem 4.1.3 in [327].
 11.Geometrically, this means that the identity map \(\phi : (M,g) \rightarrow (M,\overset \circ g),p \mapsto p\), satisfies the inhomogeneous harmonic wave map equation$${\nabla ^\mu}{\nabla _\mu}{\phi ^D} + \overset \circ \Gamma {\,^D}_{AB}{{\partial {\phi ^A}} \over {\partial {x^\mu}}}{{\partial {\phi ^B}} \over {\partial {x^\nu}}}{g^{\mu \nu}} = {H^D},$$(4.2)where x^{ µ } and x^{ A } are local coordinates on (M, g) and \((M,\overset \circ g)\), respectively.
 12.
As indicated above, given initial data \(g_{\alpha \beta}^{(0)}\) and \(k_{\alpha \beta}^{(0)}\) it is always possible to adjust the background metric \({\overset \circ g_{\alpha \beta}}\) such that the initial data for h_{ αβ } is trivial; just replace \({\overset \circ g_{\alpha \beta}}(t,x)\) by \({\overset \circ g _{\alpha \beta}}(t,x) + h_{\alpha \beta}^{(0)}(x) + tk_{\alpha \beta}^{(0)}(x)\).
 13.
Notice that the condition of T^{ αβ } being divergencefree depends on the metric g_{ αβ } itself, which is not known before actually solving the nonlinear wave equation (4.5), and the latter, in turn, depends on T^{ αβ }. Therefore, one cannot specify T^{ αβ } by hand, except in the vacuum case, T^{ αβ } = 0, or in the case T^{ αβ } = −Λg^{ αβ } with Λ the cosmological constant. In the more general case, the stressenergy tensor has to be computed from a diffeomorphismindependent action for the matter fields and one has to consistently solve the coupled Einsteinmatter system.
 14.
Weak hyperbolicity of the system (4.20, 4.21) with given shift and densitized lapse has also been pointed out in [254] based on a reduction, which is first order in time and space. However, there are several inequivalent such reductions, and so it is not sufficient to show that a particular one is weakly hyperbolic in order to infer that the second-order-in-space system (4.20, 4.21) is weakly hyperbolic.
 15.
Notice that even when f = 1, the evolution system (4.27, 4.28, 4.20, 4.21) is not equivalent to the harmonic system discussed in Section 4.1. In the former case, the harmonic constraint is exactly enforced in order to obtain evolution equations for the lapse and shift; while in the latter case first derivatives of the harmonic constraint are combined with the Hamiltonian and momentum constraints; see Eq. (4.13).
 16.
If \({\overset \circ \gamma _{ij}}\) is time-independent but not flat, additional curvature terms appear in the equations; see Appendix A in [82].
 17.
However, one should mention that this convergence is usually not sufficient for obtaining accurate solutions. If the constraint manifold is unstable, small departures from it may grow exponentially in time, and even though the constraint errors converge to zero, they remain large for finite resolutions; see [254] for an example of this effect.
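The tension between convergence and accuracy can be made quantitative with a simple error model. The sketch below uses hypothetical values (convergence order p = 2, constraint-violation growth rate λ = 1, amplitude C₀ = 1; none of these numbers come from the article):

```python
import math

# Model constraint error: C(t; h) = C0 * h**p * exp(lam * t).
# p, lam, C0 are hypothetical illustration values, not from the article.
def constraint_error(t, h, p=2, lam=1.0, C0=1.0):
    return C0 * h**p * math.exp(lam * t)

# Convergence: at any fixed time, halving h reduces the error by 2**p = 4.
ratio = constraint_error(10.0, 0.005) / constraint_error(10.0, 0.01)

# Yet at a finite resolution the error can still be of order one,
# because the exponential factor exp(lam * t) dominates:
err = constraint_error(10.0, 0.01)   # h**2 * exp(10), roughly 2.2
```

So the scheme converges as h → 0, but at any fixed resolution the exponentially growing mode eventually makes the constraint error large, exactly as described above.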
 18.
 19.
Alternatively, if k = −∂_{ x } is taken to be the unit outward normal, then the incoming normal characteristic fields are the ones with positive characteristic speeds with respect to k. This more geometrical definition will be the one taken in Section 5.2.
 20.
\({\hat k}\) is not well defined if k = 0; however, in this case the scalar block comprising (Ẽ_{1}, Ũ_{1}) decouples from the vector block comprising (Ẽ_{ A }, Ũ_{ A }) and it is simple to verify that the resulting system does not possess nontrivial simple wave solutions.
 21.
One can always assume that u(0, x) = u_{ t }(0, x) = 0 for all x ∈ Σ by a suitable redefinition of u, F and g.
 22.
 23.
Instead of imposing the constraint itself on the boundary, one might try to set some linear combination of its normal and time derivatives to zero, obtaining a constraint-preserving boundary condition that does not involve zero speed fields. Unfortunately, this trick only seems to work for reflecting boundaries; see [405] and [106] for the case of general relativity. In our example, such boundary conditions are given by ∂_{ t }A_{ t } = ∂_{ x }A_{ x } = ∂_{ t }A_{ y } = ∂_{ t }A_{ z } = 0, which imply ∂_{ t }(∂^{ ν }A_{ ν }) = 0.
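The implication can be verified directly, assuming a mostly-plus Minkowski background and a boundary at constant x (so that ∂_{ t }, ∂_{ y }, ∂_{ z } are tangential and may be applied to quantities vanishing on the boundary):

```latex
\partial_t\left(\partial^\nu A_\nu\right)
 = -\,\partial_t\!\left(\partial_t A_t\right)
   + \partial_t\!\left(\partial_x A_x\right)
   + \partial_y\!\left(\partial_t A_y\right)
   + \partial_z\!\left(\partial_t A_z\right) = 0,
```

since each parenthesized quantity vanishes on the boundary by the conditions above, and tangential derivatives of quantities vanishing on the boundary vanish as well.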
 24.
This relation is explicitly given in terms of the Weyl tensor C, namely \({\Psi _0} = {C_{\alpha \beta \gamma \delta}}{K^\alpha}{Q^\beta}{K^\gamma}{Q^\delta} = 2({E_{\alpha \beta}} - {\varepsilon _{\gamma \delta (\alpha}}{s^\gamma}{B^\delta}_{\;\beta)}){Q^\alpha}{Q^\beta}\), where E_{ αβ } = C_{ αγβδ }T^{ γ }T^{ δ } and B_{ αβ } = *C_{ αγβδ }T^{ γ }T^{ δ } are the electric and magnetic parts of C with respect to the timelike vector field T = ∂_{ t }.
 25.
A s