
Continuum and Discrete Initial-Boundary Value Problems and Einstein’s Field Equations


Many evolution problems in physics are described by partial differential equations on an infinite domain; therefore, one is interested in the solutions to such problems for a given initial dataset. A prominent example is the binary black-hole problem within Einstein’s theory of gravitation, in which one computes the gravitational radiation emitted during the inspiral, merger, and ringdown of two black holes. Powerful mathematical tools can be used to establish qualitative statements about the solutions, such as their existence, uniqueness, continuous dependence on the initial data, or their asymptotic behavior over large time scales. However, one is often interested in computing the solution itself, and unless the partial differential equation is very simple, or the initial data possesses a high degree of symmetry, this computation requires approximation by numerical discretization. When solving such discrete problems on a machine, one is faced with a finite limit to computational resources, which forces the replacement of the infinite continuum domain with a finite computer grid. This, in turn, leads to a discrete initial-boundary value problem. The hope is to recover, with high accuracy, the exact solution in the limit where the grid spacing converges to zero and the boundary is pushed to infinity.

The goal of this article is to review some of the theory necessary to understand the continuum and discrete initial-boundary value problems arising from hyperbolic partial differential equations and to discuss its applications to numerical relativity; in particular, we present well-posed initial and initial-boundary value formulations of Einstein’s equations, and we discuss multi-domain high-order finite difference and spectral methods to solve them.


This review discusses fundamental tools from the analytical and numerical theory underlying the Einstein field equations as an evolution problem on a finite computational domain. Reaching the current status of numerical relativity after decades of effort has driven the community not only to use state-of-the-art techniques, but also to extend them and to work out new approaches and methodologies of its own. This review discusses some of the theory involved in setting up the problem, as well as numerical approaches for solving it. Its scope is rather broad: it ranges from analytical aspects related to the well-posedness of the Cauchy problem to numerical discretization schemes guaranteeing stability and convergence to the exact solution.

At the continuum level, emphasis is placed on setting up the initial-boundary value problem (IBVP) for Einstein’s equations properly, by which we mean obtaining a well-posed formulation that is flexible enough to incorporate coordinate conditions allowing for long-term, accurate, and stable numerical evolutions. Here, the well-posedness property is essential, in that it guarantees the existence of a unique solution that depends continuously on the initial and boundary data. In particular, this ensures that small perturbations in the data do not get arbitrarily amplified. Since such small perturbations do appear in numerical simulations because of discretization errors or finite machine precision, if such unbounded growth were allowed, the numerical solution would not converge to the exact one as resolution is increased. This picture is at the core of Lax’s historical theorem, which implies that consistency of a numerical scheme is not sufficient for its solution to converge to the exact one. Instead, the scheme also needs to be numerically stable, a property that is the discrete counterpart of well-posedness of the continuum problem.

While the well-posedness of the Cauchy problem in general relativity in the absence of boundaries was established a long time ago, only relatively recently has the IBVP been addressed and well-posed problems formulated. This is mainly due to the fact that the IBVP presents several new challenges, related to constraint preservation, the minimization of spurious reflections, and well-posedness. In fact, it is only very recently that such a well-posed problem has been found for a metric-based formulation used in numerical relativity, and there are still open issues that need to be sorted out. It is interesting to point out that the IBVP in general relativity has driven research that has led to well-posedness results for second-order systems with a large new class of boundary conditions which, in addition to Einstein’s equations, are also applicable to Maxwell’s equations in their potential formulation.

At the discrete level, the focus of this review is mainly on designing numerical schemes for which fast convergence to the exact solution is guaranteed. Unfortunately, few general results are known for nonlinear equations and, therefore, we concentrate on schemes for which stability and convergence can be shown at least at the linear level. If the exact solution is smooth, as expected for vacuum solutions of Einstein’s field equations with smooth initial data and appropriate gauge conditions, at least as long as no curvature singularities form, it is not unreasonable to expect that schemes guaranteeing stability at the linearized level, perhaps with some additional filtering, are also stable for the nonlinear problem. Furthermore, since the solutions are expected to be smooth, emphasis is placed here on fast-converging space discretizations, such as high-order finite-difference or spectral methods, especially those that can be applied in multi-domain implementations.

The organization of this review is as follows. Section 3 starts with a discussion of well-posedness of initial-value problems for evolution problems in general, with special emphasis on hyperbolic ones, including their algebraic characterization. Next, in Section 4 we review some formulations of Einstein’s equations, which yield a well-posed initial-value problem. Here, we mainly focus on the harmonic and BSSN formulations, which are the two most widely used ones in numerical relativity, as well as on the ADM formulation with different gauge conditions. Actual numerical simulations always involve the presence of computational boundaries, which raises the need to analyze the well-posedness of the IBVP. For this reason, the theory of IBVPs for hyperbolic problems is reviewed in Section 5, followed by a presentation of the state of the art of boundary conditions for the harmonic and BSSN formulations of Einstein’s equations in Section 6, where open problems related to gauge uniqueness are also described.

Section 7 reviews some of the numerical stability theory, including necessary eigenvalue conditions, which are quite useful in practice for analyzing complicated systems or discretizations. We also discuss necessary and sufficient conditions for stability within the method of lines, and Runge-Kutta methods. Sections 8 and 9 are devoted to two classes of spatial approximations: finite differences and spectral methods. Finite differences are rather standard and widespread, so in Section 8 we mostly focus on the construction of optimized operators of arbitrarily high order satisfying the summation-by-parts property, which is useful in stability analyses. We also briefly mention classical polynomial interpolation and how to systematically construct finite-difference operators from it. In Section 9 we present the main elements and theory of spectral methods, including spectral convergence from solutions to Sturm-Liouville problems, expansions in orthogonal polynomials, Gauss quadratures, spectral differentiation, and spectral viscosity. We present several explicit formulae for the families of polynomials most widely used: Legendre and Chebyshev. Section 10 describes boundary closures, which in the present context refer to procedures for imposing boundary conditions that lead to stability results. We emphasize the penalty technique, which applies to finite-difference methods of arbitrarily high order as well as to spectral methods, and to both outer and interface boundaries, such as those appearing when there are multiple grids, as in domain decompositions for complex geometries. We also discuss absorbing boundary conditions for Einstein’s equations. Finally, Section 11 presents a random sample of approaches in numerical relativity using multiple, semi-structured grids and/or curvilinear coordinates. In particular, some of these examples illustrate many of the methods discussed in this review in realistic simulations.

There are many topics related to numerical relativity, which are not covered by this review. It does not include discussions of physical results in general relativity obtained through numerical simulations, such as critical phenomena or gravitational waveforms computed from binary black-hole mergers. For reviews on these topics we refer the reader to [223] and [337, 122], respectively. See also [9, 45] for recent books on numerical relativity. Next, we do not discuss setting up initial data and solving the Einstein constraints, and refer to [133]. For reviews on the characteristic and conformal approach, which are only briefly mentioned in Section 6.4, we refer the reader to [432] and [172], respectively. Most of the results specific to Einstein’s field equations in Sections 4 and 6 apply to four-dimensional gravity only, though it should be possible to generalize some of them to higher-dimensional theories. Also, as we have already mentioned, the results described here mostly apply to the vacuum field equations, in which case the solutions are expected to be smooth. For aspects involving the presence of shocks, such as those present in relativistic hydrodynamics, we refer the reader to [165, 295]. See [352] for a more detailed review on hyperbolic formulations of Einstein’s equations, and [351] for one on global existence theorems in general relativity. Spectral methods in numerical relativity are discussed in detail in [215]. The 3+1 approach to general relativity is thoroughly reviewed in [214]. Finally, we refer the reader to [126] for a recent book on general relativity and the Einstein equations, which, among many other topics, discusses local and global aspects of the Cauchy problem, the constraint equations, and self-gravitating matter fields such as relativistic fluids and the relativistic kinetic theory of gases.

Except for a few historical remarks, this review does not discuss much of the historical path to the techniques and tools presented, but rather describes the state of the art of a subset of those which appear to be useful. Our choice of topics is mostly influenced by those for which some analysis is available or possible.

We have tried to make each section as self-contained as possible within the scope of a manageable review, so that they can be read separately, though each of them builds on the previous ones. Numerous examples are included.

Notation and Conventions

Throughout this article, we use the following notation and conventions. For a complex vector u ∈ ℂm, we denote by u* its transposed, complex conjugate, such that u · v := u*v is the standard scalar product for two vectors u, v ∈ ℂm. The corresponding norm is defined by \(\vert u\vert := \sqrt {{u^\ast}u}\). The norm of a complex, m × k matrix A is

$$\vert A\vert : = \underset {u \in {{\mathbb {C}}^k}\backslash \{0\}} {\sup} {{\vert Au\vert} \over {\vert u\vert}}.$$

The transposed, complex conjugate of A is denoted by A*, such that v · (Au) = (A*v) · u for all u ∈ ℂk and v ∈ ℂm. For two Hermitian m × m matrices A = A* and B = B*, the inequality A ≤ B means u · Au ≤ u · Bu for all u ∈ ℂm. The identity matrix is denoted by I.

The spectrum of a complex, m × m matrix A is the set of all eigenvalues of A,

$$\sigma (A): = \{\lambda \in {\mathbb C}:\lambda I - A\;{\rm{is\ not\ invertible}}\} ,$$

which is real for Hermitian matrices. The spectral radius of A is defined as

$$\rho (A): = \max \{\vert \lambda \vert :\lambda \in \sigma (A)\}.$$

The matrix norm |B| of a complex m × k matrix B can then also be computed as \(\vert B\vert := \sqrt {\rho ({B^\ast}B)}\).
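
This identity can be checked numerically; the following sketch (the matrix shown is an arbitrary example, and numpy is assumed to be available) compares the operator norm with the spectral-radius formula.

```python
import numpy as np

# Check numerically that the operator norm |B| equals sqrt(rho(B* B)),
# with rho the spectral radius. B is an arbitrary complex 3 x 2 example.
B = np.array([[1.0 + 2.0j, 0.5],
              [0.0, -1.0j],
              [2.0, 1.0]])

# |B| = sup_{u != 0} |Bu| / |u| is the largest singular value of B.
op_norm = np.linalg.norm(B, 2)

# sqrt of the spectral radius of B* B, where B* is the conjugate transpose.
spec_norm = np.sqrt(np.max(np.abs(np.linalg.eigvals(B.conj().T @ B))))

assert np.isclose(op_norm, spec_norm)
```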

Next, we denote by L2(U) the class of measurable functions ƒ: U ⊂ ℝn → ℂm on the open subset U of ℝn, which are square-integrable. Two functions ƒ, gL2(U), which differ from each other only by a set of measure zero, are identified. The scalar product on L2(U) is defined as

$$\left\langle {f,\,g} \right\rangle : = \int\limits_U f {(x)^{\ast}}g(x){d^n}x,\quad f,g \in {L^2}(U),$$

and the corresponding norm is \(\Vert f\Vert := \sqrt {\langle f,f\rangle}\). According to the Cauchy-Schwarz inequality we have

$$\left\langle {f,g} \right\rangle \leq \left\Vert f \right\Vert \left\Vert g \right\Vert ,\quad f,g \in {L^2}(U).$$

The Fourier transform of a function ƒ, belonging to the class \(C_0^\infty ({{\rm{\mathbb R}}^n})\) of infinitely-differentiable functions with compact support, is defined as

$$\hat f(k): = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{- ik \cdot x}}} f(x){d^n}x,\quad k \in {{\mathbb R}^n}.$$

According to Parseval’s identities, \(\langle \hat f,\hat g\rangle = \langle f,g\rangle\) for all \(f,g \in C_0^\infty ({{\rm{\mathbb R}}^n})\), and the map \(C_0^\infty ({{\rm{{\mathbb R}}}^n}) \rightarrow {L^2}({{\rm{{\mathbb R}}}^n}),f \mapsto \hat f\) can be extended to a linear, unitary map Ƒ : L2(ℝn) → L2(ℝn) called the Fourier-Plancherel operator; see, for example, [346]. Its inverse is given by \({{\mathcal F}^{- 1}}(f)(x) = \hat f(- x)\) for ƒ ∈ L2(ℝn) and x ∈ ℝn.
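
A discrete analogue of Parseval’s identity can be verified directly: with the unitary normalization, the discrete Fourier transform preserves the scalar product, mirroring the Fourier-Plancherel operator (a small sketch; grid size and random data are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(128) + 1j * rng.standard_normal(128)
g = rng.standard_normal(128) + 1j * rng.standard_normal(128)

# norm="ortho" makes the discrete Fourier transform unitary.
fh = np.fft.fft(f, norm="ortho")
gh = np.fft.fft(g, norm="ortho")

# np.vdot conjugates its first argument, matching <f, g> = int f* g.
assert np.isclose(np.vdot(fh, gh), np.vdot(f, g))
```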

For a differentiable function u, we denote by u_t, u_x, u_y, u_z its partial derivatives with respect to t, x, y, z.

Indices labeling gridpoints and number of basis functions range from 0 to N. Superscripts and subscripts are used to denote the numerical solution at some discrete timestep and gridpoint, as in

$$v_j^k: = v({t_k},{x_j}).$$

We use boldface fonts for gridfunctions, as in

$${{\mathbf {v}}^k}: = \{v({t_k},{x_j})\} _{j = 0}^N.$$

The Initial-Value Problem

We start here with a discussion of hyperbolic evolution problems on the infinite domain ℝn. This is usually the situation one encounters in the mathematical description of isolated systems, where some strong-field phenomena take place “near the origin” and generate waves, which are emitted toward “infinity”. Therefore, the goal of this section is to analyze the well-posedness of the Cauchy problem for quasilinear hyperbolic evolution equations without boundaries. The case with boundaries is the subject of Section 5. As mentioned in the introduction (Section 1), the well-posedness results are fundamental in the sense that they give existence (at least local in time if the problem is nonlinear) and uniqueness of solutions and show that these depend continuously on the initial data. Of course, how the solution actually appears in detail needs to be established by more sophisticated mathematical tools or by numerical experiments, but it is clear that it does not make sense to speak about “the solution” if the problem is not well posed.

Our presentation starts with the simplest case of linear constant coefficient problems in Section 3.1, where solutions can be constructed explicitly using Fourier transform. Then, we consider in Section 3.2 linear problems with variable coefficients, which we reduce to the constant coefficient case using the localization principle. Next, in Section 3.3, we treat first-order quasilinear equations, which we reduce to the previous case by the principle of linearization. Finally, in Section 3.4 we summarize some basic results about abstract evolution operators, which give the general framework for treating evolution problems including not only those described by local partial differential operators, but also more general ones.

Much of the material from the first three subsections is taken from the book by Kreiss and Lorenz [259]. However, our summary also includes recent results concerning second-order equations, examples of wave systems on curved spacetimes, and a very brief review of semigroup theory.

Linear, constant coefficient problems

We consider an evolution equation on n-dimensional space of the following form:

$${u_t} = P(\partial /\partial x)u \equiv \sum\limits_{\vert \nu \vert \leq p} {{A_\nu}} {D_\nu}u,\quad x \in {{\mathbb R}^n},\quad t \geq 0.$$

Here, u = u(t, x) ∈ ℂm is the state vector, and u_t its partial derivative with respect to t. Next, the A_ν’s denote complex, m × m matrices, where ν = (ν_1, ν_2, …, ν_n) denotes a multi-index with components ν_j ∈ {0, 1, 2, 3, …} and |ν| := ν_1 + … + ν_n. Finally, D_ν denotes the partial derivative operator

$${D_\nu}: = {{{\partial ^{\vert \nu \vert}}} \over {\partial x_1^{{\nu _1}} \cdot \cdot \cdot \partial x_n^{{\nu _n}}}}$$

of order |ν|, where D_0 := I. Here are a few representative examples:

Example 1. The advection equation u_t(t, x) = λu_x(t, x) with speed λ ∈ ℝ in the negative x direction.

Example 2. The heat equation u_t(t, x) = Δu(t, x), where

$$\Delta : = {{{\partial ^2}} \over {\partial x_1^2}} + {{{\partial ^2}} \over {\partial x_2^2}} + \ldots + {{{\partial ^2}} \over {\partial x_n^2}}$$

denotes the Laplace operator.

Example 3. The Schrödinger equation u_t(t, x) = iΔu(t, x).

Example 4. The wave equation U_tt = ΔU, which can be cast into the form of Eq. (3.1),

$${u_t} = \left({\begin{array}{*{20}c} 0 & 1 \\ \Delta & 0 \\ \end{array}} \right)u,\quad u = \left({\begin{array}{*{20}c} U \\ V \\ \end{array}} \right).$$

We can find solutions of Eq. (3.1) by Fourier transformation in space,

$$\hat u(t,k) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{- ik \cdot x}}} u(t,x){d^n}x,\quad k \in {{\mathbb R}^n},\quad t \geq 0.$$

Applied to Eq. (3.1) this yields the system of linear ordinary differential equations

$${\hat u_t} = P(ik)\hat u,\quad t \geq 0,$$

for each wave vector k ∈ ℝn, where P(ik), called the symbol of the differential operator P(∂/∂x), is defined as

$$P(ik): = \sum\limits_{\vert \nu \vert \leq p} {{A_\nu}} {(i{k_1})^{{\nu _1}}} \cdot \cdot \cdot {(i{k_n})^{{\nu _n}}},\quad k \in {{\mathbb R}^n}.$$

The solution of Eq. (3.4) is given by

$$\hat u(t,k) = {e^{P(ik)t}}\hat u(0,k),\quad t \geq 0,$$

where û(0, k) is determined by the initial data for u at t = 0. Therefore, the formal solution of the Cauchy problem

$${u_t}(t\,,x) = P(\partial /\partial x)u(t\,,x)\,,x \in {{\mathbb {R}}^n},\quad t \geq 0,$$
$$u(0\,,x) = f(x)\,, x\in{\mathbb R}^n,$$

with given initial data ƒ for u at t = 0 is

$$u(t,x) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{ik \cdot x}}} {e^{P(ik)t}}\hat f(k){d^n}k,\quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where \(\hat f(k) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{- ik \cdot x}}f(x){d^n}x}\) is the Fourier transform of the initial data.
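
The solution formula can be realized numerically with the fast Fourier transform, approximating ℝ by a large periodic interval. The following sketch solves the one-dimensional heat equation u_t = u_xx with Gaussian initial data this way and compares with the exact solution (grid parameters are arbitrary choices).

```python
import numpy as np

# Solve u_t = u_xx by the Fourier solution formula, approximating the
# real line by the periodic interval [-L/2, L/2).
N, L, t = 256, 40.0, 0.5
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

f = np.exp(-x**2 / 2.0)                      # Gaussian initial data
u_hat = np.exp(-k**2 * t) * np.fft.fft(f)    # e^{P(ik)t} f_hat, P(ik) = -|k|^2
u = np.fft.ifft(u_hat).real

# Exact solution: the Gaussian spreads, its variance growing from 1 to 1 + 2t.
u_exact = np.exp(-x**2 / (2.0 * (1.0 + 2.0 * t))) / np.sqrt(1.0 + 2.0 * t)
assert np.max(np.abs(u - u_exact)) < 1e-8
```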


At this point we have to ask ourselves if expression (3.9) makes sense. In fact, we do not expect the integral to converge in general. Even if \({\hat f}\) is smooth and decays rapidly to zero as |k| → ∞, we could still have problems if |eP(ik)t| diverges as |k| → ∞. One simple, but very restrictive, possibility to control this problem is to limit ourselves to initial data ƒ in the class \({{\mathcal S}^\omega}\) of functions, which are the Fourier transform of a \(C^\infty\)-function with compact support, i.e., \(f \in {{\mathcal S}^\omega}\), where

$${{\mathcal S}^\omega}: = \left\{{v(\cdot) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{ik \cdot (\cdot)}}} \hat v(k){d^n}k:\hat v \in C_0^\infty ({{\mathbb R}^n})} \right\}.$$

A function in this space is real analytic and decays faster than any polynomial as |x| → ∞. If \(f \in {{\mathcal S}^\omega}\), the integral in Eq. (3.9) is well-defined and we obtain a solution of the Cauchy problem (3.7, 3.8), which, for each t ≥ 0, lies in this space. However, this possibility suffers from several unwanted features:

  • The space of admissible initial data is very restrictive. Indeed, since \(f \in {{\mathcal S}^\omega}\) is necessarily analytic it is not possible to consider nontrivial data with, say, compact support, and study the propagation of the support for such data.

  • For fixed t > 0, the solution may grow without bound when perturbations with arbitrarily small amplitude but higher and higher frequency components are considered. Such an effect is illustrated in Example 6 below.

  • The function space \({{\mathcal S}^\omega}\) does not seem to be useful as a solution space when considering linear variable coefficient or quasilinear problems, since, for such problems, the different k modes do not decouple from each other. Hence, mode coupling can lead to components with arbitrarily high frequencies.

For these reasons, it is desirable to consider initial data of a more general class than \({{\mathcal S}^\omega}\). For this, we need to control the growth of eP(ik)t. This is captured in the following

Definition 1. The Cauchy problem ( 3.7 , 3.8 ) is called well posed if there are constants K ≥ 1 and α ∈ ℝ such that

$$\vert {e^{P(ik)t}}\vert \leq K{e^{\alpha t}}\quad {\rm{for}}\,{\rm{all}}\,t \geq 0\,{\rm{and}}\,{\rm{all}}\,k \in {{\mathbb R}^n}.$$

The importance of this definition lies in the property that for each fixed time t > 0 the norm |eP(ik)t| of the propagator is bounded by the constant C(t) := Keαt, which is independent of the wave vector k. The definition does not state anything about the growth of the solution with time, other than that this growth is bounded by an exponential. In this sense, unless one can choose α ≤ 0 or α > 0 arbitrarily small, well-posedness is not a statement about stability in time, but rather about stability with respect to mode fluctuations.

Let us illustrate the meaning of Definition 1 with a few examples:

Example 5. The heat equation u_t(t, x) = Δu(t, x).

Fourier transformation converts this equation into û_t(t, k) = −|k|2û(t, k). Hence, the symbol is \(P(ik) = - \vert k{\vert ^2}\) and \(\vert {e^{P(ik)t}}\vert = {e^{- \vert k{\vert ^2}t}} \leq 1\). The problem is well posed.

Example 6. The backwards heat equation u_t(t, x) = −Δu(t, x).

In this case the symbol is \(P(ik) = + \vert k{\vert ^2}\) and \(\vert {e^{P(ik)t}}\vert = {e^{\vert k{\vert ^2}t}}\). In contrast to the previous case, eP(ik)t exhibits exponential frequency-dependent growth for each fixed t > 0, and the problem is not well posed. Notice that small initial perturbations with large |k| are amplified by a factor that becomes larger and larger as |k| increases. Therefore, after an arbitrarily small time, the solution is contaminated by high-frequency modes.
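
The frequency-dependent amplification can be made concrete: at any fixed t > 0 the growth factor e^{|k|²t} increases without bound with |k|, so no k-independent bound of the form Ke^{αt} can exist (a small illustration with arbitrary sample values).

```python
import numpy as np

# For the backwards heat equation, |e^{P(ik)t}| = e^{|k|^2 t}: at fixed
# t > 0 the amplification grows without bound as |k| increases.
t = 0.01
ks = np.array([1.0, 10.0, 100.0])
amplification = np.exp(ks**2 * t)

assert np.all(np.diff(amplification) > 0.0)   # grows with |k| at fixed t
assert amplification[-1] > 1e40               # already enormous at |k| = 100
```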

Example 7. The Schrödinger equation u_t(t, x) = iΔu(t, x).

In this case we have P(ik) = i|k|2 and |eP(ik)t| = 1. The problem is well posed. Furthermore, the evolution is unitary, and we can evolve forward and backwards in time. When compared to the previous example, it is the factor i in front of the Laplace operator that saves the situation and allows the evolution backwards in time.

Example 8. The one-dimensional wave equation written in first-order form,

$${u_t}(t,x) = A{u_x}(t,x),\quad A = \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right).$$

The symbol is P(ik) = ikA. Since the matrix A is symmetric and has eigenvalues ±1, there exists an orthogonal transformation U such that

$$A = U\left({\begin{array}{*{20}c} 1 & 0 \\ 0 & {- 1} \\ \end{array}} \right){U^{- 1}},\quad {e^{ikAt}} = U\left({\begin{array}{*{20}c} {{e^{ikt}}} & {0\quad} \\ {0\;\;} & {{e^{- ikt}}} \\ \end{array}} \right){U^{- 1}}.$$

Therefore, |eP(ik)t| = 1, and the problem is well posed.

Example 9. Perturb the previous problem by a lower-order term,

$${u_t}(t,\,x) = A{u_x}(t,\,x) + \lambda u(t,\,x),\quad A = \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right),\quad \lambda \in {\mathbb R}.$$

The symbol is P(ik) = ikA + λI, and |eP(ik)t| = eλt. The problem is well posed, even though the solution grows exponentially in time if λ > 0.
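
Examples 8 and 9 can be verified numerically: since ikA is anti-Hermitian, e^{ikAt} is unitary and |e^{(ikA + λI)t}| = e^{λt}, independently of k (a sketch assuming scipy is available for the matrix exponential; λ and t are arbitrary sample values).

```python
import numpy as np
from scipy.linalg import expm

# Example 9: P(ik) = ikA + lambda*I with the symmetric matrix A below.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, t = 0.3, 2.0

for k in [0.5, 3.0, 50.0]:
    P = 1j * k * A + lam * np.eye(2)
    # The operator 2-norm of the propagator is e^{lambda*t} for every k.
    assert np.isclose(np.linalg.norm(expm(P * t), 2), np.exp(lam * t))
```

Setting λ = 0 recovers Example 8, for which |e^{P(ik)t}| = 1.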

More generally one can show (see Theorem 2.1.2 in [259]):

Lemma 1. The Cauchy problem for the first-order equation u_t = Au_x + Bu with complex m × m matrices A and B is well posed if and only if A is diagonalizable and has only real eigenvalues.

By considering the eigenvalues of the symbol P(ik) we obtain the following simple necessary condition for well-posedness:

Lemma 2 (Petrovskii condition). Suppose the Cauchy problem ( 3.7 , 3.8 ) is well posed. Then, there is a constant α ∈ ℝ such that

$$Re(\lambda) \leq \alpha$$

for all eigenvalues λ of P(ik).

Proof. Suppose λ is an eigenvalue of P(ik) with corresponding eigenvector v, P(ik)v = λv. Then, if the problem is well posed,

$$K{e^{\alpha t}}\vert v\vert \geq \vert {e^{P(ik)t}}v\vert = \vert {e^{\lambda t}}v\vert = {e^{{\rm{Re}}(\lambda)t}}\vert v\vert ,$$

for all t ≥ 0, which implies that \({e^{{\rm{Re}}(\lambda)t}} \leq K{e^{\alpha t}}\) for all t ≥ 0, and hence Re(λ) ≤ α. □

Although the Petrovskii condition is a very simple necessary condition, we stress that it is not sufficient in general. Counterexamples are first-order systems, which are weakly, but not strongly, hyperbolic; see Example 10 below.

Extension of solutions

Now that we have defined and illustrated the notion of well-posedness, let us see how it can be used to solve the Cauchy problem (3.7, 3.8) for initial data more general than in \({{\mathcal S}^\omega}\). Suppose first that \(f \in {{\mathcal S}^\omega}\), as before. Then, if the problem is well posed, Parseval’s identities imply that the solution (3.9) must satisfy

$$\Vert {u(t,.)} \Vert = \Vert {\hat u(t,.)} \Vert = \Vert {{e^{P(i \cdot)t}}\hat f} \Vert \leq K{e^{\alpha t}}\Vert {\hat f} \Vert = K{e^{\alpha t}}\Vert f \Vert ,\quad t \geq 0.$$

Therefore, the \({{\mathcal S}^\omega}\)-solution satisfies the following estimate

$$\Vert u(t,.)\Vert \leq K{e^{\alpha t}}\Vert f\Vert ,\quad t \geq 0,$$

for all \(f \in {{\mathcal S}^\omega}\). This estimate is important because it allows us to extend the solution to the much larger space L2(ℝn). This extension is defined in the following way: let ƒ ∈ L2(ℝn). Since \({{\mathcal S}^\omega}\) is dense in L2(ℝn), there exists a sequence {ƒ_j} in \({{\mathcal S}^\omega}\) such that ‖ƒ_j − ƒ‖ → 0. Therefore, if the problem is well posed, it follows from the estimate (3.18) that the corresponding solutions u_j defined by Eq. (3.9) form a Cauchy sequence in L2(ℝn), and we can define

$$U(t)f(x): = \underset {j \rightarrow \infty} {\lim} {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{ik \cdot x}}} {e^{P(ik)t}}{\hat f_j}(k){d^n}k,\quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where the limit exists in the L2(ℝn) sense. The linear map U(t) : L2(ℝn) → L2(ℝn) satisfies the following properties:

  1. (i)

    U(0) = I is the identity map.

  2. (ii)

    U(t + s) = U(t)U(s) for all t, s ≥ 0.

  3. (iii)

    For \(f \in {{\mathcal S}^\omega},u(t,.) = U(t)f\) is the unique solution to the Cauchy problem (3.7, 3.8).

  4. (iv)

    ‖U(t)ƒ‖ ≤ Keαt‖ƒ‖ for all ƒ ∈ L2(ℝn) and all t ≥ 0.

The family {U(t) : t ≥ 0} is called a semi-group on L2(ℝn). In general, U(t) cannot be extended to negative t as the example of the backwards heat equation, Example 6, shows.
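
Property (ii) can be illustrated on a periodic grid, where the heat semi-group acts in Fourier space as multiplication by e^{−k²t}, so U(t + s) = U(t)U(s) can be checked directly (a discrete sketch; grid and test data are arbitrary choices).

```python
import numpy as np

# Discrete illustration of the semi-group property U(t + s) = U(t) U(s)
# for the heat equation on a periodic grid.
N, L = 128, 20.0
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

def U(t, f):
    # Heat propagator: multiplication by e^{-k^2 t} in Fourier space.
    return np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(f)).real

rng = np.random.default_rng(1)
f = rng.standard_normal(N)
assert np.allclose(U(1.0, f), U(0.4, U(0.6, f)))   # U(t + s) = U(t) U(s)
assert np.allclose(U(0.0, f), f)                   # U(0) = I
```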

For ƒL2(ℝn) the function u(t, x) := U(t)ƒ(x) is called a weak solution of the Cauchy problem (3.7, 3.8). It can also be constructed in an abstract way by using the Fourier-Plancharel operator Ƒ : L2(ℝn) → L2(ℝn). If the problem is well posed, then for each ƒ ∈ L2(ℝn) and t ≥ 0 the map keP(ik)tƑ(ƒ)(k) defines an L2(ℝn)-function, and, hence, we can define

$$u(t, \cdot): = {{\mathcal F}^{- 1}}\left({{e^{P(i \cdot)t}}{\mathcal F}f} \right),\quad t \geq 0.$$

According to Duhamel’s principle, the semi-group U(t) can also be used to construct weak solutions of the inhomogeneous problem,

$${u_t}(t,x) = P(\partial /\partial x)u(t,\,x) + F(t,\,x),\;x \in {{\mathbb R}^n},\quad t \geq 0,$$
$$u(0,\,x) = f(x),\;x \in {{\mathbb R}^n},$$

where F : [0, ∞) → L2(ℝn), tF(t, ·) is continuous:

$$u(t, \cdot) = U(t)f + \int\limits_0^t U (t - s)F(s, \cdot)ds.$$

For a discussion of semi-groups in a more general context, see Section 3.4.

Algebraic characterization

In order to extend the solution concept to initial data more general than analytic, we have introduced the concept of well-posedness in Definition 1. However, given a symbol P(ik), it is not always a simple task to determine whether or not constants K ≥ 1 and α ∈ ℝ exist such that \(\vert {e^{P(ik)t}}\vert \leq K{e^{\alpha t}}\) for all t ≥ 0 and k ∈ ℝn. Fortunately, the matrix theorem by Kreiss [257] provides necessary and sufficient conditions on the symbol P(ik) for well-posedness.

Theorem 1. Let P(ik), k ∈ ℝn, be the symbol of a constant coefficient linear problem, see Eq. (3.5) , and let α ∈ ℝ. Then, the following conditions are equivalent:

  1. (i)

    There exists a constant K ≥ 0 such that

    $$\vert{e^{P(ik)t}}\vert \leq K{e^{\alpha t}}$$

    for all t ≥ 0 and k ∈ ℝn.

  2. (ii)

    There exists a constant M > 0 and a family H(k) of m × m Hermitian matrices such that

    $${M^{- 1}}I \leq H(k) \leq MI,\quad H(k)P(ik) + P{(ik)^{\ast}}H(k) \leq 2\alpha H(k)$$

    for all k ∈ ℝn.

A generalization and complete proof of this theorem can be found in [259]. However, let us show here the implication (ii) ⇒ (i) since it illustrates the concept of energy estimates, which will be used quite often throughout this review (see Section 3.2.3 below for a more general discussion of these estimates). Hence, let H(k) be a family of m × m Hermitian matrices satisfying the condition (3.25). Let k ∈ ℝn and v0 ∈ ℂm be fixed, and define v(t) := eP(ik)tv0 for t ≥ 0. Then we have the following estimate for the “energy” density v(t)*H(k)v(t),

$$\begin{array}{*{20}c} {{d \over {dt}}v{{(t)}^{\ast}}H(k)v(t) = {{[P(ik)v(t)]}^{\ast}}H(k)v(t) + v{{(t)}^{\ast}}H(k)P(ik)v(t)\quad \quad \quad \quad \,} \\ {= v{{(t)}^{\ast}}\left[ {P{{(ik)}^{\ast}}H(k) + H(k)P(ik)} \right]v(t)} \\ {\leq 2\alpha \,v{{(t)}^{\ast}}H(k)v(t),\quad \quad \quad \quad \quad \quad \quad \;} \\ \end{array}$$

which implies the differential inequality

$${d \over {dt}}\left[ {{e^{- 2\alpha t}} v{{(t)}^{\ast}}H(k)v(t)} \right] \leq 0,\quad t \geq 0,\quad k \in {{\mathbb R}^n}.$$

Integrating, we find

$${M^{- 1}}\vert v(t){\vert ^2} \leq v{(t)^{\ast}}H(k)v(t) \leq {e^{2\alpha t}}v_0^{\ast}H(k){v_0} \leq M{e^{2\alpha t}}\vert {v_0}{\vert ^2},$$

which implies the inequality (3.24) with K = M.

First-order systems

Many systems in physics, like Maxwell’s equations, the Dirac equation, and certain formulations of Einstein’s equations, are described by first-order partial differential equations (PDEs). In fact, even systems given by a higher-order PDE can be reduced to first order at the cost of introducing new variables, and possibly also new constraints. Therefore, let us specialize the above results to a first-order linear problem of the form

$${u_t} = P(\partial /\partial x)u \equiv \sum\limits_{j = 1}^n {{A^j}} {\partial \over {\partial {x^j}}}u + Bu,\quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where A1, …, An, B are complex m × m matrices. We split P(ik) = P0(ik) + B into its principal symbol, \({P_0}(ik) = i\sum\limits_{j = 1}^n {{k_j}{A^j}}\), and the lower-order term B. The principal part is the one that dominates for large |k| and hence the one that turns out to be important for well-posedness. Notice that P0(ik) depends linearly on k. With these observations in mind we note:

  • A necessary condition for the problem to be well posed is that for each k ∈ ℝn with |k| = 1 the symbol P0(ik) is diagonalizable and has only purely imaginary eigenvalues. To see this, we require the inequality

    $$\vert {e^{\vert k\vert {P_0}(ik\prime)t + Bt}}\vert \leq K{e^{\alpha t}},\quad k\prime: = {k \over {\vert k\vert}},$$

    for all t ≥ 0 and k ∈ ℝn, k ≠ 0, replace t by t/|k|, and take the limit |k| → ∞, which yields \(\vert{e^{{P_0}(i{k\prime})t}}\vert\, \leq K\) for all k′ ∈ ℝn with |k′| = 1. Therefore, there must exist for each such k′ a complex m × m matrix S(k′) such that S(k′)−1P0(ik′)S(k′) = iΛ(k′), where Λ(k′) is a diagonal real matrix (cf. Lemma 1).

  • In this case the family of Hermitian m × m matrices H(k′) := (S(k′)−1)*S(k′)−1 satisfies

    $$H(k\prime){P_0}(ik\prime) + {P_0}{(ik\prime)^{\ast}}H(k\prime) = 0$$

    for all k′ ∈ ℝn with |k′| = 1.

  • However, in order to obtain the energy estimate, one also needs the condition M−1IH(k′) ≤ MI, that is, H(k′) must be uniformly bounded and positive. This follows automatically if H(k′) depends continuously on k′, since k′ varies over the (n − 1)-dimensional unit sphere, which is compact. In turn, it follows that H(k′) depends continuously on k′ if S(k′) does. However, although this may hold in many situations, continuous dependence of S(k′) on k′ cannot always be established; see Example 12 for a counterexample.

These observations motivate the following three notions of hyperbolicity, each of them being a stronger condition than the previous one:

Definition 2. The first-order system (3.28) is called

  1. (i)

    weakly hyperbolic, if all the eigenvalues of its principal symbol P0(ik) are purely imaginary.

  2. (ii)

    strongly hyperbolic, if there exist a constant M > 0 and a family of Hermitian m × m matrices H(k), kSn−1, satisfying

    $${M^{- 1}}I \leq H(k) \leq MI,\quad H(k){P_0}(ik) + {P_0}{(ik)^{\ast}}H(k) = 0,$$

    for all kSn−1, where Sn−1 := {k ∈ ℝn : |k| = 1} denotes the unit sphere.

  3. (iii)

    symmetric hyperbolic, if there exists a Hermitian, positive definite m × m matrix H (which is independent of k) such that

    $$H{P_0}(ik) + {P_0}{(ik)^{\ast}}H = 0,$$

    for all kSn−1.

The matrix theorem implies the following statements:

  • Strongly and symmetric hyperbolic systems give rise to a well-posed Cauchy problem. According to Theorem 1, their principal symbol satisfies

    $$\vert {e^{{P_0}(ik)t}}\vert \leq K,\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R},$$

    and this property is stable with respect to lower-order perturbations,

    $$\vert {e^{P(ik)t}}\vert = \vert {e^{{P_0}(ik)t + Bt}}\vert \leq K{e^{K\vert B\vert t}},\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R}.$$

    The last inequality can be proven by applying Duhamel’s formula (3.23) to the function \(\hat u(t): = {e^{P(ik)t}}\hat f\), which satisfies û t (t) = P0(ik)û(t) + F(t) with F(t) = Bû(t). The solution formula (3.23) then gives \(\vert \hat u(t)\vert \, \leq K(\vert \hat f\vert + \vert B\vert \int\nolimits_0^t {\vert \hat u(s)\vert ds)}\), which yields \(\vert \hat u(t)\vert \, \leq K{e^{K\vert B\vert t}}\vert \hat f\vert\) by Gronwall’s lemma.

  • As we have anticipated above, a necessary condition for well-posedness is the existence of a complex m × m matrix S(k) for each kSn−1 on the unit sphere, which brings the principal symbol P0(ik) into diagonal, purely imaginary form. If, furthermore, S(k) can be chosen such that |S(k)| and |S(k)−1| are uniformly bounded for all kSn−1, then H(k) := (S(k)−1)*S(k)−1 satisfies the conditions (3.31) for strong hyperbolicity. If the system is well posed, Theorem 2.4.1 in [259] shows that it is always possible to construct a symmetrizer H(k) satisfying the conditions (3.31) in this manner, and hence, strong hyperbolicity is also a necessary condition for well-posedness. The symmetrizer construction H(k) := (S(k)−1)*S(k)−1 is useful for applications, since S(k) is easily constructed from the eigenvectors and S(k)−1 from the eigenfields of the principal symbol; see Example 15.

  • Weakly hyperbolic systems are not well posed in general because \(\vert {e^{{P_0}(ik)t}}\vert\) might exhibit polynomial growth in |k|t. Although one might consider such polynomial growth as acceptable, such systems are unstable with respect to lower-order perturbations. As the next example shows, it is possible that |eP(ik)t| grows exponentially in |k| if the system is weakly hyperbolic.
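These notions can be probed numerically for a concrete system: sample the principal symbol over the unit sphere, check that its eigenvalues are purely imaginary, and monitor the conditioning of the eigenvector matrix S(k). The following Python sketch is a rough classifier along these lines; the helper `check_2d`, its sampling density and its tolerance thresholds are our own choices, not part of the text. It is applied to the system of Example 11 below and to a symmetric pair of matrices:

```python
import numpy as np

def check_2d(A1, A2, n_angles=181, tol=1e-8, cond_max=1e6):
    """Classify u_t = A1 u_x + A2 u_y by sampling k on the unit circle.
    Weak hyperbolicity: k1 A1 + k2 A2 has real eigenvalues, so that
    P0(ik) = i(k1 A1 + k2 A2) has purely imaginary ones.  Strong
    hyperbolicity additionally needs a uniformly well-conditioned
    eigenvector matrix S(k), i.e. diagonalizability with bounded S, S^-1."""
    for theta in np.linspace(0.0, np.pi, n_angles):  # grid includes 3*pi/4
        k1, k2 = np.cos(theta), np.sin(theta)
        eigvals, S = np.linalg.eig(k1 * A1 + k2 * A2)
        if np.max(np.abs(eigvals.imag)) > tol:
            return "not weakly hyperbolic"
        if np.linalg.cond(S) > cond_max:
            return "weakly but not strongly hyperbolic"
    return "strongly hyperbolic"

# The matrices of Example 11: each one alone is diagonalizable with real
# eigenvalues, but the combination degenerates along k1 + k2 = 0.
A1 = np.array([[1.0, 1.0], [0.0, 2.0]])
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
# A symmetric pair: symmetric, hence strongly, hyperbolic.
B1 = np.array([[1.0, 0.0], [0.0, -1.0]])
B2 = np.array([[0.0, 1.0], [1.0, 0.0]])
```

For A1, A2 the sweep hits the degenerate direction k ∝ (1, −1), where the principal symbol is a nontrivial Jordan block and cond(S) blows up; the symmetric pair passes with a well-conditioned S(k) for every k.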

Example 10. Consider the weakly hyperbolic system [259]

$${u_t} = \left({\begin{array}{*{20}c} 1 & 1 \\ 0 & 1 \\ \end{array}} \right){u_x} + a\left({\begin{array}{*{20}c} {- 1} & {+ 1} \\ {- 1} & {- 1} \\ \end{array}} \right)u,$$

with a ∈ ℝ a parameter. The principal symbol is \({P_0}(ik) = ik\left({\begin{array}{*{20}c} 1 & 1 \\ 0 & 1 \\ \end{array}} \right)\) and

$${e^{{P_0}(ik)t}} = {e^{ikt}}\left({\begin{array}{*{20}c} 1 & {ikt} \\ 0 & 1 \\ \end{array}} \right).$$

Using the tools described in Section 2 we find for the norm

$$\vert {e^{{P_0}(ik)t}}\vert = \sqrt {1 + {{{k^2}{t^2}} \over 2} + \sqrt {{{\left({1 + {{{k^2}{t^2}} \over 2}} \right)}^2} - 1}} ,$$

which is approximately equal to |k|t for large |k|t. Hence, the solutions to Eq. (3.35) contain modes that grow linearly in |k|t for large |k|t when a = 0, i.e., when there are no lower-order terms.

However, when a ≠ 0, the eigenvalues of P(ik) are

$${\lambda _ \pm} = ik - a \pm i\sqrt {a(a + ik)} ,$$

which, for large |k|, has real part \({\rm{Re}}({\lambda _ \pm}) \approx \pm \sqrt {\vert a\vert \vert k\vert/2}\). The eigenvalue with positive real part gives rise to solutions which, for fixed t, grow exponentially in |k|.
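Both growth rates can be observed directly by exponentiating the symbol. The sketch below (the parameter values and helper names are our own) uses scipy.linalg.expm to compare the norm |e^{P(ik)t}| for a = 0, where it grows only linearly in |k|t, with a = 1, where it grows exponentially in √|k|:

```python
import numpy as np
from scipy.linalg import expm

def symbol(k, a):
    """P(ik) = P0(ik) + B for the weakly hyperbolic system of Example 10."""
    P0 = 1j * k * np.array([[1.0, 1.0], [0.0, 1.0]])
    B = a * np.array([[-1.0, 1.0], [-1.0, -1.0]])
    return P0 + B

def norm_exp(k, a, t=1.0):
    """Spectral norm of exp(P(ik) t)."""
    return np.linalg.norm(expm(symbol(k, a) * t), 2)

n0 = norm_exp(100.0, 0.0)  # a = 0: grows like |k| t, here ~ 100
n1 = norm_exp(100.0, 1.0)  # a = 1: at least e^{Re lambda_-} ~ e^{sqrt(|k|/2) - 1}
n2 = norm_exp(400.0, 1.0)  # quadrupling |k| roughly squares the growth factor
```

The lower bound ‖e^{P(ik)t}‖ ≥ e^{t Re λ} for any eigenvalue λ makes the exponential growth in √|k| visible already at moderate wave numbers.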

Example 11. For the system [353],

$${u_t} = {A^1}{u_x} + {A^2}{u_y} ,\quad {A^1} = \left({\begin{array}{*{20}c} 1 & 1 \\ 0 & 2 \\ \end{array}} \right),\quad {A^2} = \left({\begin{array}{*{20}c} 1 & 0 \\ 0 & 2 \\ \end{array}} \right),$$

the principal symbol, \({P_0}(ik) = i\left({\begin{array}{*{20}c} {{k_1} + {k_2}} & {{k_1}} \\ {0} & {2({k_1} + {k_2})} \\ \end{array}} \right)\), is diagonalizable for all vectors k = (k1, k2) ∈ S1 except for those with k1 + k2 = 0. In particular, P0(ik) is diagonalizable for k = (1, 0) and k = (0, 1). This shows that in general, it is not sufficient to check that the n matrices A1, A2, …, An alone are diagonalizable and have real eigenvalues; one has to consider all possible linear combinations \(\sum\limits_{j = 1}^n {{A^j}{k_j}}\) with kSn−1.

Example 12. Next, we present a system for which the eigenvectors of the principal symbol cannot be chosen to be continuous functions of k:

$${u_t} = {A^1}{u_x} + {A^2}{u_y} + {A^3}{u_z},\quad {A^1} = \left({\begin{array}{*{20}c} 1 & 0 \\ 0 & {- 1} \\ \end{array}} \right),\quad {A^2} = \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right),\quad {A^3} = \left({\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array}} \right).$$

The principal symbol \({P_0}(ik) = i\left({\begin{array}{*{20}c} {{k_1}} & {{k_2}} \\ {{k_2}} & {- {k_1}} \\ \end{array}} \right)\) has eigenvalues \({\lambda _ \pm}(k) = \pm i\sqrt {k_1^2 + k_2^2}\) and for (k1, k2) ≠ (0, 0) the corresponding eigenprojectors are

$${P_ \pm}({k_1},{k_2}) = {1 \over {2{\lambda _ \pm}(k)}}\left({\begin{array}{*{20}c} {{\lambda _ \pm}(k) + i{k_1}} & {i{k_2}} \\ {i{k_2}} & {{\lambda _ \pm}(k) - i{k_1}} \\ \end{array}} \right).$$

When (k1, k2) → (0, 0) the two eigenvalues coalesce, and the matrix A(k) := k1A1 + k2A2 + k3A3 converges to the zero matrix. However, it is not possible to continuously extend P ± (k1, k2) to (k1, k2) = (0, 0). For example,

$${P_ +}(h,0) = \left({\begin{array}{*{20}c} 1 & 0 \\ 0 & 0 \\ \end{array}} \right),\quad {P_ +}(- h,0) = \left({\begin{array}{*{20}c} 0 & 0 \\ 0 & 1 \\ \end{array}} \right),$$

for h > 0. Therefore, any choice of the matrix S(k), which diagonalizes A(k), must be discontinuous at k = (0, 0, ±1), since the columns of S(k) are the eigenvectors of A(k).

Of course, A(k) is symmetric and so S(k) can be chosen to be unitary, which yields the trivial symmetrizer H(k) = I. Therefore, the system is symmetric hyperbolic and yields a well-posed Cauchy problem; however, this example shows that it is not always possible to choose S(k) as a continuous function of k.
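The jump of the eigenprojectors can be checked numerically: approaching the k3-axis along the positive and negative k1-directions yields two different limits. A small sketch (the step size h and function name are our own choices):

```python
import numpy as np

def P_plus(k1, k2):
    """Eigenprojector onto the lambda_+ eigenspace of P0(ik), for (k1, k2) != (0, 0),
    using the explicit formula from Example 12."""
    lam = 1j * np.hypot(k1, k2)          # lambda_+(k) = i sqrt(k1^2 + k2^2)
    return np.array([[lam + 1j * k1, 1j * k2],
                     [1j * k2, lam - 1j * k1]]) / (2.0 * lam)

h = 1e-6
right = P_plus(h, 0.0)    # limit from k1 > 0: projects onto the first component
left = P_plus(-h, 0.0)    # limit from k1 < 0: projects onto the second component
```

The two limits differ by the fixed matrix diag(1, −1) however small h is, confirming the discontinuity at (k1, k2) = (0, 0).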

Example 13. Consider the Klein-Gordon equation

$${\Phi _{tt}} = \Delta \Phi - {m^2}\Phi ,$$

in two spatial dimensions, where m ∈ ℝ is a parameter proportional to the mass of the field Φ. Introducing the variables u = (Φ, Φ t , Φ x , Φ y ) we obtain the first-order system

$${u_t} = \left({\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array}} \right){u_x} + \left({\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array}} \right){u_y} + \left({\begin{array}{*{20}c} {\;\;\;\;0} & 1 & 0 & 0 \\ {- {m^2}} & 0 & 0 & 0 \\ {\;\;\;\;0} & 0 & 0 & 0 \\ {\;\;\;\;0} & 0 & 0 & 0 \\ \end{array}} \right)u.$$

The matrix coefficients in front of u x and u y are symmetric; hence the system is symmetric hyperbolic with trivial symmetrizer H = diag(m2, 1, 1, 1). The corresponding Cauchy problem is well posed. However, a problem with this first-order system is that it is only equivalent to the original, second-order equation (3.43) if the constraints (u1) x = u3 and (u1) y = u4 are satisfied.

An alternative symmetric hyperbolic first-order reduction of the Klein-Gordon equation, which does not require the introduction of constraints, is the Dirac equation in two spatial dimensions,

$${v_t} = \left({\begin{array}{*{20}c} 1 & 0 \\ 0 & {- 1} \\ \end{array}} \right){v_x} + \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right){v_y} + m\left({\begin{array}{*{20}c} 0 & 1 \\ {- 1} & 0 \\ \end{array}} \right)v,\quad v = \left({\begin{array}{*{20}c} {{v_1}} \\ {{v_2}} \\ \end{array}} \right).$$

This system implies the Klein-Gordon equation (3.43) for either of the two components of v.
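Both reductions can be checked with a few lines of linear algebra: the symmetrizer condition for the first-order system amounts to HA^j being symmetric, and the Dirac reduction works because its matrices satisfy Clifford-type anticommutation relations, so that squaring the spatial operator reproduces the wave operator. A sketch (the matrix names and the sample mass value are ours):

```python
import numpy as np

# First-order reduction of the Klein-Gordon equation:
# coefficients of u_x and u_y for u = (Phi, Phi_t, Phi_x, Phi_y).
m = 2.0                                   # sample mass (our choice), m != 0
Ax = np.zeros((4, 4)); Ax[1, 2] = Ax[2, 1] = 1.0
Ay = np.zeros((4, 4)); Ay[1, 3] = Ay[3, 1] = 1.0
H = np.diag([m**2, 1.0, 1.0, 1.0])
# H A^j symmetric is equivalent to H P0(ik) + P0(ik)^* H = 0.
assert np.allclose(H @ Ax, (H @ Ax).T)
assert np.allclose(H @ Ay, (H @ Ay).T)

# Dirac reduction: v_t = A1 v_x + A2 v_y + m C v.
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[0.0, 1.0], [-1.0, 0.0]])
# Applying the operator twice must give v_tt = v_xx + v_yy - m^2 v:
assert np.allclose(A1 @ A1, np.eye(2)) and np.allclose(A2 @ A2, np.eye(2))
assert np.allclose(A1 @ A2 + A2 @ A1, np.zeros((2, 2)))  # no mixed x-y term
assert np.allclose(A1 @ C + C @ A1, np.zeros((2, 2)))    # no first-order terms
assert np.allclose(A2 @ C + C @ A2, np.zeros((2, 2)))
assert np.allclose(C @ C, -np.eye(2))                    # yields the -m^2 v term
```

The anticommutation relations in the second half are exactly what makes the Dirac reduction constraint-free: no auxiliary spatial-derivative variables are needed.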

Yet another way of reducing second-order equations to first-order ones without introducing constraints will be discussed in Section 3.1.5.

Example 14. In terms of the electric and magnetic fields u = (E, B), Maxwell’s evolution equations,

$${E_t} = + \nabla \wedge B - J,$$
$${B_t} = - \nabla \wedge E,$$

constitute a symmetric hyperbolic system. Here, J is the current density and ∇ and ∧ denote the nabla operator and the vector product, respectively. The principal symbol is

$${P_0}(ik)\left({\begin{array}{*{20}c} E \\ B \\ \end{array}} \right) = i\left({\begin{array}{*{20}c} {+ k \wedge B} \\ {- k \wedge E} \\ \end{array}} \right)$$

and a symmetrizer is given by the physical energy density,

$${u^{\ast}}Hu = {1 \over 2}\left({\vert E{\vert ^2} + \vert B{\vert ^2}} \right),$$

in other words, H = 2−1I is trivial. The constraints ∇ · E = ρ and ∇ · B = 0 propagate as a consequence of Eqs. (3.46, 3.47), provided that the continuity equation holds: (∇ · Eρ) t = −∇ · Jρ t = 0, (∇ · B) t = 0.
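Written as a 6 × 6 system for u = (E, B), symmetric hyperbolicity is manifest: representing b ↦ kb by the antisymmetric matrix K(k), the principal symbol is P0(ik) = iM(k) with M(k) real and symmetric, so H = I symmetrizes it. A short check (the helper names are ours):

```python
import numpy as np

def cross_matrix(k):
    """Antisymmetric K(k) with K(k) @ b == np.cross(k, b)."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def maxwell_symbol(k):
    """P0(ik)/i acting on u = (E, B): (E, B) -> (k ^ B, -k ^ E)."""
    K = cross_matrix(k)
    M = np.zeros((6, 6))
    M[:3, 3:] = K       # E_t block: + k ^ B
    M[3:, :3] = -K      # B_t block: - k ^ E
    return M

k = np.array([0.3, -1.2, 0.5])
M = maxwell_symbol(k)
assert np.allclose(M, M.T)                               # M symmetric: H = I works
assert np.allclose(np.linalg.eigvals(1j * M).real, 0.0)  # P0(ik) purely imaginary
```

The symmetry of M(k) follows from K(k)^T = −K(k): the off-diagonal blocks K and −K transpose into each other.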

Example 15. There are many alternative ways to write Maxwell’s equations. The following system [353, 287] was originally motivated by an analogy with certain parametrized first-order hyperbolic formulations of the Einstein equations, and provides an example of a system that can be symmetric, strongly, weakly or not hyperbolic at all, depending on the parameter values. Using the Einstein summation convention, the evolution system in vacuum has the form

$${\partial _t}{E_i} = {\partial ^j}({W_{ij}} - {W_{ji}}) - \alpha ({\partial _i}{W^j}_j - {\partial ^j}{W_{ij}}),$$
$${\partial _t}{W_{ij}} = - {\partial _i}{E_j} - {\beta \over 2}{\delta _{ij}}{\partial ^k}{E_k},$$

where E i and \({W_{ij}} = {\partial _i}{A_j}\), i = 1, 2, 3, represent the Cartesian components of the electric field and the gradient of the magnetic potential A j , respectively, and where the real parameters α and β determine the dynamics of the constraint hypersurface defined by \({\partial ^k}{E_k} = 0\) and \({\partial _k}{W_{ij}} - {\partial _i}{W_{kj}} = 0\).

In order to analyze under which conditions on α and β the system (3.50, 3.51) is strongly hyperbolic we consider the corresponding symbol,

$${P_0}(ik)u = i\left({\begin{array}{*{20}c} {(1 + \alpha){k^j}{W_{ij}} - {k^j}{W_{ji}} - \alpha {k_i}{W^j}_j} \\ {- {k_i}{E_j} - {\beta \over 2}{\delta _{ij}}{k^l}{E_l}} \\ \end{array}} \right),\quad u = \left({\begin{array}{*{20}c} {{E_i}} \\ {{W_{ij}}} \\ \end{array}} \right),\quad k \in {S^2}.$$

Decomposing E i and W ij into components parallel and orthogonal to k i ,

$${E_i} = \bar E{k_i} + {\bar E_i},\quad {W_{ij}} = \bar W{k_i}{k_j} + {\bar W_i}{k_j} + {k_i}{\bar V_j} + {\bar W_{ij}} + {1 \over 2}{\gamma _{ij}}\bar U,$$

where in terms of the projector \({\gamma _i}^j: = {\delta _i}^j - {k_i}{k^j}\) orthogonal to k we have defined \(\bar E: = {k^l}{E_l},{\bar E_i}: = {\gamma _i}^j{E_j}\) and \(\bar W: = {k^i}{k^j}{W_{ij}},{\bar W_i}: = {\gamma _i}^k{W_{kj}}{k^j},{\bar V_j}: = {k^i}{W_{ik}}{\gamma ^k}_j,\bar U: = {\gamma ^{ij}}{W_{ij}}\) and \({\bar W_{ij}}: = ({\gamma _i}^k{\gamma _j}^l - {2^{- 1}}{\gamma _{ij}}{\gamma ^{kl}}){W_{kl}}\), we can write the eigenvalue problem P0(ik)u = iλu as

$$\begin{array}{*{20}c} {\lambda \bar E = - \alpha \bar U,\quad \quad} \\ {\lambda \bar U = - \beta \bar E,\quad \quad} \\ {\lambda \bar W = - \left({1 + {\beta \over 2}} \right)\bar E,} \\ {\lambda {{\bar E}_i} = (1 + \alpha){{\bar W}_i} - {{\bar V}_i},} \\ {\lambda {{\bar V}_i} = - {{\bar E}_i},\quad \quad \quad \quad} \\ {\lambda {{\bar W}_i} = 0,\quad \quad \quad \quad \quad} \\ {\lambda {{\bar W}_{ij}} = 0.\quad \quad \quad \quad \quad} \\ \end{array}$$

It follows that P0(ik) is diagonalizable with purely imaginary eigenvalues if and only if αβ > 0. However, in order to show that in this case the system is strongly hyperbolic one still needs to construct a bounded symmetrizer H(k). For this, we set \(\mu := \sqrt {\alpha \beta}\) and diagonalize P0(ik) = iS(k)Λ(k)S(k)−1 with Λ(k) = diag(µ, −µ, 0, 1, −1, 0, 0) and

$$S{(k)^{- 1}}u = \left({\begin{array}{*{20}c} {\bar E - {\mu \over \beta}\bar U} \\ {\bar E + {\mu \over \beta}\bar U} \\ {\beta \bar W - \left({1 + {\beta \over 2}} \right)\bar U} \\ {{{\bar E}_i} - {{\bar V}_i} + (1 + \alpha){{\bar W}_i}} \\ {{{\bar E}_i} + {{\bar V}_i} - (1 + \alpha){{\bar W}_i}} \\ {{{\bar W}_i}} \\ {{{\bar W}_{ij}}} \\ \end{array}} \right).$$

Then, the quadratic form associated with the symmetrizer is

$$\begin{array}{*{20}c} {{u^{\ast}}H(k)u = {u^{\ast}}{{(S{{(k)}^{- 1}})}^{\ast}}S{{(k)}^{- 1}}u\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {\; = 2\vert \bar E{\vert ^2} + 2{\alpha \over \beta}\vert \bar U{\vert ^2} + {{\left\vert {\beta \bar W - \left({1 + {\beta \over 2}} \right)\bar U} \right\vert}^2} + 2{{\bar E}^i}{{\bar E}_i}} \\ {\quad \; + 2\left[ {{{\bar V}^i} - (1 + \alpha){{\bar W}^i}} \right]\left[ {{{\bar V}_i} - (1 + \alpha){{\bar W}_i}} \right] + {{\bar W}^i}{{\bar W}_i} + {{\bar W}^{ij}}{{\bar W}_{ij}},} \\ \end{array}$$

and H(k) is smooth in kS2. Therefore, the system is indeed strongly hyperbolic for αβ > 0.
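The criterion αβ > 0 can be confirmed by assembling the full 12 × 12 symbol. In the sketch below (the index layout and the specific parameter choices are our own), for α = 1, β = 2 the matrix P0(ik)/i is annihilated by the square-free polynomial x(x² − 1)(x² − αβ), so it is diagonalizable with real eigenvalues {0, ±1, ±√2}, while for α = 1, β = −1 it acquires the complex pair ±i:

```python
import numpy as np

def symbol(alpha, beta, k):
    """P0(ik)/i of the system (3.50, 3.51), acting on u = (E_1..E_3, W_11, ..., W_33)."""
    M = np.zeros((12, 12))
    W = lambda i, j: 3 + 3 * i + j                # slot of W_ij in u (0-based)
    for i in range(3):
        for j in range(3):
            M[i, W(i, j)] += (1 + alpha) * k[j]   # (1 + alpha) k^j W_ij
            M[i, W(j, i)] -= k[j]                 # - k^j W_ji
            M[i, W(j, j)] -= alpha * k[i]         # - alpha k_i W^j_j
            M[W(i, j), j] -= k[i]                 # - k_i E_j
        M[W(i, i), :3] -= (beta / 2.0) * k        # - (beta/2) delta_ij k^l E_l
    return M

k = np.array([0.0, 0.0, 1.0])
I = np.eye(12)

Mg = symbol(1.0, 2.0, k)    # alpha*beta = 2 > 0: strongly hyperbolic
# x (x^2 - 1)(x^2 - alpha*beta) annihilates Mg, so Mg is diagonalizable
# with real eigenvalues only:
assert np.allclose(Mg @ (Mg @ Mg - I) @ (Mg @ Mg - 2 * I), np.zeros((12, 12)))

Mb = symbol(1.0, -1.0, k)   # alpha*beta = -1 < 0
ev = np.linalg.eigvals(Mb)  # contains +/- i: P0(ik) has real eigenvalues, ill posed
```

The minimal-polynomial test is a convenient substitute for explicitly constructing S(k): a square-free annihilating polynomial with real roots certifies diagonalizability with a real spectrum in one deterministic matrix identity.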

In order to analyze under which conditions the system is symmetric hyperbolic we notice that because of rotational and parity invariance the most general k-independent symmetrizer must have the form

$${u^{\ast}}Hu = a{({E^i})^{\ast}}{E_i} + b{({W^{[ij]}})^{\ast}}{W_{[ij]}} + c{({\hat W^{ij}})^{\ast}}{\hat W_{ij}} + d {W^{\ast}}W,$$

with strictly positive constants a, b, c and d, where Ŵ ij := W(ij)δ ij W/3 denotes the symmetric, trace-free part of W ij and \(W: = {W^j}_j\) its trace. Then,

$$\begin{array}{*{20}c} {{u^{\ast}}H{P_0}(ik)u = ia{{({E^i})}^{\ast}}\left[ {(\alpha + 2){k^j}{W_{[ij]}} + \alpha {k^j}{{\hat W}_{ij}} - {{2\alpha} \over 3}{k_i}W} \right]\quad \quad \quad \quad \quad \quad \quad \quad} \\ {+ ib{{({W^{[ij]}})}^{\ast}}{E_i}{k_j} - ic{{({{\hat W}^{ij}})}^{\ast}}{E_i}{k_j} - id\left({1 + {{3\beta} \over 2}} \right){W^{\ast}}{k^i}{E_i}.} \\ \end{array}$$

For H to be a symmetrizer, the expression on the right-hand side must be purely imaginary. This is the case if and only if a(α + 2) = b, −aα = c and 2aα/3 = d(1 + 3β/2). Since a, b, c and d are positive, these equalities can be satisfied if and only if −2 < α < 0 and β < −2/3. Therefore, if α and β are both positive, or if they are both negative with α ≤ −2 or β ≥ −2/3, then the system (3.50, 3.51) is strongly but not symmetric hyperbolic.

Second-order systems

An important class of systems in physics are wave problems. In the linear, constant coefficient case, they are described by an equation of the form

$${v_{tt}} = \sum\limits_{j,k = 1}^n {{A^{jk}}} {{{\partial ^2}} \over {\partial {x^j}\partial {x^k}}}v + \sum\limits_{j = 1}^n 2 {B^j}{\partial \over {\partial {x^j}}}{v_t} + \sum\limits_{j = 1}^n {{C^j}} {\partial \over {\partial {x^j}}}v + D{v_t} + Ev,\quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where v = v(t, x) ∈ ℂm is the state vector, and Ajk = Akj, Bj, Cj, D, E denote complex m × m matrices. In order to apply the theory described so far, we reduce this equation to a system that is first order in time. This is achieved by introducing the new variable \(w: = {\upsilon _t} - \sum\limits_{j = 1}^n {{B^j}} {\partial \over {\partial {x^j}}}\upsilon\). With this redefinition one obtains a system of the form (3.1) with u = (v, w)T and

$$P(\partial /\partial x) = \sum\limits_{j = 1}^n {{B^j}} {\partial \over {\partial {x^j}}} + \left({\begin{array}{*{20}c} 0 & I \\ {\sum\limits_{j,k = 1}^n {({A^{jk}} + {B^j}{B^k})} {{{\partial ^2}} \over {\partial {x^j}\partial {x^k}}} + \sum\limits_{j = 1}^n {({C^j} + D{B^j})} {\partial \over {\partial {x^j}}} + E} & D \\ \end{array}} \right).$$

Now we could apply the matrix theorem, Theorem 1, to the corresponding symbol P(ik) and analyze under which conditions on the matrix coefficients Ajk, Bj, Cj, D, E the Cauchy problem is well posed. However, since our problem originates from a second-order equation, it is convenient to rewrite the symbol in a slightly different way: instead of taking the Fourier transform of v and w directly, we multiply \(\hat v\) by |k| and write the symbol in terms of the variable \(\hat U: = {(\vert k\vert \hat \upsilon, \hat w)^T}\). Then, the L2-norm of Û controls, through Parseval’s identity, the L2-norms of the first partial derivatives of v, as is the case for the usual energies for second-order systems. In terms of Û the system reads

$${\hat U_t} = Q(ik)\hat U,\quad t \geq 0,\quad k \in {{\mathbb R}^n},$$

in Fourier space, where

$$Q(ik) = i\vert k\vert \sum\limits_{j = 1}^n {{B^j}} {\hat k_j} + \left({\begin{array}{*{20}c} 0 & {\vert k\vert I} \\ {- \vert k\vert \sum\limits_{j,k = 1}^n {({A^{jk}} + {B^j}{B^k})} {{\hat k}_j}{{\hat k}_k} + i\sum\limits_{j = 1}^n {({C^j} + D{B^j})} {{\hat k}_j} + {1 \over {\vert k\vert}}E} & D \\ \end{array}} \right)$$

with \({\hat k_j}: = {k_j}/\vert k\vert\). As for first-order systems, we can split Q(ik) into its principal part,

$${Q_0}(ik): = i\vert k\vert \sum\limits_{j = 1}^n {{B^j}} {\hat k_j} + \vert k\vert \left({\begin{array}{*{20}c} 0 & I \\ {- \sum\limits_{j,k = 1}^n {({A^{jk}} + {B^j}{B^k})} {{\hat k}_j}{{\hat k}_k}} & 0 \\ \end{array}} \right),$$

which dominates for |k| → ∞, and the remaining, lower-order terms. Because of the homogeneity of Q0(ik) in k we can restrict ourselves to values of kSn−1 on the unit sphere, like for first-order systems. Then, it follows as a consequence of the matrix theorem that the problem is well posed if and only if there exists a symmetrizer H(k) and a constant M > 0 satisfying

$${M^{- 1}}I \leq H(k) \leq MI,\quad H(k){Q_0}(ik) + {Q_0}{(ik)^{\ast}}H(k) = 0$$

for all such k. Necessary and sufficient conditions under which such a symmetrizer exists have been given in [261] for the particular case in which the mixed-second-order derivative term in Eq. (3.56) vanishes; that is, when Bj = 0. This result can be generalized in a straightforward manner to the case where the matrices Bj = βjI are proportional to the identity:

Theorem 2. Suppose Bj = βjI, j = 1, 2, …, n. (Note that this condition is trivially satisfied if m = 1.) Then, the Cauchy problem for Eq. (3.56) is well posed if and only if the symbol

$$R(k): = \sum\limits_{i,j = 1}^n {({A^{ij}} + {B^i}{B^j})} {k_i}{k_j},\quad k \in {S^{n - 1}},$$

has the following properties: there exist constants M > 0 and δ > 0 and a family h(k) of Hermitian m × m matrices such that

$${M^{- 1}}I \leq h(k) \leq MI,\quad h(k)R(k) = R{(k)^{\ast}}h(k) \geq \delta I$$

for all kSn−1.

Proof. Since for Bj = βjI the advection term \(i|k|\sum\limits_{j = 1}^n {{B^j}{{\hat k}_j}}\) commutes with any Hermitian matrix H(k), it is sufficient to prove the theorem for Bj = 0, in which case the principal symbol reduces to

$${Q_0}(ik): = \left({\begin{array}{*{20}c} 0 & I \\ {- R(k)} & 0 \\ \end{array}} \right),\quad k \in {S^{n - 1}}.$$

We write the symmetrizer H(k) in the following block form,

$$H(k) = \left({\begin{array}{*{20}c} {{H_{11}}(k)} & {{H_{12}}(k)} \\ {{H_{12}}{{(k)}^{\ast}}} & {{H_{22}}(k)} \\ \end{array}} \right),$$

where H11(k), H22(k) and H12(k) are complex m × m matrices, the first two being Hermitian. Then,

$$H(k){Q_0}(ik) + {Q_0}{(ik)^{\ast}}H(k) = \left({\begin{array}{*{20}c} {- {H_{12}}(k)R(k) - R{{(k)}^{\ast}}{H_{12}}{{(k)}^{\ast}}} & {{H_{11}}(k) - R{{(k)}^{\ast}}{H_{22}}(k)} \\ {{H_{11}}(k) - {H_{22}}(k)R(k)} & {{H_{12}}(k) + {H_{12}}{{(k)}^{\ast}}} \\ \end{array}} \right).$$

Now, suppose h(k) satisfies the conditions (3.63). Then, choosing H12(k) := 0, H22(k) := h(k) and H11(k) := h(k)R(k) we find that H(k)Q0(ik) + Q0(ik)*H(k) = 0. Furthermore, M−1IH22(k) ≤ MI and δIH11(k) = h(k)R(k) ≤ MCI where

$$C: = \sup \{\vert R(k)u\vert :k \in {S^{n - 1}},u \in {{\mathbb C}^m},\vert u\vert = 1\}$$

is finite because R(k)u is continuous in k and u. Therefore, H(k) is a symmetrizer for Q0(ik), and the problem is well posed.

Conversely, suppose that the problem is well posed with symmetrizer H(k). Then, the vanishing of H(k)Q0(ik) + Q0(ik)*H(k) yields the conditions H11(k) = H22(k)R(k) = R(k)*H22(k), and the conditions (3.63) are satisfied for h(k) := H22(k); in particular, h(k)R(k) = H11(k) ≥ M−1I, so one may take δ = M−1.

Remark: The conditions (3.63) imply that R(k) is symmetric and positive with respect to the scalar product defined by h(k). Hence it is diagonalizable, and all its eigenvalues are positive. A practical way of finding h(k) is to construct T(k), which diagonalizes R(k), T(k)−1 R(k)T(k) = P(k) with P(k) diagonal and positive. Then, h(k) := (T(k)−1)*T(k)−1 is the candidate for satisfying the conditions (3.63).
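The recipe of the remark is easy to automate. The sketch below illustrates it with our own sample 2 × 2 coefficient matrices (A11, A22 with A12 = 0 and Bj = 0, so R(k) = k1²A11 + k2²A22), building h(k) from the diagonalizing matrix T(k) and verifying the conditions (3.63) over the unit circle:

```python
import numpy as np

# Sample coefficient matrices (our illustrative choice, not from the text).
A11 = np.array([[2.0, 1.0], [1.0, 2.0]])
A22 = np.array([[1.0, 0.0], [0.0, 1.0]])

def candidate_h(R):
    """Remark's recipe: T^{-1} R T = P diagonal positive, h := (T^{-1})^* T^{-1}."""
    evals, T = np.linalg.eig(R)
    assert np.allclose(evals.imag, 0.0) and np.all(evals.real > 0)
    Tinv = np.linalg.inv(T)
    return Tinv.conj().T @ Tinv

for theta in np.linspace(0.0, np.pi, 50):
    k1, k2 = np.cos(theta), np.sin(theta)
    R = k1**2 * A11 + k2**2 * A22
    h = candidate_h(R)
    hR = h @ R
    assert np.allclose(hR, hR.conj().T)            # h R is Hermitian
    assert np.min(np.linalg.eigvalsh(hR)) > 0.1    # and uniformly positive
```

Because hR(k) = (T−1)* P T−1 with P diagonal and positive, hermiticity and positivity hold by construction; the loop checks that the bounds are uniform in k, which is the content of the conditions (3.63).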

Let us give some examples and applications:

Example 16. The Klein-Gordon equation v tt = Δvm2v on flat spacetime. In this case, Ajk = δjk and Bj = 0, and R(k) = |k|2 trivially satisfies the conditions of Theorem 2.

Example 17. In anticipation of the following Section 3.2, where linear problems with variable coefficients are treated, let us generalize the previous example to a curved spacetime (M, g). We assume that (M, g) is globally hyperbolic, such that it can be foliated by space-like hypersurfaces Σ t . In the ADM decomposition, the metric in adapted coordinates assumes the form

$$g = - {\alpha ^2}dt \otimes dt + {\gamma _{ij}}(d{x^i} + {\beta ^i}dt) \otimes (d{x^j} + {\beta ^j}dt),$$

with α > 0 the lapse, βi the shift vector, which is tangent to Σ t , and γ ij dxidxj the induced three-metric on the spacelike hypersurfaces Σ t . The inverse of the metric is given by

$${g^{- 1}} = - {1 \over {{\alpha ^2}}}\left({{\partial \over {\partial t}} - {\beta ^i}{\partial \over {\partial {x^i}}}} \right) \otimes \left({{\partial \over {\partial t}} - {\beta ^j}{\partial \over {\partial {x^j}}}} \right) + {\gamma ^{ij}}{\partial \over {\partial {x^i}}} \otimes {\partial \over {\partial {x^j}}},$$

where γij are the components of the inverse three-metric. The Klein-Gordon equation on (M, g) is

$${g^{\mu \nu}}{\nabla _\mu}{\nabla _\nu}v = {1 \over {\sqrt {- \det (g)}}}{\partial _\mu}\left({\sqrt {- \det (g)} {g^{\mu \nu}}{\partial _\nu}v} \right) = {m^2}v,$$

which, in the constant coefficient case, has the form of Eq. (3.56) with

$${A^{jk}} = {\alpha ^2}{\gamma ^{jk}} - {\beta ^j}{\beta ^k},\quad {B^j} = {\beta ^j}.$$

Hence, R(k) = α2γijk i k j , and the conditions of Theorem 2 are satisfied with h(k) = 1 since α > 0 and γij is symmetric positive definite.
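This cancellation of the shift terms in R(k) can be verified numerically. The sketch below uses our own sample values for the lapse, shift and inverse three-metric (any positive lapse and positive-definite γij would do) and checks that R(k) = α²γjkk j k k > 0 for random directions:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.3                              # sample lapse (our choice), alpha > 0
beta = np.array([0.4, -0.2, 0.1])        # sample shift vector (our choice)
L = rng.normal(size=(3, 3))
gamma_inv = L @ L.T + np.eye(3)          # a positive-definite inverse 3-metric

# Coefficients read off from the wave equation on (M, g):
A = alpha**2 * gamma_inv - np.outer(beta, beta)   # A^{jk}
for _ in range(100):
    k = rng.normal(size=3)
    k /= np.linalg.norm(k)
    R = k @ A @ k + (beta @ k) ** 2      # R(k) = (A^{jk} + B^j B^k) k_j k_k
    assert np.isclose(R, alpha**2 * (k @ gamma_inv @ k))  # shift terms cancel
    assert R > 0                         # so h(k) = 1 works: well posed
```

The outer product −βjβk in Ajk is exactly compensated by the BjBk term of Theorem 2, leaving the manifestly positive α²γjkk j k k .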

Linear problems with variable coefficients

Next, we generalize the theory to linear evolution problems with variable coefficients. That is, we consider equations of the following form:

$${u_t} = P(t,x,\partial /\partial x)u \equiv \sum\limits_{\vert \nu \vert \leq p} {{A_\nu}} (t,x){D_\nu}u,\quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where now the complex m × m matrices A ν (t, x) may depend on t and x. For simplicity, we assume that each matrix coefficient of A ν belongs to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C-functions with bounded derivatives. Unlike the constant coefficient case, the different k-modes couple when performing a Fourier transformation, and there is no simple explicit representation of the solutions through the exponential of the symbol. Therefore, Definition 1 of well-posedness needs to be altered. Instead of giving an operator-based definition, let us define well-posedness by the basic requirements a Cauchy problem should satisfy:

Definition 3. The Cauchy problem

$${u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x),\;\;x \in {{\mathbb R}^n},\quad t \geq 0,$$
$$u(0,x) = f(x),\,x \in {{\mathbb R}^n},$$

is well posed if any \(f \in C_0^\infty ({{\rm{\mathbb R}}^n})\) gives rise to a unique C-solution u(t, x), and if there are constants K ≥ 1 and α ∈ ℝ such that

$$\vert \vert u(t, \cdot)\vert \vert \leq K{e^{\alpha t}}\vert \vert f\vert \vert$$

for all \(f \in C_0^\infty ({{\rm{\mathbb R}}^n})\) and all t ≥ 0.

Before we proceed and analyze under which conditions on the operator P(t, x, ∂/∂x) the Cauchy problem (3.73, 3.74) is well posed, let us make the following observations:

  • In the constant coefficient case, inequality (3.75) is equivalent to inequality (3.11), and in this sense Definition 3 is a generalization of Definition 1.

  • If u1 and u2 are the solutions corresponding to the initial data \({f_1},{f_2} \in C_0^\infty ({{\rm{\mathbb R}}^n})\), then the difference u = u2u1 satisfies the Cauchy problem (3.73, 3.74) with ƒ = ƒ2ƒ1 and the estimate (3.75) implies that

    $$\vert \vert {u_2}(t, \cdot) - {u_1}(t, \cdot)\vert \vert \leq K{e^{\alpha t}}\vert \vert {f_2} - {f_1}\vert \vert ,\quad \quad t \geq 0.$$

    In particular, this implies that u2(t, ·) converges to u1(t, ·) if ƒ2 converges to ƒ1 in the L2-sense. In this sense, the solution depends continuously on the initial data. This property is important for the convergence of a numerical approximation, as discussed in Section 7.

  • Estimate (3.75) also implies uniqueness of the solution, because for two solutions u1 and u2 with the same initial data \({f_1} = {f_2} \in C_0^\infty ({{\rm{\mathbb R}}^n})\) the inequality (3.76) implies u1 = u2.

  • As in the constant coefficient case, it is possible to extend the solution concept to weak ones by taking sequences of C-elements. This defines a propagator U(t, s) : L2(ℝn) → L2(ℝn), which maps the solution at time s ≥ 0 to the solution at time ts and satisfies properties similar to the ones described in Section 3.1.2: (i) U(t, t) = I for all t ≥ 0, (ii) U(t, s)U(s, r) = U(t, r) for all tsr ≥ 0, (iii) for \(f \in C_0^\infty ({{\rm{\mathbb R}}^n})\), U(t, 0)ƒ is the unique solution of the Cauchy problem (3.73, 3.74), (iv) ‖U(t, s)ƒ‖ ≤ Keα(ts)ƒ‖ for all ƒ ∈ L2(ℝn) and all ts ≥ 0. Furthermore, the Duhamel formula (3.23) holds with the replacement U(ts) ↦ U(t, s).

The localization principle

Like in the constant coefficient case, we would like to have a criterion for well-posedness that is based on the coefficients A ν (t, x) of the differential operator alone. As we have seen in the constant coefficient case, well-posedness is essentially a statement about high frequencies. Therefore, we are led to consider solutions with very high frequency or, equivalently, with very short wavelength. In this regime we can consider small neighborhoods, and since the coefficients A ν (t, x) are smooth, they are approximately constant in such neighborhoods. Therefore, intuitively, the question of well-posedness for the variable coefficient problem can be reduced to a frozen coefficient problem, where the values of the matrix coefficients A ν (t, x) are frozen to their values at a given point.

In order to analyze this more carefully, and for the sake of illustration, let us consider a first-order linear system with variable coefficients

$${u_t} = P(t,x,\partial /\partial x)u \equiv \sum\limits_{j = 1}^n {{A^j}} (t,x){\partial \over {\partial {x^j}}}u + B(t,x)u,\quad \quad x \in {{\mathbb R}^n},\quad t \geq 0,$$

where A1, …, An, B are complex m×m matrices, whose coefficients belong to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C-functions with bounded derivatives. As mentioned above, the Fourier transform of this operator does not yield a simple, algebraic symbol like in the constant coefficient case. However, given a specific point p0 = (t0, x0) ∈ [0, ∞) × ℝn, we may zoom into a very small neighborhood of p0. Since the coefficients Aj(t, x) and B(t, x) are smooth, they will be approximately constant in this neighborhood and we may freeze the coefficients of Aj(t, x) and B(t, x) to their values at the point p0. More precisely, let u(t, x) be a smooth solution of Eq. (3.77). Then, we consider the formal expansion

$$u({t_0} + \varepsilon t,{x_0} + \varepsilon x) = u({t_0},{x_0}) + \varepsilon {u^{(1)}}(t,x) + {\varepsilon ^2}{u^{(2)}}(t,x) + \ldots ,\quad \quad \varepsilon > 0.$$

As a consequence of Eq. (3.77) one obtains

$$\begin{array}{*{20}c} {u_t^{(1)}(t,x) + \varepsilon u_t^{(2)}(t,x) + \ldots = \sum\limits_{j = 1}^n {{A^j}} ({t_0} + \varepsilon t,{x_0} + \varepsilon x)\left[ {{{\partial {u^{(1)}}} \over {\partial {x^j}}}(t,x) + \varepsilon {{\partial {u^{(2)}}} \over {\partial {x^j}}}(t,x) + \ldots} \right]\quad \quad \quad \quad \quad \quad \,\,} \\ {+ B({t_0} + \varepsilon t,{x_0} + \varepsilon x)\left[ {u({t_0},{x_0}) + \varepsilon {u^{(1)}}(t,x) + \ldots} \right].} \end{array}$$

Taking the pointwise limit ε → 0 on both sides of this equation we obtain

$$u_t^{(1)}(t,x) = \sum\limits_{j = 1}^n {{A^j}} ({t_0},{x_0}){{\partial {u^{(1)}}} \over {\partial {x^j}}}(t,x) + {F_0} = {P_0}({t_0},{x_0},\partial /\partial x){u^{(1)}}(t,x) + {F_0},$$

where F0 := B(t0, x0)u(t0, x0). Therefore, if u is a solution of the variable coefficient equation u t = P(t, x, ∂/∂x)u, then, u(1) satisfies the linear constant coefficient problem \(u_t^{(1)}(t,x) = {P_0}({t_0},{x_0},\partial/\partial x){u^{(1)}} + {F_0}\) obtained by freezing the coefficients in the principal part of P(t, x, ∂/∂x)u to their values at the point p0 and by replacing the lower-order term B(t, x) by the forcing term F0. By adjusting the scaling of t, a similar conclusion can be obtained when P(t, x, ∂/∂x) is a higher-derivative operator.

This leads us to the following statement: a necessary condition for the linear, variable coefficient Cauchy problem for the equation u t = P(t, x, ∂/∂x)u to be well posed is that all the corresponding problems for the frozen coefficient equations v t = P0(t0, x0, ∂/∂x)v are well posed. For a rigorous proof of this statement in the case in which P(t, x, ∂/∂x) is time-independent, see [397]. We stress that it is important to replace P(t, x, ∂/∂x) by its principal part P0(t, x, ∂/∂x) when freezing the coefficients. The statement is false if lower-order terms are retained; see [259, 397] for counterexamples.

Now it is natural to ask whether or not the converse statement is true: suppose that the Cauchy problems for all frozen coefficient equations v t = P0(t0, x0, ∂/∂x)v are well posed; is the original, variable coefficient problem also well posed? It turns out this localization principle is valid in many cases under additional smoothness requirements. In order to formulate the latter, let us go back to the first-order equation (3.77). We define its principal symbol as

$${P_0}(t,x,ik): = i\sum\limits_{j = 1}^n {{A^j}} (t,x){k_j}.$$

In analogy to the constant coefficient case we define:

Definition 4. The first-order system (3.77) is called

  1. (i)

    weakly hyperbolic if all the eigenvalues of its principal symbol P0(t, x, ik) are purely imaginary.

  2. (ii)

    strongly hyperbolic if there exist M > 0 and a family of positive definite, Hermitian m × m matrices H(t, x, k), (t, x, k) ∈ Ω × Sn−1, whose coefficients belong to the class \(C_b^\infty (\Omega \times {S^{n - 1}})\), such that

    $${M^{- 1}}I \leq H(t,x,k) \leq MI,\quad \quad H(t,x,k){P_0}(t,x,ik) + {P_0}{(t,x,ik)^{\ast}}H(t,x,k) = 0,$$

    for all (t, x, k) ∈ Ω × Sn−1, where Ω := [0, ∞) × ℝn.

  3. (iii)

    symmetric hyperbolic if it is strongly hyperbolic and the symmetrizer H(t, x, k) can be chosen independent of k.

We see that these definitions are straight extrapolations of the corresponding definitions (see Definition 2) in the constant coefficient case, except for the smoothness requirements for the symmetrizer H(t, x, k). There are examples of ill-posed Cauchy problems for which a Hermitian, positive-definite symmetrizer H(t, x, k) exists but is not smooth [397], showing that these requirements are necessary in general.
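Definition 4 (ii) can be made concrete with a small numerical experiment: for a strongly hyperbolic system a symmetrizer can be assembled from the eigenvectors of the principal symbol. The sketch below uses an illustrative 2 × 2 example of our own choosing (the first-order reduction of the wave equation, not a system taken from the text) and checks both conditions of item (ii).

```python
import numpy as np

# Sketch of Definition 4 (ii): building a symmetrizer for a strongly
# hyperbolic system from the eigenvectors of its principal symbol.
# The 2x2 matrix A below (first-order reduction of the wave equation
# u_tt = c^2 u_xx) is our own illustrative choice, not taken from the text.
c = 2.0
A = np.array([[0.0, 1.0],
              [c**2, 0.0]])          # P0(ik) = i k A

# Strong hyperbolicity: real eigenvalues of A and a complete set of
# eigenvectors, collected as the columns of S.
lams, S = np.linalg.eig(A)
Sinv = np.linalg.inv(S)

# Candidate symmetrizer H = (S^{-1})^* S^{-1}.
H = Sinv.conj().T @ Sinv

assert np.allclose(H, H.conj().T)            # H Hermitian
assert np.all(np.linalg.eigvalsh(H) > 0)     # H positive definite
# H P0 + P0^* H = i k (H A - A^T H) = 0  holds iff  H A is symmetric:
assert np.allclose(H @ A, (H @ A).conj().T)
print("symmetrizer conditions verified")
```

Because the eigenvalues of A are real and distinct, any choice of eigenvector normalization yields a valid symmetrizer of this form; the bounds M⁻¹I ≤ H ≤ MI are automatic for a single frozen point.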

The smooth symmetrizer is used in order to construct a pseudo-differential operator

$$[H(t)v](x): = {1 \over {{{(2\pi)}^{n/2}}}}\int {H(t,x,k/\vert k\vert){e^{ik \cdot x}}\hat v(k){d^n}k,} \quad \quad \hat v(k) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{- ik \cdot x}}v(x){d^n}x,}$$

from which one defines a scalar product (·, ·)H(t), which, for each t, is equivalent to the L2 product. This scalar product has the property that a solution u to the equation (3.77) satisfies an inequality of the form

$${d \over {dt}}{(u,u)_{H(t)}} \leq b(T){(u,u)_{H(t)}},\quad \quad 0 \leq t \leq T,$$

see, for instance, [411]. Upon integration this yields an estimate of the form of Eq. (3.75). In the symmetric hyperbolic case, we have simply [H(t)v](x) = H(t, x)v(x) and the scalar product is given by

$${(u,v)_{H(t)}}: = \int {u{{(x)}^{\ast}}H(t,x)v(x){d^n}x,} \quad \quad u,v \in {L^2}({\mathbb R^n}).$$

We will return to the application of this scalar product for deriving energy estimates below. Let us state the important result:

Theorem 3. If the first-order system (3.77) is strongly or symmetric hyperbolic in the sense of Definition 4, then the Cauchy problem ( 3.73 , 3.74 ) is well posed in the sense of Definition 3.

For a proof of this theorem, see, for instance, Proposition 7.1 and the comments following its formulation in Chapter 7 of [411]. Let us look at some examples:

Example 18. For a given, stationary fluid field, the non-relativistic, ideal magnetohydrodynamic equations reduce to the simple system [120]

$${B_t} = \nabla \wedge (v \wedge B)$$

for the magnetic field B, where v is the fluid velocity. The principal symbol for this equation is given by

$${P_0}(x,ik)B = ik \wedge (v(x) \wedge B) = (ik \cdot B)v(x) - (ik \cdot v(x))B.$$

In order to analyze it, it is convenient to introduce an orthonormal frame e1, e2, e3 such that e1 is parallel to k. With respect to this frame, the matrix corresponding to P0(x, ik) is

$$i\vert k\vert \left(\begin{array}{*{20}c} 0 & 0 & 0 \\ {{v_2}(x)} & {- {v_1}(x)} & 0 \\ {{v_3}(x)} & 0 & {- {v_1}(x)} \\ \end{array}\right),$$

with purely imaginary eigenvalues 0, −i|k|v1(x). However, the symbol is not diagonalizable when k is orthogonal to the fluid velocity, v1(x) = 0, and so the system is only weakly hyperbolic.

One can still show that the system is well posed if one takes into account the constraint ∇ · B = 0, which is preserved by the evolution equation (3.85). In Fourier space, this constraint forces B1 = 0, which eliminates the first row and column in the principal symbol and yields a strongly hyperbolic symbol. However, at the numerical level, this means that special care needs to be taken when discretizing the system (3.85), since any discretization that does not preserve ∇ · B = 0 will push the solution away from the constraint manifold, in which case the system is weakly hyperbolic. For numerical schemes that explicitly preserve (divergence-transport) or enforce (divergence-cleaning) the constraints, see [159] and [136], respectively. For alternative formulations that are strongly hyperbolic without imposing the constraint, see [120].
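The failure of diagonalizability in Example 18 can be checked directly. The sketch below (the velocity components are arbitrary test values of our choosing) verifies that the frozen symbol has purely imaginary eigenvalues for every direction, but acquires a Jordan block exactly when k is orthogonal to v:

```python
import numpy as np

# Frozen-coefficient principal symbol of B_t = curl(v x B) in the
# orthonormal frame with e1 parallel to k (the i|k|-multiple matrix
# displayed above). Velocity components are arbitrary test values.
def symbol(v1, v2, v3, k=1.0):
    return 1j * k * np.array([[0, 0, 0],
                              [v2, -v1, 0],
                              [v3, 0, -v1]], dtype=complex)

def geometric_multiplicity(M, lam):
    return M.shape[0] - np.linalg.matrix_rank(M - lam * np.eye(M.shape[0]))

def is_diagonalizable(M, eigenvalues):
    # Diagonalizable iff the geometric multiplicities add up to the dimension.
    return sum(geometric_multiplicity(M, lam) for lam in eigenvalues) == M.shape[0]

# Generic direction: v1 != 0, eigenvalues 0 and -i|k|v1 -> diagonalizable.
M1 = symbol(v1=0.5, v2=1.0, v3=2.0)
assert is_diagonalizable(M1, [0.0, -0.5j])

# k orthogonal to v: v1 = 0, only the eigenvalue 0 remains and a Jordan
# block appears -> merely weakly hyperbolic.
M2 = symbol(v1=0.0, v2=1.0, v3=2.0)
assert not is_diagonalizable(M2, [0.0])

# In both cases all eigenvalues are purely imaginary.
assert np.allclose(np.linalg.eigvals(M1).real, 0.0)
assert np.allclose(np.linalg.eigvals(M2).real, 0.0)
print("weakly hyperbolic: diagonalizability lost at v1 = 0")
```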

Example 19. The localization principle can be generalized to a certain class of second-order systems [261, 308]. For example, we may consider a second-order linear equation of the form

$${v_{tt}} = \sum\limits_{j,k = 1}^n {{A^{jk}}} (t,x){{{\partial ^2}} \over {\partial {x^j}\partial {x^k}}}v + \sum\limits_{j = 1}^n 2 {B^j}(t,x){\partial \over {\partial {x^j}}}{v_t} + \sum\limits_{j = 1}^n {{C^j}} (t,x){\partial \over {\partial {x^j}}}v + D(t,x){v_t} + E(t,x)v,$$

x ∈ ℝn, t ≥ 0, where now the m × m matrices Ajk, Bj, Cj, D and E belong to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C^∞-functions with bounded derivatives. Zooming into a very small neighborhood of a given point p0 = (t0, x0) by applying the expansion in Eq. (3.78) to v, one obtains, in the limit ε → 0, the constant coefficient equation

$$v_{tt}^{(2)}(t,x) = \sum\limits_{j,k = 1}^n {{A^{jk}}} ({t_0},{x_0}){{{\partial ^2}{v^{(2)}}} \over {\partial {x^j}\partial {x^k}}}(t,x) + \sum\limits_{j = 1}^n 2 {B^j}({t_0},{x_0}){{\partial v_t^{(2)}} \over {\partial {x^j}}}(t,x) + {F_0},$$


where

$${F_0}: = \sum\limits_{j = 1}^n {{C^j}} ({t_0},{x_0}){{\partial v} \over {\partial {x^j}}}({t_0},{x_0}) + D({t_0},{x_0}){v_t}({t_0},{x_0}) + E({t_0},{x_0})v({t_0},{x_0}),$$

where we have used the fact that \({\upsilon ^{(1)}}(t,x) = t{\upsilon _t}({t_0},{x_0}) + \sum\limits_{j = 1}^n {{x^j}{{\partial \upsilon} \over {\partial {x^j}}}({t_0},{x_0})}\). Eq. (3.89) can be rewritten as a first-order system in Fourier space for the variable

$$\hat U = \left(\begin{array}{*{20}c} {\vert k\vert \hat v\quad \quad \quad \quad \quad \quad} \\ {{{\hat v}_t} - i\sum\limits_{j = 1}^n {{B^j}} ({t_0},{x_0}){k_j}\hat v} \\ \end{array} \right),$$

see Section 3.1.5. Now Theorem 2 implies that the problem is well posed, if there exist constants M > 0 and δ > 0 and a family of positive definite m × m Hermitian matrices h(t, x, k), (t, x, k) ∈ Ω × Sn−1, which is C^∞-smooth in all its arguments, such that M−1I ≤ h(t, x, k) ≤ MI and h(t, x, k)R(t, x, k) = R(t, x, k)*h(t, x, k) ≥ δI for all (t, x, k) ∈ Ω × Sn−1, where \(R(t,x,k): = \sum\limits_{i,j = 1}^n {({A^{ij}}(t,x) + {B^i}(t,x){B^j}(t,x)){k_i}{k_j}}\).

In particular, it follows that the Cauchy problem for the Klein-Gordon equation on a globally-hyperbolic spacetime M = [0, ∞) × ℝn with \(\alpha, {\beta ^i},{\gamma _{ij}} \in C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\), is well posed provided that \({\alpha ^2}{\gamma ^{ij}}\) is uniformly positive definite; see Example 17.

Characteristic speeds and fields

Consider a first-order linear system of the form (3.77), which is strongly hyperbolic. Then, for each t ≥ 0, x ∈ ℝn and k ∈ Sn−1 the principal symbol P0(t, x, ik) is diagonalizable and has purely imaginary eigenvalues. In the constant coefficient case with no lower-order terms (B = 0) an eigenvalue iμ(k) of P0(ik) with corresponding eigenvector a(k) gives rise to the plane-wave solution

$$u(t,x) = a(k){e^{i\mu (k)t + ik \cdot x}},\quad \quad t \geq 0,x \in {\mathbb R^n}.$$

If lower-order terms are present and the matrix coefficients Aj(t, x) are not constant, one can look for approximate plane-wave solutions, which have the form

$$u(t,x) = {a_\varepsilon}(t,x){e^{i{\varepsilon ^{- 1}}\psi (t,x)}},\quad \quad t \geq 0,x \in {\mathbb R^n},$$

where ε > 0 is a small parameter, ψ(t, x) a smooth phase function and a_ε(t, x) = a0(t, x) + εa1(t, x) + ε²a2(t, x) + … a slowly varying amplitude. Introducing the ansatz (3.93) into Eq. (3.77) and taking the limit ε → 0 yields the problem

$$i{\psi _t}{a_0} = {P_0}(t,x,i\nabla \psi){a_0} = i\sum\limits_{j = 1}^n {{A^j}} (t,x){{\partial \psi} \over {\partial {x^j}}}{a_0}.$$

Setting ω(t, x) := ψ_t(t, x) and k(t, x) := ∇ψ(t, x), a nontrivial solution exists if and only if the eikonal equation

$$\det \left[ {i\omega I - {P_0}(t,x,ik)} \right] = 0$$

is satisfied. Its solutions provide the phase function ψ(t, x) whose level sets have co-normal ωdt + k · dx. The phase function and a0 determine approximate plane-wave solutions of the form (3.93). For this reason we call ω(k) the characteristic speed in the direction k ∈ Sn−1, and a0 a corresponding characteristic mode. For a strongly hyperbolic system, the solution at each point (t, x) can be expanded in terms of the characteristic modes e_j(t, x, k) with respect to a given direction k ∈ Sn−1,

$$u(t,x) = \sum\limits_{j = 1}^m {{u^{(j)}}} (t,x,k){e_j}(t,x,k).$$

The corresponding coefficients u^{(j)}(t, x, k) are called the characteristic fields.

Example 20. Consider the Klein-Gordon equation on a hyperbolic spacetime, as in Example 17. In this case the eikonal equation is

$$0 = \det \left[ {i\omega I - {Q_0}(ik)} \right] = \det \left(\begin{array}{*{20}c} {i(\omega - {\beta ^j}{k_j})} & {\vert k\vert} \\ {- {\alpha ^2}{\gamma ^{ij}}{k_i}{k_j}/\vert k\vert} & {i(\omega - {\beta ^j}{k_j})} \\ \end{array} \right) = - {(\omega - {\beta ^j}{k_j})^2} + {\alpha ^2}{\gamma ^{ij}}{k_i}{k_j},$$

which yields \({\omega _ \pm}(k) = {\beta ^j}{k_j} \pm \alpha \sqrt {{\gamma ^{ij}}{k_i}{k_j}}\). The corresponding co-normals ω±(k)dt + k_j dxj are null; hence the surfaces of constant phase are null surfaces. The characteristic modes and fields are

$${e_ \pm}(k) = \left(\begin{array}{*{20}c} {i\vert k\vert} \\ {\mp \alpha \sqrt {{\gamma ^{ij}}{k_i}{k_j}}} \\ \end{array} \right),\quad \quad {u^{(\pm)}}(k) = {1 \over 2}\left({{{{U_1}} \over {i\vert k\vert}} \mp {{{U_2}} \over {\alpha \sqrt {{\gamma ^{ij}}{k_i}{k_j}}}}} \right),$$

where U = (U1, U2) = (|k|v, v_t − iβ^jk_jv) and v is the Klein-Gordon field.

Example 21. In the formulation of Maxwell’s equations discussed in Example 15, the characteristic speeds are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, and the corresponding characteristic fields are the components of the vector on the right-hand side of Eq. (3.54).
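The expansion (3.96) is a plain eigenvector decomposition at each frozen point. As a numerical sketch (again for an illustrative 2 × 2 system of our own choosing, not one taken from the text), the characteristic fields are obtained by projecting the state vector onto the dual basis of left eigenvectors:

```python
import numpy as np

# Characteristic decomposition u = sum_j u^{(j)} e_j for the 1-D system
# u_t = A u_x: the characteristic speeds are the eigenvalues of A, the
# modes e_j its right eigenvectors, and the fields u^{(j)} the projections
# of the state onto the dual (left) eigenvector basis.
c = 2.0
A = np.array([[0.0, 1.0],
              [c**2, 0.0]])

speeds, E = np.linalg.eig(A)       # columns of E: characteristic modes e_j
L = np.linalg.inv(E)               # rows of L: dual left eigenvectors

u = np.array([1.0, 3.0])           # an arbitrary state vector
w = L @ u                          # characteristic fields u^{(j)}

assert np.allclose(np.sort(speeds), [-c, c])   # speeds are +-c
assert np.allclose(E @ w, u)                   # modes reassemble the state
print("characteristic fields:", w)
```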

Energy estimates and finite speed of propagation

Here we focus our attention on first-order linear systems, which are symmetric hyperbolic. In this case it is not difficult to derive a priori energy estimates based on integration by parts. Such estimates assume the existence of a sufficiently smooth solution and bound an appropriate norm of the solution at some time t > 0 in terms of the same norm of the solution at the initial time t = 0. As we will illustrate here, such estimates already yield quite a lot of information on the qualitative behavior of the solutions. In particular, they give uniqueness, continuous dependence on the initial data and finite speed of propagation.

The word “energy” stems from the fact that for many problems the squared norm satisfying the estimate is directly or indirectly related to the physical energy of the system, although for many other problems the squared norm does not have a physical interpretation of any kind.

For first-order symmetric hyperbolic linear systems, an a priori energy estimate can be constructed from the symmetrizer H(t, x) in the following way. For a given smooth solution u(t, x) of Eq. (3.77), define the vector field J on Ω = [0, ∞) × ℝn by its components

$${J^\mu}(t,x): = - u{(t,x)^{\ast}}H(t,x){A^\mu}(t,x)u(t,x),\quad \quad \mu = 0,1,2, \ldots ,n,$$

where A0(t, x) := −I. By virtue of the evolution equation, J satisfies

$${\partial _\mu}{J^\mu}(t,x) \equiv {\partial \over {\partial t}}{J^0}(t,x) + \sum\limits_{k = 1}^n {{\partial \over {\partial {x^k}}}} {J^k}(t,x) = u{(t,x)^{\ast}}K(t,x)u(t,x),$$

where the Hermitian m × m matrix K(t, x) is defined as

$$K(t,x): = H(t,x)B(t,x) + B{(t,x)^{\ast}}H(t,x) + {H_t}(t,x) - \sum\limits_{k = 1}^n {{\partial \over {\partial {x^k}}}} \left[ {H(t,x){A^k}(t,x)} \right].$$

If K = 0, Eq. (3.100) formally looks like a conservation law for the current density J. If K ≠ 0, we obtain, instead of a conserved quantity, an energy-like expression whose growth can be controlled by its initial value. For this, we first notice that our assumptions on the matrices H(t, x), B(t, x) and Ak(t, x) imply that K(t, x) is bounded on Ω. In particular, since H(t, x) is uniformly positive, there is a constant α > 0 such that

$$K(t,x) \leq 2\alpha H(t,x),\quad \quad (t,x) \in \Omega .$$

Let Ω_T = ∪_{0≤t≤T} Σ_t be a tubular region obtained by piling up open subsets Σ_t of t = const hypersurfaces. This region is enclosed by the initial surface Σ_0, the final surface Σ_T and the boundary surface \({\mathcal T}: = {\cup _{0 \leq t \leq T}}\partial {\Sigma _t}\), which is assumed to be smooth. Integrating Eq. (3.100) over Ω_T and using Gauss’ theorem, one obtains

$$\int\limits_{{\Sigma _T}} {{J^0}(t,x){d^n}x =} \int\limits_{{\Sigma _0}} {{J^0}(t,x){d^n}x -} \int\limits_{\mathcal T} {{e_\mu}{J^\mu}(t,x)dS +} \int\limits_{{\Omega _T}} {u{{(t,x)}^\ast}K(t,x)u(t,x)dt{d^n}x} ,$$

where e_µ is the unit outward normal covector to \({\mathcal T}\) and dS the volume element on that surface. Defining the “energy” contained in the surface Σ_t by

$$E({\Sigma _t}): = \int\limits_{{\Sigma _t}} {{J^0}} (t,x){d^n}x = \int\limits_{{\Sigma _t}} u {(t,x)^\ast}H(t,x)u(t,x){d^n}x$$

and assuming for the moment that the “flux” integral over \({\mathcal T}\) is positive or zero, one obtains the estimate

$$\begin{array}{*{20}c} {E({\Sigma _T}) \leq E({\Sigma _0}) + \int\limits_0^T {\left({\int\limits_{{\Sigma _t}} u {{(t,x)}^{\ast}} K(t, x)u(t, x){d^n}x} \right)}\,\,\,dt} \\{\leq E({\Sigma _0}) + 2\alpha \int\limits_0^T E ({\Sigma _t})dt,\quad \quad \quad \quad \quad} \end{array}$$

where we have used the inequality (3.102) and the definition of E(Σ_t) in the last step. Defining the function \(h(T): = \int\nolimits_0^T {E({\Sigma _t})dt}\), this inequality can be rewritten as

$${d \over {dt}}\left({h(t){e^{- 2\alpha t}}} \right) \leq E({\Sigma _0}){e^{- 2\alpha t}},\quad \quad 0 \leq t \leq T,$$

which yields 2αh(T) ≤ E(Σ0)(e2αT − 1) upon integration. This, together with (3.105), gives

$$E({\Sigma _t}) \leq {e^{2\alpha t}}E({\Sigma _0}),\quad \quad 0 \leq t \leq T,$$

which bounds the energy at any time t ∈ [0, T] in terms of the initial energy.
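The resulting bound can be observed numerically. The following sketch (model, parameters and grid are our own illustrative choices, not taken from the text) solves the symmetric hyperbolic model u_t = u_x + bu exactly in Fourier space on a periodic grid; here H = I and K = 2bI, so α = b, and the estimate E(Σ_t) ≤ e^{2αt}E(Σ_0) is sharp:

```python
import numpy as np

# Energy estimate check for the model u_t = u_x + b u (H = I, K = 2bI,
# alpha = b), solved exactly in Fourier space on a periodic grid.
# Model and parameter values are illustrative choices.
N, b, t = 256, 0.3, 1.5
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers on [0, 2*pi)

f = np.exp(np.sin(x))                   # smooth periodic initial data
u = np.fft.ifft(np.exp((1j * k + b) * t) * np.fft.fft(f)).real

dx = 2 * np.pi / N
E0 = np.sum(f ** 2) * dx                # discrete L2 energies
Et = np.sum(u ** 2) * dx

# E(t) <= e^{2 alpha t} E(0) with alpha = b; the zero-order term b*u
# makes the bound sharp for this model.
assert Et <= np.exp(2 * b * t) * E0 * (1 + 1e-9)
print(Et / (np.exp(2 * b * t) * E0))    # close to 1: the bound is saturated
```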

In order to analyze the conditions under which the flux integral is positive or zero, we examine the sign of the integrand e_µ Jµ(t, x). Decomposing e_µ dxµ = N[a dt + s1dx1 + … + s_n dxn], where s = (s1, …, s_n) is a unit vector and N > 0 a positive normalization constant, we have

$${e_\mu}{J^\mu}(t,\,x) = N(t,\,x)u{(t,\,x)^{\ast}}[a(t,\,x)H(t,\,x) - H(t,\,x){P_0}(t,\,x,\,s)]u(t,\,x),$$

where \({P_0}(t,x,s) = \sum\limits_{j = 1}^n {{A^j}(t,x){s_j}}\) is the principal symbol in the direction of the unit vector s. This is guaranteed to be nonnegative if the boundary surface \({\mathcal T}\) is such that a(t, x) is greater than or equal to all the eigenvalues of the boundary matrix P0(t, x, s), for each \((t,x) \in {\mathcal T}\). This is equivalent to the condition

$$a(t,x) \geq {\sup}_{u \in {\mathbb {C}}^{m},\,u \neq 0} {{{u^{\ast}}H(t,\,x){P_0}(t,\,x,\,s)u} \over {{u^{\ast}}H(t,\,x)u}}\quad \quad {\rm{for all}}(t,\,x) \in {\mathcal T}.$$

Since H(t, x)P0(t, x, s) is Hermitian, the supremum is equal to the maximum eigenvalue of P0(t, x, s). Therefore, condition (3.109) is equivalent to the requirement that a(t, x) be greater than or equal to the maximum characteristic speed in the direction of the unit outward normal s.

With these arguments, we arrive at the following conclusions and remarks:

  • Finite speed of propagation. Let p0 = (t0, x0) ∈ Ω be a given event, and set

    $$v({t_0}): = \sup \left\{{{{{u^\ast}H(t,\,x){P_0}(t,\,x,\,s)u} \over {{u^\ast}H(t,\,x)u}}:0 \leq t \leq {t_0},\,x \in {{\mathbb R}^n},\,s \in {S^{n - 1}},\,u \in {{\mathbb C}^m},\,u \neq 0} \right\}.$$

    Define the past cone at p0 as

    $${C^ -}({p_0}): = \{(t,\,x) \in \Omega :\vert x - {x_0}\vert\,\leq v({t_0})({t_0} - t)\} .$$

    The unit outward normal to its boundary is e_µ dxµ = N[v(t0)dt + (x − x0) · dx/|x − x0|], which satisfies the condition (3.109). It follows from the estimate (3.107) applied to the domain Ω_T = C−(p0) that the solution is zero on C−(p0) if the initial data is zero on the intersection of the cone C−(p0) with the initial surface t = 0. In other words, a perturbation in the initial data outside the ball |x − x0| ≤ v(t0)t0 does not alter the solution inside the cone C−(p0). Using this argument, it also follows that if ƒ has compact support, the corresponding solution u(t, ·) also has compact support for all t > 0.

  • Continuous dependence on the initial data. Let \(f \in C_0^\infty ({{\mathbb R}^n})\) be smooth initial data with compact support. As we have seen above, the corresponding smooth solution u(t, ·) also has compact support for each t ≥ 0. Therefore, applying the estimate (3.107) to the case Σ_t := {t} × ℝn, the boundary integral vanishes and we obtain

    $$E({\Sigma _t}) \leq {e^{2\alpha t}}E({\Sigma _0}),\quad \quad t \geq 0.$$

    In view of the definition of E(Σ_t), see Eq. (3.104), and the properties (3.81) of the symmetrizer, it follows that

    $$\Vert u(t, \cdot)\Vert \, \leq M{e^{\alpha t}}\Vert f \Vert ,\quad \quad t \geq 0,$$

    which is of the required form; see Definition 3. In particular, we have uniqueness and continuous dependence on the initial data.

  • The statements about finite speed of propagation and continuous dependence on the data can easily be generalized to the case of a first-order symmetric hyperbolic inhomogeneous equation u_t = P(t, x, ∂/∂x)u + F(t, x), with F : Ω → ℂm a bounded, C^∞-function with bounded derivatives. In this case, the inequality (3.113) is replaced by

    $$\Vert u(t, \cdot)\Vert \, \leq \,M{e^{\alpha t}}\left[ {\Vert f \Vert + \int\limits_0^t {{e^{- \alpha s}}}\Vert F(s, \cdot)\Vert ds} \right],\quad \quad t \geq 0.$$
  • If the boundary surface \({\mathcal T}\) does not satisfy the condition (3.109) for the boundary integral to be positive, then suitable boundary conditions need to be specified in order to control the sign of this term. This will be discussed in Section 5.2.

  • Although different techniques have to be used to prove them, very similar results hold for strongly hyperbolic systems [353].

  • For definitions of hyperbolicity of a geometric PDE on a manifold, which do not require a 3+1 decomposition of spacetime, see, for instance, [205, 353], for first-order systems and [47] for second-order ones.

Example 22. We have seen that for the Klein-Gordon equation propagating on a globally-hyperbolic spacetime, the characteristic speeds are equal to the speed of light. Therefore, in the case of a constant metric (i.e., Minkowski space), the past cone C−(p0) defined in Eq. (3.111) coincides with the past light cone at the event p0. A slight refinement of the above argument shows that the statement remains true for a Klein-Gordon field propagating on any globally-hyperbolic spacetime.

Example 23. In Example 21 we have seen that the characteristic speeds of the system given in Example 15 are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, where αβ > 0 is assumed for strong hyperbolicity. Therefore, the past cone C−(p0) corresponds to the past light cone provided that 0 < αβ ≤ 1. For αβ > 1, the formulation has superluminal constraint-violating modes, and an initial perturbation emanating from a region outside the past light cone at p0 could affect the solution at p0. In this case, the past light cone at p0 is a proper subset of C−(p0).
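The finite-speed-of-propagation statement can be tested on the simplest symmetric hyperbolic equation. In the sketch below (grid, scheme and perturbation are our own illustrative choices), the advection equation u_t = −u_x is discretized with an upwind scheme whose numerical domain of dependence contains the continuum one; perturbing the initial data well outside the past cone of an event p0 leaves the discrete solution at p0 untouched:

```python
import numpy as np

# Domain-of-dependence check for u_t = -u_x (exact solution u = f(x - t),
# unit characteristic speed) with a first-order upwind scheme.
# Grid sizes and the perturbation are illustrative choices.
N, dx = 400, 0.05                      # periodic grid on [0, 20)
lam = 0.8                              # CFL ratio dt/dx <= 1
dt = lam * dx
x = np.arange(N) * dx
t0, x0 = 2.0, 10.0                     # the event p0 = (t0, x0)
steps = int(round(t0 / dt))

def evolve(f):
    u = f.copy()
    for _ in range(steps):             # upwind: u_j += -lam (u_j - u_{j-1})
        u = u - lam * (u - np.roll(u, 1))
    return u

f = np.exp(-((x - x0) ** 2))           # smooth bump centered at x0
g = f.copy()
g[x < x0 - 2 * t0] += 1.0              # perturbation outside the past cone

j0 = int(round(x0 / dx))
diff = abs(evolve(f)[j0] - evolve(g)[j0])
print(diff)                            # exactly 0.0: p0 is causally disconnected
```

Since the scheme is linear and only reaches one grid cell to the left per step, its numerical cone has slope 1/λ = 1.25; data changed farther away than that cannot reach p0, mirroring the continuum argument.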

Quasilinear equations

Next, we generalize the theory one more step and consider evolution systems, which are described by quasilinear partial differential equations, that is, by nonlinear partial differential equations, which are linear in their highest-order derivatives. This already covers most of the interesting physical systems, including the Yang-Mills and the Einstein equations. Restricting ourselves to the first-order case, such equations have the form

$${u_t} = \sum\limits_{j = 1}^n {{A^j}} (t,\,x,\,u){\partial \over {\partial {x^j}}}u + F(t,\,x,\,u),\quad \quad 0 \leq t \leq T,\,\quad x \in {{\mathbb R}^n},$$

where all the coefficients of the complex m×m matrices A1(t, x, u), …, An(t, x, u) and the nonlinear source term F(t, x, u) ∈ ℂm belong to the class \(C_b^\infty ([0,T] \times {{\rm{\mathbb R}}^n} \times {{\rm{\mathbb C}}^m})\) of bounded, C^∞-functions with bounded derivatives. Compared to the linear case, there are two new features the solutions may exhibit:

  • The nonlinear term F(t, x, u) may induce blowup of the solutions in finite time. This is already the case for the simple example where m = 1, all the matrices Aj vanish identically and F(t, x, u) = u², in which case Eq. (3.115) reduces to u_t = u². In the context of Einstein’s equations such a blowup is expected when a curvature singularity forms, or it could also occur in the presence of a coordinate singularity due to a “bad” gauge condition.

  • In contrast to the linear case, the matrix functions Aj in front of the derivative operator now depend pointwise on the state vector itself, which implies, in particular, that the characteristic speeds and fields depend on u. This can lead to the formation of shocks where characteristics cross each other, as in the simple example of Burgers’ equation u_t = uu_x, corresponding to the case m = n = 1, A1(t, x, u) = u and F(t, x, u) = 0. In general, shocks may form when the system is genuinely nonlinear rather than linearly degenerate [250]. The Einstein vacuum equations, on the other hand, can be written in linearly degenerate form (see, for example, [6, 7, 348, 8]) and are therefore expected to be free of physical shocks.

For these reasons, one cannot expect global existence of smooth solutions from smooth initial data with compact support in general, and the best one can hope for is existence of a smooth solution on some finite time interval [0, T], where T might depend on the initial data.
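The blowup in the first item is explicit: for u_t = u² with spatially constant data u(0) = u0 > 0, the solution u(t) = u0/(1 − u0t) leaves every bound as t → 1/u0. A short sketch (forward Euler, with step counts of our own choosing) illustrates this:

```python
# Finite-time blowup for the scalar case u_t = u^2 mentioned above:
# with u(0) = u0 > 0 the exact solution is u(t) = u0 / (1 - u0 t),
# which blows up at t* = 1/u0. Forward Euler is an illustrative sketch.
def exact(u0, t):
    return u0 / (1.0 - u0 * t)

def euler(u0, t_end, n):
    u, dt = u0, t_end / n
    for _ in range(n):
        u += dt * u * u                # u_{n+1} = u_n + dt u_n^2
    return u

u0 = 1.0                               # blowup time t* = 1
print(exact(u0, 0.5))                  # 2.0
print(euler(u0, 0.9, 200000))          # close to exact(u0, 0.9) = 10, growing fast
```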

Under such restrictions, it is possible to prove well-posedness of the Cauchy problem. The idea is to linearize the problem and to apply Banach’s fixed-point theorem. This is discussed next.

The principle of linearization

Suppose u(0)(t, x) is a C (reference) solution of Eq. (3.115), corresponding to initial data ƒ(x) = u(0)(0, x). Assuming this solution to be uniquely determined by the initial data ƒ, we may ask if a unique solution u also exists for the perturbed problem

$${u_t}(t,{\mkern 1mu} x) = \sum\limits_{j = 1}^n {{A^j}} (t,{\mkern 1mu} x,{\mkern 1mu} u){\partial \over {\partial {x^j}}}u(t,{\mkern 1mu} x) + F(t,{\mkern 1mu} x,{\mkern 1mu} u) + \delta F(t,{\mkern 1mu} x),\;\;x \in {{\mathbb R^n}},\quad 0 \leq t \leq T,$$
$$u(0,\,x) = f(x) + \delta f(x),\quad x \in {{\mathbb R}^n},$$

where the perturbations δF(t, x) and δƒ(x) belong to the class of bounded, C^∞-functions with bounded derivatives. This leads to the following definition:

Definition 5. Consider the nonlinear Cauchy problem given by Eq. (3.115) and prescribed initial data for u at t = 0. Let u(0) be a C^∞-solution to this problem, which is uniquely determined by its initial data ƒ. Then, the problem is called well posed at u(0), if there are normed vector spaces X, Y, and Z and constants K > 0, ε > 0 such that for all sufficiently-smooth perturbations δƒ and δF lying in Y and Z, respectively, with

$$\Vert \delta f\Vert_{Y} + \Vert \delta F\Vert_{Z} < \varepsilon ,$$

the perturbed problem ( 3.116 , 3.117 ) is also uniquely solvable and the corresponding solution u satisfies uu(0)X and the estimate

$$\Vert u - {u^{(0)}}\Vert_{X} \leq K\left(\Vert {\delta f \Vert_{Y} + \Vert \delta F \Vert_{Z}} \right).$$

Here, the norms X and Y appearing on both sides of Eq. (3.119) are different from each other because ‖u − u(0)‖X controls the function u − u(0) over the spacetime region [0, T] × ℝn while ‖δƒ‖Y is a norm controlling the function δƒ on ℝn.

If the problem is well posed at u(0), we may consider a one-parameter curve ƒ ε of initial data lying in \(C_0^\infty ({{\rm{\mathbb R}}^n})\) that goes through ƒ and assume that there is a corresponding solution u ε (t, x) for each small enough |ε|, which lies close to u(0) in the sense of inequality (3.119). Expanding

$${u_\varepsilon}(t,\,x) = {u^{(0)}}(t,\,x) + \varepsilon {v^{(1)}}(t,\,x) + {\varepsilon ^2}{v^{(2)}}(t,\,x) + \ldots$$

and plugging this into Eq. (3.115) we find, to first order in ε,

$$v_t^{(1)} = \sum\limits_{j = 1}^n {A_0^j} (t,\,x){\partial \over {\partial {x^j}}}{v^{(1)}} + {B_0}(t,\,x){v^{(1)}},$$


where

$$A_0^j(t,\,x) = {A^j}(t,\,x,\,{u^{(0)}}(t,\,x)),\quad \quad {B_0}(t,\,x) = \sum\limits_{j = 1}^n {{{\partial {A^j}} \over {\partial u}}(t,\,x,\,{u^{(0)}}(t,\,x)){{\partial {u^{(0)}}} \over {\partial {x^j}}}} + {{\partial F} \over {\partial u}}(t,\,x,\,{u^{(0)}}(t,\,x)).$$

Eq. (3.121) is a first-order linear equation with variable coefficients for the first variation, v(1), to which we can apply the theory described in Section 3.2. Therefore, it is reasonable to assume that the linearized problem is strongly hyperbolic for any smooth function u(0)(t, x). In particular, if we generalize the definitions of strong and symmetric hyperbolicity given in Definition 4 to the quasilinear case by requiring that the symmetrizer H(t, x, k, u) has coefficients in \(C_b^\infty (\Omega \times {S^{n - 1}} \times {{\mathbb C}^m})\), it follows that the linearized problem is well posed provided that the quasilinear problem is strongly or symmetric hyperbolic.

The linearization principle states that the converse is also true: the nonlinear problem is well posed at u(0) if all the linear problems, which are obtained by linearizing Eq. (3.115) at functions in a suitable neighborhood of u(0) are well posed. To prove that this principle holds, one sets up the following iteration. We define the sequence u(k) of functions by iteratively solving the linear problems

$$u_t^{(k + 1)} = \sum\limits_{j = 1}^n {{A^j}} (t,\,x,\,{u^{(k)}}){\partial \over {\partial {x^j}}}{u^{(k + 1)}} + F(t,\,x,\,{u^{(k)}}) + \delta F(t,x),\;\;x \in {{\mathbb R}^n},\quad 0 \leq t \leq T,$$
$${u^{(k + 1)}}(0,x) = f(x) + \delta f(x),\;\;x \in {{\mathbb R}^n},$$

for k = 0, 1, 2, … starting with the reference solution u(0). If the linearized problems are well posed in the sense of Definition 3 for functions lying in a neighborhood of u(0), one can solve each Cauchy problem (3.123, 3.124), at least for small enough time T_k. The key point, then, is to prove that T_k does not shrink to zero when k → ∞ and to show that the sequence u(k) of functions converges to a solution of the perturbed problem (3.116, 3.117). This is, of course, a nontrivial task, which requires controlling u(k) and its derivatives in an appropriate way. For particular examples where this program is carried through, see [259]. For general results on quasilinear symmetric hyperbolic systems, see [251, 164, 412, 51].

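For the simplest case m = 1 with all Aj = 0 and δF = δƒ = 0, the iteration (3.123, 3.124) reduces to a Picard iteration in which the nonlinearity is evaluated at the previous iterate. The sketch below (model problem, grid and iteration count are our own choices) runs it for u_t = u², u(0) = 1 and observes convergence to the exact solution 1/(1 − t) on [0, 1/2]:

```python
import numpy as np

# Picard-type iteration behind the linearization principle, for the scalar
# model u_t = u^2, u(0) = 1 (all matrices A^j = 0): each stage integrates
# the nonlinearity evaluated at the previous iterate,
#     u^{(k+1)}(t) = 1 + int_0^t [u^{(k)}(s)]^2 ds ,
# and the iterates converge to the exact solution 1/(1 - t) on [0, 1/2].
t = np.linspace(0.0, 0.5, 2001)
dt = t[1] - t[0]

def cumtrapz(y):
    # cumulative trapezoidal integral, with value 0 at t = 0
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

u = np.ones_like(t)                 # iterate u^{(0)} == 1 (the initial data)
for _ in range(40):
    u = 1.0 + cumtrapz(u ** 2)

exact = 1.0 / (1.0 - t)
print(np.max(np.abs(u - exact)))    # small: the iteration has converged
```

The iterates increase monotonically and stay below the exact solution, so they converge on any interval strictly inside [0, 1); on intervals reaching the blowup time t = 1 the iteration, like the solution, fails.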
Abstract evolution operators

A general framework for treating evolution problems is based on methods from functional analysis. Here, one considers a linear operator A : D(A) ⊂ XX with dense domain, \(\overline {D(A)} = X\), in a Banach space X and asks under which conditions the Cauchy problem

$${u_t}(t) = Au(t),\quad \quad t \geq 0,$$
$$u{\rm{(0) =}}f,$$

possesses a unique solution curve, i.e., a continuously differentiable map u : [0, ∞) → D(A) ⊂ X satisfying Eqs. (3.125, 3.126) for each ƒ ∈ D(A). Under a mild assumption on A, this turns out to be the case if and only if the operator A is the infinitesimal generator of a strongly continuous semigroup P(t), that is, a map \(P:[0,\infty) \rightarrow {\mathcal L}(X)\), with \({\mathcal L}(X)\) denoting the space of bounded, linear operators on X, with the properties that

  1. (i)

    P(0) = I,

  2. (ii)

    P(t + s) = P(t)P(s) for all t, s ≥ 0,

  3. (iii)

    \(\underset {t \rightarrow 0} {\lim} P(t)u = u\) for all \(u \in X\),

  4. (iv)

    \(D(A) = \left\{{u \in X:\underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u]\;{\rm{exists}}\;{\rm{in}}\;X} \right\}\) and \(Au = \underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u],\;u \in D(A)\).

In this case, the solution curve of the Cauchy problem (3.125, 3.126) is given by u(t) = P(t)ƒ, t ≥ 0, ƒ ∈ D(A). One can show [327, 51] that for every strongly continuous semigroup P(t) there exist constants K ≥ 1 and α ∈ ℝ such that

$$\Vert P(t)\Vert \leq K{e^{\alpha t}},\quad \quad t \geq 0,$$

which implies that ‖u(t)‖ ≤ Keαtƒ‖ for all ƒ ∈ D(A) and all t ≥ 0. Therefore, the semigroup P(t) gives existence, uniqueness and continuous dependence on the initial data.

There are several results giving necessary and sufficient conditions for the linear operator A to generate a strongly continuous semigroup; see, for instance, [327, 51]. One useful result, which we formulate for Hilbert spaces, is the following:

Theorem 4 (Lumer-Phillips). Let X be a complex Hilbert space with scalar product (·, ·), and let A : D(A) ⊂ XX be a linear operator. Let α ∈ ℝ. Then, the following statements are equivalent:

  1. (i)

    A is the infinitesimal generator of a strongly continuous semigroup P(t) such thatP(t)‖ ≤ eαt for all t ≥ 0.

  2. (ii)

    A − αI is dissipative, that is, Re(u, Au − αu) ≤ 0 for all u ∈ D(A), and the range of A − λI is equal to X for some λ > α.

Example 24. As a simple example consider the Hilbert space X = L2(ℝn) with the linear operator A : D(A) ⊂ XX defined by

$$\begin{array}{*{20}c} {D(A): = \{u \in X:(1 + \vert k\vert ^{2}){\mathcal F}u \in {L^2}({{\mathbb R}^n})\} ,} \\{Au: = \Delta u = - {{\mathcal F}^{- 1}}(\vert k\vert ^{2}{\mathcal F}u),\quad \quad u \in D(A),} \end{array}$$

where Ƒ denotes the Fourier-Plancherel operator; see Section 2. Using Parseval’s identity, we find

$${\rm{Re}}(u,\,Au) = {\rm{Re}}({\mathcal F}u, - \vert k \vert ^{2}{\mathcal F}u) = -\Vert \vert k\vert {\mathcal F}u\Vert^{2} \leq 0,$$

hence A is dissipative. Furthermore, let vL2(ℝn), then

$$u: = {{\mathcal F}^{- 1}}\left({{{{\mathcal F}v} \over {1 + \vert k\vert ^{2}}}} \right)$$

defines an element in D(A) satisfying (IA)u = u − Δu = v. Therefore, the range of AI is equal to X, and Theorem 4 implies that A = Δ generates a strongly continuous semigroup P(t) on X such that ‖P(t)‖ ≤ 1 for all t ≥ 0. The curves u(t) := P(t)ƒ, t ≥ 0, ƒL2(ℝn) are the weak solutions to the heat equation on ℝn; see Section 3.1.2.
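Example 24 can be mimicked on a periodic grid, where P(t) becomes the Fourier multiplier e^{−|k|²t}. The sketch below (grid size and test data are our own choices) checks the semigroup properties (i)-(ii) and the contraction bound ‖P(t)‖ ≤ 1:

```python
import numpy as np

# Discrete sketch of the heat semigroup of Example 24: on a periodic grid
# P(t) acts as the Fourier multiplier exp(-|k|^2 t). We check P(0) = I,
# the semigroup property P(t+s) = P(t)P(s), and the contraction bound.
N = 128
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers on [0, 2*pi)

def P(t, f):
    return np.fft.ifft(np.exp(-k ** 2 * t) * np.fft.fft(f)).real

x = 2 * np.pi * np.arange(N) / N
f = np.sign(np.sin(3 * x))               # rough (discontinuous) test data

assert np.allclose(P(0.0, f), f)                            # P(0) = I
assert np.allclose(P(0.3, f), P(0.1, P(0.2, f)))            # P(t+s) = P(t)P(s)
assert np.linalg.norm(P(0.5, f)) <= np.linalg.norm(f)       # ||P(t)f|| <= ||f||
print("semigroup properties verified")
```

Note that the rough data f is smoothed instantly, reflecting the fact that weak solutions of the heat equation become smooth for t > 0.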

In general, the requirement for A − αI to be dissipative is equivalent to finding an energy estimate for the squared norm E := ‖u‖² of u. Indeed, setting u(t) := P(t)ƒ and using u_t = AP(t)ƒ, we find

$${d \over {dt}}E(t) = {d \over {dt}}\Vert u(t)\Vert ^{2} = 2{\rm{Re}}(u(t),\,Au(t)) \leq 2\alpha \Vert u(t)\Vert ^{2} = 2\alpha E(t)$$

for all t ≥ 0 and ƒD(A), which yields the estimate

$$\Vert u(t) \Vert \leq {e^{\alpha t}}\Vert f\Vert ,\quad \quad t \geq 0,$$

for all ƒD(A). Given the dissipativity of AαI, the second requirement, that the range of AλI is X for some λ > α, is equivalent to demanding that the linear operator AλI : D(A) → X be invertible. Therefore, proving this condition requires solving the linear equation

$$Au - \lambda u = v$$

for given vX. This condition is important for the existence of solutions, and shows that for general evolution problems, requiring an energy estimate is not sufficient. This statement is rather obvious, because given that AαI is dissipative on D(A), one could just make D(A) smaller, and still have an energy estimate. However, if D(A) is too small, the Cauchy problem is over-determined and a solution might not exist. We will encounter explicit examples of this phenomenon in Section 5, when discussing boundary conditions.

Finding the correct domain D(A) for the infinitesimal generator A is not always a trivial task, especially for equations involving singular coefficients. Fortunately, there are weaker versions of the Lumer-Phillips theorem, which only require checking conditions on a subspace DD(A), which is dense in X. It is also possible to formulate the Lumer-Phillips theorem on Banach spaces. See [327, 152, 51] for more details.

The semigroup theory can be generalized to time-dependent operators A(t), and to quasilinear equations where A(u) depends on the solution u itself. We refer the reader to [51] for these generalizations and for applications to examples from mathematical physics including general relativity. The theory of strongly continuous semigroups has also been used for formulating well-posed initial-boundary value formulations for the Maxwell equations [354] and the linearized Einstein equations [309] with elliptic gauge conditions.

Initial-Value Formulations for Einstein’s Equations

In this section, we apply the theory discussed in Section 3 to well-posed Cauchy formulations of Einstein’s vacuum equations. The first such formulation dates back to the 1950s [169] and will be discussed in Section 4.1. Since then, there has been a plethora of new formulations, which distinguish themselves by the choice of variables (metric vs. tetrad, Christoffel symbols vs. connection coefficients, inclusion or not of curvature components as independent variables, etc.), the choice of gauges and the use of the constraint equations in order to modify the evolution equations off the constraint surface. Many of these new formulations have been motivated by numerical calculations, which try to solve a given physical problem in a stable way.

By far the most successful formulations for numerically-evolving compact-object binaries have been the harmonic system, which is based on the original work of [169], and that of Baumgarte-Shapiro-Shibata-Nakamura (BSSN) [390, 44]. For this reason, we review these two formulations in detail in Sections 4.1 and 4.3, respectively. In Section 4.2 we also review the Arnowitt-Deser-Misner (ADM) formulation [30], which is based on a Hamiltonian approach to general relativity and serves as a starting point for many hyperbolic systems, including the BSSN one. A list of references for hyperbolic reductions of Einstein’s equations not discussed here is given in Section 4.4.

The harmonic formulation

We start by discussing the harmonic formulation of Einstein’s field equations. Like in the potential formulation of electromagnetism, where the Lorentz gauge ∇ µ Aµ = 0 allows one to cast Maxwell’s equations into a system of wave equations, it was observed early in [134, 269] that Einstein’s equations reduce to a system of wave equations when harmonic coordinates,

$${\nabla ^\mu}{\nabla _\mu}{x^\nu} = 0,\qquad \nu = 0,1,2,3,$$

are used. There are many straightforward generalizations of these gauge conditions; one of them is to replace the right-hand side of Eq. (4.1) by given source functions Hν [178, 182, 202].

In order to keep general covariance, we follow [232] and choose a fixed smooth background metric \({\overset \circ g _{\alpha \beta}}\) with corresponding Levi-Civita connection \(\overset \circ \nabla\), Christoffel symbols \(\overset \circ \Gamma {\,^\mu}_{\alpha \beta}\), and curvature tensor \(\overset \circ R {\,^\alpha}_{\beta \mu \nu}\). Then, the generalized harmonic gauge condition can be rewritten asFootnote 11

$${C^\mu}: = {g^{\alpha \beta}}\left({{\Gamma ^\mu}_{\alpha \beta} - {{\overset \circ \Gamma}\,^\mu}_{\alpha \beta}} \right) + {H^\mu} = 0.$$

In the particular case where Hµ = 0 and where the background metric is Minkowski in standard Cartesian coordinates, \(\overset \circ \Gamma {\,^\mu}_{\alpha \beta}\) vanishes, and the condition Cµ = 0 reduces to the harmonic coordinate expression (4.1). However, unlike condition (4.1), Eq. (4.3) yields a coordinate-independent condition for any given vector field Hµ on spacetime since the difference \({C^\mu}_{\alpha \beta}: = {\Gamma ^\mu}_{\alpha \beta} - \overset \circ \Gamma {\,^\mu}_{\alpha \beta}\) between two connections is a tensor field. In terms of the difference, \(\,{h_{\alpha \beta}}: = {g_{\alpha \beta}} - {\overset \circ g _{\alpha \beta}}\), between the dynamical and background metric, this tensor field can be expressed as

$${C^\mu}_{\alpha \beta} = {1 \over 2}{g^{\mu \nu}}\left({{{\overset \circ \nabla}_\alpha}{h_{\beta \nu}} + {{\overset \circ \nabla}_\beta}{h_{\alpha \nu}} - {{\overset \circ \nabla}_\nu}{h_{\alpha \beta}}} \right).$$

Of course, the coordinate-independence is now traded for the introduction of a background metric \({\overset \circ g _{\alpha \beta}}\), and the question remains of how to choose \({\overset \circ g _{\alpha \beta}}\) and the vector field Hµ. A simple possibility is to choose Hµ = 0 and \({\overset \circ g _{\alpha \beta}}\) equal to the initial data for the metric, such that h µν = 0 initially.

Einstein’s field equations in the gauge Cµ = 0 are equivalent to the wave system

$$\begin{array}{*{20}c} {{g^{\mu \nu}}{{\overset \circ \nabla}_\mu}{{\overset \circ \nabla}_\nu}{h_{\alpha \beta}} = 2\,{g_{\sigma \tau}}{g^{\mu \nu}}{C^\sigma}_{\alpha \mu}{C^\tau}_{\beta \nu} + 4\,{C^\mu}_{\nu (\alpha}{g_{\beta)\sigma}}{C^\sigma}_{\mu \tau}{g^{\nu \tau}} - 2\,{g^{\mu \nu}}\,\overset \circ R {\,^\sigma}_{\mu \nu (\alpha}{g_{\beta)\sigma}}} \\ {+ 16\pi {G_N}\left({{T_{\alpha \beta}} - {1 \over 2}{g_{\alpha \beta}}{g^{\mu \nu}}{T_{\mu \nu}}} \right) - 2\,{\nabla _{(\alpha}}{H_{\beta)}}\,,} \\ \end{array}$$

where T αβ is the stress-energy tensor and G N is Newton’s constant. This system is subject to the harmonic constraint

$$0 = {C^\mu} = {g^{\mu \nu}}{g^{\alpha \beta}}\left({{{\overset \circ \nabla}_\alpha}{h_{\beta \nu}} - {1 \over 2}{{\overset \circ \nabla}_\nu}{h_{\alpha \beta}}} \right) + {H^\mu}.$$


For any given smooth stress-energy tensor T αβ , the equations (4.5) constitute a quasilinear system of ten coupled wave equations for the ten coefficients of the difference metric h αβ (or equivalently, for the ten components of the dynamical metric g αβ ) and, therefore, we can apply the results of Section 3 to formulate a (local in time) well-posed Cauchy problem for the wave system (4.5) with initial conditions

$${h_{\alpha \beta}}(0,x) = h_{\alpha \beta}^{(0)}(x),\qquad {{\partial {h_{\alpha \beta}}} \over {\partial t}}(0,x) = k_{\alpha \beta}^{(0)}(x),$$

where \(h_{\alpha \beta}^{(0)}\) and \(k_{\alpha \beta}^{(0)}\) are two sufficiently-smooth symmetric tensor fields defined on the initial slice t = 0 satisfying the requirement that \({g_{\alpha \beta}}(0,x) = {\overset \circ g _{\alpha \beta}}(0,x) + h_{\alpha \beta}^{(0)}\) has Lorentz signature such that g00(0, x) < 0 and the induced metric g ij (0, x), i, j = 1, 2, 3, on t = 0 is positive definiteFootnote 12. For detailed well-posed Cauchy formulations we refer the reader to the original work in [169]; see also [85], [164], and [246], which presents an improvement on the results in the previous references due to weaker smoothness assumptions on the initial data.

An alternative way of establishing the hyperbolicity of the system (4.5) is to cast it into first-order symmetric hyperbolic form [164, 18, 286]. There are several ways of constructing such a system; the simplest one is obtained [164] by introducing the first partial derivatives of g αβ as new variables,

$${k_{\alpha \beta}}: = {{\partial {g_{\alpha \beta}}} \over {\partial t}},\qquad {D_{j\alpha \beta}}: = {{\partial {g_{\alpha \beta}}} \over {\partial {x^j}}},\qquad j = 1,2,3.$$

Then, the second-order wave system (4.5) can be rewritten in the form

$${{\partial {g_{\alpha \beta}}} \over {\partial t}} = {k_{\alpha \beta}},$$
$${{\partial {D_{j\alpha \beta}}} \over {\partial t}} = {{\partial {k_{\alpha \beta}}} \over {\partial {x^j}}},$$
$${{\partial {k_{\alpha \beta}}} \over {\partial t}} = - 2{{{g^{0j}}} \over {{g^{00}}}}{{\partial {k_{\alpha \beta}}} \over {\partial {x^j}}} - {{{g^{ij}}} \over {{g^{00}}}}{{\partial {D_{i\alpha \beta}}} \over {\partial {x^j}}} + {\rm{l}}{\rm{.o}}.,$$

where l.o. are lower-order terms not depending on any derivatives of the state vector u = (g αβ , k αβ , D jαβ ). The system of equations (4.9, 4.10, 4.11) constitutes a quasilinear first-order symmetric hyperbolic system for u with symmetrizer given by the quadratic form

$${u^{\ast}}H(u)u = \sum\limits_{\alpha ,\beta = 0}^3 {\left({g_{\alpha \beta}^{\ast}{g_{\alpha \beta}} + \vert {g^{00}}\vert k_{\alpha \beta}^{\ast}{k_{\alpha \beta}} + {g^{ij}}D_{i\alpha \beta}^{\ast}{D_{j\alpha \beta}}} \right)} .$$

However, it should be noted that the symmetrizer is only positive definite if gij is; that is, only if the time evolution vector field ∂ t is time-like. In many situations, this requirement might be too restrictive. Inside a Schwarzschild black hole, for example, the asymptotically time-like Killing field ∂ t is space-like.
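The symmetry of HA j claimed above can be checked numerically. The following sketch (with assumed values for lapse and shift, chosen so that g00 < 0 and gij is positive definite) builds the principal part of the system (4.9, 4.10, 4.11) for a single metric component and verifies that the quadratic form above symmetrizes it:

```python
import numpy as np

# State ordering for one metric component: u = (g, k, D_1, D_2, D_3).
# Assumed inverse metric: lapse alpha = 1, shift beta, flat spatial part,
# chosen such that g^{00} < 0 and g^{ij} is positive definite.
alpha, beta = 1.0, np.array([0.2, 0.1, 0.0])
g00 = -1.0 / alpha**2
g0i = beta / alpha**2
gij = np.eye(3) - np.outer(beta, beta) / alpha**2

# Symmetrizer H = diag(1, |g^{00}|, g^{ij}) from the quadratic form above.
H = np.zeros((5, 5))
H[0, 0] = 1.0
H[1, 1] = -g00
H[2:, 2:] = gij

sym_errors = []
for j in range(3):                         # principal matrices A^j
    Aj = np.zeros((5, 5))
    Aj[1, 1] = -2.0 * g0i[j] / g00         # dt k = -2 g^{0j}/g^{00} dj k ...
    Aj[1, 2:] = -gij[:, j] / g00           #        - g^{ij}/g^{00} dj D_i
    Aj[2 + j, 1] = 1.0                     # dt D_j = dj k
    HA = H @ Aj
    sym_errors.append(np.abs(HA - HA.T).max())
```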

However, as indicated above, the first-order symmetric hyperbolic reduction (4.9, 4.10, 4.11) is not unique. A different reduction is based on the variables ũ = (h αβ , Π αβ , Φ jαβ ), where \({\Pi _{\alpha \beta}}: = {n^\mu}{\overset \circ \nabla _\mu}{h_{\alpha \beta}}\) is the derivative of g αβ in the direction of the future-directed unit normal nµ to the time-slices t = const, and \({\Phi _{j\alpha \beta}}: = {\overset \circ \nabla _j}{h_{\alpha \beta}}\). This yields a first-order system, which is symmetric hyperbolic as long as the t = const slices are space-like, independent of whether or not ∂ t is time-like [18, 286].

Constraint propagation and damping

The hyperbolicity results described above guarantee that unique solutions of the nonlinear wave system (4.5) exist, at least for short times, and that they depend continuously on the initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\). However, in order to obtain a solution of Einstein’s field equations one has to ensure that the harmonic constraint (4.3) is identically satisfied.

The system (4.5) is equivalent to the modified Einstein equations

$${R^{\alpha \beta}} + {\nabla ^{(\alpha}}{C^{\beta)}} = 8\pi {G_N}\left({{T^{\alpha \beta}} - {1 \over 2}{g^{\alpha \beta}}{g_{\mu \nu}}{T^{\mu \nu}}} \right),$$

where Rαβ denotes the Ricci tensor, and where Cµ = 0 if the harmonic constraint holds. From the twice contracted Bianchi identities 2∇ β Rαβ − ∇α(g µν Rµν) = 0 one obtains the following equation for the constraint variable Cα,

$${g^{\mu \nu}}{\nabla _\mu}{\nabla _\nu}{C^\alpha} + {R^\alpha}_\beta {C^\beta} = - 16\pi {G_N}{\nabla _\beta}{T^{\alpha \beta}}.$$

This system describes the propagation of constraint violations, which are present if Cα is nonzero. For this reason, we call it the constraint propagation system, or subsidiary system. Provided the stress-energy tensor is divergence free, ∇ β Tαβ = 0, this is a linear, second-order hyperbolic equation for Cα.Footnote 13 Therefore, it follows from the uniqueness properties of such hyperbolic problems that Cα = 0 provided the initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\) satisfies the initial constraints

$${C^\alpha}(0,x) = 0,\qquad {{\partial {C^\alpha}} \over {\partial t}}(0,x) = 0.$$

This turns out to be equivalent to solving Cα(0, x) = 0 plus the usual Hamiltonian and momentum constraints; see [169, 286]. Summarizing, specifying initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\) satisfying the constraints (4.15), the corresponding unique solution to the nonlinear wave system (4.5) yields a solution to the Einstein equations.

However, in numerical calculations, one cannot assume that the initial constraints (4.15) are satisfied exactly, due to truncation and roundoff errors. The propagation of these errors is described by the constraint propagation system (4.14), and hyperbolicity guarantees that for each fixed time t > 0 of existence, these errors converge to zero if the initial constraint violation converges to zero, which is usually the case when resolution is increased. On the other hand, due to limited computer resources, one cannot reach the limit of infinite resolution, and from a practical point of view one does not want the constraint errors to grow rapidly in time for fixed resolution. Therefore, one would like to design an evolution scheme in which the constraint violations are damped in time, such that the constraint hypersurface is an attractor set in phase space. A general method for damping constraint violations in the context of first-order symmetric hyperbolic formulations of Einstein’s field equations was given in [74]. This method was then adapted to the harmonic formulation in [224]. The procedure proposed in [224] consists of adding lower-order friction terms to Eq. (4.13), which damp the constraint violations. Explicitly, the modified system reads

$${R^{\alpha \beta}} + {\nabla ^{(\alpha}}{C^{\beta)}} - \kappa \left({{n^{(\alpha}}{C^{\beta)}} - {1 \over 2}(1 + \rho){g^{\alpha \beta}}{n_\mu}{C^\mu}} \right) = 8\pi {G_N}\left({{T^{\alpha \beta}} - {1 \over 2}{g^{\alpha \beta}}{g_{\mu \nu}}{T^{\mu \nu}}} \right),$$

with nµ the future-directed unit normal to the t = const surfaces, and κ and ρ real constants, where κ > 0 determines the timescale on which the constraint violations Cµ are damped.

With this modification the constraint propagation system reads

$${g^{\mu \nu}}{\nabla _\mu}{\nabla _\nu}{C^\alpha} + {R^\alpha}_\beta {C^\beta} - \kappa {\nabla _\beta}\left({2{n^{(\alpha}}{C^{\beta)}} + \rho {g^{\alpha \beta}}{n_\mu}{C^\mu}} \right) = - 16\pi {G_N}{\nabla _\beta}{T^{\alpha \beta}}.$$

A mode analysis for linear vacuum perturbations of the Minkowski metric reveals [224] that for κ > 0 and ρ > −1 all modes, except those that are constant in space, are damped. Numerical codes based on the modified system (4.16) or similar systems have been used in the context of binary black-hole evolutions [335, 336, 286, 384, 36, 403, 320], the head-on collision of boson stars [323] and the evolution of black strings in five-dimensional gravity [279], among other references.

For a discussion on possible effects due to nonlinearities in the constraint propagation system; see [185].
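The damping mechanism can be illustrated with a simple toy model, a wave equation with a lower-order friction term (a sketch, not the actual constraint propagation system above). For a Fourier mode C ∝ e^{ikx+st} one finds s² + κs + k² = 0, so every spatially-varying mode decays for κ > 0, while the constant-in-space mode retains an undamped root, in line with the mode analysis quoted above:

```python
import numpy as np

# Toy model: d_t^2 C = d_x^2 C - kappa d_t C.  A mode C ~ exp(i k x + s t)
# satisfies s^2 + kappa s + k^2 = 0.
def mode_rates(kappa, k):
    return np.roots([1.0, kappa, k**2])

kappa = 0.5
# Spatially-varying modes (k != 0) all decay for kappa > 0 ...
decay = [mode_rates(kappa, k).real.max() for k in (0.5, 1.0, 4.0)]
# ... while the constant-in-space mode (k = 0) keeps an undamped root s = 0.
const_mode = mode_rates(kappa, 0.0)
```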

Geometric issues

The results described so far guarantee the local-in-time unique existence of solutions to Einstein’s equations in harmonic coordinates, given a sufficiently-smooth initial data set (h(0), k(0)). However, since general relativity is a diffeomorphism invariant theory, some questions remain. The first issue is whether or not the harmonic gauge is sufficiently general such that any solution of the field equations can be obtained by this method, at least for short enough time. The answer is affirmative [169, 164]. Namely, let (M, g), M = (−ε, ε) × ℝ3, be a smooth spacetime satisfying Einstein’s field equations such that the initial surface t = 0 is spacelike with respect to g. Then, we can find a diffeomorphism ϕ : M → M in a neighborhood of the initial surface, which leaves it invariant and casts the metric into the harmonic gauge. For this, one solves the harmonic wave map equation (4.2) with initial data

$${\phi ^0}(0,x) = 0,\qquad {{\partial {\phi ^0}} \over {\partial t}}(0,x) = 1,\qquad {\phi ^i}(0,x) = {x^i},\qquad {{\partial {\phi ^i}} \over {\partial t}}(0,x) = 0{.}$$

Since equation (4.2) is a second-order hyperbolic one, a unique solution exists, at least on some sufficiently-small time interval (−ε′, ε′). Furthermore, choosing ε′ > 0 small enough, ϕ : (−ε′, ε′) × ℝ3 → M describes a diffeomorphism when restricted to its image. By construction, ḡ := (ϕ−1)*g satisfies the harmonic gauge condition (4.3).

The next issue is the question of geometric uniqueness. Let g(1) and g(2) be two solutions of Einstein’s equations with the same initial data on t = 0, i.e., \(g_{\alpha \beta}^{(1)}(0,x) = g_{\alpha \beta}^{(2)}(0,x),{\partial _t}g_{\alpha \beta}^{(1)}(0,x) = {\partial _t}g_{\alpha \beta}^{(2)}(0,x)\). Are these solutions related, at least for small time, by a diffeomorphism? Again, the answer is affirmative [169, 164] because one can transform both solutions to harmonic coordinates using the above diffeomorphism ϕ without changing their initial data. It then follows by the uniqueness property of the nonlinear wave system (4.5) that the transformed solutions must be identical, at least on some sufficiently-small time interval. Note that this geometric uniqueness property also implies that the solutions are, at least locally, independent of the background metric. For further results on geometric uniqueness involving only the first and second fundamental forms of the initial surface; see [127], where it is shown that every such initial-data set satisfying the Hamiltonian and momentum constraints possesses a unique maximal Cauchy development.

Finally, we mention that results about the nonlinear stability of Minkowski spacetime with respect to vacuum and vacuum-scalar perturbations have been established based on the harmonic system [283, 284], offering an alternative proof to the one of [129].

The ADM formulation

In the usual 3+1 decomposition of Einstein’s field equations (see, for example, [214], for a thorough discussion of it) one evolves the three metric and the extrinsic curvature (the first and second fundamental forms) relative to a foliation Σ t of spacetime by spacelike hypersurfaces. The motivation for this formulation stems from the Hamiltonian description of general relativity (see, for instance, Appendix E in [429]) where the “q” variables are the three metric γ ij and the associated canonical momenta πij (the “p” variables) are related to the extrinsic curvature K ij according to

$${\pi ^{ij}} = - \sqrt \gamma \left({{K^{ij}} - {\gamma ^{ij}}K} \right),$$

where γ = det(γ ij ) denotes the determinant of the three-metric and K = γijK ij the trace of the extrinsic curvature.
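The relation above between πij and Kij is easily inverted: contracting with γ ij gives π := γ ij πij = 2√γ K, and hence Kij = −(πij − ½γij π)/√γ. A quick numerical check (with a randomly generated positive-definite three-metric, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
gamma = B @ B.T + 3.0 * np.eye(3)      # random positive-definite three-metric
gamma_inv = np.linalg.inv(gamma)
sqrtg = np.sqrt(np.linalg.det(gamma))

K_up = rng.standard_normal((3, 3))
K_up = 0.5 * (K_up + K_up.T)           # symmetric K^{ij}
trK = np.einsum('ij,ij->', gamma, K_up)

pi = -sqrtg * (K_up - gamma_inv * trK)           # pi^{ij}
tr_pi = np.einsum('ij,ij->', gamma, pi)          # = 2 sqrt(gamma) K
K_rec = -(pi - 0.5 * gamma_inv * tr_pi) / sqrtg  # recovers K^{ij}
```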

In York’s formulation [444] of the 3+1 decomposed Einstein equations, the evolution equations are

$${\partial _0}{\gamma _{ij}} = - 2{K_{ij}},$$
$${\partial _0}{K_{ij}} = R_{ij}^{(3)} - {1 \over \alpha}{D_i}{D_j}\alpha + K{K_{ij}} - 2{K_i}^l{K_{lj}} - 8\pi {G_N}\left[ {{\sigma _{ij}} + {1 \over 2}{\gamma _{ij}}(\rho - \sigma)} \right].$$

Here, the operator ∂ 0 is defined as ∂ 0 := α−1(∂ t − £ β ) with α and βi denoting lapse and shift, respectively. It is equal to the Lie derivative along the future-directed unit normal n to the time slices when acting on covariant tensor fields orthogonal to n. Next, \(R_{ij}^{(3)}\) and D j are the Ricci tensor and covariant derivative operator belonging to the three metric γ ij , and ρ := nαnβT αβ and σ ij := T ij are the energy density and the stress tensor as measured by observers moving along the future-directed unit normal n to the time slices. Finally, σ := γijT ij denotes the trace of the stress tensor. The evolution system (4.20, 4.21) is subject to the Hamiltonian and momentum constraints,

$$H: = {1 \over 2}\left({{\gamma ^{ij}}R_{ij}^{(3)} + {K^2} - {K^{ij}}{K_{ij}}} \right) = 8\pi {G_N}\rho ,$$
$${M_i}: = {D^j}{K_{ij}} - {D_i}K = 8\pi {G_N}{j_i},$$

where j i := −nβT βi is the flux density.

Algebraic gauge conditions

One issue with the evolution equations (4.20, 4.21) is the principal part of the Ricci tensor belonging to the three-metric,

$$R_{ij}^{(3)} = {1 \over 2}{\gamma ^{kl}}\left({- {\partial _k}{\partial _l}{\gamma _{ij}} - {\partial _i}{\partial _j}{\gamma _{kl}} + {\partial _i}{\partial _k}{\gamma _{lj}} + {\partial _j}{\partial _k}{\gamma _{li}}} \right) + {\rm{l}}{\rm{.o}}{.},$$

which does not define a positive-definite operator. This is due to the fact that the linearized Ricci tensor is invariant with respect to infinitesimal coordinate transformations γ ij → γ ij + 2∇ (i ξ j) generated by a vector field ξ = ξi∂ i . This has the following implications for the evolution equations (4.20, 4.21), assuming for the moment that lapse and shift are fixed, a priori specified functions, in which case the system is equivalent to the second-order system \(\partial _0^2{\gamma _{ij}} = - 2R_{ij}^{(3)} + {\rm{l}}{\rm{.o}}.\) for the three metric. Linearizing and localizing as described in Section 3 one obtains a linear, constant coefficient problem of the form (3.56), which can be brought into first-order form via the reduction in Fourier space described in Section 3.1.5. The resulting first-order system has the form of Eq. (3.58) with the symbol

$$Q(ik) = i\vert k\vert \sum\limits_{j = 1}^n {\overset \circ \beta} {\,^j}{\hat k_j} + \vert k\vert \left({\begin{array}{*{20}c} 0 & I \\ {- \overset \circ \alpha {\,^2}R(\hat k)} & 0 \\ \end{array}} \right)\,,$$

where \(R(\hat k)\) is, up to a factor 2, the principal symbol of the Ricci operator,

$$R(\hat k){\gamma _{ij}} = \overset \circ \gamma {\,^{lm}}\,\left({{{\hat k}_l}{{\hat k}_m}{\gamma _{ij}} + {{\hat k}_i}{{\hat k}_j}{\gamma _{lm}} - {{\hat k}_i}{{\hat k}_l}{\gamma _{mj}} - {{\hat k}_j}{{\hat k}_l}{\gamma _{mi}}} \right).$$

Here, \(\overset \circ \alpha, \overset \circ \beta {\,^i}\) and \({\overset \circ \gamma _{ij}}\) refer to the frozen lapse, shift and three-metric, respectively. According to Theorem 2, the problem is well posed if and only if there is a uniformly positive and bounded symmetrizer \(h(\hat k)\) such that \(h(\hat k)R(\hat k)\) is symmetric and uniformly positive for \(\hat k \in {S^2}\). Although \(R(\hat k)\) is diagonalizable and its eigenvalues are not negative, some of them are zero since \(R(\hat k){\gamma _{ij}} = 0\) for γ ij of the form \({\gamma _{ij}} = 2{{\hat k}_{(i}}{\xi _{j)}}\) with an arbitrary one-form ξ j , so \(h(\hat k)R(\hat k)\) cannot be positive.

These arguments were used in [308] to show that the evolution system (4.20, 4.21) with fixed lapse and shift is weakly but not strongly hyperbolic. The results in [308] also analyze modifications of the equations for which the lapse is densitized and the Hamiltonian constraint is used to modify the trace of Eq. (4.21). The conclusion is that such changes cannot make the evolution equations (4.20, 4.21) strongly hyperbolic. Therefore, these equations, with given shift and densitized lapse, are not suited for numerical evolutions.Footnote 14
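The key step of this argument, that the principal symbol R(k̂) annihilates pure-gauge perturbations, can be verified directly. The following sketch implements the symbol above with a flat frozen three-metric and checks that it annihilates γ ij = 2k̂ (i ξ j) while acting nontrivially on a generic perturbation:

```python
import numpy as np

# Principal symbol of the Ricci operator for a flat frozen three-metric;
# khat is assumed to be a unit vector.
def ricci_symbol(khat, gamma):
    kg = khat @ gamma                        # khat^l gamma_{lj}
    return (gamma + np.outer(khat, khat) * np.trace(gamma)
            - np.outer(khat, kg) - np.outer(kg, khat))

rng = np.random.default_rng(1)
khat = rng.standard_normal(3)
khat /= np.linalg.norm(khat)
xi = rng.standard_normal(3)

# Pure-gauge perturbation gamma_{ij} = 2 khat_(i xi_j) is annihilated ...
gauge_mode = np.outer(khat, xi) + np.outer(xi, khat)
residual = np.abs(ricci_symbol(khat, gauge_mode)).max()
# ... while a generic perturbation is not.
nonzero = np.abs(ricci_symbol(khat, np.eye(3))).max()
```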

Dynamical gauge conditions leading to a well-posed formulation

The results obtained so far often lead to the popular statement “The ADM equations are not strongly hyperbolic.” However, consider the possibility of determining the lapse and shift through evolution equations. A natural choice, motivated by the discussion in Section 4.1, is to impose the harmonic gauge constraint (4.3). Assuming that the background metric \({\overset \circ g _{\alpha \beta}}\) is Minkowski in Cartesian coordinates for simplicity, this yields the following equations for the 3+1 decomposed variables,

$$({\partial _t} - {\beta ^j}{\partial _j})\alpha = - {\alpha ^2}fK + {\alpha ^3}{H^t},$$
$$({\partial _t} - {\beta ^j}{\partial _j}){\beta ^i} = - \alpha {\gamma ^{ij}}{\partial _j}\alpha + {\alpha ^2}{\gamma ^{ij}}{\gamma ^{kl}}\left({{\partial _k}{\gamma _{jl}} - {1 \over 2}{\partial _j}{\gamma _{kl}}} \right) + {\alpha ^2}({H^i} + {\beta ^i}{H^t}),$$

with f a constant, which is equal to one for the harmonic time coordinate t. Let us analyze the hyperbolicity of the evolution system (4.27, 4.28, 4.20, 4.21) for the fields u = (α, βi, γ ij , K ij ), where for generality and later use, we do not necessarily assume f = 1 in Eq. (4.27). Since this is a mixed first/second-order system, we base our analysis on the first-order pseudodifferential reduction discussed in Section 3.1.5. After linearizing and localizing, we obtain the constant coefficient linear problem

$$({\partial _t} - \overset \circ \beta {\,^k}{\partial _k})\alpha = - \overset \circ \alpha {\,^2}fK,$$
$$({\partial _t} - \overset \circ \beta {\,^k}{\partial _k}){\beta ^i} = - \overset \circ \alpha \overset \circ \gamma {\,^{ij}}{\partial _j}\alpha + \overset \circ \alpha {\,^2}\overset \circ \gamma {\,^{ij}}\overset \circ \gamma {\,^{kl}}\left({{\partial _k}{\gamma _{jl}} - {1 \over 2}{\partial _j}{\gamma _{kl}}} \right),$$
$$({\partial _t} - \overset \circ \beta {\,^k}{\partial _k}){\gamma _{ij}} = 2{\overset \circ \gamma _{k(i}}{\partial _{j)}}{\beta ^k} - 2\overset \circ \alpha {K_{ij}},$$
$$({\partial _t} - \overset \circ \beta {\,^k}{\partial _k}){K_{ij}} = - {\partial _i}{\partial _j}\alpha + {{\overset \circ \alpha} \over 2}{\overset \circ \gamma ^{kl}}\left({- {\partial _k}{\partial _l}{\gamma _{ij}} - {\partial _i}{\partial _j}{\gamma _{kl}} + {\partial _i}{\partial _k}{\gamma _{lj}} + {\partial _j}{\partial _k}{\gamma _{li}}} \right),$$

where \(\overset \circ \alpha, \overset \circ \beta {\,^k}\) and \({\overset \circ \gamma _{ij}}\) refer to the quantities corresponding to α, βk, γ ij of the background metric when frozen at a given point. In order to rewrite this in first-order form, we perform a Fourier transformation in space and introduce the variables Û = (a, b i , l ij , p ij ) with

$$a: = \vert k\vert \hat \alpha /\overset \circ \alpha ,\qquad {b_i}: = \vert k\vert {\overset \circ \gamma _{ij}}{\hat \beta ^j}/\overset \circ \alpha ,\qquad {l_{ij}}: = \vert k\vert {\hat \gamma _{ij}},\qquad {p_{ij}}: = 2i{\hat K_{ij}},$$

where \(\vert k\vert := \sqrt {\overset \circ \gamma {\,^{ij}}{k_i}{k_j}}\) and the hatted quantities refer to their Fourier transform. With this, we obtain the first-order system Û t = P(ik)Û where the symbol has the form \(P(ik) = i\overset \circ \beta {\,^s}{k_s}I + \overset \circ \alpha Q(ik)\) with

$$Q(ik)\left({\begin{array}{*{20}c} a \\ {{b_i}} \\ {{l_{ij}}} \\ {{p_{ij}}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} {{f \over 2}p} \\ {- {{\hat k}_i}a + {{\hat k}^j}{l_{ij}} - {1 \over 2}{{\hat k}_i}l} \\ {2{{\hat k}_{(i}}{b_{j)}} + {p_{ij}}} \\ {2{{\hat k}_i}{{\hat k}_j}a + {l_{ij}} + {{\hat k}_i}{{\hat k}_j}l - 2{{\hat k}^s}{{\hat k}_{(i}}{l_{j)s}}} \\ \end{array}} \right),$$

where \({{\hat k}_i}: = {k_i}/\vert k\vert, {{\hat k}^i}: = \overset \circ \gamma {\,^{ij}}{{\hat k}_j},l: = \overset \circ \gamma {\,^{ij}}{l_{ij}}\), and \(p: = \overset \circ \gamma {\,^{ij}}{p_{ij}}\). In order to determine the eigenfields S(k)−1Û such that S(k)−1P(ik)S(k) is diagonal, we decompose

$${b_i} = \bar b{\hat k_i} + {\bar b_i},\qquad {l_{ij}} = \bar l{\hat k_i}{\hat k_j} + 2{\hat k_{(i}}{\bar l_{j)}} + {\hat l_{ij}} + {1 \over 2}({\overset \circ \gamma _{ij}} - {\hat k_i}{\hat k_j})\bar l\prime ,\qquad {p_{ij}} = \bar p{\hat k_i}{\hat k_j} + 2{\hat k_{(i}}{\bar p_{j)}} + {\hat p_{ij}} + {1 \over 2}({\overset \circ \gamma _{ij}} - {\hat k_i}{\hat k_j})\bar p\prime$$

into pieces parallel and orthogonal to \({{\hat k}_i}\), similar to Example 15. Then, the problem decouples into a tensor sector, involving \(({{\hat l}_{ij}},{{\hat p}_{ij}})\), a vector sector, involving \(({{\bar b}_i},{{\bar l}_i},{{\bar p}_i})\), and a scalar sector, involving \((a,\bar b,\bar l,\bar p,{{\bar l}\prime},{{\bar p}\prime})\). In the tensor sector, we have

$${Q^{({\rm{tensor}})}}(ik)\left({\begin{array}{*{20}c} {{{\hat l}_{ij}}} \\ {{{\hat p}_{ij}}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} {{{\hat p}_{ij}}} \\ {{{\hat l}_{ij}}} \\ \end{array}} \right),$$

which has the eigenvalues ±i|k| with corresponding eigenfields \({{\hat l}_{ij}} \pm {{\hat p}_{ij}}\). In the vector sector, we have

$${Q^{({\rm{vector}})}}(ik)\left({\begin{array}{*{20}c} {{{\bar b}_j}} \\ {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar b}_j} + {{\bar p}_j}} \\ 0 \\ \end{array}} \right),$$

which is also diagonalizable with eigenvalues 0, ±i|k| and corresponding eigenfields \({{\bar p}_j}\) and \({{\bar l}_j} \pm ({{\bar b}_j} + {{\bar p}_j})\). Finally, in the scalar sector we have

$${Q^{({\rm{scalar}})}}(ik)\left({\begin{array}{*{20}c} a \\ {\bar b} \\ {\bar l} \\ {\bar p} \\ {\bar l\prime} \\ {\bar p\prime} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} {{f \over 2}(\bar p + \bar p\prime)} \\ {- a + {1 \over 2}(\bar l - \bar l\prime)} \\ {2\bar b + \bar p} \\ {2a + \bar l\prime} \\ {\bar p\prime} \\ {\bar l\prime} \\ \end{array}} \right).$$

It turns out that \(Q^{({\rm{scalar}})}(ik)\) is diagonalizable with purely imaginary eigenvalues if and only if f > 0 and f ≠ 1. In this case, the eigenvalues and corresponding eigenfields are \(\pm i\vert k\vert, \, \pm i\vert k\vert, \, \pm i\sqrt f \vert k\vert\) and \({{\bar l}{\prime}} \pm {{\bar p}{\prime}},\bar l \pm (2\bar b + \bar p),\,a + f{{\bar l}{\prime}}/(f - 1) \pm \sqrt f [\bar p + (f + 1)/(f - 1)\,{{\bar p}{\prime}}]/2\), respectively. A symmetrizer for P(ik), which is smooth in \(\hat k \in {S^2},\overset \circ \alpha, \overset \circ \beta {\,^k}\) and \({\overset \circ \gamma _{ij}}\), can be constructed from the eigenfields as in Example 15.


  • If instead of imposing the dynamical shift condition (4.28), βi is a priori specified, then the resulting evolution system, consisting of Eqs. (4.27, 4.20, 4.21), is weakly hyperbolic for any choice of f. Indeed, in that case the symbol (4.36) in the vector sector reduces to the Jordan block

    $${Q^{(vector)}}(ik)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} 0 & 1 \\ 0 & 0 \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right),$$

    which cannot be diagonalized.

  • When linearized about Minkowski spacetime, it is possible to classify the characteristic fields into physical, constraint-violating and gauge fields; see [106]. For the system (4.29–4.32) the physical fields are the ones in the tensor sector, \({{\hat l}_{ij}} \pm {{\hat p}_{ij}}\), the constraint-violating ones are \({{\bar p}_j}\) and \({{\bar l}{\prime}} \pm {{\bar p}{\prime}}\), and the gauge fields are the remaining characteristic variables. Observe that the constraint-violating fields are governed by a strongly-hyperbolic system (see also Section 4.2.4 below), and that in this particular formulation of the ADM equations the gauge fields are coupled to the constraint-violating ones. This coupling is one of the properties that make it possible to cast the system as a strongly hyperbolic one.

We conclude that the evolution system (4.27, 4.28, 4.20, 4.21) is strongly hyperbolic if and only if f > 0 and f ≠ 1. Although the full harmonic gauge condition (4.3) is excluded from these restrictions,Footnote 15 there is still a large family of evolution equations for the lapse and shift that give rise to a strongly hyperbolic problem together with the standard evolution equations (4.20, 4.21) from the 3+1 decomposition.
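The scalar-sector analysis can be double-checked numerically. In the sketch below, M denotes the matrix \(Q^{({\rm{scalar}})}(ik)/(i\vert k\vert)\) on the variables (a, b̄, l̄, p̄, l̄′, p̄′). For f = 2 the polynomial (M² − I)(M² − fI) vanishes, confirming diagonalizability with eigenvalues ±1, ±√f, while for f = 1 diagonalizability would force M² = I, which fails:

```python
import numpy as np

# M = Q^(scalar)(ik)/(i|k|) on the variables (a, b, l, p, l', p').
def scalar_symbol(f):
    return np.array([
        [0.0, 0.0, 0.0, f/2, 0.0, f/2],   # a   -> (f/2)(p + p')
        [-1., 0.0, 0.5, 0.0, -.5, 0.0],   # b   -> -a + (l - l')/2
        [0.0, 2.0, 0.0, 1.0, 0.0, 0.0],   # l   -> 2b + p
        [2.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # p   -> 2a + l'
        [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # l'  -> p'
        [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # p'  -> l'
    ])

M = scalar_symbol(2.0)                    # f = 2: strongly hyperbolic case
M2 = M @ M
res_f2 = np.abs((M2 - np.eye(6)) @ (M2 - 2.0 * np.eye(6))).max()

M = scalar_symbol(1.0)                    # f = 1: a Jordan block appears
res_f1 = np.abs(M @ M - np.eye(6)).max()
```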

Elliptic gauge conditions leading to a well-posed formulation

Rather than fixing the lapse and shift algebraically or dynamically, an alternative, which has been considered in the literature, is to fix them according to elliptic equations. A natural restriction on the extrinsic geometry of the time slices Σ t is to require that their mean curvature, c = −K/3, vanishes or is constant [391]. Taking the trace of Eq. (4.21) and using the Hamiltonian constraint to eliminate the trace of \(R_{ij}^{(3)}\) yields the following equation for the lapse,

$$\left[ {- {D^j}{D_j} + {K^{ij}}{K_{ij}} + 4\pi {G_N}(\rho + \sigma)} \right]\alpha = {\partial _t}K,$$

which is a second-order linear elliptic equation. The operator inside the square brackets is formally positive if the strong energy condition, ρ + σ ≥ 0, holds, and so it is invertible when defined on appropriate function spaces. See also [203] for generalizations of this condition. Concerning the shift, one choice, which is motivated by eliminating the “bad” terms in the expression for the Ricci tensor, Eq. (4.24), is the spatial harmonic gauge [25]. In terms of a fixed (possibly time-dependent) background metric \({\overset \circ \gamma _{ij}}\) on Σ t , this gauge is defined as (cf. Eq. (4.3))

$$0 = {V^k}: = {\gamma ^{ij}}\left({{\Gamma ^k}_{ij} - {{\overset \circ \Gamma}\,^k}_{ij}} \right) = {\gamma ^{ij}}{\gamma ^{kl}}\left({{{\overset \circ D}_k}{\gamma _{lj}} - {1 \over 2}{{\overset \circ D}_j}{\gamma _{kl}}} \right),$$

where \(\overset \circ D\) is the Levi-Civita connection with respect to \(\overset \circ \gamma\) and \(\overset \circ \Gamma {\,^k}_{ij}\) denote the corresponding Christoffel symbols. The main importance of this gauge is that it permits one to rewrite the Ricci tensor belonging to the three metric in the form

$$R_{ij}^{(3)} = - {1 \over 2}{\gamma ^{kl}}{\overset \circ D _k}{\overset \circ D _l}{\gamma _{ij}} + {D_{(i}}{V_{j)}} + {\rm{l}}{.}{\rm{o}}{.},$$

where \({\overset \circ D _k}\) denotes the covariant derivative with respect to the background metric \(\overset \circ \gamma\) and where the lower-order terms “l.o.” depend only on γ ij and its first derivatives \({\overset \circ D _k}{\gamma _{ij}}\). When Vk = 0 the operator on the right-hand side is second-order quasilinear elliptic, and with this, the evolution system (4.20, 4.21) has the form of a nonlinear wave equation for the three-metric γ ij . However, the coefficients and source terms in this equation still depend on the lapse and shift. For constant mean curvature slices the lapse satisfies the elliptic scalar equation (4.39), and with the spatial harmonic gauge the shift is determined by the requirement that Eq. (4.40) is preserved throughout evolution, which yields an elliptic vector equation for it. In [25] it was shown that the coupled hyperbolic-elliptic system consisting of the evolution equations (4.20, 4.21) with the Ricci tensor rewritten in elliptic form using the condition Vk = 0, the constant mean curvature condition (4.39), and this elliptic equation for βi, gives rise to a well-posed Cauchy problem in vacuum. Besides eliminating the “bad” terms in the Ricci tensor, the spatial harmonic gauge also has other nice properties, which were exploited in the well-posed formulation of [25]. For example, the covariant Laplacian of a function ƒ is

$${D^k}{D_k}f = {\gamma ^{ij}}{\overset \circ D _i}{\overset \circ D _j}f - {V^k}{\overset \circ D _k}f,$$

which does not contain any derivatives of the three metric \({\gamma _{ij}}\) if \({V^k} = 0\). For applications of the hyperbolic-elliptic formulation in [25] to the global existence of expanding vacuum cosmologies, see [26, 27].
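The identity above can be checked symbolically. The sympy sketch below (our illustration, not part of the review) works in two dimensions with a flat, Cartesian, time-independent background, so that \({\overset \circ D _i} = {\partial _i}\) and the background Christoffel symbols vanish; the metric components a, b, c and the function f are arbitrary.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
f = sp.Function('f')(x, y)

# Arbitrary symbolic 2-metric gamma_{ij} (assumed invertible).
a, b, c = [sp.Function(n)(x, y) for n in 'abc']
g = sp.Matrix([[a, b], [b, c]])
ginv = g.inv()

# Christoffel symbols of gamma; with a flat Cartesian background the
# background Christoffels vanish, so V^k = gamma^{ij} Gamma^k_{ij}.
def Gamma(k, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l])) for l in range(2))

V = [sum(ginv[i, j] * Gamma(k, i, j) for i in range(2) for j in range(2))
     for k in range(2)]

# Covariant Laplacian D^k D_k f ...
lap = sum(ginv[i, j] * (sp.diff(f, coords[i], coords[j])
                        - sum(Gamma(k, i, j) * sp.diff(f, coords[k])
                              for k in range(2)))
          for i in range(2) for j in range(2))

# ... equals gamma^{ij} d_i d_j f - V^k d_k f: all metric derivatives
# are hidden inside V^k.
rhs = (sum(ginv[i, j] * sp.diff(f, coords[i], coords[j])
           for i in range(2) for j in range(2))
       - sum(V[k] * sp.diff(f, coords[k]) for k in range(2)))

assert sp.simplify(lap - rhs) == 0
```

The check makes explicit why the gauge is useful: once \({V^k} = 0\), the Laplacian of f contains no derivatives of the metric at all.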

Other methods for specifying the shift have been proposed in [391], with the idea of minimizing a functional of the type

$$I[\beta ] = \int\limits_{{\Sigma _t}} {{\Theta ^{ij}}} {\Theta _{ij}}\sqrt \gamma {d^3}x,$$

where \({\Theta _{ij}}: = {\partial _t}{\gamma _{ij}}/2 = - \alpha {K_{ij}} + {D_{(i}}{\beta _{j)}}\) is the strain tensor. Therefore, the functional I[β] measures time changes in the three metric in an averaged sense. In particular, I[β] attains its absolute minimum (zero) if \({\partial _t}\) is a Killing vector field. Hence, one expects the resulting gauge condition to minimize the time dependence of the coordinate components of the three metric. An alternative is to replace the strain by its trace-free part on the right-hand side of Eq. (4.43), giving rise to the minimal distortion gauge. Both conditions yield a second-order elliptic equation for the shift vector, which has unique solutions provided suitable boundary conditions are specified. For generalizations and further results on this type of gauge condition, see [73, 203, 204]. However, it seems to be currently unknown whether or not these elliptic shift conditions, together with the evolution system (4.20, 4.21) and an appropriate condition on the lapse, lead to a well-posed Cauchy problem.

Constraint propagation

The evolution equations (4.20, 4.21) are equivalent to the components of the Einstein equations corresponding to the spatial part of the Ricci tensor,

$${R_{ij}} = 8\pi {G_N}\left({{T_{ij}} - {1 \over 2}{\gamma _{ij}}{g^{\mu \nu}}{T_{\mu \nu}}} \right),$$

and in order to obtain a solution of the full Einstein equations one also needs to solve the constraints \(H = 8\pi {G_N}\rho\) and \({M_i} = 8\pi {G_N}{j_i}\). As in Section 4.2.3, the constraint propagation system can be obtained from the twice contracted Bianchi identities, which, in the 3+1 decomposition, read

$${\partial _0}H + {1 \over {{\alpha ^2}}}{D^j}\left({{\alpha ^2}{M_j}} \right) - 2KH - \left({{K^{ij}} - K{\gamma ^{ij}}} \right){R_{ij}} = 0,$$
$${\partial _0}{M_i} + {1 \over {{\alpha ^2}}}{D_i}\left({{\alpha ^2}H} \right) - K{M_i} + {1 \over \alpha}{D^j}\left({\alpha {R_{ij}} - \alpha {\gamma _{ij}}{\gamma ^{kl}}{R_{kl}}} \right) = 0.$$

The condition of the stress-energy tensor being divergence-free leads to similar evolution equations for ρ and j i . Therefore, the equations (4.44) lead to the following symmetric hyperbolic system [190, 445] for the constraint variables \({\mathcal H}: = H - 8\pi {G_N}\rho\) and \({{\mathcal M}_i}: = {M_i} - 8\pi {G_N}{j_i}\),

$${\partial _0}{\mathcal H} = - {1 \over {{\alpha ^2}}}{D^j}\left({{\alpha ^2}{{\mathcal M}_j}} \right) + 2K{\mathcal H},$$
$${\partial _0}{{\mathcal M}_i} = - {1 \over {{\alpha ^2}}}{D_i}\left({{\alpha ^2}{\mathcal H}} \right) + K{{\mathcal M}_i}.$$

As has also been observed in [190], the constraint propagation system associated with the standard ADM equations, where Eq. (4.44) is replaced by its trace-reversed version \({R_{ij}} - {1 \over 2}{\gamma _{ij}}{g^{\mu \nu}}{R_{\mu \nu}} = 8\pi {G_N}{T_{ij}}\), is

$$\begin{array}{*{20}c} {{\partial _0}{\mathcal H} = - {1 \over {{\alpha ^2}}}{D^j}\left({{\alpha ^2}{{\mathcal M}_j}} \right) + K{\mathcal H},} \\ {{\partial _0}{{\mathcal M}_i} = - {{{D_i}\alpha} \over \alpha}{\mathcal H} + K{{\mathcal M}_i},\quad \quad \quad \quad} \\ \end{array}$$

which is only weakly hyperbolic. Therefore, it is much more difficult to control the constraint fields in the standard ADM case than in York’s formulation of the 3+1 equations.
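The structural difference between the two constraint propagation systems can be seen directly from their principal symbols. The sympy sketch below is our illustration (not from the review): it freezes coefficients in the direction k = (1, 0, 0) for the state vector (H, M_1, M_2, M_3), scaling out the lower-order terms and overall factors of α, i and |k|.

```python
import sympy as sp

# York-type system: dH ~ -D^j M_j and dM_i ~ -D_i H couple symmetrically.
P_york = sp.Matrix([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])

# Standard ADM system: the M_i equation contains no derivatives of H.
P_adm = sp.Matrix([[0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

# Symmetric symbol: real eigenvalues +-1, 0, 0 and a complete set of
# eigenvectors -- the system is (symmetric) hyperbolic.
assert P_york.is_diagonalizable()
assert P_york.eigenvals() == {1: 1, -1: 1, 0: 2}

# Nilpotent Jordan block: all eigenvalues vanish but the eigenvectors
# do not span C^4 -- only weakly hyperbolic.
assert not P_adm.is_diagonalizable()
assert P_adm.eigenvals() == {0: 4}
```

The missing derivative coupling in the ADM case is exactly what produces the non-diagonalizable Jordan block, and with it the loss of strong hyperbolicity.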

The BSSN formulation

The BSSN formulation is based on the 3+1 decomposition of Einstein’s field equations. Unlike the harmonic formulation, which has been motivated by the mathematical structure of the equations and the understanding of the Cauchy formulation in general relativity, this system has been mainly developed and improved based on its capability of numerically evolving spacetimes containing compact objects in a stable way. Interestingly, even though the BSSN formulation is based on an entirely different motivation, mathematical questions like the well-posedness of its Cauchy problem can be answered, at least for most gauge conditions.

In the BSSN formulation, the three metric γ ij and the extrinsic curvature K ij are decomposed according to

$${\gamma _{ij}} = {e^{4\phi}}{\tilde \gamma _{ij}}\,,$$
$${K_{ij}} = {e^{4\phi}}\left({{{\tilde A}_{ij}} + {1 \over 3}{{\tilde \gamma}_{ij}}K} \right).$$

Here, \(K = {\gamma ^{ij}}{K_{ij}}\) and \({{\tilde A}_{ij}}\) are the trace and the trace-less part, respectively, of the conformally-rescaled extrinsic curvature. The conformal factor \({e^{2\phi}}\) is determined by the requirement for the conformal metric to have unit determinant. Aside from these variables one also evolves the lapse (α), the shift (\({\beta ^i}\)) and its time derivative (\({B^i}\)), and the variable

$${\tilde \Gamma ^i}: = - {\partial _j}{\tilde \gamma ^{ij}}.$$
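The conformal decomposition above is straightforward to realize numerically. The numpy sketch below is an illustration of ours (the function name and test data are not from the review): given ADM variables \({\gamma _{ij}}\), \({K_{ij}}\) on a slice, it computes ϕ, the unit-determinant conformal metric, the trace K and the trace-free \({{\tilde A}_{ij}}\), and verifies that the decomposition inverts.

```python
import numpy as np

def bssn_decompose(gamma, K_ij):
    """Conformal decomposition of the ADM variables (illustrative sketch)."""
    phi = np.log(np.linalg.det(gamma)) / 12.0     # det(gamma) = e^{12 phi}
    gamma_t = np.exp(-4.0 * phi) * gamma          # conformal metric, det = 1
    K = np.tensordot(np.linalg.inv(gamma), K_ij)  # trace K = gamma^{ij} K_ij
    A_t = np.exp(-4.0 * phi) * (K_ij - K * gamma / 3.0)
    return phi, gamma_t, K, A_t

# Round-trip check on random data.
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
gamma = L @ L.T + 3.0 * np.eye(3)                 # positive-definite 3-metric
K_ij = rng.normal(size=(3, 3))
K_ij = 0.5 * (K_ij + K_ij.T)                      # symmetric extrinsic curvature

phi, gamma_t, K, A_t = bssn_decompose(gamma, K_ij)
assert np.isclose(np.linalg.det(gamma_t), 1.0)    # unit determinant
# A_t is trace-free with respect to the conformal metric:
assert np.isclose(np.tensordot(np.linalg.inv(gamma_t), A_t), 0.0)
# The decomposition reconstructs K_ij:
K_rec = np.exp(4.0 * phi) * (A_t + gamma_t * K / 3.0)
assert np.allclose(K_rec, K_ij)
```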

In terms of the operator \({{\hat \partial}_0} = {\partial _t} - {\beta ^j}{\partial _j}\) the BSSN evolution equations are

$${\hat \partial _0}\alpha = - {\alpha ^2}f(\alpha ,\phi ,{x^\mu})(K - {K_0}({x^\mu})),$$
$${\hat \partial _0}K = - {e^{- 4\phi}}\left[ {{{\tilde D}^i}{{\tilde D}_i}\alpha + 2{\partial _i}\phi \cdot{{\tilde D}^i}\alpha} \right] + \alpha \left({{{\tilde A}^{ij}}{{\tilde A}_{ij}} + {1 \over 3}{K^2}} \right) - \alpha S,$$
$${\hat \partial _0}{\beta ^i} = {\alpha ^2}G(\alpha ,\phi ,{x^\mu}){B^i},$$
$${\hat \partial _0}{B^i} = {e^{- 4\phi}}H(\alpha ,\phi ,{x^\mu}){\hat \partial _0}{\tilde \Gamma ^i} - {\eta ^i}({B^i},\alpha ,{x^\mu}),$$
$${\hat \partial _0}\phi = - {\alpha \over 6}\,K + {1 \over 6}{\partial _k}{\beta ^k},$$
$${\hat \partial _0}{\tilde \gamma _{ij}} = - 2\alpha {\tilde A_{ij}} + 2{\tilde \gamma _{k(i}}{\partial _{j)}}{\beta ^k} - {2 \over 3}{\tilde \gamma _{ij}}{\partial _k}{\beta ^k},$$
$$\begin{array}{*{20}c} {{{\hat \partial}_0}{{\tilde A}_{ij}} = {e^{- 4\phi}}{{\left[ {\alpha {{\tilde R}_{ij}} + \alpha R_{ij}^\phi - {{\tilde D}_i}{{\tilde D}_j}\alpha + 4{\partial _{(i}}\phi \cdot{{\tilde D}_{j)}}\alpha} \right]}^{TF}}\quad \quad \quad \quad \quad \quad \quad \quad} \\ {+ \alpha K{{\tilde A}_{ij}} - 2\alpha {{\tilde A}_{ik}}\tilde A_{\,j}^k + 2{{\tilde A}_{k(i}}{\partial _{j)}}{\beta ^k} - {2 \over 3}{{\tilde A}_{ij}}{\partial _k}{\beta ^k} - \alpha {e^{- 4\phi}}{{\hat S}_{ij}},} \\ \end{array}$$
$$\begin{array}{*{20}c} {{{\hat \partial}_0}{{\tilde \Gamma}^i} = {{\tilde \gamma}^{kl}}{\partial _k}{\partial _l}{\beta ^i} + {1 \over 3}{{\tilde \gamma}^{ij}}{\partial _j}{\partial _k}{\beta ^k} + {\partial _k}{{\tilde \gamma}^{kj}}\cdot{\partial _j}{\beta ^i} - {2 \over 3}{\partial _k}{{\tilde \gamma}^{ki}}\cdot{\partial _j}{\beta ^j}\quad \quad \quad \quad \quad \quad \quad \,} \\ {- 2{{\tilde A}^{ij}}{\partial _j}\alpha + 2\alpha \left[ {(m - 1){\partial _k}{{\tilde A}^{ki}} - {{2m} \over 3}{{\tilde D}^i}K + m(\tilde \Gamma _{\,kl}^i{{\tilde A}^{kl}} + 6{{\tilde A}^{ij}}{\partial _j}\phi)} \right] - {S^i}.} \\ \end{array}$$

Here, quantities with a tilde refer to the conformal three metric \({{\tilde \gamma}_{ij}}\), which is also used in order to raise and lower indices. In particular, \({{\tilde D}_i}\) and \({{\tilde \Gamma}^k}_{ij}\) denote the covariant derivative and the Christoffel symbols, respectively, with respect to \({{\tilde \gamma}_{ij}}\). Expressions with a superscript TF refer to their trace-less part with respect to the conformal metric. Next, the sum \({{\tilde R}_{ij}} + R_{ij}^\phi\) represents the Ricci tensor associated with the physical three metric γ ij , where

$${\tilde R_{ij}} = - {1 \over 2}{\tilde \gamma ^{kl}}{\partial _k}{\partial _l}{\tilde \gamma _{ij}} + {\tilde \gamma _{k(i}}{\partial _{j)}}{\tilde \Gamma ^k} - {\tilde \Gamma _{(ij)k}}{\partial _l}{\tilde \gamma ^{lk}} + {\tilde \gamma ^{ls}}\left({2{{\tilde \Gamma}^k}_{l(i}{{\tilde \Gamma}_{j)ks}} + {{\tilde \Gamma}^k}_{is}{{\tilde \Gamma}_{klj}}} \right),$$
$$R_{ij}^\phi = - 2{\tilde D_i}{\tilde D_j}\phi - 2{\tilde \gamma _{ij}}{\tilde D^k}{\tilde D_k}\phi + 4{\tilde D_i}\phi \,{\tilde D_j}\phi - 4{\tilde \gamma _{ij}}{\tilde D^k}\phi \,{\tilde D_k}\phi .$$

The term \({{\hat \partial}_0}{{\tilde \Gamma}^i}\) in Eq. (4.55) is set equal to the right-hand side of Eq. (4.59). The parameter m in the latter equation modifies the evolution flow off the constraint surface by adding the momentum constraint to the evolution equation for the variable \({{\tilde \Gamma}^i}\). This parameter was first introduced in [10] in order to compare the stability properties of the BSSN evolution equations with those of the ADM formulation.

The gauge conditions, which are imposed on the lapse and shift in Eqs. (4.52, 4.54, 4.55), were introduced in [52] and generalize the Bona-Massó condition [62] and the hyperbolic Gamma driver condition [11]. It is assumed that the functions \(f(\alpha ,\phi ,{x^\mu})\), \(G(\alpha ,\phi ,{x^\mu})\) and \(H(\alpha ,\phi ,{x^\mu})\) are strictly positive and smooth in their arguments, and that \({K_0}({x^\mu})\) and \({\eta ^i}({B^j},\alpha ,{x^\mu})\) are smooth functions of their arguments. The choice

$$m = 1,\qquad f(\alpha ,\phi ,{x^\mu}) = {2 \over \alpha}\,,\qquad {K_0}({x^\mu}) = 0,$$
$$G(\alpha ,\phi ,{x^\mu}) = {3 \over {4{\alpha ^2}}}\,,\qquad H(\alpha ,\phi ,{x^\mu}) = {e^{4\phi}}\,,\qquad {\eta ^i}({B^j},\alpha ,{x^\mu}) = \eta {B^i},$$

with η a positive constant, corresponds to the evolution system used in many black-hole simulations based on 1 + log slicing and the moving puncture technique (see, for instance, [423] and references therein). Finally, the source terms \(S\), \({{\hat S}_{ij}}\) and \({S^i}\) are defined in the following way: denoting by \(R_{ij}^{(3)}\) and \(R_{ij}^{(4)}\) the Ricci tensors belonging to the three-metric \({\gamma _{ij}}\) and the spacetime metric, respectively, and introducing the constraint variables

$$H: = {1 \over 2}\left({{\gamma ^{ij}}\;R_{ij}^{(3)} + {2 \over 3}{K^2} - {{\tilde A}^{ij}}{{\tilde A}_{ij}}} \right),$$
$${M_i}: = {\tilde D^j}{\tilde A_{ij}} - {2 \over 3}{\tilde D_i}K + 6{\tilde A_{ij}}{\tilde D^j}\phi ,$$
$${C^i}: = {\tilde \Gamma ^i} + {\partial _j}{\tilde \gamma ^{ij}},$$

the source terms are defined as

$$S: = {\gamma ^{ij}}R_{ij}^{(4)} - 2H,\qquad {\hat S_{ij}}: = {\left[ {R_{ij}^{(4)} + {{\tilde \gamma}_{k(i}}{\partial _{j)}}{C^k}} \right]^{TF}},\quad {S^i}: = 2\alpha \,m\,{\tilde \gamma ^{ij}}{M_j} - {\hat \partial _0}{C^i}.$$

For vacuum evolutions one sets \(S = 0\), \({{\hat S}_{ij}} = 0\) and \({S^i} = 0\). When matter fields are present, the Einstein field equations are equivalent to the evolution equations (4.52–4.59) with \(S = - 4\pi {G_N}(\rho + \sigma),{{\hat S}_{ij}} = 8\pi {G_N}\sigma _{ij}^{TF},{S^i} = 16\pi {G_N}m\alpha {{\tilde \gamma}^{ik}}{j_k}\), supplemented by the constraints \(H = 8\pi {G_N}\rho\), \({M_i} = 8\pi {G_N}{j_i}\) and \({C^i} = 0\).

When comparing Cauchy evolutions in different spatial coordinates, it is very convenient to reformulate the BSSN system such that it is covariant with respect to spatial coordinate transformations. This is indeed possible; see [77, 82]. One way of achieving this is to fix a smooth background three-metric \({\overset \circ \gamma _{ij}}\), as in Section 4.1, and to replace the fields ϕ and \({{\tilde \Gamma}^i}\) by the scalar and vector fields

$$\phi : = {1 \over {12}}\log \left({{\gamma \over {\overset \circ \gamma}}} \right),\qquad {\tilde \Gamma ^i}: = - {\overset \circ D _j}{\tilde \gamma ^{ij}},$$

where γ and \(\overset \circ \gamma\) denote the determinants of \({\gamma _{ij}}\) and \({\overset \circ \gamma_{ij}}\), and \({\overset \circ D_j}\) is the covariant derivative associated to the latter. If \({\overset \circ \gamma _{ij}}\) is flat and time-independent, the corresponding BSSN equations are obtained by replacing \({\partial _k} \mapsto {\overset \circ D _k}\) and \({{\tilde \Gamma}^k}_{ij} \mapsto {{\tilde \Gamma}^k}_{ij} - \overset \circ \Gamma {\,^k}_{ij}\) in Eqs. (4.52–4.59, 4.60, 4.61, 4.64–4.66).

The hyperbolicity of the BSSN evolution equations

In fact, the ADM formulation in the spatial harmonic gauge described in Section 4.2.3 and the BSSN formulation are based on some common ideas. In the covariant reformulation of BSSN just mentioned, the variable \({{\tilde \Gamma}^i}\) is just the quantity \({V^i}\) defined in Eq. (4.40), with \({\gamma _{ij}}\) replaced by the conformal metric \({{\tilde \gamma}_{ij}}\). Instead of requiring \({{\tilde \Gamma}^i}\) to vanish, which would convert the operator on the right-hand side of Eq. (4.60) into a quasilinear elliptic operator, one promotes this quantity to an independent field satisfying the evolution equation (4.59) (see also the discussion below Equation (2.18) in [390]). In this way, the \({{\tilde \gamma}_{ij}} - {{\tilde A}_{ij}}\)-block of the evolution equations forms a wave system. However, this system is coupled through its principal terms to the evolution equations of the remaining variables, and so one needs to analyze the complete system. As follows from the discussion below, it is crucial to add the momentum constraint to Eq. (4.59) with an appropriate factor m in order to obtain a hyperbolic system.

The hyperbolicity of the BSSN evolution equations was first analyzed in a systematic way in [373], where it was established that for fixed shift and densitized lapse,

$$\alpha = {e^{12\sigma \phi}}$$

the evolution system (4.53, 4.56–4.59) is strongly hyperbolic for σ > 0 and m > 1/4 and symmetric hyperbolic for m > 1 and 6σ = 4m − 1. This was shown by introducing new variables and enlarging the system to a strongly or symmetric hyperbolic first-order one. In fact, similar first-order reductions were already obtained in [196, 188]. However, in [373] it was shown that the first-order enlargements are equivalent to the original system if the extra constraints associated to the definition of the new variables are satisfied, and that these extra constraints propagate independently of the BSSN constraints \(H = 0\), \({M_i} = 0\) and \({C^i} = 0\). This establishes the well-posedness of the Cauchy problem for the system (4.69, 4.53, 4.56–4.59) under the aforementioned conditions on σ and m. Based on the same method, a symmetric hyperbolic first-order enlargement of the evolution equations (4.52, 4.53, 4.56–4.59) and fixed shift was obtained in [52] under the conditions f > 0 and 4m = 3f + 1 and used to construct boundary conditions for BSSN. First-order strongly-hyperbolic reductions for the full system (4.52–4.59) have also been recently analyzed in [82].

An alternative and efficient method for analyzing the system consists in reducing it to a first-order pseudodifferential system, as described in Section 3.1.5. This method has been applied in [308] to derive a strongly hyperbolic system very similar to BSSN with fixed, densitized lapse and fixed shift. This system is then shown to yield a well-posed Cauchy problem. In [52] the same method was applied to the evolution system (4.52–4.59). Linearizing and localizing, one obtains a first-order system of the form \({{\hat U}_t} = P(ik)\hat U = i\overset \circ \beta {\,^s}{k_s}\hat U + \overset \circ \alpha Q(ik)\hat U\). The eigenvalues of Q(ik) are 0, \(\pm i, \pm i\sqrt m, \pm i\sqrt \mu, \pm i\sqrt f, \pm i\sqrt {GH}, \pm i\sqrt \kappa\), where we have defined µ := (4m − 1)/3 and κ := 4GH/3. The system is weakly hyperbolic provided that

$$f > 0,\qquad \mu > 0,\qquad \kappa > 0,$$

and it is strongly hyperbolic if, in addition, the parameter m and the functions f, G, and H can be chosen such that the functions

$${\kappa \over {f - \kappa}}\,,\qquad {{m - 1} \over {\mu - \kappa}}\,,\qquad {{6(m - 1)\kappa} \over {4m - 3\kappa}}$$

are bounded and smooth. In particular, this requires that the numerators converge to zero at least as fast as the denominators when f → κ, μ → κ or 3κ → 4m, respectively. Since κ > 0, the boundedness of κ/(f − κ) requires that f ≠ κ. For the standard choice m = 1, the conditions on the gauge parameters leading to strong hyperbolicity are, therefore, f > 0, κ > 0 and f ≠ κ. Unfortunately, for the choice (4.62, 4.63) used in binary black-hole simulations these conditions reduce to

$${e^{4\phi}} \neq 2\alpha ,$$

which is typically violated at some two-surface, since asymptotically, α → 1 and ϕ → 0, while near black holes α is small and positive. It is currently not known whether or not the Cauchy problem is well posed if the system is strongly hyperbolic everywhere except at points belonging to a set of zero measure, such as a two-surface. Although numerical simulations based on finite-difference discretizations with the standard choice (4.62, 4.63) show no apparent sign of instabilities near such surfaces, the well-posedness of the Cauchy problem for the BSSN system (4.52–4.59) with the choice (4.62, 4.63) for the gauge source functions remains an open problem when the condition (4.72) is violated. However, a well-posed problem could be formulated by modifying the choice for the functions G and H such that f ≠ κ and f, κ > 0 are guaranteed to hold everywhere.

Yet a different approach to analyzing the hyperbolicity of BSSN has been given in [219, 220] based on a new definition of strongly and symmetric hyperbolicity for evolution systems, which are first order in time and second order in space. Based on this definition, it has been verified that the BSSN system (4.69, 4.53, 4.56–4.59) is strongly hyperbolic for σ > 0 and m > 1/4 and symmetric hyperbolic for 6σ = 4m − 1 > 0. (Note that this generalizes the original result in [373] where, in addition, m > 1 was required.) The results in [220] also discuss more general 3+1 formulations, including the one in [308], and construct constraint-preserving boundary conditions. The relation between the different approaches to analyzing hyperbolicity of evolution systems, which are first order in time and second order in space, has been analyzed in [221].

Strong hyperbolicity for different versions of the gauge evolution equations (4.52, 4.54, 4.55), where the normal operator \({{\hat \partial}_0}\) is sometimes replaced by \({\partial _t}\), has been analyzed in [222]. See Table I in that reference for a comparison between the different versions and the conditions they are subject to in order to satisfy strong hyperbolicity. It should be noted that when m = 1 and \({{\hat \partial}_0}\) is replaced by \({\partial _t}\), additional conditions restricting the magnitude of the shift appear in addition to f > 0 and f ≠ κ.


Constraint propagation

As mentioned above, the BSSN evolution equations (4.52–4.59) are only equivalent to Einstein’s field equations if the constraints

$${\mathcal H}: = H - 8\pi {G_N}\rho = 0,\qquad {{\mathcal M}_i}: = {M_i} - 8\pi {G_N}{j_i} = 0,\qquad {C^i} = 0$$

are satisfied. Using the twice contracted Bianchi identities in their 3+1 decomposed form, Eqs. (4.45, 4.46), and assuming that the stress-energy tensor is divergence free, it is not difficult to show that the equations (4.524.59) imply the following evolution system for the constraint fields [52, 220]:

$${\hat \partial _0}{\mathcal H} = - {1 \over \alpha}\,{D^j}({\alpha ^2}{{\mathcal M}_j}) - \alpha {e^{- 4\phi}}{\tilde A^{ij}}{\tilde \gamma _{ki}}{\partial _j}{C^k} + {{2\alpha} \over 3}\,K{\mathcal H},$$
$${\hat \partial _0}{{\mathcal M}_j} = {{{\alpha ^3}} \over 3}{D_j}({\alpha ^{- 2}}{\mathcal H}) + \alpha K{{\mathcal M}_j} + {{\mathcal M}_i}{\partial _j}{\beta ^i} + {D^i}\left({\alpha {{\left[ {{{\tilde \gamma}_{k(i}}{\partial _{j)}}{C^k}} \right]}^{TF}}} \right),$$
$${\hat \partial _0}{C^i} = 2\alpha \,m\,{\tilde \gamma ^{ij}}{{\mathcal M}_j}.$$

This is the constraint propagation system for BSSN; it describes the propagation of the constraint violations that are usually present in numerical simulations due to truncation and roundoff errors. There are at least three reasons for establishing the well-posedness of its Cauchy problem. The first reason is to show that the unique solution of the system (4.74–4.76) with zero initial data is the trivial solution. This implies that it is sufficient to solve the constraints at the initial time t = 0. Then, any smooth enough solution of the BSSN evolution equations with such data satisfies the constraint propagation system with \({\mathcal H} = 0,{{\mathcal M}_j} = 0\) and \({C^i} = 0\), and it follows from the uniqueness property of this system that the constraints must hold everywhere and at each time. In this way, one obtains a solution to Einstein’s field equations. However, in numerical calculations, the initial constraints are not exactly satisfied due to numerical errors. This brings us to the second reason for having a well-posed problem at the level of the constraint propagation system; namely, the continuous dependence on the initial data. Indeed, the initial constraint violations give rise to constraint-violating solutions; but, if these violations are governed by a well-posed evolution system, the norm of the constraint violations is controlled by that of the initial violations for each fixed time t > 0. In particular, the constraint violations must converge to zero if the initial constraint violations do. Since the initial constraint errors go to zero when resolution is increased (provided a stable numerical scheme is used to solve the constraints), this guarantees convergence to a constraint-satisfying solution. Finally, the third reason for establishing well-posedness for the constraint propagation system is the construction of constraint-preserving boundary conditions, which will be explained in detail in Section 6.

The hyperbolicity of the constraint propagation system (4.74–4.76) has been analyzed in [220, 52, 81, 80, 315] and shown to be reducible to a symmetric hyperbolic first-order system for m > 1/4. Furthermore, there are no superluminal characteristic fields if 1/4 < m ≤ 1. Because of finite speed of propagation, this means that BSSN with 1/4 < m ≤ 1 (which includes the standard choice m = 1) does not possess superluminal constraint-violating modes. This is an important property, for it shows that constraint violations that originate inside black-hole regions (which usually dominate the constraint errors due to high gradients at the punctures or stuffing of the black-hole singularities in the turducken approach [156, 81, 80]) cannot propagate to the exterior region.

In [353] a general result is derived, showing that under a mild assumption on the form of the constraints, strong hyperbolicity of the main evolution system implies strong hyperbolicity of the constraint propagation system, with the characteristic speeds of the latter being a subset of those of the former. The result does not hold in general if “strong” is replaced by “symmetric”, since there are known examples for which the main evolution system is symmetric hyperbolic, while the constraint propagation system is only strongly hyperbolic [108].

Other hyperbolic formulations

There exist many other hyperbolic reductions of Einstein’s field equations. In particular, there has been a large amount of work on casting the evolution equations into first-order symmetric [2, 182, 195, 3, 21, 155, 248, 443, 22, 74, 234, 254, 383, 377, 18, 285, 86] and strongly hyperbolic [62, 63, 12, 59, 60, 13, 64, 367, 222, 78, 58, 82] form; see [182, 352, 188, 353] for reviews. For systems involving wave equations for the extrinsic curvature, see [128, 2]; see also [424] and [20, 75, 374, 379, 436] for applications to perturbation theory and the linear stability of solitons and hairy black holes.

Recently, there has also been work deriving strongly or symmetric hyperbolic formulations from an action principle [79, 58, 243].

Boundary Conditions: The Initial-Boundary Value Problem

In Section 3 we discussed the general Cauchy problem for quasilinear hyperbolic evolution equations on the unbounded domain ℝn. However, in the numerical modeling of such problems one is faced with the finiteness of computer resources. A common approach for dealing with this problem is to truncate the domain via an artificial boundary, thus forming a finite computational domain with outer boundary. Absorbing boundary conditions must then be specified at the boundary such that the resulting IBVP is well posed and such that the amount of spurious reflection is minimized.

Therefore, we examine in this section quasilinear hyperbolic evolution equations on a finite, open domain Σ ⊂ ℝn with \({C^\infty}\)-smooth boundary ∂Σ. Let T > 0. We are considering an IBVP of the following form,

$${u_t} = \sum\limits_{j = 1}^n {{A^j}} (t,x,u){\partial \over {\partial {x^j}}}u + F(t,x,u),x \in \Sigma ,\quad t \in [0,T],$$
$$u(0,x) = f(x),\quad \quad \quad x \in \Sigma ,$$
$$b(t,x,u)u = g(t,x),\quad \quad x \in \partial \Sigma ,\quad t \in [0,T],$$

where \(u(t,x) \in {{\rm{\mathbb C}}^m}\) is the state vector, \({A^1}(t,x,u), \ldots ,{A^n}(t,x,u)\) are complex m×m matrices, \(F(t,x,u) \in {{\rm{\mathbb C}}^m}\), and b(t,x,u) is a complex r × m matrix. As before, we assume for simplicity that all coefficients belong to the class \(C_b^\infty ([0,T] \times \Sigma \times {{\rm{\mathbb C}}^m})\) of bounded, smooth functions with bounded derivatives. The data consists of the initial data \(f \in C_b^\infty (\Sigma, {{\rm{\mathbb C}}^m})\) and the boundary data \(g \in C_b^\infty ([0,T] \times \partial \Sigma, {{\rm{\mathbb C}}^r})\).

Compared to the initial-value problem discussed in Section 3 the following new issues and difficulties appear when boundaries are present:

  • For a smooth solution to exist, the data f and g must satisfy appropriate compatibility conditions at the intersection S := {0} × ∂Σ between the initial and boundary surfaces [344]. Assuming that u is continuous, for instance, Eqs. (5.2, 5.3) imply that g(0, x) = b(0, x, f(x))f(x) for all x ∈ ∂Σ. If u is continuously differentiable, then taking a time derivative of Eq. (5.3) and using Eqs. (5.1, 5.2) leads to

    $${g_t}(0,x) = c(x)\;\left[ {\sum\limits_{j = 1}^n {{A^j}} (0,x,f(x)){{\partial f} \over {\partial {x^j}}}(x) + F(0,x,f(x))} \right] + {b_t}(0,x,f(x))f(x),\qquad x \in \partial \Sigma ,$$

    where c(x) is the complex r × m matrix with coefficients

    $$c{(x)^A}_{\;B} = b{(0,x,f(x))^A}_{\;B} + \sum\limits_{C = 1}^m {{{\partial {b^A}_C} \over {\partial {u^B}}}} (0,x,f(x))f{(x)^C},\qquad A = 1, \ldots ,r,\quad B = 1, \ldots ,m.$$

    Assuming higher regularity of u, one obtains additional compatibility conditions by taking further time derivatives of Eq. (5.3). In particular, for an infinitely-differentiable solution u, one has an infinite family of such compatibility conditions at S, and one must make sure that the data f, g satisfies each of them if the solution u is to be reproduced by the IBVP. If an exact solution u(0) of the partial-differential equation (5.1) is known, a convenient way of satisfying these conditions is to choose the data such that in a neighborhood of S, f and g agree with the corresponding values for u(0), i.e., such that f(x) = u(0)(0, x) and g(t, x) = b(t, x, u(0)(t, x))u(0)(t, x) for (t, x) in a neighborhood of S. However, depending on the problem at hand, this might be too restrictive.

  • The next issue is the question of what class of boundary conditions (5.3) leads to a well-posed problem. In particular, one would like to know, which are the restrictions on the matrix b(t, x, u) implying existence of a unique solution, provided the compatibility conditions hold. In order to illustrate this issue on a very simple example, consider the advection equation \({u_t} = {u_x}\) on the interval [−1, 1]. The most general solution has the form u(t, x) = h(t + x) for some differentiable function h: (−1, ∞) → ℂ. The function h is determined on the interval [−1, 1] by the initial data alone, and so the initial data alone fixes the solution on the strip −1 − t ≤ x ≤ 1 − t. Therefore, one is not allowed to specify any boundary conditions at x = −1, whereas data must be specified for u at x = 1 in order to uniquely determine the function h on the interval (1, ∞).

  • Additional difficulties appear when the system has constraints, as in the case of electromagnetism and general relativity. In the previous Section 4, we saw in the case of Einstein’s equations that it is usually sufficient to solve these constraints on an initial Cauchy surface, since the Bianchi identities and the evolution equations imply that the constraints propagate. However, in the presence of boundaries one can only guarantee that the constraints remain satisfied inside the future domain of dependence of the initial surface Σ0 := {0} × Σ, unless the boundary conditions are chosen with care. Methods for constructing constraint-preserving boundary conditions, which make sure that the constraints propagate correctly on the whole spacetime domain [0, T] × Σ, will be discussed in Section 6.
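The advection example in the second item above can be made concrete numerically. The sketch below is our illustration (scheme, resolution, and the choice h = sin are ours): it solves u_t = u_x on [−1, 1] with a first-order upwind scheme, prescribes data only at x = 1, and converges to the exact solution u(t, x) = h(t + x).

```python
import numpy as np

def solve_advection(N, T=0.5, cfl=0.5):
    h = np.sin                      # any differentiable h: u(t, x) = h(t + x)
    x = np.linspace(-1.0, 1.0, N + 1)
    dx = x[1] - x[0]
    dt = cfl * dx
    u = h(x)                        # initial data f(x) = h(x)
    t = 0.0
    while t < T - 1e-12:
        dt_step = min(dt, T - t)
        # One-sided (upwind) difference toward +x: information travels
        # along the characteristics t + x = const, i.e. from the right.
        u[:-1] += dt_step * (u[1:] - u[:-1]) / dx
        t += dt_step
        u[-1] = h(t + 1.0)          # boundary data at x = 1 only
    return x, u, h(T + x)

x, u, u_exact = solve_advection(400)
# No boundary condition was imposed at x = -1, yet the solution is
# determined everywhere; the numerical error is at the truncation level.
assert np.max(np.abs(u - u_exact)) < 0.01
```

Imposing data at x = −1 instead would over-determine the problem, mirroring the continuum argument: the number and location of boundary conditions is dictated by the direction of the characteristics.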

There are two common techniques for analyzing an IBVP. The first, discussed in Section 5.1, is based on the linearization and localization principles, and reduces the problem to linear, constant coefficient IBVPs which can be explicitly solved using Fourier transformations, similar to the case without boundaries. This approach, called the Laplace method, is very useful for finding necessary conditions for the well-posedness of linear, constant coefficient IBVPs. Likely, these conditions are also necessary for the quasilinear IBVP, since small-amplitude high-frequency perturbations are essentially governed by the corresponding linearized, frozen coefficient problem. Based on the Kreiss symmetrizer construction [258] and the theory of pseudo-differential operators, the Laplace method also gives sufficient conditions for the linear, variable coefficient problem to be well posed; however, the general theory is rather technical. For a discussion and interpretation of this approach in terms of wave propagation we refer to [241].

The second method, which is discussed in Section 5.2, is based on energy inequalities obtained from integration by parts and does not require the use of pseudo-differential operators. It provides a class of boundary conditions, called maximal dissipative, which leads to a well-posed IBVP. Essentially, these boundary conditions specify data to the incoming normal characteristic fields, or to an appropriate linear combination of the in- and outgoing normal characteristic fields. Although technically less involved than the Laplace one, this method requires the evolution equations (5.1) to be symmetric hyperbolic in order to be applicable, and it gives sufficient, but not necessary, conditions for well-posedness.

In Section 5.3 we also discuss absorbing boundary conditions, which are designed to minimize spurious reflections from the boundary surface.

The Laplace method

Upon linearization and localization, the IBVP (5.1, 5.2, 5.3) reduces to a linear, constant-coefficient problem of the following form,

$${u_t} = \sum\limits_{j = 1}^n {{A^j}} {\partial \over {\partial {x^j}}}u + {F_0}(t,x),x \in \Sigma ,\quad t \geq 0,$$
$$u(0,x) = f(x),\quad \quad x \in \Sigma ,$$
$$bu = g(t,x),\quad \quad x \in \partial \Sigma ,\quad t \geq 0,$$

where Aj = Aj(t0, x0, u(0)(t0, x0)), b = b(t0, x0, u(0)(t0, x0)) denote the matrix coefficients corresponding to Aj(t, x, u) and b(t, x, u) linearized about a solution u(0) and frozen at the point p0 = (t0, x0), and where, for generality, we include the forcing term F0(t, x) with components in the class \(C_b^\infty ([0,\infty) \times \Sigma)\). Since the freezing process involves a zoom into a very small neighborhood of p0, we may replace Σ by ℝn for all points p0 lying inside the domain Σ. We are then back into the case of Section 3, and we conclude that a necessary condition for the IBVP (5.1, 5.2, 5.3) to be well posed at u(0) is that all linearized, frozen coefficient Cauchy problems corresponding to p0 ∈ Σ are well posed. In particular, the equation (5.6) must be strongly hyperbolic.

Now let us consider a point p0 ∈ ∂Σ at the boundary. Since ∂Σ is assumed to be smooth, it will be mapped to a plane during the freezing process. Therefore, taking points p0 ∈ ∂Σ, it is sufficient to consider the linear, constant coefficient IBVP (5.6, 5.7, 5.8) on the half space

$$\Sigma : = \{({x_1},{x_2}, \ldots ,{x_n}) \in {{\mathbb R}^n}:{x_1} > 0\} ,$$

say. This is the subject of this subsection. Because we are dealing with a constant coefficient problem on the half-space, we can reduce the problem to an ordinary differential boundary problem on the interval [0, ∞) by employing Fourier transformation in the directions t and y:= (x2, …, x n ) tangential to the boundary. More precisely, we first exponentially damp the function u(t, x) in time by defining for η > 0 the function

$${u_\eta}(t,x): = \left\{{\begin{array}{*{20}c} {{e^{- \eta t}}\,u(t,x)} & {{\rm{for}}\;t \geq 0,x \in \Sigma ,} \\ {0\quad \quad \;\,\quad} & {{\rm{for}}\;t < 0,x \in \Sigma .} \\ \end{array}} \right.$$

We denote by ûη(ξ, x1, k) the Fourier transformation of uη(t, x1, y) with respect to the directions t and y tangential to the boundary and define the Laplace-Fourier transformation of u by

$$\tilde u(s,{x_1},k): = {\hat u_\eta}(\xi ,{x_1},k) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{- st - ik\cdot y}}} u(t,{x_1},y)dt{d^{n - 1}}y,\qquad s: = \eta + i\xi ,$$

then, ũ satisfies the following boundary value problem,

$$A{\partial \over {\partial {x_1}}}\tilde u = B(s,k)\tilde u + \tilde F(s,{x_1},k),{x_1} > 0,$$
$$b\tilde u = \tilde g(s,k)\quad \quad \quad {x_1} = 0,$$

where, for notational simplicity, we set A := A1 and Bj:= Aj, j = 2, …, n, and where B(s, k):= sIiB2k2 − … − iBnk n . Here, \(\tilde F(s,{x_1},k) = {{\tilde F}_0}(s,{x_1},k) + \hat f({x_1},k)\) with \({{\tilde F}_0}\) and \({\hat f}\) denoting the Laplace-Fourier and Fourier transform, respectively, of F0 and f, and \(\tilde g(s,k)\) is the Laplace-Fourier transform of the boundary data g.
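As a quick numerical check (not part of the original argument), one can verify that the Laplace transform along a line s = η + iξ is nothing but the Fourier transform of the damped function uη. The sketch below is our own one-dimensional illustration (no tangential directions), using the sample signal u(t) = e−t sin(3t), whose exact Laplace transform is 3/((s + 1)2 + 9):

```python
import numpy as np

# Numerical sketch: the Laplace transform along s = eta + i*xi equals the
# Fourier transform of the damped signal u_eta(t) = e^{-eta t} u(t).
# Sample signal: u(t) = e^{-t} sin(3t), exact transform 3 / ((s+1)^2 + 9).
t = np.linspace(0.0, 40.0, 40001)   # truncated integration range; u decays fast
dt = t[1] - t[0]
u = np.exp(-t) * np.sin(3.0 * t)

eta, xi = 0.7, 2.0                  # any eta > 0 works
s = eta + 1j * xi
# e^{-s t} u = e^{-i xi t} * (e^{-eta t} u): damping plus Fourier kernel
numeric = np.sum(np.exp(-s * t) * u) * dt
exact = 3.0 / ((s + 1.0) ** 2 + 9.0)
assert abs(numeric - exact) < 1e-3
print("Laplace transform check:", numeric, "vs", exact)
```

The choice of signal and of the point (η, ξ) is ours; any η > 0 gives the same agreement, up to quadrature and truncation error.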

In the following, we assume for simplicity that the boundary matrix A is invertible, and that the equation (5.6) is strongly hyperbolic. An interesting example with a singular boundary matrix is mentioned in Example 26 below. If A can be inverted, then we rewrite Eq. (5.12) as the linear ordinary differential equation

$${\partial \over {\partial {x_1}}}\tilde u = M(s,k)\tilde u + {A^{- 1}}\tilde F(s,{x_1},k),\qquad {x_1} > 0,$$

where M(s,k):= A−1B(s,k). We solve this equation subject to the boundary conditions (5.13) and the requirement that ũ vanishes as x1 → ∞. For this, it is useful to have information about the eigenvalues of M(s, k).

Lemma 3 ([258, 259, 228]). Suppose the equation (5.6) is strongly hyperbolic and the boundary matrix A has q negative and mq positive eigenvalues. Then, M(s, k) has precisely q eigenvalues with negative real part and mq eigenvalues with positive real part. (The eigenvalues are counted according to their algebraic multiplicity.) Furthermore, there is a constant δ > 0 such that the eigenvalues κ of M(s,k) satisfy the estimate

$$\vert Re(\kappa)\vert \;\, \geq \delta Re(s),$$

for all Re(s) > 0 and k ∈ ℝn−1.

Proof. Let Re(s) > 0, β ∈ ℝ and k ∈ ℝn−1. Then

$$M(s,k) - i\beta I = {A^{- 1}}\;\left[ {sI - i\beta A - i{k_j}{B^j}} \right] = {A^{- 1}}\;\left[ {sI - {P_0}(i\beta ,ik)} \right].$$

Since the equation (5.6) is strongly hyperbolic there is a constant K and matrices S(β, k) such that (see the comments below Definition 2)

$$\vert S(\beta ,k)\vert + \vert S{(\beta ,k)^{- 1}}\vert \; \leq K,\qquad S{(\beta ,k)^{- 1}}{P_0}(i\beta ,ik)S(\beta ,k) = i\Lambda (\beta ,k),$$

for all (β, k) ∈ ℝn, where Λ(β, k) is a real, diagonal matrix. Hence,

$$M(s,k) - i\beta I = {A^{- 1}}S(\beta ,k)\left[ {sI - i\Lambda (\beta ,k)} \right]S{(\beta ,k)^{- 1}},$$

and since sIiΛ(β, k) is diagonal and its diagonal entries have real part greater than or equal to Re(s), it follows that

$$\vert {[M(s,k) - i\beta I]^{- 1}}\vert \; \leq \;\vert A\vert \vert S(\beta ,k)\vert \vert S{(\beta ,k)^{- 1}}\vert \vert {[sI - i\Lambda (\beta ,k)]^{- 1}}\vert \leq {1 \over {\delta {\rm Re} (s)}},$$

with δ := (K2∣A∣)−1. Therefore, the eigenvalues κ of M(s, k) must satisfy

$$\vert \kappa - i\beta \vert \; \geq \delta {\rm Re} (s)$$

for all β ∈ ℝ. Choosing β:= Im(κ) proves the inequality (5.15). Furthermore, since the eigenvalues κ = κ(s, k) can be chosen to be continuous functions of (s, k) [252], and since for k = 0, M(s, 0) = sA−1, the number of eigenvalues κ with positive real part is equal to the number of positive eigenvalues of A. □
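Lemma 3 is easy to probe numerically. The following sketch (our own check, not part of the proof) uses the 2 × 2 matrices A = diag(1, −1) and B = [[0, 1], [1, 0]] of the Dirac system in Example 25 below; here A has q = 1 negative eigenvalue, the eigenvalues of M(s, k) are ±√(s2 + k2), and one can even take δ = 1:

```python
import numpy as np

# Sketch: check Lemma 3 for A = diag(1,-1), B = [[0,1],[1,0]] (Dirac-type
# system of Example 25).  A has q = 1 negative eigenvalue, so M(s,k) =
# A^{-1}(s I - i k B) should have exactly one eigenvalue in each half plane,
# with |Re(kappa)| >= delta * Re(s).
A = np.diag([1.0, -1.0])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
Ainv = np.linalg.inv(A)

rng = np.random.default_rng(0)
for _ in range(200):
    s = rng.uniform(0.05, 5.0) + 1j * rng.uniform(-5.0, 5.0)  # Re(s) > 0
    k = rng.uniform(-5.0, 5.0)
    M = Ainv @ (s * np.eye(2) - 1j * k * B)
    kappa = np.linalg.eigvals(M)
    assert np.sum(kappa.real < 0) == 1 and np.sum(kappa.real > 0) == 1
    # for this example kappa = +/- sqrt(s^2 + k^2), so delta = 1 works:
    assert np.min(np.abs(kappa.real)) >= 0.999 * s.real
print("Lemma 3 verified on 200 random (s, k) samples")
```

The same check applies verbatim to any strongly hyperbolic pair (A, Bj) with invertible boundary matrix, with the appropriate q and some δ > 0.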

According to this lemma, the Jordan normal form of the matrix M(s, k) has the following form:

$$M(s,k) = T(s,k)\left[ {D(s,k) + N(s,k)} \right]T{(s,k)^{- 1}},$$

with T(s, k) a regular matrix, N(s, k) nilpotent (N(s, k)m = 0), and

$$D(s,k) = {\rm{diag}}({\kappa _1}, \ldots ,{\kappa _q},{\kappa _{q + 1}}, \ldots ,{\kappa _m})$$

is the diagonal matrix with the eigenvalues of M(s,k), where κ1, …, κq have negative real part. Furthermore, N(s, k) commutes with D(s, k). Transforming to the variable ṽ(s, x, k) := T(s, k)−1ũ(s, x, k), the boundary value problem (5.12, 5.13) simplifies to

$${\partial \over {\partial {x_1}}}\tilde v = \left[ {D(s,k) + N(s,k)} \right]\tilde v + T{(s,k)^{- 1}}{A^{- 1}}\tilde F(s,{x_1},k),{x_1} > 0,$$
$$bT(s,k)\tilde v = \tilde g(s,k)\quad \quad \quad \quad \quad {x_1} = 0.$$

Necessary conditions for well-posedness and the Lopatinsky condition

Having cast the IBVP into the ordinary differential system (5.23, 5.24), we are ready to obtain a simple necessary condition for well-posedness. For this, we consider the problem for \(\tilde F = 0\) and split ṽ = (ṽ−, ṽ+), where ṽ− := (ṽ1, …, ṽq) and ṽ+ := (ṽq+1, …, ṽm) are the variables corresponding to the eigenvalues of M(s, k) with negative and positive real parts, respectively. Accordingly, we split

$$D(s,k) = \left({\begin{array}{*{20}c} {{D_ -}(s,k)} & 0 \\ 0 & {{D_ +}(s,k)} \\ \end{array}} \right),\qquad N(s,k) = \left({\begin{array}{*{20}c} {{N_ -}(s,k)} & 0 \\ 0 & {{N_ +}(s,k)} \\ \end{array}} \right)\;,$$

and bT(s, k) = (b−(s, k), b+(s, k)). When \(\tilde F = 0\) the most general solution of Eq. (5.23) is

$$\begin{array}{*{20}c} {{{\tilde v}_ -}(s,{x_1},k) = {e^{{D_ -}(s,k){x_1}}}{e^{{N_ -}(s,k){x_1}}}{\sigma _ -}(s,k),} \\ {{{\tilde v}_ +}(s,{x_1},k) = {e^{{D_ +}(s,k){x_1}}}{e^{{N_ +}(s,k){x_1}}}{\sigma _ +}(s,k),} \\ \end{array}$$

with constant vectors σ−(s, k) ∈ ℂq and σ+(s, k) ∈ ℂm−q. The expression for ṽ+ describes modes that grow exponentially in x1 and violate the requirement that ũ vanish as x1 → ∞ unless σ+(s, k) = 0; hence, we set σ+(s, k) = 0. In view of the boundary conditions (5.24), we then obtain the algebraic equation

$${b_ -}(s,k){\sigma _ -}(s,k) = \tilde g.$$

Therefore, a necessary condition for existence and uniqueness is that the r × q matrix b−(s, k) be a square matrix, i.e., r = q, and that

$$\det ({b_ -}(s,k)) \neq 0$$

for all Re(s) > 0 and k ∈ ℝn−1. Let us make the following observations:

  • The condition (5.27) implies that we must specify exactly as many linearly-independent boundary conditions as there are incoming characteristic fields, since q is the number of negative eigenvalues of the boundary matrix A = A1.

  • The violation of condition (5.27) at some (s0, k0) with Re(s0) > 0 and k0 ∈ ℝn−1 gives rise to the simple wave solutions

    $$u(t,{x_1},y) = {e^{{s_0}t + i{k_0}\cdot y}}\tilde u({s_0},{x_1},{k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$

    where ũ(s0, ·, k0) = T(s0, k0)ṽ(s0, ·, k0) ∈ L2(0, ∞) is a nontrivial solution of the problem (5.23, 5.24) with homogeneous data \(\tilde F = 0\) and \(\tilde g = 0\). Therefore, an equivalent necessary condition for well-posedness is that no such simple wave solutions exist. This is known as the Lopatinsky condition.

  • If such a simple wave solution exists for some (s0, k0), then the homogeneity of the problem implies the existence of a whole family,

    $${u_\alpha}(t,{x_1},y) = {e^{\alpha ({s_0}t + i{k_0}\cdot y)}}\tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$

    of such solutions parametrized by α > 0. In particular, it follows that

    $$\vert {u_\alpha}(t,{x_1},y)\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert \tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0})\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert {u_\alpha}(0,{x_1},y)\vert ,$$

    such that

    $${{\vert {u_\alpha}(t,{x_1},y)\vert} \over {\vert {u_\alpha}(0,{x_1},y)\vert}} = {e^{\alpha {\rm{Re}}({s_0})t}} \rightarrow \infty$$

    for all t > 0, as α → ∞. Therefore, one has solutions growing exponentially in time at an arbitrarily large rate.Footnote 18

Example 25. Consider the IBVP for the massless Dirac equation in two spatial dimensions (cf. Section 8.4.1 in [259]),

$${u_t} = \left({\begin{array}{*{20}c} 1 & {\,\;0} \\ 0 & {- 1} \\ \end{array}} \right){u_x} + \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right){u_y},t \geq 0,\quad x \geq 0,\quad y \in {\mathbb R},\qquad u = \left({\begin{array}{*{20}c} {{u_1}} \\ {{u_2}} \\ \end{array}} \right)$$
$$u(0,x,y) = f(x,y),\quad \quad x \geq 0,\quad y \in {\mathbb R},$$
$$a{u_1} + b{u_2} = g(t,y),\quad \quad t \geq 0,\quad y \in {\mathbb R},$$

where a and b are two complex constants to be determined. Assuming f = 0, Laplace-Fourier transformation leads to the boundary-value problem

$${\tilde u_x} = M(s,k)\tilde u,\quad x > 0,\qquad M(s,k) = \left({\begin{array}{*{20}c} {\;s} & {- ik} \\ {ik} & {- s} \\ \end{array}} \right)$$
$$a{\tilde u_1} + b{\tilde u_2} = \tilde g(s,k),x = 0.$$

The eigenvalues and corresponding eigenvectors of the matrix M(s, k) are κ± = ±λ and e± = (ik, s ∓ λ)T, with \(\lambda := \sqrt {{s^2} + {k^2}}\), where the root is chosen such that Re(λ) > 0 for Re(s) > 0. The solution, which is square integrable on [0, ∞), is the one associated with κ−; that is,

$$\tilde u(s,x,k) = \sigma {e^{- \lambda x}}{e_ -},$$

with σ a constant. Inserting this into the boundary condition (5.36) leads to the condition

$$\left[ {ika + (s + \lambda)b} \right]\sigma = \tilde g(s,k),$$

and the Lopatinsky condition is satisfied if and only if the expression inside the square brackets on the left-hand side is different from zero for all Re(s) > 0 and k ∈ ℝ. Clearly, this implies b ≠ 0, since otherwise this expression is zero for k = 0. Assuming b ≠ 0 and k ≠ 0, we then obtain the condition

$$z + \sqrt {{z^2} + 1} \pm i{a \over b} \neq 0,$$

for all z:= s/∣k∣ with Re(z) > 0, which is the case if and only if ∣a/b∣ ≤ 1 or a/b ∈ ℝ; see Figure 1. The particular case a = 0, b = 1 corresponds to fixing the incoming normal characteristic field u2 to g at the boundary.
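For a concrete instance of a violation, take a = 2i and b = 1, so that a/b is non-real with ∣a/b∣ > 1. The sketch below (our own numerical illustration, with this particular choice of a and b) exhibits the corresponding simple wave solution with homogeneous boundary data:

```python
import numpy as np

# Sketch: simple-wave solution violating the Lopatinsky condition for
# a = 2i, b = 1 (a/b non-real, |a/b| > 1); here z0 = s0/|k0| = 3/4.
s0, k0 = 0.75, 1.0
lam = np.sqrt(s0 ** 2 + k0 ** 2)          # lambda = 5/4, Re(lambda) > 0
e_minus = np.array([1j * k0, s0 + lam])   # decaying eigenvector e_-

# e_- belongs to the eigenvalue -lambda of M(s0, k0), so the mode ~ e^{-lam x}
M = np.array([[s0, -1j * k0], [1j * k0, -s0]])
assert np.allclose(M @ e_minus, -lam * e_minus)

# the homogeneous boundary condition a*u1 + b*u2 = 0 holds exactly:
a, b = 2j, 1.0
assert abs(a * e_minus[0] + b * e_minus[1]) < 1e-12
# hence u = e^{s0 t + i k0 y - lam x} e_- solves the IBVP with g = 0 and grows
# like e^{3t/4}; rescaling (s0, k0) by alpha gives arbitrarily fast growth.
```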

Figure 1

Image of the lines Re(z) = const > 0 under the map \({\rm{\mathbb C}} \rightarrow {\rm{\mathbb C,}}\,z \mapsto z + \sqrt {{z^2} + 1}\).
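The role of the map shown in Figure 1 can also be explored numerically: scanning a grid in the right half plane, the quantity min∣ψ(z) ± i a/b∣ stays bounded away from zero when ∣a/b∣ < 1, but nearly vanishes for a non-real ratio such as a/b = 2i, for which ψ(3/4) = 2. A minimal sketch, with grid parameters and test ratios chosen by us:

```python
import numpy as np

# Sketch: probe the Lopatinsky condition of Example 25 on a grid in Re(z) > 0.
# psi(z) = z + sqrt(z^2 + 1); for Re(z) > 0 the principal square root already
# has Re > 0, which is the required branch.
def psi(z):
    return z + np.sqrt(z * z + 1.0)

re, im = np.meshgrid(np.linspace(0.01, 4.0, 400), np.linspace(-4.0, 4.0, 401))
z = re + 1j * im

def min_gap(ratio):  # ratio = a/b; gap = min over grid of |psi(z) +/- i a/b|
    return min(np.min(np.abs(psi(z) + 1j * ratio)),
               np.min(np.abs(psi(z) - 1j * ratio)))

assert min_gap(0.5) > 0.4    # |a/b| = 1/2: no root, condition satisfied
assert min_gap(2.0j) < 0.05  # a/b = 2i: psi(3/4) = 2, condition violated
print("gaps:", min_gap(0.5), min_gap(2.0j))
```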

Example 26. We consider the Maxwell evolution equations of Example 15 on the half-space x1 > 0, and freeze the incoming normal characteristic fields to zero at the boundary. These fields are the ones defined in Eq. (3.54), which correspond to negative eigenvalues and k = −x̂1;Footnote 19 hence

$${E_1} + {\mu \over \beta}({W_{22}} + {W_{33}}) = 0,\qquad {E_A} + {W_{1A}} - (1 + \alpha){W_{A1}} = 0,\qquad {x_1} = 0,\quad {x_A} \in {\mathbb R},\quad t \geq 0,$$

where A = 2, 3 label the coordinates tangential to the boundary, and where we recall that \(\mu = \sqrt {\alpha \beta}\), assuming that α and β have the same sign such that the evolution system (3.50, 3.51) is strongly hyperbolic. In this example, we apply the Lopatinsky condition in order to find necessary conditions for the resulting IBVP to be well posed. For simplicity, we assume that \(\mu = \sqrt {\alpha \beta} = 1\), which implies that the system is strongly hyperbolic for all values of α ≠ 0, but symmetric hyperbolic only if −3/2 < α < 0; see Example 15.

In order to analyze the system, it is convenient to introduce the variables U1 := W22 + W33, UA := W1A − (1 + α)WA1, Z := βW11 − (1 + β/2)U1, and \({{\bar W}_{AB}}: = {W_{AB}} - {\delta _{AB}}{U_1}/2\), which are motivated by the form of the characteristic fields with respect to the direction k = −x̂1 normal to the boundary x1 = 0; see Example 15. With these assumptions and definitions, Laplace-Fourier transformation of the system (3.50, 3.51) yields

$$\begin{array}{*{20}c} {s{{\tilde E}_1} = - \alpha {\partial _1}{{\tilde U}_1} + i{k^A}\;\left[ {(1 + \alpha){{\tilde U}_A} + \alpha (2 + \alpha){{\tilde W}_{A1}}} \right]\;,\quad \quad \quad \quad \quad \quad \;} \\ {s{{\tilde E}_A} = - {\partial _1}{{\tilde U}_A} - i{k^B}\;\left[ {{{\tilde \bar W}_{BA}} - (1 + \alpha){{\tilde \bar W}_{AB}}} \right] - \alpha i{k_A}\left[ {\alpha \tilde Z + (1 + \alpha){{\tilde U}_1}} \right]\;,} \\ {s{{\tilde U}_1} = - {1 \over \alpha}\;\left[ {{\partial _1}{{\tilde E}_1} + (1 + \alpha)i{k^A}{{\tilde E}_A}} \right]\;,\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;} \\ {s{{\tilde U}_A} = - {\partial _1}{{\tilde E}_A} + (1 + \alpha)i{k_A}{{\tilde E}_1},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {s\tilde Z = {{3 + 2\alpha} \over {2\alpha}}i{k^A}{{\tilde E}_A},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\,} \\ {s{{\tilde W}_{A1}} = - i{k_A}{{\tilde E}_1},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \,} \\ {s{{\tilde \bar W}_{AB}} = - i{k_A}{{\tilde E}_B} + {i \over 2}{\delta _{AB}}{k^C}{{\tilde E}_C},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;\;} \\ \end{array}$$

where we have used β = 1/α since μ = 1. The last three equations are purely algebraic and can be used to eliminate the zero speed fields \(\tilde Z,{{\tilde W}_{A1}}\) and \({{\tilde \bar W}_{AB}}\) from the remaining equations. The result is the ordinary differential system

$$\begin{array}{*{20}c} {{\partial _1}{{\tilde E}_1} = - \alpha s{{\tilde U}_1} - (1 + \alpha)i{k^A}{{\tilde E}_A},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;\,} \\ {{\partial _1}{{\tilde U}_1} = - \;\left[ {{s \over \alpha} - (2 + \alpha){{\vert k{\vert ^2}} \over s}} \right]\;{{\tilde E}_1} + {{1 + \alpha} \over \alpha}i{k^A}{{\tilde U}_A},\quad \quad \quad \quad \quad \;} \\ {{\partial _1}{{\tilde E}_A} = - s{{\tilde U}_A} + (1 + \alpha)i{k_A}{{\tilde E}_1},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;} \\ {{\partial _1}{{\tilde U}_A} = - \;\left[ {s + {{\vert k{\vert ^2}} \over s}} \right]\;{{\tilde E}_A} + {{(1 + \alpha)}^2}{{{k_A}{k^B}} \over s}{{\tilde E}_B} - \alpha (1 + \alpha)i{k_A}{{\tilde U}_1}.} \\ \end{array}$$

In order to diagonalize this system, we decompose ẼA and ŨA into their components parallel and orthogonal to k; if \(\hat k: = k/\vert k\vert\) and \({\hat l}\) form an orthonormal basis of the boundary x1 = 0,Footnote 20 then these are defined as

$${\tilde E_{\Vert}}: = {\hat k^A}{\tilde E_A},\qquad {\tilde E_ \bot}: = {\hat l^A}{\tilde E_A},\qquad {\tilde U_{\Vert}}: = {\hat k^A}{\tilde U_A},\qquad {\tilde U_ \bot}: = {\hat l^A}{\tilde U_A}.$$

then, the system decouples into two blocks, one comprising the transverse quantities (Ẽ⊥, Ũ⊥) and the other the quantities (Ẽ1, Ũ1, Ẽ∥, Ũ∥). The first block gives

$${\partial _1}\left({\begin{array}{*{20}c} {{{\tilde E}_ \bot}} \\ {{{\tilde U}_ \bot}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} 0 & {- s} \\ {- \left[ {s + {{\vert k{\vert ^2}} \over s}} \right]} & 0 \\ \end{array}} \right)\;\,\left({\begin{array}{*{20}c} {{{\tilde E}_ \bot}} \\ {{{\tilde U}_ \bot}} \\ \end{array}} \right)\;,$$

and the corresponding solutions with exponential decay at x1 → ∞ have the form

$$\left({\begin{array}{*{20}c} {{{\tilde E}_ \bot}(s,{x_1},k)} \\ {{{\tilde U}_ \bot}(s,{x_1},k)} \\ \end{array}} \right) = {\sigma _0}{e^{- \lambda {x_1}}}\left({\begin{array}{*{20}c} s \\ \lambda \\ \end{array}} \right),$$

where σ0 is a complex constant, and where we have defined \(\lambda := \sqrt {{s^2} + \vert k{\vert ^2}}\) with the root chosen such that Re(λ) > 0 for Re(s) > 0. The second block is

$${\partial _1}\left({\begin{array}{*{20}c} {{{\tilde E}_1}} \\ {{{\tilde U}_1}} \\ {{{\tilde E}_{\Vert}}} \\ {{{\tilde U}_{\Vert}}} \\ \end{array}} \right) = \left({\begin{array}{*{20}c} 0 & {- \alpha s} & {- i(1 + \alpha)\vert k\vert} & 0 \\ {- {s \over \alpha} + (2 + \alpha){{\vert k{\vert ^2}} \over s}} & 0 & 0 & {i{{1 + \alpha} \over \alpha}\vert k\vert} \\ {i(1 + \alpha)\vert k\vert} & 0 & 0 & {- s} \\ 0 & {- i\alpha (1 + \alpha)\vert k\vert} & {- s + \alpha (2 + \alpha){{\vert k{\vert ^2}} \over s}} & 0 \\ \end{array}} \right)\;\,\left({\begin{array}{*{20}c} {{{\tilde E}_1}} \\ {{{\tilde U}_1}} \\ {{{\tilde E}_{\Vert}}} \\ {{{\tilde U}_{\Vert}}} \\ \end{array}} \right)\;\,,$$

with corresponding decaying solutions

$$\left({\begin{array}{*{20}c} {{{\tilde E}_1}(s,{x_1},k)} \\ {{{\tilde U}_1}(s,{x_1},k)} \\ {{{\tilde E}_{\Vert}}(s,{x_1},k)} \\ {{{\tilde U}_{\Vert}}(s,{x_1},k)} \\ \end{array}} \right) = {\sigma _1}{e^{- \lambda {x_1}}}\left({\begin{array}{*{20}c} {i\vert k\vert s} \\ {- i\vert k\vert \lambda} \\ {s\lambda} \\ {{s^2} - \alpha \vert k{\vert ^2}} \\ \end{array}} \right) + {\sigma _2}{e^{- \lambda {x_1}}}\left({\begin{array}{*{20}c} {is\lambda} \\ {i({s^2}/\alpha - \vert k{\vert ^2})} \\ {\vert k\vert s} \\ {- \alpha \vert k\vert \lambda} \\ \end{array}} \right)\;\,,$$

with complex constants σ1 and σ2.

On the other hand, Laplace-Fourier transformation of the boundary conditions (5.40) leads to

$${\tilde E_1} + \alpha {\tilde U_1} = 0,\quad {\tilde E_A} + {\tilde U_A} = 0,\qquad {x_1} = 0.$$

Inserting the solutions (5.43, 5.45) into these conditions gives

$$(s + \lambda){\sigma _0} = 0$$


and

$$\left({\begin{array}{*{20}c} {\vert k\vert (s - \alpha \lambda)} & {s\lambda + {s^2} - \alpha \vert k{\vert ^2}} \\ {s\lambda + {s^2} - \alpha \vert k{\vert ^2}} & {\vert k\vert (s - \alpha \lambda)} \\ \end{array}} \right)\;\,\left({\begin{array}{*{20}c} {{\sigma _1}} \\ {{\sigma _2}} \\ \end{array}} \right) = 0.$$

In the first case, since Re(s + λ) ≥ Re(s) > 0, we obtain σ0 = 0 and there are no simple wave solutions in the transverse sector. In the second case, the determinant of the system is

$$- {s^2}\left[ {{{(s + \lambda)}^2} - {{(1 + \alpha)}^2}\vert k{\vert ^2}} \right]\;\,,$$

which is different from zero if and only if \(z + \sqrt {{z^2} + 1} \neq \pm (1 + \alpha)\) for all Re(z) > 0, where z:= s/∣k∣. Since α is real, this is the case if and only if −2 ≤ α ≤ 0; see Figure 1.

We conclude that the strongly hyperbolic evolution system (3.50, 3.51) with αβ = 1 and incoming normal characteristic fields set to zero at the boundary does not give rise to a well-posed IBVP when α > 0 or α < −2. Note that this ill-posed range excludes the interval −3/2 < α < 0, for which the system is symmetric hyperbolic; that case is covered by the results in Section 5.2, which utilize energy estimates and show that symmetric hyperbolic problems with zero incoming normal characteristic fields are well posed.
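The determinant condition can be checked numerically in the same way as in Example 25. The sketch below (our own illustration, with sample values of α chosen by us) confirms that ψ(z) = ±(1 + α) has a root with Re(z) > 0 for a value α > 0 but not for a value of α inside [−2, 0]:

```python
import numpy as np

# Sketch: the determinant of Example 26 vanishes iff psi(z) = +/-(1 + alpha)
# for some Re(z) > 0, where psi(z) = z + sqrt(z^2 + 1) (principal branch,
# whose real part is positive for Re(z) > 0).
def psi(z):
    return z + np.sqrt(z * z + 1.0)

re, im = np.meshgrid(np.linspace(0.005, 4.0, 800), np.linspace(-4.0, 4.0, 801))
z = re + 1j * im

def det_gap(alpha):  # min over the grid of |psi(z) -/+ (1 + alpha)|
    return min(np.min(np.abs(psi(z) - (1.0 + alpha))),
               np.min(np.abs(psi(z) + (1.0 + alpha))))

assert det_gap(-1.0) > 0.9   # alpha in [-2, 0]: no zero, Lopatinsky holds
assert det_gap(0.5) < 0.05   # alpha = 1/2 > 0: zero near z = 5/12, fails
```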

Sufficient conditions for well-posedness and boundary stability

Next, let us discuss sufficient conditions for the linear, constant coefficient IBVP (5.6, 5.7, 5.8) to be well posed. For this, we first transform the problem to trivial initial data by replacing u(t, x) with u(t, x) − e−tf(x). Then, we obtain the IBVP

$${u_t} = \sum\limits_{j = 1}^n {{A^j}} {\partial \over {\partial {x^j}}}u + F(t,x),x \in \Sigma ,\quad t \geq 0,$$
$$u(0,x) = 0,\quad \quad x \in \Sigma ,$$
$$bu = g(t,x),\quad \quad x \in \partial \Sigma ,\quad t \geq 0,$$

with \(F(t,x) = {F_0}(t,x) - {e^{- t}}[f(x) + \sum\limits_{j = 1}^n {{A^j}} {\partial \over {\partial {x^j}}}f(x)]\) and g(t, x) replaced by g(t, x) + e−tbf(x). By applying the Laplace-Fourier transformation to it, one obtains the boundary-value problem (5.12, 5.13), which could be solved explicitly, provided the Lopatinsky condition holds. However, in view of the generalization to variable coefficients, one would like to have a method that does not rely on the explicit representation of the solution in Fourier space.

In order to formulate the next definition, let \(\Omega := [0,\infty) \times \bar \Sigma\) be the bulk and \({\mathcal T}: = [0,\infty) \times \partial \Sigma\) the boundary surface, and introduce the associated norms ∥ · ∥η,0,Ω and \(\Vert \cdot \Vert_{\eta, 0, {\mathcal T}}\) defined by

$$\begin{array}{*{20}c} {\Vert u\Vert _{\eta ,0,\Omega}^2\;: = \int\limits_\Omega {{e^{- 2\eta t}}} \vert u(t,{x_1},y){\vert ^2}dt\,d{x_1}{d^{n - 1}}y = \int\limits_{{{\mathbb R}^{n + 1}}} \vert {u_\eta}(t,x){\vert ^2}dt\,{d^n}x,} \\ {\Vert u\Vert _{\eta ,0,T}^2: = \int\limits_T {{e^{- 2\eta t}}} \vert u(t,0,y){\vert ^2}dt\,{d^{n - 1}}y = \int\limits_{{{\mathbb R}^n}} \vert {u_\eta}(t,0,y){\vert ^2}dt\,{d^{n - 1}}y,\;\,} \\ \end{array}$$

where we have used the definition of u η as in Eq. (5.10). Using Parseval’s identities we may also rewrite these norms as

$$\Vert u\Vert _{\eta ,0,\Omega}^2\;: = \int\limits_{\mathbb R} {\left[ {\int\limits_0^\infty {\left({\;\int\limits_{{{\mathbb R}^{n - 1}}} \vert \tilde u(\eta + i\xi ,{x_1},k){\vert ^2}{d^{n - 1}}k} \right)\;} \,d{x_1}} \right]\;\,} d\xi ,$$
$$\Vert u\Vert _{\eta ,0,T}^2: = \int\limits_{\mathbb R} {\left({\;\int\limits_{{{\mathbb R}^{n - 1}}} \vert \tilde u(\eta + i\xi ,0,k){\vert ^2}{d^{n - 1}}k} \right)} \;\,d\xi .$$

The relevant concept of well-posedness is the following one.

Definition 6. [258] The IBVP ( 5.50 , 5.51 , 5.52 ) is called strongly well posed in the generalized sense if there is a constant K > 0 such that each compatible data \(F \in C_0^\infty (\Omega)\) and \(g \in C_0^\infty ({\mathcal T})\) gives rise to a unique solution u satisfying the estimate

$$\eta \,\Vert u\Vert _{\eta ,0,\Omega}^2 + \Vert u\Vert _{\eta ,0,{\mathcal T}}^2\; \leq {K^2}\left({{1 \over \eta}\Vert F\Vert _{\eta ,0,\Omega}^2 + \Vert g\Vert _{\eta ,0,{\mathcal T}}^2} \right)\;,$$

for all η > 0.

The inequality (5.55) implies that both the bulk norm ∥·∥η,0,Ω and the boundary norm \(\Vert \cdot \Vert_{\eta, 0, {\mathcal T}}\) of u are bounded by the corresponding norms of F and g. For a trivial source term, F = 0, the inequality (5.55) implies, in particular,

$$\Vert u\Vert _{\eta ,0,{\mathcal T}} \;\leq K\Vert g\Vert _{\eta ,0,{\mathcal T}},\qquad \eta > 0,$$

which is an estimate for the solution at the boundary in terms of the norm of the boundary data g. In view of Eq. (5.54) this is equivalent to the following requirement.

Definition 7. [259, 267] The boundary problem ( 5.50 , 5.51 , 5.52 ) is called boundary stable if there is a constant K > 0 such that all solutions ũ(s,·,k) ∈ L2(0, ∞) of Eqs. ( 5.12 , 5.13 ) with \(\tilde F = 0\) satisfy

$$\vert \tilde u(s,0,k)\vert \; \leq K\vert \tilde g(s,k)\vert$$

for all Re(s) > 0 and k ∈ ℝn−1.

Since boundary stability only requires considering solutions for trivial source terms, F = 0, it is a much simpler condition than Eq. (5.55). Clearly, strong well-posedness in the generalized sense implies boundary stability. The main result is that, modulo technical assumptions, the converse is also true: boundary stability implies strong well-posedness in the generalized sense.

Theorem 5. [258, 340] Consider the linear, constant coefficient IBVP ( 5.50 , 5.51 , 5.52 ) on the half space Σ = {(x1, x2, …, xn) ∈ ℝn: x1 > 0}. Assume that equation (5.50) is strictly hyperbolic, meaning that the eigenvalues of the principal symbol P0(ik) are distinct for all k ∈ Sn−1. Assume that the boundary matrix A = A1 is invertible. Then, the problem is strongly well posed in the generalized sense if and only if it is boundary stable.

Maybe the importance of Theorem 5 is not so much its statement, which concerns only the linear, constant coefficient case for which the solutions can also be constructed explicitly, but rather the method of its proof, which is based on the construction of a smooth symmetrizer symbol and which is amenable to generalizations to the variable coefficient case using pseudo-differential operators.

In order to formulate the result of this construction, define \(\rho := \sqrt {\vert s{\vert ^2} + \vert k{\vert ^2}}, {s{\prime}}: = s/\rho, {k{\prime}}: = k/\rho\), such that \(({s{\prime}},{k{\prime}}) \in S_ + ^n\) lies on the half sphere \(S_ + ^n: = \{({s{\prime}},{k{\prime}}) \in {\mathbb C} \times {{\mathbb R}^{n - 1}}:\vert {s{\prime}}{\vert ^2} + \vert {k{\prime}}{\vert ^2} = 1,{\rm Re} ({s{\prime}}) > 0\}\) for Re(s) > 0 and k ∈ ℝn−1. Then, we have,

Theorem 6. [258] Consider the linear, constant coefficient IBVP ( 5.50 , 5.51 , 5.52 ) on the half space Σ. Assume that equation (5.50) is strictly hyperbolic, that the boundary matrix A = A1 is invertible, and that the problem is boundary stable. Then, there exists a family of complex m × m matrices \(H({s{\prime}},{k{\prime}}),({s{\prime}},{k{\prime}}) \in S_ + ^n\), whose coefficients belong to the class \({C^\infty}(S_ + ^n)\), with the following properties:

  (i) H(s′, k′) = H(s′, k′)* is Hermitian.

  (ii) H(s′, k′)M(s′, k′) + M(s′, k′)*H(s′, k′) ≥ 2Re(s′)I for all \(({s{\prime}},{k{\prime}}) \in S_ + ^n\).

  (iii) There is a constant C > 0 such that

    $${\tilde u^{\ast}}H(s\prime ,k\prime)\tilde u + C\vert b\tilde u{\vert ^2}\; \geq \;\vert \tilde u{\vert ^2}$$

    for all ũ ∈ ℂm and all \(({s{\prime}},{k{\prime}}) \in S_ + ^n\).

Furthermore, H can be chosen to be a smooth function of the matrix coefficients of Aj and b.

Let us show how the existence of the symmetrizer H(s′, k′) implies the estimate (5.55). First, using Eq. (5.14) and properties (i) and (ii) we have

$$\begin{array}{*{20}c} {{\partial \over {\partial {x_1}}}\;\left[ {{{\tilde u}^{\ast}}H(s\prime ,k\prime)\tilde u} \right] = {{\left({{{\partial \tilde u} \over {\partial {x_1}}}} \right)}^{\ast}}H(s\prime ,k\prime)\tilde u + {{\tilde u}^{\ast}}H(s\prime ,k\prime){{\partial \tilde u} \over {\partial {x_1}}}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;} \\ {= \rho {{\tilde u}^{\ast}}\;\left[ {H(s\prime ,k\prime)M(s\prime ,k\prime) + M{{(s\prime ,k\prime)}^{\ast}}H(s\prime ,k\prime)} \right]\tilde u + 2{\rm{Re}}\left({{{\tilde u}^{\ast}}H(s\prime ,k\prime){A^{- 1}}\tilde F} \right)} \\ {\geq 2{\rm{Re}}(s)\vert \tilde u{\vert ^2} - {C_1}\vert \tilde u{\vert ^2} - {1 \over {{C_1}}}\vert H(s\prime ,k\prime){A^{- 1}}\tilde F{\vert ^2},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ \end{array}$$

where we have used the fact that M(s, k) = ρM(s′, k′) in the second step, and the inequality \(2{{\rm Re}} (a^\ast b) \leq 2\vert a\Vert b\vert \, \leq {C_1}\vert a{\vert ^2} + C_1^{- 1}\vert b{\vert ^2}\) for complex numbers a and b and any positive constant C1 > 0 in the third step. Integrating both sides from x1 = 0 to ∞ and choosing C1 = Re(s), we obtain, using (iii),

$$\begin{array}{*{20}c} {{\rm{Re}}(s)\int\limits_0^\infty \vert \tilde u{\vert ^2}d{x_1} \leq - {{\left[ {{{\tilde u}^{\ast}}H(s\prime ,k\prime)\tilde u} \right]}_{{x_1} = 0}} + {1 \over {{\rm{Re}}(s)}}\int\limits_0^\infty \vert H{A^{- 1}}\tilde F{\vert ^2}d{x_1}\quad \quad \quad \quad \quad \;\;} \\ {\leq - {{\left. {\vert \tilde u{\vert ^2}} \right\vert}_{{x_1} = 0}} + C\vert \tilde g{\vert ^2} + {1 \over {{\rm{Re}}(s)}}\int\limits_0^\infty \vert H{A^{- 1}}\tilde F{\vert ^2}d{x_1}.} \\ \end{array}$$

Since H is bounded, there exists a constant C2 > 0 such that \(\vert H{A^{- 1}}\tilde F\vert \leq {C_2}\vert \tilde F\vert\) for all \(({s{\prime}},{k{\prime}}) \in S_ + ^n\). Integrating over ξ = Im(s) ∈ ℝ and k ∈ ℝn−1 and using Parseval’s identity, we obtain from this

$$\eta \,\Vert u\Vert _{\eta ,0,\Omega}^2 + \Vert u\Vert _{\eta ,0,{\mathcal T}}^2 \leq {{C_2^2} \over \eta}\Vert F\Vert _{\eta ,0,\Omega}^2 + C\Vert g\Vert _{\eta ,0,{\mathcal T}}^2,$$

and the estimate (5.55) follows with \({K^2}: = \max \{C_2^2,C\}\).

Example 27. Let us go back to Example 25 of the 2D Dirac equation on the half-space with boundary condition (5.34) at x = 0. The solution of Eqs. (5.35, 5.36) at the boundary is given by ũ(s, 0, k) = σ(ik, s + λ)T, where \(\lambda = \sqrt {{s^2} + {k^2}}\), and

$$\sigma = {{\tilde g(s,k)} \over {ika + (s + \lambda)b}}.$$

Therefore, the IBVP is boundary stable if and only if there exists a constant K > 0 such that

$${{\sqrt {{k^2} + \vert s + \lambda {\vert ^2}}} \over {\vert ika + (s + \lambda)b\vert}} \leq K$$

for all Re(s) > 0 and k ∈ ℝn−1. We may assume b ≠ 0, otherwise the Lopatinsky condition is violated. For k = 0 the left-hand side is 1/∣b∣. For k ≠ 0 we can rewrite the condition as

$${1 \over {\vert b\vert}}{{\sqrt {1 + \vert \psi (z){\vert ^2}}} \over {\vert \psi (z) \pm i{a \over b}\vert}} \leq K,$$

for all Re(z) > 0, where \(\psi (z): = z + \sqrt {{z^2} + 1}\) and z:= s/∣k∣. This is satisfied if and only if the function \(\vert \psi (z) \pm i{a \over b}\vert\) is bounded away from zero, which is the case if and only if ∣a/b∣ < 1; see Figure 1.

This, together with the results obtained in Example 25, yields the following conclusions: the IBVP (5.32, 5.33, 5.34) gives rise to an ill-posed problem if b = 0 or if ∣a/b∣ > 1 and a/b ∉ ℝ, and to a problem that is strongly well posed in the generalized sense if b ≠ 0 and ∣a/b∣ < 1. The case ∣a∣ = ∣b∣ ≠ 0 is covered by the energy method discussed in Section 5.2. For the case ∣a/b∣ > 1 with a/b ∈ ℝ, see Section 10.5 in [228].
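These two regimes can be probed numerically. The sketch below (our own illustration, with grid resolution and sample ratios chosen by us) evaluates the boundary-stability quotient on a grid and at a near-degenerate point, confirming a finite K when ∣a/b∣ = 1/2 and a blow-up when a/b = 2i:

```python
import numpy as np

# Sketch: evaluate the boundary-stability quotient of Example 27,
#   sqrt(1 + |psi(z)|^2) / (|b| * min_{+/-} |psi(z) +/- i a/b|),
# with psi(z) = z + sqrt(z^2 + 1) (principal branch; Re(psi) > 0 here).
def quotient(a, b, z):
    p = z + np.sqrt(z * z + 1.0)
    gap = np.minimum(np.abs(p + 1j * a / b), np.abs(p - 1j * a / b))
    return np.sqrt(1.0 + np.abs(p) ** 2) / (abs(b) * gap)

re, im = np.meshgrid(np.linspace(0.01, 20.0, 400), np.linspace(-20.0, 20.0, 401))
z = re + 1j * im

# |a/b| = 1/2 < 1: the quotient is uniformly bounded, so the IBVP is
# boundary stable (K is roughly the maximum over the grid):
assert np.max(quotient(0.5, 1.0, z)) < 10.0

# a/b = 2i: psi(3/4) = 2 makes the gap vanish, so no finite K exists:
assert quotient(2.0j, 1.0, 0.75 + 1e-6j) > 1e4
```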

Before discussing second-order systems, let us make a few remarks concerning Theorem 5:

  • The boundary stability condition (5.57) is often called the Kreiss condition. Provided the eigenvalues of the matrix M(s, k) are suitably normalized, it can be shown [258, 228, 241] that the determinant det(b−(s, k)) in Eq. (5.27) can be extended to a continuous function defined for all Re(s) ≥ 0 and k ∈ ℝn−1, and condition (5.57) can be restated as the following algebraic condition:

    $$\det ({b_ -}(s,k)) \neq 0$$

    for all Re(s) ≥ 0 and k ∈ ℝn−1. This is a strengthened version of the Lopatinsky condition, since it requires the determinant to be different from zero also for s on the imaginary axis.

  • As anticipated above, the importance of the symmetrizer construction in Theorem 6 relies on the fact that, based on the theory of pseudo-differential operators, it can be used to treat the linear, variable coefficient IBVP [258]. Therefore, the localization principle holds: if all the frozen coefficient IBVPs are boundary stable and satisfy the assumptions of Theorem 5, then the variable coefficient problem is strongly well posed in the generalized sense.

  • If the problem is boundary stable, it is also possible to estimate higher-order derivatives of the solutions. For example, if we multiply both sides of the inequality (5.59) by ∣k∣2, integrate over ξ = Im(s) and k and use Parseval’s identity as before, we obtain the estimate (5.55) with u, F and g replaced by their tangential derivatives uy, Fy and gy, respectively. Similarly, one obtains the estimate (5.55) with u, F and g replaced by their time derivatives ut, Ft and gt if we multiply both sides of the inequality (5.59) by ∣s∣2 and assume that ut(0, x) = 0 for all x ∈ Σ.Footnote 21 Then, a similar estimate follows for the partial derivative, ∂1u, in the x1-direction using the evolution equation (5.6) and the fact that the boundary matrix A1 is invertible. Estimates for higher-order derivatives of u follow by an analogous process.

  • Theorem 5 assumes that the initial data f is trivial, which is not an important restriction since one can always achieve f = 0 by transforming the source term F and the boundary data g, as described below Eq. (5.52). Since the transformed F involves derivatives of f, this means that derivatives of f would appear on the right-hand side of the inequality (5.55), and at first sight it looks like one “loses a derivative” in the sense that one needs to control the derivatives of f to one degree higher than the ones of u. However, the results in [341, 342] improve the statement of Theorem 5 by allowing nontrivial initial data and by showing that the same hypotheses lead to a stronger concept of well-posedness (strong well-posedness, defined below in Definition 9 as opposed to strong well-posedness in the generalized sense).

  • The results mentioned so far assume strict hyperbolicity and an invertible boundary matrix, conditions that are too restrictive for many applications. Unfortunately, there does not seem to exist a general theory that removes these two assumptions. Partial results include [5], which treats strongly hyperbolic problems with an invertible boundary matrix that are not necessarily strictly hyperbolic, and [293], which discusses symmetric hyperbolic problems with a singular boundary matrix.

Second-order systems

It has been shown in [267] that certain systems of wave equations can be reformulated in such a way that they satisfy the hypotheses of Theorem 6. In order to illustrate this, we consider the IBVP for the wave equation on the half-space Σ := {(x1, x2, …, xn) ∈ ℝn: x1 > 0}, n ≥ 1,

$${v_{tt}} = \Delta v + F(t,x),\;\;x \in \Sigma ,\quad t \geq 0,$$
$$v(0,x) = 0,\quad {v_t}(0,x) = 0,\;\;x \in \Sigma ,$$
$$Lv = g(t,x),\;\;x \in \partial \Sigma ,\quad t \geq 0,$$

where \(F \in C_0^\infty ([0,\infty) \times \Sigma)\) and \(g \in C_0^\infty ([0,\infty) \times \partial \Sigma)\), and where L is a first-order linear differential operator of the form

$$L: = a{\partial \over {\partial t}} - b{\partial \over {\partial {x_1}}} - \sum\limits_{j = 2}^n {{c_j}} {\partial \over {\partial {x_j}}},$$

where a, b, c2, …, c n are real constants. We ask under which conditions on these constants the IBVP (5.65, 5.66, 5.67) is strongly well posed in the generalized sense. Since we are dealing with a second-order system, the estimate (5.55) in Definition 6 has to be replaced with

$$\eta \Vert v\Vert _{\eta ,1,\Omega}^2 + \Vert v\Vert _{\eta ,1,{\mathcal T}}^2 \leq {K^2}\left({{1 \over \eta}\Vert F\Vert _{\eta ,0,\Omega}^2 + \Vert g\Vert _{\eta ,0,{\mathcal T}}^2} \right)\;,$$

where the norms \(\Vert \cdot \Vert _{\eta, 1, \Omega}^2\) and \(\Vert \cdot \Vert _{\eta, 1, {\mathcal T}}^2\) control the first partial derivatives of v,

$$\begin{array}{*{20}c} {\Vert v\Vert _{\eta ,1,\Omega}^2: = \int\limits_\Omega {{e^{- 2\eta t}}} \sum\limits_{\mu = 0}^n {{{\left\vert {{{\partial v} \over {\partial {x^\mu}}}(t,{x_1},y)} \right\vert}^2}} \,dt\,d{x_1}{d^{n - 1}}y,} \\ {\Vert v\Vert _{\eta ,1,{\mathcal T}}^2: = \int\limits_{\mathcal T} {{e^{- 2\eta t}}} \sum\limits_{\mu = 0}^n {{{\left\vert {{{\partial v} \over {\partial {x^\mu}}}(t,0,y)} \right\vert}^2}} dt\,{d^{n - 1}}y,\quad \;\;\;} \\ \end{array}$$

with (xμ) = (t, x1, x2, …, x n ). Likewise, the inequality (5.57) in the definition of boundary stability needs to be replaced by

$$\vert \tilde u(s,0,k)\vert \; \leq K{{\vert \tilde g(s,k)\vert} \over {\sqrt {\vert s{\vert ^2} + \vert k{\vert ^2}}}}.$$

Laplace-Fourier transformation of Eqs. (5.65, 5.67) leads to the second-order differential problem

$${{{\partial ^2}} \over {\partial x_1^2}}\tilde v = ({s^2} + \vert k{\vert ^2})\tilde v - \tilde F,\,\,{x_1} > 0,$$
$$b{\partial \over {\partial {x_1}}}\tilde v = (as - ic(k))\tilde v - \tilde g,\,\,{x_1} = 0,$$

where we have defined \(c(k): = \sum\limits_{j = 2}^n {{c_j}{k_j}}\) and where \({\tilde F}\) and \({\tilde g}\) denote the Laplace-Fourier transformations of F and g, respectively. In order to apply the theory described in Section 5.1.2, we rewrite this system in first-order pseudo-differential form. Defining

$$\tilde u: = \left({\begin{array}{*{20}c} {\rho \tilde v} \\ {{{\partial \tilde v} \over {\partial {x_1}}}} \\ \end{array}} \right),\qquad \tilde f: = - \left({\begin{array}{*{20}c} 0 \\ {\tilde F} \\ \end{array}} \right),$$

where \(\rho := \sqrt {\vert s{\vert ^2} + \vert k{\vert ^2}}\), we find

$${\partial \over {\partial {x_1}}}\tilde u = M(s,k)\tilde u + \tilde f,\,\,{x_1} > 0,$$
$$L(s,k)\tilde u = \tilde g,\,\,{x_1} = 0,$$

where we have defined

$$M(s,\,k): = \rho \left({\begin{array}{*{20}c} 0 & 1 \\ {{{s\prime}^2} + \vert k\prime {\vert ^2}} & 0 \\ \end{array}} \right),\qquad L(s,k): = \left({as\prime - ic(k\prime), - b} \right),$$

with s′ := s/ρ, k′ := k/ρ. This system has the same form as the one described by Eqs. (5.14, 5.13), and the eigenvalues of the matrix M(s, k) are distinct for Re(s) > 0 and k ∈ ℝn−1. Therefore, we can construct a symmetrizer H(s′, k′) according to Theorem 6, provided that the problem is boundary stable. In order to check boundary stability, we diagonalize M(s, k) and consider the solution of Eq. (5.74) for \(\tilde f = 0\), which decays exponentially as x1 → ∞,

$$\tilde u(s,{x_1},k) = {\sigma _ -}{e^{- \lambda {x_1}}}\left({\begin{array}{*{20}c} \rho \\ {- \lambda} \\ \end{array}} \right),$$

where σ− is a complex constant and \(\lambda := \sqrt {{s^2} + \vert k{\vert ^2}}\), with the root chosen such that Re(λ) > 0 for Re(s) > 0. Introduced into the boundary condition (5.75), this gives

$$\left[ {as\prime + b\lambda \prime - ic(k\prime)} \right]{\sigma _ -} = {{\tilde g} \over \rho},$$

and the system is boundary stable if and only if the expression inside the square bracket is different from zero for all Re(s′) ≥ 0 and k′ ∈ ℝn−1 with ∣s′∣2 + ∣k′∣2 = 1. In the one-dimensional case, n = 1, this condition reduces to (a + b)s′ ≠ 0 with ∣s′∣ = 1, and the system is boundary stable if and only if a + b ≠ 0; that is, if and only if the boundary vector field L is not proportional to the ingoing null vector at the boundary surface,

$${\partial \over {\partial t}} + {\partial \over {\partial {x_1}}}.$$

Indeed, if a + b = 0, Lu = a(u_t + u_{x_1}) is proportional to the outgoing characteristic field, for which it is not permitted to specify boundary data, since it is completely determined by the initial data.

When n ≥ 2, it follows that b must be different from zero, since otherwise the square bracket vanishes for purely imaginary s′ satisfying as′ = ic(k′). Therefore, one can choose b = 1 without loss of generality. It can then be shown that the system is boundary stable if and only if a > 0 and \(\sum\limits_{j = 2}^n {\vert{c_j}{\vert^2} < {a^2}}\); see [267]. This is equivalent to the condition that the boundary vector field L points outside the domain and that its orthogonal projection onto the boundary surface \({\mathcal T}\),

$$T: = a{\partial \over {\partial t}} - \sum\limits_{j = 2}^n {{c_j}} {\partial \over {\partial {x_j}}},$$

is future-directed time-like. This includes as a particular case the “Sommerfeld” boundary condition u_t − u_{x_1} = 0, for which L is the null vector obtained from the sum of the time evolution vector field ∂_t and the normal derivative \(N = - {\partial _{{x_1}}}\). While N is uniquely determined by the boundary surface \({\mathcal T}\), ∂_t is not unique, since one can transform it to an arbitrary future-directed time-like vector field T tangent to \({\mathcal T}\) by means of an appropriate Lorentz transformation. Since the wave equation is Lorentz-invariant, it is clear that the new boundary vector field \(\hat L = T + N\) must also give rise to a well-posed IBVP, which explains why there is so much freedom in the choice of L.
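The boundary-stability criterion can be probed numerically. The sketch below (our own illustration, not part of the original analysis; the function name and parameter values are ours) scans the determinant as′ + bλ′ − ic(k′) over the normalized frequency set for n = 2. Zeros of this determinant, if any, occur on the purely imaginary line s′ = iξ, so it suffices to scan there (the case ξ < 0 follows by complex conjugation):

```python
import numpy as np

def min_modulus(a, b, c2, n=2001):
    """Minimum of |a s' + b lambda' - i c2 k'| over the normalized set
    |s'|^2 + |k'|^2 = 1, scanned on the line s' = i*xi, xi >= 0."""
    t = np.linspace(0.0, 1.0, n)          # k' component
    xi = np.sqrt(1.0 - t**2)              # s' = i*xi
    # lambda' = sqrt(s'^2 + |k'|^2); the principal branch of the complex
    # square root has Re >= 0, matching the required branch for xi >= 0
    lam = np.sqrt(t**2 - xi**2 + 0j)
    expr = a*(1j*xi) + b*lam - 1j*c2*t
    return np.abs(expr).min()

stable = min_modulus(1.0, 1.0, 0.5)    # |c2| < a: no root on the scan
failing = min_modulus(1.0, 1.0, 2.0)   # |c2| > a: a root appears
print(stable, failing)
```

For a = b = 1 and ∣c2∣ < a the modulus stays bounded away from zero, while for c2 = 2 > a the scan finds a near-zero, in agreement with the criterion a > 0, Σj ∣cj∣2 < a2.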

For a more geometric derivation of these results, based on estimates derived from the stress-energy tensor associated with the scalar field v, which shows that the above construction for L is sufficient for strong well-posedness, see Appendix B in [263]. For a generalization to the shifted wave equation, see [369].

As pointed out in [267], the advantage of obtaining a strong well-posedness estimate (5.69) for the scalar-wave problem is the fact that it allows the treatment of systems of wave equations where the boundary conditions can be coupled in a certain way through terms involving first derivatives of the fields. In order to illustrate this with a simple example, consider a system of two wave equations,

$${\left({\begin{array}{*{20}c} {{v_1}} \\ {{v_2}} \\ \end{array}} \right)_{tt}} = \Delta \left({\begin{array}{*{20}c} {{v_1}} \\ {{v_2}} \\ \end{array}} \right) + \left({\begin{array}{*{20}c} {{F_1}(t,x)} \\ {{F_2}(t,x)} \\ \end{array}} \right),\qquad x \in \Sigma ,\quad t \geq 0,$$

which is coupled through the boundary conditions

$$\left({{\partial \over {\partial t}} - {\partial \over {\partial {x_1}}}} \right)\,\,\left({\begin{array}{*{20}c} {{v_1}} \\ {{v_2}} \\ \end{array}} \right) = N\left({\begin{array}{*{20}c} {{v_1}} \\ {{v_2}} \\ \end{array}} \right) + \left({\begin{array}{*{20}c} {{g_1}(t,x)} \\ {{g_2}(t,x)} \\ \end{array}} \right),\qquad x \in \partial \Sigma ,\quad t \geq 0,$$

where N has the form

$$N = \left({\begin{array}{*{20}c} 0 & 0 \\ 0 & X \\ \end{array}} \right),\qquad X = {X^0}{\partial \over {\partial t}} + {X^1}{\partial \over {\partial {x_1}}} + \ldots + {X^n}{\partial \over {\partial {x_n}}},$$

with (X0, X1, …, Xn) ∈ ℝn+1 any vector. Since the wave equation and boundary condition for v1 decouple from those for v2, we can apply the estimate (5.69) to v1, obtaining

$$\eta \,\Vert {v_1}\Vert _{\eta ,1,\Omega}^2 + \Vert {v_1}\Vert _{\eta ,1,{\mathcal T}}^2 \leq {K^2}\left({{1 \over \eta}\Vert {F_1}\Vert _{\eta ,0,\Omega}^2 + \Vert {g_1}\Vert _{\eta ,0,{\mathcal T}}^2} \right).$$

If we set \({g_3}(t,x): = {g_2}(t,x) + X{\upsilon _1}(t,x),t \geq 0,x \in \partial \Sigma\), we have a similar estimate for v2,

$$\eta \,\Vert {v_2}\Vert _{\eta ,1,\Omega}^2 + \Vert {v_2}\Vert _{\eta ,1,{\mathcal T}}^2 \leq {K^2}\left({{1 \over \eta}\Vert {F_2}\Vert _{\eta ,0,\Omega}^2 + \Vert {g_3}\Vert _{\eta ,0,{\mathcal T}}^2} \right).$$

However, since the boundary norm of v1 is controlled by the estimate (5.84), one also controls

$$\Vert {g_3}\Vert _{\eta ,0,{\mathcal T}}^2 \leq 2\Vert {g_2}\Vert _{\eta ,0,{\mathcal T}}^2 + {C^2}\Vert {v_1}\Vert _{\eta ,1,{\mathcal T}}^2 \leq {{{{(CK)}^2}} \over \eta}\Vert {F_1}\Vert _{\eta ,0,\Omega}^2 + {(CK)^2}\Vert {g_1}\Vert _{\eta ,0,{\mathcal T}}^2 + 2\Vert {g_2}\Vert _{\eta ,0,{\mathcal T}}^2$$

with some constant C > 0 depending only on the vector field X. Therefore, the inequalities (5.84, 5.85) together yield an estimate of the form (5.69) for v = (v1, v2), F = (F1, F2) and g = (g1, g2), which shows strong well-posedness in the generalized sense for the coupled system. Notice that the key point, which allows the coupling of v1 and v2 through the boundary matrix operator N, is the fact that one controls the boundary norm of v1 in the estimate (5.84). The result can be generalized to larger systems of wave equations, where the matrix operator N is in triangular form with zeros on the diagonal, or where it can be brought into this form by an appropriate transformation [267, 264].
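The cascade argument above can be illustrated with a simple numerical experiment. The following one-dimensional finite-difference sketch (our own illustration: first-order upwind differencing in the characteristic variables w± = vt ± vx, with the coupling chosen as X v1 = ∂t v1) evolves two wave equations coupled only through the boundary at x = 0 and checks that the discrete energy remains bounded:

```python
import numpy as np

# Half-line x in [0, L]; characteristic variables w± = v_t ± v_x obey
#   dt w+ = +dx w+ (outgoing at x = 0),  dt w- = -dx w- (incoming at x = 0).
# Triangular boundary coupling: w1- = 0, and w2- = v1_t = (w1+ + w1-)/2.
L, N = 10.0, 400
x = np.linspace(0.0, L, N); dx = x[1] - x[0]; dt = 0.5*dx

def bump(c):
    return np.exp(-5.0*(x - c)**2)

w1p, w1m = bump(5.0), bump(5.0)      # initial pulses for v1
w2p, w2m = bump(3.0), np.zeros(N)    # and for v2

def step(wp, wm, wm_bc):
    """One first-order upwind Euler step; wm_bc is the incoming datum at x = 0."""
    wp_new, wm_new = wp.copy(), wm.copy()
    wp_new[:-1] += dt/dx*(wp[1:] - wp[:-1])   # left-moving field
    wp_new[-1] = 0.0                          # Sommerfeld at the outer boundary
    wm_new[1:] -= dt/dx*(wm[1:] - wm[:-1])    # right-moving field
    wm_new[0] = wm_bc                         # boundary coupling at x = 0
    return wp_new, wm_new

def energy():
    return 0.5*dx*np.sum(w1p**2 + w1m**2 + w2p**2 + w2m**2)

E0 = energy(); Emax = E0
for _ in range(int(25.0/dt)):
    w1p, w1m = step(w1p, w1m, wm_bc=0.0)
    w2p, w2m = step(w2p, w2m, wm_bc=0.5*(w1p[0] + w1m[0]))
    Emax = max(Emax, energy())
Efinal = energy()
print(Emax/E0, Efinal/E0)   # stays of order one; the pulses leave the domain
```

The energy of v2 can temporarily grow from the data injected at the boundary, but that data is controlled by the boundary trace of v1, so the total energy stays bounded, mirroring the chaining of the estimates (5.84, 5.85).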

Example 28. As an application of the theory for systems of wave equations, which are coupled through the boundary conditions, we discuss Maxwell’s equations in their potential formulation on the half space Σ [267]. In the Lorentz gauge and in the absence of sources, this system is described by the four wave equations ∂μ∂μA_ν = 0 for the components (A_t, A_x, A_y, A_z) of the vector potential A_μ, which are subject to the constraint C := ∂μA_μ = 0, where we use the Einstein summation convention.

As a consequence of the wave equation for A_ν, the constraint variable C also satisfies the wave equation, ∂μ∂μC = 0. Therefore, the constraint is correctly propagated if the initial data is chosen such that C and its first time derivative vanish, and if C is set to zero at the boundary. Setting C = 0 at the boundary amounts to the following condition for A_ν at x = 0:

$${{\partial {A_t}} \over {\partial t}} = {{\partial {A_x}} \over {\partial x}} + {{\partial {A_y}} \over {\partial y}} + {{\partial {A_z}} \over {\partial z}},$$

which can be rewritten as

$$\left({{\partial \over {\partial t}} - {\partial \over {\partial x}}} \right)({A_t} + {A_x}) = - \left({{\partial \over {\partial t}} + {\partial \over {\partial x}}} \right)({A_t} - {A_x}) + 2{\partial \over {\partial y}}{A_y} + 2{\partial \over {\partial z}}{A_z}.$$

Together with the boundary conditions

$$\begin{array}{*{20}c} {\left({{\partial \over {\partial t}} - {\partial \over {\partial x}}} \right)({A_t} - {A_x}) = 0,\quad \quad \quad \quad \quad \quad \quad} \\ {\left({{\partial \over {\partial t}} - {\partial \over {\partial x}}} \right){A_y} = {\partial \over {\partial y}}({A_t} - {A_x}),} \\ {\left({{\partial \over {\partial t}} - {\partial \over {\partial x}}} \right){A_z} = {\partial \over {\partial z}}({A_t} - {A_x}),} \\ \end{array}$$

this yields a system of the form of Eq. (5.82) with N having the required triangular form, where v is the four-component vector function v = (A_t − A_x, A_y, A_z, A_t + A_x). Notice that the Sommerfeld-like boundary conditions on A_y and A_z set the gauge-invariant quantities E_y + B_z and E_z − B_y to zero, where E and B are the electric and magnetic fields, which is compatible with an outgoing plane wave traveling in the normal direction to the boundary.

For a recent development based on the Laplace method, which allows the treatment of second-order IBVPs with more general classes of boundary conditions, including those admitting boundary phenomena like glancing and surface waves; see [262].

Maximal dissipative boundary conditions

An alternative technique for specifying boundary conditions, which does not require Laplace-Fourier transformation and the use of pseudo-differential operators when generalizing to variable coefficients, is based on energy estimates. In order to understand this, we go back to Section 3.2.3, where we discussed such estimates for linear, first-order symmetric hyperbolic evolution equations with symmetrizer H(t,x). We obtained the estimate (3.107), bounding the energy \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{J^0}(t,x){d^n}x}\) at any time t ∈ [0, T] in terms of the initial energy E0), provided that the flux integral

$$\int\limits_{\mathcal T} {{e_\mu}} {J^\mu}(t,x)dS,\qquad {J^\mu}(t,x): = - u{(t,x)^{\ast}}H(t,x){A^\mu}(t,x)u(t,x)$$

was nonnegative. Here, the boundary surface is \({\mathcal T} = [0,T] \times \partial \Sigma\), and its unit outward normal e = (0, s1, …, s n ) is determined by the unit outward normal s to Σ. Therefore, the integral is nonnegative if

$$u{(t,x)^{\ast}}H(t,x){P_0}(t,x,s)u(t,x) \leq 0,\qquad (t,x) \in {\mathcal T},$$

where \({P_0}(t,x,s) = \sum\limits_{j = 1}^n {{A^j}(t,x){s_j}}\) is the principal symbol in the direction of the unit normal s. Hence, the idea is to specify homogeneous boundary conditions, b(t, x)u = 0 at \({\mathcal T}\), such that the condition (5.90) is satisfied. In this case, one obtains an a priori energy estimate as in Section 3.2.3. Of course, there are many possible choices for b(t, x) that fulfill the condition (5.90); however, an additional requirement is that one should not overdetermine the IBVP. For example, setting all the components of u to zero at the boundary does not lead to a well-posed problem if there are outgoing modes, as discussed in Section 5.1.1 for the constant coefficient case. Correct boundary conditions turn out to impose a minimal restriction on u for which the inequality (5.90) holds. In other words, at the boundary surface, u has to be restricted to a space for which Eq. (5.90) holds and which cannot be extended. The precise definition, which captures this idea, is:

Definition 8. Denote for each boundary point \(p = (t,x) \in {\mathcal T}\) the boundary space

$${V_p}: = \{u \in {\mathbb C^m}:b(t,x)u = 0\} \subset {\mathbb C^m}$$

of state vectors satisfying the homogeneous boundary condition. V p is called maximal nonpositive if

  1. (i)

    u*H(t, x)P0(t, x, s)u ≤ 0 for all uV p ,

  2. (ii)

    V p is maximal with respect to condition (i); that is, if W p is a linear subspace of ℂm containing V p , which satisfies (i), then W p = V p .

The boundary condition b(t, x)u = g(t, x) is called maximal dissipative if the associated boundary spaces V p are maximal nonpositive for all \(p \in {\mathcal T}\).

Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:

Definition 9. Consider the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ), where the matrix functions Aj(t, x) and b(t, x) and the vector function F(t, x) do not depend on u. It is called well posed if there are constants K = K(T) and ε = ε(T) ≥ 0 such that each pair of compatible data \(f \in C_b^\infty (\Sigma, {{\rm{\mathbb C}}^m})\) and \(g \in C_b^\infty ([0,T) \times \partial \Sigma, {{\rm{\mathbb C}}^r})\) gives rise to a unique C∞-solution u satisfying the estimate

$$\Vert u(t,\cdot)\Vert _{{L^2}(\Sigma)}^2 + \varepsilon \int\limits_0^t {\Vert u(s,\cdot)\Vert _{{L^2}(\partial \Sigma)}^2\,ds} \leq {K^2}\left[ {\Vert f\Vert _{{L^2}(\Sigma)}^2 + \int\limits_0^t {\left({\Vert F(s,\cdot)\Vert _{{L^2}(\Sigma)}^2 + \Vert g(s,\cdot)\Vert _{{L^2}(\partial \Sigma)}^2} \right)} ds} \right],$$

for all t ∈ [0, T]. If, in addition, the constant ε can be chosen strictly positive, the problem is called strongly well posed.

This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time-integral of the L2(Σ)-norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:

Theorem 7. Consider the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ), where the matrix functions Aj(t, x) and b(t, x) and the vector function F(t, x) do not depend on u. Suppose the system is symmetric hyperbolic, and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix P0(t, x, s) is constant in \((t,x) \in {\mathcal T}\).

Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix P0(t, x, s) is invertible.

This theorem was first proven in [189, 275, 344] for the case where the boundary surface \({\mathcal T}\) is non-characteristic, that is, the boundary matrix P0(t, x, s) is invertible for all \((t,x) \in {\mathcal T}\). A difficulty with the characteristic case is the loss of derivatives of u in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating with the regularity theorem in [387], which is based on special function spaces, which control the L2-norms of 2k tangential derivatives and k normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case; see [218, 388].

A more practical way of characterizing maximal dissipative boundary conditions is the following. Fix a boundary point \(p = (t,x) \in {\mathcal T}\), and define the scalar product (·,·) by (u, v) := u*H(t, x)v for u, v ∈ ℂm. Since the boundary matrix P0(t, x, s) is Hermitian with respect to this scalar product, there exists a basis e1, e2, …, em of eigenvectors of P0(t, x, s), which are orthonormal with respect to (·, ·). Let λ1, λ2, …, λm be the corresponding eigenvalues, where we may assume that the first r of these eigenvalues are strictly positive and the last s are strictly negative. We can expand any vector u ∈ ℂm as \(u = \sum\limits_{j = 1}^m {{u^{(j)}}} {e_j}\), the coefficients u(j) being the characteristic fields with associated speeds λj. Then, the condition (5.90) at the point p can be written as

$$0 \geq (u,{P_0}(t,x,s)u) = \sum\limits_{j = 1}^m {{\lambda _j}} \vert {u^{(j)}}{\vert ^2} = \sum\limits_{j = 1}^r {{\lambda _j}} \vert {u^{(j)}}{\vert ^2} - \sum\limits_{j = m - s + 1}^m {\vert {\lambda _j}\vert} \,\vert {u^{(j)}}{\vert ^2},$$

where we have used the fact that λ1, …, λ r > 0, λms+1, …, λ m < 0 and the remaining λ j ’s are zero. Therefore, a maximal dissipative boundary condition must have the form

$${u_ +} = q{u_ -},\qquad {u_ +}: = \left({\begin{array}{*{20}c} {{u^{(1)}}} \\ \ldots \\ {{u^{(r)}}} \\ \end{array}} \right),\qquad {u_ -}: = \left({\begin{array}{*{20}c} {{u^{(m - s + 1)}}} \\ \ldots \\ {{u^{(m)}}} \\ \end{array}} \right),$$

with q a complex r × s matrix, since u− = 0 must imply u+ = 0. Furthermore, the matrix q has to be small enough such that the inequality (5.93) holds. There can be no further conditions, since an additional, independent condition on u would violate the maximality of the boundary space V p .

In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which describes a linear coupling of the incoming characteristic fields u+ to the outgoing ones, u−. In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero speed fields. The simplest choice for q is the trivial one, q = 0, in which case data for the incoming fields is specified. A nonzero value of q would be chosen if the boundary is to incorporate some reflecting properties, like the case of a perfectly conducting surface in electromagnetism, for example.
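The characteristic decomposition (5.93) can be checked numerically for a randomly generated symmetrizer. The sketch below (an illustration with arbitrary matrices, not tied to any particular evolution system) builds a matrix P0 that is Hermitian with respect to the scalar product (u, v) = u*Hv, obtains the characteristic fields via a Cholesky transformation, and verifies that (u, P0u) = Σj λj∣u(j)∣2:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
# random Hermitian positive definite symmetrizer H
A = rng.normal(size=(m, m)) + 1j*rng.normal(size=(m, m))
H = A.conj().T @ A + m*np.eye(m)
# a boundary matrix P0 that is Hermitian with respect to (u, v) := u* H v,
# i.e., H P0 is Hermitian in the ordinary sense
S = rng.normal(size=(m, m)) + 1j*rng.normal(size=(m, m))
HP0 = S + S.conj().T
P0 = np.linalg.solve(H, HP0)

# H = L L*; the substitution w := L* u turns (.,.) into the Euclidean product,
# and K := L* P0 L^{-*} is then an ordinary Hermitian matrix
L = np.linalg.cholesky(H)
K = L.conj().T @ P0 @ np.linalg.inv(L.conj().T)
lam, V = np.linalg.eigh(K)               # characteristic speeds and fields

u = rng.normal(size=m) + 1j*rng.normal(size=m)
coeff = V.conj().T @ (L.conj().T @ u)    # u^(j) in the (.,.)-orthonormal basis
lhs = np.real(u.conj() @ HP0 @ u)        # (u, P0 u)
rhs = np.sum(lam*np.abs(coeff)**2)       # sum_j lambda_j |u^(j)|^2
print(abs(lhs - rhs))                    # agrees to machine precision
```

The sign of (u, P0u) is thus read off directly from the characteristic speeds, which is what makes the form (5.94) of the boundary condition transparent.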

Example 29. Consider the first-order reformulation of the Klein-Gordon equation for the variables u = (Φ, Φ t , Φ x , Φ y ); see Example 13. Suppose the spatial domain is x > 0, with the boundary located at x = 0. Then, s = (−1, 0) and the boundary matrix is

$${P_0}(s) = - \left( {\begin{array}{*{20}{c}} {0\;0\;0\;0} \\ {0\;0\;1\;0} \\ {0\;1\;0\;0} \\ {0\;0\;0\;0} \end{array}} \right).$$

Therefore, the characteristic fields and speeds are Φ, Φ y (zero speed fields, λ = 0), Φ t − Φ x (incoming field with speed λ = 1) and Φ t + Φ x (outgoing field with speed λ = −1). It follows from Eqs. (5.93, 5.94) that the class of maximal dissipative boundary conditions is

$$({\Phi _t} - {\Phi _x}) = q(t,y)({\Phi _t} + {\Phi _x}) + g(t,y),\qquad t \geq 0,\quad y \in {\mathbb {R}},$$

where the function q satisfies ∣q(t, y)∣ ≤ 1 and g is smooth boundary data. Particular cases are:

  • q = 0: Sommerfeld boundary condition,

  • q = −1: Dirichlet boundary condition,

  • q = 1: Neumann boundary condition.
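For this example, the characteristic speeds and the sign condition (5.90) can be verified directly. The following sketch (our own check; the variable names are ours) diagonalizes the boundary matrix of Example 29 and confirms that the flux u*HP0(s)u is nonpositive whenever the boundary condition holds with ∣q∣ ≤ 1 and g = 0:

```python
import numpy as np

m = 1.0
H = np.diag([m**2, 1.0, 1.0, 1.0])           # symmetrizer from Example 13
P0 = -np.array([[0, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)  # boundary matrix for s = (-1, 0)

print(np.linalg.eigvalsh(P0))                # characteristic speeds -1, 0, 0, 1

rng = np.random.default_rng(0)
for q in (0.0, -1.0, 1.0, 0.5):              # Sommerfeld, Dirichlet, Neumann, mixed
    for _ in range(100):
        wplus = rng.normal() + 1j*rng.normal()   # outgoing field Phi_t + Phi_x
        wminus = q*wplus                         # boundary condition with g = 0
        phit, phix = (wplus + wminus)/2, (wplus - wminus)/2
        u = np.array([rng.normal(), phit, phix, rng.normal()])  # Phi, Phi_y free
        flux = np.real(u.conj() @ H @ P0 @ u)
        assert flux <= 1e-12                     # condition (5.90) holds
print("u*H P0(s) u <= 0 verified for |q| <= 1")
```

Note that the zero speed fields Φ and Φ_y drop out of the flux entirely, in line with the requirement that the boundary conditions must not involve them.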

Example 30. For Maxwell’s equations on a domain Σ ⊂ ℝ3 with C∞-boundary ∂Σ, the boundary matrix is given by

$${P_0}(s)\left({\begin{array}{*{20}c} E \\ B \\ \end{array}} \right) = \left({\begin{array}{*{20}c} {+ s \wedge B} \\ {- s \wedge E} \\ \end{array}} \right);$$

see Example 14. In terms of the components E∥ of E parallel to the boundary surface ∂Σ, and the ones E⊥, which are orthogonal to it (and, hence, parallel to s), the characteristic speeds and fields are

$$\begin{array}{*{20}c} {0:\,\,{E_ \bot},\quad {B_ \bot},} \\ {\pm 1:\,\,{E_{\Vert}} \pm s \wedge {B_{\Vert}}.} \\ \end{array}$$

Therefore, maximal dissipative boundary conditions have the form

$$({E_{\Vert}} + s \wedge {B_{\Vert}}) = q({E_{\Vert}} - s \wedge {B_{\Vert}}) + {g_{\Vert}},$$

with g∥ some smooth vector-valued function at the boundary, which is parallel to ∂Σ, and q a matrix-valued function satisfying the condition ∣q∣ ≤ 1. Particular cases are:

  • q = −1, g∥ = 0: The boundary condition E∥ = 0 describes a perfectly conducting boundary surface.

  • q = 0, g∥ = 0: This is a Sommerfeld-type boundary condition, which, locally, is transparent to outgoing plane waves traveling in the normal direction s,

    $$E(t,x) = {\mathcal E}{e^{i(\omega t - k\cdot x)}},\qquad B(t,x) = s \wedge E(t,x),$$

    where ω is the frequency, k = ωs the wave vector, and \({\mathcal E}\) the polarization vector, which is orthogonal to k. The generalization of this boundary condition to inhomogeneous data g∥ ≠ 0 allows one to specify data on the incoming field E∥ + s ∧ B∥ at the boundary surface, which is equal to \(2{\mathcal E}{e^{i\omega t}}\) for plane waves traveling in the normal inward direction −s.
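That the Sommerfeld-type condition is transparent to such a plane wave can be verified with a few lines of vector algebra. The sketch below (an illustrative check with an arbitrarily chosen polarization, evaluated at a fixed phase) computes the incoming field E∥ + s ∧ B∥ for an outgoing wave with B = s ∧ E:

```python
import numpy as np

s = np.array([0.0, 0.0, 1.0])        # unit outward normal to the boundary
E = np.array([1.0, 2.0, 0.0])        # polarization, orthogonal to k = omega*s
B = np.cross(s, E)                   # outgoing plane wave: B = s ∧ E

def tangential(v):
    """Project a vector onto the boundary surface."""
    return v - np.dot(v, s)*s

incoming = tangential(E) + np.cross(s, tangential(B))   # E∥ + s ∧ B∥
outgoing = tangential(E) - np.cross(s, tangential(B))   # E∥ - s ∧ B∥
print(incoming)   # zero: the wave passes through the boundary unreflected
print(outgoing)   # 2 E∥
```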

Recall that the constraints ∇ · E = ρ and ∇ · B = 0 propagate along the time evolution vector field ∂t, that is, ∂t(∇ · E − ρ) = 0 and ∂t(∇ · B) = 0, provided the continuity equation holds. Since ∂t is tangent to the boundary, no additional conditions controlling the constraints need to be specified at the boundary; the constraints are automatically satisfied everywhere, provided they are satisfied on the initial surface.

Example 31. Commonly, one writes Maxwell’s equations as a system of wave equations for the electromagnetic potential A_μ in the Lorentz gauge, as discussed in Example 28. By reducing the problem to a first-order symmetric hyperbolic system, one may wonder if it is possible to apply the theory of maximal dissipative boundary conditions and obtain a well-posed IBVP, as in the previous example. As we shall see in Section 5.2.1, the answer is affirmative, but the correct application of the theory is not completely straightforward. In order to illustrate why this is the case, introduce the new independent fields D_{μν} := ∂_μA_ν. Then, the set of wave equations can be rewritten as the first-order system for the 20-component vector (A_ν, D_{tν}, D_{jν}), j = x, y, z,

$${\partial _t}{A_\nu} = {D_{t\nu}},\qquad {\partial _t}{D_{t\nu}} = {\partial ^j}{D_{j\nu}},\qquad {\partial _t}{D_{j\nu}} = {\partial _j}{D_{t\nu}},$$

which is symmetric hyperbolic. The characteristic fields with respect to the unit outward normal s = (−1, 0, 0) at the boundary are

$$\begin{array}{*{20}c} {{D_{t\nu}} - {D_{x\nu}} = ({\partial _t} - {\partial _x}){A_\nu}\,\,\,{\rm{(incoming field)}},\quad} \\ {{D_{t\nu}} + {D_{x\nu}} = ({\partial _t} + {\partial _x}){A_\nu}\,\,\,{\rm{(outgoing field)}},\quad} \\ {\quad \quad \quad \quad \quad {D_{y\nu}} = {\partial _y}{A_\nu}\,\,\,{\rm{(zero speed field)}},\,\,\,} \\ {\quad \quad \quad \quad \quad {D_{z\nu}} = {\partial _z}{A_\nu}\,\,\,{\rm{(zero speed field)}}.\,} \\ \end{array}$$

According to Eq. (5.88) we can rewrite the Lorentz constraint in the following way:

$$({D_{tt}} - {D_{xt}}) + ({D_{tx}} - {D_{xx}}) = - ({D_{tt}} + {D_{xt}}) + ({D_{tx}} + {D_{xx}}) + 2{D_{yy}} + 2{D_{zz}}.$$

The problem is that, when written in terms of the characteristic fields, the Lorentz constraint not only depends on the in- and outgoing fields, but also on the zero speed fields D_{yy} and D_{zz}. Therefore, imposing the constraint on the boundary in order to guarantee constraint preservation leads to a boundary condition, which couples the incoming fields to outgoing and zero speed fields, and which does not fall into the class of admissible boundary conditions.

At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason for this is that there exist many first-order reductions, which are inequivalent to each other, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.

Example 32. A generalization of Maxwell’s equations is the evolution system

$${\partial _t}{E_{ij}} = - {\varepsilon _{kl(i}}{\partial ^k}{B^l}_{j)},$$
$${\partial _t}{B_{ij}} = + {\varepsilon _{kl(i}}{\partial ^k}{E^l}_{j)},$$

for the symmetric, trace-free tensor fields E ij and B ij , where here we use the Einstein summation convention, the indices i,j,k,l run over 1,2,3, (ij) denotes symmetrization over ij, and ε ijk is the totally antisymmetric tensor with ε123 = 1. Notice that the right-hand sides of Eqs. (5.102, 5.103) are symmetric and trace-free, such that one can consistently assume that \({E^i}_i = {B^i}_i = 0\). The evolution system (5.102, 5.103), which is symmetric hyperbolic with respect to the trivial symmetrizer, describes the propagation of the electric and magnetic parts of the Weyl tensor for linearized gravity on a Minkowski background; see, for instance, [182].

Decomposing E ij into its parts parallel and orthogonal to the unit outward normal s,

$${E_{ij}} = \bar E\left({{s_i}{s_j} - {1 \over 2}{\gamma _{ij}}} \right) + 2{s_{(i}}{\bar E_{j)}} + {\hat E_{ij}},$$

where \({\gamma _{ij}}: = {\delta _{ij}} - {s_i}{s_j}\), \(\bar E: = {s^i}{s^j}{E_{ij}}\), \({{\bar E}_i}: = \gamma _i^k{E_{kj}}{s^j}\), \({{\hat E}_{ij}}: = (\gamma _i^k\gamma _j^l - {\gamma _{ij}}{\gamma ^{kl}}/2){E_{kl}}\), and similarly for B ij , the eigenvalue problem λu = P0(s)u for the boundary matrix is

$$\begin{array}{*{20}c} {\lambda \bar E = 0,} \\ {\lambda \bar B = 0,} \\ {\lambda {{\bar E}_i} = - {1 \over 2}{\varepsilon _{kli}}{s^k}{{\bar B}^l},} \\ {\lambda {{\bar B}_i} = + {1 \over 2}{\varepsilon _{kli}}{s^k}{{\bar E}^l},} \\ {\lambda {{\hat E}_{ij}} = - {\varepsilon _{kl(i}}{s^k}{{\hat B}^l}_{j)},} \\ {\lambda {{\hat B}_{ij}} = + {\varepsilon _{kl(i}}{s^k}{{\hat E}^l}_{j)},} \\ \end{array}$$

from which one obtains the following characteristic speeds and fields,

$$\begin{array}{*{20}c} {0:\bar E,\quad \bar B,\quad \,\,} \\ {\pm {1 \over 2}:{{\bar E}_i} \mp {\varepsilon _{kli}}{s^k}{{\bar B}^l},} \\ {\quad \pm 1:{{\hat E}_{ij}} \mp {\varepsilon _{kl(i}}{s^k}{{\hat B}^l}_{j)}.} \\ \end{array}$$

Similar to the Maxwell case, the boundary condition \({{\hat E}_{ij}} - {\varepsilon _{kl(i}}{s^k}{{\hat B}^l}_{j)} = 0\) on the incoming, symmetric trace-free characteristic field is, locally, transparent to outgoing linear gravitational plane waves traveling in the normal direction s. In fact, this condition is equivalent to setting the complex Weyl scalar Ψ0, computed from the adapted, complex null tetrad K := ∂t + s, L := ∂t − s, Q, \({\bar Q}\), to zero at the boundary surface. Variants of this condition have been proposed in the literature in the context of the IBVP for Einstein’s field equations in order to approximately control the incoming gravitational radiation; see [187, 40, 253, 378, 363, 309, 286, 384, 366].

However, one also needs to control the incoming field \({{\bar E}_i} - {\varepsilon _{kli}}{s^k}{{\bar B}^l}\) at the boundary. This field, which propagates with speed 1/2, is related to the constraints in the theory. Like in electromagnetism, the fields E ij and B ij are subject to the divergence constraints P_j := ∂iE_{ij} = 0 and Q_j := ∂iB_{ij} = 0. However, unlike the Maxwell case, these constraints do not propagate trivially. As a consequence of the evolution equations (5.102, 5.103), the constraint fields P_j and Q_j obey

$${\partial _t}{P_j} = - {1 \over 2}{\varepsilon _{jkl}}{\partial ^k}{Q^l},\qquad {\partial _t}{Q_j} = + {1 \over 2}{\varepsilon _{jkl}}{\partial ^k}{P^l},$$

which is equivalent to Maxwell’s equations, except that the propagation speed for the transverse modes is 1/2 instead of 1. Therefore, guaranteeing constraint propagation requires specifying homogeneous maximal dissipative boundary conditions for this system, which have the form of Eq. (5.98) with E ↦ P, B ↦ −Q and g = 0. A problem is that this yields conditions involving first derivatives of the fields E ij and B ij when rewritten as a boundary condition for the main system (5.102, 5.103). Except in some particular cases involving totally-reflecting boundaries, it is not possible to cast these conditions into maximal dissipative form.

A solution to this problem has been presented in [181] and [187], where a similar system appears in the context of the IBVP for Einstein’s field equations for solutions with anti-de Sitter asymptotics, or for solutions with an artificial boundary, respectively. The method consists in modifying the evolution system (5.102, 5.103) by using the constraint equations P_j = Q_j = 0 in such a way that the constraint fields for the resulting boundary-adapted system propagate along ∂t at the boundary surface. In order to describe this system, extend s to a smooth vector field on Σ with the property that ∣s∣ ≤ 1. Then, the boundary-adapted system reads:

$${\partial _t}{E_{ij}} = - {\varepsilon _{kl(i}}{\partial ^k}{B^l}_{j)} + {s_{(i}}{\varepsilon _{j)kl}}{s^k}{Q^l},$$
$${\partial _t}{B_{ij}} = + {\varepsilon _{kl(i}}{\partial ^k}{E^l}_{j)} - {s_{(i}}{\varepsilon _{j)kl}}{s^k}{P^l}.$$

This system is symmetric hyperbolic, and the characteristic fields in the normal direction are identical to the unmodified system with the important difference that the fields \({{\bar E}_i} \mp {\varepsilon _{kli}}{s^k}{{\bar B}^l}\) now propagate with zero speed. The induced evolution system for the constraint fields is symmetric hyperbolic, and has a trivial boundary matrix. As a consequence, the constraints propagate tangentially to the boundary surface, and no extra boundary conditions for controlling the constraints must be specified.
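The spectra of the original and boundary-adapted symbols can be compared numerically. The following sketch (our own construction: it works on the 12-dimensional space of pairs of symmetric 3 × 3 matrices, including the trace modes, which decouple with zero speed) builds both boundary matrices in an orthonormal basis and computes their characteristic speeds:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

s = np.array([0.0, 0.0, 1.0])                   # unit normal

def curl_sym(M):
    """eps_{kl(i} s^k M^l_{j)} -- the symmetrized curl in the normal direction."""
    C = np.einsum('kli,k,lj->ij', eps, s, M)
    return 0.5*(C + C.T)

def constraint_mod(M):
    """s_{(i} eps_{j)kl} s^k (s_m M^{ml}) -- the boundary-adapted correction."""
    V = np.einsum('jkl,k,l->j', eps, s, s @ M)
    return 0.5*(np.outer(s, V) + np.outer(V, s))

# orthonormal basis of symmetric 3x3 matrices (trace modes decouple, speed 0)
basis = [np.diag(np.eye(3)[i]) for i in range(3)]
for i in range(3):
    for j in range(i + 1, 3):
        M = np.zeros((3, 3)); M[i, j] = M[j, i] = 1.0/np.sqrt(2.0); basis.append(M)
n = len(basis)

def symbol(adapted):
    """Boundary matrix acting on the 12-dimensional state (E, B)."""
    P = np.zeros((2*n, 2*n))
    for a, M in enumerate(basis):
        corr = constraint_mod(M) if adapted else 0.0
        dB = curl_sym(M) - corr                  # B-equation sourced by E
        dE = -curl_sym(M) + corr                 # E-equation sourced by B
        P[n:, a] = [np.sum(dB*N) for N in basis]
        P[:n, n + a] = [np.sum(dE*N) for N in basis]
    return P

print(np.round(np.linalg.eigvalsh(symbol(False)), 6))  # speeds 0, ±1/2, ±1
print(np.round(np.linalg.eigvalsh(symbol(True)), 6))   # the ±1/2 modes -> 0
```

The unmodified symbol reproduces the speeds 0, ±1/2, ±1 of Eq. (5.105), while in the boundary-adapted system the ±1/2 modes indeed become zero speed fields, as stated above.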

Application to systems of wave equations

As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVP for systems of wave equations, which are coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness; that is, an a priori estimate, which controls the first derivatives of the fields in the bulk and at the boundary.

In order to explain how this is performed, we consider the simple case of the Klein-Gordon equation Φ tt = ΔΦ − m2Φ on the half plane Σ := {(x, y) ∈ ℝ2 : x > 0}. In Example 13 we reduced the problem to a first-order symmetric hyperbolic system for the variables u = (Φ, Φ t , Φ x , Φ y ) with symmetrizer H = diag(m2, 1, 1, 1), and in Example 29 we determined the class of maximal dissipative boundary conditions for this first-order reduction. Consider the particular case of Sommerfeld boundary conditions, where Φ t = Φ x is specified at x = 0. Then, Eq. (3.103) gives the following conservation law,

$$E({\Sigma _T}) = E({\Sigma _0}) + \int\limits_0^T {\int\limits_{\mathbb {R}} {{{\left. {{u^{\ast}}H{P_0}(s)u} \right\vert}_{x = 0}}}} dy\,dt,$$

where \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{u^{\ast}}H\,u\,dxdy} = \int\nolimits_{{\Sigma _t}} {({m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2})dxdy}\), and u*HP 0 (s)u = −2Re(Φ* t Φ x ); see Example 29. Using the Sommerfeld boundary condition, we may rewrite −2Re(Φ* t Φ x ) = −(∣Φ t ∣2 + ∣Φ x ∣2), and obtain the energy equality

$$E({\Sigma _T}) + \int\limits_0^T {\int\limits_{\mathbb {R}} {{{\left[ {\vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2}} \right]}_{x = 0}}}} dy\,dt = E({\Sigma _0}),$$

controlling the derivatives of Φ t and Φ x at the boundary surface. However, a weakness of this estimate is that it does not control the zero speed fields Φ and Φ y at the boundary, and so one does not obtain strong well-posedness.

On the other hand, the first-order reduction is not unique, and as we show now, different reductions may lead to stronger estimates. For this, we choose a real constant b such that 0 < b ≤ 1/2 and define the new fields ū := (Φ, Φ t − bΦ x , Φ x , Φ y ), which yield the symmetric hyperbolic system

$${\bar u_t} = \left({\begin{array}{*{20}c} b & 0 & 0 & 0 \\ 0 & {- b} & {1 - {b^2}} & 0 \\ 0 & 1 & b & 0 \\ 0 & 0 & 0 & b \\ \end{array}} \right){\bar u_x} + \left({\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array}} \right){\bar u_y} + \left({\begin{array}{*{20}c} 0 & 1 & 0 & 0 \\ {- {m^2}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array}} \right)\bar u,$$

with symmetrizer \(\bar H = {\rm{diag(}}{m^2},1,1 - {b^2},1)\). The characteristic fields in terms of Φ and its derivatives are Φ, Φ y , Φ t + Φ x , and Φ t − Φ x , as before. However, the fields now have characteristic speeds −b, −b, −1, +1, respectively, whereas in the previous reduction they were 0, 0, −1, +1. Therefore, the effect of the new reduction versus the old one is to shift the speeds of the zero speed fields, and to convert them to outgoing fields with speed −b. Notice that the Sommerfeld boundary condition Φ t = Φ x is still maximal dissipative with respect to the new reduction. Repeating the energy estimates again leads to a conservation law of the form (5.108), but where now the energy and flux quantities are \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{{\bar u}^{\ast}}\bar H\,\bar u\,dx\,dy} = \int\nolimits_{{\Sigma _t}} {({m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t} - b{\Phi _x}{\vert ^2} + (1 - {b^2})\vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2})dx\,dy}\) and

$${\bar u^{\ast}}\bar H{P_0}(s)\bar u = - b\left[ {{m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2}} \right] + 2b\left[ {\vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2}} \right] - 2{\rm Re}(\Phi _t^{\ast}{\Phi _x}).$$

Imposing the boundary condition Φ t = Φ x at x = 0 and using 2b ≤ 1 leads to the energy estimate

$$E({\Sigma _T}) + b\int\limits_0^T {\int\limits_{\mathbb {R}} {{{\left[ {{m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2}} \right]}_{x = 0}}}} dy\,dt \leq E({\Sigma _0}),$$

controlling Φ and all its first derivatives at the boundary surface.

Summarizing, we have seen that the most straightforward first-order reduction of the Klein-Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of Φ is replaced by its derivative Φ t − bΦ x along the time-like vector (1, −b), which points outside the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed time-like vector field, which points outside the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], it is possible to generalize this result to systems of quasilinear wave equations on curved backgrounds [264].
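The algebra behind the nonstandard reduction is easy to verify numerically. A minimal numpy sketch (the values b = 0.4 and m = 1 are arbitrary choices within the allowed ranges) checks that H̄ is a symmetrizer for both principal matrices and that the characteristic speeds normal to the boundary are −b, −b, −1, +1:

```python
import numpy as np

b, m = 0.4, 1.0  # any 0 < b <= 1/2 and any mass m work

# principal matrices of the new reduction u_t = A u_x + By u_y + C u
A = np.array([[b,   0.0, 0.0,      0.0],
              [0.0, -b,  1 - b**2, 0.0],
              [0.0, 1.0, b,        0.0],
              [0.0, 0.0, 0.0,      b]])
By = np.array([[0.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])
H = np.diag([m**2, 1.0, 1 - b**2, 1.0])  # candidate symmetrizer H-bar

# symmetric hyperbolicity: H A and H By symmetric, H positive definite
assert np.allclose(H @ A, (H @ A).T)
assert np.allclose(H @ By, (H @ By).T)
assert np.all(np.linalg.eigvalsh(H) > 0)

# characteristic speeds normal to the boundary: a plane wave u ~ exp(ik(x - st))
# satisfies s = -lambda for each eigenvalue lambda of A
speeds = np.sort(-np.linalg.eigvals(A).real)
print(speeds)  # speeds -1, -b, -b, +1: no zero speed fields, the boundary is noncharacteristic
```

Since no eigenvalue vanishes, the boundary x = 0 is noncharacteristic for the new reduction, which is precisely the property exploited in Section 5.2.1.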

In order to describe the result in [264], let π: E → M be a vector bundle over \(M = [0,T] \times \bar \Sigma\) with fiber ℝN; let ∇ μ be a fixed, given connection on E and let g μν = g μν (Φ) be a Lorentz metric on M with inverse gμν(Φ), which depends pointwise and smoothly on a vector-valued function Φ = {ΦA} A =1,2, …,N, parameterizing a local section of E. Assume that each time-slice Σ t = {t} × Σ is space-like and that the boundary \({\mathcal T} = [0,T] \times \partial \Sigma\) is time-like with respect to g μν (Φ). We consider a system of quasilinear wave equations of the form

$${g^{\mu \nu}}(\Phi){\nabla _\mu}{\nabla _\nu}{\Phi ^A} = {F^A}(\Phi ,\nabla \Phi),$$

where FA(Φ, ∇Φ) is a vector-valued function, which depends pointwise and smoothly on its arguments. The wave system (5.113) is subject to the initial conditions

$${\left. {{\Phi ^A}} \right\vert _{{\Sigma _0}}} = \Phi _0^A\,,\qquad {\left. {{n^\mu}{\nabla _\mu}{\Phi ^A}} \right\vert _{{\Sigma _0}}} = \Pi _0^A\,,$$

where \(\Phi _0^A\) and \(\Pi _0^A\) are given vector-valued functions on Σ0, and where nμ = nμ(Φ) denotes the future-directed unit normal to Σ0 with respect to g μν . In order to describe the boundary conditions, let \({T^\mu} = {T^\mu}(p,\Phi),p \in {\mathcal T}\), be a future-directed vector field on \({\mathcal T}\), which is normalized with respect to g μν , and let Nμ = Nμ(p, Φ) be the unit outward normal to \({\mathcal T}\) with respect to the metric g μν . We consider boundary conditions on \({\mathcal T}\) of the following form

$${\left. {\left[ {{T^\mu} + \alpha {N^\mu}} \right]{\nabla _\mu}{\Phi ^A}} \right\vert _{\mathcal T}} = {c^{\mu \,A}}_B{\left. {{\nabla _\mu}{\Phi ^B}} \right\vert _{\mathcal T}} + {d^A}_B{\left. {{\Phi ^B}} \right\vert _{\mathcal T}} + {G^A},$$

where α = α(p, Φ) > 0 is a strictly positive, smooth function, GA = GA(p) is a given, vector-valued function on \({\mathcal T}\) and the matrix coefficients \({c^{\mu A}}_B = {c^{\mu A}}_B(p,\Phi)\) and \({d^A}_B = {d^A}_B(p,\Phi)\) are smooth functions of their arguments. Furthermore, we assume that \({c^\mu}{^A_B}\) satisfies the following property. Given a local trivialization φ: U × ℝN → π −1(U) of E such that Ū ⊂ M is compact and contains a portion \({\mathcal U}\) of the boundary \({\mathcal T}\), there exists a smooth map J: U → GL(N, ℝ), \(p \mapsto ({J^A}_B(p))\) such that the transformed matrix coefficients

$${\tilde c^{\mu \,A}}_B: = {J^A}_C{c^{\mu \,C}}_D{\left({{J^{- 1}}} \right)^D}_B$$

are in upper triangular form with zeroes on the diagonal, that is

$${\tilde c^{\mu \,A}}_B = 0,\qquad B \leq A.$$
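The triangularity condition makes the coupling between the individual boundary conditions hierarchical: after the transformation J, the condition for ΦA involves only the fields ΦB with B > A, so the conditions can be processed sequentially. As a purely algebraic illustration (the actual proof works at the level of energy estimates), a strictly upper triangular coupling matrix is nilpotent, so a coupled linear system of boundary relations can be solved by finitely many back-substitutions; the entries below are arbitrary:

```python
import numpy as np

N = 3
# strictly upper triangular coupling matrix (arbitrary example entries)
c = np.array([[0.0, 2.0, -1.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# nilpotency: c^N = 0, so the Neumann series for (I - c)^(-1) terminates
assert np.allclose(np.linalg.matrix_power(c, N), 0)
assert np.allclose(np.eye(N) + c + c @ c, np.linalg.inv(np.eye(N) - c))

# solving the coupled relations v = c v + h by back-substitution,
# starting from the last (uncoupled) condition and working upward
h = np.array([1.0, 2.0, 3.0])
v = np.zeros(N)
for A in reversed(range(N)):
    v[A] = c[A] @ v + h[A]
assert np.allclose(v, np.linalg.solve(np.eye(N) - c, h))
print(v)  # -> [20. 11.  3.]
```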

Theorem 8. [264] The IBVP ( 5.113 , 5.114 , 5.115 ) is well posed. Given T > 0 and sufficiently small and smooth initial and boundary data \(\Phi _0^A,\Pi _0^A\) and GA satisfying the compatibility conditions at the edge S = {0} × Σ, there exists a unique smooth solution on M satisfying the evolution equation (5.113) , the initial condition (5.114) and the boundary condition (5.115) . Furthermore, the solution depends continuously on the initial and boundary data.

Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.

5.2.2 Existence of weak solutions and the adjoint problem

Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary space V p .

Our starting point is an IBVP of the form (5.1, 5.2, 5.3), where the matrix functions Aj(t, x) and b(t, x) do not depend on u, and where F(t, x, u) is replaced by B(t, x)u + F(t, x), such that the system is linear. Furthermore, we can assume that the initial and boundary data is trivial, f = 0, g = 0. We require the system to be symmetric hyperbolic with symmetrizer H(t, x) satisfying the conditions in Definition 4(iii), and assume the boundary conditions (5.3) are maximal dissipative. We rewrite the IBVP on Ω T := [0, T] × Σ as the abstract linear problem

$$- Lu = F,$$

where L: D(L) ⊂ X → X is the linear operator on the Hilbert space X := L2(Ω T ) defined by the evolution equation and the initial and boundary conditions:

$$\begin{array}{*{20}c} {D(L): = \{u \in C_b^\infty ({\Omega _T}):u(p) = 0\,{\rm{for}}\,{\rm{all}}\,p \in {\Sigma _0}\,{\rm{and}}\,u(p) \in {V_p}\,{\rm{for}}\,{\rm{all}}\,p \in {\mathcal T}\} ,\,} \\ {Lu: = \sum\limits_{\mu = 0}^n {{A^\mu}} (t,x){{\partial u} \over {\partial {x^\mu}}} + B(t,x)u,\qquad u \in D(L),\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \,\,} \\ \end{array}$$

where we have defined A0 := −I and x0 := t, where V p = {u ∈ ℂm : b(t, x)u = 0} is the boundary space, and where Σ0 := {0} × Σ, Σ T := {T} × Σ and \({\mathcal T}: = [0,T] \times \partial \Sigma\) denote the initial, the final and the boundary surface, respectively.

For the following, the adjoint IBVP plays an important role. This problem is defined as follows. First, the symmetrizer defines a natural scalar product on X,

$${\langle v,\,u\rangle _H}: = \int\limits_{{\Omega _T}} {{v^{\ast}}} (t,\,x)H(t,\,x)u(t,\,x)\,dt\,{d^n}x,\qquad u,\,v \in X,$$

which, because of the properties of H, is equivalent to the standard scalar product on L2 T ). In order to obtain the adjoint problem, we take u ∈ D(L) and \(\upsilon \in C_b^\infty ({\Omega _T})\), and use Gauss’s theorem to find

$${\langle v,Lu\rangle_H} = \langle {L^{\ast}}v,{u\rangle}_H + \int\limits_{{\Sigma _0}} {{v^{\ast}}} H(t,x)u\,{d^n}x - \int\limits_{{\Sigma _T}} {{v^{\ast}}} H(t,x)u\,{d^n}x + \int\limits_{\mathcal T} {{v^{\ast}}} H(t,x){P_0}(t,x,s)u\,dS,$$

where we have defined the formal adjoint L*: D(L*) ⊂ X → X of L by

$${L^{\ast}}v: = - \sum\limits_{\mu = 0}^n {{A^\mu}} (t,x){{\partial v} \over {\partial {x^\mu}}} - H{(t,x)^{- 1}}\sum\limits_{\mu = 0}^n {{{\partial [H(t,x){A^\mu}(t,x)]} \over {\partial {x^\mu}}}} v + H{(t,x)^{- 1}}B{(t,x)^{\ast}}H(t,x)v{.}$$

In order for the integrals on the right-hand side of Eq. (5.120) to vanish, such that 〈v, Lu〉 H = 〈L*v, u〉 H , we first notice that the integral over Σ0 vanishes, because u = 0 on Σ0. The integral over Σ T also vanishes if we require v = 0 on Σ T . The last term also vanishes if we require v to lie in the dual boundary space

$$V_p^{\ast}: = \{v \in {{\mathbb {C}}^m}:{v^{\ast}}H(t,x){P_0}(t,x,s)u = 0\,{\rm{for}}\,{\rm{all}}\,u \in {V_p}\} ,$$

for each \(p \in {\mathcal T}\). Therefore, if we define

$$D({L^{\ast}}): = \{v \in C_b^\infty ({\Omega _T}):v(p) = 0\,{\rm{for}}\,{\rm{all}}\,p \in {\Sigma _T}\,{\rm{and}}\,v(p) \in V_p^{\ast}\,{\rm{for}}\,{\rm{all}}\,p \in {\mathcal T}\} ,$$

we have 〈v, Lu〉 H = 〈L*v, u〉 H for all u ∈ D(L) and v ∈ D(L*); that is, the operator L* is adjoint to L. There is the following nice relation between the boundary spaces V p and V* p :

Lemma 4. Let \(p \in {\mathcal T}\) be a boundary point. Then, V p is maximal nonpositive if and only if V* p is maximal nonnegative.

Proof. Fix a boundary point \(p = (t,x) \in {\mathcal T}\) and define the matrix \({\mathcal B}: = H(t,x){P_0}(t,x,s)\) with s the unit outward normal to Σ at x. Since the system is symmetric hyperbolic, \({\mathcal B}\) is Hermitian. We decompose ℂm = E+EE0 into orthogonal subspaces E+, E, E0 on which \({\mathcal B}\) is positive, negative and zero, respectively. We equip E± with the scalar products (·,·)±, which are defined by

$${({u_ \pm},{v_ \pm})_ \pm}: = \pm u_ \pm ^{\ast}{\mathcal B}{v_ \pm},\qquad {u_ \pm},{v_ \pm} \in {E_ \pm}.$$

In particular, we have \({u^{\ast}}{\mathcal B}u = {({u_ +},{u_ +})_ +} - {({u_ -},{u_ -})_ -}\) for all u ∈ ℂm. Therefore, if V p is maximal nonpositive, there exists a linear transformation q: E − → E + satisfying ∣qu − ∣ + ≤ ∣u − ∣ − for all u − ∈ E − , such that (cf. Eq. (5.94))

$${V_p} = \{u \in {{\mathbb {C}}^m}:{u_ +} = q{u_ -}\} .$$

Let v ∈ V* p . Then,

$$0 = {v^{\ast}}{\mathcal B}u = {({v_ +},{u_ +})_ +} - {({v_ -},{u_ -})_ -} = {({v_ +},q{u_ -})_ +} - {({v_ -},{u_ -})_ -} = {({q^\dagger}{v_ +},{u_ -})_ -} - {({v_ -},{u_ -})_ -}$$

for all u ∈ V p , where q†: E + → E − is the adjoint of q with respect to the scalar products (·,·) ± defined on E ± . Therefore, v − = q†v + , and

$$V_p^{\ast} = \{v \in {{\mathbb C}^m}:{v_ -} = {q^\dagger}{v_ +}\} .$$

Since q† has the same norm as q, which is at most one, it follows that V* p is maximal nonnegative. The converse statement follows in an analogous way. □
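Lemma 4 can be checked numerically in the simplest nontrivial setting. The sketch below (an illustration with arbitrary numbers, not part of the proof) takes ℬ = diag(1, −1), so that E + and E − are one-dimensional and E0 is trivial, picks a scalar q with ∣q∣ ≤ 1, and verifies that the space V* p defined by v − = q†v + annihilates V p under ℬ and is nonnegative:

```python
import numpy as np

B = np.diag([1.0, -1.0])   # Hermitian boundary matrix H P0(s): E+ = span(e1), E- = span(e2)
q = 0.6 - 0.3j             # linear map E- -> E+ with |q| <= 1
rng = np.random.default_rng(0)

def rand_c():
    return complex(rng.standard_normal(), rng.standard_normal())

for _ in range(100):
    um = rand_c()
    u = np.array([q * um, um])            # u in V_p:  u_+ = q u_-
    vp = rand_c()
    v = np.array([vp, np.conj(q) * vp])   # v in V_p*: v_- = q^dagger v_+
    assert (np.conj(u) @ B @ u).real <= 1e-12   # V_p is nonpositive
    assert abs(np.conj(v) @ B @ u) < 1e-12      # v annihilates V_p under B
    assert (np.conj(v) @ B @ v).real >= -1e-12  # V_p* is nonnegative
print("duality of maximal nonpositive/nonnegative spaces verified")
```

Here v*ℬv = (1 − ∣q∣2)∣v + ∣2 ≥ 0, which is the scalar version of the norm argument in the proof.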

The lemma implies that solving the original problem −Lu = F with u ∈ D(L) is equivalent to solving the adjoint problem L*v = F with v ∈ D(L*), which, since v(T, x) = 0 is held fixed at Σ T , corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:

Lemma 5. There is a constant δ = δ(T) such that

$$\Vert Lu\Vert _{H}\; \geq \delta \Vert u\Vert _{H},\qquad \Vert {L^{\ast}}v\Vert _{H}\; \geq \delta \Vert v\Vert _{H}$$

for all u ∈ D(L) and v ∈ D(L*), where ∥·∥ H is the norm induced by the scalar product 〈·,·〉 H .

Proof. Let u ∈ D(L) and set F := −Lu. From the energy estimates in Section 3.2.3 one easily obtains

$$E({\Sigma _t}) \leq C\Vert F\Vert _H^2,\qquad 0 \leq t \leq T,$$

for some positive constants C depending on T. Integrating both sides from t = 0 to t = T gives

$$\Vert u\Vert _H^2\; \leq CT\Vert F\Vert _H^2\; = CT\Vert Lu\Vert _H^2,$$

which yields the statement for L setting δ:= (CT) −1/2. The estimate for L* follows from a similar energy estimate for the adjoint problem. □

In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint are unique. Since L and L* are closable operators [345], their closures \(\overline L\) and \(\overline {{L^\ast}}\) satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:

Definition 10. u ∈ X is called a weak solution of the problem (5.118) if

$${\langle {L^{\ast}}v,u\rangle _H} = - {\langle v,F\rangle _H}$$

for all v ∈ D(L*).

In order to prove the existence of such u ∈ X, we introduce the linear space \(Y = D(\overline {{L^{\ast}}})\) and equip it with the scalar product 〈·, ·〉 Y defined by

$${\langle v,w\rangle _Y}: = {\langle \overline {{L^{\ast}}} v,\overline {{L^{\ast}}} w\rangle _H},\qquad v,w \in Y.$$

The positivity of this product is a direct consequence of Lemma 5, and since \(\overline {L^\ast}\) is closed, Y defines a Hilbert space. Next, we define the linear form J: Y → ℂ on Y by

$$J(v): = - {\langle F,v\rangle _H}.$$

This form is bounded, according to Lemma 5,

$$\vert J(v)\vert \; \leq \;\Vert F{\Vert _H}\Vert v{\Vert _H}\; \leq {\delta ^{- 1}}\Vert F{\Vert _H}\Vert \overline {{L^{\ast}}} v{\Vert _H} = {\delta ^{- 1}}\Vert F{\Vert _H}\Vert v{\Vert _Y}$$

for all v ∈ Y. Therefore, according to the Riesz representation lemma there exists a unique w ∈ Y such that 〈w, v〉 Y = J(v) for all v ∈ Y. Setting \(u: = \overline {{L^{\ast}}} w \in X\) gives a weak solution of the problem.

If u ∈ X is a weak solution, which is sufficiently smooth, it follows from the Green type identity (5.120) that u has vanishing initial data and that it satisfies the required boundary conditions, and hence is a solution to the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to be made. See [189, 275, 344, 343, 387] for such “weak=strong” results.

Absorbing boundary conditions

When modeling isolated systems, the boundary conditions have to be chosen such that they minimize spurious reflections from the boundary surface. This means that inside the computational domain, the solution of the IBVP should lie as close as possible to the true solution of the Cauchy problem on the unbounded domain. In this sense, the dynamics outside the computational domain is replaced by appropriate conditions on a finite, artificial boundary. Clearly, this can only work in particular situations, where the solutions outside the domain are sufficiently simple so that they can be computed and used to construct boundary conditions, which are, at least, approximately compatible with them. Boundary conditions, which give rise to a well-posed IBVP and achieve this goal are called absorbing, non-reflecting or radiation boundary conditions in the literature, and there has been a substantial amount of work on the construction of such conditions for wave problems in acoustics, electromagnetism, meteorology, and solid geophysics (see [206] for a review). Some recent applications to general relativity are mentioned in Sections 6 and 10.3.1.

One approach in the construction of absorbing boundary conditions is based on suitable series or Fourier expansions of the solution, and derives a hierarchy of local boundary conditions with increasing order of accuracy [153, 46, 240]. Typically, such higher-order local boundary conditions involve solving differential equations at the boundary surface, where the order of the differential equation is increasing with the order of the accuracy. This problem can be dealt with by introducing auxiliary variables at the boundary surface [207, 208].

The starting point for a slightly different approach is an exact nonlocal boundary condition, which involves the convolution with an appropriate integral kernel. A method based on an efficient approximation of this integral kernel is then implemented; see, for instance, [16, 17] for the case of the 2D and 3D flat wave equations and [271, 270, 272] for the Regge-Wheeler [347] and Zerilli [453] equations describing linear gravitational waves on a Schwarzschild background. Although this method is robust, very accurate and stable, it is based on detailed knowledge of the solutions, which might not always be available in more general situations.

In the following, we illustrate some aspects of the problem of constructing absorbing boundary conditions on some simple examples [372]. Specifically, we construct local absorbing boundary conditions for the wave equation with a spherical outer boundary at radius R > 0.

The one-dimensional wave equation

Consider first the one-dimensional case,

$${u_{tt}} - {u_{xx}} = 0,\qquad \vert x\vert < R,\quad t > 0.$$

The general solution is a superposition of a left- and a right-moving solution,

$$u(t,x) = {f_ \nwarrow}(x + t) + {f_ \nearrow}(x - t).$$

Therefore, the boundary conditions

$$({b_ -}u)(t, - R) = 0,\qquad ({b_ +}u)(t, + R) = 0,\qquad {b_ \pm}: = {\partial \over {\partial t}} \pm {\partial \over {\partial x}},\qquad t > 0,$$

are perfectly absorbing according to our terminology. Indeed, the operator b + has as its kernel the right-moving solutions f↗(x − t); hence, the boundary condition (b + u)(t, R) = 0 at x = R is transparent to these solutions. On the other hand, b + f↖(x + t) = 2f′↖(x + t), which implies that at x = R, the boundary condition requires that f↖(v) = f↖(R) be constant for advanced time v = t + x > R. A similar argument shows that the left boundary condition (b − u)(t, −R) = 0 implies that f↗(−u) = f↗(−R) is constant for retarded time u = t − x > R. Together with initial conditions for u and its time derivative at t = 0 satisfying the compatibility conditions, Eqs. (5.135, 5.137) give rise to a well-posed IBVP. In particular, the solution is identically zero after one crossing time t ≥ 2R for initial data, which are compactly supported inside the interval (−R, R).
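In the characteristic variables p := u t + u x and q := u t − u x , the wave equation splits into the two advection equations p t = p x and q t = −q x , and the boundary conditions above become the homogeneous inflow conditions p = 0 at x = R and q = 0 at x = −R. A minimal Python sketch (grid size and initial pulse are arbitrary choices) illustrates the perfectly-absorbing property: with CFL number one the upwind scheme transports the data exactly, and the discrete solution vanishes after one crossing time:

```python
import numpy as np

R, J = 1.0, 200                       # domain (-R, R) with J + 1 grid points
x = np.linspace(-R, R, J + 1)
p = np.exp(-(x / 0.2) ** 2)           # p = u_t + u_x, left-moving characteristic
q = np.exp(-(x / 0.2) ** 2)           # q = u_t - u_x, right-moving characteristic

# Upwind scheme with CFL number one (dt = dx) is an exact shift, so the
# absorbing conditions p = 0 at x = +R and q = 0 at x = -R inject zeros.
for _ in range(J + 1):                    # slightly more than one crossing time 2R
    p = np.concatenate([p[1:], [0.0]])    # p_t = p_x: transport toward x = -R
    q = np.concatenate([[0.0], q[:-1]])   # q_t = -q_x: transport toward x = +R

# all wave content has left the domain without reflection
print(np.abs(p).max(), np.abs(q).max())   # -> 0.0 0.0
```

Since u t = (p + q)/2 and u x = (p − q)/2, the vanishing of p and q means the solution has become constant inside the domain, consistent with the statement above.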

The three-dimensional wave equation

Generalizing the previous example to higher dimensions is a nontrivial task. This is due to the fact that there are infinitely many propagation directions for outgoing waves, and not just two as in the one-dimensional case. Ideally, one would like to control all the propagation directions k, which are outgoing at the boundary (k · n > 0, where n is the unit outward normal to the boundary), but this is obviously difficult. Instead, one can try to control specific directions (starting with the one that is normal to the outer boundary). Here, we illustrate the method of [46] on the three-dimensional wave equation,

$${u_{tt}} - \Delta u = 0,\qquad \vert x\vert < R,\quad t > 0.$$

The general solution can be decomposed into spherical harmonics Yℓm according to

$$u(t,r,\vartheta ,\varphi) = {1 \over r}\sum\limits_{\ell = 0}^\infty {\sum\limits_{m = - \ell}^\ell {{u_{\ell m}}}} (t,r){Y^{\ell m}}(\vartheta ,\varphi),$$

which yields the family of reduced equations

$$\left[ {{{{\partial ^2}} \over {\partial {t^2}}} - {{{\partial ^2}} \over {\partial {r^2}}} + {{\ell (\ell + 1)} \over {{r^2}}}} \right]\;{u_{\ell m}}(t,r) = 0,\qquad 0 < r < R,\quad t > 0.$$

For ℓ = 0 this equation reduces to the one-dimensional wave equation, for which the general solution is u 00 (t,r) = U 00↗ (r − t) + U 00↖ (r + t) with U 00↗ and U 00↖ two arbitrary functions. Therefore, the boundary condition

$${{\mathcal B}_0}:\qquad b(ru){\vert _{r = R}} = 0,\qquad b: = {r^2}\;\left({{\partial \over {\partial t}} + {\partial \over {\partial r}}} \right)\;,\qquad t > 0,$$

is perfectly absorbing for spherical waves. For ℓ ≥ 1, exact solutions can be generated from the solutions for ℓ = 0 by applying suitable differential operators to u 00 (t,r). For this, we define the operators [92]

$${a_\ell} \equiv {\partial \over {\partial r}} + {\ell \over r}\,,\qquad a_\ell ^\dagger \equiv - {\partial \over {\partial r}} + {\ell \over r},$$

which satisfy the operator identities

$${a_{\ell + 1}}a_{\ell + 1}^\dagger = a_\ell ^\dagger {a_\ell} = - {{{\partial ^2}} \over {\partial {r^2}}} + {{\ell (\ell + 1)} \over {{r^2}}}\;.$$

As a consequence, for each ℓ = 1, 2, 3, …, we have

$$\begin{array}{*{20}c} {\left[ {{{{\partial ^2}} \over {\partial {t^2}}} - {{{\partial ^2}} \over {\partial {r^2}}} + {{\ell (\ell + 1)} \over {{r^2}}}} \right]a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger = \left[ {{{{\partial ^2}} \over {\partial {t^2}}} + a_\ell ^\dagger {a_\ell}} \right]a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;} \\ {= a_\ell ^\dagger \left[ {{{{\partial ^2}} \over {\partial {t^2}}} + a_{\ell - 1}^\dagger {a_{\ell - 1}}} \right]a_{\ell - 1}^\dagger \ldots a_1^\dagger} \\ {= a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger \left[ {{{{\partial ^2}} \over {\partial {t^2}}} - {{{\partial ^2}} \over {\partial {r^2}}}} \right]\;.\quad \;\,} \\ \end{array}$$

Therefore, we have the explicit in- and outgoing solutions

$$\begin{array}{*{20}c} {{u_{\ell m \nwarrow}}(t,r) = a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger {V_{\ell m}}(r + t) = \sum\limits_{j = 0}^\ell {{{(- 1)}^j}} {{(2\ell - j)!} \over {(\ell - j)!\,j!}}{{(2r)}^{j - \ell}}V_{\ell m}^{(j)}(r + t),\;} \\ {{u_{\ell m \nearrow}}(t,r) = a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger {U_{\ell m}}(r - t) = \sum\limits_{j = 0}^\ell {{{(- 1)}^j}} {{(2\ell - j)!} \over {(\ell - j)!\,j!}}{{(2r)}^{j - \ell}}U_{\ell m}^{(j)}(r - t),} \\ \end{array}$$

where V ℓm and U ℓm are arbitrary smooth functions with j’th derivatives \(V_{\ell m}^{(j)}\) and \(U_{\ell m}^{(j)}\), respectively. In order to construct boundary conditions, which are perfectly absorbing for u ℓm , one first notices the following identity:

$${b^{\ell + 1}}a_\ell ^\dagger a_{\ell - 1}^\dagger \ldots a_1^\dagger U(r - t) = 0$$

for all ℓ = 0, 1, 2, … and all sufficiently smooth functions U. This identity follows easily from Eq. (5.144) and the fact that \({b^{\ell + 1}}\left({{r^k}U(r - t)} \right) = k(k + 1) \cdots (k + \ell){r^{k + \ell + 1}}U(r - t)\), which vanishes if k ∈ {0, −1, −2, …, −ℓ}. Therefore, given L ∈ {1, 2, 3, …}, the boundary condition

$${{\mathcal B}_L}:\qquad {b^{L + 1}}(ru){\vert _{r = R}} = 0$$

leaves the outgoing solutions with ℓ ≤ L unaltered. Notice that this condition is local in the sense that its formulation does not require the decomposition of u into spherical harmonics. Based on the Laplace method, it was proven in [46] (see also [369]) that each boundary condition \({{\mathcal B}_L}\) yields a well-posed IBVP. By uniqueness this implies that initial data corresponding to a purely outgoing solution with ℓ ≤ L yields a purely outgoing solution (without reflections). In this sense, the condition \({{\mathcal B}_L}\) is perfectly absorbing for waves with ℓ ≤ L. For waves with ℓ > L, one obtains spurious reflections; however, for monochromatic radiation with wave number k, the corresponding amplitude reflection coefficients can be calculated to decay as (kR)−2(L+1) in the wave zone kR ≫ 1 [88]. Furthermore, in most scenarios with smooth solutions, the amplitudes corresponding to the lower few ℓ’s will dominate over the ones with high ℓ, so that reflections from high ℓ’s are unimportant. For a numerical implementation of the boundary condition \({{\mathcal B}_2}\) via spectral methods and a possible application to general relativity see [314].
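The identities underlying the hierarchy \({{\mathcal B}_L}\) can be verified symbolically for the lowest multipoles. The following sympy sketch (an illustrative check added here, not taken from the cited references) confirms the operator identity between a ℓ+1 a† ℓ+1 and a† ℓ a ℓ , that the functions a† ℓ ⋯ a† 1 U(r − t) solve the reduced wave equation, and that they are annihilated by bℓ+1:

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
U = sp.Function('U')

def a(l, g):     # a_l = d/dr + l/r
    return sp.diff(g, r) + l * g / r

def adag(l, g):  # a_l^dagger = -d/dr + l/r
    return -sp.diff(g, r) + l * g / r

def b(g):        # b = r^2 (d/dt + d/dr)
    return r**2 * (sp.diff(g, t) + sp.diff(g, r))

# operator identity a_{l+1} a_{l+1}^dagger = a_l^dagger a_l = -d^2/dr^2 + l(l+1)/r^2
f = sp.Function('f')(r)
for l in range(1, 4):
    target = -sp.diff(f, r, 2) + l * (l + 1) * f / r**2
    assert sp.simplify(a(l + 1, adag(l + 1, f)) - target) == 0
    assert sp.simplify(adag(l, a(l, f)) - target) == 0

for l in range(1, 3):
    # outgoing mode u_l = a_l^dagger ... a_1^dagger U(r - t)
    u = U(r - t)
    for k in range(1, l + 1):
        u = adag(k, u)
    # it solves the reduced wave equation with angular momentum number l ...
    wave = sp.diff(u, t, 2) - sp.diff(u, r, 2) + l * (l + 1) * u / r**2
    assert sp.simplify(wave) == 0
    # ... and is annihilated by b^(l+1), so the condition B_L is transparent
    # to outgoing waves with l <= L
    w = u
    for _ in range(l + 1):
        w = b(w)
    assert sp.simplify(w) == 0
print("identities verified for l = 1, 2")
```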

The wave equation on a curved background

When the background is curved, it is not always possible to construct in- and outgoing solutions explicitly, as in the previous example. Therefore, it is not even clear how a hierarchy of absorbing boundary conditions should be formulated. However, in many applications the spacetime is asymptotically flat, and if the boundary surface is placed sufficiently far from the strong field region, one can assume that the metric is a small deformation of the flat, Minkowski metric. To first order in M/R with M the ADM mass and R the areal radius of the outer boundary, these correction terms are given by those of the Schwarzschild metric, and approximate in- and outgoing solutions for all (ℓ, m) modes can again be computed [372]. The M/R terms in the background metric induce two kinds of corrections in the in- and outgoing solutions u ℓm . The first is a curvature correction term, which just adds M/R terms to the coefficients in the sum of Eq. (5.144). This term is local and still obeys Huygens’ principle. The second term is fast decaying (it decays as (R/r)ℓ+1) and describes the backscatter off the curvature of the background. As a consequence, it is nonlocal (it depends on the past history of the unperturbed solution) and violates Huygens’ principle.

By construction, the boundary conditions \({{\mathcal B}_L}\) are perfectly absorbing for outgoing waves with angular momentum number ℓ ≤ L, including their curvature corrections to first order in M/R. If the first-order correction terms responsible for the backscatter are taken into account, then \({{\mathcal B}_L}\) are no longer perfectly absorbing, but the spurious reflections arising from these correction terms have been estimated in [372] to decay at least as fast as (M/R)(kR)−2 for monochromatic waves with wave number k satisfying M ≪ k−1 ≪ R.

The well-posedness of higher-order absorbing boundary conditions for wave equations on a curved background can be established by assuming the localization principle and the Laplace method [369]. Some applications to general relativity are discussed in Sections 6 and 10.3.1.

Boundary Conditions for Einstein’s Equations

The subject of this section is the discussion of the IBVP for Einstein’s field equations. There are at least three difficulties when formulating Einstein’s equations on a finite domain with artificial outer boundaries. First, as we have seen in Section 4, the evolution equations are subject to constraints, which, in general, propagate with nontrivial characteristic speeds. As a consequence, in general there are incoming constraint fields at the boundary that need to be controlled in order to make sure that the constraints propagate correctly, i.e., that constraint-satisfying initial data yields a solution of the evolution equations and the constraints on the complete computational domain, and not just on its domain of dependence. The control of these incoming constraint fields leads to constraint-preserving boundary conditions, and a nontrivial task is to fit these conditions into one of the admissible boundary conditions discussed in the previous Section 5, for which well-posedness can be shown.

A second issue is the construction of absorbing boundary conditions. Unlike the simple examples considered in Section 5.3, for which the fields evolve on a fixed background and in- and outgoing solutions can be represented explicitly, or at least characterized precisely, in general relativity it is not even clear how to define in- and outgoing gravitational radiation since there are no local expressions for the gravitational energy density and flux. Therefore, the best one can hope for is to construct boundary conditions, which approximately control the incoming gravitational radiation in certain regimes, like, for example, in the weak field limit where the field equations can be linearized around, say, a Schwarzschild or Minkowski spacetime.

Finally, the third issue is related to the diffeomorphism invariance of the theory. Ideally, one would like to formulate a geometric version of the IBVP, for which the data given on the initial and boundary surfaces Σ0 and \({\mathcal T}\) can be characterized in terms of geometric quantities such as the first and second fundamental forms of these surfaces as embedded in the yet unknown spacetime (M, g). In particular, this means that one should be able to identify equivalent data sets, i.e., those which are related to each other by a diffeomorphism of M, leaving Σ0 and \({\mathcal T}\) invariant, by local transformations on Σ0 and \({\mathcal T}\), without knowing the solution (M, g). It is currently not even clear if such a geometric uniqueness property does exist; see [186, 355] for further discussions on these points.

A well-posed IBVP for Einstein’s vacuum field equations was first formulated by Friedrich and Nagy [187] based on a tetrad formalism, which incorporates the Weyl curvature tensor as an independent field. This formulation exploits the freedom of choosing local coordinates and the tetrad orientation in order to impose very precise gauge conditions, which are adapted to the boundary surface \({\mathcal T}\) and tailored to the IBVP. These gauge conditions, together with a suitable modification of the evolution equations for the Weyl curvature tensor using the constraints (cf. Example 32), lead to a first-order symmetric hyperbolic system in which all the constraint fields propagate tangentially to \({\mathcal T}\) at the boundary. As a consequence, no constraint-preserving boundary conditions need to be specified, and the only incoming fields are related to the gravitational radiation, at least in the context of the approximations mentioned above. With this, the problem can be shown to be well posed using the techniques described in Section 5.2.

After the pioneering work of [187], there has been much effort in formulating a well-posed IBVP for metric formulations of general relativity, on which most numerical calculations are based. However, with the exception of particular cases in spherical symmetry [249], the linearized field equations [309] or the restriction to flat, totally reflecting boundaries [404, 405, 106, 98, 219, 220, 410, 29, 15], not much progress had been made towards obtaining a manifestly well-posed IBVP with nonreflecting, constraint-preserving boundary conditions. The difficulties encountered were similar to those described in Examples 31 and 32. Namely, controlling the incoming constraint fields usually resulted in boundary conditions for the main system involving either derivatives of its characteristic fields or fields propagating with zero speed when the system was written in first-order symmetric hyperbolic form. Therefore, the theory of maximal dissipative boundary conditions could not be applied in these attempts. Instead, boundary conditions controlling the incoming characteristic constraint fields were specified, combined with more or less ad hoc conditions controlling the gauge and gravitational degrees of freedom, and verified to satisfy the Lopatinsky condition (5.27) using the Laplace method; see [395, 108, 378, 220, 363, 368].

The breakthrough in the metric case came with the work by Kreiss and Winicour [267], who formulated a well-posed IBVP for the linearized Einstein vacuum field equations with harmonic coordinates. Their method is based on the pseudo-differential first-order reduction of the wave equation described in Section 5.1.3, which, when combined with Sommerfeld boundary conditions, yields a problem that is strongly well posed in the generalized sense and, when applied to systems of equations, allows a certain hierarchical coupling in the boundary conditions. This work was then generalized to shifted wave equations and higher-order absorbing boundary conditions in [369]. Later, it was recognized that the results in [267] could also be established using the usual a priori energy estimates obtained from integration by parts [263]. Finally, it was found that the boundary conditions imposed were actually maximal dissipative for a specific nonstandard class of first-order symmetric hyperbolic reductions of the wave system; see Section 5.2.1. Unlike the reductions considered in earlier work, this nonstandard class has the property that the boundary surface is noncharacteristic, which implies that no zero speed fields are present, and yields a strongly well-posed system. Based on this reduction and the theory of quasilinear symmetric hyperbolic formulations with maximal dissipative boundary conditions [218, 388], it was possible to extend the results in [267, 263] and formulate a well-posed IBVP for quasilinear systems of wave equations [264] with a certain class of boundary conditions (see Theorem 8), which was sufficiently flexible to treat the Einstein equations. Furthermore, the new reduction also offers the interesting possibility of extending the proof to the discretized case using finite difference operators satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.

In order to parallel the presentation in Section 4, here we focus on the IBVP for Einstein’s equations in generalized harmonic coordinates and the IBVP for the BSSN system. The first case, which is discussed in Section 6.1, is an application of Theorem 8. In the BSSN case, only partial results have been obtained so far, but since the BSSN system is widely used, we nevertheless present some of these results in Section 6.2. In Section 6.3 we discuss some of the problems encountered when trying to formulate a geometric uniqueness theorem and, finally, in Section 6.4 we briefly mention alternative approaches to the IBVP, which do not require an artificial boundary.

For an alternative approach to treating the IBVP, which is based on the imposition of the Gauss-Codazzi equations at \({\mathcal T}\), see [191, 192, 194, 193]. For numerical studies, see [249, 104, 40, 404, 405, 98, 287, 244, 378, 253, 61, 362, 35, 33, 368, 57, 56], especially [366] and [369] for a comparison between different boundary conditions used in numerical relativity and [365] for a numerical implementation of higher-order absorbing boundary conditions. For review articles on the IBVP in general relativity, see [372, 355, 435].

At present, there are no numerical simulations that are based directly on the well-posed IBVP for the tetrad formulation [187] or the well-posed IBVP for the harmonic formulation [267, 263, 264] described in Section 6.1, nor is there a numerical implementation of the constraint-preserving boundary conditions for the BSSN system presented in Section 6.2. The closest example is the harmonic approach described in [286, 363, 366], which has been shown to be well posed in the generalized sense in the high-frequency limit [369]. However, as mentioned above, the well-posed IBVP in [264] opens the door for a numerical discretization based on the energy method, which can be proven to be stable, at least in the linearized case.

The harmonic formulation

Here, we discuss the IBVP formulated in [264] for the Einstein vacuum equations in generalized harmonic coordinates. The starting point is a manifold of the form M = [0, T] × Σ, with Σ a three-dimensional compact manifold with \({C^\infty}\)-boundary \(\partial \Sigma\), and a given, fixed smooth background metric \({\overset \circ g _{\alpha \beta}}\) with corresponding Levi-Civita connection \(\overset \circ \nabla\), as in Section 4.1. We assume that the time slices Σ t := {t} × Σ are space-like and that the boundary surface \({\mathcal T}: = [0,T] \times \partial \Sigma\) is time-like with respect to \({\overset \circ g_{\alpha \beta}}\).

In order to formulate the boundary conditions, we first construct a null tetrad \(\{{K^\mu},{L^\mu},{Q^\mu},{{\bar Q}^\mu}\}\), which is adapted to the boundary. This null tetrad is based on the choice of a future-directed time-like vector field Tμ tangent to \({\mathcal T}\), which is normalized such that g μν TμTν = −1. One possible choice is to tie Tμ to the foliation Σ t , and then define it in the direction orthogonal to the cross sections \(\partial {\Sigma _t} = \{t\} \times \partial \Sigma\) of the boundary surface. A more geometric choice has been proposed in [186], where instead Tμ is chosen as a distinguished future-directed time-like eigenvector of the second fundamental form of \({\mathcal T}\), as embedded in (M, g). Next, we denote by Nμ the unit outward normal to \({\mathcal T}\) with respect to the metric g μν and complete Tμ and Nμ to an orthonormal basis {Tμ, Nμ, Vμ, Wμ} of T p M at each point \(p \in {\mathcal T}\). Then, we define the complex null tetrad by

$${K^\mu}: = {T^\mu} + {N^\mu},\qquad {L^\mu}: = {T^\mu} - {N^\mu},\qquad {Q^\mu}: = {V^\mu} + i\,{W^\mu},\qquad {\bar Q^\mu}: = {V^\mu} - i\,{W^\mu},$$

where \(i = \sqrt {- 1}\). Notice that the construction of these vectors is implicit, since it depends on the dynamical metric gαβ, which is yet unknown. However, the dependency is algebraic, and does not involve any derivatives of g αβ . We also note that the complex null vector Qμ is not unique since it can be rotated by an angle φ ∈ ℝ, \({Q^\mu} \mapsto {e^{i\varphi}}{Q^\mu}\). Finally, we define a radial function r on \({\mathcal T}\) as the areal radius of the cross sections \(\partial {\Sigma _t}\) with respect to the background metric.
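As a concrete check of these algebraic relations, the following sketch builds the boundary-adapted null tetrad on a Minkowski background (an assumption made purely for illustration; in the text the frame is constructed from the dynamical metric g αβ ) and verifies the null and normalization conditions that the boundary conditions below rely on.

```python
import numpy as np

# Minkowski background for illustration only; in the text {T, N, V, W}
# is an orthonormal frame with respect to the dynamical metric g.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def g(u, v):
    # bilinear form g_{mu nu} u^mu v^nu (no complex conjugation)
    return u @ eta @ v

# orthonormal frame adapted to a boundary with unit outward normal N
T = np.array([1.0, 0.0, 0.0, 0.0])   # future-directed time-like, g(T,T) = -1
N = np.array([0.0, 1.0, 0.0, 0.0])   # unit outward normal, g(N,N) = +1
V = np.array([0.0, 0.0, 1.0, 0.0])
W = np.array([0.0, 0.0, 0.0, 1.0])

# boundary-adapted complex null tetrad
K = T + N
L = T - N
Q = V + 1j * W
Qbar = V - 1j * W

# null and normalization relations
assert g(K, K) == 0 and g(L, L) == 0        # K, L are null
assert g(Q, Q) == 0 and g(Qbar, Qbar) == 0  # Q, Qbar are null
assert g(K, L) == -2                        # cross normalization
assert g(Q, Qbar) == 2                      # Q^mu Qbar_mu = 2
assert g(K, Q) == 0 and g(L, Q) == 0        # Q orthogonal to K and L
```

Note that all of these relations are preserved under the rotation Q ↦ e^{iφ}Q mentioned above, which is why that freedom remains in the construction.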

Then, the boundary conditions, which were proposed in [264] for the harmonic system (4.5), are:

$${\left. {{{\overset \circ \nabla}_K}{h_{KK}} + {2 \over r}{h_{KK}}} \right\vert _{\mathcal T}} = {q_K},$$
$${\left. {{{\overset \circ \nabla}_K}{h_{KL}} + {1 \over r}({h_{KL}} + {h_{Q\bar Q}})} \right\vert _{\mathcal T}} = {q_L},$$
$${\left. {{{\overset \circ \nabla}_K}{h_{KQ}} + {2 \over r}{h_{KQ}}} \right\vert _{\mathcal T}} = {q_Q},$$
$${\left. {{{\overset \circ \nabla}_{K}}{{h}_{QQ}} - {{\overset \circ \nabla}_{Q}}{{h}_{QK}}} \right\vert _{\mathcal T}} = {q_{QQ}},$$
$${\left. {{{\overset \circ \nabla}_K}{h_{Q\bar Q}} + {{\overset \circ \nabla}_L}{h_{KK}} - {{\overset \circ \nabla}_Q}{h_{K\bar Q}} - {{\overset \circ \nabla}_{\bar Q}}{h_{KQ}}} \right\vert _{\mathcal T}} = {\left. {2{H_K}} \right\vert _{\mathcal T}},$$
$${\left. {{{\overset \circ \nabla}_K}{h_{LQ}} + {{\overset \circ \nabla}_L}{h_{KQ}} - {{\overset \circ \nabla}_Q}{h_{KL}} - {{\overset \circ \nabla}_{\bar Q}}{h_{QQ}}} \right\vert _{\mathcal T}} = {\left. {2{H_Q}} \right\vert _{\mathcal T}},$$
$${\left. {{{\overset \circ \nabla}_K}{h_{LL}} + {{\overset \circ \nabla}_L}{h_{Q\bar Q}} - {{\overset \circ \nabla}_Q}{h_{L\bar Q}} - {{\overset \circ \nabla}_{\bar Q}}{h_{LQ}}} \right\vert _{\mathcal T}} = {\left. {2{H_L}} \right\vert _{\mathcal T}},$$

where \({\overset \circ \nabla _K}{h_{LQ}}: = {K^\mu}{L^\alpha}{Q^\beta}{\overset \circ \nabla _\mu}{h_{\alpha \beta}},{h_{KL}}: = {K^\alpha}{L^\beta}{h_{\alpha \beta}},{H_K}: = {K^\mu}{H_\mu}\), etc., and where q K and q L are real-valued given smooth functions on \({\mathcal T}\) and q Q and q QQ are complex-valued given smooth functions on \({\mathcal T}\). Since Q is complex, these constitute ten real boundary conditions for the metric coefficients h αβ . The content of the boundary conditions (6.2, 6.3, 6.4, 6.5) can be clarified by considering linearized gravitational waves on a Minkowski background with a spherical boundary. The analysis in [264] shows that in this context the four real conditions (6.2, 6.3, 6.4) are related to the gauge freedom, and the two conditions (6.5) control the gravitational radiation. The remaining conditions (6.6, 6.7, 6.8) enforce the constraint Cμ = 0 on the boundary (see Eq. (4.6)), and so together with the constraint propagation system (4.14) and the initial constraints (4.15) they guarantee that the constraints are correctly propagated. Based on these observations, it is expected that these boundary conditions yield small spurious reflections in the case of a nearly-spherical boundary in the wave zone of an asymptotically-flat curved spacetime.

Well-posedness of the IBVP

The IBVP consisting of the harmonic Einstein equations (4.5), initial data (4.7) and the boundary conditions (6.2–6.8) can be shown to be well posed as an application of Theorem 8. For this, we first notice that the evolution equations (4.5) have the required form of Eq. (5.113), where E is the vector bundle of symmetric, covariant tensor fields h μν on M. Next, the boundary conditions can be written in the form of Eq. (5.115) with α = 1. In order to compute the matrix coefficients \({c^\mu}{^A_B}\), it is convenient to decompose h μν = hAe Aμν in terms of the basis vectors

$$\begin{array}{*{20}c} {{e_{1\,\alpha \beta}}: = {K_\alpha}{K_\beta},\quad {e_{2\,\alpha \beta}}: = - 2{K_{(\alpha}}{{\bar Q}_{\beta)}},\quad {e_{3\,\alpha \beta}}: = - 2{K_{(\alpha}}{Q_{\beta)}},\quad {e_{4\,\alpha \beta}}: = 2{Q_{(\alpha}}{{\bar Q}_{\beta)}},} \\ {{e_{5\,\alpha \beta}}: = {{\bar Q}_\alpha}{{\bar Q}_\beta},\quad {e_{6\,\alpha \beta}}: = {Q_\alpha}{Q_\beta}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {{e_{7\,\alpha \beta}}: = - 2{L_{(\alpha}}{{\bar Q}_{\beta)}},\quad {e_{8\,\alpha \beta}}: = - 2{L_{(\alpha}}{Q_{\beta)}},\quad {e_{9\,\alpha \beta}}: = 2{K_{(\alpha}}{L_{\beta)}},\quad {e_{10\,\alpha \beta}}: = {L_\alpha}{L_\beta},} \\ \end{array}$$

with \({h^1} = {h_{LL}}/4,\,{h^2} = {{\bar h}^3} = {h_{LQ}}/4,\,{h^4} = {h_{Q\bar Q}}/4,\,{h^5} = {{\bar h}^6} = {h_{QQ}}/4,\,{h^7} = {{\bar h}^8} = {h_{KQ}}/4,\,{h^9} = {h_{KL}}/4,\,{h^{10}} = {h_{KK}}/4\). With respect to this basis, the only nonzero matrix coefficients are

$$\begin{array}{*{20}c} {{c^{\mu \,1}}_2 = {{\bar Q}^\mu},} & {{c^{\mu \,1}}_3 = {Q^\mu},} & {{c^{\mu \,1}}_4 = - {L^\mu},} \\ {{c^{\mu \,2}}_5 = {{\bar Q}^\mu},} & {{c^{\mu \,2}}_7 = - {L^\mu},} & {{c^{\mu \,2}}_9 = {Q^\mu},} \\ {{c^{\mu \,3}}_6 = {Q^\mu},} & {{c^{\mu \,3}}_8 = - {L^\mu},} & {{c^{\mu \,3}}_9 = {{\bar Q}^\mu},} \\ {{c^{\mu \,4}}_7 = {{\bar Q}^\mu},} & {{c^{\mu \,4}}_8 = {Q^\mu},} & {{c^{\mu \,4}}_{10} = - {L^\mu},} \\ {{c^{\mu \,5}}_7 = {Q^\mu},} & {{c^{\mu \,6}}_8 = {{\bar Q}^\mu},} & {} \\ \end{array}$$

which has the required upper triangular form with zeros in the diagonal. Therefore, the hypotheses of Theorem 8 are verified and one obtains a well-posed IBVP for Einstein’s equations in harmonic coordinates.
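The hierarchical structure can be checked mechanically from the sparsity pattern of the coefficients \({c^\mu}{^A_B}\) listed above. The following sketch records the nonzero (A, B) pairs and verifies that the resulting coupling matrix is strictly upper triangular, and hence nilpotent, which is the structural property Theorem 8 requires of the boundary-condition coupling.

```python
import numpy as np

# sparsity pattern (A, B) of the nonvanishing coefficients c^{mu A}_B
# listed above, in the basis e_1, ..., e_10
nonzero = [(1, 2), (1, 3), (1, 4),
           (2, 5), (2, 7), (2, 9),
           (3, 6), (3, 8), (3, 9),
           (4, 7), (4, 8), (4, 10),
           (5, 7), (6, 8)]

C = np.zeros((10, 10))
for A, B in nonzero:
    C[A - 1, B - 1] = 1.0  # placeholder for the vector-valued entry

# strictly upper triangular: every coupling feeds a component of higher
# index, so the boundary conditions can be imposed hierarchically
assert all(A < B for A, B in nonzero)
assert np.array_equal(np.triu(C, k=1), C)

# consequently the coupling matrix is nilpotent
assert np.count_nonzero(np.linalg.matrix_power(C, 10)) == 0
```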

This result also applies to the modified system (4.16), since the constraint damping terms, which are added, do not modify the principal part of the main evolution system nor that of the constraint propagation system.

Boundary conditions for BSSN

Here we discuss boundary conditions for the BSSN system (4.52–4.59), which is used extensively in numerical calculations of spacetimes describing dynamical black holes and neutron stars. Unfortunately, to date, this system lacks an initial-boundary value formulation for which well-posedness in the full nonlinear case has been proven. Without doubt, the reason for this lies in the structure of the evolution equations, which are mixed first/second order in space, and whose principal part is much more complicated than in the harmonic case, where one deals with a system of wave equations.

A first step towards formulating a well-posed IBVP for the BSSN system was performed in [52], where the evolution equations (4.52, 4.53, 4.56–4.59) with a fixed shift and the relation f = μ ≡ (4m −1)/3 were reduced to a first-order symmetric hyperbolic system. Then, a set of six boundary conditions consistent with this system could be formulated based on the theory of maximal dissipative boundary conditions. Although this gives rise to a well-posed IBVP, the boundary conditions specified in [52] are not compatible with the constraints, and therefore, one does not necessarily obtain a solution to the full set of Einstein’s equations beyond the domain of dependence of the initial data surface. In a second step, constraint-preserving boundary conditions for BSSN with a fixed shift were formulated in [220], and cast into maximal dissipative form for the linearized system (see also [15]). However, even at the linearized level, these boundary conditions are too restrictive because they constitute a combination of Dirichlet and Neumann boundary conditions on the metric components, and in this sense they are totally reflecting instead of absorbing. More general constraint-preserving boundary conditions were also considered in [220] and, based on the Laplace method, they were shown to satisfy the Lopatinsky condition (5.27).

Radiative-type constraint-preserving boundary conditions for the BSSN system (4.52–4.59) with dynamical lapse and shift were formulated in [315] and shown to yield a well-posed IBVP in the linearized case. The assumptions on the parameters in this formulation are m = 1, f > 0, κ = 4GH/3 > 0, f ≠ κ, which guarantee that the BSSN system is strongly hyperbolic, and as long as e ≠ 2α, they allow for the gauge conditions (4.62, 4.63) used in recent numerical calculations, where f = 2/α and κ = e/α2; see Section 4.3.1. In the following, we describe this IBVP in more detail. First, we notice that the analysis in Section 4.3.1 reveals that for the standard choice m = 1 the characteristic speeds with respect to the unit outward normal si to the boundary are

$${\beta ^s},\qquad {\beta ^s} \pm \alpha ,\qquad {\beta ^s} \pm \alpha \,\sqrt f ,\qquad {\beta ^s} \pm \alpha \,\sqrt {GH} ,\qquad {\beta ^s} \pm \alpha \,\sqrt \kappa ,$$

where βs = βis i is the normal component of the shift. According to the theory described in Section 5 it is the sign of these speeds, which determines the number of incoming fields and boundary conditions that must be specified. Namely, the number of boundary conditions is equal to the number of characteristic fields with positive speed. Assuming ∣βs∣ is small enough such that \(\vert {\beta ^s}/\alpha \vert < \min \{1,\sqrt f, \sqrt {GH}, \sqrt \kappa \}\), which is satisfied asymptotically if βs → 0 and α → 1, it is the sign of the normal component of the shift, which determines the number of boundary conditions. Therefore, in order to keep the number of boundary conditions fixed throughout evolution one has to ensure that either βs > 0 or βs ≤ 0 at the boundary surface. If the condition βs → 0 is imposed asymptotically, the most natural choice is to set the normal component of the shift to zero at the boundary, βs = 0 at \({\mathcal T}\). The analysis in [52] then reveals that there are precisely nine incoming characteristic fields at the boundary, and thus, nine conditions have to be imposed at the boundary. These nine boundary conditions are as follows:

  • Boundary conditions on the gauge variables

    There are four conditions that must be imposed on the gauge functions, namely the lapse and shift. These conditions are motivated by the linearized analysis, where the gauge propagation system, consisting of the evolution equations for lapse and shift obtained from the BSSN equations (4.524.55, 4.59), decouples from the remaining evolution equations. Surprisingly, this gauge propagation system can be cast into symmetric hyperbolic form [315], for which maximal dissipative boundary conditions can be specified, as described in Section 5.2. It is remarkable that the gauge propagation system has such a nice mathematical structure, since the equations (4.52, 4.54, 4.55) have been specified by hand and mostly motivated by numerical experiments instead of mathematical analysis.

    In terms of the operator \({\Pi ^i}_j = {\delta ^i}_j - {s^i}{s_j}\) projecting onto vectors tangential to the boundary, the four conditions on the gauge variables can be written as

    $${s^i}{\partial _i}\alpha = 0,$$
    $${\beta ^s} = 0,$$
    $${\Pi ^i}_j\,\left({{\partial _t} + {{\sqrt {3\kappa}} \over 2}{s^k}{\partial _k}} \right){\beta ^j} = {\kappa \over {f - \kappa}}{\Pi ^i}_j\,{\tilde \gamma ^{jk}}{\partial _k}\alpha .$$

    Eq. (6.10) is a Neumann boundary condition on the lapse, and Eq. (6.11) sets the normal component of the shift to zero, as explained above. Geometrically, this implies that the boundary surface \({\mathcal T}\) is orthogonal to the time slices Σ t . The other two conditions in Eq. (6.12) are Sommerfeld-like boundary conditions involving the tangential components of the shift and the tangential derivatives of the lapse; they arise from the analysis of the characteristic structure of the gauge propagation system. An alternative to Eq. (6.12), also described in [315], is to set the tangential components of the shift to zero, which, together with Eq. (6.11), is equivalent to setting βi = 0 at the boundary. This alternative may be better suited for IBVPs with non-smooth boundaries, such as cubes, where additional compatibility conditions must be enforced at the edges.

  • Constraint-preserving boundary conditions

    Next, there are three conditions requiring that the momentum constraint be satisfied at the boundary. In terms of the BSSN variables this implies

    $${\tilde D^j}{\tilde A_{ij}} - {2 \over 3}{\tilde D_i}K + 6{\tilde A_{ij}}{\tilde D^j}\phi = 8\pi {G_N}{j_i}.$$

    As shown in [315], Eq. (6.13) yields homogeneous maximal dissipative boundary conditions for a symmetric hyperbolic first-order reduction of the constraint propagation system (4.74, 4.75, 4.76). Since this system is also linear and its boundary matrix has constant rank if βs = 0, it follows from Theorem 7 that the propagation of constraint violations is governed by a well-posed IBVP. This implies, in particular, that solutions whose initial data satisfy the constraints exactly automatically satisfy the constraints on each time slice Σ t . Furthermore, small initial constraint violations, which are usually present in numerical applications, yield solutions for which the growth of the constraint violations can be bounded in terms of the initial violations.

  • Radiation controlling boundary conditions

    Finally, the last two boundary conditions are intended to control the incoming gravitational radiation, at least approximately, and specify the complex Weyl scalar Ψ0, cf. Example 32. In order to describe this boundary condition we first define the quantities \({{\bar {\mathcal E}}_{ij}}: = {{\tilde R}_{ij}} + R_{ij}^\phi + {e^{4\phi}}({1 \over 3}K{{\tilde A}_{ij}} - {{\tilde A}_{il}}\tilde A_j^l) - 4\pi {G_N}{\sigma _{ij}}\) and \({{\bar {\mathcal B}}_{kij}}: = {e^{4\phi}}\left[ {{{\tilde D}_k}{{\tilde A}_{ij}} - 4\left({{{\tilde D}_{(i}}\phi} \right){{\tilde A}_{j)k}}} \right]\), which determine the electric and magnetic parts of the Weyl tensor through \({E_{ij}} = {{\bar {\mathcal E}}_{ij}} - {1 \over 3}{\gamma _{ij}}{\gamma ^{kl}}{{\bar {\mathcal E}}_{kl}}\) and \({B_{ij}} = {\varepsilon _{kl(i}}{{\bar {\mathcal B}}^{kl}}{\,_{j)}}\), respectively. Here, ε kij denotes the volume form with respect to the three metric γ ij . In terms of the operator \({P^{ij}}_{lm} = {\Pi ^i}_{(l}{\Pi ^j}_{m)} - {1 \over 2}{\Pi ^{ij}}{\Pi _{lm}}\) projecting onto symmetric trace-less tangential tensors to the boundary, the boundary condition reads

    $${P^{ij}}_{lm}{\bar {\mathcal E} _{ij}} + \left({{s^k}{P^{ij}}_{lm} - {s^i}{P^{kj}}_{lm}} \right){\bar {\mathcal B} _{kij}} = {P^{ij}}_{lm}{G_{ij}},$$

    with G ij a given smooth tensor field on the boundary surface \({\mathcal T}\). The relation between G ij and Ψ0 is the following: if \(n = {\alpha ^{- 1}}({\partial _t} - {\beta ^i}{\partial _i})\) denotes the future-directed unit normal to the time slices, we may construct an adapted Newman-Penrose null tetrad \(\{K,L,Q,\bar Q\}\) at the boundary by defining K := n + s, L := n − s, and by choosing Q to be a complex null vector orthogonal to K and L, normalized such that \({Q^\mu}{{\bar Q}_\mu} = 2\). Then, we have \({\Psi _0} = ({E_{kl}} - i{B_{kl}}){Q^k}{Q^l} = {G_{kl}}{Q^k}{Q^l}\). For typical applications involving the modeling of isolated systems one may set G ij to zero. However, this is in general not compatible with the initial data (see the discussion in Section 10.3); an alternative is then to freeze the value of G ij to the one computed from the initial data.

    The boundary condition (6.14) can be partially motivated by considering an isolated system, which, globally, is described by an asymptotically-flat spacetime. Therefore, if the outer boundary is placed far enough away from the strong field region, one may linearize the field equations on a Minkowski background to a first approximation. In this case, one is in the same situation as in Example 32, where the Weyl scalar Ψ0 is an outgoing characteristic field when constructed from the adapted null tetrad. Furthermore, one can also appeal to the peeling behavior of the Weyl tensor [328], in which Ψ0 is the fastest decaying component along outgoing null geodesics and describes the incoming radiation at past null infinity. While Ψ0 can only be defined in an unambiguous way at null infinity, where a preferred null tetrad exists, the boundary condition (6.14) has been successfully numerically implemented and tested for truncated domains with artificial boundaries in the context of the harmonic formulation; see, for example, [366]. Estimates on the amount of spurious reflection introduced by this condition have also been derived in [88, 89]; see also [135].
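The algebraic content of the relation Ψ0 = (E kl − iB kl )QkQl can be illustrated with a short numerical sketch. The tensors standing in for E kl and B kl below are random symmetric trace-free placeholders, not solutions of the field equations, and the choice of the normal s along the x-axis with Q = e y + ie z is an assumption made for concreteness; the check confirms the spin-weight-2 behavior of Ψ0 under the rotation Q ↦ e^{iφ}Q discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_tracefree(rng):
    # random symmetric trace-free 3-tensor, a stand-in for E_kl or B_kl
    M = rng.standard_normal((3, 3))
    M = 0.5 * (M + M.T)
    return M - (np.trace(M) / 3.0) * np.eye(3)

E = sym_tracefree(rng)
B = sym_tracefree(rng)

# assumed boundary-adapted choice: unit normal s along x, and the
# complex null vector Q = e_y + i e_z, so that Q^k Qbar_k = 2
Q = np.array([0.0, 1.0, 1.0j])

def psi0(E, B, Q):
    # Psi_0 = (E_kl - i B_kl) Q^k Q^l
    return np.einsum('kl,k,l->', E - 1j * B, Q, Q)

p = psi0(E, B, Q)

# Psi_0 has spin weight 2: under Q -> e^{i phi} Q it picks up e^{2 i phi}
phi = 0.7
assert np.isclose(psi0(E, B, np.exp(1j * phi) * Q), np.exp(2j * phi) * p)
```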

Geometric existence and uniqueness

The results mentioned so far concerning the well-posed IBVP for Einstein’s field equations in the tetrad formulation of [187], in the metric formulation with harmonic coordinates described in Section 6.1, or in the linearized BSSN formulation described in Section 6.2 allow one, from the PDE point of view, to construct unique solutions on a manifold of the form M = [0, T] × Σ, given appropriate initial and boundary data. However, since general relativity is a diffeomorphism invariant theory, one needs to pose the IBVP from a geometric perspective. In particular, the following questions arise, which, for simplicity, we only formulate for the vacuum case:

  • Geometric existence. Let (M, g) be any smooth solution of Einstein’s vacuum field equations on the manifold M = [0, T] × Σ corresponding to initial data (h, k) on Σ0 and boundary data ψ on \({\mathcal T}\), where h and k represent, respectively, the first and second fundamental forms of the initial surface Σ0 as embedded in (M, g). Is it possible to reproduce this solution with any of the well-posed IBVP mentioned so far, at least on a submanifold M′ = [0, T′] × Σ with 0 < T′ ≤ T? That is, does there exist initial data f and boundary data q for this IBVP and a diffeomorphism ϕ: M′ → ϕ(M′) ⊂ M, which leaves Σ0 and \({{\mathcal T}\prime}\) invariant, such that the metric constructed from this IBVP is equal to ϕ*g on M′?

  • Geometric uniqueness. Is the solution (M, g) uniquely determined by the data (h, k, ψ)? Given a well-posed IBVP for which geometric existence holds, the question about geometric uniqueness can be reduced to the analysis of this particular IBVP in the following way: let u1 and u2 be two solutions of the IBVP on the manifold M = [0, T] × Σ with corresponding data (f1, q1) and (f2, q2). Suppose the two solutions induce the same data (h, k) on Σ0 and ψ on \({\mathcal T}\). Does there exist a diffeomorphism ϕ: M′ = [0, T′] × Σ → ϕ(M′) ⊂ M, which leaves Σ0 and \({{\mathcal T}\prime}\) invariant, such that the metrics g1 and g2 corresponding to u1 and u2 are related to each other by g2 = ϕ*g1 on M′?

These geometric existence and uniqueness problems have been solved in the context of the Cauchy problem without boundaries; see [127] and Section 4.1.3. However, when boundaries are present, several new difficulties appear as pointed out in [186]; see also [187, 184]:

  1. (i)

    It is a priori not clear what the boundary data ψ should represent geometrically. Unlike the case of the initial surface, where the data represents the first and second fundamental forms of Σ0 as a spatial surface embedded in the constructed spacetime (M, g), it is less clear what the geometric meaning of ψ should be since it is restricted by the characteristic structure of the evolution equations, as discussed in Section 5.

  2. (ii)

    The boundary data (q K , q L , q Q , q QQ ) in the boundary conditions (6.2, 6.3, 6.4, 6.5) for the harmonic formulation and the boundary data G ij in the boundary condition (6.14) for the BSSN formulation ultimately depend on the specific choice of a future-directed time-like vector field T at the boundary surface \({\mathcal T}\). Together with the unit outward normal N to \({\mathcal T}\), this vector defines the preferred null directions K = T + N and L = TN, which are used to construct the boundary-adapted null tetrad in the harmonic case and the projection operators \({\Pi ^\mu}_\nu = {\delta ^\mu}_\nu + {T^\mu}{T_\nu} - {N^\mu}{N_\nu}\) and \({P^{\mu \nu}}_{\alpha \beta} = {\Pi ^\mu}_\alpha {\Pi ^\nu}_\beta - {1 \over 2}{\Pi ^{\mu \nu}}{\Pi _{\alpha \beta}}\) in the BSSN one. Although it is tempting to define T as the unit, future-directed time-like vector tangent to \({\mathcal T}\), which is orthogonal to the cross sections Σ t , this definition would depend on the particular foliation Σ t the formulation is based on, and so the resulting vector T would be gauge-dependent. A similar issue arises in the tetrad formulation of [187].

  3. (iii)

    When addressing the geometric uniqueness issue, an interesting question is whether or not it is possible to determine from the data sets (f1, q1) and (f2, q2) alone if they are equivalent in the sense that their solutions u1 and u2 induce the same geometric data (h, k, ψ). Therefore, the question is whether or not one can identify equivalent data sets by considering only transformations on the initial and boundary surfaces Σ0 and \({\mathcal T}\), without knowing the solutions u1 and u2.

Although a complete answer to these questions remains a difficult task, there has been some recent progress towards their understanding. In [186] a method was proposed to geometrically single out a preferred time direction T at the boundary surface \({\mathcal T}\). This is done by considering the trace-free part of the second fundamental form, and proving that under certain conditions, which are stable under perturbations, the corresponding linear map on the tangent space possesses a unique time-like eigenvector. Together with the unit outward normal vector N, the vector field T defines a distinguished adapted null tetrad at the boundary, from which geometrically meaningful boundary data could be defined. For instance, the complex Weyl scalar Ψ0 can then be defined as the contraction Ψ0 = C αβγδ KαQβKγQδ of the Weyl tensor C αβγδ associated to the metric g μν along the null vectors K and Q, and the definition is unique up to the usual spin-rotational freedom \(Q \mapsto {e^{i\varphi}}Q\); therefore, the Weyl scalar Ψ0 is a good candidate for forming part of the boundary data ψ.

In [355] it was suggested that the unique specification of a vector field T may not be a fundamental problem, but rather the manifestation of the inability to specify a non-incoming radiation condition correctly. In the linearized case, for example, setting to zero the Weyl scalar Ψ0 computed from the boundary-adapted tetrad is transparent to gravitational plane waves traveling along the specific null direction K = T + N, see Example 32, but it induces spurious reflections for outgoing plane waves traveling in other null directions. Therefore, a genuine non-incoming radiation condition should be, in fact, independent of any specific null or time-like direction at the boundary, and can only depend on the normal vector N. This is indeed the case for much simpler systems like the scalar wave equation on a Minkowski background [153], where perfectly absorbing boundary conditions are formulated as a nonlocal condition, which is independent of a preferred time direction at the boundary.

Aside from controlling the incoming gravitational degrees of freedom, the boundary data ψ should also comprise information related to the geometric evolution of the boundary surface. In [187] this was achieved by specifying the mean curvature of \({\mathcal T}\) as part of the boundary data. In the harmonic formulation described in Section 6.1 this information is presumably contained in the functions q K , q L and q Q , but their geometric interpretation is not clear.

In order to illustrate some of the issues related to the geometric existence and uniqueness problem in a simpler context, in what follows we analyze the IBVP for linearized gravitational waves propagating on a Minkowski background. Before analyzing this case, however, we make two remarks. First, it should be noted [186] that the geometric uniqueness problem, especially an understanding of point (iii), also has practical interest, since in long term evolutions it is possible that the gauge threatens to break down at some point, requiring a redefinition. The second remark concerns the formulation of the Einstein IBVP in generalized harmonic coordinates, described in Sections 4.1 and 6.1, where general covariance was maintained by introducing a background metric \({\overset \circ g _{\mu \nu}}\) on the manifold M. IBVPs based on this approach have been formulated in [369] and [264] and further developed in [434] and [433]. However, one has to emphasize that this approach does not automatically solve the geometric existence and uniqueness problems described here: although it is true that the IBVP is invariant with respect to any diffeomorphism ϕ: MM, which acts on the dynamical and the background metric at the same time, the question of the dependency of the solution on the background metric remains.

Geometric existence and uniqueness in the linearized case

Here we analyze some of the geometric existence and uniqueness issues of the IBVP for Einstein’s field equations in the much simpler setting of linearized gravity on Minkowski space, where the vacuum field equations reduce to

$$- {\nabla ^\mu}{\nabla _\mu}{h_{\alpha \beta}} - {\nabla _\alpha}{\nabla _\beta}h + 2{\nabla ^\mu}{\nabla _{(\alpha}}{h_{\beta)\mu}} = 0,$$

where \(h_{\alpha \beta}\) denotes the first variation of the metric, \(h := \eta^{\alpha \beta}h_{\alpha \beta}\) its trace with respect to the Minkowski background metric \(\eta_{\alpha \beta}\), and \(\nabla_\mu\) the covariant derivative associated with \(\eta_{\alpha \beta}\). An infinitesimal coordinate transformation parametrized by a vector field \(\xi^\mu\) induces the transformation

$${h_{\alpha \beta}} \mapsto {\tilde h_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}},$$

where \(\xi_\alpha := \eta_{\alpha \beta}\xi^\beta\).
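Implicit in what follows is that Eq. (6.15) is invariant under the transformation (6.16). This short check is not spelled out in the text, but follows directly because covariant derivatives commute on the flat background: inserting \(2{\nabla _{(\alpha}}{\xi _{\beta)}}\) into the three terms of Eq. (6.15) gives

$$- {\nabla ^\mu}{\nabla _\mu}\left({2{\nabla _{(\alpha}}{\xi _{\beta)}}} \right) = - 2{\nabla _{(\alpha}}{\nabla ^\mu}{\nabla _\mu}{\xi _{\beta)}},\qquad - {\nabla _\alpha}{\nabla _\beta}\left({2{\nabla ^\mu}{\xi _\mu}} \right) = - 2{\nabla _\alpha}{\nabla _\beta}{\nabla ^\mu}{\xi _\mu},$$

$$2{\nabla ^\mu}{\nabla _{(\alpha}}\left({{\nabla _{\beta)}}{\xi _\mu} + {\nabla _\mu}{\xi _{\beta)}}} \right) = 2{\nabla _\alpha}{\nabla _\beta}{\nabla ^\mu}{\xi _\mu} + 2{\nabla _{(\alpha}}{\nabla ^\mu}{\nabla _\mu}{\xi _{\beta)}},$$

so the pure-gauge contributions cancel pairwise, and \({\tilde h_{\alpha \beta}}\) solves Eq. (6.15) whenever \({h_{\alpha \beta}}\) does.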

Let us consider the linearized Cauchy problem without boundaries first, where initial data is specified at the initial surface Σ0 = {0} × ℝ3. The initial data is specified geometrically by the first and second fundamental forms of Σ0, which, in the linearized case, are represented by a pair \((h_{ij}^{(0)},k_{ij}^{(0)})\) of covariant symmetric tensor fields on Σ0. We assume \((h_{ij}^{(0)},k_{ij}^{(0)})\) to be smooth and to satisfy the linearized Hamiltonian and momentum constraints

$${G^{ijrs}}{\partial _i}{\partial _j}h_{rs}^{(0)} = 0,\qquad {G^{ijrs}}{\partial _j}k_{rs}^{(0)} = 0,$$

where \(G^{ijrs} := \delta^{i(r}\delta^{s)j} - \delta^{ij}\delta^{rs}\). A solution \(h_{\alpha \beta}\) of Eq. (6.15) with the induced data corresponding to \((h_{ij}^{(0)},k_{ij}^{(0)})\) up to a gauge transformation (6.16) satisfies

$${\left. {{h_{ij}}} \right\vert _{{\Sigma _0}}} = h_{ij}^{(0)} + 2{\partial _{(i}}{X_{j)}},\qquad {\left. {{\partial _t}{h_{ij}} - 2{\partial _{(i}}{h_{j)0}}} \right\vert _{{\Sigma _0}}} = - 2(k_{ij}^{(0)} + {\partial _i}{\partial _j}f),$$

where X j = ξ j and f = ξ0 are smooth and represent the initial gauge freedom. Then, one has:

Theorem 9. The initial-value problem ( 6.15 , 6.18 ) possesses a smooth solution \(h_{\alpha \beta}\), which is unique up to an infinitesimal coordinate transformation \({\tilde h_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field \(\xi^\alpha\).

Proof. We first show the existence of a solution in the linearized harmonic gauge \({C_\beta} = {\nabla ^\mu}{h_{\beta \mu}} - {1 \over 2}{\nabla _\beta}h = 0\), for which Eq. (6.15) reduces to the system of wave equations \({\nabla ^\mu}{\nabla _\mu}{h_{\alpha \beta}} = 0\). The initial data, \(({h_{\alpha \beta}}{\vert _{{\Sigma _0}}},{\partial _t}{h_{\alpha \beta}}{\vert _{{\Sigma _0}}})\), for this system is chosen such that \({h_{ij}}{\vert _{{\Sigma _0}}} = h_{ij}^{(0)},\;{\partial _t}{h_{ij}}{\vert _{{\Sigma _0}}} = 2{\partial _{(i}}{h_{j)0}}{\vert _{{\Sigma _0}}} - 2k_{ij}^{(0)}\) and \({\partial _t}{h_{00}}{\vert _{{\Sigma _0}}} = 2{\delta ^{ij}}k_{ij}^{(0)},\;{\partial _t}{h_{0j}}{\vert _{{\Sigma _0}}} = {\partial ^i}(h_{ij}^{(0)} - {1 \over 2}{\delta _{ij}}{\delta ^{kl}}h_{kl}^{(0)}) + {1 \over 2}{\partial _j}{h_{00}}{\vert _{{\Sigma _0}}}\), where \((h_{ij}^{(0)},k_{ij}^{(0)})\) satisfy the constraint equations (6.17) and where the initial data for h00 and h0j is chosen smooth but otherwise arbitrary. This choice implies the satisfaction of Eq. (6.18) with X j = 0 and f = 0 and the initial conditions \({C_\beta}{\vert _{{\Sigma _0}}} = 0\) and \({\partial _t}{C_\beta}{\vert _{{\Sigma _0}}} = 0\) on the constraint fields C β . Therefore, solving the wave equation \({\nabla ^\mu}{\nabla _\mu}{h_{\alpha \beta}} = 0\) with such data, we obtain a solution of the linearized Einstein equations (6.15) in the harmonic gauge with initial data satisfying (6.18) with X j = 0 and f = 0. This shows geometric existence for the linearized harmonic formulation.

As for uniqueness, suppose we had two smooth solutions of Eqs. (6.15, 6.18). Then, since the equations are linear, the difference h αβ between these two solutions also satisfies Eqs. (6.15, 6.18) with trivial data \(h_{ij}^{(0)} = 0,\,\,k_{ij}^{(0)} = 0\). We show that h αβ can be transformed away by means of an infinitesimal gauge transformation (6.16). For this, define \({\tilde h_{\alpha \beta}}: = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) where ξ β is required to satisfy the inhomogeneous wave equation

$$0 = {\nabla ^\alpha}{\tilde h_{\alpha \beta}} - {1 \over 2}{\nabla _\beta}\tilde h = {\nabla ^\alpha}{h_{\alpha \beta}} - {1 \over 2}{\nabla _\beta}h + {\nabla ^\alpha}{\nabla _\alpha}{\xi _\beta}$$

with initial data for ξ β defined by \({\xi _0}{\vert _{{\Sigma _0}}} = - f,\,\,{\xi _i}{\vert _{{\Sigma _0}}} = - {X_i},\,\,{\partial _t}{\xi _0}{\vert _{{\Sigma _0}}} = - {h_{00}}/2,\,\,{\partial _t}{\xi _i}{\vert _{{\Sigma _0}}} = - {h_{0i}} + {\partial _i}f\). Then, by construction, \({\tilde h_{\alpha \beta}}\) satisfies the harmonic gauge, and it can be verified that \({\tilde h_{\alpha \beta}}{\vert _{{\Sigma _0}}} = {\partial _t}{\tilde h_{\alpha \beta}}{\vert _{{\Sigma _0}}} = 0\). Therefore, \({\tilde h_{\alpha \beta}}\) is a solution of the wave equation \({\nabla ^\mu}{\nabla _\mu}{\tilde h_{\alpha \beta}} = 0\) with trivial initial data, and it follows that \({\tilde h_{\alpha \beta}} = 0\) and that \({h_{\alpha \beta}} = - 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) is a pure gauge mode. □

It follows from the existence part of the proof that the quantities \({h_{00}}{\vert _{{\Sigma _0}}}\) and \({h_{0j}}{\vert_{{\Sigma _0}}}\), corresponding to linearized lapse and shift, parametrize pure gauge modes in the linearized harmonic formulation.

Next, we turn to the IBVP on the manifold M = [0, T] × Σ. Let us first look at the boundary conditions (6.2–6.5), which, in the linearized case, reduce to

$${\left. {{\nabla _K}{h_{KK}}} \right\vert _{\mathcal T}} = {q_K},\qquad {\left. {{\nabla _K}{h_{KL}}} \right\vert _{\mathcal T}} = {q_L},\qquad {\left. {{\nabla _K}{h_{KQ}}} \right\vert _{\mathcal T}} = {q_Q},\qquad {\left. {{\nabla _K}{h_{QQ}} - {\nabla _Q}{h_{QK}}} \right\vert _{\mathcal T}} = {q_{QQ}}.$$

There is no problem in repeating the geometric existence part of the proof on M, imposing these boundary conditions and using the IBVP described in Section 6.1. However, there is a problem when trying to prove the uniqueness part. This is because a gauge transformation (6.16) induces the following transformations on the boundary data,

$$\begin{array}{l} {{\tilde q}_K} = {q_K} + 2\nabla _K^2{\xi _K},\qquad {{\tilde q}_L} = {q_L} + \nabla _K^2{\xi _L} + {\nabla _K}{\nabla _L}{\xi _K},\qquad {{\tilde q}_Q} = {q_Q} + \nabla _K^2{\xi _Q} + {\nabla _K}{\nabla _Q}{\xi _K}, \\ {{\tilde q}_{QQ}} = {q_{QQ}} + {\nabla _Q}({\nabla _K}{\xi _Q} - {\nabla _Q}{\xi _K}), \end{array}$$

which overdetermines the vector field ξ β at the boundary. On the other hand, replacing the boundary condition (6.5) by the specification of the Weyl scalar Ψ0 leads to [286, 369]

$${\left. {\nabla _K^2{h_{QQ}} + {\nabla _Q}({\nabla _Q}{h_{KK}} - 2{\nabla _K}{h_{KQ}})} \right\vert _{\mathcal T}} = {\Psi _0}.$$

Since the left-hand side is gauge-invariant, there is no over-determination of ξ β at the boundary anymore, and the transformation properties of the remaining boundary data \(q_K\), \(q_L\) and \(q_Q\) provide a complete set of boundary data for \(\xi_K\), \(\xi_L\) and \(\xi_Q\), which may be used in conjunction with the wave equation \({\nabla ^\mu}{\nabla _\mu}{\xi _\beta} = 0\) in order to formulate a well-posed IBVP [369]. Provided Ψ0 is smooth and the compatibility conditions are satisfied at the edge \(S = {\Sigma _0} \cap {\mathcal T}\), it follows:

Theorem 10. [355] The IBVP ( 6.15 , 6.18 , 6.21 ) possesses a smooth solution h αβ , which is unique up to an infinitesimal coordinate transformation \({{\tilde h}_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field ξα.

In conclusion, we can say that, in the simple case of linear gravitational waves propagating on a Minkowski background, we have resolved the issues (i–iii). Correct boundary data is given to the linearized Weyl scalar Ψ0 computed from the boundary-adapted tetrad. To linear order, Ψ0 is invariant with respect to coordinate transformations, and the time-like vector field T appearing in its definition can be defined geometrically by taking the future-directed unit normal to the initial surface Σ0 and parallel-transporting it along the geodesics orthogonal to Σ0.

Whether or not this result can be generalized to the full nonlinear case is not immediately clear. In our linearized analysis we have imposed no restrictions on the normal component \(\xi_N\) of the vector field generating the infinitesimal coordinate transformation. However, such a restriction is necessary in order to keep the boundary surface fixed under a diffeomorphism. Unfortunately, it does not seem possible to restrict \(\xi_N\) in a natural way with the boundary conditions constructed so far.

Alternative approaches

Although the formulation of Einstein’s equations on a finite spatial domain with an artificial time-like boundary is currently the most common approach in numerical simulations, there are a number of difficulties associated with it. First, as discussed above, spurious reflections from the boundary surface may contaminate the solution unless the boundary conditions are chosen with great care. Second, in principle there is a problem with wave extraction, since gravitational waves can only be defined in an unambiguous (gauge-invariant) way at future null infinity. Third, there is an efficiency problem, since in the far zone the waves propagate along outgoing null geodesics, so that hyperboloidal surfaces, which are asymptotically null, should be better adapted to the problem. These issues have become more apparent as numerical simulations have achieved higher accuracy, to the point that boundary and wave-extraction artifacts are noticeable, and they have motivated a number of other approaches.

One of them is that of compactification schemes, which include spacelike or null infinity in the computational domain. For schemes compactifying spacelike infinity, see [335, 336]. Conformal compactifications are reviewed in [172, 183], and a partial list of references to date includes [328, 176, 177, 180, 179, 170, 245, 172, 247, 100, 446, 447, 316, 87, 451, 452, 448, 449, 450, 305, 364, 42].

Another approach is Cauchy-characteristic matching (CCM) [99, 392, 401, 143, 148, 53], which combines a Cauchy approach in the strong-field regime (thereby avoiding the problems that the presence of caustics would cause in characteristic evolutions) with a characteristic one in the wave zone. Data from the Cauchy evolution is used as inner boundary conditions for the characteristic one and, vice versa, the latter provides outer boundary conditions for the Cauchy IBVP. An understanding of the Cauchy IBVP is still required. CCM is reviewed in [432]. A related idea is Cauchy-perturbative matching [455, 356, 4, 370], where the Cauchy code is instead coupled to one solving gauge-invariant perturbations of Schwarzschild black holes or flat spacetime. The multipole decomposition in the Regge-Wheeler-Zerilli equations [347, 453, 376, 294, 307] implies that the resulting equations are 1+1-dimensional, so the region of integration can be extended to very large distances from the source. As in CCM, an understanding of the IBVP for the Cauchy sector is still required.

One way of dealing with the ambiguity of extracting gravitational waves from Cauchy evolutions at finite radii is through extrapolation procedures; see, for example, [72, 331] for some approaches and a quantification of their accuracy. Another approach is Cauchy-characteristic extraction (CCE) [350, 37, 349, 32, 34, 54]. In CCE a Cauchy IBVP is solved, and the numerical data on a world tube is used to provide inner boundary conditions for a characteristic evolution that “transports” the data to null infinity. The difference with CCM is that in CCE there is no “feedback” from the characteristic evolution to the Cauchy one, and the extraction is done as a post-processing step.

Numerical Stability

In the previous sections we have discussed continuum initial and IBVPs. In this section we begin the study of the discretization of such problems. In the same way that a PDE can have a unique solution yet be ill posed, a numerical scheme can be consistent yet not convergent due to the unbounded growth of small perturbations as resolution is increased. The definition of numerical stability is the discrete version of well-posedness: one wants to ensure that small perturbations in the numerical solution, which naturally appear due to discretization errors and finite precision, remain bounded for all resolutions at any given time t > 0. By the classical Lax-Richtmyer theorem [276], in the linear case this property, combined with consistency of the scheme, is equivalent to convergence of the numerical solution to the continuum one as resolution is increased (at least within exact arithmetic). Convergence of a scheme is in general difficult to prove directly, especially because the exact solution is in general not known. Instead, one shows stability.

The different definitions of numerical stability follow those of well-posedness, with the L2 norm in space replaced by a discrete one, which is usually motivated by the spatial approximation. For example, discrete norms under which the summation by parts property holds are natural in the context of some finite difference approximations and collocation spectral methods (see Sections 8 and 9).

We start with a general discussion of some aspects of stability, and explicit analyses of simple, low-order schemes for test models. There follows a discussion of different variations of the von Neumann condition, including an eigenvalue version, which can be used to analyze in practice necessary conditions for IBVPs. Next, we discuss a rather general stability approach for the method of lines, the notion of time-stability, Runge-Kutta methods, and we close the section with some references to other approaches not covered here, as well as some discussion in the context of numerical relativity.

Definitions and examples

Consider a well-posed linear initial-value problem (see Definition 3)

$${u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x),\quad x \in {{\mathbb{R}}^n},\quad t \geq 0,$$
$$u(0,x) = f(x),\quad x \in {{\mathbb{R}}^n}\,.$$

Definition 11. An approximation-discretization to the Cauchy problem ( 7.1 , 7.2 ) is numerically stable if there is some discrete norm in space ∥ · ∥d and constants K d , α d such that the corresponding approximation v satisfies

$${\Vert {v(t,\cdot)} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}{\Vert f \Vert_{\rm{d}}},$$

for high enough resolutions, smooth initial data f, and t ≥ 0.


  • The previous definition applies both to the semi-discrete case (where space but not time is discretized) as well as the fully-discrete one. In the latter case, Eq. (7.3) is to be interpreted at fixed time. For example, if the timestep discretization is constant,

    $${t_k} = k\Delta t,\qquad k = 0,1,2 \ldots$$

    then Eq. (7.3) needs to hold for fixed t k and arbitrarily large k. In other words, the solution is allowed to grow with time, but not with the number of timesteps at fixed time when resolution is increased.

  • The norm ∥ · ∥d in general depends on the spatial approximation, and in Sections 8 and 9 we discuss some definitions for the finite difference and spectral cases.

  • From Definition 11, one can see that an ill-posed problem cannot have a stable discretization, since otherwise one could take the continuum limit in (7.3) and conclude that the original problem was well posed, a contradiction.

  • As in the continuum, Eq. (7.3) implies uniqueness of the numerical solution v.

  • In Section 3 we discussed that if, in a well-posed homogeneous Cauchy problem, a forcing term is added to Eq. (7.1),

    $${u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x)\qquad \mapsto \qquad {u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x) + F(t,x),$$

    then the new problem admits another estimate, related to the original one via Duhamel’s formula, Eq. (3.23). A similar concept holds at the semi-discrete level, and the discrete estimates change accordingly (in the fully-discrete case the integral in time is replaced by a discrete sum),

    $$\Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\Vert f \Vert_{\rm{d}}\qquad \mapsto \qquad \Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}} \left(\Vert f \Vert_{\rm{d}} + \int\limits_0^t \Vert F(s, \cdot) \Vert_{\rm{d}} ds \right).$$

    In other words, the addition of a lower-order term does not affect numerical stability, and without loss of generality one can restrict stability analyses to the homogeneous case.

  • The difference w:= uv between the exact solution and its numerical approximation satisfies an equation analogous to (7.5), where F is related to the truncation error of the approximation. If the scheme is numerically stable, then in the linear and semi-discrete cases Eq. (7.6) implies

    $${\Vert {w(t,\cdot)} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\int\limits_0^t {{{\Vert {F(s,\cdot)} \Vert}_{\rm{d}}}} ds\,.$$

    If the approximation is consistent, the truncation error converges to zero as resolution is increased, and Equation (7.7) implies that so does the norm of the error ∥w(t, ·)∥d. That is, stability implies convergence. The converse is also true, and this equivalence between convergence and stability is the celebrated Lax-Richtmyer theorem. The equivalence also holds in the fully-discrete case.

  • In the quasi-linear case, one follows the principle of linearization, as described in Section 3.3. One linearizes the problem and constructs a stable numerical scheme for the linearization. The expectation, then, is that the scheme also converges for the nonlinear problem. For particular problems and discretizations this expectation can be rigorously proven (see, for example, [259]).
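As a concrete illustration of the stability-implies-convergence statement, the sketch below solves the advection equation u_t = a u_x with a stable one-sided scheme (the same one analyzed in Example 33 below) and checks that the error against the exact solution decreases with resolution. The grid sizes, CFL factor and initial data are arbitrary illustrative choices:

```python
import numpy as np

def advect(N, a=1.0, lam=0.5, t_final=1.0):
    """Solve u_t = a u_x (a > 0, periodic on [0, 2*pi]) with a one-sided
    spatial difference and forward Euler in time; return the maximum error
    against the exact solution u(t, x) = sin(x + a*t)."""
    dx = 2 * np.pi / N
    dt = lam * dx
    x = dx * np.arange(N)
    v = np.sin(x)
    t = 0.0
    while t < t_final - 1e-12:
        # v_j^{k+1} = v_j^k + a*lam*(v_{j+1}^k - v_j^k), periodic wrap-around
        v = v + a * lam * (np.roll(v, -1) - v)
        t += dt
    return np.max(np.abs(v - np.sin(x + a * t)))

# The scheme is stable (lam <= 1/a) and consistent, hence convergent:
errs = [advect(N) for N in (50, 100, 200)]
assert errs[0] > errs[1] > errs[2]
```

Since the scheme is first-order accurate, halving the grid spacing roughly halves the error.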

From here on {x j , t k } denotes some discretization of space and time. This includes both finite difference and spectral collocation methods, which are the ones discussed in Sections 8 and 9, respectively. In addition, we use the shorthand notation

$$v_j^k: = v({t_k},{x_j}){.}$$

In order to gain some intuition into the general problem of numerical stability we start with some examples of simple, low-order approximations for a test problem. Consider uniform grids both in space and time

$${t_k} = k\Delta t,\quad {x_j} = j\Delta x,\qquad k = 0,1,2, \ldots ,\quad j = 0,1,2, \ldots N,$$

and the advection equation,

$${u_t} = a{u_x}\,,\qquad x \in [0,2\pi ],\quad t \geq 0,$$

on a periodic domain with 2π = N Δx, and smooth periodic initial data. Then the solution u can be represented by a Fourier series:

$$u(t,x) = {1 \over {\sqrt {2\pi}}}\sum\limits_{\omega \in {\mathbb{Z}}} {\hat u} (t,\omega){e^{i\omega x}},$$


$$\hat u(t,\omega) = {1 \over {\sqrt {2\pi}}}\int\nolimits_0^{2\pi} {{e^{- i\omega x}}} u(t,x)dx,$$

and the stability of the following schemes can be analyzed in Fourier space.

Example 33. The one-sided Euler scheme.

Eq. (7.10) is discretized with a one-sided FD approximation for the spatial derivative and evolved in time with the forward Euler scheme,

$${{v_j^{k + 1} - v_j^k} \over {\Delta t}} = a{{v_{j + 1}^k - v_j^k} \over {\Delta x}}.$$

In Fourier space the approximation becomes

$${\hat v^{k + 1}}(\omega) = \hat q(\omega){\hat v^k}(\omega) = {\left[ {\hat q(\omega)} \right]^{k + 1}}{\hat v^0}(\omega)\,,$$


$$\hat q(\omega) = 1 + a\lambda \left({{e^{i\omega \Delta x}} - 1} \right)$$

is called the amplification factor and

$$\lambda = {{\Delta t} \over {\Delta x}}$$

the Courant-Friedrichs-Lewy (CFL) factor.

Using Parseval’s identity, we find

$${\Vert {v({t_k},\cdot)} \Vert^2} = \sum\limits_{\omega \in {\mathbb {Z}}} \vert \hat q(\omega){\vert ^{2k}}\vert {\hat v^0}(\omega){\vert ^2},$$

and therefore, we see that the inequality (7.3) can only hold for all k if

$$\vert \hat q(\omega)\vert \leq 1\quad {\rm{for\,\,all}}\,\,\omega \in {\mathbb{Z}}.$$

For a > 0, this is the case if and only if the CFL factor satisfies

$$0 < \lambda \leq{1 \over a},$$

and in this case the well-posedness estimate (7.3) holds with K d = 1 and α d = 0. The upper bound in condition (7.19) for this example is known as the CFL limit, and (7.18) as the von Neumann condition. If a = 0, then \(\hat q(\omega) = 1\) and the scheme is trivially stable, while for a < 0 the scheme is unconditionally unstable, even though the underlying continuum problem is well posed.
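These conclusions are easy to check numerically by evaluating the amplification factor (7.15) over all grid modes; a minimal sketch (the grid size and the values of a and λ are arbitrary illustrative choices):

```python
import numpy as np

def q_hat(omega, a, lam, dx):
    """Amplification factor (7.15) of the one-sided Euler scheme (7.13)."""
    return 1.0 + a * lam * (np.exp(1j * omega * dx) - 1.0)

N = 64
dx = 2 * np.pi / N
omegas = np.arange(-N // 2, N // 2)

# a > 0 with 0 < lambda <= 1/a: |q| <= 1 for every mode (von Neumann condition).
assert np.all(np.abs(q_hat(omegas, a=2.0, lam=0.5, dx=dx)) <= 1 + 1e-12)
# a > 0 with lambda > 1/a: the CFL limit is violated and some modes grow.
assert np.any(np.abs(q_hat(omegas, a=2.0, lam=0.6, dx=dx)) > 1)
# a < 0: some modes grow for any timestep, i.e., unconditional instability.
assert np.any(np.abs(q_hat(omegas, a=-1.0, lam=0.01, dx=dx)) > 1)
```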

Next we consider a scheme very similar to the previous one, but which turns out to be unconditionally unstable for a ≠ 0, regardless of the direction of propagation.

Example 34. A centered Euler scheme.

Consider first the semi-discrete approximation to Eq. (7.10),

$${d \over {dt}}{v_j} = a{{{v_{j + 1}} - {v_{j - 1}}} \over {2\Delta x}}\,;$$

it is easy to check that it is stable for all values of Δx. Next discretize time through an Euler scheme, leading to

$${{v_j^{k + 1} - v_j^k} \over {\Delta t}} = a{{v_{j + 1}^k - v_{j - 1}^k} \over {2\Delta x}}\,.$$

The solution again has the form given by Eq. (7.14), now with

$$\vert \hat q(\omega)\vert \,=\, \vert 1 + ia\lambda \sin (\omega \Delta x)\vert \, \geq 1.$$

At fixed time t k , the norm of the solution to the fully-discrete approximation (7.21) for arbitrarily small initial data with ωΔx ∉ πℤ grows without bound as the timestep decreases.

The semi-discrete centered approximation (7.20) and the fully-discrete centered Euler scheme (7.21) constitute the simplest example of an approximation, which is not fully-discrete stable, even though its semi-discrete version is. This is related to the fact that the Euler time integration is not locally stable, as discussed in Section 7.3.2.
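A short numerical illustration of this unbounded growth (the values of a, λ and the final time are arbitrary choices): the mode with ωΔx = π/2 is amplified by |q̂| = (1 + a²λ²)^{1/2} > 1 per step, and the number of steps needed to reach a fixed time grows with resolution, so refinement makes things worse:

```python
import numpy as np

a, lam, t_final = 1.0, 0.5, 1.0    # arbitrary choices; any a != 0 behaves alike

def worst_mode_growth(N):
    """Amplification |q|^k at t = t_final of the mode with omega*dx = pi/2,
    for the centered Euler scheme (7.21) on N grid points."""
    dx = 2 * np.pi / N
    dt = lam * dx
    k = int(round(t_final / dt))                     # steps to reach t_final
    q = abs(1 + 1j * a * lam * np.sin(np.pi / 2))    # per-step amplification > 1
    return q ** k

# Refining the grid increases the growth at fixed time, without bound:
g32, g64, g128 = (worst_mode_growth(N) for N in (32, 64, 128))
assert g32 < g64 < g128
```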

The previous two examples were one-step methods, where vk+1 can be computed in terms of vk. The following is an example of a two-step method.

Example 35. Leap-frog.

A way to stabilize the centered Euler scheme is by approximating the time derivative by a centered difference instead of a forward, one-sided operator:

$$v_j^{k + 1} = v_j^{k - 1} + a\lambda \left({v_{j + 1}^k - v_{j - 1}^k} \right)\,.$$

Enlarging the system by introducing

$$w_j^k: = \left({\begin{array}{*{20}c} {v_j^k} \\ {v_j^{k - 1}} \\ \end{array}} \right)$$

it can be cast into the one-step method

$${\hat w^{k + 1}} = \hat Q(\omega){\hat w^k} = \hat Q{(\omega)^{k + 1}}{\hat w^0},\qquad {\rm{with}}\hat Q(\omega) = \left({\begin{array}{*{20}c} {2ia\lambda \sin (\omega \Delta x)} & 1 \\ 1 & 0 \\ \end{array}} \right).$$

By a similar procedure, a general multi-step method can always be reduced to a one-step one. Therefore, in the stability results below we can assume without loss of generality that the schemes are one-step.

In the above example the amplification matrix \(\hat Q(\omega)\) can be diagonalized through a transformation that is uniformly bounded:

$$\hat Q(\omega) = \hat T(\omega)\left({\begin{array}{*{20}c} {{\mu _ +}} & 0 \\ 0 & {{\mu _ -}} \\ \end{array}} \right){\hat T^{- 1}}(\omega),\qquad \hat Q{(\omega)^k} = \hat T(\omega)\left({\begin{array}{*{20}c} {\mu _ + ^k} & 0 \\ 0 & {\mu _ - ^k} \\ \end{array}} \right){\hat T^{- 1}}(\omega),$$

with \({\mu _\pm} = z \pm {(1 + {z^2})^{1/2}}\), \(z := ia\lambda \sin (\omega \Delta x)\), and

$$\hat T(\omega) = \left({\begin{array}{*{20}c} {{\mu _ +}} & {{\mu _ -}} \\ 1 & 1 \\ \end{array}} \right).$$

The eigenvalues μ± are of unit modulus, ∣μ±∣ = 1, provided ∣z∣ = ∣aλ sin(ωΔx)∣ ≤ 1, which holds under the condition (7.30) below. In addition, the norms of \(\hat T(\omega)\) and its inverse are

$$\vert \hat T(\omega)\vert = \sqrt {2\left({1 + \vert z\vert} \right)} \,,\quad \vert {\hat T^{- 1}}(\omega)\vert = {1 \over {\sqrt {2\left({1 - \vert z\vert} \right)}}}\,.$$

Therefore, the condition number of \(\hat T(\omega)\) can be bounded for all ω:

$$\vert \hat T(\omega)\vert \cdot\vert {\hat T^{- 1}}(\omega)\vert = {\left({{{1 + \vert z\vert} \over {1 - \vert z\vert}}} \right)^{1/2}} \leq{\left({{{1 + \vert a\vert \lambda} \over {1 - \vert a\vert \lambda}}} \right)^{1/2}} < \infty \,,$$

provided that

$$\lambda < {1 \over {\vert a\vert}}\,,$$

and it follows that the Leap-frog scheme is stable under the condition (7.30).
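The unit-modulus eigenvalues and the uniform bound on powers of the amplification matrix can be confirmed numerically; a sketch (the values of a and λ are arbitrary choices satisfying the condition (7.30)):

```python
import numpy as np

def Q_hat(a, lam, theta):
    """Fourier-space amplification matrix (7.25) of the leap-frog scheme,
    with theta = omega*dx."""
    return np.array([[2j * a * lam * np.sin(theta), 1.0],
                     [1.0, 0.0]])

a, lam = 1.0, 0.8    # arbitrary choice with lam < 1/|a|, i.e., condition (7.30)
kappa_bound = np.sqrt((1 + abs(a) * lam) / (1 - abs(a) * lam))  # cf. Eq. (7.29)

for theta in np.linspace(0.0, 2 * np.pi, 50):
    mu = np.linalg.eigvals(Q_hat(a, lam, theta))
    assert np.allclose(np.abs(mu), 1.0)       # mu_± lie on the unit circle
    # Powers of the amplification matrix stay bounded by the condition number:
    Qk = np.linalg.matrix_power(Q_hat(a, lam, theta), 50)
    assert np.linalg.norm(Qk, 2) <= kappa_bound + 1e-8
```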

The previous examples were explicit methods, where the solution \(\upsilon _j^{k + 1}\) (or \(w_j^{k + 1}\)) can be explicitly computed from the one at the previous timestep, without inverting any matrices.

Example 36. Crank-Nicholson.

Approximating Eq. (7.10) by

$$\left({1 - a{{\Delta t} \over 2}{D_0}} \right)v_j^{k + 1} = \left({1 + a{{\Delta t} \over 2}{D_0}} \right)v_j^k,$$


$${D_0}{v_j}: = {1 \over {2\Delta x}}\left({{v_{j + 1}} - {v_{j - 1}}} \right),$$

defines an implicit method. Fourier transform leads to

$$\left[ {1 - ia{\lambda \over 2}\sin (\omega \Delta x)} \right]{\hat v^{k + 1}}(\omega) = \left[ {1 + ia{\lambda \over 2}\sin (\omega \Delta x)} \right]{\hat v^k}(\omega).$$

The expressions inside the square brackets on both sides are different from zero and have equal magnitude. As a consequence, the amplification factor in this case satisfies

$$\vert \hat q(\omega)\vert = 1\quad {\rm{for all}}\omega \in {\mathbb {Z}}\,\,{\rm and}\,\,\lambda > 0,$$

and the scheme is unconditionally stable at the expense of having to invert a matrix to advance the solution in time.
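A quick check that the modulus of the amplification factor (7.34) equals one for every mode, independently of the CFL factor (the parameter values below are arbitrary):

```python
import numpy as np

def q_hat(a, lam, theta):
    """Amplification factor of the Crank-Nicholson scheme, theta = omega*dx;
    it is the ratio of the two bracketed factors in Eq. (7.33)."""
    z = 0.5j * a * lam * np.sin(theta)
    return (1 + z) / (1 - z)

thetas = np.linspace(0.0, 2 * np.pi, 100)
for lam in (0.5, 5.0, 500.0):      # far beyond any explicit CFL limit
    assert np.allclose(np.abs(q_hat(1.0, lam, thetas)), 1.0)
```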

Example 37. Iterated Crank-Nicholson.

Approximating the Crank-Nicholson scheme through an iterative scheme with a fixed number of iterations is usually referred to as the Iterated Crank-Nicholson (ICN) method. For Eq. (7.10) it proceeds as follows [414]:

  • First iteration: an intermediate variable \({}^{(1)}\tilde v\) is calculated using the second-order-in-space centered difference (7.32) and an Euler, first-order forward-time approximation,

    $${1 \over {\Delta t}}\left({{}^{(1)}\tilde v_j^{n + 1} - v_j^n} \right) = {D_0}\,v_j^n\,.$$

    Next, a second intermediate variable is computed through averaging,

    $$^{(1)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(1)}\tilde v_j^{n + 1} + v_j^n} \right).$$

    The full time step for this first iteration is

    $${1 \over {\Delta t}}\left({v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2}.$$
  • Second iteration: it follows the same steps. Namely, the intermediate variables

    $$\begin{array}{*{20}c} {{1 \over {\Delta t}}\left({{}^{(2)}\tilde v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2},\quad \quad \quad \quad \quad \quad} \\ {{}^{(2)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(2)}\tilde v_j^{n + 1} + v_j^n} \right),} \\ \end{array}$$

    are computed, and the full step is obtained from

    $${1 \over {\Delta t}}\left({v_j^{n + 1} - v_j^n} \right) = {D_0}{\,^{(2)}}\bar v_j^{n + 1/2}\,.$$
  • Further iterations proceed in the same way.

The resulting discretization is numerically stable for λ ≤ 2/a and p = 2, 3, 6, 7, 10, 11, … iterations, and unconditionally unstable otherwise. In the limit p → ∞ the ICN scheme becomes the implicit, unconditionally-stable Crank-Nicholson scheme of the previous example. For any fixed number of iterations, though, the method is explicit and stability is contingent on the CFL condition λ ≤ 2/a. The method is unconditionally unstable for p = 4, 5, 8, 9, 12, 13, … because the approach of the amplification factor’s absolute value to one [cf. Eq. (7.34)] as p increases is not monotonic. See [414] for details and [380] for a similar analysis for “theta” schemes.
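Carrying the Fourier analysis through the iterations with z = iaλ sin(ωΔx) gives, after p iterations, the truncated series \(\hat q_p(z) = 1 + \sum_{l=1}^{p+1} z^l/2^{l-1}\) as amplification factor (a closed form one obtains by iterating the scheme in Fourier space; it is used here only for illustration). The stability pattern can then be checked numerically; the choice λ = 1.9 with a = 1 is an arbitrary value just below the CFL limit 2/a:

```python
import numpy as np

def q_hat_icn(z, p):
    """ICN amplification factor after p iterations: the truncated series
    1 + sum_{l=1}^{p+1} z**l / 2**(l-1), with z = 1j*a*lam*sin(omega*dx)."""
    return 1.0 + sum(z**l / 2.0**(l - 1) for l in range(1, p + 2))

a, lam = 1.0, 1.9
z = 1j * a * lam * np.sin(np.linspace(0.0, 2 * np.pi, 400))

# p = 2, 3 iterations: stable (|q| <= 1 for every mode below the CFL limit) ...
assert np.all(np.abs(q_hat_icn(z, 2)) <= 1 + 1e-12)
assert np.all(np.abs(q_hat_icn(z, 3)) <= 1 + 1e-12)
# ... while p = 4, 5 iterations are unstable even below the CFL limit.
assert np.any(np.abs(q_hat_icn(z, 4)) > 1)
assert np.any(np.abs(q_hat_icn(z, 5)) > 1)
```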

Similar definitions to the one of Definition 11 are introduced for the IBVP. For simplicity we explicitly discuss the semi-discrete case. In analogy with the definition of a strongly-well-posed IBVP (Definition 9) one has

Definition 12. A semi-discrete approximation to the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ) is numerically stable if there are discrete norms ∥ · ∥d at Σ and ∥ · ∥∂,d at ∂Σ and constants Kd = Kd(T) and εd = εd(T) ≥ 0 such that for high-enough resolution the corresponding approximation v satisfies

$$\Vert v(t,\cdot)\Vert_{\rm{d}}^2 + {\varepsilon _{\rm{d}}}\int\limits_0^t {\Vert v(s,\cdot)\Vert_{\partial ,{\rm{d}}}^2} \,ds \leq K_{\rm{d}}^2\left[ {\Vert f \Vert_{\rm{d}}^2 + \int\limits_0^t {\left(\Vert F(s,\cdot)\Vert_{\rm{d}}^2 + \Vert g(s,\cdot)\Vert_{\partial ,{\rm{d}}}^2\right)} \,ds} \right],$$

for all t ∈ [0, T]. If the constant ε d can be chosen strictly positive, the problem is called strongly stable.

In addition, the semi-discrete version of Definitions 6 and 7 lead to the concepts of strong stability in the generalized sense and boundary stability, respectively, which we do not write down explicitly here. The definitions for the fully-discrete case are similar, with time integrals such as those in Eq. (7.39) replaced by discrete sums.

The von Neumann condition

Consider a discretization for a linear system with variable, time-independent coefficients such that

$${{\bf{v}}^{k + 1}} = {\bf{Q}}{{\bf{v}}^k}\,,$$
$${{\bf{v}}^0} = {\bf{f}}\,,$$

where vk denotes the gridfunction \({{\rm{v}}^k} = \{\upsilon _j^k:j = 0,1, \ldots, N\}\) and Q is called the amplification matrix. We assume that Q is also time-independent. Then

$${{\bf{v}}^k} = {{\bf{Q}}^k}{\bf{f}}$$

and the approximation (7.40, 7.41) is stable if and only if there are constants Kd and αd such that

$${\Vert {{{\bf{Q}}^k}} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}{t_k}}}$$

for all k = 0, 1, 2, … and high enough resolutions.

In practice, condition (7.43) is not very manageable as a way of determining whether a given scheme is stable, since it involves computing the norm of the power of a matrix. A simpler condition, based on the eigenvalues {q i } of Q rather than on the norm of Qk, is the von Neumann condition:

$$\vert {q_i}\vert \leq{e^{{\alpha _{\rm{d}}}\Delta t}}\quad {\rm{for\,\, all\,\, eigenvalues}}\,\,{q_i}\,\,{\rm{of}}\,\,{\bf{Q}}\,\,{\rm{and\,\, all}}\,\,\Delta t > 0.$$

This condition is necessary for numerical stability: if q i is an eigenvalue of Q, \(q_i^k\) is an eigenvalue of Qk and

$$\vert q_i^k\vert \,\, \leq\,\,\Vert {{{\bf{Q}}^k}} \Vert\,\, \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}{t_k}}} = {K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}k\Delta t}}.$$

That is,

$$|{q_i}|{\text{ }} \leqslant K_{\text{d}}^{1/k}{e^{{\alpha _{\text{d}}}\Delta t}},$$

which, in order to be valid for all k, implies Eq. (7.44).

As already mentioned, in order to analyze numerical stability, one can drop lower-order terms. Doing so typically leads to Q depending on Δt and Δx only through a quotient (the CFL factor) of the form (with p = 1 for hyperbolic equations)

$$\lambda = {{\Delta t} \over {{{(\Delta x)}^p}}}\,,$$
$${\bf{Q}}(\Delta t,\Delta x) = {\bf{Q}}(\lambda)\,.$$

Then, for Eq. (7.44) to hold for all Δt > 0 while keeping the CFL factor fixed (in particular, for small Δt > 0), the following condition has to be satisfied:

$$\vert {q_i}\vert \leq1\quad {\rm{for\,\,all\,\,eigenvalues}}\,\,{q_i}\,\,{\rm{of}}\,\,{\bf{Q}}\,,$$

and one has a stronger version of the von Neumann condition, which is the one encountered in Example 33; see Eq. (7.18).
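In practice, the eigenvalue condition can be checked by assembling the amplification matrix directly. For the one-sided Euler scheme of Example 33 on a periodic grid, Q is circulant (hence normal), and its eigenvalues are exactly the Fourier amplification factors; a sketch (N, a and λ are arbitrary illustrative choices):

```python
import numpy as np

N, a, lam = 32, 1.0, 0.5
dx = 2 * np.pi / N

# Amplification matrix of the one-sided Euler scheme (7.13) on a periodic grid:
# (Q v)_j = v_j + a*lam*(v_{j+1} - v_j), with periodic wrap-around.
S = np.roll(np.eye(N), 1, axis=1)   # shift matrix: (S v)_j = v_{(j+1) mod N}
Q = np.eye(N) + a * lam * (S - np.eye(N))

# Q is circulant, hence normal; its eigenvalues are exactly the Fourier
# amplification factors q_hat(omega) = 1 + a*lam*(exp(1j*omega*dx) - 1).
q = np.linalg.eigvals(Q)
for omega in range(N):
    q_hat = 1 + a * lam * (np.exp(1j * omega * dx) - 1)
    assert np.min(np.abs(q - q_hat)) < 1e-10

# Strong von Neumann condition (7.49): all eigenvalues inside the unit disk.
assert np.all(np.abs(q) <= 1 + 1e-12)
```

Since Q is normal here, the von Neumann condition is not only necessary but also sufficient, consistent with the discussion of uniform diagonalizability below.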

The periodic, scalar case

We return to the periodic scalar case, such as the schemes discussed in Examples 33, 34, 35, and 36 with some more generality. Suppose then, in addition to the linearity and time-independent assumptions of the continuum problem, that the initial data and discretization (7.40, 7.41) are periodic on the interval [0, 2π]. Through a Fourier expansion we can write the grid function f = (f(x0), f(x1), …, f(x N )) corresponding to the initial data as

$${\bf{f}} = {1 \over {\sqrt {2\pi}}}\sum\limits_{\omega \in {\mathbb{Z}}} {\hat f} (\omega){{\bf{e}}^{i\omega}},$$

where \({{\rm{e}}^{i\omega}} = ({e^{i\omega {x_0}}},{e^{i\omega {x_1}}}, \ldots, {e^{i\omega {x_N}}})\). The approximation becomes

$${{\bf{v}}^k} = {1 \over {\sqrt {2\pi}}}\sum\limits_{\omega \in {\mathbb{Z}}} {\hat f} (\omega){{\bf{Q}}^k}{{\bf{e}}^{i\omega}}.$$

Assuming that Q is diagonal in the Fourier basis \(\{{\bf e}^{i\omega}\}\), such that

$${\bf{Q}}{{\bf{e}}^{i\omega}} = \hat q(\omega){{\bf{e}}^{i\omega}},$$

as is often the case, we obtain, using Parseval’s identity,

$$\Vert {{{\bf{v}}^k}} \Vert = {\left({\sum\limits_{\omega \in {\mathbb{Z}}} \vert \hat f(\omega){\vert ^2}\vert \hat q{{(\omega)}^k}{\vert ^2}} \right)^{1/2}}.$$


If

$$\vert \hat q(\omega)\vert \leq {e^{{\alpha _{\rm{d}}}\Delta t}}\quad {\rm{for\,\,all}}\,\,\omega \in {\mathbb Z}\,\,{\rm{and}}\,\,\Delta t > 0,$$

for some constant α d , then

$$\Vert {{{\bf{v}}^k}} \Vert \leq{e^{{\alpha _{\rm{d}}}k\Delta t}}{\left({\sum\limits_\omega \vert \hat f(\omega){\vert ^2}} \right)^{1/2}} = {e^{{\alpha _{\rm{d}}}k\Delta t}}\Vert {\bf{f}} \Vert = {e^{{\alpha _{\rm{d}}}{t_k}}}\Vert {\bf{f}} \Vert$$

and stability follows. Conversely, if the scheme is stable and (7.52) holds, (7.54) has to be satisfied. Indeed, taking

$${\bf{f}} = {{\bf{e}}^{i\omega}}\,,$$

the stability estimate implies

$$\vert \hat q(\omega){\vert ^k}\Vert {\bf{f}} \Vert = \Vert {{{\bf{v}}^k}} \Vert \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}{t_k}}}\Vert {\bf{f}} \Vert,$$

so that

$$\vert \hat q(\omega)\vert \leq K_{\rm{d}}^{1/k}{e^{{\alpha _{\rm{d}}}\Delta t}}$$

for arbitrary k, which implies (7.54). Therefore, provided the condition (7.52) holds, stability is equivalent to the requirement (7.54) on the eigenvalues of Q.
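The analysis above can be checked numerically with the FFT (our own sketch; the upwind scheme and initial data below are assumed for illustration): k applications of a periodic, constant-coefficient scheme in physical space agree, to machine precision, with multiplying each Fourier coefficient of f by q̂(ω)^k.

```python
import numpy as np

# Periodic grid and smooth initial data (an assumed example).
N, lam, k = 64, 0.8, 50
x = 2.0 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))

def step(v):                             # upwind: v_j -> v_j - lam*(v_j - v_{j-1})
    return v - lam * (v - np.roll(v, 1))

v = f.copy()
for _ in range(k):
    v = step(v)

# Fourier-side evolution: v-hat^k(omega) = q(omega)^k * f-hat(omega).
m = np.fft.fftfreq(N) * N                # integer frequencies omega
q = 1.0 - lam * (1.0 - np.exp(-2j * np.pi * m / N))
vhat = q**k * np.fft.fft(f)
v_fourier = np.fft.ifft(vhat).real

print(np.max(np.abs(v - v_fourier)))     # agreement to machine precision
```

The discrete Fourier transform diagonalizes any circulant amplification matrix exactly, which is the finite-dimensional counterpart of the assumption (7.50).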

The general, linear, time-independent case

However, as mentioned, the von Neumann condition is not sufficient for stability, neither in its original form (7.44) nor in its strong one (7.49), unless, for example, Q can be uniformly diagonalized. This means that there exists a matrix T such that

$$\Lambda = {{\bf{T}}^{- 1}}{\bf{QT}} = {\rm{diag}}({q_0}, \ldots ,{q_N})$$

is diagonal and the condition number of T with respect to the same norm,

$${\kappa _{\rm{d}}}({\bf{T}}): = {\Vert {\bf{T}} \Vert_{\rm{d}}}{\Vert {{{\bf{T}}^{- 1}}} \Vert_{\rm{d}}}$$

is bounded

$${\kappa _{\rm{d}}}({\bf{T}}) \leq{K_{\rm{d}}}$$

for some constant Kd independent of resolution (an example is that of Q being normal, QQ* = Q*Q). In that case

$${{\bf{v}}^k} = {\bf{T}}{\Lambda ^k}{{\bf{T}}^{- 1}}{\bf{f}}$$

and thus

$${\Vert {{{\bf{v}}^k}} \Vert_{\rm{d}}} \leq{\kappa _{\rm{d}}}({\bf{T}}){\underset {i}{\rm {max}}} \vert {q_i}{\vert ^k}{\Vert {\bf{f}} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}k\Delta t}}{\Vert {\bf{f}} \Vert_{\rm{d}}} = {K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}{t_k}}}{\Vert {\bf{f}} \Vert_{\rm{d}}}\,.$$
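The role of the uniform bound on the condition number can be illustrated numerically (our own sketch, with assumed example matrices): for a normal matrix the diagonalizing transformation T can be chosen unitary, so κ = 1, whereas a family of matrices approaching a Jordan block has κ → ∞ and the estimate above degenerates.

```python
import numpy as np

def cond_of_eigenvectors(Q):
    """2-norm condition number of the matrix T of eigenvectors of Q."""
    _, T = np.linalg.eig(Q)
    return np.linalg.cond(T)

# Normal (here: symmetric) amplification matrix: kappa is 1.
Qn = np.array([[0.5, 0.2], [0.2, 0.5]])
print(cond_of_eigenvectors(Qn))      # ~1

# Nearly defective matrix: kappa blows up as delta -> 0 (Jordan block limit).
delta = 1e-6
Qj = np.array([[1.0, 1.0], [0.0, 1.0 + delta]])
print(cond_of_eigenvectors(Qj))      # ~2/delta, huge
```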

Next, we discuss two examples where the von Neumann condition is satisfied but the resulting scheme is unconditionally unstable. The first one is for a well-posed underlying continuum problem and the second one for an ill-posed one.

Example 38. An unstable discretization, which satisfies the von Neumann condition for a trivially-well-posed problem [228].

Consider the following system on a periodic domain with periodic initial data

$${u_t} = 0\,,\qquad u = \left({\begin{array}{*{20}c} {{u_1}} \\ {{u_2}} \\ \end{array}} \right)\,,$$

discretized as

$${{{{\bf{v}}^{k + 1}} - {{\bf{v}}^k}} \over {\Delta t}} = - \Delta x\left({\begin{array}{*{20}c} 0 & 1 \\ 0 & 0 \\ \end{array}} \right)D_0^2{{\bf{v}}^k}$$

with D0 given by Eq. (7.32). The Fourier transform of the amplification matrix and its k-th power are

$$\hat{\bf{Q}} = \left({\begin{array}{*{20}c} 1 & {\lambda {{\sin}^2}(\omega \Delta x)} \\ 0 & 1 \\ \end{array}} \right)\,,\qquad {\hat{\bf{Q}}^k} = \left({\begin{array}{*{20}c} 1 & {k\lambda {{\sin}^2}(\omega \Delta x)} \\ 0 & 1 \\ \end{array}} \right)\,.$$

The von Neumann condition is satisfied, since the eigenvalues are 1. However, the discretization is unstable for any value of λ > 0. For the unit vector e = (0, 1)T, for instance, we have

$$\vert {\hat{\bf{Q}}^k}{\bf{e}}\vert = \sqrt {1 + {{\left({k\lambda} \right)}^2}{{\sin}^4}\left({\omega \Delta x} \right)} \,,$$

which grows without bound as k is increased for sin (ωΔx) ≠ 0.

The von Neumann condition is clearly not sufficient for stability in this example: the amplification matrix not only fails to be uniformly diagonalizable, it cannot be diagonalized at all, because of the Jordan block structure in (7.65).
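The growth in Example 38 is easy to verify numerically (a minimal sketch; the values of λ and ωΔx are chosen arbitrarily):

```python
import numpy as np

# Amplification matrix of Example 38 in Fourier space: eigenvalues are 1
# (von Neumann condition holds), yet |Q-hat^k e| grows linearly in k.
lam, theta = 0.5, np.pi / 2            # CFL factor and frequency omega*dx
Qhat = np.array([[1.0, lam * np.sin(theta) ** 2],
                 [0.0, 1.0]])
e = np.array([0.0, 1.0])

for k in (1, 10, 100):
    vk = np.linalg.matrix_power(Qhat, k) @ e
    print(k, np.linalg.norm(vk))       # grows like k*lam*sin^2(theta)
```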

Example 39. Ill-posed problems are unconditionally unstable, even if they satisfy the von Neumann condition. The following example is drawn from [107].

Consider the periodic Cauchy problem

$${u_t} = A{u_x},$$

where u = (u1, u2)T and A is a 2 × 2 constant matrix, along with the following discretization. The right-hand side of the equation is approximated by a second-order centered derivative plus higher (third) order numerical dissipation (see Section 8.5)

$$A{u_x} \rightarrow A{D_0}v - \epsilon I{(\Delta x)^3}D_ + ^2D_ - ^2v\,,$$

where I is the 2 × 2 identity matrix, ϵ ≥ 0 is an arbitrary parameter regulating the strength of the numerical dissipation, and D+, D are first-order forward and backward approximations of d/dx,

$${D_ +}{v_j}: = {{{v_{j + 1}} - {v_j}} \over {\Delta x}},\quad {D_ -}{v_j}: = {{{v_j} - {v_{j - 1}}} \over {\Delta x}}.$$
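To make these operators concrete, here is a minimal periodic-grid implementation (our own sketch): D0 is a second-order approximation of d/dx and D+D of d²/dx², so the dissipation term ϵ(Δx)³D+²D² approximates ϵ(Δx)³ d⁴/dx⁴ and vanishes in the limit Δx → 0.

```python
import numpy as np

# Centered, forward and backward first-order difference operators on a
# periodic grid (np.roll implements the periodic wrap-around).
def D0(v, dx): return (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
def Dp(v, dx): return (np.roll(v, -1) - v) / dx
def Dm(v, dx): return (v - np.roll(v, 1)) / dx

N = 256
dx = 2.0 * np.pi / N
x = dx * np.arange(N)
v = np.sin(x)

# D0 v ~ cos(x) and D+ D- v ~ -sin(x), both with O(dx^2) error.
print(np.max(np.abs(D0(v, dx) - np.cos(x))))
print(np.max(np.abs(Dp(Dm(v, dx), dx) + np.sin(x))))
```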

The resulting system of ordinary differential equations is marched in time (method of lines, discussed in Section 7.3) through an explicit method: the iterated Crank-Nicholson (ICN) one with an arbitrary but fixed number of iterations p (see Example 37).

If the matrix A is diagonalizable, as in the scalar case of Example 37, the resulting discretization is numerically stable for λ ≤ 2/a and p = 2, 3, 6, 7, 10, 11,…, even without dissipation. On the other hand, if the system (7.68) is weakly hyperbolic, as when the principal part has a Jordan block,

$$A = \left({\begin{array}{*{20}c} a & 1 \\ 0 & a \\ \end{array}} \right),$$

one can expect on general grounds that any discretization will be unconditionally unstable. As an illustration, this was explicitly shown in [107] for the above scheme and variations of it. In Fourier space the amplification matrix and its k-th power take the form

$$\hat Q = \left({\begin{array}{*{20}c} c & b \\ 0 & c \\ \end{array}} \right)\,,\qquad {\hat Q^k} = \left({\begin{array}{*{20}c} {{c^k}} & {k{c^{k - 1}}b} \\ 0 & {{c^k}} \\ \end{array}} \right),$$

with coefficients c, b depending on {a, λ, ωΔx, ϵ} such that for an arbitrarily small initial perturbation at just one gridpoint,

$$v_0^0 = {(0,2\pi \epsilon)^{\rm{T}}},\qquad v_j^0 = {(0,0)^{\rm{T}}}\quad {\rm{otherwise}},$$

the solution satisfies

$$\Vert {{\bf{v}}^k}\Vert _{\rm{d}}^2 \geq C{k^{5/4}}\Vert {{\bf{v}}^0}\Vert _{\rm{d}}^2\quad {\rm{for\;some\;constant}}\;\;C,$$

and is therefore unstable regardless of the values of λ and ϵ. On the other hand, the von Neumann condition ∣c∣ ≤ 1 is satisfied if and only if

$$0 \leq \epsilon\lambda \leq 1/8.$$

Notice that, as expected, the addition of numerical dissipation cannot stabilize the scheme, regardless of its amount. Furthermore, adding dissipation with a strength parameter ϵ > 1/(8λ) violates the von Neumann condition (7.75), and the growth rate of the numerical instability worsens.
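The mechanism behind such instabilities can be checked directly (our own sketch; the values of c and b below are chosen arbitrarily): for a 2 × 2 matrix with the Jordan-type structure of (7.72), the k-th power has off-diagonal entry kc^(k−1)b, which grows linearly in k whenever |c| = 1 and b ≠ 0.

```python
import numpy as np

# Verify the closed form Q-hat^k = [[c^k, k c^{k-1} b], [0, c^k]] numerically.
c, b, k = np.exp(0.3j), 0.7, 40          # |c| = 1; b and k arbitrary
Qhat = np.array([[c, b], [0.0, c]])
Qk = np.linalg.matrix_power(Qhat, k)

closed = np.array([[c**k, k * c**(k - 1) * b], [0.0, c**k]])
print(np.allclose(Qk, closed))           # True
print(abs(Qk[0, 1]))                     # = k*|b|, unbounded as k grows
```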

The method of lines

A convenient approach, both from an implementation point of view and for analyzing numerical stability or constructing numerically-stable schemes, is to decouple the spatial and time discretizations. That is, one first analyzes stability under some spatial approximation assuming time to be continuous (semi-discrete stability) and then finds conditions for time integrators to preserve stability in the fully-discrete case.

In general, this method provides only a subclass of numerically-stable approximations. However, it is a very practical one, since spatial and time stability are analyzed separately and stable semi-discrete approximations and appropriate time integrators can then be combined at will, leading to modularity in implementations.

Semi-discrete stability

Consider the approximation

$${{\bf{v}}_t}(t) = {\bf{Lv}}\,,\quad t > 0$$
$${\bf{v}}(0) = {\bf{f}}$$

for the initial value problem (7.1, 7.2). The scheme is semi-discrete stable if the solution to Eqs. (7.76, 7.77) satisfies the estimate (7.3).

In the time-independent case, the solution to (7.76, 7.77) is

$${\bf{v}}(t) = {e^{{\bf{L}}t}}{\bf{f}}$$

and stability holds if and only if there are constants Kd and αd such that

$$\Vert {e^{{\bf{L}}t}}{\Vert _{\rm{d}}} \leq {K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\quad {\rm{for\,\,all}}\;t \geq 0.$$

The von Neumann condition now states that there exists a constant αd, independent of the spatial resolution (i.e., of the size of the matrix L), such that the eigenvalues ℓi of L satisfy

$$\underset {i} {\max} \,{\rm Re} ({\ell _i}) \leq {\alpha _{\rm{d}}}.$$

This is the discrete-in-space version of the Petrovskii condition; see Lemma 2. As already pointed out, it is not always a sufficient condition for stability, unless L can be uniformly diagonalized. Also, if the lower-order terms are dropped from the analysis then

$${\bf{L}} = {1 \over {{{(\Delta x)}^p}}}\tilde{\bf{L}}$$

with \({{\rm{\tilde L}}}\) independent of Δx, and in order for (7.80) to hold for all Δx (in particular small Δx),

$$\underset {i} {\max} \,{\rm Re} ({\ell _i}) \leq 0\,,$$

which is a stronger version of the semi-discrete von Neumann condition.

Semi-discrete stability also follows if L is semi-bounded, that is, there is a constant αd independent of resolution such that (cf. Eq. (3.25) in Theorem 1)

$${\langle {\bf{v}},{\bf{Lv}}\rangle _{\rm{d}}} + {\langle {\bf{Lv}},{\bf{v}}\rangle _{\rm{d}}} \leq 2{\alpha _{\rm{d}}}\Vert {\bf{v}}\Vert _{\rm{d}}^2\quad {\rm{for\;all}}\;\;{\bf{v}}.$$

In that case, the semi-discrete approximation (7.76, 7.77) is numerically stable, as follows immediately from the energy estimate

$${d \over {dt}}\Vert {\bf{v}}\Vert _{\rm{d}}^2 = {d \over {dt}}{\langle {\bf{v}},\;{\bf{v}}\rangle _{\rm{d}}} = {\langle {\bf{Lv}},\;{\bf{v}}\rangle _{\rm{d}}} + {\langle {\bf{v}},\;{\bf{Lv}}\rangle _{\rm{d}}} \leq 2{\alpha _{\rm{d}}}\Vert {\bf{v}}\Vert _{\rm{d}}^2.$$

For a large class of problems, which can be shown to be well posed using the energy estimate, one can construct semi-bounded operators L by satisfying the discrete counterpart of the properties of the differential operator P in Eq. (7.1) that were used to show well-posedness. This leads to the construction of spatial differential approximations satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.
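As a minimal sketch of this idea (our own illustration, assuming the standard discrete scalar product ⟨u, v⟩d = Δx Σj ujvj), on a periodic grid the centered operator D0 is antisymmetric, ⟨D0u, v⟩d = −⟨u, D0v⟩d, so L = aD0 satisfies the semi-boundedness estimate (7.83) with αd = 0:

```python
import numpy as np

N = 32
dx = 2.0 * np.pi / N
rng = np.random.default_rng(0)

def D0(v):
    return (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)

def ip(u, v):                       # discrete scalar product <u, v>_d
    return dx * np.dot(u, v)

u, v = rng.standard_normal(N), rng.standard_normal(N)
print(abs(ip(D0(u), v) + ip(u, D0(v))))   # ~0: antisymmetry of D0
print(abs(ip(v, D0(v))))                  # ~0: <v, Lv> + <Lv, v> = 0 for L = a*D0
```

This is the periodic prototype of the summation-by-parts property: the discrete energy ‖v‖d² is exactly conserved, mirroring the continuum integration-by-parts argument.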

Fully-discrete stability

Now we consider explicit time integration for systems of the form (7.76, 7.77) with time-independent coefficients. That is, if there are N points in space we consider the system of ordinary differential equations (ODEs)

$${{\bf{v}}_t} = \;{\bf{Lv}},$$

where L is an N × N matrix.

In the previous Section 7.3.1 we derived necessary conditions for semi-discrete stability of such systems, namely the von Neumann condition in its weak (7.80) and strong (7.82) forms. Below we derive necessary conditions for fully-discrete stability for a large class of time integration methods, including Runge-Kutta ones. Upon time discretization, the stability analysis of (7.85) requires the notion of the region of absolute stability of the ODE solver. Part of the subtlety in the stability analysis of fully-discrete systems is that the size N of the system of ODEs is not fixed; it depends on the spatial resolution. However, the necessary conditions for fully-discrete stability obtained below also turn out to be sufficient when combined with additional assumptions. We will also discuss sufficient conditions for fully-discrete stability using the energy method.
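To make the notion of the region of absolute stability concrete, here is a small sketch (our own example, with the classical fourth-order Runge-Kutta method as a representative choice): applied to the scalar test equation vt = ℓv, RK4 gives v^(k+1) = R(ℓΔt)v^k with stability polynomial R(z) = 1 + z + z²/2 + z³/6 + z⁴/24, and the region of absolute stability is the set {z : |R(z)| ≤ 1}.

```python
import numpy as np

def R_rk4(z):
    """Stability polynomial of the classical 4th-order Runge-Kutta method."""
    return 1.0 + z + z**2 / 2.0 + z**3 / 6.0 + z**4 / 24.0

# For purely imaginary eigenvalues (e.g., L = a*D0 on a periodic grid),
# RK4 is absolutely stable for |l*dt| up to 2*sqrt(2) = 2.828...
print(abs(R_rk4(2.8j)))   # <= 1: inside the stability region
print(abs(R_rk4(2.9j)))   # > 1: outside the stability region
```

Since the eigenvalues of the centered operator D0 on a periodic grid are purely imaginary, this bound on the imaginary axis translates directly into a CFL restriction on λ for the semi-discrete problem (7.85) integrated with RK4.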

Necessary conditions. Recall the vo