Abstract
I review the development of numerical evolution codes for general relativity based upon the characteristic initial-value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black-hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black-hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity, where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.
1 Introduction
It is my pleasure to review progress in numerical relativity based upon characteristic evolution. In the spirit of Living Reviews in Relativity, I invite my colleagues to continue to send me contributions and comments at winicour@pitt.edu.
We are now in an era in which Einstein’s equations can effectively be considered solved at the local level. Several groups, as reported here and in other Living Reviews, have developed 3D Cauchy evolution codes, which are stable and accurate in some sufficiently bounded domain. The pioneering works [235] (based upon a harmonic formulation) and [78, 21] (based upon BSSN formulations [268, 38]) have initiated dramatic progress in the ability of these codes to simulate the inspiral and merger of binary black holes, the premier problem in classical relativity. Global solutions of binary black holes are another matter. Characteristic evolution codes have been successful in treating the exterior region of asymptotically-flat spacetimes extending to future null infinity. Just as several coordinate patches are necessary to describe a spacetime with nontrivial topology, the most effective attack on the binary black-hole waveform might involve a global solution patched together from pieces of spacetime handled by a combination of different codes and techniques.
Most of the effort in numerical relativity has centered about Cauchy codes based upon the {3 + 1} formalism [308], which evolve the spacetime inside an artificially constructed outer boundary. It has been common practice in Cauchy simulations of binary black holes to compute the waveform from data on a finite extraction worldtube inside the outer boundary, using perturbative methods based upon introducing a Schwarzschild background in the exterior region [1, 3, 2, 4, 255, 251, 214]. In order to properly approximate the waveform at null infinity, the extraction worldtube must be sufficiently large but at the same time causally and numerically isolated from errors propagating in from the outer boundary. Considerable improvement in this approach has resulted from efficient methods for dealing with a very large outer boundary and from techniques to extrapolate the extracted waveform to infinity. However, this is not an ideally efficient approach and is especially impractical to apply to simulations of stellar collapse. A different approach, which is specifically tailored to study radiation at null infinity, can be based upon the characteristic initial-value problem. This eliminates error due to asymptotic approximations and the gauge effects introduced by the choice of a finite extraction worldtube.
In the 1960s, Bondi [62, 63] and Penrose [227] pioneered the use of null hypersurfaces to describe gravitational waves. The characteristic initial-value problem did not receive much attention before its importance in general relativity was recognized. Historically, the development of computational physics has focused on hydrodynamics, where the characteristics typically do not define useful coordinate surfaces and there is no generic outer boundary behavior comparable to null infinity. But this new approach has flourished in general relativity. It has led to the first unambiguous description of gravitational radiation in a fully nonlinear context. By formulating asymptotic flatness in terms of characteristic hypersurfaces extending to infinity, it was possible to reconstruct, in a nonlinear geometric setting, the basic properties of gravitational waves, which had been developed in linearized theory on a Minkowski background. The major new nonlinear features were the Bondi mass and news function, and the mass-loss formula relating them. The Bondi news function is an invariantly-defined complex radiation amplitude N = N_{⊕} + iN_{⊗}, whose real and imaginary parts correspond to the time derivatives ∂_{t}h_{⊕} and ∂_{t}h_{⊗} of the “plus” and “cross” polarization modes of the strain h incident on a gravitational wave antenna. The corresponding waveforms are important both for the design of detection templates for a binary black-hole inspiral and merger and for the determination of the resulting recoil velocity.
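Since the news is the time derivative of the strain, a strain waveform can be recovered from a sampled news function by time integration. A minimal sketch of this relation (the damped-sinusoid test signal and all variable names are illustrative, not taken from any code discussed in this review):

```python
import numpy as np

# Recover the strain polarizations from a sampled news function using
# N = d(h_plus)/dt + i d(h_cross)/dt. The damped-sinusoid "strain" below
# is purely an illustrative test signal, not output of any production code.
t = np.linspace(0.0, 20.0, 2001)
h = np.exp(-t / 10) * np.exp(2j * t)      # test strain h_plus + i h_cross
N = np.gradient(h, t)                     # sampled news: time derivative of h
# trapezoidal time integration of the news, anchored at the initial strain h(0)
h_rec = h[0] + np.concatenate(([0.0], np.cumsum((N[1:] + N[:-1]) / 2 * np.diff(t))))
err = np.abs(h_rec - h).max()
print(f"max reconstruction error: {err:.1e}")
```

The reconstruction error is limited only by the second-order accuracy of the finite differencing and quadrature.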
The recent success of Cauchy evolutions in simulating binary black holes emphasizes the need to apply global techniques to accurate waveform extraction. This has stimulated several attempts to increase the accuracy of characteristic evolution. The Cauchy simulations have incorporated increasingly sophisticated numerical techniques, such as mesh refinement, multidomain decomposition, pseudospectral collocation and high-order (in some cases eighth-order) finite-difference approximations. The initial characteristic codes were developed with unigrid second-order accuracy. One of the prime factors affecting the accuracy of any characteristic code is the introduction of a smooth coordinate system covering the sphere, which labels the null directions on the outgoing light cones. This is also an underlying problem in meteorology and oceanography. In a pioneering paper on large-scale numerical weather prediction, Phillips [229] put forward a list of desirable features for a mapping of the sphere to be useful for global forecasting. The first requirement was freedom from singularities. This led to two distinct choices, which had been developed earlier in purely geometrical studies: stereographic coordinates (two coordinate patches) and cubed-sphere coordinates (six patches). Both coordinate systems have been rediscovered in the context of numerical relativity (see Section 4.1). The cubed-sphere method has stimulated two new attempts at improved codes for characteristic evolution (see Section 4.2.4). An ingenious third treatment, based upon a toroidal map of the sphere, was devised in developing a characteristic code for the Einstein equations [36] (see Section 4.1.3).
Another issue affecting code accuracy is the choice between a second- or first-differential-order reduction of the evolution system. Historically, the predominant importance of computational fluid dynamics has favored first-order systems, in particular the reduction to symmetric hyperbolic form. However, in acoustics and elasticity theory, where the natural treatment is in terms of second-order wave equations, an effective argument for the second-order form has been made [187, 188]. In general relativity, the question of whether first- or second-order formulations are more natural depends on how Einstein’s equations are reduced to a hyperbolic system by some choice of coordinates and variables. The second-order form is more natural in the harmonic formulation, where the Einstein equations reduce to quasilinear wave equations. The first-order form is more natural in the Friedrich-Nagy formulation [118], which includes the Weyl tensor among the evolution variables, and was used in the first demonstration of a well-posed initial-boundary value problem for Einstein’s equations. Investigations of first-order formulations of the characteristic initial-value problem are discussed in Section 4.2.3.
The major drawback of a stand-alone characteristic approach arises from the formation of caustics in the light rays generating the null hypersurfaces. In the most ambitious scheme proposed at the theoretical level, such caustics would be treated “head-on” as part of the evolution problem [283]. This is a profoundly attractive idea. Only a few structurally-stable caustics can arise in numerical evolution, and their geometrical properties are well enough understood to model their singular behavior numerically [120], although a computational implementation has not yet been attempted.
In the typical setting for the characteristic initial-value problem, the domain of dependence of a single smooth null hypersurface is empty. In order to obtain a nontrivial evolution problem, the null hypersurface must either be completed to a caustic-crossover region where it pinches off, or an additional inner boundary must be introduced. So far, the only caustics that have been successfully evolved numerically in general relativity are pure point caustics (the complete null cone problem). When spherical symmetry is not present, the stability conditions near the vertex of a light cone place a strong restriction on the allowed time step [146]. Nevertheless, point caustics in general relativity have been successfully handled for axisymmetric vacuum spacetimes [142]. Progress toward extending these results to realistic astrophysical sources has been made by coupling an axisymmetric characteristic gravitational-hydro code with a high-resolution shock-capturing code for the relativistic hydrodynamics, as initiated in the thesis of Siebel [269]. This has enabled the global characteristic simulation of the oscillation and collapse of a relativistic star in which the emitted gravitational waves are computed at null infinity (see Sections 7.1 and 7.2). Nevertheless, computational demands to extend these results to 3D evolution would be prohibitive using current-generation supercomputers, due to the small time step required at the vertex of the null cone (see Section 3.3). This is an unfortunate feature of present-day finite-difference codes, which might be eliminated by the use, say, of a spectral approach. Away from the caustics, characteristic evolution offers myriad computational and geometrical advantages. Vacuum simulations of black-hole spacetimes, where the inner boundary can be taken to be the white-hole horizon, offer a scenario where both the time-step and caustic problems can be avoided and three-dimensional simulations are practical (as discussed in Section 4.5).
An early example was the study of gravitational radiation from the post-merger phase of a binary black hole using a fully nonlinear three-dimensional characteristic code [311, 312].
At least in the near future, fully three-dimensional computational applications of characteristic evolution are likely to be restricted to some mixed form, in which data is prescribed on a nonsingular but incomplete initial null hypersurface N and on a second inner boundary B, which together with the initial null hypersurface determines a nontrivial domain of dependence. The hypersurface B may be either (i) null, (ii) timelike or (iii) spacelike, as schematically depicted in Figure 1. The first two possibilities give rise to (i) the double-null problem and (ii) the nullcone-worldtube problem. Possibility (iii) has more than one interpretation. It may be regarded as a Cauchy initial-boundary value problem where the outer boundary is null. An alternative interpretation is the Cauchy-characteristic matching (CCM) problem, in which the Cauchy and characteristic evolutions are matched transparently across a worldtube W, as indicated in Figure 1.
In CCM, it is possible to choose the matching interface between the Cauchy and characteristic regions to be a null hypersurface, but it is more practical to match across a timelike worldtube. CCM combines the advantages of characteristic evolution in treating the outer radiation zone in spherical coordinates, which are naturally adapted to the topology of the worldtube, with the advantages of Cauchy evolution in treating the inner region in Cartesian coordinates, where spherical coordinates would break down.
In this review, we trace the development of characteristic algorithms from model 1D problems to a 2D axisymmetric code, which computes the gravitational radiation from the oscillation and gravitational collapse of a relativistic star, to a 3D code designed to calculate the waveform emitted in the merger-to-ringdown phase of a binary black hole. And we trace the development of CCM from early feasibility studies to successful implementation in the linear regime and through current attempts to treat the binary black-hole problem.
CCM eliminates the need for outer boundary data for the Cauchy evolution and supplies the waveform at null infinity via a characteristic evolution. At present, the only successful 3D application of CCM in general relativity has been to the linearized matching problem between a 3D characteristic code and a 3D Cauchy code based upon harmonic coordinates [287] (see Section 5.8). Here the linearized Cauchy code satisfies a well-posed initial-boundary value problem, which seems to be a critical missing ingredient in previous attempts at CCM in general relativity. Recently, a well-posed initial-boundary value problem has been established for fully nonlinear harmonic evolution [192] (see Section 5.3), which should facilitate the extension of CCM to the nonlinear case.
Cauchy-characteristic extraction (CCE), which is one of the pieces of the CCM strategy, also supplies the waveform at null infinity by means of a characteristic evolution. However, in this case the artificial outer Cauchy boundary is left unchanged and the data for the characteristic evolution is extracted from Cauchy data on an interior worldtube. Since my last review, the most important development has been the application of CCE to the binary black-hole problem. Beginning with the work in [243], CCE has become an important tool for gravitational-wave data analysis (see Section 6.2). The application of CCE to this problem was developed as a major part of the PhD thesis of Reisswig [241].
In previous reviews, I tried to include material on the treatment of boundaries in the computational mathematics and fluid dynamics literature because of its relevance to the CCM problem. The fertile growth of this subject has warranted a separate Living Review on boundary conditions, which is presently under construction and will appear soon [261]. In anticipation of this, I will not attempt to keep this subject up to date except for material of direct relevance to CCM. See [260, 250] for independent reviews of boundary conditions that have been used in numerical relativity.
The well-posedness of the associated initial-boundary value problem, i.e., that there exists a unique solution, which depends continuously on the data, is a necessary condition for a successful numerical treatment. In addition to the forthcoming Living Review [261], this subject is covered in the review [119] and the book [185].
If well-posedness can be established using energy estimates obtained by integration by parts with respect to the coordinates defining the numerical grid, then the analogous finite-difference estimates obtained by summation by parts [191] provide guidance for a stable finite-difference evolution algorithm. See the forthcoming Living Review [261] for a discussion of the application of summation by parts to numerical relativity.
The problem of computing the evolution of a neutron star in close orbit about a black hole is of clear importance to the new gravitational wave detectors. The interaction with the black hole could be strong enough to produce a drastic change in the emitted waves, say by tidally disrupting the star, so that a perturbative calculation would be inadequate. The understanding of such nonlinear phenomena requires wellbehaved numerical simulations of hydrodynamic systems satisfying Einstein’s equations. Several numerical relativity codes for treating the problem of a neutron star near a black hole have been developed, as described in the Living Review on “Numerical Hydrodynamics in General Relativity” by Font [109]. Although most of these efforts concentrate on Cauchy evolution, the characteristic approach has shown remarkable robustness in dealing with a single black hole or relativistic star. In this vein, axisymmetric studies of the oscillation and gravitational collapse of relativistic stars have been achieved (see Section 7.2) and progress has been made in the 3D simulation of a body in close orbit about a Schwarzschild black hole (see Sections 4.6 and 7.3).
2 The Characteristic Initial Value Problem
Characteristics have traditionally played an important role in the analysis of hyperbolic partial differential equations. However, the use of characteristic hypersurfaces to supply the foliation underlying an evolution scheme has been mainly restricted to relativity. This is perhaps natural because in curved spacetime there is no longer a preferred Cauchy foliation provided by the Euclidean 3spaces allowed in Galilean or special relativity. The method of shooting along characteristics is a standard technique in many areas of computational physics, but evolution based upon characteristic hypersurfaces is quite uniquely limited to relativity.
Bondi’s initial use of null coordinates to describe radiation fields [62] was followed by a rapid development of other null formalisms. These were distinguished either as metric-based approaches, as developed for axisymmetry by Bondi, Metzner, and van der Burg [63] and generalized to three dimensions by Sachs [258], or as null-tetrad approaches, in which the Bianchi identities appear as part of the system of equations, as developed by Newman and Penrose [216].
At the outset, null formalisms were applied to construct asymptotic solutions at null infinity by means of 1/r expansions. Soon afterward, Penrose [227] devised the conformal compactification of null infinity \({\mathcal I}\) (“scri”), thereby reducing to geometry the asymptotic quantities describing the physical properties of the radiation zone, most notably the Bondi mass and news function. The characteristic initialvalue problem rapidly became an important tool for the clarification of fundamental conceptual issues regarding gravitational radiation and its energy content. It laid bare and geometrized the gravitational far field.
The initial focus on asymptotic solutions clarified the kinematic properties of radiation fields but could not supply the dynamical properties relating the waveform to a specific source. It was soon realized that instead of carrying out a 1/r expansion, one could reformulate the approach in terms of the integration of ordinary differential equations along the characteristics (null rays) [288]. The integration constants supplied on some inner boundary then played the role of sources in determining the specific waveforms obtained at infinity. In the double-null initial value problem of Sachs [259], the integration constants are supplied at the intersection of outgoing and ingoing null hypersurfaces. In the worldtube-nullcone formalism, the sources were represented by integration constants on a timelike worldtube [288]. These early formalisms have gone through much subsequent revamping. Some have been reformulated to fit the changing styles of modern differential geometry. Some have been reformulated in preparation for implementation as computational algorithms. The articles in [97] give a representative sample of formalisms. Rather than including a review of the extensive literature on characteristic formalisms in general relativity, I concentrate here on those approaches that have been implemented as computational evolution schemes.
All characteristic evolution schemes share the same skeletal form. The fundamental ingredient is a foliation by null hypersurfaces u = const., which are generated by a two-dimensional set of null rays, labeled by coordinates x^{A}, with a coordinate λ varying along the rays. In (u, λ, x^{A}) null coordinates, the main set of Einstein equations take the schematic form

\[F_{,\lambda} = H_F[F, G] \qquad (1)\]

and

\[G_{,\lambda u} = H_G[F, G, G_{,u}]. \qquad (2)\]
Here F represents a set of hypersurface variables, G a set of evolution variables, and H_{F} and H_{G} are nonlinear hypersurface operators, i.e., they operate locally on the values of F, G and G,_{u} intrinsic to a single null hypersurface. In the Bondi formalism, these hypersurface equations have a hierarchical structure in which the members of the set F can be integrated in turn in terms of the characteristic data for the evolution variables and the computed values of prior members of the hierarchy. In addition to these main Einstein equations, there is a subset of four subsidiary Einstein equations, which are satisfied by virtue of the Bianchi identities, provided that they are satisfied on a hypersurface transverse to the characteristics. These equations have the physical interpretation as conservation laws. Mathematically they are analogous to the constraint equations of the canonical formalism. But they are not elliptic since they may be intrinsic to null or timelike hypersurfaces, rather than spacelike Cauchy hypersurfaces.
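The hierarchical integration along the rays lends itself to a simple marching algorithm. As a toy illustration, the flat-space model equation ∂_t∂_xΦ = S (discussed further in Section 2.1) can be marched across the grid using the exact identity relating the values of Φ at the corners of a null parallelogram to the integral of the source over it; the grid sizes, data, and function names below are illustrative choices, not the algorithm of any specific code:

```python
import numpy as np

# Marching scheme for the model characteristic equation  d_t d_x Phi = S,
# built on the exact null-parallelogram identity
#   Phi_N = Phi_E + Phi_W - Phi_S + (integral of S over the parallelogram),
# with t labeling the null hypersurfaces and x running along them.

def evolve(f, g, S, x1=4.0, t1=2.0, nx=201, nt=101):
    """March Phi across the grid from data f(x) on t = 0 and g(t) on x = 0."""
    x = np.linspace(0.0, x1, nx)
    t = np.linspace(0.0, t1, nt)
    dx, dt = x[1] - x[0], t[1] - t[0]
    Phi = np.empty((nt, nx))
    Phi[0, :] = f(x)      # data on the initial characteristic t = 0
    Phi[:, 0] = g(t)      # data on the inner boundary (worldtube) x = 0
    for n in range(1, nt):
        for i in range(1, nx):
            # midpoint rule for the source integral over one grid cell
            src = S(t[n] - dt / 2, x[i] - dx / 2) * dt * dx
            Phi[n, i] = Phi[n, i - 1] + Phi[n - 1, i] - Phi[n - 1, i - 1] + src
    return t, x, Phi

# Check against an exact source-free solution Phi = g(t) + f(x) = t**2 + sin(x).
t, x, Phi = evolve(f=np.sin, g=lambda t: t**2, S=lambda t, x: 0.0)
err = np.abs(Phi - (t[:, None]**2 + np.sin(x[None, :]))).max()
print(f"max error: {err:.1e}")
```

For source-free data of the separable form g(t) + f(x), the parallelogram identity is exact, so the march reproduces the solution to rounding error.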
Computational implementation of characteristic evolution may be based upon different versions of the formalism (i.e., metric or tetrad) and different versions of the initial value problem (i.e., double null or worldtube-nullcone). The performance and computational requirements of the resulting evolution codes can vary drastically. However, most characteristic evolution codes share certain common advantages:

- The characteristic initial data is free, i.e., there are no elliptic constraints on the data. This eliminates the need for time-consuming iterative constraint solvers with their accompanying artificial boundary conditions. This flexibility and control in prescribing initial data has the trade-off of limited experience with prescribing physically realistic characteristic initial data.

- The coordinates are very “rigid”, i.e., there is very little remaining gauge freedom.

- The constraints satisfy ordinary differential equations along the characteristics, which force any constraint violation to fall off asymptotically as 1/r^{2}.

- No second time derivatives appear, so that the number of basic variables is at most half the number for the corresponding version of the Cauchy problem.

- The main Einstein equations form a system of coupled ordinary differential equations with respect to the parameter λ varying along the characteristics. This allows construction of an evolution algorithm in terms of a simple march along the characteristics.

- In problems with isolated sources, the radiation zone can be compactified into a finite grid boundary with the metric rescaled by 1/r^{2} as an implementation of Penrose’s conformal boundary at future null infinity \({{\mathcal I}^ +}\). Because \({{\mathcal I}^ +}\) is a null hypersurface, no extraneous outgoing radiation condition or other artificial boundary condition is required. The analogous treatment in the Cauchy problem requires the use of hyperboloidal spacelike hypersurfaces asymptoting to null infinity [116]. For reviews of the hyperboloidal approach and its status in treating the associated three-dimensional computational problem, see [171, 110].

- The grid domain is exactly the region in which waves propagate, which is ideally efficient for radiation studies. Since each null hypersurface of the foliation extends to infinity, the radiation is calculated immediately (in retarded time).

- In black-hole spacetimes, a large redshift at null infinity relative to internal sources is an indication of the formation of an event horizon and can be used to limit the evolution to the exterior region of spacetime. While this can be disadvantageous for late-time accuracy, it allows the possibility of identifying the event horizon “on the fly”, as opposed to Cauchy evolution, where the event horizon can only be located after the evolution has been completed.

Perhaps most important from a practical view, characteristic evolution codes have shown remarkably robust stability and were the first to carry out long-term evolutions of moving black holes [139].
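The compactification of the radiation zone described in the list above can be sketched with a rescaled radial coordinate; the particular map x = r/(R + r) and the scale R are illustrative choices rather than the convention of any specific code:

```python
import numpy as np

# Compactified radial coordinate x = r/(R + r), mapping r in [0, infinity]
# to x in [0, 1], so that future null infinity sits at the finite grid
# boundary x = 1. R sets an illustrative length scale.
R = 1.0
x = np.linspace(0.0, 1.0, 11)        # uniform grid in the compactified coordinate
with np.errstate(divide='ignore'):
    r = R * x / (1.0 - x)            # physical radius; r = inf at x = 1
print(r)
```

With this map, a boundary condition at null infinity is imposed at the ordinary grid point x = 1, and no artificial outgoing-radiation condition at a finite radius is needed.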
Characteristic schemes also share, as a common disadvantage, the necessity either to deal with caustics or to avoid them altogether. The scheme to tackle the caustics head-on by including their development and structure as part of the evolution [283, 120] is perhaps a great idea still ahead of its time but one that should not be forgotten. There are only a handful of structurally-stable caustics, and they have well-known algebraic properties. This structural stability should, in principle, make it possible to model their singular behavior in terms of Padé approximants, and algorithms to evolve the elementary caustics have been proposed [92, 280]. In the axisymmetric case, cusps and folds are the only structurally-stable caustics, and they have already been identified in the horizon formation occurring in simulations of head-on collisions of black holes and in the temporarily toroidal horizons occurring in collapse of rotating matter [209, 267]. In a generic binary black-hole horizon, where axisymmetry is broken, there is a closed curve of cusps, which bounds the two-dimensional region on the event horizon where the black holes initially form and merge [197, 173].
2.1 The worldtube-nullcone problem
A version of the characteristic initial-value problem for Einstein’s equations, which avoids caustics, is the worldtube-nullcone problem, where boundary data is given on a timelike worldtube and on an initial outgoing null hypersurface [288]. The underlying physical picture is that the worldtube data represent the outgoing gravitational radiation emanating from interior matter sources, while ingoing radiation incident on the system is represented by the initial null data.
The well-posedness of the worldtube-nullcone problem for Einstein’s equations has not yet been established. Rendall [249] established the well-posedness of the double-null version of the problem, where data is given on a pair of intersecting characteristic hypersurfaces. He did not treat the characteristic problem head-on but reduced it to a standard Cauchy problem with data on a spacelike hypersurface passing through the intersection of the characteristic hypersurfaces. Unfortunately, this approach cannot be applied to the null-timelike problem and it does not provide guidance for the development of a stable finite-difference approximation based upon characteristic coordinates.
Another limiting case of the nullcone-worldtube problem is the Cauchy problem on a characteristic cone, corresponding to the limit in which the timelike worldtube shrinks to a nonsingular worldline. Choquet-Bruhat, Chruściel, and Martín-García established the existence of solutions to this problem using harmonic coordinates adapted to the null cones, thus avoiding the singular nature of characteristic coordinates at the vertex [81]. Again, this does not shed light on numerical implementation in characteristic coordinates.
A necessary condition for the well-posedness of the gravitational problem is that the corresponding problem for the quasilinear wave equation be well posed. This brings our attention to the Minkowski-space wave equation, which provides the simplest version of the worldtube-nullcone problem. The treatment of this simplified problem traces back to Duff [103], who showed existence and uniqueness for the case of analytic data. Later, Friedlander extended existence and uniqueness to the C^{∞} case for the linear wave equation on an asymptotically-flat curved-space background [112, 111].
The well-posedness of a variable-coefficient or quasilinear problem requires energy estimates for the derivatives of the linearized solutions. Partial estimates for characteristic boundary-value problems were first obtained by Müller zum Hagen and Seifert [213]. Later, Balean carried out a comprehensive study of the differentiability of solutions of the worldtube-nullcone problem for the flat-space wave equation [22, 23]. He was able to establish the required estimates for the derivatives tangential to the outgoing null cones, but weaker estimates for the time derivatives transverse to the cones had to be obtained from a direct integration of the wave equation. Balean concentrated on the differentiability of the solution rather than well-posedness. Frittelli [121] made the first explicit investigation of well-posedness, using the approach of Duff, in which the characteristic formulation of the wave equation is reduced to a canonical first-order differential form, in close analogy to the symmetric hyperbolic formulation of the Cauchy problem. The energy associated with this first-order reduction gave estimates for the derivatives of the field tangential to the null hypersurfaces but, as in Balean’s work, weaker estimates for the time derivatives had to be obtained indirectly. As a result, well-posedness could not be established for variable-coefficient or quasilinear wave equations.
The basic difficulty underlying this problem can be illustrated in terms of the one-(spatial)-dimensional wave equation

\[\partial_{\tilde t}^2 \Phi - \partial_{\tilde x}^2 \Phi = 0, \qquad (3)\]

where \((\tilde t,\tilde x)\) are standard spacetime coordinates. The conserved energy

\[E(\tilde t) = \frac{1}{2}\int \left((\partial_{\tilde t}\Phi)^2 + (\partial_{\tilde x}\Phi)^2\right) d\tilde x \qquad (4)\]

leads to the well-posedness of the Cauchy problem. In characteristic coordinates \((t = \tilde t - \tilde x,\ x = \tilde t + \tilde x)\), the wave equation transforms into

\[\partial_t \partial_x \Phi = 0. \qquad (5)\]

The conserved energy on the characteristics t = const.,

\[E(t) = \int (\partial_x \Phi)^2\, dx, \qquad (6)\]

no longer controls the time derivative ∂_{t}Φ.
As a result, the standard technique for establishing well-posedness of the Cauchy problem fails. For Equation (3), the solutions to the Cauchy problem with compact initial data on \(\tilde t = 0\) are square integrable and well-posedness can be established using the L_2 norm (4). However, in characteristic coordinates the one-dimensional wave equation (5) admits signals traveling in the +x-direction with infinite coordinate velocity. In particular, initial data of compact support Φ(0, x) = f(x) on the characteristic t = 0 admits the solution Φ = g(t) + f(x), provided that g(0) = 0. Here g(t) represents the profile of a wave, which travels from past null infinity (x → −∞) to future null infinity (x → +∞). Thus, without a boundary condition at past null infinity, there is no unique solution and the Cauchy problem is ill posed. Even with the boundary condition Φ(t, −∞) = 0, a source of compact support S(t, x) added to Equation (5), i.e.,

\[\partial_t \partial_x \Phi = S, \qquad (7)\]

produces waves propagating to x = +∞ so that, although the solution is unique, it is still not square integrable.
On the other hand, consider the modified problem obtained by setting Φ = e^{ax}Ψ (with a > 0),

\[\partial_t \partial_x \Psi + a\, \partial_t \Psi = F, \qquad (8)\]

where F = e^{−ax}S. With the boundary condition Ψ(t, −∞) = 0, the solutions to (8) vanish at x = +∞ and are square integrable. As a result, the Cauchy problem (8) is well posed with respect to an L_2 norm. For the simple example where F = 0, multiplication of (8) by \((2a\Psi + {\partial _x}\Psi + {1 \over 2}{\partial _t}\Psi)\) and integration by parts gives

\[\frac{d}{dt}\int \left(a^2\Psi^2 + \frac{1}{2}(\partial_x\Psi)^2\right) dx + \frac{a}{2}\int \left(\partial_t\Psi - \partial_x\Psi\right)^2 dx = \frac{a}{2}\int (\partial_x\Psi)^2\, dx. \qquad (9)\]
The resulting inequality

\[\frac{d}{dt}E(t) \leq a\, E(t) \qquad (10)\]
for the energy

\[E(t) = \int \left(a^2\Psi^2 + \frac{1}{2}(\partial_x\Psi)^2\right) dx \qquad (11)\]
provides the estimates for ∂_{x}Ψ and Ψ, which are necessary for well-posedness. Estimates for ∂_{t}Ψ, and other higher derivatives, follow from applying this approach to the derivatives of (8). The approach can be extended to include the source term F and other generic terms of lower differential order. This allows well-posedness to be extended to the case of variable coefficients and, locally in time, to the quasilinear case.
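The integration-by-parts argument can be checked symbolically. The sketch below assumes one consistent choice of energy density, a²Ψ² + ½(∂_xΨ)², and flux, 2aΨ∂_tΨ + ¼(∂_tΨ)²; it verifies that the product of the quoted multiplier with the left-hand side of the modified equation (with F = 0) is exactly a total divergence plus a remainder controlled by the energy:

```python
import sympy as sp

t, x, a = sp.symbols('t x a', positive=True)
Psi = sp.Function('Psi')(t, x)
Pt, Px = sp.diff(Psi, t), sp.diff(Psi, x)

# Left-hand side of the modified equation (F = 0) and the quoted multiplier
lhs = sp.diff(Psi, t, x) + a * Pt
mult = 2 * a * Psi + Px + sp.Rational(1, 2) * Pt

# Candidate energy density, flux, and remainder (assumed forms, chosen so
# that the remainder is a perfect square minus a term bounded by the energy)
Edens = a**2 * Psi**2 + sp.Rational(1, 2) * Px**2
Fx = 2 * a * Psi * Pt + sp.Rational(1, 4) * Pt**2
rem = sp.Rational(1, 2) * a * (Pt - Px)**2 - sp.Rational(1, 2) * a * Px**2

# The identity mult*lhs = d(Edens)/dt + d(Fx)/dx + rem should hold exactly
identity = sp.expand(mult * lhs - (sp.diff(Edens, t) + sp.diff(Fx, x) + rem))
print(identity)  # 0
```

Integrating the identity in x, with the flux vanishing at the ends, drops the ∂_x term and discards the positive perfect square, leaving the energy inequality.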
The modification in going from (7) to (8) leads to an effective modification of the standard energy for the problem. Rewritten in terms of the original variable Φ = e^{ax}Ψ, Equation (11) corresponds to the energy
Thus, while the Cauchy problem for (7) is ill posed with respect to the standard L_{2} norm, it is well posed with respect to the exponentially weighted norm (12).
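The effect of the weighting can be seen in a toy numerical example. The profile below is an arbitrary smooth stand-in of mine (not from the text) for a solution that approaches a nonzero constant as x → +∞: its ordinary L₂ norm diverges with the cutoff, while the weighted norm of Ψ = e^{−ax}Φ is finite:

```python
import numpy as np

a = 1.0
phi = lambda x: 0.5 * (1.0 + np.tanh(x))   # -> 0 at -infinity, -> 1 at +infinity

# exponentially weighted norm squared of Psi = exp(-a x) * phi : finite
dx = 1.0e-3
x = np.arange(-50.0, 50.0, dx)             # integrand is negligible outside
weighted = np.sum((np.exp(-a * x) * phi(x)) ** 2) * dx

# unweighted L2 norm squared keeps growing linearly with the cutoff L
def l2_sq(L):
    xs = np.arange(-L, L, dx)
    return np.sum(phi(xs) ** 2) * dx

print(weighted)                        # converged, finite (about 1/2 here)
print(l2_sq(100.0) - l2_sq(50.0))      # ~ 50: no finite L2 norm exists
```

For this particular profile the weighted integral can even be done in closed form (substituting u = e^{−2x} gives 1/2), which makes the convergence of the numerical sum easy to verify.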
This technique was introduced in [193] to treat the worldtube-nullcone problem for the three-dimensional quasilinear wave equation for a scalar field Φ in an asymptotically-flat curved space background with source S,
where the metric g^{ab} and its associated covariant derivative ∇_{a} are explicitly prescribed functions of (Φ, x^{c}). In terms of retarded spherical null coordinates x^{a} = (u = t − r, r, θ, ϕ), the initial-boundary value problem consists of determining Φ in the region (r > R, u > 0) given data Φ(u, R, θ, ϕ) on the timelike worldtube r = R and Φ(0, r, θ, ϕ) on the initial null hypersurface u = 0. It was shown that this quasilinear wave problem is well posed on a domain extending to future null infinity, subject to smoothness and asymptotic falloff conditions on the data. The treatment was based upon energy estimates obtained by integration by parts with respect to the characteristic coordinates. As a result, the analogous finite-difference estimates obtained by summation by parts [191] do provide guidance for the development of a stable numerical evolution algorithm. The corresponding worldtube-nullcone problem for Einstein’s equations plays a major underlying role in the CCM strategy. Its well-posedness appears to be confirmed by numerical simulations but the analytic proof remains an important unresolved problem.
3 Prototype Characteristic Evolution Codes
Limited computer power, as well as the instabilities arising from nonhyperbolic formulations of Einstein’s equations, necessitated that the early code development in general relativity be restricted to spacetimes with symmetry. Characteristic codes were first developed for spacetimes with spherical symmetry. The techniques for other special relativistic fields that propagate on null characteristics are similar to the gravitational case. Such fields are included in this section. We postpone treatment of relativistic fluids, whose characteristics are timelike, until Section 7.
3.1 {1 + 1}-dimensional codes
It is often said that the solution of the general ordinary differential equation is essentially known, in light of the success of computational algorithms and present-day computing power. Perhaps this is an overstatement because investigating singular behavior is still an art. But, in this spirit, it is fair to say that the general system of hyperbolic partial differential equations in one spatial dimension seems to be a solved problem in general relativity. Codes have been successful in revealing important new phenomena underlying singularity formation in cosmology [42] and in dealing with unstable spacetimes to discover critical phenomena [151]. As described below, characteristic evolution has contributed to a rich variety of such results.
One of the earliest characteristic evolution codes, constructed by Corkill and Stewart [92, 279], treated spacetimes with two Killing vectors using a grid based upon double null coordinates, with the null hypersurfaces intersecting in the surfaces spanned by the Killing vectors. They simulated colliding plane waves and evolved the Khan-Penrose [180] collision of impulsive (δ-function curvature) plane waves to within a few numerical zones from the final singularity, with extremely close agreement with the analytic results. Their simulations of collisions with more general waveforms, for which exact solutions are not known, provided input to the understanding of singularity formation, which was unforeseen in the analytic treatments of this problem.
Many {1 + 1}-dimensional characteristic codes have been developed for spherically-symmetric systems. Here matter must be included in order to make the system non-Schwarzschild. Initially the characteristic evolution of matter was restricted to simple cases, such as massless Klein-Gordon fields, which allowed simulation of gravitational collapse and radiation effects in the simple context of spherical symmetry. Now, characteristic evolution of matter is progressing to more complicated systems. Its application to hydrodynamics has made significant contributions to general relativistic astrophysics, as reviewed in Section 7.
The synergy between analytic and computational approaches has led to dramatic results in the massless Klein-Gordon case. On the analytic side, working in a characteristic initial value formulation based upon outgoing null cones, Christodoulou made a penetrating study of the spherically-symmetric problem [82, 83, 84, 85, 86, 87]. In a suitable function space, he showed the existence of an open ball about Minkowski space data whose evolution is a complete regular spacetime; he showed that an evolution with a nonzero final Bondi mass forms a black hole; he proved a version of cosmic censorship for generic data; and he established the existence of naked singularities for non-generic data. What this analytic tour de force did not reveal was the remarkable critical behavior in the transition to the black-hole regime, which was discovered by Choptuik [79, 80] in simulations using Cauchy evolution. This phenomenon has now been understood in terms of the methods of renormalization group theory and intermediate asymptotics, and has spawned a new subfield in general relativity, which is covered in the Living Review on “Critical Phenomena in Gravitational Collapse” by Gundlach [151].
The characteristic evolution algorithm for the spherically-symmetric Einstein-Klein-Gordon problem provides a simple illustration of the techniques used in the general case. It centers about the evolution scheme for the scalar field, which constitutes the only dynamical field. Given the scalar field, all gravitational quantities can be determined by integration along the characteristics of the null foliation. This is a coupled problem, since the scalar wave equation involves the curved space metric. It illustrates how null algorithms lead to a hierarchy of equations, which can be integrated along the characteristics to effectively decouple the hypersurface and dynamical variables.
In a Bondi coordinate system based upon outgoing null hypersurfaces u = const. and a surface area coordinate r, the metric is
Smoothness at r = 0 allows imposition of the coordinate conditions
The field equations consist of the curved space wave equation □Φ = 0 for the scalar field and two hypersurface equations for the metric functions:
The wave equation can be expressed in the form
where g = rΦ and □^{(2)} is the D’Alembertian associated with the twodimensional submanifold spanned by the ingoing and outgoing null geodesics. Initial null data for evolution consists of Φ(u_{0}, r) at the initial retarded time u_{0}.
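For orientation, a hedged reconstruction of the line element (14), the vertex conditions (15), the hypersurface equations (16, 17), and the wave equation (18) is given below, assuming the standard spherically-symmetric Bondi-Sachs conventions of [144] (signs and factors are as recalled and should be checked against the original):

\[
ds^2 = -\frac{V}{r}\,e^{2\beta}\,du^2 - 2e^{2\beta}\,du\,dr + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right),
\]
\[
\beta(u,0) = 0, \qquad V(u,r) = r + \mathcal{O}(r^3),
\]
\[
\beta_{,r} = 2\pi r \left(\Phi_{,r}\right)^2, \qquad V_{,r} = e^{2\beta},
\]
\[
\Box^{(2)} g = e^{-2\beta}\,\frac{g}{r}\left(\frac{V}{r}\right)_{,r}, \qquad g = r\Phi.
\]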
Because any two-dimensional geometry is conformally flat, the surface integral of □^{(2)}g over a null parallelogram Σ gives exactly the same result as in a flat 2-space, and leads to an integral identity upon which a simple evolution algorithm can be based [144]. Let the vertices of the null parallelogram be labeled by N, E, S, and W, corresponding, respectively, to their relative locations (North, East, South, and West) in the 2-space, as shown in Figure 2. Upon integration of Equation (18), curvature introduces an integral correction to the flat space null parallelogram relation between the values of g at the vertices:
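Assuming the conventions of [144] (a reconstruction; the sign and normalization of the curvature term should be checked against the original), the identity reads

\[
g_N = g_E + g_W - g_S - \frac{1}{2}\int_\Sigma du\,dr\,\left(\frac{V}{r}\right)_{,r}\frac{g}{r}.
\]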
This identity, in one form or another, lies behind all of the null evolution algorithms that have been applied to this system. The prime distinction between the different algorithms is whether they are based upon double null coordinates, or upon Bondi coordinates as in Equation (14). When a double null coordinate system is adopted, the points N, E, S, and W can be located in each computational cell at grid points, so that evaluation of the left-hand side of Equation (19) requires no interpolation. As a result, in flat space, where the right-hand side of Equation (19) vanishes, it is possible to formulate an exact evolution algorithm. In curved space, of course, there is a truncation error arising from the approximation of the integral, e.g., by evaluating the integrand at the center of Σ.
The identity (19) gives rise to the following explicit marching algorithm, indicated in Figure 2. Let the null parallelogram lie at some fixed θ and ϕ and span adjacent retarded time levels u_{0} and u_{0} + Δu. Imagine for now that the points N, E, S, and W lie on the spatial grid, with r_{N} − r_{W} = r_{E} − r_{S} = Δr. If g has been determined on the entire initial cone u_{0}, which contains the points E and S, and g has been determined radially outward from the origin to the point W on the next cone u_{0} + Δu, then Equation (19) determines g at the next radial grid point N in terms of an integral over Σ. The integrand can be approximated to second order, i.e., to \({\mathcal O}(\Delta r\Delta u)\), by evaluating it at the center of Σ. To this same accuracy, the value of g at the center equals its average between the points E and W, at which g has already been determined. Similarly, the value of (V/r),_{r} at the center of Σ can be approximated to second order in terms of values of V at points where it can be determined by integrating the hypersurface equations (16, 17) radially outward from r = 0.
After carrying out this procedure to evaluate g at the point N, the procedure can then be iterated to determine g at the next radially outward grid point on the u_{0} + Δu level, i.e., point n in Figure 2. Upon completing this radial march to null infinity, in terms of a compactified radial coordinate such as x = r/(1 + r), the field g is then evaluated on the next null cone at u_{0} + 2Δu, beginning at the vertex where smoothness gives the startup condition that g (u, 0) = 0.
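The flat-space limit of this march, where the curvature integral in Equation (19) vanishes, can be sketched in a few lines. The fragment below is illustrative only (all names and parameters are mine, not from the codes under review); it uses uncompactified (u, r) coordinates with the grid choice Δu = 2Δr, for which the vertices N, E, S, and W all land on grid points and the relation g_N = g_E + g_W − g_S holds exactly:

```python
import numpy as np

# Null-parallelogram marching scheme for the flat-space spherically
# symmetric wave equation, for g = r*Phi on outgoing null cones u = const.
# With du = 2*dr the ingoing characteristics connect grid points, so the
# flat-space identity g_N = g_E + g_W - g_S is exact up to rounding.

def f(s, s0=3.0, w=0.5):
    """Profile of the outgoing wave; any smooth function works here."""
    return np.exp(-((s - s0) / w) ** 2)

def exact_g(u, r):
    # g = r*Phi for Phi = (f(u) - f(u + 2r))/r, which is regular at r = 0
    return f(u) - f(u + 2.0 * r)

dr = 0.01
du = 2.0 * dr          # aligns the ingoing characteristics with the grid
nr, nu = 2000, 200
r = dr * np.arange(nr)

g = exact_g(0.0, r)    # initial null data on the cone u = 0
for n in range(1, nu + 1):
    g_new = np.empty(nr - n)        # characteristic domain of dependence shrinks
    g_new[0] = 0.0                  # regularity at the vertex: g(u, 0) = 0
    for j in range(1, nr - n):
        # N at (u + du, r_j), E at (u, r_{j+1}), W at (u + du, r_{j-1}), S at (u, r_j)
        g_new[j] = g[j + 1] + g_new[j - 1] - g[j]
    g = g_new

err = np.max(np.abs(g - exact_g(nu * du, r[: nr - nu])))
print(f"max error after {nu} steps: {err:.3e}")
```

Because the identity is exact in flat space, the error after hundreds of steps remains at rounding level, a useful sanity check before the curvature correction is added.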
In the compactified Bondi formalism, the vertices N, E, S, and W of the null parallelogram Σ cannot be chosen to lie exactly on the grid because, even in Minkowski space, the velocity of light in terms of a compactified radial coordinate x is not constant. As a consequence, the fields g, β, and V at the vertices of Σ are approximated to second-order accuracy by interpolating between grid points. However, cancellations arise between these four interpolations so that Equation (19) is satisfied to fourth-order accuracy. The net result is that the finite-difference version of Equation (19) steps g radially outward one zone with an error of fourth order in grid size, \({\mathcal O}({(\Delta u)^2}{(\Delta x)^2})\). In addition, the smoothness conditions (15) can be incorporated into the startup for the numerical integrations for V and β to ensure no loss of accuracy in starting up the march at r = 0. The resulting global error in g, after evolving a finite retarded time, is then \({\mathcal O}(\Delta u\Delta x)\), after compounding the errors from the ∼1/(ΔuΔx) zones.
When implemented on a grid based upon the (u, r) coordinates, the stability of this algorithm is subject to a Courant-Friedrichs-Lewy (CFL) condition requiring that the physical domain of dependence be contained in the numerical domain of dependence. In the spherically-symmetric case, this condition requires that the ratio of the time step to radial step be limited by (V/r)Δu ≤ 2Δr, where Δr = Δ[x/(1 − x)]. This condition can be built into the code using the value V/r = e^{2H}, corresponding to the maximum of V/r at \({{\mathcal I}^ +}\). The strongest restriction on the time step then arises just before the formation of a horizon, where V/r → ∞ at \({{\mathcal I}^ +}\). This infinite redshift provides a mechanism for locating the true event horizon “on the fly” and restricting the evolution to the exterior spacetime. Points near \({{\mathcal I}^ +}\) must be dropped in order to evolve across the horizon, due to the lack of a nonsingular compactified version of future timelike infinity \(i^{+}\).
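A hedged sketch of how this CFL restriction might be coded follows. The function and parameter names are illustrative assumptions of mine, as is taking H to be the asymptotic value of β at \({{\mathcal I}^ +}\); none of this is from the reviewed codes:

```python
import numpy as np

def cfl_time_step(x, H, safety=0.5):
    """Largest time step allowed by (V/r) du <= 2 dr on a uniform x-grid.

    x : grid in the compactified coordinate x = r/(1+r), with x < 1
    H : assumed asymptotic value of beta, so max(V/r) = exp(2H) at scri+
    """
    r = x / (1.0 - x)                  # physical radius, r = x/(1-x)
    dr = np.diff(r)                    # Delta r = Delta[x/(1-x)]
    v_over_r_max = np.exp(2.0 * H)     # worst case of V/r on the slice
    return safety * 2.0 * np.min(dr) / v_over_r_max

x = np.linspace(0.0, 0.99, 100)
du_flat = cfl_time_step(x, H=0.0)
du_redshifted = cfl_time_step(x, H=2.0)
print(du_flat, du_redshifted)   # the allowed step collapses as H grows
```

As H → ∞ near horizon formation, the allowed step shrinks to zero, which is the mechanism described above for halting the evolution at the horizon.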
The situation is quite different in a double null coordinate system, in which the vertices of the null parallelogram can be placed exactly on grid points so that the CFL condition is automatically satisfied. A characteristic code based upon double null coordinates was developed by Goldwirth and Piran in a study of cosmic censorship [132] based upon the spherically-symmetric gravitational collapse of a massless scalar field. Their early study lacked the sensitivity of adaptive mesh refinement (AMR), which later enabled Choptuik to discover the critical phenomena appearing in this problem. Subsequent work by Marsa and Choptuik [208] combined the use of the null-related ingoing Eddington-Finkelstein coordinates with Unruh’s strategy of singularity excision to construct a 1D code that “runs forever”. Later, Garfinkle [126] constructed an improved version of the Goldwirth-Piran double null code, which was able to simulate critical phenomena without using adaptive mesh refinement. In this treatment, as the evolution proceeds from one outgoing null cone to the next, the grid points follow the ingoing null cones and must be dropped as they cross the origin at r = 0. However, after half the grid points are lost they are then “recycled” at new positions midway between the remaining grid points. This technique is crucial for resolving the critical phenomena associated with an r → 0 size horizon. An extension of the code [127] was later used to verify that scalar field collapse in six dimensions continues to display critical phenomena.
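The recycling idea can be illustrated with a toy bookkeeping fragment. It is purely schematic, with arbitrary numbers of mine and no field evolution attached; the real schemes move points along ingoing null rays of the evolving metric:

```python
import numpy as np

r = np.linspace(0.0, 1.0, 17)      # radial positions of the grid points
original = len(r)
for step in range(60):
    r = r - 0.01                   # points ride ingoing rays toward r = 0
    r = r[r > 0.0]                 # drop points that have crossed the origin
    if len(r) <= original // 2:    # half the points lost: "recycle"
        mid = 0.5 * (r[:-1] + r[1:])           # midpoints of the survivors
        r = np.sort(np.concatenate([r, mid]))  # reinserted between them
print(len(r), r.min())             # resolution is maintained near r = 0
```

The point of the bookkeeping is that the innermost region never starves of grid points, which is what makes an r → 0 size horizon resolvable.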
Hamadé and Stewart [158] also applied a double null code to study critical phenomena. In order to obtain the accuracy necessary to confirm Choptuik’s results they developed the first example of a characteristic grid with AMR. They did this with both the standard Berger and Oliger algorithm and their own simplified version, with both versions giving indistinguishable results. Their simulations of critical collapse of a massless scalar field agreed with Choptuik’s values for the universal parameters governing mass scaling and displayed the echoing associated with discrete self-similarity. Hamadé, Horne, and Stewart [157] extended this study to the spherical collapse of an axion/dilaton system and found in this case that self-similarity was a continuous symmetry of the critical solution.
Brady, Chambers, and Gonçalves [64] used Garfinkle’s [126] double null algorithm to investigate the effect of a massive scalar field on critical phenomena. The introduction of a mass term in the scalar wave equation introduces a scale to the problem, which suggests that the critical point behavior might differ from the massless case. They found that there are two different regimes depending on the ratio of the Compton wavelength 1/m of the scalar mass to the radial size λ of the scalar pulse used to induce collapse. When λm ≪ 1, the critical solution is the one found by Choptuik in the m = 0 case, corresponding to a type II phase transition. However, when λm ≫ 1, the critical solution is an unstable soliton star (see [265]), corresponding to a type I phase transition where black-hole formation turns on at a finite mass.
A code based upon Bondi coordinates, developed by Husa and his collaborators [172], has been successfully applied to spherically-symmetric critical collapse of a nonlinear σ-model coupled to gravity. Critical phenomena cannot be resolved on a static grid based upon the Bondi r-coordinate. Instead, the numerical techniques of Garfinkle were adopted by using a dynamic grid following the ingoing null rays and by recycling radial grid points. They studied how coupling to gravity affects the critical behavior previously observed by Bizoń [60] and others in the Minkowski space version of the model. For a wide range of the coupling constant, they observe discrete self-similarity and typical mass scaling near the critical solution. The code is shown to be second-order accurate and to give second-order convergence for the value of the critical parameter.
The first characteristic code in Bondi coordinates for the self-gravitating scalar wave problem was constructed by Gómez and Winicour [144]. They introduced a numerical compactification of \({{\mathcal I}^ +}\) for the purpose of studying effects of self-gravity on the scalar radiation, particularly in the high amplitude limit of the rescaling Φ → aΦ. As a → ∞, the redshift creates an effective boundary layer at \({{\mathcal I}^ +}\), which causes the Bondi mass M_{B} and the scalar field monopole moment Q to be related by \({M_{\rm{B}}} \sim \pi \vert Q\vert/\sqrt 2\), rather than the quadratic relation of the weak field limit [144]. This could also be established analytically, so that the high amplitude limit provided a check on the code’s ability to handle strongly nonlinear fields. In the small amplitude case, this work incorrectly reported that the radiation tails from black-hole formation had an exponential decay characteristic of quasinormal modes, rather than the polynomial 1/t or 1/t^{2} falloff expected from Price’s [238] work on perturbations of Schwarzschild black holes. In hindsight, the error here was in not having the confidence to run the code sufficiently long to see the proper late-time behavior.
Gundlach, Price, and Pullin [152, 153] subsequently reexamined the issue of power-law tails using a double null code similar to that developed by Goldwirth and Piran. Their numerical simulations verified the existence of power-law tails in the full nonlinear case, thus establishing consistency with analytic perturbative theory. They also found normal mode ringing at intermediate time, which provided reassuring consistency with perturbation theory and showed that there is a region of spacetime where the results of linearized theory are remarkably reliable even though highly nonlinear behavior is taking place elsewhere. These results have led to a methodology that has application beyond the confines of spherically-symmetric problems, most notably in the “close approximation” for the binary black-hole problem [239]. Power-law tails and quasinormal ringing have also been confirmed using Cauchy evolution [208].
The study of the radiation tail decay of a scalar field was subsequently extended by Gómez, Schmidt, and Winicour [147] using a characteristic code. They showed that the Newman-Penrose constant [218] for the scalar field determines the exponent of the power law (and not the static monopole moment as often stated). When this constant is nonzero, the tail decays as 1/t on \({{\mathcal I}^ +}\), as opposed to the 1/t^{2} decay for the vanishing case. (They also found t^{−n} log t corrections, in addition to the exponentially-decaying contributions of the quasinormal modes.) This code was also used to study the instability of a topological kink in the configuration of the scalar field [29]. The kink instability provides the simplest example of the turning point instability [175, 276], which underlies gravitational collapse of static equilibria.
Brady and Smith [66] have demonstrated that characteristic evolution is especially well adapted to explore properties of Cauchy horizons. They examined the stability of the Reissner-Nordström Cauchy horizon using an Einstein-Klein-Gordon code based upon advanced Bondi coordinates (v, r) (where the hypersurfaces v = const are ingoing null hypersurfaces). They studied the effect of a spherically-symmetric scalar pulse on the spacetime structure as it propagates across the event horizon. Their numerical methods were patterned after the work of Goldwirth and Piran [132], with modifications of the radial grid structure that allow deep penetration inside the black hole. In accord with expectations from analytic studies, they found that the pulse first induces a weak null singularity on the Cauchy horizon, which then leads to a crushing spacelike singularity as r → 0. The null singularity is weak in the sense that an infalling observer experiences a finite tidal force, although the Newman-Penrose Weyl component Ψ_{2} diverges, a phenomenon known as mass inflation [233]. These results confirm the earlier result of Gnedin and Gnedin [131] that a central spacelike singularity would be created by the interaction of a charged black hole with a scalar field, in accord with a physical argument by Penrose [228] that a small perturbation undergoes an infinite redshift as it approaches the Cauchy horizon.
Burko [71] has confirmed and extended these results, using a code based upon double null coordinates, which was developed with Ori [72] in a study of tail decay. He found that in the early stages the perturbation of the Cauchy horizon is weak and in agreement with the behavior calculated by perturbation theory.
Brady, Chambers, Krivan, and Laguna [65] have found interesting effects of a nonzero cosmological constant Λ on tail decay by using a characteristic Einstein-Klein-Gordon code to study the effect of a massless scalar pulse on Schwarzschild-de Sitter and Reissner-Nordström-de Sitter spacetimes. First, by constructing a linearized scalar evolution code, they show that scalar test fields with ℓ ≠ 0 have exponentially decaying tails, in contrast to the standard power-law tails for asymptotically-flat spacetimes. Rather than decaying, the monopole mode asymptotes at late time to a constant, which scales linearly with Λ, in contrast to the standard no-hair result. This unusual behavior for the ℓ = 0 case was then independently confirmed with a nonlinear spherical characteristic code.
Using a combination of numerical and analytic techniques based upon null coordinates, Hod and Piran have made an extensive series of investigations of the spherically-symmetric charged Einstein-Klein-Gordon system dealing with the effect of charge on critical gravitational collapse [165] and the late time tail decay of a charged scalar field on a Reissner-Nordström black hole [166, 169, 167, 168]. These studies culminated in a full nonlinear investigation of horizon formation by the collapse of a charged massless scalar pulse [170]. They track the formation of an apparent horizon, which is followed by a weakly singular Cauchy horizon that then develops a strong spacelike singularity at r = 0. This is in complete accord with prior perturbative results and nonlinear simulations involving a preexisting black hole. Oren and Piran [219] increased the late time accuracy of this study by incorporating an adaptive grid for the retarded time coordinate u, with a refinement criterion to maintain Δr/r = const. The accuracy of this scheme is confirmed through convergence tests as well as charge and constraint conservation. They were able to observe the physical mechanism that prohibits black-hole formation with charge-to-mass ratio Q/M > 1: electrostatic repulsion of the outer parts of the scalar pulse increases relative to the gravitational attraction and causes the outer portion of the charge to disperse to larger radii before the black hole is formed. Inside the black hole, they confirm the formation of a weakly singular Cauchy horizon, which turns into a strong spacelike singularity, in accord with other studies.
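A grid maintaining Δr/r = const, as in the refinement criterion just quoted, amounts to a geometric progression in r; the following fragment (parameters arbitrary, not from [219]) checks this property:

```python
import numpy as np

# A grid with Delta r / r = const is geometric: r_{j+1} = (1 + q) * r_j.
q = 0.01                                   # the constant value of Delta r / r
r = 1.0e-3 * (1.0 + q) ** np.arange(400)   # geometric radial grid
ratios = np.diff(r) / r[:-1]
print(ratios.min(), ratios.max())          # both equal q up to rounding
```

Such geometric spacing concentrates resolution at small r, which is what maintains accuracy as the singular region is approached.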
Hod extended this combined numerical-analytical double null approach to investigate higher-order corrections to the dominant power-law tail [163], as well as corrections due to a general spherically-symmetric scattering potential [162] and due to a time-dependent potential [164]. He found (log t)/t modifications to the leading-order tail behavior for a Schwarzschild black hole, in accord with earlier results of Gómez et al. [147]. These modifications fall off at a slow rate, so that a very long numerical evolution (t ≈ 3000 M) is necessary to cleanly identify the leading-order power-law decay.
The foregoing numerical-analytical work based upon characteristic evolution has contributed to a very comprehensive classical treatment of spherically-symmetric gravitational collapse. Sorkin and Piran [275] have investigated the question of quantum corrections due to pair creation on the gravitational collapse of a charged scalar field. For observers outside the black hole, several analytic studies have indicated that such pair production can rapidly diminish the charge of the black hole. Sorkin and Piran apply the same double-null characteristic code used in studying the classical problem [170] to evolve across the event horizon and observe the quantum effects on the Cauchy horizon. The quantum electrodynamic effects are modeled in a rudimentary way by a nonlinear dielectric constant ϵ that limits the electric field to the critical value necessary for pair creation. The backreaction of the pairs on the stress-energy and the electric current is ignored. They found that quantum effects leave the classical picture of the Cauchy horizon qualitatively intact but that they shorten its “lifetime” by hastening the conversion of the weak null singularity into a strong spacelike singularity.
The Southampton group has constructed a {1 + 1}-dimensional characteristic code for spacetimes with cylindrical symmetry [90, 102]. The original motivation was to use it as the exterior characteristic code in a test case of CCM (see Section 5.5.1 for the application to matching). Subsequently, Sperhake, Sjödin, and Vickers [273, 277] modified the code into a global characteristic version for the purpose of studying cosmic strings, represented by massive scalar and vector fields coupled to gravity. Using a Geroch decomposition [128] with respect to the translational Killing vector, they reduced the global problem to a {2 + 1}-dimensional asymptotically-flat spacetime, so that \({{\mathcal I}^ +}\) could be compactified and included in the numerical grid. Rather than the explicit scheme used in CCM, the new version employs an implicit Crank-Nicolson evolution scheme, second order in space and time. The code showed long-term stability and second-order convergence in vacuum tests based upon exact Weber-Wheeler waves [301] and Xanthopoulos’ rotating solution [307], and in tests of wave scattering by a string. The results show damped ringing of the string after an incoming Weber-Wheeler pulse has excited it and then scattered to \({{\mathcal I}^ +}\). The ringing frequencies are independent of the details of the pulse but are inversely proportional to the masses of the scalar and vector fields.
Frittelli and Gómez [123] have cast the spherically-symmetric Einstein-Klein-Gordon problem in symmetric hyperbolic form, where in a Bondi-Sachs gauge the fundamental variables are the scalar field, lapse, and shift. The Bondi-Sachs gauge conditions relate the usual ADM variables (the 3-metric and extrinsic curvature) to the lapse and shift, which obey simpler evolution equations. The resulting Cauchy problem is well posed and the outer boundary condition is constraint preserving (although it is not addressed whether the resulting initial-boundary value problem is well posed, i.e., whether the boundary condition is dissipative). A numerical evolution algorithm based upon the system produces a stable simulation of a scalar pulse Φ scattering off a black hole. The initial data for the pulse satisfies ∂_{t}Φ = 0 so, as expected, it contains an ingoing part, which crosses the horizon, and an outgoing part, which leaves the grid at the outer boundary with a small amount of back reflection.
3.1.1 Cosmology on the past null cone
The standard approach to cosmology begins with a spacetime metric incorporating assumptions of approximate homogeneity and isotropy. An alternative approach, based upon observational data on the past light cone of Earth-based telescopes, was proposed in a seminal paper by Kristian and Sachs [194]. In that work, construction of the metric was based upon observational data, but the use of a series expansion restricted the approach to nearby regions. Their ideas provided the starting point for further developments. In particular, the observational cosmology program of Ellis et al. [105] exploited an earlier work of Temple [289] to extend the approach to larger redshift by using the natural observational coordinates based upon null geodesics propagating to the telescope.
In this approach, solving the Einstein equations in the context of observational cosmology poses two problems. First, astronomical observations are used to determine the metric on the past null cone of the observer. Second, these are used as the final data for a characteristic evolution into the past, which determines the cosmological history. A program to carry out this second step via numerical evolution has been initiated by Bishop and his collaborators [57, 299]. As a first step, they implemented a spherically-symmetric null code for the Einstein equations coupled with a pressure-free fluid (dust). The code was tested against solutions of the spherically-symmetric but inhomogeneous Lemaître-Tolman-Bondi model. The code is then used to compare the Lemaître-Tolman-Bondi model with the now standard Λ-cold-dark-matter model. Using the presently-observed characteristic data, it is shown that the past histories of these two models are distinctly different. The density of the Lemaître-Tolman-Bondi model rises more quickly into the past, indicating a universe which might be too young.
3.1.2 Adaptive mesh refinement
The goal of computing waveforms from relativistic binaries, such as a neutron star or stellar-mass black hole spiraling into a supermassive black hole, requires more than a stable convergent code. It is a delicate task to extract a waveform in a spacetime in which there are multiple length scales: the size of the supermassive black hole, the size of the star, and the wavelength of the radiation. It is commonly agreed that some form of mesh refinement is essential to attack this problem. Mesh refinement was first applied in characteristic evolution to solve specific spherically-symmetric problems regarding critical phenomena and singularity structure [126, 158, 71].
Pretorius and Lehner [237] have presented a general approach for applying AMR to a generic characteristic code. Although the method is designed to treat 3D simulations, the implementation has so far been restricted to the Einstein-Klein-Gordon system in spherical symmetry. The 3D approach is modeled after the Berger and Oliger AMR algorithm for hyperbolic Cauchy problems, which is reformulated in terms of null coordinates. The resulting characteristic AMR algorithm can be applied to any unigrid characteristic code and is amenable to parallelization. They applied it to the problem of a spherically-symmetric massive Klein-Gordon field propagating outward from a black hole. The nonzero rest mass restricts the Klein-Gordon field from propagating to infinity. Instead it diffuses into higher frequency components, which Pretorius and Lehner show can be resolved using AMR but not with a comparison unigrid code.
3.2 {2 + 1}-dimensional codes
One-dimensional characteristic codes enjoy a very special simplicity due to the two preferred sets (ingoing and outgoing) of characteristic null hypersurfaces. This eliminates a source of gauge freedom that otherwise exists in either two- or three-dimensional characteristic codes. However, the manner in which the characteristics of a hyperbolic system determine domains of dependence and lead to propagation equations for shock waves is the same as in the one-dimensional case. This makes it desirable for the purpose of numerical evolution to enforce propagation along characteristics as extensively as possible. In basing a Cauchy algorithm upon shooting along characteristics, the infinity of characteristic rays (technically, bicharacteristics) at each point leads to an arbitrariness, which, for a practical numerical scheme, makes it necessary either to average the propagation equations over the sphere of characteristic directions or to select out some preferred subset of propagation equations. The latter approach was successfully applied by Butler [73] to the Cauchy evolution of two-dimensional fluid flow, but there seems to have been very little follow-up along these lines. The closest resemblance is the use of Riemann solvers for high-resolution shock capturing in hydrodynamic codes (see Section 7.1).
The formal ideas behind the construction of two- or three-dimensional characteristic codes are similar, although there are various technical options for treating the angular coordinates, which label the null rays. Historically, most characteristic work graduated first from 1D to 2D because of the available computing power.
3.3 The Bondi problem
The first characteristic code based upon the original Bondi equations for a twist-free axisymmetric spacetime was constructed by Welling in his PhD thesis at Pittsburgh [176]. The spacetime was foliated by a family of null cones, complete with point vertices at which regularity conditions were imposed. The code accurately integrated the hypersurface and evolution equations out to compactified null infinity. This allowed studies of the Bondi mass and radiation flux on the initial null cone, but it could not be used as a practical evolution code because of instabilities.
These instabilities came as a rude shock and led to a retreat to the simpler problem of axisymmetric scalar waves propagating in Minkowski space, with the metric
\[ds^2 = - du^2 - 2\, du\, dr + r^2 \left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\]
in outgoing null cone coordinates. A null cone code for this problem was constructed using an algorithm based upon Equation (19), with the angular part of the flat-space Laplacian replacing the curvature terms in the integrand on the right-hand side. This simple setting allowed one source of instability to be traced to a subtle violation of the CFL condition near the vertices of the cones. In terms of the grid spacing Δx^{α}, the CFL condition in this coordinate system takes the explicit form
where the coefficient K, which is of order one, depends on the particular startup procedure adopted for the outward integration. Far from the vertex, the condition (21) on the time step Δu is quantitatively similar to the CFL condition for a standard Cauchy evolution algorithm in spherical coordinates. But condition (21) is strongest near the vertex of the cone where (at the equator θ = π/2) it implies that
This is in contrast to the analogous requirement
for stable Cauchy evolution near the origin of a spherical coordinate system. The extra power of Δθ is the price that must be paid near the vertex for the simplicity of a characteristic code. Nevertheless, the enforcement of this condition allowed efficient global simulation of axisymmetric scalar waves. Global studies of backscattering, radiative tail decay, and solitons were carried out for nonlinear axisymmetric waves [176], but three-dimensional simulations extending to the vertices of the cones were impractical at the time on existing machines.
Aware now of the subtleties of the CFL condition near the vertices, the Pittsburgh group returned to the Bondi problem, i.e., to evolve the Bondi metric [63]
\[ds^2 = -\left({V \over r} e^{2\beta} - U^2 r^2 e^{2\gamma}\right) du^2 - 2 e^{2\beta} du\, dr - 2 U r^2 e^{2\gamma} du\, d\theta + r^2 \left(e^{2\gamma} d\theta^2 + e^{-2\gamma} \sin^2\theta\, d\phi^2\right)\]
by means of the three hypersurface equations for β, U, and V and the evolution equation for γ.
The beauty of the Bondi equations is that they form a clean hierarchy. Given γ on an initial null hypersurface, the equations can be integrated radially to determine β, U, V, and γ_{u} on the hypersurface (in that order) in terms of integration constants determined by boundary conditions, or smoothness conditions if extended to the vertex of a null cone. The initial data is unconstrained except for smoothness conditions. Because γ represents an axisymmetric spin2 field, it must be \({\mathcal O}({\sin ^2}\theta)\) near the poles of the spherical coordinates and must consist of l ≥ 2 spin2 multipoles.
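The structure of this hierarchy can be sketched as a radial march. In the sketch below the right-hand-side functions are hypothetical placeholders (the actual Bondi hypersurface and evolution equations are nonlinear and considerably longer); only the ordering of the integrations mirrors the text:

```python
import numpy as np

# Schematic radial march illustrating the Bondi hierarchy: given gamma on a
# null hypersurface, integrate beta, U, V, then gamma_u outward in r, each
# equation using only quantities already determined.  The rhs_* functions
# are placeholders, NOT the actual Bondi equations.

def rhs_beta(r, gamma_r):            return 0.5 * r * gamma_r ** 2        # placeholder
def rhs_U(r, beta, gamma):           return beta * gamma / (1.0 + r ** 2) # placeholder
def rhs_V(r, beta, U, gamma):        return 1.0 + beta - U * gamma        # placeholder
def rhs_gamma_u(r, beta, U, V, g):   return (V * g - U * beta) / r        # placeholder

def radial_march(r, gamma):
    dr = np.diff(r)
    gamma_r = np.gradient(gamma, r)
    beta = np.zeros_like(r); U = np.zeros_like(r)
    V = np.zeros_like(r);    gamma_u = np.zeros_like(r)
    V[0] = r[0]          # integration constants fixed at the inner boundary
    for i in range(len(r) - 1):
        # each variable depends only on those integrated before it
        beta[i + 1] = beta[i] + dr[i] * rhs_beta(r[i], gamma_r[i])
        U[i + 1]    = U[i]    + dr[i] * rhs_U(r[i], beta[i], gamma[i])
        V[i + 1]    = V[i]    + dr[i] * rhs_V(r[i], beta[i], U[i], gamma[i])
        gamma_u[i + 1] = gamma_u[i] + dr[i] * rhs_gamma_u(
            r[i + 1], beta[i + 1], U[i + 1], V[i + 1], gamma[i + 1])
    return beta, U, V, gamma_u

r = np.linspace(1.0, 10.0, 200)
gamma = 1e-3 * np.sin(r) / r ** 2      # some smooth null datum
beta, U, V, gamma_u = radial_march(r, gamma)
```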
In the computational implementation of this system by the Pittsburgh group [142], the null hypersurfaces were chosen to be complete null cones with nonsingular vertices, which (for simplicity) trace out a geodesic worldline r = 0. The smoothness conditions at the vertices were formulated in local Minkowski coordinates.
The vertices of the cones were not the chief source of difficulty. A null parallelogram marching algorithm, similar to that used in the scalar case, gave rise to another instability that sprang up throughout the grid. In order to reveal the source of this instability, physical considerations suggested looking at the linearized version of the Bondi equations, where they can be related to the wave equation. If this relationship were sufficiently simple, then the scalar wave algorithm could be used as a guide in stabilizing the evolution of γ. A scheme for relating γ to solutions Φ of the wave equation had been formulated in the original paper by Bondi, van der Burg, and Metzner [63]. However, in that scheme, the relationship of the scalar wave to γ was nonlocal in the angular directions and was not useful for the stability analysis.
A local relationship between γ and solutions of the wave equation was found [142]. This provided a test bed for the null evolution algorithm, similar to the Cauchy test bed provided by Teukolsky waves [291]. More critically, it allowed a simple von Neumann linear stability analysis of the finite-difference equations, which revealed that the evolution would be unstable if the metric quantity U was evaluated on the grid. For a stable algorithm, the grid points for U must be staggered between the grid points for γ, β, and V. This unexpected feature emphasizes the value of linear stability analysis in formulating stable finite-difference approximations.
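The kind of von Neumann analysis that exposed this instability can be illustrated on the model advection equation \(u_t + c\,u_x = 0\): substituting a Fourier mode \(u_j^n = g^n e^{ikj\Delta x}\) into a finite-difference scheme yields an amplification factor g(k), and stability requires |g| ≤ 1 for every mode. A minimal sketch comparing an unstable and a stable scheme:

```python
import numpy as np

# Von Neumann analysis of two schemes for u_t + c u_x = 0 (c > 0):
# forward-time/centered-space (FTCS) vs first-order upwind.
# Stability requires |g(k)| <= 1 for every Fourier mode.

def g_ftcs(theta, lam):
    # FTCS: g = 1 - i*lam*sin(theta), with theta = k*dx and lam = c*dt/dx
    return 1.0 - 1j * lam * np.sin(theta)

def g_upwind(theta, lam):
    # upwind: g = 1 - lam*(1 - exp(-i*theta))
    return 1.0 - lam * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 721)
lam = 0.8    # Courant number satisfying the CFL bound lam <= 1

print(np.abs(g_ftcs(theta, lam)).max() > 1.0)             # True: FTCS unstable
print(np.abs(g_upwind(theta, lam)).max() <= 1.0 + 1e-12)  # True: upwind stable
```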
It led to an axisymmetric code [221, 142] for the global Bondi problem, which ran stably, subject to a CFL condition, throughout the regime in which caustics and horizons did not form. Stability in this regime was verified experimentally by running arbitrary initial data until it radiated away to \({{\mathcal I}^ +}\). Also, new exact solutions, as well as the linearized null solutions, were used to perform extensive convergence tests that established second-order accuracy. The code generated a large complement of highly accurate numerical solutions for the class of asymptotically-flat axisymmetric vacuum spacetimes, a class for which no analytic solutions are known. All results of numerical evolutions in this regime were consistent with the theorem of Christodoulou and Klainerman [88] that weak initial data evolve asymptotically to Minkowski space at late time.
An additional global check on accuracy was performed using Bondi’s formula relating mass loss to the time integral of the square of the news function. The Bondi mass-loss formula is not one of the equations used in the evolution algorithm but follows from those equations as a consequence of a global integration of the Bianchi identities. Thus, it not only furnishes a valuable tool for physical interpretation but also provides a very important calibration of numerical accuracy and consistency.
An interesting feature of the evolution arises in regard to compactification. By construction, the u-direction is timelike at the origin, where it coincides with the worldline traced out by the vertices of the outgoing null cones. But even for weak fields, the u-direction generically becomes spacelike at large distances along an outgoing ray. Geometrically, this reflects the property that \({{\mathcal I}^ +}\) is itself a null hypersurface, so that all internal directions are spacelike, except for the null generator. For flat spacetime, the u-direction picked out at the origin leads to a null evolution direction at \({{\mathcal I}^ +}\), but this direction becomes spacelike under a slight deviation from spherical symmetry. Thus, the evolution generically becomes “superluminal” near \({{\mathcal I}^ +}\). Remarkably, this leads to no adverse numerical effects. This robustness apparently arises from the natural way that causality is built into the marching algorithm, so that no additional resort to numerical techniques, such as “causal differencing” [91], is necessary.
3.3.1 The conformal-null tetrad approach
Stewart has implemented a characteristic evolution code, which handles the Bondi problem by a null tetrad, as opposed to metric, formalism [281]. The geometrical algorithm underlying the evolution scheme, as outlined in [283, 120], is Friedrich’s [114] conformal-null description of a compactified spacetime in terms of a first-order system of partial differential equations. The variables include the metric, the connection, and the curvature, as in a Newman–Penrose formalism, but, in addition, the conformal factor (necessary for compactification of \({\mathcal I}\)) and its gradient. Without assuming any symmetry, there are more than seven times as many variables as in a metric-based null scheme, and the corresponding equations do not decompose into as clean a hierarchy. This disadvantage, compared to the metric approach, is balanced by several advantages:

- The equations form a symmetric hyperbolic system so that standard theorems can be used to establish that the system is well-posed.
- Standard evolution algorithms can be invoked to ensure numerical stability.
- The extra variables associated with the curvature tensor are not completely excess baggage, since they supply essential physical information.
- The regularization necessary to treat \({{\mathcal I}^ +}\) is built in as part of the formalism so that no special numerical regularization techniques are necessary, as in the metric case. (This last advantage is somewhat offset by the necessity of having to locate \({\mathcal I}\) by tracking the zeroes of the conformal factor.)
The code was intended to study gravitational waves from an axisymmetric star. Since only the vacuum equations are evolved, the outgoing radiation from the star is represented by data (Ψ_{4} in Newman–Penrose notation) on an ingoing null cone forming the inner boundary of the evolved domain. This inner boundary data is supplemented by Schwarzschild data on the initial outgoing null cone, which models an initially quiescent state of the star. This provides the necessary data for a double-null initial-value problem. The evolution would normally break down where the ingoing null hypersurface develops caustics. But by choosing a scenario in which a black hole is formed, it is possible to evolve the entire region exterior to the horizon. An obvious test bed is the Schwarzschild spacetime, for which a numerically satisfactory evolution was achieved (although convergence tests were not reported).
Physically interesting results were obtained by choosing data corresponding to an outgoing quadrupole pulse of radiation. By increasing the initial amplitude of the data Ψ_{4}, it was possible to evolve into a regime where the energy loss due to radiation was large enough to drive the total Bondi mass negative. Although such data is too grossly exaggerated to be consistent with an astrophysically realistic source, the formation of a negative mass was an impressive test of the robustness of the code.
3.3.2 Axisymmetric mode coupling
Papadopoulos [222] has carried out an illuminating study of mode mixing by computing the evolution of a pulse emanating outward from an initially Schwarzschild white hole of mass M. The evolution proceeds along a family of ingoing null hypersurfaces with outer boundary at r = 60 M. The evolution is stopped before the pulse hits the outer boundary, in order to avoid spurious effects from reflection, and the radiation is inferred from data at r = 20 M. Although gauge ambiguities arise in reading off the waveform at a finite radius, the work reveals interesting nonlinear effects: (i) modification of the light cone structure governing the principal part of the equations, and hence the propagation of signals; (ii) modulation of the Schwarzschild potential by the introduction of an angle-dependent “mass aspect”; and (iii) quadratic and higher-order terms in the evolution equations, which couple the spherical harmonic modes. A compactified version of this study [312] was later carried out with the 3D PITT code, which confirmed these effects, as well as new effects not present in the axisymmetric case (see Section 4.5 for details).
3.3.3 Spectral approach to the Bondi problem
Oliveira and Rodrigues [94] have taken the first step in developing a code based upon the Galerkin spectral method to evolve the axisymmetric Bondi problem. The strength of spectral methods is their high accuracy relative to computational effort. The spectral decomposition reduces the partial differential evolution system to a system of ordinary differential equations for the spectral coefficients. Several numerical tests were performed to verify stability and convergence, including linearized gravitational waves and the global energy-momentum conservation law relating the Bondi mass to the radiated energy flux. The main feature of the Galerkin method is that each basis function is chosen to automatically satisfy the boundary conditions, in this case the regularity conditions on the Bondi variables on the axis of symmetry and at the vertices of the outgoing null cones, together with the asymptotic flatness condition at infinity. Although \({{\mathcal I}^ +}\) is not explicitly compactified, the choice of radial basis functions allows verification of the asymptotic relations governing the coefficients of the leading gauge-dependent terms of the metric quantities.
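The defining feature of the Galerkin method, basis functions that satisfy the boundary conditions by construction, can be sketched on a toy 1D problem (not the Bondi system itself; the source term and tolerances below are illustrative):

```python
import numpy as np

# Galerkin sketch on a model problem u''(x) = f(x), u(0) = u(1) = 0.
# Each basis function sin(n pi x) satisfies the boundary conditions by
# construction, so the differential equation reduces to algebraic relations
# for the spectral coefficients.

def trapezoid(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

N = 20
n = np.arange(1, N + 1)
x = np.linspace(0.0, 1.0, 2001)
f = np.exp(x)                                   # smooth source term

# Project the source onto the basis: f_n = 2 \int_0^1 f(x) sin(n pi x) dx
f_n = np.array([2.0 * trapezoid(f * np.sin(k * np.pi * x), x) for k in n])
# Substituting u = sum u_n sin(n pi x) into u'' = f gives -(n pi)^2 u_n = f_n
u_n = -f_n / (n * np.pi) ** 2
u = np.sin(np.pi * np.outer(x, n)) @ u_n

# Closed-form solution for comparison: u = e^x - 1 - (e - 1) x
u_exact = np.exp(x) - 1.0 - (np.e - 1.0) * x
print(np.abs(u - u_exact).max() < 1e-3)         # True: rapid convergence in N
```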
It will be interesting to see if the approach can be applied to highly nonlinear problems and generalized to the full 3D case. There has been little other effort in applying spectral methods to characteristic evolution, although the approach offers a distinct advantage in handling the vertices of the null cones.
3.3.4 Twisting axisymmetry
The Southampton group, as part of its goal of combining Cauchy and characteristic evolution, has developed a code [99, 100, 234], which extends the Bondi problem to full axisymmetry, as described by the general characteristic formalism of Sachs [258]. By dropping the requirement that the rotational Killing vector be twist-free, they were able to include rotational effects, including radiation in the “cross” polarization mode (only the “plus” mode is allowed by twist-free axisymmetry). The null equations and variables were recast into a suitably regularized form to allow compactification of null infinity. Regularization at the vertices or caustics of the null hypersurfaces was not necessary, since they anticipated matching to an interior Cauchy evolution across a finite worldtube.
The code was designed to ensure standard Bondi coordinate conditions at infinity, so that the metric has the asymptotically Minkowskian form corresponding to null spherical coordinates. In order to achieve this, the hypersurface equation for the Bondi metric variable β must be integrated radially inward from infinity, where the integration constant is specified. The evolution of the dynamical variables proceeds radially outward, as dictated by causality [234]. This differs from the Pittsburgh code, in which all the equations are integrated radially outward, so that the coordinate conditions are determined at the inner boundary and the metric is asymptotically flat but not asymptotically Minkowskian. The Southampton scheme simplifies the formulae for the Bondi news function and mass in terms of the metric. It is anticipated that the inward integration of β causes no numerical problems, because this is a gauge choice that does not propagate physical information. However, the code has not yet been subjected to convergence and long-term stability tests, so these issues cannot be properly assessed at the present time.
The matching of the Southampton axisymmetric code to a Cauchy interior is discussed in Section 5.6.
3.4 The Bondi mass
Numerical calculations of asymptotic quantities such as the Bondi mass must pick off nonleading terms in an asymptotic expansion about infinity. This is similar to the experimental task of determining the mass of an object by measuring its far field. For example, in an asymptotically inertial Bondi frame at \({{\mathcal I}^ +}\) (in which the metric takes an asymptotically Minkowski form in null spherical coordinates), the mass aspect \({\mathcal M}(u,\theta ,\phi)\) is picked off from the asymptotic expansion of Bondi’s metric quantity V (see Equation (27)) of the form \(V = r - 2{\mathcal M} + {\mathcal O}(1/r)\). In gauges that incorporate some of the properties of an asymptotically inertial frame, such as the null quasi-spherical gauge [36], in which the angular metric is conformal to the unit-sphere metric, this can be a straightforward computational problem. However, the job can be more difficult if the gauge does not correspond to a standard Bondi frame at \({{\mathcal I}^ +}\). One must then deal with an arbitrary coordinatization of \({{\mathcal I}^ +}\), which is determined by the details of the interior geometry. As a result, V has a more complicated asymptotic behavior, given in the axisymmetric case by
where L, H, and K are gauge-dependent functions of (u, θ), which would vanish in an inertial Bondi frame [288, 176]. The calculation of the Bondi mass requires regularization of this expression by numerical techniques so that the coefficient \({\mathcal M}\) can be picked off. The task is now similar to the experimental determination of the mass of an object by using non-inertial instruments in a far zone, which contains \({\mathcal O}(1/r)\) radiation fields. But it has been done!
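The pick-off procedure can be illustrated with synthetic data: sample V at several large radii, fit against powers of 1/r, and read off the constant term (the coefficients below are invented test values, not output of any evolution code):

```python
import numpy as np

# Picking off the mass aspect from the far field (schematic): in an inertial
# Bondi frame V = r - 2M + O(1/r), so fitting V - r against powers of 1/r
# yields -2M as the constant term.

M_true, c1 = 1.5, 0.7                  # invented test values
r = np.linspace(50.0, 400.0, 30)
V = r - 2.0 * M_true + c1 / r          # mock asymptotic behavior

# least-squares fit of (V - r) = a0 + a1/r + a2/r^2
A = np.vstack([np.ones_like(r), 1.0 / r, 1.0 / r ** 2]).T
coef, *_ = np.linalg.lstsq(A, V - r, rcond=None)
M_est = -0.5 * coef[0]
print(abs(M_est - M_true) < 1e-8)      # True: the constant term gives -2M
```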
It was accomplished in Stewart’s code by re-expressing the formula for the Bondi mass in terms of the well-behaved fields of the conformal formalism [281]. In the Pittsburgh code, it was accomplished by re-expressing the Bondi mass in terms of renormalized metric variables, which regularize all calculations at \({{\mathcal I}^ +}\) and make them second-order accurate in grid size [143]. The calculation of the Bondi news function (which provides the waveforms of both polarization modes) is an easier numerical task than the Bondi mass. It has also been implemented in both of these codes, thus allowing the important check of the Bondi mass-loss formula.
An alternative approach to computing the Bondi mass is to adopt a gauge that corresponds more closely to an inertial Bondi frame at \({{\mathcal I}^ +}\) and simplifies the asymptotic limit. Such a choice is the null quasi-spherical gauge, in which the angular part of the metric is proportional to the unit-sphere metric, and as a result the gauge term K vanishes in Equation (29). This gauge was adopted by Bartnik and Norton at Canberra in their development of a 3D characteristic evolution code [36] (see Section 4 for further discussion). It allowed accurate computation of the Bondi mass as a limit as r → ∞ of the Hawking mass [33].
Mainstream astrophysics is couched in Newtonian concepts, some of which have no well-defined extension to general relativity. In order to provide a sound basis for relativistic astrophysics, it is crucial to develop general-relativistic concepts, which have well-defined and useful Newtonian limits. Mass and radiation flux are fundamental in this regard. The results of characteristic codes show that the energy of a radiating system can be evaluated rigorously and accurately according to the rules for asymptotically-flat spacetimes, while avoiding the deficiencies that plagued the “pre-numerical” era of relativity: (i) the use of coordinate-dependent concepts, such as gravitational energy-momentum pseudotensors; (ii) a rather loose notion of asymptotic flatness, particularly for radiative spacetimes; (iii) the appearance of divergent integrals; and (iv) the use of approximation formalisms, such as weak-field or slow-motion expansions, whose errors have not been rigorously estimated.
Characteristic codes have extended the role of the Bondi mass from that of a geometrical construct in the theory of isolated systems to that of a highly accurate computational tool. The Bondi mass-loss formula provides an important global check on the preservation of the Bianchi identities. The mass-loss rates themselves have important astrophysical significance. The numerical results demonstrate that computational approaches, rigorously based upon the geometrical definition of mass in general relativity, can be used to calculate radiation losses in highly nonlinear processes, where perturbation calculations would not be meaningful.
Numerical calculation of the Bondi mass has been used to explore both the Newtonian and the strong-field limits of general relativity [143]. For a quasi-Newtonian system of radiating dust, the numerical calculation joins smoothly onto a post-Newtonian expansion of the energy in powers of 1/c, beginning with the Newtonian mass and mechanical energy as the leading terms. This comparison with perturbation theory has been carried out to \({\mathcal O}(1/{c^7})\), at which stage the computed Bondi mass peels away from the post-Newtonian expansion. It remains strictly positive, in contrast to the truncated post-Newtonian behavior, which leads to negative values.
A subtle feature of the Bondi mass stems from its role as one component of the total energy-momentum 4-vector, whose calculation requires identification of the translation subgroup of the Bondi–Metzner–Sachs group [257]. This introduces boost freedom into the problem. Identifying the translation subgroup is tantamount to knowing the conformal transformation to an inertial Bondi frame [288] in which the time slices of \({{\mathcal I}^ +}\) have unit-sphere geometry. Both Stewart’s code and the Pittsburgh code adapt the coordinates to simplify the description of the interior sources. This results in a nonstandard foliation of \({{\mathcal I}^ +}\). The determination of the conformal factor, which relates the 2-metric h_{ab} of a slice of \({{\mathcal I}^ +}\) to the unit-sphere metric, is an elliptic problem equivalent to solving the second-order partial differential equation for the conformal transformation of Gaussian curvature. In the axisymmetric case, the PDE reduces to an ODE with respect to the angle θ, which is straightforward to solve [143]. The integration constants determine the boost freedom along the axis of symmetry.
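For orientation, the two-dimensional conformal transformation law underlying this elliptic problem can be written out; this is a standard differential-geometric identity, included here for illustration rather than quoted from [143]:
\[{\tilde h_{ab}} = {\omega ^2}{h_{ab}}\quad \Rightarrow \quad \tilde K = {\omega ^{- 2}}\left({K - {\Delta _h}\ln \omega} \right),\]
so that requiring unit-sphere geometry, \(\tilde K = 1\), yields the second-order elliptic equation \({\Delta _h}\ln \omega = K - {\omega ^2}\) for the conformal factor ω.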
The non-axisymmetric case is more complicated. Stewart [281] proposes an approach based upon the dyad decomposition \({h_{AB}} = 2{m_{(A}}{{\bar m}_{B)}}.\)
The desired conformal transformation is obtained by first relating h_{AB} conformally to the flat metric of the complex plane. Denoting the complex coordinate of the plane by ζ, this relationship can be expressed as dζ = e^{f}m_{A}dx^{A}. The conformal factor f can then be determined from the integrability condition
This is equivalent to the classic Beltrami equation for finding isothermal coordinates. It would appear to be a more effective scheme than tackling the second-order PDE directly, but numerical implementation has not yet been carried out.
4 3D Characteristic Evolution
The initial work on 3D characteristic evolution led to two independent codes, one developed at Canberra and the other at Pittsburgh (the PITT code), both with the capability to study gravitational waves in single black-hole spacetimes at a level not yet mastered at the time by Cauchy codes. The Pittsburgh group established robust stability and second-order accuracy of a fully nonlinear code, which was able to calculate the waveform at null infinity [56, 53] and to track a dynamical black hole and excise its internal singularity from the computational grid [141, 139]. The Canberra group implemented an independent nonlinear code, which accurately evolved the exterior region of a Schwarzschild black hole. Both codes pose data on an initial null hypersurface and on a worldtube boundary, and evolve the exterior spacetime out to a compactified version of null infinity, where the waveform is computed. However, there are essential differences in the underlying geometrical formalisms and numerical techniques used in the two codes, and in their success in evolving generic black-hole spacetimes. More recently, two new codes have evolved from the PITT code by introducing new choices of spherical coordinates [134, 242].
4.1 Coordinatization of the sphere
Any characteristic code extending to \({{\mathcal I}^ +}\) requires the ability to handle tensor fields and their derivatives on the sphere. Spherical coordinates and spherical harmonics are natural analytic tools for the description of radiation, but their implementation in computational work requires dealing with the impossibility of smoothly covering the sphere with a single coordinate grid. Polar-coordinate singularities in axisymmetric systems can be regularized by standard tricks. In the absence of symmetry, these techniques do not generalize and would be especially prohibitive to develop for tensor fields. Because of the natural use of null-spherical coordinates, characteristic evolution differs in this respect from Cauchy evolution, where spherical harmonics can be properly described in a Cartesian coordinate system.
The development of grids smoothly covering the sphere has had a long history in computational meteorology, which has led to two distinct approaches: (i) the stereographic approach, in which the sphere is covered by two overlapping patches obtained by stereographic projection about the North and South poles [68]; and (ii) the cubed-sphere approach, in which the sphere is covered by the 6 patches obtained by a projection of the faces of a circumscribed cube [253]. A discussion of the advantages of each of these methods and a comparison of their performance in a standard fluid testbed are given in [68]. In numerical relativity, the stereographic method has been reinvented in the context of the characteristic evolution problem [217]; and the cubed-sphere method has been reinvented in building an apparent horizon finder [295]. The cubed-sphere module, including the interpatch transformations, has been integrated into the Cactus toolkit [292] and applied to black-hole excision and numerous other problems in numerical relativity [294, 200, 264, 101, 96, 226, 310, 183, 286, 182]. Perhaps the most ingenious treatment of the sphere, based upon a toroidal map, was devised by the Canberra group in building their characteristic code [36]. These methods are described below.
4.1.1 Stereographic grids
Motivated by problems in meteorology, Browning, Hack, and Swartztrauber [68] developed the first finite-difference scheme based upon a composite mesh with two overlapping stereographic coordinate patches, each having a circular boundary centered about the North or South pole. Values for quantities required at ghost points beyond the boundary of one of the patches were interpolated from values in the other patch. Because a circular boundary does not fit regularly on a stereographic grid, dissipation was found necessary to remove the short-wavelength error resulting from the interpatch interpolations. They used the shallow-water equations as a testbed to compare their approach to existing spectral approaches in terms of computer time, execution rate, and accuracy. Such comparisons of different numerical methods can be difficult. Both the finite-difference and spectral approaches gave good results and were competitive in terms of overall operation count and memory requirements. For the particular initial data sets tested, the spectral approach had an advantage, but not enough to give a clear indication of the suitability of one method over another. The spectral method with M modes requires O(M^{3}) operations per time step, compared with O(N^{2}) for a finite-difference method on an N × N grid. However, assuming that the solution is analytic, the accuracy of the spectral method is O(e^{−M}), compared to, say, O(N^{−6}) for a sixth-order finite-difference method. (For smooth C^{∞} solutions, the spectral convergence rate is still faster than any power law.) Hence, for comparable accuracy, M = O(ln N), which implies that the operation counts for the spectral and finite-difference methods would be O[(ln N)^{3}] and O(N^{2}), respectively. Thus, for sufficiently high accuracy, i.e., large N, the spectral method requires fewer operations. The choice between spectral and finite-difference methods therefore depends on the smoothness of the physical problem being addressed and the accuracy desired.
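The model operation counts quoted above can be compared directly (constants of order unity are ignored, so the crossover value of N is only indicative):

```python
import numpy as np

# Comparing the model operation counts: equating the accuracies,
# e^{-M} = N^{-6}  =>  M = 6 ln N, and then comparing the spectral cost
# M^3 with the finite-difference cost N^2 exhibits the crossover in favor
# of the spectral method at large N.

def ops_spectral(N):
    M = 6.0 * np.log(N)      # modes needed to match sixth-order FD accuracy
    return M ** 3

def ops_fd(N):
    return float(N) ** 2

for N in (10, 100, 1000):
    print(N, ops_spectral(N) < ops_fd(N))   # spectral wins only at large N
```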
The PITT null code was first developed using two stereographic patches with square boundaries, each overlapping the equator. This has recently been modified, based upon the approach advocated in [68], which retains the original stereographic coordinates but shrinks the overlap region by masking a circular boundary near the equator. The original square boundaries aligned with the grid and did not require numerical dissipation. However, the corners of the square boundary, besides being computationally wasteful, were a prime source of inaccuracy. The resolution at the corners is only 1/9th that at the poles, due to the stretching of the stereographic map. Near the equator, the resolution is approximately 1/2 that at the poles. The use of a circular boundary requires an angular version of numerical dissipation to control the resulting high-frequency error (see Section 4.2.2).
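The stereographic construction itself is simple to state; in one common convention (phase conventions differ between references), the two patch coordinates and their overlap relation are:

```python
import numpy as np

# Two stereographic patches covering the sphere: the north-patch coordinate
# zeta_N = tan(theta/2) e^{i phi} is regular everywhere except the south pole,
# the south-patch coordinate zeta_S = cot(theta/2) e^{-i phi} everywhere
# except the north pole, and on the overlap zeta_S = 1/zeta_N.

def zeta_north(theta, phi):
    return np.tan(theta / 2.0) * np.exp(1j * phi)

def zeta_south(theta, phi):
    return 1.0 / np.tan(theta / 2.0) * np.exp(-1j * phi)

# Verify the interpatch relation on a band around the equator, where the
# circular masks of the two patches overlap.
theta = np.linspace(np.pi / 3, 2 * np.pi / 3, 7)
phi = np.linspace(0.0, 2 * np.pi, 9, endpoint=False)
T, P = np.meshgrid(theta, phi)
zN, zS = zeta_north(T, P), zeta_south(T, P)
print(np.allclose(zS, 1.0 / zN))   # True
```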
A crucial ingredient of the PITT code is the ð-module [140], which incorporates a computational version of the Newman–Penrose eth-formalism [217]. The underlying method can be applied to any smooth coordinatization x^{A} of the sphere based upon several patches. The unit-sphere metric q_{AB}, defined by these coordinates, is decomposed in each patch in terms of a complex basis vector q_{A},
Vector and tensor fields on the sphere, and their covariant derivatives, are then represented by their basis components. For example, the vector field U^{A} is represented by the complex spin-weight-1 field U = U^{A}q_{A}. The covariant derivative D_{A} associated with q_{AB} is then expressed in terms of the ð operator according to
The eth-calculus simplifies the underlying equations, avoids spurious coordinate singularities, and allows accurate differentiation of tensor fields on the sphere in a computationally efficient and clean way. Its main weakness is the numerical noise introduced by interpolations between the patches.
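For orientation, the action of ð can be sketched numerically in one common spherical-coordinate convention (signs and phase conventions differ between references); the identity \(\bar\eth\eth Y_{l0} = -l(l+1)Y_{l0}\) provides a check:

```python
import numpy as np

# Numerical sketch of the eth operator on axisymmetric (m = 0) fields, using
# the spherical-coordinate expressions (one common convention):
#   eth  eta = -(sin t)^{ s} (d/dt + i csc t d/dphi) [(sin t)^{-s} eta]
#   etab eta = -(sin t)^{-s} (d/dt - i csc t d/dphi) [(sin t)^{ s} eta]
# For m = 0 the phi-derivatives drop out.

theta = np.linspace(0.2, np.pi - 0.2, 2001)   # stay away from the poles

def eth(eta, s):
    return -np.sin(theta) ** s * np.gradient(np.sin(theta) ** (-s) * eta, theta)

def eth_bar(eta, s):
    return -np.sin(theta) ** (-s) * np.gradient(np.sin(theta) ** s * eta, theta)

# Check  etab eth Y_{l0} = -l(l+1) Y_{l0}  for l = 1, with Y ~ cos(theta)
Y = np.cos(theta)
lhs = eth_bar(eth(Y, s=0), s=1)    # eth raises the spin weight from 0 to 1
interior = slice(5, -5)            # np.gradient is less accurate at the ends
print(np.allclose(lhs[interior], -2.0 * Y[interior], atol=1e-3))   # True
```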
4.1.2 Cubed sphere grids
Ronchi, Iacono, and Paolucci [253] developed the “cubed-sphere” approach as a new gridding method for solving global meteorological problems. The method decomposes the sphere into the six identical regions obtained by projection of a cube circumscribed on its surface. This gives a variation of the composite mesh method, in which the six domains butt up against each other along shared grid boundaries. As a result, depending upon the implementation, either no intergrid interpolations or only one-dimensional interpolations are necessary (as opposed to the two-dimensional interpolations necessary for a stereographic grid), which results in enhanced accuracy. See [261] for a review of abutting grid techniques in numerical relativity. The symmetry of the scheme, in which the six patches have the same geometric structure and grid, also allows efficient use of parallel computer architectures. Their tests of the cubed-sphere method, based upon the simulation of shallow-water waves in spherical geometry, show that the numerical solutions are as accurate as those with spectral methods, with substantial savings in execution time. Recently, the cubed-sphere method has also been developed for application to characteristic evolution in numerical relativity [242, 134]. The eth-calculus is used to treat tensor fields on the sphere in the same way as in the stereographic method, except the interpatch transformations now involve six, rather than two, sets of basis vectors.
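The projection underlying the cubed-sphere decomposition is easy to exhibit; the sketch below shows the standard gnomonic construction for one face (the function name is invented for illustration):

```python
import numpy as np

# Gnomonic "cubed-sphere" mapping (schematic): points (a, b) in [-1, 1]^2 on
# each face of the circumscribed cube are projected radially onto the unit
# sphere.  Shown for the +z face; the other five faces are rotations of it.

def face_pz(a, b):
    d = np.sqrt(1.0 + a ** 2 + b ** 2)
    return np.stack([a / d, b / d, 1.0 / d], axis=-1)

a, b = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = face_pz(a, b)
print(np.allclose(np.linalg.norm(pts, axis=-1), 1.0))   # all points on the sphere
```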
4.1.3 Toroidal grids
The Canberra group treats fields on the sphere by taking advantage of the existence of a smooth map from the torus to the sphere [36]. The pullback of this map allows functions on the sphere to be expressed in terms of toroidal coordinates. The intrinsic topology of these toroidal coordinates allows them to take advantage of fast Fourier transforms to implement a highly efficient pseudospectral treatment. This ingenious method has apparently not yet been adopted in other fields.
4.2 Geometrical formalism
The PITT code uses a standard Bondi–Sachs null coordinate system,
\[ds^2 = -\left(e^{2\beta}{V \over r} - r^2 h_{AB} U^A U^B\right) du^2 - 2 e^{2\beta} du\, dr - 2 r^2 h_{AB} U^B du\, dx^A + r^2 h_{AB}\, dx^A dx^B,\]
where \(\det ({h_{AB}}) = \det ({q_{AB}})\)
for some standard choice q_{AB} of the unit-sphere metric. This generalizes Equation (24) to the three-dimensional case. The characteristic version of Einstein’s equations consists of four hypersurface equations, which do not contain time derivatives, two evolution equations, and four supplementary conditions, which need only hold on a worldtube. The hypersurface equations derive from the \({G_\mu}^\nu {\nabla _\nu}u\) components of the Einstein tensor. They take the explicit form [302]
where D_{A} is the covariant derivative and \({\mathcal R}\) the curvature scalar of the conformal 2-metric h_{AB} of the r = const. surfaces, and capital Latin indices are raised and lowered with h_{AB}. Given the null data h_{AB} on an outgoing null hypersurface, this hierarchy of equations can be integrated radially in order to determine β, U^{A} and V on the hypersurface in terms of integration constants on an inner boundary. The evolution equations for the u-derivative of the null data derive from the trace-free part of the angular components of the Einstein tensor, i.e., the components m^{A}m^{B}G_{AB}, where \({h^{AB}} = 2{m^{(A}}{{\bar m}^{B)}}\). They take the explicit form
A compactified treatment of null infinity is achieved by introducing the radial coordinate x = r/(R + r), where R is a scale parameter adjusted to the size of the inner boundary. Thus, x = 1 at \({{\mathcal I}^ +}\).
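As a sketch of how such a compactification behaves (the function names are illustrative, not from the PITT code), the map x = r/(R + r) and its inverse can be written as:

```python
def compactify(r, R):
    """Map the radial coordinate r in [0, infinity) to x = r/(R + r) in
    [0, 1), so that null infinity corresponds to the finite grid value
    x = 1."""
    return r / (R + r)

def decompactify(x, R):
    """Inverse map: r = R*x / (1 - x), valid for 0 <= x < 1."""
    return R * x / (1.0 - x)

# The scale parameter R sets where half the grid is used up: x(R) = 1/2,
# so R is adjusted to the size of the inner boundary as described above.
R = 2.0
print(compactify(R, R))        # 0.5
print(decompactify(0.5, R))    # 2.0
print(compactify(1.0e12, R))   # approaches 1 as r -> infinity
```

A uniform grid in x therefore covers the entire exterior region out to null infinity with a finite number of points, at the cost of decreasing resolution in r at large radius.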
The Canberra code employs a null quasi-spherical (NQS) gauge (not to be confused with the quasi-spherical approximation in which quadratically-aspherical terms are ignored [56]). The NQS gauge takes advantage of the possibility of mapping the angular part of the Bondi metric conformally onto a unit-sphere metric, so that h_{AB} → q_{AB}. The required transformation x^{A} → y^{A}(u, r, x^{A}) is in general dependent upon u and r so that the NQS angular coordinates y^{A} are not constant along the outgoing null rays, unlike the Bondi-Sachs angular coordinates. Instead the coordinates y^{A} display the analogue of a shift on the null hypersurfaces u = const. In addition, the NQS spheres (u, r) = const. are not the same as the Bondi spheres. The radiation content of the metric is contained in a shear vector describing this shift. This results in the description of the radiation in terms of a spin-weight 1 field, rather than the spin-weight 2 field associated with h_{AB} in the Bondi-Sachs formalism. In both the Bondi-Sachs and NQS gauges, the independent gravitational data on a null hypersurface is the conformal part of its degenerate 3-metric. The Bondi-Sachs null data consist of h_{AB}, which determines the intrinsic conformal metric of the null hypersurface. In the NQS case, h_{AB} = q_{AB} and the shear vector comprises the only nontrivial part of the conformal 3-metric. Both the Bondi-Sachs and NQS gauges can be arranged to coincide in the special case of shear-free Robinson-Trautman metrics [95, 32].
The formulation of Einstein’s equations in the NQS gauge is presented in [31], and the associated gauge freedom arising from (u, r)-dependent rotations and boosts of the unit sphere is discussed in [32]. As in the PITT code, the main equations involve integrating a hierarchy of hypersurface equations along the radial null geodesics extending from the inner boundary to null infinity. In the NQS gauge the source terms for these radial ODEs are rather simple when the unknowns are chosen to be the connection coefficients. However, as a price to pay for this simplicity, after the radial integrations are performed on each null hypersurface, a first-order elliptic equation must be solved on each r = const. cross-section to reconstruct the underlying metric.
4.2.1 Worldtube conservation laws
The components of Einstein’s equations independent of the hypersurface and evolution equations,
were called supplementary conditions by Bondi et al. [63] and Sachs [258]. They showed that the Bianchi identity

\[\nabla_{\mu}{G^{\mu}}_{\nu} = \frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}\,{G^{\mu}}_{\nu}\right) - \frac{1}{2}G^{\alpha\beta}\partial_{\nu}g_{\alpha\beta} = 0\]
implies that these equations need only be satisfied on a worldtube r = R(u, x^{A}). When the hypersurface and evolution equations are satisfied, the Bianchi identity for ν = r reduces to h^{AB}G_{AB} = 0 so that (40) becomes trivially satisfied. The Bianchi identity for ν = A then reduces to
so that \(G_A^r = 0\) if it is set to zero at r = R(u, x^{A}). When that is the case, the Bianchi identity for ν = u then reduces to
so that \(G_u^r\) also vanishes if it vanishes on r = R(u, x^{A}).
As a result, the supplementary conditions can be replaced by the condition that the Einstein tensor satisfy
on the worldtube, where ξ^{μ} is any vector field tangent to the worldtube and N_{μ} is the unit normal to the worldtube. Since ξ^{μ}N_{μ} = 0, we can further replace (43) by the worldtube condition on the Ricci tensor
The Ricci identity
then gives rise to the strict Komar conservation law [181]
when ξ^{μ} is a Killing vector corresponding to an exact symmetry. More generally, (45) gives rise to the flux conservation law
where
and dS_{μν} and dS_{ν} are, respectively, the appropriate surface and 3-volume elements on the worldtube. For the limiting case when R → ∞, these flux conservation laws govern the energy-momentum, angular momentum and supermomentum corresponding to the generators of the Bondi-Metzner-Sachs asymptotic symmetry group [288]. For an asymptotic time translation, they give rise to the Bondi mass-loss relation.
These conservation laws (44) can also be expressed in terms of the intrinsic metric of the worldtube,
and its extrinsic curvature
This leads to the worldtube analogue of the momentum constraint for the Cauchy problem,
where \({{\mathcal D}_\mu}\) is the covariant derivative associated with H_{μν}. These are equivalent to the conservation conditions (44) and allow the conserved quantities to be expressed in terms of the extrinsic curvature of the boundary. For any vector field ξ^{μ} tangent to the worldtube, (46) implies
In particular, this shows that ξ^{μ} need only be a Killing vector for the 3-metric H_{μν} to obtain a strict conservation law on the boundary.
The worldtube conservation laws can also be interpreted as a symmetric hyperbolic system governing the evolution of certain components of the extrinsic curvature [306]. This leads to the
4.2.1.1 Worldtube Theorem
Given H_{ab}, m^{a}m^{b}K_{ab} and K, the worldtube constraints constitute a well-posed initial-value problem, which determines the remaining components of the extrinsic curvature K_{ab}.
These extrinsic curvature components are related to the integration constants for the Bondi-Sachs system, which leads to possible applications of the worldtube theorem. One application is to waveform extraction. In that case, the data (H_{ab}, m^{a}m^{b}K_{ab}, K) necessary to apply the worldtube theorem are supplied by the numerical results of a 3 + 1 Cauchy evolution. The remaining components of the extrinsic curvature can then be determined by means of a well-posed initial-value problem on the boundary. The integration constants \((\beta ,V,{U^A},U_r^A,{h_{AB}})\) for the Bondi-Sachs equations at r = R(u, x^{A}) are then determined. This approach can be used to enforce the constraints in the numerical computation of waveforms at \({{\mathcal I}^ +}\) by means of Cauchy-characteristic extraction (see Section 6).
Another possible application is to the characteristic initial-boundary value problem, for which boundary data consistent with the constraints must be prescribed a priori, i.e., independent of the evolution. The object is to obtain a well-posed version of the characteristic initial-boundary value problem. However, the complicated coupling between the Bondi-Sachs evolution system and the boundary constraint system has so far prevented any definitive results.
4.2.2 Angular dissipation
For a {3 + 1} evolution algorithm based upon a system of wave equations, or any other symmetric hyperbolic system, numerical dissipation can be added in the standard Kreiss-Oliger form [186]. Dissipation cannot be added in this standard {3 + 1} Cauchy way to the {2 + 1 + 1} format of characteristic evolution. In the original version of the PITT code, which used square stereographic patches with boundaries aligned with the grid, numerical dissipation was only introduced in the radial direction [195]. This was sufficient to establish numerical stability. In the new version of the code with circular stereographic patches, whose boundaries fit into the stereographic grid in an irregular way, angular dissipation is necessary to suppress the resulting high-frequency error.
Angular dissipation can be introduced in the following way [13]. In terms of the spin-weight 2 variable
the evolution equation (39) takes the form
where S represents the right-hand-side terms. We add angular dissipation to the u-evolution through the modification
where h is the discretization size and ϵ_{u} ≥ 0 is an adjustable parameter independent of h. This leads to
Integration over the unit sphere with solid angle element dΩ then gives
Thus, the ϵ_{u}-term has the effect of damping high-frequency noise as measured by the L_{2} norm of ∂_{r}(rJ) over the sphere.
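The damping effect of this kind of operator is easy to demonstrate on a 1D periodic grid. The sketch below uses a standard Kreiss-Oliger-style fourth-difference stencil as an analogue of the ð²ð̄² angular operator (it is an illustration, not the PITT code's implementation): it annihilates grid-frequency noise while leaving well-resolved modes essentially untouched.

```python
import numpy as np

def ko_dissipate(f, eps):
    """One pass of Kreiss-Oliger-style dissipation on a periodic 1D grid:
    f -> f - eps * (f[i+2] - 4 f[i+1] + 6 f[i] - 4 f[i-1] + f[i-2]).
    The stencil is a discrete fourth derivative times h^4, so the added
    term is O(h^4) for smooth data but O(1) for grid-frequency noise.
    Illustrative analogue of the angular eth^2 ethbar^2 dissipation."""
    d4 = (np.roll(f, -2) - 4 * np.roll(f, -1) + 6 * f
          - 4 * np.roll(f, 1) + np.roll(f, 2))
    return f - eps * d4

n = 64
smooth = np.sin(2 * np.pi * np.arange(n) / n)   # well-resolved mode
noise = (-1.0) ** np.arange(n)                   # grid-frequency noise

eps = 1.0 / 16.0   # this choice exactly annihilates the Nyquist mode
print(np.max(np.abs(ko_dissipate(noise, eps))))            # 0.0
print(np.max(np.abs(ko_dissipate(smooth, eps) - smooth)))  # small: smooth mode barely touched
```

The Nyquist mode picks up the full stencil weight 1 + 4 + 6 + 4 + 1 = 16, so with eps = 1/16 it is removed in a single pass, while the lowest mode on the grid is changed only at the level of its fourth derivative.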
Similarly, dissipation can be introduced in the radial integration of (48) through the substitution
with ϵ_{r} ≥ 0. Angular dissipation can also be introduced in the hypersurface equations, e.g., in Equation (38) through the substitution
4.2.3 First versus second differential order
The PITT code was originally formulated in the second differential form of Equations (36, 37, 38, 39), which in the spin-weighted version leads to an economical number of 2 real and 2 complex variables. Subsequently, the variable
was introduced to reduce Equation (37) to two first-order radial equations, which simplified the startup procedure at the boundary. Although the resulting code was verified to be stable and second-order accurate, its application to problems involving strong fields and gradients led to numerical errors, which made small-scale effects of astrophysical importance difficult to measure.
In particular, in initial attempts to simulate a white-hole fission, Gómez [133] encountered an oscillatory error pattern in the angular directions near the time of fission. The origin of the problem was tracked to numerical error of an oscillatory nature introduced by ð^{2} terms in the hypersurface and evolution equations. Gómez’s solution was to remove the offending second angular derivatives by introducing additional variables and reducing the system to first differential order in the angular directions. This suppressed the oscillatory mode and subsequently improved performance in the simulation of the white-hole fission problem [136] (see Section 4.4.2).
This success opens the issue of whether a completely first differential order code might perform even better, as has been proposed by Gómez and Frittelli [135]. By the use of ∂_{u}h_{AB} as a fundamental variable, they cast the Bondi system into Duff’s first-order quasilinear canonical form [103]. At the analytic level this provides standard uniqueness and existence theorems (extending previous work for the linearized case [124]) and is a starting point for establishing the estimates required for well-posedness.
At the numerical level, Gómez and Frittelli point out that this first-order formulation provides a bridge between the characteristic and Cauchy approaches, which allows application of standard methods for constructing numerical algorithms, e.g., to take advantage of shock-capturing schemes. Although true shocks do not exist for vacuum gravitational fields, when the gravitational field is coupled to hydrodynamics, fluid shocks induce steep gradients in the field, which might not be captured by standard finite-difference approximations. In particular, the second derivatives needed to compute gravitational radiation from stellar oscillations have been noted to be a troublesome source of inaccuracy in the characteristic treatment of hydrodynamics [270]. Application of standard versions of AMR is also facilitated by the first-order form.
The benefits of this completely first-order approach are not simple to decide without code comparison. The part of the code in which the ð^{2} operator introduced the oscillatory error mode in [133] was not identified, i.e., whether it originated in the inner boundary treatment or in the interpolations between stereographic patches, where second derivatives might be troublesome. There are other possible ways to remove the oscillatory angular modes, such as adding angular dissipation (see Section 4.2.2). The finite-difference algorithm in the original PITT code only introduced numerical dissipation in the radial direction [195]. The economy of variables and other advantages of a second-order scheme [187] should not be abandoned without further tests and investigation.
4.2.4 Numerical methods
The PITT code is an explicit finite-difference evolution algorithm based upon retarded time steps on a uniform three-dimensional null coordinate grid, built from the stereographic angular coordinates and a compactified radial coordinate. The straightforward numerical implementation of the finite-difference equations has facilitated code development. The Canberra code uses an assortment of novel and elegant numerical methods. Most of these involve smoothing or filtering and have obvious advantage for removing short-wavelength noise but would be unsuitable for modeling shocks.
There have been two recent projects to improve the performance of the PITT code by using the cubed-sphere method to coordinatize the sphere. Both include an adaptation of the eth-calculus to handle the transformation of spin-weighted variables between the six patches.
In one of these projects, Gómez, Barreto and Frittelli developed the cubed-sphere approach into an efficient, highly parallelized 3D code, the LEO code, for the characteristic evolution of the coupled Einstein-Klein-Gordon equations in the Bondi-Sachs formalism [134]. The code was shown to be convergent, and its high accuracy in the linearized regime on a Schwarzschild background was demonstrated by simulating the quasinormal ringdown of the scalar field and verifying its energy-momentum conservation.
Because the characteristic evolution scheme constitutes a radial integration carried out for each angle on the sphere of null directions, the natural way to parallelize the code is to distribute the angular grid among processors. Thus, given M × M processors one can distribute the N × N points in each spherical patch (cubed-sphere or stereographic), assigning to each processor an equal square grid of extent N/M in each direction. To be effective, this requires that the communication time between processors scale efficiently. This depends upon the ghost-point locations necessary to supply nearest-neighbor data and is facilitated in the cubed-sphere approach because the ghost points are aligned on one-dimensional grid lines, whose pattern is invariant under changes of grid size. In the stereographic approach, the ghost points are arranged in an irregular pattern, which changes in an essentially random way under rescaling and requires a more complicated parallelization algorithm.
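The block distribution described above can be sketched as follows. This is an illustration only, not the LEO code's actual parallel infrastructure, and it assumes for simplicity that M divides N:

```python
def angular_decomposition(N, M):
    """Distribute an N x N angular patch grid over an M x M processor
    array, giving each processor an equal (N//M) x (N//M) sub-block, as
    described above.  Returns {rank: (i0, i1, j0, j1)} half-open index
    ranges.  Sketch only; assumes M divides N evenly."""
    assert N % M == 0, "grid must split evenly among processors"
    n = N // M
    blocks = {}
    for p in range(M):
        for q in range(M):
            rank = p * M + q          # hypothetical rank numbering
            blocks[rank] = (p * n, (p + 1) * n, q * n, (q + 1) * n)
    return blocks

# 8x8 angular grid on a 2x2 processor array: four 4x4 blocks.
blocks = angular_decomposition(8, 2)
print(blocks[0])  # (0, 4, 0, 4)
print(blocks[3])  # (4, 8, 4, 8)
```

Each rank then needs ghost points only along the one-dimensional edges of its sub-block, which is why the communication pattern stays simple under grid refinement in the cubed-sphere case.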
Their goal is to develop the LEO code for application to black-hole-neutron-star binaries in a close-orbit regime, where the absence of caustics makes a pure characteristic evolution possible. Their first anticipated application is the simulation of a boson star orbiting a black hole, whose dynamics is described by the Einstein-Klein-Gordon equations. They point out that characteristic evolution of such systems of astrophysical interest has been limited in the past by lack of resolution, due to insufficient computational power, parallel infrastructure and mesh refinement. Most characteristic code development has been geared toward single-processor machines, whereas current computational platforms are designed to perform high-resolution simulations in reasonable times by parallel processing.
At the same time the LEO code was being developed, Reisswig et al. [242] also constructed a characteristic code for the Bondi-Sachs problem based upon the cubed-sphere infrastructure of Thornburg [295, 294]. They retain the original second-order differential form of the angular operators.
The Canberra code handles fields on the sphere by means of a 3-fold representation: (i) as discretized functions on a spherical grid uniformly spaced in standard (θ, ϕ) coordinates, (ii) as fast-Fourier transforms with respect to (θ, ϕ) (based upon the smooth map of the torus onto the sphere), and (iii) as a spectral decomposition of scalar, vector, and tensor fields in terms of spin-weighted spherical harmonics. The grid values are used in carrying out nonlinear algebraic operations; the Fourier representation is used to calculate (θ, ϕ)-derivatives; and the spherical-harmonic representation is used to solve global problems, such as the solution of the first-order elliptic equation for the reconstruction of the metric, whose unique solution requires pinning down the ℓ = 1 gauge freedom. The sizes of the grid and of the Fourier and spherical-harmonic representations are coordinated. In practice, the spherical-harmonic expansion is carried out to 15th order in ℓ, but the resulting coefficients must then be projected into the ℓ ≤ 10 subspace in order to avoid inconsistencies between the spherical-harmonic, grid, and Fourier representations.
The Canberra code solves the null hypersurface equations by combining an eighth-order Runge-Kutta integration with a convolution spline to interpolate field values. The radial grid points are dynamically positioned to approximate ingoing null geodesics, a technique originally due to Goldwirth and Piran [132] to avoid the problems with a uniform r-grid near a horizon, which arise from the degeneracy of an areal coordinate on a stationary horizon. The time evolution uses the method of lines with a fourth-order Runge-Kutta integrator, which introduces further high-frequency filtering.
4.2.5 Stability
4.2.5.1 PITT code
Analytic stability analysis of the finite-difference equations has been crucial in the development of a stable evolution algorithm, subject to the standard Courant-Friedrichs-Lewy (CFL) condition for an explicit code. Linear stability analysis on Minkowski and Schwarzschild backgrounds showed that certain field variables must be represented on the half-grid [142, 56]. Nonlinear stability analysis was essential in revealing and curing a mode-coupling instability that was not present in the original axisymmetric version of the code [53, 195]. This has led to a code whose stability persists even in the regime where the u-direction, along which the grid flows, becomes spacelike, such as outside the velocity-of-light cone in a rotating coordinate system. Several tests were used to verify stability. In the linear regime, robust stability was established by imposing random initial data on the initial characteristic hypersurface and random constraint-violating boundary data on an inner worldtube. (Robust stability was later adopted as one of the standardized tests for Cauchy codes [5].) The code ran stably for 10,000 grid-crossing times under these conditions [56]. The PITT code was the first 3D general-relativistic code to pass this robust stability test. The use of random data is only possible in sufficiently weak cases where terms quadratic in the field gradients are not dominant. Stability in the highly nonlinear regime was tested in two ways. Runs for a time of 60,000 M were carried out for a moving, distorted Schwarzschild black hole (of mass M), with the marginally-trapped surface (MTS) at the inner boundary tracked and its interior excised from the computational grid [139, 148]. At the time, this was by far the longest simulation of a dynamic black hole.
Furthermore, the scattering of a gravitational wave off a Schwarzschild black hole was successfully carried out in the extreme nonlinear regime where the backscattered Bondi news was as large as N = 400 (in dimensionless geometric units) [53], showing that the code can cope with the enormous power output N^{2}c^{5}/G ≈ 10^{60} W in conventional units. This exceeds the power that would be produced if, in one second, the entire galaxy were converted to gravitational radiation.
4.2.5.2 Cubed-sphere codes
The characteristic codes using the cubed-sphere grid [134, 242] are based upon the same variables and equations as the PITT code, with the same radial integration scheme. Thus, it should be expected that stability is maintained, since the main difference is that the interpatch interpolations are simpler, i.e., only one-dimensional. This appears to be the case in the reported tests, although robust stability was not directly confirmed. In particular, the LEO code showed no sign of instability in long-time, high-resolution simulations of the quasinormal ringdown of a scalar field scattering off a Schwarzschild black hole. Angular dissipation was not necessary.
4.2.5.3 Canberra code
Analytic stability analysis of the underlying finite-difference equations is impractical because of the extensive mix of spectral techniques, higher-order methods, and splines. Although there is no clear-cut CFL limit on the code, stability tests show that there is a limit on the time step. The damping of high-frequency modes due to the implicit filtering would be expected to suppress numerical instability, but the stability of the Canberra code is nevertheless subject to two qualifications [34, 35, 36]: (i) At late times (less than 100 M), the evolution terminates as it approaches an event horizon, apparently because of a breakdown of the NQS gauge condition, although an analysis of how and why this should occur has not yet been given. (ii) Numerical instabilities arise from dynamic inner boundary conditions and restrict the inner boundary to a fixed Schwarzschild horizon. Tests in the extreme nonlinear regime were not reported.
4.2.6 Accuracy
4.2.6.1 PITT code
The designed second-order accuracy has been verified in an extensive number of testbeds [56, 53, 139, 311, 312], including new exact solutions specifically constructed in null coordinates for the purpose of convergence tests:

- Linearized waves on a Minkowski background in null cone coordinates.
- Schwarzschild in rotating coordinates.
- Polarization symmetry of nonlinear twist-free axisymmetric waveforms.
- Robinson-Trautman waveforms from perturbed Schwarzschild black holes.
- Nonlinear Robinson-Trautman waveforms utilizing an independently-computed solution of the Robinson-Trautman equation.
- Perturbations of a Schwarzschild black hole utilizing an independently-computed solution of the Teukolsky equation.
In addition to these testbeds, a set of linearized solutions has recently been obtained in the Bondi-Sachs gauge for either Schwarzschild or Minkowski backgrounds [47]. The solutions are generated by the introduction of a thin shell of matter whose density varies with time and angle. This gives rise to an exterior field containing gravitational waves. For a Minkowski background, the solution is given in exact analytic form and, for a Schwarzschild background, in terms of a power series. The solutions are parametrized by frequency and spherical-harmonic decomposition. They supply a new and very useful testbed for the calibration and further development of characteristic evolution codes for Einstein’s equations, analogous to the role of the Teukolsky waves in Cauchy evolution. The PITT code showed clean second-order convergence in both the L_{2} and L_{∞} error norms in tests based upon waves in a Minkowski background. However, in applications involving very high resolution or nonlinearity, there was excessive short-wavelength noise, which degraded convergence. Recent improvements in the code [16] have now established clean second-order convergence in the nonlinear regime.
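Convergence rates such as the second-order behavior quoted above are typically measured by comparing error norms at successive resolutions. A minimal sketch, using hypothetical error values (not data from [47] or [16]):

```python
import numpy as np

def convergence_rate(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order from error norms at two resolutions:
    p = log(e_h / e_{h/r}) / log(r).  For a second-order-accurate scheme,
    halving the grid spacing should give p close to 2."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# Hypothetical L2 error norms against an exact linearized-wave solution,
# manufactured here to behave as e(h) = C * h**2:
h = np.array([0.1, 0.05, 0.025])
err = 3.0 * h**2
for k in range(len(h) - 1):
    print(convergence_rate(err[k], err[k + 1]))   # ~2.0 at each refinement
```

The same measurement applied to both the L_{2} and L_{∞} norms discriminates between global loss of accuracy and localized sources of error, which is the point of the norm comparison mentioned below for the cubed-sphere tests.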
It would be of great value to increase the accuracy of the code to higher order. However, the marching algorithm, which combines the radial integration of the hypersurface and evolution equations, does not fall into the standard categories that have been studied in computational mathematics. In particular, there are no energy estimates for the analytic problem, which could serve as a guide to design a higher-order stable algorithm. This is an important area for future investigation.
4.2.6.2 Cubed-sphere codes
Convergence of the cubed-sphere code developed by Reisswig et al. [242] was also checked using linearized waves in a Minkowski background. The convergence rate for the L_{2} error norm was approximately second order, but in some cases there was significant degradation. They conjectured that the underlying source of error arises at the corners of the six patches. Comparison with the L_{∞} error norm would discriminate such a localized source of error, but such results were not reported.
The designed convergence rate of the ð operator used in the LEO code was verified for second-, fourth- and eighth-order finite-difference approximations, using the spin-weight 2 spherical harmonic _{2}Y_{43} as a test. Similarly, the convergence of the integral relations governing the orthonormality of the spin-weighted harmonics was verified. The code includes coupling to a Klein-Gordon scalar field. Although convergence of the evolution code was not explicitly checked, high accuracy in the linearized regime with a Schwarzschild background was demonstrated in the simulation of quasinormal ringdown of the scalar field and in the energy-momentum conservation of the scalar field.
4.2.6.3 Canberra code
The complexity of the algorithm and the NQS gauge makes it problematic to establish accuracy by direct means. Exact solutions do not provide an effective convergence check, because the Schwarzschild solution is trivial in the NQS gauge and other known solutions in this gauge require dynamic inner boundary conditions, which destabilize the present version of the code. Convergence to linearized solutions is a possible check but has not yet been performed. Instead, indirect tests by means of geometric consistency and partial convergence tests are used to calibrate accuracy. The consistency tests were based on the constraint equations, which are not enforced during null evolution except at the inner boundary. The balance between mass loss and radiation flux through \({{\mathcal I}^ +}\) is a global consequence of these constraints. No appreciable growth of the constraints was noticeable until within 5 M of the final breakdown of the code. In weak-field tests where angular resolution does not dominate the error, partial convergence tests based upon varying the radial grid size verify the eighth-order convergence in the shear expected from the Runge-Kutta integration and splines. When the radial source of error is small, reduced error with smaller time step can also be discerned.
In practical runs, the major source of inaccuracy is the spherical-harmonic resolution, which was restricted to ℓ ≤ 15 by hardware limitations. Truncation of the spherical-harmonic expansion has the effect of modifying the equations to a system for which the constraints are no longer satisfied. The relative error in the constraints is 10^{−3} for strong-field simulations [33].
4.2.7 Nonlinear scattering off a Schwarzschild black hole
A natural physical application of a characteristic evolution code is the nonlinear version of the classic problem of scattering off a Schwarzschild black hole, first solved perturbatively by Price [238]. Here the inner worldtube for the characteristic initial-value problem consists of the ingoing branch of the r = 2m hypersurface (the past horizon), where Schwarzschild data are prescribed. The nonlinear problem of a gravitational wave scattering off a Schwarzschild black hole is then posed in terms of data on an outgoing null cone, which describe an incoming pulse with compact support. Part of the energy of this pulse falls into the black hole and part is backscattered to \({{\mathcal I}^ +}\). This problem has been investigated using both the PITT and Canberra codes.
The Pittsburgh group studied the backscattered waveform (described by the Bondi news function) as a function of incoming pulse amplitude. The computational eth-module smoothly handled the complicated time-dependent transformation between the non-inertial computational frame at \({{\mathcal I}^ +}\) and the inertial (Bondi) frame necessary to obtain the standard “plus” and “cross” polarization modes. In the perturbative regime, the news corresponds to the backscattering of the incoming pulse off the effective Schwarzschild potential. When the energy of the pulse is no larger than the central Schwarzschild mass, the backscattered waveform still depends roughly linearly on the amplitude of the incoming pulse. However, for very high amplitudes the waveform behaves quite differently. Its amplitude is greater than that predicted by linear scaling and its shape drastically changes and exhibits extra oscillations. In this very high amplitude case, the mass of the system is completely dominated by the incoming pulse, which essentially backscatters off itself in a nonlinear way.
The Canberra code was used to study the change in Bondi mass due to the radiation [33]. The Hawking mass M_{H}(u, r) was calculated as a function of radius and retarded time, with the Bondi mass M_{B}(u) then obtained by taking the limit r → ∞. The limit had good numerical behavior. For a strong initial pulse with ℓ = 4 angular dependence, in a run from u = 0 to u = 70 (in units where the interior Schwarzschild mass is 1), the Bondi mass dropped from 1.8 to 1.00002, showing that almost half of the initial energy of the system was backscattered and that a surprisingly negligible amount of energy fell into the black hole. A possible explanation is that the truncation of the spherical harmonic expansion cuts off wavelengths small enough to effectively penetrate the horizon. The Bondi mass decreased monotonically in time, as necessary theoretically, but its rate of change exhibited an interesting pulsing behavior whose time scale could not be obviously explained in terms of quasinormal oscillations. The Bondi mass loss formula was confirmed with relative error of less than 10^{−3}. This is impressive accuracy considering the potential sources of numerical error introduced by taking the limit of the Hawking mass with limited resolution. The code was also used to study the appearance of logarithmic terms in the asymptotic expansion of the Weyl tensor [37]. In addition, the Canberra group studied the effect of the initial pulse amplitude on the waveform of the backscattered radiation, but did not extend their study to the very high amplitude regime in which qualitatively interesting nonlinear effects occur.
4.2.8 Black hole in a box
The PITT code has also been implemented to evolve along an advanced time foliation by ingoing null cones, with data given on a worldtube at their outer boundary and on the initial ingoing null cone. The code was used to evolve a black hole in the region interior to the worldtube by implementing a horizon finder to locate the MTS on the ingoing cones and excising its singular interior [141]. The code tracks the motion of the MTS and measures its area during the evolution. It was used to simulate a distorted “black hole in a box” [139]. Data at the outer worldtube was induced from a Schwarzschild or Kerr spacetime but the worldtube was allowed to move relative to the stationary trajectories; i.e., with respect to the grid the worldtube is fixed but the black hole moves inside it. The initial null data consisted of a pulse of radiation, which subsequently travels outward to the worldtube, where it reflects back toward the black hole. The approach of the system to equilibrium was monitored by the area of the MTS, which also equals its Hawking mass. When the worldtube is stationary (static or rotating in place), the distorted black hole inside evolved to equilibrium with the boundary. A boost or other motion of the worldtube with respect to the black hole did not affect this result. The MTS always reached equilibrium with the outer boundary, confirming that the motion of the boundary was “pure gauge”.
This was the first code that ran “forever” in a dynamic black-hole simulation, even when the worldtube wobbled with respect to the black hole to produce artificial periodic time dependence. An initially distorted, wobbling black hole was evolved for a time of 60,000 M, longer by orders of magnitude than permitted by the stability of other existing black-hole codes at the time. This exceptional performance opens a promising new approach to handle the inner boundary condition for Cauchy evolution of black holes by the matching methods reviewed in Section 5.
Note that setting the pulse to zero is equivalent to prescribing shear-free data on the initial null cone. Combined with Schwarzschild boundary data on the outer worldtube, this would be complete data for a Schwarzschild spacetime. However, the evolution of such shear-free null data combined with Kerr boundary data would have an initial transient phase before settling down to a Kerr black hole. This is because the twist of the shear-free Kerr null congruence implies that Kerr data specified on a null hypersurface are not generally shear free. The event horizon is an exception, but Kerr null data on other null hypersurfaces have not been cast in explicit analytic form. This makes the Kerr spacetime an awkward testbed for characteristic codes. (Curiously, Kerr data on a null hypersurface with a conical-type singularity do take a simple analytic form, although one unsuitable for numerical evolution [108].) Using some intermediate analytic results of Israel and Pretorius [236], Venter and Bishop [59] have recently constructed a numerical algorithm for transforming the Kerr solution into Bondi coordinates and in that way provide the necessary null data numerically.
4.3 Characteristic treatment of binary black holes
An important application of characteristic evolution is the calculation of the waveform emitted by binary black holes, which is possible during the very interesting nonlinear domain from merger to ringdown [197, 305]. The evolution is carried out along a family of ingoing null hypersurfaces, which intersect the horizon in topological spheres. It is restricted to the period following the merger, as otherwise the ingoing null hypersurfaces would intersect the horizon in disjoint pieces corresponding to the individual black holes. The evolution proceeds backward in time on an ingoing null foliation to determine the exterior spacetime in the postmerger era. It is an example of the characteristic initial value problem posed on an intersecting pair of null hypersurfaces [259, 159], for which existence theorems apply in some neighborhood of the initial null hypersurfaces [213, 115, 114]. Here one of the null hypersurfaces is the event horizon \({{\mathcal H}^ +}\) of the binary black holes. The other is an ingoing null hypersurface J^{+}, which intersects \({{\mathcal H}^ +}\) in a topologically spherical surface \({{\mathcal S}^ +}\), approximating the equilibrium of the final Kerr black hole, so that J^{+} approximates future null infinity \({{\mathcal I}^ +}\). The required data for the analytic problem consists of the degenerate conformal null metrics of \({{\mathcal H}^ +}\) and J^{+} and the metric and extrinsic curvature of their intersection \({{\mathcal S}^ +}\).
The conformal metric of \({{\mathcal H}^ +}\) is provided by the conformal horizon model for a binary blackhole horizon [197, 173], which treats the horizon in standalone fashion as a threedimensional manifold endowed with a degenerate metric γ_{ab} and affine parameter t along its null rays. The metric is obtained from the conformal mapping \({\gamma _{ab}} = {\Omega ^2}{{\hat \gamma}_{ab}}\) of the intrinsic metric \({{\hat \gamma}_{ab}}\) of a flat space null hypersurface emanating from a convex surface \({{\mathcal S}_0}\) embedded at constant time in Minkowski space. The horizon is identified with the null hypersurface formed by the inner branch of the boundary of the past of \({{\mathcal S}_0}\), and its extension into the future. The flat space null hypersurface expands forever as its affine parameter \({\hat t}\) (Minkowski time) increases, but the conformal factor is chosen to stop the expansion so that the crosssectional area of the black hole approaches a finite limit in the future. At the same time, the Raychaudhuri equation (which governs the growth of surface area) forces a nonlinear relation between the affine parameters t and \({\hat t}\). This is what produces the nontrivial topology of the affine tslices of the blackhole horizon. The relative distortion between the affine parameters t and \({\hat t}\), brought about by curved space focusing, gives rise to the trousers shape of a binary blackhole horizon.
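In schematic form, and with notation assumed here rather than taken verbatim from [197, 173], the model combines the conformal rescaling of the flat-space degenerate metric with the vacuum Raychaudhuri equation along the horizon generators:

\[
\gamma_{ab} = \Omega^2 \,\hat\gamma_{ab}, \qquad
dA = \Omega^2 \, d\hat A, \qquad
\frac{d\Theta}{dt} = -\frac{1}{2}\Theta^2 - \sigma_{ab}\sigma^{ab},
\]

where Θ is the expansion and σ_{ab} the shear of the generators. Choosing Ω so that the area element dA approaches a finite limit in the future, while t remains an affine parameter of the rescaled generators, is what forces the nonlinear relation between t and \({\hat t}\) responsible for the trousers shape.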
An embedding diagram of the horizon for an axisymmetric headon collision, obtained by choosing \({{\mathcal S}_0}\) to be a prolate spheroid, is shown in Figure 3 [197]. The blackhole event horizon associated with a triaxial ellipsoid reveals new features not seen in the degenerate case of the headon collision [173], as depicted in Figure 4. If the degeneracy is slightly broken, the individual black holes form with spherical topology but as they approach, an effective tidal distortion produces two sharp pincers on each black hole just prior to merger. At merger, the two pincers join to form a single temporarily toroidal black hole. The inner hole of the torus subsequently closes up to produce first a peanut shaped black hole and finally a spherical black hole. No violation of topological censorship [113] occurs because the hole in the torus closes up superluminally. Consequently, a causal curve passing through the torus at a given time can be slipped below the bottom of a trouser leg to yield a causal curve lying entirely outside the hole [267]. In the degenerate axisymmetric limit, the pincers reduce to a point so that the individual holes have teardrop shape and they merge without a toroidal phase. Animations of this merger can be viewed at [198].
The conformal horizon model determines the data on \({{\mathcal H}^ +}\) and \({{\mathcal S}^ +}\). The remaining data necessary to evolve the exterior spacetime are given by the conformal geometry of J^{+}, which constitutes the outgoing radiation waveform. The determination of the merger-ringdown waveform proceeds in two stages. In the first stage, this outgoing waveform is set to zero and the spacetime is evolved backward in time to calculate the incoming radiation entering from \({{\mathcal I}^ -}\). (This incoming radiation is eventually absorbed by the black hole.) From a time-reversed point of view, this evolution describes the outgoing waveform emitted in the fission of a white hole, with the physically-correct initial condition of no ingoing radiation. Preliminary calculations show that at late times the waveform is entirely quadrupolar (ℓ = 2) but that a strong octopole mode (ℓ = 4) exists just before fission. In the second stage of the calculation, this waveform could be used to generate the physically-correct outgoing waveform for a black-hole merger. The passage from the first stage to the second is the nonlinear equivalent of first determining an inhomogeneous solution to a linear problem and then adding the appropriate homogeneous solution to satisfy the boundary conditions. In this context, the first stage supplies an advanced solution and the second stage the homogeneous retarded-minus-advanced solution. When the evolution is carried out in the perturbative regime of a Kerr or Schwarzschild background, as in the close approximation [239], this superposition of solutions is simplified by the time-reflection symmetry [305]. The second stage has been carried out in the perturbative regime of the close approximation using a characteristic code, which solves the Teukolsky equation, as described in Section 4.4.
More generally, beyond the perturbative regime, the merger-ringdown waveform must be obtained by a more complicated inverse-scattering procedure, which has not yet been attempted.
There is a complication in applying the PITT code to this double-null evolution because a dynamic horizon does not lie precisely on r-grid points. As a result, the r-derivative of the null data, i.e., the ingoing shear of \({\mathcal H}\), must also be provided in order to initiate the radial hypersurface integrations. The ingoing shear is part of the free data specified at \({{\mathcal S}^ +}\). Its value on \({\mathcal H}\) can be determined by integrating (backward in time) a sequence of propagation equations involving the horizon’s twist and ingoing divergence. A horizon code that carries out these integrations has been tested to give accurate data even beyond the merger [137].
The code has revealed new global properties of the headon collision by studying a sequence of data for a family of colliding black holes that approaches a single Schwarzschild black hole. The resulting perturbed Schwarzschild horizon provides global insight into the close limit [239], in which the individual black holes have joined in the infinite past. A marginally antitrapped surface divides the horizon into interior and exterior regions, analogous to the division of the Schwarzschild horizon by the r = 2 M bifurcation sphere. In passing from the perturbative to the strongly nonlinear regime there is a rapid transition in which the individual black holes move into the exterior portion of the horizon. The data pave the way for the PITT code to calculate whether this dramatic time dependence of the horizon produces an equally dramatic waveform. See Section 4.4.2 for first stage results.
4.4 Perturbations of Schwarzschild
The nonlinear 3D PITT code has been calibrated in the regime of small perturbations of a Schwarzschild spacetime [311, 312] by measuring convergence with respect to independent solutions of the Teukolsky equation [290]. By decomposition into spherical harmonics, the Teukolsky equation reduces the problem of a perturbation of a stationary black hole to a 1D problem in the (t, r) subspace for a component of the perturbed Weyl tensor. Historically, the Teukolsky equation was first solved numerically by Cauchy evolution. Campanelli, Gómez, Husa, Winicour, and Zlochower [77, 174] have reformulated the Teukolsky formalism as a double-null characteristic evolution algorithm. The evolution proceeds on a family of outgoing null hypersurfaces with an ingoing null hypersurface as inner boundary and with the outer boundary compactified at future null infinity. It applies to either the Weyl component Ψ_{0} or Ψ_{4}, as classified in the Newman–Penrose formalism. The Ψ_{0} component comprises constraint-free gravitational data on an outgoing null hypersurface and Ψ_{4} comprises the corresponding data on an ingoing null hypersurface. In the study of perturbations of a Schwarzschild black hole, Ψ_{0} is prescribed on an outgoing null hypersurface \({{\mathcal J}^ -}\), representing an early retarded time approximating past null infinity, and Ψ_{4} is prescribed on the inner white-hole horizon \({{\mathcal H}^ -}\).
The physical setup is described in Figure 5. The outgoing null hypersurfaces extend to future null infinity \({{\mathcal I}^ +}\) on a compactified numerical grid. Consequently, there is no need for either an artificial outer boundary condition or an interior extraction worldtube. The outgoing radiation is computed in the coordinates of an observer in an inertial frame at infinity, thus avoiding any gauge ambiguity in the waveform.
The first calculations were carried out with nonzero data for Ψ_{4} on \({{\mathcal H}^ -}\) and zero data on \({{\mathcal J}^ -}\) [77] (so that no ingoing radiation entered the system). The resulting simulations were highly accurate: they tracked the quasinormal ringdown of a perturbation consisting of a compact pulse through ten orders of magnitude and then tracked the final power-law decay through an additional six orders of magnitude. The measured exponent of the power-law decay varied from ≈ 5.8, at the beginning of the tail, to ≈ 5.9 near the end, in good agreement with the predicted value of 2ℓ + 2 = 6 for a quadrupole wave [238].
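Tail exponents of this kind are typically measured as a local logarithmic derivative of the waveform. The following is a hedged post-processing sketch with hypothetical sampled data, not code from the characteristic Teukolsky code itself:

```python
import numpy as np

def local_power_law_index(t, psi):
    """Local decay exponent p(t) = -d ln|psi| / d ln t for a tail
    psi ~ t^{-p}; for an l = 2 tail it should approach 2l + 2 = 6."""
    ln_t = np.log(np.asarray(t, dtype=float))
    ln_psi = np.log(np.abs(np.asarray(psi, dtype=float)))
    return -np.gradient(ln_psi, ln_t)
```

The drift quoted in the text (≈ 5.8 early in the tail to ≈ 5.9 near the end) corresponds to p(t) slowly approaching its asymptotic value.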
The accuracy of the perturbative solutions provides a virtually exact solution for carrying out convergence tests of the nonlinear PITT null code. In this way, the error in the Bondi news function computed by the PITT code was calibrated for perturbative data consisting of either an outgoing pulse on \({{\mathcal H}^ -}\) or an ingoing pulse on \({{\mathcal J}^ -}\). For the outgoing pulse, clean second-order convergence was confirmed until late times in the evolution, when small deviations from second order arise from accumulation of roundoff and truncation error. For the Bondi news produced by the scattering of an ingoing pulse, clean second-order convergence was again confirmed until late times when the pulse approached the r = 2 M black-hole horizon. The late-time error arises from loss of resolution of the pulse (in the radial direction) resulting from the properties of the compactified radial coordinate used in the code. This type of error could be eliminated by using the characteristic AMR techniques under development [237].
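Convergence claims of this kind are usually quantified by a three-level Richardson test. A generic sketch follows; the function and variable names are illustrative and not taken from the PITT code:

```python
import math

def observed_order(f_h, f_h2, f_h4):
    """Observed convergence order from a scalar diagnostic (e.g. the
    Bondi news at a fixed retarded time) computed at grid spacings
    h, h/2 and h/4:  p = log2(|f_h - f_h2| / |f_h2 - f_h4|).
    For clean second-order convergence p approaches 2."""
    return math.log2(abs(f_h - f_h2) / abs(f_h2 - f_h4))
```

Deviations of p from 2 at late times then signal exactly the roundoff and resolution effects described above.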
4.4.1 Close-approximation white-hole and black-hole waveforms
The characteristic Teukolsky code has been used to study radiation from axisymmetric white holes and black holes in the close approximation. The radiation from an axisymmetric fissioning white hole [77] was computed using the Weyl data on \({{\mathcal H}^ -}\) supplied by the conformal horizon model described in Section 4.3, with the fission occurring along the axis of symmetry. The close approximation implies that the fission takes place far in the future, i.e., in the region of \({{\mathcal H}^ -}\) above the black-hole horizon \({{\mathcal H}^ +}\). The data have a free parameter η, which controls the energy yielded by the white-hole fission. The radiation waveform reveals an interesting dependence on the parameter η. In the large-η limit, the waveform consists of a single pulse, followed by ringdown and tail decay. The amplitude of the pulse scales quadratically with η and the width decreases with η. As η is reduced, the initial pulse broadens and develops more structure. In the small-η limit, the amplitude scales linearly with η and the shape is independent of η.
Since there was no incoming radiation, the above model gave the physically-appropriate boundary conditions for a white-hole fission (in the close approximation). From a time-reversed viewpoint, the system corresponds to a black-hole merger with no outgoing radiation at future null infinity, i.e., the analog of an advanced solution with only ingoing but no outgoing radiation. In the axisymmetric case studied, the merger corresponds to a head-on collision between two black holes. The physically-appropriate boundary conditions for a black-hole merger correspond to no ingoing radiation on \({{\mathcal J}^ -}\) and binary black-hole data on \({{\mathcal H}^ +}\). Because \({{\mathcal J}^ -}\) and \({{\mathcal H}^ +}\) are disjoint, the corresponding data cannot be used directly to formulate a double-null characteristic initial value problem. However, the ingoing radiation at \({{\mathcal J}^ -}\) supplied by the advanced solution for the black-hole merger could be used as Stage I of a two-stage approach to determine the corresponding retarded solution. In Stage II, this ingoing radiation is used to generate the analogue of an advanced-minus-retarded solution. A pure retarded solution (with no ingoing radiation but outgoing radiation at \({{\mathcal I}^ +}\)) can then be constructed by superposition. The time-reflection symmetry of the Schwarzschild background is key to carrying out this construction.
This two-stage strategy has been carried out by Husa, Zlochower, Gómez, and Winicour [174]. The superposition of the Stage I and Stage II solutions removes the ingoing radiation from \({{\mathcal J}^ -}\) while modifying the close-approximation perturbation of \({{\mathcal H}^ +}\), essentially making it ring. The amplitude of the radiation waveform at \({{\mathcal I}^ +}\) has a linear dependence on the parameter η, which in this black-hole scenario governs the energy lost in the inelastic merger process. Unlike the fission waveforms, there is very little η-dependence in their shape, and the amplitude continues to scale linearly even for large η. It is not surprising that the retarded waveforms from a black-hole merger differ markedly from the retarded waveforms from a white-hole fission. The white-hole process is directly visible at \({{\mathcal I}^ +}\), whereas the merger waveform results only indirectly from the black holes, through the preceding collapse of matter or gravitational energy that formed them. This explains why the fission waveform is more sensitive to the parameter η, which controls the shape and timescale of the horizon data. However, the weakness of the dependence of the merger waveform on η is surprising and has potential importance for enabling the design of an efficient template for extracting a gravitational-wave signal from noise.
4.4.2 Fissioning white hole
In the purely vacuum approach to the binary black-hole problem, the stars that collapse to form the black holes are replaced either by imploding gravitational waves or by some past singularity, as in the Kruskal picture. This avoids hydrodynamic difficulties at the expense of a globally-complicated initial-value problem. The imploding waves either emanate from a past singularity, in which case the time-reversed application of cosmic censorship implies the existence of an anti-trapped surface; or they emanate from \({{\mathcal I}^ -}\), which complicates the issue of gravitational radiation content in the initial data and its effect on the outgoing waveform. These complications are avoided in the two-stage approach adopted in the close-approximation studies described in Section 4.4.1, where advanced and retarded solutions in a Schwarzschild background can be rigorously identified and superimposed. Computational experiments have been carried out to study the applicability of this approach in the nonlinear regime [136].
From a time-reversed viewpoint, the first stage is equivalent to the determination of the outgoing radiation from the fission of a white hole in the absence of ingoing radiation, i.e., the physically-appropriate “retarded” waveform from a white-hole fission. This fission problem can be formulated in terms of data on the white-hole horizon \({{\mathcal H}^ -}\) and data representing the absence of ingoing radiation on a null hypersurface J^{−}, which emanates from \({{\mathcal H}^ -}\) at an early time. The data on \({{\mathcal H}^ -}\) is provided by the conformal horizon model for a fissioning white hole. This allows study of a range of models extending from the perturbative close-approximation regime, in which the fission occurs inside a black-hole event horizon, to the nonlinear regime of a “bare” fission visible from \({{\mathcal I}^ +}\). The study concentrates on the axisymmetric spinless fission (corresponding, in the time-reversed view, to the head-on collision of nonspinning black holes). In the perturbative regime, the news function agrees with the close-approximation waveforms. In the highly nonlinear regime, a bare fission was found to produce a dramatically sharp radiation pulse, which then undergoes a damped oscillation. Because the fission is visible from \({{\mathcal I}^ +}\), it is a more efficient source of gravitational waves than a black-hole merger and can produce a higher fractional mass loss!
4.5 Nonlinear mode coupling
The PITT code has been used to model the nonlinear generation of waveforms by scattering off a Schwarzschild black hole [311, 312]. The physical setup is similar to the perturbative study in Section 4.4. A radially compact pulse is prescribed on an early-time outgoing null hypersurface \({{\mathcal J}^ -}\) and Schwarzschild null data is given on the interior white-hole horizon \({{\mathcal H}^ -}\), which is causally unaffected by the pulse. The input pulse is standardized to (ℓ = 2, m = 0) and (ℓ = 2, m = 2) quadrupole modes with amplitude A. The outgoing null hypersurfaces extend to future null infinity \({{\mathcal I}^ +}\) on a compactified numerical grid. Consequently, there is no need for an artificial outer boundary. The evolution code then provides the news function at \({{\mathcal I}^ +}\), in the coordinates of an observer in an inertial frame at infinity, thus avoiding any gauge ambiguity in the waveform. This provides a simple setting for studying how the nonlinearities generated by high amplitudes affect the waveform.
The study reveals several features of qualitative importance:

1.
The mode coupling amplitudes consistently scale as powers A^{n} of the input amplitude A corresponding to the nonlinear order of the terms in the evolution equations, which produce the mode. This allows much economy in producing a waveform catalog: Given the order n associated with a given mode generation, the response to any input amplitude A can be obtained from the response to a single reference amplitude.

2.
The frequency response behaves similarly, but in a less consistent way. The dominant frequencies produced by mode coupling lie in the approximate range of the quasinormal frequency of the input mode and the expected sum and difference frequencies generated by the order of nonlinearity.

3.
Large phase shifts, ranging up to 15% in a half cycle relative to the linearized waveform, are exhibited in the news function obtained by the superposition of all output modes, i.e., in the waveform of observational significance. These phase shifts, which are important for design of signal extraction templates, arise in an erratic way from superposing modes with different oscillation frequencies. This furnishes a strong argument for going beyond the linearized approximation in designing a waveform catalog for signal extraction.

4.
Besides the nonlinear generation of harmonic modes absent in the initial data, there is also a stronger-than-linear generation of gravitational-wave output. This provides a potential mechanism for enhancing the strength of the gravitational radiation produced during, say, the merger phase of a binary inspiral above the strength predicted in linearized theory.

5.
In the nonaxisymmetric m = 2 case, there is also considerable generation of radiation in polarization states not present in the linearized approximation. In the simulations, input amplitudes in the range A = 0.1 to A = 0.36 lead to nonlinear generation of the ⊕ polarization mode, which is of the same order of magnitude as the ⊗ mode (which would be the sole polarization in the linearized regime). As a result, significant nonlinear amplification and phase shifting of the waveform would be observed by a gravitational-wave detector, depending on its orientation.
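Feature 1 above is what makes a waveform catalog economical: the response of a mode generated at nonlinear order n to any input amplitude follows from a single reference run. A minimal sketch of that rescaling, with hypothetical names:

```python
def rescale_mode(ref_waveform, ref_amplitude, amplitude, order):
    """Scale a reference mode waveform to a new input amplitude.
    Since a mode generated at nonlinear order n has coupling amplitude
    scaling as A^n, the response at input amplitude A is
    (A / A_ref)^n times the reference response."""
    factor = (amplitude / ref_amplitude) ** order
    return [factor * y for y in ref_waveform]
```

For an order-2 mode, doubling the input amplitude quadruples the response; only the order n and one reference waveform per mode need be stored.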
These effects arise from the nonlinear modification of the Schwarzschild geometry identified by Papadopoulos in his prior work on axisymmetric mode coupling [222], reported in Section 3.3.2. Although Papadopoulos studied nonlinear mode generation produced by an outgoing pulse, as opposed to the case of an ingoing pulse studied in [311, 312], the same nonlinear factors were in play and gave rise to several common features. In both cases, the major effects arise in the region near r = 3 M. Analogs of Features 1, 2, 3, and 4 above are all apparent in Papadopoulos’s work. At the finite difference level, both codes respect the reflection symmetry inherent in Einstein’s equations and exhibit the corresponding selection rules arising from parity considerations. In the axisymmetric case considered by Papadopoulos, this forbids the nonlinear generation of a ⊕ mode from a ⊗ mode, as described in Feature 5 above.
The evolution along ingoing null hypersurfaces in the axisymmetric work of Papadopoulos has complementary numerical features with the evolution along outgoing null hypersurfaces in the 3D work. The grid based upon ingoing null hypersurfaces avoids the difficulty in resolving effects close to r = 2 M encountered with the grid based upon outgoing null hypersurfaces. The outgoing code would require AMR in order to resolve the quasinormal ringdown for as many cycles as achieved by Papadopoulos. However, the outgoing code avoids the late time caustic formation noted in Papadopoulos’ work, as well as the complications of gauge ambiguity and backscattering introduced by a finite outer boundary. One attractive option would be to combine the best features of these approaches by matching an interior evolution based upon ingoing null hypersurfaces to an exterior evolution based upon outgoing null hypersurfaces, as implemented in [196] for sphericallysymmetric EinsteinKleinGordon waves.
The waveform of relevance to gravitationalwave astronomy is the superposition of modes with different frequency compositions and angular dependence. Although this waveform results from a complicated nonlinear processing of the input signal, which varies with choice of observation angle, the response of the individual modes to an input signal of arbitrary amplitude can be obtained by scaling the response to an input of standard reference amplitude. This offers an economical approach to preparing a waveform catalog. Nonlinear mode coupling has also been extensively studied by gauge invariant secondorder perturbative methods (see [130, 67, 225] and references therein).
4.6 3D Einstein–Klein–Gordon system
The Einstein–Klein–Gordon (EKG) system can be used to simulate many interesting physical phenomena. In 1D, characteristic EKG codes have been used to simulate critical phenomena and the perturbation of black holes (see Section 3.1), and a Cauchy EKG code has been used to study boson-star dynamics [265]. Extending these codes to 3D would open up a new range of possibilities, e.g., the possibility of studying radiation from a boson star orbiting a black hole. A first step in that direction has been achieved with the incorporation of a massless scalar field into the PITT code [28]. Since the scalar and gravitational evolution equations have the same basic form, the same evolution algorithm could be utilized. The code was tested to be second-order convergent and stable. It was applied to the fully nonlinear simulation of an asymmetric pulse of ingoing scalar radiation propagating toward a Schwarzschild black hole. The resulting scalar radiation and gravitational news backscattered to \({{\mathcal I}^ +}\) were computed. The amplitudes of the scalar and gravitational radiation modes exhibited the expected power-law scaling with respect to the initial pulse amplitude. In addition, the computed ringdown frequencies agreed with the results of perturbative quasinormal-mode calculations.
The LEO code [134] developed by Gómez et al. has been applied to the characteristic evolution of the coupled Einstein–Klein–Gordon fields, using cubed-sphere coordinates. The long-term plan is to simulate a boson star orbiting a black hole. In simulations of a scalar pulse incident on a Schwarzschild black hole, they find the interesting result that the scalar energy flow into the black hole reaches a maximum at spherical harmonic index ℓ = 2 and then decreases for larger ℓ, because the centrifugal barrier prevents the higher harmonics from penetrating effectively. The efficient parallelization allows them to perform large simulations at resolutions never achieved before. Characteristic evolution of such systems of astrophysical interest has been limited in the past by resolution. They note that, at the finest resolution considered in [55], it would take 1.5 months on the fastest current (single) processor to track a star in close orbit around a black hole, even though the grid in question is only 81 × 123 points, which is moderate by today’s standards.
5 Cauchy-Characteristic Matching
Characteristic evolution has many advantages over Cauchy evolution. Its main disadvantage is the existence of either a caustic, where neighboring characteristics focus, or a milder version consisting of a crossover between two distinct characteristics. The vertex of a light cone is a highly symmetric caustic, which already strongly limits the time step for characteristic evolution because of the CFL condition (22). It does not appear possible for a single characteristic coordinate system to cover the entire exterior region of a binary black-hole spacetime without developing very complicated caustics and crossovers. This limits the waveform determined by a purely characteristic evolution to the post-merger period.
CCM is a way to avoid this limitation by combining the strong points of characteristic and Cauchy evolution into a global evolution [46]. One of the prime goals of computational relativity is the simulation of the inspiral and merger of binary black holes. Given the appropriate data on a worldtube surrounding a binary system, characteristic evolution can supply the exterior spacetime and the radiated waveform. But determination of the worldtube data for a binary requires an interior Cauchy evolution. CCM is designed to solve such global problems. The potential advantages of CCM over traditional boundary conditions are

accurate waveform and polarization state at infinity,

computational efficiency for radiation problems in terms of both the grid domain and the computational algorithm,

elimination of an artificial outer boundary condition on the Cauchy problem, which eliminates contamination from backreflection and clarifies the global initial value problem, and

a global picture of the spacetime exterior to the event horizon.
These advantages have been realized in model tests (see Sections 5.5–5.8), but CCM has not yet been achieved in fully nonlinear three-dimensional general relativity. The early attempts to implement CCM in general relativity employed the Arnowitt–Deser–Misner (ADM) [12] formulation, with explicit lapse and shift, for the Cauchy evolution. A major problem in this application has since been identified with the weakly hyperbolic nature of this system. Even at the analytic level of the Cauchy problem, there are secularly growing modes with arbitrarily fast rates, i.e., the Cauchy problem is ill-posed. Such power-law instabilities of the Cauchy problem can be converted to exponentially growing instabilities by the introduction of lower-order or nonlinear terms. See [76] for discussions relevant to the stability of the ADM formulation.
Such behavior can also be made worse by the imposition of boundary conditions. Linearized studies [284, 285, 18] of ADM evolutionboundary algorithms with prescribed values of lapse and shift have shown the following:

On analytic grounds, those ADM boundary algorithms that supply values for all components of the metric (or extrinsic curvature) are inconsistent.

A consistent boundary algorithm allows free specification of the transversetraceless components of the metric with respect to the boundary.

Using such a boundary algorithm, linearized ADM evolution can be carried out in a bounded domain for thousands of crossing times without sign of exponential growth, even though secularly growing modes, whose rates increase with resolution, are present.
Such results contributed to the early belief that long term evolutions might be possible by means of ADM evolution. The linearized tests satisfied the original criterion for robust stability, i.e., that there be no exponential growth when the initial Cauchy data and free boundary data are prescribed as random numbers (in the linearized regime) [285]. However, it was subsequently shown that the weakly hyperbolic nature of ADM led to uncontrolled power law instabilities. In the nonlinear regime, it is symptomatic of weakly hyperbolic systems that such instabilities become exponential. This has led to refined criteria for robust stability as a standardized test [18].
CCM cannot work unless the Cauchy and characteristic codes have robustly stable boundaries. This is necessarily so because interpolations continually introduce short wavelength noise into the neighborhood of the boundary. It has been demonstrated that the PITT characteristic code has a robustlystable boundary (see Section 4.2.5), but robustness of the Cauchy boundary has only recently been studied.
5.1 Computational boundaries
Boundary conditions are both the most important and the most difficult part of a theoretical treatment of most physical systems. Usually, that’s where all the physics is. And, in computational approaches, that’s usually where all the agony is. Computational boundaries for hyperbolic systems pose special difficulties. Even with an analytic form of the correct physical boundary condition in hand, there are seemingly infinitely more unstable numerical implementations than stable ones. In general, a stable problem places more boundary requirements on the numerical algorithm than on the corresponding partial differential equations. Furthermore, the methods of linear stability analysis are often more unwieldy to apply to the boundary than to the interior evolution algorithm. Only if there is an energy estimate for the analytic problem is there a straightforward way to proceed. In that case the integration by parts underlying the energy conservation law can be converted into a summation-by-parts [191] construction of a stable finite-difference algorithm (see the forthcoming Living Review [261]). For problems with constraints, such as general relativity, there is the additional complication that the boundary condition must enforce the constraints.
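The summation-by-parts idea can be made concrete with a toy numerical check. The sketch below (my own illustration, not from the review) builds the standard second-order-accurate SBP first-derivative operator D = H⁻¹Q, where H is a diagonal positive "norm" matrix and Q + Qᵀ = diag(−1, 0, …, 0, 1); this discrete identity mimics integration by parts and is what makes boundary energy estimates possible.

```python
import numpy as np

def sbp_operators(n, h):
    """Second-order-accurate SBP first-derivative pair (H, D) on n points."""
    H = np.eye(n) * h
    H[0, 0] = H[-1, -1] = h / 2.0          # boundary weights of the norm matrix
    Q = np.zeros((n, n))
    for i in range(n - 1):                  # antisymmetric centered interior
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                          # one-sided closures at the ends
    Q[-1, -1] = 0.5
    D = np.linalg.solve(H, Q)               # D = H^{-1} Q
    return H, D

n, h = 21, 0.05
H, D = sbp_operators(n, h)

# Discrete integration by parts: u^T H (D v) + (D u)^T H v = u_N v_N - u_1 v_1,
# equivalently H D + (H D)^T = diag(-1, 0, ..., 0, 1).
B = H @ D + (H @ D).T
```

The boundary matrix B reproduces exactly the boundary term of the continuum integration by parts, which is the property exploited to prove stability of the boundary treatment.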
The von Neumann stability analysis of the interior algorithm linearizes the equations, while assuming a uniform grid with periodic boundary conditions, and checks that the discrete Fourier modes do not grow exponentially. There is an additional stability condition that a boundary introduces into this analysis. Consider the one-dimensional case. The mode e^{kx}, with k real, is not included in the von Neumann analysis for periodic boundary conditions. However, for the half-plane problem in the domain x ≤ 0, one can legitimately prescribe such a mode as initial data as long as k > 0 so that it has finite energy. Thus, the stability of such boundary modes must be checked. In the case of an additional boundary, e.g., for a problem in the domain −1 ≤ x ≤ 1, the Godunov-Ryaben’kii theory gives as a necessary condition for stability the combined von Neumann stability of the interior and the stability of the allowed boundary modes [274]. The Kreiss condition [184, 155] strengthens this result by providing a sufficient condition for stability.
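The interior von Neumann check can be illustrated on a model problem (my own toy example, not from the review): for the advection equation u_t + u_x = 0 discretized by the first-order upwind scheme u_j^{n+1} = u_j^n − λ(u_j^n − u_{j−1}^n), the Fourier mode u_j^n = g^n e^{ijθ} has amplification factor g(θ) = 1 − λ(1 − e^{−iθ}), and stability requires |g| ≤ 1 for all real θ. The boundary modes e^{kx} discussed above are precisely what this periodic analysis misses.

```python
import numpy as np

def amplification(lam, theta):
    """Amplification factor of the upwind scheme, lam = dt/dx."""
    return 1.0 - lam * (1.0 - np.exp(-1j * theta))

thetas = np.linspace(-np.pi, np.pi, 721)

# CFL satisfied: |g| <= 1 for all modes (von Neumann stable).
stable = np.max(np.abs(amplification(0.8, thetas)))

# CFL violated: some modes grow each step (von Neumann unstable).
unstable = np.max(np.abs(amplification(1.2, thetas)))
```

For λ = 1.2 the worst mode (θ = π) is amplified by a factor 1.4 per step, so the shortest-wavelength grid mode blows up exponentially.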
The correct physical formulation of any Cauchy problem for an isolated system also involves asymptotic conditions at infinity. These conditions must ensure not only that the total energy and energy loss by radiation are both finite, but they must also ensure the proper 1/r asymptotic falloff of the radiation fields. However, when treating radiative systems computationally, an outer boundary is often established artificially at some large but finite distance in the wave zone, i.e., many wavelengths from the source. Imposing an appropriate radiation boundary condition at a finite distance is a difficult task even in the case of a simple radiative system evolving on a fixed geometric background. Gustafsson and Kreiss have shown in general that the construction of a nonreflecting boundary condition for an isolated system requires knowledge of the solution in a neighborhood of infinity [154].
When the system is nonlinear and not amenable to an exact solution, a finite outer boundary condition must necessarily introduce spurious physical effects into a Cauchy evolution. The domain of dependence of the initial Cauchy data in the region spanned by the computational grid would shrink in time along ingoing characteristics unless data on a worldtube traced out by the outer grid boundary is included as part of the problem. In order to maintain a causally sensible evolution, this worldtube data must correctly substitute for the missing Cauchy data, which would have been supplied if the Cauchy hypersurface had extended to infinity. In a scattering problem, this missing exterior Cauchy data might, for instance, correspond to an incoming pulse initially outside the outer boundary. In a scalar wave problem with field Φ, where the initial radiation is confined to a compact region inside the boundary, the missing Cauchy data outside the boundary would be Φ = Φ,_{t} = 0 at the initial time t_{0}. However, the determination of Cauchy data for general relativity is a global elliptic constraint problem so that there is no well-defined scheme to confine it to a compact region. Furthermore, even in the scalar field case where Φ = Φ,_{t} = 0 is appropriate Cauchy data outside the boundary at t_{0}, it would still be a nontrivial evolution problem to correctly assign the associated boundary data for t ≥ t_{0}.
It is common practice in computational physics to impose an artificial boundary condition (ABC), such as an outgoing radiation condition, in an attempt to approximate the proper data for the exterior region. This ABC may cause partial reflection of an outgoing wave back into the system [205, 177, 161, 248], which contaminates the accuracy of the interior evolution and the calculation of the radiated waveform. Furthermore, nonlinear waves intrinsically backscatter, which makes it incorrect to try to entirely eliminate incoming radiation from the outer region. The resulting error is of an analytic origin, essentially independent of computational discretization. In general, a systematic reduction of this error can only be achieved by moving the computational boundary to larger and larger radii. There has been recent progress in designing such absorbing boundary conditions for the gravitational field [69]. See the review [261] for details on this subject.
A traditional ABC for the wave equation is the Sommerfeld condition. For a scalar field Φ satisfying the Minkowski space wave equation
\[
\left(- \partial_t^2 + \nabla^2\right)\Phi = S,
\]
with a smooth source S of compact support emitting outgoing radiation, the exterior retarded field has the form
\[
\Phi = \frac{f(t-r,\theta,\phi)}{r} + \frac{g(t-r,\theta,\phi)}{r^{2}} + \frac{h(t-r,\theta,\phi)}{r^{3}} + \ldots,
\]
where f, g and h and their derivatives are smooth bounded functions. The simplest case is the monopole radiation
\[
\Phi = \frac{f(t-r)}{r},
\]
which satisfies (∂_{t} + ∂_{r})(rΦ) = 0. This motivates the use of the Sommerfeld condition
\[
\frac{1}{r}\left(\partial_t + \partial_r\right)\left(r\,\Phi\right)\Big\vert_{r=R} = q \qquad (56)
\]
on a finite boundary r = R.
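The monopole statement above can be verified symbolically. The following sketch (my own check, using sympy) confirms that Φ = f(t − r)/r solves the flat-space wave equation away from the source and satisfies (∂_t + ∂_r)(rΦ) = 0 exactly, which is what motivates imposing the Sommerfeld condition at a finite radius.

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
f = sp.Function('f')

# Outgoing monopole field: rPhi is a pure function of retarded time u = t - r.
Phi = f(t - r) / r

# Spherically-symmetric wave operator: Box Phi = -Phi_tt + (1/r)(r Phi)_rr
box_Phi = -sp.diff(Phi, t, 2) + sp.diff(r * Phi, r, 2) / r

# Sommerfeld quantity: (d_t + d_r)(r Phi); vanishes identically for the monopole.
sommerfeld = sp.diff(r * Phi, t) + sp.diff(r * Phi, r)
```

Both expressions simplify to zero, showing that the homogeneous Sommerfeld condition is exact for purely monopole radiation at any radius R.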
A homogeneous Sommerfeld condition, i.e., q = 0, is exact only in the spherically-symmetric case. The Sommerfeld boundary data q in the general case (56) falls off as 1/R^{3}, so that a homogeneous Sommerfeld condition introduces an error, which is small only for large R. As an example, for the dipole solution
\[
\Phi_{\rm Dipole} = \left( \frac{\partial_t f(t-r)}{r} + \frac{f(t-r)}{r^{2}} \right)\cos\theta,
\]
we have
\[
q = - \frac{f(t-R)\cos\theta}{R^{3}}.
\]
A homogeneous Sommerfeld condition at r = R would lead to a solution \({{\tilde \Phi}_{{\rm{Dipole}}}}\) containing a reflected ingoing wave. For large R,
where ∂_{t}f(t) = F(t) and the reflection coefficient has asymptotic behavior κ = O(1/R^{2}). More precisely, the Fourier mode
satisfies the homogeneous boundary condition \((\partial_t + \partial_r)(r{\tilde \Phi}_{\rm Dipole}(\omega))\vert_{r=R} = 0\) with reflection coefficient
Much work has been done on formulating boundary conditions, both exact and approximate, for linear problems in situations that are not spherically symmetric. These boundary conditions are given various names in the literature, e.g., absorbing or nonreflecting. A variety of ABCs have been reported for linear problems. See [129, 248, 298, 256, 50] for general discussions.
Local ABCs have been extensively applied to linear problems with varying success [205, 106, 41, 297, 161, 61, 178]. Some of these conditions are local approximations to exact integral representations of the solution in the exterior of the computational domain [106], while others are based on approximating the dispersion relation of the one-way wave equations [205, 297]. Higdon [161] showed that this last approach is essentially equivalent to specifying a finite number of angles of incidence for which the ABCs yield perfect transmission. Local ABCs have also been derived for the linear wave equation by considering the asymptotic behavior of outgoing solutions [41], thus generalizing the Sommerfeld outgoing radiation condition. Although this type of ABC is relatively simple to implement and has a low computational cost, the final accuracy is often limited because the assumptions made about the behavior of the waves are rarely met in practice [129, 298].
The disadvantages of local ABCs have led some workers to consider exact nonlocal boundary conditions based on integral representations of the infinite domain problem [296, 129, 298]. Even for problems where the Green’s function is known and easily computed, such approaches were initially dismissed as impractical [106]; however, the rapid increase in computer power has made it possible to implement exact nonlocal ABCs for the linear wave equation and Maxwell’s equations in 3D [93, 149]. If properly implemented, this method can yield numerical solutions to a linear problem, which converge to the exact infinite domain problem in the continuum limit, while keeping the artificial boundary at a fixed distance. However, due to nonlocality, the computational cost per time step usually grows at a higher power with grid size (\({\mathcal O}({N^4})\) per time step in three dimensions) than in a local approach [129, 93, 298].
The extension of ABCs to nonlinear problems is much more difficult. The problem is normally treated by linearizing the region between the outer boundary and infinity, using either local or nonlocal linear ABCs [298, 256]. The neglect of the nonlinear terms in this region introduces an unavoidable error at the analytic level. But even larger errors are typically introduced in prescribing the outer boundary data. This is a subtle global problem because the correct boundary data must correspond to the continuity of fields and their normal derivatives when extended across the boundary into the linearized exterior. This is a clear requirement for any consistent boundary algorithm, since discontinuities in the field or its derivatives would otherwise act as a spurious sheet source on the boundary, which contaminates both the interior and the exterior evolutions. But the fields and their normal derivatives constitute an overdetermined set of data for the boundary problem. So it is necessary to solve a global linearized problem, not just an exterior one, in order to find the proper data. The designation “exact ABC” is given to an ABC for a nonlinear system whose only error is due to linearization of the exterior. An exact ABC requires the use of global techniques, such as the difference potentials method, to eliminate back reflection at the boundary [298].
There have been only a few applications of ABCs to strongly nonlinear problems [129]. Thompson [293] generalized a previous nonlinear ABC of Hedstrom [160] to treat 1D and 2D problems in gas dynamics. These boundary conditions performed poorly in some situations because of their difficulty in adequately modeling the field outside the computational domain [293, 129]. Hagstrom and Hariharan [156] have overcome these difficulties in 1D gas dynamics by a clever use of Riemann invariants. They proposed a heuristic generalization of their local ABC to 3D, but this approach has not yet been validated.
In order to reduce the level of approximation at the analytic level, an artificial boundary for a nonlinear problem must be placed sufficiently far from the strong-field region. This sharply increases the computational cost in multidimensional simulations [106]. There is no numerical method, which converges (as the discretization is refined) to the infinite domain exact solution of a strongly nonlinear wave problem in multidimensions, while keeping the artificial boundary fixed. Attempts to use compactified Cauchy hypersurfaces, which extend the domain to spatial infinity, have failed because the phase of short wavelength radiation varies rapidly in spatial directions [177]. Characteristic evolution avoids this problem by approaching infinity along the phase fronts.
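A back-of-the-envelope illustration of why spacelike compactification fails (my own toy calculation, with invented numbers): map r ∈ [0, ∞) to x = r/(1 + r) ∈ [0, 1). A wave of fixed physical wavelength λ oscillates in r, so its coordinate wavelength is λ·dx/dr = λ/(1 + r)², which shrinks quadratically; a uniform grid in x then has ever fewer points per wavelength as r grows. Along outgoing null slices the phase t − r is constant on the rays, so no such oscillation needs to be resolved.

```python
# Toy resolution estimate for a compactified spacelike coordinate x = r/(1+r).
N, lam = 1000, 1.0            # grid points in x, physical wavelength (assumed)
dx = 1.0 / N                  # uniform grid spacing in the compactified coordinate

def points_per_wavelength(r):
    """Grid points spanning one wavelength at radius r on the x grid."""
    coord_wavelength = lam / (1.0 + r) ** 2   # wavelength measured in x
    return coord_wavelength / dx

near = points_per_wavelength(1.0)     # well resolved in the interior
far = points_per_wavelength(100.0)    # hopelessly under-resolved far out
```

With these numbers, a wave near r = 1 is covered by hundreds of points, while at r = 100 less than one grid point falls per wavelength, so the phase cannot be represented at all.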
CCM is a strategy that eliminates this nonlinear source of error. In the simplest version of CCM, Cauchy and characteristic evolution algorithms are pasted together in the neighborhood of a worldtube to form a global evolution algorithm. The characteristic algorithm provides an outer boundary condition for the interior Cauchy evolution, while the Cauchy algorithm supplies an inner boundary condition for the characteristic evolution. The matching worldtube provides the geometric framework necessary to relate the two evolutions. The Cauchy foliation slices the worldtube into spherical cross-sections. The characteristic evolution is based upon the outgoing null hypersurfaces emanating from these slices, with the evolution proceeding from one hypersurface to the next by the outward radial march described in Section 3.1. There is no need to truncate spacetime at a finite distance from the source, since compactification of the radial null coordinate used in the characteristic evolution makes it possible to cover the infinite space with a finite computational grid. In this way, the true waveform may be computed up to discretization error by the finite-difference algorithm.
5.2 The computational matching strategy
CCM evolves a mixed spacelike-null initial-value problem in which Cauchy data is given in a spacelike hypersurface bounded by a spherical boundary \({\mathcal S}\) and characteristic data is given on a null hypersurface emanating from \({\mathcal S}\). The general idea is not entirely new. An early mathematical investigation combining spacelike and characteristic hypersurfaces appears in the work of Duff [103]. The three chief ingredients for computational implementation are: (i) a Cauchy evolution module, (ii) a characteristic evolution module and (iii) a module for matching the Cauchy and characteristic regions across their interface. In the simplest scenario, the interface is the timelike worldtube, which is traced out by the flow of \({\mathcal S}\) along the worldlines of the Cauchy evolution, as determined by the choice of lapse and shift. Matching provides the exchange of data across the worldtube to allow evolution without any further boundary conditions, as would be necessary in either a purely Cauchy or purely characteristic evolution. Other versions of CCM involve a finite overlap between the characteristic and Cauchy regions.
The most important application of CCM is anticipated to be the waveform and momentum recoil in the binary black-hole inspiral and merger. The 3D Cauchy codes being applied to simulate this problem employ a single Cartesian coordinate patch. In principle, the application of CCM to this problem might seem routine, tantamount to translating into finite-difference form the textbook construction of an atlas consisting of overlapping coordinate patches. In practice, it is a complicated project. The computational strategy has been outlined in [52]. The underlying algorithm consists of the following main submodules:

The boundary module, which sets the grid structures. This defines masks identifying which points in the Cauchy grid are to be evolved by the Cauchy module and which points are to be interpolated from the characteristic grid, and vice versa. The reference structures for constructing the masks are the inner characteristic boundary, which in the Cartesian Cauchy coordinates is the “spherical” extraction worldtube \({x^2} + {y^2} + {z^2} = R_E^2\), and the outer Cauchy boundary \({x^2} + {y^2} + {z^2} = R_I^2\), where the Cauchy boundary data is injected. The choice of lapse and shift for the Cauchy evolution governs the dynamical and geometrical properties of these worldtubes.

The extraction module whose input is Cauchy grid data in the neighborhood of the extraction worldtube at R_{E} and whose output is the inner boundary data for the exterior characteristic evolution. This module numerically implements the transformation from Cartesian {3 + 1} coordinates to spherical null coordinates. The algorithm makes no perturbative assumptions and is based upon interpolations of the Cauchy data to a set of prescribed grid points near R_{E}. The metric information is then used to solve for the null geodesics normal to the slices of the extraction worldtube. This provides the Jacobian for the transformation to null coordinates in the neighborhood of the worldtube. The characteristic evolution module is then used to propagate the data from the worldtube to null infinity, where the waveform is calculated.

The injection module, which completes the interface by using the exterior characteristic evolution to inject the outer boundary data for the Cauchy evolution at R_{I}. This is the inverse of the extraction procedure but must be implemented with R_{I} > R_{E} to allow for overlap between the Cauchy and characteristic domains. The overlap region can be constructed either to have a fixed physical size or to shrink to zero in the continuum limit. In the latter case, the inverse Jacobian describing the transformation from null to Cauchy coordinates can be obtained to prescribed accuracy in terms of an affine parameter expansion along the null geodesics emanating from the worldtube. The numerical stability of this element of the scheme is not guaranteed.
The above strategy provides a model of how Cauchy and characteristic codes can be pieced together as modules to form a global evolution code.
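The data flow through the boundary, extraction, and injection modules can be caricatured in a purely schematic driver loop. In the sketch below every class and method name is my own invention, and the "modules" are trivial stand-ins that only model the exchange of boundary data, not any real relativity code.

```python
class StubModule:
    """Trivial stand-in for an evolution module holding one scalar 'field'."""
    def __init__(self, value):
        self.value = value
        self.boundary = None
    def interpolate_near(self, radius):
        return self.value                   # placeholder worldtube interpolation
    def set_inner_boundary(self, data):
        self.boundary = data
    def set_outer_boundary(self, data):
        self.boundary = data
    def evolve(self, dt):
        self.value += dt                    # placeholder "evolution"
    def news_at_scri(self):
        return self.value                   # placeholder waveform readout

class CCMDriver:
    """One CCM cycle: extraction -> characteristic evolution -> injection."""
    def __init__(self, cauchy, characteristic, r_extract, r_inject):
        assert r_inject > r_extract         # overlap region R_E < r < R_I
        self.cauchy, self.char = cauchy, characteristic
        self.r_e, self.r_i = r_extract, r_inject

    def step(self, dt):
        # Extraction: Cauchy data near r = R_E becomes inner boundary data
        # for the characteristic evolution.
        self.char.set_inner_boundary(self.cauchy.interpolate_near(self.r_e))
        # Characteristic march out to (compactified) null infinity,
        # where the waveform is read off.
        self.char.evolve(dt)
        waveform = self.char.news_at_scri()
        # Injection: characteristic data near r = R_I becomes outer boundary
        # data for the Cauchy evolution.
        self.cauchy.set_outer_boundary(self.char.interpolate_near(self.r_i))
        self.cauchy.evolve(dt)
        return waveform

cauchy, char = StubModule(1.0), StubModule(0.0)
driver = CCMDriver(cauchy, char, r_extract=20.0, r_inject=40.0)
waveform = driver.step(0.1)
```

The point of the skeleton is the ordering: each cycle the two evolutions feed each other boundary data through the overlap region, so neither needs an artificial boundary condition of its own.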
The full advantage of CCM lies in the numerical treatment of nonlinear systems, where its error converges to zero in the continuum limit for any size outer boundary and extraction radius [45, 46, 89]. For high accuracy, CCM is also very efficient. For small target error ε, it has been shown on the assumption of unigrid codes that the relative amount of computation required for CCM (A_{CCM}) compared to that required for a pure Cauchy calculation (A_{C}) goes to zero, A_{CCM}/A_{C} → 0 as ε → 0 [56, 52]. An important factor here is the use of a compactified characteristic evolution, so that the whole spacetime is represented on a finite grid. From a numerical point of view this means that the only error made in a calculation of the radiation waveform at infinity is the controlled error due to the finite discretization.
The accuracy of a Cauchy algorithm, which uses an ABC, requires a large grid domain in order to avoid error from nonlinear effects in its exterior. Improved numerical techniques, such as Cauchy grids whose resolution decreases with radius, have increased the efficiency of this approach. Nevertheless, the computational demands of CCM are small, since the interface problem involves one less dimension than the evolution problem and characteristic evolution algorithms are more efficient than Cauchy algorithms. CCM also offers the possibility of using a small matching radius, consistent with the requirement that it lie in the region exterior to any caustics. This is advantageous in simulations of stellar collapse, in which the star extends over the entire computational grid, although it is then necessary to include the matter in the characteristic treatment.
At present, the computational strategy of CCM is mainly the tool of numerical relativists, who are used to dealing with dynamical coordinate systems. The first discussion of its potential was given in [45] and its feasibility has been more fully explored in [89, 90, 102, 49, 287]. Recent work has been stimulated by the requirements of the binary black-hole problem, where CCM is one of the strategies to provide boundary conditions and determine the radiation waveform. However, it also has inherent advantages in dealing with other hyperbolic systems in computational physics, particularly nonlinear three-dimensional problems. A detailed study of the stability and accuracy of CCM for linear and nonlinear wave equations has been presented in [50], illustrating its potential for a wide range of problems.
5.3 The outer Cauchy boundary in numerical relativity
A special issue arising in general relativity is whether the boundary conditions on an artificial outer worldtube preserve the constraints. It is typical of hyperbolic reductions of the Einstein equations that the Hamiltonian and momentum constraints propagate in a domain of dependence dictated by the characteristics. Unless the boundary conditions enforce these constraints, they will be violated outside the domain of dependence of the initial Cauchy hypersurface. This issue of a constraint-preserving initial-boundary value problem has only recently been addressed [282]. The first fully nonlinear treatment of a well-posed constraint-preserving formulation of the Einstein initial-boundary value problem (IBVP) has subsequently been given by Friedrich and Nagy [118]. Their treatment is based upon a frame formulation in which the evolution variables are the tetrad, connection coefficients and Weyl curvature. Although this system has not yet been implemented computationally, it has spurred the investigation of simpler treatments of Einstein equations, which give rise to a constraint-preserving IBVP under various restrictions [74, 287, 75, 122, 150, 254, 192]. See [260, 250] for reviews.
The successful implementation of CCM for Einstein’s equations requires a well-posed initial-boundary value problem for the artificial outer boundary of the Cauchy evolution. This is particularly cogent for dealing with waveform extraction in the simulation of black holes by BSSN formulations. There is no well-posed outer boundary theory for the BSSN formulation and the strategy is to place the boundary out far enough so that it does no harm. The harmonic formulation has a simpler mathematical structure as a system of coupled quasilinear wave equations, which is more amenable to an analytic treatment.
Standard harmonic coordinates satisfy the covariant wave equation
\[
\Box_g\, x^{\alpha} := \frac{1}{\sqrt{-g}}\,\partial_{\beta}\left(\sqrt{-g}\, g^{\alpha\beta}\right) = -\Gamma^{\alpha} = 0. \qquad (64)
\]
This can easily be generalized to include gauge forcing [117], whereby Γ^{α} = f^{α}(x^{β}, g^{βγ}). For simplicity of discussion, I will set Γ^{α} = 0, although gauge forcing is an essential tool in simulating black holes [235].
When Γ^{α} = 0, Einstein’s equations reduce to the ten quasilinear wave equations
\[
g^{\mu\nu}\partial_{\mu}\partial_{\nu}\, g^{\alpha\beta} = S^{\alpha\beta}, \qquad (65)
\]
where S^{αβ} does not enter the principal part and vanishes in the linearized approximation. Straightforward techniques can be applied to formulate a well-posed IBVP for the system (65). The catch is that Einstein’s equations are not necessarily satisfied unless the constraints are also satisfied.
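A small symbolic illustration of the harmonic coordinate conditions (my own check): a coordinate is harmonic when the covariant wave operator annihilates it. For Minkowski space in spherical coordinates (t, r, θ, φ), the coordinate t is harmonic but r is not (□r = 2/r), which is why generic curvilinear coordinates violate the gauge condition unless gauge-forcing terms are introduced.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
coords = [t, r, th, ph]

# Minkowski metric in spherical coordinates, signature (-, +, +, +).
g = sp.diag(-1, 1, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
sqrtg = r**2 * sp.sin(th)                 # sqrt(-det g)

def box(scalar):
    """Covariant wave operator (1/sqrt(-g)) d_b (sqrt(-g) g^{bc} d_c)."""
    total = sp.Integer(0)
    for b in range(4):
        inner = sum(ginv[b, c] * sp.diff(scalar, coords[c]) for c in range(4))
        total += sp.diff(sqrtg * inner, coords[b])
    return sp.simplify(total / sqrtg)

box_t = box(t)   # vanishes: t is a harmonic coordinate
box_r = box(r)   # equals 2/r: r fails the harmonic condition
```

The nonzero □r is exactly the kind of term that the gauge-forcing generalization Γ^α = f^α(x^β, g^{βγ}) is designed to absorb.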
In the harmonic formalism, the constraints can be reduced to the harmonic coordinate conditions (64). For the resulting IBVP to be constraint-preserving, these harmonic conditions must be built into the boundary condition. Numerous early attempts to accomplish this failed because Equation (64) contains derivatives tangent to the boundary, which do not fit into the standard methods for obtaining the necessary energy estimates. The use of pseudo-differential techniques developed for similar problems in elasticity theory has led to the first well-posed formulation of the IBVP for the harmonic Einstein equations [192]. Subsequently, well-posedness was also obtained using energy estimates by means of a novel, nonconventional choice of the energy for the harmonic system [189]. A Cauchy evolution code, the Abigel code, based upon a discretized version of these energy estimates was found to be stable, convergent and constraint-preserving in nonlinear boundary tests [14]. These results were confirmed using an independent harmonic code developed at the Albert Einstein Institute [266]. A linearized version of the Abigel code has been used to successfully carry out CCM (see Section 5.8).
Given a well-posed IBVP, there is the additional complication of the correct specification of boundary data. Ideally, this data would be supplied by matching to a solution extending to infinity, e.g., by CCM. In the formulations of [192] and [189], the boundary conditions are of the Sommerfeld type for which homogeneous boundary data, i.e., zero boundary values, is a good approximation in the sense that the reflection coefficients for gravitational waves fall off as O(1/R^{3}) as the boundary radius R → ∞ [190]. A second differential order boundary condition based upon requiring the Newman-Penrose [216] Weyl tensor component ψ_{0} = 0 has also been shown to be well-posed by means of pseudo-differential techniques [254]. For this ψ_{0} condition, the reflection coefficients fall off at an additional power of 1/R. In the present state of the art of black-hole simulations, the ψ_{0} condition comes closest to a satisfactory treatment of the outer boundary [252].
5.4 Perturbative matching schemes
In numerous analytic studies outside of general relativity, matching techniques have successfully cured pathologies in perturbative expansions [215]. Matching is a strategy for obtaining a global solution by patching together solutions obtained using different coordinate systems for different regions. By adapting each coordinate system to a length scale appropriate to its domain, a globally-convergent perturbation expansion is sometimes possible in cases where a single coordinate system would fail.
In general relativity, Burke showed that matching could be used to eliminate some of the divergences arising in perturbative calculations of gravitational radiation [70]. Kates and Kegles further showed that use of an exterior null coordinate system in the matching scheme could eliminate problems in the perturbative treatment of a scalar radiation field on a Schwarzschild background [179]. The Schwarzschild light cones have drastically different asymptotic behavior from the artificial Minkowski light cones used in perturbative expansions based upon a flat-space Green function. Use of the Minkowski light cones leads to nonuniformities in the expansion of the radiation fields, which are eliminated by the use of true null coordinates in the exterior. Kates, Anderson, Kegles, and Madonna extended this work to the fully general-relativistic case and reached the same conclusion [10]. Anderson later applied this approach to the slow motion approximation of a binary system and obtained a derivation of the radiation-reaction effect on the orbital period, which avoided some objections to earlier approaches [6]. The use of the true light cones was also essential in formulating as a mathematical theorem that the Bondi news function satisfies the Einstein quadrupole formula to leading order in a Newtonian limit [304]. Although questions of mathematical consistency still remain in the perturbative treatment of gravitational radiation, it is clear that the use of characteristic methods pushes these problems to a higher perturbative order.
One of the first computational applications of characteristic matching was a hybrid numerical-analytical treatment by Anderson and Hobill of the test problem of nonlinear 1D scalar waves [7, 8, 9]. They matched an inner numerical solution to a far field solution, which was obtained by a perturbation expansion. A key ingredient is that the far field is solved in retarded null coordinates (u, r). Because the transformation from null coordinates (u, r) to Cauchy coordinates (t, r) is known analytically for this problem, the matching between the null and Cauchy solutions is quite simple. Causality was enforced by requiring that the system be stationary prior to some fixed time. This eliminates extraneous incoming radiation in a physically correct way for such a system, but it is nontrivial to generalize, say, to the problem of radiation from an orbiting binary.
Later, a global, characteristic, numerical study of the self-gravitating version of this problem confirmed that the use of the true null cones is essential in getting the correct radiated waveform [145]. For quasiperiodic radiation, the phase of the waveform is particularly sensitive to the truncation of the outer region at a finite boundary. Although a perturbative estimate would indicate an \({\mathcal O}(M/R)\) error, this error accumulates over many cycles to produce an error of order π in the phase.
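The accumulation effect is simple arithmetic; the toy estimate below uses my own illustrative numbers and assumes the fractional frequency error scales like M/R, so the phase error per cycle is roughly 2π·M/R and grows linearly with the number of cycles.

```python
import math

# Assumed (illustrative) perturbative error from truncating at radius R:
M_over_R = 1.0e-3
phase_error_per_cycle = 2.0 * math.pi * M_over_R   # ~ 2*pi*M/R per cycle

def cycles_until_phase_error(target):
    """Number of cycles before the accumulated phase error reaches target."""
    return target / phase_error_per_cycle

# Even a "small" O(M/R) error reaches a phase error of order pi quickly:
n_cycles = cycles_until_phase_error(math.pi)
```

With M/R = 10⁻³ the phase error reaches π after only 500 cycles, which is why a perturbatively small truncation error can still destroy a quasiperiodic waveform.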
Anderson and Hobill proposed that their method be extended to general relativity by matching a numerical solution to an analytic 1/r expansion in null coordinates. Most perturbative-numerical matching schemes that have been implemented in general relativity have been based upon perturbations of a Schwarzschild background using the standard Schwarzschild time slicing [1, 3, 2, 4, 255, 251, 214]. It would be interesting to compare results with an analytic-numeric matching scheme based upon the true null cones. Although the full proposal by Anderson and Hobill has not been carried out, characteristic techniques have been used [207, 77, 174] to study the radiation content of numerical solutions by treating the far field as a perturbation of a Schwarzschild spacetime.
Most metric-based treatments of gravitational radiation are based upon perturbations of the Schwarzschild metric and solve the underlying Regge-Wheeler [240] and Zerilli [309] equations using traditional spacelike Cauchy hypersurfaces. At one level, these approaches extract the radiation from a numerical solution in a region with outer boundary \({\mathcal B}\) by using data on an inner worldtube \({\mathcal W}\) to construct the perturbative solution. Ambiguities are avoided by use of Moncrief’s gauge-invariant perturbation quantities [212]. For this to work, \({\mathcal W}\) must not only be located in the far field, i.e., many wavelengths from the source, but, because of the lack of proper outer boundary data, it is necessary that the boundary \({\mathcal B}\) be sufficiently far outside \({\mathcal W}\) so that the extracted radiation is not contaminated by back-reflection for some significant window of time. This poses extreme computational requirements in a 3D problem. This extraction strategy has also been carried out using characteristic evolution in the exterior of \({\mathcal W}\) instead of a perturbative solution, i.e., Cauchy-characteristic extraction [56] (see Section 6).
A study by Babiuc, Szilágyi, Hawke, and Zlochower carried out in the perturbative regime [15] shows that CCE compares favorably with Zerilli extraction and has advantages at small extraction radii. When the extraction worldtube is sufficiently large, e.g., r = 200λ, where λ is the characteristic wavelength of the radiation, the Zerilli and CCE methods both give excellent results. However, the accuracy of CCE remains unchanged at small extraction radii, e.g., r = 10λ, whereas the Zerilli approach shows error associated with near zone effects. This flexibility to apply CCE to small extraction radii has proved advantageous in the simulations of stellar collapse [246, 220] discussed in Section 6.3.
The contamination of the extracted radiation by back-reflection can only be completely eliminated by matching to an exterior solution, which injects the physically-appropriate boundary data on \({\mathcal W}\). Cauchy-perturbative matching [255, 251] has been implemented using the same modular structure described for CCM in Section 5.2. Nagar and Rezzolla [214] have given a review of this approach. At present, perturbative matching and CCM share the common problem of long-term stability of the outer Cauchy boundary in 3D applications.
5.5 Cauchy-characteristic matching for 1D gravitational systems
The first numerical implementations of CCM were 1D feasibility studies. These model problems provided a controlled environment for the development of CCM, in which either exact solutions or independent numerical solutions were known. In the following studies CCM worked like a charm in a variety of 1D applications, i.e., the matched evolutions were essentially transparent to the presence of the interface.
5.5.1 Cylindrical matching
The Southampton group chose cylindrically-symmetric systems as their model problem for developing matching techniques. In preliminary work, they showed how CCM could be consistently carried out for a scalar wave evolving in Minkowski spacetime but expressed in a nontrivial cylindrical coordinate system [89].
They then tackled the gravitational problem. First they set up the analytic machinery necessary for investigating cylindrically symmetric vacuum spacetimes [90]. Although the problem involves only one spatial dimension, there are two independent modes of polarization. The Cauchy metric was treated in the Jordan-Ehlers-Kompaneets canonical form, using coordinates (t, r, ϕ, z) adapted to the (ϕ, z) cylindrical symmetry. The advantage here is that u = t − r is then a null coordinate, which can be used for the characteristic evolution. They successfully recast the equations in a suitably regularized form for the compactification of \({{\mathcal I}^ +}\) in terms of the coordinate \(y = \sqrt {1/r}\). The simple analytic relationship between Cauchy coordinates (t, r) and characteristic coordinates (u, y) facilitated the translation between Cauchy and characteristic variables on the matching worldtube, given by r = const.
Next they implemented the scheme as a numerical code. The interior Cauchy evolution was carried out using an unconstrained leapfrog scheme. It is notable that they report no problems with instability, which have arisen in other attempts at unconstrained leapfrog evolution in general relativity. The characteristic evolution also used a leapfrog scheme for the evolution between retarded time levels u, while numerically integrating the hypersurface equations outward along the characteristics.
The matching interface was located at points common to both the Cauchy and characteristic grids. In order to update these points by Cauchy evolution, it was necessary to obtain field values at the Cauchy “ghost” points, which lie outside the worldtube in the characteristic region. These values were obtained by interpolation from characteristic grid points (lying on three levels of null hypersurfaces in order to ensure secondorder accuracy). Similarly, the boundary data for starting up the characteristic integration was obtained by interpolation from Cauchy grid values inside the worldtube.
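The ghost-point filling described above can be sketched schematically: quadratic (three-point) Lagrange interpolation, first in radius on each of three null levels and then in retarded time across the levels, yields the second-order accuracy mentioned above. The data layout and function names here are hypothetical, not those of the Southampton code.

```python
import numpy as np

def lagrange3(x, xs, fs):
    """Quadratic Lagrange interpolation through three points (second-order accurate)."""
    (x0, x1, x2), (f0, f1, f2) = xs, fs
    return (f0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
          + f1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
          + f2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

def fill_ghost_point(t_ghost, r_ghost, null_levels):
    """Fill a Cauchy ghost point (t_ghost, r_ghost) from characteristic data.

    null_levels: three (u, r_grid, g_grid) tuples on successive null
    hypersurfaces (hypothetical data layout).  Interpolate in r on each
    level, then in retarded time u = t - r across the three levels.
    """
    us, vals = [], []
    for u, r_grid, g_grid in null_levels:
        j = np.searchsorted(r_grid, r_ghost) - 1
        j = min(max(j, 1), len(r_grid) - 2)      # center a 3-point stencil
        vals.append(lagrange3(r_ghost, r_grid[j-1:j+2], g_grid[j-1:j+2]))
        us.append(u)
    return lagrange3(t_ghost - r_ghost, us, vals)
```

The same two-stage interpolation, run in the opposite direction, supplies the characteristic start-up data from Cauchy grid values inside the worldtube.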
The matching code was first tested [102] using exact Weber-Wheeler cylindrical waves [301], which come in from \({{\mathcal I}^ -}\), pass through the symmetry axis and expand out to \({{\mathcal I}^ +}\). The numerical errors were oscillatory with low growth rate, and second-order convergence was confirmed. Of special importance, little numerical noise was introduced by the interface. Comparisons of CCM were made with Cauchy evolutions using a standard outgoing radiation boundary condition [230]. At high amplitudes the standard condition developed a large error very quickly and was competitive only for weak waves with a large outer boundary. In contrast, the matching code performed well even with a small matching radius. Some interesting simulations were presented in which an outgoing wave in one polarization mode collided with an incoming wave in the other mode, a problem studied earlier by pure Cauchy evolution [232]. The simulations of the collision were qualitatively similar in these two studies.
The Weber-Wheeler waves contain only one gravitational degree of freedom. The code was next tested [98] using exact cylindrically symmetric solutions, due to Piran, Safier and Katz [231], which contain both degrees of freedom. These solutions are singular at \({{\mathcal I}^ +}\) so that the code had to be suitably modified. Relative errors of the various metric quantities were in the range 10^{−4} to 10^{−2}. The convergence rate of the numerical solution starts off as second order but diminishes to first order after long time evolution. This performance could perhaps be improved by incorporating subsequent improvements in the characteristic code made by Sperhake, Sjödin, and Vickers (see Section 3.1).
5.5.2 Spherical matching
A joint collaboration between groups at Pennsylvania State University and the University of Pittsburgh applied CCM to the Einstein-Klein-Gordon (EKG) system with spherical symmetry [138]. This model problem allowed simulation of black-hole formation as well as wave propagation.
The geometrical setup is analogous to the cylindrically symmetric problem. Initial data were specified on the union of a spacelike hypersurface and a null hypersurface. The evolution used a 3-level Cauchy scheme in the interior and a 2-level characteristic evolution in the compactified exterior. A constrained Cauchy evolution was adopted because of its earlier success in accurately simulating scalar wave collapse [80]. Characteristic evolution was based upon the null parallelogram algorithm (19). The matching between the Cauchy and characteristic foliations was achieved by imposing continuity conditions on the metric, extrinsic curvature and scalar field variables, ensuring smoothness of fields and their derivatives across the matching interface. The extensive analytical and numerical studies of this system in recent years aided the development of CCM in this nontrivial geometrical setting by providing basic knowledge of the expected physical and geometrical behavior, in the absence of exact solutions.
The CCM code accurately handled wave propagation and black-hole formation for all values of M/R at the matching radius, with no symptoms of instability or backreflection. Second-order accuracy was established by checking energy conservation.
5.5.3 Excising 1D black holes
In further developmental work on the EKG model, the Pittsburgh group used CCM to formulate a new treatment of the inner Cauchy boundary for a black-hole spacetime [141]. In the excision strategy, the inner boundary of the Cauchy evolution is located at an apparent horizon, which must lie inside (or on) the event horizon [300]. The physical rationale behind this apparent horizon boundary condition is that the truncated region of spacetime cannot causally affect the gravitational waves radiated to infinity. However, it should be noted that many Cauchy formalisms contain superluminal gauge or constraint-violating modes so that this strategy is not always fully justified.
In the CCM excision strategy, illustrated in Figure 6, the interior black-hole region is evolved using an ingoing null algorithm whose inner boundary is a marginally trapped surface (MTS), and whose outer boundary lies outside the black hole and forms the inner boundary of a region evolved by the Cauchy algorithm. In turn, the outer boundary of the Cauchy region is handled by matching to an outgoing null evolution extending to \({{\mathcal I}^ +}\). Data are passed between the inner characteristic and central Cauchy regions using a CCM procedure similar to that already described for an outer Cauchy boundary. The main difference is that, whereas the outer Cauchy boundary data is induced from the Bondi metric on an outgoing null hypersurface, the inner Cauchy boundary is now obtained from an ingoing null hypersurface, which enters the event horizon and terminates at an MTS.
The translation from an outgoing to an incoming null evolution algorithm can be easily carried out. The substitution β → β + iπ/2 in the 3D version of the Bondi metric (14) provides a simple formal recipe for switching from an outgoing to an ingoing null formalism [141].
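As a check (not spelled out in the source), the substitution acts on the metric only through the exponential,

```latex
e^{2(\beta + i\pi/2)} = e^{i\pi}\, e^{2\beta} = -\, e^{2\beta} ,
```

so every term in the Bondi metric proportional to e^{2β}, in particular the du dr cross-term, reverses sign while the metric remains real, which is the formal step converting the retarded-time (outgoing) form into an advanced-time (ingoing) form.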
In order to ensure that trapped surfaces exist on the ingoing null hypersurfaces, initial data were chosen, which guarantee black-hole formation. Such data can be obtained from initial Cauchy data for a black hole. However, rather than extending the Cauchy hypersurface inward to an apparent horizon, it was truncated sufficiently far outside the apparent horizon to avoid computational problems with the Cauchy evolution. The initial Cauchy data were then extended into the black-hole interior as initial null data until an MTS was reached. Two ingredients were essential in order to arrange this. First, the inner matching surface must be chosen to be convex, in the sense that its outward null normals uniformly diverge and its inner null normals uniformly converge. (This is trivial to satisfy in the spherically symmetric case.) Given any physically reasonable matter source, the focusing theorem guarantees that the null rays emanating inward from the matching sphere continue to converge until reaching a caustic. Second, the initial null data must lead to a trapped surface before such a caustic is encountered. This is a relatively easy requirement to satisfy because the initial null data can be posed freely, without any elliptic or algebraic constraints other than continuity with the Cauchy data.
A code was developed, which implemented CCM at both the inner and outer boundaries [141]. Its performance showed that CCM provides as good a solution to the black-hole excision problem in spherical symmetry as any previous treatment [262, 263, 208, 11]. CCM is computationally more efficient than these pure Cauchy approaches (fewer variables) and much easier to implement. Depending upon the Cauchy formalism adopted, achieving stability with a pure Cauchy scheme in the region of an apparent horizon can be quite tricky, involving much trial and error in choosing finite-difference schemes. There were no complications with stability of the null evolution at the MTS.
The Cauchy evolution was carried out in ingoing Eddington-Finkelstein (IEF) coordinates. The initial Cauchy data consisted of a Schwarzschild black hole with an ingoing Gaussian pulse of scalar radiation. Since IEF coordinates are based on ingoing null cones, it is possible to construct a simple transformation between the IEF Cauchy metric and the ingoing null metric. Initially there was no scalar field present on either the ingoing or outgoing null patches. The initial values for the Bondi variables β and V were determined by matching to the Cauchy data at the matching surfaces and integrating the hypersurface equations (16, 17).
As the evolution proceeds, the scalar field passes into the black hole, and the MTS grows outward. The MTS is easily located in the spherically symmetric case by an algebraic equation. In order to excise the singular region, the grid points inside the MTS were identified and masked out of the evolution. The backscattered radiation propagated cleanly across the outer matching surface to \({{\mathcal I}^ +}\). The strategy worked smoothly, and second-order accuracy of the approach was established by comparing it to an independent numerical solution obtained using a second-order accurate, purely Cauchy code [208]. As discussed in Section 5.9, this inside-outside application of CCM has potential application to the binary black-hole problem.
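The masking step can be sketched schematically: in spherical symmetry the MTS is the outermost zero of the outward null expansion, and points inside it are excised. The expansion profile and names below are synthetic illustrations, not the actual code of [141].

```python
import numpy as np

def mask_inside_mts(r, theta_out):
    """Mask grid points inside the marginally trapped surface (MTS).

    r: radial grid (increasing); theta_out: outward null expansion on
    that grid.  The MTS is taken as the outermost zero crossing of
    theta_out; points inside it are excised from the evolution.
    """
    trapped = theta_out <= 0.0
    if not trapped.any():
        return np.zeros_like(r, dtype=bool), None
    j = int(np.nonzero(trapped)[0][-1])          # outermost trapped point
    # linear interpolation for the MTS radius between grid points
    if j + 1 < len(r):
        t0, t1 = theta_out[j], theta_out[j + 1]
        r_mts = r[j] + (r[j + 1] - r[j]) * t0 / (t0 - t1)
    else:
        r_mts = r[j]
    mask = np.zeros_like(r, dtype=bool)
    mask[:j + 1] = True                          # excise the singular region
    return mask, r_mts
```

With a Schwarzschild-like profile θ(r) = 1 − 2M/r the recovered MTS radius is r ≈ 2M, and the mask covers exactly the points interior to it.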
In a variant of this double CCM matching scheme, Lehner [196] has eliminated the middle Cauchy region between R_{0} and R_{1} in Figure 6. He constructed a 1D code matching the ingoing and outgoing characteristic evolutions directly across a single timelike worldtube. In this way, he was able to simulate the global problem of a scalar wave falling into a black hole by purely characteristic methods.
5.6 Axisymmetric Cauchy-characteristic matching
The Southampton CCM project is being carried out for spacetimes with (twisting) axial symmetry. The formal basis for the matching scheme was developed by d’Inverno and Vickers [99, 100]. Similar to the Pittsburgh 3D strategy (see Section 5.2), matching is based upon an extraction module, which supplies boundary data for the exterior characteristic evolution, and an injection module, which supplies boundary data for the interior Cauchy evolution. However, their use of spherical coordinates for the Cauchy evolution (as opposed to Cartesian coordinates in the 3D strategy) allows use of a matching worldtube r = R_{m}, which lies simultaneously on Cauchy and characteristic grid points. This tremendously simplifies the necessary interpolations between the Cauchy and characteristic evolutions, at the expense of dealing with the r = 0 coordinate singularity in the Cauchy evolution. The characteristic code (see Section 3.3.4) is based upon a compactified Bondi-Sachs formalism. The use of a “radial” Cauchy gauge, in which the Cauchy coordinate r measures the surface area of spheres, simplifies the relation to the Bondi-Sachs coordinates. In the numerical scheme, the metric and its derivatives are passed between the Cauchy and characteristic evolutions exactly at r = R_{m}, thus eliminating the need of a matching interface encompassing a few grid zones, as in the 3D Pittsburgh scheme. This avoids a great deal of interpolation error and computational complexity.
Preliminary results in the development of the Southampton CCM code are described by Pollney in his thesis [234]. The Cauchy code was based upon the axisymmetric ADM code of Stark and Piran [278] and reproduces their vacuum results for a short time period, after which an instability at the origin becomes manifest. The characteristic code has been tested to reproduce accurately the Schwarzschild and boost-rotation symmetric solutions [43], with more thorough tests of stability and accuracy still to be carried out.
5.7 Cauchy-characteristic matching for 3D scalar waves
CCM has been successfully implemented in the fully 3D problem of nonlinear scalar waves evolving in a flat spacetime [50, 49]. This study demonstrated the feasibility of matching between Cartesian Cauchy coordinates and spherical null coordinates, the setup required to apply CCM to the binary black-hole problem. Unlike spherically or cylindrically symmetric examples of matching, the Cauchy and characteristic patches do not share a common coordinate, which can be used to define the matching interface. This introduces a major complication into the matching procedure, resulting in extensive use of intergrid interpolation. The accompanying short-wavelength numerical noise presents a challenge in obtaining a stable algorithm.
The nonlinear waves were modeled by the equation
with self-coupling F(Φ) and external source S. The initial Cauchy data Φ(t_{0}, x, y, z) and ∂_{t}Φ(t_{0}, x, y, z) are assigned in a spatial region bounded by a spherical matching surface of radius R_{m}.
The characteristic initial value problem (66) is expressed in standard spherical coordinates (r, θ, φ) and retarded time u = t − r + R_{m}:
where g = rΦ and L^{2} is the angular momentum operator
The initial null data consist of g(r, θ, φ, u_{0}) on the outgoing characteristic cone u_{0} = t_{0} emanating at the initial Cauchy time from the matching worldtube at r = R_{m}.
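For the source-free, spherically symmetric (ℓ = 0) mode of this model, the null-parallelogram identity for the flat-space wave operator, g_N = g_E + g_W − g_S over a parallelogram with null sides, holds exactly and gives a marching scheme of the kind cited as algorithm (19) in Section 5.5.2. The sketch below uses hypothetical names; the worldtube values stand in for the boundary data injected from the Cauchy side.

```python
import numpy as np

def evolve_parallelogram(g0, g_worldtube, u0, du, n_steps):
    """March g(u, r) in retarded time with the null-parallelogram rule
    g_N = g_E + g_W - g_S (exact in the source-free spherical case).

    g0          : initial null data g(u0, r_j) on the first outgoing cone,
                  on a uniform r-grid with spacing dr = du / 2
    g_worldtube : callable u -> g(u, R_m), standing in for the values
                  injected from the Cauchy evolution at the worldtube
    Each step loses the outermost point, since the E corner sits one
    zone further out on the previous cone.
    """
    g = np.asarray(g0, dtype=float).copy()
    u = u0
    for _ in range(n_steps):
        u += du
        g_new = np.empty(len(g) - 1)
        g_new[0] = g_worldtube(u)                # injected worldtube value
        for j in range(1, len(g_new)):
            # corners: N=(u, r_j), W=(u, r_{j-1}), E=(u-du, r_{j+1}), S=(u-du, r_j)
            g_new[j] = g[j + 1] + g_new[j - 1] - g[j]
        g = g_new
    return g
```

Because the identity is exact here, any superposition of an outgoing wave f(u) and an ingoing wave h(u + 2r) is propagated to roundoff accuracy, which makes the scheme a convenient testbed.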
CCM was implemented so that, in the continuum limit, Φ and its normal derivatives would be continuous across the matching interface. The use of a Cartesian discretization in the interior and a spherical discretization in the exterior complicated the treatment of the interface. In particular, the stability of the matching algorithm required careful attention to the details of the intergrid matching. Nevertheless, there was a reasonably broad range of discretization parameters for which CCM was stable.
Two different ways of handling the spherical coordinates were used. One was based upon two overlapping stereographic grid patches and the other upon a multiquadric approximation using a quasi-regular triangulation of the sphere. Both methods gave similar accuracy. The multiquadric method showed a slightly larger range of stability. Also, two separate tactics were used to implement matching, one based upon straightforward interpolations and the other upon maintaining continuity of derivatives in the outward null direction (a generalization of the Sommerfeld condition). Both methods were stable for a reasonable range of grid parameters. The solutions were second-order accurate and the Richardson extrapolation technique could be used to accelerate convergence.
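The two-patch stereographic construction can be sketched as follows: a north-patch coordinate ζ_N = tan(θ/2)e^{iφ} and a south-patch coordinate ζ_S = 1/ζ_N, each regular where the other degenerates. This is the standard mapping; the overlap and patch boundaries actually used in [50, 49] may differ.

```python
import cmath
import math

def to_patch(theta, phi):
    """Map a sphere point to stereographic coordinates on the patch
    (north for theta <= pi/2, south otherwise) that contains it."""
    if theta <= math.pi / 2:
        return 'N', cmath.rect(math.tan(theta / 2), phi)       # zeta_N
    return 'S', cmath.rect(1 / math.tan(theta / 2), -phi)      # zeta_S = 1/zeta_N

def from_patch(patch, zeta):
    """Invert the stereographic map back to (theta, phi)."""
    if patch == 'S':
        zeta = 1 / zeta                # recover the north coordinate
    theta = 2 * math.atan(abs(zeta))
    return theta, cmath.phase(zeta) % (2 * math.pi)
```

The round trip through either patch reproduces (θ, φ) to machine precision away from the patch poles, which is the property the intergrid interpolation relies on.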
The performance of CCM was compared to traditional ABCs. As expected, the nonlocal ABCs yielded convergent results only in linear problems, and convergence was not observed for local ABCs, whose restrictive assumptions were violated in all of the numerical experiments. The computational cost of CCM was much lower than that of current nonlocal boundary conditions. In strongly nonlinear problems, CCM appears to be the only available method able to produce numerical solutions that converge to the exact solution with a fixed boundary.
5.8 Stable 3D linearized Cauchy-characteristic matching
Although the individual pieces of the CCM module have been calibrated to give an accurate interface between Cauchy and characteristic evolution modules in 3D general relativity, its stability has not yet been established [52]. However, a stable version of CCM for linearized gravitational theory has recently been demonstrated [287]. The Cauchy evolution is carried out using a harmonic formulation for which the reduced equations have a well-posed initial-boundary value problem. Previous attempts at CCM were plagued by boundary-induced instabilities of the Cauchy code. Although stable behavior of the Cauchy boundary is only a necessary and not a sufficient condition for CCM, the tests with the linearized harmonic code matched to a linearized characteristic code were successful.
The harmonic conditions consist of wave equations for the coordinates, which can be used to propagate the gauge as four scalar waves using characteristic evolution. This allows the extraction worldtube to be placed at a finite distance from the injection worldtube without introducing a gauge ambiguity. Furthermore, the harmonic gauge conditions are the only constraints on the Cauchy formalism so that gauge propagation also insures constraint propagation. This allows the Cauchy data to be supplied in numerically benign Sommerfeld form, without introducing constraint violation. Using random initial data, robust stability of the CCM algorithm was confirmed for 2000 crossing times on a 45^{3} Cauchy grid. Figure 7 shows a sequence of profiles of the metric component \({\gamma ^{xy}} = \sqrt {- g}\, {g^{xy}}\) as a linearized wave propagates cleanly through the spherical injection boundary and passes to the characteristic grid, where it is propagated to \({{\mathcal I}^ +}\).
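A robust-stability harness of this kind can be sketched generically: seed the evolution with small random data and monitor the sup norm once per grid-crossing time. All names are hypothetical, and `step` merely stands in for one time step of whatever code is under test.

```python
import numpy as np

def robust_stability_test(step, n_grid, n_crossings, seed=0):
    """Drive an evolution step with random-noise initial data and record
    the sup norm once per crossing time (~n_grid steps), in the spirit
    of robust-stability testbeds."""
    rng = np.random.default_rng(seed)
    u = 1e-6 * rng.standard_normal(n_grid)       # random initial data
    norms = []
    for _ in range(n_crossings):
        for _ in range(n_grid):                  # one crossing time
            u = step(u)
        norms.append(float(np.max(np.abs(u))))
    return norms
```

A stable scheme should keep the recorded norms bounded over many crossing times, whereas a boundary-induced instability shows up as sustained exponential growth.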
5.9 The binary black-hole inner boundary
CCM also offers a new approach to singularity excision in the binary black-hole problem in the manner described in Section 5.5.3 for a single spherically symmetric black hole. In a binary system, there are computational advantages in posing the Cauchy evolution in a frame, which is corotating with the orbiting black holes. In this co-orbiting description, the Cauchy evolution requires an inner boundary condition inside the black holes and also an outer boundary condition on a worldtube outside of which the grid rotation is likely to be superluminal. An outgoing characteristic code can routinely handle such superluminal gauge flows in the exterior [53]. Thus, successful implementation of CCM could solve the exterior boundary problem for this co-orbiting description.
CCM also has the potential to handle the two black holes inside the Cauchy region. As described earlier with respect to Figure 6, an ingoing characteristic code can evolve a moving black hole with long-term stability [141, 139]. This means that CCM might also be able to provide the inner boundary condition for Cauchy evolution once stable matching has been accomplished. In this approach, the interior boundary of the Cauchy evolution is located outside the apparent horizon and matched to a characteristic evolution based upon ingoing null cones. The inner boundary for the characteristic evolution is a trapped or marginally trapped surface, whose interior is excised from the evolution.
In addition to restricting the Cauchy evolution to the region outside the black holes, this strategy offers several other advantages. Although finding an MTS on the ingoing null hypersurfaces remains an elliptic problem, there is a natural radial coordinate system (r, θ, ϕ) to facilitate its solution. Motion of the black hole through the grid reduces to a one-dimensional radial problem, leaving the angular grid intact and thus reducing the computational complexity of excising the inner singular region. (The angular coordinates can even rotate relative to the Cauchy coordinates in order to accommodate spinning black holes.) The chief danger in this approach is that a caustic might be encountered on the ingoing null hypersurface before entering the trapped region. This is a gauge problem whose solution lies in choosing the right location and geometry of the surface across which the Cauchy and characteristic evolutions are matched. There is a great deal of flexibility here because the characteristic initial data can be posed without constraints. This global strategy is tailor-made to treat two black holes in the co-orbiting gauge, as illustrated in Figure 8. Two disjoint characteristic evolutions based upon ingoing null cones are matched across worldtubes to a central Cauchy region. The interior boundaries of each of these interior characteristic regions border a trapped surface. At the outer boundary of the Cauchy region, a matched characteristic evolution based upon outgoing null hypersurfaces propagates the radiation to infinity.
Present characteristic and Cauchy codes can handle the individual pieces of this problem. Their unification offers a new approach to simulating the inspiral and merger of two black holes. The individual pieces of the fully nonlinear CCM module, as outlined in Section 5.2, have been implemented and tested for accuracy. The missing ingredient is long-term stability in the nonlinear gravitational case, which would open the way to future applications.
6 Cauchy-Characteristic Extraction of Waveforms
When an artificial finite outer boundary is introduced there are two broad sources of error:

1. The outer boundary condition
2. Waveform extraction at a finite inner worldtube.
CCM addresses both of these items. Cauchy-characteristic extraction (CCE), which is one of the pieces of the CCM strategy, offers a means to avoid the second source of error introduced by extraction at a finite worldtube. In current codes used to simulate black holes, the waveform is extracted at an interior worldtube, which must be sufficiently far inside the outer boundary in order to isolate it from errors introduced by the boundary condition. At this inner worldtube, the waveform is extracted by a perturbative scheme based upon the introduction of a background Schwarzschild spacetime. This has been carried out using the Regge-Wheeler-Zerilli [240, 309] treatment of the perturbed metric, as reviewed in [214], and also by calculating the Newman-Penrose Weyl component Ψ_{4}, as first done for the binary black-hole problem in [19, 235, 78, 20]. In these approaches, errors arise from the finite size of the extraction worldtube, from nonlinearities and from gauge ambiguities involved in the arbitrary introduction of a background metric. The gauge ambiguities might seem less severe in the case of Ψ_{4} (vs metric) extraction, but there are still delicate problems associated with the choices of a preferred null tetrad and preferred worldlines along which to measure the waveform (see [199] for an analysis).
CCE offers a means to avoid this error introduced by extraction at a finite worldtube. In CCE, the inner worldtube data supplied by the Cauchy evolution is used as boundary data for a characteristic evolution to future null infinity, where the waveform can be unambiguously computed in terms of the Bondi news function. By itself, CCE does not use the characteristic evolution to inject outer boundary data for the Cauchy evolution, which can be a source of instability in full CCM. A wide number of highly nonlinear tests involving black holes [56, 53, 311, 312] have shown that early versions of CCE were a stable procedure, which provided the gravitational waveform up to numerical error that is second-order convergent when the worldtube data is prescribed in analytic form. Nevertheless, in nonlinear applications requiring numerical worldtube data and high resolution, such as the inspiral of matter into a black hole [51], the numerical error was a troublesome factor in computing the waveform. The CCE modules were first developed in a past period when stability was the dominant issue and second-order accuracy was considered sufficient. Only recently have they begun to be updated to include the more accurate techniques now standard in Cauchy codes. There are two distinct ways, geometric and numerical, that the accuracy of CCE might be improved. In the geometrical category, one option is to compute Ψ_{4} instead of the news function as the primary description of the waveform. In the numerical category, some standard methods for improving accuracy, such as higher-order finite-difference approximations, are straightforward to implement whereas others, such as adaptive mesh refinement, have only been tackled for 1D characteristic codes [237].
A major source of numerical error in characteristic evolution is the intergrid interpolation required by the multiple patches necessary to coordinatize the spherical cross-sections of the outgoing null hypersurfaces. More accurate methods have now been developed to reduce this interpolation error, as discussed in Section 4.1. In particular, the cubed-sphere method and the stereographic method with circular patch boundaries have both shown improvement over the original use of square stereographic patches. In a test problem involving a scalar wave Φ, the accuracies of the circular-stereographic and cubed-sphere methods were compared [13]. For equivalent computational expense, the cubed-sphere error in the scalar field \({\mathcal E}(\Phi)\) was \( \approx {1 \over 3}\) the circular-stereographic error, but the advantage was smaller for the higher ð-derivatives (angular derivatives) required in gravitational waveform extraction. The cubed-sphere error \({\mathcal E}(\bar\eth\eth ^2\Phi)\) was \( \approx {4 \over 5}\) the stereographic error. However, the cubed-sphere method has not yet been developed for extraction of gravitational waveforms at \({{\mathcal I}^ +}\).
6.1 Waveforms at null infinity
In order to appreciate why waveforms are not easy to extract accurately it is worthwhile to review the calculation of the required asymptotic quantities. A simple approach to Penrose compactification is by introducing an inverse surface area coordinate ℓ = 1/r, so that future null infinity \({{\mathcal I}^ +}\) is given by ℓ = 0 [288]. In the resulting x^{μ} = (u, ℓ, x^{A}) Bondi coordinates, where u is the retarded time defined on the outgoing null hypersurfaces and x^{A} are angular coordinates along the outgoing null rays, the physical spacetime metric g_{μν} has conformal compactification \({{\hat g}_{\mu \nu}} = {\ell ^2}{g_{\mu \nu}}\) of the form
where α, β, U^{A} and h_{AB} are smooth fields at \({{\mathcal I}^ +}\).
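As a flat-space illustration (not part of the source), the Minkowski metric in outgoing null coordinates compactifies smoothly under ℓ = 1/r:

```latex
ds^2 = -du^2 - 2\,du\,dr + r^2 q_{AB}\,dx^A dx^B
\quad\longrightarrow\quad
\hat g_{\mu\nu}\,dx^\mu dx^\nu = \ell^2 ds^2
  = -\ell^2\,du^2 + 2\,du\,d\ell + q_{AB}\,dx^A dx^B ,
```

using dr = −dℓ/ℓ², where q_{AB} is the unit-sphere metric. The compactified metric is smooth and nondegenerate at ℓ = 0, and inverting the (u, ℓ) block gives \({\hat g^{\ell\ell}} = \ell^2 = O(\ell)\), consistent with the asymptotic-flatness requirement stated in this section.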
The news function and Weyl component Ψ_{4}, which describe the radiation, are constructed from the leading coefficients in an expansion of \({{\hat g}_{\mu \nu}}\) in powers of ℓ. The requirement of asymptotic flatness imposes relations between these expansion coefficients. In terms of the Einstein tensor \({{\hat G}_{\mu \nu}}\) and covariant derivative \({{\hat \nabla}_\mu}\) associated with \({{\hat g}_{\mu \nu}}\), the vacuum Einstein equations become
Asymptotic flatness immediately implies that \({\hat g^{\ell \ell}} = ({\hat \nabla ^\alpha}\ell){\hat \nabla _\alpha}\ell = O(\ell)\) so that \({{\mathcal I}^ +}\) is a null hypersurface with generators in the \({{\hat \nabla}^\mu}\ell\) direction. From Equation (70) there also follows the existence of a smooth trace-free field \({{\hat \Sigma}_{\mu \nu}}\) defined on \({{\mathcal I}^ +}\) by
where \(\hat \Theta : = {{\hat \nabla}^\mu}{{\hat \nabla}_\mu}\ell\) is the expansion of \({{\mathcal I}^ +}\). The expansion \({\hat \Theta}\) depends upon the conformal factor used to compactify \({{\mathcal I}^ +}\). In an inertial conformal Bondi frame, tailored to a standard Minkowski metric at \({{\mathcal I}^ +}\), \(\hat \Theta = 0\). But this is not the case for the computational frame used in characteristic evolution, which is determined by conditions on the inner extraction worldtube.
The gravitational waveform depends on \({{\hat \Sigma}_{\mu \nu}}\), which in turn depends on the leading terms in the expansion of \({{\hat g}_{\mu \nu}}\):
In an inertial conformal Bondi frame, H^{AB} = Q^{AB} (the unit-sphere metric), H = L^{A} = 0 and the Bondi news function reduces to the simple form
where Q^{A} is a complex polarization dyad on the unit sphere, i.e., \({Q^{AB}} = {Q^{(A}}{{\bar Q}^{B)}}\). The spin rotation freedom Q^{A} → e^{−iγ}Q^{A} is fixed by parallel propagation along the generators of \({{\mathcal I}^ +}\), so that the real and imaginary parts of N correctly describe the ⊕ and ⊗ polarization modes of inertial observers at \({{\mathcal I}^ +}\).
However, in the computational frame the news function has the more complicated form
where ω is the conformal factor relating H_{AB} to the unit-sphere metric, i.e., Q_{AB} = ω^{2}H_{AB}. The conformal factor obeys the elliptic equation governing the conformal transformation relating the metric of the cross-sections of \({{\mathcal I}^ +}\) to the unit-sphere metric,
where \({\mathcal R}\) is the curvature scalar and D_{A} the covariant derivative associated with H_{AB}. By first solving Equation (75) at the initial retarded time, ω can then be determined at later times by evolving it according to the asymptotic relation
All of these procedures introduce numerical error, which presents a challenge for computational accuracy, especially because of the appearance of second angular derivatives of ω in the news function (74).
Similar complications appear in Ψ_{4} extraction. Asymptotic flatness implies that the Weyl tensor vanishes at \({{\mathcal I}^ +}\), i.e., Ĉ_{μνρσ} = O(ℓ). This is the conformal space statement of the peeling property [227]. Let \(({{\hat n}^\mu},{{\hat \ell}^\mu},{{\hat m}^\mu})\) be an orthonormal null tetrad such that \({{\hat n}^\mu} = {{\hat \nabla}^\mu}\ell\) and \({{\hat \ell}^\mu}{\partial _\mu} = {\partial _\ell}\) at \({{\mathcal I}^ +}\). Then the radiation is described by the limit
which corresponds in Newman-Penrose notation to \( - (1/2)\bar \psi _4^0\). The main calculational result in [13] is that
which is independent of the freedom \({{\hat m}^\mu} \rightarrow {{\hat m}^\mu} + \lambda {{\hat n}^\mu}\) in the choice of m^{μ}. In inertial Bondi coordinates, this reduces to
which is related to the Bondi news function by
so that
with N_{Ψ} = N up to numerical error.
As in the case of the news function, the general expression (78) for \({\hat \Psi}\) must be used. This challenges numerical accuracy due to the large number of terms and the appearance of third angular derivatives. For instance, in the linearized approximation, the value of \({\hat \Psi}\) on \({{\mathcal I}^ +}\) is given by the fairly complicated expression
where J = Q^{A}Q^{B}h_{AB} and L = Q^{A}L_{A}. In the same approximation, the news function is given by
(The relationship (80) still holds in the linearized approximation but in the nonlinear case, the derivative along the generators of \({{\mathcal I}^ +}\) is \({{\hat n}^\mu}{\partial _\mu} = {e^{ 2H}}({\partial _u} + {L^A}{\partial _A})\) and Equation (80) must be modified accordingly.)
These linearized expressions provide a starting point to compare the advantages between computing the radiation via N or N_{Ψ}. The troublesome gauge terms involving L, H and ω all vanish in inertial Bondi coordinates (where ω = 1). One difference is that \({\hat \Psi}\) contains third-order angular derivatives, e.g., \({\eth^3}\bar L\), as opposed to second angular derivatives for N. This means that the smoothness of the numerical error is more crucial in the \({\hat \Psi}\) approach. Balancing this, N contains the ð^{2}ω term, which is a potential source of numerical error since ω must be evolved via Equation (76).
The accuracy of waveform extraction via the Bondi news function N and its counterpart N_{Ψ} constructed from the Weyl curvature has been compared in a linearized gravitational-wave test problem [13]. The results show that both methods are competitive, although the Ψ_{4} approach has an edge.
However, even though both methods were tested to be second-order convergent in test beds with analytic data, there was still considerable error, of the order of 5% for grids of practical size. This error reflects the intrinsic difficulty in extracting waveforms because of the delicate cancellation of leading-order terms in the underlying metric and connection when computing the O(1/r) radiation field. It is somewhat analogous to the experimental task of isolating a transverse radiation field from the longitudinal fields representing the total mass, while in a very noninertial laboratory. In the linearized wave test carried out in [13], the news consisted of the sum of three terms, N = A + B + C, where because of cancellations N ≈ A/24. The individual terms A, B and C had small fractional error but the cancellations magnified the fractional error in N.
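The error magnification produced by such cancellations can be illustrated with a small numerical sketch. The values of A, B and C below are hypothetical, chosen only so that N = A + B + C ≈ A/24 as in the linearized test described above:

```python
import random

def relative_error(approx, exact):
    """Fractional error of an approximation."""
    return abs(approx - exact) / abs(exact)

# Hypothetical terms that nearly cancel, leaving a sum N = A/24,
# mimicking the cancellation structure of the linearized news test.
A, B, C = 24.0, -13.0, -10.0
N_exact = A + B + C            # = 1.0 = A/24

eps = 1.0e-6                   # small fractional error in each term
random.seed(0)
A_n = A * (1 + eps * random.uniform(-1, 1))
B_n = B * (1 + eps * random.uniform(-1, 1))
C_n = C * (1 + eps * random.uniform(-1, 1))
N_num = A_n + B_n + C_n

# The fractional error in N is magnified by roughly |A|/|N| ~ 24
# relative to the fractional error eps of the individual terms.
amplification = relative_error(N_num, N_exact) / eps
```

The same mechanism operates whether the large terms arise from the metric, the connection or gauge quantities: accuracy in each term does not guarantee comparable accuracy in their nearly cancelling sum.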
The tests in [13] were carried out with a characteristic code using the circular-stereographic patches. The results are in qualitative agreement with tests of CCE using a cubed-sphere code [242], which, in addition, confirmed the expectation that fourth-order finite-difference approximations for the ð-operator give improved accuracy. As demonstrated recently [134], once all the necessary infrastructure for inter-patch communication is in place, an advantage of the cubed-sphere approach is that its shared boundaries admit a highly scalable algorithm for parallel architectures.
Another alternative is to carry out a coordinate transformation in the neighborhood of \({{\mathcal I}^ +}\) to inertial Bondi coordinates, in which the news calculation is then quite clean numerically. This approach was implemented in [48] and shown to be second-order convergent in Robinson-Trautman and Schwarzschild testbeds. However, it is clear that this coordinate transformation also involves the same difficult numerical problem of extracting a small radiation field in the presence of the large gauge effects that are present in the primary output data.
These underlying gauge effects, which complicate CCE, are introduced at the inner extraction worldtube and then propagate out to \({{\mathcal I}^ +}\), but they are of numerical origin and can be reduced with increased accuracy. Perturbative waveform extraction suffers the same gauge effects but in this case they are of analytic origin and cannot be controlled by numerical accuracy. Lehner and Moreschi [199] have shown that the delicate gauge issues involved at \({{\mathcal I}^ +}\) have counterparts in Ψ_{4} extraction of radiation on a finite worldtube. They show how some of the analytic techniques used at \({{\mathcal I}^ +}\) can also be used to reduce the effect of these ambiguities on a finite worldtube, in particular the ambiguity arising from the conformal factor ω. The analogue of ω on a finite worldtube can reduce some of the noninertial effects that enter the perturbative waveform. In addition, use of normalization conventions on the null tetrad defining Ψ_{4} analogous to the conventions at \({{\mathcal I}^ +}\) can avoid other spurious errors. This approach can also be used to reduce gauge ambiguities in the perturbative calculation of momentum recoil in the merger of black holes [125].
6.2 Application of CCE to binary black hole inspirals
The emission of gravitational waves from the inspiral and merger of binary black holes is the most likely source for detection by gravitational wave observatories. The post-Newtonian regime of the inspiral can be accurately modeled by the chirp waveforms obtained by perturbation theory and the final ringdown waveform can be accurately modeled by the known quasinormal modes. This places special importance on a reliable waveform for the nonlinear inspiral and merger, which interpolates between these early and late time phases. Here CCE plays the important role of providing an unambiguous waveform at \({{\mathcal I}^ +}\), which can be used to avoid the error introduced by perturbative extraction techniques.
The application of CCE to binary black-hole simulations was first carried out in [243, 244] using an implementation of the PITT code for the characteristic evolution. The Cauchy evolution was carried out using a variant of the BSSN formulation [268, 38]. Simulations of inspiral and merger were carried out for equal-mass nonspinning black holes and for equal-mass black holes with spins aligned with the orbital angular momentum. For a binary of mass M, two separate choices of outer Cauchy boundary were located at R = 3600 M and R = 2000 M, with the corresponding characteristic extraction worldtubes ranging from R_{E} = 100 M to R_{E} = 250 M, sufficient to causally isolate the characteristic extraction from the outer boundary during the simulation of eight orbits prior to merger and ringdown. The difference between CCE waveforms in this range of extraction radii was found to be of comparable size to the numerical error. In particular, for the grid resolutions used, the dominant numerical error was due to the Cauchy evolution.
The CCE waveforms at \({{\mathcal I}^ +}\) were also used to evaluate the quality of perturbative waveforms based upon Weyl tensor extraction. In order to reduce finite extraction effects, the perturbative waveforms were extrapolated to infinity by extraction at six radii in the range R = 280 M to R = 1000 M. It is notable that the results in [243] indicate that the systematic error in perturbative extraction had previously been underestimated.
The lack of reflection symmetry in the spinning case leads to a recoil, or “kick”, due to the linear momentum carried off by the gravitational waves. The astrophysical consequence of this kick to the evolution of a galactic core has accentuated the important role of CCE waveforms to supply the energy, momentum and angular momentum radiated during binary black hole inspirals. The radiated energy and momentum computed from the ψ_{4} Weyl component at \({{\mathcal I}^ +}\) via CCE were compared to the corresponding values extracted at finite radii and then extrapolated to infinity [244]. The extrapolated value was found to be of comparable accuracy to the CCE result for the large extraction radii used. For extraction at a single radius of R = 100 M, commonly used in numerical relativity, this was no longer true and the error was 1 to 2 orders of magnitude larger. The CCE energy loss obtained via ψ_{4} was also found to be consistent, within numerical error, with the recoil computed from the news function. The work emphasizes the need for an accurate description of the astrophysical consequences of gravitational radiation, which CCE is designed to provide.
In addition to the dominant oscillatory gravitational-wave signals produced during binary inspirals, there are also memory effects described by the long-time-scale change in the strain Δh = h(t, θ, ϕ) − h(−∞, θ, ϕ). In a follow-up to the work in [243, 244], these were studied by means of CCE [247] for the inspiral of spinning black holes. It was found that the memory effect was greatest for the case of spins aligned with the orbital angular momentum, as might be expected since this case also produces the strongest radiation. The largest spherical harmonic mode for the effect was found to be the (ℓ = 2, m = 0) mode. Since CCE supplies either the news function or its time derivative ψ_{4}, a major difficulty in measuring the memory is the proper setting of the integration constants in determining the strain. This was done by matching the numerical evolution to a post-Newtonian precursor. There is a slow monotonic growth of Δh during the inspiral followed by a rapid rise during the merger phase, which over the time scale of the simulation leads to a step-like behavior modulated by the final ringdown. The simulations showed that the largest memory offset occurs for highly-spinning black holes, with an estimated value of 0.24 in the maximally-spinning case. These results are central to determining the detectability of the memory effect by observations of gravitational waves. Since the size of the (ℓ = 2, m = 0) mode is small compared to the dominant (ℓ = 2, m = 2) radiation mode, the memory effect is unlikely to be observable in LIGO signals. However, the long period behavior of the effect might make it more conducive to detection by proposed pulsar timing arrays designed to measure the timing residuals caused by intervening gravitational waves.
Another application of CCE has been to the study of gravitational waves from precessing binary black holes with spins aligned or anti-aligned to the orbital angular momentum [245]. It was found that binaries with spin aligned with the orbital angular momentum are more powerful sources than the corresponding binaries with anti-aligned spins. The results were confirmed by comparing the waveforms obtained using perturbative extraction at finite radius to those obtained using CCE. The comparisons showed that the difference between the two approaches was within the numerical error of the simulation.
6.3 Application of CCE to stellar collapse
CCE has also been recently applied to study the waveform from the fully three-dimensional simulation of the collapse and core bounce of a massive rotating star [246]. After nuclear energy generation has ceased, dissipative processes eventually push the core over its effective Chandrasekhar mass. Radial instability then drives the inner core to nuclear densities at which time the stiffened equation of state leads to a core bounce with tremendous acceleration. The asymmetry of this bounce due to a rotating core potentially gives rise to a detectable source of gravitational quadrupole radiation, which can be used to probe the nuclear equation of state and the mass and angular momentum of the star. Simulations were carried out for three choices of initial star parameters. The gravitational waves emitted in the core bounce phase were compared using four independent extraction techniques:

The simplest technique was via the quadrupole formula, which estimates the waveform in terms of the second time derivative of the mass quadrupole tensor but does not take into account the effects of curvature or relativistic motion.

Extraction at a finite radius via the NewmanPenrose ψ_{4} component.

Perturbative extraction at a finite radius based upon the Regge-Wheeler-Zerilli-Moncrief (RWZM) formalism.

CCE, utilizing the PITT code, which avoids the near-field or perturbative approximations of the above techniques.
Historically, the quadrupole formula, which is computed in the inner region where the numerical grid is most accurate, has been the predominant extraction tool used in stellar collapse. The metric- or curvature-based methods must extract, from the numerical noise, a signal many orders of magnitude weaker than that from a binary inspiral. This is especially pertinent to RWZM and ψ_{4} extraction, where the signal must be extracted in the far field. In addition, the radiation is dominant in the (ℓ = 2, m = 0) spherical harmonic mode, in which the memory effect complicates the relationship between ψ_{4} and the strain at low frequencies.
CCE was used as the benchmark in comparing the various extraction techniques. For all three choices of initial stellar configurations, extraction via RWZM yielded the largest discrepancy and showed a large spurious spike at core bounce and other spurious high-frequency contributions. Quadrupole and ψ_{4} extraction only led to small differences with CCE. It was surprising that the quadrupole technique gave such good agreement, given its simplistic assumptions. Overall, quadrupole extraction performed slightly better than ψ_{4} extraction when compared to CCE. One reason is that the double time integration of ψ_{4} to produce the strain introduces low-frequency errors. Also, ψ_{4} extraction led to larger peak amplitudes compared to either quadrupole extraction or CCE.
Several important observations emerged from this study. (i) ψ_{4} extraction and CCE converge properly with extraction worldtube radius. RWZM produces spurious high-frequency effects, which no other method reproduces. (ii) Waveforms from CCE, ψ_{4} extraction and quadrupole extraction agree well in phase. The high-frequency contamination of RWZM makes phase comparisons meaningless. (iii) Compared to CCE, the maximum amplitudes at core bounce differ by ≈ 1 to 7%, depending on initial stellar parameters, for ψ_{4} extraction and by ≈ 5 to 11% for quadrupole extraction. (iv) Only quadrupole extraction is free of low-frequency errors. (v) For use in gravitational wave data analysis, except for RWZM, the three other extraction techniques yield results that are equivalent up to the uncertainties intrinsic to matched-filter searches.
Certain technical issues cloud the above observations. CCE, ψ_{4} and RWZM extraction are based upon vacuum solutions at the extraction worldtube, which is not the case for those simulations in which the star extends over the entire computational grid. This could be remedied by the inclusion of matter terms in the CCE technique, which might also improve the low-frequency behavior. In any case, this work represents a milestone in showing that CCE has important relevance to waveform extraction from astrophysically-realistic collapse models.
The above study [246] employed a sufficiently stiff equation of state to produce core bounce after collapse. In subsequent work, CCE was utilized to study the gravitational radiation from a collapsar model [220], in which a rotating star collapses to form a black hole with accretion disk. The simulations tracked the initial collapse and bounce, followed by a post-bounce phase leading to black-hole formation. At bounce, there is a burst of gravitational waves similar to the above study, followed by a turbulent post-bounce phase with weak gravitational radiation in which an unstable proto-neutron star forms. Collapse to a black hole then leads to another pronounced spike in the waveform, followed by ringdown to a Kerr black hole. The ensuing accretion flow does not lead to any further radiation of appreciable size. The distinctive signature of the gravitational waves observed in these simulations would enable a LIGO detection to distinguish between core collapse leading to bounce and supernova and one leading to black-hole formation.
6.4 LIGO accuracy standards
The strong emission of gravitational waves from the inspiral and merger of binary black holes has been a dominant motivation for the construction of the LIGO and Virgo gravitational wave observatories. The precise detail of the waveform obtained from numerical simulation is a key tool to enhance detection and allow useful scientific interpretation of the gravitational signal. The first derivation [107] of the accuracy required for numerically-generated black-hole waveforms to be useful as templates for gravitational-wave data analysis was carried out in the frequency domain. Proper accuracy standards must take into account the power spectral density of the detector noise S_{n}(f), which is calibrated with respect to the frequency domain strain \(\hat h(f)\). Consequently, the primary accuracy standards must be formulated in the frequency domain in order to take detector sensitivity into account. See [203] for a recent review.
It has been emphasized [202] that the direct use of time domain errors obtained in numerical simulations can be deceptive in assessing the accuracy standards for model waveforms to be suitable for gravitational-wave data analysis. For this reason, the frequency domain accuracy requirements have been translated into requirements on the time domain L_{2} error norms, so that they can be readily enforced in practice [203, 204, 201].
There are two distinct criteria for waveform accuracy: (i) insufficient accuracy can allow an unacceptable fraction of signals to pass undetected through the corresponding matched filter; (ii) the accuracy affects whether a detected waveform can be used to measure the physical properties of the source, e.g., mass and spin, to a level commensurate with the accuracy of the observational data. Accuracy standards for model waveforms have been formulated to prevent these potential losses in the detection of gravitational waves and the measurement of their scientific content.
For a numerical waveform with strain component h(t), the time domain error is measured by
where δh is the error in the numerical approximation and ∥F∥^{2} = ∫ dt∣F(t)∣^{2}, i.e., ∥F∥ is the L_{2} norm, which in principle should be integrated over the complete time domain of the model waveform obtained by splicing a perturbative chirp waveform to a numerical waveform for the inspiral and merger.
The error can also be measured in terms of time derivatives of the strain. The first time derivative corresponds to the error in the news
and the second time derivative corresponds to the Weyl component error
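As a rough numerical sketch (not the code used in [203]), these relative time-domain error norms can be approximated on sampled data. The sketch assumes \({{\mathcal E}_k}\) is the relative L_{2} norm of the k-th time derivative of the strain error; the toy chirp-like waveform and helper names below are illustrative:

```python
import numpy as np

def l2_norm(f, dt):
    """Discrete approximation to ||F|| = sqrt(int dt |F(t)|^2)."""
    return np.sqrt(dt * np.sum(np.abs(f)**2))

def error_norms(h, h_approx, dt):
    """Relative error norms for the strain (E_0), its first time
    derivative, i.e., the news (E_1), and its second time derivative,
    i.e., the Weyl component (E_2): assumed E_k = ||d^k dh/dt^k|| / ||d^k h/dt^k||."""
    norms = []
    f_exact, f_err = h, h_approx - h
    for k in range(3):
        norms.append(l2_norm(f_err, dt) / l2_norm(f_exact, dt))
        f_exact = np.gradient(f_exact, dt)   # next time derivative
        f_err = np.gradient(f_err, dt)
    return norms

# Toy chirp-like waveform with a small phase error in the "model".
dt = 1.0e-3
t = np.arange(0.0, 10.0, dt)
h = np.sin(t**2 / 4)
h_model = np.sin(t**2 / 4 + 1.0e-3)          # hypothetical model waveform
E0, E1, E2 = error_norms(h, h_model, dt)
```

In practice the norms would be taken over the complete spliced waveform, as described above, rather than over a toy signal.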
In [203], it was shown that sufficient conditions to satisfy data analysis criteria for detection and measurement can be formulated in terms of any of the error norms \({{\mathcal E}_k} = \left({{{\mathcal E}_0},\>{{\mathcal E}_1},\>{{\mathcal E}_2}} \right)\), i.e., in terms of the strain, the news or the Weyl component. The accuracy requirement for detection is
and the requirement for measurement is
Here ρ is the optimal signaltonoise ratio of the detector, defined by
C_{k} are dimensionless factors introduced in [203] to rescale the traditional signal-to-noise ratio ρ in making the transition from frequency domain standards to time domain standards; ϵ_{max} determines the fraction of detections lost due to template mismatch, cf. Equation (14) of [204]; and η_{c} ≤ 1 corrects for error introduced in detector calibration. These requirements for detection and measurement, for any of k = 0, 1, 2, conservatively overstate the basic frequency domain requirements by replacing S_{n}(f) by its minimum value in transforming to the time domain.
The values of C_{k} for the inspiral and merger of nonspinning equal-mass black holes have been calculated in [203] for the advanced LIGO noise spectrum. As the total mass of the binary varies from 0 → ∞, C_{0} decreases from 0.65 to 0, C_{1} varies in the range 0.24 < C_{1} < 0.8 and C_{2} varies in the range 0 < C_{2} < 1. Thus, only the error \({{\mathcal E}_1}\) in the news can satisfy the criteria over the entire mass range. The error in the strain \({{\mathcal E}_0}\) provides the easiest way to satisfy the criteria in the low mass case M ≪ M_{⊙} and the error in the Weyl component \({{\mathcal E}_2}\) provides the easiest way to satisfy the criteria in the high mass case M ≫ M_{⊙}.
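The two criteria can be evaluated numerically from the quantities defined above. In this sketch the measurement bound is taken as \({{\mathcal E}_k} \le {\eta _c}{C_k}/\rho \), which reproduces the value 9.6 × 10^{−4} quoted in Section 6.5 for C_{1} = 0.24, η = 0.4, ρ = 100; the assumed form \({{\mathcal E}_k} \le {C_k}\sqrt {2{\epsilon _{\max }}} \) of the detection bound is a reconstruction, not a quotation from [203]:

```python
import math

def detection_bound(C_k, eps_max):
    """Assumed form of the detection criterion: E_k <= C_k * sqrt(2 eps_max)."""
    return C_k * math.sqrt(2.0 * eps_max)

def measurement_bound(C_k, eta_c, rho):
    """Measurement criterion: E_k <= eta_c * C_k / rho."""
    return eta_c * C_k / rho

def satisfies(E_k, bound):
    """Does a waveform error norm E_k meet the given accuracy bound?"""
    return E_k <= bound

# Numbers quoted in the text for the most demanding small-mass limit.
C1, eta_c, rho, eps_max = 0.24, 0.4, 100.0, 0.005

meas = measurement_bound(C1, eta_c, rho)  # 9.6e-4, as quoted in Section 6.5
det = detection_bound(C1, eps_max)        # 0.024 under the assumed form
```

Note how the measurement bound tightens linearly with the signal-to-noise ratio ρ, which is why measurement is the more stringent criterion for loud events.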
6.5 A community CCE tool
The importance of accurate waveforms has prompted development of a newly-designed CCE tool [16], which meets the advanced LIGO accuracy standards. Preliminary progress was reported in [17]. The CCE tool is available for use by the numerical relativity community under a general public license as part of the Einstein Toolkit [104]. It can be applied to a generic Cauchy code with extraction radius as small as r = 20 M, which provides flexibility for many applications besides binary black holes, such as waveform extraction from stellar collapse.
The matching interface was streamlined by introducing a pseudospectral decomposition of the Cauchy metric in the neighborhood of the extraction worldtube. This provides economical storage of the boundary data for the characteristic code so that the waveform at \({{\mathcal I}^ +}\) can be obtained in post-processing with a small computational burden compared to the Cauchy evolution. The new version incorporates stereographic grids with circular patch boundaries [13], which eliminate the large error from the corners of the square patches used previously. The finite-difference accuracy of the angular derivatives was increased to fourth order. Bugs were eliminated that had been introduced in the process of parallelizing the code using the Cactus framework [292]. In addition, the worldtube module, which supplies the inner boundary data for the characteristic evolution, was revamped so that it provides a consistent, second-order-accurate startup algorithm for numerically-generated Cauchy data. The prior module required differentiable Cauchy data, as provided by analytic testbeds, in order to be consistent with convergence.
These changes led to clean second-order convergence of all evolved quantities at finite locations. Because some of the hypersurface equations become degenerate at \({{\mathcal I}^ +}\), certain asymptotic quantities, in particular the Bondi news function, are only first-order accurate. However, the clean first-order convergence allows the application of Richardson extrapolation, based upon three characteristic grid sizes, to extract waveforms with third-order accuracy.
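The Richardson step can be sketched generically (this is not the PITT code's implementation): with clean first-order convergence the computed quantity behaves as F + a h + b h², so values at three grid sizes determine the continuum limit F with the leading and sub-leading error terms eliminated, leaving an O(h³) residual:

```python
import numpy as np

def richardson_extrapolate(values, spacings):
    """Given a quantity computed at three grid spacings, with assumed
    error expansion f(h) = F + a*h + b*h^2 + O(h^3) (first-order
    convergence), solve for F, the third-order-accurate extrapolant."""
    h = np.asarray(spacings, dtype=float)
    V = np.vander(h, 3, increasing=True)     # columns: 1, h, h^2
    F, a, b = np.linalg.solve(V, np.asarray(values, dtype=float))
    return F

# Synthetic check with a known continuum value F = 1.0 and a
# hypothetical error expansion including an uncancelled h^3 term.
f = lambda h: 1.0 + 0.3 * h - 0.2 * h**2 + 0.05 * h**3
hs = [0.1, 0.05, 0.025]
F_extrap = richardson_extrapolate([f(h) for h in hs], hs)
# The residual error of F_extrap is O(h^3), far below the O(h)
# error of even the finest grid.
```

The same combination applied pointwise along the waveform yields the third-order-accurate news function described above.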
The error norm for the extracted news function, \({{\mathcal E}_1}(N)\) as defined in Equation (85), has been measured for the simulation of the inspiral of equal-mass, nonspinning black holes obtained via a BSSN simulation [16]. The advanced LIGO criterion for detection (87) was satisfied for ϵ_{max} = 0.005 (which corresponds to less than a 10% signal loss) and for values of C_{1} throughout the entire binary mass range. The criterion (88) for measurement is more stringent. For the expected lower bound of the calibration factor η_{min} = 0.4, for C_{1} = 0.24 (corresponding to the most demanding small mass limit) and for the most optimistic advanced LIGO signal-to-noise ratio ρ = 100, the requirement for measurement is \({{\mathcal E}_1}(N)\; \le 9.6 \times {10^{-4}}\). This measurement criterion was satisfied throughout the entire binary mass range by the numerical truncation error \({{\mathcal E}_1}(N)\) in the CCE waveform.
These detection and measurement criteria were satisfied for a range of extraction worldtubes extending from R_{E} = 20 M to R_{E} = 100 M. The \({{\mathcal E}_1}(N)\) error norm decreased with larger extraction radius, as expected since the error introduced by characteristic evolution depends upon the size of the integration region between the extraction worldtube and \({{\mathcal I}^ +}\). However, the modeling error corresponding to the difference in waveforms obtained with extraction at R_{E} = 50 M as compared to R_{E} = 100 M only satisfied the measurement criterion for signal-to-noise ratios ρ < 25 (which would still cover the most likely advanced LIGO events). This modeling error results from the different initial data, which correspond to different extraction radii. This error would be smaller for longer simulations with a higher number of orbits. The results suggest that the choice of extraction radius should be balanced between a sufficiently large radius to reduce initialization effects and a sufficiently small radius where the Cauchy grid is more highly refined and outer boundary effects are better isolated.
6.6 Initial characteristic data for CCE
Data on the initial null hypersurface must be prescribed to begin the characteristic evolution. These data consist of the conformal 2-metric h_{AB} of the null hypersurface. Because of the determinant condition (35), the data can be formulated in terms of the spin-weight-2 variable J given in (47). In the first applications of CCE, it was expedient to set J = 0 on the initial hypersurface outside some radius. This necessitated a transition region to obtain continuity with the initial Cauchy data, which requires nonzero initial characteristic data at the extraction worldtube.
In [16] the initialization was changed by requiring that the Newman-Penrose component of the Weyl tensor intrinsic to the initial null hypersurface vanish, i.e., by setting ψ_{0} = 0. This approach is dual to the technique of using ψ_{4} to extract outgoing gravitational waves. For a linear perturbation of the Schwarzschild metric, this ψ_{0} condition eliminates incoming radiation crossing the initial null hypersurface. Since ψ_{0} consists of a second radial derivative of the characteristic data, the condition allows both continuity of J at the extraction worldtube and the desired asymptotic falloff of J at infinity. In the linearized limit, setting ψ_{0} = 0 reduces to (∂_{ℓ})^{2} J = 0, in terms of the compactified radial coordinate ℓ = 1/r. In terms of the compactified grid coordinate x = r/(R_{E} + r) (where R_{E} is the Cartesian radius of the extraction worldtube defined by the Cauchy coordinates), the corresponding solution is
where \(J{\vert_{{x_E}}}\) is determined by Cauchy data at the extraction worldtube. Since this solution also implies \(J{\vert_{{\mathcal I} +}} = J{\vert_{x = 1}} = 0\), \({{\mathcal I}^ +}\) has unitsphere geometry so that the conformal gauge effects discussed in Section 6.1 are minimized at the outset of the evolution.
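The form of this solution can be reconstructed from the stated conditions (this is a sketch consistent with the surrounding text, not necessarily the paper's exact notation). The general solution of \((\partial_\ell)^2 J = 0\) is linear in \(\ell = 1/r\), and \(J|_{\mathcal{I}^+} = 0\) kills the constant part:

```latex
% Solve (d/d\ell)^2 J = 0 with \ell = 1/r, so J = \alpha + \beta/r.
% Vanishing at infinity (x = 1, \ell = 0) forces \alpha = 0, and
% x = r/(R_E + r) gives 1/r = (1-x)/(R_E x), hence
\begin{equation}
  J(x) \;=\; \frac{\beta\,(1-x)}{R_E\, x}
        \;=\; J\big|_{x_E}\,\frac{x_E\,(1-x)}{x\,(1-x_E)},
\end{equation}
% where the constant \beta is fixed by matching the Cauchy value
% J|_{x_E} at the extraction worldtube x = x_E, and J vanishes at
% \mathcal{I}^+ (x = 1) as required.
```

This interpolation between the worldtube value and zero at \({{\mathcal I}^ +}\) is what underlies the unit-sphere geometry noted in the following sentence of the text.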
Besides the extraneous radiation content in the characteristic initial data there is also extraneous “junk” radiation in the initial Cauchy data for the binary black hole simulation. Practical experience indicates that the effect of this “junk” radiation on the waveform is transient and becomes negligible by the onset of the plunge and merger stage. However, another source of waveform error, with potentially longer-lasting consequences, can arise from a mismatch between the initial characteristic and Cauchy data. This mismatch arises because the characteristic data is given on the outgoing null hypersurface emanating from the intersection of the extraction worldtube and the initial Cauchy hypersurface. Since in CCE the extraction worldtube cannot be located at the outer Cauchy boundary, part of the initial null hypersurface lies in the domain of dependence of the initial Cauchy data. Thus, a free prescription of the characteristic data can be inconsistent with the Cauchy data.
The initial characteristic data ψ_{0} = 0 implies the absence of radiation on the assumption that the geometry of the initial null hypersurface is close to Schwarzschild. This assumption becomes valid as the extraction radius becomes large and the exterior Cauchy data can be approximated by Schwarzschild data. Thus, this mismatch could in principle be reduced by a sufficiently large choice of extraction worldtube. However, that approach is counterproductive to the savings that CCE can provide.
An alternative approach developed in [58] attempts to alleviate this problem by constructing a solution linearized about Minkowski space. The linearized solution is modeled upon binary black-hole initial Cauchy data. By evaluating the solution on the initial characteristic null hypersurface, this solves the compatibility issue up to curved space effects. A comparison study based upon this approach shows that the choice of J = 0 initial data does affect the waveform on time scales that extend long after the burst of junk radiation has passed. Although this study is restricted to CCE extraction radii R > 100 M and does not explore the additional benefits of the more gauge invariant ψ_{0} = 0 initial data implemented in [16], it emphasizes the need to control potential long-term effects that might result from a mismatch between the Cauchy and characteristic initial data.
Ideally, this mismatch could be eliminated by placing the extraction worldtube at the artificial outer boundary of the Cauchy evolution by means of a transparent interface with the outer characteristic evolution. This is the ultimate goal of CCM, although a formidable amount of work remains to develop a stable implementation.
7 Numerical Hydrodynamics on Null Cones
Numerical evolution of relativistic hydrodynamics has been traditionally carried out on spacelike Cauchy hypersurfaces. Although the Bondi-Sachs evolution algorithm can easily be extended to include matter [176], the advantage of a light cone approach for treating fluids is not as apparent as for a massless field whose physical characteristics lie on the light cone. However, results from recent studies of relativistic stars and of fluid sources moving in the vicinity of a black hole indicate that this approach can provide accurate simulations of astrophysical relevance such as supernova collapse to a black hole, mass accretion, and the production of gravitational waves.
7.1 Spherically-symmetric hydrodynamic codes
The earliest fully general relativistic simulations of fluids were carried out in spherical symmetry. The first major work was a study of gravitational collapse by May and White [210]. Most of this early work was carried out using Cauchy evolution [109]. Miller and Mota [211] performed the first simulations of spherically-symmetric gravitational collapse using a null foliation. Baumgarte, Shapiro and Teukolsky subsequently used a null slicing to study supernovae [39] and the collapse of neutron stars to form black holes [40]. The use of a null slicing allowed them to evolve the exterior spacetime while avoiding the region of singularity formation.
Barreto’s group in Venezuela applied characteristic methods to study the self-similar collapse of spherical matter and charge distributions [26, 30, 27]. The assumption of self-similarity reduces the problem to a system of ODEs, subject to boundary conditions determined by matching to an exterior Reissner-Nordström-Vaidya solution. Heat flow in the internal fluid is balanced at the surface by the Vaidya radiation. Their simulations illustrate how a nonzero total charge can halt gravitational collapse and produce a final stable equilibrium [27]. It is interesting that the pressure vanishes in the final equilibrium state so that hydrostatic support is completely supplied by Coulomb repulsion. In subsequent work [24, 25], they applied their characteristic code to the evolution of a polytropic fluid sphere coupled to a scalar radiation field to study the central equation of state, conservation of the Newman-Penrose constant, the scattering of the scalar radiation off the polytrope and its late time decay. The work illustrates how characteristic evolution can be used to simulate radiation from a matter source in the simple context of spherical symmetry.
Font and Papadopoulos [223] have given a state-of-the-art treatment of relativistic fluids, which is applicable to either spacelike or null foliations. Their approach is based upon a high-resolution shock-capturing (HRSC) version of relativistic hydrodynamics in flux-conservative form, which was developed by the Valencia group (for a review see [109]). In the HRSC scheme, the hydrodynamic equations are written in flux-conservative, hyperbolic form. In each computational cell, the system of equations is diagonalized to determine the characteristic fields and velocities, and the local Riemann problem is solved to obtain a solution consistent with physical discontinuities. This allows a finite-differencing scheme along the characteristics of the fluid that preserves the conserved physical quantities and leads to a stable and accurate treatment of shocks. Because the general relativistic system of hydrodynamical equations is formulated in covariant form, it can equally well be applied to spacelike or null foliations of the spacetime. The null formulation gave remarkable performance in the standard Riemann shock tube test carried out in a Minkowski background. The code was successfully implemented first in the case of spherical symmetry, using a version of the Bondi-Sachs formalism adapted to describe gravity coupled to matter with a worldtube boundary [288]. They verified second-order convergence in curved space tests based upon Tolman-Oppenheimer-Volkoff equilibrium solutions for spherical fluids. In the dynamic self-gravitating case, simulations of spherical accretion of a fluid onto a black hole were stable and free of numerical problems. Accretion was successfully carried out in the regime where the mass of the black hole doubled. Subsequently the code was used to study how accretion modulates both the decay rates and oscillation frequencies of the quasinormal modes of the interior black hole [224].
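The core idea of such a scheme, a conservative finite-volume update with an approximate Riemann solver at each cell interface, can be illustrated far more simply than in the relativistic case. The sketch below uses the 1D Burgers equation with an HLL flux as a stand-in for the Valencia fluxes; it is a schematic of the HRSC structure, not the characteristic hydrodynamics code itself:

```python
import numpy as np

def hll_flux(uL, uR):
    """HLL approximate Riemann solver for Burgers' equation
    u_t + (u^2/2)_x = 0, a minimal stand-in for the interface Riemann
    problems solved in HRSC schemes."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    sL = np.minimum(uL, uR)                  # left signal-speed estimate
    sR = np.maximum(uL, uR)                  # right signal-speed estimate
    denom = np.where(sR > sL, sR - sL, 1.0)  # avoid 0/0 where uL == uR
    f_star = (sR * fL - sL * fR + sL * sR * (uR - uL)) / denom
    return np.where(sL >= 0, fL, np.where(sR <= 0, fR, f_star))

def step(u, dx, dt):
    """One conservative finite-volume update with outflow boundaries."""
    uL = np.concatenate(([u[0]], u))         # states left of interfaces
    uR = np.concatenate((u, [u[-1]]))        # states right of interfaces
    F = hll_flux(uL, uR)
    return u - dt / dx * (F[1:] - F[:-1])

# Shock-tube-like initial data: a right-moving shock (speed 1/2) forms
# and is captured without spurious oscillations.
N = 200
dx = 1.0 / N
dt = 0.5 * dx                                # CFL factor 0.5
x = (np.arange(N) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)
for _ in range(100):                         # evolve to t = 0.25
    u = step(u, dx, dt)
```

The conservative update guarantees that the captured shock propagates at the correct speed, which is the property that made the HRSC approach attractive on null as well as spacelike foliations.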
The characteristic hydrodynamic approach of Font and Papadopoulos was first applied to spherically-symmetric problems of astrophysical interest. Linke, Font, Janka, Müller, and Papadopoulos [206] simulated the spherical collapse of supermassive stars, using an equation of state that included the effects due to radiation, electron-positron pair formation, and neutrino emission. They were able to follow the collapse from the onset of instability to black-hole formation. The simulations showed that collapse of a star with mass greater than 5 × 10^{5} solar masses does not produce enough radiation to account for the gamma-ray bursts observed at cosmological redshifts.
Next, Siebel, Font, and Papadopoulos [272] studied the interaction of a massless scalar field with a neutron star by means of the coupled Klein-Gordon-Einstein-hydrodynamic equations. They analyzed the nonlinear scattering of a compact ingoing scalar pulse incident on a spherical neutron star in an initial equilibrium state obeying the null version of the Tolman-Oppenheimer-Volkoff equations. Depending upon the initial mass and radius of the star, the scalar field either excites radial pulsation modes or triggers collapse to a black hole. The transfer of scalar energy to the star was found to increase with the compactness of the star. The approach included a compactification of null infinity, where the scalar radiation was computed. The scalar waveform showed quasinormal oscillations before settling down to a late-time power-law decay in good agreement with the t^{−3} dependence predicted by linear theory. Global energy balance between the star’s relativistic mass and the scalar energy radiated to infinity was confirmed.
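A late-time power-law tail like the t^{−3} fall-off quoted above can be read off a waveform by a log-log fit. The sketch below is purely illustrative (the synthetic signal, window, and amplitudes are assumptions standing in for the scalar field at null infinity): it superposes quasinormal ringing on a t^{−3} tail and recovers the exponent by least squares.

```python
import numpy as np

# Synthetic late-time signal: quasinormal ringing superposed on a t^-3 tail
t = np.linspace(1.0, 400.0, 4000)
ringing = np.exp(-0.3 * t) * np.cos(2.0 * t)   # dies away exponentially
tail = 5.0 * t**-3                              # power-law tail dominates late
phi = ringing + tail

# Fit log|phi| against log t over a late-time window where the tail dominates
late = t > 100.0
p, logA = np.polyfit(np.log(t[late]), np.log(np.abs(phi[late])), 1)
# The slope p estimates the power-law exponent, here close to -3
```

The same fit applied to numerical waveform data gives a direct check of agreement with the exponent predicted by linear theory.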
7.2 Axisymmetric characteristic hydrodynamic simulations
The approach initiated by Font and Papadopoulos has been applied in axisymmetry to pioneering studies of gravitational waves from relativistic stars. The gravitational field is treated by the original Bondi formalism using the axisymmetric code developed by Papadopoulos [221, 142]. Because of the twist-free property of the axisymmetry in the original Bondi formalism, the fluid motion cannot have a rotational component about the axis of symmetry, i.e., the fluid velocity is constrained to the (r, θ) plane. In his thesis work, Siebel [269] extensively tested the combined hydrodynamic-gravity code in the nonlinear, relativistic regime and demonstrated that it accurately and stably maintained the equilibrium of a neutron star.
As a first application of the code, Siebel, Font, Müller, and Papadopoulos [270] studied axisymmetric pulsations of neutron stars, which were initiated by perturbing the density and θ-component of velocity of a spherically-symmetric equilibrium configuration. The frequencies measured for the radial and nonradial oscillation modes of the star were found to be in good agreement with the results from linearized perturbation studies. The Bondi news function was computed and its amplitude found to be in rough agreement with the value given by the Einstein quadrupole formula. Both computations involve numerical subtleties: The computation of the news involves large terms, which partially cancel to give a small result, and the quadrupole formula requires computing three time derivatives of the fluid variables. These sources of computational error, coupled with ambiguity in the radiation content in the initial data, prevented any definitive conclusions. The total radiated mass loss was approximately 10^{−9} of the total mass.
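The numerical subtlety in the quadrupole-formula comparison is easy to exhibit: a finite-difference third time derivative amplifies data noise by a factor of order 1/Δt³. In the toy sketch below (illustrative only, with sin 2πt standing in for a quadrupole moment), a second-order stencil recovers the third derivative of clean data accurately, while noise of size 10⁻⁸ in the data destroys the result.

```python
import numpy as np

def third_derivative(f, dt):
    # Second-order central stencil:
    # f''' ~ (-f[i-2] + 2 f[i-1] - 2 f[i+1] + f[i+2]) / (2 dt^3)
    return (-f[:-4] + 2.0 * f[1:-3] - 2.0 * f[3:-1] + f[4:]) / (2.0 * dt**3)

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
q = np.sin(2.0 * np.pi * t)                       # stand-in for a quadrupole moment Q(t)
exact = -(2.0 * np.pi)**3 * np.cos(2.0 * np.pi * t[2:-2])

clean_err = np.max(np.abs(third_derivative(q, dt) - exact))

# A tiny amount of noise (1e-8) is amplified by ~1/dt^3 = 1e9 in the stencil
rng = np.random.default_rng(0)
noisy = q + 1e-8 * rng.standard_normal(q.size)
noisy_err = np.max(np.abs(third_derivative(noisy, dt) - exact))
```

The clean error is set by the O(Δt²) truncation of the stencil, while the noisy error is dominated by the amplified data noise; in a hydrodynamic simulation the "noise" is the discretization error of the fluid variables themselves, which is why the quadrupole-formula comparison is so delicate.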
Next, the code was applied to the simulation of axisymmetric supernova core collapse [271]. A hybrid equation of state was used to mimic stiffening at collapse to nuclear densities and shock heating during the bounce. The initial equilibrium state of the core was modeled by a polytrope with index Γ = 4/3. Collapse was initiated by reducing the polytropic index to 1.3. In order to break spherical symmetry, small perturbations were introduced into the θ-component of the fluid velocity. During the collapse phase, the central density increased by five orders of magnitude. At this stage the inner core bounced at supranuclear densities, producing an expanding shock wave, which heated the outer layers. The collapse phase was well approximated by spherical symmetry but nonspherical oscillations were generated by the bounce. The resulting gravitational waves at null infinity were computed by the compactified code. After the bounce, the Bondi news function went through an oscillatory build-up and then decayed in an ℓ = 2 quadrupole mode. However, a comparison with the results predicted by the Einstein quadrupole formula no longer gave the decent agreement found in the case of neutron star pulsations. This discrepancy was speculated to be due to the relativistic velocities of ≈ 0.2 c reached in the core collapse, as opposed to 10^{−4} c for the pulsations. However, gauge effects and numerical errors also make important contributions, which cloud any definitive interpretation. This is the first study of gravitational wave production by the gravitational collapse of a relativistic star carried out with a characteristic code. It is clearly a remarkable piece of work, which offers up a whole new approach to the study of gravitational waves from astrophysical sources.
7.3 Three-dimensional characteristic hydrodynamic simulations
The PITT code has been coupled with a rudimentary matter source to carry out three-dimensional characteristic simulations of a relativistic star orbiting a black hole. A naive numerical treatment of the Einstein-hydrodynamic system for a perfect fluid was incorporated into the code [54], but a more accurate HRSC hydrodynamic algorithm has not yet been implemented. The fully nonlinear matter-gravity null code was tested for stability and accuracy to verify that nothing breaks down as long as the fluid remains well behaved, e.g., hydrodynamic shocks do not form. The code was used to simulate a localized blob of matter falling into a black hole, verifying that the motion of the center of the blob approximates a geodesic and determining the waveform of the emitted gravitational radiation at \({{\mathcal I}^ +}\). This simulation was a prototype of a neutron star orbiting a black hole, although it would be unrealistic to expect that this naive treatment of the fluid could reliably evolve a compact star for several orbits. A 3D HRSC characteristic hydrodynamic code would open the way to explore this important astrophysical problem.
Short-term issues were explored with the code in subsequent work [55]. The code was applied to the problem of determining realistic initial data for a star in circular orbit about a black hole. In either a Cauchy or characteristic approach to this initial data problem, a serious source of physical ambiguity is the presence of spurious gravitational radiation in the gravitational data. Because the characteristic approach is based upon a retarded-time foliation, the resulting spurious outgoing waves can be computed by carrying out a short time evolution. Two very different methods were used to prescribe initial gravitational null data:

1. a Newtonian correspondence method, which guarantees that the Einstein quadrupole formula is satisfied in the Newtonian limit [303], and
2. setting the shear of the initial null hypersurface to zero.
Both methods are mathematically consistent but suffer from physical shortcomings. Method 1 has only approximate validity in the relativistic regime of a star in close orbit about a black hole, while Method 2 completely ignores the gravitational lensing effect of the star. It was found that, independent of the choice of initial gravitational data, the spurious waves quickly radiate away, and the system relaxes to a quasi-equilibrium state with an approximate helical symmetry corresponding to the circular orbit of the star. The results provide justification for recent approaches to initializing the Cauchy problem that are based on imposing an initial helical symmetry, as well as providing a relaxation scheme for obtaining realistic characteristic data.
7.3.1 Massive particle orbiting a black hole
One attractive way to avoid the computational expense of hydrodynamics in treating a star orbiting a massive black hole is to treat the star as a particle. This has been attempted using the PITT code to model a star of mass m orbiting a black hole of much larger mass, say 1000 m [51]. The particle was described by the perfect fluid energy-momentum tensor of a rigid Newtonian polytrope in spherical equilibrium of a fixed size in its local proper rest frame, with its center following a geodesic. The validity of the model requires that the radius of the polytrope be large enough that the assumption of Newtonian equilibrium is valid but small enough that the assumption of rigidity is consistent with the tidal forces produced by the black hole. Characteristic initial gravitational data for a double-null initial value problem were taken to be Schwarzschild data for the black hole. The system was then evolved using a fully nonlinear characteristic code. The evolution equations for the particle were arranged to take computational advantage of the energy and angular momentum conservation laws, which would hold in the test body approximation.
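The conservation laws referred to here are the energy and angular momentum of Schwarzschild geodesics. The following toy integrator (not the PITT code; geometrized units G = c = 1 and an arbitrarily chosen angular momentum are assumptions for illustration) evolves an equatorial geodesic in first-order form and monitors the conserved energy per unit mass, E² = (dr/dτ)² + (1 − 2M/r)(1 + L²/r²), which in the test-body limit stays constant up to truncation error.

```python
import numpy as np

M = 1.0  # black-hole mass (G = c = 1)

def rhs(y, L):
    # Equatorial Schwarzschild geodesic in first-order form:
    # r' = p,  p' = -M/r^2 + L^2/r^3 - 3 M L^2/r^4,  phi' = L/r^2
    r, p, phi = y
    return np.array([p, -M / r**2 + L**2 / r**3 - 3.0 * M * L**2 / r**4, L / r**2])

def energy(r, p, L):
    # Conserved energy per unit mass: E^2 = p^2 + (1 - 2M/r)(1 + L^2/r^2)
    return np.sqrt(p**2 + (1.0 - 2.0 * M / r) * (1.0 + L**2 / r**2))

def rk4(y, L, dtau):
    k1 = rhs(y, L)
    k2 = rhs(y + 0.5 * dtau * k1, L)
    k3 = rhs(y + 0.5 * dtau * k2, L)
    k4 = rhs(y + dtau * k3, L)
    return y + dtau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Mildly eccentric bound orbit starting at r = 10 M
L = 3.6                      # slightly below the circular-orbit value ~3.78
y = np.array([10.0, 0.0, 0.0])
E0 = energy(y[0], y[1], L)
drift = 0.0
dtau = 0.05
for _ in range(20000):       # about 1000 M of proper time
    y = rk4(y, L, dtau)
    drift = max(drift, abs(energy(y[0], y[1], L) - E0))
# drift remains at the level of the RK4 truncation error
```

Monitoring E and L along the numerical trajectory is exactly the kind of diagnostic that the test-body conservation laws make available; in the full characteristic evolution the analogous quantities are only approximately conserved once radiation reaction enters.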
The evolution was robust and could track the particle for two orbits as it spiraled into the black hole. Unfortunately, the computed rate of inspiral was much too large to be physically realistic: the energy loss was ≈ 10^{3} times greater than the value expected from perturbation theory. This discrepancy might have a physical origin, due to the choice of initial gravitational data that ignores the particle or to a breakdown of the rigidity assumption, or a numerical origin, due to improper resolution of the particle. It is a problem whose resolution would require the characteristic AMR techniques being developed [237].
These sources of error can be further aggravated by the introduction of matter fields, as encountered in trying to make definitive comparisons between the Bondi news and the Einstein quadrupole formula in the axisymmetric studies of supernova collapse [271] described in Section 7.2. In the three-dimensional characteristic simulations of a star orbiting a black hole [55, 51], the lack of resolution introduced by a localized star makes an accurate calculation of the news highly problematic. There exists no good testbed for validating the news calculation in the presence of a fluid source. A perturbation analysis in Bondi coordinates of the oscillations of an infinitesimal fluid shell in a Schwarzschild background [47] might prove useful for testing constraint propagation in the presence of a fluid. However, the underlying Fourier mode decomposition requires the gravitational field to be periodic so that the solution cannot be used to test the computation of mass loss or radiation reaction effects.
References
Abrahams, A.M. and Evans, C.R., “Gauge-invariant treatment of gravitational radiation near the source: Analysis and numerical simulations”, Phys. Rev. D, 42, 2585–2594, (1990). [DOI], [ADS]. (Cited on pages 7 and 58.)
Abrahams, A.M. and Price, R.H., “Applying black hole perturbation theory to numerically generated spacetimes”, Phys. Rev. D, 53, 1963–1971, (1996). [DOI], [ADS], [arXiv:gr-qc/9508059]. (Cited on pages 7 and 58.)
Abrahams, A.M., Shapiro, S.L. and Teukolsky, S.A., “Calculation of gravitational waveforms from black hole collisions and disk collapse: Applying perturbation theory to numerical spacetimes”, Phys. Rev. D, 51, 4295–4301, (1995). [DOI], [ADS], [arXiv:gr-qc/9408036]. (Cited on pages 7 and 58.)
Abrahams, A.M. et al. (Binary Black Hole Grand Challenge Alliance), “Gravitational Wave Extraction and Outer Boundary Conditions by Perturbative Matching”, Phys. Rev. Lett., 80, 1812–1815, (1998). [DOI], [ADS], [arXiv:gr-qc/9709082]. (Cited on pages 7 and 58.)
Alcubierre, M. et al., “Towards standard testbeds for numerical relativity”, Class. Quantum Grav., 21, 589–613, (2004). [DOI], [ADS], [arXiv:gr-qc/0305023]. (Cited on page 38.)
Anderson, J.L., “Gravitational radiation damping in systems with compact components”, Phys. Rev. D, 36, 2301–2313, (1987). [DOI], [ADS]. (Cited on page 57.)
Anderson, J.L. and Hobill, D.W., “Matched analytic-numerical solutions of wave equations”, in Centrella, J.M., ed., Dynamical Spacetimes and Numerical Relativity, Proceedings of the Workshop held at Drexel University, October 7–11, 1985, pp. 389–410, (Cambridge University Press, Cambridge; New York, 1986). [ADS]. (Cited on page 57.)
Anderson, J.L. and Hobill, D.W., “Mixed analytic-numerical solutions for a simple radiating system”, Gen. Relativ. Gravit., 19, 563–580, (1987). [DOI], [ADS]. (Cited on page 57.)
Anderson, J.L. and Hobill, D.W., “A study of nonlinear radiation damping by matching analytic and numerical solutions”, J. Comput. Phys., 75, 283–299, (1988). [DOI], [ADS]. (Cited on page 57.)
Anderson, J.L., Kates, R.E., Kegeles, L.S. and Madonna, R.G., “Divergent integrals of post-Newtonian gravity: Nonanalytic terms in the near-zone expansion of a gravitationally radiating system found by matching”, Phys. Rev. D, 25, 2038–2048, (1982). [DOI], [ADS]. (Cited on page 57.)
Anninos, P., Daues, G., Massó, J., Seidel, E. and Suen, W.-M., “Horizon boundary conditions for black hole spacetimes”, Phys. Rev. D, 51, 5562–5578, (1995). [DOI], [ADS], [arXiv:gr-qc/9412069]. (Cited on page 61.)
Arnowitt, R., Deser, S. and Misner, C.W., “The dynamics of general relativity”, in Witten, L., ed., Gravitation: An Introduction to Current Research, pp. 227–265, (Wiley, New York; London, 1962). [DOI], [ADS], [arXiv:gr-qc/0405109]. (Cited on page 50.)
Babiuc, M.C., Bishop, N.T., Szilágyi, B. and Winicour, J., “Strategies for the characteristic extraction of gravitational waveforms”, Phys. Rev. D, 79, 084011, (2009). [DOI], [ADS], [arXiv:0808.0861 [gr-qc]]. (Cited on pages 35, 66, 68, 69, and 74.)
Babiuc, M.C., Kreiss, H.-O. and Winicour, J., “Constraint-preserving Sommerfeld conditions for the harmonic Einstein equations”, Phys. Rev. D, 75, 044002, (2007). [DOI], [ADS], [arXiv:gr-qc/0612051]. (Cited on page 57.)
Babiuc, M.C., Szilágyi, B., Hawke, I. and Zlochower, Y., “Gravitational wave extraction based on Cauchy-characteristic extraction and characteristic evolution”, Class. Quantum Grav., 22, 5089–5107, (2005). [DOI], [ADS], [arXiv:gr-qc/0501008]. (Cited on page 58.)
Babiuc, M.C., Szilágyi, B., Winicour, J. and Zlochower, Y., “Characteristic extraction tool for gravitational waveforms”, Phys. Rev. D, 84, 044057, (2011). [DOI], [ADS], [arXiv:1011.4223 [gr-qc]]. (Cited on pages 39, 74, and 75.)
Babiuc, M.C., Winicour, J. and Zlochower, Y., “Binary black hole waveform extraction at null infinity”, Class. Quantum Grav., 28, 134006, (2011). [DOI], [ADS], [arXiv:1106.4841 [gr-qc]]. (Cited on page 74.)
Babiuc, M.C. et al., “Implementation of standard testbeds for numerical relativity”, Class. Quantum Grav., 25, 125012, (2008). [DOI], [ADS], [arXiv:0709.3559 [gr-qc]]. (Cited on pages 50 and 51.)
Baker, J., Campanelli, M., Lousto, C.O. and Takahashi, R., “Modeling gravitational radiation from coalescing binary black holes”, Phys. Rev. D, 65, 124012, 1–23, (2002). [DOI], [ADS], [arXiv:astro-ph/0202469]. (Cited on page 66.)
Baker, J.G., Centrella, J., Choi, D.-I., Koppitz, M. and van Meter, J.R., “Binary black hole merger dynamics and waveforms”, Phys. Rev. D, 73, 104002, (2006). [DOI], [ADS], [arXiv:gr-qc/0602026]. (Cited on page 66.)
Baker, J.G., Centrella, J., Choi, D.-I., Koppitz, M. and van Meter, J.R., “Gravitational-Wave Extraction from an Inspiraling Configuration of Merging Black Holes”, Phys. Rev. Lett., 96, 111102, (2006). [DOI], [ADS], [arXiv:gr-qc/0511103]. (Cited on page 7.)
Balean, R., The Null-Timelike Boundary Problem, Ph.D. Thesis, (University of New England, Armidale, NSW, Australia, 1996). (Cited on page 13.)
Balean, R., “The null-timelike boundary problem for the linear wave equation”, Commun. Part. Diff. Eq., 22, 1325–1360, (1997). [DOI]. (Cited on page 13.)
Barreto, W., Castillo, L. and Barrios, E., “Central equation of state in spherical characteristic evolutions”, Phys. Rev. D, 80, 084007, (2009). [DOI], [ADS], [arXiv:0909.4500 [gr-qc]]. (Cited on page 76.)
Barreto, W., Castillo, L. and Barrios, E., “Bondian frames to couple matter with radiation”, Gen. Relativ. Gravit., 42, 1845–1862, (2010). [DOI], [ADS], [arXiv:1002.4168 [gr-qc]]. (Cited on page 76.)
Barreto, W. and Da Silva, A., “Gravitational collapse of a charged and radiating fluid ball in the diffusion limit”, Gen. Relativ. Gravit., 28, 735–747, (1996). [DOI], [ADS]. (Cited on page 76.)
Barreto, W. and Da Silva, A., “Self-similar and charged spheres in the diffusion approximation”, Class. Quantum Grav., 16, 1783–1792, (1999). [DOI], [ADS], [arXiv:gr-qc/0508055]. (Cited on page 76.)
Barreto, W., Da Silva, A., Gómez, R., Lehner, L., Rosales, L. and Winicour, J., “Three-dimensional Einstein-Klein-Gordon system in characteristic numerical relativity”, Phys. Rev. D, 71, 064028, (2005). [DOI], [ADS], [arXiv:gr-qc/0412066]. (Cited on page 49.)
Barreto, W., Gómez, R., Lehner, L. and Winicour, J., “Gravitational instability of a kink”, Phys. Rev. D, 54, 3834–3839, (1996). [DOI], [ADS], [arXiv:gr-qc/0507086]. (Cited on page 20.)
Barreto, W., Peralta, C. and Rosales, L., “Equation of state and transport processes in self-similar spheres”, Phys. Rev. D, 59, 024008, (1998). [DOI], [ADS], [arXiv:gr-qc/0508054]. (Cited on page 76.)
Bartnik, R., “Einstein equations in the null quasispherical gauge”, Class. Quantum Grav., 14, 2185–2194, (1997). [DOI], [ADS], [arXiv:gr-qc/9611045]. (Cited on page 33.)
Bartnik, R., “Shear-free null quasispherical spacetimes”, J. Math. Phys., 38, 5774–5791, (1997). [DOI], [ADS], [arXiv:gr-qc/9705079]. (Cited on page 33.)
Bartnik, R., “Interaction of gravitational waves with a black hole”, in De Wit, D., Bracken, A.J., Gould, M.D. and Pearce, P.A., eds., XIIth International Congress of Mathematical Physics (ICMP ’97), The University of Queensland, Brisbane, 13–19 July 1997, pp. 3–14, (International Press, Somerville, 1999). (Cited on pages 28, 40, and 41.)
Bartnik, R., “Assessing accuracy in a numerical Einstein solver”, in Weinstein, G. and Weikard, R., eds., Differential Equations and Mathematical Physics, Proceedings of an international conference held at the University of Alabama in Birmingham, March 16–20, 1999, AMS/IP Studies in Advanced Mathematics, 16, p. 11, (American Mathematical Society; International Press, Providence, RI, 2000). (Cited on page 39.)
Bartnik, R. and Norton, A.H., “Numerical solution of the Einstein equations”, in Noye, B.J., Teubner, M.D. and Gill, A.W., eds., Computational Techniques and Applications: CTAC 97, The Eighth Biennial Conference, The University of Adelaide, Australia, 29 September–1 October 1997, p. 91, (World Scientific, Singapore; River Edge, NJ, 1998). (Cited on page 39.)
Bartnik, R. and Norton, A.H., “Numerical Methods for the Einstein Equations in Null Quasi-Spherical Coordinates”, SIAM J. Sci. Comput., 22, 917–950, (2000). [DOI]. (Cited on pages 8, 28, 30, 32, and 39.)
Bartnik, R. and Norton, A.H., “Numerical Experiments at Null Infinity”, in Friedrich, H. and Frauendiener, J., eds., The Conformal Structure of Space-Time: Geometry, Analysis, Numerics, Proceedings of the international workshop, Tübingen, Germany, 2–4 April 2001, Lecture Notes in Physics, 604, pp. 313–326, (Springer, Berlin; New York, 2002). [DOI], [ADS]. (Cited on page 41.)
Baumgarte, T.W. and Shapiro, S.L., “Numerical integration of Einstein’s field equations”, Phys. Rev. D, 59, 024007, (1998). [DOI], [ADS], [arXiv:gr-qc/9810065]. (Cited on pages 7 and 70.)
Baumgarte, T.W., Shapiro, S.L. and Teukolsky, S.A., “Computing Supernova Collapse to Neutron Stars and Black Holes”, Astrophys. J., 443, 717–734, (1995). [DOI], [ADS]. (Cited on page 76.)
Baumgarte, T.W., Shapiro, S.L. and Teukolsky, S.A., “Computing the Delayed Collapse of Hot Neutron Stars to Black Holes”, Astrophys. J., 458, 680–691, (1996). [DOI], [ADS]. (Cited on page 76.)
Bayliss, A. and Turkel, E., “Radiation boundary conditions for wavelike equations”, Commun. Pure Appl. Math., 33, 707–725, (1980). [DOI], [ADS]. (Cited on page 53.)
Berger, B.K., “Numerical Approaches to Spacetime Singularities”, Living Rev. Relativity, 5, lrr-2002-1, (2002). URL (accessed 20 July 2005): http://www.livingreviews.org/lrr-2002-1. (Cited on page 16.)
Bičák, J., Reilly, P. and Winicour, J., “Boost-rotation symmetric gravitational null cone data”, Gen. Relativ. Gravit., 20, 171–181, (1988). [DOI], [ADS]. (Cited on pages 39 and 62.)
Bičák, J. and Schmidt, B.G., “Asymptotically flat radiative spacetimes with boost-rotation symmetry: the general structure”, Phys. Rev. D, 40, 1827–1853, (1989). (Cited on page 39.)
Bishop, N.T., “Some aspects of the characteristic initial value problem in numerical relativity”, in d’Inverno, R.A., ed., Approaches to Numerical Relativity, Proceedings of the International Workshop on Numerical Relativity, Southampton, December 1991, pp. 20–33, (Cambridge University Press, Cambridge; New York, 1992). [ADS]. (Cited on pages 55 and 56.)
Bishop, N.T., “Numerical relativity: combining the Cauchy and characteristic initial value problems”, Class. Quantum Grav., 10, 333–341, (1993). [DOI], [ADS]. (Cited on pages 50 and 55.)
Bishop, N.T., “Linearized solutions of the Einstein equations within a Bondi-Sachs framework, and implications for boundary conditions in numerical simulations”, Class. Quantum Grav., 22, 2393–2406, (2005). [DOI], [ADS], [arXiv:gr-qc/0412006]. (Cited on pages 39 and 79.)
Bishop, N.T. and Deshingkar, S.S., “New approach to calculating the news”, Phys. Rev. D, 68, 024031, (2003). [DOI], [ADS], [arXiv:gr-qc/0303021]. (Cited on page 69.)
Bishop, N.T., Gómez, R., Holvorcem, P.R., Matzner, R.A., Papadopoulos, P. and Winicour, J., “Cauchy-Characteristic Matching: A New Approach to Radiation Boundary Conditions”, Phys. Rev. Lett., 76, 4303–4306, (1996). [DOI], [ADS]. (Cited on pages 56 and 62.)
Bishop, N.T., Gómez, R., Holvorcem, P.R., Matzner, R.A., Papadopoulos, P. and Winicour, J., “Cauchy-Characteristic Evolution and Waveforms”, J. Comput. Phys., 136, 140–167, (1997). [DOI], [ADS]. Erratum: J. Comput. Phys., 148, 299–301, DOI:10.1006/jcph.1998.6139. (Cited on pages 53, 56, and 62.)
Bishop, N.T., Gómez, R., Husa, S., Lehner, L. and Winicour, J., “Numerical relativistic model of a massive particle in orbit near a Schwarzschild black hole”, Phys. Rev. D, 68, 084015, (2003). [DOI], [ADS], [arXiv:gr-qc/0301060]. (Cited on pages 66, 78, and 79.)
Bishop, N.T., Gómez, R., Isaacson, R.A., Lehner, L., Szilágyi, B. and Winicour, J., “Cauchy-characteristic matching”, in Bhawal, B. and Iyer, B.R., eds., Black Holes, Gravitational Radiation and the Universe: Essays in Honour of C.V. Vishveshwara, Fundamental Theories of Physics, pp. 383–408, (Kluwer, Dordrecht; Boston, 1999). [ADS], [arXiv:gr-qc/9801070]. (Cited on pages 55 and 63.)
Bishop, N.T., Gómez, R., Lehner, L., Maharaj, M. and Winicour, J., “High-powered gravitational news”, Phys. Rev. D, 56, 6298–6309, (1997). [DOI], [ADS], [arXiv:gr-qc/9708065]. (Cited on pages 30, 38, 39, 65, and 66.)
Bishop, N.T., Gómez, R., Lehner, L., Maharaj, M. and Winicour, J., “The incorporation of matter into characteristic numerical relativity”, Phys. Rev. D, 60, 024005, (1999). [DOI], [ADS], [arXiv:gr-qc/9901056]. (Cited on page 78.)
Bishop, N.T., Gómez, R., Lehner, L., Maharaj, M. and Winicour, J., “Characteristic initial data for a star orbiting a black hole”, Phys. Rev. D, 72, 024002, (2005). [DOI], [ADS], [arXiv:gr-qc/0412080]. (Cited on pages 49, 78, and 79.)
Bishop, N.T., Gómez, R., Lehner, L. and Winicour, J., “Cauchy-characteristic extraction in numerical relativity”, Phys. Rev. D, 54, 6153–6165, (1996). [DOI], [ADS], [arXiv:gr-qc/9705033]. (Cited on pages 30, 33, 38, 39, 55, 58, and 66.)
Bishop, N.T. and Haines, P., “Observational cosmology and numerical relativity”, Quaest. Math., 19, 259–274, (1996). [DOI]. (Cited on page 22.)
Bishop, N.T., Pollney, D. and Reisswig, C., “Initial data transients in binary black hole evolutions”, Class. Quantum Grav., 28, 155019, (2011). [DOI], [ADS], [arXiv:1101.5492 [gr-qc]]. (Cited on page 75.)
Bishop, N.T. and Venter, L.R., “Kerr metric in Bondi-Sachs form”, Phys. Rev. D, 73, 084023, (2006). [DOI], [ADS], [arXiv:gr-qc/0506077]. (Cited on page 42.)
Bizoń, P., “Equivariant Self-Similar Wave Maps from Minkowski Spacetime into 3-Sphere”, Commun. Math. Phys., 215, 45–56, (2000). [DOI], [ADS], [arXiv:math-ph/9910026]. (Cited on page 19.)
Blaschak, J.G. and Kriegsmann, G.A., “A comparative study of absorbing boundary conditions”, J. Comput. Phys., 77, 109–139, (1988). [DOI], [ADS]. (Cited on page 53.)
Bondi, H., “Gravitational waves in general relativity”, Nature, 186, 535, (1960). [DOI], [ADS]. (Cited on pages 7 and 11.)
Bondi, H., van der Burg, M.G.J. and Metzner, A.W.K., “Gravitational Waves in General Relativity. VII. Waves from Axi-Symmetric Isolated Systems”, Proc. R. Soc. London, Ser. A, 269, 21–52, (1962). [DOI], [ADS]. (Cited on pages 7, 11, 24, 25, and 33.)
Brady, P.R., Chambers, C.M. and Gonçalves, S.M.C.V., “Phases of massive scalar field collapse”, Phys. Rev. D, 56, R6057–R6061, (1997). [DOI], [ADS], [arXiv:gr-qc/9709014]. (Cited on page 19.)
Brady, P.R., Chambers, C.M., Krivan, W. and Laguna, P., “Telling tails in the presence of a cosmological constant”, Phys. Rev. D, 55, 7538–7545, (1997). [DOI], [ADS], [arXiv:gr-qc/9611056]. (Cited on page 21.)
Brady, P.R. and Smith, J.D., “Black Hole Singularities: A Numerical Approach”, Phys. Rev. Lett., 75, 1256–1259, (1995). [DOI], [ADS], [arXiv:gr-qc/950607]. (Cited on page 20.)
Brizuela, D., Martín-García, J.M. and Tiglio, M., “A complete gauge-invariant formalism for arbitrary second-order perturbations of a Schwarzschild black hole”, Phys. Rev. D, 80, 024021, (2009). [DOI], [arXiv:0903.1134]. (Cited on page 49.)
Browning, G.L., Hack, J.J. and Swarztrauber, P.N., “A Comparison of Three Numerical Methods for Solving Differential Equations on the Sphere”, Mon. Weather Rev., 117, 1058–1075, (1989). [DOI], [ADS]. (Cited on pages 30 and 31.)
Buchman, L.T. and Sarbach, O., “Towards absorbing outer boundaries in general relativity”, Class. Quantum Grav., 23, 6709–6744, (2006). [DOI], [arXiv:gr-qc/0608051]. (Cited on page 52.)
Burke, W.L., “Gravitational Radiation Damping of Slowly Moving Systems Calculated Using Matched Asymptotic Expansions”, J. Math. Phys., 12, 401–418, (1971). [DOI], [ADS]. (Cited on page 57.)
Burko, L.M., “Structure of the Black Hole’s Cauchy-Horizon Singularity”, Phys. Rev. Lett., 79, 4958–4961, (1997). [DOI], [ADS], [arXiv:gr-qc/9710112]. (Cited on pages 20 and 23.)
Burko, L.M. and Ori, A., “Late-time evolution of nonlinear gravitational collapse”, Phys. Rev. D, 56, 7820–7832, (1997). [DOI], [ADS], [arXiv:gr-qc/9703067]. (Cited on page 21.)
Butler, D.S., “The Numerical Solution of Hyperbolic Systems of Partial Differential Equations in Three Independent Variables”, Proc. R. Soc. London, Ser. A, 255, 232–252, (1960). [DOI], [ADS]. (Cited on page 23.)
Calabrese, G., Lehner, L. and Tiglio, M., “Constraint-preserving boundary conditions in numerical relativity”, Phys. Rev. D, 65, 104031, (2002). [DOI], [ADS], [arXiv:gr-qc/0111003]. (Cited on page 56.)
Calabrese, G., Pullin, J., Reula, O., Sarbach, O. and Tiglio, M., “Well Posed Constraint-Preserving Boundary Conditions for the Linearized Einstein Equations”, Commun. Math. Phys., 240, 377–395, (2003). [DOI], [ADS], [arXiv:gr-qc/0209017]. (Cited on page 56.)
Calabrese, G., Pullin, J., Sarbach, O. and Tiglio, M., “Convergence and stability in numerical relativity”, Phys. Rev. D, 66, 041501(R), (2002). [DOI], [arXiv:gr-qc/0207018]. (Cited on page 50.)
Campanelli, M., Gómez, R., Husa, S., Winicour, J. and Zlochower, Y., “Close limit from a null point of view: The advanced solution”, Phys. Rev. D, 63, 124013, (2001). [DOI], [ADS], [arXiv:gr-qc/0012107]. (Cited on pages 45, 46, and 58.)
Campanelli, M., Lousto, C.O., Marronetti, P. and Zlochower, Y., “Accurate Evolutions of Orbiting Black-Hole Binaries without Excision”, Phys. Rev. Lett., 96, 111101, (2006). [DOI], [ADS], [arXiv:gr-qc/0511048]. (Cited on pages 7 and 66.)
Choptuik, M.W., “‘Critical’ behavior in massless scalar field collapse”, in d’Inverno, R.A., ed., Approaches to Numerical Relativity, Proceedings of the International Workshop on Numerical Relativity, Southampton, December 1991, pp. 202–222, (Cambridge University Press, Cambridge; New York, 1992). [ADS]. (Cited on page 16.)
Choptuik, M.W., “Universality and scaling in gravitational collapse of a massless scalar field”, Phys. Rev. Lett., 70, 9–12, (1993). [DOI], [ADS]. (Cited on pages 16 and 60.)
Choquet-Bruhat, Y., Chruściel, P.T. and Martín-García, J.M., “An existence theorem for the Cauchy problem on a characteristic cone for the Einstein equations”, in Agranovsky, M. et al., eds., Complex Analysis and Dynamical Systems IV. Part 2: General Relativity, Geometry, and PDE, Proceedings of the conference held in Nahariya, Israel, May 18–22, 2009, Contemporary Mathematics, 554, (American Mathematical Society and Bar-Ilan University, Providence, RI; Ramat-Gan, Israel, 2011). [ADS], [arXiv:1006.5558 [gr-qc]]. (Cited on page 13.)
Christodoulou, D., “A mathematical theory of gravitational collapse”, Commun. Math. Phys., 109, 613–647, (1987). [DOI]. (Cited on page 16.)
Christodoulou, D., “The formation of black holes and singularities in spherically symmetric gravitational collapse”, Commun. Pure Appl. Math., 44, 339–373, (1991). [DOI]. (Cited on page 16.)
Christodoulou, D., “Bounded Variation Solutions of the Spherically Symmetric EinsteinScalar Field Equations”, Commun. Pure Appl. Math., 46, 1131–1220, (1993). [DOI]. (Cited on page 16.)
Christodoulou, D., “Examples of Naked Singularity Formation in the Gravitational Collapse of a Scalar Field”, Ann. Math. (2), 140, 607–653, (1994). [DOI]. (Cited on page 16.)
Christodoulou, D., “The instability of naked singularities in the gravitational collapse of a scalar field”, Ann. Math. (2), 149, 183–217, (1999). [DOI]. (Cited on page 16.)
Christodoulou, D., “On the global initial value problem and the issue of singularities”, Class. Quantum Grav., 16, A23–A35, (1999). [DOI]. (Cited on page 16.)
Christodoulou, D. and Klainerman, S., The Global Nonlinear Stability of the Minkowski Space, Princeton Mathematical Series, 41, (Princeton University Press, Princeton, NJ, 1993). (Cited on page 25.)
Clarke, C.J.S. and d’Inverno, R.A., “Combining Cauchy and characteristic numerical evolutions in curved coordinates”, Class. Quantum Grav., 11, 1463–1468, (1994). [DOI], [ADS]. (Cited on pages 55, 56, and 59.)
Clarke, C.J.S., d’Inverno, R.A. and Vickers, J.A., “Combining Cauchy and characteristic codes. I. The vacuum cylindrically symmetric problem”, Phys. Rev. D, 52, 6863–6867, (1995). [DOI], [ADS]. (Cited on pages 22, 56, and 59.)
Cook, G.B. et al. (Binary Black Hole Grand Challenge Alliance), “Boosted Three-Dimensional Black-Hole Evolutions with Singularity Excision”, Phys. Rev. Lett., 80, 2512–2516, (1998). [DOI], [ADS], [arXiv:gr-qc/9711078]. (Cited on page 26.)
Corkill, R.W. and Stewart, J.M., “Numerical Relativity. II. Numerical Methods for the Characteristic Initial Value Problem and the Evolution of the Vacuum Field Equations for Space-Times with Two Killing Vectors”, Proc. R. Soc. London, Ser. A, 386, 373–391, (1983). [DOI], [ADS]. (Cited on pages 13 and 16.)
de Moerloose, J. and de Zutter, D., “Surface integral representation radiation boundary condition for the FDTD method”, IEEE Trans. Ant. Prop., 41, 890–896, (1993). [DOI], [ADS]. (Cited on page 53.)
de Oliveira, H.P. and Rodrigues, E.L., “A Dynamical System Approach for the Bondi Problem”, Int. J. Mod. Phys. A, 24, 1700–1704, (2009). [DOI], [ADS], [arXiv:0809.2837 [grqc]]. (Cited on page 27.)
Derry, L., Isaacson, R.A. and Winicour, J., “ShearFree Gravitational Radiation”, Phys. Rev., 185, 1647–1655, (1969). [DOI], [ADS]. (Cited on page 33.)
Diener, P., Dorband, E.N., Schnetter, E. and Tiglio, M., “Optimized High-Order Derivative and Dissipation Operators Satisfying Summation by Parts, and Applications in Three-dimensional Multi-block Evolutions”, J. Sci. Comput., 32, 109–145, (2007). [DOI], [arXiv:gr-qc/0512001]. (Cited on page 30.)
d’Inverno, R.A., ed., Approaches to Numerical Relativity, Proceedings of the International Workshop on Numerical Relativity, Southampton, December 1991, (Cambridge University Press, Cambridge; New York, 1992). (Cited on page 11.)
d’Inverno, R.A., Dubal, M.R. and Sarkies, E.A., “Cauchy-characteristic matching for a family of cylindrical solutions possessing both gravitational degrees of freedom”, Class. Quantum Grav., 17, 3157–3170, (2000). [DOI], [ADS], [arXiv:gr-qc/0002057]. (Cited on page 59.)
d’Inverno, R.A. and Vickers, J.A., “Combining Cauchy and characteristic codes. III. The interface problem in axial symmetry”, Phys. Rev. D, 54, 4919–4928, (1996). [DOI], [ADS]. (Cited on pages 27 and 62.)
d’Inverno, R.A. and Vickers, J.A., “Combining Cauchy and characteristic codes. IV. The characteristic field equations in axial symmetry”, Phys. Rev. D, 56, 772–784, (1997). [DOI], [ADS]. (Cited on pages 27 and 62.)
Dorband, E.N., Berti, E., Diener, P., Schnetter, E. and Tiglio, M., “A numerical study of the quasinormal mode excitation of Kerr black holes”, Phys. Rev. D, 74, 084028, (2006). [DOI], [arXiv:gr-qc/0608091]. (Cited on page 30.)
Dubal, M.R., d’Inverno, R.A. and Clarke, C.J.S., “Combining Cauchy and characteristic codes. II. The interface problem for vacuum cylindrical symmetry”, Phys. Rev. D, 52, 6868–6881, (1995). [DOI], [ADS]. (Cited on pages 22, 56, and 59.)
Duff, G.F.D., “Mixed problems for linear systems of first order equations”, Can. J. Math., 10, 127–160, (1958). [DOI]. (Cited on pages 13, 36, and 54.)
“Einstein Toolkit”, project homepage, Louisiana State University. URL (accessed 7 August 2011): http://www.einsteintoolkit.org/. (Cited on page 74.)
Ellis, G.F.R., Nel, S.D., Stoeger, W.J., Maartens, R. and Whitman, A.P., “Ideal observational cosmology”, Phys. Rep., 124, 315–417, (1985). [DOI], [ADS]. (Cited on page 22.)
Engquist, B. and Majda, A., “Absorbing Boundary Conditions for the Numerical Simulation of Waves”, Math. Comput., 31 (139), 629–651, (1977). [DOI], [ADS]. (Cited on pages 53 and 54.)
Flanagan, É.É. and Hughes, S.A., “Measuring gravitational waves from binary black hole coalescences. I. Signal to noise for inspiral, merger and ringdown”, Phys. Rev. D, 57, 4535–4565, (1998). [DOI], [ADS], [arXiv:gr-qc/9701039]. (Cited on page 72.)
Fletcher, S.J. and Lun, A.W.C., “The Kerr spacetime in generalized BondiSachs coordinates”, Class. Quantum Grav., 20, 4153–4167, (2003). [DOI], [ADS]. (Cited on page 42.)