Modern Tests of Lorentz Invariance

Motivated by ideas about quantum gravity, a tremendous amount of effort over the past decade has gone into testing Lorentz invariance in various regimes. This review summarizes both the theoretical frameworks for tests of Lorentz invariance and experimental advances that have made new high precision tests possible. The current constraints on Lorentz violating effects from both terrestrial experiments and astrophysical observations are presented.


Introduction
Relativity has been one of the most successful theories of the last century and is a cornerstone of modern physics. This review focuses on the modern experimental tests of one of the fundamental symmetries of relativity, Lorentz invariance. Over the last decade there has been tremendous interest and progress in testing Lorentz invariance. This is largely motivated by two factors. First, there have been theoretical suggestions that Lorentz invariance may not be an exact symmetry at all energies. The possibility of four-dimensional Lorentz invariance violation has been investigated in different quantum gravity models (including string theory [185,107], warped brane worlds [70], and loop quantum gravity [120]), although no quantum gravity model predicts Lorentz violation conclusively. Other high energy models of spacetime structure, such as non-commutative field theory, do however explicitly contain Lorentz violation [98]. High energy Lorentz violation can regularize field theories, another reason it may seem plausible. Even if broken at high energies, Lorentz symmetry can still be an attractive infrared fixed point, thereby yielding an approximately Lorentz invariant low energy world [79]. Other ideas such as emergent gauge bosons [54,189,161,80], varying moduli [93], axion-Wess-Zumino models [30], analogues of emergent gravity in condensed matter [40,238], ghost condensate [34], space-time varying couplings [177,50], or varying speed of light cosmologies [219,209] also incorporate Lorentz violation. The ultimate fate of Lorentz invariance is therefore an important theoretical question.
We shall primarily focus on quantum gravity induced Lorentz violation as the theoretical target for experimental tests. If Lorentz invariance is violated by quantum gravity, the natural scale at which one would expect it to be strongly violated is the Planck energy of ≈ 10^19 GeV. While perhaps theoretically interesting, the large energy gap between the Planck scale and the highest known energy particles, the trans-GZK cosmic rays of 10^11 GeV (not to mention accelerator energies of ∼ 1 TeV), precludes any direct observation of Planck scale Lorentz violation.
Fortunately, it is very likely that strong Planck scale Lorentz violation yields a small amount of violation at much lower energies. If Lorentz invariance is violated at the Planck scale, there must be an interpolation to the low energy, (at least nearly) Lorentz invariant world we live in. Hence a small amount of Lorentz violation should be present at all energies. Advances in technology and observational techniques have dramatically increased the precision of experimental tests, to the level where they can be sensitive to small low energy residual effects of Planck scale Lorentz violation. These experimental advances are the second factor stimulating recent interest in testing Lorentz invariance. One should keep in mind that low energy experiments cannot directly tell us whether or not quantum gravity is Lorentz invariant. Rather, they can only determine if the "state" that we live in is Lorentz violating. For example, it is possible that quantum gravity might be Lorentz invariant but contains tensor fields that acquire a vacuum expectation value at low energies [185], thereby spontaneously breaking the symmetry. Experiments carried out at low energies would therefore see Lorentz violation, even though it is a good symmetry of the theory at the Planck scale. That said, any discovery of Lorentz violation would be an important signal of beyond standard model physics.
There are currently a number of different theoretical frameworks in which Lorentz symmetry might be modified, with a parameter space of possible modifications for each framework. Since many of the underlying ideas come from quantum gravity, which we know little about, the fate of Lorentz violation varies widely between frameworks. Most frameworks explicitly break Lorentz invariance, in that there is a preferred set of observers or background field other than the metric [90,34]. However others try to deform the Poincaré algebra, which would lead to modified transformations between frames but no preferred frame (for a review see [186]). These latter frameworks lead to only "apparent" low energy Lorentz violation. Even further complications arise as some frameworks violate other symmetries, such as CPT or translation invariance, in conjunction with Lorentz symmetry. The fundamental status of Lorentz symmetry, broken or deformed, as well as the additional symmetries makes a dramatic difference as to which experiments and observations are sensitive. Hence the primary purpose of this review is to delineate various frameworks for Lorentz violation and catalog which types of experiments are relevant for which framework. Theoretical issues relating to each framework are touched on rather briefly, but references to the relevant theoretical work are included.
Tests of Lorentz invariance span atomic physics, nuclear physics, high-energy physics, relativity, and astrophysics. Since researchers in so many disparate fields are involved, this review is geared towards the non-expert/advanced graduate level, with descriptions of both theoretical frameworks and experimental/observational approaches. Some other useful starting points on Lorentz violation are [23,276,174,155]. The structure of this review is as follows. A general overview of various issues relating to the interplay of theory with experiment is given in Section 2. The current theoretical frameworks for testing Lorentz invariance are given in Sections 3 and 4. A discussion of the various relevant results from earth based laboratory experiments, particle physics, and astrophysics is given in Sections 5 and 6. Limits from gravitational observations are in Section 7. Finally, the conclusions and prospects for future progress are in Section 8. Throughout this review η^αβ denotes the Minkowski (+ − − −) metric. Greek indices will be used exclusively for spacetime indices whereas Roman indices will be used in various ways. Theorists' units ℏ = c = 1 are used throughout. E_Pl denotes the (approximate) Planck energy of 10^19 GeV. Before we discuss Lorentz violation in general, it will be useful to detail a pedagogical example that will give an intuitive feel as to what "Lorentz violation" actually means. Let us work in a field theory framework and consider a "bimetric" action for two massless scalar fields φ and ψ,

S = (1/2) ∫ d⁴x √−g [g^αβ ∂_α φ ∂_β φ + (g^αβ + τ^αβ) ∂_α ψ ∂_β ψ],   (1)
where τ^αβ is some arbitrary symmetric tensor, not equal to g^αβ. Both g^αβ and τ^αβ are fixed background fields. At a point, one can always choose coordinates such that g^αβ = η^αβ. Now, consider the action of local Lorentz transformations at this point, which we define as those transformations for which η^αβ is invariant, on S.¹ S is a spacetime scalar, as it must be to be well-defined and physically meaningful. Scalars are by definition invariant under all passive diffeomorphisms (where one makes a coordinate transformation of every tensor in the action, background fields included). A local Lorentz transformation is a subgroup of the group of general coordinate transformations, so the action is by construction invariant under "passive" local Lorentz transformations. This implies that as long as our field equations are kept in tensorial form we can freely choose the frame in which we wish to calculate. Coordinate invariance is sometimes called "observer Lorentz invariance" in the literature [172], although it really has nothing to do with the operational meaning of Lorentz symmetry as a physical symmetry of nature. Lorentz invariance of a physical system is based upon the idea of "active" Lorentz transformations, where we transform only the dynamical fields φ and ψ. Consider a Lorentz transformation x′^μ = Λ^μ_ν x^ν, where Λ^μ_ν is the Lorentz transformation matrix, under which the fields transform as

φ′(x) = φ(Λ⁻¹x),   ψ′(x) = ψ(Λ⁻¹x).   (2)

The derivatives transform as

∂_α φ′(x) = (Λ⁻¹)^μ_α (∂_μ φ)(Λ⁻¹x),   (3)

from which one can easily see that η^αβ ∂_α φ′(x) ∂_β φ′(x) = η^μν (∂_μ φ ∂_ν φ)(Λ⁻¹x), since by definition η^αβ (Λ⁻¹)^μ_α (Λ⁻¹)^ν_β = η^μν. The η^αβ terms are therefore Lorentz invariant. τ^αβ is not, however, invariant under the action of Λ⁻¹, and hence the action violates Lorentz invariance. Equations of motion, particle thresholds, etc. will all be different when expressed in the coordinates of relatively boosted or rotated observers.
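The distinction can be made concrete with a short numerical check. The sketch below (pure Python; the value of τ^00 is an arbitrary illustrative number) verifies that a boost matrix Λ preserves η, Λᵀ η Λ = η, while a generic symmetric background tensor τ picks up new components under the same boost:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

# Minkowski metric with signature (+ - - -), as in the text
eta = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

# Boost along x with v = 0.5 (c = 1)
v = 0.5
g = 1.0 / math.sqrt(1.0 - v * v)
lam = [[g, -g * v, 0, 0], [-g * v, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# A small Lorentz violating background: only tau_00 nonzero (illustrative value)
tau = [[1e-3, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]

eta_b = matmul(transpose(lam), matmul(eta, lam))
tau_b = matmul(transpose(lam), matmul(tau, lam))

print(close(eta_b, eta))   # True: the eta terms are boost invariant
print(close(tau_b, tau))   # False: the tau term breaks the symmetry
print(tau_b[1][1])         # the boosted observer sees a tau_11 component
```

The boosted τ acquires spatial components of size γ²v²τ^00, which is why boosted observers see different equations of motion.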
Since in order for a physical theory to be well defined the action must be a spacetime scalar, breaking of active Lorentz invariance is the only physically acceptable type of Lorentz violation. Sometimes active Lorentz invariance is referred to as "particle" Lorentz invariance [172]. We will only consider active Lorentz violation and so shall drop any future labelling of "observer", "particle", "active", or "passive" Lorentz invariance. For the rest of this review, Lorentz violation always means active Lorentz violation. For another discussion of active Lorentz symmetry in field theory see [240]. Since we live in a world where Lorentz invariance is at the very least an excellent approximate symmetry, τ^αβ must be small in our frame. In field theoretical approaches to Lorentz violation, a frame in which all Lorentz violating coefficients are small is called a concordant frame [176].

Modified Lorentz groups
Almost all models for Lorentz violation fall into the framework above, where there is a preferred set of concordant frames (although not necessarily a field theory description). In these theories Lorentz invariance is broken; there is a preferred set of frames where one can experimentally determine that Lorentz violation is small. A significant alternative that has attracted attention is simply modifying the way the Lorentz group acts on physical fields. In the discussion above, it was assumed that everything transformed linearly under the appropriate representation of the Lorentz group. On top of this structure, Lorentz non-invariant tensors were introduced that manifestly broke the symmetry, but the group action remained the same. One could instead modify the group action itself in some manner. A partial realization of this idea is provided by so-called "doubly special relativity" (DSR) [15,186], which will be discussed more thoroughly in Section 3.4. In this scenario there is still Lorentz invariance, but the Lorentz group acts non-linearly on physical quantities. The new choice of group action leads to a new invariant energy scale as well as the invariant velocity c (hence the name doubly special). The invariant energy scale λ_DSR is usually taken to be the Planck energy. There is no preferred class of frames in these theories, but they still lead to Lorentz "violating" effects. For example, there is a wavelength dependent speed of light in DSR models. This type of violation is really only "apparent" Lorentz violation. The reader should understand that it is a violation only of the usual linear Lorentz group action on physical quantities.
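A minimal numerical sketch of a non-linear group action is the Magueijo-Smolin realization of DSR, in which physical momenta are mapped by a non-linear function U to auxiliary variables that boost linearly. The specific form of U below is one common choice, not the only one; E_Pl is set to 1 for illustration:

```python
import math

E_PL = 1.0  # Planck scale in chosen units (Magueijo-Smolin-type construction)

def U(E, p):
    """Map to auxiliary variables that transform under ordinary linear boosts."""
    s = 1.0 - E / E_PL
    return E / s, p / s

def U_inv(E, p):
    s = 1.0 + E / E_PL
    return E / s, p / s

def dsr_boost(E, p, v):
    """Non-linear boost: linearly boost the auxiliary variables, then map back."""
    e, q = U(E, p)
    g = 1.0 / math.sqrt(1.0 - v * v)
    return U_inv(g * (e - v * q), g * (q - v * e))

def casimir(E, p):
    """Deformed invariant replacing E^2 - p^2."""
    return (E**2 - p**2) / (1.0 - E / E_PL)**2

E, p = 0.3, 0.2
E2, p2 = dsr_boost(E, p, 0.6)
print(casimir(E, p), casimir(E2, p2))  # equal: the deformed invariant is preserved
```

Note that E2 differs from the linear boost result γ(E − vp) = 0.225, so the transformation between frames is genuinely modified even though no frame is preferred.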

Kinematics vs. dynamics
A complete physical theory must obviously include dynamics. However, over the years a number of kinematic frameworks have been developed for testing Lorentz violation that do not postulate a complete dynamics [246,211,205,20]. Furthermore, some proposals coming from quantum gravity are at a stage where the low energy kinematics are partially understood/conjectured, but the corresponding dynamics are not understood (a good example of this is DSR [186]). Hence until these models become more mature, only kinematic tests of Lorentz invariance are truly applicable. Strictly enforced, this rule would preclude any use of an experiment that relies on particle interactions, as these interactions are determined by the dynamics of the theory. Only a select few observations, such as interferometry, birefringence, Doppler shifts, or time of flight are by construction insensitive to dynamics. However, the observational situation is often such that tests that use particle interactions can be applied to theories where only the kinematics is understood. This can be done in astrophysical threshold interactions as long as the dynamics are assumed to be not drastically different from Lorentz invariant physics (see Section 6.4). In terrestrial experiments, one must either recognize that different experiments can give different values with kinematic frameworks (for an example, see the discussion of the Robertson-Mansouri-Sexl framework in Section 3.2) or embed the kinematics in a fully dynamical model like the standard model extension (see Section 4.1.1).

The role of other symmetries
There are many other symmetries that affect how Lorentz violation might manifest itself below the Planck scale. The standard model in Minkowski space is invariant under four main symmetries, three continuous and one discrete. There are two continuous spacetime symmetries, Lorentz symmetry and translation symmetry, as well as gauge and CPT symmetry. Supersymmetry can also have profound effects on how Lorentz violation can occur. Finally, including gravity means that we must take into account diffeomorphism invariance. The fate of these other symmetries in conjunction with Lorentz violation can often have significant observational ramifications.

CPT invariance
Lorentz symmetry is intimately tied up with CPT symmetry in that the assumption of Lorentz invariance is required for the CPT theorem [162]. Lorentz violation therefore allows for (but does not require) CPT violation, even if the other properties of standard quantum field theory are assumed. Conversely, however, CPT violation implies Lorentz violation for local field theories [134]. Furthermore, many observational results are sensitive to CPT violation but not directly to Lorentz violation. Examples of such experiments are kaon decay (see Section 5.5) and γ-ray birefringence (see Section 6.3), both of which indirectly provide stringent bounds on Lorentz violation that incorporates CPT violation. Hence CPT tests are very important tools for constraining Lorentz violation. In effective field theory CPT invariance can explicitly be imposed to forbid a number of strongly constrained operators. For more discussion on this point see Section 4.3.

Supersymmetry
SUSY, while related to Lorentz symmetry, can still be an exact symmetry even in the presence of Lorentz violation. Imposing exact SUSY provides another custodial symmetry that can forbid certain operators in Lorentz violating field theories. If, for example, exact SUSY is imposed in the MSSM (minimal supersymmetric standard model), then the only Lorentz violating operators that can appear have mass dimension five or above [137]. Of course, we do not have exact SUSY in nature. The size of low dimension Lorentz violating operators in a theory with Planck scale Lorentz violation and low energy broken SUSY has recently been analyzed in [65]. For more discussion on this point see Section 4.3.

Poincaré invariance
In many astrophysics approaches to Lorentz violation, conservation of energy-momentum is used along with Lorentz violating dispersion relations to give rise to new particle reactions. Absence of these reactions then yields constraints. Energy/momentum conservation between initial and final particle states requires translation invariance of the underlying spacetime and the Lorentz violating physics. Therefore we can apply the usual conservation laws only if the translation subgroup of the Poincaré group is left unmodified. If Lorentz violation happens in conjunction with a modification of the rest of the Poincaré group, then it can happen that modified conservation laws must be applied to threshold reactions. This is the situation in DSR: All reactions that are forbidden by conservation in ordinary Lorentz invariant physics are also forbidden in DSR [146], even though particle dispersion relations in DSR would naively allow new reactions. The conservation equations change in such a way as to compensate for the modified dispersion relations (see Section 3.4). Due to this unusual (and useful) feature, DSR evades many of the constraints on effective field theory formulations of Lorentz violation.

Diffeomorphism invariance and prior geometry
If Lorentz violating effects are to be embedded in an effective field theory, then new tensors must be introduced that break the Lorentz symmetry (cf. the bimetric theory (1) of Section 2.1). If we are considering only special relativity, then keeping these tensors constant is viable. However, any complete theory must include gravity, of course, and one should preserve as many fundamental principles of general relativity as possible while still introducing local Lorentz violation. There are three general principles in general relativity relevant to Lorentz violation: general covariance (which implies both passive and active diffeomorphism invariance [247]), the equivalence principle, and lack of prior geometry. As we saw in Section 2, general covariance is automatically a property of an appropriately formulated Lorentz violating theory, even in flat space. The fate of the equivalence principle we deal with below in Section 2.5. The last principle, lack of prior geometry, is simply a statement that the metric is a dynamical object on the same level as any other field. Coupled with diffeomorphism invariance this leads to conservation of matter stress tensors (for a discussion see [73]). However, a fixed Lorentz violating tensor constitutes prior geometry in the same way that a fixed metric would. If we keep our Lorentz violating tensors as fixed objects, we immediately have non-conservation of stress tensors and inconsistent Einstein equations. As a specific example, consider again the bimetric theory (1). We will include gravity in the usual way by adding the Einstein-Hilbert Lagrangian for the metric. The resultant action is

S = ∫ d⁴x √−g [R/(16πG) + (1/2) g^αβ ∂_α φ ∂_β φ + (1/2)(g^αβ + τ^αβ) ∂_α ψ ∂_β ψ],   (6)

and the corresponding field equations are

G^αβ = 8πG [∂^α φ ∂^β φ + ∂^α ψ ∂^β ψ − (1/2) g^αβ (g^μν ∂_μ φ ∂_ν φ + (g^μν + τ^μν) ∂_μ ψ ∂_ν ψ)].   (7)

Taking the divergence of Equation (7) and using the φ, ψ equations of motion yields

0 = ∇_α(τ^αμ ∂_μ ψ) ∂^β ψ + (1/2) ∇^β (τ^μν ∂_μ ψ ∂_ν ψ),   (8)

since ∇_α G^αβ vanishes by virtue of the Bianchi identities.
The right hand side of Equation (8) does not in general vanish for solutions to the field equations and therefore Equation (8) is not in general satisfied unless one restricts to very specific solutions for ψ. This is not a useful situation, as we would like to have the full space of solutions for ψ yet maintain energy conservation. The solution is to make all Lorentz violating tensors dynamical [173,157], thereby removing prior geometry. If the Lorentz violating tensors are dynamical then conservation of the stress tensor is automatically enforced by the diffeomorphism invariance of the action. While dynamical Lorentz violating tensors have a number of effects that are testable in the gravitational sector, most researchers have concentrated on flat space tests of Lorentz invariance where gravitational effects can be ignored. Hence for most of this review we will treat the Lorentz violating coefficients as fixed and neglect dynamics. The theoretical consequences of dynamical Lorentz violation will be analyzed only in Section 4.4, where we discuss a model of a diffeomorphism invariant "aether" which has received some attention. The observational constraints on this theory are discussed in Section 7.

Lorentz violation and the equivalence principle
Lorentz violation implies a violation of the equivalence principle. Intuitively this is clear: In order for there to be Lorentz violation, particles must travel on world-lines that are species dependent (and not fully determined by the mass). In various papers dealing with Lorentz violating dispersion relations one will sometimes see the equivalence principle cited as a motivation for keeping the Lorentz violating terms equal for all particle species. We now give a pedagogical example to show that the equivalence principle is violated even in this case. Consider a dispersion modification of the form

E² = p² + m² + f^(4) p⁴/E_Pl²   (9)

for a free particle, and assume f^(4) is independent of particle species. If we assume Hamiltonian dynamics at low energy and use the energy as the Hamiltonian, then for a non-relativistic particle in a weak gravitational field we have

H = m + p²/(2m) + f^(4) p⁴/(2m E_Pl²) + V(x),   (10)

where V(x) is the Newtonian gravitational potential mΦ(x). Applying Hamilton's equations to solve for the acceleration yields

ẍ = −(1 + 6 f^(4) m² ẋ²/E_Pl²) ∇Φ(x)   (11)

to lowest order in the Lorentz violating term. From this expression it is obvious that the acceleration is mass dependent and the equivalence principle is violated (albeit slightly) for particles of different masses with the same f^(4). Of course, if the f^(n) terms are different, as is natural with some Lorentz violating models [110], then it is also obviously violated. As a consequence one cannot preserve the equivalence principle with Lorentz violation unless one also modifies Hamiltonian dynamics. Equivalence principle tests are therefore able to also look for Lorentz violation and vice versa (for an explicit example see [13]). Other examples of the relationship between equivalence principle violation and Lorentz violation can be found in [140,138,256].
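The size of this effect is easy to estimate. The sketch below uses the order-of-magnitude scaling Δa/a ∼ f^(4) m² v²/E_Pl² (the O(1) prefactor depends on the details of the derivation) with a hypothetical f^(4) = 1, showing that for n = 4 dispersion the mass dependence of free fall is far below any conceivable laboratory sensitivity:

```python
E_PL = 1.22e19  # approximate Planck energy in GeV

def frac_accel_shift(m_gev, v, f4=1.0):
    """Order-of-magnitude fractional shift of the Newtonian acceleration,
    Delta a / a ~ f(4) * m^2 * v^2 / E_Pl^2 (O(1) prefactor dropped;
    f(4) = 1 is a hypothetical choice)."""
    return f4 * m_gev**2 * v**2 / E_PL**2

# Same universal f(4), different masses -> different accelerations,
# i.e. an equivalence principle violation, but utterly tiny for n = 4:
proton = frac_accel_shift(0.938, 1e-3)     # v = 1e-3 c, proton mass in GeV
electron = frac_accel_shift(5.11e-4, 1e-3)
print(proton, electron)   # both far below ~1e-40, and mass dependent
```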

Systematic vs. non-systematic violations
Most tests of Lorentz violation deal with systematic Lorentz violation, where the deviation is constant in time/space. For example, consider the modified dispersion relation for a photon,

ω² = k² + f^(4) k⁴/E_Pl²,   (12)

where f^(4) is some fixed coefficient. There is no position dependence, so the Lorentz violating term is constant as the particle propagates. However, various models [99,255,231] suggest that particle energy/momentum may not be constant but instead vary randomly by a small amount. Some authors have combined these two ideas about quantum gravity, Lorentz violation and stochastic fluctuations, and considered a stochastic violation of Lorentz invariance characterized by a fluctuating coefficient [12,232,24,108,115]. We will discuss non-systematic models in greater detail in Section 3.5.
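The observational difference between the two cases comes down to how effects accumulate along the propagation path. A toy random-walk sketch (the step count and per-step shift are illustrative, not a physical discretization): a systematic coefficient builds up a phase shift linearly in the number of steps N, while a randomly fluctuating coefficient builds up only a √N-sized shift, which is why non-systematic violations are generically harder to constrain:

```python
import random

random.seed(0)
N = 10000      # propagation "steps" (toy model)
delta = 1e-6   # per-step shift produced by the Lorentz violating term

# Systematic violation: the same shift every step -> grows like N.
systematic = sum(delta for _ in range(N))

# Non-systematic violation: a coefficient fluctuating in sign -> grows like sqrt(N).
stochastic = abs(sum(random.choice([-delta, delta]) for _ in range(N)))

print(systematic)   # 1e-2
print(stochastic)   # typically of order delta * sqrt(N) ~ 1e-4
```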

Causality
It is obvious that when we introduce Lorentz violation we have to rethink causality: there is no universal light cone given by the metric that all fields must propagate within. Even with Lorentz violation we must certainly maintain some notion of causality, at least in concordant frames, since we know that our low energy physics is causal. Causality from a strict field theory perspective is usually discussed in terms of microcausality, which in turn comes from the cluster decomposition principle: Physical observables at different points and equal times should be independently measurable. This is essentially a statement that physics is local. We now briefly review how microcausality arises from cluster decomposition. Let O₁(x), O₂(y) represent two observables for a field theory in flat space. In a particular frame, let us choose the equal time slice t = 0, such that x = (0, x), y = (0, y), and further assume that x ≠ y. The cluster decomposition principle then states that O₁(x) and O₂(y) must be independently measurable. This in turn implies that their commutator must vanish, [O₁(x), O₂(y)] = 0. When Lorentz invariance holds there is no preferred frame, so the commutator must vanish for the t = 0 surface of any reference frame. This immediately gives that [O₁(x), O₂(y)] = 0 whenever x, y are spacelike separated, which is the statement of microcausality. Microcausality is related to the existence of closed timelike curves, since closed timelike curves violate cluster decomposition for surfaces that are pierced twice by the curves. The existence of such a curve would therefore lead to a breakdown of microcausality.
Lorentz violation can induce a breakdown of microcausality, as shown in [176]. In this work, the authors find that microcausality is violated if the group velocity of any field mode is superluminal. Such a breakdown is to be expected, as the light cone no longer determines the causal structure and notions of causality based on "spacelike" separation would not be expected to hold. However, the breakdown of microcausality does not lead to a breakdown of cluster decomposition in a Lorentz violating theory, in contrast to a Lorentz invariant theory. Even if fields propagate outside the light cone, we can have perfectly local and causal physics in some reference frames. For example, in a concordant frame Lorentz violation is small, which implies that particles can be only slightly superluminal. In such a frame all signals are always propagated into the future, so there is no mechanism by which signals could be exchanged between points on the same time slice. If we happened to be in such a concordant frame then physics would be perfectly local and causal even though microcausality does not hold.
The situation is somewhat different when we consider gravity and promote the Lorentz violating tensors to dynamical objects. For example in an aether theory, where Lorentz violation is described by a timelike four-vector, the four-vector can twist in such a way that local superluminal propagation can lead to energy-momentum flowing around closed paths [206]. However, even classical general relativity admits solutions with closed timelike curves, so it is not clear that the situation is any worse with Lorentz violation. Furthermore, note that in models where Lorentz violation is given by coupling matter fields to a non-zero, timelike gradient of a scalar field, the scalar field also acts as a time function on the spacetime. In such a case, the spacetime must be stably causal (cf. [272]) and there are no closed timelike curves. This property also holds in Lorentz violating models with vectors if the vector in a particular solution can be written as a non-vanishing gradient of a scalar.
Finally, we mention that in fact many approaches to quantum gravity actually predict a failure of causality based on a background metric [121] as in quantum gravity the notion of a spacetime event is not necessarily well-defined [239]. A concrete realization of this possibility is provided in Bose-Einstein condensate analogs of black holes [40]. Here the low energy phonon excitations obey Lorentz invariance and microcausality [270]. However, as one approaches a certain length scale (the healing length of the condensate) the background metric description breaks down and the low energy notion of microcausality no longer holds.

Stability
In any realistic field theory one would like a stable ground state. With the introduction of Lorentz violation, one must still have some ground state. This requires that the Hamiltonian still be bounded from below and that perturbations around the ground state have real frequencies. It will again be useful to discuss stability from a field theory perspective, as this is the only framework in which we can speak concretely about a Hamiltonian. Consider a simple model for a massive scalar field in flat space similar to Equation (1), where we now assume that in some frame S the only non-zero component of τ^αβ is τ⁰⁰. This immediately leads to the dispersion law (1 + τ⁰⁰)E² = p² + m². We can immediately deduce from this that if τ⁰⁰ is small the energy is always positive in this frame (taking the appropriate root of the dispersion relation). Similar statements about energy positivity and the allowable size of coefficients hold in more general field theory frameworks [176]. If the energy for every mode is positive, then the vacuum state |0⟩_S is stable.
As an aside, note that while the energy is positive in S, it is not necessarily positive in a boosted frame S′. If τ⁰⁰ > 0, then for large momentum E < p, yielding a spacelike energy-momentum vector. This implies that the energy E′ can be less than zero in a boosted frame. Specifically, for a given mode p in S, the energy E′ of this mode in a boosted frame S′ is less than zero whenever the relative velocity v between S and S′ is greater than E/p. The main implication is that if v is large enough, the expansion of a positive frequency mode in S in terms of the modes of S′ (one can do this since both sets are a complete basis) may have support in the negative energy modes. The two vacua |0⟩_S and |0⟩_{S′} are therefore inequivalent. This is in direct analogy to the Unruh effect, where the Minkowski vacuum is not equivalent to the Rindler vacuum of an accelerating observer. With Lorentz violation even inertial observers do not necessarily agree on the vacuum. Due to the inequivalence of vacua, an inertial detector at high velocities should see a bath of radiation, just as an accelerated detector sees thermal Unruh radiation. A clue to what this radiation represents is contained in the requirement that E′ < 0 only if v > E/p, which is exactly the criterion for Čerenkov radiation of a mode p. In other words, the vacuum Čerenkov effect (discussed in more detail in Section 6.5) can be understood as an effect of inequivalent vacua.
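The sign flip of the boosted energy is a one-line computation. A quick numerical check (toy values for a spacelike mode with E < p) using the standard boost law E′ = γ(E − vp):

```python
import math

def boosted_energy(E, p, v):
    """Energy of a mode (E, p) as seen in a frame boosted with velocity v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (E - v * p)

# A spacelike energy-momentum vector (E < p), as arises for tau_00 > 0
# at large momentum.  The threshold velocity is E/p = 0.8:
E, p = 0.8, 1.0
print(boosted_energy(E, p, 0.7))   # positive: v < E/p
print(boosted_energy(E, p, 0.9))   # negative: v > E/p
```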
We now return to the question of stability. For the models in Section 3.1 with higher order dispersion relations (E² = p² + m² + f^(n) pⁿ/E_Pl^(n−2) with n > 2), there is a stability problem for particles with momentum near the Planck energy if f^(n) < 0, as modes do not have positive energy at these high momenta. However, it is usually assumed that these modified dispersion relations are only effective: at the Planck scale there is a UV completion that renders the fundamental theory stable. Hence the instability to production of Planck energy particles is usually ignored.
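For the n = 4 case the problematic momentum scale is easy to see directly: with a negative coefficient, E² turns negative for p above roughly E_Pl/√|f^(4)|. A minimal check in Planck units (f^(4) = −1 is an illustrative choice):

```python
E_PL = 1.0  # work in Planck units

def E_squared(p, m=0.0, f4=-1.0):
    """n = 4 modified dispersion with a negative coefficient (illustrative)."""
    return p**2 + m**2 + f4 * p**4 / E_PL**2

# E^2 goes negative for p above ~ E_Pl / sqrt(|f4|) = 1 in these units:
print(E_squared(0.5))   # positive: below the problematic scale
print(E_squared(1.5))   # negative: the mode signals an instability
```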
So far we have only been concerned with instability of a quantum field with a background Lorentz violating tensor. Dynamical Lorentz violating tensors introduce further possible instabilities. In such a dynamical theory, one needs a version of the positive energy theorem [252,279] that includes the Lorentz violating tensors. For aether theories, the total energy is proportional to the usual ADM energy of general relativity [104]. Unfortunately, the aether stress tensor does not necessarily satisfy the dominant energy condition (although it may for certain choices of coefficients), so there is no proof yet that spacetimes with a dynamical aether have positive energy. For other models of Lorentz violation the positive energy question is completely unexplored. It is also possible to set limits on the coefficients of the aether theory by demanding that the theory be perturbatively stable, which requires that excitations of the aether field around a Lorentz violating vacuum expectation value have real frequencies [158].

Systematic modified dispersion
Perhaps the simplest kinematic framework for Lorentz violation in particle based experiments is to propose modified dispersion relations for particles, while keeping the usual energy-momentum conservation laws. This was the approach taken in much of the work using astrophysical phenomena in the late 1990's. In a given observer's frame in flat space, this is done by postulating that the usual Lorentz invariant dispersion law E² = p² + m² is replaced by some function E² = F(p, m). In general the preferred frame is taken to coincide with the rest frame of the cosmic microwave background. Since we live in an almost Lorentz invariant world (and are nearly at rest with respect to the CMBR), in the preferred frame F(p, m) must reduce to the Lorentz invariant dispersion at small energies and momenta. Hence it is natural to expand F(p, m) about p = 0, which yields the expression

E² = m² + p² + F^(1)_i pⁱ + F^(2)_{ij} pⁱpʲ + F^(3)_{ijk} pⁱpʲpᵏ + …,   (13)

where the constant coefficients F^(n)_{i₁…iₙ} are dimensionful and arbitrary, but presumably such that the modification is small. The order n of the first non-zero term in Equation (13) depends on the underlying model of quantum gravity taken. Since the underlying motivation for Lorentz violation is quantum gravity, it is useful to factor out the Planck energy in the coefficients F^(n) and rewrite Equation (13) as

E² = m² + p² + E_Pl f^(1)_i pⁱ + f^(2)_{ij} pⁱpʲ + (f^(3)_{ijk}/E_Pl) pⁱpʲpᵏ + …,   (14)

such that the coefficients f^(n) are dimensionless. In most of the literature a simplifying assumption is made that rotation invariance is preserved. In nature, we cannot have the rotation subgroup of the Lorentz group strongly broken while preserving boost invariance. Such a scenario leads immediately to broken rotation invariance at every energy, which is unobserved.² Hence, if there is strong rotation breaking there must also be a broken boost subgroup. However, it is possible to have a broken boost symmetry and unbroken rotation symmetry. Either way, the boost subgroup must be broken.
Phenomenologically, it therefore makes sense to look first at boost Lorentz violation and neglect any violation of rotational symmetry. If we make this assumption then we have

E^2 = p^2 + m^2 + E_Pl f^(1) p + f^(2) p^2 + f^(3) p^3 / E_Pl + ...     (15)

There is no a priori reason (from a phenomenological point of view) that the coefficients in Equation (15) are universal, and in fact one would expect the coefficients to be renormalized differently even if the fundamental Lorentz violation is universal [6]. We will therefore label each f^(n) as f^(n)_A, where A represents the particle species.
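A quick way to see why high energy astrophysics is relevant here is to estimate the momentum at which the n-th order term competes with the mass term m^2. The following back-of-the-envelope sketch is our own illustration (coefficients f^(n) set to one):

```python
# Momentum at which the Planck-suppressed term f p^n / E_Pl^(n-2)
# becomes comparable to the mass term m^2, with f = 1. Units: GeV.
E_PL = 1.22e19   # Planck energy in GeV

def p_crit(m, n):
    """Momentum where p^n / E_Pl^(n-2) equals m^2."""
    return (m**2 * E_PL**(n - 2)) ** (1.0 / n)

M_E = 0.511e-3   # electron mass, GeV
M_P = 0.938      # proton mass, GeV

print(f"electron, n=3: {p_crit(M_E, 3):.2e} GeV")  # tens of TeV
print(f"proton,   n=3: {p_crit(M_P, 3):.2e} GeV")  # PeV scale
```

For electrons the n = 3 term becomes comparable to the mass term around 10 TeV, which is why TeV-scale astrophysical observations can probe order-one f^(3) coefficients.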

Modified dispersion and effective field theory
If one wishes to stick to purely kinematic frameworks, effective field theory (EFT) does not apply; however, the EFT implications for modified dispersion are significant enough that they must be considered. As will be shown in detail in Section 4.1, universal dispersion relations cannot be imposed for all n from an EFT standpoint. For example, rotationally invariant n = 1, 3 type dispersion cannot be imposed universally on photons [90,230]: the operators that give rise to n = 1, 3 dispersion are CPT violating and induce birefringence (the dispersion modifications change sign with the photon helicity). Since EFT requires different coefficients for particles with different properties, and there is no underlying reason why all coefficients should be the same, it is phenomenologically safest when investigating modified dispersion to assume that each particle has a different dispersion relation. After this general analysis is complete, the universal case can be treated with ease.

Robertson-Mansouri-Sexl framework
The Robertson-Mansouri-Sexl (RMS) framework [246,211,277] is a well known kinematic test theory for parameterizing deviations from Lorentz invariance. In the RMS framework, there is assumed to be a preferred frame Σ where the speed of light is isotropic. The ordinary Lorentz transformations to other frames are generalized to

t = a(v) T + ε · x,
x_∥ = b(v) (X_∥ - vT),     (16)
x_⊥ = d(v) X_⊥,

where (T, X) are the coordinates in the preferred frame, the subscripts ∥ and ⊥ denote components parallel and perpendicular to the relative velocity, and the coefficients a, b, d are functions of the magnitude v of the relative velocity between frames. This transformation is the most general one-to-one transformation that preserves rectilinear motion in the absence of forces. In the case of special relativity, with Einstein clock synchronization, these coefficients reduce to

a = (1 - v^2)^{1/2},  b = (1 - v^2)^{-1/2},  d = 1.     (17)

The vector ε depends on the particular synchronization procedure used and is arbitrary. Many experiments, such as those that measure the isotropy of the one way speed of light [275] or the propagation of light around closed loops, have observables that depend on a, b, d but not on the synchronization procedure. Hence the synchronization is largely irrelevant, and we assume Einstein synchronization.
The RMS framework is incomplete, as it says nothing about dynamics or about how given clocks and rods relate to fundamental particles. In particular, the coordinate transformation of Equation (16) only has meaning if we identify the coordinates with the measurements made by a particular set of clocks and rods. If we choose a different set of clocks and rods, the transformation laws may be completely different. Hence it is not possible to compare the RMS parameters of two experiments that use physically different clocks and rods (for example, an experiment that uses a cesium atomic clock versus one that uses a hydrogen clock). However, for experiments involving a single type of clock/rod and light, the RMS formalism is applicable and can be used to search for violations of Lorentz invariance in that experiment. The RMS formalism can be made less ambiguous by placing it into a complete dynamical framework, such as the standard model extension of Section 4.1.1; indeed, it was shown in [179] that the RMS framework can be incorporated into the standard model extension.
Most often, the RMS framework is used in situations where the velocity v is small compared to c. We therefore expand a, b, d in a power series in v,

a ≈ 1 + α_RMS v^2,  b ≈ 1 + β_RMS v^2,  d ≈ 1 + δ_RMS v^2,

and give constraints on the parameters α_RMS, β_RMS, and δ_RMS instead. In special relativity α_RMS = -1/2, β_RMS = 1/2, and δ_RMS = 0.

c-squared framework
The c^2 framework [277] is the flat space limit of the THεμ framework [205]. The THεμ framework considers the motion of electromagnetically charged test particles in a spherically symmetric, static gravitational field; T, H, ε, and μ are parameters that enter the equations of motion of the particles and depend on the underlying gravitational model. In the flat space limit, which is the c^2 formalism, units are chosen such that the limiting speed of the test particles is one, while the speed of light is given in terms of the THεμ parameters by c^2 = H/(Tεμ). The THεμ and c^2 constructions can also be expressed in terms of the standard model extension [179].

"Doubly special" relativity
Doubly special relativity (DSR), which has only been extensively studied over the past few years, is a novel idea about the fate of Lorentz invariance. DSR is not a complete theory as it has no dynamics and generates problems when applied to macroscopic objects (for a discussion see [186]). Furthermore, it is not fully settled yet if DSR is mathematically consistent or physically meaningful. Therefore it is somewhat premature to talk about robust constraints on DSR from particle threshold interactions or other experiments. One might then ask, why should we talk about it at all? The reason is twofold. First, DSR is the subject of a good amount of theoretical effort and so it is useful to see if it can be observationally ruled out. The second reason is purely phenomenological. As we shall see in the sections below, the constraints on Lorentz violation are astoundingly good in the effective field theory approach. With the current constraints it is difficult to fit Lorentz violation into an effective field theory in a manner that is theoretically natural yet observationally viable.
DSR, if it can eventually be made mathematically consistent in its current incarnation, has one phenomenological advantage -it does not have a preferred frame. Therefore it evades most of the threshold constraints from astrophysics as well as any terrestrial experiment that looks for sidereal variations, while still modifying the usual action of the Lorentz group. Since these experiments provide almost all of the tests of Lorentz violation that we have, DSR becomes more phenomenologically attractive as a Lorentz violating/deforming theory.
So what is DSR? At the level we need for phenomenology, DSR is a set of assumptions stating that the Lorentz group acts in such a way that both the usual speed of light c and a new momentum scale E_DSR are invariant. Usually E_DSR is taken to be the Planck energy; we also make this assumption. All we will need for this review are the Lorentz boost expressions and the conservation laws, which we postulate as true in the DSR framework. For brevity we only detail the Magueijo-Smolin version of DSR [210], otherwise known as DSR2; the underlying conclusions for DSR1 [15] remain the same. The DSR2 boost transformations are most easily derived from the relations

ε = E / (1 - λ_DSR E),   π = p / (1 - λ_DSR E),

where λ_DSR = E_DSR^{-1}, E and p are the physical/measured energy and momentum, and ε and π are called the "pseudo-energy" and "pseudo-momentum", respectively. ε and π transform under the usual Lorentz transformations, which induce corresponding transformations of E and p [163]. Similarly, the ε and π of the particles in a scattering problem are conserved just as energy and momentum normally are. Given this set of rules, for any measured particle momentum and energy we can solve for ε and π and calculate interaction thresholds, etc. The invariant dispersion relation for the DSR2 boosts is

(E^2 - p^2) / (1 - λ_DSR E)^2 = m^2.

This concludes our (brief) discussion of the basics of DSR. For further introductions to DSR and DSR phenomenology see [186,21,14,95]. We discuss the threshold behavior of DSR theories in Section 6.6.1.
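As a sanity check on these rules, the following sketch (our own illustration, with λ_DSR set to an exaggerated value in arbitrary units) boosts a particle by mapping (E, p) to the linearly transforming (ε, π), applying an ordinary 1+1 dimensional Lorentz boost, and mapping back; the DSR2 invariant comes out unchanged:

```python
import math

LAM = 1e-3  # lambda_DSR = 1/E_DSR, exaggerated for illustration (arbitrary units)

def to_pseudo(E, p):
    """Map physical (E, p) to the linearly transforming pseudo variables."""
    return E / (1 - LAM * E), p / (1 - LAM * E)

def from_pseudo(eps, pi):
    """Invert the map: E = eps/(1 + lam*eps), p = pi/(1 + lam*eps)."""
    return eps / (1 + LAM * eps), pi / (1 + LAM * eps)

def boost(eps, pi, v):
    """Ordinary 1+1 dimensional Lorentz boost acting on the pseudo variables."""
    g = 1 / math.sqrt(1 - v**2)
    return g * (eps - v * pi), g * (pi - v * eps)

def dsr2_invariant(E, p):
    return (E**2 - p**2) / (1 - LAM * E)**2

E, p = 10.0, 8.0
eps, pi = to_pseudo(E, p)
E2, p2 = from_pseudo(*boost(eps, pi, 0.6))
print(dsr2_invariant(E, p), dsr2_invariant(E2, p2))  # equal up to rounding
```

The check works because the pseudo variables transform linearly, so ε^2 − π^2 is an ordinary Lorentz invariant, and the map between (E, p) and (ε, π) identifies it with (E^2 − p^2)/(1 − λ_DSR E)^2.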

Non-systematic dispersion
As mentioned in Section 2.6, Lorentz violation is only one possible signal of quantum gravity. Another common idea is that spacetime should have a stochastic [99,255] or "foamy" [231] structure at very small scales. Combining these two ideas has led a number of authors to stochastic/non-systematic dispersion, in which the modifications to the dispersion relation fluctuate over time. Such dispersion modifications have been phenomenologically parameterized by three numbers: the usual coefficient f^(n) and exponent n of Section 3.1, and a length scale L which determines the distance over which the dispersion is roughly constant. After a particle has travelled a distance L, a new coefficient f^(n) is chosen from some model dependent probability distribution P that reflects the underlying stochasticity. Usually P is assumed to be a Gaussian centered on 0, so that the average energy of the particle is given by its Lorentz invariant value, and L is generally taken to be the de Broglie wavelength of the particle in question. Note that in these models n is not required to be an integer; the most common choices are n = 5/2, n = 8/3, and n = 3 [232]. The only existing constraints on non-systematic dispersion come from threshold reactions (see Section 6.6.2) and the phase coherence of light (see Section 6.9).
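The random walk character of these models can be illustrated with a toy Monte Carlo (entirely our own construction; the normalization of the per-segment speed shift is schematic, and `num_segments` stands in for the propagation distance divided by the coherence length L):

```python
import random, math

def rms_shift(n_exp, E_over_Epl, num_segments, trials=2000, seed=1):
    """Toy model of non-systematic dispersion: over each coherence length the
    fractional modification is f * (E/E_Pl)**(n-2) with f drawn from a unit
    Gaussian, so the accumulated shift random-walks with distance."""
    rng = random.Random(seed)
    dv = E_over_Epl ** (n_exp - 2)          # schematic size of one step
    totals = []
    for _ in range(trials):
        s = sum(rng.gauss(0.0, 1.0) for _ in range(num_segments))
        totals.append(s * dv)
    # root-mean-square accumulated shift over all trials
    return math.sqrt(sum(t * t for t in totals) / trials)

r1 = rms_shift(3, 1e-3, 100)
r2 = rms_shift(3, 1e-3, 400)
print(r2 / r1)  # close to 2 = sqrt(400/100)
```

The rms accumulated effect grows like the square root of the number of coherence lengths traversed, which is why non-systematic dispersion is much harder to see than the systematic dispersion of Section 3.1, where the effect grows linearly with distance.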

Effective field theory
The most conservative approach for a framework in which to test Lorentz violation from quantum gravity is that of effective field theory (EFT). Both the standard model and relativity can be considered EFT's, and the EFT framework can easily incorporate Lorentz violation via the introduction of extra tensors. Furthermore, in many systems where the fundamental degrees of freedom are qualitatively different than the low energy degrees of freedom, EFT applies and gives correct results up to some high energy scale. Hence following the usual guideline of starting with known physics, EFT is an obvious place to start looking for Lorentz violation.

Renormalizable operators and the Standard Model Extension
The standard model is a renormalizable field theory containing only mass dimension ≤ 4 operators.
If we considered the standard model plus Lorentz violating terms, we would expect a tower of operators with increasing mass dimension. However, without some custodial symmetry protecting the theory from Lorentz violating dimension ≤ 4 operators, the lower dimension operators will be more important than the irrelevant higher mass dimension operators (see Section 4.3 for details). Therefore the first place to look, from an EFT perspective, is at all possible renormalizable Lorentz violating terms that can be added to the standard model. In [90] Colladay and Kostelecky derived just such a theory in flat space: the so-called (minimal) Standard Model Extension (mSME). 5 One can classify the mSME terms by whether they are CPT odd or even. We will first show the terms in a form with manifest SU(3) × SU(2) × U(1) gauge invariance. After that, we give the coefficients in a more practical notation appropriate once the gauge invariance is broken.

Manifestly invariant form
We deal with the CPT odd terms first. The additional Lorentz violating CPT odd operators for leptons are

-(a_L)_{μAB} L̄_A γ^μ L_B - (a_R)_{μAB} R̄_A γ^μ R_B,

where L_A is the left-handed lepton doublet L_A = (ν_A, l_A)_L, R_A is the right-handed singlet (l_A)_R, and A and B are flavor indices. The coefficients (a_{L,R})_{μAB} are constant vectors that can mix flavor generations. 6 For quarks we similarly have

-(a_Q)_{μAB} Q̄_A γ^μ Q_B - (a_U)_{μAB} Ū_A γ^μ U_B - (a_D)_{μAB} D̄_A γ^μ D_B,

where Q_A is the left-handed quark doublet and U_A, D_A are the right-handed up and down type singlets. In the gauge sector we have

(k_3)_κ ε^{κλμν} Tr(G_λ G_{μν} + (2/3) i g_3 G_λ G_μ G_ν) + (k_2)_κ ε^{κλμν} Tr(W_λ W_{μν} + (2/3) i g W_λ W_μ W_ν) + (k_1)_κ ε^{κλμν} B_λ B_{μν} + (k_0)_κ B^κ.     (25)

Here B_μ, W_μ, and G_μ are the U(1), SU(2), and SU(3) gauge fields, and B_{μν}, W_{μν}, and G_{μν} are their respective field strengths. The k_0 term in Equation (25) is usually required to vanish, as it makes the theory unstable. The remaining a, k coefficients have mass dimension one.

5 In the literature the mSME is often referred to as just the SME, although technically it was introduced in [90] as a minimal subset of an extension that involves non-renormalizable operators as well.
6 (a_{L,R})_{μAB} can be constant because the mSME deals only with Minkowski space. If one wishes to make the mSME diffeomorphism invariant, these and other coefficients would be dynamical (see Section 2.4).
Living Reviews in Relativity http://www.livingreviews.org/lrr-2005-5

The CPT even operators in the mSME are kinetic-type terms: for leptons they carry coefficients (c_L)_{μνAB} and (c_R)_{μνAB} contracted with bilinears of the form L̄_A γ^μ iD^ν L_B and R̄_A γ^μ iD^ν R_B, while the quark sector has analogous terms with coefficients (c_Q)_{μνAB}, (c_U)_{μνAB}, and (c_D)_{μνAB}. For gauge fields the CPT even operators are quadratic in the field strengths B_{μν}, W_{μν}, and G_{μν}, with constant tensor coefficients contracting the two factors. The coefficients of all CPT even operators in the mSME are dimensionless. While the split between CPT even and odd operators in the mSME correlates with even and odd mass dimension, we caution the reader that this correlation does not carry over to higher mass dimension operators. Finally, we will in general drop the subscripts A, B when discussing the various coefficients; terms without subscripts are understood to be the flavor diagonal coefficients.
Besides the fermion and gauge field content, the mSME also has Lorentz violating Yukawa-type couplings between the fermion fields and the Higgs; these terms are CPT even. Finally, there are additional terms for the Higgs field alone: a CPT odd operator involving a single derivative of the Higgs field, and CPT even operators quadratic in the Higgs field and its derivatives. This concludes the description of the mSME terms with manifest gauge invariance.

Practical form
Tests of the mSME are done at low energies, where the SU(2) gauge invariance has been broken. It is therefore more useful to work in a notation where the individual fermions are broken out of the doublets with their own Lorentz violating coefficients. With gauge breaking, the fermion Lorentz violating terms above give the additional CPT odd terms

-a_μ ψ̄ γ^μ ψ - b_μ ψ̄ γ_5 γ^μ ψ     (32)

and the CPT even terms

(1/2) i c_{μν} ψ̄ γ^μ D^ν ψ + (1/2) i d_{μν} ψ̄ γ_5 γ^μ D^ν ψ - (1/2) H_{μν} ψ̄ σ^{μν} ψ,     (33)

where the fermion spinor is denoted by ψ. Each particle species has its own set of coefficients. For a single particle the a_μ term can be absorbed by the field redefinition ψ → e^{-ia·x} ψ. However, in multi-particle theories involving fermion interactions one cannot remove a_μ for all fermions [89]; one can always eliminate one of the a_μ, so only the differences between the a_μ of the various particles are actually observable.
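The absorption of a_μ is a one-line manipulation; as a quick illustration (standard field-redefinition algebra for a free fermion, not specific to [89]):

```latex
% Under \psi \to e^{-ia\cdot x}\psi the derivative picks up a constant shift,
%   \partial_\mu\left(e^{-ia\cdot x}\psi\right)
%     = e^{-ia\cdot x}\left(\partial_\mu - i a_\mu\right)\psi ,
% so the kinetic term contributes an extra +a_\mu\gamma^\mu that cancels
% the CPT odd term:
\bar\psi\left(i\gamma^\mu\partial_\mu - a_\mu\gamma^\mu - m\right)\psi
\;\longrightarrow\;
\bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi
```

Since the phase e^{-ia·x} cancels between ψ̄ and ψ in every other bilinear, a single particle's a_μ is unobservable in isolation; interactions between different species are what obstruct removing all the a_μ at once.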
As an aside, we note that there are additional dimension ≤ 4, U(1) invariant terms for fermions that could be added to the mSME once the SU(2) gauge invariance is broken. These terms do not arise from gauge breaking of the renormalizable mSME of the previous Section 4.1.2. However, they might arise from non-renormalizable terms in an EFT expansion, and as such they technically should be constrained along with everything else. Since their origin can only be from higher dimension operators, though, they are expected to be much smaller than the terms that come directly from the mSME. Current tests of Lorentz invariance for gauge bosons directly constrain only the electromagnetic sector. The Lorentz violating terms for electromagnetism are

-(1/4) (k_F)_{αβγδ} F^{αβ} F^{γδ} + (1/2) (k_AF)^κ ε_{κλμν} A^λ F^{μν},     (35)

where the k_F term is CPT even and the k_AF term is CPT odd. The k_AF term makes the theory unstable, so we assume it vanishes from here forward unless otherwise noted (see Section 6.3). Now that we have the requisite notation to compare Lorentz violating effects directly with observation, we turn to the most common subset of the mSME, Lorentz violating QED.

Lorentz violating QED
In many Lorentz violating tests the relevant particles are photons and electrons, making Lorentz violating QED the appropriate theory. The relevant Lorentz violating operators are given by Equations (32, 33, 35). The dispersion relation for photons will be useful when deriving birefringence constraints on k_F. If k_F ≠ 0, spacetime acts as an anisotropic medium and different photon polarizations propagate at different speeds. The two photon polarizations, labelled ±, have dispersion relations of the form E_± = (1 + ρ ± σ)|p|, where ρ and σ are particular combinations of the components of k_F [179]. Strong limits can be placed on this birefringent effect using astrophysical sources [179], as detailed in Section 6.3.
A simplifying assumption that is often made is rotational symmetry. With rotational symmetry all the Lorentz violating tensors must be reducible to products of a vector field, which we denote by u^α, that describes the preferred frame. We normalize u^α to have components (1, 0, 0, 0) in the preferred frame and place constraints on the coefficients instead. The rotationally invariant extra terms are built from u^α contracted with the coefficients above, for both electrons and photons. The high energy (E_Pl ≫ E ≫ m) dispersion relations for the mSME will be necessary later. To lowest order in the Lorentz violating coefficients they take the form

E^2 = p^2 + m^2 + f^(1)_e p + f^(2)_e p^2     (39)

for electrons, where the coefficients f^(1)_e and f^(2)_e are combinations of the coefficients above that depend on the helicity state s = ±1 of the electron. The positron dispersion relation is the same as Equation (39) with the replacement p → -p, which changes only the f^(1)_e term. In the QED sector, dimension five operators that give rise to n = 3 type dispersion have also been investigated in [230] with the assumption of rotational symmetry; schematically, the photon operator couples u^α F_{αβ} to (u·∂)(u_λ F̃^{λβ}), and the fermion operators involve ψ̄ (γ·u)(η_L P_L + η_R P_R)(u·∂)^2 ψ,     (40)

where P_{R,L} = (1 ± γ_5)/2 are the usual right and left projection operators and F̃^{αβ} = (1/2) ε^{αβγδ} F_{γδ} is the dual of F^{αβ}. One should note that these operators violate CPT. Furthermore, they are not the only dimension five operators, a mistake that has sometimes been made in the literature. For example, we could also have u^α u^β ψ̄ D_α D_β ψ. These other operators, however, do not give rise to n = 3 dispersion, as they are CPT even.
The birefringent dispersion relation for photons that results from Equation (40) is of the n = 3 form, with helicity dependent coefficients of equal magnitude and opposite sign, proportional to ±ξ, for right (+) and left (-) circularly polarized photons. Similarly, the high energy electron dispersion is of the n = 3 form with f_{e(R,L)} = 2η_{R,L}. We note that since the dimension five operators violate CPT, they give rise to different dispersion for positrons than for electrons: while the coefficients for the positive and negative helicity states of an electron are 2η_R and 2η_L, the corresponding coefficients for a positron's positive and negative helicity states are -2η_L and -2η_R. This will be crucially important when deriving constraints on these operators from photon decay.

Non-commutative spacetime
A common conjecture for the behavior of spacetime in quantum gravity is that the algebra of spacetime coordinates is actually non-commutative. This idea has led to a large amount of research on Lorentz violation, and we would be remiss if we did not briefly discuss Lorentz violation from non-commutativity. We will look at only the most familiar form of spacetime non-commutativity, "canonical" non-commutativity, where the spacetime coordinates acquire the commutation relation

[x^α, x^β] = i Θ^{αβ} / Λ_NC^2,

where Θ^{αβ} is an O(1) tensor that describes the non-commutativity and Λ_NC is the characteristic non-commutative energy scale. Λ_NC is presumably near the Planck scale if the non-commutativity comes from quantum gravity. However, in large extra dimension scenarios Λ_NC could be as low as 1 TeV. For discussions of other types of non-commutativity, including those that preserve Lorentz invariance or lead to DSR-type theories, see [187,225]. The phenomenology of canonical non-commutativity as it relates to particle physics can be found in [147,98].
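In field theory the canonical commutator is implemented by replacing ordinary products of fields with the Moyal star product; to lowest order in Θ (a standard expansion, written here with the Λ_NC normalization used above):

```latex
(f \star g)(x) \;=\; f(x)\,g(x)
  \;+\; \frac{i}{2\Lambda_{\rm NC}^{2}}\,\Theta^{\alpha\beta}\,
        \partial_\alpha f(x)\,\partial_\beta g(x)
  \;+\; O\!\left(\Theta^{2}\right)
```

The Θ-dependent correction is what generates the Lorentz violating higher dimension operators discussed below when the star product is inserted into a gauge theory action.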
The existence of Θ αβ manifestly breaks Lorentz invariance and hence the size of Λ NC is constrained by tests of Lorentz violation. However, in order to match a non-commutative theory to low energy observations, we must have the appropriate low energy theory, which implies that the infamous UV/IR mixing problem of non-commutative field theory must be tamed enough to create a well-defined low energy expansion. No general method for doing this is known, although supersymmetry [216] can perhaps do the trick. 9 If the UV/IR mixing is present but regulated by a cutoff, then the resulting field theory can be re-expressed in terms of the mSME [31,75].
In order to see how constraints come about, consider for the moment non-commutative QED. The Seiberg-Witten map [254] can be used to express the non-commutative fields in terms of ordinary gauge fields. At lowest order in 1/Λ_NC^2, the resulting low energy effective action is the ordinary Maxwell action supplemented by dimension six operators built from Θ^{αβ} and three powers of the field strength. Direct constraints on these dimension six non-renormalizable operators from cosmological birefringence and atomic clocks have been considered in [75]. A stronger bound of Λ_NC > 5 × 10^14 GeV [218] on the non-commutativity scale can be derived from clock comparison experiments with Cs/Hg magnetometers [46] (see Section 5.2). Similarly, the possibility of constraints from synchrotron radiation in astrophysical systems has been analyzed in [77].
Other strong constraints can be derived by noting that, without a custodial symmetry, loop effects involving the dimension six operators will induce lower dimension operators. In [31], the authors calculated which dimension four operators would be generated, assuming that the field theory has some cutoff scale Λ. The dimension six operators induce dimension four operators of the form B (Θ^2)^{αβ} F_{αν} F^ν_β and A Θ^{αβ} Θ^{μν} F_{αμ} F_{βν}, where A, B are dimensionless numbers that depend on Λ_NC and Λ. There are two different regimes of behavior for A, B. If Λ ≳ Λ_NC then A, B are O(1) (up to loop factors and coupling coefficients), independent of the scale Λ_NC. Such strong Lorentz violation is obviously ruled out by current experiment, implying that this limit is observationally not viable in the perturbative approach. If instead one takes Λ ≪ Λ_NC then A, B ∝ Λ^2/Λ_NC^2. The resulting field theory becomes a subset of the standard model extension; specifically, the new operators have the form of the (k_F)_{αβγδ} F^{αβ} F^{γδ} term in Equation (35). It has been argued [75] that any realistic non-commutative theory must eventually reduce to part of the mSME. The approach of [31] shows this is possible, although the presence of such a low energy cutoff must then be explained.
All of the above approaches use an expansion in Θ^{αβ} and 1/Λ_NC to obtain a low energy effective field theory. In terms of Lorentz tests, the results are therefore based upon this EFT expansion and not on the full non-commutative theory. We will accordingly restrict ourselves to discussing limits on the various terms in the effective field theories rather than directly quoting limits on the non-commutative scale, and leave it to the reader to translate those limits into constraints (if any) on Λ_NC and/or Λ.

Symmetry and relevant/irrelevant Lorentz violating operators
The above Section 4.2 illustrates a crucial issue in searches for Lorentz violation that are motivated by quantum gravity: Why is Lorentz invariance such a good approximate symmetry at low energies?

9 Other methods of removing UV/IR mixing exist; for an example see [271].
To illustrate the problem, let us consider the standard assumption made in much of the work on Lorentz violation in astrophysics: that there exist corrections to particle dispersion relations of the form f^(n) p^n / E_Pl^{n-2} with n ≥ 3 and f^(n) of order one. Without any protective symmetry, radiative corrections involving this term will generate dispersion terms of the form f^(n) p^2 and E_Pl f^(n) p. These terms are obviously ruled out by low energy experiment. Accordingly, the first place to look for Lorentz violation is in terrestrial experiments using the standard model extension, rather than in astrophysics with higher dimension operators. However, no evidence for such violation has been found. The absence of lower dimension operators implies that either there is a fine tuning in the Lorentz violating sector [91], some other symmetry is present that protects the lower dimension operators, or Lorentz invariance is an exact symmetry.
It is always possible that Lorentz violation is finely tuned; there are other currently unexplained fine-tuning problems (such as the cosmological constant) in particle physics. However, it would be far preferable if some symmetry or partial symmetry could naturally suppress or forbid the lower dimension operators. For rotation invariance, a discrete remnant of the original symmetry is enough: for example, hypercubic symmetry on a lattice suffices to forbid dimension four rotation breaking operators for scalars. No physically meaningful equivalent construction exists for the full Lorentz group, however (see [223] for a further discussion of this point). A discrete symmetry that can forbid some of the possible lower dimension operators is CPT. A number of the most tightly constrained operators in the mSME are CPT violating, so imposing CPT symmetry would explain why those operators are absent. However, the CPT even operators in the mSME are also very tightly bounded, so CPT cannot completely resolve the naturalness problem either.
Supersymmetry is currently the only known symmetry (other than Lorentz symmetry itself) that can protect Lorentz violating operators of dimension four or less [137,159,65], much as SUSY protects some lower dimension operators in non-commutative field theory [216]. If one imposes exact SUSY on a Lorentz violating theory, the first allowed operators are of dimension five [137]. These dimension five operators do not induce n = 3 type dispersion like the operators in Equation (40); instead, in a rotationally invariant setting they produce far more suppressed dispersion modifications, which are completely unobservable in astrophysical processes, although high precision terrestrial experiments can still probe them. Dimension six SUSY operators in SQED also yield dispersion relations that are untestable with high energy astrophysics [65]. Of course, we do not live in an exactly supersymmetric world, so it may be that upon SUSY breaking, appropriately sized operators at each mass dimension are generated. This question has recently been explored in [65]. For CPT violating dimension five SUSY operators in SQED, the authors find that SUSY breaking yields dimension three operators of size α m_s^2/M, where m_s is the SUSY breaking scale, M is the scale of Lorentz violation, and α is an O(1) coefficient. For m_s as light as it could be (around 100 GeV), spin polarized torsion balances (see Section 5.4) are able to place limits on M between 10^5 and 10^10 E_Pl. It is therefore probable that these operators are observationally unacceptable. However, dimension five SUSY operators are CPT violating, so a combination of CPT invariance and SUSY would forbid Lorentz violating operators below dimension six. The low energy dimension four operators induced by SUSY breaking in the presence of dimension six operators would then presumably be suppressed by m_s^2/M^2. This is enough suppression to be compatible with current experiment if M is at the Planck scale and m_s ≤ 1 TeV.
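To see why operators that first modify dispersion at order p^4/E_Pl^2 escape astrophysical tests while p^3/E_Pl terms do not, compare the two terms for a 10 TeV electron (our own estimate, with order-one coefficients):

```python
# Relative size of n = 3 and n = 4 Planck-suppressed dispersion terms,
# compared to the mass term, for a 10 TeV electron. Units: GeV.
E_PL = 1.22e19
M_E = 0.511e-3

def term(p, n):
    """Modification f p^n / E_Pl^(n-2) to E^2, with f = 1."""
    return p**n / E_PL**(n - 2)

p = 1e4  # 10 TeV
print(term(p, 3) / M_E**2)  # order one: astrophysically accessible
print(term(p, 4) / M_E**2)  # ~1e-16: hopeless for astrophysics
```

The extra factor of p/E_Pl costs roughly fifteen orders of magnitude at these energies, which is why even exact SUSY, by pushing the leading effects to higher order, renders them invisible to astrophysical threshold arguments.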
Another method by which Lorentz violation can occur but might have small dimension ≤ 4 matter operators is via extra dimension scenarios. For example, in [70] a braneworld scenario was considered where four-dimensional Lorentz invariance was preserved on the brane but broken in the bulk. The only particle which can then directly see Lorentz violation is the graviton -the matter fields, being trapped on the brane, can only feel the bulk Lorentz violation through graviton loops. The induced dimension ≤ 4 operators can be quite small, depending on the exact extra-dimension scenario considered. Note though that this approach has been criticized in [91], whose authors argue that significant Lorentz violation in the infrared would still occur.
In summary, the current status of Lorentz violation in EFT is mildly disconcerting for a phenomenologist (if one really wants to believe in Lorentz violation). From an EFT point of view, without custodial symmetries one would expect that we would have seen signs of Lorentz violation by now. Imposing SUSY + CPT or a braneworld scenario may fix this problem, but then we are left with a model with more theoretical assumptions. Furthermore a SUSY + CPT model is unlikely to ever be testable with astrophysics experiments and requires significant improvement in terrestrial experiments to be seen [65]. Fortunately, since this is a phenomenological review we can blithely ignore the above considerations and simply classify and constrain all possible operators at each mass dimension. This is also the safest approach. After all, we are searching for a possible signal from the mysterious realm of quantum gravity and so must be careful about overly restricting our models.

Lorentz violation with gravity in EFT
The previous field theories dealt only with the possible Lorentz violating terms that can be added to the matter sector. Inclusion of gravity into the mix yields a number of new phenomena. Lorentz violating theories with a preferred frame have been studied extensively (cf. [122,157,277] and references therein), while an extension of the mSME into Riemann-Cartan geometry has been performed in [173]. Ghost condensate models, in which a scalar field acquires a constant time derivative, thereby choosing a preferred frame, were introduced in [34]. Let us first look at the more generic case of [173].
In order to couple Lorentz violating coefficients to fermions, one must work in the vierbein formalism (for a discussion see [272]). In Riemann-Cartan geometry the gravitational degrees of freedom are the vierbein and the spin connection, which give the Riemann and torsion tensors in spacetime. For the purposes of this review we will set the torsion to zero and work strictly in Riemannian geometry; for the complete Lorentz violating theory with torsion see [173] (for more general reviews of torsion in gravity see [143,139]). The low energy action involving only second derivatives of the metric is given by

S = (1/16πG) ∫ d^4x e (R - 2Λ + s^{αβ} R_{αβ} + t^{αβγδ} R_{αβγδ}),

where e is the determinant of the vierbein, R, R_{αβ}, and R_{αβγδ} are the Ricci scalar, Ricci tensor, and Riemann tensor, respectively, and Λ is the cosmological constant. G is the gravitational coupling constant, which can be affected by Lorentz violation. Since there is no longer translation invariance, the Lorentz violating coefficients s^{αβ} and t^{αβγδ} can in principle vary with location, so they also behave as spacetime varying couplings. s^{αβ} and t^{αβγδ} can furthermore be assumed to be trace-free, as the traces can be absorbed into G and Λ, leaving 19 degrees of freedom. The difficulty with this formulation is that it constitutes prior geometry and generically leads to energy-momentum non-conservation, similar to the bimetric model in Section 2.4. Again, the matter stress tensor will not be conserved unless very restrictive conditions are placed on s^{αβ} and t^{αβγδ} (for example, that they are covariantly constant). It is unclear whether such restrictions can be consistently imposed in a complicated metric such as would describe our universe.
A more flexible approach is to presume that the Lorentz violating coefficients are dynamical, as has been pursued in [122,157,34,185,220]. In this scenario the matter stress tensor is automatically conserved if all the fields are on-shell. The trade-off is that the coefficients s^{αβ} and t^{αβγδ} must be promoted to the level of fields; in particular, they can have their own kinetic terms. Not surprisingly, this rapidly leads to a very complicated theory, as not only must s^{αβ} and t^{αβγδ} have kinetic terms, but they must also have potentials that force them to be non-zero at low energies. (If such potentials were not present, the vacuum state of the theory would be Lorentz invariant.) For generic s^{αβ} and t^{αβγδ} the complete theory is not known, but a simpler theory of a dynamical "aether", first looked at in [122] and expanded on in [157,185,104,51], has been explored.
The aether models assume that all the Lorentz violation is provided by a vector field u^α. With this assumption, s^{αβ} can be written as u^α u^β, and t^{αβγδ} can always be reduced to an s^{αβ} term due to the symmetries of the Riemann tensor. The most generic action in D dimensions that is quadratic in fields is therefore

S = -(1/16πG) ∫ d^Dx √-g [ R + K^{αβ}_{μν} ∇_α u^μ ∇_β u^ν + V(u^α u_α) ],     (47)

where

K^{αβ}_{μν} = c_1 g^{αβ} g_{μν} + c_2 δ^α_μ δ^β_ν + c_3 δ^α_ν δ^β_μ - c_4 u^α u^β g_{μν},

and the s^{αβ} term has been integrated by parts and replaced with the c_1, c_3 terms. The coefficients c_{1,2,3,4} are dimensionless constants, R is the Ricci scalar, and the potential V(u^α u_α) is some function that enforces a non-zero value of u^α at low energies. With a proper scaling of the coefficients and V, this value can be chosen to be unity at low energies. At low energies u^α acquires an expectation value ū^α, and there will be excitations δu^α about this value. Generically, there will be a single massive excitation and three massless ones. It has been argued in [105] that the theory suffers stability problems unless V is of the form λ(u^α u_α - 1), where λ is a Lagrange multiplier. The theory is also ghost free with this potential and the further assumption that c_1 + c_4 < 0 [133]. Assuming these conditions, aether theories possess a set of coupled aether-metric modes which act as new gravitational degrees of freedom that can be searched for with gravitational wave interferometers or by determining energy loss rates from inspiral systems like the binary pulsar. The same scenario generically happens for any tensor field that dynamically acquires a vacuum expectation value (see Section 7.1), which implies that Lorentz violation can be constrained by the gravitational sector as well as by direct matter couplings.
The aether models use a vector field to describe a preferred frame. The ghost condensate gives a more specific model involving a scalar field. In this scenario the scalar field φ has a Lagrangian of the form P(X), where X = ∂_α φ ∂^α φ. P(X) is a polynomial in X with a minimum at some value X = m, i.e. φ acquires a constant velocity at the minimum. In a cosmological setting, Hubble friction drives the field to this minimum, hence there is a global preferred frame determined by the velocity of φ. This theory gives rise to the same Lorentz violating effects as aether theories, such as Čerenkov radiation and spin dependent forces [33]. In general, systems that give constraints on the coefficients of the aether theory are likely to also yield constraints on the size of the velocity m.

Terrestrial Constraints on Lorentz Violation
Having laid out the necessary theoretical background, we now discuss the various experiments and observations that give the best limits on Lorentz violation.

Penning traps
A Penning trap is a combination of static magnetic and electric fields that can keep a charged particle localized within the trap for extremely long periods of time (for a review of Penning traps see [68]). A trapped particle moves in a number of different ways. The two motions relevant for Lorentz violation tests are the cyclotron motion in the magnetic field and Larmor precession due to the spin. The ratio of the precession frequency ω_s to the cyclotron frequency ω_c is

ω_s / ω_c = g / 2,

where g is the g-factor of the charged particle. The energy levels for a spin 1/2 particle are given by E_n^s = n ω_c + s ω_s, where n is an integer and s = ±1/2. For electrons and positrons, where g ≈ 2, the state (n, s = −1/2) is almost degenerate with the state (n − 1, s = +1/2). The degeneracy breaking is solely due to the anomalous magnetic moment of the electron and is usually denoted by ω_a = ω_s − ω_c. By introducing a small oscillating magnetic field into the trap one can induce transitions between these almost degenerate energy states and very sensitively determine the value of ω_a.
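As a quick numerical illustration (a sketch with standard constants and an assumed trap field, not data from the cited experiments), the frequency relations above can be checked directly:

```python
# Penning-trap frequency relations for an electron in a hypothetical 5 T trap.
# Uses omega_s / omega_c = g/2 and omega_a = omega_s - omega_c from the text.
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
g = 2.00231930436        # electron g-factor
B = 5.0                  # assumed trap field, T

omega_c = e * B / m_e            # cyclotron angular frequency
omega_s = (g / 2.0) * omega_c    # spin precession frequency
omega_a = omega_s - omega_c      # anomaly frequency, (g/2 - 1) * omega_c

print(omega_a / omega_c)         # the anomaly a_e = g/2 - 1, roughly 1.16e-3
```

The smallness of ω_a relative to ω_c is what makes the near-degenerate levels so useful: transitions at ω_a can be driven and counted with very high precision.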
The primary use of measurements of ω_a is that they directly give a very accurate value of g − 2. However, due to their precision, these measurements also provide good tests of CPT and Lorentz invariance. In the mSME, the only framework that has been applied to Penning trap experiments, the g factor for electrons and positrons receives no corrections at lowest order. However, the frequencies ω_a and ω_c both receive corrections [60]. At lowest order in the Lorentz violating coefficients these corrections are (with the trap's magnetic field in the z-direction)

ω_c^{e∓} ≈ (1 − c_{00} − c_{XX} − c_{YY}) ω_c^{e,0},
ω_a^{e∓} ≈ ω_a^{e,0} ∓ 2 b_Z + 2 d_{Z0} m_e + 2 H_{XY},    (50)

expressed in a non-rotating frame. The unmodified frequencies are denoted by ω_{c,a}^{e,0} and the Lorentz violating parameters are various components of the general set given in Equations (32) and (33).
The functional form of Equation (50) immediately makes clear that there are two ways to test for Lorentz violation. The first is to look for an instantaneous difference in ω_a between electrons and positrons, which occurs if the b_Z parameter is non-zero. The observational bound on this difference is |ω_a^{e+} − ω_a^{e−}| < 2.4 × 10^−21 m_e [97], which leads to a bound on b_Z of order b_Z ≤ 10^−21 m_e. The second approach is to track ω_{a,c} over time, looking for sidereal variations as the orientation of the experimental apparatus changes with respect to the background Lorentz violating tensors. This approach has been used in [217] to place a bound on the diurnal variation of the anomaly frequency of ∆ω_a^{e−} ≤ 1.6 × 10^−21 m_e, which limits a particular combination of components of b_µ, c_µν, d_µν, and H_µν at this level. Finally, we note that similar techniques have been used to measure CPT violation for proton/anti-proton and hydrogen ion systems [118]. By measuring the cyclotron frequency over time, bounds on the cyclotron frequency variation (50) for the anti-proton have established a limit at the level of 10^−26 on components of c_µν for the anti-proton.
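The sidereal-variation searches described above amount to fitting a sinusoid at the sidereal frequency to a long frequency record. Here is a minimal sketch of that procedure, with entirely hypothetical numbers (the injected amplitude, noise level, and sampling are invented for illustration):

```python
import math, random

# Inject a small daily modulation into a noisy frequency record and recover
# its amplitude by projecting onto sin/cos at the sidereal frequency.
omega = 2 * math.pi / 86164.1           # sidereal angular frequency, rad/s
ts = [600.0 * i for i in range(4320)]   # 30 days sampled every 10 minutes

A_true, phi = 3e-3, 0.7                 # injected amplitude and phase (arb. units)
rng = random.Random(1)
data = [A_true * math.sin(omega * t + phi) + 0.02 * rng.gauss(0, 1) for t in ts]

# For a long record the sin/cos basis is nearly orthogonal, so the
# least-squares amplitudes reduce to simple projections:
N = len(ts)
s = sum(d * math.sin(omega * t) for d, t in zip(data, ts)) * 2 / N
c = sum(d * math.cos(omega * t) for d, t in zip(data, ts)) * 2 / N
A_fit = math.hypot(s, c)
print(A_fit)   # close to the injected 3e-3
```

In the real experiments the analysis is of course more elaborate (drifts, systematics, multiple harmonics), but the amplitude of the recovered sidereal component is the quantity that is bounded.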

Clock comparison experiments
Living Reviews in Relativity http://www.livingreviews.org/lrr-2005-5
The classic clock comparison experiments are those of Hughes [150] and Drever [100], and their basic approach is still used today. Two "clocks", usually two atomic transition frequencies, are co-located at some point in space. As the clocks move, they pick out different components of the Lorentz violating tensors in the mSME, yielding a sidereal drift between the two clocks. The difference between clock frequencies can be measured over long periods, yielding extremely high precision limits on the amount of drift and hence the parameters in the mSME. 13 Note that this approach is only possible if the clocks are made of different materials or have different orientations.
The best overall limit is in the neutron sector of the mSME and comes from a 3He/129Xe maser system [44,45]. In this setup, both noble gases are co-located. The gases are placed into a population inverted state by collisions with a pumped vapor of rubidium. In a magnetic field of 1.5 G, each gas acts as a maser, at frequencies of 4.9 kHz for He and 1.7 kHz for Xe. The Xe emission is used as a magnetometer to stabilize the magnetic field while the He emission frequency is tracked over time, looking for sidereal variation. At lowest order in Lorentz violating couplings, the Lorentz violating effect for each gas is that of a single valence neutron, so this experiment is sensitive only to neutron parameters in the mSME. The magnitude of the sidereal variation ∆f_J is set by a linear combination of the coefficients b̃_J, d̃_J, and g̃_{D,J}, where J stands for the X, Y components of the Lorentz violating tensors in a non-rotating frame that are orthogonal to the earth's rotation axis. All parameters are understood to be the ones for the neutron sector of the mSME. The coefficients b̃, d̃, and g̃ are related to the mSME coefficients of Section 4.
Here m is the neutron mass and ε_{IJK} is the three-dimensional antisymmetric tensor. Barring conspiratorial cancellations among the coefficients, the bound on b̃_⊥ = (b̃_X² + b̃_Y²)^{1/2} is (6.4 ± 5.4) × 10^−32 GeV, which is the strongest clock comparison limit on mSME parameters. Similarly, one can derive bounds on d̃_⊥ and g̃_{D,⊥} that are two to three orders of magnitude weaker; hence certain components of these coefficients are bounded at the level of 10^−28 GeV. A continuation of this experiment has recently been able to directly constrain boost violation at the level of 10^−27 GeV [71] (sidereal variations probe only rotation invariance). Besides the bounds above, other clock comparison experiments [175] establish bounds on further coefficients in the neutron sector of the mSME.

A constraint on the dimension five operators of Equation (40) for neutrons was recently derived in [52] using limits on the variation of the hyperfine nuclear spin transition in Be+ as a function of the angle between the spin axis and an external magnetic field [64]. Assuming the reference frame of the earth is not aligned with the four-vector u^α, the extra terms in Equation (40) generically introduce a small orientation dependent potential into the non-relativistic Schrödinger equation for any particle. For Be+, the nuclear spin can be thought of as being carried by a single neutron, so this experiment limits the neutron Lorentz violating coefficients. This extra potential for the neutron leads to an anisotropy of the hyperfine transition frequency, which can be bounded by experiment. The limits are roughly |η_1| < 6 × 10^−3 and |η_2| < 3 if u^α is timelike and coincides with the rest frame of the CMBR. If u^α is spacelike one has |η_1| < 2 × 10^−8 and |η_2| < 10^−8. If u^α is lightlike both coefficients are bounded at the 10^−8 level.
Note that all these bounds are approximate, as they depend on the spatial orientation of the experiment with respect to spatial components of u α in the lab frame. The authors of [52] have assumed that the orientation is not special.
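For concreteness, the quadrature combination b̃_⊥ = (b̃_X² + b̃_Y²)^{1/2} used above is easy to evaluate; the component values below are hypothetical, chosen only to reproduce the order of the quoted bound:

```python
import math

# Hypothetical transverse components (GeV); only the combination is bounded.
b_X, b_Y = 4.5e-32, 4.5e-32
b_perp = math.hypot(b_X, b_Y)    # sqrt(b_X**2 + b_Y**2)
print(b_perp)                    # ~6.4e-32 GeV, the order of the quoted limit
```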
The above constraints apply solely to the neutron sector. Other clock comparison experiments have been performed that yield constraints on the proton sector of the mSME [84,243,196,46,241]. The best proton limit, on the b̃_⊥ parameter, is |b̃_⊥| < 2 × 10^−27 GeV [241], with corresponding limits on d̃_⊥ and g̃_{D,⊥} of order 10^−25 GeV. Similar bounds have been estimated [175] from the experiment of Berglund et al. [46] using the Schmidt model [251] for nuclear structure, where an individual nucleon is assumed to carry the entire nuclear angular momentum. The experiments of Chupp [84], Prestage [243], and Lamoreaux [196] are insensitive to proton coefficients in this model, so no proton bounds have yet been established from these experiments. As noted in [175], proton bounds would be derivable with a more detailed model of nuclear structure.

Cavity experiments
From the Michelson-Morley experiments onward, interferometry has been an excellent method of testing relativity. Modern cavity experiments extend the ideas of interferometry and provide very precise bounds on certain photon parameters. The main technique of a cavity experiment is to detect the variation of the resonance frequency of the cavity as its orientation changes with respect to a stationary frequency standard. In this sense it is similar to a clock comparison experiment. However, since one of the clocks involves photons, cavity experiments constrain the electromagnetic sector of the mSME as well.
The analysis of cavity experiments is much easier if we make a field redefinition in the electromagnetic sector of the mSME [179]. In analogy to the theory of dielectrics, we define two new fields D and H by

D_j = (δ_jk + (κ_DE)_jk) E_k + (κ_DB)_jk B_k,
H_j = (δ_jk + (κ_HB)_jk) B_k + (κ_HE)_jk E_k.

The κ coefficients are related to the mSME coefficients by

(κ_DE)_jk = −2 (k_F)_{0j0k},
(κ_HB)_jk = (1/2) ε_{jpq} ε_{krs} (k_F)_{pqrs},
(κ_DB)_jk = −(κ_HE)_kj = (k_F)_{0jpq} ε_{kpq}.

With this choice of fields, the modified Maxwell equations from the mSME take the suggestive source-free form

∇·D = 0, ∇×H − ∂_0 D = 0, ∇·B = 0, ∇×E + ∂_0 B = 0.

This redefinition shows that the Lorentz violating background tensor (k_F)_{µναβ} can be thought of as a dielectric medium with no charge or current density. Hence we can apply much of our intuition about the behavior of fields inside a dielectric to construct tests of Lorentz violation. Note that since H and D depend on the components of (k_F)_{µναβ}, the properties of the dielectric are orientation dependent. Constraints from cavity experiments are not on the κ parameters themselves, but rather on the linear combinations

κ̃_{e+} = (1/2)(κ_DE + κ_HB),
κ̃_{e−} = (1/2)(κ_DE − κ_HB) − (1/3) tr(κ_DE) 1,
κ̃_{o+} = (1/2)(κ_DB + κ_HE),
κ̃_{o−} = (1/2)(κ_DB − κ_HE),
κ̃_tr = (1/3) tr(κ_DE).

κ̃_tr, κ̃_{e+}, and κ̃_{e−} are all parity even, while κ̃_{o+} and κ̃_{o−} are parity odd. The usefulness of this parameterization can be seen if we rewrite the Lagrangian in these parameters [179],

L = (1/2)[(1 + κ̃_tr) E² − (1 − κ̃_tr) B²] + (1/2) E·(κ̃_{e+} + κ̃_{e−})·E − (1/2) B·(κ̃_{e+} − κ̃_{e−})·B + E·(κ̃_{o+} + κ̃_{o−})·B.

The most straightforward way to constrain Lorentz violation with cavity resonators is to study the resonant frequency of a cavity. Since we have a cavity filled with an orientation dependent dielectric, the resonant frequency will also vary with orientation. The resonant frequency of a cavity is

f_r = m c / (2 n L),

where m is the mode number, c is the speed of light, n is the index of refraction (including Lorentz violation) of any medium in the cavity, and L is the length of the cavity. f_r can be sensitive to Lorentz violating effects through c, n, and L. Depending on the construction of the cavity, some effects can dominate over others. For example, in sapphire cavities the change in L due to Lorentz violation is negligible compared to the change in c. This allows one to isolate the electromagnetic sector.
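The sensitivity argument for f_r = mc/(2nL) can be sketched numerically; to first order the fractional frequency shift is δc/c − δn/n − δL/L. The perturbation sizes below are assumed for illustration:

```python
# Fractional shift of f_r = m c / (2 n L) under small, hypothetical
# orientation-dependent changes in c, n, and L.
c0 = 299792458.0
m, n, L = 1, 1.0, 0.05            # mode number, index, cavity length (m)

def f_r(c=c0, n=n, L=L, m=m):
    return m * c / (2.0 * n * L)

# Hypothetical Lorentz-violating fractional perturbations:
dc, dn, dL = 1e-15, 0.0, 0.0      # sapphire-like case: length change negligible
shift = (f_r(c=c0 * (1 + dc), n=n * (1 + dn), L=L * (1 + dL)) - f_r()) / f_r()
print(shift)                      # ~ dc - dn - dL at first order
```

For a sapphire cavity, as noted above, the δL term is negligible, so the measured shift tracks the photon-sector κ̃ coefficients.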
In general, all cavities are sensitive to the photon κ parameters. In contrast to sapphire, for certain materials the strain induced on the cavity by Lorentz violation is large. This allows sensitivity to the electron parameters c µν at a level equivalent to the photon parameters. Furthermore, by using a cavity with a medium, the dependence of f r on n gives additional electron sensitivity [226].
The complete bounds on the mSME coefficients from cavity experiments are given in [23,67,227,226,280,207,32,261]. The strongest bounds are displayed in Table 1. Roughly, the components of κ̃_{e−} and c_µν are bounded at O(10^−15) while κ̃_{o+} is bounded at O(10^−11). The 10^4 difference arises as κ̃_{o+} enters constraints suppressed by the boost factor of the earth relative to the solar "rest" frame where the coefficients are taken to be constant.

Table 1: Cavity limits on c_µν, κ̃_{e−}, and κ̃_{o+} (taken from [23,226,32,261]). Components are in a sun centered equatorial frame. Error bars are 1σ. The non-zero value of κ̃_{e−}^{ZZ} is argued by the authors to be due to systematics in the experiment [32].

Spin polarized torsion balances
Clock comparison experiments constrain the b̃_J parameter for protons and neutrons. Spin polarized torsion balances are able to place comparable limits on the electron sector of the mSME [56]. The best limits on b̃_i (where i is the spatial direction, including that parallel to the earth's rotation axis) for the electron come from two balances, one in Washington [170,141] and one in Taiwan [148]. We detail the Washington experiment for pedagogical purposes; the two approaches are similar. In the Washington experiment two different types of magnets (SmCo and Alnico) are arranged in an octagonal shape. Four SmCo magnets are on one side of the octagon and four Alnico magnets are on the other. The magnetization of both types of magnets is set to be equal and in the angular direction around the octagon. This minimizes any magnetic interactions. However, with equal magnetization the net electron spin of the SmCo and Alnico magnets differs, as the SmCo magnets have a large contribution to their overall magnetization from the orbital angular momentum of Sm ions. Therefore the octagonal pattern of magnets has an overall spin polarization in the octagon's plane.
A stack of four of these octagons is suspended from a torsion fiber in a vacuum chamber. The magnets give an estimated net spin polarization equivalent to approximately 10^23 aligned electron spins. The whole apparatus is then mounted on a turntable. As the turntable rotates, a bound on Lorentz violation is obtained in the following manner. Lorentz violation in the mSME gives rise to an interaction potential for non-relativistic electrons of the form V = b̃_i σ^i, where i stands for direction and σ^i is the electron spin. Since b̃ points in some fixed direction in space, the interaction produces a torque on the torsion balance as the turntable rotates. The magnet apparatus therefore twists on the torsion fiber by an angle Θ that oscillates at the rotation frequency ω with an initial phase φ_0 set by the orientation and an amplitude proportional to V_H/κ, where V_H is the horizontal component of V and κ is the torsion constant. Since κ and ω are known, a measurement of Θ gives the magnitude of V_H; since σ^i is also known, V_H gives a limit on the size of b̃_i. The absence of any extra twist limits all components of |b̃| for the electron to be less than 10^−28 GeV. The Taiwan experiment uses a different material (Dy_6Fe_23) [148]. The bounds from this experiment are of order 10^−29 GeV for the components of b̃_i perpendicular to the spin axis and 10^−28 GeV for the parallel component.
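The scale of the effect being sought can be estimated with the numbers quoted above (a rough sketch; the conversion factor is standard and the b̃ value is simply the order of the final bound):

```python
# Total interaction energy of ~1e23 polarized spins in a background
# b-field at the bound level of ~1e-28 GeV.
GeV_to_J = 1.602176634e-10   # 1 GeV in joules
N_spins = 1e23               # net polarization quoted in the text
b_tilde = 1e-28              # GeV, order of the final bound

E_int = N_spins * b_tilde * GeV_to_J
print(E_int)                 # ~1.6e-15 J: a minuscule energy, hence torsion balances
```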
To conclude this section, we note that the torsion balance experiments are actually sensitive enough to also constrain the dimension 5 operators in Equation (40). Assuming that all lower dimension operators are absent, the constraint on the dimension five operators is |η_R − η_L| < 4 [230].

Neutral mesons
Mesons have long been used to probe CPT violation in the standard model. In the framework of the mSME, CPT violation also implies Lorentz violation. Let us focus on kaon tests, where most of the work has been done; the approach for the other mesons is similar [169,1]. The relevant parameter for CPT and Lorentz violation in neutral kaon systems is a_µ for the down and strange quarks (since K⁰ = ds̄). As we mentioned previously, one of the a_µ can always be absorbed by a field redefinition. Therefore only the difference between the quark a_µ's, ∆a_µ = r_d a_µ^d − r_s a_µ^s, controls the amount of CPT violation and is physically measurable. Here r_{d,s} are coefficients that allow for effects due to the quark bound state [184].
A generic kaon state Ψ_K is a linear combination of the strong eigenstates K⁰ and K̄⁰. If we write Ψ_K in two component form, the time evolution of the Ψ_K wavefunction is given by a Schrödinger equation, where the Hamiltonian H is a 2×2 complex matrix. H can be decomposed into real and imaginary parts, H = M − iΓ. M and Γ are Hermitian matrices usually called the mass matrix and decay matrix, respectively. The eigenstates of H are the physically propagating states, which are the familiar short and long decay states K_S and K_L. CPT violation occurs only when the diagonal components of H are not equal [198]. In the mSME, the lowest order contribution to the diagonal components of H occurs in the mass matrix M; contributions to Γ are higher order [184]. Hence the relevant observable for this type of CPT violation in the kaon system is the K⁰ and K̄⁰ mass difference, ∆_K = (m_{K⁰} − m_{K̄⁰})/m_K. In the mSME the deviation ∆_K is (as usual) orientation dependent. In terms of ∆a_µ, we have [171]

∆_K ≈ β^µ ∆a_µ / m_K,

where β^µ is the four-velocity of the kaon in the observer's frame. The mass difference ∆_K has been extremely well measured by experiments such as KTeV [234] or FNAL E773 at Fermilab [253].
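A small sketch of the orientation dependence of ∆_K ≈ β^µ ∆a_µ/m_K (the ∆a components below are hypothetical; the point is that reversing the beam direction changes the observable):

```python
import math

m_K = 0.497611                        # kaon mass, GeV
da = [1e-20, 5e-21, 0.0, 0.0]         # hypothetical (t, x, y, z) of Delta-a, GeV

def delta_K(beta, direction):
    """Delta_K for a kaon of speed beta moving along the unit vector `direction`."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    beta_mu = [gamma] + [gamma * beta * d for d in direction]
    # Minkowski contraction with signature (+, -, -, -)
    return (beta_mu[0] * da[0] - sum(beta_mu[i] * da[i] for i in (1, 2, 3))) / m_K

# The observable changes as the beam direction rotates with the earth:
print(delta_K(0.9, (1, 0, 0)), delta_K(0.9, (-1, 0, 0)))
```

Tracking ∆_K as the beam orientation rotates with the earth is what turns the mass-difference measurement into a Lorentz violation test.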

Doppler shift of lithium
If Lorentz invariance is violated, then the transformation laws for clocks with relative velocity will be different from the usual time dilation. The RMS framework of Section 3.2 provides a convenient parameterization of how the Doppler shift can deviate from its standard relativistic form.
Comparisons of oscillator frequencies under boosts therefore can constrain the α RMS parameter in the RMS framework. The best test to date comes from spectroscopy of lithium ions in a storage ring [249]. In this experiment, 7 Li + ions are trapped in a storage ring at a velocity of 0.064 c.
The transition frequencies of the boosted ions are then measured and compared to the transition frequencies at rest, providing a bound on the deviation from the special relativistic Doppler shift of |α_RMS| < 2 × 10^−7 in the RMS framework. Recently, the results of [249] have been reinterpreted in the context of the mSME. For the electron/proton sector, approximate bounds have been placed on combinations of the c_µν components with indices J = X, Y, Z in a heliocentric frame [197]. In the photon sector, the limit κ̃_tr ≤ O(10^−5) can also be set from this experiment [269].
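The reach of this experiment can be seen with a one-line estimate: in the RMS framework the leading deviation in the time dilation factor scales as α β², so the quoted bound corresponds to a fractional frequency sensitivity of order 10^−9:

```python
# Order-of-magnitude sketch: fractional frequency sensitivity implied by
# the RMS bound, using delta-nu/nu ~ alpha * beta^2.
beta = 0.064            # 7Li+ storage-ring velocity (units of c)
alpha_bound = 2e-7      # quoted RMS bound

frac_sensitivity = alpha_bound * beta ** 2
print(frac_sensitivity)  # ~8e-10
```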

Muon experiments
Muon experiments provide another window into the lepton sector of the mSME. As discussed in Section 4.3, if the mSME coefficients are to be small then there must be some small energy scale suppressing the Lorentz violating coefficients. There are only a few available small scales, namely particle masses or a symmetry breaking scale. If we assume the scale is particle mass, then muon based experiments would have a signal at least 10^2 larger than equivalent electron experiments due to the larger mass of the muon. The trade-off, of course, is that muons are unstable, so experiments are intrinsically more difficult. There are two primary experiments that give constraints on the muon sector. First, spin transitions in muonium (µ⁺e⁻) have been used to place a bound on b̃_J for the muon (see Equation (52) for the definition of b̃_J) [149]. Even though muonium is a muon-electron system, the muon sector of the mSME can be isolated by placing the muonium in a strong magnetic field and looking for a particular frequency resonance that corresponds to muon spin flips. The sidereal variation of this transition frequency is then tracked, yielding a limit on b̃_J at the level of 10^−23 GeV, where J = X, Y in a non-rotating frame with Z oriented along the earth's spin axis.

14 For a more thorough discussion of CPT (and CP) tests, see for example [199].
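The mass-scaling argument for the muon's advantage is simple arithmetic (standard PDG mass values; the scaling itself is the assumption, as discussed in Section 4.3):

```python
# If Lorentz violating coefficients scale with particle mass, muon signals
# are enhanced over electron signals by roughly the mass ratio.
m_mu, m_e = 105.6583755, 0.51099895   # MeV
enhancement = m_mu / m_e
print(enhancement)                    # ~207, consistent with "at least 10^2"
```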
The second muon experiment that yields strong limits is the µ⁻/µ⁺ g − 2 experiment [58,39,72]. In this experiment relativistic µ⁻ (or µ⁺) are injected into a storage ring and allowed to decay. The deposit rate of the decay products along the detector is sensitive to the evolution of the spin of the muon, which in turn is a function of g − 2 for the muon. Lorentz violation changes this evolution equation, and therefore this type of g − 2 experiment can bound the mSME. As in the case of the g − 2 experiments in Section 5.1, two types of bounds can be placed from the muon g − 2 experiment. The first is a direct comparison between the g − 2 factors for µ⁻ and µ⁺, which limits the CPT violating coefficient b_Z < 10^−22 GeV. Furthermore, an analysis of sidereal variations involving only one of the µ⁻/µ⁺ at the current sensitivity in [72] could bound the b̃_J coefficient at the level of 10^−25 GeV [58].

Constraints on the Higgs sector
Since the constraints on various parameters of the mSME are so tight, one can derive interesting indirect constraints on unmeasured sectors by considering loop effects. Such an approach has recently been taken in [28], where loop corrections to mSME coefficients from Lorentz violation in the Higgs sector are considered. The same strategy could be used with any particle, but it is especially valuable for the Higgs: since the Higgs is an observationally hidden sector, direct tests are unlikely any time soon. There are four parameters in the Higgs sector of the mSME (see Section 4.1.1).
Constraints on the antisymmetric part of (k_φφ)_µν, which we denote (k^A_φφ)_µν, and on (k_φB)_µν and (k_φW)_µν come from the birefringence constraints on photon propagation (see Section 6.3). Here the loop corrections to the photon propagator induce a non-zero (k_F)_αβµν, which can be directly constrained. This yields a constraint on all three coefficients of order 10^−16. A bound (k^S_φφ)_µν < 10^−13 on the symmetric part can be derived from the cyclotron frequencies of hydrogen ions and anti-protons. Bounds on the CPT violating term (k_φ)_µ come from both the spin polarized torsion balance experiments and the noble gas maser. The torsion experiments bound the t and z components (where z is parallel to the earth's rotation axis) at the level of 10^−27 GeV and the transverse components at 10^−25 GeV. The He/Xe maser system gives a better, although less clean, bound on the transverse components of order 10^−31 GeV.

Relevance of astrophysical observations
Terrestrial experiments are invariably concerned with low energy processes. They are therefore best suited for looking at the mSME, which involves lower dimension operators. Astrophysics is more suited for directly constraining higher dimension operators, as the Lorentz violating effects scale with energy. As mentioned in Section 4.3, the existence of Lorentz violating higher dimensional operators would generically generate lower dimension ones. At the level of sensitivity of astrophysical tests, the size of the corresponding lower dimension operators should give signals in terrestrial experiments. Hence, if a signal for Lorentz violation is seen in astrophysics, one must then explain why Lorentz invariance passes all the low energy tests. As mentioned in Section 4.3, exact SUSY, which is the only known mechanism to completely protect lower dimension operators, yields dispersion modifications (the primary signal used in astrophysics) that are unobservable. In summary, there is currently no "natural" and complete scenario in which astrophysics would observe Lorentz violation while terrestrial experiments see no effect. That said, physics is often surprising, and it is therefore still important to check for Lorentz violating signals in all possible observational areas.

Time of flight
The simplest astrophysical observations that provide interesting constraints on Planck scale Lorentz violation are time of flight measurements of photons from distant sources [20,106,109]. This is also one of two processes (the other being birefringence) that can be directly applied to kinematic models. With a modified dispersion relation of the form (15) and the assumption that the velocity is given by v = ∂E/∂p, 15 the velocity of a photon is

v ≈ 1 + (n − 1)/2 × f^(n) (E/E_Pl)^{n−2}.

If n ≠ 2, the velocity is a function of energy, and the time of arrival difference ∆T between two photons at different energies travelling over a time T is

∆T ≈ (n − 1)/2 × f^(n) (E_2^{n−2} − E_1^{n−2})/E_Pl^{n−2} × T,

where E_{1,2} are the photon energies. The large time T plays the role of an amplifier in this process, compensating for the small ratio E/E_Pl. 16 For n = 1 there are much better low energy constraints, while for n = 4 the constraints are far too weak to be useful. Hence we shall concentrate on n = 3 type dispersion, where this constraint has been most often applied. The best limits [53] are provided by observations of rapid flares from Markarian 421, a blazar at a redshift of approximately z = 0.03, although a number of other objects give comparable results [250,62]. The most rapid flare from Markarian 421 showed a strong correlation of flux at 1 TeV and 2 TeV on a timescale of 280 s. If we assume that the flare was emitted from the same event at the source, the time of arrival delay between 1 TeV and 2 TeV photons must be less than 280 s. Combining all these factors yields the limit |f^(3)| < 128.

15 In κ-Minkowski space there is currently some debate as to whether the standard relation for group velocity is correct [19,186]. Until this is resolved, v = ∂E/∂p remains an assumption that might be modified in a DSR context. It obviously holds in field theoretic approaches to Lorentz violation.
16 This is the first example of a significant constraint on terms in particle dispersion/effective field theory that are Planck suppressed, which would naively seem impossible. The key feature of this reaction is the interplay between the long travel time and the large Planck energy. In general, any experiment that is sensitive to Planck suppressed operators is either extremely precise (as in terrestrial tests of the mSME) or has some sort of "amplifier". An amplifier is some other scale (such as travel time or particle mass) which combines with the Planck scale to magnify the effect.
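The arithmetic behind a limit of this kind can be sketched as follows (the travel time uses an assumed round-number distance of 130 Mpc for z ≈ 0.03, so only the order of magnitude of the result is meaningful):

```python
# Time-of-flight bound sketch for n = 3 dispersion, v = 1 + f3 * E / E_Pl,
# using rough, assumed numbers for the Mkn 421 flare.
E_Pl = 1.22e19                 # Planck energy, GeV
dE = 1000.0                    # 2 TeV - 1 TeV, in GeV
dT_max = 280.0                 # s, flare variability timescale

Mpc = 3.086e22                 # m
T = 130.0 * Mpc / 2.998e8      # assumed travel time, ~1.3e16 s

f3_bound = dT_max * E_Pl / (dE * T)   # from dT = f3 * dE * T / E_Pl
print(f3_bound)                # O(100), the same order as the quoted |f3| < 128
```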
A possible problem with the above bound is that in a single emission event it is not known if the photons of different energies are produced simultaneously. If different energies are emitted at different times, that might mask an LV signal. One way around this is to look for correlations between time delay and redshift, which has been done for a set of gamma ray bursts (GRBs) in [109]. Since time of flight delay is a propagation effect that increases over time, a survey of GRBs at different redshifts can separate it from intrinsic source effects. This enables constraints to be imposed (or LV to be observed) despite uncertainty regarding source effects. The current data from GRBs limit f^(3) to be less than O(10^3) [109], so significant observational progress must be made in order to reach O(1) bounds on f^(3). Improvements on this limit might come from observations of GRBs with new instruments such as GLAST; however, concerns have been raised that source effects may severely impair this approach as well [242,106]. Higher order dispersion corrections seem unlikely to ever be probed with time of flight measurements.
The limit |f^(3)| < 128 can be easily applied to the EFT operators in Equation (40). From Equation (41) we trivially see that the constraint on ξ is |ξ| < 64, again comparing the 2 TeV peak to the 1 TeV one. It might seem that we could do better by demanding that the time delay between 2 TeV right handed and left handed photons be less than 280 s. However, the polarization of the flare is unknown, so it is possible (although perhaps unlikely) that only one polarization is being produced. If one can show that both polarizations are present, then one can further improve this constraint. However, the time of flight constraints are much weaker than the constraints that birefringence places on the operators in Equation (40), so this line of research would not be fruitful.
DSR theories may also predict a time of flight signal, where the speed of light is effectively given by the group velocity of an n = 3 type dispersion relation. 17 If there is such a frequency dependence, it is not expected that DSR also yields birefringence as in the EFT case. An n = 3 type dispersion for photons without birefringence would hence be a strong signal for DSR or something similar. Coupled with the fact that DSR does not affect threshold reactions or exhibit sidereal effects, time of flight analyses provide the only currently realistic probe of DSR theories. Unfortunately, since the invariant energy scale is usually taken to be the Planck energy, time of flight constraints are still one to two orders of magnitude below what is needed to constrain/probe DSR.
As an aside, note that the actual measurement of the dependence of the speed of light on frequency in a telescope such as GLAST [260] has a few subtleties in a DSR framework. Let us make the (unrealistic) assumption that the situation is as good as it could possibly be experimentally: there is a short, high energy GRB from some astrophysical source where all the photons are emitted from the same point at the same time. The expected observational signal is then a correlation between the photon time of arrival and energy. The time of arrival is fairly straightforward to measure, but the reconstruction of the initial photon energy is not so easy. GLAST measures the initial photon energy by calorimetry: the photon goes through a conversion foil and converts to an electron-positron pair. The pair then enters a calorimeter, which measures the energy by scintillation. The initial particle energy is then only known by reconstruction from many events. Energy reconstruction requires addition of the multitude of low energy signals back into the single high energy incoming photon. Usually this addition in energy is linear (with corrections due to systematics/experimental error). However, if we take the DSR energy summation rules as currently postulated, the energies of the low energy events add non-linearly, leading to a modified high energy signal. One might guess that since the initial particle energy is well below the Planck scale, the non-linear corrections make little difference to the energy reconstruction. However, to concretely answer such a question, the multi-particle sector of DSR must be properly understood (for a discussion of the problems with multi-particle states in DSR see [186]).
Finally, while photons are the most commonly used particle in time of flight tests, other particles may also be employed. For example, it has been proposed in [81] that neutrino emission from GRBs may also be used to set limits on n = 3 dispersion. Observed neutrino energies can be much higher than the TeV scale used for photon measurements, hence one expects that any time delay is greatly magnified. Neutrino time delay might therefore be a very precise probe of even n > 3 dispersion corrections. Of course, first an identifiable GRB neutrino flux must be detected, which has not happened yet [5]. Assuming that a flux is seen and able to be correlated on the sky with a GRB, one must still disentangle the signal. In a DSR scenario, where time delay scales uniformly with energy, this is not problematic, at least theoretically. However, in an EFT scenario there can be independent coefficients for each helicity, thereby possibly masking an energy dependent signal. For n = 3 this complication is irrelevant if one assumes that all the neutrinos are left-handed (as would be expected if produced from a standard model interaction), as only the f^(3) coefficient for the left-handed helicity then contributes.

Birefringence
A constraint related to time of flight is birefringence. The dimension five operators in Equation (40), as well as certain operators in the mSME, induce birefringence: different propagation speeds for the two photon polarizations (see Equation (41)). 18 A number of distant astrophysical objects exhibit strong linear polarization in various low energy bands (see for example the sources in [178,127]). Recently, linear polarization at high energies from a GRB has been reported [85], though this claim has been challenged [248,274]. Lorentz violating birefringence can erase linear polarization as a wave propagates, hence measurements of polarization constrain the relevant operators.
The logic is as follows. We assume for simplicity the framework of Section 4.1.4 and rotation invariance; the corresponding analysis for the general mSME case can be found in [178]. At the source, assume the emitted radiation is completely linearly polarized, which will provide the most conservative constraint. To evolve the wave, we must first decompose the linear polarization into the propagating circularly polarized states. If we choose coordinates such that the wave is travelling in the z-direction with initial polarization (0, 1, 0, 0), then the circular basis vectors are ε_L = (0, 1, −i, 0) and ε_R = (0, 1, i, 0). Rearranging slightly, one finds a wave whose polarization vector rotates at a rate set by the frequency difference Δω = ω_R − ω_L (Equation (77)). Hence in the presence of birefringence a linearly polarized wave rotates its direction of polarization during propagation. This fact alone has been used to constrain the k_AF term in the mSME to the level of 10^−42 GeV by analyzing the plane of polarization of distant galaxies [74]. A variation on this constraint can be derived by considering birefringence when the difference Δω = ω_R − ω_L is a function of k_z. A realistic polarization measurement is an aggregate of the polarization of received photons in a narrow energy band. If there is significant power across the entire band, then a polarized signal must have nearly the same polarization direction at the top of the band as at the bottom. If the birefringence effect is energy dependent, however, the polarization vectors across the band rotate at different rates with energy. This causes polarization "diffusion" as the photons propagate. Given enough time the spread in angle of the polarization vectors becomes comparable to 2π and the initial linear polarization is lost. Measurement of linear polarization from distant sources therefore constrains the size of this effect and hence the Lorentz violating coefficients. We can easily estimate the constraint from this effect by looking at when the polarization at two different energies (representing the top and bottom of some experimental band) is orthogonal, i.e. A^α(E_T) A_α(E_B) = 0; using Equation (77) for the polarization gives the corresponding condition on the Lorentz violating coefficients. Three main results have been derived using this approach. Birefringence has been applied to the mSME in [178,179]. Here, the ten independent components of the two coefficients κ̃_e+ and κ̃_o− (see Section 5.3) that control birefringence are expressed in terms of a ten-dimensional vector k^a [179]. The actual bound, calculated from the observed polarization of sixteen astrophysical objects, is |k^a| ≤ 10^−32.^19 A similar energy band was used to constrain ξ in Equation (40) to |ξ| < O(10^−4) [127]. Recently, the reported polarization of GRB021206 [85] was used to constrain ξ to |ξ| < O(10^−14) [156], but since the polarization claim is uncertain [248,274] such a figure cannot be treated as an actual constraint.

^18 Gravitational birefringence has also been studied extensively in the context of non-metric theories of gravitation, which also exhibit Lorentz violation. See for example [259,116,244] for discussions of these theories and the parallels with the mSME.

Living Reviews in Relativity http://www.livingreviews.org/lrr-2005-5
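The scaling behind the quoted bounds on ξ can be checked with a rough estimate. Assuming the n = 3 birefringent dispersion ω_± = k(1 ± ξk/E_Pl), the polarization plane rotates by θ ∼ ξ k² d/E_Pl over a propagation distance d, so demanding θ ≲ π/2 bounds ξ. The photon energies and the 1 Gpc distance below are assumed round numbers for illustration, not the actual source parameters of [127] or [156]:

```python
import math

# Rough birefringence estimate: with omega_pm = k (1 +/- xi k / E_Pl), the
# polarization plane rotates by theta ~ xi k^2 d / E_Pl over a distance d.
# Demanding theta < ~pi/2 bounds xi. Distances are assumed round numbers.
E_PLANCK = 1.22e19              # GeV
HBAR = 6.58e-25                 # GeV s
GPC_IN_S = 3.086e25 / 3.0e8     # seconds
GPC_IN_INV_GEV = GPC_IN_S / HBAR

def xi_bound(k_gev, d_inv_gev=GPC_IN_INV_GEV):
    """xi at which the rotation angle reaches pi/2 for photon momentum k."""
    return (math.pi / 2) * E_PLANCK / (k_gev ** 2 * d_inv_gev)

xi_optical = xi_bound(1.0e-9)   # ~eV photons (galaxy polarimetry)
xi_gamma = xi_bound(1.0e-4)     # ~100 keV photons (GRB polarization claim)
print(f"optical: xi < {xi_optical:.1e}")   # ~1e-4
print(f"gamma:   xi < {xi_gamma:.1e}")     # ~1e-14
```

Since the bound scales as 1/k², moving from eV polarimetry to the ~100 keV GRB band strengthens the estimate by ten orders of magnitude, reproducing the jump from |ξ| < O(10⁻⁴) to |ξ| < O(10⁻¹⁴) quoted above.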

Threshold constraints
We now turn our attention from astrophysical tests involving a single particle species to threshold reactions, which often involve many particle types. Before delving into the calculational details of energy thresholds with Lorentz violation, we give a pedagogical example that shows why particle decay processes (which involve rates) give constraints that are only functions of reaction threshold energies. Consider photon decay, γ → e⁺e⁻ (see Section 6.5.5 for details). In ordinary Lorentz invariant physics the photon is stable to this decay process. What forbids this reaction is solely energy/momentum conservation: two timelike four-momenta (the outgoing pair) cannot add up to the null four-momentum of the photon. If, however, we break Lorentz invariance and assume a photon obeys a dispersion relation of the form ω² = k² + f^(3)_γ k³/E_Pl, then the reaction becomes kinematically allowed for f^(3)_γ > 0 (to see this intuitively, note that the extra term at high energies acts as a large effective mass for a photon). Therefore a photon can decay to an electron-positron pair.
This type of reaction is called a threshold reaction as it can happen only above some threshold energy ω_th ∼ (m_e² E_Pl/f^(3)_γ)^(1/3), where m_e is the electron mass. The threshold energy is translated into a constraint on f^(3)_γ in the following manner. We see 50 TeV photons from the Crab nebula [268], hence this reaction must not occur for photons up to this energy as they travel to us from the Crab. If the decay rate is high enough, one could demand that ω_th is above 50 TeV, constraining f^(3)_γ [152]. If, however, the rate is very small then even though a photon is above threshold it could still reach us from the Crab. Using the Lorentz invariant expression for the matrix element M (i.e. just looking at the kinematical aspect of Lorentz violation) one finds that as ω increases above ω_th the rate very rapidly becomes proportional to f^(3)_γ ω²/E_Pl. If a 50 TeV photon is above threshold, the decay time is then approximately 10^−11/f^(3)_γ s. The travel time of a photon from the Crab is ∼ 10^11 s. Hence if a photon is at all above threshold it will decay almost instantly relative to the observationally required lifetime. Therefore we can neglect the actual rate and derive constraints simply by requiring that the threshold itself is above 50 TeV.
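The numbers in this argument are easy to verify at the order-of-magnitude level (O(1) numerical prefactors in the threshold formula are dropped):

```python
# Order-of-magnitude check of the photon decay threshold
# omega_th ~ (m_e^2 E_Pl / f)^(1/3) and the constraint from 50 TeV Crab photons.
M_E = 0.511e-3      # electron mass, GeV
E_PLANCK = 1.22e19  # GeV

def omega_th(f3_gamma):
    """Threshold energy for photon decay, O(1) prefactors dropped."""
    return (M_E ** 2 * E_PLANCK / f3_gamma) ** (1.0 / 3.0)

print(f"threshold for f = 1: {omega_th(1.0) / 1e3:.1f} TeV")  # ~15 TeV

# Demanding omega_th > 50 TeV bounds f:
f_bound = M_E ** 2 * E_PLANCK / (50.0e3) ** 3
print(f"f^(3)_gamma < {f_bound:.1e}")                          # ~3e-2
```

The smallness of the electron mass pulls the threshold down to the tens of TeV even for an O(1) coefficient, which is why observed 50 TeV photons already yield a constraint around the 10⁻² level.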
It has been argued that, technically, threshold constraints cannot truly be applied to a kinematic model where only modified dispersion is postulated and the dynamics/matrix elements are not known. This is not actually a concern for most threshold constraints. For example, if we wish to constrain f^(3)_γ at O(1) by photon decay, then we can do so as long as M is within 11 orders of magnitude of its Lorentz invariant value (since the decay rate goes as |M|²). Hence for rapid reactions, even an enormous change in the dynamics is irrelevant for deriving a kinematic constraint. Since kinematic estimates of reaction rates are usually fairly accurate (for an example see [202,201]), one can derive constraints using only kinematic models. In general, under the assumption that the dynamics is not drastically different from that of Lorentz invariant effective field theory, one can effectively apply particle reaction constraints to kinematic theories since the decay times are extremely short above threshold.
There are a few exceptions where the rate is important, as the decay time is closer to the travel time of the observed particle. Any type of reaction involving a weakly interacting particle such as a neutrino or graviton will be far more sensitive to changes in the rate; for these particles, the decay time of observed particles can be comparable to their travel time. As well, any process involving scattering, such as the GZK reaction (p + γ_CMBR → p + π⁰) or photon annihilation (γγ → e⁺ + e⁻), is more susceptible to changes in M, as the interaction time is again closer to the particle travel time. Even for scattering reactions, however, M would need to change significantly to have any effect. Finally, M is important in reactions like γ → 3γ, which are not observed in nature but do not have thresholds [154,183,3,2,124]. In these situations, the small reaction rate is what may prevent the reaction from happening on the relevant timescales. For all of these cases, kinematics-only models should be applied with extreme care. We now turn to the calculation of threshold constraints assuming EFT.

Particle threshold interactions in EFT
When Lorentz invariance is broken there are a number of changes that can occur with threshold reactions. These changes include shifting existing reaction thresholds in energy, adding additional thresholds to existing reactions, introducing new reactions entirely, and changing the kinematic configuration at threshold [86,130,154,200]. By demanding that the energy of these thresholds is inside or outside a certain range (so as to be compatible with observation) one can derive stringent constraints on Lorentz violation.
In this section we will describe various threshold phenomena introduced by Lorentz violation in EFT and the constraints that result from high energy astrophysics. Thresholds in other models are discussed in Section 6.6. We will use rotationally invariant QED as the prime example when analyzing new threshold behavior. The same methodology can easily be transferred to other particles and interactions. A diagram of the necessary elements for threshold constraints and the appropriate sections of this review is shown in Figure 1.
Thresholds are determined by energy-momentum conservation. Since we are working in straight EFT in Minkowski space, translational invariance implies that the usual conservation laws hold, i.e. p^A_α + p^B_α + · · · = p^C_α + p^D_α + . . . , where p_α is the four-momentum of the various particles A, B, C, D, . . . . Since this involves only particle dispersion, we can neglect the underlying EFT for the general derivation of thresholds and threshold theorems. The EFT becomes important when we need to determine (i) the actual dispersion relations that occur in a physical system to establish constraints and (ii) matrix elements for actual reaction rates (cf. [201]).
Threshold constraints have been looked at for reactions which have the same interaction vertices as in Lorentz invariant physics. The reaction rate is therefore suppressed only by gauge couplings and phase space. n > 2 dispersion requires higher mass dimension operators, and these operators will generically give rise to new interactions when the derivatives are made gauge covariant. However, the effective coupling for such interactions is the same size as the Lorentz violation and hence is presumably very small. These reactions are therefore suppressed relative to the Lorentz invariant coupling and can most likely be ignored, although no detailed study has been done.

Required particle energy for "Planck scale" constraints
We now give another simple example of constraints from a threshold reaction to illustrate the required energy scales for constraints on Planck scale Lorentz violation. The key concept for understanding how threshold reactions are useful is that, as we briefly saw for the photon decay reaction in Section 6.4, particle thresholds are determined by particle mass, which is a small number that can offset the large Planck energy. To see this in more detail, let us consider the vacuum Čerenkov effect, A → A + γ, where A is some massive charged particle. In usual Lorentz invariant physics, this reaction does not happen due to energy-momentum conservation. However, consider now a Lorentz violating dispersion relation for A of the form E² = p² + m_A² + f^(n)_A p^n/E_Pl^(n−2) with f^(n)_A > 0. For simplicity, in this pedagogical example we shall not change the photon dispersion relation ω = k. Čerenkov radiation usually occurs when the speed of the source particle exceeds the speed of light in a medium. The same analysis can be applied in this case, although for more general Lorentz violation there are other scenarios where Čerenkov radiation occurs even though the speed condition is not met (see below) [154]. The group velocity of A, v = dE/dp, is equal to one (for n = 3) at the momentum

p_th = (m_A² E_Pl/(2 f^(3)_A))^(1/3),    (82)

and so we see that the threshold momentum can actually be far below the Planck energy, as it is controlled by the particle mass as well. For example, electrons with f^(3)_e of order one would be unstable above approximately 11 TeV. Therefore constraints can be much less than order one with particle energies much less than E_Pl. The orders of magnitude of constraints on f^(n)_A estimated from the threshold equation alone (i.e. we have neglected the possibility that the matrix elements are small) for various particles are given in Table 2 [154]. For neutrinos, p_obs comes from AMANDA data [123]. The p_obs for electrons comes from the expected energy of the electrons responsible for the creation of ∼ 50 TeV gamma rays via inverse Compton scattering [188,268] in the Crab nebula.
For protons, the p obs is from AGASA data [267].
We include the neutrino, even though it is neutral, since neutrinos still have a non-vanishing interaction amplitude with photons. We shall talk more about neutrinos in Section 6.8. The neutrino energies in this table are those currently observed; if future neutrino observatories see PeV neutrinos (as expected) then the constraints will increase dramatically.
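The threshold-only estimate behind these orders of magnitude is easy to reproduce. Inverting Equation (82), demanding p_th > p_obs gives f^(3) < m² E_Pl/(2 p_obs³); the observed momenta below are the rough values quoted in the text (and, for weakly interacting particles like neutrinos, the rate caveats of Section 6.4 apply, so the electron and proton cases are shown):

```python
# Threshold-only vacuum Cerenkov estimate for n = 3: the group velocity of a
# particle with E^2 = p^2 + m^2 + f p^3 / E_Pl reaches the low-energy speed of
# light at p_th = (m^2 E_Pl / (2 f))^(1/3), so p_th > p_obs requires
# f < m^2 E_Pl / (2 p_obs^3).
E_PLANCK = 1.22e19  # GeV

def f3_bound(mass_gev, p_obs_gev):
    return mass_gev ** 2 * E_PLANCK / (2.0 * p_obs_gev ** 3)

bound_e = f3_bound(0.511e-3, 50.0e3)   # 50 TeV Crab IC electrons
bound_p = f3_bound(0.938, 1.0e11)      # ~10^20 eV AGASA protons
print(f"electron: f^(3) < {bound_e:.1e}")  # ~1e-2
print(f"proton:   f^(3) < {bound_p:.1e}")  # ~5e-15
```

The p_obs³ in the denominator is why the ultra-high energy protons, despite their much larger mass, give a constraint some twelve orders of magnitude stronger than the Crab electrons.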
This example is overly simplified, as we have ignored Lorentz violation for the photon. However, the main point remains valid with more complicated forms of Lorentz violation: Constraints can be derived with current data that are much less than O(1) even for n = 4 Lorentz violation. We now turn to a discussion of the necessary steps for deriving threshold constraints, as well as the constraints themselves for more general models.

Assumptions
One must make a number of assumptions before one can analyze Lorentz violating thresholds in a rigorous manner.

Rotation Invariance
Almost all work on thresholds to date has made the assumption that rotational invariance holds. If this invariance is broken, then our threshold theorems and results do not necessarily hold. For threshold discussions, we will assume that the underlying EFT is rotationally invariant and use the notation p = |p⃗|.

Monotonicity
We will assume that the dispersion relation for all particles is monotonically increasing. This is the case for the mSME with small Lorentz violating coefficients if we work in a concordant frame. Mass dimension > 4 operators generate dispersion relations of the form E² = p² + m² + f^(n) p^n/E_Pl^(n−2), which do not satisfy this condition at momenta near the Planck scale if f^(n) < 0. The turnover momentum p_TO where the dispersion relation is no longer monotonically increasing is p_TO = (−2/(n f^(n)))^(1/(n−2)) E_Pl. The highest energy particles known to propagate are the trans-GZK cosmic rays with energy 10^−8 E_Pl. Hence unless |f^(n)| ≫ 1, p_TO is much higher than any relevant observational energy, and we can make the assumption of monotonicity without loss of generality.
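A quick numerical check of this assumption, using O(1) negative coefficients for n = 3 and n = 4:

```python
# Monotonicity check: for E^2 = p^2 + m^2 + f p^n / E_Pl^(n-2) with f < 0,
# the dispersion turns over at p_TO = (-2 / (n f))^(1/(n-2)) E_Pl. For |f| of
# order one this sits near E_Pl, far above the ~1e-8 E_Pl of the highest
# energy cosmic rays.
E_PLANCK = 1.22e19  # GeV

def p_turnover(n, f):
    assert f < 0
    return (-2.0 / (n * f)) ** (1.0 / (n - 2)) * E_PLANCK

p_to_3 = p_turnover(3, -1.0)
p_to_4 = p_turnover(4, -1.0)
p_uhecr = 1.0e-8 * E_PLANCK  # trans-GZK cosmic rays
print(f"n=3: p_TO = {p_to_3 / E_PLANCK:.2f} E_Pl")  # 0.67 E_Pl
print(f"n=4: p_TO = {p_to_4 / E_PLANCK:.2f} E_Pl")  # 0.71 E_Pl
print(f"UHECR momentum: {p_uhecr / E_PLANCK:.0e} E_Pl")
```

Even for the highest energy observed particles, the turnover sits seven to eight orders of magnitude above the observational window unless the coefficients are enormous.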

High energy incoming particle
If there is a multi-particle in-state, we will assume that one of the particles is much more energetic than all the others. This is the observational situation in reactions such as photon-photon scattering or pion production by cosmic rays scattering off the cosmic microwave background (the GZK reaction; see Section 6.5.6).

Threshold theorems
Eventually, any threshold analysis must solve for the threshold energy of a particular reaction. To do this, we must first know the appropriate kinematic configuration that applies at a threshold. Of use will be a set of threshold theorems that hold in the presence of Lorentz violation, which we state below. Variations on these theorems were derived in [88] for single particle decays with n = 2 type dispersion and [215] for two in-two out particle interactions with general dispersion.
Here we state the more general versions.
Theorem 1: The configuration at a threshold for a particle with momentum p 1 is the minimum energy configuration of all other particles that conserves momentum.
Theorem 2: At a threshold all outgoing momenta are parallel to p_1 and all other incoming momenta are anti-parallel.
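The parallel-momentum statement of Theorem 2 can be illustrated numerically for ordinary massive dispersion (this simple grid search is a sketch, not a proof, and the 10 GeV total momentum is an arbitrary choice):

```python
import math

# Illustration of Theorem 2 with ordinary massive dispersion: for fixed total
# outgoing momentum P along z, the minimum-energy two-particle configuration
# has zero transverse momentum, i.e. both momenta parallel to P.
M = 0.511e-3  # GeV

def energy(pz, pt):
    return math.sqrt(pz ** 2 + pt ** 2 + M ** 2)

P = 10.0  # GeV, total momentum along z (arbitrary illustrative value)
min_energy, best_pt = min(
    (energy(P / 2, pt) + energy(P / 2, -pt), pt)
    for pt in [i * 1e-4 for i in range(0, 101)]
)
print(f"minimum total energy at transverse momentum pt = {best_pt}")  # pt = 0.0
```

Any transverse momentum must appear in back-to-back pairs to conserve momentum, and each component strictly increases the energy of the particle carrying it, so the minimum energy configuration (Theorem 1) is the fully parallel one.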

New threshold phenomena
Asymmetric thresholds

Asymmetric thresholds are thresholds where two outgoing particles with equal masses have unequal momenta. This cannot occur in Lorentz invariant reactions. Asymmetric thresholds occur because the minimum energy configuration is not necessarily the symmetric configuration. To see this, let us analyze photon decay, where we have one incoming photon with momentum p_in and an electron/positron pair with momenta q_1, q_2. We will assume our Lorentz violating coefficients are such that the electron and positron have identical dispersion.^20 Imagine that the dispersion coefficients f^(n) for the electron and positron are negative and such that the electron/positron dispersion is given by the solid curve in Figure 2. We define the energy E_symm to be the energy when both particles have the same momentum q_1 = q_2 = p_in/2. This is not the minimum energy configuration, however, if the curvature of the dispersion relation (∂²E/∂p²) at p_in/2 is negative. If we add a momentum ∆q to q_2 and −∆q to q_1, then we change the total energy by ∆E = ∆E_2 − ∆E_1. Since the curvature is negative, ∆E_1 > ∆E_2 and therefore ∆E < 0. The symmetric configuration is not the minimum energy configuration and is not the appropriate configuration to use for a threshold analysis for all p_in.
Note that part of the dispersion curve in Figure 2 has positive curvature, as must be the case if at low energies we have the usual Lorentz invariant massive particle dispersion. If we were considering the constraints derivable when p_in/2 is small and in the positive curvature region, then the symmetric configuration would be the applicable one. In general, whether the asymmetric or the symmetric configuration is appropriate depends heavily on the algebraic form of the outgoing particle Lorentz violation and the energy that the threshold must be above. The only general statement that can be made is that asymmetric thresholds are not relevant when the outgoing particles have n = 2 type dispersion modifications (either positive or negative) or for strictly positive coefficients at any n.
For further examples of the intricacies of asymmetric thresholds, see [154,167].
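The curvature argument above can be checked numerically. Using the high-energy expansion E(q) ≈ q + m²/(2q) + f q²/(2E_Pl) with f = −1, the curvature m²/q³ + f/E_Pl is negative at q = p_in/2 for the (illustrative) choice p_in = 60 TeV, and an unequal split of the momentum indeed costs less energy than the symmetric one:

```python
# Asymmetric threshold illustration: with E(q) ~ q + m^2/(2q) + f q^2/(2 E_Pl)
# and f < 0, the symmetric split of the incoming momentum is NOT the minimum
# energy configuration once d^2E/dq^2 = m^2/q^3 + f/E_Pl goes negative.
M2 = (0.511e-3) ** 2  # electron mass squared, GeV^2
E_PLANCK = 1.22e19
F = -1.0

def extra_energy(q):
    """Energy of one outgoing particle, minus its momentum q."""
    return M2 / (2 * q) + F * q ** 2 / (2 * E_PLANCK)

p_in = 6.0e4  # illustrative 60 TeV photon; curvature is negative at p_in / 2
symmetric = extra_energy(p_in / 2) + extra_energy(p_in / 2)
asymmetric = extra_energy(2.0e4) + extra_energy(4.0e4)
print(symmetric > asymmetric)  # True: the unequal split costs less energy
```

Working with the expansion (rather than the full square root) keeps the tiny Planck-suppressed differences well above floating point noise.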

Hard Čerenkov thresholds
Related to the existence of asymmetric thresholds is the hard Čerenkov threshold, which also occurs only when n > 2 with negative coefficients. However, in this case both the outgoing and incoming particles must have negative coefficients. To illustrate the hard Čerenkov threshold, we consider photon emission from a high energy electron, which is the rotated diagram of the photon decay reaction. In Lorentz invariant physics, electrons emit soft Čerenkov radiation when their group velocity ∂E/∂p exceeds the phase velocity ω/k of the electromagnetic vacuum modes in a medium. This type of Čerenkov emission also occurs in Lorentz violating physics when the group velocity of the electrons exceeds the low energy speed of light in vacuum. The velocity condition does not apply to hard Čerenkov emission, however, so to understand the difference we need to describe both types in terms of energy-momentum conservation.
Let us quickly remind ourselves where the velocity condition comes from. Imposing momentum conservation, the energy conservation equation for emission of a photon of momentum k can be written as

E(p) − E(p − k) = ω(k).    (84)

Dividing both sides by k and taking the soft photon limit k → 0 we have

∂E/∂p = lim_{k→0} ω(k)/k.    (85)

Equation (85) makes clear that the velocity condition is only applicable for soft photon emission. Hard photon emission can occur even when the velocity condition is never satisfied, if the photon energy-momentum vector is spacelike with n > 2 dispersion. As an example, consider an unmodified electron and a photon dispersion of the form ω² = k² − k³/E_Pl. The energy conservation equation in the threshold configuration is

(p² + m²)^(1/2) = ((p − k)² + m²)^(1/2) + (k² − k³/E_Pl)^(1/2),    (86)

where p is the incoming electron momentum. Introducing the variable x = k/p and rearranging, we have

m² E_Pl/p³ = x(1 − x).    (87)

Since all particles are parallel at threshold, x must be between 0 and 1. The maximum value of the right hand side is 1/4, and so we see that we can solve the conservation equation if p > (4m² E_Pl)^(1/3), which is approximately 23 TeV. At threshold x = 1/2, so this corresponds to emission of a hard photon with an energy of roughly 11.5 TeV.
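These numbers follow directly from the conservation equation above:

```python
# Hard Cerenkov threshold check: the conservation equation reduces to
# x (1 - x) = m^2 E_Pl / p^3 with x = k / p, whose right side peaks at 1/4
# (x = 1/2), so emission requires p > (4 m^2 E_Pl)^(1/3).
M_E = 0.511e-3      # GeV
E_PLANCK = 1.22e19  # GeV

p_th = (4.0 * M_E ** 2 * E_PLANCK) ** (1.0 / 3.0)
print(f"p_th        = {p_th / 1e3:.1f} TeV")        # ~23 TeV
print(f"hard photon = {0.5 * p_th / 1e3:.1f} TeV")  # x = 1/2 at threshold
```

Note that at threshold the emitted photon carries away half the electron momentum, which is what makes this "hard" Čerenkov emission, in contrast to the soft emission governed by the velocity condition.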

Upper thresholds
Upper thresholds do not occur in Lorentz invariant physics. It is easy to see that they are possible with Lorentz violation, however. In Figure 3 the region R in energy space spanned by E_out(X_k, p_1) is bounded below, since each individual dispersion relation is bounded below. However, if one can adjust the dispersion E_1(p_1) freely, as would be the case if the incoming particle is a unique species in the reaction, then one can choose Lorentz violating coefficients such that E_1(p_1) moves in and out of R.
As a concrete example consider photon decay, γ → e⁺ + e⁻, with unmodified photon dispersion and an electron/positron dispersion relation chosen strictly for algebraic convenience. This dispersion relation has positive curvature everywhere, implying that the electron and positron have equal momenta at threshold. The energy conservation equation, where the photon has momentum k, then reduces to Equation (90), which has two positive real roots, corresponding to a lower and upper threshold at 14 TeV and 82 TeV, respectively. Such a threshold structure would produce a deficit in the observed photon spectrum in this energy band.^21 Very little currently exists in the literature on the observational possibilities of upper thresholds. A complicated lower/upper threshold structure has been applied to the trans-GZK cosmic ray events, with the lower threshold mimicking the GZK cutoff at 5 × 10^19 eV and the upper entering below the highest energy events at 3 × 10^20 eV [154]. The region of parameter space where such a scenario might happen is extremely small, however.

Figure 3: An example of an upper and lower threshold. R is the region spanned by all X_k and E_1(p_1) is the energy of the incoming particle. Where E_1(p_1) enters and leaves R are the lower and upper thresholds, respectively.
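A lower/upper threshold pair is easy to produce numerically. The coefficients below are chosen purely for illustration (they are not the values used in [154] or in Equation (90)): with an n = 2 term of one sign and an n = 3 term of the other in the electron/positron dispersion, the symmetric-configuration decay condition g(k) ≤ 0 holds only inside a finite band of photon momenta:

```python
# Illustration of an upper threshold window, with illustrative (not physical)
# electron/positron coefficients F2 (n = 2 term) and F3 (n = 3 term).
M2 = (0.511e-3) ** 2  # electron mass squared, GeV^2
E_PLANCK = 1.22e19
F2, F3 = -3.0e-15, 1.0

def g(k):
    """Photon decay (symmetric configuration) is allowed where g(k) <= 0."""
    return M2 + F2 * k ** 2 / 4.0 + F3 * k ** 3 / (8.0 * E_PLANCK)

def find_root(lo, hi, steps=200):
    """Bisect for a sign change of g on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

k_lower = find_root(1.0e4, 4.0e4)  # g turns negative: threshold opens
k_upper = find_root(4.0e4, 1.0e5)  # g turns positive again: threshold closes
print(f"decay allowed between {k_lower / 1e3:.0f} and {k_upper / 1e3:.0f} TeV")
```

A deficit in the photon spectrum would then appear only between the two roots, which is the observational signature of an upper threshold described above.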

Helicity decay
In previous work on the Čerenkov effect based on EFT it has been assumed that left- and right-handed fermions have the same dispersion. As we have seen, however, this need not be the case. When the fermion dispersion is helicity dependent, the phenomenon of helicity decay occurs. One of the helicities is unstable and will decay into the other as a particle propagates, emitting some sort of radiation depending on the exact process considered. Helicity decay has no threshold in the traditional sense; the reaction happens at all energies. However, below a certain energy the phase space is highly suppressed, so we have an effective threshold that practically speaking is indistinguishable from a real threshold. As an example, consider the reaction e_L → e_R + γ, with an unmodified photon dispersion and helicity dependent dispersion relations for right- and left-handed electrons, and assume that f^(4)_eL > f^(4)_eR. Energy-momentum conservation determines the allowed range of the outgoing photon momentum (Equation (93)), where p is the incoming momentum and k is the outgoing photon momentum. We have assumed that the transverse momentum is zero, which gives us the minimum and maximum values of k. k is assumed to be less than p; one can check a posteriori that this assumption is valid. It can be negative, however, which is different from a threshold calculation where all momenta are necessarily parallel. Solving Equation (93) for k_min and k_max to lowest order in m and f^(4)_eL (Equation (94)), it is clear that when p² ≪ m²/f^(4)_eL the phase space is highly suppressed, while for p² ≫ m²/f^(4)_eL the phase space in k becomes of order p. The momentum p_th = (m²/f^(4)_eL)^(1/2) acts as an effective threshold, below which the reaction is strongly suppressed. Constraints from helicity decay in the current literature [155] are complicated and not particularly useful. Hence we shall not describe them here, instead focusing our attention on the strict Čerenkov effect when the incoming and outgoing particle have the same helicity.
For an in-depth discussion of helicity decay constraints see [155].

Threshold constraints in QED
With the general phenomenology of thresholds in hand, we now turn to the actual observational constraints from threshold reactions in Lorentz violating QED. We will continue to work in a rotationally invariant setting. Only the briefest listing of the constraints is provided here; for a more detailed analysis see [154,156,155]. Most constraints in the literature have been placed by demanding that the threshold for an unwanted reaction is above some observed particle energy. As mentioned previously, a necessary step in this analysis is to show that the travel times of the observed particles are much longer than the reaction time above threshold. A calculation of this for the vacuum Čerenkov effect has been done for QED with dimension four Lorentz violating operators in [224]. More generally, a simple calculation shows that the energy loss rate above threshold from the vacuum Čerenkov effect rapidly begins to scale as e² A E^n/E_Pl^(n−2), where A is a coefficient that depends on the coefficients of the Lorentz violating terms in the EFT. Similarly, the photon decay rate is e² A E^(n−1)/E_Pl^(n−2). In both cases the reaction times for high energy particles are roughly (e² A)^−1 E_Pl^(n−2)/E^(n−1), which is far shorter than the required lifetimes for electrons and photons in astrophysical systems for n = 2, 3.^22 The lifetime of a high energy particle in QED above threshold is therefore short enough that we can establish constraints simply by looking at threshold conditions.
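For a feel of the disparity in timescales, the following sketch evaluates the n = 3 reaction time at 50 TeV, taking the dimensionless prefactor A ∼ 1 (an assumption for the estimate) and comparing against the ∼10¹¹ s travel time from the Crab nebula:

```python
# Order-of-magnitude vacuum Cerenkov/photon decay reaction time above
# threshold for n = 3: tau ~ hbar E_Pl / (e^2 A E^2), with A ~ 1 assumed.
HBAR = 6.58e-25                  # GeV s
E_PLANCK = 1.22e19               # GeV
E_SQUARED = 4 * 3.14159 / 137.0  # fine structure e^2 in natural units, ~0.09

E = 50.0e3                       # 50 TeV
rate = E_SQUARED * E ** 2 / E_PLANCK   # GeV
tau = HBAR / rate                      # seconds
travel_time = 1.0e11                   # seconds, roughly the Crab distance
print(f"tau ~ {tau:.1e} s vs travel time ~ {travel_time:.0e} s")
```

The reaction time comes out more than twenty orders of magnitude below the travel time, which is the quantitative content of the statement that particles above threshold decay "almost instantly".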

Photon decay
Lorentz violating terms can be chosen such that photons become unstable to decay into electron-positron pairs [152]. We observe 50 TeV photons from the Crab nebula, so there must exist at least one stable photon polarization. The thresholds for n = 2, 3 dispersion have been calculated in [154]. Demanding that these thresholds are above 50 TeV yields the following best constraints.
For n = 2 with CPT preserved, the bound on the relevant combination of electron and photon coefficients is derived in [154]. If we set d = 0 in Equation (39) so that there is no helicity dependence, this translates to the constraint k_F/2 + c ≤ 4 × 10^−16. If d ≠ 0 then both helicities of electrons/positrons must satisfy this bound, since the photon has a decay channel into every possible combination of electron/positron helicity. The corresponding limit is k_F/2 + (c ± d) ≤ 4 × 10^−16.
For n = 3 the situation is a little more complicated, as we must deal with photon and electron helicity dependence, positron dispersion, and the possibility of asymmetric thresholds. The 50 TeV Crab photon polarizations are unknown, so only the region of parameter space in which both polarizations decay can be excluded. We can simplify the problem dramatically by noting that the birefringence constraint on ξ in Equation (40) is |ξ| ≤ 10^−4 [127]. The level of constraints from threshold reactions at 50 TeV is around 10^−2 [152,167]. Since the birefringence constraint is so much stronger than threshold constraints, we can effectively set ξ = 0 and derive the photon decay constraint in the region allowed by birefringence. With this assumption, we can derive a strong constraint on both η_R and η_L by considering the individual decay channels γ → e⁻_R + e⁺_L and γ → e⁻_L + e⁺_R, where L and R stand for the helicity. For brevity, we shall concentrate on γ → e⁻_R + e⁺_L; the other choice is similar. The choice of a right-handed electron and left-handed positron implies that both particles' dispersion relations are functions only of f^(3)_eR and hence η_R (see Section 4.1.4). The matrix element can be shown to be large enough for this combination of helicities that constraints can be derived by simply looking at the threshold. Imposing the threshold configuration and momentum conservation, and substituting in the appropriate dispersion relations, the energy conservation equation becomes

k = p + m²/(2p) + f^(3)_eR p²/(2E_Pl) + (k − p) + m²/(2(k − p)) − f^(3)_eR (k − p)²/(2E_Pl),    (95)

where k is the incoming photon momentum and p is the outgoing electron momentum. Cancelling the lowest order terms and introducing the variable z = 2p/k − 1, this can be rewritten as

k³ = −4m² E_Pl/(f^(3)_eR z(1 − z²)).    (96)

^22 For n = 4 no QED particles reach energies high enough to provide constraints. The only particles of the required energy are ultra-high energy cosmic rays or neutrinos. Assuming the cosmic rays are protons, the corresponding reaction time for Čerenkov emission is 10^−17 s.
To find the minimum energy configuration we must minimize the right hand side of Equation (96) with respect to z (keeping the right hand side positive). We note that since the range of z is between −1 and +1, the right hand side of Equation (96) can be positive for both positive and negative f^(3)_eR, which implies that the bound will be two sided.
As an aside, it may seem odd that photon decay happens at all when the outgoing particles have opposite dispersion modifications, since the net effect on the total outgoing energy might seem to cancel. However, this is only the case if both particles have the same momenta. We can always choose to place more of the incoming momentum into the outgoing particle with a negative coefficient, thereby allowing the process to occur. This reasoning also explains why the bound is two sided, as the threshold configuration gives more momentum to whichever particle has a negative coefficient.
Returning to the calculation of the threshold, minimizing Equation (96) we find that the threshold momentum is

k_th = (6√3 m² E_Pl/|f^(3)_eR|)^(1/3).    (97)

The absolute value appears because we take the minimum positive value of the right hand side of Equation (96).
Placing k_th at 50 TeV yields the constraint |f^(3)_eR| < 0.25 and hence |η_R| < 0.125. The same procedure applies to the opposite choice of outgoing particle helicities, so η_L obeys this bound as well.
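The quoted bound follows from inverting the threshold formula (the |z(1 − z²)| maximum of 2/(3√3) at z = ±1/√3 gives the 6√3 prefactor):

```python
import math

# Check of the n = 3 photon decay bound: with threshold momentum
# k_th = (6 sqrt(3) m^2 E_Pl / |f|)^(1/3), placing k_th at 50 TeV bounds |f|.
M_E = 0.511e-3
E_PLANCK = 1.22e19

def k_threshold(f_abs):
    return (6.0 * math.sqrt(3.0) * M_E ** 2 * E_PLANCK / f_abs) ** (1.0 / 3.0)

# Invert k_th = 50 TeV for the bound on |f^(3)_eR|:
f_bound = 6.0 * math.sqrt(3.0) * M_E ** 2 * E_PLANCK / (50.0e3) ** 3
print(f"|f^(3)_eR| < {f_bound:.2f}")       # ~0.26, i.e. the quoted 0.25
print(f"|eta_R|    < {f_bound / 2:.2f}")
```

Note the asymmetric threshold at work: the minimum sits at z = ±1/√3 rather than at the symmetric point z = 0, consistent with the discussion in Section 6.5.3.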

Vacuum Čerenkov
The 50 TeV photons observed from the Crab nebula are believed to be produced via inverse Compton (IC) scattering of charged particles off the ambient soft photon background.^23 If one further assumes that the charged particles are electrons, it can then be inferred that 50 TeV electrons must propagate. However, only one of the electron helicities may be propagating, so we can only constrain one of the helicities.
For n = 2 the constraint applies when the electron coefficient is positive relative to the photon coefficient, and the translation to ξ and η_{R,L} is as before. Note that for the range of ξ allowed by birefringence, the relevant constraint is η_R < 0.012 or η_L < 0.012.
A major difficulty with the above constraint is that positrons may also be producing some of the 50 TeV photons from the Crab nebula. Since positrons have opposite dispersion coefficients in the n = 3 case, there is always a charged particle able to satisfy the Čerenkov constraint. Hence by itself, this IC Čerenkov constraint can always be satisfied in the Crab and gives no limits at all. However, as we shall see in Section 6.7, the vacuum Čerenkov constraint can be combined with the synchrotron constraint to give an actual two-sided bound.

Photon annihilation
The high energy photon spectrum (above 10 TeV) from astrophysical sources such as Markarian 501 and 421 has been observed to show signs of absorption due to scattering off the IR background. While this process occurs in Lorentz invariant physics, the amount of absorption is affected by Lorentz violation. The resulting constraint is not nearly as clear cut as in the photon decay and Čerenkov cases, as the spectrum of the background IR photons and the source spectrum are both important, and neither is entirely known. Various authors have argued for different constraints on the n = 3 dispersion relation, based upon how far the threshold can move in the IR background. The constraints vary from O(1) to O(10). However, none of the analyses takes into account the EFT requirement for n = 3 that opposite photon polarizations have opposite Lorentz violating terms. Such an effect would cause one polarization to be absorbed more strongly than in the Lorentz invariant case and the other polarization to be absorbed less strongly. The net result of such a situation is currently unknown, although current data from blazars suggest that both polarizations must be absorbed to some degree [263]. Since even at best the constraint is not competitive with other constraints, and since there is so much uncertainty about the situation, we will not treat this constraint in any more detail. For discussions see [154,17].
6.5.6 The GZK cutoff and ultra-high energy cosmic rays

The GZK cutoff

Ultra-high energy cosmic rays (UHECR), if they are protons, will interact strongly with the cosmic microwave background and produce pions, p + γ → p + π^0, losing energy in the process. As the energy of a proton increases, the GZK reaction can happen with lower and lower energy CMBR photons. At very high energies (5 × 10^19 eV), the interaction length (a function of the power spectrum of interacting background photons coupled with the reaction cross section) becomes of order 50 Mpc. Since cosmic ray sources are probably at further distances than this, the spectrum of high energy protons should show a cutoff around 5 × 10^19 eV [135,281]. A number of experiments have looked for the GZK cutoff, with conflicting results. AGASA found trans-GZK events inconsistent with the GZK cutoff at 2.5σ [96], while Hi-Res has found evidence for the GZK cutoff (although at a lower confidence level; for a discussion see [263]). New experiments such as AUGER [113] may resolve this issue in the next few years. Since Lorentz violation shifts the location of the GZK cutoff, significant information about Lorentz violation (even for n = 4 type dispersion) can be gleaned from the UHECR spectrum. If the cutoff is seen then Lorentz violation will be severely constrained, while if no cutoff or a shifted cutoff is seen then this might be a positive signal.
For the purposes of this review, we will assume that the GZK cutoff has been observed and describe the constraints that follow. We can estimate their size by noting that in the Lorentz invariant case the conservation equation can be written as (p + k)^2 = (m_p + m_π)^2, as the outgoing particles are at rest at threshold. Here p is the UHECR proton 4-momentum and k is the soft photon 4-momentum. At threshold the incoming particles are anti-parallel, which gives a threshold energy for the GZK reaction of E_th = (2 m_p m_π + m_π^2)/(4 ω_0), where ω_0 is the energy of the CMBR photon. The actual GZK cutoff occurs at 5 × 10^19 eV due to the tail of the CMBR spectrum and the particular shape of the cross section (the Δ resonance). From this heuristic threshold analysis, however, it is clear that Lorentz violation can become important when the modification to the dispersion relation is of the same order of magnitude as the proton mass. For n = 2 dispersion, a constraint of f^(2)_π − f^(2)_p < O(10^−23) was derived in [88,87,10]. The case of n = 3 dispersion with f^(3)_p = f^(3)_π was studied in [130,132,131,48,47,49,26,16,25,166,12,167,264,9], while the possibility of f^(3,4)_π ≠ f^(3,4)_p was studied in [154]. A simple constraint [154] can be summarized as follows: if we demand that the GZK cutoff is between 2 × 10^19 eV and 7 × 10^19 eV, then there is a wedge shaped region in the f^(n) parameter space that is allowed [154].
Living Reviews in Relativity http://www.livingreviews.org/lrr-2005-5
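As a sanity check on these numbers, the heuristic Lorentz invariant threshold E_th = (2 m_p m_π + m_π^2)/(4 ω_0) can be evaluated with standard particle masses. This is a minimal sketch, illustrative only, since the real cutoff position folds in the full CMBR spectrum and the Δ-resonance cross section:

```python
# Heuristic Lorentz invariant GZK threshold, E_th = (2 m_p m_pi + m_pi^2)/(4 w0),
# for p + gamma -> p + pi0 on a head-on CMBR photon of energy w0. All energies
# in eV; illustrative only, since the real cutoff folds in the full CMBR
# spectrum and the Delta-resonance cross section.
M_P = 938.3e6    # proton mass
M_PI = 135.0e6   # neutral pion mass
KT = 2.35e-4     # CMBR temperature (2.725 K) in eV

def gzk_threshold(omega0):
    return (2 * M_P * M_PI + M_PI**2) / (4 * omega0)

# A typical CMBR photon (~2.7 kT) puts the threshold near 1e20 eV; photons in
# the high energy tail (several kT) pull the effective cutoff toward 5e19 eV.
print(gzk_threshold(2.7 * KT))
print(gzk_threshold(8 * KT))
```

The factor-of-a-few spread between the mean-photon and tail-photon estimates is exactly why the observed cutoff sits below the naive threshold.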
The numerical values of these constraints should not be taken too literally. While the order of magnitude is correct, simply moving the value of the threshold for the proton that interacts with a CMBR photon at some energy does not give accurate numbers. GZK protons can interact with any photon in the CMBR distribution above a certain energy. Modifying the threshold modifies the phase space for a reaction with all these photons to varying degrees, which must be folded into the overall reaction rate. Before truly accurate constraints can be calculated from the GZK cutoff, a more detailed analysis must be done, recomputing the rate in a Lorentz violating EFT with the particulars of the background photon distribution and the Δ-resonance taken into account. However, the order of magnitude of the constraints above is roughly correct, and since they are so strong, the actual numeric coefficient is not particularly important. Another difficulty with constraints using the GZK cutoff is the assumption that the source spectrum follows the same power law distribution as at lower energies. It may seem that proposing a deviation from the power law source spectrum at that energy would be a conspiracy and hence unlikely. However, this is not quite correct. A constraint on f^(n) will, by the arguments above, be such that the Lorentz violating terms are important only near the GZK energy; below this energy we have the usual Lorentz invariant physics. However, such new terms could then also strongly affect the source spectrum near the GZK energy. Hence the GZK cutoff could vanish or be shifted due to source effects as well. Unfortunately, we have little idea as to the mechanism that generates the highest energy cosmic rays, so we cannot say how Lorentz violation might affect their generation.
In summary, while constraints from the position of the GZK cutoff are impressive and useful, their actual values should be taken with a grain of salt, since a number of unaccounted-for effects may be tangled up in the GZK cutoff.

UHECR Čerenkov
A complementary constraint to the GZK analysis can be derived from the fact that 10^19 − 10^20 eV protons reach us: a vacuum Čerenkov effect must be forbidden up to the highest observed UHECR energy [88,154,119]. The direct limits from photon emission, treating a 5 × 10^19 eV proton as a single constituent, constrain f^(2) for n = 2 [154,86,119] and f^(3) for n = 3 [154], and give f^(4)_p − f^(4)_γ < O(10^−5) for n = 4 [154]. Equivalent bounds on Lorentz violation in a conjectured low energy limit of loop quantum gravity have also been derived using UHECR Čerenkov emission [190].
Čerenkov emission by UHECR has been used most extensively in [119], where two-sided limits on Lorentz violating dimension 4, 5, and 6 operators for a number of particles are derived. The argument is as follows. If we view a UHECR proton as actually a collection of constituent partons (i.e. quarks, gauge fields, etc.), then the dispersion correction should be a function of the corrections for the component partons. By evaluating the parton distribution functions for protons and other particles at high energies, one can get two-sided bounds by considering multiple reactions, in the same way one obtains two-sided bounds in QED. As a simple example, consider only dimension four rotationally invariant operators (i.e. n = 2 dispersion) and assume that all bosons propagate with speed 1 while all fermions have a maximum speed of 1 − ε. Let us take the case ε < 0. A proton is about half fermion and half gauge boson, while a photon is 80 percent gauge boson and 20 percent fermion. The net effect, therefore, is that a proton travels faster than a photon and hence Čerenkov radiates. Demanding that a 10^20 eV proton not radiate yields the bound ε > −10^−23, similar to the standard Čerenkov bound above.
If instead ε > 0, then e^+ e^− pair emission becomes possible, as electrons and positrons are 85 percent fermion and 15 percent gauge boson. Pair emission would also reduce the UHECR energy, so one can demand that this reaction is forbidden as well. This yields the bound ε < 10^−23. Combined with the above bound we have |ε| < 10^−23, which is a strong two-sided bound. The parton approach yields two-sided bounds on dimension six operators of order |f^(4)| < O(10^−2) for all constituent particles, depending on the assumptions made about equal parton dispersion corrections. Bounds on the coefficients of CPT violating dimension five operators are of the order 10^−15. For the exact constraints and assumptions, see [119]. Note that if one treated electrons, positrons, and protons as the fundamental constituents with only n = 2 dispersion and assigned each a common speed 1 − ε, one would obtain no constraints. Therefore the parton model is more powerful. However, for higher dimension operators that yield energy dependent dispersion, simply assigning electrons and protons equal coefficients f^(n) does yield comparable constraints. Finally, we comment that [119] does not explicitly include possible effects such as SUSY that would change the parton distribution functions at high energy.
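The sign logic of this two-sided parton argument can be made concrete with a toy calculation. The fermion fractions below are the rough figures quoted above (50/50 for the proton, 80/20 for the photon, 85/15 for electrons and positrons), not precise parton distribution values:

```python
# Toy version of the parton-based Cerenkov argument of [119]. Bosons travel at
# speed 1 and fermions at 1 - eps, so a composite particle moves at 1 - f*eps,
# where f is its fermionic parton fraction (the rough figures quoted above).
# We track the speed defect f*eps directly, since 1 - 1e-23 underflows a float.
def speed_defect(fermion_fraction, eps):
    """Deviation of the composite speed from 1, i.e. v = 1 - defect."""
    return fermion_fraction * eps

F_PROTON, F_PHOTON, F_PAIR = 0.5, 0.2, 0.85  # rough parton fractions

# eps < 0: the proton outruns the photon, so vacuum Cerenkov emission is
# allowed; observed 1e20 eV protons therefore force eps > -1e-23.
cerenkov_allowed = speed_defect(F_PROTON, -1e-23) < speed_defect(F_PHOTON, -1e-23)

# eps > 0: the proton outruns e+/e- pairs, so pair emission p -> p e+ e- is
# allowed; the same observation then forces eps < 1e-23.
pair_emission_allowed = speed_defect(F_PROTON, 1e-23) < speed_defect(F_PAIR, 1e-23)

print(cerenkov_allowed, pair_emission_allowed)  # True True
```

Either sign of ε leaves some lighter composite for the proton to outrun, which is why the observation of UHECR protons pins |ε| from both sides.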

Gravitational Čerenkov
High energy particles travelling faster than the speed of graviton modes will emit graviton Čerenkov radiation. The authors of [224] have analyzed the emission of gravitons from a high energy particle with n = 2 type dispersion and computed the emission rate, their Equation (101), as a function of c_p, the speed of the particle, and G, Newton's constant. We have normalized the speed of gravity to be one. The corresponding constraint from the observation of high energy cosmic rays is c_p − 1 ≤ 2 × 10^−15. This bound assumes that the cosmic rays are protons, uses the highest recorded energy 3 × 10^20 eV, and assumes that the protons have travelled over at least 10 kpc. Furthermore, the bound assumes that all the cosmic ray protons travel at the same velocity, which is not the case if CPT is violated or d ≠ 0 in the mSME. The corresponding bounds for n = 3, 4 type dispersion are not known, but one can easily estimate their size. The particle speed is approximately 1 + f^(n) (E/E_Pl)^{n−2}. For a proton at an energy of 10^20 eV (10^−8 E_Pl), the constraint on the coefficient f^(3) is then of O(10^−7). Note though, that in this case only one of the UHECR protons must satisfy this bound due to helicity dependence. Similarly, the n = 4 bound is of O(10).
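These order-of-magnitude estimates can be checked in a few lines, requiring the speed excess f^(n) (E/E_Pl)^{n−2} to stay below the n = 2 bound at the quoted proton energy (the Planck energy value is the standard 1.22 × 10^28 eV):

```python
# Order-of-magnitude graviton Cerenkov bounds for n = 3, 4 dispersion.
# The particle speed is roughly 1 + f * (E / E_Pl)^(n-2); requiring the speed
# excess to stay below the n = 2 bound c_p - 1 <= 2e-15 at E = 1e20 eV bounds f.
E_PL = 1.22e28     # Planck energy in eV
C_BOUND = 2e-15    # n = 2 bound on c_p - 1 from [224]
E_CR = 1e20        # UHECR proton energy in eV

def f_bound(n):
    """Largest f^(n) with f * (E_CR / E_Pl)^(n - 2) <= C_BOUND."""
    return C_BOUND / (E_CR / E_PL)**(n - 2)

print(f_bound(3))  # a few times 1e-7, matching the O(1e-7) estimate
print(f_bound(4))  # a few times 10, matching the O(10) estimate
```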
Equation (101) only considers the effects of Lorentz violation in the matter sector which give rise to a difference in speeds, neglecting the effect of Lorentz violation in the gravitational sector. Specifically, the analysis couples matter only to the two standard graviton polarizations. However, as we shall see in Section 7.1, consistent Lorentz violation with gravity can introduce new gravitational polarizations with different speeds. In the aether theory (see Section 4.4) there are three new modes, corresponding to the three new degrees of freedom introduced by the constrained aether vector. The corresponding Čerenkov constraint from possible emission of these new modes has recently been analyzed in [105]. Demanding that high energy cosmic rays not emit these extra modes and assuming no significant Lorentz violation for cosmic rays yields bounds on the coefficients in Equation (48). If, as the authors of [105] argue, no gravity-aether mode can be superluminal, then these bounds imply that every coefficient is generically bounded by |c_i| < 10^−15. There is, however, a special case, given by c_3 = −c_1, c_4 = 0, c_2 = c_1/(1 − 2c_1), where all the modes propagate at exactly the speed of light and hence avoid this bound.

Thresholds and DSR
Doubly special relativity modifies not only the particle dispersion relation but also the form of the energy conservation equations. The situation is therefore very different from that in EFT. The first difference between DSR and EFT is that DSR evades all of the photon decay and vacuum Čerenkov constraints that give strong limits on EFT Lorentz violation. Since there is no EFT type description of particles and fields in a DSR framework, one has no dynamics and cannot calculate reaction rates. However, one can still use the DSR conservation laws to analyze the threshold kinematics. By using the pseudo-momentum π and corresponding energy, one can show that if a reaction does not occur in ordinary Lorentz invariant physics, it does not occur in DSR [146]. Physically, this is obvious. If the vacuum Čerenkov effect for, say, electrons began to occur at some energy E_th, in a different reference frame the reaction would occur at some other energy E′_th, as the threshold energy is not an invariant. Therefore frames could be distinguished by labelling them according to the energy at which the vacuum Čerenkov effect for electrons begins to occur. This violates the equivalence of all inertial frames that is postulated in DSR theories. A signal of DSR in threshold reactions would be a shift of the threshold energies for reactions that do occur, such as the GZK reaction or γ-ray annihilation off the infrared background [21]. However, the actual shift of threshold energies due to DSR is negligible at the level of sensitivity we have with astrophysical observations [21]. Hence DSR cannot be ruled out or confirmed by any threshold type analysis we currently have. The observational signature of DSR would therefore be a possible energy dependence of the speed of light (see Section 6.2) without any appreciable change in particle thresholds [258].

Thresholds and non-systematic dispersion
Similar to DSR, the lack of dynamics in the non-systematic dispersion framework of Section 3.5 makes it more problematic to set bounds on the parameters f (n) . In [160,12,11,24], the authors assume that the net effect of spacetime foam can be derived by considering energy conservation and non-systematic dispersions at a point. There is a difficulty with this, which we shall address, but for now let us assume that this approach is correct.
As an example of the consequences of non-systematic dispersion, let us consider the analysis of the GZK reaction in [11]. The authors consider n = 3 non-systematic dispersion relations with normally distributed coefficients f^(3)_{p,π} that can take either sign and have a variance of O(1). Looking solely at the kinematical threshold condition, they find that all cosmic ray protons would undergo photo-pion production at energies above 10^15 eV. This is perhaps expected, as the energy scale at which an n = 3 term becomes important is E ≈ (m^2 E_Pl/f^(3))^{1/3} ≈ 10^15 eV for f^(3) of O(1). There is a large region of the f^(3)_{p,π} parameter space that is susceptible to the vacuum Čerenkov effect with pion emission [154], and hence a significant amount of the time the random coefficients will fall in this region of parameter space. If an ultra-high energy proton can emit a pion without scattering off of the CMBR, then certainly it can scatter as well, which implies that the GZK reaction is also accessible. This same type of argument can be rapidly extended to n = 4 dispersion, yielding a cutoff in the spectrum at 10^18 eV. The n = 4 cutoff could easily be pushed above GZK energies if the coefficients had a variance slightly less than O(1). In short, since we see high energy cosmic rays at energies of 10^20 eV, the results of [160,12,11,24] imply that we could not have n = 3 non-systematic dispersion unless the coefficients are much smaller than O(1), while for n = 4 the coefficients would only have to be an order of magnitude or two below O(1).
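The crossover scales quoted here (10^15 eV for n = 3, 10^18 eV for n = 4 with O(1) proton coefficients) follow from setting the Lorentz violating term equal to the mass term in the dispersion relation; a quick numerical check:

```python
# Energy at which an O(f) modification E^n / E_Pl^(n-2) in the dispersion
# relation becomes comparable to the mass term m^2, i.e. the solution of
# f * E^n / E_Pl^(n-2) = m^2. Used here for the proton scales quoted above.
E_PL = 1.22e28   # Planck energy in eV

def crossover_energy(m, n, f=1.0):
    """Energy (eV) where the n-type Lorentz violating term overtakes m^2."""
    return (m**2 * E_PL**(n - 2) / f)**(1.0 / n)

M_PROTON = 938.3e6   # eV
print(crossover_energy(M_PROTON, 3))  # ~2e15 eV: n = 3 cutoff scale
print(crossover_energy(M_PROTON, 4))  # ~3e18 eV: n = 4 cutoff scale
```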
We now return to a possible problem with this type of analysis, which has been raised in [43]. Performing threshold analyses on non-systematic dispersion assumes that energy-momentum conservation can be applied with a single fluctuation (i.e. the reaction effectively happens at a point). It further assumes that the matrix element is roughly unchanged. In GZK or Čerenkov reactions, however, one of the outgoing particles is much softer than the incoming particle. In this situation the interaction region is much larger than the de Broglie wavelength of the high energy incoming particle, which means that many dispersion fluctuations will occur during the interaction. The amplitude of low energy emission in regular quantum field theory changes dramatically in this situation (e.g., Bremsstrahlung with a rapidly wiggling source) as opposed to the case in which there is only one fluctuation (e.g., the Čerenkov effect). The above approach, modified conservation plus unchanged matrix element/rate when the reaction is allowed, is not correct when a low energy particle is involved. If the outgoing particle has an energy comparable to the incoming particle, then it may be possible to avoid this problem. However, in this case the reverse reaction is also kinematically possible with a different fluctuation of the same order of magnitude, so it is unclear what the net effect on the spectrum should be. Note, finally, that these arguments only concern the rate of decay; the conclusion that high energy particles would decay in this framework is unchanged.

Synchrotron radiation
Jacobson et al. [153] noticed that the synchrotron emission from the Crab nebula (and other astrophysical objects) is very sensitive to n = 3 modified dispersion relations. The logic behind the constraint in [153] is as follows. The Crab nebula emits electromagnetic radiation from radio to multi-TeV frequencies. As noted in Section 6.5.5, the spectrum is well fit over almost the entire frequency range by the synchrotron self-Compton model. If this model is correct, as seems highly probable, then the observed radiation from the Crab at 100 MeV is due to synchrotron emission from high energy electrons and/or positrons. The authors of [153], in the context of effective field theory, argued that the maximum frequency ω_c of synchrotron radiation in the presence of Lorentz violation is given by ω_c = (3/2) eB γ^3(E)/E (Equation (103)), where e is the charge, B is the magnetic field, and γ and E are the gamma factor and energy of the source particle, respectively. The derivation of Equation (103) was challenged in [78]. More detailed calculations [222,111,221] show that in the case of the Crab nebula Equation (103) is correct, although the argument of [153] does not necessarily hold in general. Assuming the source particles are electrons, if f^(3)_e < 0 then there is a maximum electron velocity, no matter the particle energy, and hence a maximum possible value for ω_c. The maximum frequency must be above 100 MeV in the Crab, which leads to a constraint of f^(3)_{e,R} > −7 × 10^−8 or f^(3)_{e,L} > −7 × 10^−8, i.e. at least one of the electron parameters must be above this value.
The analysis of [153] does not take into account the possibility that the high energy synchrotron emission could be due to positrons, which may also be generated near the pulsar. This is an important possibility, since in the EFT that gives rise to f^(3) terms for electrons, Equation (40), the positron has an opposite dispersion modification. Hence there is always some charged particle in the Crab with a dispersion modification that evades the synchrotron constraint. The possibility that there are two different populations, one of electrons and one of positrons, that contribute to the overall spectrum would be a departure from the synchrotron self-Compton model, which presupposes only one population of particles injected into the nebula. However, such a possibility cannot be ruled out without more detailed modelling of the Crab nebula and a better understanding of how the initial injection spectrum of particles from the pulsar is produced. The possible importance of positrons in the Crab implies that the synchrotron constraint is always satisfied when considered by itself. However, the synchrotron constraint can be combined with Čerenkov constraints to create a two-sided bound [155]. Essentially, the SSC model is such that whatever species of particle is producing the synchrotron spectrum must also be responsible for the inverse Compton spectrum. Hence at least one helicity of electron or positron must satisfy both the vacuum Čerenkov and synchrotron constraints. This is not automatically satisfied in EFT and constitutes a true constraint. For more discussion see [155]. The combined synchrotron, threshold, birefringence, and time of flight constraints are displayed in Figure 4 (taken from [155]). In this plot η_± is equal to f^(3) for the two electron helicities. Finally, note that if the effective field theory is CPT conserving then positrons and electrons have the same dispersion relation.
So, for n = 2 dispersion, the 100 MeV synchrotron radiation from the Crab yields a parallel constraint of roughly f^(2)_{e,e^+} > −10^−20 for at least one helicity of electron/positron.

Figure 4: Constraints on LV in QED at n = 3 on a log-log plot. For negative parameters minus the logarithm of the absolute value is plotted, and a region of width 10^−10 is excised around each axis. The constraints in solid lines apply to ξ and both η_±, and are symmetric about both the ξ and the η axis. At least one of the two pairs (η_±, ξ) must lie within the union of the dashed bell-shaped region and its reflection about the ξ axis. Intersecting lines are truncated where they cross.

Neutrinos

Neutrinos can provide excellent probes of Lorentz violation, as their mass is much smaller than that of any other known particle. To see this, consider the modified dispersion framework. For an electron with n = 3 and n = 4 dispersion, the energies at which Lorentz violation can become appreciable are 10 TeV and 10^5 TeV, respectively. However, for a neutrino with a mass even as large as 1 eV the corresponding energies are only 1 GeV for n = 3 and 1 TeV for n = 4, well within the realm of accelerator physics. The most sensitive tests of Lorentz violation in the neutrino sector come from neutrino oscillation experiments, which we now describe. For a more comprehensive overview of neutrino mixing, see for example [102,165].

Neutrino oscillations
Lorentz violating effects in the neutrino sector have been considered by many authors [86,182,181,180,83,55,18,126,191,82]. To illustrate how Lorentz violation affects neutrino propagation, we consider the simplest case, where the limiting speeds for mass eigenstates of the neutrino are different, i.e. neutrinos have dispersions E_i^2 = m_i^2 + (1 + f^(2)_i) p^2, where i denotes the energy eigenstate. In this case, the energy eigenstates are also the mass eigenstates (this is not necessarily the case with general Lorentz violation). This is a special case of the neutrino sector of the mSME if we assume that c^{00}_ν is flavor diagonal and is the only non-zero term. For relativistic neutrinos, we can expand the energy as E_i ≈ p (1 + f^(2)_i/2) + m_i^2/(2p). Now consider a neutrino produced via a particle reaction in a definite flavor eigenstate I with energy E. We denote the amplitude for this neutrino to be in a particular energy eigenstate i by the matrix U_Ii, where Σ_i U†_Ji U_Ii = δ_IJ. The amplitude for the neutrino to be observed in another flavor eigenstate J at some distance L and time T from the source is then A_IJ ≈ Σ_i U†_Ji exp[−i (m_i^2/(2E) + f^(2)_i E/2) L] U_Ii for relativistic neutrinos. If we define an "effective mass" N_i by N_i^2 = m_i^2 + f^(2)_i E^2, then the probability P_IJ = |A_IJ|^2 can be written as a sum of oscillatory terms with phases δN^2_ij L/(2E), where δN^2_ij = N_i^2 − N_j^2, and coefficients F_IJij and G_IJij that are functions of the U matrices. We can immediately see from Equation (108) that Lorentz violation can have a number of consequences for standard neutrino oscillation experiments. The first is simply that neutrino oscillation still occurs even if the mass is zero. In fact, some authors have proposed that Lorentz violation could be partly responsible for the observed oscillations [181]. Oscillations due to the type of Lorentz violation above vary as EL [181]. Current data support neutrino oscillations that vary as a function of L/E [35,125], so it seems unlikely that Lorentz violation could be the sole source of neutrino oscillations.
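The contrast between the L/E behavior of mass-driven oscillations and the EL behavior of the speed-difference term can be illustrated with a toy two-flavor sketch (hypothetical parameter values in natural units, not a fit to any data):

```python
import math

# Two-flavor toy model of neutrino oscillations with a Lorentz violating
# speed difference df between the two eigenstates (a sketch, not a data fit).
# Oscillation phase: delta = (dm2 / (2E) + df * E / 2) * L in natural units;
# the mass term scales as L/E while the Lorentz violating term scales as E*L.
def survival_probability(E, L, dm2, df, sin2_2theta=1.0):
    delta = (dm2 / (2.0 * E) + df * E / 2.0) * L
    return 1.0 - sin2_2theta * math.sin(delta / 2.0)**2

# With df = 0 the probability depends only on the ratio L/E, as current data
# support; doubling both E and L leaves it unchanged:
print(survival_probability(1.0, 2.0, dm2=1.0, df=0.0))
print(survival_probability(2.0, 4.0, dm2=1.0, df=0.0))

# With df != 0 the EL-growing term breaks that degeneracy:
print(survival_probability(1.0, 2.0, dm2=1.0, df=0.5))
print(survival_probability(2.0, 4.0, dm2=1.0, df=0.5))
```

Comparing oscillation data at fixed L/E but different absolute energies is, in this simplified picture, what separates a mass term from a speed-difference term.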
It is possible, however, that Lorentz violation may explain some of the current problems in neutrino physics by giving a contribution in addition to the mass term. For example it has been proposed in [182] that Lorentz violation might explain the LSND (Liquid Scintillator Neutrino Detector) anomaly [36] 30 , which is an excess of ν µ → ν µ events that cannot be reconciled with other neutrino experiments [129]. We note that the above model for Lorentz violating effects in neutrino oscillations is perhaps the simplest case. In the neutrino sector of the mSME there can be more complicated energy dependence, directional dependence, and new oscillations that do not occur in the standard model. For a discussion of these various possibilities see [180].
The difference in speeds between electron and muon neutrinos was bounded in [88] to be |f^(2)_{νe} − f^(2)_{νµ}| < 10^−22. Oscillation data from Super-Kamiokande have improved this bound to O(10^−24) [114]. Current neutrino oscillation experiments are projected to improve on this by three orders of magnitude, giving limits on maximal speed differences of order 10^−25 [126]. For comparison, the time of flight measurements from supernova 1987A constrain |f^(2)_{νi} − f^(2)_γ| < 10^−8 [265]. Neutrino oscillations are sensitive enough to directly probe non-renormalizable Lorentz violating terms. In [69], current neutrino oscillation experiments are shown to yield bounds on dimension five operators stringent enough that the energy scale suppressing the operator must be a few orders of magnitude above the Planck energy. Such operators are therefore very unlikely in the neutrino sector. Ultra-high energy neutrinos, when observed, will provide further information about neutrino Lorentz violation. For example, flavor oscillations of ultra-high energy neutrinos at 10^21 eV propagating over cosmic distances would be able to probe Lorentz violating dispersion suppressed by seven powers of E_Pl [83] (or more if the energies are even higher).
Additionally, neutrino Lorentz violation can modify the energy thresholds for reactions involving neutrinos, which can have consequences for the expected flux of ultra-high energy neutrinos for detectors such as ICECUBE. The expected flux of ultra-high energy neutrinos is bounded above by the Bahcall-Waxman bound [273] if the neutrinos are produced in active galactic nuclei or gamma ray bursters. It has been shown [18] that Lorentz violation can in fact raise (or lower) this bound significantly. A higher than expected ultra-high energy neutrino flux therefore could be a signal of Lorentz violation.

Neutrino Čerenkov effect
Finally, neutrinos can also undergo a vacuum Čerenkov effect. Even though a neutrino is neutral, there is a non-zero matrix element for interaction with a photon as well as a graviton. Graviton emission is very strongly suppressed and unlikely to give any useful constraints. The matrix element for photon emission, while small, is still larger than that for graviton emission, and hence the photon Čerenkov effect is more promising. The photon-neutrino matrix element can be split into two channels, a charge radius term and a magnetic moment term. The charge radius interaction is suppressed by the W mass, leading to a reaction rate too low for current neutrino observatories such as AMANDA to constrain n = 3, 4 Lorentz violation. However, the rate from the charge radius interaction scales strongly with energy, and it has been estimated [154] that atmospheric PeV neutrinos may provide good constraints on n = 3 Lorentz violation. The magnetic moment interaction has not yet been conclusively analyzed, so possible constraints from it are unknown. In Lorentz invariant physics, the magnetic moment term is suppressed by the small neutrino mass, so energy loss rates are likely small. However, it should be noted that some Lorentz violating terms in an effective field theory give rise to effective masses that scale with energy. These might be much larger than the usual neutrino mass at high energies, yielding a large neutrino magnetic moment.

Phase coherence of light
An interesting and less well known method of constraining non-systematic Lorentz violation is to look at Airy rings (interference fringes) from distant astrophysical objects. In order for an interference pattern from an astrophysical source to be observed, the photons reaching the detector must be in phase across the detector surface. However, if the dispersion relation is fluctuating, then the phase velocity v_φ = ω/k is also changing. If the fluctuations are uncorrelated, then initially in-phase collections of photons will lose phase coherence as they propagate. Uncorrelated fluctuations are reasonable, since for most of their propagation time, photons that strike different points on a telescope mirror are separated by macroscopic distances. Observation of Airy rings implies that the photons are in phase and hence limits the fluctuations in the dispersion relation [204,245,233]. The aggregate phase fluctuation is given by [233] Δφ = 2π f^(n) L_Pl^{n−2} D^{3−n}/λ, where L_Pl is the Planck length, D is the distance to the source, and λ is the wavelength of the observed light. This technique was originally applied in [204], but there the magnitude of the aggregate phase shift was overestimated. PKS1413+135, a galaxy at a distance of 1.2 Gpc, shows Airy rings at a wavelength of 1.6 µm. Demanding that the overall phase shift is less than 2π yields O(1) constraints for n = 5/2 and constraints of order 10^9 for n = 8/3. Hence this type of constraint is only able to minimally constrain Lorentz violating non-systematic models. In principle, however, the frequency of light used for the measurement can be increased, in which case this type of constraint will improve. Coule [92] has argued, though, that other effects mask the loss of phase coherence from quantum gravity, making even this approach uncertain.
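The quoted bounds can be reproduced directly from the aggregate phase formula, using the values given for PKS1413+135 (D = 1.2 Gpc, λ = 1.6 µm):

```python
# Bound on f^(n) from phase coherence of light: demanding the aggregate phase
# shift 2*pi*f * L_Pl^(n-2) * D^(3-n) / lam < 2*pi gives
# f < lam / (L_Pl^(n-2) * D^(3-n)). Values for PKS1413+135 as quoted above.
L_PL = 1.616e-35        # Planck length in meters
D = 1.2e9 * 3.086e16    # 1.2 Gpc in meters
LAM = 1.6e-6            # observation wavelength in meters

def coherence_bound(n):
    """Largest f^(n) consistent with observed Airy rings."""
    return LAM / (L_PL**(n - 2) * D**(3 - n))

print(coherence_bound(5 / 2))  # O(0.1): marginally constrains n = 5/2
print(coherence_bound(8 / 3))  # O(1e9): no useful constraint for n = 8/3
```

The rapid weakening between n = 5/2 and n = 8/3 reflects the extra factor of L_Pl^{1/6} D^{−1/6}, which is tiny for any astrophysical distance.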

Gravitational Observations
So far we have restricted ourselves to tests of Lorentz violation involving matter fields. It is also possible that Lorentz violation manifests itself in the gravitational sector. There are three obvious areas where the consequences of such Lorentz violation might show up: gravitational waves, cosmology, and post-Newtonian corrections to weak field general relativity.

Gravitational waves
In the presence of dynamical Lorentz violation, where the entire action is diffeomorphism invariant, one generically expects new gravitational wave polarizations (see Footnote 31). The reason is simple. Any dynamical Lorentz violating tensor field must have kinetic terms involving derivatives of the form ∇_µ U_{αβ…}, where U_{αβ…} is the Lorentz violating tensor. Furthermore, U must take a non-zero vacuum expectation value if it violates Lorentz invariance. At linear order in the perturbations h_{αβ}, u_{αβ…} (where g_{αβ} = η_{αβ} + h_{αβ} and U_{αβ…} = Ū_{αβ…} + u_{αβ…}), the connection terms in the covariant derivative are also first order, for example ∂_α h_{βγ} Ū^{βδ…}. Upon varying the linearized metric, these terms contribute to the graviton equations of motion. The extra terms in the graviton equations give rise to new solutions. Since the potential that forces U to take a non-zero vacuum expectation value must involve the metric, variations in U are usually coupled to metric variations, implying that the new graviton modes mix with excitations of the Lorentz violating tensor fields.
There is a large literature on gravitational wave polarizations in theories of gravity other than general relativity. For a thorough discussion, see [277] and references therein. Many of the models with preferred frame effects are similar to the types of theories that give rise to dynamical Lorentz violation. For example, the vector-tensor theories of Will, Hellings, and Nordtvedt [278,237,145] have many similarities to the aether theory of Section 4.4. The aether model's wave spectrum has been calculated in [158,136], and limits from the absence of Čerenkov emission of these modes by cosmic rays have been studied in [105] (see Section 6.5.7). Other consequences of dynamical Lorentz violation in Riemann-Cartan spacetimes have been examined in [57].
Unfortunately, few constraints currently exist on dynamical Lorentz violation from gravitational wave observations, as the wave spectrum is only part of the story. Currently, the expected rate of production of these modes from astrophysical sources as a function of the coefficients in the Lagrangian is unknown. However, both the energy loss from inspiral systems due to gravitational radiation and gravitational wave observatories such as LIGO and LISA should produce strict bounds on the possibility of dynamical Lorentz violating fields (see Footnote 32). We note that aether type theories seem to be free of certain obvious problems such as a van Dam-Veltman-Zakharov type discontinuity [136]. The theories can therefore be made arbitrarily close to GR by tuning the coefficients to be near zero.

Cosmology
Cosmology also provides a way to test Lorentz violation. The most obvious connection is via inflation. If the number of e-foldings of inflation is high enough, then the density fluctuations responsible for the observed cosmic microwave background (CMB) spectrum have a size shorter than the Planck scale before inflation. It might therefore be possible for trans-Planckian physics/quantum gravity to influence the currently observed CMB spectrum. If Lorentz violation is present at or near the Planck scale (as is implicit in models that use a modified dispersion relation at high energies [213]), then the microwave background may still carry an imprint (see Footnote 33). A number of authors have addressed the possible signatures of trans-Planckian physics in the CMB (for a sampling see [94,212,214,262,101,164,66,236] and references therein). While the possibility of such constraints is obviously appealing, the CMB imprint (if any) of trans-Planckian physics, much less Lorentz violation, is model dependent and currently the subject of much debate (see Footnote 34). In short, although such cosmological explorations are interesting and may provide an eventual method for ultra-high energy tests of Lorentz invariance, for the purposes of this review we forego any further discussion of this approach.

Footnote 31: There are exceptions; for example see [151]. Here gravity is modified by a Chern-Simons form, yet there are still only two gravitational wave polarizations. The only modification is that the intensity of the polarizations differs from what would be expected in general relativity.
Footnote 32: It has also been proposed that laser interferometry may eventually be capable of direct tests of Planck suppressed Lorentz violating dispersion [22].
A simple low energy method to limit the coefficients in the aether model (47), one less fraught with ambiguities, has been explored by Carroll and Lim [76]. They consider a simplified version of the model (47) without the c_4 term and choose the potential V(u^α u_α) to be of the form λ(u^α u_α − a^2), where λ is a Lagrange multiplier. Without loss of generality, we can rescale the coefficients c_i in Equation (47) to set a^2 = 1. In the Newtonian limit Carroll and Lim find that Newton's constant as measured on earth is rescaled to be

G_N = 2G / (2 − c_1).

In comparison, the effective cosmological Newton's constant is calculated to be

G_cosmo^obs = 2G / (2 − (c_1 + 3c_2 + c_3)).
The difference between the cosmological and Newtonian regimes implies that we have to adjust our measured Newton's constant before we insert it into the cosmological evolution equations. Such an adjustment modifies the rate of expansion. A change in the expansion rate alters big bang nucleosynthesis and changes the ratio of the primordial abundance of 4He to H. By comparing this effect with observed nucleosynthesis limits, Carroll and Lim are able to constrain the size of c_1, c_2, and c_3. In addition to the nucleosynthesis constraint, the authors impose restrictions on the choice of coefficients such that in the preferred frame characterized by ū^α the perturbations δu^α have a positive definite Hamiltonian, are non-tachyonic, and propagate subluminally. With these assumptions Carroll and Lim find the following constraint:

0 < 14c_1 + 21c_2 + 7c_3 + 7c_4 < 2,

where the c_4 dependence has been included for completeness.
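As a numerical illustration of how the mismatch between the two Newton's constants feeds into the expansion rate (and hence into nucleosynthesis), here is a small sketch using the rescaled constants of the constrained aether model; the coefficient values are arbitrary small numbers chosen only for illustration:

```python
import math

def g_newton(G, c1):
    # Locally measured Newton's constant in the constrained aether model
    # (c4 has been set to zero, as in the simplified model)
    return 2 * G / (2 - c1)

def g_cosmo(G, c1, c2, c3):
    # Effective Newton's constant appearing in the cosmological equations
    return 2 * G / (2 - (c1 + 3 * c2 + c3))

# Arbitrary small coefficients, for illustration only:
G, c1, c2, c3 = 1.0, 0.01, 0.005, -0.002

ratio = g_cosmo(G, c1, c2, c3) / g_newton(G, c1)
# The Friedmann equation gives H ~ sqrt(G), so the fractional shift in
# the expansion rate during nucleosynthesis is:
dH_over_H = math.sqrt(ratio) - 1
print(f"G_cosmo/G_N = {ratio:.4f}, fractional shift in H: {dH_over_H:.2%}")
```

Inserting the locally measured constant into the Friedmann equation when the effective cosmological one differs shifts the expansion rate during nucleosynthesis, which is exactly what the observed 4He abundance bounds.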

Two further remarks on cosmological tests are in order. First, the B-mode polarization of the CMB might also carry an imprint of Lorentz violation due to modifications in the gravitational sector [206]. Second, the trans-Planckian approach above presumes inflation and speculates about the low energy signature of Lorentz violating physics. Lorentz violation can also be a component in the so-called variable speed of light (VSL) cosmologies (for a review see [209]), which are a possible alternative to inflation. Some bounds on VSL theories are known from Lorentz symmetry tests, but in these cases the VSL model can be equivalently expressed in one of the frameworks of this review.

PPN parameters
Preferred frame effects, as might be expected from Lorentz violating theories, are nicely summarized in the parameterized post-Newtonian formalism, otherwise known as PPN (for a description, see [277] or [276]). The simplest setting in which the PPN parameters might differ from those of GR is the static, spherically symmetric case. For static, spherically symmetric solutions in vector-tensor models the only PPN parameters that do not vanish are the Eddington-Robertson-Schiff (ERS) parameters γ and β. For GR, β = γ = 1. The ERS parameters for the general Hellings-Nordtvedt vector-tensor theory [145] are not necessarily unity [276], so one might expect that the constrained aether model also has non-trivial ERS parameters. However, it turns out that the constrained aether model with the Lagrange multiplier potential also has β = γ = 1 for generic choices of the coefficients [103]. Therefore, at this point there is no method by which the ERS parameters can be used to constrain Lorentz violating theories. The ERS parameters for more complicated theories with higher rank Lorentz violating tensors are largely unknown.
The observational limit on the preferred frame PPN parameter α_2 is |α_2| < 4 × 10^−7 [277]. Barring cancellations, this translates to a very strong bound, of order 10^−7, on the coefficients c_i in the aether action.

Conclusions and Prospects
As we have seen, over the last decade or two a tremendous amount of progress has been made in tests of Lorentz invariance. Currently, we have no experimental evidence that Lorentz symmetry is not an exact symmetry in nature. The only experiments not yet fully understood in which Lorentz violation might play a role are the (possible) absence of the GZK cutoff and the LSND anomaly. New experiments such as AUGER, a cosmic ray telescope, and MiniBooNE [112], a neutrino oscillation experiment specifically designed to test the LSND result, may resolve the experimental status of both systems and allow us to determine whether Lorentz violation plays a role.
Terrestrial experiments will continue to improve. Cold anti-hydrogen can now be produced in sufficient quantities [117,27] for hydrogen/anti-hydrogen spectroscopy to be performed. The frequencies of various atomic transitions (1S-2S, 2S-nd, etc.) can be determined with enough precision to improve bounds on various mSME parameters [61,256]. Spectroscopy of hydrogen-deuterium molecules might lead to limits on electron mSME parameters an order of magnitude better than current cavity experiments [228].
There are proposals for space based experiments (cf. [59,194]) that will extend current constraints from terrestrial experiments. Space based experiments are ideal for testing Lorentz violation. They can be better isolated from contaminating effects such as seismic noise. In a microgravity environment, interferometers can run for much longer periods of time, since the cooled atoms in the system do not fall out of the interferometer. Furthermore, sidereal variation experiments look for time dependent effects due to rotation; in space the rate of rotation can be better controlled, which allows the frequency of any possible time dependent signal to be tuned to achieve the best signal-to-noise ratio. Space based experiments also allow cavity and atomic clock comparison measurements to be combined with time dilation experiments (as proposed in OPTIS [194]), thereby testing all the fundamental assumptions of special relativity. The estimated level of improvement from a space based mission such as OPTIS over the corresponding terrestrial experiments is a few orders of magnitude.
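The advantage of a controllable rotation rate can be illustrated with a toy lock-in analysis: a small anisotropy signal modulated at a known rotation frequency is extracted from white noise by demodulation, and the estimate improves as the integration time grows. All amplitudes and frequencies below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def lock_in_amplitude(omega, t, data):
    # Lock-in style demodulation: project the time series onto the
    # quadratures at the (known, controlled) rotation frequency.
    c = 2 * np.mean(data * np.cos(omega * t))
    s = 2 * np.mean(data * np.sin(omega * t))
    return np.hypot(c, s)

# Hypothetical numbers, chosen only for illustration:
omega = 2 * np.pi * 0.1   # controlled rotation frequency (rad/s)
amp_lv = 1e-2             # assumed Lorentz violating signal amplitude
noise = 1.0               # white noise level per sample

for T in (1e3, 1e5):      # integration times in seconds
    t = np.arange(0, T, 0.5)
    data = amp_lv * np.cos(omega * t) + noise * rng.normal(size=t.size)
    est = lock_in_amplitude(omega, t, data)
    print(f"T = {T:.0e} s: estimated amplitude {est:.1e}")
```

With the short run the estimate is noise dominated; with the longer run it approaches the injected amplitude. On the ground the modulation frequency is fixed by the Earth's rotation, while in orbit it can be moved away from noisy parts of the spectrum.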
Another possibility for seeing a novel signal of Lorentz violation is GLAST [260], a gamma ray telescope that is very sensitive to extremely high energy GRBs. As we have mentioned, DSR evades almost all known high energy tests of Lorentz invariance. If the theoretical issues are straightened out and DSR does eventually predict a time of flight effect, then GLAST may be able to see it for some burst events. An unambiguous frequency to time-of-arrival correlation linearly suppressed in the Planck energy, coupled with the observed lack of birefringence at the same order, would be a smoking gun for DSR, as other constraints forbid such a construction in effective field theory [230].
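For a sense of scale, a linearly Planck suppressed velocity difference accumulates to a measurable time delay over cosmological distances. The sketch below uses an illustrative burst distance and photon energy, and ignores redshift corrections:

```python
# Order-of-magnitude estimate of an energy dependent time-of-flight
# delay for a linear, Planck suppressed modification of the photon
# dispersion relation, v(E) ~ c * (1 - eta * E / E_planck).
E_PLANCK_GEV = 1.22e19
GPC_IN_M = 3.086e25
C = 2.998e8  # speed of light, m/s

def time_delay(e_gev, dist_gpc, eta=1.0):
    # Delay relative to a low energy photon emitted simultaneously
    # (cosmological redshift effects neglected for simplicity).
    return eta * (e_gev / E_PLANCK_GEV) * (dist_gpc * GPC_IN_M / C)

dt = time_delay(e_gev=10.0, dist_gpc=1.0)
print(f"~{dt * 1e3:.0f} ms delay for a 10 GeV photon from 1 Gpc")
```

A delay of tens of milliseconds is within reach of burst timing, which is why GRB observations can probe an O(1) coefficient at linear order in the Planck energy.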
The question that must be asked at this juncture regarding Lorentz invariance is: When have we tested enough? We currently have bounds on Lorentz violation strong enough that there is no easy way to put Lorentz violating operators of dimension ≤ 6 coming solely from Planck scale physics into our field theories. It therefore seems hard to believe that Lorentz invariance could be violated in a simple way. If we are fortunate, the strong constraints we currently have will force us to restrict the classes of quantum gravity theories/spacetime models we should consider. Without a positive signal of Lorentz violation, this is all that can reasonably be hoped for.
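The statement about dimension ≤ 6 operators can be unpacked with naive dimensional analysis: an operator of mass dimension n suppressed by the Planck scale gives a fractional correction to the dispersion relation of order (E/M_Pl)^(n−4). A quick sketch, with energies chosen purely for illustration:

```python
# Naive size of the fractional modification to a particle's dispersion
# relation from a mass dimension n Lorentz violating operator with an
# O(1) coefficient, suppressed by the Planck mass:
#   delta(E) ~ (E / M_planck)**(n - 4)
M_PLANCK_GEV = 1.22e19

def fractional_modification(e_gev, dim):
    return (e_gev / M_PLANCK_GEV) ** (dim - 4)

for e_gev, label in ((1.0, "1 GeV lab"), (1e10, "10^19 eV cosmic ray")):
    for dim in (5, 6):
        print(f"{label}, dim {dim}: ~{fractional_modification(e_gev, dim):.1e}")
```

This is why laboratory energies alone cannot directly probe Planck suppressed dimension 6 operators, while ultra-high energy astrophysical particles begin to reach them.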

Acknowledgements
I would like to thank Steve Carlip, Ted Jacobson, Stefano Liberati, Sayandeb Basu, and Damien Martin for helpful comments on early drafts of this paper. As well, I would like to thank Bob McElrath and Nemanja Kaloper for useful discussions. This work was funded under DOE grant DE-FG02-91ER40674.