A note on "symmetric" vielbeins in bimetric, massive, perturbative and non perturbative gravities

We consider a manifold endowed with two different vielbeins $E^{A}{}_{\mu}$ and $L^{A}{}_{\mu}$ corresponding to two different metrics $g_{\mu\nu}$ and $f_{\mu\nu}$. Such a situation arises generically in bimetric or massive gravity (including the recently discussed version of de Rham, Gabadadze and Tolley), as well as in perturbative quantum gravity where one vielbein parametrizes the background space-time and the other the dynamical degrees of freedom. We determine the conditions under which the relation $g^{\mu\nu} E^{A}{}_{\mu} L^{B}{}_{\nu} = g^{\mu\nu} E^{B}{}_{\mu} L^{A}{}_{\nu}$ can be imposed (or the "Deser-van Nieuwenhuizen" gauge chosen). We clarify and correct various statements which have been made about this issue.


Introduction
There are various situations in physics where one has to consider a manifold endowed with two different vielbein fields. This is obviously the case in bimetric theories, i.e. theories where two different metrics are defined on the same space-time manifold [1]. Each of these metrics can then be described by a different vielbein. This remains true even if one of the two metrics is not dynamical. It also applies to non-linear massive gravity (for recent reviews see [2,3]), which is nothing but a special class of bigravity, and in particular to the recently introduced massive gravity theories of de Rham-Gabadadze-Tolley (dRGT in the following) [4,5,6], as well as to their extension to the dynamical bimetric case [7,8]. A similar situation also occurs when one expands General Relativity around a fixed background metric and expresses both the background and the dynamical metric in terms of vielbeins. This is the starting point of many works dealing with quantum gravity (see e.g. [9,10]).
Considering such situations, let us define, in arbitrary $D$ dimensions, $E^A$ and $L^A$ to be two bases of 1-forms obeying at every space-time point⁵

$$\eta_{AB}\, E^A{}_\mu\, E^B{}_\nu = g_{\mu\nu}, \tag{2}$$

$$\eta_{AB}\, L^A{}_\mu\, L^B{}_\nu = f_{\mu\nu}, \tag{3}$$

or equivalently

$$g = \eta_{AB}\, E^A \otimes E^B, \qquad f = \eta_{AB}\, L^A \otimes L^B,$$

where $g_{\mu\nu}$ and $f_{\mu\nu}$ are respectively the metrics associated with the vielbeins. We will also need the vectors $e_A$ and $\ell_A$, respectively dual to the 1-forms $E^A$ and $L^A$, which verify

$$E^A{}_\mu\, e_B{}^\mu = \delta^A_B, \qquad L^A{}_\mu\, \ell_B{}^\mu = \delta^A_B.$$

For future use, let us rewrite the above relations (and consequences thereof) using matrix notations. We have

$$L\,\ell = \ell\, L = 1_D, \tag{6}$$

$$f = L^t\, \eta\, L, \tag{7}$$

$$f^{-1} = \ell\, \eta\, \ell^t, \tag{8}$$

where $L$ denotes the matrix of components $L^A{}_\mu$ and $\ell$ that of $\ell_A{}^\mu$, $1_D$ is the $D \times D$ identity matrix, $m^t$ denotes the matrix transpose of the matrix $m$, and $\eta$ is just $\mathrm{diag}(-1, 1, \cdots, 1)$; the same relations hold between $E$, $e$ and $g$ respectively. The defining relations (2) and (3) imply the gauge symmetry

$$E^A{}_\mu \to \Lambda^A{}_B\, E^B{}_\mu, \tag{9}$$

$$L^A{}_\mu \to \tilde\Lambda^A{}_B\, L^B{}_\mu, \tag{10}$$

with $\Lambda^A{}_C$ and $\tilde\Lambda^B{}_D$ Lorentz matrices. It is often convenient to ask for a "symmetry" condition on the vielbeins which reads

$$e_A{}^\mu\, L_{B\mu} = e_B{}^\mu\, L_{A\mu}. \tag{11}$$

(⁵ Our convention is that Greek letters denote space-time indices, while capital Latin letters denote Lorentz indices, which are moved up and down with the canonical Minkowski metric $\eta_{AB}$.)
Notice that this condition can also be written as $g^{\mu\nu} E^A{}_\mu L^B{}_\nu = g^{\mu\nu} E^B{}_\mu L^A{}_\nu$, and that Ref. [11] uses an equivalent form which reads $E^A{}_\mu L_{A\nu} = E^A{}_\nu L_{A\mu}$. In the recent discussions about massive gravity, such a condition has been used to ensure the existence of, and to express, the matrix square root of $g^{-1}f$ which enters in a crucial way in the definition of dRGT theory (see e.g. [12,13]). Indeed, whenever condition (11) holds, $\gamma$ defined as

$$\gamma^\mu{}_\nu = e_A{}^\mu\, L^A{}_\nu$$

verifies the defining equation of the matrix square root of $g^{-1}f$, given by

$$\gamma^\mu{}_\rho\, \gamma^\rho{}_\nu = g^{\mu\rho}\, f_{\rho\nu}.$$

It has also been argued by Hinterbichler and Rosen [14] that, in the vielbein reformulation of dRGT theories, condition (11) is obtained as a consequence of the field equations. To prove this, they use a decomposition of an arbitrary matrix $M$ (representing some unconstrained arbitrary vielbein multiplied by $\eta$) as

$$M = \lambda\, s, \tag{14}$$

where $\lambda$ is a Lorentz matrix and $s$ is a symmetric matrix. This is reminiscent of the so-called polar decomposition, which states that an arbitrary invertible matrix can be written as the product of an orthogonal matrix with a symmetric matrix. However, we will show that such a decomposition does not hold in general if one replaces the orthogonal matrix by a Lorentz transformation. This makes in particular the argument of Ref. [14] incomplete. Furthermore, in massive gravity as well as in perturbative quantum gravity, condition (11) has been used as a gauge condition. In the quantum gravity context, this gauge (sometimes dubbed the Deser-van Nieuwenhuizen gauge in reference to [9]) was first introduced via a gauge-fixing term in the action and dealt with perturbatively [9,10]. It was then later argued that this gauge can be set "non perturbatively", i.e.
that given a set of arbitrary vielbeins $E^A$ and $L^A$ which do not fulfill condition (11), one can always Lorentz rotate them as in (9) and (10) to define a new set of vielbeins obeying this condition [11] (with the consequence that the corresponding gauge would not suffer from Gribov-like ambiguities). Interestingly enough, the same statements have also been made in the context of massive gravity. Indeed, there as well the condition (11) has been used "perturbatively" (i.e. in the case when the two metrics $g_{\mu\nu}$ and $f_{\mu\nu}$ are close to one another, see e.g. [12]), but it has also been argued that condition (11) can be reached as a (Lorentz) gauge choice for arbitrary metrics [13]. This contradicts various other statements made in the literature, for example in Ref. [10], where it is stated that the gauge (11) cannot be set beyond perturbation theory. Settling this contradiction, as we intend to do here, will also illuminate the issues discussed in the previous paragraph, since, as we will show, setting (11) via suitable Lorentz rotations of the vielbeins involves a decomposition similar to (14).
To be precise, the purpose of this note is to determine when and how the condition (11) can be enforced, as well as when the decomposition (14) holds. These questions, beyond their mathematical interest, are especially important for massive gravity. Indeed, one can argue that the vielbein formulation of dRGT theories has several advantages over their metric formulations. First of all, it allows a simple extraction of what plays the role of the Hamiltonian constraint [14]. Second, in some cases it also allows one to derive dynamically the existence of the square root of $g^{-1}f$, which has to be assumed or enforced by Lagrange multipliers in the metric formulation [14,15]. Finally, the frame formulation permits a simple discussion of the constraints and the counting of dynamical degrees of freedom in the Lagrangian framework [15]. In this formulation, relation (11) plays a key role, and it is important to know whether it can be obtained by Lorentz gauge transformations, or whether it needs to be imposed as an additional constraint. This paper is organized as follows. In the next section, we will discuss necessary and sufficient conditions for (11) and (14) to hold. Then, in section 3, using results on matrix square roots, we will spell out sufficient conditions to achieve (11) and (14). In the last sections, and before concluding, we will discuss the specific cases of D = 2, D = 4 and D = 3 space-time dimensions, and in particular the cases which cannot be handled via the results of section 3.
Before proceeding, let us mention a special choice for one of the metrics (say $f_{\mu\nu}$) and the associated vielbein $L^A$. This choice is made in some contexts (e.g. dRGT theories, but also perturbative quantum gravity). It amounts to first assuming that the metric $f_{\mu\nu}$ is flat and takes the canonical form $\eta_{\mu\nu}$, i.e.

$$f_{\mu\nu} = \eta_{\mu\nu}, \tag{15}$$
and then choosing $L^A = dx^A$, i.e. such that (in components)

$$L^A{}_\mu = \delta^A_\mu. \tag{16}$$

When the choice (15)-(16) is made, the constraint (11) simply reads (labelling here space-time indices and Lorentz indices with the same set of letters)

$$e_{A\mu} = e_{\mu A},$$

stating that the vielbein $e_{A\mu}$ can be represented as a symmetric matrix. This choice will not be used to derive the results of this paper, but will just sometimes be considered as an example.
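As a small numerical illustration of this special choice (a NumPy sketch; the particular symmetric matrix below is an arbitrary assumption of this sketch, not taken from the references), one can check that when $e_{A\mu}$ is symmetric, the matrix $\gamma^\mu{}_\nu = e_A{}^\mu \delta^A{}_\nu$ squares to $g^{-1}f$ with $f = \eta$, and that this fails once the symmetry is broken:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# hypothetical symmetric "lowered" vielbein e_{A mu} (illustrative choice)
S = rng.normal(size=(D, D))
S = S + S.T + 5.0 * np.eye(D)     # symmetric, well away from degeneracy
e = S @ eta                       # e_A^mu, last index raised with eta
g_inv = e.T @ eta @ e             # g^{mu nu} = eta^{AB} e_A^mu e_B^nu
gamma = e.T                       # gamma^mu_nu = e_A^mu delta^A_nu, since L^A_mu = delta^A_mu

# with f = eta, gamma is a square root of g^{-1} f
assert np.allclose(gamma @ gamma, g_inv @ eta)

# breaking the symmetry of e_{A mu} destroys this property
S_bad = S.copy()
S_bad[0, 1] += 1.0                # no longer symmetric
e_bad = S_bad @ eta
g_inv_bad = e_bad.T @ eta @ e_bad
assert not np.allclose(e_bad.T @ e_bad.T, g_inv_bad @ eta)
```

The check works because $\gamma^2 = g^{-1}\eta$ is algebraically equivalent, for invertible $e$, to the symmetry of the lowered vielbein.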

Necessary and sufficient conditions
Let us first try to set the constraint (11) by using the freedom to Lorentz rotate independently the two sets of vielbeins $L^A$ and $e_A$. Considering two arbitrarily chosen vielbeins $e_A$ and $L^B$, assume that there exist two Lorentz transformations $\Lambda^A{}_B$ and $\tilde\Lambda^A{}_B$ such that the matrix $S_{AB}$ defined by

$$S_{AB} = \Lambda_A{}^C\, \tilde\Lambda_B{}^D\, e_C{}^\mu\, L_{D\mu} \tag{18}$$

is symmetric. Defining $M$ as the matrix of components $M_{AB}$ given by

$$M_{AB} = e_A{}^\mu\, L_{B\mu} \tag{19}$$

(note that this definition implies that $M$ is invertible), the above equality (18) reads in matricial notations

$$S = \Lambda\, M\, \tilde\Lambda^t.$$

Multiplying it on the right by $(\tilde\Lambda^t)^{-1}$ and on the left by $\Lambda^{-1}$ we get

$$M = \Lambda^{-1}\, S\, (\tilde\Lambda^t)^{-1} = \left(\Lambda^{-1}\tilde\Lambda\right)\left(\tilde\Lambda^{-1}\, S\, (\tilde\Lambda^{-1})^t\right).$$

For $S$ to be symmetric, the matrix in the last parenthesis above should be symmetric; call it $s$. Defining the Lorentz transformation $\lambda$ by $\lambda = \Lambda^{-1}\tilde\Lambda$, we get that the invertible matrix $M$ should be written as in Eq. (14). Being a Lorentz transformation, $\lambda$ verifies

$$\lambda^t\, \eta\, \lambda = \eta. \tag{22}$$

As we already stated, a decomposition such as in Eq. (14) does not hold in general (in contrast to the polar decomposition). Indeed, rewriting (14) as $\lambda = M s^{-1}$ and inserting this into (22), we get after some trivial manipulations that $M$ and $s$ should fulfill the necessary condition

$$\eta\, M^t\, \eta\, M = (\eta\, s)^2.$$

Running the above argument backward, it is easy to see that this condition is also sufficient (just because the matrix defined as $M s^{-1}$ will then be a Lorentz transformation). Hence we have proven the following proposition.
Proposition 1. An arbitrary invertible matrix $M$ can be decomposed as $M = \lambda s$, $\lambda$ being the matrix of a Lorentz transformation and $s$ a symmetric matrix, if and only if (i) the real matrix $\eta M^t \eta M$ has a real square root, and (ii) at least one such square root can be written as the product of $\eta$ with a symmetric matrix.
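Proposition 1 can be exercised numerically on a matrix that is decomposable by construction (a NumPy sketch; the boost parameter and the symmetric matrix are arbitrary illustrative choices):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# a boost in the (0,1) plane: lam.T @ eta @ lam == eta
psi = 0.7
lam = np.eye(4)
lam[0, 0] = lam[1, 1] = np.cosh(psi)
lam[0, 1] = lam[1, 0] = np.sinh(psi)

# an arbitrary symmetric invertible s
s = np.diag([2.0, 1.0, 3.0, 1.0])
s[0, 1] = s[1, 0] = 0.5

M = lam @ s
# condition of Proposition 1: eta M^t eta M has the real square root eta s,
# which is eta times a symmetric matrix
assert np.allclose(eta @ M.T @ eta @ M, (eta @ s) @ (eta @ s))
# and conversely M s^{-1} is recovered as a Lorentz matrix
lam_rec = M @ np.linalg.inv(s)
assert np.allclose(lam_rec.T @ eta @ lam_rec, eta)
```

The counterexamples discussed below show that for a generic $M$ no such pair $(\lambda, s)$ needs to exist.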
In particular, when $M$ is given by (19), we have (using relations (6)-(8) as well as definition (19))

$$\eta\, M^t\, \eta\, M = L\, (g^{-1} f)\, \ell. \tag{24}$$

So if $g^{-1}f$ has a square root $\gamma$, then (i) above holds: a square root of $\eta M^t \eta M$ is then given by $L\, \gamma\, \ell$. We then prove the following proposition.

Proposition 2. Given two metrics $g_{\mu\nu}$ and $f_{\mu\nu}$, $g^{-1}f$ has a square root $\gamma$ such that $\gamma = f^{-1} s$, with $s$ a symmetric matrix, if and only if the matrix $M$ defined by (19) (and which verifies relation (24)) is such that the real matrix $\eta M^t \eta M$ has a real square root which can be written as the product of $\eta$ by a symmetric matrix.
Proof. We first assume that $g^{-1}f$ can be written as $g^{-1}f = (f^{-1}s)^2$ with $s$ a symmetric matrix. Then, using this hypothesis in (24), we get

$$\eta\, M^t\, \eta\, M = L\,(f^{-1}s)(f^{-1}s)\,\ell = \left(L\, f^{-1} s\, \ell\right)^2 = \left(\eta\, \ell^t\, s\, \ell\right)^2,$$

where we used $L f^{-1} = \eta\, \ell^t$. The matrix $\ell^t s\, \ell$ being symmetric, this proves one side of the equivalence. Conversely, we assume that there exists a symmetric matrix $s'$ such that $\eta M^t \eta M = (\eta s')^2$. Then $g^{-1}f$ is given by

$$g^{-1} f = \ell\, (\eta s')^2\, L = \left(\ell\, \eta\, s'\, L\right)^2, \qquad \ell\, \eta\, s'\, L = f^{-1}\left(L^t s' L\right).$$

Noticing that the matrix $L^t s' L$ is symmetric ends the proof.
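The relation (24) between $\eta M^t \eta M$ and $g^{-1}f$ used in the proof can be cross-checked numerically; the index conventions below (which matrix slot carries the Lorentz index) are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def metric_and_vielbein(h):
    """Return a metric m = eta + h and a vielbein V with V.T @ eta @ V == m."""
    m = eta + h
    d, Q = np.linalg.eigh(m)            # ascending order; d[0] < 0 for small h
    V = np.diag(np.sqrt(np.abs(d))) @ Q.T
    assert np.allclose(V.T @ eta @ V, m)
    return m, V

h1 = rng.normal(size=(D, D)) * 0.1
g, E = metric_and_vielbein(h1 + h1.T)
h2 = rng.normal(size=(D, D)) * 0.1
f, L = metric_and_vielbein(h2 + h2.T)

e = np.linalg.inv(E)                    # dual vielbein e_A^mu (columns labelled by A)
M = ((eta @ L) @ e).T                   # M_AB = e_A^mu L_{B mu}
lhs = eta @ M.T @ eta @ M
rhs = L @ np.linalg.inv(g) @ f @ np.linalg.inv(L)
assert np.allclose(lhs, rhs)            # eta M^t eta M is similar to g^{-1} f
```

In particular, $\eta M^t \eta M$ and $g^{-1}f$ share the same eigenvalues, which is what makes the square-root criteria below transferable between the two matrices.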
Hence, gathering the above results, we have proven the following statement.
Proposition 3. There exist vielbeins $e_A{}^\mu$ and $L^B{}_\nu$ corresponding to the metrics $g_{\mu\nu}$ and $f_{\mu\nu}$ respectively (i.e. $\eta^{AB} e_A{}^\mu e_B{}^\nu = g^{\mu\nu}$ and $\eta_{AB} L^A{}_\mu L^B{}_\nu = f_{\mu\nu}$) such that $e_A{}^\mu L_{B\mu} = e_B{}^\mu L_{A\mu}$, if and only if $g^{-1}f$ has a real square root $\gamma$ such that $f\gamma$ is symmetric.

Direct proof. Suppose first that we have vielbeins $e_A$ and $L^B$ satisfying the above symmetry property. Then

$$(f\gamma)_{\mu\nu} = f_{\mu\rho}\, e_A{}^\rho\, L^A{}_\nu = L^B{}_\mu \left(L_{B\rho}\, e_A{}^\rho\right) L^A{}_\nu,$$

which shows that the matrix $f\gamma$ is symmetric. Notice that this is equivalent to $\gamma f^{-1}$ being symmetric. Conversely, suppose we have a real matrix $\gamma$ such that $\gamma^2 = g^{-1}f$ and $f\gamma$ is symmetric. We start by choosing an arbitrary vielbein $L^A$ for the metric $f_{\mu\nu}$, i.e. such that $\eta_{AB} L^A{}_\mu L^B{}_\nu = f_{\mu\nu}$.
We then define $e_A{}^\mu = \gamma^\mu{}_\nu\, \ell_A{}^\nu$. But the symmetry of $\gamma f^{-1}$ (equivalent to that of $f\gamma$) gives

$$\eta^{AB}\, e_A{}^\mu\, e_B{}^\nu = \gamma^\mu{}_\rho\, f^{\rho\sigma}\, \gamma^\nu{}_\sigma = \left(\gamma\, f^{-1}\, \gamma^t\right)^{\mu\nu} = \left(\gamma^2\, f^{-1}\right)^{\mu\nu} = g^{\mu\nu},$$

and $e_A$ is a well-defined vielbein for the metric $g_{\mu\nu}$. Notice that this definition tells us $\gamma^\mu{}_\nu = e_A{}^\mu L^A{}_\nu$. It remains to be shown that these vielbeins have the required symmetry property. We start from the symmetry of $f\gamma$, which, writing $(f\gamma)_{\mu\nu} = L^B{}_\mu\, (L_{B\rho}\, e_A{}^\rho)\, L^A{}_\nu$, we can rewrite as

$$L^B{}_\mu\, \left(L_{B\rho}\, e_A{}^\rho\right) L^A{}_\nu = L^B{}_\nu\, \left(L_{B\rho}\, e_A{}^\rho\right) L^A{}_\mu.$$

Multiplying by $\ell_D{}^\mu\, \ell_E{}^\nu$ we get $e_E{}^\rho L_{D\rho} = e_D{}^\rho L_{E\rho}$, and this completes the proof.
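The constructive direction of this proof can be exercised numerically (a sketch; the two metrics are arbitrary small perturbations of $\eta$, chosen so that $g^{-1}f$ has positive eigenvalues and a real square root certainly exists):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

h = rng.normal(size=(D, D)) * 0.05
h = h + h.T
g, f = eta, eta + h                            # two close metrics

# a real square root of g^{-1} f via its eigendecomposition
A = np.linalg.inv(g) @ f
w, V = np.linalg.eig(A)
gamma = (V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)).real
assert np.allclose(gamma @ gamma, A)
assert np.allclose(f @ gamma, (f @ gamma).T)   # f gamma is symmetric

# an arbitrary vielbein L for f, and the e_A^mu = gamma^mu_nu ell_A^nu of the proof
d, Q = np.linalg.eigh(f)
L = np.diag(np.sqrt(np.abs(d))) @ Q.T          # L.T @ eta @ L == f
e = gamma @ np.linalg.inv(L)                   # e_A^mu, columns labelled by A

# e is a vielbein for g ...
assert np.allclose(e @ eta @ e.T, np.linalg.inv(g))
# ... and the symmetry condition e_A^mu L_{B mu} = e_B^mu L_{A mu} holds
Mt = (eta @ L) @ e
assert np.allclose(Mt, Mt.T)
```

Here the symmetry of $f\gamma$ is what guarantees that the constructed $e$ reproduces the inverse metric $g^{-1}$, exactly as in the proof.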
As we just showed, the hypotheses (i) of Propositions 1 and 3 require that a certain real (invertible) matrix has a real square root. It is however well known that not all real invertible matrices have real square roots (see e.g. [16,17]), and we will later recall the necessary and sufficient conditions for this to occur. In our case, though, the matrix which should have a square root is not totally arbitrary. For example, in Proposition 1 it must be of the form $\eta M^t \eta M$. This alone does however not ensure the existence of a square root. For example, choosing

$$M = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

we get

$$\eta\, M^t\, \eta\, M = \mathrm{diag}\,(-9,\, -1,\, 4,\, 1),$$

which does not have any real square root. Indeed, such a $4 \times 4$ diagonal matrix with four distinct eigenvalues has $2^4$ square roots, which are given here by $\mathrm{diag}\,(\pm 3i, \pm i, \pm 2, \pm 1)$. None of them is real. Hence the decomposition (14) can at best hold for a restricted set of matrices.
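This obstruction is easy to see numerically (a sketch; the matrix M below is an illustrative choice of this type, mixing the time direction of one metric with a space direction of the other):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
M = np.array([[0.0, 1.0, 0.0, 0.0],
              [3.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
A = eta @ M.T @ eta @ M
evals = np.sort(np.linalg.eigvals(A).real)

# two negative, distinct eigenvalues: every square root is diagonal in the
# same basis with entries +/- sqrt(eigenvalue), so none of them can be real
assert np.allclose(evals, [-9.0, -1.0, 1.0, 4.0])
assert (evals < 0).sum() == 2
```

Since the two negative eigenvalues are distinct, Theorem 3 below confirms directly that no real square root exists.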
We thus see that considering the matrix $M$ above as given by the form (19) invalidates the result of Ref. [11]. Notice that, if one now makes the simple choice (15)-(16) (and considers equation (24)), our example involves a "mismatch" between the time directions of the two metrics $f_{\mu\nu}$ and $g_{\mu\nu}$. However, beyond perturbation theory there is no reason to think that these time directions should coincide or even be compatible. We will come back to this question later. Notice further that, perturbatively, if $g = f + h$ with $h$ small, then to first order in $h$ one has $g^{-1}f = \left(1_D - \tfrac{1}{2} f^{-1} h\right)^2$, and so the assumptions (i) and (ii) of Proposition 3 are always true perturbatively.
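The perturbative statement can likewise be checked to first order (a sketch; f is an arbitrary illustrative metric and h a tiny symmetric perturbation):

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

p = rng.normal(size=(D, D)) * 0.1
f = eta + p + p.T                 # background metric (illustrative)
h = rng.normal(size=(D, D)) * 1e-6
h = h + h.T                       # small symmetric perturbation
g = f + h

lhs = np.linalg.inv(g) @ f
X = np.eye(D) - 0.5 * np.linalg.inv(f) @ h
# g^{-1} f = (1 - f^{-1} h / 2)^2 up to O(h^2) corrections
assert np.allclose(lhs, X @ X, atol=1e-6, rtol=0.0)
# and the first-order square root already satisfies Proposition 3's criterion,
# since f X = f - h/2 is manifestly symmetric
assert np.allclose(f @ X, (f @ X).T)
```

The residual in the first assertion is quadratic in h, consistent with a first-order identity.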

Sufficient conditions
Here, in order to formulate simple sufficient conditions allowing one to obtain (11) and (14), we will discuss the precise relation between hypotheses (i) and (ii) of Propositions 1 and 3. We first need to recall how square roots of real matrices are obtained, using the following theorem (quoted here from Ref. [16]).

Theorem 1. Let A be an invertible real square matrix (of arbitrary dimension). If A has no real negative eigenvalues, then there are precisely $2^{r+c}$ real square roots of A which are polynomial functions of A, where r is the number of distinct real eigenvalues of A and c is the number of distinct complex conjugate eigenvalue pairs. If A has a real negative eigenvalue, then A has no real square root which is a polynomial function of A.
Let us first use this theorem to prove that (i) of Proposition 1 (respectively Proposition 3) implies (ii) of the same proposition whenever the matrix $\eta M^t \eta M$ (respectively the matrix $g^{-1}f$) has no real negative eigenvalues. To see this, just consider a real matrix A with no real negative eigenvalues, given by the product $A = S'S$ of two symmetric invertible matrices $S$ and $S'$. By virtue of the above theorem, we know that this matrix has at least one real square root which is a polynomial function of A, which we denote F(A). One then has

$$F(A) = \sum_k c_k\, A^k,$$

where the sum runs over a finite number of integers k, and the $c_k$ are real numbers. Using the fact that $A = S'S$, one then has

$$F(A) = \sum_k c_k\, (S'S)^k = S' \sum_k c_k\, \left[S\, S'\, S \cdots S'\, S\right]_k,$$

where the term $[S S' S \cdots S' S]_k$ contains k factors of S and k−1 factors of S', and is a symmetric matrix. This means that the square root F(A) is given by the product of S' by a symmetric matrix. To prove our assertion, it is then enough to choose S' to be given by $\eta$ and S to be given by $M^t \eta M$ (respectively S' given by $f^{-1}$ and S given by $f g^{-1} f$). Hence, using the above result and Propositions 1 and 3, we have shown the following two propositions.

Proposition 4. A sufficient condition for an arbitrary invertible real matrix M to be decomposed as $M = \lambda s$, with λ the matrix of a Lorentz transformation and s a symmetric matrix, is that the matrix $\eta M^t \eta M$ has no real negative eigenvalues.
Proposition 5. A sufficient condition for the existence of vielbeins $e_A{}^\mu$ and $L^B{}_\nu$ corresponding to the metrics $g_{\mu\nu}$ and $f_{\mu\nu}$ respectively (i.e. $\eta^{AB} e_A{}^\mu e_B{}^\nu = g^{\mu\nu}$ and $\eta_{AB} L^A{}_\mu L^B{}_\nu = f_{\mu\nu}$) such that $e_A{}^\mu L_{B\mu} = e_B{}^\mu L_{A\mu}$, is that the matrix $g^{-1}f$ has no real negative eigenvalues.
If A has one (or more) real negative eigenvalues, Theorem 1 does not imply that A does not have a real square root, but only that such a square root cannot be a polynomial function of A. In order to state the necessary and sufficient conditions for a real matrix to have a real square root, one first needs to introduce the so-called Jordan decomposition of a matrix. It uses Jordan blocks, which can be defined as the $r \times r$ matrices $J_{(r,z)}$ given by (for $r \geq 2$)

$$J_{(r,z)} = \begin{pmatrix} z & 1 & & \\ & z & \ddots & \\ & & \ddots & 1 \\ & & & z \end{pmatrix},$$

where z is a complex number, and by $J_{(1,z)} = (z)$ for $r = 1$. One can then show that for an arbitrary $n \times n$ matrix A there exist an invertible matrix P and a matrix J such that

$$A = P\, J\, P^{-1}, \tag{36}$$

where J is a so-called Jordan matrix of the form

$$J = \mathrm{diag}\left(J_{(r_1,z_1)},\, J_{(r_2,z_2)},\, \cdots,\, J_{(r_k,z_k)}\right), \tag{37}$$

where k is an integer and the matrices $J_{(r_j,z_j)}$ are called the Jordan blocks of J. For a given matrix A, the number of Jordan blocks, the nature of the distinct Jordan blocks, and the number of times a given Jordan block occurs in the Jordan matrix J are uniquely determined. Moreover, the $z_i$ are the eigenvalues of A. One can further show that a given Jordan block $J_{(r,z)}$ with $z \neq 0$ has precisely two upper triangular square roots, $j^\pm_{(r,z)}$, which are in addition polynomial functions of $J_{(r,z)}$ [16]. These can be used to find all the square roots (possibly complex) of a given matrix using the following theorem.
Theorem 2. Let A be an $n \times n$ complex matrix with Jordan decomposition given by (36)-(37). Then all the square roots (which may include complex matrices) of A are given by the matrices

$$P\, U\, \mathrm{diag}\left(j^\pm_{(r_1,z_1)},\, j^\pm_{(r_2,z_2)},\, \cdots,\, j^\pm_{(r_k,z_k)}\right) U^{-1}\, P^{-1},$$

where U is an arbitrary invertible matrix which commutes with J.
The Jordan blocks of a matrix also play a crucial role in the following theorem which gives the necessary and sufficient condition for a real matrix to have a real square root (see e.g. [17]).
Theorem 3. Let A be an invertible real square matrix (of arbitrary dimension). The matrix A has a real square root if and only if, for each of its real negative eigenvalues $z_i$, the number of identical Jordan blocks $J_{(r_i,z_i)}$ in which this eigenvalue occurs in the Jordan decomposition of A is even.
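Theorem 3's pairing condition can be illustrated in the simplest case (a sketch): the matrix diag(−1, −1) carries the eigenvalue −1 in two identical 1×1 Jordan blocks, so a real square root must exist, and a rotation by 90 degrees indeed provides one, whereas an unpaired negative eigenvalue only admits complex roots:

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])     # rotation by 90 degrees
assert np.allclose(R @ R, -np.eye(2))

# by contrast, diag(-1, 4) has the eigenvalue -1 in a single Jordan block;
# its four candidate roots diag(+/- i, +/- 2) are all complex
roots = [np.diag([s1 * 1j, s2 * 2.0]) for s1 in (1, -1) for s2 in (1, -1)]
assert all(np.allclose(r @ r, np.diag([-1.0, 4.0])) for r in roots)
assert all(abs(r[0, 0].imag) > 0 for r in roots)
```

The paired case is exactly what happens for the two-dimensional matrix $-u\,1_2$ studied in the next section.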
In the following, we will use the above theorems to discuss in detail the cases which are not covered by our Propositions 4 and 5. Namely, we will ask if it is possible for a matrix to fulfill condition (i) (of Propositions 1 and 3) without obeying condition (ii) (of the same propositions). We will do this for various space-time dimensions, starting with the two-dimensional case, which has less interest as far as gravity is concerned, but where results useful for the other cases can be derived. In this case we will also be able to give an explicit proof of the propositions of section 2.

Two dimensional case
A certain number of the results derived before can easily be obtained in two dimensions by an explicit calculation. Consider first the decomposition (14). We ask if an arbitrary $2 \times 2$ invertible matrix M given by

$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \tag{38}$$

can be written as (beginning here with proper orthochronous Lorentz transformations)

$$M = \begin{pmatrix} c & s \\ s & c \end{pmatrix} \begin{pmatrix} a & b \\ b & d \end{pmatrix}, \tag{39}$$

where $c = \cosh\psi$ and $s = \sinh\psi$ (with ψ a real number). Expanding the matrix product on the right-hand side, we obtain a system of 4 linear equations obeyed by the three coefficients {a, b, d}, which we can use, eliminating b, to get the necessary condition $(A - D)\, s = (C - B)\, c$, which cannot hold for $|C - B| > |A - D|$. This obviously shows that the decomposition (39) is not always possible, as we showed in a more general way in Proposition 1.
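This necessary condition can be checked on a decomposable matrix (a sketch; the boost parameter and the symmetric entries are arbitrary choices):

```python
import numpy as np

psi = 0.6
c, s = np.cosh(psi), np.sinh(psi)
lam = np.array([[c, s],
                [s, c]])              # proper orthochronous Lorentz matrix
sym = np.array([[2.0, 0.3],
                [0.3, 1.0]])
M = lam @ sym
A, B = M[0, 0], M[0, 1]
C, D = M[1, 0], M[1, 1]

# the necessary condition derived in the text ...
assert np.isclose((A - D) * s, (C - B) * c)
# ... which forces |C - B| <= |A - D|, since |tanh psi| < 1
assert abs(C - B) <= abs(A - D)
```

Any matrix with $|C - B| > |A - D|$ therefore lies outside the image of the decomposition (with a proper orthochronous boost).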
In two dimensions, one can also explicitly show that condition (i) of Proposition 1 always implies condition (ii) of the same proposition. Indeed, consider a $2 \times 2$ matrix m written as $m = \eta s$, with s symmetric, and assume that this matrix has a real square root. According to the proof of Proposition 4, we know that if this matrix has no real negative eigenvalues, it has a square root which is the product of η times a symmetric matrix. Let us then study the case where it has at least one negative eigenvalue. In this case, according to Theorem 3, it must be of the form

$$m = P\, \mathrm{diag}\,(-u, -u)\, P^{-1} = -u\, 1_2,$$

where u is a positive non-zero number (note that such a matrix is indeed of the form ηs). It remains then to study all the square roots of $-u\,1_2$. The matrix equation $\gamma^2 = m$ is easy to solve explicitly. We obtain that a real square root γ is given by any of the matrices

$$\gamma = \begin{pmatrix} \alpha & \beta \\ -\frac{u + \alpha^2}{\beta} & -\alpha \end{pmatrix}, \tag{41}$$

where β and α are real numbers and β is non-zero. Choosing then α and β obeying the constraint $u = \beta^2 - \alpha^2$, we find an infinite family of real matrix square roots of m which are written in the form of the product of η by a symmetric matrix. A similar straightforward calculation can be made to prove that hypothesis (i) of Proposition 3 implies (ii) of the same proposition. In fact, it is easy to see that for every symmetric matrix

$$f = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$$

with $ac - b^2 < 0$, there exist real α, β such that $f\gamma$ (with γ given by (41)) is symmetric, i.e. such that

$$a\,\beta^2 - 2\,\alpha\beta\, b + c\,u + c\,\alpha^2 = 0.$$

Indeed, either $c \neq 0$ and the discriminant of the above second-order polynomial equation with respect to α, $\Delta_\alpha = 4\beta^2(b^2 - ac) - 4c^2 u$, is positive for large enough β, or $c = 0$, in which case b must be non-zero and $\alpha = \frac{a\beta}{2b}$ is an obvious solution. This shows that in 2 dimensions, being able to choose zweibeins obeying (11) is equivalent to the existence of a real square root of $g^{-1}f$.
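Both two-dimensional statements can be exercised numerically (a sketch; u, the Lorentzian matrix f and the parameter β are arbitrary illustrative choices):

```python
import numpy as np

u = 2.0

def gamma(alpha, beta):
    # general real square root of -u * identity in 2D
    return np.array([[alpha, beta],
                     [-(u + alpha**2) / beta, -alpha]])

assert np.allclose(gamma(0.3, 1.7) @ gamma(0.3, 1.7), -u * np.eye(2))

# with u = beta^2 - alpha^2, eta @ gamma is symmetric (eta = diag(-1, 1))
eta2 = np.diag([-1.0, 1.0])
alpha0 = 0.5
beta0 = np.sqrt(u + alpha0**2)
eg = eta2 @ gamma(alpha0, beta0)
assert np.allclose(eg, eg.T)

# a Lorentzian f (ac - b^2 < 0): solve c*alpha^2 - 2*b*beta*alpha + a*beta^2 + c*u = 0
a, b, c = 1.0, 0.5, -1.0
beta = 5.0
disc = 4 * beta**2 * (b**2 - a * c) - 4 * c**2 * u
alpha = (2 * b * beta + np.sqrt(disc)) / (2 * c)
f = np.array([[a, b], [b, c]])
fg = f @ gamma(alpha, beta)
assert np.allclose(fg, fg.T)      # f gamma symmetric, as required by Proposition 3
```

The positive discriminant at large β mirrors the argument in the text: a suitable root of the quadratic always exists in two dimensions.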

Four dimensional case
Considering here the case of $4 \times 4$ real matrices, and using Theorems 2 and 3, we find that the only real invertible matrices A that have at least one negative real eigenvalue and also have at least one real square root must have one of the following Jordan forms

$$A = P\, J_k\, P^{-1}, \qquad k = 1, \cdots, 7,$$

where $J_k$ is one of the Jordan matrices

$$J_1 = -u\, 1_4, \qquad J_2 = \mathrm{diag}\,(-u, -u, -v, -v), \qquad J_3 = \mathrm{diag}\,(-u, -u) \oplus \begin{pmatrix} v & w \\ -w & v \end{pmatrix},$$

$$J_4 = \mathrm{diag}\,(-u, -u, v, w), \qquad J_5 = \begin{pmatrix} -u & 1 \\ 0 & -u \end{pmatrix} \oplus \begin{pmatrix} -u & 1 \\ 0 & -u \end{pmatrix}, \qquad J_6 = \mathrm{diag}\,(-u, -u) \oplus \begin{pmatrix} v & 1 \\ 0 & v \end{pmatrix}, \qquad J_7 = \mathrm{diag}\,(-u, -u, v, v),$$

where u, v and w are positive real numbers; u is always non-zero, and the same is true for v and w, except in the case of $J_3$ where v and w cannot vanish simultaneously. Let us first consider the case where one simply has $A = J_2$. As will be shown below, $J_2$ is an example of a real matrix which is a product of η by a symmetric matrix and has real square roots (in fact it has infinitely many, as we will see below), but which is such that none of those square roots is a product of η by a symmetric matrix. As such, it shows that, in full generality, hypothesis (i) of Proposition 1 does not imply (ii) of the same proposition (this has to be contrasted with the two-dimensional case, where we showed the opposite). Let us indeed find all the square roots of $J_2$ using Theorem 2. We first determine all the matrices U which commute with the Jordan matrix $J_2$. It is easy to see (e.g. by explicitly computing the commutator) that those matrices are simply of the form $U = \mathrm{diag}(V, W)$, where V and W are arbitrary $2 \times 2$ invertible matrices. This means in turn that all the square roots (including complex square roots) of $J_2$ are of the form

$$\mathrm{diag}\left(V\, \mathrm{diag}\,(\pm i\sqrt{u},\, \pm i\sqrt{u})\, V^{-1},\; W\, \mathrm{diag}\,(\pm i\sqrt{v},\, \pm i\sqrt{v})\, W^{-1}\right)$$

(indeed, $\pm i\sqrt{u}$ and $\pm i\sqrt{v}$ are just the square roots of the one-dimensional Jordan blocks $(-u)$ and $(-v)$). But this implies also that every square root of $J_2$ is block diagonal, each $2 \times 2$ block being an arbitrary square root of $-u\,1_2$ (respectively $-v\,1_2$). Hence, using (41), we obtain all real square roots of $J_2$ as

$$\begin{pmatrix} \alpha & \beta \\ -\frac{u + \alpha^2}{\beta} & -\alpha \end{pmatrix} \oplus \begin{pmatrix} a & b \\ -\frac{v + a^2}{b} & -a \end{pmatrix},$$

where α, β, a and b are real but otherwise arbitrary (with b and β non-vanishing). However, none of the above square roots is the product of η by a symmetric matrix (this would require $b^2 + a^2 = -v$), which proves our assertion. The same example can a priori be applied to Proposition 3. However, there, hypothesis (i) concerns the matrix $g^{-1}f$ which, as we will see, cannot be equal to $J_2$.
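The $J_2$ obstruction can be checked explicitly (a sketch; u, v and the root parameters are arbitrary illustrative values):

```python
import numpy as np

u, v = 1.0, 2.0
J2 = np.diag([-u, -u, -v, -v])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def root(alpha, beta, a, b):
    # block-diagonal real square root of J2
    r = np.zeros((4, 4))
    r[0, 0], r[0, 1] = alpha, beta
    r[1, 0], r[1, 1] = -(u + alpha**2) / beta, -alpha
    r[2, 2], r[2, 3] = a, b
    r[3, 2], r[3, 3] = -(v + a**2) / b, -a
    return r

r = root(0.2, 1.5, -0.7, 2.0)
assert np.allclose(r @ r, J2)

# eta @ r symmetric would require b == -(v + a^2)/b in the lower block,
# i.e. a^2 + b^2 == -v: impossible for real a and b
er = eta @ r
assert not np.allclose(er, er.T)
```

The lower $2 \times 2$ block is untouched by η, which is precisely why the symmetry requirement there becomes unsatisfiable.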
Let us indeed consider the more general case where the matrix $g^{-1}f$ is diagonalizable, as must be the case if $g^{-1}f$ is similar to $J_2$ (or indeed $J_1$ or $J_3$). One can show that this is a sufficient (and in fact also necessary) condition for the matrices $g_{\mu\nu}$ and $f_{\mu\nu}$, corresponding to the symmetric bilinear forms represented by the metrics, to be diagonalizable in a common basis [18] (if one of the two bilinear forms had a Euclidean signature, it would have been possible to diagonalize the matrices corresponding to both forms in the same basis without any further assumption). In this common basis, each of the diagonal matrices corresponding to g and f has only one negative eigenvalue, and hence there is no way that $g^{-1}f$ can be equal or similar (in the mathematical sense) to $J_2$, which has four negative eigenvalues. The same reasoning excludes $J_1$ and $J_3$ as admissible Jordan matrices associated to $g^{-1}f$, because those matrices are diagonalizable and have eigenvalues which cannot all be given by products of eigenvalues of two commonly diagonalized metrics with a Lorentzian signature. The only cases left over are thus those with Jordan matrices given by $J_4$, $J_5$, $J_6$ and $J_7$. We will not discuss here the cases of $J_5$ and $J_6$ (see below), but will study the $J_4$ and $J_7$ cases (which are admissible as Jordan matrices of $g^{-1}f$). In these cases, however, there exists a basis where both metrics are diagonal (as a consequence of the result we just used) and the time direction of each of the two metrics is space-like with respect to the other. This might be considered as a pathology if both metrics are dynamical, possibly leading to causality violations. However, nothing goes wrong a priori if only one metric is dynamical, even though this has to be checked on a case-by-case basis.
For example, in dRGT theory the light cone seen by some of the polarizations of the massive graviton is not set by the dynamical metric background alone, but also by the non-dynamical metric, as exemplified e.g. in appendix A of ref. [15].
In the cases of matrices A which have $J_4$ or $J_7$ as Jordan matrices, one can find examples of matrices given by the product of η with a symmetric matrix and having real square roots, but such that none of these square roots is the product of η with a symmetric matrix. Indeed, consider the invertible matrix P given in (54), chosen such that $A = P J_4 P^{-1}$, as written in (55), is the product of η with a symmetric matrix. Applying Theorem 2 and the same reasoning as above, all the real square roots of A are given by the matrices (56), with α and β real and β non-zero (complex α and β, as well as the replacement of the $2 \times 2$ matrix $\begin{pmatrix} \alpha & \beta \\ -\frac{1+\alpha^2}{\beta} & -\alpha \end{pmatrix}$ appearing there by $\pm\, i\, 1_2$, give in fact all the square roots of A, but only those given by (56) are real). We can compute these explicitly to check that they can never be written as the product of η with a symmetric matrix. Indeed, looking for example at the (2,3) and (3,2) elements of the matrix (56), it can be seen that they are equal if and only if $(3\alpha - \beta)^2 + 8\beta^2 + 9$ vanishes, which never occurs for real α and β. This example can be directly applied to Proposition 3 by considering the choice (15) for $f_{\mu\nu}$ (and then $A = f^{-1} g^{-1} f$), showing that in general (i) of this proposition does not imply (ii). So far, we have only discarded the cases of matrices similar (in the mathematical sense) to $J_5$ and $J_6$, i.e. we have only discussed in detail diagonalizable matrices (possibly diagonalizable only over the complex numbers). There is however a well-defined criterion allowing us to do so. Indeed, one can show that a sufficient condition for $g^{-1}f$ to be diagonalizable is that the light cones of $g_{\mu\nu}$ and $f_{\mu\nu}$ do not intersect (except at the origin) [18]. When this is the case, using the result we mentioned above, one can find a basis where $g_{\mu\nu}$ and $f_{\mu\nu}$ are commonly diagonal.
Then, in this basis, either the time directions of the two metrics coincide, and the sufficient condition of Proposition 5 is fulfilled (one can then find vierbeins satisfying (11)), or they do not coincide, and then one has either $J_4$ or $J_7$ as Jordan matrices and the counterexample above applies.

Three dimensional case
The results obtained in the previous section can easily be extended to the case of a space-time with 3 dimensions, which has some relevance for physics and in particular for massive gravity [19,20,21]. In three dimensions, the only cases which are not covered by Propositions 4 and 5 are those of real invertible matrices A of the form

$$A = P\, \mathrm{diag}\,(-u,\, -u,\, v)\, P^{-1},$$

where u and v are non-zero positive real numbers, and P is an invertible matrix.
Here it is also easy to find an example of the kind (54)-(56). Indeed, consider now A to be given by

$$A = \begin{pmatrix} v & 0 & 0 \\ 0 & -u & 0 \\ 0 & 0 & -u \end{pmatrix}. \tag{58}$$

This matrix has the form of a product of η with a symmetric matrix, but none of its real square roots, given by

$$\begin{pmatrix} \pm\sqrt{v} & 0 & 0 \\ 0 & \alpha & \beta \\ 0 & -\frac{u + \alpha^2}{\beta} & -\alpha \end{pmatrix}$$

(with α and β real numbers, β non-vanishing), has: with $\eta = \mathrm{diag}(-1, 1, 1)$, the product of η with any of these square roots is symmetric only if $\beta^2 = -(u + \alpha^2)$, which is impossible for real α and β. Hence one can make here considerations similar to those given at the end of the previous section.

Conclusions
In this note, we have studied in detail the necessary and sufficient conditions for two vielbeins $L^A$ and $E^B$, associated with two metrics $f_{\mu\nu}$ and $g_{\mu\nu}$ defined on a given manifold, to be chosen so that they obey the symmetry condition (11), which has been used as a gauge condition in vielbein gravity and massive gravity. As a byproduct, we also studied the necessary and sufficient condition for an arbitrary matrix M to be decomposed as in (14). We showed that, in contrast to what has sometimes been claimed in the literature, the condition (11) and the decomposition (14) cannot be achieved in general but require some extra assumptions related to the existence and properties of square roots of matrices. These assumptions are gathered in Propositions 1 to 5 of the present work. We also showed that, in the 4-dimensional case, it is enough to assume that the light cones of the two metrics do not intersect and that the metrics share the same time direction (in the sense given at the end of section 5) in order for a sufficient condition for (11) to be satisfied.