A note on “symmetric” vielbeins in bimetric, massive, perturbative and non perturbative gravities

We consider a manifold endowed with two different vielbeins E^A_μ and L^A_μ corresponding to two different metrics g_μν and f_μν. Such a situation arises generically in bimetric or massive gravity (including the recently discussed version of de Rham, Gabadadze and Tolley), as well as in perturbative quantum gravity where one vielbein parametrizes the background space-time and the other the dynamical degrees of freedom. We determine the conditions under which the relation g^μν E^A_μ L^B_ν = g^μν E^B_μ L^A_ν can be imposed (or the “Deser-van Nieuwenhuizen” gauge chosen). We clarify and correct various statements which have been made about this issue.
We show in particular that in D = 4 dimensions, this condition is always equivalent to the existence of a real matrix square root of g^{-1}f.


Introduction
There are various situations in physics where one has to consider a manifold endowed with two different vielbein fields. This is obviously the case in bimetric theories, i.e. theories where two different metrics are defined on the same space-time manifold [1][2][3][4][5][6][7][8][9]. Each of these metrics can then be described by a different vielbein. This remains true even if one of the two metrics is not dynamical. It also applies to non linear massive gravity (for recent reviews see [10,11]), which is nothing other than a special class of bigravity, and in particular to the recently introduced massive gravity theories of de Rham-Gabadadze-Tolley (dRGT in the following) [12][13][14], as well as to the extension of these to the dynamical bimetric case [15,16]. A similar situation also occurs when one expands General Relativity around a fixed background metric and expresses both the background and the dynamical metric in terms of vielbeins. This is the starting point of many works dealing with quantum gravity (see e.g. [17,18]).
Considering such situations, let us define, in arbitrary D dimensions, E^A and L^A to be two bases of 1-forms obeying at every space-time point

g = η_AB E^A ⊗ E^B ,  f = η_AB L^A ⊗ L^B , (1.1)

or equivalently

g_μν = η_AB E^A_μ E^B_ν , (1.2)
f_μν = η_AB L^A_μ L^B_ν , (1.3)

JHEP03(2013)086
where g_μν and f_μν are respectively the metrics associated with the vielbeins. We will also need the vectors e_A and ℓ_A, respectively dual to the 1-forms E^A and L^A, that verify

e_A^μ E^B_μ = δ_A^B ,  ℓ_A^μ L^B_μ = δ_A^B . (1.5)

For future use, let us rewrite the above relations (and consequences thereof) using matrix notations. We have

f = L^t η L , (1.6)
f^{-1} = ℓ^t η ℓ , (1.7)
L ℓ^t = 1_D , (1.8)

where 1_D is the D × D identity matrix, m^t denotes the matrix transpose of the matrix m, η is just diag(−1, 1, ..., 1), and the same relations hold between E, e and g respectively. The defining relations (1.2) and (1.3) imply the gauge symmetry

E^A_μ → Λ^A_B E^B_μ , (1.9)
L^A_μ → Λ̃^A_B L^B_μ , (1.10)

with Λ^A_B and Λ̃^A_B Lorentz matrices. It is often convenient to ask for a "symmetry" condition on the vielbeins which reads

e_A^μ L_{Bμ} = e_B^μ L_{Aμ} . (1.11)

Notice that this condition can also be written as g^μν E^A_μ L^B_ν = g^μν E^B_μ L^A_ν, and that ref. [19] uses an equivalent form which reads E^A_μ L_{Aν} = E^A_ν L_{Aμ}.
In the recent discussions about massive gravity, such a condition has been used to ensure the existence of, and express, the matrix square root of g^{-1}f which enters in a crucial way in the definition of dRGT theory (see e.g. [20,21]). Indeed, whenever condition (1.11) holds, γ defined as

γ^μ_ν = e_A^μ L^A_ν (1.12)

verifies the defining equation of the matrix square root of g^{-1}f given by

γ^μ_ρ γ^ρ_ν = g^{μρ} f_{ρν} . (1.13)

It has also been argued by Hinterbichler and Rosen [22] that, in the vielbein reformulation of dRGT theories, condition (1.11) is obtained as a consequence of the field equations. To prove this, they use a decomposition of an arbitrary matrix M (representing some unconstrained arbitrary vielbein multiplied by η) as

M = λ s , (1.14)

where λ is a Lorentz matrix and s is a symmetric matrix. This is reminiscent of the so-called polar decomposition stating that an arbitrary invertible matrix can be written as

the product of an orthogonal matrix with a symmetric matrix. However, we will show that such a decomposition does not hold in general if one replaces the orthogonal matrix by a Lorentz transformation. This makes in particular the argument of ref. [22] incomplete. Furthermore, in massive gravity as well as in perturbative quantum gravity, condition (1.11) has been used as a gauge condition. In the quantum gravity context, this gauge (sometimes dubbed the Deser-van Nieuwenhuizen gauge in reference to [17]) was first introduced via a gauge fixing term in the action and dealt with perturbatively [17,18]. It was later argued that this gauge can be set "non perturbatively", i.e. that given a set of arbitrary vielbeins E^A and L^A that do not fulfill condition (1.11), one can always Lorentz rotate them as in (1.9) and (1.10) to define a new set of vielbeins obeying this condition [19] (with the consequence that the corresponding gauge would not suffer from Gribov-like ambiguities). Interestingly enough, the same statements have also been made in the context of massive gravity. Indeed, there as well the condition (1.11) has been used "perturbatively" (i.e. in the case when both metrics g_μν and f_μν are close to one another, see e.g. [20]), but it has also been argued that condition (1.11) can be reached as a (Lorentz) gauge choice for arbitrary metrics [21]. This contradicts various other statements made in the literature, for example in ref. [18], where it is stated that gauge (1.11) cannot be set beyond perturbation theory. Settling this contradiction, as we intend to do here, will also illuminate the issues discussed in the previous paragraph, since, as we will show, setting (1.11) via suitable Lorentz rotations of the vielbeins involves a decomposition similar to (1.14).
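The contrast with the Euclidean situation can be made concrete. The following sketch (using numpy and scipy, our choice for illustration) checks that the ordinary polar decomposition always succeeds for an invertible matrix; it is precisely this property that fails once the orthogonal factor is replaced by a Lorentz transformation:

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))       # a generic (almost surely invertible) matrix

# Euclidean polar decomposition: M = u @ p, u orthogonal, p symmetric positive definite.
u, p = polar(M)

assert np.allclose(u @ u.T, np.eye(4))      # u is orthogonal
assert np.allclose(p, p.T)                  # p is symmetric
assert np.all(np.linalg.eigvalsh(p) > 0)    # and positive definite
assert np.allclose(u @ p, M)                # the decomposition reproduces M
```

No analogue of `scipy.linalg.polar` exists for the Lorentz group, consistently with the obstruction discussed in the text.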
To be precise, the purpose of this note is to determine when and how the condition (1.11) can be enforced, as well as when the decomposition (1.14) holds. These questions, beyond their mathematical interest, are especially important for massive gravity. Indeed, one can argue that the vielbein formulation of dRGT theories has several advantages over their metric formulations. First of all, it allows a simple extraction of what plays the role of the Hamiltonian constraint [22]. Second, in some cases it also allows one to derive dynamically the existence of the square root of g^{-1}f, which has to be assumed or enforced by Lagrange multipliers in the metric formulation [22,23]. Finally, the frame formulation permits a simple discussion of the constraints and the counting of dynamical degrees of freedom in the Lagrangian framework [23]. In this formulation, relation (1.11) plays a key role, and it is important to know whether it can be obtained by Lorentz gauge transformations, or whether it needs additional constraints to be imposed. This paper is organized as follows. In the next section, we will discuss necessary and sufficient conditions for (1.11) and (1.14) to hold. Then, in section 3, using results on matrix square roots, we will spell out sufficient conditions to achieve (1.11) and (1.14). In the following sections we will discuss the specific cases of D = 2, D = 3, and D = 4 space-time dimensions, and in particular some examples clarifying the results of section 3 as well as some leftover cases. Finally, we will quickly look at the stability of these conditions with respect to the dynamics of the system, i.e. we will discuss whether they are preserved under time evolution in some particular theories, and we will point out some consequences for massive gravity.
Before proceeding, let us mention a special choice for one of the metrics (say f_μν) and the associated vielbein L^A. This choice is made in some contexts (e.g. dRGT theories, but also perturbative quantum gravity). It amounts to first assuming that the metric f_μν is flat and takes the canonical form η_μν, i.e.

f_μν = η_μν , (1.15)

and then choosing L^A = dx^A, i.e. such that (in components)

L^A_μ = δ^A_μ , (1.16)

in which case condition (1.11) simply reads e_{Aμ} = e_{μA}, stating that the vielbein e_{Aμ} can be represented as a symmetric matrix. This choice will not be used to derive the results of this paper, but will just sometimes be considered as an example.

Necessary and sufficient conditions
Let us first try to set the constraint (1.11) by using the freedom to Lorentz rotate independently the two sets of vielbeins L^A and e_A, as in (1.9) and (1.10). Considering two arbitrarily chosen vielbeins e_A and L^B, assume that there exist two Lorentz transformations Λ^A_B and Λ̃^A_B such that the matrix S_AB defined by

S_AB = (Λ^{-1})^C_A e_C^μ η_BD Λ̃^D_E L^E_μ (2.1)

is symmetric. Introducing the matrix

M = e L^t η ,  with entries M_AB = e_A^μ L_{Bμ} , (2.2)

(note that this definition implies that M is invertible, and that condition (1.11) is just the statement that M is symmetric), the above equality (2.1) reads in matricial notations

S = Λ^{-t} M Λ̃^{-1} . (2.3)

Multiplying it on the right by Λ̃ and on the left by Λ^t we get

M = Λ^t S Λ̃ = (Λ^t Λ̃^{-t}) (Λ̃^t S Λ̃) . (2.4)

For S to be symmetric, the matrix in the last parenthesis above should be symmetric; call it s. Defining the Lorentz transformation λ by λ = Λ^t Λ̃^{-t}, we get that the invertible matrix M should be written as in eq. (1.14). Being a Lorentz transformation, λ verifies

λ^t η λ = η . (2.5)

As we already stated, a decomposition such as in eq. (1.14) does not hold in general (in contrast to the polar decomposition). Indeed, rewriting (1.14) as λ = M s^{-1} and inserting this into (2.5), we get, after some trivial manipulations, that M and s should fulfill the necessary condition

(ηs)(ηs) = η M^t η M . (2.6)

Running the above argument backward, it is easy to see that this condition is also sufficient (just because the matrix defined as M s^{-1} will then be a Lorentz transformation). Hence we have proven the following proposition.
Proposition 1. An arbitrary invertible matrix M can be decomposed as M = λs, λ being the matrix of a Lorentz transformation and s a symmetric matrix, if and only if (i) the real matrix ηM t ηM has a real square root, and (ii) at least one such square root can be written as the product of η with a symmetric matrix.
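The easy direction of this equivalence can be checked numerically. The sketch below builds M = λs from an explicit boost and a randomly chosen symmetric factor (illustrative numbers), and verifies relation (2.6):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An explicit Lorentz transformation: a boost in the (t, x) plane.
psi = 0.7
lam = np.eye(4)
lam[0, 0] = lam[1, 1] = np.cosh(psi)
lam[0, 1] = lam[1, 0] = np.sinh(psi)
assert np.allclose(lam.T @ eta @ lam, eta)   # lam is indeed Lorentz

# An arbitrary symmetric matrix s (shifted to be safely invertible).
rng = np.random.default_rng(1)
s = rng.normal(size=(4, 4))
s = s + s.T + 8.0 * np.eye(4)

M = lam @ s

# Condition (2.6): (eta s)(eta s) = eta M^t eta M
assert np.allclose(eta @ s @ eta @ s, eta @ M.T @ eta @ M)
```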
In particular, when M is given by (2.2), we have (using relations (1.6)-(1.8) as well as definition (2.2))

η M^t η M = L g^{-1} L^t η = L (g^{-1} f) L^{-1} . (2.7)

So if g^{-1}f has a square root γ, then (i) above holds: a square root of η M^t η M is then given by L γ L^{-1}. We then prove the following proposition.

Proposition 2. Given two metrics g_μν and f_μν, g^{-1}f has a square root γ such that γ = f^{-1}s, with s a symmetric matrix, if and only if the matrix M defined by (2.2) (and which verifies relations (2.7)) is such that the real matrix η M^t η M has a real square root which can be written as the product of η by a symmetric matrix.
Proof. We first assume that g^{-1}f can be written as g^{-1}f = (f^{-1}s)^2 with s a symmetric matrix. Then, using this hypothesis in (2.7), we get

η M^t η M = L (f^{-1}s)^2 L^{-1} = (L f^{-1} s L^{-1})^2 = (η ℓ s ℓ^t)^2 . (2.8)

The matrix ℓ s ℓ^t being symmetric, this proves one side of the equivalence. Conversely, we assume that there exists a symmetric matrix s′ such that η M^t η M = (η s′)^2. Then g^{-1}f is given by

g^{-1} f = L^{-1} (η s′)^2 L = (ℓ^t η s′ L)^2 = (f^{-1} L^t s′ L)^2 . (2.9)

Noticing that the matrix L^t s′ L is symmetric ends the proof.

Hence, gathering the above results, we have proven the following statement.
Proposition 3. There exist vielbeins e_A^μ and L^B_ν corresponding to the metrics g_μν and f_μν respectively (i.e. η^AB e_A^μ e_B^ν = g^μν and η_AB L^A_μ L^B_ν = f_μν) such that e_A^μ L_{Bμ} = e_B^μ L_{Aμ}, if and only if there exists a real matrix γ such that γ^2 = g^{-1}f and f γ is symmetric.

Direct proof. Suppose first we have vielbeins e_A and L^B satisfying the above symmetry property, and define γ as in (1.12), so that γ^2 = g^{-1}f by (1.13). Then

(f γ)_{μν} = f_{μρ} e_A^ρ L^A_ν = L^B_μ (η_BC L^C_ρ e_A^ρ) L^A_ν , (2.10)

and the factor in parentheses is symmetric in (A, B) by (1.11), which shows that the matrix f γ is symmetric. Notice that this is equivalent to γ f^{-1} symmetric. Conversely, suppose we have a real matrix γ such that γ^2 = g^{-1}f and f γ symmetric. We start by choosing an arbitrary vielbein L^A for the metric f_μν, i.e. f_μν = η_AB L^A_μ L^B_ν, and we denote by ℓ_B its dual vector, i.e. f^{μν} = η^AB ℓ_A^μ ℓ_B^ν. We then define

e_A^μ = γ^μ_ν ℓ_A^ν , (2.11)

and e_A is a well-defined vielbein for the metric g_μν (indeed, the symmetry of f γ implies γ f^{-1} γ^t = γ^2 f^{-1} = g^{-1}). Notice that this definition tells us γ^μ_ν = e_A^μ L^A_ν. It remains to be shown that these vielbeins have the required symmetry property. We start from the symmetry of f γ, which we can rewrite as

f_{μρ} e_A^ρ L^A_ν = f_{νρ} e_A^ρ L^A_μ . (2.12)

Multiplying by ℓ_D^μ ℓ_E^ν we get e_E^ρ L_{Dρ} = e_D^ρ L_{Eρ}, and this completes the proof.

As we just showed, the hypotheses (i) of Propositions 1 and 3 state that a certain real (invertible) matrix has a real square root. It is however well known that not all real invertible matrices have real square roots (see e.g. [24,25]), and we will later recall the necessary and sufficient conditions for this to occur. In our case, though, the matrix which should have a square root is not totally arbitrary. For example, in Proposition 1 it must be of the form η M^t η M. This alone does however not ensure the existence of a square root. For example, choosing

M = ((0, 1, 0, 0), (3, 0, 0, 0), (0, 0, 2, 0), (0, 0, 0, 1)) , (2.14)

we get

η M^t η M = diag(−9, −1, 4, 1) , (2.15)

which doesn't have any real square root. Indeed, such a 4 × 4 diagonal matrix with four distinct eigenvalues has 2^4 square roots, which are given here by diag(±3i, ±i, ±2, ±1). None of them is real. Hence the decomposition (1.14) can at best hold for a restricted set of matrices.
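This obstruction is easy to exhibit numerically. In the snippet below, M is a concrete matrix of this type (a swap of the time direction with a space direction, together with rescalings, our choice for illustration); scipy's principal square root of η M^t η M then comes out genuinely complex:

```python
import numpy as np
from scipy.linalg import sqrtm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
M = np.array([[0.0, 1.0, 0.0, 0.0],
              [3.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

A = eta @ M.T @ eta @ M
assert np.allclose(A, np.diag([-9.0, -1.0, 4.0, 1.0]))

# The principal square root is genuinely complex: no real square root exists,
# since all 16 roots are diag(+-3i, +-i, +-2, +-1) and none of them is real.
R = sqrtm(A)
assert np.max(np.abs(R.imag)) > 0.1
assert np.allclose(R @ R, A)
```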
We thus see that considering the matrix M above as given by the form (2.2) invalidates the result of ref. [19]. Notice that, if one now makes the simple choice (1.15)-(1.16) (and considers equation (2.7)), our example involves a "mismatch" between the time directions of the two metrics f_μν and g_μν. However, beyond perturbation theory there is no reason to think that these time directions should coincide or even be compatible. Notice further that, perturbatively, if g = f + h, with h small, then to first order in h one has g^{-1}f ≃ 1 − f^{-1}h and hence γ ≃ 1 − f^{-1}h/2, and so the assumptions (i) and (ii) of Proposition 3 are always true perturbatively.
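The perturbative statement can also be checked numerically: for g = f + h with h small, the principal square root of g^{-1}f agrees with 1 − f^{-1}h/2 to first order (a sketch with an assumed random small perturbation):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

f = eta.copy()
h = rng.normal(size=(4, 4))
h = 1e-4 * (h + h.T)          # a small symmetric perturbation
g = f + h

gamma = np.real(sqrtm(np.linalg.inv(g) @ f))

# first-order prediction gamma ~ 1 - f^{-1} h / 2:
approx = np.eye(4) - 0.5 * np.linalg.inv(f) @ h
assert np.allclose(gamma, approx, atol=1e-6)
# and f gamma is symmetric to the same order:
assert np.allclose(f @ gamma, (f @ gamma).T, atol=1e-6)
```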

Sufficient conditions
Here, in order to formulate simple sufficient conditions allowing to obtain (1.11) and (1.14), we will discuss the precise relation between hypotheses (i) and (ii) of Propositions 1 and 3. We need to recall how square roots of real matrices are obtained. We first use the following theorem (that we quote here from ref. [24]). Theorem 1. Let A be an invertible real square matrix (of arbitrary dimension). If A has no real negative eigenvalues, then there are precisely 2 r+c real square roots of A which are polynomial functions of A, where r is the number of distinct eigenvalues of A and c is the number of distinct complex conjugate eigenvalue pairs. If A has a real negative eigenvalue, then A has no real square root which is a polynomial function of A.

Let us first use this theorem to prove that (i) of Proposition 1 (respectively Proposition 3) implies (ii) of the same proposition whenever the matrix η M^t η M (respectively the matrix g^{-1}f) has no real negative eigenvalues. To see this, just consider a real matrix A with no negative eigenvalues, given by the product of two symmetric invertible matrices S and S′. By virtue of the above theorem, we know that this matrix has at least one real square root which is a polynomial function of A, which we denote F(A). One then has

F(A) = Σ_k c_k A^k , (3.1)

where the sum runs over a finite number of integers k, and the c_k are real numbers. Using the fact that A = SS′, one then has

F(A) = Σ_k c_k (SS′)^k = S ( Σ_k c_k [S′ S S′ ⋯ S S′]_k ) , (3.2)

where the term [S′ S S′ ⋯ S S′]_k contains k factors of S′ and k − 1 factors of S, and is a symmetric matrix (for k = 0 the bracket is simply S^{-1}, which is also symmetric). This means that the square root F(A) is given by the product of S by a symmetric matrix. It is enough to prove our assertion by choosing S to be given by η and S′ to be given by M^t η M (respectively S given by f^{-1} and S′ given by f g^{-1} f). Hence, using the above result, and Propositions 1 and 3, we have shown the following two propositions.

Proposition 4. A sufficient condition for an arbitrary invertible real matrix M to be decomposed as M = λs, λ being the matrix of a Lorentz transformation and s a symmetric matrix, is that the matrix η M^t η M has no negative eigenvalues.
Proposition 5. A sufficient condition for the existence of vielbeins e A µ and L B ν corresponding to the metrics g µν and f µν respectively (i.e. η AB e A µ e B ν = g µν and η AB L A µ L B ν = f µν ) such that e A µ L Bµ = e B µ L Aµ , is that the matrix g −1 f has no negative eigenvalues.
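As an illustration of Proposition 5, take g = η and build f from an explicit vielbein chosen so that g^{-1}f has no negative real eigenvalues (hypothetical numbers); the principal square root γ is then real and f γ is symmetric, as the argument based on Theorem 1 predicts:

```python
import numpy as np
from scipy.linalg import sqrtm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T = np.array([[1.0, 0.2, 0.0, 0.0],
              [0.1, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.2, 1.0]])
g = eta
f = T.T @ eta @ T             # a second Lorentzian metric, close to eta

A = np.linalg.inv(g) @ f
# hypothesis of Proposition 5: no negative real eigenvalues
ev = np.linalg.eigvals(A)
assert not np.any((np.abs(ev.imag) < 1e-12) & (ev.real < 0))

gamma = np.real(sqrtm(A))
assert np.allclose(gamma @ gamma, A)            # gamma^2 = g^{-1} f
assert np.allclose(f @ gamma, (f @ gamma).T)    # f gamma is symmetric
```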
If A has one (or more) real negative eigenvalue(s), Theorem 1 does not imply that A has no real square root, but just that such a square root cannot be a polynomial function of A. In order to state the necessary and sufficient conditions for a real matrix to have a real square root, one first needs to introduce the so-called Jordan decomposition of a matrix. It uses Jordan blocks, which can be defined as the r × r matrices J_(r,z) given by (for r ≥ 2)

J_(r,z) = ((z, 1, 0, ..., 0), (0, z, 1, ..., 0), ..., (0, 0, ..., z, 1), (0, 0, ..., 0, z)) , (3.3)

where z is a complex number, and one has J_(1,z) = (z) for r = 1. One can then show that for an arbitrary n × n matrix A, there exists an invertible matrix P (possibly complex) and a matrix J such that

A = P^{-1} J P , (3.4)

and the matrix J is a so-called Jordan matrix of the form

J = diag( J_(r_1,z_1), J_(r_2,z_2), ..., J_(r_k,z_k) ) , (3.5)

where k is an integer and the matrices J_(r_j,z_j) are called the Jordan blocks of J. For a given matrix A, the number of Jordan blocks, the nature of the distinct Jordan blocks, and the number of times a given Jordan block occurs in the Jordan matrix J are uniquely determined. Moreover, the z_i are the eigenvalues of A. One can further show that a given Jordan block J_(r,z) with z ≠ 0 has precisely two upper triangular square roots, j^±_(r,z), which are in addition polynomial functions of J_(r,z) [24]. These can be used to find all the square roots (possibly complex) of a given matrix using the following theorem.
Theorem 2. Let A be an n × n complex matrix which has a Jordan decomposition given by (3.4)-(3.5); then all the square roots (which may include complex matrices) of A are given by the matrices

P^{-1} U^{-1} diag( j^±_(r_1,z_1), j^±_(r_2,z_2), ..., j^±_(r_k,z_k) ) U P ,

where U is an arbitrary matrix which commutes with J.
The Jordan blocks of a matrix also play a crucial role in the following theorem which gives the necessary and sufficient condition for a real matrix to have a real square root (see e.g. [25]).

Theorem 3. Let A be an invertible real square matrix (of arbitrary dimension). The matrix A has a real square root if and only if, for each of its negative eigenvalues z_i, each Jordan block J_(r_i,z_i) in which this eigenvalue occurs appears an even number of times in the Jordan decomposition of the matrix A.
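Theorem 3 can be illustrated in the simplest non-trivial situation: the eigenvalue −1 appearing in two identical 1 × 1 Jordan blocks admits a real square root (a rotation by π/2), while a single such block does not:

```python
import numpy as np
from scipy.linalg import sqrtm

# -1 occurring in two identical 1x1 Jordan blocks: a real square root exists.
A_even = np.diag([-1.0, -1.0])
R = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation by pi/2
assert np.allclose(R @ R, A_even)

# -1 occurring in a single 1x1 Jordan block: no real square root.
A_odd = np.diag([-1.0, 4.0])
S = sqrtm(A_odd)                           # principal root, necessarily complex
assert np.max(np.abs(S.imag)) > 0.1
assert np.allclose(S @ S, A_odd)
```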
In the following, we will use the above theorems to discuss in detail the cases³ which are not covered by our Propositions 4 and 5. Namely, we will ask if it is possible for a matrix to fulfill condition (i) (of Propositions 1 and 3) without obeying condition (ii) (of the same propositions). We will do this for various space-time dimensions, starting with the two dimensional case, which has less interest as far as gravity is concerned, but where results useful for the other cases can be derived. In this case we will also be able to give an explicit proof of the propositions of section 2.

Two dimensional case
A certain number of the results derived before can easily be obtained in two dimensions by an explicit calculation. Consider first the decomposition (1.14). We ask if an arbitrary 2 × 2 invertible matrix M given by

M = ((A, B), (C, D)) (4.1)

³Note however that according to Theorem 3 these cases should be of zero measure with respect to those which are included.

can be written as (beginning here with proper orthochronous Lorentz transformations)

M = ((c, s), (s, c)) ((a, b), (b, d)) , (4.2)

where c = cosh ψ and s = sinh ψ (with ψ a real number). Expanding the matrix product on the right-hand side, we obtain a system of 4 linear equations obeyed by the three coefficients {a, b, d}, which we can use, eliminating b, to get the necessary condition (A − D)s = (C − B)c, which cannot hold for |C − B| > |A − D| (since |sinh ψ| < cosh ψ). This obviously shows that the decomposition (4.2) is not always possible, as we showed in a more general way in Proposition 1.
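The necessary condition just derived can be verified on randomly generated decomposable matrices (a quick numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(100):
    psi = rng.normal()
    c, sh = np.cosh(psi), np.sinh(psi)
    boost = np.array([[c, sh], [sh, c]])
    a, b, d = rng.normal(size=3)
    sym = np.array([[a, b], [b, d]])
    M = boost @ sym
    (A, B), (C, D) = M
    # the necessary condition obtained by eliminating b:
    assert np.isclose((A - D) * sh, (C - B) * c)
    # hence |C - B| <= |A - D| for any decomposable M:
    assert abs(C - B) <= abs(A - D) + 1e-9
```

A matrix with |C − B| > |A − D| can therefore never arise as such a product.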
In two dimensions, one can also explicitly show that condition (i) of Proposition 1 always implies condition (ii) of the same proposition. Indeed, consider a 2 × 2 matrix m that is written as m = ηs, with s symmetric, and let us assume that this matrix has a real square root. According to the proof of Proposition 4, we know that if this matrix has no negative eigenvalues, it has a square root which is the product of η times a symmetric matrix. Let us then study the case where it has at least one negative eigenvalue. In this case, according to Theorem 3, it must be of the form m = P diag(−u, −u) P^{-1} = −u 1_2, where u is a positive non zero number (note that such a matrix is indeed of the form ηs). It remains then to study all the square roots of

m = −u 1_2 . (4.3)

The matrix equation γ^2 = m is easy to solve explicitly. We obtain that a real square root γ is given by any of the matrices

γ = ((α, β), (−(u + α^2)/β, −α)) , (4.4)

where β and α are real numbers and β is non zero. Choosing then α and β which obey the constraint u = β^2 − α^2, we find an infinite family of real matrix square roots of m which are written in the form of the product of η by a symmetric matrix. A similar straightforward calculation can be made to prove that hypothesis (i) of Proposition 3 implies (ii) of the same proposition. In fact, it is easy to see that for every symmetric matrix f = ((a, b), (b, c)) of Lorentzian signature, one can choose α and β such that the matrix fγ, with γ given by (4.4),

is symmetric, i.e. such that aβ^2 − 2αβb + cu + cα^2 = 0. Indeed, either c ≠ 0 and the discriminant of the above second order polynomial equation with respect to α, Δ_α = 4β^2(b^2 − ac) − 4c^2 u, is positive for large enough β (recall that b^2 − ac > 0 for a Lorentzian f), or c = 0, in which case b must be non zero and α = aβ/(2b) is an obvious solution. This shows that in 2 dimensions, being able to choose zweibeins obeying (1.11) is equivalent to the existence of a real square root of g^{-1}f.
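The two-dimensional discussion can be summarized in a few lines of code: the family (4.4) squares to −u 1_2, and the choice β^2 = u + α^2 makes ηγ symmetric, i.e. exhibits γ as η times a symmetric matrix:

```python
import numpy as np

eta = np.diag([-1.0, 1.0])
u = 2.0

def root(alpha, beta):
    # the general real square root of -u * identity (beta != 0)
    return np.array([[alpha, beta], [-(u + alpha**2) / beta, -alpha]])

alpha = 0.7
gamma = root(alpha, beta=np.sqrt(u + alpha**2))   # enforce u = beta^2 - alpha^2

assert np.allclose(gamma @ gamma, -u * np.eye(2))
X = eta @ gamma
assert np.allclose(X, X.T)     # gamma = eta @ (symmetric matrix)
```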

Three dimensional case
The results obtained in the previous section can be extended to the case of a space-time with 3 dimensions, which has some relevance for physics and in particular massive gravity [26][27][28]. In three dimensions, the only cases which are not covered by Propositions 4 and 5 are the cases of real invertible matrices A which have the form

A = P^{-1} diag(−u, −u, v) P , (5.1)

where u and v are non zero positive real numbers, and P is an invertible matrix. Notice that because A, u and v are real, P may also be assumed to be real. Before going any further, notice that one can find 3 × 3 matrices A, of the form A = ηs with s symmetric, having real square roots, but such that none of these square roots is the product of η by a symmetric matrix. Indeed, consider A to be given by

A = diag(u, −v, −v) . (5.2)

This matrix has the form of a product of η with a symmetric matrix, but none of its real square roots, given by (with α and β real numbers, β non vanishing)

γ = ((±√u, 0, 0), (0, α, β), (0, −(v + α^2)/β, −α)) , (5.3)

has the same form. However, this example does not apply to the cases of interest here because ηA does not have the correct signature: instead of being of signature (−, +, +) as e.g. a matrix of the form s = M^t η M, it is negative definite. In contrast, we are going to show that (i) of Proposition 1 (respectively Proposition 3) implies (ii) of the same proposition whenever the matrix η M^t η M (respectively the matrix g^{-1}f) is of the form (5.1). In order to do that, let us assume (for the same reason as in section 3) that A = P^{-1} J P = S S′, with J = diag(−u, −u, v) and S and S′ two symmetric matrices of (−, +, +) signature. The fact that S′ is symmetric implies that P S P^t commutes with J, and thus it must be of the form

P S P^t = ((S_2, 0), (0, r)) , (5.4)

with S_2 a symmetric two by two matrix and r a real number such that r det(S_2) ≠ 0. Since P S P^t is of (−, +, +) signature, it is obvious that S_2 cannot be negative definite. From the fact that S′ has (−, +, +) signature we can infer that J P S P^t = (P S) S′ (P S)^t also has the same signature. But

J P S P^t = ((−u S_2, 0), (0, v r)) , (5.5)

and thus S_2 cannot be positive definite either. We therefore necessarily conclude that S_2 must have (−, +) signature and that r > 0. This means that there exists a two by two invertible matrix U_2 such that

S_2 = U_2 η_2 U_2^t ,  with η_2 = diag(−1, 1) . (5.6)

Now let us define

Γ = ((√u U_2 ε U_2^{-1}, 0), (0, √v)) ,  with ε = ((0, 1), (−1, 0)) . (5.7)

This matrix clearly commutes with J, and if we further define

γ = P^{-1} Γ P , (5.8)

we can see that γ^2 = P^{-1} J P = A, and thus γ is a real square root of A. Furthermore, it is easy to see using (5.4), (5.6) and (5.7) that S^{-1} γ is symmetric, i.e. that γ is the product of S by a symmetric matrix. This provides a constructive proof of our statement.
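For a concrete check of this construction, take P = 1, S = η_3 and A = diag(−u, −u, v) (so that S_2 = η_2, r = 1 and U_2 = 1): the square root built from the 2 × 2 block √u ε is real, and it equals η_3 times a symmetric matrix:

```python
import numpy as np

eta3 = np.diag([-1.0, 1.0, 1.0])
u, v = 3.0, 2.0

# A = eta3 @ s with s = diag(u, -u, v): the problematic double negative eigenvalue -u.
A = np.diag([-u, -u, v])
assert np.allclose(A, eta3 @ np.diag([u, -u, v]))

# The constructed square root: a rotation-type 2x2 block for the -u pair.
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
gamma = np.zeros((3, 3))
gamma[:2, :2] = np.sqrt(u) * eps
gamma[2, 2] = np.sqrt(v)

assert np.allclose(gamma @ gamma, A)
X = eta3 @ gamma
assert np.allclose(X, X.T)   # gamma is eta3 times a symmetric matrix
```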

Four dimensional case
Considering here the case of 4 × 4 real matrices, and using Theorems 2 and 3, we find that the only real invertible matrices A that have at least one negative real eigenvalue and also have at least one real square root must have one of the following Jordan forms

A = P^{-1} J_k P , (6.1)

where J_k is one of the Jordan matrices

J_1 = diag(−u, −u, −w, −w) , (6.2)
J_2 = diag(−u, −u, v, w) , (6.3)
J_3 = diag(−u, −u, v + iw, v − iw) , (6.4)
J_4 = diag(−u, −u) ⊕ J_(2,w) , (6.5)
J_5 = J_(2,−u) ⊕ J_(2,−u) , (6.6)

where u and w are positive non vanishing real numbers, and v is a positive real number, except in the case of J_3 where it is only restricted to be real and can in particular vanish. Because A is real, the invertible matrix P may be chosen to be real in the J_1, J_2, J_4 and J_5 cases. The case of J_3 is a bit more tricky, but we can also assume P to be real as long as we replace the Jordan matrix J_3 by its real counterpart

J′_3 = diag(−u, −u) ⊕ ((v, w), (−w, v)) . (6.7)

We will show here that results similar to the ones obtained above in the D = 2 and D = 3 cases hold for D = 4 whenever A is of the form (6.1) and A = P^{-1} J_k P = S S′, with S and S′ two symmetric matrices of Lorentzian signature. We will look in turn at the different cases for what concerns J_k. Consider first the case where the matrix A = SS′ is diagonalizable over R. One can show that this is a sufficient (and in fact also necessary) condition to be able to diagonalize (in the sense of forms) in a common basis the matrices S^{-1} and S′ corresponding to two symmetric bilinear forms [29]. In this common basis, each of the diagonal matrices corresponding to S^{-1} and S′ has only one negative eigenvalue, and hence there is no way that A = SS′ can be equal or similar (in the mathematical sense) to J_1, which has four negative eigenvalues. This excludes the J_1 case from the start.
The discussion of the J_2 case proceeds along the same lines as in the D = 3 case. The fact that S′ is symmetric implies that P S P^t commutes with J_2, and thus it must be of the form

P S P^t = ((S_2, 0), (0, S′_2)) , (6.8)

with S_2 and S′_2 symmetric two by two matrices such that det(S_2) det(S′_2) ≠ 0. Notice that S′_2 must be diagonal whenever v ≠ w. Since P S P^t is of (−, +, +, +) signature, it is obvious that S_2 and S′_2 cannot be negative definite. From the fact that S′ has (−, +, +, +) signature we can infer that J_2 P S P^t = (P S) S′ (P S)^t also has the same signature. But

J_2 P S P^t = ((−u S_2, 0), (0, diag(v, w) S′_2)) , (6.9)

and thus S_2 cannot be positive definite either. We therefore necessarily get that S_2 must have (−, +) signature and that S′_2 must be positive definite. In particular, this means that there exist two by two invertible matrices U_2 and V_2 such that

S_2 = U_2 η_2 U_2^t ,  S′_2 = V_2 V_2^t , (6.10)

and whenever v ≠ w, we can further assume that V_2 is diagonal (this is because S′_2 is then diagonal and positive definite). Now let us define

Γ = ((√u U_2 ε U_2^{-1}, 0), (0, V_2 diag(√v, √w) V_2^{-1})) ,  with ε = ((0, 1), (−1, 0)) . (6.11)

This matrix clearly commutes with J_2, and if we further define

γ = P^{-1} Γ P , (6.12)

we can see that γ^2 = P^{-1} J_2 P = A, and thus γ is a real square root of A. Analogously to what has been done in the previous section, using (6.8), (6.10) and (6.11), it is also easy to see that

S^{-1} γ (6.13)

is symmetric. This shows, as in the D = 3 case, that whenever A = P^{-1} J_2 P and hypothesis (i) of Proposition 1 (respectively Proposition 3) is verified, hypothesis (ii) of the same proposition is also verified. The three remaining cases (J_3, J_4 and J_5) actually never occur as long as we assume that A is the product of two symmetric matrices of Lorentzian signature (A = SS′), as we now show. In the J_3 case, it is easier to work with the real Jordan form of A, i.e. J′_3. In order to understand the implications of the symmetry of S′ we need to introduce the matrix

σ = diag(1, 1) ⊕ ((0, 1), (1, 0)) . (6.14)

Then it is easy to see that, given the particular form of J′_3, the symmetry of S′ implies that P S P^t σ commutes with J′_3. Therefore

P S P^t σ = ((S_2, 0), (0, ((r, r′), (−r′, r)))) , (6.15)

with S_2 a symmetric two by two matrix and r, r′ real numbers such that det(S_2)(r^2 + r′^2) ≠ 0. Since the signature of S is (−, +, +, +) and r^2 + r′^2 > 0 (which is the opposite of the determinant of the 2 × 2 lower block of P S P^t), S_2 must be positive definite. But we also know that the signature of J′_3 P S P^t = (P S) S′ (P S)^t is (−, +, +, +), and since its upper 2 × 2 block is −u S_2, the matrix S_2 cannot be positive definite, and we have a contradiction. This proves by reductio ad absurdum that the J_3 case cannot occur in this context. A similar argument works for the J_4 case. Indeed, the symmetry of S′ again implies that P S P^t σ commutes with J_4. Therefore

P S P^t σ = ((S_2, 0), (0, ((r, r′), (0, r)))) , (6.16)

with S_2 a symmetric two by two matrix and r, r′ real numbers such that r^2 det(S_2) ≠ 0.
Since the signature of S is (−, +, +, +) and the determinant of the 2 × 2 lower block of P S P^t is −r^2 < 0, S_2 must be positive definite. But, with a similar argument as in the above case (the upper 2 × 2 block of J_4 P S P^t is again −u S_2), we know that S_2 cannot be positive definite, and we again stumble upon a contradiction. Finally, the J_5 case can be handled in the same manner. Introducing

σ′ = ((0, 1), (1, 0)) ⊕ ((0, 1), (1, 0)) , (6.17)

we can express the symmetry of S′ as the fact that P S P^t σ′ commutes with J_5. This in turn means that P S P^t σ′ is a 2 × 2 array of upper triangular Toeplitz blocks T_(x,y) = ((x, y), (0, x)):

P S P^t σ′ = ((T_(a,b), T_(c,d)), (T_(c,d), T_(e,f))) , (6.18)

with a, b, c, d, e, f real numbers such that ae − c^2 ≠ 0. But det(P S P^t) = (ae − c^2)^2 > 0, which is incompatible with the Lorentzian signature of P S P^t, and this excludes the last case. This lengthy discussion has shown that (i) of Proposition 1 (respectively Proposition 3) implies (ii) of the same proposition whenever the matrix η M^t η M (respectively the matrix g^{-1}f) is of the form (6.1).
In this section (as well as in the previous two) we have therefore shown that (at least up to dimension D = 4) the hypotheses (ii) of Propositions 1 and 3 are superfluous. To summarize, we have proven the following two propositions.

Proposition 6. In D ≤ 4 dimensions, an arbitrary invertible real matrix M can be decomposed as M = λs, λ being the matrix of a Lorentz transformation and s a symmetric matrix, if and only if the real matrix η M^t η M has a real square root.

Proposition 7. In D ≤ 4 dimensions, there exist vielbeins e_A^μ and L^B_ν corresponding to the metrics g_μν and f_μν respectively (i.e. η^AB e_A^μ e_B^ν = g^μν and η_AB L^A_μ L^B_ν = f_μν) such that e_A^μ L_{Bμ} = e_B^μ L_{Aμ}, if and only if there exists a real matrix γ such that γ^μ_ρ γ^ρ_ν = g^{μρ} f_{ρν} (i.e. γ^2 = g^{-1}f).
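Propositions 3 and 7 can be illustrated end to end: starting from a vielbein L for f (a hypothetical numerical example) and the principal square root γ of g^{-1}f, the construction e_A^μ = γ^μ_ν ℓ_A^ν used in the proof of Proposition 3 produces a vielbein for g satisfying the symmetry condition (1.11):

```python
import numpy as np
from scipy.linalg import sqrtm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A vielbein L for f (illustrative numbers), with g = eta:
L = np.array([[1.0, 0.3, 0.0, 0.0],
              [0.2, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.1],
              [0.0, 0.0, 0.4, 1.0]])
g = eta
f = L.T @ eta @ L

gamma = np.real(sqrtm(np.linalg.inv(g) @ f))    # real: no negative eigenvalues here
assert np.allclose(gamma @ gamma, np.linalg.inv(g) @ f)

ell = np.linalg.inv(L).T      # dual vectors: rows are ell_A^mu
e = ell @ gamma.T             # e_A^mu = gamma^mu_nu ell_A^nu

# e is a vielbein for g: e^t eta e = g^{-1}
assert np.allclose(e.T @ eta @ e, np.linalg.inv(g))
# and the symmetry condition (1.11): e_A^mu L_{B mu} is symmetric in (A, B)
S = e @ L.T @ eta
assert np.allclose(S, S.T)
```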
We expect that these results continue to hold in higher dimensions even though we do not have a dimension independent proof.

Time evolution and application to ghost-free massive gravity

Now that we have discussed the different necessary and sufficient conditions for (1.11) to hold, we may ask ourselves if these conditions are preserved through time evolution. It is easy to see that there is no general answer to this question, i.e. it depends on the theory. Consider for example the case of a bimetric theory where the two metrics are not coupled to each other (or just very weakly). The action of such a theory in four dimensions is given by

S = M_g^2 ∫ d^4x √(−g) R[g] + M_f^2 ∫ d^4x √(−f) R[f] . (7.1)

It is easy to see that, in some coordinate patch, one can find a solution of the equations of motion of this theory (a pair of flat metrics with "mismatched" time directions, as in the example of section 2) such that on the t = 0 hypersurface g^{-1}f verifies the above conditions, and one may therefore choose vierbeins obeying condition (1.11) there. However, as soon as t ≠ 0, this condition ceases to be true, as g^{-1}f does not even admit a real square root anymore. Thus, in the above theory, condition (1.11) is not preserved under time evolution. In contrast, let us consider the recently proposed dRGT theory [12][13][14]. We first note that in the metric formulation of this theory, one assumes the existence of a real square root of g^{-1}f (where g is a dynamical metric and f a non-dynamical one); then, according to Proposition 7, this mere assumption is equivalent to assuming the existence of vierbeins verifying condition (1.11). On the other hand, in the vielbein formulation of dRGT theory, it has been shown in [23] (see also [22]) that, at least for some region of parameter space, condition (1.11) is imposed by the equations of motion and is therefore preserved under time evolution. When this is the case, the propositions proven in this work then also imply that the existence of the matrix square root of g^{-1}f is dynamically imposed.

Conclusions
In this note, we studied in detail the necessary and sufficient conditions for two vielbeins L^A and E^B, associated with two metrics f_μν and g_μν defined on a given manifold, to be chosen so that they obey the symmetry condition (1.11), which has been used as a gauge condition in vielbein gravity or massive gravity. We also studied as a byproduct the necessary and sufficient condition for an arbitrary matrix M to be decomposed as in (1.14). We showed that, in contrast to what has sometimes been claimed in the literature, the condition (1.11) and the decomposition (1.14) cannot be achieved in general but require some extra assumptions related to the existence and properties of square roots of matrices. These assumptions are gathered in Propositions 1 to 7 of the present work. An example where this result is particularly relevant is dRGT massive gravity. Indeed, this theory has been considered in two different frameworks: the first one uses two metrics f and g in such a way that the mass term involves the symmetric polynomials of γ = √(g^{-1}f) [12][13][14][15][16], while the second one relies on two vielbeins E^A and L^B and the mass term is polynomial in these 1-forms [22]. A consequence of our results is that, in general, these two formulations are not equivalent. They become so only when condition (1.11) is satisfied. In a region of parameter space it has been shown in [23] that the above condition holds as a consequence of the equations of motion, and thus the equivalence holds dynamically. In the complementary region of parameter space, however, this is not true in general, and it is even possible that the real square root γ does not exist.
We also showed that, in the 4 dimensional case, it is enough to assume that the matrix g^{-1}f admits a real square root in order for condition (1.11) to be satisfiable. However, for general theories with two metrics, this assumption may be violated dynamically, as can be seen explicitly from the example of two decoupled metrics obeying Einstein's equations.