ADM-like Hamiltonian formulation of gravity in the teleparallel geometry: derivation of constraint algebra

We derive a new constraint algebra for a Hamiltonian formulation of the teleparallel equivalent of general relativity treated as a theory of cotetrad fields on a spacetime. The algebra turns out to be closed.


Introduction
In our previous paper [1] we presented a Hamiltonian formulation of the teleparallel equivalent of general relativity (TEGR) regarded as a theory of cotetrad fields on a spacetime. The formulation is meant to serve as a point of departure for a canonical quantization à la Dirac of the theory (preliminary stages of the quantization are described in [2][3][4]). In [1] we found a phase space, a set of (primary and secondary) constraints on the phase space and a Hamiltonian. We also presented an algebra of the constraints. An important fact is that this algebra is closed, i.e. the Poisson bracket of every pair of constraints is a sum of all the constraints multiplied by some factors. This property of the constraint algebra, together with the fact that the Hamiltonian is a sum of the constraints, allowed us to conclude that (1) the set of constraints is complete and (2) all the constraints are of the first class.
Let us emphasize that knowledge of a complete set of constraints and of their properties, as well as of the explicit form of the constraint algebra, is very important from the point of view of Dirac's approach to canonical quantization of constrained systems, since this knowledge enables a correct treatment of the constraints in the quantization procedure.

A. Okołów, Institute of Theoretical Physics, Warsaw University, ul. Hoża 69, 00-681 Warsaw, Poland. E-mail: oko@fuw.edu.pl
However, the derivation of the constraint algebra turned out to be too long to be included in [1]. To fill this gap, that is, to prove that the constraint algebra is correct, we carry out the derivation in the present paper. Moreover, to the best of our knowledge a derivation of a constraint algebra of TEGR treated as a theory of cotetrad fields has never been presented before; in the papers [5,6], which describe a distinct Hamiltonian formulation of this version of TEGR, one can find a constraint algebra, but its derivation is not shown.
The paper is organized as follows: in Sect. 2 we recall the description of the phase space and the constraints on it derived in [1]. In Sect. 3 we derive the constraint algebra. Section 4 contains a short summary.
Let us finally emphasize that, since the present paper plays the role of an appendix to [1], we neither discuss the results here nor compare them with the results of previous works; all this can be found in [1].

Preliminaries
Let M be a four-dimensional oriented vector space equipped with a scalar product η of signature (−, +, +, +). We fix an orthonormal basis (v A ) (A = 0, 1, 2, 3) of M such that the components (η AB ) of η given by the basis form a matrix diag(−1, 1, 1, 1). The matrix (η AB ) and its inverse (η AB ) will be used to, respectively, lower and raise capital Latin letter indices.
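Since (η AB ) = diag(−1, 1, 1, 1) is its own inverse, lowering and then raising a capital Latin index is the identity map. A minimal numerical check of this bookkeeping (the sample components below are ours, purely illustrative):

```python
import numpy as np

# Components of the scalar product eta in the orthonormal basis (v_A),
# signature (-, +, +, +).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # (eta_AB)
eta_inv = np.linalg.inv(eta)           # (eta^AB); equal to (eta_AB) itself

# Lowering and raising a capital Latin index on sample components w^A.
w_up = np.array([2.0, 1.0, 0.0, -3.0])   # w^A (illustrative values)
w_down = eta @ w_up                      # w_A = eta_AB w^B
assert np.allclose(eta_inv @ w_down, w_up)   # raising undoes lowering
assert np.allclose(eta_inv, eta)
```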
The underlying spatial manifold is assumed to be three-dimensional and oriented; we assume moreover that it is compact and without boundary.
In [1] we obtained the phase space of TEGR as the Cartesian product of
1. the set of all quadruplets of one-forms (θ A ) (A = 0, 1, 2, 3) on the manifold such that for each quadruplet the metric induced on the manifold is Riemannian (i.e. positive definite);
2. the set of all quadruplets ( p B ) (B = 0, 1, 2, 3) of two-forms on the manifold; the two-form p A is the momentum conjugate to θ A .
The metric q defines a volume form and a Hodge operator * acting on differential forms on the manifold. Throughout the paper we will often use functions defined as follows [9]: where ε ABC D are the components of the volume form on M given by the scalar product η.
Components of q in a local coordinate frame (x i ) (i = 1, 2, 3) will be denoted by q i j . Obviously where θ A i are the components of θ A . The metric q and its inverse q −1 will be used to, respectively, lower and raise indices (here: lower case Latin letters) of components of tensor fields defined on the manifold. In particular we will often map one-forms to vector fields by means of q; the vector field corresponding to a one-form α will be denoted by α, i.e. if α = α i dx i then Let us emphasize that all objects defined by q (such as the volume form, * , ξ A and q −1 ) are functions of (θ A ), which means that they are functions on the phase space.
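As a sketch of this dependence, assume (consistently with the Riemannian-metric condition quoted from [1], though the precise formula is our assumption here) that the induced metric has components q i j = η AB θ A i θ B j ; then for cotetrad components making q positive definite the index gymnastics can be checked numerically (the sample values are ours):

```python
import numpy as np

# Illustrative cotetrad components theta^A_i at one point (A = 0..3,
# i = 1..3), chosen so the induced metric comes out positive definite.
theta = np.array([
    [0.1, 0.0, 0.0],   # theta^0_i
    [1.0, 0.2, 0.0],   # theta^1_i
    [0.0, 1.0, 0.3],   # theta^2_i
    [0.0, 0.0, 1.0],   # theta^3_i
])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Assumed induced metric: q_ij = eta_AB theta^A_i theta^B_j.
q = np.einsum('AB,Ai,Bj->ij', eta, theta, theta)
assert np.allclose(q, q.T)
assert np.all(np.linalg.eigvalsh(q) > 0)    # Riemannian: positive definite

# Mapping a one-form alpha_i to the vector field alpha^i = q^{ij} alpha_j.
q_inv = np.linalg.inv(q)
alpha = np.array([1.0, -2.0, 0.5])
alpha_vec = q_inv @ alpha
assert np.allclose(q @ alpha_vec, alpha)    # lowering recovers the one-form
```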
In [1] we found some constraints on the phase space of TEGR. Smeared versions of the constraints read where a, b, M and M are smearing fields: a and b are one-forms, M is a function and M a vector field on the manifold. The smearing fields possess altogether ten degrees of freedom per point. In [1] we called B(a) the boost constraint and R(b) the rotation constraint; S(M) is the scalar constraint and V ( M) the vector constraint of TEGR.

Derivation of the constraint algebra
In this section we will calculate the Poisson brackets of the constraints presented above and show that each bracket is a sum of the constraints smeared with some fields. We assume that the reader is familiar with tensor calculus, with the calculus of differential forms (including the contraction X ⌟ α of a vector field X with a differential form α) and with the properties of the Hodge operator defined on a three-dimensional manifold by a Riemannian metric.

Poisson bracket
If F and G are functionals on the phase space then their Poisson bracket reads [8] where the functional derivatives with respect to θ A and p A are defined as follows [9]: δ F/δθ A is a differential two-form and δ F/δp A a differential one-form on the manifold such that for every δθ A and δp A . Calculating the functional derivatives of the smeared constraints would be straightforward if (1) the Hodge operator * did not depend on θ A and (2) the constraints did not depend on ξ A , which is a complicated function of θ A . Thus explicit formulae describing these derivatives are needed.
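A finite-dimensional caricature may help fix conventions: replace the fields by finitely many canonical pairs (θ n , p n ) and the functional derivatives by partial derivatives. The following sketch (our toy model, not the field-theoretic bracket of [8]) then verifies the canonical relations and the antisymmetry of the bracket:

```python
import sympy as sp

# Finite-dimensional caricature: canonical pairs (theta_n, p_n), n = 1, 2,
# standing in for the field variables (theta^A, p_A).
theta1, theta2, p1, p2 = sp.symbols('theta1 theta2 p1 p2')
pairs = [(theta1, p1), (theta2, p2)]

def poisson(F, G):
    """{F, G} = sum_n (dF/dtheta_n dG/dp_n - dG/dtheta_n dF/dp_n)."""
    return sum(sp.diff(F, th)*sp.diff(G, p) - sp.diff(G, th)*sp.diff(F, p)
               for th, p in pairs)

# Canonical relations.
assert poisson(theta1, p1) == 1
assert poisson(theta1, p2) == 0
assert poisson(theta1, theta2) == 0

# Antisymmetry on a sample pair of "functionals".
F = theta1*p2 + theta2**2
G = p1*p2
assert sp.simplify(poisson(F, G) + poisson(G, F)) == 0
```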
Given k-forms α and β, denote If the forms α and β do not depend on the canonical variables then [9] An important property of every two-form α * A β is that it vanishes once contracted with the function ξ A [9]: Consider now a three-form κ A which depends neither on θ A nor on p A , and a functional Using (3.2) we obtain

Auxiliary formulae
The auxiliary formulae presented below will be used throughout the calculations. Besides them we will need many other formulae, which will be derived in the subsequent subsections.
The functions (ξ A ) satisfy the following important conditions [10]: These formulae will be used very often and it would therefore be troublesome to refer to them each time; we kindly ask the reader to keep them in mind, since they will be used without any reference. For any one-form α and any k-form β [9]

*( * β ∧ α) = α ⌟ β. (3.5)

Setting β = * γ and taking into account that * * = id we obtain the identity

*(γ ∧ α) = α ⌟ ( * γ ) (3.6)

valid for every l-form γ . It was shown in [1] that where α is a k-form.
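Both properties used here, * * = id and the identity (3.5), can be verified numerically in the flat model: Euclidean ℝ³ with the standard volume form (the component conventions below are ours):

```python
import numpy as np

# Levi-Civita symbol on a flat three-dimensional model (conventions ours).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def hodge1(a):
    """Hodge star of a one-form a_i: (*a)_jk = eps_ijk a_i."""
    return np.einsum('ijk,i->jk', eps, a)

def hodge2(B):
    """Hodge star of a two-form B_jk: (*B)_i = (1/2) eps_ijk B_jk."""
    return 0.5 * np.einsum('ijk,jk->i', eps, B)

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# ** = id on one-forms of a Riemannian three-manifold (flat model).
assert np.allclose(hodge2(hodge1(a)), a)

# (3.5) for one-forms: the (123)-component of (*b) ^ a equals the scalar
# product a . b, i.e. *( *b ^ a ) is the contraction of a with b.
wedge3 = np.einsum('jk,l->jkl', hodge1(b), a)
comp123 = wedge3[0, 1, 2] + wedge3[1, 2, 0] + wedge3[2, 0, 1]
assert np.isclose(comp123, a @ b)
```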

Tensor calculus
Although our original wish was to carry out all the necessary calculations using the calculus of differential forms only, in some cases we were forced to use tensor calculus. Below we gather expressions very useful for the calculations.
Let ∇ denote the covariant derivative on the manifold defined by the Levi-Civita connection of the metric q. Then (the first equation above holds by virtue of (2.3)). For any one-forms α and β If α is a one-form, β a two-form and γ a three-form then We will also apply the following identities (for a proof see e.g. [9]): and a formula where α and β are k-forms.
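Component manipulations of this kind typically rest on the standard contraction identities of the three-dimensional Levi-Civita symbol (whether these are exactly the identities (3.10)-(3.13) is our assumption); a quick numerical confirmation in the flat model:

```python
import numpy as np

# Levi-Civita symbol in three dimensions (flat model, conventions ours).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0
delta = np.eye(3)

# eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = (np.einsum('jm,kn->jkmn', delta, delta)
       - np.einsum('jn,km->jkmn', delta, delta))
assert np.allclose(lhs, rhs)

# Full contraction: eps_ijk eps_ijk = 3! = 6
assert np.isclose(np.einsum('ijk,ijk->', eps, eps), 6.0)
```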

Auxiliary formulae
The following formulae are useful while calculating the brackets: In (3.14) α, β, b and b′ are one-forms; in (3.15) α and β are k-forms and b is a one-form; in (3.16) α is a k-form; and in (3.17) b is a one-form.
Proof of (3.14) (3.19) where in the second step we used (3.5), in the third the fact that b ∧ * b′ is symmetric in b and b′, and finally in the fifth step we applied (3.6).
Proof of (3.15) To prove (3.15) note that the two-form α * A β given by (3.1) is of the form θ B ⌟ γ AB , where the three-form γ AB is symmetric in A and B: γ AB = γ BA . Thus where in the last step we used (3.6).
Proof of (3.16) where in the first step we used (3.6) and in the second one we applied (3.8).
Proof of (3.17) Let us transform the following expression by means of (3.6): (3.20) The first term at the r.h.s. of this equation is symmetric in A and B, while the second one is symmetric in A and C. This means that both terms vanish once contracted with The last formula, (3.18), is proven in [9].

Poisson bracket {R(b), R(b′)}
Let us define Then by virtue of (2.6) Due to (3.15) where in the last step we used (3.14). Similarly, using (3.15) we obtain Note now that due to (3.17) the term containing D BC A vanishes. By virtue of (3.14)

Poisson bracket of S(M) and S(M′)
Following [9] we split the constraint S(M) given by (2.7) into three functionals (3.26)

Poisson brackets {S i (M), S j (M′)}
Functional derivatives of the functionals read: Let us begin the calculations with the bracket {S 1 (M), S 1 (M′)}: because the first term under the integral is symmetric in M and M′. It was shown in [9] that -to obtain the result we removed terms symmetric in M and M′ and applied (3.3).
-here terms symmetric in M and M′ vanished, as well as one term being an exact three-form (recall that the manifold is compact and without boundary); two terms vanished due to (3.3).
where in the first step some terms disappeared by virtue of their symmetry in M and M′; moreover, we used (3.16) in the last step. Adding (3.28), (3.29), (3.30) and (3.31) we obtain (3.32)

Isolating constraints
Our goal now is to isolate constraints at the r.h.s. of (3.32), that is, to show that the r.h.s. of (3.32) is a sum of the constraints (2.5)-(2.8) smeared with some fields. It is clear (see (2.8)) that the first term at the r.h.s. of (3.32) The second and the third terms of (3.32) are equal to Similarly, the fourth and the fifth ones are equal to Thus (3.32) can be rewritten as follows: where the remaining terms read Now we will transform the remaining terms (3.34) to a form which will be a convenient starting point for isolating constraints. The first term in (3.34) where we used in turn (3.6) and (3.5). Similarly, the second term in (3.34) Next, we transform the third term in (3.34): where in the first step we used (3.6), then (3.7) and (3.6) again. Similarly, the fourth term in (3.34) By virtue of (3.5) the last term in (3.34) can be expressed as follows [9]: Now it is easy to see that the following pairs 1. the first term at the r.h.s. of (3.35) and the first term at the r.h.s. of (3.37), 2. the first term at the r.h.s. of (3.36) and the first term at the r.h.s. of (3.38), 3. the second term at the r.h.s. of (3.37) and (3.39) sum up to zero. Moreover, the second term at the r.h.s. of (3.35) is equal to the second term at the r.h.s. of (3.36). Consequently, the terms (3.34) can be expressed as Note now that the term above containing ξ A can be transformed as follows: (3.41) Gathering the result above, (3.40) and (3.33) we obtain where the remaining terms now read Now let us show that the remaining terms (3.43) can be expressed as R(b) with some one-form b.
By shifting the contraction θ B ⌟ in the first term above and using (3.6) one can easily show that the sum of the first and the second terms in (3.43) reads Expressing the third term in (3.43) by means of the components of the variables and the covariant derivative ∇ a , and using (3.10) and (3.12), we obtain Similarly, the fourth term in (3.43) Consequently, the sum of the third and the fourth terms in (3.43) is of the following form: -here in the fourth step we used (3.5). Thus the sum of the third and the fourth terms in (3.43), once integrated over the manifold, reads: Finally, the last term in (3.43) can easily be transformed to (3.46) Gathering the three results (3.44), (3.45) and (3.46), we conclude that the terms (3.43) can be expressed as where the terms independent of p A read where we isolated the factor mξ A by means of (3.6) and (3.5). It is not difficult to show that the terms in the square brackets in (3.48) sum up to zero. Consequently, the terms (3.43) are equal to Inserting this result into (3.42) we obtain (3.50)

Another form of {S(M), S(M′)}
Let us now transform (3.50) to a form in which both the constraint B given by (2.5) and the constraint R defined by (2.6) appear on an equal footing. Consider the following transformation: It is easy to see that under this transformation Assume now that a function F on the phase space is mapped by the transformation (3.51) to a function G and that we know from elsewhere that F = G. Then F = (F + G)/2. We will use this fact to transform (3.50) to the desired form.
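The averaging argument is elementary and worth isolating: if an involution T maps F to G and F = G is known independently, then F = (F + G)/2, and the averaged form is manifestly T-invariant. A toy illustration with T swapping two variables (our example, standing in for (3.51)):

```python
import sympy as sp

x, y = sp.symbols('x y')

def T(expr):
    """Toy involution standing in for (3.51): swap x and y."""
    return expr.subs({x: y, y: x}, simultaneous=True)

# F equals x**2 + y**2, but is written in a non-manifestly-symmetric way.
F = x**2 - y**2 + 2*y*(x + y) - 2*x*y
G = T(F)                        # image of F under the involution
assert sp.simplify(F - G) == 0  # F = G, known "from elsewhere"

# Hence F = (F + G)/2, and the averaged form is manifestly invariant.
sym = sp.expand((F + G) / 2)
assert sym == x**2 + y**2
assert T(sym) == sym
```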
Let us now express the formula (3.41) in the following form: Note that the l.h.s. of the identity above is invariant with respect to the transformation (3.51). Consequently, the r.h.s. has to be invariant too; treating the r.h.s. as a function F, by virtue of (3.52) we obtain from it a function G such that F = G. Thus the r.h.s. is equal to (F + G)/2 and reads (3.53) In this way we obtained an alternative expression for the fourth and the fifth terms at the r.h.s. of (3.50). Similarly, the expression (3.43) is also invariant with respect to (3.51). On the other hand, (3.43) is equal to the last term at the r.h.s. of (3.50) (see the sentence containing (3.49)), which means that this term can be treated as a function F and expressed as (3.55) Note that the sum of the constraints B and R at the r.h.s. of (3.55) is explicitly invariant with respect to the transformation (3.51).

Poisson bracket of R(b) and S(M)
Recall that the constraints R(b) and S(M) are defined by, respectively, (2.6) and (2.7).
To show that the bracket {R(b), S(M)} is a sum of the constraints (2.5)-(2.8) smeared with some fields we will proceed according to the following prescription. The bracket under consideration can be expressed as where the functionals at the r.h.s. are given by (3.21) and (3.26). It is not difficult to see that each bracket in the sum is either quadratic in p A , linear in p A or independent of p A . We will therefore first calculate the brackets and then gather similar terms according to this classification. Next, it will turn out that the terms quadratic in p A can be re-expressed as a constraint plus a term linear in p A . Then it will turn out that all the linear terms can be re-expressed as some constraints plus terms independent of p A , and that all the terms independent of p A sum up to zero. Besides this prescription we will need some formulae and identities which will simplify the calculations.
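The classification step amounts to collecting the coefficients of a polynomial in the momenta. A toy illustration (single canonical pair, our sample expression):

```python
import sympy as sp

p, theta = sp.symbols('p theta')

# A sample expression standing in for a computed bracket: a polynomial
# in the momentum p with theta-dependent coefficients.
bracket = 3*theta*p**2 + sp.sin(theta)*p + theta**2 + 2*p**2

quadratic = bracket.coeff(p, 2)   # terms quadratic in p
linear = bracket.coeff(p, 1)      # terms linear in p
constant = bracket.coeff(p, 0)    # terms independent of p

assert quadratic == 3*theta + 2
assert linear == sp.sin(theta)
assert constant == theta**2
# The three classes exhaust the expression.
assert sp.expand(quadratic*p**2 + linear*p + constant - bracket) == 0
```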

Auxiliary formulae
The following formulae will be used in the sequel while calculating {R(b), S(M)} (they are also very useful in calculating {B(a), S(M)}): where in (3.57) α is a one-form; in (3.58) α and β are k-forms and γ is a one-form; in (3.59) α and β are k-forms; and finally in (3.60) α A is a k-form, while β A is a (3 − k)-form. Moreover, we will apply the following two identities: where α, β and κ are one-forms, and γ is a two-form. Note that if in (3.58) α and β are three-forms then the formula can be simplified further. Indeed, in this case * α and * β are zero-forms, thus and setting this equality into the r.h.s. of the first line of (3.58) we obtain (3.64) Using this result we can also simplify (3.59): if α and β are three-forms then setting γ = (δS 1 (M))/(δp A ) in (3.64) we obtain Proof of (3.56) Recall that the functions ξ B are given by the formula (2.2). Using it we obtain where in the second step we used (3.7), and in the last one (3.18).
Proof of (3.57) By virtue of (3.7) and (3.18) the l.h.s. of (3.57) can be transformed as follows: Shifting the contraction θ D ⌟ in the first of the two resulting terms and applying once again (3.7) and (3.18) we obtain Now, to justify (3.57) it is enough to note that (1) the first term on the r.h.s. of the equation above is proportional to the term on the l.h.s. and (2) the second term on the r.h.s., by virtue of (2.2), is equal to 3( * ξ A ) θ D ⌟ α.
Proof of (3.58) Let us now consider the l.h.s. of (3.58): By virtue of (3.5) and (3. Setting this result into the r.h.s. of the previous equation we obtain the r.h.s. of the first line of (3.58). To obtain the result in the second line it is enough to shift the contraction θ A ⌟ in the term −( θ A ⌟ α) ∧ * β ∧ γ .
Proof of (3.59) By virtue of the first line of Eq. (3.58) just proven where the first equality holds by virtue of (3.5) and the last one due to (3.16).
On the other hand, due to (3.8), Setting the two results above into (3.66) we get The last step of the proof aims at simplifying the last term in the equation above: (here we used (3.5) and (3.6)). Taking into account that * α ∧ β = α ∧ * β we set the result above into (3.67), obtaining thereby (3.59).
Proof of (3.60) By virtue of (3.7) Now, to get (3.60) it is enough to shift the contraction θ B ⌟ in the first term at the r.h.s. of the equation above.
Proof of (3.61) First transform the l.h.s. of (3.7) by means of (3.5), then act with d on both sides of the resulting formula.
Proof of (3.62) Note that β ∧ γ is a three-form. Therefore * α * where in the second step we used (3.5). Transforming similarly the term * β * (α ∧ γ ) we obtain Proof of (3.63) Act with the Hodge operator * on both sides of (3.62) and set κ = * γ .

Terms quadratic in p A
Terms quadratic in the momenta come from the Poisson bracket where we used (3.15) to simplify the r.h.s. It is not difficult to see that Let us now transform the remaining term in (3.68): by virtue of (3.59) Consequently, the second term at the r.h.s. above vanishes and (3.69)

Terms linear in p A
Here we will calculate the brackets {R 1 (b), S 2 (M)} and {R 2 (b), S 1 (M)}, which give terms linear in the momenta.

(3.70) where we used the second line of (3.58) and (3.3). The other bracket reads (3.71) The last two terms in the square bracket above give zero once multiplied by (δS 1 (M))/(δp A ). Indeed, due to (3.65) On the other hand, -the first equality holds by virtue of (3.57) and due to the fact that ξ A (δS 1 (M))/(δp A ) = 0 ((δS 1 (M))/(δp A ) is of the form θ A i γ i j dx j for some tensor field γ i j ); in the last step we used (3.7). The fact just mentioned and (3.6) allow us to express the only remaining term in (3.71) as follows: (3.72) Gathering all the terms linear in p A , that is, (3.70) and (3.72), we obtain (3.73)

Terms independent of p A
It turns out that the remaining three brackets, Applying (3.64), (3.18) and (3.57) it is not difficult to show that .
Our goal now is to express the r.h.s. of the equation above as a sum of a term containing Mb and one containing d M. To this end we first act with the operator d on the factors constituting the terms in the square brackets. Next, where possible, we use (3.16) to simplify θ A ∧ * (b ∧ θ A ) to 2 * b and θ A ∧ * (dθ B ∧ θ A ) to * dθ B ; finally, in all the terms containing M and * b we shift the Hodge operator * to get b. Thus we obtain Note that since * dθ A is a one-form, the first term at the r.h.s. above vanishes. Thus the terms independent of p A read Now, by setting α = b, β = θ B and γ = dθ B in (3.62), the last term above can be simplified to This means that the terms independent of p A read (3.74)

Isolating constraints
Our goal now is to express the bracket {R(b), S(M)} as a sum of the constraints smeared with some fields.

Terms quadratic in p A

We are going to transform the last term of (3.69) to a form containing the factor θ A ∧ * p A , which is a part of the constraint R(b): applying (3.60) with α A = * p A to the term we obtain The first term at the r.h.s. above is zero; indeed, using (3.6) we get Note now that * ( p A ∧ θ B ) * ( p C ∧ θ B ) is symmetric in A and C, while θ A ∧ θ C is antisymmetric. Transforming the remaining term in (3.75) we obtain where in the last step we used (3.8) and (3.7). Inserting this result into (3.69) we arrive at an alternative description of the terms quadratic in p A : (3.76) Consequently, the bracket {R(b), S(M)} is of the following form +terms linear in p A +the terms (3.74) independent of p A , (3.77) where now the phrase "terms linear in p A " means the terms (3.73) and the last term in (3.76). Now we are going to isolate constraints from the linear terms.
Terms linear in p A

The terms read The first term can be written as where in the last step we used (3.6). The last term of (3.78) -here in the second step we used (3.5). The last two results allow us to express in a simpler form the sum of the terms in (3.78) which do not contain d M: Our goal now is to rewrite the sum above in the form of a single term containing the factor θ A ∧ * p A . Let us begin with the first term in (3.80): where in the last step we first applied (3.6) and (3.5) to the factors containing the contraction θ B ⌟ and then (3.8). The second term in (3.80) (in the second step we applied (3.6)). Finally, the last term in (3.80) by virtue of (3.5) can be written as Gathering (3.81), (3.82) and the equation above and using (3.5) we obtain the desired expression for those terms in (3.78) which do not contain d M: We can now simplify the term in the big parentheses; it is enough to use (3.63) setting where the phrase "terms independent of p A " means here the terms given by (3.74), the last term in (3.80) and the last one in (3.83).
Terms independent of p A

Gathering all the terms under consideration which appear in (3.84) we see that the sum of the last term of (3.74) and the last term of (3.80) is zero. Note now that the last term in (3.83) contains ξ A , which does not appear in the other terms. To get rid of ξ A let us use (3.61): Consequently, the terms in (3.84) independent of p A read (3.86)

Poisson bracket of B(a) and S(M)
Using the same procedure as in the previous subsection we arrive at +terms linear in p A + terms independent of p A , (3.87) where the terms linear in p A read and the terms independent of p A read Note now that the form of the constraints at the r.h.s. of (3.87) closely resembles the form of the constraints at the r.h.s. of (3.86). We then assume that To justify the assumption one can proceed as follows: one adds to the r.h.s. of (3.87) zero expressed as (here in the last step we used (3.61)). Next, using tensor calculus (Sect. 3.1.3), it is possible to show that all the remaining terms linear in p A sum up to zero, and that so do all the remaining terms independent of the momenta. Note that now the description of the terms linear in and independent of the momenta given just below Eq. (3.87) has to be completed by taking into account the last two terms in (3.89).

Poisson brackets of V ( M)
The functional derivatives of the smeared vector constraint V ( M) (see (2.8)) are of the following form [9]: In the formulae above L M denotes the Lie derivative with respect to the vector field M.
As mentioned in the introduction, a discussion of the results can be found in [1]; here we restrict ourselves to the statement that the Poisson bracket of every pair of the constraints (2.5)-(2.8) is a sum of the constraints smeared with some fields. In other words, the constraint algebra presented above is closed.