ADM-like Hamiltonian formulation of gravity in the teleparallel geometry: derivation of constraint algebra

We derive a new constraint algebra for a Hamiltonian formulation of the Teleparallel Equivalent of General Relativity treated as a theory of cotetrad fields on a spacetime. The algebra turns out to be closed.


Introduction
In our previous paper [1] we presented a Hamiltonian formulation of the Teleparallel Equivalent of General Relativity (TEGR) regarded as a theory of cotetrad fields on a spacetime; the formulation is meant to serve as a point of departure for canonical quantization à la Dirac of the theory (preliminary stages of the quantization are described in [2,3,4]). In [1] we found a phase space, a set of (primary and secondary) constraints on the phase space and a Hamiltonian. We also presented an algebra of the constraints. An important fact is that this algebra is closed, i.e. the Poisson bracket of every pair of constraints is a sum of all the constraints multiplied by some factors. This property of the constraint algebra, together with the fact that the Hamiltonian is a sum of the constraints, allowed us to conclude that (i) the set of constraints is complete and (ii) all the constraints are of the first class.
Let us emphasize that knowledge of a complete set of constraints and their properties, as well as knowledge of an explicit form of the constraint algebra, is very important from the point of view of Dirac's approach to canonical quantization of constrained systems, since this knowledge enables a correct treatment of the constraints in the quantization procedure.
However, the derivation of the constraint algebra turned out to be too long to be included in [1]. To fill this gap, that is, to prove that the constraint algebra is correct, we carry out the derivation in the present paper.¹ Moreover, to the best of our knowledge a derivation of the constraint algebra of TEGR treated as a theory of cotetrad fields has never been presented before; in the papers [5,6] describing a distinct Hamiltonian formulation of this version² of TEGR one can find a constraint algebra, but its derivation is not shown.
The paper is organized as follows: in Section 2 we recall the description of the phase space and the constraints on it derived in [1]. In Section 3 we derive the constraint algebra. Section 4 contains a short summary.
Let us finally emphasize that, since the present paper plays the role of an appendix to [1], we neither discuss the results here nor compare them to the results of previous works; all of this can be found in [1].

Preliminaries
Let M be a four-dimensional oriented vector space equipped with a scalar product η of signature (−, +, +, +). We fix an orthonormal basis (v_A) (A = 0, 1, 2, 3) such that the components (η_AB) of η in this basis form the matrix diag(−1, 1, 1, 1). The matrix (η_AB) and its inverse (η^AB) will be used to, respectively, lower and raise capital Latin indices.
Let Σ be a three-dimensional oriented manifold. We assume moreover that it is a compact manifold without boundary.
In [1] we obtained the phase space of TEGR as the Cartesian product of:

1. the set of all quadruplets of one-forms (θ^A) (A = 0, 1, 2, 3) on Σ such that for each quadruplet the metric

q := η_AB θ^A ⊗ θ^B (2.1)

on Σ is Riemannian (i.e. positive definite);

2. the set of all quadruplets (p_B) (B = 0, 1, 2, 3) of two-forms on Σ; the two-form p_A is the momentum conjugate to θ^A.
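As a simple illustration (our example, not taken from [1]): on Σ = T³ with the standard angular coordinates (x^i), a constant cotetrad with vanishing time-leg yields a Riemannian q and hence a point of the configuration space:

```latex
% Illustrative configuration (our example): flat cotriad, vanishing time-leg
\theta^0 = 0, \qquad \theta^a = \mathrm{d}x^a \quad (a = 1,2,3)
\;\Longrightarrow\;
q = \eta_{AB}\,\theta^A \otimes \theta^B
  = \delta_{ab}\,\mathrm{d}x^a \otimes \mathrm{d}x^b ,
```

which is positive definite; any quadruplet of two-forms (p_B) then completes it to a point of the phase space.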
The metric q defines a volume form ǫ on Σ and a Hodge operator * acting on differential forms on the manifold. Throughout the paper we will often use functions (ξ^A) on Σ defined as follows [9]: (2.2)

¹ The derivation is also an example of an application of differential-form calculus to a derivation of a constraint algebra, which is usually done by means of tensor calculus.

² There is another version of TEGR, the configuration variables of which are cotetrad fields and flat Lorentz connections of non-zero torsion. For a complete Hamiltonian analysis of this version of TEGR see [7].
where ε ABCD are components of a volume form on M given by the scalar product η.
Components of q in a local coordinate frame (x^i) (i = 1, 2, 3) on Σ will be denoted by q_ij. Obviously,

q_ij = η_AB θ^A_i θ^B_j, (2.3)

where θ^A_i are components of θ^A. The metric q and its inverse q^−1 will be used to, respectively, lower and raise indices (here: lower-case Latin letters) of components of tensor fields defined on Σ. In particular, we will often map one-forms to vector fields on Σ; the vector field corresponding to a one-form α will be denoted by α⃗, i.e. if α = α_i dx^i then α⃗ := q^ij α_i ∂_j.
Let us emphasize that all objects defined by q (such as ǫ, *, ξ^A and q^−1) are functions of (θ^A), which means that they are functions on the phase space.
In [1] we found some constraints on the phase space of TEGR. Smeared versions of the constraints, denoted B(a), R(b), S(M) and V(M⃗) below, read (2.5)–(2.8).

Derivation of the constraint algebra
In this section we will calculate Poisson brackets of all pairs of the constraints presented above and show that each Poisson bracket is a sum of the constraints smeared with some fields.
The calculations needed to achieve this goal will be long, laborious and complicated. We assume that the reader is familiar with tensor calculus, differential-form calculus (including the contraction X⌟α of a vector field X with a differential form α) and the properties of the Hodge operator defined on a three-dimensional manifold by a Riemannian metric.
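For reference, the standard properties of the Hodge operator on an oriented three-dimensional Riemannian manifold that are used throughout read (our summary):

```latex
% For k-forms \alpha, \beta on (\Sigma, q), with volume form \epsilon:
\alpha \wedge *\beta = \beta \wedge *\alpha
  = \langle \alpha, \beta \rangle\,\epsilon ,
\qquad
*\,* = (-1)^{k(3-k)}\,\mathrm{id} = \mathrm{id} ,
\qquad
*\,1 = \epsilon , \quad *\,\epsilon = 1 .
```

Note that the double-Hodge sign factor is trivial here precisely because the metric is Riemannian and the manifold three-dimensional.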

Poisson bracket
If F and G are functionals on the phase space then their Poisson bracket reads [8]

{F, G} = \int_\Sigma \Big( \frac{\delta F}{\delta \theta^A} \wedge \frac{\delta G}{\delta p_A} - \frac{\delta G}{\delta \theta^A} \wedge \frac{\delta F}{\delta p_A} \Big),

where the functional derivatives with respect to θ^A and p_A are defined as follows [9]: δF/δθ^A is a differential two-form on Σ and δF/δp_A is a differential one-form on Σ such that

δF = \int_\Sigma \Big( \delta\theta^A \wedge \frac{\delta F}{\delta \theta^A} + \delta p_A \wedge \frac{\delta F}{\delta p_A} \Big)

for every δθ^A and δp_A. Calculating functional derivatives of the smeared constraints would be straightforward if (i) the Hodge operator * did not depend on θ^A and (ii) the constraints did not depend on ξ^A, which is a complicated function of θ^A. Thus explicit formulae describing these derivatives are needed.
Given k-forms α and β, denote (3.1). If the forms α, β do not depend on the canonical variables then [9] (3.2) holds. An important property of every two-form α ∧ *′_A β is that it vanishes once contracted with the function ξ^A [9]. Consider now a three-form κ_A on Σ which does not depend on θ^A and p_A, and the corresponding functional. Using (3.2) we obtain its functional derivatives.

Auxiliary formulae
The auxiliary formulae presented below will be used throughout the calculations. Besides them we will need many other formulae, which will be derived in the subsequent subsections.
The functions (ξ^A) satisfy the following important conditions [10]. These two equations imply further identities. All these formulae will be used very often, and it would be troublesome to refer to them each time; we therefore kindly ask the reader to keep them in mind, since they will be used without any reference. For any one-form α and any k-form β [9]

*(*β ∧ α) = α⃗⌟β. (3.5)

Setting β = *γ and taking into account that ** = id we obtain the identity

*(γ ∧ α) = α⃗⌟(*γ), (3.6)

valid for every l-form γ. It was shown in [1] that (3.7) holds, where α is a k-form on Σ.
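As a concrete check of (3.5) (our example, in Euclidean coordinates on ℝ³ with orientation dx ∧ dy ∧ dz):

```latex
\beta = \mathrm{d}x \wedge \mathrm{d}y , \quad \alpha = \mathrm{d}x :
\qquad
*(*\beta \wedge \alpha)
  = *(\mathrm{d}z \wedge \mathrm{d}x)
  = \mathrm{d}y
  = \partial_x \lrcorner\, (\mathrm{d}x \wedge \mathrm{d}y)
  = \vec{\alpha}\,\lrcorner\,\beta .
```

Here α⃗ = ∂_x because the metric is Euclidean, so both sides of (3.5) indeed agree.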

Tensor calculus
Although our original wish was to carry out all the necessary calculations using differential-form calculus only, in some cases we were forced to use tensor calculus. Below we gather some expressions which will be applied repeatedly in the sequel. Let ∇ denote the covariant derivative on Σ defined by the Levi-Civita connection of the metric q. Consequently ∇_a q_ij = 0, and by virtue of (2.3)

θ^{Bb} ∇_a θ_{Bb} = 0. (3.10)

Note also that ∇_a ǫ_ijk = 0 (3.11), because ǫ is defined by q.
For any one-forms α and β we have (3.12). If α is a one-form, β a two-form and γ a three-form, then (3.13). We will also apply the identities (3.14) (for a proof see e.g. [9]) and the formula (3.15), where α and β are k-forms.
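The "epsilon–delta" identity invoked via (3.14) can be checked numerically; the following sketch (ours, in flat orthonormal indices) verifies ǫ^{iab} ǫ_{ijk} = δ^a_j δ^b_k − δ^a_k δ^b_j:

```python
import numpy as np

# Levi-Civita symbol eps_{ijk} in an orthonormal frame (indices 0..2);
# eps[i,j,k] = +1/-1 for even/odd permutations of (0,1,2), else 0.
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

# Contract one index pair: eps^{iab} eps_{ijk}
lhs = np.einsum('iab,ijk->abjk', eps, eps)

# The "epsilon-delta" identity: delta^a_j delta^b_k - delta^a_k delta^b_j
d = np.eye(3)
rhs = np.einsum('aj,bk->abjk', d, d) - np.einsum('ak,bj->abjk', d, d)

print(np.array_equal(lhs, rhs))  # True
```

In curved indices the same identity holds with density factors absorbed into ǫ_ijk, since ǫ is defined by q.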

Poisson brackets of B(a) and R(b)
In this subsection we will calculate the Poisson brackets {B(a), B(a′)}, {B(a), R(b)} and {R(b), R(b′)}.

Auxiliary formulae
The following formulae will be used while calculating the brackets: in (3.16) α, β, b and b′ are one-forms; in (3.17) α and β are k-forms and b is a one-form; in (3.18) α is a k-form; and in (3.19) b is a one-form.
Proof of (3.16). In the second step we used (3.5), in the third the fact that b ∧ *b′ is symmetric in b and b′, and finally in the fifth step we applied (3.6).
Proof of (3.17). To prove (3.17) note that the two-form α ∧ * ′ A β given by (3.1) is of the form θ B γ AB , where the three-form γ AB is symmetric in A and B: γ AB = γ BA . Thus , where in the last step we used (3.6). But (b ∧ θ A ∧ θ B ) is antisymmetric in A and B, hence (3.17) follows.
Proof of (3.18). In the first step we used (3.6) and in the second one we applied (3.8).
Proof of (3.19). Let us transform the following expression by means of (3.6): Note that the first term at the r.h.s. of the equation above is symmetric in A and B, while the second one-in A and C. This means that both terms vanish once contracted with ǫ DBCA .
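The vanishing argument used here (and repeatedly below) is the elementary fact that contracting a symmetric pair of indices with an antisymmetric one gives zero:

```latex
S^{AB} = S^{BA}, \quad A_{AB} = -A_{BA}
\;\Longrightarrow\;
S^{AB} A_{AB} = S^{BA} A_{BA} = S^{AB} A_{BA} = -S^{AB} A_{AB}
\;\Longrightarrow\;
S^{AB} A_{AB} = 0 .
```

The first equality is just a relabelling of the dummy indices; the next two use the symmetry of S and the antisymmetry of A.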
Taking into account (2.5) we see that B(a) = B 1 (a) + B 2 (a) and consequently (3.24) Corresponding variational derivatives read (3.25) Obviously, {B 1 (a), B 1 (a ′ )} = 0. The next term in (3.24) where in the first step we used (3.3) and in the third one (3.16). The last term in (3.24) due to (3.3) reads By virtue of (3.20) and (3.5) Thus where in the third step we used (3.5).
We conclude that (see (2.6)) Then by virtue of (2.6) the corresponding variational derivatives read (3.29) Due to (3.17) the first bracket at the r.h.s. of (3.28) reduces to where in the last step we used (3.16). Similarly, by virtue of (3.17) the next two brackets in (3.28) reduce to Note now that due to (3.19) the term containing ǫ DBCA vanishes. Therefore Using (3.17) we immediately obtain where we transformed the first term by means of (3.16). Three terms constituting the next bracket in (3.31) vanish by virtue of (3.19), (3.17) and (3.3), and consequently The bracket {B 1 (a), R 2 (b)} is obviously zero and where in the first step we used (3.3), in the second one we applied (3.20) and in the fourth step (3.18).

Gathering the partial results we obtain Let us show now that the integrand in the last line of (3.32) is zero. The first term in the integrand can be expressed as follows: the second equality holds by virtue of (3.15) and the third one due to (3.13) and (3.11). Using (3.14) we rewrite δ l n by means of "epsilons" and continue the transformations; here in the second step we applied (3.14) to express ǫ lab ǫ ijk by means of "deltas". On the other hand, the second term in the integrand: the second equality holds by virtue of (3.15) and (3.11). Thus the sum of the last two terms in (3.32) is equal to where we used (3.9). Finally

Poisson bracket of S(M) and S(M′)

Following [9] we split the constraint S(M) given by (2.7) into three functionals (3.34) Then Functional derivatives of the functionals read: because the first term under the integral is symmetric in M and M′. It was shown in [9] that Because S 3 (M) does not depend on the momenta Next, Thus we obtain an explicit expression for the r.h.s. of (3.35):

Isolating constraints
Our goal now is to isolate constraints at the r.h.s. of (3.46), that is, to show that the r.h.s. of (3.46) is a sum of the constraints (2.5)-(2.8) smeared with appropriately chosen fields.
It is clear (see (2.8)) that the first term at the r.h.s. of (3.46) The second and the third terms are equal to Similarly, the fourth and the fifth ones are equal to Thus (3.46) can be rewritten as follows: where the remaining terms read

Now we will transform the remaining terms (3.48) to a form which will be a convenient starting point for isolating constraints. The first term in (3.48) where we used (3.6) in the second step and (3.5) in the fourth one. Similarly, the second term in (3.48). Next, we transform the third term in (3.48): where in the first step we used (3.6), in the fourth one (3.7) and in the last one (3.6) again. Similarly, the fourth term in (3.48) By virtue of (3.5) the last term in (3.48) can be expressed as follows [9]:

Now it is easy to see that the following pairs

1. the first term at the r.h.s. of (3.49) and the first term at the r.h.s. of (3.51),
2. the first term at the r.h.s. of (3.50) and the first term at the r.h.s. of (3.52),
3. the second term at the r.h.s. of (3.51) and (3.53)

sum up to zero. Moreover, the second term at the r.h.s. of (3.49) is equal to the second term at the r.h.s. of (3.50). Consequently, the terms (3.48) can be expressed as Note now that the term above containing ξ A can be transformed as follows: Gathering the result above, (3.54) and (3.47) we obtain where the remaining terms now read

Now let us show that the remaining terms (3.57) can be expressed as R(b) with the one-form b being a complicated function of the canonical variables and m. By shifting the contraction θ B in the first term above and using (3.6) one can easily show that the sum of the first and the second terms in (3.57) reads Let us now express the third term in (3.57) by means of the components of the canonical variables and the covariant derivative ∇ a (see (3.12)): where in the last step we used (3.14).
Similarly, the fourth term in (3.57) and consequently the sum of the third and the fourth terms in (3.57) is of the following form: here in the fourth step we used (3.5). Integrating the equation above over Σ we obtain: Finally, the last term in (3.57) Gathering the three results (3.58), (3.59) and (3.60) we conclude that the terms (3.57) can be expressed as

Now it is enough to show that the terms (3.62) sum up to zero for all m and θ A . To this end let us isolate the factor mξ A in each term of (3.62); using (3.6) and (3.5) we obtain Consider now the terms in the square bracket above: Setting this result to (3.56) we obtain

Let us now transform the result to a form in which both constraints B given by (2.5) and R defined by (2.6) appear on an equal footing. Consider the following transformation: It is easy to see that under this transformation Let us now express the formula (3.55) in the following form: Note now that the l.h.s. of the identity above is invariant with respect to the transformation (3.65). Consequently, the r.h.s. has to be invariant too. Thus the fourth and the fifth terms in (3.64) where the last equation holds by virtue of the following trivial fact: if x = y then x = (1/2)(x + y). Similarly, the expression (3.57) is also invariant with respect to (3.65). Thus by virtue of the identity (3.63) the last term in (3.64) Note that the sum of the constraints B and R at the r.h.s. of (3.68) is explicitly invariant with respect to the transformation (3.65).

Poisson bracket of R(b) and S(M)
Recall that the constraints R(b) and S(M ) are defined by, respectively, (2.6) and (2.7).
To show that the bracket {R(b), S(M)} is a sum of the constraints (2.5)–(2.8) smeared with some fields, we will proceed according to the following prescription. The bracket under consideration can be expressed as a sum of brackets of the functionals given by (3.27) and (3.34). It is not difficult to see that each bracket in the sum is either quadratic in p A , linear in p A or independent of p A . So we will first calculate the brackets and then gather similar terms according to this classification. Next, it will turn out that the terms quadratic in p A can be re-expressed as a constraint plus a term linear in p A . Then it will turn out that all the linear terms can be re-expressed as some constraints plus a term independent of p A . Finally, we will show that all the terms independent of p A sum up to zero. Besides this prescription we will need some formulae and identities which will make the calculations easier.

Auxiliary formulae
The following formulae will be used in the sequel while calculating both {R(b), S(M )} and {B(a), S(M )}: in (3.70) α is a one-form; in (3.71) α and β are k-forms and γ a one-form; in (3.72) α and β are k-forms. Moreover, we will apply the following two identities, where α, β and κ are one-forms, and γ is a two-form. Note that if in (3.71) α and β are three-forms then the formula can be simplified further. Indeed, in this case * α and * β are zero-forms, thus and setting this equality to the r.h.s. of the first line of (3.71) we obtain Using this result we can also simplify (3.72): if α and β are three-forms then

Proof of (3.69). Recall that the functions ξ B are given by the formula (2.2). Using it we obtain where in the second step we used (3.7), and in the last one (3.20).
Proof of (3.70). By virtue of (3.7) and (3.20) the l.h.s. of (3.70) can be transformed as follows: Shifting the contraction θ D in the first of the two resulting terms and applying once again (3.7) and (3.20) we obtain Now to justify (3.70) it is enough to note that (i) the first term on the r.h.s. of the equation above is proportional to the term on the l.h.s. and (ii) the second term on the r.h.s. by virtue of (2.2) is equal to 3( * ξ A ) θ D α.
Proof of (3.71). Let us now consider the l.h.s. of (3.71): By virtue of (3.5) and (3. Setting this result to the r.h.s. of the previous equation we obtain the r.h.s. of the first line of (3.71). To obtain the result at the second line it is enough to shift the contraction θ A in the term −( θ A α) ∧ * β ∧ γ.
Proof of (3.72). By virtue of the first line of Equation (3.71), just proven, where the first equality holds true by virtue of (3.5) and the last one due to (3.18). On the other hand, due to (3.8) Setting the two results above to (3.79) we get The last step of the proof aims at simplifying the last term in the equation above (here we used (3.5) and (3.6)). Taking into account that * α ∧ β = α ∧ * β we set the result above to (3.80), obtaining thereby (3.72).
Proof of (3.73). By virtue of (3.7) Now to get (3.73) it is enough to shift the contraction θ B in the first term at the r.h.s. of the equation above.
Proof of (3.74). First transform the l.h.s. of (3.7) by means of (3.5), then act with d on both sides of the resulting formula.

Terms quadratic in p A
Terms quadratic in the momenta come from the Poisson bracket where we used (3.17) to simplify the r.h.s. It is not difficult to see that Let us now transform the remaining term in (3.81); by virtue of (3.72) To justify the last step let us note that (b ∧ θ B ∧ θ A ) is antisymmetric in A and B while * p A ∧ p B is symmetric. Consequently, where we have used the second line of (3.71) and (3.3). The other bracket, (3.84) The last two terms in the square bracket above together give zero once multiplied by (δS 1 (M ))/(δp A ). Indeed, due to (3.78) On the other hand, the first equality holds by virtue of (3.70) and due to the fact that ξ A (δS 1 (M ))/(δp A ) = 0 ((δS 1 (M ))/(δp A ) is of the form θ A i γ i j dx j for some tensor field γ i j ); in the last step we used (3.7). Using the fact just mentioned and (3.6) we transform the only remaining term in (3.84) as follows: Gathering all the terms linear in p A , that is, (3.83) and (3.85), we obtain a result whose last term is an exact three-form; the integral of this term over Σ is zero, hence

Applying (3.77), (3.20) and (3.70) it is not difficult to show that
Consider now the last bracket {R 1 (b), S 3 (M )}. Due to (3.17) Our goal now is to express the r.h.s. of the equation above as a sum of a term containing M b and one containing dM . To this end we first act with the operator d on the factors constituting the terms in the square brackets. Next, where possible, we use (3.18) to simplify θ A ∧ * (b ∧ θ A ) to 2 * b and θ A ∧ * (dθ B ∧ θ A ) to * dθ B ; finally, in all the terms containing M and * b we shift the Hodge operator * to get b. Thus we obtain Note that since * dθ A is a one-form the first term at the r.h.s. above vanishes. Thus the terms independent of p A read The term containing dM can be expressed as Now, by setting α = b, β = θ B and γ = dθ B in (3.75), the term under consideration can be simplified to This means that the terms independent of p A read

Isolating constraints
Our goal now is to express the bracket {R(b), S(M )} as a sum of smeared constraints.

Terms quadratic in p A We are going to transform the last term of (3.82) to a form containing the factor θ A ∧ * p A being a part of the constraint R(b): applying (3.73) to the term with α A = * p A we obtain The first term in the last line is zero; indeed, using (3.6) we get Note now that * (p A ∧ θ B ) * (p C ∧ θ B ) is symmetric in A and C, while θ A ∧ θ C is antisymmetric. Transforming the remaining term in (3.88) we obtain where in the last step we used (3.8) and (3.7). Setting this result to (3.82) we arrive at the final form of the terms quadratic in p A , where now the phrase "terms linear in p A " means the terms (3.86) and the last term in (3.89). Now we are going to isolate constraints from the linear terms.
Terms linear in p A The terms read The first term can be written as (3.92) Transformation of the remaining terms in (3.91) (i.e. those which do not contain dM ) to an appropriate form takes more effort. Applying (3.73) to the first of them we obtain; here in the last step we used (3.6). The last term of (3.91) can be transformed as follows, where in the second step we used (3.5). The last two results allow us to express in a simpler form the sum of the terms in (3.91) which do not contain dM :

Our goal now is to rewrite the sum above in the form of a single term containing the factor θ A ∧ * p A . Let us begin with the first term in (3.93): where we applied (3.6) and (3.5) in the third step and (3.8) in the last step. The second term in (3.93) (in the second step we applied (3.6)). Finally, the last term in (3.93) by virtue of (3.5) can be written as Gathering (3.94), (3.95) and the equation above we obtain the desired expression for those terms in (3.91) which do not contain dM : where again we used (3.5). We can now simplify the term in the big parenthesis: it is enough to use (3.76), setting α = b, β = dξ A and κ = θ A , to get where the phrase "terms independent of p A " means here the terms given by (3.87), the last term in (3.92) and the last one in (3.96).
Terms independent of p A Our goal now is to show that the terms independent of p A sum up to zero. Gathering all such terms appearing in (3.97) we see that the sum of the last term of (3.87) and the last term of (3.92) is zero. Note now that the last term in (3.96) contains ξ A , which does not appear in the other terms.
To get rid of ξ A let us use (3.74): Consequently, the terms in (3.97) independent of p A read Now we are going to show that the expression above is zero for every M, b and θ A . This will be achieved by proving that the terms in the big parenthesis sum up to zero for every θ A . The proof will be carried out with application of tensor calculus (see formulae in Section 3.1.3).
The first term in (3.98) By virtue of the third equation in (3.14) we can express the r.h.s. above as a sum of six terms. Four of them vanish: two of them contain the vanishing factor (∇ a θ Bb )θ Bb (see (3.10)), and the remaining two vanish because they are of the form In the case of the second term in (3.98) we proceed similarly: applying (3.14) we obtain six terms and again four of them vanish; two of them are of the form (3.99), the other two can be transformed to this form by means of (3.9), hence

Let us consider now the third term in (3.98): where in the second step we used (2.3). Applying (3.14) we obtain twelve terms: six of them contain a second covariant derivative of components of θ B , while the remaining ones are quadratic in covariant derivatives of the components. Three of the terms containing second covariant derivatives vanish: two of them turn out to be of the form (3.99), and the third one can be transformed to this form by means of (3.10): Four of the terms quadratic in covariant derivatives are zero: two of them contain the factor (3.10), the other two can be transformed to the form (3.99) by means of (3.9). Finally, we use (3.13) to express the fourth term in (3.98) as follows Acting with ∇ b on the factors in the square brackets we obtain The last term in (3.98) by virtue of (3.5) and (3.14) can be expressed as follows The last term in the second line above vanishes; indeed, due to (2.3) the term is equal to a sum each term of which is of the form (3.99). Using (2.3) again we obtain

Now we are ready to gather all the results (3.100)-(3.104) to show that (3.98) is zero. To make the task easier let us note that we obtained three kinds of terms: (i) ones containing second covariant derivatives of θ A a , (ii) ones quadratic in covariant derivatives of θ A a and (iii) ones quadratic both in the covariant derivatives and in θ A a .
Expressions containing second covariant derivatives of θ A a appear in (3.102) and (3.103) and they read We see now that the first and the last term sum up to zero, as do the third and the fifth ones. The sum of the remaining second and fourth terms can be expressed as where (i) in the first step we used the Riemann tensor R b eda of the Levi-Civita connection compatible with q to express the commutator (∇ d ∇ a − ∇ a ∇ d ) acting on θ b B and (ii) in the second step we applied (2.3). Note that the last equality holds by virtue of the symmetry of the Ricci tensor R cd .
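The two facts invoked in this step can be written explicitly as follows (the index conventions here are ours and should be checked against [1]):

```latex
% Commutator of covariant derivatives on the cotetrad components
% (Levi-Civita connection, hence torsion-free and metric):
(\nabla_d \nabla_a - \nabla_a \nabla_d)\,\theta_B{}^{b}
   = R^{b}{}_{e d a}\,\theta_B{}^{e} ;
% contracting with the cotetrad and using (2.3),
% \theta_B{}^{e}\,\theta^{B b} = q^{eb},
% leaves the Ricci tensor R_{cd} = R^{a}{}_{c a d},
% which is symmetric: R_{cd} = R_{dc}.
```

The symmetry of the Ricci tensor is what allows the two surviving terms to combine.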
Terms quadratic in covariant derivatives of θ A a appear in (3.103) and (3.104): It is easy to see that the first and the third terms sum up to zero, as do the second and fourth ones. Let us finally consider the terms quadratic in covariant derivatives of θ A a and quadratic in θ A a ; there are eight of them and they can be grouped into pairs such that each pair sums up to zero. These pairs are:

1. the first term at the r.h.s. of (3.100) and the fourth one at the r.h.s. of (3.102),
2. the second term at the r.h.s. of (3.100) and the third one at the r.h.s. of (3.104) (apply (3.9) to the latter term),
3. the first term at the r.h.s. of (3.101) and the fifth one at the r.h.s. of (3.102),
4. the second term at the r.h.s. of (3.101) and the fourth one at the r.h.s. of (3.104) (apply (3.9) to the latter term).
In this way we demonstrated that (3.98) is zero for every M , b and θ A . Thus the formula (3.97) turns into the final expression of the Poisson bracket of R(b) and S(M ):

Poisson bracket of B(a) and S(M)

Terms quadratic in p A
The only term in (3.106) quadratic in p A is The first term at the r.h.s. of this equation turns out to be zero. To show this let us express the term as follows (3.108) Our strategy now is to restore in each term above the function ξ A which originally appears in B 2 (a). Thus by virtue of (3.70) Due to (2.2) the second term at the r.h.s. of (3.108) Gathering the three results (3.109), (3.110) and (3.111) we see that, indeed, the first term at the r.h.s. of (3.107) is zero.
The second term at the r.h.s. of (3.107) requires (3.3) to be applied, then some simple transformations give us an expression for terms in (3.106) quadratic in the momenta:

Terms linear in p A
It turns out that {B 1 (a), S 1 (M )} and {B 2 (a), S 2 (M )} give terms linear in p A . The first of the two brackets can be calculated as follows Using (3.72) after some simple algebra we obtain The next bracket reads as follows; here we omitted two terms which are zero by virtue of (3.3). To simplify the resulting expression let us first consider the two terms above containing dξ A ; the first of them can be transformed by means of (3.70): On the other hand, by virtue of (3.77) the other term Thus the sum of the two terms in (3.113) containing dξ A is zero. Consequently, where we applied (3.20). Finally, the terms in (3.106) linear in p A read where in the second step we used the second line of (3.3), (3.71), (3.6) and (3.5). The third bracket: here in the first step we applied (3.3) and in the second one we carried out the exterior differentiation at the r.h.s. of the first line. Thus the terms in (3.106) independent of p A read

Terms quadratic in p A We immediately see that the formula (3.112) can be expressed as where now the phrase "terms linear in p A " means (3.115) and the last term in (3.117).
Terms linear in p A According to the last statement of the previous paragraph, the remaining terms linear in p A are given by (3.119). Let us now transform the terms containing dM and dp D appearing in the first line of (3.119): where we used (3.5). On the other hand, the term with dp D Consequently, the sum of the two terms The result just obtained means that (3.118) can be re-expressed as where now (i) the phrase "terms linear in p A " means the second term at the r.h.s. of (3.120) and (3.119) except the terms containing dM and dp D , and (ii) "terms independent of p A " means the last term in (3.120) and the terms (3.116).
Note now that the form of the constraints at the r.h.s. of (3.121) which we have managed to isolate so far closely resembles the form of the constraints at the r.h.s. of (3.105). Let us then assume that (3.122) holds. To justify the assumption we will proceed as follows: we will add to the r.h.s. of (3.121) zero expressed as (here in the last step we used (3.74)). Next we will show that all the remaining terms linear in p A sum up to zero, and that so do all the remaining terms independent of the momenta. Note that now the description of the terms linear in and independent of the momenta given just below Equation (3.121) has to be completed by taking into account the two last terms in (3.123).
To demonstrate that all the remaining terms linear in p A sum up to zero, let us first perform some transformations. First we are going to show that the first, fourth, sixth and eighth terms above together give zero. To this end let us transform the eighth one as follows, where in the first step we applied (3.6), and in the second one we shifted the contraction θ B and used (3.6) and (3.5). Let us now transform the last term above in an analogous way, and indeed the first, fourth, sixth and eighth terms disappear from (3.124). Let us consider now the second and the seventh terms in (3.124). After a slight transformation of the second one their sum can be expressed as, where in the last step we used (3.18). Now the terms (3.124) can be re-expressed in a simpler form as Note that in each term above one can isolate the factor p B , obtaining thereby

Now using tensor calculus (see Section 3.1.3) we will show that the terms in the big parenthesis above sum up to zero for every θ A and a. The first term in (3.125) can be expressed as, where in the first step we applied (3.15) and * ǫ = 1, and in the last one (3.10). The second term in (3.125) by virtue of (3.13). Due to (3.5) the third one The fourth term in (3.125) by means of the last formula in (3.12) (set α = * (a ∧ θ A ) and β = θ B ), (3.14) and (2.3) can be expressed as Using (3.14) we obtain six terms and two of them vanish by virtue of (3.10). Thus

Collecting all the results (3.126)-(3.131) we note that we obtain two kinds of terms: ones containing a covariant derivative of a a and ones containing a covariant derivative of θ A a . The terms containing ∇ b a a appear in (3.127), (3.128) and (3.129) and sum up to zero: Regarding the terms containing ∇ a θ A b , there are ten of them and they can be grouped into pairs such that each pair sums up to zero. These pairs are: 3. the second term at the r.h.s. of (3.129) and the last term at the r.h.s.
of (3.130) (shift the derivative by means of (3.9) in the latter term), 4. the last term at the r.h.s. of (3.129) and the first term at the r.h.s. of (3.130) (shift the derivative by means of (3.9) in the latter term), 5. the second term at the r.h.s. of (3.130) and the first term at the r.h.s. of (3.131) (again shift the derivative by means of (3.9) in the latter term).
Terms independent of p A Our goal now is to show that all the remaining terms independent of the momenta, i.e. the terms (3.116), the last term in (3.120) and the second term at the r.h.s. of (3.123), sum up to zero. Note that the first term in (3.116) cancels the last term in (3.120). Now in all the remaining terms there is the factor M a and therefore they can be expressed as In the fourth, fifth and sixth terms above there appears the function ξ A , while in the remaining ones there is the derivative dξ A . Let us then transform the three terms to obtain ones containing dξ A . To transform the fourth one we note that which means that the sum of the second and the fourth term in (3.132) is zero. The fifth term in (3.132) Transforming similarly the sixth term we can rewrite (3.132) as follows:

By a direct calculation using tensor calculus we will demonstrate that the terms in the big parenthesis sum up to zero for all θ A ; more precisely, we will show that the expression in the big parenthesis is equal to zero. The first term in the expression above reads, by virtue of (3.5), Using (3.5) twice we express the second term in (3.133) as follows, where in the second step we applied (3.10).
The third term involves the product ǫ abc ǫ def dx f . Applying (3.14) we again obtain six terms; two of them vanish by virtue of (3.10) and we are left with the expression (3.138), where in the last step we used (3.14) and (3.10).
In this way we managed to express (3.133) in terms of the components θ A a and ξ A and their covariant derivatives obtaining altogether sixteen terms (3.134)-(3.138). As before those terms can be grouped into pairs such that terms in each pair sum up to zero. Let us now enumerate the pairs: 7. the third term at the r.h.s. of (3.136) and the fourth term at the r.h.s. of (3.137), 8. the fourth term at the r.h.s. of (3.136) and the term at the r.h.s. of (3.138).
Thus we managed to demonstrate that all the remaining terms (3.133) independent of p A sum up to zero and thereby proved the assumption (3.122).

Poisson brackets of V(M⃗)
The functional derivatives of the smeared constraint V(M⃗) (see (2.8)) are of the following form [9]: It was shown in [9] that Derivations of brackets of V(M⃗) and the other constraints will be based on the following formula [9]: (3.141) We will also apply the following well-known properties of the Lie derivative, where α ∧ β is a three-form, and γ any k-form on Σ.
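The properties referred to are the Cartan formula, the Leibniz rule, and the vanishing of the integral of a Lie-dragged top form on the compact boundaryless Σ (our summary of the standard facts):

```latex
\mathcal{L}_{\vec M}\,\gamma
  = \mathrm{d}(\vec M \lrcorner\, \gamma) + \vec M \lrcorner\, \mathrm{d}\gamma ,
\qquad
\mathcal{L}_{\vec M}(\alpha \wedge \beta)
  = (\mathcal{L}_{\vec M}\alpha) \wedge \beta
    + \alpha \wedge (\mathcal{L}_{\vec M}\beta) ,
\qquad
\int_\Sigma \mathcal{L}_{\vec M}(\alpha \wedge \beta)
  = \int_\Sigma \mathrm{d}\big(\vec M \lrcorner\, (\alpha \wedge \beta)\big) = 0 ,
```

the last equality following from Stokes' theorem, since α ∧ β is a three-form (so its exterior derivative vanishes on the three-dimensional Σ) and ∂Σ = ∅.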
Functional derivatives of S 11 (M ) used to calculate the bracket can be read off from the formulae derived above. The first term at the r.h.s. of the formula above where we shifted the derivative d and applied (3.143). Thus the sum of the first two terms at the r.h.s. of (3.147) reads where B 1 (a) and B 2 (a) are given by (3.23). We have The first term at the r.h.s. above The other bracket The first term at the r.h.s. above and the second one at the r.h.s. of (3.153) where in the last step we used (2.2). Setting these two results to (3.151) and applying (3.141) and (3.142) we obtain In the formulae above L M denotes the Lie derivative on Σ with respect to the vector field M⃗.

Summary

A discussion of the results can be found in [1]; here we restrict ourselves to the statement that the Poisson bracket of every pair of the constraints (2.5)-(2.8) is a sum of the constraints smeared with some fields. In other words, the constraint algebra presented above is closed.