Bounds for theta sums in higher rank II

In the first paper of this series we established new upper bounds for multi-variable exponential sums associated with a quadratic form. The present study shows that if one adds a linear term in the exponent, the estimates can be further improved for almost all parameter values. Our results extend the bound for one-variable theta sums obtained by Fedotov and Klopp in 2012.


Introduction
For M > 0, a real n × n symmetric matrix X, and x, y ∈ R^n, we define a theta sum as the exponential sum

θ_f(M, X, x, y) = Σ_{m ∈ Z^n} f(M^{−1}(m + x)) e(½ m X ᵗm + m ᵗy),   (1.1)

where f : R^n → C is a rapidly decaying cut-off and e(z) = e^{2πiz} for any complex z. If f = χ_B is the characteristic function of a bounded set B ⊂ R^n, we have the finite sum

θ_f(M, X, x, y) = Σ_{m : M^{−1}(m+x) ∈ B} e(½ m X ᵗm + m ᵗy).   (1.2)

In this case we will also use the notation θ_f = θ_B. In this paper we will focus on the case of this sharp cut-off. The theorems below remain valid if f = χ_B is replaced by any function f in the Schwartz class S(R^n) (infinitely differentiable, with rapid decay of all derivatives); the results in the latter case follow from a simpler version of the argument for the sharp truncation, so we do not discuss them here. The principal result of part I [10] in this series is the following.
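For small n and moderate M, the finite sum (1.2) can be evaluated by brute force. The following sketch is ours, purely for illustration (the function name and the choice B = (0, 1)^n are our own conventions, not the paper's); it uses row vectors, matching the quadratic form ½ m X ᵗm + m ᵗy.

```python
import numpy as np
from itertools import product

def theta_box(M, X, x, y):
    """Brute-force theta sum (1.2) with f = chi_B, B = (0,1)^n:
    sum of e(m X m^t / 2 + m y^t) over integer m with M^{-1}(m + x) in B."""
    n = len(x)
    e = lambda z: np.exp(2j * np.pi * z)
    b = max(abs(float(v)) for v in x)
    # m + x must lie in (0, M)^n, so each m_i ranges over a finite window
    lo, hi = -int(np.ceil(b)) - 1, int(np.ceil(M + b)) + 1
    total = 0j
    for m in product(range(lo, hi + 1), repeat=n):
        m = np.array(m, dtype=float)
        if np.all(m + x > 0) and np.all(m + x < M):
            total += e(0.5 * m @ X @ m + m @ y)
    return total

# With X = 0 and y = 0 every term equals 1, so the sum counts the
# lattice points of (0, M)^n: for M = 5.5 and n = 2 this is 5^2 = 25.
assert abs(theta_box(5.5, np.zeros((2, 2)), np.zeros(2), np.zeros(2)) - 25) < 1e-9
```

For generic X the terms oscillate, and the square-root cancellation quantified by the bounds below is already visible numerically for moderate M.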
For example, for any ε > 0, the function ψ(x) = (x + 1)^{1/(2n+2)+ε} satisfies the condition (1.3), which produces the bound M^{n/2} (log M)^{1/(2n+2)+ε} for almost every X and any x and y. This improved the previously best bound, due to Cosentino and Flaminio [3], by a factor of (log M)^n. Moreover, in the case n = 1, theorem 1.1 recovers the optimal result obtained by Fiedler, Jurkat and Körner [5].
In what follows we establish a stronger bound than (1.4), for example M^{n/2} (log M)^{ε}, but now only valid for almost every y. In the case n = 1, theorem 1.2 recovers theorem 0.1 of Fedotov and Klopp [4].
The paper is organized as follows. In section 2 we review some basic properties of theta functions and the Jacobi group. The Jacobi group is defined as the semi-direct product H ⋊ G of the Heisenberg group H and the symplectic group G = Sp(n, R), and, following a construction due to Lion and Vergne [8], the theta function associated to a Schwartz function f ∈ S(R^n) is a function Θ_f : H ⋊ G → C that, for appropriate g ∈ G and h ∈ H, is a simple rescaling of the theta sums θ_f. The theta functions Θ_f satisfy an automorphy equation, theorem 3.1, under a certain subgroup Γ̃ ⊂ H ⋊ G. This subgroup, defined in section 3, projects to the discrete subgroup Γ = Sp(n, Z) ⊂ G.
In order to exploit additional savings from the linear term parameterized by y, we found it necessary to have a better understanding of the shape of the cusp of Γ\G than in the first paper in this series [10]. For this reason we define in section 3.1 a new fundamental domain for Γ\G which has "box-shaped" cusps, as explicated in section 3.2.
Section 4 contains the proof of theorem 1.2, which is based on a Borel–Cantelli type argument together with a multi-dimensional dyadic decomposition of the characteristic function of the open unit cube (0, 1)^n that is naturally realized as an action of the diagonal subgroup of G. The execution of the Borel–Cantelli argument rests on a kind of "uniform continuity" property of a certain height function on H ⋊ G that controls the theta function Θ_f; see corollary 4.1. The required property is proved in section 4.1, see lemma 4.4, whose proof is the motivation for the creation of the fundamental domain and the study of its cuspidal regions in sections 3.1 and 3.2. We remark that the interaction of the dyadic decomposition with the H coordinate in the Jacobi group leads to additional complications not seen in [10]; see section 4.2.

Theta functions and the Jacobi group
The theta function Θ_f associated to a Schwartz function f ∈ S(R^n) is a complex-valued function defined on the Jacobi group H ⋊ G, the semi-direct product of the Heisenberg group H with the rank n symplectic group G = Sp(n, R). Here H is the set R^n × R^n × R with multiplication given by

(x, y, t)(x′, y′, t′) = (x + x′, y + y′, t + t′ + ½(x ᵗy′ − y ᵗx′)),

and G is the group of 2n × 2n real matrices g preserving the standard symplectic form:

g J ᵗg = J,   J = ( 0 I ; −I 0 ),

with I the n × n identity. Alternatively, writing g in n × n blocks g = ( A B ; C D ), the symplectic condition reads A ᵗB = B ᵗA, C ᵗD = D ᵗC, and A ᵗD − B ᵗC = I. We note that G acts on H by automorphisms via (x, y, t) ↦ ((x, y)g, t), so we may define the semidirect product H ⋊ G, the Jacobi group, with the multiplication (2.5) obtained by twisting the H components by this action. The theta function is defined by

Θ_f(h, g) = Σ_{m ∈ Z^n} [W(h) R(g) f](m),

where W is the Schrödinger representation of H and R is the Segal–Shale–Weil (projective) representation of G. We refer the reader to [10] for details regarding these representations, including the slightly non-standard definition of W and the unitary cocycle ρ : G × G → {z ∈ C : |z| = 1} satisfying R(g₁)R(g₂) = ρ(g₁, g₂) R(g₁g₂). We recall that for appropriate h ∈ H and g ∈ G, the value Θ_f(h, g) is a simple rescaling of the theta sum θ_f(M, X, x, y); see (2.8). For f(x) = exp(−π x ᵗx) and h = (0, 0, 0), we recover (det Y)^{1/4} times the classical Siegel theta series that is holomorphic in the complex symmetric matrix Z = X + iY, with X and Y the Iwasawa coordinates below. For general g ∈ G we have the Iwasawa decomposition (2.9),

g = ( I X ; 0 I ) ( Y^{1/2} 0 ; 0 ᵗ(Y^{1/2})^{−1} ) k(Q),

where X is symmetric, Y is positive definite symmetric, and Q is unitary, identified with an element k(Q) of the maximal compact subgroup of G. Explicit formulas for these coordinates, and for how they transform under left multiplication, are recorded in (2.10) and in [10]. We often further decompose Y = U V ᵗU with U upper-triangular unipotent and V positive diagonal, so Y^{1/2} = U V^{1/2}. It is easy to express the Haar measure µ on G in these coordinates as a product measure (see [10]), where dQ is the Haar measure on U(n) and dx_ij, du_ij, dv_jj are respectively the Lebesgue measures on the entries of X, U, V. We can also express the Haar measure on the open, dense set of g which can be written as a product of unipotent, block-diagonal, and unipotent block matrices determined by a symmetric X, a matrix A ∈ GL(n, R), and a symmetric T. In these coordinates the Haar measure is c times an explicit power of |det A| times the product of the Lebesgue measures dx_ij, da_ij, dt_ij on the entries of X, A, T, where c is a positive constant; see [10]. We note that the Haar measure μ̃ on the Jacobi group is simply dμ̃(h, g) = dx dy dt dµ(g), (2.14) with h = (x, y, t) and dx, dy, and dt the Lebesgue measures.
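The block description of G is easy to sanity-check numerically. The following sketch is ours and purely illustrative (variable names and the convention J = ( 0 I ; −I 0 ) are our own); it verifies the symplectic condition and the block identities for the unipotent and block-diagonal matrices appearing in the Iwasawa decomposition.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

# n(X): unipotent block matrix built from a symmetric X
X = rng.normal(size=(n, n)); X = (X + X.T) / 2
nX = np.block([[I, X], [Z, I]])

# a(A): block-diagonal matrix built from an invertible A (so Y = A A^t)
A = rng.normal(size=(n, n)) + n * I   # generically invertible
aA = np.block([[A, Z], [Z, np.linalg.inv(A).T]])

for g in (nX, aA, nX @ aA):
    # symplectic condition g J g^t = J
    assert np.allclose(g @ J @ g.T, J)
    # block conditions: A B^t = B A^t, C D^t = D C^t, A D^t - B C^t = I
    Ab, Bb, Cb, Db = g[:n, :n], g[:n, n:], g[n:, :n], g[n:, n:]
    assert np.allclose(Ab @ Bb.T, Bb @ Ab.T)
    assert np.allclose(Cb @ Db.T, Db @ Cb.T)
    assert np.allclose(Ab @ Db.T - Bb @ Cb.T, I)
```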
We often make use of the following refinements of the Iwasawa decomposition. For 1 ≤ l ≤ n and the same Q as in (2.9), we write g ∈ G in the form (2.15), where R_l and S_l are l × (n − l) matrices, T_l is l × l symmetric, U_l is l × l upper-triangular unipotent, V_l is l × l positive diagonal, X_l is (n − l) × (n − l) symmetric, and Y_l is (n − l) × (n − l) positive definite symmetric. We note that for l = n we recover X = T_n and the factorization Y = U_n V_n ᵗU_n. In what follows we use g_l = g_l(g) ∈ Sp(n − l, R) to denote the matrix (2.16). These decompositions are closely related to the Langlands decompositions of the maximal parabolic subgroups P_l of G. For 1 ≤ l < n, P_l is the subgroup of g ∈ G which can be written in the form (2.17), where R_l and S_l are l × (n − l) matrices, T_l is l × l symmetric, a_l > 0, U_l ∈ GL(l, R) with det U_l = ±1, and the remaining component lies in Sp(n − l, R). The maximal parabolic P_n is the subgroup of g ∈ G that can be written as in (2.18), where T_n is n × n symmetric, a_n > 0, and U_n ∈ GL(n, R) with det U_n = ±1. The factorizations (2.17), (2.18) are in fact the Langlands decompositions of P_l, P_n. The first paper in this series [10] contains more details on parabolic subgroups and their Langlands decompositions, and we refer the reader to [12], particularly sections 4.5.3 and 5.1, to [7], particularly section 7.7, and to the authors' lecture notes [9] for further details.

The subgroups Γ̃ and Γ
We denote by Γ the discrete subgroup Γ = Sp(n, Z) ⊂ G. Recalling the notation of [10], for γ = ( A B ; C D ) ∈ Γ we set h_γ = (r, s, 0) ∈ H, where the entries of r are 0 or ½ depending on whether the corresponding diagonal entry of C ᵗD is even or odd, and the entries of s are 0 or ½ depending on whether the corresponding diagonal entry of A ᵗB is even or odd. As in [10], we now define the group Γ̃ ⊂ H ⋊ G, whose elements are the pairs (u h_γ, γ) with γ ∈ Γ and u in the subgroup of H specified there. The relevance of the subgroup Γ̃ is made apparent by the following theorem, see theorem 4.1 in [10].
A proof of this theorem is found in [8], but with Γ̃ replaced by the finite index subgroup on which h_γ = (0, 0, 0). The automorphy under the full Γ̃ is proved in [11], but only for the special function f(x) = exp(−π x ᵗx). It is shown in [8] that this f is an eigenfunction for all the operators R(k(Q)), with R the Segal–Shale–Weil representation and Q ∈ U(n), and it can be seen from the theory built in [8] that the automorphy for any Schwartz function follows from that for exp(−π x ᵗx). A self-contained proof along the lines of [8] is presented in the authors' lecture notes [9].

Fundamental domains
We say that a closed set D ⊂ G is a fundamental domain for Γ\G if
• for all g ∈ G there exists γ ∈ Γ such that γg ∈ D, and
• whenever g ∈ D and γg ∈ D for some non-identity γ ∈ Γ, g lies on the boundary of D.
We note that if D is a fundamental domain for Γ\G, then G is covered by the translates γD, γ ∈ Γ, which overlap only along their boundaries. In contrast to our previous paper [10], here we need to make careful use of the shape of our fundamental domain D in the cuspidal regions. Drawing inspiration from the fundamental domain for GL(n, Z)\GL(n, R) constructed in [6], as well as from the reduction theory developed in [2] (see also [1]), we construct in this section a new fundamental domain D = D_n for Γ\G. In the following section we study the cuspidal region of D_n.
For n = 1, we let D_1 ⊂ G denote the standard fundamental domain for Γ\G = SL(2, Z)\SL(2, R); that is, D_1 is the set of g whose Iwasawa coordinates z = x + iy satisfy |x| ≤ ½ and |z| ≥ 1. We now define fundamental domains D_n inductively, using the decomposition (2.15) for l = 1.
Writing g ∈ G as in (2.15) with l = 1, we define D_n to be the set of g ∈ G such that
• v_1(g) = max_{γ ∈ Γ} v_1(γg),
• g_1(g) ∈ D_{n−1}, with g_1(g) as in (2.16), and
• the entries of r_1(g), s_1(g), and t_1(g) are all less than or equal to ½ in absolute value, with the first entry of r_1 greater than or equal to 0.

Proposition 3.2. D_n is a fundamental domain for Γ\G.
Proof. We begin by showing that for g ∈ G, sup_{γ ∈ Γ} v_1(γg) is indeed attained by some γ ∈ Γ. From (2.10), we have for γ = ( A B ; C D ) ∈ Γ that

v_1(γg)^{−1} = c Y ᵗc + (cX + d) Y^{−1} ᵗ(cX + d),

where c, d are the first rows of C, D. Since Y is positive definite, there are only finitely many c such that c Y ᵗc, and hence v_1(γg)^{−1}, is below a given bound. Similarly, for a fixed c, the positive definiteness of Y^{−1} implies that there are only finitely many d such that v_1(γg)^{−1} is below a given bound. It follows that there are only finitely many γ ∈ Γ_1\Γ such that v_1(γg) is larger than a given bound, where Γ_1 = Γ ∩ P_1 and we recall P_1 is given by (2.17). As v_1(γg) = v_1(g) for γ ∈ Γ_1, it follows that v_1(γg) is maximized by some γ ∈ Γ. Let γ_0 be such that v_1(γ_0 g) is maximal. We now decompose an arbitrary γ ∈ Γ_1 as in (2.17) (see (3.10)), with γ_1 its Sp(n − 1, R) component (see (3.11)). Proceeding inductively, there exists γ_1 such that γ_1 g_1(γ_0 g) = g_1(γγ_0 g) ∈ D_{n−1}. Now, we can change r_1(γ), s_1(γ), t_1(γ), and the ±, noting that this does not change g_1(γγ_0 g), so that the entries of r_1(γγ_0 g), s_1(γγ_0 g) and t_1(γγ_0 g) are all ≤ ½ in absolute value and the first entry of r_1(γγ_0 g) is nonnegative. Therefore γγ_0 g ∈ D_n as required.
We now suppose that g ∈ D_n and that there is a non-identity γ ∈ Γ such that γg ∈ D_n. We write γ = ( A B ; C D ). By maximality, we have v_1(g) = v_1(γg), and therefore (3.13) holds, where c and d are the first rows of C and D. Let us first consider the case c ≠ 0. To show that g is on the boundary of D_n in this case, we consider a suitable perturbation g_ε ∉ D_n. As g_ε can be made arbitrarily close to g, we conclude that g is on the boundary of D_n. If c = 0, then from (3.13) we have a corresponding constraint on d, where d = (d^{(1)}, d^{(2)}) is as above. If d^{(2)} ≠ 0, we consider a similar perturbation and conclude that g is on the boundary of D_n as before. When c = 0 and d^{(2)} = 0, we have d^{(1)} = ±1, and so γ ∈ Γ_1. We decompose γ as in (3.10) and define γ_1 as in (3.11). By the construction of D_n, we have g_1(g) ∈ D_{n−1} and g_1(γg) = γ_1 g_1(g) ∈ D_{n−1}. By induction, either γ_1 is the identity or g_1(g) is on the boundary of D_{n−1}. In the latter case g is on the boundary of D_n, and so it remains to consider the case γ ∈ Γ_1 with γ_1 the identity. If any of the entries of r_1(γ) or s_1(γ) is not zero, then the corresponding entry of r_1(g) or s_1(g) is ±½ and so g is on the boundary of D_n. Similarly, if t_1(γ) ≠ 0, we have t_1(g) = ±½ and again g is on the boundary of D_n. If all of r_1, s_1, t_1 are 0, the sign must be − as γ is not the identity, and it follows that the first entry of r_1(g) is 0 and g is again on the boundary of D_n.
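For n = 1 the maximization of v_1 in the proof of proposition 3.2 is the classical reduction to D_1: translate x into [−½, ½] and invert whenever |z| < 1, each inversion strictly increasing y = Im z. A sketch of that classical algorithm (ours, illustrative only):

```python
# Classical SL(2,Z)-reduction to the standard fundamental domain
# |x| <= 1/2, |z| >= 1: each inversion z -> -1/z strictly increases
# Im z while |z| < 1, so the loop terminates at the orbit's maximal y.
def reduce_to_D1(z, max_iter=10_000):
    for _ in range(max_iter):
        z -= round(z.real)         # T^k: translate x into [-1/2, 1/2]
        if abs(z) < 1:
            z = -1 / z             # S: invert, increasing Im z
        else:
            return z
    raise RuntimeError("did not reduce")

w = reduce_to_D1(complex(0.37, 1e-4))
assert abs(w.real) <= 0.5 + 1e-12 and abs(w) >= 1 - 1e-12
```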
The following proposition records some useful properties of D_n. It and its proof are very similar to the analogous statement for the different fundamental domain used in [10]; see proposition 3.1 there.

Proposition 3.3. Let g ∈ D_n and write g as in (2.9), where X is symmetric, Y is positive definite symmetric with Y = U V ᵗU, U upper triangular unipotent, V positive diagonal, and Q ∈ U(n). Then we have
1. v_j ≥ (3/4) v_{j+1} for 1 ≤ j < n, and
2. for all x ∈ R^n, c_1 Σ_j x_j² ‖y_j‖² ≤ ‖Σ_j x_j y_j‖² ≤ c_2 Σ_j x_j² ‖y_j‖², where y_1, …, y_n denote the rows of Y^{1/2} and 0 < c_1 < 1 < c_2 are constants depending only on n.

Proof. For the first part, we observe that by the inductive construction of D_n it suffices to consider only j = 1. We start with

v_1(g)^{−1} ≤ c Y ᵗc + (cX + d) Y^{−1} ᵗ(cX + d)

for any (c, d) ∈ Z^{2n} nonzero and primitive. Choosing c = 0 and d = (0, 1, 0, …, 0) gives v_1^{−1} ≤ r₁² v_1^{−1} + v_2^{−1}, where r₁ is the first entry of r_1(g). Since 0 ≤ r₁ ≤ ½, we conclude that v_1 ≥ (3/4) v_2. To demonstrate the second part of the proposition, we let y_1, …, y_n denote the rows of Y^{1/2}, so that x Y^{1/2} = Σ_j x_j y_j (3.28), where the x_j are the entries of x; our aim is to prove the displayed inequalities for some constants 0 < c_1 < 1 < c_2 depending only on n. We let 0 < φ_1 < π denote the angle between y_1 and y = Σ_{j ≥ 2} x_j y_j, and 0 < φ_2 < π/2 denote the angle between y_1 and the hyperplane span(y_2, …, y_n). We have φ_2 ≤ min(φ_1, π − φ_1). We bound cos φ_2 away from 1 by bounding sin φ_2 away from 0.
We have

sin φ_2 = ‖y_1 ∧ y_2 ∧ ⋯ ∧ y_n‖ / (‖y_1‖ ‖y_2 ∧ ⋯ ∧ y_n‖),

so it suffices to show that ‖y_1‖ ‖y_2 ∧ ⋯ ∧ y_n‖ ≪ ‖y_1 ∧ ⋯ ∧ y_n‖. Here ∧ denotes the usual wedge product on R^n and the norm on Λ^k R^n is the one induced by the Euclidean norm on R^n. Using the inductive construction of D_n and the fact that the entries of r_1(Y), r_1(Y_1), … are at most ½ in absolute value, we observe that U has entries bounded by a constant depending only on n. We find that sin φ_2 ≫ 1, with the implied constant depending on n.
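The wedge-product expression for sin φ_2 can be made concrete via Gram determinants, since ‖w_1 ∧ ⋯ ∧ w_k‖² = det(W ᵗW) for the matrix W with rows w_i. A small numerical sketch (ours, with hypothetical inputs):

```python
import numpy as np

def wedge_norm(rows):
    """|w_1 ^ ... ^ w_k| computed via the Gram determinant det(W W^t)."""
    W = np.array(rows, dtype=float)
    return np.sqrt(np.linalg.det(W @ W.T))

def sin_phi2(Y_half):
    """sin of the angle between the first row and the span of the rest."""
    y1, rest = Y_half[0], Y_half[1:]
    return wedge_norm(Y_half) / (np.linalg.norm(y1) * wedge_norm(rest))

# orthogonal rows give sin(phi2) = 1; skewed rows give a value in (0, 1)
assert abs(sin_phi2(np.eye(3)) - 1) < 1e-12
Y_half = np.array([[1.0, 0.99, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
assert 0 < sin_phi2(Y_half) < 1
```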

Shape of the cusp
As explicated in [1] and [2], the cusp of Γ\G can be partitioned into 2^n − 1 box-shaped regions. These regions are in correspondence with the conjugacy classes of proper parabolic subgroups of G and are formed as K times the product of three subsets, one for each of the components (nilpotent, diagonal, and semisimple) of the Langlands decomposition of P.
In what follows we use the fundamental domain D_n constructed in section 3.1 to prove a variation of this fact, although only for the maximal parabolic subgroups (2.17), (2.18). Our main result for this section is proposition 3.6, which roughly states that if g ∈ G is close enough to the boundary in a precise sense, then g can be brought into D_n by an element γ of some maximal parabolic subgroup, which depends on the way g approaches the boundary.
For 1 ≤ l < n we denote by Γ_{l,1} and Γ_{l,2} the subgroups of Γ_l = Γ ∩ P_l defined by the indicated block forms, and we let Γ_{n,2} be trivial. We now define, for g ∈ G and 1 ≤ l ≤ n, the quantity v_l(Γ_l g) by the minimum (3.39) and, for 1 ≤ l < n, the quantity v_{l+1}(Γ_l g) by the maximum (3.40). We note that in the proof of proposition 3.2, we saw that the maximum in (3.40) does exist.
As for the minimum in (3.39), we simply note that v_l(A U_l V_l ᵗU_l ᵗA) is controlled by a U_l V_l ᵗU_l ᵗa, where a is the last row of A ∈ GL(l, Z), so the positive definiteness of U_l V_l ᵗU_l implies that there are only finitely many values of v_l(A U_l V_l ᵗU_l ᵗA) below a given bound. We now define a fundamental domain D′_l for the action of GL(l, Z) on l × l positive definite symmetric matrices. We set D′_1 = {y > 0}, and D′_2 the standard fundamental domain for GL(2, Z) acting on 2 × 2 positive definite symmetric matrices. The domain D′_l for l > 2 is then defined inductively as the set of all Y, written in coordinates analogous to the above, such that
1. the leading coordinate is extremal under GL(l, Z), in the sense of Grenier's reduction,
2. the lower-rank component lies in D′_{l−1}, and
3. |r_j| ≤ ½ and 0 ≤ r_1 ≤ ½, where r_j are the entries of r.
This is in fact the set of Y such that Y^{−1} is in Grenier's fundamental domain, see [6] and [12], so we do not prove that D′_l is a fundamental domain here. We do however record the following properties of D′_l.
Lemma 3.4. Let Y = U V ᵗU ∈ D′_l, with V positive diagonal and U upper triangular unipotent. Then we have
1. v_j ≫ v_l for all 1 ≤ j ≤ l,
2. x Y ᵗx ≍ x V ᵗx for all x ∈ R^l,
with implied constants depending only on l, and
3. min_{A ∈ GL(l, Z)} v_l(A Y ᵗA) ≫ v_l, (3.46)
with implied constant depending only on l.
Proof. The first and second parts are proved in proposition 3.1 of [10]. To prove the third part, we note that v_l(A Y ᵗA) ≍ a V ᵗa, with a the last row of A, by the second part of the lemma. Applying the first part of the lemma we have a V ᵗa ≫ v_l ‖a‖² ≥ v_l, and (3.46) follows.
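For l = 2, reduction into a domain like D′_2 is essentially the classical Lagrange–Gauss reduction of binary quadratic forms. The following sketch is ours (it uses the common normalization 2|b| ≤ a ≤ c rather than the paper's exact coordinates) and illustrates the finiteness-of-small-values phenomenon used above.

```python
import numpy as np

def lagrange_gauss(Y):
    """Reduce a 2x2 positive definite symmetric Y by A in GL(2,Z),
    returning (Y_red, A) with Y_red = A Y A^t and 2|b| <= a <= c."""
    Y = np.array(Y, dtype=float)
    A = np.eye(2)
    while True:
        a, b, c = Y[0, 0], Y[0, 1], Y[1, 1]
        k = int(round(float(b / a)))
        if k != 0:                        # shear: row2 -> row2 - k*row1
            S = np.array([[1.0, 0.0], [-k, 1.0]])
        elif a > c:                       # swap the two rows
            S = np.array([[0.0, 1.0], [1.0, 0.0]])
        else:
            return Y, A
        Y, A = S @ Y @ S.T, S @ A

Y0 = np.array([[10.0, 7.0], [7.0, 5.0]])
Yr, A = lagrange_gauss(Y0)
assert np.allclose(Yr, A @ Y0 @ A.T)               # same GL(2,Z)-orbit
assert abs(round(float(np.linalg.det(A)))) == 1    # A in GL(2,Z)
assert 2 * abs(Yr[0, 1]) <= Yr[0, 0] + 1e-9 and Yr[0, 0] <= Yr[1, 1] + 1e-9
```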
As its proof is almost identical to the proof of the third part of lemma 3.4, we record the following lemma for later use.

Lemma 3.5. For g ∈ D_n and 1 ≤ l ≤ n we have v_l(Γ_l g) ≫ v_l(g), with the implied constant depending only on n.
Proof. We recall from the second part of proposition 3.3 that, for x ∈ R^l, we have x Y ᵗx ≍ Σ_j x_j² v_j (3.50). Now, as c ≠ 0, we have c_j² ≥ 1 for some 1 ≤ j ≤ l, and so c Y ᵗc ≫ v_j ≫ v_l by the first part of proposition 3.3.
We are now ready to prove the main result of this section.
Proposition 3.6. For 1 ≤ l ≤ n, there are constants a_l > 0 such that, for l < n, if g ∈ G satisfies v_l(Γ_l g) ≥ a_l v_{l+1}(Γ_l g), and, for l = n, if g ∈ G satisfies v_n(Γ_n g) ≥ a_n, then there exists γ ∈ Γ_l so that γg ∈ D_n. Moreover, for this γ we have v_l(Γ_l g) ≍ v_l(γg) and, for l < n, v_{l+1}(Γ_l g) = v_{l+1}(γg).
We remark that this proposition can be extended to any of the parabolic subgroups P_L of G by taking intersections of the maximal parabolics. However, some care needs to be taken regarding the possible non-uniqueness of the γ bringing g into D_n. Since it is unnecessary for our goals, we do not discuss this here.
Proof. By multiplying g by a suitable element of Γ_l, we may assume that U_l V_l ᵗU_l ∈ D′_l and that the remaining coordinates are normalized as in (2.15). We recall that for γ = ( A B ; C D ) ∈ Γ we have

v_1(γg)^{−1} = c Y ᵗc + (cX + d) Y^{−1} ᵗ(cX + d),

where c, d are the first rows of C, D. Now, writing c = (c^{(1)}, c^{(2)}) and d = (d^{(1)}, d^{(2)}), if c^{(1)} ≠ 0 we obtain a lower bound on v_1(γg)^{−1} by the second part of lemma 3.4. Since, for l < n, we have v_{l+1} ≫ 1 (see proposition 3.3), the hypothesis gives v_l ≫ a_l. For l = n, we directly have v_n ≫ a_n by hypothesis. Since also v_1 ≫ v_l by lemma 3.4, we have v_1 v_l ≫ a_l², so by taking a_l to be a sufficiently large constant, it follows that v_1 ≥ v_1(γg).
For l < n, if c^{(1)} = 0 but (c^{(2)}, d^{(2)}) ≠ (0, 0), then a similar lower bound on v_1(γg)^{−1} holds for a_l sufficiently large, and it follows that v_1 ≥ v_1(γg). Now, if l = n, or if c^{(1)}, c^{(2)}, and d^{(2)} are all 0, then we have d^{(1)} ≠ 0 and again v_1(γg) ≤ v_1. We have verified that for any γ ∈ Γ, v_1(γg) ≤ v_1, which is the first condition defining the fundamental domain D_n.
Restricting to γ ∈ Γ_1, which fixes v_1(g), the same argument as above shows that v_2(g) ≥ v_2(γg) for all γ ∈ Γ_1. Continuing in this way, we find that the v_j, 1 ≤ j ≤ l, are all maximal (over Γ_{j,2}), and so, by the construction of D_n, there is a γ ∈ Γ_l, of a block form in which A is upper-triangular unipotent (so γ ∈ Γ_l for all l), such that γg ∈ D_n.

Proof of the main theorem
In the following subsection we gather some technical lemmas regarding the height function needed in the proof of theorem 1.2; see section 4.2. This height function is motivated by the following corollary from [10].
Corollary 4.1. For a Schwartz function f ∈ S(R^n), (h, g) ∈ D, and A > 0, we have the bound (4.1) for |Θ_f(h, g)|, where the quantity on the right is defined in (4.2). We remark that in [10] this is obtained as a consequence of full asymptotics of the theta function in the various cuspidal regions. We also remark that in [10] we use a slightly different fundamental domain; however, an examination of the proof there shows that the fundamental domain can be replaced by any set satisfying the conclusions of proposition 3.3.

Heights and volumes
For a fixed A > 0, sufficiently large depending only on n, we define the function D on Γ̃\(H ⋊ G) by (4.3), where (u h_γ, γ) ∈ Γ̃ is such that (u h_γ, γ)(h, g) ∈ D. Here we write h ∈ H as h = (x(h), y(h), t(h)). For completeness, in case there is more than one (u h_γ, γ) ∈ Γ̃ such that (u h_γ, γ)(h, g) ∈ D, we define D(Γ̃(h, g)) to be the largest of the finite number of values (4.3). This point is not essential, as these values are within constant multiples of each other; see the argument in lemma 4.4 for how this can be proved.
We begin by analyzing the growth of the height function. We let μ̃ denote the Haar probability measure on Γ̃\(H ⋊ G), which is µ, the Haar probability measure on Γ\G, times the Lebesgue measure on the entries of h = (x, y, t).

Lemma 4.2. For R ≥ 1, the measure of the set where the height function is at least R satisfies the bound (4.4), with the implied constant depending only on n.
Proof. We recall that g ∈ D_n is written as in (2.9), with Y = U V ᵗU for U upper-triangular unipotent, X symmetric, Q ∈ U(n), and V positive diagonal. The Haar measure µ on G is then proportional to Lebesgue measure with respect to the entries of X and the off-diagonal entries of U, U(n) Haar measure on Q, and an explicit density in the diagonal entries v_j of V. By proposition 3.3, we observe that the set in (4.4) is contained in the set of (h, g) satisfying v_j ≥ c v_{j+1} for all 1 ≤ j < n and some c > 0, in addition to a lower bound involving det Y, R, and A. Moreover, the variables x, y, t as well as U, X are constrained to compact sets, and so the measure of the set (4.4) is bounded by the integral (4.8), where ε = n/(2A). Changing variables v_j = exp(u_j), the integral in (4.8) becomes (4.9). We now make the linear change of variables s_j = u_j − u_{j+1} for j < n and s_n = u_1 + ⋯ + u_n. This transformation has determinant n, and its inverse is readily written down. We find that the exponent in (4.9) is then a linear form in the s_j in which, for j < n, the coefficient of s_j involves the factor j(n − j)/2. As j(n − j)/2 > 0 for j < n, the bound (4.4) follows.

Lemma 4.4 below contains a key estimate, establishing a kind of "uniform continuity" for log D. The proof of this lemma is the primary motivation for defining our new fundamental domain and studying the shape of its cusp in sections 3.1 and 3.2. For the proof, we first establish a similar kind of "uniform continuity" for the functions v_l(Γ_l g) and v_{l+1}(Γ_l g) that are essential to section 3.2.
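The determinant claim for the linear change of variables in the proof of lemma 4.2 can be checked directly; a quick numerical sketch (ours):

```python
import numpy as np

def change_of_vars_matrix(n):
    """Rows: s_j = u_j - u_{j+1} for j < n, and s_n = u_1 + ... + u_n."""
    T = np.zeros((n, n))
    for j in range(n - 1):
        T[j, j], T[j, j + 1] = 1.0, -1.0
    T[n - 1, :] = 1.0
    return T

# the transformation has determinant n for every n
for n in range(1, 8):
    assert abs(np.linalg.det(change_of_vars_matrix(n)) - n) < 1e-9
```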
Lemma 4.3. Suppose g_0 ∈ G lies in a fixed compact subset of G. Then v_l(g) ≍ v_l(g g_0), v_l(Γ_l g) ≍ v_l(Γ_l g g_0), and, for l < n, v_{l+1}(Γ_l g) ≍ v_{l+1}(Γ_l g g_0) (4.12), for all 1 ≤ l ≤ n, with implied constants depending only on n.
Proof. We first note that we may in fact work under the assumption ‖I − g_0‖ ≤ ε, as the general statement then follows by repeated application of the resulting estimates; in fact, we may assume ‖I − g_0^{−1}‖ ≤ ε as well. Now write the Iwasawa decompositions of g and g g_0 as in (2.9). Letting y_j and y′_j denote the rows of ᵗ(Y^{−1/2}) and ᵗ(Y(g g_0)^{−1/2}) respectively, we have ‖y_j‖ ≍ ‖y′_j‖, and so v_l(g) ≍ v_l(g g_0) follows. Now let γ ∈ Γ_l be such that v_l(Γ_l g) = v_l(γg). We have v_l(Γ_l g g_0) ≪ v_l(γ g g_0) ≍ v_l(γg) = v_l(Γ_l g), and the reverse bound follows by switching the roles of g and g g_0 and using ‖g_0^{−1} − I‖ ≤ ε. The final estimate in (4.12) is proved in the same way.
Lemma 4.4. There is an ε > 0, depending only on n, such that if (h_0, g_0) ∈ H ⋊ G satisfies the closeness condition (4.20), then D(Γ̃(h, g)(h_0, g_0)) ≍ D(Γ̃(h, g)), with implied constants depending only on n.

Proof. We observe, as in lemma 4.3, that it suffices to show that D(Γ̃(h, g)(h_0, g_0)) ≫ D(Γ̃(h, g)), as the other inequality follows from switching (h, g) and (h, g)(h_0, g_0); indeed, we may assume in addition that (h_0, g_0)^{−1} = (h_0^{−g_0}, g_0^{−1}) also satisfies (4.20). Now let us suppose that (h, g) ∈ D, and let l be maximal such that v_l(g) ≥ a v_{l+1}(g) (interpreting v_{n+1} = 1), where a is a constant determined by the constants in proposition 3.6 and lemma 4.3. If no such l exists, then we have v_j(g) ≍ 1 for all j, and lemma 4.3 implies that v_j(g g_0) ≍ 1 as well; the bounds then follow immediately. Now, assuming that such a maximal l exists, we have that v_j(g) ≍ 1 for all j > l. For these j, lemma 4.3 then implies that v_j(g g_0) ≍ 1, and it follows that v_j(γ g g_0) ≍ 1 for γ ∈ Γ_l such that g_l(γ g g_0) ∈ D_{n−l}, see (2.16). By lemma 3.5, we have v_l(Γ_l g) ≫ v_l(g), and so

v_l(Γ_l g) ≫ a v_{l+1}(g) = a v_{l+1}(Γ_l g)   (4.23)

since g_l(g) ∈ D_{n−l}. Via lemma 4.3, this implies that v_l(Γ_l g g_0) ≫ a v_{l+1}(Γ_l g g_0), so a can be chosen large enough that g g_0 satisfies the hypotheses of proposition 3.6, and we let γ ∈ Γ_l be such that γ g g_0 ∈ D. We write γ in the form (2.17), where A_1 ∈ GL(l, Z). From the estimates above, we obtain D(Γ̃(h, g)(h_0, g_0)) ≫ D(Γ̃(h, g)), where the equality used along the way follows from the fact that γ ∈ Γ_l normalizes the first matrix in (2.15) and det A_1 = ±1.

Proof of theorem 1.2
We recall the following lemma from [10].
Lemma 4.5. There exists a smooth, compactly supported function f_1 : R → R_{≥0} such that χ_1 admits a dyadic decomposition into dilates of f_1 (see [10] for the precise identity), where χ_1 is the indicator function of the open unit interval (0, 1).

Now, following the method of [10], we define, for a subset S ⊂ {1, …, n} and j = (j_1, …, j_n) ∈ Z^n with j_i ≥ 0, functions built from f_1 as in (4.37), where E_S is diagonal with (i, i) entry −1 if i ∈ S and +1 if i ∉ S. We also set h_S = (x_S, 0, 0) ∈ H, where x_S has ith entry −1 if i ∈ S and 0 if i ∉ S.
As in [10], we have the corresponding decompositions, where the sums are over j ∈ Z^n with nonnegative entries. Let ψ : [0, ∞) → [1, ∞) be an increasing function. Then for C > 0 we define G_j(ψ, C) by (4.40), a height condition on Γ̃(h, g) involving the sets S ⊂ {1, …, n} and the parameters s ≥ 1.

The next lemma bounds the measure μ̃(G_j(ψ, C)); see (4.42).
Proof. Suppose that Γ̃(h, g) ∈ G_j(ψ, C), so that there exist S ⊂ {1, …, n} and s ≥ 1 witnessing the defining condition (4.40). We let k be a nonnegative integer, chosen in terms of s, where K_j = K 2^{j_1 + ⋯ + j_n} with K a constant to be determined. We then have the factorization (4.46), in which s′ denotes the resulting parameter.
As |s′| ≤ K_j^{−1}, we can make K sufficiently large so that (h_1, g_1) satisfies the conditions of lemma 4.4. From this, and the fact that ψ is increasing, the claimed bound follows.

We break the sum in (4.50) into terms j such that 2^{j_i} b_i^{−1} ≤ M for all i and terms j such that 2^{j_i} b_i^{−1} > M for some i. Using (2.8), we write the first part as an average of values of the theta function. Here we let ε > 0 be a sufficiently small constant, G_j(ψ, C) is defined in (4.40), and K is the compact subset from the statement of theorem 1.2, identified with the compact subset of positive diagonal matrices B in the obvious way. We then set X(ψ) to be the set of (X, y) ∈ R^{n×n}_{sym} × R^n satisfying the exceptional condition (4.59) for some (R, s) ∈ Z^{n×n} × Z^n, where s_R ∈ R^n has entries 0 or ½ depending on whether the corresponding diagonal entry of R is even or odd, and a > 0 is a constant to be determined.

We define Y^{1/2} to be the upper-triangular matrix with positive diagonal entries such that Y = Y^{1/2} ᵗ(Y^{1/2}), and we emphasize that Y^{−1/2} is always interpreted as (Y^{1/2})^{−1} and not as (Y^{−1})^{1/2}.