1 Introduction

Recently, Poisson–Voronoi tessellations have become an object of extensive investigation. The first non-trivial result concerning Poisson–Voronoi tessellations is due to J. L. Meijering. His paper [6] shows that the expected number of facets of a typical cell of a Poisson–Voronoi tessellation of the three-dimensional Euclidean space \(\mathbb E ^3\) equals

$$\begin{aligned} \tfrac{48\pi ^2}{35}+2 = 15.5354\ldots . \end{aligned}$$

The survey [9] contains a number of further results concerning Poisson–Voronoi tessellations.

Voronoi tessellations can be considered on a sphere or in a hyperbolic space of constant curvature as well as in a Euclidean space. Given a locally finite set \(A\) in a sphere, a Euclidean space or a hyperbolic space of constant curvature, one can consider the associated Delaunay triangulation. It is known (see, for example, [7]) that the following statements are equivalent.

  1.

    A subset \(B\subset A\) spans a face of the Delaunay triangulation.

  2.

    The set of points equidistant from all points of \(B\) contains a face of the Voronoi tessellation associated with \(A\).

Therefore, the notions of Voronoi tessellation and Delaunay triangulation are dual to each other.

Consider a finite set \(A\) of points in general position in the sphere

$$\begin{aligned} S^d_r=\big \{(\xi _1, \xi _2, \ldots , \xi _{d+1})\in \mathbb E ^{d+1}:\xi _1^2+\xi _2^2+\cdots +\xi _{d+1}^2=r^2\big \}. \end{aligned}$$

A subset \(B\subset A\) determines a face of the Delaunay triangulation associated with \(A\) if and only if conv\(\, B\) is a face of conv\(\, A\).

N. Dolbilin and M. Tanemura (see [3]) studied convex hulls of finite subsets of the Clifford torus \(T^2\) embedded in \(\mathbb E ^4\). Since \(T^2\subset S^3_{\sqrt{2}}\), this case can be considered as an additional restriction on a finite subset of \(S^3_{\sqrt{2}}\) generating the Delaunay triangulation (or the Voronoi tessellation). For a special class of point sets in \(T^2\), called regular sets, [3] completely describes the combinatorial structure of the convex hull.

In addition, the convex hull of a Poisson point process within \(T^2\) has been explored by numerical methods. Dolbilin and Tanemura considered the average number \(\bar{f}\) of 2-faces of a cell in the corresponding Voronoi tessellation of \(\mathbb S ^3\), which is exactly the average degree of a vertex in the Delaunay triangulation. A strong linear relation between \(\bar{f}\) and \(\log _{10} N\) (where \(N = 4\pi ^2\lambda \) is the average number of points) was observed, and the obtained regression formula was

$$\begin{aligned} \bar{f} \approx -2.419308 + 9.971915 \log _{10} N. \end{aligned}$$

In other words, the simulation suggests that the mean valence of a vertex of the convex hull (or, equivalently, the mean number of hyperfaces of a Poisson–Voronoi cell) is likely to have expectation \(O^*(\ln \lambda )\) as the rate \(\lambda \) of the process tends to infinity.
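This logarithmic behaviour is easy to probe by direct simulation. The sketch below (Python with `numpy` and `scipy`; the rate \(\lambda =12\) and the random seed are arbitrary choices, not values from [3]) samples a Poisson process on \(T^2\), embeds the points in \(\mathbb E ^4\), builds the convex hull with Qhull and computes the mean vertex valence \(2f_1/f_0\). Since \(T^2\subset S^3_{\sqrt{2}}\), all sampled points are in strictly convex position, so every point is a vertex of the hull.

```python
import math
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
lam = 12.0                                  # arbitrary rate; E n(T^2) = 4*pi^2*lam ~ 474
n = rng.poisson(4 * math.pi ** 2 * lam)
phi = rng.uniform(-math.pi, math.pi, n)
psi = rng.uniform(-math.pi, math.pi, n)
# Embed T^2 in E^4 as (cos phi, sin phi, cos psi, sin psi).
pts = np.column_stack([np.cos(phi), np.sin(phi), np.cos(psi), np.sin(psi)])

hull = ConvexHull(pts)
f0 = len(hull.vertices)     # every point lies on S^3_sqrt(2), so all n points are vertices
f3 = len(hull.simplices)    # hyperfaces of the (almost surely simplicial) 4-polytope
# Every 1-face is an edge of some facet simplex, so collect distinct vertex pairs.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
edges = {frozenset((s[i], s[j])) for s in hull.simplices for i, j in pairs}
f1 = len(edges)
vbar = 2 * f1 / f0          # mean valence of a vertex
```

For \(N\approx 474\) the regression above predicts \(\bar{f}\approx -2.42+9.97\log _{10}474\approx 24.3\); the simulated \(\bar{v}\) falls in the same range, and the Dehn–Sommerville relation \(f_1=f_0+f_3\) serves as a consistency check.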

Here and in what follows, \(F_1=O^*(F_2)\) means that

$$\begin{aligned} \limsup \limits _{\lambda \rightarrow \infty } \max \big ( \big |\tfrac{F_1}{F_2} \big |, \big |\tfrac{F_2}{F_1} \big | \big ) <\infty . \end{aligned}$$

N. Dolbilin suggested that the author prove the conjecture on the logarithmic growth of the mean valence of a vertex.

In this paper we prove this conjecture and several related results.

2 Notation and Main Results

In the four-dimensional Euclidean space \(\mathbb E ^4\) consider the two-dimensional Clifford torus

$$\begin{aligned} T^2=\left\{ (\cos \phi , \sin \phi , \cos \psi , \sin \psi ):-\pi <\phi , \psi \le \pi \right\} . \end{aligned}$$

Clearly, \(T^2\) is a submanifold of the three-dimensional sphere

$$\begin{aligned} S^3_{\sqrt{2}}=\big \{(\xi _1,\xi _2,\xi _3,\xi _4):\xi _1^2+\xi _2^2+\xi _3^2+\xi _4^2=2\big \}. \end{aligned}$$

\(T^2\) carries a locally Euclidean (flat) metric and, consequently, a natural Borel measure mes\(_2\) with mes\(_2(T^2)=4\pi ^2\).

Consider a random point set \(\mathcal P \subset T^2\). For every Borel-measurable set \(A\subset T^2\) define a random variable

$$\begin{aligned} n(A) = n_\mathcal{P }(A) = |\mathcal P \cap A|. \end{aligned}$$

Denote by Pois\((\nu )\) the Poisson distribution with rate parameter \(\nu \), i.e. the distribution of a random variable \(\zeta _\nu \) such that

$$\begin{aligned} \mathsf P (\zeta _\nu = j) = e^{-\nu } \frac{\nu ^j}{j!}\quad \text{ for} \quad j=0,1,2,\ldots . \end{aligned}$$

Say that \(\mathcal P = \mathcal P _{\lambda }\) is the (homogeneous) Poisson point process of rate \(\lambda >0\) if the random variable \(n(A)\) is distributed according to the Pois\(\bigl ( \lambda \, \mathrm mes _2(A)\bigr ) \) law for every Borel-measurable set \(A\subset T^2\).
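The defining property can be illustrated by a short simulation in the intrinsic coordinates \((\phi ,\psi )\) (a sketch; the rate \(\lambda =3\) and the test rectangle \(A=(0,\pi )\times (0,\pi )\) are arbitrary choices): the number of points falling into \(A\) should have Poisson mean and variance \(\lambda \,\mathrm mes _2(A)=\lambda \pi ^2\).

```python
import math
import numpy as np

def sample_poisson_on_torus(lam, rng):
    """Sample the rate-lam homogeneous Poisson process on T^2 in the
    intrinsic coordinates (phi, psi), each ranging over (-pi, pi]."""
    n = rng.poisson(lam * 4 * math.pi ** 2)   # n(T^2) ~ Pois(4*pi^2*lam)
    phi = rng.uniform(-math.pi, math.pi, n)
    psi = rng.uniform(-math.pi, math.pi, n)
    return phi, psi

rng = np.random.default_rng(0)
lam = 3.0
counts = []
for _ in range(3000):
    phi, psi = sample_poisson_on_torus(lam, rng)
    inside = (phi > 0) & (phi < math.pi) & (psi > 0) & (psi < math.pi)
    counts.append(int(np.count_nonzero(inside)))
counts = np.asarray(counts)
mean, var = counts.mean(), counts.var()
# For a Poisson variable the mean and the variance coincide (here both ~ lam * pi^2).
```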

Call a polytope in \(\mathbb E ^4\) generic if it is a simplicial 4-polytope, a simplex of dimension at most 3, or the empty polytope. Recall the notion of the \(f\)-vector of a 4-polytope and extend it to the other types of generic polytopes.

The \(f\)-vector of a 4-polytope \(P\) is a 4-vector \((f_0, f_1, f_2, f_3)\), where \(f_i\) is the number of \(i\)-faces of \(P\) for \(i=0,1,2,3\). By definition, set the \(f\)-vectors for the three-dimensional simplex, the two-dimensional simplex, the segment, the one-point set and the empty polytope as \((4,6,4,2), (3,3,1,0), (2,1,0,0), (1,0,0,0)\) and \((0,0,0,0)\) respectively.

If \(P = \mathrm conv \, \mathcal P _{\lambda }\), then \(P\) is almost surely a generic polytope, and therefore \((f_0, f_1, f_2, f_3)\) is a well-defined random vector.

Call the event \(n(T^2)\le 4\) a degenerate case and the complementary event \(n(T^2)>4\), respectively, a non-degenerate case.

Remark

The reason to choose \(f_3=2\) for the three-dimensional simplex is that it is convenient to treat it as a polytope with two hyperfaces, each equal to the simplex itself. The other components were chosen to satisfy the Dehn–Sommerville equations (see, for example, [1]). The \(f\)-vectors of the other polytopes occurring in degenerate cases were chosen rather arbitrarily, following the common pattern of simplices of dimension lower than 3.

The main results of this paper are below.

Theorem 2.1

The expected number of hyperfaces of \(\mathrm conv \, \mathcal P _{\lambda }\) is \(O^*(\lambda \ln \lambda )\) as \(\lambda \) tends to infinity.

Theorem 2.2

The expected numbers of 1-faces and 2-faces of \(\mathrm conv \, \mathcal P _{\lambda }\) are both \(O^*(\lambda \ln \lambda )\) as \(\lambda \) tends to infinity.

In addition, one can easily observe that the value of \(f_0\) (i.e. the number of vertices) for the polytope conv\(\, \mathcal P _{\lambda }\) is exactly \(n(T^2)\). Therefore

$$\begin{aligned} \mathsf E \, f_0 = \mathsf E \, n(T^2) = 4\lambda \pi ^2, \end{aligned}$$

as \(n(T^2)\) is Pois\((4\lambda \pi ^2)\)-distributed (expectations of Poisson random variables are computed, for example, in [8]).

Remark

For the random polytope conv\(\, \mathcal P _{\lambda }\), the asymptotics of the expected \(f\)-vector as \(\lambda \rightarrow \infty \) is now completely described.

Another combinatorial characteristic of a polytope is the mean valence of its vertices. More precisely, given a polytope \(P\) in \(\mathbb E ^4\) (possibly empty) with \(f\)-vector \((f_0, f_1, f_2, f_3)\), consider the value

$$\begin{aligned} \bar{v}= \bar{v}(P) = \left\{ \begin{array}{ll} \frac{2f_1}{f_0},\quad&\text{ if}\; f_0\ne 0,\\ 0,\quad&\text{ if}\;f_0=0. \end{array} \right. \end{aligned}$$

Then \(\bar{v}\) is called the mean valence of vertex of \(P\). If \(P = \mathrm conv \, \mathcal P _{\lambda }\), then \(\bar{v}=\bar{v}(\mathrm conv \, \mathcal P _{\lambda })\) is a random variable.

Theorem 2.3

The expectation of the mean valence of a vertex of \(\mathrm conv \, \mathcal P _{\lambda }\) satisfies \(\mathsf E \,\bar{v}=O^*(\ln \lambda )\) as \(\lambda \) tends to infinity.

Remark

Theorem 2.3 provides an answer to the problem proposed by Dolbilin and Tanemura.

Here and in what follows, the designations of all combinatorial characteristics refer to the random polytope conv\(\, \mathcal P _{\lambda }\).

3 Integral Expressions for \(\mathsf E \,f_3\) and \(\mathsf E \,\bar{v}\)

Let \((T^2)^4\) be the fourth Cartesian power of \(T^2\) with the natural measure mes\(_8\). Let \(X\subset (T^2)^4\) be the set of all points \(x=(x_1, x_2, x_3, x_4)\) with \(x_i\in T^2\) such that the points \(x_1, x_2, x_3, x_4\) are affinely independent in \(\mathbb E ^4\).

For every \(x\in X\) denote by \(p(x)\) the hyperplane spanned by the points \(x_1\), \(x_2\), \(x_3\), \(x_4\). It is obvious that \(X\) is open in \((T^2)^4\). Moreover, it is easily seen that \((T^2)^4\setminus X\) has measure zero.

Denote by \(\Pi ^+(x)\) and \(\Pi ^-(x)\) the two half-spaces determined by \(p(x)\) for every \(x\in X\).

The sets

$$\begin{aligned} C^+(x) = T^2\cap \Pi ^+(x)\quad \text{ and}\quad C^-(x) = T^2\cap \Pi ^-(x) \end{aligned}$$

are called caps.

Without loss of generality, assume that for every \(x\in X\)

$$\begin{aligned} \mathrm mes _2(C^+(x))\le \mathrm mes _2(C^-(x)). \end{aligned}$$

Let \(G:X\rightarrow \mathbb R \) be a function determined by

$$\begin{aligned} G(x) = \mathrm mes _2(C^+(x)). \end{aligned}$$

Clearly, \(G(x)\) is continuous on \(X\).

The integral expressions for \(\mathsf E \,f_3\) and \(\mathsf E \,\bar{v}\) will be obtained using the famous Slivnyak–Mecke formula. This formula was first proved in [5]; in [2] it is stated as follows.

Proposition 3.1

(Slivnyak–Mecke formula) Let \(\mathcal X \) be a space with measure \(\mu \). Suppose \(\mathcal N _\mathcal{X }\) is the space of all locally finite point configurations in \(\mathcal X \). Consider a Poisson point process \(\mathcal P _\mu \) within \(\mathcal X \) corresponding to the measure \(\mu \). Then for every measurable function \(F: \mathcal{X }^s \times \mathcal N _\mathcal{X } \rightarrow [0, \infty )\) the following identity holds:

$$\begin{aligned} \begin{array}{ll}&\mathsf{E }\, \sum \limits _{\{x_1, x_2, \ldots , x_s\}\subset \mathcal P _\mu }^{\ne } F\left(x_1, x_2, \ldots , x_s, \mathcal P _\mu \setminus \{x_1, x_2, \ldots , x_s\} \right)\\&\quad =\displaystyle \int \limits _{\mathcal{X }^{s}} \mathsf E \, \bigl ( F\left(x_1, x_2, \ldots , x_s, \mathcal P _\mu \right)\bigr ) \, d\mu (x_1)\,d\mu (x_2)\cdots d\mu (x_s). \end{array} \end{aligned}$$
(1)

The sign \(\ne \) here stands for summation over all \(s\)-tuples of distinct points.

Throughout the proofs of Lemmas 3.2 and 3.3 \(\lambda \) is assumed to be a fixed positive real number.

Lemma 3.2

$$\begin{aligned} \mathsf E \,f_3= \frac{1}{24} \int \limits _{(T^2)^4} \lambda ^4\big (e^{-\lambda G(x)}+e^{-\lambda (4\pi ^2-G(x))}\big )\, dx. \end{aligned}$$
(2)

Proof

Apply the Slivnyak–Mecke formula (1) for

$$\begin{aligned} \mathcal X = T^2,\quad s=4, \quad \mu =\lambda \cdot \mathrm mes _2 \end{aligned}$$

and

$$\begin{aligned} F(x_1, x_2, x_3, x_4, X) = \mathbf 1 _{C^+(x)\cap X\subset \partial C^+(x)}+\mathbf 1 _{C^-(x)\cap X\subset \partial C^-(x)}. \end{aligned}$$

If \(n(T^2)>4\) and \(x_1, x_2, x_3, x_4\) are distinct points of \(\mathcal P _\lambda \) then it is not hard to see that almost surely

$$\begin{aligned}&F\left(x_1, x_2, x_3, x_4, \mathcal P _\lambda \setminus \{x_1, x_2, x_3, x_4\} \right)\\&\quad \quad =\left\{ \begin{array}{ll} 1,\quad&\text{ if}\; x_1, x_2, x_3, x_4 \;\text{ span a hyperface of}\; \mathrm conv \, \mathcal P _\lambda , \\ 0,\quad&\text{ otherwise}. \end{array} \right. \end{aligned}$$

If \(n(T^2)=4\) and \(x_1, x_2, x_3, x_4\) are the four points of \(\mathcal P _\lambda \) then almost surely

$$\begin{aligned} F\left(x_1, x_2, x_3, x_4, \mathcal P _\lambda \setminus \{x_1, x_2, x_3, x_4\} \right)=2. \end{aligned}$$

Finally, if \(n(T^2)<4\), then there are no such quadruples in \(\mathcal P _\lambda \) and the left-hand side of (1) is an empty sum.

Therefore, in every case

$$\begin{aligned} \sum \limits _{\{x_1, x_2, x_3, x_4\}\subset \mathcal P _\lambda }^{\ne } F\left(x_1, x_2, x_3, x_4, \mathcal P _\lambda \setminus \{x_1, x_2, x_3, x_4\} \right)= 24f_3, \end{aligned}$$
(3)

since the quadruple \((x_1, x_2, x_3, x_4)\) can be ordered in 24 different ways.

Moreover, by definition of a Poisson point process,

$$\begin{aligned} \begin{array}{ll} \mathsf E \,\mathbf 1 _{C^+(x)\cap X\subset \partial C^+(x)} = e^{-\lambda \cdot \mathrm mes _2(C^+(x))}&= e^{-\lambda G(x)},\\ \mathsf E \, \mathbf 1 _{C^-(x)\cap X\subset \partial C^-(x)} = e^{-\lambda \cdot \mathrm mes _2(C^-(x))}&= e^{-\lambda (4\pi ^2-G(x))}. \end{array} \end{aligned}$$
(4)

Substitution of (3) and (4) into (1) gives the statement of Lemma 3.2.\(\square \)

For \(\nu >0\) let \(\zeta _\nu \) be a random variable distributed as Pois\((\nu )\). Denote

$$\begin{aligned} h(\nu )= \mathsf E \, \tfrac{1}{\zeta _\nu +4} = \sum \limits _{j=0}^{\infty } \tfrac{\nu ^j}{j!(j+4)}e^{-\nu }. \end{aligned}$$
(5)

Direct computation of the sum in (5) gives

$$\begin{aligned} h(\nu )=\tfrac{1}{\nu }-\tfrac{3}{\nu ^2}+\tfrac{6}{\nu ^3}-\tfrac{6-6e^{-\nu }}{\nu ^4}. \end{aligned}$$
(6)

Obviously, \(h(\nu )\) is continuous for \(\nu >0\).
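Formulas (5) and (6) are easy to cross-check numerically (a sketch; the series is summed with an iteratively updated term \(\nu ^j/j!\) to avoid overflow, and the truncation point is chosen so that the tail is negligible for the tested values of \(\nu \)):

```python
import math

def h_series(nu, terms=120):
    """h(nu) = E 1/(zeta_nu + 4), summed as the series (5)."""
    total, term = 0.0, 1.0          # term holds nu^j / j!
    for j in range(terms):
        total += term / (j + 4)
        term *= nu / (j + 1)
    return math.exp(-nu) * total

def h_closed(nu):
    """The closed form (6)."""
    return 1 / nu - 3 / nu ** 2 + 6 / nu ** 3 - (6 - 6 * math.exp(-nu)) / nu ** 4
```

The two expressions agree to near machine precision for moderate \(\nu \).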

Lemma 3.3

$$\begin{aligned} \mathsf E \,\bar{v}&= \tfrac{1}{12} \int \limits _{(T^2)^4} \lambda ^4\big (e^{-\lambda G(x)}h\bigl (4\lambda \pi ^2-\lambda G(x)\bigr )+ e^{-4\lambda \pi ^2+\lambda G(x)}h\bigl (\lambda G(x)\bigr )\big )\, dx\nonumber \\&+2 - \mathsf P \bigl (n(T^2)=2\bigr ) - 2\mathsf P \bigl (n(T^2)<2\bigr ). \end{aligned}$$
(7)

Proof

The Dehn–Sommerville equations [1, Sect. 1.2] hold for conv\( \, \mathcal P _\lambda \) almost surely in non-degenerate cases as well as in the case \(n(T^2)=4\). These equations imply \(f_1=f_3+f_0\). Therefore, in these cases

$$\begin{aligned} \bar{v}=2\tfrac{f_3}{f_0}+2. \end{aligned}$$
(8)
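Relation (8), together with the Dehn–Sommerville identity \(f_1=f_0+f_3\) used to derive it, can be checked on a concrete simplicial 4-polytope, for instance the 4-dimensional cross-polytope with classical \(f\)-vector \((8,24,32,16)\) (a sketch; the use of `scipy` is an arbitrary tooling choice):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Vertices of the 4-dimensional cross-polytope (the 16-cell): +-e_i in E^4.
pts = np.vstack([np.eye(4), -np.eye(4)])
hull = ConvexHull(pts)

f0 = len(hull.vertices)      # 8 vertices
f3 = len(hull.simplices)     # 16 tetrahedral hyperfaces
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
edges = {frozenset((s[i], s[j])) for s in hull.simplices for i, j in pairs}
f1 = len(edges)              # 24 edges, in accordance with f_1 = f_0 + f_3
vbar = 2 * f1 / f0           # equals 2*f3/f0 + 2 = 6
```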

Apply the Slivnyak–Mecke formula (1) for

$$\begin{aligned}&\mathcal X = T^2,\quad s=4, \quad \mu =\lambda \cdot \mathrm mes _2 \end{aligned}$$

and

$$\begin{aligned} F(x_1, x_2, x_3, x_4, X)&= \left(\mathbf 1 _{C^+(x)\cap X\subset \partial C^+(x)}+\mathbf 1 _{C^-(x)\cap X\subset \partial C^-(x)} \right) \nonumber \\&\quad \times \frac{1}{|X\cup \{x_1, x_2, x_3, x_4\}|}. \end{aligned}$$

The substitution gives the following identity:

$$\begin{aligned}&24 \mathsf E \big (\tfrac{f_3}{f_0} \mid n(T^2)\ge 4\big ) \cdot \mathsf P \bigl (n(T^2)\ge 4\bigr ) \nonumber \\&\quad =\int \limits _{(T^2)^4} \lambda ^4\big (e^{-\lambda G(x)}h\bigl (4\lambda \pi ^2-\lambda G(x)\bigr )+ e^{-4\lambda \pi ^2+\lambda G(x)}h\bigl (\lambda G(x)\bigr )\big )\, dx. \end{aligned}$$
(9)

According to the law of total probability,

$$\begin{aligned} \mathsf E \, \bar{v}&= 2\mathsf E \big (\tfrac{f_3}{f_0} \mid n(T^2)\ge 4\big )\cdot \mathsf P \bigl (n(T^2)\ge 4\bigr )+ 2 \mathsf P \bigl (n(T^2)\ge 4\bigr )\nonumber \\&+2 \mathsf P \bigl (n(T^2)=3 \bigr ) + \mathsf P \bigl (n(T^2)=2\bigr ). \end{aligned}$$
(10)

By (9), the first summand on the right-hand side of (10) equals the integral term on the right-hand side of (7). Further,

$$\begin{aligned} \mathsf P \bigl (n(T^2)<2 \bigr ) + \mathsf P \bigl (n(T^2)=2 \bigr ) + \mathsf P \bigl (n(T^2)=3 \bigr )+ \mathsf P \bigl (n(T^2)\ge 4 \bigr ) =1, \end{aligned}$$

therefore

$$\begin{aligned}&2 \mathsf P \bigl (n(T^2)\ge 4\bigr )+ 2 \mathsf P \bigl (n(T^2)=3 \bigr ) + \mathsf P \bigl (n(T^2)=2\bigr ) \\&\quad =2 - \mathsf P \bigl (n(T^2)=2\bigr ) - 2 \mathsf P \bigl (n(T^2)<2\bigr ), \end{aligned}$$

so the remaining parts of the right-hand sides of (7) and (10) are equal as well.\(\square \)

4 Estimates for the Measure Function

To proceed, we need two statements about caps. Both of them are proved by a fairly simple computation, so the proofs are given in the Appendix.

Lemma 4.1

The following statements hold:

  1.

    For every cap \(C^+(x)\) (respectively, \(C^-(x)\)) there exist \(a, b\ge 0\), \(\phi _0, \psi _0\) satisfying \(a^2+b^2\ge 2\) and \(-\pi <\phi _0,\psi _0\le \pi \) such that

    $$\begin{aligned} C^+(x)=\big \{ (\phi , \psi )\in T^2: a^2 \sin ^2 \tfrac{\phi - \phi _0}{2} + b^2 \sin ^2 \tfrac{\psi - \psi _0}{2} \le 1\big \}, \end{aligned}$$

    and, respectively,

    $$\begin{aligned} C^-(x)=\big \{ (\phi , \psi )\in T^2: a^2 \sin ^2 \tfrac{\phi - \phi _0}{2} + b^2 \sin ^2 \tfrac{\psi - \psi _0}{2} \ge 1\big \}. \end{aligned}$$

  2.

    For every \(a, b\ge 0\), \(\phi _0, \psi _0\) satisfying \(a^2+b^2\ge 2\) and \(-\pi <\phi _0,\psi _0\le \pi \) the sets

    $$\begin{aligned}&\big \{(\phi , \psi )\in T^2: a^2 \sin ^2 \tfrac{\phi - \phi _0}{2} + b^2 \sin ^2 \tfrac{\psi - \psi _0}{2} \le 1\big \} \end{aligned}$$

    and

    $$\begin{aligned}&\big \{(\phi , \psi )\in T^2: a^2 \sin ^2 \tfrac{\phi - \phi _0}{2} + b^2 \sin ^2 \tfrac{\psi - \psi _0}{2} \ge 1\big \} \end{aligned}$$

    are caps.

Remark

\(a\), \(b\), \(\phi _0\), \(\psi _0\) can now be considered as functions \(a(x)\), \(b(x)\), \(\phi _0(x)\), \(\psi _0(x)\) of the argument \(x\in X\).

Lemma 4.2

There exist positive constants \(\gamma _1, \gamma _2\) such that, for every \(x\in X\),

$$\begin{aligned} \gamma _1<(a(x)+1)(b(x)+1)G(x)<\gamma _2. \end{aligned}$$

For every \(t\in \mathbb R \) define

$$\begin{aligned} M(t)&= \mathrm mes _8\{x\in X: G(x)<t\},\\ N(t)&= \mathrm mes _8\{x\in X: G(x)<t\; \text{ and}\; \min (a(x), b(x))<100\},\\ L(t)&= \mathrm mes _8\{x\in X: G(x)<t\; \text{ and}\; \min (a(x), b(x))\ge 100\}. \end{aligned}$$

It is easily seen that \(M(t)=N(t)=L(t)=0\) for \(t<0\) and \(M(t)=N(t)+L(t)\) for every \(t\in \mathbb R \).

The main goal of the present section is to estimate \(M(t)\). We estimate \(N(t)\) and \(L(t)\) separately in Lemmas 4.3 and 4.4.

Lemma 4.3

There exists \(\gamma _3>0\) such that

$$\begin{aligned} N(t)<\gamma _3 t^3 \end{aligned}$$

for every \(0<t<\frac{1}{2}\).

Lemma 4.4

There exist \(\gamma _4, \gamma _5>0\) such that

$$\begin{aligned} \gamma _4 t^3|\ln t|< L(t)< \gamma _5 t^3|\ln t| \end{aligned}$$

for every \(0<t<\frac{1}{2}\).

Before proving the lemmas, we state an estimate of \(M(t)\) as a corollary.

Corollary 4.5

There exist positive constants \(\gamma _6, \gamma _7\) such that

$$\begin{aligned} \gamma _6 t^3|\ln t|<M(t)< \gamma _7 t^3|\ln t| \end{aligned}$$

for every \(0<t<\frac{1}{2}\).

Proof of Lemma 4.3

Introduce the functions

$$\begin{aligned} N_1(t)&= \mathrm mes _8\{x\in X: G(x)<t\; \text{ and}\; a(x)<100\},\\ N_2(t)&= \mathrm mes _8\{x\in X: G(x)<t\; \text{ and}\; b(x)<100\}. \end{aligned}$$

Obviously, \(N_1(t)=N_2(t)\) and \(N(t)\le N_1(t)+N_2(t)\).

Suppose

$$\begin{aligned} 0<t\le \tfrac{\gamma _1}{1000\pi }. \end{aligned}$$

Let \(a(x)<100\) and \(G(x)<t\). Lemma 4.2 implies

$$\begin{aligned} b(x)\ge \tfrac{\gamma _1}{(a(x)+1)G(x)}-1 > \tfrac{\gamma _1}{200t}. \end{aligned}$$

By Lemma 4.1, cap \(C^+(x)\) is described by the inequality

$$\begin{aligned} a(x)^2 \sin ^2 \tfrac{\phi -\phi _0}{2}+ b(x)^2 \sin ^2 \tfrac{\psi -\psi _0}{2}\le 1. \end{aligned}$$

The last inequality implies that every point of \(C^+(x)\) with coordinates \((\phi , \psi )\) satisfies

$$\begin{aligned} \big | \sin \tfrac{\psi -\psi _0}{2} \big | \le \tfrac{1}{b} < \tfrac{200t}{\gamma _1}. \end{aligned}$$

Hence

$$\begin{aligned} C^+(x)\subset \big \{ (\phi , \psi )\in T^2: \big | \sin \tfrac{\psi -\psi _0}{2} \big |< \tfrac{200t}{\gamma _1} \big \} = S(t,x). \end{aligned}$$

A set \(S\subset T^2\) is called a strip if there exist \(\psi _1\in (-\pi , \pi ]\) and \(d\in (-1, 1)\) such that

$$\begin{aligned} S=\big \{(\phi , \psi )\in T^2: \cos (\psi -\psi _1) \ge d\big \}. \end{aligned}$$

The centerline of \(S\) is the line \(\psi =\psi _1\), and \(2\arccos d\) is the width of \(S\).

\(S(t,x)\) is obviously a strip of width

$$\begin{aligned} w(t)=4\arcsin \tfrac{200t}{\gamma _1}< \tfrac{1000\pi t}{\gamma _1} \end{aligned}$$

and the centerline of \(S(t,x)\) is described by the equation \(\psi =\psi _0\).

Let

$$\begin{aligned} k=k(t)=\big \lceil \tfrac{2\pi }{w(t)} \big \rceil . \end{aligned}$$

Consider \(k\) strips \(S_1, S_2, \ldots , S_k \subset T^2\), each of width \(2w(t)\), such that \(S_j\) has centerline \(\psi =-\pi +\frac{2\pi j}{k}\).

It is obvious that \(S(t,x)\subset S_j\), where \(j\) is the nearest integer to \(\frac{k (\psi _0+\pi )}{2\pi }\) and \(S_0=S_k\).

Let \(x=(x_1, x_2, x_3, x_4)\), where \(x_i\in T^2\). Every \(x_i\) lies in \(\partial C^+(x)\subset S(t,x)\subset S_j\); therefore \(x\in S_j^4\).

Finally,

$$\begin{aligned} N_1(t)&= \mathrm mes _8\{x\in X: G(x)<t\; \text{ and}\; a(x)<100\} \\&\le \mathrm mes _8 \Big (\bigcup \limits _{j=1}^{k(t)} S_j^4 \Big ) \le k(t)(4\pi w(t))^4 \le (4\pi )^5w(t)^3\le \gamma ^{\prime }_3 t^3. \end{aligned}$$

Then

$$\begin{aligned} N(t)\le 2N_1(t) \le 2\gamma ^{\prime }_3 t^3. \end{aligned}$$

The case

$$\begin{aligned} 0<t\le \tfrac{\gamma _1}{1000\pi } \end{aligned}$$

is proved completely.

Suppose

$$\begin{aligned} \tfrac{\gamma _1}{1000\pi }<t<\tfrac{1}{2}. \end{aligned}$$

Obviously,

$$\begin{aligned} N(t)\le \mathrm mes _8\bigl ((T^2)^4 \bigr )= 256 \pi ^8. \end{aligned}$$

Then

$$\begin{aligned} N(t)< 256 \pi ^8 \big ( \tfrac{1000\pi }{\gamma _1} \big )^3 t^3, \end{aligned}$$

and Lemma 4.3 is now proved completely.\(\square \)

Proof of Lemma 4.4

Suppose \(\min (a(x), b(x))\ge 100\). Write

$$\begin{aligned} x=(x_1, x_2, x_3, x_4),\quad \text{ where}\quad x_i=(\phi _i, \psi _i)\in T^2 \quad \text{ for} \quad i=1,2,3,4. \end{aligned}$$

Let

$$\begin{aligned} \alpha (x)=\tfrac{1}{a(x)}, \quad \beta (x)=\tfrac{1}{b(x)}. \end{aligned}$$

By assumption, \(0<\alpha (x),\beta (x)<\frac{1}{100}\).

Lemma 4.2 easily implies that there exist \(\gamma _1^{\prime }, \gamma _2^{\prime }>0\) such that

$$\begin{aligned} \gamma _1^{\prime }\alpha (x)\beta (x)<G(x)<\gamma _2^{\prime }\alpha (x)\beta (x). \end{aligned}$$
(11)

Since \(x_i = (\phi _i, \psi _i) \in \partial C^+(x)\) for \(i=1,2,3,4\), we have

$$\begin{aligned} \frac{\sin ^2 \frac{\phi _i-\phi _0}{2}}{\alpha ^2}+\frac{\sin ^2 \frac{\psi _i-\psi _0}{2}}{\beta ^2}=1. \end{aligned}$$

Therefore, we can define parameters \(-\pi < \theta _i \le \pi \) for \(i=1,2,3,4\) such that

$$\begin{aligned} \sin \tfrac{\phi _i-\phi _0}{2}=\alpha \cos \theta _i\quad \text{ and} \quad \sin \tfrac{\psi _i-\psi _0}{2}=\beta \sin \theta _i. \end{aligned}$$

It is not hard to see that every point \(x\in X\) parametrized by 8 numbers

$$\begin{aligned} (\alpha , \beta , \phi _0, \psi _0, \theta _1, \theta _2, \theta _3, \theta _4) \end{aligned}$$

can be uniquely parametrized by another 8 numbers

$$\begin{aligned} (\phi _1, \psi _1, \phi _2, \psi _2, \phi _3, \psi _3, \phi _4, \psi _4). \end{aligned}$$

Since there are two parametrizations of a point \(x\in X\), consider the Jacobi matrix relating them. Its entries are computed as follows:

$$\begin{aligned} \tfrac{\partial \phi _i}{\partial \phi _0}&= 1,\quad \tfrac{\partial \psi _i}{\partial \phi _0}=0;\\ \tfrac{\partial \phi _i}{\partial \psi _0}&= 0,\quad \tfrac{\partial \psi _i}{\partial \psi _0}=1;\\ \tfrac{\partial \phi _i}{\partial \alpha }&= \tfrac{2\cos \theta _i}{\cos \tfrac{\phi _i-\phi _0}{2}};\quad \tfrac{\partial \psi _i}{\partial \alpha }=0;\\ \tfrac{\partial \phi _i}{\partial \beta }&= 0;\quad \tfrac{\partial \psi _i}{\partial \beta }=\tfrac{2\sin \theta _i}{\cos \tfrac{\psi _i-\psi _0}{2}};\\ \tfrac{\partial \phi _i}{\partial \theta _i}&= \tfrac{-2\alpha \sin \theta _i}{\cos \tfrac{\phi _i-\phi _0}{2}};\quad \tfrac{\partial \psi _i}{\partial \theta _i}=\tfrac{2\beta \cos \theta _i}{\cos \tfrac{\psi _i-\psi _0}{2}};\quad \tfrac{\partial \phi _i}{\partial \theta _j}=\tfrac{\partial \psi _i}{\partial \theta _j}=0. \end{aligned}$$

Therefore

$$\begin{aligned} J&= \left|\tfrac{D(\phi _1, \psi _1, \phi _2, \psi _2, \phi _3, \psi _3, \phi _4, \psi _4)}{D(\phi _0, \psi _0, \alpha , \beta , \theta _1, \theta _2, \theta _3, \theta _4)}\right|\\&= \left| \begin{array}{cccccccc} 1&0&1&0&1&0&1&0\\ 0&1&0&1&0&1&0&1\\ \frac{2\cos \theta _1}{\cos \frac{\phi _1-\phi _0}{2}}&0&\frac{2\cos \theta _2}{\cos \frac{\phi _2-\phi _0}{2}}&0&\frac{2\cos \theta _3}{\cos \frac{\phi _3-\phi _0}{2}}&0&\frac{2\cos \theta _4}{\cos \frac{\phi _4-\phi _0}{2}}&0\\ 0&\frac{2\sin \theta _1}{\cos \frac{\psi _1-\psi _0}{2}}&0&\frac{2\sin \theta _2}{\cos \frac{\psi _2-\psi _0}{2}}&0&\frac{2\sin \theta _3}{\cos \frac{\psi _3-\psi _0}{2}}&0&\frac{2\sin \theta _4}{\cos \frac{\psi _4-\psi _0}{2}}\\ \frac{-2\alpha \sin \theta _1}{\cos \frac{\phi _1-\phi _0}{2}}&\frac{2\beta \cos \theta _1}{\cos \frac{\psi _1-\psi _0}{2}}&0&0&0&0&0&0\\ 0&0&\frac{-2\alpha \sin \theta _2}{\cos \frac{\phi _2-\phi _0}{2}}&\frac{2\beta \cos \theta _2}{\cos \frac{\psi _2-\psi _0}{2}}&0&0&0&0\\ 0&0&0&0&\frac{-2\alpha \sin \theta _3}{\cos \frac{\phi _3-\phi _0}{2}}&\frac{2\beta \cos \theta _3}{\cos \frac{\psi _3-\psi _0}{2}}&0&0\\ 0&0&0&0&0&0&\frac{-2\alpha \sin \theta _4}{\cos \frac{\phi _4-\phi _0}{2}}&\frac{2\beta \cos \theta _4}{\cos \frac{\psi _4-\psi _0}{2}}\\ \end{array} \right|. \end{aligned}$$

Direct computation shows that

$$\begin{aligned} J&= \sum \limits _{(i\,j\,k\,l)} 64\, \mathrm sign \,(i\,j\,k\,l)\, \alpha ^2\beta ^2 \cdot \frac{1}{\prod _{m=1}^4 \cos \frac{\phi _m-\phi _0}{2} \cos \frac{\psi _m-\psi _0}{2} } \\&\times \cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \cos \frac{\phi _j-\phi _0}{2}\cos \frac{\psi _l-\psi _0}{2}, \end{aligned}$$

where \((i\,j\,k\,l)\) runs through all permutations of \((1\, 2\, 3\, 4)\).

From (11) it easily follows that

$$\begin{aligned}&\mathrm mes _8\big \{x\in (T^2)^4: \alpha (x)\beta (x)<\tfrac{t}{\gamma _2^{\prime }} \; \text{ and} \max (\alpha (x), \beta (x))<\tfrac{1}{100} \big \}\le L(t) \\&\quad \le \mathrm mes _8\big \{x\in (T^2)^4: \alpha (x)\beta (x)<\tfrac{t}{\gamma _1^{\prime }} \; \text{ and} \max (\alpha (x), \beta (x))<\tfrac{1}{100} \big \}. \end{aligned}$$

Therefore

$$\begin{aligned}&\int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma _2^{\prime }} \end{matrix}} d\phi _1\,d\psi _1\,d\phi _2\,d\psi _2\,d\phi _3\,d\psi _3\,d\phi _4\,d\psi _4 \le L(t) \\&\quad \le \int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma _1^{\prime }} \end{matrix}}\,d\phi _1\,d\psi _1\,d\phi _2\,d\psi _2\,d\phi _3\,d\psi _3\,d\phi _4\,d\psi _4. \end{aligned}$$

In variables \((\alpha , \beta , \phi _0, \psi _0, \theta _1, \theta _2, \theta _3, \theta _4)\) the last inequality can be written as follows:

$$\begin{aligned}&\int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma _2^{\prime }} \end{matrix}} \int \limits _{\begin{matrix} \phi _0, \psi _0 \in (-\pi , \pi ]\\ \theta _{1,2,3,4} \in (-\pi , \pi ] \end{matrix}} |J|\, d\alpha \,d\beta \,d\phi _0\,d\psi _0\,d\theta _1\,d\theta _2\,d\theta _3\,d\theta _4 \le L(t) \\&\quad \le \int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma _1^{\prime }} \end{matrix}} \int \limits _{\begin{matrix} \phi _0, \psi _0 \in (-\pi , \pi ]\\ \theta _{1,2,3,4} \in (-\pi , \pi ] \end{matrix}} |J|\,d\alpha \,d\beta \,d\phi _0\,d\psi _0\,d\theta _1\,d\theta _2\,d\theta _3\,d\theta _4. \end{aligned}$$

Let

$$\begin{aligned} J_1&= \frac{J}{\alpha ^2\beta ^2}=\sum \limits _{(i\,j\,k\,l)} 64\, \mathrm sign \,(i\,j\,k\,l)\, \cdot \frac{1}{\prod _{m=1}^4 \cos \frac{\phi _m-\phi _0}{2} \cos \frac{\psi _m-\psi _0}{2} } \nonumber \\&\times \cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \cos \frac{\phi _j-\phi _0}{2}\cos \frac{\psi _l-\psi _0}{2}. \end{aligned}$$
(12)

Then \(J_1\) can be considered as a function \(J_1(\alpha , \beta , \phi _0, \psi _0, \theta _1, \theta _2, \theta _3, \theta _4)\).

Obviously, for \(\gamma = \gamma _1^{\prime }\) or \(\gamma = \gamma _2^{\prime }\),

$$\begin{aligned}&\int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma } \end{matrix}} \int \limits _{\begin{matrix} \phi _0, \psi _0 \in (-\pi , \pi ]\\ \theta _{1,2,3,4} \in (-\pi , \pi ] \end{matrix}} |J|\, d\alpha \,d\beta \,d\phi _0\,d\psi _0\,d\theta _1\,d\theta _2\,d\theta _3\,d\theta _4\nonumber \\&\quad =\int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma } \end{matrix}} \alpha ^2\beta ^2\, d\alpha \, d\beta \int \limits _{\begin{matrix} \phi _0, \psi _0 \in (-\pi , \pi ]\\ \theta _{1,2,3,4} \in (-\pi , \pi ] \end{matrix}} |J_1|\, d\phi _0\,d\psi _0\,d\theta _1\,d\theta _2\,d\theta _3\,d\theta _4. \end{aligned}$$
(13)

Since \(\max (\alpha , \beta )<\frac{1}{100}\), we have

$$\begin{aligned} \big |\sin \tfrac{\phi _m-\phi _0}{2}\big |<\tfrac{1}{100}\quad \text{ and} \quad \big |\sin \tfrac{\psi _m-\psi _0}{2}\big |<\tfrac{1}{100} \end{aligned}$$

for \(m=1,2,3,4\).

Hence

$$\begin{aligned} \cos \tfrac{\phi _m-\phi _0}{2}> \tfrac{4999}{5000} \quad \text{ and} \quad \cos \tfrac{\psi _m-\psi _0}{2}> \tfrac{4999}{5000}. \end{aligned}$$

Consequently,

$$\begin{aligned}&\big |\cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \cos \tfrac{\phi _j-\phi _0}{2}\cos \tfrac{\psi _l-\psi _0}{2} \\&\quad -\cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \big | \le \tfrac{2}{5000}. \end{aligned}$$

Applying (12), we obtain the following chain of inequalities, whose right-hand sides are independent of \(\alpha , \beta \):

$$\begin{aligned} |J_1|&\ge 64 \big | \sum \limits _{(i\,j\,k\,l)} \, \mathrm sign \,(i\,j\,k\,l)\, \cdot \cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \cos \tfrac{\phi _j-\phi _0}{2}\cos \tfrac{\psi _l-\psi _0}{2} \big |\\&\ge 64 \big | \sum \limits _{(i\,j\,k\,l)} \, \mathrm sign \,(i\,j\,k\,l)\, \cdot \cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \big | - 64\cdot 24\cdot \frac{2}{5000}. \end{aligned}$$

Let \(\theta _m=\frac{(m-2)\pi }{2}\) for \(m=1,2,3,4\). Then

$$\begin{aligned} \big | \sum \limits _{(i\,j\,k\,l)} \, \mathrm sign \,(i\,j\,k\,l)\, \cdot \cos ^2 \theta _i \cos \theta _j \sin ^2 \theta _k \sin \theta _l \big | = 4. \end{aligned}$$
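The value \(4\) claimed here can be confirmed by brute force over all \(24\) permutations (a short sketch; the helper `sgn`, which computes the sign of a permutation by counting inversions, is introduced only for this check):

```python
import math
from itertools import permutations

def sgn(p):
    """Sign of a permutation given as a tuple of distinct indices."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

theta = [(m - 2) * math.pi / 2 for m in (1, 2, 3, 4)]   # theta_m = (m-2)*pi/2
total = sum(
    sgn((i, j, k, l))
    * math.cos(theta[i]) ** 2 * math.cos(theta[j])
    * math.sin(theta[k]) ** 2 * math.sin(theta[l])
    for (i, j, k, l) in permutations(range(4))
)
# |total| equals 4 up to floating-point noise.
```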

Consequently, there exists a neighbourhood of the point \((-\frac{\pi }{2}, 0, \frac{\pi }{2}, \pi )\) in the coordinates \((\theta _1, \theta _2, \theta _3, \theta _4)\) such that \(|J_1|>64\) in this neighbourhood, and the neighbourhood is independent of \(\alpha , \beta , \phi _0, \psi _0\).

Therefore, there exist positive constants \(\gamma ^{\prime }_4, \gamma ^{\prime }_5\) independent of \(\alpha , \beta \) and satisfying

$$\begin{aligned} \gamma ^{\prime }_4 < \int \limits _{\begin{matrix} \phi _0, \psi _0 \in (-\pi , \pi ]\\ \theta _{1,2,3,4} \in (-\pi , \pi ] \end{matrix}} |J_1|\, d\phi _0\, d\psi _0\, d\theta _1\, d\theta _2 \,d\theta _3\, d\theta _4 < \gamma ^{\prime }_5 \end{aligned}$$
(14)

for every \(0<\alpha , \beta < \frac{1}{100}\).

Inequalities (13) and (14) together imply

$$\begin{aligned} \gamma ^{\prime }_4\cdot \int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma ^{\prime }_2} \end{matrix}} \alpha ^2\beta ^2\, d\alpha \, d\beta < L(t) < \gamma ^{\prime }_5\cdot \int \limits _{\begin{matrix} \max (\alpha , \beta )<\frac{1}{100}\\ \alpha \beta <\frac{t}{\gamma ^{\prime }_1} \end{matrix}} \alpha ^2\beta ^2\, d\alpha \, d\beta . \end{aligned}$$
(15)

If \(\tau <\frac{1}{10000}\) then

$$\begin{aligned} \int \limits _{\begin{matrix} \max (\alpha , \beta )<\tfrac{1}{100}\\ \alpha \beta <\tau \end{matrix}} \alpha ^2\beta ^2\, d\alpha \,d\beta = \tfrac{\tau ^3}{9}-\tfrac{2 \ln 100}{3}\, \tau ^3 + \tfrac{1}{3}\,\tau ^3|\ln \tau |. \end{aligned}$$
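This elementary integral can be double-checked numerically (a sketch with `scipy.integrate.quad`; the inner \(\beta \)-integral is evaluated in closed form, and everything is rescaled by \(\tau ^3\) to keep the quadrature well conditioned; the values of \(c\) and \(\tau \) are the ones relevant here):

```python
import math
from scipy.integrate import quad

c, tau = 1 / 100, 1e-5          # requires tau < c^2 = 1/10000

def inner(alpha):
    """alpha^2 times the beta-integral over 0 < beta < min(c, tau/alpha), divided by tau^3."""
    return alpha ** 2 * min(c, tau / alpha) ** 3 / (3 * tau ** 3)

# Break the quadrature at alpha = tau/c, where the integrand changes its analytic form.
num, _ = quad(inner, 0, c, points=[tau / c], limit=200)
closed = 1 / 9 - (2 * math.log(100)) / 3 + abs(math.log(tau)) / 3
# num matches closed: the integral equals
# tau^3/9 - (2 ln 100 / 3) tau^3 + (1/3) tau^3 |ln tau|.
```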

Consequently, as \(t\rightarrow 0\), the main terms of the left- and right-hand sides of (15) have order \(t^3|\ln t|\). Therefore, \(\frac{L(t)}{t^3|\ln t|}\) is bounded from above and from below by positive constants on some interval \((0, \varepsilon )\).

On the segment \([\varepsilon , \frac{1}{2}]\) the functions \(L(t)\) and \(t^3|\ln t|\) are continuous and positive. Consequently, the quotient \(\frac{L(t)}{t^3|\ln t|}\) is bounded from above and below on this segment by two positive constants as well (possibly different from those of the previous paragraph). Combining the cases \(t \in (0, \varepsilon )\) and \(t \in [\varepsilon , \frac{1}{2}]\), we conclude that \(\frac{L(t)}{t^3|\ln t|}\) is bounded from above and below by positive constants on the whole interval \((0, \frac{1}{2})\). Thus Lemma 4.4 is proved.\(\square \)

5 Proofs of Main Results

We now proceed with the proofs of Theorems 2.1, 2.2 and 2.3.

Proof of Theorem 2.1

It is obvious that

$$\begin{aligned} \int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}\, dx&< \int \limits _{(T^2)^4} \lambda ^4\left(e^{-\lambda G(x)}+e^{-\lambda (4\pi ^2- G(x))}\right)\, dx\\&< 2\int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}\, dx; \end{aligned}$$

the left inequality holds because the added summand is positive, and the right one because \(G(x)\le 2\pi ^2\), so that \(e^{-\lambda (4\pi ^2-G(x))}\le e^{-\lambda G(x)}\).

Then, by Lemma 3.2,

$$\begin{aligned} \mathsf E \, f_3 = O^*\big ( \int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}\, dx \big ). \end{aligned}$$

According to [4], the identity

$$\begin{aligned} \int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}\, dx= \int \limits _\mathbb{R } \lambda ^4 e^{-\lambda t}\, dM(t) \end{aligned}$$

holds, where the right-hand side is a Stieltjes integral.

Since \(G(x)\) is the measure of \(C^+(x)\), we have \(0<G(x)\le 2\pi ^2\) for every \(x\in X\). Therefore, \(M(t)\) is constant for \(t\le 0\) and for \(t\ge 2\pi ^2\). Hence

$$\begin{aligned} \int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}\, dx=\int \limits _{0}^{2\pi ^2} \lambda ^4 e^{-\lambda t}\, dM(t). \end{aligned}$$

Thus

$$\begin{aligned} \mathsf E \, f_3 = O^* \big ( \int \limits _{0}^{2\pi ^2} \lambda ^4 e^{-\lambda t}\, dM(t) \big ). \end{aligned}$$
(16)

Since \(M(t)\) is non-decreasing while \(e^{-\lambda t}\) is decreasing and continuous, integration by parts is possible and gives

$$\begin{aligned} \int \limits _{0}^{2\pi ^2} \lambda ^4 e^{-\lambda t}\, dM(t)&= \lambda ^4 M(2\pi ^2)e^{-2\lambda \pi ^2}\nonumber \\&\quad + \lambda ^5 \int \limits _{\frac{1}{2}}^{2\pi ^2} e^{-\lambda t}M(t)\, dt + \lambda ^5 \int \limits _{0}^{\frac{1}{2}} e^{-\lambda t}M(t)\, dt. \end{aligned}$$
(17)

Obviously, as \(\lambda \rightarrow \infty \),

$$\begin{aligned}&\lambda ^4 M(2\pi ^2)e^{-2\lambda \pi ^2}=o(1),\quad \lambda ^5 \int \limits _{\frac{1}{2}}^{2\pi ^2} e^{-\lambda t}M(t)\, dt =o(1),\nonumber \\&\quad \lambda ^5 \int \limits _{0}^{\frac{1}{2}} e^{-\lambda t}M(t)\, dt = O^*\big ( \lambda ^5\int \limits _{0}^{\frac{1}{2}} e^{-\lambda t}t^3|\ln t|\, dt \big ). \end{aligned}$$
(18)

Let \(u=e^{-\lambda t}\); then

$$\begin{aligned} t=- \frac{\ln u}{\lambda }\quad \text{ and} \quad dt=-\frac{du}{\lambda u}. \end{aligned}$$

Therefore

$$\begin{aligned} \int \limits _{0}^{\frac{1}{2}} e^{-\lambda t}t^3|\ln t|\, dt = \int \limits _{e^{\frac{-\lambda }{2}}}^{1} \frac{|\ln u|^3}{\lambda ^4} \cdot (\ln \lambda -\ln (-\ln u))\, du= O^*(\lambda ^{-4}\ln \lambda ). \end{aligned}$$
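The estimate \(O^*(\lambda ^{-4}\ln \lambda )\) can also be checked numerically (a sketch, assuming scipy is available; after the substitution \(s=\lambda t\) the integral equals \(\lambda ^{-4}\int _0^{\lambda /2}e^{-s}s^3(\ln \lambda -\ln s)\,ds\), and the ratio to \(\lambda ^{-4}\ln \lambda \) approaches \(\Gamma (4)=6\) from below):

```python
import math
from scipy.integrate import quad

def lhs(lam):
    # ∫_0^{1/2} e^{-lam*t} t^3 |ln t| dt, computed after the substitution s = lam*t.
    # The upper limit lam/2 is capped at 50: the tail beyond it is negligible.
    f = lambda s: math.exp(-s) * s**3 * (math.log(lam) - math.log(s))
    val, _ = quad(f, 0, min(lam / 2, 50.0), limit=200)
    return val / lam**4

for lam in (1e3, 1e4, 1e5):
    ratio = lhs(lam) / (lam**-4 * math.log(lam))
    # The ratio equals 6 - 6*psi(4)/ln(lam) up to a negligible tail,
    # and tends to Gamma(4) = 6 as lam grows.
    assert 4.5 < ratio < 6.0
```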

Hence

$$\begin{aligned} \lambda ^5 \int \limits _{0}^{\frac{1}{2}} e^{-\lambda t}M(t)\, dt = O^*(\lambda \ln \lambda ). \end{aligned}$$
(19)

Substitution of (18) and (19) into (17) gives

$$\begin{aligned} \int \limits _{0}^{2\pi ^2} \lambda ^4 e^{-\lambda t}\, dM(t) = O^*(\lambda \ln \lambda ). \end{aligned}$$

Thus, according to (16),

$$\begin{aligned} \mathsf E \, f_3 = O^*(\lambda \ln \lambda ), \end{aligned}$$

which is the statement of Theorem 2.1.\(\square \)

Proof of Theorem 2.2

From the Dehn–Sommerville equations for a simplicial 4-polytope it follows that

$$\begin{aligned} f_2=2f_3+r_2 \quad \text{ and} \quad f_1=f_3+f_0+r_1, \end{aligned}$$

where the random variables \(r_1\) and \(r_2\) are error terms accounting for degenerate cases, i.e. \(r_1=r_2=0\) almost surely if \(n(T^2)>4\), and \(r_1, r_2<10\) almost surely. Since

$$\begin{aligned} \lim \limits _{\lambda \rightarrow \infty }\mathsf P (n(T^2)\le 4)=0, \end{aligned}$$

it follows that

$$\begin{aligned} \mathsf E \, r_1 = o(1), \quad \mathsf E \, r_2 = o(1),\quad \mathsf E \, f_3 = O^*(\lambda \ln \lambda ) \end{aligned}$$

as \(\lambda \rightarrow \infty \) by Theorem 2.1. Also,

$$\begin{aligned} \mathsf E \, f_0 = \mathsf E \, n(T^2) = 4\lambda \pi ^2 \end{aligned}$$

since \(n(T^2)\) is distributed as Pois\(\,(4\lambda \pi ^2)\).
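These identities can be checked empirically: for points in general position, Euler's relation \(f_0-f_1+f_2-f_3=0\) combined with \(f_2=2f_3\) gives \(f_1=f_0+f_3\). A minimal simulation sketch (scipy's Qhull wrapper is assumed available; the intensity \(\lambda =2\) is an arbitrary illustrative value):

```python
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
lam = 2.0                                  # illustrative intensity
n = rng.poisson(lam * 4 * np.pi**2)        # n(T^2) ~ Pois(4*lam*pi^2)
phi = rng.uniform(-np.pi, np.pi, n)
psi = rng.uniform(-np.pi, np.pi, n)
# Points of the Clifford torus T^2 embedded in E^4.
pts = np.column_stack([np.cos(phi), np.sin(phi), np.cos(psi), np.sin(psi)])

hull = ConvexHull(pts)
facets = {tuple(sorted(s)) for s in hull.simplices}        # tetrahedral facets
f3 = len(facets)
f2 = len({t for s in facets for t in combinations(s, 3)})  # triangles
f1 = len({e for s in facets for e in combinations(s, 2)})  # edges
f0 = len({v for s in facets for v in s})                   # vertices

assert f0 == n       # all points lie on a sphere, hence all are extreme
assert f2 == 2 * f3  # Dehn-Sommerville for a simplicial 4-polytope
assert f1 == f0 + f3 # Euler: f0 - f1 + f2 - f3 = 0
```

Since the sampled points lie on \(S^3_{\sqrt{2}}\), they are almost surely in general position, so the hull is simplicial and every face appears as a subset of some facet.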

Finally,

$$\begin{aligned} \mathsf E \, f_2&= 2\mathsf E \, f_3 + \mathsf E \, r_2 = O^*(\lambda \ln \lambda ),\\ \mathsf E \, f_1&= \mathsf E \, f_3 + \mathsf E \, f_0 + \mathsf E \, r_1 = O^*(\lambda \ln \lambda ), \end{aligned}$$

and Theorem 2.2 is proved.\(\square \)

Proof of Theorem 2.3

Notice that

$$\begin{aligned} h\bigl (\lambda G(x)\bigr )<\frac{1}{4} \quad \text{ and} \quad e^{-4\lambda \pi ^2+\lambda G(x)}\le e^{-2\lambda \pi ^2}. \end{aligned}$$

These inequalities imply the estimate

$$\begin{aligned} \int \limits _{(T^2)^4} e^{-4\lambda \pi ^2+\lambda G(x)}h\bigl (\lambda G(x)\bigr )\, dx \le 64\pi ^8 e^{-2\lambda \pi ^2} = o(1) \end{aligned}$$

as \(\lambda \rightarrow \infty \).

Further,

$$\begin{aligned} 2\lambda \pi ^2\le 4\lambda \pi ^2-\lambda G(x) < 4\lambda \pi ^2. \end{aligned}$$

Therefore, from (6) follows

$$\begin{aligned} h\bigl (4\lambda \pi ^2-\lambda G(x)\bigr )=O^*(\lambda ^{-1}). \end{aligned}$$

Consequently,

$$\begin{aligned} \int \limits _{(T^2)^4} \lambda ^4 e^{-\lambda G(x)}h\bigl (4\lambda \pi ^2-\lambda G(x)\bigr ) \, dx = O^*\big ( \lambda ^3 \int \limits _{(T^2)^4} e^{-\lambda G(x)} \, dx \big ) = O^*(\ln \lambda ), \end{aligned}$$

since the integral in the middle expression was already estimated in the proof of Theorem 2.1.

Obviously,

$$\begin{aligned} \mathsf P \bigl (n(T^2)=2\bigr )=o(1), \quad \mathsf P \bigl (n(T^2)<2\bigr )=o(1). \end{aligned}$$

Now Theorem 2.3 follows easily from (7), since every summand in that identity has now been estimated.\(\square \)