Superdiffusion in the periodic Lorentz gas

We prove a superdiffusive central limit theorem for the displacement of a test particle in the periodic Lorentz gas in the limit of large times $t$ and low scatterer densities (Boltzmann-Grad limit). The normalization factor is $\sqrt{t\log t}$, where $t$ is measured in units of the mean collision time. This result holds in any dimension and for a general class of finite-range scattering potentials. We also establish the corresponding invariance principle, i.e., the weak convergence of the particle dynamics to Brownian motion.


Introduction
The periodic Lorentz gas is one of the iconic models of "chaotic" diffusion in deterministic systems. It describes the dynamics of a test particle in an infinite periodic array of spherically symmetric scatterers. The main results characterizing the diffusive nature of the periodic Lorentz gas have to date been largely restricted to the two-dimensional setting and hard-sphere scatterers. The first seminal result on this subject was the proof of a central limit theorem for the displacement of the test particle at large times $t$ for the finite-horizon Lorentz gas by Bunimovich and Sinai [8]. For more general invariance principles see Melbourne and Nicol [21] and references therein. In the case of the infinite-horizon Lorentz gas, Bleher [5] pointed out that the mean-square displacement grows like $t\log t$ as $t \to \infty$, as opposed to the linear growth in the finite-horizon case. The superdiffusive central limit theorem suggested in [5] was first proved by Szász and Varjú [27] for the discrete-time billiard map. Dolgopyat and Chernov [14] provided an alternative proof, and established the central limit theorem and invariance principle for the billiard flow. Analogous results hold for the stadium billiard (Bálint and Gouëzel [2]) and billiards with cusps (Bálint, Chernov and Dolgopyat [1]). The difficulty in extending the above findings to dimensions greater than two lies in the possibly exponential growth of the complexity of singularities (Bálint and Tóth [3,4], Chernov [12]) and, in the case of infinite horizon, the subtle geometry of channels (Dettmann [13], Nándori, Szász and Varjú [22]).
In the present paper we prove an unconditional superdiffusive central limit theorem for the periodic Lorentz gas in any dimension $d \geq 2$, valid in the limit of low scatterer density (Boltzmann-Grad limit) and for a general class of finite-range scattering potentials. The precise setting is as follows. Let $\mathcal{L} \subset \mathbb{R}^d$ be a fixed Euclidean lattice of covolume one (such as the cubic lattice $\mathcal{L} = \mathbb{Z}^d$), and define the scaled lattice $\mathcal{L}_r := r^{(d-1)/d} \mathcal{L}$. At each point of $\mathcal{L}_r$ we center a sphere of radius $r$. We consider a test particle that moves along straight lines with unit speed until it hits a sphere, where it is scattered elastically. This scaling of scattering radius vs. lattice spacing ensures that the mean free path length (i.e., the average distance between consecutive collisions) has the limit $\xi = 1/v_{d-1}$ as $r \to 0$, where $v_{d-1} = \pi^{(d-1)/2}/\Gamma(\frac{d+1}{2})$ denotes the volume of the unit ball in $\mathbb{R}^{d-1}$.
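As a quick numerical sanity check of the normalisation (not part of the paper's argument; the function names below are ours), the constant $v_{d-1} = \pi^{(d-1)/2}/\Gamma(\frac{d+1}{2})$ and the limiting mean free path $\xi = 1/v_{d-1}$ can be evaluated directly:

```python
import math

def unit_ball_volume(k):
    # Volume of the unit ball in R^k: pi^(k/2) / Gamma(k/2 + 1).
    # With k = d - 1 this equals pi^((d-1)/2) / Gamma((d+1)/2), i.e. v_{d-1}.
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def mean_free_path(d):
    # Limiting mean free path in the Boltzmann-Grad scaling: xi = 1 / v_{d-1}.
    return 1.0 / unit_ball_volume(d - 1)

# v_1 = 2 (length of the interval [-1, 1]), v_2 = pi (area of the unit disc):
print(mean_free_path(2))  # 0.5
print(mean_free_path(3))  # 1/pi ≈ 0.3183
```

The two printed values correspond to $\xi = 1/2$ for $d = 2$ and $\xi = 1/\pi$ for $d = 3$.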
In the case of the classic Lorentz gas the scattering mechanism is given by specular reflection, but as in [19] we will here also allow more general spherically symmetric scattering maps. The precise conditions will be stated in Section 2. The position of our test particle at time $t$ is denoted by (1.1) $x_t = x_t(x_0, v_0) \in K_r := \mathbb{R}^d \setminus (\mathcal{L}_r + r \mathcal{B}_1^d)$, where $x_0$ and $v_0$ are position and velocity at time $t = 0$, and $\mathcal{B}_1^d$ is the open unit ball in $\mathbb{R}^d$ centered at the origin. We use the convention that for any boundary point $x_0 \in \partial K_r$ we choose the outgoing velocity $v_0$, i.e. the velocity after the scattering. The corresponding phase space is denoted by $\mathrm{T}^1(K_r)$. For notational reasons it is convenient to extend the dynamics to $\mathrm{T}^1(\mathbb{R}^d) := \mathbb{R}^d \times \mathrm{S}_1^{d-1}$ by setting $x_t = x_0$ for all initial conditions $x_0 \notin K_r$. We consider the time evolution of a test particle with random initial data $(x_0, v_0) \in \mathrm{T}^1(\mathbb{R}^d)$, distributed according to a given Borel probability measure $\Lambda$ on $\mathrm{T}^1(\mathbb{R}^d)$. The following superdiffusive central limit theorem, valid for small scattering radii and large times, is the main result of this paper.

Theorem 1.1. Let $d \geq 2$ and fix a Euclidean lattice $\mathcal{L} \subset \mathbb{R}^d$ of covolume one. Assume $(x_0, v_0)$ is distributed according to an absolutely continuous Borel probability measure $\Lambda$ on $\mathrm{T}^1(\mathbb{R}^d)$.
Then, for any bounded continuous $f : \mathbb{R}^d \to \mathbb{R}$, the convergence (1.2) holds.
Here $\zeta(d) := \sum_{n=1}^{\infty} n^{-d}$ denotes the Riemann zeta function. Theorem 1.1 will follow from its discrete-time analogue, Theorem 1.2 below. Let us denote by $q_n = q_n(q_0, v_0) \in \partial K_r$ ($n = 1, 2, 3, \ldots$) the location where the test particle with initial condition $(q_0, v_0)$ leaves the $n$th scatterer. It is natural in this setting to assume $q_0 \in \partial K_r$. By the translational invariance of the lattice, we may in fact assume without loss of generality $q_0 \in r \mathrm{S}_1^{d-1}$. For given exit velocity $v_0$, we define the exit parameter $s_0$, and stipulate in the following that the random variable $s_0$ is uniformly distributed in the unit disc orthogonal to $v_0$.

Theorem 1.2. Let $d \geq 2$ and $\mathcal{L}$ as above. Assume $v_0$ is distributed according to an absolutely continuous Borel probability measure $\lambda$ on $\mathrm{S}_1^{d-1}$. Then, for any bounded continuous $f$, the convergence (1.5) holds.

The starting point of our analysis is the paper [19], which proves that, for every fixed $t > 0$, the limit $r \to 0$ in (1.2) (resp. (1.5)) exists and is given by a continuous-time (resp. discrete-time) Markov process. The main objective of the present study is therefore to prove a superdiffusive central limit theorem for each of these Markov processes. This is stated as Theorem 3.2 in Section 3 after a brief survey of the relevant results from [19]. The subsequent sections of the paper are devoted to the proof of Theorem 3.2.

The scattering map
We now specify the conditions on the scattering map that are assumed in Theorems 1.1 and 1.2.
We consider the scattering map $\Theta : \mathcal{S} \to \mathcal{S}$. The incoming data is denoted by $(v_-, b) \in \mathcal{S}$, where $v_-$ is the velocity of the particle before the collision and $b$ the impact parameter, i.e., the point of impact on $\mathrm{S}^{d-1}$. The outgoing data is analogously denoted by $(v_+, s) \in \mathcal{S}$, where $v_+$ is the velocity of the particle after the collision and $s$ the exit parameter; cf. Figure 1. Since we assume the scattering map is spherically symmetric, it is sufficient to define $\Theta$ for $(v_-, b) = (e_1, w e_2)$ with $w \in [0, 1)$, where $e_j$ denotes the unit vector in the $j$th coordinate direction. Any spherically symmetric scattering map (2.2) which preserves angular momentum is thus uniquely determined by (2.3) $\Theta(e_1, w e_2) = \big(e_1 \cos\theta(w) + e_2 \sin\theta(w),\; -e_1 w \sin\theta(w) + e_2 w \cos\theta(w)\big)$, where $\theta(w)$ is called the scattering angle.
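The structure of (2.3) can be checked numerically in the plane spanned by $e_1, e_2$: the outgoing velocity has unit speed, the exit parameter has norm $w$, and the two are orthogonal. The choice of $\theta$ below (specular reflection off a hard sphere) is only an illustrative example of a scattering angle; the function names are ours.

```python
import math

def scattering_map(w, theta):
    # Eq. (2.3), written in the e1-e2 plane:
    #   v_plus = e1*cos(theta) + e2*sin(theta)
    #   s      = -e1*w*sin(theta) + e2*w*cos(theta)
    v_plus = (math.cos(theta), math.sin(theta))
    s = (-w * math.sin(theta), w * math.cos(theta))
    return v_plus, s

def theta_specular(w):
    # Scattering angle of specular reflection off a hard sphere
    # (our illustrative choice; the paper allows general theta(w)).
    return math.pi - 2.0 * math.asin(w)

w = 0.6
v_plus, s = scattering_map(w, theta_specular(w))
# Unit exit speed, exit parameter of norm w, and s orthogonal to v_plus:
print(math.hypot(*v_plus))                      # 1.0
print(math.hypot(*s))                           # 0.6
print(v_plus[0] * s[0] + v_plus[1] * s[1])      # 0.0 (up to rounding)
```

The orthogonality $v_+ \cdot s = 0$ holds identically in (2.3), independently of the choice of $\theta(w)$.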
Note that for more general impact parameters we have, by spherical symmetry, the analogous formula
Figure 1: Illustration of a scattering map satisfying Hypothesis (A).
with the matrix $S(w)$.
We extend the definition of $S(w)$ to $w = 0$ by setting $S(0) := -I_d \in \mathrm{SO}(d)$ for $d$ even and $S(0) := \mathrm{diag}(-I_{d-1}, 1) \in \mathrm{SO}(d)$ for $d$ odd. This choice ensures that $S(0) e_1 = -e_1$. For general initial data $(v_-, b) \in \mathcal{S}$, assume $R(v_-) \in \mathrm{SO}(d)$ and $w \in \mathcal{B}_1^{d-1}$ are chosen so that (2.9) holds. We use an inductive argument to work out the velocity $v_n$ after the $n$th collision, as well as the impact and exit parameters $b_n$ and $s_n$ of the $n$th collision.
Lemma 2.1. Fix $v_0$ and $R_0 \in \mathrm{SO}(d)$ so that $v_0 = R_0 e_1$, and denote by $(v_n)_{n \in \mathbb{N}}$, $(b_n)_{n \in \mathbb{N}}$, $(s_n)_{n \in \mathbb{N}}$ the sequences of velocities, impact and exit parameters of a given particle trajectory. Then there is a unique sequence $(w_n)_{n \in \mathbb{N}}$ in $\mathcal{B}_1^{d-1}$ such that for all $n \in \mathbb{N}$, where (2.12) $R_n := R_0 S(w_1) \cdots S(w_n)$.
Proof. We proceed by induction. We have $v_0 \cdot b_1 = 0$ and thus $e_1 \cdot R_0^{-1} b_1 = 0$. We define $w_1$ accordingly. Then the assumption (2.9) is satisfied and (2.10) yields the claim, which proves the case $n = 1$. Let us therefore assume the statement is true for $n = k - 1$. By the induction hypothesis, (2.9) holds with $v_- = v_{k-1}$, $b = b_k$, and we can apply (2.10). This completes the proof.
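In dimension $d = 2$ the recursion $v_n = R_0 S(w_1) \cdots S(w_n) e_1$ of Lemma 2.1 is a product of planar rotations, so the composition can be checked directly against the sum of the rotation angles. The angles below are our illustrative choices:

```python
import math

def rot(theta):
    # 2x2 rotation matrix; in d = 2 the collision matrices S(w) are of this form.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(a, v):
    return (a[0][0] * v[0] + a[0][1] * v[1], a[1][0] * v[0] + a[1][1] * v[1])

# v_n = R_0 S(w_1) ... S(w_n) e_1 with illustrative scattering angles:
angles = [0.3, -1.1, 2.0]
R = rot(0.5)                      # R_0, so that v_0 = R_0 e_1
for t in angles:
    R = matmul(R, rot(t))
v_n = apply(R, (1.0, 0.0))
# Planar rotations compose by adding angles: v_n = (cos(1.7), sin(1.7)).
print(v_n)
```

In particular $\|v_n\| = 1$ for every $n$, as it must be for a product of elements of $\mathrm{SO}(2)$ applied to a unit vector.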

The Boltzmann-Grad limit
We now recall the results of [18,19] that are relevant to our investigation. Define the Markov chain (3.1). We will discuss the transition kernel $\Psi_0(w, x, z)$ in detail in Section 5. At this point, it suffices to note that it is independent of $\xi_{n-1}$ and symmetric, i.e. $\Psi_0(w, x, z) = \Psi_0(z, x, w)$. It is also independent of the choice of $\theta$, $\mathcal{L}$ and $\Lambda$ [18]. (Note that $\Psi_0$ is related to the kernel $\Phi_0$ studied in [18,19,20]; the mean free path length is $\xi = 1/v_{d-1}$.) Both $\Psi_0(x, z)$ and $\Psi(x, z)$ define probability densities on $\mathbb{R}_{>0} \times \mathcal{B}_1^{d-1}$ with respect to $dx \, dz$. The first fact follows from the symmetry of the transition kernel, and the second from the relation (3.5). Suppose in the following that the sequence of random variables (3.6) is given by the Markov chain (3.1), where $(\xi_1, \eta_1)$ has density either $\Psi(x, z)$ (for the continuous-time setting) or $\Psi_0(x, z)$ (for the discrete-time setting). The relation (3.4) between the two reflects the fact that the continuous-time Markov process is a suspension flow over the discrete-time process, where the particle moves with unit speed between consecutive collisions; see [19, Sect. 6] for more details. We assume in the following that $R$ is a function $\mathrm{S}_1^{d-1} \to \mathrm{SO}(d)$ which satisfies $v = R(v) e_1$ and which is smooth when restricted to $\mathrm{S}_1^{d-1}$, where $v_\perp := (v_2, \ldots, v_d) \in \mathbb{R}^{d-1}$, and $R(e_1) = I$, $R(-e_1) = -I$. For $n \in \mathbb{N}$, define the following random variables: (3.9) $\nu_t := \max\{n \in \mathbb{Z}_{\geq 0} : \tau_n \leq t\}$ (the number of collisions within time $t$); (3.10) $V_n := R(v_0) S(\eta_1) \cdots S(\eta_n) e_1$, $V_0 := v_0$ (the velocity after the $n$th collision); (3.11) as $r \to 0$, where the random variable $(\xi_1, \eta_1)$ has density $\Psi(x, z)$.
The main part of this paper is devoted to the proof of the following superdiffusive central limit theorem for the processes $X_t$ and $Q_n$, which in turn implies Theorems 1.1 and 1.2. We will only assume that the random variable $(\xi_1, \eta_1)$ is such that the marginal distribution of $\eta_1$ is absolutely continuous on $\mathcal{B}_1^{d-1}$ with respect to Lebesgue measure; there is no further assumption on the distribution of $\xi_1$. This hypothesis is satisfied for $(\xi_1, \eta_1)$ with density $\Psi_0(x, z)$: the marginal distribution of $\eta_1$ is uniform on $\mathcal{B}_1^{d-1}$. We will later see that $(\xi_1, \eta_1)$ with density $\Psi(x, z)$ also complies with the above hypothesis (cf. Proposition 10.1). The processes $X_t$ and $Q_n$ are independent of $x_0$ and $q_0$, respectively, and we will in the following fix $v_0 \in \mathrm{S}_1^{d-1}$. Also, the required assumptions on the scattering angle $\theta$ are significantly weaker than in the previous theorems.
and (ii), where $N(0, I_d)$ is a centered normal random variable in $\mathbb{R}^d$ with identity covariance matrix.
In view of Theorem 3.1, Theorem 3.2 implies Theorems 1.1 and 1.2.
It is interesting to compare the above results with the case of a random, rather than periodic, scatterer configuration, where the scatterers are placed at the points of a fixed realisation of a Poisson process in $\mathbb{R}^d$. In the case of fixed scattering radius there is, to the best of our knowledge, no proof of a central limit theorem even in dimension $d = 2$. In the Boltzmann-Grad limit, however, the work of Gallavotti [15], Spohn [24] and Boldrighini, Bunimovich and Sinai [6] shows that we have an analogue of Theorem 3.1, where the limiting random flight process $X_t$ is governed by the linear Boltzmann equation. In this setting, (3.6) is a sequence of independent random variables, where $\xi_n$ is exponentially distributed. Routine techniques [23] show that in this case the central limit theorem holds for $X_t$ with a standard $\sqrt{t}$ normalisation, and for $Q_n$ with a $\sqrt{n}$ normalisation.
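The diffusive $\sqrt{n}$ scaling in the independent (Poisson) case is easy to see in a toy simulation: for i.i.d. flight vectors $\xi_j V_j$ with unit-mean exponential $\xi_j$ and uniform directions $V_j$ in $d = 2$, independence gives $\mathbb{E}|Q_n|^2 = n\,\mathbb{E}\xi_1^2 = 2n$, with no logarithmic correction. A minimal Monte Carlo sketch (all parameter choices are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flight(n, trials):
    # Toy random flight process in d = 2: i.i.d. exponential flight
    # times and i.i.d. uniform directions (the Poisson-scatterer analogue).
    xi = rng.exponential(scale=1.0, size=(trials, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
    steps = xi[..., None] * np.stack([np.cos(phi), np.sin(phi)], axis=-1)
    return steps.sum(axis=1)              # displacement Q_n per trial

n, trials = 100, 2000
q = random_flight(n, trials)
# E|Q_n|^2 = 2n for independent steps: purely diffusive sqrt(n) scaling,
# in contrast to the sqrt(n log n) normalisation of the periodic case.
print(np.mean(np.sum(q ** 2, axis=1)) / n)   # close to 2
```

The contrast with Theorem 3.2 is precisely the extra factor $\log n$ produced by the heavy tail of the free path length in the periodic setting.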

Outline of the proof of Theorem 3.2
We will now outline the central arguments in the proof of Theorem 3.2 (ii) for discrete time by reducing the statement to four main lemmas, whose proofs are given in Section 9. The continuous-time case (i) follows from (ii) via technical estimates supplied in Section 11. We will assume from now on that $(\xi_1, \eta_1)$ has density $\Psi_0(x, z)$, and discuss the generalisation to more general distributions in Section 10. We note that for $\eta_0$ uniformly distributed in $\mathcal{B}_1^{d-1}$, (4.1) holds, and it is therefore equivalent to consider instead of (3.6) the Markov chain with the same transition probability (3.2), $\eta_0$ uniformly distributed in $\mathcal{B}_1^{d-1}$ and $\xi_0 = 0$. The sequence $(V_n)_{n \geq 0}$, with $\eta_0$ as defined above, is itself generated by a Markov chain on the state space (4.4). The objective is to prove a central limit theorem for sums of the random variables $\xi_n V_{n-1}$. The first observation is that these are of course not independent. If we, however, condition on the sequence $\eta$, then the $V_n$ are deterministic, and $(\xi_n)_{n=1}^{\infty}$ is a sequence of independent (but not identically distributed) random variables. The plan is now to apply the Lindeberg central limit theorem to the sum of independent random variables, $Q_n = \sum_{j=1}^{n} \xi_j V_{j-1}$, conditioned on $\eta$.
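For the reader's convenience, we record the classical Lindeberg theorem in the form in which it is applied here (conditionally on $\eta$, to the centred summands):

```latex
% Lindeberg central limit theorem for triangular arrays of independent
% random variables, applied conditionally on the sequence \eta.
Let $(X_{n,j})_{1\le j\le n}$ be, for each $n$, independent centred real
random variables with $A_n^2 := \sum_{j=1}^{n} \mathbb{E}\,X_{n,j}^2 \to \infty$.
If for every $\varepsilon > 0$
\[
  \frac{1}{A_n^2}\sum_{j=1}^{n}
  \mathbb{E}\bigl[X_{n,j}^2\,\mathbb{1}\{|X_{n,j}| > \varepsilon A_n\}\bigr]
  \;\longrightarrow\; 0,
\]
then $A_n^{-1}\sum_{j=1}^{n} X_{n,j} \Rightarrow N(0,1)$.
```

Lemmas 4.3 and 4.4 below verify precisely the variance asymptotics and the Lindeberg condition in this conditional setting.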
To this end we first truncate $Q_n$, defining a truncated random variable depending on a fixed $\gamma \in (1, 2)$. The following lemma tells us that it is sufficient to prove Theorem 3.2 (ii) for the truncated variable in place of $Q_n$.
Lemma 4.1. We have almost surely.
To prove the central limit theorem for the truncated sum, we center $\xi_j$ by subtracting its conditional expectation, where $r_j := j(\log j)^\gamma$ and (4.12). The following lemma shows that the truncated and the centered sums are close relative to $\sqrt{n \log n}$.
Lemma 4.2. The sequence of random variables satisfies (4.14).

It is therefore sufficient to prove Theorem 3.2 (ii) for the centered sum in place of $Q_n$. This will be achieved by applying the Lindeberg central limit theorem to the conditional sum, as alluded to above. We begin by estimating the conditional variance.

Lemma 4.3. There is a constant $\sigma_d > 0$ such that, for $n \to \infty$, (4.18) holds.

By taking the trace in (4.18), we have in particular (4.19). The next lemma verifies the Lindeberg conditions for random $\eta$.
Lemma 4.4. The Lindeberg condition holds for any fixed $\varepsilon > 0$.

Given these lemmas, let us now conclude the proof of (4.22). By Chebyshev's inequality we obtain, for any $K > 0$ and any $\kappa > 0$, the bound (4.24). By (4.19), the second term on the right hand side of (4.24) converges to 0 as $n \to \infty$, if we choose $\kappa = d$, say. So (4.24) implies that the sequence of random variables $Y_n$ is tight. By the Helly-Prokhorov theorem, there is an infinite subset $S_1 \subset \mathbb{N}$ so that $Y_n$ converges in distribution along $n \in S_1$ to some limit $Y$. Assume for a contradiction that $Y$ is not distributed according to $N(0, I_d)$. The Borel-Cantelli lemma implies that there is an infinite subset $S_2 \subset S_1$, so that in the statements of Lemmas 4.3 and 4.4 we have almost-sure convergence along $n \in S_2$. The hypotheses of the Lindeberg central limit theorem are met, and we infer that $Y_n \Rightarrow N(0, I_d)$ for $n \in S_2$. (We use the Lindeberg theorem for triangular arrays of independent random variables, since we have verified the Lindeberg conditions only along a subsequence.) This, however, contradicts our assumption that $Y$ is not normal, and hence $N(0, I_d)$ is indeed the unique limit point of any converging subsequence. This in turn implies that the full sequence converges, and therefore completes the proof of (4.22). In view of Lemmas 4.1 and 4.2, this implies Theorem 3.2 (ii) (still under the assumption that $(\xi_1, \eta_1)$ has density $\Psi_0(x, z)$).

Let us briefly describe the further contents of this paper. In Section 5 we recall the basic properties of the transition kernel $\Psi_0(w, x, z)$ from [20]. Section 6 establishes key estimates for the moments $K_{p,r}(w, z)$, $m_j$ and $a_j$ introduced above. In Sections 7 and 8 we prove spectral gap estimates and exponential mixing for the discrete-time Markov process defined in (4.4). The estimates from Sections 6-8 are the main input in the proof of Lemmas 4.1-4.4, which is given in Section 9.
In Section 10 we show that the discrete-time statement in Theorem 3.2 (ii) holds for more general initial distributions than Ψ 0 (x, z). It holds in particular for Ψ(x, z), which appears in the continuous-time variant. Section 11 explains how to pass from discrete to continuous time, thus completing the proof of Theorem 3.2 (i).

The transition kernel
In dimension $d = 2$ we have an explicit formula for the transition kernel, valid for $w, z \in (-1, 1)$. This formula has been derived, independently and with different methods, by Marklof and Strömbergsson [17], by Caglioti and Golse [10,11] and by Bykovskii and Ustinov [9]. In dimension $d \geq 3$ no such explicit formulas for the transition kernel are known. We recall from [18,19,20] the following properties. If $d \geq 3$, the function $\Psi_0$ is continuous. $\Psi_0(w, x, z)$ depends only on $x$, $w := \|w\|$, $z := \|z\|$ and the angle $\varphi := \varphi(w, z) \in [0, \pi]$ between the vectors $w, z \in \mathcal{B}_1^{d-1}$. Note that in dimension $d = 2$ the angle $\varphi$ can only take the values 0 and $\pi$. For statements that are specific to dimension $d = 2$, we will often use $w \in (-1, 1)$ instead of the vector $w$, and $|w|$ instead of $w = \|w\|$. We recall once more that $\Psi_0(w, x, z) = \Phi_0(x, w, -z)$ in the notation of [18,19,20], and so in particular the angle $\varphi$ between $w, z$ becomes $\pi - \varphi$.
Our proofs will exploit the following estimates on the transition kernel [20,26]. All bounds are uniform in $x > 0$ and $w, z \in \mathcal{B}_1^{d-1}$. We have the bound of [20, Thm. 1.1]. Furthermore, by [20, Thm. 1.7], there exists a continuous and uniformly bounded function $F_{0,d}$ for which the corresponding asymptotic expansion holds, with the stated error term. It is noted in [20] that $F_{0,d}$ is uniformly bounded from below for $t_1, t_2, \alpha$ near zero. That is, there is a small constant $c > 0$ which only depends on $d$ such that the lower bound holds. Furthermore, the support of $F_{0,d}$ is contained in $(0, c'] \times (0, c'] \times \mathbb{R}_{\geq 0}$ for some $c' > 0$, and for any fixed $t_1, t_2 > 0$, the function $F_{0,d}(t_1, t_2, \cdot)$ has compact support. In dimension $d \geq 3$, the following upper bound will prove useful [26, Thm. 1.8]. The notation $f \ll g$ is here defined as $f = O(g)$, i.e., there exists a constant $C > 0$ such that $|f| \leq C|g|$.
The support of $\Psi_0(w, x, z)$ is described by a continuous function; see (5.9). (If $\varphi = \pi$ then the right hand side of (5.9) should be interpreted as $t - \frac{d}{2}$.) For the distribution of the free path length between consecutive collisions we have the tail asymptotics (5.11).
This asymptotic estimate sharpens earlier upper and lower bounds by Bourgain, Golse and Wennberg [7,16]. Note that the variances in Theorems 1.1 and 1.2 are related to the above tail asymptotics.

Moment estimates
We now provide key estimates for the random variables introduced in the previous sections. For $p = 0, 1, 2$ and $r > 0$, we set $K_{p,r}(w, z)$ as follows. We furthermore define the random variables (recall Section 4), with $r_j = j(\log j)^\gamma$ for some fixed $\gamma \in (1, 2)$.
Proposition 6.6. Let $d \geq 3$. There are constants $c_2 > c_1 > 0$ such that for $u \geq 1$, (6.35) holds.

Proof. The upper bounds in (6.16) for $K_1$, and the upper and lower bounds for $K_0$ in Lemma 6.2, imply (6.36). Using spherical coordinates, we see that the second term is, up to a multiplicative constant, equal to an explicit integral. We now substitute $x = 1 - w$, $y = 1 - z$, $\phi = \pi - \varphi$, and integrate $x$ over $(0, y)$, and then over $y$. This yields (again up to a multiplicative constant) (6.38). The first term in (6.36) can be dealt with similarly, and yields a lower order contribution. This proves the upper bound in (6.35). As to the lower bound in (6.35), we use (6.15) (and the same variable substitutions as above); see (6.40).

Proposition 6.7.
Proof. This follows from Propositions 6.5 and 6.6, since m j ≤ µ j and m j ≤ r j .
Proposition 6.8.  Proof. This is a direct corollary of Proposition 6.7.
(6.51) $\mathbb{P}(\beta_n > u) \sim \frac{1}{2\pi^2 u^2}$.

Proof. We follow the same strategy as in the proof of Proposition 6.5. For (6.52) $f(y) = \frac{6 y^2 (1+y)^2}{1+2y} \ln(1 + y^{-1})$, the variable substitution (6.31) yields an explicit integral. The leading order contribution comes from the first term. This proves (6.51). To see that (6.50) has the same asymptotics, recall that $\beta_n^2 - \alpha_n^2 = \mu_n^2$ and (6.27).

Proposition 6.13. Let $d \geq 3$. There are constants $c_2 > c_1 > 0$ such that for $u \geq 1$, (6.55) holds.

Proof. We exploit the bounds in Lemma 6.4. For the upper bound, we start from (6.56). Using polar coordinates as before, the second term in (6.56) evaluates to (6.59). A similar calculation shows that the first term in (6.56) produces a lower order contribution. This establishes the upper bound in (6.55). For the lower bound for $\mathbb{P}(\beta_n > u)$, we use (6.60). The lower bound for $\mathbb{P}(\alpha_n > u)$ follows by combining the lower bound for $\mathbb{P}(\beta_n > u)$ with (6.35).

Spectral gaps
Let $V$ be a finite-dimensional real vector space with inner product $\langle \cdot, \cdot \rangle_V$ and norm $\|x\|_V$; the associated function space $H$ carries the inner product $\langle \cdot, \cdot \rangle$ and norm $\|f\| := \langle f, f \rangle^{1/2}$. We will also denote by $\|\cdot\|$ the corresponding operator norm on $H \to H$. In the following, $(\rho, V)$ will denote a representation of $\mathrm{SO}(d)$ with group homomorphism $\rho : \mathrm{SO}(d) \to \mathrm{GL}(V)$. Define the operators $P$, $\Pi$ and $U$ on $H$.

Proposition 7.1. The operator $P$ has the spectral gap $1 - \omega_0$.
Proof. This follows from the standard Doeblin argument. Note that, since $K_0(w, z)$ is the kernel of a stochastic transition operator with respect to $dw$ on $\mathcal{B}_1^{d-1}$, we have (7.7). If $J = 1$, we have $P = \Pi$ and thus $\omega_0 = 0$. Assume therefore $0 \leq J < 1$. Then the resulting normalised kernel is itself a stochastic transition operator, with the same stationary measure. Using (7.5), we can rewrite the operator accordingly, and so (7.10) holds. The claim of the proposition now follows from (6.10).
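The Doeblin argument can be illustrated numerically: if a stochastic matrix decomposes as $P = \varepsilon \Pi + (1 - \varepsilon) Q$ with $\Pi$ a rank-one stochastic matrix and $Q$ row-stochastic, then every eigenvalue of $P$ other than $1$ has modulus at most $1 - \varepsilon$. A finite-state sketch (the matrix sizes and $\varepsilon$ are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 6, 0.3

# Doeblin minorisation: P = eps * Pi + (1 - eps) * Q, where Pi is a
# rank-one stochastic matrix (all rows equal) and Q is row-stochastic.
Pi = np.tile(np.full(n, 1.0 / n), (n, 1))
Q = rng.random((n, n))
Q /= Q.sum(axis=1, keepdims=True)
P = eps * Pi + (1 - eps) * Q

# Every eigenvalue of P except the Perron eigenvalue 1 has modulus <= 1 - eps:
mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(mods[0])                          # 1.0 (Perron eigenvalue)
print(mods[1] <= 1 - eps + 1e-10)       # True: spectral gap of size >= eps
```

The proof in one line: a left eigenvector $u$ of $P$ with eigenvalue $\lambda \neq 1$ satisfies $u \mathbf{1} = 0$, hence $u \Pi = 0$ and $\lambda u = (1 - \varepsilon) u Q$, so $|\lambda| \leq 1 - \varepsilon$.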
Lemma 7.2. Let $\theta : [0, 1) \to \mathbb{R}$ be measurable, so that (7.11) $\mathrm{meas}\{w \in [0, 1) : \theta(w) \notin \mathbb{Q}\} > 0$, and let $(\rho, V)$ be a non-trivial irreducible representation of $\mathrm{SO}(d)$. Then (7.12) holds.

Proof. For any fixed $e \in \mathrm{S}_1^{d-1}$, let $\Lambda_e$ be the push-forward of Lebesgue measure on $[0, 1)$ under the corresponding map. The group generated by the support of $\Lambda_e$ is, by assumption (7.11), dense in the corresponding subgroup, with $E(x)$ as in (2.6). Next, let $\Lambda$ be the push-forward of Lebesgue measure on $\mathcal{B}_1^{d-1}$ under the corresponding map. The above observation, together with the fact that this set generates $\mathrm{SO}(d)$, implies that the group generated by the support of $\Lambda$ is dense in $\mathrm{SO}(d)$. The claim now follows from well-known arguments [25].

Proof of Proposition 7.3. It is sufficient to prove $\|PUP\| < 1$, that is, (7.18). We may restrict to functions of the form $f = \alpha f_0 + f_1$, where $\alpha > 0$ and $f_0 \in H_0$, $f_1 \in H_1$ with $\|f_0\| = \|f_1\| = 1$. Note that in this case $\|f\|^2 = \alpha^2 + 1$, and hence the supremum (7.18) can be computed explicitly. A short computation, using the relations $\langle U f_0, U f_1 \rangle = \langle f_0, f_1 \rangle = 0$, bounds this supremum in terms of the quantities $y_0$ and $y_1$. Since $f_0$ is a constant function with $\|f_0\| = 1$, we have by Lemma 7.2 that $y_0 \leq \delta_\rho < 1$. Furthermore $y_1 \leq 1$. This shows (7.28). The final step in the proof of Proposition 7.3 is now to show (7.30). To achieve this, first note (7.31). To prove (7.30), it is therefore sufficient that the quadratic equation (7.33) has no positive real solution for all $y_0 \leq \delta_\rho$, $y_1 \leq 1$. This in turn holds if the discriminant of Eq. (7.33) is strictly negative.

Exponential mixing
We will now apply the spectral estimates of the previous section to obtain exponential mixing rates.
It is therefore sufficient to prove (8.5) conditioned on $V_1^n$ with a constant $C_m$ independent of $V_1^n$, or equivalently, to show that for any $n \in \mathbb{Z}_{\geq 0}$ and $v_0, e \in \mathrm{S}_1^{d-1}$ the corresponding bound holds. The case $n \leq m$ follows from the Cauchy-Schwarz inequality. We assume therefore in the following that $n > m$.
The bound (8.28) now follows from Proposition 7.3. Relation (8.29) follows similarly from Case C of the previous proof.

Proof of the main lemmas
We now turn to the proofs of the four main lemmas in Section 4.
Proof of Lemma 4.1. The relevant tail probability is summable (since $\gamma > 1$) and so, by the Borel-Cantelli lemma, almost surely $\zeta_j \neq 0$ for only finitely many $j$. This proves Lemma 4.1.
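The summability used here comes from the choice $r_j = j(\log j)^\gamma$ with $\gamma > 1$: the series $\sum_j 1/(j(\log j)^\gamma)$ converges by the integral test. A numeric illustration (the value of $\gamma$ and the cutoffs are our choices):

```python
import math

def tail_bound(gamma, a=2):
    # Integral test: sum_{j >= a} 1/(j (log j)^gamma)
    #   <= 1/(a (log a)^gamma) + integral_a^inf dx / (x (log x)^gamma),
    # and for gamma > 1 the integral equals (log a)^(1-gamma) / (gamma - 1).
    return (1.0 / (a * math.log(a) ** gamma)
            + math.log(a) ** (1 - gamma) / (gamma - 1))

def partial_sum(gamma, n, a=2):
    return sum(1.0 / (j * math.log(j) ** gamma) for j in range(a, n))

gamma = 1.5
# Every partial sum stays below the finite integral-test bound:
print(partial_sum(gamma, 10 ** 5) < tail_bound(gamma))  # True
```

For $\gamma = 1$ the corresponding integral diverges, which is why the truncation level must grow strictly faster than $j \log j$.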
Proof of Lemma 4.3. Let $\zeta_j := (e \cdot V_{j-1})^2 a_j^2$. It is sufficient to prove that for any unit vector $e \in \mathrm{S}_1^{d-1}$, (9.9) $\frac{\sum_{j=1}^{n} \zeta_j}{n \log n} \xrightarrow{\;P\;} \sigma_d^2$.
Using (8.29) in Proposition 8.2 and Proposition 6.10, and, by Proposition 6.11, (9.11) $\mathbb{E}\, a_j^4 = O\big(j (\log j)^{\gamma}\big)$. Furthermore, due to Proposition 8.1 ($p = 2$), we have the covariance bound (9.12). Hence the claim follows.

Proof of Lemma 4.4. In view of the asymptotic relation for $A_n$ in (4.19), we have to prove that for any $\varepsilon > 0$ the Lindeberg sum is negligible. The lower tail $\tilde{\xi}_j < -\varepsilon \sqrt{n \log n}$ is estimated by (9.16). Proposition 6.7 yields a summable bound, and so, by the Borel-Cantelli lemma, the lower-tail contribution is negligible almost surely.
For the upper tail $\tilde{\xi}_j > \varepsilon \sqrt{n \log n}$ we have the bound (9.19). On the other hand, in view of (5.11), we have for $n \to \infty$ the asymptotics (9.20), and therefore (9.21) $\frac{1}{n \log n}\, \mathbb{E} \sum_{j=1}^{n} \zeta_{n,j} \to 0$.

General initial data
Up to now we have assumed that $(\xi_1, \eta_1)$ has density $\Psi_0(x, z)$. We now extend the above results to more general initial data $(\xi_1, \eta_1)$, where the only assumption is that the marginal distribution of $\eta_1$ is absolutely continuous with respect to Lebesgue measure on $\mathcal{B}_1^{d-1}$.

Proof of Theorem 3.2 (ii) for general initial data. It is sufficient to show the convergence for the shifted sequence, where $\eta_1$ has (by assumption) an absolutely continuous distribution and $\xi_1 = 0$. By an obvious re-labelling, this is equivalent to showing the same convergence where $\eta_0$ has an absolutely continuous distribution. In view of the remarks following Eq. (4.1), the only difference from the proof of Theorem 3.2 is now that $\eta_0$ is distributed according to an absolutely continuous probability measure, rather than Lebesgue measure. Because tightness, almost sure convergence and convergence in probability are preserved when passing from a measure to one that is absolutely continuous with respect to it, the lemmas in Section 4 remain valid in the present setting. The proof of Theorem 3.2 for general initial data therefore follows from these lemmas in the same way as for the density $\Psi_0(x, z)$, as described at the end of Section 4.

From discrete to continuous time
The following proposition, together with Theorem 3.2 (ii), immediately implies Theorem 3.2 (i). Let us denote by (11.1) $n_t := \lfloor \xi^{-1} t \rfloor$ the integer part of the expected number of collisions within time $t$.
By the same argument as in the proof of Theorem 3.2 (see Section 10), it is sufficient to prove Proposition 11.1 in the case when (ξ 1 , η 1 ) has density Ψ 0 (x, z). We will assume this from now on. Furthermore, note that the left hand side of (11.2) is independent of the choice of v 0 . We may therefore assume without loss of generality that v 0 is a random variable uniformly distributed in S d−1 1 (this will only be used in the justification of rel. (11.31) below). The proof of Proposition 11.1, which is given at the end of this section, exploits the following three lemmas.
Lemma 11.2. Proof. This is a simple variant of the proof of Theorem 3.2 (ii).
Lemma 11.3. For all $n \in \mathbb{N}$ and $u \geq 2$, (11.4) $\mathbb{P}\big(\|Q_n\| > u\big) = O\big(n u^{-2} \log u\big)$.
Proof. We begin by observing (11.5), where $\xi_j^* = \xi_j - \mu_j$ and $\mu_j$ is the conditional expectation of the (untruncated) $\xi_j$ defined in (6.3). Recall also the definition of the corresponding conditional variance $\alpha_j$ in (6.5). Now, (11.7) holds. To control the first term on the right hand side of (11.7), it is sufficient to bound (11.8) $\mathbb{E}\big(\sum_{j=1}^{n} \zeta_j\big)^2$, $\zeta_j := (e \cdot V_{j-1})\, \mu_j\, \mathbb{1}\{\mu_j \leq u\}$, for arbitrary $e \in \mathrm{S}_1^{d-1}$. We follow the same steps as in the proof of Lemma 4.2. Let us first show (11.9). To this end choose in Proposition 8.2 $m = 1$ and (11.10) $g(w, z) = \frac{K_1(w, z)}{K_0(w, z)}\, \mathbb{1}\{K_1(w, z) \leq u K_0(w, z)\}$.
On the other hand, it follows from Lemma 11.2 that (11.22) $\lim_{t \to \infty} \mathbb{P}\big(\tau_{N(t)} < t\big) = 0$.
This completes the proof of Lemma 11.4.