1 Introduction

In this article we study percolation on the subset of \({\mathbb {R}}^3\) obtained by removing a Poissonian ensemble of infinite cylinders of radius one. Before presenting our main results, let us give some of the motivation and historical background for this problem.

Perhaps the simplest model for a random environment in \({\mathbb {R}}^d\) is the so-called “continuum” (or Boolean) percolation, in which a Poissonian ensemble of unit balls is placed in \({\mathbb {R}}^d\). Each of these balls can be thought of as an obstacle. Letting \(\mathcal {V}\) stand for the complement of this random set of obstacles (sometimes called the “vacant set” or “carpet”, see [11, 18]), the primary question one can ask is whether or not \(\mathcal {V}\) contains an unbounded connected component with positive probability. If so, one says that the vacant set \(\mathcal {V}\) percolates. Due to the uniform boundedness of the obstacles, a number of techniques developed in the study of Bernoulli site percolation can be adapted to this continuum case, see for instance [4, Section 12.10].

However, for other models containing large obstacles, the induced random environment may feature long-range dependencies, often leading to some intriguing behavior and challenging problems such as in [5]. We now describe some instances of such models.

In his seminal work [16], Symanzik introduced a representation of the \(\phi ^4\) quantum field as a classical gas of Brownian paths which interact when they cross. This development naturally led to the idea of loop measures, whose geometry has been intensively investigated, both for planar Brownian motion in relation to SLE processes in [8] and for simple random walks in [7]. See also [11] and the excellent study in [9]. In three dimensions the current knowledge of these models is more limited, except for the work [13], concerned with the percolative properties of the Brownian loop soup in \({\mathbb {R}}^3\).

Recently, other interesting models have taken center stage in the field of random media, notably the random interlacements on \({\mathbb {Z}}^d, d \ge 3\), introduced by Sznitman in [18]. For this model, the set of obstacles consists of a Poissonian cloud of bi-infinite random walk trajectories modulo time shift. An intensity parameter \(u \ge 0\) controls the number of trajectories to be removed from the ambient space \({\mathbb {Z}}^d\), and the complement of these trajectories (the so-called vacant set of random interlacements) was extensively studied in [14, 15, 19, 20]. It was shown in [14, 18] that the connectivity of the vacant set undergoes a non-trivial phase transition as \(u\) crosses a critical threshold.

Another percolation model having similar features is the so-called coordinate percolation, introduced by the second author. In this model each discrete line parallel to one of the coordinate axes of \({\mathbb {Z}}^3\) is independently removed with a positive probability \(q = 1-p\), and retained with probability \(0< p <1\). In [6] it was shown that the vacant set left after the removal of lines undergoes a non-trivial phase transition as \(p\) varies. It is important to stress that this model has polynomial decay of connectivity in the super-critical and upper sub-critical phases, see Remark 2.1 below. Polynomial decay and several other properties of coordinate percolation in \({\mathbb {Z}}^3\) are remarkably similar to those of P. Winkler’s percolation models, see [3], including the question of the compatibility of binary sequences and scheduling of random walks (also known as the clairvoyant daemon problem).

In this article we study a model governed by a Poisson point process, on the space \({\mathbb {L}}\) of lines in \({\mathbb {R}}^d\), having intensity measure \(u \mu \). Here \(u\) is a positive real parameter and \(\mu \) is, up to a multiplicative constant, the unique Haar measure on \({\mathbb {L}}\) which is invariant with respect to isometries of \({\mathbb {R}}^d\), see (2.3) for details.

Having specified the intensity measure \(u \mu \) [see (2.3)], a corresponding Poisson point process can be easily constructed in an appropriate probability space \((\Omega , \mathcal {A}, {\mathbb {P}}_u)\), as we describe in Sect. 2. Each element \(\omega \in \Omega \) is a point measure, i.e.

$$\begin{aligned} \omega = {\sum _{i \ge 0}} \delta _{l_i},\text { where } {l_{i}} \text { runs over a countable collection of lines in } {\mathbb {R}}^d. \end{aligned}$$
(1.1)

We are mainly interested in the set

$$\begin{aligned} \mathcal {L}(\omega ) = {\textstyle \bigcup \limits _{l \in \text {supp}(\omega )}} C(l), \end{aligned}$$
(1.2)

where \(C(l)\) stands for the cylinder of radius one around \(l\), as well as in its complement

$$\begin{aligned} \mathcal {V}(\omega ) = \mathbb {R}^d\!\setminus \!\mathcal {L}(\omega ), \end{aligned}$$
(1.3)

the so-called ‘vacant set’. Intuitively speaking, the set \(\mathcal {V}\) represents what is left after we drill through all the lines in the support of \(\omega \). The parameter \(u\) controls the number of cylinders to be removed from \({\mathbb {R}}^3\): as \(u\) increases, more and more cylinders are drilled, making it increasingly harder for \(\mathcal {V}\) to be well connected.

The main contribution of this paper is to prove the following

Theorem 1.1

\((d=3)\) For \(u\) small enough, the vacant set \(\mathcal {V}\) contains almost surely an unbounded connected component.

See Theorem 3.1 below for a stronger version of this statement. If one defines the critical parameter by

$$\begin{aligned} u_* = \inf \{u \ge 0; \; \mathbb {P}_u [ \mathcal {V}\,\text {has an unbounded connected component} ] = 0 \}, \end{aligned}$$
(1.4)

then Theorem 1.1 proves that \(u_*\) is strictly positive.

The model was introduced by Benjamini and first studied by Tykesson and Windisch in [21], where among other results they established the existence of a phase transition for the vacant set left by these cylinders in \({\mathbb {R}}^d\) when \(d \ge 4\). More specifically in Theorems 4.1 and 5.1 of [21], they proved that

$$\begin{aligned} u_*&< \infty , \text { for every}\, d \ge 3 \text { and}\end{aligned}$$
(1.5)
$$\begin{aligned} u_*&> 0, \text { for every}\, d \ge 4. \end{aligned}$$
(1.6)

The most challenging and physically relevant question concerns the three-dimensional case, for which the existence of a percolative phase remained open. Our result settles the existence of a non-trivial phase transition as the parameter \(u\) crosses the non-degenerate threshold \(u_*\): the super-critical (or percolative) phase corresponds to \(u<u_*\) and the sub-critical phase to \(u>u_*\).

Cylinder percolation, in any dimension \(d \ge 2\), has also been considered by Broman and Tykesson in [1]. There, instead of studying the set \(\mathcal {V}\), they show that \(\mathcal {L}\) is connected almost surely. More precisely, on the one hand, they show that any two cylinders in the process are connected via at most \(d-2\) other cylinders while, on the other hand, for any \(k < d-2\) there exist pairs of cylinders not connected to each other by \(k\) other cylinders.

One of the difficulties in establishing Theorem 1.1 is the slow decay of correlations observed in the set \(\mathcal {V}\). As observed in Remark 3.2 (4) of [21], for any \(x,y \in {\mathbb {R}}^d\) with \(|x-y| > 2\),

$$\begin{aligned} \frac{c_{d,u}}{|x-y|^{d-1}} \le \text {cov}_u(\mathbf {1}_{x \in \mathcal {V}}, \mathbf {1}_{y \in \mathcal {V}}) \le \frac{c'_{d,u}}{|x-y|^{d-1}}, \end{aligned}$$
(1.7)

where \(c_{d,u}\) and \(c'_{d,u}\) are positive constants depending on \(u\) and \(d\), and \(\text {cov}_u\) stands for the covariance under the measure \({\mathbb {P}}_u\). From (1.7) it is clear that in low dimensions the vacant set \(\mathcal {V}\) presents a slower decay of correlations, which makes the problem more challenging.
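To see heuristically where the exponent \(d-1\) comes from (the following computation is our own sketch, consistent with Remark 3.2 (4) of [21]), write \(A_z = \{l \in {\mathbb {L}}; \; l \cap B(z,1) \ne \emptyset \}\) for the set of lines whose cylinders cover the point \(z\), so that \({\mathbb {P}}_u[z \in \mathcal {V}] = e^{-u \mu (A_z)}\). The Poissonian structure then gives

$$\begin{aligned} \text {cov}_u(\mathbf {1}_{x \in \mathcal {V}}, \mathbf {1}_{y \in \mathcal {V}}) = e^{-u\mu (A_x \cup A_y)} - e^{-u\mu (A_x)}\, e^{-u\mu (A_y)} = e^{-u\mu (A_x) - u\mu (A_y)} \big ( e^{u \mu (A_x \cap A_y)} - 1 \big ), \end{aligned}$$

which is of order \(u \mu (A_x \cap A_y)\) when the overlap term is small. A line hitting both \(B(x,1)\) and \(B(y,1)\) must have its direction in a cap of angular radius of order \(|x-y|^{-1}\) around the direction of \(y-x\), a set of directions of measure of order \(|x-y|^{-(d-1)}\), while the positional freedom contributes a factor of order one; this yields \(\mu (A_x \cap A_y) \asymp |x-y|^{-(d-1)}\), matching both sides of (1.7).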

It is worth noticing that an inequality similar to (1.7) also holds for the vacant set left by random interlacements, but with the exponent \(d-1\) replaced by \(d-2\), see Remark 1.6 (4) in [18]. Also in the case of interlacements, the low-dimensional cases are harder. Indeed, the existence of a percolative phase for random interlacements was first established for \(d \ge 7\) in [18], and only later was this result extended to \(d \ge 3\), see [14].

Another difficulty that appears in the present context is the absence of exponential bounds or domination by Boolean percolation (see Remark 2.1).

In order to state what we perceive as the main difficulty in proving Theorem 1.1, and to explain why the three-dimensional case is qualitatively different from the others, let us briefly describe how the case \(d \ge 4\) was handled in [21]. In that work, the authors restricted their attention to the intersection between \(\mathcal {V}\) and \({\mathbb {R}}^2\) (naturally embedded in \({\mathbb {R}}^d\)). A similar procedure was also employed in the context of interlacement percolation in [14]. In Theorem 5.1 of [21], the authors proved that for \(d \ge 4\) and for \(u\) small enough there exists \({\mathbb {P}}_u\)-a.s. an unbounded connected component in \(\mathcal {V} \cap {\mathbb {R}}^2\), yielding (1.6). However, as they also observed, this strategy is destined to fail in three dimensions, as

$$\begin{aligned} \begin{array}{l} \text {for }d = 3,\text { for every }u > 0,\text { the set }\mathcal {V}\cap \mathbb {R}^2\text { contains}\\ \mathbb {P}_u\text {-}\mathrm{a.s.} \text { no unbounded connected component}. \end{array} \end{aligned}$$
(1.8)

see Proposition 5.6 of [21].

In view of (1.8), in order to establish Theorem 1.1 we have to search for connections outside the plane \({\mathbb {R}}^2\). But why would one be interested in restricting the set \(\mathcal {V}\) to \({\mathbb {R}}^2\) in the first place? This is done in order to use the so-called ‘path duality’ of the plane, which, roughly speaking, states that

$$\begin{aligned} \begin{array}{l} \text {if the connected component of }\mathcal {V}\cap \mathbb {R}^2\text { containing the origin is bounded,}\\ \text {then there exists a circuit surrounding the origin in }\mathcal {L} \cap {\mathbb {R}}^2, \end{array} \end{aligned}$$
(1.9)

see (5.22) of [21]. The above statement reduces the task of proving percolation to showing that typical paths in \(\mathcal {L} \cap {\mathbb {R}}^2\) are small. In our case, we will make use of a statement similar to (1.9), see (5.6). However, instead of \({\mathbb {R}}^2\), we will intersect \(\mathcal {V}\) with a periodic surface \(H\) defined in (3.2), see also Fig. 1. This surface is contained in the slab \({\mathbb {R}}^2 \times [0,1{,}000]\) and has two important properties. First, \(H\) is homeomorphic to \({\mathbb {R}}^2\), which allows us to use duality on \(H\) in an indirect way. Second, \(H\) is ‘rough’, meaning that its intersection with any fixed cylinder gives rise to small connected components only, see (4.9).

It is striking that there is never percolation on \(\mathcal {V} \cap {\mathbb {R}}^2\), but it is even more surprising that, at the same time, \(\mathcal {V} \cap H\) does percolate, as shown in Theorem 3.1. This contrast between the behavior of the random sets \(\mathcal {V} \cap {\mathbb {R}}^2\) and \(\mathcal {V}\cap H\) [both satisfying (1.7)] is further discussed in Remark 3.2, raising the following question: What property of a given Poissonian cloud of obstacles prevents the existence of a percolative regime? We hope that this work will bring attention to this question.

Let us briefly explain why \(\mathcal {V} \cap {\mathbb {R}}^2\) never percolates. In [21], the authors show that no matter how small \(u\) is taken, there are infinitely many triangles (contained in the union of exactly three cylinders in \(\mathcal {L}\)) surrounding the origin in \(\mathcal {L} \cap {\mathbb {R}}^2\). This is intuitive: for small values of \(u\) we do not expect several cylinders to cooperate in creating a long dual path. Therefore, the only way to prevent percolation on \(\mathcal {V} \cap {\mathbb {R}}^2\) is indeed to have few cylinders that alone manage to create a long dual circuit around the origin. This is certainly possible in \({\mathbb {R}}^2\), but not in the surface \(H\), due to its roughness, see (4.9). The renormalization scheme developed in Sect. 3 allows us to formalize this heuristic argument, providing a way to isolate the collective and individual influence of obstacles.

Next we briefly explain the novelties of the renormalization technique presented here. We first define a rapidly increasing sequence of scale lengths \((a_n)_{n \ge 0}\), see (3.5). Our aim is to analyze the probability \(p_n\) that

$$\begin{aligned} \begin{array}{l} \text {there exists some path in } \mathcal {L}\cap H \text { connecting the ball of radius } \\ a_n/10\text { to the surface of the ball of radius } a_n \text { around the origin.} \end{array} \end{aligned}$$
(1.10)

It can be easily seen that the event in (1.10) implies the occurrence of similar events in two smaller balls (of radius \(a_{n-1}\)) which are far apart, see (3.16). It is therefore tempting to bound \(p_n\) in terms of \(p_{n-1}^2\), but for this we need an approximate independence between what happens to \(\mathcal {V}\) inside these two smaller balls. In [21], the authors accomplish this by plugging in a bound on this dependence which resembles (1.7). This is enough to establish the result for \(d \ge 4\), but for \(d = 3\), the problem is fundamentally more complicated, as we can infer from (1.8).
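To illustrate why such a recursion would suffice, here is a toy computation of our own (the constant \(c_0\) and the starting value of \(p_0\) below are placeholders, not quantities from the proof). If the naive bound \(p_n \le c_0^2 (a_n/a_{n-1})^2 p_{n-1}^2\) did hold, then, since \(\log (a_n/a_{n-1}) = (\gamma - 1)\gamma ^{n-1} \log a_0\) grows like \(\gamma ^n\) while \(|\log p_n|\) doubles at each step, \(p_n\) would vanish doubly exponentially as soon as \(p_0\) is small enough:

```python
import math

a0, gamma, c0 = 288.0**6, 7.0 / 6.0, 100.0     # c0 is a placeholder constant
log_a = [gamma**n * math.log(a0) for n in range(9)]  # log a_n = gamma^n log a0

log_p = [math.log(1e-30)]  # placeholder: assume p_0 is already very small
for n in range(1, 9):
    # naive recursion p_n <= c0^2 (a_n / a_{n-1})^2 p_{n-1}^2, in log space
    log_K = 2 * math.log(c0) + 2 * (log_a[n] - log_a[n - 1])
    log_p.append(log_K + 2 * log_p[-1])

# log p_n decreases doubly exponentially: the factor 2 coming from squaring
# p_{n-1} beats the gamma^n growth of the combinatorial term, since 2 > gamma
```

The actual proof must work harder precisely because the two events in the smaller balls are not independent, which is where the auxiliary sequence \(q_n\) below comes in.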

At this point, we introduce an auxiliary sequence \(q_n\) corresponding to the probability of the event in (1.10) with two cylinders deterministically added to the random set \(\mathcal {L}(\omega )\). A delicate balance between the probability that two distant balls intersect the same cylinders and a combinatorial factor accounting for the possible choices of these two balls makes it possible to construct a contracting recursion relation between \((p_n, q_n)\) and \((p_{n-1}, q_{n-1})\). Finally, we use the roughness of \(H\) to trigger these recursion relations, i.e. to show that \(p_0\) and \(q_0\) are small if \(u\) is small, finishing the proof of Theorem 1.1.

This paper is organized as follows: In Sect. 2 we give a rigorous construction of the model and introduce the notation used throughout the text. In Sect. 3 we state Theorem 3.1 which is our main result and introduce the mathematical setting for the renormalization used in its proof, finishing with recurrence relations between scales. Section 4 is dedicated to triggering the recurrence relations obtained previously. Finally, in Sect. 5 we join the results of the two previous sections in order to prove Theorem 3.1. We also include an Appendix, where we prove some basic geometric facts that are useful in the proof of the recursion relations.

2 Notation

Throughout the text, \(c\) and \(c'\) denote strictly positive constants whose values may change from place to place. Dependence of constants on additional parameters appears in the notation; for instance, \(c_u\) denotes a positive constant possibly depending on \(u\). Numbered constants, such as \(c_0, c_1, \ldots \), are fixed according to their first appearance in the text.

As we have mentioned in the last section, we let

$$\begin{aligned} {\mathbb {L}} \text { denote the space of all} 1\text {-dimensional affine subspaces of }\mathbb {R}^3. \end{aligned}$$
(2.1)

We introduce a measure \(\mu \) on the space \({\mathbb {L}}\) of lines in \({\mathbb {R}}^3\), following the construction in [21]. For this, let \(e_i, i=1,2,3\), stand for the vectors of the canonical orthonormal basis of \({\mathbb {R}}^3\) and define \(l\) to be the axis \(\{t \cdot e_3; \; t \in {\mathbb {R}}\}\). We also let \({\mathbb {R}}^2\) correspond in the natural way to the plane \(\{(x,y,0); \; x, y \in {\mathbb {R}}\}\), orthogonal to \(l\), endowed with the Lebesgue measure \(\lambda \). Consider also the group \(SO_3\) of rigid rotations of \({\mathbb {R}}^3\), endowed with the natural topology and the unique Haar measure \(\nu \) normalized so that \(\nu (SO_3) = 1\). Then we define

$$\begin{aligned} \alpha : {\mathbb {R}}^2 \times SO_3&\rightarrow \mathbb {L}\nonumber \\ (x, \theta )&\mapsto \theta (\tau _x(l)), \end{aligned}$$
(2.2)

where \(\tau _x\) is the translation map from \({\mathbb {R}}^3\) onto itself defined by \(y \mapsto x + y\).

With this definition, we can endow the set \({\mathbb {L}}\) with the finest topology that makes the map \(\alpha \) continuous. Let \(\mathcal {B}({\mathbb {L}})\) stand for the corresponding Borel \(\sigma \)-algebra. We can thus introduce the measure \(\mu \) on \(({\mathbb {L}}, \mathcal {B}({\mathbb {L}}))\):

$$\begin{aligned} \mu = \alpha (\lambda \otimes \nu ). \end{aligned}$$
(2.3)

We note that \(\mu \) is (up to multiplicative constants) the unique Haar measure on \({\mathbb {L}}\) which is invariant under isometries of \({\mathbb {R}}^3\).

We now consider the space of point measures

$$\begin{aligned} \Omega = \left\{ \omega \!=\! \sum _{i \ge 0} \delta _{l_i}; \; l_i \!\in \! \mathbb {L} \text { and } \omega (A) \!<\! \infty , \text { for every compact }A \in \mathcal {B}(\mathbb {L}) \right\} \!,\qquad \end{aligned}$$
(2.4)

endowed with the \(\sigma \)-algebra \(\mathcal {A}\) generated by the evaluation maps \(\phi _A: \omega \mapsto \omega (A)\), for \(A \in \mathcal {B}({\mathbb {L}})\).

We are now in a position to define the main process we intend to analyze. For this, fix some \(u \ge 0\) and consider the probability space \((\Omega , \mathcal {A}, {\mathbb {P}}_u)\) of a Poisson point process with intensity measure given by \(u \cdot \mu \). The expectation operator associated with \({\mathbb {P}}_u\) will be denoted by \({\mathbb {E}}_u\). For this construction, see for instance Proposition 3.6 of [12].
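For intuition, the process restricted to lines meeting a ball \(B(0,R)\) is easy to simulate. Under the parametrization (2.2), the line \(\theta (\tau _x(l))\) meets \(B(0,R)\) exactly when \(|x| \le R\) (rotations preserve the ball, and the distance from the origin to the vertical line through \(x\) is \(|x|\)), so \(\mu \) of this set of lines equals \(\pi R^2\). The following sketch is our own illustration, not code from the paper:

```python
import math, random

def haar_rotation(rng):
    """Haar-uniform element of SO(3), via a uniform random unit quaternion."""
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    w = math.sqrt(1 - u1) * math.sin(2 * math.pi * u2)
    x = math.sqrt(1 - u1) * math.cos(2 * math.pi * u2)
    y = math.sqrt(u1) * math.sin(2 * math.pi * u3)
    z = math.sqrt(u1) * math.cos(2 * math.pi * u3)
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def apply(mat, v):
    return tuple(sum(mat[i][j] * v[j] for j in range(3)) for i in range(3))

def poisson(lam, rng):
    """Knuth's inversion sampler for a Poisson(lam) count (lam moderate)."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def sample_lines(u, R, rng):
    """Lines of the intensity-u process meeting B(0,R), each as (point, direction)."""
    lines = []
    for _ in range(poisson(u * math.pi * R * R, rng)):
        r, phi = R * math.sqrt(rng.random()), 2 * math.pi * rng.random()
        base = (r * math.cos(phi), r * math.sin(phi), 0.0)  # uniform in the disc
        theta = haar_rotation(rng)
        lines.append((apply(theta, base), apply(theta, (0.0, 0.0, 1.0))))
    return lines
```

By construction every sampled line passes within distance \(R\) of the origin, and the expected number of sampled lines is \(u \pi R^2\).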

Since \(\mu \) is invariant under the isometries of \({\mathbb {R}}^3\), one can show that the law \({\mathbb {P}}_u\) governing this Poisson point process is also invariant under such transformations (see Remark 2.1 of [21]). Furthermore, the law \({\mathbb {P}}_u\) can be shown to be ergodic under translations, in the sense described in Sect. 5.

The Euclidean distance in \({\mathbb {R}}^3\) or in \({\mathbb {R}}^2\) will be denoted by dist\((\cdot ,\cdot )\). For a point \(x \in {\mathbb {R}}^3\) and \(r>0\) we denote \(B(x,r) = \{y \in {\mathbb {R}}^3; \text { dist}(x,y)\le r\}\) and for a set \(A \subset {\mathbb {R}}^3\) we denote \(B(A,r) = \cup _{x \in A}B(x,r)\). For a line \(l \in {\mathbb {L}}\) let \(C(l) = B(l,1)\) be the cylinder of radius one and axis equal to \(l\). We denote by \({\mathbb {C}}\) the set of all cylinders of radius one: \(\{C(l);~ l\in {\mathbb {L}}\}\).
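In this notation, membership of a point in a cylinder \(C(l) = B(l,1)\) reduces to a point-to-line distance computation. A minimal sketch (our own helper, not from the paper), for a line given by a point \(p\) and a unit direction \(d\):

```python
import math

def dist_to_line(y, p, d):
    """Distance from point y to the line {p + t d : t real}; d must be a unit vector."""
    w = [yi - pi for yi, pi in zip(y, p)]
    t = sum(wi * di for wi, di in zip(w, d))          # orthogonal projection onto d
    residual = [wi - t * di for wi, di in zip(w, d)]  # component orthogonal to the line
    return math.sqrt(sum(c * c for c in residual))

def in_cylinder(y, p, d):
    """Whether y lies in the radius-one cylinder C(l) around the line l = (p, d)."""
    return dist_to_line(y, p, d) <= 1.0
```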

We let \(\mathcal {L}( \omega )\) be the ‘thickening’ of the lines in the support of \(\omega \in \Omega \), i.e.

$$\begin{aligned} \mathcal {L}(\omega ) = \bigcup _{l \in {\mathop {\mathrm{supp}}}(\omega )} C(l), \end{aligned}$$
(2.5)

as well as its complement

$$\begin{aligned} \mathcal {V}(\omega ) = \mathbb {R}^3\!\setminus \!\mathcal {L}(\omega ), \end{aligned}$$
(2.6)

also referred to as the ‘vacant set left by the cylinders’.

As proved in Proposition 5.6 of [21],

$$\begin{aligned} \begin{array}{l} \text {in}\,d = 3,\text { for every plane }K \subset \mathbb {R}^3\text { and every} u > 0,\text { there is no}\\ \text {percolation in }\mathcal {V} \cap K,\text { almost surely with respect to }\mathbb {P}_u. \end{array} \end{aligned}$$
(2.7)

This means that in order to establish the existence of an unbounded component in \(\mathcal {V}\) we need to search for components that may exit planes. As it turns out, it is enough to consider the vacant set \(\mathcal {V}\) intersected with the slab \({\mathbb {R}}^2 \times [0,1{,}000]\). The number 1,000 carries no special significance; it was chosen large enough so that the proof of Proposition 4.1 could be carried out.

Remark 2.1

  1.

    As it was established in Remark 3.2 (1) and (3) in [21], the model considered in this article neither dominates nor is dominated by any (constant radius) Boolean percolation model, indicating that the techniques currently available for Boolean and Bernoulli percolation may not work to establish results in the current context. This is well illustrated in [21], Remark 3.2 (2), where the authors rule out the so-called exponential bounds, which are very useful for Boolean and Bernoulli percolation.

  2.

    After establishing the existence of a non-trivial phase transition, one could be interested in studying the uniqueness of this transition. Roughly speaking, this corresponds to studying whether the correlation length undergoes any abrupt change besides the one expected at \(u_*\). Both Boolean percolation and Bernoulli percolation present a unique phase transition in this sense. Moreover, for these models the two-point function decays exponentially both in the sub-critical phase (Menshikov’s theorem) and on the finite clusters of the super-critical phase (see Theorem (8.18) in [4], p. 205).

It is important to notice that these classical results may fail in the presence of long-range dependence. For coordinate percolation one can show that the two-point function decays polynomially (rather than exponentially) in the super-critical phase, see [6]. On the other hand, in the sub-critical phase for low enough parameter \(p\), this rate is exponential. It is still not known whether the decay is exponential throughout the whole sub-critical phase. For the Winkler percolation process, it has been shown in [3] that the decay is also polynomial throughout the super-critical phase. For interlacement percolation, this decay is known to be no faster than a stretched exponential, see Theorem 3.6 of [20]. It is an interesting problem to study the above questions for the cylinder percolation model, see Remark 5.2 (3).

3 The renormalization scheme

In this section we start to develop the renormalization scheme that leads to the proof that \(\mathcal {V}\) percolates within the slab \({\mathbb {R}}^2 \times [0,1{,}000]\) provided that the parameter \(u\) is small enough. As mentioned above, we will define a surface contained in this slab. To this end we start by defining a hexagonal tiling of the plane \({\mathbb {R}}^2 \subset {\mathbb {R}}^3\).

First consider the set

$$\begin{aligned} G = \{2{,}000 (n + m \, e^{i\tfrac{\pi }{3}}); \; \text {for } n, m \in {\mathbb {Z}}\}, \end{aligned}$$
(3.1)

which will correspond to the centers of the faces defining the tiling. The boundary \(\mathcal {H}\) of the hexagonal tiling is defined as the set of points \(x \in {\mathbb {R}}^2\) whose distance to \(G\) is attained at more than one point of \(G\). We also consider the face of the tiling containing the origin, or more precisely, the closure of the connected component of \({\mathbb {R}}^2\!\setminus \!\mathcal {H}\) containing the origin.

Consider the map \({\mathop {\mathrm{dist}}}(\cdot ,\mathcal {H}) : {\mathbb {R}}^2 \rightarrow {\mathbb {R}}_+\), which associates to \(x \in {\mathbb {R}}^2 \subset {\mathbb {R}}^3\) the distance \({\mathop {\mathrm{dist}}}(x,\mathcal {H})\) between \(x\) and \(\mathcal {H}\). We will be interested in the graph of this map regarded as a subset of \({\mathbb {R}}^3\), i.e.

$$\begin{aligned} H = \{ (x,t) \in {\mathbb {R}}^2 \times {\mathbb {R}}; \; x \in {\mathbb {R}}^2\,\text {and}\,t = {\mathop {\mathrm{dist}}}(x,\mathcal {H})\}, \text { see Fig. 1.} \end{aligned}$$
(3.2)

Note that \(H\) is a surface contained in the slab \({\mathbb {R}}^2 \times [0,1{,}000]\) mentioned above and that \((0,0,1{,}000)\) belongs to \(H\).

Fig. 1 A piece of the set \(H\)
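The height function defining \(H\) can be computed explicitly: \({\mathop {\mathrm{dist}}}(x,\mathcal {H})\) is the distance from \(x\) to the boundary of its Voronoi cell in the lattice \(G\), i.e. the minimum, over the other nearby centers \(p\), of the distance from \(x\) to the bisector of \(p\) and the center nearest to \(x\). The sketch below is our own illustration (the search-window size \(k=2\) is our assumption, sufficient for the spacing 2,000):

```python
import math

SPACING = 2000.0
E1 = (SPACING, 0.0)
E2 = (SPACING * 0.5, SPACING * math.sqrt(3.0) / 2.0)  # 2000 * e^{i pi/3}

def centers_near(x, y, k=2):
    """Lattice points of G = {2000 (n + m e^{i pi/3})} in a window around (x, y)."""
    m0 = round(y / E2[1])
    n0 = round((x - m0 * E2[0]) / E1[0])
    return [(n * E1[0] + m * E2[0], m * E2[1])
            for n in range(n0 - k, n0 + k + 1)
            for m in range(m0 - k, m0 + k + 1)]

def height(x, y):
    """dist((x, y), boundary of the hexagonal tiling): minimum distance to a
    bisector between the nearest center of G and any other nearby center."""
    sq = sorted(((x - p[0])**2 + (y - p[1])**2, p) for p in centers_near(x, y))
    r1, p1 = sq[0]
    # distance to the bisector of p1 and p equals (|x-p|^2 - |x-p1|^2) / (2 |p-p1|)
    return min((r - r1) / (2.0 * math.hypot(p[0] - p1[0], p[1] - p1[1]))
               for r, p in sq[1:])
```

At the center of a face the value is the hexagon's inradius, 1,000 (so \((0,0,1{,}000) \in H\)); on \(\mathcal {H}\) itself it vanishes, and it never exceeds 1,000, confirming that \(H\) lies in the slab \({\mathbb {R}}^2 \times [0,1{,}000]\).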

We now state the main result of this article.

Theorem 3.1

For \(d = 3\), for \(u\) small enough,

$$\begin{aligned} \mathbb {P}_u [\mathcal {V} \cap H \text { has an unbounded connected component}] = 1. \end{aligned}$$
(3.3)

In particular, \(u_* > 0\) in three dimensions.

We denote by \(\pi \) the orthogonal projection from \({\mathbb {R}}^3\) onto \({\mathbb {R}}^2\). When restricted to \(H\), \(\pi \) defines a homeomorphism between \(H\) and \({\mathbb {R}}^2\).

Remark 3.2

Let us briefly compare (1.8) to Theorem 3.1. Note that both \(\mathcal {V} \cap {\mathbb {R}}^2\) and \(\pi (\mathcal {V} \cap H)\) are random subsets of \({\mathbb {R}}^2\) which are ergodic (see Lemma 3.3 in [21] and Sect. 5). Moreover, they present similar decays of correlation; indeed, they both satisfy (1.7). However, (1.8) shows that \(\mathcal {V} \cap {\mathbb {R}}^2\) does not present a phase transition, while Theorem 3.1 proves that \(\pi (\mathcal {V} \cap H)\) does. In this paper, we explain the discrepancy between these two processes through Proposition 4.1, which holds true only for \(\pi (\mathcal {V} \cap H)\) (see Remark 4.2). This raises the following question: for which models of Poissonian obstacles can one establish the existence (or absence) of a non-degenerate phase transition as the intensity \(u\) varies?

The proof of this theorem uses the duality of \({\mathbb {R}}^2\) in an indirect way. More precisely,

$$\begin{aligned} \begin{array}{l} \text {if the connected component of }\mathcal {V}(\omega ) \cap H \text { containing }(0,0,1{,}000)\text { is bounded},\\ \text {then there exists a circuit in }\pi (\mathcal {L}(\omega ) \cap H)\text { surrounding the origin in }{\mathbb {R}}^2, \end{array}\qquad \end{aligned}$$
(3.4)

see the paragraph before Eq. (5.6) for a proof.

In view of (3.4), we should pursue a bound on the probability of existence of large paths in \(\pi (\mathcal {L}(\omega ) \cap H)\). For this, we follow a renormalization argument inspired by [18, 21]. But first we introduce some notation. Let

$$\begin{aligned} a_0 \in [288^6, \infty ), \quad \gamma = 7/6 \quad \text {and} \quad a_n = a_0^{\gamma ^n}. \end{aligned}$$
(3.5)

The choice of the parameter \(a_0\) will be made later, but it is important to notice that all the statements we make in this section (and in the Appendix) hold true for any \(a_0\) as above. Moreover, in accordance with our convention, all the constants appearing in this section are independent of the specific choice of \(a_0\) unless stated otherwise.

The sequence \((a_n)_{n \ge 0}\) can be understood as a rapidly increasing sequence of scale lengths. Note that this sequence grows faster than exponentially. In fact,

$$\begin{aligned} \left( \frac{a_n}{a_{n-1}}\right) = a_{n-1}^{\gamma -1}. \end{aligned}$$
(3.6)

The reason to impose \(a_0 \ge 288^6\) is to guarantee that

$$\begin{aligned} a_n \ge 8{,}000 \quad \text {and} \quad a_{n+1} \ge 288 \, a_{n}, \text { for every}\, n \ge 0, \end{aligned}$$
(3.7)

which will be useful for instance in the proofs of Lemmas 3.4, 3.9 and 3.10 below.
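The constraints in (3.7) can be checked directly from (3.5): \(\log (a_{n+1}/a_n) = (\gamma - 1)\gamma ^n \log a_0 \ge \tfrac{1}{6} \log a_0 = \log 288\) when \(a_0 = 288^6\). A quick sanity computation of our own (working in log space, since \(a_n\) overflows floating point almost immediately):

```python
import math

gamma = 7.0 / 6.0
log_a0 = 6.0 * math.log(288.0)                  # a_0 = 288**6 (about 5.7e14)
log_a = [gamma**n * log_a0 for n in range(12)]  # log a_n = gamma^n * log a_0

# a_n >= 8000 for every n, since a_n >= a_0 = 288**6
min_ok = all(la >= math.log(8000.0) for la in log_a)

# a_{n+1} >= 288 a_n, since log(a_{n+1}/a_n) = (gamma-1) gamma^n log a_0
ratio_ok = all(log_a[n + 1] - log_a[n] >= math.log(288.0) - 1e-12
               for n in range(11))
```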

For \(x \in {\mathbb {R}}^2\) and \(r>0\), set \(S(x,r) = \{y \in {\mathbb {R}}^2; \text { dist}(x,y) \le r\}\) and let \(\partial S(x,r)\) denote the boundary of \(S(x,r)\) in \({\mathbb {R}}^2\). For \(n \ge 0\), we define the function \(A_n: {\mathbb {R}}^2 \times \Omega \rightarrow \{0,1\}\)

$$\begin{aligned} A_n (x, \omega ) = \mathbf {1}{\{S(x, a_n/10) \leftrightarrow \partial {S(x,a_n)} \text { in } \pi (\mathcal {L}(\omega ) \cap H)\}}, \end{aligned}$$
(3.8)

where the event appearing on the right-hand side of the previous equation is: ‘there exists a continuous path starting at a point in \(S(x, a_n/10)\), ending at a point in \(\partial {S(x, a_n)}\), and having its image contained in \(\pi (\mathcal {L}(\omega ) \cap H)\)’. Note that for a fixed \(x \in {\mathbb {R}}^2\) and a point measure \(\omega ' \in \Omega \) with finite support,

$$\begin{aligned} \text {the function } \omega \mapsto A_n(x,\omega + \omega ') \text { is measurable.} \end{aligned}$$
(3.9)

To see this, first observe that the set \(\mathcal {L}(\omega + \omega ') \cap H \cap (S(x,a_n) \times {\mathbb {R}})\) is given by the union of finitely many convex and compact sets, as can be seen by splitting the set \(H\) into its faces. The rest of the proof follows the same arguments as Lemma 5.2 in [21].

Denoting by \(A_n(x)\) the random variable \(A_n(x, \omega ): \Omega \rightarrow \{0,1\}\), we can define

(3.10)

where for the second equality we used the periodicity of the set \(H\) and the translation invariance of \({\mathbb {P}}_u\). In order to prove Theorem 3.1, we first need to show that for \(u\) small enough the sequence \(p_n(u)\) decays fast with \(n\). This will be obtained via a recursion relation that we develop below.

We define for each \(n \ge 1\), the lattice

$$\begin{aligned} \mathcal {J}_n = \left( \frac{a_{n-1}}{10}\right) \cdot {\mathbb {Z}}^2. \end{aligned}$$
(3.11)

and for \(i = 1,2,3,4\)

(3.12)

The reason to consider four different spheres \((i = 1, \ldots , 4)\) is explained in the paragraph before Lemma 3.10.

Fig. 2 For \(i = 1, \ldots , 4\), the figure on the left illustrates the sets (in black) and the balls \(S(x,a_{n-1}/10)\), for \(x \in \mathcal {H}^i_n\) (in light gray). On the right, a section of these sets is depicted in more detail

The sets \(\mathcal {H}^i_n\) defined in (3.12) satisfy three important properties that will be useful for proving Theorem 3.1. They are stated in Lemmas 3.3, 3.4 and 3.10 and, even though their proofs are quite simple, we include them in the Appendix for the convenience of the reader.

The first of these properties states that the sets \(\mathcal {H}^i_n\) can be used to define coverings of the spheres \(\partial {S} ( x_0, {(i+1)a_n}/6 )\), see Fig. 2. More precisely,

Lemma 3.3

For \((a_n)_{n \ge 0}\) as in (3.5), if we let \(\mathcal {H}_n^i\) be defined as in (3.12), then

(3.13)

for all \(i = 1, \ldots , 4\), and \(n \ge 1\).

Proof

The proof of this lemma is postponed to the Appendix.\(\square \)

In Lemma 3.5 below, we are going to use union bounds over \(x \in \mathcal {H}^i_n\); therefore, we need the following control on the cardinality of these sets.

Lemma 3.4

There exists a positive constant \(c_0\) such that, for any \((a_n)_{n \ge 0}\) as in (3.5), if we let \(\mathcal {H}_n^i\) be defined as in (3.12), then

$$\begin{aligned} \max _{i = 1,\ldots , 4} | \mathcal {H}_n^i| \le c_{ {0} } \left( \tfrac{a_n}{a_{n-1}}\right) , \end{aligned}$$
(3.14)

for all \(n \ge 1\). Note that, in accordance with our convention on constants, \(c_0\) does not depend on the specific choice of the scale parameter \(a_0\).

Proof

The proof of this lemma is also presented in the Appendix.\(\square \)

As mentioned above, in order to prove that for \(u\) small enough the probability \(p_n(u)\) decays with \(n\), we are going to obtain a recursion relation between \(p_n(u)\) and \(p_{n-1}(u)\). The next lemma gives an indication of why this should be possible, as it relates \(p_n\) to the random variable \(A_{n-1}(\cdot )\).

Lemma 3.5

Fix \(u > 0\) and recall the definitions of \(p_n(u)\) in (3.10) and \(\mathcal {H}^i_n\) in (3.12). For any pair of distinct \(i_1, i_2 \in \{1,2,3,4\}\), it holds that

$$\begin{aligned} p_n(u) \le {c_{ {0} }}^2 \left( \frac{a_n}{a_{n-1}}\right) ^2 \sup _{{x_1 \in \mathcal {H}^{i_1}_n}, \; {x_2 \in \mathcal {H}^{i_2}_n}} {\mathbb {E}}_u [ A_{n-1} (x_1) A_{n-1} (x_2)], \end{aligned}$$
(3.15)

for all \(n \ge 1\), where the constant \(c_{ {0} } > 0\) is the one appearing in Lemma 3.4.

Proof

Fix \(x_0\) as in the right-hand side of (3.10). By the property of the sets \(\mathcal {H}^i_n\) stated in Lemma 3.3, we have that for any \(j = 1,2\) the family \(\{S(x,a_{n-1}/10)\}_{x \in \mathcal {H}^{i_j}_n}\) covers the sphere \(\partial {S}(x_0, (i_j+1)a_n/6)\). Therefore, any path connecting \(S(x_0, a_n/10)\) to \(\partial {S}(x_0, a_n)\) must intersect a ball \(S(x_j,a_{n-1}/10)\) with \(x_j \in \mathcal {H}^{i_j}_n\), for both \(j = 1,2\). It also follows that this path must connect \(S(x_j,a_{n-1}/10)\) to \(\partial {S}(x_j, a_{n-1})\), for \(j=1,2\). In particular we have the following inclusion

$$\begin{aligned}&\left\{ S\left( x_0, \tfrac{a_n}{10}\right) \leftrightarrow \partial {S(x_0,a_n)} \text { in } \pi (\mathcal {L}(\omega ) \cap H)\right\} \nonumber \\&\quad \subseteq \bigcup _{x_1 \in \mathcal {H}^{i_1}_n, \; x_2 \in \mathcal {H}^{i_2}_n} \;\;\bigcap _{j = 1,2} \left\{ S(x_j, \tfrac{a_{n-1}}{10}) \leftrightarrow \partial {S} (x_j, a_{n-1}) \text { in } \pi (\mathcal {L}(\omega ) \cap H) \right\} .\nonumber \\ \end{aligned}$$
(3.16)

Using Lemma 3.4, we have that the above union has no more than \({c_{ {0} }}^2 (a_n/a_{n-1})^2\) members, so that

$$\begin{aligned} {\mathbb {E}}_u[A_n(x_0)] \le c_{ {0} }^2 \left( \tfrac{a_n}{a_{n-1}}\right) ^2 \sup _{{x_1 \in \mathcal {H}^{i_1}_n}, \; {x_2 \in \mathcal {H}^{i_2}_n}} {\mathbb {E}}_u [ A_{n-1} (x_1) A_{n-1} (x_2)]. \end{aligned}$$
(3.17)

The result now follows by taking the supremum over \(x_0\).\(\square \)

We now have to deal with the dependence between the two indicator functions appearing in the right-hand side of (3.15). Let us mention here that the technique employed to bound this dependence in Theorem 5.1 of [21] is destined to fail, see Remark 3.7 below and Proposition 5.6 of [21].

Therefore, in order to keep track of more refined details on the dependence between \(A_{n-1} (x_1)\) and \(A_{n-1} (x_2)\), we introduce the following sequence, with the supremum over \(x\) ranging as in (3.10):

$$\begin{aligned} q_n(u) = \sup _{x} \; \sup _{l_1, l_2 \in {\mathbb {L}}} {\mathbb {E}}_u \left[ A_n(x, \omega + \delta _{l_1} + \delta _{l_2})\right] , \end{aligned}$$
(3.18)

where the above expectation is taken with respect to \(\omega \in \Omega \). Note that the random variable \(A_n(x, \omega + \delta _{l_1} + \delta _{l_2})\) corresponds to \(A_n(x,\omega )\) after we add two deterministically positioned lines to the random point measure \(\omega \). The quantity \(q_{n-1}(u)\) will help us control the dependence between the random variables \(A_{n-1} (x_1)\) and \(A_{n-1} (x_2)\) for \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\). This control is attained by considering the number of cylinders simultaneously intersecting suitable neighborhoods of \(x_1\) and \(x_2\) in three different scenarios: the first in which this number is equal to zero, the second in which it is either one or two, and the third in which it is at least three. In the first scenario we can treat \(A_{n-1}(x_1)\) and \(A_{n-1}(x_2)\) as independent. The probability of the third scenario is sufficiently small for our purposes. In the second scenario we dominate the dependencies by adding two cylinders to the process; this is where \(q_n(u)\) becomes useful, and it explains why two lines appear in its definition. The details are carried out in the lemma below.

Given two sets \(A, B \subset {\mathbb {R}}^3\) that are either open or compact, we define the following subsets of \({\mathbb {L}}\).

$$\begin{aligned} L_A&= \{l \in {\mathbb {L}}; C(l) \text { intersects } A\}\end{aligned}$$
(3.19)
$$\begin{aligned} L_{A,B}&= \{l \in {\mathbb {L}}; C(l) \text { intersects both }A\text { and }B\}. \end{aligned}$$
(3.20)

We refer to the paragraph below Equation (2.11) in [21] for an explanation concerning the measurability of these sets.

Lemma 3.6

Fix \(n \ge 1\), two distinct \(i_1, i_2 \in \{1,2,3,4\}\) and points \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\). Defining \(D_j = S(x_j, a_{n-1}) \times [0,1{,}000]\) for \(j = 1,2\), we have

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2)] \le p_{n-1}^2(u) + \mathbb {P}_u [ \omega (L_{D_1, D_2}) \ge 3]\\&\quad + \mathbb {P}_u [ 1 \le \omega (L_{D_1, D_2}) \le 2] q_{n-1}^2(u). \end{aligned}$$

Note that the first term in the right-hand side of the above equation corresponds to the natural bound that would be obtained if \(A_{n-1} (x_1)\) and \(A_{n-1} (x_2)\) were independent. Roughly speaking, the other two terms respectively account for the possibilities of ‘high’ and ‘medium’ interaction between \(A_{n-1} (x_1)\) and \(A_{n-1} (x_2)\).
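Though no substitute for the proof below, the shape of this bound can be illustrated on a toy analogue in which the two local configurations are independent Poisson counts \(N_1, N_2\), the shared cylinders are an independent Poisson count \(N_s\), and \(A_i = \mathbf {1}\{N_i + N_s \ge 3\}\). All parameters in the following sketch are illustrative and are not taken from the paper:

```python
from math import exp, factorial

def pmf(k, lam):
    """Poisson(lam) probability mass at k."""
    return exp(-lam) * lam ** k / factorial(k)

def sf(k0, lam):
    """P[Poisson(lam) >= k0], exact via the complement."""
    return 1.0 - sum(pmf(j, lam) for j in range(k0))

# Toy analogue of Lemma 3.6: A_i = 1{N_i + N_s >= 3}, N_s = shared "lines".
for lam1, lam2, lams in [(0.5, 0.7, 0.2), (1.0, 1.0, 0.05), (0.3, 2.0, 0.4)]:
    p1, p2 = sf(3, lam1), sf(3, lam2)     # "independent" bounds (role of p_{n-1})
    q1, q2 = sf(1, lam1), sf(1, lam2)     # after adding two lines: N_i + 2 >= 3
    # exact E[A1 A2], conditioning on the shared count (truncated tail)
    lhs = sum(pmf(k, lams) * sf(max(0, 3 - k), lam1) * sf(max(0, 3 - k), lam2)
              for k in range(60))
    rhs = p1 * p2 + sf(3, lams) + (pmf(1, lams) + pmf(2, lams)) * q1 * q2
    assert lhs <= rhs + 1e-12
```

Here \(p_i\) and \(q_i\) play the roles of \(p_{n-1}(u)\) and \(q_{n-1}(u)\): adding two shared lines can only increase an increasing functional, which is exactly how the middle scenario is dominated.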

Proof

We consider the partition of \(\Omega \) into the three disjoint sets given by: \([\omega (L_{D_1, D_2}) = 0], [\omega (L_{D_1, D_2}) \ge 3]\) and \([1 \le \omega (L_{D_1, D_2}) \le 2]\). These three events will respectively correspond to the three terms in the right hand side of the above equation.

Let us first show that \({\mathbb {E}}_u [ A_{n-1} (x_1) A_{n-1} (x_2);~ \omega (L_{D_1, D_2}) = 0] \le p_{n-1}^2(u)\). Indeed

$$\begin{aligned}&{\mathbb {E}}_u [A_{n-1} (x_1) A_{n-1}(x_2);~ \omega (L_{D_1,D_2}) = 0] \nonumber \\&\quad = {\mathbb {E}}_u [A_{n-1} ( x_1, \mathbf {1}_{{\mathbb {L}}\slash L_{D_2}} \cdot \omega ) A_{n-1} ( x_2, \mathbf {1}_{L_{D_2}} \cdot \omega ); \omega (L_{D_1,D_2}) = 0] \nonumber \\&\quad \le {\mathbb {E}}_u [A_{n-1} (x_1, \mathbf {1}_{\mathbb {L}\slash L_{D_2}} \cdot \omega )] {\mathbb {E}}_u [ A_{n-1} ( x_2, \mathbf {1}_{L_{D_2}} \cdot \omega )] \le p_{n-1}^2 (u). \end{aligned}$$
(3.21)

In the above estimate, we first used that when \(\omega (L_{D_1,D_2}) = 0\), we have \(A_{n-1}(x_2, \omega ) = A_{n-1}(x_2, \mathbf {1}_{L_{D_2}} \cdot \omega )\) and \(A_{n-1}(x_1, \omega ) = A_{n-1}(x_1, \mathbf {1}_{{\mathbb {L}} \setminus L_{D_2}} \cdot \omega )\). Then we neglected the intersection with \(\{\omega (L_{D_1,D_2}) = 0\}\) and used the independence between the random variables \(A_{n-1}(x_1, \mathbf {1}_{{\mathbb {L}} \setminus L_{D_2}} \cdot \omega )\) and \(A_{n-1}(x_2, \mathbf {1}_{L_{D_2}} \cdot \omega )\) (note that they depend on the realization of the Poisson point process in disjoint sets).

It is clear that \({\mathbb {E}}_u [ A_{n-1} (x_1) A_{n-1} (x_2);~ \omega (L_{D_1, D_2}) \ge 3] \le {\mathbb {P}}_u [ \omega (L_{D_1, D_2}) \ge 3]\). Therefore, all we need to do in order to finish the proof is to show that

$$\begin{aligned} {\mathbb {E}}_u [ A_{n-1} (x_1) A_{n-1} (x_2);~ 1 \le \omega (L_{D_1, D_2}) \le 2] \le {\mathbb {P}}_u [ 1 \le \omega (L_{D_1, D_2}) \le 2 ] q_{n-1}^2(u). \end{aligned}$$
(3.22)

For this, let us define for each given \(\omega ' \in \Omega \), the following function

$$\begin{aligned} \phi (\omega ') = \mathbb {E}_u [A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}} \cdot \omega + \omega ')] \mathbb {E}_u [A_{n-1} (x_2, \mathbf {1}_{L_{D_2} \setminus L_{D_1}} \cdot \omega + \omega ')], \end{aligned}$$
(3.23)

where again the above expectations are taken with respect to \(\omega \in \Omega \). Intuitively speaking, the above function is the product of the expectations of \(A_{n-1}\) in \(D_1\) and \(D_2\) after a fixed penalization \(\omega '\) is introduced into the point measure.

We note that \(\mathbf {1}_{L_{D_1} \setminus L_{D_2}}\cdot \omega , \mathbf {1}_{L_{D_2} \setminus L_{D_1}}\cdot \omega \) and \(\mathbf {1}_{L_{D_1, D_2}}\cdot \omega \) are independent and

$$\begin{aligned} A_{n-1}(x_1, \omega )&= A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}}\cdot \omega + \mathbf {1}_{L_{D_1, D_2}}\cdot \omega ),\nonumber \\ A_{n-1}(x_2, \omega )&= A_{n-1} (x_2, \mathbf {1}_{L_{D_2} \setminus L_{D_1}}\cdot \omega + \mathbf {1}_{L_{D_1, D_2}}\cdot \omega ). \end{aligned}$$
(3.24)

This implies that

$$\begin{aligned} \mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2) | \mathbf {1}_{L_{D_1,D_2}} \cdot \omega ] = \phi (\mathbf {1}_{L_{D_1,D_2}} \cdot \omega ), \end{aligned}$$
(3.25)

almost surely.

Note also that

$$\begin{aligned}&\sup \{ \phi (\omega '); \; \omega ' \in \Omega \text { satisfying } \text {supp}(\omega ') \subseteq L_{D_1, D_2} \text { and } 1 \le \omega '(L_{D_1, D_2}) \le 2 \}\nonumber \\&\quad \le \sup _{l_1, l_2 \in L_{D_1,D_2}} \; \mathbb {E}_u[A_{n-1}(x_1, \omega +\delta _{l_1} \!+\! \delta _{l_2})]\mathbb {E}_u[A_{n-1}(x_2, \omega +\delta _{l_1} \!+\! \delta _{l_2})] \le q^2_{n-1}(u).\nonumber \\ \end{aligned}$$
(3.26)

This, together with (3.25) implies (3.22) and finishes the proof of the lemma.\(\square \)

Remark 3.7

Let us briefly mention here how Lemma 3.6 improves the technique employed in Theorem 5.1 of [21] to bound the dependence between \(A_{n-1} (x_1)\) and \(A_{n-1} (x_2)\). Roughly speaking, in (5.16) of [21], the authors obtained that

$$\begin{aligned} \mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2)] \le p_{n-1}^2(u) + \mathbb {P}_u [ \omega (L_{D_1, D_2}) \ge 1]. \end{aligned}$$

Here, by considering the case \(1 \le \omega (L_{D_1, D_2}) \le 2\) separately, we can obtain better exponents for our induction relations (see Lemma 3.9) allowing this technique to work in the case \(d = 3\).
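To quantify the gain, write \(\lambda = u\,\mu (L_{D_1,D_2})\); the discarded term in [21] is of order \(\lambda \), while the two new terms are of order \(\lambda ^3\) and \(\lambda \, q_{n-1}^2\) respectively. A toy numerical comparison (the values of \(\lambda \) and \(q\) below are illustrative only):

```python
from math import exp

# lam plays the role of u * mu(L_{D1,D2}); q that of q_{n-1}.
lam, q = 1e-3, 1e-2     # illustrative magnitudes only

old_extra = 1 - exp(-lam)                              # P[omega(L) >= 1], as in [21]
p_ge3 = 1 - exp(-lam) * (1 + lam + lam ** 2 / 2)       # P[omega(L) >= 3]
p_12 = exp(-lam) * (lam + lam ** 2 / 2)                # P[1 <= omega(L) <= 2]
new_extra = p_ge3 + p_12 * q ** 2                      # as in Lemma 3.6

assert new_extra < old_extra / 1000   # three extra orders of magnitude here
```

The smaller error term is what makes the induction close in dimension three, where \(\lambda \) itself decays only polynomially in the scale ratio.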

From Lemma 3.6, it is clear that we will need to bound probabilities of the form \(\mathbb {P}_u [ \omega (L_{D_1, D_2}) \ge k]\), for \(k \ge 1\). This is done with the help of the following

Lemma 3.8

For fixed \(x_1, x_2 \in \mathbb {R}^2\) and \(s \ge 1\), define the sets \(D_1 = S (x_1, s) \times [0,1{,}000]\) and \(D_2 = S (x_2, s) \times [0,1{,}000]\), cf. Lemma 3.6. Then

$$\begin{aligned} \mu (L_{D_1, D_2}) \le c_{ {1} } \frac{s^2}{r^{2}}, \end{aligned}$$
(3.27)

where \(r = {\mathop {\mathrm{dist}}}\{S(x_1,s), S(x_2,s)\}\).

Proof

In case \(r<4\), the result follows easily from Lemma 2.2 of [21] using the fact that \(L_{D_1,D_2} \subset L_{D_1}\) and \(s \ge 1\).

Supposing that \(r \ge 4\), for \(i =1,2\), let \(R_i = \partial {S}(x_i, s) \times [0,1{,}000]\) and consider a covering \(\mathcal {R}_i\) of \(R_i\) with no more than \(cs\) balls of radius one centered at points of \(R_i\). By the convexity of a cylinder \(C \in \mathbb {C}\), if \(C\) intersects both \(D_1\) and \(D_2\), then it must intersect both \(R_1\) and \(R_2\). Therefore it touches at least two balls \(B_1 \in \mathcal {R}_1\) and \(B_2 \in \mathcal {R}_2\), and the centers of \(B_1\) and \(B_2\) are at distance at least \(4\) from each other. Using [21, Lemma 3.1] with \(d=3\), we have that

$$\begin{aligned} \mu (L_{D_1, D_2}) \le \sum _{\begin{array}{c} B_1 \in \mathcal {R}_1\\ B_2 \in \mathcal {R}_2 \end{array}} \mu (L_{B_1, B_2}) \le \frac{c_{ {1} } s^2}{r^{2}}. \end{aligned}$$
(3.28)

This finishes the proof of the lemma.\(\square \)

From now on we fix the parameter \(u > 0\), assuming that it is not larger than one. This assumption is made only to simplify the calculations. Since the value of \(u\) is fixed, we omit it from the notations \(p_n(u)\) and \(q_n(u)\), writing simply \(p_n\) and \(q_n\).

In Lemmas 3.9 and 3.12 below, we develop a system of recurrence relations between the \(p_n\)’s and \(q_n\)’s.

Lemma 3.9

There exists a positive constant \(c_{ {2} }\) such that, for any \((a_n)_{n \ge 0}\) as in (3.5) and for all \(n \ge 1\),

$$\begin{aligned} p_n \le c_{ {2} }\left( a_{n-1}^{\gamma -1}\right) ^2 \left[ p_{n-1}^2 + \left( a_{n-1}^{1 - \gamma }\right) ^6 + \left( a_{n-1}^{1 - \gamma }\right) ^2 q_{n-1}^2 \right] . \end{aligned}$$
(3.29)

Recall that \(\gamma = 7/6\) and note that \(c_{ {2} }\) does not depend on the choice of \(a_0 \ge 288^6\).

Proof

We take \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\), with \(D_1, D_2\) as in Lemma 3.6. Applying Lemma 3.8 with \(s = a_{n-1}\) and \(r = \text {dist}(D_1, D_2)\) (which by (3.7) is at least \({a_n}/{10}\)), we deduce that

$$\begin{aligned} \mu (L_{D_1,D_2}) \le c \left( \frac{a_{n-1}}{a_n}\right) ^2 = c (a_{n-1}^{1-\gamma })^2 \; (\le c), \text { for all }n \ge 1. \end{aligned}$$
(3.30)

Recalling that \(\mathbb {P}_u\) is a Poisson point process in \(\mathbb {L}\) having intensity measure \(u\mu \), we can infer that

$$\begin{aligned} \mathbb {P}_u [ 1 \le \omega (L_{D_1,D_2}) \le 2] \le \mathbb {P}_u [\omega (L_{D_1, D_2}) \ge 1] \le u \mu (L_{D_1,D_2}) \end{aligned}$$
(3.31)

and

$$\begin{aligned} \mathbb {P}_u [\omega (L_{D_1, D_2}) \ge 3]&\le \exp \{u \mu ( L_{D_1,D_2})\} - 1 - u\mu ( L_{D_1,D_2}) -\frac{u^2\mu (L_{D_1,D_2})^2}{2}\nonumber \\&\le c u^3 \mu ( L_{D_1,D_2})^3, \text { for all} \,n \ge 1, \end{aligned}$$
(3.32)

where in the last inequality we used a Taylor expansion together with \(u \le 1\) and with the fact that \(u \mu ( L_{D_1,D_2}) \le c\) by (3.30).
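The Taylor estimate used in (3.32) can be verified directly: since \(e^{\lambda } - 1 - \lambda - \lambda ^2/2 = \sum _{k \ge 3} \lambda ^k / k! \le (\lambda ^3/6)\, e^{\lambda }\), the constant \(c = e^{\lambda _{\max }}/6\) works whenever \(\lambda \le \lambda _{\max }\). A quick numerical sanity check (the cutoff \(\lambda _{\max } = 2\) is illustrative):

```python
from math import exp

lam_max = 2.0                      # stands in for the uniform bound u*mu <= c
c = exp(lam_max) / 6               # constant from the Taylor remainder

for i in range(1, 201):
    lam = lam_max * i / 200
    tail = 1 - exp(-lam) * (1 + lam + lam ** 2 / 2)   # P[Poisson(lam) >= 3]
    upper = exp(lam) - 1 - lam - lam ** 2 / 2         # drop the e^{-lam} factor
    assert tail <= upper + 1e-12
    assert upper <= c * lam ** 3 + 1e-12
```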

We now use Lemma 3.6, together with (3.30) and the two above bounds to obtain (recalling that \(u \le 1\)) that:

$$\begin{aligned} \mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2)] \le p_{n-1}^2 + c(a_{n-1}^{1 - \gamma })^6 + c (a_{n-1}^{1- \gamma })^2 q_{n-1}^2, \text { for }n\ge 1. \end{aligned}$$
(3.33)

Taking the supremum over \(x_1 \in \mathcal {H}^{i_1}\) and \(x_2 \in \mathcal {H}^{i_2}\) as in Lemma 3.5, this leads to (3.29), concluding the proof of Lemma 3.9.\(\square \)

In order to analyze the decay of \(q_n(u)\), we will make use of a suitable property of the sets \(\mathcal {H}^i_n\), which we now discuss. Roughly speaking, given two cylinders \(C_1 = C(l_1)\) and \(C_2 = C(l_2)\) with \(l_1\) and \(l_2\) as in the definition of \(q_n(u)\) (see (3.18)), we need to bound the number of points in \(\mathcal {H}^i_n\) that these cylinders may approach. Intuitively speaking, the only way in which a given cylinder \(C\) may approach too many points in \(\mathcal {H}^i_n\) is if \(C\) is ‘approximately tangent’ to the sphere \(\partial S ( x_0, {(i+1)a_n}/{6})\), see Fig. 2. However, a given cylinder can only be ‘approximately tangent’ to at most one of these spheres, say the one corresponding to some \(\bar{i} \in \{1, \ldots , 4\}\). This explains why we allow \(i\) to assume four different values: given two cylinders \(C_1\) and \(C_2\) as above, we can still find \(i_1\) and \(i_2\) for which both \(C_1\) and \(C_2\) are ‘secant’ to the spheres corresponding to \(i_1\) and \(i_2\). Consequently, the number of balls in the coverings corresponding to \(\mathcal {H}^{i_1}_n\) and \(\mathcal {H}^{i_2}_n\) that are intersected by \(C_1\) and \(C_2\) is bounded by a universal constant. This is made precise in the following

Lemma 3.10

There exists a constant \(c_{ {3} } > 0\) such that the following holds. For any \((a_n)_{n \ge 0}\) as in (3.5) if we let \(\mathcal {H}_n^i\) be defined as in (3.12), then, for all \(n \ge 1\) and every pair of cylinders \(C_1, C_2 \in \mathcal {C}\), there exist distinct \(i_1, i_2 \in \{1, \ldots , 4\}\) such that

$$\begin{aligned} \left| \left\{ x \in \mathcal {H}_n^{i_j} ; ~ S \left( x, \tfrac{a_{n-1}}{10} \right) \times [0,1{,}000] \cap ( C_1 \cup C_2) \ne \emptyset \right\} \right| \le c_{ {3} }, \end{aligned}$$
(3.34)

for any \(j=1,2\). Note that, in accordance with our convention on constants, \(c_{ {3} }\) does not depend on the specific choice of the scale parameter \(a_0\).

Proof

The proof of this lemma is postponed to the Appendix.\(\square \)

Our next aim is to obtain a recursive equation for the \(q_n\)’s, resembling the one obtained in (3.29) for the \(p_n\)’s. In order to do this, we first establish a result analogous to Lemmas 3.5 and 3.6.

Lemma 3.11

Fix \(n \ge 1\) and \(l_1, l_2 \in \mathbb {L}\). Given \(i_1\) and \(i_2\) as in Lemma 3.10, we have that

$$\begin{aligned}&\mathbb {E}_u [A_n (x_0, \omega + \delta _{l_1} + \delta _{l_2})] \le c_{ {0} }^2\left( \frac{a_n}{a_{n-1}}\right) ^2 \sup _{{x_1 \in \mathcal {H}^{i_1}_n}, \; {x_2 \in \mathcal {H}^{i_2}_n}} \mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2)]\nonumber \\&\quad + c_{ {3} } \; c_{ {0} } \left( \frac{a_n}{a_{n-1}}\right) \sup _{{x_1 \in \mathcal {H}^{i_1}_n}, \; {x_2 \in \mathcal {H}^{i_2}_n}} \mathbb {E}_u [\begin{aligned}&A_{n-1} (x_1) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2})\\&+ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2)] \end{aligned}\nonumber \\&\quad + c_{ {3} }^2 \sup _{{x_1 \in \mathcal {H}^{i_1}_n}, \; {x_2 \in \mathcal {H}^{i_2}_n}} \mathbb {E}_u [ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2})]. \end{aligned}$$
(3.35)

Before going into the proof of Lemma 3.11, let us briefly discuss the meaning of the three above terms. Roughly speaking these terms respectively represent the cases where the lines \(l_1\) and \(l_2\) influence: ‘none’, ‘one’ or ‘both’ random variables \(A_{n-1}(x_1)\) and \(A_{n-1}(x_2)\). What is important to notice in these three terms is that, although bounding their corresponding expectations gets harder and harder (as the influence of \(l_1\) and \(l_2\) increases), the combinatorial factors multiplying these expectations are getting smaller. This trade-off was made possible by Lemma 3.10 as we will see in the proof below.

Proof

Recall from Lemma 3.3 that

$$\begin{aligned} \partial S \left( x_0, \left( \tfrac{i_j+1}{6}\right) a_n \right) \subseteq {\textstyle \bigcup \limits _{x \in \mathcal {H}^{i_j}_n}} S \left( x,\tfrac{a_{n-1}}{10}\right) , \text { for}\, j = 1,2. \end{aligned}$$
(3.36)

Since any path in \(\mathbb {R}^2\) connecting the ball \(S(x_0, a_n/10)\) to \(\partial S(x_0, a_n)\) must intersect both \(S (x_1, a_{n-1}/10)\) and \(S (x_2, a_{n-1}/10)\), for some \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\), it follows that

$$\begin{aligned} \mathbb {E}_u [A_n (x_0, \omega + \delta _{l_1} + \delta _{l_2})] \le \sum _{\begin{array}{c} x_1 \in \mathcal {H}^{i_1}_n, \\ x_2 \in \mathcal {H}^{i_2}_n \end{array}} \mathbb {E}_u [ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2})]. \end{aligned}$$
(3.37)

The lines \(l_1\) and \(l_2\) may or may not influence the functions \(A_{n-1}(\cdot )\) appearing above. To distinguish these cases, we define the sets \(\mathcal {D}^{j}_n (l_1,l_2) \subseteq \mathcal {H}^{i_j}_n\) for \(j = 1,2\) by

$$\begin{aligned} \mathcal {D}^{j}_n (l_1,l_2) = \{x \in \mathcal {H}^{i_j}_n; S(x, a_{n-1}/10) \times [0,1{,}000] \cap (C(l_1) \cup C(l_2)) \not = \varnothing \}. \end{aligned}$$
(3.38)

It is clear from the definitions of \(\mathcal {D}^{j}_n\) and \(A_n(x, \omega )\) that

$$\begin{aligned} A_{n-1} (x, \omega + \delta _{l_1} + \delta _{l_2}) = A_{n-1} (x, \omega ), \quad \text { for all }x \in \mathcal {H}^{i_j}_n \setminus \mathcal {D}^{j}_n. \end{aligned}$$
(3.39)

Note that by our choice of \(i_1\) and \(i_2\) as in Lemma 3.10, we have that \(|\mathcal {D}^{j}_n| \le c_{ {3} }\) for \(j = 1,2\) and all \(n \ge 1\). By Lemma 3.4, \(|\mathcal {H}^{i_j}_n| \le c_{ {0} }(a_n/a_{n-1})\) for every \(n \ge 1\). Now, by splitting the sum in (3.37) into the terms where \(x_j \in \mathcal {H}^{i_j}_n\!\setminus \!\mathcal {D}^{j}_n\) and \(x_j \in \mathcal {D}^{j}_n (j=1,2)\) we obtain from (3.39) the terms in the right-hand side of (3.35), finishing the proof of Lemma 3.11.\(\square \)
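The splitting at the end of this proof is elementary counting, and can be mimicked on toy index sets: the pairs \((x_1, x_2)\) fall into the cells ‘neither coordinate influenced’, ‘exactly one influenced’ and ‘both influenced’, whose cardinalities are bounded by \(|\mathcal {H}^{i_1}_n||\mathcal {H}^{i_2}_n|\), \(2 c_{ {3} } c_{ {0} } (a_n/a_{n-1})\) and \(c_{ {3} }^2\), matching the three terms of (3.35). A schematic sketch (set sizes illustrative):

```python
# Toy stand-ins: H_j plays the role of H^{i_j}_n, D_j of the influenced set D^j_n.
H1, H2 = set(range(50)), set(range(50))     # |H_j| <= c0 * (a_n / a_{n-1})
D1, D2 = {0, 1, 2}, {5, 6}                  # |D_j| <= c3 (Lemma 3.10)

pairs = {(x, y) for x in H1 for y in H2}
neither = {(x, y) for x, y in pairs if x not in D1 and y not in D2}
both = {(x, y) for x, y in pairs if x in D1 and y in D2}
one = pairs - neither - both                # exactly one coordinate influenced

assert len(neither) + len(one) + len(both) == len(H1) * len(H2)
assert len(neither) <= len(H1) * len(H2)                    # first term of (3.35)
assert len(one) <= len(D1) * len(H2) + len(H1) * len(D2)    # second term
assert len(both) == len(D1) * len(D2)                       # third term
```

As in the proof, the harder-to-bound cells are also the smallest ones, which is the trade-off Lemma 3.10 makes available.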

We now obtain the promised recurrence relation for \(q_n\) analogous to (3.29).

Lemma 3.12

There exists a positive constant \(c_{ {4} }\) such that, for any \((a_n)_{n \ge 0}\) as in (3.5) and for all \(n \ge 1\),

$$\begin{aligned} q_n&\le c_{ {4} } (a^{\gamma -1}_{n-1})^2 [ p^2_{n-1} + (a^{1-\gamma }_{n-1})^6 + (a^{1-\gamma }_{n-1})^2 q^2_{n-1}]\nonumber \\&+ c_{ {4} } a_{n-1}^{\gamma -1} [ p_{n-1}q_{n-1} + (a^{1-\gamma }_{n-1})^2 q_{n-1} + (a_{n-1}^{1-\gamma })^6] + c_{ {4} } [ q^2_{n-1} + (a^{1-\gamma }_{n-1})^2].\nonumber \\ \end{aligned}$$
(3.40)

Proof

We first fix \(l_1, l_2\) arbitrarily in \(\mathbb {L}\) and we take \(i_1\) and \(i_2\) as in Lemma 3.10. The three terms in the above equation will be derived from the corresponding terms in (3.35), after taking the supremum.

Note that the first term in the right-hand side of (3.35) can be easily bounded using (3.33), yielding the first term in (3.40).

In order to bound the second term in the right-hand side of (3.35), we now show that, for every \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\),

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1, \omega ) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2})]\nonumber \\&\quad \le c [ p_{n-1}q_{n-1} + (a^{1-\gamma }_{n-1})^2 q_{n-1} + (a^{1-\gamma }_{n-1})^6], \end{aligned}$$
(3.41)

for all \(n \ge 1\).

Indeed, using the same notation as in Lemma 3.6 and ideas analogous to those in its proof, we first obtain that

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2}), \omega (L_{D_1,D_2}) = 0]\nonumber \\&\quad \le \mathbb {E}_u [ A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}} \cdot \omega ) A_{n-1} (x_2, \mathbf {1}_{L_{D_2} \setminus L_{D_1}} \cdot \omega + \delta _{l_1} + \delta _{l_2})]\nonumber \\&\quad = \mathbb {E}_u [ A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}} \cdot \omega )] \mathbb {E}_u [ A_{n-1} (x_2, \mathbf {1}_{L_{D_2} \setminus L_{D_1}} \cdot \omega + \delta _{l_1} + \delta _{l_2})]\nonumber \\&\quad \le p_{n-1}q_{n-1}. \end{aligned}$$
(3.42)

Furthermore, we have that

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2}), \omega (L_{D_1,D_2}) \ge 3]\nonumber \\&\quad \le \mathbb {P}_u [ \omega (L_{D_1,D_2}) \ge 3 ] \le c(a^{1-\gamma }_{n-1})^6, \end{aligned}$$
(3.43)

where the last inequality follows from (3.32) and (3.30).

Finally, we consider

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2}), 1\le \omega (L_{D_1,D_2}) \le 2]\nonumber \\&\quad \le \mathbb {E}_u [ A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}} \cdot \omega + \mathbf {1}_{L_{D_1,D_2}} \cdot \omega ), 1\le \omega (L_{D_1,D_2}) \le 2]\nonumber \\&\quad = \mathbb {E}_u [ \mathbf {1}_{\{1\le \omega (L_{D_1,D_2}) \le 2\}} \mathbb {E}_u [ A_{n-1} (x_1, \mathbf {1}_{L_{D_1} \setminus L_{D_2}} \cdot \omega + \mathbf {1}_{L_{D_1,D_2}} \cdot \omega )| \mathbf {1}_{L_{D_1,D_2}} \cdot \omega ]]\nonumber \\&\quad \le \sup _{l, l' \in \mathbb {L}} \mathbb {E}_u [ A_{n-1}(x_1, \omega + \delta _l + \delta _{l'})] \cdot \mathbb {P}_u [ 1\le \omega (L_{D_1,D_2}) \le 2] \le c (a^{1-\gamma }_{n-1})^2 q_{n-1},\nonumber \\ \end{aligned}$$
(3.44)

where the last inequality follows from the definition of \(q_n\), together with (3.31) and (3.30). Putting together (3.42), (3.43) and (3.44) we obtain (3.41) as promised.

To finish the proof, we show that for every \(x_1 \in \mathcal {H}^{i_1}_n\) and \(x_2 \in \mathcal {H}^{i_2}_n\),

$$\begin{aligned} \mathbb {E}_u [ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2})] \le c [q^2_{n-1} + (a^{1-\gamma }_{n-1})^2], \text { for all } n\ge 1, \end{aligned}$$
(3.45)

which will correspond to the last term in the right-hand side of (3.40).

To prove the above, we first proceed as in (3.42) to obtain that

$$\begin{aligned} \mathbb {E}_u [ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2}), \omega (L_{D_1,D_2}) = 0] \le q^2_{n-1}. \end{aligned}$$
(3.46)

Then we use (3.31) and (3.30) to get

$$\begin{aligned}&\mathbb {E}_u [ A_{n-1} (x_1, \omega + \delta _{l_1} + \delta _{l_2}) A_{n-1} (x_2, \omega + \delta _{l_1} + \delta _{l_2}), \omega (L_{D_1,D_2}) \ge 1]\nonumber \\&\quad \le \mathbb {P}_u [\omega (L_{D_1,D_2}) \ge 1] \le c(a^{1-\gamma }_{n-1})^2. \end{aligned}$$
(3.47)

Combining (3.46) and (3.47) yields (3.45).

The bounds (3.41) and (3.45) can be plugged into (3.35) (see also the first paragraph of this proof) and, after taking the supremum over \(x_0\) and \(l_1, l_2 \in \mathbb {L}\), we obtain (3.40). This finishes the proof of Lemma 3.12.\(\square \)

Now that we have established a system of relations between \(p_n(u)\) and \(q_n(u)\), we can obtain the promised induction step used to bound these probabilities. From now on, we finally fix the scale parameter

$$\begin{aligned} \hat{a}_0 = 288^6 \vee (8 \; (c_{ {2} } \vee c_{ {4} }))^{168}, \end{aligned}$$
(3.48)

where we refer to Lemmas 3.9 and 3.12 for the definitions of the constants \(c_{ {2} }\) and \(c_{ {4} }\). As stressed there, these constants do not depend on the value of \(a_0 \in [288^6, \infty )\).

Proposition 3.13

Take \(\hat{a}_0\) as in (3.48) and define the corresponding sequence \((\hat{a}_n)_{n\ge 1}\) through (3.5). Then, if for some \(n \ge 1\),

$$\begin{aligned} p_{n-1} \le \hat{a}_{n-1}^{(5/2) (1-\gamma )} \quad \mathrm{and} \quad q_{n-1} \le \hat{a}_{n-1}^{(3/2) (1-\gamma )}, \end{aligned}$$
(3.49)

then this also holds with \(n-1\) replaced by \(n\), i.e.

$$\begin{aligned} p_{n} \le \hat{a}_{n}^{(5/2) (1-\gamma )} \quad \mathrm{and} \quad q_{n} \le \hat{a}_{n}^{(3/2) (1-\gamma )}. \end{aligned}$$
(3.50)

Proof

Using Lemmas 3.9 and 3.12 together with (3.49), we obtain that

$$\begin{aligned} p_n&\le c_{ {2} } \hat{a}_{n-1}^{2(\gamma -1)} [ \hat{a}_{n-1}^{5(1-\gamma )} + \hat{a}_{n-1}^{6(1-\gamma )} + \hat{a}_{n-1}^{2(1-\gamma )}\hat{a}_{n-1}^{3(1-\gamma )}] \le 8 \; c_{ {2} } \hat{a}_{n-1}^{3(1-\gamma )}\nonumber \\ q_n&\le c_{ {4} } \hat{a}_{n-1}^{2(\gamma -1)} [ \hat{a}_{n-1}^{5(1-\gamma )} + \hat{a}_{n-1}^{6(1-\gamma )} + \hat{a}_{n-1}^{2(1-\gamma )}\hat{a}_{n-1}^{3(1-\gamma )}] \nonumber \\&+ c_{ {4} } \hat{a}_{n-1}^{(\gamma -1)} [ \hat{a}_{n-1}^{4(1-\gamma )} + \hat{a}_{n-1}^{(7/2)(1-\gamma )} + \hat{a}_{n-1}^{6(1-\gamma )}] + c_{ {4} } [ \hat{a}_{n-1}^{3(1-\gamma )} + \hat{a}_{n-1}^{2(1-\gamma )}]\nonumber \\&\le 8 \; c_{ {4} } \hat{a}_{n-1}^{2(1-\gamma )}. \end{aligned}$$
(3.51)

Using (3.48), we obtain that

$$\begin{aligned} p_n&\le \hat{a}_0^{1/168} \cdot \hat{a}_{n-1}^{3(1-\gamma )} \overset{(3.6)}{=} \hat{a}_0^{1/168} \cdot \hat{a}_{n}^{3(\frac{1}{\gamma } -1)} \le \hat{a}_n^{3(\frac{1}{\gamma } - 1) + 1/168}\nonumber \\ q_n&\le \hat{a}_0^{1/168} \cdot \hat{a}_{n-1}^{2(1-\gamma )} \le \hat{a}_n^{2(\frac{1}{\gamma } - 1) + 1/168}. \end{aligned}$$
(3.52)

But we have chosen \(\gamma \) to be \(7/6\), so that

$$\begin{aligned} 3\left( \frac{1}{\gamma }- 1\right) + \frac{1}{168} < \frac{5}{2} (1-\gamma ) \quad \text {and} \quad 2 \left( \frac{1}{\gamma }- 1\right) + \frac{1}{168} < \frac{3}{2} (1-\gamma ), \end{aligned}$$
(3.53)

which together with (3.52) yields the Proposition.\(\square \)
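Both the exponent comparison (3.53) and the propagation of (3.49) can be checked numerically. The sketch below first verifies (3.53) exactly in rational arithmetic and then iterates the recursions (3.29) and (3.40) as equalities, with the illustrative (and not the paper's) choices \(c_{ {2} } = c_{ {4} } = 1\) and \(a_0 = 10^{40}\):

```python
from fractions import Fraction

# Exact check of (3.53) with gamma = 7/6.
g = Fraction(7, 6)
assert 3 * (1 / g - 1) + Fraction(1, 168) < Fraction(5, 2) * (1 - g)
assert 2 * (1 / g - 1) + Fraction(1, 168) < Fraction(3, 2) * (1 - g)

# Iterate the recursions (3.29) and (3.40) with toy constants.
gamma, c2, c4 = 7 / 6, 1.0, 1.0
a = 1e40                                    # plays the role of hat{a}_0
p = a ** (2.5 * (1 - gamma))                # initial bounds as in (3.49)
q = a ** (1.5 * (1 - gamma))
for n in range(1, 11):
    b = a ** (1 - gamma)                    # b = a_{n-1}^{1-gamma} < 1
    p_new = c2 * b ** -2 * (p ** 2 + b ** 6 + b ** 2 * q ** 2)
    q_new = (c4 * b ** -2 * (p ** 2 + b ** 6 + b ** 2 * q ** 2)
             + c4 * b ** -1 * (p * q + b ** 2 * q + b ** 6)
             + c4 * (q ** 2 + b ** 2))
    a, p, q = a ** gamma, p_new, q_new      # a_n = a_{n-1}^gamma, cf. (3.5)
    assert p <= a ** (2.5 * (1 - gamma))    # (3.50) holds at every step
    assert q <= a ** (1.5 * (1 - gamma))
```

With these toy constants the bounds (3.49) indeed propagate and \(p_n, q_n\) decay super-polynomially, mirroring the conclusion of the proposition.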

It is clear from Proposition 3.13 that all we need to do in order to obtain the decay of \(p_n\) is to bound the values of \(p_0(u)\) and \(q_0(u)\). This will be done in the next section.

4 Triggering the recurrence relation

In this section we are going to show that for \(u\) small enough we have \(p_0 (u) \le \hat{a}_0^{(5/2) (1-\gamma )}\) and \(q_0 (u) \le \hat{a}_0^{(3/2) (1-\gamma )}\), allowing us to trigger the chain of inequalities provided by Proposition 3.13. This result will follow once we prove the following

Proposition 4.1

As \(u\) goes to zero, both \(p_0(u)\) and \(q_0(u)\) vanish.

Remark 4.2

Before going into the proof of this proposition, we would like to stress that this is the first part of the proof of Theorem 3.1 where our specific choice of the surface \(H\) plays a role. If we had chosen \(H\) to be equal to \(\mathbb {R}^2\), all the considerations of the previous sections would still hold true. However, we will strongly use the rough shape of \(H\) in establishing that \(q_0(u)\) goes to zero with \(u\) (which would not be the case, for instance, for \(\mathbb {R}^2\)).

Recall that the quantity \(q_0\) involves the addition of two cylinders \(C(l_1)\) and \(C(l_2)\) to the random set of cylinders \(\mathcal {L}(\omega )\). In order to make our analysis easier we start by proving a lemma which reduces questions concerning two cylinders to a question related to a single (thicker) one. As before, \(\mathbb {C}\) stands for the set of all cylinders of radius one, i.e., \(\mathbb {C}=\{C(l);~l \in \mathbb {L}\}\).

Lemma 4.3

Given any two cylinders \(C_1, C_2 \in \mathbb {C}\) and any curve \(\eta :[0,1] \rightarrow C_1 \cup C_2\) such that \(\text {dist}(\eta (0), \eta (1)) \ge a_0/10\), we can find a line \(l \in \mathbb {L}\) and \(t_1 < t_2 \in [0,1]\) such that:

$$\begin{aligned}&\text {dist} (\eta (t_1), \eta (t_2)) \ge a_0/100\end{aligned}$$
(4.1)
$$\begin{aligned}&\eta ([t_1, t_2]) \subset B(l,4) \end{aligned}$$
(4.2)

Proof

Let \(U = \eta ([0,1])\) be the image of the curve \(\eta \).

We start by treating two simple cases: the one in which \(C_1 \cap C_2 = \emptyset \) and the one in which \(C_1 \cap C_2 \ne \emptyset \) but \(C_1\) and \(C_2\) have parallel axes. In the former, we have that either \(U \subset C_1\) or \(U \subset C_2\), so there is nothing to be proved; in the latter, we can take \(l\) to be the axis of \(C_1\) and conclude the proof of the lemma.

So, from now on, we assume that \(C_1\) and \(C_2\) are not parallel and intersect each other, which implies that:

$$\begin{aligned} C_1 \cap C_2 \text { is non-empty and bounded.} \end{aligned}$$
(4.3)

For \(x \in \mathbb {R}^3\) and \(i=1,2\), we define \(\varphi _i(x)\) as the point closest to \(x\) lying on the axis of \(C_i\). Note that \(\varphi _i ^{-1}(y)\) (for \(y\) in the axis of \(C_i\)) is a plane perpendicular to the axis of \(C_i\). From (4.3) and the fact that \(C_1 \cap C_2\) is convex, we conclude that \(\varphi _i (C_1 \cap C_2)\) is a line segment whose endpoints we denote by \(x_i\) and \(y_i\). Since \(U\) is connected and \(\text {diam}(U) \ge a_0/10\), the set \(U \backslash (B(x_1,4) \cup B(y_1,4))\) is non-empty.

Moreover we claim that

$$\begin{aligned} \exists \, \,t_1 < t_2 \in [0,1]\text { such that }\eta ([t_1,t_2]) \subseteq U \backslash (B(x_1,4) \cup B(y_1,4)) \text { and } (4.1) \text { holds} \end{aligned}$$
(4.4)

(see Fig. 3).

The claim (4.4) is obviously true when \(\eta \) does not hit any of the balls \(B(x_1,4)\) and \(B(y_1,4)\). So we can assume that \(\eta \) hits at least one of the balls.

Fig. 3

This picture shows a curve \(\eta \) contained in the union of two cylinders \(C_1\) and \(C_2\). The region \(\bar{C}_1 \cup \bar{C}_2\) is shown in darker grey. The balls \(B(x_1,4)\) and \(B(y_1,4)\) are also represented. In this picture, the curve \(\eta \) is disconnected by these balls into three pieces: one inside \(C_1\backslash (\bar{C}_1 \cup \bar{C}_2)\), another inside \(\bar{C}_1 \cup \bar{C}_2\) and the third inside \(C_2 \backslash (\bar{C}_1 \cup \bar{C}_2)\). If \(\eta \) is very long, then at least one of these pieces has to be long

First consider the case in which \(B(x_1,4) \cap B(y_1,4) = \emptyset \). Now assume that both \(B(x_1,4)\) and \(B(y_1,4)\) are hit by \(\eta \) and that \(B(x_1,4)\) is hit before \(B(y_1,4)\) (all the other cases can be handled similarly). In this case, define also the following times:

$$\begin{aligned}&s_1 \text { is the time when }\,\eta \, \text {first enters the ball}\, B(x_1,4),\\&s_2 \text { is the time of the last visit to}\, B(x_1,4)\, \text {before visiting}\, B(y_1,4),\\&s_3 \text { is the time when} \,\eta \, \text {first enters}\, B(y_1,4)\, \text { and}\\&s_4 \text { is the time of the last visit to}\, B(y_1,4). \end{aligned}$$

Then, using the triangle inequality, one can see that one of the pairs \((0,s_1), (s_2, s_3)\) or \((s_4, 1)\) satisfies (4.4) and (4.1). Now if only one of the balls, say \(B(x_1,4)\), is hit by \(\eta \), then we define:

$$\begin{aligned}&s_1 \text { is the time when} \,\eta \, \text {first enters the set}\, B(x_1,4)\\&s_2 \text { is the time of the last visit to the set}\, B(x_1,4). \end{aligned}$$

Again by the triangle inequality, either the pair \((0,s_1)\) or \((s_2,1)\) satisfies (4.4) and (4.1).

Finally, consider the case in which \(B(x_1,4)\) and \(B(y_1,4)\) intersect each other. Define now the following times:

$$\begin{aligned}&s_1 \text { is the time when}\, \eta \, \text {first enters the set}\, B(x_1,4)\cup B(y_1,4),\\&s_2 \text { is the time of the last visit to the set}\, B(x_1,4) \cup B(y_1,4). \end{aligned}$$

Again by the triangle inequality, either the pair \((0,s_1)\) or \((s_2,1)\) satisfies (4.4) and (4.1), finishing the proof of the claim (4.4).

Let us denote \(U' = \eta ([t_1,t_2])\). Since (4.1) has already been established, all we need to prove is that

$$\begin{aligned} U' \text { is contained in a cylinder of radius 4.} \end{aligned}$$
(4.5)

Let \([x_i,y_i]\) stand for the line segment determined by \(x_i\) and \(y_i\) and denote by \(\bar{C}_i\) the set \(C_i \cap \varphi _i^{-1}([x_i,y_i])\). We now show (4.5) by considering the following cases:

Case 1 \(U' \cap (\bar{C}_1 \cup \bar{C}_2) = \emptyset \).

In this case, since \(C_1 \cap C_2 \subset \bar{C}_1 \cup \bar{C}_2\) and \(U'\) is connected and contained in \(C_1 \cup C_2\), we conclude that \(U'\) must be entirely contained in one of the two cylinders \(C_1\) or \(C_2\). This proves (4.5) in this case.

Case 2 \(U' \cap (\bar{C}_1 \cup \bar{C}_2) \ne \emptyset \).

In this case we first show that

$$\begin{aligned} U' \subseteq \bar{C}_1 \cup \bar{C}_2. \end{aligned}$$
(4.6)

We start by claiming that

$$\begin{aligned} \text {Haus}([x_1,y_1],[x_2,y_2]) \le 2, \end{aligned}$$
(4.7)

where \(\text {Haus}(\cdot ,\cdot )\) stands for the Hausdorff distance between two sets. Indeed, given \(x \in [x_1, y_1]\) (the case \(x \in [x_2, y_2]\) is analogous), there is some \(z \in C_1 \cap C_2\) such that \(\varphi _1(z) = x\). Since the cylinders have radius one, \(\text {dist}(x,z) \le 1\) and \(\text {dist}(z, \varphi _2(z)) \le 1\), so it follows that

$$\begin{aligned} \text {dist}(x, [x_{2}, y_{2}]) \le \text {dist}(x, \varphi _{2}(z)) \le \text {dist}(x,z) + \text {dist}(z, \varphi _{2}(z)) \le 2, \end{aligned}$$

establishing (4.7).

In addition we also claim that

$$\begin{aligned} \text {Haus}(\{x_1,y_1\},\{x_2,y_2\}) \le 2 \sqrt{2}. \end{aligned}$$
(4.8)

In order to prove this, suppose by contradiction that, say, \(\text {dist}(x_2, \{x_1, y_1\}) > 2 \sqrt{2}\) (the other cases are analogous). Then we would obtain by (4.7), together with Pythagoras' theorem, that \(\text {dist}(\varphi _1(x_2), \{x_1,y_1\}) > 2\). Since the segment \([x_2,y_2]\) has to be contained in one of the closed half-spaces determined by \(\varphi ^{-1}_1(\varphi _1(x_2))\), either \(\text {dist}(x_1, [x_2,y_2])\) or \(\text {dist}(y_1, [x_2,y_2])\) would be strictly bigger than \(2\), contradicting (4.7). This establishes the bound (4.8).
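The Pythagoras step above can be made explicit as follows: since \(\varphi _1(x_2)\) is the orthogonal projection of \(x_2\) onto the axis line through \(x_1\) and \(y_1\), the triangles with vertices \(x_2, \varphi _1(x_2), x_1\) and \(x_2, \varphi _1(x_2), y_1\) are right-angled at \(\varphi _1(x_2)\), and \(\text {dist}(x_2, \varphi _1(x_2)) \le \text {dist}(x_2, [x_1,y_1]) \le 2\) by (4.7). Hence, for \(w \in \{x_1, y_1\}\),

```latex
\operatorname{dist}(\varphi_1(x_2), w)^2
  = \operatorname{dist}(x_2, w)^2 - \operatorname{dist}(x_2, \varphi_1(x_2))^2
  > (2\sqrt{2})^2 - 2^2 = 4,
```

so that \(\text {dist}(\varphi _1(x_2), \{x_1,y_1\}) > 2\).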

Recall that \(U'\) is connected, contained in \(C_1 \cup C_2\), and intersects \(\bar{C}_1 \cup \bar{C}_2\). Then either (4.6) holds or \(U'\) intersects one of the discs \(\varphi ^{-1}_1 (x_1) \cap C_1, \varphi ^{-1}_1 (y_1) \cap C_1, \varphi ^{-1}_2 (x_2) \cap C_2\), and \(\varphi ^{-1}_2 (y_2) \cap C_2\). However, \(U'\) cannot intersect any of those discs since it is, by definition, disjoint from the union \(B(x_1,4) \cup B(y_1,4)\), which, by (4.8), contains all four discs (see Fig. 3). It follows that (4.6) holds. Moreover, by (4.7) we have \(\bar{C}_1 \cup \bar{C}_2 \subset B([x_1,y_1],4)\), so that \(U'\) is contained in the cylinder of radius \(4\) having the same axis as \(C_1\). This finishes the proof of (4.5) in Case 2, thus yielding the lemma.\(\square \)

Proof of Proposition 4.1

We have that . Therefore, as \(u\) goes to zero, vanishes. In particular this already gives that \(\lim _{u \rightarrow 0} p_0(u) = 0\). Moreover it also implies that, in order to prove that \(\lim _{u \rightarrow 0} q_0(u) = 0\), all we have to do is to show that,

(4.9)

To prove this, it will be convenient to use Lemma 4.3, as we show below.

Assume by contradiction that, for some and cylinders \(C_1, C_2 \in \mathbb {C}\), there is a path in \(\pi ((C_1 \cup C_2)\cap H)\) connecting \(S(x_0, \hat{a}_0/10)\) to \(\partial S(x_0, \hat{a}_0)\). This would imply that there exists some curve \(\eta : [0,1] \rightarrow (C_1 \cup C_2) \cap H\) for which dist\((\eta (0),\eta (1)) \ge \hat{a}_0/10\). Using Lemma 4.3 we obtain \(t_1 < t_2 \in [0,1]\) such that dist\((\eta (t_1),\eta (t_2))\ge \hat{a}_0/100\) and \(\eta ([t_1, t_2])\) is contained in \(B(l,4)\) for some \(l \in \mathbb {L}\). Denoting again by \(\varphi (x)\) the closest point to \(x\) in \(l\) and using Pythagoras’ Theorem we conclude that

$$\begin{aligned} \text {dist}(\pi \circ \varphi \circ \eta (t_1), \pi \circ \varphi \circ \eta (t_2))&\ge \text {dist}(\pi \circ \eta (t_1), \pi \circ \eta (t_2)) - 8 \nonumber \\&\ge \sqrt{\left( \hat{a}_0/100\right) ^2 - 1{,}000^2} - 8 \ge 10^7,\qquad \end{aligned}$$
(4.10)

where, in the third inequality, we have used that \(\hat{a}_0 > 10^{10}\), which follows directly from (3.48).
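As a quick arithmetic check of that last inequality: since \(\hat{a}_0 > 10^{10}\), we have \(\hat{a}_0/100 > 10^{8}\), and hence

```latex
\sqrt{(\hat{a}_0/100)^2 - 1000^2} - 8
  \ge \sqrt{10^{16} - 10^{6}} - 8
  \ge 10^{8}\sqrt{1 - 10^{-10}} - 8
  \ge 10^{8} - 9
  \ge 10^{7}.
```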

Let us denote \(\zeta = \pi \circ \varphi \circ \eta \). Note that \(\zeta \) is a path defined on \([0,1]\), taking values in \(\mathbb {R}^2\) and contained in a line (the projection of \(l\) under \(\pi \)). By (4.10) we conclude that \(\text {dist}(\zeta (t_1), \zeta (t_2)) \ge 10^7\), so we can find \(t'_1 < t'_2\) such that the following hold:

$$\begin{aligned}&\zeta (t'_1) \text { and } \zeta (t'_2) \text { belong to } \mathcal {H},\end{aligned}$$
(4.11)
$$\begin{aligned}&\text {dist}(\zeta (t'_1),\zeta (t'_2)) \ge 10^6 \text { and}\end{aligned}$$
(4.12)
$$\begin{aligned}&\zeta ([t'_1, t'_2]) \text { is contained in the segment } [\zeta (t'_1),\zeta (t'_2)] \subset \mathbb {R}^2. \end{aligned}$$
(4.13)

We now analyze the image of the path \(\zeta |_{[{t'_1},{t'_2}]}\) when it is pulled back to \(H\). For that, let us consider the function from \(\mathbb {R}^3\) to itself given by \(F(x) = (\pi (x), \text {dist}(\pi (x), \mathcal {H}))\) and note that

$$\begin{aligned}&F \circ \eta = \eta \text { and }\end{aligned}$$
(4.14)
$$\begin{aligned}&F \text { is a Lipschitz function with constant } \sqrt{2}. \end{aligned}$$
(4.15)
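The Lipschitz constant in (4.15) can be checked directly: both \(\pi \) and the function \(t \mapsto \text {dist}(t, \mathcal {H})\) are 1-Lipschitz, hence, for all \(x, y \in \mathbb {R}^3\),

```latex
|F(x) - F(y)|^2
  = |\pi(x) - \pi(y)|^2
    + \bigl(\operatorname{dist}(\pi(x),\mathcal{H})
          - \operatorname{dist}(\pi(y),\mathcal{H})\bigr)^2
  \le 2\,|x - y|^2 .
```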
Fig. 4

On the right, part of the set \(B(\mathcal {H},20)\) and of the cylinder \(B(l,4)\) are depicted in blue and in grey, respectively. Part of the curve \(\eta \) is also shown in black inside the cylinder. On the left, a horizontal section of \(H\) is shown, together with the slab \(\mathbb {R}^2\times [0,10]\) in grey. Note that the curve \(\eta ([t_1,t_2])\) is contained in \(H\). The points \(\zeta (t'_1)\) and \(\zeta (t'_2)\) belong to \(\mathcal {H}\) and the straight segment \(\zeta ([t'_1,t'_2])\) is contained in \(\mathbb {R}^2\)

It follows that, for all \(t \in [0,1]\)

$$\begin{aligned} \text {dist}(F(\varphi \circ \eta (t)), \varphi \circ \eta (t))&\overset{(4.14)}{\le } \text {dist}(F(\varphi \circ \eta (t)), F(\eta (t))) + \text {dist}(\eta (t), \varphi \circ \eta (t)) \nonumber \\&\overset{(4.15)}{\le } (\sqrt{2} + 1) \text {dist}(\eta (t), \varphi \circ \eta (t)) \le 10. \end{aligned}$$
(4.16)

Together with (4.11), this inequality implies that \(\varphi \circ \eta (t'_i)\) belongs to the slab \(\mathbb {R}^2 \times [0,10]\) for \(i =1,2\) (see Fig. 4). Since \(\varphi \circ \eta \) is contained in the line \(l\), using (4.13) we conclude that \(\varphi \circ \eta (t)\) also belongs to the slab \(\mathbb {R}^2 \times [0,10]\) for every \(t \in [t'_1,t'_2]\). But this would mean that the segment \(\zeta ([t'_1, t'_2])\) is contained in \(B(\mathcal {H}, 20)\), which, together with (4.12), contradicts the fact that \(B(\mathcal {H}, 20)\) cannot contain any line segment of length \(10^6\).

We omit a rigorous proof of this last fact, as we believe it is clear enough: recalling that the mesh of the lattice \(\mathcal {H}\) is \(2{,}000\) and looking at the right-hand side of Fig. 4, no line segment of length \(10^6\) can be contained in the set \(B(\mathcal {H}, 20)\), depicted in blue. A more extensive discussion of the above can be found, for instance, in [17, p. 335].\(\square \)

5 Proof of Theorem 3.1

In this section we use Propositions 3.13 and 4.1 in order to obtain our main result. We start by making some considerations concerning the ergodicity properties of the measure \(\mathbb {P}_u\).

Consider the space \(\{0,1\}^{\mathbb {Q}^3}\) equipped with the canonical sigma-algebra \(\mathcal {Y}\) generated by the coordinate projections \((Y_x)_{x \in \mathbb {Q}^3}\), and let \((t_x)_{x \in \mathbb {Q}^3}\) be the group of shift operators on \(\{0,1\}^{\mathbb {Q}^3}\). Denote by \(\mathcal {Q}_u\) the law of \(\psi = ( \mathbf {1}_{\{x \in \mathcal {V}\}})_{x \in \mathbb {Q}^3}\) on \(\{0,1\}^{\mathbb {Q}^3}\) and by \(\Gamma \) the subgroup \(2{,}000\cdot \mathbb {Z} e_1\). Then, for any event \(A \in \mathcal {Y}\), we have (see (3.12) in Lemma 3.4 in [21])

$$\begin{aligned} \text {if } t_x(A) = A \text { for all } x \in \Gamma , \quad \text { then } \mathcal {Q}_u [A] \in \{0,1\}. \end{aligned}$$
(5.1)

That is to say, \((\mathbb {P}_u \circ \psi ^{-1}, (t_x)_{x \in \Gamma })\) is ergodic.

Let Perc be the event \(\{\mathcal {V} \text { has an unbounded connected component} \}\) and Perc\(_H\) the corresponding event with \(\mathcal {V}\) replaced by \(\mathcal {V} \cap H\). Note that both \(\mathbf {1}_{\text {Perc}} \circ \psi ^{-1}\) and \(\mathbf {1}_{\text {Perc}_H}\circ \psi ^{-1}\) are invariant under \((t_x)_{x \in \Gamma }\). Furthermore, in view of (5.1), a simple modification of Proposition 3.5 in [21] gives that

$$\begin{aligned} \mathbb {P}_u [\text {Perc}] \in \{0,1\} ~ \text { and } ~ \mathbb {P}_u[\text {Perc}_H] \in \{0,1\}. \end{aligned}$$
(5.2)

Remark 5.1

In [21] the authors state the result corresponding to (5.1) with \(\Gamma = 2{,}000\cdot \mathbb {Z} e_1\) replaced by \(\mathbb {Q}{e_1}\); however, no modification of their proof is required in order to obtain (5.1). They also prove a result similar to (5.2) with the event Perc\(_H\) replaced by the event Perc\(_2\), in which \(\mathcal {V}\cap \mathbb {R}^2\) appears instead of \(\mathcal {V}\cap H\). Since the surface \(H\) is invariant under \(t_x\) for \(x\in \Gamma \), the proof they present adapts easily to our context.

Proof of Theorem 3.1

By Proposition 4.1, there is some constant \(c_{ {6} } > 0\) such that for all \(u \in [0,c_{ {6} })\)

$$\begin{aligned} p_0(u) \le \hat{a}_0^{(5/2)(1-\gamma )} \quad \text {and} \quad q_0(u) \le \hat{a}_0^{(3/2)(1-\gamma )}. \end{aligned}$$
(5.3)

Then, using Proposition 3.13 we obtain that for such values of \(u\),

$$\begin{aligned} p_n(u) \le \hat{a}_n^{(5/2)(1-\gamma )}. \end{aligned}$$
(5.4)

We are now going to see how this implies that

$$\begin{aligned} \mathbb {P}_u[\mathcal {V} \cap H \text { has an unbounded connected component}] = 1. \end{aligned}$$
(5.5)

As above, let the event appearing in (5.5) be denoted by \(\text {Perc}_H\). By (5.2) it will be enough to prove that \(\text {Perc}_H\) has positive probability under \(\mathbb {P}_u\).

Let \(\mathcal {C}\) be the component of \(\mathcal {V}\cap H\) containing the point \((0,0,1{,}000)\). If the event \(\text {Perc}_H^c\) occurs, then \(\mathcal {C}\) is bounded. By the local finiteness of \(\omega \), we have that \(\mathcal {C}\) is delimited by the intersection of \(H\) with a finite number of cylinders in the support of \(\omega \).

Note that the intersection of any cylinder with \(H\) is a union of pieces of ellipsoids delimited by lines. This implies that, if \(\mathcal {C}\) is non-empty and bounded, then its boundary in \(H\) is given by a closed and piecewise smooth curve \(\sigma '\) surrounding the point \((0,0,1{,}000)\) in \(H\). We denote by \(\sigma \) the projection of the curve \(\sigma '\) into \(\mathbb {R}^2\) (under the orthogonal map \(\pi \)). Clearly, \(\sigma \) is a closed curve surrounding the origin in \(\mathbb {R}^2\). This proves that

$$\begin{aligned} \text {Perc}_H^c \subseteq \left\{ \begin{array}{l} \text {there is a closed curve}\, \sigma \subset \pi (\mathcal {L}\cap H)\\ \text {surrounding the origin in}\, \mathbb {R}^2 \end{array}\right\} \end{aligned}$$
(5.6)

Let us define the following sequence of real numbers

$$\begin{aligned} x_{k,i} = \hat{a}_{k-1} + \left( \tfrac{i-1}{10}\right) \hat{a}_{k-1}, \text { for}\, k \ge 1\,\quad \text {and}\quad \,i = 1, \ldots , \left\lceil 10 \left( \tfrac{\hat{a}_k}{\hat{a}_{k-1}}\right) \right\rceil . \end{aligned}$$
(5.7)

Recall here that \((\hat{a}_n)_{n\ge 1}\) has been obtained in Proposition 3.13. Note that, using the notation \(M_k = \lceil 10 \tfrac{\hat{a}_k}{\hat{a}_{k-1}} \rceil \),

$$\begin{aligned}{}[\hat{a}_{k_0-1}, \infty ) \subseteq \bigcup _{k = k_0}^{\infty } \bigcup _{i=1}^{M_k} S \left( x_{k,i}, \tfrac{\hat{a}_{k-1}}{10} \right) , \text { for every}\, k_0 \ge 1. \end{aligned}$$
(5.8)
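To check (5.8), set \(r = \hat{a}_{k-1}/10\). Consecutive centers satisfy \(x_{k,i+1} - x_{k,i} = r\), so along \(e_1\) the sets \(S(x_{k,i}, r)\), \(i = 1, \ldots , M_k\), cover the interval between the first and last centers. Moreover, since \(M_k - 1 \ge 10\, \hat{a}_k/\hat{a}_{k-1} - 1\),

```latex
x_{k,M_k}
  = \hat{a}_{k-1} + \tfrac{M_k - 1}{10}\,\hat{a}_{k-1}
  \ge \hat{a}_{k-1} + \hat{a}_k - \tfrac{\hat{a}_{k-1}}{10}
  \ge \hat{a}_k ,
```

while \(x_{k,1} = \hat{a}_{k-1}\). Hence level \(k\) covers \([\hat{a}_{k-1}, \hat{a}_k]\), and the union over \(k \ge k_0\) covers \([\hat{a}_{k_0-1}, \infty )\).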

From (5.6),

$$\begin{aligned}&\text {Perc}_H^c \cap \{ \omega \in \Omega ; \; \omega (L_{S(0,\hat{a}_{k_0 - 1}) \times [0,1{,}000]}) = 0 \}\nonumber \\&\quad \subseteq \left\{ \begin{array}{l} \text {there is a closed curve}\, \sigma \subseteq [\pi (\mathcal {L} \cap H)]\setminus S(0,\hat{a}_{k_0-1})\\ \text {surrounding the origin in }\,\mathbb {R}^2 \end{array}\right\} \nonumber \\&\quad \subseteq \bigcup _{k = k_0}^{\infty } \left\{ \begin{array}{l} \text {there is a closed curve}\, \sigma \subseteq [\pi (\mathcal {L} \cap H)]\setminus S(0,\hat{a}_{k_0-1})\\ \text {surrounding the origin in}\,\mathbb {R}^2 \,\text {and intersecting}\, [\hat{a}_{k-1}, \hat{a}_k] e_1 \end{array} \right\} . \end{aligned}$$
(5.9)

Whenever a curve \(\sigma \) intersects \([\hat{a}_{k-1}, \hat{a}_k] e_1\), it also intersects one of the balls \(S(x_{k,i},\tfrac{\hat{a}_{k-1}}{10})\) for some \(i \in \{1, \ldots , M_k\}\). Moreover, such a curve must also intersect the sphere \(\partial S(x_{k,i},{\hat{a}_{k-1}})\) in order to surround the origin. In particular, the event \(A_{k-1}(x_{k,i})\) occurs, so that

$$\begin{aligned} \text {Perc}_H^c \cap \{ \omega \in \Omega ; \; \omega (L_{S(0,\hat{a}_{k_0 - 1}) \times [0,1{,}000]}) = 0 \} \subseteq \bigcup _{k = k_0}^{\infty } \bigcup _{i=1}^{M_k}A_{k-1}(x_{k,i}). \end{aligned}$$
(5.10)

Therefore,

$$\begin{aligned}&\mathbb {P}_u [\text {Perc}_H^c, \{ \omega \in \Omega ; \; \omega (L_{S(0,\hat{a}_{k_0 - 1}) \times [0,1{,}000]}) = 0\} ] \le \sum _{k = k_0}^{\infty } \sum _{i=1}^{M_k}\mathbb {P}_u[A_{k-1}(x_{k,i})]\nonumber \\&\quad \overset{(3.10)}{\le } \sum \limits _{k = k_0}^\infty 20 \left( \tfrac{\hat{a}_{k}}{\hat{a}_{k-1}} \right) p_{k-1}(u) \overset{(5.4)}{\le } 20 \sum \limits _{k = k_0}^\infty \hat{a}_{k-1}^{(\gamma -1)} \hat{a}_{k-1}^{(5/2)(1-\gamma )}\nonumber \\&\quad \quad = 20 \sum \limits _{k=k_0}^\infty \hat{a}_{k-1}^{(3/2)(1-\gamma )} = 20 \; \sum \limits _{{k=k_0-1}}^\infty \; (\hat{a}_0^{-\frac{1}{4}})^{\gamma ^k}. \end{aligned}$$
(5.11)
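The exponent bookkeeping in the middle step of (5.11) uses \(\hat{a}_k/\hat{a}_{k-1} = \hat{a}_{k-1}^{\,\gamma -1}\) (consistent with the relation \(\hat{a}_n = \hat{a}_0^{\gamma ^n}\) indicated by the last equality in (5.11)) together with (5.4):

```latex
\frac{\hat{a}_k}{\hat{a}_{k-1}}\; p_{k-1}(u)
  \le \hat{a}_{k-1}^{\,\gamma - 1}\,\hat{a}_{k-1}^{\,(5/2)(1-\gamma)}
  = \hat{a}_{k-1}^{\,(\gamma - 1) + (5/2)(1-\gamma)}
  = \hat{a}_{k-1}^{\,(3/2)(1-\gamma)} .
```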

Now recall that

$$\begin{aligned} \mathbb {P}_u [\omega (L_{S(0,\hat{a}_{k_0-1}) \times [0,1{,}000]}) > 0] \le u \mu (L_{S(0,\hat{a}_{k_0-1}) \times [0,1{,}000]}). \end{aligned}$$
(5.12)

Putting (5.11) and (5.12) together, we obtain

$$\begin{aligned} \mathbb {P}_u [\text {Perc}_H^c] \le 20 \; \sum _{{k=k_0-1}}^\infty \; (\hat{a}_0^{-\frac{1}{4}})^{\gamma ^k} + u \mu (L_{S(0,\hat{a}_{k_0-1}) \times [0,1{,}000]}), \text { for every choice of}\, k_0 \ge 1. \end{aligned}$$
(5.13)

Finally, take \(k_0\) large enough so that the first term on the right-hand side of the above equation is at most \(1/3\), and \(u \le c_{ {6} }\) small enough so that the second term is also smaller than \(1/3\). This proves that \(\text {Perc}_H\) has positive probability, concluding the proof of Theorem 3.1.\(\square \)

Remark 5.2

This paper leaves several questions untouched, such as:

  1.

    Is the infinite connected component of \(\mathcal {V}\) unique? Note that this is not a direct consequence of the results in [2] since the set \(\mathcal {L}\) fails to satisfy the so-called finite energy property. This can be seen from the fact that \(\mathcal {L}\) has no bounded components.

  2.

    It is not even clear that the number of infinite connected components of \(\mathcal {V}\) belongs almost surely to \(\{0,1,\infty \}\), which would be natural to expect from the ergodicity of \(\mathcal {V}\), see Lemma 3.3 in [21]. Note also that the set \(\mathcal {V}\) (or a discretized version of it) is not ‘insertion tolerant’ in the sense of Definition 3.2 in [10], so that Corollary 3.8 in [10] cannot be directly applied in this situation.

  3.

    Assuming the uniqueness of the infinite cluster of \(\mathcal {V}\), one could be interested in the decay of the probability that \(x\) and \(y\) in \(\mathbb {R}^d\) are connected through a bounded component of \(\mathcal {V}\) as the distance between \(x\) and \(y\) diverges. Similar questions have been answered in a rather satisfactory way for the case of Bernoulli percolation, see for instance Theorem 8.18 of [4].