Abstract
The path \(W[0,t]\) of a Brownian motion on a \(d\)-dimensional torus \(\mathbb T ^d\) run for time \(t\) is a random compact subset of \(\mathbb T ^d\). We study the geometric properties of the complement \(\mathbb T ^d{{\setminus }} W[0,t]\) as \(t\rightarrow \infty \) for \(d\ge 3\). In particular, we show that the largest regions in \(\mathbb T ^d{{\setminus }} W[0,t]\) have a linear scale \(\varphi _d(t)=[(d\log t)/(d-2)\kappa _d t]^{1/(d-2)}\), where \(\kappa _d\) is the capacity of the unit ball. More specifically, we identify the sets \(E\) for which \(\mathbb T ^d{{\setminus }} W[0,t]\) contains a translate of \(\varphi _d(t)E\), and we count the number of disjoint such translates. Furthermore, we derive large deviation principles for the largest inradius of \(\mathbb T ^d{{\setminus }} W[0,t]\) as \(t\rightarrow \infty \) and the \(\varepsilon \)-cover time of \(\mathbb T ^d\) as \(\varepsilon \downarrow 0\). Our results, which generalise laws of large numbers proved by Dembo et al. (Electron J Probab 8(15):1–14, 2003), are based on a large deviation estimate for the shape of the component with largest capacity in \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\), where \(W_{\rho (t)}[0,t]\) is the Wiener sausage of radius \(\rho (t)\), with \(\rho (t)\) chosen much smaller than \(\varphi _d(t)\) but not too small. The idea behind this choice is that \(\mathbb T ^d {{\setminus }} W[0,t]\) consists of “lakes”, whose linear size is of order \(\varphi _d(t)\), connected by narrow “channels”. We also derive large deviation principles for the principal Dirichlet eigenvalue and for the maximal volume of the components of \(\mathbb T ^d {{\setminus }} W_{\rho (t)}[0,t]\) as \(t\rightarrow \infty \). Our results give a complete picture of the extremal geometry of \(\mathbb T ^d{{\setminus }} W[0,t]\) and of the optimal strategy for \(W[0,t]\) to realise extreme events.
1 Introduction
1.1 Five key questions
\(\bullet \) Our basic object of study is the complement of a random path:
Question 1
Run a Brownian motion \(W=(W(t))_{t \ge 0}\) on a \(d\)-dimensional torus \(\mathbb T ^d,d\ge 3\). What is the geometry of the random set \(\mathbb T ^d {{\setminus }} W[0,t]\) for large \(t\)?
Figure 1 shows a simulation in \(d=2\).
Fig. 1 Simulation of \(W[0,t]\) (shown in black) for \(t=15\) in \(d=2\). The holes in \(\mathbb T ^2{{\setminus }} W[0,t]\) (shown in white) have an irregular shape. The goal is to understand the geometry of the largest holes. The present paper only deals with \(d \ge 3\). In Sect. 1.6.8 below we will reflect on what happens in \(d=2\).
Regions with a random boundary have been studied intensively in the literature, and questions such as Question 1 have been approached from a variety of perspectives. Sznitman [21] studies the principal Dirichlet eigenvalue when a Poisson cloud of obstacles is removed from Euclidean space \(\mathbb{R }^d,d\ge 1\). Van den Berg et al. [5] consider the large deviation properties of the volume of a Wiener sausage on \(\mathbb{R }^d,d \ge 2\), and identify the geometric strategies for achieving these large deviations. Probabilistic techniques also play a role in the analysis of deterministic shapes, such as strong circularity in rotor-router and sandpile models shown by Levine and Peres [13], and heat flow in the von Koch snowflake and its relatives analysed by van den Berg and den Hollander [7], van den Berg [3], and van den Berg and Bolthausen [4]. The discrete analogue to Question 1, random walk on a large discrete torus, is connected to the random interlacements model of Sznitman [22] (to which we will return in Sect. 1.6.3 below).
Question 1 is studied by Dembo et al. [9] for \(d\ge 3\) and Dembo et al. [10] for \(d=2\). In both cases, a law of large numbers is established for the \(\varepsilon \)-cover time (the time for the Brownian motion to come within distance \(\varepsilon \) of every point) as \(\varepsilon \downarrow 0\). For \(d\ge 3\), Dembo, Peres and Rosen also obtain the multifractal spectrum of late points (those points that are approached within distance \(\varepsilon \) on a time scale that is a positive fraction of the \(\varepsilon \)-cover time). In the present paper we will consider a large but fixed time \(t\), and we will use a key lemma from [9] to obtain global information about \(\mathbb T ^d {{\setminus }} W[0,t]\). Throughout the paper we fix the dimension \(d \ge 3\). The behaviour in \(d=2\) is expected to be quite different (see the discussion in Sect. 1.6.8 below).
A random set is an infinite-dimensional object, hence issues of measurability require care. In general, the basic events are whether a random closed set intersects a given closed set, or whether a random open set contains a given closed set (see Matheron [14] or Molchanov [16] for a general theory of random sets and questions related to their geometry). On the torus we will parametrize these basic events as
(see (1.6) below), where \(\varphi >0\) acts as a scaling factor. The set \(E\) in (1.1) plays a role similar to that of a test function, and we will restrict our attention to suitably regular sets \(E\), for instance, compact sets with non-empty interior.
\(\bullet \) In giving an answer to Question 1, we must distinguish between global properties, such as the size of the largest inradius or the principal Dirichlet eigenvalue of the random set, and local properties, such as whether or not the random set is locally connected. In the present paper we focus on the global properties of \(\mathbb T ^d {{\setminus }} W[0,t]\). We will therefore be interested in the existence of subsets of \(\mathbb T ^d{{\setminus }} W[0,t]\) of a given form:
Question 2
For a given compact set \(E\subset \mathbb{R }^d\), what is the probability of the event
formed as the uncountable union of events from (1.1)?
For instance, questions about the inradius can be formulated in terms of Question 2 by setting \(E\) to be a ball.
The answer to Question 2 depends on the scaling factor \(\varphi \). To obtain a non-trivial result we are led to choose \(\varphi =\varphi _d(t)\) depending on time, where
\[
\varphi _d(t)=\left[ \frac{d\log t}{(d-2)\,\kappa _d\,t}\right] ^{1/(d-2)} \qquad (1.3)
\]
and \(\kappa _d\) is the constant
We will see that \(\varphi _d(t)\) represents the linear size of the largest subsets of \(\mathbb T ^d{{\setminus }} W[0,t]\), in the sense that the limiting probability of the event in (1.2) decreases from \(1\) to \(0\) as the set \(E\) increases from small to large, in the sense of small or large capacity (see Sect. 1.3.3 below).
In what follows we will see that \(\mathbb T ^d{{\setminus }} W[0,t]\) is controlled by two spatial scales:
The linear size of the typical holes in \(\mathbb T ^d{{\setminus }} W[0,t]\) is of order \(\varphi _\mathrm{local}(t)\), and the linear size of the largest holes is of order \(\varphi _\mathrm{global}(t)\). The choice (1.3) of \(\varphi _d(t)\) is a fine tuning of the latter.
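For concreteness, the separation between the two scales can be checked numerically. The following sketch is purely illustrative: it assumes the standard explicit value \(\kappa _d = 2\pi ^{d/2}/\Gamma (d/2-1)\) for the capacity of the unit ball (consistent with the Green-function convention of Sect. 1.3.3, but not stated in this section), takes \(\varphi _\mathrm{local}(t)=t^{-1/(d-2)}\) as in Sect. 1.6.3, and uses the formula for \(\varphi _d(t)\) given in the abstract.

```python
import math

def kappa_d(d):
    # Assumed standard value of Cap B(0,1) under the convention
    # G(x,y) = 1 / (kappa_d d(x,y)^{d-2}); for d = 3 this gives 2*pi.
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2 - 1)

def phi_d(d, t):
    # phi_d(t) = [ d log t / ((d-2) kappa_d t) ]^{1/(d-2)}: the global scale.
    return ((d * math.log(t)) / ((d - 2) * kappa_d(d) * t)) ** (1 / (d - 2))

def phi_local(d, t):
    # phi_local(t) = t^{-1/(d-2)}: the scale of the typical holes.
    return t ** (-1 / (d - 2))

d = 3
for t in [1e2, 1e4, 1e6]:
    lo, gl = phi_local(d, t), phi_d(d, t)
    print(f"t={t:.0e}  phi_local={lo:.4e}  phi_d={gl:.4e}  ratio={gl / lo:.2f}")
```

Both scales tend to zero, while their ratio \(\varphi _d(t)/\varphi _\mathrm{local}(t)\) grows like a power of \(\log t\), so the largest holes are asymptotically much larger than the typical ones.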
\(\bullet \) For a typical point \(x\in \mathbb T ^d\), the event \(\left\{ x+\varphi _d(t)E\subset \mathbb T ^d{{\setminus }} W[0,t]\right\} \) in (1.1) is unlikely to occur even when \(E\) is small. However, given a compact set \(E\subset \mathbb{R }^d\), the points \(x\in \mathbb T ^d\) for which \(x+\varphi _d(t)E\subset \mathbb T ^d{{\setminus }} W[0,t]\) (i.e., the points that realize the event in (1.2)) are atypical, and we can ask whether the subset \(x+\varphi _d(t)E\) is likely to form part of a considerably larger subset:
Question 3
Are the points \(x\in \mathbb T ^d\) for which \(x+\varphi _d(t)E\subset \mathbb T ^d{{\setminus }} W[0,t]\) likely to satisfy \(x+\varphi _d(t)E'\subset \mathbb T ^d{{\setminus }} W[0,t]\) for some substantially larger set \(E'\supset E?\)
Question 3 aims to distinguish between the two qualitative pictures shown in Fig. 2, which we call sparse and dense, respectively. We will show that in \(d \ge 3\) the answer to Question 3 is no, i.e., the picture is dense as in Fig. 2b. In Sect. 1.6.8 below we will argue that in \(d=2\) the answer to Question 3 is yes, i.e., the picture is sparse as in Fig. 2a. The latter can already be guessed from Fig. 1.
In a similar spirit, we can ask about temporal versus spatial avoidance strategies:
Question 4
For a given \(x\in \mathbb T ^d\), does the unlikely event \(\left\{ x+\varphi _d(t)E\subset \mathbb T ^d{{\setminus }} W[0,t]\right\} \) arise primarily because the Brownian motion spends an unusually small amount of time near \(x\), or because the Brownian motion spends a typical amount of time near \(x\) and simply happens to avoid the set \(x+\varphi _d(t) E?\)
Questions 3 and 4, though not equivalent, are interrelated: if the Brownian motion spends an unusually small amount of time near \(x\), then it may be plausibly expected to fill the vicinity of \(x\) less densely, and vice versa. We will show that in \(d \ge 3\) the Brownian motion follows a spatial avoidance strategy, i.e., the second alternative in Question 4 applies, and that, indeed, the Brownian motion is very likely to spend approximately the same amount of time around all points of \(\mathbb T ^d\). In Sect. 1.6.8 below we will argue that in \(d=2\) the first alternative in Question 4 applies.
\(\bullet \) The negative answer to Question 3 and the heuristic picture in Fig. 2b suggest that regions of \(\mathbb T ^d\) where \(W[0,t]\) is relatively dense nearly separate the large subsets \(x+\varphi _d(t)E\subset \mathbb T ^d{{\setminus }} W[0,t]\) into disjoint components. Making sense of this heuristic is complicated by the fact that \(\mathbb T ^d{{\setminus }} W[0,t]\) is connected almost surely (see Proposition 1.11 below), so that all large subsets belong to the same connected component in \(\mathbb T ^d{{\setminus }} W[0,t]\).
Question 5
Can the approximate component structure of the large subsets of \(\mathbb T ^d{{\setminus }} W[0,t]\) be captured in a well-defined way?
We will provide a positive answer to Question 5 by enlarging the Brownian path \(W[0,t]\) to a Wiener sausage \(W_{\rho (t)}[0,t]\) of radius \(\rho (t)=o(\varphi _d(t))\). Under suitable hypotheses on the enlargement radius \(\rho (t)\) (see (1.17) below) we are able to control certain properties of all the connected components of \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\) simultaneously: for instance, we compute the asymptotics of their maximum possible volume and capacity and minimal possible Dirichlet eigenvalue. The well-definedness of the approximate component structure lies in the fact that (subject to the hypothesis in (1.17) below) these properties do not depend on the precise choice of \(\rho (t)\).
The existence of a connected component of \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\) having a given property, for instance, having at least a specified volume, involves an uncountable union of the events in (1.2) as \(E\) runs over a suitable class of connected sets. Central to our arguments is a discretization procedure that reduces such an uncountable union to a suitably controlled finite union (see Sect. 3 below).
1.2 Outline
Our main results concern the extremal geometry of the set \(\mathbb T ^d{{\setminus }} W[0,t]\) as \(t\rightarrow \infty \). Our key theorem is a large deviation estimate for the shape of the component with largest capacity in \(\mathbb T ^d {{\setminus }} W_{\rho (t)}[0,t]\) as \(t\rightarrow \infty \), where \(W_{\rho (t)}[0,t]\) is the Wiener sausage of radius \(\rho (t)\). From this we derive large deviation principles for the maximal volume and the principal Dirichlet eigenvalue of the components of \(\mathbb T ^d {{\setminus }} W_{\rho (t)}[0,t]\) as \(t\rightarrow \infty \), and identify the number of disjoint translates of \(\varphi _d(t)E\) in \(\mathbb T ^d {{\setminus }} W[0,t]\) as \(t\rightarrow \infty \) for suitable sets \(E\). We further derive large deviation principles for the largest inradius as \(t\rightarrow \infty \) and the \(\varepsilon \)-cover time as \(\varepsilon \downarrow 0\), extending laws of large numbers that were derived in Dembo et al. [9]. Along the way we settle the five questions raised in Sect. 1.1.
It turns out that the costs of the various large deviations are asymmetric: polynomial in one direction and stretched exponential in the other direction. Our main results are linked by the heuristic that sets of the form \(x+\varphi _d(t)E\) appear according to a Poisson point process with total intensity \(t^{J_d({{\mathrm{Cap}}}E)+o(1)}\), where \(J_d\) is given by (1.16) below (see Fig. 3 below).
The remainder of the paper is organised as follows. In Sect. 1.3 we give definitions and introduce notations. In Sects. 1.4 and 1.5 we state our main results: four theorems, five corollaries and two propositions. In Sect. 1.6 we discuss these results, state some conjectures, make the link with random interlacements, and reflect on what happens in \(d=2\). Section 2 contains various estimates on hitting times, hitting numbers and hitting probabilities for Brownian excursions between the boundaries of concentric balls, which serve as key ingredients in the proofs of the main results. Section 3 looks at non-intersection probabilities for lattice animals, which serve as discrete approximations to continuum sets. The proofs of the main results are given in Sects. 4–5. The Appendix in Sect. 6 contains the proofs of two lemmas that are used along the way.
1.3 Definitions and notations
1.3.1 Torus
The \(d\)-dimensional unit torus \(\mathbb T ^d\) is the quotient space \(\mathbb{R }^d/\mathbb{Z }^d\), with the canonical projection map \(\pi _0:\,\mathbb{R }^d\rightarrow \mathbb T ^d\). We consider \(\mathbb T ^d\) as a Riemannian manifold in such a way that \(\pi _0\) is a local isometry. The space \(\mathbb{R }^d\) acts on \(\mathbb T ^d\) by translation: given \(x=\pi _0(y_0)\in \mathbb T ^d, y_0,y\in \mathbb{R }^d\), we define \(x+y=\pi _0(y_0+y)\in \mathbb T ^d\). (Having made this definition, we will no longer need to refer to the projection map \(\pi _0\), nor to the particular representation of the torus \(\mathbb T ^d\).) Given a set \(E\subset \mathbb{R }^d\), a scale factor \(\varphi >0\), and a point \(x\in \mathbb T ^d\) or \(x\in \mathbb{R }^d\), we can now define
Euclidean distance in \(\mathbb{R }^d\) and the induced distance in \(\mathbb T ^d\) are both denoted by \(d(\cdot ,\cdot )\). The distance from a point \(x\) to a set \(E\) is \(d(x,E)=\inf \left\{ d(x,y) :\,y\in E\right\} \). The closed ball of radius \(r\) around a point \(x\) is denoted by \(B(x,r)\), for \(x\in \mathbb T ^d\) or \(x\in \mathbb{R }^d\). We will only be concerned with the case \(0<r<\tfrac{1}{2}\), so that \(B(x,r)=x+B(0,r)\) for \(x\in \mathbb T ^d\) and the local isometry from \(B(0,r)\) to \(B(x,r)\) defined by \(y\mapsto x+y\) is one-to-one.
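The translation action and the induced distance are elementary to implement; the following minimal sketch (illustrative only, representing points of \(\mathbb T ^d\) by coordinates in \([0,1)^d\)) makes the wrap-around explicit.

```python
import math

def torus_add(x, y):
    """Translate x in T^d = R^d/Z^d by y in R^d: componentwise addition mod 1."""
    return tuple((xi + yi) % 1.0 for xi, yi in zip(x, y))

def torus_dist(x, y):
    """Induced distance on T^d: minimise Euclidean distance over integer shifts.
    Per coordinate it suffices to compare the direct gap with its complement."""
    sq = 0.0
    for xi, yi in zip(x, y):
        diff = abs(xi - yi) % 1.0
        sq += min(diff, 1.0 - diff) ** 2
    return math.sqrt(sq)

# Points near opposite "edges" of a fundamental domain are close on T^2.
x, y = (0.05, 0.5), (0.95, 0.5)
print(torus_dist(x, y))  # 0.1, not 0.9
```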
1.3.2 Brownian motion and Wiener sausage
We write \(\mathbb{P }_{x_0}\) for the law of the Brownian motion \(W=(W(t))_{t\ge 0}\) on \(\mathbb T ^d\) started at \(x_0\in \mathbb T ^d\), i.e., the Markov process with generator \(\tfrac{1}{2}\varDelta _\mathbb{T ^d}\), where \(\varDelta _\mathbb{T ^d}\) is the Laplace operator for \(\mathbb T ^d\). We can always take \(W(t)=x_0 +\tilde{W}(t)\), where \(\tilde{W}=(\tilde{W}(t))_{t \ge 0}\) is the standard Brownian motion on \(\mathbb{R }^d\) started at \(0\), so that \(W\) is the projection onto \(\mathbb T ^d\) (via \(\pi _0\)) of a Brownian motion in \(\mathbb{R }^d\). When \(x_0\in \mathbb{R }^d\) we will also use \(\mathbb{P }_{x_0}\) for the law of the Brownian motion on \(\mathbb{R }^d\). When the initial point \(x_0\) is irrelevant we will write \(\mathbb{P }\) instead of \(\mathbb{P }_{x_0}\). The image of the Brownian motion over the time interval \([a,b]\) is denoted by \(W[a,b]=\left\{ W(s):\, a\le s\le b\right\} \).
For \(r>0\) and \(E\subset \mathbb{R }^d\) or \(E\subset \mathbb T ^d\), we write \(E_r=\cup _{x\in E} B(x,r)\) and \(E_{-r}=[\cup _{x\in E^c} B(x,r)]^c\). The Wiener sausage of radius \(r\) run for time \(t\) is the \(r\)-enlargement of \(W[0,t]\), i.e., \(W_r[0,t]=\cup _{s \in [0,t]} B(W(s),r)\).
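A crude Monte Carlo sketch of the Wiener sausage on the torus may help fix ideas. All numerical choices below (grid resolution, time step, radius, seed) are arbitrary illustrations, not quantities from the paper: the path is simulated by accumulating Gaussian increments mod 1, and a grid cell counts as covered when its centre lies within \(r\) of some path point.

```python
import math
import random

def sausage_cover_fraction(t=0.2, r=0.1, n=20, dt=1e-3, seed=1, d=3):
    """Monte Carlo sketch of the Wiener sausage W_r[0,t] on the unit torus T^d:
    returns the fraction of the n^d grid cells whose centre lies within torus
    distance r of the simulated Brownian path."""
    rng = random.Random(seed)
    x = [0.5] * d
    covered = set()
    k = max(1, int(r * n) + 1)       # cell radius to scan around each path point
    for _ in range(int(t / dt) + 1):
        base = [int(xi * n) for xi in x]
        for offset in _offsets(d, k):
            cell = tuple((b + o) % n for b, o in zip(base, offset))
            centre = [(c + 0.5) / n for c in cell]
            if _torus_dist(x, centre) <= r:
                covered.add(cell)
        sigma = math.sqrt(dt)        # Brownian increment over one time step
        x = [(xi + rng.gauss(0.0, sigma)) % 1.0 for xi in x]
    return len(covered) / n ** d

def _offsets(d, k):
    # All integer offset vectors in {-k,...,k}^d.
    if d == 1:
        for i in range(-k, k + 1):
            yield (i,)
    else:
        for i in range(-k, k + 1):
            for rest in _offsets(d - 1, k):
                yield (i,) + rest

def _torus_dist(x, y):
    s = 0.0
    for xi, yi in zip(x, y):
        diff = abs(xi - yi) % 1.0
        s += min(diff, 1.0 - diff) ** 2
    return math.sqrt(s)

frac = sausage_cover_fraction()
print(f"volume fraction covered by W_r[0,t]: {frac:.3f}")
```

Since the same seed reproduces the same increments, running the simulation for a shorter time covers a subset of the cells, mirroring the monotonicity of \(t\mapsto W_r[0,t]\).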
1.3.3 Capacity
The (Newtonian) capacity of a Borel set \(E\subset \mathbb{R }^d\), denoted by \({{\mathrm{Cap}}}E\), can be defined as
where the infimum runs over the set of probability measures \(\mu \) on \(E\), and
is the Green function associated with Brownian motion on \(\mathbb{R }^d\) (throughout the paper we restrict to \(d \ge 3\)). In terms of the constant \(\kappa _d\) from (1.4), we can write \(G(x,y)=1/\kappa _d\,d(x,y)^{d-2}\), and it emerges that \(\kappa _d={{\mathrm{Cap}}}B(0,1)\) is the capacity of the unit ball.Footnote 1
The function \(E\mapsto {{\mathrm{Cap}}}E\) is non-decreasing in \(E\) and satisfies the scaling relation
and the union bound
Capacity has an interpretation in terms of Brownian hitting probabilities:
Thus, capacity measures how likely it is for a set to be hit by a Brownian motion that starts far away. We will make extensive use of asymptotic properties similar to (1.11).
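For balls the relation behind (1.11) is exact rather than asymptotic: a Brownian motion started at distance \(R>r\) from the origin hits \(B(0,r)\) with probability \((r/R)^{d-2}\), which equals \({{\mathrm{Cap}}}B(0,r)\cdot G(x,0)\) by the scaling relation (1.9) and the form of the Green function. A quick numerical sanity check (illustrative only; the value plugged in for \(\kappa _d\) cancels in the product):

```python
import math

def cap_ball(d, r, kappa):
    # Scaling relation (1.9): Cap B(0,r) = r^{d-2} Cap B(0,1) = kappa * r^{d-2}.
    return kappa * r ** (d - 2)

def green(d, R, kappa):
    # G(x,y) = 1 / (kappa * d(x,y)^{d-2}) for d >= 3.
    return 1.0 / (kappa * R ** (d - 2))

def hit_prob_ball(d, r, R):
    # Classical formula: P_x(W hits B(0,r)) = (r/R)^{d-2} for |x| = R > r.
    return (r / R) ** (d - 2)

kappa = 2 * math.pi  # placeholder value: it cancels in Cap * G below
for d in [3, 4, 5]:
    for (r, R) in [(1.0, 2.0), (0.5, 10.0)]:
        lhs = hit_prob_ball(d, r, R)
        rhs = cap_ball(d, r, kappa) * green(d, R, kappa)
        assert abs(lhs - rhs) < 1e-12
print("Cap B(0,r) * G(x,0) reproduces the exact hitting probability for balls")
```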
If a set \(E\) is polar—i.e., with probability 1, \(E\) is not hit by a Brownian motion started away from \(E\)—then \({{\mathrm{Cap}}}E=0\). For instance, any finite or countable union of \((d-2)\)-dimensional subspaces has capacity zero.
1.3.4 Sets
The boundary of a set \(E\) is denoted by \(\partial E\), the interior by \(\mathrm{int}\left( E\right) \), and the closure by \(\mathrm{clo}\left( E\right) \). We define
We will use these sets to describe the possible components of \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\). We further define
The condition \({{\mathrm{Cap}}}(\mathrm{int}\left( E\right) )={{\mathrm{Cap}}}(\mathrm{clo}\left( E\right) )\) in the definition of \({\fancyscript{E}}^*\) is satisfied when every point of \(\partial E\) is a regular point for \(\mathrm{int}\left( E\right) \), which in turn is satisfied when \(E\) satisfies a cone condition at every point (see Port and Stone [19, Chapter 2, Proposition 3.3]). In particular, any finite union of cubes, or any \(r\)-enlargement \(E_r\) with \(r>0\) of a compact set \(E\), belongs to \({\fancyscript{E}}^*\).
1.3.5 Maximal capacity of a component
A central role will be played by the largest capacity \(\kappa ^*(t,\rho )\) for a component of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\), defined by
Note that by rescaling we have
1.4 Component structure
We begin by describing the component structure of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\). In formulating the results below we will use the abbreviation (see Fig. 3a)
Our first theorem quantifies the likelihood of finding sets of large capacity that do not intersect \(W_\rho [0,t]\), for \(\rho \) in a certain window between the local and the global spatial scales defined in (1.5).
Theorem 1.1
Fix a positive function \(t \mapsto \rho (t)\) satisfying
Then the family \(\mathbb{P }(\kappa ^*(t,\rho )/\varphi _d(t)^{d-2}\in \cdot )\), \(t>1\), satisfies the LDP on \([0,\infty ]\) with rate \(\log t\) and rate function (see Fig. 3b)
with the convention that \(I_d(\infty )=\infty \).
The counterpart of Theorem 1.1 for small capacities is contained in the following two theorems, which show that components of small capacity are likely to exist and to be numerous. Let \(\chi _\rho (t,\kappa )\) denote the number of components \(C\) of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\) such that \(C\) contains some ball of radius \(\rho \) and has the form \(C=x+\varphi _d(t)E\) for a connected open set \(E\) with \({{\mathrm{Cap}}}E\ge \kappa \).
Theorem 1.2
Fix a positive function \(t\mapsto \rho (t)\) satisfying (1.17), and let \(\kappa <\kappa _d\). Then
Theorem 1.3
Fix a non-negative function \(t\mapsto \rho (t)\) satisfying \(\rho (t)=o(\varphi _d(t))\), and let \(E\subset \mathbb{R }^d\) be compact with \({{\mathrm{Cap}}}E <\kappa _d\). Then
The next theorem identifies the shape of the components of \(\mathbb T ^d {{\setminus }} W_{\rho (t)}[0,t]\). For \(E\subset E'\) a pair of nested compact connected subsets of \(\mathbb{R }^d\), we say that a component \(C\) of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\) satisfies condition \(({\fancyscript{C}}(t,\rho ,E,E'))\) when
Define \(\chi _\rho (t,E,E')\) to be the number of components of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\) satisfying condition \(({\fancyscript{C}}(t,\rho ,E,E'))\), and define \(F_\rho (t,E,E')\) to be the event
In words, \(F_\rho (t,E,E')\) is the event that \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\) contains a component sandwiched between \(x+\varphi _d(t) E\) and \(x+\varphi _d(t)E'\), and any other component has smaller capacity (when viewed as a subset of \(\mathbb{R }^d\)).
Theorem 1.4
Fix a positive function \(t\mapsto \rho (t)\) satisfying (1.17), let \(E\in {\fancyscript{E}}_c\), and let \(\delta >0\). If \({{\mathrm{Cap}}}E\ge \kappa _d\), then
while if \({{\mathrm{Cap}}}E<\kappa _d\), then
Theorems 1.1–1.4 yield the following corollary. For \(E\subset \mathbb{R }^d\), let \(\chi (t,E)\) denote the maximal number of disjoint translates \(x+\varphi _d(t)E\) in \(\mathbb T ^d{{\setminus }} W[0,t]\).
Corollary 1.5
Suppose that \(E\in {\fancyscript{E}}^*\). Then
Furthermore,
and if \({{\mathrm{Cap}}}E<\kappa _d\), then
1.5 Geometric structure
Having described the components in terms of their capacities in Sect. 1.4, we are ready to look at the geometric structure of our random set. Our first corollary concerns the maximal volume of a component of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\), which we denote by \(V(t,\rho )\). Volume is taken w.r.t. the \(d\)-dimensional Lebesgue measure, and we write \(V_d={{\mathrm{Vol}}}B(0,1)\) for the volume of the \(d\)-dimensional unit ball.
Corollary 1.6
Subject to (1.17), the family \(\mathbb{P }(V(t,\rho (t))/\varphi _d(t)^d\in \cdot ),t>1\), satisfies the LDP on \((0,\infty )\) with rate \(\log t\) and rate function
Moreover, for \(v<V_d\),
Our second corollary concerns \(\lambda (t,\rho )=\lambda (\mathbb T ^d{{\setminus }} W_\rho [0,t])\), the principal Dirichlet eigenvalue of \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\), where by \(\lambda (E)\) (for \(E\subset \mathbb T ^d\) or \(E\subset \mathbb{R }^d\)) we mean the principal eigenvalue of the operator \(-\tfrac{1}{2}\varDelta _E\) with Dirichlet boundary conditions on \(\partial E\). We write \(\lambda _d=\lambda (B(0,1))\) for the principal Dirichlet eigenvalue of the \(d\)-dimensional unit ball.
Corollary 1.7
Subject to (1.17), the family \(\mathbb{P }(\varphi _d(t)^2 \lambda (t,\rho (t)) \in \cdot ),t>1\), satisfies the LDP on \((0,\infty )\) with rate \(\log t\) and rate function
Moreover, for \(\lambda > \lambda _d\),
Our last two corollaries concern the largest inradius of \(\mathbb T ^d{{\setminus }} W[0,t]\),
and the \(\varepsilon \)-cover time,
For the latter we need the scaling function
Corollary 1.8
The family \(\mathbb{P }(\rho _\mathrm{in}(t)/\varphi _d(t)\in \cdot \,),t>1\), satisfies the LDP on \((0,\infty )\) with rate \(\log t\) and rate function
Moreover, for \(0<r<1\),
Corollary 1.9
The family \(\mathbb{P }({\fancyscript{C}}_\varepsilon /\psi _d(\varepsilon )\in \cdot \,)\), \(0<\varepsilon <1\), satisfies the LDP on \((0,\infty )\) with rate \(\log (1/\varepsilon )\) and rate function
Moreover, for \(0<u<d\),
Corollary 1.9 is equivalent to Corollary 1.8 because of the relation \(\left\{ \rho _\mathrm{in}(t)>\varepsilon \right\} =\left\{ {\fancyscript{C}}_\varepsilon > t\right\} \) and the asymptotics
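This equivalence can be made concrete by inverting \(t\mapsto \varphi _d(t)\) numerically: the time at which the largest inradius drops to \(\varepsilon \) is, to leading order, the time at which \(\varphi _d(t)=\varepsilon \). A sketch (assuming, as an illustration, the standard explicit value \(\kappa _3=2\pi \); the bisection bracket is ad hoc):

```python
import math

def phi_d(d, t, kappa):
    # phi_d(t) = [ d log t / ((d-2) kappa t) ]^{1/(d-2)}, decreasing for large t.
    return ((d * math.log(t)) / ((d - 2) * kappa * t)) ** (1 / (d - 2))

def time_for_radius(d, eps, kappa, lo=2.0, hi=1e30):
    """Solve phi_d(t) = eps for t by bisection on a log scale.
    The bracket [lo, hi] is an ad hoc choice that contains the crossing."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if phi_d(d, mid, kappa) > eps:
            lo = mid
        else:
            hi = mid
    return mid

kappa3 = 2 * math.pi ** 1.5 / math.gamma(0.5)  # assumed standard value, = 2*pi
for eps in [1e-2, 1e-3]:
    t = time_for_radius(3, eps, kappa3)
    print(f"eps={eps:.0e}: phi_3(t) = eps at t ~ {t:.3e}")
```

The solution grows like \(\varepsilon ^{-(d-2)}\log (1/\varepsilon )\), which is the time scale on which the \(\varepsilon \)-cover time lives.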
1.6 Discussion
1.6.1 Upward versus downward deviations and the role of \(J_d(\kappa )\)
Theorem 1.1 says that the region with largest capacity not intersecting the Wiener sausage of radius \(\rho (t)\) lives on scale \(\varphi _d(t)\), and that upward large deviations on this scale have a cost that decays polynomially in \(t\). Theorem 1.2 identifies how many components there are with small capacity. This number grows polynomially in \(t\). Theorem 1.3 says that this number is extremely unlikely to be zero: the cost is stretched exponential in \(t\). Theorem 1.4 completes the picture obtained from Theorems 1.1–1.3 by showing that components can approximate any shape in \({\fancyscript{E}}_c\).
Theorems 1.1–1.4 are linked by the heuristic that components of the form \(x+\varphi _d(t)E\) appear according to a Poisson point process with total intensity \(t^{J_d({{\mathrm{Cap}}}E)+o(1)}\). When \({{\mathrm{Cap}}}E>\kappa _d\) we have \(J_d({{\mathrm{Cap}}}E)<0\), and the likelihood of even a single such component is \(t^{{J_d({{\mathrm{Cap}}}E)} +o(1)}\), as in Corollary 1.5. When \({{\mathrm{Cap}}}E<\kappa _d\) we have \(J_d({{\mathrm{Cap}}}E)>0\), and a Poisson random variable \(X\) of mean \(t^{J_d({{\mathrm{Cap}}}E)+o(1)}\) satisfies \(X=t^{J_d({{\mathrm{Cap}}}E)+o(1)}\) with high probability and \(\mathbb{P }(X=0)= \exp [ -t^{J_d({{\mathrm{Cap}}}E)+o(1)}]\). Based on this heuristic, we conjecture that the inequalities in (1.28), (1.30), (1.35) and (1.37) are all equalities asymptotically.
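The dichotomy in this heuristic is elementary Poisson arithmetic: for \(X\) Poisson with mean \(m\) one has \(\mathbb{P }(X=0)=e^{-m}\), while Chebyshev's inequality gives concentration of \(X\) around \(m\). A sketch with hypothetical intensities \(m=t^{J}\) (the values of \(t\) and \(J\) are arbitrary illustrations):

```python
import math

def p_at_least_one(m):
    # X ~ Poisson(m): P(X >= 1) = 1 - exp(-m) <= m.
    return 1.0 - math.exp(-m)

def log10_p_zero(m):
    # X ~ Poisson(m): log10 P(X = 0) = -m / log 10.
    return -m / math.log(10)

def chebyshev_dev(m, beta=0.75):
    # P(|X - m| >= m^beta) <= Var(X) / m^{2 beta} = m^{1 - 2 beta}.
    return m ** (1.0 - 2.0 * beta)

t = 1e6
m_small = t ** (-0.5)  # hypothetical intensity t^J with J = -1/2 < 0
m_large = t ** (+0.5)  # hypothetical intensity t^J with J = +1/2 > 0
print(f"J<0: P(X>=1) <= {p_at_least_one(m_small):.2e}  (polynomial in t)")
print(f"J>0: log10 P(X=0) = {log10_p_zero(m_large):.0f}  (stretched exponential)")
print(f"J>0: P(|X-m| >= m^0.75) <= {chebyshev_dev(m_large):.3f}")
```

This reproduces the asymmetry noted above: for \(J<0\) the cost of seeing a component is polynomial in \(t\), while for \(J>0\) the cost of seeing none is stretched exponential, and the count concentrates around its mean.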
1.6.2 Components and the role of \(\rho (t)\)
Theorems 1.1–1.4 concern components of the form \(x+\varphi _d(t)E\). We begin by remarking that, with high probability, all components have this form:
Proposition 1.10
Assume (1.17). Let \(\mathrm{Wrap}(t,\rho )\) be the event that \(\mathbb T ^d{{\setminus }} W_\rho [0,t]\) has a component \(C\) that, when considered as a Riemannian manifold with its intrinsic metric, is not the isometric image \(x+E\) of some bounded subset \(E\) of \(\mathbb{R }^d\). Then
Informally, such a component must “wrap around” the torus, so that the local isometry from \(\mathbb{R }^d\) to \(\mathbb T ^d\) is not a global isometry. Proposition 1.10 means that, apart from a negligible event, we may sensibly consider the components as subsets of \(\mathbb{R }^d\) and discuss their capacities as defined in (1.7).
Collectively, Theorems 1.1–1.4, Corollaries 1.6–1.7 and Proposition 1.10 show that \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\) has a component structure, with well-defined bounds on the capacities, volumes and principal Dirichlet eigenvalues of these components. By contrast, the choice \(\rho (t)=0\) does not give a component structure at all:
Proposition 1.11
With probability \(1\), the set \(\mathbb T ^d{{\setminus }} W[0,t]\) is path-connected, open and dense for every \(t\), and the set \(\mathbb T ^d{{\setminus }} W\left[ 0,\infty \right) \) is path-connected, locally path-connected and dense.
The picture behind Propositions 1.10–1.11 is that the set \(\mathbb T ^d {{\setminus }} W[0,t]\) consists of “lakes” whose linear size is of order \(\varphi _d(t)\), connected by narrow “channels” whose linear size is at most \(\varphi _d(t)/(\log t)^{1/d}\). By inflating the Brownian motion to a Wiener sausage of radius \(\rho (t)\) with (recall (1.5) and (1.17))
we effectively block off these channels, so that \(\mathbb T ^d{{\setminus }} W_{\rho (t)}[0,t]\) consists of disjoint lakes.
Proposition 1.11 shows that some lower bound on \(\rho (t)\) is necessary for the results of Theorems 1.1–1.4, Corollaries 1.6–1.7 and Proposition 1.10 to hold. It would be of interest to know whether the condition \(\rho (t)\gg \varphi _d(t)/(\log t)^{1/d}\) can be relaxed, i.e., whether the true size of the channels is of smaller order than \(\varphi _d(t)/(\log t)^{1/d}\). By analogy with the random interlacements model (see Sect. 1.6.3 below), the relevant regime to study would be \(\varphi _\mathrm{local}(t) \asymp \varphi _d(t)/(\log t)^{1/(d-2)} \ll \rho (t) \ll \varphi _d(t)/(\log t)^{1/d}\), i.e., the missing part of (1.40).
1.6.3 A comparison with random interlacements
The discrete analogue of \(\mathbb T ^d{{\setminus }} W[0,t]\) is the complement \(\mathbb T _N^d{{\setminus }} S[0,n]\) of the path of a random walk \(S=(S(n))_{n\in \mathbb{N }_0}\) on a large discrete torus \(\mathbb T _N^d=(\mathbb{Z }/N\mathbb{Z })^d\). The spatial scale being fixed by discretization, it is necessary to take \(N\rightarrow \infty \) and \(n\rightarrow \infty \) simultaneously, and the choice \(n=uN^d\) for \(u\in (0,\infty )\) has been extensively studied: see for instance Benjamini and Sznitman [2], Sznitman [22] and Sidoravicius and Sznitman [20]. Teixeira and Windisch [24] prove that \(S[0,uN^d]\), seen locally from a typical point, converges in law as \(N\rightarrow \infty \), namely,
where \(X_N\) is drawn uniformly from \(\mathbb T _N^d\), and \({{\mathrm{Cap}}}_{\mathbb{Z }^d} E\) is the discrete capacity. The right-hand side of (1.41) is the non-intersection probability
for the random interlacements model with parameter \(u\) introduced by Sznitman [22]. The set \({\fancyscript{I}}^u\subset \mathbb{Z }^d\) can be constructed as the union of a certain Poisson point process of random walk paths, with an intensity measure proportional to the parameter \(u\). The random interlacements model has a critical value \(u_*\in (0,\infty )\) such that \(\mathbb{Z }^d{{\setminus }}{\fancyscript{I}}^u\) has an unbounded component a.s. when \(u<u_*\) and has only bounded components a.s. when \(u>u_*\).
The continuous analogue of (1.41) is the probability of the event in (1.1) with the scaling factor \(\varphi =\varphi _\mathrm{local}(t) = t^{-1/(d-2)}\) instead of \(\varphi =\varphi _d(t) \asymp \varphi _\mathrm{global}(t)\). Our methods (see Propositions 2.1 and 2.4 below) yield
for \(X\) drawn uniformly from \(\mathbb T ^d\), which implies that the random set \(\mathbb T ^d{{\setminus }} W[0,t]\), seen locally from a typical point, converges in law (see Molchanov [16, Theorem 6.5] for a discussion of convergence in law for random sets) to a random closed set \({\fancyscript{I}}\) uniquely characterized by its non-intersection probabilities
As with the discrete random interlacements \({\fancyscript{I}}^u\), the limiting random set \({\fancyscript{I}}\) can be constructed from a Poisson point process of Brownian motion paths (see Sznitman [23, Section 2]).
Because of scale invariance, no parameter is needed in (1.43)–(1.44). Indeed, the continuous model corresponds to a rescaled limit of the discrete model when \((N,u)\) is replaced by \((kN,u/k^{d-2})\) and \(k\rightarrow \infty \). In this rescaling the parameter \(u\) tends to zero, and \(\mathbb{Z }^d{{\setminus }}{\fancyscript{I}}^u\) loses its finite component structure, which is in accordance with the connectedness result in Proposition 1.11.
Inflating the Brownian motion to a Wiener sausage can be interpreted as reintroducing a kind of discretization. However, because of (1.17), the spatial scale \(\rho (t)\) of this discretization is much larger than the spatial scale \(\varphi _\mathrm{local}(t) = t^{-1/(d-2)}\) corresponding to (1.43) (cf. Sect. 1.6.2).
In the random interlacements model no sharp bound is currently known for the tail behaviour of the capacity of the component containing the origin. Recently, Popov and Teixeira [18] showed that for \(d \ge 3\) the diameter of the component containing \(0\) in \(\mathbb{Z }^d{{\setminus }}{\fancyscript{I}}^u\) has an exponential tail for \(u\) sufficiently large (with a logarithmic correction in \(d=3\)). In particular, the largest diameter of a component in a box of volume \(N^d,d\ge 4\), can grow at most as \(\log N\), and therefore the largest capacity of a component can grow at most as \((\log N)^{d-2}\).
When this last bound is translated heuristically to our context, the corresponding assertion is that the maximal capacity of a component is at most of order \((\log t)^{d-2}/t\). By Theorem 1.1, this bound is very far from sharp for \(d\ge 4\). It is tempting to conjecture that the capacity of the component containing \(0\) in \(\mathbb{Z }^d{\setminus }{\fancyscript{I}}^u\) also has an exponential tail for \(u\) sufficiently large. The reasonableness of this conjecture is related to whether or not the condition on \(\rho (t)\) in (1.17) can be weakened to \(\rho (t)\ge u\varphi _\mathrm{local}(t)\) for \(u\) sufficiently large. Possibly the scaling behaviour of \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) with \(\rho (t) = u\varphi _\mathrm{local}(t)\) undergoes some sort of percolation transition at a critical value \({\bar{u}}_* \in (0,\infty )\).
1.6.4 Corollaries of the capacity bounds
Corollary 1.5 summarizes for which set \(E\) a subset \(x+\varphi _d(t)E \subset \mathbb T ^d{\setminus } W[0,t]\) can be expected to exist: according to Theorems 1.1–1.4, subsets of large capacity are unlikely to exist, whereas subsets of small capacity are numerous.
Corollaries 1.6–1.7 follow from Theorems 1.1–1.4 with the help of the isoperimetric inequalities
$$\begin{aligned} {{\mathrm{Cap}}}E \ge \kappa _d \left( \frac{{{\mathrm{Vol}}}E}{V_d}\right) ^{(d-2)/d}, \qquad \lambda (E) \ge \lambda _d \left( \frac{V_d}{{{\mathrm{Vol}}}E}\right) ^{2/d}, \end{aligned}$$
(1.45)
where we recall that \(\kappa _d,V_d,\lambda _d\) are the capacity, volume and principal Dirichlet eigenvalue of \(B(0,1)\). The first inequality is the Poincaré–Faber–Szegö theorem, which says that among all sets with a given volume the ball has the smallest capacity. The second inequality is the Faber–Krahn theorem, which says that among all sets of a given volume the ball has the smallest Dirichlet eigenvalue. Comparing with Theorem 1.1, we see that the most efficient way to produce a component of a given large volume (or small principal Dirichlet eigenvalue) is for that component to be a ball.
Equality holds throughout (1.45) when \(E\) is a ball, and the lower bounds in Corollaries 1.6–1.7, together with Corollaries 1.8–1.9, follow by specializing Theorems 1.1–1.4 to that case.
The large deviation principles in Theorem 1.1 and Corollaries 1.6–1.9 each imply a weak law of large numbers, e.g. \(\lim _{t\rightarrow \infty } \kappa ^*(t,\rho (t))/\varphi _d(t)^{d-2}=1\) in \(\mathbb{P }\)-probability. The weak laws of large numbers implied by Corollaries 1.8–1.9 were proved in Dembo et al. [9] in the stronger form \(\lim _{t\rightarrow \infty } \rho _\mathrm{in}(t)/\varphi _d(t)=1\) and \(\lim _{t\rightarrow \infty } {\fancyscript{C}}_\varepsilon /\psi _d(\varepsilon )=d\) \(\mathbb{P }\)-a.s. The \(L^1\)-version of this convergence is proved in van den Berg et al. [6]. Note that these forms of convergence are not equivalent: for instance, a.s. convergence does not follow from Corollaries 1.8–1.9, since the sum \(\sum _{t\in \mathbb{N }} \exp [-I_d(\kappa )\log t]\) fails to converge when \(I_d(\kappa )\) is small.
1.6.5 The maximal diameter of a component
There is no analogue of Corollary 1.6 for the maximal diameter instead of the maximal volume. The capacity and the diameter are related by \({{\mathrm{Cap}}}E \le \kappa _d ({{\mathrm{diam}}}E)^{d-2}\). However, there is no inequality in the reverse direction: a set of fixed capacity can have an arbitrarily large diameter. It turns out that the maximal diameter of the components of \(\mathbb T ^d {\setminus } W_{\rho (t)}[0,t]\) is of larger order than \(\varphi _d(t)\). More precisely, suppose that \(\rho (t)=o(\varphi _d(t))\), and let \(D(t,\rho (t))\) denote the largest diameter of a component of \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\). Then \(\lim _{t\rightarrow \infty } D(t,\rho (t))/\varphi _d(t)=\infty \) in \(\mathbb{P }\)-probability. Indeed, choose a compact connected set \(E\) of zero capacity and large diameter, say \(E=[0,L]\times \left\{ 0\right\} ^{d-1}\) with \(L\) large. Then, by Theorem 1.3, \(\mathbb T ^d {\setminus } W_{\rho (t)}[0,t]\) has a component containing \(x+\varphi _d(t) E\) for some \(x\) with high probability. See also the discussion at the end of Sect. 1.6.3 above.
1.6.6 The second-largest component
The component of second-largest capacity (or second-largest volume or inradius, or second-smallest principal Dirichlet eigenvalue) has a different large deviation behaviour, due to the fact that \(E\mapsto {{\mathrm{Cap}}}E\) is not additive. Indeed, typically \({{\mathrm{Cap}}}(E^{(1)}\cup E^{(2)}) <{{\mathrm{Cap}}}(E^{(1)})+{{\mathrm{Cap}}}(E^{(2)})\), even for disjoint sets \(E^{(1)}, E^{(2)}\). In the extreme case of concentric spheres, \({{\mathrm{Cap}}}(\partial B(0,r_1)\cup \partial B(0,r_2))=\max \left\{ {{\mathrm{Cap}}}(\partial B(0,r_1)), {{\mathrm{Cap}}}(\partial B(0,r_2))\right\} \), since every path that reaches the inner sphere must first cross the outer one. It follows that the most efficient way to produce two large but disjoint components is to have them almost touching.
1.6.7 Answers to Questions 1–5
The results in this paper give a partial answer to Question 1. Question 2 is answered by Corollary 1.5 subject to \(E\in {\fancyscript{E}}^*,{{\mathrm{Cap}}}E\ne \kappa _d\) (see also Sect. 3 for results that are simultaneous over a certain class of sets \(E\)). The resolution to Question 3, namely, the fact that the dense picture in Fig. 2b applies, is provided by Corollary 1.5. If \(E\subset E'\) with \({{\mathrm{Cap}}}E'\ge {{\mathrm{Cap}}}E+\delta ,\delta >0\), and \(E,E'\in {\fancyscript{E}}^*\), then, compared to subsets of the form \(x+\varphi _d(t)E\), subsets of the form \(x+\varphi _d(t)E'\) are much less numerous (when \({{\mathrm{Cap}}}E<\kappa _d\)) or much less probable (when \({{\mathrm{Cap}}}E\ge \kappa _d\)). Moreover, if (1.17) holds, then Theorems 1.1–1.2 answer Question 3 simultaneously over all possible sets \(E'\). The answer to Question 4, namely, that the Brownian motion follows a spatial avoidance strategy, will follow from Proposition 2.1 below. Finally, Theorems 1.1–1.4, Corollaries 1.6–1.7 and Proposition 1.10 provide the answer to Question 5.
1.6.8 Two dimensions
It remains a challenge to extend the results in the present paper to \(d=2\) (see Fig. 1). A law of large numbers for the \(\varepsilon \)-cover time is derived in Dembo et al. [10]:
However, the relation \(\psi _2(\varepsilon (t))\sim \psi _2(\tilde{\varepsilon }(t))\), where \(\varepsilon (t),\tilde{\varepsilon }(t)\downarrow 0\), no longer implies \(\varepsilon (t)\sim \tilde{\varepsilon }(t)\): cf. (1.38). Hence the identity \(\left\{ \rho _\mathrm{in}(t) >\varepsilon \right\} = \left\{ {\fancyscript{C}}_\varepsilon > t\right\} \) does not lead to a law of large numbers for the largest inradius \(\rho _\mathrm{in}(t)\) itself, but only for its logarithm \(\log \rho _\mathrm{in}(t)\):
In order to give a detailed geometric description, the error term \(o(\sqrt{t})\) in (1.47) would need to be controlled up to order \(O(1)\). Rough asymptotics for the logarithm of the average principal Dirichlet eigenvalue are conjectured in van den Berg et al. [6].
In contrast to \(d\ge 3\), the large subsets of \(\mathbb T ^2 {\setminus } W[0,t]\) are expected to arise because of a temporal avoidance strategy and to resemble the sparse picture of Fig. 2a (see Questions 3–4). Furthermore, the Poisson point process heuristic, valid for \(d \ge 3\) as explained in Sect. 1.6.1, fails in \(d=2\). The components of \(\mathbb T ^2 {\setminus } W[0,t]\) are expected to have a hierarchical structure, with long-range spatial correlations.
2 Brownian excursions
In this section we list a few properties of Brownian excursions that will be needed as we go along. Section 2.1 looks at the times and the numbers of excursions between the boundaries of two concentric balls, Sect. 2.2 estimates the hitting probabilities of these excursions in terms of capacity, while Sect. 2.3 collects a few elementary properties of capacity.
2.1 Counting excursions between balls
\(\bullet \) Excursion times. Let \(x\in \mathbb T ^d\) and \(0<r<R<\tfrac{1}{2}\). Regard these values as fixed for the moment. Set \(T_0=\inf \left\{ t\ge 0:\,W(t)\in \partial B(x,R)\right\} \) and, for \(i\in \mathbb{N }\), define recursively the hitting times (see Fig. 4)
We call \(W[T'_i,T_i]\) the \(i\)th excursion from \(\partial B(x,r)\) to \(\partial B(x,R)\), and write \(\xi '_i(x)=W(T'_i)\), \(\xi _i(x)=W(T_i)\) for its starting and ending points.
Set
Thus, \(\tau _i(x,r,R)\) is the duration of the \(i\)th excursion from \(\partial B(x,R)\) to itself via \(\partial B(x,r)\), while \(\tau '_i(x,r,R)<\tau _i(x,r,R)\) is the duration of the \(i\)th excursion from \(\partial B(x,R)\) to \(\partial B(x,r)\).
(All the variables \(T_i,T'_i,\xi _i,\xi '_i,\tau _i,\tau '_i\) depend on all the parameters \(x,r,R\). Nevertheless, in our notation we only indicate some of these dependencies.)
\(\bullet \) Excursion numbers. Define
Thus, \(N(x,t,r,R)\) is the number of completed excursions from \(\partial B(x,r)\) to \(\partial B(x,R)\) by time \(t\), while \(N'(x,t,r,R)\) is the number of (necessarily completed) excursions when the total time spent not making an excursion reaches \(t\).
As we will see in Proposition 2.1 below, \(N(x,t,r,R)\) and \(N'(x,t,r,R)\) have very similar scaling behaviour for \(t\rightarrow \infty \) and \(r\ll R\ll 1\). Indeed, the times \(\tau _i(x,r,R)\) and \(\tau '_i(x,r,R)\) are typically large (since the Brownian motion typically visits the bulk of \(\mathbb T ^d\) many times before travelling from \(\partial B(x,R)\) to \(\partial B(x,r)\)), whereas \(\tau _i(x,r,R)-\tau '_i(x,r,R)=T_i(x)-T'_i(x)\) scales as \(R^2\). The advantage of \(N'(x,t,r,R)\) is that it is independent of non-intersection events within \(B(x,r)\) given the starting and ending points \(\xi '_i(x),\xi _i(x)\) of the excursions.
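To make the excursion counts concrete, here is a minimal simulation sketch (our own illustration, not part of the argument): it runs a crude Euler discretisation of Brownian motion on the unit torus \(\mathbb T ^3\) and counts completed excursions from \(\partial B(x,r)\) to \(\partial B(x,R)\). The function name, step size and starting point are our choices.

```python
import math
import random

def count_excursions(total_time, dt, x, r, R, seed=0):
    """Count completed excursions of a discretised Brownian motion on the
    unit torus T^3 from the sphere of radius r to the sphere of radius R
    around x (a crude illustration of the definition of N(x,t,r,R))."""
    rng = random.Random(seed)
    w = list(x)                       # start the motion at x itself
    sigma = math.sqrt(dt)             # per-coordinate step standard deviation

    def torus_dist(w):
        # distance from w to x on the unit torus
        return math.sqrt(sum(min(abs(a - b), 1.0 - abs(a - b)) ** 2
                             for a, b in zip(w, x)))

    count, inside = 0, False
    for _ in range(int(total_time / dt)):
        w = [(a + rng.gauss(0.0, sigma)) % 1.0 for a in w]
        if not inside and torus_dist(w) <= r:
            inside = True             # excursion from the inner sphere begins
        elif inside and torus_dist(w) >= R:
            inside = False            # excursion completed at the outer sphere
            count += 1
    return count
```

For a fixed driving path (fixed seed), the count is monotone non-decreasing in \(r\), since every downcrossing to the smaller inner ball is also a downcrossing to the larger one.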
Define
The following proposition shows that \(N_d(t,r,R)\) represents the typical size for the random variables \(N(x,t,r,R)\) and \(N'(x,t,r,R)\).
Proposition 2.1
For any \(\delta \in (0,1)\) there is a \(c=c(\delta )>0\) such that, uniformly in \(x,x_0\in \mathbb T ^d,t>1\) and \(0<r^{1-\delta }\le R\le c\),
Proof
The result follows from a lemma in Dembo et al. [9], which we reformulate in our notation. (Note that the constant \(\kappa _d\) defined by (1.4) corresponds to the quantity \(1/\kappa _\mathbb{T ^d}\) from [9, page 2] rather than \(\kappa _\mathbb{T ^d}\).)
Lemma 2.2
[9, Lemma 2.4] There is a constant \(\eta >0\) such that if \(N\ge \eta ^{-1},0<\delta <\delta _0<\eta \) and \(0<2r\le R<R_0(\delta )\), then for some \(c=c(r,R)>0\) and uniformly in \(x,x_0\in \mathbb T ^d\),
Moreover, \(c\) can be chosen to depend only on \(\delta _0\) as soon as \(R>r^{1-\delta _0}\). The same result holds when \(\tau '_i(x,r,R)\) is replaced by \(\tau _i(x,r,R)\).
(The same result for \(\tau '_i\) is not included in [9], but follows from the estimates in that paper. Indeed, \(\tau _i-\tau '_i\) is shown to be an error term.)
To prove Proposition 2.1, we begin with (2.8). Fix \(\delta >0\). We may assume without loss of generality that \(\delta <\tfrac{1}{2}\) and \(1/(1-\tfrac{1}{2}\delta )<1+\tfrac{2}{3}\delta <1+\eta \). Set \(N=\left\lfloor (1-\delta )N_d(t,r,R)\right\rfloor +1\). Since \(N/N_d(t,r,R) \rightarrow 1-\delta \) as \(N_d(t,r,R)\rightarrow \infty \), we can choose \(r\) small enough so that \(\tfrac{1}{2}N_d(t,r,R)\le N\le (1-\tfrac{1}{2}\delta ) N_d(t,r,R)\) and \(N\ge \eta ^{-1}\), uniformly in \(R\) and \(t>1\). We have
Since \(T_N=\sum _{i=0}^N \tau _i(x,r,R)\), it follows that
Hence (2.8) follows from Lemma 2.2 with \(\delta \) and \(\delta _0\) replaced by \(\tfrac{1}{2}\delta /(1-\tfrac{1}{2}\delta )\) and \(\tfrac{2}{3}\delta \), respectively, with the constant \(c\) in Proposition 2.1 chosen small enough so that \(2r\le R<R_0 [\tfrac{1}{2}\delta /(1-\tfrac{1}{2}\delta )]\).
The proof of (2.7) is similar. Let \(\delta >0\) be such that \(\tfrac{1}{2}\delta /(1+\tfrac{1}{2}\delta )<\eta \) and set \(N'=\left\lceil (1+\delta )N_d(t,r,R)\right\rceil \). As before, we have
and we can apply the version of Lemma 2.2 with \(\tau '_i(x,r,R)\) instead of \(\tau _i(x,r,R)\) and \(\delta \) replaced by \(\tfrac{1}{2}\delta /(1+\tfrac{1}{2}\delta )\).
Finally, because \(N'(x,t,r,R) \le N(x,t,r,R)\), (2.6) follows from (2.7).\(\square \)
Proposition 2.1 forms the link between the global structure of \(\mathbb T ^d\), notably the fact that a Brownian motion on \(\mathbb T ^d\) has a finite mean return time to a small ball, and the excursions of \(W\) within small balls, during which \(W\) cannot be distinguished from a Brownian motion on all of \(\mathbb{R }^d\).
2.2 Hitting sets by excursions
The concentration inequalities in Proposition 2.1 will allow us to treat the number of excursions as deterministic. This observation motivates the following definition.
Definition 2.3
Let \(0<r<R<\tfrac{1}{2},\varphi >0\) and \(N\in \mathbb{N }\). A pair \((x,E)\) with \(x\in \mathbb T ^d,E\subset \mathbb{R }^d\) Borel, will be called \((N,\varphi ,r,R)\)-successful if none of the first \(N\) excursions of \(W\) from \(\partial B(x,r)\) to \(\partial B(x,R)\) hit \(x+\varphi E\).
Proposition 2.4
Let \(0<\varepsilon <r<R<\tfrac{1}{2}\). Then, uniformly in \(\varphi >0\), \(x_0,x\in \mathbb T ^d\) and \(E\subset \mathbb{R }^d\) a Borel set with \(\varphi E\subset B(0,\varepsilon )\), and uniformly in \((\xi '_i(x),\xi _i(x))_{i=1}^N\),
Since the error term is uniform in \((\xi '_i(x),\xi _i(x))_{i=1}^N\), Proposition 2.4 also applies to the unconditional probability \(\mathbb{P }_{x_0}\big ((x,E) \text { is } (N,\varphi ,r,R)\text {-successful}\big )\).
To prove Proposition 2.4 we need the following lemma for the hitting probability of a single excursion given its starting and ending points. For \(\xi '\in \partial B(x,r),\xi \in \partial B(x,R)\), write \(\mathbb{P }_{\xi ',\xi }\) for the law of an excursion \(W[0,\zeta _R],\zeta _R=\inf \left\{ t\ge 0:\, d(x,W(t))\ge R\right\} \), from \(\partial B(x,r)\) to \(\partial B(x,R)\), started at \(\xi '\) and conditioned to end at \(\xi \).
Lemma 2.5
Let \(0<\varepsilon <r<R<\tfrac{1}{2}\). Then, uniformly in \(x\in \mathbb T ^d\), \(\xi '\in \partial B(x,r), \xi \in \partial B(x,R)\) and \(E\) a Borel set with \(E\subset B(0,\varepsilon )\),
Lemma 2.5 is a more elaborate version of (1.11): it states that the asymptotics of (1.11) remain valid when we stop the Brownian motion upon exiting a sufficiently distant ball, and hold conditionally and uniformly, provided the balls and the set are well separated. In the proof we use the relation
where \(\sigma _r\) denotes the uniform measure on \(\partial B(0,r)\). Equation (2.15) becomes an identity as soon as \(B(0,r)\) contains \(E\), and as such it is a more precise version of (1.11): see Port and Stone [19, Chapter 3, Theorem 1.10] and surrounding material.
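For orientation, we record the classical special case of a ball: for Brownian motion on \(\mathbb{R }^d\), \(d\ge 3\),
\[ \mathbb{P }_x\big ( W \text { hits } B(0,\varepsilon )\big ) =\left( \frac{\varepsilon }{|x|}\right) ^{d-2} =\frac{{{\mathrm{Cap}}}B(0,\varepsilon )}{\kappa _d\,|x|^{d-2}}, \qquad |x|\ge \varepsilon , \]
since \({{\mathrm{Cap}}}B(0,\varepsilon )=\kappa _d\,\varepsilon ^{d-2}\). Thus the hitting probability is proportional to the capacity, and for the ball the relation is exact rather than merely asymptotic.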
We defer the proof of Lemma 2.5 to Sect. 6.1. We can now prove Proposition 2.4.
Proof
Conditional on their starting and ending points \((\xi '_i(x),\xi _i(x))_{i=1}^N\), the successive excursions from \(\partial B(x,r)\) to \(\partial B(x,R)\) are independent with laws \(\mathbb{P }_{\xi '_i(x),\xi _i(x)}\). Applying Lemma 2.5, we have
Since \({{\mathrm{Cap}}}(\varphi E)\le \kappa _d\,\varepsilon ^{d-2} = o(r^{d-2})\) as \(r/\varepsilon \rightarrow \infty \), we can rewrite the right-hand side of (2.16) as
so that the scaling relation in (1.9) implies the claim.\(\square \)
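The exponent arithmetic in this step can be made explicit. Writing \(\varphi _d(t)=[(d\log t)/(d-2)\kappa _d t]^{1/(d-2)}\), one has the identity \(t\,\varphi _d(t)^{d-2}\,{{\mathrm{Cap}}}E=[d\,{{\mathrm{Cap}}}E/(d-2)\kappa _d]\log t\), so the heuristic success probability \(\exp [-t\,\varphi _d(t)^{d-2}{{\mathrm{Cap}}}E]\) is exactly the power \(t^{-d{{\mathrm{Cap}}}E/(d-2)\kappa _d}\) that appears in the exponents of Sect. 3. A minimal numerical check (our own sketch; the value of \(\kappa _d\) is left as a free parameter):

```python
import math

def phi_d(t, d, kappa_d):
    """The linear scale phi_d(t) of the largest unhit regions."""
    return ((d * math.log(t)) / ((d - 2) * kappa_d * t)) ** (1.0 / (d - 2))

def heuristic_success_prob(t, d, kappa_d, cap_E):
    """exp(-t * phi_d(t)^{d-2} * Cap E): heuristic probability that a
    translate of phi_d(t) E avoids the path up to time t."""
    return math.exp(-t * phi_d(t, d, kappa_d) ** (d - 2) * cap_E)

def power_law(t, d, kappa_d, cap_E):
    """The same quantity rewritten as a power of t."""
    return t ** (-(d * cap_E) / ((d - 2) * kappa_d))
```

The two expressions agree exactly, which is the content of the scaling relation: the per-excursion estimate, compounded over the typical number of excursions, turns into a pure power of \(t\).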
2.3 Properties of capacity
In this section we collect a few elementary properties of capacity.
2.3.1 Continuity
Proposition 2.6
Let \(E\) denote a Borel subset of \(\mathbb{R }^d\).
-
(a)
If \(E\) is compact, then \({{\mathrm{Cap}}}E_r\downarrow {{\mathrm{Cap}}}E\) as \(r\downarrow 0\).
-
(b)
If \(E\) is open, then \({{\mathrm{Cap}}}E_{-r}\uparrow {{\mathrm{Cap}}}E\) as \(r\downarrow 0\).
-
(c)
If \(E\) is bounded with \({{\mathrm{Cap}}}(\mathrm{clo}\left( E\right) )={{\mathrm{Cap}}}(\mathrm{int}\left( E\right) )\), then \({{\mathrm{Cap}}}E_r\downarrow {{\mathrm{Cap}}}E\) and \({{\mathrm{Cap}}}E_{-r}\uparrow {{\mathrm{Cap}}}E\) as \(r\downarrow 0\).
Proof
For \(r\downarrow 0\) we have \(E_r\downarrow \mathrm{clo}\left( E\right) \) and \(E_{-r}\uparrow \mathrm{int}\left( E\right) \) for any set \(E\). By Port and Stone [19, Chapter 3, Proposition 1.13], it follows that \({{\mathrm{Cap}}}E_{-r} \uparrow {{\mathrm{Cap}}}(\mathrm{int}\left( E\right) )\) and, if \(E\) is bounded, \({{\mathrm{Cap}}}E_r\downarrow {{\mathrm{Cap}}}(\mathrm{clo}\left( E\right) )\). The statements about \(E\) follow depending on which inequalities in \({{\mathrm{Cap}}}(\mathrm{int}\left( E\right) )\le {{\mathrm{Cap}}}E\le {{\mathrm{Cap}}}(\mathrm{clo}\left( E\right) )\) are equalities.\(\square \)
Proposition 2.6 is a statement about the continuity of \(E\mapsto {{\mathrm{Cap}}}E\) with respect to enlargement and shrinking. The assumptions on \(E\) are necessary, since there are sets \(E\) with \({{\mathrm{Cap}}}(\mathrm{clo}\left( E\right) )>{{\mathrm{Cap}}}(\mathrm{int}\left( E\right) )\). Note that \(E\mapsto {{\mathrm{Cap}}}E\) is not continuous with respect to the Hausdorff metric, even when restricted to reasonable classes of sets. For instance, the finite sets \(B(0,1)\cap \tfrac{1}{n}\mathbb{Z }^d\) converge to \(B(0,1)\) in the Hausdorff metric, but have zero capacity for all \(n\).
2.3.2 Asymptotic additivity
Lemma 2.7
Let \(0<\varepsilon <r\). Then, uniformly in \(x_1,x_2\in \mathbb{R }^d\) with \(d(x_1,x_2)\ge r\) and \(E^{(1)},E^{(2)}\) Borel subsets of \(\mathbb{R }^d\) with \(E^{(1)},E^{(2)}\subset B(0,\varepsilon )\),
Proof
Fix \(\tilde{r}\) large enough so that \((x_1+E^{(1)})\cup (x_2+E^{(2)})\subset B(0, \tilde{r})\). On the event \(\left\{ W\text { hits }x_j+E^{(j)}\right\} \), write \(Y_j\) for the first point of \(x_j+E^{(j)}\) hit by \(W\). Applying (1.10), (2.15), and the Markov property, we get
where the second inequality uses that every \(Y_j\in x_j+E^{(j)}\) is at least a distance \(r-\varepsilon \) from \(x_{j'}\). But \((\varepsilon /(r-\varepsilon ))^{d-2}=o(1)\) as \(r/\varepsilon \rightarrow \infty \), and so the claim follows.\(\square \)
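The near-additivity in Lemma 2.7 can be checked in the classical example of two small balls seen from far away, using the exact hitting probability \(\mathbb{P }_x(W \text { hits } B(0,\rho ))=(\rho /|x|)^{d-2}\). The sketch below (our own illustration; function names are ours) bounds the overlap term in the inclusion–exclusion argument of the proof:

```python
def hit_prob_ball(dist, radius, d=3):
    """Classical probability that Brownian motion in R^d (d >= 3), started at
    distance `dist` >= `radius` from the centre, ever hits the ball."""
    return (radius / dist) ** (d - 2)

def additivity_error_bound(eps, sep, d=3):
    """Relative overlap term for two balls of radius eps with centres a
    distance sep apart: by the strong Markov property, the probability of
    hitting both balls is at most (hit one) * sup over the first ball of the
    probability of then hitting the other, in either order, giving a relative
    error of at most 2 * (eps / (sep - eps))**(d-2)."""
    return 2 * hit_prob_ball(sep - eps, eps, d)
```

So the hitting probability (and hence the capacity) of the union is additive up to a relative error of order \((\varepsilon /r)^{d-2}\), exactly as in the lemma.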
3 Non-intersection probabilities for lattice animals
An event such as
is a simultaneous statement about an infinite collection \((x+\varphi _d(t) E)_{x\in \mathbb T ^d}\) of sets. In this section, we apply the results of Sect. 2 to prove simultaneous statements for a finite collection of discretized sets, the lattice animals defined below. Section 3.1 proves a bound for sets of large capacity that forms the basis for Theorem 1.1, while Sect. 3.2 proves bounds for sets of small capacity that form the basis for Theorems 1.2–1.3.
Definition 3.1
A lattice animal is a connected set \(A\subset \mathbb{R }^d\) that is the union of a finite number of closed unit cubes with centres in \(\mathbb{Z }^d\). We write \({\fancyscript{A}}^\Box \) for the collection of all lattice animals, and \({\fancyscript{A}}^\Box _Q\) for the collection of lattice animals \(A\in {\fancyscript{A}}^\Box \) that contain \(0\) and consist of at most \(Q\) unit cubes.
It is readily verified that, for any \(d\ge 2\), there is a constant \(C<\infty \) such that
In fact, subadditivity arguments show that \(|{\fancyscript{A}}^\Box _Q|\) grows exponentially, in the sense that \(\lim _{Q\rightarrow \infty }|{\fancyscript{A}}^\Box _Q|^{1/Q}\) exists in \((1,\infty )\) for any \(d\ge 2\). See, for instance, Klarner [12] for the case \(d=2\), or Mejia Miranda and Slade [15, Lemma 2] for a general upper bound that implies (3.2).
Lattice animals are commonly considered as discrete combinatorial objects. In our context, we can identify \(A\in {\fancyscript{A}}^\Box \) with the collection \(A\cap \mathbb{Z }^d\) of lattice points in \(A\). Requiring \(A\) to be a connected subset of \(\mathbb{R }^d\) is then equivalent to requiring the vertices \(A\cap \mathbb{Z }^d\) to form a connected subgraph of the lattice \(\mathbb{Z }^d\). (Because of the details of our definition, the relevant choice of lattice structure is that vertices \(x,y\in \mathbb{Z }^d\) are adjacent when their \(\ell _\infty \)-distance is \(1\).)
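The identification of lattice animals with connected subgraphs of \(\mathbb{Z }^d\) under \(\ell _\infty \)-adjacency can be made concrete by brute-force enumeration. The sketch below (our own illustration; the function name is ours, and the method is only feasible for very small sizes) enumerates, for \(d=2\), the animals containing the origin:

```python
from itertools import product

def lattice_animals(max_cells):
    """Enumerate all connected subsets of Z^2 containing the origin with at
    most `max_cells` cells, where two cells are adjacent when their
    l_inf-distance is 1 (king moves), matching the convention in the text."""
    king_moves = [(dx, dy) for dx, dy in product((-1, 0, 1), repeat=2)
                  if (dx, dy) != (0, 0)]
    animals = {frozenset([(0, 0)])}
    frontier = set(animals)
    for _ in range(max_cells - 1):
        grown = set()
        for animal in frontier:
            for (cx, cy) in animal:
                for dx, dy in king_moves:
                    cell = (cx + dx, cy + dy)
                    if cell not in animal:
                        grown.add(animal | {cell})
        animals |= grown
        frontier = grown
    return animals
```

With at most two cells there are \(1+8=9\) such animals (the origin alone, plus one for each of its eight \(\ell _\infty \)-neighbours); the count then grows exponentially in the number of cells, in line with (3.2).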
For \(n\in \mathbb{N }\), set \(G_n=x+\tfrac{1}{n}\mathbb{Z }^d\) to be a grid of \(n^d\) points in \(\mathbb T ^d\), for some \(x\in \mathbb T ^d\). The choice of \(x\) (i.e., the alignment of the grid) will generally not be relevant to our purposes.
3.1 Large lattice animals
Proposition 3.2
Fix an integer-valued function \(t \mapsto n(t)\) such that
Given \(A\in {\fancyscript{A}}^\Box \), write \(E(A)=n(t)^{-1}\varphi _d(t)^{-1}A\). Then, for each \(\kappa \),
Proposition 3.2 gives an upper bound on the probability of finding unhit sets of large capacity, simultaneously over all sets of the form \(E(A),A\in {\fancyscript{A}}^\Box \). Note that \(x+\varphi _d(t) E(A)\) is a finite union of cubes of side length \(1/n(t)\) centred at points of \(G_{n(t)}\). In Sect. 4 we will use \(x+\varphi _d(t) E(A)\) as a lattice approximation to a generic set \(x+\varphi _d(t) E\). The fineness of this lattice approximation is determined by the relation between the lengths \(1/n(t)\) and \(\varphi _d(t)\). The hypothesis in (3.3) means that the lattice spacing \(1/n(t)\) is smaller than the scale \(\varphi _d(t)\) by a factor of order \(o((\log t)^{1/d})\). This order is chosen so that the number of lattice animals does not grow too quickly.
Before proving Proposition 3.2, we give some definitions and make some remarks that we will use throughout Sect. 3. We abbreviate
For \(x\in \mathbb T ^d\), we introduce the nested balls \(B(x,r)\) and \(B(x,R)\), where
and \(\delta \in (0,\frac{1}{2})\) is fixed. We have \(\varphi \ll r\ll R\rightarrow 0\) as \(t\rightarrow \infty \), and we will always take \(t\) large enough so that \(\varphi <1\) and \(R<\frac{1}{2}\).
Suppose \(\kappa \in (0,\infty )\) is given and consider the collection of lattice animals \(A\in {\fancyscript{A}}^\Box \) such that \({{\mathrm{Cap}}}E(A)\le \kappa \). By (1.45), it follows that \({{\mathrm{Vol}}}E(A)\) is uniformly bounded. Consequently, we may assume that such a lattice animal \(A\) consists of at most \(Q=Q(t)\) unit cubes, where \(Q\) is suitably chosen with
Suppose, instead, that \(A\in {\fancyscript{A}}^\Box \) is minimal subject to the condition \({{\mathrm{Cap}}}E(A) \ge \kappa \), and suppose that \(n\varphi \rightarrow \infty \). By (1.10), upon removing a single unit cube from \(A\) the capacity \({{\mathrm{Cap}}}E(A)\) decreases by at most \(O(1/n^{d-2}\varphi ^{d-2})\), and so it follows that \(\kappa \le {{\mathrm{Cap}}}E(A)\le \kappa + O(1/n^{d-2}\varphi ^{d-2})\). In particular, \({{\mathrm{Cap}}}E(A)\) is uniformly bounded for \(t\) sufficiently large, and we may again assume (3.7).
In what follows, we will always work in a context where one of these two assumptions applies. We will therefore always assume that \(A\) consists of at most \(Q\) cubes, where \(Q\) satisfies (3.7).
Given \(x\in G_n\) and \(A\in {\fancyscript{A}}^\Box \), the translate \(x+\varphi E(A)\) can be written as \(x'+\varphi E(A')\), where \(x'\in G_n\) and \(0\in A'\). By the above, we have \(A'\in {\fancyscript{A}}^\Box _Q\). Since \(A'\) is connected and \(0\in A'\), it follows that \(\varphi E(A')\subset B(0,\varphi Q\sqrt{d})\). If \(Q=t^{o(1)}\) (in particular, if (3.3) is assumed, or the weaker hypothesis in (3.17)), then \(r/\varphi Q\rightarrow \infty \) as \(t\rightarrow \infty \). We may therefore always take \(t\) large enough so that \(B(0,\varphi Q\sqrt{d}) \subset B(0,r)\), and we may apply Proposition 2.4 to \(\varphi E(A)\), uniformly over \(A\in {\fancyscript{A}}_Q^\Box \).
Proof
Note that if we replace \(n\) by a suitable multiple \(kn=k(t)n(t)\) for \(k(t)\in \mathbb{N }\), we can only increase the probability in (3.4). Thus it is no loss of generality to assume that \(n\varphi \rightarrow \infty \).
The event that \(W\) hits \(x+\varphi E(A)\) is decreasing in \(A\). Therefore we may restrict our attention to lattice animals \(A\) that are minimal subject to \({{\mathrm{Cap}}}E(A)\ge \kappa \). By the remarks above, we may assume that \(A\in {\fancyscript{A}}^\Box _Q\). Combining (3.3) and (3.7), we have \(Q=o(\log t)\).
Set \(N=(1-\delta )N_d(t,r,R)\). Recalling (2.5) and (3.6), we have \(N_d(t,r,R)= t^{\delta +o(1)}\) as \(t\rightarrow \infty \). If the event in (3.4) occurs, then there must exist a point \(x\in G_n\) with \(N(x,t,r,R) < N\) or a pair \((x,A)\in G_n\times {\fancyscript{A}}^\Box _Q\) such that \({{\mathrm{Cap}}}E(A)\ge \kappa \) and \((x,E(A))\) is \((\left\lfloor N\right\rfloor , \varphi ,r,R)\)-successful. Write \(\tilde{\chi }^\Box \) for the number of such pairs. Then
by Proposition 2.1. The first term on the right-hand side is negligible. For the second term, \(Q=o(\log t)\) implies that \(|{\fancyscript{A}}_Q^\Box |\le e^{O(Q)}=t^{o(1)}\) by (3.2), and so Proposition 2.4 gives
and the Markov inequality completes the proof.\(\square \)
Proposition 3.2 bounds the probability that a single rescaled lattice animal \(x+\varphi _d(t)E(A)\) is not hit. We will also need the following bounds, for finite unions of lattice animals that are relatively close, and for pairs of lattice animals that are relatively distant.
Lemma 3.3
Assume (3.3). Fix a capacity \(\kappa \ge \kappa _d\), a positive integer \(k\in \mathbb{N }\) and a positive function \(t\mapsto h(t)>0\) satisfying
Then the probability that there exist a point \(x\in G_{n(t)}\) and lattice animals \(A^{(1)}, \cdots ,A^{(k)}\in {\fancyscript{A}}^\Box \), such that the union \(E=\cup _{j=1}^k E(A^{(j)})\) satisfies \({{\mathrm{Cap}}}E\ge \kappa ,\varphi _d(t)E\subset B(0,h(t))\), and \((x+\varphi _d(t)E) \cap W[0,t] =\varnothing \), is at most \(t^{-I_d(\kappa )+o(1)}\).
Proof
The proof is the same as for Proposition 3.2. Abbreviate \(h=h(t)\). Since \(h=t^{o(1)} \varphi \), it follows that \(r/h\rightarrow \infty \) as \(t\rightarrow \infty \), so that Proposition 2.4 applies to \(\varphi E\). Similarly, writing \(A^{(j)}=y_j+\tilde{A}^{(j)}\) with \(\tilde{A}^{(j)} \in {\fancyscript{A}}_Q^\Box \) and \(y_j\in B(0,nh)\cap \mathbb{Z }^d\), we have that there are at most \(O((nh)^{dk})|{\fancyscript{A}}_Q^\Box |^k\) possible choices for \(A^{(1)},\cdots ,A^{(k)}\). This number is \(t^{o(1)}\) by (3.3) and (3.10), so that a counting argument applies as before.\(\square \)
Lemma 3.4
Assume (3.3). Fix a positive function \(t \mapsto h(t)>0\) satisfying
and let \(\kappa ^{(1)},\kappa ^{(2)}>\kappa _d,x_1\in \mathbb T ^d\). Then the probability that there exist a point \(x_2\in G_{n(t)}\) with \(d(x_1,x_2)\ge h(t)\) and lattice animals \(A^{(1)},A^{(2)} \in {\fancyscript{A}}^\Box \) with \({{\mathrm{Cap}}}E(A^{(j)})\ge \kappa ^{(j)}\) such that \((x_j+\varphi _d(t)E(A^{(j)})) \cap W[0,t]=\varnothing ,j=1,2\), is at most \(t^{-[d \kappa ^{(1)}/(d-2)\kappa _d] -I_d(\kappa ^{(2)})+o(1)}\).
Proof
We resume the notation and assumptions from the proof of Proposition 3.2, this time taking \(\delta <\tfrac{1}{4}\). Abbreviate \(h=h(t)\).
For \(x_2\in G_n\) such that \(d(x_1,x_2)\ge 2R\), the events of \((x_j,E(A_j))\) being \((\left\lfloor N\right\rfloor ,\varphi ,r,R)\)-successful, \(j=1,2\), are conditionally independent given \((\xi '_i(x_j),\xi _i(x_j))_{i,j}\). The required bound for the case \(d(x_1,x_2)\ge 2R\) therefore follows by the same argument as in the proof of Proposition 3.2.
For \(x_2\in G_n\) such that \(d(x_1,x_2)\le 2R\), set \(\tilde{r}=\varphi ^{1-3\delta }\), \(\tilde{R} =\varphi ^{1-4\delta }\) and \(\tilde{N}=(1-\delta )N_d(t,\tilde{r},\tilde{R})\). We have \(\varphi E(A_j) \subset B(0,\varphi Q\sqrt{d})\) for \(j=1,2\), with \(Q=o(\log t)\) (without loss of generality, as in the proof of Proposition 3.2). Write \(x_2=x_1+\varphi y\), where \(y\in \mathbb{R }^d\) with \(h/\varphi \le d(0,y) \le 2R/\varphi \). The hypothesis (3.11) implies that \(h/\varphi Q\rightarrow \infty \). Hence we can apply Lemma 2.7 (with \(\varepsilon =\varphi Q\sqrt{d}\) and \(h\) playing the role of \(r\)), to conclude that
We also have \(\varphi (E(A_1) \cup (y+E(A_2)))\subset B(0,2R+\varphi Q\sqrt{d})\) with \(\tilde{r}/R,\tilde{r} /\varphi Q\rightarrow \infty \). In particular, \(x_1+\varphi (E(A_1)\cup (y+E(A_2)))\subset B(x_1,\tilde{r})\) for \(t\) large enough. As in the proof of Proposition 3.2, \((x_j+\varphi E(A_j)) \cap W[0,t]=\varnothing \) implies that \(N(x_1,t,\tilde{r},\tilde{R})<\tilde{N}\) or \((x_1,E(A_1) \cup (y+E(A_2)))\) is \((\left\lfloor {\tilde{N}}\right\rfloor ,\varphi ,\tilde{r},\tilde{R})\)-successful. By (3.12) and Proposition 2.4,
and the rest of the proof is the same as for Proposition 3.2.\(\square \)
3.2 Small lattice animals
The bound in Proposition 3.2 is only meaningful when \(\kappa >\kappa _d\). For \(\kappa <\kappa _d\), there are likely to be many unhit sets of capacity \(\kappa \), and the two propositions that follow will quantify this statement.
For \(E\subset \mathbb{R }^d\), write \(\chi (t,n(t),E)\) for the number of points \(x\in G_{n(t)}\) such that \((x+\varphi _d(t) E)\cap W[0,t]=\varnothing \), and write \(\chi ^\mathrm{disjoint}(t,n(t),E)\) for the maximal number of disjoint translates \(x+\varphi _d(t) E\) such that \(x\in G_{n(t)}\) and \((x+\varphi _d(t) E)\cap W[0,t]=\varnothing \). For \(\kappa >0\), define
Proposition 3.5
Fix an integer-valued function \(t \mapsto n(t)\) satisfying condition (3.3) such that \(\lim _{t\rightarrow \infty } n(t)\varphi _d(t)=\infty \). Then, for \(0<\kappa <\kappa _d\),
Proposition 3.6
Fix an integer-valued function \(t \mapsto n(t)\) and a non-negative function \(t \mapsto h(t)\) satisfying
and collections of points \((S(t))_{t>1}\) in \(\mathbb T ^d\) such that \(\max _{x\in \mathbb T ^d} d(x,S(t)) \le h(t)\) for all \(t>1\). Given \(A\in {\fancyscript{A}}^\Box \), write \(E(A)=n(t)^{-1}\varphi _d(t)^{-1}A\). Then, for each \(\kappa \in (0,\kappa _d)\),
Compared to Theorem 1.3, Proposition 3.6 requires \((x+\varphi _d(t)E(A))\cap W[0,t] \ne \varnothing \) only for \(x\) in some subset \(S(t)\) of the torus, subject to the requirement that \(S(t)\) should be within distance \(h(t)\) of every point in \(\mathbb T ^d\). The reader may assume that \(S(t)=\mathbb T ^d,h(t)=0\) for simplicity.
In Proposition 3.6, the scale \(n(t)\) of the lattice need only satisfy (3.17) instead of the stronger condition (3.3). This reflects the difference in scaling between the probabilities in Proposition 3.6 and those in Proposition 3.2.
3.2.1 Proof of Proposition 3.5
Proof
Let \(\delta \in (0,\tfrac{1}{2})\) be given. It suffices to show that \(t^{J_d(\kappa )-O(\delta )} \le \chi _-^\Box (t,n,\kappa )\) and \(\chi _+^\Box (t,n,\kappa )\le t^{J_d(\kappa ) +O(\delta )}\) with high probability. (Given \(\kappa <\kappa '\), the assumption \(n\varphi \rightarrow \infty \) implies the existence of some \(A\) with \(\kappa \le {{\mathrm{Cap}}}E(A)\le \kappa '\), and therefore \(\chi _-^\Box (t,n,\kappa ')\le \chi _+^\Box (t,n,\kappa )\).)
For the upper bound, recall \(N\) and \(\tilde{\chi }^\Box \) from the proof of Proposition 3.2. On the event \(\{N(x,t,r,R)\ge N \;\forall \,x\in G_n\}\) (whose probability tends to \(1\)) we have \(\chi _+^\Box (t,n,\kappa ) \le \tilde{\chi }^\Box \). From (3.9) it follows that \(\tilde{\chi }^\Box \le t^{J_d(\kappa )+O(\delta )}\) with high probability.
For the lower bound, let \(\left\{ x_1,\cdots ,x_K\right\} \) denote a maximal collection of points in \(G_n\) satisfying \(d(x_j,x_k)>2R\) for \(j\ne k\), so that \(K=R^{-d+o(1)}=t^{d/(d-2)-O(\delta )}\). Write \(N_-=(1+\delta )N_d(t,r,R)\). By Proposition 2.1, in the same way as in the proof of Proposition 3.2, \(N(x_j,t,r,R)\le N_-\) for each \(j=1,\cdots ,K\), with high probability. Moreover we may take \(t\) large enough so that \(\varphi E(A)\subset B(0,R)\), so that the translates \(x_j+\varphi E(A)\) are disjoint. Let \(\tilde{\chi }_-^\Box (A)\) denote the number of points \(x_j\), \(j\in \left\{ 1,\cdots ,K\right\} \), such that \((x_j,E(A))\) is \((\left\lceil N_-\right\rceil ,\varphi ,r,R)\)-successful. We have \(\chi _-^\Box (t,n(t),E(A)) \ge \tilde{\chi }_-^\Box (A) - 1\) on the event \(\left\{ N(x_j,t,r,R)\le N_- \, \forall j\right\} \), since at most one translate \(x_j+\varphi E(A)\) may have been hit before the start of the first excursion, in the case \(x_0\in B(x_j,R)\). On the other hand, since the balls \(B(x_j,R)\) are disjoint, the excursions are conditionally independent given the starting and ending points \((\xi '_i(x_j),\xi _i(x_j))_{i,j}\). It follows that, for each \(A\) with \({{\mathrm{Cap}}}E(A)\le \kappa ,\tilde{\chi }_-^\Box (A)\) is stochastically larger than a Binomial\((K,p)\) random variable, where \(p\ge t^{-d\kappa /(d-2)-O(\delta )}\) by Proposition 2.4. A straightforward calculation shows that \(\mathbb{P }(\text {Binomial} (K,p)<\frac{1}{2}Kp)\le e^{-cKp}\) for some \(c>0\), so that
As in the proof of Proposition 3.2, there are at most \(t^{o(1)}\) animals \(A\) to consider, so a union bound completes the proof.\(\square \)
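The "straightforward calculation" used in the proof above is a standard Chernoff bound: with \(\delta =\tfrac{1}{2}\) in the multiplicative lower-tail inequality \(\mathbb{P }(X\le (1-\delta )\mu )\le e^{-\delta ^2\mu /2}\), one gets \(\mathbb{P }(\text {Binomial}(K,p)<\tfrac{1}{2}Kp)\le e^{-Kp/8}\), so \(c=\tfrac{1}{8}\) works. This can be confirmed against the exact binomial tail (our own numerical sketch):

```python
import math

def binomial_lower_tail(K, p, k):
    """Exact P(Binomial(K, p) <= k), computed from the binomial pmf."""
    return sum(math.comb(K, i) * (p ** i) * ((1 - p) ** (K - i))
               for i in range(k + 1))

def chernoff_bound(K, p):
    """Chernoff bound for {Binomial(K, p) < Kp/2}: with delta = 1/2,
    P(X <= (1 - delta) K p) <= exp(-delta**2 * K * p / 2) = exp(-Kp/8)."""
    return math.exp(-K * p / 8.0)
```

For example, for \(K=200\) and \(p=0.1\) (mean \(Kp=20\)), the exact probability \(\mathbb{P }(X<10)\) lies well below the bound \(e^{-2.5}\).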
As with Lemma 3.3, we may modify Proposition 3.5 to deal with a finite union of lattice animals.
Lemma 3.7
Assume the hypotheses of Proposition 3.5, let \(k\in \mathbb{N }\), and let \(t\mapsto h(t)>0\) be a positive function satisfying (3.10). Define
where the sum and minimum are over sets \(E=\cup _{j=1}^k E(A^{(j)})\) such that \(\varphi _d(t) E\subset B(0,h(t))\); \((x+\varphi _d(t)E)\cap W[0,t]=\varnothing \); and \({{\mathrm{Cap}}}E\ge \kappa \) (for \(\chi ^\Box _+\)) or \({{\mathrm{Cap}}}E\le \kappa \) (for \(\chi ^\Box _-\)), respectively. Then \((\log \chi ^\Box _+(t,n(t),\kappa ,k,h(t)))/\log t\) and \((\log \chi ^\Box _-(t,n(t), \kappa ,k,h(t)))/\log t\) converge in \(\mathbb{P }_{x_0}\)-probability to \(J_d(\kappa )\) as \(t\rightarrow \infty \).
3.2.2 Proof of Proposition 3.6
The proof of Proposition 3.5 compares \(\chi _-^\Box (t,n(t),\kappa )\) to a random variable that is approximately Binomial \((t^{d/(d-2)},t^{-d\kappa /(d-2)})\). If this identification were exact, then the asymptotics in Proposition 3.6 would follow in a similar way. However, the bound for each individual probability \(\mathbb{P }_{x_0} (N(x_j,t,r,R)\ge (1+\delta )N_d(t,r,R))\), \(j=1,\cdots ,K\), although relatively small, is still much larger than the probability in Proposition 3.6. Therefore an additional argument is needed.
Proof
Abbreviate \(h=h(t),S=S(t)\).
Recall that the condition \({{\mathrm{Cap}}}E(A)\le \kappa \) implies that \(A\) consists of at most \(Q\) cubes, where because of (3.7) and (3.17) we have \(Q=t^{o(1)}\). Fix such an \(A\), and write \(A=p+A'\), where \(p\in \mathbb{Z }^d\) and \(A'\in {\fancyscript{A}}^\Box _Q\). In particular, \(E(A')\subset B(0,Q\sqrt{d})\). Since \(x+\varphi E(A) =x+\tfrac{1}{n}p+\varphi E(A')\), we can assume by periodicity that \(p\in \left\{ 0,\cdots ,n-1\right\} ^d\).
Let \(\delta \in (0,\tfrac{1}{3})\), take \(r,R\) as in (3.6), and choose \(\tilde{n} =\tilde{n}(t)\in \mathbb{N }\) such that \(1/\tilde{n}= \varphi ^{1-3\delta +o(1)}\) and \(1/\tilde{n}\ge 2R\). Let \(\left\{ \tilde{x}_1,\cdots ,\tilde{x}_{\tilde{n}^d}\right\} \) denote a grid of points in \(\mathbb T ^d\) with spacing \(1/\tilde{n}\) (i.e., a translate of \(G_{\tilde{n}}\)), chosen in such a way that \(d(x_0,\tilde{x}_j)>R\). To each grid point \(\tilde{x}_j,j=1,\cdots ,\tilde{n}^d\), associate in some deterministic way a point \(x_j\in S\) with \(d(x_j+\tfrac{1}{n}p,\tilde{x}_j) =d(x_j,\tilde{x}_j-\tfrac{1}{n}p)\le h\) (this is always possible by the hypothesis on \(S\)). The choice of \(\tilde{x}_j,x_j\) depends on \(t\), but we suppress this dependence in our notation.
Since \(h/\varphi \le t^{o(1)}\), we have \(r/h\ge \varphi ^{-\delta +o(1)}\rightarrow \infty \). Since also \(r/\varphi Q\rightarrow \infty \), we may take \(t\) large enough so that \(h+\varphi Q\sqrt{d}<r<R<1/ \tilde{n}\), implying that \(x_j+\varphi E(A)=x_j+\tfrac{1}{n}p+E(A')\subset B(\tilde{x}_j,r)\) for \(j=1,\cdots ,\tilde{n}^d\), and so we can apply Lemma 2.5 to the sets \(x_j+\varphi E(A)\), uniformly in the choice of \(A\) and \(j\).
Let \(\sigma (s)\) be the total amount of time, up to time \(s\), during which the Brownian motion is not making an excursion from \(\partial B(\tilde{x}_j,r)\) to \(\partial B(\tilde{x}_j,R)\) for any \(j=1,\cdots ,\tilde{n}^d\). In other words, \(\sigma (s)\) is the Lebesgue measure of \([0,s] {\setminus } ( \cup _{j=1}^{\tilde{n}^d} \cup _{i=1}^\infty [T'_i(\tilde{x}_j),T_i(\tilde{x_j})])\). Define the stopping time \(T''=\inf \left\{ s:\, \sigma (s)\ge t\right\} \). Clearly, \(T''\ge t\). Define \(N''_j\) to be the number of excursions from \(\partial B(\tilde{x}_j,r)\) to \(\partial B(\tilde{x}_j,R)\) by time \(T''\), and write \((\xi '_i(\tilde{x}_j),\xi _i(\tilde{x}_j))_{i=1,\cdots ,N''_j}\) for the starting and ending points of these excursions.
If \((x+\varphi E(A)) \cap W[0,t]\ne \varnothing \) for each \(x\in S\), then necessarily, for each \(j=1,\cdots ,\tilde{n}^d\), at least one of the \(N''_j\) excursions from \(\partial B(\tilde{x}_j,r)\) to \(\partial B(\tilde{x}_j,R)\) must hit \(x_j+\varphi E(A)\). (Here we use that \(d(x_0,\tilde{x}_j)>R\), which implies that the Brownian motion cannot hit \(x_j+\varphi E(A)\) before the start of the first excursion.) These excursions are conditionally independent given \((\xi '_i(\tilde{x}_j),\xi _i(\tilde{x}_j))\) for \(i=1,\cdots ,N''_j, j=1,\cdots ,\tilde{n}^d\). Applying Lemma 2.5 and (1.9), we get
In this upper bound, which no longer depends on \((\xi '_i(\tilde{x}_j),\xi _i(\tilde{x}_j))_{i,j}\), the function \(y \mapsto \log (1-e^{cy})\) is concave, and hence we can replace each \(N''_j\) by the empirical mean \(\bar{N}''=\tilde{n}^{-d} \sum _{j=1}^{\tilde{n}^d} N''_j\):
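The concavity step is an instance of Jensen's inequality. Writing the generic bound with a constant \(c<0\) (standing in for the negative \((\varphi /r)^{d-2}\)-type factor in the actual display, an abbreviation introduced here for illustration), concavity of \(y\mapsto \log (1-e^{cy})\) on \((0,\infty )\) gives

\[
\frac{1}{\tilde{n}^d}\sum _{j=1}^{\tilde{n}^d}\log \bigl (1-e^{cN''_j}\bigr ) \le \log \bigl (1-e^{c\bar{N}''}\bigr ), \qquad \text{i.e.}\qquad \prod _{j=1}^{\tilde{n}^d}\bigl (1-e^{cN''_j}\bigr )\le \bigl (1-e^{c\bar{N}''}\bigr )^{\tilde{n}^d}.
\]

(Concavity can be checked directly: with \(u=e^{cy}\in (0,1)\), the second derivative of \(\log (1-u)\) in \(y\) equals \(-c^2u/(1-u)^2<0\).)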
Write \(M=(1+\delta )N_d(t,r,R)\). On the event \(\left\{ \bar{N}''\le M\right\} \), the relations \((\varphi /r)^{d-2} M\sim (1+\delta )d(d-2)^{-1}\log t\) and \(\tilde{n}^d=t^{d/(d-2)-O(\delta )}\) imply that
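To see how these two relations combine (a sketch, with \(c<0\) the generic exponent constant as above, using \(1-x\le e^{-x}\) together with the single-excursion hitting exponent from Proposition 2.4 as in the proof of Proposition 3.5):

\[
\bigl (1-e^{c\bar{N}''}\bigr )^{\tilde{n}^d} \le \exp \bigl [-\tilde{n}^d e^{c\bar{N}''}\bigr ], \qquad \tilde{n}^d e^{cM} \ge t^{d/(d-2)-O(\delta )}\cdot t^{-d\kappa /(d-2)-O(\delta )},
\]

so that, on the event \(\{\bar{N}''\le M\}\) (where \(e^{c\bar{N}''}\ge e^{cM}\) since \(c<0\)), the probability is stretched-exponentially small, at the scale required in Proposition 3.6.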
Next, we will show that \(\mathbb{P }_{x_0}(\bar{N}''\ge M)\le \exp [-ct^{d/(d-2)-O(\delta )}]\). To that end, let \(\pi ^{(\tilde{n})}\) denote the projection map from the unit torus \(\mathbb T ^d\) to a torus of side length \(1/\tilde{n}\). Under \(\pi ^{(\tilde{n})}\), every grid point \(\tilde{x}_j\) maps to the same point \(\pi ^{(\tilde{n})}(\tilde{x}_j)\), and \(\sigma (s)\) is the total amount of time the projected Brownian motion \(\pi ^{(\tilde{n})}(W)\) in \(\pi ^{(\tilde{n})}(\mathbb T ^d)\) spends not making an excursion from \(\partial B(\pi ^{(\tilde{n})}(\tilde{x}_j),r)\) to \(\partial B(\pi ^{(\tilde{n})}(\tilde{x}_j),R)\), by time \(s\). Moreover, \(\tilde{n}^d \bar{N}'' = \sum _{j=1}^{\tilde{n}^d} N''_j\) can be interpreted as the number of such excursions in \(\pi ^{(\tilde{n})}(\mathbb T ^d)\) completed by time \(T''\).
Write \(x\mapsto \tilde{n}x\) for the dilation that maps the torus \(\pi ^{(\tilde{n})}(\mathbb T ^d)\) of side length \(1/\tilde{n}\) to the unit torus \(\mathbb T ^d\). By Brownian scaling, \((\tilde{W}(u) )_{u\ge 0} = (\tilde{n}\pi ^{(\tilde{n})}(W(\tilde{n}^{-2}u)))_{u\ge 0}\) has the law of a Brownian motion in \(\mathbb T ^d\). Moreover, \(\tilde{n}^d\bar{N}''\) can be interpreted as the number of excursions of \(\tilde{W}(u)\) from \(\partial B(\tilde{n}\pi ^{(\tilde{n})}(\tilde{x_j}), \tilde{n} r)\) to \(\partial B(\tilde{n}\pi ^{(\tilde{n})}(\tilde{x_j}),\tilde{n} R)\) until the time spent not making such excursions first exceeds \(\tilde{n}^2 t\), i.e., precisely the quantity \(N'(\tilde{n}\pi ^{(\tilde{n})}(\tilde{x}_j),\tilde{n}^2 t,\tilde{n}r,\tilde{n}R)\) from Sect. 2.1. We have \(N_d(\tilde{n}^2 t,\tilde{n}r,\tilde{n}R)=\tilde{n}^d N_d(t,r,R)\), so Proposition 2.1 gives
Equations (3.23)–(3.24) imply that, for each fixed \(A=p+A'\) with \({{\mathrm{Cap}}}E(A)\le \kappa \), we have
But the number of pairs \((p,A')\) is at most \(n^d |{\fancyscript{A}}^\Box _Q|= t^{d/(d-2)+o(1)} e^{O(Q)}\), by (3.2) and (3.17). Since \(Q=t^{o(1)}\), a union bound completes the proof.\(\square \)
4 Proofs of Theorems 1.1–1.4 and Propositions 1.10–1.11
In proving Theorems 1.1–1.4, we bound non-intersection probabilities for Wiener sausages, e.g.
in terms of the Brownian non-intersection probabilities estimated in Propositions 3.2 and 3.5–3.6, in which \(E\) is a rescaled lattice animal. In Sect. 4.1 we prove an approximation lemma for lattice animals, which leads directly to the proofs of Theorems 1.1–1.3 and Proposition 1.10. Proving Theorem 1.4 requires an additional argument to show that a component containing a given set is likely to be not much larger, and we prove this in Sect. 4.2. Finally, in Sect. 4.3 we give the proof of Proposition 1.11.
4.1 Approximation by lattice animals
Lemma 4.1
Let \(\rho >0\) and \(n\in \mathbb{N }\) satisfy \(\rho n \ge 2\sqrt{d}\), and let \(\varphi >0\). Then, given a bounded connected set \(E\subset \mathbb{R }^d\), there is an \(A\in {\fancyscript{A}}^\Box \) such that \(E(A)=n^{-1}\varphi ^{-1} A\) satisfies \(E\subset E(A)\subset E_{\rho /\varphi }\) and, for any \(x\in \mathbb T ^d,0\le \tilde{\rho }\le \tfrac{1}{4}\rho \),
Proof
Let \(A\) be the union of all the closed unit cubes with centres in \(\mathbb{Z }^d\) that intersect \(n\varphi E_{\rho /4\varphi }\). This set is connected because \(E\) is connected, and therefore \(A\in {\fancyscript{A}}^\Box \). Every cube in \(A\) is within distance \(\sqrt{d}\) of some point of \(n\varphi E_{\rho /4\varphi }\), so that \(E\subset E_{\rho /4\varphi }\subset E(A)\subset E_{\rho /4\varphi +\sqrt{d}/n\varphi }\). By assumption, \(\sqrt{d}/n\le \rho /2\), so that \(E(A)\subset E_{3\rho /4\varphi }\subset E_{\rho /\varphi }\) (see Fig. 5a).
a From inside to outside: an F-shaped set \(E\); the enlargement \(E_{\rho /4\varphi };E(A)\), the union of the rescaled cubes intersecting \(E_{\rho /4\varphi }\); the bounding set \(E_{3\rho /4\varphi }\). The grid shows the cubes in the definition of \(E(A)\), rescaled to have side length \(1/n\varphi \). The parameters \(\rho ,n\) satisfy \(\rho n =2\sqrt{d}\). b From inside to outside (scaled by \(\varphi \) compared to part a): the prospective subset \(x+\varphi E\) of \(\mathbb T ^d{\setminus } W_\rho [0,t]\); the approximating grid-aligned set \(x'+\varphi E(A)\); the taboo set \(x+(\varphi E)_\rho \) that the Brownian motion must not visit
Given \(x\in \mathbb T ^d\), let \(x'\in G_n\) satisfy \(d(x,x')\le \sqrt{d}/2n\). Then \(x+\varphi E\subset x'+(\varphi E)_{\sqrt{d}/2n}\subset x'+\varphi E(A)\subset x+(\varphi E(A))_{\sqrt{d}/2n}\subset x +(\varphi E)_\rho \) since \(\sqrt{d}/2n\le \rho /4\) and \(\varphi E(A)\subset (\varphi E)_{3\rho /4}\). See Fig. 5b. This proves (4.2); (4.3) follows immediately because \((x+\varphi E)\cap W_\rho [0,t]=\varnothing \) is equivalent to \((x+(\varphi E)_\rho ) \cap W[0,t]=\varnothing \).
Similarly, since \((\varphi E)_{\rho /4}\subset \varphi E(A)\) and since \((x+\varphi E)\cap W_{\tilde{\rho }}[0,t]\ne \varnothing \) is equivalent to \((x+(\varphi E)_{\tilde{\rho }})\cap W[0,t]\ne \varnothing \), the inclusion in (4.4) follows.\(\square \)
4.1.1 Proof of Theorem 1.3
In this section we prove the following theorem, of which Theorem 1.3 is the special case with \(S(t)=\mathbb T ^d\).
Theorem 4.2
Fix non-negative functions \(t \mapsto \rho (t)\) and \(t \mapsto h(t)\) satisfying
and collections of points \((S(t))_{t>1}\) in \(\mathbb T ^d\) such that \(\max _{x\in \mathbb T ^d} d(x,S(t)) \le h(t)\) for all \(t>1\). Then, for any \(E\subset \mathbb{R }^d\) compact with \({{\mathrm{Cap}}}E <\kappa _d\),
Proof
Fix \(E\subset \mathbb{R }^d\) compact with \({{\mathrm{Cap}}}E<\kappa _d\), and let \(\delta >0\) be arbitrary with \({{\mathrm{Cap}}}E +\delta <\kappa _d\). By Proposition 2.6(a), we can choose \(r>0\) so that \({{\mathrm{Cap}}}(E_r)\le {{\mathrm{Cap}}}E+\tfrac{1}{2}\delta \). If \(E_r\) is not already connected, then enlarge it to a connected set \(E'\supset E_r\) by adjoining a finite number of line segments (this is possible because \(E_r\) is the \(r\)-enlargement of a compact set). Doing so does not change the capacity, so we may apply Proposition 2.6(a) again to find \(r'>0\) so that \({{\mathrm{Cap}}}((E')_{r'})\le {{\mathrm{Cap}}}E+\delta \).
Define \(\rho _0(t)=r'\varphi _d(t)\) and \(n(t)=\left\lceil 2{\sqrt{d}} /\rho _0(t)\right\rceil \), so that \(\rho _0(t)n(t)\ge 2\sqrt{d}\) and the condition (3.17) from Proposition 3.6 holds. Since \(\rho (t)/\varphi _d(t)\rightarrow 0\), we may choose \(t\) sufficiently large so that \(\rho (t)\le \tfrac{1}{4}\rho _0(t)\).
Apply Lemma 4.1 to \(E'\) with \(\rho =\rho _0(t)\), \(\tilde{\rho }=\rho (t)\), and \(\varphi =\varphi _d(t)\). Note that if \((x+\varphi _d(t)E)\cap W_{\rho (t)}[0,t]\ne \varnothing \) for all \(x\in S(t)\), then \((x+\varphi _d(t)E(A))\cap W[0,t]\ne \varnothing \) for all \(x\in S(t)\), where \({{\mathrm{Cap}}}E(A)\le {{\mathrm{Cap}}}((E')_{\rho /\varphi })={{\mathrm{Cap}}}((E')_{r'})\le {{\mathrm{Cap}}}E+\delta \). By Proposition 3.6 with \(\kappa ={{\mathrm{Cap}}}E +\delta \), this event has a probability that is at most \(\exp [-t^{J_d({{\mathrm{Cap}}}E)-O(\delta )}]\), and taking \(\delta \downarrow 0\) we get the desired result. \(\square \)
4.1.2 Proof of Theorem 1.1
Proof
First consider \(\kappa <\kappa _d\). Since \(I_d(\kappa )\) is infinite for such \(\kappa \), it suffices to show that \(\lim _{t\rightarrow \infty } \log \mathbb{P }(\kappa ^*(t,\rho (t))\le \kappa \varphi ^{d-2}) /\log t=-\infty \). Let \(\kappa <\kappa '<\kappa _d\), and take \(E\) to be a ball of capacity \(\kappa '\). If \(\kappa ^*(t,\rho (t)) \le \kappa \varphi ^{d-2}\), then no translate \(x+\varphi _d(t)E,x\in \mathbb T ^d\), can be a subset of \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\). Applying Theorem 1.3, we conclude that \(\mathbb{P }(\kappa ^*(t,\rho (t))\le \kappa \varphi ^{d-2})\le \exp [-t^{J_d(\kappa )+o(1)}]\), which implies the desired result.
Next consider the LDP upper bound for \(\kappa \ge \kappa _d\). Since \(\kappa \mapsto I_d(\kappa )\) is increasing and continuous on \([\kappa _d,\infty ]\), it suffices to show that \(\mathbb{P }(\kappa ^* (t,\rho (t))\ge \kappa \varphi ^{d-2})\le t^{-I_d(\kappa )+o(1)}\) for \(\kappa >\kappa _d\). To this end, suppose that \(x+\varphi _d(t) E\subset \mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) for some \(x\in \mathbb T ^d\) and some \(E\subset \mathbb{R }^d\) compact with \({{\mathrm{Cap}}}E\ge \kappa \). As in the proof of Theorem 4.2, define \(n(t)=\left\lceil 2{\sqrt{d}}/\rho (t)\right\rceil \). Lemma 4.1 gives \((x'+\varphi _d(t) E(A))\cap W[0,t]=\varnothing \) for some \(x'\in G_{n(t)}\) and some \(A\) with \({{\mathrm{Cap}}}E(A) \ge {{\mathrm{Cap}}}E\ge \kappa \). The condition in (1.17) on \(\rho (t)\) implies the condition in (3.3) on \(n(t)\), and therefore we may apply Proposition 3.2 to conclude that \(\mathbb{P }(\kappa ^*(t,\rho (t))\ge \kappa \varphi ^{d-2}) \le t^{-I_d(\kappa )+o(1)}\).
Finally, the LDP lower bound for \(\kappa \ge \kappa _d\) will follow (with \(E\) the ball of capacity \(\kappa \), say) from the lower bound proved for Theorem 1.4 (see Sect. 4.2).\(\square \)
4.1.3 Proof of Theorem 1.2
Proof
As in the proof of Theorem 1.1, the lower bound will follow from the more specific lower bound proved for Theorem 1.4 (see Sect. 4.2).
Choose \(n(t)\) such that \(n(t)\ge 2\sqrt{d}/\rho (t)\) and the hypotheses of Proposition 3.5 hold. (The conditions on \(n(t)\) are mutually consistent because \(2\sqrt{d}/\rho (t)=O(1/\varphi _d(t))\).) Given any component \(C\) containing a ball of radius \(\rho (t)\) and having the form \(C=x+\varphi _d(t)E\) for \({{\mathrm{Cap}}}E\ge \kappa \), apply Lemma 4.1 to find \(x'_C\in G_{n(t)}\) and \(A_C\in {\fancyscript{A}}^\Box \) such that \(C\subset x'_C+\varphi _d(t)E(A_C)\subset C_{\rho (t)}\subset \mathbb T ^d{\setminus } W[0,t]\). The pairs \((x'_C,E(A_C))\) so constructed must be distinct: for \(C'\ne C\), we have \(x'_{C'}+\varphi _d(t)E(A_{C'})\subset C'_{\rho (t)} \subset (\mathbb T ^d{\setminus } C)_{\rho (t)}=\mathbb T ^d{\setminus } C_{-\rho (t)}\), and since \(C_{-\rho (t)}\) is non-empty by assumption, it follows that \(C\nsubseteq x'_{C'}+\varphi _d(t)E(A_{C'})\). We therefore conclude that \(\chi _{\rho (t)} (t,\kappa )\le \chi _+^\Box (t,n(t),\kappa )\), so the required upper bound follows from Proposition 3.5. \(\square \)
4.1.4 Proof of Proposition 1.10
Proof
Abbreviate \(\varphi =\varphi _d(t),\rho =\rho (t)\). It suffices to bound the probability that \(\mathbb T ^d{\setminus } W_\rho [0,t]\) has a component of diameter at least \(\tfrac{1}{2}\), since the mapping \(x+y\mapsto y\) from \(B(x,r)\subset \mathbb T ^d\) to \(B(0,r)\subset \mathbb{R }^d\) is a well-defined local isometry if \(r<\tfrac{1}{2}\).
Suppose that \(x\in \mathbb T ^d{\setminus } W_\rho [0,t]\) belongs to a connected component intersecting \(\partial B(x,\tfrac{1}{2})\). Then there is a bounded connected set \(E\subset \mathbb{R }^d\) such that \((x+\varphi E)\cap W_\rho [0,t]=\varnothing \) and \(E\cap \partial B(0,\tfrac{1}{2}\varphi ^{-1}) \ne \varnothing \) (see Fig. 6). Define \(n=n(t)=\left\lceil 2\sqrt{d}/\rho \right\rceil \) and apply Lemma 4.1 to conclude that \((x'+\varphi E(A))\cap W[0,t]=\varnothing \) with \(E\subset E(A)\), \(A\in {\fancyscript{A}}^\Box \), \(x'\in G_n\). Since \(E(A)\) contains \(E\), it has diameter at least \(\tfrac{1}{2}\varphi ^{-1}\), so \(A\) has diameter at least \(\tfrac{1}{2}n\) and must consist of at least \(n/(2\sqrt{d})\) unit cubes. Since \(\rho =o(\varphi )\) and \(\varphi =t^{-1/(d-2)+o(1)}\), we have \(n\ge t^{1/(d-2)+o(1)}\). The hypothesis in (1.17) implies that \(n\varphi =o((\log t)^{1/d})\), as in condition (3.3) from Proposition 3.2. Therefore \({{\mathrm{Vol}}}E(A) \ge (n\varphi )^{-d}n/(2\sqrt{d})\ge t^{1/(d-2)+o(1)}\), and in particular \({{\mathrm{Vol}}}E(A)\rightarrow \infty \). By (1.45), \({{\mathrm{Cap}}}E(A)\rightarrow \infty \) also. Thus, if \(\mathbb T ^d{\setminus } W_\rho [0,t]\) has a component of diameter at least \(\tfrac{1}{2}\), then the event in Proposition 3.2 occurs with \(\kappa \) arbitrarily large as \(t\rightarrow \infty \). By Proposition 3.2, the probability of this occurring is negligible, as claimed.\(\square \)
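Spelled out, the volume estimate combines the rescaling \(E(A)=(n\varphi )^{-1}A\) with the cube count (a routine computation, included for the reader's convenience):

\[
{{\mathrm{Vol}}}\,E(A) = (n\varphi )^{-d}\,{{\mathrm{Vol}}}\,A \ \ge \ (n\varphi )^{-d}\,\frac{n}{2\sqrt{d}},
\]

and since \(n\varphi =o((\log t)^{1/d})\) while \(n\) grows polynomially in \(t\), the right-hand side diverges.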
This proof is unchanged if the radius \(\tfrac{1}{2}\) is replaced by any \(\delta \in (0,\tfrac{1}{2})\), which shows that the maximal diameter \(D(t,\rho (t))\) satisfies \(D(t,\rho (t))\rightarrow 0\) in \(\mathbb{P }\)-probability when (1.17) holds (see Sect. 1.6.5).
4.2 Proof of Theorem 1.4
In Theorems 1.1–1.3 we deal with components that contain a subset \(x+\varphi _d(t)E\) of a given form. Theorem 1.4 adds the requirement that the component containing such a subset should not extend further than distance \(\delta \varphi _d(t)\) from \(x+\varphi _d(t)E\). In the proof, we will bound the probability that the component extends no further than distance \(\rho (t)\) from \(x+\varphi _d(t)E\), but only for sets \(E\in {\fancyscript{E}}^\Box _c\) of the following kind: define
to be the collection of sets in \({\fancyscript{E}}_c\) that are rescalings of lattice animals.
Note that, unlike in Sect. 3, the scaling factor \(\tfrac{1}{n}\) in (4.7) is fixed and does not depend on \(t\). We begin by showing that the collection \({\fancyscript{E}}^\Box _c\) is dense in \({\fancyscript{E}}_c\).
Lemma 4.3
Given \(E\in {\fancyscript{E}}_c\) and \(\delta >0\), there exists \(E^\Box \in {\fancyscript{E}}^\Box _c\) with \(E\subset E^\Box \subset E_\delta \).
Lemma 4.3 will allow us to prove Theorem 1.4 only for \(E\in {\fancyscript{E}}^\Box _c\).
Proof
For \(y\notin E\), define
Since \(\mathbb{R }^d{\setminus } E\) is open and connected, \(b(y)\) is continuous and positive on \(\mathbb{R }^d{\setminus } E\). By compactness, we may choose \(\eta \in (0,\delta )\) such that \(b(y)>\eta \) for \(y\notin E_\delta \). Apply Lemma 4.1 (with \(\rho \) and \(\varphi \) replaced by \(\eta \) and 1, and \(n\) sufficiently large) to find \(E'=\tfrac{1}{n}A\) with \(E\subset E'\subset E_\eta \). The set \(E'\) is a rescaled lattice animal, but \(\mathbb{R }^d{\setminus } E'\) might not be connected. However, if \(y\) belongs to a bounded component of \(\mathbb{R }^d{\setminus } E'\), then \(b(y)\le \eta \) by construction: since \(E'\subset E_\eta \), \(y\) cannot belong to the unbounded component of \(\mathbb{R }^d{\setminus } E_\eta \). By choice of \(\eta \), it follows that every bounded component of \(\mathbb{R }^d{\setminus } E'\) is contained in \(E_\delta \). Thus, if we define \(E^\Box \) to be \(E'\) together with these bounded components (see Fig. 7), then \(E^\Box \in {\fancyscript{E}}_c^\Box \) and \(E^\Box \subset E_\delta \), as claimed. \(\square \)
A set \(E\) (white) and its enlargement \(E_\delta \) (dark shading). Every bounded component of \(\mathbb{R }^d{\setminus } E_\delta \) can reach infinity without touching \(E_\eta \) (medium shading). A set \(E'\) (light shading) with \(E\subset E'\subset E_\eta \) may disconnect a region from infinity (diagonal lines), but this region must belong to \(E_\delta \)
In the proof of Theorem 1.4, we adapt the concept of \((N,\varphi ,r,R)\)-successful from Definition 2.3 to formulate the desired event in terms of excursions. To this end we next introduce the sets and events that we will use. In the remainder of this section, we abbreviate \(\varphi =\varphi _d(t),\rho =\rho (t),I_d(\kappa )=I(\kappa )\) and \(J_d(\kappa )=J(\kappa )\).
Fix \(E\in {\fancyscript{E}}_c^\Box \) and \(\delta >0\). We may assume that \(E\subset B(0,a)\) with \(a>\delta \). Let \(\eta \in (0,\tfrac{1}{2})\) be small enough that \(\kappa _d \eta ^{d-2}<{{\mathrm{Cap}}}E\). Set \(r=\varphi ^{1-\eta }\), \(R=\varphi ^{1-2\eta }\), and let \(\left\{ x_1,\cdots ,x_K\right\} \subset \mathbb T ^d\) denote a maximal collection of points in \(\mathbb T ^d\) satisfying \(d(x_0,x_j)>R\) and \(d(x_j,x_k)>2R\) for \(j\ne k\), so that
Take \(t\) large enough that \(\rho <\tfrac{1}{2}\delta \varphi \) and \(R<\tfrac{1}{2}\). Set \(N=(1+\eta ) N_d(t,r,R)\) (see (2.5)).
Choose \(q=q(t)\) with \(q>2a+\delta ,q\ge \log t\), and \(q=(\log t)^{O(1)}\). Let \(\left\{ y_1, \cdots ,y_L\right\} \subset B(0,2q){\setminus } E_\delta \) denote a maximal collection of points in \(B(0,2q){\setminus } E_\delta \) satisfying \(d(y_\ell ,E)\ge \delta ,d(y_\ell ,y_m)\ge \tfrac{1}{2}\rho /\varphi \) for \(\ell \ne m\), so that \(L=O((q\varphi /\rho )^d)=(\log t)^{O(1)}\) by (1.17).
(The collection \(\left\{ y_1,\cdots ,y_L\right\} \) will be used to ensure that a component containing \(x_j+\varphi E\) is contained in \(x_j+\varphi E_\delta \); see the event \(F_3(j)\) below. The requirements on \(q\) are chosen so that \(L\) is suitably bounded, while also allowing us to apply Lemma 3.4 to deal with components that are relatively far from \(x_j\).)
Let \(Z=\partial (E_{\rho /\varphi }) \cup ( \cup _{z\in B(0,2a)\cap \eta \mathbb{Z }^d} \partial B(z,\eta ) {\setminus } E_{\rho /\varphi } )\) (see Fig. 8: \(Z\) consists of a \((d-1)\)-dimensional shell around \(E\) together with a finite number of \((d-1)\)-dimensional spheres). Let \(\left\{ z_1,\cdots ,z_M\right\} \subset Z\) denote a maximal collection of points in \(Z\) with \(d(z_m,z_p)\ge \tfrac{1}{2}\rho /\varphi \) for \(m\ne p\). Since \(Z\) is \((d-1)\)-dimensional, we have \(M=O((\varphi /\rho )^{d-1})\).
For \(j=1,\cdots ,K\), define the following events.
-
\(F_1(j)=\left\{ \frac{1}{2}N\le N(x_j,t,r,R)\le N\right\} \) is the event that \(W\) makes between \(\tfrac{1}{2}N\) and \(N\) excursions from \(\partial B(x_j,r)\) to \(\partial B(x_j,R)\) by time \(t\).
-
\(F_2(j)\) is the event that \((x_j,E_{\rho /\varphi })\) is \((\left\lfloor N\right\rfloor ,\varphi ,r,R)\)-successful.
-
\(F_3(j)\) is the event that, for each \(\ell =1,\cdots ,L\), the \(i\)th excursion from \(\partial B(x_j,r)\) to \(\partial B(x_j,R)\) hits \(x_j+B(\varphi y_\ell , \tfrac{1}{2}\rho )\) for some \(i=i(\ell )\in \left\{ 1,\cdots ,\left\lfloor N/4\right\rfloor \right\} \).
-
\(F_4(j)\) is the event that, for each \(m=1,\cdots ,M\), the \(i\)th excursion from \(\partial B(x_j,r)\) to \(\partial B(x_j,R)\) hits \(x_j+B(\varphi z_m, \tfrac{1}{2}\rho )\) for some \(i=i(m) \in \left\{ \left\lfloor N/4\right\rfloor +1,\cdots ,\left\lfloor N/2\right\rfloor \right\} \).
-
\(F_5(j)\) is the event that \(\mathbb T ^d{\setminus } W_\rho [0,t]\) contains no component of capacity at least \(\varphi ^{d-2}{{\mathrm{Cap}}}E \) disjoint from \(B(x_j,2q\varphi )\).
-
\(F(j)=F_1(j)\cap F_2(j)\cap F_3(j)\).
-
\(F_\mathrm{max}(j)=F_1(j)\cap F_2(j)\cap F_3(j)\cap F_4(j)\cap F_5(j)\).
Lemma 4.4
On \(F(j)\), the component of \(\mathbb T ^d{\setminus } W_\rho [0,t]\) containing \(x_j+\varphi E\) satisfies condition \(({\fancyscript{C}}(t,\rho ,E,E'))\) with \(E'=E_\delta \). Furthermore, \(F_\mathrm{max}(j)\subset F_\rho (t,E,E_\delta )\) for \(t\) sufficiently large.
Proof
Note that if \(F_1(j)\cap F_2(j)\) occurs, then \(x_j+\varphi E\subset \mathbb T ^d{\setminus } W_\rho [0,t]\). If \(F_1(j)\cap F_3(j)\) occurs, then the set \(x_j+\cup _{\ell =1}^L B(\varphi y_\ell , \tfrac{1}{2}\rho )\) is entirely covered by the Wiener sausage. By choice of \(\left\{ y_1,\cdots , y_L\right\} \), this set contains \(x_j+( B(0,2q\varphi ){\setminus }\varphi E_\delta )\), and consequently \(( \mathbb T ^d{\setminus } W_\rho [0,t])\cap B(x_j,2q\varphi ) \subset x_j +\varphi E_\delta \).
We have therefore shown that, on \(F(j)\), \(\mathbb T ^d{\setminus } W_\rho [0,t]\) has a component containing \(x_j+\varphi E\) and satisfying condition \({\fancyscript{C}}(t,\rho ,E,E_\delta )\). To show further that \(F_\mathrm{max}(j)\subset F_\rho (t,E,E_\delta )\), we will show that any other component must have capacity smaller than \(\varphi ^{d-2}{{\mathrm{Cap}}}E\).
If \(F_1(j)\cap F_4(j)\) occurs, then \(x_j+\varphi Z\) is entirely covered by the Wiener sausage, by choice of \(\left\{ z_1,\cdots ,z_M\right\} \). By choice of \(Z\), all components of \(B(x_j,a\varphi ){\setminus } (x_j+\varphi Z)\), other than any components that are subsets of \(x_j+\varphi E_{\rho /\varphi }=x_j+(\varphi E)_\rho \), must be contained in a ball of radius \(\eta \varphi \), and in particular have capacity at most \(\kappa _d(\eta \varphi )^{d-2}<\varphi ^{d-2}{{\mathrm{Cap}}}E\).
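Both capacity estimates here (and the one in the final paragraph of this proof) rest on the standard scaling relation for Newtonian capacity in \(d\ge 3\):

\[
{{\mathrm{Cap}}}(\lambda E)=\lambda ^{d-2}\,{{\mathrm{Cap}}}E \qquad (\lambda >0),
\]

so that a ball of radius \(\eta \varphi \) has capacity \(\kappa _d(\eta \varphi )^{d-2}\), which is smaller than \(\varphi ^{d-2}{{\mathrm{Cap}}}E\) precisely because \(\eta \) was chosen with \(\kappa _d\eta ^{d-2}<{{\mathrm{Cap}}}E\).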
Finally, if \(F_5(j)\) occurs, then the component of largest capacity cannot occur outside \(B(x_j,2q\varphi )\), and therefore must be the component of largest capacity contained in \(x_j+(\varphi E)_\rho \).
It therefore remains to show that the component of largest capacity in \(x_j+(\varphi E)_\rho \) is in fact the component containing \(x_j+\varphi E\). Suppose that \(a\in x_j+\varphi E\) is the centre of a \((d-1)\)-dimensional ball of radius \(\rho \) that is completely contained in some face of \(x_j+\varphi E\), and let \(b\) be a point at distance at most \(\rho \) from \(a\) along the line perpendicular to the face (see Fig. 9). If both \(x_j+\varphi E\) and \(b\) are contained in \(\mathbb T ^d{\setminus } W_\rho [0,t]\), then so is the line segment from \(a\) to \(b\), so that \(b\) belongs to the same component as \(x_j+\varphi E\).
A point \(b\) near the centre \(a\) of a ball (thicker line) on a face of \(x_j+\varphi E\), and a point \(c\) near the boundary of a face. The Brownian path must not touch the dotted lines, but the Wiener sausage can fill the shaded circles by visiting the crossed points. The point \(c\) can belong to a different component than \(x_j+\varphi E\), but \(b\) cannot
We therefore conclude that, on \(F_\mathrm{max}(j)\), any point of \(x_j+(\varphi E)_\rho \) that is not in the same component as \(x_j+\varphi E\) must lie within distance \(2\rho \) of the boundary of some face of \(x_j+\varphi E\). Write \(H\) for the set of boundaries of faces of \(E\). Since \(H\) is \((d-2)\)-dimensional, its capacity is \(0\), and therefore \({{\mathrm{Cap}}}((\varphi H)_{2\rho })=\varphi ^{d-2} {{\mathrm{Cap}}}(H_{2\rho /\varphi })=o(\varphi ^{d-2})\) by Proposition 2.6(a), since \(\rho /\varphi \rightarrow 0\). In particular, for \(t\) sufficiently large the component of largest capacity in \(x_j+(\varphi E)_\rho \) must be the component containing \(x_j+\varphi E\), which completes the proof of Lemma 4.4.\(\square \)
Proof of Theorem 1.4
Because of the upper bound proved for Theorems 1.1–1.2, we need only prove the lower bounds
and
Moreover, it suffices to prove (4.10)–(4.11) under the assumption that \(E\in {\fancyscript{E}}_c^\Box \) and, in (4.10), that \({{\mathrm{Cap}}}E>\kappa _d\). Indeed, given any \(\delta '\in (0,\tfrac{1}{2}\delta )\), apply Lemma 4.3 to find \(E^\Box \in {\fancyscript{E}}_c^\Box \) with \(E\subset E^\Box \subset E_{\delta '}\). By adjoining, if necessary, a sufficiently small cube to \(E^\Box \), we may assume that \({{\mathrm{Cap}}}E^\Box > {{\mathrm{Cap}}}E\). Apply (4.10)–(4.11) with \(E\) and \(\delta \) replaced by \(E^\Box \) and \(\delta '\), respectively. Proposition 2.6(a) implies that \({{\mathrm{Cap}}}E^\Box \downarrow {{\mathrm{Cap}}}E\) as \(\delta '\downarrow 0\). Since \(\kappa \mapsto J(\kappa )\) is continuous, we conclude that the bounds for \(E\in {\fancyscript{E}}_c\) follow from those for \(E\in {\fancyscript{E}}^\Box _c\).
We next relate the left-hand side of (4.10) to the events \(F_1(j),\cdots ,F_5(j)\). Noting that \(F_1(j)\cap F_2(j)\cap F_1(k)\cap F_2(k) \subset F_5(j)^c\) for \(j\ne k\), Lemma 4.4 implies that
We will bound each of the sums on the right-hand side of (4.12).
Applying Proposition 2.1 and (4.9) (and noting that \(N_d(t,r,R) =t^{\eta +o(1)}\) and that \(\tfrac{1}{2}N/N_d(t,r,R)=\tfrac{1}{2}(1+\eta )<\tfrac{3}{4}\)), we see that the second sum in the right-hand side of (4.12) is at most \(t^{d/(d-2)+O(\eta )} \exp [-c t^{\eta +o(1)}]\). This term will be negligible compared to the scale of (4.10).
For the last sum in (4.12), we assume that \({{\mathrm{Cap}}}E>\kappa _d\) and use Lemma 3.4. Set \(h(t)=2q\varphi \), and note that \(h(t)/(\varphi \log t)\ge 1\) by assumption on \(q\). If \(F_1(j)\cap F_2(j)\cap F_5(j)^c\) occurs, then, by Lemma 4.1, there are lattice animals \(A,A'\in {\fancyscript{A}}^\Box \) with \({{\mathrm{Cap}}}E(A), {{\mathrm{Cap}}}E(A')\ge {{\mathrm{Cap}}}E\) and a point \(x'\in \mathbb T ^d{\setminus } B(x_j,2q\varphi )\) with \((x_j+\varphi E(A)) \cap W[0,t]=(x'+\varphi E(A'))\cap W[0,t]=\varnothing \). By Lemma 3.4 with \(\kappa ^{(1)}=\kappa ^{(2)}={{\mathrm{Cap}}}E\), we have
Hence the last sum in (4.12) is at most \(t^{-2I({{\mathrm{Cap}}}E)+O(\eta )}\). Since \(I({{\mathrm{Cap}}}E)>0\), this term is also negligible, for \(\eta \) sufficiently small, compared to the scale of (4.10). (This is the only part of the proof where \({{\mathrm{Cap}}}E>\kappa _d\) is used.)
We have therefore proved that (4.10) will follow if we can give a suitable lower bound for the first sum on the right-hand side of (4.12). Using again the asymptotics (4.9) for \(K\), (4.10) will follow from
In fact, (4.14) also implies (4.11). On the event \(\cap _{j=1}^K F_1(j)\) (which occurs with high probability, by Proposition 2.1), Lemma 4.4 implies that \(\chi _\rho (t,E,E_\delta )\) is at least as large as the number of \(j\in \left\{ 1,\cdots ,K\right\} \) for which \(F_2(j)\cap F_3(j)\) occurs. Since the events \(F_2(j)\cap F_3(j)\) are conditionally independent for different \(j\) given the starting and ending points \(((\xi '_i(x_j), \xi _i(x_j))_{i=1}^N)_{j=1}^K\), (4.14) and (4.9) immediately imply that \(\chi _\rho (t,E,E_\delta )\ge t^{J({{\mathrm{Cap}}}E)-O(\eta )}\) with high probability (cf. the proof of Proposition 3.5 in Sect. 3.2.1).
It therefore remains to prove (4.14). To do so, we will condition on not hitting \(x_j+(\varphi E)_\rho \) and use the following lemma to estimate the conditional probability of hitting small nearby balls. Note that, conditional on the occurrence of \(F_2(j)\) and the starting and ending points \((\xi '_i(x_j),\xi _i(x_j))_{i=1}^N\), the events \(F_3(j)\) and \(F_4(j)\) are independent.
Lemma 4.5
Fix \(E\in {\fancyscript{E}}_c^\Box \) and \(\delta >0\), and let \(0<\rho <\varphi <r<R<\tfrac{1}{2}\). Then there is an \(\varepsilon >0\) such that if \(\rho /\varphi <\varepsilon ,\varphi /r<\varepsilon \) and \(r/R\le \tfrac{1}{2}\), then, uniformly in \(x\in \mathbb T ^d,\xi '\in \partial B(x,r)\), and \(\xi \in \partial B(x,R)\),
where \(\alpha >d-2\) is some constant depending only on \(d\).
We give the proof of Lemma 4.5 in Sect. 6.2.
The event \(F_3(j)\) says that all \((x_j,B(y_\ell ,\tfrac{1}{2}\rho /\varphi )),\ell =1,\cdots ,L\), are not \((\left\lfloor N/4\right\rfloor ,\varphi ,r,R)\)-successful. Lemma 4.5 implies (as in the proof of Proposition 2.4) that, uniformly in \(\ell \),
Recalling (1.3) and (2.5), we have \(N(\varphi /r)^{d-2}\ge (d/(d-2)+O(\eta ))\log t\), so that
By (1.17), \((\log t)^{1/d}\rho /\varphi \rightarrow \infty \), whereas \(L=(\log t)^{O(1)}\). Hence, the conditional probability in (4.17) is \(o(1)\) and \(\mathbb P \left( \left. F_3(j)\,\right| F_2(j)\right) =1-o(1)\).
For \(F_4(j)\), write \(k=\left\lfloor N/2\right\rfloor -\left\lfloor N/4\right\rfloor \) and \(p=\varepsilon (\varphi /r)^{d-2} (\rho / \varphi )^\alpha \). Lemma 4.5 states that, conditional on \(F_2(j)\), each ball \(x_j+B(\varphi z_m,\tfrac{1}{2}\rho )\) has a probability at least \(p\) of being hit during each of the \(k\) excursions from \(\partial B(x_j,r)\) to \(\partial B(x_j,R)\) in the definition of \(F_4(j)\). It follows that \(\mathbb P \left( \left. F_4(j)\,\right| F_2(j)\right) \) is at least the probability that a Binomial\((k,p)\) random variable has value \(M\) or larger. We have \(p\rightarrow 0\) and \(k-M\rightarrow \infty \) as \(t\rightarrow \infty \), so using Stirling’s approximation, we get
Observe that \(kp=e^{O(1)}N_d(t,r,R)(\varphi /r)^{d-2}(\rho /\varphi )^\alpha =e^{O(1)}(\rho /\varphi )^\alpha \log t\). The assumption \(\rho /\varphi \rightarrow 0\) implies that \(kp=o(\log t)\). On the other hand, recall that \(M=O((\varphi /\rho )^{d-1})\), so that \(M/kp=e^{O(1)}(\varphi /\rho )^{\alpha +d-1}/\log t\). The hypothesis (1.17) means that \(\varphi /\rho =o((\log t)^{1/d})\). Consequently, \(M=o((\log t)^{(d-1)/d})\) and \(\log (M/kp)\le O(\log \log t)\). In particular, \(M\log (M/kp) \le o(\log t)\), and we conclude that
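The displayed conclusion following this sentence is omitted in the excerpt; a minimal sketch of the Stirling-type lower bound being invoked, consistent with the estimates above and using the elementary inequality \(\binom{k}{M}\ge (k/M)^M\), might read:

```latex
\mathbb{P}\bigl(\mathrm{Bin}(k,p)\ge M\bigr)
\;\ge\; \binom{k}{M}\,p^{M}(1-p)^{k-M}
\;\ge\; \Bigl(\frac{kp}{M}\Bigr)^{\!M} e^{-kp(1+o(1))}
\;=\; \exp \bigl( -M\log (M/kp)-kp(1+o(1)) \bigr) ,
```

which is \(t^{-o(1)}\) because \(M\log (M/kp)\le o(\log t)\) and \(kp=o(\log t)\); here \((1-p)^{k-M}\ge e^{-kp(1+o(1))}\) uses \(p\rightarrow 0\).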
Combining (4.17), (4.19), and Proposition 2.4, we obtain
We have therefore verified (4.14), and this completes the proof.\(\square \)
4.3 Proof of Proposition 1.11
Proof
\(\mathbb T ^d{\setminus } W[0,t]\) is open since \(W[0,t]\) is the (almost surely) continuous image of a compact set.
Consider first a Brownian motion \(\tilde{W}\) in \(\mathbb{R }^d\). Define
and note that \(\tilde{Z}\) is the inverse image \(\pi _0^{-1}(Z)\) of a path-connected, locally path-connected, dense subset \(Z=\pi _0(\tilde{Z})\subset \mathbb T ^d\) (where \(\pi _0:\mathbb{R }^d\rightarrow \mathbb T ^d\) is the canonical projection map). Since \(\tilde{Z}\) is the countable union of \((d-2)\)-dimensional subspaces, with probability \(1\) the path \(\tilde{W}\left[ 0,\infty \right) \) does not intersect \(\tilde{Z}\), except perhaps at its starting point. Projecting onto \(\mathbb T ^d\), it follows that \(W\left[ 0,\infty \right) \) intersects \(Z\) in at most one point, and in particular \(\mathbb T ^d {\setminus } W\left[ 0,\infty \right) \) contains a path-connected, locally path-connected, dense subset. This implies the remaining statements in Proposition 1.11. \(\square \)
5 Proofs of Corollaries 1.5–1.9
5.1 Proof of Corollary 1.5
Proof
(1.24) follows immediately from the more precise statements in (1.25)–(1.26). By monotonicity and continuity, it suffices to show (1.25) for \({{\mathrm{Cap}}}E>\kappa _d\).
Consider first the lower bounds in (1.25)–(1.26). Replace \(E\) by the compact set \(\mathrm{clo}\left( E\right) \) (by hypothesis, this does not change the value of \({{\mathrm{Cap}}}E\)). Let \(\kappa >{{\mathrm{Cap}}}E\) be arbitrary and use Proposition 2.6(a) to find \(r>0\) such that \({{\mathrm{Cap}}}(E_r)\le \kappa \). Adjoin finitely many lines to \(E_r\) to make it into a connected set \(E'\) (as in the proof of Theorem 4.2) and then adjoin any bounded components of \(\mathbb{R }^d{\setminus } E'\) to form a set \(E''\in {\fancyscript{E}}_c\) that satisfies the conditions of Theorem 1.4. For \({{\mathrm{Cap}}}E\ge \kappa _d\), Theorem 1.4 implies that \(x+\varphi _d(t)E\subset \mathbb T ^d{\setminus } W[0,t]\) for some \(x\in \mathbb T ^d\), with probability at least \(t^{J_d(\kappa )-o(1)}\). If instead \({{\mathrm{Cap}}}E<\kappa _d\), then it is no loss of generality to assume that \(\kappa <\kappa _d\) also. Then Theorem 1.4 shows that there are at least \(t^{J_d(\kappa )-o(1)}\) components containing translates \(x+\varphi _d(t) E\); these translates are necessarily disjoint. In both cases we conclude by taking \(\kappa \downarrow {{\mathrm{Cap}}}E\).
For the upper bounds, we will shrink the set \(E\). The results nearly follow from Theorems 1.1–1.2, since the existence of \(x+\varphi _d(t) E\subset \mathbb T ^d{\setminus } W[0,t]\) implies the existence of \(x+(\varphi _d(t) E)_{-\rho (t)}\subset \mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\). However, the set \(E\) might not be connected. To handle this possibility, we will appeal directly to Lemmas 3.3 and 3.7.
Let \(\kappa \in (\kappa _d,{{\mathrm{Cap}}}E)\) (for (1.25)) or \(\kappa \in (0,{{\mathrm{Cap}}}E)\) (for (1.26)) be arbitrary. Apply Proposition 2.6(c) to find an \(r>0\) such that \({{\mathrm{Cap}}}(E_{-2r})>\kappa \). The enlargement \((E_{-2r})_r\) has a finite number \(k\) of components, by boundedness. Set \(\rho =\rho (t)=r\varphi _d(t)\) and choose \(n=n(t)\) such that \(n(t) \ge 2\sqrt{d}/\rho (t)\) and the hypotheses of Proposition 3.5 hold. (As in the proof of Theorem 1.2, these conditions on \(n(t)\) are mutually consistent.) Apply Lemma 4.1 to each of the \(k\) components of \((E_{-2r})_r\) to obtain a set \(E^\Box =\cup _{j=1}^k E(A^{(j)})\) satisfying \((E_{-2r})_r\subset E^\Box \subset (E_{-2r})_{2r}\subset E\). Thus, \({{\mathrm{Cap}}}E^\Box \ge \kappa \). Furthermore, given \(x\in \mathbb T ^d\) there is \(x'\in G_{n(t)}\) such that \(x'+\varphi _d(t)E^\Box \subset x+\varphi _d(t)((E_{-2r})_{2r}) \subset x+\varphi _d(t) E\). Define \(h(t)=C\varphi _d(t)\), where \(C\) is a constant large enough so that \(E\subset B(0,C)\). For \({{\mathrm{Cap}}}E>\kappa _d\), we can then apply Lemma 3.3 to conclude that \(\mathbb{P }(\exists x\in \mathbb T ^d:\, x+\varphi _d(t)E\subset \mathbb T ^d{\setminus } W[0,t])\le t^{J_d(\kappa )+o(1)}\). For \({{\mathrm{Cap}}}E<\kappa _d\), Lemma 3.7 implies that \(\chi (t,E)\le \chi _+^\Box (t,n(t),\kappa ,h(t))\le t^{J_d(\kappa )+o(1)}\) with high probability. In both cases take \(\kappa \uparrow {{\mathrm{Cap}}}E\). \(\square \)
5.2 Proof of Corollaries 1.6–1.7
Proof
Note the scaling relation
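The display for this relation, equation (5.1), is missing from this excerpt. Given how it is used below (converting eigenvalues of components at scale \(\varphi _d(t)\) into capacities via (1.45)), it is presumably the standard behaviour of the principal Dirichlet eigenvalue and the Newtonian capacity under dilation; as a hedged reconstruction, not a quotation from the paper:

```latex
\lambda (sE)\;=\;s^{-2}\,\lambda (E),
\qquad
{{\mathrm{Cap}}}(sE)\;=\;s^{d-2}\,{{\mathrm{Cap}}}E,
\qquad s>0 .
```

These identities are what make the combinations \(\varphi _d(t)^2\lambda (t,\rho (t))\) and \(\varphi _d(t)^{-(d-2)}\kappa ^*(t,\rho (t))\) the natural scale-free quantities in Corollaries 1.6–1.7.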
Corollaries 1.6–1.7 follow from Theorems 1.1, 1.3 and 1.4 because of the inequalities (1.45). Indeed, apart from the fact that the principal Dirichlet eigenvalue \(\lambda (E)\) is decreasing in \(E\) rather than increasing, the proofs are identical and we will prove only Corollary 1.7.
Since \(\lambda \mapsto I^\mathrm{Dirichlet}_d(\lambda )\) is continuous and decreasing on \(\left( 0,\lambda _d\right] \), it suffices to prove (1.30) and to show that \(\mathbb{P }(\varphi _d(t)^2\lambda (t,\rho (t)) \le \lambda )=t^{-I_d^\mathrm{Dirichlet}(\lambda )+o(1)}\) for \(\lambda <\lambda _d\).
For (1.30), note that \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) cannot contain a ball of capacity \(>\kappa _d(\lambda _d/\lambda (t,\rho (t)))^{(d-2)/2}\): by (1.9) and (5.1), the component of \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) containing such a ball would have an eigenvalue strictly smaller than \(\lambda (t,\rho (t))\). In particular, if \(\lambda >\lambda _d\) and \(\lambda (t,\rho (t)) \ge \lambda \varphi _d(t)^{-2}\), then \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) cannot contain a ball of capacity \(\kappa _d\, \varphi _d(t)^{d-2}((\lambda _d/\lambda )^{(d-2)/2}+\delta )\) for any \(\delta >0\). Taking \(\delta \) small enough so that \((\lambda _d/\lambda )^{(d-2)/2}+\delta <1\), applying Theorem 1.3 with \(E\) the ball of capacity \(\kappa _d((\lambda _d/\lambda )^{(d-2)/2} +\delta )\), and letting \(\delta \downarrow 0\), we obtain (1.30).
Now take \(\lambda <\lambda _d\). By Proposition 1.10, apart from an event of negligible probability, every component \(C\) of \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) can be isometrically identified (under its intrinsic metric) with a bounded open subset \(E\) of \(\mathbb{R }^d\), via \(C=x+E\) for some \(x\in \mathbb T ^d\). In particular, \(\lambda (C)=\lambda (E)\), and we can apply (1.45) to conclude that \(\kappa ^*(t,\rho (t))\ge {{\mathrm{Cap}}}E\ge \kappa _d(\lambda _d/ \lambda (C))^{(d-2)/2}\). Applying Theorem 1.1,
For the reverse inequality, note that Theorem 1.4 implies that \(\mathbb T ^d{\setminus } W_{\rho (t)}[0,t]\) contains a ball of capacity \(\kappa _d\,\varphi _d(t)^{d-2}(\lambda _d/ \lambda )^{(d-2)/2}\) with probability at least \(t^{-I_d^\mathrm{Dirichlet}(\lambda )-o(1)}\).\(\square \)
5.3 Proof of Corollary 1.8
Proof
Since \(r\mapsto I_d^\mathrm{inradius}(r)\) is continuous and strictly increasing on \(\left[ 1,\infty \right) \) and is infinite elsewhere, it suffices to verify (1.35) and show \(\mathbb{P }(\rho _\mathrm{in}(t)> r\varphi _d(t))= t^{-I_d^\mathrm{inradius}(r) +o(1)}\) for \(r\ge 1\). But the events \(\left\{ \rho _\mathrm{in}(t)\le r\varphi _d(t)\right\} \) and \(\left\{ \rho _\mathrm{in}(t)> r\varphi _d(t)\right\} \) are precisely the event
and its complement
from Theorem 1.3, with \(\rho (t)=0\), and equation (1.25) from Corollary 1.5, with \(E=B(0,r)\).\(\square \)
5.4 Proof of Corollary 1.9
Proof
Recall that \(\left\{ \rho _\mathrm{in}(t) > \varepsilon \right\} = \left\{ {\fancyscript{C}}_\varepsilon > t\right\} \), so that setting \(t=u\psi _d(\varepsilon )\) and \(r=\varepsilon /\varphi _d(u\psi _d(\varepsilon ))\) rewrites the event \(\left\{ {\fancyscript{C}}_\varepsilon >u\psi _d(\varepsilon )\right\} \) as \(\left\{ \rho _\mathrm{in}(t)>r\varphi _d(t)\right\} \). By (1.38), \(r\rightarrow (u/d)^{1/(d-2)}\) as \(\varepsilon \downarrow 0\). Since \(r\mapsto I_d^\mathrm{inradius}(r)\) is continuous on \((1,\infty )\), it follows that \(\mathbb{P }({\fancyscript{C}}_\varepsilon >u\psi _d(\varepsilon ))=t^{-I_d^\mathrm{inradius}((u/d)^{1/(d-2)})+o(1)}\) for \(u>d\). Noting that \(t=\varepsilon ^{-(d-2)+o(1)}\), this last expression is \(\varepsilon ^{I_d^\mathrm{cover}(u)+o(1)}\). A similar argument proves (1.37). Because \(u\mapsto I_d^\mathrm{cover}(u)\) is continuous and strictly increasing on \(\left[ d,\infty \right) \) and \(I_d^\mathrm{cover}(u)=\infty \) otherwise, these two facts complete the proof.\(\square \)
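As a consistency check of the limit \(r\rightarrow (u/d)^{1/(d-2)}\): the definition of \(\psi _d(\varepsilon )\) in (1.38) is not restated in this excerpt, but assuming the natural cover-time scale \(\psi _d(\varepsilon )=\log (1/\varepsilon )/(\kappa _d\,\varepsilon ^{d-2})\) (an assumption for illustration; the paper's (1.38) is authoritative), the definition of \(\varphi _d\) in the abstract gives

```latex
r^{d-2}
\;=\;\frac{\varepsilon ^{d-2}}{\varphi _d(t)^{d-2}}
\;=\;\frac{(d-2)\,\kappa _d\,\varepsilon ^{d-2}\,t}{d\,\log t}
\;=\;\frac{(d-2)\,u\,\log (1/\varepsilon )}{d\,\log t}
\;\longrightarrow\;\frac{u}{d},
```

using \(t=u\psi _d(\varepsilon )\) in the third equality and \(\log t=(d-2+o(1))\log (1/\varepsilon )\) as \(\varepsilon \downarrow 0\) in the limit.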
Notes
The choice \(\rho (t)=0\) makes the eigenvalue result in Corollary 1.7 false for \(d\ge 4\), since the path of the Brownian motion itself is a polar set for \(d\ge 4\). However, for \(d = 3\) the eigenvalue \(\lambda (t,\rho (t))\) is non-trivial even when \(\rho (t)=0\), and we conjecture that Corollary 1.7 remains valid, i.e., the eigenvalue is determined primarily by the large lakes in \(\mathbb T ^d{{\setminus }} W[0,t]\), and not by the narrow channels connecting them. See the conjectures in van den Berg et al. [6].
If the starting point \(x_0\) lies inside \(B(x,R)\), then the Brownian motion may travel from \(\partial B(x,r)\) to \(\partial B(x,R)\) before time \(T_0\). To simplify the application of Dembo et al. [9, Lemma 2.4], we do not call this an excursion from \(\partial B(x,r)\) to \(\partial B(x,R)\).
This follows from hitting estimates for Brownian motion in a cone. For instance, in the notation of Burkholder [8, pp. 192–193], the harmonic functions on \(C(0,z_0)\) given by \(u_1(z)=r_0^{p+d-2}(\left| z\right| ^{-(p+d-2)}-\left| z\right| ^p) h(\vartheta )\) and \(u_2(z)=\left| z\right| ^p h(\vartheta )\) (with \(\vartheta \) the angle between \(z\) and \(z_0\), and the value \(p>0\) chosen so that \(u_1(z)=u_2(z)=0\) on \(\partial C(0,z_0)\)) are lower bounds for the probabilities, starting from \(z\in C(0,z_0)\), of hitting \(\partial B(0,r_0)\) before \(\partial B(0,1) \cup \partial C(0,z_0)\) and of hitting \(\partial B(0,1)\) before \(\partial C(0,z_0)\), respectively.
References
Bandle, C.: Isoperimetric Inequalities and Applications, vol. 7. Monographs and Studies in Mathematics. Pitman, Boston (1980)
Benjamini, I., Sznitman, A.S.: Giant component and vacant set for random walk on a discrete torus. J. Eur. Math. Soc. 10(1), 133–172 (2008)
van den Berg, M.: Heat equation on the arithmetic von Koch snowflake. Probab. Theory Rel. Fields 118, 17–36 (2000)
van den Berg, M., Bolthausen, E.: Area versus capacity and solidification in the crushed ice model. Probab. Theory Rel. Fields 130, 69–108 (2004)
van den Berg, M., Bolthausen, E., den Hollander, F.: Moderate deviations for the volume of the Wiener sausage. Ann. Math. 153(2), 355–406 (2001)
van den Berg, M., Bolthausen, E., den Hollander, F.: Heat content and inradius for regions with a Brownian boundary (2012). arXiv:1304.0579 [math.PR]
van den Berg, M., den Hollander, F.: Asymptotics for the heat content of a planar region with a fractal polygonal boundary. Proc. Lond. Math. Soc. 78(3), 627–661 (1999)
Burkholder, D.L.: Exit times of Brownian motion, harmonic majorization, and Hardy spaces. Adv. Math. 26(2), 182–205 (1977)
Dembo, A., Peres, Y., Rosen, J.: Brownian motion on compact manifolds: cover time and late points. Electron. J. Probab. 8(15), 1–14 (2003)
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Cover times for Brownian motion and random walks in two dimensions. Ann. Math. 160(2), 433–464 (2004)
Doob, J.L.: Classical Potential Theory and Its Probabilistic Counterpart. Grundlehren der mathematischen Wissenschaften, vol. 262. Springer, Berlin (1984)
Klarner, D.A.: Cell growth problems. Can. J. Math. 19, 851–863 (1967)
Levine, L., Peres, Y.: Strong spherical asymptotics for rotor-router aggregation and the divisible sandpile. Potential Anal. 30, 1–27 (2009)
Matheron, G.: Random Sets and Integral Geometry. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1975)
Mejía Miranda, Y., Slade, G.: The growth constants of lattice trees and lattice animals in high dimensions. Electron. Comm. Probab. 16(13), 129–136 (2011)
Molchanov, I.: Theory of Random Sets. Probability and its Applications. Springer, Berlin (2005)
Pólya, G., Szegö, G.: Isoperimetric Inequalities in Mathematical Physics. Annals of Mathematics Studies, vol. 27. Princeton University Press, Princeton (1951)
Popov, S., Teixeira, A.: Soft local times and decoupling of random interlacements (2012). arXiv:1212.1605 [math.PR]
Port, S.C., Stone, C.J.: Brownian Motion and Classical Potential Theory. Probability and Mathematical Statistics. Academic Press, New York (1978)
Sidoravicius, V., Sznitman, A.S.: Percolation for the vacant set of random interlacements. Commun. Pure Appl. Math. 62(6), 831–858 (2009)
Sznitman, A.: Brownian Motion, Obstacles and Random Media. Springer Monographs in Mathematics. Springer, Berlin (1998)
Sznitman, A.S.: Vacant set of random interlacements and percolation. Ann. Math. 171(3), 2039–2087 (2010)
Sznitman, A.S.: On scaling limits and Brownian interlacements (2012). arXiv:1209.4531 [math.PR]
Teixeira, A., Windisch, D.: On the fragmentation of a torus by random walk. Commun. Pure Appl. Math. 64(12), 1599–1646 (2011)
Acknowledgments
The research of the authors was supported by the European Research Council through ERC Advanced Grant 267356 VARIS. The authors are grateful to M. van den Berg for helpful input.
Appendix: Hitting probabilities for excursions
1.1 Unconditioned excursions: proof of Lemma 2.5
Proof
Since \(R<\tfrac{1}{2}\), we may consider \(x,\xi ',\xi ,W(t)\) to have values in \(\mathbb{R }^d\) instead of \(\mathbb T ^d\). Furthermore, w.l.o.g. we may assume that \(x=0\).
We first remove the effect of conditioning on the exit point \(\xi \in \partial B(0,R)\). Define \(T=\sup \left\{ t<\zeta _R:\,d(0,W(t))\le r\right\} \) to be the last exit time from \(B(0,r)\) before time \(\zeta _R\); note that \(E\cap W[0,\zeta _R]=E\cap W[0,T]\). Let \(\tilde{r}\in (r,R)\) and define \(\tilde{\tau }=\inf \left\{ t>T:\,d(0,W(t))=\tilde{r}\right\} \) to be the first hitting time of \(\partial B(0,\tilde{r})\) after time \(T\).
Under \(\mathbb{P }_{\xi '}\) (i.e., without conditioning on the exit point \(W(\zeta _R)\)) we can express \((W(t))_{0\le t\le \zeta _R}\) as the initial segment \((W(t))_{0\le t\le \tilde{\tau }}\) followed by a Brownian motion, conditionally independent given \(W(\tilde{\tau })\), started at \(\tilde{\xi }=W(\tilde{\tau })\) and conditioned to exit \(B(0,R)\) before hitting \(B(0,r)\). Let \(f_{\tilde{r},R}(\tilde{\xi },\cdot )\) denote the density, with respect to the uniform measure \(\sigma _R\) on \(\partial B(0,R)\), of the first hitting point \(W(\zeta _R)\) on \(\partial B(0,R)\) for this conditioned Brownian motion. Then for \(S\subset \partial B(0,R)\) measurable, we have
From (6.1) it follows that the conditioned measure \(\mathbb{P }_{\xi ',\xi }\) satisfies
(More precisely, we would conclude (6.2) for \(\sigma _R\)-a.e. \(\xi \), but by a continuity argument we can take (6.2) to hold for all \(\xi \).)
Now choose \(\tilde{r}\) in such a way that \(R/\tilde{r}\rightarrow \infty , \tilde{r}/r\rightarrow \infty \), for instance, \(\tilde{r}=\sqrt{rR}\). Since \(R/\tilde{r}\rightarrow \infty \), we have \(f_{\tilde{r},R} (\tilde{\xi },\xi )=1+o(1)\), uniformly in \(\tilde{\xi },\xi \). Therefore
By the Markov property, the last term in (6.3) is the probability of hitting \(E\) when starting from some point \(W(\zeta _R)\in \partial B(0,R)\) (averaged over the value of \(W(\zeta _R)\)). Since \(R/r\rightarrow \infty \), this is an error term, and the proof is complete once we show that
Note that (6.4) is essentially the limit in (1.11), taken uniformly over the choice of \(E\subset B(0,\varepsilon )\).
To show (6.4), let \(g_\varepsilon (\xi ',\cdot )\) denote the density, with respect to the uniform measure \(\sigma _\varepsilon \) on \(\partial B(0,\varepsilon )\), of the first hitting point of \(\partial B(0,\varepsilon )\) for a Brownian motion started at \(\xi '\) and conditioned to hit \(B(0,\varepsilon )\). Then
Since \(r/\varepsilon \rightarrow \infty \), we have \(g_\varepsilon (\xi ',y)\rightarrow 1\) uniformly in \(\xi ',y\). Hence (6.4) follows from (6.5) and (2.15).\(\square \)
1.2 Excursions avoiding an obstacle: proof of Lemma 4.5
Proof
It suffices to bound from below
since conditioning on \((x+(\varphi E)_\rho )\cap W[0,\zeta _R]=\varnothing \) can only increase the probability in (6.6). Moreover, as in the proof of Lemma 2.5, we may replace \(\mathbb{P }_{\xi ',\xi }\) by \(\mathbb{P }_{\xi '}\), using now that the densities \(f_{\tilde{r},R}\) and \(g_\varepsilon \) are bounded away from \(0\) and \(\infty \) when \(r\le \tfrac{1}{2}R\).
Fix \(E\in {\fancyscript{E}}_c^\Box \), so that \(E=\tfrac{1}{n}A\) for some \(A\in {\fancyscript{A}}^\Box \cap {\fancyscript{E}}_c\) and \(n\in \mathbb{N }\), and fix \(\delta >0\) (we may assume that \(\delta <1/(2n)\)). By assumption, \(E\) is bounded, say \(E\subset B(0,a)\). By adjusting \(\varepsilon \), we may assume that \(\rho /\varphi <a\) (so that \((\varphi E)_\rho \subset B(0,2a\varphi )\)) and \(r>4a\varphi \). We distinguish between three cases:
\(\bullet \) \(y\in B(0,3a) {\setminus } E_\delta \). Consider \(w\in \mathbb{Z }^d{\setminus } A\). Because \(A\in {\fancyscript{E}}_c\), there is a finite path of open cubes with centres \(w_0,w_1,\dots ,w_k\in \mathbb{Z }^d\) such that \(w_0\in \mathbb{Z }^d{\setminus } B(0,3an)\), \(w_k=w\), \(d(w_{j-1},w_j)=1\), and \(\mathrm{int}\left( \cup _{j=0}^k (w_j+[-\tfrac{1}{2},\tfrac{1}{2}]^d)\right) \cap A=\varnothing \). By compactness, the length \(k\) of such paths may be taken to be uniformly bounded. Hence, if \(\rho /\varphi <\delta /2\), then, given \(\xi ''\in \partial B(x,3a\varphi )\), there is a path \(\varGamma \subset B(x,3a\varphi )\) from \(\xi ''\) to \(x+\varphi y\) consisting of a finite number of line segments, each of length at most \(\varphi \), such that \(\varGamma _{\delta \varphi /2}\cap (x+(\varphi E)_\rho )=\varGamma _{\delta \varphi /2} \cap (x+\varphi (E_{\rho /\varphi }))=\varnothing \). Moreover, the number of line segments can be taken to be bounded uniformly in \(y\) and \(\xi ''\). In fact, \(\varGamma \) can be chosen as the union of line segments between points \(x+\varphi w_0/n,\dots , x+\varphi w_k/n\) as above, together with a bounded number of line segments to join \(\xi ''\) to \(x+\varphi w_0/n\) in \(B(x,3a\varphi ) {\setminus } B(x,2a\varphi )\) and to join \(x+\varphi w_k/n\) to \(x+\varphi y\) in the cube \(x+(\varphi /n)(w+[-\tfrac{1}{2}, \tfrac{1}{2}]^d)\) containing \(y\) (see Fig. 10).
From \(\xi '\in \partial B(x,r)\), the Brownian path reaches \(\partial B(x,3a\varphi )\) before \(\partial B(x,R)\) with probability \((r^{-(d-2)}-R^{-(d-2)})/((3a\varphi )^{-(d-2)}-R^{-(d-2)})\). By our assumptions, this is at least \(c_1 (\varphi /r)^{d-2}\) for some \(c_1>0\). Uniformly in the first hitting point \(\xi ''\) of \(\partial B(x,3a\varphi )\), there is a positive probability of hitting \(\partial B(x+\varphi y,\tfrac{1}{4}\delta \varphi )\) via \(\varGamma _{\delta \varphi /2}\) before hitting \(\partial B(x,4a\varphi )\). The probability of next hitting \(\partial B(x+\varphi y, \tfrac{1}{2}\rho )\) before \(\partial B(x+\varphi y,\tfrac{1}{2}\delta \varphi )\) is
which is at least \(c_2(\rho /\varphi )^{d-2}\) for some \(c_2>0\). Thereafter there is a positive probability of returning to \(\partial B(x,r)\) without hitting \(x+(\varphi E)_\rho \), via \(\varGamma _{\delta \varphi /2}\). Combining these probabilities gives the required bound.
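The explicit probability at the start of this case is the classical annulus hitting formula; as a reminder (a standard fact of potential theory, not a quotation from the paper), it follows by applying optional stopping to the harmonic function \(z\mapsto \left| z\right| ^{-(d-2)}\) in \(\mathbb{R }^d\), \(d\ge 3\): for \(s_1<s<s_2\) and a Brownian motion started on \(\partial B(0,s)\),

```latex
\mathbb{P}_{s}\bigl(\text{hit } \partial B(0,s_1)
\text{ before } \partial B(0,s_2)\bigr)
\;=\;\frac{s^{-(d-2)}-s_2^{-(d-2)}}{s_1^{-(d-2)}-s_2^{-(d-2)}} .
```

Taking \(s=r\), \(s_1=3a\varphi \), \(s_2=R\) gives the first bound in this case, and \(s=\tfrac{1}{4}\delta \varphi \), \(s_1=\tfrac{1}{2}\rho \), \(s_2=\tfrac{1}{2}\delta \varphi \) (centred at \(x+\varphi y\)) gives the second.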
\(\bullet \) \(y\in E_\delta {\setminus } E_{\rho /\varphi }\). We have \(y\in \tfrac{1}{n}(w+[-\tfrac{1}{2}, \tfrac{1}{2}]^d)\) for some \(w\in \mathbb{Z }^d\). Write \(C_\theta (y,\tfrac{1}{n}w)\) for the cone with vertex \(y\), central angle \(\theta \), and axis the ray from \(y\) to \(\tfrac{1}{n}w\). We can choose the angle \(\theta \) and a constant \(c_3>0\) small enough (in a manner depending only on \(d\)) so that \(C_\theta (y,\tfrac{1}{n}w) \cap E_{\rho /\varphi } \cap B(y,(1+c_3) d(y,\tfrac{1}{n}w))=\varnothing \). With \(\theta \) and \(c_3\) fixed, we can choose \(c_4>0\) so that every point of \(B(\tfrac{1}{n}w,c_4)\) is a distance at least \(c_5>0\) from \(\partial C_\theta (y, \tfrac{1}{n}w)\) and from \(\partial B(y,(1+c_3)d(y,\tfrac{1}{n}w))\) (see Fig. 11).
Fig. 11 Cones \(C_\theta (y,\tfrac{1}{n}w)\) and parts of the balls \(B(y,\rho /(2\varphi ))\) and \(B(y,(1+c_3)d(y,\tfrac{1}{n}w))\) for three choices of \(y\). The outer square is the cube \(\tfrac{1}{n}(w+[-\tfrac{1}{2},\tfrac{1}{2}]^d)\) containing \(y\). The dashed line shows the greatest possible extent of \(E_{\rho /\varphi }\). At least one face of the cube is not contained in \(E\), resulting in a conduit to the outside of the cube (dotted lines). The ball \(B(\tfrac{1}{n}w,c_4)\) in the centre is uniformly bounded away from the sides of the cones and from the other balls. On the left the parameters \(\rho /\varphi \) and \(1/(4n)\) are depicted as equal; on the right is the more relevant situation \(\rho /\varphi \ll 1/(4n)\).
Under these conditions, a Brownian path started from a point of \(\partial B(x+\varphi w/n,c_4 \varphi )\) has probability at least \(c_6(\rho /\varphi )^\alpha \) of reaching \(\partial B(x+\varphi y,\tfrac{1}{2}\rho )\) before hitting \(\partial B(x+\varphi y,\varphi (1+c_3) d(y,\tfrac{1}{n}w))\cup \partial (x+\varphi C_\theta (y,\tfrac{1}{n}w))\), and then reaching \(\partial B(x+\varphi y, \varphi d(y,\tfrac{1}{n}w))\) before hitting \(\partial (x+\varphi C_\theta (y,\tfrac{1}{n}w))\) (Footnote 5). The rest of the proof proceeds as in the previous case.
\(\bullet \) \(y\in B(0,r/\varphi ){\setminus } B(0,3a)\). Let \(b=d(0,y)\in [3a,r/\varphi ]\). The probability that a Brownian path started from \(\xi '\) first hits \(\partial B(x,b\varphi )\) without hitting \(\partial B(x,R)\), then hits \(\partial B(x+\varphi y,\tfrac{1}{12}b\varphi )\) without hitting \(\partial B(x,\tfrac{2}{3}b\varphi )\), then hits \(\partial B(x+\varphi y, \tfrac{1}{2}\rho )\) before hitting \(\partial B(x+\varphi y,\tfrac{1}{6}b\varphi )\), and finally exits \(B(x,R)\) without hitting \(\partial B(x,\tfrac{2}{3}b\varphi )\), is at least \([c_7 (b\varphi /r)^{d-2}] [c_8][c_9(\rho /(b\varphi ))^{d-2}][c_{10}]\). Since \(x+(\varphi E)_\rho \subset B(x,2a\varphi )\subset B(x,\tfrac{2}{3}b\varphi )\), this is the required bound.\(\square \)
Goodman, J., den Hollander, F. Extremal geometry of a Brownian porous medium. Probab. Theory Relat. Fields 160, 127–174 (2014). https://doi.org/10.1007/s00440-013-0525-9
Keywords
- Brownian motion
- Random set
- Capacity
- Largest inradius
- Cover time
- Principal Dirichlet eigenvalue
- Large deviation principle
Mathematics Subject Classification (2000)
- 60D05
- 60F10
- 60J65