Abstract
We study properties of the random metric space called the Brownian map. For every \(r>0\), we consider the connected components of the complement of the open ball of radius \(r\) centered at the root, and we let \(\mathbf {N}_{r,\varepsilon }\) be the number of those connected components that intersect the complement of the ball of radius \(r+\varepsilon \). We then prove that \(\varepsilon ^3\mathbf {N}_{r,\varepsilon }\) converges as \(\varepsilon \rightarrow 0\) to a constant times the density at \(r\) of the profile of distances from the root. In terms of the Brownian cactus, this gives asymptotics for the number of points at height \(r\) that have descendants at height \(r+\varepsilon \). Our proofs are based on a similar approximation result for local times of super-Brownian motion by upcrossing numbers. Our arguments make heavy use of the Brownian snake and its special Markov property.
1 Introduction
This paper is devoted to certain properties of the random metric space known as the Brownian map, which can be viewed as a canonical model of random geometry in two dimensions. These properties are closely related to an approximation result for local times of super-Brownian motion in terms of upcrossing numbers, which is similar to the classical result for linear Brownian motion.
In order to present our main results, let \((\mathbf{m}_\infty ,D)\) denote the Brownian map. This is a random compact metric space, which is a.s. homeomorphic to the two-dimensional sphere and has recently been shown to be the scaling limit in distribution, in the Gromov–Hausdorff sense, of several classes of random planar maps [1, 3, 13, 16]. The Brownian map is equipped with a volume measure \(\lambda \), which in a sense is the uniform probability measure on \(\mathbf{m}_\infty \), and a distinguished point, which we denote here by \(\rho \). This point plays no particular role in the sense that, if we “re-root” the Brownian map at another point \(\tilde{\rho }\) chosen according to \(\lambda \), the pointed metric spaces \((\mathbf{m}_\infty , D, \rho )\) and \((\mathbf{m}_\infty ,D,\tilde{\rho })\) have the same distribution [12, Theorem 8.1]. For every \(h>0\), let \(B_h(\rho )\) stand for the open ball of radius \(h\) centered at \(\rho \). Then, on the event where \(B_h(\rho )^c\not = \varnothing \), \(B_h(\rho )^c\) will have infinitely many connected components, but a compactness argument shows that only finitely many of them intersect \(B_{h+\varepsilon }(\rho )^c\), for any fixed \(\varepsilon >0\). Our first objective is to get precise information about the number of these components. Recall that the profile of distances from \(\rho \) in \(\mathbf{m}_\infty \) is the probability measure \(\Delta \) on \({\mathbb {R}}_+\) defined by
\[\Delta (A):=\lambda \big (\{x\in \mathbf {m}_\infty : D(\rho ,x)\in A\}\big ),\]
for any Borel subset \(A\) of \({\mathbb {R}}_+\). The measure \(\Delta \) has a.s. a continuous density with respect to Lebesgue measure.
Theorem 1
For every \(h>0\) and \(\varepsilon >0\), let \(\mathbf {N}_{h,\varepsilon }\) be the number of connected components of \(B_h(\rho )^c\) that intersect \(B_{h+\varepsilon }(\rho )^c\). Then,
\[\varepsilon ^3\,\mathbf {N}_{h,\varepsilon }\ \underset{\varepsilon \rightarrow 0}{\longrightarrow }\ c_1\,\mathbf {L}^h\qquad \qquad (1)\]
in probability. Here \(\mathbf {L}^h\) is the density at \(h\) of the profile of distances from \(\rho \) in \(\mathbf{m}_\infty \), and the constant \(c_1>0\) is given by
Theorem 1 can be reformulated in terms of the Brownian cactus discussed in [5]. Recall that, with any pointed geodesic compact metric space, one can associate a rooted \({\mathbb {R}}\)-tree called the cactus of the initial space. Roughly speaking, the root of the cactus corresponds to the distinguished point in the original space, and distances from this point are in a sense preserved in the cactus. Furthermore, the points of the cactus at a given height \(h\), that is, at distance \(h\) from the root, correspond to the connected components of the complement of the open ball of radius \(h\) centered at the distinguished point (see [5, Section 2.5]). The cactus associated with the Brownian map is called the Brownian cactus (one of the main reasons for introducing this object is the fact that the convergence in distribution of discrete cactuses associated with random planar maps toward the Brownian cactus has been proved in great generality [5]). The quantity \(\mathbf {N}_{h,\varepsilon }\) is then equal to the number of points of the Brownian cactus at height \(h\) that have descendants at height \(h+\varepsilon \), and (1) shows that this number is typically of order \(\varepsilon ^{-3}\) when \(\varepsilon \) tends to \(0\).
Perhaps more surprisingly, the convergence (1) is also closely related to an approximation result for local times of super-Brownian motion in terms of upcrossing numbers. If \(w:[0,T]\longrightarrow {\mathbb {R}}\) is a continuous function defined on the interval \([0,T]\), and \(h\in {\mathbb {R}}\), we say that \(r\in [0,T)\) is an upcrossing time of \(w\) from \(h\) to \(h+\varepsilon \) if \(w(r)=h\) and if there exists \(t\in (r,T]\) such that \(w(t)=h+\varepsilon \) and \(w(s)>h\) for every \(s\in (r,t]\). Then, if \(\mathrm {N}_{h,\varepsilon }(T)\) is the number of upcrossing times from \(h\) to \(h+\varepsilon \) of a standard linear Brownian motion \(B\) over the time interval \([0,T]\), \(2\varepsilon \,\mathrm {N}_{h,\varepsilon }(T)\) converges a.s. as \(\varepsilon \rightarrow 0\) to the local time of \(B\) at level \(h\) and at time \(T\). This is the classical approximation of Brownian local times by upcrossing numbers (see [9, Section 2.4] or [18, Theorem VI.1.10]). In view of a similar result for super-Brownian motion, we would like to count upcrossing times for all “historical paths”, and for this we need to introduce the historical super-Brownian motion (see [6, 7] for the general theory of historical superprocesses).
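The classical upcrossing approximation of Brownian local time is easy to observe numerically. The following sketch (ours, not part of the paper's argument; the random-walk discretization, tolerance, and function names are our own choices) counts upcrossings of a rescaled simple random walk and compares \(2\varepsilon \,\mathrm {N}_{h,\varepsilon }\) with an occupation-density estimate of the local time at level \(h\):

```python
import random

def count_upcrossings(path, h, eps):
    """Count excursions of the discrete path above level h that reach
    h + eps before returning to h (one upcrossing per such excursion)."""
    count, reached = 0, True  # reached=True blocks counting until the path touches level h
    for x in path:
        if x <= h:
            reached = False
        elif x >= h + eps and not reached:
            count += 1
            reached = True
    return count

def local_time_estimate(path, h, dt, width):
    """Occupation-density estimate of the local time at level h."""
    return dt * sum(1 for x in path if abs(x - h) <= width) / (2 * width)

# rescaled simple random walk approximating Brownian motion on [0, 1]
random.seed(7)
n = 200_000
dt = 1.0 / n
path, x = [0.0], 0.0
for _ in range(n):
    x += random.choice((-1.0, 1.0)) * dt ** 0.5
    path.append(x)

h = 0.2
for eps in (0.2, 0.1, 0.05):
    print(eps, 2 * eps * count_upcrossings(path, h, eps),
          local_time_estimate(path, h, dt, 0.02))
```

As \(\varepsilon \) decreases, the two printed quantities should agree up to discretization error.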
So let \(\mathbf {Y}=(\mathbf {Y}_t)_{t\ge 0}\) be a one-dimensional historical super-Brownian motion. For every \(t\ge 0\), \(\mathbf {Y}_t\) is a random measure on the space \(C([0,t],{\mathbb {R}})\) of all continuous functions from \([0,t]\) into \({\mathbb {R}}\). Informally, the support of \(\mathbf {Y}_t\) consists of the historical paths followed between times \(0\) and \(t\) by all “particles” alive at time \(t\). The associated super-Brownian motion \(\mathbf {X}=(\mathbf {X}_t)_{t\ge 0}\) is obtained from \(\mathbf {Y}\) by the formula
\[\mathbf {X}_t(A)=\int \mathbf {Y}_t(\mathrm {d}\mathrm {w})\,{\mathbf {1}}_A(\mathrm {w}(t)),\]
for any Borel subset \(A\) of \({\mathbb {R}}\) and every \(t\ge 0\). Then, for every \(t>0\), \(\mathbf {X}_t\) has a continuous density denoted by \(u_t\) (see e.g. [17, Theorem III.4.2]), and we set, for every \(x\in {\mathbb {R}}\),
\[\mathrm {L}^x:=\int _0^\infty \mathrm {d}t\,u_t(x).\]
Clearly, the function \(x\rightarrow \mathrm {L}^x\) is also the density of the occupation measure \(\int _0^\infty \mathrm {d}t\,\mathbf {X}_t\), and for this reason we call \(\mathrm {L}^x\) the local time of \(\mathbf {X}\) at level \(x\). Note that these local times also exist in dimensions \(2\) and \(3\) even though the measures \(\mathbf {X}_t\) are then singular (see [8, 19]).
We say that \(r\ge 0\) is an upcrossing time of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \) if there exist \(t>r\) and a function \(w\in C([0,t],{\mathbb {R}})\) that belongs to the topological support of \(\mathbf {Y}_t\), such that \(r\) is an upcrossing time of \(w\) from \(h\) to \(h+\varepsilon \).
Theorem 2
Assume that \(\mathbf {X}_0=a\,\delta _0\) for some \(a>0\), where \(\delta _0\) denotes the Dirac measure at \(0\). Let \(h\in {\mathbb {R}}\backslash \{0\}\) and for every \(\varepsilon >0\), let \(\fancyscript{N}_{h,\varepsilon }\) be the number of upcrossing times of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \). Then
\[\varepsilon ^3\,\fancyscript {N}_{h,\varepsilon }\ \underset{\varepsilon \rightarrow 0}{\longrightarrow }\ c_1\,\mathrm {L}^h\qquad \qquad (2)\]
in probability. Here, \(\mathrm {L}^h\) is the local time of \(\mathbf {X}\) at level \(h\), and the constant \(c_1\) was defined in Theorem 1.
Remark
The definition of upcrossings in the superprocess setting can also be interpreted in terms of the genealogical structure of super-Brownian motion. Recall that the genealogy of \(\mathbf {X}\) is coded by a random \({\mathbb {R}}\)-tree, or more precisely by a countable collection of random \({\mathbb {R}}\)-trees. Each point (vertex) in these trees is assigned a spatial location in \({\mathbb {R}}\), and the measure \(\mathbf {X}_t\) is in a sense “uniformly spread” over the spatial locations of vertices at height \(t\). Then upcrossing times of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \) are in one-to-one correspondence with vertices \(v\) whose spatial location is equal to \(h\) and which have (at least) one descendant \(v'\) with spatial location \(h+\varepsilon \), such that spatial locations stay greater than \(h\) on the line segment between \(v\) and \(v'\) in the tree. See Sect. 3 below for a rigorous presentation of this interpretation.
Our proof of Theorem 1 relies on a version of the convergence of Theorem 2 under the excursion measure of super-Brownian motion (Theorem 6). Let us explain the connection between connected components of the complement of a ball in the Brownian map and upcrossings of super-Brownian motion. We first recall that the Brownian map is constructed as a quotient space of Aldous’ Continuum Random Tree (the so-called CRT) for an equivalence relation which is defined in terms of Brownian labels assigned to the vertices of the CRT (see Sect. 7 for more details). Note that the CRT is just a conditional version of the random trees coding the genealogy of super-Brownian motion, and that the Brownian labels can be viewed as spatial locations in the superprocess setting. From the properties of the Brownian map, it is not too hard to prove that connected components of \(B_h(\rho )^c\) in \(\mathbf{m}_\infty \) correspond to connected components of the set of vertices in the CRT whose label is greater than \(h\) (for this correspondence to hold, one needs to shift the labels so that the minimal label is \(0\), and one also re-roots the CRT at the vertex with minimal label). It follows that \(\mathbf {N}_{h,\varepsilon }\) counts those connected components of the set of vertices with label greater than \(h\) that contain (at least) one vertex with label \(h+\varepsilon \). In such a component, there is a unique vertex with label \(h\) that is at minimal distance from the root, and, in the superprocess setting, the remark following Theorem 2 shows that this vertex corresponds to an upcrossing from \(h\) to \(h+\varepsilon \).
The paper is organized as follows. Section 2 recalls basic facts about the Brownian snake, which is our key tool to generate both the Brownian labels on the CRT and the historical paths of super-Brownian motion. In Sect. 3, we introduce upcrossings of the Brownian snake, and we state Theorem 6, which deals with the convergence (2) under the excursion measure of the Brownian snake. The proof of Theorem 6 is given in Sect. 5, after an important preliminary lemma (Lemma 7) has been established in Sect. 4. Theorem 2 is then an easy consequence of Theorem 6. Section 6 provides conditional versions of (2), concerning first the excursion measure of the Brownian snake conditioned to have a fixed duration, and then the same excursion measure under the additional conditioning that the Brownian snake stays on the positive half-line. The latter conditional version is needed for our application to the Brownian map in Sect. 7, where we prove Theorem 1.
2 Preliminaries about the Brownian snake
We refer to the book [11] (especially Chapters IV and V) for the basic facts about the Brownian snake that we will use.
The Brownian snake. Throughout this work, \(W=(W_s)_{s\ge 0}\) denotes the one-dimensional Brownian snake. This is a strong Markov process taking values in the space \({\mathcal {W}}\) of all finite continuous paths \(\mathrm{w}:[0,\zeta ]\longrightarrow {\mathbb {R}}\), where \(\zeta =\zeta _{(\mathrm{w})}\) is a nonnegative real number depending on \(\mathrm{w}\) and called the lifetime of \(\mathrm{w}\). We write \(\widehat{\mathrm{w}}:=\mathrm{w}(\zeta _{(\mathrm{w})})\) for the endpoint of \(\mathrm{w}\). We let \((\zeta _s)_{s\ge 0}\) stand for the lifetime process associated with \((W_s)_{s\ge 0}\), that is, \(\zeta _s = \zeta _{(W_s)}\) for every \(s\ge 0\). For every \(x\in {\mathbb {R}}\), we identify the trivial element of \({\mathcal {W}}\) starting from \(x\) and with zero lifetime with the point \(x\).
It will be convenient to assume that the Brownian snake \((W_s)_{s\ge 0}\) is the canonical process on the space \(C({\mathbb {R}}_+,{\mathcal {W}})\) of all continuous mappings from \({\mathbb {R}}_+\) into \({\mathcal {W}}\). The notation \({\mathbb {P}}_x\) will then stand for the probability measure on \(C({\mathbb {R}}_+,{\mathcal {W}})\) under which the Brownian snake starts from \(x\). Under \({\mathbb {P}}_x\), the process \((\zeta _s)_{s\ge 0}\) is a reflected Brownian motion on \({\mathbb {R}}_+\) started from \(0\). Informally, the path \(W_s\) is shortened from its tip when \(\zeta _s\) decreases and, when \(\zeta _s\) increases, it is extended by adding “little pieces of Brownian paths” at its tip. See [11, Section IV.1] for a more rigorous presentation.
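The "shorten/extend at the tip" dynamics can be mimicked by a toy discrete snake (our own sketch, with \(\pm 1\) steps in place of Brownian increments and a reflected random walk as lifetime process):

```python
import random

def discrete_snake(n_steps, rng):
    """Toy discrete snake: the lifetime is a reflected +-1 random walk;
    when the lifetime increases, the current path gains a fresh +-1
    increment at its tip, and when it decreases, the tip is erased."""
    incr = []                    # incr[k]: spatial increment of generation k+1
    zetas, paths = [0], [[0.0]]  # snapshots of lifetime and path (partial sums)
    for _ in range(n_steps):
        if not incr or rng.random() < 0.5:
            incr.append(rng.choice((-1.0, 1.0)))   # extend at the tip
        else:
            incr.pop()                             # shorten from the tip
        path = [0.0]
        for dx in incr:
            path.append(path[-1] + dx)
        zetas.append(len(incr))
        paths.append(path)
    return zetas, paths

zetas, paths = discrete_snake(500, random.Random(0))
```

The invariant worth checking is the snake property: two consecutive paths coincide up to the smaller of the two lifetimes.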
We let \({\mathbb {N}}_x\) denote the (infinite) excursion measure of the Brownian snake away from \(x\). Note that, when we speak about excursions away from \(x\), we mean excursions away from the trivial path \(x\) with zero lifetime. The excursion measure \({\mathbb {N}}_x\) is normalized as in [11], so that, for every \(\varepsilon >0\),
\[{\mathbb {N}}_x\Big (\sup _{s\ge 0}\zeta _s>\varepsilon \Big )=\frac{1}{2\varepsilon }.\]
We also set
\[\sigma :=\sup \{s\ge 0: \zeta _s>0\},\]
which represents the duration of the excursion under \({\mathbb {N}}_x\). The preceding informal description of the behavior of the Brownian snake remains valid under \({\mathbb {N}}_x\), but the “law” of the lifetime process under \({\mathbb {N}}_x\) is now the Itô measure of positive excursions of linear Brownian motion. Both under \({\mathbb {P}}_x\) and under \({\mathbb {N}}_x\), the Brownian snake takes values in the subset \({\mathcal {W}}_x\) of \({\mathcal {W}}\) that consists of all finite paths starting from \(x\). Note that \(W_s=x\) for every \(s\ge \sigma \), \({\mathbb {N}}_x\)-a.e.
For every \(h\in {\mathbb {R}}\), we set
\[T_h:=\inf \{s\ge 0: \widehat{W}_s=h\},\]
with the usual convention \(\inf \varnothing =\infty \) that will be used throughout this work. Suppose that \(h\not = 0\). Then
\[0<{\mathbb {N}}_0(T_h<\infty )=\frac{3}{2h^2}<\infty \]
(see e.g. [15, Lemma 2.1]), and we will use the notation \({\mathbb {N}}^h_0\) for the conditional probability measure
\[{\mathbb {N}}^h_0:={\mathbb {N}}_0(\,\cdot \mid T_h<\infty ).\]
Exit measures and the special Markov property. We will make an extensive use of exit measures of the Brownian snake. Let \(D\) be an open interval of \({\mathbb {R}}\), such that \(D\not ={\mathbb {R}}\). Suppose that \(x\in D\) and, for every \(\mathrm{w}\in {\mathcal {W}}_x\), set
\[\tau (\mathrm{w}):=\inf \{t\in [0,\zeta _{(\mathrm{w})}]: \mathrm{w}(t)\notin D\},\]
where we recall that \(\inf \varnothing =\infty \). The exit measure \({\mathcal {Z}}^D\) from \(D\) (see [11, Chapter V]) is a random measure on \(\partial D\), which is defined under \({\mathbb {N}}_x\) and is supported on the set of all exit points \(W_s(\tau (W_s))\) for the paths \(W_s\) such that \(\tau (W_s)<\infty \) (note that here \(\partial D\) has at most two points, but the preceding discussion remains valid for the \(d\)-dimensional Brownian snake and an arbitrary subdomain \(D\) of \({\mathbb {R}}^d\)).
The first-moment formula for exit measures states that, for any nonnegative measurable function \(g\) on \(\partial D\),
\[{\mathbb {N}}_x\big (\langle {\mathcal {Z}}^D,g\rangle \big )=E_x\big [g(B_{\tau _D})\big ],\qquad \qquad (4)\]
where, in the right-hand side, \(B=(B_t)_{t\ge 0}\) is a linear Brownian motion starting from \(x\) under the probability measure \(P_x\), and \(\tau _D:=\inf \{t\ge 0: B_t\notin D\}\).
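For \(D=(-\delta ,\varepsilon )\), the right-hand side of the first-moment formula is evaluated by the classical gambler's-ruin computation: a Brownian motion started at \(x\in (-\delta ,\varepsilon )\) exits at \(\varepsilon \) with probability \((x+\delta )/(\varepsilon +\delta )\). A quick check on the simple-random-walk analogue (our own sketch; the function name and value-iteration scheme are not from the paper):

```python
def hit_top_prob(d, e, x, sweeps=20_000):
    """Probability that a simple random walk from integer x in (-d, e)
    hits e before -d, obtained by Gauss-Seidel sweeps on the discrete
    harmonic equation p(y) = (p(y-1) + p(y+1)) / 2, p(-d) = 0, p(e) = 1."""
    p = {y: 0.0 for y in range(-d, e + 1)}
    p[e] = 1.0
    for _ in range(sweeps):
        for y in range(-d + 1, e):
            p[y] = 0.5 * (p[y - 1] + p[y + 1])
    return p[x]

# matches the Brownian exit probability (x + d) / (e + d)
print(hit_top_prob(2, 3, 0), 2 / 5)
```

The same computation is what lies behind the explicit first moments of exit measures used in the proof of Lemma 7.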
We will use the fact that, for every \(y\in \partial D\),
\[\big \{\langle {\mathcal {Z}}^D,{\mathbf {1}}_{\{y\}}\rangle >0\big \}=\big \{\exists s\ge 0: \tau (W_s)<\infty \hbox { and } W_s(\tau (W_s))=y\big \},\qquad {\mathbb {N}}_x\hbox {-a.e.}\qquad (5)\]
It is immediate from the support property of the exit measure that the set in the left-hand side is a subset of the set in the right-hand side. So, to get the equality in (5), it suffices to show that both sets have the same finite \({\mathbb {N}}_x\)-measure. However, using the connections between the Brownian snake and partial differential equations [11, Chapters V,VI], one verifies that the \({\mathbb {N}}_x\)-measure of either set solves, as a function of \(x\), the differential equation \(u''=4u^2\) in \(D\) with boundary values \(\infty \) at \(y\), and \(0\) at the other end of \(D\) (at \(\infty \) if \(D\) is unbounded). Since this boundary value problem has a unique nonnegative solution, the desired result follows.
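When \(D=(-\infty ,y)\), the boundary value problem above has the explicit solution \(u(x)=3/(2(y-x)^2)\) (blow-up at \(y\), limit \(0\) at \(-\infty \)), which is the classical formula for the \({\mathbb {N}}_x\)-measure of the set of snake paths hitting \(y\). A finite-difference check (our own sketch; the step size and test points are arbitrary) that this function indeed solves \(u''=4u^2\):

```python
def u(x, y=1.0):
    """Explicit solution of u'' = 4 u^2 on (-inf, y): u(x) = 3 / (2 (y - x)^2)."""
    return 1.5 / (y - x) ** 2

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

for x in (-2.0, -0.5, 0.0, 0.3):
    print(x, second_derivative(u, x), 4.0 * u(x) ** 2)
```

The solution on a bounded interval \((-\delta ,\varepsilon )\), with the boundary condition \(u(-\delta )=0\) used later in the proof of Lemma 7, is not of this simple form.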
A crucial ingredient of our study is the special Markov property of the Brownian snake [10]. In order to state this property, we first observe that, \({\mathbb {N}}_x\)-a.e., the set
\[\{s\ge 0: \zeta _s>\tau (W_s)\}\]
is open and thus can be written as a union of disjoint open intervals \((a_i,b_i)\), \(i\in I\), where \(I\) may be empty. From the properties of the Brownian snake, it is easy to verify that, \({\mathbb {N}}_x\)-a.e. for every \(i\in I\) and every \(s\in (a_i,b_i)\),
\[\tau (W_s)=\zeta _{a_i},\]
and more precisely all paths \(W_s\), \(s\in [a_i,b_i]\) coincide up to their exit time from \(D\). For every \(i\in I\), we then define an element \(W^{(i)}\) of \(C({\mathbb {R}}_+,{\mathcal {W}})\) by setting
Informally, the \(W^{(i)}\)’s represent the “excursions” of the Brownian snake outside \(D\) (the word “outside” is a little misleading here, because although these excursions start from a point of \(\partial D\), they will typically come back inside \(D\)).
We also need to introduce a \(\sigma \)-field that contains the information about the paths \(W_s\) before they exit \(D\). To this end, we set, for every \(s\ge 0\),
\[\gamma ^D_s:=\inf \Big \{r\ge 0: \int _0^r \mathrm {d}u\,{\mathbf {1}}_{\{\zeta _u\le \tau (W_u)\}}>s\Big \},\]
and we let \(\mathcal {E}^D\) be the \(\sigma \)-field generated by the process \((W_{\gamma ^D_s})_{s\ge 0}\) and the class of all sets that are \({\mathbb {N}}_x\)-negligible for every \(x\in D\). The random measure \({\mathcal {Z}}^D\) is measurable with respect to \(\mathcal {E}^D\) (see [10, Proposition 2.3]).
We now state the special Markov property [10, Theorem 2.4].
Proposition 3
Under \({\mathbb {N}}_x\), conditionally on \(\mathcal {E}^D\), the point measure
\[\sum _{i\in I}\delta _{W^{(i)}}\]
is Poisson with intensity
\[\int {\mathcal {Z}}^D(\mathrm {d}y)\,{\mathbb {N}}_y(\cdot ).\]
Thanks to this proposition, we can consider each excursion \(W^{(i)}\) again as a Brownian snake excursion starting from a point of \(\partial D\) and, if \(D'\) is another domain containing \(\partial D\), we can consider the “subexcursions” of \(W^{(i)}\) outside \(D'\), and so on. Repeated applications of this idea will play an important role in what follows.
Local times. We consider the total occupation measure \(\mathcal {O}\) of the process \(\widehat{W}\), which is defined under \({\mathbb {N}}_x\) by the formula
\[\mathcal {O}(A):=\int _0^\sigma \mathrm {d}s\,{\mathbf {1}}_A(\widehat{W}_s)\]
for any Borel subset \(A\) of \({\mathbb {R}}\). The random measure \(\mathcal {O}\) under \({\mathbb {N}}_0(\cdot \,|\,\sigma =1)\) is sometimes called one-dimensional ISE (for Integrated Super-Brownian Excursion; see [2] and [11, Section IV.6]).
We will use the fact that \(\mathcal {O}\) has \({\mathbb {N}}_x\)-a.e. a continuous density \((L^a)_{a\in {\mathbb {R}}}\):
\[\mathcal {O}(A)=\int _A L^a\,\mathrm {d}a\]
for any Borel subset \(A\) of \({\mathbb {R}}\). This can be derived from regularity properties of super-Brownian motion (see Sect. 1 and the references therein). Alternatively, we can use Theorem 2.1 in [4], which gives the existence of a continuous density for \(\mathcal {O}\) under \({\mathbb {N}}_0(\cdot \,|\,\sigma =1)\) (it is of course easy to get rid of the conditioning by \(\sigma =1\) via a scaling argument).
3 Upcrossings of the Brownian snake
Consider the Brownian snake \((W_s)_{s\ge 0}\) under \({\mathbb {N}}_x\) or under \({\mathbb {P}}_x\), for some fixed \(x\in {\mathbb {R}}\).
Definition 4
Let \(h\in {\mathbb {R}}\) and \(\varepsilon >0\). We say that \(s\ge 0\) is an upcrossing time of the Brownian snake from \(h\) to \(h+\varepsilon \) if \(\widehat{W}_s=h\) and if there exists \(s'\in (s,\infty )\) such that \(\widehat{W}_{s'}=h+\varepsilon \), \(\zeta _r>\zeta _s\) for every \(r\in (s,s']\), and \(W_{s'}(t)>h\) for every \(t\in (\zeta _s,\zeta _{s'}]\).
The time \(s'\) in the definition is in general not uniquely determined by \(s\). However, there is a smallest possible value of \(s'\) such that the properties stated in the definition hold. In what follows, we will always assume that \(s'\) is chosen in this way, and we will say that \(s'\) is associated with the upcrossing time \(s\).
Remark
Obviously, a stopping time cannot be an upcrossing time of \(W\). On the other hand, it is easy to see that we can find a countable collection \((T_1,T_2,\ldots )\) of stopping times such that the set of all times \(s'\) associated with upcrossing times from \(h\) to \(h+\varepsilon \) is contained in \(\{T_1,T_2,\ldots \}\). This remark will be useful at the end of Sect. 5.
The reader may have noticed that the preceding definition seems rather different from the definition of an upcrossing time for a function \(w:[0,T]\longrightarrow {\mathbb {R}}\), which was given in Sect. 1 (we might have considered upcrossing times of the function \(s\longrightarrow \widehat{W}_s\), but this is not what we want!). To relate both definitions, we observe that, if \(s\) is an upcrossing time of the Brownian snake from \(h\) to \(h+\varepsilon \), and if \(s'\) is the associated time, then \(\zeta _s\) is an upcrossing time of the function \(t\longrightarrow W_{s'}(t)\) from \(h\) to \(h+\varepsilon \). Definition 4 is more easily understood if we interpret the Brownian snake as a tree-indexed Brownian motion. Let us explain this in detail, as the relevant objects will also be useful later (see e.g. [14, Sections 3 and 4] for a more detailed account of the considerations that follow).
We argue under \({\mathbb {N}}_x\), so that the lifetime process \((\zeta _s)_{s\ge 0}\) is just a single Brownian excursion. The tree coded by \((\zeta _s)_{s\ge 0}\) is the quotient space \({\mathcal {T}}_\zeta :=[0,\sigma ]\,/\!\sim \), where the equivalence relation \(\sim \) is defined by
\[s\sim s'\quad \hbox {if and only if}\quad \zeta _s=\zeta _{s'}=\min _{s\wedge s'\le r\le s\vee s'}\zeta _r.\]
We let \(p_\zeta \) stand for the canonical projection from \([0,\sigma ]\) onto \({\mathcal {T}}_\zeta \), and equip \({\mathcal {T}}_\zeta \) with the metric \(d_\zeta \) defined by
\[d_\zeta (p_\zeta (s),p_\zeta (s')):=\zeta _s+\zeta _{s'}-2\min _{s\wedge s'\le r\le s\vee s'}\zeta _r,\]
for every \(s,s'\in [0,\sigma ]\). Then \({\mathcal {T}}_\zeta \) is a compact \({\mathbb {R}}\)-tree, which is rooted at \(\rho _\zeta :=p_\zeta (0)\). By analogy with the terminology for discrete trees, we often refer to elements of \({\mathcal {T}}_\zeta \) as “vertices” of the tree. Note that the generation (distance from the root) of the vertex \(p_\zeta (s)\) is \(\zeta _s\). For \(a,b\in {\mathcal {T}}_\zeta \), we will use the notation \([\![a,b ]\!]\) for the line segment between \(a\) and \(b\) in \({\mathcal {T}}_\zeta \). The notions of an ancestor and a descendant in \({\mathcal {T}}_\zeta \) are defined in an obvious way: For \(a,b\in {\mathcal {T}}_\zeta \), \(a\) is an ancestor of \(b\) if \(a\) belongs to \([\![\rho _\zeta ,b ]\!]\). If \(s,s'\in [0,\sigma ]\), \(p_\zeta (s)\) is an ancestor of \(p_\zeta (s')\) if and only if \(\zeta _r\ge \zeta _s\) for every \(r\in [s\wedge s',s\vee s']\).
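For a discrete excursion (a Dyck path), the coding of the tree distance and the ancestor criterion above can be checked directly, using the standard formula \(d_\zeta =\zeta _s+\zeta _{s'}-2\min \zeta \). A small sketch (ours; the toy excursion below is an arbitrary example):

```python
def d_zeta(zeta, s, t):
    """Distance in the tree coded by zeta:
    d(s, t) = zeta[s] + zeta[t] - 2 * min(zeta over [s..t])."""
    lo, hi = min(s, t), max(s, t)
    return zeta[s] + zeta[t] - 2 * min(zeta[lo:hi + 1])

def is_ancestor(zeta, s, t):
    """p(s) is an ancestor of p(t) iff zeta stays >= zeta[s] on [s..t]."""
    lo, hi = min(s, t), max(s, t)
    return min(zeta[lo:hi + 1]) >= zeta[s]

zeta = [0, 1, 2, 1, 2, 3, 2, 1, 0]   # a toy discrete excursion

print(d_zeta(zeta, 2, 5), is_ancestor(zeta, 1, 5))
```

Two indices \(s,t\) code the same vertex exactly when `d_zeta(zeta, s, t) == 0`.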
It follows from the properties of the Brownian snake that, \({\mathbb {N}}_x\)-a.e., \(\widehat{W}_s=\widehat{W}_{s'}\) for every \(s,s'\) such that \(s\sim s'\). Hence we can define \(\Gamma _a\) for every \(a\in {\mathcal {T}}_\zeta \) by declaring that \(\Gamma _{p_\zeta (s)}= \widehat{W}_s\) for every \(s\in [0,\sigma ]\), and it is very natural to interpret \((\Gamma _a)_{a\in {\mathcal {T}}_\zeta }\) as Brownian motion indexed by \({\mathcal {T}}_\zeta \). We view \(\Gamma _a\) as a spatial location or label assigned to the vertex \(a\). For every \(s\in [0,\sigma ]\) and every \(t\in [0,\zeta _s]\), \(W_s(t)\) corresponds to the spatial location of the ancestor of \(p_\zeta (s)\) at generation \(t\).
It is now easy to verify that upcrossing times of \(W\) from \(h\) to \(h+\varepsilon \) are in one-to-one correspondence with vertices \(a\) of \({\mathcal {T}}_\zeta \) such that \(\Gamma _a=h\) and there exists a descendant \(b\) of \(a\) in \({\mathcal {T}}_\zeta \) such that \(\Gamma _{b}=h+\varepsilon \) and \(\Gamma _c>h\) for every interior point \(c\) of the line segment \([\![a,b ]\!]\). In this form, we see that our definition is the exact analog of the one for upcrossing times of a real function defined on the interval \([0,T]\) (provided we see \([0,T]\) as an \({\mathbb {R}}\)-tree rooted at \(0\)).
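On a finite labelled tree, the characterization above reduces to a small search. The sketch below (ours; the toy tree, its parent pointers and integer labels are hypothetical choices) counts the vertices \(a\) with label \(h\) that have a descendant labelled \(h+\varepsilon \) reachable through labels \(>h\):

```python
def upcrossing_vertices(parent, label, h, eps):
    """Vertices a with label h having a descendant b with label h + eps
    such that every vertex strictly between a and b has label > h."""
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    hits = []
    for a in parent:
        if label[a] != h:
            continue
        stack, found = list(children[a]), False
        while stack and not found:
            c = stack.pop()
            if label[c] <= h:
                continue              # the segment would leave (h, +inf)
            if label[c] == h + eps:
                found = True          # c plays the role of b
            else:
                stack.extend(children[c])
        if found:
            hits.append(a)
    return hits

# root 0; chain 0 -> 1 -> 2, side chain 1 -> 3 -> 4 -> 5,
# and a branch 0 -> 6 -> 7 whose labels dip to -1
parent = {0: None, 1: 0, 2: 1, 3: 1, 4: 3, 5: 4, 6: 0, 7: 6}
label = {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2, 6: -1, 7: 2}

print(sorted(upcrossing_vertices(parent, label, 0, 2)))  # -> [0, 3]
```

Vertex 7 carries label 2, but vertex 0 is not counted on its account, because the intermediate vertex 6 has label \(-1\le h\).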
Lemma 5
Let \(h\in {\mathbb {R}}\) and \(\varepsilon >0\). Let \(N_{h,\varepsilon }\) be the number of upcrossing times of the Brownian snake from \(h\) to \(h+\varepsilon \). Then, \(N_{h,\varepsilon }<\infty \), \({\mathbb {N}}_x\)-a.e.
Proof
By continuity, there exists \({\mathbb {N}}_x\)-a.e. a real \(\delta >0\) such that \(|\widehat{W}_{s_1} - \widehat{W}_{s_2}|<\varepsilon \) for every \(s_1,s_2\ge 0\) such that \(|s_1-s_2|\le \delta \). If \(s\) is an upcrossing time from \(h\) to \(h+\varepsilon \), we let \(s'>s\) be the time associated with \(s\) (see the comment following Definition 4), and we set \(I_s:=[s'-\delta ,s'+\delta ]\). The statement of the lemma follows from the fact that the intervals \(I_s\), when \(s\) varies over the set of all upcrossing times, are pairwise disjoint. To verify the latter fact, consider two distinct upcrossing times \(s\) and \(\tilde{s}\) and the associated times \(s'\) and \(\tilde{s}'\). If \(s<s'<\tilde{s}<\tilde{s}'\), the desired property is immediate from our choice of \(\delta \) and the definition of upcrossing times. From this definition, it is also easy to verify that we cannot have \(s<\tilde{s}<s'<\tilde{s}'\) (otherwise, \(p_\zeta (\tilde{s})\) would be both a strict descendant of \(p_\zeta (s)\) and an ancestor of \(p_\zeta (s')\), implying \(\widehat{W}_{\tilde{s}}>h\)). The only case that remains is when \(s<\tilde{s}<\tilde{s}'\le s'\). In that case, \(p_\zeta (\tilde{s})\) is an ancestor of \(p_\zeta (\tilde{s}')\) but not an ancestor of \(p_\zeta (s')\). It follows that, if \(s'':=\inf \{r\ge \tilde{s}: \zeta _r<\zeta _{\tilde{s}}\}\), we have \(\tilde{s}<\tilde{s}'<s''<s'\). However, \(p_\zeta (s'')=p_\zeta (\tilde{s})\) and so \(\widehat{W}_{s''}=\widehat{W}_{\tilde{s}}=h\), whereas \(\widehat{W}_{\tilde{s}'}= h+\varepsilon \) and \(\widehat{W}_{s'}=h+\varepsilon \). The property \(s'-\tilde{s}'>2\delta \) now follows from our choice of \(\delta \). \(\square \)
Recall our notation \({\mathbb {N}}_0^h\) for the excursion measure \({\mathbb {N}}_0\) conditioned on the event that the Brownian snake hits the level \(h\). The following statement is the main technical result of the paper, from which we will deduce the theorems stated in Sect. 1.
Theorem 6
Let \(h\in {\mathbb {R}}\backslash \{0\}\). We have
\[\varepsilon ^3\, N_{h,\varepsilon }\ \underset{\varepsilon \rightarrow 0}{\longrightarrow }\ c_1\,L^h\]
in probability under \({\mathbb {N}}^h_0\). Here \(L^h\) is the density at \(h\) of the occupation measure \(\mathcal {O}\), and the constant \(c_1>0\) is as in Theorem 1.
Remark
We exclude the value \(h=0\), in particular because the measure \({\mathbb {N}}^h_0\) is not defined when \(h=0\).
The proof of Theorem 6 is given below in Sect. 5. Section 4 contains some preliminary lemmas.
4 Preliminary lemmas
For technical reasons, we will first deal with the Brownian snake under the probability measure \({\mathbb {P}}_0\). We write \((\ell ^0_s)_{s\ge 0}\) for the local time at level \(0\) of the reflected Brownian motion \((\zeta _s)_{s\ge 0}\) (the normalization of local times is such that the occupation density formula holds, and local times are right-continuous in the space variable). For every \(r>0\), we set
\[\eta _r:=\inf \{s\ge 0: \ell ^0_s>r\}.\]
The excursions of \(W\) away from the trivial path \(0\), before time \(\eta _r\), form a Poisson measure with intensity \(r\,{\mathbb {N}}_0\).
For every \(\varepsilon >0\) and \(\mathrm{w}\in {\mathcal {W}}\), set \(\tau _\varepsilon (\mathrm{w}):=\inf \{t\in [0,\zeta _{(\mathrm{w})}]: \mathrm{w}(t)\ge \varepsilon \}\), and also define, for every \(r>0\),
\[M_\varepsilon (r):=\#\big \{s\in [0,\eta _r]: s\hbox { is an upcrossing time of } W \hbox { from } 0 \hbox { to } \varepsilon \big \}.\]
Lemma 5 implies that \(M_{\varepsilon }(r)<\infty \), \({\mathbb {P}}_0\)-a.s. (note that only finitely many excursions of \(W\) away from \(0\) hit \(\varepsilon \) before time \(\eta _r\)).
Lemma 7
For every \(\varepsilon >0\) and \(r>0\),
\[{\mathbb {E}}_0\big [M_{\varepsilon }(r)\big ]=c_1\,\frac{r}{\varepsilon ^{3}},\]
where the constant \(c_1\) is as in Theorem 1.
Proof
In this proof, \(\varepsilon >0\) and \(r>0\) are fixed. We also consider a real \(\delta >0\), that later will tend to \(0\) (to avoid problems with uncountable unions of negligible sets, we may and will restrict our attention to rational values of \(\delta \)). We write
\[\sum _{i\in I_0}\delta _{\omega ^0_i}\]
for the point measure of excursions of \(W\) away from \(0\) before time \(\eta _r\). With each excursion \(\omega ^0_i\), we associate its exit measure \({\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^0_i)\) from the interval \((-\delta ,\varepsilon )\). This exit measure is a finite measure supported on the pair \(\{-\delta ,\varepsilon \}\). We set
\[X^1_\delta :=\sum _{i\in I_0}\big \langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^0_i),{\mathbf {1}}_{\{-\delta \}}\big \rangle ,\]
which represents the total mass assigned to the point \(-\delta \) by the exit measures associated with the excursions \(\omega ^0_i\), \(i\in I_0\).
Then, for every \(i\in I_0\) (we need only consider those values of \(i\) such that \(\omega ^0_i\) hits \(-\delta \)), we can introduce the excursions of \(\omega ^0_i\) outside \((-\delta ,\varepsilon )\) that start from \(-\delta \), as defined in Sect. 2. Write \((\tilde{\omega }^0_j)_{j\in J_0}\) for the collection of all these excursions when \(i\) varies over \(I_0\). By the special Markov property (Proposition 3), we know that, conditionally on \(X^1_\delta \), the point measure
\[\sum _{j\in J_0}\delta _{\tilde{\omega }^0_j}\]
is Poisson with intensity \(X^1_\delta \,{\mathbb {N}}_{-\delta }\).
For every \(j\in J_0\), \(\widetilde{\omega }^0_j\) is a Brownian snake excursion starting from \(-\delta \), and therefore we can consider its exit measure \({\mathcal {Z}}^{(-\infty ,0)}(\tilde{\omega }^0_j)\) from the interval \((-\infty ,0)\). We then set
\[Y^1_\delta :=\sum _{j\in J_0}\big \langle {\mathcal {Z}}^{(-\infty ,0)}(\tilde{\omega }^0_j),1\big \rangle .\]
Furthermore, for every \(j\in J_0\), we can also consider the excursions of \(\tilde{\omega }^0_j\) outside \((-\infty ,0)\) (of course these excursions start from \(0\)). We write \((\omega ^1_i)_{i\in I_1}\) for the collection of all these excursions when \(j\) varies over \(J_0\), and we set
\[{\mathcal {N}}^1_\delta :=\sum _{i\in I_1}\delta _{\omega ^1_i}.\]
By the special Markov property again, we get that, conditionally on \(Y^1_\delta \), the point measure \({\mathcal {N}}^1_\delta \) is Poisson with intensity \(Y^1_\delta \,{\mathbb {N}}_0\). Informally, the point measure \({\mathcal {N}}^1_\delta \) contains the information about the behavior after their first return to \(0\) via \(-\delta \) of those paths \(W_s\) that hit \(-\delta \) before hitting \(\varepsilon \).
We can continue this construction by induction. Let us briefly describe the second step. We set
\[X^2_\delta :=\sum _{i\in I_1}\big \langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^1_i),{\mathbf {1}}_{\{-\delta \}}\big \rangle \]
and write \((\tilde{\omega }^1_j)_{j\in J_1}\) for the collection of all excursions of \(\omega ^1_i\), \(i\in I_1\), outside \((-\delta ,\varepsilon )\) that start from \(-\delta \). We then set
\[Y^2_\delta :=\sum _{j\in J_1}\big \langle {\mathcal {Z}}^{(-\infty ,0)}(\tilde{\omega }^1_j),1\big \rangle \]
and
\[{\mathcal {N}}^2_\delta :=\sum _{i\in I_2}\delta _{\omega ^2_i},\]
where \((\omega ^2_i)_{i\in I_2}\) is the collection of all excursions of \(\tilde{\omega }^1_j\), \(j\in J_1\), outside \((-\infty ,0)\). Again, conditionally on \(Y^2_\delta \), the point measure \({\mathcal {N}}^2_\delta \) is Poisson with intensity \(Y^2_\delta \,{\mathbb {N}}_0\).
At every step \(k\ge 1\), we similarly get a nonnegative random variable \(Y^k_\delta \), and a point measure
\[{\mathcal {N}}^k_\delta =\sum _{i\in I_k}\delta _{\omega ^k_i},\]
which, conditionally on \(Y^k_\delta \), is Poisson with intensity \(Y^k_\delta \,{\mathbb {N}}_0\). Informally, \({\mathcal {N}}^k_\delta \) describes the paths \(W_s\) after their \(k\)-th return to \(0\) via \(-\delta \), for those paths \(W_s\) that perform \(k\) descents from \(0\) to \(-\delta \) before (possibly) hitting \(\varepsilon \).
We now set, for every integer \(k\ge 0\),
\[R_k:=\sum _{i\in I_k}{\mathbf {1}}_{\{\langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^k_i),{\mathbf {1}}_{\{\varepsilon \}}\rangle >0\}},\]
which counts those Brownian snake excursions \(\omega ^k_i\), \(i\in I_k\), for which there exists \(s\ge 0\) such that the path \(\omega ^k_i(s)\) hits \(\varepsilon \) before \(-\delta \) (by (5), the existence of such a value of \(s\) is equivalent to the property \(\langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^k_i), {\mathbf 1}_{\{\varepsilon \}}\rangle >0\)). We also set
\[M_{\varepsilon ,\delta }:=\sum _{k=0}^{\infty }R_k.\]
At this point, we need another lemma. \(\square \)
Lemma 8
We have \(M_{\varepsilon ,\delta }\le M_\varepsilon (r)\) for every \(\delta >0\). Moreover,
\[\liminf _{\delta \rightarrow 0}M_{\varepsilon ,\delta }\ge M_\varepsilon (r),\qquad {\mathbb {P}}_0\hbox {-a.s.}\]
We postpone the proof of Lemma 8 and complete the proof of Lemma 7. We note that, by Lemma 8 and Fatou's lemma, we have
\[{\mathbb {E}}_0[M_\varepsilon (r)]\le \liminf _{\delta \rightarrow 0}{\mathbb {E}}_0[M_{\varepsilon ,\delta }].\]
On the other hand, the first assertion of Lemma 8 also shows that \({\mathbb {E}}_0[M_{\varepsilon ,\delta }]\le {\mathbb {E}}_0[M_\varepsilon (r)]\) for every \(\delta >0\), so that we have
\[{\mathbb {E}}_0[M_\varepsilon (r)]=\lim _{\delta \rightarrow 0}{\mathbb {E}}_0[M_{\varepsilon ,\delta }].\]
To complete the argument, we will compute \({\mathbb {E}}_0[M_{\varepsilon ,\delta }]\). We first set
As we already mentioned after (5), we have \(a_{\varepsilon ,\delta }=u(0)\), where the function \((u(x),x\in (-\delta ,\varepsilon ))\) solves the differential equation \(u''=4\,u^2\) with boundary conditions \(u(\varepsilon )=\infty \) and \(u(-\delta )=0\). Solving this differential equation leads to
where the constant \(c_{\varepsilon ,\delta }>0\) is determined by
It follows that \(c_{\varepsilon ,\delta }= (\varepsilon +\delta )^{-6} (c_0)^2\), where the constant \(c_0>0\) is such that
Since \(a_{\varepsilon ,\delta }=u(0)\), we have then
and elementary analysis shows that
Now note that
and, using the conditional distribution of \({\mathcal {N}}^k_\delta \) given \(Y^k_\delta \),
for every \(k\ge 1\). On the other hand, by the first moment formula for exit measures (4), we have
and
An easy induction argument gives, for every \(k\ge 1\),
Hence,
To complete the proof of Lemma 7, it only remains to verify that \(c_0=2c_1\). Indeed, from (8), we get
and the integral can be computed with the help of Mathematica, yielding the desired result. \(\square \)
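As a side remark on the preceding computation, the exponent \(6\) in \(c_{\varepsilon ,\delta }= (\varepsilon +\delta )^{-6} (c_0)^2\) can be traced through the standard first-integral reduction of \(u''=4\,u^2\). The following sketch uses a generic integration constant \(C\) and does not track the precise normalization of \(c_0\) in (8):

```latex
% Multiply u'' = 4u^2 by u' and integrate; the boundary condition u(-\delta)=0 gives
(u')^2 \;=\; \tfrac{8}{3}\,u^3 + C, \qquad C = u'(-\delta)^2 > 0.
% As u increases from 0 (at -\delta) to +\infty (at \varepsilon), separation of variables yields
\varepsilon + \delta \;=\; \int_0^{\infty} \frac{\mathrm{d}u}{\sqrt{\tfrac{8}{3}\,u^3 + C}}
 \;=\; C^{-1/6}\int_0^{\infty} \frac{\mathrm{d}v}{\sqrt{\tfrac{8}{3}\,v^3 + 1}},
% by the substitution u = C^{1/3} v. Hence C is a constant multiple of
% (\varepsilon+\delta)^{-6}, which is the scaling behind the formula for c_{\varepsilon,\delta}.
```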
Proof of Lemma 8
Recall the construction of excursions of the Brownian snake outside an interval. For every \(k\ge 0\) and every \(i\in I_k\), the excursion \(\omega ^k_i\) corresponds to a closed subinterval \(\mathcal {I}_{k,i}\) of \([0,\eta _r]\) (in such a way that the paths \(\omega ^k_i(s)\), \(s\ge 0\), are exactly the paths \(W_s\), \(s\in \mathcal {I}_{k,i}\) shifted at the time of their \(k\)-th return to \(0\) via \(-\delta \)). Next, if \(\langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^k_i), {\mathbf 1}_{\{\varepsilon \}}\rangle >0\), we can find \(s_0\in \mathcal {I}_{k,i}\) such that \(\widehat{W}_{s_0}=\varepsilon \) and \(\tau _\varepsilon (W_{s_0})= \zeta _{s_0}\), and the path \(W_{s_0}\) performs exactly \(k\) descents from \(0\) to \(-\delta \). Set
and
Note that \(r_0\) also belongs to \(\mathcal {I}_{k,i}\), because, for \(r_0 < s\le s_0\), we have \(\zeta _s>\zeta _{r_0}=\lambda _0(W_{s_0})\) and the path \(W_s\) coincides with \(W_{s_0}\) up to a time strictly greater than \(\lambda _0(W_{s_0})\). From our definitions, \(r_0\) is an upcrossing time of \(W\) from \(0\) to \(\varepsilon \). If we now vary \(k\) and \(i\) (among all pairs \((k,i)\) such that \(\langle {\mathcal {Z}}^{(-\delta ,\varepsilon )}(\omega ^k_i), {\mathbf 1}_{\{\varepsilon \}}\rangle >0\)), we get distinct upcrossing times. This is obvious if we vary \(i\) for a fixed value of \(k\), because the intervals \(\mathcal {I}_{k,i}\), \(i\in I_k\), are disjoint. If we vary \(k\), this follows from the fact that \(k\) can be interpreted as the number of descents of \(W_{r_0}\) from \(0\) to \(-\delta \). The preceding discussion shows that \(M_\varepsilon (r)\ge M_{\varepsilon ,\delta }\), proving the first assertion of the lemma.
In order to prove the second assertion, let us start with a few remarks. Suppose that \(s\) is an upcrossing time from \(0\) to \(\varepsilon \), and let \(s'\) be associated with \(s\) as explained after Definition 4. Let \(k\) be the number of descents from \(0\) to \(-\delta \) of the path \(W_s\). Then \(s\) must belong to exactly one interval \(\mathcal {I}_{k,i}\), with \(i\in I_k\), and \(s'\) belongs to the same interval. Write \(\mathcal {I}_{k,i}=[\alpha _{k,i},\beta _{k,i}]\), with \(\alpha _{k,i}<\beta _{k,i}\), and note that, by construction, all paths \(W_u\), \(u\in \mathcal {I}_{k,i}\) coincide up to time \(\zeta _{\alpha _{k,i}}=\zeta _{\beta _{k,i}}\). Furthermore, for every \(u\in [0,\beta _{k,i}-\alpha _{k,i}]\), the path \(\omega ^k_i(u)\) is just the path \(W_{\alpha _{k,i}+u}\) shifted at time \(\zeta _{\alpha _{k,i}}\). Now, using the fact that \(W_{s'}\) makes the same number of descents from \(0\) to \(-\delta \) as \(W_s\), and the definition of an upcrossing time, we see that \(\omega ^k_i(s'-\alpha _{k,i})\) hits \(\varepsilon \) before \(-\delta \). Using (5), it follows that
Consider then another upcrossing time \(\tilde{s}> s\) and the associated time \(\tilde{s}'\). To simplify notation, set
By the properties of the Brownian snake, the paths \(W_s\) and \(W_{\tilde{s}}\) coincide over the interval \([0, \check{\zeta }_{s,\tilde{s}}]\). Suppose that \(W_{\tilde{s}}\) also makes \(k\) descents from \(0\) to \(-\delta \), and belongs to the same interval \(\mathcal {I}_{k,i}\) as \(s\). Then necessarily \(\check{\zeta }_{s,\tilde{s}}\ge \zeta _{\alpha _{k,i}}\), and we have
because otherwise \(W_{\tilde{s}}\) would make (at least) \(k+1\) descents from \(0\) to \(-\delta \).
Now let \(s_1,s_2,\ldots ,s_p\) be \(p\) distinct upcrossing times from \(0\) to \(\varepsilon \) such that \(s_1<s_2<\cdots <s_p\). The second assertion of the lemma will follow if we can prove that we have \(M_{\varepsilon ,\delta } \ge p\) for \(\delta >0\) small enough. We first observe that, for every \(i,j\in \{1,\ldots ,p\}\) such that \(i<j\), we have \(\check{\zeta }_{s_i,s_j}<\zeta _{s_j}\), because otherwise (recalling the definition of an upcrossing time) \(s_j\) would be a time of local minimum of \(\zeta \), and it is easy to see that such a time cannot be an upcrossing time. Note that \(W_{s_j}(\zeta _{s_j})=\widehat{W}_{s_j}=0\), and also observe that
Indeed, argue by contradiction and suppose that the latter minimum vanishes. Then, writing \(s'_j\) for the time associated with \(s_j\), we obtain that the path \(W_{s'_j}\) has a local minimum equal to \(0\) at time \(\zeta _{s_j}<\zeta _{s'_j}\). This is a contradiction because, with probability one, none of the paths \(W_s\) can have a local minimum equal to \(0\) at an interior point of \([0,\zeta _s]\) (observe that it is enough to consider rational values of \(s\), and then note that a fixed constant is a.s. not a local minimum of linear Brownian motion).
To complete the argument, we observe that, if
then the pairs \((k_j,i_j)\) corresponding to the different upcrossing times \(s_1,\ldots ,s_p\) must be distinct, because otherwise this would contradict the property (11). Furthermore, we can apply (10) to each pair \((k_j,i_j)\), and it follows that, for \(\delta >0\) small enough, we have \(M_{\varepsilon ,\delta } \ge p\). This completes the proof of Lemma 8. \(\square \)
5 Proofs of Theorem 6 and Theorem 2
Most of this section is devoted to the proof of Theorem 6. We then explain how to derive Theorem 2 from this statement.
Proof of Theorem 6
Let \(M_\varepsilon \) be the analog of \(M_\varepsilon (r)\) for a single Brownian snake excursion,
Clearly, \({\mathbb {E}}_0[M_\varepsilon (r)]= r\,{\mathbb {N}}_0(M_\varepsilon )\), and we deduce from Lemma 7 that
Let us fix \(h\in {\mathbb {R}}\backslash \{0\}\). In the present proof and the next one, we argue under the probability measure \({\mathbb {N}}_0^{h}\) (for technical reasons, it will sometimes be convenient to enlarge the probability space so that it carries certain real random variables or processes independent of the Brownian snake). We have
where, for every integer \(k\ge 0\), \(N^k_{h,\varepsilon }\) counts the number of upcrossing times \(s\) from \(h\) to \(h+\varepsilon \) such that the path \(W_s\) has made exactly \(k\) upcrossings from \(h\) to \(h+\varepsilon \). Considering the excursions of the Brownian snake outside the domain \((-\infty ,h)\) if \(h>0\), or the domain \((h,\infty )\) if \(h<0\), we see that
where \(n^0_\varepsilon \) denotes the number of excursions outside \((-\infty ,h)\) (if \(h>0\)) or outside \((h,\infty )\) (if \(h<0\)) that hit \(h+\varepsilon \), and, for every \(1\le i\le n^0_\varepsilon \), \(N^{0,i}_{h,\varepsilon }\) counts the contribution to \(N^0_{h,\varepsilon }\) of the \(i\)-th excursion, assuming that these excursions are listed in a uniform random order given the Brownian snake. In other words, \(N^{0,i}_{h,\varepsilon }\) counts those upcrossing times \(s\) that belong to the interval associated with the \(i\)-th excursion, and have the additional property that \(W_s\) makes no upcrossing from \(h\) to \(h+\varepsilon \). From the special Markov property, we see that, conditionally on \(n^0_\varepsilon \), the variables \(N^{0,1}_{h,\varepsilon }, N^{0,2}_{h,\varepsilon }, \ldots \) are independent and follow the distribution of \(M_\varepsilon \) under \({\mathbb {N}}_0(\cdot \mid T_\varepsilon <\infty )\). By scaling, the latter distribution does not depend on \(\varepsilon \), and we denote it by \(\mu \).
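This conditional independence gives the first moment of the sum by a Wald-type identity. The following sketch writes \(m_\mu \) for the first moment of \(\mu \) (a notation introduced here for convenience):

```latex
% Conditionally on n^0_\varepsilon, the N^{0,i}_{h,\varepsilon} are i.i.d. with law \mu, hence
\mathbb{E}\Bigl[\sum_{i=1}^{n^0_\varepsilon} N^{0,i}_{h,\varepsilon}\Bigr]
 \;=\; \mathbb{E}\Bigl[\,\mathbb{E}\Bigl[\sum_{i=1}^{n^0_\varepsilon} N^{0,i}_{h,\varepsilon}
   \,\Big|\, n^0_\varepsilon\Bigr]\Bigr]
 \;=\; \mathbb{E}\bigl[n^0_\varepsilon\bigr]\, m_\mu,
\qquad m_\mu := \int_0^\infty x\,\mu(\mathrm{d}x).
```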
A similar decomposition holds for \(N^k_{h,\varepsilon }\), for every \(k\ge 1\). Let us discuss the case \(k=1\). We consider again the excursions of the Brownian snake outside the domain \((-\infty ,h)\) if \(h>0\), or the domain \((h,\infty )\) if \(h<0\). We apply the special Markov property to each of these excursions (which start from \(h\)) and to the domain \((-\infty ,h+\varepsilon )\), in order to get a collection of Brownian snake excursions starting from \(h+\varepsilon \). Once again, we apply the special Markov property to each of the latter excursions and to the domain \((h,\infty )\), and we let \(n^1_\varepsilon \) be the number of the resulting excursions (starting from \(h\)) that hit \(h+\varepsilon \). We have then
where, conditionally on the pair \((n^0_\varepsilon ,n^1_\varepsilon )\), the random variables \(N^{1,1}_{h,\varepsilon },N^{1,2}_{h,\varepsilon },\ldots \) are independent and distributed according to \(\mu \). Furthermore, still conditionally on \((n^0_\varepsilon ,n^1_\varepsilon )\), the vector \((N^{1,1}_{h,\varepsilon },\ldots ,N^{1,n^1_\varepsilon }_{h,\varepsilon })\) is independent of the vector \((N^{0,1}_{h,\varepsilon },\ldots ,N^{0,n^0_\varepsilon }_{h,\varepsilon })\): This follows again from the special Markov property, and the fact that the vector \((N^{0,1}_{h,\varepsilon },\ldots ,N^{0,n^0_\varepsilon }_{h,\varepsilon })\) is measurable with respect to the \(\sigma \)-field generated by the paths \(W_s\) up to the end of their first upcrossing from \(h\) to \(h+\varepsilon \).
Arguing inductively, we get that, for every \(k\ge 0\),
where, conditionally on the sequence \((n^0_\varepsilon ,n^1_\varepsilon ,\ldots ,n^k_\varepsilon )\), the random variables \(N^{k,i}_{h,\varepsilon }\), \(1\le i\le n^k_\varepsilon \), are independent, and independent of the collection \((N^{\ell ,j}_{h,\varepsilon })_{0\le \ell <k,1\le j\le n^\ell _\varepsilon }\), and are distributed according to \(\mu \). Furthermore, the variables \(n^k_\varepsilon \) can be characterized as follows. For every \(k\ge 0\), \(n^k_\varepsilon \) counts the instants \(s\) such that:
-
(i)
the path \(W_s\) makes exactly \(k\) upcrossings from \(h\) to \(h+\varepsilon \);
-
(ii)
\(\zeta _s\) is the time of the first return of the path \(W_s\) to \(h\) after its \(k\)-th upcrossing from \(h\) to \(h+\varepsilon \) (when \(k=0\), \(\zeta _s\) coincides with the first hitting time of \(h\) by \(W_s\));
-
(iii)
in the tree \({\mathcal {T}}_\zeta \), \(p_\zeta (s)\) has (at least) one descendant \(p_\zeta (s')\) such that \(\widehat{W}_{s'}= h+\varepsilon \).
Notice that, for every fixed \(\varepsilon >0\), we have \(n^k_\varepsilon =0\) for \(k\) large enough, \({\mathbb {N}}^h_0\)-a.s.
Let us emphasize that there is no independence between the sequence \((n^k_\varepsilon )_{k\ge 0}\) on one hand and the collection of variables \((N^{k,i}_{h,\varepsilon })_{k\ge 0,1\le i\le n^k_\varepsilon }\) on the other hand. At an intuitive level, if \(N^{0,1}_{h,\varepsilon }\) is large, the exit measure from \((-\infty ,h+\varepsilon )\) of the first excursion outside \((-\infty ,h)\) (if \(h>0\)) or outside \((h,\infty )\) (if \(h<0\)) is likely to be large, and, with high probability, \(n^1_\varepsilon \) will also be large.
Lemma 9
For every \(\varepsilon >0\), set
We have
in probability under \({\mathbb {N}}^{h}_0\).
We postpone the proof of the lemma and complete the proof of Theorem 6. Write \(\xi ^\varepsilon _1,\xi ^\varepsilon _2,\ldots \) for the sequence
which is completed by adding a sequence of independent random variables distributed according to \(\mu \) at its end (these auxiliary random variables are supposed to be independent of the Brownian snake). As a consequence of the properties stated after (13), it is a simple exercise to verify that \(\xi ^\varepsilon _1,\xi ^\varepsilon _2,\ldots \) form a sequence of independent random variables distributed according to \(\mu \). Set \(S^\varepsilon _j=\xi ^\varepsilon _1+\xi ^\varepsilon _2+\cdots + \xi ^\varepsilon _j\), for every \(j\ge 0\). By construction, we have
To complete the proof, we simply use the law of large numbers. Using (3) and (12), we get that the first moment of \(\mu \) is
which does not depend on \(\varepsilon \) as expected. It is easy to verify that \(n_\varepsilon \longrightarrow \infty \) as \(\varepsilon \rightarrow 0\), \({\mathbb {N}}^{h}_0\) a.s. (by (5) and the special Markov property, this is even true if we replace \(n_\varepsilon \) by \(n^0_\varepsilon \)), and the law of large numbers gives
in \({\mathbb {N}}^{h}_0\)-probability. Notice that the preceding argument applies even though \(n_\varepsilon \) is not independent of the sequence \((S^\varepsilon _j)_{j\ge 1}\). The convergence of Theorem 6 now follows by writing
and using Lemma 9. \(\square \)
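The remark that independence of \(n_\varepsilon \) from \((S^\varepsilon _j)_{j\ge 1}\) is not needed can be spelled out by the following elementary bound (again with \(m_\mu \) denoting the first moment of \(\mu \)):

```latex
% For \eta>0 and any threshold J\ge 1,
\mathbb{N}^{h}_0\Bigl( \bigl| n_\varepsilon^{-1} S^\varepsilon_{n_\varepsilon} - m_\mu \bigr| > \eta \Bigr)
 \;\le\; \mathbb{N}^{h}_0\bigl( n_\varepsilon < J \bigr)
 \;+\; \mathbb{N}^{h}_0\Bigl( \sup_{j\ge J} \bigl| j^{-1} S^\varepsilon_j - m_\mu \bigr| > \eta \Bigr).
% The second term is small for J large, by the strong law of large numbers (uniformly in
% \varepsilon, since the \xi^\varepsilon_j have the fixed law \mu), and the first term tends
% to 0 as \varepsilon \to 0 because n_\varepsilon \to \infty in probability.
```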
Proof of Lemma 9
In this proof, we keep arguing under \({\mathbb {N}}^h_0\), and we now assume that \(h>0\). Only minor modifications are needed when \(h<0\). It will be convenient to replace the convergence of Lemma 9 by an analogous convergence in terms of certain exit measures. We argue in a way very similar to the proof of Lemma 7, and for this reason we will omit some details. We first set
Then we consider all excursions of the Brownian snake outside \((-\infty ,h+\varepsilon )\) and we define \(Z^{\varepsilon ,2}\) as the sum, over all these excursions, of the total masses of their exit measures from \((h,\infty )\). For each of the preceding excursions outside \((-\infty ,h+\varepsilon )\), we consider its “subexcursions” outside \((h,\infty )\) and define \(Z^{\varepsilon ,3}\) as the sum over all these subexcursions (and over all choices of the initial excursion outside \((-\infty ,h+\varepsilon )\)) of the total masses of their exit measures from \((-\infty ,h+\varepsilon )\). We continue by induction in an obvious way. Informally, for any \(k\ge 1\), \(Z^{\varepsilon ,2k-1}\) “counts” the paths \(W_s\) that make exactly \(k\) upcrossings from \(h\) to \(h+\varepsilon \), and are stopped at the end of the \(k\)-th upcrossing, and similarly \(Z^{\varepsilon ,2k}\) “counts” the paths \(W_s\) that make \(k\) upcrossings from \(h\) to \(h+\varepsilon \), then one additional descent from \(h+\varepsilon \) to \(h\), and are stopped at the end of this last descent.
Using a symmetry argument analogous to the classical reflection principle for Brownian motion (but now relying on the special Markov property rather than on the strong Markov property of Brownian motion), one immediately verifies that
This argument is easily extended to yield
As an easy consequence of the special Markov property and the first-moment formula (4), the process \((\langle {\mathcal {Z}}^{(-\infty ,h+a)}, 1\rangle )_{a\ge 0}\), which is now indexed by the real variable \(a\ge 0\), is a nonnegative martingale (in fact a critical continuous-state branching process) under \({\mathbb {N}}^{h}_0\). Consequently, this process has a càdlàg modification, which we consider from now on, and using the preceding identity in distribution, we have, for every \(\varepsilon >0\),
and
We set
Then, from the special Markov property again and formula (3), one obtains that, for every \(k\ge 0\), the conditional distribution of \(n^k_\varepsilon \) knowing \(Z^{\varepsilon ,2k}\) is Poisson with parameter \(\frac{3}{2\varepsilon ^2}\,Z^{\varepsilon ,2k}\). Simple Borel-Cantelli type arguments, using also (14) and (15), now show that
in \({\mathbb {N}}^{h}_0\)-probability. So the proof of Lemma 9 will be complete if we can verify that
in \({\mathbb {N}}^{h}_0\)-probability.
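The "Borel-Cantelli type arguments" invoked above rest on an elementary second-moment bound for conditionally Poisson counts, which we sketch here:

```latex
% If, conditionally on Z^{\varepsilon,2k}, the count n^k_\varepsilon is Poisson with
% parameter \lambda = \tfrac{3}{2\varepsilon^2} Z^{\varepsilon,2k}, then, since a Poisson
% variable with parameter \lambda has mean and variance \lambda,
\mathbb{E}\Bigl[\Bigl(\tfrac{2\varepsilon^2}{3}\, n^k_\varepsilon - Z^{\varepsilon,2k}\Bigr)^{2}
   \,\Big|\, Z^{\varepsilon,2k}\Bigr]
 \;=\; \Bigl(\tfrac{2\varepsilon^2}{3}\Bigr)^{2}\cdot \tfrac{3}{2\varepsilon^2}\, Z^{\varepsilon,2k}
 \;=\; \tfrac{2\varepsilon^2}{3}\, Z^{\varepsilon,2k}
 \;\xrightarrow[\varepsilon\to 0]{}\; 0.
```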
To this end, we note that, by (6),
and we write
Recall that \(B=(B_t)_{t\ge 0}\) stands for a linear Brownian motion starting from \(0\) under the probability measure \(P_0\), and set \(\theta _h=\inf \{t\ge 0: B_t=h\}\). By the first-moment formula for the Brownian snake [11, Proposition 4.2],
where the last equality holds for \(0<\varepsilon \le h\), by an application of a classical Ray-Knight theorem. It follows that we have also
in \({\mathbb {N}}^{h}_0\)-probability.
On the other hand, we have, for every \(\varepsilon >0\),
where, for every \(k\ge 0\), \(Y^\varepsilon _k\) accounts for the contribution of those values of \(s\) such that the path \(W_s\) (hits \(h\) and) performs exactly \(k\) upcrossings from \(h\) to \(h+\varepsilon \). In order to derive (16) from (17), we will argue that, for every \(k\ge 0\), the quantity \(\varepsilon ^{-1}Y^\varepsilon _k\) is close to \(2\varepsilon \,Z^{\varepsilon ,k}\). Consider first the case \(k=0\). Then, if \(\sum _{i\in I}\delta _{\omega _i}\) stands for the point measure of excursions of the Brownian snake outside \((-\infty ,h)\), we have
where
By translation invariance, the “law” of \(\Phi _{h,\varepsilon }(\omega )\) under \({\mathbb {N}}_h\) coincides with the “law” under \({\mathbb {N}}_0\) of
Let \(\pi _\varepsilon \) denote the latter (\(\sigma \)-finite) distribution. The special Markov property now implies that the conditional law of \(Y^\varepsilon _0\) knowing \(Z^{\varepsilon ,0}\) is the law of \(U_\varepsilon (Z^{\varepsilon ,0})\), where \(U_\varepsilon = (U_\varepsilon (r))_{r\ge 0}\) is a subordinator with no drift and Lévy measure \(\pi _\varepsilon \).
Repeated applications of the special Markov property (in a way very similar to the proof of Lemma 7) allow us to iterate this argument and to obtain that, for every \(k\ge 0\), the conditional distribution of \(Y^\varepsilon _k\) knowing \(Z^{\varepsilon ,2k}\) is the law of \(U_\varepsilon (Z^{\varepsilon ,2k})\).
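For the comparison of \(\varepsilon ^{-1}Y^\varepsilon _k\) with \(Z^{\varepsilon ,2k}\), the relevant moment identities for subordinators are the following standard facts, stated here for reference:

```latex
% If U_\varepsilon is a subordinator with no drift and L\'evy measure \pi_\varepsilon, then
\mathbb{E}\bigl[U_\varepsilon(r)\bigr] \;=\; r \int_0^\infty x\,\pi_\varepsilon(\mathrm{d}x),
\qquad
\mathrm{Var}\bigl(U_\varepsilon(r)\bigr) \;=\; r \int_0^\infty x^2\,\pi_\varepsilon(\mathrm{d}x),
% provided these integrals are finite; finiteness here follows from the first-moment
% formula for the Brownian snake.
```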
By scaling arguments, we have
where \(U\) is a subordinator with Lévy measure \(\pi _1\). Using the first-moment formula for the Brownian snake, we get that
Let \(\mathbf {E}\) stand for the expectation on an enlarged probability space carrying both the Brownian snake (distributed according to \({\mathbb {N}}^h_0\)) and the subordinators \(U_\varepsilon \) and \(U\), which are assumed to be independent of the Brownian snake. By the law of large numbers, we have
Next, thanks to (14) and (15), we can fix \(A>0\) large enough so that the event
has a probability arbitrarily close to \(1\), uniformly for all \(\varepsilon \) sufficiently small. Furthermore,
which tends to \(0\) by (19), using also the independence of \(U\) and \(Z^{\varepsilon ,2k}\). The previous considerations imply that
in \({\mathbb {N}}^h_0\)-probability. By combining this convergence with (17) and (18), we get our claim (16). \(\square \)
We finally explain how Theorem 2 is derived from Theorem 6 via the Brownian snake construction of (historical) super-Brownian motion [11, Chapter IV], which we briefly recall below.
Proof of Theorem 2
We now argue under the probability measure \({\mathbb {P}}_0\). For every \(t\ge 0\), let \((\ell ^t_s)_{s\ge 0}\) denote the local time process at level \(t\) of the reflected Brownian motion \((\zeta _s)_{s\ge 0}\). Fix \(a>0\), and recall our notation \(\eta _a=\inf \{s\ge 0: \ell ^0_s>a\}\). A historical super-Brownian motion \(\mathbf {Y}\) starting from \(a\,\delta _0\) can be obtained under \({\mathbb {P}}_0\) by setting, for every \(t\ge 0\), and every nonnegative measurable function \(\Phi \) on \(C([0,t],{\mathbb {R}})\),
where the notation \(\mathrm {d}\ell ^t_s\) refers to integration with respect to the increasing function \(s\longrightarrow \ell ^t_s\). In particular, if \(\mathrm {supp}(\mathbf {Y}_t)\) stands for the topological support of \(\mathbf {Y}_t\), we have a.s. for every \(t\ge 0\),
Conversely, any \(s \in [0, \eta _{a}]\) such that \(\zeta _{s}=t\) and \(s\) is not a time of local extremum of the function \(r \rightarrow \zeta _{r}\) belongs to the support of the measure \({\mathbf{1}}_{[0,\eta _{a}]}(s)\,\mathrm{d}\ell _{s}^{t}\), and it follows from (20) that for such values of \(s\) we have \(W_{s} \in \hbox {supp}(\mathbf {Y}_t)\). Also note that, if \(\mathbf {X}\) is the super-Brownian motion associated with \(\mathbf {Y}\), the random measure \(\int _0^\infty \mathrm {d}t\,\mathbf {X}_t\) coincides with the occupation measure of \(\widehat{W}\) over the interval \([0,\eta _a]\).
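For the reader's convenience, we recall the shape of this representation, which is the content of the displays (20) and (21) in the normalization of [11, Chapter IV]:

```latex
% Brownian snake representation of historical super-Brownian motion started at a\,\delta_0:
\langle \mathbf{Y}_t, \Phi\rangle \;=\; \int_0^{\eta_a} \Phi(W_s)\,\mathrm{d}\ell^t_s,
\qquad t\ge 0,
% and, a.s. for every t\ge 0, the topological support of \mathbf{Y}_t is
\mathrm{supp}(\mathbf{Y}_t) \;=\; \bigl\{\, W_s : s\in[0,\eta_a],\ \zeta_s = t \,\bigr\}.
```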
Write \(N_{h,\varepsilon }(a)\) for the number of upcrossing times of \(W\) from \(h\) to \(h+\varepsilon \) before time \(\eta _a\). Before time \(\eta _a\), there is only a finite number of excursions of \(W\) away from \(0\) that hit level \(h\), and obviously \(N_{h,\varepsilon }(a)\) is the sum of the upcrossing numbers corresponding to each of these excursions. We can then apply Theorem 6 to see that \(\varepsilon ^3 N_{h,\varepsilon }(a)\) converges in probability to (\(c_1\) times) the density at \(h\) of the occupation measure of \(\widehat{W}\) over \([0,\eta _a]\), which coincides with the local time of \(\mathbf {X}\) at level \(h\).
From the previous considerations, the proof of Theorem 2 will be complete if we can verify, with the notation of this theorem, that \(\fancyscript{N}_{h,\varepsilon }=N_{h,\varepsilon }(a)\), a.s. for every fixed \(h\) and \(\varepsilon \). In other words, we need to prove that upcrossing times of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \) are in one-to-one correspondence with upcrossing times of \(W\) from \(h\) to \(h+\varepsilon \) before time \(\eta _a\).
Consider an upcrossing time \(r\) of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \). By the definition, there exists \(t>r\) and \(w\in \mathrm {supp}(\mathbf {Y}_t)\) such that \(r\) is an upcrossing time of the function \(w\). From (21), there exists \(s\in [0,\eta _a]\) such that \(\zeta _s=t\) and \(W_s=w\). Set \(\tilde{s}:=\sup \{u<s: \zeta _u=r\}\), so that in particular \(\zeta _{\tilde{s}}=r\). Then, by the properties of the Brownian snake, the path \(W_{\tilde{s}}\) coincides with the path \(W_s=w\) restricted to \([0,r]\). Furthermore, we have \(\zeta _u>r\) for every \(u\in (\tilde{s},s]\) by construction, and it easily follows that \(\tilde{s}\) is an upcrossing time of \(W\) (if \(\tilde{r}=\inf \{r'>r : w(r')=h+\varepsilon \}\), take \(\tilde{s}'=\sup \{u<s: \zeta _u=\tilde{r}\}\), and note that \(W_{\tilde{s}'}\) coincides with the restriction of \(w\) to \([0,\tilde{r}]\), so that the pair \((\tilde{s},\tilde{s}')\) satisfies the properties of the definition of an upcrossing time of the Brownian snake).
By the previous discussion, for any upcrossing time \(r\) of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \), we can construct an upcrossing time \(\tilde{s}\in [0,\eta _a]\) of \(W\) from \(h\) to \(h+\varepsilon \) such that \(\zeta _{\tilde{s}}=r\). In fact, \(\tilde{s}\) is uniquely determined by \(r\): The point is that the quantities \(\zeta _s\) when \(s\) varies among upcrossing times of \(W\) from \(h\) to \(h+\varepsilon \) are distinct a.s. (recall that \(h\) and \(\varepsilon \) are fixed). The latter property essentially follows from the fact that, if \(B\) and \(B'\) are two independent linear Brownian motions, the set of all left ends of excursion intervals of \(B\) away from \(h\) and the similar set for \(B'\) are disjoint a.s. We omit some details here.
Clearly the mapping \(r\rightarrow \tilde{s}\) is one-to-one. It remains to verify that this mapping is also onto, and to this end it will suffice to check that, for any upcrossing time \(s\) of \(W\) from \(h\) to \(h+\varepsilon \) before time \(\eta _a\), \(\zeta _s\) is an upcrossing time of \(\mathbf {Y}\) from \(h\) to \(h+\varepsilon \). Let \(s\in [0,\eta _a]\) be an upcrossing time of \(W\) from \(h\) to \(h+\varepsilon \), and let \(s'\) be the associated time. We already noticed that \(\zeta _{s}\) is an upcrossing time of the function \(W_{s'}\). If \(t=\zeta _{s'}\), we have then \(W_{s'}\in \mathrm {supp}(\mathbf {Y}_t)\), by the observations following (21) and the fact that the time \(s'\) cannot be a time of local extremum of \(\zeta \) (this fact is a consequence of the strong Markov property of the Brownian snake, using the remark following Definition 4). Hence, we get that \(\zeta _s\) is an upcrossing time of \(\mathbf {Y}\) as desired.
Finally, the mapping \(r\rightarrow \tilde{s}\) is a bijection, and it follows that \(\fancyscript{N}_{h,\varepsilon }=N_{h,\varepsilon }(a)\), a.s. This completes the proof of Theorem 2. \(\square \)
Remark
Theorem 2 can be extended to more general initial values of \(\mathbf {X}\). In particular, the preceding proof shows that the result still holds if \(\mathbf {X}_0\) is supported on a compact interval \(I\) and \(h\notin I\). The convergence (2) presumably holds for any initial value \(\mathbf {X}_0\) and any \(h\in {\mathbb {R}}\). Proving this would however require some additional estimates.
6 Conditioned excursion measures
In view of our applications to the Brownian map, we will now establish certain conditional versions of Theorem 6. We first consider the probability measure \({\mathbb {N}}_0^{(1)}\) defined by
Under \({\mathbb {N}}_0^{(1)}\), the lifetime process \((\zeta _s)_{s\ge 0}\) is a normalized Brownian excursion, and the conditional distribution of \((W_s)_{s\ge 0}\) knowing \((\zeta _s)_{s\ge 0}\) remains the same as under \({\mathbb {N}}_0\). The definition of upcrossing times of \(W\) still makes sense under \({\mathbb {N}}_0^{(1)}\), and the local times \((L^h)_{h\in {\mathbb {R}}}\) are again well defined thanks to Theorem 2.1 in [4].
Proposition 10
Let \(h\in {\mathbb {R}}\backslash \{0\}\), and, for every \(\varepsilon >0\), let \(N_{h,\varepsilon }\) be the number of upcrossing times from \(h\) to \(h+\varepsilon \). Then,
in \({\mathbb {N}}_0^{(1)}\)-probability.
Remark
It is very plausible that this result also holds for \(h=0\), but we will leave this extension as an exercise for the reader, since it is not needed in our application to the Brownian map.
Proof
We rely on an absolute continuity argument to derive Proposition 10 from Theorem 6. We fix \(\eta >0\) and, on the event \(\{\zeta _{1/2} > \eta \}\), we set
If \(\zeta _{1/2}\le \eta \), we take \(R_\eta =S_\eta =\frac{1}{2}\). We claim that the law of the process
under the conditional probability measure \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2} > \eta )\) is absolutely continuous with respect to the Itô measure of Brownian excursions. To see this, fix \(x>\eta \). From the explicit form of the finite-dimensional marginal distributions of the normalized Brownian excursion, we get that, under the conditional measure \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2}= x)\), the two processes \((\zeta _{1/2 +s})_{0\le s\le 1/2}\) and \((\zeta _{1/2-s})_{0\le s\le 1/2}\) are independent and both distributed as a linear Brownian motion started from \(x\) and conditioned to hit \(0\) at time \(1/2\). It then easily follows that, still under \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2}= x)\), the law of the pair consisting of the processes \((\zeta _{(1/2 +s)\wedge S_\eta })_{s\ge 0}\) and \((\zeta _{(1/2 -s)\vee R_\eta })_{s\ge 0}\) is absolutely continuous with respect to the law of two independent Brownian motions started from \(x\) and stopped upon hitting \(\eta \). We now get our claim by comparing the latter assertion with the classical Bismut decomposition of the Itô measure (see e.g. [18, Chapter XII]).
Next we note that, for \(R_\eta \le s\le S_\eta \), we have \(W_s(\eta )=W_{R_\eta }(\eta )=\widehat{W}_{R_\eta }\) by the properties of the Brownian snake. Still on the event \(\{\zeta _{1/2} > \eta \}\), we define a path-valued process \(W^\eta =(W^\eta _s)_{s\ge 0}\) by setting, for every \(s\ge 0\),
Then the law of \((W^\eta _s)_{s\ge 0}\) under \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2} > \eta )\) is absolutely continuous with respect to \({\mathbb {N}}_0\). Furthermore, the process \((W^\eta _s)_{s\ge 0}\) is independent of \(H_\eta :=\widehat{W}_{R_\eta }\) under the same probability measure. These facts are simple consequences of the properties of the Brownian snake.
Let \(h>0\) (the case \(h<0\) is treated in a similar way). We can choose \(\eta >0\) small enough in such a way that,
is arbitrarily small. On the other hand, on the event
simple considerations give, with an obvious notation,
and, for every \(\varepsilon >0\),
Because the law of \(W^\eta \) is absolutely continuous with respect to \({\mathbb {N}}_0\), and using also the fact that \(H_\eta \) is independent of \(W^\eta \) under \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2} > \eta )\), we can use Theorem 6 to obtain that the convergence
holds in probability under \({\mathbb {N}}_0^{(1)}(\cdot \mid \zeta _{1/2} > \eta )\). The result of Proposition 10 follows from the preceding considerations. \(\square \)
We finally give the analog of Proposition 10 for the Brownian snake “conditioned to stay positive”. It is proved in [15] that the conditional measures \({\mathbb {N}}^{(1)}_0(\cdot \mid \inf \{\widehat{W}_s:s\ge 0\} >-\delta )\) converge as \(\delta \downarrow 0\) to a limit, which is denoted by \(\overline{{\mathbb {N}}}^{(1)}_0\). This limiting measure can also be constructed directly as the law under \({\mathbb {N}}^{(1)}_0\) of the Brownian snake “re-rooted” at its minimum. Let us describe this construction (see [15] for more details). We argue under the measure \({\mathbb {N}}^{(1)}_0\). Fix \(r\in [0,1]\), and set, for every \(s\in [0,1]\),
with the notation \(r\oplus s= r+s\) if \(r+s\le 1\), and \(r\oplus s=r+s-1\) if \(r+s>1\). Also set \(\zeta ^{[r]}_s=0\) if \(s>1\). Then, the tree \({\mathcal {T}}_{\zeta ^{[r]}}\) is identified isometrically with the tree \({\mathcal {T}}_\zeta \) re-rooted at \(p_\zeta (r)\), via the mapping \(p_{\zeta ^{[r]}}(s)\longrightarrow p_\zeta (r\oplus s)\). We also introduce a path-valued process \(W^{[r]}\) such that the associated lifetime process is \(\zeta ^{[r]}\): We first set \(\widehat{W}^{[r]}_s:= \widehat{W}_{r\oplus s} - \widehat{W}_r\), for every \(s\in [0,1]\), and we then define the path \(W^{[r]}_s\) by saying that, for every \(t\in [0,\zeta ^{[r]}_s]\), \( W^{[r]}_s(t) = \widehat{W}^{[r]}_u\) if \(u\in [0,1]\) is such that \(p_{\zeta ^{[r]}}(u)\) is the (unique) ancestor of \(p_{\zeta ^{[r]}}(s)\) at distance \(t\) from the root in the tree \({\mathcal {T}}_{\zeta ^{[r]}}\). The invariance of the Brownian snake under re-rooting (see formula (3) in [15]) asserts that \((W^{[r]}_s)_{0\le s\le 1}\) has the same distribution as \((W_s)_{0\le s\le 1}\) under \({\mathbb {N}}^{(1)}_0\).
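In the usual convention for re-rooting (see [15]), the lifetime process of the re-rooted snake admits the following explicit expression, which we recall for orientation:

```latex
% Re-rooted lifetime process: for r\in[0,1] and s\in[0,1],
\zeta^{[r]}_s \;=\; \zeta_r + \zeta_{r\oplus s}
  - 2\,\min\bigl\{\, \zeta_u : u \in [\,r\wedge(r\oplus s),\ r\vee(r\oplus s)\,] \,\bigr\},
% where r \oplus s denotes addition modulo 1, as defined in the text.
```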
Of course, the preceding invariance property may fail if we allow \(r\) to be random. We let \(s_*\) be the almost surely unique element of \([0,1]\) such that
By [15, Theorem 1.2], the process \(W^{[s_*]}\) is distributed according to \(\overline{{\mathbb {N}}}^{(1)}_0\). Furthermore, \(s_*\) is uniformly distributed over \([0,1]\), and \(s_*\) and \(W^{[s_*]}\) are independent under \({\mathbb {N}}^{(1)}_0\). The latter two properties are straightforward consequences of the invariance under (deterministic) re-rooting.
We write \(W_*=\widehat{W}_{s_*}\) to simplify notation. In a way similar to the discussion before Lemma 5, we assign the spatial location \(\Gamma ^{[s_*]}_v=\widehat{W}^{[s_*]}_s\) to the vertex \(v= p_{\zeta ^{[s_*]}}(s)\) of \({\mathcal {T}}_{\zeta ^{[s_*]}}\), for every \(s\in [0,1]\), and, modulo the identification of \({\mathcal {T}}_{\zeta ^{[s_*]}}\) with \({\mathcal {T}}_\zeta \) re-rooted at \(p_\zeta (s_*)\), we have \(\Gamma ^{[s_*]}_v=\Gamma _v - W_{*}\) for every \(v\).
Furthermore, it is clear that the definition of the local times \(L^h\) still makes sense under \(\overline{{\mathbb {N}}}^{(1)}_0\): Just note that the occupation measure of \(\widehat{W}^{[s_*]}\) coincides with the occupation measure of \(\widehat{W}\) shifted by \(-W_{*}\).
Proposition 11
Let \(h>0\). The convergence of Proposition 10 also holds in \(\overline{{\mathbb {N}}}^{(1)}_0\)-probability.
Proof
We fix \(\kappa >0\). We can choose \(\alpha \in (0,1/4)\) such that
Then, we choose \(\eta >0\) such that
Finally, recalling that \((\zeta _s)_{0\le s\le 1}\) is distributed under \({\mathbb {N}}^{(1)}_0\) as a normalized Brownian excursion, we can choose \(\delta \in (0,\frac{\alpha }{2})\) such that
Since \(s_*\) is uniformly distributed over \([0,1]\), this last bound also implies
From the results recalled before the statement of the proposition, and in particular the fact that \(s_*\) and \(W^{[s_*]}\) are independent under \({\mathbb {N}}^{(1)}_0\), we obtain that, under the conditional probability measure \({\mathbb {N}}^{(1)}_0(\cdot |\, s_*<\delta )\), the process \(W^{[s_*]}\) is distributed according to \(\overline{{\mathbb {N}}}^{(1)}_0\), so that we can apply the bounds (22) and (23) to this process. Combining the bounds (22), (23) and (24), we see that, except on a set of \({\mathbb {N}}^{(1)}_0(\cdot |\, s_*<\delta )\)-measure smaller than \(3\kappa \), we have
-
(i)
\(\forall s\in [\frac{\alpha }{2}, 1-\frac{\alpha }{2}],\; \zeta ^{[s_*]}_s > 2\eta \);
-
(ii)
\(\forall s\in [0,2\alpha ]\cup [1-2\alpha ,1],\; |\widehat{W}^{[s_*]}_s| \le \frac{h}{4}\);
-
(iii)
\({\sup _{s\le \delta } \zeta _s \le \eta }\).
Now recall the definition of \(\zeta ^{[s_*]}\) and \(\widehat{W}^{[s_*]}\) in terms of the pair \((\zeta ,\widehat{W})\). Using also the fact that \(\delta <\frac{\alpha }{2}\), we see that (i)–(iii) imply, on the event \(\{s_*<\delta \}\),
-
(i)’
\(\forall s\in [\alpha ,1-\alpha ],\;\zeta _s > \eta \);
-
(ii)’
\(\forall s\in [0,\alpha ]\cup [1-\alpha ,1],\; |\widehat{W}_s| \le \frac{h}{4}\).
Recall the notation \(R_\eta ,S_\eta ,W^\eta ,H_\eta \) introduced in the proof of Proposition 10. Obviously (i)’ implies that \(R_\eta <\alpha \) and \(S_\eta >1-\alpha \). Therefore we can summarize the preceding discussion by saying that, except on a set of \({\mathbb {N}}^{(1)}_0(\cdot |\, s_*<\delta )\)-measure smaller than \(3\kappa \), we have both the properties (i)’ and (iii) above, and
Write \(A_{\delta ,\eta }\) for the intersection of the event \(\{s_*<\delta \}\) with the event where (i)’, (iii) and (25) hold, and use the obvious notation \(N_{h,\varepsilon }(W^{[s_*]})\) and \(L^h(W^{[s_*]})\) for respectively the upcrossing numbers and the local times of \(W^{[s_*]}\). Also note that (iii) forces \(\delta \le R_\eta \). Then,
In the fourth line of the preceding display, we use (25) (and the fact that \(s_*<\delta \le R_\eta \)) to verify that \(N_{h,\varepsilon }(W^{[s_*]})= N_{h+W_*,\varepsilon }\) and \(L^h(W^{[s_*]})= L^{h+W_*}\) on the event \(A_{\delta ,\eta }\). In particular, the simplest way to obtain the identity \(N_{h,\varepsilon }(W^{[s_*]})= N_{h+W_*,\varepsilon }\) is to use the interpretation of upcrossing times in terms of vertices of the tree \({\mathcal {T}}_\zeta \) (see the discussion before Lemma 5), observing that \({\mathcal {T}}_{\zeta ^{[s_*]}}\) is identified with \({\mathcal {T}}_\zeta \) re-rooted at \(p_\zeta (s_*)\) and that, modulo this identification, a vertex \(v\) of \({\mathcal {T}}_\zeta \) such that \(\Gamma _v=h\) has, on the event \(A_{\delta ,\eta }\), the same descendants in \({\mathcal {T}}_\zeta \) and in \({\mathcal {T}}_{\zeta ^{[s_*]}}\). Similarly, in the last equality of (26), we use (25) to replace \(N_{h+W_*,\varepsilon }\) and \(L^{h+W_*}\) by \(N_{h+W_*-H_\eta ,\varepsilon } (W^\eta )\) and \(L^{h+W_*-H_\eta } (W^\eta )\) respectively.
Clearly, in the last line of (26), we can replace \(W_*\) by
Under the probability measure \({\mathbb {N}}^{(1)}_0\), if we condition on \(R_\eta \) and \(S_\eta \), \(W^\eta \) becomes independent of the pair \((W_*^{(R_\eta )},H_\eta )\), and is distributed as a Brownian snake excursion with duration \(S_\eta -R_\eta \). Therefore we can apply Proposition 10 to see that
Noting that \(A_{\delta ,\eta }\subset \{R_\eta \le \alpha \le 1-\alpha \le S_\eta , |W_*^{(R_\eta )}-H_\eta |\le \frac{h}{2}\}\), we now conclude from (26) that
and, since \(\kappa \) was arbitrary, this completes the proof. \(\square \)
7 Application to the Brownian map
Let us recall the construction of the Brownian map \((\mathbf{m}_\infty , D)\) from the Brownian snake. In the following presentation, we argue under the probability measure \(\overline{{\mathbb {N}}}^{(1)}_0\). The fact that \(\overline{{\mathbb {N}}}^{(1)}_0\) coincides with the law under \({\mathbb {N}}^{(1)}_0\) of the Brownian snake “re-rooted at its minimum” (as explained above) shows that this presentation is equivalent to the one given in [13] or in [14].
Under the measure \(\overline{{\mathbb {N}}}^{(1)}_0\), the process \((\zeta _s)_{0\le s\le 1}\) is no longer distributed as a Brownian excursion, but we can still make sense of the tree \({\mathcal {T}}_\zeta \) and we can again define the collection \((\Gamma _a)_{a\in {\mathcal {T}}_\zeta }\) by setting \(\Gamma _a=\widehat{W}_s\) if \(p_\zeta (s)=a\), exactly as we did under \({\mathbb {N}}_0\) in Sect. 3. Note that \(\Gamma _{\rho _\zeta }=0\) (we recall that \(\rho _\zeta =p_\zeta (0)\) is the root of \({\mathcal {T}}_\zeta \)) and \(\Gamma _a\ge 0\) for every \(a\in {\mathcal {T}}_\zeta \), \(\overline{{\mathbb {N}}}^{(1)}_0\) a.s. We now interpret \(\Gamma _a\) as a label assigned to the vertex \(a\). For every \(a,b\in {\mathcal {T}}_\zeta \), we then set
and
where the infimum is over all choices of the integer \(p\ge 1\) and of the elements \(a_0,a_1,\ldots ,a_p\) of \({\mathcal {T}}_\zeta \) such that \(a_0=a\) and \(a_p=b\). Then \(D\) is a pseudo-metric on \({\mathcal {T}}_\zeta \), and we consider the associated equivalence relation \(\approx \): for \(a,b\in {\mathcal {T}}_\zeta \), \(a\approx b\) if and only if \(D(a,b)=0\).
One can prove that this property is also equivalent to \(D^\circ (a,b)=0\). Informally, this means that \(a\) and \(b\) have the same label, and that one can go from \(a\) to \(b\) moving “around” the tree and encountering only vertices with a larger label.
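On a finite set of points, the infimum over chains defining \(D\) reduces to a shortest-path computation with edge weights given by \(D^\circ \). A minimal numerical sketch (the matrix of values of \(D^\circ \) is taken as given; Floyd–Warshall is one standard way to realize the chain infimum):

```python
import numpy as np

def chain_infimum(D0):
    """Given a symmetric nonnegative matrix D0 of one-step costs D0[i][j],
    return D[i][j] = inf over chains i = b_0, ..., b_p = j of the sums of
    D0[b_{k-1}][b_k]: Floyd-Warshall relaxation on the complete graph."""
    D = np.array(D0, dtype=float)
    for k in range(len(D)):
        # relax every pair (i, j) through the intermediate point k
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D
```

Identifying pairs with \(D=0\) then mimics the passage to the quotient \(\mathbf{m}_\infty ={\mathcal {T}}_\zeta \,/\!\approx \).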
The Brownian map is the quotient space \(\mathbf{m}_\infty :={\mathcal {T}}_\zeta \,/\! \approx \), which is equipped with the metric induced by \(D\), for which we keep the same notation \(D\). We write \(\Pi \) for the canonical projection from \({\mathcal {T}}_\zeta \) onto \(\mathbf{m}_\infty \). The projection \(\Pi \) is continuous (see [12, Section 2.5]). We will use the following lower bound [12, Corollary 3.2]: For every \(a,b\in {\mathcal {T}}_\zeta \),
The distinguished point of the Brownian map is \(\rho =\Pi (\rho _\zeta )\). We have then \(D(\rho ,\Pi (a))= \Gamma _a\) for every \(a\in {\mathcal {T}}_\zeta \). The volume measure \(\lambda \) on \(\mathbf{m}_\infty \) is the image of Lebesgue measure on \([0,1]\) under \(\Pi \circ p_\zeta \).
In the proof of Theorem 1, we will need the following lemma. We say that \(h\in {\mathbb {R}}\) is a local minimum of \(\mathrm{w}\in {\mathcal {W}}\) if there exists \(t\in (0,\zeta _{(\mathrm{w})})\) and \(\beta >0\), with \((t-\beta ,t+\beta )\subset (0,\zeta _{(\mathrm{w})})\), such that \(\mathrm{w}(t)=h\) and \(\mathrm{w}(t')\ge h\) for every \(t'\in (t-\beta ,t+\beta )\).
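For concreteness, a discrete analogue of this definition can be written as follows (`beta_steps` is a hypothetical stand-in for \(\beta \), and the path \(\mathrm{w}\) is sampled at integer indices):

```python
def is_local_minimum_level(w, h, beta_steps=1):
    """Discrete check of whether the level h is a local minimum of the finite
    path w: some interior index t has w[t] == h and the whole window of radius
    beta_steps around t stays strictly inside the path with values >= h."""
    n = len(w)
    for t in range(1, n - 1):
        lo, hi = t - beta_steps, t + beta_steps
        if lo < 1 or hi > n - 2:   # window must stay strictly inside (0, n-1)
            continue
        if w[t] == h and all(x >= h for x in w[lo:hi + 1]):
            return True
    return False
```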
Lemma 12
Let \(h>0\). Then \(\overline{{\mathbb {N}}}^{(1)}_0\) a.s. for every \(s\in [0,1]\), \(h\) is not a local minimum of \(W_s\).
Proof
If we replace \(\overline{{\mathbb {N}}}^{(1)}_0\) by \({\mathbb {N}}^{(1)}_0\) in the statement of the lemma, the proof is easy, by an argument already explained at the end of the proof of Lemma 8. To get the precise statement of the lemma, we need to verify that \(h+W_*\) is not a local minimum of any of the paths \(W_s\), \({\mathbb {N}}^{(1)}_0\) a.s. The fact that \(h+W_*\) is random, and of course not independent of the paths \(W_s\), makes the proof a little harder. Still, one can use arguments very similar to the proof of Proposition 11, conditioning on the event \(\{s_*<\delta \}\) and replacing \(W_*\) by \(W_*^{(R_\eta )}\) (where \(\delta \) and \(\eta \) are chosen as in the latter proof): Except on a set of small probability, one can then concentrate on the paths \(W_s\) for \(s\in [R_\eta ,S_\eta ]\), or more precisely on the paths \(W^\eta _s\) for \(s\in [0,S_\eta -R_\eta ]\), and use the same independence property as at the end of the proof of Proposition 11 to conclude. We leave the details to the reader. \(\square \)
Proof of Theorem 1
It easily follows from the formula \(D(\rho ,\Pi (a))= \Gamma _a\) and our definition of the volume measure \(\lambda \) that the profile of distances \(\Delta \) coincides with the occupation measure of the (conditioned) Brownian snake. Consequently, \(\lambda \) has a continuous density \((\mathbf {L}^h)_{h\ge 0}\) and \(\mathbf {L}^h=L^h\), \(\overline{{\mathbb {N}}}^{(1)}_0\) a.s. We then claim that, for every fixed \(h>0\) and \(\varepsilon >0\),
Once the claim is proved, the statement of the theorem follows from Proposition 11.
Say that \(a\in {\mathcal {T}}_\zeta \) is an \((h,\varepsilon )\)-upcrossing vertex if \(a=p_\zeta (s)\) where \(s\) is an upcrossing time of the Brownian snake from \(h\) to \(h+\varepsilon \). This is equivalent to saying that \(\Gamma _a=h\) and \(a\) has a descendant \(b\) such that \(\Gamma _b=h+\varepsilon \) and \(\Gamma _c>h\) for every \(c\in [\![a,b]\!]\backslash \{a\}\). Note that we have then \(D(\rho ,a)=\Gamma _a=h\). To prove our claim, we verify that \((h,\varepsilon )\)-upcrossing vertices are in one-to-one correspondence with connected components of \(B_h(\rho )^c\) that intersect \(B_{h+\varepsilon }(\rho )^c\).
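For a single real-valued sampled path, upcrossings from \(h\) to \(h+\varepsilon \) can be counted by the usual alternating scheme. The sketch below is a generic one-dimensional illustration of this count, not the tree-indexed quantity \(N_{h,\varepsilon }\) itself:

```python
def count_upcrossings(path, h, eps):
    """Count upcrossings of [h, h + eps] by a sampled path: every time the
    path, having been at or below h, next reaches h + eps counts as one."""
    count = 0
    armed = path[0] <= h   # an upcrossing is armed once the path is at/below h
    for x in path:
        if armed and x >= h + eps:
            count += 1
            armed = False
        elif not armed and x <= h:
            armed = True
    return count
```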
Let \(a\) be an \((h,\varepsilon )\)-upcrossing vertex. We define \(C_a\) as the set of all vertices \(b\in {\mathcal {T}}_\zeta \) such that \(b\) is a descendant of \(a\) and \(\Gamma _c\ge h\) for every \(c\in [\![a,b]\!]\). Note that if \(b\in C_a\), then the whole segment \([\![a,b]\!]\) is contained in \(C_a\). It follows that \(C_a\) is (path-)connected, and it is also easy to check that \(C_a\) is a closed subset of \({\mathcal {T}}_\zeta \). Furthermore, the fact that \(a\) is an \((h,\varepsilon )\)-upcrossing vertex ensures that \(C_a\) contains (at least) one vertex \(a'\) such that \(\Gamma _{a'}= h+\varepsilon \).
To simplify notation, set \({\mathcal {T}}_\zeta ^{\ge h}:=\{b\in {\mathcal {T}}_\zeta : \Gamma _b\ge h\}\). We next verify that \(C_a\) is a connected component of \({\mathcal {T}}_\zeta ^{\ge h}\). To this end, we set for every \(\delta >0\),
It is easy to verify that \(O_\delta \) is open in \({\mathcal {T}}_\zeta ^{\ge h}\). The set \(O_\delta \) is also closed. In fact, if \((b_n)\) is a sequence in \(O_\delta \) that converges to \(b\), and if, for every \(n\), \(b'_n\) is the unique vertex of \({\mathcal {T}}_\zeta \) such that \([\![a,b ]\!]\cap [\![a,b_n ]\!]= [\![a,b'_n ]\!]\), then we must have \(d_\zeta (b'_n,b)\longrightarrow 0\) as \(n\rightarrow \infty \), and since \(\Gamma _b=\lim \Gamma _{b_n} \ge h\), it follows that for \(n\) large enough we have
so that \(b\in O_\delta \) as desired. We then observe that
Indeed, let \(b\in C_a^c\). If \(b\) is a descendant of \(a\), we must have
yielding that \(b\in O_\delta ^c\) as soon as \(\delta \) is small enough. If \(b\) is not a descendant of \(a\), we observe that \([\![a,b]\!]\cap [\![\rho ,a]\!]=[\![\tilde{a}, a]\!]\), for some \(\tilde{a}\in [\![\rho ,a]\!]\) such that \(\tilde{a}\not = a\). Now recall that \(a\) has a descendant \(a'\) such that \(\Gamma _{a'}=h+\varepsilon \) and \(\Gamma _c\ge h\) for every \(c\in [\![a,a']\!]\). From Lemma 12, the values of \(\Gamma \) along \([\![\rho ,a']\!]\) cannot have a local minimum equal to \(h\), and we again obtain that
Since we know that all sets \(O_\delta \) are both open and closed in \({\mathcal {T}}_\zeta ^{\ge h}\), (28) implies that \(C_a\) is a connected component of \({\mathcal {T}}_\zeta ^{\ge h}\).
Now let \({\mathcal {C}}_a:=\Pi (C_a)\). The preceding considerations imply that \({\mathcal {C}}_a\) is a connected component of \(\Pi ({\mathcal {T}}_\zeta ^{\ge h})=B_h(\rho )^c\). Let us explain this. By the continuity of \(\Pi \), \({\mathcal {C}}_a\) is (path-)connected and closed in \(\mathbf{m}_\infty \). From (28) and a simple compactness argument, we get that
The sets \(\Pi (O_\delta )\) are closed by the continuity of \(\Pi \). Let us prove that they are also open in \(B_h(\rho )^c\). Since we already know that \({\mathcal {T}}_\zeta ^{\ge h}\backslash O_\delta \) is closed, this will follow from the equality
In this equality, the inclusion \(\subset \) is obvious. To prove the reverse inclusion, we need to verify that \( \Pi (O_\delta )\cap \Pi ({\mathcal {T}}_\zeta ^{\ge h}\backslash O_\delta )=\varnothing \). Let \(b\in O_\delta \) and \(\tilde{b}\in {\mathcal {T}}_\zeta ^{\ge h}\backslash O_\delta \). We have then \(\Gamma _b\ge h\), \(\Gamma _{\tilde{b}}\ge h\), and
Since \([\![b,\tilde{b} ]\!]\supset [\![a,\tilde{b} ]\!]\backslash [\![a,b ]\!]\), it follows that
so that \(\Pi (b)\not = \Pi (\tilde{b})\) by (27). We have thus proved that the sets \(\Pi (O_\delta )\) are both closed and open in \(B_h(\rho )^c\), and (29) now implies that \({\mathcal {C}}_a\) is a connected component of \(B_h(\rho )^c\).
Summarizing, with each \((h,\varepsilon )\)-upcrossing vertex \(a\) we can associate a connected component \({\mathcal {C}}_a\) of \(B_h(\rho )^c\) that intersects \(B_{h+\varepsilon }(\rho )^c\). If \(a\) and \( \tilde{a}\) are two distinct \((h,\varepsilon )\)-upcrossing vertices, we have \({\mathcal {C}}_a \not ={\mathcal {C}}_{\tilde{a}}\) because otherwise \(\tilde{a}\) would be a descendant of \(a\) and \(a\) would be a descendant of \(\tilde{a}\), which is only possible if \(a=\tilde{a}\). So it only remains to show that any connected component of \(B_h(\rho )^c\) that intersects \(B_{h+\varepsilon }(\rho )^c\) is of this form. Let \({\mathcal {C}}\) be such a connected component and let \(x\in {\mathcal {C}}\cap B_{h+\varepsilon }(\rho )^c\). Choose \(b\in {\mathcal {T}}_\zeta \) such that \(\Pi (b)=x\). Then \(\Gamma _b\ge h+\varepsilon \). By a continuity argument, there exists a unique vertex \(a\in [\![\rho ,b]\!]\) such that \(\Gamma _a=h\) and \(\Gamma _c>h\) for every \(c\in [\![a,b]\!]\backslash \{a\}\). Then \(a\) is an \((h,\varepsilon )\)-upcrossing vertex, and \({\mathcal {C}}= {\mathcal {C}}_a\). This completes the proof. \(\square \)
Remark
With the notation of the preceding proof, set \(C^\circ _a:=\{b\in C_a: \Gamma _b>h\}\) and \({\mathcal {C}}^\circ _a:=\Pi (C^\circ _a)\), for every \((h,\varepsilon )\)-upcrossing vertex \(a\). Then the sets \({\mathcal {C}}^\circ _a\) are open in \(\mathbf{m}_\infty \) and these sets, when \(a\) varies among all \((h,\varepsilon )\)-upcrossing vertices, are exactly those connected components of the complement of the closed ball \(\overline{B}_h(\rho )\) that intersect \(B_{h+\varepsilon }(\rho )^c\). So we could have stated Theorem 1 in terms of connected components of the open set \(\overline{B}_h(\rho )^c\), and the preceding proof would have been a little simpler. We chose to deal with connected components of the complement of the open ball mainly in view of the connection with the Brownian cactus [5, Section 2.5]. As a final remark, it is not hard to verify that the boundary of \({\mathcal {C}}^\circ _a\), \(\partial {\mathcal {C}}^\circ _a =\Pi (\{b\in C_a:\Gamma _b=h\})\), is a simple loop in \(\mathbf{m}_\infty \). By the Jordan curve theorem (recall that \(\mathbf{m}_\infty \) is homeomorphic to the two-dimensional sphere), all sets \({\mathcal {C}}^\circ _a\) are homeomorphic to the open unit disk.
Notes
The argument also goes through if excursions are listed in chronological order, but then we need a slightly more precise version of the special Markov property.
References
Addario-Berry, L., Albenque, M.: The scaling limit of random simple triangulations and random simple quadrangulations. Preprint (2013), available at: arXiv:1306.5227
Aldous, D.: Tree-based models for random distribution of mass. J. Stat. Phys. 73, 625–641 (1993)
Beltran, J., Le Gall, J.F.: Quadrangulations with no pendant vertices. Bernoulli 19, 1150–1175 (2013)
Bousquet-Mélou, M., Janson, S.: The density of the ISE and local limit laws for embedded trees. Ann. Appl. Probab. 16, 1597–1632 (2006)
Curien, N., Le Gall, J.-F., Miermont, G.: The Brownian cactus I. Scaling limits of discrete cactuses. Ann. Inst. H. Poincaré Probab. Stat. 49, 340–373 (2013)
Dawson, D.A., Perkins, E.A.: Historical processes. Mem. Am. Math. Soc. 454 (1991)
Dynkin, E.B.: Path processes and historical superprocesses. Probab. Theory Relat. Fields 90, 1–36 (1991)
Fleischmann, K.: Critical behavior of some measure-valued processes. Math. Nachr. 135, 131–147 (1988)
Itô, K., McKean, H.P.: Diffusion Processes and Their Sample Paths. Springer, Berlin (1965)
Le Gall, J.-F.: The Brownian snake and solutions of \(\Delta u = u^2\) in a domain. Probab. Theory Relat. Fields 102, 393–432 (1995)
Le Gall, J.-F.: Spatial branching processes, random snakes and partial differential equations. In: Lectures in Mathematics ETH Zürich. Birkhäuser, Basel (1999)
Le Gall, J.-F.: Geodesics in large planar maps and in the Brownian map. Acta Math. 205, 287–360 (2010)
Le Gall, J.-F.: Uniqueness and universality of the Brownian map. Ann. Probab. 41, 2880–2960 (2013)
Le Gall, J.-F., Miermont, G.: Scaling limits of random trees and planar maps. Probability and statistical physics in two and more dimensions, pp. 155–211. Clay Math. Proc., vol. 15. Amer. Math. Soc., Providence (2012)
Le Gall, J.-F., Weill, M.: Conditioned Brownian trees. Ann. Inst. H. Poincaré Probab. Stat. 42, 455–489 (2006)
Miermont, G.: The Brownian map is the scaling limit of uniform random plane quadrangulations. Acta Math. 210, 319–401 (2013)
Perkins, E.: Dawson–Watanabe superprocesses and measure-valued diffusions. In: Lectures on Probability Theory and Statistics (Saint-Flour, 1999), pp. 125–324. Lecture Notes in Math., vol. 1781. Springer, Berlin (2002)
Revuz, D., Yor, M.: Continuous martingales and Brownian motion. Springer, Berlin (1991)
Sugitani, S.: Some properties for the measure-valued branching diffusion processes. J. Math. Soc. Japan 41, 437–462 (1989)
Acknowledgments
I thank Nicolas Curien and Grégory Miermont for fruitful conversations about this work, which is a continuation of our previous article in collaboration [5]. I am also indebted to two anonymous referees for useful suggestions and to Igor Kortchemski for his help in computing the constant of Theorem 1.
Le Gall, JF. The Brownian cactus II: upcrossings and local times of super-Brownian motion. Probab. Theory Relat. Fields 162, 199–231 (2015). https://doi.org/10.1007/s00440-014-0569-5
Mathematics Subject Classification
- Primary 60J80
- 60G57
- Secondary 60J55