1 Introduction

Branching Brownian motion is the subject of a large literature that one can trace back at least to [18]. The connection of this probabilistic model with the well-known F-KPP equation has in particular attracted much interest from both the probabilistic and the analytic side starting with the seminal studies of McKean [24], Bramson [6], Lalley and Sellke [23], Chauvin and Rouault [9] and more recently with works by Harris [15], Kyprianou [22] and Harris, Harris and Kyprianou [16].

In the present work we consider a continuous-time branching Brownian motion with quadratic branching mechanism: the system starts with a single particle at the origin which follows a Brownian motion with drift \(\varrho \) and variance \(\sigma ^2>0\). After an exponential time with parameter \(\lambda >0\), the particle splits into two new particles, each of which starts an independent copy of the same process from its place of birth. Each of them thus moves according to a Brownian motion with drift \(\varrho \) and variance \(\sigma ^2\), splits into two after an exponential time with parameter \(\lambda \), and so on.
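Although no simulations are used in this paper, the dynamics just described are straightforward to simulate, exactly in the branching times and with Gaussian increments in between. The following Python sketch is purely illustrative (the function name `bbm` and its defaults, which match the normalization \(\lambda =1, \varrho =2, \sigma =\sqrt{2}\) fixed below, are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def bbm(t, lam=1.0, rho=2.0, sigma=np.sqrt(2.0)):
    """Sorted particle positions at time t of a binary branching Brownian
    motion with branching rate lam, drift rho and variance sigma**2,
    started from a single particle at the origin."""
    stack = [(0.0, t)]          # (current position, time left to run)
    positions = []
    while stack:
        x, remaining = stack.pop()
        tau = rng.exponential(1.0 / lam)        # next branching time
        if tau >= remaining:                    # particle survives to time t
            x += rho * remaining + sigma * np.sqrt(remaining) * rng.standard_normal()
            positions.append(x)
        else:                                   # branch into two particles
            x += rho * tau + sigma * np.sqrt(tau) * rng.standard_normal()
            stack.append((x, remaining - tau))
            stack.append((x, remaining - tau))
    return np.sort(np.array(positions))         # X_1(t) <= ... <= X_{N(t)}(t)

print(bbm(3.0))
```

Since \(\mathbf{E}(N(t))=\mathrm{e }^{\lambda t}\), the expected cost grows exponentially in \(t\), so only moderate horizons are practical.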

We write \(X_1(t) \le \cdots \le X_{N(t)} (t)\) for the positions of the particles of the branching Brownian motion alive at time \(t\) enumerated from left to right (where \(N(t)\) is the number of particles alive at time \(t\)). The corresponding random point measure is denoted by

$$\begin{aligned} \fancyscript{N}(t) := \sum _{i=1,\ldots , N(t)} \delta _{X_i(t)}. \end{aligned}$$

We will work under conditions on \(\lambda , \varrho , \sigma ^2\) which ensure that for all \(t>0\),

$$\begin{aligned} \mathbf{E}\left[\sum _{i =1, \ldots , N(t)} \mathrm{e }^{-X_i(t)}\right] =1, \quad \mathbf{E}\left[\sum _{i =1, \ldots , N(t)} X_i(t) \mathrm{e }^{-X_i(t)}\right] =0. \end{aligned}$$
(1.1)

Since \(\mathbf{E}(N(t))=\mathrm{e }^{\lambda t}\), for any measurable function \(F\) and each \(t>0\),

$$\begin{aligned} \mathbf{E}\left[\sum _{i =1, \ldots , N(t)} F(X_{i,t}(s), \, s\in [0, \, t])\right] = \mathrm{e }^{\lambda t}\, \mathbf{E}\left[F(\sigma B_s + \varrho s, \, s\in [0, \, t])\right], \end{aligned}$$

where, for each \(i \in \{1,\ldots , N(t)\}\), we let \(X_{i,t}(s), \, s\in [0, \, t]\), be the position, at time \(s\), of the unique ancestor of \(X_i(t)\), and \(B\) is a standard Brownian motion. Thus the two equations in (1.1) become \(\varrho = \lambda + {\sigma ^2\over 2}\) and \(\varrho = \sigma ^2\). Hence the usual conditions amount to supposing \(\varrho = \sigma ^2= 2\lambda \). In this paper we always assume \(\lambda =1, \varrho =2\) and \(\sigma =\sqrt{2}\). The choice of a binary branching is arbitrary. Our results certainly hold true for a more general class of branching mechanisms, e.g. when the law of the number of offspring is bounded or has a finite second moment. For the sake of clarity we only consider the simple case of binary branching, which already contains the full phenomenology.
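For the reader's convenience, here is the Gaussian computation behind the two conditions (apply the display above with \(F\) a function of the endpoint only):

$$\begin{aligned} \mathbf{E}\left[\sum _{i =1, \ldots , N(t)} \mathrm{e }^{-X_i(t)}\right]&= \mathrm{e }^{\lambda t}\, \mathbf{E}\left[\mathrm{e }^{-\sigma B_t - \varrho t}\right] = \mathrm{e }^{(\lambda -\varrho + \sigma ^2/2) t},\\ \mathbf{E}\left[\sum _{i =1, \ldots , N(t)} X_i(t)\, \mathrm{e }^{-X_i(t)}\right]&= \mathrm{e }^{\lambda t}\, \mathbf{E}\left[(\sigma B_t + \varrho t)\, \mathrm{e }^{-\sigma B_t - \varrho t}\right] = t(\varrho -\sigma ^2)\, \mathrm{e }^{(\lambda -\varrho +\sigma ^2/2) t}, \end{aligned}$$

using \(\mathbf{E}[\mathrm{e }^{-\sigma B_t}]=\mathrm{e }^{\sigma ^2 t/2}\) and \(\mathbf{E}[B_t \mathrm{e }^{-\sigma B_t}]=-\sigma t\, \mathrm{e }^{\sigma ^2 t/2}\). The first expectation equals \(1\) for all \(t\) if and only if \(\varrho = \lambda +\sigma ^2/2\), and the second then vanishes if and only if \(\varrho =\sigma ^2\).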

The position \(X_{N(t)}(t)\) of the rightmost particle of the branching Brownian motion has been much studied (see [5, 6, 23, 24]). In these classical works, the authors usually assume that \(\varrho =0, \lambda = \sigma =1\). We recall some of their results adapted to our normalization. In particular, instead of the rightmost particle we prefer to work with the position \(X_1(t)\) of the leftmost particle.

Bramson [6] shows that there exists a constant \(C_B \in \mathbb{R }\) and a real valued random variable \(W\) such that

$$\begin{aligned} X_1(t) - m_t \;\; \xrightarrow{\ \mathrm{law}\ } \;\; W, \qquad t \rightarrow \infty , \end{aligned}$$
(1.2)

where

$$\begin{aligned} m_t := {3 \over 2} \log t + C_B \end{aligned}$$
(1.3)

and furthermore the distribution function \(\mathbf{P}(W\le x) =w(x)\) is a solution to the critical F-KPP travelling wave equation

$$\begin{aligned} w^{\prime \prime } + 2w^{\prime }+w(w-1)=0. \end{aligned}$$

Lalley and Sellke’s paper [23] can be seen as the real starting point of the present work. Realizing that the convergence (1.2) cannot hold in an ergodic sense, they prove the following result. Define

$$\begin{aligned} Z(t) : = \sum _{i=1,\ldots , N(t)} X_i(t) \mathrm{e }^{-X_i(t)}. \end{aligned}$$
(1.4)

We know that \(\mathbf{E}(Z(t)) = 0\) by (1.1) and it is not hard to see that \((Z(t), t\ge 0)\) is in fact a martingale (the so-called derivative martingale). It can be shown that

$$\begin{aligned} Z :=\lim _{t \rightarrow \infty } Z(t) \end{aligned}$$
(1.5)

exists, is finite and strictly positive with probability 1. The main result of Lalley and Sellke’s paper is then that \(\exists C>0\) such that

$$\begin{aligned} \lim _{s \rightarrow \infty } \lim _{t \rightarrow \infty } \mathbf{P}(X_1(t+s) - m_{t+s} \ge x |{\fancyscript{F}}_s) = \exp \left(- CZ\mathrm{e }^{x}\right) \end{aligned}$$

where \({\fancyscript{F}}_t\) is the natural filtration of the branching Brownian motion. As a consequence,

$$\begin{aligned} \mathbf{P}(W \le x) \;\sim \; C \, |x| \mathrm{e }^{x}, \qquad x\rightarrow -\infty . \end{aligned}$$
(1.6)

Since, conditionally on \(Z\), the function \(y \mapsto \exp \left(- CZ \mathrm{e }^{y} \right) = \exp \left(- \mathrm{e }^{y +\log (CZ)} \right)\) is the tail distribution function of \(-\log (CZ)\) minus a Gumbel random variable, this suggests the following picture, conjectured by Lalley and Sellke, for the front of branching Brownian motion. The random variable \(X_1(t) - m_t\) converges in distribution and its limit is the sum of two terms. The first one is \(-\log (CZ)\), which depends on the limit of the derivative martingale, while the second term is simply minus a Gumbel random variable. Brunet and Derrida [8] interpret this as a random delay (which builds up early in the process and settles down to some value) plus a fluctuation term around this position.

In the last section of [23], the authors conjecture that more generally, the point measure of particle positions relative to \(m_{t}-\log (CZ)\)

$$\begin{aligned} {\bar{{\fancyscript{N}}}}(t) := \sum _{i=1, \ldots , N(t)}\delta _{X_i(t) - m_t +\log (CZ)} \end{aligned}$$

converges to a stationary distribution.

In the present work we prove that \({\bar{{\fancyscript{N}}}}(t)\) converges to a stationary distribution which we describe precisely. We show that the structure of this limit point measure is a decorated Poisson point measure, i.e., a Poisson point measure on the real line where each atom is replaced by an independent copy of a certain point measure shifted by the position of the atom. Another proof and description has been obtained independently by Arguin et al. [4] (see Sect. 3).

2 Main results

Throughout the paper, all point measures are, as in the setting of Kallenberg [21], considered as elements of the space \(\mathcal M \) of Radon measures on \(\mathbb R \) equipped with the vague topology, that is, we say that \(\mu _n\) converges in distribution to \(\mu \) if and only if \(\int f \, \mathrm{d }\mu _n \rightarrow \int f \, \mathrm{d }\mu \) for any real continuous function \(f\) with compact support. By Theorem 4.2 (iii) p. 32 of [21], it is equivalent to say that \((\mu _n(A_j),\,1\le j\le k)\) converges in distribution to \((\mu (A_j),\,1\le j\le k)\) for any intervals \((A_j,\,1\le j\le k)\). The space \(C(\mathbb{R }_+, \, \mathbb{R })\) (or sometimes, \(C([0, \, t], \, \mathbb{R })\)) is endowed with the topology of uniform convergence on compact sets. If \(F\) is a function on \(C(\mathbb{R }_+, \, \mathbb{R })\), then for any continuous function \((Z_s, \, s\in [0, \, t])\), we define \(F(Z_s, \, s\in [0, \, t])\) as \(F(\widetilde{Z}_s, \, s\ge 0)\), with \(\widetilde{Z}_s := Z_{\min \{s, \, t\}}\).

We now introduce two point measures which are the main focus of this work. First, consider the point measure of the particles seen from \(m_t - \log (CZ)\) and enumerated from the leftmost:

$$\begin{aligned} {\bar{{\fancyscript{N}}}} (t) = {\fancyscript{N}}(t) - m_t + \log (CZ) = \sum _{i=1, \ldots , N(t)} \delta _{X_i(t) - m_t + \log (CZ)}. \end{aligned}$$

We will also sometimes want to consider the particles as seen from the leftmost one:

$$\begin{aligned} {\fancyscript{N}}^{\prime }(t) := \sum _{i=1, \ldots , N(t)} \delta _{X_i(t) - X_1(t)} \end{aligned}$$

Theorem 2.1

As \(t \rightarrow \infty \) the pair \(\{{\bar{{\fancyscript{N}}}}(t), Z(t) \}\) converges jointly in distribution to \( \{{\fancyscript{L}}, Z \}\) where \(Z\) is as in (1.5), \({\fancyscript{L}}\) and \(Z\) are independent and \({\fancyscript{L}}\) is obtained as follows.

  (i)

    Define \({\fancyscript{P}}\) a Poisson point measure on \(\mathbb{R }\), with intensity measure \( \mathrm{e }^{x}\, \mathrm{d }x\).

  (ii)

    For each atom \(x\) of \({\fancyscript{P}}\), we attach a point measure \({\fancyscript{Q}}^{(x)}\), where the \({\fancyscript{Q}}^{(x)}\) are independent copies of a certain point measure \({\fancyscript{Q}}\).

  (iii)

    \(\fancyscript{L}\) is then the point measure corresponding to the sum of all \(x+\fancyscript{Q}^{(x)}\), i.e.,

    $$\begin{aligned} \fancyscript{L} := \sum _{x \in {\fancyscript{P}}} \sum _{y \in {\fancyscript{Q}}^{(x)}}\delta _{x+y} \end{aligned}$$

    where \(x\in {\fancyscript{P}}\) means “\(x\) is an atom of \({\fancyscript{P}}\)”.

Since the leftmost atom of \(\fancyscript{P}\) is distributed as minus a Gumbel random variable (its distribution function is \(x\mapsto 1-\exp (-\mathrm{e }^{x})\)), this implies that the law of minus a Gumbel random variable is the weak limit of \(X_1(t) - m_t + \log (CZ)\). The following corollary, concerning the point measure seen from the leftmost position, contains strictly less information than the theorem.
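To illustrate the structure of \({\fancyscript{L}}\), here is a minimal Python sketch of a decorated exponential Poisson point measure. Since the law of \({\fancyscript{Q}}\) is only described in Theorem 2.3 below, the function `decoration` is a deliberately arbitrary placeholder (a point measure on \(\mathbb{R }_+\) with an atom at \(0\)), not the law of \({\fancyscript{Q}}\); only the Poissonian skeleton is faithful.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_ppp(A=12.0, B=3.0):
    """Atoms of a Poisson point measure with intensity e^x dx restricted
    to the window (-A, B]; the window contains the leftmost atom of the
    full measure with overwhelming probability."""
    mass = np.exp(B) - np.exp(-A)
    n = rng.poisson(mass)
    u = rng.uniform(size=n)
    return np.log(np.exp(-A) + u * mass)       # inverse-CDF sampling

def decoration(mean_size=3.0):
    """Placeholder for Q: an atom at 0 plus a few nonnegative points.
    NOT the law of Q -- purely illustrative."""
    return np.concatenate(([0.0], rng.exponential(1.0, size=rng.poisson(mean_size))))

def sample_L():
    atoms = []
    for x in exp_ppp():
        atoms.extend(x + decoration())          # shift each copy by its atom
    return np.sort(np.array(atoms))

# the leftmost atom of P has distribution function 1 - exp(-e^x), i.e. it is
# log(e) for a standard exponential e; its mean is minus Euler's constant:
mins = np.array([exp_ppp().min() for _ in range(20000)])
print(mins.mean(), -np.euler_gamma)             # both close to -0.5772
```

The last two lines check the Gumbel fact numerically: \(\mathbf{P}(\min {\fancyscript{P}} > x) = \exp (-\int _{-\infty }^x \mathrm{e }^u \, \mathrm{d }u) = \exp (-\mathrm{e }^{x})\), so \(\min {\fancyscript{P}}\) has the law of \(\log \mathbf{e}\) with \(\mathbf{e}\) a standard exponential variable.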

Corollary 2.2

As \(t \rightarrow \infty \) the point measure \({\fancyscript{N}}^{\prime }(t)\) converges in distribution to the point measure \({\fancyscript{L}}^{\prime }\) obtained by replacing the Poisson point measure \({\fancyscript{P}}\) in step (i) above by \({\fancyscript{P}}^{\prime }\) described in step (i)\(^{\prime }\) below:

  • (i)\(^{\prime }\) Let \(\mathbf{e}\) be a standard exponential random variable. Conditionally on \(\mathbf{e},\) define \({\fancyscript{P}}^{\prime }\) to be a Poisson point measure on \(\mathbb{R }_+\), with intensity measure \(\mathbf{e} \mathrm{e }^{x} \mathbf{1}_{\mathbb{R }_+}(x) \, \mathrm{d }x\), to which we add an atom at \(0\).

The decoration point measure \({\fancyscript{Q}}^{(x)}\) remains the same.

The variable \(Z\) is not \({\fancyscript{F}}_t\)-measurable, and in this sense Theorem 2.1 is a conditional statement. However, it is clear that if one replaces \({\bar{{\fancyscript{N}}}}(t)\) by

$$\begin{aligned} {\hat{{\fancyscript{N}}}}(t) := {\fancyscript{N}}(t) - m_t + \log (CZ(t)) = \sum _{i=1, \ldots , N(t)} \delta _{X_i(t) - m_t + \log (CZ(t))} \end{aligned}$$

which is \({\fancyscript{F}}_t\)-measurable, then the same result still holds.

Theorem 2.1 above should not be considered a new result when the decoration point measure \({\fancyscript{Q}}\) is not specified. Indeed, the convergence to a limiting point process was already implicit in the results of Brunet and Derrida [7] and is also proved independently in [4] by Arguin et al. See Sect. 3 for a detailed discussion.

We next give a precise description of the decoration point measure \({\fancyscript{Q}}\) which is the main result of the present work. For each \(s \le t\), recall that \(X_{1,t}(s)\) is the position at time \(s\) of the ancestor of \(X_1(t)\), i.e., \(s\mapsto X_{1,t}(s)\) is the path followed by the leftmost particle at time \(t\). We define

$$\begin{aligned} Y_t(s) := X_{1,t}(t-s) - X_1(t), \qquad s\in [0,t] \end{aligned}$$

the time reversed path back from the final position \(X_1(t)\). Let us write \(t\ge \tau _1(t)>\tau _2(t)>\ldots \) for the (finite number of) successive splitting times of branching along the trajectory \(X_{1,t}(s), s\le t\) (enumerated backward). We define \(\fancyscript{N}_i(t)\) to be the point measure corresponding to the set of all particles at time \(t\) which have branched off from \(X_{1,t}\) at time \(\tau _i(t)\) relative to the final position \(X_1(t)\) (see Fig. 1). We will also need the notation \(\tau _{i,j}(t)\) which is the time at which \(X_i(t)\) and \(X_j(t)\) share their most recent common ancestor. Observe that

$$\begin{aligned} \fancyscript{N}_i(t) = \sum _{j \le N(t) : \tau _{1,j}(t) = \tau _i(t)} \delta _{X_j(t) - X_1(t)}. \end{aligned}$$
Fig. 1 \((Y, {\fancyscript{Q}})\) is the limit of the path \(s\mapsto X_{1,t}(t-s)-X_1(t)\) and of the points that have recently branched off from \(X_{1,t}\)

We then define

$$\begin{aligned} {\fancyscript{Q}}(t,\zeta ) := \delta _0 +\sum _{i : \tau _i(t) >t-\zeta }{\fancyscript{N}}_i(t) \end{aligned}$$

i.e., the point measure of particles at time \(t\) which have branched off \(X_{1,t}(s)\) after time \(t-\zeta \), including the particle at \(X_1(t)\) itself.

We will first show that \(((Y_t(s), s \in [0,t]), {\fancyscript{Q}}(t,\zeta ))\) converges jointly in distribution (by first letting \(t \rightarrow \infty \) and then \(\zeta \rightarrow \infty \)) towards a limit \(((Y(s), s\ge 0), {\fancyscript{Q}})\) where the second coordinate is our point measure \({\fancyscript{Q}}\) which is described by growing conditioned branching Brownian motions born at a certain rate on the path \(Y\). We first describe the limit \(((Y(s), s\ge 0), {\fancyscript{Q}})\) and then we state the precise convergence result.

The following family of processes indexed by a real parameter \(b>0\) plays a key role in this description. Let \(B:= (B_t, \, t\ge 0)\) be a standard Brownian motion and let \(R:= (R_t, \, t\ge 0)\) be a three-dimensional Bessel process started from \(R_0:=0\) and independent of \(B\) (see Fig. 2). Let us define \(T_b := \inf \{t\ge 0: \, B_t =b\}\). For each \(b >0\), we define the process \(\Gamma ^{(b)}\) as follows:

$$\begin{aligned} \Gamma ^{(b)}_s := {\left\{ \begin{array}{ll} B_s,&\text{if } s\in [0, \, T_b], \\ b-R_{s-T_b},&\text{if } s\ge T_b. \\ \end{array}\right.} \end{aligned}$$
(2.1)
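The process \(\Gamma ^{(b)}\) is easy to simulate on a grid: run a Brownian path until it first exceeds \(b\) (a grid approximation of \(T_b\), with a small overshoot), then glue on \(b\) minus an independent Bessel(3) path, realized as the norm of a three-dimensional Brownian motion. A minimal sketch; the horizon `after` for the Bessel part is an arbitrary truncation of ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_b(b, after=20.0, dt=1e-3, block=100000):
    """Grid approximation of Gamma^{(b)}: a standard Brownian motion up to
    T_b, then b minus a three-dimensional Bessel process started at 0."""
    B = np.array([0.0])
    while B.max() < b:                         # extend the path until b is hit
        inc = np.sqrt(dt) * rng.standard_normal(block)
        B = np.concatenate((B, B[-1] + np.cumsum(inc)))
    k = np.flatnonzero(B >= b)[0]              # grid version of T_b
    W = np.cumsum(np.sqrt(dt) * rng.standard_normal((int(after / dt), 3)), axis=0)
    R = np.linalg.norm(W, axis=1)              # Bessel(3) from 0, norm of 3-d BM
    return np.concatenate((B[:k + 1], b - R))  # values on a grid of mesh dt

path = gamma_b(1.0)
```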

Let us define

$$\begin{aligned} G_t(x) := \mathbf{P}_0(X_1(t) \le x) =\mathbf{P}_{-x} (X_1(t) \le 0) \end{aligned}$$

the probability that at time \(t\) some particle lies to the left of \(x\), where we write \(\mathbf{P}_x\) for the law of the branching Brownian motion started from one particle at \(x\). Hence, by (1.2) we see that \(G_t (x+m_t) \rightarrow \mathbf{P}(W \le x)\).

Fig. 2 The process \(\Gamma ^{(b)}\)

We can now describe the law of the backward path \(Y\): for any measurable set \(A\) of \(C(\mathbb{R }_+, \, \mathbb{R })\) and \(b \ge 0\),

$$\begin{aligned} \mathbf{P}(Y\in A, -\inf _{s \ge 0}Y(s) \in \mathrm{d }b) = {1\over c_1} \mathbf{E}\left[\mathrm{e }^{-2 \int _0^\infty G_v (\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf 1 _{-\sigma \Gamma ^{(b)} \in A} \right] \mathrm{d }b, \end{aligned}$$

where

$$\begin{aligned} c_1 := \int _0^\infty \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty G_v(\sigma \Gamma _v^{(b)})\, \mathrm{d }v} \Big ] \, \mathrm{d }b \end{aligned}$$

(observe that by Eq. (6.7) this constant is finite).

Observe that \(-\inf _{s \ge 0}Y(s)\) is a random variable with values in \((0, \, \infty )\) whose density is given by

$$\begin{aligned} \mathbf{P}\left(-\inf _{s \ge 0}Y(s) \in \!\, \mathrm{d }b\right) = {1\over c_1}\mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty G_v(\sigma \Gamma _v^{(b)})\, \mathrm{d }v} \Big ] \, \mathrm{d }b. \end{aligned}$$

Now, conditionally on the path \(Y\), we let \(\pi \) be a Poisson point process on \([0,\infty )\) with intensity \(2 \big (1 - G_t(-Y(t))\big ) \, \mathrm{d }t = 2 \big (1 - \mathbf{P}_{Y(t)} (X_1(t) <0)\big ) \, \mathrm{d }t\). For each point \(t\in \pi \) start an independent branching Brownian motion \(({\fancyscript{N}}^*_{Y(t)}(u), u\ge 0)\) at position \(Y(t)\) conditioned to have \(\min {\fancyscript{N}}^*_{Y(t)}(t) >0\). Then define \({\fancyscript{Q}}:= \delta _0+ \sum _{t \in \pi } {\fancyscript{N}}^*_{Y(t)}(t)\).
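Given the path \(Y\) on a grid and a horizon \(\zeta \), this construction can be sampled without knowing \(G_t\) explicitly: attach to each point \(t\) of a rate-\(2\) Poisson process an unconditioned branching Brownian motion started at \(Y(t)\), and keep the point if and only if its cloud is strictly positive at time \(t\). This single rejection step realizes both the thinning by \(2 \big (1 - G_t(-Y(t))\big )\, \mathrm{d }t\) and the conditioning \(\{\min {\fancyscript{N}}^*_{Y(t)}(t) >0\}\). The sketch below reuses the `bbm` routine from the introduction; as a purely structural stand-in for \(Y\) one may plug in \(-\sigma \Gamma ^{(b)}\) from the previous sketch, which ignores the exponential change of measure in the true law of \(Y\).

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_Q(Y, zeta=6.0, dt=1e-3):
    """Structural sketch of the decoration Q. `Y` is a grid path of mesh dt
    covering [0, zeta]; rate-2 Poisson points t carry independent BBMs
    started at Y(t), kept iff the cloud is positive at time t (this is the
    thinning-plus-conditioning step). Assumes bbm() from Sect. 1's sketch."""
    atoms = [0.0]                          # the particle at X_1(t) itself
    t = rng.exponential(0.5)               # rate-2 Poisson process on [0, zeta]
    while t < zeta:
        cloud = Y[int(t / dt)] + bbm(t)    # unconditioned BBM from Y(t)
        if cloud.min() > 0.0:              # rejection = conditioning on min > 0
            atoms.extend(cloud)
        t += rng.exponential(0.5)
    return np.sort(np.array(atoms))
```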

Theorem 2.3

The following convergence holds jointly in distribution:

$$\begin{aligned} \lim _{\zeta \rightarrow \infty } \lim _{t \rightarrow \infty } ((Y_t(s), s\in [0,t]), \, {\fancyscript{Q}}(t,\zeta ), \, X_{1}(t)-m_t) = ((Y(s), s\ge 0), \, {\fancyscript{Q}}, \, W), \end{aligned}$$

where the random variable \(W\) is independent of the pair \(((Y(s),s\ge 0), \, {\fancyscript{Q}})\), and \({\fancyscript{Q}}\) is the point measure which appears in Theorem 2.1.

Observe that the parameter \(\zeta \) only matters for the decoration point measure in the second coordinate.

The following Theorem 2.4 characterizes the joint distribution of the path \(s \mapsto X_{1,t}(s)\) that the particle which is the leftmost at time \(t\) has followed, of the point measures of the particles to its right, and of the times at which these particles have split in the past, all in terms of a Brownian motion functional. The proof borrows some ideas from [1] but is more intuitive in the present setting of branching Brownian motion. Moreover, it also serves as a first step in the (much) more involved proof of Theorem 2.3 in Sect. 6.

For any positive measurable functional \(F : C([0,t],\mathbb{R }) \mapsto \mathbb{R }_+\) and any positive measurable function \(f:[0, \, t] \rightarrow \mathbb{R }_+\), for \(n \in \mathbb{N }, (\alpha _1, \ldots , \alpha _n) \in \mathbb{R }_+^n\) and \(A_1,\ldots , A_n\) a collection of Borel subsets of \(\mathbb{R }_+\) define

$$\begin{aligned} I(t) := \mathbf{E}\Big \{F(X_{1,t}(s), \, s\in [0, \, t])\, \exp \Big (- \sum _i f(t-\tau _i(t))\, \sum _{j=1}^n \alpha _j \int _{A_j} \, \mathrm{d }\fancyscript{N}_i(t) \Big ) \Big \}, \end{aligned}$$

where for a point measure \({\fancyscript{N}}\) and a set \(A\) we write \(\int _A \, \mathrm{d }{\fancyscript{N}}\) in place of \( {\fancyscript{N}}(A)\).

For each \(r\ge 0\) and every \(x\in \mathbb{R }\) recall that \(G_r(x) = \mathbf{P}\{X_1(r) \le x\},\) and further define

$$\begin{aligned} \overline{G}_r^{(f)}(x)&:= \mathbf{E}\Big [\mathrm{e }^{-f(r) \sum _{j=1}^n \alpha _j [\int _{x+ A_j} \, \mathrm{d }\fancyscript{N}(r)]} \, \mathbf{1}_{\{X_1(r) \ge x\}} \Big ]. \end{aligned}$$

Hence, when \(f \equiv 0\) we have \(\overline{G}_r^{(f)}(x) = 1-G_r(x)\).

Theorem 2.4

We have

$$\begin{aligned} I(t) =\mathbf{E}\Big [\mathrm{e }^{\sigma B_t} \, F(\sigma B_s, \, s\in [0, \, t]) \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_{t-s}^{(f)}(\sigma B_t-\sigma B_s)] \, \mathrm{d }s} \Big ], \end{aligned}$$
(2.2)

where \(B\) in the expectation above is a standard Brownian motion. In particular, the path \((s \mapsto X_{1,t}(s), 0\le s\le t)\) is a standard Brownian motion in a potential:

$$\begin{aligned} \mathbf{E}\Big [F(X_{1,t}(s), \, s\in [0, \, t]) \Big ]=\mathbf{E}\Big [\mathrm{e }^{\sigma B_t} \, F(\sigma B_s, \, s\in [0, \, t]) \, \mathrm{e }^{- 2 \int _0^t G_{t-s}(\sigma B_t-\sigma B_s) \, \mathrm{d }s} \Big ]. \end{aligned}$$
(2.3)

This result, which can be seen as a Feynman–Kac representation formula, is hardly surprising and is reminiscent of the approach in Bramson's work.

In addition to this “Brownian motion in a potential” description we also present some properties of a typical path \((X_{1,t}(s), \, s\in [0, \, t])\). Let us fix a constant \(\eta >0\) (that we will take large enough in a moment). For \(t \ge 1\) and \(x>0\), we define the good event \(A_{t}(x,\eta )\) by

$$\begin{aligned} A_{t}(x,\eta ) := E_1(x,\eta )\cap E_2(x,\eta ) \cap E_3(x,\eta ) \end{aligned}$$
(2.4)

where the events \(E_i\) (see Fig. 3) are defined by

$$\begin{aligned} E_1(x,\eta )&:= \left\{ \forall i \text{ s.t. } |X_i(t)-m_t| <\eta :\ \min _{s\in [0,t]} X_{i,t}(s) \ge -x,\ \min _{s\in [t/2,t]} X_{i,t}(s) \ge m_t -x \right\} ,\\ E_2(x,\eta )&:= \left\{ \forall i \text{ s.t. } |X_i(t)- m_t| <\eta ,\ \forall s\in \left[x,{t\over 2}\right]:\ X_{i,t}(s) \ge s^{1/3} \right\} ,\\ E_3(x,\eta )&:= \left\{ \forall i \text{ s.t. } |X_i(t)- m_t| <\eta ,\ \forall s\in \left[x,{t\over 2}\right]:\ X_{i,t}(t-s)-X_i(t) \in [s^{1/3},s^{2/3}]\right\} . \end{aligned}$$

We will show that the event \(A_{t}(x,\eta )\) happens with high probability, the reason being that \(s\mapsto X_{1,t}(s)\) looks very much like a Brownian excursion over the curve \(s\mapsto m_s\). We observe that the events \(E_i\) depend on \(t\), but we omit this dependency from the notation for the sake of brevity.
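For concreteness, the constraints in \(E_2\) and \(E_3\) are easy to test on a discretized path; here is a small sketch (the function and its arguments are ours, with `path[k]` standing for \(X_{i,t}(s_k)\) on the grid `times`, and \(E_3\) written in the time-reversed form also used in Sect. 5):

```python
import numpy as np

def in_good_tube(times, path, t, x):
    """Check the E_2 / E_3 constraints of (2.4) on one grid path over [0, t];
    E_1's two minimum conditions can be checked in the same way."""
    s = times
    early = (s >= x) & (s <= t / 2)
    e2 = np.all(path[early] >= s[early] ** (1.0 / 3.0))
    late = (s >= t / 2) & (s <= t - x)
    gap = path[late] - path[-1]             # X(s) - X(t) for s in [t/2, t-x]
    e3 = np.all((gap >= (t - s[late]) ** (1.0 / 3.0)) &
                (gap <= (t - s[late]) ** (2.0 / 3.0)))
    return bool(e2 and e3)
```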

Fig. 3 The events \(E_1(x,\eta ),E_2(x,\eta )\) and \(E_3(x,\eta )\) together are the event that the paths of particles ending within distance \(\eta \) of \(m_t\) avoid all the dashed regions

Proposition 2.5

(Arguin, Bovier and Kistler [2]) Let \(\eta >0\). For any \(\varepsilon >0\), there exists \(x>0\) large enough such that \(\mathbf{P}(A_t(x,\eta )) \ge 1-\varepsilon \) for \(t\) large enough.

Observe in particular that, since \(\mathbf{P}(|X_{1,t}(t) -m_t | >\eta ) \rightarrow 0\) when \(\eta \rightarrow \infty \), for \(\eta \) and \(x\) large enough the path \(s \mapsto X_{1,t}(s)\) has the properties described in the events \(E_1, E_2, E_3\) with arbitrarily high probability. Here the exponents \(1/3\) and \(2/3\) have been chosen arbitrarily, in the sense that one could replace them with \(1/2 \pm \varepsilon \) for any \(0<\varepsilon <1/2\) (Fig. 3).

The rest of this paper is organized as follows. Section 3 is devoted to discussions of related results. The main goal of the paper is to prove Theorem 2.3, which is also the hardest part. We start by proving Theorem 2.4 in Sect. 4, which is much easier, thus introducing some tools and ideas we will use throughout the paper. Next, in Sect. 5, we prove Proposition 2.5 which gives us estimates on the localization of the path followed by the leftmost particle. Section 6 contains the main arguments for the proof of Theorem 2.3, and Sects. 7, 8 and 9 are devoted to technical intermediary steps.

The proof of Theorem 2.1 is given last in Sect. 10. We show that by stopping particles when they first hit a certain position \(k\) and then considering only their leftmost descendants one recovers a Poisson point measure of intensity \(\mathrm{e }^x \, \mathrm{d }x\) as \(k \rightarrow \infty \). Then, we show that two particles near \(m_t\) have separated in a branching event that was either very recent or near the very beginning of the process and we finally combine those two steps to complete the proof of Theorem 2.1.

3 Related results and discussion

The goal of this section is to discuss the relevant literature and to give a brief account of the main differences and similarities between the present work and some related papers.

The description of the extremal point process of the branching Brownian motion is also the subject of [7, 8] by Brunet and Derrida. There, using the McKean representation and Bramson's convergence result for the solutions of the F-KPP equation [6], the authors show that the limit point process exists and has the superposability property. From there, using classical arguments (see for instance [26]), it can be shown that the only point processes having this property are those of the type "decorated exponential Poisson point processes", proving in essence our Theorem 2.1. Recently, pursuing and adding to those ideas, Arguin et al. have also shown the convergence of \({\bar{{\fancyscript{N}}}}(t)\) to a limiting point process with the superposability property (see [4, Proposition 2.2 and Corollary 2.4]). Therefore, it is really Theorem 2.3 (the description of the decoration measure \({\fancyscript{Q}}\)) which is the main contribution of the present work. Finally, we mention that Madaule [25] has proved the analogue of our Theorem 2.1 for non-lattice branching random walks by using the recent result of [1] on the maximum of branching random walks.

Most of the results presented here are identical or very closely related to those obtained independently by Arguin, Bovier and Kistler in a series of papers [2–4]. For reference we include here a brief description of their results, stated in the context of our normalization to ease comparison.

The main results of [2] concern the paths followed by the extremal particles and their genealogy. Our Proposition 10.2 is the same result as Theorem 2.1 of [2], which says that particles near \(m_t\) have either branched near time 0 or near time \(t\). Theorems 2.2, 2.3 and 2.5 in [2] concern the localization of paths of particles which end up near \(m_t\) at time \(t\). Arguin et al. show that at intermediate times \(s\), with arbitrarily large probability, they lie between \({s \over t} m_t - (s\wedge (t-s))^\alpha \) and \({s \over t} m_t - (s\wedge (t-s))^\beta \) for \(0<\alpha <1/2 <\beta <1\). This, of course, corresponds exactly to our Proposition 2.5. Since their arguments rely essentially on many-to-one calculations and Bessel bridge estimates, the methods of proof are also very similar. We include the proofs of Propositions 2.5 and 10.2 for the sake of self-containedness.

In [3], using the path localization argument obtained in [2], Arguin, Bovier and Kistler are able to show that if one only considers particles that have branched off from one another far enough into the past (the point process of the maxima of the clusters), then it converges to a Poisson point process with exponential intensity ([3], Theorem 2). This of course very closely resembles the Poissonian structure we obtain in Sect. 10. Their proof relies on the convergence of Laplace functionals (for which a first Lalley–Sellke type representation is given), whereas we simply deduce this from the classical results about records of iid variables.

In [4] a complete description of the extremal point process of the branching Brownian motion is given. There, they show that \({\bar{{\fancyscript{N}}}}(t)\) (actually in [4] the point process \({\fancyscript{N}}\) is centered by \(m_t\) instead of \(m_t - \log (CZ)\)) converges in distribution to a limiting point process which is necessarily an exponential Poisson point process whose atoms are "decorated" with iid point measures. They give a complete description of this decoration point measure as follows. Let \({\fancyscript{D}}(t) = \sum _{i=1}^{N(t)} \delta _{X_i(t)-X_1(t)}\), which is a random point measure on \(\mathbb{R }_+\). Conditionally on the event \(X_1(t)<0\), it converges in distribution to a limit \({\fancyscript{D}}\). Theorem 2.1 in [4] thus coincides with our Theorem 2.1 via \({\fancyscript{Q}}={\fancyscript{D}}\).

One of the key arguments in [4] is to identify the limit extremal point process of the branching Brownian motion with the limit of an auxiliary point process. This auxiliary point process is constructed as follows. Let \((\eta _i, i \in \mathbb{N })\) be the atoms of a Poisson point process on \(\mathbb{R }_+\) with intensity

$$\begin{aligned} a (x e^{b x}) \, \mathrm{d }x \end{aligned}$$

for some constants \(a\) and \(b\). For each \(i\), they start from \(\eta _i\) an independent branching Brownian motion (with the same \(\lambda ,\sigma ,\varrho \) parameters as the original one) and call \(\Pi (t)\) the point process of the positions of all the particles of all the branching Brownian motions at time \(t\). Theorem 2.5 in [4] shows that \(\lim _{t \rightarrow \infty } \Pi (t) = \lim _{t \rightarrow \infty } {\bar{{\fancyscript{N}}}}(t)\). This solves what Lalley and Sellke [23] call the conjecture on the standing wave of particles. The proof is based on the analysis of Bramson [6] for the solution of the F-KPP equation with various initial conditions and the subsequent work of Lalley and Sellke [23] and Chauvin and Rouault [10], which allows them to show convergence of Laplace type functionals of the extremal point process.

In the present work we also prove the convergence of the extremal point process to a decorated exponential Poisson point process. Our main result, Theorem 2.3, gives a description of the decoration measure \({\fancyscript{Q}}\) which is very different from [4]. The methods we use are also different, since we essentially rely on path localization and decomposition. It is our hope to exploit the description of \({\fancyscript{Q}}\) given in Theorem 2.3 to prove a conjecture of Brunet and Derrida [7] concerning the asymptotic distribution of the extremal point measure \(\fancyscript{L}\).

4 Proof of Theorem 2.4

We will repeatedly use the following approach, known as the spinal decomposition. The process

$$\begin{aligned} M_t := \sum _{i \le N(t)} \mathrm{e }^{-X_i(t)}, \qquad t\ge 0, \end{aligned}$$

is a so-called additive martingale, which is critical, not uniformly integrable and converges almost surely to 0. Let \(\mathbf{Q}\) be the probability measure on \(\fancyscript{F}_\infty \) such that, for each \(t\ge 0\),

$$\begin{aligned} \mathbf{Q}_{|_{\fancyscript{F}_t}} = M_t \bullet \mathbf{P}_{|_{\fancyscript{F}_t}}. \end{aligned}$$

Following Chauvin and Rouault ([10], Theorem 5), \(\mathbf{Q}\) is the law of a branching diffusion with a particle behaving differently. More precisely, for each time \(s\ge 0\) we let \(\Xi _s \in \{1,\ldots , N(s)\}\) be the label of the distinguished particle (the process \((\Xi _s, \, s\in [0, \, t])\) is called the spine). The particle with label \(\Xi _s\) at time \(s\) branches at (accelerated) rate \(2\) and gives birth to normal branching Brownian motions (without spine) with distribution \(\mathbf{P},\) whereas the process of the position of the spine \((X_{\Xi _s}(s), \, s\in [0, \, t])\) is a driftless Brownian motion of variance \(\sigma ^2=2\). Furthermore, for each \(t\ge 0\) and each \(i\le N(t)\),

$$\begin{aligned} \mathbf{Q}\{\Xi _t =i \, | \, \fancyscript{F}_t\} = {\mathrm{e }^{- X_i(t)}\over M_t}. \end{aligned}$$

We use this principle repeatedly in the present work in the following manner. For each \(i\le N(t)\) consider \(\Psi _i\) a random variable which is measurable in the filtration of the branching Brownian motion up to time \(t\) (i.e., it is determined by the history of the process up to time \(t\)) and suppose that we wish to compute \(\mathbf{E}_\mathbf{P}[\sum _{i \le N(t)} \Psi _i]\). Then, thanks to the above, we have

$$\begin{aligned} \mathbf{E}_\mathbf{P}\left[\sum _{i \le N(t)} \Psi _i \right] = \mathbf{E}_\mathbf{Q}\left[\frac{1}{M_t} \sum _{i \le N(t)} \Psi _i \right] = \mathbf{E}_\mathbf{Q}\left[\mathrm{e }^{X_{\Xi _t}(t)} \Psi _{\Xi _t} \right]. \end{aligned}$$
(4.1)

We will refer to (4.1) as the many-to-one principle.
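As a sanity check, the fixed-time form of this identity (already used in the introduction) is easy to verify by Monte Carlo; the snippet below, which reuses the `bbm` sketch from Sect. 1, takes \(g(x)=\mathrm{e }^{-x}\), for which both sides equal \(1\) under our normalization by (1.1):

```python
import numpy as np

rng = np.random.default_rng(4)

# Check E[ sum_{i<=N(t)} g(X_i(t)) ] = e^{lambda t} E[ g(sigma B_t + rho t) ]
# with g(x) = exp(-x) and lambda = 1, rho = 2, sigma = sqrt(2).
t, n_mc = 2.0, 5000
lhs = np.mean([np.exp(-bbm(t)).sum() for _ in range(n_mc)])
z = rng.standard_normal(n_mc)
rhs = np.exp(t) * np.mean(np.exp(-(np.sqrt(2.0 * t) * z + 2.0 * t)))
print(lhs, rhs)   # both approximately 1, up to Monte Carlo error
```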

For any positive measurable function \(F : C(\mathbb{R }_+,\mathbb{R }) \rightarrow \mathbb{R }_+,\) any positive measurable function \(f:[0, \, t] \rightarrow \mathbb{R }_+, n \in \mathbb{N }, (\alpha _1, \ldots , \alpha _n) \in \mathbb{R }_+^n\) and \(A_1,\ldots , A_n\) a collection of Borel subsets of \(\mathbb{R }_+\) define

$$\begin{aligned} I(t) := \mathbf{E}\left\{ F(X_{1,t}(s), \, s\in [0, \, t])\, \exp \left(- \sum _i f(t-\tau _i(t))\, \sum _{j=1}^n \alpha _j \int _{A_j} \, \mathrm{d }\fancyscript{N}_i(t) \right) \right\} , \end{aligned}$$

as in Sect. 2. Letting \(X_{i,t}(s)\) be the position of the ancestor at time \(s\) of the particle at \(X_i(t)\) at time \(t\), we have

$$\begin{aligned} I(t) = \mathbf{E}\left[\sum _{i\le N(t)} \mathbf{1}_{\{i=1\}} \, F(X_{i,t}(s), \, s\in [0, \, t]) \, \Lambda _i(t)\right], \end{aligned}$$

with \(\Lambda _i(t) := \exp \{- \sum _k f(t-\tau ^{(i)}_k(t))\, \sum _{j=1}^n \alpha _j [\int _{A_j} \, \mathrm{d }\fancyscript{N}^{(i)}_k] \}\), where the times \(\tau ^{(i)}_k(t)\) are the successive branching times along \(X_{i,t}(s)\) enumerated backward from \(t\), and the point measures \(\fancyscript{N}^{(i)}_k\) are the particles which have branched off from \(X_{i,t}(s)\) at time \(\tau ^{(i)}_k(t)\):

$$\begin{aligned} \fancyscript{N}^{(i)}_k : =\sum _{\ell : \tau _{i,\ell }(t) = \tau ^{(i)}_k(t)}\delta _{(X_\ell (t)-X_i(t))}. \end{aligned}$$

Using the many-to-one principle and the change of probability presented in Eq. (4.1) we see that

$$\begin{aligned} I(t)&= \mathbf{E}_\mathbf{Q}\Big [\mathrm{e }^{X_{\Xi _t}(t)} \, \mathbf{1}_{\{\Xi _t=1\}} \, F(X_{\Xi _s}(s), \, s\in [0, \, t]) \, \Lambda _{\Xi _t}(t)\Big ]\\&= \mathbf{E}_\mathbf{Q}\Big [\mathrm{e }^{X_{\Xi _t}(t)} \, F(X_{\Xi _s}(s), \, s\in [0, \, t]) \, \Lambda _{\Xi _t}(t) \, \prod _{k} \mathbf{1}_{\{\min \fancyscript{N}^{(\Xi _t)}_k > 0 \}} \Big ] \end{aligned}$$

where we recall that by convention, for a point measure \({\fancyscript{N}}, \min {\fancyscript{N}}\) is the infimum of the support of \({\fancyscript{N}}\).

Conditioning on the \(\sigma \)-algebra generated by the spine (including the successive branching times) we obtain

$$\begin{aligned} I(t) \!= \!\mathbf{E}_\mathbf{Q}\bigg [\mathrm{e }^{X_{\Xi _t}(t)} \, F(X_{\Xi _s}(s), \, s\in [0, \, t]) \, \prod _i \overline{G}_{t-\tau _i^{(\Xi _t)}(t)}^{(f)} (X_{\Xi _t}(t) \!-\!X_{\Xi _t,t} (\tau _i^{(\Xi _t)}(t))) \!\bigg ], \end{aligned}$$

where, for any \(r\ge 0\) and any \(x\in \mathbb{R }\),

$$\begin{aligned} \overline{G}_r^{(f)}(x) := \mathbf{E}\Big [\mathrm{e }^{-f(r) \sum _{j=1}^n \alpha _j [\int _{A_j+x} \, \mathrm{d }\fancyscript{N}(r)]} \, \mathbf{1}_{\{\min \fancyscript{N}(r) \ge x\}} \Big ]. \end{aligned}$$
(4.2)

Since \((\tau _i^{(\Xi _t)}(t),i\ge 0)\) is a rate \(2\) Poisson process under \(\mathbf{Q}\), we arrive at:

$$\begin{aligned} I(t)&= \mathbf{E}_\mathbf{Q}\Big [\mathrm{e }^{X_{\Xi _t}(t)} \, F(X_{\Xi _s}(s), \, s\in [0, \, t]) \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_{t-s}^{(f)}(X_{\Xi _t}(t)-X_{\Xi _s}(s))] \, \mathrm{d }s} \Big ] \nonumber \\&= \mathbf{E}\Big [\mathrm{e }^{\sigma B_t} \, F(\sigma B_s, \, s\in [0, \, t]) \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_{t-s}^{(f)}(\sigma B_t-\sigma B_s)] \, \mathrm{d }s} \Big ], \end{aligned}$$
(4.3)

where, in the last identity, we used the fact that \((X_{\Xi _s}(s), \, s\in [0, \, t])\) under \(\mathbf{Q}\) is a centered Brownian motion (with variance \(\sigma ^2=2\)). This yields Theorem 2.4. \(\square \)

Remark

Although we do not need it in the present paper, we mention that (4.3) gives the existence and the form of the density of \(X_1(t)\) by taking \(f\equiv 0\) and \(F\) to be the projection on the coordinate \(s=t\):

$$\begin{aligned} \mathbf{P}\{X_1(t) \in \!\, \mathrm{d }y \}&= \mathrm{e }^y \, \mathbf{E}\Big [\mathrm{e }^{- 2 \int _0^t G_{t-s}(\sigma B_t-\sigma B_s) \, \mathrm{d }s} \, \mathbf{1}_{\{B_t \in {\!\, \mathrm{d }y\over \sigma }\}} \Big ]\\&= \mathrm{e }^y \, \mathbf{E}_{0,{y\over \sigma }}^{(t)} \Big [\mathrm{e }^{- 2 \int _0^t G_{t-s}(\sigma B_t-\sigma B_s) \, \mathrm{d }s} \Big ] \, \mathbf{P}\Big \{B_t \in {\!\, \mathrm{d }y\over \sigma }\Big \}. \end{aligned}$$

5 Properties of the path followed by the leftmost particle: proof of Proposition 2.5

When applying the many-to-one principle as in (4.1), if the functional \(\Psi _{\Xi } \) only depends on the path of \(X_{\Xi _s}(s)\), then the last expectation is simply the expectation of a functional of a standard Brownian motion. For instance, suppose that we want to check whether there exists a path \((X_{i,t}(s), s\in [0,t])\) with some property in the tree. Let \(A\) be a measurable subset of continuous functions \([0,t] \mapsto \mathbb{R }\). Then

$$\begin{aligned} \mathbf{P}(\exists i \le N(t) : (X_{i,t}(s), s\in [0,t]) \in A) \le \mathbf{E}(\mathrm{e }^{\sigma B_t} ; (\sigma B_s, s\in [0,t]) \in A)\quad \end{aligned}$$
(5.1)

where \((B_s,s \ge 0)\) is a standard Brownian motion under \(\mathbf{P}\). This is the main tool we use in proving Proposition 2.5.

Let \((B_s,s \ge 0)\) denote a standard Brownian motion. Before proceeding to the proof of Proposition 2.5, let us recall (see, for example, Revuz and Yor [30], Exercise III.3.14) the joint distribution of \(\min _{[0,t]}B_s\) and \(B_t\): for any \(x>0, y>0\) and \(t>0\),

$$\begin{aligned} \mathbf{P}\Big (\min _{[0,t]} B_s>-x,\; B_t+x \in \!\, \mathrm{d }y\Big )&= \Big ({2\over \pi t}\Big )^{\! 1/2} \, \mathrm{e }^{-{x^2+y^2\over 2 t}} \sinh \Big ({xy\over t}\Big ) \, \mathrm{d }y\nonumber \\&\le \Big ({2\over \pi t^3}\Big )^{\! 1/2} xy \, \mathrm{d }y \,, \end{aligned}$$
(5.2)

the last inequality following from the facts that \(\sinh z\le ze^z\) for \(z\ge 0\), and that \(\mathrm{e }^{-{x^2+y^2\over 2 t}+{y x\over t}}\le 1\).
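The identity (5.2) is also easy to check numerically: simulate discretized Brownian paths, estimate the joint probability for a window of values of \(B_t+x\), and compare with the integral of the density. A minimal sketch (the grid minimum slightly overestimates the true minimum, biasing the Monte Carlo estimate upwards):

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of (5.2): P(min_{[0,t]} B > -x, B_t + x in [y1, y2]).
t, x, y1, y2 = 1.0, 0.7, 0.5, 1.5
n, n_mc = 1000, 10000
paths = np.cumsum(np.sqrt(t / n) * rng.standard_normal((n_mc, n)), axis=1)
ok = (paths.min(axis=1) > -x) & (paths[:, -1] + x > y1) & (paths[:, -1] + x < y2)

ys = np.linspace(y1, y2, 2001)            # trapezoidal integral of the density
dens = np.sqrt(2 / (np.pi * t)) * np.exp(-(x**2 + ys**2) / (2 * t)) * np.sinh(x * ys / t)
exact = 0.5 * ((dens[:-1] + dens[1:]) * (ys[1] - ys[0])).sum()
print(ok.mean(), exact)                   # close; the grid bias pushes the first up
```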

We now turn to the proof of Proposition 2.5. Let \(J_\eta (t): = \{i \le N(t) : |X_i(t) -m_t|<\eta \}\) where \(m_t={3\over 2} \log t +C_B\) by (1.3). We recall that for \(t \ge 1\) and \(x>0\), we define the good event \(A_{t}(x,\eta )\) by

$$\begin{aligned} A_{t}(x,\eta ) := E_1(x, \eta )\cap E_2(x, \eta ) \cap E_3(x,\eta ) \end{aligned}$$

where the events \(E_i\) are defined by

$$\begin{aligned} E_1(x,\eta )&:= \Big \{\forall i \in J_\eta (t):\ \min _{[0,t]} X_{i,t}(s) \ge -x,\, \min _{[{t\over 2},\, t]} X_{i,t}(s) \ge m_t -x \Big \},\\ E_2(x,\eta )&:= \Big \{\forall i \in J_\eta (t),\ \forall s\in \Big [x,\, {t\over 2}\Big ]:\ X_{i,t}(s) \ge s^{1/3} \Big \},\\ E_3(x,\eta )&:= \Big \{\forall i \in J_\eta (t),\ \forall s\in \Big [{t\over 2},\, t-x\Big ]:\ X_{i,t}(s)-X_i(t) \in [(t-s)^{1/3},(t-s)^{2/3}]\Big \}. \end{aligned}$$

We now prove the claim of Proposition 2.5: For any \(\varepsilon >0\) and \(\eta >0\), there exists \(x>0\) large enough such that \(\mathbf{P}(A_t(x,\eta )) \ge 1-\varepsilon \) for \(t\) large enough.

Proof

The notation \(c\) denotes a constant (that may depend on \(\eta \)) which can change from line to line. We deal separately with the events \(E_i(x,\eta )\). We want to show that for any \(i\in \{1,2,3\}\), there exists \(x\) large enough such that \(\mathbf{P}((E_i(x,\eta ))^\complement ) \le \varepsilon \) for \(t\) large enough.

Bound of \(\mathbf{P}(E_1(x,\eta )^\complement )\).

First, observe that \(\min \{X_i(t), i\le N(t), t\ge 0\}\) is an a.s. finite random variable and therefore

$$\begin{aligned} \mathbf{P}\Big (\min _{i \in J_\eta (t), s \in [0,t]} X_{i,t}(s) \le -x\Big ) \le \mathbf{P}\Big (\min \{X_i(t), i\le N(t), t\ge 0\} \le -x\Big ) \le \varepsilon \end{aligned}$$

for \(x\) large enough. It remains to bound the probability of touching the level \(m_t -x\) between times \({t\over 2}\) and \(t\). By the previous remark, we may assume that \(\min _{[0,t]} X_{i,t}(s)\ge -z\) for all \(i \in J_\eta (t)\). We claim that, for any \(z,\eta \ge 0\), there exist \(c>0\) and a function \(\varepsilon _t \rightarrow 0\) such that for any \(x\ge 0\) and \(t\ge 1\),

$$\begin{aligned} \mathbf{P}\Big \{\exists i \in J_\eta (t), \ \min _{s\in [0,\, t]} X_{i,t}(s)\ge -z, \min _{s\in [{t\over 2},\, t]} X_{i,t}(s) = m_t -x \pm 1 \Big \} \le c\mathrm{e }^{-cx} + \varepsilon _t \end{aligned}$$
(5.3)

where \(y=u\pm v\) stands for \(y\in [u-v,u+v]\). This will imply the bound on \(E_1(x,\eta )^\complement \). Let us prove the claim. We see that the probability on the left-hand side is \(0\) if \(x> m_t +z +1\) (indeed, if \(x> m_t +z +1\) and \(\min _{s\in [{t\over 2},\, t]} X_{i,t}(s) \le m_t -x +1 < -z\), then it is impossible to have \(\min _{s\in [0,\, t]} X_{i,t}(s)\ge -z\)). We then take \(x\le {7\over 4}\log t\) (any constant lying in \(({3\over 2}, \, 2)\) would do the job in place of \({7\over 4}\)).

Let \(a\in (0, \, {t\over 2})\) (at the end, we will take \(a=\mathrm{e }^{x/2}\)). We distinguish according to whether \(\{\min _{s\in [t/2,t-a]} X_{i,t}(s) = m_t -x \pm 1\}\) or \(\{\min _{s\in [t-a,t]} X_{i,t}(s) = m_t -x \pm 1\}\) occurs. We denote by \(p_\mathrm{{claim}}^{[t/2,t-a]}(x)\) (resp. \(p_\mathrm{{claim}}^{[t-a,t]}(x)\)) the probability in (5.3) restricted to the event \(\{\min _{s\in [t/2,t-a]} X_{i,t}(s) = m_t -x \pm 1\}\) (resp. \(\{\min _{s\in [t-a,t]} X_{i,t}(s) = m_t -x \pm 1\}\)). Equation (5.1) provides us with the following bound:

$$\begin{aligned} p_\mathrm{{claim}}^{[t/2,t-a]}(x) \le \mathrm{e }^{\eta +C_B} t^{3/2}\mathbf{P}(B) \end{aligned}$$
(5.4)

where

$$\begin{aligned} \mathbf{P}(B)&:= \mathbf{P}\Big \{\sigma \underline{B}^{[0,t]}\ge -z, \sigma \underline{B}^{[{t\over 2},t-a]} = m_t -x \pm 1, \nonumber \\&\qquad \sigma \underline{B}^{[t/2,t]}= m_t -x \pm 1, \sigma B_t= m_t \pm \eta \Big \} \end{aligned}$$

and \(\underline{B}^{[b_1,b_2]} := \min _{s\in [b_1,b_2]} B_s\). By reversing time, we see that

$$\begin{aligned} \mathbf{P}(B)&\le \mathbf{P}\Big \{\sigma \underline{B}^{[0,t]} \ge -m_t -(z+\eta ), \sigma \underline{B}^{[0,a]} \ge -\eta -x-1, \\&\qquad \sigma B_t= -m_t \pm \eta ,\sigma \underline{B}^{[a,t/2]} =-x \pm (\eta +1) \Big \}. \end{aligned}$$

By the Markov property at time \(t/2\), we obtain

$$\begin{aligned} \mathbf{P}(B)&\le \mathbf{E}\Big [\mathbf{1}_{\{\sigma \underline{B}^{[a,t/2]} = -x \pm (\eta +1)\}}\mathbf{1}_{\{\sigma \underline{B}^{[0,a]} \ge -\eta -x-1\}}\\&\times \mathbf{P}_{B_{t/2}} \Big \{\sigma \underline{B}^{[0,t/2]}\ge -m_t -(z+\eta ), \sigma B_{t/2} = -m_t \pm \eta \Big \} \Big ], \end{aligned}$$

where, for any \(y\in \mathbb{R }, \mathbf{P}_y\) is the probability under which \(B\) starts at \(y\): \(\mathbf{P}_y(B_0=y) =1\). (So \(\mathbf{P}_0= \mathbf{P}\)). By (5.4) and (5.2), it follows that

$$\begin{aligned} p_\mathrm{{claim}}^{[t/2,t-a]}(x)&\le c (z+2\eta )^2 \mathbf{E}\Big [\mathbf{1}_{\{\sigma \underline{B}^{[a,t/2]} = -x \pm (\eta +1),\sigma \underline{B}^{[0,a]} \ge -\eta -x-1\}}\nonumber \\&\quad \qquad \qquad \qquad (\sigma B_{t/2} +m_t +(z+\eta ))\Big ]\\&\le c (z+2\eta )^2 (\mathbf{E}_1 +\mathbf{E}_2) \end{aligned}$$

where

$$\begin{aligned} \mathbf{E}_1&:= \mathbf{E}\Big [\mathbf{1}_{\{\sigma \underline{B}^{[a,t/2]} = -x \pm (\eta +1),\sigma \underline{B}^{[0,a]} \ge -\eta -x-1\}} (\sigma B_{t/2} + \eta +x+1)\Big ],\\ \mathbf{E}_2&:= (m_t +z-x-1)\mathbf{P}\Big \{\sigma \underline{B}^{[a,t/2]} = -x \pm (\eta +1),\sigma \underline{B}^{[0,a]} \ge -\eta -x-1 \Big \}. \end{aligned}$$

Bounding \(\mathbf{E}_2\) is easy. We have \(|\mathbf{E}_2| \le O(\log t)\, \mathbf{P}(\sigma \underline{B}^{[0,t/2]} \ge -\eta - x - 1)=O((\log t)^2)\, t^{-1/2}\) uniformly in \(x\le {7\over 4} \log t\). Now consider \(\mathbf{E}_1\). We note that \((\sigma B_{t/2} +\eta +x+1)\mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]} \ge -\eta -x-1\}}\) is the \(h\)-transform of the three-dimensional Bessel process, and we denote by \((R_s,s\ge 0)\) a three-dimensional Bessel process. Then,

$$\begin{aligned} \mathbf{E}_1 = (\eta +x+1)\,\mathbf{P}_{\eta +x+1}\big (\sigma \underline{R}^{[a,t/2]} \le 2(\eta + 1)\big ) \le (\eta +x+1)\, \mathbf{P}_{\eta +x+1}\big (\min _{s\ge a} \sigma R_s \le 2(\eta +1)\big ) \end{aligned}$$

with natural notation. The infimum of a three-dimensional Bessel process starting from \(x\) is uniformly distributed in \([0,x]\) (see Revuz and Yor [30], Exercise V.2.14). Applying the Markov property at time \(a\), we get \(\mathbf{E}_1 \le {2(\eta +1)\over \sigma }(\eta +x+1)\mathbf{E}_{\eta +x+1}[1/R_a]\le c (\eta +x+1) a^{-1/2}\). We take \(a=\mathrm{e }^{x/2}\). The preceding inequality implies that for any \(x\ge 0\) and \(t\ge 1\),

$$\begin{aligned} p_\mathrm{{claim}}^{[t/2,t-a]}(x) \le c(\eta +x+1)^2\mathrm{e }^{-x/4} + c (\log t)^2 t^{-1/2}. \end{aligned}$$
(5.5)
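The fact used just above, that the infimum of a three-dimensional Bessel process started from \(x\) is uniform on \([0,x]\), can be checked by simulation, realizing the Bessel process as the norm of a three-dimensional Brownian motion (the finite horizon and the grid bias the empirical infimum slightly upwards):

```python
import numpy as np

rng = np.random.default_rng(6)

# P_x(inf_{s >= 0} R_s <= a) = a / x for a Bessel(3) process R started at x.
x0, T, dt, n_mc = 1.0, 100.0, 0.02, 4000
n = int(T / dt)
infs = np.empty(n_mc)
for k in range(n_mc):
    W = np.cumsum(np.sqrt(dt) * rng.standard_normal((n, 3)), axis=0)
    W[:, 0] += x0                        # start the 3-d BM at (x0, 0, 0)
    infs[k] = np.linalg.norm(W, axis=1).min()
for a in (0.25, 0.5, 0.75):
    print(a, (infs <= a).mean())         # approximately a / x0
```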

We now deal with the probability \(p_\mathrm{{claim}}^{[t-a,t]}(x)\). In this case, the minimum over \([t-a,t]\) belongs to \([m_t -x-1,m_t -x+1]\). Since we know that \(p_\mathrm{{claim}}^{[t/2,t-a]}(x)\) is small, we can restrict ourselves to the case where the minimum over \([t/2,t-a]\) is greater than \(m_t -x+1\), i.e.,

$$\begin{aligned}&p_\mathrm{{claim}}^{[t-a,t]}(x) \le \sum _{y=x}^{\lfloor 2\log t \rfloor } p_\mathrm{{claim}}^{[t/2,t-a]}(y) \\&\quad +\mathbf{P}\Big \{\exists i \in J_\eta (t): \, \underline{X}_{i,t}^{[0,t]} \ge \!-\!z,\, \underline{X}_{i,t}^{[t/2,t-a]} > m_t \!-\!x +1, \, \underline{X}_{i,t}^{[t-a,t]} \le m_t -x +1 \Big \}. \end{aligned}$$

From (5.5), we know that \(\sum _{y=x}^{\lfloor 2\log t \rfloor } p_\mathrm{{claim}}^{[t/2,t-a]}(y)\le \varepsilon _t + c\, \mathrm{e }^{-x/8}\), where, as usual, \(\underline{X}_{i,t}^{[a,b]}:=\min _{s\in [a,b]} X_{i,t}(s)\).

Suppose that we kill particles as soon as they hit the position \(-z\) during the time interval \([0,t/2]\), and as soon as they are left of or at position \(m_t-x+1\) during the time interval \([t/2,t]\). Call \(\mathcal S ^{[t-a,t]}\) the set of particles that are killed during the time interval \([t-a,t]\). Hence,

$$\begin{aligned} p_\mathrm{{claim}}^{[t-a,t]}(x) \le \varepsilon _t+ c\mathrm{e }^{-x/8} + \mathbf{E}[\#\mathcal S ^{[t-a,t]}]. \end{aligned}$$
(5.6)

We observe that by stopping particles either at time \(t\), or when they first hit \(-z\) during \([0,t/2]\), or \(m_t-x+1\) during the time interval \([t/2,t]\), we are defining a so-called dissecting stopping line. Stopping lines were introduced and studied, among others, in [11, 20], essentially for branching random walks. More recently, they have been used to great effect, e.g. by Kyprianou in the context of branching Brownian motion, to study travelling wave solutions to the F-KPP equation [22]. More precisely, for a continuous path \(X : \mathbb{R }_+ \rightarrow \mathbb{R }\) let us call \(T(X)\) the stopping time

$$\begin{aligned} T(X) : = \inf \{s \le t/2 : X(s) \le -z\} \wedge \inf \{s \in [t/2,t] : X(s) \le m_t-x+1 \} \wedge t \end{aligned}$$

and for \(i \le N(t)\) define \(T_i := T(X_{i,t}(\cdot ))\). We also need a notation for the label of the progenitor at time \(T_i\) of the particle at \(X_i(t)\) at time \(t\): let \(J_i \le N(T_i)\) be the almost surely unique integer such that

$$\begin{aligned} X_{J_i}(T_i) = X_{i,t}(T_i). \end{aligned}$$

We now formally define the stopping line \(\ell \) by

$$\begin{aligned} \ell := \text{ enum}((J_i,T_i)_{i \le N(t)}) \end{aligned}$$

where \(\text{ enum}\) means that \(\ell \) is an enumeration without repetition. In general, stopping lines can be far more sophisticated objects, and \(\ell \) is a particularly simple example of this class, which is bounded by \(t\) (and thus dissecting).

We now need a generalization of the many-to-one principle (4.1) to stopping lines. Although this can now be considered common knowledge, surprisingly only [27, Lemmas 3.1 and 3.2] gives the result in sufficient generality for our purposes.

Fact 5.1

Let \(g : (x,t) \mapsto g(x,t), \mathbb{R }\times \mathbb{R }_+ \rightarrow \mathbb{R }\) be measurable. Then, if \(X(t) = \sigma B_t +\varrho t\) where \(B\) is a standard Brownian motion

$$\begin{aligned} \mathbf{E}_\mathbf{P}\left[\sum _{(i,t) \in \ell } g(X_{i}(t),t)\right]= \mathbf{E}[e^{\lambda T(X)} g(X_{T(X)},T(X))]. \end{aligned}$$

To see this, one can for instance adapt the proofs for the fixed-time many-to-one lemma in [14, 17] to the case of dissecting stopping lines.

Once one factors in the Girsanov term to get rid of the drift, one sees that

$$\begin{aligned} \mathbf{E}_\mathbf{P}\left[\sum _{(i,t) \in \ell } g(X_{i}(t),t)\right]&= \mathbf{E}[e^{\sigma B_{T(\sigma B)}} g(\sigma B_{T(\sigma B)},T(\sigma B))] \\&= \mathbf{E}_\mathbf{Q}[e^{X_{\Xi _{T(X_\Xi )}}(T(X_\Xi ))} g(X_{\Xi _{T(X_\Xi )}}(T(X_\Xi )),T(X_\Xi ))]. \end{aligned}$$

By applying this with \(g(x,s)= \mathbf 1 _{s \in (t-a,t)}\) we see that

$$\begin{aligned} \mathbf{E}[\# \mathcal S ^{[t-a,t]}]&= \mathrm{e }^{C_B}t^{3/2}\mathrm{e }^{-x+1} \mathbf{P}\Big \{\sigma \underline{B}^{[0,t/2]} \ge -z,\, T^{t/2}_{(m_t -x+1)/\sigma } \in [t-a,t],\nonumber \\&\qquad \qquad \qquad \qquad \quad \sigma B_{t/2}\ge m_t -x+1\Big \}. \end{aligned}$$

where \(T^{t/2}_{y}:= \min \{s\ge t/2\,: B_s = y \}\). As usual, we apply the Markov property at time \(t/2\) so that

$$\begin{aligned}&\mathbf{P}\Big \{\sigma \underline{B}^{[0,t/2]} \ge -z,\, T^{t/2}_{(m_t -x+1)/\sigma } \in [t-a,t], \sigma B_{t/2}\ge m_t -x+1\Big \}\\&\quad = \mathbf{E}\left[\mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]} \ge -z \}} \mathbf{P}_{B_{t/2}}\Big \{T_{(m_t -x+1)/\sigma } \in [t/2-a,t/2]\Big \}\right] \end{aligned}$$

where \(T_y := \min \{s\ge 0: B_s=y\}\) is the hitting time at level \(y\). We know that \(\mathbf{P}(T_y\in du)={y\over \sqrt{2\pi }}u^{-3/2}\mathrm{e }^{-{y^2\over 2u}}\, \mathrm{d }u\le c y u^{-3/2}\, \mathrm{d }u\) for \(u\ge 0\). It follows that for some constant \(c>0\) and any \(a\in [1,t/3]\)

$$\begin{aligned}&\mathbf{P}\Big \{\sigma \underline{B}^{[0,t/2]} \ge -z,\, T^{t/2}_{(m_t -x+1)/\sigma } \in [t-a,t], \sigma B_{t/2}\ge m_t -x+1\Big \}\\&\quad \le c a t^{-3/2}\mathbf{E}[B_{t/2} \mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]} \ge -z\}}]= c a t^{-3/2} z. \end{aligned}$$

Thus, \(\mathbf{E}[\#\mathcal S ^{[t-a,t]}] \le c a z \mathrm{e }^{-x} = cz \mathrm{e }^{-x/2} \) for \(a=\mathrm{e }^{x/2}\). Claim (5.3) now follows from equations (5.5) and (5.6).
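The density of \(T_y\) used above integrates to \(\mathbf{P}(T_y\le u) = \mathrm{erfc}(y/\sqrt{2u})\), by the reflection principle. A crude grid simulation reproduces this (the grid misses some crossings, so the empirical values sit slightly below the exact ones):

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(7)

# P(T_y <= u) = int_0^u y (2 pi s^3)^{-1/2} e^{-y^2/(2s)} ds = erfc(y / sqrt(2u)).
y, T, dt, n_mc = 1.0, 4.0, 1e-3, 10000
n = int(T / dt)
first = np.full(n_mc, np.inf)
for k in range(n_mc):
    path = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))
    hit = np.flatnonzero(path >= y)       # first grid crossing of level y
    if hit.size:
        first[k] = (hit[0] + 1) * dt
for u in (1.0, 2.0, 4.0):
    print(u, (first <= u).mean(), erfc(y / np.sqrt(2 * u)))
```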

Bound of \(\mathbf{P}((E_2(x,\eta ))^\complement )\).

We can restrict to the event \(E_1(z,\eta )\) for \(z\) large enough. By the many-to-one principle, we get

$$\begin{aligned} \mathbf{P}(E_2(x,\eta )^\complement ,E_1(z,\eta )) \le \mathrm{e }^{\eta +C_B} t^{3/2} \mathbf{P}(\widehat{B}) \end{aligned}$$

where \(\mathbf{P}(\widehat{B})\) is defined by

$$\begin{aligned} \mathbf{P}(\widehat{B}):&= \mathbf{P}\Big \{\exists s\in [x,t/2]: \sigma B_s \le s^{1/3},\sigma \underline{B}^{[0,t/2]} \ge -z, \nonumber \\&\qquad \sigma \underline{B}^{[t/2,t]}\ge m_t -z, \sigma B_t \le m_t +\eta \Big \}. \end{aligned}$$

We will actually bound the probability

$$\begin{aligned} \mathbf{P}(\widehat{B},\, \mathrm{d }r)&:= \mathbf{P}\Big \{\exists s\in [x,t/2]: \sigma B_s \le s^{1/3}, \sigma \underline{B}^{[0,t/2]} \ge -z,\nonumber \\&\qquad \sigma \underline{B}^{[t/2,t]}\ge m_t -z, \sigma B_t \in m_t+\, \mathrm{d }r \Big \}. \end{aligned}$$
(5.7)

Applying the Markov property at time \(t/2\) yields that

$$\begin{aligned}&\mathbf{P}(\widehat{B},\, \mathrm{d }r)\\&\quad = \mathbf{E}\left[\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \le s^{1/3}\}} \mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]}\ge -z\}} \mathbf{P}_{B_{t/2}}\Big \{\sigma \underline{B}^{[0,t/2]} \!\ge \! m_t \!-\!z, \sigma B_{t/2} \!\in \! m_t\!+\!\, \mathrm{d }r \Big \} \right]\\&\quad \le c (r\!+\!z) t^{-3/2} \mathbf{E}\left[\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \le s^{1/3}\}} \mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]} \ge -z\}} (\sigma B_{t/2} - m_t \!+\!z)_+ \right]\, \mathrm{d }r\\&\quad \le c (r+z)t^{-3/2} \mathbf{E}\left[\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \le s^{1/3}\}} \mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]} \ge -z\}} (\sigma B_{t/2} +z) \right]\, \mathrm{d }r \end{aligned}$$

where the second inequality comes from Eq. (5.2), and we set \(y_+:=\max (y,0)\). We recognize the \(h\)-transform of the Bessel process. Therefore

$$\begin{aligned} \mathbf{P}(\widehat{B},\, \mathrm{d }r) \le c z (r+z)t^{-3/2}\mathbf{P}_z(\exists s\in [x,t/2]: \sigma R_s \le z+s^{1/3})\, \mathrm{d }r \end{aligned}$$
(5.8)

where as before \((R_s,s\ge 0)\) is a three-dimensional Bessel process. In particular, \(\mathbf{P}(\widehat{B}) = \int _{-z}^\eta \mathbf{P}(\widehat{B},\, \mathrm{d }r)\le c z(z+\eta )^2t^{-3/2}\mathbf{P}_z(\exists s\in [x,t/2]: \sigma R_s \le z+s^{1/3})\). This yields that

$$\begin{aligned} \mathbf{P}(E_2(x,\eta )^\complement ,E_1(z))&\le \mathrm{e }^{\eta +C_B} c z(z+\eta )^2 \mathbf{P}_z(\exists s\in [x,t/2]: \sigma R_s \le z+s^{1/3})\\&\le \mathrm{e }^{\eta +C_B} c z (z+\eta )^2 \mathbf{P}_z(\exists s\ge x: \sigma R_s \le z+s^{1/3}) \end{aligned}$$

and we deduce that \(\mathbf{P}(E_2(x,\eta )^\complement ,E_1(z))\le \varepsilon \) for \(x\) large enough.

Bound of \(\mathbf{P}((E_3(x,\eta ))^\complement )\).

The bound on \(\mathbf{P}(E_3(x,\eta )^\complement )\) is obtained similarly. We have, by the many-to-one principle,

$$\begin{aligned} \mathbf{P}(E_3(x,\eta )^\complement ,E_1(z,\eta ),E_2(z,\eta )) \le \mathrm{e }^{\eta +C_B} t^{3/2} \mathbf{P}(\widetilde{B}) \end{aligned}$$
(5.9)

with \(\mathbf{P}(\widetilde{B})\) defined by

$$\begin{aligned}&\mathbf{P}\Big \{\exists s\in [t/2,t-x]: \sigma (B_s -B_t) \notin [(t-s)^{1/3},(t-s)^{2/3}], \\&\qquad \sigma \underline{B}^{[0,t/2]}\ge -z,\sigma \underline{B}^{[t/2,t]} \ge m_t -z, \sigma B_t = m_t \pm \eta \Big \}. \end{aligned}$$

Let

$$\begin{aligned}&\mathbf{P}(\widetilde{B},\, \mathrm{d }r) := \mathbf{P}\Big \{\exists s\in [t/2,t-x]: \sigma (B_s -B_t) \notin [(t-s)^{1/3},(t-s)^{2/3}],\nonumber \\&\quad \sigma \underline{B}^{[0,t/2]} \ge -z, ~~~ \sigma \underline{B}^{[t/2,t]} \ge m_t -z, \sigma B_t \in m_t + \, \mathrm{d }r\Big \}. \end{aligned}$$
(5.10)

Reversing time, we get

$$\begin{aligned} \mathbf{P}(\widetilde{B},\, \mathrm{d }r)&\le \mathbf{P}\Big \{\exists s\in [x,t/2]: \sigma B_s \notin [s^{1/3},s^{2/3}], \sigma \underline{B}^{[t/2,t]} \ge -m_t-z-\eta , \nonumber \\&\qquad \sigma \underline{B}^{[0,t/2]} \ge -z-\eta , \sigma B_t +m_t \in \, \mathrm{d }r\Big \}. \end{aligned}$$
(5.11)

By Eq. (5.2), we have, for any \(y> -\frac{3}{2} \log t -z-\eta \) and \(t\ge 1\),

$$\begin{aligned} \mathbf{P}_{y} \Big \{\sigma \underline{B}^{[0,t/2]}&\ge -m_t -z-\eta ,\sigma B_{t/2}+m_t \in \, \mathrm{d }r \Big \}\nonumber \\&\le c (y+m_t +z+\eta )(r+z+\eta ) t^{-3/2}\, \mathrm{d }r. \end{aligned}$$

Applying the Markov property at time \(t/2\) in (5.11), we get for \(t\ge 1\)

$$\begin{aligned} \mathbf{P}(\widetilde{B},\, \mathrm{d }r)&\le c (r+z+\eta ) t^{-3/2}\mathbf{E}\Big [\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \notin [s^{1/3},s^{2/3}], \sigma \underline{B}^{[0,t/2]} \ge -z-\eta \}}\nonumber \\&(\sigma B_{t/2} +m_t+z+\eta )\Big ]\, \mathrm{d }r\\&\le c (r+z+\eta ) t^{-3/2}\bigg ({m_t\over \sqrt{t}} + \mathbf{E}\Big [\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \notin [s^{1/3},s^{2/3}], \sigma \underline{B}^{[0,t/2]} \ge -z-\eta \}}\nonumber \\&(\sigma B_{t/2} +z+\eta )\Big ] \bigg )\, \mathrm{d }r. \end{aligned}$$

On the other hand,

$$\begin{aligned}&\mathbf{E}\Big [\mathbf{1}_{\{\exists s\in [x,t/2]: \sigma B_s \notin [s^{1/3},s^{2/3}], \sigma \underline{B}^{[0,t/2]} \ge -z-\eta \}} (\sigma B_{t/2} +z+\eta )\Big ] \\&\quad = (z+\eta ) \mathbf{P}_{z+\eta }(\exists s\ge x: \sigma R_s-z-\eta \notin [s^{1/3},s^{2/3}]) \end{aligned}$$

where, as before, \((R_s,\, s\ge 0)\) is a three-dimensional Bessel process. This implies that

$$\begin{aligned} \mathbf{P}(\widetilde{B},\, \mathrm{d }r) \!\le \! c (r\!+\!z\!+\!\eta ) t^{-3/2}\bigg ({m_t\over \sqrt{t}} \!+\! (z\!+\!\eta ) \mathbf{P}_{z+\eta }(\exists s \!\ge \! x: \sigma R_s\!-\!z\!-\!\eta \!\notin \! [s^{1/3},s^{2/3}])\!\bigg )\, \mathrm{d }r.\nonumber \\ \end{aligned}$$
(5.12)

We get that

$$\begin{aligned} \mathbf{P}(\widetilde{B}) \!\le \! c (z\!+\!2\eta )^2 t^{-3/2}\left({m_t \over \sqrt{t}} \!+\!(z+\eta )\mathbf{P}_{z+\eta }(\exists s\!\ge \! x: \sigma R_s-z-\eta \notin [s^{1/3},s^{2/3}]) \right) \end{aligned}$$

which is less than \(c (z+2\eta )^2 t^{-3/2}\varepsilon \) for \(x\) large enough (as \(t\rightarrow \infty \)) and we conclude by (5.9). \(\square \)

For future reference we now prove the following lemma, which shows that the probability that a Brownian path conditioned to end up near \(m_t\) satisfies the constraints of \(E_1\) but fails those of \(E_2\) or \(E_3\) decreases like \(1/t\). Let \(\mathbf{P}_{a,b}^{(t)}\) denote the probability under which \(B\) is a Brownian bridge from \(a\) to \(b\) of length \(t\). The notation \(o_x(1)\) designates an expression depending on \(x\) (and also on \(r\) and \(z\), but independent of \(t\)) which converges to 0 as \(x \rightarrow \infty \). We recall that \(\underline{B}^{[a,b]}:=\min _{s\in [a,b]} B_s\).
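For the simulation-minded reader, a Brownian bridge from \(a\) to \(b\) of length \(t\) can be realized on a grid from a standard Brownian motion \(B\) via \(s \mapsto B_s - {s\over t}B_t + a + {s\over t}(b-a)\); a minimal sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(8)

def brownian_bridge(a, b, t, n=10000):
    """Grid realization of a Brownian bridge from a to b of length t,
    via B_s - (s/t) B_t + a + (s/t)(b - a) for a standard BM B."""
    s = np.linspace(0.0, t, n + 1)
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(t / n) * rng.standard_normal(n))))
    return s, B - (s / t) * B[-1] + a + (s / t) * (b - a)

s, X = brownian_bridge(0.0, 2.0, 5.0)
print(X[0], X[-1])   # endpoints 0.0 and 2.0
```

Probabilities such as those in Lemma 5.2 can then be estimated by computing minima of the bridge over sub-intervals of the grid.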

Lemma 5.2

Fix \(r\in \mathbb{R }\) and \(z>0\). We have

$$\begin{aligned}&\mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t -z, \\&\quad \qquad \qquad \exists s\in [x,t-x]\,: \, \sigma B_s< \min (s^{1/3}, m_t +(t-s)^{1/3}) \Big ) = {1\over t} o_x(1) \end{aligned}$$

in the sense that \(\limsup _{t\rightarrow \infty } t\mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}(\ldots )=o_x(1)\). Furthermore, there exists a constant \(c>0\) such that for any \(t\ge 1, z>0\) and \(r\in \mathbb{R }\) such that \(|r|\le \sqrt{t}\),

$$\begin{aligned} \mathbf{P}_{0,{m_t+r\over \sigma }}^{(t)} \Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t -z \Big )\le {c\over t} z|r+z|. \end{aligned}$$

Proof

We have

$$\begin{aligned}&\mathbf{P}_{0,{m_t +r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t-z, \\&\qquad \qquad \qquad \exists s\in [x,t-x]\,: \, \sigma B_s< \min (s^{1/3}, m_t +(t-s)^{1/3}) \Big ) \\&\qquad \le \mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t -z, \, \exists s\in [x,t/2]\,: \, \sigma B_s< s^{1/3} \Big )\\&\qquad \qquad + \mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t-z,\\&\qquad \qquad \qquad \exists s\in [t/2, t-x]\,: \, \sigma B_s< m_t +(t-s)^{1/3} \Big ). \end{aligned}$$

We treat the two terms on the right-hand side successively. Using the definition of the Brownian bridge, we observe that, as \(t\rightarrow \infty \)

$$\begin{aligned}&\mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z,\,\sigma \underline{B}^{[t/2,t]}\ge m_t-z,\, \exists s\in [x,t/2]\,: \, \sigma B_s< s^{1/3} \Big )\\&\quad = \sigma \sqrt{2\pi t}\, \mathrm{e }^{{(m_t +r)^2 \over 2\sigma ^2 t}} \lim _{\, \mathrm{d }r \rightarrow 0} {1\over \, \mathrm{d }r} \mathbf{P}(\widehat{B},\, \mathrm{d }r) \end{aligned}$$

with \(\mathbf{P}(\widehat{B},\, \mathrm{d }r)\) defined in (5.7). By Eq. (5.8), \(\mathbf{P}(\widehat{B},\, \mathrm{d }r) \le ct^{-3/2}(r+z)\mathbf{P}_r(\exists s\in [x,t/2]: \sigma R_s \le s^{1/3})\, \mathrm{d }r\), where \((R_s,s\ge 0)\) is a three-dimensional Bessel process. Hence

$$\begin{aligned} \mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\!\ge \! -z,\,\sigma \underline{B}^{[t/2,t]}\!\ge \! m_t\!-\!z,\, \exists s\in [x,t/2]\,: \, \sigma B_s\!<\! s^{1/3} \Big ) = {1\over t}o_x(1). \end{aligned}$$

Similarly, notice that

$$\begin{aligned}&\mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z,\, \sigma \underline{B}^{[t/2,t]}\ge m_t-z,\, \nonumber \\&\qquad \qquad \quad \exists s\in [t/2, t-x]\,: \, \sigma B_s< m_t +(t-s)^{1/3}\Big ) \\&\quad \le \sigma \sqrt{2\pi t}\, \mathrm{e }^{{(m_t +r)^2 \over 2\sigma ^2 t}} \lim _{\, \mathrm{d }r \rightarrow 0} {1\over \, \mathrm{d }r} \mathbf{P}(\widetilde{B},\, \mathrm{d }r) \end{aligned}$$

with \(\mathbf{P}(\widetilde{B},\, \mathrm{d }r)\) defined in (5.10). Then, Eq. (5.12) implies that

$$\begin{aligned}&\mathbf{P}_{0,{m_t+r \over \sigma }}^{(t)}\Big (\sigma \underline{B}^{[0,t/2]}\ge -z,\, \sigma \underline{B}^{[t/2,t]}\ge m_t-z,\,\nonumber \\&\qquad \qquad \quad \exists s \in [t/2, t-x]\,: \, \sigma B_s< m_t +(t-s)^{1/3}\Big ) \end{aligned}$$

is \({1\over t}o_x(1)\), which proves the first assertion. Let us prove the second one. We can suppose that \(r+z\ge 0\), since the statement is trivial otherwise. We have that

$$\begin{aligned}&\mathbf{P}_{0,{m_t+r\over \sigma }}^{(t)} \Big (\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t -z \Big )\\&\quad = \sigma \sqrt{2\pi t}\, \mathrm{e }^{{(m_t +r)^2 \over 2\sigma ^2 t}} \lim _{\, \mathrm{d }r \rightarrow 0} {1\over \, \mathrm{d }r} \mathbf{P}(\sigma \underline{B}^{[0,t/2]}\!\ge \! \!-\!z, \, \sigma \underline{B}^{[t/2,t]} \ge m_t \!-\!z,\, \sigma B_t \!\in \! m_t\!+\!\, \mathrm{d }r).\,\,\,\, \end{aligned}$$

By the Markov property at time \(t/2\) and equation (5.2), we see that

$$\begin{aligned}&\mathbf{P}(\sigma \underline{B}^{[0,t/2]}\ge -z, \, \sigma \underline{B}^{[t/2,t]}\ge m_t -z,\, \sigma B_t \in m_t+\, \mathrm{d }r)\\&\quad \le c t^{-3/2} (r+z) \mathbf{E}\left[\mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]}\ge -z \}} (\sigma B_{t/2} +z-m_t)_+\right]\, \mathrm{d }r \end{aligned}$$

where \(y_+\) stands for \(\max (y,0)\). We notice as before that \(\mathbf{E}\left[\mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]}\ge -z \}} (\sigma B_{t/2}+ z-m_t)_+\right] \le \mathbf{E}\left[\mathbf{1}_{\{\sigma \underline{B}^{[0,t/2]}\ge -z \}} (\sigma B_{t/2} +z)\right]=z\), which completes the proof.

6 The decoration point measure \(\fancyscript{Q}\): Proof of Theorem 2.3

This section is devoted to the study of the decoration point measure \(\fancyscript{Q}\).

Proof of Theorem 2.3

Recall that \(X_{i,t}(s)\) is the position at time \(s \in [0,t]\) of the ancestor of \(X_i(t)\) and that we have defined

$$\begin{aligned} Y_t(s) :=X_{1,t}(s)-X_1(t). \end{aligned}$$

Let \(\zeta >0\), and let \(f := \mathbf{1}_{[0, \, \zeta ]}\). Let \(t\ge \zeta \). Let

$$\begin{aligned} L_j(t, \, \zeta )&:= \int _{A_j} \, \mathrm{d }\fancyscript{Q}(t, \, \zeta ) \\&= \sum _{i: \, t-\tau _i(t) \le \zeta } \int _{A_j} \, \mathrm{d }\fancyscript{N}_i(t), \qquad 1\le j\le n. \end{aligned}$$

Let \(F_1: \, C(\mathbb{R }_+, \, \mathbb{R }) \rightarrow \mathbb{R }_+\) be a bounded continuous function and \(F_2:=\mathbf{1}_{[\eta _1,\eta _2]}\) for some \(\eta _2>\eta _1\). Fix \(x>0\) and let

$$\begin{aligned} a_s:= {\left\{ \begin{array}{ll} -x&\text{if } s\in [0,t/2],\\ m_t - x&\text{if } s\in (t/2,t]. \end{array}\right.} \end{aligned}$$
(6.1)

We define for any function \(X:[0,t]\rightarrow \mathbb{R }\), the event

$$\begin{aligned}&A(X) :=\{X(s)\ge a_s\, \forall \, s\in [0,t-\zeta ]\}\cap \{X(s)-X(t) \ge -x,\,\forall \, s\in [t-\zeta ,t]\}\cap \\&\quad \{X(t-\zeta )-X(t)\in (\zeta ^{1/3},\zeta ^{2/3})\} \cap \{\inf \{s: X(t-s) = \min _{u \in [0,t/2]}X(t-u) \} \le x \}. \end{aligned}$$

We easily check that Proposition 2.5 implies that the event \(\{X_{1}(t)-m_t\in [\eta _1,\eta _2]\}\cap (A(X_{1,t}))^\complement \) has probability arbitrarily close to \(0\) when \(x\) and \(\zeta \) are large enough. Therefore, we fix \(x\) large, work on the event \(A(X_{1,t})\), and let \(t\rightarrow \infty \), then \(\zeta \rightarrow \infty \), then \(x\rightarrow \infty \). By (4.3), for \(t\ge \zeta \):

$$\begin{aligned}&\mathbf{E}\Big \{\mathbf{1}_{A(X_{1,t})} \, F_1(Y_t(s), \, s\in [0, \, \zeta ]) \, \mathrm{e }^{-\sum _{j=1}^n \alpha _j L_j(t, \, \zeta )} \, F_2(X_1(t)- m_t) \Big \}\\&\quad =\mathbf{E}\Big [\mathbf{1}_{A(\sigma B)} \, F_1(\sigma B_t - \sigma B_{t-s}, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{\sigma B_t} \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_{t-u}^{(f)}(\sigma B_t-\sigma B_u)] \, \mathrm{d }u}\\&\qquad \qquad F_2(\sigma B_t - m_t) \Big ]\\&\quad = \mathbf{E}\Big [\mathbf{1}_{A(\sigma \overline{B})} \, F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{\sigma B_t} \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_v^{(f)}(\sigma B_v)] \, \mathrm{d }v} F_2(\sigma B_t - m_t) \Big ] \\&\quad = \int _\mathbb{R }\mathbf{P}\{B_t \!\in \! {\! \, \mathrm{d }y\over \sigma } \} \, \mathrm{e }^y \, F_2(y\!-\! m_t)\, \mathbf{E}_{0, {y\over \sigma }}^{(t)} \Big [\mathbf{1}_{A(\sigma \overline{B})} \, F_1(\sigma B_s, s\!\in \! [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^t [1-\overline{G}_v^{(f)}(\sigma B_v)] \, \mathrm{d }v} \Big ], \end{aligned}$$

where \(\overline{B}_s := B_t-B_{t-s}, s\in [0,t]\) (so \(\overline{B}_t = B_t\)), and \(\mathbf{E}_{0, {y\over \sigma }}^{(t)}\) denotes expectation with respect to the probability \(\mathbf{P}_{0, {y\over \sigma }}^{(t)}\), under which \((B_v, \, v\in [0, \, t])\) is a Brownian bridge of length \(t\), starting at \(0\) and ending at \({y\over \sigma }\). Since \(f = \mathbf{1}_{[0, \, \zeta ]}\), the function \(\overline{G}_r^{(f)}\) in (4.2) becomes

$$\begin{aligned} \overline{G}_r^{(f)}(x) = {\left\{ \begin{array}{ll} \mathbf{E}\Big [\mathrm{e }^{-\sum _{j=1}^n \alpha _j \int _{x+A_j} \, \mathrm{d }\fancyscript{N}(r)} \, \mathbf{1}_{\{\min \fancyscript{N}(r) \ge x\}} \Big ],&\text{if } r\in [0, \, \zeta ], \\ 1- G_r(x),&\text{if } r>\zeta . \\ \end{array}\right.} \end{aligned}$$

So, if we write

$$\begin{aligned} G^*_v(x) := 1- \mathbf{E}\Big [\mathrm{e }^{-\sum _{j=1}^n \alpha _j \int _{x+A_j} \, \mathrm{d }\fancyscript{N}(v)} \, \mathbf{1}_{\{\min \fancyscript{N}(v) \ge x\}} \Big ], \end{aligned}$$
(6.2)

then for \(t\ge \zeta \), we have \(\int _0^t [1-\overline{G}_{v}^{(f)}(\sigma B_{v})] \, \mathrm{d }v = \int _0^\zeta G^*_v(\sigma B_v)\, \mathrm{d }v + \int _\zeta ^t G_v(\sigma B_v)\, \mathrm{d }v\), so that by writing

$$\begin{aligned} I_{(6.3)}(t, \, \zeta ) := t\, \mathbf{E}_{0,{y\over \sigma }}^{(t)} \Big [\mathbf{1}_{A(\sigma \overline{B})} \, F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v(\sigma B_v)\, \mathrm{d }v - 2 \int _\zeta ^t G_v(\sigma B_v)\, \mathrm{d }v} \Big ],\nonumber \\ \end{aligned}$$
(6.3)

we have

$$\begin{aligned}&\mathbf{E}\Big \{\mathbf{1}_{A(X_{1,t})} \, F_1(Y_t(s), \, s\in [0, \, \zeta ]) \, \mathrm{e }^{-\sum _{j=1}^n \alpha _j L_j(t, \, \zeta )} \, F_2(X_1(t)-m_t) \Big \}\\&\quad ={1\over t}\int _\mathbb{R }\mathbf{P}\Big \{B_t \in {\! \, \mathrm{d }y\over \sigma } \Big \} \, \mathrm{e }^y \, F_2(y-m_t)\, I_{(6.3)}(t, \, \zeta )\\&\quad = {1\over t^{3/2}}\int _\mathbb{R }{\mathrm{e }^{y - {y^2\over 2 \sigma ^2 t}} \over \sigma (2\pi )^{1/2}} \, F_2(y-m_t)\, I_{(6.3)}(t, \, \zeta ) \, \mathrm{d }y. \end{aligned}$$

Let \(y:= z+m_t\). Since \(F_2:=\mathbf{1}_{[\eta _1,\eta _2]}\), we have, as \(t\rightarrow \infty \), \(\mathrm{e }^{y - {y^2\over 2 \sigma ^2 t}} \sim \mathrm{e }^y = t^{3/2}\mathrm{e }^{C_B}\mathrm{e }^z\), where \(C_B\) is the constant in (1.3). Therefore, for \(t\rightarrow \infty \),

$$\begin{aligned}&\mathbf{E}\Big \{\mathbf{1}_{A(X_{1,t})} \, F_1(Y_t(s), \, s\in [0, \, \zeta ]) \, \mathrm{e }^{-\sum _{j=1}^n \alpha _j L_j(t, \, \zeta )} \,F_2(X_1(t)-m_t)\Big \} \nonumber \\&\quad \sim \mathrm{e }^{C_B}\int _{\eta _1}^{\eta _2} {\mathrm{e }^z \over \sigma (2\pi )^{1/2}} \, I_{(6.3)}(t, \, \zeta ) \, \mathrm{d }z. \end{aligned}$$
(6.4)

We need to treat \(I_{(6.3)}(t, \, \zeta )\) when \(z\in [\eta _1,\eta _2]\). As we will let \(\zeta \rightarrow \infty \) before letting \(x\rightarrow \infty \), we can suppose \(\zeta >x\). Let us write \(\theta =\theta _B(\zeta ) := \inf \{s\in [0,\zeta ] : B_s = \max _{u \in [0,\zeta ]} B_u \}\). Applying the Markov property at time \(v=\zeta \) (for the Brownian bridge, which is an inhomogeneous Markov process; see Fact 7.4) gives

$$\begin{aligned} I_{(6.3)}(t, \, \zeta )&\!=\!&t \int _{-\zeta ^{2/3}}^{-\zeta ^{1/3}} \mathbf{E}\Big [\!\mathbf{1}_{\{\max _{[0,\zeta ]}\sigma B_s\le x, \, \theta \le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, \mathbf{1}_{\{\sigma B_\zeta \in \! \, \mathrm{d }w\}} \Big ] \\&\times \left(t\over t-\zeta \right)^{1/2} \frac{\mathrm{e }^{-{(y-w)^2\over 2\sigma ^2 (t-\zeta )}}}{\mathrm{e }^{-{y^2\over 2 \sigma ^2 t}}} \mathbf{E}_{0,{y- w\over \sigma }}^{(t-\zeta )} \Big [\mathbf{1}_{\{\sigma \overline{B}_s \ge a_s,\, s\in [0,t-\zeta ] \}} \mathrm{e }^{- 2 \int _0^{t-\zeta } G_{v+\zeta }(w + \sigma B_v) \, \mathrm{d }v} \Big ] \end{aligned}$$

where now \(\overline{B}_s:=B_{t-\zeta }-B_{t-\zeta -s}\). Recall that we consider the case \(z=y-m_t \in [\eta _1,\eta _2]\). It follows that \({(y-w)^2\over t-\zeta }\) and \({y^2\over t}\) are \(o_t(1)\), so that, for \(t\rightarrow \infty \),

$$\begin{aligned}&I_{(6.3)}(t, \, \zeta ) \sim t \int _{-\zeta ^{2/3}}^{-\zeta ^{1/3}} \mathbf{E}\Big [\mathbf{1}_{ \{\max _{[0,\zeta ]} \sigma B_s\le x, \, \theta \le x \} } F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, \mathbf{1}_{ \{ \sigma B_\zeta \in \! \, \mathrm{d }w \} } \Big ] \nonumber \\&\qquad \times \, \mathbf{E}_{0,{y- w\over \sigma }}^{(t-\zeta )} \Big [\mathbf{1}_{ \{\sigma \overline{B}_s \ge a_s, \, s\in [0,t-\zeta ] \} } \mathrm{e }^{- 2 \int _0^{t-\zeta } G_{v+\zeta }(w + \sigma B_v) \, \mathrm{d }v} \Big ]. \end{aligned}$$
(6.5)

At this stage, we need a couple of lemmas, stated as follows. We postpone the proof of these lemmas, and finish the proof of Theorem 2.3. Recalling the family of processes \(\Gamma ^{(b)}\) from (2.1), we write

$$\begin{aligned} \varphi _x(z) := \sigma \int _0^{x/\sigma } \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty F_W({z}+\sigma \Gamma ^{(b)}_v)\, \mathrm{d }v}\Big ] \, \mathrm{d }b, \qquad z\in \mathbb{R }, \end{aligned}$$
(6.6)

where \(F_W\) is the distribution function of the random variable \(W\) introduced in (1.2).

Lemma 6.1

Let \(z\in \mathbb{R }, y:= z + m_t, x>0\) and \((a_s,s\in [0,t])\) defined in (6.1). There exists a function \(f:\mathbb{R }\times \mathbb{R }_+\rightarrow \mathbb{R }\) such that for any \(w<x+z\) and \(\zeta >0\)

$$\begin{aligned} \lim _{t\rightarrow \infty } \, t \, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{ \{\sigma (B_t - B_{t-s}) \ge a_s,s\in [0,t] \} } \mathrm{e }^{- 2 \int _0^t G_{\zeta +v}(w+ \sigma B_v) \, \mathrm{d }v} \Big ] = \varphi _x(z)f(w,\zeta ). \end{aligned}$$

Moreover \(f(w,\zeta )\sim |w|\) as \(w\rightarrow -\infty \) and uniformly in \(\zeta >0\).

Lemma 6.2

Let \(\Gamma ^{(b)}\) be the family of processes defined in (2.1), and let \(T_b := \inf \{t\ge 0: \, B_t=b\}\). We have

$$\begin{aligned}&\lim _{\zeta \rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x,\, \sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \theta \le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, |B_\zeta | \Big ] \\&\quad = \int _0^{x/\sigma } \mathbf{E}\Big [F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0) \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf{1}_{\{T_b \le x\}} \Big ] \, \mathrm{d }b. \end{aligned}$$

 

Remark 6.3

It is possible, with some extra work, to obtain the following identity. Let \(\varphi (z):=\lim _{x\rightarrow \infty } \varphi _x(z)\) be the limit of (6.6). Then for any \(z\in \mathbb{R }\),

$$\begin{aligned} \varphi (z) = {\sqrt{2 \pi } \over c_1} \mathrm{e }^{-(z+C_B)} \, f_{W}({z}), \end{aligned}$$

where \(C_B\) is the constant in (1.3), \(W\) the random variable in (1.2), \(f_{W}\) the density function of \(W\), and

$$\begin{aligned} c_1 := \int _0^\infty \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty G_v(\sigma \Gamma ^{(b)}_v)\, \mathrm{d }v} \Big ] \, \mathrm{d }b, \end{aligned}$$

with \(\Gamma ^{(b)}\) as defined in (2.1). The appearance of \(f_W\) here is due to the fact that standard arguments in the study of parabolic p.d.e.’s show that the density of \(X_1(t)-m_t\) converges to that of \(W\). More precisely, using the classical interior parabolic a priori estimate [13], it is possible to show that \(v(t,\cdot ) \equiv u(t,m_t+\cdot )\) converges to \(w(\cdot )\) in the local \(C^2(\mathbb{R })\) topology.

We now continue with the proof of Theorem 2.3. Let us go back to (6.5). To apply Lemma 6.1, we want to use dominated convergence. First, fix \(\zeta >0\). Notice that

$$\begin{aligned}&\mathbf{E}_{0,{y- w\over \sigma }}^{(t-\zeta )} \Big [\mathbf{1}_{\{\sigma \overline{B}_s \ge a_s,s\in [0,t-\zeta ] \}} \mathrm{e }^{- 2 \int _0^{t-\zeta } G_{v+\zeta }(w + \sigma B_v) \, \mathrm{d }v} \Big ] \\&\quad \le \mathbf{P}_{0,{y- w\over \sigma }}^{(t-\zeta )} \Big (\sigma \overline{B}_s \ge a_s,s \in [0,t-\zeta ]\Big )\\&\quad = \mathbf{P}_{0,{y-w\over \sigma }}^{(t-\zeta )} \Big (\sigma B_s \ge a_s,s\in [0,t-\zeta ]\Big ), \end{aligned}$$

the last identity being a consequence of the fact that \((\overline{B}_s, s\in [0,t-\zeta ])\) and \((B_s, s\in [0,t-\zeta ])\) have the same distribution under \(\mathbf{P}_{0,{y-w\over \sigma }}^{(t-\zeta )}\). Using Lemma 5.2, the last probability is at most \({c\over t-\zeta } \, x|z-w+x|\) for some constant \(c>0\). Hence, we have for \(t>2\zeta \)

$$\begin{aligned} t\,\mathbf{E}_{0,{y- w \over \sigma }}^{(t-\zeta )} \Big [\mathbf{1}_{\{\sigma \overline{B}_s \ge a_s,s\in [0,t-\zeta ] \}} \mathrm{e }^{- 2 \int _0^{t-\zeta } G_{v+\zeta }(w + \sigma B_v) \, \mathrm{d }v} \Big ] \le 2c \, x|z-w+x|. \end{aligned}$$

We check that

$$\begin{aligned}&\int _{-\zeta ^{2/3}}^{-\zeta ^{1/3}}\!\! \mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, \mathbf{1}_{\{\sigma B_\zeta \in \! \, \mathrm{d }w\}} \Big ] |z-w+x|\\&\quad \le |\!| F_1|\!|_\infty \, \mathbf{E}[\, |z - \sigma B_\zeta + x| \,] \end{aligned}$$

which is finite. Hence, we can apply the dominated convergence, to see that

$$\begin{aligned} \lim _{t\rightarrow \infty } I_{(6.3)}(t, \, \zeta )&= \varphi _x(z)\mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x, \, \sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \, \theta \le x \}} \\&\times F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, f(\sigma B_\zeta ,\zeta ) \Big ]. \end{aligned}$$

Since \(f(w,\zeta )\sim |w|\) when \(w\rightarrow -\infty \) and uniformly in \(\zeta >0\), we have as \(\zeta \rightarrow \infty \),

$$\begin{aligned} \lim _{t\rightarrow \infty } I_{(6.3)}(t, \, \zeta )&\sim&\varphi _x(z) \mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x, \, \sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \, \theta \le x\}} \\&\quad \times F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, \sigma |B_\zeta | \Big ], \end{aligned}$$

which, in view of Lemma 6.2, gives that

$$\begin{aligned} \lim _{\zeta \rightarrow \infty }\lim _{t\rightarrow \infty } I_{(6.3)}(t, \, \zeta ) \!=\! \varphi _x(z) \sigma \,\!\! \int _0^{x/\sigma } \!\!\mathbf{E}\Big [F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0) \, \mathrm{e }^{\!-\!2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf{1}_{\{T_b \le x\}} \Big ] \, \mathrm{d }b. \end{aligned}$$

Going back to (6.4), this tells that

$$\begin{aligned}&\lim _{\zeta \rightarrow \infty } \lim _{t\rightarrow \infty } \mathbf{E}\left\{ \mathbf{1}_{A(X_{1,t})} \, F_1(Y_t(s), \, s\in [0, \, \zeta ]) \, \mathrm{e }^{-\sum _{j=1}^n \alpha _j L_j(t, \, \zeta )} \, F_2(X_1(t)- m_t) \right\} \\&\quad = {\mathrm{e }^{C_B}\over (2\pi )^{1/2}} \left(\int _{\eta _1}^{\eta _2} \varphi _x(z) \mathrm{e }^z \, \mathrm{d }z\right)\\&\qquad \times \left(\int _0^{x/\sigma }\! \mathbf{E}\left[F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0)\, \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf{1}_{\{T_b \le x\}}\! \right] \, \mathrm{d }b\right). \end{aligned}$$

Letting \(x\rightarrow \infty \) yields that \(\{(Y_t(s), \, s \in [0,t]) ; {\fancyscript{Q}}(t,\zeta )\}\) converges in distribution to \(\{(Y(s), s\ge 0) ; {\fancyscript{Q}}\}\), that \(X_{1}(t)-m_t\) converges in distribution, necessarily to \(W\) (by (1.2)), and that \(\{(Y_t(s), \, s \in [0,t]) ; {\fancyscript{Q}}(t,\zeta )\}\) and \(X_{1}(t)-m_t\) are asymptotically independent. Theorem 2.3 is proved.\(\square \)

We observe that by letting \(x \rightarrow \infty \) the last identity proves that

$$\begin{aligned} \int _0^\infty \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \Big ] \, \mathrm{d }b <\infty , \ \ \int _{-\infty }^{\infty } \int _0^\infty \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^\infty F_W({z}+\sigma \Gamma ^{(b)}_v)\, \mathrm{d }v}\Big ] \mathrm{e }^z \, \mathrm{d }b \, \mathrm{d }z<\infty .\nonumber \\ \end{aligned}$$
(6.7)

It remains to check Lemmas 6.1 and 6.2. Their proof relies on some well known path decomposition results recalled in Sect. 7. Lemmas 6.1 and 6.2 are proved in Sects. 9 and 8, respectively.

Before proceeding with this program, observe that the arguments used above also yield the following Laplace transform characterization of \({\fancyscript{Q}}\). For any \(n \in \mathbb{N }\), any \((\alpha _1, \ldots , \alpha _n) \in \mathbb{R }_+^n\), any collection \(A_1,\ldots , A_n\) of Borel subsets of \(\mathbb{R }_+\) and any \(\zeta >0\), define

$$\begin{aligned} I_\zeta (t) := \mathbf{E}\left\{ \exp \left(- \sum _i \mathbf 1 _{\{t-\tau _i(t) \le \zeta \}} \, \sum _{j=1}^n \alpha _j\, \int _{A_j} \, \mathrm{d }\fancyscript{N}_i(t) \right) \right\} , \end{aligned}$$

(i.e., only the particles whose common ancestor with \(X_1(t)\) is more recent than \(\zeta \) are taken into account). Clearly, the functional \(I_\zeta (t)\) characterizes the law of \({\fancyscript{Q}}(t,\, \zeta )\).

Then, for all \(n\) and all bounded Borel sets \(A_1, \ldots , A_n\) of \(\mathbb{R }_+\), the Laplace transform of the distribution of the random vector \((\fancyscript{Q}(A_1), \ldots , \fancyscript{Q}(A_n))\) is given by: \(\forall \alpha _j \ge 0\) (for \(1\le j\le n\)),

$$\begin{aligned} \mathbf{E}\Big \{\mathrm{e }^{-\sum _{j=1}^n \alpha _j \fancyscript{Q}(A_j)} \Big \}&= \lim _{\zeta \rightarrow \infty } \lim _{t \rightarrow \infty } I_{\zeta }(t) \nonumber \\&= {\int _0^\infty \mathbf{E}(\mathrm{e }^{-2 \int _0^\infty G_v^*(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v}) \, \mathrm{d }b \over \int _0^\infty \mathbf{E}(\mathrm{e }^{-2 \int _0^\infty G_v (\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v}) \, \mathrm{d }b}, \end{aligned}$$
(6.8)

where

$$\begin{aligned} G_v^*(x)&:= 1- \mathbf{E}\Big [\mathrm{e }^{-\sum _{j=1}^n \alpha _j \int _{x+A_j} \, \mathrm{d }\fancyscript{N}(v)} \, \mathbf{1}_{\{\min \fancyscript{N}(v) \ge x\}} \Big ]. \end{aligned}$$

Observe that the first equality in (6.8) is a consequence of the convergence in distribution of \({\fancyscript{Q}}(t,\, \zeta )\) given in Theorem 2.3.

7 Meander, bridge and their sample paths

We collect in this section a few known results on Brownian motion and related processes. Recall that if \(g:= \sup \{t<1: \, B_t = 0\}\), then \((\mathfrak m _u := (1-g)^{-1/2} |B_{g+ (1-g)u}|, \, u\in [0, \, 1])\) is called a Brownian meander. In particular, \(\mathfrak m _1\) has the Rayleigh distribution: \(\mathbf{P}(\mathfrak m _1 >x) = \mathrm{e }^{-x^2/2}, x>0\).
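As an illustrative aside (ours, not part of the original text), the following Python sketch approximates the meander through this very construction on a discrete grid and compares the empirical tail of \(\mathfrak m _1\) with the Rayleigh law; the grid size and sample count are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def meander_endpoint(n=20_000):
    # Discretize B on [0, 1], locate the last zero g, rescale the piece after g.
    dt = 1.0 / n
    b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    crossings = np.nonzero(b[:-1] * b[1:] <= 0)[0]  # sign changes approximate zeros
    g = crossings[-1] * dt if crossings.size else 0.0
    return abs(b[-1]) / np.sqrt(1.0 - g)            # m_1 = (1 - g)^{-1/2} |B_1|

samples = np.array([meander_endpoint() for _ in range(2_000)])
for x in (0.5, 1.0, 1.5):
    print(x, (samples > x).mean(), np.exp(-x**2 / 2))  # empirical vs Rayleigh tail
```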

Let \(B\) be Brownian motion, \(R\) a three-dimensional Bessel process, and \(\mathfrak m \) a Brownian meander. The processes \(B\) and \(R\) are assumed to start from \(a\) under \(\mathbf{P}_a\) (for \(a\ge 0\)) if stated explicitly; otherwise we work under \(\mathbf{P}:= \mathbf{P}_0\) so that they start from 0.

Fact 7.1

(Denisov [12]) Let \(\theta := \inf \{s\ge 0: \, B_s = \sup _{u\in [0, \, 1]} B_u\}\) be the location of the maximum of \(B\) on \([0, \, 1]\). The random variable \(\theta \) has the Arcsine law: \(\mathbf{P}(\theta \le x) = {2\over \pi } \arcsin \sqrt{x}, x\in [0, \, 1]\). The processes \(({B_\theta - B_{(1-u)\theta }\over \theta ^{1/2}}, \, u\in [0, \, 1])\) and \(({B_\theta - B_{\theta + u(1-\theta )}\over (1-\theta )^{1/2}}, \, u\in [0, \, 1])\) are independent copies of the Brownian meander, and are also independent of the random variable \(\theta \).
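Fact 7.1 lends itself to a quick numerical sanity check (ours, with arbitrary discretization parameters): the location of the argmax of a simulated path on \([0, \, 1]\) should follow the Arcsine law.

```python
import numpy as np

rng = np.random.default_rng(1)
n, paths = 2_000, 5_000
dt = 1.0 / n
b = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)  # Brownian paths on [0, 1]
theta = (b.argmax(axis=1) + 1) * dt                              # location of the maximum
for x in (0.1, 0.5, 0.9):
    print(x, (theta <= x).mean(), 2 / np.pi * np.arcsin(np.sqrt(x)))
```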

 

Fact 7.2

(Imhof [19]) For any continuous function \(F : \, C([0, \, 1], \, \mathbb{R }) \rightarrow \mathbb{R }_+\), we have

$$\begin{aligned} \mathbf{E}\Big [F(\mathfrak m _s, \; s\in [0, \, 1]) \Big ] = \Big ({\pi \over 2}\Big )^{\! 1/2} \, \mathbf{E}\Big [{1\over R_1} \, F(R_s, \; s\in [0, \, 1]) \Big ]. \end{aligned}$$

In particular, for any \(x>0\), the law of \((\mathfrak m _s, \; s\in [0, \, 1])\) given \(\mathfrak m _1=x\) is the law of \((R_s, \; s\in [0, \, 1])\) given \(R_1=x\).
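As a consistency check (ours, not part of the original argument), taking \(F(\omega ) := \mathbf{1}_{\{\omega (1)>x\}}\) in Imhof’s identity recovers the Rayleigh law recalled at the beginning of this section: since \(R_1\) has density \(\sqrt{2/\pi }\, r^2 \mathrm{e }^{-r^2/2}\) on \(\mathbb{R }_+\),

$$\begin{aligned} \mathbf{P}(\mathfrak m _1 > x) = \Big ({\pi \over 2}\Big )^{\! 1/2} \, \mathbf{E}\Big [{1\over R_1}\, \mathbf{1}_{\{R_1>x\}} \Big ] = \int _x^\infty r\, \mathrm{e }^{-r^2/2} \, \mathrm{d }r = \mathrm{e }^{-x^2/2}. \end{aligned}$$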

 

Corollary 7.3

Let \(r>0\) and \(q>0\). Let \(T_a := \inf \{s\ge 0: \, B_s=a\}\) for any \(a\in \mathbb{R }\).

  • (i) The law of \((\mathfrak m _1-\mathfrak m _{1-s}, \; s\in [0, \, 1])\) under \(\mathbf{P}(\, \bullet \, | \, \mathfrak m _1=r)\) is the law of \((q^{-1/2}B_{qs}, \; s\in [0, \, {1\over q}T_{q^{1/2}r}])\) under \(\mathbf{P}(\, \bullet \, | \, T_{q^{1/2}r}=q)\).

  • (ii) For any \(t>0\), the law of \((R_1-R_{1-s}, \; s\in [0, \, 1])\) under \(\mathbf{P}(\, \bullet \, | \, R_1=r)\) is the law of \((q^{-1/2}(B_0-B_{qs}), \, s\in [0, \, {T_0\over q}])\) under \(\mathbf{P}_{q^{1/2}r}(\, \bullet \, | \, T_0=q)\).

 

Proof

By Imhof’s theorem (Fact 7.2), \((\mathfrak m _s, \; s\in [0, \, 1])\) given \(\mathfrak m _1=r\), as well as \((R_s, \; s\in [0, \, 1])\) given \(R_1=r\), are three-dimensional Bessel bridges of length \(1\), starting from \(0\) and ending at \(r\). By Williams [31], this is equivalent to saying that both \((\mathfrak m _1-\mathfrak m _{1-s}, \; s\in [0, \, 1])\) given \(\mathfrak m _1=r\), and \((R_1-R_{1-s}, \; s\in [0, \, 1])\) given \(R_1=r\), have the distribution of \((B_s, \; s\in [0, \, T_r])\) given \(T_r=1\).

By scaling, this gives (i).

To get (ii), we use moreover the fact that, by symmetry, \((B_s, \; s\in [0, \, T_r])\) under \(\mathbf{P}(\, \bullet \, | \, T_r=1)\) has the law of \((-B_s, \; s\in [0, \, T_{-r}])\) under \(\mathbf{P}(\, \bullet \, | \, T_{-r}=1)\), and thus has the law of \((B_0-B_s, \; s\in [0, \, T_0])\) under \(\mathbf{P}_r(\, \bullet \, | \, T_0=1)\). This yields (ii) by scaling. \(\square \)

Finally, we will use several times the Markov property for the Brownian bridge which is an inhomogeneous Markov process. Recall that \(\mathbf{E}^{(t+s)}_{0,x}\) is expectation with respect to \(\mathbf{P}^{(t+s)}_{0,x} (\, \cdot \,) := \mathbf{P}_0 (\, \cdot \, | \, B_{t+s}=x)\).

Fact 7.4

Fix \(t, s \ge 0\) and \(x \in \mathbb{R }\). For any measurable functions \(F : \, C([0, \,t], \, \mathbb{R }) \rightarrow \mathbb{R }_+\) and \(G : \, C([0, \,s], \, \mathbb{R }) \rightarrow \mathbb{R }_+\), we have

$$\begin{aligned}&\mathbf{E}^{(t+s)}_{0,x} \Big [F(B_s, s\in [0,t]) G(B_r, r\in [t,t+s]) \Big ]\\&\quad = \mathbf{E}_0 \left[\sqrt{{t+s\over s}}\mathrm{e }^{{x^2\over 2(t+s)}-{(x-B_t)^2\over 2s}} F(B_s, s\in [0,t]) \mathbf{E}^{(s)}_{B_t, x} \left\{ G(B_r, r\in [t,t+s]) \right\} \right]. \end{aligned}$$
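Taking \(F \equiv G \equiv 1\) in Fact 7.4 shows that the weight inside the outer expectation must average to \(1\); the following Python sketch (ours, with arbitrary test values of \(t, s, x\)) verifies this normalization numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
t, s, x = 2.0, 3.0, 1.5                      # arbitrary test values
bt = rng.normal(0.0, np.sqrt(t), 1_000_000)  # B_t under P_0
w = np.sqrt((t + s) / s) * np.exp(x**2 / (2 * (t + s)) - (x - bt) ** 2 / (2 * s))
print(w.mean())                              # should be close to 1 (F = G = 1 case)
```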

8 Proof of Lemma 6.2

Let \(x>0\) and let \(F_1: \, C(\mathbb{R }_+, \, \mathbb{R }) \rightarrow \mathbb{R }_+\) be a bounded continuous function. We need to check

$$\begin{aligned}&\lim _{\zeta \rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x,\sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \theta \le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, |B_\zeta | \Big ]\\&\quad = \int _0^{x/\sigma } \mathbf{E}\Big [F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf{1}_{\{T_b \le x\}} \Big ] \, \mathrm{d }b, \end{aligned}$$

where \(\Gamma ^{(b)}\) is the process defined in (2.1), \(\theta =\theta _B(\zeta ) := \inf \{s\in [0,\zeta ] : B_s = \max _{u \in [0,\zeta ]} B_u \}\), and \(G^*_v(\cdot )\) is the function defined in (6.2) (we do not use any particular property of \(G^*_v\) except its measurability and positivity).

The random variable \({\theta \over \zeta }\) has the Arcsine law. According to Denisov’s theorem (Fact 7.1), the two processes \((Y_u := {B_\theta - B_{(1-u)\theta }\over \theta ^{1/2}}, \, u\in [0, \, 1])\) and \((Z_u := {B_\theta - B_{\theta + u(\zeta -\theta )}\over (\zeta -\theta )^{1/2}}, \, u\in [0, \, 1])\) are independent Brownian meanders, and are also independent of the random variable \(\theta \).

By definition,

$$\begin{aligned} \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v&= \theta \int _0^1 G^*_{u \theta } (\sigma \theta ^{1/2}(Y_1 -Y_{1-u})) \, \mathrm{d }u \nonumber \\&+ (\zeta -\theta ) \int _0^1 G^*_{\theta + u(\zeta -\theta )} (\sigma \theta ^{1/2} Y_1 - \sigma (\zeta -\theta )^{1/2} Z_u) \, \mathrm{d }u.\qquad \end{aligned}$$
(8.1)

Also, \(B_\zeta = \theta ^{1/2} Y_1 - (\zeta -\theta )^{1/2} Z_1\), and

$$\begin{aligned} B_s = {\left\{ \begin{array}{ll} \theta ^{1/2} (Y_1 - Y_{1- {s\over \theta }}),&\text{if } s\in [0, \, \theta ], \\ \theta ^{1/2} Y_1 - (\zeta -\theta )^{1/2} Z_{s- \theta \over \zeta - \theta },&\text{if } s\in [\theta , \, \zeta ]. \end{array}\right.} \end{aligned}$$
(8.2)

Lemma 8.1

Let \((\mathfrak m _s, \, s\in [0, \, 1])\) be a Brownian meander. Let \(\varepsilon ^1: \, \mathbb{R }_+ \rightarrow \mathbb{R }_+\) and \(\varepsilon ^2: \, \mathbb{R }_+ \rightarrow \mathbb{R }_+\) be two measurable functions such that \(\lim _{t\rightarrow \infty } \varepsilon ^1_t=0\) and \(\lim _{t\rightarrow \infty } \varepsilon _t^2=\infty \). For \(x\in \mathbb{R }, \ell \in \mathbb{R }, a\ge 0, b\ge 0\) and bounded continuous function \(F : \, C([0, \, 1], \, \mathbb{R }) \rightarrow \mathbb{R }_+\), we have

$$\begin{aligned}&\lim _{t\rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\mathfrak{m }_1\in (\varepsilon _{t}^{1}, \varepsilon ^{2}_{t})\}} \mathfrak{m }_{1}\, F(t^{1/2}\mathfrak{m }_{bs\over t}, \, s\in [0, \, 1])\, \mathrm{e }^{-at\int _{0}^{1} G^{*}_{x+ ut} (\ell - \sigma t^{1/2} \mathfrak{m }_{u}) \, \mathrm{d }u} \Big ]\\&\quad = \Big ({\pi \over 2}\Big )^{\! 1/2} \, \mathbf{E}\left[F(R_{bs}, \, s\in [0,\, 1])\, \mathrm{e }^{-a\int _{0}^{\infty } G^{*}_{x+v} (\ell - {\sigma } R_{v}) \, \mathrm{d }v} \right], \end{aligned}$$

where \(R\) is a three-dimensional Bessel process.

 

Proof of Lemma 8.1

By Imhof’s theorem (Fact 7.2), we have, for \(t\ge b\),

$$\begin{aligned}&\mathbf{E}\Big [\mathbf{1}_{\{\mathfrak{m }_1\in (\varepsilon _t^1,\varepsilon ^2_t)\}} \mathfrak{m }_1\, F(t^{1/2}\mathfrak{m }_{bs\over t}, \, s\in [0, \, 1])\, \mathrm{e }^{-at\int _0^1 G^*_{x+ ut} (\ell - \sigma t^{1/2} \mathfrak{m }_u) \, \mathrm{d }u} \Big ] \\&\quad = \Big ({\pi \over 2}\Big )^{\! 1/2} \, \mathbf{E}\Big [\mathbf{1}_{\{R_1\in (\varepsilon _t^1,\varepsilon ^2_t)\}} F(t^{1/2}R_{bs\over t}, \, s\in [0, \, 1])\, \mathrm{e }^{-at\int _0^1 G^{*}_{x+ ut} (\ell - \sigma t^{1/2} R_u) \, \mathrm{d }u} \Big ] \\&\quad = \Big ({\pi \over 2}\Big )^{\! 1/2} \, \mathbf{E}\Big [\mathbf{1}_{\{R_tt^{-1/2}\in (\varepsilon _t^1, \varepsilon ^2_t)\}} F(R_{bs},\, s\in [0,\, 1])\, \mathrm{e }^{-a\int _{0}^{t} G^{*}_{x+v} (\ell - \sigma R_{v}) \, \mathrm{d }v} \Big ], \end{aligned}$$

the second identity being a consequence of the scaling property. Let \(t\rightarrow \infty \). Since \(\mathbf{P}(R_tt^{-1/2}\notin (\varepsilon _t^1,\varepsilon ^2_t))\rightarrow 0\), Lemma 8.1 follows by dominated convergence. \(\square \)

Proof of Lemma 6.2

Recall (8.1) and (8.2). Let \(F_{1,a}(Y,Z) := F_1(a^{1/2} \sigma (Y_1 - Y_{1- {s\over a}})\mathbf{1}_{\{s\le a \}} +\sigma (a^{1/2} Y_1 - (\zeta -a)^{1/2} Z_{s- a\over \zeta - a})\mathbf{1}_{\{s\ge a \}}, \ s\in [0,\zeta ])\). Then

$$\begin{aligned}&\mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x, \sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \theta \le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, |B_\zeta | \Big ] \\&\quad = \int _0^x \mathbf{P}(\theta \in \! \, \mathrm{d }a)\, \mathbf{E}\Big [\mathbf{1}_{\{\sigma a^{1/2}Y_1 \le x\}} F_{1,a}(Y, Z) \mathrm{e }^{-2 a \int _0^1 G^*_{av}(\sigma a^{1/2} (Y_1 - Y_{1-v}))\, \mathrm{d }v} \\&\qquad \mathrm{e }^{-2(\zeta -a)\int _0^1 G^*_{a+v(\zeta -a)} (\sigma a^{1/2} Y_1 -(\zeta -a)^{1/2} \sigma Z_v)\, \mathrm{d }v} | a^{1/2} Y_1-(\zeta -a)^{1/2} Z_1 | \mathbf{1}_{\{-\sigma B_\zeta \in [\zeta ^{1/3},\zeta ^{2/3}] \}} \Big ] \\&\quad = \int _0^x \zeta ^{1/2} \mathbf{P}(\theta \in \! \, \mathrm{d }a)\, \mathbf{E}\Big \{\mathbf{1}_{\{\sigma a^{1/2}Y_1 \le x\}} \mathrm{e }^{-2 a \int _0^1 G^*_{av}(\sigma a^{1/2} (Y_1 - Y_{1-v}))\, \mathrm{d }v} \\&\qquad \mathbf{E}\Big [F_{1,a}(Y, Z) \mathrm{e }^{-2 (\zeta -a) \int _0^1 G^*_{a+v(\zeta -a)} (\sigma a^{1/2} Y_1 -(\zeta -a)^{1/2} \sigma Z_v)\, \mathrm{d }v} {| a^{1/2} Y_1-(\zeta -a)^{1/2} Z_1|\over \zeta ^{1/2}} \\&\qquad \mathbf{1}_{\{Z_1 \in [\varepsilon _\zeta ^1,\varepsilon _\zeta ^2] \}} \, \big | \, Y_s, \, s\le 1\Big ] \Big \}, \end{aligned}$$

where \(\varepsilon _\zeta ^1:= ({\zeta ^{1/3}\over \sigma } +a^{1/2} Y_1)(\zeta -a)^{-1/2} \) and \(\varepsilon _\zeta ^2 := ({\zeta ^{2/3}\over \sigma } +a^{1/2} Y_1)(\zeta -a)^{-1/2}\).

By Lemma 8.1, we get that, for each \(a \in [0,x]\), as \(\zeta \rightarrow \infty \), the conditional expectation \(\mathbf{E}[\, \ldots \, | \, Y_s, \, s\le 1]\) on the right-hand side converges to

$$\begin{aligned} \Big ({\pi \over 2} \Big )^{1/2} \mathbf{E}\Big [{\bar{F}}_{1,a}(Y,R) \mathrm{e }^{- 2 \int _0^\infty G^*_{v+a}(\sigma a^{1/2} Y_1-\sigma R_v)\, \mathrm{d }v} \big | Y_s, s\le 1\Big ] \end{aligned}$$

where

$$\begin{aligned} {\bar{F}}_{1,a}(Y,R) := F_1(\sigma a^{1/2} (Y_1 - Y_{1- {s\over a}})\mathbf{1}_{\{s\le a \}} + \sigma (a^{1/2} Y_1 - R_{s-a})\mathbf{1}_{\{s\ge a \}}, \ s\in [0,\infty )), \end{aligned}$$

with \(R\) and \(Y\) being independent. Since we only allow \(a\) to vary between 0 and \(x\) we may conclude that

$$\begin{aligned}&\lim _{\zeta \rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\max _{[0,\zeta ]} \sigma B_s\le x, \sigma B_\zeta \in (-\zeta ^{2/3},-\zeta ^{1/3}), \theta \le x\}} F_1(\sigma B_s, \, s\in [0, \, \zeta ]) \, \mathrm{e }^{- 2 \int _0^\zeta G^*_v (\sigma B_v) \, \mathrm{d }v} \, |B_\zeta | \Big ] \nonumber \\&\quad =\int _0^x {\! \, \mathrm{d }a \over (2\pi a)^{1/2}} \, \mathbf{E}\Big [\mathbf{1}_{\{\sigma a^{1/2} \mathfrak m _1\le x\}} {\bar{F}}_{1,a}(\mathfrak m ,R)\nonumber \\&\qquad \times \,\mathrm{e }^{-2 a \int _0^1 G^*_{au} (\sigma a^{1/2} (\mathfrak m _1 - \mathfrak m _{1-u})) \, \mathrm{d }u -2 \int _0^\infty G^*_{a+v} (\sigma a^{1/2} \mathfrak m _1 - \sigma R_v) \, \mathrm{d }v} \Big ] \nonumber \\&\quad =:I_{(8.3)}, \end{aligned}$$
(8.3)

where the Brownian meander \(\mathfrak m \) and the three-dimensional Bessel process \(R\) are assumed to be independent. Let \(V^{(a)}_s := a^{1/2}(\mathfrak m _1-\mathfrak m _{1-{s\over a}})\) if \(s\in [0, \, a]\) and \(V^{(a)}_s := a^{1/2} \mathfrak m _1- R_{s-a}\) if \(s\ge a\). We observe that \(a \int _0^1 G^*_{ua} (\sigma a^{1/2} (\mathfrak m _1 - \mathfrak m _{1-u})) \, \mathrm{d }u +\int _0^\infty G^*_{a+v} (\sigma a^{1/2} \mathfrak m _1 - \sigma R_v) \, \mathrm{d }v\) is, in fact, \(\int _0^\infty G^*_s(\sigma V^{(a)}_s) \, \mathrm{d }s\). So

$$\begin{aligned} I_{(8.3)}&= \int _0^x {\! \, \mathrm{d }a\over (2\pi a)^{1/2}} \, \mathbf{E}\Big [\mathbf{1}_{\{\sigma a^{1/2} \mathfrak m _1\le x\}} F_1(\sigma V^{(a)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_s(\sigma V^{(a)}_s) \, \mathrm{d }s} \Big ]\\&= \int _0^x {\! \, \mathrm{d }a\over (2\pi a)^{1/2}} \int _0^{{x\over \sigma \sqrt{a}}} \!\! \, \mathrm{d }r \; r \mathrm{e }^{-r^2/2} \\&\quad \times \, \mathbf{E}\Big [F_1(\sigma V^{(a)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_s(\sigma V^{(a)}_s) \, \mathrm{d }s} \, \Big | \, \mathfrak m _1=r \Big ], \end{aligned}$$

where, in the last identity, we used the fact that \(\mathfrak m _1\) has the Rayleigh distribution. Applying Corollary 7.3 (i) to \(q:=a\), and recalling the process \(\Gamma ^{(a^{1/2}r)}\) from (2.1), this yields

$$\begin{aligned} I_{(8.3)}&= \int _0^x {\! \, \mathrm{d }a\over (2\pi a)^{1/2}} \int _0^{{x\over \sigma \sqrt{a}}} \!\! \, \mathrm{d }r \; r \mathrm{e }^{-r^2/2} \\&\times \, \mathbf{E}\Big [F_1(\sigma \Gamma ^{(a^{1/2}r)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(a^{1/2}r)}_v) \, \mathrm{d }v} \, \Big | \, T_{a^{1/2}r}=a \Big ]. \end{aligned}$$

By a change of variables \(r:= a^{-1/2} b\) and Fubini’s theorem, the expression on the right-hand side is

$$\begin{aligned}&= \int _0^{x/\sigma } \!\! \, \mathrm{d }b \int _0^x \!\! \, \mathrm{d }a \, {b\mathrm{e }^{-b^2/(2a)}\over (2\pi a^3)^{1/2}} \mathbf{E}\Big [F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \, \Big | \, T_b=a \Big ] \\&= \int _0^{x/\sigma } \!\! \, \mathrm{d }b \, \mathbf{E}\Big [F_1(\sigma \Gamma ^{(b)}_s, \, s\ge 0) \, \mathrm{e }^{-2 \int _0^\infty G^*_v(\sigma \Gamma ^{(b)}_v) \, \mathrm{d }v} \mathbf{1}_{\{T_b \le x\}}\Big ], \end{aligned}$$

completing the proof of Lemma 6.2; for the last equality, we recognized \({b\, \mathrm{e }^{-b^2/(2a)}\over (2\pi a^3)^{1/2}}\, \mathrm{d }a\) as the density \(\mathbf{P}(T_b \in \mathrm{d }a)\) of the hitting time \(T_b\). \(\square \)

9 Proof of Lemma 6.1

We first recall the following fact concerning the F-KPP equation. As already pointed out, \(u(t,x) := G_t(x)\) is the solution of a version of the F-KPP equation with Heaviside initial data. Define \(m_t(\varepsilon ) := \inf \{x: G_t(x) = \varepsilon \}\) for \(\varepsilon \in (0, 1)\). Bramson [6] shows that, for any \(\varepsilon \in (0, 1)\), there exists a constant \(C(\varepsilon ) \in \mathbb{R }\) such that \(m_t(\varepsilon ) = \frac{3}{2} \log t + C(\varepsilon ) + o(1), t\rightarrow \infty \).

Fact 9.1

(McKean [24, pp. 326–327]) For any \(\varepsilon \in (0,1)\), let \(c_\varepsilon = w^{-1}(\varepsilon )\), i.e., \(\mathbf{P}(W\le c_\varepsilon ) = \varepsilon \). Then, for \(\varepsilon \in (0,1)\), the following convergences are monotone as \(t\rightarrow \infty \):

$$\begin{aligned} G_t(x+m_t(\varepsilon )) \nearrow \mathbf{P}(W \le x+c_\varepsilon )&= w(x+c_\varepsilon ) \qquad \text{for } x \le 0,\\ G_t(x+m_t(\varepsilon )) \searrow \mathbf{P}(W \le x+c_\varepsilon )&= w(x+c_\varepsilon ) \qquad \text{for } x \ge 0. \end{aligned}$$

Recall that \(G_t(m_t +x) \rightarrow w(x), \forall x\in \mathbb{R }\), and that \(m_t := {3\over 2}\log t + C_B\). Since \(\mathbf{P}(W\le y) \sim C|y|\mathrm{e }^y, y\rightarrow -\infty \) (see (1.6)), a consequence of Fact 9.1 (in the case \(x\le 0\)) is that for some constant \(c>0\), and any \(v>0\) and \(r\in \mathbb{R }\),

$$\begin{aligned} G_v(m_v +r)\le c\, (|r|+1) \mathrm{e }^{r}. \end{aligned}$$
(9.1)

Let us turn to the proof of Lemma 6.1. Let \(B\) be Brownian motion (under \(\mathbf{P}= \mathbf{P}_0\)). Recall that \(\mathbf{E}_{0,{y\over \sigma }}^{(t)}\) is expectation with respect to \(\mathbf{P}_{0,{y\over \sigma }}^{(t)} := \mathbf{P}(\, \bullet \, | \, B_t = {y\over \sigma })\). We further subdivide the proof of Lemma 6.1 into two lemmas.

Lemma 9.2

Let \(\kappa :\mathbb{R }_+\rightarrow \mathbb{R }\) be a bounded Borel function with compact support. Take \(x>0\) and recall the definition of \((a_s,s\in [0,t])\) in (6.1). Then, for any \(b>{a_0\over \sigma }\),

$$\begin{aligned} \lim _{t\rightarrow \infty } t^{3/2} \, \mathbf{E}_b\left[\mathbf{1}_{\{\sigma B_s\ge a_s,s\in [0,t] \}}\, \kappa (\sigma B_{t}-a_t) \right] = {\sigma b - a_0\over 2\sqrt{\pi }} \int _{\mathbb{R }_+} r\kappa (r) \, \mathrm{d }r. \end{aligned}$$

Lemma 9.3

Let \(F_W\) be the distribution function of \(W\), where \(W\) is the random variable in (1.2). For any \(z\in \mathbb{R }\),

$$\begin{aligned}&\lim _{M\rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\sigma B_{s}\ge -x,\,s\in [0,M]\}} \mathrm{e }^{- 2 \int _0^M F_W(z - \sigma B_v) \, \mathrm{d }v} (x+\sigma B_M)\Big ] \\&\quad = x\, \mathbf{E}_{{x\over \sigma }} \Big [\mathrm{e }^{- 2 \int _0^\infty F_W (z + x-\sigma R_v) \, \mathrm{d }v} \Big ] \\&\quad = \varphi _x(z), \end{aligned}$$

with the notation of (6.6), and where \((R_v)_{v\ge 0}\) is a three-dimensional Bessel process.

Before proving Lemmas 9.2 and 9.3, let us see how we use them to prove Lemma 6.1.

Proof of Lemma 6.1

Recall that \(y=z+m_t\) and \((a_s,s\in [0,t])\) is defined in (6.1). Take \(\zeta >0\) and \(w<x+z\) where \(x=-a_0\). Let

$$\begin{aligned} h_v(r):= G_{\zeta +v}(w + r), \qquad v\ge 0, \; r\in \mathbb{R }. \end{aligned}$$

So if we write

$$\begin{aligned} I_{(9.2)}&:= t \, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma (B_t - B_{t-s}) \ge a_s,\; s\in [0,t] \}} \, \mathrm{e }^{- 2 \int _0^t G_{\zeta +v}(w+\sigma B_v) \, \mathrm{d }v} \Big ] \nonumber \\&= t \, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma (B_t - B_{t-s}) \ge a_s,\; s\in [0,t] \}} \, \mathrm{e }^{- 2 \int _0^t h_v(\sigma B_v) \, \mathrm{d }v} \Big ], \end{aligned}$$
(9.2)

then we need to check that \(\lim _{t\rightarrow \infty } I_{(9.2)} = \varphi _x(z) f(w, \, \zeta )\) for some \(f(w, \, \zeta )\) such that \(f(w, \, \zeta ) \sim |w|\) as \(w\rightarrow -\infty \) and uniformly in \(\zeta >0\).

Since \((B_t-B_{t-s}, \, s\in [0,t])\) and \((B_s, \, s\in [0,t])\) have the same distribution under \(\mathbf{P}_{0,{y-w\over \sigma }}^{(t)}\), we have

$$\begin{aligned} I_{(9.2)} = t\, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\,s\in [0,t]\}} \, \mathrm{e }^{- 2 \int _0^t h_{t-v}(y-w- \sigma B_v) \, \mathrm{d }v} \Big ]. \end{aligned}$$

Recall from (9.1) that

$$\begin{aligned} G_v(m_v +r)\le c\, (|r|+1) \mathrm{e }^{r}, \end{aligned}$$

for some constant \(c>0\), and any \(v>0\) and \(r\in \mathbb{R }\). Therefore, there exists a constant \(c_{x,z}\), depending on \((x, \, z)\), such that \(h_v(m_v+r) \le c_{x,z} (|r|+1)\mathrm{e }^r\). Thus, on the event \(\{\sigma B_s> \min (s^{1/3},\, m_t +(t-s)^{1/3}), \, \forall s\in [M,t-M] \}\), we have \(\int _{M}^{t-M}h_{t-v}(y - w - \sigma B_v) \, \mathrm{d }v \le \varepsilon (M)\) for any \(t>1\), where \(\varepsilon (M)\) is deterministic and satisfies \(\lim _{M\rightarrow \infty } \varepsilon (M)=0\).

On the other hand recall from Lemma 5.2 that

$$\begin{aligned}&\mathbf{P}_{0,{y-w\over \sigma }}^{(t)}\Big (\sigma B_{s}\ge a_s,\! \, s\in [0,t], \exists s\in [M,t\!-\!M]: \, \sigma B_s\!<\! \min (s^{1/3}, m_t +(t-s)^{1/3})\Big )\\&\quad = {1\over t} o_M(1), \end{aligned}$$

in the sense that \(\limsup _{t\rightarrow \infty } t\mathbf{P}_{0,{y-w\over \sigma }}^{(t)}(\ldots )=o_M(1)\), where, as before, \(o_M(1)\) designates an expression which converges to 0 as \(M \rightarrow \infty \). Therefore, we see that

$$\begin{aligned} \lim _{t \rightarrow \infty } I_{(9.2)}&= \lim _{M \rightarrow \infty } \lim _{t\rightarrow \infty } t \, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\; s\in [0,t]\}} \, \mathbf{1}_{\{\sigma B_{t-M}-a_t \in [M^{1/3},\, M^{2/3}]\}} \\&\times \mathrm{e }^{- 2 \int _{[0,M]\cup [t-M,t]} h_{t-v}(y-w - \sigma B_v) \, \mathrm{d }v} \Big ]. \end{aligned}$$

Define

$$\begin{aligned} \kappa _M(r)&:= \mathbf{1}_{\{r\in [M^{1/3}, \, M^{2/3}]\}} \, \mathrm{e }^{-{(x+z-w-r)^2\over 2\sigma ^2M}} \nonumber \\&\quad \times \, \mathbf{E}_{{r\over \sigma },{x+z-w\over \sigma }}^{(M)} \Big [\mathbf{1}_{\{\min _{[0,M]} B>0 \}}\, \mathrm{e }^{- 2 \int _0^M h_{M-v}(x+z-w - \sigma B_v) \, \mathrm{d }v} \Big ]. \end{aligned}$$
(9.3)

By the Markov property (applied at time \(t-M\), and then at time \(M\) for the second identity), we get, for \(t\rightarrow \infty \),

$$\begin{aligned}&t\, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\, s\in [0,\, t]\}} \, \mathbf{1}_{\{\sigma B_{t-M}-a_t \in [M^{1/3},\, M^{2/3}]\}} \, \mathrm{e }^{- 2 \int _{[0,M]\cup [t-M,t]} h_{t-v}(y-w - \sigma B_v) \, \mathrm{d }v} \Big ] \nonumber \\&\quad \sim {t^{3/2} \over M^{1/2}} \, \mathbf{E}_0 \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\, s\in [0,\, t-M]\}} \, \mathrm{e }^{- 2 \int _0^M h_{t-v}(y-w - \sigma B_v) \, \mathrm{d }v} \kappa _M(\sigma B_{t-M}-a_t) \Big ] \nonumber \\&\quad ={t^{3/2} \over M^{1/2}} \, \mathbf{E}_0 \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\, s\in [0,\,M]\}} \, \mathrm{e }^{- 2 \int _0^M h_{t-v}(y-w - \sigma B_v) \, \mathrm{d }v} \nonumber \\&\qquad \times \mathbf{E}_{B_M}\Big ( \mathbf{1}_{\{\sigma B_{s}\ge a_{M+s},\, s\in [0, \, t-2M]\}} \kappa _M(\sigma B_{t-2M}-a_t) \Big ) \Big ]. \end{aligned}$$
(9.4)

By Lemma 9.2, almost surely,

$$\begin{aligned}&\lim _{t\rightarrow \infty }t^{3/2} \, \mathbf{E}_{B_M} \Big (\mathbf{1}_{\{\sigma B_{s}\ge a_{M+s},\, s\in [0,t-2M]\}} \kappa _M(\sigma B_{t-2M}-a_t) \Big )\\&\quad = {x+ \sigma B_M \over 2\sqrt{\pi }} \int _{\mathbb{R }_+} r\kappa _M(r) \, \mathrm{d }r. \end{aligned}$$

On the other hand, \(h_s(m_s + r)=G_{\zeta +s}(m_s+w+r) \rightarrow F_W(w+r)\) as \(s\rightarrow \infty \) [see (1.2)]. Hence, almost surely,

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathrm{e }^{- 2 \int _0^M h_{t-v}(y-w - \sigma B_v) \, \mathrm{d }v} = \mathrm{e }^{- 2 \int _0^M F_W(z - \sigma B_v) \, \mathrm{d }v}. \end{aligned}$$

In view of the Brownian motion sample path probability bound given in (9.6) below, we are entitled to use dominated convergence to take the limit \(t\rightarrow \infty \) in (9.4):

$$\begin{aligned}&\lim _{t\rightarrow \infty } t\, \mathbf{E}_{0,{y-w\over \sigma }}^{(t)} \Big [\mathbf{1}_{\{\sigma B_{s}\ge a_s,\,s\in [0,t]\}} \, \mathbf{1}_{\{\sigma B_{t-M}-a_t \in [M^{1/3}, \, M^{2/3}]\}} \, \mathrm{e }^{- 2 \int _{[0,M]\cup [t-M,t]} h_{t-v}(y-w -\sigma B_v) \, \mathrm{d }v} \Big ]\\&\quad = \mathbf{E}\Big [\mathbf{1}_{\{\sigma B_{s}\ge -x,\, s\in [0,M]\}} \, \mathrm{e }^{- 2 \int _0^M F_W(z- \sigma B_v) \, \mathrm{d }v} (x+\sigma B_M) \Big ] {1\over 2(M\pi )^{1/2}} \int _{\mathbb{R }_+} r\kappa _M(r) \, \mathrm{d }r. \end{aligned}$$

By Lemma 9.3,

$$\begin{aligned} \lim _{M\rightarrow \infty } \mathbf{E}\Big [\mathbf{1}_{\{\sigma B_{s}\ge -x,\,s\in [0,M]\}} \, \mathrm{e }^{- 2 \int _0^M F_W(z - \sigma B_v) \, \mathrm{d }v}(x+\sigma B_M)\Big ] = \varphi _x(z), \end{aligned}$$

with the notation of (6.6). So it remains to check that

$$\begin{aligned} \lim _{M\rightarrow \infty } {1\over 2(M\pi )^{1/2}} \int _{\mathbb{R }_+} r\kappa _M(r) \, \mathrm{d }r = f(w, \, \zeta ), \end{aligned}$$
(9.5)

for some \(f(w, \, \zeta )\) such that \(f(w, \, \zeta ) \sim |w|\) as \(w\rightarrow -\infty \) and uniformly in \(\zeta >0\).

Recalling the definition of \(\kappa _M\) in (9.3), we have

$$\begin{aligned} \int _{\mathbb{R }_+} r\kappa _M(r) \, \mathrm{d }r&= \int _{M^{1/3}}^{M^{2/3}} r\, \mathrm{e }^{-{(z-w+x-r)^2\over 2\sigma ^2 M}} \mathbf{E}_{{r\over \sigma },{x+z-w\over \sigma }}^{(M)} \Big [\mathbf{1}_{\{\min _{[0,M]} B>0 \}} \mathrm{e }^{- 2 \int _0^M h_{M-v}(x+z-w - \sigma B_v) \, \mathrm{d }v}\Big ] \, \mathrm{d }r \\&= \int _{M^{1/3}}^{M^{2/3}} r\, \mathrm{e }^{-{(z-w+x-r)^2\over 2\sigma ^2 M}} \mathbf{E}_{{x+z-w\over \sigma },{r\over \sigma }}^{(M)} \Big [\mathbf{1}_{\{\min _{[0,M]} B>0 \}} \mathrm{e }^{- 2 \int _0^M h_v (x+z-w - \sigma B_v) \, \mathrm{d }v}\Big ] \, \mathrm{d }r \\&= \sigma (2\pi M)^{1/2} \, \mathbf{E}_{{x+z-w\over \sigma }} \Big [\sigma B_M \, \mathbf{1}_{\{\sigma B_M \in [M^{1/3}, \, M^{2/3}]\}} \, \mathbf{1}_{\{\min _{[0,M]} B>0 \}} \mathrm{e }^{- 2 \int _0^M h_v (x+z-w - \sigma B_v) \, \mathrm{d }v}\Big ], \end{aligned}$$

which, by the \(h\)-transform of the Bessel process, is

$$\begin{aligned} = \sigma (2\pi M)^{1/2}\, (x+z-w)\, \mathbf{E}_{{x+z-w\over \sigma }} \Big [\mathrm{e }^{- 2 \int _0^M h_v (x+z-w-\sigma R_v) \, \mathrm{d }v}\mathbf{1}_{\{\sigma R_M \in [M^{1/3}, \, M^{2/3}]\}} \Big ]. \end{aligned}$$

Dominated convergence implies that

$$\begin{aligned} \lim _{M\rightarrow \infty } {1\over M^{1/2}} \int _{\mathbb{R }_+} r\kappa _M(r) \, \mathrm{d }r&\!=\!&\sigma (2\pi )^{1/2}\, (x+z-w)\, \mathbf{E}_{{x\!+\!z\!-\!w\over \sigma }} \Big [\mathrm{e }^{\!-\! 2 \int _0^\infty h_v (x\!+\!z\!-\!w\!-\!\sigma R_v) \, \mathrm{d }v} \Big ] \\&\!=\!&\sigma (2\pi )^{1/2}\, (x\!+\!z\!-w) \, \mathbf{E}_{{x+z-w\over \sigma }} \Big [\mathrm{e }^{- 2 \int _0^\infty G_{\zeta +v} (x+z-\sigma R_v) \, \mathrm{d }v} \Big ]. \end{aligned}$$

This yields (9.5) with

$$\begin{aligned} f(w,\zeta ) := (x+z-w)\, \mathbf{E}_{{x+z-w\over \sigma }} \Big [\mathrm{e }^{- 2 \int _0^\infty G_{\zeta +v}(x+z- \sigma R_v) \, \mathrm{d }v} \Big ], \end{aligned}$$

and thus the first part of Lemma 6.1. It remains to check that \(f(w,\zeta )\sim |w|\) as \(w\rightarrow -\infty \), uniformly in \(\zeta >0\). We only have to show that, uniformly in \(\zeta >0\),

$$\begin{aligned} \lim _{w\rightarrow -\infty } \mathbf{E}_{{x+z-w\over \sigma }} \Big [\mathrm{e }^{- 2 \int _0^\infty G_{\zeta +v}(x+z-\sigma R_v) \, \mathrm{d }v}\Big ] = 1. \end{aligned}$$

Using again (9.1), \(G_v(m_v +r)\le c\, (|r|+1) \mathrm{e }^{r}\) for any \(v\ge 0\) and \(r\in \mathbb{R }\), we have that

$$\begin{aligned} \int _0^\infty G_{\zeta +v}(x+z-\sigma R_v) \, \mathrm{d }v \le c\, \mathrm{e }^{x+z}\int _0^\infty \mathrm{e }^{-R_v} \, \mathrm{d }v, \end{aligned}$$

and we conclude by noting that \(\lim _{r\rightarrow \infty } \mathbf{E}_{r}[\mathrm{e }^{- c \int _0^\infty \mathrm{e }^{-R_v} \, \mathrm{d }v}]=1\) for any fixed \(c>0\). \(\square \)

The rest of the section is devoted to the proof of Lemmas 9.2 and 9.3.

Proof of Lemma 9.2

For any \(a, \eta >0\), we have by (5.2)

$$\begin{aligned} \mathbf{P}_a\Big (\min _{[0,t]}\sigma B_s>0,\; \sigma B_t \in \!\, \mathrm{d }\eta \Big ) = \Big ({2\over \pi \sigma ^2 t}\Big )^{\! 1/2} \, \mathrm{e }^{-{\sigma ^2 a^2+\eta ^2\over 2\sigma ^2 t}} \sinh \Big ({\eta a\over \sigma t}\Big ) \, \mathrm{d }\eta . \end{aligned}$$

In particular, if \({a\eta \over t}\rightarrow 0\) as \(t\rightarrow \infty \), we have (recalling that \(\sigma ^2 =2\))

$$\begin{aligned} \mathbf{P}_a\Big (\min _{[0,t]}\sigma B_s>0,\; \sigma B_t \in \!\, \mathrm{d }\eta \Big ) \sim {1\over \sqrt{2\pi }} \, {a\eta \over t^{3/2}} \, \mathrm{e }^{-{\sigma ^2 a^2+\eta ^2\over 2\sigma ^2 t}} \, \mathrm{d }\eta . \end{aligned}$$
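This exact formula is easy to test by simulation. The Python sketch below (ours; the parameters are arbitrary test values, and the Euler grid slightly overestimates survival since the path may cross \(0\) between grid points) compares a histogram estimate of the killed endpoint density with the \(\sinh \) expression, for \(\sigma =\sqrt{2}\) as in the rest of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, a, t = np.sqrt(2.0), 1.0, 4.0       # paper's sigma; a and t are test values
n_steps, n_paths = 500, 20_000
dt = t / n_steps
inc = sigma * rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
path = sigma * a + np.cumsum(inc, axis=1)  # sigma * B with B_0 = a
end = path[path.min(axis=1) > 0.0, -1]     # endpoints of paths killed at 0

def density(eta):
    return (np.sqrt(2.0 / (np.pi * sigma**2 * t))
            * np.exp(-(sigma**2 * a**2 + eta**2) / (2.0 * sigma**2 * t))
            * np.sinh(eta * a / (sigma * t)))

for eta in (1.0, 2.0, 4.0):
    emp = (np.abs(end - eta) < 0.25).sum() / (n_paths * 0.5)  # bin of width 0.5
    print(eta, emp, density(eta))
```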

Fix \(\eta >0\). By the Markov property at time \({t\over 2}\), and using the fact that \({B_{{t\over 2}}}\) is of order \(t^{1/2}\), we have, for \(t\rightarrow \infty \),

$$\begin{aligned}&\mathbf{P}_b\Big (\{\sigma B_s\ge a_s, \; s\in [0,t] \} \cap \{\sigma B_t\in a_t + \!\, \mathrm{d }\eta \} \Big ) \\&\quad = \mathbf{E}_b\Big [\mathbf{1}_{\{\sigma B_s\ge -x, \; s\in [0,\, {t\over 2}] \}} \, \mathbf{P}_{B_{{t\over 2}}- {a_t\over \sigma }} \Big (\min _{[0,\, {t\over 2}]}B_s>0,\, \sigma B_{{t\over 2}} \in \!\, \mathrm{d }\eta \Big ) \Big ]\\&\quad \sim {2\over \sqrt{\pi }}{\eta \over t^{3/2}} \mathbf{E}_b\Big [\mathbf{1}_{\{\sigma B_s\ge -x,\, s\in [0,\, {t\over 2}] \}} \, B_{{t\over 2}} \, \mathrm{e }^{-{B_{t/2}^2\over t}}\Big ] \, \mathrm{d }\eta . \end{aligned}$$

Going from the killed Brownian motion to the three-dimensional Bessel process, we see that, as \(t\rightarrow \infty \),

$$\begin{aligned} \mathbf{E}_b\Big [\mathbf{1}_{\{\sigma B_s\ge -x,\, s\in [0,\, {t\over 2}] \}} \, B_{{t\over 2}} \, \mathrm{e }^{-{B_{t/2}^2\over t}}\Big ] \sim (b+ {x\over \sigma }) \, \mathbf{E}_0\Big [\mathrm{e }^{-{R_{t/2}^2\over t}}\Big ] = 2^{-3/2} \left(b+{x\over \sigma }\right). \end{aligned}$$
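For the constant \(2^{-3/2}\), note (a one-line check of ours) that \(R_{t/2}^2\) has the law of \({t\over 2}\Vert N\Vert ^2\) with \(N\) a standard Gaussian vector in \(\mathbb{R }^3\), whence

$$\begin{aligned} \mathbf{E}_0\Big [\mathrm{e }^{-{R_{t/2}^2\over t}}\Big ] = \mathbf{E}\big [\mathrm{e }^{-{1\over 2}\Vert N\Vert ^2}\big ] = \Big (\int _\mathbb{R }{\mathrm{e }^{-u^2}\over \sqrt{2\pi }}\, \mathrm{d }u\Big )^{\! 3} = 2^{-3/2}. \end{aligned}$$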

Hence,

$$\begin{aligned} \mathbf{P}_b\Big (\{\sigma B_s\ge a_s, \; s\in [0,t] \} \cap \{\sigma B_t\in a_t + \!\, \mathrm{d }\eta \} \Big ) \sim {\sigma b + x\over 2\sqrt{\pi }} \, {\eta \over t^{3/2}} \, \mathrm{d }\eta . \end{aligned}$$

To complete the proof, we have to use dominated convergence. It is enough to show that (recalling that the function \(\kappa \) is bounded with compact support) for any \(K>0\),

$$\begin{aligned} \sup _{t\ge 1} t^{3/2} \, \mathbf{P}_b\Big (\{\sigma B_s\ge a_s,s\in [0,t]\} \cap \{\sigma B_t-a_t \le K\} \Big ) <\infty . \end{aligned}$$
(9.6)

This can easily be deduced from (5.2). \(\square \)

Proof of Lemma 9.3

We have

$$\begin{aligned}&\mathbf{E}\Big [\mathbf{1}_{\{\sigma B_{s}\ge -x,\,s\in [0,M]\}} \mathrm{e }^{- 2 \int _0^M F_W(z - \sigma B_v) \, \mathrm{d }v} (x+\sigma B_M)\Big ] \\&\quad =\mathbf{E}_{{x\over \sigma }}\Big [\mathbf{1}_{\{\sigma B_{s}\ge 0,\,s\in [0,M]\}} \mathrm{e }^{- 2 \int _0^M F_W(z+x-\sigma B_v) \, \mathrm{d }v} \sigma B_M\Big ] \\&\quad =x\mathbf{E}_{{x\over \sigma }}\Big [\mathrm{e }^{- 2 \int _0^M F_W(z+x-\sigma R_v) \, \mathrm{d }v} \Big ], \end{aligned}$$

giving the first identity by dominated convergence. To prove the second identity, we recall the following well known path decomposition for the three-dimensional Bessel process \(R\): under \(\mathbf{P}_{{x\over \sigma }}, \inf _{s\ge 0} R_s\) is uniformly distributed in \((0, \, {x\over \sigma })\). Furthermore, if we write \(\nu := \inf \{s\ge 0: \, R_s = \inf _{u\ge 0} R_u\}\) for the location of the minimum, then conditionally on \(\inf _{s\ge 0} R_s = r \in (0, \, {x\over \sigma })\), the pre-minimum path \(({x\over \sigma } - R_s, \, s\in [0, \, \nu ])\) and the post-minimum path \((R_{s+\nu } - r, \, s\ge 0)\) are independent, the first being Brownian motion starting at \(0\) and killed when hitting \({x\over \sigma } -r\) for the first time, and the second a three-dimensional Bessel process starting at \(0\). Accordingly,

$$\begin{aligned}&x\mathbf{E}_{{x\over \sigma }}\Big [\mathrm{e }^{- 2 \int _0^M F_W(z+x-\sigma R_v) \, \mathrm{d }v} \Big ] \\&\quad = x \int _0^{{x\over \sigma }} {\sigma \over x} \, \mathrm{d }r\, \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^{T_{{x\over \sigma }-r}} F_W(z+\sigma B_s) \, \mathrm{d }s - 2\int _0^\infty F_W(z+x-\sigma r - \sigma R_s) \, \mathrm{d }s} \Big ], \end{aligned}$$

where, as before, the three-dimensional Bessel process \(R\) and the Brownian motion \(B\) are assumed to be independent, and \(T_b := \inf \{s\ge 0: \, B_s=b\}\) for \(b\in \mathbb{R }\). By a change of variables \(b:= {x\over \sigma }-r\), we see that the expression on the right-hand side is

$$\begin{aligned} = \sigma \int _0^{{x\over \sigma }} \mathbf{E}\Big [\mathrm{e }^{-2 \int _0^{T_b} F_W(z+\sigma B_s) \, \mathrm{d }s - 2\int _0^\infty F_W(z+ \sigma b - \sigma R_s) \, \mathrm{d }s} \Big ] \, \mathrm{d }b, \end{aligned}$$

which is \(\varphi _x(z)\) in (6.6). \(\square \)
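The path decomposition invoked above can itself be probed numerically. The Python sketch below (ours; the horizon, grid and sample sizes are arbitrary, and truncating at a finite horizon biases the infimum slightly upward) realizes \(R\) as the modulus of a three-dimensional Brownian motion and checks that \(\inf _{s\ge 0} R_s\) is approximately uniform on \((0, r_0)\).

```python
import numpy as np

rng = np.random.default_rng(4)
r0, t_max, n_steps, n_paths = 1.0, 400.0, 40_000, 1_000
dt = t_max / n_steps
mins = np.empty(n_paths)
for i in range(n_paths):
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_steps, 3)), axis=0)
    w[:, 0] += r0                    # 3d Brownian motion started at (r0, 0, 0)
    mins[i] = np.sqrt((w**2).sum(axis=1)).min()
print(np.quantile(mins, [0.25, 0.5, 0.75]))  # close to r0 * (0.25, 0.5, 0.75)
```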

10 Proof of Theorem 2.1

In this section, we prove Theorem 2.1. The key result is Theorem 2.3. The additional ingredients needed are Proposition 10.1, which explains the appearance of the point measure \({\fancyscript{P}}\), and Proposition 10.2, which shows that particles sampled near \(X_1(t)\) either have a very recent common ancestor or branched at the very beginning of the process. This last result was first proved by Arguin et al. in [2].

We employ a very classical approach: we stop the particles when they reach an increasing family of affine stopping lines and then consider their descendants independently. The same kind of argument, with the same stopping lines, appears in [22] and in [1].

Fix \(k \ge 1\) and consider \({\fancyscript{H}}_k\), the set of all particles which are the first in their line of descent to hit the spatial position \(k\) (for the formalism of particle labelling, see Neveu [28]). Under the conditions we work with, we know that almost surely \({\fancyscript{H}}_k\) is a finite set. The set \({\fancyscript{H}}_k\) is again a dissecting stopping line at which we can apply the strong Markov property (see e.g. [11]). We see that conditionally on \(\fancyscript{F}_{\!\!\fancyscript{H}_k}\)—the sigma-algebra generated by the branching Brownian motion when the particles are stopped upon hitting the position \(k\)—the subtrees rooted at the points of \({\fancyscript{H}}_k\) are independent copies of the branching Brownian motion started at position \(k\), each at the random time at which the particle considered has hit \(k\). Define \(H_k := \# \fancyscript{H}_k\) and

$$\begin{aligned} Z_k:= k \mathrm{e }^{- k} H_k. \end{aligned}$$

Neveu ([28], equation (5.4)) shows that the limit \(Z\) of the derivative martingale in (1.4) can also be obtained as a limit of \(Z_k\) (it is the same martingale on a different stopping line)

$$\begin{aligned} Z = \lim _{k \rightarrow \infty } Z_k = \lim _{k\rightarrow \infty } k \mathrm{e }^{-k} H_k \end{aligned}$$
(10.1)

almost surely. Let us further define \({\fancyscript{H}}_{k,t}\) as the set of all particles which are the first in their line of descent to hit the spatial position \(k\), and which do so before time \(t\).
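To get a feel for (10.1) (the simulation below is ours and not part of the paper), one can run a crude Euler scheme for the branching Brownian motion in the normalization \(\lambda =1, \varrho =2, \sigma =\sqrt{2}\), freeze each particle when it first hits level \(k\), and inspect \(k \mathrm{e }^{-k} H_k\); the time step, horizon and values of \(k\) are arbitrary, and the scheme overshoots the level slightly.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_H_k(k, dt=1e-3, t_max=60.0):
    """Crude Euler scheme: drift 2, variance 2, binary branching at rate 1;
    returns H_k, the number of particles frozen on first hitting level k."""
    alive, frozen, t = [0.0], 0, 0.0
    while alive and t < t_max:
        t += dt
        nxt = []
        for x in alive:
            x += 2.0 * dt + np.sqrt(2.0 * dt) * rng.normal()
            if x >= k:
                frozen += 1            # first hit of level k: stop this line
            else:
                nxt.append(x)
                if rng.random() < dt:  # branch with probability ~ lambda * dt
                    nxt.append(x)
        alive = nxt
    return frozen

for k in (3.0, 4.0, 5.0):
    z = [k * np.exp(-k) * sample_H_k(k) for _ in range(50)]
    print(k, "median of k e^{-k} H_k:", np.median(z))
```

The medians should be of comparable order across \(k\), reflecting the almost sure convergence \(k\mathrm{e }^{-k}H_k \rightarrow Z\); sample means are less informative here since \(Z\) is heavy-tailed.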

For each \(u\in \fancyscript{H}_{k,t}\), let us write \(X_1^u (t)\) for the minimal position at time \(t\) of the particles which are descendants of \(u\). If \(u \in {\fancyscript{H}}_k \backslash \fancyscript{H}_{k,t}\) we define \(X_1^u(t) =0\). This allows us to define the point measure

$$\begin{aligned} \fancyscript{P}^*_{k,t} := \sum _{u\in \fancyscript{H}_k} \delta _{X_1^u (t) - \, m_t + \log (CZ_k)}. \end{aligned}$$

We further define

$$\begin{aligned} {\fancyscript{P}}^*_{k,\infty } := \sum _{u \in {\fancyscript{H}}_k} \delta _{k+ W(u) + \log (CZ_k)} \end{aligned}$$

where, conditionally on \({\fancyscript{F}}_{{\fancyscript{H}}_k}\), the \(W(u)\) are independent copies of the random variable \(W\) in (1.2).

Proposition 10.1

The following convergences hold in distribution

$$\begin{aligned} \lim _{t \rightarrow \infty } {\fancyscript{P}}^*_{k,t} = {\fancyscript{P}}^*_{k,\infty } \end{aligned}$$

and

$$\begin{aligned} \lim _{k \rightarrow \infty } (\fancyscript{P}^*_{k,\infty }, Z_k) = ({\fancyscript{P}}, Z) \end{aligned}$$

where \({\fancyscript{P}}\) is as in Theorem 2.1, \(Z\) is as in (10.1), and \({\fancyscript{P}}\) and \(Z\) are independent.

Proof

Fix \(k \ge 1\). Recall that \({\fancyscript{H}}_k\) is the set of particles absorbed at level \(k\), and \(H_k=\#{\fancyscript{H}}_k\). Observe that for each \(u\in \fancyscript{H}_k, X_1^u (t)\) has the same distribution as \(k+X_1(t- \xi _{k,u})\), where \(\xi _{k,u}\) is the random time at which \(u\) reaches \(k\). By (1.2) and the fact that \(m_{t+c}-m_t \rightarrow 0\) for any \(c\), we have, for all \(k\ge 1\) and all \(u\in \fancyscript{H}_k\),

$$\begin{aligned} X_1^u (t) - m_t \;\; {\text{ law} \over \rightarrow } \;\; k+ W, \qquad t \rightarrow \infty . \end{aligned}$$

Hence, the finite point measure \(\fancyscript{P}_{k,t} := \sum _{u \in {\fancyscript{H}}_k} \delta _{X_1^u (t) - m_t}\) converges in distribution as \(t\rightarrow \infty \), to \(\fancyscript{P}_{k,\infty }:= \sum _{u \in {\fancyscript{H}}_k} \delta _{k+W(u)}\), where conditionally on \(\fancyscript{H}_k\), the \(W(u)\) are independent copies of \(W\). This proves the first part of Proposition 10.1.

The proof of the second part relies on some classical extreme value theory. We refer the reader to [29] for a thorough treatment of this subject. Let us state the result we will use. Suppose we are given a sequence \((X_i, i \in \mathbb{N })\) of i.i.d. random variables such that

$$\begin{aligned} \mathbf{P}(X_i \ge x) \sim Cx \mathrm{e }^{- x}, \qquad x \rightarrow \infty . \end{aligned}$$

Call \(M_n = \max _{i=1,\ldots , n} X_i\) the maximum of the \(X_i\). Then it is not hard to see that, if we let \(b_n = \log n + \log \log n\), we have as \(n\rightarrow \infty \)

$$\begin{aligned} \mathbf{P}(M_n -b_n \le y)&= (\mathbf{P}(X_i \le y+b_n))^n \\&= \big (1-(1+o(1))\,C (y+b_n)\mathrm{e }^{-(y+b_n)}\big )^n \\&\sim \exp \Big (-nC (y+ b_n) \frac{1}{n\log n} \mathrm{e }^{- y} \Big ) \\&\sim \exp (-C\mathrm{e }^{- y}) \end{aligned}$$

and therefore

$$\begin{aligned} \mathbf{P}\left(M_n -b_n - \log C \le y \right) \sim \exp (-\mathrm{e }^{-y}). \end{aligned}$$

By applying Corollary 4.19 in [29] we immediately see that the point measure

$$\begin{aligned} \zeta _n := \sum _{i=1}^n \delta _{X_i - b_n - \log C} \end{aligned}$$

converges in distribution to a Poisson point measure on \(\mathbb{R }\) with intensity \(\mathrm{e }^{-x}\, \mathrm{d }x\).
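Before applying this, here is a quick numerical sanity check (ours, with names and parameters of our own choosing): a Gamma(2,1) variable has tail \(\mathbf{P}(X\ge x)=(1+x)\mathrm{e}^{-x}\sim x\mathrm{e}^{-x}\), i.e. the tail above with \(C=1\), so \(M_n-b_n\) should be approximately standard Gumbel.

```python
import numpy as np

# Monte Carlo check (ours): Gamma(2,1) has tail P(X >= x) = (1+x)e^{-x},
# i.e. C = 1, so M_n - b_n with b_n = log n + log log n should satisfy
# P(M_n - b_n <= y) ~ exp(-e^{-y}).

rng = np.random.default_rng(1)
n, trials = 100_000, 1_000
b_n = np.log(n) + np.log(np.log(n))

records = np.array([rng.gamma(2.0, 1.0, size=n).max()
                    for _ in range(trials)]) - b_n

for y in (-1.0, 0.0, 1.0, 2.0):
    print(f"y={y:+.0f}  empirical={np.mean(records <= y):.3f}"
          f"  Gumbel={np.exp(-np.exp(-y)):.3f}")
```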

This extreme value result applies immediately to the random variables \(-W(u)\) (recalling from (1.6) that \(\mathbf{P}(-W \ge x) \sim C x \mathrm{e }^{-x}\) as \(x\rightarrow \infty \)), and thus the point measure

$$\begin{aligned} \sum _{u \in {\fancyscript{H}}_k} \delta _{W(u) + (\log H_k +\log \log H_k + \log C)} \end{aligned}$$

converges (as \(k \rightarrow \infty \)) in distribution towards a Poisson point measure on \(\mathbb{R }\) with intensity \(\mathrm{e }^{x}\, \mathrm{d }x\) (it is \(\mathrm{e }^x\) instead of \(\mathrm{e }^{-x}\) because we are looking at the leftmost particles) independently of \(Z\) (this independence comes from (10.1)). By definition \(H_k = k^{-1} \mathrm{e }^{k} Z_k\), thus

$$\begin{aligned} \log H_k&= k +\log Z_k - \log k \\ \log \log H_k&= \log k +\log (1+o_k(1)) \end{aligned}$$

where the term \(o_k(1)\) tends to 0 almost surely when \(k\rightarrow \infty \). Hence,

$$\begin{aligned} \log H_k + \log \log H_k = \log Z_k + k + o_k(1). \end{aligned}$$

We conclude that for \(u \in {\fancyscript{H}}_k\)

$$\begin{aligned} k+ W(u) + \log (C Z_k)&= W(u) + (\log H_k + \log \log H_k +\log C) + o_k(1). \end{aligned}$$

Hence we conclude that

$$\begin{aligned} {\fancyscript{P}}^*_{k,\infty }= \sum _{u \in {\fancyscript{H}}_k} \delta _{k+ W(u) + \log (C Z_k)} \end{aligned}$$

also converges (as \(k \rightarrow \infty \)) towards a Poisson point measure on \(\mathbb{R }\) with intensity \(\mathrm{e }^{x}\, \mathrm{d }x\), independently of \(Z = \lim _k Z_k\). This concludes the proof of Proposition 10.1.\(\square \)

Recall that \(J_\eta (t) := \{i \le N(t) : |X_i(t)-m_t |\le \eta \}\) is the set of indices which correspond to particles near \(m_t\) at time \(t\) and that \(\tau _{i,j}(t)\) is the time at which the particles \(X_i(t)\) and \(X_j(t)\) have branched from one another.

Proposition 10.2

(Arguin, Bovier and Kistler [2]) Fix \(\eta >0\) and any function \(\zeta : [0,\infty ) \rightarrow [0,\infty ) \) which increases to infinity. Define the event

$$\begin{aligned} \mathcal B _{\eta ,k,t}:= \left\{ \exists i,j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta (k), t-\zeta (k)] \right\} . \end{aligned}$$

One has

$$\begin{aligned} \lim _{k \rightarrow \infty } \lim _{t \rightarrow \infty } \mathbf{P}\left[\mathcal B _{\eta ,k,t} \right] =0. \end{aligned}$$
(10.2)

The following proof is included for the sake of completeness.

Proof

Fix \(\eta >0\) and an increasing function \(k\mapsto \zeta (k)\) going to infinity. We want to control the probability of

$$\begin{aligned} \mathcal B _{\eta ,k,t}= \left\{ \exists i,j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta (k), t-\zeta (k)] \right\} \end{aligned}$$

the “bad” event that two particles near \(m_t\) at time \(t\) have branched at an intermediate time, as \(t \rightarrow \infty \) and then \(k \rightarrow \infty \).

Fix \(\varepsilon >0\). By choosing \(x\) large enough that \(\mathbf{P}(A_t(x,\eta )^\complement )\le \varepsilon \), we have for all \(\zeta \ge 0\) and \(t\) large enough

$$\begin{aligned}&\mathbf{P}(\exists i,j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta , t-\zeta ])\\&\quad \le \mathbf{P}(A_t(x,\eta )^\complement )+\mathbf{P}(\exists i,j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta , t-\zeta ], A_t(x,\eta )) \\&\quad \le \varepsilon + \mathbf{E}\left[\mathbf{1}_{A_t(x,\eta )}\sum _{i \in J_\eta (t)} \mathbf 1 _{\{\exists j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta , t-\zeta ] \}} \right]. \end{aligned}$$

Using the many-to-one principle [see (4.1)], we have

$$\begin{aligned}&\mathbf{E}\left[\mathbf{1}_{A_t(x,\eta )}\sum _{i \in J_\eta (t)} \mathbf 1 _{\{\exists j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta , t-\zeta ] \}} \right]\\&\quad = \mathbf{E}_{\mathbf{Q}}\Big [\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta , \exists j \in J_\eta (t) : \tau _{\Xi ,j}(t) \in [\zeta , t-\zeta ]\}} \Big ] \end{aligned}$$

where \(\tau _{\Xi ,j}(t)\) is the time at which the particle \(X_j(t)\) has branched off the spine \(\Xi \). In particular, using the description of the process under \(\mathbf{Q}\), we know that \(X_{\Xi _t}(t)\) is \(\sigma \) times a standard Brownian motion, and that independent branching Brownian motions are born at rate 2 (at times \((\tau _{i}^{(\Xi _t)}(t),i\ge 1)\)) from the spine \(\Xi \). The event \(\{\exists j \in J_\eta (t) : \tau _{\Xi ,j}(t) \in [\zeta , t-\zeta ]\}\) means that there is an instant \(\tau _{i}^{(\Xi _t)}(t)\) between \(\zeta \) and \(t-\zeta \) such that the branching Brownian motion that separated from \(\Xi \) at that time has a descendant at time \(t\) in \([m_t-\eta ,m_t+\eta ]\). In particular, the minimum of this branching Brownian motion at time \(t\) is at most \(m_t+\eta \). Thus

$$\begin{aligned}&\mathbf{E}_\mathbf{Q}\Big [\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta , \exists j \in J_\eta (t) : \tau _{\Xi ,j}(t) \in [\zeta , t-\zeta ]\}} \Big ] \\&\qquad \le \mathbf{E}_\mathbf{Q}\left[\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} \mathbf 1 _{\{X_{1,t}^{\tau } \le m_t +\eta \}} \right] \end{aligned}$$

where \( X_{1,t}^{\tau }\) is the position at time \(t\) of the leftmost particle descended from the particle which branched off \(\Xi \) at time \(\tau \), and the sum runs over all times \(\tau =\tau _{i}^{(\Xi _t)}(t) \in [\zeta ,t-\zeta ]\) at which a new particle is created. Recall that \(G_v(x) = \mathbf{P}(X_1(v)\le x)\), so that by conditioning we obtain

$$\begin{aligned}&\mathbf{E}_\mathbf{Q}\Big [\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta , \exists j \in J_\eta (t) : \tau _{\Xi ,j}(t) \in [\zeta , t-\zeta ]\}} \Big ]\\&\qquad \le \mathbf{E}_\mathbf{Q}\left[\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} G_{t-\tau }(m_t+\eta -X_{\Xi _\tau }(\tau ))\right]. \end{aligned}$$

For every continuous function \(X:[0,t] \rightarrow \mathbb{R }\), recall that we define \(\underline{X}^{[a,b]}:= \min _{s\in [a,b]} X(s)\), and define the event \(A_t^{(X)}(x,\eta )\) by

$$\begin{aligned} A_t^{(X)}(x,\eta ) :={}&\{\underline{X}^{[0,t/2]} \ge -x \} \cap \{\underline{X}^{[t/2, t]} \ge m_t -x \} \cap \{\forall s \in [x,t/2] : X(s) \ge s^{1/3} \} \\&\cap \{\forall s \in [t/2, t-x] : X(s) -X(t) \in [(t-s)^{1/3},(t-s)^{2/3}] \}. \end{aligned}$$
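For concreteness, here is a small helper (our own, purely illustrative) spelling out the four constraints of \(A_t^{(X)}(x,\eta )\) for a path sampled on a uniform grid; note that \(\eta \) enters only through the events this one is intersected with, not through the constraints themselves.

```python
import numpy as np

# Illustrative predicate (ours): check the four constraints defining
# A_t^{(X)}(x, eta) for a path X sampled at grid points s covering [0, t].
# m_t must be supplied; eta does not appear in these constraints.

def in_A(s, X, t, x, m_t):
    first = s <= t / 2
    second = s >= t / 2
    mid = (s >= x) & (s <= t / 2)
    late = (s >= t / 2) & (s <= t - x)
    gap = X[late] - X[-1]                 # X(s) - X(t) on [t/2, t-x]
    return bool(
        X[first].min() >= -x              # stays above -x on [0, t/2]
        and X[second].min() >= m_t - x    # stays above m_t - x on [t/2, t]
        and np.all(X[mid] >= s[mid] ** (1 / 3))          # lower envelope
        and np.all((gap >= (t - s[late]) ** (1 / 3)) &
                   (gap <= (t - s[late]) ** (2 / 3)))    # tube near the tip
    )
```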

Then, \(A_t(x,\eta )\cap \{|X_{\Xi _t}(t)-m_t|<\eta \}\subset A_t^{(X_{\Xi _t,t}(\cdot ))}(x,\eta )\cap \{|X_{\Xi _t}(t)-m_t|<\eta \}\), hence

$$\begin{aligned}&\mathbf{E}_\mathbf{Q}\left[\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} \mathbf 1 _{\{X_{1,t}^{\tau } \le m_t +\eta \}} \right]\\&\quad \le \mathbf{E}_\mathbf{Q}\left[\mathrm{e }^{X_{\Xi _t}(t)}\mathbf{1}_{A_t^{(X_{\Xi _t,t}(\cdot ))}(x,\eta )}\mathbf{1}_{\{|X_{\Xi _t}(t) -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} G_{t-\tau }(m_t+\eta -X_{\Xi _\tau }(\tau ))\right]\\&\quad = \mathbf{E}\left[\mathrm{e }^{\sigma B(t)}\mathbf{1}_{A_t^{(\sigma B(\cdot ))}(x,\eta )}\mathbf{1}_{\{|\sigma B(t) -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} G_{t-\tau }(m_t+\eta -\sigma B(\tau ))\right] \end{aligned}$$

where in the last expectation \(B\) is a standard Brownian motion and the \(\tau \) over which the sums run are the atoms of a rate 2 Poisson process independent of \(B\). Since we are on the good event \(A_t^{(\sigma B)}(x,\eta )\), we know that for \(x\le s \le t/2\) we have \(\sigma B_s >s^{1/3} \) and \(\sigma B_{t-s} > m_t + s^{1/3}\). Therefore

$$\begin{aligned}&\mathbf{E}\left[\mathrm{e }^{\sigma B_t}\mathbf{1}_{A_t^{(\sigma B)}(x,\eta )}\mathbf{1}_{\{|\sigma B_t -m_t | \le \eta \}} \sum _{\tau \in [\zeta , t-\zeta ]} G_{t-\tau }(m_t+\eta -\sigma B(\tau )) \right]\\&\quad \le t^{3/2}\mathrm{e }^{\eta +C_B} \mathbf{P}\left(A_t^{(\sigma B)}(x,\eta ),|\sigma B_t -m_t | \le \eta \right) \left\{ \int _\zeta ^{t/2} 2 G_{t-s}(m_t -s^{1/3} +\eta ) \, \mathrm{d }s \right.\\&\left. \qquad + \int _\zeta ^{t/2} 2 G_s(-s^{1/3} +\eta ) \, \mathrm{d }s \right\} \\&\quad \le c\, \left\{ \int _\zeta ^{t/2} 2 G_{t-s}(m_t -s^{1/3} +\eta ) \, \mathrm{d }s + \int _\zeta ^{t/2} 2 G_s(-s^{1/3} +\eta ) \, \mathrm{d }s \right\} \end{aligned}$$

where the constant \(c\) only depends on \(\eta \) and where we have used (5.2) for the last inequality.

Now, observe that \(G_{t-s}(m_t -s^{1/3} +\eta ) = G_{t-s}(m_{t-s}({1\over 2}) + \Delta (\eta ,t,s))\) where, as before, \(m_{t-s}({1\over 2})\) is such that \(G_{t-s}(m_{t-s}({1\over 2})) = {1\over 2}\), and \(\Delta (\eta , t,s) := \eta -s^{1/3} + m_t -m_{t-s}({1\over 2})\). Since \(m_t ={3\over 2}\log t + C_B\) by definition, and \(m_{t-s}({1\over 2}) = {3\over 2}\log (t-s) + C + o(1), t-s \rightarrow \infty \), for some constant \(C\in \mathbb{R }\), we see that there exists a sufficiently large \(\zeta _0\) such that \(\Delta (\eta , t,s) \le -{1\over 2} s^{1/3}, \forall t>\zeta \ge \zeta _0, \forall s\in [\zeta , {t\over 2}]\). This implies, for \(t>\zeta \ge \zeta _0\) and \(s\in [\zeta , {t\over 2}]\),

$$\begin{aligned} G_{t-s}(m_t -s^{1/3} +\eta ) = G_{t-s}(m_{t-s}\left({1\over 2}\right) + \Delta (\eta , t,s)) \le \mathbf{P}\left(W \le \eta -{1\over 2}s^{1/3}\right), \end{aligned}$$

the last inequality being a consequence of Fact 9.1.

Since \(\mathbf{P}(W \le -y)\sim cy\mathrm{e }^{-y}, y\rightarrow \infty \), we conclude that \(\int _\zeta ^{t/2} 2 G_{t-s}(m_t -s^{1/3} +\eta ) \, \mathrm{d }s \rightarrow 0\) as \(\zeta \rightarrow \infty \). A similar argument also shows that \(\int _\zeta ^{t/2} 2 G_{s}(-s^{1/3} +\eta ) \, \mathrm{d }s \rightarrow 0\) as \(\zeta \rightarrow \infty \).
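As a numerical aside (ours), one can see how fast the first integral vanishes: bounding \(G_{t-s}(m_t -s^{1/3} +\eta )\) by \(c\,y(s)\mathrm{e}^{-y(s)}\) with \(y(s)={1\over 2}s^{1/3}-\eta \), and taking \(c=1, \eta =1\) for concreteness, the tail integral decays to 0 like \(\mathrm{e}^{-\zeta ^{1/3}/2}\) up to polynomial factors:

```python
import numpy as np

# Numerical aside (ours): the dominating tail with y(s) = s^(1/3)/2 - eta,
# c = 1 and eta = 1, evaluated by a trapezoidal rule on a log-spaced grid.

def tail(zeta, eta=1.0, s_max=1.0e8, n=200_000):
    s = np.geomspace(zeta, s_max, n)
    y = np.maximum(s ** (1 / 3) / 2 - eta, 0.0)
    f = 2.0 * y * np.exp(-y)
    return float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(s)))

for zeta in (1e2, 1e4, 1e6):
    print(f"zeta={zeta:.0e}  tail integral={tail(zeta):.3e}")
```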

The conclusion is that by choosing \(\zeta \) large enough (depending only on \(\eta \)), we have \(\mathbf{P}(\exists i,j \in J_\eta (t) : \tau _{i,j}(t) \in [\zeta , t-\zeta ]) <\varepsilon \) uniformly in \(t\), which proves (10.2).\(\square \)

Recall that for every \(u \in {\fancyscript{H}}_k\), \(X_1^u(t)\) is the position at time \(t\) of the leftmost descendant of \(u\) (or 0 if \(u \not \in {\fancyscript{H}}_{k,t}\)), and let \(X_{1,t}^u(s), s\le t\), be the position at time \(s\) of the ancestor of this leftmost descendant (or 0 if \(u \not \in {\fancyscript{H}}_{k,t}\)). For each \(t,\zeta \) and \(u \in {\fancyscript{H}}_k\) define

$$\begin{aligned} {\fancyscript{Q}}^{(u)}_{t,\zeta } = \delta _0+ \sum _{i : \tau _i^{u}>t-\zeta } {\fancyscript{N}}_i^{u} \end{aligned}$$

where the \(\tau _i^u\) are the branching times along the path \(s \mapsto X_{1,t}^u(s) \), enumerated backward from \(t\), and the \({\fancyscript{N}}_i^{u}\) are the point measures of particles whose ancestor was born at \(\tau _i^u\) (this measure has no mass if \(u \not \in {\fancyscript{H}}_{k,t}\)). Thus, \({\fancyscript{Q}}^{(u)}_{t,\zeta }\) is the point measure of particles which have branched off the path \(s \mapsto X_{1,t}^u(s) \) after time \(t-\zeta \), including the particle at \(X_1^u(t)\).

In the same manner we define \({\fancyscript{Q}}_\zeta \) as the point measure obtained from \({\fancyscript{Q}}\) (in Theorem 2.3) by keeping only the particles that have branched off \(s \mapsto Y(s)\) before time \(\zeta \). More precisely, conditionally on the path \(Y\), we let \(\pi \) be a Poisson point process on \([0,\infty )\) with intensity \(2 \big (1 - G_t(-Y(t))\big ) \, \mathrm{d }t = 2 \big (1 - \mathbf{P}_{Y(t)} (X_1(t) <0)\big ) \, \mathrm{d }t\). For each point \(t\in \pi \) such that \(t<\zeta \), we start an independent branching Brownian motion \(({\fancyscript{N}}^*_{Y(t)}(u), u\ge 0)\) at position \(Y(t)\), conditioned to have \(\min {\fancyscript{N}}^*_{Y(t)}(t) >0\). Then define \({\fancyscript{Q}}_\zeta := \delta _0+ \sum _{t \in \pi , t<\zeta } {\fancyscript{N}}^*_{Y(t)}(t)\).
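The Poisson layer of this construction is easy to sample by thinning, which the sketch below (ours) illustrates; the spine path \(Y\) and the distribution function \(G\) are placeholder inputs of our own, and the conditioned branching Brownian motions attached at the sampled times are not simulated.

```python
import numpy as np

# Sketch (ours) of sampling the atoms of pi on [0, zeta): a Poisson
# process with intensity 2(1 - G_t(-Y(t))) dt is obtained by thinning a
# homogeneous rate-2 process, keeping t with probability 1 - G(t, -Y(t)).

rng = np.random.default_rng(2)

def decoration_times(Y, G, zeta):
    n = rng.poisson(2.0 * zeta)                    # rate-2 candidates
    candidates = np.sort(rng.uniform(0.0, zeta, n))
    accept = rng.random(n) < np.array([1.0 - G(t, -Y(t)) for t in candidates])
    return candidates[accept]

# toy placeholders, for illustration only:
print(decoration_times(Y=lambda t: 1.0 + t, G=lambda t, x: 0.5, zeta=5.0))
```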

Lemma 10.3

For each fixed \(k\) and \(\zeta \), the following limit holds in distribution

$$\begin{aligned} \lim _{t \rightarrow \infty } (\fancyscript{P}^*_{k,t}, ({\fancyscript{Q}}^{(u)}_{t,\zeta })_{u \in {\fancyscript{H}}_k}) = ({\fancyscript{P}}^*_{k,\infty }, ({\fancyscript{Q}}^{(u)}_{\zeta })_{u \in {\fancyscript{H}}_k}) \end{aligned}$$

where \(({\fancyscript{Q}}^{(u)}_{\zeta })_{u \in {\fancyscript{H}}_k} \) is a collection of independent copies of \({\fancyscript{Q}}_{\zeta }\), independent of \({\fancyscript{P}}^*_{k,\infty }\).

Proof

Conditionally on \({\fancyscript{H}}_k\), the random variables \((X_{1,t}^u(\cdot ), \, {\fancyscript{Q}}^{(u)}_{t,\zeta })_{u\in {\fancyscript{H}}_k}\) are independent by the branching property. By Theorem 2.3, for every \(u\in {\fancyscript{H}}_k\), the pair \((X_1^u(t)- m_t, \, {\fancyscript{Q}}^{(u)}_{t,\zeta })\) converges in law to \((k+ W(u), \, {\fancyscript{Q}}^{(u)}_{\zeta })\) where \({\fancyscript{Q}}^{(u)}_{\zeta }\) is a copy of \({\fancyscript{Q}}_{\zeta }\) independent of \(W(u)\).

To conclude, observe that \(\sum _{u\in {\fancyscript{H}}_k}\delta _{k + W(u)} = {\fancyscript{P}}^*_{k,\infty } -\log (CZ_k)\) by Proposition 10.1. Since for each \(u \in {\fancyscript{H}}_k\) the point measure \({\fancyscript{Q}}_\zeta ^{(u)}\) is independent of \(W(u)\) and of all \(W(v)\) for \(v \in {\fancyscript{H}}_k\), \(v\ne u\), it follows that \({\fancyscript{Q}}_\zeta ^{(u)}\) is independent of \({\fancyscript{P}}^*_{k,\infty }\). We conclude that

$$\begin{aligned} \lim _{t \rightarrow \infty } (\fancyscript{P}^*_{k,t}, ({\fancyscript{Q}}^{(u)}_{t,\zeta })_{u \in {\fancyscript{H}}_k}) = ({\fancyscript{P}}^*_{k,\infty }, ({\fancyscript{Q}}^{(u)}_{\zeta })_{u \in {\fancyscript{H}}_k}) \end{aligned}$$

in distribution where the two components of the limit are independent.\(\square \)

Armed with these tools, let us proceed to give the

Proof of Theorem 2.1

Let \({\bar{{\fancyscript{N}}}}^{(k)}(t)\) be the extremal point measure seen from the position \(m_t-\log (CZ_k)\):

$$\begin{aligned} {\bar{{\fancyscript{N}}}}^{(k)}(t):= {\fancyscript{N}}(t)-m_t +\log (CZ_k). \end{aligned}$$

Let \(\zeta : [0,\infty ) \rightarrow [0,\infty )\) be any function increasing to infinity. Observe that on \(\mathcal B _{\eta ,k,t}^\complement \) (an event of probability tending to one when \(t\rightarrow \infty \) and then \(k \rightarrow \infty \), by Proposition 10.2) we have

$$\begin{aligned} {\bar{{\fancyscript{N}}}}^{(k)}(t) |_{[-\eta , \eta ]} = \sum _{u \in {\fancyscript{H}}_k} \left({\fancyscript{Q}}^{(u)}_{t,\zeta (k)}+ X_1^u(t)-m_t+\log (CZ_k)\right)|_{[-\eta , \eta ]}. \end{aligned}$$

Now by Lemma 10.3 we know that in distribution

$$\begin{aligned} \lim _{t \rightarrow \infty } \sum _{u \in {\fancyscript{H}}_k} \Big ({\fancyscript{Q}}^{(u)}_{t,\zeta (k)}+ X_1^u(t)-m_t + \log (CZ_k) \Big ) = \sum _{x \in {\fancyscript{P}}^*_{k,\infty }} (x+ {\fancyscript{Q}}^{(x)}_{\zeta (k)}) \end{aligned}$$

where the \({\fancyscript{Q}}^{(x)}_{\zeta (k)}\) are independent copies of \({\fancyscript{Q}}_{\zeta (k)}\), and independent of \(H_k\). Moreover, we know that \(\lim _{t\rightarrow \infty } Z(t)=Z\) almost surely.

By the second limit in Proposition 10.1, \((\sum _{x \in {\fancyscript{P}}^*_{k,\infty }} (x+ {\fancyscript{Q}}^{(x)}_{\zeta (k)}),Z_k)\) converges as \(k\rightarrow \infty \) to \(({\fancyscript{L}},Z)\) in distribution, \({\fancyscript{L}}\) being independent of \(Z\). In particular, \(({\fancyscript{L}},Z)\) is also the limit in distribution of \((\sum _{x \in {\fancyscript{P}}^*_{k,\infty }} (x+ {\fancyscript{Q}}^{(x)}_{\zeta (k)}),Z)\). We conclude that in distribution

$$\begin{aligned} \lim _{k\rightarrow \infty } \lim _{t \rightarrow \infty } ({\bar{{\fancyscript{N}}}}^{(k)}(t) |_{[-\eta , \eta ]},Z(t))= ({\fancyscript{L}}|_{[-\eta , \eta ]},Z). \end{aligned}$$

Hence, \(\lim _{k\rightarrow \infty }\lim _{t\rightarrow \infty }({\bar{{\fancyscript{N}}}}^{(k)}(t),Z(t))=({\fancyscript{L}},Z)\) in distribution. Since \({\bar{{\fancyscript{N}}}}(t)\) is obtained from \({\bar{{\fancyscript{N}}}}^{(k)}(t)\) by the shift \(\log (CZ) - \log (CZ_k)\), which goes to \(0\) by (10.1), we have in distribution \(\lim _{t\rightarrow \infty } ({\bar{{\fancyscript{N}}}}(t),Z(t)) = ({\fancyscript{L}},Z)\), which yields the content of Theorem 2.1.\(\square \)