First Passage Percolation on the Newman–Watts Small World Model

The Newman–Watts model is given by taking a cycle graph of n vertices and then adding each possible edge (i, j), |i − j| ≠ 1 mod n, with probability ρ/n for some constant ρ > 0. In this paper we add i.i.d. exponential edge weights to this graph, and investigate typical distances in the corresponding random metric space given by the least-weight paths between vertices. We show that typical distances grow as (1/λ) log n for a λ > 0 and determine the distribution of smaller-order terms in terms of limits of branching process random variables.
We prove that the number of edges along the shortest weight path follows a Central Limit Theorem, and show that in a corresponding epidemic spread model the fraction of infected vertices follows a deterministic curve with a random shift.

The model first appeared in the work of Mollisson and Scalia-Tomba [6] as "the great circle" epidemic model, was then studied by Watts and Strogatz [36], and a simplifying modification was made later by Newman and Watts [31]. The Newman-Watts model consists of a cycle on n vertices, each connected to its k ≥ 1 nearest vertices, and then extra shortcut edges are added in a similar fashion to the creation of the Erdős-Rényi graph [21]: for each pair of not yet connected vertices, we connect them independently with probability p.
The model has been studied from different aspects. Newman et al. studied distances [32,33] via simulations and mean-field approximations, as well as the threshold for a large outbreak in the spread of non-deterministic epidemics [30]. Barbour and Reinert treated typical distances rigorously. First, in [7], they studied a continuous circle with circumference n instead of a cycle on n vertices, and added Poi(nρ/2) many 0-length shortcuts at locations chosen according to the uniform measure on the circle. Then, in [8], they studied the discrete model, with all edge lengths equal to 1. They showed that typical distances in both models scale as log n.
Besides typical distances, the mixing time of simple random walk on the Newman-Watts model has also been studied, i.e., the time when the distribution of the position of the walker gets close enough to the stationary distribution in total variation distance. Durrett [20] showed that the order of the mixing time is between (log n)² and (log n)³; then Addario-Berry and Lei [1] proved that Durrett's lower bound is sharp.

Main Results
We work on the Newman-Watts small world model [31] with independent random edge weights: we take a cycle C_n on n vertices, denoted by [n] := {1, 2, . . . , n}, in which each edge (i, j), i, j ∈ [n], |i − j| = 1 mod n, is present. Then, independently for each pair i, j ∈ [n] with |i − j| ≠ 1 mod n, we add the edge (i, j) with probability ρ/n to form the shortcut edges. The parameter ρ is the asymptotic average number of shortcuts from a vertex. Conditioned on the edges of the resulting graph, we assign weights that are i.i.d. exponential random variables with mean 1 to the edges. We denote the weight of edge e by X_e. We write NW_n(ρ) for a realization of this weighted random graph.
We define the distance between two vertices in NW n (ρ) as the sum of weights along the shortest weight path connecting the two vertices. In this respect, the weighted graph with this distance function is a (non-Euclidean) random metric space. Further, interpreting the edge weights as time or cost, the distance between two vertices can also correspond to the time it takes for information to spread from one vertex to the other on the network, or it can model the cost of transmission between the two vertices.
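The construction just described is straightforward to simulate. Below is a minimal sketch (all function names and parameter choices are ours, not from the paper): it builds a realization of NW_n(ρ) with i.i.d. Exp(1) edge weights and computes the weighted distance between two vertices, together with the number of edges on the optimal path, using Dijkstra's algorithm.

```python
import heapq
import random

def newman_watts(n, rho, rng):
    # Build a realization of NW_n(rho): cycle edges plus independent
    # shortcut edges, each edge carrying an i.i.d. Exp(1) weight.
    adj = {i: [] for i in range(n)}
    def add_edge(i, j):
        w = rng.expovariate(1.0)
        adj[i].append((j, w))
        adj[j].append((i, w))
    for i in range(n):                       # cycle edges
        add_edge(i, (i + 1) % n)
    for i in range(n):                       # candidate shortcut edges
        for j in range(i + 2, n):
            if (i, j) == (0, n - 1):         # that pair is a cycle edge
                continue
            if rng.random() < rho / n:
                add_edge(i, j)
    return adj

def shortest_weight(adj, u, v):
    # Dijkstra on the edge weights: returns the weighted distance
    # from u to v and the number of edges (hops) on the optimal path.
    best = {u: (0.0, 0)}
    heap = [(0.0, 0, u)]
    while heap:
        d, h, x = heapq.heappop(heap)
        if x == v:
            return d, h
        if d > best[x][0]:
            continue
        for y, w in adj[x]:
            if y not in best or d + w < best[y][0]:
                best[y] = (d + w, h + 1)
                heapq.heappush(heap, (d + w, h + 1, y))
    return float("inf"), -1

rng = random.Random(1)
n = 200
G = newman_watts(n, 2.0, rng)
dist, hops = shortest_weight(G, 0, n // 2)
```

Since the edge weights are absolutely continuous, the minimizing path is almost surely unique, so ties in the queue are immaterial.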
We say that a sequence of events {E n } n∈N happens with high probability (w.h.p.) if lim n→∞ P(E n ) = 1, that is, the probability that the event holds tends to 1 as the size of the graph tends to infinity. We write Bin, Poi, Exp for binomial, Poisson, and exponential distributions. For random variables {X n } n∈N , X, we write X_n →d X if X_n tends to X in distribution as n → ∞. The moment generating function of a random variable X is the function M_X(ϑ) := E[exp{ϑX}].
Our first result is about typical distances in the weighted graph. Let Γ_ij denote the set of all paths γ in NW n (ρ) between two vertices i, j ∈ [n]. Then the weight of the shortest weight path is defined by P_n(i, j) := min_{γ ∈ Γ_ij} Σ_{e ∈ γ} X_e. (1.1)
Let us write γ = γ(i, j) for the path that minimizes the weighted distance in (1.1). We call H n (U, V ) := |γ(U, V )| the hopcount, i.e., the number of edges along the shortest-weight path between two uniformly chosen vertices.
where Z is a standard normal random variable.
Our next result characterises the proportion of vertices within distance t of a uniformly chosen vertex U as a function of t. To put this result into perspective, note that we can model the spread of information starting from some source set I 0 ⊂ [n] at time t = 0 as follows: we assume that once a vertex v receives the information at time t, it starts transmitting the information towards all its neighbors at rate 1. Denoting the set of vertices connected to v by an edge by H(v), each w ∈ H(v) receives the information from v at time t + X (v,w) . We further assume that transmission happens only after the first receipt of the information, that is, any later receipts are ignored. If, instead of the spread of information, we model the spread of a disease, this model is often called an SI-epidemic (susceptible-infected).
In the next theorem we consider this epidemic spread model from a single source I 0 = {U} on NW n (ρ) with i.i.d. Exp(1) transmission times. We define I_n(t, U) := (1/n) Σ_{i∈[n]} 1{i is infected before or at time t} = (1/n) #{i ∈ [n] : P_n(U, i) ≤ t}, (1.2) the fraction of infected vertices at time t of the epidemic started from the vertex U.
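In simulation terms, I n (t, U ) is just the empirical distribution function of the one-to-all weighted distances from U. A self-contained sketch (it repeats a minimal graph builder; names and parameter choices are ours):

```python
import heapq
import random

def nw_graph(n, rho, rng):
    # minimal NW_n(rho) builder: cycle plus shortcuts, Exp(1) weights
    adj = {i: [] for i in range(n)}
    def add_edge(i, j):
        w = rng.expovariate(1.0)
        adj[i].append((j, w))
        adj[j].append((i, w))
    for i in range(n):
        add_edge(i, (i + 1) % n)
    for i in range(n):
        for j in range(i + 2, n):
            if (i, j) != (0, n - 1) and rng.random() < rho / n:
                add_edge(i, j)
    return adj

def one_to_all(adj, u):
    # Dijkstra from u: weighted distances to every vertex
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y, w in adj[x]:
            if y not in dist or d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return dist

def infected_fraction(dist, t, n):
    # I_n(t, U): fraction of vertices at weighted distance <= t from U
    return sum(1 for d in dist.values() if d <= t) / n

rng = random.Random(7)
n = 300
dist = one_to_all(nw_graph(n, 2.0, rng), 0)
curve = [infected_fraction(dist, t / 2.0, n) for t in range(21)]
```

The resulting `curve` is a nondecreasing step function starting at 1/n (only the source is infected at time 0), matching the shape described by the theorem.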

Remark 1.4
Note that Theorems 1.1 and 1.2 are analogous to similar results in the sequence of papers [11–13,24], while Theorem 1.3 is analogous to the results in [9,14]. The intuitive message of Theorem 1.3 is that a linear proportion of infected vertices can be observed after a time that is proportional to the logarithm of the size of the population. This time has a random shift given by (1/λ) log W U . Besides this random shift, the fraction of infected individuals follows a deterministic curve f (·): only the 'position of the curve' on the time axis is random. A bigger value of W U means that the local neighborhood of U is "dense", and hence the spread is quick in the initial stages: indeed, a bigger value of W U shifts the function f (t + (log W U )/λ) further to the left on the time axis. This phenomenon has been observed in real-life epidemics, see e.g. [2,35] for a characterisation of typical epidemic curve shapes. For individual epidemic curves, browse e.g. [18].
Remark 1.6 These functional equations and the fact that there exists a solution for all ϑ ∈ R+ follow from the usual branching recursion of multi-type branching processes, which can be found e.g. in [5].

Related Literature, Comparison and Context
First passage percolation (FPP) was first introduced by Hammersley and Welsh [22] to study spreading dynamics on lattices, in particular on Z d , d ≥ 2. The intuitive idea behind the method is that one imagines water flowing at a constant rate through the (random) medium, the waterfront representing the spread. The model turned out to be able to capture the core idea of several other processes, such as weighted graph distances and epidemic spreads. Janson [25] studied typical distances and the corresponding hopcount, flooding times as well as diameter of FPP on the complete graph. He showed that typical distances, the flooding time and diameter converge to 1, 2, and 3 times log n/n, respectively, while the hopcount is of order log n.

Universality Class
In a sequence of papers (e.g. [11–13,23,24]) van der Hofstad et al. investigated FPP on random graphs. Their aim was to determine universality classes of the shortest path metric for weighted random graphs without 'extrinsic' geometry (e.g. the supercritical Erdős-Rényi random graph, the configuration model, or rank-1 inhomogeneous random graphs). They showed that typical distances and the hopcount scale as log n, as long as the degree distribution has finite asymptotic variance and the edge weights are continuous on [0, ∞). On the other hand, power-law degrees with infinite asymptotic variance drastically change the metric, and there are several universality classes; compare [24] with [11]. In this respect, Theorems 1.1 and 1.2 show that the presence of the circle does not modify the universality class of the model. A CLT for the hopcount in weighted random graphs first occurred in [25] for the complete graph, then it was implicitly stated in [23] for the Erdős-Rényi random graph with average degree at least (log n)³. For finite mean degree random graphs, the CLT for the hopcount was proved in [11–13].

Comparison to the Erdős-Rényi graph
Notice that the subgraph formed by the shortcut edges is approximately an Erdős-Rényi graph, with the difference that the presence of the cycle always makes NW n (ρ) connected, hence there is no subcritical or critical regime in NW n (ρ). Typical distances on the Erdős-Rényi graph with parameter ρ/n and Exp(1) edge weights scale as log n/(ρ − 1) [12], while for NW n (ρ) they scale as (log n)/λ, with λ = (ρ − 1 + √(ρ² + 6ρ + 1))/2 > ρ − 1 for all ρ > 0. This means that when ρ > 1, the presence of the cycle makes typical distances shorter, and this appears already in the constant scaling factor of log n. However, λ(ρ)/ρ → 1 as ρ → ∞, meaning that the effect of the cycle becomes more and more negligible as the number of shortcut edges grows.
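The formula above is equivalent to λ being the larger root of λ² − (ρ − 1)λ − 2ρ = 0 (see the eigenvalue computation in Sect. 2), and both comparisons in this paragraph are easy to confirm numerically; a quick sanity check (function name ours):

```python
import math

def lam(rho):
    # the rate from the text: (rho - 1 + sqrt(rho^2 + 6 rho + 1)) / 2
    return (rho - 1.0 + math.sqrt(rho * rho + 6.0 * rho + 1.0)) / 2.0

# lambda(rho) > rho - 1: the cycle shortens typical distances
checks = [(rho, lam(rho), rho - 1.0) for rho in (0.5, 1.0, 2.0, 10.0)]

# lambda(rho)/rho -> 1: the effect of the cycle fades for large rho
ratio_large = lam(1e6) / 1e6
```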

Comparison to Inhomogeneous Random Graphs
Kolossváry et al. [29] studied FPP on the inhomogeneous random graph model (IHRG), defined in [15]. In this model, vertices have types from a type space S, and conditioned on the types of the vertices, edges are present independently with probabilities that depend on the types. One can fine-tune the parameters of this model so that any finite neighborhood of a vertex in the NW n (ρ) model is similar to that in the IHRG, that is, both of them can be modelled using the same continuous time multi-type branching process. It would be natural to conjecture that typical distances are then the same in these two models. It turns out that this is almost but not entirely the case: the first order term λ⁻¹ log n and the random variables W U , W V are the same, but the additive constant c in Theorem 1.1 is not: the geometry of the Newman-Watts model modifies how the two branching processes can connect to each other, which modifies the constant. Writing the main result of [29] in the same form as Theorem 1.1, we obtain c_IHRG = log((ρ + 2)(2ρ + λ²)/(ρ(λ + 2)²λ(λ + 1))).

Comparison to the Discrete Model
Barbour and Reinert were the first to investigate typical distances on the Newman-Watts model rigorously. In [7] they investigated a similar model, a continuous circle with circumference L instead of a cycle on L vertices, with Poi(Lρ/2) many shortcuts added at locations chosen according to the uniform measure on the circle. Distances are measured by the usual arc length along the circle, while shortcuts are given length 0. Their results concerning typical distances are implicit, but rewritten they show that the distance is a logarithmic function of L. In a subsequent paper [8] they treated the discrete model NW n (ρ(n)) with unit edge weights. They gave a complete characterisation of typical distances in terms of the parameter ρ(n), which may also tend to infinity with n. In particular, they showed that the earlier continuous model is a good approximation only if ρ(n) → ρ: in this case the distances are again logarithmic.

The Epidemic Curve
The study of the epidemic curve on random graphs originates with Barbour and Reinert [9], who investigated the epidemic curve on the Erdős-Rényi random graph and on the configuration model with bounded degrees, where other features, such as a contagious period of vertices or dependence of the transmission time distribution on the degrees, may also be present. Later, in [14], Bhamidi et al. pointed out the connection between FPP, typical distances, and the epidemic curve by studying the epidemic spread on the configuration model with arbitrary continuous edge-weight distribution. Our Theorem 1.3 is very much along the lines of these two results.

Possible Future Directions
In [3,10,19] the competition of two spreading processes running on the same graph is investigated. This can be considered a competition between two epidemics, as well as the word-of-mouth marketing of two similar products. The results suggest that the outcome depends on the universality class of the model: in ultra-small worlds, one competitor only gets a negligible part of the vertices, while on regular graphs coexistence might be possible, i.e., both colors can paint a linear fraction of vertices. Studying competition on NW n (ρ) is an interesting and challenging future project.

Structure of the Paper
In what follows, we prove Theorems 1.1, 1.2 and 1.3. The brief idea of the proof is the following: we choose two vertices uniformly at random, then we start to explore the neighbourhoods of these vertices in the graph in terms of the distance from these vertices (Sect. 2). We show that this procedure w.h.p. results in 'shortest weight trees' (SWT's) that can be coupled to two independent copies of a continuous time multi-type branching process (CMBP). We then handle how these two shortest weight trees connect in the graph in Sect. 3 with the help of a Poisson approximation. We provide the proof of Theorem 1.3 about the epidemic curve in Sect. 4 based on our result on distances. Finally we prove the Central Limit Theorem for the hopcount in Sect. 5, based on an indicator representation of the 'generation of vertices' in the branching processes.

Exploration Process
To explore the neighborhood of a vertex, we use a modification of Dijkstra's algorithm.
Introduce the following notation: N (t), A(t), U(t) denote the sets of explored (dead), active (alive) and unexplored vertices at time t, respectively, and N(t), A(t), U(t) denote the sizes of these sets. The remaining lifetime of a vertex w ∈ A(t) at time t is denoted by R w (t), meaning that w will become explored exactly at time t + R w (t). The set of remaining lifetimes is R {A(t)} (t). As before, H(v) denotes the set of neighbors of a vertex v (Figs. 1, 2).

The Exploration Process on an Arbitrary Weighted Graph
Let i = 1. The vertex from which we start the exploration process is denoted by v_1. We color v_1 blue and set the time as t = T_1 = 0. Evidently, we take N(T_1) = {v_1}, A(T_1) = H(v_1) and U(T_1) = [n] \ ({v_1} ∪ H(v_1)). The remaining lifetimes are determined by the edge weights, i.e., R_w(T_1) = X_(v_1,w) for w ∈ H(v_1).
We color the active vertices w ∈ H(v_1) to have the same color as the edge (v_1, w). We work with induction from now on. In each step, we increase i by 1. We construct the continuous time process in steps, namely, at the random times when we explore a new vertex. Let τ_i := min_{w ∈ A(T_{i−1})} R_w(T_{i−1}), the minimum of the remaining lifetimes. Then define T_i := T_{i−1} + τ_i, the time when we explore the next vertex. Nothing changes in the time interval [T_{i−1}, T_i). From all the remaining lifetimes we subtract the time passed: R_w(T_{i−1} + s) = R_w(T_{i−1}) − s for 0 ≤ s ≤ τ_i, subtracted element-wise. At time T_i, the vertex (or all the vertices, if there is more than one such vertex) v_i whose remaining lifetime equals 0 becomes explored and its unexplored neighbors become active. We shall refer to v_i as the i-th explored vertex. We set N(T_i) = N(T_{i−1}) ∪ {v_i}, A(T_i) = (A(T_{i−1}) \ {v_i}) ∪ (H(v_i) ∩ U(T_{i−1})) and U(T_i) = U(T_{i−1}) \ H(v_i). We refresh the set of remaining lifetimes: R_x(T_i) = X_(v_i,x) for each newly active x ∈ H(v_i) ∩ U(T_{i−1}), and x also gets the color of (v_i, x).
On an arbitrary connected weighted graph, the exploration process can be continued until all vertices become explored. Note that this algorithm builds the shortest weight tree SWT from the starting vertex. This tree will be modeled using the branching process.
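The exploration process can be phrased as a discrete-event loop: keep a priority queue of remaining-lifetime events and repeatedly explore the vertex whose remaining lifetime runs out first. A sketch on an arbitrary weighted graph (the event-queue formulation and all names are ours), recording the shortest weight tree through a parent map; later occurrences of an already explored vertex are simply skipped:

```python
import heapq

def explore(adj, v1):
    # exploration process on an arbitrary weighted graph:
    # adj maps a vertex to a list of (neighbor, edge weight) pairs.
    # Returns the exploration order [(T_i, v_i), ...] and the parent
    # map of the shortest weight tree rooted at v1.
    explored = {}
    parent = {}
    heap = [(0.0, v1, None)]   # (time the lifetime runs out, vertex, parent)
    order = []
    while heap:
        t, v, par = heapq.heappop(heap)
        if v in explored:
            continue           # a later occurrence of the vertex: ignore it
        explored[v] = t
        parent[v] = par
        order.append((t, v))
        for w, x in adj[v]:    # activate the not yet explored neighbors
            if w not in explored:
                heapq.heappush(heap, (t + x, w, v))
    return order, parent

# a small arbitrary weighted graph (the weights are ours, for illustration)
adj = {
    "a": [("b", 1.0), ("c", 5.0)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
order, parent = explore(adj, "a")
```

Here vertex c is explored at time 2 through b rather than at time 5 through the direct edge, exactly as the remaining-lifetime mechanism prescribes.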

Exploration on the Weighted Newman-Watts Random Graph
We aim to apply the exploration process defined above for discovering the neighborhood of a vertex in a realization of NW n (ρ). In the beginning, we think of the environment as completely random, and we reveal the presence and weight of edges as the exploration algorithm proceeds, i.e., we reveal an edge when one of its endpoints becomes explored. In this respect, all the quantities defined for the exploration process become random variables. In this section, we investigate the behavior of this random exploration process (Figs. 1, 2).
Let us color the cycle-edges red and the shortcut-edges blue, and let us say that an instance of a vertex is red/blue in the exploration if it is encountered via a red/blue edge. Below, adding the subscript R or B to any quantity corresponds to the same quantity restricted to only the red or blue vertices, respectively. Note that in case there are more paths leading to a vertex i from the root, there might be multiple instances of i in the exploration and they might have different colors. However, as these paths have different lengths, eventually a unique instance of i will be determined by being explored first, making the coloring unique on explored vertices. We deal with the issue of multiple instances thoroughly in Sects. 2.4.3 and 2.4.1.
While running the exploration process, we build a weighted tree containing the edges that are used to explore the new vertices in the algorithm (restricted to the explored vertices, this is indeed a tree). This tree has root v_1, grows in time, and at any time t it contains the vertex v ∈ [n] precisely when P_n(v_1, v) < t. Let us denote the tree up to time t by SWT_{v_1}(t). If v is blue then it has been reached via a shortcut edge, and hence both of its neighbors on the cycle are added to the new red active vertices. Since there are Bin(n − 3, ρ/n) many shortcut edges from a vertex, this is also the distribution of the new blue active vertices born when exploring a red vertex. For the exploration of a blue vertex, one of its potential shortcut edges has already been used to reach it, hence it gives an additional Bin(n − 4, ρ/n) new active blue vertices. Clearly, by the convergence of the binomial to the Poisson distribution, each vertex has asymptotically Poi(ρ) many blue neighbours. The second statement follows from the fact that the edge weights are i.i.d. exponential random variables, which have the memoryless property. Finally, note that at any time, R {A(t)} (t) consists of i.i.d. exponential random variables, and the algorithm takes the minimum of these. Clearly, the minimum of finitely many absolutely continuous random variables is unique almost surely, and uniform over the indices.
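The binomial-to-Poisson approximation for the blue offspring can be quantified directly: the total variation distance between Bin(n − 3, ρ/n) and Poi(ρ) is of order 1/n. A numerical sketch computing a truncated total variation distance (the truncation at 60 terms is our choice, harmless for ρ = 2; function names ours):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poi_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_bin_poi(n, rho, drop=3):
    # total variation between Bin(n - drop, rho/n) and Poi(rho),
    # with both tails truncated beyond k = 59 (negligible mass there)
    m, p = n - drop, rho / n
    return 0.5 * sum(abs(binom_pmf(k, m, p) - poi_pmf(k, rho))
                     for k in range(60))

tv_small = tv_bin_poi(10 ** 4, 2.0)   # shrinks like O(1/n)
tv_big = tv_bin_poi(100, 2.0)
```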

Multi-type Branching Processes
We define the following continuous time multi-type branching process (CMBP) that will correspond to the initial stages of SWT(t).
There are two particle types, red (R) and blue (B), and their lifetimes are Exp(1), independent of everything else. Particles give birth upon their death. They leave behind offspring as in Claim 2.2: each particle has Poi(ρ) many blue offspring; red particles have one red child, while blue particles have two red children. Dead and alive particles will correspond to explored and active vertices, respectively. With this wording, for the numbers of alive and dead particles we give the following definition. Definition 2.3 We shall write A(t) = (A R (t), A B (t)) for the numbers of alive particles of each type, A(t) standing for the total number of alive particles. Let N(t) = (N R (t), N B (t)), where N q (t) denotes the number of dead particles of type q = R, B. We assume the above quantities to be right-continuous. Superscripts (R), (B) refer to the process started from a single particle of the given type.
The exploration process corresponds to the process started with a single blue-type particle, which dies immediately.

Literature on Multi-type Branching Processes
Here we restate the necessary theorems from [5] which we will use.
It is not hard to see that M(t) satisfies the semigroup property M(t + s) = M(t)M(s) and the continuity condition lim t→0 M(t) = I, where I denotes the identity matrix. As a result, we have M(t) = e^{Qt}, with Q_{r,q} = a_r(E[D_q^{(r)}] − δ_{r,q}). Here, a_r is the rate of dying for a particle of type r (i.e., the parameter of its exponential lifetime), D is the number of offspring with the same sub- and superscript conventions as in Definition 2.3, and δ_{r,q} = 1{r = q} (i.e., δ_{r,q} = 1 if and only if r = q).
In our case, a_R = a_B = 1 and the matrix Q is given by Q_{R,R} = 0, Q_{R,B} = ρ, Q_{B,R} = 2, Q_{B,B} = ρ − 1. Eigenvalues and eigenvectors of the Q matrix Using the characteristic polynomial x² − (ρ − 1)x − 2ρ, for ρ ≥ 1 the maximal eigenvalue λ and the second eigenvalue λ_2 are given by λ = (ρ − 1 + √(ρ² + 6ρ + 1))/2, λ_2 = (ρ − 1 − √(ρ² + 6ρ + 1))/2. (2.1) The normalized left eigenvector π that satisfies πQ = λπ gives the stationary type distribution. We denote the right (column) eigenvector of Q by u and normalize it so that πu = 1. For later use, without computing them, we denote by v_2 and u_2 the left (row) and right (column) eigenvectors of Q belonging to the eigenvalue λ_2. The most important theorem for our purposes is that the CMBP grows exponentially with rate λ (the so-called Malthusian parameter); more precisely: Theorem 2.6 ([5]) With the notation as above, almost surely lim_{t→∞} A(t)u e^{−λt} = W, where W is a non-negative random variable, the almost sure martingale limit of W_t := A(t)u e^{−λt}. Further, W > 0 almost surely on the event of non-extinction. The proofs of Theorems 2.5, 2.6, 2.7 and Corollary 2.8 can be found in [5].
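The claims of this paragraph can be verified numerically. The matrix below is our reconstruction of Q from the offspring rules stated above (rates a_R = a_B = 1; a red particle leaves one red and Poi(ρ) blue children, a blue one two red and Poi(ρ) blue children); we check that its trace and determinant match the two eigenvalues, and that the spectral-gap condition 2λ_2 < λ used later holds:

```python
import math

def Q_matrix(rho):
    # Q_{r,q} = a_r * (E[D_q^{(r)}] - delta_{r,q}), with a_R = a_B = 1
    return [[0.0, rho],
            [2.0, rho - 1.0]]

def eigenvalues(rho):
    # roots of the characteristic polynomial x^2 - (rho - 1) x - 2 rho
    disc = math.sqrt((rho - 1.0) ** 2 + 8.0 * rho)  # = sqrt(rho^2+6rho+1)
    return (rho - 1.0 + disc) / 2.0, (rho - 1.0 - disc) / 2.0

rho = 2.0
Q = Q_matrix(rho)
lam, lam2 = eigenvalues(rho)
# trace and determinant of a 2x2 matrix equal the sum and the
# product of its eigenvalues, respectively
trace_ok = abs((Q[0][0] + Q[1][1]) - (lam + lam2)) < 1e-12
det_ok = abs((Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]) - lam * lam2) < 1e-12
gap_ok = 2.0 * lam2 < lam
```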
Throughout the next sections, we develop error bounds on the coupling between the branching process and the exploration process on the graph. For convenience, we introduce the times we will observe the branching and exploration processes at, as well as the approximations of the martingale limit W at the times t n . Note that in our case, extinction can never occur, hence almost surely W > 0.

Labeling, Coupling, Error Terms
In this section we develop a coupling between the CMBP discussed in the previous section and SWT(t), the exploration process on NW n (ρ).
Error bound on coupling the offspring The CMBP is defined with Poi(ρ) blue offspring distribution, while in the exploration process a vertex has Bin(n − 3, ρ/n) or Bin(n − 4, ρ/n) many blue children.
Let ξ_1, . . . , ξ_n be i.i.d. Bernoulli trials with success probability ρ/n, let X = Σ_{i=1}^n ξ_i ∼ Bin(n, ρ/n), and let Y ∼ Poi(ρ). By the usual coupling of binomial and Poisson random variables, P(X ≠ Y) ≤ ρ²/n. For the blue offspring of a red vertex, let Z ∼ Bin(n − 3, ρ/n) and V ∼ Bin(3, ρ/n). Note that Z and V can be taken independent and we can write Z as Z = X − V. Then, under the usual coupling of X and Y, P(Z ≠ Y) ≤ P(X ≠ Y) + P(V ≠ 0) ≤ (ρ² + 3ρ)/n. For the blue offspring of a blue vertex, let Ẑ ∼ Bin(n − 4, ρ/n); similarly, P(Ẑ ≠ Y) ≤ (ρ² + 4ρ)/n holds. Taking the maximum and using a union bound, the probability that up to k steps at least one particle has a different number of blue offspring in the exploration process and in the Poisson branching process is at most k(ρ² + 4ρ)/n.
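One concrete way to realize such a coupling is by inverse-CDF sampling: one uniform drives both a Bernoulli(p) and a Poi(p) variable, so each coordinate pair coincides except on an event of probability O(p²), and summing gives P(X ≠ Y) of order ρ²/n. A Monte Carlo sketch (the particular coupling and all parameters are our choices, not from the paper):

```python
import math
import random

def coupled_pair(p, rng):
    # inverse-CDF coupling: the same uniform u produces a Bernoulli(p)
    # variable xi and a Poisson(p) variable eta
    u = rng.random()
    xi = 1 if u > 1.0 - p else 0
    eta, pmf = 0, math.exp(-p)
    cdf = pmf
    while u > cdf:                # invert the Poisson CDF at u
        eta += 1
        pmf *= p / eta
        cdf += pmf
    return xi, eta

def mismatch_rate(n, rho, trials, rng):
    # empirical P(X != Y) for X ~ Bin(n, rho/n), Y ~ Poi(rho),
    # built coordinate-wise from coupled pairs
    p = rho / n
    bad = 0
    for _ in range(trials):
        xs = ys = 0
        for _ in range(n):
            xi, eta = coupled_pair(p, rng)
            xs += xi
            ys += eta
        bad += (xs != ys)
    return bad / trials

rng = random.Random(0)
rate = mismatch_rate(500, 2.0, 400, rng)
bound = 2.0 ** 2 / 500            # rho^2 / n for this choice of n, rho
```

With n = 500 and ρ = 2 the bound ρ²/n is 0.008, and the empirical mismatch rate stays of that order.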

Labeling and Thinning
We relate the CMBP to the exploration process on NW n (ρ) through a labeling of the former. Below, everything must be interpreted modulo n.
(i) The root is labeled u, the source of the exploration process. u can be U , a uniformly chosen vertex in [n]. (ii) Every other particle gets a label when it is born. (iii) We distinguish "left type" and "right type" red children. Left type red particles have a left type red child, right type red particles have a right type red child, blue particles have a red child of both types. (iv) A left type red child of v gets label v − 1, a right type red child of v is labeled v + 1.
(v) The blue children of v get a set of labels uniformly chosen from [n].

Lemma 2.9
We say that the labeling fails if two explored vertices share the same label (this still allows several occurrences of the same label in the active set). The probability that the labeling fails at the i-th split is at most 2i/n.

Proof
The labeling fails at the i-th split if the splitting particle has a label that is already taken by an explored vertex. We distinguish two cases. When a blue particle splits Since the label of a blue particle is chosen uniformly in [n], and there are at most i − 1 dead labels already, the probability that we choose from this set is at most (i − 1)/n. When a red particle splits Note that the labeling procedure ensures that whenever a blue particle v is explored, it starts a growing (possibly asymmetric) interval of red vertices around it. A red vertex, upon dying, extends this interval in one direction (if it is left type, then towards the left). Note that the original vertex v in this interval had a uniformly chosen label in [n]. Let us denote the position of the k-th explored blue vertex by c_k, and write l_k(T_i) and r_k(T_i) for the number of explored red vertices to the left and to the right of c_k after the i-th split, i ≥ k. Finally, we denote the whole interval of explored vertices around c_k after the i-th split by I_k(T_i). Recall that the process is by definition right-continuous.
In this setting, the label of a red vertex that is just being explored can coincide with the label of an already explored red vertex if and only if two intervals 'grow into each other' at the i-th split. Denote by I* the interval that grows at the i-th split, and write c*, r*(T_{i−1}), l*(T_{i−1}) for the location of its blue vertex and its right and left lengths, respectively. Then, I* grows into another interval I_k if and only if c_k, the location of the blue vertex in I_k, is at position c_k = c* + r*(T_{i−1}) + l_k(T_{i−1}) + 1 or c_k = c* − l*(T_{i−1}) − r_k(T_{i−1}) − 1. (The first case means that the furthest explored red vertex on the left of I_k is a red active child of the furthest explored right vertex in I*.) Since the location of c_k is uniform in [n], each fixed interval I_k is grown into with probability at most 2/n. Note that there are exactly as many intervals as blue explored vertices (at either T_{i−1} or T_i, since in this case the i-th explored vertex v_i must be red). Let the bad event be E_i = {v_i is red and its label is already used}. Hence P(E_i) ≤ 2i/n, since there are at most i blue explored vertices. Note that the proof also applies when the new red explored vertex coincides with a formerly explored blue one. Hence, the statement of the lemma follows.
In NW n (ρ), the shortest path from u to v through x necessarily uses the shortest path between u and x. As a result, in the CMBP we do not need later occurrences of the label x either. Hence, we mark the second (or any later) occurrence of a label thinned, and all its descendants ghosts. We move towards bounding the proportion of ghosts among active individuals to carry on with the CMBP approximation. To determine whether a vertex is a ghost, we need knowledge about its ancestors.

Ancestral Line
We approach the problem of ghost actives with the help of the ancestral line. We define the ancestral line AL(y) of a vertex y as the chain of particles leading to y from the root, including the root and y itself. Then an alive particle is a ghost if and only if at least one of its ancestors is thinned. The ancestral line was introduced by Bühler in [16,17] with the following observation: to each time interval [T_k, T_{k+1}) we can allocate a unique particle on the ancestral line that was active in [T_k, T_{k+1}). For the following observations, we condition on {D_i, i = 1, . . . , k}, where D_i is the total number of offspring of the i-th splitting particle. Denote by G_k the generation of a uniformly chosen alive (active) particle Y after the k-th split. Then G_k = L_1 + L_2 + · · · + L_k, where the indicators L_i are conditionally independent and L_i = 1 if and only if the ancestor of Y that was alive in the time interval [T_i, T_{i+1}) was newborn (born at T_i). (A rewording of the indicators L_i is as follows: L_i = 1 if and only if the i-th splitting particle is in AL(Y).) Since Y is chosen uniformly, and at each split the individual to split is also chosen uniformly among the currently active individuals, each one of these active individuals is equally likely to be an ancestor of Y. Further, in the interval [T_i, T_{i+1}), D_i many particles are newborn, and S_i many are alive, which yields the probability P(L_i = 1 | D_i, i = 1, . . . , k) = D_i/S_i, see the discussion at the beginning of [17, Sect. 2.A]. We arrive at the following corollary: Corollary 2.10 The probability that the i-th dying particle is an ancestor of Y, a uniformly chosen active vertex after the k-th split, is P(v_i ∈ AL(Y) | D_i, i = 1, . . . , k) = D_i/S_i. Expected proportion of thinned actives Let us combine Corollary 2.10 and Lemma 2.9. To be able to do so, we need the following lemma. We will provide its proof later on.
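The indicator representation G_k = L_1 + · · · + L_k with P(L_i = 1 | D) = D_i/S_i can be checked by simulation: fix an offspring sequence, run the uniform-splitting dynamics many times, and compare the average generation of a uniformly chosen active particle with Σ_i D_i/S_i. A sketch (the offspring sequence and all names are ours):

```python
import random

def generation_of_uniform_active(D, rng):
    # run k = len(D) splits: at step i a uniformly chosen active
    # particle is replaced by D[i] children of the next generation;
    # return the generation of a uniformly chosen active at the end
    gens = [0]
    for d in D:
        g = gens.pop(rng.randrange(len(gens)))
        gens.extend([g + 1] * d)
    return gens[rng.randrange(len(gens))]

D = [2, 3, 1, 2, 2, 3, 2]       # a fixed offspring sequence (our choice)
S = []
s = 1
for d in D:
    s += d - 1                  # S_i = S_{i-1} - 1 + D_i actives alive
    S.append(s)
expected = sum(d / s for d, s in zip(D, S))   # sum_i D_i / S_i

rng = random.Random(3)
trials = 20000
mean = sum(generation_of_uniform_active(D, rng)
           for _ in range(trials)) / trials
```

The Monte Carlo average of G_k agrees with Σ_i D_i/S_i up to sampling error.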

Lemma 2.11
For every ε > 0, there exists a positive integer-valued random variable K = K(ε) such that K is always finite and for every i > K, S_i = iλ(1 + o(i^{−1/2+ε})). Recall that t_n = (log n)/(2λ), chosen such that the number of active vertices is of order √n, and that A(t), A(t), N (t), N(t) denote the sets and numbers of active and dead individuals in the CMBP at time t, respectively.

Lemma 2.12 Let A_G(t) := {y ∈ A(t) : y is a ghost} denote the set of ghost active vertices at time t, and A_G(t) its size. For every fixed s ∈ R, the proportion A_G(t_n + s)/A(t_n + s) tends to 0 in probability as n tends to infinity.
Proof Let V ∈ A(t) be a uniformly chosen active individual. Recall that v_i is the particle that dies at T_i. For an event E, let us write P_k(E) := P(E | D_i, i = 1, . . . , k). Using this notation and Corollary 2.10 for the representation of the ancestral line of V ∈ A(t), we can bound the probability that V is a ghost; here we use that the labeling is independent of the family tree. We apply Lemma 2.11 by splitting the sum into the parts up to K and above K, using that all particles are either active or dead in the process, and that, with a possible modification of K, we can guarantee 1 + o(i^{−1/2+ε}) > 1/2 for all i > K. Next, we use Corollary 2.8 and Theorem 2.6 to bound the remaining sum.
Setting t_n = (log n)/(2λ), the right hand side tends to 0 as n → ∞, since W^(n) → W and K is a tight random variable (it does not depend on n).
Let us now return to the proof of Lemma 2.11. This lemma follows from [4, Theorems 1 and 2]. Here, we restate [4, Theorem 1] in our notation and for a special case, where each eigenvalue has multiplicity 1. This is sufficient for our purposes and easier than the general case. For an arbitrary vector a ∈ R^p with the property v · a = 0, one defines the corresponding normalization C_t. We also restate [4, Theorem 2] without change.
Proof of Lemma 2.11 We use the previous two theorems for the 2-type branching process defined in Sect. 2.3. Since π and v_2 are linearly independent, for any a ≠ (0, 0) with πa = 0, necessarily v_2 a ≠ 0, which implies μ = λ_2 in (2.7). The eigenvalues of the mean matrix M(t) are e^{λt} and e^{λ_2 t}. The condition 2μ < λ in Theorem 2.13 is then equivalent to 2λ_2 < λ, which follows from the nonnegativity of ρ through simple algebraic computations, see (2.1). The asymptotic variance σ² and C_t take an explicit form in this case. Applying the theorem at the split times T_i, we get that there are only finitely many indices i such that A(T_i)a/C_{T_i} > 2. Let the maximum of these indices be K, a random variable. Since T_i − (log i)/λ has an almost sure limit by Theorem 2.7, T_i is of order log i. This implies that C_{T_i} is of order (i log log i)^{1/2}, and by the definition of almost sure convergence, C_{T_i} exceeds i^{1/2+ε} only finitely many times for every ε > 0.
The fluctuation is of smaller order than S_i itself, which means we can indeed write S_i = iλ(1 + o(i^{-1/2+ε})). For more detail on this, see the proof of [26, Corollary 3.16].
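The eigenvalue condition above can be probed numerically. A minimal sketch, assuming the 2-type mean offspring matrix m = [[ρ, 2], [ρ, 1]] (a blue individual has Poi(ρ) blue and 2 red children, as recalled in Sect. 4.3; the red row, one red and Poi(ρ) blue children with unit-rate exponential lifetimes, is our assumption since (2.1) is not reproduced here):

```python
import numpy as np

def malthusian_params(rho):
    """Eigenvalues of Q = m - I, where m is the assumed 2-type mean
    offspring matrix [[rho, 2], [rho, 1]] (rows/columns: blue, red)
    and lifetimes are Exp(1); lambda is the larger eigenvalue."""
    m = np.array([[rho, 2.0], [rho, 1.0]])
    eig = np.sort(np.linalg.eigvals(m - np.eye(2)).real)
    return eig[1], eig[0]          # (lambda, lambda_2)

for rho in [0.1, 1.0, 5.0, 50.0]:
    lam, lam2 = malthusian_params(rho)
    assert 2 * lam2 < lam          # the condition used for Lemma 2.11
    print(f"rho={rho}: lambda={lam:.4f}, lambda_2={lam2:.4f}")
```

For this matrix λ = ((ρ − 1) + √(ρ² + 6ρ + 1))/2 while λ_2 is negative (the determinant of Q is −2ρ < 0), so 2λ_2 < λ indeed holds for every ρ > 0.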

The Number of Multiple Active and Active-Explored Labels
Recall that both in the exploration process as well as in the branching process there might be multiple occurrences of active vertices, see Remark 2.1, as thinning only prevents multiple explored labels.
Later we want to use that the number of different active labels that are not ghosts at T_i is approximately the same as S_i, i.e., there are not many multiple occurrences. In Lemma 2.12 we have seen that the proportion of ghosts is negligible on the time scale t_n, but we still have to deal with labels that are multiply active, or explored and active at the same time. We discuss these issues in the following five cases:
1. A blue active vertex has already been explored.
2. A red active vertex has already been explored.
3. A blue active vertex is also red active.
4. A vertex is double red active.
5. A vertex is double blue active.
We denote by p_α(t) the probability that a uniformly chosen active vertex falls under case α = 1, ..., 5 at time t, which is the same as the proportion of vertices falling under case α among all active vertices. Case 1. Blue active being already explored At time t there are at most N(t) explored labels that are not thinned. Under the condition that the active vertex is blue, its label is chosen uniformly over [n], so the probability that this label has already been explored is at most N(t)/n. Substituting N(t_n + s) from Corollary 2.8 with t_n + s = (1/(2λ)) log n + s gives the bound (2.8). Case 2. Red active being already explored This case can be treated similarly to the thinning of red vertices, so we also use the notation introduced there. The label of a red active vertex is explored if and only if two intervals are about to grow into each other: the furthest explored red vertices in the two intervals are neighbors. We call such intervals neighbors. For two neighboring intervals, the active vertex at the end of each interval is explored in the other interval. Let I_k and I_j, 1 ≤ k < j ≤ N_B(t), be intervals with blue particles with labels c_k and c_j, respectively. Conditioned on c_k and the interval lengths, there are two possibilities for I_k and I_j to be neighbors: c_j = c_k + r_k + l_j + 1 or c_j = c_k − l_k − r_j − 1. Thus for each pair of indices the probability of the intervals being neighbors is 2/n (these events are not independent, but expectation is linear). Summing over all pairs of indices and dividing by the number of all red actives gives the proportion of case 2 red actives among all red actives.
Case 3. Blue active being red active Using that the labels of blue vertices are chosen uniformly, P(v is red and blue active) = A_R(t_n + s)/n. (2.10)
Case 4. Multiple red active vertices This case is similar to Case 2. A vertex v can be red active twice if the two intervals that it belongs to are "almost neighbors", that is, both have v as an active vertex on one of their ends (v is the only vertex separating them). Conditioning on the location of one of the intervals, the blue vertex in the other interval can be at 2 different locations, hence p_4(t_n + s) = p_2(t_n + s). (2.11)
Case 5. Multiple blue active vertices Again, the label of a blue vertex is chosen uniformly at random, hence the probability that the label of an active blue vertex v coincides with another active blue label is at most A_B(t)/n, giving (2.12). Proof We start with the proof of formula (2.13). By the previous arguments, a lower bound can be obtained by subtracting the individual probabilities for red and blue vertices to be deleted (note that this is a crude bound, since we do not weight by the proportions of red and blue active labels); we summed the right-hand sides of (2.8), (2.9), (2.10), (2.11) and (2.12) to obtain the rhs. Now we use that t_n + s = log n/(2λ) + s, the asymptotics of N(t) from Corollary 2.8, and Theorem 2.6, to obtain an expression that tends to 1, since W^{(n)} → W a.s. by (2.4) and Theorem 2.6. To prove formula (2.14), we use the first moment method to bound the number of thinned vertices. By Corollary 2.9, the probability that the i-th explored vertex is thinned is at most 2i/n. Hence, conditioned on the size of the explored set N(t_n + s), the expected number of thinned vertices is as displayed, where we used Corollary 2.8 for the asymptotic size of N(t_n + s) to obtain the second line. Since this conditional expectation is of order 1, Markov's inequality implies that the number of thinned vertices is at most of order log n w.h.p. We show that the number of ghost explored vertices is of the same order. Note that the proportion of ghost actives among actives tends to 0 by (2.13) (see also Lemma 2.12). Recall that the next explored vertex is a uniformly chosen active vertex, hence the proportion of ghosts becoming explored among explored vertices also tends to 0 in probability. One can make this argument rigorous by using the first moment method and the upper bound from (2.6) to get that the expected number of ghost explored vertices is also of constant order. Markov's inequality finishes the proof again.
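The first-moment bound above — about C√n uniform labels drawn from [n] produce only O(1) repeated labels — can be illustrated by a small Monte Carlo sketch (the constant C and the sample sizes below are arbitrary choices, not taken from the paper):

```python
import random

def mean_repeats(n, C=2.0, trials=2000, seed=0):
    """Draw about C*sqrt(n) labels uniformly from {0,...,n-1} with
    replacement; count draws whose label was seen before (the
    analogue of thinned vertices). Returns the empirical mean."""
    rng = random.Random(seed)
    m = int(C * n ** 0.5)
    total = 0
    for _ in range(trials):
        seen = set()
        for _ in range(m):
            lbl = rng.randrange(n)
            if lbl in seen:
                total += 1
            seen.add(lbl)
    # each of the m draws collides with probability < m/n, so the
    # mean is about m^2/(2n) ~ C^2/2, uniformly in n
    return total / trials

for n in [10_000, 100_000, 1_000_000]:
    print(n, mean_repeats(n))
```

The printed means stay near C²/2 = 2 as n grows, matching the claim that the conditional expected number of thinned vertices is of constant order.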
The conclusion of this section is summarized in the following corollary.

Corollary 2.16
Fix n ≥ 1 and ρ > 0. Consider the thinned CMBP with label u for the root. Then there is a coupling of the shortest weight tree SWT_u(t) in NW_n(ρ) to the evolution of the thinned CMBP as long as t ≤ t_n + M for some arbitrarily large M ∈ R. Further, the set of active vertices in the thinned CMBP can be approximated by the set of labeled active vertices in the original CMBP, in the sense that the proportion of differing labels among the actives, relative to the total number of active vertices, tends to zero as n → ∞, in the sense of Corollary 2.15.

Connection Process
Now that we have a good approximation of the shortest weight tree (SWT) started from a vertex, it provides us with a method to observe the shortest weight path between two vertices. Let us give a rough sketch of this method before moving to the details. The previous section provides us with a coupling of a CMBP and the SWT as long as the total number of vertices in the SWT is of order √n. To find the shortest weight path between vertices U and V, we grow the shortest weight tree from one of the vertices (SWT_U) until time t_n (its size is then of order √n). Then, conditioned on the presence of SWT_U(t_n), we grow SWT_V(·) and observe the first time these two trees intersect. The shortest weight path is determined by the first intersection of the explored sets of vertices in the two processes. However, to avoid complications with vertices that would be explored in both SWTs, and since we have a good bound on the effective size of the set of active vertices, it turns out to be easier to look at the times when the first few active vertices of SWT_U(t_n) become explored in SWT_V(·). Note that a vertex w in the active set of SWT_U(t_n) is at distance t_n + R_w(t_n) from U, where R_w(t_n) denotes the remaining lifetime, see Sect. 2. Then we have yet to minimize the total length of the paths over vertices in A_U(t_n) ∩ N_V(·). This is what we shall carry out now rigorously.
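The scheme just sketched can be mimicked end-to-end on small instances. A brute-force sketch (plain Dijkstra on the whole graph rather than the two-tree exploration used in the proof; parameters are illustrative) builds NW_n(ρ), attaches i.i.d. Exp(1) weights and returns the least-weight distance together with the hopcount:

```python
import heapq, math, random

def nw_fpp(n, rho, u, v, seed=0):
    """FPP on the Newman-Watts graph: a cycle on n vertices plus each
    non-cycle pair independently with probability rho/n, all edges
    carrying i.i.d. Exp(1) weights. Returns (distance, hopcount)
    between u and v computed by Dijkstra."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    def add_edge(i, j):
        w = rng.expovariate(1.0)
        adj[i].append((j, w))
        adj[j].append((i, w))
    for i in range(n):                       # cycle edges
        add_edge(i, (i + 1) % n)
    for i in range(n):                       # shortcut edges (O(n^2) loop,
        for j in range(i + 2, n):            # kept simple for clarity)
            if (i, j) != (0, n - 1) and rng.random() < rho / n:
                add_edge(i, j)
    dist = [math.inf] * n
    hops = [0] * n
    dist[u] = 0.0
    pq = [(0.0, u)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist[x]:
            continue
        for y, w in adj[x]:
            if d + w < dist[y]:
                dist[y], hops[y] = d + w, hops[x] + 1
                heapq.heappush(pq, (dist[y], y))
    return dist[v], hops[v]

d, h = nw_fpp(800, 1.0, 0, 400, seed=1)
print(d, h)   # weight and number of edges of the least-weight path
```

For large n the distance should concentrate around (1/λ) log n, in line with Theorem 1.1.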

Definition 3.1 (Collision and connection)
We grow SWT_U, the shortest weight tree of U, until time t_n = (1/(2λ)) log n, and then fix it. Then we grow SWT_V until time t_n + M, for some large M ∈ R, conditioned on the presence of SWT_U(t_n). We say that a collision happens at time t_n + s when an active vertex in SWT_U(t_n) becomes explored in SWT_V at time t_n + s. Denote the set of collision times by the point process (t_n + P_i)_{i∈N}. If a collision happens at vertex x_i at time t_n + P_i, this determines a path between U and V with length 2t_n + P_i + R_{x_i}(t_n). The length of the shortest weight path is then given by the minimum of these lengths over all collision events.
We can see that when SWT_V is grown after SWT_U, the labels belonging to explored vertices in SWT_U cannot be used again, leading to some extra thinned vertices in SWT_V. We claim that the number of additional ghosts is not too large. (Since we would like to get a bound on the effective size of the active set of SWT_V(t_n + s), we must delete the descendants of vertices that formed earlier collision events.)

Claim 3.2 Consider the case of growing SWT V after SWT U (t n ) on the same graph NW n (ρ).
Then the effective size of the active and explored set in SWT V for times t = t n + s is asymptotically the same as the size of the active and explored set respectively, that is, the statements of Corollary 2.15 remain valid for SWT V as well.
Note that for the active set, it suffices to bound the proportion of ghosts, as the error terms caused by multiple active, or active-explored vertices are not increased by the presence of SWT U .
Proof We consider the computations in the proof of Lemma 2.12, using (2.5). Recall that the proportion of ghosts depends both on the thinning probability of the i-th explored vertex and on its being an ancestor of a uniform active vertex.
The arguments with the ancestral line (see Sect. 2.4.2) remain valid without any modification; we only have to examine the change in the thinning probability.
In case the i-th explored vertex is blue, its label is chosen uniformly, thus the probability that this label coincides with a previously chosen label equals (N_U(t_n) + i − 1)/n. In case the i-th explored vertex in SWT_V is red, we can use the same idea as before: it has the label of an already explored vertex if and only if two intervals grow into each other at the i-th step. We now consider the union of the intervals in SWT_U and SWT_V. Conditioned on the interval that grows, for any other interval the probability that the two grow into each other is 2/n. The number of intervals is at most the total number of blue explored vertices, N^U_B(t_n) + N^V_B(T_i). Hence the probability that the labeling fails at the i-th step of SWT_V, if this is a red vertex, is at most 2(N^U_B(t_n) + N^V_B(T_i))/n. Since the color of the i-th explored vertex is either blue or red, we get P(labeling fails in SWT_V at step i) ≤ 2(N_U(t_n) + i)/n. For the probability that a uniformly chosen active vertex in SWT_V is a ghost, similarly to (2.5), we obtain the analogous sum. By Lemma 2.12, the first sum on the rhs tends to 0 as n tends to ∞. For the second sum, let us recall the a.s. finite K from Lemma 2.11 and split the sum again; we use Corollary 2.8, (2.4) and t_n = log n/(2λ). For the second part of the sum we apply Lemma 2.11 again. Using E[D_i] ≤ λ + 2, we bound the expected value of the sum in (3.3) with the tower rule.
Since the logarithm is concave, we use Jensen's inequality. From Theorem 2.5 it follows that E[N_V(t_n + s)] = (0, 1) exp{Q (t_n + s)} 1, where 1 = (1, 1)^T is a column vector. Using the Jordan decomposition of the matrix Q and exponentiating, elementary matrix analysis yields that the leading order is determined by the main eigenvalue λ, and hence (0, 1) exp{Q(t_n + s)} 1 ≤ e^{λ(t_n + s)} C_1 for some constant C_1 ≥ 0. Let us then use this bound with t_n + s = log n/(2λ) + s to give an upper bound on the rhs of (3.3), and set C_2 := 2C_1(λ + 2). Then Markov's inequality yields the bound on the w.h.p. event displayed above. Since we showed that the proportion of ghost actives tends to 0, the proportion of ghost explored vertices also tends to 0, by arguments similar to those in the proof of Corollary 2.15. For the number of thinned vertices, we calculate the expected value as before, with thinning probability as in (3.1). Conditioned on the sizes of both explored sets N_U(t_n) and N_V(t_n + s), the expected number of thinned vertices in SWT_V at time t_n + s is of constant order. We finish the proof by applying Markov's inequality to show that this number is at most of order log n w.h.p.

The Poisson Point Process of Collisions
Recall that we say that a collision event happens at time t_n + s when an active vertex in SWT_U(t_n) becomes explored in SWT_V at time t_n + s. First we show that for each pair of colours, with respect to the parameter s in t_n + s, the set of points (P_i)_{i∈N} forms a nonhomogeneous Poisson point process (PPP) on R, and that these PPPs are asymptotically independent. We consider the intensity measure μ(dt), t ∈ R, as the derivative of the mean function M(t) (the expected number of points up to time t). To determine the intensity measure of the collision process, we consider the four collision point processes, one for each possible pair of colours. None of these PPPs is empty: since the labels of blue vertices are chosen uniformly, they can meet any colour, and considering the growing set of intervals, we see that red can meet red as well (see Fig. 3).

(3.5)
where the 1 + o(1) factor only depends on n. The total intensity measure of the collision Poisson point process is then the sum of the four intensities. It is not hard to show (e.g. using the Borel-Cantelli lemma) that these PPPs have only finitely many points on (−∞, 0), hence indexing the points by i ∈ N is well defined. Before we proceed to the proof, we take a small analytic excursion. Let (N_1(s), N_2(s)) denote the number of type 1 and type 2 successes (using the success probabilities at time s). Then the collection of random variables (N_1(s), N_2(s))_{s>0}, as n goes to infinity, converges in probability to a two-dimensional Poisson point process with mean C f_1(s) × C f_2(s). In particular, the processes of type 1 and type 2 successes are asymptotically independent. The statement remains valid when type 1 and type 2 successes can occur at the same time, with probability R_3(s) = o(1/√n) for all s. Remark 3.5 (Analogue with an urn model) Looking at Proposition 3.4 in a simpler (but more restrictive) way, we can think of an urn model with n balls, where balls are gradually painted green and purple such that there are nR_1(s) = f_1(s)√n green and nR_2(s) = f_2(s)√n purple balls at time s. We allow a few balls to be both green and purple, with their number satisfying nR_3(s) = o(√n) for all s. We draw C√n times with replacement; then (N_1(s), N_2(s)) denotes the number of drawn balls that had been painted green and purple by time s, respectively, and this converges to a two-dimensional PPP with the above mean.
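Remark 3.5 is easy to probe by simulation. A minimal sketch with arbitrary choices C = 2, f_1 = 1.5, f_2 = 0.5 and disjoint colour classes compares the empirical joint law of (N_1, N_2) with the product of Poisson(C f_1) and Poisson(C f_2) masses:

```python
import math, random
from collections import Counter

def urn_sample(n=250_000, C=2.0, f1=1.5, f2=0.5, trials=3000, seed=0):
    """Draw C*sqrt(n) balls with replacement from n balls, of which
    f1*sqrt(n) are green and f2*sqrt(n) purple (disjoint); return the
    empirical distribution of (green hits, purple hits)."""
    rng = random.Random(seed)
    s = int(n ** 0.5)
    draws, g, p = int(C * s), int(f1 * s), int(f2 * s)
    counts = Counter()
    for _ in range(trials):
        n1 = n2 = 0
        for _ in range(draws):
            b = rng.randrange(n)
            if b < g:
                n1 += 1
            elif b < g + p:
                n2 += 1
        counts[(n1, n2)] += 1
    return counts, trials

counts, trials = urn_sample()
poi = lambda k, m: math.exp(-m) * m ** k / math.factorial(k)
emp = counts[(3, 1)] / trials
print("P(N1=3, N2=1):", emp, "vs Poisson product:", poi(3, 3.0) * poi(1, 1.0))
```

The empirical joint mass is close to the product form, illustrating both the Poisson limit and the asymptotic independence of the two success counts.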
In the special case where a double success has probability 0 (there are no double-painted balls), we obtain (3.9). Note that the right-hand side converges to the rhs of (3.8). This finishes the first statement of the proposition.
In case a success can be of both type 1 and type 2 at the same time (with probability R_3(s)), by inclusion-exclusion these cases are excluded twice on the intersection of [a, b] and [c, d] in formula (3.9). When [a, b] ∩ [c, d] = ∅, (3.9) remains valid. By the symmetric role of type 1 and type 2 successes we can assume a ≤ c; then the left-hand side of (3.10) becomes the displayed expression as n tends to infinity,
and we have already shown that this converges to the right-hand side of (3.8).
Proof of Theorem 3.3 In Corollary 2.15 and in Claim 3.2 we showed that the effective sizes of the active and explored sets at times t_n + s for s ∈ R are asymptotically the same as the numbers of active and explored individuals in the CMBP, respectively, so we can use the asymptotics of (A_R(t), A_B(t)) from Theorem 2.6 and the asymptotics of (N_R(t), N_B(t)) from Corollary 2.8. For the coming paragraphs, for s ∈ R and an event E we define the notation below. Blue-blue and red-blue collision By the definition of the sets C_{B,B}(s), C_{R,B}(s), we can use the following indicator representation. Recall that the labels in A^U_B(t_n) are chosen independently and uniformly in [n]. As a result, the pair (C_{B,B}(s), C_{R,B}(s)) has multinomial distribution with parameters A^U_B(t_n) for the number of draws and N^V_B(t_n + s)/n, N^V_R(t_n + s)/n for the two success probabilities, while a double success has probability 0, since double-explored vertices are thinned.
Note that 1{x ∈ N^V_q(t_n + s_1)} ≤ 1{x ∈ N^V_q(t_n + s_2)} when s_1 ≤ s_2 for q = R, B, so this description fits the conditions of Proposition 3.4, since for q = R, B the success probabilities have the required asymptotics, where we have used the asymptotic results for A(t), N(t) from Theorem 2.6 and Corollary 2.8 and the definition of W^{(n)} in (2.4). Note that the 1 + o(1) factor only depends on n and comes from the error of the possible deviation from the stationary distribution (π_R, π_B) in the approximations. A direct application of Proposition 3.4 shows that (C_{B,B}(s), C_{R,B}(s)) converges to two independent Poisson processes. Differentiating with respect to s yields the required result for the intensity measures (rate functions). Blue-red collision We need a slightly longer argument to get the independence of this process from the other processes, i.e., that C_{B,R}(s) is asymptotically independent of C_{R,B}(s) and C_{B,B}(s), and later also of C_{R,R}(s). For this, let us recall that we stopped the evolution of SWT_U at time t_n. Hence we consider SWT_U(t_n) as a fixed set of intervals {I_k, k = 1, ..., N^U_B(t_n)} (some of them might possibly have merged by time t_n). Again, we write the indicator representation. Consider an individual x ∈ A^U_R(t_n). Let us write I_x for the explored interval of x, c_x for the location of its center (an already explored blue individual in SWT_U(t_n)), and x' for the other active red individual at the other end of I_x. Let us write l_x, r_x for the number of explored red vertices to the left and to the right of c_x in I_x, i.e., x is at location c_x − l_x − 1 and x' is at c_x + r_x + 1, or the other way round. Note that c_x, an explored blue label, was chosen uniformly, and as a result the marginal distributions of the labels of x, x', c_x are all uniform. We can rewrite the sum in (3.12) as (3.13), where N^V_B(t_n + s) + a stands for shifting the whole set N^V_B(t_n + s) by a modulo n.
We aim to show that this converges to a Poisson point process with mean N^U_B(t_n) · 2N^V_B(t_n + s)/n by using Proposition 3.4. Indeed, consider the event {c_x ∈ N^V_B(t_n + s) − r_x − 1} as a type 1 success and the event {c_x ∈ N^V_B(t_n + s) + l_x + 1} as a type 2 success. Both have probability N^V_B(t_n + s)/n, since c_x is a blue, hence uniform, label and a shift does not change the size of a set. In this case a double success occurs when c_x falls in both shifted copies of the set. Let N^V_{B,e}(t_n + s) denote the 'effective size', i.e., the number of non-thinned and non-ghost labels in N^V_B(t_n + s), which is asymptotically a random constant times √n, see Claim 3.2 and Corollary 2.8. With this notation, for each fixed c_x, the probability of a double success is the same as the probability that a uniform set of size N^V_{B,e}(t_n + s) contains two fixed labels, which is a random constant times 1/n, and hence clearly o(1/√n). Thus, by Proposition 3.4, C_{B,R}(s) is the union of two asymptotically independent Poisson processes, each with mean N^U_B(t_n) N^V_B(t_n + s)/n; the factor 2 arises since each interval contains 2 active red vertices, one on each end. Hence (3.14) follows by Theorem 2.6 and Corollary 2.8, and differentiation yields the result for the intensity measure.
The advantage of the form in (3.13) is that it reveals the independence of the processes C_{B,B}(s), C_{R,B}(s), C_{B,R}(s): in the first two cases the draws were indexed by the active blue individuals, while here they are indexed by the explored blue individuals, which are independent and uniform; hence dependence can come from shared indices only. Both index sets are uniform and of order √n, so it is easy to see that the expected size of their intersection is constant, hence, by Markov's inequality, it is at most of order log n w.h.p. As a result, the number of shared indices is o(√n), and we can use a modification of Proposition 3.4 to see that C_{B,R}(s) is asymptotically independent of C_{B,B}(s), C_{R,B}(s). Red-red collision We write again the indicator representation. Here we aim for a description similar to that in (3.13). Note that the argument used in the previous paragraph is 'almost valid', in the sense that we can describe the location of x ∈ A^U_R(t_n) by describing the location of c_x ∈ N^U_B(t_n). The extra problem we face here is the following: the right end of a red interval can only merge with the left end of another interval (and not with the right end). As a result, simply changing the index N^V_B(t_n + s) to N^V_R(t_n + s) in formula (3.13) is not quite enough. Let us quickly introduce N^V_{R,left}(t_n + s) and N^V_{R,right}(t_n + s) for the sets of left-type and right-type red individuals in SWT_V(t_n + s), respectively. Then we can write the analogous representation: we shift the set of left-type red particles to the left by r_x + 1 to get the possible locations of c_x, so that the right-type active individual in I_x merges with a left-type explored individual, that is, with the left side of an interval in SWT_V(t_n + s). Similarly, we shift the set of right-type red particles to the right by l_x + 1 to get the possible locations of c_x for a collision on the other side.
We remark on the following issue: when two intervals J_i(s) ∈ SWT_V(t_n + s) and I_k ∈ SWT_U(t_n) collide at time s, in principle we should stop the evolution of J_i(s), that is, for all s' > s we should have J_i(s') ≡ J_i(s). But this would cause computational difficulties later, since then we would have to condition on all the earlier collisions to be able to calculate the intensity of the next one. Hence it is easier to ignore this effect and make the following approximation on the number of red-red collisions: we let J_i(s) grow further, and it might collide with more vertices inside I_k. The error caused by such events is negligible, since such events have already been treated when we investigated the 'extra' thinning of SWT_V imposed by SWT_U(t_n), which had a negligible contribution in the sense of Claim 3.2.
We aim to use Proposition 3.4 to show that C_{R,R}(s) converges to a Poisson point process which is asymptotically independent of the other three PPPs. To show that it converges to a PPP, let {c_x ∈ N^V_{R,left}(t_n + s) − r_x − 1} correspond to type 1 and {c_x ∈ N^V_{R,right}(t_n + s) + l_x + 1} to type 2 successes, respectively. Since c_x is a blue label that is chosen uniformly, the success probabilities are N^V_{R,left}(t_n + s)/n and N^V_{R,right}(t_n + s)/n. In this case a double success is the event that c_x falls in the intersection of the two shifted sets; again by the fact that c_x is a uniformly chosen label, the probability of this event is the size of the intersection divided by n. Note that Proposition 3.4 can be applied once we show that the size of this intersection is o(√n). Also note that at times t_n + s the cumulative length of all the intervals is of order √n, which also implies that r_x and l_x are at most of order √n; hence, on a cycle of length n, the left side of an interval shifted left cannot intersect the right side of the same interval shifted right. Now we only have to consider the left side of an interval shifted left intersecting the right side of another interval shifted right. Note that their centers are blue labels that were chosen uniformly in [n], hence their shifted positions are also uniform. The number of vertices in the intersection of the shifted sets is thus stochastically dominated by the number of thinned vertices, which was handled and proven to be o(√n), see the proof of Claim 3.2.

Proposition 3.4 now yields that C_{R,R}(s) converges to a PPP with mean
where we used that N U B (t n ) = A U R (t n )/2, since there are two red active individuals in each interval around an explored blue label, and the asymptotics from Theorem 2.6 and Corollary 2.8.
To show independence, note that now the number of draws in the multinomial distribution is indexed by the explored blue labels. To see that C_{R,R}(s) and C_{B,R}(s) are also asymptotically independent, consider falling in the set (N^V_{R,left}(t_n + s) − r_x − 1) ∪ (N^V_{R,right}(t_n + s) + l_x + 1) to be a type 1 success and falling in the shifted copies of N^V_B(t_n + s) to be a type 2 success [compare (3.13) to (3.16)]. Our aim is once again to apply Proposition 3.4. For this, we have to show that the probability of a double success is negligible, that is, the size of the intersection of the two systems of shifted sets is o(√n). The self-intersections of the left-shifted and right-shifted systems have already been handled as thinned vertices and their sizes are o(√n), see the proof of Claim 3.2. By symmetry, it is enough to consider the intersection between the system of right-shifted right sides (i.e., N^V_{R,right}(t_n + s) + l_x + 1) and the left-shifted centers. Since the sum of all interval lengths is of order √n, both the shifts and the interval lengths are at most of order √n, but the cycle length is n; hence the left-shifted center c_k − r_x − 1 of an interval I_k cannot intersect the right-shifted right side of the same interval. For any other interval I_j, its center c_j is another explored blue label, hence it is independent of c_k and also uniform in [n]. As a result, the size of the intersection has the same distribution as the size of N^V_B(t_n + s) ∩ N^V_{R,right}(t_n + s), which can be upper bounded by the number of thinned vertices, and that has been shown to be o(√n). This proves that we can indeed apply Proposition 3.4, and C_{R,R}(s) and C_{R,B}(s) are also asymptotically independent.
We emphasize that to obtain (3.5) we assumed that the numbers of actual intervals in SWT_U(t_n) and SWT_V(t_n + s) are N^U_B(t_n) and N^V_B(t_n + s), respectively. This is not entirely true, due to the fact that intervals within SWT_U or within SWT_V might already have merged. However, in this case some of the included vertices are ghosts: Corollary 2.15 shows that the effective size of the active sets at times t_n + s for s ∈ R is asymptotically the same as the number of active individuals in the CMBP, which, by the fact that every interval has precisely two active red vertices, implies that the number of disjoint active intervals is also asymptotically the same as N^U_B(t_n) and N^V_B(t_n + s), respectively, in the two processes.

Proof of Theorem 1.1
It is well known [34] that if (E_i)_{i∈N} is a collection of i.i.d. random variables with cumulative distribution function F_E(y), and the points (P_i)_{i∈N} form a one-dimensional Poisson point process with intensity measure μ(ds) on R, then the points (P_i, E_i)_{i∈N} form a two-dimensional nonhomogeneous Poisson point process on R × R with intensity measure μ(ds) × F_E(dy). In our case, to get the shortest path between U and V, recall from Definition 3.1 that we have to minimize the sum of the collision time and the remaining lifetime over the collision events. Mathematically, we want to minimize the quantity P_i + E_i over all points (P_i, E_i) of the two-dimensional PPP with intensity measure ν(ds × dy) := μ(ds) × e^{−y} dy, since the remaining lifetimes are i.i.d. exponential random variables.
Note that the event {min_i P_i + E_i ≥ z} is equivalent to the event that there is no point of this two-dimensional PPP in the infinite triangle Δ(z) = {(x, y) : y > 0, x + y < z} (see Fig. 4). We calculate ν(Δ(z)) accordingly.
For short, we introduce the notation W^{(n)}; then we can reformulate ν(Δ(z)) = W^{(n)} e^{λz}. (3.18) Let us turn our attention back to P_n(U, V), the shortest weight path between U and V. By the previous argument, we conclude that the probability that its length exceeds 2t_n + z equals exp(−ν(Δ(z))). Rearranging the left-hand side and substituting the computed value of ν(Δ(z)), then substituting t_n = log n/(2λ) and setting z := −x/λ, we recognize on the right-hand side the cumulative distribution function of a shifted Gumbel random variable. Rearranging and substituting W^{(n)} from (3.17), using that the martingales (W^{(n)}_U, W^{(n)}_V) converge a.s. to (W_U, W_V), and that the factor 1 + o(1) only depends on n and becomes an additive term when taking logarithms, we obtain the claim. This finishes the proof of Theorem 1.1.
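The Gumbel limit can be checked on a toy point process. A minimal sketch, with arbitrary constants c = 1 and λ = 2 standing in for the paper's W^{(n)} and λ: sample the PPP with intensity cλe^{λs} ds on (−∞, T], attach i.i.d. Exp(1) marks, and compare P(min_i(P_i + E_i) ≥ z) with the closed form exp(−c e^{λz}/(1 + λ)) obtained by integrating the product intensity over the triangle Δ(z):

```python
import math
import numpy as np

def minima(c=1.0, lam=2.0, T=3.0, trials=20_000, seed=0):
    """Sample the PPP with intensity c*lam*exp(lam*s) ds on (-inf, T]
    (finite total mass c*e^{lam*T}), attach i.i.d. Exp(1) marks E_i,
    and return min_i (P_i + E_i) for each trial."""
    rng = np.random.default_rng(seed)
    mass = c * math.exp(lam * T)
    out = np.full(trials, np.inf)
    for t in range(trials):
        k = rng.poisson(mass)
        if k == 0:
            continue
        # inverse-CDF sampling of positions on (-inf, T]
        P = T + np.log(1.0 - rng.random(k)) / lam
        E = rng.exponential(1.0, k)
        out[t] = np.min(P + E)
    return out

c, lam, z = 1.0, 2.0, 0.0
m = minima(c, lam)
emp = float(np.mean(m >= z))
theory = math.exp(-c * math.exp(lam * z) / (1 + lam))
print(emp, theory)   # both close to exp(-1/3)
```

Since P(min ≥ z) = exp(−c e^{λz}/(1+λ)), the minimum is a (negatively oriented) shifted Gumbel variable on the scale 1/λ, mirroring the limit law in Theorem 1.1.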

Epidemic Curve
Recall the definition of the epidemic curve function from Sect. 1.2. The discussion of the epidemic curve consists of three parts: first, we find the correct function f by computing the first moment of I n (t, U ). Then we prove the convergence in probability by bounding the second moment. Finally, we give a characterization of M W V , the moment generating function of the random variable W V , that determines the epidemic curve function f .

First Moment
First, we condition on the value W^{(n)}_U from the martingale approximation of the branching process of the uniformly chosen vertex U. Then we can express the fraction of infected individuals as a sum of indicators and calculate its conditional expectation. Note that the rhs equals the probability that a uniformly chosen vertex, which we shall denote by V, is infected. Also note that a vertex is infected if and only if its distance from U is shorter than the time passed, hence we can further condition on W^{(n)}_V and use the distribution of P_n(U, V) conditioned on these values. Let us set z = t − (1/λ) log W^{(n)}_U and rearrange. Then, from (4.1) and (4.3), we recognize that the second term on the right-hand side is the moment generating function of W^{(n)}_V; changing variables yields the limiting form. Note that W^{(n)}_V converges to W_V almost surely, which implies that their moment generating functions converge in probability. This function is exactly the one given in Theorem 1.3.
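The identity behind this computation, I_n(t, U) = (1/n) #{v : d(U, v) ≤ t}, can be simulated directly. A minimal sketch (for speed it plants a fixed number ≈ ρn/2 of uniformly chosen shortcuts rather than independent ρ/n coin flips, a slight simplification of the model):

```python
import bisect, heapq, random

def infection_curve(n, rho, times, seed=0):
    """SI epidemic with Exp(1) transmission times on an approximate
    NW_n(rho), started at vertex 0: the infection time of v equals its
    FPP distance from 0, so I_n(t, 0) = #{v : d(0, v) <= t} / n."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    def add_edge(i, j):
        w = rng.expovariate(1.0)
        adj[i].append((j, w))
        adj[j].append((i, w))
    for i in range(n):                    # the cycle
        add_edge(i, (i + 1) % n)
    for _ in range(int(rho * n / 2)):     # ~Bin(n(n-3)/2, rho/n) shortcuts,
        i, j = rng.sample(range(n), 2)    # simplified to a fixed count
        add_edge(i, j)
    dist = [float("inf")] * n             # Dijkstra distances = infection times
    dist[0] = 0.0
    pq = [(0.0, 0)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist[x]:
            continue
        for y, w in adj[x]:
            if d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(pq, (dist[y], y))
    dist.sort()
    return [bisect.bisect_right(dist, t) / n for t in times]

curve = infection_curve(5000, 1.0, [2, 4, 6, 8, 10], seed=1)
print(curve)   # nondecreasing S-shaped curve from near 0 to near 1
```

The sharp rise of the curve occurs around (1/λ) log n, with a random horizontal shift across realizations coming from the martingale limit, as described by Theorem 1.3.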

Second Moment
The first moment computation above showed that the expected value of I_n(t, U) indeed converges in probability to the defined function f at the given point. We prove Theorem 1.3 by showing that the variance of I_n(t, U) converges to 0; then Chebyshev's inequality yields that I_n(t, U) converges to its expectation in probability.
Denote 1_i := 1{i is infected by time t}, and let us calculate the variance. Since 1_i is an indicator, Var(1_i | W^{(n)}_U) ≤ 1, hence the first term on the rhs is at most 1/n. As for the second term, imagine three exploration processes on NW_n: one from U, one from i and one from j. It is not hard to see that the exploration processes from these three vertices can be approximated by three independent branching processes. This implies that the covariance can be bounded by the error of the coupling between the graph and the branching processes, as well as by the thinning inside one tree and between the trees: these all have error terms of order at most 1/log n. It is not hard to see that the coupling can be extended to three SWTs (instead of two, as before), and the error terms are only multiplied by constants. The connection processes between SWT_U and the other two are related only through the intersection of SWT_i and SWT_j, which is again at most of order 1/log n. As a result, the covariance of the indicators is of order at most 1/log n. This coupling works if i and j are sufficiently far apart, say (i − j) mod n > (log n)^{1+ε} for some ε > 0 (this is w.h.p. longer than the length of the longest red interval). The number of "bad pairs" that are closer than this is n(log n)^{1+ε}/2; compared to the number of all pairs, of order n^2, this fraction goes to 0. Even for these pairs the covariance is bounded by 1, so the sum divided by n^2 goes to 0.
With that, we have bounded the variance by a term that goes to 0, which finishes the proof.

Characterization of the Epidemic Curve Function
In this section, we prove Proposition 1.5. Recall that adding a superscript (B) or (R) indicates a branching process described in Sect. 2.3 that is started from a blue or red type vertex, respectively. We start with the recursive formula for the martingale limit random variables from [5], where W^{(R)}_i are independent copies of W^{(R)} = lim_{t→∞} e^{−λt} Z^{(R)}(t), W^{(B)}_j are independent copies of W^{(B)} =_d W_V, and X_i, X_j are i.i.d. Exp(1). Denote the moment generating functions of W^{(B)} and W^{(R)} by M_{W^{(B)}} and M_{W^{(R)}}, respectively. Recall that a blue individual has two red and Poi(ρ) many blue children. We use the law of total expectation with respect to X_i to compute the corresponding factor. Let J^{(B)} be defined similarly, with M_{W^{(R)}} replaced by M_{W^{(B)}}. Then the second factor in (4.4) can be treated by conditioning on D^{(B)}_B and using independence, and then taking expectation with respect to D^{(B)}_B. We can rewrite the factor in the exponent, and the moment generating function in (4.4) follows. Similarly for M_{W^{(R)}}, using the offspring distribution D^{(R)} of a red individual. We have thus shown that the moment generating functions satisfy the system of equations given in Proposition 1.5, and by [5] there exist proper moment generating functions satisfying these functional equations.

Central Limit Theorem for the Hopcount
In this section we prove Theorem 1.2, which states that the hopcount $H_n(U,V)$, the number of edges along the shortest weight path between two vertices $U$ and $V$ chosen uniformly at random, satisfies a central limit theorem with asymptotic mean and variance both equal to $\frac{\lambda+1}{\lambda}\log n$.
For this, we consider the shortest weight path between $U$ and $V$ in two parts: the path from $U$ within SWT$_U(t_n)$ and the path from $V$ within SWT$_V(\cdot)$, both ending at the vertex where the connection happens, which we denote by $Y$. These paths are disjoint with the exception of $Y$, hence it suffices to determine their lengths, i.e., the graph distances of $Y$ from $U$ and from $V$. Denote by $G^{(U)}(Y)$ the generation of $Y$ in SWT$_U$, and similarly for $V$. Then the number of steps from the root $U$ to $Y$ is exactly $G^{(U)}(Y)$.
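As an illustration, the objects under study can be simulated directly. The sketch below (ours, not from the paper; the function names `newman_watts` and `dijkstra` are our own) builds NW$_n$ with i.i.d. Exp(1) edge weights and runs Dijkstra's algorithm, so that `dist[V]` is the weight distance and `hops[V]` the hopcount of the least-weight path found.

```python
import heapq
import math
import random

def newman_watts(n, rho, rng):
    """Cycle on n vertices plus each non-cycle pair (i, j) independently
    with probability rho/n; every edge carries an i.i.d. Exp(1) weight."""
    adj = [[] for _ in range(n)]
    def add_edge(i, j):
        w = rng.expovariate(1.0)
        adj[i].append((j, w))
        adj[j].append((i, w))
    for i in range(n):                      # "red" cycle edges
        add_edge(i, (i + 1) % n)
    for i in range(n):                      # "blue" shortcut edges, |i-j| != 1 mod n
        for j in range(i + 2, n):
            if (i, j) != (0, n - 1) and rng.random() < rho / n:
                add_edge(i, j)
    return adj

def dijkstra(adj, src):
    """Least-weight distances from src, plus the number of edges (hopcount)
    along the least-weight path found to each vertex."""
    dist = [math.inf] * len(adj)
    hops = [0] * len(adj)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                hops[v] = hops[u] + 1
                heapq.heappush(heap, (dist[v], v))
    return dist, hops
```

For large $n$ one could compare the empirical distances against $\frac1\lambda\log n$ and the empirical mean and variance of the hopcount against $\frac{\lambda+1}{\lambda}\log n$.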

Claim 5.1
The choice of Y is asymptotically independent in the two SWT's.
Proof Conditioned on $Y$ being the connecting vertex, it is uniformly chosen over the active set of SWT$_V$. That determines its label, and the label in turn determines which particle is chosen in SWT$_U$. Since the labeling is independent of the structure of the family tree, apart from the thinning, the choice of $Y$ in SWT$_U$ is independent of its choice in SWT$_V$. We have already bounded the fraction of ghost particles (those with a thinned ancestor) by a term that goes to 0 in Lemma 2.12, hence asymptotic independence holds.

By construction, $H_n(U,V) = G^{(U)}(Y) + G^{(V)}(Y)$, and the two terms are independent. We reformulate the theorem using these terms:

$$\frac{H_n(U,V) - \frac{\lambda+1}{\lambda}\log n}{\sqrt{\frac{\lambda+1}{\lambda}\log n}} = \frac{G^{(U)}(Y) - \frac{\lambda+1}{2\lambda}\log n}{\sqrt{\frac{\lambda+1}{\lambda}\log n}} + \frac{G^{(V)}(Y) - \frac{\lambda+1}{2\lambda}\log n}{\sqrt{\frac{\lambda+1}{\lambda}\log n}}. \tag{5.1}$$

Since the terms are independent, it suffices to show that both terms on the right hand side are asymptotically normal with mean 0 and variance $\frac12$; equivalently, we show that both terms on the rhs of (5.1), multiplied by $\sqrt{2}$, have standard normal limit distribution. Due to the method by which we established the connection between SWT$_U$ and SWT$_V$, the two terms need to be treated somewhat differently.

Generation of the Connecting Vertex in SWT V
Recall that we established the connection between SWT$_U$ and SWT$_V$ in the following way: we grew SWT$_U$ until time $t_n$ and then froze its evolution. Then we grow SWT$_V$, and every time a label is assigned to a splitting particle, we check whether this label belongs to the active set of SWT$_U$. As a result, the connecting vertex $Y$ is a particle at some splitting time $T_k$, and hence chosen uniformly over the active vertices. This implies that for its generation, we can use the indicator decomposition of the ancestral line described in Sect. 2.4.2:

$$G_k = \sum_{i=1}^{k} \mathbb{1}_i,$$

where, conditioned on the offspring variables $D_i$, the indicators are independent with success probabilities $\mathbb{P}(\mathbb{1}_i = 1) = D_i/S_i$. In our case the number of splits is a random variable. Recall from Sect. 3.2 that the connection time minus $t_n$ forms a tight random variable [see e.g. (3.19)], hence until the connection there are $N(t_n + Z)$ many explored vertices for some random $Z \in \mathbb{R}$. By Corollary 2.8, $N(t_n + Z) = C\sqrt{n}$ for some random variable $C$ (that might depend on $n$, but is tight). Writing $a_n := \frac{\lambda+1}{2\lambda}\log n$, we decompose

$$\frac{G_{C\sqrt n} - a_n}{\sqrt{a_n}} = B_1 \sqrt{B_2} + B_3, \qquad B_1 := \frac{\sum_{i=1}^{C\sqrt n}\big(\mathbb{1}_i - \frac{D_i}{S_i}\big)}{\Big(\sum_{i=1}^{C\sqrt n}\frac{D_i}{S_i}\big(1-\frac{D_i}{S_i}\big)\Big)^{1/2}}, \quad B_2 := \frac{1}{a_n}\sum_{i=1}^{C\sqrt n}\frac{D_i}{S_i}\Big(1-\frac{D_i}{S_i}\Big), \quad B_3 := \frac{\sum_{i=1}^{C\sqrt n}\frac{D_i}{S_i} - a_n}{\sqrt{a_n}}. \tag{5.2}$$

Our aim is to show that Lindeberg's CLT is applicable for $B_1$, that $B_2$ converges to 1, and that $B_3$ converges to 0.
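The indicator decomposition can be checked numerically in a toy setting (our construction, not the paper's SWT): every split replaces a uniformly chosen active individual by $D = 2$ children one generation deeper, so $S_i = i + 1$, and the generation of a uniformly chosen active individual after $k$ splits should have mean $\sum_{i\le k} D_i/S_i$.

```python
import random

def sample_generation(k, rng):
    """Run k splits, each replacing a uniform active individual by two
    children one generation deeper; return the generation of a uniformly
    chosen active individual (the analogue of the connecting vertex Y)."""
    active = [0]                      # generations of the active individuals
    for _ in range(k):
        g = active.pop(rng.randrange(len(active)))
        active.extend((g + 1, g + 1))
    return active[rng.randrange(len(active))]

def predicted_mean(k):
    """Sum of the success probabilities D_i / S_i with D_i = 2, S_i = i + 1."""
    return sum(2.0 / (i + 1) for i in range(1, k + 1))
```

For $k = 2$ this can even be checked by hand: the active generations are $\{1, 2, 2\}$, giving mean $5/3 = 2/2 + 2/3$.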

Term B 1
For this sum of (conditionally independent) indicators, Lindeberg's condition is trivially satisfied once the total conditional variance tends to infinity. To give a lower bound, recall Lemma 2.11, and split the variance according to the random variable $K$ given there:

$$\sum_{i=1}^{C\sqrt n}\frac{D_i}{S_i}\Big(1-\frac{D_i}{S_i}\Big) = \sum_{i=1}^{C\sqrt n}\frac{D_i}{S_i} - \sum_{i=1}^{C\sqrt n}\Big(\frac{D_i}{S_i}\Big)^{2}. \tag{5.4}$$

Each vertex has at least one red child, hence $D_i \ge 1$. For $i > K$, the asymptotics of $S_i$ from Lemma 2.11 show that the $i$-th term of the first sum is at least of order $1/i$; thus the first term on the rhs tends to infinity, being w.h.p. at least $\log n/(2\lambda)$, while $K$ itself is a.s. finite. For the second term in (5.4), we can use that the second moment of $D_i \overset{d}{=} \mathrm{Poi}(\rho) + 1 + \mathbb{1}\{i\text{-th explored is blue}\}$ can be bounded by some constant $M_2$ independent of $i$. Hence, again cutting the sum at $K$, the sum of the first $K$ terms is a.s. finite; for the rest, Lemma 2.11 together with Markov's inequality shows that $\sum_{i=K+1}^{C\sqrt n}(D_i/S_i)^2$ is stochastically bounded; this is the bound (5.5). Combining the two estimates for the two terms in (5.4), we see that the variance tends to infinity w.h.p. As a result, the term $B_1$ in (5.2) satisfies a CLT.
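Why boundedness makes Lindeberg's condition automatic can be spelled out: writing $p_i = D_i/S_i$ and $\sigma_n^2$ for the total conditional variance, each centered summand is bounded by 1, so the truncated sum below is eventually empty.

```latex
% Lindeberg condition for the centered indicators, conditionally on (D_i):
\frac{1}{\sigma_n^2}\sum_{i=1}^{C\sqrt n}
  \mathbb{E}\Big[(\mathbb{1}_i - p_i)^2\,
  \mathbb{1}\{|\mathbb{1}_i - p_i| > \varepsilon\,\sigma_n\}\;\Big|\;(D_j)_{j\ge 1}\Big]
  \;=\; 0 \qquad \text{as soon as } \varepsilon\,\sigma_n > 1,
% since |1_i - p_i| <= 1 always; this happens eventually because sigma_n^2 -> infty.
```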

Term B 2
Similarly to the term $B_1$, we cut the sum at $K$ given by Lemma 2.11 and write, with $a_n = \frac{\lambda+1}{2\lambda}\log n$,

$$B_2 = \frac{1}{a_n}\sum_{i=1}^{K}\frac{D_i}{S_i}\Big(1-\frac{D_i}{S_i}\Big) + \frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\frac{D_i}{S_i} - \frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\Big(\frac{D_i}{S_i}\Big)^{2}.$$

The first fraction tends to 0, as its numerator is a.s. finite. For the numerator of the third term, we can use (5.5) again, which shows that the third term tends to zero w.h.p. We have yet to show that the second term tends to 1. Let $\mathcal{F}_n = \sigma(D_1, \dots, D_n)$ be the filtration generated by the random variables $D_i$. Then

$$\frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\frac{D_i}{S_i} = \frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\frac{D_i - \mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i} + \frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\frac{\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i}. \tag{5.6}$$

For the first term on the rhs of (5.6), we will use Chebyshev's inequality. An elementary calculation using the tower rule, together with $\mathbb{E}[D_i^2\mid\mathcal{F}_{i-1}]\le M_2$ as in Sect. 5.1.1, shows that the second moment of the numerator of this term is at most $M_2\pi^2/(3\lambda^2)$. Then Chebyshev's inequality yields

$$\mathbb{P}\bigg(\Big|\sum_{i=K+1}^{C\sqrt n}\frac{D_i-\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i}\Big| > \log\log n\bigg) \le \frac{M_2\pi^2/(3\lambda^2)}{(\log\log n)^2} \longrightarrow 0. \tag{5.7}$$

This implies that the first term in (5.6) tends to 0 w.h.p. Now, to show that the second term in (5.6) tends to 1, we use a corollary of Theorem 2.6 (see [5]), stating that the vector $(S^R_i, S^B_i)/i$ converges almost surely. Further analysis (in particular, the central limit theorem about $(S^R_i, S^B_i)$ in [26]) yields that the error term is at most of order $i^{-1/2+\varepsilon}$. Hence, using that $D_i \overset{d}{=} \mathrm{Poi}(\rho) + 1 + \mathbb{1}\{i\text{-th explored is blue}\}$ and the definition of $\lambda$, it is elementary to show that

$$\frac{\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i} = \frac{\lambda+1}{\lambda}\cdot\frac{1}{i}\,\big(1+O(i^{-1/2+\varepsilon})\big).$$

Substituting this into the sum, we have

$$\frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}\frac{\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i} = \frac{\sum_{i=K+1}^{C\sqrt n} 1/i}{\frac12\log n} + \frac{1}{a_n}\sum_{i=K+1}^{C\sqrt n}O\big(i^{-3/2+\varepsilon}\big). \tag{5.8}$$

The first term on the rhs, introducing a bounded error term $\delta$ from the integral approximation, equals

$$\frac{\sum_{i=K+1}^{C\sqrt n} 1/i}{\frac12\log n} = \frac{\log(C\sqrt n) - \log(K+1) + \delta}{\frac12\log n} \xrightarrow{\ n\to\infty\ } 1, \tag{5.9}$$

since $C$ is a tight random variable. The second term in (5.8) is at most $\frac{1}{a_n}\sum_{i\ge 1} O(i^{-3/2+\varepsilon})$, which is summable and finite, hence divided by $\log n$ it tends to 0. Combining everything, we get that $B_2$ in (5.2) tends to 1 w.h.p.
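The integral approximation behind (5.9), with its bounded error term $\delta$, is elementary; here is a quick numerical sanity check with illustrative (hypothetical) values of $K$ and of the upper summation limit $M$, playing the role of $C\sqrt{n}$:

```python
import math

def harmonic_tail(K, M):
    """sum_{i=K+1}^{M} 1/i, the sum appearing in (5.9) with M = C*sqrt(n)."""
    return sum(1.0 / i for i in range(K + 1, M + 1))
```

The standard integral bounds $\log\frac{M+1}{K+1} \le \sum_{i=K+1}^{M} \frac1i \le \log\frac{M}{K}$ show that the error against $\log M - \log K$ stays bounded uniformly in $M$, which is all that (5.9) needs.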

Term B 3
As before, we cut this sum at $K$ given by Lemma 2.11; the sum of the first $K$ terms, divided by $\sqrt{a_n}$, tends to 0 since $D_i/S_i < 1$ and $K$ is a.s. finite. For the rest of the sum, we use the approximation of $S_i$ (given by Lemma 2.11) and add and subtract $\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]$ again:

$$B_3 = \frac{\sum_{i=K+1}^{C\sqrt n}\frac{D_i - \mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{S_i}}{\sqrt{a_n}} + \frac{\sum_{i=K+1}^{C\sqrt n}\frac{\mathbb{E}[D_i\mid\mathcal{F}_{i-1}]}{i\lambda\,(1+o(i^{-1/2+\varepsilon}))} - a_n}{\sqrt{a_n}} + o(1), \qquad a_n = \frac{\lambda+1}{2\lambda}\log n. \tag{5.10}$$

The numerator of the first term on the rhs has been treated in (5.7) and is w.h.p. of order at most $\log\log n$, hence the first term on the rhs tends to 0 w.h.p. For the second term on the rhs of (5.10) we can use (5.8) and (5.9): its numerator is then at most of order $\log C + \log(K+1) + O(1)$, which is bounded w.h.p., since $C$ is a tight random variable and $K$ is a.s. finite. This shows that the term $B_3$ in (5.2) tends to 0 w.h.p., and finishes the proof of the CLT for the generation of the connecting vertex in SWT$_V$, see (5.3).

Generation of the Connecting Vertex in SWT U
For the generation of $Y$ in SWT$_U$, we have to use a different approach. This is because the label of the connecting vertex is chosen uniformly among the active vertices of SWT$_V$, but is not necessarily uniform over the active vertices in SWT$_U$. Indeed, it is a lengthy but elementary calculation to show that, conditioned on the event that a connection happens, any active red label in SWT$_U$ is chosen with asymptotic probability $(A^{(U)}(t_n))^{-1}(1 - \pi R^2)/(1 - \pi^2 R^2)$, while any active blue label is chosen with asymptotic probability $(A^{(U)}(t_n))^{-1}/(1 - \pi^2 R^2)$, where $A^{(U)}(t_n)$ is the total number of active vertices in SWT$_U$. However, the following claim is still valid and will be enough to show the needed CLT:

Claim 5.2
Conditioned on the connecting vertex having a label of a certain color in SWT$_U$, with high probability it is chosen uniformly at random among the active labels of that color in SWT$_U$.
Proof We show the statement first for the color blue. Recall that a blue label is chosen uniformly in $[n]$. Since the restriction of a uniform distribution to any set is again uniform, the probability of connection is the same for every active blue label in SWT$_U$. Recall that the number of distinct labels (that are neither thinned nor ghosts) is called the effective size; it is treated in Corollary 2.15.
The problem here, though, is that some labels in the branching process approximation are multiply active: they are neither thinned nor called ghosts, yet if chosen, they modify the uniform probability for the connection.^2 However, Corollary 2.15 implies that the fraction of multiply active labels tends to 0 at time $t_n$. Hence if we pick an active label in $A^U_B(t_n)$, it has multiplicity 1 w.h.p., counting both red and blue instances. (This also implies that, asymptotically, the label has a well-defined color.) Hence with high probability, at the connecting vertex we have a uniform distribution over all possible blue active labels.
An analogous argument can be carried out for red active labels as well, using the fact that the centre of the interval to which they belong is chosen uniformly, and the fact that the proportion of multiple red labels tends to 0 at time $t_n$.
To finish the central limit theorem for $G^{(U)}(Y)$, we use a general result of Kharlamov [28] about the generation of a uniformly chosen active individual within a given type-set in a multitype branching process. For this, consider a type-set $S$ of a multitype branching process, and let $A_S = \cup_{q\in S} A_q$ be the set of active individuals with any type from the type-set $S$. Then [28, Theorem 2] states that the generation of a uniformly chosen individual in $A_S$ satisfies a central limit theorem with asymptotic mean and variance that are independent of the choice of $S$.^3 To apply this result, first pick $S := \{R, B\}$ in our case. Then the statement simply turns into a CLT for the generation of a uniformly picked active individual. We have seen when treating $G^{(V)}(Y)$ that the asymptotic mean and variance are both $\frac{\lambda+1}{2\lambda}\log n$ in this case. Now apply the result again for $S := \{R\}$ and $S := \{B\}$, separately. Combined with the previous observation, we get that an individual chosen uniformly at random among the active individuals of color blue or red, respectively, also satisfies a CLT with the same asymptotic mean and variance. This, combined with Claim 5.2, implies that whether $Y$ is red or blue in SWT$_U$, its generation $G^{(U)}(Y)$ admits a central limit theorem with mean and variance $\frac{\lambda+1}{2\lambda}\log n$. This completes the proof of Theorem 1.2.

Footnote 2: Consider a label $V_i$ in the BP of SWT$_U$ that is active $m_i$ times, i.e., there are $m_i$ individuals in the BP having label $V_i$. The label $V_i$ in SWT$_V$ is still chosen only with the same probability (that is, $1/n$). Since which one of the $m_i$ individuals has the minimal remaining lifetime is uniformly distributed, every individual with label $V_i$ has probability $(m_i A^{(U)}_{B,e}(t_n))^{-1}$ of being the connecting vertex, conditioned on connection at a blue label (with $A^{(U)}_{B,e}$ being the effective size of blue labels).