Abstract
One major open conjecture in the area of critical random graphs, formulated by statistical physicists, and supported by a large amount of numerical evidence over the last decade (Braunstein et al. in Phys Rev Lett 91(16):168701, 2003; Wu et al. in Phys Rev Lett 96(14):148702, 2006; Braunstein et al. in Int J Bifurc Chaos 17(07):2215–2255, 2007; Chen et al. in Phys Rev Lett 96(6):068702, 2006) is as follows: for a wide array of random graph models with degree exponent \(\tau \in (3,4)\), distances between typical points both within maximal components in the critical regime and on the minimal spanning tree on the giant component in the supercritical regime scale like \(n^{(\tau -3)/(\tau -1)}\). In this paper we study the metric space structure of maximal components of the multiplicative coalescent, in the regime where the sizes converge to excursions of Lévy processes “without replacement” (Aldous and Limic in Electron J Probab 3(3):59, 1998), yielding a completely new class of limiting random metric spaces. A by-product of the analysis yields the continuum scaling limit of one fundamental class of random graph models with degree exponent \(\tau \in (3,4)\), where edges are rescaled by \(n^{-(\tau -3)/(\tau -1)}\), yielding the first rigorous proof of the above conjecture. The limits in this case are compact “tree-like” random fractals with a dense collection of hubs (infinite degree vertices), a finite number of which are identified with leaves to form shortcuts. In a special case, we show that the Minkowski dimension of the limiting spaces equals \((\tau -2)/(\tau -3)\) a.s., in stark contrast to the Erdős-Rényi scaling limit whose Minkowski dimension is 2 a.s. It is generally believed that dynamic versions of a number of fundamental random graph models, as one moves from the barely subcritical to the critical regime, can be approximated by the multiplicative coalescent.
In work in progress, the general theory developed in this paper is used to prove analogous limit results for other random graph models with degree exponent \(\tau \in (3,4)\). Our proof makes crucial use of inhomogeneous continuum random trees (ICRTs), which have previously arisen in the study of the entrance boundary of the additive coalescent. We show that tilted versions of the same objects, using the associated mass measure, describe connectivity properties of the multiplicative coalescent. Since convergence of the height processes of the corresponding approximating \(\mathbf {p}\)-trees is not known, we use the general methodology of Athreya et al. (2014) and develop novel techniques relying on first showing convergence in the Gromov-weak topology and then extending this to Gromov–Hausdorff–Prokhorov convergence by proving a global lower mass-bound.
1 Introduction and results
In the last two decades many results regarding scaling limits of large discrete random objects to continuum analogs have been proved. Examples range from Aldous’s continuum random tree [7, 8, 51], Schramm-Loewner evolution and critical planar systems [61], to what is most closely related to this paper: scaling limits of maximal components in the critical regime for random graphs as well as the minimal spanning tree on the giant component in the supercritical regime [3,4,5].
Motivated by empirical observations on real-world networks, in the last decade, researchers from a wide array of fields including computer science, the social sciences and statistical physics have proposed a large number of random graph models to explain various functionals of real-world systems, including power-law degree distributions and small-world scaling of distances between nodes in the network [6, 21, 32, 33, 35, 44, 55, 56]. Many of these models have a parameter t related to the edge density and a model-dependent critical point \(t_c\). Writing n for the number of vertices in the network, if \(t< t_c\) then the maximal connected component \(\mathscr {C}_1(n)\) has size that is negligible compared to n, while if \(t> t_c\) one has a giant component \(\mathscr {C}_1(n)\sim f(t) n\) for some model-dependent function \(f(t) > 0\) for \(t> t_c\). The “\(t=t_c\)” regime is often referred to as the critical regime. Just as the study of the classical critical Erdős-Rényi random graph spurred enormous activity in probabilistic combinatorics in the 90s [9, 21, 47, 52, 53], the critical regime of these new random graph models, together with new phenomena such as explosive percolation [2, 60], has motivated a concerted effort to understand criticality in these models.
In this context, for more than a decade [23, 24, 28, 62], one of the fundamental open conjectures in this area (loosely stated) is as follows. Consider distances between typical points either in the maximal component in the critical regime or in the minimal spanning tree on the giant component in the supercritical regime. Then:
-
(a)
If the random graph model has an asymptotic degree distribution with finite third moments, then distances scale like \(n^{1/3}\).
-
(b)
If the random graph model has a limiting degree distribution \(\left\{ p_k\right\} _{k\ge 1}\) with tail \(p_k\sim C/k^{\tau }\) for \(\tau \in (3,4)\), then distances scale like \(n^{(\tau -3)/(\tau -1)}\).
Contributions of this paper Since we will need to set up some notation before getting to the main results, let us give a general overview of the contributions of this paper:
-
(i)
General theory The fundamental aim of the paper is to develop a general theory one can use to prove (b) in the conjecture above for a wide class of random graphs and, in particular, to derive a new class of continuum scaling limits. To do so, we consider the multiplicative coalescent with entrance boundary in the space \(l_0\) as in [10] [see (1.11) below]. Viewing the maximal components as measured metric spaces (using graph distance and vertex weights), we show that these components, with edges and associated measures properly rescaled, converge to continuum random objects in the Gromov-weak sense. These limiting objects are obtained via appropriate tilts and vertex identifications of inhomogeneous continuum random trees; untilted versions of the same objects have been used to describe the entrance boundary of the additive coalescent [13]. The resulting random objects are “tree-like” but with a dense collection of “hubs” (corresponding to infinite-degree vertices).
-
(ii)
Proof techniques The standard technique in proving such results is to study height processes of certain spanning trees of the components and to show that these processes converge to limiting excursions that code the limiting random real trees. In our context, the convergence of height processes of the corresponding approximating \(\mathbf {p}\)-trees is not known. In [11], the height processes of \(\mathbf {p}\)-trees were shown to converge to limiting excursions in certain regimes, but these results are not applicable to our situation. Because of this, we develop new techniques relying on first showing convergence in Gromov-weak topology via a careful analysis of the tree spanning a finite collection of “typical” points in random “tilted” \(\mathbf {p}\)-trees. In one fundamental class of random graph models, we then extend Gromov-weak convergence to Gromov-Hausdorff-Prokhorov convergence by proving a global lower mass-bound.
-
(iii)
Special case As an example of the general theory, we study the special case of the Norros-Reittu model [57] (which in the regime of interest has been proven [46] to be equivalent to the Chung-Lu model [30] and the rank-one random graph [22]). In this case, we show that the limiting spaces are compact. We also show that the box-counting or Minkowski dimension equals \((\tau -2)/(\tau -3)\) a.s.
In work in progress [19], we use the general theory in this paper to analyze another fundamental random graph model, the configuration model with degree distribution with exponent \(\tau \in (3,4)\), and derive the continuum analogs of the maximal components of this model. We defer a more detailed discussion of related work and the relevance of the current study to Sect. 3.
Organization of the paper A reasonable amount of notation regarding the entrance boundary of the multiplicative coalescent is required to describe the main results (Theorems 1.8, 1.9). To ease the reader into the paper, we start in Sect. 1.1 with the special case of the Norros-Reittu model and in Theorem 1.2 describe what the main results imply for this model. Then in Sect. 1.2 we define the multiplicative coalescent as well as the class of entrance boundaries of importance for the paper and then describe the two main results. The results use two notions of convergence of metric spaces; these are given a precise formulation in Sect. 2.1. Section 2.2 describes an important class of random trees called \(\mathbf {p}\)-trees and the corresponding inhomogeneous continuum random trees that arise as scaling limits of these objects. These are then used in Sect. 2.3 to give a precise description of the scaling limits of maximal components. We discuss the relevance of the main results, relate these to existing work and give an overview of the proof in Sect. 3. The proofs of the main results are contained in Sects. 4–7.
Notation Throughout this paper, we make use of the following standard notation. We let \(\mathop {\longrightarrow }\limits ^{d}\) denote convergence in distribution, and \(\mathop {\longrightarrow }\limits ^{\mathrm {P}}\) convergence in probability. For a sequence of random variables \((X_n)_{n\ge 1}\), we write \(X_n=o_{\mathrm {P}}(b_n)\) when \(|X_n|/b_n\mathop {\longrightarrow }\limits ^{\mathrm {P}}0\) as \(n\rightarrow \infty \). For a non-negative function \(n\mapsto g(n)\), we write \(f(n)=O(g(n))\) when |f(n)| / g(n) is uniformly bounded, and \(f(n)=o(g(n))\) when \(\lim _{n\rightarrow \infty } f(n)/g(n)=0\). Furthermore, we write \(f(n)=\Theta (g(n))\) if \(f(n)=O(g(n))\) and \(g(n)=O(f(n))\). We say that a sequence of events \(({\mathscr {E}}_n)_{n\ge 1}\) occurs with high probability (whp) when \({{\mathrm{\mathbb {P}}}}({{\mathscr {E}}}_n)\rightarrow 1\).
1.1 Rank-one random graph
1.1.1 Model formulation
We start by describing a particular class of random graph models called the Poissonian random graph or the Norros-Reittu model [22, 57], sometimes also referred to as the rank-one random graph model [22]. In the regime of interest for this paper, as shown in [46], this model is equivalent to the Chung-Lu model [29,30,31,32] and the Britton-Deijfen-Martin-Löf model [25]. Start with vertex set \([n]:=\left\{ 1,2,\ldots , n\right\} \) and suppose each vertex \(i\in [n]\) has a weight \(w_i\ge 0\) attached to it; intuitively this measures the propensity or attractiveness of this vertex in the formation of links. Writing \(\varvec{w}=(w_1,\ldots , w_n)\), place an edge between i and j independently for each \(i\ne j\in [n]\) with probability
where \(\ell _n\) is the total weight given by
To complete the formulation, we need to specify how these vertex weights are chosen. Essentially we want the empirical distribution of weights \(n^{-1} \sum _{i\in [n]} \delta \left\{ w_i\right\} \) to converge to a fixed pre-specified distribution F as \(n\rightarrow \infty \). There are a number of ways to do this, but for this paper the following choice turns out to be convenient for a clear statement of the results. Let \((w_i)_{i\in [n]}\) be constructed by
where F is a cumulative distribution function on \([0,\infty )\) and \([1-F]^{-1}\) is the generalized inverse
We assume there exists \(\tau \in (3,4)\) and \(c_{F} > 0\) such that
We will use W for a random variable with distribution F. We will use \(\mathrm{NR}_n(\varvec{w})\) to denote the corresponding random graph.
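As a concrete illustration, here is a minimal simulation sketch of \(\mathrm{NR}_n(\varvec{w})\). It assumes the standard Norros-Reittu connection probability \(1-\exp (-w_iw_j/\ell _n)\) with \(\ell _n=\sum _{i\in [n]}w_i\) (the displayed equations above were not reproduced in this extraction), and the exact power law \(F(x)=1-(\iota /x)^{\tau -1}\), for which the generalized inverse is \([1-F]^{-1}(u)=\iota \, u^{-1/(\tau -1)}\). Function names are ours, purely for illustration.

```python
import math
import random

def power_law_weights(n, tau=3.5, iota=1.0):
    """w_i = [1 - F]^{-1}(i / n) for the exact power law
    F(x) = 1 - (iota / x)^(tau - 1), x > iota, so that
    [1 - F]^{-1}(u) = iota * u^(-1 / (tau - 1))."""
    return [iota * (i / n) ** (-1.0 / (tau - 1)) for i in range(1, n + 1)]

def norros_reittu(w, rng=random):
    """Sample NR_n(w): edge {i, j} present independently with
    probability 1 - exp(-w_i * w_j / ell_n), ell_n = sum of weights."""
    n, ell = len(w), sum(w)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 1.0 - math.exp(-w[i] * w[j] / ell):
                edges.append((i, j))
    return edges
```

Note that the weights are decreasing in i, so vertex 1 is the most attractive hub, consistent with the heavy-tailed degrees of the next section.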
1.1.2 Motivation and known results
As described in the introduction, one impetus for the formulation of a wide array of network models, is to capture the heterogeneous and heavy-tailed nature of the degree distribution of empirical networks. Write \(N_k\) for the number of vertices with degree k in \(\mathrm{NR}_n(\varvec{w})\). Under the assumptions in the previous section, one can show [22, Theorem 3.13] that
where \(W\sim F\). In particular, the degree distribution also has tail exponent \(\tau \). More important in the context of this paper is the connectivity threshold. For \(i\ge 1\) write \(\mathscr {C}_i\) for the ith largest connected component and let \(|\mathscr {C}_i|\) denote its number of vertices. Now define the parameter
and note that \(\nu <\infty \) by (1.3). Then by [22, Theorem 3.1 and Sect. 16.4], we have the following criterion for the phase transition for the largest component.
-
(a)
Supercritical regime If \(\nu >1\), then there exists \(\rho \in (0,1) \) such that \(|\mathscr {C}_1|/n\mathop {\longrightarrow }\limits ^{\mathrm {P}}\rho \) whilst \(|\mathscr {C}_2|/n\mathop {\longrightarrow }\limits ^{\mathrm {P}}0\);
-
(b)
Subcritical regime If \(\nu <1\), then \(|\mathscr {C}_1|/n\mathop {\longrightarrow }\limits ^{\mathrm {P}}0\).
The main aim of this paper is to understand the critical regime \(\nu =1\), where also \(|\mathscr {C}_1|/n\mathop {\longrightarrow }\limits ^{\mathrm {P}}0\). In this setting, there are different universality classes depending on the vertex weights. In the Erdős-Rényi or weakly inhomogeneous universality class, critical clusters have size of order \(n^{2/3}\) and their metric space structure was discovered by Addario-Berry, Broutin and Goldschmidt [4]. Interestingly, when \({{\mathrm{\mathbb {E}}}}(W^3)<\infty \), component sizes still scale like \(n^{2/3}\) [16], while, assuming finite \(6+\varepsilon \)-moments, the metric space structure of rank-one inhomogeneous random graphs is (apart from a trivial rescaling of size and time) the same [20]. However, in the strongly inhomogeneous regime where \({{\mathrm{\mathbb {E}}}}(W^3)=\infty \), the scaling limits of critical clusters are dramatically different in the sense that their sizes are given by \(n^{(\tau -2)/(\tau -1)}\), where \(\tau \) is the degree power-law exponent given by (1.3) [17, 41]. In this paper, we focus on their metric space structure, obtained after rescaling edges by \(n^{-(\tau -3)/(\tau -1)}\) and taking the limit as \(n\rightarrow \infty \). We show that this limiting metric space is compact and its Minkowski dimension equals \((\tau -2)/(\tau -3)\), whereas the Erdős-Rényi scaling limit has Minkowski dimension 2.
In this paper, we analyze the entire critical scaling window. Let \(\varvec{w}\) denote the weight sequence as in (1.2) and fix \(\lambda \in \mathbb {R}\). Now consider the weight sequence \(\varvec{w}(\lambda ):=(w_i(\lambda ))_{i\in [n]}\) defined by
Write \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) for the corresponding random graph and let \(\mathscr {C}_i(\lambda )\) denote the corresponding ith largest component. This critical scaling window was first identified and studied in [41], where it was shown that for every fixed \(\lambda \in \mathbb {R}\), both \(|\mathscr {C}_1(\lambda )|/n^{(\tau -2)/(\tau -1)}\) and \(n^{(\tau -2)/(\tau -1)}/|\mathscr {C}_1(\lambda )|\) are tight. The entire distributional asymptotics of component sizes were derived in [17], where it was shown that in the product topology on \(\mathbb {R}^{\mathbb {N}}\),
where \((Z_i(\lambda ):i\ge 1)\) are excursions away from zero of a special stochastic process described in more detail in Sect. 1.2.
1.1.3 Our results
We make the following convention:
For any metric measure space \((\mathbb {S}, d, \mu )\) and \(a>0\), \(a\mathbb {S}\) denotes the metric measure space \((\mathbb {S}, ad, \mu )\), i.e., the space where the distance is scaled by a and the measure remains unchanged.
Consider the random graph \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) and view each connected component \(\mathscr {C}\) as a connected metric space via the usual graph distance where each edge has length one. Further, we can view each connected component \(\mathscr {C}\) as a metric measure space by assigning weight \(w_i/(\sum _{j\in \mathscr {C}}w_j)\) to vertex \(i\in \mathscr {C}\). Note that the normalization yields a probability measure on each connected component. Let \(\mathscr {S}\) denote the space of (equivalence classes) of compact measured metric spaces equipped with the Gromov-Hausdorff-Prokhorov metric (see Sect. 2.1.1 for definition). View
as a random element of \(\mathscr {S}^{\mathbb {N}}\).
Next recall that the lower and upper box counting dimensions of a compact metric space \(\mathscr {M}\) are given by
respectively, where \(\mathscr {N}(\mathscr {M},\delta )\) is the minimal number of open balls with radius \(\delta \) required to cover \(\mathscr {M}\). Also let \(\dim _h(\mathscr {M})\) denote the Hausdorff dimension of \(\mathscr {M}\). When \({{\mathrm{\underline{dim}}}}(\mathscr {M})={{\mathrm{\overline{dim}}}}(\mathscr {M})=\dim \), then the box-counting or Minkowski dimension is \(\dim \).
Before stating our main result, we introduce a technical condition.
Assumption 1.1
The support of the limiting distribution F (defined just before (1.2)) is given by \([\iota , \infty )\) for some \(\iota >0\). Further, F has a continuous density f on \([\iota , \infty )\) such that xf(x) is non-increasing on \([\iota , \infty )\).
Note that distributions F that are exact power laws, i.e., of the form \(F(x)=1-(\iota /x)^{\tau -1}\) for \(x>\iota \) and some \(\tau \in (3, 4)\), satisfy Assumption 1.1. The main result of this section is as follows:
Theorem 1.2
(Scaling limits with degree exponent \(\tau \in (3,4)\)) Fix \(\lambda \in \mathbb {R}\) and consider the critical Norros-Reittu model \(\mathrm{NR}_n(\varvec{w}(\lambda ))\), i.e., assume that \(\nu =1\) where \(\nu \) is as in (1.5). Assume that the limiting distribution F satisfies Assumption 1.1.
Then, there exists an appropriate limiting sequence of random compact metric measure spaces \(\mathbf {M}_\infty ^{{{\mathrm{nr}}}}(\lambda ):= (M_i^{{{\mathrm{nr}}}}(\lambda ))_{i\ge 1}\) such that the components in the critical regime satisfy
Here convergence is with respect to the product topology on \(\mathscr {S}^{\mathbb {N}}\) induced by the Gromov-Hausdorff-Prokhorov metric on each coordinate \(\mathscr {S}\). For each \(i\ge 1\), the limiting metric spaces have the following properties:
-
(a)
\(M_i^{{{\mathrm{nr}}}}(\lambda )\) is a random compact metric measure space obtained by taking a random real tree \(\mathscr {T}_i(\lambda )\) and identifying a random (finite) number of pairs of points (thus creating shortcuts).
-
(b)
Call a point \(u\in \mathscr {T}_i(\lambda )\) a hub point if deleting u results in infinitely many disconnected components of \(\mathscr {T}_i(\lambda )\). Then \(\mathscr {T}_i(\lambda )\) has infinitely many hub points, which are everywhere dense in the tree \(\mathscr {T}_i(\lambda )\).
-
(c)
The box-counting or Minkowski dimension of \(M_i^{{{\mathrm{nr}}}}(\lambda )\) satisfies
$$\begin{aligned} \dim (M_i^{{{\mathrm{nr}}}}(\lambda ))=\frac{\tau -2}{\tau -3} \qquad a.s. \end{aligned}$$
(1.8)
Consequently, the Hausdorff dimension satisfies the bound \(\dim _h(M_i^{{{\mathrm{nr}}}}(\lambda )) \le (\tau -2)/(\tau -3)\) a.s.
Conjecture 1.3
We strongly believe that both the Hausdorff dimension and the packing dimension of \(M_i^{{{\mathrm{nr}}}}(\lambda )\) equal \((\tau -2)/(\tau -3)\) a.s. See Sect. 8 for a discussion.
1.2 Connectivity asymptotics for the multiplicative coalescent
In this section we consider a slightly more general setting than in Sect. 1.1. The motivation is as follows: recall that for the rank-one model, two vertices were connected with probability essentially proportional to the product of their weights. For probabilists, this connectivity pattern is quite reminiscent of the famous multiplicative coalescent [9, 10, 15]. Whilst interesting in its own right, its fundamental importance in the context of random graphs is as follows: a wide array of random graph models can be constructed in a dynamic fashion where, as time progresses, new edges are created between pre-existing clusters. Even though the merging dynamics between connected components tend to be quite different from that specified by the multiplicative coalescent, the mergers from the barely subcritical regime through the critical scaling window can be approximated by the multiplicative coalescent. This idea was exploited in [18] to prove universality of scaling limits in the critical regime for several random graph models.
Thus components at criticality of a wide array of random graph models can be thought of as consisting of two major parts:
-
(a)
“Blobs” that are components formed in the barely subcritical regime.
-
(b)
Edges formed between such blobs as the system proceeds from the barely subcritical regime through the critical scaling window.
The results below (in particular Theorem 1.8) specify how to handle the second aspect. In a companion paper we use macroscopic averaging of distances within blobs to show that random graph models such as the configuration model have the same scaling limit in the critical regime as in Theorem 1.2, in the setting where degrees obey power laws with exponents \(\tau \in (3, 4)\). Further, it will follow from Theorem 1.8 that the convergence in (1.7) holds with respect to the product topology induced by the Gromov-weak topology on each coordinate. Therefore, Theorem 1.2 can be recovered partially from the more general Theorem 1.8 at the expense of working with a weaker topology.
Before stating the result we will need to define the multiplicative coalescent. The natural domain of this Markov process is the space
equipped with the metric \(d(\mathbf {x}, \mathbf {y}):= \sqrt{\sum _{i\ge 1} (x_i-y_i)^2}\). We will work in the simpler setup where the Markov process starts with a finite number of clusters, i.e., the process starts with \(\mathbf {x}\in \ell ^2_{\downarrow }\) for which there exists \(n < \infty \) with \(x_i =0\) for all \(i> n\). Write \(\ell ^2_{\downarrow }(n)\) for the collection of such vectors. Now the Markov process \((\mathbf {X}(t))_{t\ge 0}\) with initial state \(\mathbf {X}(0) = \mathbf {x}\) evolves as follows. Write \(\mathbf {X}(t) = (X_i(t))_{i\ge 1}\). Then for \(i\ne j\), clusters i and j merge at rate \(X_i(t)\cdot X_j(t)\) into a single cluster of size \(X_i(t)+X_j(t)\).
Note that for any fixed time \(t>0\), it is easy to find the distribution of masses \(\mathbf {X}(t)\) via the following random graph:
Definition 1.4
(Random graph \(\mathscr {G}_n(\mathbf {x},t)\)) Consider the vertex set \([n]:=\left\{ 1,2,\ldots , n\right\} \) and assign weight \(x_i\) to vertex i. Now connect each pair of vertices i, j with \(i\ne j\) independently with probability
Call this random graph \(\mathscr {G}_n(\mathbf {x},t)\). For a connected component \(\mathscr {C}\subseteq \mathscr {G}_n(\mathbf {x},t)\), let \({{\mathrm{mass}}}(\mathscr {C}):= \sum _{i\in \mathscr {C}} x_i\). Let \((\mathscr {C}_i(t))_{i\ge 1}\) denote the connected components arranged in decreasing order of their masses.
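The connection probability in the (unreproduced) display above is, in the standard construction of the multiplicative coalescent, \(1-\exp (-tx_ix_j)\). Assuming that form, the following union-find sketch computes the ordered component masses \(({{\mathrm{mass}}}(\mathscr {C}_i(t)))_{i\ge 1}\) of \(\mathscr {G}_n(\mathbf {x},t)\):

```python
import math
import random

def component_masses(x, t, rng=random):
    """Sample G_n(x, t) -- edge {i, j} present independently with
    probability 1 - exp(-t * x_i * x_j) -- and return the masses
    mass(C) = sum of x_i over each component, in decreasing order."""
    n = len(x)
    parent = list(range(n))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 1.0 - math.exp(-t * x[i] * x[j]):
                parent[find(i)] = find(j)
    masses = {}
    for i in range(n):
        r = find(i)
        masses[r] = masses.get(r, 0.0) + x[i]
    return sorted(masses.values(), reverse=True)
```

By Lemma 1.5 below, this gives the distribution of the multiplicative coalescent at time t started from \(\mathbf {x}\).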
The following is obvious from the definition of the multiplicative coalescent:
Lemma 1.5
For each fixed \(t\ge 0\), the masses of the multiplicative coalescent at time t, started with a finite number of initial clusters with masses \(\mathbf {x}\), satisfy
Analogous to (1.9), consider the two spaces
These spaces turn out to be crucial in describing the entrance boundary of the eternal multiplicative coalescent in [10]. In the context of this paper, we are interested in studying scaling limits of connected components of the random graph \(\mathscr {G}_n(\mathbf {x},t)\) when (suitably normalized) asymptotics of the weight vector \(\mathbf {x}\) are described by a vector \(\mathbf {c}\in l_0\). Let
We will make the following assumptions about the weight vector \(\mathbf {x}:=\mathbf {x}(n)\) used to form the graph \(\mathscr {G}_n(\mathbf {x},t)\). These place the associated graph in a particular entrance boundary of the associated eternal multiplicative coalescent [10, Proposition 7].
Assumption 1.6
For each \(n\ge 1\), let \(\mathbf {x}^{(n)} = (x_i^{(n)}: 1\le i\le n)\) be an initial finite-length vector belonging to \(\ell ^2_{\downarrow }(n)\). Suppose that as \(n\rightarrow \infty \) there exists \(\mathbf {c}\in l_0\) such that
Now let \(\left\{ \xi _j:j\ge 1\right\} \) be a sequence of independent exponential random variables where \(\xi _j\) has rate \(c_j\) for each \(j\ge 1\). For a fixed \(\lambda \in \mathbb {R}\), consider the process
It turns out that this process is well defined precisely if \(\mathbf {c}\in \ell ^3_{\downarrow }\) [10]. Consider the “reflected at zero” process
and the excursions of \(\tilde{V}^{\mathbf {c}}_\lambda (\cdot )\) from zero. Then Aldous and Limic [10] showed that the lengths of these excursions are a.s. in \(l^2\) precisely when \(\mathbf {c}\in l_0\), and thus can be arranged in decreasing order. Write
for these excursions in decreasing order of their length. Let \(Z_i(\lambda ):= |\mathscr {Z}_i(\lambda )|\) denote the length of the ith largest excursion and let
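The displays defining \(V^{\mathbf {c}}_\lambda \) and its reflected version were not reproduced above; in Aldous and Limic [10] the process takes the form \(V^{\mathbf {c}}_\lambda (s)=\lambda s+\sum _j\big (c_j\mathbb {1}\left\{ \xi _j\le s\right\} -c_j^2 s\big )\) and \(\tilde{V}^{\mathbf {c}}_\lambda (s)=V^{\mathbf {c}}_\lambda (s)-\min _{u\le s}V^{\mathbf {c}}_\lambda (u)\). Assuming this form, the following sketch (truncating \(\mathbf {c}\) to finitely many coordinates and discretizing time, so it is only a rough approximation of the excursion structure) extracts the excursion lengths:

```python
import random

def excursion_lengths(c, lam, s_max=10.0, n_grid=20000, rng=random):
    """Truncated simulation of the reflected process
    V~(s) = V(s) - min_{u <= s} V(u), where
    V(s) = lam * s + sum_j (c_j * 1{xi_j <= s} - c_j**2 * s)
    and xi_j ~ Exp(rate c_j).  Returns the excursion lengths of V~
    away from zero (up to grid resolution), sorted decreasingly."""
    xi = [rng.expovariate(cj) for cj in c]
    drift = lam - sum(cj * cj for cj in c)
    ds = s_max / n_grid
    lengths, run, vmin = [], 0.0, 0.0
    for k in range(1, n_grid + 1):
        s = k * ds
        v = drift * s + sum(cj for cj, x in zip(c, xi) if x <= s)
        vmin = min(vmin, v)
        if v - vmin > 1e-12:       # strictly above the running minimum
            run += ds
        elif run > 0.0:            # excursion just ended
            lengths.append(run)
            run = 0.0
    if run > 0.0:
        lengths.append(run)
    return sorted(lengths, reverse=True)
```

With \(c_j\propto j^{-1/(\tau -1)}\) (the regime of Sect. 1.1), the sorted lengths play the role of \((Z_i(\lambda ))_{i\ge 1}\) in Theorem 1.7.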
Then Aldous and Limic [10] proved the following result:
Theorem 1.7
([10, Proposition 7]) Fix \(\lambda \in \mathbb {R}\) and consider the time scale \(t_n:= \lambda + [\sigma _2(\mathbf {x}^{(n)})]^{-1}\). Under Assumptions (1.12), (1.13), (1.14), the masses of the connected components of the graph \(\mathscr {G}_n(\mathbf {x},t_n)\) satisfy
with respect to the topology in \(\ell ^2_{\downarrow }\), where \(\mathbf {Z}(\lambda )\) is as in (1.18).
Now consider the connected components in \(\mathscr {G}_n(\mathbf {x},t)\), and as before, view each component \(\mathscr {C}\) as a connected metric space via the usual graph distance where each edge has length one. Further, view each component \(\mathscr {C}\) as a measured metric space by assigning mass \(x_i/{{\mathrm{mass}}}(\mathscr {C})\) to each vertex \(i\in \mathscr {C}\). Let \(\mathscr {S}_{*}\) denote the space of (equivalence classes) of measured metric spaces equipped with Gromov-weak topology (see Sect. 2.1.2 for definition) and view
as a random element in \(\mathscr {S}_{*}^{\mathbb {N}}\). Then our next result is about Gromov-weak convergence of \(\mathbf {M}_n(\lambda )\).
Theorem 1.8
Fix \(\lambda \in \mathbb {R}\). Then under Assumption 1.6, there exists an appropriate limiting sequence of metric spaces \(\mathbf {M}_{\infty }^{\mathbf {c}}(\lambda ):= (M_i^{\mathbf {c}}(\lambda ):i\ge 1)\) such that
Here weak convergence is on \(\mathscr {S}_{*}^{\mathbb {N}}\) which is equipped with the natural product topology induced by the Gromov-weak topology on each coordinate \(\mathscr {S}_{*}\).
Remark 1
A full description of the limit objects is given in Sect. 2.3. The limit objects use tilted versions of inhomogeneous continuum random trees, and checking compactness even of the original versions at this level of generality turns out to be quite intractable. However, as the next theorem shows, in the special case of relevance to the rank-one model, one can prove much more.
Consider the special sequence \(\mathbf {c}= \mathbf {c}(\alpha ,\tau ):=(c_i(\alpha ,\tau ):\ i\ge 1)\in l_0\) with \(\tau \in (3,4)\) and \(\alpha > 0\), where
Then we have the following result about the limiting metric spaces:
Theorem 1.9
Fix \(\alpha >0\), \(\tau \in (3,4)\) and let \(\mathbf {c}= \mathbf {c}(\alpha ,\tau )\) as in (1.19). Consider the limiting metric spaces \(\mathbf {M}_\infty ^{\mathbf {c}}(\lambda ):=(M_i^{\mathbf {c}}(\lambda ):i\ge 1)\).
Then almost surely \(M_i^{\mathbf {c}}(\lambda )\) is compact for every \(i\ge 1\). Further, the Minkowski dimension of \(M_i^{\mathbf {c}}(\lambda )\) satisfies
Consequently, the Hausdorff dimension satisfies the bound \(\dim _h(M_i^{\mathbf {c}}(\lambda )) \le (\tau -2)/(\tau -3)\) a.s.
Remark 2
Since we are dealing with equivalence classes of metric spaces (see Sects. 2.1.1 and 2.1.2), Theorem 1.9 should be understood as claiming the existence of representative spaces \(M_i^{\mathbf {c}}(\lambda )\) that are compact and satisfy the stated properties of the fractal dimensions. We will only work with these representative spaces throughout this paper.
2 Definitions and limit objects
2.1 Convergence of metric spaces
Proper notions of convergence of (measured) metric spaces are among the central themes of this paper. Here we define the two topologies used in the statement of our results. We mainly follow [1, 26, 38, 39].
2.1.1 Gromov-Hausdorff-Prokhorov metric
In this section, all metric spaces under consideration will be compact metric spaces with associated probability measures. Let us first recall the Gromov-Hausdorff distance \(d_{{{\mathrm{GH}}}}\) between metric spaces. Fix two metric spaces \((X_1,d_1)\) and \((X_2, d_2)\). For a subset \(C\subseteq X_1 \times X_2\), the distortion of C is defined as
A correspondence C between \(X_1\) and \(X_2\) is a measurable subset of \(X_1 \times X_2\) such that for every \(x_1 \in X_1\) there exists at least one \(x_2 \in X_2\) such that \((x_1,x_2) \in C\) and vice-versa. The Gromov-Hausdorff distance between the two metric spaces \((X_1,d_1)\) and \((X_2, d_2)\) is defined as
Suppose \((X_1, d_1)\) and \((X_2, d_2)\) are two metric spaces and \(p_1\in X_1\), and \(p_2\in X_2\). Then the pointed Gromov-Hausdorff distance between \(\varvec{X}_1:=(X_1, d_1, p_1)\) and \(\varvec{X}_2:=(X_2, d_2, p_2)\) is given by
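For finite metric spaces, the infimum over correspondences can be computed by brute force, using the standard formulas \(d_{{{\mathrm{GH}}}}=\tfrac{1}{2}\inf _C {{\mathrm{dis}}}(C)\) and \({{\mathrm{dis}}}(C)=\sup \left\{ |d_1(x_1,x_1')-d_2(x_2,x_2')| :(x_1,x_2), (x_1',x_2')\in C\right\} \) (the displays above were not reproduced in this extraction). The sketch below ignores measures and is exponential in the space sizes, so it is only for tiny examples:

```python
def gh_distance(d1, d2):
    """Brute-force Gromov-Hausdorff distance between two finite metric
    spaces given by symmetric distance matrices d1 (n1 x n1) and
    d2 (n2 x n2): d_GH = (1/2) * min over correspondences C of
    dis(C) = max over (a,b), (a2,b2) in C of |d1[a][a2] - d2[b][b2]|."""
    n1, n2 = len(d1), len(d2)
    pairs = [(i, j) for i in range(n1) for j in range(n2)]
    best = float("inf")
    for mask in range(1, 1 << len(pairs)):   # subsets of X1 x X2
        C = [pairs[k] for k in range(len(pairs)) if mask >> k & 1]
        # a correspondence must cover both coordinates
        if {a for a, _ in C} != set(range(n1)):
            continue
        if {b for _, b in C} != set(range(n2)):
            continue
        dis = max(abs(d1[a][a2] - d2[b][b2])
                  for a, b in C for a2, b2 in C)
        best = min(best, dis)
    return best / 2.0
```

For example, the distance between a one-point space and two points at distance 1 is 1/2: the unique correspondence pairs the point with both, giving distortion 1.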
We will need a metric that also keeps track of associated measures on the corresponding spaces. A compact measured metric space \((X, d , \mu )\) is a compact metric space (X, d) with an associated probability measure \(\mu \) on the Borel sigma algebra \(\mathscr {B}(X)\). Given two compact measured metric spaces \((X_1, d_1, \mu _1)\) and \((X_2,d_2, \mu _2)\) and a measure \(\pi \) on the product space \(X_1\times X_2\), the discrepancy of \(\pi \) with respect to \(\mu _1\) and \(\mu _2\) is defined as
where \(\pi _1, \pi _2\) are the marginals of \(\pi \) and \(||\cdot ||\) denotes the total variation distance between probability measures. Then the Gromov-Hausdorff-Prokhorov distance between \(X_1\) and \(X_2\) is defined as
where the infimum is taken over all correspondences C and measures \(\pi \) on \(X_1 \times X_2\).
Similar to (2.1), we can define a “pointed Gromov-Hausdorff-Prokhorov distance”, \(d_{{{\mathrm{GHP}}}}^{{{\mathrm{pt}}}}\) between two metric measure spaces \(X_1\) and \(X_2\) having two distinguished points \(p_1\) and \(p_2\) respectively by taking the infimum in (2.2) over all correspondences C and measures \(\pi \) on \(X_1 \times X_2\) such that \((p_1, p_2)\in C\).
Write \(\mathscr {S}\) for the collection of all measured compact metric spaces \((X,d,\mu )\). The function \(d_{{{\mathrm{GHP}}}}\) is a pseudometric on \(\mathscr {S}\), and defines an equivalence relation \(X \sim Y \Leftrightarrow d_{{{\mathrm{GHP}}}}(X,Y) = 0\) on \(\mathscr {S}\). Let \({\bar{\mathscr {S}}} := \mathscr {S}/ \sim \) be the space of isometry-equivalence classes of measured compact metric spaces and \({\bar{d}}_{{{\mathrm{GHP}}}}\) the induced metric. Then by [1], \(({\bar{\mathscr {S}}}, {\bar{d}}_{{{\mathrm{GHP}}}})\) is a complete separable metric space. To ease notation, we will continue to use \((\mathscr {S}, d_{{{\mathrm{GHP}}}})\) instead of \(({\bar{\mathscr {S}}}, {\bar{d}}_{{{\mathrm{GHP}}}})\) and \(X = (X, d, \mu )\) to denote both the metric space and the corresponding equivalence class.
2.1.2 Gromov-weak topology
Here we mainly follow [38]. Introduce an equivalence relation on the space of complete and separable metric spaces that are equipped with a probability measure on the associated Borel \(\sigma \)-algebra by declaring two such spaces \((X_1, d_1, \mu _1)\) and \((X_2, d_2, \mu _2)\) to be equivalent when there exists an isometry \(\psi :\mathrm {support}(\mu _1)\rightarrow \mathrm {support}(\mu _2)\) such that \(\mu _2=\psi _{*}\mu _1:=\mu _1\circ \psi ^{-1}\), i.e., the push-forward of \(\mu _1\) under \(\psi \) is \(\mu _2\). Write \(\mathscr {S}_{*}\) for the associated space of equivalence classes. As before, we will often ease notation by not distinguishing between a metric space and its equivalence class.
Fix \(m\ge 2\), and a complete separable metric space (X, d). Then given a collection of points \(\mathbf {x}:=(x_1, x_2, \ldots , x_m)\in X^m\), let \(\mathbf {D}(\mathbf {x}):= (d(x_i, x_j))_{i,j\in [m]}\) denote the symmetric matrix of pairwise distances between the collection of points. A function \(\Phi :\mathscr {S}_* \rightarrow \mathbb {R}\) is called a polynomial of degree m if there exists a bounded continuous function \(\phi :\mathbb {R}_+^{m^2}\rightarrow \mathbb {R}\) such that
Here \(\mu ^{\otimes m}\) is the m-fold product measure of \(\mu \). Let \(\varvec{\Pi }\) denote the space of all polynomials on \(\mathscr {S}_*\).
Definition 2.1
(Gromov-weak topology) A sequence \((X_n, d_n, \mu _n)_{n\ge 1} \in \mathscr {S}_*\) is said to converge to \((X, d, \mu ) \in \mathscr {S}_*\) in the Gromov-weak topology if and only if \(\Phi ((X_n, d_n, \mu _n))\rightarrow \Phi ((X, d, \mu ))\) for all \(\Phi \in \varvec{\Pi }\).
In [38, Theorem 1] it is shown that \(\mathscr {S}_*\) is a Polish space under the Gromov-weak topology. It is also shown that, in fact, this topology can be completely metrized using the so-called Gromov-Prokhorov metric.
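For a finite metric measure space the integral defining a polynomial reduces to a weighted sum over m-tuples, which can be evaluated exactly. The following Python sketch (names are our own) does this for a two-point space:

```python
import itertools

def polynomial(points, dist, mu, phi, m):
    """Evaluate Phi(X) = integral of phi(D(x)) against the m-fold product
    measure, exactly, for a finite metric measure space: a weighted sum
    over all m-tuples of points."""
    total = 0.0
    for x in itertools.product(points, repeat=m):
        D = [[dist(a, b) for b in x] for a in x]   # pairwise distance matrix
        weight = 1.0
        for a in x:
            weight *= mu[a]
        total += phi(D) * weight
    return total

# two points at distance 1, uniform measure; phi(D) = D[0][1] gives
# Phi(X) = E[d(X1, X2)] = 1/2 for X1, X2 sampled i.i.d. from mu
points = [0, 1]
dist = lambda a, b: 0.0 if a == b else 1.0
mu = {0: 0.5, 1: 0.5}
val = polynomial(points, dist, mu, lambda D: D[0][1], m=2)
print(val)  # 0.5
```

Gromov-weak convergence of a sequence of such spaces amounts to convergence of all such quantities, for every m and every bounded continuous \(\phi \).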
2.1.3 Spaces of trees with edge lengths, leaf weights and root-to-leaf measures
In the proof of the main results we need the following two spaces built on top of the space of discrete trees. The first space \(\mathbf {T}_{IJ}\) was formulated in [12, 13] where it was used to study trees spanning a finite number of random points sampled from an inhomogeneous continuum random tree (as described in the next section). We use the same notation in this paper.
The space \(\mathbf {T}_{IJ}\): Fix \(I\ge 0\) and \(J\ge 1\). Let \(\mathbf {T}_{IJ}\) be the space of trees having the following properties:
- (a) There are exactly J leaves labeled \(1+, \ldots , J+\), and the tree is rooted at another labeled vertex \(0+\).
- (b) There may be extra labeled vertices (called hubs) with distinct labels in \(\left\{ 1,2,\ldots , I\right\} \). (It is possible that only some, and not all, labels in \(\left\{ 1,2,\ldots , I\right\} \) are used.)
- (c) Every edge e has a strictly positive edge length \(l_e\).
A tree \(\mathbf {t}\in \mathbf {T}_{IJ}\) can be viewed as being composed of two parts:
- (1) \({{\mathrm{shape}}}(\mathbf {t})\), describing the shape of the tree (including the labels of leaves and hubs) but ignoring edge lengths. The set of all possible shapes \(\mathbf {T}_{IJ}^{{{\mathrm{shape}}}}\) is obviously finite for fixed I, J.
- (2) The edge lengths \(\mathbf {l}(\mathbf {t}):= (l_e:e\in \mathbf {t})\). Consider the product topology on \(\mathbf {T}_{IJ}\) consisting of the discrete topology on \(\mathbf {T}_{IJ}^{{{\mathrm{shape}}}}\) and the product topology on \(\mathbb {R}^m\), where m is the number of edges of \(\mathbf {t}\).
The space \(\mathbf {T}_{IJ}^*\): We will need a slightly more general space. Along with the three attributes above in \(\mathbf {T}_{IJ}\), the trees in this space have the following two additional properties. Let \(\mathscr {L}(\mathbf {t}):= \left\{ 1+, \ldots , J+\right\} \) denote the collection of non-root leaves in \(\mathbf {t}\). Then every leaf \(v\in \mathscr {L}(\mathbf {t}) \) has the following attributes:
- (d) Leaf weights: a strictly positive number A(v). Write \(\mathbf {A}(\mathbf {t}):=(A(v): v\in \mathscr {L}(\mathbf {t}))\).
- (e) Root-to-leaf measures: a probability measure \(\nu _{\mathbf {t},v}\) on the path \([0+,v]\) connecting the root and the leaf v. Here the path is viewed as a line segment pointed at \(0+\) and carries the usual Euclidean topology. Write \(\varvec{\nu }(\mathbf {t}):= (\nu _{\mathbf {t},v}: v\in \mathscr {L}(\mathbf {t}))\) for this collection of probability measures.
In addition to the topology on \(\mathbf {T}_{IJ}\), the space \(\mathbf {T}_{IJ}^*\) with these two additional attributes inherits the product topology on \(\mathbb {R}^{J}\) coming from the leaf weights and the product topology of \((d_{{{\mathrm{GHP}}}}^{{{\mathrm{pt}}}})^J\) coming from the root-to-leaf measures.
For consistency, we add to the spaces \(\mathbf {T}_{IJ}\) and \(\mathbf {T}_{IJ}^*\) a conventional state \(\partial \). Its use will be clear later on.
2.2 Random \(\mathbf {p}\)-trees and inhomogeneous continuum random trees (ICRTs)
For fixed \(m \ge 1\), write \(\mathbb {T}_m\) and \(\mathbb {T}_m^{{{\mathrm{ord}}}}\) for the collection of all rooted trees with vertex set [m] and rooted ordered trees with vertex set [m] respectively. Here we will view a rooted tree as being directed, with the root being the original progenitor and each edge directed from child to parent. An ordered rooted tree is a tree in which the children of each individual are assigned an order (describing, for example, orientation in a planar embedding, say right to left, or some notion of age, say oldest to youngest).
In this section, we define a family of random tree models called \(\mathbf {p}\)-trees [27, 59], and their corresponding limits, the so-called inhomogeneous continuum random trees, which play a key role in describing the limit metric spaces as well as in the proof. Fix \(m \ge 1\), and a probability mass function \(\mathbf {p}= (p_1, p_2,\ldots , p_m)\) with \(p_i > 0\) for all \(i\in [m]\). A \(\mathbf {p}\)-tree is a random tree in \(\mathbb {T}_m\), with law as follows. For any fixed \(\mathbf {t}\in \mathbb {T}_m\) and \(v\in \mathbf {t}\), write \(d_v(\mathbf {t})\) for the number of children of v in the tree \(\mathbf {t}\). Then the law of the \(\mathbf {p}\)-tree, denoted by \({{\mathrm{\mathbb {P}}}}_{\text {tree}}\), is defined as:
Generating a random \(\mathbf {p}\)-tree \(\mathscr {T}\sim {{\mathrm{\mathbb {P}}}}_{\text {tree}}\) and then assigning a uniform random order on the children of every vertex \(v\in \mathscr {T}\) gives a random element with law \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{ord}}}}(\cdot ; \mathbf {p})\) given by
Obviously a \(\mathbf {p}\)-tree can be constructed by first generating an ordered \(\mathbf {p}\)-tree with the above distribution and then forgetting about the order.
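Assuming the standard form of the \(\mathbf {p}\)-tree law, namely \({{\mathrm{\mathbb {P}}}}_{\text {tree}}(\mathbf {t};\mathbf {p}) = \prod _{v\in [m]} p_v^{d_v(\mathbf {t})}\) (this product indeed sums to one over all rooted trees on [m]), the following Python sketch (our own, brute force, small m only) enumerates all rooted labeled trees and verifies the normalization:

```python
import itertools

def rooted_trees(m):
    """All rooted labeled trees on {1,...,m}, encoded as (root, parent)
    where parent maps every non-root vertex to its parent."""
    trees = []
    for root in range(1, m + 1):
        others = [v for v in range(1, m + 1) if v != root]
        for choice in itertools.product(range(1, m + 1), repeat=len(others)):
            parent = dict(zip(others, choice))
            ok = True
            for v in others:            # every vertex must reach the root
                seen, u = set(), v
                while u != root:
                    if u in seen:       # cycle detected: not a tree
                        ok = False
                        break
                    seen.add(u)
                    u = parent[u]
                if not ok:
                    break
            if ok:
                trees.append((root, parent))
    return trees

def p_tree_prob(root, parent, p):
    """P(t) = prod over v of p_v^{d_v(t)}, with d_v the number of children."""
    d = {v: 0 for v in p}
    for child, par in parent.items():
        d[par] += 1
    prob = 1.0
    for v in p:
        prob *= p[v] ** d[v]
    return prob

p = {1: 0.5, 2: 0.3, 3: 0.2}
trees = rooted_trees(3)
total = sum(p_tree_prob(root, parent, p) for root, parent in trees)
print(len(trees))         # 9 = 3^{3-1} rooted labeled trees
print(round(total, 10))   # 1.0: the probabilities sum to one
```

The count \(m^{m-1}\) is the Cayley-type count of rooted labeled trees; the normalization check confirms the law needs no normalizing constant.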
In a series of papers [11,12,13] it was shown that \(\mathbf {p}\)-trees, under various assumptions, converge to inhomogeneous continuum random trees that we now describe. Recall the space \(\ell ^2_{\downarrow }\) in (1.9). Consider the subset \(\Theta \subset \ell ^2_{\downarrow }\) given by
Now recall from [37, 51] that a real tree is a metric space \((\mathscr {T},d)\) that satisfies the following for every pair \(a,b\in \mathscr {T}\):
- (a) There is a unique isometric map \(f_{a,b}:[0,d(a,b)]\rightarrow \mathscr {T}\) such that \(f_{a,b}(0)=a,~ f_{a,b}(d(a,b)) =b\).
- (b) For any continuous one-to-one map \(g:[0,1]\rightarrow \mathscr {T}\) with \(g(0)=a\) and \(g(1)=b\), we have \(g([0,1]) = f_{a,b}([0,d(a,b)])\).
Construction of the ICRT Given \(\varvec{\theta }\in \Theta \), we will now define the inhomogeneous continuum random tree \(\mathscr {T}^{\varvec{\theta }}_{(\infty )}\). We mainly follow the notation in [13]. Assume that we are working on a probability space \((\Omega , \mathscr {F},{{\mathrm{\mathbb {P}}}}_{\varvec{\theta }})\) rich enough to support the following:
- (a) For each \(i\ge 1\), let \(\mathscr {P}_i:= (\xi _{i,1}, \xi _{i,2}, \ldots )\) be a rate \(\theta _i\) Poisson process, independent for different i. The first point of each process, \(\xi _{i,1}\), is special and is called a joinpoint, whilst the remaining points \(\xi _{i,j}\) with \(j\ge 2\) will be called i-cutpoints [13].
- (b) Independent of the above, let \(\varvec{U}=(U_j^{(i)}:j\ge 1,\ i\ge 1)\) be a collection of i.i.d. uniform (0, 1) random variables. These are not required to construct the tree but will be used to define a certain function on the tree.
The random real tree (with marked vertices) \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) is then constructed as follows:
- (i) Arrange the cutpoints \(\left\{ \xi _{i,j}:i\ge 1, j\ge 2\right\} \) in increasing order as \(0< \eta _1< \eta _2 < \cdots \). (The assumption \(\sum _i \theta _i^2 <\infty \) implies that this is possible.) For every cutpoint \(\eta _k=\xi _{i,j}\), let \(\eta _k^*:=\xi _{i,1}\) be the corresponding joinpoint.
- (ii) Next, build the tree inductively. Start with the branch \([0,\eta _1]\). Inductively, assuming step k has been completed, attach the branch \((\eta _k, \eta _{k+1}]\) to the joinpoint \(\eta _k^*\) corresponding to \(\eta _k\).
Write \(\mathscr {T}_0^{\varvec{\theta }}\) for the corresponding tree after one has used up all the branches \([0,\eta _1], \left\{ (\eta _k, \eta _{k+1}]: k\ge 1\right\} \). Note that for every \(i\ge 1\), the joinpoint \(\xi _{i,1}\) corresponds to a vertex with infinite degree. Label this vertex i. The ICRT \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) is the completion of the marked metric tree \(\mathscr {T}^{\varvec{\theta }}_0\). As argued in [13, Section 2], this is a real tree as defined above, which can be viewed as rooted at the vertex corresponding to zero. We call the vertex corresponding to joinpoint \(\xi _{i,1}\) hub i. Since \(\sum _i \theta _i = \infty \), one can check that hubs are almost everywhere dense in \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\).
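The stick-breaking construction above is straightforward to simulate once \(\varvec{\theta }\) is truncated to finitely many coordinates and the Poisson processes are restricted to a finite interval (0, T]; the completion step is then omitted. The following Python sketch (names and data representation are our own) returns the branches and the attachment point of each branch:

```python
import random

def icrt_stick_breaking(theta, T, seed=0):
    """Finite approximation of the stick-breaking construction: for each
    retained hub i, P_i is a rate theta[i] Poisson process on (0, T]; its
    first point is the joinpoint of hub i and later points are cutpoints.
    Returns the list of branches (eta_k, eta_{k+1}] and, for each branch,
    the point at which it is attached (None for the root branch)."""
    rng = random.Random(seed)
    joinpoint = {}
    cutpoints = []                      # (position, hub index)
    for i, rate in enumerate(theta):
        t, pts = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > T:
                break
            pts.append(t)
        if pts:
            joinpoint[i] = pts[0]       # first point: joinpoint of hub i
            cutpoints.extend((x, i) for x in pts[1:])
    cutpoints.sort()
    branches, attach, prev = [], [None], 0.0
    for eta, i in cutpoints:
        branches.append((prev, eta))    # branch ending at cutpoint eta
        attach.append(joinpoint[i])     # next branch glues at this joinpoint
        prev = eta
    branches.append((prev, T))          # leftover final branch
    return branches, attach

branches, attach = icrt_stick_breaking([1.0, 0.8, 0.5], T=5.0, seed=1)
print(len(branches) == len(attach))  # True
```

Note the structural invariants of the construction: consecutive branches share endpoints, and each attachment point is a joinpoint lying strictly before the branch it receives, since a joinpoint is the first point of its Poisson process.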
Remark 3
The uniform random variables \((U_j^{(i)}:j\ge 1,\ i\ge 1)\) give rise to a natural ordering on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) (or a planar embedding of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\)) as follows. For \(i\ge 1\), let \((\mathscr {T}_j^{(i)}:j\ge 1)\) be the collection of subtrees hanging off of the ith hub. Associate \(U_j^{(i)}\) with the subtree \(\mathscr {T}_j^{(i)}\), and think of \(\mathscr {T}_{j_1}^{(i)}\) appearing “to the right of” \(\mathscr {T}_{j_2}^{(i)}\) if \(U_{j_1}^{(i)}< U_{j_2}^{(i)}\). This is the natural ordering on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) when it is being viewed as a limit of ordered \(\mathbf {p}\)-trees. We can think of the pair \((\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})\) as the ordered ICRT.
Reduced tree \(r_{IJ}^{(\infty )}\): Fix \(I\ge 0\) and \(J\ge 1\). Now let \(\eta _0 = 0\) and for \(j\ge 0\) call vertex \(\eta _j\) the jth sampled leaf and label this as \(j+\) to differentiate it from hub j. Note that the subtree of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) spanned by \(\left\{ 0+,1+, \ldots , J+\right\} \) (namely the part of the tree constructed from the interval \([0,\eta _J]\)) is a tree in the usual sense with random edge lengths. For all hubs i, retain the label if \(i\le I\), and remove the label otherwise. This gives a random element of \(\mathbf {T}_{IJ}\) (recall the definition in Sect. 2.1.3), which we denote by \(r_{IJ}^{(\infty )}\). See Fig. 3 corresponding to the stick-breaking construction in Figs. 1 and 2.
Mass measure For every vertex \(v\in \mathscr {T}_{(\infty )}^{\varvec{\theta }}\), define the degree of v to be the number of connected components of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}{{\setminus }}\left\{ v\right\} \). Vertices with degree one are called leaves of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and all other vertices form the skeleton of the tree. Let \(\mathscr {L}(\mathscr {T}_{(\infty )}^{\varvec{\theta }})\) denote the set of leaves of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\). In [13], it was shown that one can associate to \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) a natural probability measure \(\mu \), called the mass measure, satisfying \(\mu (\mathscr {L}(\mathscr {T}_{(\infty )}^{\varvec{\theta }}))=1\).
Root-to-vertex path measures Now using the collection of uniform random variables above, we will define a function \(\mathfrak {G}_{(\infty )}\) on the tree as well as a collection of measures on paths emanating from the root. Recall that the hubs in \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) have infinite degrees. Let \((\mathscr {T}_j^{(i)}:j\ge 1)\) be the collection of subtrees of hub i in \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) (labeled in some fashion). For each \(y\in \mathscr {T}_{(\infty )}^{\varvec{\theta }}\), let
We will show in our proof that \(\mathfrak {G}_{(\infty )}(y)\) is finite for almost every realization of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and for \(\mu \)-almost every \(y\in \mathscr {T}_{(\infty )}^{\varvec{\theta }}\) (see Lemma 4.9 and Theorem 4.15 below). For \(y\in \mathscr {T}_{(\infty )}^{\varvec{\theta }}\), let \([\rho ,y]\) denote the path from the root \(\rho \) to y. For every y, define a probability measure on \([\rho ,y]\) as
Thus, this probability measure is concentrated on the hubs on the path from y to the root.
Remark 4
Note that both \(\mathfrak {G}_{(\infty )}(\cdot )\) and \(Q_{y}^{(\infty )}(\cdot )\) depend on the realization of the pair \((\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})\), but we suppress this dependence to avoid cumbersome notation.
Random tree \(\mathscr {R}_{IJ}^{(\infty )}\) Recall the tree \(r_{IJ}^{(\infty )}\) above. Recall that \(\eta _j\) is the vertex in the tree \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) corresponding to leaf \(j+\) for \(1\le j\le J\). To each of these J leaves, associate the value \(\mathfrak {G}_{(\infty )}(\eta _j)\), and associate the probability measure \(Q_{\eta _j}^{(\infty )}\) to the path \([0+, j+]\). This tree is a random element of the space \(\mathbf {T}_{IJ}^{*}\) (see Sect. 2.1.3), which we denote by \(\mathscr {R}_{IJ}^{(\infty )}\).
2.3 Continuum limits of components
The aim of this section is to give an explicit description of the limiting (random) metric spaces in Theorem 1.8. We start by constructing a specific tilted version of the ICRT in Sect. 2.3.1. Then in Sect. 2.3.2 we describe the limits of maximal components.
2.3.1 Tilted ICRTs and vertex identification
Let \((\Omega , \mathscr {F}, {{\mathrm{\mathbb {P}}}}_{\theta })\) and \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) be as in Sect. 2.2 and let \(\gamma >0\) be a constant. Informally, the construction goes as follows: We will first tilt the distribution of the original ICRT \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) using the functional
to get a tilted tree \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\). We then generate a random but finite number \(N_{(\infty )}^\star \) of pairs of points \(\left\{ (x_k, y_k):1\le k\le N_{(\infty )}^\star \right\} \). The final metric space is obtained by creating “shortcuts” by identifying the points \(x_k\) and \(y_k\). Formally the construction proceeds in four steps:
- (a) Tilted ICRT: Define \({{{\mathrm{\mathbb {P}}}}}_{\theta }^\star \) on \(\Omega \) by
$$\begin{aligned} \frac{d {{{{\mathrm{\mathbb {P}}}}}}_{\theta }^\star }{d{{{{\mathrm{\mathbb {P}}}}}}_{\theta }}=\frac{\exp \left( \gamma \int _{y\in \mathscr {T}^{\theta }}\mathfrak {G}_{(\infty )}(y)\mu (dy) \right) }{{{\mathrm{\mathbb {E}}}}\left[ \exp \left( \gamma \int _{x\in \mathscr {T}^{\theta }} \mathfrak {G}_{(\infty )}(x)\mu (dx) \right) \right] }. \end{aligned}$$The expectation in the denominator is with respect to the original measure \({{{\mathrm{\mathbb {P}}}}}_{\theta }\). In our proof we will show that this object is finite. Write \((\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }, \mu ^\star )\) and \(\varvec{U}^{\star }=(U_j^{(i), \star }: i,j\ge 1)\) for the tree and the mass measure on it, and the associated random variables under this change of measure.
- (b) Poisson number of identification points: Conditionally on \(((\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }, \mu ^\star ), \varvec{U}^{\star })\), generate \(N_{(\infty )}^\star \) having a \(\mathrm {Poisson}(\Lambda _{(\infty )}^\star )\) distribution, where
$$\begin{aligned} \Lambda _{(\infty )}^\star := \gamma \int _{y\in \mathscr {T}_{(\infty )}^{\varvec{\theta },\star }}\mathfrak {G}_{(\infty )}(y)\mu ^\star (dy) =\gamma \sum _{i\ge 1}\theta _{i}\left[ \sum _{j\ge 1}U_j^{(i), \star }\mu ^\star (\mathscr {T}_j^{(i), \star })\right] . \end{aligned}$$Here, \((\mathscr {T}_j^{(i), \star } : j\ge 1)\) denotes the collection of subtrees of hub i in \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\). (As mentioned before in Remark 4, \(\mathfrak {G}_{(\infty )}(\cdot )\) depends on the realization of the ordered ICRT. \(U_j^{(i), \star }\) appears in the expression above as the function \(\mathfrak {G}_{(\infty )}\) acts on \(y\in \mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\) for which the associated order is described by \(\varvec{U}^{\star }\)).
- (c) "First" endpoints (of shortcuts): Conditionally on (a) and (b), sample \(x_k\) from \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\) with density proportional to \(\mathfrak {G}_{(\infty )}(x)\mu ^\star (dx)\), for \(1\le k\le N_{(\infty )}^\star \).
- (d) "Second" endpoints (of shortcuts) and identification: Having chosen \(x_k\), choose \(y_k\) from the path \([\rho , x_k]\) joining the root \(\rho \) and \(x_k\) according to the probability measure \(Q_{x_k}^{(\infty )}\) as in (2.8), but with \(U_j^{(i),\star }\) replacing \(U_j^{(i)}\). (Note that \(y_k\) is always a hub on \([\rho , x_k]\).) Identify \(x_k\) and \(y_k\), i.e., form the quotient space by introducing the equivalence relation \(x_k\sim y_k\) for \(1\le k\le N_{(\infty )}^\star \).
Definition 2.2
Fix \(\gamma \ge 0\) and \(\varvec{\theta }\in \Theta \) as in (2.6). Let \(\mathscr {G}_{\infty }(\varvec{\theta },\gamma )\) be the metric measure space constructed via the four steps above equipped with the measure inherited from the mass measure on \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\).
In our proofs, we will always think of the leaf end (of a shortcut or a surplus edge) as the first endpoint, and the second endpoint will be selected from the skeleton.
2.3.2 Limits of the components
Fix \(\lambda \in \mathbb {R}\) and \(\mathbf {c}\in l_0 \) as in (1.11) and consider the setting of Theorem 1.8. We will need two main objects:
- (a) The process \(\tilde{V}^{\mathbf {c}}_{\lambda }(\cdot )\) in (1.16). Recall that the excursions of this process from zero can be arranged in decreasing order of lengths as \(\mathscr {Z}(\lambda )\). Let \(\Xi ^{(i)} = (c_j: \xi _j \in \mathscr {Z}_i)\) denote the point process of jumps of the process \(\tilde{V}^{\mathbf {c}}_{\lambda }(\cdot )\) corresponding to the excursion \(\mathscr {Z}_i(\lambda )\). Abusing notation, we will write \(\Xi ^{(i)} = (c_j: j \in \mathscr {Z}_i)\).
- (b) The actual lengths of these excursions, \((Z_i(\lambda ):i\ge 1)\), as in (1.18).
From these objects, for each fixed \(i\ge 1\), define the random variable \(\bar{\gamma }^{(i)}\) and the point process \(\varvec{\theta }^{(i)} = (\theta _j^{(i)}:j\in \mathscr {Z}_i(\lambda ))\) as
Our proof (see Proposition 5.1) will imply that \(\varvec{\theta }^{(i)} \in \Theta \) as in (2.6) a.s. Define
and generate the random metric measure spaces
where \(\mathscr {G}_{\infty }(\varvec{\theta },\bar{\gamma })\) is as described in Sect. 2.3.1 and the metric spaces are conditionally independent across i given the driving parameters in (2.10). Let \(\mathbf {M}_{\infty }^{\mathbf {c}}(\lambda ) = (M_i^{\mathbf {c}}(\lambda ):i\ge 1)\). Then this is the limiting collection of metric spaces in Theorem 1.8.
To describe the sequence of spaces \(\mathbf {M}_{\infty }^{{{\mathrm{nr}}}}(\lambda )\) appearing in Theorem 1.2, define
Here W is a random variable with distribution F as in (1.3). Then
3 Discussion
We describe the two major motivations for developing the general theory of this paper in Sects. 3.1 and 3.2. In Sects. 3.3 and 3.4, we include a brief discussion about ICRTs as well as give an overview of the order in which the proofs are carried out.
3.1 Universality and domains of attraction of critical random graph models
One natural question the reader might ask at this point is: why develop the general theory of Sect. 1.2 instead of simply sticking to the rank-one random graph model of Sect. 1.1? As described in the introduction, the aim of this paper is the development of a general theory applicable to a wide array of models. What does this mean? It turns out that many different random graph models can be constructed in a dynamic fashion as a graph-valued process \(\left\{ \mathscr {G}_n(t): t\ge 0\right\} \) in which edges are added as time advances, resulting in mergers of components. In this construction, there is a (model-dependent) critical time \(t_c\) such that the giant component emerges after time \(t_c\).
Now for most random graph models (including the configuration model), the dynamics of mergers of components starting at time zero do not look like the multiplicative coalescent. However, if one zooms in at the critical time \(t_c\), then for many models there exists \(\varepsilon _n\downarrow 0\) such that on the interval \([t_c-\varepsilon _n, t_c+\varepsilon _n]\), mergers of components can be approximated by the multiplicative coalescent. Here \(t_c -\varepsilon _n\) often corresponds to the barely subcritical regime of the random graph. Thus if one has good control over component functionals at the barely subcritical time \(t_c-\varepsilon _n\), and in particular if one is able to show that component sizes, appropriately normalized, satisfy Assumption 1.6, then one can use Theorem 1.8 to derive convergence of the maximal components at the critical time \(t_c\). Note that one does not expect component sizes at time \(t_c-\varepsilon _n\) to satisfy the assumptions of the Norros-Reittu model in (1.4). Rather, in most cases, at time \(t_c-\varepsilon _n\) the expected size of the component of a randomly selected vertex \(V_n\) scales like \(n^{\delta _1}\) while the maximal component scales like \(n^{\delta _2}\) (ignoring logarithmic corrections), where \(\delta _1 < \delta _2\) are related to various scaling exponents of the system. In work in progress [19], Theorem 1.9, coupled with delicate estimates of various scaling exponents for the configuration model in the barely subcritical regime, is used to prove analogous results for the configuration model with degree exponent \(\tau \in (3,4)\). Sizes of maximal components in the critical regime, including the heavy-tailed regime, were previously analyzed for this model in [48].
Further as was done in [18], where a number of sufficient conditions for the domain of attraction of the critical Erdős-Rényi scaling limits were derived, we hope to derive similar general conditions for a random graph model to belong to the same domain of attraction as the rank-one model with \(\tau \in (3,4)\), established in this paper.
3.2 Minimal spanning tree on inhomogeneous random graphs
As described in the introduction, a second major motivation for the technical analysis in this paper is the minimal spanning tree. To fix ideas, consider the Norros-Reittu model in the supercritical regime (the parameter in (1.5) \(\nu > 1\)). To each edge attach a random edge weight i.i.d. across edges, assumed to be derived from a continuous distribution. Consider the minimal spanning tree (MST) of the giant component. A large amount of simulation-based evidence from statistical physics [23, 24, 28, 62] suggests that when the degree exponent \(\tau \in (3,4)\) then the distances in this object scale like \(n^{(\tau -3)/(\tau -1)}\), the same distance scaling shown in this paper for the maximal components in the critical regime (Theorem 1.2).
This is not a coincidence. As has been shown in a series of fundamental papers [3,4,5] for the complete graph and the supercritical Erdős-Rényi random graph, a major ingredient in the analysis of the MST problem is the scaling of maximal components in the critical regime, which then provides crucial input for the scaling limit of the MST. To date, we have no rigorous results on the scaling of the MST for any "inhomogeneous" random graph model. This paper provides the first step in answering this question in the heavy-tailed regime. Further, this program should enable one to analyze the MST for random graph models other than the rank-one model which belong to the same "domain of attraction" in the critical regime.
3.3 Inhomogeneous continuum random trees
As evident from Sect. 2.2, ICRTs play a major role in the description of our limiting objects. Despite a lot of work on these objects in the last decade [11, 13, 27], a number of questions regarding these continuum objects are still open, ranging from sufficient conditions for compactness to the dependence of the fractal properties of this object on the driving parameter \(\varvec{\theta }\). Our proof shows that in some special cases, ICRTs are compact metric spaces when \(\varvec{\theta }\) is sampled according to an appropriate size-biased distribution. This can be seen as an annealed result on compactness of the ICRT. Whether compactness holds for non-random sequences \(\varvec{\theta }\in \Theta \) has been an open problem for more than a decade [11]. Similar questions hold for its fractal dimensions. See Sect. 8 for a more detailed account of these problems.
3.4 Overview of the proof
In Sect. 4, we study the random graph \(\mathscr {G}_n(\mathbf {x},t)\) as in Definition 1.4. We start with the simple observation that conditional on the vertex set of components of \(\mathscr {G}_n(\mathbf {x},t)\), a fixed component \(\mathscr {C}\) has the same distribution as \(\mathscr {G}_n(\mathbf {x},t)\) conditional on being connected. This section studies asymptotics for such distributions assuming specific regularity properties of vertex weights in the component in the large network limit, showing Gromov-weak convergence of the associated graph under proper normalization of edge lengths and vertex weights. Section 5 uses the size-biased exploration of the process \(\mathscr {G}_n(\mathbf {x},t)\) [9] to show that maximal connected components satisfy the hypothesis required in Sect. 4. Section 6 studies the special entrance boundary in (1.19) proving both compactness of the limiting objects as well as strengthening the convergence in the Gromov-weak topology to convergence in \(d_{{{\mathrm{GHP}}}}\). In Sect. 7, we derive the box-counting or Minkowski dimension. In Sect. 8, we conclude by describing a number of open problems.
4 Proofs: asymptotics conditional on being connected
The aim of this section is to study large connected components of \(\mathscr {G}_n(\mathbf {x},t)\) assuming the vertex weights satisfy a few regularity properties.
4.1 Tilted \(\mathbf {p}\)-trees and connected components of \(\mathscr {G}(\mathbf {x},t)\)
Recall the random graph \(\mathscr {G}(\mathbf {x},t)\) from Definition 1.4. Here for any \(t\ge 0\), \((\mathscr {C}_i(t):i\ge 1)\) denotes the components in decreasing order of their sizes. In this section we will describe results from [20] which give a method of constructing connected components of \(\mathscr {G}(\mathbf {x},t)\) conditional on the vertices of the components. This construction involves tilted versions of the \(\mathbf {p}\)-trees introduced in Sect. 2.2. Since these trees are parametrized by a driving probability mass function (pmf) \(\mathbf {p}\), it will be easy to parametrize various random graph constructions in terms of pmfs as opposed to vertex weights \(\mathbf {x}\). Proposition 4.1 will relate vertex weights to pmfs.
Fix \(n\ge 1\) and \(\mathscr {V}\subset [n]\) and write \(\mathbb {G}_{\mathscr {V}}^{{{\mathrm{con}}}}\) for the space of all simple connected graphs with vertex set \(\mathscr {V}\). For fixed \(a > 0\), and probability mass function \(\mathbf {p}= (p_v: v \in \mathscr {V})\), define probability distributions \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ; \mathbf {p}, a, \mathscr {V})\) on \(\mathbb {G}_{\mathscr {V}}^{{{\mathrm{con}}}}\) as follows: Define for \(i,j \in \mathscr {V}\),
Then
where \(Z(\mathbf {p},a)\) is the normalizing constant
Now let \(\mathscr {V}^{(i)} := V(\mathscr {C}_i(t))\) be the vertex set of \(\mathscr {C}_i(t)\) for \(i \ge 1\) and note that \(\left\{ \mathscr {V}^{(i)}:i\ge 1\right\} \) denotes a random finite partition of the full vertex set [n]. The following result is obvious from the construction of \(\mathscr {G}(\mathbf {x},t)\):
Proposition 4.1
([20, Proposition 6.1]) Conditional on the partition \(\left\{ \mathscr {V}^{(i)}:i\ge 1\right\} \) define
For each fixed \(i \ge 1\), let \(G_i \in \mathbb {G}_{\mathscr {V}^{(i)}}^{{{\mathrm{con}}}}\) be a connected simple graph with vertex set \(\mathscr {V}^{(i)}\). Then
Thus the random graph \(\mathscr {G}(\mathbf {x},t)\) can be generated in two stages:
- (i) Stage I: Generate the partition of the vertices into different components, i.e., generate \(\left\{ \mathscr {V}^{(i)}:i\ge 1\right\} \).
- (ii) Stage II: Conditional on the partition, generate the internal structure of each component following the law \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ; \mathbf {p}^{(i)}, a^{(i)}, \mathscr {V}^{(i)})\), independently across different components.
Let us now describe an algorithm to generate such connected components using distribution (4.2). To ease notation, let \(\mathscr {V}= [m]\) for some \(m\ge 1\) and fix a probability mass function \(\mathbf {p}\) on [m] and a constant \(a>0\) and write \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ):= {{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ;\mathbf {p},a,[m])\) on \(\mathbb {G}_m^{{{\mathrm{con}}}}:= \mathbb {G}_{[m]}^{{{\mathrm{con}}}}\). We will first need to set up some notation before describing this result.
Depth-first exploration of ordered trees Recall that we used \(\mathbb {T}_m^{{{\mathrm{ord}}}}\) for the space of ordered (or planar) trees with vertex set [m]. Given a tree \(\mathbf {t}\in \mathbb {T}_m^{{{\mathrm{ord}}}}\), one can use the associated order to explore the tree in a depth-first manner. More precisely, we start with v(1) being the root of \(\mathbf {t}\). At each stage \(1\le i\le m\), we keep track of three types of vertices: the set of active vertices \(\mathscr {A}(i)\), the set of explored vertices \(\mathscr {O}(i)\), and the set of unexplored vertices \(\mathscr {U}(i)\). The set of active vertices will in fact be viewed as a vertical stack (not just a set), with \(\mathscr {A}(i)\) representing the state of this stack at the beginning of step i. Initialize the process with \(\mathscr {A}(1) = \left\{ v(1)\right\} \) (the root of \(\mathbf {t}\)), \(\mathscr {O}(1) = \emptyset \) and \(\mathscr {U}(1) = [m]{\setminus }\left\{ v(1)\right\} \). At step \(i\ge 1\), we let
- (i) v(i) denote the vertex at the top of the stack \(\mathscr {A}(i)\) and \(\mathscr {D}(i)\subset \mathscr {U}(i)\) denote the set of children of v(i). Delete v(i) from \(\mathscr {A}(i)\) and arrange the vertices of \(\mathscr {D}(i)\) from oldest to youngest at the top of the stack to form \(\mathscr {A}(i+1)\);
- (ii) \(\mathscr {O}(i+1) = \mathscr {O}(i) \cup \left\{ v(i)\right\} \);
- (iii) \(\mathscr {U}(i+1) = \mathscr {U}(i){\setminus }\mathscr {D}(i)\).
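The exploration above is elementary to implement. The following Python sketch (the representation of an ordered tree via children lists is our own) performs the depth-first exploration and also records every pair of simultaneously active vertices; these pairs form the set of permitted edges discussed next:

```python
from itertools import combinations

def explore(root, children):
    """Depth-first exploration of an ordered tree.  children[v] lists the
    children of v from oldest to youngest.  Returns the exploration order
    v(1), ..., v(m) and all pairs of simultaneously active vertices,
    i.e. the permitted edges."""
    permitted = set()
    stack = [root]                      # active stack A(i); top = last entry
    order = []
    while stack:
        # record every pair of vertices active at this step
        permitted.update(frozenset(pair) for pair in combinations(stack, 2))
        v = stack.pop()                 # v(i): vertex at the top of the stack
        order.append(v)
        # children go on top of the stack, oldest ending up at the top
        stack.extend(reversed(children.get(v, [])))
    return order, permitted

# root 1 with children [2, 3] (2 the older child), and 2 with child [4]
order, permitted = explore(1, {1: [2, 3], 2: [4]})
print(order)                                 # [1, 2, 4, 3]
print(sorted(sorted(e) for e in permitted))  # [[2, 3], [3, 4]]
```

Note that a tree edge never appears among the recorded pairs: a parent is removed from the stack at the very step its children become active.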
Write \(\mathfrak {P}(\mathbf {t})\) for the set of pairs of vertices \(\left\{ u,v\right\} \) such that \(u,v\in \mathscr {A}(i)\) for some \(1\le i\le m\), namely, both vertices are active at some step but have not yet been explored. Using terminology from [4], call this collection the set of permitted edges. Thus,
Write \(E(\mathbf {t})\) for the edge set of \(\mathbf {t}\). Now define the function \(L : \mathbb {T}_m^{{{\mathrm{ord}}}} \rightarrow \mathbb {R}_+\) by
Recall the (ordered) \(\mathbf {p}\)-tree distribution from (2.5). Using \(L(\cdot )\) to tilt this distribution results in the distribution
For future reference we fix notation for the various objects required in the proof below.
Definition 4.2
Fix \(m\ge 1\), \(a> 0\), and a probability mass function \(\mathbf {p}\) on [m]. We will write \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\) to denote a random graph with distribution \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ,\mathbf {p},a,[m])\). \({\mathscr {T}}^{\mathbf {p}, \star }_m\) will denote a random planar tree with the tilted \(\mathbf {p}\)-tree distribution (4.5), and \({\mathscr {T}}^{\mathbf {p}}_m\) will denote a random tree with the original \(\mathbf {p}\)-tree distribution (2.5).
Proposition 4.3
([20, Proposition 7.4]) Fix \(m\ge 1\), a probability mass function \(\mathbf {p}\) on [m], and \(a>0\). Consider a random connected graph on [m] constructed as follows:
-
(a)
First generate a rooted planar random tree \({\mathscr {T}}^{\mathbf {p}, \star }_m\) with distribution \({{\tilde{{{\mathrm{\mathbb {P}}}}}}}_{{{\mathrm{ord}}}}(\cdot )\) as in (4.5).
-
(b)
Let \(\mathfrak {P}({\mathscr {T}}^{\mathbf {p}, \star }_m)\) denote the permitted edge set of this random tree. Add each such edge \(\left\{ u,v\right\} \in \mathfrak {P}({\mathscr {T}}^{\mathbf {p}, \star }_m)\) with probability \(q_{uv}\) as in (4.1), independent across permitted edges.
Then, the resulting random graph has distribution \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}\) on \(\mathbb {G}_m^{{{\mathrm{con}}}}\), i.e., has the same distribution as \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\).
4.2 Convergence of connected components under weight assumptions
The aim of this section is to prove Gromov-weak convergence for the connected graph \({{\tilde{\mathscr {G}}}}_m(\mathbf {p},a)\) under regularity conditions on a and \(\mathbf {p}\) as \(m\rightarrow \infty \). We will assume that we have ordered the index set [m] so that \(p_1\ge p_2\ge \cdots \ge p_m >0\). Let
Assumption 4.4
As \(m\rightarrow \infty \), the following hold:
-
(i)
\(\sigma (\mathbf {p})\rightarrow 0\) and further for each fixed \(i\ge 1\), \(p_i/\sigma (\mathbf {p})\rightarrow \theta _i\) where \(\varvec{\theta }:= (\theta _1, \theta _2, \ldots )\) is an element of \(~\Theta \) as in (2.6).
-
(ii)
There is a constant \(\gamma >0\) such that \(a\sigma (\mathbf {p})\rightarrow \gamma \).
The following theorem is the main result of this section.
Theorem 4.5
Consider the connected random graph \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\) viewed as a metric measure space via the graph distance where each vertex v is assigned measure \(p_v\). Under Assumption 4.4,
where \(\mathscr {G}_{\infty }(\varvec{\theta },\gamma )\) is the random metric space defined in Definition 2.2 and convergence is in the Gromov-weak topology on metric spaces.
The rest of this section proves this result. We will throughout assume that \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\) has been constructed using Proposition 4.3.
4.2.1 Two constructions of \(\mathbf {p}\)-trees: exploration process and the birthday construction
We start by describing an explicit construction of the (untilted) \(\mathbf {p}\)-tree \(\mathscr {T}_m^{\mathbf {p}}\) first developed in [11]. At the end of this section we describe a second construction used later in the paper.
Exploration process construction The first construction is initiated by setting up a map \(\psi _{\mathbf {p}}:[0,1]^m\rightarrow \mathbb {T}_m^{{{\mathrm{ord}}}}\) as follows. Let \(\mathbf {u}:=(u_v:v\in [m])\) be a collection of distinct points in (0, 1). Define
Assume that there exists a unique point \(v^* \in [m] \) such that \(F^{\mathbf {p}}(u_{v^*}-) = \min _{s\in [0,1]} F^{\mathbf {p}}(s)\). Set \(v^*\) to be the root of the tree \(\psi _{\mathbf {p}}(\mathbf {u})\). Define \(y_i := u_i - u_{v^*}\) \( \text{ mod } 1\) for \(i \in [m]\), and
Then \(F^{{{\mathrm{exc}}},\mathbf {p}}(1-) = 0\) and \(F^{{{\mathrm{exc}}},\mathbf {p}}(s) > 0\) for \(s \in [0,1)\). Extend the definition of \(F^{{{\mathrm{exc}}},\mathbf {p}}\) to \(s \in [0,1]\) by setting \(F^{{{\mathrm{exc}}},\mathbf {p}}(1) = 0\). We use \(F^{{{\mathrm{exc}}},\mathbf {p}}\) to perform a depth-first exploration of an ordered tree; this exploration in turn defines the tree \(\psi _{\mathbf {p}}(\mathbf {u})\). As before, in this construction we carry along a set of explored vertices \(\mathscr {O}(i)\), active vertices \(\mathscr {A}(i)\) and unexplored vertices \(\mathscr {U}(i) = [m]{\setminus }(\mathscr {A}(i)\cup \mathscr {O}(i))\), for \(0\le i \le m\). We view \(\mathscr {A}(i)\) as the state of a vertical stack \(\mathscr {A}\) after the ith step in the depth-first search. Initialize with \(\mathscr {O}(0) = \emptyset \), \(\mathscr {A}(0) = \left\{ v^*\right\} \), \(\mathscr {U}(0) = [m] {\setminus }\left\{ v^*\right\} \), and define \(y^*(0) = 0\). At step \(i \in [m]\), let v(i) be the vertex on the top of the stack \(\mathscr {A}(i-1)\) and define \(y^*(i) := y^*(i-1)+p_{v(i)}\). Define \(\mathscr {D}(i) := \left\{ j \in [m] :y^*(i-1)< y_j < y^*(i) \right\} \). Suppose \(\mathscr {D}(i) = \left\{ u(j) :1\le j \le k\right\} \), where we have ordered these vertices in the sequence in which they are found in this interval, i.e.,
Update the stack \(\mathscr {A}\) as follows:
-
(i)
Delete v(i) from \(\mathscr {A}\).
-
(ii)
Push u(j), \(1\le j\le k\), to the top of \(\mathscr {A}\) sequentially (so that u(k) will be on the top of the stack at the end).
Let \(\mathscr {A}(i)\) be the state of the stack after the above operations. Update \(\mathscr {O}(i) := \mathscr {O}(i-1) \cup \left\{ v(i)\right\} \) and \(\mathscr {U}(i):= \mathscr {U}(i-1){\setminus }\mathscr {D}(i) \). See Fig. 5 for a pictorial description of this construction.
The tree \(\psi _{\mathbf {p}}(\mathbf {u}) \in \mathbb {T}_m^{{{\mathrm{ord}}}}\) is constructed by adding the edges \(\left\{ (v(i),v): i \in [m], v \in \mathscr {D}(i)\right\} \) and using the order prescribed by the above exploration to make the tree an ordered tree. The fact that this procedure actually produces a tree is proved in [11].
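A minimal sketch of this interval-based exploration follows (illustrative only, not from [11]): vertices are relabelled \(0,\ldots ,m-1\), the root \(v^*\) and the shifted marks \(y_j\) are supplied directly, and the marks are assumed generic enough that every vertex is eventually found, which holds a.s. for i.i.d. uniform marks.

```python
def psi_tree(p, y, root):
    """Build the tree psi_p(u) from weights p and shifted marks y (y[root] = 0)."""
    m = len(p)
    unexplored = set(range(m)) - {root}
    stack = [root]                        # active stack; top = end of list
    edges = []
    y_star = 0.0
    while stack:
        v = stack.pop()                   # v(i): vertex on top of the stack
        y_lo, y_star = y_star, y_star + p[v]
        # D(i): unexplored vertices whose mark falls in (y*(i-1), y*(i))
        D = sorted((j for j in unexplored if y_lo < y[j] < y_star),
                   key=lambda j: y[j])
        unexplored -= set(D)
        edges.extend((v, c) for c in D)   # children of v(i)
        stack.extend(D)                   # u(k), the last one found, ends on top
    return edges
```

For example, with weights \((0.5,0.3,0.2)\), root 0 and marks \((0,0.2,0.55)\), vertex 1 is found in the interval (0, 0.5) and vertex 2 in (0.5, 0.8), producing the path 0–1–2.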
Lemma 4.6
([11, Section 3.2]) Consider the map \(\psi _{\mathbf {p}}\). Let \(\mathbf {X}:=(X_v: v\in [m])\) be i.i.d. random variables distributed uniformly on (0, 1). Then the random tree \(\psi _{\mathbf {p}}(\mathbf {X})\) has distribution (2.5), i.e., \(\psi _{\mathbf {p}}(\mathbf {X}) \mathop {=}\limits ^{d} \mathscr {T}^{\mathbf {p}}\).
For future reference, coupled with the above construction, define \(\mathscr {S}(i):=\mathscr {A}(i-1){\setminus }\left\{ v(i)\right\} \) for \(i\in [m]\). Define the function \(A_m(\cdot )\) on [0, 1] via
Further let \({\bar{A}}_m(u) := a A_m (u)\), \(u \in [0,1]\), where a is the scaling constant in (4.1).
Birthday construction We now describe a second construction of \(\mathbf {p}\)-trees, first formulated in [27]. We urge the reader to skim this portion and return to it once she has reached Sect. 4.5. Let \(\mathbf {Y}:=(Y_0, Y_1, \ldots )\) be an infinite sequence of i.i.d. random variables with distribution \(\mathbf {p}\). Let \(R_0=0\) and for \(l\ge 1\), let \(R_l\) denote the l-th repeat time, i.e.,
Now consider the directed graph formed via the edges
It is easy to check that this gives a tree, which we view as rooted at \(Y_0\). Intuitively, the process of constructing the tree is as follows: the tree “grows” via the addition of new vertices sampled using \(\mathbf {p}\) until it stumbles across a “repeat” (a vertex already found), at which point it goes back to the first occurrence of this repeat and starts growing from that position. The following striking result was shown in [27].
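The growth procedure just described admits a short sketch (illustrative, not from [27]; vertices are relabelled \(0,\ldots ,m-1\), `rng` is a random source, and we stop once all vertices have been found, which determines the tree):

```python
import random

def birthday_tree(p, rng):
    """Grow a random tree from an i.i.d. p-sequence, restarting at each repeat."""
    m = len(p)
    verts = list(range(m))
    current = rng.choices(verts, weights=p)[0]   # Y_0, the root
    seen = {current}
    edges = []
    while len(seen) < m:
        y = rng.choices(verts, weights=p)[0]     # next sample Y_j
        if y in seen:
            current = y   # a repeat: jump back to its first occurrence
        else:
            edges.append((current, y))           # grow from the current position
            seen.add(y)
            current = y
    return edges
```

Each new vertex receives exactly one parent, so the output is always a tree on the m vertices; Theorem 4.7 below identifies its law as the \(\mathbf {p}\)-tree distribution.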
Theorem 4.7
([27, Lemma 1 and Theorem 2]) The random tree \(\mathscr {T}(\mathbf {Y})\) viewed as an object in \(\mathbb {T}_m\) is distributed as a \(\mathbf {p}\)-tree with distribution (2.4) independently of \(Y_{R_1-1}, Y_{R_2-1}, \ldots \) which are i.i.d. with distribution \(\mathbf {p}\).
Remark 5
The independence between the sequence \(Y_{R_1-1}, Y_{R_2-1}, \ldots \) and the constructed \(\mathbf {p}\)-tree \(\mathscr {T}(\mathbf {Y})\) is truly remarkable. In particular, suppose \(\mathscr {S}\) is a \(\mathbf {p}\)-tree with distribution as in (2.4) and, for fixed \(r\ge 1\), let \(\tilde{Y}_1, \tilde{Y}_2, \ldots , \tilde{Y}_r \) be i.i.d. with distribution \(\mathbf {p}\). Write \(\mathscr {S}_r\subset \mathscr {S}\) for the tree spanned by these vertices and the root. Let \(\mathscr {T}_r^{\mathscr {B}}\subset \mathscr {T}(\mathbf {Y})\) denote the subtree with vertex set \(\left\{ Y_0, Y_1, \ldots , Y_{R_r-1}\right\} \), namely the tree constructed in the first \(R_r\) steps. Here \(\mathscr {B}\) is a mnemonic for “birthday tree” and serves to distinguish this construction from a generic random tree model with r vertices. Then the above result (formalized as [27, Corollary 3]) implies that these can be jointly constructed as
We use this fact often in Sect. 4.5.
4.3 Uniform integrability of the tilt
The first use of the above construction of the \(\mathbf {p}\)-tree is to prove the following:
Proposition 4.8
Fix \(s\ge 1\) and consider the tilt \(L(\cdot )\) as in (4.4). Under Assumptions 4.4, there is a constant \(K:=K(s) < \infty \) such that
In particular, the collection of random variables \(\left\{ L(\mathscr {T}^{\mathbf {p}}_m): m\ge 1\right\} \) is uniformly integrable.
Proof
Writing out the tilt \(L(\cdot )\) explicitly, we have
say, where,
Here we have used \(({\mathrm {e}}^x-1)/x \le {\mathrm {e}}^x\) for \(x >0\) for the first inequality and the second inequality follows using the fact that \(\mathbf {t}\) is a tree, so that for each \((i,j) \in E(\mathbf {t})\) such that i is the parent of j, we have \(p_ip_j \le p_1 p_j\). By Assumption 4.4, we have \(ap_1 \rightarrow \gamma \theta _1\). In particular, there is a constant \(C> 0\) such that for all \(m\ge 1\), and \(\mathbf {t}\in {\mathbb {T}}_m^{{{\mathrm{ord}}}}\),
Now recall the functions \(A_m\) and \(\bar{A}_m:= aA_m\) from (4.6). Using the equivalent characterization of the permitted edge set from (4.3) and comparing this with (4.6), it is easy to check that
Now by the definition of \(F^{{{\mathrm{exc}}},\mathbf {p}}\),
By (4.6),
Thus
By Assumption 4.4(ii) and (4.10), for any \(s\ge 0\), there exists \(K=K(s) < \infty \) such that
Now the following lemma completes the proof of Proposition 4.8. \(\square \)
Lemma 4.9
There exists a positive constant \(c > 0\) such that for every \(m\ge 1\) and \(x\ge {\mathrm {e}}\),
Proof
Write \(\mathscr {R}(m):= \Vert F^{{{\mathrm{exc}}},\mathbf {p}}\Vert _{\infty }/\sigma (\mathbf {p})\) and as before, let \(\mathbf {X}=(X_v:v\in [m])\) be the collection of uniform random variables used to construct \(F^{\mathbf {p}}\). Write \(\mathbb {Q}[0,1]\) for the set of rationals in [0, 1]. Then note that
We start by analyzing \(\mathscr {R}_1(m)\). For fixed \(q\in \mathbb {Q}[0,1]\), define the collection of m functions
Note that for all \(j\in [m]\), \(s_q^j:[0,1]\rightarrow [-1,1]\), with \({{\mathrm{\mathbb {E}}}}(s_q^j(X_j)) =0\) and further
Also note that
If we can show that
then standard concentration inequalities for the maxima in empirical processes [49, Theorem 1.1(b)] will imply the existence of a constant \(c_1>0\) such that for all \(m\ge 1\) and \(x >0\),
Let us now prove (4.14). In fact we will show the stronger result:
Let \(X_{(1)}< X_{(2)}< \cdots < X_{(m)}\) denote the order statistics of \(\mathbf {X}\) and let \(\pi \) denote the corresponding permutation of [m], namely \(X_{(i)} = X_{\pi (i)}\). Note that
Hence
We first analyze \(\mathscr {R}_{11}(m)\). By the DKW inequality [54],
By the Cauchy–Schwarz inequality, \(m\sigma ^2(\mathbf {p})\ge (\sum _i p_i)^2=1\). Thus \(\sup _{m\ge 1} {{\mathrm{\mathbb {E}}}}(\mathscr {R}_{11}(m)) < \infty \). We now analyze \(\mathscr {R}_{12}(m)\). Since
for any \(i\in [m]\) we have
by simply expanding the square. Now note that since \(\pi \) is a uniform random permutation of the vertex set [m], for any fixed \(i\ge 1\) we also have
Thus
Now assuming that we construct \(\pi \) by sequentially sampling without replacement from [m], let \(\mathscr {F}_k\) denote the \(\sigma \)-field generated by \((\pi (1), \pi (2), \ldots , \pi (k))\) for \(0\le k\le m-1\). Let \(M_0 = 0\) and consider the sequence
It is easy to check that \(\left\{ M_k:0\le k\le m-1\right\} \) is a martingale with respect to the filtration \(\left\{ \mathscr {F}_k: 0\le k\le m-1\right\} \). Then (4.17) and Doob’s \(\mathbb {L}^2\)-maximal inequality yield
Using (4.16) with \(i=m/2\) then gives \({{\mathrm{\mathbb {E}}}}(\mathscr {R}_{12}(m)) \le 16\) for all \(m\ge 1\). Thus we have shown that \(\sup _{m\ge 1} \max ({{\mathrm{\mathbb {E}}}}(\mathscr {R}_{11}(m)),{{\mathrm{\mathbb {E}}}}(\mathscr {R}_{12}(m))) < \infty \). This proves (4.14) and thus (4.15).
To complete the proof of the lemma, we need to get a tail bound on \(\mathscr {R}_2(m)\) appearing in (4.13). As before, using [49], it is enough to show \(\sup _{m\ge 1} {{\mathrm{\mathbb {E}}}}(\mathscr {R}_2(m))< \infty \). However, note that
We now use (4.14) together with Assumption 4.4 to complete the proof. \(\square \)
4.4 Another construction of \(\tilde{\mathscr {G}}_m(\mathbf {p}, a)\) and a modification
In this section, we start by giving a more explicit description of the algorithm described in Proposition 4.3 via adding permitted edges to a tilted \(\mathbf {p}\)-tree. We first set up some notation. As a matter of convention, we will view ordered rooted trees via their planar embedding, using the associated ordering to determine the relative locations of siblings of an individual. We think of the leftmost sibling as the “oldest”. Further, in a depth-first exploration, we explore the tree from left to right. Now given a planar rooted tree \(\mathbf {t}\in \mathbb {T}_m\), let \(\rho \) denote the root and for every vertex \(v\in [m]\), let \([\rho ,v]\) denote the path connecting \(\rho \) to v in the tree. Given this path and a vertex \(i\in [\rho ,v]\), write \(\mathscr {R}\mathscr {C}(i,[\rho ,v])\) for the set of all children of i which fall to the right of \([\rho ,v]\). Thus in the depth-first exploration of the tree, when we get to v,
denotes the set of endpoints of all permitted edges emanating from v. Define
The function \(A_m(\cdot )\) defined in (4.6) is intimately connected to \(\mathfrak {G}_{(m)}(\cdot )\). More precisely, let \((v(1), v(2), \ldots , v(m))\) denote the order in the depth-first exploration of the tree. Let \(y^*(0)=0\) and \(y^*(i) = y^*(i-1) + p_{v(i)}\). Define
Then the function \(A_{(m)}(\cdot )\) associated with an ordered \(\mathbf {p}\)-tree has the same distribution as the function \(A_{m}(\cdot )\) associated with the tree \(\psi _{\mathbf {p}}(\mathbf {X})\), where \(\mathbf {X}=(X_v: v\in [m])\) are i.i.d. random variables uniformly distributed on (0, 1).
Finally, define the function
While all of these objects depend on the tree \(\mathbf {t}\), we suppress this dependence to ease notation. Now Proposition 4.3 implies we can construct \({{\tilde{\mathscr {G}}}}_m(\mathbf {p},a)\) via the following five steps:
-
(i)
Tilted \(\mathbf {p}\) -tree Generate a tilted ordered \(\mathbf {p}\)-tree \(\mathscr {T}^{\mathbf {p},\star }_m\) with distribution (4.5). Now consider the (random) objects \(\mathfrak {P}(v,\mathscr {T}^{\mathbf {p},\star }_m)\) for \(v\in [m]\) and the corresponding (random) functions \(\mathfrak {G}_{(m)}(\cdot )\) on [m] and \(A_{(m)}(\cdot )\) on [0, 1].
-
(ii)
Poisson number of possible surplus edges Let \(\mathscr {P}\) denote a rate one Poisson process on \(\mathbb {R}_+^2\) and define
$$\begin{aligned} \bar{A}_{(m)}\cap {\mathscr {P}}:= \left\{ (s,t)\in \mathscr {P}: s\in [0,1], t\le \bar{A}_{(m)}(s)\right\} . \end{aligned}$$
(4.21)
Write \(\bar{A}_{(m)}\cap {\mathscr {P}} = \left\{ (s_j,t_j):1\le j\le N_{(m)}^\star \right\} \) where \(N_{(m)}^\star = |\bar{A}_{(m)}\cap {\mathscr {P}}|\). We will now use the set \(\left\{ (s_j, t_j):1\le j\le N_{(m)}^\star \right\} \) to generate pairs of points \(\left\{ (\mathscr {L}_j,\mathscr {R}_j): 1\le j\le N_{(m)}^\star \right\} \) in the tree that will be joined to form the surplus edges.
-
(iii)
“First” endpoints Fix j and suppose \(s_j \in (y^*(i-1), y^*(i)]\) for some \(i\ge 1\), where \(y^*(i)\) is as given right above (4.19). Then the first endpoint of the surplus edge corresponding to \((s_j, t_j)\) is \(\mathscr {L}_j:= v(i)\).
-
(iv)
“Second” endpoints Note that in the interval \((y^*(i-1), y^*(i)]\), the function \(\bar{A}_{(m)}\) is of constant height \(a\mathfrak {G}_{(m)}(v(i))\). We will view this height as being partitioned into sub-intervals of length \(a p_u\) for each \(u\in \mathfrak {P}(v(i),\mathscr {T}^{\mathbf {p},\star }_m)\), the collection of endpoints of permitted edges emanating from \(\mathscr {L}_j = v(i)\). (Assume that this partitioning is done according to some preassigned rule, e.g., using the order of the vertices in \(\mathfrak {P}(v(i),\mathscr {T}^{\mathbf {p},\star }_m)\)). Suppose \(t_j\) belongs to the interval corresponding to u. Then the second endpoint is \(\mathscr {R}_j = u\). Form an edge between \((\mathscr {L}_j, \mathscr {R}_j)\).
-
(v)
In this construction, it is possible that more than one surplus edge is created between the same pair of vertices. Remove any multiple surplus edges.
Lemma 4.10
The above construction gives a random graph with distribution \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\) as in Definition 4.2. Further, conditional on \(\mathscr {T}^{\mathbf {p},\star }_m\):
-
(a)
\(N_{(m)}^\star \) has Poisson distribution with mean \(\Lambda _{(m)}(\mathscr {T}_m^{\mathbf {p},\star })\) where \(\Lambda _{(m)}\) is as in (4.20).
-
(b)
Conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\) and \(N_{(m)}^\star =k\), the first endpoints \((\mathscr {L}_j: 1\le j\le k)\) can be generated in an i.i.d. fashion by sampling from the vertex set [m] with probability distribution
$$\begin{aligned} {\mathscr {J}}^{(m)}(v) \propto p_v \mathfrak {G}_{(m)}(v), \quad v\in [m]. \end{aligned}$$
-
(c)
Conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\), \(N_{(m)}^\star =k\) and the first endpoints \((\mathscr {L}_j: 1\le j\le k)\), the second endpoints can be generated in an i.i.d. fashion where the probability that \(\mathscr {R}_j = u\) is proportional to \(p_u\) if u is a right child of some individual \(y\in [\rho ,\mathscr {L}_j]\).
Proof
The assertions follow from Proposition 4.3 and standard properties of Poisson processes. \(\square \)
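Conditional on the tilted tree, the description in Lemma 4.10(a)–(c) can be sketched directly (an illustrative sketch; the function name, the dictionary encoding of \(\mathfrak {P}(v,\cdot )\) and the inversion sampler for the Poisson variable are our own choices):

```python
import math
import random

def sample_surplus_edges(p, permitted, a, rng):
    """Sample surplus edges as in Lemma 4.10: p[v] is the weight of v,
    permitted[v] lists the endpoints of permitted edges emanating from v,
    and a is the scaling constant."""
    G = {v: sum(p[u] for u in permitted[v]) for v in p}      # G_(m)(v)
    lam = a * sum(p[v] * G[v] for v in p)                    # Lambda_(m)(t)
    # N* ~ Poisson(lam), sampled by inversion
    n, u, term = 0, rng.random(), math.exp(-lam)
    acc = term
    while u > acc:
        n += 1
        term *= lam / n
        acc += term
    verts = [v for v in p if G[v] > 0]
    edges = []
    for _ in range(n):
        # first endpoint L_j: probability proportional to p_v * G_(m)(v)
        L = rng.choices(verts, weights=[p[v] * G[v] for v in verts])[0]
        # second endpoint R_j: proportional to p_u over the permitted set of L
        R = rng.choices(permitted[L], weights=[p[u] for u in permitted[L]])[0]
        edges.append((L, R))
    return edges
```

By construction every sampled pair \((\mathscr {L}_j,\mathscr {R}_j)\) is a permitted edge, and taking \(a=0\) yields no surplus edges.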
The modified space \(\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\): We construct a modified graph \(\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\) as follows:
- (i\(^\prime \)):
-
Generate a tilted ordered \(\mathbf {p}\)-tree \(\mathscr {T}_m^{\mathbf {p},\star }\) with distribution (4.5).
- (ii\(^\prime \)):
-
Conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\), generate \(N_{(m)}^\star \sim \mathsf{Poi}(\Lambda _{(m)}(\mathscr {T}_m^{\mathbf {p},\star }))\).
- (iii\(^\prime \)):
-
Conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\) and \(N_{(m)}^\star =k\), generate the first endpoints \((\mathscr {L}_j: 1\le j\le k)\) in an i.i.d. fashion by sampling from the vertex set [m] with probability distribution
$$\begin{aligned} {\mathscr {J}}^{(m)}(v) \propto p_v \mathfrak {G}_{(m)}(v), \quad v\in [m]. \end{aligned}$$
- (iv\(^\prime \)):
-
Conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\), \(N_{(m)}^\star =k\) and the first endpoints \((\mathscr {L}_j: 1\le j\le k)\), generate the second endpoints in an i.i.d. fashion where conditional on \(\mathscr {L}_j = v\), the probability distribution of \(\mathscr {R}_j\) is given by
$$\begin{aligned} Q_{v}^{(m)}(y):= {\left\{ \begin{array}{ll} \sum _{u} p_u \mathbbm {1}\left\{ u\in \mathscr {R}\mathscr {C}(y,[\rho ,v])\right\} /\mathfrak {G}_{(m)}(v) &{} \text { if } y\in [\rho ,v],\\ 0 &{} \text { otherwise }. \end{array}\right. } \end{aligned}$$
(4.22)
Identify \(\mathscr {L}_j\) and \(\mathscr {R}_j\) for \(1\le j\le k\).
Thus, instead of adding an edge between \(\mathscr {L}_j\) and one of the right children of the path \([\rho ,\mathscr {L}_j]\) as in Lemma 4.10(c), we identify \(\mathscr {L}_j\) with the parent of this right child, which lies on \([\rho ,\mathscr {L}_j]\). Also, we do not remove any multiple surplus edges. This construction turns out to be easier to work with. \(\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\) will be viewed as a metric measure space via the graph distance, where vertex v has mass \(\sum p_u\), the sum being taken over all \(u\in [m]\) which have been identified with v. Intuitively it is clear that \(\sigma (\mathbf {p}){{\tilde{\mathscr {G}}}}_m(\mathbf {p},a)\) and \(\sigma (\mathbf {p})\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\) are “close”. This is formalized in Lemma 4.12.
Remark 6
At this point we urge the reader to go back to Sect. 2.3.1 and remind themselves of the four steps in the construction of the limit metric space \(\mathscr {G}_{\infty }(\varvec{\theta },\gamma )\), and note the similarities to the construction above. In particular, we make note of the following:
-
(a)
For finite m, we essentially tilt the \(\mathbf {p}\)-tree distribution via the functional \({\bar{L}}(\mathscr {T}_m^{\mathbf {p}}) = \exp (a{{\mathrm{\mathbb {E}}}}[\mathfrak {G}_{(m)}(V_1)\ |\ \mathscr {T}_{m}^{\mathbf {p}}])\) (the term \(\mathbb {I}(\mathscr {T}_m^{\mathbf {p}})\) as in (4.8) can be ignored as we will see in Lemma 4.14), and the number of shortcut points selected, namely \(N_{(m)}^\star \), has a Poisson distribution with mean \(a{{\mathrm{\mathbb {E}}}}(\mathfrak {G}_{(m)}(V_1)\ |\ \mathscr {T}_{m}^{\mathbf {p},\star })\). Here \(V_1\) has distribution \(\mathbf {p}\).
-
(b)
For the limit object, we tilt the measure using the functional \(L_{(\infty )}(\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U}) = \exp (\gamma {{\mathrm{\mathbb {E}}}}[\mathfrak {G}_{(\infty )}(V_1)\ |\ \mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U}])\), and the number of shortcuts, namely \(N_{(\infty )}^\star \), follows a Poisson distribution with mean \(\gamma {{\mathrm{\mathbb {E}}}}(\mathfrak {G}_{(\infty )}(V_1)\ |\ \mathscr {T}_{(\infty )}^{\varvec{\theta },\star }, \varvec{U}^{\star })\). Here \(V_1\) is distributed according to the mass measure \(\mu ^\star \) on \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\).
As a brief warm-up to the kind of calculations in the next section, we now prove a simple lemma on tightness of the number of surplus edges. We will prove distributional convergence of this object in the next section.
Lemma 4.11
Under Assumption 4.4, the sequence \(\left\{ N_{(m)}^\star :m\ge 1\right\} \) is tight, where \(N_{(m)}^\star \) is as given below (4.21).
Proof
Fix \(r > 1\). First note that conditional on \(\mathscr {T}_m^{\mathbf {p},\star } =\mathbf {t}\), \(N_{(m)}^\star \) has a Poisson distribution with mean \(\Lambda _{(m)}(\mathbf {t})\). Thus, there exists a constant \(C = C(r) \) such that
Further, note that the tilt \(L(\mathbf {t})\) in (4.4) satisfies
where \(1\le \mathbb {I}(\mathbf {t}) \le C^\prime \) for a fixed constant \(C^\prime \) independent of m by (4.9). Thus, Proposition 4.8 shows that
for any \(\gamma >0\). In particular,
which proves tightness of \(\left\{ N_{(m)}^\star :m\ge 1\right\} \). \(\square \)
We conclude this section by proving a lemma which essentially says that it is enough to work with the modified space \(\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\).
Lemma 4.12
Recall the five-step construction of \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\). Construct \(\mathscr {G}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\) on the same space by coupling it with \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\) in the obvious way. Then, under Assumption 4.4,
Proof
Define the event
In other words, F describes the event in which \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\) does not have multiple surplus edges. It is easy to check that
Thus, Lemma 4.11 combined with the assumption \(\sigma (\mathbf {p})\rightarrow 0\) yields the result provided we show that \({{\mathrm{\mathbb {P}}}}(F^c)\rightarrow 0\). To this end, note that
for every \(u\in [m]\), \(v\in \mathfrak {P}(u,\mathbf {t})\), and some universal positive constant c. Hence
Since \(\sigma (\mathbf {p})\rightarrow 0\) and \(a\sigma (\mathbf {p})\rightarrow \gamma \), \({{\mathrm{\mathbb {P}}}}(F^c)\rightarrow 0\) as desired. \(\square \)
4.5 Completing the Proof of Theorem 4.5
At this point we urge the reader to remind themselves of (a) the four steps in the construction of the limit object in Sect. 2.3, (b) the birthday construction of \(\mathbf {p}\)-trees at the end of Sect. 4.2.1, and (c) the definition in Sect. 2.1.2 of the Gromov-weak topology on the space \(\mathscr {S}_*\) of complete separable measured metric spaces. Fix \(\ell \ge 1\) and a bounded continuous function \(\phi :\mathbb {R}_+^{\ell ^2}\rightarrow \mathbb {R}\). Let \(\Phi \) be as in (2.3). To simplify notation, we will write \(\Phi (X)\) instead of \(\Phi (X,d,\mu )\). To prove Theorem 4.5, we need to show that for every fixed \(\ell \ge 1\) and functions \(\phi \) and \(\Phi \) as above,
where we sample \(\ell \) points according to \(\mathbf {p}\) in \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\), while we sample \(\ell \) points according to the measure on \(\mathscr {G}_{\infty }(\varvec{\theta },\gamma )\) inherited from the mass measure. Now recall the explicit five-step construction of \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\) in Sect. 4.4 starting from the tilted \(\mathbf {p}\)-tree \(\mathscr {T}_m^{\mathbf {p},\star }\) and the Poisson number of surplus edges \(N_{(m)}^\star \). Fix \(K\ge 1\) and note that
Using Lemma 4.11, we can choose K large (independent of m) to make the bound on the right arbitrarily small. Further, in view of Lemma 4.12, we can work with \({\mathscr {G}}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\) instead of \({\tilde{\mathscr {G}}}_m(\mathbf {p},a)\). Hence it suffices to prove the following convergence for every fixed \(k\ge 0\):
To analyze this term, we first need to set up some notation.
Note that both the finite-m object and the limit object are obtained by starting with a tree (a discrete tree for finite m and a real tree in the limit) and sampling a random number of pairs of points to create “shortcuts”. Recall the space \(\mathbf {T}_{IJ}^*\) in Sect. 2.1.3. Fix \(k\ge 0\) and let \(\mathbf {t}\) be an element in \(\mathbf {T}_{I,(k+\ell )}^*\) for some \(I\ge 0\); “I” will not play a role in the definition below. Write \(\rho \) for the root and denote the leaves by
Also recall that for each i, there is a probability measure \(\nu _{\mathbf {t},i }(\cdot )\) on the path \([\rho , x_i]\) for \(1\le i\le k+\ell \). For \(1\le i\le k\), sample \(y_i\) according to the distribution \(\nu _{\mathbf {t},i}(\cdot )\) independently for different i and connect \(x_i\) and \(y_i\). Let \(\mathbf {t}'\) denote the (random) tree thus obtained and let \(d_{\mathbf {t}'}\) denote the graph distance on \(\mathbf {t}'\). Define the function \(g^{(k)}_\phi :\mathbf {T}_{I,(k+\ell )}^*\rightarrow \mathbb {R}\) by
In words, we look at the expectation of \(\phi \) applied to the pairwise distances between the last \(\ell \) leaves after sampling \(y_i\) on the path \([\rho , x_i]\) for \(1\le i\le k\) and connecting \(x_i\) and \(y_i\). Note that here the expectation is only taken over the choices of \(y_i\).
Next, given \(\mathbf {t}\in \mathbb {T}_m^{{{\mathrm{ord}}}}\) and \(\varvec{v}:=(v_1, \ldots , v_{r})\) with \(v_i\in [m]\), set \(\mathbf {t}(\varvec{v})\) to be the subtree of \(\mathbf {t}\) spanning the vertices \(\varvec{v}\) and the root provided \(v_1, \ldots , v_{r}\) are all distinct and none of them is an ancestor of another vertex in \(\varvec{v}\). When this condition fails, set \(\mathbf {t}(\varvec{v})=\partial \).
Now, conditional on \(\mathscr {T}_m^{\mathbf {p},\star }\), construct a tree \(\mathscr {T}_m^{\mathbf {p},\star }({\widetilde{\mathbf {V}}}_{k,k+\ell }^{(m)})\) where
-
(i)
\({\widetilde{\mathbf {V}}}_{k,k+\ell }^{(m)}:= ({\bar{V}}_1^{(m)}, \ldots , {\bar{V}}_k^{(m)},V_{k+1}^{(m)},\ldots V_{k+\ell }^{(m)})\);
-
(ii)
\({\bar{V}}_i^{(m)}\), \(1\le i\le k\) are i.i.d. with the distribution \(\mathscr {J}^{(m)}(\cdot )\) as in Lemma 4.10(b); and
-
(iii)
\(V_{k+1}^{(m)}, \ldots V_{k+\ell }^{(m)}\) are i.i.d. with distribution \(\mathbf {p}\). Further, \({\bar{V}}_1^{(m)}, \ldots , {\bar{V}}_k^{(m)},V_{k+1}^{(m)},\ldots V_{k+\ell }^{(m)}\) are jointly independent.
We will drop the superscript and simply write \(V_i\), \({\bar{V}}_i\), etc. when there is no scope for confusion. Note that \(\mathscr {T}_m^{\mathbf {p},\star }({\widetilde{\mathbf {V}}}_{k,k+\ell })=\partial \) whenever \({\bar{V}}_1, \ldots , {\bar{V}}_k,V_{k+1},\ldots , V_{k+\ell }\) are not all distinct or one of them is an ancestor of another vertex in \({\widetilde{\mathbf {V}}}_{k,k+\ell }\). In either of these two cases, the subtree spanned by the root and \({\widetilde{\mathbf {V}}}_{k,k+\ell }\) will have fewer than \(k+\ell \) leaves. We adopt the convention of setting \(\mathscr {T}_m^{\mathbf {p},\star }({\widetilde{\mathbf {V}}}_{k,k+\ell })=\partial \) to make sure that we are always working with a bona fide element of \(\mathbf {T}_{I,(k+\ell )}^*\). However, this makes no difference at all since, by [27, Corollary 15],
where \(V_1,\ldots ,V_{k+\ell }\) are i.i.d. \(\mathbf {p}\) random variables. Now \(\mathscr {T}_m^{\mathbf {p},\star }\) is obtained by tilting the distribution of \(\mathscr {T}_m^{\mathbf {p}}\), where the tilt \(L(\cdot )\) is uniformly integrable (Proposition 4.8). Further, \({\bar{V}}_i\), \(1\le i\le k\) are i.i.d. with the distribution \(\mathscr {J}^{(m)}(v)\propto p_v\mathfrak {G}_{(m)}(v)\) where \(\max _v \mathfrak {G}_{(m)}(v)\) is stochastically dominated by \(\Vert F^{{{\mathrm{exc}}}, \mathbf {p}}\Vert _{\infty }\) (see (4.12) and the discussion below (4.19)). It thus follows that
Using (4.25), we see that
where \({{\mathrm{\mathbb {E}}}}_{\mathbf {p},\star }(\cdot ):= {{\mathrm{\mathbb {E}}}}(\cdot |\mathscr {T}_{m}^{\mathbf {p},\star })\). At this point, we also define \({{\mathrm{\mathbb {E}}}}_{\mathbf {p}}(\cdot ):={{\mathrm{\mathbb {E}}}}(\cdot |\mathscr {T}_{m}^{\mathbf {p}})\) where \(\mathscr {T}_{m}^{\mathbf {p}}\) has the original ordered \(\mathbf {p}\)-tree distribution (2.5).
Now since \(\mathscr {J}^{(m)}(v)\propto p_v \mathfrak {G}_{(m)}(v)\), we see that the inner expectation in (4.26) can be simplified as
where \(\mathbf {V}_{k, k+\ell } = (V_1, V_2, \ldots V_{k+\ell })\), and \(V_i\) are i.i.d. with distribution \(\mathbf {p}\). Since \(\mathscr {T}_m^{\mathbf {p},\star }\) is sampled according to a tilted \(\mathbf {p}\)-tree distribution, combining (4.26), and (4.27), we get the following result:
Lemma 4.13
Fix \(k\ge 0\). Define the events \(A^{\star }_{(m),k}=\{N^{\star }_{(m)}=k\}\) and \(A_{(m),k}=\{N_{(m)}=k\}\). Then
where \(C_m = \left\{ {{\mathrm{\mathbb {E}}}}(L(\mathscr {T}_m^{\mathbf {p}}))\right\} ^{-1}\), and L is the tilt as in (4.4). Further, conditional on \(\mathscr {T}_m^{\mathbf {p}}\), \(N_{(m)}\) has a Poisson distribution with mean \(\Lambda _{(m)}(\mathscr {T}_m^{\mathbf {p}})= a{{\mathrm{\mathbb {E}}}}_{\mathbf {p}}(\mathfrak {G}_{(m)}(V))\) as in (4.20), where V has distribution \(\mathbf {p}\) independent of \(\mathscr {T}_m^{\mathbf {p}}\).
This formula will be the starting point for proving (4.23). Recall from (4.8) that the tilt \(L(\cdot ) = \mathbb {I}(\cdot ) {\bar{L}}(\cdot )\), where \(\mathbb {I}(\cdot )\) has a messy form given by (4.9). We have already seen in (4.10) that under Assumption 4.4, \(\mathbb {I}(\cdot )\le C\) for a constant C and all \(m\ge 1\). The following lemma coupled with the dominated convergence theorem will now imply that we can replace L with \({\bar{L}}\) in Lemma 4.13 and in all the subsequent analysis below:
Lemma 4.14
Under Assumption 4.4, \(\mathbb {I}(\mathscr {T}_m^{\mathbf {p}}) \mathop {\longrightarrow }\limits ^{\mathrm {P}}1\) as \(m\rightarrow \infty \).
Proof
By (4.9) we have \(1\le \mathbb {I}(\mathscr {T}_m^{\mathbf {p}}) \le \exp (a \sum _{(k,l)\in E(\mathscr {T}_m^{\mathbf {p}})} p_k p_l)\). Thus it is enough to show that \(a{{\mathrm{\mathbb {E}}}}(\sum _{(k,l)\in E(\mathscr {T}_m^{\mathbf {p}})} p_k p_l) \rightarrow 0\). Now for \(k\ne l\in [m]\), write \(\left\{ k\leadsto l\right\} \) for the event in which l is a child of k in \(\mathscr {T}_m^{\mathbf {p}}\). Then standard properties of \(\mathbf {p}\)-trees [59, Section 6.2] imply that for \(k\ne l_1\ne l_2\in [m]\)
Thus
as \(m\rightarrow \infty \) by Assumption 4.4. \(\square \)
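For completeness, the estimate behind the last two displays can be sketched as follows; this reconstruction (ours) uses only the \(\mathbf {p}\)-tree property \({{\mathrm{\mathbb {P}}}}(k\leadsto l)=p_k\) together with \(a\sigma (\mathbf {p})\rightarrow \gamma \) and \(\sigma (\mathbf {p})\rightarrow 0\) from Assumption 4.4:
$$\begin{aligned} a{{\mathrm{\mathbb {E}}}}\Big (\sum _{(k,l)\in E(\mathscr {T}_m^{\mathbf {p}})} p_k p_l\Big ) = a\sum _{k\ne l} p_k p_l\,{{\mathrm{\mathbb {P}}}}(k\leadsto l) = a\sum _{k\ne l} p_k^2\, p_l \le a\,\sigma (\mathbf {p})^2 = \big (a\sigma (\mathbf {p})\big )\cdot \sigma (\mathbf {p})\rightarrow \gamma \cdot 0 = 0. \end{aligned}$$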
Write \({{\mathrm{\mathbb {E}}}}_{\varvec{\theta }}\) for expectation conditional on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and the random variables \(U_j^{(i)}\) that encode the order on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\), i.e.,
and note that \({{\mathrm{\mathbb {E}}}}\left[ \Phi \left( {\mathscr {G}}_{\infty }(\varvec{\theta }, \gamma )\right) \mathbbm {1}\left\{ N_{(\infty )}^\star =k\right\} \right] \) has an expression similar to (4.28). Indeed, from the construction of \({\mathscr {G}}_{\infty }(\varvec{\theta }, \gamma )\) given in Sect. 2.3.1, it follows that
where (a) \(\mathfrak {G}_{(\infty )}(\cdot )\) is as defined in (2.7), (b) \(L_{(\infty )}(\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})\) is as in (2.9), (c) \(C_{\infty }=[{{\mathrm{\mathbb {E}}}}L_{(\infty )}(\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})]^{-1}\), (d) \(V_i^{(\infty )}\) are i.i.d. random variables sampled from \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) using the mass measure \(\mu \), (e) \({\mathbf {V}}_{k,k+\ell }^{(\infty )}=(V_1^{(\infty )},\ldots , V_{k+\ell }^{(\infty )})\), (f) \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}({\mathbf {V}}_{k,k+\ell }^{(\infty )})\) is the tree spanned by the root of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and \({\mathbf {V}}_{k,k+\ell }^{(\infty )}\), viewed as an element of \(\mathbf {T}_{0, k+\ell }^{*}\) by declaring the leaf values to be \(\mathfrak {G}_{(\infty )}(V_j^{(\infty )})\) and the root-to-leaf measures to be \(Q_{V_j}^{(\infty )}(\cdot )\) as in (2.8), and (g) conditional on \((\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})\), \(N_{(\infty )}\) has a Poisson distribution with mean
Finally, observe that \(L_{(m)}(\cdot )=\mathbb {I}_{(m)}(\cdot ){\bar{L}}_{(m)}(\cdot )\) where \(\bar{L}_{(m)}(\mathbf {t})=\exp (a{{\mathrm{\mathbb {E}}}}_{\mathbf {p}}[\mathfrak {G}_{(m)}(V_1^{(m)})])\), and recall that \(a\sigma (\mathbf {p})\rightarrow \gamma \) (Assumption 4.4) and \({L}_{(m)}(\mathscr {T}_m^{\mathbf {p}})\) is uniformly integrable (Proposition 4.8). Therefore, combining Lemmas 4.13 and 4.14 and (4.30) with Theorem 4.15 stated below yields (4.23) and thus completes the proof of Theorem 4.5.
Theorem 4.15
For each \(k\ge 0\),
The proof of this theorem is accomplished via the following two theorems, for which we need to set up some notation. Fix \(I\ge 0\) and \(J\ge 1\). We will assume that \(\mathscr {T}_m^{\mathbf {p}}\) has been constructed via the birthday construction (see Sect. 4.2.1). This construction gives rise to an unordered \(\mathbf {p}\)-tree. To obtain an ordered \(\mathbf {p}\)-tree from this, let \(\mathscr {D}_{(m)}(i)\) denote the set of children of i in the \(\mathbf {p}\)-tree for every vertex i. Generate i.i.d. uniform random variables \(\varvec{U}_{(m)}(i):=\left\{ U_{(m),i}(v): v\in \mathscr {D}_{(m)}(i)\right\} \), independent across the vertices \(i\in \mathscr {T}_m^{\mathbf {p}}\). Think of these as “ages” of the children and arrange the children from left to right in decreasing order of their ages. We can construct the function \(\mathfrak {G}_{(m)}(\cdot )\) as in (4.18) once this ordering has been defined.
Now recall that the right hand side of (4.7) tells us how to sample J i.i.d. points \((V_1^{(m)}, \ldots , V_J^{(m)})\) from distribution \(\mathbf {p}\) and the corresponding spanning subtree \(\mathscr {T}_J^{\mathscr {B}}\) from the tree using the repeat time sequence \(\left\{ R_k^{(m)}: k\ge 1\right\} \). Thus, by the Jth repeat time \(R_J\), we would have sampled all J vertices \(V_i^{(m)} = Y_{R_{i}-1}\). View \(\mathscr {T}_J^{\mathscr {B}}\) as a tree with edge lengths and marked vertices as follows: (a) rescale every edge to have length \(\sigma (\mathbf {p})\); (b) relabel \(V_j\) as \(j+\) and the root as \(0+\); (c) mark only those vertices \(i\le I\) which occur in \(\mathscr {T}_J^{\mathscr {B}}\); (d) for all \(1\le j \le J\), set the leaf values to be \(\mathfrak {G}_{(m)}(V_j)/\sigma (\mathbf {p})\), and assign the measure \(\nu ^{(m)}_j:= Q^{(m)}_{V_j}\) as defined in (4.22) to the path connecting the root to \(V_j\), i.e., to the path \([0+, j+]\) .
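The birthday construction invoked above can be made concrete in a few lines of code. The following Python sketch is our own illustration (function and variable names are not from the paper): it draws vertices i.i.d. from \(\mathbf {p}\), attaches each previously unseen vertex to the immediately preceding draw, and records \(V_k = Y_{R_k-1}\) at each repeat time \(R_k\); the completed tree is then distributed as a \(\mathbf {p}\)-tree.

```python
import random

def birthday_tree(p, rng, num_points=1):
    """Sample a p-tree via the birthday construction.

    Draw vertices Y_0, Y_1, ... i.i.d. from p.  Each previously unseen
    vertex is attached to the immediately preceding draw; the k-th time
    a repeat occurs (time R_k), the preceding draw Y_{R_k - 1} is
    recorded as the sampled point V_k."""
    m = len(p)
    verts = list(range(m))
    y0 = rng.choices(verts, weights=p)[0]
    seen = {y0}
    parent = {}                       # child -> parent; y0 is the root
    points = []
    prev = y0
    while len(seen) < m or len(points) < num_points:
        y = rng.choices(verts, weights=p)[0]
        if y in seen:
            if len(points) < num_points:
                points.append(prev)   # V_k = Y_{R_k - 1}
        else:
            seen.add(y)
            parent[y] = prev          # attach the new vertex
        prev = y
    return parent, y0, points
```

On any strictly positive \(\mathbf {p}\) this returns a child-to-parent map describing a tree on [m], its root, and the first few sampled points.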
Definition 4.16
Fix \(I\ge 0, J\ge 1\) and consider the tree constructed as above. Set \(r_{IJ}^{(m)}=\mathscr {R}_{IJ}^{(m)}=\partial \) if some \(j+\) is not a leaf or if some leaf has been multiply labeled. Otherwise, write \(r_{IJ}^{(m)}\in \mathbf {T}_{IJ}\) for the tree with edge lengths and at most I labeled hubs, namely where we retain information in (a) and (b) above. Write \(\mathscr {R}_{IJ}^{(m)}\in \mathbf {T}_{IJ}^*\) for the tree where we retain all information (a)–(d) above, namely the leaf values \(\mathfrak {G}_{(m)}(V_j)\) and the root-to-leaf probability measures \(Q_{V_j}^{(m)}(\cdot )\) in addition to (a) and (b).
Now recall the tree \(\mathscr {R}_{IJ}^{(\infty )}\) defined in Sect. 2.2 using the limit ICRT \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\). The main ingredients in the proof of Theorem 4.15 are the following two theorems:
Theorem 4.17
Under Assumption 4.4, \(\mathscr {R}_{IJ}^{(m)} \mathop {\longrightarrow }\limits ^{d}\mathscr {R}_{IJ}^{(\infty )}\) as \(m\rightarrow \infty \) for every fixed \(I\ge 0\) and \(J\ge 1\). This convergence is with respect to the topology defined on \(\mathbf {T}^*_{IJ}\) in Sect. 2.1.3.
The second result we will need is as follows. Recall the function \(g_\phi ^{(k)}\) on \(\mathbf {T}_{I,(k+\ell )}^*\) as in (4.24).
Theorem 4.18
Fix \(I\ge 0\), \(k\ge 0\), \(\ell \ge 2\) and a bounded continuous function \(\phi \) on \(\mathbb {R}^{\ell ^2}\). Then the function \(g_{\phi }^{(k)}\) is continuous on \(\mathbf {T}_{I, (k+\ell )}^*\).
Proof of Theorem 4.15
Assuming Theorems 4.17 and 4.18, let us now show how this completes the proof. Getting a handle directly on the conditional expectations as required in Theorem 4.15 is a little tricky. Naturally, conditional on \(\mathscr {T}_m^{\mathbf {p}}\), repeated sampling of vertices and calculating sample averages should give a good idea of the conditional expectations (and the same for the limit object \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\)). This is made precise in the following simple lemma whose proof we leave to the reader.
Lemma 4.19
Suppose \(\mathbf {X}^{(m)} := (X^{(m),1},X^{(m),2})\) with \(m\in \left\{ 1,2,\ldots \right\} \cup \left\{ \infty \right\} \) is a sequence of \(\mathbb {R}^2\)-valued random variables such that for each fixed \(r\ge 1\), there exist random variables \(\mathbf {X}_r^{(m)}:=(X_r^{(m),1} , X_r^{(m),2})\) such that the following hold:
-
(i)
There exists a constant \(C <\infty \) such that for any \(m\in \left\{ 1,2,\ldots \right\} \cup \left\{ \infty \right\} \), \(r\ge 1\) and \(\varepsilon >0\),
$$\begin{aligned} \max _{s=1,2}\ {{\mathrm{\mathbb {P}}}}\left( |X^{(m),s} - X_r^{(m),s}| > \varepsilon \right) \le \frac{C}{\varepsilon ^2 r}. \end{aligned}$$ -
(ii)
For each fixed \(r\ge 1\), \(\mathbf {X}_r^{(m)}\mathop {\longrightarrow }\limits ^{d}\mathbf {X}_r^{(\infty )}\).
Then \(\mathbf {X}^{(m)}\mathop {\longrightarrow }\limits ^{d}\mathbf {X}^{(\infty )}\).
We will apply this lemma with the random variables that arise in Theorem 4.15. That is, we set
and similarly define \(X^{(m),2}\) and \(X^{(\infty ),2}\) to be the second coordinates in the display (4.31). To define \(\mathbf {X}_r^{(m)}\), we proceed as follows. For each fixed \(r\ge 1\), sample a collection of \(J_r:= [r+(k+\ell )r]\) points, all i.i.d. with distribution \(\mathbf {p}\), from \(\mathscr {T}_m^{\mathbf {p}}\) and think of them as r individual points \((V_1^{(m)}, V_2^{(m)},\ldots , V_r^{(m)})\) and r \((k+\ell )\)-dimensional vectors \(\mathbf {V}^{(m),i}_{k,k+\ell }:= (V_{i1}^{(m)},\ldots , V_{i(k+\ell )}^{(m)})\) for \(1\le i\le r\). Define
For \(m=\infty \), sample as above \(J_r\) points using the mass measure \(\mu \) from \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and define
Now define
Let \(\mathbf {X}_r^{(m)}:=(X_r^{(m),1},X_r^{(m),2})\) for \(m\in \left\{ 1, 2,\ldots \right\} \cup \left\{ \infty \right\} \). To complete the proof of the theorem, we have to check the two conditions of Lemma 4.19. Let us check condition (i) of Lemma 4.19 for the first coordinate. The second coordinate can be handled in an identical fashion.
Applying Chebyshev’s inequality conditional on \(\mathscr {T}_m^{\mathbf {p}}\) and then taking expectations, we get
where \({{\mathrm{Var}}}_{\mathbf {p}}\), defined analogously to \({{\mathrm{\mathbb {E}}}}_{\mathbf {p}}\), is the conditional variance operator. Obviously
From the argument given below (4.11), it follows that \(\Vert \mathfrak {G}_{(m)}\Vert _{\infty } \le ||F^{{{\mathrm{exc}}},\mathbf {p}}||_{\infty }\). Hence Lemma 4.9 implies that \(\sup _m\ C_{(m)} < \infty \). This verifies (i) of the lemma.
Let us now verify condition (ii) of the lemma. Writing this out explicitly, we have to show for each fixed \(r\ge 1\),
To this end, for each \(m\in \left\{ 1,2,\ldots \right\} \cup \left\{ \infty \right\} \), consider the subtree spanning the \(J_r\) points \((V_i^{(m)})_{1\le i\le r}, (\mathbf {V}_{k,k+\ell }^{(m),i})_{1\le i\le r}\), viewed as an element of \(\mathbf {T}_{IJ}^*\) as in Definition 4.16. Using Theorem 4.17 and continuity of the function \(g_\phi ^{(k)}\) from Theorem 4.18, we get
with respect to weak convergence on \(\mathbb {R}^{2r}\), which in turn implies (4.32). This completes the verification of the conditions of Lemma 4.19 and thus the proof of Theorem 4.15. \(\square \)
The rest of this section proves Theorems 4.17 and 4.18.
Proof of Theorem 4.17
The proof will rely on a truncation argument that is qualitatively similar to Lemma 4.19. Fix a truncation level \(R\ge 1\). Recall the definition of \(\mathfrak {G}_{(m)}(v)\) from (4.18), which keeps track of the contribution of all right children of individuals i on the path \([\rho ,v]\). We will look at a truncated version of this object where we keep track of the potential contributions of only the first R vertices. More precisely, let
Let \(\mathfrak {G}_{(\infty )}^R(\cdot )\) be the analogous modification of \(\mathfrak {G}_{(\infty )}(\cdot )\) defined in (2.7), i.e.,
Similarly modify the “second endpoint” measure in (4.22) to keep track of only ancestors with labels \(\le R\), namely
Note that this does not make sense if \(\mathfrak {G}_{(m)}^{R}(v) = 0\), i.e., when there is no vertex with label \(\le R\) on the path from the root to v. In this case we follow the convention of defining the measure to be the uniform probability measure on the line \([\rho ,v]\). Define \(Q_{v}^{(\infty ), R}(\cdot )\) on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) in an analogous fashion.
Consider the tree \(r_{IJ}^{(m)}\) as in Definition 4.16, and assign to leaf \(V_j\) the truncated measure \(Q_{V_j}^{(m), R}(\cdot )\) and leaf value \(\mathfrak {G}_{(m)}^{R}(V_j)\) (instead of \(Q_{V_j}^{(m)}(\cdot )\) and \(\mathfrak {G}_{(m)}(V_j)/\sigma (\mathbf {p})\)). We denote the resulting object (which is an element of \(\mathbf {T}_{IJ}^*\)) by \(\mathscr {R}_{IJ}^{(m),R}\). Similarly construct \(\mathscr {R}_{IJ}^{(\infty ),R}\).
Proposition 4.20
The following hold:
-
(a)
For all \(R\ge 1\), \(\mathscr {R}_{IJ}^{(m), R} \mathop {\longrightarrow }\limits ^{d}\mathscr {R}_{IJ}^{(\infty ), R}\).
-
(b)
\(\mathscr {R}_{IJ}^{(\infty ), R} \mathop {\longrightarrow }\limits ^{d}\mathscr {R}_{IJ}^{(\infty )}\) as \(R\rightarrow \infty \).
-
(c)
For any bounded continuous function \(f:\mathbf {T}_{IJ}^* \rightarrow \mathbb {R}\),
$$\begin{aligned} \limsup _{R\rightarrow \infty }\limsup _{m\rightarrow \infty }\left| {{\mathrm{\mathbb {E}}}}(f(\mathscr {R}_{IJ}^{(m), R})) - {{\mathrm{\mathbb {E}}}}(f(\mathscr {R}_{IJ}^{(m)}))\right| = 0. \end{aligned}$$
Assuming this proposition, we now complete the proof of Theorem 4.17. Note that for any fixed bounded continuous function f on \(\mathbf {T}_{IJ}^*\) and any truncation level \(R\ge 1\), we have
Now letting \(m\rightarrow \infty \) and then letting \(R\rightarrow \infty \) and using Proposition 4.20 completes the proof. \(\square \)
We next prove Proposition 4.20.
4.6 Proof of Proposition 4.20
We start with three preliminary lemmas. Recall that \(\left\{ i\leadsto j\right\} \) denotes the event that j is a child of i in \(\mathscr {T}_m^{\mathbf {p}}\).
Lemma 4.21
Under Assumption 4.4, for each fixed \(i\ge 1\),
Proof
Recall from (4.29) that for fixed i, the events in the collection \(\left\{ \left\{ i\leadsto j\right\} : j\ne i\right\} \) are pairwise independent and each has probability \(p_i\). Thus
and
This completes the proof as \(\max _{i\in [m]}p_i = p_1\rightarrow 0\) and \(p_i/\sigma (\mathbf {p})\rightarrow \theta _i\) under Assumption 4.4. \(\square \)
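In summary, writing \(s_m(i):=\sum _{j\ne i} p_j\mathbbm {1}\left\{ i\leadsto j\right\} \) for the total weight of the children of i, the two moment computations used above can be reconstructed (our sketch) as
$$\begin{aligned} {{\mathrm{\mathbb {E}}}}\left( s_m(i)\right) = p_i(1-p_i), \qquad {{\mathrm{Var}}}\left( s_m(i)\right) = \sum _{j\ne i} p_j^2\, p_i(1-p_i) \le p_i\,\sigma (\mathbf {p})^2, \end{aligned}$$
where pairwise independence kills the covariance terms; Chebyshev's inequality applied to \(s_m(i)/\sigma (\mathbf {p})\) then gives concentration around \(p_i(1-p_i)/\sigma (\mathbf {p})\rightarrow \theta _i\).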
Lemma 4.22
Under Assumption 4.4, for each fixed \(i\ge 1\),
Proof
Fix \(\varepsilon >0\) and write
Note that by Assumption 4.4, for every \(\varepsilon > 0\), \(\left\{ n_\varepsilon (m):m\ge 1\right\} \) is a bounded sequence. Further, (4.29) and Markov’s inequality yield
as \(\max _{i\in [m]}p_i =p_1\rightarrow 0\). \(\square \)
Recall that \(\mathscr {D}_m(i)\) is the set of children of vertex i in \(\mathscr {T}_m^{\mathbf {p}}\). For later use let \(d_m(i):=|\mathscr {D}_m(i)|\) denote the degree of i in \(\mathscr {T}_m^{\mathbf {p}}\). Note that Lemma 4.21 together with the lemma just proven gives
Lemma 4.23
For each fixed m, let \(\mathbf {q}(m):= (q_1,q_2,\ldots , q_d)\) be a probability mass function with \(q_i > 0\) for all i and m, where \(d = d(m)\rightarrow \infty \) as \(m\uparrow \infty \). Assume further that \(q_{\max }:=\max _{i\in [d]} q_i\rightarrow 0 \) as \(m\rightarrow \infty \). Let \(\left\{ U_i^{(m)}:1\le i\le d\right\} \) be i.i.d. uniform random variables and consider the function
Then \(\sup _{t\in [0,1]} |W_m(t)| \mathop {\longrightarrow }\limits ^{\mathrm {P}}0\) as \(m\rightarrow \infty \).
Proof
Recall the proof of Lemma 4.9, where we studied tightness of the tilt. Replacing \(\mathbf {p}\) in that proof by \(\mathbf {q}\), the quantity of interest is \(\sup _{t\in [0,1]} |W_m(t)| = \sigma (\mathbf {q})\mathscr {R}_1(m)\), where \(\mathscr {R}_1(m)\) is as defined in (4.3) and \(\sigma (\mathbf {q}):=\sqrt{\sum _i q_i^2}\). Now (4.14) and (4.15) imply the existence of a constant C (independent of m) such that for all m and \(x\ge e\),
Since \(\sigma (\mathbf {q})\le \sqrt{q_{\max }} \rightarrow 0\) as \(m\rightarrow \infty \), this completes the proof. \(\square \)
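For intuition, assume \(W_m(t)=\sum _{i\in [d]} q_i\big (\mathbbm {1}\left\{ U_i^{(m)}\le t\right\} -t\big )\), the centered weighted empirical process; this explicit form is our assumption for illustration only. Since \(\sum _i q_i=1\), \(W_m(t)\) is the difference between a weighted empirical distribution function and t, so its supremum is attained at the jump points and can be computed exactly:

```python
import random

def sup_centered(q, rng):
    """sup_t |W(t)| for W(t) = sum_i q_i (1{U_i <= t} - t), U_i i.i.d.
    Uniform[0,1].  Since sum(q) = 1, W(t) = F_q(t) - t for the weighted
    empirical cdf F_q, and the sup is attained at the jump points."""
    pairs = sorted((rng.random(), qi) for qi in q)
    cum, best = 0.0, 0.0
    for u, qi in pairs:
        best = max(best, abs(cum - u))   # value just before the jump at u
        cum += qi
        best = max(best, abs(cum - u))   # value just after the jump
    return best

rng = random.Random(3)
d = 20000
q = [1.0 / d] * d                        # here q_max = 1/d is small
dev = sup_centered(q, rng)
```

With `q_max` of order \(10^{-4}\) the supremum is already far below 1, consistent with the lemma's conclusion.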
We now have all the ingredients for the proof of Proposition 4.20. We prove parts (a), (b) and (c) one by one.
Proof of Proposition 4.20(a)
Recall from Definition 4.16 the tree \(r_{IJ}^{(m)}\) that contains all the edge length and hub information in \(\mathscr {R}_{IJ}^{(m)}\) but ignores the root-to-leaf measures and leaf values \(\mathfrak {G}_{(m)}(\cdot )\). By [27, Corollary 15] or [13, Proposition 3], for fixed \(J\ge 1\), we have
with respect to the product topology on \(\prod _{I^\prime \ge 0} \mathbf {T}_{I^{\prime } J}\). Using Lemmas 4.21 and 4.22 and Skorohod embedding, we assume that we are working on a probability space that supports a sequence of unordered \(\mathbf {p}\)-trees \(\left\{ \mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}:m\ge 1\right\} \), sampled vertices \(\left\{ V_j^{(m)}:1\le j\le J, m\ge 1\right\} \) using the associated sequence of probability mass functions \(\left\{ \mathbf {p}(m):m\ge 1\right\} \), an ICRT \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\), and sampled vertices \(\left\{ V_j^{(\infty )}:1\le j\le J\right\} \) using the mass measure such that the following hold:
-
(A)
Convergence in (4.35) happens almost surely:
$$\begin{aligned} \left( r_{I^{\prime }J}^{(m)}: I^{\prime }\ge 0\right) \mathop {\longrightarrow }\limits ^{\mathrm {a.s.}}\left( r_{I^{\prime } J}^{(\infty )}: I^{\prime }\ge 0\right) \quad \text { as }m\rightarrow \infty \end{aligned}$$(4.36) coordinatewise, where the underlying tree corresponding to \(r_{I^{\prime }J}^{(m)}\) is spanned by the root of \(\mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}\) and \(V_j^{(m)},\ 1\le j\le J\).
-
(B)
Writing \(s_m(i):= \sum _{v\in \mathscr {D}_m(i)} p_v\) for the sum of weights of children of i in \(\mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}\), we have
$$\begin{aligned} \left( \frac{s_m(i)}{\sigma (\mathbf {p})}:i\ge 1\right) \mathop {\longrightarrow }\limits ^{\mathrm {a.s.}}\left( \theta _i:i\ge 1\right) \end{aligned}$$(4.37) coordinatewise. (We can assume that this holds because of Lemma 4.21).
-
(C)
For fixed hub \(i\ge 1\) and \(m\ge 1\), write
$$\begin{aligned} q_{m,i}(v):= \frac{p_v}{s_m(i)} , \qquad v\in \mathscr {D}_m(i), \qquad q_{m,i}^{\max }:=\max _{v\in \mathscr {D}_m(i)} q_{m,i}(v). \end{aligned}$$(4.38) Then we assume (using Lemma 4.22 and (4.34)) that for all \(i\ge 1\)
$$\begin{aligned}q_{m,i}^{\max }\mathop {\longrightarrow }\limits ^{\mathrm {a.s.}}0\text { and }d_m(i)\mathop {\longrightarrow }\limits ^{\mathrm {a.s.}}\infty .\end{aligned}$$
Now, for each \(z\in [m]\) and \(i\ge 1\), if \(i\in [\rho ,z]\) (where \(\rho =\rho _m\) is the root of \(\mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}\)), write \(c(i;z)\in \mathscr {D}_m(i)\) for the child of i that is the ancestor of z. Next, construct a collection \(\left\{ U_{m,i}(v):m\ge 1,i\ge 1, v\in [m]\right\} \) of uniform[0, 1] random variables on the same space such that
-
(a)
\(\left\{ \mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}, U_{m,i}(v):i\ge 1, v\in [m]\right\} \) are jointly independent for each \(m\ge 1\); and
-
(b)
for each \(i\le R\) and \(j\le J\) for which \(i\in [\rho , V_j^{(\infty )}]\), the sequence \(U_{m,i}\left( c(i;V_j^{(m)})\right) \) is eventually constant in m.
As described below Theorem 4.15, we can use these uniform random variables to generate the sequence of ordered \(\mathbf {p}\)-trees \(\left\{ \mathscr {T}_m^{\mathbf {p}}\right\} \) from \(\left\{ \mathscr {T}_m^{\mathbf {p},{{\mathrm{uo}}}}\right\} \) as follows: Let \(\varvec{U}_{m,i}:=\left\{ U_{m,i}(v): v\in \mathscr {D}_{m}(i)\right\} \). Think of these as “ages” of the children and arrange the children from left to right in decreasing order of their ages.
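The ordering step just described is elementary; a minimal Python sketch (names are ours):

```python
def order_children(parent, ages):
    """Arrange each vertex's children left to right in decreasing 'age'.

    parent : dict mapping each non-root vertex to its parent,
    ages   : dict mapping each non-root vertex v to its uniform age
             (the variable U_{m,i}(v), where i is the parent of v)."""
    children = {}
    for child, par in parent.items():
        children.setdefault(par, []).append(child)
    for par in children:
        children[par].sort(key=lambda c: ages[c], reverse=True)
    return children
```

Children with larger ages come first, matching the left-to-right arrangement described above.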
Once this ordering has been defined, we can construct the function \(\mathfrak {G}_{(m)}(\cdot )\) as in (4.18). In this case we can write this function explicitly in terms of the associated uniform random variables as follows. Define
Then
Similarly, the root-to-leaf measure \(Q_{v}^{(m), R}\) (recall (4.33)) can also be expressed in terms of this function.
Now using (4.36), for every fixed hub \(i\le R\), \(j\le J\), and a.e. sample point \(\omega \), one of the following two holds:
-
(a)
\(i\notin [\rho , V_j^{(\infty )}]\), in which case there exists \(m_0= m_0(\omega )\) such that \(i\notin [\rho , V_j^{(m)}]\) for all \(m> m_0(\omega )\).
-
(b)
\(i\in [\rho , V_j^{(\infty )}]\), in which case there exists \(m_0= m_0(\omega )\) such that \(i\in [\rho , V_j^{(m)}]\) for all \(m > m_0(\omega )\).
When the latter happens, using Lemma 4.23 together with (4.37) and (4.38), we get
By construction, \(U_{m, i}\left( c(i;V_j^{(m)})\right) \) is eventually constant in m on the event \(\left\{ i\in [\rho , V_j^{(\infty )}]\right\} \). This immediately implies convergence of the (scaled) truncated leaf values \(\mathfrak {G}_{(m)}^{R}(V_j^{(m)})/\sigma (\mathbf {p})\) [see (4.39)] for \(1\le j\le J\), and similarly of the truncated root-to-leaf measures \(Q_{V_j^{(m)}}^{(m), R}\), jointly with the convergence in (4.36). This yields the convergence \(\mathscr {R}_{IJ}^{(m), R} \mathop {\longrightarrow }\limits ^{d}\mathscr {R}_{IJ}^{(\infty ), R}\). \(\square \)
Proof of Proposition 4.20(b)
Recall from Sect. 2.2 that \(\mathscr {R}_{IJ}^{(\infty )}\) is obtained by applying the stick-breaking construction to \([0,\eta _J]\), and leaf \(j+\) in \(\mathscr {R}_{IJ}^{(\infty )}\) corresponds to the vertex coming from \(\eta _j\). It is easy to see from the definition of \(\mathfrak {G}_{(\infty )}^{R}\) and \(Q_{v}^{(\infty ),R}\) that it suffices to prove
For every hub \(i\ge 1\) and leaf \(\eta _j\), write \(\left\{ i\rightarrow \eta _j\right\} \) if \(\eta _j\) is a descendant of i (namely \(i\in [\rho ,\eta _j]\)). Then note that
Thus, it is enough to show that given \(\varepsilon > 0\), we can find \(R= R(\varepsilon ) < \infty \) such that \({{\mathrm{\mathbb {P}}}}(\mathscr {E}_{R}^{(2)} > \varepsilon )<\varepsilon \). To this end, first choose \(K_{\varepsilon }\) large enough so that \({{\mathrm{\mathbb {P}}}}(\eta _J > K_\varepsilon ) < \varepsilon /2\), and then choose \(R_\varepsilon \) large enough so that
Then note that
where the first term in the second inequality follows from the choice of \(K_\varepsilon \), while the second term comes from the stick-breaking construction of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) using the countable collection of Poisson point processes. This completes the proof. \(\square \)
Proof of Proposition 4.20(c)
Recall that the tree \(\mathscr {R}_{IJ}^{(m)}\) (and \(\mathscr {R}_{IJ}^{(m),R}\)) can be thought of as being made up of \(2J+1\) coordinates:
-
(a)
One coordinate for the shape and edge length information, along with the labels at most I, namely \(r_{IJ}^{(m)}\) (see Definition 4.16). Note that this is the same for both \(\mathscr {R}_{IJ}^{(m)}\) and \(\mathscr {R}_{IJ}^{(m),R}\).
-
(b)
J coordinates for the leaf values \(\mathfrak {G}_{(m)}(V_j)/\sigma (\mathbf {p})\) (resp. \(\mathfrak {G}_{(m)}^{R}(V_j)/\sigma (\mathbf {p})\)).
-
(c)
J coordinates for the measured metric spaces \(\mathscr {M}_j^{(m)}:= ([\rho , V_j^{(m)}],\ Q_{V_j}^{(m)})\) (resp. \(\mathscr {M}_j^{(m), R}:= ([\rho , V_j^{(m)}],\ Q_{V_j}^{(m), R})\)).
Since \(\mathbf {T}_{IJ}^*\) is endowed with the product topology on these coordinates, it is enough to show the required estimate in Proposition 4.20 (c) for functions of the form
Here \(\mathbf {t}\in \mathbf {T}_{IJ}\), the \(a_j\in \mathbb {R}\) are the associated leaf values, and the \(M_j\) are the root-to-leaf paths with their associated probability measures; f, \(g_j\) and \(h_j\) are bounded uniformly continuous functions on the spaces \(\mathbf {T}_{IJ}\), \(\mathbb {R}\) and \(\mathscr {S}\) (measured compact metric spaces) respectively. To simplify notation, we will simply write this as \(f(\mathbf {t})\).
Now we can go from \(\mathscr {R}_{IJ}^{(m)}\) to \(\mathscr {R}_{IJ}^{(m), R}\) by flipping one coordinate at a time. Thus writing
we get
Since \(V_j\)’s have been sampled in an i.i.d. fashion from \(\mathbf {p}\), it is enough to show that for any two bounded uniformly continuous functions h, g on \(\mathbb {R}\) and \(\mathscr {S}\) respectively,
and
Now consider the measured metric spaces \(\mathscr {M}_1^{(m)} \) and \(\mathscr {M}_1^{(m),R}\). As remarked above, they share the same metric space, namely the path \([\rho , V_1^{(m)}]\). The only difference is in the associated probability measures. Consider the natural correspondence \(C=\left\{ (x,x): x\in [\rho ,V_1^{(m)}]\right\} \) between \(\mathscr {M}_1^{(m)} \) and \(\mathscr {M}_1^{(m),R}\). Further, define a probability measure \(\pi \) on \([\rho ,V_1^{(m)}] \times [\rho , V_1^{(m)}]\) as
Writing \(\pi _1\) and \(\pi _2\) for the marginals of \(\pi \), we have, using the above choice of correspondence C and of the measure \(\pi \),
Now suppose we show (4.40). Using part (a) and part (b) of Proposition 4.20, we get \((\sigma (\mathbf {p}))^{-1}\mathfrak {G}_{(m)}(V_1^{(m)}) \mathop {\longrightarrow }\limits ^{d}\mathfrak {G}_{(\infty )}(V_1^{(\infty )}) > 0\). Now using the bound in (4.42) and uniform continuity of h, we see that (4.41) is true. Hence it is enough to prove (4.40).
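The next paragraph uses tightness of the rescaled first repeat time \(\sigma (\mathbf {p})R_1\) ([27, Theorem 4]). In the uniform case \(p_v=1/m\) this is the classical birthday problem, with \({{\mathrm{\mathbb {E}}}}(R_1)\approx \sqrt{\pi m/2}\) and \(\sigma (\mathbf {p})=1/\sqrt{m}\); a quick Monte Carlo sanity check (our illustration):

```python
import math, random

def first_repeat_time(p, rng):
    """Number of i.i.d. draws from p until the first repeated vertex."""
    verts = list(range(len(p)))
    seen = set()
    t = 0
    while True:
        y = rng.choices(verts, weights=p)[0]
        t += 1
        if y in seen:
            return t
        seen.add(y)

rng = random.Random(1)
m = 400
p = [1.0 / m] * m
sigma = math.sqrt(sum(x * x for x in p))          # = 1/sqrt(m)
trials = 500
mean_scaled = sum(first_repeat_time(p, rng) * sigma
                  for _ in range(trials)) / trials
# mean_scaled should be near sqrt(pi/2) ~ 1.25, uniformly in m
```

The rescaled mean stays of order one as m grows, which is the tightness being used.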
Recall from Sect. 4.2.1 the construction of \(V_1^{(m)}\) and the tree simultaneously via the birthday construction, where \(V_1^{(m)}\) is obtained as the value just before the first repeat time, namely \(Y_{R_1-1}\). Fix \(\varepsilon >0\). By [27, Theorem 4], under Assumption 4.4 we may choose \(K_\varepsilon \) large so that the first repeat time satisfies \({{\mathrm{\mathbb {P}}}}(R_1 > K_\varepsilon /\sigma (\mathbf {p})) < \varepsilon \) for all \(m\ge 1\). Next, by uniform continuity of g, choose \(\delta \in (0,1)\) such that \(|g(x) -g(y)| < \varepsilon \) if \(|x-y| < \delta \). Finally, choose R large so that for all m,
First, by choice of \(K_\varepsilon \) and boundedness of g,
and a similar inequality holds true if we replace the functional \(\mathfrak {G}_{(m)}\) by \(\mathfrak {G}_{(m)}^R\). Next, writing
we have
by our choice of \(\delta \). The difference \(\mathfrak {G}_{(m)}(V_1^{(m)}) - \mathfrak {G}_{(m)}^{R}(V_1^{(m)})\) is a tricky object for which we will need a tractable upper bound. Recall that we have used \(\mathscr {T}_1^{\mathscr {B}}\) for the birthday tree in (4.7) constructed by time \(R_1\). For every vertex \(i\in \mathscr {T}_1^{\mathscr {B}}\), let \(\mathscr {J}(i)\) be the first child of i in the birthday construction (the first new, i.e., previously unsampled, vertex sampled immediately after a prior sampling of i). This is empty if i is a leaf in the eventual full tree \(\mathscr {T}_m^{\mathbf {p}}\). Recall that \(\left\{ i\leadsto j\right\} \) was used to denote the event that j is a child of i in \(\mathscr {T}_m^{\mathbf {p}}\). Then note that
Thus,
For \(i\ne j\in [m]\), define the event \(E_{ij}:=\left\{ i \text{ appears } \text{ before } \frac{K_\varepsilon }{\sigma (\mathbf {p})}, i\leadsto j, j\ne \mathscr {J}(i)\right\} \). Then for \(E_{ij}\) to occur, the following must happen in the birthday construction: (a) There is an \(r_1\) with \(0\le r_1\le K_\varepsilon /\sigma (\mathbf {p})\) such that till time \(r_1\), neither i nor j has been sampled. (b) At time \(r_1+1\), vertex i is sampled. (c) There is an \(r_2\ge 0\) such that j does not appear among the samples at times in \([r_1+1, r_1+1+r_2]\). (d) Then at time \(r_1+r_2+2\), vertex i is sampled again. (e) In the next time step \(r_1+r_2+3\), vertex j is sampled. Therefore,
Using this in (4.45), we get
Combining (4.43), (4.44), (4.45) and (4.46) now gives the following lemma which completes the proof of (4.40) and thus the proof of part (c) of the proposition. \(\square \)
Lemma 4.24
Given \(\varepsilon > 0\) choose \(K_\varepsilon , \delta \) and R as above. Then, for all \(m\ge 1\),
Proof of Theorem 4.18
We now prove continuity of the function \(g_{\phi }^{(k)}\) on the space \(\mathbf {T}_{I,(k+\ell )}^*\). In fact, we will give a quantitative estimate. Since we are assuming the discrete topology on the coordinate corresponding to the shape, without loss of generality we will work with two trees \(\mathbf {t}, {\overline{\mathbf {t}}}\in \mathbf {T}_{I,(k+\ell )}^*\) having the same shape. We need to distinguish the labels for the root and the leaves in the two trees; so write \(0+\) (respectively \(\overline{0}+\)) for the root of \(\mathbf {t}\) (respectively \({\overline{\mathbf {t}}}\)) and write \(\left\{ j+: 1\le j\le k+\ell \right\} \) (respectively \(\left\{ \overline{j}+: 1\le j\le k+\ell \right\} \)) for the collection of leaves in \(\mathbf {t}\) (respectively \({\overline{\mathbf {t}}}\)). Finally, let \(\nu _j\) be the corresponding probability measure on the path \(\mathscr {M}_j:= [0+, j+]\) for \(1\le j\le k\), and analogously let \({\overline{\nu }}_j\) be the probability measure on \({\overline{\mathscr {M}}}_j:= [\overline{0}+, \overline{j}+]\). View these paths as pointed measured metric spaces pointed at the roots \(0+\) and \({\overline{0}}+\) respectively. Now let \(\varepsilon _j:= d_{{{\mathrm{GHP}}}}^{{{\mathrm{pt}}}}(\mathscr {M}_j,{\overline{\mathscr {M}}}_j)\), where \(d_{{{\mathrm{GHP}}}}^{{{\mathrm{pt}}}}\) is the pointed Gromov–Hausdorff–Prokhorov metric defined in Sect. 2.1.
Write \(L = {\ell \atopwithdelims ()2}\). Let \(\phi :\mathbb {R}_+^L\rightarrow \mathbb {R}\) be a bounded continuous function. For \(K >0\), let \(\square (K) = [0,K]^L\), and for \(\delta >0\), define
Finally, define
where \(l_e(\cdot )\) denotes the length of the edge e and we have used the fact that both trees have the same shape. Write \({{\mathrm{ht}}}(\mathbf {t})\) for the height of the tree \(\mathbf {t}\) (measured not in graph distance but as the maximal distance from the root when edge lengths are incorporated). The following proposition completes the proof of Theorem 4.18:
Proposition 4.25
For two trees \(\mathbf {t}, {\overline{\mathbf {t}}}\in \mathbf {T}_{I,(k+\ell )}^*\) having the same shape, and with \(\varepsilon \) as in (4.47),
Proof
For each \(j\le k\), choose a correspondence \(C_j\) and a measure \(\pi _j\) on the product space \([0+, j+]\times [\overline{0}+, \overline{j}+]\) such that the following conditions are met: (a) \((0+,\overline{0}+)\in C_j\); (b) the distortion satisfies \({{\mathrm{dis}}}(C_j) <3\varepsilon _j\); (c) the measure of the complement satisfies \(\pi _j(C_j^c)< 2\varepsilon _j\); and (d) finally
where \(p_*\pi _j\) and \({\overline{p}}_*\pi _j\) are the marginals of \(\pi _j\). Now sample \((X_j^{\star }, {\overline{X}}_j^{\star })\sim \pi _j\) from \([0+, j+]\times [\overline{0}+, \overline{j}+]\) independently for \(1\le j\le k\). By (4.48), we can couple \((X_j^{\star }, {\overline{X}}_j^{\star })\) with two random variables \(X_j, {\overline{X}}_j\) (again independently for \(1\le j\le k\)) such that \(X_j\sim \nu _j\) and \({\overline{X}}_j\sim {\overline{\nu }}_j\), and further
Using conditions (b) and (c), we get
where \(d_{\mathbf {t}}\) is the distance metric on tree \(\mathbf {t}\) which incorporates the edge lengths. Now write E for the following “good event”:
It follows from (4.49) and (4.50) that
Now we are going to create “shortcuts” by gluing the leaves to the corresponding sampled points. Let \(\mathbf {S}\) (resp. \({\overline{\mathbf {S}}}\)) be the (random) metric space obtained by identifying each of the leaves \(j+\) (resp. \({\overline{j}}+\)) with \(X_j\) (resp. \({\overline{X}}_j\)) in \(\mathbf {t}\) (resp. \({\overline{\mathbf {t}}}\)) for \(1\le j\le k\) and write \(d_{\mathbf {S}}\) (resp. \(d_{{\overline{\mathbf {S}}}}\)) for the induced metric. Then by definition,
and an analogous expression holds for \(g_{\phi }^{(k)}({\overline{\mathbf {t}}})\). This gives
Consider the map from \(\mathbf {t}\) to \({\overline{\mathbf {t}}}\) which takes every vertex to the corresponding vertex and maps points on each edge by linear interpolation (using the edge lengths) to points on the corresponding edge. Consider \(a\in [0+, j+]\) for some \(j\le k\) and let \(\overline{a}\in [{\overline{0}}+, {\overline{j}}+]\) be the corresponding point in \({\overline{\mathbf {t}}}\). Then note that
on the set E.
Now consider a shortest path in \(\mathbf {S}\) connecting \((k+i_1)+\) and \({(k+i_2)}+\). We can go from \(\overline{(k+i_1)}+\) to \(\overline{(k+i_2)}+\) by taking the same route in \(\overline{\mathbf {S}}\), i.e., by traversing the same edges and taking the same shortcuts in the same order. We make the following observations: (i) The difference between the distances traversed while crossing the edge e is \(|l_e(\mathbf {t})-l_e({\overline{\mathbf {t}}})|\). (ii) By (4.53), on the set E, taking a “shortcut” contributes at most \((3\varepsilon _j+\sum _e |l_e(\mathbf {t})-l_{e}({\overline{\mathbf {t}}})|)\) to the difference between the distances traversed. Since we have to take at most k shortcuts, we immediately get
on the set E. By symmetry, a similar inequality holds if we interchange the roles of \(\mathbf {S}\) and \({\overline{\mathbf {S}}}\). This observation combined with (4.51) and (4.52) yields the result. \(\square \)
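The gluing operation in this proof (identify each leaf \(j+\) with the sampled point \(X_j\) and measure distances in the quotient) can be illustrated on a finite weighted graph: merge the identified vertices and recompute all-pairs shortest paths. This is our own discrete illustration, not the paper's construction:

```python
def glued_distances(n, edges, glue_pairs):
    """All-pairs shortest-path distances after identifying vertex pairs.

    n          : number of vertices 0..n-1,
    edges      : list of (u, v, length),
    glue_pairs : pairs (a, b) of vertices to identify ("shortcuts").
    Returns the distance matrix over merged classes and a map from an
    original vertex to its row/column index."""
    root = list(range(n))                       # union-find forest

    def find(x):
        while root[x] != x:
            root[x] = root[root[x]]             # path compression
            x = root[x]
        return x

    for a, b in glue_pairs:
        root[find(a)] = find(b)
    reps = sorted({find(v) for v in range(n)})
    idx = {r: i for i, r in enumerate(reps)}
    k = len(reps)
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(k)] for i in range(k)]
    for u, v, w in edges:
        a, b = idx[find(u)], idx[find(v)]
        if a != b and w < d[a][b]:
            d[a][b] = d[b][a] = w
    for mid in range(k):                        # Floyd-Warshall
        for i in range(k):
            for j in range(k):
                if d[i][mid] + d[mid][j] < d[i][j]:
                    d[i][j] = d[i][mid] + d[mid][j]
    return d, lambda v: idx[find(v)]
```

For a path 0-1-2-3 with unit edge lengths, gluing vertex 3 onto vertex 0 creates a shortcut, and the distance between 0 and 2 drops from 2 to 1.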
5 Proofs: convergence in Gromov-weak topology
Recall from Proposition 4.1 that conditional on the partition of the vertices \(\left\{ \mathscr {V}^{(i)}:i\ge 1\right\} \) into the connected components, the actual structure of the components of \(\mathscr {G}(\mathbf {x}, t)\) can be generated independently as the connected graphs \({\tilde{\mathscr {G}}}_{|\mathscr {V}^{(i)}|}(a_n^{(i)},\mathbf {p}_n^{(i)})\), where \(a_n^{(i)}, \mathbf {p}_n^{(i)}\) are as in Proposition 4.1 and, given \(m, \mathbf {p}\) and a, \({\tilde{\mathscr {G}}}_m(a,\mathbf {p})\) is the connected random graph model studied in the previous section. For Theorem 1.8, the time scale \(t = t_n\) of interest in the expression for \(a_n^{(i)}\) is
for fixed \(\lambda \in \mathbb {R}\). Let \(\mathscr {N}(\mathbb {R}_+)\) denote the space of counting measures on \(\mathbb {R}_+\) equipped with the vague topology. Define \(\varvec{\Upsilon }_n^{(i)}:= (p_v/\sigma (\mathbf {p}), v\in \mathscr {V}^{(i)})\) and view \((a_n^{(i)}\sigma (\mathbf {p}_n^{(i)}), \varvec{\Upsilon }_n^{(i)})\) as a random element of \(\mathbb {S}:= \mathbb {R}_+\times \mathscr {N}(\mathbb {R}_+)\) (equipped with the product topology). Finally, define
viewed as an element of \(\mathbb {S}^{\infty }\), again equipped with the product topology induced by a single coordinate \(\mathbb {S}\). Now given an infinite vector \(\mathbf {c}\in l_0\) recall the process \(\bar{V}_\lambda ^{\mathbf {c}}(\cdot )\) as in (1.16), the corresponding excursions \(\mathscr {Z}(\lambda )\) as in (1.17) and the corresponding excursion lengths in (1.18). Finally recall the definitions of \(\bar{\gamma }^{(i)}, \varvec{\theta }^{(i)}\) from (2.10). Writing these out explicitly, define
Proposition 5.1
The following hold under Assumption 1.6:
(i) For every \(i\ge 1\), \(\sigma (\mathbf {p}_n^{(i)}) \mathop {\longrightarrow }\limits ^{\mathrm {P}}0\) as \(n\rightarrow \infty \).
(ii) \(\mathscr {P}_n\mathop {\longrightarrow }\limits ^{d}\mathscr {P}_{\infty }\) on \(\mathbb {S}^{\infty }\) as \(n\rightarrow \infty \). Further, for every fixed \(i\ge 1\), almost surely,
$$\begin{aligned} \sum _{v\in \mathscr {Z}_i(\lambda )} c_v= \infty . \end{aligned}$$(5.1)
Proof of Theorem 1.8
We prove the theorem assuming Proposition 5.1. By an application of Skorohod embedding we may assume that we are working on a probability space where the convergence in Proposition 5.1 happens almost surely. In particular, in this space, Assumption 4.4 is satisfied almost surely for \(\mathbf {p}_n^{(i)}\) for any fixed \(i\ge 1\). Now an application of Theorem 4.5 completes the proof. \(\square \)
5.1 Verification of weight assumptions in maximal components
Here we give the proof of Proposition 5.1. To ease notation, we will throughout assume \(\lambda =0\). The general case follows in an identical fashion, but this assumption simplifies notation. We will write \(V^{\mathbf {c}}\) instead of \(V^{\mathbf {c}}_0\) for the process in (1.15) with \(\lambda =0\) and simply write \(\mathscr {C}_i\) for \(\mathscr {C}_i([\sigma _2(\mathbf {x}^{(n)})]^{-1})\).
We start by describing an exploration scheme (developed in [9]) which simultaneously constructs the graph \(\mathscr {G}_n(\mathbf {x},t)\) and a “breadth first” walk. This was carefully analyzed in [10] to prove Theorem 1.7.
For every ordered pair (u, v), let \(\eta _{u,v}\) be an exponential random variable with rate \(tx_v\) (independent across ordered pairs). Note that there is a simple relation between the connection probabilities of \(\mathscr {G}_n(\mathbf {x},t)\) given by (1.10) and the above random variables given by:
At each stage \(i\ge 1\), we have a collection of active vertices \(\mathscr {A}(i)\), a collection of explored vertices \(\mathscr {O}(i)\) and a collection of unexplored vertices \(\mathscr {U}(i)= [n]{\setminus }(\mathscr {A}(i)\cup \mathscr {O}(i))\).
Initialize with \(\mathscr {O}(1) = \emptyset \) and \(\mathscr {A}(1) = \left\{ v(1)\right\} \), where the first vertex v(1) is chosen by size-biased sampling, namely with probability proportional to vertex weights \(\mathbf {x}\). When possible we will suppress dependence on n to ease notation. Now let \(\mathscr {D}(v(1)):=\left\{ v: \eta _{v(1),v}\le x_{v(1)}\right\} \) denote the collection of “children” of v(1) and note that by (5.2) this generates the right connection probabilities in \(\mathscr {G}_n(\mathbf {x},t)\). Think of the associated \(\eta _{v(1),v}\) values (for vertices connected to v(1)) as “birth-times” of these connections in the interval \([0,x_{v(1)}]\) and label the corresponding vertices as \(v(2), v(3), \ldots v(|\mathscr {D}(v(1))|+1)\). Update the process as \(\mathscr {O}(2):= \left\{ v(1)\right\} \), \(\mathscr {A}(2):= \mathscr {D}(v(1))\) and \(\mathscr {U}(2) = \mathscr {U}(1){\setminus }\mathscr {D}(v(1))\).
Associate with this construction a breadth-first walk as follows:
Recursively for \(i\ge 2\), let \(T_{i-1}:= \sum _{j=1}^{i-1} x_{v(j)}\). At this “time” we will explore the unexplored neighbors of v(i). By this time, \(i-1+|\mathscr {A}(i)|\) vertices have either been explored or are active. Let \(\mathscr {D}(v(i)):= \left\{ v\in \mathscr {U}(i): \eta _{v(i)v}\le x_{v(i)}\right\} \) and again label these as \(v(i+|\mathscr {A}(i)|), v(i+|\mathscr {A}(i)|+1), \ldots , v(i+|\mathscr {A}(i)|+|\mathscr {D}(v(i))|-1)\) in increasing order of their \(\eta _{v(i)v}\) values. Update \(\mathscr {O}(i+1) = \mathscr {O}(i)\cup \left\{ v(i)\right\} \), \(\mathscr {A}(i+1) = (\mathscr {A}(i)\cup \mathscr {D}(v(i))){\setminus }\left\{ v(i)\right\} \) and \(\mathscr {U}(i+1) = \mathscr {U}(i){\setminus }\mathscr {D}(v(i))\). Again update the walk as
After finishing a component (which happens when \(\mathscr {A}(i) = \emptyset \) for some \(i\ge 2\)), choose the next vertex to explore in a size-biased manner from the unexplored set \(\mathscr {U}(i)\). If \(\mathscr {U}(i)=\emptyset \), then we have finished constructing the partition of the graph into the connected components.
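As an illustration (ours, not part of the original analysis), the simultaneous construction of the graph and the breadth-first walk can be sketched in code. The function name and interface are hypothetical; we use the exponential clocks \(\eta _{u,v}\sim \mathsf{Exp}(t x_v)\) and the connection rule \(\eta _{v(i),v}\le x_{v(i)}\) exactly as described above.

```python
import random

def explore_components(x, t, seed=0):
    """Size-biased breadth-first exploration of G_n(x, t), built together
    with the walk Z recorded at the times T_i.  Illustrative sketch only:
    eta_{u,v} ~ Exp(t * x[v]), and u is a child of v iff eta_{v,u} <= x[v]."""
    rng = random.Random(seed)
    n = len(x)
    unexplored = set(range(n))
    components, walk, z = [], [], 0.0
    while unexplored:
        # start a new component with a size-biased pick from the unexplored set
        pool = list(unexplored)
        root = rng.choices(pool, weights=[x[v] for v in pool])[0]
        unexplored.discard(root)
        active, comp = [root], []
        while active:
            v = active.pop(0)          # breadth-first order
            comp.append(v)
            children = [u for u in list(unexplored)
                        if rng.expovariate(t * x[u]) <= x[v]]
            for u in children:
                unexplored.discard(u)
            active.extend(children)
            # walk increment: weights of new children minus weight of v
            z += sum(x[u] for u in children) - x[v]
            walk.append(z)
        components.append(comp)
    return components, walk
```

Over a component started at stage i and exhausted at stage j, these increments telescope so that the walk drops by exactly \(x_{v(i)}\), in line with property (b) of the exploration.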
Now note the following important properties of this exploration:
(a) The ordering \(({v(1)}, {v(2)}, \ldots , {v(n)})\) is a size-biased reordering of the vertex set [n].
(b) If we start a new component at some stage i with vertex v(i), and finish exploring the component at stage \(j\ge i\), then the walk satisfies
$$\begin{aligned} Z(T_j) = Z(T_{i-1}) - x_{v(i)}, \quad Z(u)\ge Z(T_j) \text{ on } T_{i-1}< u < T_j. \end{aligned}$$Thus, the size of the component of v(i), namely \(\sum _{l=i}^{j} x_{v(l)}\), is essentially the length of the excursion of the walk beyond past minima.
As a starting point in proving Theorem 1.7, Aldous and Limic [10] show the following result. Their result is more general (incorporating the presence of a “Brownian component”), but we state it as applied to our setting.
Proposition 5.2
([10, Proposition 9]) Consider the process \(\left\{ \bar{Z}_n(s):s\ge 0\right\} \) defined by setting \(\bar{Z}_n(s) := Z(s)/\sigma _2\). Then under Assumption 1.6 \(\bar{Z}_n\mathop {\longrightarrow }\limits ^{d}V^{\mathbf {c}}\) as \(n\rightarrow \infty \).
Using this result, Aldous and Limic [10] show that the corresponding maximal excursions beyond past minima of \(\bar{Z}_n\) also converge to the maximal excursions beyond past minima of \(V^{\mathbf {c}}_\lambda \), namely the excursion lengths of the reflected process \(\bar{V}_\lambda ^{\mathbf {c}}\) (see (1.16)) from zero. A consequence of the proof of Theorem 1.7 in [10] using Proposition 5.2 is the following result:
Lemma 5.3
Fix K and let \(\mathscr {E}_n(K)\) be the time required for the above construction to explore the maximal K components \(\left\{ \mathscr {C}_i:1\le i\le K\right\} \). Then the sequence \(\left\{ \mathscr {E}_n(K):n\ge 1\right\} \) is tight.
In other words, for any fixed \(K\ge 1\), the maximal length excursions of \(\bar{V}^{\mathbf {c}}\) are found in finite time. Thus, even though the total weight of vertices \(\sigma _1\rightarrow \infty \), when exploring the graph in size-biased fashion, under Assumption 1.6 one needs only a finite amount of “time” to find the maximal components. Here time is measured in terms of the weight of vertices already explored. Now define
Thus, \(S_{n, 2}(t)\) is the normalized sum of squares of vertex weights of vertices explored by time t and \(R_n^{\varepsilon }\) is the normalized sum of these squares where we only retain explored vertices with weight at most \(\varepsilon \sigma _2\). Using the same set of exponential random variables \(\left\{ \xi _j:j\ge 1\right\} \) that arose in the definition of the process \(V^{\mathbf {c}}\) in (1.15) define a new process
The same proof techniques as in [10] now imply the following. Since the ideas essentially follow [10], we only sketch the proof.
Lemma 5.4
Assumption 1.6 implies the joint convergence of the processes \((\bar{Z}_n(\cdot ), S_{n,2}(\cdot ))\mathop {\longrightarrow }\limits ^{d}(V^{\mathbf {c}}(\cdot ), S_{\infty ,2}(\cdot ))\) as \(n\rightarrow \infty \).
Proof
Fix \(K\ge 1\), and for each \(i\ge 1\), let \(\xi _i^{(n)}\) denote the time when vertex i is added to the collection of active vertices. Now consider the \(K+1\) dimensional stochastic process
Write
In the proof of Proposition 5.2, Aldous and Limic showed that \(\mathbf {Y}_n^K\mathop {\longrightarrow }\limits ^{d}\mathbf {Y}_{\infty }^K\) for every fixed \(K\ge 1\). Thus to complete the proof, it is enough to show, for every fixed \(A>0\) and \(\eta >0\), \(\limsup _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty } {{\mathrm{\mathbb {P}}}}(R_n^{\varepsilon }(A)> \eta )=0\). Now as described on [10, Page 17], we can couple \((\xi _1^{(n)}, \xi _2^{(n)}, \ldots , \xi _n^{(n)})\) with a sequence of independent exponential random variables \(({\tilde{\xi }}_1^{(n)}, {\tilde{\xi }}_2^{(n)}, \ldots , {\tilde{\xi }}_n^{(n)})\) with \({\tilde{\xi }}_j^{(n)}\) having rate \(x_j/\sigma _2\) such that \({\tilde{\xi }}_j^{(n)}\le \xi _j^{(n)}\). Now write
Then it is enough to show
which is trivial since
We have used both (1.12) and (1.13) in the last convergence assertion. Thus, first letting \(n\rightarrow \infty \) and then \(\varepsilon \rightarrow 0\) completes the proof. \(\square \)
We can now complete the proof of Proposition 5.1. First, note that to prove (5.1), it is enough to show that for any two rationals \(r<s\), \(\sum _{j} c_j\mathbbm {1}\left\{ r\le \xi _j \le s\right\} = \infty \) almost surely where \(\xi _j\) are the associated exponential rate \(c_j\) random variables. This, however, is trivially true as \(\sum _j c_j^2 =\infty \).
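For completeness, here is one way to fill in this step (a sketch of our own reading, not the authors' verbatim argument). Since the \(\xi _j\) are independent with \(\xi _j\sim \mathsf{Exp}(c_j)\), using \(1-e^{-y}\ge y e^{-y}\),

```latex
\mathbb{P}(r\le \xi_j\le s)
  = e^{-c_j r}-e^{-c_j s}
  = e^{-c_j r}\bigl(1-e^{-c_j(s-r)}\bigr)
  \ge c_j(s-r)\,e^{-c_j s},
\qquad\text{so}\qquad
\sum_{j:\, c_j\le 1} c_j\,\mathbb{P}(r\le \xi_j\le s)
  \ge (s-r)\,e^{-s}\sum_{j:\, c_j\le 1} c_j^2=\infty .
```

The summands \(c_j\mathbbm {1}\left\{ r\le \xi _j\le s\right\} \) are independent, nonnegative and eventually bounded by 1 (as \(c_j\rightarrow 0\)), so divergence of the sum of their means forces almost sure divergence of the sum itself, e.g., by the Kolmogorov three-series theorem.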
To prove the other assertions, define, for \(i\ge 1\), the point processes \(\Xi _n^{(i)}:=\left\{ x_u/\sigma _2: u\in \mathscr {C}_i\right\} \), namely the rescaled vertex weights in the ith maximal component. Analogously define \(\Xi _{\infty }^{(i)} = \left\{ c_v: v\in \mathscr {Z}_i\right\} \), namely the collection of jumps in the ith largest excursion of \(\bar{V}^{\mathbf {c}}\). Let
for the normalized sum of squares of vertex weights in a component. Define
We will view these as random elements of \({\tilde{\mathbb {S}}}^{\infty }\) where \({\tilde{\mathbb {S}}}:= \mathbb {R}^2\times \mathscr {N}(\mathbb {R})\). Lemma 5.3 and Lemma 5.4 now imply the following:
Lemma 5.5
As \(n\rightarrow \infty \), \({\tilde{\mathscr {P}}}_n \mathop {\longrightarrow }\limits ^{d}{\tilde{\mathscr {P}}}_{\infty }\) on \({\tilde{\mathbb {S}}}^{\infty }\).
Expressing the functionals that arise in Proposition 5.1 in terms of vertex weights in maximal components completes the proof. Indeed,
as \(n\rightarrow \infty \). The proof of \(\mathscr {P}_n\mathop {\longrightarrow }\limits ^{d}\mathscr {P}_{\infty }\) is similar. \(\square \)
5.2 Gromov-weak convergence in Theorem 1.2
That convergence in (1.7) holds with respect to Gromov-weak topology is an easy consequence of Theorem 1.8. Indeed, setting
we can write \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) as the model \(\mathscr {G}(\varvec{x}, t_n)\) where \(\varvec{x}=\varvec{x}^{(n)}:=(x_i: i\in [n])\). A direct computation shows that \(\varvec{x}^{(n)}\) satisfies Assumption 1.6 with the entrance boundary \(\mathbf {c}^{{{\mathrm{nr}}}}\) defined in (2.11). Note also that
Under the assumptions of Theorem 1.2, \(\ell _n/n\rightarrow {{\mathrm{\mathbb {E}}}}W\) and \(\sum _{i}w_i^2/n\rightarrow {{\mathrm{\mathbb {E}}}}W^2={{\mathrm{\mathbb {E}}}}W\). Further, by [17, Lemma 2.2],
where \(\zeta \) is as defined in (2.12). Combining these observations, we see that
where \(t^{{{\mathrm{nr}}}}_{\lambda }\) is as in (2.12). Since \(n^{(\tau -3)/(\tau -1)}\sigma _2(\varvec{x}^{(n)})\rightarrow {{\mathrm{\mathbb {E}}}}W\), we conclude that \(\mathbf {M}_{\infty }^{{{\mathrm{nr}}}}(\lambda )\) defined in (2.13) is the Gromov-weak limit of \(n^{-(\tau -3)/(\tau -1)}\mathbf {M}_n^{{{\mathrm{nr}}}}(\lambda )\), where \(\mathbf {M}_n^{{{\mathrm{nr}}}}(\lambda )\) is as in (1.6).
Remark 7
Theorem 1.8 is stated for a fixed \(\lambda \in \mathbb {R}\), but in the argument just given, we have to work with a sequence, namely \(t_n-(\sigma _2(\varvec{x}^{(n)}))^{-1}\) converging to \(t^{{{\mathrm{nr}}}}_{\lambda }\). This, however, does not make any difference. Indeed, the proof of [10, Proposition 9] can be imitated to prove the same result in the setup where we have a sequence converging to t instead of a fixed t, and no new idea is involved here. (In [10, Lemma 27], Aldous and Limic prove a similar result for the multiplicative coalescent. They do not, however, explicitly state the convergence of the associated process under the same assumption).
6 Proofs: convergence in Gromov-Hausdorff-Prokhorov topology
In this section, we improve Gromov-weak convergence in Theorem 1.2 to Gromov-Hausdorff-Prokhorov convergence. To do so, we will rely on [14, Theorem 6.1] which gives a criterion for convergence in Gromov-Hausdorff-weak topology. We do not give the definition of Gromov-Hausdorff-weak topology and instead refer the reader to [14, Definition 5.1]. Convergence in Gromov-Hausdorff-weak topology implies convergence in Gromov-Hausdorff-Prokhorov topology when we are working with metric measure spaces having full support (i.e., the support of the measure is the entire metric space). This is true in our situation. Indeed, it is a trivial fact that \(\mathscr {C}_i(\lambda )\) has full support. Further, the mass measure on an inhomogeneous continuum random tree has full support which implies that the same is true for \(M_i^{{{\mathrm{nr}}}}(\lambda )\).
Applying [14, Theorem 6.1] to our situation, we see that it is enough to prove the following lemma:
Lemma 6.1
(Global lower mass-bound) Let \(\mathscr {C}_i(\lambda )\) be the ith largest component of \(\mathrm{NR}_n(\varvec{w}(\lambda ))\). Then the following assertion is true:
For each \(i\ge 1\), \(v\in [n]\) and \(\delta >0\), let \(B(v, \delta )\) denote the intrinsic ball (in \(\mathrm{NR}_n(\varvec{w}(\lambda ))\)) of radius \(\delta n^{(\tau -3)/(\tau -1)}\) around v and set
Then the sequence \(\left\{ \left( \mathfrak {m}_i^{(n)}(\delta )\right) ^{-1}\right\} _{n\ge 1}\) is tight.
Lemma 6.1 ensures compactness of the spaces \(M_i^{{{\mathrm{nr}}}}(\lambda )\) which, in turn, implies compactness of the spaces \(M_i^{\mathbf {c}}(\lambda )\) when \(\mathbf {c}=(c_1, c_2,\ldots )\) is of the form (1.19), thus proving the first assertion in Theorem 1.9.
Before moving on to the proof of Lemma 6.1, we state a result that essentially says that instead of looking at the largest components, we can work with the components of high-weight vertices. This observation will be used to prove the global lower-mass bound:
Proposition 6.2
For every \(\varepsilon >0\) and \(k\ge 1\), there exists \(K=K(\varepsilon , k, \lambda )>0\) such that
Proposition 6.2 follows trivially from [17, Theorem 1.6 (a)] and [17, Theorem 1.1].
6.1 Bound on size of \(\varepsilon n^{(\tau -3)/(\tau -1)}\)-nets for the largest components
For convenience, we set
The purpose of this section is to prove a strong result (Proposition 6.3 stated below) that gives control over the number of intrinsic balls of radius \(\varepsilon n^{\eta }\) needed to cover the largest components. This acts as a crucial ingredient in the proof of Lemma 6.1 as well as the proof of the bound on the upper box-counting dimension.
Proposition 6.3
(Small diameter after removing high-weight vertices) For every \(\varepsilon , \delta >0\), and \(N=N(\varepsilon ):=\varepsilon ^{-\delta -1/\eta }\),
for all sufficiently large n, where \(c_{\delta }\) is a positive constant depending only on \(\delta \) and \(C>0\) is a universal constant. Here \(\mathrm{NR}_n(\varvec{w}(\lambda )){\setminus }[N]\) denotes the graph obtained from \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) by removing all vertices with labels in [N] and the edges incident to them.
We continue by proving Proposition 6.3. Write
The proof consists of four steps. In the first step, we reduce the proof to the study of the height of mixed-Poisson branching processes. In the second step, we ensure that we can take \(\lambda =0\), while in the third step, we study the survival probability of such critical infinite-variance branching processes. In the fourth and final step, we prove the claim.
Comparison to mixed-Poisson branching processes Let \(\mathscr {C}_{{{\mathrm{res}}}}(i)\) be the cluster of i in the (restricted) random graph on the vertex set \([n]{\setminus }[i-1]\) with edge probabilities \(q_{k\ell }(\varvec{w}(\lambda ))\) for \(k,\ell \in [n]{\setminus } [i-1]\), where \(q_{k\ell }(\varvec{w}(\lambda ))\) is as in (1.1).
Note that the event \(E_n^c\) implies the existence of \(i>N\) such that the following happens: (a) The diameter of the component of i in \(\mathrm{NR}_n(\varvec{w}(\lambda )){\setminus }[N]\) is bigger than \(\varepsilon n^{\eta }\). (b) No \(j\in \left\{ N+1,\ldots , i-1\right\} \) belongs to the component of i in \(\mathrm{NR}_n(\varvec{w}(\lambda )){\setminus }[N]\). In particular, \({{\mathrm{diam}}}(\mathscr {C}_{{{\mathrm{res}}}}(i))\ge \varepsilon n^{\eta }\) for this i. Thus,
Now the random graph \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) restricted to \([n]{\setminus } [i-1]\) is the Norros-Reittu random graph \({{\mathrm{NR}}}_n(\varvec{w}^{(i)}(\lambda ))\), where \(\varvec{w}^{(i)}(\lambda )=(w_j^{(i)}(\lambda ):j\in [n])\), \(w_j^{(i)}(\lambda )=w_j(\lambda )\ell _n^{(i)}/\ell _n\) for \(j\in [n]{\setminus } [i-1]\) and \(w_j^{(i)}(\lambda )=0\) for \(j\in [i-1]\), and \(\ell _n^{(i)}=\sum _{k=i}^n w_k\). Indeed, this follows from the simple observation
Write \(W_n^{(i)}(\lambda )\) for a random variable whose distribution is given by \((n-i+1)^{-1}\sum _{j=i}^{n}\delta _{w_j^{(i)}(\lambda )}\), and for any non-negative random variable X with \({{\mathrm{\mathbb {E}}}}X>0\), let \(X^\circ \) be the random variable having the size-biased distribution given by
We will use the following comparison to a mixed-Poisson branching process:
Lemma 6.4
(Domination by a mixed-Poisson branching process) Fix \(i\in [n]\) and consider \({{\mathrm{NR}}}_n(\varvec{w}^{(i)}(\lambda ))\). Then, there exists a coupling of \(\mathscr {C}_{{{\mathrm{res}}}}(i)\) and a branching process where the root has a \(\mathsf{Poi}(w_i^{(i)}(\lambda ))\) offspring distribution while every other vertex has a \(\mathsf{Poi}((W_n^{(i)}(\lambda ))^\circ )\) offspring distribution such that in the breadth-first exploration of \(\mathscr {C}_{{{\mathrm{res}}}}(i)\) starting from i, each vertex \(v\in \mathscr {C}_{{{\mathrm{res}}}}(i)\) has at most the number of children as in the branching process.
Proof
See [57, Proposition 3.1]. \(\square \)
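As a concrete illustration (ours, with a hypothetical example distribution) of the size-biased distribution \(X^\circ \) used above, namely \(\mathbb {P}(X^\circ =x)=x\,\mathbb {P}(X=x)/{{\mathrm{\mathbb {E}}}}X\) for a finitely supported X:

```python
from fractions import Fraction

def size_biased(pmf):
    """Size-bias a finitely supported pmf {x: P(X = x)} with E[X] > 0:
    P(X_sb = x) = x * P(X = x) / E[X]."""
    mean = sum(x * p for x, p in pmf.items())
    return {x: x * p / mean for x, p in pmf.items() if x > 0}

# Example: X equals 1 or 3 with probability 1/2 each, so E[X] = 2 and the
# size-biased variable puts mass 1/4 on 1 and 3/4 on 3.
pmf = {Fraction(1): Fraction(1, 2), Fraction(3): Fraction(1, 2)}
sb = size_biased(pmf)
```

Exact rationals are used so that the reweighting is visible without floating-point noise.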
It immediately follows from Lemma 6.4 that
where \(\overline{T}_n^{(i)}(\lambda )\) is a mixed-Poisson branching process tree whose root has a \(\mathsf{Poi}(w_i^{(i)}(\lambda ))\) offspring distribution and every other vertex has a \(\mathsf{Poi}((W_n^{(i)}(\lambda ))^\circ )\) offspring distribution. As before, \({{\mathrm{ht}}}(\mathbf {t})\) denotes the height of the tree \(\mathbf {t}\).
When \({{\mathrm{ht}}}(\overline{T}_n^{(i)}(\lambda ))>\varepsilon n^{\eta }/2\), at least one of the subtrees of the root needs to have height at least \(\varepsilon n^{\eta }/2\). Combining this observation with (6.4) and (6.5), we get
where \(T_n^{(i)}(\lambda )\) is a branching process tree where every vertex has a \(\mathsf{Poi}((W_n^{(i)}(\lambda ))^\circ )\) offspring distribution.
We make the convention of writing \(T_n^{(i)}\), \(W_n^{(i)}\) etc. instead of \(T_n^{(i)}(0)\), \(W_n^{(i)}(0)\) etc. With this notation, it is easy to see that \(W_n^{(i)}(\lambda )\mathop {=}\limits ^{d}(1+\lambda n^{-\eta }) W_n^{(i)}\) and hence \((W_n^{(i)}(\lambda ))^\circ \mathop {=}\limits ^{d}(1+\lambda n^{-\eta }) (W_n^{(i)})^\circ \).
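The comparison object, a branching process with mixed-Poisson \(\mathsf{Poi}((W_n^{(i)})^\circ )\) offspring, can be simulated along the following lines. This is a hypothetical sketch: the sampler and the weight sequence are ours, and we draw the size-biased weight by picking an entry with probability proportional to its weight.

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for moderate lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mixed_poisson_height(weights, max_gen, seed=0):
    """Number of generations survived (capped at max_gen) by a branching
    process in which every individual has Poi(W_sb) children, with W_sb
    drawn size-biased from `weights` independently for each individual."""
    rng = random.Random(seed)
    alive, height = 1, 0
    while alive > 0 and height < max_gen:
        alive = sum(poisson(rng.choices(weights, weights=weights)[0], rng)
                    for _ in range(alive))
        if alive > 0:
            height += 1
    return height
```

For instance, `mixed_poisson_height([0.4, 0.8, 1.2], 30)` uses a mildly subcritical weight sequence (the size-biased offspring mean is below 1), so the tree dies out quickly for most seeds.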
The survival probability of mixed-Poisson branching processes We would like to compare our mixed-Poisson branching process with an offspring distribution that is independent of n. For this, we rely on the following two lemmas:
Lemma 6.5
(Mixed-Poisson branching processes of different parameters) Let \(T_n^{(i)}\) and \(T_n^{(i)}(\lambda )\) be as above. Assume further that \(\lambda \ge 0\). Then, for each \(k\ge 1\),
Proof
We follow [43, Proof of Lemma 3.4(1)]. Writing \(\delta =1+\lambda n^{-\eta }\), we note that we can obtain \(T_n^{(i)}\) as a subtree of \(T_n^{(i)}(\lambda )\) by killing every child independently with probability \(1-\delta ^{-1}\). Write \(\mathcal {A}\) for the event in which \({{\mathrm{ht}}}(T_n^{(i)}(\lambda ))\ge k\) and no vertex in the leftmost path of length k starting from the root in \(T_n^{(i)}(\lambda )\) is killed. Then
Indeed, the probability of the leftmost path surviving is precisely \(1/\delta ^k\). To finish the proof, note that \(\mathcal {A}\) implies \({{\mathrm{ht}}}(T_n^{(i)})\ge k\), so that
which is the desired inequality. \(\square \)
Lemma 6.6
(Stochastic bound by n-independent variable) Under Assumption 1.1, the random variable \((W_n^{(i)})^\circ \) is stochastically upper bounded by \(W^{\circ }\) where \(W\sim F\), i.e., \((W_n^{(i)})^\circ \mathop {\le }\limits ^{\mathrm {st}} W^{\circ }\).
Proof
First we make the following elementary observation: if \(a_1, a_2, b_1, b_2\) are positive numbers such that
Repeated application of the above will yield the following simple inequality: if \(\left\{ a_n\right\} _{n\ge 1}\) and \(\left\{ b_n\right\} _{n\ge 1}\) are sequences of positive numbers satisfying
Recall that \(\iota \) denotes the leftmost point of the support of F, and note that from (1.2) it follows that \(\int _{w_j}^{\infty }f=j/n\), \(j=1, 2, \ldots , n\) (note also that \(w_n=\iota \)). Define the function \(h_n: [\iota ,w_1)\rightarrow (\iota ,\infty )\) by \( \int _y^{h_n(y)}f=1/n. \) This immediately implies
Let \(g_n: [\iota ,w_1)\rightarrow (0,\infty )\) be given by
A direct computation and an application of (6.8) yields
Since uf(u) is non-increasing on \([\iota , \infty )\) under Assumption 1.1, we conclude that \(g_n'(y)\le 0\) on \((\iota , w_1)\). Thus, \(g_n(\cdot )\) is non-increasing on \([\iota , w_1)\). By right continuity, we can define \(g_n(w_1)=w_1/(\int _{w_1}^{\infty }uf(u)\ du)\). Since \(w_n\le w_{n-1}\le \cdots \le w_1\), we conclude that \(g_n(w_1)\le g_n(w_2)\le \cdots \le g_n(w_n)\). Clearly \(h_n(w_j)=w_{j-1}\) for \(j=2,\ldots ,n\). Thus
Now an application of (6.7) gives
which is equivalent to
This concludes the proof. \(\square \)
We continue by studying the survival probability of mixed-Poisson branching processes with infinite-variance offspring distribution:
Lemma 6.7
(Survival probability of infinite-variance MPBP) Let T denote a mixed-Poisson branching process tree with offspring distribution \(\mathsf{Poi}(W^{\circ })\). Then, there exists a constant \(c_{6.7}\) such that for all \(m\ge 1\),
Proof
This is a well-known result. We sketch the proof briefly for completeness. Recall the following facts about \(W^\circ \): (a) \(\mathbb {E}[W^\circ ]=\nu =1\) and (b) as \(x\rightarrow \infty \), \({{\mathrm{\mathbb {P}}}}(W^\circ >x)=c x^{-(\tau -2)}(1+o(1))\). By the Otter-Dwass formula, which describes the distribution of the total progeny of a branching process (see [36] for the special case when the branching process starts with a single individual, [58] for the more general case, and [42] for a simple proof based on induction), we have
where \(X_i\) are i.i.d. random variables distributed as \(W^\circ \). By [41, Proposition 2.7], in our situation, \(\mathbb {P}(\sum _{i=1}^k X_i = k-1)\le ck^{-1/(\tau -2)}\), so that
Take \(k=m^{(\tau -2)/(\tau -3)}\) in the second inequality in (6.9) to get
where |T| denotes the total number of vertices in T. We condition on the size |T| and write
By [50, Theorem 4], there exists a \(\kappa >1\) such that, uniformly for \(u\ge 1\),
Combining this with (6.10), we get
as required. \(\square \)
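As a numerical sanity check of the Otter-Dwass formula (our own illustration, not from the paper): for \(\mathsf{Poi}(\mu )\) offspring, the sum \(X_1+\cdots +X_k\) is \(\mathsf{Poi}(k\mu )\), and the formula \({{\mathrm{\mathbb {P}}}}(|T|=k)=\frac{1}{k}{{\mathrm{\mathbb {P}}}}(X_1+\cdots +X_k=k-1)\) reproduces the classical Borel distribution for the total progeny.

```python
import math

def otter_dwass_poisson(k, mu=1.0):
    """P(|T| = k) via Otter-Dwass: (1/k) P(X_1 + ... + X_k = k - 1),
    where the X_i are iid Poi(mu), so their sum is Poi(k * mu)."""
    lam = k * mu
    return math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1) / k

def borel_pmf(k, mu=1.0):
    """Borel distribution: P(|T| = k) = e^{-mu k} (mu k)^{k-1} / k!."""
    return math.exp(-mu * k) * (mu * k) ** (k - 1) / math.factorial(k)

# The two expressions agree for every k >= 1 (up to floating-point error).
```

This is only a consistency check of the formula in a tractable special case; the proof above needs the local limit estimate of [41, Proposition 2.7] in the infinite-variance regime.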
Proof of Proposition 6.3
Clearly
where
Iterating this \(\varepsilon n^{\eta }/4\) times, we get
where the second inequality is a consequence of Lemma 6.6 and the last step follows from Lemma 6.7.
Substituting the estimate (6.11) into (6.6) leads to
for some constant c. Here we have used Lemma 6.5 and the simple fact that \(w_i^{(i)}\le w_i\).
Next, note that it is an easy consequence of (1.3) that there exist constants \(c', c''>0\) such that for all \(i\in [n]\),
Further, [17, Lemma 2.2] implies that \(\nu _n^{(1)}<1\) for large n. Hence, for every \(i\ge 2\),
for some \(C>0\). Here, we have used the second inequality in (6.13). Combining this estimate with (6.12) and the first inequality in (6.13), we end up with
for some \(C'>0\). Taking \(N=\varepsilon ^{-\delta -1/\eta }\), we arrive at
Note that \(\varepsilon N^{\eta }=\varepsilon ^{-\delta \eta }\). A little more work after plugging this into (6.14) leads to (6.2). \(\square \)
6.2 Proof of global lower-mass bound
In this section, we complete the proof of Lemma 6.1. We start with some preliminaries:
Lemma 6.8
(Weight of size-biased reordering) Let \(\pi _v(1)=v\) and \((\pi _v(i): i\in [n]{\setminus }\{1\})\) be a size-biased reordering on \([n]{\setminus } \{v\}\) where the size of vertex \(v'\) is proportional to \(w_{v'}\) for \(v'\in [n]{\setminus }\left\{ v\right\} \). Then, for every \(k=o(n)\), there exists a \(J>0\) such that
Proof
See [17, Proof of Lemma 5.1]. \(\square \)
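For concreteness, the size-biased reordering in Lemma 6.8 can be sampled as follows (a sketch; the function name is ours, and whereas Lemma 6.8 additionally pins the first element to v, here we sample the plain reordering):

```python
import random

def size_biased_reordering(weights, seed=0):
    """Random permutation of range(len(weights)) in which, at every step,
    the next index is chosen with probability proportional to its weight
    among the indices not yet chosen."""
    rng = random.Random(seed)
    remaining = list(range(len(weights)))
    order = []
    while remaining:
        pos = rng.choices(range(len(remaining)),
                          weights=[weights[j] for j in remaining])[0]
        order.append(remaining.pop(pos))
    return order
```

Heavy indices tend to appear early in the output, which is exactly why the exploration of Sect. 5 finds the maximal components in finite "time".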
Recall the definitions of \(\eta \) and \(\rho \) from (6.1). Recall that for \(v\in [n]\), \(B(v,\delta )\) denotes the intrinsic ball (in \(\mathrm{NR}_n(\varvec{w}(\lambda ))\)) around v of radius \(\delta n^{\eta }\). We will use the following bound on the weight of balls:
Lemma 6.9
(Weights of balls around high-weight vertices cannot be too small) For every \(\varepsilon >0\) and \(i\ge 1\), there exist \(n_{i, \varepsilon }\) large and \(\delta _{\varepsilon ,i}>0\) such that for all \(n\ge n_{i,\varepsilon }\) and \(\delta \in (0, \delta _{\varepsilon ,i}]\),
Proof
We rely on a cluster exploration used in [17] which we describe next. We denote by \((Z_l(i))_{l\ge 0}\) the exploration process of \(\mathscr {C}(i)\), the cluster containing i, starting from i, in the breadth-first search, where \(Z_0(i)=1\) and where \(Z_1(i)\) denotes the number of potential neighbors of the initial vertex i. The variable \(Z_l(i)\) has the interpretation of the number of potential neighbors of the first l explored potential vertices in the cluster whose neighbors have not yet been explored. As a result, we explore by taking one vertex of the ‘stack’ of size \(Z_l(i)\), drawing its mark and checking whether it is a real vertex, followed by drawing its number of potential neighbors. Thus, we set \(Z_0(i)=1, Z_1(i)=\mathsf{Poi}(w_i)\), and note that, for \(l\ge 2\), \(Z_l(i)\) satisfies the recursion relation
where \(X_l\) denotes the number of potential neighbors of the lth potential vertex that is explored, where \(X_1=X_1(i)=\mathsf{Poi}(w_i)\). More precisely, when we explore the lth potential vertex, we start by drawing its mark \(M_l\) in an i.i.d. way with distribution
When we have already explored a vertex with the same mark as the one drawn, we turn the status of the vertex to be explored to inactive, the potential vertex does not become a real vertex, and we proceed with the next potential vertex. When, instead, it receives a mark that we have not yet seen, then the potential vertex becomes a real vertex, its mark \(M_l\in [n]\) indicating to which vertex in [n] the lth explored vertex corresponds, so that \(M_l\in \mathscr {C}(i)\). We then draw \(X_l=\mathsf{Poi}(w_{M_l})\), and \(X_l\) denotes the number of potential vertices incident to the real vertex \(M_l\). Again, upon exploration, these potential vertices might become real vertices, and this occurs precisely when their mark corresponds to a vertex in [n] that has not appeared in the cluster exploration so far. We call the above procedure of drawing a mark for a potential vertex to investigate whether it corresponds to a real vertex a vertex check. Let
Then, by imitating the techniques used in the proof of [17, Theorem 2.4], we obtain
([17, Theorem 2.4] states the result for \(i=1\). However the exact same proof goes through for any \(i\ge 2\)). The limiting process \((\mathscr {S}_t(i))_{t>0}\) is defined as follows: Let
We let \((\mathscr {I}_i(t))_{i\ge 1}\) denote independent increasing indicator processes defined by
so that
Here \(\big (\mathsf{Exp}(a i^{-1/(\tau -1)})\big )_{i\ge 1}\) are independent exponential random variables with rates \(a i^{-1/(\tau -1)}\). Then we define
for all \(t\ge 0\), where \(c=\lambda +\zeta -ab\) and \(\zeta \) is as in (2.12). We call \((\mathscr {S}_t)_{t\ge 0}\) a thinned Lévy process.
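A minimal sketch of the mark-based exploration with vertex checks described above, assuming marks are drawn with probability proportional to the weights (our reading of the mark distribution; function names are ours) and ignoring the finer bookkeeping carried out in [17]:

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def explore_cluster(w, i, seed=0):
    """Explore the cluster of vertex i: keep a stack of potential vertices;
    each one draws a mark (a vertex check) and becomes a real vertex only if
    its mark is new, in which case it contributes Poi(w[mark]) further
    potential vertices."""
    rng = random.Random(seed)
    seen = {i}
    stack = poisson(w[i], rng)        # Z_1(i): potential neighbors of i
    while stack > 0:
        stack -= 1
        mark = rng.choices(range(len(w)), weights=w)[0]
        if mark in seen:
            continue                   # vertex check fails: not a real vertex
        seen.add(mark)                 # a new real vertex with weight w[mark]
        stack += poisson(w[mark], rng)
    return seen
```

The returned set plays the role of \(\mathscr {C}(i)\); the stack size traced over time is the discrete analogue of the process \(\mathscr {Z}_t^{(n)}(i)\).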
Let \(\mathscr {H}_n^{(i)}(u)\) denote the hitting time of u of the process \((\mathscr {Z}_t^{(n)}(i))_{t> 0}\). Then, by [17, Corollary 3.4], \(\mathscr {H}_n^{(i)}(u)\mathop {\longrightarrow }\limits ^{d}\mathscr {H}_{\mathscr {S}(i)}(u),\) the hitting time of u of the process \((\mathscr {S}_t(i))_{t>0}\). This implies the existence of a \(B_{\varepsilon ,i}\) (independent of n) and an integer \(n_{i,\varepsilon }\) such that
since the limiting process \((\mathscr {S}_t(i))_{t>0}\) starts from \((c_{F}/i)^{1/(\tau -1)}\) and takes a positive amount of time to reach \((c_{F}/2i)^{1/(\tau -1)}\).
Let |B(i, r)| denote the number of vertices in B(i, r). Let \(\delta _{\varepsilon ,i}\) be so small that
Then we claim that for all \(\delta \in (0,\delta _{\varepsilon ,i}]\),
That (6.21) holds can be seen as follows. For \(|B(i,\delta )|\le (c_{F}/2i)^{1/(\tau -1)} \delta n^{\rho }\) to occur, there has to exist some \(j\in [1, \delta n^{\eta }]\) such that the number of vertices at distance j from i is smaller than \((c_{F}/2i)^{1/(\tau -1)} \delta n^{\rho }/(\delta n^{\eta })\), i.e.,
Now the number of vertices at distance j from i is precisely the number of vertices in generation j of the breadth-first exploration process, and hence this number (scaled by \(n^{\rho }\)) appears in the function \(\mathscr {Z}_t^{(n)}(i)\). Thus, (6.22) implies that \((\mathscr {Z}_t^{(n)}(i))_{t>0}\) has to hit \((c_{F}/2i)^{1/(\tau -1)}\) before we have finished exploring up to generation \(\delta n^{\eta }\), i.e., we must have that
where the last inequality holds by (6.20) and because \(\delta \in (0, \delta _{\varepsilon ,i}]\).
Combining (6.19) and (6.21), we conclude that for all \(\delta \in (0,\delta _{\varepsilon ,i}]\) and \(n\ge n_{i,\varepsilon }\),
This explains the second term in (6.15).
To see what happens when \(|B(i,\delta )|\ge (c_{F}/2i)^{1/(\tau -1)} \delta n^{\rho }\), recall that the vertices appear in a size-biased fashion in our exploration process. Hence
by Lemma 6.8. Combining (6.23) and (6.24) proves the claim. \(\square \)
Lemma 6.10
For \(v\in [n]\), let \(\mathscr {C}(v)\) denote the component of v in \(\mathrm{NR}_n(\varvec{w}(\lambda ))\). Then for every fixed \(i\ge 1\) and \(\varepsilon _1,\varepsilon _2>0\), there exist \(\xi =\xi _{\varepsilon _1,\varepsilon _2}^{(i)}>0\) and an integer \({{\bar{n}}}={{{\bar{n}}}_{\varepsilon _1,\varepsilon _2}}^{(i)}\) such that
Proof
Recall Proposition 6.3, and choose \(N_{\varepsilon _1,\varepsilon _2}\) and \(n_{\varepsilon _1,\varepsilon _2}\) large so that
for all \(n\ge n_{\varepsilon _1, \varepsilon _2}\). Let
Clearly, on the set \(F_1\cap F_2\),
Recall the definition of \(\delta _{\varepsilon ,i}\) in (6.20), and let
Then (6.26) implies
on the set \(F_1\cap F_2\). Hence, for all \(n\ge n_{\varepsilon _1,\varepsilon _2}\),
where the second inequality is a consequence of Lemma 6.9.
Next, on the set \(F_1\cap F_2^c\),
for any \(v\in \mathscr {C}(i)\). Further, by [17, Theorem 1.4], \(n^{-\rho }\sum _{j\in \mathscr {C}(i)}w_j\) converges in distribution to a positive random variable. Hence, there exists \(\xi _{\varepsilon _2}^{(i)}>0\) such that
The result follows upon combining (6.25), (6.27) and (6.28). \(\square \)
We are now ready for the proof of Lemma 6.1:
Proof of Lemma 6.1
Using Proposition 6.2, for any \(i\ge 1\) and \(\varepsilon >0\), we can choose K such that
By Lemma 6.10, we can choose \(\xi >0\) and an integer \({\bar{n}}\) such that
for all \(n\ge {\bar{n}}\) and \(k\in [K]\). Combining (6.29) and (6.30), we see that
which yields the desired tightness. \(\square \)
7 Proofs: Fractal dimension
In this section, we prove the assertions about the box-counting dimension. Throughout this section, \(C,C'\) will denote universal constants whose values may change from line to line.
We first establish the scaling limit of \(\mathscr {C}(j)\), the component of vertex j. Consider \(\mathscr {C}(1)\), and, as usual, view \(\mathscr {C}(1)\) as a metric measure space via the graph distance and by assigning mass \(p_v:=w_v/(\sum _{\ell \in \mathscr {C}(1)}w_{\ell })\) to vertex \(v\in \mathscr {C}(1)\). Set \(\mathbf {p}:=(p_v: v\in \mathscr {C}(1))\). Now note that, conditional on the vertex set of \(\mathscr {C}(1)\), \(\mathscr {C}(1)\) has the same distribution as the graph \({\tilde{\mathscr {G}}}_m(\mathbf {p}, a)\) where \(a=(1+\lambda n^{-\eta })(\sum _{j\in \mathscr {C}(1)} w_j)^2/\ell _n\). Using [17, Proposition 3.7] and [17, Lemma 3.1], it is easy to verify that the conditions in Assumption 4.4 hold with this choice of a and \(\mathbf {p}\). Thus, by Theorem 4.5, \(n^{-\eta }\mathscr {C}(1)\) converges in the Gromov-weak topology to a limiting space that we denote by \(\mathscr {M}(1)\). Further, the sequence \(\left\{ n^{-\eta }\mathscr {C}(1)\right\} _{n\ge 1}\) satisfies the global lower mass-bound property by Lemma 6.10. Hence,
with respect to the Gromov-Hausdorff-Prokhorov topology. By similar arguments, we can show that \(n^{-\eta }\mathscr {C}(j)\mathop {\longrightarrow }\limits ^{d}\mathscr {M}(j)\) with respect to the Gromov-Hausdorff-Prokhorov topology for any \(j\ge 1\) and an appropriate (random) compact metric measure space \(\mathscr {M}(j)\). In Sect. 7.1, we identify the upper box-counting dimension, and in Sect. 7.2 the lower box-counting dimension.
7.1 Upper bound on the Minkowski dimension
The key ingredient in the proof is the following proposition:
Proposition 7.1
Write \(\pi =(\tau -2)/(\tau -3)\). Then for every \(j\ge 1\),
Proof
For simplicity, we work with \(j=1\). The proof is similar for any \(j\ge 2\). Recall that \(\mathscr {N}(\mathscr {M}, \delta )\) denotes the minimum number of open balls of radius \(\delta \) needed to cover the compact space \(\mathscr {M}\). Write
Since the convergence in (7.1) holds with respect to the Gromov-Hausdorff topology, for every \(x, \varepsilon >0\),
Fix an arbitrary \(\delta >0\) and, for any \(\varepsilon >0\), define
Let \(E_n\) be the event defined in (6.3). Clearly, on the event \(E_n\cap \left\{ \mathfrak {N}_{(n)}(\varepsilon )>x_{\varepsilon }\right\} \), any \(v\in \mathscr {C}(1)\) is within distance \(\varepsilon n^{\eta }\) from a point in \(\mathscr {C}(1)\cap [N(\varepsilon )]\). Hence,
and, by Proposition 6.3,
It remains to bound \({{\mathrm{\mathbb {P}}}}\left( |\mathscr {C}(1)\cap [N(\varepsilon )]|\ge x_{\varepsilon }\right) \). To this end, note that by [17, Proposition 3.7],
where \(\mathscr {I}_q(\cdot )\) and \(\mathscr {H}_{\mathscr {S}(1)}(\cdot )\) are as defined around (6.18). Further, [45, Theorem 1.4] implies the existence of positive constants \(A_1\) and \(A_2\) such that
Combining (7.5), (7.6), (7.7) and (7.8), we conclude that, for any \(u_{\varepsilon }>0\),
Now \(\mathscr {I}_q\left( u_{\varepsilon }\right) \) are i.i.d. Bernoulli random variables with
where a is as in (6.16). Choose \(s>0\) small so that \({\mathrm {e}}^s-1\le 2s\). Clearly
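The role of this choice of \(s\) is the standard exponential-moment bound for a sum of independent indicators; writing \(S\) for the sum and \(\mu \) for its mean (a reconstruction of this routine step, not taken verbatim from the original),
\[
\mathbb {E}\bigl [{\mathrm {e}}^{sS}\bigr ]=\prod _{q}\bigl (1+p_q({\mathrm {e}}^s-1)\bigr )\le \exp \bigl (({\mathrm {e}}^s-1)\mu \bigr )\le \exp (2s\mu ),
\]
so that, by Markov's inequality, \(\mathbb {P}(S>x)\le \exp (2s\mu -sx)\) for every \(x>0\).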
Hence, there exists a constant \(A_3>0\) such that
Combining (7.3), (7.9) and (7.10), we see that \(\sum _{k=1}^{\infty }{{\mathrm{\mathbb {P}}}}\left( \mathfrak {N}_{(\infty )}(2/k)>k^{\delta +\pi }\right) <\infty .\) Since \(\delta >0\) was arbitrary, we conclude that
By sandwiching \(\varepsilon \) between \(2/(k-1)\) and \(2/k\), we get the desired upper bound on \({{\mathrm{\overline{dim}}}}(\mathscr {M}(1))\). \(\square \)
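For completeness, the sandwiching step just invoked can be spelled out as follows (a reconstruction of a standard argument): for \(\varepsilon \in (2/k, 2/(k-1)]\), monotonicity of \(\varepsilon \mapsto \mathscr {N}(\mathscr {M}(1),\varepsilon )\) gives, eventually a.s. by Borel-Cantelli applied to the summable probabilities above,
\[
\frac{\log \mathscr {N}(\mathscr {M}(1),\varepsilon )}{\log (1/\varepsilon )}\le \frac{\log \mathscr {N}(\mathscr {M}(1),2/k)}{\log ((k-1)/2)}\le \frac{(\delta +\pi )\log k}{\log ((k-1)/2)}\longrightarrow \delta +\pi \quad (k\rightarrow \infty ),
\]
and letting \(\delta \downarrow 0\) along a countable sequence yields \({{\mathrm{\overline{dim}}}}(\mathscr {M}(1))\le \pi \) a.s.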
Proof of upper bounds in (1.8) and (1.20): We only give the proof of (1.8). This will imply (1.20) because of (2.13). Fix \(i\ge 1\) and let
By Proposition 6.2, \(K_n\) is tight. By passing to a subsequence if necessary, we can assume that we are working on a space where
for some (integer-valued) random variable \(K_{\infty }\). Then
By Proposition 7.1, \({{\mathrm{\mathbb {P}}}}\left( {{\mathrm{\overline{dim}}}}\left( M_i^{{{\mathrm{nr}}}}(\lambda )\right) >\pi ,\ K_{\infty }=j\right) =0\) for every \(j\ge 1\), and hence
This completes the proof of the upper bound on the Minkowski dimension. \(\square \)
7.2 Lower bound on the Minkowski dimension
We next extend the argument for the upper bound to prove a lower bound on the Minkowski dimension of \(\mathscr {M}(j)\). As in (7.3),
Recall the definitions in (7.2), and for an arbitrary \(\delta >0\) and \(\varepsilon >0\), adapt (7.4) to
where \(\pi =(\tau -2)/(\tau -3)\) as in Proposition 7.1, and \(h>0\) is sufficiently small so that
(A simple calculation shows that \(h>0\) can be chosen small enough that (7.12) holds whenever \(\tau >3\).)
The main result in this section is the following estimate on \(\mathfrak {N}_{(n)}(\varepsilon ):=\mathscr {N}(\mathscr {C}(j), \varepsilon n^{\eta })\):
Proposition 7.2
There exist \(\kappa >0\) and \(c>0\) such that
Consequently, for every \(j\ge 1\),
The rest of this section is devoted to the proof of Proposition 7.2. As in Sect. 7.1, we work with \(j=1\) for simplicity; the proof is similar for any \(j\ge 2\). Before starting with the proof, we collect some preliminaries. The proof below relies on two asymptotic bounds on \(|\mathscr {C}(1)|\). For this, we use
where \(\mathscr {H}_{\mathscr {S}(1)}(\cdot )\) is defined around (6.18). Our main result on the lower tails of the distribution of \(\mathscr {H}_{\mathscr {S}(1)}(0)\) is the following lemma:
Lemma 7.3
(Lower tails of \(\mathscr {H}_{\mathscr {S}(1)}(0)\)) There exists \(C>0\) such that
Proof
We note that
We split
where, abbreviating \(d_j=a/j^{1/(\tau -1)}\),
Here \((N_j(t))_{t\ge 0}\) are independent rate \(d_j\) Poisson processes. Thus, \((\mathscr {R}_t)_{t\ge 0}\) is a Lévy process, while \((\mathscr {D}_t)_{t\ge 0}\) subtracts the multiple hits. When \(b>0\) and \(t\le s\) with s small, and using that \(\mathscr {D}_s\) is non-decreasing,
We start with the latter contribution. Since, for a Poisson random variable Z with parameter \(\lambda \),
we have
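The Poisson lower-tail estimate used here is standard; one version that suffices (a reconstruction, stated in case the reader wishes to verify the step) is
\[
\mathbb {P}(Z\le \lambda -x)\le \exp \bigl (-x^2/(2\lambda )\bigr ),\qquad 0\le x\le \lambda ,
\]
obtained by applying Markov's inequality to \({\mathrm {e}}^{-sZ}\) and optimizing over \(s>0\).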
For the first term in (7.17), we use Doob’s \(L^2\)-inequality to bound
so that (7.15) follows. \(\square \)
Lemma 7.4
(Cluster weight convergence) For a set of vertices \(A\subseteq [n]\), let \(w(A)=\sum _{a\in A} w_a\) denote its weight. Then, for every \(j\ge 1\), \(\mathbb {E}[n^{-\rho }w(\mathscr {C}(j))]\) remains uniformly bounded as \(n\rightarrow \infty \), where \(\rho \) is as in (6.1).
Proof
Fix \(K\ge 0\) so large that
This is possible, since \(\ell _n/n\rightarrow \mathbb {E}[W]\), while
where we have used (6.13) in the second step to lower bound \(\sum _{i\le K} w_i^2\). We write \(\mathscr {C}(A)=\bigcup _{a\in A} \mathscr {C}(a)\). Then, for \(j\le K\), we bound
We next investigate \(\mathbb {E}[w(\mathscr {C}([K]))]\). Note that
where \(\ell _n=\sum _{l\in [n]}w_l\) is the total weight. Thus, for any \(A\subseteq [n]\),
where \(i_0=a, i_l=j\) and the sum is over distinct vertices not in A. Using the bound on \(p_{i,j}\) and performing the sum over \(i_1,\ldots , i_{l-1}\), we obtain that
By (7.18),
As a result, for large n,
Since, by an argument similar to (7.19),
we arrive at
This completes the proof. \(\square \)
We conclude that
We now study the event in (7.21). We note that \(\mathfrak {N}_{(n)}(\varepsilon )\ge X^{(n)}(\varepsilon )\), which is defined as
where \({{\mathrm{dist}}}_{\mathscr {C}(1)}(A,B)\) is the graph distance between the sets of vertices \(A\cap \mathscr {C}(1)\) and \(B\cap \mathscr {C}(1)\). Indeed, we start counting in the order \(i\ge 1\), and determine whether an extra ball is needed to cover vertex i after we have covered the vertices in \([i-1]\cap \mathscr {C}(1)\). The first contribution in (7.22) comes from the ball that covers vertex 1.
Use inclusion-exclusion to write \(X^{(n)}(\varepsilon )\) as
where
Therefore,
We will show that the limsup as \(n\rightarrow \infty \) of the first probability is bounded by \(C\varepsilon ^{\kappa _1}\), and the limsup as \(n\rightarrow \infty \) of the second by \(C\varepsilon ^{\kappa _2}\) with \(\kappa _1,\kappa _2>0\), so that Proposition 7.2 will follow with \(\kappa =\min \{\delta h/2, \kappa _1, \kappa _2\}\).
Analysis of \(X_1^{(n)}\). It follows from [17, Proposition 3.7] that
where
is a sum of independent indicators with success probabilities \(1-\exp \big (-d_i \varepsilon ^{\delta h/2}\big )\), \(i=1,\ldots , \underline{N}(\varepsilon )\) (recall (6.17)), with \(d_j\) as defined right below (7.16). Note that
Similarly, for small enough \(\varepsilon >0\),
Further, since \(\underline{X}_1(\varepsilon )\) is a sum of independent indicators,
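The concentration step for \(\underline{X}_1(\varepsilon )\) is the standard Chernoff lower-tail bound for sums of independent indicators; in reconstructed form, for such a sum \(S\) with mean \(\mu \),
\[
\mathbb {P}\bigl (S\le (1-u)\mu \bigr )\le \exp \bigl (-u^2\mu /2\bigr ),\qquad u\in (0,1),
\]
so in particular \(\mathbb {P}(S\le \mu /2)\le {\mathrm {e}}^{-\mu /8}\).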
Combining (7.24), (7.25), (7.26), and (7.27), we get
where \(\kappa _1=2\pi -2\delta +\delta h/2-\pi (1-\delta ')>0\) when \(\delta >0\) is sufficiently small. This proves a bound on the first term on the right side of (7.23).
Analysis of \(X_2^{(n)}\). We next give an upper bound on \({{\mathrm{\mathbb {P}}}}(X_2^{(n)}(\varepsilon )\ge \underline{x}_{\varepsilon })\). We start with
Further,
When \(i\in \mathscr {C}(1)\) and \({{\mathrm{dist}}}_{\mathscr {C}(1)}(i, [i-1])\le 4\varepsilon n^{\eta }\), there must exist \(j\in [i-1]\) and \(k\in [n]\) such that the three events
- (i) \(\{{{\mathrm{dist}}}(i, k)\le 4\varepsilon n^{\eta }\}\);
- (ii) \(\{{{\mathrm{dist}}}(j, k)\le 4\varepsilon n^{\eta }\}\);
- (iii) \(\{k\in \mathscr {C}(1)\}\),
occur disjointly, where \({{\mathrm{dist}}}(i,j)\) denotes the graph distance in the random graph \(\mathrm{NR}_n(\varvec{w})\). There are two cases depending on whether \(k>\underline{N}(\varepsilon )\) or \(k\le \underline{N}(\varepsilon )\). When \(k\le \underline{N}(\varepsilon )\), we can ignore the event \(\{{{\mathrm{dist}}}(j, k)\le 4\varepsilon n^{\eta }\}\). This gives, for \(2\le i\le \underline{N}(\varepsilon )\),
where, for two increasing events A, B, we write \(A\circ B\) for the event that A and B occur disjointly.
By the BK inequality, we bound
Similar to (7.20), we have
where \(\nu _n(\lambda )=(1+\lambda n^{-\eta })\nu _n\). In our case, \(\nu _n=1+O(n^{-\eta })\), so that, for \(l\le 4\varepsilon n^{\eta }\),
Further,
where we recall that \(w(A)=\sum _{a\in A}w_a\) denotes the total weight of A. By Lemma 7.4, \(\mathbb {E}[n^{-\rho }w(\mathscr {C}(1))]\) remains uniformly bounded as \(n\rightarrow \infty \). We conclude that
where the last step uses the first inequality in (6.13). Note that
so that the powers of n cancel. Combining the above with (7.30) leads to
Note that
Thus
Using (7.29) and plugging in the values \(\eta =(\tau -3)/(\tau -1), \pi =(\tau -2)/(\tau -3)\), we arrive at
where the exponents \(\kappa _3\) and \(\kappa _4\) are positive because of the choice of \(\delta '\) (see (7.12)).
Completion of the proof of Proposition 7.2: Note that (7.13) follows upon combining (7.21), (7.23), (7.28), and (7.31). Now fix \(p>1/\kappa \), where \(\kappa \) is as in (7.13). Then \(\sum _{k=1}^{\infty }{{\mathrm{\mathbb {P}}}}\left( \mathfrak {N}_{(\infty )}(1/k^p)<(2k)^{(\pi -\delta )p}\right) <\infty .\) Since \(\delta >0\) was arbitrary, we conclude that
By sandwiching \(\varepsilon \) between \(1/(k-1)^p\) and \(1/k^p\), we obtain the bound: \({{\mathrm{\underline{dim}}}}(\mathscr {M}(1))\ge \pi \) a.s. \(\square \)
Proof of (1.8) and (1.20): Proposition 7.2 combined with an argument identical to the one given right after the proof of Proposition 7.1 yields the lower bound \({{\mathrm{\underline{dim}}}}\left( M_i^{{{\mathrm{nr}}}}(\lambda )\right) \ge \pi \) a.s. Then (1.8) follows once we combine this lower bound with (7.11), and (1.20) follows as a consequence of (2.13). \(\square \)
8 Open problems
In Theorem 1.8, we have considered a general entrance boundary \(\mathbf {c}\in l_0\). To study specific properties of the limit objects, we focused mainly on the special case \(\mathbf {c}=\mathbf {c}(\alpha ,\tau )\) as in (1.19); in this case, we have shown compactness and identified the box-counting dimension in Theorem 1.9. An important problem in this context is to establish necessary and sufficient conditions on \(\mathbf {c}\) that ensure compactness of the limiting spaces.
Another motivation for pursuing this problem comes from the following simple corollary of Theorem 1.9: For any \(i\ge 1\), consider the sequence \(\varvec{\theta }^{(i)}\) as in (2.10). Then \(\mathscr {T}_{(\infty )}^{\varvec{\theta }^{(i)}}\) is almost surely compact. Similarly, compactness of \(\mathscr {M}(1)\) (as defined in (7.1)) implies compactness of the associated ICRT \(\mathscr {T}_{(\infty )}^{{\overline{\varvec{\theta }}}}\) where \(\overline{\varvec{\theta }}=(\overline{\theta }_i:i\ge 1)\) is given by the following prescription: Let \(q_k\) be such that
where \(\mathscr {I}_q(\cdot )\) and \(\mathscr {H}_{\mathscr {S}(1)}(\cdot )\) are as defined around (6.18). Define
These can be thought of as “annealed results,” since \(\varvec{\theta }^{(i)}\) and \({\overline{\varvec{\theta }}}\) are random. No result is known in this direction without a prior distribution on \(\varvec{\theta }\), i.e., sufficient conditions on non-random \(\varvec{\theta }\in \Theta \) that ensure compactness of the tree \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) are not known. In [11, Section 7], Aldous, Miermont and Pitman conjecture that boundedness of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) for \(\varvec{\theta }\in \Theta \) is equivalent to \(\int _1^{\infty }(\psi _{\varvec{\theta }}(u))^{-1}du<\infty \), where \(\psi _{\varvec{\theta }}\), in our situation, is given by
This conjecture, however, is open to date. Our proof technique demonstrates a method of proving such annealed results via approximation by random graphs. Thus, classification of those \(\mathbf {c}\in l_0\) for which the spaces \(M_i^{\mathbf {c}}(\lambda )\) are compact will lead to a broad class of prior distributions on \(\varvec{\theta }\) for which \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) is compact.
Problem 8.1
Find necessary and sufficient conditions on \(\mathbf {c}\) that ensure compactness of the spaces \(M_i^{\mathbf {c}}(\lambda )\) for \(i\ge 1\).
Another related problem is to find the fractal dimensions of the limiting spaces. As a corollary to Theorem 1.9, we get
where \(\varvec{\theta }^{(i)}\) is as in (2.10) corresponding to \(\mathbf {c}\) of the form (1.19). Proposition 7.1 and Proposition 7.2 show that the assertion in (8.1) remains true if we replace \(\varvec{\theta }^{(i)}\) by \({\overline{\varvec{\theta }}}\). Now, it is not hard to prove that
It then follows that
which in turn implies that both the Hausdorff dimension and the packing dimension of a \(\psi _{{\overline{\varvec{\theta }}}}\) Lévy tree equal \((\tau -2)/(\tau -3)\) a.s. (see [34, 40]). Using the analogy between ICRTs and Lévy trees as in [11, Section 7], it is natural to expect that the same is true for \(\mathscr {T}_{(\infty )}^{{\overline{\varvec{\theta }}}}\) and hence for \(\mathscr {M}(1)\). This is the heuristic behind Conjecture 1.3.
Problem 8.2
Prove Conjecture 1.3.
Abbreviations
- \((\mathscr {S}_t)_{t\ge 0}\): Thinned Lévy process
- \(X^\circ \): Random variable having the size-biased distribution
- \({{\mathrm{\mathbb {E}}}}_{\mathbf {p}}, {{\mathrm{\mathbb {E}}}}_{\mathbf {p},\star }\): Expectation conditional on the ordered \(\mathbf {p}\)-tree \(\mathscr {T}_{m}^{\mathbf {p}}\) and the tilted \(\mathbf {p}\)-tree \(\mathscr {T}_{m}^{\mathbf {p},\star }\) respectively
- \({{\mathrm{\mathbb {E}}}}_{\varvec{\theta }}\): Expectation conditional on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) and the random variables \(U_j^{(i)}\) that encode the order on \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\)
- \(\mathscr {C}(i)\): Component containing node i
- \(\eta , \rho \): Critical exponents
- \(\mathscr {C}_i(\lambda )\): The i-th largest component in \(\mathrm{NR}_n(\varvec{w}(\lambda ))\)
- \(\mathscr {N}(\mathbb {R}_+)\): Space of counting measures on \(\mathbb {R}_+\) equipped with the vague topology
- \(\mathscr {N}(\mathscr {M},\delta )\): Minimal number of open balls of radius \(\delta \) required to cover a metric space \(\mathscr {M}\)
- \(\mathop {\longrightarrow }\limits ^{d}, \mathop {\longrightarrow }\limits ^{\mathrm {P}}\): Convergence in distribution and in probability
- \(d_{{{\mathrm{GH}}}}(X_1, X_2)\): Gromov-Hausdorff distance between two metric spaces \((X_1,d_1)\) and \((X_2, d_2)\). See the same section for the pointed Gromov-Hausdorff distance \(d_{{{\mathrm{GH}}}}^{{{\mathrm{pt}}}}\)
- \(d_{{{\mathrm{GHP}}}}(X_1, X_2)\): Gromov-Hausdorff-Prokhorov distance between two measured metric spaces \((X_1,d_1,\mu _1)\) and \((X_2, d_2, \mu _2)\)
- \({{\mathrm{dis}}}(C)\): Distortion of a correspondence \(C\subseteq X_1 \times X_2\)
- \(\mathbb {S}\): Space \(\mathbb {R}_+\times \mathscr {N}(\mathbb {R}_+)\) equipped with the product topology
- \(d_{\mathbf {t}}\): Distance metric on a tree \(\mathbf {t}\) which incorporates the edge lengths
- \(\ell ^3_{\downarrow }\): Decreasing positive vectors with finite \(\ell ^3\)-norm
- \(F^{\mathbf {p}}(\cdot ),A_m(\cdot )\): Functions in the depth-first construction of a \(\mathbf {p}\)-tree
- \(\mathscr {M}\): Symbol used to denote a generic metric space
- \(\tilde{\mathscr {G}}_m(\mathbf {p}, a)\): Random graph with distribution \({{\mathrm{\mathbb {P}}}}_{{{\mathrm{con}}}}(\cdot ,\mathbf {p},a,[m])\) defined in (4.2). See Sect. 4.4 for the modified random graph \({\mathscr {G}}_m^{{{\mathrm{mod}}}}(\mathbf {p},a)\)
- \(g^{(k)}_{\phi }(\mathbf {t})\): For a tree \(\mathbf {t}\in \mathbf {T}_{I,(k+\ell )}^*\), the functional defined in (4.24)
- \({{\mathrm{ht}}}(\mathbf {t})\): Height of a tree \(\mathbf {t}\) with edge lengths incorporated into the distance
- \(\mathscr {T}^{\varvec{\theta }}_{(\infty )}\): An ICRT constructed using \(\varvec{\theta }\in \Theta \)
- \(\mathscr {L}(\mathscr {T}_{(\infty )}^{\varvec{\theta }})\): Set of leaves of \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\)
- \(\mathfrak {G}_{(\infty )}(y),Q_{y}^{(\infty )}\): Root-to-vertex weights and measures in \(\mathscr {T}_{(\infty )}^{\varvec{\theta }}\) defined in (2.7) and (2.8). See analogous objects for finite trees in Sect. 4.4
- \(\ell ^2_{\downarrow }\): Space describing component sizes for the multiplicative coalescent
- \(\mathscr {L}(\mathbf {t})\): The collection of non-root leaves in a tree \(\mathbf {t}\)
- \(\varvec{w}(\lambda ):=(w_i(\lambda ))_{i\in [n]}\): Weight sequence in the critical scaling window
- \(\mathrm{NR}_n(\varvec{w})\): Norros-Reittu random graph with weight sequence \(\varvec{w}\)
- \(\nu \): Asymptotic expected forward degree in the Norros-Reittu random graph
- \(\mathfrak {P}(\mathbf {t})\): Set of permitted edges in a tree \(\mathbf {t}\)
- \({{\mathrm{\mathbb {P}}}}_{\text {tree}}(\cdot ; \mathbf {p})\): Distribution of a \(\mathbf {p}\)-tree with driving pmf \(\mathbf {p}\)
- \({\mathscr {T}}^{\mathbf {p}}_m,{\mathscr {T}}^{\mathbf {p}, \star }_m\): Random \(\mathbf {p}\)-tree, respectively tilted \(\mathbf {p}\)-tree using \(L(\cdot )\)
- \(\mathscr {R}\mathscr {C}(i,[\rho ,v])\): For vertex i on a path \([\rho ,v]\), the set of all children of i which fall to the right of \([\rho ,v]\)
- \(r_{IJ}^{(m)},\mathscr {R}_{IJ}^{(m)}\): Spanning subtrees obtained from the birthday construction of \(\mathbf {p}\)-trees, retaining a specific set of information. See Definition 4.16, and Sect. 2.2 for the corresponding objects for ICRTs
- \(N_{(\infty )}^\star \): Number of shortcuts in \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\)
- \(\sigma _r(\mathbf {x})\): r-th moment of the weight sequence \(\mathbf {x}\)
- \(\mathscr {T}_m^{\mathbf {p},\star }(\widetilde{\mathbf {V}}_{k,k+\ell }^{(m)})\): Spanning subtree of the tilted \(\mathbf {p}\)-tree \(\mathscr {T}_m^{\mathbf {p},\star }\) using the sampled vertex set \(\widetilde{\mathbf {V}}_{k,k+\ell }^{(m)}\)
- \(\mathscr {S}\): Space of all measured compact metric spaces; \(\bar{\mathscr {S}}\) is the corresponding space of isometry equivalence classes under \(d_{{{\mathrm{GHP}}}}\)
- \(\mathscr {S}_*\): Space of measured metric spaces under the Gromov-weak topology
- \(\tau \): Tail exponent of the cdf of the weight sequence \(\varvec{w}\)
- \(\Theta \): Space of tenable parameters giving rise to ICRTs
- \(L(\cdot ),{{\mathrm{\mathbb {P}}}}_{{{\mathrm{ord}}}}^\star \): Tilt functional and associated tilted \(\mathbf {p}\)-tree distribution
- \(\mathscr {T}_{(\infty )}^{\varvec{\theta },\star }\): Tilted ICRT with distribution \({{{\mathrm{\mathbb {P}}}}}_{\theta }^\star \)
- \(L_{(\infty )}(\mathscr {T}_{(\infty )}^{\varvec{\theta }}, \varvec{U})\): Tilt functional used to construct the tilted ICRT
- \(\mathbb {T}_m,\mathbb {T}_m^{{{\mathrm{ord}}}}\): Collection of all rooted (respectively rooted ordered) trees with vertex set [m]
- \({{\mathrm{\underline{dim}}}},{{\mathrm{\overline{dim}}}}\): Lower and upper box-counting dimensions
- \(V^{\mathbf {c}}_\lambda (\cdot )\): Lévy process "without replacement"; the corresponding process reflected at zero is \(\tilde{V}^{\mathbf {c}}_\lambda (\cdot )\)
- \(\mathbf {M}_n^{{{\mathrm{nr}}}}(\lambda )\): Components of \(\mathrm{NR}_n(\varvec{w}(\lambda ))\) viewed as an element of \(\mathscr {S}^{\mathbb {N}}\)
- \(\mathbf {T}_{IJ}\): Space of trees with I labeled leaves and J other labeled "hub" vertices, in which every edge has strictly positive edge length
- \(\mathbf {T}_{IJ}^*\): The space \(\mathbf {T}_{IJ}\) where, in addition, the trees are equipped with leaf weights and root-to-leaf measures
- \(\mathbf {Z}(\lambda )\): Lengths of excursions of \(\tilde{V}^{\mathbf {c}}_\lambda (\cdot )\) from zero
References
Abraham, R., Delmas, J.-F., Hoscheit, P.: A note on the Gromov-Hausdorff-Prokhorov distance between (locally) compact metric measure spaces. Electron. J. Probab. 18(14), 1–21 (2013)
Achlioptas, D., D’Souza, R.M., Spencer, J.: Explosive percolation in random networks. Science 323(5920), 1453–1455 (2009)
Addario-Berry, L., Broutin, N., Goldschmidt, C.: Critical random graphs: limiting constructions and distributional properties. Electron. J. Probab. 15(25), 741–775 (2010). MR2650781 (2011d:60025)
Addario-Berry, L., Broutin, N., Goldschmidt, C.: The continuum limit of critical random graphs. Probab. Theory Relat. Fields 152(3–4), 367–406 (2012)
Addario-Berry, L., Broutin, N., Goldschmidt, C., Miermont, G.: The scaling limit of the minimum spanning tree of the complete graph. Ann. Probab. (2013) (to appear)
Albert, R., Barabási, A.-L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74(1), 47 (2002)
Aldous, D.: The continuum random tree I. Ann. Probab. 19, 1–28 (1991)
Aldous, D.: The continuum random tree III. Ann. Probab. 21, 248–289 (1993)
Aldous, D.: Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25(2), 812–854 (1997). MR1434128 (98d:60019)
Aldous, D., Limic, V.: The entrance boundary of the multiplicative coalescent. Electron. J. Probab. 3(3), 59 (1998). (electronic). MR1491528 (99d:60086)
Aldous, D., Miermont, G., Pitman, J.: The exploration process of inhomogeneous continuum random trees, and an extension of Jeulin’s local time identity. Probab. Theory Relat. Fields 129(2), 182–218 (2004). MR2063375 (2005f:60023)
Aldous, D., Pitman, J.: A family of random trees with random edge lengths. Random Struct. Algorithms 15(2), 176–195 (1999)
Aldous, D., Pitman, J.: Inhomogeneous continuum random trees and the entrance boundary of the additive coalescent. Probab. Theory Relat. Fields 118(4), 455–482 (2000). MR1808372 (2002a:60012)
Athreya, S., Löhr, W., Winter, A.: The gap between Gromov-vague and Gromov-Hausdorff-vague topology. Stoch. Process. their Appl 126, 2527–2553 (2016)
Bertoin, J.: Random Fragmentation and Coagulation Processes, Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (2006). MR2253162 (2007k:60004)
Bhamidi, S., van der Hofstad, R., van Leeuwaarden, J.S.H.: Scaling limits for critical inhomogeneous random graphs with finite third moments. Electron. J. Probab. 15(54), 1682 (2010)
Bhamidi, S., van der Hofstad, R., van Leeuwaarden, J.S.H.: Novel scaling limits for critical inhomogeneous random graphs. Ann. Probab. 40(6), 2299–2361 (2012). MR3050505
Bhamidi, S., Broutin, N., Sen, S., Wang, X.: Scaling limits of random graph models at criticality: Universality and the basin of attraction of the Erdős-Rényi random graph. arXiv preprint arXiv:1411.3417 (2014)
Bhamidi, S., Dhara, S., Hofstad, R.V.d., Sen, S.: Continuum scaling limits of the configuration model (2016) (In preparation)
Bhamidi, S., Sen, S., Wang, X.: Continuum limit of critical inhomogeneous random graphs. Probab. Theory Relat. Fields (2014) (to appear)
Bollobás, B.: Random Graphs. Cambridge University Press, Cambridge (2001)
Bollobás, B., Janson, S., Riordan, O.: The phase transition in inhomogeneous random graphs. Random Struct. Algorithms 31(1), 3–122 (2007). MR2337396 (2008e:05124)
Braunstein, L.A., Buldyrev, S.V., Cohen, R., Havlin, S., Stanley, H.E.: Optimal paths in disordered complex networks. Phys. Rev. Lett. 91(16), 168701 (2003)
Braunstein, L.A., Wu, Z., Chen, Y., Buldyrev, S.V., Kalisky, T., Sreenivasan, S., Cohen, R., Lopez, E., Havlin, S., Stanley, H.E.: Optimal path and minimal spanning trees in random weighted networks. Int. J. Bifurc. Chaos 17(07), 2215–2255 (2007)
Britton, T., Deijfen, M., Martin-Löf, A.: Generating simple random graphs with prescribed degree distribution. J. Stat. Phys. 124(6), 1377–1397 (2006). MR2266448 (2007g:05168)
Burago, D., Burago, Y., Ivanov, S.: A Course in Metric Geometry, Graduate Studies in Mathematics, vol. 33. American Mathematical Society, Providence, RI (2001). MR1835418 (2002e:53053)
Camarri, M., Pitman, J.: Limit distributions and random trees derived from the birthday problem with unequal probabilities. Electron. J. Probab. 5(2), 18 (2000). (electronic). MR1741774 (2001c:60080)
Chen, Y., López, E., Havlin, S., Stanley, H.E.: Universal behavior of optimal paths in weighted networks with general disorder. Phys. Rev. Lett. 96(6), 068702 (2006)
Chung, F., Lu, L.: The average distances in random graphs with given expected degrees. Proc. Natl. Acad. Sci. USA 99(25), 15879–15882 (2002). (electronic). MR1944974 (2003k:05124)
Chung, F., Lu, L.: Connected components in random graphs with given expected degree sequences. Ann. Comb. 6(2), 125–145 (2002). MR1955514 (2003k:05123)
Chung, F., Lu, L.: The average distance in a random graph with given expected degrees. Internet Math. 1(1), 91–113 (2003). MR2076728 (2005e:05122)
Chung, F., Lu, L.: Complex Graphs and Networks, CBMS Regional Conference Series in Mathematics, vol. 107. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI (2006). MR2248695 (2007i:05169)
Dorogovtsev, S.N., Mendes, J.F.F.: Evolution of networks. Adv. Phys. 51(4), 1079–1187 (2002)
Duquesne, T., Le Gall, J.-F.: Probabilistic and fractal aspects of Lévy trees. Probab. Theory Relat. Fields 131(4), 553–603 (2005)
Durrett, R.: Random Graph Dynamics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2007). MR2271734 (2008c:05167)
Dwass, M.: The total progeny in a branching process and a related random walk. J. Appl. Prob. 6, 682–686 (1969)
Evans, S.N.: Probability and Real Trees, Lecture Notes in Mathematics, vol. 1920. Springer, Berlin (2008). Lectures from the 35th Summer School on Probability Theory held in Saint-Flour, July 6–23, 2005. MR2351587 (2009d:60014)
Greven, A., Pfaffelhuber, P., Winter, A.: Convergence in distribution of random metric measure spaces (\(\Lambda \) -coalescent measure trees). Probab. Theory Relat. Fields 145(1–2), 285–322 (2009). MR2520129 (2011c:60008)
Gromov, M.: Metric Structures for Riemannian and Non-Riemannian Spaces, English, Modern Birkhäuser Classics. Birkhäuser Boston, Inc., Boston, MA (2007). Based on the 1981 French original, with appendices by M. Katz, P. Pansu and S. Semmes. Translated from the French by Sean Michael Bates. MR2307192 (2007k:53049)
Haas, B., Miermont, G.: The genealogy of self-similar fragmentations with negative index as a continuum random tree. Electron. J. Probab. 9(4), 57–97 (2004). (MR2041829 (2004m:60086))
Hofstad, R.v.d.: Critical behavior in inhomogeneous random graphs. Rand. Struct. Algorithms 42(4), 480–508 (2013)
Hofstad, R.v.d., Keane, M.: An elementary proof of the hitting time theorem. Am. Math. Mon. 115(8), 753–756 (2008). MR2456097
Hofstad, R.v.d., Nachmias, A.: Hypercube percolation. J. Eur. Math. Soc. 19, 725–814 (2017)
Hofstad, R.v.d.: Random Graphs and Complex Networks. Volume 1. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2017)
Hofstad, R.v.d., Kliem, S., Leeuwaarden, J.S.H.V.: Cluster tails for critical power-law inhomogeneous random graphs, arXiv preprint arXiv:1404.1727 (2014)
Janson, S.: Asymptotic equivalence and contiguity of some random graphs. Rand. Struct. Algorithms 36(1), 26–45 (2010). (MR2591045 (2011h:60021))
Janson, S., Luczak, T., Rucinski, A.: Random Graphs. Wiley, London (2011)
Joseph, A., et al.: The component sizes of a critical random graph with given degree sequence. Ann. Appl. Probab. 24(6), 2560–2594 (2014)
Klein, T., Rio, E.: Concentration around the mean for maxima of empirical processes. Ann. Probab. 33(3), 1060–1077 (2005). (MR2135312 (2006c:60022))
Kortchemski, I.: Sub-exponential tail bounds for conditioned stable Bienaymé-Galton-Watson trees. Probab. Theory Relat. Fields (2015) (to appear)
Le Gall, J.-F.: Random trees and applications. Probab. Surv. 2, 245–311 (2005). (MR2203728 (2007h:60078))
Łuczak, T.: Component behavior near the critical point of the random graph process. Rand. Struct. Algorithms 1(3), 287–310 (1990)
Łuczak, T., Pittel, B., Wierman, J.C.: The structure of a random graph at the point of the phase transition. Trans. Am. Math. Soc. 341(2), 721–748 (1994)
Massart, P.: The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Probab. 18(3), 1269–1283 (1990). MR1062069 (91i:60052)
Newman, M.: Networks: An Introduction. Oxford University Press, Oxford (2010)
Newman, M.E.: The structure and function of complex networks. SIAM Rev. 45(2), 167–256 (2003)
Norros, I., Reittu, H.: On a conditionally Poissonian graph process. Adv. Appl. Probab. 38(1), 59–75 (2006). (MR2213964)
Otter, R.: The multiplicative process. Ann. Math. Stat. 20, 206–224 (1949). MR0030716 (11,41a)
Pitman, J.: Random mappings, forests, and subsets associated with Abel-Cayley-Hurwitz multinomial expansions. Sém. Lothar. Combin. 46 (2001/02), Art. B46h, p. 45 (MR1877634 (2002m:60017))
Riordan, O., Warnke, L.: Explosive percolation is continuous. Science 333(6040), 322–324 (2011)
Schramm, O.: Scaling limits of loop-erased random walks and uniform spanning trees. Isr. J. Math. 118(1), 221–288 (2000)
Wu, Z., Braunstein, L.A., Havlin, S., Stanley, H.E.: Transport in weighted networks: partition into superhighways and roads. Phys. Rev. Lett. 96(14), 148702 (2006)
Acknowledgements
The authors are indebted to Grégory Miermont for many enlightening discussions about inhomogeneous continuum random trees. SS thanks ENS Lyon for hospitality and accommodation during visits. The authors thank Igor Kortchemski for drawing their attention to his recent preprint [50]. The authors also thank an anonymous referee who pointed out a number of issues in an earlier version which significantly improved the readability of the manuscript. SB has been partially supported by NSF-DMS grants 1105581, 1310002, 160683, 161307, SES grant 1357622 and ARO W911NF-17-1-0010. RvdH and SS have been supported in part by the Netherlands Organisation for Scientific Research (NWO) through the Gravitation Networks grant 024.002.003. In addition, RvdH has been supported by VICI grant 639.033.806 and SS has been supported by a CRM-ISM fellowship.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Bhamidi, S., van der Hofstad, R. & Sen, S. The multiplicative coalescent, inhomogeneous continuum random trees, and new universality classes for critical random graphs. Probab. Theory Relat. Fields 170, 387–474 (2018). https://doi.org/10.1007/s00440-017-0760-6
Keywords
- Multiplicative coalescent
- \(\mathbf {p}\)-trees
- Inhomogeneous continuum random trees
- Critical random graphs
- Gromov-Hausdorff distance
- Gromov-weak topology