Abstract
We consider an infinite spatial inhomogeneous random graph model with an integrable connection kernel that interpolates nicely between existing spatial random graph models. Key examples are versions of the weight-dependent random connection model, the infinite geometric inhomogeneous random graph, and the age-based random connection model. These infinite models arise as the local limit of the corresponding finite models. For these models we identify the asymptotics of the local clustering as a function of the degree of the root in different regimes in a unified way. We show that the scaling exhibits phase transitions as the interpolation parameter moves across different regimes. This allows us to draw conclusions on the geometry of a typical triangle contributing to the clustering in the different regimes.
1 Introduction
The last two decades have seen the rise of models for complex networks that use geometry [6, 9, 12, 14, 21]. The main idea is that geometry can encode similarity of nodes, and thus serves as a component for deciding whether an edge will be present. Random graph models with underlying geometry have proven to be very successful in reproducing several key features of real-world networks. As such, the geometry of complex networks has become an important research domain [1, 2, 27, 29, 32]. One key feature properly captured by geometric models is the presence of triangles, i.e., triples of mutually connected nodes, measured in terms of their clustering. In the literature, three different measures of clustering are often considered. The first is the so-called global clustering coefficient, defined as three times the ratio of the number of triangles to the number of paths of length two in the graph. The second is the local clustering coefficient, which for each node counts the fraction of pairs of its neighbours that are connected, averaged over all nodes. Finally, the clustering function, sometimes called the clustering spectrum, is the local clustering coefficient restricted to nodes of a given degree k. That is, it is a function that takes as input \(k \in \mathbb {N}\) and outputs the average, over vertices of degree k, of the proportion of the possible triangles at such a vertex that are present in the graph; see (1.5) below for a definition.
This paper studies the behaviour of the clustering function of a typical vertex, in a model of infinite spatial inhomogeneous random graphs. Spatial inhomogeneous random graphs can be seen as spatial versions of inhomogeneous random graphs, as studied in [3], where nodes have a position in a geometric space, see also [31, Remark 1.5] for a discussion. In a typical construction of these graphs, one considers vertices as atoms of some marked Poisson point process, where the edges are included conditionally independently given the vertices and their marks, according to a connection function \(\kappa \). These graphs also generalize infinite geometric inhomogeneous random graphs [4,5,6] and hyperbolic random graphs [22]. Graphs of similar flavour have been considered in [12, 13].
For an infinite graph, a functional similar to the clustering function can be considered under some conditions. In particular, if a sequence of finite random graphs converges locally to an infinite graph, then the associated sequence of clustering functions also converges to a limiting functional of the infinite graph; see [15, Exercise 2.32]. This limit is expressed as the clustering function of a typical vertex of this graph, see (1.6) below. We study how the clustering function behaves as the degree of the typical vertex diverges. Problems of similar flavour have been studied in [8, 30] for the hyperbolic random graph. In particular, a phase transition in the clustering function is established in [8, Proposition 1.4], which disproved the conjectured inverse linear scaling of the clustering function of these graphs [22]. For spatial versions of preferential attachment networks, convergence of the average and global clustering coefficients is shown in [17], while scalings of the clustering function for a different spatial preferential attachment model have been studied in [16]. For sparse network models from the literature, the clustering function typically vanishes polynomially as the degree of a typical vertex tends to infinity. For example, for the models considered in [23, 28], the authors observe an inverse linear decay, while other polynomial decay rates are observed, for example, in [7, 25, 29, 33]. The main intuition behind the prediction of an inverse linear decay is that most triangles are edge-disjoint, and there can be at most k/2 such edge-disjoint triangles incident to a vertex of degree k.
In this paper, we see that under suitable choices of the model parameters, even for sparse random graphs, the clustering function can decay logarithmically, or not decay at all; see for example Theorem 1.1 below. The latter behavior is to be understood as follows: there are rare vertices of arbitrarily large degree, and for these vertices a constant proportion of their neighbor pairs are also neighbors, i.e., the neighborhoods of such rare high-degree vertices are almost cliques. This is a phenomenon that has not yet been observed in the literature for sparse random graphs.
The rest of this section is organized as follows: In Sects. 1.1 and 1.2 we define our random graph models. In Sect. 1.3 we discuss the clustering function. We state our main result in Sect. 1.4, followed by a discussion on different examples covered under our setup in Sect. 1.5. Finally in Sect. 1.6 we discuss the proof strategy of our main result.
1.1 The Random Graph Model
The random graphs we study are spatial inhomogeneous random graphs, or SIRGs [31]. The vertex set is either \([n]=\{1,\dots ,n\}\) (finite model) or \(\mathbb {N} \cup \{0\}\) (infinite model). Each vertex i has a location \(X_i\) in \(\mathbb {R}^d\), where for the finite model the locations are i.i.d. uniform in the box
For the infinite model, all the locations \( (X_i)_{i \ge 1}\), except that of 0, form the collection of atoms of a rate 1 homogeneous Poisson point process \(\Gamma \) on \(\mathbb {R}^d\), and we let the vertex 0 have the location \(X_0:=\textbf{0} \in \mathbb {R}^d\). In particular, the vertex locations of \(\mathbb {G}^{\infty }\) form a Palm version \(\Gamma \cup \{\textbf{0}\}\) of the Poisson process \(\Gamma \). Moreover, each vertex i has a random positive real weight \(W_i\) associated to it, where \((W_i)_{i \ge 1}\) is a sequence of i.i.d. copies of a random variable W. Edges between pairs of vertices i and j, conditionally on \((X_i,W_i)\) and \((X_j,W_j)\), are placed independently with probability
$$\begin{aligned} \kappa (\Vert X_i-X_j\Vert ,W_i,W_j), \end{aligned}$$
where \(\kappa : \mathbb {R}_+ \times \mathbb {R} \times \mathbb {R} \rightarrow [0,1]\) is assumed to be symmetric in its last two arguments. We denote the finite model as \(\mathbb {G}^{(n)}(\kappa ,W)\), and the infinite model as \(\mathbb {G}^{\infty }(\kappa ,W)\). The infinite rooted graph \((\mathbb {G}^{\infty },0)\) arises as the local limit in probability of the finite model, see [31, Theorem 1.2].
For the rest of the paper, we fix the sequence of weights \(\mathbb {W}=(W_i)_{i\in S}\), where S is either [n] or \(\mathbb {N}\cup \{0\}\), corresponding respectively to the finite and infinite models, to be a sequence of i.i.d. copies of a \(\text {Pareto}(\beta -1,1)\) random variable. That is, each W has density function
$$\begin{aligned} f_W(w)=(\beta -1)w^{-\beta }\mathbbm {1}_{\left\{ w \ge 1\right\} }, \end{aligned}$$
(1.1)
with \(\beta >2\), i.e., the weights have finite expectation. For such power-law weights, with an appropriate choice of the connection function \(\kappa \), the degree distribution of a vertex also obeys a power law, so that these models naturally give rise to scale-free spatial random graphs.
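To make the weight distribution concrete, here is a minimal sampling sketch (our own illustration, not code from the paper): inverse-transform sampling of the \(\text {Pareto}(\beta -1,1)\) distribution, whose mean \((\beta -1)/(\beta -2)\) is finite precisely when \(\beta >2\).

```python
import random
import statistics

def sample_pareto_weights(beta, n, seed=1):
    """Draw n i.i.d. Pareto(beta-1, 1) weights: if U is uniform on (0, 1),
    then U**(-1/(beta-1)) has density (beta-1) * w**(-beta) on [1, infinity)."""
    rng = random.Random(seed)
    # use 1 - random() so the base lies in (0, 1], avoiding 0 ** (negative)
    return [(1.0 - rng.random()) ** (-1.0 / (beta - 1)) for _ in range(n)]

# For beta > 2 the mean is finite and equals (beta - 1) / (beta - 2).
beta = 4.0
weights = sample_pareto_weights(beta, 200_000)
print(statistics.fmean(weights))  # close to (beta - 1) / (beta - 2) = 1.5
```

The heavy tail is visible in the sample maximum, which grows polynomially in the sample size.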
1.2 The Interpolation Kernel
We now discuss the connection function \(\kappa \) we are concerned with. For a long-range parameter \(\alpha \in (0, \infty ]\), we consider
$$\begin{aligned} \kappa _{\alpha }(r,s,t):=\min \left\{ 1,\left( \frac{g(s,t)}{r^d}\right) ^{\alpha }\right\} , \end{aligned}$$
(1.2)
for some symmetric function \(g:\mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R}_+\). We refer to g as the weight function of \(\kappa \). For vertices i and j, this is to be interpreted as follows: when \(\alpha <\infty \), the edge \(\{i,j\}\) is present with probability 1 if \(\Vert X_i-X_j\Vert <g(W_i,W_j)^{1/d}\), otherwise it is present with probability \(\left( \frac{g(W_i,W_j)}{\Vert X_i-X_j\Vert ^d}\right) ^{\alpha }\); while when \(\alpha =\infty \), the edge \(\{i,j\}\) is present precisely when \(\Vert X_i-X_j\Vert <g(W_i,W_j)^{1/d}\).
Let us introduce the following notation, which we use throughout the paper: for any two reals \(r_1\) and \(r_2\),
$$\begin{aligned} r_1 \vee r_2:=\max \{r_1,r_2\}, \qquad r_1 \wedge r_2:=\min \{r_1,r_2\}. \end{aligned}$$
In this paper, we work with the weight function
$$\begin{aligned} g(s,t):=(s \vee t)(s \wedge t)^{a}, \end{aligned}$$
(1.3)
for a parameter \(a \in [0,\infty )\). Different values of the interpolation parameter a in (1.3) capture different spatial random graphs from the literature, see Sect. 1.5 for details.
1.3 The Clustering Function
Let \(G=(V(G),E(G))\) be a locally finite graph. For \(v \in V(G)\), we denote its degree in G by \(d_v(G)\) and further define
$$\begin{aligned} \Delta _v(G):=\#\left\{ \{u_1,u_2\}\subseteq V(G):\{v,u_1\},\{v,u_2\},\{u_1,u_2\}\in E(G)\right\} , \end{aligned}$$
so that \(\Delta _v(G)\) is the number of triangles in G containing v as a vertex. In addition, we let \(N_k(G)\) be the number of vertices in G with \(d_v(G)=k\).
Definition 1.1
(Clustering function) The clustering function \(\textrm{CC}_G:\mathbb {N}\rightarrow \mathbb {R}_+\) of G is defined as
$$\begin{aligned} \textrm{CC}_G(k):=\frac{1}{N_k(G)}\sum _{v \in V(G):\, d_v(G)=k}\frac{\Delta _v(G)}{\left( {\begin{array}{c}k\\ 2\end{array}}\right) }. \end{aligned}$$
(1.5)
Thus, the quantity \(\textrm{CC}_G(k)\) measures the proportion of two-paths in G that are triangles, where the middle vertex of the two-path has degree k.
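For a small finite graph, Definition 1.1 can be evaluated directly. The following sketch (our own, with a toy adjacency structure) computes \(\textrm{CC}_G(k)\) for every degree \(k \ge 2\) occurring in G:

```python
from collections import defaultdict
from itertools import combinations

def clustering_function(adj):
    """Clustering function CC_G of Definition 1.1 for a finite graph given
    as a dict vertex -> set of neighbours: for each degree k >= 2, average
    over the degree-k vertices v the fraction Delta_v / C(k, 2) of neighbour
    pairs of v that are themselves connected."""
    fractions = defaultdict(list)
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # CC_G(k) is only meaningful for k >= 2
        triangles = sum(1 for u, w in combinations(sorted(nbrs), 2) if w in adj[u])
        fractions[k].append(triangles / (k * (k - 1) / 2))
    return {k: sum(vals) / len(vals) for k, vals in sorted(fractions.items())}

# A triangle {0, 1, 2} with an extra pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering_function(adj))  # {2: 1.0, 3: 0.333...}
```

Here both degree-2 vertices lie in a triangle, so \(\textrm{CC}_G(2)=1\), while the degree-3 vertex participates in one of its three possible triangles, giving \(\textrm{CC}_G(3)=1/3\).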
It is not immediately clear how to define such a notion for an infinite graph G, since \(N_k(G)\) can be infinite. However, if G arises as the local limit of a growing sequence \((G_n)_{n \ge 1}\) of finite graphs, then the corresponding sequence of clustering functions has a limit, which can then be defined as the clustering function of G. Indeed, it was shown in [31, Corollary 2.10] that, for any \(k \in \mathbb {N}\),
$$\begin{aligned} \textrm{CC}_{\mathbb {G}^{(n)}}(k) \rightarrow \gamma (k) \quad \text {in probability}, \end{aligned}$$
where
$$\begin{aligned} \gamma (k):=\left( {\begin{array}{c}k\\ 2\end{array}}\right) ^{-1}\mathbb {E}\left[ \left. \Delta _0(\mathbb {G}^{\infty })\right| d_0(\mathbb {G}^{\infty })=k\right] . \end{aligned}$$
(1.6)
We call \(\gamma (k)\) the clustering function of \(\mathbb {G}^{\infty }\). In this paper we analyse the asymptotics of \(\gamma (k)\) for large k.
1.4 Main Result
Let us denote the Lebesgue measure of a ball of unit radius in \(\mathbb {R}^d\) by \(\omega _d\). For any \(\alpha \in (1,\infty ]\) we then define
where \(\xi _\infty := \lim _{\alpha \rightarrow \infty } \xi _\alpha = \omega _d (\beta -1)\). For any \(w_1, w_2 \in (1,\infty )\), we define the functions
Further, let \(\textrm{S}_k(a,\beta )\) be defined as
Next, for \(\underline{\textbf{w}}=(w_1,w_2)\in \mathbb {R}^2\), using the notation
$$\begin{aligned} f_W(\underline{\textbf{w}})\,d\underline{\textbf{w}}:=f_W(w_1)f_W(w_2)\,dw_1\,dw_2 \end{aligned}$$
(1.9)
for the product measure on \(\mathbb {R}^2\) with density \(f_W(w_1)f_W(w_2)\), let \(\Gamma (a,\alpha ,\beta ,d)\) be defined as
With these definitions we are ready to state our main result about the scaling of the clustering function:
Theorem 1.1
Let \(\alpha >1\), \(\beta >2\) and let W be a random variable with density (1.1). Furthermore, let \(\kappa \) be defined as in (1.2) with g as in (1.3). Consider the infinite SIRG \(\mathbb {G}^{\infty }(\kappa ,W)\) and associated clustering function \(\gamma (k)\) as defined in (1.6). Then, as \(k \rightarrow \infty \),
$$\begin{aligned} \gamma (k)=(1+o(1))\,\Gamma (a,\alpha ,\beta ,d)\,\textrm{S}_k(a,\beta ), \end{aligned}$$
where \(\textrm{S}_k(a,\beta )\) is as in (1.8), and \(\Gamma (a,\alpha ,\beta ,d)\) is as in (1.10).
The main message of Theorem 1.1 is that the clustering function of the infinite SIRG \(\mathbb {G}^\infty \) has three different scaling regimes, depending on the parameters a and \(\beta \), characterized by the function \(\textrm{S}_k(a,\beta )\).
In Fig. 1, we provide the phase diagram of the scaling function \(\textrm{S}_k(a,\beta )\) in terms of a and \(\beta \). The lines \(a=0\), \(a=1\) and \(\beta =a+2\) all correspond to known random graph models, see Sect. 1.5 for a discussion on the different examples of random graph models covered under our set up.
Remark 1.1
(Range of parameters) Let us discuss our choice of the parameters \(\beta >2\) and \(\alpha >1\). When \(\alpha \le 1\), the conditional degree of 0 given \(\{W_0=w\}\) is almost surely infinite for any \(w>0\) by Lemma 2.1, which implies that \(d_0(\mathbb {G}^{\infty })\) is almost surely infinite. Similarly, when \(\alpha >1\) and \(\beta \le 2\), \(d_0(\mathbb {G}^{\infty })\) is almost surely infinite, again by Lemma 2.1. In particular, our choice of the parameter ranges \(\beta >2\) and \(\alpha >1\) ensures that the random variable \(d_0(\mathbb {G}^{\infty })\) is almost surely finite, so that the quantity of interest \(\gamma (k)\) is well defined.
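The dichotomy in \(\alpha \) can already be seen from the integral of \(\kappa _{\alpha }\) over \(\mathbb {R}^d\): in \(d=1\), for a fixed kernel value \(g=g(s,t)\), a direct computation (ours, as an illustration) gives the value \(2g\alpha /(\alpha -1)\) when \(\alpha >1\), while for \(\alpha =1\) the integral diverges logarithmically. A small numerical sketch:

```python
def kappa_integral_1d(g, alpha, r_max, steps=200_000):
    """Trapezoidal approximation of the integral of kappa_alpha(|x|, s, t)
    over x in [-r_max, r_max] in dimension d = 1, for a fixed g = g(s, t)."""
    h = r_max / steps
    total = 0.0
    for i in range(steps):
        r0, r1 = i * h, (i + 1) * h
        f0 = 1.0 if r0 < g else (g / r0) ** alpha
        f1 = 1.0 if r1 < g else (g / r1) ** alpha
        total += (f0 + f1) * h / 2
    return 2.0 * total  # both half-lines contribute equally

# alpha = 2 > 1: the integral converges, to 2*g*alpha/(alpha - 1) = 4 for g = 1 ...
val_conv = kappa_integral_1d(1.0, 2.0, 1e4)
# ... while for alpha = 1 it diverges logarithmically in the cutoff r_max:
val_small, val_large = kappa_integral_1d(1.0, 1.0, 1e3), kappa_integral_1d(1.0, 1.0, 1e6)
print(val_conv)
print(val_small, val_large)
```

This finite or infinite integral is exactly the mean number of neighbours of a vertex at fixed kernel value, which is why \(\alpha \le 1\) forces an almost surely infinite degree.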
Remark 1.2
(Phase transitions) We note that there are three different phases for the scaling of the clustering function \(\gamma (k)\): \(\beta >(a+\frac{3}{2})\vee 2\), \(\beta \in (a+1,a+\frac{3}{2})\) and \(\beta \in (2,a+1)\), with two phase-transition boundaries at \(\beta =a+\frac{3}{2}\) and \(\beta =a+1\). The scalings of \(\gamma (k)\) at \(\beta =a+1\) and for \(\beta \in (2,a+1)\) are new; in particular, the constant scaling of \(\gamma (k)\) in the regime \(\beta \in (2,a+1)\) is surprising. The phase transition at \(\beta =a+\frac{3}{2}\) has already been observed in a particular case: namely \(a=1\) and \(d=1\), with \(\alpha =\infty \). In this case the corresponding infinite SIRG \(\mathbb {G}^{\infty }\) is a good approximation to the local limit of the threshold hyperbolic random graph model with appropriate parameters (see e.g. [31, Sect. 2.1.3]), using arguments as in [20, Sect. 9]. For these random graphs, the phase transition at the point (1, 5/2) of Fig. 1 was observed in [8, Proposition 1.4].
Remark 1.3
(Choice of Pareto weights) Our choice of the weight distribution (1.1) is purely for technical convenience. One can also consider regularly varying distributions with tail parameter \(\beta -1\), but Pareto weights make the arguments and constants cleaner. We expect Theorem 1.1 to go through for regularly varying weights, with similar though more technical proofs, except in the critical regimes \(\beta =a+\frac{3}{2}\) and \(\beta =a+1\). There we expect the particular choice of the regularly varying distribution to introduce additional slowly varying corrections, which change the scaling. The choice of Pareto weights also helps us draw a cleaner link between the SIRG models we consider and existing random graph models from the literature, as discussed in Sect. 1.5. All these random graph models have a mixed-Poisson degree distribution, and hence have the same power-law tail as the mixing random variable (see e.g. [31, Sect. 2.2]).
1.5 Examples
We now discuss the spatial random graph models from the literature that arise for different values of a in (1.3).
1.5.1 Product Kernels
When one takes \(a=1\) in (1.3), g(s, t) becomes the product st. This corresponds to the models having a product structure in their weights, most notably Geometric Inhomogeneous Random Graphs (GIRGs) [4,5,6], Weight Dependent Random Connection Models (WDRCMs) [13], and Hyperbolic Random Graphs (HRGs) [22]. Let us discuss these examples briefly.
The infinite GIRG (see [31, Sect. 2.1.2]) has as its vertex set a homogeneous Poisson process on \(\mathbb {R}^d\). Each vertex has a power-law weight associated to it, where the weights form an i.i.d. collection. Conditionally on everything, the edges are placed independently with a connection function of the form (1.2) with \(g(s,t)=st\). When restricted to \(d=1\), these graphs give a version of HRGs [22]; for a discussion of this, see [20, Sect. 9]. We remark that typically in GIRGs or HRGs the connection function does not exactly have the form (1.2), but it is easily seen to be bounded from above and below, up to constant factors, by (1.2). In particular, assuming the form (1.2) for the connection function does not affect the conclusions of Theorem 1.1, but avoids unnecessary technicalities, which is why we prefer (1.2).
The \(a=1\) case can also be seen as a special case of the weight-dependent random connection model [13]. Here the vertices again form a homogeneous Poisson point process on \(\mathbb {R}^d\), but each vertex now has a mark uniformly distributed on [0, 1] associated to it, as opposed to the power-law weights of GIRGs, and each edge between two vertices \((\textbf{x},s)\) and \((\textbf{y},t)\) is included conditionally independently with probability \(\rho (g(s,t)\Vert \textbf{x}-\textbf{y}\Vert ^d)\), where the function \(\rho (\cdot )\) is non-increasing and integrable. In particular, let the profile function \(\rho (g(s,t)\Vert \textbf{x}-\textbf{y}\Vert ^d)\) in [13, (1.2)] be equal to \(\kappa _{\alpha }(\Vert \textbf{x}-\textbf{y}\Vert ,s,t)\), with weight function g as in (1.3) with \(a=1\). This corresponds to taking \(\beta =1\) in the product kernel [13, (1.6)] of these models, and taking \(1/\gamma =\beta -1\), where now \(\beta \) is the tail parameter of the Pareto distribution in (1.1). This can be seen using the standard fact that if U is uniform on (0, 1), then \(U^{-\gamma }\) has the density (1.1) with \(1/\gamma =\beta -1\).
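The standard fact invoked above can be checked empirically. The following seeded simulation sketch (our own) verifies that, with \(1/\gamma =\beta -1\), the tail of \(U^{-\gamma }\) matches the \(\text {Pareto}(\beta -1,1)\) survival function \(w^{-(\beta -1)}\):

```python
import random

beta = 3.5
gamma = 1.0 / (beta - 1)   # so that U**(-gamma) should be Pareto(beta-1, 1)

rng = random.Random(42)
n = 500_000
# 1 - random() lies in (0, 1], avoiding a zero base raised to a negative power
samples = [(1.0 - rng.random()) ** (-gamma) for _ in range(n)]

for w in (1.5, 2.0, 4.0):
    empirical = sum(s > w for s in samples) / n
    exact = w ** (-(beta - 1))   # Pareto(beta-1, 1) survival function
    print(w, round(empirical, 4), round(exact, 4))
```

The exact identity behind the match is \(\mathbb {P}(U^{-\gamma }>w)=\mathbb {P}(U<w^{-1/\gamma })=w^{-1/\gamma }=w^{-(\beta -1)}\).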
1.5.2 The Max Kernel
When one takes \(a=0\) in (1.3), the function g(s, t) becomes \(s \vee t\). This also corresponds to a special case of the weight-dependent random connection model [13]. To realize this case, we take \(\beta =1\) for the min kernel \(g^{\textrm{min}}(s,t)=(s \wedge t)/\beta \) as in [13, p. 3], and let \(1/\gamma =\beta -1\) as before. This transforms the connection function \(\rho (g^{\textrm{min}}(s,t)\Vert \textbf{x}-\textbf{y}\Vert ^d)\) into the connection function \(\kappa _{\alpha }\) of (1.2), with weight function g as in (1.3) with \(a=0\) and with \(1/\gamma =\beta -1\).
1.5.3 Age-Dependent Random Connection Models
When one takes \(a=\beta -2\), the corresponding infinite model \(\mathbb {G}^{\infty }\) gives rise to the age-dependent random connection model, which arises as the local limit of the age-based spatial preferential attachment model studied in [12]. This model also has a homogeneous Poisson point process as its vertex set, with i.i.d. marks associated to the vertices that are uniformly distributed on [0, 1]. The connection function for this model (see [12, Sect. 3], e.g. the bottom of [12, p. 315]) has the form \(\varphi (\beta ^{-1}(s \vee t)^{1-\gamma }(s \wedge t)^{\gamma }\Vert \textbf{x}-\textbf{y}\Vert ^d)\), for parameters \(\beta \in (0, \infty )\) and \(\gamma \in (0,1)\), marks s, t that are i.i.d. uniform on (0, 1), and a non-increasing, integrable profile function \(\varphi \). Let \(\beta =1\), and consider the profile function \(\varphi (r)=\mathbbm {1}_{\left\{ r<1\right\} }+\left( \frac{1}{r}\right) ^{\alpha }\mathbbm {1}_{\left\{ r \ge 1\right\} }\). For these choices, we can write the above connection function as \(\kappa _{\alpha }(r,s',t')\), as in (1.2), with weight function \(g_a(s',t')=(s'\vee t')(s' \wedge t')^a\), where \(s'=s^{-\gamma }\), \(t'=t^{-\gamma }\) and \(a=1/\gamma -1\). As before, if U is uniform on (0, 1), then \(W=U^{-\gamma }\) is a Pareto random variable with scale parameter 1 and shape parameter \(1/\gamma \). Hence, letting \(1/\gamma =\beta -1\), which is consistent with \(\gamma \in (0,1)\) since \(\beta >2\), both \(s'\) and \(t'\) have the density (1.1), and the weight function \(g_a\) is (1.3) with \(a=\beta -2\). This completes the link between the infinite SIRGs we consider and the age-based spatial preferential attachment networks.
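The change of variables used above can be verified numerically. The following sketch (our own, with arbitrary illustrative values of s, t and \(\gamma \)) checks that the effective kernel value of the age-dependent model equals \(g_a(s',t')\) with \(a=1/\gamma -1\):

```python
s, t, gamma = 0.2, 0.7, 0.4   # illustrative values (our choice)

# With profile varphi(x) = min(1, x^{-alpha}) and beta = 1, the age-dependent
# connection probability is min(1, (G / r^d)^alpha), where the effective
# kernel value G is the reciprocal of (s ∨ t)^{1-gamma} (s ∧ t)^{gamma}:
G = (max(s, t) ** (1 - gamma) * min(s, t) ** gamma) ** -1

# After the change of variables s' = s^{-gamma}, t' = t^{-gamma}, the same
# value is g(s', t') = (s' ∨ t')(s' ∧ t')^a as in (1.3), with a = 1/gamma - 1:
sp, tp = s ** -gamma, t ** -gamma
a = 1.0 / gamma - 1.0
g_value = max(sp, tp) * min(sp, tp) ** a

print(G, g_value)  # the two kernel values agree
```

The agreement for arbitrary s, t and \(\gamma \) reflects the exact algebraic identity behind the parameter matching \(a=1/\gamma -1=\beta -2\).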
A similar kernel was also considered in [11], where the authors study percolation questions for one-dimensional models, and in the recent work [19], where the cluster-size decay of models with a similar interpolating kernel is studied.
1.6 Proof Strategy
In this section we discuss the main ideas of the proof of Theorem 1.1. First, let us recall some standard notation. Throughout the paper, for real sequences \((f_k)_{k \ge 1}\) and \((g_k)_{k \ge 1}\), we use the standard Landau notations \(f_k=O\left( g_k\right) \) and \(f_k=\Omega \left( g_k\right) \), as \(k \rightarrow \infty \), to mean respectively that there exist constants \(C,c>0\) such that
$$\begin{aligned} |f_k| \le C|g_k| \quad \text {and} \quad f_k \ge c\,g_k \quad \text {for all sufficiently large } k. \end{aligned}$$
We write \(f_k=\Theta (g_k)\) when both \(f_k=O\left( g_k\right) \) and \(f_k=\Omega \left( g_k\right) \) hold. Further, we write \(f_k=o\left( g_k\right) \) and \(f_k=\omega (g_k)\) to mean respectively that
$$\begin{aligned} \lim _{k \rightarrow \infty }\frac{f_k}{g_k}=0 \quad \text {and} \quad \lim _{k \rightarrow \infty }\frac{f_k}{g_k}=\infty . \end{aligned}$$
The first step is to reduce the expression of \(\gamma (k)\) using properties of the locations and weights of the vertices of the graph \(\mathbb {G}^{\infty }\). Let us for now denote the conditional density of the weight \(W_0\) of the root 0, given that its degree \(d_0\) equals k, by \(w\mapsto f_k(w)\). Using this conditional density, we obtain the following representation of \(\gamma (k)\):
$$\begin{aligned} \gamma (k)=\int _1^{\infty }\frac{T(w)}{\mathcal {M}(w)^2}\,f_k(w)\,dw, \end{aligned}$$
where the quantity \({T(w)}/{\mathcal {M}(w)^2}\) is a bounded function of w. Here T(w) can be understood as the expected number of triangles containing 0 given \(W_0=w\), and \(\mathcal {M}(w)\) is the expected degree of 0 given \(W_0=w\). This representation is obtained in Proposition 2.2. We then use concentration properties of Poisson random variables to show that asymptotically, as \(k \rightarrow \infty \), the density \(f_k(w)\) behaves like a Dirac density with a point mass at \(\mathcal {M}^{-1}(k)\). This is the content of Proposition 2.3. As a consequence,
$$\begin{aligned} \gamma (k)\sim \frac{T(\mathcal {M}^{-1}(k))}{k^2} \end{aligned}$$
as \(k \rightarrow \infty \), and thus it suffices to show that
$$\begin{aligned} \frac{T(\mathcal {M}^{-1}(k))}{k^2\,\textrm{S}_k(a,\beta )} \rightarrow \Gamma (a,\alpha ,\beta ,d). \end{aligned}$$
We then proceed in two phases. First in Sect. 3 we establish the exact scaling of \(\mathcal {M}^{-1}(k)\). More precisely, if we define
then our result implies that \(\sigma (\mathcal {M}^{-1}(k))/(k^2 \textrm{S}_k(a,\beta ))\) converges to a constant, see Corollary 3.4.
For the final part we write
$$\begin{aligned} \frac{T(\mathcal {M}^{-1}(k))}{k^2\,\textrm{S}_k(a,\beta )}=\frac{\sigma (\mathcal {M}^{-1}(k))}{k^2\,\textrm{S}_k(a,\beta )}\cdot \frac{T(\mathcal {M}^{-1}(k))}{\sigma (\mathcal {M}^{-1}(k))}. \end{aligned}$$
The first term converges by Corollary 3.4, so that all that is left is to study the limit \(T(w)/\sigma (w)\) as \(w \rightarrow \infty \) in more detail. To prepare for this, we establish general upper and lower bounds on T(w) and study their asymptotic behavior in Sect. 4. Finally, in Sect. 5, we establish the limits \(T(w)/\sigma (w)\) for the three different cases and wrap up the proof of Theorem 1.1.
1.7 More on the Phase Transitions
As Theorem 1.1 shows, the clustering function undergoes two phase transitions. While the exact scaling of the clustering function in the different regimes, as well as the exact location of the transition, depends on the choice of the function g in \(\kappa \), there is something general we can say about the driving force behind both phase transitions.
We start with the first transition. To discuss this, for \(w,{w}_1,{w}_2>0\) we introduce the notation
Recall from (1.11) that the scaling of \(\gamma (k)\) is determined by that of \(T(\mathcal {M}^{-1}(k))\). In Sect. 1.6 we mentioned that we will establish general upper and lower bounds on T(w).
In Lemma 4.1, we show that
while in Lemma 4.3, we obtain
We remark that when w is sufficiently large, up to constant factors, the first term of the upper bound, as well as the lower bound, are both truncated integrals of the function \(h(w,{w}_1,{w}_2)\). For the first term of the upper bound, this is immediate, while for the lower bound we use that, for w sufficiently large, the term
is of constant order. In particular, as we see in the proof of Theorem 1.1, the upper and lower bounds are of the same order for large w, which means that the scaling of T(w), for large w, is governed by truncated integrals of the function \(h(w,{w}_1,{w}_2)\). It is the phase transition in the integrability of this function that causes the first phase transition in the scaling of \(\gamma (k)\), at \(\beta =a+\frac{3}{2}\). In particular, when \(\beta >a+\frac{3}{2}\), the function \(h(w,{w}_1,{w}_2)\), multiplied by the densities (1.1) of \({w}_1\) and \({w}_2\), is integrable in \({w}_1\) and \({w}_2\), and the resulting function of w decays linearly in w, which is the main cause of the inverse linear scaling of \(\gamma (k)\) in this regime. When instead \(\beta \le a+\frac{3}{2}\), the function \(h(w,{w}_1,{w}_2)\) is no longer integrable, and its truncated integral diverges logarithmically when \(\beta =a+\frac{3}{2}\), and polynomially when \(\beta <a+\frac{3}{2}\).
Finally, we discuss the occurrence of the second phase transition, at \(\beta =a+1\). Here we understand the phase transition in terms of how the scaling of \(\mathcal {M}(w)\) changes. In particular, as \(w\rightarrow \infty \), one observes the following transition in the scaling behaviour of the mean degree \(\mathcal {M}(w)\) of 0 given \(W_0=w\), around \(\beta =a+1\):
$$\begin{aligned} \mathcal {M}(w)={\left\{ \begin{array}{ll} \Theta (w) &{} \text {if } \beta >a+1,\\ \Theta (w\log {w}) &{} \text {if } \beta =a+1,\\ \Theta (w^{a+2-\beta }) &{} \text {if } \beta <a+1, \end{array}\right. } \end{aligned}$$
see Fig. 2 and Sect. 3 for more details on the behaviour of \(\mathcal {M}(w)\).
In particular, whenever \(\beta <a+\frac{3}{2}\), \({T(\mathcal {M}^{-1}(k))}/{k^2}\) always scales like \({\mathcal {M}^{-1}(k)^{4+2a-2\beta }}/{k^2}\), as we prove in Sect. 3. Thus, the transition in the scaling of \(\mathcal {M}^{-1}(k)\) drives the transition in the scaling of \({T(\mathcal {M}^{-1}(k))}/{k^2}\) around \(\beta =a+1\).
1.7.1 Organization of the Paper
Section 2 gives preliminary simplifications to analyse \(\gamma (k)\). In Sect. 3, we analyse the scaling of \(\mathcal {M}^{-1}(k)\). In Sect. 4, we develop some bounding techniques which aid us in analysing upper and lower asymptotics of \(\gamma (k)\). Using results from Sects. 3 and 4, we prove Theorem 1.1 in Sect. 5. Section 6 is devoted to discussion. Appendix A contains some standard results which are used to derive a result of Sect. 2. Appendix B contains the proof of a technical result used in Sect. 3.
2 Preliminaries
This section is devoted to two simplifications of the clustering function \(\gamma (k)\) from (1.6). First, in Proposition 2.2 we reduce its expression to the integral form given in (2.5). Next, in Proposition 2.3 we prove a concentration result which enables us to argue that, asymptotically, this integral is effectively an integral with respect to a Dirac measure. This simplifies the analysis of its asymptotics.
2.1 Simplifying \(\gamma (k)\)
Let us denote
for the parameter space of the vertices: the first coordinate denotes the location and the second the weight of a vertex. We first prove that conditionally on the weight of a typical vertex, the neighbors of the typical vertex in any infinite SIRG form an inhomogeneous Poisson point process on \(\mathbb {U}\):
Lemma 2.1
(Neighbors of 0) Consider an infinite SIRG \(\mathbb {G}^{\infty }\), where the vertex locations are given by a Palm version \(\Gamma \cup \{\textbf{0}\}\) of a homogeneous Poisson process \(\Gamma \) on \(\mathbb {R}^d\), the weights are i.i.d. with density \(f_W\), and the connection function \(\kappa : \mathbb {R}_+\times \mathbb {R} \times \mathbb {R} \rightarrow [0,1]\) is symmetric in its last two arguments. Let us denote the weight of 0 by \(W_0\). Then, conditionally on \(\{W_0=w\}\), the locations and weights of the neighbors of 0 in the graph \(\mathbb {G}^{\infty }\) are distributed as the atoms of an inhomogeneous Poisson point process \(\mathcal {N}_0^{(w)}\) on \(\mathbb {U}\), with intensity function, for any \(\textbf{p}=(\textbf{x},x) \in \mathbb {U}\), given by
$$\begin{aligned} \textbf{p}=(\textbf{x},x) \mapsto \kappa (\Vert \textbf{x}\Vert ,w,x)f_W(x). \end{aligned}$$
The proof of Lemma 2.1 is a consequence of the standard theory of markings and thinnings of Poisson point processes. The necessary concepts and the proof are postponed to Appendix A.
Recall \(\gamma (k)\) from (1.6). Let us use Lemma 2.1 to simplify the expression of \(\gamma (k)\) when \(\mathbb {G}^{\infty }\) has independent weights distributed according to the density (1.1), and connection function (1.2) for some symmetric weight function g. Denoting in this case the integral of the above intensity function by \(\mathcal {M}(w)\), we have
$$\begin{aligned} \mathcal {M}(w)=\int _{\mathbb {U}}\kappa (\Vert \textbf{x}\Vert ,w,x)f_W(x)\,d\textbf{x}\,dx. \end{aligned}$$
(2.2)
Moreover, Lemma 2.1 implies that the number of neighbors of 0 conditionally on the event \(\{W_0=w\}\) has distribution
$$\begin{aligned} d_0(\mathbb {G}^{\infty })\,\big |\,\{W_0=w\} \sim \text {Poisson}(\mathcal {M}(w)). \end{aligned}$$
(2.3)
Using (2.3), a simple application of the law of total probability shows that the conditional density function \(f_k(w)\) of the weight \(W_0\) of 0, conditionally on the event \(\{d_0(\mathbb {G}^{\infty })=k\}\), has the form
$$\begin{aligned} f_k(w)=\frac{e^{-\mathcal {M}(w)}\mathcal {M}(w)^k f_W(w)}{\int _1^{\infty }e^{-\mathcal {M}(v)}\mathcal {M}(v)^k f_W(v)\,dv}. \end{aligned}$$
(2.4)
From now on, to simplify notation, we write \(d\underline{\textbf{x}}=d\textbf{x}_1d\textbf{x}_2\) for the Lebesgue measure on \((\mathbb {R}^d)^2\). Also recall the notation \(f_W(\underline{\textbf{w}})d\underline{\textbf{w}}\) from (1.9).
Proposition 2.2
(Simplifying \(\gamma (k)\)) Consider an infinite SIRG \(\mathbb {G}^{\infty }\) with vertex locations distributed as \(\Gamma \cup \{\textbf{0}\}\), where \(\Gamma \) is a homogeneous Poisson point process on \(\mathbb {R}^d\), with independent vertex weights distributed according to the density (1.1), and with connection function (1.2) for some symmetric weight function g. Assume that the integral on the RHS of (2.2) is finite. Then the clustering function \(\gamma (k)\) of \(\mathbb {G}^{\infty }\) has the form
$$\begin{aligned} \gamma (k)=\int _1^{\infty }\frac{T(w)}{\mathcal {M}(w)^2}\,f_k(w)\,dw, \end{aligned}$$
(2.5)
where \(f_k(w)\) is as in (2.4), and
$$\begin{aligned} T(w):=\int _{(\mathbb {R}^d)^2}\int _{[1,\infty )^2}\kappa (\Vert \textbf{x}_1\Vert ,w,w_1)\kappa (\Vert \textbf{x}_2\Vert ,w,w_2)\kappa (\Vert \textbf{x}_1-\textbf{x}_2\Vert ,w_1,w_2)f_W(\underline{\textbf{w}})\,d\underline{\textbf{w}}\,d\underline{\textbf{x}}. \end{aligned}$$
(2.6)
Proof
Recall the Poisson process \(\mathcal {N}^{(w)}_0\) from Lemma 2.1. Let \(\mathcal {N}_{0}^{(w,k)}\) be the conditional point process on \(\mathbb {U}\) defined as
Let \((U_{\{i,j\}})_{i,j=1}^k\) be a symmetric array of uniformly distributed random variables on [0, 1], independent of everything else.
Denoting the atoms of \(\mathcal {N}_0^{(w,k)}\) by \(\{\textbf{p}_i=(\textbf{x}_i,{x}_i)\in \mathbb {U}: 1\le i \le k\}\), we note that
where \(\sum _{\textbf{p}_i,\textbf{p}_j \in \mathcal {N}_0^{(w,k)}}^{\ne }\) denotes the sum over distinct pairs \(\textbf{p}_i,\textbf{p}_j \in \mathcal {N}_0^{(w,k)}\) and the expectation is taken both with respect to the point process and the uniform random variables. This implies that
Using the notation \(\textbf{p}_i=(\textbf{x}_i,x_i)\), we note that the collection of \(\mathbb {U}\)-valued random variables \(\{\textbf{p}_i\}_{\textbf{p}_i \in \mathcal {N}_0^{(w,k)}}\) forms an i.i.d. collection with density given by
$$\begin{aligned} \textbf{p}=(\textbf{x},x)\mapsto \frac{\kappa (\Vert \textbf{x}\Vert ,w,x)f_W(x)}{\mathcal {M}(w)}. \end{aligned}$$
In particular, the expectation of the sum in the integrand in (2.7) evaluates to
where \(\textbf{p}_1=(\textbf{x}_1,x_1),\textbf{p}_2=(\textbf{x}_2,x_2)\), and to obtain the first equality we have simply changed variables \((x_1,x_2)\mapsto (w_1,w_2)\). Since \(\gamma (k)=\left( {\begin{array}{c}k\\ 2\end{array}}\right) ^{-1}\mathbb {E}\left[ \left. \Delta _0(\mathbb {G}^{\infty })\right| d_0(\mathbb {G}^{\infty })=k\right] \) from (1.6), the result follows.
2.2 Concentration of the Weight of 0 When \(d_0(\mathbb {G}^{\infty })\) is Large
The main goal of this section is to argue that the conditional density \(w \mapsto f_k(w)\) of (2.4) is asymptotically like a Dirac mass at \(\mathcal {M}^{-1}(k)\). Informally, the argument goes as follows: Conditionally on the Poisson variable (2.3) being large, standard Poisson concentration arguments imply that its mean \(\mathcal {M}(w)\) also has to be large. In particular, if \(\mathcal {M}(w)\) is a nice monotone function of w, this means that w also has to be large. That is, the degree of 0 in \(\mathbb {G}^{\infty }\) cannot be large unless its weight is also large. In this section, we formalise this by arguing that when the degree of 0 is k, and \(k \rightarrow \infty \), the weight of 0 is highly concentrated, and diverges as \(\mathcal {M}^{-1}(k)\). As a consequence, the conditional density function \(f_k(w)\) appearing in the integral (2.5) behaves like a Dirac mass with an atom at \(\mathcal {M}^{-1}(k)\). Consequently, using (2.5), the \(k \rightarrow \infty \) asymptotic behaviour of \(\gamma (k)\) is the same as that of \({T(\mathcal {M}^{-1}(k))}/{k^2}\).
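The concentration heuristic can be visualised with a direct numerical computation of a (2.4)-type density. The sketch below is our own illustration under the simplifying assumption \(\mathcal {M}(w)=w\) (the linear regime discussed in Sect. 1.7): the unnormalised density \(e^{-\mathcal {M}(w)}\mathcal {M}(w)^k f_W(w)\) then peaks near \(\mathcal {M}^{-1}(k)=k\) with width of order \(\sqrt{k}\).

```python
import math

def f_k_profile(k, beta, grid):
    """Normalised values (on the given grid) of the conditional weight
    density f_k(w) proportional to e^{-M(w)} M(w)^k f_W(w), in the
    illustrative case M(w) = w and f_W(w) proportional to w^{-beta};
    computed on the log scale for numerical stability."""
    logs = [k * math.log(w) - w - beta * math.log(w) for w in grid]
    m = max(logs)
    vals = [math.exp(x - m) for x in logs]
    z = sum(vals)
    return [v / z for v in vals]

k, beta = 400, 2.5
grid = [1.0 + 0.1 * i for i in range(10_000)]      # w in [1, 1000]
dens = f_k_profile(k, beta, grid)
mode = grid[max(range(len(dens)), key=dens.__getitem__)]
mass = sum(p for w, p in zip(grid, dens) if abs(w - k) <= 5 * math.sqrt(k))
print(mode, mass)  # mode near k = 400; nearly all mass within k ± 5*sqrt(k)
```

Relative to the mode of size k, the \(\sqrt{k}\)-sized fluctuations become negligible as \(k \rightarrow \infty \), which is exactly the Dirac-mass behaviour formalised in Proposition 2.3.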
Recall the notations T(w) and \(\mathcal {M}(w)\) from (2.6) and (2.2). First, we need a regularity condition for diverging sequences, ensuring that they are robust under certain lower-order perturbations:
Definition 2.1
(Fluctuation-robust sequences) A sequence \((\psi (k))_{k \ge 1}\) diverging to infinity is called fluctuation-robust, if for any \(C>0\), as \(k \rightarrow \infty \),
Examples of fluctuation-robust sequences are positive powers of k, or a positive power of k times a logarithmic factor: \(k \log {k}\), \(k^{\epsilon } (\log {k})^{\delta }\) for any \(\epsilon >0\) and \(\delta \in \mathbb {R}\), etc. It is easy to verify that for any constant \(C>0\) and \(L \in \mathbb {N}\), all of \(C\psi (k)\), \(\psi (Ck)\) and \(\psi ^{(L)}(k)\) are fluctuation-robust if \(\psi (k)\) is so, where \(\psi ^{(L)}(\cdot )\) denotes the \(L\)-fold composition of \(\psi (\cdot )\). Note that the sequence \(\psi (k)=e^{k}\) is not fluctuation-robust. Observe also that if \(\psi (k)\) is fluctuation-robust, then any sequence \(\phi (k)\) with \({\psi (k)}/{\phi (k)}\rightarrow 1\) is also fluctuation-robust.
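As a numerical illustration (our own; we take a perturbation window of order \(\sqrt{k \log {k}}\), in line with Poisson fluctuations of a degree of mean k, which is an assumption on our part and not the paper's exact definition): powers of k and \(k\log {k}\) are insensitive to such perturbations, while for \(\psi (k)=e^k\) even the logarithm of the perturbed ratio diverges.

```python
import math

def window(k, C=2.0):
    """Perturbation of order sqrt(k * log k) (assumed fluctuation scale)."""
    return C * math.sqrt(k * math.log(k))

for k in (10 ** 4, 10 ** 6, 10 ** 8):
    s = window(k)
    ratio_sqrt = math.sqrt(k + s) / math.sqrt(k)                 # psi(k) = sqrt(k)
    ratio_klogk = ((k + s) * math.log(k + s)) / (k * math.log(k))
    log_ratio_exp = s                  # log(psi(k+s) / psi(k)) for psi(k) = e^k
    print(k, ratio_sqrt, ratio_klogk, log_ratio_exp)
```

The first two ratios tend to 1 as k grows, so these sequences are fluctuation-robust in the above sense, whereas the last quantity grows without bound, so \(e^k\) is not.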
Let us now state the main result of this section. Recall the connection function (1.2). For the following proposition, we do not need the specific form (1.3) of g: the result holds as soon as the weight function g satisfies certain assumptions:
Proposition 2.3
Let \(\mathbb {G}^{\infty }\) be an infinite SIRG with vertex locations distributed as \(\Gamma \cup \{\textbf{0}\}\) for a homogeneous Poisson point process \(\Gamma \) on \(\mathbb {R}^d\), independent vertex weights having density (1.1), and with the connection function (1.2) for some symmetric function g. Recall also \(\mathcal {M}(w)\) from (2.2). Assume the following:
1. For fixed \(s \in \mathbb {R}\), the function \(w \mapsto g(s,w)\) is monotone increasing in w.
2. \(\mathcal {M}(w)< \infty \) for all \(w >0\), with \(\mathcal {M}(w) \rightarrow \infty \) as \(w \rightarrow \infty \). Further, there exists \( K_0> 0\) and \(\zeta \in \mathbb {R}\) such that \(\mathcal {M}(w)\) is strictly increasing and differentiable in \((K_0,\infty )\), with \(\mathcal {M}'(w)\le w^{\zeta }\) for all \(w>K_0\).
3. There exists \(\xi \in \mathbb {R}\) such that \(\mathcal {M}^{-1}(k)\le k^{\xi }\), as \(k \rightarrow \infty \).
4. \(T(\mathcal {M}^{-1}(k))=\Omega (k^{-\eta })\) for some \(\eta >0\), as \(k \rightarrow \infty \).
Under the above-mentioned assumptions,

(a) if further \(T(\mathcal {M}^{-1}(k))=\Theta (\psi (k))\) for some fluctuation-robust sequence \((\psi (k))_{k \in \mathbb {N}}\), then as \(k \rightarrow \infty \),
$$\begin{aligned} \gamma (k)=\Theta (T(\mathcal {M}^{-1}(k))k^{-2}). \end{aligned}$$
(b) if further the sequence \((T(\mathcal {M}^{-1}(k)))_{k \in \mathbb {N}}\) is itself fluctuation-robust, then, as \(k \rightarrow \infty \),
$$\begin{aligned} \frac{\gamma (k)}{T(\mathcal {M}^{-1}(k))k^{-2}} \rightarrow 1. \end{aligned}$$
Thus, under the conditions of Proposition 2.3, asymptotically the density \(w \mapsto f_k(w)\) appearing in the integral (2.5) behaves like a Dirac mass at \(\mathcal {M}^{-1}(k)\), as \(k \rightarrow \infty \), provided the sequence \((T(\mathcal {M}^{-1}(k)))_{k \in \mathbb {N}}\) scales like a fluctuation-robust sequence.
Proposition 2.3 is used in the proof of Theorem 1.1 in the following way: Note that the sequence \(\textrm{S}_k(a,\beta )\) in the statement of Theorem 1.1 is fluctuation-robust, for given a and \(\beta \). Consequently, so is the sequence \(k^2\textrm{S}_k(a,\beta )\). Hence it is enough to show that \(\frac{T(\mathcal {M}^{-1}(k))}{k^2 \textrm{S}_k(a,\beta )} \rightarrow \Gamma (a,\alpha , \beta ,d)\), since this automatically implies that \(T(\mathcal {M}^{-1}(k))\) is fluctuation-robust, so that by Proposition 2.3 (b), \(T(\mathcal {M}^{-1}(k))/k^2\) and \(\gamma (k)\) have the same appropriately rescaled limit.
Before going into the proof, we first state some useful results. We begin with the standard Chernoff concentration bound for Poisson random variables, e.g. see [8, (2.12)], as
for any \(x > 0\). In particular, for any \(C>0\), and \(\lambda =\lambda _n \rightarrow \infty \),
We also recall the standard stochastic domination property of Poisson random variables: for \(\lambda _1>\lambda _2\) and any \(x>0\),
Next, we state a lemma which will be useful in the proof of Proposition 2.3:
Lemma 2.4
Under the assumptions of Proposition 2.3, there exists \(K_1>0\) such that for all \(k>K_1\),
We are now ready to prove Proposition 2.3.
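Before turning to the proof, the Poisson concentration input above can be illustrated numerically. The sketch below (an illustration only; the deviation window \(C\sqrt{\lambda \log \lambda }\) mirrors the choice of \(I_k^{\pm }\) in the proof) computes the exact probability that a Poisson variable leaves this window, by direct summation of the probability mass function:

```python
import math

def pois_pmf(j, lam):
    # Poisson pmf computed in log-space for numerical stability
    return math.exp(j * math.log(lam) - lam - math.lgamma(j + 1))

def tail_outside(lam, C=2.0):
    """P(|Poi(lam) - lam| >= C sqrt(lam log lam)), by direct summation."""
    dev = C * math.sqrt(lam * math.log(lam))
    lo, hi = lam - dev, lam + dev
    inside = sum(pois_pmf(j, lam)
                 for j in range(max(0, int(lo)), int(hi) + 1)
                 if lo <= j <= hi)
    return 1.0 - inside

for lam in (10.0, 100.0, 1000.0):
    print(lam, tail_outside(lam))
```

The tail probability decays rapidly as \(\lambda \) grows, consistent with the Chernoff bound recalled above.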
Proof of Proposition 2.3
Let \(I_k^{-}:=k-C\sqrt{k \log {k}}\), and define \(I_k^{+}\) analogously, where C is a sufficiently large constant to be chosen later.
Note that \(J_k^{-}:=\mathcal {M}^{-1}(I_k^{-})\) and \(J_k^{+}:=\mathcal {M}^{-1}(I_k^{+})\) are well defined for all large k, since \(\mathcal {M}(w)\) is continuous and strictly increasing on \((K_0,\infty )\).
Further, note that if \(w<J_k^{-}\), i.e., \(\mathcal {M}(w)<I_k^{-}\), then
as \(k \rightarrow \infty \), where to obtain the first inequality, we have used that \(I_k^{-}+C\sqrt{I_k^{-}\log {I_k^{-}}}<k\) for all large k, and to obtain the last inequality, we have used \(\mathcal {M}(w)<I_k^{-}\) along with (2.10). A similar argument shows that, for \(w>J_k^{+}\),
We conclude that if \(w \notin [J_k^-,J_k^+]\),
Hence, we can write
where
and
Here to obtain the first inequality, we have used the easily checked upper bound of 1 on \(T(w)/{\mathcal {M}(w)^2}\). Observe that using Assumption (4), Lemma 2.4, and (2.12), for all large k,
by choosing \(C>0\) sufficiently large.
Next we aim to bound \(\overline{\gamma }(k)\) from above and below. Recall that we assume that \(w \mapsto g(s,w)\) is monotone increasing in w. As a consequence, for any \(\textbf{x} \in \mathbb {R}^d\) and \(s \in \mathbb {R}_+\), \(w \mapsto \kappa _{\alpha }(\Vert \textbf{x}\Vert ,s,w)\) is also monotone increasing in w. In particular, this implies that \(w \mapsto T(w)\) is monotone increasing in w. Also, \(w \mapsto \mathcal {M}(w)\) is strictly increasing in w whenever w is sufficiently large, by assumption. Using the monotonicity of T(w) and \(\mathcal {M}(w)\), and recalling \(\overline{\gamma }(k)\) from (2.14), we can get an upper bound on it by simply bounding the integrand by its supremum,
where
as \(k \rightarrow \infty \), by choosing \(C>0\) large, where we have used (2.12) and Lemma 2.4 to obtain (2.16).
Similarly, we can also obtain a lower bound on \(\overline{\gamma }(k)\), by bounding it from below by the infimum value of the integrand. In particular, we obtain the following chain of inequalities,
Now let us prove (a). Since \(T(\mathcal {M}^{-1}(k))=\Theta (\psi (k))\), there are constants \(C_1,c_1>0\) such that whenever k is sufficiently large,
Hence,
which implies, as \(\psi (k)\) is fluctuation-robust and consequently \({\psi (I_k^-)}/{\psi (k)}\rightarrow 1\), that \({T(\mathcal {M}^{-1}(I_k^-))}/{T(\mathcal {M}^{-1}(k))} = O\left( 1\right) \), as \(k \rightarrow \infty \). A similar argument shows that \({T(\mathcal {M}^{-1}(I_k^-))}/{T(\mathcal {M}^{-1}(k))} = \Omega (1)\), as \(k \rightarrow \infty \). Hence, as \(k \rightarrow \infty \), \({T(\mathcal {M}^{-1}(I_k^-))}/{T(\mathcal {M}^{-1}(k))} = \Theta (1)\), and similarly, \({T(\mathcal {M}^{-1}(I_k^+))}/{T(\mathcal {M}^{-1}(k))} = \Theta (1)\). Further, note from the definitions of \(I_k^+\) and \(I_k^-\) that both \({k}/{I_k^+}, {k}/{I_k^-} \rightarrow 1\) as \(k \rightarrow \infty \). Also, (2.16) shows that \(\delta _k \rightarrow 0\). Hence, both the left-most and the right-most expressions of (2.17) are \(\Theta (1)\), as \(k \rightarrow \infty \). Hence, by (2.17),
Finally, recall (2.13). Since (2.15) shows that \(\varepsilon _k \rightarrow 0\), the quantities \({\overline{\gamma }(k)}/{(T(\mathcal {M}^{-1}(k))k^{-2})}\) and \({\gamma (k)}/{(T(\mathcal {M}^{-1}(k))k^{-2})}\) are of the same order, which concludes the proof of (a), using (2.18).
Now, we move on to proving (b). The philosophy is quite similar to that of the proof of (a). In this case, recall that \(T(\mathcal {M}^{-1}(k))\) is itself fluctuation-robust. This implies that \({T(\mathcal {M}^{-1}(I_k^-))}/{T(\mathcal {M}^{-1}(k))}\rightarrow 1\) and \( {T(\mathcal {M}^{-1}(I_k^+))}/{T(\mathcal {M}^{-1}(k))} \rightarrow 1\). Thus, arguing similarly as before, both the left-most and the right-most expressions of (2.17) converge to 1, as \(k \rightarrow \infty \), so that also \({\overline{\gamma }(k)}/{(T(\mathcal {M}^{-1}(k))k^{-2})} \rightarrow 1\) as \(k \rightarrow \infty \). Recalling (2.13) and (2.15), we obtain that \({\gamma (k)}/{(T(\mathcal {M}^{-1}(k))k^{-2})} \rightarrow 1\), as \(k \rightarrow \infty \), which concludes the proof of (b).
Finally, let us give the proof of Lemma 2.4:
Proof of Lemma 2.4
Let \(K_1>0\) be sufficiently large so that \(\mathcal {M}(w)\) is strictly increasing and differentiable on \((K_1,\infty )\) with \(\mathcal {M}'(w)\le w^{\zeta }\) for all \(w \in (K_1,\infty )\), and furthermore \(\mathcal {M}^{-1}(k)\le 2k^{\xi }\) for all \(k>K_1\). Note that the existence of such a \(K_1\) is guaranteed by Assumptions (2.) and (3.) of Proposition 2.3. Restricting the domain of integration to \((K_1,\infty )\) and then changing variables as \(t= \mathcal {M}(w)\), we get a lower bound on \(\int _1^{\infty }\mathbb {P}\left( \text {Poi}(\mathcal {M}(w))=k\right) w^{-\beta }dw\) as,
where \(\Gamma (t,x)=\int _x^{\infty }e^{-r}r^{t-1}dr\) is the upper incomplete gamma function, and \(\Gamma (t)=\Gamma (t,0)\) is the gamma function. Next we recall standard scaling relations between gamma functions: for any fixed \(x>0\) and \(a \in \mathbb {R}\), as \(t \rightarrow \infty \),
Taking \(t=k+1\) and \(a=-\xi \beta -\xi \zeta -2\) finishes the proof of the lemma.
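One ingredient of the scaling relation used above is \(\Gamma (t+a)/\Gamma (t)\sim t^{a}\) as \(t\rightarrow \infty \). The following sketch (illustrative only; the incomplete-gamma part \(\Gamma (t,x)/\Gamma (t)\rightarrow 1\) for fixed x is not checked here, and the value \(a=-2.5\) is an arbitrary choice) verifies this numerically in log-space:

```python
import math

def gamma_ratio_scaled(t, a):
    """Gamma(t + a) / (Gamma(t) * t^a), in log-space; tends to 1 as t grows."""
    return math.exp(math.lgamma(t + a) - math.lgamma(t) - a * math.log(t))

for t in (10.0, 1e3, 1e6):
    print(t, gamma_ratio_scaled(t, a=-2.5))
```

The relative error is of order 1/t, so the printed values approach 1 as t increases.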
3 Scaling of \(\mathcal {M}^{-1}(k)\)
In the previous section, we reduced the analysis of the clustering function \(\gamma (k)\) to that of \(T(\mathcal {M}^{-1}(k))/k^2\). The first step is to study \(\mathcal {M}^{-1}(k)\), which is the content of this section. Recall the definition of \(\mathcal {M}(w)\) from (2.2). Then by (1.1),
Next, recalling \(\xi _\alpha = \xi _{\alpha ,\beta ,d}:= \frac{\alpha \omega _d (\beta - 1)}{\alpha - 1}\) from (1.7), with \(\xi _\infty = \lim _{\alpha \rightarrow \infty } \xi _\alpha \), and from (1.3) that \(g(w_1,w_2) =(w_1 \vee w_2)(w_1 \wedge w_2)^a\), we conclude that
Direct computations of this integral yield the following result about the asymptotic behavior of \(\mathcal {M}(w)\):
Lemma 3.1
Let \(\mathcal {M}(w)\) be defined as in (2.2) and define
Then,
Remark 3.1
(Infinite mean degree) Observe that if \(\beta \in (2,\frac{a+3}{2})\) then \(2 + a - 2\beta > -1\). Hence, since in this regime \(\mathcal {M}(w) = \Theta \left( w^{2+a-\beta }\right) \), it follows that \(\mathcal {M}(w)\) is no longer integrable with respect to the density \(f_W(w) = \Theta \left( w^{-\beta }\right) \). This implies that \(\mathbb {E}\left[ d_0(\mathbb {G}^\infty )\right] = \infty \) in this regime.
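The growth regimes of \(\mathcal {M}(w)\) can be probed numerically. The sketch below (an illustration; the constant prefactor \(\xi _{\alpha }\) is dropped and the parameter values are our choices) evaluates the one-dimensional integral \(\int _1^{\infty }(w\vee w_1)(w\wedge w_1)^a w_1^{-\beta }dw_1\) in closed form and checks that it grows linearly when \(\beta >a+1\) and like \(w^{2+a-\beta }\) when \(2<\beta <a+1\):

```python
def M_integral(w, a, beta):
    """integral_1^infty (w v w1)(w ^ w1)^a w1^(-beta) dw1 in closed form,
    valid for beta > 2 and a - beta + 1 != 0 (constant prefactor dropped)."""
    first = w * (w ** (a - beta + 1.0) - 1.0) / (a - beta + 1.0)  # w1 in (1, w)
    second = w ** a * w ** (2.0 - beta) / (beta - 2.0)            # w1 in (w, inf)
    return first + second

# beta > a + 1: linear growth, ratio stabilises at a constant
print([M_integral(w, 1.0, 3.5) / w for w in (1e2, 1e4, 1e6)])
# 2 < beta < a + 1: growth w^{2+a-beta} (= w^1.5 for a = 2, beta = 2.5)
print([M_integral(w, 2.0, 2.5) / w ** 1.5 for w in (1e2, 1e4, 1e6)])
```

Both printed sequences converge to finite positive constants, matching the two polynomial regimes discussed above.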
The above result shows that, asymptotically, \(\mathcal {M}(w)\) is some polynomial combination of w and \(\log (w)\). The next step is then to use the asymptotic behavior of \(\mathcal {M}(w)\) to derive the behavior of \(\mathcal {M}^{-1}(k)\). Let us define a class of functions having the same form as \(\mathcal {M}(w)\). To begin with, let us call a real function eventually strictly increasing if it is strictly increasing on an interval of the form \((K_0,\infty )\) for some \(K_0 \in \mathbb {R}\). Then we define:
Definition 3.1
(Almost power functions) A function \(f: \mathbb {R}_+ \rightarrow \mathbb {R}\) is called an almost power function, if it is an eventually strictly increasing continuous function, such that
for some \(a > 0\) and \(b\ge 0\).
We use a lemma that relates the scaling of almost power functions to the scaling of their inverses:
Lemma 3.2
Let \(f: \mathbb {R}_+ \rightarrow \mathbb {R}\) be an almost power function of the form (3.2). Then, for all large y, \(f^{-1}(y)\) is well defined, and as \(y \rightarrow \infty \),
Lemma 3.2 is proved in Appendix B. For now we observe that, by (3.1) and Lemma 3.1, the function \(w \mapsto \mathcal {M}(w)\) satisfies the conditions of the lemma. Hence, we obtain the scaling behavior of the inverse \(\mathcal {M}^{-1}(k)\). Figure 2 shows the two scaling regimes of \(\mathcal {M}^{-1}(k)\) in the \((\beta ,a)\) space.
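As an illustration of Lemma 3.2 in a representative case (the exact form (3.2) is not reproduced here; we take \(f(x)=c\,x^{A}(\log x)^{B}\), for which one expects \(f^{-1}(y)\sim (y/c)^{1/A}(A/\log y)^{B/A}\)), the following sketch inverts f by bisection and compares with the predicted asymptote:

```python
import math

A, B, Cc = 1.5, 1.0, 2.0   # exponents and constant of f(x) = Cc x^A (log x)^B

def f(x):
    return Cc * x ** A * math.log(x) ** B

def f_inverse(y, lo=1.5, hi=1e30):
    # numerical inverse by bisection in log-scale (f is strictly increasing)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def predicted(y):
    # conjectured asymptote: (y/Cc)^{1/A} (A / log y)^{B/A}
    return (y / Cc) ** (1.0 / A) * (A / math.log(y)) ** (B / A)

for y in (1e6, 1e12, 1e24):
    print(y, f_inverse(y) / predicted(y))
```

The ratio approaches 1 only slowly, with relative error of order \(\log \log y/\log y\), which is consistent with the logarithmic corrections in the lemma.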
Proposition 3.3
Let \(\mathcal {M}(w)\) be defined as in (2.2) and define
Then,
Next define
Then, combining Proposition 3.3 with the definition of \(\textrm{S}_k(a,\beta )\) yields the following corollary, which is the first step to proving Theorem 1.1:
Corollary 3.4
As \(k \rightarrow \infty \),
4 Scaling of T(w)
Having established the asymptotic behavior of \(\mathcal {M}^{-1}(k)\), in this section we study the scaling of the integral T(w) as \(w \rightarrow \infty \). In particular, we will show that \(T(w) = \Theta \left( \sigma (w)\right) \), where \(\sigma (w)\) is defined in (3.3). For this, we first obtain upper and lower bounds on T(w) that are sharp, up to constants, for sufficiently large w. It is important to note that the bounds we obtain in Lemmas 4.1 and 4.3 are valid for any symmetric function g and can thus be used, together with Proposition 2.3, to study the scaling of the clustering function for different types of g’s.
To study the asymptotics of these bounds we will use the specific form of g as in (1.3) and show that for this function \(T(w) = O\left( \sigma (w)\right) \) and \(T(w) = \Omega (\sigma (w))\). We start with the upper bound, as the lower bound requires a bit more computation.
To aid in our analysis we define the spatial integral
so that
4.1 The Upper Bound
We obtain an upper bound on T(w) by splitting the integral depending on whether \(w_1 \wedge w_2\) is smaller or larger than w. For each regime we then use a different upper bound for the integrand \(\mathcal {S}_w(w_1,w_2)\). Then we study each term separately:
Lemma 4.1
Let T(w) be defined as in (2.6) with \(\kappa _{\alpha }\) from (1.2) for some symmetric weight function g. Then,
where \(h(w,w_1, w_2)\) is defined in (1.12).
Proof
Note that for an upper bound on T(w), it is sufficient to focus on the case \(\alpha \in (1,\infty )\), since for any \(\alpha \in (1,\infty )\), from (1.2) one has \(\kappa _{\alpha }(\cdot )\ge \kappa _{\infty }(\cdot )\) pointwise. Hence for the rest of the proof we assume \(\alpha \in (1,\infty )\).
Next we begin by providing two upper bounds on \(\mathcal {S}_w(w_1,w_2)\). Bounding \(\kappa _{\alpha }(\Vert \textbf{x}\Vert ,w,w_1)\) from above by 1, we get that
Similarly,
Thus using the non-negativity of g,
Also note that by bounding \(\kappa _{\alpha }(\Vert \textbf{x}-\textbf{y}\Vert ,w_1,w_2) \le 1\),
Finally, we split the integral
Using the bounds (4.5) and (4.6), respectively, for \(\mathcal {S}_w(w_1,w_2)\) in the first and second terms of the right hand side (RHS) of (4.7) gives the result.
Remark 4.1
(Bounding \(\mathcal {S}_w(w_1,w_2)\) differently) Note that depending on whether \(w_1 \wedge w_2\) is larger or smaller than w, we bound \(\mathcal {S}_w(w_1,w_2)\) differently. As already discussed in Sect. 1.6, when \(w_1\) and \(w_2\) have density (1.1), the function \(h(w,w_1,w_2)\) is not integrable in \((w_1,w_2)\) if \(\beta \le a+\frac{3}{2}\). Consequently, using the bound (4.5) for \(w_1 \wedge w_2 > w\), for \(\beta \le a+\frac{3}{2}\), would only give us a trivial upper bound of \(\infty \) on T(w), which is not useful. Keeping this in mind, we were required to bound \(\mathcal {S}_w(w_1,w_2)\) differently. When \(\beta >a+\frac{3}{2}\), \(h(w,w_1,w_2)\) is integrable when \(w_1\) and \(w_2\) have density (1.1), and indeed for this case this distinction does not make a difference: as we will see, when \(\beta >a+\frac{3}{2}\), the term (4.3) is of negligible order compared to the term (4.2).
With this bound in hand, we can now use the definition of our interpolation function g to study the scaling of T(w).
Lemma 4.2
Let T(w) be defined as in (2.6) with \(\kappa \) and g as in (1.2) and (1.3), respectively. Then, as \(w \rightarrow \infty \),
Proof
We show that the first term (4.2) from Lemma 4.1 has the exact same scaling as in the statement of the lemma. For the second term (4.3) we then use the definition of g from (1.3) to get, as \(w \rightarrow \infty \),
Noting that \(w^{4+2a-2\beta }\) is bounded by each term on the RHS of (4.8), in its respective case, then finishes the proof.
For (4.2), we again use the definition of g from (1.3) to get
where to obtain the second-to-last inequality we have used the symmetry in \(w_1\) and \(w_2\). Now, the intended result follows by noting that as \(w \rightarrow \infty \),
4.2 The Lower Bound
Next, we prove a lower bound on T(w). For this we rely on geometric techniques. Recall that \(\omega _d\) denotes the Lebesgue measure of a ball of unit radius in \(\mathbb {R}^d\).
Lemma 4.3
Consider the connection function (1.2), with some symmetric weight function g. Recall \(h(w,w_1,w_2)\) from (1.12). Then,
Proof
Note that for a lower bound on T(w), we need only consider the case \(\alpha = \infty \), since for any \(\alpha \in (0,\infty )\), \(\kappa _\alpha \ge \kappa _{\infty }\) pointwise. Thus, we assume \(\alpha =\infty \) for the rest of the proof. For \(\textbf{x} \in \mathbb {R}^d\) and \(r>0\), we also use the notation
to denote the open ball of radius r around \(\textbf{x}\) in \(\mathbb {R}^d\).
For \(\alpha =\infty \),
where g is the weight function (1.3). A simple change of variable \(\textbf{z}=\textbf{x}-\textbf{y}\) yields
Note that if \(\Vert \textbf{x}\Vert >g(w,w_2)^{1/d}+g(w_1,w_2)^{1/d}\), then the set \(B(\textbf{x},g(w,w_2)^{1/d})\) is disjoint from the open ball of radius \(g(w_1,w_2)^{1/d}\) around the origin \(\textbf{0} \in \mathbb {R}^d\), see Fig. 3a.
This means in particular that if \(\Vert \textbf{z}\Vert <g(w_1,w_2)^{1/d}\), then \(\textbf{z} \notin B(\textbf{x},g(w,w_2)^{1/d})\). Consequently in this case,
By this observation,
Next, we obtain a generic lower bound on the left hand side (LHS) of (4.10), in the complementary range, that is, for \(\Vert \textbf{x}\Vert <g(w,w_2)^{1/d}+g(w_1,w_2)^{1/d}\).
First observe that the integral (4.10) is the volume of intersection of the two balls \(B(\textbf{x},g(w,w_2)^{1/d})\) and \(B(\textbf{0},g(w_1,w_2)^{1/d})\). A lower bound on this volume is the volume of any ball that is completely contained in this intersection. It is not too hard to figure out what the maximum diameter \(\ell \) of such a ball, completely contained in this intersection, can be (see Fig. 3b): it is
In particular, when \(g(w,w_2)>g(w_1,w_2)\), since it can never hold that \(B(\textbf{0},g(w_1,w_2)^{1/d}) \supset B(\textbf{x},g(w,w_2)^{1/d})\),
Further bounding from below by multiplying by the indicator \(\mathbbm {1}_{\left\{ \Vert \textbf{x}\Vert <g(w,w_2)^{1/d}-g(w_1,w_2)^{1/d}\right\} }\),
At this point we introduce some short-hand notations to keep things neat. Let us denote
Using the lower bound (4.11) and using the notation (4.12) (recall the notation \(f_{W}(\textbf{w})d\textbf{w}\) from (1.9)), we have
Finally, noting that \(g(w,w_1)^{1/d}> g(w,w_1)^{1/d}-g(w_1,w_2)^{1/d}\), we have from (4.13),
which is (4.9) recalling (1.1).
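The inscribed-ball observation used in the proof above can be sanity-checked numerically. The sketch below (an illustration in d=2; the diameter \(\ell =r_1+r_2-\Vert \textbf{x}\Vert \) is the formula suggested by Fig. 3b, in the regime where neither ball contains the other) verifies that the ball of diameter \(\ell \) centred on the segment joining the two centres lies in the intersection:

```python
import math
import random

random.seed(0)

def contains(center, radius, p):
    return math.dist(center, p) <= radius + 1e-12

# Two planar balls B(x, r1) and B(0, r2) with |r1 - r2| < ||x|| < r1 + r2:
# their intersection contains a ball of diameter ell = r1 + r2 - ||x||,
# centred on the segment joining the two centres.
for _ in range(1000):
    r1 = random.uniform(0.5, 2.0)
    r2 = random.uniform(0.5, 2.0)
    D = random.uniform(abs(r1 - r2) + 1e-9, r1 + r2 - 1e-9)
    ell = r1 + r2 - D
    x = (D, 0.0)                      # centre of the first ball
    m = ((D - r1 + r2) / 2.0, 0.0)    # centre of the inscribed ball
    for _ in range(20):               # sample points of B(m, ell / 2)
        ang = random.uniform(0.0, 2.0 * math.pi)
        rad = (ell / 2.0) * math.sqrt(random.random())
        p = (m[0] + rad * math.cos(ang), m[1] + rad * math.sin(ang))
        assert contains(x, r1, p) and contains((0.0, 0.0), r2, p)
print("inscribed-ball check passed")
```

The containment is exact: the inscribed ball touches both boundary spheres, since the distances from its centre to the two ball centres plus \(\ell /2\) equal \(r_2\) and \(r_1\), respectively.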
Lemma 4.4
Let T(w) be defined as in (2.6) with \(\kappa \) and g as in (1.2) and (1.3), respectively. Then, as \(w \rightarrow \infty \),
Proof
Using the lower bound from Lemma 4.3, it suffices to show that as \(w \rightarrow \infty \),
Observe that if both \(w_1,w_2 < {\mathcal {M}^{-1}(k)}/{2}\), then using the definition of g from (1.3), one has \(\mathbbm {1}_{\left\{ g(w_1,w_2)<g(\mathcal {M}^{-1}(k),w_2)\right\} }=1\). Furthermore,
Consequently, we can bound the LHS of (4.14) from below by restricting the integration domain from \((1,\infty )\times (1,\infty )\) to \(\left( 1,{\mathcal {M}^{-1}(k)}/{2}\right) \times \left( 1,{\mathcal {M}^{-1}(k)}/{2}\right) \), to get the following lower bound on it:
Hence to conclude Lemma 4.4, it is enough to check that the double integral in (4.15) is at least the RHS of (4.14), which we proceed to do. Note that
where \(K={1}/{(\beta -2)}>0\), and in the second-to-last step we have used the symmetry in \(w_1\) and \(w_2\).
To conclude Lemma 4.4, it is enough to check that (4.16) is at least the RHS of (4.14). To keep notation clear, let us write R for the constant \(K\underline{K}\). We do a case-by-case analysis:
Case 1: \(\beta \ge (a+\frac{3}{2})\vee 2\). In this case, observe that the first term of (4.16) equals
while the subtracted second term of (4.16) equals
In particular, since \(\beta >a+\frac{3}{2}\), the first term of (4.16) is the dominating term, and is indeed at least the RHS of (4.14).
Case 2: \(2<\beta <a+\frac{3}{2}\). In this case, we note that the first term of (4.16) equals
Simplifying the subtracted second term in (4.16) gives
since \(\beta<a+3/2<1+2a\), i.e., \(1+2a-\beta >0\), as \(a>1/2\) (note that we need \(a>1/2\) so that \(a+3/2>2\)). Using (4.17) and (4.18), we note that the term (4.16) is at least
Observing that \(\left[ {(3+2a-2\beta )^{-1}}-{(1+2a-\beta )^{-1}} \right] >0\) since \(\beta >2\), and \(4+2a-2\beta >1\) since \(\beta <a+\frac{3}{2}\), the last expression is indeed \(\Omega (\mathcal {M}^{-1}(k)^{4+2a-2\beta })\), which completes the proof in this case.
Note that combining Lemmas 4.2 and 4.4 we get the scaling of T(w), as \(w \rightarrow \infty \):
At this point we observe that the assumptions of Proposition 2.3 are indeed satisfied. Indeed, Assumption (1.) is immediate from the choice of g in (1.3). It is clear from the expression (3.1) that Assumption (2.) is satisfied. Assumption (3.) follows from Proposition 3.3. Finally, Assumption (4.) follows from (4.19). We will not mention this further, and apply Proposition 2.3 wherever necessary in the next section, where we prove Theorem 1.1.
5 Proof of Theorem 1.1
In this section we prove Theorem 1.1. Recall the definition of T(w) from (2.6), and note that using the notation \(\mathcal {S}_w(w_1,w_2)\) from (4.1), we have
Observe that the sequence \((\textrm{S}_k(a,\beta ))_{k \ge 1}\) is fluctuation-robust, and thus so is \((k^2 \textrm{S}_k(a,\beta ))_{k \ge 1}\). Hence by Proposition 2.3 it suffices to show that
Recall the definition of \(\sigma \) from (3.3). Following the strategy we laid out, we write
The first factor converges by Corollary 3.4. Comparing the constants in Corollary 3.4 to \(\Gamma (a,\alpha ,\beta ,d)\) in Theorem 1.1, it suffices to show that
This is achieved in Lemmas 5.2, 5.3 and 5.4.
The general strategy is to employ a specific change of variables \((x,y) = (\phi _1(x^\prime ), \phi _2(y^\prime ))\) and \((w_1,w_2) = (\psi _1(w_1^\prime ), \psi _2(w_2^\prime ))\) to the integral expression of T(w) so that
where \(\hat{\mathcal {S}}_{w}(\psi _1(w_1^\prime ), \psi _2(w_2^\prime ))\) is the integral \(\mathcal {S}_{w}(w_1,w_2)\) after the change of variables and without the leading order scaling. We then analyze the new integral and use dominated convergence to prove that it converges to a constant as \(k \rightarrow \infty \). This is also where we make use of the upper bounds established for T(w) in Sect. 4.
The right choice of the change of variables depends on the regime of a and \(\beta \). In addition, for the third regime, \(2< \beta < (a+\frac{3}{2})\), the entire integration domain \(w_1, w_2 \in (1,\infty )\) in T(w) contributes to the constant. Instead, in the other two regimes, the main contribution comes from integration over the domain \(w_1, w_2 \le w\), as summarized in the following result.
Lemma 5.1
Let \(a > 0\) and \(\beta > (a+\frac{3}{2}) \vee 2\) or \(\beta = a + \frac{3}{2} > 2\). Then, as \(w \rightarrow \infty \),
Proof
Lemmas 4.2 and 4.4 together imply that in these regimes, T(w) scales either as w or \(w\log (w)\). Hence, it suffices to show that
Note that the LHS is nothing more than the integral over the domain \(w_1 \vee w_2> w\). We must therefore show that
We first note that by symmetry the full integral is equal to twice the integral where \(w_1 \le w_2\). We then split the inner integral over the two cases \(1<w_1<w\) and \(w<w_1 < w_2\) to obtain:
We first show that the second integral in (5.4) is \(O\left( w^{4 + 2a-2\beta }\right) \) which is \(o(w \log (w))\) for the two regimes of a and \(\beta \). Using the bound (4.6) we get
We proceed with the first integral in (5.3). Note that \(w_1 \le w_2\) which implies that \(g(w,w_1) \le g(w,w_2)\). In addition, recall the function \(h(w,w_1,w_2)\) from (1.12) and observe that because \(w_1 \le w \le w_2\),
Next, by (4.5),
Since each term on the RHS is \(o(w \log (w))\) when \(\beta > (a+\frac{3}{2}) \vee 2\) and \(a > 0\), or \(\beta = a + \frac{3}{2} > 2\) with \(a > \frac{1}{2}\), the result follows.
5.1 The Case \(\beta >(a+\frac{3}{2})\vee 2\) and \(a \in [0,\infty )\)
By Lemma 5.1 it suffices to show that
where
using the symmetry in \(w_1\) and \(w_2\).
The following observation will prove useful throughout the whole section: for any \(t,s > 0\) it holds that
Lemma 5.2
Let \(\beta > (a+\frac{3}{2}) \vee 2\), with \(a \ge 0\). Then, as \(w \rightarrow \infty \),
Proof
Recall that
We now apply the following change of variables:
to obtain
where we also use the scaling relation (5.6) for the first two \(\kappa \) terms in the integrand.
We now apply dominated convergence twice. For this we first note that the integrand above is dominated by \(\kappa (\Vert \textbf{x}^\prime \Vert ,1,w_1) \kappa (\Vert \textbf{y}^\prime \Vert ,w_1,w_2)\). Hence, using straightforward calculations based on the definition of \(\kappa \), we get
which is finite for any finite \(w_1,w_2\). Therefore since \(\kappa (\Vert \textbf{x} + w^{-1/d}\textbf{y}\Vert ,1,w_2) \rightarrow \kappa (\Vert \textbf{x}\Vert ,1,w_2)\) pointwise, it follows that
by dominated convergence.
For the last step we first observe that
It is straightforward to see that the above integral is finite. Hence, since \(\frac{\mathcal {S}_w(w_1,w_2)}{w} \mathbbm {1}_{\left\{ w_1,w_2\le w\right\} }\) converges pointwise to \(I(w_1,w_2)\), which is integrable, we apply dominated convergence to conclude
as required. This proves Theorem 1.1 for \(\beta >(a+\frac{3}{2})\vee 2\) and \(a \in [0,\infty )\).
5.2 The Case \(\beta =a+\frac{3}{2}\) and \(a \in (\frac{1}{2},\infty )\)
The proof for this case follows a similar approach to that for \(\beta > (a+\frac{3}{2})\vee 2\). Again, by Lemma 5.1 we only need to consider \(\overline{T}(w)\) instead of T(w):
Lemma 5.3
Let \(\beta = a+\frac{3}{2}\), with \(a > \frac{1}{2}\). Then, as \(w \rightarrow \infty \),
Proof
Note that a change of variables similar to the one applied in Lemma 5.2 extracts a scaling factor w outside the integral. However, in this case we need to extract an additional factor \(\log (w)\). This is achieved by also scaling the variables \(w_1\) and \(w_2\). The proof thus proceeds as follows. We first apply a change of variables on \(w_1\) and \(w_2\), and then on \(\textbf{x}\) and \(\textbf{y}\). After this we get that \(\overline{T}(w)\) is equal to \(w \log (w)\) times some integral. We then use dominated convergence, in a similar way as in the proof of Lemma 5.2, to show that this integral converges.
Apply the change of variables \((w_1,w_2)\mapsto (u,r)\) given by
Then \(dw_1 = \log (w) w^udu\) and \(dw_2 = w^udr\) so that
Next consider
Here, we apply the change of variables
to get
where we have used (5.6) twice to go from the second to the third line. Observe that the integrand is bounded by \(\kappa \left( \Vert \textbf{x}^\prime \Vert ,1,1\right) \kappa \left( \Vert \textbf{y}^\prime \Vert ,1,r\right) \), which is integrable. Hence, by dominated convergence,
Recall the split of \(\overline{T}(w)\) in (5.8). Consider the first term on the RHS of (5.8). We have
where we used that \(2a + 3 - 2\beta = 0\), since \(\beta = a + \frac{3}{2}\).
We now apply dominated convergence. For this we recall that \(\frac{\mathcal {S}_w(w^u,w^ur)}{w^{2au+u+1}} \rightarrow I(1,r)\) and that
where the RHS is integrable over \((1,\infty )\) with respect to \(f_W(r)\). Therefore, by dominated convergence, the first term on the RHS of (5.8) when divided by \(w \log {(w)}\), converges as \(w\rightarrow \infty \) to
Now consider the second term on the RHS of (5.8). Arguing similarly, we note that when divided by \(w \log {(w)}\), this term becomes
We want to apply dominated convergence again. Here we use a different representation of the integral \(\mathcal {S}_w(w^u,w^ur)\). We apply the change of variables
to obtain in a similar fashion as before,
In particular, we note that \(\frac{\mathcal {S}_w(w^u,w^ur)}{w^{2au+u+1}}\mathbbm {1}_{\left\{ w^{-u}<r<1\right\} } \rightarrow I(1,r)\mathbbm {1}_{\left\{ 0<r<1\right\} }\), as \(w \rightarrow \infty \). Further, using the representation (5.9) and the definition of \(\kappa \), we note that \(\frac{\mathcal {S}_w(w^u,w^ur)}{w^{2au+u+1}}\mathbbm {1}_{\left\{ w^{-u}<r<1\right\} }\) is bounded from above by \(\left( \int _{\mathbb {R}^d} \kappa (\Vert \textbf{x}\Vert ,1,r) d\textbf{x}\right) ^2\mathbbm {1}_{\left\{ 0<r<1\right\} }=\overline{C}r^{2a}\mathbbm {1}_{\left\{ 0<r<1\right\} }\) for some constant \(\overline{C}>0\). Using that \(\beta =a+3/2\) and \(a>1/2\), it is easily checked that the integral \((\beta -1)\int _0^1 \int _{0}^1 r^{2a}r^{-\beta }drdu\) is finite. Hence, by dominated convergence, the second term on the RHS of (5.8), when divided by \(w\log {(w)}\), converges to
Overall, in this case, the quantity \(\frac{\overline{T}(w)}{w\log {(w)}}\) converges to \((\beta -1)^2 \int _0^{\infty } I(1,r) r^{-\beta }dr\), as \(w \rightarrow \infty \). This completes the proof of Theorem 1.1 for \(\beta =a+\frac{3}{2}\) and \(a\in (\frac{1}{2},\infty )\).
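The change of variables \((w_1,w_2)=(w^{u},w^{u}r)\) used in the proof above can be sanity-checked numerically: its Jacobian determinant should equal \(\log (w)\,w^{2u}\), matching the iterated volume elements \(dw_1=\log (w)w^{u}du\) and \(dw_2=w^{u}dr\). A finite-difference check (illustrative only; the evaluation point is arbitrary):

```python
import math

def w1w2(u, r, w):
    # the change of variables (w1, w2) = (w^u, w^u * r)
    return w ** u, w ** u * r

def jacobian_det_fd(u, r, w, h=1e-6):
    """Finite-difference Jacobian determinant of (u, r) -> (w1, w2)."""
    dw1_du = (w1w2(u + h, r, w)[0] - w1w2(u - h, r, w)[0]) / (2.0 * h)
    dw1_dr = (w1w2(u, r + h, w)[0] - w1w2(u, r - h, w)[0]) / (2.0 * h)
    dw2_du = (w1w2(u + h, r, w)[1] - w1w2(u - h, r, w)[1]) / (2.0 * h)
    dw2_dr = (w1w2(u, r + h, w)[1] - w1w2(u, r - h, w)[1]) / (2.0 * h)
    return dw1_du * dw2_dr - dw1_dr * dw2_du

u, r, w = 0.37, 2.2, 50.0
print(jacobian_det_fd(u, r, w), math.log(w) * w ** (2.0 * u))
```

The two printed values agree up to finite-difference error, confirming the volume element \(\log (w)w^{2u}\,du\,dr\).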
5.3 The Case \(\beta \in (2,a+\frac{3}{2})\) and \(a \in (\frac{1}{2}, \infty )\)
Lemma 5.4
Let \(\beta \in (2,a+\frac{3}{2})\), with \(a \in (\frac{1}{2},\infty )\). Then as \(w \rightarrow \infty \),
Proof
Recall the expression for T(w) from (2.6). Furthermore, note that for any \(t > 0\), \(\textbf{x} \in \mathbb {R}^d\) and \(w_1,w_2 > 0\),
We now apply the change of variables
Straightforward calculations imply that
To prove that the integral converges we first note that, by (4.19),
Hence, by monotone convergence,
and thus
This completes the proof of Theorem 1.1 for \(\beta \in (2,a+\frac{3}{2})\) and \(a\in (\frac{1}{2},\infty )\).
6 Discussion
This section discusses our proofs, possible extensions, and simulations.
6.1 Geometry of Typical Triangles
Note that because of the Poisson point process representation of the neighbors of 0 in \(\mathbb {G}^{\infty }\) as obtained in Lemma 2.1, the fraction \(T(w)/\mathcal {M}(w)^2\) can be seen as the probability that two randomly sampled neighbors of 0 have an edge between them, given that \(W_0=w\). Consequently, since the density \(f_k(w)\) behaves as a Dirac mass at \(\mathcal {M}^{-1}(k)\) thanks to Proposition 2.3, the fraction \(T(\mathcal {M}^{-1}(k))/k^2\) can be interpreted as the probability that two randomly sampled neighbors of 0 are also neighbors, given \(d_0(\mathbb {G}^{\infty })=k\). The change of variables that we encounter in Sect. 5 gives us interesting insights into the locations and weights \((\textbf{x},w_1),(\textbf{y},w_2)\in \mathbb {R}^d \times \mathbb {R}_+\) of two such randomly sampled neighbors. This further gives us insight into what a randomly sampled triangle between 0 and two such neighbors looks like. Figure 4 shows an abstract depiction of such a typical triangle. The quantity \(\Vert \textbf{x}-\textbf{y}\Vert \) is the spatial distance from \(\textbf{x}\) to \(\textbf{y}\) in \(\mathbb {R}^d\). We will next explain how this distance and the weights of each node in a typical triangle behave in the three different regimes of \(\beta \) and a.
To begin, recall Corollary 3.4 and (5.2). Our main idea is to analyse the change of variables encountered in Lemmas 5.2, 5.3 and 5.4, to get the convergence of \(T(w)/\sigma (w)\), for \(w=\mathcal {M}^{-1}(k)\).
6.1.1 Case \(\beta >(a+\frac{3}{2})\vee 2\) and \(a\in [0,\infty )\)
Let us carefully dissect the variable change in (5.7). Clearly, when \(w=\mathcal {M}^{-1}(k)\), by (5.7), \(\Vert \textbf{x}\Vert \) scales like \(\mathcal {M}^{-1}(k)^{1/d}\), which scales like \(k^{1/d}\) (recall Proposition 3.3). On the other hand, we did not need to scale \(\textbf{y}-\textbf{x}=\textbf{y}'\), i.e. \(\Vert \textbf{x}-\textbf{y}\Vert \) remains bounded, which implies that \(\textbf{y}\) stays in a ball of finite radius about \(\textbf{x}\). Thus, \(\Vert \textbf{y}\Vert \) is also of the order \(k^{1/d}\), and \(\textbf{y}\) is a constant-order spatial distance away from \(\textbf{x}\). Finally, note that we did not have to rescale the respective weights \(w_1\) and \(w_2\) corresponding to \(\textbf{x}\) and \(\textbf{y}\), which means that they are of constant order.
6.1.2 Case \(\beta =a+\frac{3}{2}\) and \(a \in (\frac{1}{2},\infty )\)
Arguing as in the previous case, and observing that \(\mathcal {M}^{-1}(k)\) scales like k by Proposition 3.3, we note from the proof of Lemma 5.3 that when \(w=\mathcal {M}^{-1}(k)\), the relevant change of variables yields \(\textbf{x}=C_1\textbf{x}'k^{1/d}=C_2\overline{\textbf{x}}k^{(1+au')/d}\), for some constants \(C_1, C_2>0\) and \(u'\in (0,1)\). Consequently, \(\Vert \textbf{x}\Vert =\Theta (k^{(1+au')/d})\). Next note that \(\textbf{y}-\textbf{x}=\textbf{y}'=C_3\overline{\textbf{y}}k^{(au'+u')/d}\), where \(C_3>0\) is some constant. This implies that \(\Vert \textbf{x}-\textbf{y}\Vert =\Theta (k^{(au'+u')/d})\). Note that the distance \(\Vert \textbf{x}-\textbf{y}\Vert \) between \(\textbf{x}\) and \(\textbf{y}\) is of smaller order than \(\Vert \textbf{x}\Vert \), since \(u' \in (0,1)\). This means that \(\Vert \textbf{y}\Vert \) is also of the order \(\Vert \textbf{x}\Vert \). Finally, note that for this case we had to rescale the weights as well. In particular, we changed variables as \(w_1=e^u=C_4k^{u'}\), for some constant \(C_4>0\), which means that the weight \(w_1\) corresponding to \(\textbf{x}\) is \(\Theta (k^{u'})\), with \(u'\in (0,1)\). We also changed variables as \(\textbf{w}={w_2}/{C_4k^{u'}}={w_2}/{w_1}\), i.e. the weight \(w_2\) corresponding to \(\textbf{y}\) is of the same order as \(w_1\), the weight of \(\textbf{x}\).
6.1.3 Case \(\beta \in (2,a+\frac{3}{2})\) and \(a \in (\frac{1}{2},\infty )\)
Note that for this case, for \(w=\mathcal {M}^{-1}(k)\), the variable change in (5.10) suggests that both \(\textbf{x}\) and \(\textbf{y}\) are \(\Theta ((\mathcal {M}^{-1}(k))^{(1+a)/d})\), while the respective weights \(w_1\) and \(w_2\) are both \(\Theta (\mathcal {M}^{-1}(k))\). Finally, note that for this case we did not have to rescale \(\textbf{x}-\textbf{y}\), which means that the distance \(\Vert \textbf{x}-\textbf{y}\Vert \) between \(\textbf{x}\) and \(\textbf{y}\) is also of the order \((\mathcal {M}^{-1}(k))^{(1+a)/d}\). Depending on how \(\mathcal {M}^{-1}(k)\) scales in k (recall Proposition 3.3), the orders of the different quantities \(\Vert \textbf{x}\Vert ,\Vert \textbf{y}\Vert ,w_1,w_2,\Vert \textbf{x}-\textbf{y}\Vert \) differ in different regimes of \(\beta \) and a. These relationships are summarized in Table 1.
6.2 The Boolean Model
In this section, we discuss a popular related model from the literature. Consider a homogeneous Poisson point process \(\Gamma \) on \(\mathbb {R}^d\), and a non-negative random variable W. Form the open ball \(B(\textbf{x},W_{\textbf{x}})\) around each point \(\textbf{x} \in \Gamma \), where \((W_{\textbf{x}})_{\textbf{x} \in \Gamma }\) is a collection of i.i.d. copies of W. The random graph formed by placing an edge between distinct pairs \(\textbf{x}, \textbf{y} \in \Gamma \) whenever \(B(\textbf{x},W_{\textbf{x}})\cap B(\textbf{y},W_{\textbf{y}}) \ne \emptyset \) is called the Boolean model [10, 26]. Let W have distribution (1.1). Clearly, this random graph model is a SIRG, with connection function (1.2) with \(\alpha =\infty \) and \(g(s,t)=(s+t)^d\). One can also consider the corresponding \(\alpha \in (1,\infty )\) version. For this model, \(\mathbb {E}\left[ \left. d_0(\mathbb {G}^{\infty })\right| W_0=w\right] =\mathcal {M}(w)\) is integrable with respect to the density (1.1) when \(\beta >d\).
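To make the construction concrete, the following is a minimal simulation sketch of the Boolean model on a finite box. All names and parameter values here are ours for illustration; the radii are drawn from a Pareto-type law standing in for density (1.1), with tail exponent \(\beta >d\) so that expected degrees stay finite.

```python
import numpy as np

def boolean_model_edges(intensity, box_len, beta, d=2, seed=0):
    """Boolean model on a finite box: sample a Poisson point process on
    [0, box_len]^d, attach i.i.d. radii, and connect two distinct points
    whenever their balls overlap, i.e. ||x - y|| < W_x + W_y."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * box_len**d)       # Poisson number of points
    pts = rng.uniform(0.0, box_len, size=(n, d))  # uniform locations
    # Pareto-type radii as a stand-in for density (1.1):
    # P(W > w) = w^{-(beta - 1)} for w >= 1.
    radii = rng.random(n) ** (-1.0 / (beta - 1))
    edges = [(i, j)
             for i in range(n)
             for j in range(i + 1, n)
             if np.linalg.norm(pts[i] - pts[j]) < radii[i] + radii[j]]
    return n, edges
```

The \(\alpha \in (1,\infty )\) version would replace the deterministic overlap condition by a connection probability decaying in \(\Vert \textbf{x}-\textbf{y}\Vert /(W_{\textbf{x}}+W_{\textbf{y}})\).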
A related model was studied in [13], where the authors consider a weight function of the form \(g^{\textrm{sum}}(s,t)=\beta ^{-1}(s^{-\gamma /d}+t^{-\gamma /d})^{-d}\), with s and t now uniformly distributed on (0, 1), and \(\beta \in (0,\infty )\), \(\gamma \in [0,1)\) parameters, see [13, (1.5)]. A particular case of this model is obtained by taking \(\beta =1\) and the profile function \(\rho \) of [13, (1.2)] to be \(\kappa _{\alpha }(\Vert \textbf{x}-\textbf{y}\Vert ,s',t')\) as in (1.2), where \(s'=s^{-\gamma }\) and \(t'=t^{-\gamma }\), with the weight function \(g(s',t')=(s'+t')^d\). Note that \(s'\) and \(t'\) now have density (1.1) with \(\beta -1=1/\gamma \). Again, as in the last paragraph, \(\mathbb {E}\left[ \left. d_0(\mathbb {G}^{\infty })\right| W_0=w\right] =\mathcal {M}(w)\) is integrable with respect to the density (1.1) when \(\beta >d\), i.e. \(\gamma <1/(d-1)\).
These models do not arise directly as particular cases of the weight function (1.3) that we consider. However, following the same proof strategy and calculations as in the proof of Theorem 1.1, one can show that the corresponding clustering function \(\gamma (k)\) has an inverse linear scaling: \(\lim _{k \rightarrow \infty }k\gamma (k)=\Gamma \in (0,\infty )\). The calculations are very similar to those for the case \(a=0\) and \(\beta >d\) of the weight function (1.3), and so we omit them. In particular, we state the following theorem without proof:
Theorem 6.1
Consider the infinite SIRG \(\mathbb {G}^{\infty }(\kappa ,W)\), where W has distribution (1.1) and \(\kappa =\kappa _{\alpha }\) is of the form (1.2), with \(g(s,t)=(s+t)^d\). Assume \(\beta >d\). Then, with \(\gamma (k)\) denoting the clustering function of this graph, \(\lim _{k \rightarrow \infty }k\gamma (k)=\Gamma \in (0,\infty )\), where \(\Gamma \) is an explicit constant (recall (1.7) and the definitions below it).
As remarked before, we need the condition \(\beta >d\) to ensure that \(\mathbb {E}\left[ d_0(\mathbb {G}^{\infty })\right] <\infty \).
6.3 On Finite Models
Note that Theorem 1.1 gives results on how \(\gamma (k)\) scales. However, \(\gamma (k)\) is the \(n \rightarrow \infty \) limit of the sequence of clustering functions \(\left( \textrm{CC}_{\mathbb {G}^{(n)}}(k)\right) _{n \ge 1}\) of the finite models \(\mathbb {G}^{(n)}\). What happens if we let \(k=k_n\) depend on n, with \(k_n \rightarrow \infty \), for the graph \(\mathbb {G}^{(n)}\)? That is, let us consider the sequence of random variables \(\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)\) and ask: do results similar to the theorems above hold with high probability as \(n \rightarrow \infty \)? We expect that if \(k_n \rightarrow \infty \) sufficiently slowly, the answer is positive. We formulate this as a conjecture in this section. We focus on the finite SIRG \(\mathbb {G}^{(n)}\), where the locations are i.i.d. on \(\left[ -{n^{1/d}}/{2},{n^{1/d}}/{2} \right] ^d\). As remarked earlier, the finite model locally converges to the infinite model \(\mathbb {G}^{\infty }\) (see [31, Theorem 1.2]). For the finite model \(\mathbb {G}^{(n)}\), recall the clustering function \(\textrm{CC}_{\mathbb {G}^{(n)}}(k)\) from (1.5), and recall that for fixed k, \(\textrm{CC}_{\mathbb {G}^{(n)}}(k) {\mathop {\rightarrow }\limits ^{\mathbb {P}}}\gamma (k)\). For \(k_n \rightarrow \infty \), define the sequence \(\phi (k_n) \rightarrow \infty \) by
and denote the sequence \(\psi (n)\rightarrow \infty \) as
We conjecture the following limit laws for finite models:
Conjecture 6.2
(Scaling for finite graphs) Consider the finite SIRG \(\mathbb {G}^{(n)}\).
a. Let \(k_n \rightarrow \infty \) satisfy \(\phi (k_n)=o\left( \psi (n)\right) \). Then, as \(n \rightarrow \infty \), \(\frac{\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)}{\gamma (k_n)}{\mathop {\rightarrow }\limits ^{\mathbb {P}}}1\).
b. Let \(k_n \rightarrow \infty \) satisfy \(\phi (k_n)=(1+o\left( 1\right) )\psi (n)\), as \(n \rightarrow \infty \). Then, as \(n \rightarrow \infty \), \(\frac{\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)}{\gamma (k_n)}{\mathop {\rightarrow }\limits ^{d}}L_0\text {Poi}(\lambda )\), for some \(L_0,\lambda >0\).
c. Let \(k_n \rightarrow \infty \) satisfy \(\phi (k_n)=\omega \left( {\psi (n)}\right) \). Then, as \(n\rightarrow \infty \), \(\frac{\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)}{\gamma (k_n)}{\mathop {\rightarrow }\limits ^{\mathbb {P}}}0\).
Heuristic Arguments for Conjecture 6.2. Let W be a random variable with density (1.1). Observe that by (2.3), because \(\mathbb {G}^{(n)}\) converges locally to \(\mathbb {G}^{\infty }\), the expected number of vertices with degree at least \(k_n\) in \(\mathbb {G}^{(n)}\) is \(\sim n\mathbb {P}\left( \textrm{Poi}(\mathcal {M}(W))>k_n\right) \sim n\mathbb {P}\left( \mathcal {M}(W)>k_n\right) =n\mathbb {P}\left( W>\mathcal {M}^{-1}(k_n)\right) \) as \(k_n \rightarrow \infty \). Then (1.1) and Proposition 3.3 ensure that when \(k_n\) satisfies \(\phi (k_n)=o\left( \psi (n)\right) \), this expected value does not decay to 0. In other words, as soon as it is possible to choose vertices of degree \(k_n\) in the graph \(\mathbb {G}^{(n)}\), the ratio of \(\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)\) and \(\gamma (k_n)\) should converge in probability to 1, i.e. they are asymptotically the same, as claimed by Conjecture 6.2(a). There are two main difficulties in proving Conjecture 6.2(a): firstly, to prove rigorously that as soon as \(\phi (k_n)=o\left( \psi (n)\right) \), the expected number of vertices in \(\mathbb {G}^{(n)}\) with degree \(k_n\) does not decay to 0; and secondly, to prove that \(\textrm{CC}_{\mathbb {G}^{(n)}}(k_n)/\gamma (k_n)\) converges to 1 as soon as \(\phi (k_n)=o\left( \psi (n)\right) \). Parts (b) and (c) of Conjecture 6.2 are to be understood as quantifying the finite-size effects in our models. That is, when \(\phi (k_n)=\omega (\psi (n))\) as in Conjecture 6.2(c), there are not many vertices of degree \(k_n\) in \(\mathbb {G}^{(n)}\), and even when there are, they simply do not have enough edges among their neighbors to contribute to the clustering in comparison to their degrees. Conjecture 6.2(b) claims that at the boundary of the finite-size effects, which we call the finite-size threshold, the limit is random, and in fact Poissonian.
Parts (b) and (c) require a careful and tedious analysis of boundary effects around the finite-size threshold, which constitutes its main difficulty. A result of similar flavor for Hyperbolic Random Graphs, which corresponds to taking \(d=1\) and \(\alpha =\infty \) in our model, was proved in [8, Theorem 1.5]. This proof relied heavily upon the particular choice of the model and its parameters, and does not appear to be easily generalizable to our setting.
As we have seen in Theorem 1.1, there are two phase transitions in the scaling of \(\gamma (k)\), and hence three different phases. In Fig. 5, we see simulations of the clustering as a function of the degree in the finite graph \(\mathbb {G}^{(n)}\), in these three different phases of \(\beta \) and a. Here, for \(k \in \{d_v(\mathbb {G}^{(n)}):v\in V(\mathbb {G}^{(n)})\}\), the log-log plot of \(k \mapsto \textrm{CC}_{\mathbb {G}^{(n)}}(k)\) is shown. Since in all three phases \(\gamma (k)=\Theta (k^{-p})\) as \(k \rightarrow \infty \), for some \(p \ge 0\), we see a straight line in the log-log plot, before the decay becomes much faster due to finite-size effects. The (approximate) finite-size threshold is indicated by the green dashed vertical line. In the plots of Fig. 5, the number of vertices of \(\mathbb {G}^{(n)}\) is fixed at \(n=22000\), and \(a=2\), \(\alpha =1\) and \(d=2\) are fixed. The parameter \(\beta \) is varied to obtain random graphs in the three different regimes: \(\beta >a+\frac{3}{2}\), \(\beta \in (a+1,a+\frac{3}{2})\) and \(\beta \in (\frac{a+3}{2},a+1)\). In the third phase, we do not go all the way down to \(\beta =2\), since in the regime \(\beta \in (2,\frac{a+3}{2})\) we have \(\mathbb {E}\left[ d_0(\mathbb {G}^{\infty })\right] =\infty \), as we recall from Sect. 3.
As can be seen in Fig. 5, as the weight tail \(\beta \) becomes heavier, the decay of the clustering function, as the degree becomes large, becomes slower and slower. The outliers in the simulations for extremely high-degree vertices (extreme right of the plots) show low clustering because of finite-size effects. Note that the clustering contribution from extremely low-degree vertices (extreme left of the plots) is not of interest to us, since we are interested in the clustering behaviour as the degree diverges.
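The empirical clustering function plotted in Fig. 5 can be computed from any finite graph along the following lines. This is a generic sketch of our own (an adjacency-set representation; not the authors' simulation code): for each degree k, it averages over all degree-k vertices the fraction of realized edges among their neighbors, matching the definition in (1.5).

```python
from collections import defaultdict
from itertools import combinations

def clustering_function(adj):
    """Empirical clustering function k -> CC(k): for each vertex of degree
    k >= 2, compute the fraction of the (k choose 2) possible edges among
    its neighbors that are present, then average over all degree-k vertices.
    `adj` maps each vertex to the set of its neighbors."""
    per_degree = defaultdict(list)
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering is undefined for degrees 0 and 1
        links = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
        per_degree[k].append(links / (k * (k - 1) / 2))
    return {k: sum(vals) / len(vals) for k, vals in per_degree.items()}
```

For a triangle this returns `{2: 1.0}`, while for a path on three vertices it returns `{2: 0.0}`; on a simulated \(\mathbb {G}^{(n)}\) the resulting dictionary is exactly what is shown on log-log axes in Fig. 5.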
In Fig. 6, for visualization, we have included simulations of the graph \(\mathbb {G}^{(n)}\) in the three different regimes \(\beta >a+\frac{3}{2}\), \(\beta \in (a+1,a+\frac{3}{2})\) and \(\beta \in (2,a+1)\). In all these simulations, \(n=8000\) and \(\alpha =1\) are fixed, and vertices with higher degree are drawn larger. Uniform bond percolation with parameter \(p=1/7000\) is first applied to all three graphs for better visibility of the edges. Further, bond percolation is applied to the graphs in the regimes \(\beta \in (a+1,a+\frac{3}{2})\) and \(\beta \in (2,a+1)\) so that they have (approximately) the same average degree as the graph in the regime \(\beta >a+\frac{3}{2}\). In Fig. 6c, we see the graph \(\mathbb {G}^{(n)}\) in the regime \(\beta \in (2,a+1)\). Here \(\beta \) is also chosen such that \(\beta >(a+3)/2\), so that \(\mathbb {E}\left[ d_0(\mathbb {G}^{\infty })\right] <\infty \). However, the clustering function \(\gamma (k)\) of the corresponding infinite model converges to a constant, see Theorem 1.1. As a consequence, the graph admits vertices of very high degrees, and these vertices have neighborhoods that are almost cliques. Could these vertices be part of a large dense subgraph? A future research direction for us is to study other properties of the graphs in this exotic regime.
References
Bloznelis, M., Leskelä, L.: Clustering and percolation on superpositions of Bernoulli random graphs. Random Struct. Algorithms (2019). https://doi.org/10.1002/rsa.21140
Boguñá, M., Bonamassa, I., De Domenico, M., Havlin, S., Krioukov, D., Serrano, M.: Network geometry. Nat. Rev. Phys. 3(2), 114–135 (2021)
Bollobás, B., Janson, S., Riordan, O.: The phase transition in inhomogeneous random graphs. Random Struct. Algorithms 31(1), 3–122 (2007). https://doi.org/10.1002/rsa.20168
Bringmann, K., Keusch, R., Lengler, J.: Average distance in a general class of scale-free networks with underlying geometry. (2016). arXiv:1602.05712
Bringmann, K., Keusch, R., Lengler, J.: Sampling geometric inhomogeneous random graphs in linear time. In 25th European Symposium on Algorithms, volume 87 of LIPIcs. Leibniz Int. Proc. Inform., pages Art. No. 20, 15. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern (2017)
Bringmann, K., Keusch, R., Lengler, J.: Geometric inhomogeneous random graphs. Theoret. Comput. Sci. 760, 35–54 (2019). https://doi.org/10.1016/j.tcs.2018.08.014
Csányi, G., Szendrői, B.: Structure of a large social network. Phys. Rev. E 69, 036131 (2004). https://doi.org/10.1103/PhysRevE.69.036131
Fountoulakis, N., van der Hoorn, P., Müller, T., Schepers, M.: Clustering in a hyperbolic model of complex networks. Electron. J. Probab. 26, Paper No. 13, 132 (2021). https://doi.org/10.1214/21-ejp583
García-Pérez, G., Boguñá, M., Allard, A., Serrano, M.: The hidden hyperbolic geometry of international trade: World trade atlas 1870–2013. Sci. Rep. 6(1), 1–10 (2016)
Gouéré, J.-B.: Subcritical regimes in the Poisson Boolean model of continuum percolation. Ann. Probab. 36(4), 1209–1220 (2008). https://doi.org/10.1214/07-AOP352
Gracar, P., Lüchtrath, L., Mönch, C.: Finiteness of the percolation threshold for inhomogeneous long-range models in one dimension. (2022). arXiv:2203.11966
Gracar, P., Grauer, A., Lüchtrath, L., Mörters, P.: The age-dependent random connection model. Queueing Syst. 93(3–4), 309–331 (2019). https://doi.org/10.1007/s11134-019-09625-y
Gracar, P., Heydenreich, M., Mönch, C., Mörters, P.: Recurrence versus transience for weight-dependent random connection models. Electron. J. Probab. 27, 1–31 (2022). https://doi.org/10.1214/22-EJP748
Hoff, P.D., Raftery, A.E., Handcock, M.S.: Latent space approaches to social network analysis. J. Am. Stat. Assoc. 97(460), 1090–1098 (2002)
van der Hofstad, R.: Random Graphs and Complex Networks, Vol. 2 (in preparation). (2021+). http://www.win.tue.nl/~rhofstad/NotesRGCNII.pdf
Iskhakov, L., Kamiński, B., Mironov, M., Prałat, P., Prokhorenkova, L.: Clustering properties of spatial preferential attachment model. In: Bonato, A., Prałat, P., Raigorodskii, A. (eds.) Algorithms and Models for the Web Graph. Springer, Cham. pp. 30–43 (2018)
Jacob, E., Mörters, P.: Spatial preferential attachment networks: power laws and clustering coefficients. Ann. Appl. Probab. 25(2), 632–662 (2015). https://doi.org/10.1214/14-AAP1006
Jorritsma, J., Lapinskas, J.: Software for increasing efficacy of contact-tracing applications by user referrals and stricter quarantining. PLoS ONE (2021). https://doi.org/10.5281/zenodo.4675115
Jorritsma, J., Komjáthy, J., Mitsche, D.: Cluster-size decay in supercritical kernel-based spatial random graphs. (2023). arXiv:2303.00724
Komjáthy, J., Lodewijks, B.: Explosion in weighted hyperbolic random graphs and geometric inhomogeneous random graphs. Stoch. Process. Appl. 130(3), 1309–1367 (2020). https://doi.org/10.1016/j.spa.2019.04.014
Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A., Boguñá, M.: Hyperbolic geometry of complex networks. Phys. Rev. E 82, 036106 (2010). https://doi.org/10.1103/PhysRevE.82.036106
Krot, A., Ostroumova Prokhorenkova, L.: Local clustering coefficient in generalized preferential attachment models. In: International workshop on algorithms and models for the web-graph, pp. 15–28. Springer (2015)
Last, G., Penrose, M.: Lectures on the Poisson Process, Volume 7 of Institute of Mathematical Statistics Textbooks. Cambridge University Press, Cambridge (2018)
Leskovec, J.: Dynamics of Large Networks. Carnegie Mellon University, Pittsburgh (2008)
Meester, R., Roy, R.: Continuum Percolation. Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1996). https://doi.org/10.1017/CBO9780511895357
Michielan, R., Litvak, N., Stegehuis, C.: Detecting hyperbolic geometry in networks: why triangles are not enough. Phys. Rev. E 106(5), 054303 (2022)
Newman, M.E.J.: Properties of highly clustered networks. Phys. Rev. E 68, 026121 (2003). https://doi.org/10.1103/PhysRevE.68.026121
Serrano, M.A., Boguñá, M.: Clustering in complex networks. I. General formalism. Phys. Rev. E 74, 056114 (2006). https://doi.org/10.1103/PhysRevE.74.056114
Stegehuis, C., van der Hofstad, R., van Leeuwaarden, J.S.H.: Scale-free network clustering in hyperbolic and other random graphs. J. Phys. A 52(29), 295101 (2019). https://doi.org/10.1088/1751-8121/ab2269
van der Hofstad, R., van der Hoorn, P., Maitra, N.: Local limits of spatial inhomogeneous random graphs. Adv. Appl. Probab. (2023). https://doi.org/10.1017/apr.2022.61
van der Kolk, J., Serrano, M., Boguñá, M.: An anomalous topological phase transition in spatial random graphs. Commun. Phys. 5(1), 1–7 (2022)
Vázquez, A., Pastor-Satorras, R., Vespignani, A.: Large-scale topological and dynamical properties of the internet. Phys. Rev. E 65(6), 066130 (2002). https://doi.org/10.1103/PhysRevE.65.066130
Acknowledgements
The work of RvdH is supported in part by the Netherlands Organisation for Scientific Research (NWO) through Gravitation-grant NETWORKS-024.002.003. NM thanks Martijn Gösgens for help with the pictures, and Joost Jorritsma for help with the simulations, based on the code [18]. We thank an anonymous reviewer for a meticulous reading of the submitted version, whose observations greatly improved the paper.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by Hal Tasaki.
Appendices
Markings and Thinnings: Proof of Lemma 2.1
Before proving Lemma 2.1, we first recall the concepts of marking and thinning of a point process. For more on what follows, we refer the reader to [24, Chapter 5].
Definition A.1
(Probability kernel) For measurable spaces \((\mathbb {X},\mathscr {X})\) and \((\mathbb {Y},\mathscr {Y})\), a probability kernel K from \(\mathbb {X}\) to \(\mathbb {Y}\) is a map \(K:\mathbb {X} \times \mathscr {Y} \rightarrow [0,1]\), such that for every fixed \(x \in \mathbb {X}\), \(K(x,\cdot )\) is a probability measure on \(\mathbb {Y}\), and for every fixed \(A \in \mathscr {Y}\), \(K(\cdot , A)\) is an \(\mathscr {X}\)-measurable function on \(\mathbb {X}\).
Definition A.2
(Proper point process) A point process \(\eta \) on a measurable space \((\mathbb {X},\mathscr {X})\) is called proper if there exist \(\mathbb {X}\)-valued random variables \(X_1,X_2,\dots \) and an \(\mathbb {N}_0:=\mathbb {N}\cup \{0\}\)-valued random variable \(\mathscr {N}\) such that, almost surely, \(\eta =\sum _{i=1}^{\mathscr {N}}\delta _{X_i}\).
It is a well-known fact that any Poisson point process is proper, see [24, Corollaries 3.7 and 6.5].
Definition A.3
(Marking) Let \(\eta =\sum _{i=1}^{\mathscr {N}}\delta _{X_i}\) be a proper point process on \(\mathbb {X}\). Let K be a probability kernel from \(\mathbb {X}\) to \(\mathbb {Y}\) for some measurable space \((\mathbb {Y}, \mathscr {Y})\). Let \(Y_1,Y_2,\dots \) be \(\mathbb {Y}\)-valued random variables such that, given \(\mathscr {N}=m\) and \((X_i)_{i \le m}\), the marks \((Y_i)_{i \le m}\) are conditionally independent with distributions \(K(X_i,\cdot )\), \(i \le m\). Then the \(\mathbb {X} \times \mathbb {Y}\)-valued point process \(\zeta :=\sum _{i=1}^{\mathscr {N}}\delta _{(X_i,Y_i)}\)
is called a K-marking of \(\eta \).
If there is a probability measure \(\mathbb {Q}\) on \(\mathbb {Y}\) such that \(K(x,\cdot )=\mathbb {Q}(\cdot )\) for all \(x \in \mathbb {X}\), then \(\zeta \) is called an independent \(\mathbb {Q}\)-marking of \(\eta \). The following is the main theorem on markings of Poisson point processes (see [24, Theorem 5.6]), which we state without proof. Recall that an s-finite measure is a countable sum of finite measures:
Theorem A.1
(Marking theorem) Let \(\zeta \) be a K-marking of a proper Poisson point process \(\eta \) with s-finite intensity measure \(\lambda \). Then \(\zeta \) is a Poisson point process on \(\mathbb {X} \times \mathbb {Y}\) with intensity measure \(\lambda \otimes K\), where, for any \(C \in \mathscr {X} \times \mathscr {Y}\),
Definition A.4
(Thinning) Let \(p:\mathbb {X} \rightarrow [0,1]\) be measurable, and consider the probability kernel \(K_p\) from \(\mathbb {X}\) to [0, 1] defined by \(K_p(x,\cdot ):=p(x)\delta _1(\cdot )+(1-p(x))\delta _0(\cdot )\).
Then if \(\zeta \) is a \(K_p\)-marking of a proper point process \(\eta \), the point process \(\zeta (\cdot \times \{1\})\) on \(\mathbb {X}\) is called a p-thinning of \(\eta \).
The following is the main theorem on thinnings of Poisson point processes (see [24, Theorem 5.8]), which we state without proof:
Theorem A.2
(Thinning theorem) Let \(\zeta \) be a \(K_p\)-marking of a proper Poisson process \(\eta \), where \(K_p\) is as in (A.1). Then \(\zeta _i:=\zeta (\cdot \times \{i\})\) for \(i=1,0\) are independent Poisson point processes on \(\mathbb {X}\).
Note that since the intensity measure of \(\zeta \) is \(\lambda \otimes K_p\), the intensity measures \(\lambda _0\) and \(\lambda _1\) of \(\zeta (\cdot \times \{0\})\) and \(\zeta (\cdot \times \{1\})\), respectively, are \(\lambda _0(dx)=(1-p(x))\lambda (dx)\) and \(\lambda _1(dx)=p(x)\lambda (dx)\).
We now have the necessary ingredients to give the proof of Lemma 2.1.
Proof of Lemma 2.1
Let \(\eta \) denote a Poisson point process on \(\mathbb {U}\) with intensity measure given by \(\lambda _d(d\textbf{x}) \times f_W(x)dx\), where we recall that \(\lambda _d\) denotes the Lebesgue measure on \(\mathbb {R}^d\). Let \(\eta _0\) be the point process \(\eta + \delta _{(\textbf{0},w)}\), i.e. \(\eta \) with the point \((\textbf{0},w) \in \mathbb {U}\) added, a Palm version of \(\eta \). Observe that the collection \(\{(X_i,W_i)\}_{i \in V(\mathbb {G}^{\infty })}\) of tuples of the locations and weights of the vertices of \(\mathbb {G}^{\infty }\) is distributed as the atoms of \(\eta _0\). Thus, we view the atoms of \(\eta _0\) as the vertices of \(\mathbb {G}^{\infty }\).
For \(w>0\) fixed, let \(p_w:\mathbb {U} \rightarrow [0,1]\) be defined by
That is, given \(\textbf{p}=(\textbf{x},x) \in \eta \), \(p_w(\textbf{p})\) is the probability that there is an edge between the vertices \(\textbf{p}\) and \((\textbf{0},w)\) in \(\mathbb {G}^{\infty }\).
Consider the probability kernel \(K_{p_w}\) from \(\mathbb {U}\) to [0, 1] defined by
Let \(\eta ^{(p_w)}\) be the \(K_{p_w}\)-marking of \(\eta \), and consider \(\eta ^{(p_w)}_1:=\eta ^{(p_w)}(\cdot \times \{1\})\), the \(p_w\)-thinning of \(\eta \). Observe that the neighbors of \((\textbf{0},w)\) in \(\mathbb {G}^{\infty }\) have the same distribution as the atoms of \(\eta ^{(p_w)}_1\). From (A.2), the intensity measure of the Poisson point process \(\eta ^{(p_w)}_1\) is
In other words, the neighbors of \((\textbf{0},w)\) are distributed as a Poisson point process on \(\mathbb {U}\), with intensity function \(\rho _w(\textbf{p})\) as in (2.1). This completes the proof.
Remark A.1
Note that our proof shows that the point process of the locations and weights of the non-neighbors of \(\textbf{0}\), given that it has weight \(W_0=w\), also forms a Poisson point process, namely \(\eta ^{(p_w)}_0:=\eta ^{(p_w)}(\cdot \times \{0\})\).
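The thinning theorem underlying this proof can be checked numerically. The sketch below (our own illustration, with an arbitrarily chosen retention probability p on an interval) thins a homogeneous Poisson process and compares the empirical counts of the two parts with the predicted Poisson total masses \(\int p\,d\lambda \) and \(\int (1-p)\,d\lambda \).

```python
import numpy as np

def p_thinning(points, p, rng):
    """Split a point configuration into the retained part (mark 1) and the
    removed part (mark 0); each point x is kept independently with
    probability p(x)."""
    marks = rng.random(len(points)) < p(points)
    return points[marks], points[~marks]

rng = np.random.default_rng(42)
lam, length, n_runs = 5.0, 10.0, 2000
p = lambda x: 0.2 + 0.04 * x  # retention probability, chosen arbitrarily

kept_counts, removed_counts = [], []
for _ in range(n_runs):
    n = rng.poisson(lam * length)            # homogeneous PPP on [0, length]
    pts = rng.uniform(0.0, length, size=n)
    kept, removed = p_thinning(pts, p, rng)
    kept_counts.append(len(kept))
    removed_counts.append(len(removed))

# Theorem A.2 predicts two independent Poisson processes with total masses
# int p dlam = 5 * (0.2*10 + 0.04*10**2/2) = 20 and 5*10 - 20 = 30; the
# empirical means of kept_counts and removed_counts should match these.
```

The Poisson property can also be probed empirically, e.g. by checking that the variance-to-mean ratio of the counts is close to 1.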
Inverse Scaling of Almost Power Functions: Proof of Lemma 3.2
We prove the scaling of \(f^{-1}(k)\) by contradiction. To this end, we assume that there exist some \(\delta >0\) and a sequence of reals \((k_n)_{n \ge 1}\), with \(k_n \rightarrow \infty \) as \(n \rightarrow \infty \), such that, for all \(n \ge 1\),
The above inequality implies that along a subsequence
or
Without loss of generality, let us assume the first case; the second case can be dealt with in a similar fashion (reversing all inequalities).
Since f is strictly increasing on \((t,\infty )\), and \(f^{-1}(k_n) \rightarrow \infty \), for all large \(n\ge 1\),
Since \(f(k)=ck^{a}(\log {k})^{b}+o\left( ck^{a}(\log {k})^{b}\right) \) as \(k \rightarrow \infty \), and
as \(k_n \rightarrow \infty \), from (B.1), we have
which is a contradiction. This completes the proof.
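The statement of Lemma 3.2 can be illustrated numerically on the pure leading term \(f(k)=ck^{a}(\log k)^{b}\) (a toy instance with exponents of our choosing; a and b here are the lemma's exponents, not the model parameters): inverting \(f(x)=k\) gives \(\log x\sim (\log k)/a\), and hence \(f^{-1}(k)\sim (k/c)^{1/a}(a/\log k)^{b/a}\), with corrections of order \(\log \log k/\log k\) that vanish slowly.

```python
import math
from math import log

a, b, c = 2.0, 1.0, 3.0   # exponents of the toy "almost power function"

def f(x):
    # pure leading term of Lemma 3.2, strictly increasing for x > 1
    return c * x**a * log(x)**b

def f_inverse(k, lo=2.0, hi=1e30):
    # geometric bisection on the strictly increasing f: halve the
    # logarithmic search range until the midpoint pins down f(x) = k
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) < k:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def predicted(k):
    # asymptotic inverse: (k/c)^{1/a} * (a / log k)^{b/a}
    return (k / c) ** (1.0 / a) * (a / log(k)) ** (b / a)
```

Comparing `f_inverse(k)` with `predicted(k)` for growing k shows the ratio approaching 1 from above, at the slow logarithmic rate suggested by the correction term.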
van der Hofstad, R., van der Hoorn, P. & Maitra, N. Scaling of the Clustering Function in Spatial Inhomogeneous Random Graphs. J Stat Phys 190, 110 (2023). https://doi.org/10.1007/s10955-023-03122-6