Introduction

Random walks on random graphs have been actively studied over the past decades, see, e.g., [1, 3, 5, 7, 13], among many others. The two layers of randomness, one for the random graph, the other for the random walk on it, are typical of many fascinating problems in modern probability, such as spin glasses, random walks in a random environment, and others. Moreover, random walks are a key tool for understanding the properties of random graphs, in particular close to the point of phase transition (see the recent monograph [12]). There are various quantities describing the behavior of a random walk on a random graph, most of them related to the question of how quickly the random walk is able to see various areas of the graph. The mixing time characterizes how long it takes the distribution of a walker to get close to its equilibrium distribution, the hitting and commute times represent the time it takes to get from one vertex to another, and the cover time describes the time a walker needs to see the entire graph. In this note, we will investigate the hitting times of random walks on random graphs.

To define it, let a realization \(G_n=(V_n,E_n)\) of an Erdős–Rényi graph \(\mathcal {G}(n,p)\) be given, i.e., we choose a random graph \(G_n\) on \(V_n=\{1, \ldots , n\}\) such that all undirected edges \(e=\{i,j\}\in E_n\) are realized independently with the same probability p. Note that \(p=p_n\) may and typically will depend on n. We will choose it in such a way that \(p^2_n \gg \frac{(\log n)^{16 \xi }}{n}\), by which we mean that \(\frac{(\log n)^{16 \xi }}{n p^2_n} \rightarrow 0\) as \(n \rightarrow \infty \). Here \(\xi >1\) is some large enough positive number, which appears in Lemma 2. Hence, with probability converging to one \(G_n\) is connected and all the probabilities below will be understood conditionally on the event that \(G_n\) is connected (see [2], Theorem 7.3).

While the above setup suffices to define \(G_n\) for fixed n, it does not allow us to consider almost sure results or limits for \(n \rightarrow \infty \), which we will need later on. To this end, we will construct the entire sequence of random graphs as follows. Given the sequence \((p_n)\), let \(a_{i,j}(n)\), \(i,j\in V_{n}\), be the indicator for the event that the undirected edge \(\{i,j\}\) is present in the random graph on \(V_{n}=\{1, \ldots , n\}\). We will assume that the \(a_{i,j}(n)\) behave like a time-inhomogeneous Markov chain. More precisely:

$$\begin{aligned}&{\mathbb {P}}(a_{i,j}(n+1)=0 |a_{i,j}(n)=0 )=1=1-{\mathbb {P}}(a_{i,j}(n+1)=1 |a_{i,j}(n)=0 )\\&{\mathbb {P}}(a_{i,j}(n+1)=0 |a_{i,j}(n)=1 )=1-\frac{p_{n+1}}{p_{n}}=1-{\mathbb {P}}(a_{i,j}(n+1)=1 |a_{i,j}(n)=1 ) \end{aligned}$$

for all \(i\ne j\). The graph is assumed to contain no loops, thus \(a_{ii}(n)=0\) for all \(i\in V_n\). This setup only works if \((p_n)\) is non-increasing. If \((p_n)\) is increasing, we can construct the complement \(G_n^c\) of \(G_n\) in the same fashion. Here \(G_n^c\) is the graph on \(\{1, \dots , n\}\) that contains an edge e if and only if \(e \notin E_n\). If we can construct \(G_n^c\), we can construct \(G_n\) as well. In this paper, we will therefore assume that \((p_n)\) is such that we can either construct \(G_n\) or \(G_n^c\). In the following, we omit the index n whenever suitable. Unless otherwise remarked, the calculations are made on \(G=G_n=(V_n,E_n)=(V,E)\).

On a realization G of \(\mathcal {G}(n,p)\) for fixed n, consider simple random walk \((X_t)\) in discrete time t: if \(X_t\) is in \(v\in V\) at time t, then \(X_{t+1}\) will be in w with probability \(1/d_v\) (with \(d_v\) denoting the degree of v) if \(\{v,w\} \in E\), and with probability 0 otherwise. This random walk has an invariant distribution given by

$$\begin{aligned} \pi _i:= \frac{d_i}{\sum _{j \in V}d_j}=\frac{d_i}{2 |E|}. \end{aligned}$$
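As a quick numerical sanity check (not part of the argument), one can sample a connected realization and verify that \(\pi \) is indeed invariant for the transition matrix of the walk; the graph size and edge probability below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 0.5

# Sample symmetric adjacency matrices of G(n, p) until the graph is connected.
while True:
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A + A.T).astype(float)
    # connectivity: (A + I)^n has all entries positive iff the graph is connected
    if (np.linalg.matrix_power(A + np.eye(n), n) > 0).all():
        break

d = A.sum(axis=1)            # vertex degrees
P = A / d[:, None]           # transition matrix: P[v, w] = 1/d_v if {v, w} is an edge
pi = d / d.sum()             # claimed invariant distribution pi_i = d_i / (2|E|)

assert np.allclose(pi @ P, pi)   # pi P = pi, so pi is invariant
```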

Let \(H_{ij}\) be the expected time it takes the walk to reach vertex j when starting from vertex i. Moreover, let

$$\begin{aligned} H_j := \sum _{i\in V} \pi _i H_{ij} \quad \text{ and } \quad H^i := \sum _{j \in V} \pi _j H_{ij} \end{aligned}$$
(1.1)

be the average target hitting time and the average starting hitting time, respectively. Note that \(H_j\) and \(H^i\) are expectations with respect to the random walk measure. However, with respect to the realization of the random graph, they are random variables. In general, \(H_j\) and \(H^i\) will be different. In [8], the asymptotic behavior of \(H_j\) and \(H^i\) was analyzed. Confirming a prediction in the physics literature ([11]), it was shown that

$$\begin{aligned} H_j=n(1+o(1)) \qquad \text{ as } \text{ well } \text{ as } \qquad H=H^i=n(1+o(1)) \end{aligned}$$
(1.2)

a.a.s. By the latter expression, we mean that for a given vertex i the probability that \(H_i\) or \(H^i\) is not of the order prescribed by (1.2) goes to 0 as \(n \rightarrow \infty \). The analogous result for hypergraphs was proved in [6]. Indeed, as we will see, \(H^i=:H\) does not even depend on i.

Equation (1.2) can be considered a law of large numbers for the random variables \(H_j\) and \(H^i\). In this note, we will study the fluctuations around this law of large numbers for H on the level of a CLT. It turns out that we are able to rewrite H, up to negligible terms, as an incomplete U-statistic with a kernel of degree 2. Even though the fluctuations of this specific form of incomplete U-statistics have not been analyzed so far, we are able to show a CLT for them.

As a consequence, we will be able to show

Theorem 1

Let H be defined as in (1.1) and \(\theta _n\) be given by (3.9). Moreover, assume that \(p=p_n\in (0,1)\) is either increasing or decreasing. Assume that

$$\begin{aligned} n p_n^2/(\log n)^{16\xi } \rightarrow \infty \end{aligned}$$
(1.3)

for some \(\xi >1\) and that \(n(1-p_n)\rightarrow \infty \). Then, with the above setting

$$\begin{aligned} \frac{1}{\sqrt{2}\,n\theta _n}\left( H-(n-3)-\frac{1}{p}+\frac{2-3p}{np}-\frac{2(1-p)^2}{np^2}\right) \xrightarrow [n\rightarrow \infty ]{\mathcal {D}}\mathcal {N}\left( 0,1\right) \end{aligned}$$
(1.4)

where \(\mathcal {N}\left( m, s^2\right) \) denotes a normal distribution with expectation m and variance \(s^2\) and \(\xrightarrow []{\mathcal {D}}\) denotes convergence in distribution.

We organize this paper as follows: In Sect. 2, we recall a decomposition of H in terms of the spectrum of a variant of the Laplace matrix of the random graph. This decomposition is the key to rewriting H in terms of a U-statistic \(U_n\) plus smaller order terms in Sect. 3. In Sect. 4, we prove a CLT for the U-statistic \(U_n\). Note that a similar CLT has been shown in [9] (which in turn is based on the results in [4] and [10]); however, it is easier to prove this CLT in our context directly. In Sect. 5, we see that, on the level of a CLT, \(U_n\) is indeed the central term in the decomposition of H, i.e., we will see that Theorem 1 is true. In an appendix, we collect some auxiliary results.

Spectral Decomposition of the Hitting Times

We start, as in [8] and [6], with the spectral decomposition of the hitting times, taken from [7], Sect. 3. To introduce it, let A be the adjacency matrix of G, i.e., \(A=(a_{ij})\), with \(a_{ij}={\text {1}}_{\{i,j\}\in E}\). Let D be the diagonal matrix \(D=(\mathrm {diag}(d_i))_{i=1}^{n}\). With A, we associate the matrix B defined as \(B:=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\). Note that \(B=(b_{ij})\) with \(b_{ij}=\frac{a_{ij}}{\sqrt{d_i d_j}}\). Therefore, B is symmetric and hence has real eigenvalues. \(\lambda _1 = 1\) is an eigenvalue of B, since \(w:=(\sqrt{d_1}, \ldots , \sqrt{d_n})\) satisfies \(Bw=w\). By the Perron–Frobenius theorem, \(\lambda _1\) is the largest eigenvalue. We order the eigenvalues \(\lambda _k\) of B such that \(\lambda _1 \ge \lambda _2 \ge \cdots \ge \lambda _{n} \) and we normalize the eigenvectors \(v_k\) corresponding to the eigenvalues \(\lambda _k\) to have length one. Thus, in particular, \( v_1 := \frac{w}{\sqrt{2|E|}}= \left( \sqrt{\frac{d_j}{2|E|}} \right) _{j=1}^{n}. \) Recall that the matrix of the eigenvectors is orthogonal and the scalar product of two eigenvectors \(v_i\) and \(v_j\) satisfies \(\langle v_i, v_j\rangle =\delta _{ij}\). With this, we can give the following representation of H:

Proposition 1

(cf. [7], Theorem 3.1 and Formula (3.3)) For all \(i\ne j \in V\), we have \( H_{ij}=2 |E| \sum \limits _{k=2}^{n} \frac{1}{1- \lambda _k} \left( \frac{v_{k, j}^2}{d_j} - \frac{v_{k, i} v_{k, j}}{\sqrt{d_i d_j}} \right) . \) Thus,

$$\begin{aligned} H= \sum _{k=2}^{n} \frac{1}{1-\lambda _k} \end{aligned}$$
(2.1)

Formula (2.1) will be the basis of our analysis in the next sections.
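Formula (2.1) lends itself to a numerical sanity check (not part of the proofs): computing the hitting times \(H_{ij}\) directly from the linear system \(h=\mathbf {1}+Ph\) off the target vertex, the \(\pi \)-weighted averages \(H^i\) should all coincide with the spectral sum. A sketch with arbitrary small parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 0.6
# resample until the realization is connected
while True:
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A + A.T).astype(float)
    if (np.linalg.matrix_power(A + np.eye(n), n) > 0).all():
        break

d = A.sum(axis=1)
P = A / d[:, None]
pi = d / d.sum()

# Expected hitting times H_{ij}: for each target j solve (I - P) h = 1 with h_j = 0.
H_mat = np.zeros((n, n))
for j in range(n):
    idx = [k for k in range(n) if k != j]
    M = np.eye(n - 1) - P[np.ix_(idx, idx)]
    h = np.linalg.solve(M, np.ones(n - 1))
    H_mat[idx, j] = h

# Spectral side: eigenvalues of B = D^{-1/2} A D^{-1/2}, same spectrum as P.
B = A / np.sqrt(np.outer(d, d))
lam = np.sort(np.linalg.eigvalsh(B))[::-1]     # lam[0] == 1
H_spec = np.sum(1.0 / (1.0 - lam[1:]))

# H^i = sum_j pi_j H_{ij} equals the spectral sum for every start i
H_start = H_mat @ pi
assert np.allclose(H_start, H_spec)
```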

Rewriting H

In this section, we rewrite H in terms of a sum of a U-statistic and a negligible term. This is the starting point to prove Theorem 1. In a first step, we shall recall that all eigenvalues \(\lambda _k\) (except \(\lambda _1\)) are small in absolute value with high probability.

Lemma 2

Let \(\xi >1\) be some fixed number. For all \(k \ge 2 \) and some constant C not depending on n, we have with probability at least \(1-e^{-\theta (\log n)^\xi }\) (for some \(\theta >0\)) that

$$\begin{aligned} |\lambda _k| \lesssim C\sqrt{\frac{(\log n)^{16 \xi }}{np}}. \end{aligned}$$

Here \(\lesssim \) designates that the left-hand side is asymptotically less than or equal to the right-hand side.

Proof

This lemma is proven in [8], proof of Proposition 3.2. \(\square \)

Now consider (2.1). Since \(|\lambda _k|<1\) asymptotically almost surely for \(k\ge 2\), we can apply the geometric series to write

$$\begin{aligned} H&=\sum \limits _{k=2}^{n}\sum \limits _{l=0}^\infty \lambda _k^l=\sum \limits _{k=2}^{n} \Big (1+\lambda _k+\lambda _k^2+\lambda _k^3+\sum \limits _{l=4}^\infty \lambda _k^l\Big )\nonumber \\&=(n-1)+\sum \limits _{k=2}^{n}\lambda _k+\sum \limits _{k=2}^{n}\lambda _k^2+\sum \limits _{k=2}^{n}\lambda _k^3+\sum \limits _{k=2}^{n}\lambda _k^4\sum \limits _{l=0}^\infty \lambda _k^l \nonumber \\&=(n-3)+\sum \limits _{k=1}^{n}\lambda _k^2+\sum \limits _{k=2}^{n}\lambda _k^3+\sum \limits _{k=2}^{n}\lambda _k^4\frac{1}{1-\lambda _k}, \end{aligned}$$
(3.1)

since B has trace 0 and \(\lambda _1=1\). The constants implicit in the error terms could, in principle, depend on n. However, as we saw in Lemma 2, the \(\lambda _k\), \(k \ge 2\), are uniformly bounded away from 1, so that these constants do not depend on n. This will be important later, when we show that \(\sum \limits _{k=2}^{n}\lambda _k^3\) is negligible on the scale of a CLT. Anticipating this, consider

$$\begin{aligned} \sum \limits _{k=1}^{n}\lambda _k^2=\mathrm {tr}(B^2)&=\sum \limits _{i,j=1}^{n}\frac{a_{ij}a_{ji}}{d_id_j} =\sum \limits _{i,j=1}^{n}\frac{a_{ij}}{d_id_j} =2\sum \limits _{i<j}\frac{a_{ij}}{d_id_j} \end{aligned}$$
(3.2)

which follows from \(a_{ij}=a_{ji}\) and \(a_{ij}\in \{0,1\}\).
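Both (3.1) and (3.2) are exact identities for any connected realization: \(1/(1-\lambda )=1+\lambda +\lambda ^2+\lambda ^3+\lambda ^4/(1-\lambda )\) holds for every \(\lambda \ne 1\), so no convergence argument is needed for the check itself. A quick numerical sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 25, 0.5
# resample until connected, so that lam_k < 1 for k >= 2
while True:
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A + A.T).astype(float)
    if (np.linalg.matrix_power(A + np.eye(n), n) > 0).all():
        break

d = A.sum(axis=1)
B = A / np.sqrt(np.outer(d, d))
lam = np.sort(np.linalg.eigvalsh(B))[::-1]   # lam[0] == 1

# (3.2): sum of squared eigenvalues = tr(B^2) = 2 * sum_{i<j} a_ij / (d_i d_j)
pair_sum = sum(A[i, j] / (d[i] * d[j]) for i in range(n) for j in range(i + 1, n))
assert np.isclose(np.sum(lam**2), 2 * pair_sum)

# (3.1): H = (n-3) + sum_k lam_k^2 + sum_{k>=2} lam_k^3 + sum_{k>=2} lam_k^4/(1-lam_k)
H = np.sum(1.0 / (1.0 - lam[1:]))
rhs = (n - 3) + np.sum(lam**2) + np.sum(lam[1:]**3) + np.sum(lam[1:]**4 / (1 - lam[1:]))
assert np.isclose(H, rhs)
```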

For a constant \(K\in {{\mathbb {N}}}\), \(W\subset \{1,\dots ,n\}\) with \(|W|=K\) and vertices \(i,j_1,\dots ,j_K\), we define

$$\begin{aligned} \tilde{d}_i^W:=\sum \limits _{\begin{array}{c} l=1\\ l\notin W\cup \{i\} \end{array}}^{n}a_{il},\qquad \tilde{d}_i^{(j_1,\dots ,j_K)}:=\tilde{d}_i^{\{j_1,\dots ,j_K\}}\qquad \text { and }\qquad \tilde{d}_i^j:=\tilde{d}_i^{(j)}. \end{aligned}$$
(3.3)

Notice that \(\tilde{d}_i^j\) is binomially distributed with parameters \(n-2\) and p. We define for \(a,b\in {{\mathbb {N}}}\) and \(X\sim \mathrm {Bin}(n-a-1,p)\)

$$\begin{aligned} \mu _{a,b}:=\mathbb {E}\left[ \frac{1}{X+b}\right] \qquad \text {and}\qquad \sigma _{a,b}^2:=\mathbb {E}\left[ \frac{1}{(X+b)^2}\right] . \end{aligned}$$

Notice that both \(\mu _{a,b}\) and \(\sigma _{a,b}\) depend on n. For the sake of brevity, we omit this index. Let us first give the leading-order terms of these quantities:

Lemma 3

For \(a,b\in {{\mathbb {N}}}\),

$$\begin{aligned} \mu _{a,b}&=\frac{1}{np}+\frac{ap - (b - 1)}{(np)^2}\nonumber \\&+\frac{a^2p^2 - (2a - 1)p(b - 1) + (b - 1)(b - 2)}{(np)^3}+O\left( \frac{1}{(np)^4}\right) \end{aligned}$$
(3.4)
$$\begin{aligned} \sigma _{a,b}^2&=\frac{1}{(np)^2}+\frac{(2a - 1)p + 3 - 2b}{(np)^3}+O\left( \frac{1}{(np)^4}\right) \end{aligned}$$
(3.5)

Proof

We find, using Lemma 11, for \(X\sim \mathrm {Bin}(n-a-1,p)\)

$$\begin{aligned} \mu _{a,b}&=\mathbb {E}\left[ \frac{1}{X + 1}\right] -\mathbb {E}\left[ \frac{b - 1}{(X + 1)(X + 2)}\right] +\mathbb {E}\left[ \frac{(b - 1)(b - 2)}{(X + 1)(X + 2)(X + 3)}\right] +O\left( \frac{1}{(np)^4}\right) \\&=\frac{1}{(n - a)p}-\frac{b - 1}{(n - a)(n - a + 1)p^2}+\frac{(b - 1)(b - 2)}{(n - a)(n - a + 1)(n - a + 2)p^3}+O\left( \frac{1}{(np)^4}\right) \\&=\frac{1}{np}+\frac{ap - (b - 1)}{(np)^2}+\frac{a^2p^2 - (2a - 1)p(b - 1) + (b - 1)(b - 2)}{(np)^3}+O\left( \frac{1}{(np)^4}\right) \end{aligned}$$

and

$$\begin{aligned} \sigma _{a,b}^2&=\mathbb {E}\left[ \frac{1}{(X + 1)(X + 2)}\right] +\mathbb {E}\left[ \frac{3 - 2b}{(X + 1)(X + 2)(X + 3)}\right] +O\left( \frac{1}{(np)^4}\right) \\&=\frac{1}{(n - a)(n - a + 1)p^2}+\frac{3 - 2b}{(n - a)(n - a + 1)(n - a + 2)p^3}+O\left( \frac{1}{(np)^4}\right) \\&=\frac{1}{(np)^2}+\frac{(2a - 1)p + 3 - 2b}{(np)^3}+O\left( \frac{1}{(np)^4}\right) . \end{aligned}$$

\(\square \)
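The expansion (3.4) can be compared with the exact inverse moment, obtained by summing against the binomial distribution in log-space. The following sketch (with arbitrary parameter choices; not part of the proof) checks that the remainder is indeed of order \(1/(np)^4\):

```python
from math import lgamma, log, exp

def inv_moment(n, a, b, p, power=1):
    """Exact E[1/(X + b)^power] for X ~ Bin(n - a - 1, p), summed in log-space."""
    m = n - a - 1
    total = 0.0
    for k in range(m + 1):
        log_pmf = (lgamma(m + 1) - lgamma(k + 1) - lgamma(m - k + 1)
                   + k * log(p) + (m - k) * log(1 - p))
        total += exp(log_pmf) / (k + b) ** power
    return total

n, p, a, b = 2000, 0.3, 2, 2
mu_exact = inv_moment(n, a, b, p)
mu_approx = (1 / (n * p)
             + (a * p - (b - 1)) / (n * p) ** 2
             + (a * a * p * p - (2 * a - 1) * p * (b - 1)
                + (b - 1) * (b - 2)) / (n * p) ** 3)
# the remainder of (3.4) is O(1/(np)^4)
assert abs(mu_exact - mu_approx) < 10 / (n * p) ** 4
```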

With

$$\begin{aligned} \mu :=\mu _{1,1}= \mathbb {E}\left[ \frac{1}{\tilde{d}_1^2+1}\right] \end{aligned}$$
(3.6)

we are led to consider the U-statistic

$$\begin{aligned} U_n= \frac{\sqrt{2}}{n\theta _n}\sum \limits _{1\le i<j\le n}\left( \frac{a_{i,j}}{{d}_i{d}_j}-\mu ^2p\right) =\frac{\sqrt{2}}{n\theta _n}\sum \limits _{1\le i<j\le n}\left( \frac{a_{i,j}}{(\tilde{d}_i^j + 1)(\tilde{d}_j^i + 1)}-\mu ^2p\right) .\nonumber \\ \end{aligned}$$
(3.7)

Here \(\theta _n\) is a normalizing sequence we will specify in (3.9). Note, however, that \(U_n\) has an unusual structure for a U-statistic due to the lack of independence of the \(a_{i,j}\) and the \(\tilde{d}_i^j\) as well as between the \(\tilde{d}_i^j\) themselves.

Lemma 3 shows in particular that the inverse moments of the \(d_i\) are asymptotically insensitive to removing a constant number of vertices.

In what follows, for two sequences \(a_n, b_n\) we will write \(a_n\approx b_n\), \(a_n\lesssim b_n\), and \(a_n \gtrsim b_n\), respectively, to denote that \(a_n=b_n(1+o(1))\), \(a_n= O(b_n)\), and \(a_n= \varOmega (b_n)\), respectively, in the Landau notation. Consider

$$\begin{aligned} \sigma ^2:=\sigma _{1,1}^2=\mathbb {E}\Bigl [\Bigl (\frac{1}{\tilde{d}_1^2+1}\Bigr )^2\Bigr ]. \end{aligned}$$
(3.8)

Corollary 4

For \(\mu \) and \(\sigma \) as defined by (3.6), (3.8), the following relation holds:

$$\begin{aligned} \sigma ^2-\mu ^2\approx \frac{1-p}{(np)^3}. \end{aligned}$$

Proof

We immediately find from (3.4), (3.5)

$$\begin{aligned} \sigma ^2-\mu ^2&=\frac{1}{(np)^2}\left( 1+\frac{1+p}{np}+O\left( \frac{1}{(np)^2}\right) \right) -\frac{1}{(np)^2}\left( 1+\frac{p}{np}+O\left( \frac{1}{(np)^2}\right) \right) ^2\\&=\frac{1}{(np)^2}\left( \frac{1+p}{np}-\frac{2p}{np}+O\left( \frac{1}{(np)^2}\right) \right) \approx \frac{1-p}{(np)^3} \end{aligned}$$

which yields the claim. \(\square \)
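Corollary 4 admits the same kind of numerical check as Lemma 3; the helper below recomputes the exact inverse moments by summing against the binomial distribution (arbitrary parameters, a sanity sketch only):

```python
from math import lgamma, log, exp

def inv_moment(n, a, b, p, power=1):
    """Exact E[1/(X + b)^power] for X ~ Bin(n - a - 1, p), summed in log-space."""
    m = n - a - 1
    total = 0.0
    for k in range(m + 1):
        log_pmf = (lgamma(m + 1) - lgamma(k + 1) - lgamma(m - k + 1)
                   + k * log(p) + (m - k) * log(1 - p))
        total += exp(log_pmf) / (k + b) ** power
    return total

n, p = 10000, 0.3
mu = inv_moment(n, 1, 1, p)            # mu_{1,1}
sigma2 = inv_moment(n, 1, 1, p, 2)     # sigma_{1,1}^2
ratio = (sigma2 - mu ** 2) / ((1 - p) / (n * p) ** 3)
assert abs(ratio - 1) < 0.05           # sigma^2 - mu^2 ~ (1-p)/(np)^3
```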

With these notations and results on inverse moments, we can define

$$\begin{aligned} \theta _n^2:=np^2\mu ^2(\sigma ^2-\mu ^2)+\frac{1}{2}p(\sigma ^4-\mu ^4). \end{aligned}$$
(3.9)

Analogously to Corollary 4, \(\sigma ^4-\mu ^4=\varTheta \left( \frac{1}{(np)^5}\right) \). Therefore,

$$\begin{aligned} \theta _n^2\approx np^2\mu ^2(\sigma ^2-\mu ^2)\approx \frac{np^2(1-p)}{(np)^5} \end{aligned}$$
(3.10)

A CLT for \(U_n\)

We now prove the following CLT for the U-statistic \({U}_n\) defined in (3.7):

Theorem 2

Let \(\mu \) be defined as in (3.6) and \(\theta _n\) be given by (3.9). Assume that \(p=p_n\in (0,1)\) satisfies \(np/(\log n)^{16\xi }\rightarrow \infty \) and \(n(1-p)\rightarrow \infty \). Then,

$$\begin{aligned} U_n \xrightarrow [n\rightarrow \infty ]{\mathcal {D}}\mathcal {N}\left( 0,1\right) . \end{aligned}$$

We begin to prove this theorem by decomposing \(U_n\) into an incomplete U-statistic and a remaining term: For

$$\begin{aligned} {V}_n:=\frac{\sqrt{2}}{n\theta _n}\sum \limits _{i<j}a_{i,j}\left( \frac{1}{d_id_j}-\mu ^2\right) \quad \text { and }\quad {Y}_n:=\frac{\sqrt{2}\mu ^2}{n\theta _n}\sum \limits _{i<j}(a_{i,j}-p), \end{aligned}$$
(4.1)

we have \({U}_n={V}_n+{Y}_n\). We decompose \({V}_n\) further, similar to the Hoeffding decomposition for U-statistics, by writing \({V}_n={V}_n'+{V}_n''\) with

$$\begin{aligned} {V}_n':=\frac{\sqrt{2}}{n\theta _n}\sum \limits _{i<j}a_{i,j}\left( \frac{1}{d_i}-\mu \right) \left( \frac{1}{d_j}-\mu \right) \quad \text { and }\quad {V}_n'':=\frac{\sqrt{2}\mu }{n\theta _n}\sum \limits _{i\ne j}a_{i,j}\left( \frac{1}{d_i}-\mu \right) .\nonumber \\ \end{aligned}$$
(4.2)

We find that \({V}_n'\) is negligible:

Proposition 5

For p satisfying \(np\rightarrow \infty \),

$$\begin{aligned} {V}_n'\xrightarrow [n\rightarrow \infty ]{{\mathbb {P}}}0. \end{aligned}$$

Proof

Using (3.3), we can write

$$\begin{aligned} \mathbb {E}\left[ ({V}_n')^2\right]&=\frac{2}{n^2\theta _n^2}\sum \limits _{i<j}\mathbb {E}\left[ a_{i,j}^2\bigg (\frac{1}{\tilde{d}_i^j + 1}-\mu \bigg )^2\bigg (\frac{1}{\tilde{d}_j^i + 1}-\mu \bigg )^2\right] \nonumber \\&\quad +\frac{2}{n^2\theta _n^2}\underset{|\{i,j\}\cap \{k,l\}|=1}{\sum \limits _{i<j}\sum \limits _{k<l}}\nonumber \\&\quad \mathbb {E}\left[ a_{i,j}\bigg (\frac{1}{\tilde{d}_i^j + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_j^i + 1}-\mu \bigg ) a_{k,l}\bigg (\frac{1}{\tilde{d}_k^l + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_l^k + 1}-\mu \bigg )\right] \nonumber \\&\quad +\frac{2}{n^2\theta _n^2}\underset{|\{i,j\}\cap \{k,l\}|=0}{\sum \limits _{i<j}\sum \limits _{k<l}}\nonumber \\&\quad \mathbb {E}\left[ a_{i,j}\bigg (\frac{1}{\tilde{d}_i^j + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_j^i + 1}-\mu \bigg ) a_{k,l}\bigg (\frac{1}{\tilde{d}_k^l + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_l^k + 1}-\mu \bigg )\right] \nonumber \\&=\frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \mathbb {E}\left[ a_{1,2}^2\bigg (\frac{1}{\tilde{d}_1^2 + 1}-\mu \bigg )^2\bigg (\frac{1}{\tilde{d}_2^1 + 1}-\mu \bigg )^2\right] \nonumber \\&\quad +\frac{2}{n^2\theta _n^2}2(n-2)\left( {\begin{array}{c}n\\ 2\end{array}}\right) \nonumber \\&\quad \mathbb {E}\left[ a_{1,2}\bigg (\frac{1}{\tilde{d}_1^2 + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_2^1 + 1}-\mu \bigg ) a_{1,3}\bigg (\frac{1}{\tilde{d}_1^3 + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_3^1 + 1}-\mu \bigg )\right] \nonumber \\&\quad +\frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \left( {\begin{array}{c}n-2\\ 2\end{array}}\right) \nonumber \\&\quad \mathbb {E}\left[ a_{1,2}\bigg (\frac{1}{\tilde{d}_1^2 + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_2^1 + 1}-\mu \bigg )a_{3,4}\bigg (\frac{1}{\tilde{d}_3^4 + 1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_4^3 + 1}-\mu \bigg )\right] \nonumber \\&=:\frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \mathcal 
{S}_2+\frac{2}{n^2\theta _n^2}2(n-2)\left( {\begin{array}{c}n\\ 2\end{array}}\right) \mathcal {S}_1+\frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \left( {\begin{array}{c}n-2\\ 2\end{array}}\right) \mathcal {S}_0 \end{aligned}$$
(4.3)

by identical distribution. We consider \(\mathcal {S}_0\), \(\mathcal {S}_1\) and \(\mathcal {S}_2\) separately:

By independence between \(a_{i,j}\), \(\tilde{d}_i^j\) and \(\tilde{d}_j^i\), we obtain \(\mathcal {S}_2=p(\sigma ^2-\mu ^2)^2.\) By \(np\rightarrow \infty \), (3.4) and Corollary 4, we therefore obtain

$$\begin{aligned} \frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \mathcal {S}_2\approx \frac{p(\sigma ^2-\mu ^2)^2}{np^2\mu ^2(\sigma ^2-\mu ^2)}=\frac{\sigma ^2-\mu ^2}{np\mu ^2}=O\left( \frac{1-p}{(np)^2}\right) \xrightarrow []{n\rightarrow \infty }0. \end{aligned}$$
(4.4)

Secondly, with the notation from (3.3),

$$\begin{aligned} a_{1,2}a_{1,3}\frac{1}{\tilde{d}_1^{2}+1}=a_{1,2}a_{1,3}\frac{1}{\tilde{d}_1^{3}+1}=a_{1,2}a_{1,3}\frac{1}{\tilde{d}_1^{(2,3)}+2} \end{aligned}$$

and hence by independence

$$\begin{aligned} \mathcal {S}_1&=\mathbb {E}\left[ a_{1,2}a_{1,3}\bigg (\frac{1}{\tilde{d}_1^{(2,3)}+2}-\mu \bigg )^2\bigg (\frac{1}{\tilde{d}_2^1+1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_3^1+1}-\mu \bigg )\right] \nonumber \\&=p^2\mathbb {E}\left[ \bigg (\frac{1}{\tilde{d}_1^{(2,3)}+2}-\mu \bigg )^2\right] \mathbb {E}\left[ \bigg (\frac{1}{\tilde{d}_2^1+1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_3^1+1}-\mu \bigg )\right] \end{aligned}$$
(4.5)

The former expectation can be computed using (3.4) and (3.5):

$$\begin{aligned}&\mathbb {E}\left[ \bigg (\frac{1}{\tilde{d}_1^{(2,3)}+2}-\mu \bigg )^2\right] =\sigma _{2,2}^2-2\mu _{2,2}\mu _{1,1}+\mu _{1,1}^2\nonumber \\&=\frac{1}{(np)^2}+\frac{3p-1}{(np)^3}-2\frac{1}{(np)^2}\left( 1+\frac{2p-1}{np}\right) \left( 1+\frac{p}{np}\right) \nonumber \\&\quad +\frac{1}{(np)^2}\left( 1+\frac{p}{np}\right) ^2+o\left( \frac{1}{(np)^3}\right) \nonumber \\&=\frac{3p-1-2\cdot (3p-1)+2p}{(np)^3}+o\left( \frac{1}{(np)^3}\right) =\frac{1-p}{(np)^3}+o\left( \frac{1}{(np)^3}\right) \end{aligned}$$
(4.6)

Furthermore, by the Cauchy–Schwarz inequality,

$$\begin{aligned} \mathbb {E}\left[ \bigg (\frac{1}{\tilde{d}_2^1+1}-\mu \bigg )\bigg (\frac{1}{\tilde{d}_3^1+1}-\mu \bigg )\right] \le \mathbb {E}\left[ \bigg (\frac{1}{\tilde{d}_2^1+1}-\mu \bigg )^2\right] =\sigma ^2-\mu ^2 \end{aligned}$$
(4.7)

Combining (3.4), \(np\rightarrow \infty \), (4.5), (4.6) and (4.7) yields

$$\begin{aligned} \frac{2}{n^2\theta _n^2}2(n-2)\left( {\begin{array}{c}n\\ 2\end{array}}\right) \mathcal {S}_1\approx \frac{2n^3p^2}{n^3p^2\mu ^2(\sigma ^2-\mu ^2)}\frac{(1-p)}{(np)^3}(\sigma ^2-\mu ^2)\approx \frac{2(1-p)}{np}\xrightarrow []{n\rightarrow \infty }0.\nonumber \\ \end{aligned}$$
(4.8)

Finally, consider \(\mathcal {S}_0\). We use the law of total expectation, conditioning on the possible values of the random vector \(\mathbf {a}:=(a_{1,3};\,a_{1,4};\,a_{2,3};\,a_{2,4})\), and independence. In the following computation, the vector \(\mathbf {A}\in \{0,1\}^4\) will have components \(\mathbf {A}_1, \ldots , \mathbf {A}_4\):

by (3.4). Further, we need to consider all possible choices of \(\mathbf {A}\):

  • If every entry of \(\mathbf {A}\) is 1, the product of the fractions in the sum is \(\frac{(2p-2)^4}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =p^4\). There is one (\(\left( {\begin{array}{c}4\\ 4\end{array}}\right) =1\)) possibility to choose the nonzero entries of \(\mathbf {A}\).

  • If three entries of \(\mathbf {A}\) are 1 and the other one is 0, the product of the fractions in the sum is \(\frac{(2p-2)^2(2p-1)^2}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =p^3(1-p)\). There are \(\left( {\begin{array}{c}4\\ 3\end{array}}\right) =4\) possibilities to choose the nonzero entries of \(\mathbf {A}\).

  • If either \(\mathbf {A}_2=\mathbf {A}_3=1\) and \(\mathbf {A}_1=\mathbf {A}_4=0\) or \(\mathbf {A}_2=\mathbf {A}_3=0\) and \(\mathbf {A}_1=\mathbf {A}_4=1\), the product of the fractions in the sum is \(\frac{(2p-1)^4}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =p^2(1-p)^2\). We have 2 possibilities for this case (the ones specified above).

  • If two entries of \(\mathbf {A}\) are 1 and the other two zero and we are not in the case before, the product of the fractions in the sum is \(\frac{(2p-2)(2p-1)^2(2p)}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =p^2(1-p)^2\). We have \(\left( {\begin{array}{c}4\\ 2\end{array}}\right) -2=4\) possibilities for this case.

  • If one entry of \(\mathbf {A}\) is 1 and the others are 0, the product of the fractions in the sum is \(\frac{(2p-1)^2(2p)^2}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =p(1-p)^3\). There are \(\left( {\begin{array}{c}4\\ 1\end{array}}\right) =4\) possibilities to choose the nonzero entry of \(\mathbf {A}\).

  • If all entries of \(\mathbf {A}\) are 0, the product of the fractions in the sum is \(\frac{(2p)^4}{(np)^8}\), while \(\mathbb {P}\left( \mathbf {a}=\mathbf {A}\right) =(1-p)^4\). There is \(\left( {\begin{array}{c}4\\ 0\end{array}}\right) =1\) possibility to choose the nonzero entries of \(\mathbf {A}\).

If we enter these findings into the above formula, we obtain

$$\begin{aligned} \mathcal {S}_0&\approx p^2\left( p^4\frac{(2p-2)^4}{(np)^8}+4p^3(1-p)\frac{(2p-2)^2(2p-1)^2}{(np)^8}+2p^2(1-p)^2\frac{(2p-1)^4}{(np)^8}\right. \\&\quad \, \left. +\,4p^2(1 - p)^2\frac{(2p - 2)(2p - 1)^2(2p)}{(np)^8}+4p(1 - p)^3\frac{(2p - 1)^2(2p)^2}{(np)^8}+(1 - p)^4\frac{(2p)^4}{(np)^8}\right) \\&= p^2\cdot \frac{p^2(1-p)^2}{(np)^8}\Big (p^24(2p-2)^2+4p(1-p)4(2p-1)^2+2(2p-1)^4\\&\quad \, \, +4(2p-2)(2p-1)^2(2p)+4p(1-p)(2p-1)^24+(1-p)^24(2p)^2\Big )\\&=O\left( \frac{p^4(1-p)^2}{(np)^8}\right) . \end{aligned}$$
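The bookkeeping in the last display is exact polynomial algebra; evaluating both forms on a grid of values of p confirms the factorization:

```python
import numpy as np

p = np.linspace(0.05, 0.95, 19)

# the six weighted products, before factoring out p^2 (1-p)^2 / (np)^8
lhs = (p**4 * (2*p - 2)**4
       + 4 * p**3 * (1 - p) * (2*p - 2)**2 * (2*p - 1)**2
       + 2 * p**2 * (1 - p)**2 * (2*p - 1)**4
       + 4 * p**2 * (1 - p)**2 * (2*p - 2) * (2*p - 1)**2 * (2*p)
       + 4 * p * (1 - p)**3 * (2*p - 1)**2 * (2*p)**2
       + (1 - p)**4 * (2*p)**4)

# the factored form p^2 (1-p)^2 * ( ... )
rhs = p**2 * (1 - p)**2 * (p**2 * 4 * (2*p - 2)**2
       + 4 * p * (1 - p) * 4 * (2*p - 1)**2
       + 2 * (2*p - 1)**4
       + 4 * (2*p - 2) * (2*p - 1)**2 * (2*p)
       + 4 * p * (1 - p) * (2*p - 1)**2 * 4
       + (1 - p)**2 * 4 * (2*p)**2)

assert np.allclose(lhs, rhs)
```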

Therefore,

$$\begin{aligned} \frac{2}{n^2\theta _n^2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) \left( {\begin{array}{c}n-2\\ 2\end{array}}\right) \mathcal {S}_0&=O\left( \frac{n^4p^4(1-p)^2}{n^2np^2\mu ^2(\sigma ^2-\mu ^2)(np)^8}\right) \nonumber \\&=O\left( \frac{(np)^4(1-p)^2}{(np)^{10}n\frac{1}{(np)^2}\frac{1-p}{(np)^3}}\right) =O\left( \frac{1-p}{n(np)}\right) \xrightarrow []{n\rightarrow \infty }0 \end{aligned}$$
(4.9)

by \(np\rightarrow \infty \), (3.4) and (3.5). Combining (4.3) with (4.4), (4.8) and (4.9) yields

$$\begin{aligned} \mathbb {E}\left[ ({V}_n')^2\right] \xrightarrow []{n\rightarrow \infty }0 \end{aligned}$$

and, by Markov’s inequality, the claim follows. \(\square \)

We summarize the other two terms from the decomposition of \({U}_n\) in (4.1) and (4.2), \({V}_n''\) and \({Y}_n\), in

$$\begin{aligned} {W}_n&:={V}_n''+{Y}_n= \frac{\sqrt{2}\mu }{n\theta _n}\sum \limits _{1\le i\ne j\le n}a_{i,j}\left( \frac{1}{d_i}-\mu \right) +\frac{\sqrt{2}\mu ^2}{n\theta _n}\sum \limits _{1\le i< j\le n}(a_{i,j}-p) \end{aligned}$$

and transform it by applying the symmetry of the adjacency matrix A as well as the facts that \(d_i=\sum \limits _{\begin{array}{c} j=1,\dots ,n\\ j\ne i \end{array}}a_{i,j}\) for every \(i=1,\dots ,n\) and \(2|E|=\sum \limits _{i=1}^nd_i\):

$$\begin{aligned} {W}_n&=\frac{\sqrt{2}\mu }{n\theta _n}\sum \limits _{i=1}^n\left( \frac{1}{d_i}-\mu \right) \sum \limits _{\begin{array}{c} j=1,\dots ,n\\ j\ne i \end{array}}a_{i,j}+\frac{\sqrt{2}\mu ^2}{2n\theta _n}\sum \limits _{i=1}^n\sum \limits _{\begin{array}{c} j=1,\dots ,n\\ j\ne i \end{array}}(a_{i,j}-p)\nonumber \\&=\frac{\sqrt{2}\mu }{n\theta _n}\sum \limits _{i=1}^n\left( \frac{1}{d_i}-\mu \right) d_i+\frac{\sqrt{2}\mu ^2}{2n\theta _n}\left( \sum \limits _{i=1}^nd_i-n(n-1)p\right) \nonumber \\&=\frac{\sqrt{2}\mu }{n\theta _n}\left( n-\mu \sum \limits _{i=1}^nd_i\right) +\frac{\sqrt{2}\mu ^2}{2n\theta _n}\left( \sum \limits _{i=1}^nd_i-n(n-1)p\right) \nonumber \\&=\frac{\sqrt{2}\mu ^2}{n\theta _n}\left( \frac{n}{\mu }-\frac{1}{2}n(n-1)p-|E|\right) \nonumber \\&=\frac{\sqrt{2}\mu ^2}{n\theta _n}\left( \frac{n(n-1)p}{1+O(e^{-np})}-\frac{1}{2}n(n-1)p-|E|\right) \nonumber \\&=\frac{\sqrt{2}\mu ^2}{n\theta _n}\left( \frac{O(e^{-np})n(n-1)p}{1+O(e^{-np})}+\frac{1}{2}n(n-1)p-|E|\right) \end{aligned}$$
(4.10)

by Lemma 11. The first summand is negligible:

Lemma 6

If \(p=p_n\) satisfies \(np/(\log n)^{16\xi }\rightarrow \infty \),

$$\begin{aligned} \frac{\mu ^2}{n\theta _n}\frac{O(e^{-np})n(n-1)p}{1-O(e^{-np})}=o(1). \end{aligned}$$

Proof

Using (3.10), we estimate

$$\begin{aligned} \frac{\mu ^2}{n\theta _n}\frac{O(e^{-np})n(n-1)p}{1-O(e^{-np})}&= O\left( \frac{\frac{1}{(np)^2}e^{-np}n^2p}{n\sqrt{\frac{np^2}{(np)^5}}}\right) =O\left( \frac{1}{np}e^{-np}\sqrt{\frac{(np)^4}{p}}\right) \\&=O\left( \frac{np}{\sqrt{p}}e^{-np}\right) . \end{aligned}$$

Clearly, \(npe^{-np}\le e^{-\sqrt{np}}\) for np sufficiently large. By \(np/(\log n)^{16\xi }\rightarrow \infty \), we have \(np>(\log n)^{16\xi }>(\log n)^{2}\) for sufficiently large n and therefore

$$\begin{aligned} \frac{e^{-\sqrt{np}}}{\sqrt{p}}\le \frac{e^{-\log n}}{\sqrt{p}}=\frac{1}{n\sqrt{p}} \end{aligned}$$

which converges to 0. \(\square \)
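The elementary inequality used in this proof, \(xe^{-x}\le e^{-\sqrt{x}}\), i.e., \(\log x+\sqrt{x}\le x\), fails for small x but holds for all \(x\ge 4\), which suffices since \(np\rightarrow \infty \). A quick check:

```python
import numpy as np

x = np.linspace(4.0, 1e6, 200000)
# x * exp(-x) <= exp(-sqrt(x))  <=>  log(x) + sqrt(x) <= x
assert np.all(np.log(x) + np.sqrt(x) <= x)
```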

Proof of Theorem 2

Using Lemma 6 and (4.10) yields

$$\begin{aligned} {W}_n=-\frac{\sqrt{2}\mu ^2}{n\theta _n}\left( |E|-\left( {\begin{array}{c}n\\ 2\end{array}}\right) p\right) +o(1). \end{aligned}$$

By the Lindeberg–Feller CLT,

$$\begin{aligned} \frac{|E|-\left( {\begin{array}{c}n\\ 2\end{array}}\right) p}{\sqrt{\left( {\begin{array}{c}n\\ 2\end{array}}\right) p(1-p)}}\xrightarrow [n\rightarrow \infty ]{\mathcal {D}}\mathcal {N}\left( 0,1\right) \end{aligned}$$
(4.11)

because the number of edges in the random graph, |E|, is binomially distributed with parameters \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) and p and \(np \rightarrow \infty \). Furthermore, by (3.10)

$$\begin{aligned} \frac{\sqrt{2}\mu ^2}{n\theta _n}\sqrt{\left( {\begin{array}{c}n\\ 2\end{array}}\right) p(1-p)}&\approx \frac{\frac{1}{(np)^2}\sqrt{n^2p(1-p)}}{n\sqrt{\frac{np^2(1-p)}{(np)^5}}}=\sqrt{\frac{n^2p(1-p) (np)^5}{(np)^4n^2np^2(1-p)}}=1. \end{aligned}$$
(4.12)

Combining (4.11) and (4.12) yields

$$\begin{aligned} {W}_n\xrightarrow [n\rightarrow \infty ]{\mathcal {D}}\mathcal {N}\left( 0,1\right) . \end{aligned}$$
(4.13)

Applying Slutsky's theorem to the decomposition \({U}_n={V}_n'+{W}_n\) given by (4.1) and (4.2), with the convergence results from Proposition 5 and (4.13), we obtain the claim. \(\square \)

Proof of Theorem 1

Recall that by (3.1) we know that

$$\begin{aligned} H=(n-3)+\sum \limits _{k=1}^{n}\lambda _k^2+\sum \limits _{k=2}^{n}\lambda _k^3+\sum \limits _{k=2}^{n}\lambda _k^4\frac{1}{1-\lambda _k}. \end{aligned}$$
(5.1)

Moreover, in (3.2), we saw that \( \sum \limits _{k=1}^{n}\lambda _k^2=\sum \limits _{i,j=1}^{n}\frac{a_{ij}}{d_id_j}=2\sum \limits _{i<j}\frac{a_{ij}}{d_id_j}. \) To this term, we can apply Theorem 2 to obtain a CLT. It remains to deal with the other two sums in (5.1). We will prove that

Proposition 7

Under the assumptions of Theorem 1

$$\begin{aligned} \frac{1}{n\theta _n}\left[ \sum \limits _{k=2}^n\lambda _k^3+\frac{3(1-p)}{np}\right] \xrightarrow [n\rightarrow \infty ]{{\mathbb {P}}}0. \end{aligned}$$

as well as

Proposition 8

Under the assumptions of Theorem 1

$$\begin{aligned} \frac{1}{n\theta _n}\left[ \sum \limits _{k=2}^n\lambda _k^4\frac{1}{1-\lambda _k}-\frac{2(1-p)^2}{np^2}\right] \xrightarrow [n\rightarrow \infty ]{{\mathbb {P}}}0 \end{aligned}$$

Both propositions can be proved by computing second moments of the respective terms and applying Markov’s inequality, which we will defer to the end of this section.

To complete the proof of Theorem 1 from here, we notice that (5.1) gives

$$\begin{aligned}&\frac{1}{\sqrt{2}n\theta _n}\left( H-(n-3)-\frac{1}{p}+\frac{2-3p}{np}-\frac{2(1-p)^2}{np^2}\right) \\&\quad =\frac{\sqrt{2}}{n\theta _n}\left( \sum \limits _{i<j}\frac{a_{ij}}{d_id_j}-\mu ^2\left( {\begin{array}{c}n\\ 2\end{array}}\right) p\right) +\frac{1}{\sqrt{2}n\theta _n}\left[ 2\mu ^2\left( {\begin{array}{c}n\\ 2\end{array}}\right) p-\left( \frac{1}{p}+\frac{1}{np}\right) \right] \\&\qquad +\frac{1}{\sqrt{2}n\theta _n}\left[ \sum \limits _{k=2}^n\lambda _k^3+\frac{3(1-p)}{np}\right] +\frac{1}{\sqrt{2}n\theta _n}\left[ \sum \limits _{k=2}^n\lambda _k^4\frac{1}{1-\lambda _k}-\frac{2(1-p)^2}{np^2}\right] . \end{aligned}$$

By Theorem 2, the first summand converges in distribution to a Gaussian random variable. The second term satisfies, by Lemma 11,

$$\begin{aligned}&\frac{1}{\sqrt{2}n\theta _n}\left| 2\mu ^2\left( {\begin{array}{c}n\\ 2\end{array}}\right) p-\left( \frac{1}{p}+\frac{1}{np}\right) \right| \approx \frac{1}{\sqrt{2}n\theta _n}\left| \frac{n(n-1)p}{(n-1)^2p^2}-\left( \frac{1}{p}+\frac{1}{np}\right) \right| \\&\quad =\frac{1}{\sqrt{2}np\theta _n}\left| \sum \limits _{l=0}^\infty \left( \frac{1}{n}\right) ^l-\left( 1+\frac{1}{n}\right) \right| =\frac{1}{\sqrt{2}np\theta _n}\sum \limits _{l=2}^\infty \left( \frac{1}{n}\right) ^l\\&\quad =\frac{1}{\sqrt{2}n^3p\theta _n}\frac{n}{n - 1}\le \frac{1}{n^3p\theta _n}, \end{aligned}$$

which converges to 0 as \(n\rightarrow \infty \) by (3.10). The third and fourth summands converge to 0 in probability by Propositions 7 and 8. Thus, Slutsky's theorem yields the assertion of Theorem 1.

We now finish off by proving Propositions 7 and 8.

Proof of Proposition 7

Just as above we see that \(\sum \limits _{k=1}^{n}\lambda _k^3=\mathrm {tr}(B^3)=\sum \limits _{i=1}^{n}z_{ii},\) where \(z_{ij}\) are the entries of \(B^3\). The entries of B are given by \(b_{ij}=\frac{a_{ij}}{\sqrt{d_i}\sqrt{d_j}}\) for \(i,j\in \{1,\dots ,n\}\), hence

$$\begin{aligned} z_{ii}= \sum \limits _{j,k=1}^{n}\frac{a_{ij}a_{jk}a_{ki}}{d_id_jd_k}=\sum \limits _{\begin{array}{c} j,k=1\\ i,j,k \text { p.d.} \end{array}}^{n}\frac{a_{ij}a_{jk}a_{ki}}{d_id_jd_k} \end{aligned}$$

since \(a_{ii}=0\) (where “p.d.” stands for “pairwise different”). Thus,

$$\begin{aligned} \sum \limits _{k=1}^{n}\lambda _k^3=\sum \limits _{\begin{array}{c} i_1,i_2,i_3 =1\\ i_1,i_2,i_3 \text { p.d.} \end{array}}^{n}\frac{a_{i_1i_2}a_{i_2i_3}a_{i_3i_1}}{d_{i_1}d_{i_2}d_{i_3}} \end{aligned}$$
(5.2)
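Identity (5.2) is easy to confirm numerically. The following sketch (the sizes n and p are illustrative choices, not values from the text) samples one Erdős–Rényi graph, forms B, and compares the eigenvalue sum with \(\mathrm {tr}(B^3)\):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 300, 0.3  # illustrative sizes; all degrees are positive w.h.p.

# sample a symmetric Erdős–Rényi adjacency matrix without self-loops
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(float)

d = A.sum(axis=1)                 # degrees
B = A / np.sqrt(np.outer(d, d))   # B_ij = a_ij / (sqrt(d_i) sqrt(d_j))

lam = np.linalg.eigvalsh(B)       # B is symmetric
# sum_k lambda_k^3 = tr(B^3); the restriction to pairwise different indices
# in (5.2) is automatic, since a_ii = 0 kills every degenerate triple
assert np.isclose((lam ** 3).sum(), np.trace(B @ B @ B))
```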

We have by (3.4), (5.2)

$$\begin{aligned} \mathbb {E}\left[ \sum \limits _{k=1}^n\lambda _k^3\right]&=\sum \limits _{\begin{array}{c} i_1,i_2,i_3\\ \text {p.d.} \end{array}}\mathbb {E}\left[ \frac{a_{i_1i_2}a_{i_2i_3}a_{i_3i_1}}{(\tilde{d}_{i_1}^{(i_2,i_3)}+2)(\tilde{d}_{i_2}^{(i_1,i_3)}+2)(\tilde{d}_{i_3}^{(i_1,i_2)}+2)}\right] \nonumber \\&=\sum \limits _{\begin{array}{c} i_1,i_2,i_3\\ \text {p.d.} \end{array}}p^3\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_1}^{(i_2,i_3)}+2)}\right] \mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_2}^{(i_1,i_3)}+2)}\right] \mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_3}^{(i_1,i_2)}+2)}\right] \nonumber \\&=n(n-1)(n-2)p^3\mu _{2,2}^3\nonumber \\&= (n^3-3n^2+2n)p^3\frac{1}{(np)^3}\left( 1-\frac{1-2p}{(np)}+\frac{4p^2-3p}{(np)^2}+O\left( (np)^{-3}\right) \right) ^3\nonumber \\&=\left( 1-\frac{3}{n}+\frac{2}{n^2}\right) \left( 1-\frac{3 - 6p}{np}+\frac{3(1 - 4p + 4p^2) + 12p^2 - 9p}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \right) \nonumber \\&=1-\frac{3(1-p)}{(np)}+\frac{8p^2-12p+3}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \end{aligned}$$
(5.3)

and further

$$\begin{aligned} \mathbb {E}\left[ \Big (\sum \limits _{k=1}^n\lambda _k^3\Big )^2\right]&=\underbrace{\sum \limits _{\begin{array}{c} i_1,i_2,i_3\\ \text {p.d.} \end{array}}\sum \limits _{\begin{array}{c} j_1,j_2,j_3\\ \text {p.d.} \end{array}}}_{(\square )}\underbrace{\mathbb {E}\left[ \frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_1}}{d_{i_1}d_{i_2}d_{i_3}}\frac{a_{j_1,j_2}a_{j_2,j_3}a_{j_3,j_1}}{d_{j_1}d_{j_2}d_{j_3}}\right] }_{(*)}, \end{aligned}$$
(5.4)

We differentiate between the different values of \(\ell :=|I\cup J|\), \(I:=\{i_1,i_2,i_3\}\), \(J:=\{j_1,j_2,j_3\}\). To abbreviate notation, we write \((\square )\) for the number of possibilities to choose \(i_1,i_2,i_3,j_1,j_2,j_3\) in the sums in (5.4) in each considered case and \((\square )(*)\) for the total contribution of the considered case to (5.4). We notice that \((*)\) is nonzero if and only if the following condition holds:

$$\begin{aligned} i_1\ne i_2\ne i_3\ne i_1\qquad \text {and}\qquad j_1\ne j_2\ne j_3\ne j_1 \end{aligned}$$
(5.5)

Therefore, we only consider the cases in \((\square )\) in which (5.5) is satisfied.

  • \(\ell =3\): There are \(6n(n-1)(n-2)=O(n^3)\) possibilities for this case (n possibilities for \(i_1\), \(n-1\) for \(i_2\), \(n-2\) for \(i_3\), since \(i_1,i_2,i_3\) are pairwise different, and further 3 possibilities for \(j_1\), 2 for \(j_2\), and 1 for \(j_3\), because \(j_1,j_2,j_3\) are chosen from \(i_1,i_2,i_3\) in an arbitrary order). Then,

    $$\begin{aligned} (*)&=\mathbb {E}\bigg [\frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_1}}{d_{i_1}^2d_{i_2}^2d_{i_3}^2}\bigg ]\\&=p^3\mathbb {E}\left[ \Big (\frac{1}{\tilde{d}_{i_1}^{I\cup J} + 2}\Big )^2\right] ^3=p^3\left( \sigma _{2,2}^2\right) ^3=p^3O\left( \frac{1}{(np)^{6}}\right) , \end{aligned}$$

    by (3.5). Thus,

    $$\begin{aligned} (\square )(*)&=O(n^3)p^3O\left( \frac{1}{(np)^{6}}\right) =O\left( \frac{1}{(np)^{3}}\right) . \end{aligned}$$
  • \(\ell =4\): There are \(18n(n-1)(n-2)(n-3)=18n^4\left( 1+O\left( (np)^{-1}\right) \right) \) possibilities for this case (\(n(n-1)(n-2)\) possibilities for \(i_1,i_2,i_3\), 3 possibilities to choose two values from \(i_1,i_2,i_3\) to be copied to \(j_1,j_2,j_3\), \((n-3)\) possibilities to choose the last value among \(j_1,j_2,j_3\) and then \(3\cdot 2\cdot 1 \) possibilities to arrange the indices \(j_1,j_2,j_3\)). Then, w.l.o.g., let \(i_1=j_1\), \(i_2=j_2\), and therefore,

    $$\begin{aligned} (*)&=\mathbb {E}\bigg [\frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_1}a_{i_1,j_3}a_{i_2,j_3}}{d_{i_1}^2d_{i_2}^2d_{i_3}d_{j_3}}\bigg ]\\&=p^5\mathbb {E}\bigg [\frac{1}{{(\tilde{d}_{i_1}^{I\cup J}+3)}^2}\bigg ]\mathbb {E}\bigg [\frac{1}{{(\tilde{d}_{i_2}^{I\cup J}+3)}^2}\bigg ]\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_3}^{(i_1,i_2)}+2)(\tilde{d}_{j_3}^{(i_1,i_2)}+2)}\bigg ]\\&=p^5\left( \sigma _{\ell -1,3}^2\right) ^2\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_3}^{(i_1,i_2)}+2)(\tilde{d}_{j_3}^{(i_1,i_2)}+2)}\bigg ]. \end{aligned}$$

    For the remaining expectation, we use the law of total expectation to differentiate the different values of \(a_{i_3,j_3}\):

    $$\begin{aligned}&\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_3}^{(i_1,i_2)}+2)(\tilde{d}_{j_3}^{(i_1,i_2)}+2)}\right] =\mathbb {E}\left[ \mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_3}^{(i_1,i_2)}+2)(\tilde{d}_{j_3}^{(i_1,i_2)}+2)}\,\bigg |\, a_{i_3,j_3}\bigg ]\right] \\&\quad =\mathbb {E}\left[ \mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_3}^{I\cup J}+a_{i_3,j_3}+2)(\tilde{d}_{j_3}^{I\cup J}+a_{i_3,j_3}+2)}\,\bigg |\, a_{i_3,j_3}\bigg ]\right] \\&\quad =\mathbb {E}\left[ \mu _{\ell -1,2+a_{i_3,j_3}}^2\right] =p\mu _{3,3}^2+(1-p)\mu _{3,2}^2=\frac{1}{(np)^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$

    by (3.4), and \(\sigma _{\ell -1,3}^2=\sigma _{3,3}^2=\frac{1}{(np)^2}\left( 1+O\big (\frac{1}{np}\big )\right) \) due to (3.5). Therefore,

    $$\begin{aligned} (\square )(*)&=18n^4p^5\frac{1}{(np)^6}\left( 1+O\left( \frac{1}{np}\right) \right) =\frac{18p}{(np)^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
  • \(\ell =5\): There are \(9n(n-1)(n-2)(n-3)(n-4)=9n^5\left( 1-\frac{10}{n}+O\left( n^{-2}\right) \right) \) possibilities for this case (\(n(n-1)(n-2)\) possibilities for \(i_1,i_2,i_3\), 3 possibilities to choose which value is copied from \(i_1,i_2,i_3\) to \(j_1,j_2,j_3\), 3 possibilities to choose its position and \((n-3)(n-4)\) possibilities to choose and arrange the other two values in \(j_1,j_2,j_3\)). W.l.o.g. let \(i_1=j_1\). Then,

    $$\begin{aligned} (*)&=\mathbb {E}\left[ \frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_1}a_{i_1,j_2}a_{j_2,j_3}a_{j_3,i_1}}{d_{i_1}^2d_{i_2}d_{j_2}d_{i_3}d_{j_3}}\right] \\&=p^6\mathbb {E}\bigg [\frac{1}{{(\tilde{d}_{i_1}^{I\cup J}+4)}^2}\bigg ]\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_2}^{(i_1,i_3)} + 2)(\tilde{d}_{j_2}^{(i_1,j_3)} + 2)(\tilde{d}_{i_3}^{(i_1,i_2)} + 2)(\tilde{d}_{j_3}^{(i_1,j_2)} + 2)}\bigg ]\\&=p^6\sigma _{\ell -1,4}^2\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_2}^{(i_1,i_3)} + 2)(\tilde{d}_{j_2}^{(i_1,j_3)} + 2)(\tilde{d}_{i_3}^{(i_1,i_2)} + 2)(\tilde{d}_{j_3}^{(i_1,j_2)} + 2)}\bigg ]. \end{aligned}$$

    By (3.5), \(\sigma _{4,4}^2= \frac{1}{(np)^2}\left( 1+\frac{7p-5}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) .\) For \(i\in \{i_2,i_3\}\), we denote \(c_i:=a_{i,j_2}+a_{i,j_3}\) and analogously, for \(i\in \{j_2,j_3\}\) we denote \(c_i:=a_{i,i_2}+a_{i,i_3}\). With these notations, the remaining expectation can be computed using (3.4) and the fact that \(c_i\sim \mathrm {Bin}(2,p)\) for every \(i\in \{i_2,i_3,j_2,j_3\}\):

    $$\begin{aligned}&\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_2}^{(i_1,i_3)}+2)(\tilde{d}_{j_2}^{(i_1,j_3)}+2)(\tilde{d}_{i_3}^{(i_1,i_2)}+2)(\tilde{d}_{j_3}^{(i_1,j_2)}+2)}\bigg ]\\&\quad =\mathbb {E}\Bigg [\mathbb {E}\bigg [\frac{1}{(\tilde{d}_{i_2}^{I\cup J} + c_{i_2} + 2)(\tilde{d}_{j_2}^{I\cup J} + c_{j_2} + 2)}\\&\qquad \cdot \frac{1}{(\tilde{d}_{i_3}^{I\cup J} + c_{i_3} + 2)(\tilde{d}_{j_3}^{I\cup J} + c_{j_3} + 2)}\,\bigg |\, c_{i_2},c_{j_2},c_{i_3},c_{j_3}\bigg ]\Bigg ]\\&\quad =\mathbb {E}\left[ \mu _{\ell -1,2+c_{i_2}}\cdot \mu _{\ell -1,2+c_{j_2}}\cdot \mu _{\ell -1,2+c_{i_3}}\cdot \mu _{\ell -1,2+c_{j_3}}\right] \\&\quad =\mathbb {E}\left[ \mu _{4,2+c_{i_2}}\cdot \mu _{4,2+c_{j_2}}\cdot \mu _{4,2+c_{i_3}}\cdot \mu _{4,2+c_{j_3}}\right] \\&\quad =\mathbb {E}\bigg [\frac{1}{(np)^4}\bigg (1-\sum \limits _{i\in I\cup J\setminus \{i_1\}}\frac{2+c_i-1-4p}{np}+O\left( \frac{1}{(np)^{2}}\right) \bigg )\bigg ]\\&\quad =\frac{1}{(np)^4}\bigg (1-\sum \limits _{i\in I\cup J\setminus \{i_1\}}\frac{2+2p-1-4p}{np}+O\left( \frac{1}{(np)^{2}}\right) \bigg )\\&\quad =\frac{1}{(np)^4}\bigg (1-\frac{4-8p}{np}+O\left( \frac{1}{(np)^{2}}\right) \bigg ) \end{aligned}$$

    Therefore,

    $$\begin{aligned} (\square )(*)&= 9n^5\left( 1-\frac{10}{n}+O\left( n^{-2}\right) \right) p^6\frac{1}{(np)^6}\\&\quad \cdot \left( 1+\frac{7p-5}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \left( 1-\frac{4-8p}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \\&=\frac{9}{n}\left( 1-\frac{10}{n}+O\left( n^{-2}\right) \right) \left( 1+\frac{15p-9}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \\&=\frac{9}{n}\left( 1+\frac{5p-9}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) . \end{aligned}$$
  • \(\ell =6\): There are \(n(n-1)(n-2)(n-3)(n-4)(n-5)=n^6\left( 1-\frac{15}{n}+\frac{85}{n^2}+O\left( n^{-3}\right) \right) \) possibilities. We obtain

    $$\begin{aligned} (*)=p^6\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_1}^{(i_2,i_3)} + 2)(\tilde{d}_{j_1}^{(j_2,j_3)} + 2)(\tilde{d}_{i_2}^{(i_1,i_3)} + 2)(\tilde{d}_{j_2}^{(j_1,j_3)} + 2)(\tilde{d}_{i_3}^{(i_1,i_2)} + 2)(\tilde{d}_{j_3}^{(j_1,j_2)} + 2)}\right] \end{aligned}$$

    Similar to the case \(\ell =5\), for \(i\in I\) we write \(c_i:=a_{i,j_1}+a_{i,j_2}+a_{i,j_3}\) and for \(i\in J\) analogously \(c_i:=a_{i,i_1}+a_{i,i_2}+a_{i,i_3}\). Computations analogous to the case \(\ell =5\) yield

    $$\begin{aligned} (*)&=p^6\mathbb {E}\left[ \mu _{\ell -1,2+c_{i_1}}\mu _{\ell -1,2+c_{j_1}}\mu _{\ell -1,2+c_{i_2}}\mu _{\ell -1,2+c_{j_2}}\mu _{\ell -1,2+c_{i_3}}\mu _{\ell -1,2+c_{j_3}}\right] \\&=p^6\mathbb {E}\left[ \mu _{5,2+c_{i_1}}\mu _{5,2+c_{j_1}}\mu _{5,2+c_{i_2}}\mu _{5,2+c_{j_2}}\mu _{5,2+c_{i_3}}\mu _{5,2+c_{j_3}}\right] \\&=\frac{1}{(np)^6}\mathbb {E}\bigg [1-\sum \limits _{i\in I\cup J}\frac{2+c_i-1-5p}{np}\\&\quad +\sum \limits _{i\in I\cup J}\frac{25p^2-9(2+c_i-1)p+(2+c_i-1)(2+c_i-2)}{(np)^2}\\&\quad +\sum \limits _{\{i,j\}\subset I\cup J}\frac{(2+c_i-1-5p)(2+c_j-1-5p)}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \bigg ]\\&=\frac{1}{(np)^6}\bigg [1-\frac{6 - 30p + \sum \limits _{i\in I\cup J} \mathbb {E}\left[ c_i\right] }{np}+\frac{150p^2 - 54p + \sum \limits _{i\in I\cup J}\mathbb {E}\left[ -9pc_i + c_i + c_i^2\right] }{(np)^2}\\&\quad +\frac{15 - 150p + 375p^2 + \sum \limits _{\{i,j\}\subset I\cup J}\mathbb {E}\left[ c_i + c_j + c_ic_j - 5pc_i - 5pc_j\right] }{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \bigg ]\\&=\frac{1}{(np)^6}\bigg [1-\frac{6-30p+18p}{np}+\frac{150p^2-54p-9p\cdot 18p+18p+18p(1+2p)}{(np)^2}\\&\quad +\frac{15 - 150p + 375p^2 + 45p + 45p - 225p^2 - 225p^2 + 54p^2 + 9(8p^2 + p)}{(np)^2}\\&\quad +O\left( \frac{1}{(np)^{3}}\right) \bigg ]\\&=\frac{1}{(np)^6}\bigg (1-\frac{6-12p}{np}+\frac{75p^2-69p+15}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \bigg ) \end{aligned}$$

    because \(c_i\sim \mathrm {Bin}(3,p)\) for every \(i\in I\cup J\), i.e., \(\mathbb {E}\left[ c_i\right] =3p\), \(\mathbb {E}\left[ c_i^2\right] =3p(1+2p)\) and \(\mathbb {E}\left[ c_ic_j\right] =8p^2+p\) if \(i\in I, j\in J\) or \(i\in J, j\in I\), \(\mathbb {E}\left[ c_ic_j\right] =9p^2\) if \(i,j\in I\) or \(i,j\in J\). Therefore,

    $$\begin{aligned} (\square )(*)&= n^6\left( 1-\frac{15}{n}+\frac{85}{n^2}+O\left( n^{-3}\right) \right) p^6\\&\quad \cdot \frac{1}{(np)^6}\bigg (1-\frac{6-12p}{np}+\frac{75p^2-69p+15}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \bigg )\\&=\left( 1-\frac{15}{n}+\frac{85}{n^2}\right) \bigg (1-\frac{6-12p}{np}+\frac{75p^2-69p+15}{(np)^2}\bigg )+O\left( \frac{1}{(np)^{3}}\right) \\&=1-\frac{6+3p}{np}+\frac{85p^2+75p^2-69p+15+15p(6-12p)}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \\&=1-\frac{6+3p}{np}+\frac{-20p^2+21p+15}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) . \end{aligned}$$
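The binomial moments and mixed moments \(\mathbb {E}[c_ic_j]\) used in the case \(\ell =6\) can be verified by exact enumeration over the underlying Bernoulli edge indicators; the rational test value of p below is arbitrary:

```python
from itertools import product
from fractions import Fraction

p = Fraction(2, 7)  # arbitrary rational test value

def bern_expect(nvars, f):
    """Exact expectation of f(bits) over independent Bernoulli(p) bits."""
    total = Fraction(0)
    for bits in product((0, 1), repeat=nvars):
        prob = Fraction(1)
        for b in bits:
            prob *= p if b else 1 - p
        total += prob * f(bits)
    return total

# c_i ~ Bin(3, p): E[c_i] = 3p and E[c_i^2] = 3p(1 + 2p)
assert bern_expect(3, lambda b: sum(b)) == 3 * p
assert bern_expect(3, lambda b: sum(b) ** 2) == 3 * p * (1 + 2 * p)

# i in I, j in J: c_i and c_j share the single indicator a_{i,j},
# so E[c_i c_j] = 8p^2 + p
cross = bern_expect(5, lambda b: (b[0] + b[1] + b[2]) * (b[0] + b[3] + b[4]))
assert cross == 8 * p ** 2 + p

# i, j both in I (or both in J): disjoint indicators, so E[c_i c_j] = 9p^2
same = bern_expect(6, lambda b: (b[0] + b[1] + b[2]) * (b[3] + b[4] + b[5]))
assert same == 9 * p ** 2
```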

We gather the information from all four cases to obtain

$$\begin{aligned} \mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^3\bigg )^2\right]&= \frac{18p}{(np)^2}+\frac{9}{n}-\frac{9p(9-5p)}{(np)^2}+1-\frac{6+3p}{np}+\frac{-20p^2+21p+15}{(np)^2}\nonumber \\&=1-\frac{6-6p}{np}+\frac{25p^2-42p+15}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \end{aligned}$$
(5.6)
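The gathering of coefficients behind (5.6) is routine but error-prone. Since both sides of the \((np)^{-2}\) coefficient are quadratic polynomials in p, checking agreement at three rational points with exact arithmetic suffices; a minimal sketch:

```python
from fractions import Fraction

def second_order_lhs(q):
    # (np)^{-2} coefficients gathered from the cases l = 4, 5, 6
    return 18 * q - 9 * q * (9 - 5 * q) + (-20 * q ** 2 + 21 * q + 15)

def second_order_rhs(q):
    return 25 * q ** 2 - 42 * q + 15

# agreement of two quadratics at three points proves the identity
for q in (Fraction(1, 7), Fraction(2, 5), Fraction(9, 11)):
    assert second_order_lhs(q) == second_order_rhs(q)

# the (np)^{-1} coefficient: 9p (from 9/n) minus (6 + 3p) equals -(6 - 6p)
for q in (Fraction(1, 3), Fraction(4, 9)):
    assert 9 * q - (6 + 3 * q) == -(6 - 6 * q)
```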

Combining this with (5.3) yields

$$\begin{aligned}&\mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^3-\left( 1-\frac{3(1-p)}{np}\right) \bigg )^2\right] \\&\quad =1-\frac{6-6p}{np}+\frac{25p^2-42p+15}{(np)^2}+\left( 1-\frac{3(1-p)}{np}\right) ^2+O\left( \frac{1}{(np)^{3}}\right) \\&\quad \quad -2\left( 1-\frac{3(1-p)}{np}\right) \left( 1-\frac{3(1-p)}{(np)}+\frac{8p^2-12p+3}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \right) \\&\quad =1-\frac{6(1-p)}{np}+\frac{25p^2-42p+15}{(np)^2}+1-\frac{6(1-p)}{np}+\frac{9(1-p)^2}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \\&\quad \quad -2\left( 1-\frac{6(1-p)}{(np)}+\frac{8p^2-12p+3+9(1-p)^2}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \right) \\&\quad =\frac{25p^2 - 42p + 15 + 9 - 18p + 9p^2 - 16p^2 + 24p - 6 - 18 + 36p - 18p^2}{(np)^2}\\&\quad \quad +O\left( \frac{1}{(np)^{3}}\right) =O\left( \frac{1}{(np)^{3}}\right) \end{aligned}$$

Markov’s inequality combined with (3.10) proves the claim, because \(n(1-p)\rightarrow \infty \). \(\square \)
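The content of (5.3), namely that \(\sum _k\lambda _k^3\) concentrates around \(1-\frac{3(1-p)}{np}\), can also be illustrated by simulation; the values of n, p and the number of samples below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 0.4  # illustrative values with np^2 large, as the setup requires

vals = []
for _ in range(5):
    upper = np.triu(rng.random((n, n)) < p, k=1)
    A = (upper | upper.T).astype(float)
    d = A.sum(axis=1)
    B = A / np.sqrt(np.outer(d, d))
    lam = np.linalg.eigvalsh(B)
    vals.append((lam ** 3).sum())

# leading terms of (5.3): E[sum lambda_k^3] = 1 - 3(1-p)/(np) + O(1/(np)^2)
predicted = 1 - 3 * (1 - p) / (n * p)
assert abs(np.mean(vals) - predicted) < 0.01
```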

Proof of Proposition 8

Throughout this proof, we will apply similar reasoning as in the proof of Proposition 7. Just as above we see that \(\sum \limits _{k=1}^{n}\lambda _k^4=\mathrm {tr}(B^4)=\sum \limits _{i=1}^{n}z_{ii},\) where \(z_{ij}\) are now the entries of \(B^4\), hence

$$\begin{aligned} z_{ii}=\sum \limits _{j,k,l=1}^{n}\frac{a_{ij}a_{jk}a_{kl}a_{li}}{d_id_jd_kd_l} \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb {E}\left[ \sum \limits _{k=1}^n\lambda _k^4\right]&=\underbrace{\sum \limits _{i_1,i_2,i_3,i_4}}_{(\square )}\underbrace{\mathbb {E}\left[ \frac{a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}}{d_{i_1}d_{i_2}d_{i_3}d_{i_4}}\right] }_{(*)} \end{aligned}$$
(5.7)

We differentiate between the different values of \(\ell :=|I|\), \(I:=\{i_1,i_2,i_3,i_4\}\). To abbreviate notation, we write \((\square )\) for the number of possibilities to choose \(i_1,i_2,i_3,i_4\) in the sums in (5.7) in the considered case and \((\square )(*)\) for the total contribution of the considered case to (5.7). We notice that \((*)\) is nonzero if and only if the following condition holds:

$$\begin{aligned} i_1\ne i_2\ne i_3\ne i_4\ne i_1 \end{aligned}$$
(5.8)

Therefore, we only consider the cases in \((\square )\) in which (5.8) is satisfied.

  • If \(\ell =2\), i.e., \(i_1=i_3\) and \(i_2=i_4\), we have

    $$\begin{aligned} (\square )&=n(n-1)=n^2\left( 1-\frac{1}{n}\right) \\ (*)&=\mathbb {E}\Bigg [{\frac{a_{i_1i_2}}{{(\tilde{d}_{i_1}^{i_2}+1)}^2{(\tilde{d}_{i_2}^{i_1}+1)}^2}}\Bigg ]\\&=p\mathbb {E}\Bigg [\frac{1}{{(\tilde{d}_{i_1}^{i_2}+1)}^2}\Bigg ]\mathbb {E}\left[ \frac{1}{{(\tilde{d}_{i_2}^{i_1}+1)}^2}\right] \\&=p\sigma _{2,1}^4=p\frac{1}{(np)^4}\left( 1+O\left( \frac{1}{np}\right) \right) ^2=p\frac{1}{(np)^4}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$

    and thus

    $$\begin{aligned} (\square )(*)&=\frac{1}{n^2p^3}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
  • If \(\ell =3\), i.e., \(i_1=i_3\) or \(i_2=i_4\), but not both (w.l.o.g., we assume \(i_1=i_3\)), we have

    $$\begin{aligned} (\square )&=2n(n-1)(n-2)=2n^3\left( 1-\frac{3}{n}+\frac{2}{n^2}\right) \\ (*)&=\mathbb {E}\Bigg [{\frac{a_{i_1i_2}a_{i_1i_4}}{{(\tilde{d}_{i_1}^{(i_2,i_4)}+2)}^2(\tilde{d}_{i_2}^{i_1}+1)(\tilde{d}_{i_4}^{i_1}+1)}}\Bigg ]\\&=p^2\mathbb {E}\Bigg [\frac{1}{{(\tilde{d}_{i_1}^{(i_2,i_4)}+2)}^2}\Bigg ]\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_2}^{i_1}+1)(\tilde{d}_{i_4}^{i_1}+1)}\right] \\&=p^2\sigma _{2,2}^2{\mathbb {E}\left[ \mu _{2,1+a_{i_2,i_4}}^2\right] }\\&=p^2\frac{1}{(np)^4}\left( 1+\frac{2p}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \left( 1+\frac{3p-1}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \\&=p^2\frac{1}{(np)^4}\left( 1+\frac{5p-1}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \end{aligned}$$

    and thus

    $$\begin{aligned} (\square )(*)&=\frac{2}{np^2}\left( 1-\frac{3}{n}+\frac{2}{n^2}\right) \left( 1+\frac{5p-1}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \\&=\frac{2}{np^2}\left( 1+\frac{2p-1}{np}+O\left( \frac{1}{(np)^{2}}\right) \right) \end{aligned}$$
  • If \(\ell =4\), i.e., all four of the indices are pairwise different, we have

    $$\begin{aligned} (\square )&=n(n-1)(n-2)(n-3)=n^4\left( 1-\frac{6}{n}+\frac{11}{n^2}+O(n^{-3})\right) \\ (*)&=\mathbb {E}\left[ \frac{a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}}{(\tilde{d}_{i_1}^{(i_2,i_4)}+2)(\tilde{d}_{i_2}^{(i_1,i_3)}+2)(\tilde{d}_{i_3}^{(i_2,i_4)}+2)(\tilde{d}_{i_4}^{(i_1,i_3)}+2)}\right] \\&=p^4\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_1}^{(i_2,i_4)}+2)(\tilde{d}_{i_3}^{(i_2,i_4)}+2)}\right] \mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_2}^{(i_1,i_3)}+2)(\tilde{d}_{i_4}^{(i_1,i_3)}+2)}\right] \\&=p^4{\mathbb {E}\left[ \mu _{3,2+a_{i_1,i_3}}^2\right] }^2\\&=p^4\mathbb {E}\bigg [\left( \frac{1}{np}+\frac{3p - (a_{i_1,i_3} + 1)}{(np)^2}+\frac{9p^2 - 5p(a_{i_1,i_3} + 1) + (a_{i_1,i_3} + 1)a_{i_1,i_3}}{(np)^3}\right. \\&\quad \left. +O\left( \frac{1}{(np)^{4}}\right) \right) ^2\bigg ]^2\\&=p^4\frac{1}{(np)^4}\mathbb {E}\left[ 1+\frac{6p - 2(a_{i_1,i_3} + 1)}{np}+\frac{9p^2 - 6p(a_{i_1,i_3} + 1)+(a_{i_1,i_3} + 1)^2}{(np)^2}\right. \\&\left. \quad +\frac{18p^2-10p(a_{i_1,i_3}+1)+2(a_{i_1,i_3}+1)a_{i_1,i_3}}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \right] ^2\\&=\frac{1}{n^4}\left( 1+\frac{4p-2}{np}+\frac{11p^2-9p+1}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \right) ^2\\&=\frac{1}{n^4}\left( 1+\frac{8p-4}{np}+\frac{38p^2-34p+6}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \right) \end{aligned}$$

    and thus

    $$\begin{aligned} (\square )(*)&= \left( 1 - \frac{6}{n} + \frac{11}{n^2} + O\Big (\frac{1}{n^3}\Big )\right) \left( 1 + \frac{8p - 4}{np} + \frac{38p^2 - 34p + 6}{(np)^2} + O\left( \frac{1}{(np)^{3}}\right) \right) \\&=1+\frac{2p-4}{np}+\frac{p^2-10p+6}{(np)^2}+O\left( \frac{1}{(np)^{3}}\right) \end{aligned}$$

Combining all possible cases yields

$$\begin{aligned} \mathbb {E}\left[ \sum \limits _{k=1}^n\lambda _k^4\right]= & {} 1+\frac{2p^2-4p+2}{np^2}+\frac{p^4-10p^3+10p^2-p}{(np^2)^2}+O\left( \frac{1}{(np)^{3}}\right) \nonumber \\= & {} 1+\frac{2(1-p)^2}{np^2}+\frac{p^4-10p^3+10p^2-p}{(np^2)^2}+O\left( \frac{1}{(np)^{3}}\right) \end{aligned}$$
(5.9)
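Expansion (5.9) can likewise be illustrated by simulation; again, n, p and the sample count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 0.4  # illustrative values (np^2 large, as required in the setup)

vals = []
for _ in range(5):
    upper = np.triu(rng.random((n, n)) < p, k=1)
    A = (upper | upper.T).astype(float)
    d = A.sum(axis=1)
    B = A / np.sqrt(np.outer(d, d))
    lam = np.linalg.eigvalsh(B)
    vals.append((lam ** 4).sum())

# leading terms of (5.9): E[sum lambda_k^4] = 1 + 2(1-p)^2/(np^2) + ...
predicted = 1 + 2 * (1 - p) ** 2 / (n * p ** 2)
assert abs(np.mean(vals) - predicted) < 0.01
```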

For the second moment, consider

$$\begin{aligned} \mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^4\bigg )^2\right]&=\mathbb {E}\left[ \bigg (\sum \limits _{i_1,i_2,i_3,i_4}\frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_4}a_{i_4,i_1}}{d_{i_1}d_{i_2}d_{i_3}d_{i_4}}\bigg )^2\right] \nonumber \\&=\mathbb {E}\left[ \sum \limits _{i_1,i_2,i_3,i_4}\sum \limits _{j_1,j_2,j_3,j_4}\frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_4}a_{i_4,i_1}a_{j_1,j_2}a_{j_2,j_3}a_{j_3,j_4}a_{j_4,j_1}}{d_{i_1}d_{i_2}d_{i_3}d_{i_4}d_{j_1}d_{j_2}d_{j_3}d_{j_4}}\right] \nonumber \\&=\mathbb {E}\left[ \sum \limits _{i_1,i_2,i_3,i_4}\sum \limits _{j_1,j_2,j_3,j_4}\frac{a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_4}a_{i_4,i_1}a_{j_1,j_2}a_{j_2,j_3}a_{j_3,j_4}a_{j_4,j_1}}{(\tilde{d}_{i_1}^{(I\cup J)\setminus \{i_1\}}+b_{i_1}+c_{i_1})\cdots (\tilde{d}_{j_4}^{(I\cup J)\setminus \{j_4\}}+b_{j_4}+c_{j_4})}\right] \nonumber \\&=\underbrace{\sum \limits _{i_1,i_2,i_3,i_4}\sum \limits _{j_1,j_2,j_3,j_4}}_{(\square )}\underbrace{\mathbb {E}\left[ a_{i_1,i_2}a_{i_2,i_3}a_{i_3,i_4}a_{i_4,i_1}a_{j_1,j_2}a_{j_2,j_3}a_{j_3,j_4}a_{j_4,j_1}\right] }_{(\triangle )}\nonumber \\&\quad \cdot \underbrace{\mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_1}^{(I\cup J)\setminus \{i_1\}}+b_{i_1}+c_{i_1})\cdots (\tilde{d}_{j_4}^{(I\cup J)\setminus \{j_4\}}+b_{j_4}+c_{j_4})}\right] }_{(*)} \end{aligned}$$
(5.10)

Here, we denote

$$\begin{aligned} I&=\{i_1,\dots ,i_4\}\qquad \text {and}\qquad J=\{j_1,\dots ,j_4\}\\ \mathcal {A}&=\{a_{i_1,i_2},a_{i_2,i_3},a_{i_3,i_4},a_{i_4,i_1},a_{j_1,j_2},a_{j_2,j_3},a_{j_3,j_4},a_{j_4,j_1}\}, \\ \mathcal {I}_{i}&=\{j\in I\cup J\mid a_{i,j}\in \mathcal {A} \vee a_{j,i}\in \mathcal {A}\}, \end{aligned}$$

for \(i\in I\cup J\) and \(b_{i}=|\mathcal {I}_{i}|\), i.e., \(b_i\) counts the \(a_{i,j}\)’s in \(\mathcal {A}\) containing i (notice that this is deterministic), and

$$\begin{aligned} c_{i}=\sum \limits _{j\in (I\cup J)\setminus \mathcal {I}_{i}}a_{i,j}, \end{aligned}$$

i.e., we sum over all \(a_{i,k}\) for \(k\in I\cup J\), excluding those that appear in \((\triangle )\), similar to the considerations in the proof of Proposition 7. Clearly, for each \(i\in I\cup J\), \(c_i\) is binomially distributed with parameters \(\mathbf {m}_i\) and p (notice that the first parameter need not always be 5; e.g., if \(i_1=i_3\), then \(a_{i_1,i_3}=0\)). Furthermore, if \(i,j\in I\cup J\) are chosen in such a way that \(a_{i,j}\) appears in \((\triangle )\), \(c_i\) and \(c_j\) are independent. Otherwise,

$$\begin{aligned} \mathbb {E}\left[ c_ic_j\right] =(\mathbf {m}_i\mathbf {m}_j-1)p^2+p. \end{aligned}$$
(5.11)
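Formula (5.11) can be verified by exact enumeration, under the assumption stated above that \(c_i\) and \(c_j\) share exactly the single indicator \(a_{i,j}\); the test values of \(\mathbf {m}_i\), \(\mathbf {m}_j\) and p are arbitrary:

```python
from itertools import product
from fractions import Fraction

p = Fraction(1, 3)  # arbitrary rational test value

def expect(nvars, f):
    """Exact expectation of f(bits) over independent Bernoulli(p) bits."""
    total = Fraction(0)
    for bits in product((0, 1), repeat=nvars):
        prob = Fraction(1)
        for b in bits:
            prob *= p if b else 1 - p
        total += prob * f(bits)
    return total

# c_i and c_j sum m_i resp. m_j edge indicators and share exactly one
# variable (the edge a_{i,j}); all other indicators are independent
for m_i, m_j in [(5, 5), (4, 5), (2, 4)]:
    nvars = 1 + (m_i - 1) + (m_j - 1)
    ci = lambda b: b[0] + sum(b[1:m_i])
    cj = lambda b: b[0] + sum(b[m_i:])
    val = expect(nvars, lambda b: ci(b) * cj(b))
    assert val == (m_i * m_j - 1) * p ** 2 + p
```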

We differentiate between the different values of \(\ell :=|I\cup J|\). To simplify notation, we write \((\square )\) for the number of possibilities to choose \(i_1,\dots ,i_4,j_1,\dots ,j_4\) in the sums in (5.10) in each considered case and \((\square )(\triangle )(*)\) for the total contribution of the considered case to (5.10). We notice that \((\triangle )\) is nonzero if and only if the following condition holds:

$$\begin{aligned} i_1\ne i_2\ne i_3\ne i_4\ne i_1\quad \wedge \quad j_1\ne j_2\ne j_3\ne j_4\ne j_1 \end{aligned}$$
(5.12)

Therefore, we only consider the cases in \((\square )\) in which (5.12) is satisfied.

  • If \(\ell \le 5\), it is straightforward to check, using Proposition 10, that

    $$\begin{aligned} (\square )=O(n^{\ell }),\qquad (\triangle )&\le p^{\ell -2},\qquad (*)=O\left( \frac{1}{(np)^8}\right) \nonumber \\ (\square )(\triangle )(*)&=O\left( (np)^{\ell -8}p^{-2}\right) =O\left( \frac{1}{n^3p^5}\right) \end{aligned}$$
    (5.13)
  • \(\ell =6\). There are six ways to obtain this:

    (a) \(|I|=2\), \(|J|=4\), \(|I\cap J|=0\),

    (b) \(|I|=4\), \(|J|=2\), \(|I\cap J|=0\),

    (c) \(|I|=3\), \(|J|=4\), \(|I\cap J|=1\),

    (d) \(|I|=4\), \(|J|=3\), \(|I\cap J|=1\),

    (e) \(|I|=3\), \(|J|=3\), \(|I\cap J|=0\),

    (f) \(|I|=4\), \(|J|=4\), \(|I\cap J|=2\).

    In each of these cases, \((*)=\frac{1}{(np)^{8}}\left( 1+O\left( \frac{1}{np}\right) \right) \), as can be seen by computations similar to those above, applying (3.4), (3.5). Clearly, (a) and (b), as well as (c) and (d), are pairwise analogous.

    (a) Because \(|I|=2\) while (5.12) must hold, \(i_1=i_3\) and \(i_2=i_4\). Thus, we have \(n(n-1)\) possibilities to choose \(i_1,\dots ,i_4\). The indices \(j_1,\dots ,j_4\) are all different from \(i_1\) and \(i_2\), thus giving another \((n-2)(n-3)(n-4)(n-5)\) possibilities.

      $$\begin{aligned} (\square )&=n(n-1)\dots (n-5)=n^6\left( 1+O\left( n^{-1}\right) \right) \\ (\triangle )&=p^5\\ (\square )(\triangle )(*)&=\frac{1}{n^2p^3}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
    (b) Analogously to (a), we obtain a contribution of

      $$\begin{aligned} (\square )(\triangle )(*)=\frac{1}{n^2p^3}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
    (c) Either \(i_1=i_3\) or \(i_2=i_4\) holds, but not both. The other two indices in I are different from the two identical ones and from each other. Thus, we obtain \(2n(n-1)(n-2)\) possibilities to choose \(i_1,\dots ,i_4\). One of the indices in J (4 possibilities) is a copy of one of the indices in I (3 possibilities, either the one appearing twice or one of the other two). The other three indices in J are different from those in I and pairwise different from each other, thus giving another \((n-3)(n-4)(n-5)\) possibilities. Therefore,

      $$\begin{aligned} (\square )&=2n(n-1)(n-2)\cdot 4\cdot 3\cdot (n-3)(n-4)(n-5)\\&=24n^6\left( 1+O\left( n^{-1}\right) \right) \\ (\triangle )&=p^6\\ (\square )(\triangle )(*)&=\frac{24}{(np)^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
    (d) Analogously to (c), we obtain a contribution of

      $$\begin{aligned} (\square )(\triangle )(*)=\frac{24}{(np)^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
    (e) Either \(i_1=i_3\) or \(i_2=i_4\) holds, but not both. The other two indices in I are different from the two identical ones and from each other. Thus, we obtain \(2n(n-1)(n-2)\) possibilities to choose \(i_1,\dots ,i_4\). Analogously, either \(j_1=j_3\) or \(j_2=j_4\) holds, but not both. Because the indices in J are different from those in I, we obtain \(2(n-3)(n-4)(n-5)\) possibilities to choose \(j_1,\dots ,j_4\).

      $$\begin{aligned} (\square )&=4n(n-1)\dots (n-5)=4n^6\left( 1+O\left( n^{-1}\right) \right) \\ (\triangle )&=p^4\\ (\square )(\triangle )(*)&=\frac{4}{(np^2)^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
    (f) We need to differentiate between the positions of the indices in \(I\cap J\).

      • If the copied indices \(I\cap J\) are neighbors in both I and J, i.e.,

        $$\begin{aligned}&I\cap J\in \{\{i_1,i_2\},\{i_2,i_3\},\{i_3,i_4\},\{i_4,i_1\}\}\cap \{\{j_1,j_2\},\{j_2,j_3\},\\&\quad \{j_3,j_4\},\{j_4,j_1\}\},\end{aligned}$$

        \((\triangle )=p^7\) (because then the indicator of the shared edge appears twice in \((\triangle )\)). To count the possibilities, we choose the indices in I (\(n(n-1)(n-2)(n-3)\) possibilities), then one of the four neighboring pairs in I (4 possibilities, w.l.o.g. \(\{i_1,i_2\}\)) and one of the four in J (4 possibilities, w.l.o.g. \(\{j_1,j_2\}\)). Then we choose the order in which the indices are copied (2 possibilities, namely \(i_1=j_1, i_2=j_2\) as opposed to \(i_1=j_2\), \(i_2=j_1\)). The remaining two indices from J are different from all other indices, i.e., we have \((n-4)(n-5)\) possibilities. In this case,

        $$\begin{aligned} (\square )&=n(n-1)(n-2)(n-3)\cdot 4\cdot 4\cdot 2\cdot (n-4)(n-5)\\&=32n^6\left( 1+O\left( n^{-1}\right) \right) \\ (\triangle )&=p^7\\ (\square )(\triangle )(*)&=\frac{32}{n^2p}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$
      • If the copied indices are not neighbors in both I and J, \((\triangle )=p^8\). We first choose the indices in I (\(n(n-1)(n-2)(n-3)\) possibilities). For J, we choose 2 indices from I to be copied to 2 indices in J, both pairs chosen without regard to order and such that not both index pairs are neighboring, as they are in the first case (\(\left( {\begin{array}{c}4\\ 2\end{array}}\right) \cdot \left( {\begin{array}{c}4\\ 2\end{array}}\right) -4\cdot 4=20\) possibilities). Then we again choose the order in which the indices are copied (2 possibilities). For the remaining indices in J, we again have \((n-4)(n-5)\) possibilities. Therefore,

        $$\begin{aligned} (\square )&=n(n-1)(n-2)(n-3)\cdot 20\cdot 2\cdot (n-4)(n-5)\\&=40n^6\left( 1+O\left( n^{-1}\right) \right) \\ (\triangle )&=p^8\\ (\square )(\triangle )(*)&=\frac{40}{n^2}\left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$

        Both cases combined, (f) yields a total contribution of

        $$\begin{aligned} (\square )(\triangle )(*)=\left( \frac{32}{n^2p}+\frac{40}{n^2}\right) \left( 1+O\left( \frac{1}{np}\right) \right) \end{aligned}$$

    In total, the cases with \(\ell =6\) yield the contribution

    $$\begin{aligned} (\square )(\triangle )(*)&=\frac{p+p+24p^2+24p^2+4+32p^3+40p^4}{(np^2)^2}+O\left( \frac{1}{(np)^3}\right) \nonumber \\&=\frac{4+2p+48p^2+32p^3+40p^4}{(np^2)^2}+O\left( \frac{1}{(np)^3}\right) \end{aligned}$$
    (5.14)
  • \(\ell =7\). There are three ways to obtain this:

    (a) \(|I|=3\), \(|J|=4\), \(|I\cap J|=0\),

    (b) \(|I|=4\), \(|J|=3\), \(|I\cap J|=0\),

    (c) \(|I|=4\), \(|J|=4\), \(|I\cap J|=1\),

    the first two of which are clearly analogous.

    (a) Either \(i_1=i_3\) or \(i_2=i_4\) holds, but not both (without loss of generality, we choose the former for our calculations). The other two indices in I are different from the two identical ones and from each other. Thus, we obtain \(2n(n-1)(n-2)\) possibilities to choose \(i_1,\dots ,i_4\). The indices \(j_1,\dots ,j_4\) are all different from those in I and pairwise different from each other, thus giving another \((n-3)(n-4)(n-5)(n-6)\) possibilities. Furthermore, we find that \(\mathbf {m}_{i_1}=\mathbf {m}_{j_1}=\mathbf {m}_{j_2}=\mathbf {m}_{j_3}=\mathbf {m}_{j_4}=4\) and \(\mathbf {m}_{i_2}=\mathbf {m}_{i_4}=5\).

      $$\begin{aligned} (\square )&=2n(n-1)\dots (n-6)=2n^7\left( 1-\frac{21}{n}+O\left( n^{-2}\right) \right) \\ (\triangle )&=p^6\\ (*)&=\mathbb {E}\left[ \frac{1}{{(\tilde{d}_{i_1}^{I\cup J}+2+c_{i_1})}^2(\tilde{d}_{i_2}^{I\cup J}+1+c_{i_2})(\tilde{d}_{i_4}^{I\cup J}+1+c_{i_4})}\right. \\&\quad \left. \cdot \frac{1}{(\tilde{d}_{j_1}^{I\cup J}+2+c_{j_1})(\tilde{d}_{j_2}^{I\cup J}+2+c_{j_2})(\tilde{d}_{j_3}^{I\cup J}+2+c_{j_3})(\tilde{d}_{j_4}^{I\cup J}+2+c_{j_4})}\right] \\&=\mathbb {E}\left[ \sigma _{6,2+c_{i_1}}^2\mu _{6,1+c_{i_2}}\mu _{6,1+c_{i_4}}\mu _{6,2+c_{j_1}}\mu _{6,2+c_{j_2}}\mu _{6,2+c_{j_3}}\mu _{6,2+c_{j_4}}\right] \\&=\frac{1}{(np)^8}\mathbb {E}\left[ \left( 1+\frac{11p-2c_{i_1}-1}{np}\right) \left( 1+\frac{6p-c_{i_2}}{np}\right) \left( 1+\frac{6p-c_{i_4}}{np}\right) \right. \\&\qquad \qquad \left. \cdot \left( 1+\frac{6p-c_{j_1}-1}{np}\right) \dots \left( 1+\frac{6p-c_{j_4}-1}{np}\right) +O\left( \frac{1}{(np)^2}\right) \right] \\&=\frac{1}{(np)^8}\mathbb {E}\Bigg [1+\frac{11p - 2c_{i_1} - 1 + 36p - c_{i_2} - c_{i_4} - \sum \limits _{j\in J}c_{j} - 4}{np}+O\left( \frac{1}{(np)^2}\right) \Bigg ]\\&=\frac{1}{(np)^8}\left( 1+\frac{13p-5}{np}+O\left( \frac{1}{(np)^2}\right) \right) \end{aligned}$$

      This case gives a total contribution of

      $$\begin{aligned} (\square )(\triangle )(*)=\frac{2}{np^2}\left( 1+\frac{-8p-5}{np}+O\left( \frac{1}{(np)^2}\right) \right) \end{aligned}$$
    (b) Analogously to (a), we obtain a contribution of

      $$\begin{aligned} (\square )(\triangle )(*)=\frac{2}{np^2}\left( 1+\frac{-8p-5}{np}+O\left( \frac{1}{(np)^2}\right) \right) \end{aligned}$$
    (c) There are \(n(n-1)(n-2)(n-3)\) possibilities to choose \(i_1,\dots ,i_4\). Then, we pick one of the indices \(j_1,\dots ,j_4\) (4 possibilities) and assign it one of the values of \(i_1,\dots ,i_4\) (4 possibilities). Without loss of generality, we choose \(i_1=j_1\). The other three indices in J are all different from those in I and from each other, thereby giving another \((n-4)(n-5)(n-6)\) possibilities. We further find that \(\mathbf {m}_{i_1}=2\), \(\mathbf {m}_{i_2}=\mathbf {m}_{i_3}=\mathbf {m}_{i_4}=\mathbf {m}_{j_2}=\mathbf {m}_{j_3}=\mathbf {m}_{j_4}=4\). Therefore,

      $$\begin{aligned} (\square )&=16n(n-1)\dots (n-6)=16n^7\left( 1-\frac{21}{n}+O\left( n^{-2}\right) \right) \\ (\triangle )&=p^8\\ (*)&=\mathbb {E}\left[ \frac{1}{{(\tilde{d}_{i_1}^{I\cup J}+4+c_{i_1})}^2(\tilde{d}_{i_2}^{I\cup J}+2+c_{i_2})\dots (\tilde{d}_{j_4}^{I\cup J}+2+c_{j_4})}\right] \\&=\mathbb {E}\left[ \sigma _{6,4+c_{i_1}}^2\mu _{6,2+c_{i_2}}\mu _{6,2+c_{i_3}}\mu _{6,2+c_{i_4}}\mu _{6,2+c_{j_2}}\mu _{6,2+c_{j_3}}\mu _{6,2+c_{j_4}}\right] \\&=\frac{1}{(np)^8}\mathbb {E}\Bigg [\left( 1+\frac{11p - 2c_{i_1} - 5}{np}\right) \prod \limits _{\begin{array}{c} j\in I\cup J\\ j\ne i_1 \end{array}} \left( 1+\frac{6p - c_{j} - 1}{np}\right) +O\left( \frac{1}{(np)^{2}}\right) \Bigg ]\\&=\frac{1}{(np)^8}\mathbb {E}\left[ 1+\frac{1}{np}\Big (11 p - 2c_{i_1} - 5 + 36p - \sum \limits _{\begin{array}{c} i\in I\cup J\\ i\ne i_1 \end{array}} c_{i} - 6\Big )+O\left( \frac{1}{(np)^2}\right) \right] \\&=\frac{1}{(np)^8}\left( 1+\frac{19p-11}{np}+O\left( \frac{1}{(np)^2}\right) \right) \end{aligned}$$

      This case gives a total contribution of

      $$\begin{aligned} (\square )(\triangle )(*)=\frac{16}{n}\left( 1+\frac{-2p-11}{np}+O\left( \frac{1}{(np)^2}\right) \right) \end{aligned}$$

    In total, the cases with \(\ell =7\) yield the contribution

    $$\begin{aligned} (\square )(\triangle )(*)=\frac{4 + 16p^2}{np^2}+\frac{-32p^2 - 20p - 32p^4 - 176p^3}{(np^2)^2}+O\left( \frac{1}{(np)^3}\right) \nonumber \\ \end{aligned}$$
    (5.15)
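The counting and the expansion in case (c) can be sanity-checked independently. The following sketch (a verification aid, not part of the proof) counts by brute force, for an illustrative small value \(n_0=7\), the ordered pairs of 4-tuples with pairwise-distinct entries sharing exactly one value, and checks the first-order expansion of \((\square )(\triangle )(*)\) symbolically, assuming the expression for \((*)\) derived above.

```python
import itertools
import sympy as sp

n, p, u = sp.symbols('n p u', positive=True)

# Brute-force count for a small n (n0 = 7 is an illustrative choice):
# ordered 4-tuples I and J, each with pairwise-distinct entries, sharing
# exactly one value -- the configuration of case (c).
n0 = 7
perms = list(itertools.permutations(range(n0), 4))
count = sum(1 for I in perms for J in perms if len(set(I) & set(J)) == 1)
assert count == 16 * n0 * (n0-1) * (n0-2) * (n0-3) * (n0-4) * (n0-5) * (n0-6)

# Symbolic check of the contribution, using the expansion of (*) from above.
square = 16 * sp.prod([n - k for k in range(7)])     # 16 n(n-1)...(n-6)
star = (1 + (19*p - 11)/(n*p)) / (n*p)**8            # (*) to first order
expr = square * p**8 * star
expected = (sp.Integer(16)/n) * (1 + (-2*p - 11)/(n*p))
# agreement up to O(1/n^3): the series in u = 1/n coincide through u^2
diff = sp.series((expr - expected).subs(n, sp.Integer(1)/u), u, 0, 3).removeO()
assert sp.simplify(diff) == 0
```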
  • \(\ell =8\). In this case, \(i_1,\dots ,i_4,j_1,\dots ,j_4\) are pairwise different. Therefore, we have

    $$\begin{aligned} (\square )&=n\cdot (n-1)\dots (n-7)=n^8\left( 1-\frac{28}{n}+\frac{322}{n^2}+O(n^{-3})\right) \\ (\triangle )&=p^8 \end{aligned}$$

    Furthermore, \(\mathbf {m}_i=5\) for all \(i\in I\cup J\), hence \(\mathbb {E}\left[ c_i\right] =5p\), \(\mathbb {E}\left[ c_i^2\right] =5p(4p+1)\). For \(i,j\in I\cup J\) with \(i<j\) (28 possibilities), there are exactly 8 possibilities to choose \((i,j)\) such that \(c_i\) and \(c_j\) are independent (namely \((i_1,i_2),(i_1,i_4),(i_2,i_3),(i_3,i_4),(j_1,j_2),(j_1,j_4),(j_2,j_3),(j_3,j_4)\)). For these, \(\mathbb {E}\left[ c_ic_j\right] =25p^2\). For the other 20 possibilities, \(\mathbb {E}\left[ c_ic_j\right] =24p^2+p\) by (5.11).

    $$\begin{aligned}&(*)=\mathbb {E}\left[ \mathbb {E}\left[ \frac{1}{(\tilde{d}_{i_1}^{(I\cup J)\setminus \{i_1\}}+2+c_{i_1})\cdots (\tilde{d}_{j_4}^{(I\cup J)\setminus \{j_4\}}+2+c_{j_4})}\,\Big |\, c_{i_1},\dots ,c_{j_4}\right] \right] \\&=\mathbb {E}\left[ \mu _{7,2+c_{i_1}}\mu _{7,2+c_{i_2}}\mu _{7,2+c_{i_3}}\mu _{7,2+c_{i_4}}\mu _{7,2+c_{j_1}}\mu _{7,2+c_{j_2}}\mu _{7,2+c_{j_3}}\mu _{7,2+c_{j_4}}\right] \\&=\mathbb {E}\left[ \prod \limits _{i\in I\cup J}\frac{1}{np}\left( 1+\frac{7p - (c_i + 1)}{np}+\frac{49p^2 - 13p(c_i + 1) + c_i(c_i + 1)}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \right) \right] \\&=\frac{1}{(np)^8}\Bigg [1+\frac{\mathbb {E}\left[ \sum \limits _{i\in I\cup J}(7p - (c_i + 1))\right] }{np}+\frac{\mathbb {E}\left[ \sum \limits _{i\in I\cup J} (49p^2 - 13p(c_i + 1) + c_i(c_i + 1))\right] }{(np)^2}\\&\quad +\frac{\mathbb {E}\left[ \sum \limits _{i<j}(7p - (c_i + 1))(7p - (c_j + 1))\right] }{(np)^2}+O\left( \frac{1}{(np)^3}\right) \Bigg ]\\&=\frac{1}{(np)^8}\bigg [1+\frac{56p - (40p + 8)}{np}+\frac{392p^2 - 520p^2 - 104p + 40p(1 + 4p) + 40p}{(np)^2}\\&\quad +\frac{1372p^2 - 1960p^2 - 392p + 8 \cdot 25p^2 + 20 \cdot (24p^2 + p) + 280p + 28}{(np)^2}\\&\quad +O\left( \frac{1}{(np)^3}\right) \bigg ]\\&=\frac{1}{(np)^8}\bigg [1+\frac{16p-8}{np}+\frac{124p^2-116p+28}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \bigg ]\\ \end{aligned}$$

    In total, we obtain in the case \(\ell =8\)

    $$\begin{aligned} (\square )(\triangle )(*)&=n^8\left( 1-\frac{28}{n}+\frac{322}{n^2}+O(n^{-3})\right) p^8\nonumber \\&\quad \cdot \frac{1}{(np)^8}\bigg [1+\frac{16p - 8}{np}+\frac{124p^2 - 116p + 28}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \bigg ]\nonumber \\&=1-\frac{8 + 12p}{np}+\frac{28 + 108p - 2p^2}{(np)^2}+O\left( \frac{1}{(np)^3}\right) \end{aligned}$$
    (5.16)
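The moments of the \(c_i\) quoted above and the final expansion (5.16) can again be verified symbolically. The sketch below assumes that each \(c_i\) is a sum of five independent Bernoulli(\(p\)) edge indicators and that a dependent pair \((c_i,c_j)\) shares exactly one indicator; this decomposition is a modelling assumption of the check, chosen because it reproduces the moments \(\mathbb {E}[c_i]=5p\), \(\mathbb {E}[c_i^2]=5p(4p+1)\) and \(\mathbb {E}[c_ic_j]=24p^2+p\) stated in the text.

```python
import itertools
import sympy as sp

n, p, u = sp.symbols('n p u', positive=True)

def expect(f, m):
    """Exact expectation of f(bits) over m independent Bernoulli(p) bits."""
    return sp.expand(sum(
        f(b) * p**sum(b) * (1 - p)**(m - sum(b))
        for b in itertools.product((0, 1), repeat=m)
    ))

# hypothetical decomposition: c_i and c_j share exactly the first indicator
c_i = lambda b: sum(b[0:5])
c_j = lambda b: b[0] + sum(b[5:9])

assert expect(lambda b: c_i(b), 9) == sp.expand(5*p)
assert expect(lambda b: c_i(b)**2, 9) == sp.expand(5*p*(4*p + 1))
assert expect(lambda b: c_i(b)*c_j(b), 9) == sp.expand(24*p**2 + p)

# expansion (5.16): square * triangle * star versus the stated result
square = sp.prod([n - k for k in range(8)])          # n(n-1)...(n-7)
star = (1 + (16*p - 8)/(n*p)
          + (124*p**2 - 116*p + 28)/(n*p)**2) / (n*p)**8
expr = square * p**8 * star
expected = 1 - (8 + 12*p)/(n*p) + (28 + 108*p - 2*p**2)/(n*p)**2
# agreement up to O(1/(np)^3): the series in u = 1/n coincide through u^2
diff = sp.series((expr - expected).subs(n, sp.Integer(1)/u), u, 0, 3).removeO()
assert sp.simplify(diff) == 0
```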

We can now collect the contributions of all cases of \(\ell \), given by (5.13), (5.14), (5.15) and (5.16), by applying (5.10):

$$\begin{aligned} \mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^4\bigg )^2\right]&=1-\frac{8+12p}{np}+\frac{28p^2+108p^3-2p^4}{(np^2)^2}+\frac{4+16p^2}{np^2}+O\left( \frac{1}{n^3p^5}\right) \nonumber \\&\quad +\frac{-32p^2-20p-32p^4-176p^3}{(np^2)^2}+\frac{4+2p+48p^2+32p^3+40p^4}{(np^2)^2}\nonumber \\&=1+\frac{4-8p+4p^2}{np^2}+\frac{4-18p+44p^2-36p^3+6p^4}{(np^2)^2}+O\left( \frac{1}{n^3p^5}\right) \end{aligned}$$
(5.17)
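The collection step leading to (5.17) is a purely algebraic identity between the displayed truncations, so it can be checked exactly. The following sketch adds the three contributions as they appear above and compares with the stated result.

```python
import sympy as sp

n, p = sp.symbols('n p', positive=True)

# the three contributions exactly as displayed
ell8 = 1 - (8 + 12*p)/(n*p) + (28*p**2 + 108*p**3 - 2*p**4)/(n*p**2)**2
ell7 = ((4 + 16*p**2)/(n*p**2)
        + (-32*p**2 - 20*p - 32*p**4 - 176*p**3)/(n*p**2)**2)
ell_low = (4 + 2*p + 48*p**2 + 32*p**3 + 40*p**4)/(n*p**2)**2  # from (5.13)/(5.14)
total = ell8 + ell7 + ell_low

stated = (1 + (4 - 8*p + 4*p**2)/(n*p**2)
            + (4 - 18*p + 44*p**2 - 36*p**3 + 6*p**4)/(n*p**2)**2)

# the truncated sums agree exactly, term by term
assert sp.simplify(total - stated) == 0
```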

Therefore, by (5.9) and (5.17),

$$\begin{aligned}&\mathbb {E}\left[ \bigg (\sum \limits _{k=2}^n\lambda _k^4-\frac{2(1-p)^2}{np^2}\bigg )^2\right] =\mathbb {E}\left[ \bigg (\sum \limits _{k=2}^n\lambda _k^4\bigg )^2\right] -\frac{4(1-p)^2}{np^2}\mathbb {E}\left[ \sum \limits _{k=2}^n\lambda _k^4\right] +\frac{4(1-p)^4}{(np^2)^2}\\&\quad =\mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^4-1\bigg )^2\right] -\frac{4(1-p)^2}{np^2}\mathbb {E}\left[ \sum \limits _{k=2}^n\lambda _k^4\right] +\frac{4(1-p)^4}{(np^2)^2}\\&\quad =\mathbb {E}\left[ \bigg (\sum \limits _{k=1}^n\lambda _k^4\bigg )^2\right] -2\mathbb {E}\left[ \sum \limits _{k=2}^n\lambda _k^4\right] -1-\frac{4(1-p)^2}{np^2}\mathbb {E}\left[ \sum \limits _{k=2}^n\lambda _k^4\right] +\frac{4(1-p)^4}{(np^2)^2}\\&\quad =1+\frac{4-8p+4p^2}{np^2}+\frac{4-18p+44p^2-36p^3+6p^4}{(np^2)^2}+O\left( \frac{1}{n^3p^5}\right) \\&\qquad - \left( 2+\frac{4(1-p)^2}{np^2}\right) \left( \frac{2(1-p)^2}{np^2}+\frac{p^4-10p^3+10p^2-p}{(np^2)^2}+O\left( \frac{1}{(np)^3}\right) \right) -1+\frac{4(1-p)^4}{(np^2)^2}\\&=\frac{4(1-p)^2}{np^2}+\frac{4-18p+44p^2-36p^3+6p^4}{(np^2)^2}+O\left( \frac{1}{n^3p^5}\right) \\&\qquad -\frac{4(1-p)^2}{np^2}-\frac{2\left( p^4-10p^3+10p^2-p\right) }{(np^2)^2}-\frac{8(1-p)^4}{(np^2)^2}+\frac{4(1-p)^4}{(np^2)^2}\\&\quad =O\left( \frac{1}{n^3p^5}\right) \end{aligned}$$
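The final cancellation, leaving only the error term \(O(1/(n^3p^5))\), rests on a polynomial identity between the coefficients of \(1/(np^2)^2\). The sketch below checks it; the coefficients are taken directly from (5.17) and from the expansion of \(\mathbb {E}[\sum _{k=2}^n\lambda _k^4]\) used in the display (the \(1/(np^2)\) terms cancel trivially, \(4(1-p)^2-4(1-p)^2=0\)).

```python
import sympy as sp

p = sp.symbols('p')

# second-order coefficients (in 1/(np^2)^2) appearing in the display
from_5_17 = 4 - 18*p + 44*p**2 - 36*p**3 + 6*p**4   # from (5.17)
from_5_9  = p**4 - 10*p**3 + 10*p**2 - p            # from E[sum lambda_k^4]

# residual second-order coefficient after all cancellations
residual = from_5_17 - 2*from_5_9 - 8*(1 - p)**4 + 4*(1 - p)**4
assert sp.expand(residual) == 0
```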

Applying (3.10) and (1.3),

$$\begin{aligned} \frac{1}{n^2\theta _n^2}\mathbb {E}\left[ \bigg (\sum \limits _{k=2}^n\lambda _k^4-\frac{2(1-p)^2}{np^2}\bigg )^2\right] \xrightarrow []{n\rightarrow \infty }0 \end{aligned}$$
(5.18)

By Lemma 2,

$$\begin{aligned} |\lambda _k| \le a_n:=C\sqrt{\frac{(\log n)^{16 \xi }}{np}} \end{aligned}$$

with probability at least \(1-e^{-\theta (\log n)^\xi }\) for some constants \(\theta >0\) and \(C>0\). This bound is uniform in k. Therefore, with probability converging to 1,

$$\begin{aligned} \sum \limits _{k=2}^n\lambda _k^4\frac{1}{1-\lambda _k}=(1+b_n)\sum \limits _{k=2}^n\lambda _k^4, \end{aligned}$$

where \(b_n\) denotes an appropriate null sequence with \(|b_n|\le \frac{a_n}{1-a_n}\). In particular, \(b_n\lesssim a_n\), since \(a_n\rightarrow 0\).
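The elementary bound behind this step is that \(|1/(1-\lambda )-1|\le a/(1-a)\) whenever \(|\lambda |\le a<1\). A minimal numerical sanity check, with an illustrative value \(a=0.2\) standing in for \(a_n\):

```python
# For |lam| <= a < 1 we have 1/(1 - lam) = 1 + b with |b| <= a/(1 - a);
# the maximum is attained at lam = a.
a = 0.2  # illustrative value; in the proof a = a_n -> 0
grid = [a * k / 1000 for k in range(-1000, 1001)]  # values with |lam| <= a
bound = a / (1 - a)
worst = max(abs(1.0 / (1.0 - lam) - 1.0) for lam in grid)
assert worst <= bound + 1e-12
```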

$$\begin{aligned}&\frac{1}{n\theta _n}\mathbb {E}\left[ \left| \sum \limits _{k=2}^n\lambda _k^4\frac{1}{1-\lambda _k}-\frac{2(1-p)^2}{np^2}\right| \right] \le \frac{1}{n\theta _n}\mathbb {E}\left[ \left| \sum \limits _{k=2}^n\lambda _k^4-\frac{2(1-p)^2}{np^2}\right| +\left| b_n\sum \limits _{k=2}^n\lambda _k^4\right| \right] \\&\quad \le \left( \frac{1}{n^2\theta _n^2}\mathbb {E}\left[ \bigg (\sum \limits _{k=2}^n\lambda _k^4-\frac{2(1-p)^2}{np^2}\bigg )^2\right] \right) ^{1/2}+b_n\frac{1}{n\theta _n}\mathbb {E}\left[ \sum \limits _{k=2}^n\lambda _k^4\right] \\&\quad =o(1)+b_n\frac{\sqrt{(np)^5}}{n\sqrt{np^2(1-p)}}\left( \frac{2(1-p)^2}{np^2}+O\left( \frac{1}{(np^2)^{2}}\right) \right) \\&\quad =o(1)+b_nO\left( \frac{1}{\sqrt{p}}\right) \left( 1+O\left( \frac{1}{np^2}\right) \right) \\&\quad \lesssim o(1)+O\left( \sqrt{\frac{(\log n)^{16\xi }}{np^2}}\right) \left( 1+O\left( \frac{1}{np^2}\right) \right) , \end{aligned}$$

by (5.18), (3.10) and (5.9); this converges to 0 as \(n\rightarrow \infty \) by (1.3). An application of Markov’s inequality completes the proof. \(\square \)

Remark 1

It is quite possible that our condition on \(p_n\) is not optimal. To improve on it, however, we would need to consider even higher moments of the \(\lambda _k\)’s in (3.1). Given the amount of technicalities already caused by the consideration of \(\lambda _k^3\) and \(\lambda _k^4\), we refrain from doing so.