
Estimation of the Parameters in an Expanding Dynamic Network Model

Abstract

In this paper, we consider an expanding sparse dynamic network model where the time evolution is governed by a Markovian structure. Transition of the network from time t to t + 1 involves three components where a new node joins the existing network, some of the existing edges drop out, and new edges are formed with the incoming node. We consider long term behavior of the network density and establish its limit. We also study asymptotic distributions of the maximum likelihood estimators of key model parameters. We report results from a simulation study to investigate finite sample properties of the estimators.


Figure 1
Figure 2

References

  1. Athreya, K. B. and Lahiri, S.N. (2006). Measure theory and probability theory. Springer Science & Business Media, Berlin.


  2. Erdős, P. and Rényi, A. (1959). On random graphs I. Publ. Math. Debrecen 6, 290–297.


  3. Hoff, P. D., Raftery, A. E. and Handcock, M. S. (2002). Latent space approaches to social network analysis. J. Amer. Statist. Assoc. 97, 1090–1098.


  4. Han, Q., Xu, K. and Airoldi, E. (2015). Consistent estimation of dynamic and multi-layer block models. In: Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1511–1520.

  5. Holland, P., Laskey, K. and Leinhardt, S. (1983). Stochastic blockmodels: First steps. Soc. Netw. 5, 109–137.


  6. Matias, C. and Miele, V. (2017). Statistical clustering of temporal networks through a dynamic stochastic block model. J. R. Stat. Soc. Ser. B 79, 1119–1141.


  7. Rohe, K., Chatterjee, S. and Yu, B. (2011). Spectral clustering and the high-dimensional stochastic blockmodel. Ann. Stat. 39, 1878–1915.


  8. Zhang, X., Moore, C. and Newman, M. (2017). Random graph models for dynamic networks. Eur. Phys. J. B 90, 200.


  9. Zhang, Y., Levina, E. and Zhu, J. (2017). Estimating network edge probabilities by neighbourhood smoothing. Biometrika 104, 771–783.



Author information


Corresponding author

Correspondence to Wei Zhao.


Research partially supported by NSF grants number DMS 1811933 and 2006475.

Appendices

Appendix A: Proofs

For proving the asymptotic distributions of the MLEs, we will make use of a version of the martingale Central Limit Theorem, which is stated here for completeness. For each T ≥ 1, let \(\{Y_{iT}, {\mathcal F}_{iT}\}_{i=1}^{T}\) be a martingale difference array (MDA), i.e., \( {\mathcal F}_{1T}\subset {\mathcal F}_{2T}\subset \ldots \subset {\mathcal F}_{TT} \) are σ-fields such that YiT is measurable with respect to \({\mathcal F}_{iT}\) for every i = 1,…,T and \(E(Y_{iT}| {\mathcal F}_{i-1,T})=0\) for all i = 1,…,T. Then, under some conditions, \({\sum }_{i=1}^{T} Y_{iT}\) converges to a normal limit:

Lemma A.1.

Let \(\{Y_{iT}, {\mathcal F}_{iT}\}_{i=1}^{T}\) be an MDA such that

$$ {\sum}_{i=1}^{T} E (Y_{iT}^{2}| {\mathcal F}_{i-1,T} ){\rightarrow}_{p} \sigma^{2} $$
(A.1)

for some constant \(\sigma ^{2}\in (0,\infty )\) and that

$$ {\sum}_{i=1}^{T} E \big(Y_{iT}^{2}\mathbf{1}\{|Y_{iT}|>\epsilon\} \big| {\mathcal F}_{i-1,T} \big){\rightarrow}_{p} 0 $$
(A.2)

for all 𝜖 > 0. Then, \({\sum }_{i=1}^{T} Y_{iT} {\rightarrow }^{d} N(0,\sigma ^{2})\).

For a proof, see Theorem 16.1.1 of Athreya and Lahiri (2006). Note that condition Eq. A.2 holds trivially if

$$ {\sum}_{i=1}^{T} E (Y_{iT}^{4} | {\mathcal F}_{i-1,T} ){\rightarrow}_{p} 0. $$
(A.3)

Condition Eq. A.3 is often referred to as Lyapunov’s condition; it is what we will use in the proof of Theorem 3.2 in Section A.2 below.
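To make the two conditions of Lemma A.1 concrete, here is a small Python sketch (an illustration added here, not part of the paper) that simulates a toy martingale difference array satisfying Eqs. A.1 and A.3 and checks that its sums look approximately normal; the particular predictable weights are an arbitrary choice:

```python
import math
import random
import statistics

def mda_sum(T, rng):
    """Sum of a toy MDA Y_i = e_i * g(e_{i-1}) / sqrt(T), with e_i iid
    Rademacher signs, so E(Y_i | past) = 0 by construction.
    Conditional variances: sum_i g_i^2 / T -> E[g^2] = 1.25 (Eq. A.1);
    fourth moments: sum_i g_i^4 / T^2 <= 1.5^4 / T -> 0 (Eq. A.3)."""
    prev = 1.0
    total = 0.0
    for _ in range(T):
        e = rng.choice((-1.0, 1.0))
        g = 1.0 + 0.5 * prev          # predictable given the past
        total += e * g / math.sqrt(T)
        prev = e
    return total

rng = random.Random(0)
samples = [mda_sum(500, rng) for _ in range(2000)]
# Lemma A.1 then suggests N(0, 1.25), i.e. sd ~ 1.118:
print(round(statistics.mean(samples), 3), round(statistics.stdev(samples), 3))
```

With \(g \in \{0.5, 1.5\}\) equally likely, \(\sigma^{2} = \mathrm{E}[g^{2}] = 1.25\), and the empirical mean and standard deviation of the simulated sums should be close to \(0\) and \(\sqrt{1.25}\approx 1.118\).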

We now start with the proof of Theorem 3.1.

A.1: Proof of Theorem 3.1: Behavior of the Network Density

Proof of (i).

Let \(L_{t}\) be the total number of edges at time \(t\) and let \(l_{t} = \mathrm{E}(L_{t})\) denote its expectation. Note that conditional on the history up to time \(t\), at time \(t+1\) the expected number of existing edges that survive is \(L_{t}\kappa_{t+1}p\), and the expected number of new edges formed among the vertex pairs in \(V_{t}\) that had no edge at time \(t\) is \(\big(\frac{n_{t}(n_{t}-1)}{2} - L_{t}\big)\kappa_{t+1}q\). As for the incoming node, there are \(n_{t}\) possible pairings with existing nodes, and hence the expected number of new edges connecting the new node to the network is \(n_{t}\kappa_{t+1}a\). Thus, the expected total number of edges at time \(t+1\) is

$$ \begin{array}{@{}rcl@{}} l_{t+1} &=& \mathrm{E}(L_{t+1})\\ &=& \mathrm{E}\big[\mathrm{E}(L_{t+1}|L_{t})\big]\\ &=& \mathrm{E}\Big\{L_{t}\kappa_{t+1}p + \Big[\frac{n_{t}(n_{t}-1)}{2} - L_{t}\Big]\kappa_{t+1}q + n_{t}\kappa_{t+1}a\Big\}\\ &=& \mathrm{E}(L_{t})\kappa_{t+1}(p-q) + \frac{n_{t}(n_{t}-1)\kappa_{t+1}}{2}q + n_{t}\kappa_{t+1}a\\ &=& l_{t}\kappa_{t+1}(p-q) + \frac{n_{t}(n_{t}-1)\kappa_{t+1}}{2}q + n_{t}\kappa_{t+1}a. \end{array} $$

Note that \(\frac {n_{t}(n_{t}-1)}{2}\) is the number of potential edges at time \(t\). By the definition of the network density, \({\varrho }_{t} = \frac {2L_{t}}{n_{t}(n_{t}-1)}\), the above equation can be rewritten as:

$$ \frac{n_{t}(n_{t}+1)}{n_{t}(n_{t}-1)}\frac{\rho_{t+1}}{\kappa_{t+1}} = \rho_{t}(p-q) + q + \frac{2n_{t}}{n_{t}(n_{t}-1)}a, $$
(A.4)

where \(\rho_{t} = \mathrm{E}({\varrho}_{t})\) is the expected network density. Taking the limit on both sides of Eq. A.4, we conclude that

$$ \lim_{t\rightarrow\infty} \frac{\rho_{t}}{\kappa_{t}} = q. $$

Here we have used the two standing assumptions:

  1. the network is sparse, which gives \(\lim _{t\rightarrow \infty } \rho _{t} = 0\);

  2. the network size grows without bound, which gives \(\lim _{t\rightarrow \infty } n_{t} = \infty \). □
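The limit in part (i) can also be checked numerically by iterating the recursion for \(l_{t}\) displayed above. The following sketch uses arbitrary illustrative parameter values (the choices of \(p\), \(q\), \(a\), \(\gamma\), and the initial network are ours, not the paper's simulation settings):

```python
def density_ratio(T, p=0.5, q=0.1, a=0.3, gamma=0.5, n0=5, l0=4.0):
    """Iterate l_{t} = l_{t-1}*kappa_t*(p-q) + n(n-1)/2*kappa_t*q + n*kappa_t*a
    with kappa_t = t**(-gamma), and return rho_T / kappa_T, which should
    approach q as T grows."""
    l, n = l0, n0
    for t in range(1, T + 1):
        kappa = t ** (-gamma)
        l = l * kappa * (p - q) + n * (n - 1) / 2 * kappa * q + n * kappa * a
        n += 1  # one new node joins at each step
    rho = 2 * l / (n * (n - 1))
    return rho / (T ** (-gamma))

print(density_ratio(50), density_ratio(5000))  # second value is closer to q = 0.1
```

The ratio drifts toward \(q = 0.1\), and the error shrinks as \(T\) increases, matching the conclusion \(\lim_{t\to\infty}\rho_{t}/\kappa_{t} = q\).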

Proof of (ii).

Next we consider the limit of the variance of \({\varrho}_{t}\). Note that

$$ \begin{array}{@{}rcl@{}} {\text{Var}}({\varrho}_{t+1}) &=& \frac{4}{{n_{t}^{2}}(n_{t}+1)^{2}} {\text{Var}}(L_{t+1})\\ &=& \frac{4}{{n_{t}^{2}}(n_{t}+1)^{2}} \big[\mathrm{E}({\text{Var}}(L_{t+1}|L_{t})) + {\text{Var}}(\mathrm{E}(L_{t+1}|L_{t}))\big], \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} \mathrm{E}({\text{Var}}(L_{t+1}|L_{t})) &=& \mathrm{E}\Big[\kappa_{t+1}p(1-\kappa_{t+1}p)L_{t} + \kappa_{t+1}q(1-\kappa_{t+1}q)\Big(\frac{n_{t}(n_{t}-1)}{2} - L_{t}\Big)\\ &&+ \kappa_{t+1}a(1-\kappa_{t+1}a)n_{t}\Big]\\ &=& \mathrm{E}(L_{t})\big[\kappa_{t+1}p(1-\kappa_{t+1}p) - \kappa_{t+1}q(1-\kappa_{t+1}q)\big] \\ &&+ \kappa_{t+1}q(1-\kappa_{t+1}q)\frac{n_{t}(n_{t}-1)}{2} + \kappa_{t+1}a(1-\kappa_{t+1}a)n_{t},\\ {\text{Var}}(E(L_{t+1}|L_{t})) &=& {\text{Var}}\Big[L_{t}\kappa_{t+1}p + \Big(\frac{n_{t}(n_{t}-1)}{2} - L_{t}\Big)\kappa_{t+1}q + n_{t}\kappa_{t+1}a\Big]\\ &= &{\text{Var}}(L_{t})\kappa_{t+1}^{2}(p - q)^{2}. \end{array} $$

Thus,

$$ \begin{array}{@{}rcl@{}} {\text{Var}}\Big(\frac{{\varrho}_{t+1}}{\kappa_{t+1}}\Big) &=& \frac{4}{{n_{t}^{2}}(n_{t}+1)^{2}\kappa_{t+1}^{2}}\Big\{\mathrm{E}(L_{t})\big[\kappa_{t+1}p(1-\kappa_{t+1}p) - \kappa_{t+1}q(1-\kappa_{t+1}q)\big] \\ & &+ \kappa_{t+1}q(1-\kappa_{t+1}q)\frac{n_{t}(n_{t}-1)}{2} + \kappa_{t+1}a(1-\kappa_{t+1}a)n_{t} \\ & &+ {\text{Var}}(L_{t})\kappa_{t+1}^{2}(p - q)^{2}\Big\}\\[.1in] &=& \frac{2(n_{t}-1)}{n_{t}(n_{t}+1)^{2}\kappa_{t+1}}\mathrm{E}({\varrho}_{t})\big[p(1 - \kappa_{t+1}p) - q(1 - \kappa_{t+1}q)\big] + \frac{2(n_{t}-1)}{n_{t}(n_{t} + 1)^{2}\kappa_{t+1}}q(1 - \kappa_{t+1}q) \\ &&+ \frac{4}{n_{t}(n_{t}+1)^{2}\kappa_{t+1}}a(1-\kappa_{t+1}a) + \frac{(n_{t}-1)^{2}}{(n_{t}+1)^{2}}{\text{Var}}\Big(\frac{{\varrho}_{t}}{\kappa_{t}}\Big){\kappa_{t}^{2}}(p - q)^{2}. \end{array} $$

Taking the ‘\(\limsup \)’ on both sides, we have

$$ \limsup_{t\rightarrow \infty}{\text{Var}}\Big(\frac{{\varrho}_{t+1}}{\kappa_{t+1}}\Big) = (p - q)^{2} \limsup_{t\rightarrow \infty}{\text{Var}}\Big(\frac{{\varrho}_{t}}{\kappa_{t}}\Big){\kappa_{t}^{2}}. $$

Hence, it follows that

$$ \lim_{t\rightarrow \infty}{\text{Var}}\Big(\frac{{\varrho}_{t}}{\kappa_{t}}\Big) = 0. $$

Note that the above conclusion relies on the requirement \(\kappa_{t} = t^{-\gamma}\) with \(\gamma > 0\). □

Proof of (iii).

Follows from parts (i) and (ii) and Chebyshev’s inequality. □

A.2: Proof of Theorem 3.2: Asymptotic Normality of the MLEs

Since the proofs of Eqs. 5 and 6 are quite similar to that of Eq. 4, we only prove Eq. 4 here. Equation 3 can be written as

$$ {\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi_{t}(X^{t}|X^{t-1}, k_{t}, \hat{p}) = 0 $$

where

$$ \psi_{t}(X^{t}|X^{t-1}, k_{t}, p) = x_{ij}^{t-1} \frac{x_{ij}^{t} - \kappa_{t}p}{1-\kappa_{t}p}. $$

Expanding \({\sum }_{t=1}^{T} {\sum }_{i<j} \psi _{t}(X^{t}|X^{t-1}, k_{t}, \hat{p})\) around the true parameter value \(p\) by a second-order Taylor expansion, we have

$$ \begin{array}{@{}rcl@{}} 0 &=& {\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi_{t}(X^{t}|X^{t-1}, k_{t}, \hat{p}) \\ &=& {\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi_{t}(X^{t}|X^{t-1}, k_{t}, p) + (\hat{p}-p){\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi^{\prime}_{t}(X^{t}|X^{t-1}, k_{t}, p)\\ &&+\frac{(\hat{p}-p)^{2}}{2}{\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi^{\prime\prime}_{t}(X^{t}|X^{t-1}, k_{t}, p)\\ &=& I_{1T} + (\hat{p}-p)I_{2T} + \frac{(\hat{p}-p)^{2}}{2}R_{T}. \end{array} $$
(A.5)

A.2.1: Asymptotic Normality of \(I_{1T}\)

Recall that

$$ I_{1T} = {\sum}_{t=1}^{T} {\sum}_{i<j,i,j\in V_{t-1}} \psi_{t}(X^{t}|X^{t-1}, k_{t}, p) = {\sum}_{t=1}^{T}Z_{t}. $$

Now, we will show the asymptotic normality of \(I_{1T}\) using Lemma A.1. Recall that \(\mathcal {F}_{a} = \sigma \langle X^{t}: t\le a\rangle , ~ 0 \le a < \infty \). First, we will show that \(\{Z_{t}\}\) is a martingale difference array (MDA). Using the fact that \(X_{ij}^{t}\) is a {0,1}-valued random variable, we have

$$ \begin{array}{@{}rcl@{}} \mathrm{E}(Z_{t}| \mathcal{F}_{t-1}) &=& \mathrm{E}\left( {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}(X_{ij}^{t}-p\kappa_{t})}{1-\kappa_{t}p}\Big| X^{t-1}\right) \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1} \mathrm{E}\big(X_{ij}^{t}-p\kappa_{t}\big| X^{t-1}\big)}{1-\kappa_{t}p} \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}\big(X_{ij}^{t-1}\kappa_{t}p + (1-X_{ij}^{t-1})\kappa_{t}q - \kappa_{t}p\big)}{1-\kappa_{t}p} \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1} \big(X_{ij}^{t-1}\kappa_{t}p-\kappa_{t}p\big)}{1-\kappa_{t}p} \\ &=& 0, \end{array} $$

where the last equality uses \(X_{ij}^{t-1} = (X_{ij}^{t-1})^{2}\). Next consider \(\mathrm {E}({Z_{t}^{2}}| \mathcal {F}_{t-1})\).

$$ \begin{array}{@{}rcl@{}} \mathrm{E}({Z_{t}^{2}}| \mathcal{F}_{t-1}) &=& \mathrm{E}\bigg(\bigg[\frac{{\sum}_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1}(X_{ij}^{t}-p\kappa_{t})}{1-\kappa_{t}p}\bigg]^{2} \bigg| X^{t-1}\bigg) \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \bigg(\frac{X_{ij}^{t-1}}{1-\kappa_{t}p}\bigg)^{2} \mathrm{E}\big[(X_{ij}^{t}-p\kappa_{t})^{2}\big| X^{t-1}\big] \\ &&+ {\sum}_{i<j}{\sum}_{k<l,(k,l)\neq (i,j)} \frac{X_{ij}^{t-1}X_{kl}^{t-1}}{(1-\kappa_{t}p)^{2}} \mathrm{E}\big[(X_{ij}^{t}-p\kappa_{t})(X_{kl}^{t}-p\kappa_{t})\big| X^{t-1}\big] \\ &=& \text{I} + \text{II}. \end{array} $$

First,

$$ \begin{array}{@{}rcl@{}} \text{I} &=& {\sum}_{i<j,i,j\in V_{t-1}} \bigg(\frac{X_{ij}^{t-1}}{1-\kappa_{t}p}\bigg)^{2} \mathrm{E}\big[(X_{ij}^{t})^{2}-2\kappa_{t}pX_{ij}^{t} + {\kappa_{t}^{2}}p^{2}\big| X^{t-1}\big] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}}{(1-\kappa_{t}p)^{2}} \mathrm{E}\big[(1-2\kappa_{t}p)X_{ij}^{t} + {\kappa_{t}^{2}}p^{2}\big| X^{t-1}\big] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}}{(1-\kappa_{t}p)^{2}} \big[(1-2\kappa_{t}p)(X_{ij}^{t-1}\kappa_{t}p + (1-X_{ij}^{t-1})\kappa_{t}q) + {\kappa_{t}^{2}}p^{2}\big] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}}{(1-\kappa_{t}p)^{2}}(1-\kappa_{t}p)\kappa_{t}p\\ &=& \frac{\kappa_{t}p}{1-\kappa_{t}p}L_{t-1}, \end{array} $$

where \(L_{t}\) is the number of edges at time \(t\). Second,

$$ \begin{array}{@{}rcl@{}} \text{II} &=& {\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} \frac{X_{ij}^{t-1}X_{kl}^{t-1}}{(1-\kappa_{t}p)^{2}} \mathrm{E}(X_{ij}^{t}-\kappa_{t}p|X^{t-1})\mathrm{E}(X_{kl}^{t}-\kappa_{t}p| X^{t-1})\\ &=& {\sum}_{i,j}{\sum}_{k,l} \frac{X_{ij}^{t-1}\big[X_{ij}^{t-1}\kappa_{t}p + (1-X_{ij}^{t-1})\kappa_{t}q - \kappa_{t}p\big]X_{kl}^{t-1}\big[X_{kl}^{t-1}\kappa_{t}p + (1-X_{kl}^{t-1})\kappa_{t}q - \kappa_{t}p\big]}{(1-\kappa_{t}p)^{2}}\\ &=& {\sum}_{i,j}{\sum}_{k,l} \frac{\big[X_{ij}^{t-1}\kappa_{t}p -X_{ij}^{t-1}\kappa_{t}p\big]\big[X_{kl}^{t-1}\kappa_{t}p - X_{kl}^{t-1}\kappa_{t}p\big]}{(1-\kappa_{t}p)^{2}}\\ &=& 0. \end{array} $$

Thus,

$$ \mathrm{E}({Z_{t}^{2}}| \mathcal{F}_{t-1}) = \frac{\kappa_{t}p}{1-\kappa_{t}p}L_{t-1}. $$
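The conditional second-moment formula just derived is easy to check by Monte Carlo, since only the \(L_{t-1}\) pairs with an existing edge contribute to \(Z_{t}\), each edge surviving independently with probability \(\kappa_{t}p\). A sketch with arbitrary small values of \(L_{t-1}\), \(\kappa_{t}\), and \(p\) (our illustrative choices):

```python
import random

def z_second_moment(L, kappa, p, reps, rng):
    """Monte Carlo estimate of E(Z_t^2 | F_{t-1}) given L existing edges:
    Z_t = sum over existing edges of (X^t_ij - kappa*p) / (1 - kappa*p),
    where X^t_ij ~ Bernoulli(kappa*p) independently."""
    c = 1.0 - kappa * p
    acc = 0.0
    for _ in range(reps):
        z = sum(((1.0 if rng.random() < kappa * p else 0.0) - kappa * p)
                for _ in range(L)) / c
        acc += z * z
    return acc / reps

rng = random.Random(1)
L, kappa, p = 50, 0.2, 0.5
est = z_second_moment(L, kappa, p, 20000, rng)
theory = kappa * p / (1.0 - kappa * p) * L   # the formula above: 50/9 ~ 5.556
print(est, theory)
```

The Monte Carlo average should agree with \(\frac{\kappa_{t}p}{1-\kappa_{t}p}L_{t-1}\) up to simulation noise.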

By Theorem 3.1, we have

$$ \begin{array}{@{}rcl@{}} {\sum}_{t=1}^{T}\mathrm{E}({Z_{t}^{2}}| \mathcal{F}_{t-1}) &=& {\sum}_{t=1}^{T} \frac{\kappa_{t}p}{1-\kappa_{t}p}L_{t-1}\\ &=& p{\sum}_{t=1}^{T} \frac{n_{t}(n_{t}-1)\kappa_{t}\kappa_{t-1}}{2(1-\kappa_{t}p)}\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\\ &=& p{\sum}_{t=1}^{T} \frac{n_{t}(n_{t}-1)t^{-\gamma}(t-1)^{-\gamma}}{2(1-\kappa_{t}p)}\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\\ &&\sim \frac{{c_{1}^{2}}pq}{2}{{\int}_{0}^{T}}x^{2(1-\gamma)}\text{d}x \\ &&\sim \frac{{c_{1}^{2}}pq}{2}\frac{T^{3-2\gamma}}{3-2\gamma}, \text{ where } \kappa_{t}\sim c_{1}t^{-\gamma}. \end{array} $$

Here, 3 − 2γ > 0, i.e., γ < 3/2, is required. Therefore, if Lyapunov’s condition holds,

$$ \frac{I_{1T}}{\sigma_{T}} \stackrel{d}{\longrightarrow} N(0, 1), \text{ where } {\sigma_{T}^{2}} = \frac{{c_{1}^{2}}pqT^{3-2\gamma}}{2(3-2\gamma)}. $$

Now, we verify Lyapunov’s condition:

$$ \begin{array}{@{}rcl@{}} &&{\sum}_{t=1}^{T} \mathrm{E}[Z_{t}^{4}|\mathcal{F}_{t-1}] \\ &=& {\sum}_{t=1}^{T}\mathrm{E}\left( \left[\frac{{\sum}_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1}(X_{ij}^{t}-\kappa_{t}p)}{1-\kappa_{t}p}\right]^{4} \Big| X^{t-1}\right) \\ &=& {\sum}_{t=1}^{T}\frac{1}{(1-\kappa_{t}p)^{4}}\left\{{\sum}_{i<j,i,j\in V_{t-1}} (X_{ij}^{t-1})^{4} \mathrm{E}\big[(X_{ij}^{t}-\kappa_{t}p)^{4}\big| X^{t-1}\big]\right. \\ &&+ 4{\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} (X_{ij}^{t-1})^{3}X_{kl}^{t-1} \mathrm{E}\big[(X_{ij}^{t}-\kappa_{t}p)^{3}(X_{kl}^{t}-\kappa_{t}p)\big| X^{t-1}\big] \\ &&+ 6{\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} (X_{ij}^{t-1})^{2}(X_{kl}^{t-1})^{2} \mathrm{E}\big[(X_{ij}^{t}-\kappa_{t}p)^{2}(X_{kl}^{t}-\kappa_{t}p)^{2}\big| X^{t-1}\big] \\ &&+ {\sum}_{(i,j)}^{*}{\sum}_{(k,l)}^{*}{\sum}_{(m,n)}^{*}{\sum}_{(r,s)}^{*} X_{ij}^{t-1}X_{kl}^{t-1}X_{mn}^{t-1}X_{rs}^{t-1}\times \\ &&\ \left. \mathrm{E}\big[(X_{ij}^{t}-\kappa_{t}p)(X_{kl}^{t}-\kappa_{t}p)(X_{mn}^{t}-\kappa_{t}p)(X_{rs}^{t}-\kappa_{t}p)\big| X^{t-1}\big]\right\} \\ &\equiv& {\sum}_{t=1}^{T}\frac{1}{(1-\kappa_{t}p)^{4}}(\text{I} + 4\text{II} + 6\text{III} + \text{IV}), \end{array} $$

where in part IV the starred summations run over quadruples of mutually distinct node-pairs at time t − 1. Due to the conditional independence of the edges and the fact that \(X_{ij}^{t-1}\mathrm {E}[(X_{ij}^{t}-\kappa_{t}p)|X^{t-1}] = 0\), it is easy to show that II = 0 and IV = 0.

Since we assume that a single node joins the network at each time point, \(n_{t} = n_{0} + t\). To analyze I, using the fact that \((X_{ij}^{t})^{k} = X_{ij}^{t}\) for all integers \(k \ge 1\), we have

$$ \begin{array}{@{}rcl@{}} \text{I} &=& {\sum}_{i<j,i,j\in V_{t-1}} (X_{ij}^{t-1})^{4} \mathrm{E}[(X_{ij}^{t}-\kappa_{t}p)^{4}| X^{t-1}] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1} \mathrm{E}\big[X_{ij}^{t} - 4\kappa_{t}pX_{ij}^{t} + 6{\kappa_{t}^{2}}p^{2}X_{ij}^{t} - 4{\kappa_{t}^{3}}p^{3}X_{ij}^{t} + {\kappa_{t}^{4}}p^{4}\big| X^{t-1}\big] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1}(\kappa_{t}p - 4{\kappa_{t}^{2}}p^{2} + 6{\kappa_{t}^{3}}p^{3} - 4{\kappa_{t}^{4}}p^{4} + {\kappa_{t}^{4}}p^{4}) \\ &=& L_{t-1}(\kappa_{t}p - 4{\kappa_{t}^{2}}p^{2} + 6{\kappa_{t}^{3}}p^{3} - 3{\kappa_{t}^{4}}p^{4}). \end{array} $$
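The Bernoulli fourth-moment identity that drives term I, namely \(\mathrm{E}(X-m)^{4} = m - 4m^{2} + 6m^{3} - 3m^{4}\) for \(X\sim\) Bernoulli(\(m\)) with \(m\) playing the role of \(\kappa_{t}p\), can be verified exactly in a few lines:

```python
from fractions import Fraction

def fourth_central_moment(m):
    # E[(X - m)^4] for X ~ Bernoulli(m), computed directly from the definition:
    # X = 1 with prob. m (deviation 1 - m), X = 0 with prob. 1 - m (deviation -m).
    return m * (1 - m) ** 4 + (1 - m) * m ** 4

def expanded(m):
    # the polynomial appearing in the last line of the display for term I
    return m - 4 * m ** 2 + 6 * m ** 3 - 3 * m ** 4

checks = [fourth_central_moment(Fraction(num, den)) == expanded(Fraction(num, den))
          for num, den in ((1, 3), (1, 7), (2, 5))]
print(all(checks))  # True: the two expressions agree exactly
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the comparison.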

For part III, using similar technique, we have

$$ \begin{array}{@{}rcl@{}} \text{III} &=& {\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} (X_{ij}^{t-1})^{2}(X_{kl}^{t-1})^{2} \mathrm{E}\big[(X_{ij}^{t}-\kappa_{t}p)^{2}(X_{kl}^{t}-\kappa_{t}p)^{2}\big| X^{t-1}\big] \\ &= &{\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} X_{ij}^{t-1}X_{kl}^{t-1} \mathrm{E}\big[(X_{ij}^{t} - \kappa_{t}p)^{2}\big| X^{t-1}\big]\mathrm{E}\big[(X_{kl}^{t}-\kappa_{t}p)^{2}\big| X^{t-1}\big] \\ &=& {\sum}_{i<j,i,j\in V_{t-1}}{\sum}_{k<l,k,l\in V_{t-1},(k,l)\neq (i,j)} X_{ij}^{t-1}\kappa_{t}p(1-\kappa_{t}p) X_{kl}^{t-1}\kappa_{t}p(1-\kappa_{t}p)\\ &=& L_{t-1}^{2}{\kappa_{t}^{2}}p^{2}(1-\kappa_{t}p)^{2}. \end{array} $$

Thus

$$ \begin{array}{@{}rcl@{}} \sum\limits_{t=1}^{T} \mathrm{E}[Z_{t}^{4}|\mathcal{F}_{t-1}] &=& \sum\limits_{t=1}^{T} \frac{1}{(1-\kappa_{t}p)^{4}}\big[L_{t-1}(\kappa_{t}p - 4{\kappa_{t}^{2}}p^{2} + 6{\kappa_{t}^{3}}p^{3} - 3{\kappa_{t}^{4}}p^{4}) + L_{t-1}^{2}{\kappa_{t}^{2}}p^{2}(1-\kappa_{t}p)^{2}\big] \\ &&\sim \sum\limits_{t=1}^{T} \bigg\{\frac{n_{t}(n_{t}-1)}{2(1-\kappa_{t}p)^{4}}\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\kappa_{t-1}\kappa_{t}p + \frac{{n_{t}^{2}}(n_{t}-1)^{2}}{4(1-\kappa_{t}p)^{2}}\bigg(\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\bigg)^{2}\kappa_{t-1}^{2}{\kappa_{t}^{2}}p^{2}\bigg\} \\ &&\sim \sum\limits_{t=1}^{T} \frac{{c_{1}^{2}}t^{2-2\gamma}pq}{2} + \sum\limits_{t=1}^{T}\bigg\{\frac{{c_{1}^{2}}t^{2-2\gamma}pq}{2}\bigg\}^{2}\\ &&\sim \frac{{c_{1}^{2}}pq}{2}{{\int}_{0}^{T}}x^{2(1-\gamma)}\text{d}x + \frac{{c_{1}^{4}}p^{2}q^{2}}{4}{{\int}_{0}^{T}}x^{4(1-\gamma)}\text{d}x \\ &&\sim \frac{{c_{1}^{2}}pq}{2}\frac{T^{3-2\gamma}}{3-2\gamma} + \frac{{c_{1}^{4}}p^{2}q^{2}}{4}\frac{T^{5-4\gamma}}{5-4\gamma}, \text{ where } \kappa_{t}\sim c_{1}t^{-\gamma}. \end{array} $$

As a result,

$$ \frac{1}{{\sigma_{T}^{4}}} \sum\limits_{t=1}^{T} {\mathrm{E}[Z_{t}^{4}}|\mathcal{F}_{t-1}] \rightarrow 0,\text{\ as\ } T \rightarrow \infty. $$

Therefore, Lyapunov’s condition is verified. Hence, by the martingale CLT, we have

$$ \frac{I_{1T}}{\sigma_{T}} \stackrel{d}{\longrightarrow} N(0, 1), \text{ where } {\sigma_{T}^{2}} = \frac{{c_{1}^{2}}pqT^{3-2\gamma}}{2(3-2\gamma)}. $$

A.2.2: Analysis of \(I_{2T}\)

Next,

$$ I_{2T} = \sum\limits_{t=1}^{T} \sum\limits_{i<j,i,j\in V_{t-1}} \psi^{\prime}_{t}(X^{t}|X^{t-1}, k_{t}, p) = \sum\limits_{t=1}^{T} \sum\limits_{i<j,i,j\in V_{t-1}}\frac{X_{ij}^{t-1}(X_{ij}^{t}-1)\kappa_{t}}{(1-\kappa_{t}p)^{2}} = \sum\limits_{t=1}^{T} W_{t}. $$

The conditional expectation is,

$$ \begin{array}{@{}rcl@{}} \mathrm{E}(W_{t}| \mathcal{F}_{t-1}) &=& \mathrm{E}\bigg(\sum\limits_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}(X_{ij}^{t}-1)\kappa_{t}}{(1-\kappa_{t}p)^{2}}\bigg| X^{t-1}\bigg) \\ &=& \sum\limits_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}\kappa_{t} \mathrm{E}\big(X_{ij}^{t}-1\big|X^{t-1}\big)}{(1-\kappa_{t}p)^{2}} \\ &=& \sum\limits_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}\kappa_{t}\big(X_{ij}^{t-1}\kappa_{t}p + (1-X_{ij}^{t-1})\kappa_{t}q - 1\big)}{(1-\kappa_{t}p)^{2}} \\ &=& -\sum\limits_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}\kappa_{t} }{(1-\kappa_{t}p)}\\ &=& -\frac{L_{t-1}\kappa_{t} }{(1-\kappa_{t}p)}. \end{array} $$

Now let \(I^{*}_{2T} = {\sum }_{t=1}^{T}\mathrm {E}(W_{t}| \mathcal {F}_{t-1})\). Then,

$$ \begin{array}{@{}rcl@{}} I^{*}_{2T} &=& \sum\limits_{t=1}^{T}-\frac{L_{t-1}\kappa_{t}}{(1-\kappa_{t}p)}\\ &=& \sum\limits_{t=1}^{T}-\frac{n_{t}(n_{t}-1)\kappa_{t}\kappa_{t-1}}{2(1-\kappa_{t}p)}\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\\ &&\sim -\frac{{c_{1}^{2}}qT^{3-2\gamma}}{2(3-2\gamma)}. \end{array} $$

Now we need to show that \(I_{2T}/I_{2T}^{*} {\rightarrow }_{p} 1\). Note that

$$ {\text{Var}}(W_{t}) = \mathrm{E}\big({\text{Var}}(W_{t}|\mathcal{F}_{t-1})\big) + {\text{Var}}\big(\mathrm{E}(W_{t}|\mathcal{F}_{t-1})\big) $$

and

$$ \begin{array}{@{}rcl@{}} \mathrm{E}\big({\text{Var}}(W_{t}|\mathcal{F}_{t-1})\big) &=& \mathrm{E}\bigg[{\text{Var}}\bigg(\sum\limits_{i<j,i,j\in V_{t-1}} \frac{X_{ij}^{t-1}(X_{ij}^{t}-1)\kappa_{t}}{(1-\kappa_{t}p)^{2}}\bigg|\mathcal{F}_{t-1}\bigg)\bigg]\\ &=& \frac{{\kappa_{t}^{2}}}{(1-\kappa_{t}p)^{4}}\mathrm{E}\Big(\sum\limits_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1}{\text{Var}}(X_{ij}^{t}|\mathcal{F}_{t-1})\Big)\\ &=& \frac{{\kappa_{t}^{2}}}{(1-\kappa_{t}p)^{4}}\mathrm{E}\Big(\sum\limits_{i<j,i,j\in V_{t-1}} X_{ij}^{t-1}\kappa_{t}p(1-\kappa_{t}p)\Big) \\ &=& \frac{{\kappa_{t}^{3}}p}{(1-\kappa_{t}p)^{3}}\mathrm{E}(L_{t-1}) \\ &=& \frac{n_{t}(n_{t}-1){\kappa_{t}^{3}}\kappa_{t-1}p}{2(1-\kappa_{t}p)^{3}}\mathrm{E}\Big(\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\Big) \\ &=& O(T^{2-4\gamma}), \\ {\text{Var}}\big(\mathrm{E}(W_{t}|\mathcal{F}_{t-1})\big) &=& {\text{Var}}\bigg(\frac{L_{t-1}\kappa_{t}}{1-\kappa_{t}p}\bigg)\\ &=& \frac{{\kappa_{t}^{2}}}{(1 - \kappa_{t}p)^{2}}{\text{Var}}(L_{t-1})\\ &=& \frac{{\kappa_{t}^{2}}\kappa^{2}_{t-1}{n_{t}^{2}}(n_{t}-1)^{2}}{4(1 - \kappa_{t}p)^{2}}{\text{Var}}\Big(\frac{{\varrho}_{t-1}}{\kappa_{t-1}}\Big)\\ &=& O(T^{4-4\gamma}). \end{array} $$

Thus,

$$ {\text{Var}}\bigg(\frac{W_{t}}{I^{*}_{2T}}\bigg) = \frac{O(T^{4-4\gamma})}{O(T^{6-4\gamma})} = O(T^{-2}). $$

As a result, we have shown \(I_{2T}/I_{2T}^{*} {\rightarrow }_{p} 1\).

A.2.3: Analysis of \(R_{T}\)

Then,

$$ R_{T} = \sum\limits_{t=1}^{T} \sum\limits_{i<j,i,j\in V_{t-1}} \psi^{\prime\prime}_{t}(X^{t}|X^{t-1}, k_{t}, p) = \sum\limits_{t=1}^{T} \sum\limits_{i<j,i,j\in V_{t-1}}\frac{2X_{ij}^{t-1}(X_{ij}^{t}-1){\kappa_{t}^{2}}}{(1-\kappa_{t}p)^{3}} = \sum\limits_{t=1}^{T} 2Y_{t}. $$

By computations similar to those for \(I_{2T}\), one can show that \(\mathrm{E}|R_{T}| = O(T^{3-3\gamma})\) when \(\gamma<1\) and \(\mathrm{E}|R_{T}| = O(\log T)\) when \(\gamma=1\); this order is used in the next subsection.

A.2.4: Final Conclusion for the MLE

Rearranging Eq. A.5, we have

$$ \begin{array}{@{}rcl@{}} \hat{p} - p &=& -\frac{I_{1T}}{I_{2T}} - \frac{(\hat{p} - p)^{2}}{2}\frac{R_{T}}{I_{2T}} \\ &=& -\frac{I_{1T}}{I^{*}_{2T}(1+o_{p}(1))} - \frac{(\hat{p} - p)^{2}}{2}\frac{O_{p}(\mathrm{E}|R_{T}|)}{O_{p}(T^{3-2\gamma})}. \end{array} $$

We have already shown that

$$ \frac{I_{1T}}{\sigma_{T}} \stackrel{d}{\longrightarrow} N(0, 1), \text{ where } {\sigma_{T}^{2}} = \frac{{c_{1}^{2}}pqT^{3-2\gamma}}{2(3-2\gamma)}, $$

and

$$ I^{*}_{2T}\sim -\frac{{c_{1}^{2}}qT^{3-2\gamma}}{2(3-2\gamma)}. $$

and

$$ \mathrm{E}|R_{T}| = \begin{cases} O\bigg(\frac{{c_{1}^{3}}qT^{3-3\gamma}}{3-3\gamma}\bigg) & \text{ if } \gamma<1 \\ O(\log T) & \text{ if } \gamma=1 \end{cases}. $$

By standard arguments, this implies

$$ \begin{array}{@{}rcl@{}} \alpha_{T} (\hat{p} - p) &=& -\frac{(I_{1T}/\sigma_{T})\alpha_{T}}{(I^{*}_{2T}/\sigma_{T})(1+o_{p}(1))} - \alpha_{T}\frac{(\hat{p} - p)^{2}}{2}\frac{O_{p}(\mathrm{E}|R_{T}|)}{O_{p}(T^{3-2\gamma})}\\ &&\stackrel{d}{\longrightarrow} N(0, \tau^{2}), \end{array} $$

where \(\alpha_{T} = T^{(3-2\gamma)/2}\) and

$$ \begin{array}{@{}rcl@{}} \tau &=&\lim_{T\rightarrow\infty} \frac{\alpha_{T}\sigma_{T}}{|I^{*}_{2T}|}\\ &=& \lim_{T\rightarrow\infty} T^{(3-2\gamma)/2}\bigg(\frac{{c_{1}^{2}}pqT^{3-2\gamma}}{2(3-2\gamma)}\bigg)^{1/2}\frac{2(3-2\gamma)}{{c_{1}^{2}}qT^{3-2\gamma}}\\ &=& \sqrt{\frac{2(3-2\gamma)}{{c_{1}^{2}}}\frac{p}{q}}. \end{array} $$
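The last limit is a purely algebraic cancellation of the powers of \(T\), which can be sanity-checked numerically; the values of \(c_{1}\), \(p\), \(q\), and \(\gamma\) below are arbitrary illustrative choices:

```python
import math

def tau_ratio(T, c1=2.0, p=0.5, q=0.1, gamma=0.5):
    """alpha_T * sigma_T / |I*_2T| using the expressions above; the powers of
    T cancel, so the result should not depend on T."""
    alpha = T ** ((3 - 2 * gamma) / 2)
    sigma = math.sqrt(c1 ** 2 * p * q * T ** (3 - 2 * gamma) / (2 * (3 - 2 * gamma)))
    i2_star = c1 ** 2 * q * T ** (3 - 2 * gamma) / (2 * (3 - 2 * gamma))  # magnitude of I*_2T
    return alpha * sigma / i2_star

# closed form sqrt(2*(3-2*gamma)*p / (c1^2 * q)) with the same parameter values
closed_form = math.sqrt(2 * (3 - 2 * 0.5) * 0.5 / (2.0 ** 2 * 0.1))
print(tau_ratio(10.0), tau_ratio(1e6), closed_form)
```

The ratio is constant in \(T\) and matches the closed form \(\sqrt{2(3-2\gamma)p/({c_{1}^{2}}q)}\) up to floating-point rounding.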


Cite this article

Zhao, W., Lahiri, S. Estimation of the Parameters in an Expanding Dynamic Network Model. Sankhya A (2021). https://doi.org/10.1007/s13171-021-00258-z


Keywords

  • Bootstrap
  • limit distribution
  • maximum likelihood estimators
  • network density

AMS (2000) subject classification

  • Primary: 62E20; Secondary: 62M05