Contemporaneous Statistics for Estimation in Stochastic Actor-Oriented Co-evolution Models


Abstract

Stochastic actor-oriented models (SAOMs) can be used to analyse dynamic network data, collected by observing a network and a behaviour in a panel design. The parameters of SAOMs are usually estimated by the method of moments (MoM) implemented by a stochastic approximation algorithm, where statistics defining the moment conditions correspond in a natural way to the parameters. Here, we propose to apply the generalized method of moments (GMoM), using more statistics than parameters. We concentrate on statistics depending jointly on the network and the behaviour, because of the importance of their interdependence, and propose to add contemporaneous statistics to the usual cross-lagged statistics. We describe the stochastic algorithm developed to approximate the GMoM solution. A small simulation study supports the greater statistical efficiency of the GMoM estimator compared to the MoM.


Notes

  1. Because of the difficulty of these network models, this is not something we can prove, but it is supported by all practical evidence.

References

  • Amati, V., Schönenberger, F., & Snijders, T. A. B. (2015). Estimation of stochastic actor-oriented models for the evolution of networks by generalized method of moments. Journal de la Société Française de Statistique, 156(3), 140–165.

  • Block, P. (2015). Reciprocity, transitivity, and the mysterious three-cycle. Social Networks, 40, 163–173.

  • Bollen, K. A., Kolenikov, S., & Bauldry, S. (2014). Model-implied instrumental variable—generalized method of moments (MIIV-GMM) estimators for latent variable models. Psychometrika, 79(1), 20–50.

  • Breusch, T., Qian, H., Schmidt, P., & Wyhowski, D. (1999). Redundancy of moment conditions. Journal of Econometrics, 91(1), 89–111.

  • Burguete, J. F., Gallant, A. R., & Souza, G. (1982). On unification of the asymptotic theory of nonlinear econometric models. Econometric Reviews, 1(2), 151–190.

  • Burk, W. J., Kerr, M., & Stattin, H. (2008). The co-evolution of early adolescent friendship networks, school involvement, and delinquent behaviors. Revue française de sociologie, 49(3), 499–522.

  • Ebbers, J. J., & Wijnberg, N. M. (2010). Disentangling the effects of reputation and network position on the evolution of alliance networks. Strategic Organization, 8(3), 255–275.

  • Gallant, A. R., Hsieh, D., & Tauchen, G. (1997). Estimation of stochastic volatility models with diagnostics. Journal of Econometrics, 81(1), 159–192.

  • Hall, A. R. (2005). Generalized method of moments. Oxford: Oxford University Press.

  • Hansen, L. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50, 1029–1054.

  • Hansen, L. P., & Singleton, K. J. (1982). Generalized instrumental variables estimation of nonlinear rational expectations models. Econometrica, 50(5), 1269–1286.

  • Haynie, D. L., Doogan, N. J., & Soller, B. (2014). Gender, friendship networks, and delinquency: A dynamic network approach. Criminology, 52(4), 688–722.

  • Holland, P. W., & Leinhardt, S. (1977). A dynamic model for social networks. Journal of Mathematical Sociology, 5(1), 5–20.

  • Hunter, D. R. (2007). Curved exponential family models for social networks. Social Networks, 29, 216–230.

  • Kim, J.-S., & Frees, E. W. (2007). Multilevel modeling with correlated effects. Psychometrika, 72(4), 505–533.

  • Koskinen, J. H., & Snijders, T. A. B. (2007). Bayesian inference for dynamic social network data. Journal of Statistical Planning and Inference, 137, 3930–3938.

  • Luce, R., & Suppes, P. (1965). Preference, utility, and subjective probability. Handbook of Mathematical Psychology, 3, 249–410.

  • Mátyás, L. (1999). Generalized method of moments estimation. Cambridge: Cambridge University Press.

  • McFadden, D. (1973). Conditional logit analysis of qualitative choice behavior. Oakland: Institute of Urban and Regional Development, University of California.

  • McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415–444.

  • Meyer, C. D. (2000). Matrix analysis and applied linear algebra. Philadelphia: SIAM.

  • Michell, L., & West, P. (1996). Peer pressure to smoke: The meaning depends on the method. Health Education Research, 11(1), 39–49.

  • Newey, W., & Windmeijer, F. (2009). Generalized method of moments with many weak moment conditions. Econometrica, 77(3), 687–719.

  • Neyman, J., & Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference: Part II. Biometrika, 20, 263–294.

  • Niezink, N. M. D., & Snijders, T. A. B. (2017). Co-evolution of social networks and continuous actor attributes. The Annals of Applied Statistics, 11(4), 1948–1973.

  • Niezink, N. M. D., Snijders, T. A. B., & van Duijn, M. A. J. (2019). No longer discrete: Modeling the dynamics of social networks and continuous behavior. Sociological Methodology. https://doi.org/10.1177/0081175019842263.

  • Norris, J. R. (1997). Markov chains. Cambridge: Cambridge University Press.

  • Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 50(302), 157–175.

  • Pflug, G. C. (1990). Non-asymptotic confidence bounds for stochastic approximation algorithms with constant step size. Monatshefte für Mathematik, 110(3–4), 297–314.

  • Polyak, B. T. (1990). A new method of stochastic approximation type. Automation and Remote Control, 51, 937–946.

  • Ripley, R. M., Snijders, T. A. B., Boda, Z., Vörös, A., & Preciado, P. (2019). Manual for RSiena. Groningen: ICS, Department of Sociology, University of Groningen.

  • Robbins, H., & Monro, S. (1951). A stochastic approximation method. The Annals of Mathematical Statistics, 22, 400–407.

  • Ruppert, D. (1988). Efficient estimations from a slowly convergent Robbins–Monro process. Technical report, Cornell University Operations Research and Industrial Engineering.

  • Schulte, M., Cohen, N. A., & Klein, K. J. (2012). The coevolution of network ties and perceptions of team psychological safety. Organization Science, 23(2), 564–581.

  • Schweinberger, M., & Snijders, T. A. B. (2007). Markov models for digraph panel data: Monte Carlo-based derivative estimation. Computational Statistics & Data Analysis, 51(9), 4465–4483.

  • Snijders, T. A. B. (1996). Stochastic actor-oriented models for network change. Journal of Mathematical Sociology, 21(1–2), 149–172.

  • Snijders, T. A. B. (2001). The statistical evaluation of social network dynamics. Sociological Methodology, 31(1), 361–395.

  • Snijders, T. A. B. (2005). Models for longitudinal network data. In P. J. C. Conte, J. Scott, & S. Wasserman (Eds.), Models and methods in social network analysis (pp. 215–247). Cambridge: Cambridge University Press.

  • Snijders, T. A. B. (2017a). Stochastic actor-oriented models for network dynamics. Annual Review of Statistics and Its Application, 4, 343–363.

  • Snijders, T. A. B. (2017b). Siena algorithms. Technical report, University of Groningen, University of Oxford. http://www.stats.ox.ac.uk/~snijders/siena/Siena_algorithms.pdf.

  • Snijders, T. A. B., Koskinen, J., & Schweinberger, M. (2010a). Maximum likelihood estimation for social network dynamics. The Annals of Applied Statistics, 4(2), 567–588.

  • Snijders, T. A. B., & Lomi, A. (2019). Beyond homophily: Incorporating actor variables in statistical network models. Network Science, 7(1), 1–19.

  • Snijders, T. A. B., Steglich, C. E. G., & Schweinberger, M. (2007). Modeling the co-evolution of networks and behavior. In K. van Montfort, H. Oud, & A. Satorra (Eds.), Longitudinal models in the behavioral and related sciences (pp. 41–71). Mahwah, NJ: Lawrence Erlbaum.

  • Snijders, T. A. B., Van de Bunt, G. G., & Steglich, C. E. G. (2010b). Introduction to stochastic actor-based models for network dynamics. Social Networks, 32(1), 44–60.

  • Snijders, T. A. B., & van Duijn, M. A. J. (1997). Simulation for statistical inference in dynamic network models. In R. Conte, R. Hegselmann, & P. Terna (Eds.), Simulating social phenomena (pp. 493–512). Berlin: Springer.

  • Steglich, C. E. G., Snijders, T. A. B., & Pearson, M. (2010). Dynamic networks and behavior: Separating selection from influence. Sociological Methodology, 40(1), 329–393.

  • Strang, G. (1976). Linear algebra and its applications. New York: Academic Press.

  • Train, K. E. (2009). Discrete choice methods with simulation. Cambridge: Cambridge University Press.

Author information

Corresponding author

Correspondence to Viviana Amati.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Part of this research has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC Grant Agreement No. 319209.

Appendix

We describe in detail the algorithm used to approximate the GMoM estimate and outline the additional steps that need to be performed compared to the “regular” algorithm for approximating the MoM estimate (Snijders, 2001). In the following, we refer to the Robbins–Monro procedure for the MoM as the “MoM algorithm,” and to its modified version for the GMoM as the “GMoM algorithm.” To keep the notation transparent, we retain the notation of Section 3.2.

The GMoM estimate for \(\theta \) is the value \(\hat{\theta }\) such that

$$\begin{aligned} B\;E_{\theta }\,[\,s^*(X,Z)-s^*(x,z)\,]=0, \end{aligned}$$

with \(B=\frac{\partial }{\partial \theta }\;E_{\theta }\,[\,s^*(X,Z)- s^*(x,z)\,]'\,W=\Gamma \; W\). The additional steps in the GMoM algorithm compared to the MoM algorithm are related to the approximation of the matrix B.
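
To illustrate the role of B, the following minimal Python sketch (a schematic illustration with placeholder arrays, not part of RSiena or of this paper's code) shows how the \(p\times q\) matrix \(B=\Gamma \,W\) collapses the \(q\) moment conditions into \(p\) estimating equations, one per parameter.

```python
import numpy as np

# Minimal sketch (illustrative placeholders, not the RSiena implementation):
# B = Gamma * W combines q > p moment conditions into p estimating equations.
rng = np.random.default_rng(0)
p, q = 4, 6                                   # p parameters, q statistics

gamma_hat = rng.normal(size=(p, q))           # stand-in for d/dtheta E[s*(X,Z) - s*(x,z)]
stats = rng.normal(size=(500, q))             # stand-in for simulated statistics
W_hat = np.linalg.inv(np.cov(stats.T))        # weight matrix: inverse covariance of the statistics
deviation = stats.mean(axis=0)                # stand-in for E_theta[s*(X,Z)] - s*(x,z)

B = gamma_hat @ W_hat                         # (p x q) combination matrix
estimating_equations = B @ deviation          # p-dimensional left-hand side of B E[...] = 0
print(estimating_equations.shape)             # (4,)
```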

Let \(\theta _0\) be an initial value of the parameter \(\theta \). For the network and behavioural rate parameters, the default initial values are computed as a function of the number of network changes and the number, direction and size of the behavioural changes. For the evaluation functions, the default initial values are obtained by setting all parameters except the outdegree and the linear shape parameters equal to 0. For the outdegree parameter, the initial value is computed as a function of the log-odds of the probability of a tie being present given the observed data (Snijders, 2005). For the linear shape parameter, the initial value is computed as a function of the absolute mean and the variance of the behaviour over the observation period. For relatively simple models, the default initial values usually work well. For more complex models, the GMoM algorithm might converge slowly, and better initial values can be obtained by fitting the regular MoM to the data and setting \(\theta _0\) to the MoM estimate.
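
As a rough illustration of the default starting value for the outdegree parameter, the sketch below uses the logit of the observed tie density. This is a hedged simplification of the actual defaults (Snijders, 2005; Ripley et al., 2019), and `initial_outdegree` is a hypothetical helper, not an RSiena function.

```python
import numpy as np

# Hedged sketch: a simple stand-in for the default outdegree starting value,
# taken here as the log-odds of a tie being present in the observed panels.
# The exact RSiena defaults differ; this only conveys the idea.
def initial_outdegree(adjacency_panels):
    densities = []
    for x in adjacency_panels:                      # x: n x n binary adjacency matrix
        n = x.shape[0]
        densities.append(x.sum() / (n * (n - 1)))   # observed tie density (no self-ties)
    d = float(np.mean(densities))
    return np.log(d / (1 - d))                      # log-odds of observing a tie

# Illustrative use with two random 20-actor waves
rng = np.random.default_rng(1)
waves = [(rng.random((20, 20)) < 0.1).astype(int) for _ in range(2)]
for x in waves:
    np.fill_diagonal(x, 0)
print(initial_outdegree(waves))
```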

The estimation algorithm consists of three phases.

  1. 1.

    Phase 1

    The first phase is a preliminary phase used to approximate the quantities involved in the Robbins–Monro step, namely the matrix \(\Gamma \) of first-order derivatives of the statistics, the matrix of weights W and the matrix D of the first-order derivatives of the function \(B\,E_{\theta }\,[\,s^*(X,Z)-s^*(x,z)\,]\). We denote these approximations by \(\Gamma _0\), \(W_0\) and \(D_0\), respectively.

    1. 1.1.

      Given the initial value \(\theta _0\), simulate \(n_1\) network–behaviour co-evolution trajectories.

    2. 1.2.

      Approximate the generic element of \(\Gamma _0\), defined as

      $$\begin{aligned} \gamma _{pk}\ = \ \frac{\partial }{\partial \theta _p}E_{\theta }[s_k^*(X,Z)-s_k^*(x,z)] \end{aligned}$$

      by averaging the product of the simulated statistics and the score functions [score function method, Schweinberger and Snijders (2007)] or by averaging difference quotients using random numbers [finite difference method, Snijders (2001)]. The former method is preferred since it is more computationally efficient for complex models and leads to the following approximation

      $$\begin{aligned} \hat{\gamma }_{pk}=\frac{1}{n_1}\sum \limits _{h=1}^{n_1}U_{ph}\,[s_k^{*}\big (x^{(h)},z^{(h)}\big )-s_k^*\big (x,z\big )], \end{aligned}$$

      where \(s_k^{*}\big (x^{(h)},z^{(h)}\big )\) and \(U_{ph}\big [p\big (x^{(h)},z^{(h)}\big )\big ]=\frac{\partial }{\partial \theta _p}\log \big [p\big (x^{(h)},z^{(h)}\big )\big ]\) are, respectively, the value of the kth statistic and the pth component of the score function vector computed for the hth simulation.

    3. 1.3.

      Compute the sample covariance matrix of the simulated statistics, denoted by \(\hat{\Sigma }\). The generic element of this matrix is computed as

      $$\begin{aligned} \hat{\Sigma }_{ku}=\frac{1}{n_1}\sum \limits _{h=1}^{n_1}\; \Big [s_k^{*}\big (x^{(h)},z^{(h)}\big )-\overline{s}^{\;*}_k(x,z)\Big ]\,\Big [s_u^{*}\big (x^{(h)},z^{(h)}\big )-\overline{s}^{\;*}_u(x,z)\Big ], \end{aligned}$$

      with \(\overline{s}^{\;*}_k(x,z)\) the sample average of the values of the simulated statistic \(s_k^{*}\), i.e.

      $$\begin{aligned} \overline{s}^{\;*}_k(x,z)=\frac{1}{n_1}\sum \limits _{h=1}^{n_1} s_k^{*}\big (x^{(h)},z^{(h)}\big ). \end{aligned}$$
    4. 1.4.

      Approximate the weight matrix W by computing \(W_0=\hat{\Sigma }^{-1}\).

    5. 1.5.

      Compute \(B_0=\Gamma _0\; W_0\).

    6. 1.6.

      Divide all elements of \(B_0\) by their row sums.

    7. 1.7.

      Approximate the matrix

      $$\begin{aligned} D=\frac{\partial }{\partial \theta }\, B\, E_{\theta }\,[\,s^*(X,Z)- s^*(x,z)\,] \end{aligned}$$

      by using \(D_0=B_0\; \Gamma _0'\).

    8. 1.8.

      Determine the first rough estimate of \(\theta \) using one Newton–Raphson step

      $$\begin{aligned} \hat{\theta }=\theta _0-\alpha \,D_0^{-1}\,B_0\,[\,\overline{s}\,^*(x,z)-s^*(x,z)\,], \end{aligned}$$

      with \(\overline{s}\,^*(x,z)\) the vector of the averages of the simulated statistics and \(\alpha \) a number between 0 and 1.

    In comparison with Phase 1 of the MoM algorithm, the GMoM algorithm requires the additional steps 1.3–1.6 and the computation of the larger matrix \(\Gamma \) of first-order derivatives of the statistics. Thus, a higher number of simulations \(n_1= 100 + (7\times q)\) is required.

    In a previous version of the GMoM algorithm (Amati et al., 2015), the matrix W was computed as a block-diagonal matrix, where the blocks correspond to the parameters of the rate functions and those of the evaluation functions. Here, we use a full weight matrix W, since we noticed that this choice gives more accurate estimates of the rate parameters and makes the algorithm more stable.

  2. 2.

    Phase 2

    The second phase carries out the estimation and follows the MoM algorithm except for the inclusion of the matrix B in the Robbins–Monro step. During the second phase, the matrices D and B are kept fixed at \(D_0\) and \(B_0\).

    The second phase of the algorithm is divided into L sub-phases, characterized by a decreasing value of the step size \(\alpha \) and the use of the diagonal matrix obtained from the matrix D (Ruppert, 1988; Polyak, 1990; Pflug, 1990).

    Each of the L sub-phases (advice: \(L=5\)) comprises the following steps:

    1. 2.1.

      Set \(\hat{\theta }_1=\hat{\theta }\), \(\alpha \) (advice: \(\alpha =0.2\) for the first sub-phase), and \(n_2=n_{2\ell }\), with \(\ell \) being the number of the current sub-phase.

    2. 2.2.

      For \(r =1,\, \ldots \,, n_{2\ell }-1\), simulate one co-evolution trajectory for the current value of the parameter \(\hat{\theta }_r\) and compute the corresponding value of the statistics \(s^{*}\big (x^{(r)},z^{(r)}\big )\). Update \(\theta \) by the Robbins–Monro step

      $$\begin{aligned} \hat{\theta }_{r+1}=\hat{\theta }_r-\alpha \ D_0^{-1}B_0\,[\,s^{*}\big (x^{(r)},z^{(r)}\big )- s^*(x,z)\,] \end{aligned}$$
    3. 2.3.

      Update the value of \(\theta \) by

      $$\begin{aligned} \hat{\theta }=\frac{1}{n_{2\ell }}\sum \limits _{r=1}^{n_{2\ell }}\hat{\theta }_r \end{aligned}$$
    4. 2.4.

      Set \(\alpha =\alpha /2\), \(n_{2\ell }=(7+p)\times (2.52)^{\ell +1}\).

    The average of the values \(\hat{\theta }_r\) over the last sub-phase is the final estimate of \(\theta \). Thus, the GMoM estimate \(\hat{\theta }\) is the value obtained in step 2.3 of the last sub-phase of Phase 2.

  3. 3.

    Phase 3

    This phase computes the standard errors of the estimates and evaluates the convergence of the algorithm.

    1. 3.1.

      Simulate \(n_3\) co-evolution trajectories using the GMoM estimate \(\hat{\theta }\).

    2. 3.2.

      Update the approximation of the matrices B, \(\Gamma \) and \(\Sigma \) as in steps (1.2)–(1.6) of Phase 1. We denote these estimates by \(\hat{B}\), \(\hat{\Gamma }\) and \(\hat{\Sigma }\).

    3. 3.3.

      Compute the covariance matrix of the GMoM estimator as

      $$\begin{aligned} \Sigma _{\widehat{\theta }_{\mathrm{GMoM}}}=(\hat{B}\;\hat{\Gamma })^{- 1}(\hat{B}\;\hat{\Sigma }\;\hat{B}')((\hat{B}\;\hat{\Gamma })^{-1})' \end{aligned}$$
    4. 3.4.

      Calculate the t-ratios for convergence

      $$\begin{aligned} t\text {-ratio}_p = \frac{\big (\hat{B}\,[\,\overline{s}\,^*(x,z)-s^*(x,z)\,]\big )_p}{\sqrt{ \left( \hat{B}\;\hat{\Sigma }\;\hat{B}' \right) _{pp}}}. \end{aligned}$$

      As a rule of thumb, convergence is considered satisfactory when these t-ratios are smaller than 0.1 in absolute value.

Compared to the MoM algorithm, the GMoM algorithm requires the additional estimation of B, a larger matrix of derivatives \(\Gamma \), and a larger covariance matrix \(\Sigma \) (step 3.2). Thus, a higher number of simulations is required. For the simulation study in Section 4 and the example in Section 5, we used \(n_3=10{,}000\).
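
To make the three phases concrete, the sketch below outlines the GMoM Robbins–Monro procedure in Python. It is a schematic rendering of the steps described above, not the RSiena implementation: `simulate` is a hypothetical stand-in for the SAOM network–behaviour simulator, assumed to return the vector of statistics and the score vector for one simulated trajectory, and the numerical constants follow the defaults quoted above.

```python
import numpy as np

def gmom_estimate(theta0, s_obs, simulate, n1=200, n3=1000,
                  n_subphases=5, alpha=0.2):
    """Schematic sketch of the three-phase GMoM algorithm (not RSiena code).

    `simulate(theta)` is a hypothetical stand-in for the SAOM co-evolution
    simulator: it must return (statistics, score) for one simulated trajectory,
    i.e. the vector s*(x^(h), z^(h)) and the score vector U_h.
    """
    theta = np.asarray(theta0, dtype=float)
    s_obs = np.asarray(s_obs, dtype=float)
    p = len(theta)

    def monte_carlo(theta, n):
        draws = [simulate(theta) for _ in range(n)]
        stats = np.array([d[0] for d in draws])             # (n, q) simulated statistics
        scores = np.array([d[1] for d in draws])            # (n, p) score vectors
        dev = stats - s_obs                                  # deviations from observed statistics
        gamma = scores.T @ dev / n                           # score-function estimate of Gamma
        sigma = np.cov(stats.T)                              # covariance of the statistics
        B = gamma @ np.linalg.inv(sigma)                     # B = Gamma W with W = Sigma^{-1}
        B = B / B.sum(axis=1, keepdims=True)                 # divide rows by their row sums
        return dev, gamma, sigma, B

    # Phase 1: approximate Gamma, W, B, D and take one Newton-Raphson-type step.
    dev, gamma0, _, B0 = monte_carlo(theta, n1)
    D0 = B0 @ gamma0.T
    theta = theta - alpha * np.linalg.solve(D0, B0 @ dev.mean(axis=0))

    # Phase 2: Robbins-Monro sub-phases; D and B are kept fixed at D0 and B0.
    # (In practice a diagonal approximation of D0 may be used, cf. the text above.)
    n2 = int((7 + p) * 2.52 ** 2)
    for _ in range(n_subphases):
        iterates = []
        for _ in range(n2):
            s_sim, _ = simulate(theta)
            theta = theta - alpha * np.linalg.solve(D0, B0 @ (s_sim - s_obs))
            iterates.append(theta)
        theta = np.mean(iterates, axis=0)                    # average over the sub-phase
        alpha, n2 = alpha / 2, int(n2 * 2.52)

    # Phase 3: standard errors and convergence t-ratios from n3 new simulations.
    dev, gamma_hat, sigma_hat, B_hat = monte_carlo(theta, n3)
    BG_inv = np.linalg.inv(B_hat @ gamma_hat.T)
    cov_theta = BG_inv @ (B_hat @ sigma_hat @ B_hat.T) @ BG_inv.T
    t_ratios = (B_hat @ dev.mean(axis=0)) / np.sqrt(np.diag(B_hat @ sigma_hat @ B_hat.T))
    return theta, cov_theta, t_ratios
```

With a genuine SAOM simulator plugged in for `simulate`, the function returns the parameter estimate, its covariance matrix, and the convergence t-ratios.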


Cite this article

Amati, V., Schönenberger, F. & Snijders, T.A.B. Contemporaneous Statistics for Estimation in Stochastic Actor-Oriented Co-evolution Models. Psychometrika 84, 1068–1096 (2019). https://doi.org/10.1007/s11336-019-09676-3

