In 1904, Spearman reported the positive manifold—a pattern of exclusively positive correlations among cognitive test scores—for which he proposed the g-factor theory of general intelligence in 1927. According to g-factor theory, general intelligence is a stable trait that cannot be directly observed but gives rise to observed test scores. The g-factor model quickly became the dominant theory of intelligence and, while the theory was revised and expanded, the fundamental idea went unchallenged for almost a century. Modifications to the g-factor theory (e.g., multi-factor models, hierarchical factor models, and their merger) were proposed and debated at length (e.g., Carroll, 1993; Gardner, 1983; Horn & Cattell, 1966; McGrew & Flanagan, 1998; Sternberg, 1985; Thurstone, 1935). Still, all these theories held intelligence and its facets to be unobservable underlying abilities. The theory’s dominance spread from intelligence to personality research, where similar debates about the underlying factor structure of personality continue to hold great interest, and to other areas of psychology where common factors came to characterize other traits, behaviors, and attitudes. Spearman’s idea now permeates many areas of psychological research.
Although alternative characterizations of the nature of psychological phenomena such as the positive manifold had been proposed (e.g., the sampling model of Thomson (1916) and the gene–environment interaction model of Dickens and Flynn (2001)), the g-factor model received its most serious challenge almost a century later, when van der Maas and colleagues published “A dynamical model of general intelligence” (van der Maas et al., 2006). This landmark paper introduced the mutualism model, which accounts for the positive manifold in cognitive test scores by invoking a network of mutually interacting components. The mutualism model challenged general factor theory by suggesting that intelligent behavior—and the observed pattern of positive correlations among test scores—could evolve during development from mutualistic interactions between cognitive, behavioral, and biological factors rather than from unknown common causes. That is, the very same positive manifold that was assumed to arise from an underlying latent factor could, it turned out, stem from a dynamical system of positively interacting components (van der Maas, Kan, Marsman, & Stevenson, 2017; van der Maas, Savi, Hofman, Kan, & Marsman, 2019). The mutualism model of intelligence offered a radically new framework for thinking about how a psychological attribute might arise and how it might be related to its constituent parts. And, as the past 15 years have shown, the proposed network approach has proved to be a promising addition to the pantheon of psychometric theories (e.g., Marsman et al., 2018).
In the 15 years since van der Maas et al.’s (2006) paper was published, its central idea has sparked an entirely new psychometric subfield. This subfield, which has come to be known as network psychometrics, defines psychological constructs (e.g., intelligence, mental disorders, personality traits, and attitudes) as complex systems of behavioral, cognitive, environmental, and biological factors (Borsboom, 2017; Cramer et al., 2012; Dalege et al., 2016; Savi, Marsman, van der Maas, & Maris, 2019). These biopsychosocial systems need not, in principle, contain hidden units; they are defined by the local interactions between the system’s elements, which form a network. Psychometric network theory asks how psychological phenomena emerge from these local interactions, while psychometric network analysis aims to infer these local interactions from empirical data.
While the field has made great strides, it is still in its youth. Its short history has been shaped by the co-development of network theories and models alongside technical methodological advances and software releases. In the following sections, we briefly review this history of methodological innovations in light of the maturation of psychological network theory. We then discuss which questions the current methods can and cannot yet answer, and how the papers in this special issue contribute to three of the most pressing open questions.
1 Historical Trajectory of Network Modeling in Psychology
Borsboom (2008) took van der Maas et al.’s (2006) mutualism idea and considered it as an alternative framework for conceptualizing psychopathological disorders, suggesting that these, too, be seen as “causal networks consisting of symptoms and direct causal relations between them” (p. 1089). In the conclusion of that paper, Borsboom noted that, as of 2008, “there [was] currently no worked-out psychometric theory to go with [the network] perspective” (p. 1106), and he called for further empirical and psychometric work to elaborate on the network perspective on psychopathology. That paper marks the beginning of a concerted effort to apply network modeling theories and methods to psychological data, and to develop new theories and methods in response to those data and to the research questions of psychologists.
A few years later, seminal developments in theory and software helped the field take off. Cramer, Waldorp, van der Maas, and Borsboom (2010) took the first steps toward developing a network theory of psychopathology and showing how such a network could be modeled and visualized. Borsboom and Cramer (2013) provided examples of how to generate network visualizations from data and compute network properties like path lengths, clustering coefficients, and centrality measures. Their applied examples laid the groundwork for researchers to create network visualizations and compute network metrics from their data. At the same time, two key software developments arrived. Epskamp, Cramer, Waldorp, Schmittmann, and Borsboom (2012) published qgraph, an R package for visualizing (and later fitting) networks from data. Shortly after that, van Borkulo et al. (2014) introduced IsingFit, an R package for fitting regularized networks to binary (e.g., symptom) data. Armed with software, applied researchers worldwide were newly able to estimate and visualize their data as networks. Many used this opportunity to spark a new way of thinking about attributes within their research areas. Early publications of this era focused on psychopathology, describing network theories of distinct disorders and using networks to explain the links between those disorders (e.g., Cramer, Borsboom, Aggen, & Kendler, 2012; Robinaugh, LeBlanc, Vuletich, & McNally, 2014; Ruzzano, Borsboom, & Geurts, 2015). Another early thread of research posited that personality, too, could be fruitfully conceived as a network of interacting components (e.g., Costantini & Perugini, 2012; Cramer et al., 2012).
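To give a flavor of what this software made possible, the following minimal R sketch visualizes a correlation matrix as a weighted network with qgraph. It is only a sketch: the data are simulated, and the variable names are placeholders.

```r
# Minimal illustration of the early qgraph workflow (simulated data).
library(qgraph)

set.seed(1)
data <- matrix(rnorm(200 * 5), ncol = 5,
               dimnames = list(NULL, paste0("item", 1:5)))

# Draw the zero-order correlations as a network: nodes are items,
# edges are correlations, and edge thickness reflects magnitude.
qgraph(cor(data), layout = "spring", labels = colnames(data))
```

A single call thus turned a correlation matrix into a network diagram, which is much of what made these tools so accessible to applied researchers.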
While exciting, these early forays into network modeling were limited in what they could do: network models were, at this point, typically visualizations of correlations or partial correlations between variables observed at a single time point (although the development of longitudinal methods also began early on; e.g., Bringmann et al., 2013; Bringmann, Lemmens, Huibers, Borsboom, & Tuerlinckx, 2015). Many analyses used \(l_{1}\)-regularization (“lasso”) to remove some edges and obtain a sparse network diagram. The interpretable output of these early network applications consisted of (1) a network structure, wherein edges set to zero by the regularization algorithm were interpreted as missing causal links, (2) a set of edge weights, where edges large in absolute value were interpreted as strong and potentially causal direct relations, and (3) a set of node centrality indices, which were interpreted as representing the relative importance of each variable to the system.
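As a concrete illustration of this workflow, the sketch below generates data from a single common cause (echoing the positive manifold), fits an \(l_{1}\)-regularized partial correlation network with qgraph’s EBICglasso routine, and extracts the three kinds of output just described. The data and settings are illustrative only, not recommendations.

```r
# Sketch of the early lasso-based workflow (simulated data).
library(qgraph)

set.seed(2)
n <- 300
common <- rnorm(n)  # a single latent common cause
data <- sapply(1:6, function(j) 0.5 * common + rnorm(n))
colnames(data) <- paste0("V", 1:6)

# (1) Structure: l1-regularized partial correlations, tuned via the EBIC;
# edges shrunk to exactly zero were often read as absent links.
net <- EBICglasso(cov(data), n = n)

# (2) Edge weights: the nonzero entries of the weights matrix.
round(net, 2)

# (3) Node centrality indices, then commonly read as variable importance.
g <- qgraph(net, layout = "spring")
centralityTable(g)
```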
The development of network methodology beyond this starting point was propelled by the desire to answer specific empirical questions arising from network theory. While the initial forays into network modeling attempted to answer the most general question, “what is the structure of direct relations among variables in a multivariate dataset?”, methodologists soon began to work on methods to answer other questions, such as “how do these structures differ across groups?” (van Borkulo et al., 2015; in press), “how does the structure of a network predict future behavior?” (Dalege et al., 2016), and “how can individual networks be used to guide clinical interventions?” (Rubel, Fisher, Husen, & Lutz, 2018; Wichers, Groot, Psychosystems, ESM group, & EWS group, 2016). Several of the papers in this special issue further develop answers to these empirically motivated questions.
The early explosion of empirical network analysis applications sparked some strong criticism of the methods and the common interpretations of network models. For example, several papers questioned the stability across repeated samples of network properties such as edge weights and centrality indices (Forbes, Wright, Markon, & Krueger, 2017; Fried et al., 2018; Neal & Neal, in press). Other authors critiqued the prevalent causal interpretation of network structure and centrality indices (e.g., Bringmann et al., 2019; Hallquist, Wright, & Molenaar, 2021; Rodebaugh et al., 2018; Ryan, Bringmann, & Schuurman, 2019; Spiller et al., 2020). Yet other authors have questioned the dichotomy between networks and common factors, when, in fact, both may apply to most situations (Bringmann & Eronen, 2018), and each statistical network and factor model has an equivalent in the other framework (Epskamp, Maris, Waldorp, & Borsboom, 2018; Marsman et al., 2018; Waldorp & Marsman, in press). On the estimation end, the practice of fitting network models using \(l_{1}\)-regularization has been shown to be suboptimal for most of the types of psychological data that network models are fit to (Williams & Rast, 2020; Williams, Rhemtulla, Wysocki, & Rast, 2019; Wysocki & Rhemtulla, 2021), and the practice of inferring that edges set to zero by the regularization function are truly zero in the population (or, more generally, inferring that population networks are sparse when they have been estimated using sparse estimation procedures) has likewise been shown to be unjustified (Epskamp, Kruis, & Marsman, 2017; Williams, Briganti, Linkowski, & Mulder, 2021).
This criticism has not happened in a vacuum; methodologists have continued to develop and study new network methodologies in response to and alongside it. In response to questions about the replicability and robustness of network methodologies, researchers have begun to develop methods to quantify the uncertainty around estimated network parameters and to develop confirmatory tests for them (Rodriguez, Williams, Rast, & Mulder, 2020), as well as to investigate empirical evidence for the replicability and generalizability of networks estimated on real data (Funkhouser, Correa, Gorka, Nelson, Phan, & Shankman, 2020; Herrera-Bennett & Rhemtulla, 2021). Most critics additionally offer innovative responses to their own critiques. For example, longitudinal and idiographic network methods were developed in response to criticisms of cross-sectional network models (Bringmann et al., 2013; Epskamp, Waldorp, Mõttus, & Borsboom, 2018). New estimation methods were developed to deal with the peculiarities of psychological data (e.g., small samples, small numbers of variables, ordinal and nonnormal distributions, and dense population networks; Haslbeck & Waldorp, 2020; Williams, 2021a; Wysocki & Rhemtulla, 2021). In the face of criticism of centrality indices, researchers introduced new centrality indices and developed predictability indices (e.g., Haslbeck & Waldorp, 2018; Robinaugh, Millner, & McNally, 2016), although more development is clearly needed in this area.
When we published the call for papers for the “Network psychometrics in action” special issue in the final quarter of 2019, the field had matured a bit. We had methods that answered real empirical questions, and we had begun to grasp what our methods could or could not do. Yet, at the same time, some methodological challenges persisted. Therefore, to curate the special issue, we called for papers that “showcase how methodological innovations in the network approach that are inspired by real data can be used to answer important substantive questions.” We hoped to receive manuscripts that addressed challenges within a couple of general themes that pervade the psychological network literature. As the next section and the special issue show, we were not let down.
2 Three Methodological Challenges that Impede Substantive Research: Contributions to the Special Issue
The special issue’s contributions can be organized into three research themes, each focusing on a distinct set of substantive questions. The first research theme concerns the discovery of network structure: what does the population network look like, and how can we calibrate our certainty in the estimates of that structure? The second theme concerns confirmatory network methodology: how can we test hypotheses about particular edges and evaluate group differences? The third theme involves the interpretation of an estimated network: how can we identify elements of a network that are important in a conceptual, predictive, or causal sense? We consider each of these themes in turn.
3 What Is the Network’s Structure, and How Robust Are Our Estimates of It?
The field started with a methodology that could estimate a network’s structure and parameters but could not quantify their uncertainty. As mentioned above, early research in this field revolved around network visualizations. At the time, network psychometrics was little more sophisticated than a pretty picture (Bringmann, 2016). But there is a real danger of becoming overconfident in the estimated network if one is unaware of the underlying uncertainty (Hinne, Gronau, van den Bergh, & Wagenmakers, 2020; Hoeting, Madigan, Raftery, & Volinsky, 1999). And there is often more uncertainty than researchers wish to acknowledge. For example, Mansueto, Wiers, van Weert, Schouten, and Epskamp (in press) recently showed that it is hard to recover the network structure from longitudinal data at typical sample sizes. Similarly, Fried and colleagues (Fried & Cramer, 2017; Fried et al., 2018) and Forbes and colleagues (2017, 2019a, b) initiated a discussion on the robustness of networks in cross-sectional data. With the limited data that we usually have, we can rarely be sure that the estimated network is correct, let alone know its parameters with absolute certainty. However evident this problem may be, it took the field several years to develop the first methods that quantify the uncertainty of network results. It is thus not surprising that concerns about the reproducibility of published network results have become prevalent.
The robustness of network results now firmly ranks as one of the field’s top priorities. Much of the development that addresses this research priority has focused on quantifying the uncertainty in the estimated parameters (e.g., Epskamp, Borsboom, & Fried, 2018; Jones, Williams, & McNally, 2021; Jongerling, Epskamp, & Williams, 2021). However, we believe that, given the complex nature of network structure selection, more work should also focus on quantifying the uncertainty in the selected structure and address questions like “which structures are plausible for the data at hand?” and “what impact does the uncertainty in the network’s structure have on our parameter estimates and their uncertainty?” Although some elegant Bayesian solutions that address these questions have been developed (e.g., Mohammadi, Massam, & Letac, in press; Mohammadi & Wit, 2015; Pensar, Nyman, Niiranen, & Corander, 2017; Williams, 2021b; Williams & Mulder, 2020a) and paired with software implementations (Mohammadi & Wit, 2019; Williams & Mulder, 2020b), these solutions have received too little attention in the psychological literature. At the same time, this methodology could still use further development to be applied to the full spectrum of psychological network models and psychometric variables, and to address a broad range of empirical questions (e.g., to quantify uncertainty in centrality measures; Huth, Luigjes, Marsman, Goudriaan, & van Holst, 2021; Jongerling et al., 2021). The development of Bayesian models (e.g., prior specifications) that fit the psychological context is another area that deserves attention.
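To give an impression of how such structure uncertainty can be expressed in practice, the sketch below uses the BDgraph package (Mohammadi & Wit, 2019) to sample over graph structures and report posterior edge inclusion probabilities. The calls follow our reading of the BDgraph documentation, and the data and settings are purely illustrative.

```r
# Sketch: Bayesian structure learning with BDgraph (simulated data).
library(BDgraph)

set.seed(3)
# Simulate data from a random sparse Gaussian graphical model.
sim <- bdgraph.sim(n = 200, p = 6, graph = "random", prob = 0.3)

# Sample jointly over graph structures and precision matrices.
fit <- bdgraph(data = sim$data, method = "ggm", iter = 5000)

# Posterior inclusion probability per edge: a direct expression of
# structure uncertainty, rather than a single selected network.
plinks(fit)
```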
There are also concerns about the theoretical properties of uncertainty quantification (e.g., producing confidence intervals) in combination with the \(l_{1}\) constraint (i.e., lasso estimation) in frequentist approaches (see Bühlmann, Kalisch, & Meier, 2014, Section 3.1; Pötscher & Leeb, 2009; Williams, 2021c). Therefore, more work should focus on alternative routes to quantify parameter uncertainty, such as Bayesian or empirical Bayesian approaches or alternative forms of regularization. Some elegant Bayesian solutions have already been developed for Gaussian graphical models (GGMs; e.g., Mohammadi & Wit, 2015; Williams, 2021b; Williams & Mulder, 2020a), but non-Gaussian models have received little attention (a recent exception is Pensar et al., 2017).
Two papers in the special issue aim to model the uncertainty associated with estimating the network and consequently offer more robust inference. Epskamp, Isvoranu, and Cheung (2022; this issue) provide a classical hierarchical approach for aggregating independent network estimates into a single estimate of the network’s topology, useful for meta-analyses of Gaussian networks. The approach does not use regularization, and its standard maximum likelihood framework is blessed with familiar solutions for standard errors and confidence intervals of the estimated parameters. Marsman, Huth, Waldorp, and Ntzoufras (2022; this issue), on the other hand, offer empirical Bayes and full Bayesian solutions for selecting the structure of an Ising model (a network model for binary variables) and for quantifying the uncertainty in the network’s estimated structure and the associated parameters. Both contributions offer unique solutions to gauge the uncertainty in estimated networks and deliver robust network results.
4 How Can We Conduct Confirmatory Tests of the Relationship Between Two Variables and Discover Differences Between Groups?
The current methodological toolbox for psychological networks is mainly exploratory. We use it to estimate a network based on the available data and then interpret the estimated network. But researchers often struggle to treat these estimated networks as merely exploratory. For example, the absence of an edge between two variables in a lasso-estimated network is often viewed as evidence for its exclusion (Williams et al., 2021). But if the lasso estimate of an edge is exactly zero, what evidence do we have that the edge should, in fact, be excluded from the network? The problem is that we cannot really tell. Current (frequentist) implementations of the lasso estimation procedure cannot separate evidence of absence from absence of evidence (i.e., that there is too little information to decide about inclusion). Moreover, the current implementation of lasso estimation is not intended to be a statistical test for edge inclusion or exclusion; it is meant for selecting a single network structure and does not pit structures with a particular edge against those without that edge.
The lack of confirmatory methodology for psychological networks was a serious concern, echoed by reviews taking stock of the field (Fried & Cramer, 2017; Robinaugh, Hoekstra, Toner, & Borsboom, 2020). It is hard to formulate a cumulative science without the ability to build on what we have learned. What is the evidence for including a particular edge in the network? How do network structures compare for the data at hand? What can we say about the sign of network relations? Does the cross-sectional network hold for the population, or are there groups with a systematically different topology? In the past few years, the field has exerted considerable effort to address these questions. For example, van Bork et al. (2019; see also Kan, van der Maas, & Levine, 2019) proposed a test to identify whether data were generated from a sparse network model or a unidimensional factor model. Epskamp (2020) borrowed ideas from structural equation modeling and developed relative fit measures and likelihood-based tests for nested Gaussian graphical models, and Williams (2021b) and Williams and Mulder (2020a) developed Bayes factor tests to assess the evidence for edge exclusion and for order constraints (e.g., their sign) on the relations of these models. Van Borkulo and colleagues (in press) developed a permutation test to assess whether two estimated network structures differ, and Jones, Mair, Simon, and Zeileis (2020) developed structural change tests for evaluating the impact of background variables to assess subgroup differences in the structure of Gaussian networks. For the latter, Huth et al. (in press) developed a permutation test variant suited for small sample sizes. In contrast to these classical approaches, Williams, Rast, Pericchi, and Mulder (2020) developed Bayesian solutions for assessing subgroup differences in Gaussian graphical models (see also Williams, 2021b).
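To make the confirmatory logic concrete, here is a sketch of the Bayes factor approach of Williams and Mulder (2020a) as implemented in their BGGM package (Williams & Mulder, 2020b): edges can be classified as included, excluded, or undecided, which separates evidence of absence from absence of evidence. The data are simulated, and the argument names reflect our reading of the package documentation.

```r
# Sketch: Bayes factor tests for edge inclusion vs. exclusion with BGGM.
library(BGGM)
library(MASS)

set.seed(4)
# Five positively correlated Gaussian variables.
Sigma <- diag(0.5, 5) + matrix(0.5, 5, 5)
Y <- mvrnorm(n = 250, mu = rep(0, 5), Sigma = Sigma)

# Edge-wise Bayes factors: evidence for inclusion (H1) vs. exclusion (H0).
fit <- explore(Y)

# BF_cut = 3 demands threefold evidence either way; edges with ambiguous
# evidence remain undecided rather than being set to zero by default.
sel <- select(fit, BF_cut = 3)
sel
```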
In sum, we have made great strides in developing confirmatory methods in the last three years, but the available methodology is still very limited. Where the Bayesian assessment of GGMs has received much attention, a confirmatory network methodology for non-Gaussian variables is still largely absent. Several papers in the special issue aim to fill that gap. For example, Marsman et al. (2022; this issue) offer Bayesian solutions for assessing edge inclusion in the Ising model, a network model for binary variables, addressing similar questions as Williams and Mulder (2020a) did for GGMs. While Epskamp et al. (2022; this issue) offer a classical approach to gauge the heterogeneity of a GGM applied to independent datasets, Lee, Chen, DeSarbo, and Xue (2022; this issue) gauge the heterogeneity of networks of ordinal variables estimated from cross-sectional data. Lee and colleagues introduce an empirical Bayes method for estimating a finite mixture of latent GGMs to model the ordinal variables, using a new penalized Expectation–Maximization procedure to estimate the mixing weights and network parameters. Where the aforementioned approaches of, for example, van Borkulo et al. (in press) and Williams et al. (2020) assess differences between identified subgroups (i.e., observed heterogeneity), the mixture approach of Lee and colleagues allows us to assess whether there are unidentified subgroups (see Brusco, Steinley, Hoffman, Davis-Stober, & Wasserman, 2019). Finally, Bodner, Tuerlinckx, Bosmans, and Ceulemans (2021) recently showed how to assess the marginal dependence of two binary variables (e.g., symptom indicators) in a nonparametric way using a permutation test. In this issue, Bodner, Bringmann, Tuerlinckx, de Jonge, and Ceulemans (2022) use their method to investigate the co-occurrence of symptoms over time and construct symptom networks from the set of associations that are significantly greater than zero. The obtained individual network structures can then be used to reveal symptom clusters in between-subjects analyses.
5 What Defining Features of a Network Foster Interpretation, Prediction, and Intervention?
Where initial network analyses focused on network plots, researchers soon started to wonder how to interpret their network estimates. Which relations or which nodes in the network are important? Centrality measures, borrowed from network science (e.g., Newman, 2004; Newman, Barabási, & Watts, 2006), are often used to identify the important nodes in the estimated structure. But as alluded to before, centrality measures have also received several critiques in recent years. Bringmann et al. (2019) argued that the assumptions underlying centrality measures might not apply to psychological networks. They stressed the importance of considering “important for what?” when interpreting a node as important. Dablander and Hinne (2019), for example, showed that the nodes flagged by centrality measures might not be the nodes that are important in a causal sense. Despite these critiques and concerns, centrality measures continue to be used for lack of a better alternative. At the same time, centrality measures focus exclusively on the network’s nodes. But for assessing causality, it seems reasonable that not only nodes but also particular network relations are important. In this context, Haslbeck and Waldorp (2018) proposed to use nodewise predictability—the ability of a node or group of nodes to predict others.
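The sketch below illustrates nodewise predictability in the spirit of Haslbeck and Waldorp (2018), using their mgm package. The data are simulated Gaussian variables, and the call signatures follow our reading of the mgm documentation.

```r
# Sketch: nodewise predictability with mgm (simulated Gaussian data).
library(mgm)

set.seed(5)
n <- 300
common <- rnorm(n)
data <- sapply(1:5, function(j) 0.6 * common + rnorm(n))

# Fit a mixed graphical model; type "g" and level 1 mark Gaussian nodes.
fit <- mgm(data = data, type = rep("g", 5), level = rep(1, 5))

# Predictability: how well each node is predicted by its neighbors,
# expressed here as R^2 for continuous nodes.
pred <- predict(object = fit, data = data, errorCon = "R2")
pred$errors
```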
Four papers in this issue propose methods focused on the interpretation and use of network models. Brusco, Steinley, and Watts (2022; this issue) propose methods that work on estimated networks, cleverly reordering the rows and columns of an estimated association matrix to produce a maximally interpretable structure. These methods offer a fruitful alternative to existing centrality measures and can say something about which nodes are best able to predict other nodes, that is, which nodes are most central. In a similar vein, Golino, Christensen, Moulder, Kim, and Boker (2022; this issue) introduce a novel clustering method that can be used to identify latent topics in time series of text data, such as Twitter data. They extend their exploratory graph analysis approach (Golino & Epskamp, 2017; Golino et al., 2020), previously used to discover latent topics in text data taken from interviews at a single time point (Kjellström & Golino, 2019), to multiple time points (i.e., a time series). They apply it to Twitter data collected during the 2016 US presidential election to identify word clusters and analyze the dynamics of individual words. The contributions of Henry, Robinaugh, and Fried (2022; this issue) and Ryan and Hamaker (2022; this issue) both offer methods that aim to characterize a network by examining its implications for intervention. Henry and colleagues bring control theory to bear on dynamic psychometric networks, showing how this method, which was developed to optimize production processes, might be used to tailor clinical interventions based on individual networks. Ryan and Hamaker, on the other hand, develop a continuous-time vector autoregressive modeling approach to constructing dynamic individual networks. This model allows the researcher to extract model-implied effects on any node in the network at any time lag. The authors show how these models can be used to form precise predictions about the impact of an intervention. These predictions form the basis for two new centrality measures, total effect centrality and indirect effect centrality, that indicate the importance of nodes as intervention targets.
6 Software Contributions
Each of these contributions pushes the field forward by proposing novel methods to discover and confirm network structure, summarize network properties, and use networks for prediction and intervention. Moreover, all but one provide the software for doing so. Across the eight papers, four new R packages are introduced (ConNEcT, Bodner et al., 2022, Bodner & Ceulemans, in press; netcontrol, Henry et al., 2022; rbinnet, Marsman et al., 2022; ctnet, Ryan & Hamaker, 2022), two previously introduced R packages are expanded to include new methods (psychonetrics, Epskamp, 2020, Epskamp et al., 2022; EGAnet, Golino & Epskamp, 2017, Golino et al., 2022), and one paper provides code in MATLAB and R for implementing the new method (Brusco et al., 2022).
7 Closing Statement
Having offered a brief overview of the historic trends, the questions facing network psychometrics as we see them, and the contributions provided by the papers in this special issue, we leave our reader to read. We offer our heartfelt thanks to every author who contributed their fine work, and we hope you enjoy this truly excellent set of papers.
References
Bodner, N., Bringmann, L. F., Tuerlinckx, F., de Jonge, P., & Ceulemans, E. (2022). ConNEcT: A novel network approach for investigating the co-occurrence of binary psychopathological symptoms over time. Psychometrika, this issue.
Bodner, N., & Ceulemans, E. (in press). ConNEcT: An R package to build contingency measure-based networks on binary time series. Behavior Research Methods.
Bodner, N., Tuerlinckx, F., Bosmans, G., & Ceulemans, E. (2021). Accounting for auto-dependency in binary dyadic time series data: A comparison of model- and permutation-based approaches for testing pairwise associations. British Journal of Mathematical and Statistical Psychology, 74, 86–109.
Borsboom, D. (2008). Psychometric perspectives on diagnostic systems. Journal of Clinical Psychology, 64, 1089–1108.
Borsboom, D. (2017). A network theory of mental disorders. World Psychiatry, 16, 5–13.
Borsboom, D., & Cramer, A. O. J. (2013). Network analysis: An integrative approach to the structure of psychopathology. Annual Review of Clinical Psychology, 9, 91–121.
Bringmann, L. F. (2016). Dynamical networks in psychology: More than a pretty picture? (Unpublished doctoral dissertation). Katholieke Universiteit Leuven.
Bringmann, L. F., Elmer, T., Epskamp, S., Krause, R. W., Schoch, D., Wichers, M., Wigman, J. T. W., & Snippe, E. (2019). What do centrality measures measure in psychological networks? Journal of Abnormal Psychology, 128, 892–903.
Bringmann, L. F., & Eronen, M. I. (2018). Don’t blame the model: Reconsidering the network approach to psychopathology. Psychological Review, 125, 606–615.
Bringmann, L. F., Lemmens, L. H. J. M., Huibers, M. J. H., Borsboom, D., & Tuerlinckx, F. (2015). Revealing the dynamic network structure of the Beck Depression Inventory-II. Psychological Medicine, 45, 747–757.
Bringmann, L. F., Pe, M. L., Vissers, N., Ceulemans, E., Borsboom, D., VanPaemel, F., & Kuppens, P. (2016). Assessing temporal emotion dynamics using networks. Assessment, 23, 425–435.
Bringmann, L. F., Vissers, N., Wichers, M., Geschwind, N., Kuppens, P., Peeters, F., Borsboom, D., & Tuerlinckx, F. (2013). A network approach to psychopathology: New insights into clinical longitudinal data. PLoS One, 8, e60188.
Brusco, M. J., Steinley, D., Hoffman, M., Davis-Stober, C., & Wasserman, S. (2019). On Ising models and algorithms for the construction of symptom networks in psychopathological research. Psychological Methods, 24, 735–753.
Brusco, M. J., Steinley, D., & Watts, A. L. (2022). Disentangling relationships in symptom networks using matrix permutation methods. Psychometrika, this issue.
Bühlmann, P., Kalisch, M., & Meier, L. (2014). High-dimensional statistics with a view toward applications in biology. Annual Reviews of Statistics and Its Applications, 1, 255–278.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
Costantini, G., & Perugini, M. (2012). The definition of components and the use of formal indexes are key steps for a successful application of network analysis in personality psychology. European Journal of Personality, 26, 434–435.
Cramer, A. O. J., Borsboom, D., Aggen, S. H., & Kendler, K. S. (2012). The pathoplasticity of dysphoric episodes: Differential impact of stressful life events on the pattern of depressive symptom inter-correlations. Psychological Medicine, 42, 957–965.
Cramer, A. O. J., van der Sluis, S., Noordhof, A., Wichers, M., Geschwind, N., Aggen, S. H., Kendler, K. S., & Borsboom, D. (2012). Dimensions of normal personality as networks in search for equilibrium: You can’t like parties if you don’t like people. European Journal of Personality, 26, 414–431.
Cramer, A. O. J., Waldorp, L. J., van der Maas, H. L. J., & Borsboom, D. (2010). Comorbidity: A network perspective. Behavioral and Brain Sciences, 33, 137–150.
Dablander, F., & Hinne, M. (2019). Node centrality measures are a poor substitute for causal inference. Scientific Reports, 9, 6846.
Dalege, J., Borsboom, D., van Harreveld, F., van den Berg, H., Conner, M., & van der Maas, H. L. J. (2016). Toward a formalized account of attitudes: The causal attitude network (CAN) model. Psychological Review, 123, 2–22.
Dickens, W. T., & Flynn, J. R. (2001). Heritability estimates versus large environmental effects: The IQ paradox resolved. Psychological Review, 108, 346–369.
Epskamp, S. (2020). Psychometric network models from time-series and panel data. Psychometrika, 85, 206–231.
Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods, 50, 195–212.
Epskamp, S., Cramer, A. O. J., Waldorp, L. J., Schmittmann, V. D., & Borsboom, D. (2012). qgraph: Network visualizations of relationships in psychometric data. Journal of Statistical Software, 48(4).
Epskamp, S., Isvoranu, A.-M., & Cheung, M. W.-L. (2022). Meta-analytic Gaussian network aggregation. Psychometrika, this issue.
Epskamp, S., Kruis, J., & Marsman, M. (2017). Estimating psychopathological networks: Be careful what you wish for. PLoS One, 12, e0179891.
Epskamp, S., Maris, G. K. J., Waldorp, L. J., & Borsboom, D. (2018). Network psychometrics. In P. Irwing, D. Hughes, & T. Booth (Eds.), The Wiley handbook of psychometric testing, 2 volume set: A multidisciplinary reference on survey, scale and test development. Wiley.
Epskamp, S., Waldorp, L. J., Mõttus, R., & Borsboom, D. (2018). The Gaussian Graphical Model in cross-sectional and time-series data. Multivariate Behavioral Research, 53, 453–480.
Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. (2017). Evidence that psychopathology symptom networks have limited replicability. Journal of Abnormal Psychology, 126, 969–988.
Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. (2019a). The network approach to psychopathology: Promise versus reality. World Psychiatry, 18, 272–273.
Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. (2019b). Quantifying the reliability and replicability of psychopathology network characteristics. Multivariate Behavioral Research, 56, 224–242.
Fried, E. I., & Cramer, A. O. J. (2017). Moving forward: Challenges and directions for psychopathological network theory and methodology. Perspectives on Psychological Science, 12, 999–1020.
Fried, E. I., Eidhof, M. B., Palic, S., Costantini, G., Huisman-van Dijk, H. M., Bockting, C. L. H., Engelhard, I., Armour, C., Nielsen, A. B. S., & Karstoft, K.-I. (2018). Replicability and generalizability of posttraumatic stress disorder (PTSD) networks: A cross-cultural multisite study of PTSD symptoms in four trauma patient samples. Clinical Psychological Science, 6, 335–351.
Funkhouser, C. J., Correa, K. A., Gorka, S. M., Nelson, B. D., Phan, K. L., & Shankman, S. A. (2020). The replicability and generalizability of internalizing symptom networks across five samples. Journal of Abnormal Psychology, 129, 191–203.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. Basic Books.
Golino, H., Christensen, A. P., Moulder, R., Kim, S., & Boker, S. M. (2022). Modeling latent topics in social media using dynamic exploratory graph analysis: The case of the right-wing and left-wing trolls in the 2016 US elections. Psychometrika, this issue.
Golino, H. F., & Epskamp, S. (2017). Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research. PLoS One, 12, e0174035.
Golino, H., Shi, D., Garrido, L. E., Christensen, A. P., Nieto, M. D., Sadana, R., & Martinez-Molina, A. (2020). Investigating the performance of exploratory graph analysis and traditional techniques to identify the number of latent factors: A simulation and tutorial. Psychological Methods, 25, 292.
Hallquist, M. N., Wright, A. G. C., & Molenaar, P. C. M. (2021). Problems with centrality measures in psychopathology symptom networks: Why network psychometrics cannot escape psychometric theory. Multivariate Behavioral Research, 56, 199–223.
Haslbeck, J. M. B., & Waldorp, L. J. (2018). How well do network models predict observations? On the importance of predictability in network models. Behavior Research Methods, 50, 853–861.
Haslbeck, J. M. B. & Waldorp, L. J. (2020). mgm: Estimating time-varying mixed graphical models in high-dimensional data. Journal of Statistical Software, 93(8).
Henry, T. R., Robinaugh, D. J., & Fried, E. I. (2022). On the control of psychological networks. Psychometrika, this issue.
Herrera-Bennett, A. C., & Rhemtulla, M. (2021). Network replicability & generalizability: Exploring the effects of sampling variability, scale variability, and node reliability. PsyArXiv https://psyarxiv.com/7vkm8/
Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E.-J. (2020). A conceptual introduction to Bayesian model averaging. Advances in Methods and Practices in Psychological Science, 3, 200–215.
Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. (1999). Bayesian model averaging: A tutorial. Statistical Science, 14, 382–417.
Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57, 253–270.
Huth, K., Luigjes, J., Marsman, M., Goudriaan, A. E., & van Holst, R. J. (2021). Modeling alcohol use disorder as a set of interconnected symptoms: Assessing differences between clinical and population samples and across external factors. Addictive Behaviors, 125, 107128.
Huth, K., Waldorp, L. J., Luigjes, J., Goudriaan, A. E., van Holst, R. J., & Marsman, M. (in press). A note on the Structural Change Test in finite samples: Using a permutation approach to estimate the sampling distribution. Psychometrika.
Jones, P. J., Mair, P., Simon, T., & Zeileis, A. (2020). Network trees: A method for recursively partitioning covariance structures. Psychometrika, 85, 926–945.
Jones, P. J., Williams, D. R., & McNally, R. J. (2021). Sampling variability is not nonreplication: A Bayesian reanalysis of Forbes, Wright, Markon, and Krueger. Multivariate Behavioral Research, 56, 249–255.
Jongerling, J., Epskamp, S., & Williams, D. R. (2021). Bayesian uncertainty estimation for Gaussian Graphical Models and centrality indices. PsyArXiv https://psyarxiv.com/7kude/
Kan, K. J., van der Maas, H. L. J., & Levine, S. Z. (2019). Extending psychometric network analysis: Empirical evidence against g in favor of mutualism? Intelligence, 73, 52–62.
Kjellström, S., & Golino, H. (2019). Mining concepts of health responsibility using text mining and exploratory graph analysis. Scandinavian Journal of Occupational Therapy, 26, 395–410.
Lee, K. H., Chen, Q., DeSarbo, W. S., & Xue, L. (2022). Estimating finite mixtures of ordinal graphical models. Psychometrika, this issue.
Marsman, M., Borsboom, D., Kruis, J., Epskamp, S., van Bork, R., Waldorp, L. J., van der Maas, H. L. J., & Maris, G. K. J. (2018). An introduction to Network Psychometrics: Relating Ising network models to item response theory models. Multivariate Behavioral Research, 53, 15–35.
Marsman, M., Huth, K., Waldorp, L. J., & Ntzoufras, I. (2022). Objective Bayesian edge screening and structure selection for networks of binary variables. Psychometrika, this issue.
Mansueto, A. C., Wiers, R. W., van Weert, J. C. M., Schouten, B. C., & Epskamp, S. (in press). Investigating the feasibility of idiographic network models. Psychological Methods.
McGrew, K., & Flanagan, D. (1998). Intelligence test desk reference (ITDR): The Gf-Gc cross-battery assessment. Pearson Education. Retrieved from https://psycnet.apa.org/record/1998-07192-000
Mohammadi, R., Massam, H., & Letac, G. (in press). Accelerating Bayesian structure learning in sparse Gaussian graphical models. Journal of the American Statistical Association.
Mohammadi, A., & Wit, E. C. (2015). Bayesian structure learning in sparse Gaussian graphical models. Bayesian Analysis, 10, 109–138.
Mohammadi, R., & Wit, E.C. (2019). BDgraph: An R package for Bayesian structure learning in graphical models. Journal of Statistical Software, 89(3).
Neal, Z. P., & Neal, J. W. (in press). Out of bounds? The boundary specification problem for centrality in psychological networks. Psychological Methods.
Newman, M. (2004). Analysis of weighted networks. Physical Review E, 70, 056131.
Newman, M., Barabási, A.-L., & Watts, D. J. (Eds.). (2006). The Structure and dynamics of networks. Princeton University Press.
Pensar, J., Nyman, H., Niiranen, J., & Corander, J. (2017). Marginal pseudo-likelihood learning of discrete Markov network structures. Bayesian Analysis, 12, 1195–1215.
Pötscher, B. M., & Leeb, H. (2009). On the distribution of penalized maximum likelihood estimators: The LASSO, SCAD, and thresholding. Journal of Multivariate Analysis, 100, 2065–2082.
Robinaugh, D. J., Hoekstra, R. H. A., Toner, E. R., & Borsboom, D. (2020). The network approach to psychopathology: A review of the literature 2008–2018 and an agenda for future research. Psychological Medicine, 50, 353–366.
Robinaugh, D. J., LeBlanc, N. J., Vuletich, H. A., & McNally, R. J. (2014). Network analysis of persistent complex bereavement disorder in conjugally bereaved adults. Journal of Abnormal Psychology, 123, 510–522.
Robinaugh, D. J., Millner, A. J., & McNally, R. J. (2016). Identifying highly influential nodes in the complicated grief network. Journal of Abnormal Psychology, 125, 747–757.
Rodebaugh, T. L., Tonge, N. A., Piccirillo, M. L., Fried, E., Horenstein, A., Morrison, A. S., Goldin, P., Gross, J. J., Lim, M. H., Fernandez, K. C., Blanco, C., Schneier, F. R., Bogdan, R., Thompson, R. J., & Heimberg, R. G. (2018). Does centrality in a cross-sectional network suggest intervention targets for social anxiety disorder? Journal of Consulting and Clinical Psychology, 86, 831–844.
Rodriguez, J. E., Williams, D. R., Rast, P., & Mulder, J. (2020). On formalizing theoretical expectations: Bayesian testing of central structures in psychological networks. PsyArXiv https://psyarxiv.com/zw7pf/
Rubel, J. A., Fisher, A. J., Husen, K., & Lutz, W. (2018). Translating person-specific network models into personalized treatments: Development and demonstration of the dynamic assessment treatment algorithm for individual networks (DATA-IN). Psychotherapy and Psychosomatics, 87, 249–251.
Ruzzano, L., Borsboom, D., & Geurts, H. M. (2015). Repetitive behaviors in autism and obsessive-compulsive disorder: New perspectives from a network analysis. Journal of Autism and Developmental Disorders, 45, 192–202.
Ryan, O., Bringmann, L. F., & Schuurman, N. K. (2019). The challenge of generating causal hypotheses using network models. PsyArXiv https://psyarxiv.com/ryg69
Ryan, O., & Hamaker, E. L. (2022). Time to intervene: A continuous-time approach to network analysis and centrality. Psychometrika, this issue.
Savi, A. O., Marsman, M., van der Maas, H. L. J., & Maris, G. K. J. (2019). The wiring of intelligence. Perspectives on Psychological Science, 14, 1034–1061.
Spearman, C. (1904). General intelligence, objectively determined and measured. The American Journal of Psychology, 15, 201–292.
Spearman, C. (1927). The abilities of man: Their nature and assessment. Macmillan and Company.
Spiller, T. R., Levi, O., Neria, Y., Suarez-Jimenez, B., Bar-Haim, Y., & Lazarov, A. (2020). On the validity of the centrality hypothesis in cross-sectional between-subject networks of psychopathology. BMC Medicine, 18(297).
Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of intelligence. Cambridge University Press.
Thomson, G. H. (1916). A hierarchy without a general factor. British Journal of Psychology, 8, 271–281.
Thurstone, L. L. (1935). The vectors of the mind. Chicago University Press.
van Bork, R., Rhemtulla, M., Waldorp, L. J., Kruis, J., Rezvanifar, S., & Borsboom, D. (2019). Latent variable models and networks: Statistical equivalence and testability. Multivariate Behavioral Research, 56, 175–198.
van Borkulo, C. D., Borsboom, D., Epskamp, S., Blanken, T. F., Boschloo, L., Schoevers, R. A., & Waldorp, L. J. (2014). A new method for constructing networks from binary data. Scientific Reports, 4, 5918.
van Borkulo, C. D., Boschloo, L., Borsboom, D., Penninx, B. W. J. H., Waldorp, L. J., & Schoevers, R. A. (2015). Association of symptom network structure with the course of depression. JAMA Psychiatry, 72, 1219–1226.
van Borkulo, C. D., van Bork, R., Boschloo, L., Kossakowski, J. J., Tio, P., Schoevers, R. A., Borsboom, D., & Waldorp, L. J. (in press). Comparing network structures on three aspects: A permutation test. Psychological Methods.
van der Maas, H. L. J., Dolan, C. V., Grasman, R. P. P. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. J. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113, 842–861.
van der Maas, H. L. J., Kan, K.-J., Marsman, M., & Stevenson, C. E. (2017). Network models for cognitive development and intelligence. Journal of Intelligence, 5(2).
van der Maas, H. L. J., Savi, A. O., Hofman, A., Kan, K.-J., & Marsman, M. (2019). The network approach to general intelligence. In D. J. McFarland (Ed.), General and specific mental abilities (pp. 108–131). Cambridge Scholars Publishing.
Waldorp, L. J., & Marsman, M. (in press). Relations between networks, regression, partial correlation, and latent variable models. Multivariate Behavioral Research.
Wichers, M., Groot, P. C., Psychosystems, ESM group, & EWS group (2016). Critical slowing down as a personalized early warning signal for depression. Psychotherapy and Psychosomatics, 85, 114–116.
Williams, D. R. (2021a). GGMnonreg: Non-regularized Gaussian Graphical Models in R. PsyArXiv https://psyarxiv.com/p5jk9/
Williams, D. R. (2021b). Bayesian estimation for Gaussian Graphical Models: Structure learning, predictability, and network comparisons. Multivariate Behavioral Research, 56, 336–352.
Williams, D. R. (2021c). The confidence interval that wasn’t: Bootstrapped “confidence intervals” in \(l_{1}\)-regularized partial correlation networks. PsyArXiv https://psyarxiv.com/kjh2f
Williams, D. R., Briganti, G., Linkowski, P., & Mulder, J. (2021). On accepting the null hypothesis of conditional independence in partial correlation networks: A Bayesian analysis. PsyArXiv https://psyarxiv.com/7uhx8/
Williams, D. R., & Mulder, J. (2020a). Bayesian hypothesis testing for Gaussian graphical models: Conditional independence and order constraints. Journal of Mathematical Psychology, 99, 102441.
Williams, D. R., & Mulder, J. (2020b). BGGM: Bayesian Gaussian graphical models in R. Journal of Open Source Software, 5(21), 2111.
Williams, D. R., & Rast, P. (2020). Back to the basics: Rethinking partial correlation network methodology. British Journal of Mathematical and Statistical Psychology, 73, 187–212.
Williams, D. R., Rast, P., Pericchi, L. R., & Mulder, J. (2020). Comparing Gaussian graphical models with the posterior predictive distribution and Bayesian model selection. Psychological Methods, 25, 653–672.
Williams, D. R., Rhemtulla, M., Wysocki, A. C., & Rast, P. (2019). On nonregularized estimation of psychological networks. Multivariate Behavioral Research, 54, 719–750.
Wysocki, A. C., & Rhemtulla, M. (2021). On penalty parameter selection for estimating network models. Multivariate Behavioral Research, 56, 288–302.
Additional information
The editorial process for this paper was not handled by the editors of the special issue, Maarten Marsman and Mijke Rhemtulla, but by Edward Ip instead.
MM was supported by a Veni grant (451-17-017) from the Netherlands Organization for Scientific Research (NWO).