
Advances in Statistical Methods for Causal Inference in Prevention Science: Introduction to the Special Section

  • Wolfgang Wiedermann
  • Nianbo Dong
  • Alexander von Eye

Abstract

The board of the Society for Prevention Research noted recently that extant methods for the analysis of causality mechanisms in prevention may still be too rudimentary for detailed and sophisticated analysis of causality hypotheses. This Special Section aims to fill some of the current voids, in particular in the domain of statistical methods of the analysis of causal inference. In the first article, Bray et al. propose a novel methodological approach in which they link propensity score techniques and Latent Class Analysis. In the second article, Kelcey et al. discuss power analysis tools for the study of causal mediation effects in cluster-randomized interventions. Wiedermann et al. present, in the third article, methods of Direction Dependence Analysis for the identification of confounders and for inference concerning the direction of causal effects in mediation models. A more general approach to the identification of causal structures in non-experimental data is presented by Shimizu in the fourth article. This approach is based on linear non-Gaussian acyclic models. Molenaar introduces vector-autoregressive methods for the optimal representation of Granger causality in time-dependent data. The Special Section concludes with a commentary by Musci and Stuart. In this commentary, the contributions of the articles in the Special Section are highlighted from the perspective of the experimental causal research tradition.

Keywords

Propensity score techniques · Causal mediation · Direction dependence analysis · Causal structure learning · Granger causality

The seminal conceptual framework for prevention science by Kellam, Koretz, and Mościcki (Kellam et al. 1999; known as the developmental epidemiological preventive science framework) integrates three disciplines: epidemiology, life course development, and intervention trials technology. The third core element adds the systematic study of the causal functions of risk and protective factors on targeted outcomes. Prevention research is thus primarily concerned with establishing evidence-based statements about the causal nature of interventions (i.e., policies, programs, or practices) and the hypothesized improvement of health or reduction of disease-related problems. Over the last decades, tremendous progress has been made in developing statistical tools for causal inference (see, e.g., Pearl 2009; Peters et al. 2017; VanderWeele 2015; Wiedermann and von Eye 2016) that enable prevention scientists to empirically evaluate causal hypotheses and estimate causal effects. Although many causal inference methods, such as propensity score techniques (e.g., Harder et al. 2010; Lippold et al. 2014), causal mediation approaches (Pearl 2012), and sensitivity analyses (Liu et al. 2013), are already part of the standard methodological toolbox of prevention researchers, the Society for Prevention Research (SPR) recently provided an update of its standards of evidence (Gottfredson et al. 2015) with special emphasis on the importance of testing the complex mechanisms by which interventions causally affect health-related outcomes. The SPR Board further concluded that existing methods may still be too rudimentary to answer complex questions of causation, in particular research questions that go beyond simple main effects of experimental interventions. Examples of such methodological shortcomings include:
  (a) the lack of statistical methods to rigorously evaluate complex mediational chains of causation (Imai et al. 2013),

  (b) the lack of quantitative methods to adequately address the iterative and dynamic processes of both temporal change in individual health behavior (Beltz et al. 2016) and change in community factors that affect the sustainability of evidence-based interventions (Chambers et al. 2013), and

  (c) the lack of rigorous statistical tools to provide evidence of causation in the latent variable domain (for first attempts, see Butera et al. 2014; Muthén and Asparouhov 2015; von Eye and Wiedermann 2014).

The Special Section, Advances in Statistical Methods for Causal Inference in Prevention Science, brings together leading scholars in the field of causal inference research to present and discuss recent methodological developments which help to overcome many of these shortcomings. The first article, contributed by Bray et al. (2018), proposes a novel methodological framework that links propensity score techniques and Latent Class Analysis (LCA). While previous work (Butera et al. 2014; Lanza et al. 2016) discusses methods for estimating causal effects of manifest exposures (e.g., depressive symptoms) on complex patterns of latent outcomes (e.g., latent substance use profiles), the present article proposes statistical techniques to quantify the causal effect of latent class exposures (i.e., complex patterns of multiple causes) on manifest distal outcomes. Nationally representative data on adolescent drinking motives and adult drinking disorder are used to demonstrate this novel approach.
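
To illustrate the core logic that the Bray et al. approach builds on, the following is a minimal numerical sketch of inverse propensity score weighting with a binary exposure (not taken from the article; the data-generating model, effect sizes, and variable names are invented for illustration). A naive group contrast is biased by a confounder, while weighting by the inverse of the (estimated) propensity score recovers the true causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.binomial(1, 0.5, n)                        # binary confounder
a = rng.binomial(1, np.where(z == 1, 0.7, 0.3))    # exposure, more likely when z = 1
y = 2.0 * a + 1.5 * z + rng.normal(0.0, 1.0, n)    # true causal effect of a on y is 2.0

# Naive contrast mixes the exposure effect with the confounder effect (about 2.6 here)
naive = y[a == 1].mean() - y[a == 0].mean()

# Propensity scores, here estimated by stratifying on the known confounder
ps = np.where(z == 1, a[z == 1].mean(), a[z == 0].mean())

# Inverse propensity weights balance z across exposure groups
w1, w0 = a / ps, (1 - a) / (1 - ps)
ipw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()  # close to 2.0
```

The article's contribution goes well beyond this sketch: there, the exposure is a latent class membership that must itself be estimated, which complicates both propensity estimation and effect interpretation.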

The second article, by Kelcey et al. (2018), is devoted to the study of causal mediation effects in cluster-randomized interventions. While previous methodological research on multilevel mediation models focused on, for example, modeling issues (e.g., Preacher et al. 2010; Pituch and Stapleton 2012) and methods for statistical inference (e.g., Pituch et al. 2006), the issue of sample size planning for the detection of multilevel mediation effects has received considerably less attention. The present article discusses the conceptual and statistical framework for studying mediation effects in cluster-randomized interventions and introduces novel power analysis tools (cf. Dong and Maynard 2013) for designing multilevel mediation studies. The article thus closes an important gap in planning cluster-randomized interventions, enabling prevention scientists to design multilevel mediation studies with adequate power to address both the complexity of the intervention setting under study and the potential complexity of the underlying causal mechanisms of the intervention.
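
The basic idea of power analysis for a mediation effect can be conveyed with a deliberately simplified, single-level Monte Carlo sketch (this is not the authors' multilevel machinery; the joint significance test, effect sizes, and sample sizes below are illustrative assumptions only):

```python
import numpy as np

def ols_z(X_cols, y):
    """z statistic for the first predictor in an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + X_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def mediation_power(n, a=0.3, b=0.3, reps=1000, seed=1):
    """Monte Carlo power of the joint significance test for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)           # randomized exposure
        m = a * x + rng.normal(size=n)   # mediator
        y = b * m + rng.normal(size=n)   # outcome
        z_a = ols_z([x], m)              # x -> m path
        z_b = ols_z([m, x], y)           # m -> y path, adjusting for x
        hits += (abs(z_a) > z_crit) and (abs(z_b) > z_crit)
    return hits / reps
```

With these settings, `mediation_power(200)` is high while `mediation_power(20)` is low, which is exactly the kind of design question the article's tools answer, additionally accounting for clustering of participants within sites.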

While methods to estimate mediation effects with randomized exposures are well understood and readily available (e.g., MacKinnon 2008), several authors have noted limitations concerning the mediator-outcome component of an intervention theory (known as the “conceptual theory”; cf. Gottfredson et al. 2015; Herting 2002). It is well known that, even under randomized treatment, neither the direction (cf. Wiedermann and von Eye 2015) nor the magnitude of the causal “conceptual” effect can be uniquely identified when mediators are measured at the same time as the outcomes, unless additional assumptions are imposed on the data (e.g., Keele 2015; MacKinnon and Pirlott 2015; Pirlott and MacKinnon 2016). The third article, contributed by Wiedermann et al. (2018), introduces a line of research known as Direction Dependence Analysis (DDA; Wiedermann and Li 2018; Wiedermann and Sebastian 2018), which can be used to detect potential confounders and to infer the causal direction of effects (i.e., whether the causal model x → y or the reversed model y → x reflects the underlying causal mechanism) in observational data. The authors demonstrate how DDA can be used to empirically test the causal direction and magnitude of the “conceptual” (mediator-outcome) part of an intervention theory.
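
One core idea behind direction dependence can be illustrated with a small simulation (a toy sketch, not the authors' full DDA procedure; variables and effect sizes are invented): under a linear model x → y with normally distributed errors, the standardized skewness of the outcome equals the cube of the correlation times the skewness of the cause, so the putative outcome is always less skewed than the cause:

```python
import numpy as np

def skewness(v):
    """Sample standardized skewness (third moment of the z-scored variable)."""
    s = (v - v.mean()) / v.std()
    return (s ** 3).mean()

rng = np.random.default_rng(2)
n = 100_000
x = rng.exponential(1.0, n)             # skewed cause (population skewness = 2)
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # linear effect plus normal error

# Under x -> y with normal errors: skew(y) = corr(x, y)^3 * skew(x),
# so comparing |skewness| points toward the causal direction.
rho = np.corrcoef(x, y)[0, 1]
```

Here `abs(skewness(y))` is markedly smaller than `abs(skewness(x))`, which is the asymmetry DDA exploits; the article develops this into formal significance tests and confounder diagnostics.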

In the fourth article, Shimizu (2018) introduces a more general approach to discern causal structures in non-experimental data. The presented algorithms, based on linear non-Gaussian acyclic models (LiNGAM) and mainly developed in machine learning research (cf. Shimizu et al. 2006; Hyvärinen and Smith 2013), are capable of identifying causal structures among (non-experimentally observed) variables even in the presence of unobserved confounders (cf. Shimizu and Bollen 2014). This approach is thus ideally suited to generate new hypotheses about how intervention effects are altered by factors that are not under experimental control. To the best of our knowledge, this is the first article that introduces modern machine learning algorithms for causal structure learning to the audience of prevention scientists.
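
A minimal two-variable sketch of the LiNGAM idea (a toy version, not the article's full estimation algorithms): fit regressions in both directions and prefer the direction whose residual is less dependent on its predictor, here using a crude nonlinear-correlation proxy for independence. All names and effect sizes are invented for illustration:

```python
import numpy as np

def dep_score(pred, resid):
    """Crude dependence proxy: |corr| between the squared centered predictor and
    the residual; asymptotically zero when the two are independent."""
    p = pred - pred.mean()
    return abs(np.corrcoef(p ** 2, resid)[0, 1])

def pairwise_direction(u, v):
    """Return 'u->v' or 'v->u' for a linear pair with non-Gaussian disturbances."""
    u_c, v_c = u - u.mean(), v - v.mean()
    r_v = v_c - (np.dot(u_c, v_c) / np.dot(u_c, u_c)) * u_c  # residual of v ~ u
    r_u = u_c - (np.dot(u_c, v_c) / np.dot(v_c, v_c)) * v_c  # residual of u ~ v
    return "u->v" if dep_score(u, r_v) < dep_score(v, r_u) else "v->u"

rng = np.random.default_rng(3)
n = 50_000
x = rng.exponential(1.0, n)               # non-Gaussian cause
y = 0.7 * x + rng.exponential(1.0, n)     # non-Gaussian disturbance (LiNGAM requirement)
```

In the correctly specified direction the residual is independent of the predictor, so `pairwise_direction(x, y)` recovers the simulated causal ordering; the full LiNGAM framework extends this logic to whole variable networks and to settings with latent confounders.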

The fifth article, authored by Molenaar (2018), focuses on recent developments in methods for causal inference in intensive longitudinal studies (ILSs), i.e., studies with rapid in situ assessment protocols (cf. Bamberger 2016; Bolger and Laurenceau 2013). With technological advances, ILSs provide new opportunities for prevention research to evaluate the short- and long-term efficacy of prevention programs and to study the dynamics of temporal change in health-related behavior (Ridenour et al. 2013). The author introduces Granger causality testing (Granger 1969), which enables prevention researchers to identify causal relations among time-dependent variables. In essence, Granger causality testing relies on a prediction error approach: a variable x “Granger-causes” a variable y if the prediction error of y at time point t, given a universal set of information up to t, is smaller than the prediction error obtained when the information in x is ignored. The author then highlights the methodological issue that two equivalent representations of so-called vector autoregressive (VAR) models exist which, however, can lead to different Granger causal conclusions. The present contribution introduces a data-driven way to find the optimal representation of VAR models from which Granger causality statements are derived. Application of this novel approach is illustrated using time series of electrodermal activity (EDA) data from a child with sensory processing disorder and his therapist during interaction in occupational therapy.
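
The prediction-error logic of Granger causality can be sketched with a simulated bivariate VAR(1) (the coefficients and series are invented for illustration; the article's data-driven model-representation step is not shown here): the past of x improves the prediction of y, but not vice versa:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5_000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x Granger-causes y

def prediction_error(target, predictors):
    """Mean squared OLS residual of target on the given lagged predictors."""
    X = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    r = target - X @ beta
    return (r ** 2).mean()

y_t, x_t = y[1:], x[1:]
y_lag, x_lag = y[:-1], x[:-1]

# x -> y: adding the past of x reduces the prediction error of y
err_y_restricted = prediction_error(y_t, [y_lag])
err_y_full = prediction_error(y_t, [y_lag, x_lag])

# y -> x: adding the past of y barely changes the prediction error of x
err_x_restricted = prediction_error(x_t, [x_lag])
err_x_full = prediction_error(x_t, [x_lag, y_lag])
```

In this simulation `err_y_full` is clearly below `err_y_restricted`, while the two errors for x are essentially equal, matching the asymmetric data-generating process.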

The Special Section closes with a commentary by Musci and Stuart (2018). Starting with summarizing the fundamental challenges of causal inference, the authors provide a careful reminder that every statistical approach to causal inference, including the approaches presented in the Special Section, builds on (sometimes strong and untestable) assumptions. Thus, prevention scientists need to be aware of these assumptions and be able to assess their plausibility when applying the methods to their own data. Further, the authors emphasize the importance of sensitivity analyses to evaluate the robustness of the causal inference approach against assumption violations.

The articles included in this Special Section present theoretical and empirical research and make important contributions to extend the methodological repertoire of prevention scientists. While causal inference will continue to be a dynamic field of research, the recent advances constitute important steps in the development of statistical methods to equip the next generation of quantitative prevention scientists.

Notes

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent

Informed consent was not required for this study.

References

  1. Bamberger, K. T. (2016). The application of intensive longitudinal methods to investigate change: Stimulating the field of applied family research. Clinical Child and Family Psychology Review, 19, 21–38. https://doi.org/10.1007/s10567-015-0194-6
  2. Beltz, A. M., Wright, A. G. C., Sprague, B. N., & Molenaar, P. C. M. (2016). Bridging the nomothetic and idiographic approaches to the analysis of clinical data. Assessment, 23, 447–458.
  3. Bolger, N., & Laurenceau, J.-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York: Guilford Press.
  4. Bray, B. C., Dziak, J. J., Patrick, M. E., & Lanza, S. T. (2018). Inverse propensity score weighting with a latent class exposure: Estimating the causal effect of reported reasons for alcohol use on problem alcohol use 16 years later. Prevention Science. https://doi.org/10.1007/s11121-018-0883-8
  5. Butera, N. M., Lanza, S. T., & Coffman, D. L. (2014). A framework for estimating causal effects in latent class analysis: Is there a causal link between early sex and subsequent profiles of delinquency? Prevention Science, 15, 397–407. https://doi.org/10.1007/s11121-013-0417-3
  6. Chambers, D. A., Glasgow, R. E., & Strange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8. Retrieved from http://www.implementationscience.com/content/8/1/117. Accessed 10 Jan 2019.
  7. Dong, N., & Maynard, R. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6, 24–67. https://doi.org/10.1080/19345747.2012.673143
  8. Gottfredson, D. C., Cook, T. D., Gardner, F. E., Gorman-Smith, D., Howe, G. W., Sandler, I. N., & Zafft, K. M. (2015). Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: Next generation. Prevention Science, 16, 893–926. https://doi.org/10.1007/s11121-015-0555-x
  9. Granger, C. W. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37, 424–438. https://doi.org/10.2307/1912791
  10. Harder, V. S., Stuart, E. A., & Anthony, J. C. (2010). Propensity score techniques and the assessment of measured covariate balance to test causal associations in psychological research. Psychological Methods, 15, 234–249. https://doi.org/10.1037/a0019623
  11. Herting, J. R. (2002). Evaluating and rejecting true mediation models: A cautionary note. Prevention Science, 3, 285–289. https://doi.org/10.1023/A:1020828709115
  12. Hyvärinen, A., & Smith, S. M. (2013). Pairwise likelihood ratios for estimation of non-Gaussian structural equation models. Journal of Machine Learning Research, 14, 111–152.
  13. Imai, K., Tingley, D., & Yamamoto, T. (2013). Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society A, 176, 5–51. https://doi.org/10.1111/j.1467-985x.2012.01032.x
  14. Keele, L. (2015). Causal mediation analysis: Warning! Assumptions ahead. American Journal of Evaluation, 36, 500–513. https://doi.org/10.1177/1098214015594689
  15. Kelcey, B., Spybrook, J., & Dong, N. (2018). Sample size planning for cluster-randomized interventions probing multilevel mediation. Prevention Science. https://doi.org/10.1007/s11121-018-0921-6
  16. Kellam, S. G., Koretz, D., & Mościcki, E. K. (1999). Core elements of developmental epidemiologically based prevention research. American Journal of Community Psychology, 27, 463–482. https://doi.org/10.1023/a:1022129127298
  17. Lanza, S. T., Schuler, M. S., & Bray, B. C. (2016). Latent class analysis with causal inference: The effect of adolescent depression on young adult substance use profile. In W. Wiedermann & A. von Eye (Eds.), Statistics and causality: Methods for applied empirical research (pp. 385–404). Hoboken: Wiley and Sons.
  18. Lippold, M. A., Coffman, D. L., & Greenberg, M. T. (2014). Investigating the potential causal relationship between parental knowledge and youth risky behavior: A propensity score analysis. Prevention Science, 15, 869–878. https://doi.org/10.1007/s11121-013-0443-1
  19. Liu, W., Kuramoto, S. J., & Stuart, E. A. (2013). An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research. Prevention Science, 14, 570–580. https://doi.org/10.1007/s11121-012-0339-5
  20. MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York: Taylor & Francis.
  21. MacKinnon, D. P., & Pirlott, A. G. (2015). Statistical approaches for enhancing causal interpretation of the M to Y relation in mediation analysis. Personality and Social Psychology Review, 19, 30–43. https://doi.org/10.1177/1088868314542878
  22. Molenaar, P. C. M. (2018). Granger causality testing with intensive longitudinal data. Prevention Science (in press). https://doi.org/10.1007/s11121-018-0919-0
  23. Musci, R. J., & Stuart, E. (2018). Ensuring causal, not casual, inference. Prevention Science (in press).
  24. Muthén, B., & Asparouhov, T. (2015). Causal effects in mediation modeling: An introduction with applications to latent variables. Structural Equation Modeling, 22, 12–23. https://doi.org/10.1080/10705511.2014.935843
  25. Pearl, J. (2009). Causality: Models, reasoning, and inference (2nd ed.). New York: Cambridge University Press.
  26. Pearl, J. (2012). The causal mediation formula: A guide to the assessment of pathways and mechanisms. Prevention Science, 13, 426–436. https://doi.org/10.1007/s11121-011-0270-1
  27. Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of causal inference: Foundations and learning algorithms. Cambridge: MIT Press.
  28. Pirlott, A. G., & MacKinnon, D. P. (2016). Design approaches to experimental mediation. Journal of Experimental Social Psychology, 66, 29–38. https://doi.org/10.1016/j.jesp.2015.09.012
  29. Pituch, K. A., & Stapleton, L. M. (2012). Distinguishing between cross- and cluster-level mediation processes in the cluster randomized trial. Sociological Methods & Research, 41, 630–670. https://doi.org/10.1177/0049124112460380
  30. Pituch, K. A., Stapleton, L. M., & Kang, J. Y. (2006). A comparison of single sample and bootstrap methods to assess mediation in cluster randomized trials. Multivariate Behavioral Research, 41, 367–400. https://doi.org/10.1207/s15327906mbr4103_5
  31. Preacher, K. J., Zyphur, M. J., & Zhang, Z. (2010). A general multilevel SEM framework for assessing multilevel mediation. Psychological Methods, 15, 209–233. https://doi.org/10.1037/a0020141
  32. Ridenour, T. A., Pineo, T. Z., Molina, M. M. M., & Lich, K. H. (2013). Toward rigorous idiographic research in prevention science: Comparison between three analytic strategies for testing preventive intervention in very small samples. Prevention Science, 14, 267–278. https://doi.org/10.1007/s11121-012-0311-4
  33. Shimizu, S. (2018). Non-Gaussian methods for causal structure learning. Prevention Science (in press). https://doi.org/10.1007/s11121-018-0901-x
  34. Shimizu, S., & Bollen, K. (2014). Bayesian estimation of causal direction in acyclic structural equation models with individual-specific confounder variables and non-Gaussian distributions. Journal of Machine Learning Research, 15, 2629–2652.
  35. Shimizu, S., Hoyer, P. O., Hyvärinen, A., & Kerminen, A. (2006). A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7, 2003–2030.
  36. VanderWeele, T. J. (2015). Explanation in causal inference: Methods for mediation and interaction. Oxford: Oxford University Press.
  37. von Eye, A., & Wiedermann, W. (2014). On direction of dependence in latent variable contexts. Educational and Psychological Measurement, 74, 5–30. https://doi.org/10.1177/0013164413505863
  38. Wiedermann, W., & Li, X. (2018). Direction dependence analysis: Testing the direction of effects in linear models with an implementation in SPSS. Behavior Research Methods, 50, 1581–1601. https://doi.org/10.3758/s13428-018-1031-x
  39. Wiedermann, W., & Sebastian, J. (2018). Direction dependence analysis in the presence of confounders: Applications to linear mediation models using observational data. Multivariate Behavioral Research (in press). https://doi.org/10.1080/00273171.2018.1528542
  40. Wiedermann, W., & von Eye, A. (2015). Direction of effects in mediation analysis. Psychological Methods, 20, 221–244. https://doi.org/10.1037/met0000027
  41. Wiedermann, W., & von Eye, A. (2016). Statistics and causality: Methods for applied empirical research. Hoboken: Wiley and Sons.
  42. Wiedermann, W., Li, X., & von Eye, A. (2018). Testing the causal direction of mediation effects in randomized intervention studies. Prevention Science (in press). https://doi.org/10.1007/s11121-018-0900-y

Copyright information

© Society for Prevention Research 2019

Authors and Affiliations

  1. Statistics, Measurement, and Evaluation in Education, Department of Educational, School, and Counselling Psychology, College of Education, University of Missouri, Columbia, USA
  2. University of North Carolina at Chapel Hill, Chapel Hill, USA
  3. Michigan State University, East Lansing, USA
