Prevention Science, Volume 11, Issue 3, pp 239–251

Investigating the Impact of Selection Bias in Dose-Response Analyses of Preventive Interventions

  • Herle M. McGowan
  • Robert L. Nix
  • Susan A. Murphy
  • Karen L. Bierman
  • Conduct Problems Prevention Research Group*

Abstract

This paper focuses on the impact of selection bias in the context of extended, community-based prevention trials that attempt to “unpack” intervention effects and analyze mechanisms of change. Relying on dose-response analyses as the most general form of such efforts, this study provides two examples of how selection bias can affect the estimation of treatment effects. In Example 1, we describe an actual intervention in which selection bias was believed to influence the dose-response relation of an adaptive component in a preventive intervention for young children with severe behavior problems. In Example 2, we conduct a series of Monte Carlo simulations to illustrate just how severely selection bias can affect estimates in a dose-response analysis when the factors that affect dose are not recorded. We also assess the extent to which selection bias is ameliorated by the use of pretreatment covariates. We examine the implications of these examples and review trial design, data collection, and data analysis factors that can reduce selection bias in efforts to understand how preventive interventions have the effects they do.
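
The abstract summarizes rather than reproduces the simulation design. As a minimal sketch of the mechanism it describes, the following Python snippet (all effect sizes, the unrecorded factor u, and the proxy covariate x are hypothetical, not taken from the paper) generates a dose that is driven by an unrecorded factor, fits a naive dose-response regression, and then adjusts for a pretreatment covariate that only partially captures that factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unrecorded factor that drives both the dose received and the outcome
# (e.g., family motivation in a home-visiting program).
u = rng.normal(size=n)

# Recorded pretreatment covariate: a noisy proxy for the unrecorded factor.
x = 0.6 * u + rng.normal(scale=0.8, size=n)

# Selection into dose: participants higher on u receive more sessions.
dose = 0.8 * u + rng.normal(size=n)

# True model: dose has NO effect on the outcome; only u matters.
y = 1.0 * u + rng.normal(size=n)

# Naive dose-response regression (dose only): slope is biased away from zero.
naive = np.polyfit(dose, y, 1)[0]

# Covariate-adjusted least squares on [1, dose, x]: bias shrinks but
# persists, because x is only a proxy for the unrecorded factor u.
X = np.column_stack([np.ones(n), dose, x])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]

print("true dose effect:  0.00")
print(f"naive estimate:    {naive:.2f}")
print(f"adjusted estimate: {adjusted:.2f}")
```

In this toy setup the naive slope comes out near 0.5 even though the true dose effect is zero, and adjusting for the proxy covariate shrinks the estimate (to roughly 0.36) without removing the bias; how far pretreatment covariates go toward ameliorating selection bias in realistic settings is exactly what the paper's Monte Carlo simulations assess.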

Keywords

Selection bias · Preventive interventions · Dose-response · Simulations

Copyright information

© Society for Prevention Research 2010

Authors and Affiliations

  • Herle M. McGowan (1)
  • Robert L. Nix (2)
  • Susan A. Murphy (3)
  • Karen L. Bierman (2)
  • Conduct Problems Prevention Research Group*

  1. Department of Statistics, North Carolina State University, Raleigh, USA
  2. Pennsylvania State University, University Park, USA
  3. University of Michigan, Ann Arbor, USA
