
Prevention Science, Volume 14, Issue 6, pp 557–569

Selection Effects and Prevention Program Outcomes

  • Laura G. Hill
  • Robert Rosenman
  • Vidhura Tennekoon
  • Bidisha Mandal

Abstract

A primary goal of the paper is to provide an example of an evaluation design and analytic method that can be used to strengthen causal inference in nonexperimental prevention research. We used this method in a nonexperimental multisite study to evaluate short-term outcomes of a preventive intervention, and we accounted for effects of two types of selection bias: self-selection into the program and differential dropout. To provide context for our analytic approach, we present an overview of the counterfactual model (also known as Rubin's causal model or the potential outcomes model) and several methods derived from that model, including propensity score matching, the Heckman two-step approach, and full information maximum likelihood based on a bivariate probit model and its trivariate generalization. We provide an example using evaluation data from a community-based family intervention and a nonexperimental control group constructed from the Washington State biennial Healthy Youth Survey (HYS) risk behavior data (HYS n = 68,846; intervention n = 1,502). We identified significant effects of participant, program, and community attributes in self-selection into the program and program completion. Identification of specific selection effects is useful for developing recruitment and retention strategies, and failure to identify selection may lead to inaccurate estimation of outcomes and their public health impact. Counterfactual models allow us to evaluate interventions in uncontrolled settings and still maintain some confidence in the internal validity of our inferences; their application holds great promise for the field of prevention science as we scale up to community dissemination of preventive interventions.
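To make the selection-correction logic concrete, below is a minimal illustrative sketch, in Python, of the Heckman (1979) two-step procedure named in the abstract. The study itself estimated bivariate and trivariate probit models by full information maximum likelihood (e.g., with Stata's cmp module; Roodman, 2009), so this is a sketch of a related technique rather than the paper's method, and the variable names (outcome, x_covariates, participate, z_selection) are hypothetical placeholders rather than the study's data.

import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def heckman_two_step(outcome, x_covariates, participate, z_selection):
    # Step 1: probit model of self-selection into the program.
    z = sm.add_constant(z_selection)
    probit = sm.Probit(participate, z).fit(disp=False)
    linear_pred = z @ np.asarray(probit.params)

    # Inverse Mills ratio, phi(z'gamma) / Phi(z'gamma), for every observation.
    imr = norm.pdf(linear_pred) / norm.cdf(linear_pred)

    # Step 2: outcome equation on participants only, with the inverse Mills
    # ratio added as a regressor to absorb the selection effect.
    sel = participate == 1
    x = sm.add_constant(np.column_stack([x_covariates[sel], imr[sel]]))
    ols = sm.OLS(outcome[sel], x).fit()
    return probit, ols

# Purely synthetic demonstration; no selection on unobservables is built into
# these data, so the Mills-ratio coefficient should be near zero.
rng = np.random.default_rng(0)
n = 2000
z_selection = rng.normal(size=(n, 1))
x_covariates = rng.normal(size=(n, 1))
participate = (0.5 * z_selection[:, 0] + rng.normal(size=n) > 0).astype(int)
outcome = np.where(participate == 1,
                   1.0 + 2.0 * x_covariates[:, 0] + rng.normal(size=n),
                   np.nan)
probit_fit, ols_fit = heckman_two_step(outcome, x_covariates, participate, z_selection)
print(ols_fit.params)  # intercept, slope on x, coefficient on the inverse Mills ratio

Note that the second-step standard errors in this sketch are not corrected for the estimated inverse Mills ratio; full information maximum likelihood, as used in the paper, estimates the selection and outcome equations jointly and avoids that correction.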

Keywords

Selection effects · Translational research · Universal prevention · Family-focused interventions · Causal inference · Observational research · Nonexperimental research

Notes

Acknowledgments

This study was supported in part by the National Institute on Drug Abuse (grants R21 DA025139-01A1 and R21 DA19758-01). We thank the Washington State Department of Health for providing the supplementary data sample, and we thank the program providers and families who participated in the program evaluation.

Supplementary material

11121_2012_342_MOESM1_ESM.docx (134 kb)
ESM 1 (DOCX 134 kb)

References

  1. Arendt, J. N., & Holm, A. (2006). Probit models with binary endogenous regressors (Working Paper 4/2006). Retrieved from the Department of Business and Economics, University of Southern Denmark: http://static.sdu.dk/mediafiles/Files/Om_SDU/Institutter/Ivoe/Disc_papers/Disc_2006/dpbe4%202006%20pdf.pdf. Accessed 23 Oct 2012.
  2. Arthur, M. W., Briney, J. S., Hawkins, J. D., Abbott, R. D., Brooke-Weiss, B. L., & Catalano, R. F. (2007). Measuring risk and protection in communities using the Communities That Care Youth Survey. Evaluation and Program Planning, 30, 197–211.
  3. Barnard, J., Frangakis, C. E., Hill, J. L., & Rubin, D. B. (2003). Principal stratification approach to broken randomized experiments. Journal of the American Statistical Association, 98, 299–323. doi: 10.1198/016214503000071.
  4. Berinsky, A. (2004). Silent voices: Opinion polls and political representation in America. Princeton: Princeton University Press.
  5. Bhattacharya, J., Goldman, D., & McCaffrey, D. (2006). Estimating probit models with self-selected treatments. Statistics in Medicine, 25, 389–413. doi: 10.1002/sim.2226.
  6. Biglan, A., Hood, D., Brozovsky, P., Ochs, L., Ary, D., & Black, C. (1991). Subject attrition in prevention research. NIDA Research Monograph, 107, 213–234.
  7. Bushway, S., Johnson, B. D., & Slocum, L. A. (2007). Is the magic still there? The use of the Heckman two-step correction for selection bias in criminology. Journal of Quantitative Criminology, 23, 151–178. doi: 10.1007/s10940-007-9024-4.
  8. Cook, T. D., & Steiner, P. M. (2010). Case matching and the reduction of selection bias in quasi-experiments: The relative importance of pretest measures of outcome, of unreliable measurement, and of mode of data analysis. Psychological Methods, 15, 56–68. doi: 10.1037/a0018536.
  9. Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27, 724–750.
  10. Dehejia, R. H., & Wahba, S. (2002). Propensity score-matching methods for nonexperimental causal studies. Review of Economics and Statistics, 84, 151–161.
  11. Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
  12. Foster, E. M. (2010). Causal inference and developmental psychology. Developmental Psychology, 46, 1454–1480.
  13. Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161.
  14. Heckman, J. J., Ichimura, H., Smith, J., & Todd, P. (1996). Sources of selection bias in evaluating social programs: An interpretation of conventional measures and evidence on the effectiveness of matching as a program evaluation method. Proceedings of the National Academy of Sciences, 93, 13416–13420.
  15. Hill, L. G., Goates, S. G., & Rosenman, R. (2010). Detecting selection effects in community implementations of family-based substance abuse prevention programs. American Journal of Public Health, 100, 623–630.
  16. Kumpfer, K. L., Molgaard, V., & Spoth, R. (1996). The Strengthening Families Program for the prevention of delinquency and drug use. In R. D. V. Peters & R. J. McMahon (Eds.), Preventing childhood disorders, substance abuse, and delinquency (Banff International Behavioral Science Series, Vol. 3, pp. 241–267). Thousand Oaks: Sage Publications.
  17. Lahiri, K., & Song, J. G. (2000). The effect of smoking on health using a sequential self-selection model. Health Economics, 9, 491–511.
  18. Lesaffre, E., & Molenberghs, G. (1991). Multivariate probit analysis: A neglected procedure in medical statistics. Statistics in Medicine, 10, 1391–1403.
  19. Lochman, J. E., & van den Steenhoven, A. (2002). Family-based approaches to substance abuse prevention. The Journal of Primary Prevention, 23, 49–114.
  20. Maxwell, S. E. (2010). Introduction to the special section on Campbell's and Rubin's conceptualizations of causality. Psychological Methods, 15, 1–2.
  21. Maydeu-Olivares, A., Coffman, D. L., & Hartmann, W. M. (2007). Asymptotically distribution-free (ADF) interval estimation of coefficient alpha. Psychological Methods, 12, 157–176.
  22. McGowan, H. M., Nix, R. L., Murphy, S. A., & Bierman, K. L. (2010). Investigating the impact of selection bias in dose–response analyses of preventive interventions. Prevention Science, 11, 239–251.
  23. Neyman, J. (1990). On the application of probability theory to agricultural experiments: Essay on principles, Section 9. Statistical Science, 5, 465–480.
  24. Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.
  25. Redmond, C., Spoth, R., & Trudeau, L. (2002). Family- and community-level predictors of parent support seeking. Journal of Community Psychology, 30, 153–171.
  26. Roodman, D. M. (2007). CMP: Stata module to implement conditional (recursive) mixed process estimator [Statistical software components]. http://ideas.repec.org/c/boc/bocode/s456882.html. Accessed 23 Oct 2012.
  27. Roodman, D. (2009). Estimating fully observed recursive mixed-process models with cmp. http://www.cgdev.org/content/publications/detail/1421516. Accessed 23 Oct 2012.
  28. Rosenman, R., Mandal, B., Tennekoon, V., & Hill, L. G. (2010). Estimating treatment effectiveness with sample selection (Working Paper 2010-05). Retrieved from the School of Economic Sciences, Washington State University: http://faculty.ses.wsu.edu/WorkingPapers/Rosenman/WP2010-5.pdf. Accessed 23 Oct 2012.
  29. Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701. doi: 10.1037/h0037350.
  30. Rubin, D. B. (1997). Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127, 757–763.
  31. Rubin, D. B. (2004). Teaching statistical inference for causal effects in experiments and observational studies. Journal of Educational and Behavioral Statistics, 29, 343–367.
  32. Rubin, D. B. (2008). For objective causal inference, design trumps analysis. Annals of Applied Statistics, 2, 808–840.
  33. Serlin, R. C., Wampold, B. E., & Levin, J. R. (2003). Should providers of treatment be regarded as a random factor? If it ain't broke, don't "fix" it: A comment on Siemer and Joormann (2003). Psychological Methods, 8, 524–534.
  34. Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15, 3–17.
  35. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
  36. Spoth, R., Randall, G. K., & Shin, C. (2008). Increasing school success through partnership-based family competency training: Experimental study of long-term outcomes. School Psychology Quarterly, 23, 70–89.
  37. US Department of Labor, Bureau of Labor Statistics. (2012). http://www.bls.gov/data/#unemployment. Accessed 23 Oct 2012.
  38. Washington Office of Financial Management. (2010). http://www.ofm.wa.gov/localdata/default.asp. Accessed 23 Oct 2012.
  39. Washington State Department of Health. (2006). Washington State Healthy Youth Survey. http://www.doh.wa.gov/Portals/1/Documents/Pubs/WashingtonStateHYS2006.pdf. Accessed 23 Oct 2012.
  40. West, S. G., & Thoemmes, F. (2010). Campbell's and Rubin's perspectives on causal inference. Psychological Methods, 15, 18–37.

Copyright information

© Society for Prevention Research 2013

Authors and Affiliations

  • Laura G. Hill (1)
  • Robert Rosenman (2)
  • Vidhura Tennekoon (3)
  • Bidisha Mandal (2)
  1. Department of Human Development, Washington State University, Pullman, USA
  2. School of Economic Sciences, Washington State University, Pullman, USA
  3. Department of Economics, Eastern Washington University, Cheney, USA
