
Prevention Science, Volume 13, Issue 3, pp 300–313

Person Mobility in the Design and Analysis of Cluster-Randomized Cohort Prevention Trials

  • Sam Vuchinich
  • Brian R. Flay
  • Lawrence Aber
  • Leonard Bickman

Abstract

Person mobility is an inescapable fact of life for most cluster-randomized (e.g., schools, hospitals, clinics, cities, states) cohort prevention trials. Mobility rates are an important substantive consideration in estimating the effects of an intervention. In cluster-randomized trials, mobility rates are often correlated with ethnicity, poverty, and other variables associated with disparity. This raises the possibility that estimated intervention effects may generalize only to the least mobile segments of a population and thus creates a threat to external validity. Such mobility can also threaten the internal validity of conclusions from randomized trials. Researchers must decide how to deal with persons who leave study clusters during a trial (dropouts), persons and clusters that do not comply with an assigned intervention, and persons who enter clusters during a trial (late entrants), in addition to the persons who remain for the duration of a trial (stayers). Statistical techniques alone cannot solve the key issues of internal and external validity raised by the phenomenon of person mobility. This commentary presents a systematic, Campbellian-type analysis of person mobility in cluster-randomized cohort prevention trials. It describes four approaches for dealing with dropouts, late entrants, and stayers with respect to data collection, analysis, and generalizability. The questions at issue are: 1) From whom should data be collected at each wave of data collection? 2) Which cases should be included in the analyses of an intervention effect? and 3) To what populations can trial results be generalized? The conclusions lead to recommendations for the design and analysis of future cluster-randomized cohort prevention trials.

Keywords

Mobility · Cluster-randomized trials · Cohort prevention trials · Validity · Generalizability

Notes

Acknowledgments

Preparation of this paper was supported by grants to each of the authors as part of the Social and Character Development (SACD) Research program funded by the Institute of Education Sciences (IES), U.S. Department of Education. The content of this publication does not necessarily reflect the views or policies of IES or the U.S. Government.


Copyright information

© Society for Prevention Research 2012

Authors and Affiliations

  • Sam Vuchinich (1)
  • Brian R. Flay (2)
  • Lawrence Aber (3)
  • Leonard Bickman (4)

  1. School of Social and Behavioral Health Sciences, Oregon State University, Corvallis, USA
  2. School of Social and Behavioral Health Sciences, Oregon State University, Corvallis, USA
  3. Steinhardt School of Culture, Education, and Human Development, New York University, New York, USA
  4. Department of Psychology and Human Development, Vanderbilt University, Nashville, USA
