Translational Behavioral Medicine, Volume 6, Issue 1, pp 153–159

Research design issues for evaluating complex multicomponent interventions in neighborhoods and communities

  • Kelli A. Komro
  • Brian R. Flay
  • Anthony Biglan
  • Alexander C. Wagenaar

Abstract


Major advances in population health will not occur unless we translate existing knowledge into effective multicomponent interventions, implement and maintain these in communities, and develop rigorous translational research and evaluation methods to ensure continual improvement and sustainability. We discuss challenges and offer approaches to evaluation that are key for translational research stages 3 to 5 to advance optimized adoption, implementation, and maintenance of effective and replicable multicomponent strategies. The major challenges we discuss concern (a) multiple contexts of evaluation/research, (b) complexity of packages of interventions, and (c) phases of evaluation/research questions. We suggest multiple alternative research designs that maintain rigor but accommodate these challenges and highlight the need for measurement systems. Longitudinal data collection and a standardized continuous measurement system are fundamental to the evaluation and refinement of complex multicomponent interventions. To be useful to T3–T5 translational research efforts in neighborhoods and communities, such a system would include assessments of the reach, implementation, effects on immediate outcomes, and effects of the comprehensive intervention package on more distal health outcomes.


Keywords: Health · Well-being · Evaluation design · Complex intervention · Translational research



Acknowledgments

A grant from the National Institute on Drug Abuse (DA028946) supported the authors’ work on this manuscript. The funders had no role in the preparation of the manuscript, and the views expressed are solely those of the authors. We thank Christine Cody and Bethany Livingston for their excellent editorial assistance.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing interests.

Adherence to ethical standards

No human subject research was involved.



Copyright information

© Society of Behavioral Medicine 2015

Authors and Affiliations

  1. Department of Behavioral Sciences and Health Education, Rollins School of Public Health, Emory University, Atlanta, USA
  2. Department of Public Health, Oregon State University, Corvallis, USA
  3. Oregon Research Institute, Eugene, USA
  4. Department of Health Outcomes & Policy, College of Medicine, University of Florida, Gainesville, USA
