
CUTOS: A Framework for Contextualizing Evidence

  • Ariel M. Aloe
  • Christopher G. Thompson
  • Deborah K. Reed
Chapter

Abstract

Researchers often emphasize the need for evidence from previous research to understand what is already known about a specific program or intervention. Yet even when high-quality evidence is available, consumers of research often find it difficult to interpret the findings and apply them to their specific context. Many fields are incorporating evidence-based practices and using systematic reviews to establish what the evidence supports. However, although the importance of doing so is widely stressed, the literature offers little guidance on how to interpret findings in light of the intended application. This chapter introduces a framework that provides a set of tools for disentangling what is known and a space for contemplating whether, and which, findings are applicable to a specific context. The implementation framework begins by identifying known evidence about what is to be implemented (i.e., what the evidence supports) and how the components of that evidence apply to a specific context (population, organization, resources, etc.). In addition, this active implementation framework emphasizes the importance of collaborative decision-making between researchers and stakeholders.

Keywords

Implementation · Generalization · Systematic review · Meta-analysis


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Ariel M. Aloe (1)
  • Christopher G. Thompson (2)
  • Deborah K. Reed (3)

  1. The University of Iowa, Iowa City, USA
  2. Texas A&M University, College Station, USA
  3. Iowa Reading Research Center, Iowa City, USA
