Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

  • Carla A. Green
  • Naihua Duan
  • Robert D. Gibbons
  • Kimberly E. Hoagwood
  • Lawrence A. Palinkas
  • Jennifer P. Wisdom
Original Paper

Abstract

Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination, and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

Keywords

Dissemination and implementation research · Mixed methods · Qualitative methods

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Carla A. Green (1)
  • Naihua Duan (2)
  • Robert D. Gibbons (3)
  • Kimberly E. Hoagwood (4)
  • Lawrence A. Palinkas (5)
  • Jennifer P. Wisdom (6)

  1. Center for Health Research, Kaiser Permanente Northwest, Portland, USA
  2. Columbia University Medical Center, New York, USA
  3. Center for Health Statistics, University of Chicago, Chicago, USA
  4. Department of Child and Adolescent Psychiatry, New York University Langone Medical Center, New York, USA
  5. Albert G. and Frances Lomas Feldman Professor of Social Policy and Health, School of Social Work, University of Southern California, Los Angeles, USA
  6. George Washington University, Washington, USA
