
Special Issue Overview: Optimizing Mixed Methods for Implementation Research in Large Systems

  • Kimberly Hoagwood
  • Serene Olin
  • Sarah Horwitz
Introduction

States and healthcare systems spend significant resources on mental and physical health services, are responsible for serving entire populations under their purview, and struggle to ensure that the services they provide are informed by the best available evidence. Consequently, implementation research has become a powerful branch of applied science, focused on identifying strategies for improving the quality of population-based services. These large systems are multi-level, with clients nested within providers, providers nested within clinics or agencies, and agencies nested within the state or healthcare systems that craft policies. Despite the development of numerous conceptual models and a small number of studies on phases and stages in implementing specific practices (Saldana 2014; Chamberlain et al. 2011; Olin et al. in press), very little attention has been directed towards solving the unique methodological challenges of this research, particularly within large and complex state and healthcare systems.

Implementing evidence-based and innovative practices, treatments, and services in large systems is highly complex, and has not, until recently, been guided by empirical or theoretical knowledge. Mixed method designs and approaches have been proposed to promote a more complete and efficient way of understanding the full range of factors that influence the dissemination and implementation of evidence-based innovations in large systems.

This special issue provides an overview of mixed methods designs and approaches, as well as applications and integration of sophisticated sampling, statistical methods, and models (borrowed from fields such as anthropology, statistics, engineering, and computer science) to expand the range of solutions for handling the unique challenges of design, sampling, measurement, and analysis common in implementation research. Given the rapidity of changes in the structure, financing, and delivery of services within states and healthcare systems, driven largely by the Patient Protection and Affordable Care Act (ACA), the use of mixed methods to efficiently wrest meaning from myriad, complex sources of data is needed to optimize interpretation, increase usable knowledge, and inform system-wide reforms.

The papers in this special issue arose from the work of our Advanced Center on Implementation-Dissemination of Evidence-Based Practices Among States [P30 MH090322-01 A1 Hoagwood (PI), The IDEAS Center], co-located within New York University’s Child Study Center and the New York State Office of Mental Health (OMH). The IDEAS Center was explicitly positioned to conduct experiments within the state “laboratory” of the children’s mental health system, making the work immediately policy-relevant. The authors in this series have been members of the IDEAS faculty, and here address specific methodological challenges that are applicable to large-system rollouts of evidence-based practices for children, adolescents, and families.

In the six papers in this special issue, we describe conceptual issues and specific strategies for sampling, designing, and analyzing complex data using mixed methods. The papers provide not only theoretically informed frameworks but also practical, grounded strategies that can be used to answer questions related to scaling up new practices or services in large systems.

In the first paper, “Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities,” Green and co-authors (Green et al. 2014) create a toolkit of the common qualitative and quantitative methods most applicable to implementation research. The authors review qualitative, quantitative, and hybrid approaches and provide examples of optimal ways to integrate them. Because implementation research is located within organizational cultures, embedded in networks of agencies, nested within communities, reflective of different stakeholder perspectives, and often driven by state or national policies, a toolkit that guides the selection and application of mixed methods is helpful for analyzing information gathered from different constituencies and for interpreting complex, multi-layered data.

In the second paper, “Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research,” Palinkas and colleagues (Palinkas et al. 2013) describe the issues involved in the various sampling strategies used in implementation research. These include selecting extreme (or outlier) cases to maximize variation and document its full range, or selecting homogeneous cases to reduce variation and simplify analysis. The authors present different approaches to purposeful sampling suited to the features of implementation research, and contrast purposeful with random sampling strategies. This discussion is important because there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. The authors describe a multistage strategy for purposeful sampling that begins with an emphasis on variation or dispersion and then narrows to an emphasis on similarity or central tendencies, an approach that helps to balance internal and external validity. Importantly, the review offers key recommendations to promote the optimal use of purposeful sampling in mixed methods implementation research.
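
To make these sampling strategies concrete, here is a minimal Python sketch contrasting extreme-case and homogeneous purposeful sampling on hypothetical site-level data; the sites, adoption rates, and function names are illustrative assumptions, not drawn from Palinkas et al.

```python
import statistics

# Hypothetical site-level data from a state rollout: (site_id, adoption_rate).
# The values are invented for illustration.
sites = [("A", 0.12), ("B", 0.85), ("C", 0.47), ("D", 0.91),
         ("E", 0.08), ("F", 0.52), ("G", 0.49), ("H", 0.33)]

def extreme_case_sample(units, k):
    """Select the k lowest and k highest performers to maximize variation."""
    ranked = sorted(units, key=lambda u: u[1])
    return ranked[:k] + ranked[-k:]

def homogeneous_sample(units, k):
    """Select the k units closest to the median to minimize variation."""
    med = statistics.median(u[1] for u in units)
    return sorted(units, key=lambda u: abs(u[1] - med))[:k]

# Multistage strategy: begin broadly (maximize variation), then narrow
# to typical cases near the central tendency.
print("Extreme cases:", extreme_case_sample(sites, 2))
print("Typical cases:", homogeneous_sample(sites, 3))
```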

In the complementary paper by Duan and co-authors, “Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research,” the authors describe the value of blending these two methods to strengthen the yield from implementation studies. As they explain, the essence of purposeful sampling, widely used in qualitative research, is to select information-rich cases for the most effective use of limited resources, for example by selecting extreme or outlier cases, or cases with maximum variation, thus increasing between-unit variance. Alternatively, one can select homogeneous cases to reduce variation and simplify analysis, thus reducing within-unit variability and facilitating group interviews. Duan et al. point out how quantitatively based optimal design approaches result in designs similar to those derived from purposeful sampling, and they suggest potential synergies between the two methods. Optimal design provides a useful framework for assessing the sensitivity of design decisions. Because optimal design is a quantitative approach that requires a variety of assumptions, the more flexible purposeful sampling may serve as a useful complement, offering a qualitative means of formulating and evaluating those assumptions. This paper provides creative ideas for integrating purposeful sampling and optimal design in mixed methods research. Blending the two approaches can improve efficiency and the use of limited resources, which is especially valuable in implementation research, where multiple aims and objectives must often be balanced to enhance knowledge yield.
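
To illustrate the quantitative side of this pairing, the sketch below works through one classic optimal-design calculation for a two-level system: choosing the per-cluster sample size that minimizes the variance of an overall estimate under a fixed budget. This is a generic textbook result rather than the specific machinery of Duan et al.; the variance components and costs are assumed values of precisely the kind a purposeful qualitative phase could help formulate and evaluate.

```python
import math

def optimal_cluster_size(sigma_b2, sigma_w2, cost_cluster, cost_unit):
    """Classic two-level optimal-design result: for a fixed budget, the
    per-cluster sample size minimizing the variance of the overall mean is
    sqrt((cost_cluster / cost_unit) * (sigma_w2 / sigma_b2))."""
    return math.sqrt((cost_cluster / cost_unit) * (sigma_w2 / sigma_b2))

def n_clusters_for_budget(budget, m, cost_cluster, cost_unit):
    """Number of clusters the budget buys at m units per cluster."""
    return int(budget // (cost_cluster + m * cost_unit))

# Assumed inputs (illustrative only): between-clinic variance 0.25,
# within-clinic variance 1.0, $500 to recruit a clinic, $50 per provider,
# and a $20,000 budget. Re-running with different assumed variances shows
# how sensitive the design is to those assumptions.
m_star = optimal_cluster_size(0.25, 1.0, 500, 50)
m = max(1, round(m_star))
n = n_clusters_for_budget(20000, m, 500, 50)
print(f"optimal providers per clinic ~ {m_star:.1f}; "
      f"sample {n} clinics of {m} providers each")
```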

A significant challenge in implementation research is the consistent definition and measurement of implementation constructs. The paper entitled “Measures for Predictors of Innovation Adoption” by Chor et al. (2014) provides a comprehensive, theoretically based review of measures that assess the many predictors associated with innovation adoption. It describes 118 measures associated with 27 adoption predictors across the multiple levels and contexts common in implementation research (i.e., outer context, organization, innovation, and individual levels). Each measure is characterized in terms of its content, its psychometric properties, and whether it is publicly available and accessible. Measures currently in use, as well as those with potential for assessing important constructs related to adoption, are described.

Altogether, 118 measures associated with innovation adoption are identified, many of them focusing on organizational structure, discrete innovation features, and individual provider characteristics. Constructs with fewer associated measures, such as individual client characteristics, are also described, pointing to areas for further development. This paper brings conceptual and definitional consistency to the measurement of the first phase of implementation, i.e., adoption. Consequently, it contributes important knowledge about ways to assess modifiable processes, about measures capable of examining multiple levels of implementation simultaneously, and about measures that reflect not only adoption but also its counterparts, de-adoption and non-adoption. Indeed, Chor et al. note that a glaring omission in the literature is any focus on predictors of failed adoption or non-adoption.

In the paper, “Blending Qualitative and Computational Linguistics Methods for Fidelity Assessment: Experience with the Familias Unidas Preventive Intervention,” Gallo et al. (2014) describe the use of an automated system for processing linguistic patterns that can be used for fidelity monitoring. Developing feasible and easy-to-use fidelity monitoring technologies is important if effective interventions are to be implemented widely. In this example, the authors compare kappa scores between human raters and machine raters for measuring fidelity to a behavioral intervention. They blend qualitative and quantitative measures and apply rule-based computational linguistics to provide compelling data on this new and practical approach to fidelity measurement. As the authors aptly point out, the success or failure of the current approach to evidence-based implementation hinges, in part, on the ability of the field to develop easily usable and practical fidelity monitoring tools.
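
As a minimal sketch of the kind of human–machine agreement check reported here, the code below computes Cohen’s kappa between hypothetical human fidelity codes and a toy keyword-based machine coder; the transcript, codes, and keyword rule are invented for illustration and do not represent the actual computational linguistics system used with Familias Unidas.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary codes
    (1 = fidelity element present, 0 = absent)."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)            # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def machine_code(utterance, keywords=("goal", "plan", "practice")):
    """Toy rule-based coder: flag an utterance as on-protocol if it
    contains any protocol keyword (a crude stand-in for the rule-based
    computational linguistics described in the paper)."""
    return int(any(k in utterance.lower() for k in keywords))

transcript = ["Let's set a goal for this week.",
              "How was your weekend?",
              "We will practice the new skill together.",
              "Tell me about school."]
human = [1, 0, 1, 0]  # hypothetical human fidelity codes
machine = [machine_code(u) for u in transcript]
print("kappa =", cohens_kappa(human, machine))
```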

This issue closes with an opinion piece by Wu and colleagues highlighting how application of engineering principles to implementation science may offer breakthroughs conceptually and technically. In “Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings,” Wu et al. provide examples of how the application of engineering methods might be relevant for the development of more efficient and safer implementation of clinical practices, medical devices, and health services systems. They provide a specific example of systems engineering design applied to a study of the implementation of an evidence-based depression intervention for low-income diabetes patients, using engineering techniques and digital technologies to enhance workflow, decision-making, and communication. The authors highlight the utility of engineering principles in customizing the implementation of evidence-based innovations within healthcare settings.

Altogether, these papers provide thoughtful, theoretically rich, yet detailed and practical examples of ways to design mixed method studies in implementation research conducted within large systems. To date, such methods have yet to be fully applied within children’s services. We hope that these papers will advance a more robust and easily accessible knowledge base about ways to optimize implementation of effective services for children and families within large state and healthcare systems.

References

  1. Chamberlain, P., Brown, C. H., & Saldana, L. (2011). Observational measure of implementation progress: The stages of implementation completion (SIC). Implementation Science, 6, 116.
  2. Chor, K. H. B., Wisdom, J. P., Olin, S.-C. S., Hoagwood, K. E., & Horwitz, S. M. (2014). Measures for predictors of innovation adoption. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-014-0551-7.
  3. Gallo, C., Pantin, H., Villamar, J., Prado, G., Tapia, M., Ogihara, M., et al. (2014). Blending qualitative and computational linguistics methods for fidelity assessment: Experience with the Familias Unidas preventive intervention. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-014-0538-4.
  4. Green, C. A., Duan, N., Gibbons, R. D., Hoagwood, K. E., Palinkas, L. A., & Wisdom, J. P. (2014). Approaches to mixed methods dissemination and implementation research: Methods, strengths, caveats, and opportunities. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-014-0552-6.
  5. Olin, S. S., Chor, K. H. B., Weaver, J., Duan, N., Kerker, B., Clark, L., et al. (in press). Multilevel predictors of clinic adoption of state-supported trainings in children’s services. Psychiatric Services.
  6. Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2013). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-013-0528-y.
  7. Saldana, L. (2014). The stages of implementation completion for evidence-based practice: Protocol for a mixed methods study. Implementation Science, 9, 43.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. New York University Langone Medical Center, New York, USA
