Monitoring Treatment Progress and Providing Feedback is Viewed Favorably but Rarely Used in Practice
Numerous trials demonstrate that monitoring client progress and using feedback for clinical decision-making enhances treatment outcomes. However, available data suggest these practices are rare in clinical settings, and no psychometrically validated measures exist for assessing attitudinal barriers to them. This national survey of 504 clinicians collected data on attitudes toward, and use of, monitoring and feedback. Two new measures were developed and subjected to factor analysis: the Monitoring and Feedback Attitudes Scale (MFA), measuring general attitudes toward monitoring and feedback, and the Attitudes Toward Standardized Assessment Scales-Monitoring and Feedback (ASA-MF), measuring attitudes toward standardized progress tools. Both measures showed good fit to their final factor solutions, with excellent internal consistency for all subscales. Scores on the MFA subscales (Benefit, Harm) indicated that clinicians hold generally positive attitudes toward monitoring and feedback, but scores on the ASA-MF subscales (Clinical Utility, Treatment Planning, Practicality) were relatively neutral. Providers with cognitive-behavioral theoretical orientations held more positive attitudes. Only 13.9% of clinicians reported using standardized progress measures at least monthly, and 61.5% never used them. Providers with more positive attitudes reported higher use, providing initial support for the predictive validity of the ASA-MF and MFA. Thus, while clinicians report generally positive attitudes toward monitoring and feedback, routine collection of standardized progress measures remains uncommon. Implications for the dissemination and implementation of monitoring and feedback systems are discussed.
Keywords: Psychological assessment · Attitude measures · Evidence-based practice · Therapists
This research was supported by an award from the University of Miami’s Provost Research Award program to Dr. Jensen-Doss. Dr. Lewis’ work on this project was supported by the National Institute of Mental Health of the National Institutes of Health (NIH) under Award Number R01MH103310, and Dr. Lyon’s work by NIH award K08MH095939.
Compliance with Ethical Standards
Conflict of Interest
None of the authors have conflicts of interest to declare.
All procedures performed in this study were in accordance with the ethical standards of the University of Miami Institutional Review Board and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
This study was approved for a waiver of signed consent; all participants were provided with a consent statement.