Journal of General Internal Medicine, Volume 22, Issue 9, pp 1330–1334

Validation of a Method for Assessing Resident Physicians’ Quality Improvement Proposals

  • James L. Leenstra
  • Thomas J. Beckman
  • Darcy A. Reed
  • William C. Mundell
  • Kris G. Thomas
  • Bryan J. Krajicek
  • Stephen S. Cha
  • Joseph C. Kolars
  • Furman S. McDonald
Original Article

Abstract

BACKGROUND

Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking.

OBJECTIVE

We developed an instrument for assessing resident QI proposals—the Quality Improvement Proposal Assessment Tool (QIPAT-7)—and determined its validity and reliability.

DESIGN

QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised.

PARTICIPANTS

Seven raters used the instrument to assess 45 resident QI proposals.

MEASUREMENTS

Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach’s alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively.
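The internal consistency statistic named here, Cronbach's alpha, can be sketched in a few lines. The rating data below are hypothetical (the study's actual 45 × 7 score matrix is not published), so this is a minimal illustration of the computation rather than the authors' analysis:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item across subjects
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 45 proposals scored on 7 items (1-5 scale),
# built from a shared "quality" signal plus small per-item noise
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(45, 1))
noise = rng.integers(-1, 2, size=(45, 7))
ratings = np.clip(base + noise, 1, 5)

alpha = cronbach_alpha(ratings)
```

Alpha rises toward 1 as items covary (the total-score variance grows relative to the sum of item variances), which is why correlated items on a common scale yield the high values reported below.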

RESULTS

QIPAT-7 items loaded on a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (intraclass correlations 0.79 to 0.93) and internal consistency among the items (Cronbach’s alpha = 0.87) were high.
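The single-factor finding can be illustrated with the eigenvalue criterion the abstract reports: compute eigenvalues of the inter-item correlation matrix, and treat one dominant eigenvalue (Kaiser's greater-than-one rule) as evidence of a single dimension. The scores below are simulated from one latent trait, an assumption for illustration, not the study's data:

```python
import numpy as np

# Simulated 45 x 7 item-score matrix sharing a single latent dimension
rng = np.random.default_rng(1)
latent = rng.normal(size=(45, 1))
scores = latent + 0.5 * rng.normal(size=(45, 7))

# Eigenvalues of the inter-item correlation matrix, largest first.
# One eigenvalue well above 1 with the rest below 1 (Kaiser criterion)
# is consistent with a single underlying factor.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
```

Because the correlation matrix has unit diagonal, its eigenvalues sum to the number of items (7 here); an eigenvalue of 3.4 for the first factor therefore accounts for roughly half the total item variance.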

CONCLUSIONS

This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs and should examine correlations between assessment scores and markers of QI proposal success, such as proposal implementation, resident scholarly productivity, and improved patient outcomes.

KEY WORDS

quality improvement; systems-based practice; practice-based learning and improvement; assessment; evaluation study; validation study


Copyright information

© Society of General Internal Medicine 2007

Authors and Affiliations

  • James L. Leenstra (1)
  • Thomas J. Beckman (2)
  • Darcy A. Reed (3)
  • William C. Mundell (2)
  • Kris G. Thomas (3)
  • Bryan J. Krajicek (1)
  • Stephen S. Cha (4)
  • Joseph C. Kolars (5)
  • Furman S. McDonald (2)
  1. Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA
  2. Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA
  3. Division of Primary Care Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA
  4. Division of Biostatistics, Mayo Clinic College of Medicine, Rochester, USA
  5. Division of Gastroenterology and Hepatology, Mayo Clinic College of Medicine, Rochester, USA