Journal of General Internal Medicine, Volume 10, Issue 9, pp 504–510

Measuring attending physician performance in a general medicine outpatient clinic

  • Rodney A. Hayward
  • Brent C. Williams
  • Larry D. Gruppen
  • David Rosenbaum
Original Articles

Abstract

OBJECTIVE: To determine which aspects of outpatient attending physician performance (e.g., clinical ability, teaching ability, interpersonal conduct) were measurable and separable by resident report.

DESIGN: Self-administered evaluation form.

SETTING: University internal medicine resident continuity clinic.

PARTICIPANTS: All residents whose continuity clinic was at the university hospital evaluated the two attendings who staffed their clinic during the academic years 1990–1991, 1991–1992, and 1992–1993 (an average of 85 residents per year). The overall response rate was 74%.

ANALYSIS: Exploratory analyses were conducted on a preliminary evaluation form during the first two years of the study (236 evaluations of 20 clinic attendings); confirmatory analyses, using factor analysis and generalizability analysis, were performed on the third year's data (142 evaluations of 15 clinic attendings). Analysis of variance was used to identify factors associated with evaluation scores.

RESULTS: Analyses demonstrated that the residents did not distinguish between the attendings' clinical and teaching abilities, resulting in a single four-item scale, named the Clinical/Teaching Excellence Scale, measured on a five-point scale from poor to outstanding (Cronbach's alpha = 0.92). A large proportion of the variance in this scale score was associated with attending identity (adjusted R2 = 46%). However, two alternative approaches to evaluating attending performance (preference for the attending over the "average" attending, and perceived impact of the attending on residents' clinical skills) provided no useful information independent of the Clinical/Teaching Excellence Scale. Ratings on three separate conduct scales [availability in clinic (Availability Scale), treating residents and patients with respect (Respect Scale), and time efficiency in staffing cases (Slow Staffing Scale)] were separable from each other and from the rating of clinical/teaching excellence. For the Clinical/Teaching Excellence Scale, as few as four evaluations produced good interrater reliability and eight evaluations produced excellent reliability (reliability coefficients of 0.70 and 0.84, respectively).
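The relationship between the number of evaluations and the reported reliability coefficients can be illustrated with the Spearman-Brown prophecy formula. This is a sketch, not the authors' generalizability-analysis computation: it assumes a single-rater reliability back-solved from the reported value of 0.70 at four evaluations, and shows that eight evaluations then project to roughly the reported 0.84.

```python
def spearman_brown(r_single, k):
    """Projected reliability of the mean of k ratings, given
    single-rater reliability r_single (Spearman-Brown prophecy)."""
    return k * r_single / (1 + (k - 1) * r_single)

# Back-solve the single-rater reliability implied by 0.70 at k = 4:
#   0.70 = 4r / (1 + 3r)  =>  r = 0.70 / (4 - 3 * 0.70)
r = 0.70 / (4 - 3 * 0.70)
print(round(r, 2))                     # single-evaluation reliability, ~0.37
print(round(spearman_brown(r, 8), 2))  # projection for 8 evaluations, ~0.82
```

The projection for eight evaluations (about 0.82) lands close to the reported 0.84; the small gap is expected, since the paper used generalizability analysis rather than this simple projection.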

CONCLUSIONS: Although this evaluation instrument for measuring clinic attending performance must be considered preliminary, this study suggests that relatively few evaluations are needed to reliably profile an individual attending's performance; that attending identity accounts for a large share of the scale score variation; and that aspects of attending performance more relevant to the outpatient setting than the inpatient setting (availability in clinic and sensitivity to time efficiency) should be considered when evaluating clinic attending performance.

Key words

medical education; clinical teaching; ambulatory teaching; internship and residency; internal medicine; performance evaluation; residency training



Copyright information

© Society of General Internal Medicine 1995

Authors and Affiliations

  • Rodney A. Hayward (1, 2)
  • Brent C. Williams (1, 3)
  • Larry D. Gruppen (4)
  • David Rosenbaum (5)
  1. Division of General Medicine, Department of Internal Medicine, University of Michigan, Ann Arbor
  2. Department of Health Services Management and Policy, University of Michigan, Ann Arbor
  3. General Medicine Outpatient Service, Primary Care Education Programs, University of Michigan, Ann Arbor
  4. Office of Educational Resources and Research, Department of Postgraduate Medicine and Health Professions Education, University of Michigan, Ann Arbor
  5. New York University Medical School, New York
