Journal of General Internal Medicine, Volume 20, Issue 12, pp 1159–1164

What is the validity evidence for assessments of clinical teaching?

Authors

  • Thomas J. Beckman
    • Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation
  • David A. Cook
    • Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation
  • Jayawant N. Mandrekar
    • Division of Biostatistics, Department of Health Sciences Research, Mayo Clinic College of Medicine, Mayo Clinic and Mayo Foundation
Clinical Review

DOI: 10.1111/j.1525-1497.2005.0258.x

Cite this article as:
Beckman, T.J., Cook, D.A. & Mandrekar, J.N. J GEN INTERN MED (2005) 20: 1159. doi:10.1111/j.1525-1497.2005.0258.x

Abstract

BACKGROUND: Although a variety of validity evidence should be used when evaluating assessment tools, a review of teaching assessments suggested that authors pursue a limited range of validity evidence.

OBJECTIVES: To develop a method for rating validity evidence and to quantify the evidence supporting scores from existing clinical teaching assessment instruments.

DESIGN: A comprehensive search yielded 22 articles on clinical teaching assessments. Using standards outlined by the American Psychological and Educational Research Associations, we developed a method for rating the 5 categories of validity evidence reported in each article. We then quantified the validity evidence by summing the ratings for each category. We also calculated weighted κ coefficients to determine interrater reliabilities for each category of validity evidence.
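The weighted κ statistic used above to quantify interrater reliability can be illustrated with a short sketch. This is not the authors' analysis code; the linear weighting scheme and ordinal rating scale below are assumptions for illustration, since the abstract does not specify the weights used:

```python
def weighted_kappa(rater1, rater2, categories):
    """Linearly weighted Cohen's kappa for two raters' ordinal ratings.

    categories: ordered list of possible rating levels.
    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)

    # Observed joint rating proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginal rating distributions for each rater
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Linear disagreement weights: 0 on the diagonal, 1 at maximum distance
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]

    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical ratings of validity evidence on a 0-2 ordinal scale
r1 = [0, 1, 2, 1, 0, 2]
r2 = [0, 1, 1, 1, 0, 2]
print(weighted_kappa(r1, r2, [0, 1, 2]))
```

Linear weights penalize disagreements in proportion to their distance on the ordinal scale, which is appropriate when adjacent rating categories are closer in meaning than distant ones; quadratic weights are another common choice.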

MAIN RESULTS: Content and Internal Structure evidence received the highest ratings (27 and 32, respectively, of 44 possible). Relation to Other Variables, Consequences, and Response Process received the lowest ratings (9, 2, and 2, respectively). Interrater reliability was good for Content, Internal Structure, and Relation to Other Variables (κ range 0.52 to 0.96, all P values <.01), but poor for Consequences and Response Process.

CONCLUSIONS: Content and Internal Structure evidence is well represented among published assessments of clinical teaching. Evidence for Relation to Other Variables, Consequences, and Response Process has received little attention, and future research should emphasize these categories. The low interrater reliability for Response Process and Consequences likely reflects the scarcity of reported evidence. With further development, our method for rating validity evidence should prove useful in various settings.

Key Words

validity, clinical teaching, evaluation studies

Copyright information

© Society of General Internal Medicine 2005