Objective: To study the reliability and validity of using medical school faculty to evaluate the interviewing skills of medical students.
Design: All second-year University of North Carolina medical students (n=159) were observed for 5 minutes by one of eight experienced clinical faculty while interviewing standardized patients. Interview quality was assessed with a faculty checklist covering questioning style, facilitative behaviors, and specific content. Twenty-one randomly chosen students were videotaped and rated: by the original rater and four other raters; by two nationally recognized experts; and according to Roter’s coding dimensions, which have been found to correlate strongly with patient compliance and satisfaction.
Setting: Medical school at a state university in the southeastern United States.
Participants: Faculty members who volunteered to evaluate second-year medical students during an annual Objective Structured Clinical Exam.
Interventions: Interrater and intrarater reliability were tested using videotapes of medical students interviewing a standardized patient. Validity was tested by comparing the faculty judgments with both an analysis using the Roter Interactional Analysis System and an assessment made by expert interviewers.
Measurements and main results: The faculty mean checklist score was 80% (range 41–100%). Intrarater reliability was poor for assessment of skills and behaviors compared with that obtained for content. Interrater reliability was also poor, with intraclass correlation coefficients ranging from 0.11 to 0.37. Compared with the experts, faculty raters had a sensitivity of 80% but a specificity of 45% in identifying students with adequate skills. The predictive value of the faculty assessment was 12%. Analysis using Roter’s coding scheme suggested that faculty scored students on the basis of likability rather than specific behavioral skills, limiting their ability to provide behaviorally specific feedback.
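The link between the reported sensitivity, specificity, and predictive value follows Bayes’ rule. The sketch below is illustrative only, not the authors’ calculation: the prevalence of expert-rated “adequate” students is an assumed value chosen to show how a test with 80% sensitivity can still yield a low positive predictive value when specificity is 45% and truly adequate performance is uncommon.

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: P(truly adequate | rated adequate by faculty)."""
    true_pos = sensitivity * prevalence            # adequate students correctly flagged
    false_pos = (1.0 - specificity) * (1.0 - prevalence)  # inadequate students wrongly flagged
    return true_pos / (true_pos + false_pos)

# Reported operating point: sensitivity 80%, specificity 45%.
# The 9% prevalence below is an assumption for illustration,
# not a figure taken from the study.
ppv = positive_predictive_value(0.80, 0.45, 0.09)
print(f"{ppv:.3f}")  # a prevalence near 9% gives a predictive value near 12%
```

Because false positives come from the large pool of students without adequate skills, a low specificity dominates the result even when sensitivity is high.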
Conclusions: To evaluate clinical interviewing skills accurately, we must enhance rater consistency, particularly in assessing those skills that both satisfy patients and yield crucial data.
Platt FW, McMath JC. Clinical hypocompetence: the interview. Ann Intern Med. 1979;91:898–902.
Duffy DL, Hamerman D, Cohen MA. Communication skills of house officers: a study in a medical clinic. Ann Intern Med. 1980;93:354–7.
Beckman HB, Frankel RM. The effect of physician behavior on the collection of data. Ann Intern Med. 1984;101:692–5.
Kaplan SH, Greenfield S, Ware JE. Assessing the effects of physician-patient interaction on the outcomes of chronic disease. Med Care. 1989;27(March suppl):S110–S127.
Hulka BS, Kupper LL, Cassel JC, Mayo F. Doctor-patient communication and outcomes among diabetic patients. J Community Health. 1975;1:15–27.
Hulka BS, Cassel JC, Kupper LL, Burdette JA. Communication, compliance, and concordance between physicians and patients with prescribed medications. Am J Public Health. 1976;66:847–53.
Starfield B, Wray C, Hess K, Gross R, Birk PS, D’Lugoff BD. The influence of patient-practitioner agreement on outcome of care. Am J Public Health. 1981;71:127–31.
Stiles WB, Putnam SM, Wolf MH, James SA. Interaction exchange structure and patient satisfaction with medical interviews. Med Care. 1979;17:667–81.
DiMatteo MR, Hays RD, Prince LM. Relationship of physicians’ nonverbal communication skill to patient satisfaction, appointment noncompliance, and physician workload. Health Psychol. 1986;5(6):581–94.
Hart I, Harden R, Walton H (eds). Newer developments in assessing clinical competence. Montreal: Can-Heal, 1986.
Petrusa ER, Blackwell TA, Ainsworth MA. Reliability and validity of an objective structured clinical examination for assessing the clinical performance of residents. Arch Intern Med. 1990;150:573–7.
Hoole AJ, Kowlowitz V, McGaghie WC, Sloane PS, Colindres RE. Using the Objective Structured Clinical Exam at the University of North Carolina. N C J Med. 1987;48(9):463–7.
Inui T, Carter WB. Problems and prospects for health services research on provider-patient communication. Med Care. 1985;23:521–38.
Mishler EG, Clark JA, Ingelfinger J, Simon MP. The language of attentive patient care: a comparison of two medical interviews. J Gen Intern Med. 1989;4:325–35.
Novack DH. Therapeutic aspects of the clinical encounter. Clin Rev. 1987;2:346–55.
Larsen LM, Smith CK. Assessment of nonverbal communication in the patient-physician interview. J Fam Pract. 1981;12:481–8.
Roter D. The Roter method of interaction process analysis. Baltimore, MD: Johns Hopkins University Press, 1989.
Wasserman RC, Inui TS. Systematic analysis of clinician-patient interactions: a critique of recent approaches with suggestions for future research. Med Care. 1983;21:279–93.
Inui TS, Carter WB, Kukull WA, Haigh VH. Outcome-based doctor-patient interaction analysis: I. Comparison of techniques. Med Care. 1982;20:535–49.
Carter WB, Inui TS, Kukull WA, Haigh VH. Outcome-based doctor-patient interaction analysis: II. Identifying effective provider and patient behavior. Med Care. 1982;20:550–66.
Wolraich ML, Albanese M, Stone G, et al. Medical communications behavior system: an interactional analysis system for medical interaction. Med Care. 1986;24:891–903.
Kramer MS, Feinstein AR. Clinical biostatistics: LIV. The biostatistics of concordance. Clin Pharmacol Ther. 1981;1:111–23.
Fleiss JL. The design and analysis of clinical experiments. New York: John Wiley and Sons, 1986.
Nunnally JS. Psychometric theory. New York: McGraw Hill, 1978.
Herbers JE Jr, Noel GL, Cooper GS, Harvey J, Pangaro LN, Weaver MJ. How accurate are faculty evaluations of clinical competence? J Gen Intern Med. 1989;4:202–8.
Branch W. Office practice of medicine. In: Lipkin M. The medical interview and related skills. Social-psychiatric problems. Philadelphia: W. B. Saunders, 1982.
Hampton JR, Harrison MJG, Mitchell JRA, Prichard JS, Seymour C. Relative contributions of history-taking, physical examination, and laboratory investigation to diagnosis and management of medical outpatients. Br Med J. 1975;31:486–9.
Woolliscroft JO, Calhoun JG, Billiu GA, Stross JK, MacDonald M, Templeton B. House officer interviewing techniques: impact on data elicitation and patient perceptions. J Gen Intern Med. 1989;4:108–14.
Wartman SA, Morlock LL, Malitz FE, Palm E. Do prescriptions adversely affect doctor-patient interactions? Am J Public Health. 1981;71:1358–61.
Dirks JF, Schraa JC, Robinson SK. Patient mislabeling of symptoms: implications for patient-physician communication and medical outcome. Int J Psychiatry Med. 1982;12:15–27.
Maguire P, Rutter DR. History-taking for medical students: deficiencies in performance. Lancet. 1976;ii:556–8.
Valente CM, Antlitz AM, Boyd MD, Troisi AJ. The importance of physician-patient communication in reducing medical liability. Med Mutual J. 1988;January:75–8.
Winslow R. Sometimes, talk is the best medicine; for physicians, communication may avert suits. Wall Street Journal. October 5, 1989:B1.
Eisenberg JM. Evaluating internists’ clinical competence. J Gen Intern Med. 1989;4:139–43.