Advances in Health Sciences Education, Volume 14, Issue 4, pp 575–594

Evaluating the effectiveness of rating instruments for a communication skills assessment of medical residents

  • Cherdsak Iramaneerat
  • Carol M. Myford
  • Rachel Yudkowsky
  • Tali Lowenstein
Original Paper

Abstract

The investigators used evidence based on response processes to evaluate and improve the validity of scores on the Patient-Centered Communication and Interpersonal Skills (CIS) Scale for the assessment of residents’ communication competence. They retrospectively analyzed the communication skills ratings of 68 residents at the University of Illinois at Chicago (UIC). Each resident encountered six standardized patients (SPs) portraying six cases. The SPs rated each resident’s performance using the CIS Scale, an 18-item rating instrument on which raters indicate their level of agreement with each item on a 5-category scale. A many-faceted Rasch measurement model was used to determine how effectively each item and rating scale on the instrument performed. The analyses revealed that the items were too easy for the residents. The SPs underutilized the lowest rating category, so that the scale functioned in practice as a 4-category rating scale, and some SPs were inconsistent when assigning ratings in the middle categories. Based on these findings, the investigators modified the rating instrument, creating the Revised UIC Communication and Interpersonal Skills (RUCIS) Scale, a 13-item rating instrument that employs a 4-category behaviorally anchored rating scale for each item. The investigators implemented the RUCIS Scale in a subsequent communication skills OSCE for 85 residents. These analyses revealed that the RUCIS Scale functioned more effectively than the CIS Scale in several respects (e.g., a more uniform distribution of ratings across categories and better fit of the items to the measurement model). However, SPs still rarely assigned ratings in the lowest category of each scale.
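As a point of reference, the many-faceted Rasch measurement model used in these analyses is, in its standard form, a log-odds model in which each facet contributes additively to the probability of a rating. A minimal sketch of that model, assuming the facets implied by the design described above (residents, items, SPs, and rating scale categories) rather than a formula reported by the authors, is:

$$\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k$$

where $P_{nijk}$ is the probability that resident $n$ receives a rating in category $k$ from SP $j$ on item $i$, $B_n$ is the communication ability of resident $n$, $D_i$ is the difficulty of item $i$, $C_j$ is the severity of SP $j$, and $F_k$ is the threshold between rating categories $k-1$ and $k$. The exact parameterization the authors estimated may differ; this sketch simply illustrates how the model separates resident ability from item difficulty, rater severity, and rating scale structure.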

Keywords

Validity · Rating scale · Communication skills · Many-faceted Rasch measurement · OSCE


Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  • Cherdsak Iramaneerat (1)
  • Carol M. Myford (2)
  • Rachel Yudkowsky (3)
  • Tali Lowenstein (3)
  1. Department of Surgery, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
  2. Department of Educational Psychology, College of Education, University of Illinois at Chicago, Chicago, USA
  3. Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, USA
