Advances in Health Sciences Education, Volume 16, Issue 5, pp 601–608

Optimization of answer keys for script concordance testing: should we exclude deviant panelists, deviant responses, or neither?

  • Robert Gagnon
  • Stuart Lubarsky
  • Carole Lambert
  • Bernard Charlin

Abstract

The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of experienced practitioners' responses to particular clinical situations. This type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could diminish the reliability of scores by introducing measurement error. The aims of this study were: (1) to investigate the effects on SCT psychometrics of excluding from the test's scoring key either deviant panelists or deviant answers; and (2) to propose a method for excluding either deviant panelists or deviant answers. Using an SCT in radiation oncology, we examined three methods for reducing panel response variability. One method ('outliers') entailed removing panel members with very low total scores. Two other methods ('distance-from-mode' and 'judgment-by-experts') excluded widely deviant responses to individual questions from the test's scoring key. We compared the effects of these methods on score reliability, on correlations between original and adjusted scores, and on between-group effect sizes (panel-residents, panel-students, and residents-students). With a large panel (n = 45), the optimization methods had no effect on score reliability, correlation, or effect size. With a smaller panel (n = 15), no significant effect of the optimization methods was observed on reliability or correlation, but significant variation in effect size was observed across samples. Measurement error resulting from deviant panelist responses on SCTs is therefore negligible, provided the panel size is sufficiently large (>15). If removal of deviant answers is nonetheless judged necessary, the distance-from-mode strategy is recommended.
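The aggregate scoring and distance-from-mode exclusion described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the standard SCT convention that each panelist's response to an item earns credit proportional to how many panelists chose it (modal response = full credit), and it models the distance-from-mode filter as dropping responses more than a chosen number of Likert points from the modal response. Function names, the example panel, and the `max_distance` threshold are illustrative assumptions.

```python
from collections import Counter

def aggregate_key(panel_answers):
    """Aggregate SCT scoring key: credit for each response option equals
    (number of panelists choosing it) / (count of the modal response)."""
    counts = Counter(panel_answers)
    modal_count = max(counts.values())
    return {resp: n / modal_count for resp, n in counts.items()}

def filter_distance_from_mode(panel_answers, max_distance=2):
    """Distance-from-mode exclusion (illustrative): drop responses farther
    than max_distance Likert points from the modal panel response."""
    mode = Counter(panel_answers).most_common(1)[0][0]
    return [a for a in panel_answers if abs(a - mode) <= max_distance]

# Hypothetical 15-member panel answering one item on a -2..+2 Likert scale
panel = [1, 1, 1, 1, 1, 0, 0, 0, 2, 2, -2, 1, 1, 0, 1]

key = aggregate_key(panel)
print(key[1])   # 1.0  — modal response (+1, chosen by 8 of 15) gets full credit
print(key[0])   # 0.5  — chosen by 4 panelists: 4/8 of full credit

filtered = filter_distance_from_mode(panel)
print(-2 in filtered)  # False — the widely deviant response is excluded
```

Rebuilding the key from the filtered list then yields an adjusted scoring key in which the deviant response earns no credit, which is the kind of adjustment whose psychometric impact the study evaluates.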

Keywords

Clinical reasoning · Script concordance test · Optimization · Reliability · Panel


Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  • Robert Gagnon (1)
  • Stuart Lubarsky (2)
  • Carole Lambert (1)
  • Bernard Charlin (1)
  1. CPASS, Faculty of Medicine, University of Montreal, Montreal, Canada
  2. Department of Neurology and Neurosurgery and Centre for Medical Education, McGill University, Montreal, Canada
