Advances in Health Sciences Education, Volume 15, Issue 1, pp 55–63

Is case-specificity content-specificity? An analysis of data from extended-matching questions

ORIGINAL PAPER

Abstract

Case-specificity, i.e., the variability of a subject’s performance across cases, has been a consistent finding in medical education. It has important implications for assessment validity and reliability, yet its root causes remain a matter of discussion. One hypothesis, content-specificity, links variability of performance to variable levels of relevant knowledge. Extended-matching items (EMIs) are an ideal format to test this hypothesis because items are grouped by topic. If differences in content knowledge are the main cause of case-specificity, variability across topics should be high and variability across items within the same topic low. We used generalisability analysis on the results of a written test composed of 159 EMIs sat by two cohorts of general practice trainees at one university; 227 trainees took part. The variance component attributed to subjects was small, and the variance attributed to topics was smaller than the variance attributed to items. The main source of error was the interaction between subjects and items, which accounted for two-thirds of the error variance. The generalisability D study revealed that, for the same total number of items, increasing the number of topics results in a higher G coefficient than increasing the number of items per topic. Topical knowledge therefore does not seem to explain the case-specificity observed in our data; the structure of knowledge and the reasoning strategy may be more important, in particular pattern recognition, which EMIs were designed to elicit. The causal explanations of case-specificity may depend on test format. Increasing the number of topics, with fewer items each, would increase reliability but also testing time.
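
To illustrate the D-study trade-off described above, the sketch below computes the relative G coefficient for a persons × (items nested in topics) random design, which is one plausible reading of the design reported in the abstract. The variance components and the allocation scenarios are illustrative placeholders, not the study's estimates, and the function name is ours.

```python
# Minimal D-study sketch, assuming a persons x (items nested in topics) random design.
# The variance components below are illustrative placeholders, NOT estimates from the study.

def g_coefficient(var_p, var_pt, var_pi_t, n_topics, n_items_per_topic):
    """Relative G coefficient: subject variance over subject variance plus
    relative error (subject-by-topic and subject-by-item-within-topic terms)."""
    relative_error = var_pt / n_topics + var_pi_t / (n_topics * n_items_per_topic)
    return var_p / (var_p + relative_error)

# Hypothetical components: subjects, subject x topic, subject x item-within-topic.
var_p, var_pt, var_pi_t = 0.010, 0.015, 0.120

# Same total of 160 items allocated over different numbers of topics.
for n_topics, n_items_per_topic in [(10, 16), (20, 8), (40, 4)]:
    g = g_coefficient(var_p, var_pt, var_pi_t, n_topics, n_items_per_topic)
    print(f"{n_topics:2d} topics x {n_items_per_topic:2d} items/topic: G = {g:.3f}")
```

Under these assumptions, the subject-by-item (within-topic) error term depends only on the total number of items, whereas the subject-by-topic term shrinks as topics are added, so spreading a fixed item budget over more topics raises the G coefficient, consistent with the D-study result reported in the abstract.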

Keywords

Assessment · Case-specificity · Clinical reasoning · Content-specificity · Extended-matching items · Generalisability · Pattern recognition · Postgraduate general practice training · Written assessment


Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  1. Centre Académique de Médecine Générale, Université catholique de Louvain, Brussels, Belgium
  2. Centre de Pédagogie Appliquée aux Sciences de la Santé, Université de Montréal, Montreal, Canada
