Understanding the Assessment of Clinical Reasoning

  • Joseph Rencic
  • Steven J. Durning
  • Eric Holmboe
  • Larry D. Gruppen
Chapter in the Innovation and Change in Professional Education book series (ICPE, volume 13)

Abstract

Clinical reasoning assessment is an essential component of determining a health professional’s competence. Clinical reasoning cannot be assessed directly; it must be inferred from a health professional’s choices and decisions. Clinical knowledge and knowledge organization, rather than a general problem-solving process, serve as the substrate for clinical reasoning ability. Unfortunately, the lack of a gold standard for the clinical reasoning process and the phenomenon of context specificity make clinical reasoning difficult to assess. Information processing theory, which focuses on the way the brain processes and organizes knowledge, has provided valuable insights into the cognitive psychology of diagnostic and therapeutic reasoning, but it has failed to explain the variance in health professionals’ diagnostic performance. Situativity theory has emerged to suggest that this variance relates to context-specific factors that affect a health professional’s clinical reasoning performance. Both information processing theory and situativity theory inform the way in which we assess clinical reasoning. Current assessment methods focus on standardized testing of knowledge, which maximizes psychometric rigor, and on workplace-based assessments, which evaluate clinical reasoning under authentic but uncertain conditions that can decrease measurement reliability. Issues of inter-rater reliability and context specificity require that multiple raters assess multiple encounters in multiple contexts to optimize validity and reliability. No single method can assess all aspects of clinical reasoning; improving the quality of clinical reasoning assessment therefore requires combining methods that measure different components of the clinical reasoning process.
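
To make concrete why multiple raters, encounters, and contexts are needed, the classical Spearman-Brown prophecy formula projects how the reliability of a composite score grows as independent observations are pooled. The sketch below is our illustration rather than an analysis from the chapter, and the single-observation reliability of 0.30 is an assumed value chosen only to show the trend.

    # Minimal sketch (illustrative, not from the chapter): the Spearman-Brown
    # prophecy formula projects the reliability of the mean of n parallel
    # observations from the reliability r of a single observation.

    def spearman_brown(r: float, n: int) -> float:
        """Projected reliability of the mean of n parallel observations."""
        return n * r / (1 + (n - 1) * r)

    # Assumed value: one rater scoring one encounter has reliability 0.30.
    # Pooling more rater-encounter observations raises composite reliability,
    # which is why assessment programs sample widely across raters, encounters,
    # and contexts rather than relying on a single observation.
    for n in (1, 4, 8, 12):
        print(f"{n:2d} observations -> projected reliability {spearman_brown(0.30, n):.2f}")
    # Prints 0.30, 0.63, 0.77, 0.84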

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Joseph Rencic, Tufts Medical Center, Boston, USA
  • Steven J. Durning, Uniformed Services University, Bethesda, USA
  • Eric Holmboe, Accreditation Council for Graduate Medical Education, Chicago, USA
  • Larry D. Gruppen, University of Michigan Medical School, Ann Arbor, USA
