Abstract
Clinical reasoning assessment is an essential component of determining a health professional’s competence. Clinical reasoning cannot be assessed directly; it must be inferred from a health professional’s choices and decisions. Clinical knowledge and its organization, rather than a general problem-solving process, serve as the substrate for clinical reasoning ability. Unfortunately, the lack of a gold standard for the clinical reasoning process, together with the observation of context specificity, makes clinical reasoning difficult to assess. Information processing theory, which focuses on the way the brain processes and organizes knowledge, has provided valuable insights into the cognitive psychology of diagnostic and therapeutic reasoning but has failed to explain the variance in health professionals’ diagnostic performance. Situativity theory has emerged, suggesting that this variance relates to context-specific factors that affect a health professional’s clinical reasoning performance. Both information processing theory and situativity theory inform the way in which we assess clinical reasoning. Current assessment methods focus on standardized testing of knowledge, which maximizes psychometric parameters, and on work-based assessments, which evaluate clinical reasoning under authentic, uncertain conditions that can decrease the reliability of measurement. Issues of inter-rater reliability and context specificity require that multiple raters assess multiple encounters in multiple contexts to optimize validity and reliability. No single assessment method can assess all aspects of clinical reasoning; therefore, improving the quality of clinical reasoning assessment requires combinations of methods that measure different components of the clinical reasoning process.
Keywords
- Clinical Reasoning
- Objective Structured Clinical Examination
- Cognitive Load Theory
- Diagnostic Reasoning
- United States Medical Licensing Examination
Notes
1. Adapted from Dory et al. (2012).
References
Abbott, V., Black, J. B., & Smith, E. E. (1985). The representation of scripts in memory. Journal of Memory and Language, 24(2), 179–199.
ACP Smart Medicine. (n.d.). Retrieved July 28, 2014, from http://smartmedicine.acponline.org
Adamson, K. A., Gubrud, P., Sideras, S., & Lasater, K. (2011). Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: three approaches. Journal of Nursing Education, 51(2), 66–73.
American Board of Anesthesiology. (n.d.). Maintenance of certification in anesthesiology (MOCA): Simulation for MOCA. Retrieved July 22, 2014 from http://www.theaba.org/Home/anesthesiology_maintenance
Ark, T. K., Brooks, L. R., & Eva, K. W. (2007). The benefits of flexibility: The pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Medical Education, 41(3), 281–287.
Babbott, S. F., Beasley, B. W., Hinchey, K. T., Blotzer, J. W., & Holmboe, E. S. (2007). The predictive validity of the internal medicine in-training examination. American Journal of Medicine, 120(8), 735–740.
Bland, A. C., Kreiter, C. D., & Gordon, J. A. (2005). The psychometric properties of five scoring methods applied to the script concordance test. Academic Medicine, 80(4), 395–399.
Bordage, G. (2007). Prototypes and semantic qualifiers: From past to present. Medical Education, 41(12), 1117–1121.
Bordage, G., & Page, G. (2012, August). Guidelines for the development of key feature problems and test cases. Medical Council of Canada. Retrieved July 20, 2014, from http://mcc.ca/wp-content/uploads/CDM-Guidelines.pdf
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Brydges, R., & Butler, D. (2012). A reflective analysis of medical education research on self-regulation in learning and practice. Medical Education, 46(1), 71–79.
Case, S. M., & Swanson, D. B. (1993). Extended-matching items: A practical alternative to free-response questions. Teaching and Learning in Medicine: An International Journal, 5(2), 107–115.
Case, S. M., & Swanson, D. B. (2002). Constructing written test questions for the basic and clinical sciences (3rd Ed.). National Board of Medical Examiners. Retrieved February 6, 2015, from http://www.nbme.org/pdf/itemwriting_2003/2003iwgwhole.pdf
Chang, D., Kenel-Pierre, S., Basa, J., Schwartzman, A., Dresner, L., Alfonso, A. E., & Sugiyama, G. (2014). Study habits centered on completing review questions result in quantitatively higher American Board of Surgery In-Training Exam scores. Journal of Surgical Education, 71(6), e127–e131.
Charlin, B., Boshuizen, H., Custers, E. J., & Feltovich, P. J. (2007). Scripts and clinical reasoning. Medical Education, 41(12), 1178–1184.
Charlin, B., & van der Vleuten, C. (2004). Standardized assessment of reasoning in contexts of uncertainty: The script concordance approach. Evaluation and the Health Professions, 27(3), 304–319.
Chart Stimulated Recall. (n.d.). Practical Doc: By rural doctors, for rural doctors. Retrieved February 2, 2015, from http://www.practicaldoc.ca/teaching/practical-prof/teaching-nuts-bolts/chart-stimulated-recall/
Cleary, T. J., Durning, S. J., Gruppen, L. D., Hemmer, P. A., & Artino, A. R., Jr. (2013). Self-regulated learning. In Oxford textbook of medical education (pp. 465–478).
Cleland, J. A., Abe, K., & Rethans, J. J. (2009). The use of simulated patients in medical education: AMEE Guide No. 42. Medical Teacher, 31(6), 477–486.
Counselman, F. L., Borenstein, M. A., Chisholm, C. D., Epter, M. L., Khandelwal, S., Kraus, C. K., et al. (2014). The 2013 model of the clinical practice of emergency medicine. Academic Emergency Medicine, 21(5), 574–598.
Courteille, O., Bergin, R., Stockeld, D., Ponzer, S., & Fors, U. (2008). The use of a virtual patient case in an OSCE-based exam: A pilot study. Medical Teacher, 30(3), e66–e76.
Croskerry, P. (2003). The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine, 78(8), 775–780.
Cunnington, J. P., Hanna, E., Turnbull, J., Kaigas, T. B., & Norman, G. R. (1997). Defensible assessment of the competency of the practicing physician. Academic Medicine, 72(1), 9–12.
Daley, B. J., & Torre, D. M. (2010). Concept maps in medical education: an analytical literature review. Medical Education, 44(5), 440–448.
Dory, V., Gagnon, R., Vanpee, D., & Charlin, B. (2012). How to construct and implement script concordance tests: Insights from a systematic review. Medical Education, 46(6), 552–563.
Durning, S. J., & Artino, A. R. (2011). Situativity theory: A perspective on how participants and the environment can interact: AMEE Guide no. 52. Medical Teacher, 33(3), 188–199.
Durning, S. J., Artino, A. R., Jr., Schuwirth, L., & van der Vleuten, C. (2013). Clarifying assumptions to enhance our understanding and assessment of clinical reasoning. Academic Medicine, 88(4), 442–448.
Durning, S. J., Cleary, T. J., Sandars, J., Hemmer, P., Kokotailo, P., & Artino, A. R. (2011). Perspective: Viewing “strugglers” through a different lens: How a self-regulated learning perspective can help medical educators with assessment and remediation. Academic Medicine, 86(4), 488–495.
Durning, S. J., Costanzo, M., Artino, A. R., Vleuten, C., Beckman, T. J., Holmboe, E., et al. (2014). Using functional magnetic resonance imaging to improve how we understand, teach, and assess clinical reasoning. Journal of Continuing Education in the Health Professions, 34(1), 76–82.
Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978). Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press.
Ericsson, K. A. (2007). An expert-performance perspective of research on medical expertise: The study of clinical performance. Medical Education, 41(12), 1124–1130.
Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. Cambridge, UK: Cambridge University Press.
Eva, K. W. (2003). On the generality of specificity. Medical Education, 37(7), 587–588.
Eva, K. W. (2005). What every teacher needs to know about clinical reasoning. Medical Education, 39(1), 98–106.
Eva, K. W., Hatala, R. M., LeBlanc, V. R., & Brooks, L. R. (2007). Teaching from the clinical reasoning literature: Combined reasoning strategies help novice diagnosticians overcome misleading information. Medical Education, 41(12), 1152–1158.
Eva, K. W., Neville, A. J., & Norman, G. R. (1998). Exploring the etiology of content specificity: factors influencing analogic transfer and problem solving. Academic Medicine, 73(10), S1–S5.
Farmer, E. A., & Page, G. (2005). A practical guide to assessing clinical decision-making skills using the key features approach. Medical Education, 39(12), 1188–1194.
Fonteyn, M., & Grobe, S. (1993). Expert critical care nurses’ clinical reasoning under uncertainty: Representation, structure and process. In M. Frisee (Ed.), Sixteenth annual symposium on computer applications in medical care (pp. 405–409). New York, NY: McGraw-Hill.
Gingerich, A., Regehr, G., & Eva, K. W. (2011). Rater-based assessments as social judgments: Rethinking the etiology of rater errors. Academic Medicine, 86(10), S1–S7.
Goulet, F., Gagnon, R., & Gingras, M. É. (2007). Influence of remedial professional development programs for poorly performing physicians. Journal of Continuing Education in the Health Professions, 27(1), 42–48.
Govaerts, M. J. B., Schuwirth, L. W. T., Van der Vleuten, C. P. M., & Muijtjens, A. M. M. (2011). Workplace-based assessment: Effects of rater expertise. Advances in Health Sciences Education, 16(2), 151–165.
Graber, M. L., Franklin, N., & Gordon, R. (2005). Diagnostic error in internal medicine. Archives of Internal Medicine, 165(13), 1493–1499.
Grabovsky, I., Hess, B. J., Haist, S. A., Lipner, R. S., Hawley, J. L., Woodward, S., et al. (2014). The relationship between performance on the infectious disease in-training and certification examinations. Clinical Infectious Diseases, ciu906v2.
Green, M. L., Reddy, S. G., & Holmboe, E. (2009). Teaching and evaluating point of care learning with an Internet-based clinical-question portfolio. Journal of Continuing Education in the Health Professions, 29(4), 209–219.
Gruppen, L. D., & Frohna, A. Z. (2002). Clinical reasoning. In G. R. Norman, C. P. M. van der Vleuten, & D. I. Newble (Eds.), International handbook of research in medical education (pp. 205–230). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Gruppen, L. D., Wolf, F. M., & Billi, J. E. (1991). Information gathering and integration as sources of error in diagnostic decision making. Medical Decision Making, 11(4), 233–239.
Haber, R. J., & Avins, A. L. (1994). Do ratings on the American Board of Internal Medicine Resident Evaluation Form detect differences in clinical competence? Journal of General Internal Medicine, 9(3), 140–145.
Hall, W., Violato, C., Lewkonia, R., Lockyer, J., Fidler, H., Toews, J., & Moores, D. (1999). Assessment of physician performance in Alberta: The Physician Achievement Review. Canadian Medical Association Journal, 161(1), 52–57.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of educational research, 77(1), 81–112.
Hawkins, R. E., & Boulet J. R. (2008). Direct observation: Standardized patients. In E. S. Holmboe & R. E. Hawkins (Eds.), Practical guide to the evaluation of clinical competence (pp.102–118). Philadelphia, Pa: Elsevier.
Hawkins, R. E., Lipner, R. S., Ham, H. P., Wagner, R., & Holmboe, E. S. (2013). American board of medical specialties maintenance of certification: Theory and evidence regarding the current framework. Journal of Continuing Education in the Health Professions, 33(S1), S7–S19.
Hawkins, R. E., Sumption, K. F., Gaglione, M. M., & Holmboe, E. S. (1999). The in-training examination in internal medicine: Resident perceptions and lack of correlation between resident scores and faculty predictions of resident performance. The American Journal of Medicine, 106(2), 206–210.
Hodges, B. D. (2013). Assessment in the post-psychometric era: Learning to love the subjective and collective. Medical Teacher, 35(7), 564–568.
Holmboe, E. S. (2004). The importance of faculty observation of trainees’ clinical skills. Academic Medicine, 79, 16–22.
Holmboe, E. S., & Durning, S. J. (2014). Assessing clinical reasoning: Moving from in vitro to in vivo. Diagnosis, 1(1), 111–117.
Holmboe, E. S., & Hawkins, R. E. (1998). Methods for evaluating the clinical competence of residents in internal medicine: A review. Annals of Internal Medicine, 129(1), 42–48.
Holmboe, E. S., Lipner, R., & Greiner, A. (2008). Assessing quality of care: Knowledge matters. JAMA, 299(3), 338–340.
Jefferies, A., Simmons, B., Ng, E., & Skidmore, M. (2011). Assessment of multiple physician competencies in postgraduate training: Utility of the structured oral examination. Advances in Health Sciences Education Theory and Practice, 16(5), 569–577.
Johnson, E. J., Camerer, C., Sen, S., & Rymon, T. (1991). Behavior and cognition in sequential bargaining. Wharton School, University of Pennsylvania, Marketing Department.
Jones, M. A., Jensen, G., & Edwards, I. (2008). Clinical reasoning in physiotherapy. In. J. Higgs, M. A. Jones, S. Loftus, & N. Christensen (Eds.), Clinical reasoning in the health professions (3rd Ed., pp. 245–256). New York: Elsevier Limited.
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus, & Giroux.
Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772–775.
Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Kogan, J. R., Hess, B. J., Conforti, L. N., & Holmboe, E. S. (2010). What drives faculty ratings of residents’ clinical skills? The impact of faculty’s own clinical skills. Academic Medicine, 85(10), S25–S28.
Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA, 302(12), 1316–1326.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218.
Larsen, D. P., Butler, A. C., & Roediger, H. L., III. (2008). Test-enhanced learning in medical education. Medical Education, 42(10), 959–966.
Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496–503.
Lasater, K. (2011). Clinical judgment: The last frontier for evaluation. Nurse Education in Practice, 11(2), 86–92.
Lave, J. (1988). Cognition in practice. Cambridge, UK: Cambridge University Press.
Lineberry, M., Kreiter, C. D., & Bordage, G. (2013). Threats to validity in the use and interpretation of script concordance test scores. Medical Education, 47(12), 1175–1183.
Lineberry, M., Kreiter, C. D., & Bordage, G. (2014). Script concordance tests: Strong inferences about examinees require stronger evidence. Medical Education, 48(4), 452–453.
Liu, K. P., Chan, C. C., & Hui-Chan, C. W. (2000). Clinical reasoning and the occupational therapy curriculum. Occupational Therapy International, 7(3), 173–183.
Lubarsky, S., Dory, V., Duggan, P., Gagnon, R., & Charlin, B. (2013). Script concordance testing: From theory to practice: AMEE Guide No. 75. Medical Teacher, 35(3), 184–193.
Maatsch, J. L., Huang, R., Downing, S. M., & Barker, D. (1983). Predictive validity of medical specialty examinations. Final report to NCHSR Grant No.: HS02039-04.
Mamede, S., Schmidt, H. G., Rikers, R. M., Custers, E. J., Splinter, T. A., & van Saase, J. L. (2010). Conscious thought beats deliberation without attention in diagnostic decision-making: At least when you are an expert. Psychological Research, 74(6), 586–592.
McCarthy, W. H., & Gonnella, J. S. (1967). The simulated patient management problem: A technique for evaluating and teaching clinical competence. Medical Education, 1(5), 348–352.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67.
Munger, B. S. (1995). Oral examinations. In Recertification: New evaluation methods and strategies (pp. 39–42). Evanston, IL: American Board of Medical Specialties.
Munger, B. S., Krome, R. L., Maatsch, J. C., & Podgorny, G. (1982). The certification examination in emergency medicine: An update. Annals of Emergency Medicine, 11(2), 91–96.
National Board of Medical Examiners. (n.d.). International Foundations of Medicine. Retrieved July 25, 2014, from http://www.nbme.org/ifom/
National Board of Medical Examiners. (n.d.). Step 2 clinical skills. Retrieved July 22, 2014, from http://www.usmle.org/pdfs/step-2-cs/cs-info-manual.pdf
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Norcini, J., Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., & Roberts, T. (2011). Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher, 33(3), 206–214.
Norcini, J. J., Blank, L. L., Duffy, F. D., & Fortna, G. S. (2003). The mini-CEX: A method for assessing clinical skills. Annals of Internal Medicine, 138(6), 476–481.
Norcini, J. J., Lipner, R. S., & Grosso, L. J. (2013). Assessment in the context of licensure and certification. Teaching and Learning in Medicine, 25(Suppl1), S62–S67.
Norcini, J. J., Swanson, D. B., Grosso, L. J., Shea, J. A., & Webster, G. D. (1984). A comparison of knowledge, synthesis, and clinical judgment multiple-choice questions in the assessment of physician competence. Evaluation and the Health Professions, 7(4), 485–499.
Norcini, J. J., Swanson, D. B., & Webster, G. D. (1982). Reliability, validity and efficiency of various item formats in assessment of physician competence. In Proceedings of the Annual Conference on Research in Medical Education. Conference on Research in Medical Education (Vol. 22, pp. 53–58).
Norman, G. R., Swanson, D. B., & Case, S. M. (1996). Conceptual and methodological issues in studies comparing assessment formats. Teaching and Learning in Medicine: An International Journal, 8(4), 208–216.
Page, G., & Bordage, G. (1995). The Medical Council of Canada’s key features project: A more valid written examination of clinical decision-making skills. Academic Medicine, 70(2), 104–110.
Pangaro, L., & Holmboe, E.S. (2008). Evaluation forms and rating scales. In E. S. Holmboe & R. E. Hawkins (Eds.), Practical guide to the evaluation of clinical competence (pp. 102–118). Philadelphia, PA: Mosby-Elsevier.
Pauker, S. G., Gorry, G. A., Kassirer, J. P., & Schwartz, W. B. (1976). Towards the simulation of clinical cognition: Taking a present illness by computer. The American Journal of Medicine, 60(7), 981–996.
Physician Achievement Review (PAR). (n.d.). Retrieved July 20, 2014, from http://parprogram.org/par/
Rimoldi, H. J. (1963). Rationale and applications of the test of diagnostic skills. Academic Medicine, 38(5), 364–368.
Roberts, L. (1999). Using concept maps to measure statistical understanding. International Journal of Mathematical Education in Science and Technology, 30(5), 707–717.
Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27–48). Potomac, MD: Erlbaum Press.
Ruiz-Primo, M. A. (2004). Examining concept maps as an assessment tool. In A. J. Canas, J. D. Novak, & F. M. Gonzalez (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the First International Conference on Concept Mapping (pp. 555–562). Pamplona, Spain.
Salomon, G. (Ed.). (1993). Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press.
Satter, R. M., Cohen, T., Ortiz, P., Kahol, K., Mackenzie, J., Olson, C., & Patel, V. L. (2012). Avatar-based simulation in the evaluation of diagnosis and management of mental health disorders in primary care. Journal of Biomedical Informatics, 45(6), 1137–1150.
Scalese, R. S., Issenberg, S. B. (2008). Simulation-based assessment. In E. S. Holmboe & R. E. Hawkins (Eds.), Practical guide to the evaluation of clinical competence. Philadelphia: Mosby-Elsevier.
Schau, C., & Mattern, N. (1997). Use of mapping techniques in teaching applied statistics courses. The American Statistician, 51, 171–175.
Schipper, S., & Ross, S. (2010). Structured teaching and assessment. Canadian Family Physician, 56(9), 958–959.
Schmidt, H. G., & Rikers, R. M. (2007). How expertise develops in medicine: Knowledge encapsulation and illness script formation. Medical Education, 41(12), 1133–1139.
Schuwirth, L. (2009). Is assessment of clinical reasoning still the Holy Grail? Medical Education, 43(4), 298–300.
Schuwirth, L. W. T., & van der Vleuten, C. P. (2006). A plea for new psychometric models in educational assessment. Medical Education, 40, 296–300.
Streiner, D. L. (1985). Global rating scales. In: V. R. Neufeld & G. R. Norman (Eds.), Assessing clinical competence (pp. 119–141). New York, NY: Springer.
Sweller, J., van Merriënboer, J. J., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296.
Torre, D. M., Daley, B., Stark-Schweitzer, T., Siddartha, S., Petkova, J., & Ziebert, M. (2007). A qualitative evaluation of medical student learning with concept maps. Medical Teacher, 29(9–10), 949–955.
Trudel, J. L., Bordage, G., & Downing, S. M. (2008). Reliability and validity of key feature cases for the self-assessment of colon and rectal surgeons. Annals of Surgery, 248(2), 252–258.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
van der Vleuten, C. P. (1996). The assessment of professional competence: Developments, research and practical implications. Advances in Health Sciences Education, 1(1), 41–67.
van der Vleuten, C. P., & Schuwirth, L. W. (2005). Assessing professional competence: From methods to programmes. Medical Education, 39(3), 309–317.
van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2), 147–177.
Walsh, C. M., Sherlock, M. E., Ling, S. C., & Carnahan, H. (2012). Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy. Cochrane Database of Systematic Reviews, 6, 1–91. doi:10.1002/14651858.CD008237.pub2
Zimmerman, B. J. (2011). Motivational sources and outcomes of self-regulated learning and performance. In: B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 49–64). New York, NY: Routledge.
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this chapter
Rencic, J., Durning, S.J., Holmboe, E., Gruppen, L.D. (2016). Understanding the Assessment of Clinical Reasoning. In: Wimmers, P., Mentkowski, M. (eds) Assessing Competence in Professional Performance across Disciplines and Professions. Innovation and Change in Professional Education, vol 13. Springer, Cham. https://doi.org/10.1007/978-3-319-30064-1_11
DOI: https://doi.org/10.1007/978-3-319-30064-1_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-30062-7
Online ISBN: 978-3-319-30064-1