Understanding Clinical Reasoning from Multiple Perspectives: A Conceptual and Theoretical Overview

Open Access
Part of the Innovation and Change in Professional Education book series (ICPE, volume 15)


Rather than a historical overview as in Chap. 2, this chapter provides the reader with insight into the various approaches that have been used to understand clinical reasoning. We review concepts and major scholars who have been involved in such investigations. Cognitive psychologists Newel and Simon theorized about problem-solving skills and artificial intelligence and initiated the use of computers as metaphors of thinking. Elstein and colleagues found that there is no such thing as a general problem-solving skill, independent of medical knowledge, and thus clinical reasoning is case specific. Reasoning then became analyzed in approaches, including forward reasoning from data to diagnosis; hypothetico-deductive reasoning with backward nature, from hypothesis to diagnosis; and abductive reasoning to understand early hypothesis generation that is so characteristic in clinical reasoning, elaborated by Patel and colleagues. Bordage introduced prototypes to characterize how physicians may remember illness presentations and semantic qualifiers to denote the shortened conceptual language and labels physicians use to store medical information systematically in memory. Illness scripts represent how encounters with diseases are remembered by physicians and were introduced by Feltovich and Barrows. Schmidt and Boshuizen elaborated the concept further and propose encapsulation of knowledge as a hypothetical process that happens when physicians regularly and routinely apply shortcuts in thinking typically ellaborated as pathophysiology. Reasoning ability appears not only to be case specific-- it is also situation or context specific. Clinicians with broad reasoning ability have extensive experience. Deliberate practice with many cases and in varying contexts is recommended by Ericsson to acquire reasoning expertise. To improve reasoning, some authors have focused on cognitive biases and error prevention. 
Norman, however, concludes that bias reduction strategies are unlikely to be successful but correcting knowledge deficiencies is likely to lead to reasoning success. Kahnemann promoted System 1 and System 2 thinking for instant pattern recognition (nonanalytic reasoning) and analytic reasoning, respectively. What actually happens in the brain during clinical reasoning is the domain of neuroscience, which may provide insights from research in the near future.

Concepts and Definitions

This chapter is devoted to clarifying terminology and concepts that have been regularly cited and used in the last decades around clinical reasoning. Thus, this chapter represents a conceptual overview.

Success in clinical reasoning is essential to a physician’s performance. Clinical reasoning is both a process and an outcome (with the latter often being referred to as decision-making). While these decisions must be evidence based as much as possible, clearly decisions also involve patient perspectives, the relationship between the physician and the patient, and the system or environment where care is rendered. Definitions of clinical reasoning therefore must include these aspects. While definitions of clinical reasoning vary, they typically share the features that clinical reasoning entails: (i) the cognitive operations allowing physicians to observe, collect, and analyze information and (ii) the resulting decisions for actions that take into account a patient’s specific circumstances and preferences (Eva et al. 2007; Durning and Artino 2011).

The variety of definitions of clinical reasoning and the heterogeneity in research is likely in part due to the number of fields that have informed our understanding of clinical reasoning. In this chapter, a number of concepts from a broad spectrum of fields is presented to help the reader understand clinical reasoning and to assist the instruction of preclinical medical students. Many of these concepts reflect difficulties inherent to understanding how doctors think and how this type of thinking can be acquired by learners over time. Some provide hypotheses with more or less firm theoretical grounding, but a broad understanding of clinical reasoning requires an ongoing process of investigation.

Learning to Solve Problems in New Areas: Expanding the Learner Domain Space

Klahr and Dunbar proposed a model for scientific discovery (Klahr and Dunbar 1988) that may be helpful to understand how learners solve problems in unknown territory, such as what happens when a medical student starts learning to solve medical problems. The student has a learner domain space of knowledge that only partly overlaps, or not at all, with the expert domain space of knowledge, which is the space that contains all possible hypotheses a learner can generate about a problem. Knowledge building during inquiry learning can be considered as expanding the learner domain space to increase that overlap (Lazonder et al. 2008).

Early Thinking of Clinical Reasoning: The Computer Analogy

Building on the cognitive psychology work of Newell and Simon about problem-solving in the 1970s (Newell and Simon 1972), artificial intelligence (AI) computer models were created to resemble the clinical reasoning process, with programs like MYCIN and INTERNIST (Pauker et al. 1976). Analogies between cognitive functioning and the emerging computer capacities led to the assumption that both use algorithmic processes in the working memory, viewed as the central processing unit of the brain. Many predicted that like in chess, computer programs for medical diagnosis would quickly be developed and would perform superiorly to the practicing professional, outperforming the diagnostic accuracy of the best physician’s thinking. Four decades later, however, this has not yet happened and may be impossible. The emergence of self-driving cars as an analogy shows how humans can build highly complex machines, but at least this development in clinical reasoning has been much slower than many had thought it would (Wachter 2015; Clancey 1983). Robert Wachter, in a recent book about technology in health care, argues that, still better than computers, experienced physicians can distinguish between patients with similar signs and symptoms to determine that “that guy is sick, and the other is okay,” with the “the eyeball test” or intuition, which computers have not been able to capture so far (page 95), just as a computer cannot currently analyze nonverbal information that is so critical to communication in health care. Clinical decision support systems (CDSS, containing a large knowledge base and if-then rules for inferences) have been used with some success at the point of care to support clinicians in decision-making, particularly in medication decisions, but, integrated with electronic health records, they have not been shown to improve clinical outcome parameters as of yet (Moja et al. 2014).

Abandoning Clinical Reasoning as a General Problem-Solving Ability

Expertise in clinical reasoning was initially viewed as being synonymous with acquiring general problem-solving procedures (Newell and Simon 1972). However, in a groundbreaking study, published as a book in 1978 ( Medical Problem Solving ), Elstein and colleagues found few differences between expert (attending physicians) and novice diagnosticians (medical students) in the way they solve diagnostic problems (Elstein et al. 1978). The primary difference appeared to be in their knowledge and in particular the way it is structured as a consequence of experience. Thus while medical students and practicing physicians generated a similar number of diagnostic hypotheses differential diagnosis of similar length, practicing physicians were far more likely to list the correct diagnosis. This insight replaced the era that was marked by the belief that clinical reasoning could be measured as a distinct skill that would result in superior performance regardless of the specifics of a patient’s presentation. Content knowledge was shown to be very important but still does not guarantee success in clinical reasoning. Variation in clinical performance is a product of the expert’s integration of his or her knowledge of the signs and symptoms of disease with contextual factors in order to arrive at an adaptive solution.

Deconstructing the Reasoning Process

In an overview in 2005, Patel and colleagues summarize the process of clinical reasoning in four stages: abstraction, abduction, deduction, and induction (Patel et al. 2005).
  • Abstraction can be viewed as generalization from a finding to a conclusion (hemoglobin <12 gm/dl in an adult male is labeled as “anemia”).

  • Abduction is a backward reasoning process to explain why this adult male should have anemia. “Abductive reasoning” was first coined as a term by logician C.S. Peirce in the nineteenth century to signify a common process when a surprising observation takes place that leads to a hypothesis (“The lawn is wet! Ergo, it has probably rained.”) and is based on knowledge of possible causations and must be tested (“but it could also be the neighbor’s sprinkler”). Abduction is considered to be a primary means of acquiring new ideas in clinical reasoning (Bolton 2015).

  • Deduction is the process of testing the hypothesis (e.g., of anemia) through confirmation by expected other diagnostic findings: if conditions X and Y are met, inference Z must be true.

  • Induction is the process of generalization from multiple cases and more applicable in research than in individual patient care: if multiple patients show similar signs and symptoms, general rules may be created to explain new cases.

Part of this process is forward-driven reasoning (hypothesis generation through data), and another part is backward-driven reasoning (hypothesis testing) (Patel et al. 2005).

Knowledge Representations to Support Reasoning

In a 1996 review, Custers and colleagues categorized the thinking about the way physician’s cognition is organized around clinical knowledge in three alternative frameworks and provided critical notes (Custers et al. 1996). These mental representations could have the form of prototypes , instances , or semantic networks . All three of these models have assets and drawbacks in their explanatory power for clinical reasoning. The prototype framework or prototype theory assumes that multiple encounters with related diseases lead physicians to remember the common denominators, resulting in single prototypes in long-term memory. The instances framework assumes that physicians actually remember the individual instances of patient encounters without abstraction, and context-specific (situation specific) information may be part of these instances. The semantic network theory posits the existence of nodes of information units, connected with other nodes in the network. The strength of the network and its nodes depends on the intensity of its use. Schemas and illness scripts are medically meaningful interconnected nodes that can be strengthened and adapted based on clinical experience.

Prototyping and Semantic Qualifiers

Georges Bordage introduced the term semantic qualifiers referring to the use of abstract, often binary, terms to help sort through and organize (e.g., chunk) patient information. They are “useful adjectives” that represent an abstraction of the situational clinical findings (Chang et al. 1998). A commonly cited example of the use of semantic qualifiers is translating a patient who is presenting with knee swelling and pain into a presentation of acute monarticular arthritis. Note three semantic qualifiers – “acute,” “monoarticular,” and “arthritis.” The reason why these qualifiers are important is that the structure of clinical knowledge in the clinician’s mind is organized with such qualifiers, as claimed by Bordage. To enable recognition and linkage, the clinician must first translate what she hears and sees into such terminology (Bordage 1994). An assumption is that the clinician’s memory contains prototypes of diseases (Bordage and Zacks 1984), generalizable representations that enable recognition. Bordage stresses how semantically rich discourses about patients are associated with greater diagnostic accuracy (Bordage 2007).

Illness Script Theory

Custers recently summarized scripts as high-level conceptual knowledge structures in long-term memory, representing general event sequences, in which the individual events are interconnected by temporal and often causal or hierarchical relationships (“usually diabetes type II occurs a older age, a overweight is associated; late symptoms might include vascular problems in the retina, in the lower limbs and in other places”). Scripts are activated as integral wholes in appropriate contexts that should contain relevant variables, including clinical findings in the patient. “Slots” in the reasoning process can be filled with information present in the actual situation, retrieved from memory, or inferred from the context (Custers 2015). Illness scripts, first introduced by Barrows and Feltovich, are believed to be chunks in long-term memory that contain three components, enabling conditions (past history and causes) , fault (pathophysiology), and consequences (signs and symptoms) (Feltovics and Barrows 1984), and are elaborated further by Schmidt and Boshuizen (1993). Illness scripts are stored in long-term memory as units with temporal (i.e., sequential) components, as a film script of unfolding events, and patients are remembered as instances of a script. With experience, physicians build a larger repertoire of illness scripts and more elaborated scripts.

Illness scripts are shaped by experience and continually refined throughout one’s clinical practice. When an experienced physician initially sees a patient, his or her verbal and nonverbal information is thought to immediately activate relevant illness scripts. This effortless, fast thinking, or nonanalytic process is referred to as script activation . In some cases, only one script is activated, and in these cases, one may arrive at the correct diagnosis (e.g., “type II diabetes mellitus”). In other cases, multiple scripts are activated, and then theory holds that we choose the most likely diagnosis by comparing and contrasting alternative illness scripts that were activated (through analytic or slow thinking). Early learners may not activate any scripts when they initially see a patient, and experts may activate one or several illness scripts.

Encapsulation of Knowledge and the Intermediate Effect

With increasing clinical information stored as illness scripts in the long-term memory of the physician, diagnostic reasoning should steadily become more accurate. However, studies have shown that more novice clinicians (e.g., those just out of training such as recent graduates from residency education) sometimes outperform physicians who have been in practice for some time (e.g., “experts”) on the recall of details from clinical cases seen. This finding was coined by Schmidt and Boshuizen as the intermediate effect (Schmidt and Boshuizen 1993). While inexperienced clinicians may consciously use pathophysiological thinking when solving clinical problems, the frequent use of similar thinking pathways leads to efficient shortcuts, and after a while it may no longer be possible to unfold these pathways. The pathophysiological knowledge about the disease becomes encapsulated into diagnostic labels or high-level simplified causal models that explain signs and symptoms (Schmidt and Mamede 2015).

System 1 and 2 Thinking as Dual Processes

Dual process theory refers to two processes that are thought to apply during reasoning (Croskerry et al. 2014). Briefly, dual process theory argues that we have two general thought processes. Fast thinking (sometimes called System I thinking or “nonanalytic” reasoning) is believed to be quick, subconscious, and typically effortless. An example of a fast thinking strategy is pattern recognition (Eva 2005). An example of pattern recognition in medicine would happen when a physician examines a patient with palpitations and immediately recognizes the cardinal features or “pattern” of Graves’ disease, when also observing exophthalmia, fine resting tremor, and thyromegaly. Slow or analytic thinking (System 2 thinking) on the other hand is effortful and conscious. An example of System 2 thinking would be working through a patient’s acid base status (e.g., calculating an anion gap, using Winter’s formula, and calculating a delta-delta gap). Dual process theory has recently been popularized in the book Thinking, Fast and Slow by Daniel Kahneman (2011). More recent work with dual process theory argues that both of these processes are used simultaneously, e.g., it’s not one or the other but rather one uses a combination of both fast and slow thinking in practice. In other words, fast and slow thinking can be viewed as a continuum (Custers 2013). Efficient clinical work requires fast thinking. The capacity of the working memory would be overloaded if analytic reasoning were required for all decisions in patient care (Young et al. 2014).

Case Specificity and Context Specificity

In Elstein and colleagues ’ seminal work on medical problem-solving (Elstein et al. 1978), researchers noted that physician performance on one patient or case did not predict performance on a subsequent content area or case, giving rise to the phenomenon of case specificity. These findings would be quite surprising if medical problem-solving were a general skill.

A second vexing problem in practice is the more recently highlighted phenomenon of context specificity. Context specificity refers to the finding that a physician can see two patients with the same chief complaint and the same (or nearly identical) symptoms and physical findings and have the same diagnosis, yet, in different contexts, arrive at different diagnoses (Durning et al. 2011). The context can be helpful to arrive at the correct diagnosis (Hobus et al. 1987) or harmful and lead to error (Eva 2005). In other words, something other than the “essential content” is driving the physician’s clinical reasoning. Durning and Artino hold that the outcome of clinical reasoning is driven by the context, which includes the physician, the patient, the system, and their interactions (Durning and Artino 2011). The notion of system includes appointment length, appointment location, support systems, and clinic staffing (Durning and Artino 2011) and stresses the importance of the situation. One example of “situativity” is situated cognition , which breaks down an activity like clinical reasoning into physician, patient, and environment as well as interactions between these components. Clinical reasoning is believed to emerge from these factors and their interactions. Another example of situativity, situated learning , stresses participation in an activity and identity formation as learning versus the acquisition of generalized facts.

Clinical Reasoning and the Development of Expert Performance

Despite the finding that clinical reasoning is content-dependent and context-dependent, expertise in diagnostic and therapeutic reasoning in general varies among physicians even with similar experience. Some internists are considered better diagnosticians and some surgeons better operators that others. It remains useful to think of what leads to superb performance, as education can be a part of it (Asch et al. 2014). Indeed, many scholars prefer the term expert performance as opposed to expertise when referring to clinical reasoning as the former acknowledges the many nuances to this ability that we have outlined in this chapter.

For procedural performance, repetitive practice is key. Competence in colonoscopy requires experience with 150–200 colonoscopies under supervision (Ekkelenkamp et al. 2016). That competence improves with practice is not surprising and known from, for instance, in chess (De Groot 1978). Anecdotally, in the 1960s the Hungarian educational psychologist László Polgár was determined to raise his yet unborn children to become highly skilled in a specific domain and chose chess. All three daughters received careful, highly intensive training, from very young age on, and have become world-top chess players, two of which are currently considered the world’s best female chess players. Psychologist Ericsson has generalized the idea that, rather than innate talent, deliberate practice is key to expert performance (Ericsson et al. 1993). He distinguishes three subsequent mental representations: a planning phase with clear performance goals, a translation to execution, and a representation for monitoring how well one does. Applications in medical training have been described (Ericsson 2015) but have mainly focused on procedures. Clinical reasoning may benefit from deliberate practice, and the work of Mamede et al., using deliberate practice, shows how reasoning can benefit as well (Mamede et al. 2014).

Reflection During Diagnostic Thinking

Donald Schön coined the terminology of reflection in action and reflection on action , as a description of thinking of high-level professionals (Schön 1983). Knowing what to do when you do it may not require much effort if actions are routine, but professionals with nonroutine tasks may often face small problems or questions that require instant adaptive action. Schön maintains that reflection-in-action must be practiced by learners becoming professionals. Mamede and colleagues developed the method of “structured reflection” to improve students’ diagnostic reasoning (Mamede et al. 2010, 2014a, b). Structured reflection in the context of clinical reasoning means that problem-solvers explicitly match a patient’s presentation (case) against every diagnosis they consider for that case. Mamede et al. demonstrated a beneficial effect of this approach. Detailed comparison of a patient’s signs and symptoms with the already available and activated illness scripts and noticing similarities and discrepancies appears to be the mechanism behind this restructuring of knowledge as a consequence of structured reflection. The authors recommend deliberate reflection as a tool for learning clinical reasoning (Schmidt and Mamede 2015).

Bias and Error in Clinical Reasoning

The quality of clinical reasoning is often expressed in how few errors a physician makes. Some errors are typical enough to receive a label and stem from various sources of bias. In 2003 Kempainen et al. published a helpful overview of typical biases that happen in clinical reasoning and that should be attended to in education, which include the following (Kempainen et al. 2003):
  • Availability bias . A differential diagnosis is influenced by what is easily recalled, creating a false sense of prevalence.

  • Representative bias (or judging by similarity ). Clinical suspicion is influenced solely by signs and symptoms and neglects prevalence of competing diagnoses.

  • Confirmation bias (or pseudodiagnosticity ). Additional testing confirms suspected diagnosis but fails to test competing hypotheses.

  • Anchoring bias. Inadequate adjustment of a differential diagnosis in light of new data resulting in a final diagnosis unduly influenced by the starting point.

  • Bounded rationality bias (or search satisficing ). Clinicians stop searching for additional diagnoses after the anticipated diagnosis is made leading to a premature closure of the reasoning process.

  • Outcome bias . A clinical decision is judged on the outcome rather than on the logic and evidence supporting the decision.

A limitation of this approach is that when the reasoning is believed to be successful, biases are not typically recognized, and when looking at a case in hindsight, many mistakes can easily be labeled as caused by “bias.” Indeed, so-called biases actually may serve as heuristics to guide successful behavior (Gigerenzer and Gaissmaier 2011; Gigerenzer 2007). In a recent overview, Norman and colleagues conclude that interventions directed at error reduction through the identification of heuristics and biases have no effect on diagnostic errors. Instead, most errors seem to originate from a limited knowledge based of the clinician (Norman et al. 2017).

Neuroscience and Visual Expertise in Clinical Reasoning

While neuroscience is quickly uncovering many cognitive processes, clinical reasoning has hardly been subject of such studies. More recently however a new line of research has evolved which seeks to explore the biologic underpinnings of clinical reasoning. Indeed, an Achilles heel of clinical reasoning is that it is less subject to introspection or visualization, and thus these new methods such as functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG) are emerging and show particular promise for enhancing our understanding of System 1 thinking. One of the first publications in this domain is from Durning et al. who studied brain process with functional MRI techniques in novices and experts solving clinical problems through vignette-based multiple choice questions. Many parts of the brain were activated. The researchers observed activity in various regions of the prefrontal cortex (Durning et al. 2015). While preliminary, fMRI may be a promising route of future investigation.

A new and related avenue of investigation is that of visual expertise (Bezemer 2017; van der Gijp et al. 2016). Medicine is a highly visual profession, not only for specific disciplines such as radiology, pathology, dermatology, surgery, and cardiology but also in primary care (Kok and Jarodzka 2017). Visually observing a patient, human tissue, or a representation of it, and recognizing abnormality, may not easily be expressed in words but can instantly lead to a System 1 recognition.

In Sum

The intention of this chapter was to provide an overview of theoretical concepts, frequently used terms, and a number of significant thinkers and authors in this domain, all of which underlie our current understanding of clinical reasoning to support the teaching of students about clinical reasoning in the preclinical period and beyond.

While much of the cited literature appeared after the model of case-based clinical reasoning was first created in 1992 (ten Cate 1994), and some aspects apply to clinical rather than preclinical education, none of the recommendations that could be drawn for this chapter would conflict the CBCR approach.

Although it is apparent that there are still numerous gaps in our collective understanding of clinical reasoning, it is also clear that progress into a more thorough understanding of clinical reasoning is advancing.


  1. Asch, D. A., et al. (2014). How do you deliver a good obstetrician? Outcome-based evaluation of medical education. Academic Medicine, 89(1), 24–26.CrossRefGoogle Scholar
  2. Bezemer, J. (2017). Visual research in clinical education. Medical Education, 51(1), 105–113.CrossRefGoogle Scholar
  3. Bolton, J. W. (2015). Varieties of clinical reasoning. Journal of Evaluation in Clinical Practice, 21, n/a–n/a. Available at:
  4. Bordage, G. (1994). Elaborated knowledge: A key to successful diagnostic thinking. Academic Medicine, 69(11), 883–885.CrossRefGoogle Scholar
  5. Bordage, G. (2007). Prototypes and semantic qualifiers: From past to present. Medical Education, 41(12), 1117–1121.CrossRefGoogle Scholar
  6. Bordage, G., & Zacks, R. (1984). The structure of medical knowledge in the memories of medical students and general practitioners: Categories and prototypes. Medical Education, 18(11), 406–416.CrossRefGoogle Scholar
  7. Chang, R., Bordage, G., & Connell, K. (1998). The importance of early problem representation during case presentations. Academic Emergency Medicine: Official Journal of the Society for Academic Emergency Medicine, 73(10), S109–S111.Google Scholar
  8. Clancey, W. J. (1983). The epistemology of a rule-based expert system – A framework for explanation. Artificial Intelligence, 20(3), 215–251.CrossRefGoogle Scholar
  9. Croskerry, P., et al. (2014). Deciding about fast and slow decisions. Academic Medicine, 89(2), 197–200.CrossRefGoogle Scholar
  10. Custers, E. J. F. M. (2013). Medical education and cognitive continuum theory: An alternative perspective on medical problem solving and clinical reasoning. Academic Medicine, 88(8), 1074–1080.CrossRefGoogle Scholar
  11. Custers, E. J. F. M. (2015). Thirty years of illness scripts: Theoretical origins and practical applications. Medical Teacher, 37(5), 457–462.CrossRefGoogle Scholar
  12. Custers, E., Regehr, G., & Norman, G. (1996). Mental representations of medical diagnostic knowledge: A review. Academic Medicine, 71(10), S55–S61.CrossRefGoogle Scholar
  13. De Groot, A. (1978). Thought and choice in chess. The Hague: Mouton.Google Scholar
  14. Durning, S. J., & Artino, A. R. (2011). Situativity theory: A perspective on how participants and the environment can interact: AMEE guide no. 52. Medical Teacher, 33(3), 188–199.CrossRefGoogle Scholar
  15. Durning, S., et al. (2011). Context and clinical reasoning: Understanding the perspective of the expert’s voice. Medical Education, 45(9), 927–938.CrossRefGoogle Scholar
  16. Durning, S. J., et al. (2015). Neural basis of nonanalytical reasoning expertise during clinical evaluation. Brain and Behaviour, 309, 1–10.Google Scholar
  17. Ekkelenkamp, V. E., et al. (2016). Training and competence assessment in GI endoscopy: A systematic review. Gut, 65(4), 607–615. Available at:
  18. Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978). Medical problem solving. In An analysis of clinical reasoning. Cambridge, MA: Harvard University Press.Google Scholar
  19. Ericsson, K. A. (2015). Acquisition and maintenance of medical expertise. Academic Medicine, 90(11), 1–16.CrossRefGoogle Scholar
  20. Ericsson, K. A., et al. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.
  21. Eva, K. W. (2005). What every teacher needs to know about clinical reasoning. Medical Education, 39(1), 98–106.
  22. Eva, K. W., et al. (2007). Teaching from the clinical reasoning literature: Combined reasoning strategies help novice diagnosticians overcome misleading information. Medical Education, 41(12), 1152–1158.
  23. Feltovich, P., & Barrows, H. (1984). Issues of generality in medical problem solving. In H. Schmidt & M. De Volder (Eds.), Tutorials in problem-based learning (pp. 128–142). Assen/Maastricht: Van Gorcum.
  24. Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York: Penguin Group.
  25. Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
  26. Hobus, P. P. M., et al. (1987). Contextual factors in the activation of first diagnostic hypotheses: Expert-novice differences. Medical Education, 21(6), 471–476.
  27. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  28. Kempainen, R. R., Migeon, M. B., & Wolf, F. M. (2003). Understanding our mistakes: A primer on errors in clinical reasoning. Medical Teacher, 25(2), 177–181.
  29. Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1–48.
  30. Kok, E. M., & Jarodzka, H. (2017). Before your very eyes: The value and limitations of eye tracking in medical education. Medical Education, 51(1), 114–122.
  31. Lazonder, A. W., Wilhelm, P., & Hagemans, M. G. (2008). The influence of domain knowledge on strategy use during simulation-based inquiry learning. Learning and Instruction, 18(6), 580–592.
  32. Mamede, S., et al. (2010). Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA: The Journal of the American Medical Association, 304(11), 1198–1203.
  33. Mamede, S., van Gog, T., Sampaio, A. M., et al. (2014a). How can students’ diagnostic competence benefit most from practice with clinical cases? The effects of structured reflection on future diagnosis of the same and novel diseases. Academic Medicine: Journal of the Association of American Medical Colleges, 89(1), 121–127.
  34. Mamede, S., van Gog, T., van den Berge, K., et al. (2014b). Why do doctors make mistakes? A study of the role of salient distracting clinical features. Academic Medicine: Journal of the Association of American Medical Colleges, 89(1), 114–120.
  35. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
  36. Moja, L., et al. (2014). Effectiveness of computerized decision support systems linked to electronic health records: A systematic review and meta-analysis. American Journal of Public Health, 104(12), e12–e22.
  37. Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall.
  38. Norman, G. R., et al. (2017). The causes of errors in clinical reasoning: Cognitive biases, knowledge deficits, and dual process thinking. Academic Medicine, 92(1), 23–30.
  39. Patel, V., Arocha, J., & Zhang, J. (2005). Thinking and reasoning in medicine. In K. Holyoak & R. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 727–750). Cambridge: Cambridge University Press.
  40. Pauker, S., et al. (1976). Towards the simulation of clinical cognition: Taking the present illness by computer. American Journal of Medicine, 60, 981–996.
  41. Schmidt, H. G., & Boshuizen, H. P. A. (1993). On acquiring expertise in medicine. Educational Psychology Review, 5(3), 205–221.
  42. Schmidt, H. G., & Mamede, S. (2015). How to improve the teaching of clinical reasoning: A narrative review and a proposal. Medical Education, 49(10), 961–973.
  43. Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
  44. ten Cate, O. (1994). Training case-based clinical reasoning in small groups [Dutch]. Nederlands Tijdschrift voor Geneeskunde, 138, 1238–1243.
  45. van der Gijp, A., et al. (2016). How visual search relates to visual diagnostic performance: A narrative systematic review of eye-tracking research in radiology. Advances in Health Sciences Education, 1–23.
  46. Wachter, R. (2015). The digital doctor: Hope, hype, and harm at the dawn of medicine’s computer age. New York: McGraw-Hill.
  47. Young, J. Q., et al. (2014). Cognitive load theory: Implications for medical education: AMEE guide no. 86. Medical Teacher, 36(5), 371–384.

Copyright information

© The Author(s) 2018

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
  2. Uniformed Services University of the Health Sciences, Bethesda, USA
