Training Clinical Reasoning: Historical and Theoretical Background

Open Access chapter, part of the Innovation and Change in Professional Education book series (ICPE, volume 15).

Abstract

This chapter discusses how clinical reasoning has evolved across the ages, how it has been shaped by religious and societal influences, and how historical healers and physicians have contributed to what is now called clinical reasoning.

The reader follows the story from the early days of Hippocrates and Galen, with their humoral theory (disease caused by a disturbance in yellow bile, black bile, blood, or phlegm), to the first descriptions of bedside teaching in Padua in 1543. A significant impetus to bedside teaching was given in Holland’s Golden Age by Van der Straaten, Van Heurne, and above all Herman Boerhaave, who integrated contemporary theoretical knowledge and clinical experience and was known for his clinical teaching throughout Europe, even indirectly influencing educators in the United States through the Edinburgh school, which was heavily shaped by his model. In the mid-eighteenth century, Thomas Bond added a new dimension by predicting autopsy findings based on clinical signs and symptoms; by using feedback from failed predictions for learning and improvement, he moved clinical medicine one additional step ahead. Another famous educator, Sir William Osler, introduced the discipline of differential diagnosis in the late nineteenth century. In the early twentieth century, Abraham Flexner saw the clinic as a laboratory for investigation and learning, just as he had stressed the importance of basic science as a grounding for rational medicine. Half a century later, computer-aided instruction, patient management problems, artificial intelligence, and problem-based learning were introduced. Meanwhile, Elstein’s research findings in 1978 showed that experience determines expertise more than any general reasoning skill does – “there is not much that formal theories of problem solving, judgment and decision making can do to facilitate this slow process.” The case specificity of clinical reasoning revealed that knowledge about diseases heavily determines the quality and success of reasoning.

The chapter concludes with a few general recommendations for the teaching of clinical reasoning.

In this chapter, we will try to give a concise overview of what is known about teaching clinical reasoning in the era before the concept of “clinical reasoning” as such emerged in the literature. Not surprisingly, the further we go back in time, the more this concept needs to be stretched to fit what can arguably be seen as the predecessors of today’s clinical teaching and diagnostic reasoning. Yet, even in the earliest days of medicine, teachers taught students how to make sense of the findings associated with diseases (patients’ complaints, signs, and symptoms) and how to use this knowledge to ameliorate a patient’s condition, if this could be achieved at all. Starting in the 1950s, clinical reasoning itself became a subject of study, which increasingly enabled clinical educators to advance beyond merely showing and telling students how to apply knowledge and skills in a practical setting, to building theories and models of how clinical reasoning can be effectively and efficiently trained.

Clinical Reasoning in the Hippocratean Era

Through all ages, humans have tried to make sense of complaints, symptoms, and diseases; in this sense, clinical reasoning is as old as humanity. In the pre-Hippocratic era, people relied on priests or other authoritative individuals who had privileged access to the intentions of divine entities toward the sufferer or society as a whole, for diseases were sometimes seen – by patients as well as by healers – as containing messages from above. Similarly, the cause of a disease could be couched in moral terms, and symptoms and complaints were interpreted as punishment or revenge, on the part of the gods, for the sufferer’s misbehavior. As far as we know, Hippocrates (c. 460 BC – c. 370 BC) was the first to acknowledge the natural, i.e., non-divine, nature of diseases. That is, he explained disease in terms of a disturbance of the balance of the four humors: yellow bile, black bile, blood, and phlegm, an explanation that was further elaborated by Galen (131–216) and, though lacking a firm empirical basis, was so convincing that it remained basically unchallenged for two millennia. With respect to treatment, nature itself was assumed to have healing powers, and therapeutic measures aimed at supporting these natural forces. Probably the biggest contribution of Hippocrates and Galen to clinical reasoning was their emphasis on careful observation and registration of all visible symptoms and complaints, including bodily fluids and excretions, as well as environmental factors, diet, and living habits. Hippocrates summarized much of his practical knowledge in aphorisms, many of which are rules of thumb in the “if… then” format. Such rules of thumb, or heuristics, can be viewed as a rudimentary form of clinical reasoning. Though most Hippocratean aphorisms deal with treatment or prediction of the course of an illness, some are about diagnosis (e.g., “In those cases where there is a sandy sediment in the urine, there is calculus in the bladder or kidneys”).
The aphorisms were largely based on experience, rather than “logically” derived from Hippocratean humoral theory. In fact, this disconnection between disease theory and clinical practice remained intact until well into the nineteenth century, when methods were developed to investigate the inner workings of the living human body. Even in today’s clinical reasoning, aphorism-like heuristics still play an important role in the diagnostic process (e.g., “if symptom X, then always consider disease Y”), though nowadays generally supported by knowledge of underlying biomedical and pathophysiological mechanisms (Becker et al. 1961; Mangrulkar et al. 2002; Sanders 2009).

Bedside Teaching and Patient Demonstration

Until well into the seventeenth century, academic medicine was almost exclusively a theoretical affair. Reasoning played an important role, but it was exclusively employed to defend theses or to construct logical arguments, rather than to arrive at diagnoses or to select therapies. The introduction of bedside teaching by Batista de Monte in Padua in 1543 may have been the first step in teaching clinical reasoning in a more empirical sense, though little is known about his actual teaching, and the practice was discontinued by his immediate successors. Attempts to introduce bedside teaching in the Netherlands by Willem van der Straaten in Utrecht (1636) and Jan van Heurne in Leyden (1638) met a similar fate. Herman Boerhaave (1668–1738) at Leyden University was more successful, and his “model” was followed in Edinburgh and Vienna, from where it spread to other universities in Europe and North America. Yet, even at Boerhaave’s department, this form of teaching played only a marginal role, largely due to a lack of access to suitable patients. Moreover, Boerhaave’s bedside teachings were in fact orchestrated patient demonstrations, rather than sessions in which clinical reasoning was taught. Boerhaave’s aim was to achieve an integration, in his students, of theoretical knowledge from books and lectures – “advanced Hippocratean and Galenic theory” – and clinical experience (Risse 1989).

An important next step was made in 1766. In this year, Dr. Thomas Bond delivered his Introductory Lecture to a Course of Clinical Observations in the Pennsylvania Hospital, at the first medical school in America, the Medical College of the University of Pennsylvania. Bond was probably the first teacher to have unrestricted access to a sizeable number of patients in the hospital wards (Flexner 1910) (p. 4). Bond also appears to be the first teacher who introduced empirical elements into the – until then theoretically closed – system of clinical reasoning. That is, unlike Boerhaave’s, Bond’s reasoning could end in predictions that could conflict with empirical observations at autopsy. If a patient had died, Bond predicted, rather than just demonstrated, the findings at autopsy. He was well aware of the risk that his predictions were not necessarily borne out: “...if perchance he [the teacher] finds [at autopsy] something unsuspected, which betrays an error in judgment, he like a great and good man, immediately acknowledges the mistake, and for the benefit of survivors points out other methods by which it might have been more hapily treated” (Bridenbaugh 1947) (p. 14). By exposing his clinical reasoning to empirical refutation, Bond opened the door to actual improvement of this reasoning and, as a corollary, to a better understanding of the relationship between visible pathology and disease, with the possibility of improving treatment as well.

William Osler and the Differential Diagnosis

Basically, the early nineteenth century saw a rapid abandonment of Hippocratean-Galenic medical theory in favor of modern scientific medicine, which conceives of diseases as derailments of normal processes (or normal structures), rather than as disturbances of some speculative form of homeostasis. New diagnostic tools became available, such as palpation, percussion, and auscultation, which enabled the physician to investigate the interior of the human body without opening it and to distinguish between normal functioning (or structures) and their pathological deviations. The task of the clinician gradually shifted from accurate description of symptoms to drawing conclusions, on the basis of indirect information, about underlying pathophysiological or pathological processes. Diseases were no longer defined exclusively on the basis of findings (complaints, signs, and symptoms), and it was acknowledged that different diseases could lead to similar symptoms. This led to the emergence of the concept of a differential diagnosis. William Osler (1849–1919) is not only viewed as the founder of North American clinical medicine, but he is also credited with introducing the “discipline of differential diagnosis” (Maude 2014). A differential diagnosis is a necessary concept if one wants to approach clinical problem solving in a systematic way, taking into account different possible causes of a particular symptom.

Abraham Flexner and the Science of Clinical Medicine

The reformer of American medical education Abraham Flexner (1865–1959) was the first to develop an encompassing view on how clinical medicine should be taught. He distinguished three formats: (1) study or observation of the individual patient throughout the whole course of the disease by the student under proper guidance and control, (2) demonstration of cases by the instructor, and (3) the exposition of principles (Flexner 1925) (p. 238–239). Flexner strongly advocated bringing the student into close and active relation with the patient (Bonner 2002) (p. 84). Most importantly, he saw the teaching clinic as a laboratory, similar to that in the basic sciences, though lagging behind in scientific rigor (Ludmerer 1985). In his view, the scientific approach, which had been so successful in advancing physiology, pathology, and biochemistry, could be transferred directly to the bedside: “There are no principles involved in teaching clinical medicine that are not likewise involved in the teaching of the laboratory subjects” (Flexner 1925) (p. 237). The thinking processes of clinicians proceeded along exactly the same lines as those of scientists, he claimed (Flexner 1925) (p. 10). Even the most brilliant demonstration, Flexner believed, was less educative than a “more or less bungled experiment” carried out by the student (Flexner 1912) (p. 84). The response of the medical community in Flexner’s time was ambivalent: at a more abstract level, many physicians and teachers endorsed the view that diagnostic problem solving could benefit from a scientific approach (Becker et al. 1961) (p. 223); but at a more concrete level, they found Flexner’s views unpalatable, as he rejected as unscientific many clinical practices that in their eyes were inevitable, such as “intelligent guesswork,” the “tentative interpretation of fragmentary information,” and what Flexner disparagingly described as “improvised therapy consisting of little more than persuasion sustained only by the physician’s authority and personality” (Miller 1966) (p. 651). Unlike scientists, clinicians cannot indefinitely postpone their judgments and go on collecting further evidence that may enable them eventually to draw firm conclusions; hence, even though students can be trained to do scientific research, the scientific approach Flexner propagated cannot be directly applied to clinical problems.

Half a century later, it was clear that Flexner’s recommendations for a more scientific approach to teaching clinical reasoning had not fallen on fertile soil. On the contrary, Becker et al. (1961) observed that teaching in the clinic was haphazard and consisted largely of residents teaching the students those things which are “closest to the students’ hearts,” namely, “procedures, ‘pearls,’ tips, and other bits of medical wisdom which the resident suspects will be useful for the practicing physician” (Becker et al. 1961) (p. 357). When a student asked a question “which sounded perfectly reasonable,” Becker et al. (1961) noted that the supervisor-clinician frequently gave an answer that started with “In my experience…” and rarely came up with arguments that “carry the force of reason or logic” (Becker et al. 1961) (p. 235). In other words, arguments of persuasion, authority, and experience predominated in the clinic over a more reasoned and systematic approach. Though Becker et al.’s (1961) observations were limited to a single medical school, there is no evidence that the situation was different at other medical schools in the 1950s and 1960s. For example, in an extensive discussion of the new medical curriculum at Western Reserve University in the 1950s, clinical science is defined as “observing and working with patients” (Williams 1980) (p. 162), but no details about clinical problem solving are provided. The fact that Elstein et al. (1972) started their research project on medical problem solving with an exploratory study of how experienced physicians solve diagnostic problems illustrates the belief that little was known about how physicians actually solve clinical problems, let alone how they could teach this in a systematic fashion.

Early Diagnostic Tools, Computer-Assisted Instruction (CAI), and Patient Management Problems

If the art of clinical reasoning cannot be taught and the science of clinical reasoning cannot be applied, are there any options left for teaching clinical reasoning? In the 1950s, a third approach appeared on the stage: diagnosis as applied technology (Balla 1985) (p. 1). Two interrelated developments fueled this view: first, new mathematical and statistical techniques were applied to medical diagnosis; second, the development of the electronic computer opened a window to apply these mathematical and statistical, as well as other analytic, methods to clinical problem solving. In 1954, Firmin Nash (at the time the director of the South West London Mass X-Ray Service) presented the “Logoscope,” a device analogous to a slide rule, with removable columns which allowed the manipulation of any sample of qualitative clinical data (Nash 1954, 1960). The Logoscope embodied the concept of a disease manifestation matrix, a table with the columns representing diseases and the rows signs, symptoms, or laboratory findings (Jacquez 1964). The Logoscope was the first mechanical tool to help the diagnostician focus on relevant diagnostic hypotheses after collecting the (clinical) findings of a specific patient. In the 1960s, the first digital computer programs were written that aimed at instructing students how to solve clinical problems. Given the limited availability of computers and the highly constrained way humans could interact with them, these programs should be seen as experimental systems rather than as real teaching tools. Clinicians and computer programmers cooperated to specify in advance every possible step in the diagnostic process the problem solver (student) could take and the machine’s response to each step.
By using “branched programming,” an illusion of flexibility could be created: the student could ask questions and suggest actions or diagnoses by selecting them from a vocabulary list (the precursor of today’s “menu”), to which the computer could then provide appropriate, though “canned,” responses. Some programs even enabled the teacher to program a “pedagogic strategy” to guide the student’s problem-solving process (Feurzeig et al. 1964). Predictable erroneous solution paths could, at least in theory, be recognized, and more appropriate alternative actions could be proposed. A slightly more advanced version of this type of early computer-assisted instruction (CAI) was able to generate “cases” as well: given a particular diagnosis, it could select symptoms and other findings on a statistical basis to characterize the “patient” (McGuire 1963). As teaching instruments, however, these programs could only provide case-specific recommendations, and in this respect, their scope was limited to “diagnostic drill” (Feurzeig et al. 1964). That is, the instructions did not embody an explicit general method of solving clinical problems that could be applied across different cases.
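The disease manifestation matrix described above can be sketched in a few lines of code. The following is only an illustrative toy, not a reconstruction of the Logoscope itself; all disease and finding names are invented for the example.

```python
# A minimal sketch of the "disease manifestation matrix" idea: columns are
# diseases, rows are findings, and the diagnostician keeps only the diseases
# consistent with every observed finding. All clinical content is invented.

# Hypothetical matrix: finding -> set of diseases that can produce it
MANIFESTATION_MATRIX = {
    "fever":      {"pneumonia", "appendicitis", "influenza"},
    "cough":      {"pneumonia", "influenza"},
    "chest pain": {"pneumonia"},
    "rlq pain":   {"appendicitis"},
}

def compatible_diseases(observed_findings):
    """Intersect the disease columns for every observed finding."""
    candidates = None
    for finding in observed_findings:
        diseases = MANIFESTATION_MATRIX.get(finding, set())
        candidates = diseases if candidates is None else candidates & diseases
    return candidates or set()

print(sorted(compatible_diseases(["fever", "cough"])))       # ['influenza', 'pneumonia']
print(sorted(compatible_diseases(["fever", "chest pain"])))  # ['pneumonia']
```

The intersection step is the mechanical analogue of lining up the Logoscope’s removable columns: each added finding can only narrow, never widen, the set of diagnostic hypotheses.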

A conceptually similar but paper-and-pencil-based approach, called “patient management problems” (PMPs), was developed primarily for assessment purposes, but is to a limited extent also applicable in teaching contexts (McCarthy and Gonella 1967; McGuire 1963). The method aims at simulating an actual clinical situation representative of a physician’s practice. Like the early CAI systems, PMPs use branched programming, i.e., the student or clinician can choose from a repertoire of possible actions; once an action is chosen, feedback is provided about the outcome. In practice, the user has to erase an opaque overlay designating the chosen action, after which feedback (e.g., results of a laboratory test) becomes visible. PMPs can be used in a teaching context by adapting the feedback, e.g., by providing reasons why the action was inadequate or by referring to the literature. In line with the then-predominant behavioristic view of learning, the immediate availability of feedback – without a teacher being physically present – was considered an important asset of the method (McCarthy and Gonella 1967). As PMPs did not allow for (legitimate) flexibility in the way a user could approach a clinical problem, the method fell into disuse.
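The choose-an-action, reveal-canned-feedback mechanism of a PMP can be sketched as a simple lookup, which also makes its rigidity visible: only pre-authored actions have responses. The case content below is invented for illustration.

```python
# A toy sketch of the branched-programming mechanism behind patient
# management problems (PMPs): choosing an action reveals canned feedback,
# as erasing the opaque overlay did on paper. All case content is invented.

PMP_CASE = {
    "prompt": "A patient presents with acute abdominal pain.",
    "actions": {
        "take a focused history":
            "Pain started periumbilically and migrated to the right lower quadrant.",
        "order abdominal ultrasound":
            "Ultrasound shows a thickened appendix.",
        "prescribe analgesics only":
            "Inadequate: the underlying cause remains undiagnosed.",
    },
}

def choose(case, action):
    """Reveal the canned feedback for a chosen action."""
    return case["actions"].get(action, "Action not in the repertoire for this case.")

print(choose(PMP_CASE, "take a focused history"))
```

Any approach the author did not anticipate falls through to the default response, which is exactly the lack of legitimate flexibility that led to the method’s disuse.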

Artificial Intelligence and Problem-Based Learning

Artificial intelligence (AI) programs are characterized by flexibility and adaptivity; they do not rely on preprogrammed cases and fixed problem-solving routes, but can accommodate a broad range of user input and react with a similarly broad range of responses, including feedback and recommendations about how to proceed. When applied to complex, knowledge-rich domains, such as medicine, AI programs are called expert systems, and in education, they are known as intelligent tutoring systems (ITS). The fundamentals of all these programs are the same: chains of simple operations applied to simple content (basically, arrays of alphanumeric symbols). Complex procedures and complex knowledge emerge by assembling large numbers of simple operations and applying these to large amounts of simple content. In clinical medicine, AI refers to automated diagnostic systems featuring a strict distinction between disease knowledge on the one hand and diagnostic procedures on the other (Clancey 1984). In the 1980s, the heyday of this form of AI, several diagnostic systems were developed, of which INTERNIST (Miller et al. 1982) and MYCIN (Clancey 1983) are the most well known. GUIDON-MANAGE (Rodolitz and Clancey 1989) was specifically developed to introduce medical students to the process of diagnostic reasoning and is probably the most prominent example of an ITS in medical diagnosis.

In fact, AI heavily draws on principles of human problem solving that, in their turn, were derived from the features of early programmable machines developed in the decades before AI itself was technically possible (Feigenbaum and Feldman 1963; Newell and Simon 1972). In the 1960s, this approach to problem solving was already described at a theoretical level in several publications on clinical problem solving (Gorry and Barnett 1968; Kleinmuntz 1965, 1968; Overall and Williams 1961; Wortman 1966, 1972). From an educational perspective, this appeared a promising approach: if general methods or procedures to solve clinical problems can be formulated independently from clinical content knowledge (Jacquez 1964), the process of clinical diagnosis can be taught directly (Gorry 1970) and applied irrespective of the content of the specific problem. The educational approach directly connected with this view of problem solving in medicine is problem-based learning (PBL) (Barrows 1983; Barrows and Tamblyn 1980; Neufeld and Barrows 1974). In the educational philosophy of McMaster University Medical School in Hamilton, Canada – the cradle of problem-based learning – becoming a problem solver was an explicit goal of the medical curriculum, alongside that of becoming a content expert. Like Gorry (1970), Barrows (1983) believed that a problem-solving approach or problem-solving skills could be directly taught. In this respect, however, PBL has not lived up to its promises – today, the method is conceived in entirely different terms, i.e., as an instructional approach that aims to integrate basic science and clinical knowledge (Schmidt 1983, 1993), but with little direct benefit for teaching clinical reasoning. How did this come about? The belief that it would be possible to develop a clear-cut method to solve clinical and diagnostic problems was dealt a fatal blow by Elstein et al. (1978), who extensively investigated differences between experts’ and novices’ approaches to these problems. Experts and novices alike solve diagnostic problems by generating a small number of hypotheses early in the process and then proceeding to collect evidence to confirm (or refute) these hypotheses. The only difference is that experts on average generate better, i.e., more promising, hypotheses early in the clinical encounter (Hobus et al. 1987; Neufeld et al. 1981). Experts’ superior performance in clinical diagnosis seems to be an inherent consequence of the knowledge structures they develop over the years as a result of their experience. As Elstein observes, “there is not much that formal theories of problem solving, judgment and decision making can do to facilitate this slow process” (Elstein 1995) (p. 53–54). Elstein et al.’s (1978) additional finding that expertise is highly case specific suggests that exposing students to a broad range of clinical problems might be the only feasible approach to teaching clinical reasoning.

After Medical Problem Solving (1978): A Role Left for Teaching Clinical Reasoning?

Today, many researchers and clinical educators distinguish between two approaches to clinical problem solving: one based on pattern recognition or “pure induction” and one that is usually referred to as “hypothesis generation and testing” (Gale 1982; Norman 2005; Patel et al. 1993). In fact, the former can be seen as a limiting case of the latter, that is, when a physician recognizes a clinical condition with sufficient confidence to immediately (probably unconsciously) suppress all alternative hypotheses that might crop up, without a need for further confirmation. Though praised as the “mainspring of diagnosis” by some, e.g., McCormick (1986), the ability to recognize a multitude of patterns requires extensive experience and is, unlike reasoning, not amenable to direct instruction (Elstein 1995). This leaves us with hypothesis generation and testing as the focus of a diagnostic problem-solving method (Barrows and Feltovich 1987). However, this is a very general approach that humans use to solve all kinds of problems; it lacks the necessary specificity to be applicable to concrete clinical cases (Blois 1984). Thus, alternative approaches have been formulated. For example, Blois claims that if a clinician does not recognize a pattern, he or she nearly always reverts to a causal inquiry, trying to relate specific findings to general physiological or pathological conditions (Blois 1984; Edwards et al. 2009) or to what Ploger (1988) calls “known pathology.” As this form of causal reasoning almost always involves some uncertainty – some steps in the causal sequence are not observable but have to be inferred – the solution of a diagnostic problem will always be a differential diagnosis, rather than the diagnosis. Several authors have expressed doubts whether students can be taught to construct differential diagnoses for clinical cases (Papa et al. 2007). According to Elstein (1995) and Kassirer and Kopelman (Kassirer et al. 2010), there is not even an agreed-upon definition of a differential diagnosis. An alternative approach is to group individual findings that for some reason “belong together,” e.g., appear to have the same cause or are part of a known syndrome. Eddy and Clanton (1982) developed an approach to diagnosis that starts with clustering elementary findings into “aggregate findings.” Next, a differential diagnosis (list of possible causes) is constructed for the most important aggregate finding, which they call the “pivot.” Then, all elementary findings in the case that cannot be subsumed under the pivot are checked against the alternatives in the differential diagnosis of the pivot. If all elementary findings are covered by the differential diagnosis of the pivot, this is the differential diagnosis for the entire case. If not, the process is repeated with the second aggregate finding now becoming the pivot, and so on. Finally, the alternative options (diagnoses) in the differential diagnosis can be ranked as more or less likely. Given the information available, this may be the best possible solution to the case. The advantage of the approach is that it will often be easier to construct a differential diagnosis for a selected collection of findings than for an entire case, in particular if the number of signs, symptoms, and findings is large.
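The pivot procedure of Eddy and Clanton can be rendered schematically in code. The sketch below makes one simplifying assumption not in the original: that each candidate diagnosis comes with the set of findings it can explain, so that “covered by the differential diagnosis of the pivot” can be checked by set inclusion. All clinical content is invented for illustration.

```python
# A schematic sketch of Eddy and Clanton's pivot procedure. Assumption
# (not from the source): each diagnosis is annotated with the findings it
# can explain. All disease/finding names are invented.

# Hypothetical knowledge base: diagnosis -> findings it can account for
EXPLAINS = {
    "iron-deficiency anemia": {"fatigue", "pallor", "low hemoglobin"},
    "hypothyroidism":         {"fatigue", "pallor", "low hemoglobin", "weight gain"},
    "depression":             {"fatigue", "weight gain"},
}

def pivot_differential(aggregate_findings, all_findings):
    """Try each aggregate finding as the pivot, most important first.

    aggregate_findings: list of (pivot_name, findings_in_cluster) pairs,
    ordered by importance. Returns (pivot, differential) for the first
    pivot whose differential covers every elementary finding in the case.
    """
    for pivot, cluster in aggregate_findings:
        # Differential for the pivot: diagnoses explaining its whole cluster
        differential = [dx for dx, f in EXPLAINS.items() if cluster <= f]
        # Findings jointly covered by the alternatives in that differential
        covered = (set().union(*(EXPLAINS[dx] for dx in differential))
                   if differential else set())
        if all_findings <= covered:
            return pivot, differential
    return None, []  # no single pivot covers the case; analyze separately

case = {"fatigue", "pallor", "low hemoglobin", "weight gain"}
pivot, dd = pivot_differential([("anemia", {"pallor", "low hemoglobin"})], case)
print(pivot, sorted(dd))
```

Here the pivot “anemia” (pallor plus low hemoglobin) yields a two-item differential that happens to cover the remaining findings, so it becomes the differential for the whole case; had it not, the loop would move on to the next aggregate finding, just as the text describes.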

Evans and Gadd present a similar, but slightly more hierarchical, approach (Evans and Gadd 1989). They distinguish six levels, ranging from the “empirium” (raw, uninterpreted findings) to the “global complex,” which covers not only the diagnosis but also prevention and medical, social, and psychological care. The equivalent of Eddy and Clanton’s (1982) pivot is called a “facet” by Evans and Gadd (1989). Facets are sub-diagnostic, complex clusters of findings which can be attributed to a coherent underlying pathophysiological process; “anemia” would be a good example of a facet. Evans and Gadd (1989) put a stronger emphasis on pathophysiological thinking than Eddy and Clanton (1982), but they are less explicit about how to construct a differential diagnosis. A third method that resembles the previous two approaches is clinical problem analysis (CPA) (Custers et al. 2000). This approach is based on Weed’s (1968a, b) “problem-oriented medical record.” The “patient problem” in CPA is similar to Eddy and Clanton’s (1982) “pivot” and Evans and Gadd’s (1989) “facet” but has a more practical nature: a “patient problem” can be anything in a case for which a differential diagnosis can be constructed or that may require treatment or further diagnostic action. A critical aspect of the method is that uncertainty is captured by the differential diagnosis, rather than by the patient problem (patient problems are always clear, specific, and certain). Thus, a patient problem can never include likelihood qualifiers, such as “probably X” or “suspicion of Y.” If two findings cannot be subsumed under the same patient problem with certainty, they should be made separate patient problems that require individual analysis. In this approach, the pitfall of “premature closure” – the tendency to stop considering other options after generating a tentative early hypothesis (Graber et al. 2005) – can be avoided, though at the expense of “incomplete synthesis” (the diagnostician may fail to appropriately aggregate findings, which may slow down the diagnostic process) (Voytovich et al. 1986). But provided that such slowing down is acceptable for clinicians who are still in training, this approach can be used in an educational context.

Teaching Clinical Reasoning: A Few General Recommendations

Today, few medical educators believe that there exists a single clinical reasoning method that can be applied to all diagnostic problems by diagnosticians of all stripes. Yet, this does not imply that one cannot teach beyond “repeated practice [–] on a similar range of problems” (Elstein et al. 1978) or “observing others engaged in the process” (Kassirer and Kopelman 1991). What can be done? Our suggestion would be that if clinical reasoning can neither be taught as a “pure” process nor directly as a skill, teaching it in a case-based format might be a proper middle ground. What further features may an effective case-based approach require? First, it is important to take the term “reasoning” seriously. The teacher or supervisor should avoid overly emphasizing the outcome (the “correct” diagnosis), for this may reinforce undesirable behavior, such as guessing or jumping to conclusions. In addition, teaching should proceed in small steps, and teachers should not hesitate to frequently ask hypothetical questions or questions that probe a possible explanation of findings, such as “What if…?”, “Can you think of other possibilities?”, or “Can you explain this?” It should also be clear to the participants (teacher and students) that a differential diagnosis is a legitimate endpoint of the process, particularly if different (diagnostic or therapeutic) actions are associated with each alternative in the differential diagnosis. There is limited evidence that a model schema categorizing diseases into eight groups (congenital, traumatic, immunologic, neoplastic, metabolic, infectious, toxic, and vascular) can be helpful (Brawer et al. 1988), but any other approach, as long as it is systematic, may also be used by beginning students (Fulop 1985). Moreover, to be effective, objectives and expectations must be clearly communicated before the clinical reasoning session begins (Edwards et al. 2009).
The best format appears to be a small group session guided by a clinical tutor; in advanced groups, students can be asked to prepare and present a case. During sessions, students should be encouraged to actively participate and take notes – the importance of which was already emphasized by William Osler. To avoid “retrospective bias” – teaching problem solving as if one is working toward a solution known in advance – the method works best when the teacher or tutor is not familiar with the case but has access to exactly the same information as the students (Kassirer 2010; Kassirer and Kopelman 1991). Critics might argue that this is a reduced form of clinical problem solving – and it is, deliberately so – for clinical reasoning is demanding and involves a high cognitive load (Qiao et al. 2014; Young et al. 2014); hence, it cannot be properly taught in an authentic context, where students simultaneously have to deal with a real patient: in such a context, dealing with a real patient would impose “extraneous load” to the detriment of the “germane load,” i.e., learning (van Merriënboer and Sweller 2010). On the other hand, in clinical reasoning sessions, students will learn how to deal with a case report or case record, an aspect of clinical practice that is difficult to train in a practical context. In sum, teaching clinical reasoning in a step-by-step fashion, with an emphasis on formulating a correct and comprehensive differential diagnosis, will be the best way to start the clinical training of junior medical students.

References

  1. Balla, J. (1985). The diagnostic process: A model for clinical teachers. Cambridge, UK: Cambridge University Press.
  2. Barrows, H. S. (1983). Problem-based, self-directed learning. JAMA: The Journal of the American Medical Association, 250(22), 3077. http://doi.org/10.1001/jama.1983.03340220045031
  3. Barrows, H. S., & Feltovich, P. J. (1987). The clinical reasoning process. Medical Education, 21(2), 86–91. http://doi.org/10.1111/j.1365-2923.1987.tb00671.x
  4. Barrows, H. S., & Tamblyn, R. M. (1980). Problem-based learning: An approach to medical education. New York: Springer.
  5. Becker, H., Geer, B., Hughes, E., & Strauss, A. (1961). Boys in white: Student culture in medical school. Chicago: University of Chicago Press.
  6. Blois, M. (1984). Information and medicine: The nature of medical descriptions. Berkeley: University of California Press.
  7. Bonner, T. N. (2002). Iconoclast: Abraham Flexner and a life in learning. Baltimore: Johns Hopkins University Press.
  8. Brawer, M., Witzke, D., Fuchs, M., & Fulginiti, J. (1988). A schema for teaching differential diagnosis. Proceedings of the Annual Conference of Research in Medical Education, 27, 162–166.
  9. Bridenbaugh, C. (1947). Dr Thomas Bond’s essay on the utility of clinical lectures. Journal of the History of Medicine and Allied Sciences, 2(1), 10–19.
  10. Clancey, W. J. (1983). The epistemology of a rule-based expert system—A framework for explanation. Artificial Intelligence, 20(3), 215–251. http://doi.org/10.1016/0004-3702(83)90008-5
  11. Clancey, W. (1984). Methodology for building an intelligent tutoring system. In W. Kintsch, J. Miller, & P. Polson (Eds.), Method and tactics in cognitive science (pp. 51–83). Hillsdale: Lawrence Erlbaum Associates.
  12. Custers, E. J., Stuyt, P. M., & De Vries Robbé, P. F. (2000). Clinical problem analysis (CPA): A systematic approach to teaching complex medical problem solving. Academic Medicine, 75(3), 291–297.
  13. Eddy, D., & Clanton, C. (1982). The art of diagnosis: Solving the clinicopathological exercise. New England Journal of Medicine, 306(21), 1263–1268.
  14. Edwards, J. C., Brannan, J. R., Burgess, L., Plauche, W. C., & Marier, R. L. (2009). Case presentation format and clinical reasoning: A strategy for teaching medical students. Medical Teacher, 9, 285.
  15. Elstein, A. (1995). Clinical reasoning in medicine. In J. Higgs & M. Jones (Eds.), Clinical reasoning in the health professions (pp. 49–59). Oxford: Butterworth Heinemann.
  16. Elstein, A., Kagan, N., Shulman, L., Jason, H., & Loupe, M. (1972). Methods and theory in the study of medical inquiry. Journal of Medical Education, 47, 85–92.
  17. Elstein, A. S., Shulman, L. S., & Sprafka, S. A. (1978). Medical problem solving: An analysis of clinical reasoning. Cambridge, MA: Harvard University Press.
  18. Evans, D., & Gadd, C. (1989). Managing coherence and context in medical problem-solving discourse. In D. Evans & V. L. Patel (Eds.), Cognitive science in medicine: Biomedical modelling (pp. 211–255). Cambridge, MA: The MIT Press.
  19. Feigenbaum, E., & Feldman, J. (1963). Computers and thought: A collection of articles. New York: McGraw-Hill.
  20. Feurzeig, W., Munter, P., Swets, J., & Breen, M. (1964). Computer-aided teaching in medical diagnosis. Journal of Medical Education, 39(8), 746–754.
  21. Flexner, A. (1910). Medical education in the United States and Canada: A report to the Carnegie Foundation for the Advancement of Teaching. Boston: D. B. Updike, The Merrymount Press (Repr. Forgotten Books, 2012).
  22. Flexner, A. (1912). Medical education in Europe: A report to the Carnegie Foundation for the Advancement of Teaching, Bulletin No. 6. New York: The Carnegie Foundation.
  23. Flexner, A. (1925). Medical education: A comparative study. New York: The Macmillan Company.
  24. Fulop, M. (1985). Teaching differential diagnosis to beginning clinical students. The American Journal of Medicine, 79(6), 745–749. http://doi.org/10.1016/0002-9343(85)90526-1
  25. Gale, J. (1982). Some cognitive components of the diagnostic thinking process. British Journal of Psychology, 52(1), 64–76.
  26. Gorry, G. (1970). Modeling the diagnostic process. Journal of Medical Education, 45(5), 293–302.
  27. Gorry, G. A., & Barnett, G. O. (1968). Experience with a model of sequential diagnosis. Computers and Biomedical Research, 1(5), 490–507. http://doi.org/10.1016/0010-4809(68)90016-5
  28. Graber, M. L., Franklin, N., & Gordon, R. (2005). Diagnostic error in internal medicine. Archives of Internal Medicine, 165(13), 1493–1499. http://doi.org/10.1001/archinte.165.13.1493
  29. Hobus, P. P. M., Schmidt, H. G., Boshuizen, H. P. A., & Patel, V. L. (1987). Contextual factors in the activation of first diagnostic hypotheses: Expert-novice differences. Medical Education, 21(6), 471–476. http://doi.org/10.1111/j.1365-2923.1987.tb01405.x
  30. Jacquez, J. (1964). The diagnostic process: Problems and perspectives. In J. Jacquez (Ed.), The diagnostic process (pp. 23–37). Ann Arbor: University of Michigan Medical School.
  31. Kassirer, J. P. (2010). Teaching clinical reasoning: Case-based and coached. Academic Medicine, 85(7), 1118–1124.
  32. Kassirer, J., & Kopelman, R. (1991). Learning clinical reasoning. Baltimore: Lippincott Williams & Wilkins.
  33. Kleinmuntz, B. (1965). Diagnostic problem solving by computer. Japanese Psychological Research, 7(4), 189–194. http://doi.org/10.4992/psycholres1954.7.189
  34. Kleinmuntz, B. (1968). The processing of clinical information by man and machine. In B. Kleinmuntz (Ed.), Formal representation of human judgment. New York: Wiley.
  35. Ludmerer, K. M. (1985). Learning to heal: The development of American medical education. New York: Basic Books.
  36. Mangrulkar, R. S., Saint, S., Chu, S., & Tierney, L. M. (2002). What is the role of the clinical “pearl”? The American Journal of Medicine, 113(7), 617–624. http://doi.org/10.1016/S0002-9343(02)01353-0
  37. Maude, J. (2014). Differential diagnosis: The key to reducing diagnosis error, measuring diagnosis and a mechanism to reduce healthcare costs. Diagnosis, 1(1), 107–109. http://doi.org/10.1515/dx-2013-0009
  38. McCarthy, W. H., & Gonella, J. S. (1967). The simulated patient management problem: A technique for evaluating and teaching clinical competence. British Journal of Medical Education, 1(5), 348–352. http://doi.org/10.1111/j.1365-2923.1967.tb01730.x
  39. McCormick, J. (1986). Diagnosis: The need for demystification. The Lancet, 328(8521), 1434–1435.
  40. McGuire, C. (1963). A process approach to the construction and analysis of medical examinations. Journal of Medical Education, 38, 556–563.
  41. van Merriënboer, J. J. G., & Sweller, J. (2010). Cognitive load theory in health professional education: Design principles and strategies. Medical Education, 44(1), 85–93. http://doi.org/10.1111/j.1365-2923.2009.03498.x
  42. Miller, H. (1966). Fifty years after Flexner. The Lancet, 288(7465), 647–654. http://doi.org/10.1016/S0140-6736(66)92827-3
  43. Miller, R., Pople, H., & Myers, J. (1982). INTERNIST-I, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine, 307(8), 468–476.
  44. Nash, F. (1954). Differential diagnosis: An apparatus to assist the logical faculties. Lancet, 263(6817), 874–875.
  45. Nash, F. (1960). Diagnostic reasoning and the logoscope. Lancet, 276(7166), 1442–1446.
  46. Neufeld, V., & Barrows, H. (1974). The “McMaster philosophy”: An approach to medical education. Journal of Medical Education, 49, 1040–1050.
  47. Neufeld, V., Norman, G. R., Feighter, J. W., & Barrows, H. S. (1981). Clinical problem-solving by medical students: A cross-sectional and longitudinal analysis. Medical Education, 15(5), 315–322. http://doi.org/10.1111/j.1365-2923.1981.tb02495.x
  48. Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall.
  49. Norman, G. (2005). Research in clinical reasoning: Past history and current trends. Medical Education, 39(4), 418–427. http://doi.org/10.1111/j.1365-2929.2005.02127.x
  50. Overall, J. E., & Williams, C. M. (1961). Models for medical diagnosis. Behavioral Science, 6(2), 134–141.
  51. Papa, F. J., Oglesby, M. W., Aldrich, D. G., Schaller, F., & Cipher, D. J. (2007). Improving diagnostic capabilities of medical students via application of cognitive sciences-derived learning principles. Medical Education, 41(4), 419–425. http://doi.org/10.1111/j.1365-2929.2006.02693.x
  52. Patel, V. L., Groen, G. J., & Norman, G. R. (1993). Reasoning and instruction in medical curricula. Cognition and Instruction, 10(4), 335–378. http://doi.org/10.1207/s1532690xci1004_2
  53. Ploger, D. (1988). Reasoning and the structure of knowledge in biochemistry. Instructional Science, 17(1988), 57–76. http://doi.org/10.1007/BF00121234
  54. Qiao, Y. Q., Shen, J., Liang, X., Ding, S., Chen, F. Y., Shao, L., … Ran, Z. H. (2014). Using cognitive theory to facilitate medical education. BMC Medical Education, 14(1), 79. http://doi.org/10.1186/1472-6920-14-79
  55. Risse, G. (1989). Clinical instruction in hospitals: The Boerhaavian tradition in Leyden, Edinburgh, Vienna, and Padua. Clio Medica, 21, 1–19.
  56. Rodolitz, N., & Clancey, W. (1989). GUIDON MANAGE: Teaching the process of medical diagnosis. In D. Evans & V. L. Patel (Eds.), Cognitive science in medicine: Biomedical modeling (pp. 313–348). Cambridge, MA: The MIT Press.
  57. Sanders, J. (2009). Every patient tells a story: Medical mysteries and the art of diagnosis. New York: Broadway Books. http://doi.org/10.1172/JCI41900
  58. Schmidt, H. (1983). Problem-based learning: Rationale and description. Medical Education, 17(1), 11–16.
  59. Schmidt, H. G. (1993). Foundations of problem-based learning: Some explanatory notes. Medical Education, 27(5), 422–432. http://doi.org/10.1111/j.1365-2923.1993.tb00296.x
  60. Voytovich, A. E., Rippey, R. M., & Jue, D. (1986). Diagnostic reasoning in the multiproblem patient: An interactive, microcomputer-based audit. Evaluation & the Health Professions, 9(1), 90–102. http://doi.org/10.1177/016327878600900107
  61. Weed, L. (1968a). Medical records that guide and teach. New England Journal of Medicine, 278(12), 593–600.
  62. Weed, L. (1968b). Medical records that guide and teach (concluded). New England Journal of Medicine, 278(12), 652–657.
  63. Williams, G. (1980). Western Reserve’s experiment on medical education and its outcome. New York: Oxford University Press.
  64. Wortman, P. M. P. (1966). Representation and strategy in diagnostic problem solving. Human Factors, 8(1), 48–53. http://doi.org/10.1177/001872086600800105
  65. Wortman, P. M. (1972). Medical diagnosis: An information processing approach. Computers and Biomedical Research, 5(4), 315–328.
  66. Young, J. Q., Van Merrienboer, J., Durning, S., & ten Cate, O. (2014). Cognitive load theory: Implications for medical education: AMEE guide no. 86. Medical Teacher, 36(5), 371–384. http://doi.org/10.3109/0142159X.2014.889290

Copyright information

© The Author(s) 2018

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
