This study explores the application and retention of knowledge in the context of clinical decision making and dual process theory. In order for successful clinical reasoning to occur, and for a doctor or medical student to come to a correct diagnosis, it is necessary for them to be able to apply the knowledge they have gained in various contexts to other scenarios. In other words, they require the ability to transfer knowledge from one situation to another. However, despite its importance in medicine and many other domains, the transfer of knowledge learned is recognized as a process which happens infrequently (Gick & Holyoak, 1980; Haskell, 2001).
This paper builds on a prior study by Rosby et al. (2018), which demonstrated that it was possible to train novice medical students to exhibit expert-like decision-making behaviours, mirroring that of System-1 thinking. However, what this work did not explore was whether the training also promoted the transfer and long-term retention of this knowledge.
The key objective of the current study was to establish whether students trained for System-1-type performance simply learn to recognize the cases for which they are trained (in which case, transfer of knowledge would not occur), or are able to learn the underlying diagnostic features of particular chest diseases (as represented on X-ray). The latter would subsequently enable them to apply this knowledge to new representations of the same condition, varying in terms of the features they have in common (Thorndike & Woodworth, 1901). It was expected that as cases diverged from the original, the accuracy in diagnosis would reduce and time taken to reach that diagnosis would increase.
If trained cases had been understood well enough for students to apply and transfer this knowledge, we also predicted that a degree of knowledge retention would occur, although we recognized that accuracy would likely decline as a result of knowledge decay, as previously described in the literature (Ebbinghaus, 1885; Murre & Dros, 2015; Schmidt et al., 2000).
To test these hypotheses, a study was designed and conducted in two parts. During Part 1, in order to test for knowledge transfer, novice medical students were presented with eight online chest X-ray cases similar to those described by Rosby et al. (2018). They were shown half of them repetitively, asked to select the correct diagnosis from a list, and provided with remedial training in the event that they were incorrect. Finally, participants were asked to provide free-text diagnoses for a total of 16 cases: the four trained cases, four cases to test for near transfer of knowledge, four cases to assess far transfer, and four unrelated, untrained cases with diagnoses different from those trained for. Part 2, which included a knowledge retrieval test, was conducted two weeks later to measure whether the training produced a degree of knowledge retention. Participants were asked to provide diagnoses for the same 16 cases they had seen in the final stage of Part 1. Dependent variables for both parts of the experiment were diagnostic accuracy and processing time.
Overall, the training participants received did promote knowledge transfer. Participants were able to apply knowledge learned about the trained cases to diagnose new cases of the same diagnosis, differing in similarity of features. This indicates that near transfer and far transfer did indeed occur to varying degrees, although diagnostic accuracy was reduced compared to that of trained cases, coupled with a significant increase in response time, as predicted. Near transfer cases were more accurately and more quickly diagnosed than far transfer cases. Performance in the untrained cases was lower than that of the far transfer cases, although time taken to reach a diagnosis was similar. This pattern shows that the further the images diverge from the original case, the more difficult it is to transfer the knowledge learned. A degree of successful knowledge retention was also demonstrated, although diagnostic accuracy scores were reduced, as one would expect.
So, what are the possible reasons why we see successful transfer in this study? Successful transfer is recognized to depend on a wide variety of factors, some of which are fulfilled in this study and are discussed below. Firstly, transfer depends both on context and on adequacy of learning (Bransford et al., 1999), as well as on the ability of the learner to generalize what they have studied beyond the initial learning event without any new learning taking place (Lobato, 2003). In this case, the context of the study was relevant to the participants, and the exercise provided many opportunities to reinforce the new knowledge being acquired, allowing an adequate learning experience. On the other hand, the students always saw the same versions of the particular diseases during the training phase. Perhaps transfer would be enhanced if students were confronted with different versions of the same diseases already during the training phase. A future experiment using our paradigm could compare transfer levels resulting from seeing the same cases throughout training versus seeing different cases during training.
Secondly, it is also important to take into account whether there is any correlation between the learning task and the transfer test; in other words, whether 'transfer appropriate learning' is occurring (Blaxton, 1989). In this study, the initial learning task, that of learning diagnostic features of particular chest conditions on X-ray and reinforcing this with additional training, clearly corresponds closely with the test phase, in which participants were asked to provide free-text diagnoses for a number of chest X-rays, some familiar and others less so.
Furthermore, transfer of knowledge is positively influenced by the motivation of the learner, because this affects the quality of the initial phase of learning (Pugh & Bergin, 2006). It is likewise helpful if the learner can appreciate the worth of what they are learning (Anderson et al., 1996); simply put, transfer is aided when the learner can see beyond the learning event to how the knowledge can be used later. This is also the case when it is clear that the information being learned will have an impact on others (Pintrich & Schunk, 2002). All of these factors apply to our participant group, and it is fair to assume that they, as medical students, were likely to have been motivated to learn during this exercise given its relevance to their studies and later medical practice with patients.
Let’s now consider the implications of this study. Firstly, to our knowledge, this is one of a very small number of studies within the field of diagnostic reasoning (Norman et al., 2007) to demonstrate successful knowledge transfer as a result of a training exercise, and one of the few studies to provide evidence of knowledge transfer in general, even beyond the context studied here. It thereby adds to the body of literature in this field, bringing a fresh perspective and a new paradigm for further investigation.
Another contribution is the demonstration of a degree of long-term knowledge retention resulting from completion of the online training exercise, which involved repeated exposure to specific cases and retrieval of diagnoses. This challenges existing literature, which largely suggests that sufficient knowledge retention emerges only if a learner undertakes multiple, distributed opportunities for learning a particular task (Semb & Ellis, 1994). It does, however, provide additional evidence supporting the results of experiments carried out by Karpicke and Roediger, who showed that repeated studying and prior testing enhance long-term knowledge retention (Karpicke & Roediger, 2007; Roediger & Karpicke, 2006). In our study, it took less than 8 min on average to train students in chest X-ray interpretation in a one-off training exercise, to a degree that was still detectable two weeks later.
These results, in relation to both transfer and retention, indicate that the exercise described could represent a useful way of teaching medical students how to develop clinical reasoning skills, a component which, despite its importance, is often lacking in medical school curricula due to uncertainty about how to address it (Eva, 2005). It is suggested that medical students, despite all the efforts made to prepare them for professional practice, do not receive sufficient exposure to the large variation in which disease presents itself, and therefore may lack essential diagnostic competencies when entering the health care system (Schmidt & Mamede, 2015). Intense and continuous training in the diagnosis of diseases in all their variations may be an appropriate response to this problem.
Special consideration should be given to the fact that we have framed our findings in terms of developing System-1 thinking in our second-year students. In the Introduction section we discussed the possibility of training novice medical students to exhibit expert-like decision-making behaviours. Judged by what the existing literature sees as indicators of System-1 thinking, namely short decision times and high accuracy (Norman et al., 2014), we seem to have succeeded. System 1 is envisioned as an immediate and effortless response of the mind to situations which it recognizes. Our experimental paradigm, using pictures that could be judged in the blink of an eye, was optimized for such a response. However, even in the trained condition and on immediate test, students took on average almost five seconds to arrive at a decision. It seems that even under such circumstances, students engage in some analytical thinking, perhaps as an extra check on the accuracy of their initial diagnosis. On the other hand, a study using the same paradigm, in which oxygenation of the prefrontal cortex, a sign that analytical reasoning is involved, was measured using functional near-infrared spectroscopy, suggested that when trained cases were judged, the prefrontal cortex was not involved (Rotgans et al., 2019).
There are of course limitations to our study, the most obvious being that it is focused on a limited field of medicine, chest radiology, so we cannot simply assume that these findings will be mirrored in other areas of medical practice. What if we were to use vignettes of disease rather than pictures that can be judged in the blink of an eye? Would we find similar effects?
Another is the limited number of cases used: students were trained for only four cases, and demonstrated success in knowledge transfer and retention. However, in Hatala et al.'s (2003) study, which used ECGs and a greater number of case examples, transfer did not occur as hoped. It is therefore worth asking what might happen to transfer and retention if the number of cases in our study were increased.
A further limitation to bear in mind is that the chest X-rays were considered in isolation. In professional practice, X-rays are one part of the diagnostic process, and the diagnosis seen on the X-ray is made in the context of a patient history, examination, and other investigation findings. If this information were available, would it make the task of interpreting the X-ray easier, or perhaps more difficult, and how would this affect the likelihood of knowledge transfer and retention?
So, where do we go from here? We have managed to show that the experiment described promotes transfer of the learned knowledge to similar and less similar cases, and not simply recognition of the images trained for. Next, it would be pertinent to consider using this paradigm with a greater array of cases, perhaps in the context of more patient information, such as history and examination findings, to assess how this affects the participants' ability to retain and transfer their newfound knowledge. We have also noted that this study focuses on a limited area of medicine; could this experiment therefore be extended to other specialties, such as histopathology and dermatology, to test how far this training is successful outside the field of radiology? Additionally, having focused on novice medical students, it would seem appropriate to explore this paradigm in more experienced physicians to assess how far the underlying level of experience affects knowledge transfer.