The chapters in this book present a variety of carefully developed simulations of diagnostic tasks. These tasks vary in several key respects. The learners in these tasks engage in different diagnostic modes and draw on different sources of information. While the simulations in Chaps. 3 through 6 as well as 8 and 9 feature an individual diagnostic process, such as diagnosing mathematical misconceptions, as their diagnostic mode, the simulations presented in Chaps. 7 and 10 require collaboration between two agents, e.g., an internist and a radiologist. In the simulations in Chaps. 3, 8, and 10, learners are required to use documents, such as patients’ files or tasks solved by students, to draw a conclusion. In contrast, the simulations in Chaps. 4, 6, and 7 contain videos of critical diagnostic situations, such as students constructing an experiment or teachers orchestrating a classroom. A third type of source, featured in the simulations in Chaps. 5 and 9, consists of standardized live interactions with students or patients.

Additionally, the presented simulations cover various domains and topics. The simulations from the medical domain (Chaps. 9 and 10) address radiological examinations and medical history-taking. Several of the simulations from the domain of teacher education revolve around students’ competences and misconceptions, including rather domain-specific competences such as mathematical argumentation (see especially Chaps. 3 through 5) but also cross-domain competences such as scientific reasoning (Chap. 7). Other simulations from this domain address topics such as instructional quality (Chap. 6) and learning disorders (Chap. 8).

This diversity is not surprising, as it reflects the variety of real-life diagnostic situations. However, despite these differences, all of the presented simulations share a common goal: providing students, practitioners, and researchers with tools to test and foster diagnostic competences. Prior studies have shown that it is possible to foster diagnostic competences with a range of learning environments (see the comprehensive meta-analysis by Chernikova et al., 2019). Thus, it is not surprising that several of the simulations presented in this book have already produced promising early results. This is good news for the training of complex skills at the higher education level. However, several research questions remain open at this point. We already mentioned these overarching research questions in the introduction, and the chapters described how they plan to contribute to answering them. Here, in this concluding section, we want to provide a more in-depth look at these questions.

11.1 What Processes are Central for Generating Desired Learning Outcomes in Simulations Aimed at Diagnostic Competences?

It is plausible to assume that the improvements that occur in simulations do not happen automatically, simply because learners are confronted with a diagnostic task, but because learners engage in certain activities during the diagnostic process. If researchers were better able to describe these activities in a common language across domains, they could conduct coordinated research, leading to knowledge accumulation and more efficient learning environments in the future. According to the model presented in Chap. 2 (Chernikova et al., 2022), diagnostic activities are one potential candidate for such a joint language. As a common language that does not a priori imply specific sequences of activities, diagnostic activities can serve as a starting point for analyses, especially of processes focused on confirming hypotheses, with activities such as generating hypotheses, generating and evaluating evidence, and drawing conclusions (Fischer et al., 2014). In situations with a stronger exploratory focus, a different set of activities, such as noticing and knowledge-based reasoning about ongoing observations, might also be a promising conceptualization (Seidel & Stürmer, 2014). In future studies, we hope to find out more not only about the role of diagnostic activities in confirmatory and exploratory diagnostic situations, but also about whether this role differs between individual and collaborative diagnostic situations, across diagnostic topics, or when different sources of information are used. The diversity of the presented simulations thus proves useful, as it will allow researchers to shed light on these questions.

11.2 How Can Learners in Simulations be Supported in Optimizing Learning Outcomes?

It is known from past research on complex learning environments that learners can become overwhelmed and need additional help if learning outcomes are to be optimized (e.g., Glogger-Frey et al., 2016). This assistance can take various forms, so there is not just one solution to this problem. A rather simple form of assistance is the additional explicit presentation of information. Having the necessary knowledge base could help learners in the presented simulations focus on the actual diagnostic task at hand.

For instance, it might help learners to receive input about common mathematical misconceptions among students or various forms of lung diseases to perform well in diagnosing these entities in the presented cases.

Other promising forms of assistance can be found in the scaffolding literature (Belland et al., 2017; van de Pol et al., 2010). One idea would be to include prompts in the simulations that guide participants’ attention to crucial information that is often missed.

Additionally, scaffolding that includes reflection phases could be useful (Mamede & Schmidt, 2017). Stopping the learners’ thought process every once in a while and asking them to reflect on whether they are on the correct path might prevent them from drawing premature conclusions and help them learn more effectively from both their successes and failures in diagnosing.

A third tool would be to let learners take on different roles. Switching from the perspective of the person who conducts the diagnosis to the perspective of an observer or even the patient or student might lead to new insights about diagnostic errors (Stegmann et al., 2012). The presented simulations will not only allow us to see whether explicitly presented information on concepts and procedures as well as scaffolding is helpful, but also which version of this information is most beneficial. It is also important to identify any downsides to additional help, e.g., whether prompts or reflection phases can disrupt learning during the diagnostic process.

11.3 Which Variables Mediate or Moderate the Effects of Instructional Support?

Given that a positive effect of instructional support on learning diagnostic competences in simulations has been found, it would be important to know whether this effect is conditional on other variables. For instance, a potential expertise reversal effect is of interest (Sweller et al., 2003). An expertise reversal effect would mean that beginners benefit from instructional support, whereas more advanced learners might be distracted by the same support features and thus learn less because of them. Furthermore, it should be investigated how important it is that learners feel involved in the simulations and perceive them as authentic. Other variables of interest in this regard include interest, motivation, emotions, and self-efficacy. This set of variables could serve as mediators or moderators of the effect of scaffolding on diagnostic competences. In addition, research should focus on observing the influence of instructional support on learners’ cognitive load and on whether the effects of instructional support on the learning outcome partly depend on how well a learner has developed basic cognitive functions such as shifting and working memory capacity. All of these variables are known to be important in complex learning environments and thus deserve attention in simulations targeting diagnostic competences (Glogger-Frey et al., 2016; Miyake & Friedman, 2012; Paas & van Gog, 2006; Pekrun et al., 2016; Renkl, 2014; Rotgans & Schmidt, 2011; Schwaighofer, Bühner, & Fischer, 2017a; Scoresby & Shelton, 2011; Vollmeyer & Rheinberg, 2006; Witmer & Singer, 1998; Zimmerman, 2000).

11.4 How Can the Simulations be Adapted to Fit Learners’ Needs?

One question that should be investigated in the future is how the presented simulations can be adapted to the needs of different learners so that the largest possible number of learners will benefit from them. Such adaptation can take various forms, and it is possible that simulations will lead to better outcomes if they are designed so that they can easily be adapted to the needs of different groups of learners or even individual learners (Ruiz et al., 2006). One approach would be to give different simulations to learners at different stages of the learning process, e.g., beginners vs. advanced learners, which would address the above-mentioned expertise reversal effect. However, it might also be the case that advanced learners benefit from a range of instructional support measures without detrimental effects (Chernikova et al., 2019), so further research should seek to reveal how relevant the expertise reversal effect is in training diagnostic competences with simulations. Adaptation can also occur within a single simulation. The simulation could give learners the opportunity to seek the help they need, which might even lead them to different parts of the same simulation (Kitsantas et al., 2013). Simulations can also adapt themselves, e.g., in the form of adaptive feedback specific to the performance of individual learners (Bimba et al., 2017). Additionally, the timing of scaffolding in the course of acquiring diagnostic competences can be adaptive. If learners benefit from scaffolding at the beginning but not in later stages of learning, fading scaffolds could be applied (Pea, 2004; Wecker & Fischer, 2011). A related idea is to experiment with the order in which multiple scaffolds are presented to learners, as there are indications in the literature that this order can influence learning gains (Schwaighofer, Vogel, et al., 2017b).

11.5 Overview of Future Contributions and their Potential Impact

The simulations described in this book are well positioned to contribute to all of these questions, with the specifics described at the end of the respective chapters. However, to demonstrate the many ways in which the simulations will help answer the four overarching research questions, we want to give an illustrative selection of how the projects will address them. Analyses of the diagnostic processes central to optimal learning outcomes (Question 1) will be covered, for example, by the simulations from Chaps. 3 and 7, which analyze learners’ notes and the influence of the distribution of information in a collaborative diagnostic process, respectively. The simulation in Chap. 8 tackles Question 2 about support for learners by implementing automated feedback, while the simulation in Chap. 10 will use external collaboration scripts. To find out more about mediating and moderating variables (Question 3), the simulations from Chaps. 4 and 5 will be especially useful. The corresponding projects plan to analyze the effects of variables such as interest, self-concept, authenticity, and immersion. The fourth and final question, about adapting simulations to individual learners’ needs, will be a focus, for example, of the projects presented in Chaps. 6 and 9. They plan to look at differences between beginners and more experienced learners and at the influence of the typicality of a case (Papa et al., 1996).

Having the simulations presented throughout this book as tools to answer the questions laid out in this last chapter will be important not only for improving the model of diagnostic reasoning presented in Chap. 2 of this book (Chernikova et al., 2022). The answers are also key to ensuring that the highest possible number of learners benefit from the large-scale implementation of simulations as a learning tool for diagnostic competences. One important step in this process is interdisciplinary research, as presented in this book, which brings together experts from different fields and allows researchers to explore whether principles for constructing beneficial simulations transfer across domains. One assumption that can be tested is whether the same principles apply to cognitively similar simulations across domains even if they might not apply to simulations within one domain that have different cognitive requirements.

The lessons learned from such an interdisciplinary approach to training diagnostic competences might also be transferable to other skills relevant in higher education. The cognitive skills that education systems expect higher education graduates to master are complex, and ways to test and foster them are so far scarce (Opitz et al., 2017; Zlatkin-Troitschanskaia et al., 2015). We are confident that the work presented in this book can make a contribution to addressing this problem through interdisciplinary research.