Making efficient decisions in professional fields is impossible without being able to identify, understand, and even predict situations and events relevant to the profession. Therefore, diagnosis is an essential part of professional competences in different domains. It involves problem identification, analysis of context, and application of obtained knowledge and experience to make practical decisions. The two fields of medical and teacher education specifically focus on the processes of collecting and integrating case-specific information to reduce uncertainty and make practical decisions. Teacher education deals with teachers’ assessments of students’ knowledge and learning processes, and medical education investigates primarily clinical reasoning to diagnose patients’ diseases accurately. Despite these different professional contexts and relevant situations, the diagnostic processes and underlying competences required to come to medical or educational decisions are similar. This similarity has resulted in a call to explore a closer link between the two research traditions (e.g., Gartmeier et al. 2015; Stürmer et al. 2016).

Although quite strong empirical evidence supports learning through problem-solving in postsecondary education in general (Belland et al. 2017; Dochy et al. 2003), questions about the use of problem-solving for advancing diagnostic competences in medical and teacher education remain open and call for more systematically synthesized evidence. Some empirical studies in medical and teacher education indicate positive effects of additional scaffolding (e.g., structured reflection) in diagnostics-related instruction (Ibiapina et al. 2014; Klug et al. 2016; Mamede et al. 2014). However, other studies report no added value of scaffolding or even negative effects (Heitzmann et al. 2018a; Heitzmann et al. 2015; Stark et al. 2011). This variability in effects leads to further open questions. On the one hand, there are questions related to the optimal use of scaffolding to facilitate diagnostic competences. On the other hand, there are questions about the role of other factors, such as the nature of a diagnostic situation or the prior professional knowledge of learners, which can also influence the outcomes.

This meta-analysis aims at providing answers to these questions and enhancing the scientific understanding of the various factors and conditions (contextual, instructional, or personal) that facilitate diagnostic competences in the fields of medical and teacher education. Moreover, this meta-analysis contributes to identifying the most effective scaffolding procedures to support learning through problem-solving, depending on the levels of prior professional knowledge the learners have already acquired. This contribution provides insights for educators regarding the design and use of learning environments to advance professional diagnostic competences.

Diagnostic Competences in Medical and Teacher Education

Medical diagnosis aims at finding the cause of a disease and the appropriate courses of action for either further diagnosis or treatment (Charlin et al. 2000). Diagnostic processes in medical education focus on examining patients' body functioning, identifying pathological processes and possible risk factors, and preparing decisions about the most appropriate treatment. Diagnosing in teacher education aims at optimizing the use of instructional methods to close the gap between the present and desired states of the learners' competences (Helmke et al. 2012). Diagnostic processes in teacher education focus on examining students' characteristics relevant for learning and performance (e.g., motivation, intelligence), defining students' academic achievement and performance, and analyzing classroom situations, the impact of instruction, and contextual factors (Glogger-Frey et al. 2018). More specifically, diagnosing in teacher education first focuses on comparing the current state of learners' knowledge and skills to predefined learning objectives, and subsequently aims at identifying misconceptions, difficulties, or particular needs of learners to choose the most appropriate instructional support to meet both learners' needs and learning objectives. While the diagnostic processes clearly differ between the two fields, they also share an obvious commonality: diagnosing a patient's health status or a learner's understanding is a goal-oriented process of collecting and integrating case-specific information to reduce uncertainty in order to make medical or educational decisions (Heitzmann et al. 2015).

Recent conceptualizations further suggest that accurate and effective diagnosing requires advanced diagnostic competences. These entail the coordinated application of different types of knowledge (e.g., Shulman 1987; Stark et al. 2011) relevant for professional diagnostic problems as well as following particular steps that lead to diagnostic decisions. Research on diagnostic competences (Heitzmann et al. 2018b) emphasizes the importance of learners’ characteristics (i.e., prior professional knowledge base) and the diagnostic processes taking place during learning and assessment, and suggests indicators for the quality of processes and outcomes, as discussed in the following sections.

Medical education and teacher education entail different professional contexts and relevant situations. The main task of a teacher is to support the learning of individuals in a class. A medical doctor's main task is to support individuals in achieving and sustaining good health. Another difference is the conceptual professional knowledge base of doctors and teachers. Nonetheless, commonalities in the professional practice of teachers and physicians can also be found: In both professions, decision-making is based on characteristics of other people's education or health, respectively. Interventions should be based on an accurate diagnosis of a current state, often to infer causes of problems or future potentials. Therefore, different types of professional knowledge are coordinated in different diagnostic activities, such as generating hypotheses and evidence, drawing conclusions, and communicating the results. In addition, in medical as well as in teacher education, a main focus is the integration of case-specific information to reduce uncertainty and thus make practical decisions. Higher education programs in both fields aim to provide their students with hands-on participation in, and reflection on, practical experience with the goal of advancing students' competences (e.g., Grossman et al. 2009).

Conceivably, the effects of instructional interventions aimed at facilitating diagnostic competences may be more similar across domains in cognitively similar situations that demand the same diagnostic activities than they are within the same domain across cognitively dissimilar situations. For example, an instructional intervention that guides learners to generate hypotheses early and instructs them on how to prioritize evidence might benefit both prospective teachers and prospective medical practitioners.

Even though we are aware of the differences between teacher and medical education, we also point to the commonalities when proposing to link research in those domains to improve our understanding of the domain-general and domain-specific aspects of instructional interventions to facilitate diagnostic competences.

Mainly, these commonalities, together with the different empirical research traditions, have resulted in a call to explore the possibilities of a closer link between the two domains and the respective research traditions (e.g., Gartmeier et al. 2015; Stürmer et al. 2016; Trempler et al. 2015). One might then ask why we study only these two domains and not others with professional practices in which diagnostic processes are probably relevant as well, such as car mechanics or law. Although we believe this would be a promising new route of research, there currently is not enough of a joint conceptual basis to expand the comparison to such domains. Even more important in the context of this meta-analysis, little empirical research and even less experimental work on scaffolding for the development of diagnostic competences has been pursued in these other fields.

Professional Knowledge as a Prerequisite for Advancing Diagnostic Competences

Professional knowledge (together with cognitive skills and motivational factors) is one of the essential facets of competence and therefore one of the most important learning outcomes of professional training (Blömeke et al. 2015). The boundary approach to competence (Stoof et al. 2002) emphasizes that competence is more than the sum of domain-specific knowledge, skills, and motivational factors; these building blocks are strongly interconnected, but have different weights at different stages of learning and competence development. Nevertheless, the common way to assess the level of competence is through assessing the abovementioned building blocks. Professional knowledge is the most commonly addressed component. It is measured frequently and objectively during assessment phases (as conceptual knowledge in written and oral tests, but also as practical knowledge and knowledge application, measured at the level of skills). At the same time, professional knowledge is a fundamental prerequisite for the further development of skills and competences. Professional knowledge defines the capacity to learn from different learning materials and instructional support (i.e., learners with low levels of prior knowledge might require more instructional support and guidance than advanced learners) and therefore might influence the choice of instructional approaches.

In the domain of medical education, the professional knowledge required for diagnosing has been differentiated into (a) biomedical knowledge, operationalized as knowledge about normal functioning and about the pathological processes causing a disease (Kaufman et al. 2008), and (b) clinical knowledge, including knowledge about symptoms, symptom patterns, factors indicating a high likelihood of particular diseases, and knowledge about appropriate treatment (Van de Wiel et al. 2000). Other researchers, such as Stark et al. (2011), suggest a more general differentiation of knowledge into conceptual knowledge (interrelations of terms) and practical knowledge (knowing how to apply conceptual knowledge to cases). They further divide practical knowledge into two components: strategic (knowledge of problem-solving steps) and conditional (knowledge of the conditions for successful application of problem-solving steps).

In the domain of teacher education, Shulman (1987) has proposed distinguishing among different facets of teacher knowledge: content, pedagogical content, and pedagogical knowledge. Content knowledge is operationalized as knowledge of the subject matter (e.g., division rules in arithmetic, photosynthesis processes in biology). Pedagogical content knowledge includes both knowledge of content ("what") and knowledge of pedagogical principles to deliver this content ("how"), as well as typical misconceptions and explanations. Pedagogical knowledge is generic across particular domains and includes knowledge about the functioning of memory, learning, and motivation; classroom management; and general teaching strategies. In Shulman's conception, teacher knowledge comprises aspects of acquiring conceptual knowledge regarding these three facets, as well as practical aspects with regard to acting in situations relevant to teaching. The same perspective is taken by Hiebert et al. (2002), for example, when proposing a professional knowledge base for teacher education that integrates conceptual and practical knowledge.

Although clear differences exist between the professional knowledge bases required in the fields of medical and teacher education (e.g., knowledge of the symptoms of a disease versus knowledge about common misconceptions in math, or going through patient examination checklists versus formulating questions for a test), the distinction between conceptual and practical aspects seems to be a common denominator that can therefore be used as a cross-domain concept of professional knowledge. Important open questions involve how the different types of knowledge are integrated and applied in decision-making and problem-solving contexts and whether instructional support can facilitate the acquisition of the different knowledge types with similar effectiveness.

In summarizing the research in this field, it seems important to consider the possible effects of prior professional knowledge as a prerequisite for training and advancing diagnostic competences through means of instruction. In addition, studying differential effects regarding the advancement of conceptual and practical knowledge as learning outcomes is necessary. Both aspects are addressed in this meta-analysis. Conceptual and practical professional knowledge in this meta-analysis refers to the content of the measured domain-specific knowledge, that is, what the knowledge is about. The current research is not specific enough to infer the type of knowledge representations, declarative or procedural, suggested by ACT-R (Anderson et al. 2004).

Learning Environment and Learning Processes

Diagnostic processes have been conceptualized as a set of epistemic activities (Fischer et al. 2014), including (a) identifying a problem, (b) questioning, (c) generating hypotheses, (d) constructing artifacts, (e) generating evidence, (f) evaluating evidence, (g) drawing conclusions, and (h) communicating process and results. Facilitating these activities during learning phases seems essential for the advancement of competences because it provides multiple opportunities for learners to engage in various diagnostic practices. According to theories of expertise and skill development (e.g., Van Lehn 1996), practice opportunities, combined with sufficient professional knowledge, can facilitate diagnostic competences in higher education.

The training of diagnostic competences that includes the application of epistemic-diagnostic activities has been discussed by a number of researchers in light of complex problem-solving. A strong body of evidence shows that problem-centered approaches, such as problem-based learning, case-based learning, or learning through problem-solving, are effective instructional approaches for facilitating skill-related outcomes such as diagnostic competences in postsecondary education (Belland et al. 2017; Dochy et al. 2003).

However, exposure to complex and ill-structured problems, especially in the early stages of expertise development, is considered problematic from the perspective of cognitive load theory (Renkl and Atkinson 2003; Sweller 2005). According to this theory, learning by complex problem-solving should be effective for learners with knowledge sufficiently organized to enable self-regulated problem-solving, whereas learners missing these prerequisites might be overburdened. Contrary to this general claim from cognitive load theory about problem-centered approaches, Hmelo-Silver et al. (2007) regard problem-centered approaches as suitable for learners with little prior knowledge if high levels of scaffolding accompany the challenging tasks. Quintana et al. (2004) suggest that scaffolding enables a learner to achieve goals (i.e., solve problems) through modifying the task and reducing possible pathways from which to choose, and through prompts and hints to help the learner coordinate the steps in problem-solving or interaction. In meta-analyses, scaffolding has shown positive effects on various learning outcomes, including complex competences (Belland et al. 2017; Devolder et al. 2012; Gegenfurtner et al. 2014).

Instructional Support for Advancing Diagnostic Competences

Guided Problem-Centered Instruction

In problem-centered instructional approaches, learners solve authentic cases with varying levels and types of instructional support (Belland et al. 2017). Problem-centered instructional approaches include, by definition, "problem-based learning, modeling/visualization, case-based learning, design-based learning, project-based learning, inquiry-based learning, […] and problem solving" (Belland et al. 2017, p. 311). These instructional approaches have frequently been used in the past to facilitate diagnostic competences in medicine (Barrows 1996) and teacher education (Seidel et al. 2013). Theoretical arguments exist for the effectiveness of problem-solving (Anderson 1983; Jonassen 1997), case-based learning (Kolodner 1992), and problem-based learning (Barrows 1996) in advancing diagnostic competences. Previous meta-analytic results and reviews (Albanese and Mitchell 1993; Belland et al. 2017; Dochy et al. 2003; Thistlethwaite et al. 2012) support the effectiveness of these instructional approaches.

Scaffolding

The most prominent definition of scaffolding (Wood et al. 1976) considers it to be the process of supporting learners by controlling those elements of a task that are initially beyond their capacity. This objective is accomplished by six scaffolding functions: (a) sparking situational interest, (b) reducing the complexity and difficulty of tasks, (c) keeping learners focused on their goal, (d) highlighting crucial features of a task, (e) controlling learners' frustration, and (f) providing solutions and models of a task. The concept of scaffolding builds on Vygotsky's (1978) notion of the Zone of Proximal Development, which includes challenging tasks a learner can perform successfully with external guidance but would not yet be able to perform independently. According to recent literature reviews (Belland 2014; Reiser and Tabak 2014), the key components of scaffolding are formative assessment and adapting the level of support to the performance or prerequisites of the learner. Scaffolding can focus on cognitive, meta-cognitive, motivational, and strategic outcome measures (Hannafin et al. 1999).

In agreement with Belland (2014), we apply a comprehensive definition of scaffolding that includes types of scaffolding with or without adaptation of support during the learning process (i.e., fading or adding support). Recent research on scaffolding in the context of learning through problem-solving suggests several techniques to structure and guide the facilitation of competences: (a) providing examples, which are partial or whole problem solutions or target behaviors (e.g., Renkl 2014); (b) providing prompts, or hints about how to handle materials or how to proceed with solving the problem (e.g., Quintana et al. 2004); (c) assigning roles to actively involve learners in learning tasks (e.g., Strijbos and Weinberger 2010); and (d) inducing reflection phases, which allow learners to think about the goals of the procedure, analyze their own performance, and/or plan further steps (e.g., Mamede and Schmidt 2017). All forms of scaffolding fall on a continuum from realizations of only a single element of a scaffold to full realizations in which all elements are present.

Providing Examples

In example-based learning, learners retrace the steps of a solution (worked example) or observe a model displaying the process of problem-solving (modeling example) before they solve problems independently (Renkl 2014). Example-based learning with worked and modeling examples has already been shown to be effective for the advancement of a variety of complex cognitive skills, such as scientific reasoning (Kirschner et al. 2006; Fischer et al. 2014) and scientific writing (Zimmerman and Kitsantas 2002), which have some similarities to diagnostic competences. The worked example effect and the underlying cognitive load theory (Sweller 1994) suggest that problem-solving without scaffolding at early stages of knowledge or skill acquisition can lead to an excessive amount of information and inhibit schema acquisition (Renkl 2014). Worked examples are typically highly effective for beginners, but reduced and even negative effects have been reported for intermediate learners (Van Gog and Rummel 2010). These effects, however, might be different for modeling examples, which are effective when learners possess sufficient prior knowledge to comprehend and evaluate the complex skills they observe (Van Gog and Rummel 2010). Our analysis allows estimating the effects of providing examples to facilitate diagnostic competences and therefore extends findings on complex cognitive skills to the advancement of competences.

Providing Prompts

Prompts refer to information or guidance offered to learners during the learning process in order to raise its effectiveness (Berthold et al. 2007). Various types of prompts have differing objectives. Self-explanation prompts put an emphasis on the verbalization of reasoning and elaboration processes while solving a task (Heitzmann et al. 2015; Quintana et al. 2004). Meta-cognitive prompts raise awareness of the meta-cognitive processes that control self-regulated learning (Quintana et al. 2004). Collaboration scripts assist the regulation of social interaction in interactive learning settings (Fischer et al. 2013; Vogel et al. 2017). The open questions are whether providing prompts significantly contributes to the advancement of diagnostic competences in teacher and medical education and whether the effects differ for students with lower and higher levels of prior professional knowledge.

Assigning Roles

Role-taking can be considered a type of scaffolding in which the full complexity of a situation is reduced by assigning a specific role with limited tasks or a limited perspective on the full task. In teacher education, teacher and student are typical roles; in medical encounters, doctor and patient are typical. Additionally, learners can be assigned the role of observer. A large body of empirical research suggests that complex skills can be acquired effectively in the agent (i.e., teacher or doctor) role (Cook 2014). Results on acquiring diagnostic competences in the role of the observer are still lacking, but Stegmann et al. (2012) showed that communication skills can be acquired in the observer role as effectively as in the agent role. Even though systematic research on the acquisition of diagnostic competences in the roles of patient and student is also still lacking, it seems likely that learners may gain specific diagnostic competences and knowledge through displaying clinical symptoms or students' mistakes and behaviors. Apart from the described results and mechanisms, findings on differences between beginners and intermediate learners are currently lacking.

Inducing Reflection Phases

The positive effects of reflection on learning were first proposed by Dewey (1933). More recently, a comprehensive definition of reflection was given by Nguyen et al. (2014): "Reflection is the process of engaging the self in attentive, critical, exploratory and iterative interactions with one's thoughts and actions, and their underlying conceptual frame, with a view to changing them and with a view on the change itself" (p. 1182). Reflection can be induced through guided reflection phases and can take place before, during, or after an event. Reflection can occur in a social context (Nguyen et al. 2014) or individually (e.g., by writing reflective journals; O'Connell and Dyment 2011). Different types of reflection have been reported to effectively foster the acquisition of diagnostic competences in medicine (Sandars 2009) and in teacher education (Beauchamp 2015).

Reflection could facilitate diagnostic competences for three major reasons. First, reflection phases add an extra pause for the learner. Beginners might use this pause to better retrieve and apply conceptual knowledge with less time pressure (Renkl et al. 1996). Advanced learners might benefit significantly by having time not only to activate and better integrate conceptual knowledge and previous experience but also to evaluate the selected strategy and think about alternatives. Second, learners may self-generate feedback internally, which advances their learning during reflection (Butler and Winne 1995; Nicol and Macfarlane-Dick 2006). Third, reflection may support planning the subsequent steps of the diagnostic process. The current meta-analysis allows estimating the effects of reflection in fostering the diagnostic competences of learners with high and low levels of prior professional knowledge.

Scaffolding and Self-Regulation

A convincing framework to integrate the different types of scaffolding does not yet exist. Heuristically, we suggest building on the very idea of scaffolding as a temporary shift of control over the learning process from a learner to a teacher or a more advanced peer (e.g., Tabak and Kyza 2018). We further suggest locating the scaffolding types at different positions on a scale of self-regulation of problem-solving, with examples located at the rather low end, followed by role assignments and prompts with increasing levels of self-regulation potential. Reflection phases, followed by unscaffolded problem-solving, would be located at the high end of the self-regulation scale.

Although each approach adopts the idea of a transition from other-regulation to self-regulation at some stage (e.g., fading of steps in worked examples, internalizing collaboration scripts), the suggested classification allows estimating the amount of content support and guidance initially provided by each type of scaffolding by arranging the scaffolding types along a continuous dimension with increasing degrees of freedom for the learner. Similar ideas, such as classifying scaffolding measures based on the amount and kind of guidance, were introduced by Brush and Saye (2002). These authors suggested a dichotomous categorization into (a) "soft" scaffolding, focused on fostering meta-cognitive skills and self-regulation, and (b) "hard" scaffolding, focused on content, that is, the conceptual or procedural knowledge required to solve the task, providing learners with full or partial solutions to foster learning. Brush and Saye (2002) claim that "hard" scaffolding is beneficial in the initial stages of learning, whereas "soft" scaffolding is more beneficial once initial knowledge has been acquired.

Context Factors in Facilitating Diagnostic Competences

The characteristics of the real-life situations in which diagnostic processes take place are important considerations in any facilitation. In the domain of medical education, these situations include identifying the cause of a disease to plan further treatment steps; in the domain of teacher education, they include assessing students' knowledge levels and identifying misconceptions to adjust the teaching strategy accordingly or to suggest additional support.

With regard to the nature of the diagnostic situation, we adopt the classification by Heitzmann et al. (2018b), who distinguish two dimensions. The first dimension is the information base (i.e., where the information for the diagnosis comes from), spanning the spectrum from document-based to interaction-based. Document-based diagnosis relies on information available in written or otherwise recorded form (laboratory findings, x-ray images, students' academic achievement scores, students' homework). There is little or no time pressure for the analysis of this information, which can be accessed several times if needed, and reflection is always possible. Interaction-based diagnosis relies on information received through communication with patients, students, or their families (e.g., anamnestic interviews, oral exams, teacher-guided in-class discussions). Information from interaction usually needs to be processed in "real time," which involves more time pressure and fewer opportunities for reflection.

The second dimension to describe diagnostic situations according to Heitzmann et al. (2018b) is on a continuum from individual diagnoses to the necessity for collaboration and communication with other professionals during the diagnostic processes. Empirical studies provide evidence that collaboration and social aspects of working on the case can be problematic even for experts, requiring additional knowledge and skills (e.g., Kiesewetter et al. 2013).

This meta-analysis uses context factors to address (a) the role of the diagnostic situation in organizing learning processes to facilitate diagnostic competences and (b) the generalizability of findings across domains.

Interaction Between Professional Knowledge Base and Instructional Support

The professional knowledge base of the learner informs the requirements for the organization of learning processes, the choice of learning and teaching strategies, and the amount and type of guidance (Renkl and Atkinson 2003; Sweller 2005; Van Lehn 1996). It is therefore essential to explore the effectiveness of different instructional support measures in relation to the learner's prior knowledge base. Such a determination would inform practical considerations in the development of educational programs and address the lack of empirical evidence concerning the role of different types of scaffolding in facilitating diagnostic competences for learners with lower and higher levels of prior professional knowledge.

The theoretical framework and the categorization of scaffolding procedures into a continuum based on the degree of self-regulation of problem-solving assume that learners who already have sufficient conceptual knowledge and experience in its application would benefit more from guidance that allows higher degrees of freedom and the opportunity for more self-regulation, such as reflection phases built into the problem-solving process. Learners who do not have sufficient prior conceptual or procedural knowledge in the field are expected to benefit more from types of scaffolding that provide them with conceptual knowledge and heuristics for decision-making, such as instructional support that provides examples (e.g., professional solutions, worked-out examples, or behavioral models) or assigns roles. This meta-analysis aims at generating evidence on whether the professional knowledge base moderates the effects of scaffolding types on the advancement of diagnostic competences.

Research Questions

RQ1: To what extent can instructional support facilitate diagnostic competences in higher education?

We assume that, in line with research on learning (Hmelo-Silver et al. 2007; Kirschner et al. 2006; Renkl and Atkinson 2003; Van Lehn 1996) and previous meta-analyses on the effects of instructional support on the acquisition of complex cognitive skills and competences (Belland et al. 2017; Devolder et al. 2012; Dochy et al. 2003; Gegenfurtner et al. 2014), instructional support would have a positive effect on the development of diagnostic competences in medical as well as in teacher education.

RQ2: What is the role of professional knowledge in the acquisition of diagnostic competences?

We assume that prior professional knowledge would be a significant moderator of the effects of instructional support to facilitate diagnostic competences, as professional knowledge is an essential part of professional competence (Blömeke et al. 2015; Stoof et al. 2002). In general, we assume that instructional support would have higher effects on diagnostic competences of learners with low levels of prior knowledge.

RQ3: How do problem-solving and different types of scaffolding facilitate diagnostic competences?

We assume that introducing elements of problem-solving as an instructional approach, as well as the different types of scaffolding, would all have positive yet distinct effects on the advancement of diagnostic competences.

RQ4: To what extent do effects of learning through problem-solving and scaffolding depend on professional knowledge?

We assume that the level of prior professional knowledge will moderate the effects of scaffolding. We further assume that learners with high prior knowledge would benefit more from the types of scaffolding requiring higher levels of self-regulation, whereas learners with low levels of prior knowledge would benefit more from the types of scaffolding with lower levels of self-regulation and more guidance (Sweller 2005; Van Lehn 1996).

RQ5: What are the roles of contextual factors (i.e., domain, need to collaborate, source of information to use for diagnosing) in facilitating diagnostic competences?

We assume that contextual factors would affect the advancement of diagnostic competences and would therefore be significant moderators. The analysis should also provide evidence on the generalizability of the findings across the domains of medical and teacher education.

Method

Inclusion and Exclusion Criteria

The inclusion criteria were based on outcome measures reported, research design applied, and statistical information provided in the studies. We discuss these criteria below in more detail.

Diagnostic Competences

The studies eligible for inclusion had to focus on the facilitation of diagnostic competences, defined as dispositions enabling the goal-oriented gathering and integration of information in order to make medical or educational decisions. In particular, studies in teacher education had to be related to the measurement of professional vision or formative assessment. The outcome measures had to address diagnostic quality and/or one or more of several epistemic-diagnostic activities (Fischer et al. 2014): identifying the problem, questioning, generating hypotheses, constructing artifacts, generating and evaluating evidence, drawing conclusions, and communicating processes and results. Any studies that did not include any epistemic-diagnostic activities or did not report measures of diagnostic activities (or professional vision or formative assessment), as well as studies that focused on the acquisition of motor skills in medical education, were excluded from the analysis. This meta-analysis focuses only on objective measures of learning (written or oral knowledge tests, assessment of performance based on expert rating, or any quantitative measures, including but not limited to frequency of behavior or number of procedures performed correctly). Studies that reported only learners' attitudes, beliefs, or self-assessment of learning or competence were excluded from the analysis.

Research Design

The aim of this meta-analysis was to make causal inferences regarding the effect of instructional support on diagnostic competences, so the studies eligible for the analysis had to have an experimental design with at least one treatment and one control condition. The treatment condition had to include instructional support measures directed at facilitating diagnostic competences that were not included in the control condition. Studies that did not report any intervention (i.e., studies on tool or measurement validation), studies that only compared multiple treatment variants (e.g., instruction with few prompts versus many prompts, or best-practice examples versus erroneous examples), and studies that did not provide any control condition, such as a waiting condition or a historical control, were excluded from the analysis.

Study Site, Language, and Publication Type

Eligible studies were not limited to any specific study site. To ensure that the concepts and definitions of the core elements coded for the meta-analysis were comparable and relevant, only studies published in English were included in the analysis. However, the countries of origin of the studies and the languages in which they were conducted were not restricted. Different sources, both published and unpublished, were considered to ensure the validity and generalizability of the results. There were no limitations regarding publication year.

Effect Sizes

Eligible studies were required to report sufficient data (e.g., sample sizes, descriptive statistics) to compute effect sizes and identify the direction of scoring. If a study reported information about the pretest effect size, it was used to adjust for pretest differences between treatment and control conditions.
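
To make the effect size computations concrete, the following is a minimal Python sketch of how Hedges' g can be derived from the descriptive statistics of a primary study. All numbers are hypothetical, and subtracting the pretest effect from the posttest effect is one common adjustment assumed here for illustration, not a formula reported in this section.

```python
import math

def hedges_g(m_treat, sd_treat, n_treat, m_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference with Hedges' small-sample correction."""
    df = n_treat + n_ctrl - 2
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat ** 2 +
                           (n_ctrl - 1) * sd_ctrl ** 2) / df)
    d = (m_treat - m_ctrl) / pooled_sd   # Cohen's d
    correction = 1 - 3 / (4 * df - 1)    # Hedges' correction factor J
    return correction * d

# Hypothetical posttest and pretest statistics for one primary study
g_post = hedges_g(7.9, 2.1, 30, 6.8, 2.3, 31)   # posttest effect
g_pre = hedges_g(5.1, 2.0, 30, 5.0, 2.2, 31)    # pretest effect

# One common adjustment (an assumption here, not the paper's stated
# formula): subtract the pretest effect from the posttest effect.
g_adjusted = g_post - g_pre
print(round(g_post, 2), round(g_pre, 2), round(g_adjusted, 2))
```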

Search Strategies

To perform the meta-analysis, the following databases were screened for eligible empirical studies: PsycINFO, PsyINDEX, PsycARTICLES, ERIC, and MEDLINE. The search terms used were (professional vision OR formative assessment* OR diagnost* competenc* OR diagnost* skill* OR diagnost* reason* OR clinical reason*) AND (train* OR teach*). Additionally, the first authors of eligible studies were contacted to obtain information about other published or unpublished manuscripts, and reference lists were checked for further studies. The search was run on February 24, 2018, and resulted in 7510 documents (after deletion of duplicates).

Coding Procedures

The first phase involved screening. Screening for eligibility was conducted based on the inclusion/exclusion criteria mentioned above. A study was excluded from the analysis only if it provided enough information and met one or more exclusion criteria. For example, if the study title provided enough information to exclude the study, it was excluded at that stage. If the information was insufficient for exclusion, the study was included in further screening (abstract or full text). The work was shared between the fourth and one of the two first authors. The fourth author individually examined titles, abstracts, and full texts of the studies to identify those with the eligible search terms "diagnostic competence," "diagnostic skill," "diagnostic reasoning," or "clinical reasoning." Then, the first author screened the abstracts and full texts for the search terms "professional vision" and "formative assessment," and marked studies that needed further examination. No interrater agreement was determined for the screening stage. However, in regular meetings, the authors of this meta-analysis discussed studies with insufficient information or complex study designs with respect to eligibility until complete agreement on inclusion or exclusion of a study was achieved.

The second phase (coding) used a previously piloted coding scheme, which was refined until a sufficient interrater reliability was achieved. For an overview of the coding manual, please see Appendix Tables 5, 6, 7, 8, and 9. Features of primary studies (study design, use of instructional support measures for treatment and control groups, professional knowledge base, and context) were independently double coded by one of the authors and a trained research assistant. The training procedure involved both individuals coding one of the studies together, and then each coder coded another study of the sample independently, followed by a discussion of the differences in the coding. The satisfactory interrater agreement of the subsequent ratings (above .75) pointed to the success of the training procedure. Remaining disagreements on the ratings were resolved in regular meetings of the coders until 100% agreement on all codes was achieved. The data extracted from eligible primary studies included study characteristics, independent and dependent variables, and statistical values needed for calculating effect sizes.
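
As an illustration of how such double coding might be checked, the sketch below computes Cohen's kappa for two raters. The section reports agreement above .75 without naming the statistic, so kappa, like the invented codes, is an assumption for illustration only.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters assigning categorical codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement expected from each rater's marginal frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical double coding of the "roles" moderator for ten studies
rater_1 = ["agent", "agent", "none", "other", "agent",
           "none", "agent", "agent", "other", "none"]
rater_2 = ["agent", "agent", "none", "agent", "agent",
           "none", "agent", "agent", "other", "none"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.83
```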

Study Characteristics

Study characteristics extracted for each study included information about authors and publication year as well as information about sample size. Additionally, studies were coded for study design (i.e., random distribution of treatment and control condition). An overview of study characteristics and moderators is presented in Table 1. Some more descriptive statistics about primary studies including participants, measurements used, and a summary of results are presented in Table 2.

Table 1 Study characteristics
Table 2 Summary of the primary studies

Coding for the Moderators

The coding scheme, including the professional knowledge base, instructional support and scaffolding, and diagnostic context, was based on a recent conceptual framework (Heitzmann et al. 2018b). The domain was coded as either medical or teacher education.

The professional knowledge base was coded as "low" if the participants of a study had little or no exposure to a similar context (i.e., no or low prior conceptual/procedural knowledge regarding the assignments during the learning phase) or were in the initial phase of their training (also indicating a rather low prior professional knowledge base at the conceptual and procedural levels). It was coded as "high" if learners already had experience, had been exposed to a similar context, or were in the final phase of their training (indicating that learners already had a high professional knowledge base at the conceptual and procedural levels).

Instructional support was coded as “yes” (included) or “no” (not included) for the following categories:

  • Problem-solving (using a problem-centered approach) was coded as “included” if learners received cases/problems and made diagnostic decisions themselves, or “not included.”

  • Examples were coded as included if learners observed modeled behavior, example solutions, or worked examples at some time during the training, or not included.

  • Prompts were coded as included if learners received hints on how to handle the learning material to support them in solving their diagnostic problem. Additionally, the type of prompts (time of exposure) was coded as "during" if prompts were provided during completion of the learning task before the diagnosis/decision, as "after" if support was given after the diagnosis/decision had been made, as "longterm" if the support was provided at various steps of a diagnostic process taking place over a long period of time (e.g., reflective diaries, longitudinal studies), and as "mixed" if more than one type of prompt was present. This rather rough clustering was chosen because the number of studies was insufficient for a more fine-grained analysis.

  • Roles were coded as “included” if learners acted as physicians or teachers (“agent”), observers, patients, or students (“other”) at least part of the time, or as “not included.”

  • Reflection phases were coded as included if learners were encouraged to think about the goals of the procedure, analyze their own performances, and plan further steps, or “not included.”

The diagnostic situation included (a) the information base, which was coded as "interaction-based" if the information to diagnose was gathered through real or simulated interactions, or as "document-based" if the information was gathered from a document or a video with diagnostic information without a possibility to interact; and (b) the processing mode, which was coded as "collaboration" if collaboration during the diagnostic process was necessary, or as "individual" if the diagnosis/decision was reached by a single individual.

Diagnostic competences were coded as outcome measures. The measure of diagnostic competence was coded as “procedural” if application of knowledge in solving diagnostic cases was measured at the posttest. It was coded as “strategic” if application of knowledge to a specific case was not required, but rather the knowledge was measured on a conceptual level concerning the strategy of the diagnostic process (e.g., learners were asked for the diagnostic steps without applying their knowledge to a case). Diagnostic competences were coded as “conceptual” if diagnostic steps or processes were measured on the conceptual level and concerned terms or interrelations of terms (as an example in medicine, knowledge of liver disease); in teaching, it could refer to understanding learning processes.

Studies reporting comparison of two levels of the same moderator (i.e., comparing students with high versus low levels of prior professional knowledge or comparing prompts during versus after the diagnosis processes) within one study were coded as within-study effects (“WSE”) and were excluded from respective moderator analyses.

Statistical Methods

Calculation of the Effect Sizes and Synthesis of the Analysis

This meta-analysis used a random-effects model and an adjusted effect size estimation (Hedges' g). Because many of the included studies had complicated designs with multiple comparisons, and therefore correlated effect sizes within samples, we employed meta-regression with robust variance estimation (Tanner-Smith et al. 2016). However, robust variance estimation models are intended neither to provide precise variance parameter estimates nor to test null hypotheses regarding heterogeneity (Tanner-Smith et al. 2016). To overcome these limitations, we used additional meta-analytic procedures recommended for subgroup analysis (Borenstein et al. 2009). To get a representative result, we used only one effect size per study in the moderator analysis. If a study reported multiple effects, a small-scale meta-analysis was run to synthesize the results within that study before including the effect in the summary effect estimation. We also used confidence intervals to assess the significance of an effect. We used multiple heterogeneity estimates (Q statistics, τ2, I2) to determine the variance of the true effect sizes between studies, its statistical significance, and the proportion of the total variation attributable to heterogeneity rather than chance. In addition, we used the thresholds suggested by Higgins et al. (2003) to interpret I2: 25% for low heterogeneity, 50% for medium heterogeneity, and 75% for high heterogeneity.
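
A minimal sketch of the standard random-effects computations referenced here, using the DerSimonian-Laird estimator for τ2 together with Q and I2. It is illustrative only, does not reproduce the robust variance estimation workflow, and the effect sizes are hypothetical.

```python
import math

def random_effects_summary(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q, tau^2, and I^2."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    mean_fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - mean_fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % true heterogeneity
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    mean_re = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    ci = (mean_re - 1.96 * se, mean_re + 1.96 * se)      # 95% confidence interval
    return mean_re, ci, q, tau2, i2

# Hypothetical per-study effect sizes (Hedges' g) and their variances
g_values = [0.15, 0.42, 0.70, 0.31, 0.55]
variances = [0.04, 0.02, 0.06, 0.03, 0.05]
print(random_effects_summary(g_values, variances))
```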

Assessment of Publication Bias and Questionable Research Practices

This meta-analysis on the effects of instructional support on facilitating diagnostic competences includes primary studies from medical and teacher education and combines studies with large and relatively small samples. To address these issues, we used a range of statistical methods to control and correct for possible publication bias, questionable research practices, and other manipulations, in order to ensure sufficient power, validity, and generalizability of the findings. Because the approaches used to detect and estimate publication bias rest on different assumptions and have different limitations, we used a combination of these methods, with the expectation that a strong indication either for or against publication bias would point in the same direction across all methods applied.

The first approach is based on a graphical representation of the relationship between effect sizes and standard errors. Egger's test assumes that, in the absence of publication bias and questionable research practices, studies are evenly distributed on both sides of the average effect; if publication bias is present, reported effect sizes correlate with sample sizes (Sterne and Egger 2001). Trim-and-fill techniques can be used to correct for any identified asymmetry (Duval and Tweedie 2000). The weakness of funnel plot-based methods is that they do not take true heterogeneity into account and cannot distinguish between methodologically caused biases and true differences between study effects.
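
A sketch of Egger's regression test under these definitions: the standardized effect g/SE is regressed on the precision 1/SE, and the t statistic of the intercept indicates funnel-plot asymmetry. The data are hypothetical.

```python
import math

def eggers_test(effects, standard_errors):
    """Egger's regression test: regress g/SE on 1/SE; an intercept far
    from zero suggests funnel-plot asymmetry (Sterne and Egger 2001)."""
    y = [g / se for g, se in zip(effects, standard_errors)]
    x = [1 / se for se in standard_errors]
    n = len(y)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
    intercept = y_bar - slope * x_bar
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in residuals) / (n - 2)        # residual variance
    se_intercept = math.sqrt(s2 * (1 / n + x_bar ** 2 / s_xx))
    return intercept / se_intercept                      # t statistic, df = n - 2

# Hypothetical effect sizes and standard errors
g_values = [0.15, 0.42, 0.70, 0.31, 0.55]
std_errs = [0.20, 0.14, 0.24, 0.17, 0.22]
print(round(eggers_test(g_values, std_errs), 2))
```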

The second approach, the p-curve analysis, addresses both the detection of and the correction for possible publication bias and evaluates the evidential value of the estimated effect sizes (Simonsohn et al. 2015). This technique provides a robust estimate of the distribution of p values from the studies, plots them, and combines the half and full p curves to make inferences about evidential value; however, it is based only on significant p values.
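
A rough sketch of the core p-curve computation under the stated logic, combining pp values with Stouffer's method. It covers only the full-curve test for evidential value, not the complete procedure of Simonsohn et al. (2015), and the p values are hypothetical.

```python
from statistics import NormalDist

def p_curve_stouffer(p_values, alpha=0.05):
    """Full p-curve test for evidential value (after Simonsohn et al. 2015).

    Under the null of no true effect, significant p values are uniform on
    (0, alpha), so the pp values p/alpha are uniform on (0, 1); right skew
    (many small pp values) indicates a true effect. Combined via Stouffer's
    method. Assumes at least one significant p value is present.
    """
    nd = NormalDist()
    pp = [p / alpha for p in p_values if p < alpha]  # significant results only
    z = sum(nd.inv_cdf(x) for x in pp) / len(pp) ** 0.5
    return z, nd.cdf(z)  # small one-sided p -> evidence of a true effect

# Hypothetical p values from the significant primary results
print(p_curve_stouffer([0.001, 0.012, 0.003, 0.030, 0.008]))
```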

The third approach, which takes into consideration both significant and nonsignificant results, is the R-index (R-Index.org 2014). It can be used to examine the credibility and replicability of studies. The R-index can range between 0 and 100% (Schimmack 2016); values below 22% indicate the absence of a true effect; values below 50% indicate inadequate statistical power of the study; and values above 50% are acceptable to support the credibility and replicability of the results, although values above 80% are preferred.
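
A sketch of the R-index computation as described, assuming observed power is estimated from the z statistics of a two-sided test; the z values are hypothetical.

```python
from statistics import NormalDist, median

def r_index(z_values, alpha=0.05):
    """R-index (after Schimmack 2016): median observed power minus the
    inflation rate, where inflation = success rate - median observed power."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    # Observed power of a two-sided z test at the observed z value
    powers = [1 - nd.cdf(z_crit - z) + nd.cdf(-z_crit - z) for z in z_values]
    median_power = median(powers)
    success_rate = sum(abs(z) > z_crit for z in z_values) / len(z_values)
    inflation = success_rate - median_power
    return median_power - inflation  # equals 2 * median power - success rate

# Hypothetical z statistics from the primary results
print(round(r_index([2.3, 1.8, 2.9, 2.1, 1.4]), 2))  # about 0.51
```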

Results

Results of the Literature Search

The search resulted in 7510 articles after deleting duplicates. During abstract and full-text screening, most excluded studies were non-empirical, had no control group, or measured outcomes that did not fit the definition of diagnostic competences (Fig. 1). The 35 eligible studies (published between 1997 and 2018) provided 60 effect size estimations. The studies and their characteristics are presented in Table 1 in alphabetical order. The total sample consisted of 3472 participants. Most of the studies (69%) implemented random assignment to the control and experimental conditions. The sample provided an almost equal split between studies with participants with low (16 studies) and high (17 studies) prior professional knowledge bases.

Fig. 1
figure 1

Flow chart of the study selection process

Some moderator levels included in the coding scheme were not present in the sample of studies. Specifically, all studies measured and reported a procedural aspect of diagnostic competences; however, none of the primary studies reported assessment of conceptual knowledge gains or of the strategic aspect of diagnostic competences separately from the procedural aspect. Additionally, regarding role-taking, 26 studies (74%) reported assigning roles during learning; all of them reported assigning an agent role, either for the whole learning process (53%) or for parts of it (42%), and two more studies reported multiple conditions.

Among the 35 studies, 25 included problem-solving (71%), 8 studies did not include problem-solving (23%), and 2 reported within-study effects. Only one study reported no use of any type of scaffolding, and instead used only explicit presentation of information to facilitate the advancement of competences. All (100%) of the 25 studies that included problem-solving had at least one type of additional scaffolding.

Results of Quality and Preliminary Analysis

The procedures targeted at assessing the quality of the primary studies and the generalizability of the summary and moderator effects found in the meta-analysis indicated no evidence of publication bias or questionable research practices. Egger's test for funnel plot asymmetry was not significant (z = 1.58; p = 0.11). However, the p-curve analysis indicated that six results out of 38 provided insufficient evidential value. Furthermore, the R-index analyses indicated that 10 results out of 38 had inadequate replicability indexes. These findings limit the generalizability of the evidence for research question 4, due to insufficient data from the primary studies (Table 3).

Table 3 Effects of professional knowledge base, context factors, and instructional support on diagnostic competences

The meta-regression on control variables (year of publication, publication type, lab, design of study) showed that these factors do not explain a statistically significant amount of variance between study effects (p values above .05).

Summary Effect of the Instructional Support on the Diagnostic Competences

Regarding research question 1, instructional support was found to have a medium positive effect (g = .39; p = .001; 95% CI [.22; .56]) on fostering diagnostic competences in the combined sample of studies in medical and teacher education. The effect has sufficient evidential value and an acceptable replicability index. The analysis also identified high heterogeneity between studies (τ2 = .18; I2 = 79.60%), justifying further moderator analyses. The effect sizes found in individual studies, weights, and confidence intervals, as well as the summary effect from the random-effects model estimation, are presented in alphabetical order in Fig. 2. A funnel plot of the effect size distribution and standard errors is presented in Fig. 3.

Fig. 2
figure 2

Forest plot of the overall effect of instruction on diagnostic competences

Fig. 3
figure 3

Funnel plot of the overall effect of instruction on diagnostic competences

Effects of Moderators

Effect of the Professional Knowledge Base

Regarding research question 2, subgroup analyses (Table 3) indicate that learners with a lower level of prior professional knowledge showed a higher increase in diagnostic competences (g = .48, SE = .12, p < .05) than learners with a higher level of prior professional knowledge, whose diagnostic competences also increased through instructional intervention (g = .27, SE = .11, p < .05).

Effect of Problem-Solving

Regarding research question 3, the studies in the meta-analysis provided evidence in favor of learning through problem-solving (Fig. 4) as an instructional approach to enhance diagnostic competences. Including problem-solving elements in instruction (g = .51, SE = .11, p < .05) was more beneficial for advancing diagnostic competences than not including problem-solving (g = .20, SE = .11, ns). The studies provided sufficient evidential value. The moderator role of problem-solving instruction was statistically significant (Q (1, 31) = 19.09, p < .001).

Fig. 4
figure 4

The effect of problem-solving (elements included vs. not included during learning phases)

Effect of Scaffolding

Despite descriptive differences (Table 3), the difference between settings including examples and settings not including examples did not reach statistical significance regarding effects on the advancement of diagnostic competences (Q (1, 35) = 2.85, p = .06).

Role-taking (taking an agent’s role during the learning phase) had a significant positive effect on advancing diagnostic competences (g = .49, SE = .11, p < .05). Primary studies with settings in which roles were not assigned during learning indicated no statistically significant effect on the advancement of diagnostic competences (g = 0, SE = .09, p > .05). Assigning roles was a statistically significant moderator (Q (1, 33) = 19.09, p < .001).

Including prompts had a significantly higher positive effect on diagnostic competences (g = .47, SE = .09, p < .05) than not including prompts (g = .26, SE = .14, p < .05); the moderator was significant as well (Q (1, 37) = 5.33, p < .05). More specifically, prompts were coded by their timing relative to the diagnosis: during, after, long-term, or a mixture of these. The type of prompt as a moderator did not reach statistical significance (Q (3, 22) = 5.03, p = .071). Note, however, that providing prompts after the diagnosis tended to be more beneficial for the learners than providing prompts during diagnostic processes or combining multiple types of prompts. Providing long-term prompts also tended to be beneficial for advancing diagnostic competences (Table 3).

Reflection phases had a significantly higher positive effect on advancing diagnostic competences (g = .58, SE = .11, p < .05) compared to instructional support not including reflection phases (g = .26, SE = .11, p < .05). This moderator was statistically significant (Q (1, 31) = 17.11, p < .001).

Interaction Between Professional Knowledge Base and Instructional Support

Regarding research question 4, problem-solving was identified as effective for learners with high (g = .59, SE = .17, p < .05) and low (g = .41, SE = .09, p < .05) levels of prior professional knowledge. If problem-solving was not included (k = 8), there was no statistically significant gain in competence for learners with a high level of prior professional knowledge (g = − .10, SE = .12, p > .05; k = 5), nor for learners with a low level of prior professional knowledge (g = .67, SE = .45, p > .05; k = 3). In interpreting these findings, the relatively low number of primary studies that were used in the analysis must be considered (Table 3).

As hypothesized, more advanced learners benefited most from the type of scaffolding that afforded the highest level of self-regulation, namely, reflection phases (g = .67, SE = .23, p < .05). Providing examples instead of problem-solving activities to learners with a high level of prior professional knowledge did not lead to an advancement of their diagnostic competences (g = .18, SE = .25, p > .05). In contrast to advanced learners, learners with a low level of prior professional knowledge benefited from examples (g = .52, SE = .14, p < .05). These findings support the hypothesis regarding the degree of self-regulated problem-solving for learners with low versus high levels of professional knowledge (Fig. 5). However, other measures of instructional support, such as prompts, had similar positive effects on the advancement of diagnostic competences for learners with low as well as high levels of prior professional knowledge. These results did not contribute sufficiently to evaluating the hypothesis, as data from the primary studies provided insufficient evidential value (Table 3).

Fig. 5
figure 5

Interaction between levels of prior professional knowledge and scaffolding

Effect of Contextual Factors

Regarding research question 5, the diagnostic situation significantly moderated the effects of instructional support on the advancement of diagnostic competences (Q (1, 35) = 23.58, p < .01). Diagnostic competences were advanced significantly more through interaction-based activities (g = .77, SE = .19, p < .05) than through document-based activities (g = .27, SE = .08, p < .05). The necessity for collaboration (processing mode) failed to reach significance as a moderator (Q (1, 36) = 0.40, p = .52).

Regarding medical and teacher education, the statistical analysis showed significant variance between the subgroups (Q (1, 33) = 6.01, p < .05). However, the nonsignificant results of the meta-regression indicate that domain was not a statistically significant moderator explaining the differences found. Thus, the differences in the magnitude of the effects for medical (g = .33, SE = .09, p < .05, n = 26) and teacher (g = .58, SE = .21, p < .05, n = 9) education are likely due to the unequal number of studies representing the two fields.
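The distinction drawn here between a significant subgroup Q and a nonsignificant meta-regression coefficient can be illustrated with a short sketch of an inverse-variance weighted meta-regression with a domain dummy. The following Python code is a simplified fixed-effect version; the per-study values are hypothetical and are not taken from the included studies.

```python
import numpy as np

def wls_meta_regression(g, se, x):
    """Inverse-variance weighted regression of effect sizes on a moderator.

    g, se: per-study effect sizes and their standard errors.
    x: moderator coded 0/1 (e.g., 0 = medical, 1 = teacher education).
    Returns the moderator coefficient, its standard error, and z statistic.
    """
    g, se, x = (np.asarray(a, float) for a in (g, se, x))
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(x), x])   # intercept + domain dummy
    W = np.diag(w)
    cov = np.linalg.inv(X.T @ W @ X)            # covariance of WLS estimates
    beta = cov @ (X.T @ W @ g)                  # weighted least squares fit
    se_beta = np.sqrt(np.diag(cov))
    return beta[1], se_beta[1], beta[1] / se_beta[1]

# Hypothetical per-study data: 4 medical (x=0) and 3 teacher (x=1) studies
g  = [0.30, 0.25, 0.45, 0.20, 0.55, 0.70, 0.40]
se = [0.10, 0.12, 0.15, 0.09, 0.20, 0.25, 0.18]
x  = [0, 0, 0, 0, 1, 1, 1]
slope, se_s, z = wls_meta_regression(g, se, x)
print(f"domain coefficient = {slope:.2f} (SE = {se_s:.2f}, z = {z:.2f})")
```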

To address the possible difference between the domains, we conducted a post hoc analysis to estimate whether the average effect sizes for the moderators of prior professional knowledge, instructional support measures, and contextual factors differ significantly between medical and teacher education.

Problem-solving was included in more than half of the studies in medical education (N = 16) and in all studies in teacher education (N = 9), yielding comparable average effects with similar standard errors in the two domains (Table 4). The positive effect of including examples reached significance only in medical education; however, no significant differences between the domains were found. Assigning roles, providing prompts, and reflection phases had similarly positive effects in both domains; the differences in the magnitude of the effects were not significant, which might be due to the unequal number of studies at the moderator levels.

Table 4 Interaction between domain and instructional support moderator variables

The analysis indicated a significant difference in prior professional knowledge between the two domains; moreover, there was evidence of an interaction between levels of prior knowledge and domain affecting the development of diagnostic competences. The mean effect for high prior professional knowledge in teacher education (g = .88, SE = .64, p > .05, n = 3) was significantly greater than that in medical education (g = .15, SE = .09, p > .05, n = 14); however, neither effect individually reached statistical significance. In contrast, the mean effect for low prior knowledge in medical education (g = .57, SE = .19, p < .01, n = 10) was significantly higher than that for teacher education (g = .34, SE = .10, p < .05, n = 6); both effects individually were positive and significant. The only further difference in contextual factors concerned the collaborative processing mode (Table 3). Learners in teacher education (g = .57, SE = .25, p > .05, n = 5) benefited significantly more from collaboration than did learners in medical education (g = .20, SE = .11, p > .05, n = 6); however, neither effect individually reached statistical significance. We therefore suggest that this pattern of findings provides initial evidence that the findings concerning instructional support can be generalized across the two domains.
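The between-domain contrasts reported in this paragraph compare two independent subgroup estimates. A minimal illustration of such a contrast is a z-test on the difference between two effect sizes, as sketched below; the values are hypothetical, and the actual analyses relied on the mixed-effects model rather than this simplified fixed-effect logic.

```python
import numpy as np
from scipy import stats

def contrast_subgroups(g1, se1, g2, se2):
    """Two-sided z-test for the difference between two independent
    subgroup effect estimates (illustrative fixed-effect logic)."""
    diff = g1 - g2
    se_diff = np.sqrt(se1**2 + se2**2)    # SE of the difference
    z = diff / se_diff
    return diff, z, 2 * stats.norm.sf(abs(z))

# Hypothetical subgroup estimates for two domains
diff, z, p = contrast_subgroups(0.60, 0.12, 0.20, 0.10)
print(f"difference = {diff:.2f}, z = {z:.2f}, p = {p:.4f}")
```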

Summary and Conclusion

This meta-analysis (see Fig. 6 for an overview) shows that interventions for facilitating diagnostic competences in the investigated domains of medical education and teacher education are particularly effective if they involve learners in some form of problem-solving. Advancing diagnostic competences without the learners’ own engagement in problem-solving seems unlikely. This is true for both the low and high levels of professional knowledge investigated. Most studies that addressed forms of problem-centered instructional support additionally provided one or several types of scaffolding (see Hmelo-Silver et al. 2007). Scaffolding approaches that come in addition to learners’ own problem-solving (namely, assigning roles, providing prompts, and reflection phases) have clear positive effects on diagnostic competences. Overall, this analysis shows no indication of different effect sizes for different types of scaffolding. However, with respect to timing, prompts tend to be more effective if they are provided several times over a longer period or after the learner’s own problem-solving activity, compared to prompts delivered during the problem-solving activity itself. The effectiveness of scaffolding approaches depends on the learners’ prior professional knowledge base: reflection phases are more effective for more advanced learners, whereas providing examples is effective for less advanced learners.

Fig. 6 Fostering diagnostic competences

Theoretical Significance

The findings of this meta-analysis support the claim that interventions targeting problem-solving skills must involve the learners in problem-solving activities (Anderson 1983; Van Lehn 1996), and that this claim can be generalized to solving complex medical- and teaching-related diagnostic problems. Moreover, the meta-analysis yielded evidence in support of generalizing the medium-sized positive overall effects of scaffolding found in other fields (Kim et al. 2018) to medical and teacher education. Additionally, the findings suggest that the well-known positive effect of examples for novice rather than more advanced learners also generalizes to these domains. Beyond these generalizations of what has already been established in other fields, the findings of the meta-analysis also contribute to an advancement of the scientific understanding of scaffolding.

First, the expertise reversal effect—that is, a negative effect of scaffolding for more advanced learners (see Kalyuga et al. 2003)—could not be established for the studies we reviewed in medical and teacher education. On the contrary, most of the scaffolding types we reviewed yielded positive effects for learners with a greater knowledge base as well.

Second, this meta-analysis contributes to a better understanding of how and why different types of scaffolds may cause different effects. The initially derived hypothesis on the interaction of scaffolding type and prior knowledge received partial support through the analyses. Indeed, learners with more advanced prior knowledge benefited more from scaffolds affording self-regulated diagnostic problem-solving activity. However, rather than operating along a continuous dimension of increasing degrees of freedom, scaffolding appears to follow a dichotomous distinction: as long as learners are able to practice problem-solving, all of the different types of scaffolding are beneficial.

When scaffolding hinders more advanced learners in their problem-solving activity by enforcing alternative activities, the intervention remains largely ineffective. This explanation might seem related to the expertise reversal effect (Kalyuga et al. 2003); however, it is not the same. The expertise reversal effect would predict negative effects of all types of unnecessary scaffolding. Instead, the findings of this meta-analysis show positive effects of different types of scaffolding for more advanced learners, suggesting that this group is able to make good use of the support. This finding seems rather supportive of the so-called Sesame Street or Matthew effect (Walberg and Tsai 1983), indicating that learners with better prerequisites are also better at exploiting offerings originally designed to support learners with less well-developed prerequisites. We can speculate that rather than the degree of freedom for self-regulated activity, it is the fidelity of the problem-solving activity that determines the effects of scaffolding on the learning of more advanced learners.

If the scaffolding changes the learners’ activities away from the diagnostic problem-solving process, then learners with more advanced knowledge would suffer from an expertise reversal effect. If the scaffolding leaves the targeted problem-solving processes untouched but supports the learners to productively engage in them, then a Sesame Street effect is likely to happen. It seems worthwhile to build on this initial explanation in developing a more theory-based classification of different types of scaffolding. Types of scaffolds may differ in how much self-directed problem-solving they afford and require from learners. Scaffolding is thus not just something for beginners. However, we currently know little about the processes through which more advanced learners benefit from scaffolding. This meta-analysis cannot contribute to this issue beyond pointing to the need for more primary studies that include more advanced learners, scaffolding, and process analyses.

Third, the findings of this meta-analysis advance our scientific understanding of another aspect of scaffolding, the use of prompts, at least for diagnostic problem-solving and probably beyond. Prior meta-analyses have shown a limited overall effect of scaffolding through prompts (Kim et al. 2018). Our study addresses the timing of prompts. With respect to advancing the procedural aspects of diagnostic competences, prompts during diagnostic problem-solving seem less effective than long-term prompts or prompts that help in understanding the diagnostic processes after engagement in them. It seems plausible to assume that prompts during the ongoing problem-solving process are meant to avoid failures of learners’ problem-solving through in-process guidance, whereas prompts after the diagnostic problem-solving are typically meant to stimulate reflection. This is in line with models of learning through problem-solving that emphasize how important it is for expertise development that learners self-regulate when they identify and correct errors in their knowledge base (Kapur and Rummel 2012). Therefore, this finding on the timing of prompts may generalize to other types of complex skill development through problem-solving.

Finally, this meta-analysis contributes to our understanding of the generality of the effects of instructional support measures in medical and teacher education. Diagnosing is a goal-oriented collection of information to make decisions in both domains, and scaffolding shows quite similar patterns of effects in advancing diagnosis in medical and teacher education. This is not a trivial finding: a recent meta-analysis has shown that scaffolding can have quite different effects across domains (Kim et al. 2018). Comparable effects on outcomes may be taken as initial evidence that instructional support measures can be transferred between the domains. A meta-analysis cannot, however, deliver evidence that the processes of learning are also comparable; that requires more primary studies focusing on the learning process. The effects found for levels of prior knowledge in medical and teacher education might be explained by the fact that professional knowledge development is easier to trace in medical education (i.e., the curriculum introduces topics and practice opportunities in a stable order), whereas in teacher education, students themselves have more control over the sequence and even the number of topics they engage with during their studies. It is thus more difficult in teacher education to infer prior professional knowledge from the semester of study (Linninger et al. 2015).

Limitations

The limitations to the generalizability of findings from this meta-analysis are due primarily to insufficient data and the complex experimental designs of the primary studies. First, too few studies address diagnostic competences to allow further in-depth comparisons of the two domains of teacher and medical education. The studies from different contexts were therefore analyzed together, without the statistical power to look for domain and context effects of the instructional support.

Second, the four scaffolding categories presented in this meta-analysis combine scaffolding types that could be divided further into subcategories according to their theoretical background. It would have been preferable to differentiate each of the scaffolds further, that is, to distinguish between reflection upon the problem at hand and reflection upon one’s own diagnostic reasoning, or between different kinds of prompts, such as prompts providing additional information, self-explanation prompts, and meta-cognitive prompts. However, due to the low number of studies, such comparisons are not presently possible in a meta-analytical way. Furthermore, most of the primary studies included a combination of different types of examples, reflection phases, and prompts. For example, some studies combine meta-cognitive prompts with prompts providing additional information on the problem; others combine reflection on the diagnostic situation with reflection on one’s own reasoning. Therefore, even though reflection and prompts seemed to have positive effects overall, different kinds of reflections and prompts were subsumed under these categories, and differential effects of the different kinds remain possible. Conclusions about the effectiveness of each scaffolding type should thus be made with caution; nevertheless, the presented analysis offers insight into how scaffolding types might be categorized according to the self-regulation they require. Once more empirical studies with detailed descriptions of the scaffolding used become available, further systematic analysis with more precise categorization should help explain more of the heterogeneity; such an analysis could not be performed in the present study.

Third, the use of multiple instructional support measures in a large proportion of the primary studies precluded direct comparisons of the effectiveness of individual scaffolding measures and evaluation of the effects of scaffolding on different components of diagnostic competences.

Fourth, most of the studies used a combination of tests for assessing learning and reported combined competence measures and global ratings; therefore, the effects of instructional support on different types of professional knowledge (conceptual, practical, or strategic) were not estimated.

Fifth, the p-curve analysis indicated that the studies in the analysis do not provide sufficient evidential value for some levels of the moderators. This is true for not assigning roles to learners during the intervention, providing different types of prompts simultaneously, and exploring the effects of scaffolding on learners with low levels of prior knowledge. The corresponding statistically significant results therefore have to be interpreted cautiously (see Table 3). Furthermore, replicability indexes vary considerably for studies within the different levels of the moderators (0.36–0.82). Values below 0.50 indicate inadequate statistical power; thus, generalization of these effects is limited and requires, above all, more empirical studies addressing the specific roles of different scaffolding procedures and learning outcomes.
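For transparency, the following Python sketch illustrates the logic behind the two diagnostics mentioned above: a p-curve test of evidential value (rescaling significant p-values and aggregating them with Stouffer’s method, following the p-curve approach of Simonsohn and colleagues) and one common formulation of the replicability index. The input values are hypothetical, and the functions are illustrative rather than the exact procedures applied in our analysis.

```python
import numpy as np
from scipy import stats

def p_curve_stouffer(p_values, alpha=0.05):
    """Right-skew test of a p-curve (evidential value).

    Only significant p-values enter the curve. Each is rescaled to a
    pp-value, p / alpha, which is uniform on (0, 1) if the true effect
    is zero. Stouffer's method aggregates the probit-transformed
    pp-values; a clearly negative Z indicates right skew, i.e.,
    evidential value.
    """
    p = np.asarray([pv for pv in p_values if pv < alpha], float)
    pp = p / alpha
    z = stats.norm.ppf(pp)                     # probit transform
    z_stouffer = z.sum() / np.sqrt(len(z))
    return z_stouffer, stats.norm.cdf(z_stouffer)

def r_index(z_values, crit=1.96):
    """One common formulation of the replicability index: median
    observed power minus the inflation of the observed success rate
    over that power (R = 2 * median power - success rate)."""
    z = np.asarray(z_values, float)
    power = 1 - stats.norm.cdf(crit - z)       # observed power per study
    med_power = np.median(power)
    success = np.mean(np.abs(z) > crit)        # share of significant results
    return med_power - (success - med_power)

# Hypothetical p-values and test statistics from one moderator level
print(p_curve_stouffer([0.001, 0.004, 0.012, 0.030, 0.021]))
print(r_index([2.5, 1.8, 2.1, 3.0, 1.2]))
```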

Recommendations for Practice

Diagnostic competences may develop with increasing experience in practice. However, a thoroughly planned higher education program would provide practice opportunities that start this process much earlier, during the program itself. Evidence from this meta-analysis shows that interventions including learners’ own problem-solving activities have the potential to advance the procedural aspects of diagnostic competences. This has been recognized in medical education for many years (e.g., Vernon and Blake 1993). Additionally, the results of this meta-analysis show that the potential advancement of diagnostic competences through problem-solving interventions is at least as large in teacher education as in medical education, if not larger. Traditional lectures and courses with examples but without students’ own problem-solving activities may be good at developing the necessary conceptual and strategic knowledge base, but these teaching formats will probably not contribute much to advancing the procedural aspects of diagnostic competences. Additional instructional guidance through scaffolding is likely to further improve learning through problem-solving. At least for the procedural aspects of diagnostic competences, prompts are more promising if delivered after, rather than during, the problem-solving process. When solving diagnostic problems, learners with little prior professional knowledge are likely to benefit more from additional examples than from other types of scaffolding. More advanced learners still benefit from scaffolding, but they gain more from types of scaffolding that afford self-regulated problem-solving.

Further Research

The findings of the current research synthesis provide insights into differences and similarities between the fields of medical and teacher education and enhance the scientific understanding of the role of instruction, context, and the prior professional knowledge base in the facilitation of diagnostic competences. The study also identified several further questions to be addressed by experimental studies and research syntheses. For example, more primary studies with designs that allow direct comparisons of scaffolding types are needed to further validate the model placing scaffolding measures on a continuum from high levels of guidance to more self-regulation and meta-cognition. Moreover, more primary studies are needed that report not only global scores but also the components of these scores, together with more specific descriptions of learning and testing activities. Such studies would make it possible to address the effects of different types of instruction and scaffolding on the components of diagnostic competences (conceptual, procedural, and strategic knowledge; analytical and decision-making skills; and epistemic-diagnostic activities).

Another promising direction would be to focus on creating, validating, and implementing scales for the assessment of diagnostic competences that address their different components within and across domains. More standardized measures such as these would also help identify which types of scaffolding optimally meet the needs of learners with different levels of prior professional knowledge.

Additionally, further research may also explore the motivational aspects of diagnostic competences more systematically by including subjective measures of learning outcomes (e.g., perceived utility, confidence in applying learned strategies, or self-perceived competence levels).