
This chapter’s simulation at a glance

Domain: Teacher education
Topic: Scientific reasoning in physics and biology
Learner’s task: To adopt the role of a physics or biology teacher and diagnose—individually or in interdisciplinary collaboration—a student’s scientific reasoning skills
Target group: Pre-service teachers
Diagnostic mode: Individual and collaborative diagnosis
Sources of information: (Interactive) videos of pairs of students who perform inquiry activities in physics and biology
Special features: Standardized and parallelized simulations for two different school subjects (physics and biology); the possibility to interact directly with the students by “asking” them questions about their inquiry activities

7.1 Scientific Reasoning as a Cross-Domain Skill

Many educational objectives in schools refer to subject-specific knowledge and skills, but others refer to cross-curricular or cross-domain skills such as learning strategies, media literacy, or scientific reasoning skills. These skills have in common that they typically cannot be developed without being applied to particular subject-specific content—a so-called exemplifying domain (Renkl et al., 2009). For example, a learning strategy such as organizing information by constructing a concept map can only be demonstrated and practiced in the context of a particular topic, such as stem cell research (Hilbert et al., 2008). Fostering scientific reasoning skills requires inquiry tasks concerning phenomena such as the factors influencing the image of an object projected through a lens or the growth of plants. Typically, exemplifying domains for the development of cross-domain skills are taken from the body of knowledge contained within school subjects.

Cross-domain skills also have in common that they can be applied to topics from more than one school subject. Learning strategies, media literacy, or—to some degree—scientific reasoning skills can be applied to content from the humanities, the social sciences, or the natural sciences. Therefore, promoting such cross-domain skills can be regarded as a joint task of more than one teacher and more than one school subject (Wecker et al., 2016). Against this backdrop, it may be advisable for teachers of subjects that can serve as exemplifying domains for such cross-domain skills to collaborate in this joint task and share information about individual students’ learning progress.

In our own research, we focus on scientific reasoning as a cross-domain skill. Scientific reasoning can be seen as a rather complex set of cognitive activities (Schunn & Anderson, 1999) and is therefore best explained by looking at its subskills. While there are frameworks that differentiate many subskills (Fischer et al., 2014), most researchers distinguish among three dimensions of scientific reasoning skills: (1) formulating hypotheses, (2) designing and conducting experiments, and (3) drawing conclusions from experiments (e.g., de Jong & van Joolingen, 1998; Klahr & Dunbar, 1988). The formulation of hypotheses may be strongly influenced by a person’s domain knowledge and can be assessed by looking at the specificity of a stated hypothesis (Lazonder et al., 2008). After a hypothesis has been formulated, experiments have to be designed and conducted to test it. At this point, the so-called control of variables strategy, i.e., varying one independent variable from the hypothesis while holding all other variables constant, plays a crucial role in obtaining unequivocal results (Chen & Klahr, 1999; Schwichow et al., 2016; Tschirgi, 1980). Observations from well-designed experiments can then be evaluated and used to draw conclusions about the tested hypothesis. Like the initial hypothesis, these conclusions may vary in their specificity. Furthermore, drawing correct inferences about which factors do or do not influence the dependent variable from informative, well-designed comparisons is an important aspect at this point (see Kuhn et al., 1992).
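To make the control of variables strategy concrete, the following minimal sketch (in TypeScript; all names are ours, purely for illustration) checks whether two experimental setups form an informative comparison, i.e., differ in exactly one independent variable:

```typescript
// Control of variables strategy (CVS): two setups form an informative,
// controlled comparison only if exactly one independent variable differs.
type Settings = Record<string, string | number | boolean>;

function isControlledComparison(a: Settings, b: Settings): boolean {
  const variables = new Set([...Object.keys(a), ...Object.keys(b)]);
  const changed = [...variables].filter((v) => a[v] !== b[v]);
  return changed.length === 1; // vary one variable, hold the rest constant
}

// A controlled comparison licenses a conclusion about the varied variable;
// a confounded one (two or more differences) does not.
isControlledComparison({ x1: 1, x2: 0 }, { x1: 2, x2: 0 }); // true
isControlledComparison({ x1: 1, x2: 0 }, { x1: 2, x2: 1 }); // false
```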

Although there are views that question the existence of cross-domain skills in general, or that question whether scientific reasoning in particular is a cross-domain skill (e.g., Osborne, 2018; Tricot & Sweller, 2014), there is research suggesting that there are in fact scientific reasoning skills that can be applied across content areas, at least in related subjects or different scientific subdisciplines (e.g., Kuhn et al., 1992; Schunn & Anderson, 1999). One reason for this ongoing debate about the existence of domain-general—or, as we would prefer to call them, cross-domain—skills might be differing conceptions of the terms “domain” and “domain-general” (Hetmanek et al., 2018). In light of the strong research tradition on scientific reasoning, however, we consider scientific reasoning skills to be both real and applicable to content from different subjects.

Research from developmental psychology shows that early in the development of a specific subskill of scientific reasoning, learners often apply it in only one narrow context. Only with time and practice do they begin to apply the new subskill to a broader range of topics within and across subjects (Kuhn et al., 1992; Zimmerman, 2007). Hence, the breadth of topics to which a subskill of scientific reasoning can be applied constitutes a quality dimension of the subskill itself. These considerations suggest that practicing scientific reasoning skills in the context of different science subjects, such as physics and biology, may contribute to the development of higher levels of scientific reasoning skills.

7.1.1 The Role of Teachers’ Diagnostic Competences for the Development of Learners’ Scientific Reasoning Skills

Teachers’ diagnostic competences are an important prerequisite for their adaptive and effective support of their students (Schrader, 2009): teachers need to be able to diagnose their students’ current skill levels in order to support them appropriately. We adopt the definition by Fischer et al. (2022) as the basis for the work presented in this chapter.

In order to diagnose correctly, teachers need the corresponding cognitive and context-specific performance dispositions (Koeppen et al., 2008). As with other cognitive skills, it can be assumed that diagnostic competences are based on teachers’ professional knowledge (e.g., Baumert & Kunter, 2006; Förtsch et al., 2018). Teachers therefore need different types of knowledge (knowing that, knowing how, and knowing when and why) as well as content-related facets of knowledge in order to diagnose their students (see Förtsch et al., 2018). Against the background of research on the acquisition of cognitive skills (see VanLehn, 1996), developing diagnostic competences also requires opportunities to apply such knowledge to authentic cases and thereby practice diagnosing.

To arrive at a diagnosis, the diagnostician can employ a set of different types of (epistemic) diagnostic activities, including (1) problem identification, (2) questioning, (3) hypothesis generation, (4) construction and redesign of artifacts, (5) evidence generation, (6) evidence evaluation, (7) drawing conclusions, and (8) communicating and scrutinizing (see Chernikova et al., 2022; Heitzmann et al., 2019).

While research on diagnostic competences has mainly focused on the accuracy of teachers’ judgments of subject-specific knowledge and skills, research on diagnostic competences concerning cross-domain skills, such as scientific reasoning, is still scarce (Südkamp et al., 2012). Therefore, students’ scientific reasoning skills were selected as the focus of teachers’ diagnostic competences in our present work.

Giving students the chance to conduct scientific experiments in class creates opportunities to diagnose their levels of scientific reasoning. Two common experiments are experimenting with optical lenses (physics) and with the growth of plants (biology). In the plant experiment, the goal is to find out which variables (the amount of water, a fertilizer stick, salt, and an undefined white powder) influence the growth of a plant (e.g., a bean plant). To do so, students have to convert their ideas about what influences plant growth into a scientific hypothesis, for example, the idea that the amount of water influences growth. To test this idea, the students must conduct an experiment: in this case, they would need to vary the quantity of water between two plants, while keeping the other variables constant, to see whether there is a difference in growth. Students also need to draw the right conclusions from the results of the experiment: based on the growth of the plants, they should be able to determine whether to confirm or reject their hypothesis. The optical lens experiment works quite similarly. Students need to find out which variables (lens curvature, lens size, the distance between the lens and the depicted object, and an undefined polarizing filter) influence the distance at which an object—depicted through an optical lens—appears sharp on an imaging screen.
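As an illustration of this cycle of hypothesis, experiment, and conclusion, here is a hedged sketch of the bean-plant task; the types, values, and growth figures are invented solely for illustration:

```typescript
// Hypothesis: the amount of water influences plant growth.
// CVS-conform test: vary only the water between the two plants.
interface PlantSetup {
  water: "little" | "much";
  fertilizerStick: boolean;
  salt: boolean;
  whitePowder: boolean;
}

const plantA: PlantSetup = { water: "little", fertilizerStick: false, salt: false, whitePowder: false };
const plantB: PlantSetup = { water: "much", fertilizerStick: false, salt: false, whitePowder: false };

// Invented observations (growth in cm), purely for illustration.
const growthA = 4;
const growthB = 9;

// Since only the water differs, a difference in growth supports the
// hypothesis; equal growth would speak against it.
console.log(growthA !== growthB ? "confirm hypothesis" : "reject hypothesis");
```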

7.1.2 Collaborative Diagnosis of Scientific Reasoning Skills

In the context of daily school routines, diagnosing a student does not always have to be a one-person job. Since different teachers experience the same learners in different situations, exchanging information about these learners might help teachers support their students. Still, it is unclear whether interdisciplinary teacher collaboration leads to better results in diagnosing students’ scientific reasoning skills. Perhaps the information a single teacher can gather in his or her own lessons is already comprehensive enough to arrive at a good diagnosis. However, it is also possible that information from several subjects is needed as the basis for a satisfactory diagnosis. This might be especially true for the question of whether a student can apply scientific reasoning skills across school subjects (e.g., physics and biology) within a given domain (science); situations in different thematic fields might be necessary to gain sufficient insight (see Kuhn et al., 1992; Zimmerman, 2007).

In addition, collaborative diagnosis might have an advantage over the individual development of a diagnosis when the collaborating teachers have different—in the best case complementary—areas of expertise. If this is the case, teachers could benefit from each other by working together (de Wit & Greer, 2008). This idea is not new and is already common in other fields of expertise, for example, in medicine. The daily routine in hospitals offers many possibilities, or rather necessities, for doctors from different fields to work together to improve their chances of arriving at better diagnoses. So-called tumor boards, in which experts from different fields come together to discuss particularly complex malignant diseases, are just one example of such interdisciplinary collaboration.

Even though teachers, too, are advised to collaborate when necessary and to seek help with the management of difficult tasks (Helmke, 2010), this kind of exchange is not institutionalized in schools in the same way. Collaboration is often restricted to a group of teachers of the same subject working together to create worksheets or tests. There is therefore still considerable potential for interdisciplinary collaboration, especially when it comes to improving the process of diagnosing students. This approach seems especially promising for teachers of related subjects, such as English and German or different science subjects. Research also shows that medical students who work in groups arrive at better diagnoses than students working on their own (Hautz et al., 2015); based on these findings, the same might be true for pre-service teachers. However, such collaborations can only be fruitful if the process of sharing information is implemented successfully (see Radkowitsch et al., 2022).

7.1.3 Simulations as a Learning Opportunity

Since there are not many opportunities in university-based teacher preparation programs to practice diagnosing scientific reasoning skills in real classroom situations, there is a need for additional training opportunities. In this context, video-based simulations constitute a promising setting for both the training and the measurement of diagnostic competences. Simulations are generally considered representations of segments of reality that offer the possibility to control or manipulate certain parameters (see Chernikova et al., 2022). Simulations can, for example, include videos focusing on specific (classroom) situations and thereby direct participants’ attention while still creating a realistic scenario. This makes video-based simulations especially interesting for tasks in which learning involves self-regulated exploration—so-called inquiry learning tasks (de Jong, 2006). Another advantage of simulations is that, once designed and programmed, they can be used repeatedly for practice as well as for testing.

In contrast to the education of pre-service teachers, learning with simulations is very common in medical education (Peeraer et al., 2007). This is especially interesting because both professions face a similar need to create training situations for educational purposes: in both, it is difficult to start training immediately in real-life situations. Appropriate alternatives—such as computer-based simulations—can create the opportunity to gain this experience.

7.1.4 Video-Based Simulations for Pre-Service Teachers’ Diagnosis of Students’ Scientific Reasoning Skills

Video-based simulations were developed as an environment to practice and measure pre-service teachers’ diagnostic competences concerning students’ scientific reasoning skills. As the diagnosis of cross-domain skills such as scientific reasoning skills may benefit from interdisciplinary collaboration, the simulations can be used for individual as well as collaborative diagnosing in interdisciplinary teams made up of teachers of different science subjects.

The simulation can best be understood in terms of the segment of reality it simulates. In this segment of reality, teachers of science subjects (physics or biology) have to diagnose the scientific reasoning skills of individual learners from their classes. For this purpose, they can observe these learners while they perform inquiry tasks in small groups during lessons in their respective subject. Teachers can watch and listen to their students while they generate research questions and formulate hypotheses, design and run experiments and document their observations, and draw conclusions from their observations concerning their hypotheses. They may also interrupt their students by asking questions about their research questions, hypotheses, observations, and conclusions in order to collect information about learners’ scientific reasoning that is not directly observable or fully transparent from their activities and dialogue. Based on the information gathered by observing and asking questions of their students during these lessons, they can arrive at a diagnosis of each learner’s scientific reasoning skills. Beyond such individual diagnoses, teachers may exchange their observations and discuss their diagnoses with colleagues who teach a different science subject to the same learners and therefore may have collected complementary information about these learners, which may support, contradict, or extend their own diagnoses. Hence, the teachers may collaborate to arrive at a joint diagnosis of each learner’s scientific reasoning skills.

The simulation tries to mimic this segment of reality. It is therefore introduced as a kind of role play. Pre-service teachers have to picture themselves as a teacher working in their own school subject. Staged videos of learner dyads are used to simulate a small segment of teachers’ experiences during lessons, including the opportunity to observe learners’ activities and dialogue and select questions they would like to ask the learners to gain deeper insights into their scientific reasoning during these inquiry tasks. The pre-service teachers’ task is to diagnose the scientific reasoning skills of one pre-designated learner from the dyad captured in the video. After watching the video, they are asked to individually write down a diagnosis concerning this learner’s scientific reasoning skills. In the collaborative version of the simulation, they then enter a phase of interdisciplinary collaboration with a pre-service teacher for the other science subject (physics or biology) in order to generate a joint diagnosis of the learner’s scientific reasoning skills that integrates the observations and conclusions from both science subjects. To arrive at their joint diagnosis, they can talk to each other and use material from their individual diagnoses. The video simulations were implemented as follows:

Platform

The simulation environment runs in a standard web browser. It is written in PHP, HTML, and JavaScript and uses a MySQL database to store configuration tables and log files. The platform also provides test and questionnaire functionalities for empirical studies on the instructional design of the video simulations.
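To give an impression of what such log files might record, the following TypeScript sketch shows one conceivable shape of a log entry; the field and event names are our assumptions for illustration, not the platform’s actual PHP/MySQL schema:

```typescript
// Conceivable shape of one interaction log entry (field and event names are
// assumptions for illustration, not the platform's actual MySQL schema).
interface LogEntry {
  participantId: string;
  simulation: "physics" | "biology";
  millisecondsSinceStart: number;
  event: "video_started" | "question_selected" | "question_withdrawn" | "note_saved";
  payload?: string; // e.g., the id of the selected video link
}

const example: LogEntry = {
  participantId: "p-017",
  simulation: "biology",
  millisecondsSinceStart: 412300,
  event: "question_selected",
  payload: "q-what-do-you-want-to-find-out",
};
```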

Interface

During the video simulations with staged videos of learner dyads who collaborate on inquiry tasks, the computer screen is divided into four parts (see Fig. 7.1):

1. The videos are displayed in the top-left area (“video area”).

2. The top-right area (“inquiry table”) displays a worksheet that the learners in the video use to document their experiments in handwriting. It contains a table with one row per experiment and columns for the research questions and/or hypotheses, the settings of the four independent variables, the measured values of the dependent variable, and a conclusion. The inquiry table always displays the worksheet state corresponding to the current state of the video: each time one of the learners starts to take notes on the current experiment, all the information written down at this point is displayed at once so that the pre-service teachers can process it immediately. This enables the pre-service teachers to keep track of the experiments the students have already conducted (a minimal data model for such a row is sketched after this list).

3. The bottom-right area (“note pad”) comprises a text box in which participants can take notes while watching the video, just as teachers could during their lessons. In some versions of the simulation environment, the note pad contains text that structures the pre-service teachers’ notes. The notes are saved and displayed again later, when participants write their final diagnosis.

4. The bottom-left area (“navigation area”) displays questions (“video links”) that serve as links to short video segments that can be inserted at certain points of the main video. Each segment contains a voice-over of a teacher asking the respective question, along with the learners’ responses.
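As announced in item 2, a minimal data model for one row of the inquiry table could look as follows; this is a sketch in TypeScript, and the field names and example values are illustrative assumptions, not the platform’s implementation:

```typescript
// Illustrative data model for one row of the inquiry table; field names and
// example values are assumptions, not the platform's implementation.
interface InquiryTableRow {
  researchQuestionOrHypothesis: string;
  independentVariables: Record<string, string>; // settings of the four IVs
  dependentVariableValue: string;               // measured value
  conclusion: string;
}

const exampleRow: InquiryTableRow = {
  researchQuestionOrHypothesis: "Does the amount of water influence growth?",
  independentVariables: { water: "much", salt: "no", fertilizer: "no", powder: "no" },
  dependentVariableValue: "9 cm",
  conclusion: "More water led to more growth than in the previous experiment.",
};
```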

Fig. 7.1 Screenshot of a biology simulation

Video Material

The videos show a classroom situation focused on two students. Several scripted videos were produced that show these students performing two inquiry tasks based on the two scientific experiments described above: the physics experiment concerns optical lenses, and the biology experiment concerns the growth of plants. Both experiments have exactly the same structure. In both cases, the learners in the video have to find out whether and how the dependent variable—the optimal distance between lens and imaging screen in physics, plant growth in biology—is influenced by four independent variables. In physics, the four independent variables are (1) the curvature of the lens, (2) the size of the lens, (3) the distance between the object and the lens, and (4) a so-called polarizing filter. In biology, they are (1) the amount of water, (2) salt, (3) a fertilizer stick, and (4) an unspecified white powder. The videos are the pre-service teachers’ main source of information, supplemented only by the inquiry table that documents the learners’ experiments.
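The parallel structure of the two tasks can be restated compactly as data; the following sketch merely transcribes the description above into TypeScript notation:

```typescript
// The parallel structure of the two inquiry tasks, transcribed from the
// description above; only the notation is ours.
const experiments = {
  physics: {
    dependentVariable: "optimal distance between lens and imaging screen",
    independentVariables: [
      "curvature of the lens",
      "size of the lens",
      "distance between object and lens",
      "polarizing filter",
    ],
  },
  biology: {
    dependentVariable: "plant growth",
    independentVariables: [
      "amount of water",
      "salt",
      "fertilizer stick",
      "unspecified white powder",
    ],
  },
} as const;
```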

Developing Video Scripts

When creating the simulations, we first devised and wrote down several fictional student profiles containing appropriate values for all relevant scientific reasoning subskills, with the objective of depicting realistic, average students. We then wrote scripts matching these profiles. These scripts were later handed to the student actors so they could prepare for their roles and learn their dialogue.

Interaction

By default, typical media player controls (e.g., play, pause, stop, forward, backward, replay, a time bar, and a time display) are disabled for the video area. The simulation platform thus mimics classroom instruction, during which there is likewise no opportunity to interrupt or revisit parts of the flow of events. That said, video interactivity and reflection phases may be helpful design features of video simulations, and they can also be investigated in this simulation environment.

The video links in the navigation area constitute the essential feature of the environment that renders it a simulation, because they enable the participants to “interact with the students” in the videos (see Fig. 7.2). During the planning and documentation phases of each experiment in the video, groups of video links with questions that might be appropriate at this point are displayed in the navigation area. When the learners run the experiment or move on to the next experiment, the group of video links disappears and is eventually replaced by a new group of video links.

Fig. 7.2 Flowchart for the simulations

If a participant decides to ask a certain question (for example: “What do you want to find out now?”), he or she may click on the corresponding link. The video segment containing the teacher question and learner response is then inserted at the next appropriate point in the main video. Until that point, participants can withdraw their selection by clicking on the video link a second time. They may also select more than one video link; if several are selected, the corresponding video segments are played in a prespecified sequence. After the additional video segment(s), the main video continues. Only the remaining video links are then displayed; hence, no video segment can be viewed twice.
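The following sketch summarizes this selection logic as we read it from the description above (it is not the platform’s actual code): a second click withdraws a pending selection, pending segments are inserted in a prespecified sequence, and a played segment cannot be selected again.

```typescript
// Sketch of the video-link selection logic as described above (our reading,
// not the platform's actual implementation).
class VideoLinkState {
  private pending = new Set<string>(); // selected but not yet played
  private played = new Set<string>();  // played segments cannot be reselected

  // A click selects a link; a second click withdraws the pending selection.
  toggle(linkId: string): void {
    if (this.played.has(linkId)) return;
    if (this.pending.has(linkId)) {
      this.pending.delete(linkId);
    } else {
      this.pending.add(linkId);
    }
  }

  // At the next appropriate point in the main video, all pending segments
  // are inserted in a prespecified sequence and then marked as played.
  segmentsToInsert(prespecifiedOrder: string[]): string[] {
    const queue = prespecifiedOrder.filter((id) => this.pending.has(id));
    for (const id of queue) {
      this.pending.delete(id);
      this.played.add(id);
    }
    return queue;
  }
}
```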

After the main video has ended, a group of video links is displayed that comprises questions which do not refer to individual experiments, but rather to the sequence of experiments as a whole (see Fig. 7.2). One example of these ending questions is: “Is there one or even more than one experiment that wasn’t completely necessary and therefore could have been left out?” When the participant selects one of these video links, the video segment with the corresponding question is played immediately. After the video segment has ended, again only the remaining video links are displayed, and the next question can be selected.

The participants have only limited time for questions during each simulation. It is therefore impossible to view all additional video segments, and participants have to choose the most relevant and important ones. These interactions should always serve the purpose of gaining additional relevant information about the learner’s scientific reasoning skills that cannot be obtained from the main video. In some cases, it also makes sense to postpone the selection of a specific question, because the corresponding information may occur in the main video at some later point; the question should then only be asked later if it turns out that the main video does not contain the information. To help the participants keep track of the available time, both the time remaining for additional questions and the lengths of the video segments corresponding to the video links are displayed in the navigation area.
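A minimal sketch of the time-budget bookkeeping implied here, under the assumption that a question may only be asked if its segment still fits into the remaining question time (the concrete numbers are invented):

```typescript
// Assumed budget rule: a question can only be asked if its segment still
// fits into the remaining question time (values are invented examples).
function canStillAsk(remainingSeconds: number, segmentSeconds: number): boolean {
  return segmentSeconds <= remainingSeconds;
}

let remainingSeconds = 300;  // e.g., a five-minute question budget
const segmentSeconds = 45;   // segment length displayed next to the link

if (canStillAsk(remainingSeconds, segmentSeconds)) {
  remainingSeconds -= segmentSeconds; // the budget shrinks with each question
}
```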

7.1.5 Measuring Pre-Service Teachers’ Diagnostic Activities and the Quality of Their Diagnoses of Students’ Scientific Reasoning Skills

The participants’ performance in the simulation is later evaluated using accuracy and efficiency measures. Accuracy captures the quality of the participants’ performance in the simulations in terms of choosing the “right” questions; here, we consider the “right” questions to be those that can be expected to provide useful information for the diagnostic process. Since some unimportant questions are additionally needed as distractors, there are also questions that are either completely irrelevant or focus on information that can easily be acquired just by watching the main video. Efficiency, in turn, sets accuracy in proportion to time, which is important because participants are encouraged to use their limited question time wisely.
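The exact scoring rules are not spelled out here, but one plausible operationalization of the two measures might look like this (a sketch; the formulas are our assumption):

```typescript
// Assumed operationalization: accuracy as the share of selected questions
// that are relevant; efficiency as accuracy per unit of time used.
function accuracy(selectedRelevant: number, selectedTotal: number): number {
  return selectedTotal === 0 ? 0 : selectedRelevant / selectedTotal;
}

function efficiency(accuracyScore: number, minutesUsed: number): number {
  return minutesUsed === 0 ? 0 : accuracyScore / minutesUsed;
}

// Example: 6 of 8 selected questions relevant, 5 minutes of question time.
const acc = accuracy(6, 8);     // 0.75
const eff = efficiency(acc, 5); // 0.15 accuracy points per minute
```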

In addition to the performance evaluation within the simulations, we also evaluate the participants’ written diagnoses, using only a measure of accuracy. Both the individual diagnoses and—in the collaborative test condition—the additional collaborative diagnoses are rated by comparing them to a sample solution. This sample solution is based on the student profiles used to create the scripts, which include the envisaged values for all relevant scientific reasoning subskills. The level of congruence between the sample solution and a diagnosis serves as the accuracy measure.
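One conceivable way to quantify this congruence is the proportion of subskills on which the diagnosis matches the profile, assuming, purely for illustration, that both assign each subskill one of three levels:

```typescript
// Assumed three-level rating per subskill; congruence as the proportion of
// subskills on which the diagnosis matches the profile's envisaged value.
type SubskillLevels = Record<string, "low" | "medium" | "high">;

function congruence(diagnosis: SubskillLevels, profile: SubskillLevels): number {
  const subskills = Object.keys(profile);
  const matches = subskills.filter((s) => diagnosis[s] === profile[s]).length;
  return matches / subskills.length;
}

// Example with invented values: 2 of 3 subskills match -> about 0.67.
congruence(
  { hypotheses: "medium", experiments: "high", conclusions: "low" },
  { hypotheses: "medium", experiments: "medium", conclusions: "low" },
);
```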

7.1.6 Research on (Support for) Pre-Service Teachers’ Diagnosis of Students’ Scientific Reasoning Skills in Video-Based Simulations

The simulation environment and the video simulations described in this contribution provide a basis for investigating several important research questions concerning pre-service teachers’ diagnosis of students’ scientific reasoning skills. Our research focuses on two main areas: on the one hand, the role that different types and content-related facets of professional knowledge play in (pre-service) teachers’ diagnostic activities and the quality of their diagnoses of students’ scientific reasoning skills; on the other hand, the kinds of scaffolding that foster the development of pre-service teachers’ individual and collaborative diagnostic competences in video-based simulations. Putting our research interests in context, we focus on Research Questions 2 and 4, as mentioned in both the introduction by Fischer et al. (2022) and the concluding chapter by Opitz et al. (2022). In particular, we investigate

1. how conceptual content knowledge, scientific reasoning skills, and conceptual pedagogical content knowledge about scientific reasoning and its diagnosis among pre-service teachers in physics and biology are related to their diagnostic activities and the quality of their diagnoses,

2. how the collaborative vs. individual development of a diagnosis influences diagnostic activities and the quality of the diagnosis, as well as what role the distribution of information (shared vs. separate experiences of learners’ inquiry activities during lessons) plays in this respect, and

3. to what extent a collaboration script for joint diagnosis can enhance diagnostic activities and the quality of the diagnosis as well as the development of individual and collaborative diagnostic competences.

Thus, in the long run, the present research may contribute to the improvement of teacher education at universities.