
Learner modeling for adaptive scaffolding in a Computational Thinking-based science learning environment

  • Satabdi Basu
  • Gautam Biswas
  • John S. Kinnebrew

Abstract

Learner modeling has been used in computer-based learning environments to model learners’ domain knowledge, cognitive skills, and interests, and customize their experiences in the environment based on this information. In this paper, we develop a learner modeling and adaptive scaffolding framework for Computational Thinking using Simulation and Modeling (CTSiM)—an open ended learning environment that supports synergistic learning of science and Computational Thinking (CT) for middle school students. In CTSiM, students have the freedom to choose and coordinate use of the different tools provided in the environment as they build and test their models. However, the open-ended nature of the environment makes it hard to interpret the intent of students’ actions, and to provide useful feedback and hints that improve student understanding and help them achieve their learning goals. To address this challenge, we define an extended learner modeling scheme that uses (1) a hierarchical task model for the CTSiM environment, (2) a set of strategies that support effective learning and model building, and (3) effectiveness and coherence measures that help us evaluate students’ proficiency in the different tasks and strategies. We use this scheme to dynamically scaffold learners when they are deficient in performing their tasks, or when they demonstrate suboptimal use of strategies. We demonstrate the effectiveness of our approach in a classroom study where one group of 6th grade students received scaffolding and the other did not. We found that students who received scaffolding built more accurate models, used modeling strategies effectively, adopted more useful modeling behaviors, showed a better understanding of important science and CT concepts, and transferred their modeling skills better to new scenarios.

Keywords

Open ended learning environments · Modeling and simulation · Learning by modeling · Computational Thinking · Science education · Learner modeling · Adaptive scaffolding

1 Introduction

Learner modeling has been an integral component of intelligent tutoring systems (ITS) since their inception. The primary goal has been to analyze students’ solutions and provide individualized remedial advice to help them improve their learning and problem solving (Wenger 1987; Conati et al. 2002; Mitrovic 2012), and to individualize learning content and curriculum sequencing in intelligent tutors, adaptive hypermedia and recommender systems (Anderson et al. 1995; Brusilovsky and Peylo 2003; Conejo et al. 2004; Brusilovsky and Millán 2007; Shang et al. 2001). Depending on the specific learning goals and functionality adopted by a learning environment, the aspects of the learner that are modeled and the way learner models impact learning can vary significantly.

For example, in constraint-based tutors, the violation of domain and problem-solving constraints imply errors, and form the basis for providing advice on the correct form of the constraints (Mitrovic 2012). Many hypermedia-based learning environments, on the other hand, use information on pages visited by learners and the duration of time spent on each page to provide individualized suggestions on other relevant pages to read, and ones to revisit to gain a deeper understanding of the material (e.g. Azevedo 2005; Bannert and Reimann 2012). Similarly, curriculum sequencing environments model the parts of the curriculum learners have accessed before to adapt and sequence the learning content, and propose learning paths (Brusilovsky and Peylo 2003; Shang et al. 2001). Information retrieval and filtering systems find documents that are most relevant to user interests, and then order them by perceived relevance, based on user profiles that represent user interests in terms of keywords or concepts (Brusilovsky and Millán 2007). On the other hand, most inquiry-oriented and modeling and simulation based learning environments track students’ actions in the system and the impact of the actions on the properties of the solution being generated to provide individualized feedback and hints that adapt to students’ demonstrated knowledge levels and skills (Arts et al. 2002; Duque et al. 2012).

In this paper, we develop a learner modeling approach to support adaptive scaffolding in Computational Thinking using Simulation and Modeling (CTSiM)—an open ended learning environment (OELE) that supports synergistic learning of science and Computational Thinking (CT) concepts in middle school science classrooms (Basu et al. 2014a; Sengupta et al. 2013). OELEs (Clarebout and Elen 2008; Land et al. 2012; Land 2000) are learner centered; they provide a learning context and a set of tools to help students explore, hypothesize, and build solutions to authentic and complex problems. They are typically designed to support thinking-intensive interactions with limited external direction (Land 2000, p. 62). These environments are well suited to prepare students for future learning (Bransford and Schwartz 1999) by helping them to develop their abilities for making choices independently when solving open-ended problems (Schwartz and Arena 2013).

As with other OELEs, learning with CTSiM involves a complex set of tasks, which include:

  1. Reading hypertext resources to identify relevant information needed for modeling a given science topic,

  2. Correctly interpreting the identified information and applying it to build conceptual and computational models using an agent-based framework,

  3. Observing model behavior in the form of simulations (visualized through animations and graphs),

  4. Comparing model behavior against that of an expert model¹ through simulation experiments, and

  5. Refining the conceptual and computational models based on identified issues and differences with the expert model simulations.
In previous studies with CTSiM (Basu et al. 2013, 2014a), middle school students showed significant learning gains on science content and CT concepts, but they also faced challenges when working on the tasks listed above. In those studies, the help students needed was provided largely by their science teachers and our research team.

Drawing on these experiences, we have designed a learner modeling and adaptive scaffolding framework to support student learning in CTSiM. The goal of our scaffolding framework is not to merely provide corrective feedback on the science models students build, but to also offer useful strategies to support students’ model building, model checking, and information acquisition behaviors. For example, when needed, CTSiM may provide students support on model building strategies, such as seeking information relevant to the part of the model being built or tested, building and testing the model in parts, and modeling a topic conceptually to understand the scope of the model and the interactions between its components before trying to construct more detailed computational models. Going beyond several existing environments, our emphasis is on helping students gain insights into the reason(s) why they may have generated incorrect model behaviors and how they can systematically correct their models, thereby learning modeling skills that can transfer to other domains.

In the rest of this paper, we discuss our learner modeling and scaffolding methodology for CTSiM, and demonstrate its effectiveness in supporting students’ learning of science and CT concepts. Section 2 provides a background review of learner modeling techniques, and how they have been used to develop adaptive scaffolding mechanisms in a variety of computer-based learning environments. Of particular relevance to us is learner modeling and scaffolding in OELEs. Section 3 presents a hierarchical task modeling and related strategy modeling scheme for OELEs that forms the core of our learner modeling approach. We describe the CTSiM learning environment and associated learning activities in Sect. 4, and then present the learner modeling and scaffolding approach that we have developed for the CTSiM environment in Sect. 5. Section 6 describes a recent classroom study with 98 6th grade students to assess the effectiveness of our learner modeling and adaptive scaffolding approach in the CTSiM environment. We created two treatment conditions. The experimental condition received scaffolding from a pedagogical mentor agent, whereas the control condition used a version of the CTSiM system that provided no feedback or hints to the students. Our results presented in Sect. 7 clearly demonstrate the effectiveness of our scaffolding mechanisms. Students in the experimental condition built more accurate models, used good modeling strategies frequently and adopted more useful modeling behaviors, showed a better understanding of important science and CT concepts, and were more effective in transferring their modeling skills to new scenarios. Finally, Sect. 8 summarizes the contributions of this research and discusses our plans for further analyses and directions for future work in refining our learner modeling and scaffolding approach.

2 Background review

Depending on the computer based learning environment, learner modeling can capture different characteristics of learners, ranging from learners’ domain knowledge to their abilities to apply relevant cognitive and metacognitive processes during problem solving. However, there have been very few attempts to build integrated learner models that cover students’ cognitive, metacognitive, and self-regulation processes, and their relation to students’ learning performance and learning behaviors. In this section, we provide a comprehensive background review of learner modeling techniques in computer based learning environments including OELEs, and discuss how they have been used to develop adaptive scaffolding mechanisms.

2.1 Learner modeling in computer-based learning environments

One of the primary goals of intelligent computer-based learning environments has been to adapt instruction to the specific needs of the learner (Lajoie and Derry 1993; Woolf 2009). Much of the initial work in learner modeling arose from the research and development of ITSs, whose primary architecture is assumed to be made up of three components (Self 1998; Wenger 1987): (1) knowledge about the domain being taught (domain knowledge module), (2) knowledge of how to communicate and interact with the learner (pedagogical module), and (3) knowledge about the student (student module). A classical approach to student modeling assumes that the student’s knowledge is a subset of the expert knowledge included in the domain module. This is the overlay formulation for student modeling (Carr and Goldstein 1977; Sison and Shimura 1998; Weber and Specht 1997). However, overlay modeling is restrictive; it implies that the student’s knowledge is a strict subset of expert knowledge, and it does not account for bugs and misconceptions the student may have. To account for student misconceptions, overlay modeling has been extended to perturbation based modeling, which captures flawed versions of expert knowledge units. While the idea of perturbations is general enough to apply to any modeling representation, it has primarily been applied to rule-based representations in the tutoring systems literature, e.g., cognitive tutors (Anderson et al. 1995), and buggy rule models (Brown and Burton 1978; Brown and VanLehn 1980).

Linked to perturbation based modeling, research work in the past two to three decades has focused on how to accurately diagnose a student’s cognitive state from the activities they conduct in the system (Ohlsson 1986). Continual tracking and analysis of students’ activities and performance as they work on the system is termed “diagnosis,” and it forms the basis for dynamic student models that reflect how students’ knowledge state changes as they interact with the system. The simplest form of diagnostics is model tracing, where the tutoring program uses an underlying model to generate traces of the problem solution, and compares students’ activities to these traces (VanLehn 1988; Anderson et al. 1985). However, model tracing is localized to individual problems, and knowledge tracing methods have been developed to extend model tracing across problems and create student models across time. Examples of knowledge tracing methods include Bayesian Knowledge tracing (BKT) developed for Cognitive Tutors (Corbett and Anderson 1995). Simpler versions of BKT assume that each problem solving step can be linked to a single skill, and a student’s success or failure in that step is an indicator of how well the student has mastered that skill. This approach to skills modeling is particularly relevant for tutors that present problems and monitor fine-grained skill mastery, and provide scaffolding on skills and skill components during problem solving (Desmarais and Baker 2012). The sophistication of diagnosis processes can be improved by using Bayesian networks that take into account students’ proficiency in related knowledge components and the level of difficulty of the problem (e.g., Conati et al. 2002).

However, the realization that learners may be more than just mini-experts with a subset of the expert’s knowledge and a few misconceptions has led to the notion of learner-based approaches to student modeling (Elsom-Cook 1993). Learner-based modeling de-emphasizes learner state, and focuses more on learning mechanisms, which capture why a student may have certain knowledge (correct or incorrect) and also what and how the student might learn from a given intervention. A very early approach to learner-based modeling was the Genetic Graph, which decomposed the requisite skills for a domain into atomic components and created links, such as generalization, correction, and refinement, between these components as evolutionary relationships (Goldstein 1979). The links implied directions the system could take in helping students learn new knowledge components and procedures. Another approach, Automated Cognitive Modelling (Langley and Ohlsson 1984), used machine learning techniques to construct a student model off-line from a student’s problem-solving traces, represented as a set of production rules. This approach extended previous work on buggy models in that the mal-rules were not pre-determined, but were constructed directly from the data.

Recently, researchers working in adaptive hypermedia environments have begun to exploit the richness of Semantic Web technologies in developing student models. These models have the advantages of formal semantic representations, reuse, portability, and automatic serialization into a format compatible with popular logical inference engines (Dolog et al. 2008; Jovanovic et al. 2008; Jovanović et al. 2009; Winter et al. 2005). Research has also been conducted on developing ontology-based frameworks for sharing student profiles between different learning systems (Dolog and Schaefer 2005). Jeremić et al. (2012) point out that though these models are based on a more advanced knowledge representation technology, their approach to students’ knowledge (and/or competency) modeling is still based on overlay models, with all of their accompanying deficiencies.

In summary, student models can represent a wide range of students’ characteristics, but students’ knowledge states and learning preferences are the most common aspects that have been modeled (Chrysafiadi and Virvou 2013). A number of recent reviews provide more details of student modeling approaches. For example, Brusilovsky and Millán (2007) discuss user modeling in adaptive hypermedia and educational systems, Desmarais and Baker (2012) provide an extensive review of uncertainty-based and machine learning approaches to student modeling, and Chrysafiadi and Virvou (2013) provide a comprehensive literature review of student modeling approaches from 2002 to 2012.

Recent advances in learner modeling, especially in exploratory environments, focus on scaffolding of metacognition and self-regulation processes that include goal setting and planning, judgment of learning, and self-monitoring (Aleven et al. 2006; Azevedo and Hadwin 2005; Biswas et al. 2010; Kinnebrew et al. 2014; Kramarski and Gutman 2006; Luckin and du Boulay 1999; Montalvo et al. 2010; Moos and Honkomp 2011; Roll et al. 2009; Winne 2014). There has been work on modeling other aspects of learners like their collaborative skills, motivation, and affect (Desmarais and Baker 2012). However, there have been very few attempts to build integrated student models that cover all aspects of their learning process, i.e., the cognitive, metacognitive, and self-regulation processes, and their relation to the students’ learning performance and learning behaviors. In this paper, we address this problem by adopting a task-oriented approach combined with strategy modeling to capture students’ cognitive and metacognitive processes during learning. We describe the task and strategy-oriented modeling approaches in Sect. 3.

2.2 Using learner models to provide scaffolds in traditional intelligent learning environments

Shute (2008) points out that formative feedback and scaffolding is “generally regarded as crucial to improving knowledge and skill acquisition” in computer based learning environments. Wood et al. (1976) defined the original notion of scaffolding as an adult helping a child with elements of a task that were currently beyond the child’s capacity, thus enabling the child to focus on aspects of the task that were within his or her range of competence. This would help the child successfully complete the overall learning task. Puntambekar and Hubscher (2005) have drawn analogies between this notion of scaffolding and Vygotsky’s discussion of social interaction as a key component of cognitive development. Bangert-Drowns et al. (1991) have discussed a number of advantages of feedback and hints in terms of helping students identify and correct misconceptions and errors, develop efficient and effective problem solving strategies, and improve their metacognitive and self-regulation skills. Van der Kleij et al. (2015), in a recent review, conclude that more elaborate feedback and hints produce higher learning outcomes, especially for higher order learning constructs. Therefore, in addition to supporting cognitive and metacognitive processes in learning, feedback and hints can also play a significant role in developing effective self-regulation skills, increasing motivation and engagement, and reducing frustration (Lepper and Chabay 1985; Shute 2008).

In spite of the importance of feedback and hints in the learning process, feedback may not always have a positive impact on learning. For example, Baker et al. (2004) and others (e.g., Walonoski and Heffernan 2006) have discussed how well-meaning tutor advice to support student learning may be misused by students who game the system and rely on bottom-out hints to solve problems. Other studies have shown that students, left to their own devices, have difficulties in realizing when to seek help, and how to use the feedback and hints provided (Aleven et al. 2004; Karabenick and Knapp 1991). In addition, feedback that focuses primarily on summative scores and interrupts the learner at inopportune moments has a negative impact on learning (Fedor et al. 2001; Shute 2008).

Puntambekar and Hubscher (2005) have clearly articulated some of the central notions of successful scaffolding and feedback and hints: (1) a shared understanding of the goals of the current learner activity; (2) a clear indication that the learner is not going to be successful in achieving specific tasks that are associated with the learner’s current goal; and (3) an ongoing diagnosis of the learner’s current level of understanding of the specific task and related tasks. It is clear that learner modeling approaches that accumulate and aggregate information about the learner’s abilities and learning behaviors can be combined with assessments of a learner’s current activities in the system to play a central role in designing adaptive scaffolding schemes that accentuate the positive aspects of scaffolding while avoiding its potential negatives.

In more detail, Elsom-Cook (1993) reports six primary functions that student models can play in supporting learners: (1) corrective, directed at helping students correct their misunderstandings or errors in learning and problem solving; (2) elaborative, to help students learn the knowledge that they lack; (3) strategic, focused on helping students invoke a known procedure or piece of knowledge that they are unable to apply when required; (4) diagnostic, which uses inference procedures to combine multiple instances of student work and infer where students lack abilities; (5) predictive, which anticipates how a student is likely to respond in a particular (e.g., problem solving) situation, and, therefore, pre-determines how it will support the student or provide hints; and (6) evaluative, directed at providing a comprehensive assessment of the level of knowledge and achievement of the student. In many ways, learning environments may use immediate or more persistent forms of the learner model to provide adaptive scaffolds, feedback, and hints that correspond to one or more of the six forms of scaffolds listed above. In this paper, we develop scaffolding mechanisms based on our comprehensive learner modeling scheme that combines the use of corrective, elaborative, strategic, and diagnostic schemes to help learners develop their cognitive and metacognitive processes as they work in the CTSiM environment.

3 Learner modeling in open ended learning environments

As discussed earlier, OELEs provide a learning context and a set of tools for learning and solving complex problems. To be successful in these environments, students have to develop a number of different skills and strategies to become effective learners and problem solvers. From a self-regulation and metacognitive perspective, the complex nature of the problems requires students to develop strategies for decomposing their learning and problem solving tasks, and develop and manage their plans for accomplishing these tasks. The open-ended nature of the environment also implies that students have choices in the way they decompose, plan, sequence, and solve their given tasks. Along with this choice comes the responsibility for managing, coordinating, monitoring, evaluating, and reflecting on relevant cognitive processes and metacognitive strategies as they search for, interpret, and apply information to construct and test potential problem solutions. On the one hand, this presents significant challenges to novice learners, who may lack both the proficiency to use the system’s tools and the experience and understanding necessary to explicitly regulate their learning and problem solving in these environments. On the other hand, the large solution spaces implied by the open-ended nature of the environments and the complexities of the search space clearly make the application of traditional overlay and perturbation modeling techniques intractable. Learner-based modeling approaches that focus more on learning behaviors and their impact on learning and evolution of the problem solution are likely to be more appropriate. To facilitate learner-based modeling, and provide a framework that encompasses the cognitive and metacognitive processes associated with students’ learning and problem solving tasks, we have developed a task- and strategy-based modeling framework to interpret and analyze students’ actions and activity sequences in the learning environment.

In this paper, we describe our learner modeling framework in the context of the CTSiM learning environment. The CTSiM OELE includes tools for (1) model building at two different levels of abstraction: conceptual and computational, (2) acquiring relevant information to aid the model-building and model checking tasks, and (3) model tracing and verification. As an example, when students learn about ecology by modeling a fish-tank, they can consult hypertext resources with information needed to model a fish tank, build conceptual and computational models in the agent-based paradigm, verify their models by running simulations, and refine their models accordingly. Additional information is provided on computational constructs that they may need to build their simulation model, and check their model behaviors by simulation.
Fig. 1

A task- and strategy-based modeling framework for OELEs

At the core of our approach to learner modeling is a task- and strategy-based approach that uses a hierarchical representation, as illustrated in the right half of Fig. 1. The hierarchical task model captures general learning and model building tasks common to OELE approaches and then successively refines them to task-specific actions in the CTSiM environment. The highest layers in this model describe domain-general tasks, such as information seeking and acquisition and solution construction, that a learner has to be proficient in to succeed in a variety of OELEs. The middle layers of the hierarchy focus on approaches for executing subtasks related to the more general tasks, and they may be specific to a particular OELE or genre of OELEs (for example, building a conceptual model in CTSiM). The lowest levels of the hierarchy map onto observable actions that are defined with respect to the tools and interfaces in an individual OELE. Therefore, the task model, represented as a directed acyclic graph, provides a successive, hierarchical breakdown of the tasks into their component subtasks and finally observable OELE actions. However, the task model does not indicate how subtasks may be combined to achieve task goals and subgoals, nor does it specify when a particular task is most suitable for learning or problem solving.

Instead, it is the strategy model, illustrated in the left half of Fig. 1, that captures this information in a form that can be directly leveraged for online interpretation of students’ actions. Thus, the strategy model complements the task model by describing how actions, or higher-level tasks and subtasks, can be combined to provide different approaches or strategies for accomplishing learning and problem-solving goals. By specifying a temporal order and conceptual relationships among elements of the task model that define a strategy, the strategy model codifies the semantics that provide the basis for interpreting a student’s actions beyond the categorical information available in the task model.

Strategies have been defined as consciously-controllable processes for completing tasks (Pressley et al. 1989) and comprise a large portion of metacognitive knowledge; they consist of declarative, procedural, and conditional knowledge that describe the strategy, its purpose, and how and when to employ it (Schraw et al. 2006). How to apply a particular procedure in the OELE describes a cognitive strategy, while strategies for choosing and monitoring one’s own cognitive operations describe metacognitive strategies. In this task-and-strategy modeling approach, strategies manifest as partially-ordered sets of elements from the task model with additional relationships among those elements determining whether a particular, observed behavior can be interpreted as matching the specified strategy. Figure 1 illustrates unary relationships that describe specific features or characterizations of a single strategy element, binary relationships among pairs of elements, and the temporal ordering among elements of the strategy. Further, if a relationship is not specified between any two elements in a strategy, then the strategy is agnostic to the existence or non-existence of that relationship. Because the elements of the task model used in the definition of strategies are hierarchically related, strategies range from more general strategy definitions to more specific variants. In this representation, specifying additional relationships, additional elements, or more specific elements (e.g., a specific action replacing a more general task/subtask) derives a more specific strategy from a general one, as illustrated in Figs. 1 and 2. For example, a general strategy for model construction in CTSiM may involve a reading task followed by application of the information read to build a part of a computational model. More specific strategy variants may be defined as reading about a particular domain-related or computational construct, followed by using that construct to build part of the computational model. An important implication of the hierarchical relationships among the strategy process definitions, illustrated in Fig. 1, is that a particular construct or strategy can be expressed in multiple variations that relate to each other. In particular, this allows us to link a set of desired and suboptimal implementations of a task or strategy to the corresponding task/subtask or strategy node in the learner model. As illustrated in Fig. 1, the general outline of the strategy is hierarchically linked to a variety of more detailed versions of the process that represent either desired variants or suboptimal ones.
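To make this representation concrete, the sketch below (written in Python; the element names and relation labels are illustrative and are not drawn from CTSiM’s implementation) shows one way a strategy could be encoded as a partially ordered set of task-model elements with optional unary and binary relations, and how a more specific variant can be derived from a general one by substituting more specific elements and adding relationships.

```python
# A minimal sketch (not the CTSiM implementation) of a strategy encoded as
# partially ordered task-model elements plus unary and binary relations.
# Element names and relation labels are illustrative only.

GENERAL_READ_THEN_BUILD = {
    "elements": ["Read", "Build computational model"],          # task-model nodes
    "temporal_order": [("Read", "Build computational model")],  # Read before Build
    "unary_relations": {},    # nothing specified: the strategy is agnostic
    "binary_relations": [],   # nothing specified: the strategy is agnostic
}

def specialize(strategy, replacements=None, extra_binary=None):
    """Derive a more specific variant by substituting more specific task-model
    elements and/or adding relationships; unspecified aspects stay agnostic."""
    variant = {key: value.copy() for key, value in strategy.items()}
    for old, new in (replacements or {}).items():
        variant["elements"] = [new if e == old else e for e in variant["elements"]]
        variant["temporal_order"] = [
            tuple(new if e == old else e for e in pair)
            for pair in variant["temporal_order"]
        ]
    variant["binary_relations"] = variant["binary_relations"] + list(extra_binary or [])
    return variant

# A more specific variant: read about a computational construct, then use that
# construct in the computational model (coherence expressed as a binary relation).
SPECIFIC_READ_THEN_BUILD = specialize(
    GENERAL_READ_THEN_BUILD,
    replacements={"Read": "Read CT resource page"},
    extra_binary=[("Read CT resource page", "Build computational model",
                   "shares-construct")],
)

print(SPECIFIC_READ_THEN_BUILD["temporal_order"])
print(SPECIFIC_READ_THEN_BUILD["binary_relations"])
```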
Fig. 2

Strategy matching in OELEs

The system can analyze a student’s behavior by comparing the student’s action sequences and their associated contexts against desired and suboptimal strategy variants defined in the strategy model, as illustrated in Fig. 2 (Kinnebrew et al. 2016). When a student’s action or action sequence corresponds to that specified in a strategy variant, and the temporal ordering between actions, and unary and/or binary relations associated with the actions satisfy the strategy definition, a strategy match is said to occur. Strategy matches provide a basis for estimating a student’s proficiency with respect to a particular strategy, and this information can be used to dynamically update the learner model. Strategy matches with desired strategy variants indicate desired learning-by-modeling and problem solving behaviors, while frequent matches with suboptimal strategy variants indicate a lack of proficient and desirable behaviors, and, therefore, an opportunity to provide scaffolding.
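The following simplified matcher sketch, which assumes each logged action carries a type and a context represented as a set of labels (an abstraction of the richer contexts CTSiM records), illustrates how an action sequence could be checked against a strategy variant that requires a temporal ordering and a coherence-style relation between two actions.

```python
# A simplified matcher sketch. It assumes each logged action is a dict with a
# 'type' and a 'context' (a set of labels), which abstracts the richer action
# contexts CTSiM records.

def matches_read_apply(action_log, first_type, second_type, require_shared_context=True):
    """Return True if an action of first_type is later followed by an action of
    second_type whose context overlaps it (a coherence-style binary relation)."""
    for i, earlier in enumerate(action_log):
        if earlier["type"] != first_type:
            continue
        for later in action_log[i + 1:]:
            if later["type"] != second_type:
                continue
            if not require_shared_context or (earlier["context"] & later["context"]):
                return True
    return False

log = [
    {"type": "SC_edit", "context": {"fish-feed"}},
    {"type": "Science_read", "context": {"fish-feed", "duckweed"}},
]
# A desired "construct, then read about the same behavior" variant matches here:
print(matches_read_apply(log, "SC_edit", "Science_read"))   # True
```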

4 The CTSiM learning environment and learning activities

4.1 Learning by modeling using the CTSiM environment

The CTSiM environment (Basu et al. 2014a; Sengupta et al. 2013) adopts a learning-by-modeling approach where students engage in model building, simulation, and analysis tasks by iterating among information acquisition, model building, simulation, and model checking and verification processes and strategies related to these processes. Figure 3 illustrates the CTSiM functional architecture that corresponds to this pedagogical approach.
Fig. 3

The CTSiM learning-by-modeling pedagogy

Students’ model building activities involve two linked representations for conceptual and computational model building. These two representations support modeling at different levels of abstraction and operationalize the important CT concepts of abstraction and decomposition within an agent-based modeling framework. Students start constructing an abstract conceptual model of the domain using an agent based framework in the ‘Model’ interface, which they can leverage to build the computational models of individual agent behaviors in the ‘Build’ interface. Though this implies a hierarchical structure between the two representations, students have the freedom to switch between the representations as they construct and refine their models in parts.
Fig. 4

A partially completed conceptual modeling representation for structuring the domain in terms of entities, their properties, and behaviors

In the conceptual model representation, students use a visual editor to identify the primary agents in the domain of study, along with the relevant properties and behaviors associated with these entities. Agent behavior modeling adopts a sense-act framework, i.e., students have to explicitly specify the properties that need to be sensed in order for the behavior to occur, and the properties that are acted upon when the behavior occurs. Students also have to specify environmental elements that participate in or affect individual agent properties and behaviors. For example, in the fish-macro activity, ‘fish’ represents an agent with properties like ‘hunger’ and ‘energy’, and behaviors like ‘feed’ and ‘swim’. ‘Water’ is an environment element with properties like ‘dissolved oxygen’ and ‘cleanliness’ that affect fish properties and behaviors. The ‘fish-feed’ behavior senses the properties ‘fish-hunger’ and ‘duckweed-existence’, and acts on properties like ‘fish-energy’. However, this representation abstracts several details like how and when the different properties are acted on in the various agent behaviors.
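As an illustration only, a fragment of the fish-macro conceptual model described above could be encoded as follows; the structure and names are hypothetical and do not reflect CTSiM’s internal representation.

```python
# A hypothetical encoding of a fragment of the fish-macro conceptual model:
# agents and environment elements with properties, and behaviors expressed in
# the sense-act form. Quantitative detail is deliberately absent at this level.

conceptual_model = {
    "agents": {
        "fish": {
            "properties": ["hunger", "energy", "location"],
            "behaviors": {
                # each behavior lists the properties it senses and acts on
                "feed": {"senses": ["fish-hunger", "duckweed-existence"],
                         "acts_on": ["fish-energy"]},
                "swim": {"senses": ["fish-energy"],
                         "acts_on": ["fish-location"]},
            },
        },
    },
    "environment_elements": {
        "water": {"properties": ["dissolved oxygen", "cleanliness"]},
    },
}

print(conceptual_model["agents"]["fish"]["behaviors"]["feed"]["senses"])
```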

While constructing their computational models, students follow a visual programming approach by dragging blocks from a palette onto the modeling canvas, and arranging them in a specific order to create their models. In order to keep the modeling task simple for middle school students, this representation abstracts the quantitative change in the agent property values. Therefore, most property changes are expressed qualitatively using increase and decrease blocks, or in terms of symbolic constants; for example, fish die when there is no dissolved O2 in the water.

The programming blocks include domain-specific (e.g., “speed-up” in kinematics, “feed” in biology) and domain-general primitives (e.g., conditionals and loops). The properties specified in the sense-act conceptual model representation for an agent behavior determine the set of domain-specific primitives available in the palette for the behavior. This dynamic linking helps students gain a deeper understanding of the representations and their relationships. For example, the ‘wander’ block is available in the palette of available blocks for the ‘fish-swim’ behavior only if ‘fish-location’ is specified as an acted-on property for the behavior. CTSiM adopts a single internal representation for specifying the agent-based conceptual and computational modeling constructs, and a sense-act framework that helps students focus on concepts associated with specific science topics, while also accommodating CT constructs and processes that apply across multiple science domains.
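A minimal sketch of this dynamic linking is shown below; the block-to-property mapping is hypothetical rather than CTSiM’s actual palette definition.

```python
# A sketch of the dynamic palette linking: the domain-specific blocks offered
# for a behavior depend on the properties declared as sensed or acted on in its
# conceptual (sense-act) specification. The block-to-property mapping below is
# hypothetical, not CTSiM's actual block catalogue.

BLOCK_REQUIRES = {
    "wander":          ("acts_on", "fish-location"),
    "eat":             ("senses",  "duckweed-existence"),
    "increase-energy": ("acts_on", "fish-energy"),
}

def palette_for(behavior_spec):
    """Return the domain-specific blocks whose required property appears in the
    behavior's sense-act specification."""
    return [block for block, (role, prop) in BLOCK_REQUIRES.items()
            if prop in behavior_spec.get(role, [])]

swim_spec = {"senses": ["fish-energy"], "acts_on": ["fish-location"]}
print(palette_for(swim_spec))  # ['wander']: offered only because 'fish-location'
                               # is an acted-on property of the behavior
```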

Figures 4 and 5 show the CTSiM interfaces for the two modeling representations. Figure 4 shows the part of the conceptual modeling interface where students construct the conceptual model of the science topic in terms of its entities, and their properties and behaviors. Students first identify the agents and the environment elements by clicking on the add (+) buttons at the top of the interface, and selecting from a list of possible agents and environment elements. Then, for each entity selected, students click on the add (+) buttons next to ‘Properties’ and ‘Behaviors’ to select the relevant properties and behaviors associated with the entity from a list of possible properties and behaviors. Figure 5 shows the conceptual–computational interface for modeling agent behaviors (‘fish-feed’ in this case). The leftmost panel depicts the sense-act conceptual representation of the current behavior that the student is modeling, while the middle panel shows the computational palette, and the right panel contains the student-generated computational model. The side-by-side placement on the interface helps students link the two representations and develop their models by leveraging the understanding they gain by going back and forth between the two representations (Chandler and Sweller 1992; Basu et al. 2016b).

To further scaffold the integration of the two representations, the red and green coloring of the sense-act properties (see Fig. 5) provides students with visual feedback about the match between their computational and conceptual models for the particular behavior model that they are constructing. All the sense-act properties are initially colored red, but as students use a property in their computational model, the corresponding property changes color from red to green. For example, in Fig. 5, the student has conceptualized that the O2-amount needs to be sensed for the fish-feed behavior. However, the computational model does not include this information, and hence the property is colored red. In such cases, students can verify individual agent behaviors and decide how to refine their computational and/or conceptual models.
Fig. 5

The linked conceptual–computational interface for modeling agent behaviors

As students build their conceptual and computational models, they can visualize and step through their model behaviors by simulating their constructed model in the ‘Run’ interface (see Fig. 6). They can also verify their evolving models (the entire model or a subset of agent behaviors) by comparing the model behaviors against a matched ‘expert’ simulation in the ‘Compare’ interface (see Fig. 7). CTSiM uses an embedded instance of NetLogo (Wilensky 1999) to display the agent-based simulations and plots in the ‘Run’ and ‘Compare’ interfaces. Students do not have access to the expert computational model, but they can study and analyze the differences between the simulation results to guide them in improving their models. NetLogo animations and plotting functionalities provide the students with a dynamic, real-time display of how their agents operate, thus making explicit the emergence of aggregate system behaviors.
Fig. 6

The CTSiM ‘Run’ interface with model tracing functionality

Fig. 7

The CTSiM ‘Compare’ interface for verifying model behaviors

The visual primitives used by students as they build their computational models are internally translated to an intermediate language and represented as code graphs of parameterized computational primitives. These code graphs remain hidden from the learner, and are translated into NetLogo code by the model translator. This generated NetLogo code is then combined with the domain base model to generate the simulations corresponding to the user models. The base model provides NetLogo code for visualization and other housekeeping aspects of the simulation that are not directly relevant to the learning goals of the unit. Furthermore, CTSiM supports stepping through a model, meaning that the system can highlight each primitive in the student model as it is being executed. During model stepping, each visual primitive is translated separately via the Model Interpreter, instead of the entire user model being translated to NetLogo code. Such supports for making algorithms “live” are directed at helping students better understand the correspondence between their models and simulations, as well as identify and correct model errors.
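The heavily simplified sketch below conveys the flavor of this pipeline; the intermediate primitives and the generated NetLogo-like lines are illustrative stand-ins rather than CTSiM’s actual code graphs or translator output.

```python
# A heavily simplified sketch of the translation pipeline: visual blocks are
# parsed into parameterized primitives (a toy "code graph"), which are then
# emitted as NetLogo-like lines. The primitives and templates are illustrative
# stand-ins, not CTSiM's actual intermediate language or translator output.

def to_code_graph(visual_blocks):
    """Turn a flat block list into parameterized primitives."""
    return [{"primitive": b["name"], "params": b.get("params", {})}
            for b in visual_blocks]

def to_netlogo(code_graph):
    """Emit one line per primitive, so each block could also be highlighted and
    stepped through individually during model tracing."""
    templates = {
        "decrease":    "set {prop} {prop} - 1",
        "increase":    "set {prop} {prop} + 1",
        "die-if-zero": "if {prop} <= 0 [ die ]",
    }
    return [templates[node["primitive"]].format(**node["params"])
            for node in code_graph]

blocks = [
    {"name": "decrease",    "params": {"prop": "hunger"}},
    {"name": "increase",    "params": {"prop": "energy"}},
    {"name": "die-if-zero", "params": {"prop": "energy"}},
]
for line in to_netlogo(to_code_graph(blocks)):
    print(line)
```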

CTSiM also provides two sets of searchable hypertext resources, one with information about the science topic being modeled (the ‘Science Book’), and the other with information about agent-based conceptual and computational modeling (the ‘Programming Guide’). Figure 8 depicts a screenshot from the ‘Programming Guide’. Students can also check their understanding of science and CT concepts by taking formative quizzes administered by a mentor agent in the system named Ms. Mendoza. The mentor agent grades students’ responses to the multiple-choice type quiz questions and provides feedback about the correctness of the responses along with suggested resource pages to read in case of incorrect responses.
Fig. 8

A screenshot from the CTSiM ‘Programming Guide’

4.2 CTSiM learning activity progression

Currently, the CTSiM learning progression comprises an introductory training activity and three primary modeling activities across two domains—Kinematics and Ecology. Students start with an introductory shape drawing activity for the purpose of training and practice; they are not assessed based on this activity. In this activity, students model a single agent and use simple CT concepts like iterations to build shapes like squares and spirals which they use to explore the relations among distance, speed, and acceleration. Then, in their first primary modeling activity, students progress to modeling a real-world phenomenon using more complex computational constructs like conditionals. Activity 1 models a roller coaster (RC) car moving along a track with four segments—up at constant speed (pulled by a motor); down (free fall); flat (constant speed); and up against gravity. In Activities 2 and 3, students advance to modeling ecological processes with multiple agents with multiple behaviors in a fish tank system. In Activity 2, students build a macro-level, semi-stable model of a fish tank with two types of agents: fish and duckweed, and behaviors associated with the food chain, respiration, locomotion, and reproduction of these agents. Since the waste cycle is not modeled, the build-up of toxic fish waste results in the non-sustainability of the macro-model (the fish and the duckweed gradually die off). In Activity 3, students address the non-sustainability by introducing micro-level entities, i.e., Nitrosomonas and Nitrobacter bacteria, which together support the waste cycle, i.e., convert the ammonia in the fish waste to nutrients (nitrates) for the duckweed. The plots generated by the simulation models help students gain an aggregate level understanding of the different cycles in the fish tank ecosystem, and their role in establishing the interdependence and balance among the different agents in the system.

5 Learner modeling and adaptive scaffolding in the CTSiM environment

In this section, we present the learner modeling and adaptive scaffolding framework that we have developed for the CTSiM environment. Figure 9 provides an overview of the framework. The primary goal for providing adaptive scaffolding in CTSiM is to help students become proficient in:
  • cognitive processes related to CTSiM tasks and subtasks as defined in our task model hierarchy, and

  • cognitive and metacognitive strategies that support effective learning and model building.

This form of adaptive scaffolding goes beyond the purely corrective and diagnostic approaches to feedback and hints. Instead, it is designed to help students shift their focus to effective learning and model building by monitoring their model building processes. In CTSiM they can accomplish this by leveraging the links between conceptual and computational models to systematically build complex models in parts, and by developing the abilities to effectively test their evolving models by comparing against behaviors generated by a corresponding but correct expert model.

To support this form of scaffolding, our learner modeling scheme is derived from the task and strategy modeling scheme discussed in Sect. 3. Unlike other learner models described in this special issue, the learner modeling scheme in CTSiM does not explicitly capture students’ factual knowledge about the kinematics or ecology domains being taught. For example, Pelánek et al. (2016) discuss learner modeling efforts for capturing students’ factual knowledge in learning environments where learners may differ considerably in their prior domain-specific factual knowledge. This factual knowledge captured in the learner model is then used for automatic multiple-choice question generation. On the other hand, in CTSiM, the information in the learner model helps adapt the mentor agent’s strategy feedback provided to the students. The learner modeling scheme in Grawemeyer et al. (2017) is similar to the approach we use in CTSiM. It serves the purpose of individualizing student feedback and choosing between more and less interruptive feedback presentation methods. However, this learner model captures information about student affect in Fractions Lab—an environment that fosters conceptual and procedural knowledge of fractions. As discussed, CTSiM focuses on tracking students’ effective and non-effective use of learning strategies, not their affective states.

In more detail, the learner modeling scheme in CTSiM accumulates information about students’ abilities to perform different CTSiM tasks (as specified in the task model hierarchy in Fig. 10), using diagnostic procedures and computing performance metrics that accurately capture students’ model building performance, their cognitive skills, and their effectiveness in applying relevant strategies. Inferring students’ use of strategies requires tracking students’ actions on the system along with the context in which the actions are performed. While the information stored in the learner model is used to individualize the scaffolds students receive in the CTSiM environment, students never gain direct access to their learner models, unlike the open learner modeling approach discussed in Long and Aleven (2016). In that approach, students learning about fractions in the Lynette environment can inspect system-computed values of their skill levels, displayed using a skill meter. This approach helps students individualize their curriculum by selecting the level of difficulty for subsequent problems they solve.

In Sect. 5.1, we describe the CTSiM-specific task and strategy models; in Sect. 5.2, we detail how we capture and update information about students’ task performance and strategy use in the learner model. Section 5.3 describes our adaptive scaffolding approach based on information contained in the learner model.
Fig. 9

Learner modeling and adaptive scaffolding framework for CTSiM

Fig. 10

The CTSiM task model

5.1 CTSiM task and strategy model

The CTSiM task model hierarchy is shown in Fig. 10. The top level of the model covers three broad classes of tasks that are relevant to a large number of OELEs: (i) information seeking and acquisition, (ii) solution construction and refinement, and (iii) solution assessment. Each of these OELE task categories is further broken down into three levels that represent: (i) general task and subtask descriptions that are common across the specific class of OELEs that involve learning by modeling; (ii) CTSiM-specific descriptors for these tasks; and (iii) actions within the CTSiM environment that students use to execute the various tasks.
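An abbreviated rendering of this hierarchy, paraphrased from the description above rather than transcribed exactly from Fig. 10, might look as follows:

```python
# An abbreviated, illustrative rendering of the CTSiM task model
# (category -> task -> observable actions); names are paraphrased from the text
# and are not an exhaustive transcription of Fig. 10.

CTSIM_TASK_MODEL = {
    "Information acquisition (IA)": {
        "Read science resources":      ["Science read"],
        "Read CT/modeling resources":  ["Programming Guide read"],
        "Check understanding":         ["Take quiz"],
    },
    "Solution construction (SC)": {
        "Build conceptual model":      ["Domain-structure edit", "Sense-act edit"],
        "Build computational model":   ["Computational model edit"],
    },
    "Solution assessment (SA)": {
        "Test own model":              ["Run / step through model"],
        "Compare with expert model":   ["Compare entire model", "Compare partial model"],
    },
}

# Leaf actions are what the system logs; the higher levels give those actions meaning.
for category, tasks in CTSIM_TASK_MODEL.items():
    print(category, "->", list(tasks))
```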

Information acquisition (IA) involves identifying relevant information, interpreting that information in the context of a current task or subtask (e.g., solution construction and refinement), and checking one’s understanding of the information acquired in terms of the overall task of building correct models. In CTSiM, students are provided with separate searchable hypertext resources that contain the following information: (i) science content relevant to the science topic being modeled, and (ii) information and examples about conceptual and computational modeling and uses of CT constructs. Students combine information from the two types of resources to build their science models using an agent based modeling approach, and use computational constructs to model agent behaviors using a sense-act framework. Students can check their understanding of the information acquired by taking quizzes provided in the system by the mentor agent, Ms. Mendoza, and can then use the quiz feedback to identify science and CT concepts they need to work on, and the relevant sources of information (specific resource pages) for learning about those concepts.

Solution construction (SC) tasks involve applying information gained through information seeking and solution assessment activities to construct and refine science models. In CTSiM, the science model is described by linked conceptual and computational representations that students can build in parts (Basu et al. 2016b). As described in Sect. 4, conceptual model construction involves structuring the domain in terms of agents, environment elements, their properties and behaviors, as well as representing agent behaviors as sense-act processes. The computational model construction, which is linked to the conceptual model, represents the simulation model that includes all of the agent behaviors that are created by selecting and arranging domain-specific and CT blocks.

Solution assessment (SA) tasks involve running simulation experiments, either in the ‘Run’ window where students can step through their simulation code, and check the evolving model behavior by observing the animation and plots, or in the ‘Compare’ window where students compare their model behavior against an expert model behavior. The goal is to observe the behavior of the constructed model, and verify its correctness. This may require testing the model in parts, comparing the results generated by the student’s model against the behaviors generated by a corresponding expert model, and using this comparison to identify the correct and incorrect parts of the model. As discussed earlier, the student and the expert models are executed in lock step as NetLogo simulations. Observing and comparing the simulations helps infer incorrectly modeled agent behaviors, which students can combine with relevant information seeking actions to refine their existing solutions.

We use different sequences of these tasks, subtasks and actions described in the CTSiM task model, and combine them with information characterizing individual actions (unary relations) and relationships between different action sequences (binary relations) to specify a set of desired strategies or a ‘strategy model’ for CTSiM. While different unary relations can be used to characterize learners’ cognitive processes, we use a unary measure called ‘effectiveness’ to evaluate learners’ actions in the CTSiM environment. Actions are considered effective if they move the learner closer to their corresponding task goal. For example, effective IA actions should result in an improvement in the learner’s understanding of science and CT concepts required for model building in CTSiM. Likewise, effective SC actions improve the accuracy of learners’ conceptual and computational models, and effective SA actions generate information about the correctness (and incorrectness) of individual agent behaviors modeled by the learner. Overall, students with higher proportions of effective actions are considered to have achieved higher mastery of the corresponding tasks and cognitive skills.
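As a sketch, and assuming per-action effectiveness flags are produced by task-specific checks such as those described above, a task-level effectiveness score can be computed as a simple proportion:

```python
# A sketch of the 'effectiveness' unary measure: the fraction of a student's
# actions for a task whose outcome moved them toward the task goal. The
# per-action flags would come from task-specific checks (e.g., whether an SC
# edit brought the model closer to the expert model).

def task_effectiveness(actions, task):
    """Proportion of the student's actions of the given task type flagged effective."""
    relevant = [a for a in actions if a["task"] == task]
    if not relevant:
        return None   # no evidence yet for this task
    return sum(a["effective"] for a in relevant) / len(relevant)

log = [
    {"task": "SC", "effective": True},
    {"task": "SC", "effective": False},
    {"task": "SA", "effective": True},
]
print(task_effectiveness(log, "SC"))   # 0.5
```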

Similarly, many types of binary relations can be defined among tasks/actions to represent strategies. In this paper and in previous work (Kinnebrew et al. 2016) we have adopted ‘coherence’ metrics for defining effective strategies comprising action sequences. Two temporally ordered actions or tasks \((x \rightarrow y)\), i.e., x before y, taken by a learner exhibit the coherence relationship \((x \ge y)\) if x and y share contexts, i.e., the context for y contains information contained in the context for x. The context for an action comprises detailed information about the interface view(s) associated with the action, such as the specific science or CT page read, the particular conceptual or computational components edited, the part of the model worked on, or the agent behaviors compared. We can assume that students are more likely to be demonstrating effective metacognitive regulation when an action or task they perform is coherent with or relevant to information that was available in one of their previous actions or tasks.
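Treating an action’s context as a set of labels, and writing \(\mathrm{ctx}(\cdot)\) for that set (a notational simplification introduced here; CTSiM logs richer context information), the coherence relation can be written as

\[
x \ge y \;\iff\; (x \rightarrow y) \;\wedge\; \big(\mathrm{ctx}(x) \cap \mathrm{ctx}(y) \neq \emptyset\big).
\]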

In this version of CTSiM, we chose a set of five desired strategies (S1–S5), and analyzed students’ actions to detect when students were deficient in certain strategies and needed scaffolding. While we realize that these five strategies do not define a complete set of useful strategies for CTSiM, we chose them based on our observations of difficulties that students faced in our previous studies (Basu et al. 2013, 2016a). We previously classified students’ challenges as related to modeling, programming, agent based reasoning, and domain-knowledge, and noticed that students needed repeated help with identifying the agents and their interactions in a science topic, understanding domain concepts and connecting them to the different CTSiM tasks, understanding how to represent science concepts using CT constructs, observing effects of partial code snippets, identifying differences between the user model simulations and the expert simulation, and debugging by decomposing the task into manageable pieces. Based on our observations, we have refined the CTSiM interface by providing students with hypertext resources for science and CT content, and requiring that they work on a linked conceptual modeling task before they work on the computational modeling task. However, we wanted to ensure we could provide additional individualized scaffolds when we detected that students were not using the information sources in an efficient manner; in other words, when the information derived was not being used effectively for model building, for combining conceptual and computational modeling, or for debugging their models in parts.

Hence, three of the desired strategies, S1, S2, and S3, link SC and SA actions to IA actions. S4 focuses on the complexities of SA for larger models, and describes a strategy for testing the model in parts. S5 pertains to SC, and how to effectively use multiple linked representations to build the science model. Each of these strategies is discussed in greater detail below. As discussed earlier, all cognitive strategies involved with a single task are evaluated using effectiveness measures, whereas metacognitive strategies that combine actions linked to different tasks or sub-tasks are evaluated using coherence measures.
S1

Solution construction followed by relevant information acquisition strategy (SC-IA): This strategy relates to seeking information relevant to the part of the model currently being constructed by the student. It can be specified as a SC action (conceptual domain-structure edits, conceptual sense-act edits, or computational model edits) temporally followed by a coherent ‘Science read’ action (SC => Science Read). Coherence implies that the science resource page accessed contains information relevant to the agent or agent behavior being modeled in the SC action. For example, if a student switches to the science resources while modeling the sense-act structure of the ‘fish-breathe’ behavior, we consider the (SC => Science Read) strategy effective only if the science resource pages read contain information about the ‘fish-breathe’ behavior.

S2

Solution assessment followed by relevant information acquisition strategy (SA-IA): This strategy relates to seeking information relevant to the agent behaviors that were just assessed using a SA task (test model, compare entire model, or compare partial model). The IA that follows is required to be a coherent ‘Science read’ action (SA => Science Read), i.e., the science resource page contains information relevant to at least one of the agent behaviors assessed in the SA action.

S3

Information acquisition prior to solution construction or assessment strategy (IA-SC/SA): This strategy involves acquiring information about an agent behavior before modeling or assessing that agent behavior. A ‘Science Read’ action that is followed by a coherent SC or SA action (Science Read => SC|SA) defines this strategy.

S4

Test in parts strategy: When a student’s CTSiM model includes multiple agent behaviors, this strategy represents an approach where the student decides to assess a subset of the modeled behaviors at a time to make it easier to compare their model behaviors against the expert simulation. This strategy is characterized by the effectiveness of a single action, ‘Compare partial model’, for complex models where the expert model contains more than two agent behaviors. An effective ‘Compare partial model’ action generates information about the correctness or incorrectness of individual or small subsets of agent behaviors, as opposed to the entire set of agent behaviors. We specify an effective ‘Compare partial model’ action as one that compares at most two agent behaviors (a sketch of this check appears after the description of S5 below).

S5

Correspondence between Conceptual and Computational models strategy (Model-Build): This strategy involves building the science model in parts, maintaining the correspondence between the conceptual model and the computational model for each part. It can be represented as a ‘Conceptual sense-act build’ action followed by a coherent (linked) ‘Computational model build’ action (Sense-act build => Computational build), i.e., the computational edit adds a sensing block corresponding to a sensed property or an action block corresponding to an acted property for the same agent behavior. As described in Sect. 4, we provide students with visual feedback about their conceptual–computational model coherence by coloring sense-act properties green or red based on whether the properties are coherently used or not used in their computational models. This visual information provides students feedback on how well they are employing the Model-Build strategy.
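As an illustration of how these specifications can be operationalized, the sketch below shows a check for the S4 (test in parts) strategy; the field names and the way compare actions are logged are assumptions made for this example, not CTSiM’s actual data format.

```python
# An illustrative check for the S4 ("test in parts") strategy. The field names
# and the way compare actions are logged are assumptions made for this sketch,
# not CTSiM's actual data format.

def s4_effective(compare_action, expert_behavior_count):
    """An effective partial comparison isolates at most two agent behaviors; the
    strategy only applies when the expert model has more than two behaviors."""
    if expert_behavior_count <= 2:
        return None   # strategy not applicable to small models
    return len(compare_action["behaviors_compared"]) <= 2

print(s4_effective({"behaviors_compared": ["fish-feed", "fish-breathe"]}, 8))   # True
print(s4_effective({"behaviors_compared": ["fish-feed", "fish-breathe",
                                           "fish-swim", "duckweed-grow"]}, 8))  # False
```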

5.2 Learner modeling in CTSiM

The CTSiM learner model represents a data-driven scheme that keeps track of students’ performance on the various tasks and related actions defined in the hierarchical task model, as well as the strategies they use to combine and coordinate the different tasks. Figure 9 shows a complete learner model schema that maintains information about students’ effectiveness on each of the IA, SC, and SA tasks, as well as their strategy use for a set of strategies that combine the IA, SC, and SA tasks in different but meaningful ways. In this paper, we discuss only the subset of strategies, S1–S5, for which we have designed detectors in the current CTSiM environment. Also, in the current CTSiM system, our learner model focuses only on students’ task performance for the SC tasks that are related to conceptual and computational model building. We compute task performance using an effectiveness measure for the actions associated with the task, i.e., the proportion of actions whose consequence aligns with the purpose of the task. For example, a SC task that involves a conceptual ‘sense-act build’ action is effective if it results in the student’s conceptual model becoming closer to the expert conceptual model. Strategy use, on the other hand, counts the number of times the actions that make up the strategy are not coherent with each other, or one of those actions produces an ineffective result.

In this section, we describe the online learner information used by the learner modeling module, the modeling performance and behavior comparisons performed, and the details of the information maintained and updated in the learner model.

Learner actions in CTSiM, combined with information about the state of students’ conceptual and computational models, are used to evaluate students’ strategy use for the five strategies, S1–S5. Strategy use is tracked as actions or sequences of actions and their computed coherence and effectiveness measures. Our longer-term plan is to store information about students’ optimal and suboptimal strategy use, but the current learner model only stores the frequency of suboptimal or inappropriate use of each of the strategies S1–S5. Thus, the scaffolding module can directly use this information to decide when to provide support. Since there can be numerous ways in which a strategy may not be used optimally, we define specific suboptimal variants of the strategies that we want to detect in students’ learning behaviors, and provide scaffolds that help them overcome these deficiencies. For strategies associated with single actions, suboptimal strategy use can involve an ineffective instance of the action or a lack of the action altogether. Similarly, for strategies involving coherent action sequences, suboptimal strategy use can be defined by an action sequence whose component actions are not coherent with each other, or by the lack of the action sequence itself. Examples of suboptimal variants of the five strategies defined in this work are as follows:
S1

Suboptimal SC-IA: Suboptimal use of this strategy occurs when the part of the model the learner constructs has errors, and this is followed by the learner seeking information that does not correspond to the part of the model s/he was constructing. It can be specified as an ineffective SC action that is followed by a ‘Science read’ action, which is incoherent with the previous SC action.

S2

Suboptimal SA-IA: Suboptimal use of this strategy occurs when SA determines that one or more agent behaviors are incorrect, and the subsequent ‘Science read’ action is incoherent, i.e., it does not involve the reading of resource pages that are linked to the behaviors assessed to be incorrect.

S3

Suboptimal IA-SC/SA: A suboptimal occurrence of this strategy arises when a SA action finds an incorrect agent behavior, but the SA action is temporally preceded by a ‘Science Read’ action for other agent behaviors (the incoherent variant of the strategy), or is not preceded by any ‘Science Read’ action at all (lack of the action sequence).

S4

Suboptimal test in parts: A suboptimal use of this strategy occurs when a ‘Compare’ task during SA of a complex model with multiple erroneous agent behaviors does not provide sufficient information to find the source(s) of the errors. It can involve an ineffective ‘Compare entire model’ action or even an ineffective ‘Compare partial model’ action which does not provide enough information to pinpoint errors to specific agent behaviors.

S5

Suboptimal Model-Build: Suboptimal uses of this strategy involve a conceptual sense-act edit action that is either not temporally followed by a computational edit action or is followed by an incoherent computational edit. Ineffective use of this strategy is detected through the system’s visual feedback about the sense-act properties for the agent behaviors: if the properties are colored red, the strategy was used in an ineffective way.

The ‘strategy matcher’ function in the ‘learner modeling module’ compares these instances of suboptimal use of strategies S1–S5 against online learner information to calculate each learner’s frequencies of suboptimal strategy use. These frequency calculations are local: the counts for ineffective use of a strategy restart after the last time the student was scaffolded on that particular strategy.

Besides maintaining a measure of learners’ strategy use, the CTSiM learner model also maintains a local history of learners’ conceptual and computational modeling skills to help detect ineffective SC actions and the aspects of the modeling tasks that learners are struggling with. Since ineffective SC edits can either remove model elements required in the expert model from the student model or add model elements not required in the expert model to the student model, separate measures of ‘missing/correctness’ and ‘extra/incorrectness’ are maintained for students’ conceptual and computational models and their various components. Learners’ modeling skills are defined by measures comparing different aspects of their models against the corresponding expert models. Conceptual modeling skills are defined separately for the different conceptual model components so that the scaffolds can focus on specific aspects of the modeling task. The different conceptual components include agents, environment elements, properties, and behaviors chosen, as well as the sensed and acted-on properties specified for each agent behavior. The conceptual model comparator in the learner modeling module performs a simple set comparison between a student’s conceptual model for a topic and the expert conceptual model for the topic to compute ‘missing’ and ‘extra’ measures for each of the conceptual model components, which are stored in the learner model. The ‘missing’ measure for a conceptual component counts the number of elements of that component which are present in the expert model but missing in the student model. Similarly, the ‘extra’ measure for a component counts the number of elements of that component which are present in the student model but not in the expert conceptual model.
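A minimal sketch of this set-comparison step, assuming each conceptual component can be represented as a set of element names (the component names, element names, and data structures below are illustrative, not the CTSiM ones):

from typing import Dict, Set

ConceptualModel = Dict[str, Set[str]]  # component name -> set of elements

def compare_conceptual(student: ConceptualModel, expert: ConceptualModel):
    """Compute per-component 'missing' and 'extra' counts via a simple set comparison."""
    report = {}
    for component, expert_elems in expert.items():
        student_elems = student.get(component, set())
        report[component] = {
            "missing": len(expert_elems - student_elems),  # required by expert, absent in student model
            "extra": len(student_elems - expert_elems),    # present in student model, not required
        }
    return report

expert = {"agents": {"fish", "duckweed"}, "behaviors": {"fish-breathe", "fish-swim"}}
student = {"agents": {"fish"}, "behaviors": {"fish-breathe", "fish-eat"}}
print(compare_conceptual(student, expert))
# {'agents': {'missing': 1, 'extra': 0}, 'behaviors': {'missing': 1, 'extra': 1}}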

On the other hand, defining computational modeling skills involves more nuanced measures beyond ‘missing’ and ‘extra’ blocks, since merely having the same set of programming blocks as the expert model does not guarantee semantic correctness of the student model, and the same information can be modeled in a number of ways using different sets of blocks. While we cannot possibly account for all possible correct solutions, we have added a number of functions to our computational model comparator to minimize false positives (same set of blocks as the expert model, but different semantic meaning) and false negatives (blocks do not match those in the expert model, but similar semantic meaning). For example, if a conditional in a student model senses a property instead of its complement, or vice versa (e.g., using a ‘some-left’ block instead of a ‘none-left’ block), the consequent and alternative blocks can be interchanged to represent the same information. Another example of a false negative occurs when the expert model for an agent behavior contains a conditional and an action block that is independent of any condition and is hence placed outside the conditional block. If a student places two instances of the action block inside the conditional, once under the consequent and once under the alternative, the solution is less elegant, but conveys the same semantic meaning as the expert’s. The model comparator takes these possibilities into account while determining ‘missing’ and ‘extra’ blocks. To account for false positives, the model comparator checks whether action blocks in the student model occur under the correct set of conditions as defined in the expert model (irrespective of any condition, under a particular sensing condition, or under multiple simultaneous sensing conditions). The comparator also checks properties whose values are increased or decreased in an expert agent behavior to make sure their direction of change is the same in the student model for that agent behavior. Otherwise, an expert model with blocks ‘Increase (\(\hbox {CO}_{2}\)-amount), Decrease (\(\hbox {O}_{2}\)-amount)’ would be equated to a student model with blocks ‘Increase (\(\hbox {O}_{2}\)-amount), Decrease (\(\hbox {CO}_{2}\)-amount)’, since both models use the same set of four blocks. In summary, the computational model comparator defines computational modeling skills for each agent behavior in terms of the following: (a) the number of missing blocks in the behavior as compared to the expert model, (b) the number of extra blocks in the behavior as compared to the expert model, (c) whether all actions in the behavior occur under the correct set of conditions (yes/no), and (d) whether all property values modified in the behavior were changed in the correct direction (yes/no).
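As one example of these semantic checks, the direction-of-change comparison could be sketched as follows; the tuple representation of blocks is an assumption made for illustration.

def direction_changes(blocks):
    """Map each property to the direction ('Increase' or 'Decrease') in which it is changed."""
    return {prop: op for op, prop in blocks if op in ("Increase", "Decrease")}

def directions_match(student_blocks, expert_blocks) -> bool:
    """Check that every property changed in the expert behavior changes the same way
    in the student behavior."""
    expert_dirs = direction_changes(expert_blocks)
    student_dirs = direction_changes(student_blocks)
    return all(student_dirs.get(prop) == direction
               for prop, direction in expert_dirs.items())

expert = [("Increase", "CO2-amount"), ("Decrease", "O2-amount")]
student = [("Increase", "O2-amount"), ("Decrease", "CO2-amount")]
print(directions_match(student, expert))  # False, although both models use the same blocks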

While we can capture the state of students’ conceptual and computational models as they work in the CTSiM environment, we calculate and update the measures describing students’ conceptual and computational modeling skills only when students assess their models. This design decision was made so that the scaffolds for the model building tasks were not sensitive to the effects of individual SC edits, but depended on the evolution of students’ models between model assessments. Also, since we have designed our scaffolds in this version of CTSiM to depend on how students’ models evolve since the last model assessment, the learner model only maintains a local history of students’ modeling skills instead of maintaining a global one. In this version, the learner model stores a history of a student’s conceptual and computational modeling skills since the last time s/he was provided a scaffold for the particular model construction task.

5.3 The adaptive scaffolding framework

Students’ solution construction task performance and strategy use information captured in the learner model is used by the scaffolding module and combined with information about triggering conditions (frequency thresholds for triggering particular scaffolds, and the priority and ordering of scaffolds) to decide which task-based or strategy-based scaffold to provide. Each scaffold is delivered in the form of a mixed-initiative conversational dialogue initiated by Ms. Mendoza, and is anchored in the context of students’ modeling goals, their recent actions, and information that is available to students at that point in time (e.g., simulation information or domain information). The mixed-initiative, back-and-forth dialogues between the student and Ms. Mendoza are implemented as conversation trees (Klawe et al. 2002; McCalla and Murtagh 1991). The root node of the tree represents Ms. Mendoza’s initial dialogue, which then branches based on the conversational choices available to the student. Ms. Mendoza can respond to students’ choices using conversational prompts or by taking specific actions in the system. Such a structure captures the possible directions that a single conversation might take once it has been initiated. This conversation format engages students in a more authentic social interaction with Ms. Mendoza, and allows them to control the depth and direction of the conversation within the space of possible conversations provided by the dialogue and response choices (Segedy et al. 2013). Figure 11 provides an example of a scaffolding conversation tree for the IA-SC/SA scaffold, asking students to read about incorrectly modeled agent behaviors which they have modeled and assessed without reading. It illustrates how the agent (Ms. Mendoza) and students can together negotiate goals and plans using such mixed-initiative conversational dialogues. Separate conversation trees are defined for scaffolding each of the five strategies and the solution construction task. The conversation trees generally comprise 2–5 levels of agent prompts, with 1–4 student choices following each agent prompt, and several alternate conversation paths based on students’ choices and actions taken in the system.
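A conversation tree of this kind could be represented with a simple recursive node structure, as in the sketch below; the fields and the sample prompts are illustrative assumptions rather than the dialogues actually used in CTSiM.

from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class DialogueNode:
    prompt: str                                  # what Ms. Mendoza says at this node
    action: Optional[Callable[[], None]] = None  # optional system action taken at this node
    # Maps each response choice offered to the student to the next node in the conversation.
    choices: Dict[str, "DialogueNode"] = field(default_factory=dict)

leaf = DialogueNode(prompt="Try reading that page, then revisit the behavior in your model.")
root = DialogueNode(
    prompt="Some behaviors you just tested look incorrect. Would reading about them help?",
    choices={
        "Yes, show me the pages": leaf,
        "No, I want to keep modeling": DialogueNode(prompt="Okay, you can ask me again later."),
    },
)

# Walking one path through the tree:
print(root.prompt)
print(root.choices["Yes, show me the pages"].prompt)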

Our scaffolding approach is based on helping students with a task or strategy only when we detect that they are persistently facing problems, instead of correcting them every time we detect a problem. Hence, the scaffolding reasoner maintains a frequency threshold for triggering each scaffold. At the same time, we did not want students to struggle or remain in a confused state for a long time without receiving help. Therefore, we made educated estimates based on our previous experiences with CTSiM and set the frequency thresholds in the range of 3–5. We plan to conduct more systematic experiments in the future to determine frequency thresholds by studying how these thresholds affect students’ learning behaviors and performance.

Currently, these frequency thresholds are fixed, and do not differ between students or situations; however, students with a recent run of poor performance or suboptimal modeling behavior reach the thresholds more often and therefore receive more instances of scaffolding in our system. In the case of strategy scaffolds, the scaffolding reasoner reads the frequencies of suboptimal use of each strategy from the learner model and compares them with the corresponding strategy scaffold triggering thresholds. When the suboptimal strategy use reaches the set threshold, the scaffolding reasoner can choose to deliver scaffolds for the particular strategy if doing so fits with the stored priority or ordering of scaffolds. Similarly, the scaffolding reasoner stores frequency thresholds for triggering task-based scaffolds for conceptual and computational modeling. It takes the threshold frequency (say, ‘n’) for triggering a particular modeling scaffold and looks at the history of a student’s corresponding modeling skills to see if it can find ‘n’ instances where the modeling skill is imperfect, i.e., represented by a non-zero distance to the expert model.
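The threshold check itself could look like the sketch below; the specific threshold values are assumptions (the paper only states that thresholds were set in the 3–5 range), as are the strategy labels used as keys.

# Assumed per-strategy thresholds within the 3-5 range mentioned above.
THRESHOLDS = {"SC-IA": 3, "SA-IA": 3, "IA-SC/SA": 4, "Test-in-parts": 4, "Model-Build": 5}

def strategies_to_scaffold(suboptimal_counts: dict) -> list:
    """Return the strategies whose suboptimal-use count (accumulated since the last
    scaffold on that strategy) has reached its triggering threshold. Priority and
    ordering among simultaneously triggered scaffolds are handled separately."""
    return [name for name, count in suboptimal_counts.items()
            if count >= THRESHOLDS[name]]

# Counts are reset to 0 whenever the corresponding scaffold is delivered.
counts = {"SC-IA": 1, "SA-IA": 0, "IA-SC/SA": 4, "Test-in-parts": 2, "Model-Build": 5}
print(strategies_to_scaffold(counts))  # ['IA-SC/SA', 'Model-Build']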
Fig. 11  A scaffolding conversation tree asking students to read about incorrectly modeled agent behaviors

While we do not maintain a strict ordering between the task and strategy based scaffolds, we do maintain a priority list for situations where multiple scaffolds can be triggered simultaneously. For example, when a student performs a SA action where multiple compared agent behaviors have been modeled incorrectly, the ‘Test-in-parts’ strategy scaffold gets triggered first if it meets its triggering requirements, followed by the ‘Information acquisition prior to solution construction and assessment’ strategy scaffold. We first ensure that a student is not trying to compare too many incorrect behaviors simultaneously, because analyzing multiple errors at the same time may make it hard to pinpoint specific ones. When students test their model in parts, we provide scaffolds when we detect students have found incorrect agent behaviors, but they have not looked for information that will help them correct the error. If students have previously read about agent behaviors, but cannot correct incomplete or incorrect behaviors when testing in parts, we provide them with model building scaffolds that hint at using information they have read to correct specific aspects of their model behaviors.

The model building scaffolding uses a top-down approach by providing conceptual modeling scaffolds as long as the ‘missing’ score for any of the conceptual components in the learner model is greater than zero. Specifically, the scaffolds point students to the specific levels in the conceptual modeling hierarchy they need to focus on (starting with the set of entities, followed by the set of agent behaviors, and then the sense-act properties for the behaviors) and suggest consulting relevant resource pages to acquire the required information for correct conceptual modeling. Once a student’s conceptual model contains all the elements contained in the expert conceptual model (it may still contain extra elements beyond those in the expert model), the coherence measure between the student’s conceptual sense-act models and computational models triggers the Model-Build strategy scaffolds, when applicable. The Model-Build scaffold leverages the visual feedback about conceptual–computational coherence provided by the system through the green or red coloring of the sense-act properties. The scaffold draws students’ attention to the properties colored red in their models, and reminds students that they can either delete the red properties from their conceptual model or add computational blocks which match the properties. Once there are no more sense-act properties colored red, the computational modeling scaffolds help point out whether there are missing or extra blocks in students’ computational models, or action blocks which have not been modeled under the correct set of conditions. The suggestions for rectifying the various types of computational modeling errors for different agent behaviors are similar: acquiring information about the agent behavior by carefully reading the science resources.
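The top-down ordering described here can be summarized with a small decision sketch; the learner-model field names are assumptions used for illustration.

def next_model_building_scaffold(learner_model: dict) -> str:
    """Pick the next model-building scaffold following the top-down order described above."""
    # 1. Conceptual scaffolds while any conceptual component still has missing elements.
    if any(comp["missing"] > 0 for comp in learner_model["conceptual_components"].values()):
        return "conceptual-modeling scaffold (entities, then behaviors, then sense-act properties)"
    # 2. Model-Build strategy scaffold while any sense-act property is still colored red,
    #    i.e., not coherently reflected in the computational model.
    if learner_model["red_sense_act_properties"] > 0:
        return "Model-Build strategy scaffold"
    # 3. Otherwise, computational scaffolds for missing/extra blocks or misplaced conditions.
    return "computational-modeling scaffold"

lm = {"conceptual_components": {"agents": {"missing": 0}, "behaviors": {"missing": 0}},
      "red_sense_act_properties": 2}
print(next_model_building_scaffold(lm))  # Model-Build strategy scaffold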

The SC-IA and SA-IA strategy scaffolds are mutually exclusive and do not share triggering conditions with any of the other strategy or task-based scaffolds; hence they are triggered whenever their respective critical frequencies are reached. These scaffolds remind students about agents or agent behaviors recently modeled or assessed and ask if students are trying to gather information about any of them. Accordingly, students are provided suggestions on pages to read and reminded about how they can use the search feature to find relevant resource pages by themselves. All the other scaffolds are provided in the context of SA actions and start by asking students to evaluate the correctness of the simulations they just observed. They offer suggestions for testing a few agent behaviors at a time (Test in parts), or reading about the incorrectly modeled agent behaviors before trying to correct them (IA-SA).

While our scaffolds are triggered by possible sources of errors in students’ modeling tasks, and they offer suggestions on how students can debug and rectify these errors by efficiently integrating information available to them in the CTSiM environment, none of our scaffolds provide ‘bottom-out’ hints that tell students exactly what to correct in their model (Koedinger and Aleven 2007). Also, although all our scaffolds are provided only when we detect students making multiple errors on a particular task, or multiple ineffective uses of a strategy, they often start with a positive message about students’ previous successes in applying actions and strategies correctly.

6 Research study and assessment metrics

6.1 Research questions

In this paper, we discuss a recent controlled CTSiM study with 98 \(6{\hbox {th}}\)-grade students (average age \(=\) 11.5) from the same middle school, where the students were divided into two approximately equal groups by their science teachers. All the students had taken a few weeks of computer classes in school where they had worked on simple programs using Scratch (Maloney et al. 2004), but they were unaware of the term ‘Computational Thinking’ and had not participated in any earlier studies emphasizing CT. The control group (\(n = 46\)) used a version of the CTSiM system with no adaptive scaffolding provided by the mentor agent, Ms. Mendoza, and an experimental group (\(n = 52\)) used the full version of the CTSiM system, i.e., the system used by the control group plus the learner modeling scheme, and adaptive scaffolding based on the learner model provided by Ms. Mendoza. One of our primary goals in this study was to assess the effectiveness of our task and strategy based scaffolds by comparing the performance and behavior differences between the two groups as measured by their actions in the system as well as paper and pencil pre- and post-tests administered before and after the study. In particular, we analyzed the data generated by this study to answer the following research questions:
  1. What effects do the adaptive scaffolds linked to the learner modeling scheme have on students’ performance in building correct conceptual and computational models?
  2. How do the scaffolds impact students’ modeling behaviors and use of effective strategies?
  3. How do the requirements for different types of scaffolds vary during the course of the intervention? Are the scaffolds effective, and do they follow a desired ‘fading’ principle?
  4. What effects do the scaffolds have on (a) students’ science and CT learning, and (b) students’ abilities to transfer conceptual and computational modeling skills to model other science domains outside the CTSiM environment?
  5. How do students’ (i) task performance and (ii) modeling behaviors relate to their learning of the science concepts in the different modeling activities?

6.2 Study procedure

We ran the study with four sections of \(6{\hbox {th}}\) grade students from the same middle school in middle Tennessee. The \(6{\hbox {th}}\) grade science teachers assigned students from two of the sections to the control condition and students from the other two sections to the experimental condition. All students worked on the shape units to gain familiarity with the CTSiM environment, followed by the three primary modeling activities described in Sect. 4.2. The study was run daily over a span of three weeks during the students’ science periods (1 h daily for each \(6{\hbox {th}}\) grade section).

On Day 1, students took three paper-based tests that assessed their knowledge of (1) Kinematics, (2) Ecology, and (3) CT concepts. More details on the test questions are presented in Sect. 6.3.1. On Day 2, students were introduced to agent-based modeling concepts and got a hands-on introduction to the CTSiM system. The whole class worked together to build a model for an introductory shape drawing activity. From Day 3, students worked individually in the CTSiM environment. On Days 3 and 4, they worked on generating growing and shrinking spiral shapes, which emphasized the relations between distance, speed, and acceleration. Since the drawing tasks were considered part of the training and practice activities, students were allowed to help each other and seek help from their science teacher or from the research team if they had difficulties. From Day 5, students worked on the three primary modeling activities and were not provided any individual help external to the system. Students worked on the Rollercoaster unit (Activity 1 described in Sect. 4.2) on Days 5 and 6, after which they took paper-based post-tests on Kinematics and CT on Day 7. On Days 8–12, students worked on modeling the ecological processes in a fish tank ecosystem. This model was built in two parts as described in Sect. 4.2: modeling the macro (Activity 2) and micro (Activity 3) environments in the fish tank. Students took their Ecology and CT-final post-tests on Day 13. Finally, on Day 14, students worked on a paper-based learning transfer activity where they were provided with a detailed textual description of a wolf-sheep-grass ecosystem. Based on this description, they had to first build conceptual models for the agents in the system, much like the fish tank ecosystem. Then they had to build the computational models of agent behaviors using computational and domain-specific modeling primitives that were specified in the question. This model building exercise was similar to the fish tank ecosystem model they had built in CTSiM, except that the science domain was different and it was all done with pencil and paper. Therefore, unlike when working in the CTSiM environment, students did not have access to any of the online tools in CTSiM, nor did they get feedback or hints by simulating their model or from the mentor agent.

As students worked on the CTSiM system, all of their actions on the system and the accompanying views (i.e., the state of the window in which they were performing their actions) were logged for future analysis. We analyzed the action logs to study the evolution of students’ models as they worked on the different modeling activities, students’ overall modeling scores at the end of the activities, the behaviors they exhibited as derived by their actions and action sequences, and the scaffolds that were triggered and delivered by the mentor agent in the experimental condition.

6.3 Assessment artifacts and metrics

In this section, we describe the pre-post questions that we use to assess students’ understanding of kinematics, ecology, and CT concepts, and define the set of metrics we have developed to compute students’ modeling performances and behaviors, and use of desired strategies.

6.3.1 Assessing learning gains

We measured students’ learning gains for science content in the kinematics and ecology domains, and CT content as the differences between the pre- and post-test scores for the individual tests.

The Kinematics pre/post-test assessed whether students understood the concepts of speed, acceleration and distance and their relations. The test required interpreting and generating speed-time and position-time graphs and generating diagrammatic representations to explain motion in a constant acceleration field. An example question asked students to diagrammatically represent the time trajectories of a ball dropped from the same height on the earth and the moon, and then to generate the corresponding speed-time graphs. For the Ecology test, questions focused on students’ understanding of the concepts of interdependence and balance in an ecosystem, and how a change in the population of one species in an ecosystem affects the other species. An example question asked was “Your fish tank is currently healthy and in a stable state. Now, you decide to remove all traces of Nitrobacter bacteria from your fish tank. Would this affect a) Duckweed, b) Goldfish, c) Nitrosomonas bacteria? Explain your answer.”

CT skills were assessed by asking students to predict program segment outputs, and model scenarios using CT constructs. This tested students’ abilities to develop meaningful algorithms using programmatic elements like conditionals, loops and variables. Simple questions tested use of a single CT construct, while modeling complex scenarios involved use of CT constructs like conditionals and loops and domain-specific constructs.

6.3.2 Assessing modeling performance and behaviors

We assess a student’s conceptual and computational modeling performance for an activity by defining metrics that specify the distances between the student’s models and the corresponding expert models. A model distance of 0 implies that the student’s model perfectly matches the expert model (no missing elements and no extraneous elements). We use metrics similar to those used online in the model comparator functions in the learner modeling module (see Sect. 5.2) for offline assessment of students’ evolving conceptual and computational models.

The total conceptual model distance is calculated as the normalized sum of the distances for the individual conceptual model components, i.e. agents, environment elements, agent properties, environment properties, agent behaviors, and sensed and acted-on properties for each agent behavior. The distance metric is computed for any individual component by performing a simple set comparison between the elements of the component in a student’s conceptual model and those contained in the corresponding expert conceptual model. The set difference provides the number of ‘missing’ and ‘extra’ elements in the component, and the sum of the ‘missing’ and ‘extra’ elements across all components of the conceptual model provides the ‘distance’ measure for the model. The ‘missing’ measure for a component counts the number of elements of that component that are present in the expert model but missing in the student model. Similarly, the ‘extra’ measure for a component counts the number of elements of that component which are present in the student model but not present in the expert conceptual model. The ‘distance’ measure, computed as the sum of the ‘missing’ and ‘extra’ measures across all conceptual model components, is normalized by the size of the expert conceptual model (i.e., the sum of the number of elements of each type of conceptual component) to make the ‘distance’ measure independent of the size of the expert model.
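Written out with the definitions above, and letting \(C\) denote the set of conceptual components, with \(E_c\) and \(S_c\) the sets of elements of component \(c\) in the expert and student models, the normalized conceptual model distance can be written as

\[
d_{\mathrm{conceptual}} \;=\; \frac{\sum_{c \in C}\left(\,\left|E_c \setminus S_c\right| + \left|S_c \setminus E_c\right|\,\right)}{\sum_{c \in C}\left|E_c\right|},
\]

where \(\left|E_c \setminus S_c\right|\) is the ‘missing’ count and \(\left|S_c \setminus E_c\right|\) the ‘extra’ count for component \(c\).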

The computational model distance was developed in earlier work (Basu et al. 2014a, b). It is calculated by computing separate ‘correctness’ and ‘incorrectness’ measures for a student’s computational model, and then measuring the vector distance from the two-dimensional vector (correctness, incorrectness) to the target vector (1,0). The total correctness and incorrectness measures are calculated by combining the respective measures from the individual agent behaviors using a weighted average based on the size of each behavior’s expert model. The correctness measure for a single agent behavior is computed as the size of the intersection of the collections of visual primitives used in the student and expert models for the behavior. Similarly, the incorrectness measure for an agent behavior is computed as the number of extra primitives in the student computational model as compared to the expert model. A more comprehensive description of these computational accuracy measures can be found in Basu et al. (2014a).
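One plausible reading of this metric, assuming the vector distance is the usual Euclidean distance and writing \(r\) for the total correctness and \(w\) for the total incorrectness of a student’s computational model, is

\[
d_{\mathrm{computational}} \;=\; \sqrt{(1 - r)^2 + w^2},
\]

so that a fully correct model (\(r = 1\), \(w = 0\)) is at distance 0 from the target vector \((1, 0)\); the exact formulation is given in Basu et al. (2014a).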

We concede that this computational distance metric is based on the presence or absence of primitives and is not sensitive to correct or incorrect assembly of the primitives. Though a metric that neglects the ordering of primitives might seem problematic, the use of domain-specific primitives, along with an agent-based modeling paradigm in which all agent behaviors run in parallel and all primitives in an agent behavior run in the same time step, helps minimize the probability of such errors. The measure used online in the computational model comparator function of the learner modeling module for checking whether actions in agent behaviors occur under the correct sets of conditions (see Sect. 5.2) could be used as an additional post-hoc measure of computational model correctness, but has not been used in this paper.

With respect to both conceptual and computational modeling, we describe students’ modeling progress during an activity by calculating the model distances at each model revision (actions performed as part of the SC task) and then characterizing the model evolution using three metrics: (1) Effectiveness—the proportion of model edits that bring the model closer to the expert model; (2) Slope—the rate and direction of change in the model distance as students build their models; and (3) Consistency—how closely the model distance evolution matches a linear trend.
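A sketch of how these three metrics could be computed from the sequence of model distances recorded after each edit is shown below; using the least-squares slope for Slope and the \(R^2\) of the linear fit for Consistency is our assumption about how the informal definitions could be operationalized.

import numpy as np

def progression_metrics(distances):
    """Compute (effectiveness, slope, consistency) from a sequence of model distances."""
    d = np.asarray(distances, dtype=float)
    edits = np.diff(d)                           # change in model distance after each edit
    effectiveness = float(np.mean(edits < 0))    # fraction of edits moving closer to the expert
    t = np.arange(len(d))
    slope, intercept = np.polyfit(t, d, 1)       # rate and direction of change in distance
    fitted = slope * t + intercept
    ss_res = float(np.sum((d - fitted) ** 2))
    ss_tot = float(np.sum((d - d.mean()) ** 2))
    consistency = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0  # R^2 of the linear trend
    return effectiveness, float(slope), consistency

print(progression_metrics([10, 9, 9.5, 8, 7, 6.5]))  # mostly effective edits, negative slope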

Besides detecting students’ use of the desired strategies (S1–S5) as defined in Sect. 5.1, we also assess students’ modeling behavior with respect to how they combine the conceptual and computational representations to build their models (Basu et al. 2016b). We use the following metrics for this purpose: (1) we count activity chunks of each type and use the total number of chunks as a measure of how many times a student switched between the two representations; (2) the average sizes of the conceptual and computational modeling chunks constitute the second metric; and (3) the ratio of conceptual to computational chunk sizes, normalized by the ratio of the sizes of the conceptual and computational expert models, defines the third metric.
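These three chunk metrics could be computed as in the sketch below; the edit labels (‘C’ for a conceptual edit, ‘P’ for a computational edit) and the expert-model sizes passed in are illustrative assumptions.

from itertools import groupby

def chunk_metrics(edit_sequence, expert_conceptual_size, expert_computational_size):
    """Compute (number of chunks, avg conceptual chunk size, avg computational chunk size,
    normalized conceptual-to-computational chunk-size ratio)."""
    # A chunk is a maximal run of consecutive edits in the same representation.
    chunks = [(kind, len(list(group))) for kind, group in groupby(edit_sequence)]
    num_chunks = len(chunks)
    conc = [size for kind, size in chunks if kind == "C"]
    comp = [size for kind, size in chunks if kind == "P"]
    avg_conc = sum(conc) / len(conc) if conc else 0.0
    avg_comp = sum(comp) / len(comp) if comp else 0.0
    expert_ratio = expert_conceptual_size / expert_computational_size
    normalized_ratio = (avg_conc / avg_comp) / expert_ratio if avg_comp else float("inf")
    return num_chunks, avg_conc, avg_comp, normalized_ratio

# Three conceptual edits, two computational, two conceptual, four computational -> 4 chunks.
print(chunk_metrics(list("CCCPPCCPPPP"), expert_conceptual_size=20, expert_computational_size=40))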

7 Results

In this section, we analyze data from students’ action logs, their evolving model structures and final models at the end of each activity, the scaffolds they receive, and their responses on the pre-post tests and transfer test to answer the five research questions presented in Sect. 6.1.

7.1 Modeling performance

We assess the effectiveness of our adaptive scaffolding framework by comparing the model building performance of the students in the control (\(n = 46\)) and the experimental (\(n = 52\)) groups. Modeling performance for an activity is measured in terms of the accuracy of students’ final models, as well as their model progressions (model distance and model progression metrics are presented in Sect. 6.3.2). Tables 1 and  2 report the values for students’ conceptual and computational modeling performance measures, respectively.
Table 1  A t test comparison of conceptual modeling performance across conditions

                                 Rollercoaster       Fish-macro          Fish-micro
Final conceptual model accuracy
  Missing score
    Control                      0.088 (0.117)       0.230 (0.137)       0.186 (0.158)
    Experimental                 0.024 (0.047)*      0.036 (0.048)***    0.041 (0.019)***
  Extra score
    Control                      0.897 (0.587)       1.526 (1.519)       1.377 (1.597)
    Experimental                 0.174 (0.183)***    0.090 (0.095)***    0.102 (0.070)***
  Distance score
    Control                      0.985 (0.533)       1.756 (1.476)       1.563 (1.581)
    Experimental                 0.198 (0.197)***    0.126 (0.127)***    0.143 (0.084)***
Conceptual model progression
  Edit effectiveness
    Control                      0.497 (0.060)       0.445 (0.101)       0.483 (0.164)
    Experimental                 0.567 (0.038)***    0.592 (0.044)***    0.676 (0.062)***
  Model evolution slope
    Control                      0.005 (0.007)       0.003 (0.003)       0.002 (0.006)
    Experimental                 -0.003 (0.003)***   -0.002 (0.002)***   -0.005 (0.004)***
  Model evolution consistency
    Control                      0.334 (0.291)       0.500 (0.336)       0.585 (0.340)
    Experimental                 0.304 (0.221)       0.591 (0.310)       0.796 (0.225)**

* p < 0.005; ** p < 0.001; *** p < 0.0001

Table 2  A t test comparison of computational modeling performance across conditions

                                    Rollercoaster       Fish-macro          Fish-micro
Final computational model accuracy
  Correctness score
    Control                         0.66 (0.23)         0.48 (0.21)         0.53 (0.27)
    Experimental                    0.85 (0.21)***      0.93 (0.1)***       0.97 (0.07)***
  Incorrectness score
    Control                         0.24 (0.21)         0.15 (0.13)         0.21 (0.23)
    Experimental                    0.15 (0.18)*        0.04 (0.03)***      0.02 (0.05)***
  Distance score
    Control                         0.48 (0.19)         0.57 (0.17)         0.56 (0.28)
    Experimental                    0.24 (0.25)***      0.09 (0.1)***       0.04 (0.08)***
Computational model progression
  Edit effectiveness
    Control                         0.43 (0.09)         0.47 (0.07)         0.55 (0.12)
    Experimental                    0.43 (0.08)         0.58 (0.08)***      0.69 (0.11)***
  Model evolution slope
    Control                         -0.004 (0.004)      -0.002 (0.001)      -0.005 (0.004)
    Experimental                    -0.006 (0.005)*     -0.004 (0.002)***   -0.009 (0.004)***
  Model evolution consistency
    Control                         0.41 (0.31)         0.78 (0.21)         0.78 (0.24)
    Experimental                    0.6 (0.25)**        0.95 (0.04)**       0.95 (0.05)**

* p < 0.05; ** p < 0.001; *** p < 0.0001

A t test comparison between students in the two conditions (see Table 1) shows that students in the experimental condition built more accurate conceptual models for the RC, fish-macro, and fish-micro activities (the final model distance scores were significantly lower) when compared to students in the control condition who did not receive any scaffolding from Ms. Mendoza. Breaking down the aggregate distance scores, the two component scores of missing and extra constructs were also significantly lower for the experimental condition. This implies that the experimental group’s models included more of the conceptual model elements from the expert model (lower missing score) and fewer redundant and incorrect conceptual elements (lower extra score) than the control group’s models. Further, the experimental group’s conceptual model progress towards the final model was significantly better than the control group as evidenced by three metrics: (1) higher percentage of effective (i.e., correct) conceptual edits in all three activities; (2) conceptual model accuracy improved with time in each activity, i.e., the slope for model distance over time was negative, whereas the model progression distance slope for the control group was positive. (This was because the control group kept adding unnecessary elements to their models, and their conceptual models became more inaccurate in each activity as time progressed); and (3) conceptual modeling consistency was higher for the experimental group in the fish-micro unit.

Similarly, a t test comparison between students’ computational modeling performance in Table 2 shows that students in the experimental condition built more accurate computational models compared to students in the control condition (the differences in final model distances for the two groups were statistically significant) for the RC, fish-macro and fish-micro modeling activities. Like the conceptual modeling activity, the correctness scores for the experimental students were significantly higher and the incorrectness scores significantly lower in each of the activities. Not only were the final computational models more accurate, but the model progressions within each unit were more consistent and improved more rapidly for the experimental group. The experimental students made a higher percentage of effective computational edits in the ecology activities and their model evolutions were more consistent (with linear trends) in each of the activities. Both conditions had negative computational model evolution slopes, i.e., their model accuracy improved over time in each of the activities. However, the rate of improvement was significantly higher for the experimental group in all of the activities.

In summary, the answer to our \(1{\hbox {st}}\) research question is that the conversational feedback described in Sect. 5.3 resulted in significantly better conceptual and computational modeling task performance for the experimental group.

7.2 Modeling behaviors and use of strategies

Section 5.1 described the five strategies (S1–S5) that were supported through adaptive scaffolding in the current version of CTSiM. We answer our \(2{\hbox {nd}}\) research question about the impact of scaffolding on behavior by computing the number of times each strategy was used in an effective manner by the two groups. Table 3 presents the average number of times each of the five strategies was used in each modeling activity, as well as the percentage of students who used the strategy at least once in each activity.
Table 3  A t test comparison of the use of desired strategies across conditions

                   RC                           Fish-macro                    Fish-micro
Strategy           % students   Mean (s.d.)     % students   Mean (s.d.)      % students   Mean (s.d.)
S1. Solution construction followed by relevant science reads
  Control          37           1.33 (2.99)     54           2.43 (4.8)       70           1.93 (2.05)
  Experimental     63           2.23 (4.71)     83           4.75 (4.97)*     85           3.4 (4.51)*
S2. Solution assessment followed by relevant science reads
  Control          4            0.07 (0.33)     26           0.76 (1.66)      26           0.85 (9.31)
  Experimental     38           1.37 (2.69)**   44           1.66 (2.29)*     44           1.06 (0.24)
S3. Fraction of assessed agent behaviors which were read about before being assessed
  Control          80           0.73 (0.42)     93           0.5 (0.33)       83           0.89 (0.27)
  Experimental     92           0.86 (0.28)     96           0.77 (0.32)***   100          0.96 (0.16)
S4. Number of partial-model comparisons
  Control          0            na              48           2.65 (5.79)      15           0.57 (1.98)
  Experimental     0            na              58           5.42 (7.16)*     19           1.97 (3.22)*
S5. Fraction of added sense-act properties which were either removed or followed by a coherent computational edit
  Control          100          0.67 (0.27)     100          0.69 (0.31)      98           0.59 (0.31)
  Experimental     100          0.97 (0.1)***   100          0.99 (0.03)***   100          0.98 (0.06)***

* p < 0.05; ** p < 0.01; *** p < 0.001

We note two general trends in the effective use of all the strategies across the three modeling activities: (1) the fraction of students in the experimental group who used the strategies was always greater than or equal to the fraction that used the same strategy in the control group, and (2) the average use of the strategies was also higher in the experimental group. As shown in Table 3, a number of the differences between average uses of strategies in the two conditions were statistically significant at different confidence levels. While most of the differences had low to medium effect sizes (Cohen’s d in the range of 0.2–0.7), the differences in use of the coherent Model-Build strategy (Strategy S5) had much larger effect sizes in all three modeling activities (Cohen’s d in the range of 1.36–1.75).

Besides comparing students’ use of desired strategies across conditions, we also compared students’ modeling behaviors with respect to how they combined the conceptual and computational modeling representations, using the metrics described in Sect. 6.3.2. Table 4 shows that the average sizes (number of edits) of the conceptual and computational chunks were significantly smaller for students in the experimental condition, while the number of switches between the conceptual and computational modeling representations was significantly higher than that of the control group. This indicates that students in the experimental condition were better at decomposing their modeling tasks into smaller, more manageable chunks, and they switched frequently to take advantage of the coupled representations (Basu et al. 2016b). This difference was consistent and statistically significant across all three modeling activities, but the disparity in both conceptual and computational chunk sizes became more pronounced in the later activities.

The normalized ratio of conceptual and computational chunk sizes, described in Sect. 6.3.2, provides a complementary measure of behavior with respect to integration of the modeling representations. For each of the modeling activities, we noticed a significant difference in this normalized ratio between students in the two conditions, and found that the ratio was always closer to 1 in the experimental condition. A normalized chunk size ratio of 1 for an activity implies that the ratio of conceptual to computational chunk sizes equals the ratio of the number of conceptual and computational elements in the expert model for the activity. This ratio increased from the RC to the fish-macro activity for both conditions, implying that students’ conceptual edits increased relative to their computational edits with respect to the expert models. However, the increase was significantly greater for the control group. Perhaps the complexity of the fish-macro activity resulted in students spending more effort (i.e., more edits because they made more errors) in conceptualizing the models (multiple entities, their properties, and behaviors) than in the RC unit. For the experimental condition, the normalized ratio decreased from the macro to the micro unit, implying that students had to spend less effort in conceptualizing the domain model. However, the ratio increased further from the macro to the micro activity for the control group.

In summary, the results imply that the adaptive scaffolding had a strong effect on effective strategy use and improved students’ modeling behaviors.
Table 4  A t test comparison of modeling behaviors across conditions with respect to combining the conceptual and computational representations

                                              Rollercoaster       Fish-macro          Fish-micro
Number of conceptual/computational chunks
  Control                                     20.13 (10.25)       55.02 (26.09)       30.07 (15.2)
  Experimental                                33.23 (11.57)****   93.52 (30.11)****   56.17 (13.56)****
Average size of conceptual chunks
  Control                                     10.24 (4.48)        18.54 (13.01)       20.29 (16.21)
  Experimental                                8.24 (2.44)**       8.12 (3.33)****     5.65 (1.6)****
Average size of computational chunks
  Control                                     16.72 (18.08)       8.82 (4.14)         7.2 (4.47)
  Experimental                                7.92 (2.78)***      5.11 (1.25)****     4.2 (1.26)****
Normalized ratio of conceptual to computational chunk sizes
  Control                                     0.83 (0.5)          2.66 (1.6)          2.73 (1.7)
  Experimental                                1.1 (0.52)**        2.02 (0.87)*        1.38 (0.42)****

* p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001

7.3 Variety of scaffolds required across modeling activities

We also studied how often students in the experimental group received strategy feedback, and how the scaffolding frequency varied across the three modeling activities (to answer our \(3{\hbox {rd}}\) research question). Table 5 reports the feedback received for the different strategies and for different aspects of the modeling task in each activity. For each type of feedback, Table 5 provides three values per activity: (1) n, the number of students who received the feedback at least once in the activity; (2) range, the lowest and highest number of times the feedback was received by any student during the activity; and (3) mean (s.d.), the average number of times the feedback was received during the activity along with its standard deviation.

We found that students needed a combination of task and strategy feedback in all the modeling activities. In the initial RC activity, students received more task-oriented feedback than in the other two activities. In the more complex fish-macro activity, with multiple agents and behaviors, students needed more strategy feedback but less task-oriented feedback than in the RC activity, implying that the effects of the feedback persisted across units. However, students found it challenging to manage and integrate the different tasks in a complex modeling activity involving a new domain. Finally, in the fish-micro activity, the task feedback received was further reduced, and the strategy feedback also decreased (to a smaller number than in the initial RC activity). This provides preliminary evidence that our scaffolding effects persisted, and, therefore, a fading effect occurred naturally as students worked across units. Further, the resulting conceptual and computational models in the fish-micro activity were the most accurate of any activity, even though students received less feedback in each category of scaffolds than in the earlier activities.
Table 5  Variation of frequency and types of scaffolds required across modeling activities

                                          RC                           Fish-macro                   Fish-micro
                                          n    Range   Mean (s.d.)     n    Range   Mean (s.d.)     n    Range   Mean (s.d.)
Strategy feedback
  SC-IA strategy                          3    0–1     0.06 (0.2)      22   0–4     0.69 (1.02)     8    0–1     0.15 (0.36)
  SA-IA strategy                          0    0       0 (0)           0    0       0 (0)           0    0       0 (0)
  IA-SC/SA strategy                       18   0–15    1.37 (3.11)     19   0–14    1.81 (3.27)     4    0–3     0.13 (0.52)
  Test-in-parts strategy                  0    0       0 (0)           42   0–9     2.23 (2.13)     23   0–6     0.83 (1.26)
  Model-Build strategy                    32   0–8     1.79 (2.17)     35   0–10    2.04 (2.43)     30   0–10    1.33 (1.82)
  Total strategy feedback                 38   0–20    3.2 (3.9)       50   0–18    6.77 (5.05)     39   0–16    2.44 (2.9)
Task oriented feedback
  Conceptual model building
    Conceptual entities                   4    0–8     0.29 (1.26)     43   0–22    4.77 (4.31)     31   0–13    2.5 (3.3)
    Conceptual set of behaviors           0    0       0 (0)           7    0–5     0.29 (0.99)     1    0–1     0.02 (0.1)
    Sense-act framework                   52   2–37    11.86 (8.0)     22   0–12    1.98 (3.05)     21   0–7     1.17 (1.7)
    Total conceptual modeling feedback    52   2–37    12.15 (7.94)    49   0–24    7.04 (5.85)     46   0–14    3.69 (3.3)
  Computational model building            37   0–11    2.23 (2.56)     42   0–16    3.46 (4.03)     10   0–5     0.35 (0.9)
  Total task based feedback               52   2–45    14.38 (8.69)    50   0–37    10.5 (8.52)     46   0–16    4.04 (3.5)

For the task feedback, we noticed that students needed a combination of conceptual and computational model building feedback in all the activities. Looking specifically at the conceptual modeling scaffolds, we find that almost all of the feedback in the RC activity was directed at correctly conceptualizing sense-act processes. However, students became significantly better at conceptualizing sense-act processes in the fish-macro and fish-micro activities. In these activities, most of the conceptual model building scaffolds were directed at correctly conceptualizing the right set of entities in the domain. This may be attributed to students’ low prior knowledge in the ecology domain (see pre-test scores in Table 6); they were faced with learning and modeling new domain content with multiple agents and environment elements in the fish-macro and fish-micro activities. With respect to strategy feedback, we see that students needed a combination of the different scaffolds, except for the SA-IA strategy scaffold. The value of 0 across all activities for the SA-IA feedback is unusual, but this was because the condition under which this scaffold is triggered was rarely detected; the corresponding assessment in the learner model needs to be refined in future work. In general, students needed a lot of scaffolding for the Model-Build strategy, the test-in-parts strategy (which was applicable to the larger ecology activities), and the IA-SC/SA strategy.

Again, these results show that the feedback on the five strategies and different aspects of the modeling tasks was effective, in that students learned how to use the strategies, and there was a general fading effect on the need for strategy feedback across units.

7.4 Learning science, CT and modeling skills

We have studied the effects of our task and strategy based scaffolds on students’ task performance and strategy use. In this section, we further analyze the impact of our scaffolds to see if they have an effect on students’ overall science and CT learning, and their relation to transfer of conceptual and computational modeling skills to a new science domain (to answer our \(4{\hbox {th}}\) research question).
Table 6  Paired t tests showing science and CT learning gains for students in the control and experimental conditions

                         Pre             Post            Pre-to-post gains   Pre-to-post p value   Pre-to-post Cohen’s d
Kinematics (max = 45)
  Control                12.52 (6.32)    15.55 (5.72)    3.03 (4.78)         <0.0001               0.55
  Experimental           16.65 (6.61)    22.38 (6.39)    5.72 (5.62)         <0.0001               0.88
Ecology (max = 39.5)
  Control                7.40 (3.90)     16.19 (8.35)    8.78 (7.17)         <0.0001               1.35
  Experimental           9.39 (4.47)     27.91 (6.70)    18.53 (6.31)        <0.0001               3.25
CT (max = 60)
  Control                16.49 (5.68)    22.53 (5.70)    6.04 (5.44)         <0.0001               1.06
  Experimental           22.72 (7.68)    32.24 (5.86)    9.52 (5.23)         <0.0001               1.39

Table 6 reports the pre and post scores and pre-post gains for students in both conditions for the science (Kinematics and Ecology) assessments as well as for the CT assessment. The CT post-test scores in Table 6 refer to scores on the CT tests administered on Day 13 of the study at the end of the ecology unit.

Students in both conditions showed significant pre-post learning gains for kinematics and ecology science content, as well as CT concepts and skills. However, the gains and effect sizes (Cohen’s d) were higher in each case for students in the experimental group compared to those in the control group. We also noticed that students in the experimental group had higher pre-test scores; hence, we computed ANCOVAs comparing the gains between the control and experimental conditions with pre-test scores as covariates. Factoring out the effect of initial knowledge differences implied by the pre-test scores, we found significant differences in science learning gains between the two conditions with medium to high effect sizes (effect sizes of differences in learning gains between conditions measured in terms of \(\eta _{\mathrm{p}}^{2}\), or partial eta-squared, values): kinematics gains (\(F = 18.91, p < 0.0001, \eta _{\mathrm{p}}^{2} = 0.17\)) and ecology gains (\(F = 52.29, p < 0.0001, \eta _{\mathrm{p}}^{2} = 0.36\)). Similarly, we factored out CT pre-test effects to find a significant effect of condition on CT learning gains (\(F = 40.69, p < 0.0001, \eta _{\mathrm{p}}^{2} = 0.31\)). We also assessed students’ performance on the first CT post-test taken at the end of the kinematics unit (Control: Mean score = 20.87, s.d. = 4.9; Experimental: Mean score = 28.58, s.d. = 6.54), and found that students in the experimental group showed higher learning gains from the pre-test to the first post-test (\(F = 18.16, p < 0.0001, \eta _{\mathrm{p}}^{2}\) = 0.16), and gained further from the intermediate to the final CT post-test administered at the end of the ecology unit (\(F = 18.85, p <0.0001, \eta _{\mathrm{p}}^{2} = 0.17\)).

Next, we analyzed students’ performances on the transfer task where they were provided with a detailed description of a wolf-sheep-grass ecosystem and were asked to (i) conceptually model it using an agent based sense-act framework similar to the one students used while working in the CTSiM environment, and (ii) computationally model it using domain-specific and domain-general CT primitives provided in the question. We scored students’ conceptual and computational models of the wolf-sheep-grass ecosystem separately, and report our results in Table 7. We found that students in the experimental condition were able to apply their modeling skills better and built significantly more accurate conceptual and computational models compared to students in the control condition.
Table 7  A t test comparison of learning transfer between conditions

                                          Control          Experimental     p value    Cohen’s d
Conceptual modeling score
  Conceptual entities (max = 5)           4.66 (0.79)      4.92 (0.39)      <0.05      0.43
  Conceptual sense-act (max = 41)         11.54 (5.29)     20.93 (6.70)     <0.001     1.56
  Total score (max = 46)                  16.21 (5.45)     25.86 (6.73)     <0.001     1.58
Computational modeling score (max = 48)   17.33 (9.23)     30.50 (8.98)     <0.001     1.46
Total transfer test score (max = 94)      33.53 (13.80)    53.36 (14.49)    <0.001     1.63

7.5 Relations between modeling performances and behaviors, strategy usage, and learning

To investigate our fifth research question, we analyzed the correlations between the modeling performances and behaviors and strategy use for each activity and students’ post-test scores for the corresponding science domain.

First, we correlated students’ science post-test scores with their modeling performances and how they integrated the conceptual and computational representations. We did not find any significant correlations between students’ modeling measures in the RC activity and their Kinematics post-test performance. A likely reason is that the RC conceptual representation, with a single agent type, did not provide much scaffolding for designing the corresponding computational models, so the benefits of the linked representation were not as apparent. In addition, students may not yet have become proficient with the representations in Activity 1, so the representations may not have helped them better understand the domain knowledge. However, Table 8 shows that students’ modeling metrics in the fish-macro and fish-micro activities were generally correlated with their ecology post-test scores (note: the significance values in Table 8 are reported after applying the Bonferroni correction).

We find that the macro and micro final model distances were negatively correlated with ecology post-test scores, implying that lower distances to the expert models were associated with higher post-test scores. However, the correlations were statistically significant only for the computational modeling performance of the control group, whose final conceptual models for the ecology units had a lot of errors (the average final conceptual distance was greater than 1.5 times the size of the expert conceptual model, as reported in Table 1). For students in the experimental group, both conceptual and computational performance measures were significantly correlated with post-test scores in the macro unit, but not in the micro unit, possibly due to the homogeneity of their final model distances in the micro unit. In terms of the linked representation integration metrics, we found that a higher number of chunks (a greater number of switches between the conceptual and computational representations) and a lower average chunk size were consistently associated with higher post-test scores, although only 4 of the 12 correlations were statistically significant. This suggests that effective coordination between the linked modeling representations had a positive effect on science learning. Specifically, decomposing the modeling task and going back and forth between representations in relatively small chunks appeared to be useful behaviors that supported higher post-test scores.
Table 8 Correlations of modeling performances and behaviors with ecology post-test scores

                         | Control                        | Experimental
                         | Conceptual | Computational     | Conceptual | Computational
Macro final distance     | −0.35      | −0.56***          | −0.48**    | −0.39*
Macro number of chunks   | 0.61**** (both representations)| 0.076 (both representations)
Macro average chunk size | −0.35      | −0.31             | −0.42*     | −0.25
Micro final distance     | −0.43*     | −0.62****         | −0.39*     | −0.18
Micro number of chunks   | 0.51** (both representations)  | 0.13 (both representations)
Micro average chunk size | −0.36      | −0.41*            | −0.29      | −0.3

The number-of-chunks measures are defined over switches between the two representations, so a single value is reported per condition.

* p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001 after Bonferroni correction
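The analysis behind Table 8 pairs per-student modeling measures with ecology post-test scores and screens the resulting Pearson correlations with a Bonferroni correction; a minimal sketch of that style of analysis is given below, where the measure names and values are hypothetical placeholders rather than the study data.

```python
import numpy as np
from scipy import stats

def bonferroni_correlations(named_pairs, alpha=0.05):
    """Pearson correlations for several (x, y) measure pairs, with a
    Bonferroni-adjusted significance threshold across the family of tests."""
    corrected_alpha = alpha / len(named_pairs)
    results = {}
    for name, (x, y) in named_pairs.items():
        r, p = stats.pearsonr(x, y)
        results[name] = (r, p, p < corrected_alpha)
    return results

# Hypothetical per-student values: model distance, chunk count, and post-test score.
post_test = np.array([60, 72, 55, 80, 68, 75])
pairs = {
    'final_distance_vs_posttest': (np.array([12, 8, 15, 5, 9, 7]), post_test),
    'num_chunks_vs_posttest':     (np.array([3, 6, 2, 8, 5, 7]), post_test),
}
print(bonferroni_correlations(pairs))
```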

Furthermore, we analyzed the correlations between students’ strategy use in each activity and their post-test scores in the corresponding science domain. While we found the use of certain strategies to be significantly positively correlated with post-test scores in particular units (for example, the Model-Build strategy in the fish-macro and fish-micro activities), we did not generally find the use of any individual strategy to be correlated with post-test scores across all activities or across conditions. This speaks to the importance of using a combination of strategies to efficiently integrate the different CTSiM tasks and sub-tasks, since the experimental group students who displayed better overall usage of the desired strategies also achieved higher post-test scores.

In summary, the results presented in Tables 1, 2, 3, 4, 5, 6, 7 and 8 demonstrate, in aggregate, the effectiveness of our task and strategy scaffolds. The effects of our scaffolding approach extended beyond students’ modeling performances and strategy use to their abilities to efficiently decompose their modeling tasks and to understand and relate representations at different levels of abstraction. This also translated into higher science and CT learning gains and the ability to transfer modeling skills to new scenarios. Besides comparing students who received and did not receive scaffolding, we also analyzed how the type and frequency of scaffolds needed by students varied over time. Students needed a combination of task-based and strategy-based scaffolds in all activities, and the average number of scaffolds of each type decreased across activities. This result, combined with students’ modeling proficiency in the final activity and their high learning gains, demonstrates the fading effect of our scaffolds.

8 Discussion and future directions

In this paper, we have presented a learner modeling approach for scaffolding students as they work in CTSiM. Model building in CTSiM is a complex task: students build computational models of science phenomena and test and verify their models by comparing their behaviors against those of an expert simulation model. The learning environment is open-ended, and students have to carefully combine information acquisition, solution construction, and solution assessment tasks to construct a correct model. The environment provides a number of supporting tools, such as hypertext-based searchable domain and CT resources, linked conceptual and computational modeling representations that help students decompose the complex model and build it in parts (Basu et al. 2016b), a block-structured visual language for building the computational models, the ability to step through blocks to test the evolving simulation, and a compare function that lets students compare the behaviors generated by their model against the behaviors generated by an expert model in parts. However, novice students find it difficult to combine all of the tools and scaffolds provided in the environment in an effective manner. Thus, we have developed a task- and strategy-oriented learner modeling scheme that tracks and interprets students’ actions in the system in terms of our defined task and strategy models for the domain. In addition, the learner model uses effectiveness and coherence measures to evaluate students’ proficiency in individual tasks and strategies. The learner model then forms the basis for providing adaptive task- and strategy-based feedback to students using a contextualized, mixed-initiative conversational dialog framework. A study run with control (no feedback) and experimental (feedback) conditions demonstrates the effectiveness of our approach. The experimental group outperformed the control group in domain and CT learning gains, in the ability to construct correct models, and in the effectiveness with which they used the set of strategies we tracked in the system. Further, students in the experimental condition required fewer task and strategy scaffolds across activities, and this fading of our scaffolds further implies their effectiveness.
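As a rough illustration of how threshold-based triggering of task and strategy scaffolds could be organized, the sketch below checks effectiveness and coherence measures against fixed cut-offs; the task names, threshold values, and hint texts are our own illustrative assumptions, not the actual CTSiM rules or dialog content.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    # Effectiveness of each tracked task or strategy, in [0, 1], and a single
    # coherence score for how well recent actions support the current goal.
    effectiveness: dict   # e.g., {'information_acquisition': 0.4, ...} (hypothetical keys)
    coherence: float

# Illustrative thresholds; a real system would tune these per task and strategy.
EFFECTIVENESS_THRESHOLD = 0.5
COHERENCE_THRESHOLD = 0.6

def select_scaffolds(state: LearnerState):
    """Return scaffold topics for tasks or strategies whose measures fall
    below the pre-specified thresholds (no scaffold otherwise)."""
    scaffolds = [f"task hint: revisit {task}"
                 for task, score in state.effectiveness.items()
                 if score < EFFECTIVENESS_THRESHOLD]
    if state.coherence < COHERENCE_THRESHOLD:
        scaffolds.append("strategy hint: relate recent actions to the model part being built")
    return scaffolds

state = LearnerState(effectiveness={'information_acquisition': 0.4,
                                    'solution_assessment': 0.8},
                     coherence=0.5)
print(select_scaffolds(state))
```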

Overall, our approach to learner modeling and scaffolding differs from the work of other researchers in a number of ways. For example, Gobert et al. (2013) have claimed that using pre-determined metrics to assess learner actions in an OELE is problematic, and, to overcome this, they have applied educational data mining techniques to develop assessment metrics for evaluating student work. While it is true that students may use a variety of strategies to select and apply skills, and engineering metrics that take into account all potential corner cases is difficult, our end goal in CTSiM is not merely to develop an accurate assessment metric for students’ task performance or strategy use. Rather, the assessment information that forms the basis of our learner models is used primarily to provide feedback and hints to students online and in the context of their current task, but only when it is clear that their performance measures (i.e., effectiveness) are below pre-specified thresholds. Hence, in this paper, our focus is not on developing a comprehensive list of rules for specifying effective and ineffective task performance and strategy use. Our approach in this work, described in Sect. 5, discusses how there can be multiple ineffective variants of a strategy, but we have chosen ones based on our observations in previous studies that we have conducted with CTSiM (Basu et al. 2013; in review). We have previously used sequence mining techniques offline to perform more complete analyses of student behaviors (Kinnebrew et al. 2013, 2014), and discussed how the patterns of behavior derived from offline analyses can be used to track student behavior in future versions of the system (Kinnebrew et al. 2016).

The adaptive scaffolding, feedback, and hints provided in CTSiM also go beyond the approaches used in other learning-by-modeling environments for science domains, where students are provided with an assessment of their science models, either through model-driven simulations or by using learner models to give feedback on specific incorrect relationships modeled by students. While several environments use the learner model only to provide immediate feedback and hints on incorrectly modeled relationships (e.g., Ecolab; Luckin and du Boulay 1999), a few environments capture more information in the learner models to provide feedback and hints about both the solution (the models built by the students) and the work processes involved in the learning environment (e.g., Co-Lab; Duque et al. 2012). However, although learner actions are tracked and analyzed in Co-Lab, action sequences and relations between actions are not analyzed, and actions are not evaluated in terms of their consequences for the models constructed by the students. The learner model thus maintains very limited information about whether students did or did not execute specific actions, and, hence, the scaffolding is limited to reminding students about actions they have not taken or should employ more frequently for model building and testing.

Along with providing a learner modeling and adaptive scaffolding framework, which is likely to be generally applicable to a large class of OELEs that use a learning-by-modeling pedagogy, this work also makes a significant contribution to the field of Computational Thinking in K-12 education. While the importance of introducing CT into K-12 curricula has been emphasized by several researchers, not many systems have been used successfully with existing K-12 curricula, nor have there been systematic assessments of students’ learning with such systems (Wing 2006; Grover et al. 2014; Jona et al. 2014). The need for studying students’ difficulties as they work in CT-based environments and for scaffolding them is also recognized in the field, but there is a dearth of research in this area as well (Grover and Pea 2013). Our work with CTSiM provides an example of how CT principles can be operationalized and successfully integrated with existing science curricula, and how scaffolds contextualized in science domain content can help students learn important CT concepts like sequences, loops, conditionals, operators, and variables, and become more proficient in vital CT practices like being incremental and iterative, decomposing complex tasks, testing and debugging, and abstracting and modularizing. In fact, a significant contribution of the CTSiM work is the ability to demonstrate synergistic science and CT learning in middle school classrooms (Basu et al. 2014a; Sengupta et al. 2013). However, we acknowledge that the CTSiM results cannot yet be generalized to all middle school students across different demographics and language proficiencies. The data reported in this paper were collected from an urban magnet school where the majority of the students were good readers and there were few English Language Learners. Since CTSiM relies heavily on reading science and CT resources to acquire the information required for building computational models of science topics, this reliance may prove to be a limiting factor in the effectiveness of CTSiM with certain student populations. As future work, we may consider providing science and CT information in CTSiM through non-textual modes, such as an audio-book or a library of videos.

Also, while the CTSiM design is general enough to allow the development of diverse learning activities spanning different science domains, we realize that our scaffolding framework may not directly transfer to other existing CT-based environments. Since a number of existing CT-based environments do not expect students to produce a specific artifact, such as a computational model of a science phenomenon, comparing students’ work against an expert-produced computational artifact to assess their modeling performance can be difficult. Also, most of these environments only include solution construction and assessment tasks, limiting the types of modeling strategies that can be tracked and about which information can be maintained in the learner model.

While we have demonstrated the overall effectiveness of our scaffolding approach for the CTSiM environment, we plan to analyze students’ action logs further to study their responses to individual feedback instances and how well they were able to engage with the feedback and apply it to their model-building and problem-solving tasks. Continued development of this learner modeling and scaffolding framework will help us understand which forms of feedback students considered most useful, and how best to provide such feedback prompts and hints in the context of students’ current tasks. In future versions of CTSiM, we plan to implement a more complete version of our learner modeling and adaptive scaffolding approach, in which we track students’ performances on all of the CTSiM tasks as defined by the task hierarchy, as well as an extended set of effectiveness and coherence metrics to track effective and ineffective uses of a more comprehensive list of strategies.

Also, we currently use information about students’ modeling performances and behaviors only to adaptively scaffold students. However, we can make CTSiM more useful and adaptable in classroom settings by providing teachers with assessments of how their class is progressing on the modeling and learning tasks, the common challenges the students are facing, and which students are not progressing and may need individualized assistance. Developing a teacher dashboard with aggregate class data as well as information about individual students can support teachers and assist them in managing classroom instruction more effectively. Using the information provided by the dashboard, teachers can customize their instruction to discuss common mistakes and problems with the whole class, and individually help students who are falling behind in their work.

In addition, as future work, we plan to develop more learning activities for CTSiM that align with middle school science curricular standards, to help demonstrate the generalizability of the CTSiM design and make the learning environment useful for a wider population of teachers. A number of CTSiM activities could also be combined into a longer learning activity progression interspersed with non-CTSiM classroom activities. Exposing students to computational modeling and CT practices over a period of time and across different science topics can help them develop a deeper understanding of computational methods and practices, and help them learn science topics better. Currently, defining a new unit or learning activity using the CTSiM architecture involves defining the following components (Basu et al. 2013): (i) an xml file defining the domain in terms of conceptual entities (agents, environment elements, properties, behaviors) and visual primitives, the dependencies between visual primitives and conceptual sense-act properties, and how the visual primitives should be implemented using NetLogo code; (ii) an xml file describing how the visual primitive blocks are to be depicted graphically in the C-World, including their names, positions for arguments, colors, etc.; (iii) an xml file describing the expert computational model using the visual primitives defined for the unit; and (iv) a domain base model that is responsible for the NetLogo visualization and other housekeeping aspects of the simulation. To make developing new CTSiM activities easier in the future, we plan to build authoring tools for these components, which would also allow teachers to design and develop their own learning activities using CTSiM.
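To give a flavor of the kind of unit-definition file described above, the sketch below assembles a toy domain xml using Python’s standard xml.etree.ElementTree; every element, attribute, and primitive name is a hypothetical stand-in, since the actual CTSiM schemas are not reproduced in this paper.

```python
# Sketch of generating a toy unit-definition xml file in the spirit of the
# components listed above. All tag and attribute names are hypothetical.
import xml.etree.ElementTree as ET

unit = ET.Element('unit', name='wolf-sheep-grass')

# (i) Conceptual entities: an agent with a property and a sense-act behavior.
agents = ET.SubElement(unit, 'agents')
sheep = ET.SubElement(agents, 'agent', name='sheep')
ET.SubElement(sheep, 'property', name='energy')
ET.SubElement(sheep, 'behavior', sense='grass-here', act='eat-grass')

# (iii) A fragment of the expert computational model built from visual primitives.
expert = ET.SubElement(unit, 'expert-model', agent='sheep')
loop = ET.SubElement(expert, 'block', primitive='forever')
ET.SubElement(loop, 'block', primitive='move-forward')
ET.SubElement(loop, 'block', primitive='eat-grass')

# Write the assembled unit definition to disk.
ET.ElementTree(unit).write('wolf_sheep_unit.xml', encoding='utf-8', xml_declaration=True)
```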

Footnotes

  1. Students do not have access to the expert model. They can only observe its simulated behaviors.

Notes

Acknowledgements

This work was supported by NSF Cyber-learning Grants #1124175 and #1441542.

References

  1. Aleven, V., McLaren, B., Roll, I., Koedinger, K.: Toward tutoring help seeking. In: Intelligent Tutoring Systems, pp. 227–239. Springer, Berlin (2004)
  2. Aleven, V., McLaren, B.M., Roll, I., Koedinger, K.R.: Toward meta-cognitive tutoring: a model of help seeking with a cognitive tutor. Int. J. Artif. Intell. Educ. 16(2), 101–128 (2006)
  3. Anderson, J.R., Corbett, A.T., Koedinger, K.R., Pelletier, R.: Cognitive tutors: lessons learned. J. Learn. Sci. 4(2), 167–207 (1995)
  4. Anderson, J.R., Boyle, C.F., Reiser, B.J.: Intelligent tutoring systems. Science 228, 456–462 (1985)
  5. Arts, J.A., Gijselaers, W.H., Segers, M.S.: Cognitive effects of an authentic computer-supported, problem-based learning environment. Instr. Sci. 30(6), 465–495 (2002)
  6. Azevedo, R.: Using hypermedia as a metacognitive tool for enhancing student learning? The role of self-regulated learning. Educ. Psychol. 40(4), 199–209 (2005)
  7. Azevedo, R., Hadwin, A.F.: Scaffolding self-regulated learning and metacognition: implications for the design of computer-based scaffolds. Instr. Sci. 33(5), 367–379 (2005)
  8. Baker, R.S., Corbett, A.T., Koedinger, K.R., Wagner, A.Z.: Off-task behavior in the cognitive tutor classroom: when students game the system. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 383–390. ACM, New York (2004)
  9. Bangert-Drowns, R.L., Kulik, C.L.C., Kulik, J.A., Morgan, M.: The instructional effect of feedback in test-like events. Rev. Educ. Res. 61(2), 213–238 (1991)
  10. Bannert, M., Reimann, P.: Supporting self-regulated hypermedia learning through prompts. Instr. Sci. 40(1), 193–211 (2012)
  11. Basu, S., Dickes, A., Kinnebrew, J.S., Sengupta, P., Biswas, G.: CTSiM: a computational thinking environment for learning science through simulation and modeling. In: Proceedings of the 5th International Conference on Computer Supported Education, pp. 369–378. Aachen, Germany (2013)
  12. Basu, S., Dukeman, A., Kinnebrew, J., Biswas, G., Sengupta, P.: Investigating student generated computational models of science. In: Proceedings of the 11th International Conference of the Learning Sciences, Boulder, CO (2014a)
  13. Basu, S., Kinnebrew, J., Biswas, G.: Assessing student performance in a computational-thinking based science learning environment. In: Proceedings of the 12th International Conference on Intelligent Tutoring Systems, pp. 476–481. Springer International Publishing, Honolulu, HI, USA (2014b)
  14. Basu, S., Sengupta, P., Dickes, A., Biswas, G., Kinnebrew, J.S., Clark, D.: Identifying middle school students’ challenges in computational thinking based science learning. Res. Pract. Technol. Enhanc. Learn. (2016a)
  15. Basu, S., Biswas, G., Kinnebrew, J.S.: Using multiple representations to simultaneously learn computational thinking and middle school science. In: Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ (2016b). doi: 10.1186/s41039-016-0036-2
  16. Biswas, G., Jeong, H., Kinnebrew, J., Sulcer, B., Roscoe, R.: Measuring self-regulated learning skills through social interactions in a teachable agent environment. Res. Pract. Technol. Enhanc. Learn. 5(2), 123–152 (2010)
  17. Bransford, J., Schwartz, D.: Rethinking transfer: a simple proposal with multiple implications. Rev. Res. Educ. 24(1), 61–101 (1999)
  18. Brown, J.S., Burton, R.R.: Diagnostic models for procedural bugs in basic mathematical skills. Cogn. Sci. 2(2), 155–192 (1978)
  19. Brown, J.S., VanLehn, K.: Repair theory: a generative theory of bugs in procedural skills. Cogn. Sci. 4(4), 379–426 (1980)
  20. Brusilovsky, P., Peylo, C.: Adaptive and intelligent web-based educational systems. Int. J. Artif. Intell. Educ. 13, 159–172 (2003)
  21. Brusilovsky, P., Millán, E.: User models for adaptive hypermedia and adaptive educational systems. In: The Adaptive Web, pp. 3–53. Springer, Berlin (2007)
  22. Carr, B., Goldstein, I.P.: Overlays: a theory of modelling for computer aided instruction (No. AI-M-406). Massachusetts Institute of Technology, Cambridge Artificial Intelligence Lab (1977)
  23. Chandler, P., Sweller, J.: The split-attention effect as a factor in the design of instruction. Br. J. Educ. Psychol. 62(2), 233–246 (1992)
  24. Chrysafiadi, K., Virvou, M.: Student modeling approaches: a literature review for the last decade. Expert Syst. Appl. 40(11), 4715–4729 (2013)
  25. Clarebout, G., Elen, J.: Advice on tool use in open learning environments. J. Educ. Multimed. Hypermed. 17(1), 81–97 (2008)
  26. Conati, C., Gertner, A., VanLehn, K.: Using Bayesian networks to manage uncertainty in student modeling. User Model. User Adapt. Interact. 12(4), 371–417 (2002)
  27. Conejo, R., Guzmán, E., Millán, E., Trella, M., Pérez-de-la-Cruz, J.L., Ríos, A.: SIETTE: a web-based tool for adaptive teaching. Int. J. Artif. Intell. Educ. 14, 29–61 (2004)
  28. Corbett, A.T., Anderson, J.R.: Knowledge tracing: modeling the acquisition of procedural knowledge. User Model. User Adapt. Interact. 4(4), 253–278 (1995)
  29. Desmarais, M.C., Baker, R.S.J.d.: A review of recent advances in learner and skill modeling in intelligent learning environments. User Model. User Adapt. Interact. 22(1–2), 9–38 (2012)
  30. Dolog, P., Schaefer, M.: A framework for browsing, manipulating and maintaining interoperable learner profiles. In: Proceedings of UM2005, the 10th International Conference on User Modeling, Edinburgh, UK. Springer, Berlin (2005)
  31. Dolog, P., Simon, B., Nejdl, W., Klobučar, T.: Personalizing access to learning networks. ACM Trans. Internet Technol. 8(2), 3 (2008)
  32. Duque, R., Bollen, L., Anjewierden, A., Bravo, C.: Automating the analysis of problem-solving activities in learning environments: the Co-Lab case study. J. UCS 18(10), 1279–1307 (2012)
  33. Elsom-Cook, M.: Student modelling in intelligent tutoring systems. Artif. Intell. Rev. 7(3–4), 227–240 (1993)
  34. Fedor, D.B., Davis, W.D., Maslyn, J.M., Mathieson, K.: Performance improvement efforts in response to negative feedback: the roles of source power and recipient self-esteem. J. Manag. 27(1), 79–97 (2001)
  35. Gobert, J., Sao Pedro, M., Raziuddin, J., Baker, R.S.: From log files to assessment metrics: measuring students’ science inquiry skills using educational data mining. J. Learn. Sci. 22(4), 521–563 (2013). doi: 10.1080/10508406.2013.837391
  36. Goldstein, I.P.: The genetic graph: a representation for the evolution of procedural knowledge. Int. J. Man Mach. Stud. 11(1), 51–77 (1979)
  37. Grawemeyer, B., Mavrikis, M., Holmes, W., Gutiérrez-Santos, S., Wiedmann, M., Rummel, N.: Affective learning: exploring the impact of affect-aware support on learning and engagement. User Model. User Adapt. Interact. J. Personal. Res. 27 (2017, this issue)
  38. Grover, S., Pea, R.: Computational Thinking in K-12: a review of the state of the field. Educ. Res. 42(1), 38–43 (2013)
  39. Grover, S., Cooper, S., Pea, R.: Assessing computational learning in K-12. In: Proceedings of the 2014 Conference on Innovation & Technology in Computer Science Education, pp. 57–62. ACM, New York (2014)
  40. Jeremić, Z., Jovanović, J., Gašević, D.: Student modeling and assessment in intelligent tutoring of software patterns. Expert Syst. Appl. 39(1), 210–222 (2012)
  41. Jona, K., Wilensky, U., Trouille, L., Horn, M.S., Orton, K., Weintrop, D., Beheshti, E.: Embedding computational thinking in science, technology, engineering, and math (CT-STEM). In: Paper Presented at the Future Directions in Computer Science Education Summit Meeting, Orlando, FL (2014)
  42. Jovanović, J., Gašević, D., Brooks, C., Devedžić, V., Hatala, M., Eap, T., Richards, G.: LOCO-Analyst: semantic web technologies in learning content usage analysis. Int. J. Contin. Eng. Educ. Life Long Learn. 18(1), 54–76 (2008)
  43. Jovanović, J., Gašević, D., Torniai, C., Bateman, S., Hatala, M.: The social semantic web in intelligent learning environments: state of the art and future challenges. Interact. Learn. Environ. 17(4), 273–309 (2009)
  44. Karabenick, S.A., Knapp, J.R.: Relationship of academic help seeking to the use of learning strategies and other instrumental achievement behavior in college students. J. Educ. Psychol. 83(2), 221 (1991)
  45. Kinnebrew, J.S., Loretz, K.M., Biswas, G.: A contextualized, differential sequence mining method to derive students’ learning behavior patterns. J. Educ. Data Min. 5(1), 190–219 (2013)
  46. Kinnebrew, J.S., Segedy, J.R., Biswas, G.: Analyzing the temporal evolution of students’ behaviors in open-ended learning environments. Metacogn. Learn. 9(2), 187–215 (2014)
  47. Kinnebrew, J.S., Segedy, J.R., Biswas, G.: Integrating model-driven and data-driven techniques for analyzing learning behaviors in open-ended learning environments. IEEE Trans. Learn. Technol. (2016). doi: 10.1109/TLT.2015.2513387
  48. Klawe, M., Inkpen, K., Phillips, E., Upitis, R., Rubin, A.: E-GEMS: a project on computer games, mathematics and gender (2002)
  49. Koedinger, K.R., Aleven, V.: Exploring the assistance dilemma in experiments with cognitive tutors. Educ. Psychol. Rev. 19(3), 239–264 (2007)
  50. Kramarski, B., Gutman, M.: How can self-regulated learning be supported in mathematical E-learning environments? J. Comput. Assist. Learn. 22(1), 24–33 (2006)
  51. Lajoie, S., Derry, S. (eds.): Computers as Cognitive Tools. Lawrence Erlbaum Associates, Mahwah, NJ (1993)
  52. Land, S.: Cognitive requirements for learning with open-ended learning environments. Educ. Tech. Res. Dev. 48(3), 61–78 (2000)
  53. Land, S., Hannafin, M., Oliver, K.: Student-centered learning environments: foundations, assumptions and design. In: Jonassen, D., Land, S. (eds.) Theoretical Foundations of Learning Environments, pp. 3–25. Routledge, New York, NY (2012)
  54. Langley, P., Ohlsson, S.: Automated cognitive modelling. In: Proceedings of AAAI-84, pp. 193–197 (1984)
  55. Lepper, M.R., Chabay, R.W.: Intrinsic motivation and instruction: conflicting views on the role of motivational processes in computer-based education. Educ. Psychol. 20(4), 217–230 (1985)
  56. Long, Y., Aleven, V.: Enhancing learning outcomes through self-regulated learning support with an open learner model. User Model. User Adapt. Interact. J. Personal. Res. 27 (2017). doi: 10.1007/s11257-016-9186-6
  57. Luckin, R., du Boulay, B.: Ecolab: the development and evaluation of a Vygotskian design framework. Int. J. Artif. Intell. Educ. 10(2), 198–220 (1999)
  58. Mitrovic, A.: Fifteen years of constraint-based tutors: what we have achieved and where we are going. User Model. User Adapt. Interact. 22(1–2), 39–72 (2012)
  59. Maloney, J., Burd, L., Kafai, Y., Rusk, N., Silverman, B., Resnick, M.: Scratch: a sneak preview. In: Proceedings of Creating, Connecting, and Collaborating Through Computing, pp. 104–109 (2004)
  60. McCalla, G.I., Murtagh, K.: G.E.N.I.U.S.: an experiment in ignorance-based automated program advising. AISB Newsl. 75, 13–20 (1991)
  61. Montalvo, O., Baker, R.S.J., Sao Pedro, M.A., Nakama, A., Gobert, J.D.: Identifying students’ inquiry planning using machine learning. In: Proceedings of the 3rd International Conference on Educational Data Mining, pp. 141–150, Pittsburgh, PA (2010)
  62. Moos, D.C., Honkomp, B.: Adventure learning: motivating students in a Minnesota middle school. J. Res. Technol. Educ. 43(3), 231–252 (2011)
  63. Ohlsson, S.: Some principles of intelligent tutoring. Instr. Sci. 14(3–4), 293–326 (1986)
  64. Pelánek, R., Papoušek, J., Řihák, J., Stanislav, V., Nižnan, J.: Elo-based learner modeling for adaptive practice of facts. User Model. User Adapt. Interact. J. Personal. Res. (2016). doi: 10.1007/s11257-016-9185-7
  65. Pressley, M., Goodchild, F., Fleet, J., Zajchowski, R., Evans, E.: The challenges of classroom strategy instruction. Elem. School J. 89, 301–342 (1989)
  66. Puntambekar, S., Hubscher, R.: Tools for scaffolding students in a complex learning environment: what have we gained and what have we missed? Educ. Psychol. 40(1), 1–12 (2005)
  67. Roll, I., Aleven, V., McLaren, B.M., Koedinger, K.R.: Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. In: Artificial Intelligence in Education (AIED 2007), pp. 203–210 (2009)
  68. Schraw, G., Crippen, K.J., Hartley, K.: Promoting self-regulation in science education: metacognition as part of a broader perspective on learning. Res. Sci. Educ. 36(1–2), 111–139 (2006)
  69. Schwartz, D.L., Arena, D.: Measuring What Matters Most: Choice-Based Assessments for the Digital Age. MIT Press, Cambridge (2013)
  70. Segedy, J.R., Kinnebrew, J.S., Biswas, G.: The effect of contextualized conversational feedback in a complex open-ended learning environment. Educ. Tech. Res. Dev. 61(1), 71–89 (2013)
  71. Self, J.: The defining characteristics of intelligent tutoring systems research: ITSs care, precisely. Int. J. Artif. Intell. Educ. 10, 350–364 (1998)
  72. Sengupta, P., Kinnebrew, J.S., Basu, S., Biswas, G., Clark, D.: Integrating computational thinking with K-12 science education using agent-based computation: a theoretical framework. Educ. Inf. Technol. 18(2), 351–380 (2013)
  73. Shang, Y., Shi, H., Chen, S.S.: An intelligent distributed environment for active learning. J. Educ. Resourc. Comput. 1(2es), 4 (2001)
  74. Shute, V.J.: Focus on formative feedback. Rev. Educ. Res. 78(1), 153–189 (2008)
  75. Sison, R., Shimura, M.: Student modeling and machine learning. Int. J. Artif. Intell. Educ. 9, 128–158 (1998)
  76. Van der Kleij, F.M., Feskens, R.C., Eggen, T.J.: Effects of feedback in a computer-based learning environment on students’ learning outcomes: a meta-analysis. Rev. Educ. Res. 85(4), 475–511 (2015)
  77. VanLehn, K.: Student modeling. In: Polson, M.C., Richardson, J.J. (eds.) Foundations of Intelligent Tutoring Systems, pp. 55–78. Lawrence Erlbaum, Hillsdale, NJ (1988)
  78. Walonoski, J.A., Heffernan, N.T.: Detection and analysis of off-task gaming behavior in intelligent tutoring systems. In: Intelligent Tutoring Systems, pp. 382–391. Springer, Berlin (2006)
  79. Weber, G., Specht, M.: User modeling and adaptive navigation support in WWW-based tutoring systems. In: User Modeling, pp. 289–300. Springer, Vienna (1997)
  80. Wenger, E.: Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge. Morgan Kaufmann, Los Altos, CA (1987)
  81. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999). http://ccl.northwestern.edu/netlogo
  82. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)
  83. Winne, P.H.: Issues in researching self-regulated learning as patterns of events. Metacogn. Learn. 9(2), 229–237 (2014)
  84. Winter, M., Brooks, C.A., Greer, J.E.: Towards best practices for semantic web student modelling. In: AIED, pp. 694–701 (2005)
  85. Wood, D., Bruner, J.S., Ross, G.: The role of tutoring in problem solving. J. Child Psychol. Psychiatr. 17(2), 89–100 (1976)
  86. Woolf, B.P.: Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing e-Learning. Morgan Kaufmann, Burlington, MA (2009)

Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  • Satabdi Basu (email author): SRI International, Menlo Park, USA
  • Gautam Biswas: Institute for Software Integrated Systems and EECS Department, Vanderbilt University, Nashville, USA
  • John S. Kinnebrew: Bridj, Boston, USA
