Introduction

The primary purpose of this special issue is to introduce AIED researchers to the experimentation capabilities of the Generalized Intelligent Framework for Tutoring (GIFT) (Sottilare et al. 2012; Sottilare et al. 2017a, 2017b), a research prototype sponsored and developed by the US Army Research Laboratory (ARL). GIFT’s design goals are: 1) to lower the skills and time required to author Intelligent Tutoring Systems (ITSs) in a variety of task domains; 2) to deliver effective and efficient adaptive instruction that is tailored to the needs of each individual learner or team of learners; and 3) to provide tools and methods to evaluate the effectiveness of ITSs and support research to continuously improve instructional best practices. These three design goals are aligned with authoring, instructional management, and evaluation, respectively.

While our primary goal is to expose the AIED community to GIFT as a research tool, a major motivation is to engage AIED scientists and practitioners in shaping its design and future functionality. GIFT’s design goals should be familiar to the AIED community and place GIFT in the category of “shell” tutors, “a generalized framework for building ITSs” (Murray 1999), but GIFT also provides a user interface and logic to “allow non-programmers to formalize and visualize their knowledge” (Murray 1999). The principles that have shaped GIFT are drawn from the individual and team (collaborative) instructional literature, with a heavy foundation in the AIED and computer-supported collaborative learning (CSCL) literature.

The GIFT software represents a substantial body of work and has been applied widely. We believe it may provide a useful platform for the broader AIED community. To learn more about GIFT and its design, we direct your attention to www.GIFTtutoring.org for details and documentation about GIFT and its authoring, instructional management, and evaluation (testbed) functions. GIFT software is also available as a free download (https://gifttutoring.org/projects/gift/files) or may be used freely in our cloud-based application (https://cloud.gifttutoring.org/dashboard/#login). We invite the AIED community to explore GIFT and bend it to their needs. Suggestions for the design and functionality of GIFT may be posted on our online forum (https://www.gifttutoring.org/projects/gift/boards) or shared, along with your experiences, at our annual GIFT Users Symposium (GIFTSym).

GIFT Functions

The main elements of most ITSs are models of the learner, instruction, and the knowledge/skill domain. Three functions tie these models together, encompassing both the offline and real-time processes needed to create, deliver, and understand the influence of instruction provided by ITSs. In GIFT, these processes are represented in the architecture, tools, models, and methods as the authoring, instructional management, and evaluation functions.

Authoring Functions

One major goal of GIFT authoring is to lower the skills and time required to author ITSs in a variety of task domains. The design goals for authoring have been adapted from Murray (1999, 2003) and Sottilare and Gilbert (2011). GIFT authoring is composed of policies, tools, and methods that enable authors (instructional designers, developers, instructors/teachers, course managers, and subject matter experts) to: 1) create ITSs without knowledge of instructional design principles or software programming; 2) curate (search for and organize) content to accurately represent instructional domains (cognitive, affective, psychomotor, and social); and 3) sequence that content for presentation to learners based on its hierarchical or dependency relationships. Associated authoring objectives include ease of use, support for rapid prototyping, collaborative authoring, and rapid integration of external environments (e.g., simulators, serious games, webpages) to reduce development time and cost and to promote interoperability and reuse.

In 2011, the US Army Research Laboratory began to define desirable characteristics of authoring systems for both individual learners and teams (Sottilare et al. 2011), including interoperability with external environments (e.g., simulations, serious games). GIFT’s linkage with external environments builds on RIDES, an earlier AIED system that featured embedded graphical simulations and simulation-centered tutorials authored and delivered with the same authoring tool (Munro et al. 1997). GIFT generalizes the RIDES concept, allowing any external environment to pass and receive instructional data via a standardized GIFT gateway.

Today, GIFT authoring tools include a standard gateway specification for interacting with all types of external environments, along with course objects to interface with games (e.g., Virtual BattleSpace, Virtual Medic), simulations (e.g., an excavator simulator, Newtonian Talk), applications (e.g., Microsoft PowerPoint, Media Semantics Virtual Characters), and physiological/behavioral sensors (e.g., Zephyr Bioharness, Microsoft Kinect, Emotiv Epoc EEG). The course objects require no programming and may be added to a GIFT-based tutor via simple drag-and-drop. New games, simulations, applications, and sensors can be added with minor programming to define enumerations (e.g., high/moderate/low trust) as public classes of information available to GIFT for assessment. GIFT is now also Learning Tools Interoperability (LTI) compliant and has begun supporting adaptive instruction with edX and other Massive Open Online Course (MOOC) delivery platforms.
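To make the enumeration idea concrete, the Java sketch below shows the kind of public class a developer might define when integrating a new sensor with GIFT. The name, thresholds, and mapping are our own illustrative assumptions, not GIFT’s actual integration API.

// Hypothetical sketch: the kind of enumeration a developer might expose
// when integrating a new sensor or external environment with GIFT.
// The name and thresholds are illustrative, not the actual GIFT API.
public enum TrustLevel {
    LOW, MODERATE, HIGH;

    /** Map a raw, normalized sensor score in [0.0, 1.0] onto a discrete level. */
    public static TrustLevel fromScore(double score) {
        if (score < 0.33) return LOW;
        if (score < 0.66) return MODERATE;
        return HIGH;
    }

    public static void main(String[] args) {
        // A raw reading of 0.70 would be reported to the tutor as HIGH.
        System.out.println(fromScore(0.70));
    }
}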

Instructional Management Functions

The primary goal of GIFT instructional management is to deliver effective and efficient adaptive instruction that is tailored to the needs and preferences of each individual learner or team of learners. The design objectives for instructional management include delivery of instruction (e.g., content, feedback, support, direction) to individuals and teams in a variety of locations (e.g., distributed learning, mobile learning) and on a variety of computing devices (e.g., laptops, smartphones, tablets, workstations). Instructional management integrates instructional best practices derived from experimentation and reviews of the empirical literature. As a modular architecture, GIFT also allows users to integrate new pedagogical models, instructional strategies, or instructional tactics from other tutoring systems into GIFT. Associated objectives are to: 1) model and adapt to individual differences (e.g., states, traits, preferences) that influence learning and performance; 2) manage the pace, direction, and challenge level of the instruction; and 3) manage the interplay of learning, performance, retention, and transfer of skills.

To support the development of optimized instructional strategies and tactics, GIFT is deeply grounded in learning theory, tutoring theory, and motivational theory. The learning theories applied in GIFT cover a large landscape: building on prerequisites and conditions of instruction (Gagne 1985), component display theory (Merrill 1983), cognitive learning (Anderson et al. 2001; Koedinger et al. 2012), affective learning (Bower 1992; D’Mello and Graesser 2012), psychomotor learning (Simpson 1972), and social learning (Adamson et al. 2014; Soller 2001; Sottilare et al. 2011). In attempts to model expert human tutors, GIFT considers the Intelligent, Nurturant, Socratic, Progressive, Indirect, Reflective, and Encouraging (INSPIRE) model of tutoring success (Lepper et al. 1997) and the one-to-one tutoring process documented by Graesser et al. (2017) in the development of GIFT instructional strategies and tactics.

The GIFT architecture accommodates learner-centric approaches in addition to instructional management capabilities that are sensitive to learner states, traits, and preferences. Self-regulated learning is encouraged through open learner models that allow learners to decide what to learn next and to inspect their progress in mastering the subject matter and their measured psychological attributes (Bull and Kay 2008). As its design goals are realized, GIFT continues to be applied in new and more complex domains. An evolving capability is the development of architectural services to support the tutoring of groups (e.g., team taskwork tutoring, teamwork tutoring, collaborative learning, and collaborative problem solving) through intelligent, computer-guided instruction, as defined below:

  • team taskwork tutoring – focused on developing proficiency in task domains requiring more than one member to accomplish the task (Salas 2015).

  • teamwork tutoring – focused on enhancing coordination, cooperation, and communication among individuals on a team to achieve a shared goal (Salas 2015).

  • collaborative learning – “a situation in which two or more people learn or attempt to learn something together” (Dillenbourg 1999, p. 1) which may or may not require more than one member to accomplish once learned.

  • collaborative problem solving – “the capacity of an individual to effectively engage in a process whereby two or more agents [human or computer-simulated] attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills and efforts to reach that solution” (PISA 2017, p. 6).

Evaluation Functions

The GIFT evaluation functions emphasize capabilities to support the empirical evaluation of adaptive instructional methods, ITSs, and their component technologies. The evaluation capabilities in GIFT can be used as a testbed (see Fig. 1) to determine the effect of environmental attributes, tools, models, and methods on engagement, learning, performance, retention, reasoning, and transfer of skills.

Fig. 1 GIFT evaluation testbed methodology

The purpose of the GIFT evaluation function is to allow ITS researchers to experimentally assess and evaluate ITS technologies (ITS components, tools, and methods). Figure 1 illustrates an analysis testbed methodology that has been implemented in GIFT to support experimentation. This testbed methodology, derived from Hanks et al. (1993), supports manipulation of the learner model, instructional strategies, and domain-specific knowledge within GIFT. It is used to evaluate manipulated or measured variables within the learning effect model (Sottilare 2012; Sottilare et al. 2013).
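To make the component-manipulation idea concrete, here is a minimal sketch, under our own assumptions, of how two experimental conditions might be declared so that they differ in exactly one component. The condition and component names are hypothetical and do not reflect GIFT’s actual experiment configuration format.

import java.util.List;

// Illustrative sketch of the component-swap idea behind the testbed:
// two conditions are identical except for the pedagogical model under test.
// All names here are invented for illustration.
public class TestbedConditions {
    // Each condition names the component variants it uses.
    record Condition(String name, String learnerModel,
                     String pedagogicalModel, String domainKnowledge) {}

    public static void main(String[] args) {
        List<Condition> experiment = List.of(
            new Condition("control",   "default", "mastery-learning", "marksmanship-v1"),
            new Condition("treatment", "default", "affect-sensitive", "marksmanship-v1"));
        // Because only one component varies, outcome differences can be
        // attributed to that component.
        experiment.forEach(System.out::println);
    }
}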

The testbed design is based on the notion that testbeds serve three critical roles corresponding to three major phases of research. During the exploratory phase, agent behaviors are observed and classified into broad categories. This can be performed in an experimental environment, using the methods of educational data mining (Baker and Yacef 2009) to study the relationships among specific agent behaviors, the development of learner behaviors over time, and learner outcomes. During the confirmatory phase, the testbed allows stricter characterizations of agent behavior in order to test specific hypotheses and compare tools and methods.

Finally, during the generalization phase, replication of conditions and their associated measurements must be possible in order to generalize results. Throughout, the GIFT evaluation function enables the comparison of ITS elements and assessment of their effects on learning outcomes (e.g., engagement, knowledge or skill acquisition, retention, reasoning, and transfer of skills). Just as it does for instruction, GIFT enables experiment authors to define measurement and assessment methods to discern learner states and to record contextual data (e.g., conditions in the environment) during evaluations.

GIFT as an Experimental Tool for AIED Research

While GIFT has been used as an ITS authoring and experimental tool across a variety of task domains, including traditional topics in science, technology, engineering, and mathematics (STEM) (e.g., Renduchitnala and Matthews 2017; Warta 2017), this special issue focuses on three distinctive topics: 1) sensitivity to learner affect (DeFalco et al. 2017), 2) adaptive training of psychomotor tasks (Goldberg et al. 2017), and 3) adaptive instruction of teams (Fletcher and Sottilare 2017; Gilbert et al. 2017; Sottilare et al. 2017a, 2017b). Although each article describes a functional capability of significant importance to military training and education, we propose that the processes described therein could easily be transferred and applied to parallel civilian training and educational domains.

The articles in this special issue also address two fundamental questions. First, how has GIFT been used as an experimental tool to develop models or enhance pedagogy in ITSs? Second, how might GIFT be used to validate concepts derived from the literature? The following discussion identifies the different ways the articles address these questions.

GIFT as a Testbed to Build and Embed Affect Sensitivity

A range of papers have presented models that can automatically detect affect during online learning (see, for instance, reviews by Baker and Ocumpaugh 2014 and Calvo and D’Mello 2010). However, relatively few of these models have actually been built into running systems, and fewer still have been used to drive affective intervention, as noted in a review by D’Mello and his colleagues (D’Mello et al. 2014). There has been additional work since then (see, for instance, Grawemeyer et al. 2017), but the relatively small number of examples suggests that this combination of factors is difficult to bring together.

Perhaps the best-known (and most successful) example of this line of work is the Affective AutoTutor by D’Mello and colleagues (D’Mello et al. 2010). In this system, a set of physical sensors was combined with interaction data and self-reports of student affect to develop a model that could automatically infer student affect. The resultant model was then embedded into AutoTutor, a natural language-based intelligent tutoring system, which responded to negative student affect with encouraging and supportive messages; a randomized experiment found that this led to better learning outcomes for learners with low initial domain knowledge. In related work within a speech-based intelligent tutor, Grawemeyer et al. (2017) found that affective support based on automated detection of student affect reduced boredom and off-task behavior.

However, beyond the technical challenges of conducting this kind of research, affective design remains a difficult art, and not all studies in this area have yielded positive effects. Forbes-Riley and Litman (2011) extended a speech-based intelligent tutoring system (ITSPOKE) with the ability to automatically detect student uncertainty, inferred from the acoustic and prosodic properties of student speech, and used the detection to drive the provision of additional instruction intended to resolve the uncertainty. However, a randomized experiment did not find a significant difference between the experimental condition and a control condition without affect sensitivity. A similar null result was obtained by Burleson and Picard (2007), who used a range of physical sensors to detect student affect and embedded the resultant models into a learning system that taught learners how to solve the Towers of Hanoi problem. The models were used to drive messages telling learners that they could succeed at solving the problems and improve; an experimental study found no main effects for the intervention.

In this special issue, DeFalco et al. (2017) present the entire process of developing an affect-sensitive system, including affect detection research, the embedding of the resultant models in a running system, and the use of those models to drive affective interventions. This paper shows that GIFT can be used to obtain data for detector development, and that the resultant models (both those based on student interaction and those based on physical sensors) can be embedded back into GIFT for reuse. Next, the GIFT framework was used to build and embed affect sensitivity into the ITS’s responses. Finally, GIFT was used to run an experimental study investigating the impact of affective interventions on a variety of student outcomes. The example provided by DeFalco et al. (2017) could easily serve as a template for future research examining affect-sensitive instructional methods, conducting validation studies, or assessing the impact of other adaptive instructional tools and methods.
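The detect-then-intervene pattern common to these studies can be summarized in a few lines of code. The sketch below is a minimal illustration under our own assumptions; the affective states, messages, and method names are hypothetical and are not the DeFalco et al. (2017) models or the GIFT API.

import java.util.Map;

// Minimal sketch of the detect-then-intervene pattern: a detected affective
// state is mapped to a supportive tutor message. States and messages are
// illustrative assumptions only.
public class AffectIntervention {
    enum Affect { ENGAGED, CONFUSED, FRUSTRATED, BORED }

    // Detected negative states map to supportive messages; engagement does not.
    private static final Map<Affect, String> RESPONSES = Map.of(
        Affect.CONFUSED,   "Try restating the problem in your own words.",
        Affect.FRUSTRATED, "This is a hard step; many learners struggle here.",
        Affect.BORED,      "Ready for a tougher variation of this task?");

    /** Returns a feedback message, or null when no intervention is warranted. */
    static String intervene(Affect detected) {
        return RESPONSES.get(detected); // ENGAGED yields null: do not interrupt
    }

    public static void main(String[] args) {
        System.out.println(intervene(Affect.FRUSTRATED));
    }
}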

GIFT as a Testbed for Developing and Evaluating Tutors for Psychomotor Tasks

The US Army has been interested in extending the effectiveness of ITSs beyond the cognitive task domains where they are currently prevalent. A key aspect of this objective is supporting one-to-one tutoring for individuals learning psychomotor tasks, which are assessed by measures such as speed, accuracy, balance, and coordination (Simpson 1972). In exploring the artificial intelligence in education (AIED) literature, Santos (2016) posed the challenges of modeling psychomotor interaction and providing personalized support for psychomotor tasks ranging from sports to surgical procedures to sign language. Since the AIED literature addressing the tutoring of psychomotor tasks lags behind that for cognitive domains, we suggest that the development and validation of measures, expert models, and adaptive support for psychomotor tasks could benefit from the tools and processes in the GIFT testbed. The evolution of unobtrusive sensors provides opportunities to measure physical movement at a distance with a high degree of accuracy. Low-cost sensing methods may enable sufficient tracking of psychomotor tasks for individuals, but have limitations in tracking multiple learners. Examples of current and emerging research examining ITS capabilities to train individual psychomotor task domains are provided in the discussion that follows.

Sottilare and LaViola (2015) examined the use of smart glass technologies to support adaptive instruction for land navigation, also known as orienteering. During land navigation tasks, learners plan and then travel routes. Since each route of travel is unique, GIFT is currently being extended to allow learners to plan routes as a means of generating unique expert models. Measures of success are based on the deviation of the actual route traveled on real terrain from this unique expert model built during route planning on virtual terrain.
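A minimal sketch of such a deviation measure, under our own simplifying assumptions (2D coordinates rather than real terrain, and mean nearest-point distance as the metric), might look as follows; it is not the actual GIFT implementation.

// Illustrative sketch: score a land navigation attempt by how far the
// traveled track strays from the planned (expert) route. Geometry is
// simplified to 2D points; the real measure in GIFT may differ.
public class RouteDeviation {
    /** Mean distance from each traveled point to the nearest planned point. */
    static double meanDeviation(double[][] planned, double[][] actual) {
        double total = 0.0;
        for (double[] p : actual) {
            double nearest = Double.MAX_VALUE;
            for (double[] q : planned) {
                nearest = Math.min(nearest, Math.hypot(p[0] - q[0], p[1] - q[1]));
            }
            total += nearest;
        }
        return total / actual.length;
    }

    public static void main(String[] args) {
        double[][] planned = {{0, 0}, {1, 1}, {2, 2}};          // expert route
        double[][] actual  = {{0, 0.2}, {1.1, 0.9}, {2.3, 2.1}}; // learner track
        System.out.printf("mean deviation: %.2f%n", meanDeviation(planned, actual));
    }
}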

Sottilare et al. (2016) examined the use of smart glass technologies and pressure sensors to support hemorrhage control training during triage. The physical aspects of the hemorrhage control task were simulated as learners applied tourniquets and pressure bandages to control bleeding injuries. In this proof of concept, the tourniquets and bandages contained pressure sensors whose readings could be used to measure the learner’s ability to stem blood flow, while smart glasses supported hands-free feedback to the learner during task execution. In this instance, the GIFT testbed can be used to select the pressure sensor that is most representative of the actual task and also the most reliable. The GIFT testbed can also be used to refine the training tasks to optimize learning, performance, retention, and transfer of skills from training to operations.
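To illustrate what such a pressure-based assessment might look like, here is a hedged sketch; the thresholds, units, and result categories are invented for illustration and do not come from the proof of concept.

// Hedged sketch of a pressure-based assessment: readings from an
// instrumented tourniquet are checked against a target band to judge
// whether the learner has stemmed blood flow. Thresholds and units are
// illustrative assumptions only.
public class TourniquetAssessment {
    enum Result { UNDER_PRESSURE, EFFECTIVE, OVER_PRESSURE }

    static Result assess(double pressureKpa) {
        final double MIN_EFFECTIVE = 30.0; // illustrative lower bound
        final double MAX_SAFE      = 60.0; // illustrative upper bound
        if (pressureKpa < MIN_EFFECTIVE) return Result.UNDER_PRESSURE;
        if (pressureKpa > MAX_SAFE)      return Result.OVER_PRESSURE;
        return Result.EFFECTIVE;
    }

    public static void main(String[] args) {
        System.out.println(assess(45.0)); // prints EFFECTIVE
    }
}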

In this special issue, Goldberg et al. (2017) discuss a proof of concept that uses GIFT as a testbed to develop and validate models for rifle marksmanship. GIFT was used to develop an adaptive tutor and a gateway linking adaptive instructional best practices with an evolving marksmanship simulation environment. The resulting GIFT-based tutoring testbed was used to present rifle marksmanship tasks and collect learner data from a set of experts. These expert data were used to build an expert model against which learner performance could subsequently be compared. This testbed methodology is being expanded to include other psychomotor tasks to demonstrate its flexibility.
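One simple way to realize such a learner-versus-expert comparison is a standard score against the expert trials, as sketched below. The performance measure (shot-group dispersion) and the statistics shown are our illustrative assumptions, not the models reported by Goldberg et al. (2017).

import java.util.Arrays;

// Minimal sketch of an expert-model comparison: expert trials define a
// reference distribution, and a learner's score is graded by how far it
// falls from the expert mean. Measure and values are illustrative only.
public class ExpertComparison {
    /** Standard (z) score of the learner relative to expert trials. */
    static double zScore(double learner, double[] expertTrials) {
        double mean = Arrays.stream(expertTrials).average().orElse(0.0);
        double var = Arrays.stream(expertTrials)
                           .map(x -> (x - mean) * (x - mean))
                           .average().orElse(0.0);
        return (learner - mean) / Math.sqrt(var);
    }

    public static void main(String[] args) {
        double[] expertShotGroups = {4.1, 3.8, 4.4, 4.0}; // e.g., cm dispersion
        double learnerGroup = 6.2;
        // A positive z here means a wider (worse) shot group than the experts.
        System.out.printf("z = %.2f%n", zScore(learnerGroup, expertShotGroups));
    }
}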

GIFT as a Testbed to Validate Team or Shared Modeling

Next, we examine how GIFT could be used to validate team models based on investigations within the literature. Meta-analyses of the literature may be conducted to identify the significance/impact of various tutoring mechanisms, but given that the context or conditions are different for each reported effect size, it is still necessary to validate these relationships across a variety of domains in order to generalize the results. The AIED literature has focused heavily on ITSs as collaboration support for groups attempting to solve problems or learn knowledge/skills together (Adamson et al. 2014). Dillenbourg (1999, p. 1) defined collaborative learning as “a situation in which two or more people learn or attempt to learn something together”.

The computer-supported collaborative learning (CSCL) literature focuses on groups working together to accomplish shared goals, where each member is responsible for maximizing their own learning and the learning of all other group members (Johnson and Johnson 1986, 1999). The approach and objectives of collaborative problem solving may be most similar to those of teamwork tutoring: both focus on guiding the interactions between team members to optimize the pursuit of their goals. Both collaborative problem solving and teamwork instructional methods may be generalized as domain-independent processes (e.g., the scientific method), but there are also domain-dependent elements to collaborative problem solving. Whereas collaborative learning, collaborative problem solving, and team taskwork focus on learning in specific domains, teamwork tutoring focuses on improving the processes (e.g., communication, coaching, conflict management, cooperation) within the team so that the team might function more efficiently and effectively in future domains of instruction and operation. Whatever the similarities or differences, the shared opportunity is that the GIFT testbed could be used to validate collaborative learning, collaborative problem solving, or team tutoring functions in both AIED and CSCL tutors.

In this special issue, Sottilare et al. (2017a, 2017b) investigated teamwork states (e.g., cohesion, conflict management, trust) that are antecedents of team learning and performance, drawing on the general ITS, AIED, CSCL, and team performance literature. Teamwork involves behavioral, attitudinal, and cognitive contributors (antecedents) to both team learning and performance. While this investigation provides a solid initial step toward identifying measures of good/poor team learning and performance, experimentation via the GIFT testbed will still be required to validate behavioral markers as precursors of team states across domains. Experiments are currently being designed to evaluate the relative influence of each marker and the influence of team states as precursors of team learning and team performance. GIFT is also being extended to support: 1) multiple domain knowledge files used to assess the learning and performance of both individual team members and the team as a whole (assessment of team taskwork); 2) tracking of team states as measures of team learning and performance; and 3) remedial cues delivered by the tutor to overcome poor teamwork.
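As a speculative illustration of marker-based team state tracking, the sketch below aggregates observed behavioral markers into a crude cohesion estimate. The markers, weights, and aggregation scheme are entirely our own assumptions, offered only to make the idea concrete ahead of the validation experiments described above.

import java.util.List;
import java.util.Map;

// Speculative sketch: observed behavioral markers are mapped to weighted
// evidence for a team state (here, cohesion) whose running estimate a tutor
// could track. Markers, weights, and averaging are invented for illustration.
public class TeamStateTracker {
    // Positive weights are evidence for cohesion; negative weights against.
    private static final Map<String, Double> MARKER_WEIGHTS = Map.of(
        "offered-backup",        0.8,
        "acknowledged-teammate", 0.5,
        "interrupted-teammate", -0.6);

    /** Mean marker weight, clamped to [-1, 1], as a crude cohesion estimate. */
    static double cohesionEstimate(List<String> observedMarkers) {
        double sum = observedMarkers.stream()
            .mapToDouble(m -> MARKER_WEIGHTS.getOrDefault(m, 0.0))
            .sum();
        return Math.max(-1.0, Math.min(1.0, sum / observedMarkers.size()));
    }

    public static void main(String[] args) {
        List<String> session = List.of("offered-backup", "acknowledged-teammate",
                                       "interrupted-teammate");
        System.out.printf("cohesion estimate: %.2f%n", cohesionEstimate(session));
    }
}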

In this special issue, Fletcher and Sottilare (2017) reviewed the learning and performance literature to identify how shared mental models of cognition could be used to enhance the adaptive instruction of teams. The goal was to develop a methodology and extend GIFT to enhance adaptive team instruction at the point of need. This review is an initial step toward using GIFT to recognize and model each team member’s individual understanding, and the team’s collective understanding, of the domains experienced during training and educational experiences. Based on the literature, it is expected that an ITS’s modeling of shared mental models and teamwork will determine its ability to guide team learning. It is therefore important to examine the interaction between ITSs, shared mental models of cognition, and teamwork. Augmenting the shared mental modeling processes of ITSs is expected to enhance their effectiveness in tutoring groups (e.g., teams, collaborative learners).

GIFT is being extended to represent the mental models of individual learners and teams in various domains of learning. The goal is to determine how an ITS’s level of understanding of the team and its individual learners affects its ability to make accurate assessments, make sound instructional decisions, and ultimately optimize team learning. To this end, GIFT will extend learner and team modeling beyond the near-term knowledge and skill captured within a single tutoring session toward longer-term representations of understanding based on a history of learner experiences and achievements.

GIFT as a Testbed to Evaluate Instructional Approaches for Team Taskwork

Given the importance of team models and collaborative approaches to instruction discussed above, we next discuss methods to evaluate instructional approaches for training teams to perform specific tasks, referred to as team taskwork. For team taskwork, it is often assumed that each individual is already proficient at the basic tasks required during team instruction, and that the goal is to enhance the team’s performance as demonstrated by measures of quality (e.g., accuracy), speed, and/or reduced resource use. This usually involves a tutoring experience in which the tutor guides a group of learners toward the goal of learning how to perform a task, or set of tasks, that requires a team to be successful. Since the measures focus on the contributions of each team member to a specific task, team taskwork is domain-dependent.

The AIED and CSCL communities have a long history of research in intelligent support for learning in groups, ranging from conversational strategies (Kumar et al. 2011) to embedded training for teams (Zachary et al. 1998) to peer tutoring (Walker et al. 2014) to modeling human tutors (Person et al. 2003). A long-desired goal has been to generalize the authoring of taskwork in ITSs. GIFT was created in 2012 with the notion that it was possible to standardize processes, data structures, messages, and modules to support a data-driven, learner-centric adaptive instructional capability for both individual learners and teams. As noted previously, some tutoring processes (e.g., collaborative problem solving and teamwork tutoring) lend themselves more easily to a generalized model of computer-based tutoring. A more challenging problem is the authoring of ITSs for taskwork, and in particular team taskwork, in which the group is attempting to become more proficient in a specific team domain and in which the learner models, team model, assessment measures, and interventions may be unique to that domain. In the AIED community, Olsen et al. (2013) extended the Cognitive Tutor Authoring Tools (CTAT) to support the development of ITSs that allow multiple learning goals and guide collaboration via a broad range of collaboration scripts across multiple task domains.

In this special issue, Gilbert et al. (2017) discuss the design and evaluation of an instructional approach for representing taskwork for teams in ITSs. The article discusses some of the challenges in extending GIFT to author ITSs for teams. Authoring team learning objectives requires an understanding of the roles and responsibilities of the individual team members, along with assessments of each team member’s progress toward assigned goals. The GIFT data structure was extended to represent the domain knowledge of team tasks during a simple surveillance mission involving two team members. Even this simple example required mechanisms to measure progress toward individual goals and a model to determine which team member behaviors contributed to progress toward team goals. The prototype formed the basis for new structures in GIFT, which the authoring tools will expose to allow ITS developers to define the size of the team, the roles and responsibilities of team members, their expected interactions, and their contributions to team-level goals, as sketched below.
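The sketch below illustrates, under our own assumptions, the kind of declaration such authoring tools might eventually allow: roles with responsibilities, and team-level goals mapped to the roles that contribute to them. The structure and names are hypothetical, not the actual data structures of Gilbert et al. (2017) or GIFT.

import java.util.List;

// Hedged sketch of a team taskwork declaration: team roles, their
// responsibilities, and which roles contribute to each team-level goal.
// All names are hypothetical illustrations.
public class TeamTaskDefinition {
    record Role(String name, List<String> responsibilities) {}
    record TeamGoal(String description, List<String> contributingRoles) {}

    public static void main(String[] args) {
        List<Role> roles = List.of(
            new Role("observer", List.of("scan sector", "report contacts")),
            new Role("recorder", List.of("log reports", "confirm grid locations")));
        List<TeamGoal> goals = List.of(
            new TeamGoal("maintain surveillance of the named area",
                         List.of("observer", "recorder")));
        System.out.println(roles.size() + " roles, " + goals.size() + " team goal(s)");
    }
}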

Learn More about GIFT

As noted in the introduction, details and documentation about GIFT and its authoring, instructional management, and evaluation (testbed) functions are available at www.GIFTtutoring.org. GIFT software is available as a free download (https://gifttutoring.org/projects/gift/files) or may be used freely in our cloud-based application (https://cloud.gifttutoring.org/dashboard/#login).

The Design Recommendations for Intelligent Tutoring Systems book series is available free at www.GIFTtutoring.org and includes volumes on learner modeling, instructional management, authoring tools, domain modeling, and assessment, with future volumes covering team tutoring, machine learning techniques, and potential standards for ITSs.