Abstract
The Generalized Intelligent Framework for Tutoring (GIFT) is a research prototype with three general goals associated with its functions and components: 1) lower the skills and time required to author Intelligent Tutoring Systems (ITSs) in a variety of task domains; 2) provide effective adaptive instruction tailored to the needs of each individual learner or team of learners; and 3) provide tools and methods to evaluate the effectiveness of ITSs and support research to continuously improve instructional best practices. This special issue focuses primarily on the third goal, GIFT as a research testbed. A discussion thread accompanies each article within this special issue and examines its actual and potential impact on GIFT as a research tool for AIED. Our primary motivation was to introduce the AIED community to GIFT not just as a research tool, but as an extension of familiar challenges taken on previously by AIED scientists and practitioners. This preface provides a high-level overview of the GIFT functions (authoring, instructional delivery and management, and experimentation) and presents its primary design principles. To learn more about GIFT and freely access the software, documentation, and associated technical papers, visit www.GIFTtutoring.org.
Introduction
The primary purpose of this special issue is to introduce AIED researchers to the experimentation capabilities of the Generalized Intelligent Framework for Tutoring (GIFT) (Sottilare et al. 2012; Sottilare et al. 2017a, 2017b), a research prototype sponsored and developed by the US Army Research Laboratory (ARL). GIFT’s design goals are: 1) to lower the skills and time required to author Intelligent Tutoring Systems (ITSs) in a variety of task domains; 2) to deliver effective and efficient adaptive instruction that is tailored to the needs of each individual learner or team of learners; and 3) to provide tools and methods to evaluate the effectiveness of ITSs and support research to continuously improve instructional best practices. These three design goals are aligned with authoring, instructional management, and evaluation, respectively.
While our primary goal is to expose the AIED community to GIFT as a research tool, our major motivation is to engage AIED scientists and practitioners in shaping its design and future functionality. GIFT’s design goals should be familiar to the AIED community and place GIFT in the category of “shell” tutors, “a generalized framework for building ITSs” (Murray 1999), but GIFT also provides a user interface and logic to “allow non-programmers to formalize and visualize their knowledge” (Murray 1999). The principles that have shaped GIFT are based on the individual and team (collaborative) instructional literature, which draws heavily on the AIED and computer-supported collaborative learning (CSCL) literature.
The GIFT software represents a substantial body of work and has been applied widely. We believe it may provide a useful platform for the broader AIED community. To learn more about GIFT and its design, we direct your attention to www.GIFTtutoring.org for details and documentation about GIFT and its authoring, instructional management, and evaluation (testbed) functions. GIFT software is also available as a free download (https://gifttutoring.org/projects/gift/files) or may be used freely in our cloud-based application (https://cloud.gifttutoring.org/dashboard/#login). We invite the AIED community to explore GIFT and bend it to their needs. Opportunities to influence the design and functionality of GIFT may be posted on our online forum (https://www.gifttutoring.org/projects/gift/boards) or by sharing your thoughts and experiences at our annual GIFT Users Symposium (GIFTSym).
GIFT Functions
While the main elements of most ITSs include models of learners, instruction, and knowledge/skill domains, there are three functions that tie these models together and encompass both offline and real-time processes to create, deliver, and understand the influence of instruction provided by ITSs. In GIFT, these processes are represented in the architecture, tools, models, and methods as the authoring, instructional management, and evaluation functions.
Authoring Functions
One major goal of GIFT authoring is to lower the skills and time required to author ITSs in a variety of task domains. The design goals for authoring have been adapted from Murray (1999, 2003) and Sottilare and Gilbert (2011). GIFT authoring is composed of policies, tools and methods to enable authors (instructional designers, developers, instructors/teachers, course managers, and subject matter experts) to: 1) create ITSs without knowledge of instructional design principles or software programming; 2) curate (search for and organize) content to accurately represent instructional domains (cognitive, affective, psychomotor, and social); and 3) sequence them for presentation to learners based on their hierarchical or dependency relationships. Associated authoring objectives include ease of use, support for rapid prototyping, collaborative authoring, and rapid integration of external environments (e.g., simulators, serious games, webpages) to reduce development time/cost and promote interoperability and reuse.
In 2011, the US Army Research Laboratory began to define desirable characteristics of authoring systems for both individual learners and teams (Sottilare et al. 2011) that included interoperability with external environments (e.g., simulations, serious games). GIFT’s linkage with external environments builds on an AIED concept called RIDES which featured embedded graphical simulations and simulation-centered tutorials authored and delivered using the same authoring tool (Munro et al. 1997). GIFT generalized the RIDES concept to allow any external environment to pass and receive instructional data via a standardized GIFT gateway.
Today, GIFT authoring tools include a standard gateway specification for interacting with all types of external environments and course objects to interface with games (e.g., Virtual BattleSpace, Virtual Medic), simulations (e.g., excavator simulator, Newtonian Talk), applications (e.g., Microsoft PowerPoint, Media Semantics Virtual Characters), and physiological/behavioral sensors (e.g., Zephyr Bioharness, Microsoft Kinect, Emotiv Epoc EEG). The course objects require no programming and may be added to a GIFT-based tutor through a simple drag and drop. New games, simulations, applications, and sensors can be added with minor programming to define enumerations (e.g., high/moderate/low trust) as public classes of information available to GIFT for assessment. GIFT is now also Learning Tools Interoperability (LTI) compliant to begin supporting adaptive instruction with EdX and other Massive Open Online Course (MOOC) delivery platforms.
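To illustrate the kind of enumerated state an external environment might expose through the gateway, the sketch below defines a trust enumeration and a message wrapper. The class and field names here are our own invention for illustration; they are not GIFT's actual (Java-based) API.

```python
from dataclasses import dataclass
from enum import Enum


class TrustLevel(Enum):
    """Enumerated learner-state values an external sensor might report."""
    LOW = 0
    MODERATE = 1
    HIGH = 2


@dataclass
class GatewayStateMessage:
    """Hypothetical gateway payload: which source reported which attribute."""
    source: str        # e.g., a wearable sensor or a game engine
    attribute: str     # the learner attribute being reported
    value: TrustLevel  # the enumerated reading made available for assessment


msg = GatewayStateMessage(source="wearable_sensor",
                          attribute="trust",
                          value=TrustLevel.MODERATE)
```

Exposing readings as a small, closed enumeration (rather than raw sensor values) is what lets the framework assess any new environment through the same standardized interface.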
Instructional Management Functions
The primary goal of GIFT instructional management is to deliver effective and efficient adaptive instruction that is tailored to the needs and preferences of each individual learner or team of learners. The design objectives for instructional management include delivery of instruction (e.g., content, feedback, support, direction) to individuals and teams in a variety of locations (e.g., distributed learning, mobile learning) and on a variety of computing devices (e.g., laptops, smartphones, tablets, workstations). Instructional management integrates instructional best practices derived from experimentation and reviews of the empirical literature. As a modular architecture, GIFT also allows users to integrate new pedagogical models, instructional strategies, or instructional tactics from other tutoring systems into GIFT. Associated objectives are to: 1) model and adapt to individual differences (e.g., states, traits, preferences) that influence learning and performance; 2) manage the pace, direction, and challenge level of the instruction; and 3) manage the interplay of learning, performance, retention, and transfer of skills.
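To make the pacing objective concrete, the following toy policy raises or lowers the challenge level from recent performance. This is a simplified sketch, not GIFT's actual pedagogical model, and the thresholds are arbitrary placeholders.

```python
def next_challenge(current_level, recent_scores, low=0.5, high=0.85):
    """Toy pacing policy: raise the challenge level when recent performance
    is high, lower it when performance is low, otherwise hold steady."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= high:
        return current_level + 1
    if avg <= low:
        return max(1, current_level - 1)  # never drop below the easiest level
    return current_level
```

A modular pedagogical model like this can be swapped out without touching the learner or domain models, which is the point of GIFT's architectural separation.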
To support the development of optimized instructional strategies and tactics, GIFT is deeply grounded in learning theory, tutoring theory, and motivational theory. The learning theories applied in GIFT cover a large landscape: building on prerequisites and conditions of instruction (Gagne 1985), component display theory (Merrill 1983), cognitive learning (Anderson et al. 2001; Koedinger et al. 2012), affective learning (Bower 1992; D’Mello and Graesser 2012), psychomotor learning (Simpson 1972), and social learning (Adamson et al. 2014; Soller 2001; Sottilare et al. 2011). In attempts to model expert human tutors, GIFT considers the Intelligent, Nurturant, Socratic, Progressive, Indirect, Reflective, and Encouraging (INSPIRE) model of tutoring success (Lepper et al. 1997) and the one-to-one tutoring process documented by Graesser et al. (2017) in the development of GIFT instructional strategies and tactics.
The GIFT architecture accommodates learner-centric approaches in addition to instructional management capabilities that are sensitive to learner states, traits, and preferences. Self-regulated learning is encouraged through open learner models that allow learners to decide what to learn next and to inspect their progress in mastering the subject matter and their measured psychological attributes (Bull and Kay 2008). As its design goals are realized, GIFT continues to be used in new and more complex domains. A new evolving capability is the development of architectural services to support the tutoring of groups (e.g., team taskwork tutoring, teamwork tutoring, collaborative learning, and collaborative problem solving) through intelligent, computer-guided instruction as defined below:
- team taskwork tutoring – focused on developing proficiency in task domains requiring more than one member to accomplish the task (Salas 2015).
- teamwork tutoring – focused on enhancing coordination, cooperation, and communication among individuals on a team to achieve a shared goal (Salas 2015).
- collaborative learning – “a situation in which two or more people learn or attempt to learn something together” (Dillenbourg 1999, p. 1), which may or may not require more than one member to accomplish once learned.
- collaborative problem solving – “the capacity of an individual to effectively engage in a process whereby two or more agents [human or computer-simulated] attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills and efforts to reach that solution” (PISA 2017, p. 6).
Evaluation Functions
The GIFT evaluation functions emphasize capabilities to support the empirical evaluation of adaptive instructional methods, ITSs, and their component technologies. The evaluation capabilities in GIFT can be used as a testbed (see Fig. 1) to determine the effect of environmental attributes, tools, models, and methods on engagement, learning, performance, retention, reasoning, and transfer of skills.
The purpose of the GIFT evaluation function is to allow ITS researchers to experimentally assess and evaluate ITS technologies (ITS components, tools, and methods). Figure 1 illustrates an analysis testbed methodology that has been implemented in GIFT to support experimentation. This testbed methodology, derived from Hanks et al. (1993), supports manipulation of the learner model, instructional strategies, and domain-specific knowledge within GIFT. It is used to evaluate manipulated or measured variables within the learning effect model (Sottilare 2012; Sottilare et al. 2013).
The testbed is based upon the notion that testbeds have three critical roles related to three major phases of research. During the exploratory phase, agent behaviors need to be observed and classified in broad categories. This can be performed in an experimental environment, using the methods of educational data mining (Baker and Yacef 2009) to study the relationship between specific agent behaviors, the development of learner behaviors over time, and learner outcomes. During the confirmatory phase, the testbed must allow stricter characterizations of agent behavior to test specific hypotheses and compare tools and methods. Finally, in order to generalize results, replication of conditions and associated measurement must be possible. Specifically, the GIFT evaluation function enables the comparison/contrast of ITS elements and the assessment of their effect on learning outcomes (e.g., engagement, knowledge or skill acquisition, retention, reasoning, and transfer of skills). Just as it does for instruction, GIFT enables experiment authors to define measurement and assessment methods to discern learner states and to record contextual data (e.g., conditions in the environment) during evaluations.
GIFT as an Experimental Tool for AIED Research
While GIFT has been used as an ITS authoring and experimental tool across a variety of task domains, including more traditional topics in science, technology, engineering, and mathematics (STEM) (e.g., Renduchitnala and Matthews 2017; Warta 2017), this special issue focuses on three distinctive topics: 1) sensitivity to learner affect (DeFalco et al. 2017), 2) adaptive training of psychomotor tasks (Goldberg et al. 2017), and 3) adaptive instruction of teams (Fletcher and Sottilare 2017; Gilbert et al. 2017; Sottilare et al. 2017a, 2017b). Although each article describes a functional capability of significant importance to military training and education, we propose that the processes described therein could easily be transferred and applied to parallel civilian training and educational domains.
The articles in this special issue also address two fundamental questions. First, how has GIFT been used as an experimental tool to develop models or enhance pedagogy in ITSs? Second, how might GIFT be used to validate concepts derived from the literature? The following discussion identifies different ways that the articles address these questions.
GIFT as a Testbed to Build and Embed Affect Sensitivity
A range of papers have presented models that automatically detect affect during online learning (see reviews by Baker and Ocumpaugh 2014 and Calvo and D'Mello 2010, for instance). However, relatively few of these models have actually been built into running systems, and fewer still have been used to drive affective interventions, as noted in a review by D’Mello and his colleagues (D’Mello et al. 2014). There has been additional work since then – see, for instance, Grawemeyer et al. (2017) – but the relatively small number of examples suggests that this combination of factors is difficult to bring together.
Perhaps the best-known (and most successful) example of this line of work is the work on the Affective AutoTutor by D’Mello and colleagues (D'Mello et al. 2010). In this system, a set of physical sensors were combined with interaction data and self-reports of student affect to develop a model that could automatically infer student affect. The resultant model was then embedded into AutoTutor, a natural language-based intelligent tutoring system. AutoTutor responded to negative student affect with encouraging and supportive messages; a randomized experiment determined that it led to better learning outcomes for learners with initial low domain knowledge. In related work within a speech-based intelligent tutor, Grawemeyer et al. (2017) found that affective support based on automated detection of student affect reduced boredom and off-task behavior.
However, beyond even the technical challenges in conducting this kind of research, affective design remains a difficult art, and not all studies in this area have yielded positive effects. Forbes-Riley and Litman (2011) extended a speech-based intelligent tutoring system (ITSPOKE) with the ability to automatically detect student uncertainty, inferred from the acoustic and prosodic properties of student speech. They then used the detection to drive the provision of additional instruction to resolve the uncertainty. However, a randomized experiment did not find a significant difference between the experimental condition and a control condition without affect-sensitivity. A similar null result was obtained in work by Burleson and Picard (2007), who used a range of physical sensors to detect student affect, and embedded the resultant models into a learning system that taught learners how to solve the Towers of Hanoi problem. The models were used to drive messages that told learners they could succeed at solving the problems and get better. In an experimental study, no main effects were found for the intervention.
In this special issue, DeFalco et al. (2017) present the entire process of developing an affect-sensitive system, including affect detection research, the embedding of the resultant models in a running system, and the use of those models to drive affective interventions. This paper shows that GIFT can be used to obtain data for detector development, and that the resultant models (both models based on student interaction and models based on physical sensors) can be embedded back into GIFT for reuse. Next, the GIFT framework was used to build and embed affect sensitivity into the ITS’s responses. Finally, GIFT was used to run an experimental study investigating the impacts of affective interventions on a variety of student outcomes. The example provided by DeFalco et al. (2017) could easily form a template for future research examining the sensitivity of instructional methods, conducting validation studies, or assessing the impact of various adaptive instructional tools or methods.
GIFT as a Testbed for Developing and Evaluating Tutors for Psychomotor Tasks
The US Army has been interested in extending the effectiveness of ITSs beyond the cognitive task domains currently prevalent. A key aspect of this objective is supporting one-to-one tutoring for individuals learning psychomotor tasks that are assessed by measures such as speed, accuracy, balance, and coordination (Simpson 1972). In exploring the artificial intelligence in education (AIED) literature, Santos (2016) posed the challenges of modeling psychomotor interaction and providing personalized support for psychomotor tasks that ranged from sports to surgical procedures to sign language. Since the AIED literature addressing tutoring of psychomotor tasks is lagging with respect to cognitive domains, we suggest that the development and validation of measures, expert models, and adaptive support for psychomotor tasks could benefit from tools and processes in the GIFT testbed. The evolution of unobtrusive sensors provides opportunities to measure physical movement at a distance with a high degree of accuracy. Low cost sensing methods may enable sufficient tracking of psychomotor tasks for individuals, but have some limitations in tracking multiple learners. Examples of current and emerging research examining ITS capabilities to train individual psychomotor task domains are provided in the discussion that follows.
Sottilare and LaViola (2015) examined the use of smart glass technologies to support adaptive instruction for land navigation, also known as orienteering. During land navigation tasks, learners plan and conduct routes of travel. Since each route of travel is unique, GIFT is currently being extended to allow learners to plan routes as a means of developing unique expert models. Measures of success are based on the variance of the actual route on real terrain from this unique expert model built from route planning on virtual terrain.
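One simple way to score deviation from such a unique expert route is a mean waypoint-to-waypoint distance. The sketch below is our own illustration of the idea, not the measure GIFT actually implements.

```python
import math


def route_deviation(planned, actual):
    """Mean Euclidean distance between corresponding planned and actual
    waypoints, each given as an equal-length list of (x, y) coordinates."""
    if len(planned) != len(actual):
        raise ValueError("routes must be sampled at the same waypoints")
    return sum(math.dist(p, a) for p, a in zip(planned, actual)) / len(planned)
```

For example, `route_deviation([(0, 0), (3, 4)], [(0, 0), (0, 0)])` returns 2.5: the learner matched the first waypoint exactly and strayed five units from the second.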
Sottilare et al. (2016) examined the use of smart glass technologies and pressure sensors to support hemorrhage control training during triage. The physical aspects of the hemorrhage control task were simulated as learners applied tourniquets and pressure bandages to control bleeding injuries. In this proof of concept, the tourniquets and bandages contained pressure sensors that could be used to measure the learner’s ability to stem blood flow based on pressure readings. Smart glasses were used to provide hands-free feedback to the learner during task execution. In this instance, the GIFT testbed can be used to select the pressure sensor that is most representative of the actual task and also the most reliable. The GIFT testbed can also be used to refine the training tasks to optimize learning, performance, retention, and transfer of skills from training to operations.
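A minimal assessment over such pressure readings might look like the following. The threshold and required fraction are illustrative placeholders of our own, not clinically validated values or the assessment Sottilare et al. used.

```python
def assess_tourniquet(readings_mmhg, threshold=300.0, required_fraction=0.9):
    """Classify a tourniquet application from a series of pressure samples:
    the learner is at expectation when enough samples meet the threshold."""
    held = sum(1 for r in readings_mmhg if r >= threshold)
    if held / len(readings_mmhg) >= required_fraction:
        return "AtExpectation"
    return "BelowExpectation"
```

An assessment of this form maps raw sensor data onto the same at/below-expectation states GIFT uses for cognitive tasks, which is what lets the psychomotor domain plug into the existing instructional loop.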
In this special issue, Goldberg et al. (2017) discuss a proof of concept using GIFT as a testbed to develop and validate models for rifle marksmanship. GIFT was used to develop an adaptive tutor and a gateway to link adaptive instructional best practices with an evolving marksmanship simulation environment. The resulting GIFT-based tutoring testbed was used to present rifle marksmanship tasks and collect learner data from a set of experts. These expert data were used to build an expert model that could subsequently be used to compare learner and expert performance. This testbed methodology is being expanded to include other psychomotor tasks to demonstrate its flexibility.
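The expert-model idea can be sketched as summary statistics over expert scores with a band test for learners. This is our simplification for illustration, not the model Goldberg et al. describe; the measure name and the two-standard-deviation band are assumptions.

```python
from statistics import mean, stdev


def build_expert_model(expert_scores):
    """Summarize expert performance on one measure (e.g., shot-group radius)."""
    return {"mean": mean(expert_scores), "stdev": stdev(expert_scores)}


def meets_expert_band(model, learner_score, k=2.0):
    """Treat a learner as expert-like when within k standard deviations of
    the expert mean on this measure."""
    return abs(learner_score - model["mean"]) <= k * model["stdev"]
```

The key point carried over from the article is the workflow: collect expert data first, distill it into a model, then assess learners against that model rather than against hand-authored rules.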
GIFT as a Testbed to Validate Team or Shared Modeling
Next, we examine how GIFT could be used to validate team models based on investigations within the literature. Meta-analyses of the literature may be conducted to identify the significance/impact of various tutoring mechanisms, but given that the context or conditions are different for each reported effect size, it is still necessary to validate these relationships across a variety of domains in order to generalize the results. The AIED literature has focused heavily on ITSs as collaboration support for groups attempting to solve problems or learn knowledge/skills together (Adamson et al. 2014). Dillenbourg (1999, p. 1) defined collaborative learning as “a situation in which two or more people learn or attempt to learn something together”.
The computer-supported collaborative learning (CSCL) literature is focused on groups working together to accomplish shared goals where they are responsible to maximize their own learning and the learning of all other group members (Johnson and Johnson 1986, 1999). The approach and objectives for collaborative problem solving may be most similar to teamwork tutoring. Both are focused on guiding the interactions between team members to optimize the pursuit of their goals. Both collaborative problem solving and teamwork instructional methods may be generalized (e.g., the scientific method) as domain-independent processes, but there are also domain-dependent elements to collaborative problem solving. Whereas collaborative learning, collaborative problem solving, and team taskwork focus on learning in specific domains, teamwork tutoring is focused on improving the processes (e.g., communication, coaching, conflict management, cooperation) within the team so the team might function more efficiently and effectively in future domains of instruction and operation. Whatever the similarities or differences, the shared opportunity is that the GIFT testbed could also be used to validate collaborative learning, collaborative problem solving, or team tutoring functions in both AIED and CSCL tutors.
In this special issue, Sottilare et al. (2017a, 2017b) investigated the antecedent teamwork states (e.g., cohesion, conflict management, trust) of team learning and performance in the general ITS, AIED, CSCL, and team performance literature. Teamwork involves behavioral, attitudinal, and cognitive contributors, or antecedents, to both team learning and performance. While this investigation provides a solid initial step toward identifying measures of good/poor team learning/performance, experimentation via the GIFT testbed is still required to validate behavioral markers as precursors of team states across domains. Experiments are currently being designed to evaluate the relative influence of each marker and the influence of team states as precursors of team learning and team performance. GIFT is also being extended to support: 1) multiple domain knowledge files that are used to assess learning and performance of both individual team members and the team as a whole (assessment of team taskwork); 2) tracking of team states as measures of team learning and performance; and 3) remedial cues delivered by the tutor to overcome poor teamwork.
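How behavioral markers might be tallied into rough team-state estimates can be sketched as below. The marker names and their weights are hypothetical, standing in for the validated set that the experiments described above are intended to produce.

```python
# Hypothetical marker-to-state mapping; not GIFT's validated marker set.
MARKER_TO_STATE = {
    "acknowledges_teammate": ("cohesion", +1),
    "interrupts_teammate":   ("cohesion", -1),
    "shares_information":    ("trust", +1),
    "withholds_information": ("trust", -1),
}


def score_team_states(observed_markers):
    """Tally observed behavioral markers into per-state running scores."""
    states = {}
    for marker in observed_markers:
        state, delta = MARKER_TO_STATE[marker]
        states[state] = states.get(state, 0) + delta
    return states
```

Validation would then amount to testing whether scores like these actually predict downstream team learning and performance across domains.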
In this special issue, Fletcher and Sottilare (2017) reviewed the learning and performance literature to identify how shared mental models of cognition could be used to enhance the adaptive instruction of teams. The goal was to develop a methodology and extend GIFT to enhance adaptive team instruction at the point-of-need. This review is an initial step toward using GIFT to recognize and model the individual team member and collective understanding of domains experienced by the team during training and educational experiences. Based on the literature, it is expected that the modeling of shared mental models and teamwork by ITSs will determine the system’s ability to guide team learning. To do so it is important to examine the interaction between ITSs, shared mental models of cognition, and teamwork. Augmenting the shared mental modeling processes of ITSs is expected to enhance the system’s effectiveness in tutoring groups (e.g., teams, collaborative learners).
GIFT is being extended to represent mental models of individual learners and teams in various domains of learning. The goal is to determine the effect of various levels of understanding of the team and individual learners by ITSs on their ability to make accurate assessments, make sound instructional decisions, and ultimately to optimize team learning. To this end, GIFT will be extending learner and team modeling beyond near term knowledge and skill found within a single tutoring session to more long term representations of understanding/comprehension based on a history of learner experiences and achievements.
GIFT as a Testbed to Evaluate Instructional Approaches for Team Taskwork
Given the importance of team models and collaborative approaches to instruction discussed above, we next discuss methods to evaluate instructional approaches for training teams to perform specific tasks, referred to as team taskwork. For team taskwork, it is often assumed that each individual is proficient at the basic tasks required during team instruction and that the goal is to enhance the team’s performance as demonstrated by measures of quality (e.g., accuracy), speed, and/or reduced resources. This usually involves a tutoring experience where the tutor guides a group of learners toward the goal of learning how to perform a task or set of tasks that require a team to be successful. Since the measures focus on the contributions of each team member to a specific task, team taskwork is domain-dependent.
The AIED and CSCL communities have a long history of research in intelligent support for learning in groups ranging from conversational strategies (Kumar et al. 2011) to embedded training for teams (Zachary et al. 1998) to peer tutoring (Walker et al. 2014) to modeling human tutors (Person et al. 2003). A long desired goal has been to generalize the authoring of taskwork in ITSs. GIFT was created in 2012 with the notion that it was possible to standardize processes, data structures, messages, and modules to support a data-driven, learner-centric adaptive instructional capability for both individual learners and teams. As noted previously, some tutoring processes (e.g., collaborative problem solving and teamwork tutoring) lend themselves more easily to a generalized model of computer-based tutoring. A more challenging problem is the authoring of ITSs for taskwork, and in particular team taskwork, where the group is attempting to become more proficient in a specific team domain and where the learner models, team model, measures of assessment, and interventions may be unique to that domain. In the AIED community, Olsen et al. (2013) extended the Cognitive Tutor Authoring Tools (CTAT) to support the development of ITSs to allow multiple learning goals and guide collaboration enabled by a broad range of collaboration scripts across multiple task domains.
In this special issue, Gilbert et al. (2017) discuss the design and evaluation of an instructional approach to represent taskwork for teams in ITSs. This article discusses some of the challenges in extending GIFT to author ITSs for teams. Authoring team learning objectives requires an understanding of the roles and responsibilities of the individual team members along with assessments of each team member’s progress toward assigned goals. The GIFT data structure was extended to represent the domain knowledge of team tasks during a simple surveillance mission involving two team members. This simple example required mechanisms to measure progress toward individual goals, and a model to determine which team member behaviors contributed toward progress for team goals. This prototype formed the basis for new structures in GIFT that the authoring tools will extend to allow ITS developers to define the size of the team, the roles and responsibilities of team members, their expected interactions, and their contributions to team level goals.
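The kind of team-task structure described here, with roles, responsibilities, and credit toward individual and team goals, might be represented as follows. All names below are hypothetical, sketching the two-member surveillance example rather than GIFT's actual extended data structure.

```python
from dataclasses import dataclass


@dataclass
class Role:
    name: str
    responsibilities: list  # behaviors this member is expected to perform


@dataclass
class TeamTask:
    team_size: int
    roles: dict       # role name -> Role
    team_goals: list  # behaviors that advance the team-level objective

    def credits_member_goal(self, role_name, behavior):
        """Did this behavior advance the member's assigned responsibilities?"""
        return behavior in self.roles[role_name].responsibilities

    def credits_team_goal(self, behavior):
        """Did this behavior advance a team-level goal?"""
        return behavior in self.team_goals


# Hypothetical two-member surveillance task in the spirit of the article.
surveillance = TeamTask(
    team_size=2,
    roles={
        "observer": Role("observer", ["scan_sector", "report_contact"]),
        "recorder": Role("recorder", ["log_contact", "relay_report"]),
    },
    team_goals=["report_contact", "relay_report"],
)
```

Separating member-level from team-level credit mirrors the article's requirement for mechanisms that measure progress toward individual goals alongside a model of which member behaviors contribute to team goals.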
Learn More about GIFT
If you are interested in knowing more about GIFT and adaptive instruction, we direct your attention to www.GIFTtutoring.org for details and documentation about GIFT and its authoring, instructional management, and evaluation (testbed) functions. GIFT software is also available as a free download (https://gifttutoring.org/projects/gift/files) or may be used freely in our cloud-based application (https://cloud.gifttutoring.org/dashboard/#login).
The Design Recommendations for Intelligent Tutoring Systems book series is available free at www.GIFTtutoring.org and covers volumes on learner modeling, instructional management, authoring tools, domain modeling, and assessment, with future volumes covering team tutoring, machine learning techniques, and potential standards for ITSs.
References
Adamson, D., Dyke, G., Jang, H., & Rosé, C. P. (2014). Towards an agile approach to adapting dynamic collaboration support to student needs. International Journal of Artificial Intelligence in Education, 24(1), 92–124.
Anderson, L. W., Krathwohl, D. R., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., et al. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom’s taxonomy. In L. W. Anderson & D. R. Krathwohl (Eds.). New York: Longman Publishing.
Baker, R. S. J. D., & Ocumpaugh, J. (2014). Interaction-based affect detection in educational software. In R. A. Calvo, S. K. D'Mello, J. Gratch, & A. Kappas (Eds.), The Oxford handbook of affective computing (pp. 233–245). Oxford: Oxford University Press.
Baker, R. S. J. D., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1(1), 3–17.
Bower, G. H. (1992). How might emotions affect learning? In S. Christianson (Ed.), The handbook of emotion and memory: Research and theory (pp. 3–32). Hillsdale: Lawrence Erlbaum Associates.
Bull, S., & Kay, J. (2008, June). Metacognition and open learner models. In The 3rd workshop on meta-cognition and self-regulated learning in educational technologies, at ITS2008 (pp. 7-20).
Burleson, W., & Picard, R. (2007). Affective learning companions. Educational Technology, Special Issue on Pedagogical Agents, 47(1), 28–32.
Calvo, R. A., & D'Mello, S. (2010). Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1), 18–37.
D’Mello, S., & Graesser, A. (2012). Dynamics of affective states during complex learning. Learning and Instruction, 22(2), 145–157.
D’Mello, S., Blanchard, N., Baker, R., Ocumpaugh, J., & Brawner, K. (2014). I feel your pain: a selective review of affect sensitive instructional strategies. In R. Sottilare, A. Graesser, X. Hu, & B. Goldberg (Eds.), Design recommendations for adaptive intelligent tutoring systems: Adaptive instructional strategies (Vol. 2, pp. 35–48). Orlando: US Army Research Laboratory.
DeFalco, J., Rowe, J., Paquette, L., Georgoulas-Sherry, V., Brawner, K., Mott, B., Baker, R., & Lester, J. (2017). Detecting and addressing frustration in a serious game for military training. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-017-0152-1.
Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and computational approaches (pp. 1–19). Oxford: Elsevier.
D'Mello, S., Lehman, B., Sullins, J., Daigle, R., Combs, R., Vogt, K., Perkins, L., & Graesser, A. (2010). A time for emoting: When affect-sensitivity is and isn’t effective at promoting deep learning. In J. Kay & V. Aleven (Eds.), Proceedings of the 10th International Conference on Intelligent Tutoring Systems (pp. 245–254). Berlin / Heidelberg: Springer.
Fletcher, J. D., & Sottilare, R. A. (2017). Shared mental models in support of adaptive instruction of collective tasks using GIFT. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-017-0147-y.
Forbes-Riley, K., & Litman, D. J. (2011). Benefits and challenges of real-time uncertainty detection and adaptation in a spoken dialogue computer tutor. Speech Communication, 53(9–10), 1115–1136. https://doi.org/10.1016/j.specom.2011.02.006.
Gagne, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). New York: Holt, Rinehart & Winston.
Gilbert, S., Slavina, A., Dorneich, M., Sinatra, A., Bonner, D., Johnston, J., Holub, J., MacAllister, A., & Winer, E. (2017). Creating a team tutor using GIFT. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-017-0151-2.
Goldberg, B., Amburn, C., Ragusa, C., & Chen, D. (2017). Modeling expert behavior in support of an adaptive psychomotor training environment: A marksmanship use case. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-017-0155-y.
Graesser, A. C., Rus, V., & Hu, X. (2017). Instruction based on tutoring. In R. E. Mayer & P. A. Alexander (Eds.), Handbook of research on learning and instruction (pp. 460–482). New York: Routledge Press.
Grawemeyer, B., Mavrikis, M., Holmes, W., Gutiérrez-Santos, S., Wiedmann, M., & Rummel, N. (2017). Affective learning: Improving engagement and enhancing learning with affect-aware feedback. User Modeling and User-Adapted Interaction, 27(1), 119–158.
Hanks, S., Pollack, M. E., & Cohen, P. R. (1993). Benchmarks, test beds, controlled experimentation, and the design of agent architectures. AI Magazine, 14(4), 17.
Johnson, R. T., & Johnson, D. W. (1986). Cooperative learning in the science classroom. Science and Children, 24, 31–32.
Johnson, D. W., & Johnson, R. T. (1999). What makes cooperative learning work. In D. Kluge, S. McGuire, D. Johnson, & R. Johnson (Eds.), JALT applied materials: Cooperative learning (pp. 23–36). Tokyo: Japan Association for Language Teaching.
Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798.
Kumar, R., Beuth, J., & Rosé, C. P. (2011). Conversational strategies that support idea generation productivity in groups. In Proceedings of the 9th International Computer Supported Collaborative Learning Conference, (Volume 1: Long Papers, pp. 398–405). Hong Kong, China: International Society of the Learning Sciences (ISLS).
Lepper, M. R., Drake, M., & O'Donnell-Johnson, T. M. (1997). Scaffolding techniques of expert human tutors. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning: Instructional approaches and issues (pp. 108–144). Northampton: Brookline Books.
Merrill, M. D. (1983). Component display theory. In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 279–333). Hillsdale: Lawrence Erlbaum Associates.
Munro, A., Johnson, M. C., Pizzini, Q. A., Surmon, D. S., Towne, D. M., & Wogulis, J. L. (1997). Authoring simulation-centered tutors with RIDES. International Journal of Artificial Intelligence in Education, 8(3–4), 284–316.
Murray, T. (1999). Authoring intelligent tutoring systems: An analysis of the state of the art. International Journal of Artificial Intelligence in Education (IJAIED), 10, 98–129.
Murray, T. (2003). An overview of intelligent tutoring system authoring tools: Updated analysis of the state of the art. In T. Murray, S. Blessing, & S. Ainsworth (Eds.), Authoring tools for advanced technology learning environments (pp. 491–544). Dordrecht: Springer Netherlands.
Olsen, J. K., Belenky, D. M., Aleven, V., Rummel, N., & Ringenberg, M. (2013). Authoring collaborative intelligent tutoring systems. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Proceedings of the artificial intelligence in education (AIED) conference. Heidelberg: Springer.
Person, N. K., Graesser, A. C., Kreuz, R. J., & Pomeroy, V. (2003). Simulating human tutor dialog moves in AutoTutor. International Journal of Artificial Intelligence in Education (IJAIED), 12, 23–39.
PISA (2017). Programme for international student assessment (PISA) 2015 collaborative problem solving framework. Organization for Economic Cooperation and Development. April 2017.
Renduchitnala, C., & Matthews, S. (2017). Intelligent tutor system for laboratory testing for febrile rash illness. In R. Sottilare (Ed.) Proceedings of the 5th Annual Generalized Intelligent Framework for Tutoring (GIFT) Users Symposium (GIFTSym5), (pp. 217–225). Orlando, FL: US Army Research Laboratory.
Salas, E. (2015). Team training essentials: A research-based guide. London: Routledge.
Santos, O. C. (2016). Training the body: The potential of AIED to support personalized motor skills learning. International Journal of Artificial Intelligence in Education, 26(2), 730–755.
Simpson, E. (1972). The classification of learning objectives in the psychomotor domain. Washington DC: Gryphon House.
Soller, A. (2001). Supporting social interaction in an intelligent collaborative learning system. International Journal of Artificial Intelligence in Education (IJAIED), 12, 40–62.
Sottilare, R. (2012). Considerations in the development of an ontology for a generalized intelligent framework for tutoring. International Defense & Homeland Security Simulation Workshop in Proceedings of the I3M Conference, (pp. 19–25). Vienna, Austria: DIME Universita di Genova.
Sottilare, R., & Gilbert, S. (2011). Considerations for adaptive tutoring within serious games. International Workshop on Authoring Cognitive Models and Game Interfaces at the International Conference on Artificial Intelligence in Education (AIED) 2011. Auckland, NZ: Defense Technical Information Center ADA558687.
Sottilare, R., & LaViola, J. (2015). Extending intelligent tutoring beyond the desktop to the psychomotor domain: A survey of smart glass technologies. In Proceedings of the Interservice/Industry Training Simulation & Education Conference 2015. Orlando, Florida: National Training and Simulation Association (NTSA).
Sottilare, R., Holden, H., Brawner, K., & Goldberg, B. (2011). Challenges and emerging concepts in the development of adaptive, computer-based tutoring Systems for Team Training. In Proceedings of the Interservice/Industry Training Simulation & Education Conference 2011. Orlando, Florida: National Training and Simulation Association (NTSA).
Sottilare, R. A., Brawner, K. W., Goldberg, B. S., & Holden, H. K. (2012). The generalized intelligent framework for tutoring (GIFT). Concept paper released as part of GIFT software documentation. Orlando: U.S. Army Research Laboratory – Human Research & Engineering Directorate (ARL-HRED). https://gifttutoring.org/attachments/152/GIFTDescription_0.pdf.
Sottilare, R., Ragusa, C., Hoffman, M., & Goldberg, B. (2013). Characterizing an adaptive tutoring learning effect chain for individual and team tutoring. In Proceedings of the Interservice/Industry Training Simulation & Education Conference 2013. Orlando, Florida: National Training and Simulation Association (NTSA).
Sottilare, R., Hackett, M., Pike, W., & LaViola, J. (2016). Adaptive instruction for medical training in the psychomotor domain. The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology. https://doi.org/10.1177/1548512916668680.
Sottilare, R., Brawner, K., Sinatra, A., & Johnston, J. (2017a). An updated concept for a generalized intelligent framework for tutoring (GIFT). Orlando: US Army Research Laboratory. May 2017. https://doi.org/10.13140/RG.2.2.12941.54244.
Sottilare, R. A., Burke, C. S., Salas, E., Sinatra, A. M., Johnston, J. H., & Gilbert, S. B. (2017b). Towards a design process for adaptive instruction of Teams: A Meta-Analysis. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-017-0146-z.
Walker, E., Rummel, N., & Koedinger, K. R. (2014). Adaptive intelligent support to improve peer tutoring in algebra. International Journal of Artificial Intelligence in Education, 24(1), 33–61.
Warta, S. F. (2017). Science is Zarked: An intelligent tutoring system for learning research methods. In R. Sottilare (Ed.) Proceedings of the 5th Annual Generalized Intelligent Framework for Tutoring (GIFT) Users Symposium (GIFTSym5), (pp. 205–215). Orlando, FL: US Army Research Laboratory.
Zachary, W., Cannon-Bowers, J. A., Bilazarian, P., Krecker, D. K., Lardieri, P. J., & Burns, J. (1998). The advanced embedded training system (AETS): An intelligent embedded tutoring system for tactical team training. International Journal of Artificial Intelligence in Education (IJAIED), 10, 257–277.
Acknowledgements
The research described herein has been sponsored by the U.S. Army Research Laboratory. The statements and opinions expressed in this article do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.
Finally, we thank the reviewers and the Associate Editors of IJAIED who took on the workload of managing papers from the Guest Editors and from other authors and organizations, thereby mitigating any real or perceived conflicts of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Sottilare, R.A., Baker, R.S., Graesser, A.C. et al. Special Issue on the Generalized Intelligent Framework for Tutoring (GIFT): Creating a Stable and Flexible Platform for Innovations in AIED Research. Int J Artif Intell Educ 28, 139–151 (2018). https://doi.org/10.1007/s40593-017-0149-9