Adaptive Instruction for Individual Learners Within the Generalized Intelligent Framework for Tutoring (GIFT)

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9744)


This paper discusses tools and methods needed to support adaptive instruction for individual learners within the Generalized Intelligent Framework for Tutoring (GIFT), an open-source architecture for authoring, adapting, and evaluating intelligent tutoring systems. Specifically, this paper reviews the learning effect model (LEM), which drives adaptive instruction within GIFT-based tutors. The original LEM was developed in 2012 and has been enhanced over time to represent a full range of functions encompassing both the learner’s and the tutor’s interactions and decisions. This paper proposes a set of 10 functions to enhance the scope and functionality of the LEM and to extend it to be a career-long model of adaptive instruction and competency.


Keywords: Adaptive instruction · Intelligent tutoring systems · Learning effect model

1 Introduction

Adaptive instruction is a critical concept in the partnership between self-regulated learners and computer-based intelligent tutoring systems (ITSs). Adaptive systems demonstrate intelligence by altering their behaviors and actions based on their recognition of changing conditions in either the user or the environment [1]. This change is usually managed by software-based agents that use artificial intelligence techniques to guide their decisions and actions [2, 3].

The ability of ITSs to enhance self-regulated learning (SRL) habits is an ongoing challenge in the ITS development community. Self-regulated learners develop skills that allow them to persist in pursuit of learning goals in the face of adversity. Question asking, hypothesis testing, and reflection are examples of good SRL attributes. To foster the development of these SRL attributes, the adaptive ITS must allow interaction with the learner to influence ITS courses of action. This is the basis of the learning effect model (LEM) [4] used to drive adaptive instruction in GIFT [5] today.

The original LEM [6] examined the relationship between the learner’s data (measures of behavior, physiology, and the learner’s input and interaction), the learner’s states (informed by the learner’s data), and the strategies/tactics (plans and actions) available to and selected by the ITS, but other processes are also important to extending the LEM into a long-term model of instruction. In other words, it is desirable for the model of instruction to extend beyond a single lesson and to be a career-long model of instruction and competency. If we were to develop a user interface (UI) to support interaction between the learner and the tutor, this UI might include some of the functions shown in Fig. 1 below. These functions are shown without respect to their order or the relationships between them and represent brainstorming of representative functions for our UI.
Fig. 1.

Functions in an adaptive user interface

Learning objectives define the learning and performance end-state after training/education. Concepts, the basis of GIFT’s instructional model, break down learning objectives into measurable elements. Learning events define the tasks or experiences needed to expose the learner to all the concepts identified in our learning objectives. Other critical elements of this UI are the conditions under which the tasks will be conducted during training and how they will be measured. Finally, the UI must determine how the tutor will respond to learner actions and changes in the instructional environment to optimize learning and performance.

So, what is missing from this model? As noted earlier, the functions in Fig. 1 are shown without respect to their sequencing, inputs or outputs. To develop a more comprehensive UI and to understand the effect of these functions on learning and performance, we must understand the ontology or relationship of these functions [6].

2 Functions of a Learning Effect Model

The latest version of the LEM is shown in Fig. 2. Note that both learning and performance are essential measures in the LEM, but learning is the key measure for determining the acquisition of knowledge and skill and the transfer of skills to new experiences. Performance is a behavioral indicator of the learner’s ability to apply skills to new experiences and tasks of increasing complexity over time.
Fig. 2.

10 functions in an adaptive instructional platform for individual learners

Each of the ten functions shown in the figure is discussed below in terms of its functional description, its relationships to other functions (e.g., inputs and outputs), and its impact or effect on learning and performance.

2.1 Function 1: Identifying Required Knowledge and Skills

Whether the learner is interacting with a human tutor or an ITS, a key first step is to identify what knowledge and skills the learner should acquire over time. The complexity of skills and the time needed to reach a level of competency in a particular domain should be considered. Domain knowledge and skills along with minimum standards are usually determined at the organizational level to support organizational competencies and goals. Input to this function includes the learning and performance history of the learner (see Function 10) which is compared to the required knowledge and skills to identify the output of this function: learning gaps.
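To make this comparison concrete, the gap identification described above can be sketched as a simple set difference between required skills and those the learner has already mastered. This is an illustrative sketch only; the skill names, record fields, and mastery thresholds below are hypothetical and not part of GIFT's actual data model.

```python
# Hypothetical sketch of Function 1: comparing a learner's history against
# required knowledge and skills to surface learning gaps.

def identify_learning_gaps(required_skills, learner_history):
    """Return the required skills the learner has not yet mastered."""
    mastered = {record["skill"] for record in learner_history
                if record["mastery"] >= record["standard"]}
    return sorted(set(required_skills) - mastered)

# Illustrative learning and performance history (see Function 10).
history = [
    {"skill": "map reading", "mastery": 0.85, "standard": 0.70},
    {"skill": "route planning", "mastery": 0.55, "standard": 0.70},
]
gaps = identify_learning_gaps(
    ["map reading", "route planning", "pace counting"], history)
# gaps -> ["pace counting", "route planning"]
```

The output of this sketch, the list of gaps, corresponds to the input that Function 2 consumes when deriving objectives.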

2.2 Function 2: Developing and Maintaining Learning and Performance Objectives

Learning gaps from Function 1, identifying required knowledge and skills, are used to determine learning and performance objectives for future learning events which are tailored for each individual learner. Learning and performance objectives act as a roadmap. Once we have identified where we want to go on the map, we can begin figuring out how to get to our destinations by using objectives to inform the scope and complexity of a series of tailored learning events.

2.3 Function 3: Crafting a Tailored Learning Event

Learning and performance objectives developed in Function 2 are used to identify a set of experiences (problems or scenarios) which can impart knowledge and develop/exercise skills. The tailored learning event represents what was called “context” in previous versions of the LEM, and influences the selection of specific actions or tactics by the tutor as discussed in Function 8. The tailored learning event also aids in identifying learning and performance measures.

2.4 Function 4: Identifying Learning and Performance Measures

Based on the tailored learning event, we must identify learning and performance measures which are used to compare the knowledge acquisition, skill acquisition, and performance during the tailored learning events to an expert model (highest standard) or standards of minimum performance. Measures are part of a larger dataset which we identify as learner data.
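The comparison of an observed measure against both an expert model and a minimum standard might be sketched as follows. The function name, normalization scheme, and numbers are hypothetical illustrations, not GIFT's actual assessment logic.

```python
# Illustrative sketch of Function 4: scoring a learning/performance measure
# against an expert model (highest standard) and a minimum standard.

def score_measure(observed, expert_value, minimum_standard):
    """Return a score normalized against the expert model, plus a pass flag."""
    score = min(observed / expert_value, 1.0) if expert_value else 0.0
    return {"score": score, "meets_standard": observed >= minimum_standard}

result = score_measure(observed=14, expert_value=20, minimum_standard=12)
# result -> {"score": 0.7, "meets_standard": True}
```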

2.5 Function 5: Capturing Learner Data

The learning and performance measures are usually captured via sensors which monitor learner behaviors or via learner responses to assessments, but learner data includes more than just learning and performance data. Learner data may also include behavioral and physiological data captured by sensors but unrelated to learning or performance measures. Learner data may also include achievement data or other trait data (e.g., preferences, interests, or values) which may be used to inform the learner’s states and the selection of an instructional strategy.

2.6 Function 6: Classifying Learner States

As noted above, learner data is used to classify learner states. Methods for classifying learner states vary, and the types of learner states include cognitive, affective, and physical states which may moderate learning and performance. Cognitive states include, but are not limited to, learning and performance states, engagement, and comprehension. Affective states include personality, mood, motivation, and emotions. Physical states include speed, accuracy, and stamina. As with learner data, learner states are also used to influence instructional strategy selection.
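A minimal classifier for one such state, engagement, might look like the sketch below. The feature names and thresholds are hypothetical; in practice, GIFT supports pluggable classification methods, and a fielded classifier would be trained rather than hand-thresholded.

```python
# Illustrative sketch of Function 6: classifying a cognitive state
# (engagement) from raw learner data (Function 5).

def classify_engagement(learner_data):
    """Map raw learner data to a coarse engagement state label."""
    idle = learner_data["seconds_since_last_input"]
    errors = learner_data["recent_error_rate"]
    if idle > 120:          # long idle period suggests disengagement
        return "disengaged"
    if errors > 0.5:        # frequent errors suggest the learner is struggling
        return "struggling"
    return "engaged"

state = classify_engagement(
    {"seconds_since_last_input": 15, "recent_error_rate": 0.2})
```

The resulting state label is one of the inputs that strategy selection (Function 7) consumes.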

2.7 Function 7: Optimizing Instructional Strategies

In the LEM, instructional strategies are plans for action by the tutor based only on the states and traits of the learner. In other words, the selection of an instructional strategy is independent of the instructional context, domain or learning event. It’s all about the learner. Examples of an instructional strategy include prompting the learner for more information, asking the learner to reflect on a recent learning event, asking the learner a question, providing feedback to the learner, or changing the challenge level or complexity of a problem or scenario to match the learner’s changing capabilities. The instructional strategy narrows the options for selection and implementation of specific instructional tactics.
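A toy strategy policy consistent with this description might be sketched as a lookup keyed only on learner states, with no domain or context argument at all. The state labels and strategy strings below are hypothetical illustrations.

```python
# Minimal, hypothetical policy for Function 7. Note that the mapping
# consults only learner states and traits -- never the domain, context,
# or learning event -- mirroring the domain-independence of strategies.

STRATEGY_POLICY = {
    ("struggling", "low_confidence"): "provide supportive feedback",
    ("struggling", "high_confidence"): "prompt learner to reflect",
    ("engaged", "high_confidence"): "increase challenge level",
}

def select_strategy(cognitive_state, affective_state):
    """Select a plan for tutor action from learner states alone."""
    return STRATEGY_POLICY.get((cognitive_state, affective_state),
                               "ask the learner a question")
```

The selected strategy then constrains the tactic options considered in Function 8.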

2.8 Function 8: Optimizing Instructional Tactics

Now that the selection of instructional strategy has been determined by the tutor, the selection of an appropriate action or instructional tactic can begin. General options for instructional tactics fall into two categories: interaction with the learner or interaction with the learning environment. Examples of these align with the examples provided in Function 7 (optimizing instructional strategies) and simplified interactions between the learner, the learning environment, and the tutor are illustrated in Fig. 3.
Fig. 3.

Interaction between the tutor, the learner and the instructional environment

If the instructional strategy selected was to “ask the learner to reflect on a recent learning event”, GIFT considers “context” in selecting an appropriate tactic. Context includes consideration of where the learner is in the learning event described in Function 3. More deeply, it also includes consideration of which quadrant (rule, example, recall, or practice) of Merrill’s Component Display Theory [7] is under instruction.

It should also be noted that the implementation of an instructional tactic may also influence the learner (see Function 5) and affect their cognitive, affective, or physical measures and thereby their associated states. For example, a tactic involving an increase to the challenge level of a scenario might surprise the learner and increase their mental workload resulting in physiological changes.
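In contrast to the strategy policy, a tactic selector takes context as an argument. The sketch below is hypothetical: the prompt wording and the mapping from Merrill quadrants to actions are illustrations of the idea, not GIFT's actual tactic library.

```python
# Hypothetical refinement of a strategy into a concrete tactic (Function 8).
# The tactic depends on context -- here, the Merrill Component Display
# Theory quadrant under instruction -- whereas the strategy did not.

def select_tactic(strategy, quadrant):
    """Pick a concrete tutor action for a strategy, given the CDT quadrant."""
    if strategy == "prompt learner to reflect":
        if quadrant == "practice":
            return "ask: 'Which step of the scenario gave you trouble?'"
        if quadrant == "example":
            return "ask: 'How does this example differ from the rule?'"
    if strategy == "increase challenge level":
        # Interaction with the learning environment rather than the learner.
        return "inject a complication into the scenario environment"
    return "present a hint in the tutor dialog"
```

Note how the two tactic categories from Fig. 3 appear here: the reflection prompts interact with the learner, while the scenario complication interacts with the learning environment.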

2.9 Function 9: Tracking Learning and Performance States in Real-Time

An important subset of classifying learner states (Function 6) is the process of tracking changes to learning (knowledge and skill acquisition) and the performance of tasks which are indicators of learning. As with other transient learner states (e.g., affect), learning and performance are informed by learner data (Function 5), which includes the learning and performance measures identified as part of Function 4. In GIFT, achievement or mastery of concepts is recorded in a long-term learner model via experience application program interface (xAPI) statements of achievement [8]. Achievements can be defined at various levels of granularity (e.g., course completion, concept mastery, or assignment completion). Over time, changes to learning and performance states indicate trends which form the basis for the long-term learner model of learning and performance discussed in Function 10.
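An achievement statement of the kind described above follows the xAPI specification's actor/verb/object pattern. The sketch below shows the general shape; the learner identity, activity URI, and timestamp are illustrative values, not ones GIFT emits.

```python
# Sketch of an xAPI-style achievement statement such as might be recorded
# in a long-term learner model (Function 9). Field values are illustrative;
# the structure follows the xAPI specification's actor/verb/object pattern.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/mastered",
             "display": {"en-US": "mastered"}},
    "object": {"id": "http://example.com/courses/land-nav/concepts/map-reading",
               "definition": {"name": {"en-US": "Map Reading"}}},
    "timestamp": "2016-01-15T10:30:00Z",
}
print(json.dumps(statement, indent=2))
```

Because each statement names a specific activity, granularity is controlled simply by what the `object` identifies: a course, a concept, or an assignment.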

2.10 Function 10: Logging Learning and Performance History

In Function 9, we periodically assess the progress of the learner toward learning and performance objectives and at a more granular level their progress in mastering specific concepts and performing specific tasks under a set of conditions to a specific standard (e.g., passing = 70 % correct). If we track learning and performance over time, we see trends including progress of the learner compared to standards, norms, or other learners. We may also be able to classify the domain competency of the learner relative to an upcoming training or educational experience. This can be used to identify tailored learning and performance objectives (Function 2) for an individual learner relative to the required knowledge and skills (Function 1) for a given training or educational domain.
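One simple way to turn a log of assessments into a trend is a least-squares slope over the recorded scores, as sketched below. This linear summary is just one plausible statistic for the trend analysis described above, not GIFT's actual method.

```python
# Hypothetical sketch of Function 10: summarizing logged assessment scores
# into a simple trend that could inform competency classification.

def performance_trend(scores):
    """Slope of a least-squares line through equally spaced scores."""
    n = len(scores)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

trend = performance_trend([0.55, 0.62, 0.70, 0.74])
# A positive slope indicates the learner is improving across learning events.
```

A positive trend relative to a standard could feed back into Function 2 as evidence that objectives should increase in scope or complexity.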

3 Next Steps

To date, both instructional strategies and tactics within GIFT are based on decision trees and have largely been developed based on best instructional practices identified in the literature. To a large degree, many of the best practices implemented are domain-independent, and the literature does not cover all of the conditions which might be encountered by the learner during an adaptive instructional event.

Experimentation is needed to identify domain-specific tactics and domain-independent strategies which deal with uncertainty related to classification of learner states in order to optimize learning and performance in real-time during instructional events. Ideally, some type of machine learning algorithm which provides reinforcement learning over time will allow GIFT to tailor and optimize learning with each individual learner it encounters. Software-based agents are needed to monitor the status of the 10 LEM functions described in this paper and to develop appropriate policies to govern tutor actions with the goal of optimizing learning and performance.

Finally, an enhanced LEM must be developed to support team tutoring or adaptive instruction for collaborative learning, team development, and cooperative, interdependent tasks.



Acknowledgments. This research was sponsored by the U.S. Army Research Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


References

1. Oppermann, R.: Adaptive User Support. Lawrence Erlbaum, Hillsdale (1994)
2. Sottilare, R.: Making a case for machine perception of trainee affect to aid learning and performance in embedded virtual simulations. In: NATO Research Workshop (HFM-RWS-169) on Human Dimensions in Embedded Virtual Simulations, Orlando, Florida, October 2009
3. Sottilare, R., Roessingh, J.: Exploring the application of intelligent agents in embedded virtual simulations (EVS). In: Final Report of the NATO Human Factors and Medicine Panel – Research Task Group (HFM-RTG-165) on Human Effectiveness in Embedded Virtual Simulation. NATO Research and Technology Office (2012)
4. Sottilare, R.: Fundamentals of adaptive intelligent tutoring systems for self-regulated learning. In: Proceedings of the Interservice/Industry Training Simulation and Education Conference, Orlando, Florida, December 2014
5. Sottilare, R.A., Brawner, K.W., Goldberg, B.S., Holden, H.K.: The Generalized Intelligent Framework for Tutoring (GIFT). U.S. Army Research Laboratory – Human Research and Engineering Directorate (ARL-HRED), Orlando (2012)
6. Sottilare, R.: Considerations in the development of an ontology for a generalized intelligent framework for tutoring. In: Proceedings of the I3M Conference International Defense and Homeland Security Simulation Workshop, Vienna, Austria, September 2012
7. Merrill, M.D.: The Descriptive Component Display Theory. Educational Technology Publications, Englewood Cliffs (1994)
8. Advanced Distributed Learning Initiative: xAPI Architecture Overview (2012). Accessed 11 Dec 2015

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

U.S. Army Research Laboratory, Orlando, USA
