Elements of Adaptive Instruction for Training and Education

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9744)


This paper discusses critical elements of adaptive instruction in support of training and education. Modeling and assessing learners and teams, optimizing adaptive instructional methods, applying domain modeling outside of traditional training and educational domains, automating authoring processes, and assessing the learning effect of instruction are among the challenges reviewed.


Keywords: Adaptive instruction · Intelligent tutoring systems · Learner modeling · Domain modeling · Authoring tools · Learning effect

1 Introduction

The goal of adaptive instruction is to tailor learning experiences to the capabilities and needs of each individual learner or team. Adaptive instructional systems, also known as intelligent tutoring systems (ITSs), alter their decisions, behaviors, and actions based on their recognition of changing states/traits of the learner or changing conditions in the environment [1]. These changes are usually managed by software-based agents that use machine learning techniques (e.g., Markov decision processes (MDPs), k-nearest neighbors, support vector machines) to optimize their responses (decisions and actions) [2, 3]. Adaptive instruction usually increases authoring requirements, some of which may not be completely defined at the outset, to support tailored learning experiences for the wide variety of learner states and traits. The following research goals are on the critical path to making adaptive instruction practical and affordable:

  • Understand and model the states and traits of individual learners and teams

  • Tailor adaptive instructional methods to optimize learning, performance, retention of knowledge and skill, and transfer of learning to other domains

  • Understand and model domains beyond the traditional well-defined ITS domains (e.g., mathematics and physics)

  • Develop replicable processes to assess the appropriateness of models beyond leveraging subject matter experts

  • Investigate and develop methods to automate authoring processes and thereby reduce the time and skill needed to develop ITSs

  • Investigate and develop methods to assess learning effect and thereby provide a mechanism with which to continuously improve adaptive instruction
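To make the agent-based optimization mentioned above concrete, the following sketch applies tabular Q-learning (one MDP-style technique of the kind cited) to a toy tutoring problem. The learner states, tutor actions, and reward values are illustrative assumptions, not drawn from the paper or from any fielded system.

```python
import random

# Hypothetical learner states and tutor actions (illustrative only).
STATES = ["confused", "engaged", "bored"]
ACTIONS = ["hint", "question", "encourage"]

# Toy reward model: assumed instructional benefit of each action in each state.
REWARD = {
    ("confused", "hint"): 1.0, ("confused", "question"): -0.5, ("confused", "encourage"): 0.2,
    ("engaged", "hint"): -0.2, ("engaged", "question"): 1.0,  ("engaged", "encourage"): 0.1,
    ("bored", "hint"): 0.0,    ("bored", "question"): 0.5,    ("bored", "encourage"): 1.0,
}

def q_learn(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over the toy state/action space."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = REWARD[(state, action)]
        next_state = rng.choice(STATES)  # toy transition: learner state drifts randomly
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = q_learn()
# The learned policy: the highest-valued tutor action for each learner state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

With deterministic rewards and uniform state transitions, the learned policy simply picks the highest-reward action in each state; a real ITS would learn from noisy observations and non-uniform transitions, but the update rule is the same.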

2 Modeling Individual Learners and Teams

The more a tutor understands about the learning habits, capabilities, and needs of the learner, the more efficient and effective that tutor will be in guiding learning experiences. Lepper et al. [4] identified the characteristics of an expert human tutor in their INSPIRE (intelligent, nurturant, Socratic, progressive, indirect, reflective, and encouraging) model. These characteristics were further explored in Lepper and Woolverton’s [5] study of highly effective tutors, with the goal of developing best tutoring practices. Expert tutors possess both subject matter knowledge and an understanding of the teaching strategies appropriate to the types of problems at hand. Capturing the traits and interpreting the states of the learner is key to modeling the learner’s habits, capabilities, and needs, and thereby critical to selecting the most effective learning strategies. Lepper and Woolverton found that highly skilled tutors manage two primary processes related to learning: engagement and motivation.

While there are several challenges in modeling individual learners and teams to support the goal of adaptive instruction, four areas of research are noteworthy: techniques to model individuals and teams (i.e., how to identify differences affecting learning and performance); the use of big data to support domain competency modeling of the learner; providing training at the point of need (i.e., tutoring anyplace and anytime, also known as “in the wild”); and the use of artificial intelligence (AI) techniques to support learner state classification (e.g., Bayesian classifiers, Markov decision processes) [6].

Each of these four areas of research prompts associated questions. What learner states and traits are needed for our learner model? What measurements are needed to assess domain competency? What methods are available to capture data when the training is provided at the point-of-need? What methods can be used to classify learner states based on captured data (Fig. 1)?
Fig. 1. Open questions related to individual and team modeling
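The last of these questions, classifying learner states from captured data, can be illustrated with a minimal naive Bayes classifier (one of the Bayesian techniques cited above). The features, feature values, and labels below are hypothetical placeholders, not measures proposed by the paper.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled observations: (discrete behavioral features, learner state).
data = [
    ({"response_time": "slow", "errors": "many"}, "confused"),
    ({"response_time": "slow", "errors": "few"},  "engaged"),
    ({"response_time": "fast", "errors": "few"},  "engaged"),
    ({"response_time": "fast", "errors": "many"}, "guessing"),
    ({"response_time": "slow", "errors": "many"}, "confused"),
    ({"response_time": "fast", "errors": "few"},  "engaged"),
]

def train(examples):
    """Count class priors and per-class feature-value frequencies."""
    priors = Counter(label for _, label in examples)
    likelihood = defaultdict(Counter)  # (label, feature) -> Counter of values
    for feats, label in examples:
        for f, v in feats.items():
            likelihood[(label, f)][v] += 1
    return priors, likelihood

def classify(feats, priors, likelihood, values_per_feature=2):
    """Return the maximum a posteriori learner state, with add-one smoothing."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, count in priors.items():
        lp = math.log(count / total)
        for f, v in feats.items():
            c = likelihood[(label, f)]
            lp += math.log((c[v] + 1) / (sum(c.values()) + values_per_feature))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

priors, likelihood = train(data)
state = classify({"response_time": "slow", "errors": "many"}, priors, likelihood)
print(state)  # confused
```

A deployed learner model would draw on far richer sensor and interaction data, but the same posterior computation underlies many state classifiers.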

3 Tailoring Adaptive Instructional Methods

The goal of adaptive instruction is to tailor training and educational experiences to match the learning capabilities and needs of each individual, and thereby reduce the amount of instruction required to reach a minimum level of competence in a given domain [7]. By evaluating the states and traits of the learner in real-time, ITSs can select an optimal strategy or plan for action (e.g., prompt the user for additional information) and thereby select an optimal tactic or tutor action (e.g., ask a specific question based on where the user is in the instruction). In the Generalized Intelligent Framework for Tutoring (GIFT), an open-source architecture for authoring, delivering, and evaluating ITSs [8], instructional decisions are driven by two processes: Merrill’s component display theory [9], and the learning effect model (LEM) [10].

There are several challenges associated with tailoring adaptive instruction, but five areas of research are noteworthy: determining the type and frequency of guidance and feedback provided by the tutor; understanding the effect of social dynamics in instruction; understanding the effect of metacognitive processes on self-regulated learning; optimizing the selection of instructional tactics; and the effect of personalization (occupational and non-cognitive factors) on learning, retention, and motivation [11].
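The two-level decision described above, a domain-independent strategy chosen from learner state and then a domain-specific tactic chosen from context, can be sketched as a simple lookup. The state, strategy, and tactic names here are illustrative assumptions, not GIFT's actual rule set.

```python
# Hypothetical strategy rules: learner performance state -> abstract strategy.
STRATEGY_RULES = {
    "below_expectation": "provide_support",
    "at_expectation": "prompt_for_more",
    "above_expectation": "increase_difficulty",
}

# Hypothetical tactic rules: (strategy, domain context) -> concrete tutor action.
TACTICS = {
    ("provide_support", "worked_example_available"): "show_worked_example",
    ("provide_support", "no_example"): "give_hint",
    ("prompt_for_more", "mid_lesson"): "ask_reflection_question",
    ("increase_difficulty", "more_problems"): "present_harder_problem",
}

def select_tactic(learner_state, context):
    strategy = STRATEGY_RULES[learner_state]  # domain-independent decision
    # Fall back to continuing instruction when no tactic matches the context.
    return TACTICS.get((strategy, context), "continue_instruction")

print(select_tactic("below_expectation", "worked_example_available"))
```

Separating the strategy table from the tactic table is what lets the same pedagogical rules be reused across domains; only the tactic table is domain-specific.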

4 Modeling Non-traditional Domains

Today, ITSs primarily represent well-defined, process-oriented domains such as mathematics and physics. The primary goal for domain modeling is to represent the diversity of domains in training and education. This means expanding assessment methods to allow measurement of key moderators of learning for a variety of tasks and conditions, well beyond common desktop training and education applications.

Again, several areas present challenges, but we have selected four that align with our primary goal: representing and understanding the influence of domain attributes; reducing the time, cost, and skill required to author and deliver complex instruction; improving the interoperability of domain models; and extending adaptive instruction to include fuzzy domains [12].

5 Automating Authoring Processes

Authoring or development costs are the most significant element in determining the affordability of adaptive instruction. The return on investment is clearer for high-density courses with many students, but much less so for courses with lower density. Processes are needed to make authoring affordable regardless of density [13]. Beyond affordability, improved authoring experiences can make adaptive instruction more enticing and engaging to the community, and thereby increase the use of, and buy-in to, ITSs.

Another major goal is to reduce the time and skill needed to author adaptive instruction so tailored instructional solutions can be available to the masses at an affordable cost, and so people with domain knowledge (but lacking programming skills and instruction design knowledge) can author them. Identifying candidate authoring tasks for automation can reduce the time users spend manually generating tutors and reduce the skill required to author ITSs.

The major challenges in automating authoring processes are: describing mental models and defining interaction paradigms for authoring (i.e., different mental models exist for different user groups, which alter how the system is used); identifying candidate processes for automated authoring (i.e., taking well-defined tasks and automating them to reduce workload); extending authoring capabilities to support integration with existing training and educational systems; and enabling collaborative authoring to match skills with authoring tasks [13]. If these challenges are met, authoring can become less focused on the system itself and more concentrated on the user and on paths to getting the authoring task completed.

6 Assessing Learning Effect

The major challenges in assessing learning effect during adaptive instructional events include the development of accurate assessment methods for measuring learning, performance, retention potential, and transfer potential across the wide spectrum of task domains (e.g., cognitive, affective, physical, social) and the varying spectrum of kinetic tasks (i.e., from static to limited kinetic to full kinetic) [7].
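One widely used measure of learning effect, offered here only as an illustration and not as the paper's own method, is Hake's normalized gain: the fraction of the possible improvement between pre-test and post-test that the learner actually realized.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized learning gain:
    (post - pre) / (max_score - pre), i.e. the share of the
    available headroom above the pre-test score that was achieved."""
    if pre >= max_score:
        return 0.0  # perfect pre-test: no room to improve
    return (post - pre) / (max_score - pre)

print(normalized_gain(40, 70))  # 0.5: half of the possible 60-point gain realized
```

Because it conditions on the pre-test score, this measure lets an ITS compare learning effect across students who started at different competency levels.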

7 Next Steps

Each version of GIFT has been focused on improving the interoperability of ITS components, capturing best instructional practices, and developing tools and methods to reduce the time and skill needed to author adaptive instruction. Future versions of GIFT are targeted to capture the results of training and educational research with the goal of bringing the state-of-the-art concepts to the state-of-practice. Of particular emphasis is the understanding and development of shared mental models to support adaptive instruction for teams [14, 15].



This research was sponsored by the U.S. Army Research Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


References

  1. Oppermann, R.: Adaptive User Support. Lawrence Erlbaum, Hillsdale (1994)
  2. Sottilare, R.: Making a case for machine perception of trainee affect to aid learning and performance in embedded virtual simulations. In: NATO Research Workshop (HFM-RWS-169) on Human Dimensions in Embedded Virtual Simulations, Orlando, Florida, October 2009
  3. Sottilare, R., Roessingh, J.: Exploring the application of intelligent agents in embedded virtual simulations (EVS). In: Final Report of the NATO Human Factors and Medicine Panel – Research Task Group (HFM-RTG-165) on Human Effectiveness in Embedded Virtual Simulation. NATO Research and Technology Office (2012)
  4. Lepper, M.R., Drake, M., O’Donnell-Johnson, T.M.: Scaffolding techniques of expert human tutors. In: Hogan, K., Pressley, M. (eds.) Scaffolding Student Learning: Instructional Approaches and Issues, pp. 108–144. Brookline Books, New York (1997)
  5. Lepper, M., Woolverton, M.: The wisdom of practice: lessons learned from the study of highly effective tutors. In: Aronson, J. (ed.) Improving Academic Achievement: Impact of Psychological Factors on Education, pp. 135–158. Academic Press, New York (2002)
  6. Goodwin, G., Johnston, J., Sottilare, R., Brawner, K., Sinatra, A., Graesser, A.: Individual Learner and Team Modeling for Adaptive Training and Education in Support of the US Army Learning Model: Research Outline. Army Research Laboratory (ARL-SR-0336), September 2015
  7. Sottilare, R., Sinatra, A., Boyce, M., Graesser, A.: Domain Modeling for Adaptive Training and Education in Support of the US Army Learning Model: Research Outline. Army Research Laboratory (ARL-SR-0325), June 2015
  8. Sottilare, R.A., Brawner, K.W., Goldberg, B.S., Holden, H.K.: The Generalized Intelligent Framework for Tutoring (GIFT). U.S. Army Research Laboratory – Human Research and Engineering Directorate (ARL-HRED), Orlando (2012)
  9. Merrill, M.D.: The Descriptive Component Display Theory. Educational Technology Publications, Englewood Cliffs (1994)
  10. Sottilare, R., Ragusa, C., Hoffman, M., Goldberg, B.: Characterizing an adaptive tutoring learning effect chain for individual and team tutoring. In: Proceedings of the Interservice/Industry Training Simulation and Education Conference, Orlando, Florida, December 2013
  11. Goldberg, B., Sinatra, A., Sottilare, R., Moss, J., Graesser, A.: Instructional Management for Adaptive Training and Education in Support of the US Army Learning Model: Research Outline. Army Research Laboratory (ARL-SR-0345), November 2015
  12. Fletcher, J., Sottilare, R.: Cost analysis for training and educational systems. In: Sottilare, R., Graesser, A., Hu, X., Goldberg, B. (eds.) Design Recommendations for Intelligent Tutoring Systems: Volume 2 – Instructional Management. Army Research Laboratory, Orlando (2014). ISBN 978-0-9893923-2-7
  13. Ososky, S., Sottilare, R., Brawner, K., Long, R., Graesser, A.: Authoring Tools and Methods for Adaptive Training and Education in Support of the US Army Learning Model: Research Outline. Army Research Laboratory (ARL-SR-0339), October 2015
  14. Johnston, J., Goodwin, G., Moss, J., Sottilare, R., Ososky, S., Cruz, D., Graesser, A.: Effectiveness Evaluation Tools and Methods for Adaptive Training and Education in Support of the US Army Learning Model: Research Outline. Army Research Laboratory (ARL-SR-0333), September 2015
  15. Fletcher, J.D., Sottilare, R.: Shared mental models of cognition for intelligent tutoring of teams. In: Sottilare, R., Graesser, A., Hu, X., Holden, H. (eds.) Design Recommendations for Intelligent Tutoring Systems: Volume 1 – Learner Modeling. Army Research Laboratory, Orlando (2013). ISBN 978-0-9893923-0-3

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. U.S. Army Research Laboratory, Orlando, USA
