1 Introduction

xAPI (Experience API) [1] was initially established as a generic framework for storing human learners’ activities in both formal and informal learning environments. Each xAPI statement contains elements that record a learner’s behavior, answering the Who, Did, What, Where, Result, and When questions. For example, the current xAPI specification for Who is designed to uniquely identify the human learner (with tag name actor) by an identifier such as an email address. Likewise, the collection of verbs for Did (with tag name verb) consists entirely of human learner actions, such as attempted, learned, completed, and answered (see the list of xAPI verbs [2]).
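
To make the statement anatomy concrete, the following is a minimal xAPI statement written as a Python dict. The top-level field names (actor, verb, object, result, timestamp) follow the xAPI specification; the learner identity, IRIs, and activity shown here are illustrative only.

```python
# Minimal xAPI statement as a Python dict. Field names follow the xAPI
# specification; all identifiers and IRIs below are illustrative examples.
statement = {
    "actor": {                                 # Who: uniquely identifies the human learner
        "objectType": "Agent",
        "name": "John Doe",
        "mbox": "mailto:john.doe@example.com",
    },
    "verb": {                                  # Did: a human-learner action
        "id": "https://example.com/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {                                # What: the activity acted upon
        "id": "https://example.com/activities/quiz-1",
        "definition": {"name": {"en-US": "Quiz 1"}},
    },
    "result": {"success": True},               # Result
    "timestamp": "2021-03-01T12:00:00Z",       # When
}
```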

The xAPI specifications for actor and verb are appropriate and work well for traditional e-learning or distributed learning environments, where the learning content and resources are mostly static with limited types of interaction. For adaptive instructional systems (AIS) such as intelligent tutoring systems (ITS), however, the existing specifications for actor and verb may need to be extended. Existing ITS implementations use a computer to mimic a human tutor’s interactions with a human learner. To fully capture the mixed-initiative interaction between the human learner and anthropomorphic computer tutors, we propose to establish an xAPI profile for AIS in which the behaviors of both the human learners and the anthropomorphic AIS cast members are captured in the same learning record store. More importantly, the xAPI profile for AIS provides a standardized description of the behavior of anthropomorphic AIS.

In this paper, we will demonstrate the feasibility of such an xAPI profile for AIS by analyzing the AutoTutor Conversation Engine (ACE) for a conversation-based AIS called AutoTutor [3]. We will then extrapolate from this case to propose a general guideline for creating xAPI profiles for AIS. We argue that creating this kind of xAPI profile specifically for AIS constitutes a concrete and appropriate step toward establishing standards in AIS.

2 Adaptive Instructional Systems

Taken from the context of e-learning, the following definition of adaptivity fits well in the context of AIS:

Adaptation in the context of e-learning is about creating a learner experience that purposely adjusts to various conditions (e.g. personal characteristics and interests, instructional design knowledge, the learner interactions, the outcome of the actual learning processes, the available content, the similarity with peers) over a period of time with the intention of increasing success for some pre-defined criteria (e.g. effectiveness of e-learning: score, time, economical costs, user involvement and satisfaction).

Peter Van Rosmalen et al., 2006 [4]

In this definition, if “e-learning” is replaced with “learning”, we are looking at the entire educational system. In fact, the way we create curricula for different student populations (such as age and grades), build schools and learning communities, train teachers, and implement educational technologies all deliberately build toward having adaptive learning environments for students. When broadly defined, the concept and framework of AIS dates back over 40 years [5] and has been studied by generations of learning scientists [6,7,8].

3 A Four-Component Model of AIS

For the purpose of this paper, we take a minimalistic, behavioristic view of AIS that contains only four components in the following fashion:

The learners interact with the learning resources in a given learning environment following preset steps of learning processes.

The four components are learners, resources, environments, and processes. In most learning systems, human learners are at the center (learner-centered design; [9]). There are several types of resources in AIS: schools, classrooms, etc. are physical resources; teachers, librarians, etc. are human resources; static online content, audio/video, etc. are digital resources; and computer tutors such as AutoTutor constitute intelligent digital resources. Types of environments and processes are classified based on the learning theories implemented in the given AIS.

In the context of this paper, we specifically consider AIS that include intelligent digital resources. For example, a conversation-based ITS such as AutoTutor [10, 11] delivers learning in a constructive learning environment that engages learners in a process of expectation-misconception tailored dialog in natural language. It is reasonable to assume that most existing AISs with intelligent digital resources are created with certain guiding principles of learning.

4 Guiding Principles of Learning Systems

There have been numerous “learning principles” relevant to different levels of learning. For example, psychologists provide twenty key principles [12, 13] to answer five fundamental questions of teaching and learning (Appendix A). An IES report identified the following seven cognitive principles of learning and instruction (Appendix B) as being supported by scientific research with a sufficient amount of replication and generality [14]. Further, the 25 Learning Principles to Guide Pedagogy and the Design of Learning Environments [15] list what we know about learning and suggest how we can improve the teaching-learning interaction (Appendix C). The list provides details for each principle to foster understanding and guide implementation.

For example, Deep Questions (principle 18, Appendix C) indicates

“deep explanations of material and reasoning are elicited by questions such as why, how, what-if, and what-if-not, as opposed to shallow questions that require the learner to simply fill in missing words, such as who, what, where, and when. Training students to ask deep questions facilitates comprehension of material from text and classroom lectures. The learner gets into the mindset of having deeper standards of comprehension and the resulting representations are more elaborate.”

Graesser and Person, 1994 [16]

Specifically, questions can be categorized into 3 categories and 16 types (Appendix D). Nielsen [17] extended the above taxonomy with five question types (Appendix E).

5 Learning Science Extension of LOM

Since the early 2000s, the e-learning industry has benefited greatly from IEEE LOM [18] and the Advanced Distributed Learning (ADL) SCORM [19, 20]. More recently, additional effort has been devoted to enriching metadata for learning objects, for example through the creation of the Learning Resources Metadata Initiative (LRMI) [21]. Although IEEE LOM and LRMI focus almost exclusively on the learning objects (resources), LOM has placeholders for the other components, such as typicalAgeRange for learners, interactivityType for processes, and context for environments. However, researchers have started to notice its limitations as metadata for learning objects in AIS such as ITS, and it has been suggested that metadata for learning content should include Pedagogical Identifiers [22].
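
The mapping of these LOM placeholders onto the four-component model can be sketched as follows. The element names typicalAgeRange, interactivityType, and context are actual elements of LOM’s Educational category; the values and the record itself are illustrative.

```python
# Sketch: IEEE LOM placeholder elements read against the four AIS components.
# typicalAgeRange, interactivityType, and context are real LOM Educational
# elements; the sample values below are purely illustrative.
lom_record = {
    "general": {
        "title": "Basic Electronics Tutorial",  # the resource itself
    },
    "educational": {
        "typicalAgeRange": "12-15",    # placeholder relating to learners
        "interactivityType": "active", # placeholder relating to processes
        "context": "school",           # placeholder relating to environments
    },
}
```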

It is almost true by definition that all well-designed, effective AIS with intelligent digital resources are based on some aspects of learning science. For example, in AutoTutor, one can easily identify the use of learning principles:

  • (6) in Appendix A: Clear, explanatory and timely feedback to students is important for learning.

  • (7) in Appendix B: Ask deep explanatory questions.

  • (18) in Appendix C: Deep Questions.

For any specific AutoTutor module, expectation-misconception tailored dialog works best if deep and complex questions (type 3 in Appendix D) are asked as the main question to start the dialog.

Unfortunately, detailed documentation at the level of foundational learning science is rarely available in existing AIS applications. Only a few AIS implementations provide documentation for each of the four components (learners, resources, environments, and processes) at the level of learning science. Our proposed approach is to first extend the existing metadata standards (such as IEEE LOM) to include learning-science-relevant metadata for each of the four components, and then make recommendations to enhance the xAPI statement. As an intuitive approach, we have “extended” the IEEE LOM with a “learning science extension” which could include a specific set of learning principles and associated implementation details (see Appendix F).
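
As a hypothetical sketch of what such a learning science extension might carry, the following dict attaches the AutoTutor-relevant principles named above to a LOM record. The field names (learningScience, principles, implementation) are our invention, not part of any standard.

```python
# Hypothetical "learning science extension" payload for an IEEE LOM record.
# The structure and field names are illustrative, not standardized; the two
# principles listed are the ones identified for AutoTutor in this paper.
lom_learning_science_extension = {
    "learningScience": {
        "principles": [
            {
                "source": "Appendix A", "id": 6,
                "statement": "Clear, explanatory and timely feedback "
                             "to students is important for learning",
                "implementation": "Short feedback after each learner turn",
            },
            {
                "source": "Appendix C", "id": 18,
                "statement": "Deep Questions",
                "implementation": "Main questions use why/how/what-if forms",
            },
        ],
    }
}
```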

6 xAPI for AIS Behavior

Intuitively, xAPI is a way to record the process and result of the learner’s interaction with the learning resources in a given learning environment. Technically, xAPI is a “specification for learning technology that makes it possible to collect data about the wide range of experiences a person has (online and offline)” [23]. Functionally, xAPI

(1) lets applications share data about human performance; (2) lets you capture (big) data on human performance, along with associated instructional content or performance context information; (3) applies human (and machine) readable “activity streams” to tracking data and provides sub-APIs to access and store information about state and content; and (4) enables nearly dynamic tracking of activities from any platform or software system, from traditional Learning Management Systems (LMSs) to mobile devices, simulations, wearables, physical beacons, and more.

Experience xAPI - ADL Initiative [24]

The most important components of xAPI statements are actor, verb, and activity. xAPI has been learner centered: actor is always the learner, verb is an action of the learner, and activity is relatively flexible but includes only limited information about the learning environment and process.

As we have pointed out earlier, most AIS derive their design and functionality from learning science theory. For each interaction between the learner and the system, sophisticated computations govern progression, involving the other three components (resources, environments, and processes). For example, in AutoTutor, an expectation-misconception tailored dialog involves only one learner action (the input), but multiple processes and components of the AIS are involved. Some of the steps enumerated below (steps 1.3, 2.2, and 3.1) are based on theories of learning.

  1. Evaluate learner’s input. The result is a function of

     1.1. stored expectations,

     1.2. stored misconceptions,

     1.3. semantic space, etc.

  2. Construct feedback. The feedback is a function of

     2.1. the evaluation outcome,

     2.2. dialog rules, etc.

  3. Deliver feedback. The delivery of feedback is a function of

     3.1. the delivery agent (teacher agent or student agent),

     3.2. the type of delivery technology,

     3.3. the time (latency) of the feedback, etc.
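
The three-step cycle above can be sketched in Python. This is a toy illustration, not ACE’s actual implementation: the string-similarity function stands in for AutoTutor’s semantic-space match, and the threshold and feedback labels are invented for the sketch.

```python
# Toy sketch of the evaluate / construct / deliver cycle described above.
# similarity() is a crude stand-in for a semantic-space match score in [0, 1].
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Placeholder for a semantic-space comparison of two utterances."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(learner_input, expectations, misconceptions):
    """Step 1: score the input against stored expectations and misconceptions."""
    return {
        "expectation": max(similarity(learner_input, e) for e in expectations),
        "misconception": max(similarity(learner_input, m) for m in misconceptions),
    }

def construct_feedback(evaluation, threshold=0.66):
    """Step 2: apply a (greatly simplified) dialog rule to the evaluation."""
    if evaluation["misconception"] > threshold:
        return "correct_misconception"
    if evaluation["expectation"] > threshold:
        return "positive_feedback"
    return "hint"

def deliver_feedback(feedback_move, agent="tutor"):
    """Step 3: hand the feedback move to a delivery agent."""
    return {"agent": agent, "move": feedback_move}
```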

Unfortunately, when the learning record store (LRS) records learner behavior for such an AIS, only the input and the result (or feedback) are recorded, with no system behavior. To capture the holistic behavior of an AIS, we need to consider all system behavior. We therefore extend the current xAPI behavior data specification, such that

  • actor includes all components of the AIS. This includes not just the learner, but also digital resources such as an ITS.

  • verb includes actions of the AIS. This includes not just learner’s actions, but also the actions of the AIS.

  • activity includes the extended LOM that has detailed documentation of the AIS at the level of learning science.
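
Under this extension, a statement in which the intelligent digital resource acts on the learner might look as follows. The account form of actor identification is standard xAPI; the homePage and verb IRI are hypothetical, and the verb name is taken from the example trace discussed in this paper.

```python
# Sketch of an extended xAPI statement whose actor is the intelligent
# digital resource rather than the human learner. The "account" actor form
# is standard xAPI; the IRIs below are hypothetical.
ais_statement = {
    "actor": {                                  # the AIS component, not a person
        "objectType": "Agent",
        "account": {
            "homePage": "https://example.com/ais",
            "name": "TR_DR_Q2_Basics",
        },
    },
    "verb": {                                   # an AIS action, not a learner verb
        "id": "https://example.com/verbs/follow_rule_TutorHint",
        "display": {"en-US": "follow_rule_TutorHint"},
    },
    "object": {                                 # the learner the action targets
        "objectType": "Agent",
        "name": "John Doe",
        "mbox": "mailto:john.doe@example.com",
    },
}
```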

As an example, when we send system behavior to the LRS, the following is a sample list of statements from the LRS in reverse order, where statement #13 is the start of the tutoring. Here the human learner is John Doe, the intelligent digital resource is TR_DR_Q2_Basics, and Steve is the agent that represents the intelligent digital resource.

  1. John Doe listen Steve TR_DR_Q2_Basics on Hint/Prompt

  2. TR_DR_Q2_Basics follow_rule_StartExpectation John Doe

  3. TR_DR_Q2_Basics follow_rule_FB2MQMetaCog John Doe

  4. TR_DR_Q2_Basics Evaluate John Doe on TutoringPack Q1

  5. TR_DR_Q2_Basics follow_rule_TutorHint John Doe

  6. TR_DR_Q2_Basics transition John Doe on TutoringPack Q1

  7. John Doe answer Steve TR_DR_Q2_Basics on Main Question

  8. John Doe listen Steve TR_DR_Q2_Basics on Main Question

  9. TR_DR_Q2_Basics follow_rule_Start John Doe

  10. TR_DR_Q2_Basics follow_rule_StartTutoring John Doe

  11. TR_DR_Q2_Basics follow_rule_AskMQbyTutor John Doe

  12. TR_DR_Q2_Basics follow_rule_Opening John Doe

  13. TR_DR_Q2_Basics transition John Doe on Tutor Start

There are three actions for the learner (John Doe): he first listens to Steve (#8), then answers (#7) when the Main Question is delivered, and finally listens (#1) as Steve delivers hints. But there are multiple actions for the AIS: five actions (#13 to #9) to prepare the delivery of the Main Question, and five actions (#6 to #2) to prepare the delivery of the hint after John Doe answered the Main Question.
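
The split between learner and system actions in such a trace can be computed mechanically. The sketch below models each statement as a simple (actor, verb) pair taken from the sample trace above; a real analysis would of course read full xAPI statements from the LRS.

```python
# Sketch: separating learner actions from system actions in the sample trace.
# Each statement is reduced to an (actor, verb) pair; names match the example.
trace = [
    ("John Doe", "listen"),
    ("TR_DR_Q2_Basics", "follow_rule_StartExpectation"),
    ("TR_DR_Q2_Basics", "follow_rule_FB2MQMetaCog"),
    ("TR_DR_Q2_Basics", "Evaluate"),
    ("TR_DR_Q2_Basics", "follow_rule_TutorHint"),
    ("TR_DR_Q2_Basics", "transition"),
    ("John Doe", "answer"),
    ("John Doe", "listen"),
    ("TR_DR_Q2_Basics", "follow_rule_Start"),
    ("TR_DR_Q2_Basics", "follow_rule_StartTutoring"),
    ("TR_DR_Q2_Basics", "follow_rule_AskMQbyTutor"),
    ("TR_DR_Q2_Basics", "follow_rule_Opening"),
    ("TR_DR_Q2_Basics", "transition"),
]

# Partition the trace by actor: 3 learner actions vs. 10 AIS actions.
learner_actions = [verb for actor, verb in trace if actor == "John Doe"]
system_actions = [verb for actor, verb in trace if actor == "TR_DR_Q2_Basics"]
```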

7 Discussion

When we consider AIS within a simple four-component model that involves learners, resources, environments, and processes, we need a way to document the implementation in as much detail as possible. This is especially true when we assume an AIS with intelligent digital resources is built with the guidance of learning science. To do this, we propose to extend the IEEE LOM of the intelligent digital resource to include a learning science extension. This approach for AIS is much like the practice in health science, in which all medicine seeking approval from the Food and Drug Administration (even over-the-counter medicine) must include detailed information, such as chemical compounds, potential side effects, and the best way to administer the medicine. That information is for physicians and patients; the proposed learning science extension of LOM for the intelligent digital resources in AIS is for teachers and learners. Furthermore, when it is stored in the LRS, this information can be used for either post-hoc or real-time analysis of AIS.

In addition to the learning science extension of LOM for intelligent digital resources, we have also proposed to extend xAPI statements to include all behaviors of an AIS. More importantly, we need to make the intelligent digital resource a legitimate actor and record its actions together with standardized descriptions of environments and processes. When we have the learning science extension of LOM for intelligent digital resources and have the LRS store actions from an intelligent digital resource in the same way as a human learner’s actions, we are literally proposing a “symmetric” view of AIS: the human learner and the intelligent digital resource are interchangeable in the stored AIS behavior data (in xAPI). The only difference is that the actions of human learners are observed behavior, whereas the actions of intelligent digital resources are programmed.

8 Conclusions

We advocate a simple four-component model of AIS that includes learners, resources, environments, and processes. We consider this approach particularly valuable for AIS that include intelligent digital resources. We assume intelligent digital resources are created with the guidance of learning science (such as learning principles), and consequently we propose to use the learning science extension of IEEE LOM to document the intelligent digital resources as metadata. We further argue that behavior data records for AIS with intelligent digital resources should include the behaviors of the entire AIS, covering both the human learners and the intelligent digital resources. The proposed learning science extension of LOM for intelligent digital resources and the modified xAPI for AIS were tested in a conversation-based AIS where AutoTutor is the intelligent digital resource.