
1 Introduction

Today, Intelligent Tutoring Systems (ITSs) are generally authored to support desktop training applications, with the most common domains being mathematics and physics. In recent years, implementations of ITSs using the Generalized Intelligent Framework for Tutoring (GIFT) [1, 2] have demonstrated adaptive tutoring techniques, strategies, and tactics for desktop training domains, but many training tasks require adaptive instruction beyond the desktop to be compatible with their physical nature [3]. This paper evaluates the interactions and capabilities needed to realize design and authoring support for mobile tutoring.

Opportunities to expand the capabilities of ITSs may rest beyond the desktop. Commercial products such as Google Glass offer a glimpse of possible trends in adaptive real-time tutoring beyond the desktop. This paper will evaluate how elements of commercial products like Google Glass might be used to support mobile adaptive tutoring, also known as tutoring on the run or tutoring in the wild. The tasks of interest are largely psychomotor tasks in Bloom’s taxonomy [4, 5], but may have elements that are cognitive [6], affective [7], and/or social [8]. Examples of psychomotor tasks include most sports, in which the learner trains over time to the point of automaticity. The learner must take in information about changing and static elements in the training or operational environment, quickly make decisions, and then take appropriate action(s). In orienteering, the navigator comes to an area with specific features, analyzes where they might be on the running course, and then determines the best option/direction to reach the next mark in the least amount of time. This part of the task is primarily cognitive and could be trained in a game-based environment. The part of the task which requires physical exertion, stamina, and decision making while under stress cannot always be duplicated in a virtual environment. In American football, the quarterback comes to the line of scrimmage, analyzes the defense, and then determines the best play to take advantage of the situation. The parts of this task which cannot be duplicated in a game-based or virtual environment are the pressure of the defense, the physical exertion of running multiple plays, and the need to release the ball before the receiver reaches the reception point on the field and to accurately place the ball where only the receiver can catch it. Finally, presence [9, 10] plays a large part in immersion and engagement during learning in real-world physical spaces and is difficult to replicate in virtual environments or computer games.

This paper discusses the capabilities and limitations of the Google Glass technology to support mobile tutoring, as well as recommendations for future capabilities. Google Glass (Fig. 1) is analyzed specifically with respect to its capability to support a group orienteering task. Google Glass is a commercial product that provides interactive exchange of information (e.g., text, alerts, and weather reports) and media (pictures, videos, and livestreaming) via WiFi or a cellular phone network.

Fig. 1. Google Glass product (left [11]) and capabilities (right [12])

Domain modeling will be introduced as a topic of discussion to project how Google Glass might optimally support the training of tasks in the cognitive, psychomotor, affective, and social/cultural domains. Three domain modeling dimensions will be evaluated, compared, and contrasted with desktop tutoring as it exists today: task dynamics, task definition, and task complexity, as shown in Fig. 2.

For our purposes, we have defined the dimensional dichotomies in Fig. 2 as follows (an illustrative encoding of these dimensions appears after the list):

Fig. 2. Representative dimensions of training domains

  • Simple tasks – tasks with relatively few steps and generally linear navigation of concepts from beginning to end (e.g., how to apply a tourniquet)

  • Complex tasks – tasks with many steps and substantial branching and parallel processes from beginning to end (e.g., how to evaluate the mental health of an employee)

  • Well-defined tasks – tasks with clear measures of success with generally one or few correct paths to success (e.g., how to calculate the area of a circle)

  • Ill-defined tasks – tasks without clear measures of success which may have a variety of paths to success (e.g., how to lead a team)
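
To make these dimensions concrete, the sketch below shows one way a training task such as group orienteering could be characterized along the three dimensions in Fig. 2. This is a minimal illustration in Python; the DomainProfile structure and its field names are our own and are not part of GIFT.

```python
from dataclasses import dataclass

@dataclass
class DomainProfile:
    """Illustrative encoding of the three domain-modeling dimensions."""
    dynamics: str    # "static" or "dynamic"
    definition: str  # "well-defined" or "ill-defined"
    complexity: str  # "simple" or "complex"

# Group orienteering as characterized in this paper: highly dynamic,
# with definition and complexity adjustable by the tutor as learner
# competency grows.
orienteering = DomainProfile(dynamics="dynamic",
                             definition="well-defined",
                             complexity="complex")
```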

2 Using Google Glass to Support a Psychomotor Task

Google Glass allows users to conduct hands-free interaction across five general functions. While its commercial value is in doubt [11] and its application as an augmented cognition (job aid) or entertainment platform may be clearer [12], its applicability to mobile tutoring tasks, and specifically to the group orienteering task, is just being imagined. Since orienteering is a physical activity, the dynamic aspect of this training task is considered high across the board. The task complexity and definition might be manipulated by the ITS based on the competency of the learner.

2.1 Receiving Reminders and Alerts

Google Glass is capable of receiving reminders and alerts. The reminder function could easily be adapted to deliver hints, prompts, and reflective prompts during key sequences in adaptive instructional experiences, while the alert function could be adapted to bring attention to issues of high importance during learning experiences. During orienteering training, GIFT could be used to drive reminders and alerts during key sequences based on location and variance from any planned route. Instead of saying “you are off your route and need to move south 300 yards”, feedback could be more reflective – “check your location; what features should be visible from your current position… what features are visible from your current position”. The degree of task complexity is thus managed by the ITS, and the level of support is commensurate with the ability of the learner.
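
As a sketch of how such location-based interventions might be driven, the Python fragment below selects between a directive alert and a reflective reminder based on the learner's deviation from the planned route and their competency. The thresholds, competency labels, and feedback text are illustrative assumptions, not GIFT behavior.

```python
import math

def route_deviation_m(learner_pos, planned_pos):
    """Straight-line deviation in meters between the learner's position
    and the nearest planned-route point (both given as (x, y) in meters)."""
    return math.hypot(learner_pos[0] - planned_pos[0],
                      learner_pos[1] - planned_pos[1])

def select_feedback(deviation_m, competency):
    """Reflective prompts for capable learners; directive alerts for
    novices or large deviations. Thresholds are illustrative."""
    if deviation_m < 50:
        return None  # on course; no intervention needed
    if competency == "novice" or deviation_m > 300:
        return "ALERT: You are off your planned route. Re-check your bearing."
    return ("REMINDER: Check your location. What features should be visible "
            "from your current position? What features are visible?")
```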

2.2 Navigation Functions

Google Glass offers pop-up maps, turn-by-turn directions, and a compass. These functions could be used to support navigation along a course, but might also be used to redirect the learner during training when they deviate significantly from the planned course. The compass function might be the most useful for the orienteering training task in that it provides information without direction, allowing the learner to make their own decisions. In terms of task complexity, very complex orienteering courses could be broken down into small segments to give novice orienteers the opportunity to realize frequent successes, with that scaffolding reduced as their skills grow.
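
One possible way to manage task complexity in this manner is sketched below: the planned course is split into legs sized to the learner's skill, and the set of navigation aids offered on each leg is reduced as skill grows. The leg lengths and competency labels are illustrative assumptions rather than GIFT parameters.

```python
def segment_course(checkpoints, competency):
    """Split an ordered list of checkpoints into legs sized to the
    learner's skill: short legs give novices frequent successes,
    longer legs stretch more experienced orienteers."""
    leg_length = {"novice": 2, "intermediate": 4, "expert": 8}[competency]
    return [checkpoints[i:i + leg_length + 1]
            for i in range(0, len(checkpoints) - 1, leg_length)]

def navigation_support(competency):
    """Navigation aids offered on each leg, faded as skill grows."""
    return {"novice": ["pop-up map", "turn-by-turn", "compass"],
            "intermediate": ["pop-up map", "compass"],
            "expert": ["compass"]}[competency]
```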

2.3 Augmented Labels of Real Objects

Google Glass also has the capability to provide augmented labels of real objects. This function could be used to aid navigation or other decision-making or problem-solving tasks and to provide hints or corrective action during training.

2.4 Ability to Share Media

Google Glass can take photos or videos, which could be used by the ITS to enhance its situational awareness of the learner – in other words, its understanding of where the learner is in the context of the orienteering course. In a team orienteering task, this function could be used to share information among team members and allow them to physically split up in pursuit of an objective. The livestream function allows the user to share a live point of view for analysis of performance by the ITS or to lead other team members to a location based on recognizable features.

2.5 Communication

Probably the most important function of Google Glass is the ability to communicate with the tutor or other team members through texts or shared screens in Google Hangout. Texts can be used to respond to the tutor or other learners in collaborative learning environments. Google Hangout could be used to support route planning or re-planning with team members. The tutor can capture this communication to determine levels of trust and cooperation within the team, and the communication data collected can be used to support after-action reviews and lessons learned.

3 Discussion and Next Steps

GIFT is readily compatible with the functions in Google Glass to support training in the psychomotor domain, and thereby mobile tutoring. The primary limitation of Google Glass with respect to mobile tutoring is the lack of sensors (e.g., heart rate, blood pressure) to inform critical learner states (e.g., physical exertion) in the learning effect model (Fig. 3). However, this learner data could be made available through other means (e.g., an iPhone blood pressure app) and transmitted via Google Glass to GIFT via the cloud.
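
A minimal sketch of that data path is shown below, assuming a hypothetical cloud gateway URL and message schema; GIFT's actual gateway interface and message formats may differ. A companion app packages the sensor readings and posts them toward GIFT.

```python
import json
import time
import urllib.request

# Hypothetical endpoint standing in for a GIFT cloud gateway.
GIFT_GATEWAY_URL = "https://example-gift-cloud/learner-state"

def relay_physiological_state(learner_id, heart_rate_bpm, blood_pressure):
    """Forward sensor readings gathered by a companion device (e.g., a
    phone app) toward GIFT. The message schema here is illustrative."""
    payload = {
        "learnerId": learner_id,
        "timestamp": time.time(),
        "heartRateBpm": heart_rate_bpm,
        "bloodPressure": blood_pressure,  # e.g., "128/82"
    }
    request = urllib.request.Request(
        GIFT_GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```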

Fig. 3. Learning effect model [1, 3]

Next steps are to evaluate a large cross-section of commercial smart glasses to determine which functions are needed to support a wide range of psychomotor tasks. Prototypes would follow, along with experimentation to provide empirical evidence of the effect size of these techniques. The testbed methodology used to support this evaluation (Fig. 4) allows for manipulation of learner attributes in the learner model, domain characteristics (e.g., host platform, domain complexity, domain definition, domain dynamics, and training environment conditions), and instructional strategies/tactics/techniques.

Fig. 4. GIFT effectiveness evaluation testbed methodology [3]

Google Glass with GIFT and some sensory augmentation should be able to support complex tasks, but ill-defined tasks may be more challenging because the measures for these tasks are, as the name suggests, less well defined. Constraint-based or policy-based approaches focused on the achievement of goals (go or no-go situations) may provide the best near-term opportunity to tutor in the wild.
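
As an illustration of a constraint-based, go/no-go assessment for an ill-defined task, the sketch below evaluates a set of goal-level constraints against the observed learner state; the constraint names and thresholds are illustrative assumptions.

```python
def assess_goal_constraints(state, constraints):
    """Constraint-based (go/no-go) assessment: the learner 'goes' only
    if every goal-level constraint holds for the observed state."""
    violations = [name for name, predicate in constraints.items()
                  if not predicate(state)]
    return ("go" if not violations else "no-go"), violations

# Example goal-level constraints for one orienteering leg.
leg_constraints = {
    "reached_checkpoint": lambda s: s["distance_to_mark_m"] < 25,
    "within_time_budget": lambda s: s["elapsed_s"] <= s["time_budget_s"],
}

result, missed = assess_goal_constraints(
    {"distance_to_mark_m": 12, "elapsed_s": 540, "time_budget_s": 600},
    leg_constraints)  # -> ("go", [])
```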

The testbed can be used to determine the highest reward values for each decision by the adaptive system given the learner state, task type, training or operational conditions under which the task is usually executed, and the definition of the measures or standards for successful completion of the task. Comparative studies are planned to determine the value (cost/benefit) of conducting a wide range of psychomotor tasks in the wild as contrasted with similar training experiences in desktop simulations. While we expect to see differences in performance, learning, and retention, we anticipate the largest effect will be in transfer, since the training task conditions will be closely aligned with the operational conditions under which the task is normally executed.
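
The decision logic implied by such a testbed could be as simple as a context-indexed reward table, as sketched below; the context keys, tactic names, and reward values are illustrative and would in practice be estimated from the comparative studies.

```python
def select_tactic(reward_table, learner_state, task_type, conditions):
    """Return the instructional tactic with the highest stored reward
    for the current (learner state, task type, conditions) context."""
    candidates = reward_table.get((learner_state, task_type, conditions), {})
    if not candidates:
        return "provide-hint"  # conservative default when no data exists
    return max(candidates, key=candidates.get)

# Hypothetical rewards estimated from testbed experimentation.
reward_table = {
    ("fatigued", "psychomotor", "field"): {
        "provide-hint": 0.42,
        "prompt-reflection": 0.61,
        "no-intervention": 0.15,
    },
}

select_tactic(reward_table, "fatigued", "psychomotor", "field")
# -> "prompt-reflection"
```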

Finally, as we project forward, one objective for augmented cognition on the run is to provide embedded training for dismounted soldiers with dynamic entities and real-time effects [13]. However, limitations not in the adaptive technologies (e.g., intelligent tutoring systems), but in the fields of view (<50°) of the variety of government-off-the-shelf and commercial smart glasses we surveyed [14], may slow adoption of adaptive technologies for embedded training.