
1 Introduction

In a typical, everyday driving task, a wealth of information is funneled to the driver's visual and auditory sensory channels. The driver safely maneuvers the car using the visual channel to steer, monitor the vehicle for alerts, observe hazards (e.g., lane departures, objects in the road, pedestrians, unsafe following distances), and read road signs. The auditory channel receives safety-relevant information such as horns, sirens, road noise (e.g., rumble strips), and vehicle noise. In addition, many drivers use the auditory channel for conversations and listening to the radio. Similarly, Soldiers in military environments receive a wealth of information via the visual and auditory channels. Beyond the driving task, hazard avoidance, and environmental noise, they receive visual and auditory communications about their environment. With such a large volume of visual and auditory stimuli, critical, mission-related information can easily be missed. In accordance with Multiple Resource Theory, information can be offloaded to another sensory channel to reduce cognitive workload (Wickens 2002). Such a channel must be sufficiently salient (i.e., have an adequate signal-to-noise ratio) to draw the driver's attention. Research has indicated that tactile stimuli can be used to provide critical information to Soldiers in vehicles (Carlander and Eriksson 2006; Krausmann and White 2008).

Research on and application of tactile displays in vehicles have been explored for decades. One area of interest is the use of tactile displays for collision avoidance systems (e.g., forward collision warning, lane departure warning, lane change/merge warning, and blind spot detection). Such systems employ tactors integrated into the seat, accelerator pedal, and seat belt. Research on collision avoidance shows significantly faster braking responses and larger safety margins when tactile warning systems are used (Ho et al. 2006). Comparing tactile warnings to visual and auditory warnings, Scott and Gray (2008) also found significantly shorter response times. In another investigation, tactile warnings resulted in better localization of crash threats (Fitch et al. 2007). While the above research in simulated driving environments did not specifically address tactile signal saliency, the ability to detect tactile signals could be an issue in real-world driving with vehicle motion and road noise. Krausmann and White (2008) addressed this specific issue and showed that participants were able to detect tactile signals on a ride motion simulator platform that simulated a Bradley Fighting Vehicle and a High Mobility Multipurpose Wheeled Vehicle traversing a cross-country course or gravel road.

The development of Next Generation Combat Vehicles (NGCV) is one of the Army's top modernization priorities. In support of this, the Combat Capabilities Development Command (CCDC) Army Research Laboratory (ARL) has created the Information for Mixed Squads (INFORMS) laboratory to study crew-agent interactions in a simulated environment. In this environment, up to 14 participants can control at least 6 manned and unmanned robotic vehicles. One of the roles of crew members is to manage robotic agents that aid in mission execution. The simulation environment allows experimenters to expose these manned vehicles and robotic agents to mobility hazards, traditional kinetic threats, and electromagnetic spectrum threats. To complete their missions, crew members must maintain situation awareness of the current status of robotic agents as well as potential hazards. With such critical information, crew members can make decisions about assisting the robotic agents and guiding them to safety. The situation awareness-based agent transparency (SAT) model is a framework for improving the crew's situation awareness and understanding of the robotic agents (Chen et al. 2014). Prior research using SAT model-inspired interfaces (Stowers et al. 2016) has generally presented transparency information on visual displays. Because the visual channel is used heavily in vehicle crew stations, managing robotic agents based on primarily visual information can be problematic, as the crew is easily overloaded by unimodal stimulation.

The Army has long been interested in using tactile displays to reduce cognitive workload and provide Soldiers the situation awareness required to successfully execute their missions. Past ARL research has shown that tactile communication and multimodal displays are effective for land navigation and for reducing the cognitive workload of dismounted Soldiers (Coovert et al. 2007; Elliott et al. 2007, 2010; Pettitt et al. 2006; White 2010). In addition, tactile cueing was found to benefit operators performing military and robotics tasks in multi-tasking environments (Chen and Terrence 2008, 2009). Based on the aforementioned research, we hypothesize that tactile displays will effectively provide information that improves crew situation awareness and understanding of robotic agents.

2 Methods

2.1 The INFORMS Laboratory

The INFORMS laboratory is one of the new CCDC ARL-funded research facilities designed to house concept development and prototyping of the NGCV Warfighter Machine Interface (WMI). The objective is to rapidly bring research concepts and best practices to our transition partners to improve the development of vehicle interfaces. Within the INFORMS laboratory, the WMI Manned-Unmanned Experimentation Laboratory, Simulation in the Loop (MEL-SIL) is being used as a testbed for research in human-autonomy teaming (see Fig. 1). The MEL-SIL comprises two 7-person NGCV crew station mockups. This setup allows ARL researchers to translate relevant research into demonstrable products that showcase novel human-machine interfaces, interactions, and teaming capabilities. We used the MEL-SIL of the INFORMS laboratory as the platform and experimental setup for integrating tactile displays to enhance crew situation awareness and understanding of the different components of the agent's actions and intents.

Fig. 1. A picture of the MEL-SIL setup. Each crew station is equipped with a driving simulation with 3 touch-screen monitors, each providing a different interface display. The left screen contains high-level map information for coordination and route planning; the middle screen provides a live view of the manned or unmanned vehicle driving environment and status; the right screen shows a 360° view from sensors mounted on the vehicle. The interfaces are modular, allowing crew members to configure them dynamically to suit the current task requirements.

2.2 Tactile System

For tactile cueing, we integrated a system developed by Engineering Acoustics, Inc. (EAI) that consists of a tactile belt and a control unit. The belt, worn around the torso, contains 8 EAI C-2 tactors and is driven by the control unit (Fig. 2). The C-2 tactor is designed with a primary resonance in the 200–300 Hz range, which coincides with the peak sensitivity of the Pacinian corpuscles, the skin's mechanoreceptors that sense vibration. This 8-tactor torso arrangement was initially designed to provide spatial information for navigation purposes. The 8 tactors are arranged in the belt at 45° intervals, which can be associated with the cardinal and intercardinal compass directions.
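To illustrate this spatial mapping, the following minimal sketch converts a relative bearing to the index of the nearest tactor. The zero-index-at-front convention and clockwise ordering are assumptions made for illustration, not the EAI system's actual interface.

```python
# Minimal sketch: map a relative bearing (degrees, 0 = directly ahead,
# increasing clockwise) to one of 8 tactors spaced at 45-degree intervals.
# The front-of-torso = tactor 0 convention is an assumption.

def bearing_to_tactor(bearing_deg: float, num_tactors: int = 8) -> int:
    """Return the index of the tactor closest to the given bearing."""
    spacing = 360.0 / num_tactors                 # 45 degrees for 8 tactors
    return round((bearing_deg % 360.0) / spacing) % num_tactors

# Example: a threat 100 degrees to the right maps to tactor 2 (right side);
# a bearing of 350 degrees wraps back to the front tactor.
assert bearing_to_tactor(100) == 2
assert bearing_to_tactor(350) == 0
```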

Fig. 2. A picture of the tactile belt and C-2 tactor used in the study. A schematic depicting the locations of the tactors on the belt is also included.

The tactile control unit is connected via a USB interface to the computer that runs the WMI. The WMI uses simulated on-vehicle sensors to gather information about the environment as well as the status of the robotic vehicle (e.g., vehicle location, vehicle status, threat location). This information is streamed live using the Lab Streaming Layer (LSL) to initiate tactile stimuli. LSL is a system for the unified collection of measurement time series in research experiments that handles networking, time synchronization, and (near-) real-time data access. The information obtained from the WMI is sent through LSL to the tactor control unit to activate specific tactile cues and messages.
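To make this data path concrete, the sketch below shows what such a bridge might look like using the pylsl Python bindings for LSL. The stream name, the string payload format, and the final tactor-activation call are illustrative assumptions; the actual WMI and EAI control interfaces are not specified at this level of detail here.

```python
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop

# --- WMI side: publish event markers (irregular rate, string payload). ---
# The stream name and the payload format are assumptions for illustration.
info = StreamInfo(name='WMI_Events', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string',
                  source_id='wmi_sim_01')
outlet = StreamOutlet(info)
outlet.push_sample(['threat_detected,bearing=100,level=1'])

# --- Tactor-bridge side: receive events and drive the control unit. ---
streams = resolve_byprop('name', 'WMI_Events', timeout=5.0)
inlet = StreamInlet(streams[0])
sample, timestamp = inlet.pull_sample()
event = dict(kv.split('=') for kv in sample[0].split(',')[1:])
# A hypothetical call into the EAI control unit's driver would go here, e.g.:
# tactor_unit.activate(bearing_to_tactor(float(event['bearing'])))
```

Because LSL timestamps every sample, the same stream that drives the tactors can later be replayed to reconstruct exactly when each cue was delivered relative to simulator events.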

2.3 Scenario and Tasks

The scenario used in the study involves a simulated reconnaissance mission in which the crew is tasked with operating a semi-autonomous robotic combat vehicle (RCV) through an environment spanning a gradient of complexity and urbanization while maintaining situation awareness. Along the RCV's route, the crew is also asked to mark targets by placing battle space objects (BSOs) at the appropriate map locations, to communicate, and to perform other tasks using the WMI. Different areas on the map are marked to indicate potential threats: kinetic threats, such as improvised explosive devices (IEDs), and electromagnetic threats, such as signal jammers (shown in Fig. 3). In addition, the crew does not operate the RCV in isolation. This is a team-based scenario in which other crew members work in the same environment and can communicate with one another. For instance, if crew member #2 places a BSO within the proximity of crew member #1, an alert is provided in the form of a tactile signal, depending on the urgency of the information.
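One possible form of this proximity rule is sketched below. The 500 m radius and the urgency-to-level mapping are placeholder assumptions, not parameters from the study design.

```python
import math

# Sketch of the proximity-based BSO alert rule described above.
# The alert radius and urgency mapping are assumptions for illustration.
ALERT_RADIUS_M = 500.0

def bso_alert_level(bso_pos, crew_pos, urgency):
    """Return a non-spatial alert level (1-3) if the BSO is nearby, else None."""
    dx, dy = bso_pos[0] - crew_pos[0], bso_pos[1] - crew_pos[1]
    if math.hypot(dx, dy) > ALERT_RADIUS_M:
        return None                     # too far away: no tactile alert
    return {'high': 1, 'medium': 2, 'low': 3}[urgency]

# Crew member #2 drops a high-urgency BSO 300 m from crew member #1:
print(bso_alert_level((300.0, 0.0), (0.0, 0.0), 'high'))   # -> 1
```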

Fig. 3. A picture showing a top-down view of the experiment map used as the simulated environment for the study. Yellow arrows point to different rally points along the driving route. Potential kinetic threats are marked as 'IED' and electromagnetic threats are marked as 'J' to indicate signal jammers. (Color figure online)

In conveying information about the environment and alerts to the crew, we categorize tactile information as spatial or non-spatial. Spatial cues convey directional information, such as the location of the RCV relative to potential threats or the direction of the next waypoint. Non-spatial information comprises notifications, alerts, and warnings, indicating for example that the RCV has encountered a mobility challenge or is about to enter a dangerous area. For spatial information, a single tactor is activated to convey the direction corresponding to the location of a target. For information that does not require a directional cue, a combination of tactors is activated simultaneously or in sequence to form a tactile pattern or message. Non-spatial tactile information has been shown to be effective and intuitive, requiring little to no training to recognize and memorize the signals when the number of tactile messages is fewer than five (Fitch et al. 2011).

Thus, instead of designing a tactile message for every possible non-spatial event that could be communicated to the crew, we group non-spatial information into three levels based on the attention required and the urgency of the information. Level 1 refers to the most critical warnings that require immediate attention (e.g., the RCV encountering a severe mobility challenge, entering a different mission-relevant area, or coming into close proximity to known threats in the environment); level 2 refers to intermediate warnings that require crew situation awareness (e.g., the RCV entering areas where threats are suspected); level 3 refers to general information that requires attention but poses no imminent threat to the mission or vehicle (e.g., RCV fuel low, incoming messages/communication). These three levels of non-spatial tactile information will be composed of vibration patterns that vary in stimulus duration, frequency, amplitude/intensity, inter-pulse interval, and the number of tactors activated. These tactile signal parameters will be designed and crowd-source tested for effectiveness and intuitiveness before implementation in the actual study.

The table below provides specific examples of the types of messages or cues to be displayed through tactile messages.

Information                        Type
--------------------------------   ----------------
Direction                          Spatial
RCV getting stuck                  Non-spatial (L1)
RCV entering suspected IED area    Non-spatial (L2)
Incoming message                   Non-spatial (L3)
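As a concrete illustration of how the three non-spatial levels might be parameterized, the sketch below defines one candidate pattern per level. All specific values (durations, intensities, tactor counts) are placeholder assumptions; the actual parameters will be determined through the crowd-sourced testing described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TactilePattern:
    """One candidate vibration pattern; all values are placeholders."""
    pulse_ms: int        # duration of each vibration pulse
    gap_ms: int          # inter-pulse interval
    repeats: int         # number of pulses in the message
    intensity: float     # normalized drive amplitude, 0.0-1.0
    num_tactors: int     # tactors activated simultaneously

# Urgency increases from level 3 to level 1: more tactors, stronger and
# faster pulses. These example values are assumptions, not tested settings.
NON_SPATIAL_PATTERNS = {
    1: TactilePattern(pulse_ms=250, gap_ms=100, repeats=4, intensity=1.0, num_tactors=8),
    2: TactilePattern(pulse_ms=250, gap_ms=250, repeats=3, intensity=0.7, num_tactors=4),
    3: TactilePattern(pulse_ms=500, gap_ms=500, repeats=1, intensity=0.5, num_tactors=2),
}
```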

2.4 Evaluation Metrics

Crew situation awareness and understanding of the RCV, as well as task performance, will be evaluated both qualitatively and quantitatively. For qualitative measures, SAGAT queries (Endsley 1998) will be used to assess crew situation awareness and understanding of RCV actions, intentions, goals, and general reasoning during the experiment. SAGAT queries are questions such as: Is the RCV experiencing mobility problems? Is the RCV near a danger zone? Where is the RCV's next major turn? Where and what is the next threat along the route? Participants will be queried both when completing the task with the tactile belt and when completing it without. Questionnaires on tactile display usability will also be administered at the end of the experiment. Eye tracking data will be used to determine the crew's ability to maintain 360-degree situation awareness and security over the RCV using its sensors. Crew performance on target marking will be evaluated with quantitative measures of accuracy and response time. Crew performance on RCV mobility will be quantified using the time between arrivals at rally points, total route completion time, and response time when the RCV requires mobility assistance.
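A minimal sketch of how the mobility metrics could be computed from a timestamped event log is shown below. The event names and log format are assumptions; in practice these timestamps would come from the LSL recordings described in Sect. 2.2.

```python
# Sketch of the quantitative mobility metrics, computed from a list of
# (timestamp_seconds, event_name) pairs. Event names are assumptions.

def rally_point_intervals(events):
    """Time elapsed between consecutive rally-point arrivals (seconds)."""
    arrivals = [t for t, name in events if name == 'rally_point_arrival']
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

def assistance_response_times(events):
    """Latency from each mobility-assistance request to the crew's response."""
    times, pending = [], None
    for t, name in events:
        if name == 'assistance_requested':
            pending = t
        elif name == 'assistance_given' and pending is not None:
            times.append(t - pending)
            pending = None
    return times

log = [(0.0, 'mission_start'), (95.2, 'rally_point_arrival'),
       (130.4, 'assistance_requested'), (141.9, 'assistance_given'),
       (210.7, 'rally_point_arrival')]
print(rally_point_intervals(log))        # approximately [115.5]
print(assistance_response_times(log))    # approximately [11.5]
```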

3 Summary

As part of the Army's modernization efforts, the NGCV will be crewed by humans responsible for coordinating complex maneuvers between multiple manned and unmanned vehicles. Crew members must have situation awareness and an understanding of the robotic agents in order to successfully execute their mission. The work described in this paper aims to ensure that crew members have such information via a tactile belt. Given the abundance of information provided to crew members' visual and auditory channels, which can induce cognitive overload, the tactile channel is being explored as a potential means of communication. To investigate the potential advantages of tactile displays in the NGCV, we will collect both quantitative and qualitative data. We hypothesize that the integration of tactile displays will enhance crew members' ability to manage robotic agents by improving their situation awareness and understanding of those agents. Findings from this work will be transitioned to our partners within the Army to inform the design of the crew interface in Next Generation Combat Vehicles.