1 Introduction

In recent years, there have been striking developments in wearable computing. This category includes all kinds of smart devices, such as smart watches, glasses and even ingested devices. Among the different forms of wearable devices, Head Mounted Displays (HMDs) are believed to be the first seamless way to provide workers with real-time contextual information and to allow companies to integrate with existing back-end systems. The hands-free operation of an HMD also offers advantages over many traditional technologies.

Generally speaking, a Head Mounted Display is a device worn on the head or as part of a helmet that has a small display in front of one or both of the user’s eyes. In this paper, HMD refers to devices directly attached to the head, excluding those worn on or embedded in a helmet [1].

Consulting and research groups believe that smart glasses will have a great impact on heavy industries such as manufacturing and oil and gas, where they can enable hands-free, on-the-job training in fixing equipment and performing manufacturing tasks [2]. The impact on mixed industries such as retail, consumer goods and healthcare, where the main benefit may be looking up information via visual search, is likely to be medium [3]. Other features such as voice commands and video calling also promise easy access to key information and convenient remote collaboration.

There is currently a lack of empirical evidence to support these claimed benefits. It is unclear whether potential benefits arise from individual design characteristics of HMDs. Even if one HMD system is shown to be better than current technologies, it is not known whether other HMD systems with different design characteristics would perform similarly. The design characteristics of HMDs include, but are not limited to, the display’s position, opacity and field of view. Without knowledge of how individual design attributes affect task outcomes, designers and developers cannot identify how to customize an HMD system to best match a specific task scenario.

This study explores some of these variables in a controlled set of guided repair and maintenance tasks. Common car maintenance tasks were used and performed in a realistic environment with procedures and preparations that are low-cost and easy to replicate. The goal is to better understand the implications of the attributes that are essential to Head-Mounted Displays, in particular the position of the display.

2 Related Literature

Smailagic &amp; Siewiorek [4] documented the results of US Marine engineers performing the Limited Technical Inspection (LTI) with VuMan 3, a wearable computer designed at Carnegie Mellon University. They reported a decrease of up to 40 % in inspection time compared to traditional paper handling and a reduction of total inspection/data-entry time by up to 70 %. However, the screenshot of the display shows that they simply moved the text checklist from paper to the HMD. There was no image of the equipment, no visual aid and no sign of task guidance. Therefore, the study does not prove that the HMD actually helped the engineers perform and complete the task. In a later work, Siegel &amp; Bauer conducted a field study comparing a wearable system with paper technical orders on two aircraft maintenance tasks. This time the wearable system provided task guidance and allowed more manipulation, but the specialists took on average 50 % more time to perform the tasks using the wearable system.

Ockerman &amp; Pritchett [5] investigated the capabilities of wearable computers using the procedural task of preflight aircraft inspection. They compared three methods: a text-based HMD system, a picture-based HMD system and the traditional memory-recall method. The results showed no statistically significant effect on fault detection rate, while videotape analysis showed that those who used the HMD systems overlooked items not mentioned on the computer at a higher rate than those who performed the same inspection by memory.

Weaver et al. [6], in their order-picking study, however, did find that an HMD with task guidance information led to significantly faster completion times and fewer errors than audio, text-based and graphical paper methods. Similar work by Guo et al. [7] also found an HMD better than an LED-indicating system. However, both studies were conducted in a layout optimized for the specific task, and because the complexity of the task was relatively low, it remains unclear whether the observed effects translate to other applications involving task guidance.

All of the studies mentioned above compared a single HMD technology to the status quo of the domain, and the HMD technologies differed greatly from one study to another. It is unclear whether the results would hold if the factors that differentiate the systems were teased out (for example, if the size and position of the display were held constant). It is even harder to tell which attributes of the HMD technology played the most important role in altering task performance compared to other methods.

This study aimed to investigate the effects of different display positions – a core factor of HMDs – on guided maintenance and repair tasks. Three HMD systems with nearly identical designs but different display locations were compared. Car repair and maintenance tasks of sufficient complexity were chosen, and the study was conducted outdoors in a realistic setting to resemble a real-life scenario.

3 State of the Art

In recent years many HMD systems have been designed and manufactured in relatively large volumes. These HMD systems are much smaller yet more powerful than the early prototypes which researchers developed for experimental purposes decades ago.

Among these HMD systems, some are specifically designed for industrial applications, such as the Golden-i headset and Vuzix M2000AR glasses. Other systems combine productivity and fashion, such as Google Glass and Recon Jet. However, recent trends show that even devices originally targeting consumer markets are being utilized by enterprises for service and maintenance [8]. For example, companies like APX Labs and Thalmic Labs have been working on wearable solutions to help enterprises improve efficiency and reduce cost in heavy and mixed industries using a combination of Google Glass, Epson glasses and the Myo armband.

There have been dozens of HMD devices with various input methods (voice control, hand-held control panel, touch pad, etc.) and output configurations (opaque vs. see-through, monocular vs. binocular, etc.), but there is a lack of evidence showing which HMD configuration provides the best results. As more and more companies realize the potential of smart glasses in industrial applications, there is a growing demand for empirical studies of the attributes of HMD systems.

4 Method

The focus of this study is the effect of display position on guided repair and maintenance. Car maintenance and repair tasks were used as they are easily accessible to the subjects, similar to many mechanical inspections, and frequently performed [9].

4.1 Conditions

Four different conditions were investigated in this study: three used HMD technologies and the fourth used a paper manual as a baseline for comparison. The three HMD conditions were operationalized via a customized display system. The system (Fig. 1) was composed of the display of an NTSC/PAL (television) video glass, a Raspberry Pi single-board computer, a modem providing an internal network connection, power supplies and 3D-printed housings for the other parts.

Fig. 1. Components for prototyping the test device

The core display device was mounted onto a headband worn by the user and could be adjusted to different angles and positions relative to the user’s right eye. This provided three display conditions (Figs. 2 and 3): above eye (display above the line of sight), eye-centered (display centered on the line of sight) and below eye (display below the line of sight).

Fig. 2. Three different experiment conditions

Fig. 3. A user wearing the test device in each of the three test configurations

For the three HMD conditions, participants used voice commands to navigate through the instructions: “Next” to advance one step, and “Previous” to go one step back. The image the user saw was mirrored onto a monitor next to the car, and a researcher, listening for the user’s commands, advanced or reversed the instructions from his end. For the paper condition, the same instructions were printed one per page in a booklet, and participants flipped the pages manually to navigate.
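The Wizard-of-Oz navigation described above can be sketched as a simple bounded step counter. This is a hypothetical illustration, not the authors' code: the class name, step texts and method names are assumptions.

```python
# Minimal sketch of the step navigation used in the HMD conditions:
# a facilitator listens for the participant's "Next"/"Previous" voice
# commands and advances the displayed instruction accordingly.

class InstructionNavigator:
    """Steps through a task's instruction pages, clamping at both ends."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.index = 0

    def current(self):
        return self.steps[self.index]

    def command(self, spoken):
        # "Next" advances one step; "Previous" goes one step back.
        if spoken.lower() == "next":
            self.index = min(self.index + 1, len(self.steps) - 1)
        elif spoken.lower() == "previous":
            self.index = max(self.index - 1, 0)
        return self.current()

# Example with three illustrative steps (step texts are assumptions).
nav = InstructionNavigator(["Locate the coolant reservoir",
                            "Check the coolant level",
                            "Close the hood"])
print(nav.command("Next"))      # advances to the second step
print(nav.command("Previous"))  # returns to the first step
```

Clamping at both ends mirrors the behavior of a paper booklet, where flipping past the first or last page has no effect.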

4.2 Tasks and Action Types

Eight tasks with instructions were performed by participants:

  • Task 1: Coolant. Participant checks the coolant level.

  • Task 2: Cabin Air Filter. Participant checks the condition of the cabin air filter contained inside a housing and changes it if necessary.

  • Task 3: Engine Oil. Participant checks if the oil level is sufficient using the engine oil dipstick.

  • Task 4: Center brake light check. Participant removes the middle brake light assembly and checks if it is burned out.

  • Task 5: Fuse (exterior). Participant pulls out a specific fuse from the exterior fuse box to see if it is blown.

  • Task 6: Washer Fluid. Participant checks the washer fluid level and adds fluid if necessary.

  • Task 7: Air Filter. Participant checks the condition of the engine air filter contained inside a housing and changes it if necessary.

  • Task 8: Headlight. Participant removes the right front light assembly and checks if it is burned out.

A training task was performed before the main tasks took place: participants were asked to open the hood using each test condition.

Each task was decomposed into individual action steps, and each step consisted of an actual photo of the test car and one simple sentence that novice users could understand. The instructions were screened and validated against the official car manual and online resources [10]. Although some previous work has also evaluated the interface design of HMD systems [11], that is not the focus of this paper.

Based on a task analysis and a review of previous research [12], all of the steps were classified into four action types: Read, Locate, Manipulate and Assess. Figure 4 shows an example of the interface design for the four action types. Locate involves visual search, typically performed to find a specific car component. Manipulate involves physical manipulation such as unscrewing, lifting and removing. Assess involves visually comparing what is seen in the real world with what is displayed or described on the screen, such as assessing the condition of a car component.

Fig. 4. Instruction examples of four action types: Read-Locate-Manipulate-Assess

The eight tasks were then grouped into four trials (Fig. 5) based on their estimated complexity (one relatively easy task paired with one relatively harder task). By the end of the experiment, each participant had performed all the tasks and experienced all the test conditions.

Fig. 5. Eight tasks were grouped into four trials; each participant performed one trial using one technology

4.3 Experimental Setup

The study was conducted during the day at an outdoor parking deck. The car used for the experiment was a 2007 Toyota Corolla. The tools necessary to complete all the tasks were handed to the participant when needed and consisted of paper towels, a screwdriver, a pair of pliers, and a bottle of washer fluid. Participants were also asked to put on a pair of gloves before performing the tasks.

Three facilitators were involved in each experimental session: the first introduced the procedure to the participant and oversaw the participant’s performance; the second operated a camera and videotaped the whole process; the third set up the HMD system and triggered the computer responses when participants gave voice commands during the tests.

Twenty participants were recruited for the study. The recruitment criteria required that all participants had at least 6 months of driving experience and currently owned a car, so that they were likely to have some knowledge of car repair and maintenance (not necessarily hands-on experience). All participants had normal or corrected-to-normal vision while conducting the experiment.

4.4 Procedures

Participants were distributed equally and at random amongst four groups. Every group performed the same sequence of trials but received a different sequence of experimental conditions (Table 1), so that by the end of the experiment every condition had been tested equally often on each task. Twenty participants ensured five in each condition sequence, which was sufficient to counterbalance potential order effects.

Table 1. Test groups and corresponding conditions for different Trials.
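A design in which each of the four conditions meets each of the four trials exactly once can be generated as a Latin square. This is an illustrative sketch of such an assignment, not necessarily the exact rotation the study used; note that a simple cyclic square balances condition-trial pairings but not first-order carryover effects.

```python
# Generate a 4x4 Latin square assigning conditions to (group, trial) cells:
# each row is one group's condition sequence across Trials 1-4, and every
# condition appears exactly once per group and once per trial.

CONDITIONS = ["Above eye", "Eye-centered", "Below eye", "Paper"]

def latin_square(items):
    n = len(items)
    # Cyclic rotation: group g performs item (g + t) mod n in trial t.
    return [[items[(g + t) % n] for t in range(n)] for g in range(n)]

square = latin_square(CONDITIONS)
for g, row in enumerate(square, start=1):
    print(f"Group {g}: " + ", ".join(row))
```

With five participants per group, each condition is tested by five people on each trial, matching the counterbalancing described above.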

An experimental session lasted 40 to 60 min per subject and consisted of three phases. In the first phase, a description of the study was given to the participant, informed consent was obtained, and a demographics questionnaire was administered covering basic information and experience with the tasks conducted in the experiment. In the second phase, four tests were performed, each with a different experimental condition. Each test consisted of an introduction to the condition, a practice task, a trial and a post-trial questionnaire. Subjects could take a short break between tests. In the third phase, the participant was asked to rank the four conditions just tested from most favorite to least favorite and to justify the rankings. Each participant received an honorarium of $10.00 in the form of an Amazon gift card.

4.5 Measures

Two kinds of measures were gathered: objective performance measures and subjective user experience measures. Objective measures included completion time and errors. Completion time is the time elapsed to complete a step (action). An error was recorded when a participant made a wrong assessment while performing an Assess action. Subjective user experience measures were gathered through the NASA-TLX survey and a user experience questionnaire.
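The objective measures can be computed from timestamped step logs. The following sketch assumes a hypothetical log format (the field names and values are illustrative, not the study's actual data): per-step completion time is the difference between end and start timestamps, and errors are counted only on Assess steps.

```python
# Compute mean completion time per action type and the error count
# from a per-step log (illustrative records, not real study data).

from collections import defaultdict

log = [
    {"action": "Locate",     "start": 0.0,  "end": 6.5,  "correct": True},
    {"action": "Manipulate", "start": 6.5,  "end": 21.0, "correct": True},
    {"action": "Assess",     "start": 21.0, "end": 27.5, "correct": False},
]

times = defaultdict(list)
errors = 0
for step in log:
    times[step["action"]].append(step["end"] - step["start"])
    # Errors are defined only for wrong assessments on Assess actions.
    if step["action"] == "Assess" and not step["correct"]:
        errors += 1

mean_times = {action: sum(ts) / len(ts) for action, ts in times.items()}
print(mean_times, errors)
```

Aggregating by action type rather than by whole task makes it possible to compare how display position affects Locate, Manipulate and Assess steps separately.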

5 Discussion

At the time of paper submission, half of the user testing had been completed and the data collected were not yet sufficient for analysis. The main contribution of this paper is a method to isolate the effects of key HMD design characteristics by controlling the other factors in the system – in this case the interaction method, mounting mechanism, display size and instruction design. Hence the effects of display position, task and action type on guided repair and maintenance work can be studied scientifically.

It is anticipated that once all the data are gathered and analyzed, the effect of display position on guided repair and maintenance can be identified. Whether these HMD systems outperform the traditional paper-based guidance method will also be evaluated.

Unexpected yet interesting findings have already emerged from the test results and user feedback so far. For example, our decision not to optimize the interface design and instead use simple still pictures and text was questioned, as almost all participants requested animated instructions for certain tasks, such as locating the brake light assembly. As Towne [13] pointed out, cognitive time can account for 50 percent of total task time in equipment fault-isolation tasks; we are curious to see whether adding animated instructions in a future study would produce a significant difference in completion time.