
Four Eyes See More Than Two: Shared Gaze in the Car

  • Sandra Trösterer
  • Magdalena Gärtner
  • Martin Wuchse
  • Bernhard Maurer
  • Axel Baumgartner
  • Alexander Meschtscherjakov
  • Manfred Tscheligi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9297)

Abstract

Purposeful collaboration of driver and front-seat passenger can help in demanding driving situations and therefore increase safety. The characteristics of the car, as a context, limit the collaboration possibilities of the driver and front-seat passenger, though. In this paper, we present an approach that supports successful collaboration of the driver and front-seat passenger with regard to the contextual specifics. By capturing the front-seat passenger’s gaze and visualizing it for the driver, we create a collaborative space for information sharing in the car. We present the results from a study investigating the potentials of the co-driver’s gaze as means to support the driver during a navigational task. Our results confirm that the co-driver’s gaze can serve as helpful means to support the collaboration of driver and front-seat passenger in terms of perceived distraction and workload of the driver.

Keywords

Driving · Navigation · Collaboration · Shared gaze · Eye-tracking

1 Introduction

Driving a car is a demanding activity, especially in situations where the driver has to pay an increasing amount of attention to the surroundings due to, e.g., heavy traffic, bad weather or street conditions, or unfamiliarity with the region. In such demanding situations, a front-seat passenger can become a helpful source of support for the driver by, e.g., additionally monitoring the scene, providing hints, or actively guiding the driver. For example, when navigating and driving in an unfamiliar region, a front-seat passenger who is familiar with the region can easily provide the driver with advice. Hence, a purposeful collaboration of the driver and the front-seat passenger can help the driver to focus on the driving task and maintain an appropriate and safe driving style (e.g., [7, 9]).

Nevertheless, the characteristics of the car as a context put limitations on this collaboration and on how information can be effectively shared between the front-seat passenger and driver. Verbal communication is easy, because the driver and front-seat passenger are located next to each other in the very same space. However, the fact that they sit side by side, with head and body not turned toward each other, and the necessity for the driver to attend to the driving task, i.e., to keep the eyes on the road, both hinder natural face-to-face communication in the car (beyond verbal communication). In everyday life, face-to-face communication allows us to share and communicate information more easily [21]. For example, making eye contact, following each other's gaze, monitoring what the partner is oriented toward, or showing points of interest by gesturing all support communication. In the car, these strategies are rather difficult and may even be dangerous in terms of driver distraction. Furthermore, due to the movement of the car, providing and sharing information between the driver and front-seat passenger is often more time-critical.

In our research, we aim at overcoming these potential disadvantages by providing a new way to share information between the front-seat passenger and the driver. Our approach is to capture the gaze of the front-seat passenger and visualize it for the driver. By showing the driver exactly where the front-seat passenger looks, we aim at providing a further means of communication between driver and front-seat passenger to enable a purposeful collaboration. We believe that this shared gaze approach could be helpful for the driver in any situation where the front-seat passenger becomes a supporter of the driver in his or her driving task. In the following, we will refer to the front-seat passenger who is actively supporting the driver in the driving task as “co-driver”. While we have already conducted an exploratory study [15] in order to validate the technical setup of our approach and identify possible future application scenarios, this paper focuses on the results of an experimental study. The main goal of the study was to explore the potentials and pitfalls of the approach, including aspects of its usefulness, caused workload, and perceived distraction of the driver.

In the following section, we review related work on gaze and collaboration in the automotive context. We then describe the shared gaze approach in detail and present the research goals we targeted in our study. After describing the method and results of the study, we discuss the benefits and pitfalls of our approach and give an outlook on future work.

2 Related Work

2.1 Collaboration in the Car

The car is a social space in which the driver often interacts with other passengers. Social aspects can be considered important influential factors when it comes to experiences occurring in the car (e.g., [10, 12]). For example, Gridling et al. [9] found that drivers and front-seat passengers often collaborate during navigational tasks, primarily when the output of the navigation system is misleading or confusing. They conclude that there is potential for in-car interfaces that support this collaboration. Bryden et al. [4] investigated how passengers assist the driver's navigation task and found that collaboration depended on how both passenger and driver perceived the driver's wayfinding abilities. They conclude that if passengers think they will be of assistance, they are more likely to help. Forlizzi et al. [7] claim that social aspects should be considered in future navigation interfaces and recommend that these interfaces support more interactivity in the timing and manner of information delivery. In addition, Gärtner et al. [8] report that drivers and passengers experience problems communicating effectively with each other in the car. Verbal navigation instructions in particular were mentioned as sometimes not efficient enough or ambiguous, e.g., when indicating a driving direction. Based on this study, the initial idea of using gaze as an additional means to support driver and front-seat passenger collaboration was born and realized as a first design sketch.

2.2 Gaze in the Car

When it comes to gaze in the car, most studies focus on the gaze of the driver, i.e., how s/he looks while driving, how s/he perceives information, or how visually distracted s/he is when performing a secondary task while driving (e.g., [5, 13]). Furthermore, eye movement research in the automotive area largely focuses on capturing the driver's eye movements in order to detect the driver's state in terms of, e.g., inattention (e.g., [6]), alertness or fatigue (e.g., [2]), or vigilance (e.g., [1]), while more recent research addresses the gaze of the driver as a means to interact with in-vehicle information systems (IVIS, [11]).

The gaze of the front-seat passenger and its potential as a further source of information for the driver has been mostly neglected. Moniri et al. [17] and Moniri and Müller [18], however, present a first approach in that direction. They claim that current IVIS neither offer car passengers any possibility to interact with the visible environment around the vehicle nor provide any information about the visible objects in sight. In their approach, they therefore pursue the idea that the front-seat passenger could use voice commands, such as “What is this building?”, while looking at an object of interest. They compared different pointing modalities (eye gaze, head pose, pointing gesture, camera view, and view field) and found that eye gaze was the most precise modality for picking an object of interest in the visible environment of the car.

2.3 Gaze in Collaboration

Although gaze as a means to support the collaboration of driver and co-driver has been neglected in the automotive domain, there is ample research on the use of gaze in remote collaboration (i.e., settings where natural face-to-face communication is hindered, which is quite comparable to the constraints of the car as a context). According to Brennan et al. [3], “collaboration has its benefits, but coordination has its costs” (p. 1465). They state that gaze can be a helpful means of reducing these costs. In their study of a collaborative visual search task, they found that shared gaze was twice as fast and efficient as solitary search, and even faster than shared gaze plus voice, because speaking incurred substantial coordination costs.

Neider et al. [19] specifically focus on spatial referencing in their work, i.e., “the communication and confirmation of an object’s location” (p. 718) when the collaborating partners are remotely located. They argue that collaborative human activities often require that people attain joint attention on an object of mutual interest in a timely manner. In their study, they found that spatial referencing times (for both remotely located partners to find and agree on targets) were faster in a shared gaze condition than with shared voice, which was primarily due to faster consensus. These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information.

3 The Shared Gaze Approach

The shared gaze approach pursues the idea of visualizing the front-seat passenger's gaze for the driver in order to support him/her in the driving task. The aim of our approach is to provide a new way of sharing information between the driver and co-driver by using the co-driver's gaze as a means to communicate spatial information more efficiently and precisely. We believe that shared gaze has potential for the automotive domain and that there are several possible application scenarios. For example, the gaze of the front-seat passenger could be used to help the driver in a navigational task by directly pointing to spatial reference points, or to provide information about street signs that the driver may overlook in complex driving situations. Furthermore, the gaze of the co-driver could be used to point out upcoming obstacles or dangers. Conversely, the gaze of the driver could be visualized for the co-driver, e.g., to indicate whether the driver has already perceived certain things such as hazards, landmarks, or signs. That is, we believe that our approach could be used unidirectionally as well as bidirectionally.

However, as already pointed out, in the automotive domain we also face the challenge that such a “visual aid” may add further distraction for the driver. Hence, we have to be careful that an expansion of the interaction space between driver and co-driver does not come with disadvantages. We therefore also need to carefully consider questions such as exactly how the visualization should be realized, and by whom and how it should be activated.

In order to study these questions, we built a prototypical implementation of the shared gaze approach in our driving simulation environment. Both the driver and co-driver sit in the driving simulator facing a driving simulation scenario, which is projected onto a screen in front of them. We capture the co-driver's gaze by means of an eye-tracking system and visualize it in real time as an overlay on the driving simulation projection. This enables both the driver and the co-driver to see where the co-driver is looking. Additionally, we implemented a switch for the co-driver to activate and de-activate the visualization. Figure 1 shows the setup as used in our study.
Fig. 1.

Left: Setup of the shared gaze approach: the driver sits behind the steering wheel; the co-driver sits on the front seat; the eye-tracker in front of the co-driver captures the co-driver's gaze, which is projected along with the driving simulation onto a screen in front of the car. The yellow arrow points at the visualization of the co-driver's gaze, a yellow dot. Right: Video streams from the eye-tracking cameras (top) and a model representing the co-driver's gaze (bottom) (Color figure online).
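To make the data flow of this setup concrete, the following sketch shows a minimal overlay loop in Python: gaze samples arrive from the tracker, and a toggle bound to the co-driver's switch decides whether the dot is drawn. The UDP transport, the packet layout (two floats for x and y), and all identifiers are assumptions made for illustration; the actual prototype is integrated with the driving simulation software.

```python
import socket
import struct

# Hypothetical UDP endpoint on which gaze samples arrive; the eye tracker can
# stream data over the network, but this packet layout (two little-endian
# floats: screen x and y) is an assumption made for this sketch.
TRACKER_ADDR = ("0.0.0.0", 5005)

class GazeOverlay:
    """Holds the latest co-driver gaze point and whether it should be drawn."""

    def __init__(self):
        self.visible = True          # permanently on in the "gaze" condition
        self.x = self.y = None

    def toggle(self):
        # Bound to the co-driver's physical switch in the "gaze activation"
        # condition: flips the visualization on or off.
        self.visible = not self.visible

    def update(self, x, y):
        self.x, self.y = x, y

def run(overlay):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(TRACKER_ADDR)
    while True:
        packet, _ = sock.recvfrom(8)          # one gaze sample per datagram
        x, y = struct.unpack("<ff", packet)
        overlay.update(x, y)
        if overlay.visible:
            # In the prototype the dot is rendered by the simulation software;
            # printing stands in for that drawing call here.
            print(f"draw 11x11 px yellow dot at ({x:.0f}, {y:.0f})")

if __name__ == "__main__":
    run(GazeOverlay())
```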

4 Research Goals

Our main aim is to show the potential of our approach for the automotive domain and to gain insights into how this new approach is generally perceived in terms of its usefulness by the driver and co-driver. Furthermore, we want to find out what impact the visualization of the co-driver's gaze has on the driver's driving performance and on the perceived distraction and workload of the driver and co-driver. We aim to answer the question of whether gaze-supported advice by the co-driver during a navigational task provides an advantage (i.e., better driving performance, less distraction, and less workload) compared to solely verbal advice by the co-driver or a solitary condition, in which the driver performs the task alone.

Based on the findings from related work [3, 7, 9, 17, 19], we assume that collaborative navigation should lead to better driving performance, less perceived visual distraction, and less workload for the driver compared to solitary navigation. Furthermore, shared-gaze collaborative navigation should lead to better driving performance, less distraction, and less workload for the driver compared to solely verbal collaborative navigation. In order to investigate these research questions, we set up and conducted an exploratory user study in our driving simulator. The methodological approach and the outcome of the study are described in the following sections.

5 Study Setup

5.1 Participants

In total, 34 subjects (17 driver/co-driver pairs) participated in our study. One pair was excluded from data analysis due to problems during the experiment. The remaining 16 pairs consisted of 9 male and 7 female pairs. The participants had a mean age of 30 years (SD = 6.85), with the youngest subject being 19 and the eldest 48 years old. The subjects were not familiar with each other, and the pairs were matched by gender and, where possible, by age in order to reduce stereotypical influences on the collaboration. All subjects spoke the same mother tongue and possessed a driving license. The mean mileage was 9,270 km per year. Seventeen subjects indicated that they usually drive with one passenger, six with two passengers, and three with more than two passengers. Except for one subject, all participants had used a navigation system at least once. Almost all participants had supported a driver during a navigational task in the car at least once (n = 29) or had been supported by a co-driver when driving (n = 21).

5.2 Experimental Design

The study was realized as a permuted within-subjects design with type of collaboration as the independent variable, consisting of four conditions: (1) solitary: the driver performs the navigational task alone while the co-driver sits idly in the front seat, i.e., no collaboration between driver and co-driver takes place; (2) verbal: the driver performs the navigational task based on verbal advice provided by the co-driver; (3) gaze: the co-driver provides verbal advice and his/her gaze is permanently visualized for the driver; and (4) gaze activation: the co-driver provides verbal advice and can decide when to show the driver his/her gaze (i.e., the co-driver switches the gaze visualization on or off). As dependent variables, we used driving performance, perceived workload, and perceived distraction of the driver. Furthermore, we were interested in the participants' general impressions of the different conditions.

In order to investigate our research questions in the driving simulator, we developed a navigational task that allowed us to easily induce ambiguity of spatial reference points and to compare the different conditions in a controlled way. The development of the task followed the basics of the Lane Change Task (LCT, [14]), a common method for evaluating driver distraction. For our purposes, we used a simulated track consisting of five lanes. On both sides of the track, different street signs were shown in random order (a maximum of 8 visible at once), containing either abstract, monochrome symbols (ambiguous information, i.e., difficult to describe) or street names (unambiguous information, i.e., easy to describe). The ambiguity of the signs was varied in order to mimic reality, where spatial reference points may likewise be more or less distinct to describe. For example, in reality it might be rather easy to refer to a street or exit sign, compared to referring to a certain building as the reference point where the driver should make a turn.

The main task for the driver was to change into a specific lane as soon as s/he reached a specific sign along the track while driving at a constant preset speed (60 km/h), i.e., the question of “where to go” in the real world was mapped to “which lane to change to” in the task. The information about the sign at which the lane change had to be performed was provided via a tablet, so that the navigation information could be presented to both the driver and the co-driver in the same way. On this tablet, the numbered lanes (1 to 5) and upcoming signs were visualized in plan view, and an arrow displayed next to a specific sign indicated the lane the driver had to change to. For example, as illustrated in Fig. 2, the driver would have had to change to lane 5 upon reaching the sign “Steubenstraße” positioned on the left side of the track. In the verbal, gaze, and gaze activation conditions, the co-driver held the tablet in his/her hands and had to guide the driver through the task based on the displayed information. In the solitary condition, the tablet was mounted in the center console and the driver had to perceive the shown information on his/her own. The information on the tablet was successively updated while driving along the route.
Fig. 2.

View in the driving simulator (left) and on the tablet (right). Left: Currently the driver is driving on the middle lane of 5 lanes, i.e., lane 3. Right: The upcoming signs (either abstract monochrome symbols or street names) are shown on the tablet and the white arrow indicates that the driver needs to change to lane 5 when reaching the sign “Steubenstraße”.

5.3 Materials and Apparatus

Driving Simulator.

The study was conducted in our driving simulator lab, consisting of a sitting area for participants and a car mockup situated in a projected environment (see Fig. 1). The driving simulation was realized with the software OpenDS, and a data projector visualized the simulation on a 3.28 × 1.85 m screen at a resolution of 1920 × 1080 pixels.

Task Specifications.

In total, eight different tracks were set up in OpenDS: two tracks for baseline drives, two short tracks for practicing, and four main tracks, which were assigned to the different conditions in permuted order for each pair of subjects. For the baseline drives, the lane-change information was displayed directly on the signs. In the main tracks, 16 groups of signs with eight signs per group (i.e., 128 signs in total) were shown along the five-lane road. Within each group of signs, one lane change had to be performed, i.e., 16 lane changes in total. The lane change had to be performed at different positions within the sign group (e.g., at the first, third, or eighth sign), which was also permuted. Moreover, the signs of each group were randomly arranged on the left and right sides of the track. A trigger point was set randomly 10–20 m after the last sign of each group. When the driver passed this trigger, the view on the tablet was updated and showed the next sign group, which was 140–150 m away from that position. The distance between signs varied between 10 and 20 m. The short tracks were set up similarly, but only four lane changes (within four sign groups) had to be made. As regards the signs, we had a pool of eight abstract and 16 street-name signs.
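As an illustration of these specifications, the following Python sketch generates one possible track layout under the constraints above. It is not the generator used in the study (where, e.g., the lane-change positions were permuted rather than drawn at random); sign names and the data layout are placeholders.

```python
import random

ABSTRACT_SIGNS = [f"symbol_{i}" for i in range(8)]   # pool of 8 abstract signs
STREET_SIGNS = [f"street_{i}" for i in range(16)]    # pool of 16 street-name signs

def build_sign_group(rng):
    """One group of eight signs; exactly one sign carries the lane change."""
    names = rng.sample(ABSTRACT_SIGNS + STREET_SIGNS, 8)
    offset, signs = 0.0, []
    for name in names:
        offset += rng.uniform(10, 20)                # 10-20 m between signs
        signs.append({"sign": name,
                      "side": rng.choice(["left", "right"]),
                      "offset_m": round(offset, 1)})
    return {
        "signs": signs,
        "change_at_sign": rng.randrange(8),          # which sign triggers the change
        "target_lane": rng.randint(1, 5),
        # tablet-update trigger 10-20 m after the last sign; the next group
        # would then begin 140-150 m further down the track (not modeled here)
        "trigger_offset_m": round(offset + rng.uniform(10, 20), 1),
    }

def build_track(seed=0, n_groups=16):
    rng = random.Random(seed)                        # reproducible layout per track
    return [build_sign_group(rng) for _ in range(n_groups)]
```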

Eye Tracking and Gaze Visualization.

For capturing the co-driver's gaze, we used the remote 60 Hz SmartEye Pro system. Three cameras were installed in front of the co-driver in order to properly capture head and eye movements (see Fig. 1). To achieve the highest possible gaze accuracy, we used a 15-point calibration template that was shown on the screen during calibration. The x and y coordinates of the co-driver's gaze were captured in real time and sent to the driving simulator software, where the gaze was visualized as a yellow dot with a size of 11 × 11 pixels. Beforehand, a low-pass filter was applied to the eye-tracking data in order to smooth the visualization of the gaze.
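The paper specifies only that a low-pass filter was applied; one common choice for smoothing a 60 Hz gaze signal is a first-order exponential filter, sketched below. The smoothing factor alpha is an assumed value, not the parameter used in the prototype.

```python
class GazeSmoother:
    """First-order (exponential) low-pass filter for 60 Hz gaze samples."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha           # smoothing factor, an assumed value
        self._x = self._y = None

    def filter(self, x, y):
        if self._x is None:          # first sample initializes the filter state
            self._x, self._y = float(x), float(y)
        else:
            # Move a fraction alpha of the way toward each new sample.
            self._x += self.alpha * (x - self._x)
            self._y += self.alpha * (y - self._y)
        return self._x, self._y
```

A small alpha suppresses jitter from tracker noise and microsaccades but makes the dot trail behind fast gaze shifts; the actual filter parameters of the prototype are not reported.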

Measurement Instruments.

In order to capture the drivers' driving performance, their driving data was logged by the simulator software. In the gaze activation condition, the times at which the gaze was activated were automatically logged as well. Perceived workload was captured using the Driving Activity Load Index (DALI, [20]). We chose this questionnaire because it is tailored to the driving task and allows capturing different dimensions of workload. We further generated several questionnaire items to capture perceived distraction and further impressions, and a pre-questionnaire and an interview guideline for a semi-structured final interview were set up in paper form.

5.4 Procedure

During the whole investigation, two experimenters were present. One experimenter guided the subjects through the procedure, while the other was responsible for the technical supervision.

Subjects were welcomed, introduced to each other, and seated at a desk. After being given some general information about the study, they were asked to sign an informed consent form and to fill in the pre-questionnaire. The first experimenter then allotted the roles of driver and co-driver and accompanied the participants to the driving simulator, where they took their places according to their roles. Participants could then adjust their seats and were asked to buckle up, and the proper acquisition of the co-driver's head and eyes by the eye-tracking system was ensured. To allow the driver to get familiar with the driving simulation, s/he was asked to perform a practice drive first. Once s/he felt comfortable with the driving task, s/he was asked to drive again for a defined interval (first baseline drive). After that, the eye-tracking calibration of the co-driver was conducted.

Then the first experimenter, who sat in the back of the car for the duration of the experiment, provided general information about the navigational task to the driver and co-driver and instructed them according to the upcoming condition. Conditions were permuted for each pair of subjects. While the driver was instructed to perform the lane change once s/he had reached the sign, the co-driver was instructed to provide the information about the sign at which the driver had to change lanes as far ahead and as concretely as possible. They were also allowed to exchange opinions about how to perform the task together. For each condition, they then had the opportunity to do a practice drive to get familiar with the condition, which could be repeated if necessary. Then the main navigational task followed, and when it was finished, driver and co-driver were asked to fill in questionnaires tailored to the condition. This procedure was repeated for each condition. The solitary condition differed slightly: the co-driver was instructed not to assist the driver, and only the driver had to fill in the questionnaires afterwards. After all conditions were finished, the driver was asked to perform the second baseline drive. Finally, both participants were accompanied back to the desk, where a final semi-structured interview was conducted with both of them together. At the end, they were each compensated with 20 Euro and thanked for their participation. Overall, the experiment lasted about 90 min.

6 Results

All questionnaire data were preprocessed and analyzed using IBM SPSS Statistics 20. For analyzing the driving performance data, we developed a tool to visualize the data and an algorithm allowing us to determine total and mean lane deviation, as well as the standard deviation and variance of lane deviation. The data gained from the interviews were transcribed, and Microsoft Excel was used to further categorize and analyze the participants' comments according to the basics of qualitative content analysis as introduced by Mayring [16].
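The algorithm itself is not given in the paper; for illustration, the four reported lane deviation measures can be computed as sketched below, under the assumption that the log provides the driven lane position and the optimal lane at matching sample points. All names are ours.

```python
import statistics

def lane_deviation_metrics(driven, optimal):
    """Compute the four reported measures from two equally long sequences of
    lane positions sampled at matching points along the track."""
    deviations = [abs(d - o) for d, o in zip(driven, optimal)]
    return {
        "total": sum(deviations),
        "mean": statistics.mean(deviations),
        "stdev": statistics.stdev(deviations),
        "variance": statistics.variance(deviations),
    }

# Toy example: the driver drifts around the optimal lane 3.
print(lane_deviation_metrics([3.0, 3.4, 2.8, 3.1], [3, 3, 3, 3]))
```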

6.1 Driving Performance

We first took a look at the performance of the drivers in the different conditions. In order to increase the comparability of the data, we related the driving performance of each driver to his/her performance during the baseline drives. That is, we calculated the differences between the lane deviation in the respective condition and the lane deviation in the baseline (i.e., the mean of the first and second baseline drives). A repeated measures ANOVA was calculated to compare the conditions. Contrary to our initial assumption that the collaborative conditions would lead to less lane deviation than the solitary condition, we could not find any significant differences between the conditions, neither for the mean lane deviation (F(3;45) = 2.000, n.s.) nor for the total lane deviation (F(3;45) = 1.984, n.s.), i.e., the driving performance was comparable across all conditions.
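As a sketch of this analysis step, a repeated measures ANOVA on baseline-corrected lane deviations can be computed as follows. The study used SPSS; the statsmodels call and the data shown here (three of the 16 drivers, with invented values) are purely illustrative.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

conditions = ["solitary", "verbal", "gaze", "gaze_activation"]
# Hypothetical baseline-corrected mean lane deviations, one value per
# driver and condition.
df = pd.DataFrame({
    "driver":    [d for d in (1, 2, 3) for _ in conditions],
    "condition": conditions * 3,
    "lane_dev":  [0.12, 0.15, 0.10, 0.18,
                  0.09, 0.11, 0.08, 0.14,
                  0.14, 0.13, 0.11, 0.16],
})

# Repeated measures ANOVA with condition as the within-subjects factor.
print(AnovaRM(df, depvar="lane_dev", subject="driver",
              within=["condition"]).fit())
```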

We further visually examined the driving data and found that various errors had occurred, primarily during the collaborative conditions, leading to high lane deviations at certain points in time (see Fig. 3). Basically, we could identify three kinds of errors: error type A, the driver changed to a wrong lane but corrected the error before approaching the next lane change; error type B, the driver changed to a wrong lane without correcting it; and error type C, the driver missed the lane change. Thirteen subject pairs made at least one error during the study. We found that most errors happened during the gaze activation condition (12), nine errors were made in the verbal condition, five in the gaze condition, and only two in the solitary condition. Hence, errors occurred more often in the collaborative conditions; still, the lane deviation in the almost error-free solitary condition was comparable.
Fig. 3.

Examples for lane change errors (the optimal lane is red, the driven lane is blue) (Color figure online)

6.2 Perceived Workload

In order to capture the perceived workload, we asked the drivers to fill in the DALI questionnaire [20]. Drivers thereby had to rate their level of constraint regarding the factors global attention demand, visual and auditory demand, stress, temporal demand, and interference (see Fig. 4).
Fig. 4.

Means and standard deviations for the different factors of the DALI questionnaire (0 = low, 5 = high)

We calculated repeated measures ANOVAs for each factor, and post hoc tests were calculated with Bonferroni-corrected paired t-tests. We found a main effect of condition on global attention demand (F(3;45) = 3.731, p < .05), visual demand (F(3;45) = 13.235, p < .001), auditory demand (F(3;45) = 8.449, p < .01), stress (F(3;45) = 4.189, p < .05), and interference (F(3;45) = 13.406, p < .001). No significant differences among conditions were found for temporal demand (F(3;45) = 1.064, n.s.). The post hoc tests revealed that the solitary condition led to a significantly higher global attention demand compared to the gaze condition. Furthermore, the visual demand in the solitary condition was significantly higher compared to all collaborative conditions. The auditory demand was rated significantly lower in the solitary condition compared to the solely verbal collaborative condition. With regard to the level of stress, we found marginally significant differences between the solitary condition and the two gaze conditions, with the latter being rated less stressful. Furthermore, interference was rated significantly higher in the solitary condition compared to all collaborative conditions. These results suggest that the solitary condition was the most demanding with regard to nearly all DALI factors. It is further apparent that the verbal condition comes with some disadvantage due to its auditory demand. Also, the induced level of stress in the verbal condition was comparable to the solitary condition.

6.3 Perceived Distraction and Further Interaction Qualities

In order to capture perceived distraction and further qualities of the interaction, we asked the drivers to rate several items (see Fig. 5 for an overview of drivers’ ratings).
Fig. 5.

Means and standard deviations of driver’s ratings (1 = does not apply at all, 7 = does fully apply)

With regard to distraction, there was a main effect of condition (F(3;45) = 5.736, p < .01). The solitary condition was rated significantly more distracting than the verbal and gaze conditions, and we could not find significant differences among the collaborative conditions. We further asked whether the advice of the system or co-driver was unclear to the driver, and there was also a main effect of condition (F(3;45) = 6.033, p < .01). We found that hints of the co-driver in the verbal and gaze activation conditions were perceived as significantly less clear than in the solitary condition. This goes hand-in-hand with the findings from the driving data, where the ambiguity of the given advice is reflected in the lane change errors, which happened most often in the gaze activation and verbal conditions.

Additionally, we asked drivers to rate a few statements specifically concerning the gaze conditions (see Fig. 5). It is apparent that the drivers agreed most strongly that the visualization of the co-driver's gaze helped them to understand the advice of the co-driver faster. Furthermore, the mean ratings regarding potential distraction or irritation caused by the gaze visualization were rather low. We further asked whether the visualization of the gaze position was too inaccurate; here we found slightly higher agreement, though still rather low overall.

During the study, we observed that the gaze activation condition was somewhat problematic. As it was completely left to the co-drivers when and where to show their gaze to the driver in this condition, we decided to take a closer look at the actual amount of time the gaze was shown and the frequency with which it was shown. Here we found a huge range among participants. Frequencies varied from 8 to 37 activations (median 19) during the trial, while the total amount of time the gaze was shown varied from 1.9 s to 2.5 min (median 38 s). Due to this strong variation, we further examined whether the time the gaze was visible correlated with the driver's perceived distraction due to the gaze visualization. We calculated Spearman rank correlations and found medium correlations between the perceived distraction of the driver and both the total time (r = 0.682, p < .01) and the frequency of shown gazes (r = 0.595, p < .01). These results indicate that the more often and the longer the co-driver's gaze was shown, the more distracted the driver felt.
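For illustration, such a rank correlation can be computed as sketched below; the data values are hypothetical, not the study's measurements.

```python
from scipy.stats import spearmanr

# Hypothetical per-pair values from the gaze activation condition: total
# seconds the gaze was visible and the driver's distraction rating.
gaze_time_s = [1.9, 12.0, 38.0, 55.0, 150.0, 20.0, 70.0, 33.0]
distraction = [1, 2, 3, 4, 6, 2, 5, 3]

rho, p = spearmanr(gaze_time_s, distraction)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```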

6.4 Insights from the Interviews

At the end of the study, we conducted semi-structured interviews with driver and co-driver together. We asked them about their general impressions of the different conditions, what they liked best (ranking the conditions according to their preference), and what did not work well.

There was a clear preference for the two gaze conditions among the drivers. Seven drivers preferred the gaze activation condition most and six drivers the gaze condition. Only two drivers preferred the verbal condition most, and one driver the solitary condition. For the co-drivers, the results were comparable, although there was a stronger preference for the gaze activation condition: ten co-drivers preferred this condition most, four co-drivers the gaze condition, and two the verbal condition.

Drivers’ Perspective.

The drivers who ranked the gaze conditions first most often stated that seeing the co-driver's gaze helped them to quickly identify which sign was meant by the co-driver. Drivers preferring the gaze activation condition argued that it was good that the co-driver could use his/her gaze in a goal-oriented way and that they did not need to see the co-driver's gaze while searching for the sign. As described above, co-drivers had quite different strategies regarding how much and in what way they showed their gaze to the driver, which probably also had an impact on the drivers' votes. For example, driver 11 said to the co-driver in the final interview, “I wanted to tell you that you should use your gaze more often, but it was already towards the end [end of trial]”. Driver 17 mentioned it was problematic that the co-driver more often looked at the lane he had to change to rather than at the respective sign, which would have been more helpful.

Drivers preferring the permanent gaze condition stated that seeing the gaze all the time allowed them to recognize tendencies in the co-driver's gaze, e.g., if the co-driver was primarily looking at the right side of the road, they already knew that the sign must be on the right side. Five drivers also stated that they did not care whether the gaze was shown all the time or not, because they could “blank it out” if it was not relevant for them. We also asked the participants to state which advantages and disadvantages they saw in the visualization of the gaze. Almost all drivers stated that the gaze visualization made it clear to them “what was meant” and that it allowed the co-driver to give hints more directly. As driver 16 put it, “It's simply unmistakable […] You are bridging communication difficulties”. Three drivers also stated that they felt relaxed and that seeing the gaze of the co-driver provided them with an additional feeling of safety.

Disadvantages of the gaze visualization were primarily mentioned with regard to permanent gaze visualization. Eight drivers stated that there is a potential for visual irritation, especially if the gaze is “jumping” or “going everywhere”. As stated by driver 13, “You might become irritated and with additional traffic, this is really not easy.” Three drivers said that seeing the gaze all the time is distracting (“You only concentrate on the yellow dot”, driver 7), and two drivers felt that the visualization stressed them. Two drivers also mentioned that the gaze was a bit inaccurate.

Of the two drivers who liked the verbal condition most, driver 12 stated that “Interestingly this worked best. This was where I felt most relaxed […] probably because we were well-rehearsed, or because the gaze point is additional information I need to mind.” The other driver (9) said that “the descriptions became increasingly inaccurate when the gaze was added” and that the advice had been more precise in the verbal condition. Driver 5, who liked the solitary condition most, said that this was least stressful for her because “In the other conditions, I needed to adapt to you [the co-driver] because I would have had other terms.” However, most drivers liked the solitary (n = 7) and the verbal condition (n = 6) least. The main reason was that the solitary condition was found to be distracting; with regard to verbal advice, drivers stated that it required concentration and a precise description of the signs.

Co-drivers’ Perspective.

Comparable to the drivers' opinions, the co-drivers agreed that both gaze conditions were helpful because they allowed them to communicate the information faster and more distinctly. However, ten co-drivers stated that seeing their gaze all the time irritated them, which is one of the main reasons they preferred the gaze activation condition. They also expressed concerns that this might be irritating for the driver as well, who probably cannot distinguish whether the gaze is meaningful at a given moment or not. As co-driver 4 put it, “It stressed me a bit when my gaze was always shown because when I did not look at a sign, I thought this could be irritating for the driver”.

Seven co-drivers explicitly expressed concerns that there had been inaccuracies in the gaze visualization. For example, co-driver 14 stated, “I liked most when I could activate my gaze. This is a bit due to the fact that my gaze was not perfectly recognized and then I could try to control it a bit.” Four co-drivers stated that the gaze activation condition stressed them, though: “It was easiest when the gaze was automatically there. I did not need to decide whether I am showing him [the driver] where I look right now, but it was directly projected” (co-driver 11). In some cases, co-drivers only noticed in the final interview that the driver would have preferred another use of gaze in the activation condition. For example, co-driver 3 commented, “For me it is totally surprising that it helped her [the driver] more when I looked at the signs. I always concentrated on the lane, because I wanted to show her where she has to go - how you just know it from classical navigation systems to give directions.”

We also asked the co-drivers about the advantages and disadvantages of the gaze approach. Eleven co-drivers stated that the main advantage was that they could directly point at the signs and that less description was necessary. As co-driver 7 put it, “If a word just does not come to your mind, you just can look at it [the sign] and the other nevertheless knows where to go.” Regarding disadvantages, the co-drivers mentioned the inaccuracy of the gaze (n = 7), that it required them to concentrate more (n = 5), and that it can be irritating when it is always shown (n = 6). Five co-drivers also mentioned that, with permanent visualization, they could no longer look around normally, which would also become stressful over time.

Improvements and Further Usage.

Regarding the visualization of the gaze, we found that about half of the participants (14) were completely satisfied with the size and color of the yellow dot we used in our study. The others mentioned that the color should be adaptive and provide more contrast with the surroundings (especially in the real world), or that the dot could have been a bit larger or adaptive in size as well. We further asked participants whether they could imagine using the shared gaze approach in their real cars. In total, 25 of 34 participants stated they would like to use it or at least try it, although they also admitted that it would be cost-dependent and rather a nice-to-have. Regarding further application areas for our approach, it was mentioned most often that the gaze of the co-driver could be used to point out hazards or dangerous situations to the driver. Others imagined that the approach could be very helpful for rescue or search operations, or in driving schools to teach the driver where to look (e.g., when driving curves).

7 Discussion

The results of our study indicate that the shared gaze approach indeed comes with advantages, but also with certain limitations that need to be addressed in future research. Contrary to our initial assumption that driving performance (in terms of lane deviation) would be better in the collaborative conditions than in the solitary condition, we found comparable driving performance across all conditions. Hence, the question arose why the collaboration of driver and co-driver did not explicitly contribute to better driving performance. A closer examination of the data showed that more driving errors happened in the collaborative conditions due to wrong or delayed instructions by the co-driver, and these errors of course had an impact on the overall lane deviation. From that point of view, it is actually surprising that the driving performance was still comparable to the solitary condition, where errors hardly occurred because the driver could gather the necessary information almost immediately, without processing verbal or additional visual information. Our results thus affirm the statement of Brennan et al. [3] that coordination has its costs. However, we could also see that there must be some advantage to the shared gaze approach, as the number of driving errors was lowest in the gaze condition among the collaborative conditions. This is in line with our goal of supporting the collaboration of driver and co-driver with our approach.

The findings with regard to perceived workload and distraction were much more promising, though. Overall, we found that the collaborative conditions were perceived as less stressful and distracting than the solitary condition in almost all cases. Perceived interference and visual demand were rated significantly lower in all collaborative conditions, and the stress level was experienced as significantly lower in the collaborative conditions with gaze. However, the auditory demand was rated significantly higher in the verbal condition compared to the solitary condition. These results confirm our assumptions with regard to workload. In the verbal condition, the driver's need to continuously listen to the co-driver's advice and process the verbal information obviously has disadvantages in terms of auditory demand and stress, which could be overcome with the additional visual information provided in the gaze conditions. This is in line with the findings from remote collaborative settings, where shared gaze resulted in faster and more efficient communication (e.g., [19]). We conclude that this is because the driver knows earlier when and where to change lanes and, hence, stress is reduced.

Furthermore, we found that drivers rated their perceived distraction significantly lower in the verbal and gaze conditions compared to the solitary condition. However, the gaze activation condition proved somewhat problematic: drivers experienced the hints by the co-driver as least clear in the verbal and gaze activation conditions. A look at the quantitative data and interview findings revealed the reasons for this. With regard to gaze activation, we found that the amount of time and the frequency with which co-drivers showed their gaze to the driver varied greatly and that different strategies were used. That is, the potential of the shared gaze approach was sometimes not fully exploited (when the gaze was used too seldom), or it was used in ways that were less helpful for the driver, e.g., by showing the lane to change to rather than the corresponding sign. We also found that it sometimes stressed the co-driver to decide whether to show his/her gaze to the driver in that condition.

We believe that this is a very important finding that has to be considered in our future research. The role of the co-driver and his/her ability to support the driver adequately are apparently crucial for driving performance. Consequently, the co-driver explicitly needs to know beforehand how s/he can use his/her gaze to best support the driver, as this is obviously not as intuitive as we initially assumed. We believe that this might also be one reason why the number of drivers favoring the gaze activation condition over the gaze condition was lower in comparison to the co-drivers. Permanent gaze visualization allowed drivers to recognize at all times where the co-driver was looking and to recognize tendencies quite early, while in the gaze activation condition drivers depended more on adequate use by the co-driver in order to take advantage of the gaze visualization.

Our findings further suggest that we need to take a closer look at the topic of distraction. Some drivers stated that they could “blank out” the gaze when it was permanently visualized. However, when correlating the amount of time and the frequency with which the gaze was shown in the gaze activation condition with the drivers' perceived distraction, the results indicate that the more the driver sees the co-driver's gaze, the more s/he is distracted. Hence, we believe that there are probably two qualities of distraction we need to consider: distraction caused by a permanently moving object versus distraction caused by a suddenly appearing object. Our findings suggest that both come with advantages and disadvantages. Especially from the co-driver's perspective, permanent gaze visualization is less desirable. In some cases it irritated them, and they also felt that they could no longer let their eyes wander around the surroundings normally. In addition, the drivers most often mentioned that there is a potential for irritation if the co-driver's gaze is permanently shown.

Still, almost all participants agreed that the visualization of the co-driver's gaze allowed faster and more distinct communication of “what is meant”, and the gaze conditions were preferred by most participants. Additionally, the participants could imagine further application areas for our approach, e.g., as a means to point out dangerous situations to the driver.

8 Conclusions and Further Work

With the presented study, we aimed at identifying the potentials and pitfalls of our shared gaze approach. We found that the visualization of the co-driver's gaze has the potential to improve driver and co-driver collaboration during a navigational task and comes with less stress and perceived distraction for the driver. Although we identified some pitfalls with regard to the gaze activation condition, we still believe that this is the condition to build on in future work. Based on the statements of the participants, and also from a practical viewpoint, it seems sensible to show the co-driver's gaze to the driver only when it is needed. Future research, however, needs to consider whether giving meaning to the co-driver's gaze should be an intentional decision of the co-driver, or whether it could also be supported in an automated way, e.g., by showing only gazes that exceed a predefined fixation duration. Furthermore, we need to consider how to visualize the co-driver's gaze so as to allow quick detection by the driver, especially in more complex driving environments.
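A minimal sketch of such an automated rule follows: the gaze dot is shown only while the co-driver's gaze has dwelt within a small region for a minimum duration. The dispersion radius and duration thresholds are assumed values, and robust fixation detection would be more involved than this heuristic.

```python
def auto_show_gaze(samples, radius_px=30, min_fixation_s=0.4, rate_hz=60):
    """Yield True for samples belonging to a fixation of at least
    min_fixation_s, i.e., moments when the dot would be shown automatically.

    Simple heuristic: consecutive samples within radius_px of the point
    where the current fixation started count toward that fixation. All
    thresholds are assumed values for illustration.
    """
    min_samples = int(min_fixation_s * rate_hz)
    anchor, run_length = None, 0
    for x, y in samples:
        if anchor is not None and \
                (x - anchor[0]) ** 2 + (y - anchor[1]) ** 2 <= radius_px ** 2:
            run_length += 1          # gaze stayed near the fixation start
        else:
            anchor, run_length = (x, y), 1   # gaze moved; start a new candidate
        yield run_length >= min_samples
```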

In this first study, we focused on the perceived distraction of the driver, which allowed us to conclude that a basic requirement with regard to driving safety is fulfilled. However, we believe that further research is necessary regarding the actual distraction of the driver, i.e., by also capturing the driver’s gaze in order to see how long and often the driver looks at the provided gaze information. This would also allow us to identify suggestions for a sound gaze visualization.

We are aware that our results are only a first step towards bringing the shared gaze approach into a real car. One might also ask how such an approach could be technically realized in a real car. Apart from using a large head-up display, we see a lot of potential in using LED lights mounted at the bottom of the windshield, as used in existing approaches that aim at providing hints to the driver about, e.g., obstacles, based on sensory information. We are currently preparing a study to investigate this kind of visualization.

In conclusion, we believe that our shared gaze approach extends the interaction space between driver and co-driver, with the potential for further application scenarios, in the spirit of our initial motto: “four eyes see more than two”.


Acknowledgements

The financial support by the Federal Ministry of Economy, Family and Youth, the National Foundation for Research, Technology and Development and AUDIO MOBIL Elektronik GmbH is gratefully acknowledged (Christian Doppler Laboratory for “Contextual Interfaces”).

References

  1. Bergasa, L., Nuevo, J., Sotelo, M., Barea, R., Lopez, M.: Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 7(1), 63–77 (2006)
  2. Bhavya, B., Alice Josephine, R.: Intel-eye: an innovative system for accident detection, warning and prevention using image processing. Int. J. Comput. Commun. Eng. 2(2), 189–193 (2013)
  3. Brennan, S.E., Chen, X., Dickinson, C.A., Neider, M.B., Zelinsky, G.J.: Coordinating cognition: the costs and benefits of shared gaze during collaborative search. Cognition 106(3), 1465–1477 (2008)
  4. Bryden, K.J., Charlton, J., Oxley, J., Lowndes, G.: Older driver and passenger collaboration for wayfinding in unfamiliar areas. Int. J. Behav. Dev. 38(4), 378–385 (2014)
  5. Crundall, D.: The integration of top-down and bottom-up factors in visual search during driving. In: Underwood, G. (ed.) Cognitive Processes in Eye Guidance, pp. 283–302. Oxford University Press, Oxford (2005)
  6. Fletcher, L., Zelinsky, A.: Driver inattention detection based on eye gaze-road event correlation. Int. J. Robot. Res. 28(6), 774–801 (2009)
  7. Forlizzi, J., Barley, W.C., Seder, T.: Where should I turn: moving from individual to collaborative navigation strategies to inform the interaction design of future navigation systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1261–1270. ACM, New York (2010)
  8. Gärtner, M., Meschtscherjakov, A., Maurer, B., Wilfinger, D., Tscheligi, M.: “Dad, stop crashing my car!”: making use of probing to inspire the design of future in-car interfaces. In: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2014), article 27, pp. 1–8. ACM, New York (2014)
  9. Gridling, N., Meschtscherjakov, A., Tscheligi, M.: I need help!: exploring collaboration in the car. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work Companion, CSCW 2012, pp. 87–90. ACM, New York (2012)
  10. Juhlin, O.: Social media on the road: mobile technologies and future traffic research. IEEE MultiMedia 18(1), 8–10 (2011)
  11. Kern, D., Mahr, A., Castronovo, S., Schmidt, A., Müller, C.: Making use of drivers' glances onto the screen for explicit gaze-based interaction. In: Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2010, pp. 110–116. ACM, New York (2010)
  12. Knobel, M., Hassenzahl, M., Lamara, M., Sattler, T., Schumann, J., Eckoldt, K., Butz, A.: Clique trip: feeling related in different cars. In: Proceedings of the Designing Interactive Systems Conference, DIS 2012, pp. 29–37. ACM, New York (2012)
  13. Land, M.F., Tatler, B.W.: Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford University Press, Oxford (2009)
  14. Mattes, S.: The Lane Change Task as a Tool for Driver Distraction Evaluation. IHRA-ITS Workshop on Driving Simulator Scenarios (2003). http://www.nrd.nhtsa.dot.gov/IHRA/ITS/MATTES.pdf
  15. Maurer, B., Trösterer, S., Gärtner, M., Wuchse, M., Baumgartner, A., Meschtscherjakov, A., Wilfinger, D., Tscheligi, M.: Shared gaze in the car: towards a better driver-passenger collaboration. In: Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2014), pp. 1–6. ACM, New York (2014)
  16. Mayring, P.: Qualitative content analysis. In: Flick, U., von Kardorff, E., Steinke, I. (eds.) A Companion to Qualitative Research, pp. 266–269. SAGE Publications Ltd, London (2004)
  17. Moniri, M.M., Feld, M., Müller, C.: Personalized in-vehicle information systems: building an application infrastructure for smart cars in smart spaces. In: Proceedings of the 8th International Conference on Intelligent Environments, IE 2012, pp. 379–382. IEEE, New York (2012)
  18. Moniri, M.M., Müller, C.: Multimodal reference resolution for mobile spatial interaction in urban environments. In: Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2012, pp. 241–248. ACM, New York (2012)
  19. Neider, M.B., Chen, X., Dickinson, C.A., Brennan, S.E., Zelinsky, G.J.: Coordinating spatial referencing using shared gaze. Psychon. Bull. Rev. 17(5), 718–724 (2010)
  20. Pauzié, A.: A method to assess the driver mental workload: the driving activity load index (DALI). IET Intell. Transport Syst. 2(4), 315–322 (2008)
  21. Warkentin, M.E., Luftus, S., Hightower, R.: Virtual teams versus face-to-face teams: an exploratory study of a web-based conference system. Decis. Sci. 28(4), 975–996 (1997)

Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  • Sandra Trösterer (1)
  • Magdalena Gärtner (1)
  • Martin Wuchse (1)
  • Bernhard Maurer (1)
  • Axel Baumgartner (1)
  • Alexander Meschtscherjakov (1)
  • Manfred Tscheligi (1)

  1. Christian Doppler Laboratory “Contextual Interfaces”, Center for Human-Computer Interaction, University of Salzburg, Salzburg, Austria
