1 Introduction

Sailing is one of the oldest modes of navigation and exploits wind as a propulsion source. For many centuries, wind was the only propulsion system for long sea voyages, and its study and representation have never been interrupted (Irby 2016). Due to its dynamic behavior and poor local predictability, handling wind in sailing requires its information to be correctly represented and interpreted by the sailor (Anderson 2008). Knowing the intensity and direction of the wind in real time is essential to plan the route or escape from adverse weather conditions, as well as to instantly re-plan the route for efficient navigation. Consequently, sailing activities require the rapid acquisition and processing of information from the operational scenario in order to execute maneuvers quickly and effectively (Laera 2020). The perception and processing of these data impose a cognitive load that draws on the user's limited cognitive resources (Sweller et al. 1990; Kirschner 2002) and affects his/her sailing performance. Considering these issues, the efficiency of the visualization system and of the data fruition process is of paramount importance in supporting sailing tasks (Laera et al. 2020).

Over the centuries, increasingly sophisticated wind data visualization systems were developed. For example, the first analog instruments that allowed the wind direction to be identified consisted of orientable elements such as metal sheets, flags, and windsocks (Kirwan 1810). These rudimentary instruments later evolved into arrow indicators that simultaneously displayed wind direction and speed.

In the early 1970s, the first companies that developed and produced electronic instruments for navigation began to emerge (Gatehouse 1970). In the 1980s, their development accelerated due to the growing demand for more advanced technological instruments for high-level competitions (Expedition 2021).

This process led to the large-scale diffusion of data visualization displays on racing and pleasure sailing boats.

Nowadays, the standard sailing instruments for wind monitoring consist of an anemometer and an anemoscope. They measure wind speed and wind direction, respectively, which are shown through simple display instruments placed at strategic points of the boat. A wind data display instrument consists of a dial, aligned with the boat's bow, and a hand. The hand indicates the Apparent Wind Angle (AWA) relative to the bow, whereas the Apparent Wind Speed (AWS) is displayed through a small LCD (Fig. 1). Often these displays are duplicated on the boat and arranged in two main areas to facilitate data reading for all crew members (Fig. 2). However, despite this duplication, wind data are not always in the users' field of view.

Fig. 1
figure 1

A traditional wind monitoring system configured for displaying the AWS and the AWA data

Fig. 2
figure 2

Display positions onboard: A1 Helm column; A2 Companionway

Consequently, gathering information from these displays negatively affects sailing performance because it lessens the cognitive resources available for the primary sailing task. Indeed, to stay continuously updated on wind data, users must periodically divert their focus from the primary navigation task to look at a display that is generally positioned lower than the users' point of view.

Nowadays, innovative information visualization technologies, such as Augmented Reality (AR), make it possible to overcome many of these drawbacks. AR enables a direct correlation between the physical phenomenon and its quantitative representation by visualizing data superimposed onto the environment. Indeed, differently from traditional interfaces, AR-based interfaces aim to provide virtual information directly registered in the real environment.

Furthermore, some AR technologies, such as HMD-based ones, allow the augmented content to be kept in the users' field of view. In the literature, it has already been observed that AR-based interfaces can improve user performance (Uva et al. 2018). Given the promising perspectives of AR technologies in many fields of application, we hypothesized that similar advantages could also be achieved in sailing. There are some preliminary attempts in the literature to introduce this technology in this field of application, but to the best of our knowledge, none of them carried out a validation procedure assessing the effectiveness of the proposed solutions.

This paper proposes an innovative AR-based data visualization interface obtained as the result of a brainstorming process among AR researchers and a panel of experienced sailors. The interface aims to assist wind data monitoring during sailing. Differently from previous works, we test the proposed interface by comparing it with a traditional display system in terms of user performance, cognitive load, system usability, and user experience associated with the fruition of wind speed and wind direction data.

We formulated the following research questions:

Can the proposed AR-based interface perform better than a traditional two-dimensional display system with respect to task execution performance during navigation?

Can the proposed AR-based interface lessen the cognitive load associated with wind data perception during sailing?

Can the proposed AR-based interface give a better user experience than a traditional two-dimensional display system?

2 Brief related work

Researchers started investigating the application of AR technologies in sailing only in recent years. Consequently, only a few studies dealt with AR-based interfaces specifically designed for sailing.

Morgere, Diguet and Laurent (Morgere et al. 2014; Morgère et al. 2015) proposed a solution for generating electronic nautical charts for a marine AR system. They developed a software solution that generates an AR interface based on the user's activities while on the boat. The proposed framework was implemented on an embedded system (MMARS) and connected to a custom optical see-through HMD. However, even though alleviating the cognitive load involved in data visualization was one of the requirements of the proposed solution, no tests were carried out to validate it.

In 2016, Wisernig et al. (2016) underlined the large cognitive load associated with sailing activities and presented the design of an AR data visualization system for sailing. They deployed an experimental proof-of-concept version and asked a few sailboat captains to evaluate a partial implementation of the system and comment on it. Comments were generally favorable.

In 2017, Butkiewicz (2017) presented an interface solution for AR nautical visualization applicable to sailing, developing an AR ship simulator based on Immersive Virtual Reality (IVR). This work also includes insights on what AR information is most useful to visualize and how to present it visually. However, the author did not conduct any tests to validate the AR interface.

In 2021, Laera et al. (2021b) presented an AR interface for sailing navigation designed for the Microsoft HoloLens 2. The graphic design was evaluated by 75 sailors who watched a recorded video showing a VR simulation of the proposed AR interface and filled out usability and User Experience questionnaires. Sailors rated the proposed AR interface positively.

To the best of our knowledge, there are no other studies in the literature investigating the effectiveness of applying AR technologies in the field of sailing. However, despite the absence of validation studies, the promising potential of AR in this field of application has led to some attempts to use it in commercial solutions (Fiorentino et al. 2021). A similar survey was conducted in the field of maritime navigation, a field very similar to sailing with which it shares all the aspects related to navigation planning, routing, atmospheric agents, and obstacle avoidance; also in this case, no validation tests of the user interfaces were found (Laera et al. 2021a).

3 Interface design and system implementation

The design of the interface presented in this paper builds on a previously developed and tested implementation for displaying wind data. The opinions collected from the 75 users who participated in the test of that first interface made it possible to develop the new version presented here (Laera et al. 2021c). The changes were implemented by AR researchers and a panel of experienced sailors who take, on average, more than 50 sailing trips per year.

These analyses allowed us to define the following three user requirements for the interface:

  • The interface must allow for the display of necessary information within the user’s field of view;

  • The augmented information must have a situated visualization with respect to quantified phenomena (Bressa et al. 2022);

  • The interface should avoid self-occlusions caused by overcrowding of information in the user’s field of view.

In order to meet these requirements, we designed the AR-based interface for a Head Mounted Display (HMD), and we adopted a layer system architecture. By placing the information on multiple layers, the layer system avoids overcrowding and localizes the information on the phenomenon to be quantified. Furthermore, using an HMD solves the problem of dividing attention between the primary navigation task and the secondary task of reading wind monitoring instruments. Moreover, the HMD leaves the users' hands free, which is of paramount importance during sailing. Starting from these considerations, we designed our AR interface for sailing to be displayed through an HMD.

3.1 AR-based sailing interface

The proposed AR-based interface consists of 3D virtual elements positioned on two layers, the sea layer and the wind layer, referenced with respect to the boat (boat-stabilized), as shown in Fig. 3.

Fig. 3
figure 3

Boat-stabilized AR interface: the wind layer shows the wind data and the sea layer shows the compass ring

The reference system is cylindrical, with its longitudinal axis passing through the base of the mast. The rationale of this choice is that the mast is a distinctive reference point of a sailboat and lies in the field of view of several crew members (i.e., bowman, helmsman, tailer); thus, it is natural for them to look in that direction to read information about the wind and its flow over the sails. The axis always remains vertical (perpendicular to the sea surface) and does not follow the boat's heel, roll, or pitch.
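For illustration only, the following is a minimal Python sketch of how such a boat-stabilized, gravity-aligned anchor could be derived from the boat pose by keeping only the heading (yaw) and discarding heel, roll, and pitch. The function and parameter names are hypothetical; the actual system was implemented in Unity.

```python
import math

def stabilized_anchor(mast_base_world, boat_yaw_deg):
    """Return a boat-stabilized reference frame anchored at the mast base.

    The frame follows the boat's position and heading (yaw) but stays
    gravity-aligned: heel, roll, and pitch are deliberately ignored, so the
    longitudinal axis of the cylindrical reference system remains vertical.
    """
    yaw = math.radians(boat_yaw_deg)
    # Bow direction projected onto the horizontal plane (x, y-up, z).
    forward = (math.sin(yaw), 0.0, math.cos(yaw))
    up = (0.0, 1.0, 0.0)  # always vertical, regardless of boat attitude
    return {"origin": mast_base_world, "forward": forward, "up": up}

# Example: boat heading 35 deg, mast base at world position (10, 2, -4).
frame = stabilized_anchor((10.0, 2.0, -4.0), 35.0)
```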

The wind layer is positioned 4 m above the deck level and visualizes wind data through graphic and alphanumeric elements. Graphic elements are an arrow pointing toward the axis of the mast according to the wind direction and a series of notches whose number changes according to the wind speed (Fig. 4).

Fig. 4
figure 4

Wind layer interface elements positioned 4 m above the deck and sea layer interface elements above the water surface

The alphanumeric elements consist of two numerical labels indicating the values of the wind direction (AWA) and of the speed (AWS).

The graphic elements are rendered in green (R: 0, G: 255, B: 0), except for the notches representing the wind intensity, which are magenta (R: 255, G: 0, B: 140); this pair of colors was found to be the best for representing information against the sky background (Xiong et al. 2016). The alphanumeric elements are rendered in green with a magenta outline to improve text legibility in the bright areas of the scene (Ling et al. 2019).

The font used for the alphanumeric elements is the open-source font Roboto (Google LLC; Robertson 2011), a neo-grotesque sans-serif characterized by a mechanical skeleton with largely geometric forms. We used a sans-serif font in order to improve readability (Bernard et al. 2003). At the same time, the font features friendly and open curves with a good amount of white inside the glyphs, which makes it particularly suitable for use on displays (Perondi et al. 2017).

The AWA is represented both by the orientation of the arrow and by a numerical value expressed in sexagesimal degrees. The AWS is expressed graphically by the magenta notches (1 notch = 1 knot) and by a numerical value expressed in knots (kts) rounded to the first decimal place. The numerical data (AWA, AWS) are represented on a plane facing the user.
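A minimal sketch of how the displayed wind-layer content could be derived from the AWA/AWS readings, under the mapping described above (one notch per knot, AWS rounded to one decimal); the function name and return structure are hypothetical and not part of the original implementation.

```python
def wind_layer_content(awa_deg, aws_kts):
    """Derive the wind-layer elements (illustrative sketch).

    - the arrow is rotated by the apparent wind angle (AWA);
    - one notch is drawn per knot of apparent wind speed (AWS);
    - numeric labels show AWA in degrees and AWS rounded to one decimal.
    """
    arrow_rotation_deg = awa_deg % 360
    notch_count = int(round(aws_kts))          # 1 notch = 1 knot
    awa_label = f"{round(awa_deg) % 360}°"
    aws_label = f"{aws_kts:.1f} kts"
    return arrow_rotation_deg, notch_count, awa_label, aws_label

print(wind_layer_content(42.0, 7.96))          # -> (42.0, 8, '42°', '8.0 kts')
```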

The sea layer is positioned above the water surface and contains the compass ring displayed with a diameter of 35 m around the boat. As per convention, the four cardinal points are displayed on the compass ring as well as notches placed every 5°.

The interface also provides two lubber lines indicating the direction of the boat's bow. The first one is positioned on the sea layer and intersects the compass ring, allowing the course in degrees to be read. The second one is positioned on the wind layer and defines the angle between the course and the wind direction (the wind angle with respect to the bow).

3.2 The IVR AR simulator

Conducting tests on AR-based interfaces under real-world sailing conditions is challenging due to the difficulties in guaranteeing user safety and the need for consistent weather conditions across user studies. Therefore, similar to previous studies (Lee et al. 2012, 2013; Jose et al. 2016), we implemented an IVR simulator to carry out the user study. We used the Unity game engine to develop a virtual scenario that faithfully reproduces the environment of a boat during sailing. The virtual boat model reproduces a 40-foot commercial sailboat (Fig. 5) equipped with traditional navigation instruments. The simulated sailing scenario is experienced through an Oculus Rift HMD and two Touch controllers. Simulated atmospheric phenomena were also reproduced, such as wave motion and variations in wind intensity and direction. The IVR simulator reproduces wind data at a frequency of 1 Hz, consistent with the sampling rate of the real data recorded from onboard sensors during a real navigation session.
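As an illustration of the playback mechanism, the sketch below replays recorded wind samples at a fixed 1 Hz rate; the log file name, column names, and callback are hypothetical, and the actual simulator handles this inside the Unity game loop.

```python
import csv
import time

def replay_wind_log(path, on_sample, rate_hz=1.0):
    """Replay recorded wind samples at a fixed rate (illustrative sketch).

    Each row of the (hypothetical) log carries one apparent wind angle and
    speed sample; `on_sample` is called once per sample, i.e. once per
    second for the 1 Hz rate used by the simulator.
    """
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            on_sample(float(row["awa_deg"]), float(row["aws_kts"]))
            time.sleep(1.0 / rate_hz)

# Example usage with a hypothetical log file:
# replay_wind_log("wind_log.csv", lambda awa, aws: print(awa, aws))
```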

Fig. 5
figure 5

The AR-based sailing interface simulated in an IVR environment

The IVR simulator provides two wind data visualization modes, AR and traditional. In AR mode, users are immersed in the virtual scenario with the superimposed AR interface visualizing wind data updated in real time. The visual assets are displayed in semi-transparency to simulate the viewing conditions experienced when using an optical see-through HMD.

In traditional mode, users are immersed in the virtual scenario with the boat's traditional instrumentation realistically replicated. The wind data are updated in real time on two traditional displays (Fig. 6) positioned as previously shown (Fig. 2).

Fig. 6
figure 6

Display used in the test to simulate the traditional display used onboard

During each user experience, the IVR simulator automatically records quantitative data concerning the user's performance and saves them into a log file in CSV format at the end of the experience.
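A minimal sketch of how such a per-session CSV log could be written; the field names and record structure are hypothetical, since the paper does not specify the log schema.

```python
import csv

def save_session_log(path, records):
    """Write per-event performance records to a CSV log (illustrative sketch).

    `records` is a list of dicts with hypothetical fields such as the event
    type, its timestamp, the user's reaction time, and an error flag.
    """
    fieldnames = ["event_type", "event_time_s", "reaction_time_s", "error"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)

# Example usage at the end of a session (hypothetical values):
# save_session_log("participant_07.csv",
#                  [{"event_type": "wind", "event_time_s": 34.2,
#                    "reaction_time_s": 5.8, "error": 0}])
```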

4 User study

4.1 Experiment design

To investigate our research questions, we formulated the following hypotheses:

  • H1—the proposed AR-based interface allows lower reaction times than a traditional two-dimensional display system;

  • H2—the proposed AR-based interface has a lower cognitive load than a two-dimensional display system;

  • H3—the proposed AR-based interface gives a better user experience than a two-dimensional display system;

  • H4—the proposed AR-based interface has better usability than a two-dimensional display system.

To test the hypotheses, the within-subjects experiment was designed with two conditions: AR mode and traditional mode. In each condition, participants were asked to interact with the simulated sailing scene by accomplishing the following two tasks:

  • Sea surface monitoring to detect obstacles;

  • Wind conditions (direction and speed) monitoring to identify a specific wind-event.

Sea surface monitoring involved detecting the appearance of an obstacle (Fig. 7a and b) along the route, corresponding to the emergence of a floating buoy on the sea surface (from now on, we will call it a buoy-event). In response to this event, users were required to interact with the scene by promptly pressing a button on the left controller. The visual feedback for accomplishing this task was the buoy's disappearance. If the task was not accomplished, the floating buoy persisted until the next buoy appeared.

Fig. 7
figure 7

Events the user must monitor during the test sessions: a Buoy-event in AR mode; b Buoy-event in traditional mode; c Wind-event in AR mode; d Wind-event in traditional mode

Wind conditions monitoring involved detecting the simultaneous display of two numerical values for AWA and AWS whose last digit was a zero (Fig. 7c and d). When this condition occurred, the wind data display update was stopped. In response to this event, users were required to interact with the scene by promptly pressing a button on the right controller. The visual feedback for accomplishing this task was the wind data update resuming on the display. The two tasks were chosen to simulate the attention conditions that occur on a boat during navigation. Indeed, to satisfy both tasks, the user had to pay attention to the wind-event and the buoy-event simultaneously.
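The sketch below encodes our reading of the wind-event trigger condition: the last digit of both displayed values is zero, where the AWA is shown as an integer number of degrees and the AWS is shown with one decimal (so its last digit is the tenths digit). The function name is hypothetical.

```python
def is_wind_event(awa_deg, aws_kts):
    """Check the wind-event trigger condition (sketch of our interpretation)."""
    awa_last_digit = round(awa_deg) % 10          # last digit of the AWA display
    aws_last_digit = round(aws_kts * 10) % 10     # tenths digit of the AWS display
    return awa_last_digit == 0 and aws_last_digit == 0

print(is_wind_event(120, 8.0))   # True:  "120" and "8.0" both end in 0
print(is_wind_event(123, 8.0))   # False: "123" ends in 3
```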

Each test experience included ten randomly distributed buoy-events, with a minimum time interval between two consecutive events of 10 s and a maximum of 20 s, for a total duration of 180 s. The wind data presented ten potential wind-events. Given the structure of the wind-event-related task, it is worth noticing that the number of wind-events actually presented in each test session depended on the user's performance.
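For illustration, a minimal sketch of one way to generate a buoy-event schedule satisfying these constraints (ten events, 10-20 s spacing, within a 180 s session); the paper does not describe the simulator's actual scheduling logic, so the rejection-sampling approach here is an assumption.

```python
import random

def buoy_event_schedule(n_events=10, min_gap_s=10, max_gap_s=20, duration_s=180):
    """Generate randomly spaced buoy-event times (illustrative sketch).

    Consecutive events are separated by 10-20 s; the schedule is redrawn
    whenever it does not fit within the 180 s test session.
    """
    while True:
        times, t = [], 0.0
        for _ in range(n_events):
            t += random.uniform(min_gap_s, max_gap_s)
            times.append(t)
        if times[-1] <= duration_s:
            return times

print(buoy_event_schedule())
```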

4.2 Procedure

The whole procedure took about 30 min for each participant. At the beginning, an experimenter explained the purpose of the study, the characteristics of the two events to be detected, and the two tasks to be performed for noticing them. After the participant confirmed that she/he understood the explanation and signed the consent form, she/he was asked to fill in a questionnaire with the demographic data, including age, gender, experience with sailing, and previous experience with the Oculus Rift. After filling in the questionnaire, the experience with the IVR simulator started.

In addition to the test experience, the simulator also included two training scenes to allow participants to properly train with both visualization modes. Each scene provided a training session lasting 120 s with 6 wind-events and 6 buoy-events. Consequently, the overall IVR experience for each modality included a preliminary training session and, after a 60-s rest break, the 180-s test session. Participants experienced the two modalities in a counterbalanced order to avoid any order effect.

At the end of each test session in one visualization mode, before experiencing the other mode, participants were asked to fill in questionnaires on user experience, usability, and cognitive load. In addition, at the end of both the test sessions, participants were asked to fill in an open-ended questionnaire to express their evaluations freely.

4.3 Measures

To test our hypotheses, we assessed both objective and subjective metrics.

4.3.1 Objective metrics

Concerning hypothesis H1, we compared users’ performance in terms of the overall reaction time and the number of errors. We recorded the reaction times starting from an event's timestamp and ending when the user accomplished the related task.

We assessed the overall reaction time for each participant as the sum of the recorded values. Regarding the wind-event reaction times, if a participant missed all the events, we recorded the time from the first event to the end of the experience.
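A minimal sketch of this aggregation rule, including the fallback used when all wind-events were missed; the function and argument names are hypothetical.

```python
def overall_reaction_time(reaction_times_s, first_event_s=None, session_end_s=None):
    """Sum the recorded reaction times for one participant (illustrative sketch).

    If a participant missed every wind-event, the time from the first event
    to the end of the experience is used instead, as described above.
    """
    if reaction_times_s:
        return sum(reaction_times_s)
    return session_end_s - first_event_s

print(overall_reaction_time([5.2, 7.9, 6.4]))                         # 19.5
print(overall_reaction_time([], first_event_s=12.0, session_end_s=180.0))  # 168.0
```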

Regarding the number of errors related to the wind-events, we recorded each false positive (i.e., the number of times a participant reported a non-existing wind-event).

Regarding the number of errors related to the buoy-events, we recorded both the false positives and the missed detections.

4.3.2 Subjective metrics

Concerning hypothesis H2, we administered the raw NASA-TLX (RTLX) questionnaire to evaluate the mental workload (Hart 2006; Uva et al. 2019), asking participants to fill it in at the end of the experience in each visualization mode. We chose the unweighted version of the NASA-TLX as its administration is simpler than the weighted version and, in the literature, high correlations have been shown between the weighted and unweighted scores (Byers et al. 1989; Moroney et al. 2003).
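For clarity, the unweighted (raw) NASA-TLX score is commonly computed as the plain mean of the six subscale ratings, without the pairwise-comparison weights of the full TLX. A minimal sketch, assuming 0-100 subscale ratings:

```python
def rtlx_score(mental, physical, temporal, performance, effort, frustration):
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings (sketch)."""
    subscales = [mental, physical, temporal, performance, effort, frustration]
    return sum(subscales) / len(subscales)

print(rtlx_score(40, 10, 35, 25, 45, 15))   # 28.33... (hypothetical ratings)
```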

Similarly, concerning hypotheses H3 and H4, we asked participants to fill in the Short User Experience Questionnaire (UEQ) and the System Usability Scale (SUS) questionnaire. The UEQ measures both pragmatic and hedonic quality aspects of the user experience; it returns one score for each aspect and an overall score (Schrepp et al. 2017). The SUS is a widely used ten-item standardized questionnaire to assess perceived usability (Lewis 2018).
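For reference, the standard SUS scoring procedure maps the ten 1-5 item ratings to a 0-100 score: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(item_ratings):
    """Standard SUS scoring: ten items rated 1-5, mapped to a 0-100 score (sketch)."""
    assert len(item_ratings) == 10
    total = 0
    for i, rating in enumerate(item_ratings, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))   # 92.5 (hypothetical ratings)
```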

4.4 Participants

The experiment had 45 participants (32 male, 13 female) aged between 22 and 45 years (mean: 28.8 years; SD: 4.85). All participants had normal or corrected-to-normal eyesight, 21 used prescription lenses, and none was affected by color blindness. They all could wear their glasses together with the HMD. All participants were bachelor's or master's degree students with no previous sailing experience. The rationale behind this choice regarding the sailing experience is the need to avoid the so-called legacy-bias and performance-bias effects (Uva et al. 2019). Indeed, when expressing their preferences on innovative user interfaces or interaction metaphors, users are often biased by their prior habits and by legacy technologies that have been used for a long time.

Eleven participants had used the Oculus Rift before, but none had extensive IVR experience.

5 Results

5.1 Objective measures

We used the reaction time values to compare the performance of the two visualization modes. Consequently, we compared the wind-event and the buoy-event reaction time values between the AR and traditional visualization modalities. Given the within-subjects design, we had paired-sample values to be analyzed.

We first inspected the data for outliers. Then, we removed those pairs where at least one value was outside the interval having as a lower limit the first quartile minus three times the interquartile range and as an upper limit the third quartile plus three times the interquartile range. By doing so, we removed five pairs of samples for the wind-event task and three pairs of samples for the buoy-event task.
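A minimal sketch of this pairwise outlier removal; computing the quartiles separately within each condition is our assumption, since the paper does not state how the fences were derived.

```python
import numpy as np

def remove_outlier_pairs(ar, trad):
    """Drop pairs where at least one value lies outside the 3*IQR fences (sketch).

    Quartiles are computed separately for each condition here (an assumption).
    """
    ar, trad = np.asarray(ar, float), np.asarray(trad, float)
    keep = np.ones(ar.size, dtype=bool)
    for values in (ar, trad):
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        keep &= (values >= q1 - 3 * iqr) & (values <= q3 + 3 * iqr)
    return ar[keep], trad[keep]
```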

To compare the means, we applied the paired-samples t-test, starting by checking its assumptions (Rasch et al. 2011; Xu et al. 2017). As the distribution of the differences between the paired measurements was not normal, we log-transformed the samples. By doing so, the t-test assumptions were met for both the wind-event and the buoy-event samples.
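A minimal sketch of this analysis step; using the Shapiro-Wilk test for the normality check is our assumption, as the paper only states that the assumptions were verified.

```python
import numpy as np
from scipy import stats

def paired_comparison(ar, trad):
    """Paired t-test on (log-transformed) reaction times (illustrative sketch)."""
    ar, trad = np.asarray(ar, float), np.asarray(trad, float)
    # If the paired differences are not normal, log-transform the samples.
    if stats.shapiro(ar - trad).pvalue < 0.05:
        ar, trad = np.log(ar), np.log(trad)
    return stats.ttest_rel(ar, trad)

# Hypothetical usage:
# t, p = paired_comparison(ar_reaction_times, trad_reaction_times)
```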

Regarding the reaction time for the wind-events, we tested the null hypothesis of equality of the means (Fig. 8). The t-test allowed us to reject the null hypothesis. On average, the AR mode performed better (M = 7.76 s) than the traditional mode (M = 11.48 s). This improvement was statistically significant (t(39) = 6.54, p < 0.00001).

Fig. 8
figure 8

Boxplots graph with the reaction times for the wind-events (average and raw data before Log transformation)

Regarding the reaction time for the buoy-events, we tested the null hypothesis of equality of the means. The t-test allowed us to reject the null hypothesis. On average (Fig. 9), the AR mode performed better (M = 9.12 s) than the traditional mode (M = 15.84 s). This improvement was statistically significant (t(41) = 7.76, p < 0.00001).

Fig. 9
figure 9

Boxplots graph with the reaction times for the buoy-events (average, raw data before Log transformation)

Regarding the execution errors for the wind-event-related task, the sample distributions for the two visualization modes were not normal; thus, we applied the Wilcoxon signed-rank test after checking that its assumptions were met. The test results revealed that the errors for the AR mode (Mdn = 0, n = 45) were significantly lower than the errors for the traditional modality (Mdn = 2, n = 45), Z = − 4.405, p < 0.0001 (Fig. 10). Regarding the execution errors for the buoy-event-related task, we had two different types of errors: the false positives and the missed detections.

Fig. 10
figure 10

Boxplots graph with the number of errors for the buoy-event-related task

The sample distributions of these errors for the two visualization modes were not normal; thus, we applied the Wilcoxon signed-rank test after checking that its assumptions were met. The test results revealed no significant differences between the AR mode and the traditional mode for either type of error.
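A minimal sketch of this non-parametric comparison on the paired error counts; variable names are hypothetical and the default handling of zero-difference pairs in scipy is assumed.

```python
from scipy import stats

def compare_error_counts(errors_ar, errors_trad):
    """Wilcoxon signed-rank test on paired error counts (illustrative sketch).

    Used instead of the paired t-test when the distributions are not normal,
    as for the error data above.
    """
    return stats.wilcoxon(errors_ar, errors_trad)

# Hypothetical usage:
# stat, p = compare_error_counts(errors_ar, errors_trad)
```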

5.2 Subjective measures

Cognitive load (Nasa-RTLX).

We applied the paired-samples t-test to compare the RTLX scores, as its assumptions were met. The mean value of the overall RTLX score for the AR mode was lower than that for the traditional mode. The difference was statistically significant (28 vs. 45, t(44) = − 8.447, p < 0.0001, Fig. 11).

Fig. 11
figure 11

Comparison of the overall and the Nasa-RTLX subscales between AR and traditional visualization modalities

5.2.1 System usability and user experience

The score returned by the SUS questionnaire for the AR mode is 82.9, corresponding to an A+ rating, the best position in the Sauro and Lewis (2016) curved grading scale.

The score returned by the SUS questionnaire for the traditional mode is 55.9, corresponding to a D rating, the penultimate position in the Sauro and Lewis curved grading scale.

The Short UEQ scores concerning the estimated hedonic and pragmatic quality, together with the corresponding Cronbach's alpha coefficients, are listed in Table 1 (Cronbach 1951).

Table 1 Short UEQ scores

We used the UEQ analysis tool to analyze the two UEQ datasets and compared them against a baseline data set containing data from 20,190 persons from 452 studies concerning different products. Figure 12 reports the comparison between the scores obtained by the two modes.

Fig. 12
figure 12

UEQ benchmark histogram comparison between the AR mode and the traditional mode

6 Discussion

The reaction time comparison confirms our hypothesis H1. In particular, for the AR mode, we observe an average reduction in the reaction time for the wind-event- and the buoy-event-related tasks (32.39% and 42.45%, respectively).

The better performance obtained with the AR interface in the buoy-event-related task can be explained by the fact that the AR-based interface allows the user to maintain a fixed view of the navigation scenario.

In fact, the augmented information is always on the observer's visual axis, unlike traditional instruments, whose location could force observers to look for them and lower their gaze. Some of the qualitative feedback gathered from users through the open-ended questionnaire also supports this hypothesis. According to users, "reading the instrument's displays and looking ahead simultaneously with the traditional visualization system is difficult and involves a feeling of insecurity."

This consideration may also explain the better performance recorded for the errors in the wind-event-related task. Regarding the latter result, it is possible to hypothesize that the AR-based interface allows better data readability, but further specific studies are needed to evaluate this effect.

Regarding the non-statistically significant difference in the errors for the buoy-event-related task, the results show that the event frequency allowed error-free detection in both modes, whereas the reaction time results indicate a lower cognitive load with the AR interface.

Given the experiment’s dual-task structure, these results together with the RTLX scores also confirm our hypothesis H2. We can hypothesize that the AR-based interface allows lessening the extraneous cognitive load introduced by the traditional visualization system that suffers the problem of dividing attention between the primary task of navigation and the secondary task of reading wind monitoring instruments.

Hypotheses H3 and H4 are confirmed by the evaluations of usability and user experience. In both modes, the consistency of the hedonic and pragmatic scales is supported by Cronbach's alpha coefficients higher than 0.7.

However, this study has some limitations. First, it includes only one task related to the perception of numerical values; thus, we did not investigate the effectiveness and the contribution of the other elements of the interface (such as the arrow, the compass, and the notches). Nevertheless, the open-ended questionnaire allowed us to collect interesting comments concerning this issue. Some users claimed that understanding the wind direction was much easier with the AR-based interface than with the traditional one. This may depend on the abstraction of the wind representation in the traditional visualization, due to the non-coincidence between the axis of rotation of the hand indicating the wind angle on the traditional display and the axis of rotation of the real wind flow, which is orthogonal to the earth's surface. Differently, the AR-based interface keeps this coherence by having the perpendicular to the sea surface passing through the base of the mast as the axis of rotation of the wind direction arrow. Second, the test cohort is composed of non-sailors; thus, none of the users' evaluations comes from people with sailing experience. Third, we tested the proposed interface in a virtual simulator and not in a real sailing scenario.

A further possible limitation of using VR to test our interface is related to the use of an AR system in an outdoor marine environment. In fact, factors such as the amount of sunlight and the reaction of the HMD device to water splashes are not reproducible in the laboratory. These factors also constitute the major critical issues in designing dedicated HMD hardware and explain its current absence from the market. However, it is possible to adapt an HMD device for outdoor use in critical light conditions, with a waterproof level appropriate for conducting interface testing aboard a sailboat.

Given the encouraging results of this study, we will extend our research by testing the proposed AR-based interface on sailors in a real sailing scenario. By doing so, we will also be able to evaluate other interesting aspects, such as the acceptance of HMD use among sailors, ergonomics, and the legibility of the augmented information.

7 Conclusion

This paper proposes an AR-based interface for sailing and presents a user study comparing it with a traditional display for wind monitoring. The user study was carried out with a simulated sailing scenario in an IVR-based AR simulator. The results showed that the AR-based interface outperformed the traditional data visualization system in terms of reaction time, cognitive load, system usability, and user experience.

Although our study considers only one possible implementation of an AR interface, the results support applying AR technologies for sailing and pave the way for a specific line of research in this field.

In the future, we plan to extend our research by evaluating the effectiveness in conveying other information for sailing and by evaluating other interface architectures.

We also plan to carry out a user study in a real sailing scenario to evaluate how atmospheric agents affect the AR-based system and to gather comments from users with sailing experience.