
Open data exploration in virtual reality: a comparative study of input technology

  • Nico Reski
  • Aris Alissandrakis
Open Access
Original Article

Abstract

In this article, we compare three different input technologies (gamepad, vision-based motion controls, room-scale) for an interactive virtual reality (VR) environment. The overall system is able to visualize (open) data from multiple online sources in a unified interface, enabling the user to browse and explore the displayed information in an immersive VR setting. We conducted a user interaction study (\(n=24\); \(n=8\) per input technology, between-group design) to investigate experienced workload and perceived flow of interaction. Log files and observations enabled further insights into, and comparison across, the conditions. In a scenario that encouraged exploration with no time limitations, we identified trends that indicate a user preference for a visual (virtual) representation, but no clear trends regarding the use of physical controllers (over vision-based controls).

Keywords

Comparative study · Gamepad · Room-scale virtual reality · Virtual reality · Vision-based motion controls · 3D gestural input

1 Introduction

Although virtual reality (VR) has been of interest to researchers for many years (Sutherland 1968), it is only due to recent developments in consumer technologies that VR is back in the mainstream spotlight (Abrash 2014; Lanman et al. 2014; Parkin 2013). Head-mounted displays (HMDs) appear to be particularly accessible as they are now more affordable than ever before (Lanman et al. 2014). An HMD is a device worn on the head, similar to glasses or ski goggles, visually isolating the user from the physical real-world surroundings. Instead, the user is presented with computer-generated virtual content, which can be explored by moving the head and thus looking around. Perceiving the virtual environment in such a natural manner conforms fundamentally with how humans operate as biological beings (Carmack 2013; LaValle 2016). Consequently, many opportunities arise regarding the presentation of digital content in three-dimensional VR environments (Bayyari and Tudoreanu 2006). Rendering and display technologies play a crucial role in our ability to be visually immersed in a VR environment (Bowman and McMahan 2007). However, as Bowman and McMahan (2007) point out, input technologies enabling the user to successfully interact in these environments are no less important. As more developers and researchers gain access to affordable hardware, it becomes increasingly important to provide interaction design guidelines, best practices, and recommendations for different contexts in order to ensure an enjoyable user experience (UX) (Bowman et al. 2008), also outside of entertainment scenarios (such as games and movies). Such guidelines can be greatly informed by empirical evaluations, particularly comparative interaction studies, such as those presented by Figueiredo et al. (2018), Streppel et al. (2018), and Vosinakis and Koutsabasis (2018), to name just a few.

Following recent advances in consumer VR technologies, the community is implementing various interaction approaches using different input technologies, arguing for a variety of pros and cons regarding not only interaction and interface design, but also UX. However, even as state-of-the-art consumer technologies evolve, there are still few scientific studies directly comparing these latest approaches in applied scenarios.

Following up on our earlier work (Reski and Alissandrakis 2016), we are interested in investigating two aspects. First, input technologies are an essential part of a VR system as they enable interaction (Bowman et al. 2008). Thus, we believe it is important to compare the operation of the same VR system using different input technologies to provide recommendations and insights for developers of future VR experiences. Second, we wish to apply VR technologies in non-entertainment settings, more specifically contributing with an example of using VR for the purpose of (open) data exploration.

In order to investigate these aspects, we developed and implemented an interactive VR system. Our system supports interaction using three prototypes based on different input technologies, namely a GAMEPAD (using an Xbox One controller and an Oculus Rift headset), vision-based motion controls (VBMC) using the Leap Motion controller attached to an Oculus Rift headset, and room-scale virtual reality (RSVR) based on the HTC Vive headset and controller (see Fig. 1). The visual user interface design is intentionally minimalistic for all three prototypes.

The particular selection of input technologies attempted to consider the most commonly used current state-of-the-art consumer interaction controllers for VR, keeping in mind fundamental differences in their characteristics, for instance, sensor type, data frequency, physicality, and visualization (LaViola et al. 2017). The gamepad is an active sensor, requiring manual action by the user in order to generate data. The work presented here makes exclusive use of its discrete components (buttons) for interaction. Held by the user, the physical gamepad controller has no visual representation inside the VR environment. The VBMC is a passive sensor, allowing the user to interact unobtrusively within the 3D space through hand and finger tracking. Using no additional physical components, the user's hands and fingers are visually displayed inside the virtual environment according to their position relative to the sensor, which is placed directly on the HMD. RSVR combines spatial tracking with physical device components. The physical controller is continuously tracked by the external sensors (“outside-in” tracking) and visualized accordingly in the 3D space, regardless of the user's actions. User interaction with the developed VR system is possible through the controller's discrete components (buttons) and by contextualizing its position within the virtual environment.

Interaction methods and technologies are evolving rapidly, aiming to support the user through better comfort and easier use (LaValle 2016). In order to enable the user to successfully communicate and interact within a VR environment, the choice of appropriate input devices is important (LaViola et al. 2017), particularly considering that developing interaction mechanisms for VR remains challenging (LaValle 2016). As the interactive features of a VR system may be mapped onto any given input device (LaViola et al. 2017), we are interested in investigating potential advantages and disadvantages of using these highly different input devices (GAMEPAD, VBMC, RSVR) to operate the developed VR system for the purpose of (open) data exploration. A user-centric view drives the investigation, as the user's experience, behavior, and engagement with the presented input technologies are of particular interest. From an applied perspective, it is arguably most appropriate to classify these input technologies based on two aspects: whether the device has a corresponding visual representation within the VR environment, and whether it is physical. Table 1 provides an overview of the selected input technologies and their characteristics.

Based on this classification and following our overall study aim, a research question (RQ) and two hypotheses (H) were defined as follows:
RQ:

How do different input technologies (gamepad, vision-based motion control, room-scale virtual reality) affect user experience and behavior in current state-of-the-art virtual reality environments?

H1:

Input technologies that include a visual representation aspect within VR have a more positive impact on user experience and behavior compared to those that do not.

H2:

Input technologies for VR that involve a physical controller have a more negative impact on user experience and behavior compared to those that do not.

In a scenario that encouraged exploration with no time limitations, 24 participants (\(n=8\) per prototype) were asked to use our system and perform a task based on open data available within the VR environment. This enabled us to investigate experienced workload (Hart 2006), perceived flow of interaction (Rheinberg et al. 2003), and simulator sickness (Kennedy et al. 1993; Bouchard et al. 2007) for each prototype. Further quantitative analysis of user behavior and comparison between prototypes was possible through the implementation of logging features directly in our VR system.
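The logging just mentioned could, as a minimal illustration, resemble the following sketch. The event names and fields here are hypothetical examples, not the ones used in the actual study software.

```python
import json
import time

class InteractionLogger:
    """Minimal timestamped event logger for later quantitative analysis.
    Event names and detail fields are illustrative assumptions."""

    def __init__(self):
        self.events = []

    def log(self, event, **details):
        # Record the event with a wall-clock timestamp and free-form details.
        self.events.append({"t": time.time(), "event": event, **details})

    def dump(self, path):
        # Persist the session log as JSON for offline analysis.
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

logger = InteractionLogger()
logger.log("move_to_node", node="Node A")
logger.log("filter_applied", attribute="population", mode="greater_than")
```

Analyzing such a log afterward allows, for example, counting how often each feature was used per participant and condition.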
Fig. 1

Study participants operating our VR system using the three different interaction prototypes. Left to right: GAMEPAD, VBMC, and RSVR (see Table 1)

Table 1

Classification of input technologies used in our study

Input device characteristics    GAMEPAD           Vision-based motion controls (VBMC)   Room-scale VR (RSVR)
Visual representation (in VR)   No                Yes                                   Yes
Physical controller             Yes               No                                    Yes
Sensor type                     Active            Passive                               Active and passive
Input device data frequency     Discrete          Continuous                            Discrete and continuous
HMD                             Oculus Rift CV    Oculus Rift CV                        HTC Vive

The article is organized as follows. It begins by reviewing essential literature, examining other comparative studies and their findings in order to provide important fundamentals of VR as well as related display and interaction technologies. Some background and descriptions of relevant data collection methods are provided as well. The third section is concerned with the concept and design of the developed VR system and the implementation of the three input technology prototypes. Afterward, methodological considerations for the user interaction study are described, providing insights about the setup and environment, the study procedure, and the task the participants were asked to complete. The fifth section states the results and limitations of the conducted user interaction study, and the findings are discussed in the sixth section. Finally, the seventh section concludes the article by summarizing the key findings, proposing recommendations for future VR developers, and stating some directions and possibilities for future work.

2 Literature review

Empirical and comparative studies are a common methodology for exploratory research in the field of human–computer interaction (HCI), particularly with a focus on VR and three-dimensional user interfaces (3D UI). Given the many software-, hardware-, and design-based options available when proposing interaction and implementation approaches for VR and 3D UI (LaViola et al. 2017), empirical and comparative studies provide important practical insights into the UX and interaction of a developed system. In order to conduct an informative investigation with appropriate methodological considerations, existing literature was reviewed with the objective of examining such empirical and comparative studies. The investigations conducted, the data collection methods applied, and the findings presented within the literature provide important directions and considerations for the presented work, informing the overall system design as well as its evaluation.

2.1 Locomotion

The mechanism of virtually moving the user from one place to another while remaining in a fixed position in the real-world environment is commonly known as locomotion (LaValle 2016). Cardoso (2016) compared interaction techniques for locomotion in VR, particularly investigating the differences between gaze-direction, gamepad-based, and gestural input (using a Leap Motion controller) approaches. According to the results of the study’s performance measurement, the gamepad-based approach seemed to be faster and more comfortable to operate within the context of a simple path following task (Cardoso 2016). Medeiros et al. (2016) investigated the effects of speed and transition on target-based travel techniques in VR. Implementing and evaluating three travel techniques (teleportation, linear motion, and animated teleport box) in regard to travel time, speed, and transitions, they conclude that infinite velocity techniques (teleportation) cause less discomfort, and that cybersickness and performance are not significantly impacted by the addition of transition effects to these kinds of techniques (Medeiros et al. 2016).

The feeling of being in the virtual world is known as the phenomenon of presence (Abrash 2014; LaValle 2016). Slater et al. (1995) investigated the influence of a body-centered interaction approach, particularly a walking-in-place technique, on presence in VR. With the hardware resources available at the time (1995), users preferred navigation by pointing, although the authors state that improved technologies may lead to a preference for their proposed interaction technique (Slater et al. 1995). Later, Tregillus et al. (2017) examined hands-free omnidirectional navigation in VR and compared head-tilt alone, head-tilt with walk-in-place, and joystick interaction. Their results indicate that interaction through head-tilt alone was the fastest and thus performed best, while both the head-tilt alone and the head-tilt with walk-in-place approaches reportedly increase the feeling of presence compared to joystick interaction. Applying joystick-based and head-controlled paradigms, Chen et al. (2013) implemented and compared two six degree of freedom (DOF) navigation techniques for VR. Investigating user performance, cybersickness, and the feeling of presence, they report that head-controlled interaction is better than joystick-based approaches for navigation with six DOF (Chen et al. 2013). However, Chen et al. (2013) acknowledge that while the results are certainly interesting from an interaction design perspective, further research is required in order to fully understand this outcome.

2.2 Input technologies and interaction

The effects of immersion with regard to three different interaction techniques have been investigated by McMahan et al. (2006). Based on a task-based study design, asking users to manipulate a virtual object with six DOF in a cave automatic virtual environment (CAVE) setup, McMahan et al. argue that the interaction technique had a significant impact on task completion performance, while the examined components related to immersion, concretely stereoscopy and field of regard, did not (McMahan et al. 2006). Three different interaction techniques (direct, semi-direct, and indirect) using 3D gestural input were compared by Wirth et al. (2018). Conducting typical scrolling and windowing tasks within a scenario presenting medical data, the immersion and UX of nine radiologists were evaluated, indicating a preference for the implemented direct interaction technique (Wirth et al. 2018). Wolf et al. (2017) investigated different setups and interaction techniques for VR in an architecture scenario, comparing a 2D screen plus mouse and keyboard, an HMD plus keyboard, and an HMD plus walking with gestural input in order to navigate the VR environment. With a focus on examining the phenomenon of presence, they identified three aspects that can decrease this feeling, namely object mismatch, time mismatch, and spatial mismatch (Wolf et al. 2017). Betella et al. (2014) conducted an empirical study to investigate the exploration of large network data in two different setups, concretely a traditional 2D screen setup and a self-developed CAVE with both body and hand tracking for user movement and interaction. Their results indicate that users in the CAVE were able to retain more structural information and spatial understanding about the network compared to the 2D screen setup (Betella et al. 2014). Bachmann et al.
(2018) surveyed HCI techniques with a focus on 3D gestural input using the Leap Motion controller, providing a good overview of relevant methods and evaluation techniques. Gusai et al. (2017) conducted a study comparing gestural input (using a Leap Motion controller) and controller-based (HTC Vive) interaction in a collaborative scenario. Two users, one immersed in the VR environment and the other assisting from outside it using a 2D screen application, completed an asymmetrical task of manipulating virtual objects, enabling Gusai et al. (2017) to examine performance, UX, and simulator sickness. Although the gestural input allows a more natural interaction with the virtual objects, it lacks the stability and accuracy of the controller-based interaction, favoring the latter in terms of UX (Gusai et al. 2017). Streppel et al. (2018) investigated different interaction mechanisms to move, select, and manipulate objects within a software-cities-inspired VR scenario. Comparing interaction within the virtual environment using 3D gestural input (Leap Motion), a physical controller (HTC Vive), and a virtual UI (a mixture of Leap Motion and HTC Vive), they conclude that 3D gestural input and physical controls are equally accepted according to the subjective impressions of their 30 study participants (Streppel et al. 2018). Streppel et al. (2018) state that further investigations in this matter are needed, for instance, using a real (measurable) task in order to obtain more objective results. Figueiredo et al. (2018) recently compared a physical controller (HTC Vive) and 3D gestural input (Leap Motion) as VR input devices within the scope of five tasks involving near and far object manipulation. The results of the 24 participants were evaluated in terms of performance and usability using the System Usability Scale (SUS) questionnaire (Figueiredo et al. 2018).
Although the physical controller performed better in terms of error and time-per-action metrics, the participants still reported a preference for 3D gestural input when interacting with objects in close proximity (Figueiredo et al. 2018). Caggianese et al. (2019) conducted a similar study, comparing the HTC Vive and Leap Motion controllers for manipulating virtual objects in VR. Performing three tasks, eight participants were evaluated quantitatively and qualitatively, indicating a preference toward the physical controller of the HTC Vive based on performance and perceived difficulty (Caggianese et al. 2019). However, Caggianese et al. (2019) identified general difficulties with both input devices when it comes to performing different aspects of manipulation, such as selecting, positioning, and rotating a virtual object, at the same time. Kovarova and Urbancok (2014) compared the operability of a VR environment through keyboard and mouse with interaction through a smartphone. They argue that today's smartphones are equipped with sufficient sensors to be used as input controllers for successful interaction in a VR environment (Kovarova and Urbancok 2014). The results of their experiment confirm this possibility, showing that some smartphone participants even outperformed keyboard and mouse users in the presented task-based scenario (Kovarova and Urbancok 2014). Lepouras (2018) compared different input methods (5DT gloves, gamepad, numerical keyboard) for numerical input in VR. A study with 22 participants showed trends favoring gamepad controls over the gestural and keyboard interfaces due to familiarity and easy learnability. Interaction using a physical controller (HTC Vive) and 3D gestural input (Manus VR gloves) in a virtual environment was compared by Olbrich et al. (2018).
Within a developed proof-of-concept prototype representing a virtual lunar base, 17 participants manipulated virtual objects in order to solve an emergency task (Olbrich et al. 2018). Evaluating the UX, the authors had to reject their hypothesis that 3D gestural input using the VR gloves would be more attractive and thus perform better than the physical controller (Olbrich et al. 2018). Olbrich et al. (2018) can therefore only weakly recommend the use of such VR gloves in similar scenarios. The impact of display and input technologies on a user's spatial presence and on perceived controller naturalness was investigated by Seibert and Shafer (2018). Data collected from 207 participants, who played a first-person shooter game either on a standard monitor/keyboard-and-mouse setup or using an Oculus Rift DK1 HMD/Razer Hydra setup, indicate that using a VR HMD had a positive impact on spatial presence, while keyboard and mouse controls were assessed as more natural than the Razer Hydra tangible controller.

2.3 Immersive technologies

A comparison of a high-end (NVIS SX 60, Cyberglove II) and a low-cost (Oculus Rift DK1, Razer Hydra) HMD VR system was conducted by Young et al. (2014), evaluating both systems regarding visual perception and task-based interaction within two experiments. Their results indicate that at the time (2014), the low-cost VR system could keep up (within limits) with the high-end system for exploring perception and interaction in VR, suggesting that hardware limits are only transient (Young et al. 2014). Another comparison of immersive VR technologies was conducted by Tcha-Tokey et al. (2017), who examined differences in UX between HMD (mobile) and CAVE technologies in an edutainment scenario. Their results are based on a self-developed UX questionnaire and show that the CAVE technology provided a greater UX among the participants in their study (Tcha-Tokey et al. 2017). Furthermore, Tcha-Tokey et al. (2017) state that interaction design is arguably less challenging for CAVE than for HMD systems, while acknowledging that CAVE systems are usually more expensive and bulky compared to their HMD counterparts. The visual comfort of HMDs was investigated by Konrad et al. (2016), particularly adaptive approaches for display technologies. Within their work, Konrad et al. (2016) present improvements in UX achieved by using different display modes. The application of the presented display modes also indicates performance improvements regarding reaction time and accuracy (Konrad et al. 2016).

2.4 User interface design

Montano Murillo et al. (2017) state the importance of ergonomics when designing 3D UIs for VR, comparing three different UI layouts (ergonomic layout, limits-of-reach layout, and world-fixed layout) in an experimental study. Formalizing and evaluating their manipulation technique named “Erg-O,” Montano Murillo et al. (2017) propose strategies for re-arranging interactive UI elements in order to improve ergonomics and spatial accessibility. Following abstract menu-based and metaphoric virtual belt approaches, Wegner et al. (2017) compared two 3D UI design approaches for VR in an inventory management scenario within the context of serious games. While the abstract menu-based approach features a two-dimensional plane arranging selectable items in a grid, their proposed metaphoric virtual belt arranges items so that they surround the user (Wegner et al. 2017). A study with paramedics, mostly inexperienced with VR applications, revealed no significant differences between the two approaches in regard to usability (Wegner et al. 2017). Vosinakis and Koutsabasis (2018) evaluated different visual feedback techniques for interacting with virtual objects using 3D gestural input in VR and in a standard monitor setup, indicating better usability in VR and arguing for the importance of color-coded visual feedback.

2.5 Evaluation methods

The evaluation of VR systems, input technologies, and 3D UIs in order to analyze, assess, and test a developed artifact is a major challenge for researchers (LaViola et al. 2017; LaValle 2016). Due to its complexity, it can be approached from different perspectives, using different evaluation methods, depending on the investigation objective. For instance, standardized methods such as the Simulator Sickness Questionnaire (SSQ), the Motion Sickness Susceptibility Questionnaire, the System Usability Scale (SUS), and questionnaires that are part of the ISO 9241 (ergonomics of human–computer interaction) standard, as well as quantitative measurements from the developed system and its sensors, have been applied by researchers within the comparative and empirical studies described in this section, to name just a few.

In order to evaluate our developed VR system using the three different input technologies, we aim to investigate the experienced workload as well as the perceived flow of interaction. We also aim to measure the simulator sickness in order to gain feedback about our VR system in practice. However, measuring simulator sickness is not the focus of this comparative input technology study, and therefore, it is described within “Appendix 2” section.

2.5.1 NASA Task Load Index (TLX)

The NASA Task Load Index (TLX) is a method that lets a user estimate the experienced workload when operating an interactive system (Hart and Staveland 1988; Hart 2006); it has been applied to VR and input technology scenarios in different contexts before (Lackey et al. 2016; Bachmann et al. 2018). The TLX investigates six factors, namely physical demand, mental demand, temporal demand, effort, frustration, and the user's perceived own performance. Within a two-step approach, the user is first asked to weight and then rate these factors in order to calculate a final score (weighted rating) representing the user's perceived workload. In addition to evaluating the overall workload, it is also possible to have a closer look at the individual factors in order to gain further insights.
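The two-step weighting-then-rating procedure can be sketched as follows: each factor receives a rating on a 0–100 scale and a weight equal to the number of times it was chosen across the 15 pairwise comparisons, and the weighted rating is their weighted mean. The ratings and weights below are made-up example values, not study data.

```python
# The six TLX workload factors.
FACTORS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_weighted_score(ratings, weights):
    """ratings: factor -> value on a 0..100 scale.
    weights: factor -> number of times the factor was chosen in the
    15 pairwise comparisons (so the weights sum to 15)."""
    assert sum(weights.values()) == 15
    return sum(ratings[f] * weights[f] for f in FACTORS) / 15.0

# Example (invented) responses for one participant.
ratings = {"mental": 60, "physical": 30, "temporal": 40,
           "performance": 25, "effort": 55, "frustration": 20}
weights = {"mental": 4, "physical": 1, "temporal": 2,
           "performance": 3, "effort": 4, "frustration": 1}
score = tlx_weighted_score(ratings, weights)  # overall workload on 0..100
```

Inspecting the per-factor products alongside the overall score is what enables the closer look at individual factors mentioned above.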

2.5.2 Flow Short Scale (FKS)

The investigation of the user's overall interaction flow while operating a VR system is possible based on Csikszentmihalyi's flow theory (1988, 2014). The Flow Short Scale (FKS) by Rheinberg et al. (2003) is an adapted version of Csikszentmihalyi's work (1988). The FKS consists of a set of 16 Likert-scale statements investigating the smoothness and automaticity of the process, absorption, worry, and the perceived fit of skills and demands, which together characterize the user's overall “flow.”
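Scoring such a scale amounts to averaging Likert ratings per subscale. The following is only a rough sketch: the 7-point scale is standard for the FKS, but the exact item-to-subscale grouping used below is an assumption for illustration, not the official FKS scoring key.

```python
def mean(xs):
    return sum(xs) / len(xs)

def score_fks(items):
    """items: list of 16 Likert ratings, each 1..7.
    The index ranges below are assumed groupings, for illustration only."""
    assert len(items) == 16 and all(1 <= x <= 7 for x in items)
    flow_items = items[:10]    # assumed: fluency/automaticity + absorption
    worry_items = items[10:13] # assumed: worry items
    fit_items = items[13:]     # assumed: skill-demand fit items
    return {"flow": mean(flow_items),
            "worry": mean(worry_items),
            "fit": mean(fit_items)}

scores = score_fks([5] * 16)
```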

2.6 The authors' approach

In summary, different studies have been conducted in the past, from investigating one specific phenomenon such as locomotion, presence, or immersion, to examining different 3D UI design approaches, to comparing different display, system, or controller setups. While some of the presented studies show similarities, the most closely related to our work are probably the recent studies by Gusai et al. (2017), Streppel et al. (2018), Figueiredo et al. (2018), and Caggianese et al. (2019). However, our approach differs slightly from the similar efforts described here: (1) we directly compare three interaction approaches based on commonly used current state-of-the-art consumer technologies that differ in their classification, emphasizing the input controllers' characteristics in terms of their ability to be visually displayed in VR and their physicality (see Table 1); (2) our study is rather inductive and exploratory, as we aim to investigate multiple components such as workload and interaction flow; (3) we developed our own system that enables a non-expert user to explore open data in an immersive VR environment. To our knowledge, we provide a unique study setup comparing different input technologies, aiming to contribute new insights regarding user experience and behavior in an immersive VR environment within the context of open data exploration.

3 Open data exploration in virtual reality

Our work is driven by the motivation of applying VR technologies in a non-entertainment setting. In times of a thriving open data movement (Janssen et al. 2012), we believe open data exploration in an immersive VR environment has the potential to provide users with new insights and perspectives on the data. Immersive data exploration technology is likely to be used by both expert and (as importantly) non-expert users as VR technology evolves. Consequently, it is important to us to not only passively display data, but to enable active interaction with the data, encouraging the user to explore, inspect, and move among the displayed data. This section describes the concept and interaction design as well as the implementation of the developed VR system.

3.1 Concept and interaction design

The developed VR system uses a defined “Data Structure Reference Model” that it can parse, interpret, and act upon. Enabling the VR system to handle a diverse set of data opens up the opportunity for various exploratory studies with different data sets in different scenarios. Collected data, from a single source or from multiple sources, are structured according to the defined model before being visualized in the three-dimensional space. These data transformation and visual mapping procedures are essential steps in the visualization pipeline (also referred to as the “information visualization reference model”) (Ward et al. 2010). Using such a data structure reference model, independent of the original data, allows the VR system to act as a unified interface for data from potentially different sources. Data collection and modeling outside the VR system follow a clear separation of concerns (SoC) design principle.
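The separation-of-concerns idea can be illustrated with a minimal sketch: records from heterogeneous sources are normalized into one shared node schema before the VR system ever sees them. The source names and field names below are hypothetical, not those of the actual reference model.

```python
def to_node(record, source):
    """Normalize a raw record from a given (hypothetical) source
    into the shared node model used by the VR system."""
    if source == "source_a":
        return {"name": record["title"], "value": record["count"]}
    if source == "source_b":
        return {"name": record["label"], "value": record["total"]}
    raise ValueError(f"unknown source: {source}")

# Two records with different shapes, unified into one schema.
nodes = [
    to_node({"title": "Node A", "count": 10}, "source_a"),
    to_node({"label": "Node B", "total": 7}, "source_b"),
]
# The VR system now consumes one homogeneous list, regardless of origin.
```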

The VR system itself is conceptualized around a set of a few key features. The overall visuals are kept intentionally minimalistic in order to direct the user's attention to the data rather than to distracting elements in the virtual environment. Individual data items are visually represented as individual data entities (hereafter referred to as “nodes”), e.g., as a cube or sphere with its unique name displayed above it. The placement of each node in the three-dimensional space is decided as part of the (prior) visual mapping process. Different placement and arrangement mappings may be applied based on the purpose and aim of the data exploration activity.
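As one example of such a placement mapping (an illustration only; the study itself may use a different arrangement), nodes could be laid out on a circle around the user at eye height:

```python
import math

def circular_layout(n_nodes, radius=5.0, height=1.6):
    """Place n_nodes evenly on a circle of the given radius,
    at a fixed height (e.g., approximate eye level in meters)."""
    positions = []
    for i in range(n_nodes):
        angle = 2 * math.pi * i / n_nodes
        positions.append((radius * math.cos(angle),  # x
                          height,                    # y (eye level)
                          radius * math.sin(angle))) # z
    return positions

positions = circular_layout(8)
```

Swapping this function for, say, a grid or a data-driven layout changes the arrangement without touching the rest of the pipeline, which is the point of keeping visual mapping a separate step.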

In order to allow user interaction in any kind of VR system, Bowman and McMahan (2007) describe that input technology and input interpretation software are needed to successfully establish a human–virtual environment interaction loop. Consequently, besides physically looking around to perceive the displayed data, a set of interactive features allows the user to become an active part in the developed VR system.

Following up on design decisions made within our earlier work (Reski and Alissandrakis 2016), the VR system features a visual interface to display detailed information about a node. Once the user takes action to display such detailed information, three two-dimensional planes, similar to sheets of paper and each displaying a different kind of information, are positioned around the user's current line of sight: one in the center, one slightly to the left, and one slightly to the right (similar to the “content” interaction zone presented by Wirth et al. 2018). The center plane presents the name as well as a short descriptive, cohesive text about the node. The left plane features a list of different characteristic items, e.g., numerical values, about the node. The right plane displays one image associated with the node at a time. The user may interact with the right plane to browse through other images associated with the node.
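Positioning the three planes around the line of sight reduces to simple trigonometry on the user's gaze direction. The sketch below assumes a yaw-only gaze and a 30-degree lateral offset; both values are illustrative, not taken from the actual implementation.

```python
import math

def plane_positions(user_pos, gaze_yaw_deg, distance=2.0, offset_deg=30.0):
    """Return positions for the left, center, and right detail planes,
    placed at the given distance around the user's gaze direction.
    The 30-degree offset is an assumed value."""
    positions = []
    for d in (-offset_deg, 0.0, offset_deg):  # left, center, right
        yaw = math.radians(gaze_yaw_deg + d)
        positions.append((user_pos[0] + distance * math.sin(yaw),
                          user_pos[1],
                          user_pos[2] + distance * math.cos(yaw)))
    return positions

# User at the origin (eye height 1.6 m), looking straight ahead (yaw 0).
left, center, right = plane_positions((0.0, 1.6, 0.0), gaze_yaw_deg=0.0)
```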

Following a target-based travel approach (Medeiros et al. 2016), the user has the ability to move between and explore the nodes in the three-dimensional space, while explicitly linking the user’s position to one node at all times, allowing easy identification of the current (data) item of interest.

A filter mechanism that can be applied and reset on demand provides an element of guidance for the user's exploration. The VR system can compare the node the user is currently located at to all other nodes on an “is greater than” and “is less than” option basis. The guidance is limited to the results that are (a) minimally, (b) maximally, and (c) medially different, in order not to overwhelm the user with feedback about all, arguably too many, other nodes. The nodes within the applied filter result set are highlighted, visually connected to the user's current location, and color-coded (green = minimum, yellow = median, red = maximum difference). Additionally, while a filter is active, the user's target-based travel is restricted exclusively to nodes within the applied filter result set.
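The filter logic can be sketched as follows: compare the current node's value against all others under the chosen option, then keep only the minimally, medially, and maximally different matches with the color coding described above. Node names and values are invented; the real system operates on the node model, not a plain dictionary.

```python
def apply_filter(nodes, current, mode):
    """nodes: name -> numeric value; current: name of the node the user
    is located at; mode: 'greater_than' or 'less_than'."""
    ref = nodes[current]
    if mode == "greater_than":
        matches = {n: v - ref for n, v in nodes.items() if n != current and v > ref}
    elif mode == "less_than":
        matches = {n: ref - v for n, v in nodes.items() if n != current and v < ref}
    else:
        raise ValueError(mode)
    if not matches:
        return {}
    ranked = sorted(matches, key=matches.get)   # ascending difference
    return {"green": ranked[0],                 # minimal difference
            "yellow": ranked[len(ranked) // 2], # medial difference
            "red": ranked[-1]}                  # maximal difference

nodes = {"A": 10, "B": 14, "C": 25, "D": 40, "E": 5}
result = apply_filter(nodes, "A", "greater_than")
```

Restricting target-based travel while the filter is active then simply means only accepting travel requests whose target appears among the returned nodes.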

Finally, a bookmark feature enables the user to visually highlight one node at a time. This feature allows the user to keep track of a discovered node of interest, without worrying about forgetting it or being unable to find it again during the immersive data exploration activity.

Since the comparison of different input technologies is one of the main motivations of the presented work, three interaction prototypes were implemented, each able to operate the developed VR system. First, GAMEPAD interaction is supported using an Xbox One controller and an Oculus Rift Consumer Version (CV) HMD. Second, the VR system can be operated with 3D gestural input through vision-based motion controls (VBMC), using the Leap Motion controller attached to the front of an Oculus Rift CV HMD. Third, using the HTC Vive, a room-scale virtual reality (RSVR) environment is supported, enabling the user to move freely within the boundaries of a physical two-by-two meter area. Each interaction functionality of the VR system is mapped to an appropriate controller feature available in each prototype (see Table 2).
Table 2

Interaction features per prototype. Instruction videos for participants: GAMEPAD https://vrxar.lnu.se/odxvr/gamepad.mp4, VBMC https://vrxar.lnu.se/odxvr/vbmc.mp4, RSVR https://vrxar.lnu.se/odxvr/rsvr.mp4

| Interaction feature | GAMEPAD | VBMC | RSVR |
|---|---|---|---|
| Node selection | Gaze | Gaze | Touch node in close proximity with controller, or pointer tool to aim at distant node |
| Move to other node | Node selection + A button | Node selection + index finger point forward (left or right hand) | Node selection + trigger button |
| Show/hide node information | Y button | Thumbs up (right hand) | Grip button |
| Browse through node’s images | Gaze + A button | Index finger point left (right hand) or index finger point right (left hand) | Touch or pointer tool |
| Show/hide filter menu | B button | Thumbs up (left hand) | Application menu button |
| Select and apply filter option | D-pad + A button | Touch filter option | Touch filter option + trigger button |
| Set/unset node as bookmarked | Node selection + X button | Node selection + all fingers spread (left or right hand) | Touch node + grip button |

3.2 Implementation

The VR system can be divided into three parts: the data structure reference model, the VR application itself, and supplemental applications for data collection, data transformation, and visual mapping. Figure 2 provides an overview of the VR system. The data structure reference model is based on the JavaScript Object Notation (JSON) file format and contains all information the VR application needs in order to set up and visualize data in the VR environment. The VR application is implemented using the Unity3D cross-platform game engine, version 5.3.4p6 (64-bit), on a computer running Windows 10 (x64-based). SimpleJSON is used to parse and handle JSON-formatted data in Unity3D. The Oculus Rift CV integration in Unity3D is enabled through the Oculus OVR Plugin 1.3.2 for Unity 5. In order to integrate VBMC using the Leap Motion controller, the Unity Assets for Leap Motion Orion BETA v4.1.1 are used. The SteamVR Plugin 1.1.0 and the SteamVR Unity Toolkit handle the integration of the HTC Vive. The VR application requests data, aggregated and prepared beforehand, from a Node.js server. Openly available online sources and data can be used, for example, those retrieved from DBpedia, Wikimedia, Wolfram Alpha, and The New York Times, in order to create data sets to be visualized in the VR application, using programming languages like R and JavaScript. In order to collect expressive data about how a user behaves and interacts with the VR application, a logging system is implemented (described in Sect. 4.2.4) to record user interactions. This logging system generates data as a comma-separated value (CSV) file for further analysis by the researcher.
Fig. 2

VR system architecture
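To illustrate the JSON-based data structure reference model, a single node could look roughly as follows. The exact schema is not published, so all field names and values here are hypothetical; the record simply bundles the name, descriptive text, characteristic items, and image references that the three information planes display.

```javascript
// Hypothetical example of one node in the JSON data structure reference
// model, combining data from several open sources into a unified record.
// All field names are assumptions for illustration only.
const node = {
  name: "Alabama",
  description: "Short descriptive, cohesive text shown on the center plane.",
  position: { latitude: 32.36, longitude: -86.28 }, // capital city location
  items: [                                          // left plane
    { label: "Abbreviation", value: "AL" },
    { label: "Democratic %", value: 34 },
    { label: "Republican %", value: 62 },
  ],
  images: ["flag.jpg", "seal.jpg"],                 // right plane, browsable
};

// The Node.js server would serialize such records for the VR application:
const payload = JSON.stringify(node);
```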

4 Methodology

In order to gain insights about the developed VR system in practice, with a particular focus on comparing the three input technologies according to the research question and hypotheses stated in Sect. 1, we conducted an exploratory user interaction study. In the following, details about the conduct of this user interaction study as well as the applied data collection methods are provided.

4.1 User interaction study

The input technology prototype, with the three conditions GAMEPAD, VBMC, and RSVR (see Table 1), served as the independent variable. Measurable differences in experienced workload and perceived flow of interaction (dependent variables) were expected as a result of the participants operating the VR system using these prototypes. These measurements allow for the identification of potential differences in user experience and behavior between the prototypes, as well as advantages and disadvantages of one prototype over another, within the scope of the presented data exploration scenario and the developed VR system.

A between-group design was used, with each participant using one of the three prototypes (as described in Sect. 3.1 and Table 2). In order to have the same number of participants per prototype, we cycled through the options for each scheduled participant session.

Although there was no explicit time limitation from the researchers’ side, a duration of approximately 43–53 min per session (see Sect. 4.1.3) was planned. Such a session comprised all phases of the study, from completing a user-consent and biographical questionnaire (3 min), to getting familiar with and completing a task using one of the three VR prototypes (18–28 min), to completing various post-test questionnaires (22 min).

4.1.1 Setup and environment

All study sessions were conducted in our VR lab space located at Linnaeus University. The VR lab features a square two-by-two meter area, intended for the user to move freely without any obstacles.1 The VR lab provides enough space for the participant and the researcher to conduct the study comfortably and without interruption.

4.1.2 Task

Using one of the prototypes, each participant was asked to complete a single task, the same for all participants. Using information aggregated from Wikipedia (Wikimedia and DBpedia), Wolfram Alpha, and The New York Times, we created a data set representing the results of the 2016 United States (US) presidential election. Within the VR environment, nodes represented US states, each placed at the geographical location of its capital city in the real world to facilitate navigation. Exploring a node in more detail provides overall descriptive information about the state as well as some images (both from Wikipedia). The left panel of a node’s detail view features information items about the state, such as its abbreviation and capital (Wikipedia), its longitude and latitude (Wolfram Alpha), and, most relevant for the task, the voting results (in percent) of the 2016 US presidential election for the Democratic, Republican, and other parties (The New York Times). These voting results were also made available as filter options in the VR system, to assist the user’s exploration. Figure 3 shows several screenshots from within the VR environment.
Fig. 3

Screenshots from within the VR environment. Left to right and top to bottom: selecting a node, node information panels (left and center), node information panels (center and right), filter menu. Note that here the HTC Vive controller can be seen, as these screenshots are from the RSVR prototype. The instruction video can be seen here: https://vrxar.lnu.se/odxvr/rsvr.mp4

Each participant was asked to explore the data set and identify two states where both the Democratic and Republican party results were close to 50%, indicating a tight election race. The state of Alabama was chosen as the starting point in every session.

Each participant was encouraged to freely explore the nodes using the VR system, with their own strategy and pace. To complete the task, each participant had to name, at some point in time, two states they believed fit the task criteria. There are more than two states that (within a reasonable margin) satisfy the task criteria (see Table 6 in “Appendix 3” section). We justify the choice of a task with no single precise answer by the overall inductive and exploratory nature of our VR system, and by the fact that we wanted the participants to interact with the VR system in a meaningful way in order to collect data on the different input technologies in practice. A similar task with no precise answers was successfully used in a prior study (Reski and Alissandrakis 2016).

4.1.3 Study procedure

Each user study session followed the same procedure. First, the participant was welcomed and asked to fill in a user-consent and short biographical questionnaire (3 min). Second, the researcher would briefly introduce the developed VR system (8 min). For this purpose, three short introduction videos were provided, one per input technology, of approximately 3 min, giving all participants the same information. After watching the introduction video, the participant was given a short warm-up time with the VR prototype. Third, once comfortable with wearing the HMD and familiar with the interactions, the participant was asked to complete the task as described in Sect. 4.1.2 (10–20 min). Fourth, after completing the hands-on VR part of the study session, the participant was asked to complete the Simulator Sickness Questionnaire (Kennedy et al. 1993) (3 min). Fifth, the participant was asked to estimate their experienced workload following the NASA Task Load Index scale (Hart 2006) (8 min). Sixth, the participant was asked to complete the standardized Flow Short Scale questionnaire (Rheinberg et al. 2003) (3 min), to learn about the participant’s overall flow of interaction. Finally, a short informal interview based on the researcher’s observations of the participant operating the VR prototype and completing the task was conducted to get some further insights and impressions from the participant (8 min).

4.2 Data collection

In order to investigate and compare the different input technologies in a VR environment, established data collection methods were chosen (see Sect. 2.5). The results from these methods could then be compared and put into perspective accordingly.

4.2.1 Biographical questionnaire

Some data (age and sex) about the participants were collected within a biographical questionnaire. Furthermore, the participants were asked to state some details about their prior experiences with the selected input technology.

4.2.2 NASA Task Load Index (TLX)

Within the scope of the conducted study, it is particularly important to gain insights about the user’s cognitive workload with regard to the different input technologies under investigation. After reporting on the experienced simulator sickness, each participant assessed their experienced workload based on the two-step approach as described in Sect. 2.5.1. In order to facilitate the participants’ understanding of the six TLX factors (mental demand, physical demand, temporal demand, own performance, effort, frustration), a handout with the rating scale definitions according to NASA (2018) was provided. The workload scale ranges from 0 to 100, with values toward 0 indicating an extremely low workload and values toward 100 indicating an extremely high workload.
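The standard two-step NASA TLX scoring (Hart and Staveland 1988) can be sketched as follows: each factor receives a weight from the 15 pairwise comparisons (0–5, summing to 15) and a raw rating (0–100); the adjusted rating is weight × rating, and the overall workload is the sum of adjusted ratings divided by 15. A minimal sketch, with hypothetical input data:

```javascript
// Overall NASA TLX workload from the two-step procedure: weights come
// from 15 pairwise factor comparisons (0-5 each, summing to 15), ratings
// are on the 0-100 scale. Adjusted rating = weight * rating.
function tlxWorkload(factors) {
  const totalWeight = factors.reduce((sum, f) => sum + f.weight, 0);
  if (totalWeight !== 15) throw new Error("weights must sum to 15");
  const weightedSum = factors.reduce((sum, f) => sum + f.weight * f.rating, 0);
  return weightedSum / 15; // overall workload, 0-100
}
```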

4.2.3 Flow Short Scale (FKS)

Measuring the user’s flow of interaction, in order to reveal similarities and differences between the three input technologies, is important due to their very different modes of operation. After assessing their experienced workload, each participant reported on their perceived flow of interaction by completing the FKS questionnaire according to Rheinberg et al. (2003). The statements in the FKS questionnaire were rated on a seven-point Likert scale ranging from not at all through partly to very much. Because they lacked context with regard to the study task, we decided not to consider the three items of the questionnaire’s “fit of skill and requirements” section.
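The FKS scoring used here can be sketched as subscale means over the 1–7 Likert responses, following the item grouping by Rheinberg et al. (2003) as reported in Table 5 (smooth automatized process: items 2, 4, 5, 7, 8, 9; ability to absorb: items 1, 3, 6, 10; flow: items 1–10; concern: items 11–13). A minimal sketch:

```javascript
// FKS scoring sketch: `responses` maps item numbers (1-13) to 1-7 Likert
// ratings. Subscale scores are the means of their items, following the
// grouping by Rheinberg et al. (2003).
const mean = (values) => values.reduce((a, b) => a + b, 0) / values.length;
const subscale = (responses, items) => mean(items.map((i) => responses[i]));

function fksScores(responses) {
  return {
    smoothProcess: subscale(responses, [2, 4, 5, 7, 8, 9]),      // F I
    absorption: subscale(responses, [1, 3, 6, 10]),              // F II
    flow: subscale(responses, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),  // items 1-10
    concern: subscale(responses, [11, 12, 13]),                  // F III
  };
}
```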

4.2.4 Logging

In order to collect comprehensive data of the participant operating the developed VR system, we implemented2 a simple logging system. Following a predefined protocol, it is possible to keep track of every action the participant performs, such as moving between nodes, examining a node in more detail, or applying filter options. The implemented logging system creates a CSV file, where each entry represents an individual user action in consecutive order. Following an extended Action-Object-Target protocol allows these user actions to be tracked systematically. A time stamp (in milliseconds since the start of the application) provides information about when each action was performed. Using software such as R, it is possible to quantitatively evaluate all user actions based on the created CSV file in order to create an interaction summary.
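A minimal sketch of what such log entries and their evaluation might look like follows. The exact column layout of the extended Action-Object-Target protocol is not published, so the fields shown here are an assumption.

```javascript
// Sketch of the extended Action-Object-Target log: one CSV line per user
// action, with a timestamp in milliseconds since application start.
// The column layout (timestamp, action, object, target) is an assumption.
function parseLog(csv) {
  return csv.trim().split("\n").map((line) => {
    const [timestamp, action, object, target] = line.split(",");
    return { timestamp: Number(timestamp), action, object, target };
  });
}

// Example interaction summary measure: actions per minute over the session.
function actionsPerMinute(entries) {
  const durationMinutes =
    (entries[entries.length - 1].timestamp - entries[0].timestamp) / 60000;
  return entries.length / durationMinutes;
}
```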

4.2.5 Observation and informal interview

In addition to the quantitative data collected through logging, the researcher observed the participants during task completion and took notes. An informal interview and discussion with the participant at the end of the study session provided an opportunity to collect qualitative feedback and impressions. The participant was encouraged to state any noteworthy feedback based on their experience of operating the VR system. All data collected from the observations and informal interviews were then summarized and categorized, ultimately compiling into a list of overall feedback regarding the VR system and its input technology prototypes.

5 Results

Applying the methodology as described in Sect. 4, a user interaction study with \(n = 24\) participants was conducted. Since each participant tested one of the three developed VR prototypes, data from \(n = 8\) participants per input technology (GAMEPAD, VBMC, RSVR) were collected. Figure 10 (in “Appendix 3” section) and Table 3 present the demographic information and prior experiences of the study participants regarding the input technology.
Table 3

User interaction study: prior experience of participants with the input technology

| Input technology | Not experienced | Experienced |
|---|---|---|
| GAMEPAD | 1 | 7 |
| VBMC | 2 | 6 |
| RSVR | 4 | 4 |

5.1 Task completion

According to the task definition as described in Sect. 4.1.2, each study participant was asked to name two states as answers. Table 4 provides a detailed summary of these answers. The majority (21 participants) provided both answers that were considered appropriate; three participants (one using the RSVR and two using the GAMEPAD prototype) provided as one of their answers a state that was not within the target.
Table 4

Results: Task answers

| State | Democratic % | Republican % | Difference | GAMEPAD | VBMC | RSVR | Total |
|---|---|---|---|---|---|---|---|
| Florida | 48 | 49 | 3 | 6 | 1 | 5 | 12 |
| Pennsylvania | 48 | 48 | 4 | 3 | 4 | 5 | 13 |
| North Carolina | 47 | 51 | 4 | 0 | 1 | 0 | 1 |
| Michigan | 47 | 48 | 5 | 1 | 2 | 2 | 5 |
| New Hampshire | 48 | 47 | 5 | 1 | 1 | 2 | 4 |
| Wisconsin | 47 | 48 | 5 | 2 | 2 | 1 | 4 |
| Arizona | 45 | 50 | 5 | 0 | 0 | 0 | 0 |
| Georgia | 46 | 51 | 5 | 0 | 2 | 0 | 2 |
| Nevada | 48 | 46 | 6 | 1 | 1 | 0 | 2 |
| Virginia | 50 | 44 | 6 | 0 | 2 | 0 | 2 |
| Maine | 48 | 45 | 7 | 1 | 0 | 1 | 2 |
| Minnesota | 47 | 45 | 8 | 1 | 0 | 0 | 1 |

Difference is determined based on the combined distance of each percentage to 50. None of the participants provided Arizona as an answer. Maine and Minnesota are answers that are not considered to be within the top ten expected acceptable answers for this data set (see also Table 6 in “Appendix 3” section)
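The difference measure used in Table 4 (the combined distance of both parties’ percentages from 50, where a lower value indicates a tighter race) can be sketched as follows:

```javascript
// "Difference" as used in Table 4: combined distance of the Democratic
// and Republican vote percentages from 50.
const raceDifference = (democratic, republican) =>
  Math.abs(50 - democratic) + Math.abs(50 - republican);
```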

5.2 NASA Task Load Index (TLX)

Using the participants’ self-reported workload data based on the NASA TLX questionnaire, the individual cognitive workloads3 according to Hart and Staveland (1988) were calculated. Figure 4 shows that RSVR had the lowest median value of estimated workload, but also the largest inter-quartile range. VBMC had the highest estimated workload median value, followed by GAMEPAD. Both GAMEPAD and VBMC had a low inter-quartile range compared to RSVR. Figure 5 illustrates the comparison between the adjusted ratings of each NASA TLX factor as reported by the participants for each input technology. Looking at the median values and inter-quartile ranges, a few things are noteworthy. PD (physical demand) and FR (frustration) are higher for VBMC compared to GAMEPAD and RSVR; interestingly, VBMC participants also rated their own performance the best. (A low score of OP (own performance) indicates “good” performance.) EF (effort) is higher for GAMEPAD compared to VBMC and RSVR; note that MD (mental demand) for GAMEPAD is also slightly higher.
Fig. 4

Results: Workload estimation based on NASA TLX. See Fig. 11 for additional breakdown considering the participants experience with each input technology

Fig. 5

Results: NASA TLX factors adjusted ratings (weight x rating). MD = mental demand, PD = physical demand, TD = temporal demand, OP = own performance, EF = effort, FR = frustration. See Fig. 12 for additional breakdown considering the participants experience with each input technology

5.3 Flow Short Scale (FKS)

Table 5 presents the mean and standard deviation values for each Likert-scale item of the FKS, grouped according to the suggestions by Rheinberg et al. (2003). The total scores for flow, smooth process, absorb, and concern are illustrated in Fig. 6.

Participants using RSVR experienced more flow compared to GAMEPAD and VBMC; the experienced flow for GAMEPAD and VBMC was similar.
Table 5

Results: Flow Short Scale (FKS) overview

| Flow Short Scale | GAMEPAD mean | GAMEPAD SD | VBMC mean | VBMC SD | RSVR mean | RSVR SD |
|---|---|---|---|---|---|---|
| F I—Smooth automatized process | 5.00 | 0.33 | 4.54 | 0.33 | 5.38 | 0.52 |
| 8) I knew what I had to do for each step of the way. | 4.63 | 1.77 | 4.88 | 1.64 | 5.38 | 1.92 |
| 7) The right thoughts/movements occur of their own accord. | 4.75 | 1.49 | 4.38 | 1.92 | 4.38 | 1.51 |
| 9) I felt that I had everything under control. | 4.88 | 1.55 | 4.88 | 0.99 | 5.50 | 1.93 |
| 4) I had no difficulty concentrating. | 5.00 | 1.93 | 4.25 | 2.49 | 5.88 | 1.46 |
| 5) My mind is completely clear. | 5.50 | 1.20 | 4.13 | 1.73 | 5.63 | 0.74 |
| 2) My thoughts/actions ran fluidly and smoothly. | 5.25 | 1.67 | 4.75 | 1.49 | 5.50 | 1.69 |
| F II—Ability to absorb | 4.47 | 1.54 | 4.94 | 0.88 | 4.91 | 1.61 |
| 6) I was totally absorbed in what I was doing. | 6.25 | 0.71 | 6.13 | 1.13 | 6.25 | 1.04 |
| 1) I felt the right amount of challenge. | 4.50 | 1.20 | 4.75 | 1.39 | 4.63 | 1.69 |
| 10) I was completely lost in thought. | 2.50 | 1.20 | 4.00 | 1.60 | 2.75 | 1.49 |
| 3) I did not notice time passing. | 4.63 | 1.85 | 4.88 | 1.13 | 6.00 | 1.31 |
| Flow (1–10) | 4.79 | 0.96 | 4.70 | 0.60 | 5.19 | 1.03 |
| F III—Concern | 2.54 | 0.76 | 2.92 | 0.40 | 3.17 | 0.97 |
| 11) Something important to me was at stake here. | 2.38 | 1.41 | 2.63 | 2.26 | 2.88 | 1.81 |
| 12) I did not make any mistake here. | 3.38 | 1.69 | 2.75 | 1.98 | 4.25 | 1.83 |
| 13) I was worried about failing. | 1.88 | 1.64 | 3.38 | 2.39 | 2.38 | 1.92 |

Fig. 6

Results: Flow Short Scale (FKS) categories. See Fig. 13 for additional breakdown considering the participants experience with each input technology

5.4 Logging

Using the implemented logging system in our VR system, as described in Sect. 4.2.4, a detailed summary of all the participants’ actions operating the three VR prototypes was compiled. The average time a user spent at a node was very similar across all VR prototypes, although GAMEPAD had a higher standard deviation than VBMC and RSVR (GAMEPAD = 25.35, SD = 26.024; VBMC = 24.09, SD = 8.882; RSVR = 26.91, SD = 8.690; all in seconds). The fewest nodes on average were visited by the VBMC participants (GAMEPAD = 26, SD = 11; VBMC = 13, SD = 4; RSVR = 20, SD = 12). Overall, GAMEPAD participants performed the most interactions per minute, with 20 (SD = 5). VBMC participants performed on average 14 interactions per minute (SD = 2), while RSVR participants interacted with the VR system 12 times per minute (SD = 5). Figure 14 in “Appendix 3” section shows the task actions per minute.

In the overall VR system, certain actions cannot be performed in certain contexts; e.g., a user cannot move to another node while the content-view (information panels) of the current node is shown, as it would have to be dismissed (hidden) first. The most contextually “wrong” interactions of this kind on average were performed by GAMEPAD participants (mean = 20, SD = 17). With a mean value of 8 (SD = 5), VBMC participants had the fewest contextually wrong interactions. RSVR participants performed on average 13 (SD = 12) contextually wrong interactions.

Furthermore, VBMC participants completed the task fastest on average, in 5.03 minutes (SD = 1.62). GAMEPAD and RSVR participants took 7.67 (SD = 4.46) and 8.86 (SD = 5.10) minutes to complete the task, respectively. The task completion times are shown in Fig. 15 in “Appendix 3” section.

Figure 7 shows strong positive correlations between content-view triggers and the amount of node-to-node movements for all VR prototypes. This indicates that all participants both examined and explored the data in detail.
Fig. 7

Results: Moves vs content views, during the task

Using the logging data, pathway visualizations were created, illustrating the exact node-to-node movement of each participant when completing the task. Independent of the three VR prototypes, the participants used different (movement) strategies to complete the task, as the examples in Fig. 8 illustrate. Some (\(n=10\)) participants completed the task and named two answers as soon as they encountered two locations they considered suitable, exploring the locations to a rather minimal extent. In contrast, other participants (\(n=6\)) explored the locations to a greater extent, going back and forth multiple times between already visited nodes.
Fig. 8

Results: Some examples of exploring behaviors from the study (left: highly complex and systematic, middle: loops of lower complexity, right: straight walk). Such strategies were observed across all prototypes

5.5 Observations and informal interview

Sixteen participants were observed applying the filter options provided by the VR system (see Sect. 3.1) in a systematic way in order to find suitable answers. They moved to a node, triggered the content-view to see the votes, applied a filter to, e.g., find states that had slightly more votes for one of the parties, and then moved to that node. In most cases they started this procedure again, or moved back to the previous node in case they considered it more suitable as an answer. However, six participants asked for clarification about the filter options after the VR system introduction, while five users made only minimal (if any) use of the filter features to complete the task, instead following a trial-and-error strategy to find suitable answers. Twelve participants actively emphasized the overall pleasant VR experience, positively highlighting the target-based movement transitions from node to node. They were furthermore excited about the possibilities provided by the overall VR system. Some suggestions for improvement were also expressed by the participants. Ten participants noted occlusion issues, e.g., nodes in denser areas interfering with the readability of the elements in the content-view. Five participants made a feature request, asking for some kind of map view, or bird’s-eye view, in order to get a quick overview of all locations, e.g., by zooming out and having a more “traditional” view of the nodes from above. Four participants suggested attaching additional information directly to the nodes when selecting a node. They argued that this could prevent frequent opening and closing of the content-view when searching for nodes containing a specific parameter.
Five participants (GAMEPAD: 3, RSVR: 2) had trouble remembering the button layout of the respective physical controllers, arguing that the button layout was not intuitive and that they had to make a noticeable mental effort to learn and remember which button on the controller triggered which action. The node selection through gaze input (GAMEPAD, VBMC) was physically noticeable according to four participants, especially when trying to select distant nodes and then triggering a movement action.

5.6 Limitations

Conducting one-on-one user interaction studies in the context of VR is relatively costly in regard to time and resources. Due to the comparatively limited number of participants (overall and per prototype), it is not possible to draw definitive conclusions, but rather to note trends and noteworthy considerations based on the collected data and results. Additional studies, e.g., with a higher participant count or participants from a specific target group, could provide additional meaningful data in the future. Furthermore, the collected data and results are to be interpreted within the context of data exploration, particularly considering an exploratory task with no time limitations.

6 Discussion

In this section we discuss the reported results from two perspectives: from a comparative input technology perspective, and how they can inform our developed VR system.

6.1 Input technologies

The main purpose of this study is to investigate potential differences in the operation of the same VR system using different input technologies. All three VR prototypes (GAMEPAD, VBMC, RSVR) enabled the participants to solve the given task in a satisfying manner. Although the exploration strategy differed from user to user, we did not find any strong correlation (also given the sample size) between the input technologies and the participants’ ability to solve the task. For example, as seen in Table 4, compared to GAMEPAD and RSVR, all VBMC participants provided answers within the predetermined acceptable range; however, it was the RSVR participants overall who provided the more highly ranked answers (according to Table 6 in “Appendix 3” section).

Examining the NASA TLX adjusted ratings, it is noticeable that the factor MD (mental demand) was reported fairly prominently across all input technologies (see Fig. 5). We believe that this is a result of the overall interplay between operating the VR prototype with each individual input technology and the exploratory task we asked the participants to complete. Generally speaking, the individual workload assessments are rather unique to each participant. Nevertheless, certain trends can be observed.

Users operating the VR prototype using the GAMEPAD controls reported a comparatively high adjusted rating for the factor EF (effort). Examining the summary of the logging data, the mean value of the sum of all contextually wrong interactions is the highest for GAMEPAD among the three input technologies. We suspect a relation between these facts, as the GAMEPAD participants had a harder time remembering the controller’s button layout than the VBMC and RSVR users (according to observations). This might be caused by the fact that the gamepad controller itself has no visual representation in the VR space (see Table 1). Additionally, the reported higher MD (mental demand) adjusted rating of the GAMEPAD participants might relate to this self-reported higher effort.

The higher PD (physical demand) adjusted rating reported by the VBMC participants could be explained by the increased hand, finger, and thumb movement and posture compared to the other conditions.

Cardoso (2016) states that interaction using the Leap Motion controller (3D gestural input) required a considerably higher effort (using the ISO 9241-9 questionnaire), compared to gamepad and gaze techniques, to complete the path following tasks within their study. This corresponds to the reported high TLX PD (physical demand) factor of our study participants (see Fig. 5); note that the EF (effort) TLX factor would not seem to be directly comparable with effort as defined in the ISO 9241-9 questionnaire (mental and physical vs physical only).

According to the logging data, although the VBMC users had the fewest contextually wrong interactions on average, they also had the fewest interactions in general. While all VBMC users were able to solve the task successfully, they also required the least amount of time to do so, according to the calculated mean completion time. In combination with the researcher’s observations, a drawback of the logging system becomes apparent. The logging system only keeps track of detected interactions, such as the push of a button or, in the case of the VBMC, a recognized/detected gesture or hand posture. However, this detection is not perfect, and the users also need to learn how to make adequate hand postures. Quantitative data (e.g., from a video recording and analysis) on this matter were not collected, but the users of the VBMC input technology sometimes had to try multiple times to get the intended interaction detected. Despite the task’s success rate and the overall few interactions in a wrong context, we suspect that this is the reason for the dominant reported FR (frustration) factor.

Despite the higher rated PD (physical demand) and FR (frustration), it is interesting to observe that VBMC participants still gave a high self-assessment of their performance (low score of OP (own performance) compared to GAMEPAD and RSVR).

Looking at the overall values based on the FKS, the RSVR users felt slightly more “in the flow” compared to the GAMEPAD and VBMC participants. However, these results are only marginally apart from each other. RSVR users reported the best flow rates when operating the VR system, which could relate to their lower workload compared to GAMEPAD and VBMC (see Fig. 4).

VBMC users experienced a slightly less smooth process when interacting with the VR system, which we believe is related to the inaccuracy of this input technology’s hand gesture/posture detection. As the VBMC users sometimes had to try multiple times to get their interaction recognized by the Leap Motion controller, and thus by our VR system, this arguably interrupted their flow experience.

Looking at “Absorb” in Fig. 6, RSVR would arguably rank first, followed by VBMC, with GAMEPAD certainly least absorbed. In this case, we believe the visual element of the input technology (virtual hands for VBMC, visual representation of the physical controller in RSVR) plays a role. Since humans rely primarily on their visual senses, the translation of their own movements into a visual representation in the VR space, as is the case for the VBMC and RSVR input technologies, seems to be a factor that extends their feeling of being absorbed in the immersive environment.

Based on the pathway visualizations created from the logging data, it seems that every participant had their own strategy for solving the given task, independent of the input technology used to interact with the VR system. The logging data show that VBMC participants overall moved the least frequently on average. Additionally, when examining the individual pathway visualizations of the VBMC participants, most of them can be categorized as “straight walk” explorations. Consequently, they explored the fewest nodes compared to GAMEPAD and RSVR participants, which, however, did not hinder them from completing the given task.

6.2 Open data exploration VR system

Within the given task, the study participants were encouraged to identify two nodes according to certain criteria (see Sect. 4.1.2). Independent of their exploration strategy, we positively note that the majority of participants were able to identify not just one but two suitable nodes, while only three of the 24 participants identified one node that was not a reasonable match. Given the provided task and context, the results indicate that a VR system serving as a unified interface for data from multiple sources, as presented here, can be used for data exploration. Using the provided interface and displayed information, the participants were able to successfully solve a task related to the data set. In particular, this encourages us to pursue the display of other data sets in a VR environment that can then be explored by users accordingly, illustrating a use case for immersive data exploration. Given the overall positive response, enthusiasm, and ideas for future features from the study participants, we believe we have developed an exciting and fun way to explore data in three-dimensional space.

None of the participants quit or paused the VR part of the study session. On the contrary, twelve of the 24 participants highlighted the overall pleasant VR experience. This also indicates the overall appeal and success of the VR system’s design. The displayed information is rather minimalistic and clear, yet the participants could still interpret it and make meaning, as shown by the successful task completion rate. The target-based movement kept the user from straying off into empty three-dimensional space, as it anchored them to one specific node at all times. We believe this has two advantages. First, it focuses the user on the data, in particular on one specific item in the data set (represented as a node). The user’s attention is set on that item, from which the user can further explore and take the next steps in the exploration. Second, the applied target-based movement transition was perceived as pleasant, illustrating that this kind of movement approach works well in a VR environment. This is backed up by the overall low perceived simulator sickness (see Fig. 9 in “Appendix 2” section), as movement was one of the most frequent actions the users performed in the VR environment.

Based on the collected data from the implemented logging system, it was particularly interesting to visualize the pathways of the participants, creating a move-by-move visualization of each participant’s exploration toward solving the given task. Although it was explained to the participants that time was not an important factor and that they could take as much time as they felt was needed to find a solution, different participants had different approaches. As the pathway visualizations clearly illustrate, some participants wanted to be efficient and explored the data only minimally, while others took more time and effort to find a suitable solution. Some participants explored the data to a great extent, being adamant about finding the best possible solution, illustrating the generally pleasant experience of the VR system. At the same time, the users who explored the data to a more minimal extent seemed satisfied with their performance and exploration as well. As no time limitation for the task completion was given, we assume that the exploratory behavior of the participants depends on multiple individual factors, such as familiarity with the displayed data, eagerness to gain new insights about the displayed data, or simply mood.

It has been shown that immersive data visualization and exploration allow users to retain more structural information and a better spatial understanding (Betella et al. 2014). Although we did not measure these factors within our investigation, all the participants in our study were able to explore the presented data, access information, and solve the given task in a satisfying manner (see Table 4), providing another example of the application of VR technologies for the purpose of immersive data exploration.

Overall, given the results from the user interaction study, we are satisfied with the interface and interaction design of the VR system. The results as well as the generic approach of being able to take different kinds of data from any source (as long as it is transformed to our VR system’s data model; see Sect. 3.1), encourage us to apply our developed VR system in different contexts and scenarios. Since the data aggregation and transformation are clearly separated from our application and done outside the VR system, anyone can come up with a concept on how to map and display data in the three-dimensional space, practically prepare the data based on the required data model, serve it to the VR system, and start exploring. Although the visual presentation of the VR system’s UI can arguably be improved, the results indicate that this would only be of cosmetic value.

6.3 Hypotheses assessment

H1: A visual representation tied to the input technology in VR will have a positive impact on user experience and behavior.

RSVR scored most positively on the TLX (lowest workload), arguably followed by VBMC (high PD and FR, but low OP) and then GAMEPAD (high MD and EF); this supports the H1 hypothesis, as both RSVR and VBMC feature a visual representation tied to the input technology. RSVR also arguably scored better on the FKS; with an emphasis on Absorb and overall Flow, the same ranking as with the TLX can be observed, similarly supporting this hypothesis.
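For reference, the unweighted (“raw”) variant of the NASA-TLX score discussed by Hart (2006) is simply the mean of the six subscale ratings; a minimal sketch (whether our analysis used the raw or the weighted variant is not restated here):

```python
def raw_tlx(md, pd, td, op, ef, fr):
    """Raw TLX (RTLX): unweighted mean of the six NASA-TLX subscales,
    each rated 0-100 (MD = mental demand, PD = physical demand,
    TD = temporal demand, OP = own performance, EF = effort,
    FR = frustration). Lower means lower workload; note that the OP
    scale runs from good (0) to poor (100)."""
    return (md + pd + td + op + ef + fr) / 6.0
```

For example, `raw_tlx(60, 40, 50, 30, 70, 50)` yields an overall workload score of 50.0.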

H2: A physical controller tied to the input technology in VR will have a negative impact on user experience and behavior.

Given that VBMC, in both TLX and FKS, ranks between GAMEPAD and RSVR, it is difficult to argue for supporting or rejecting this hypothesis. While RSVR and GAMEPAD participants interacted with and explored the displayed data the most during the task (based on the number of interactions and visited nodes), they also showed more contextually wrong interactions compared to the VBMC users. Users need prior experience with the controllers’ button layout, and a few participants had noticeable challenges in that respect. This could indicate an overall less intuitive interaction compared to the VBMC input technology.

The results of our input technology comparison are in line with the findings presented by Gusai et al. (2017), indicating a slight preference for the HTC Vive controller over the Leap Motion due to its greater stability and accuracy. Performance and detection issues of the Leap Motion controller have been reported before, favoring the more stable tracking of the HTC Vive system (Caggianese et al. 2019). Arguably, the reported hand gesture/posture detection issues of the Leap Motion controller led to a certain frustration among the users (see Fig. 5), though interestingly not to a drastic decrease in their experienced flow (see Fig. 6) within the context of data exploration with no time limitations. It is interesting to observe that although users may perform (measurably) better overall under one condition (HTC Vive), they can still (subjectively) prefer another one (Leap Motion) for certain tasks (Figueiredo et al. 2018).

Although Streppel et al. (2018) present mostly qualitative results and subjective impressions, their findings are in line with ours, indicating a generally similar acceptance of both HTC Vive and Leap Motion controllers for the purpose of exploring and interacting within a VR environment.

One might assume that the VBMC and RSVR input technologies are much better suited for immersive interaction due to their tracking capabilities and visual representations in VR. Examining the overall results of all three input technologies, the comparatively close scores of GAMEPAD to both VBMC and RSVR could be explained by the public’s wide acceptance of and general familiarity with such devices (Lepouras 2018).

7 Conclusion

This article presented a VR system that enables the user to explore open data in an immersive environment. We provided an overview of related empirical and comparative studies in the field. Following the concept and design of the developed VR system, we implemented three prototypes to compare different input technologies and investigate how these technologies affect user experience and behavior. Twenty-four participants (8 per prototype) provided data by completing an exploratory task with no time limitations, actively operating the VR system. The results of the collected TLX, FKS, and system log data, in regard to both the input technologies and the VR system itself, were reported and discussed.

Based on the results and discussion, the choice of input technology was not decisive for the user experience and behavior in the case of the presented VR system and open data exploration scenario. However, interesting trends were highlighted throughout our discussion (see Sect. 6), which warrant further research by the community. The results indicate a trend in favor of a visual representation in the VR environment, but no clear trend toward (or away from) the application of physical controllers within the presented context.

All participants in the study completed the given task in a satisfying manner. This indicates that an overall well-crafted VR application, from concept to interaction and interface design, is equally important as the applied input technology. Consequently, we believe that the presented VR system is indeed suitable to be successfully applied within the context of open data exploration in an immersive environment.

7.1 Future work

It is possible that the influence of the visual representation (H1) and that of the physical controller (H2) are not independent, or do not have the same impact on user experience and behavior. In general, the presented work leaves room for further interpretation and investigation by the community. It seems that better gesture and hand recognition solutions are needed if developers aim for a less frustrating experience in the VR environment. Nevertheless, current gesture and hand recognition solutions may already be suitable for other scenarios, such as entertainment or other playful purposes in VR, or within the context of augmented reality (AR). We are also interested in applying our developed VR system in other contexts in the future in order to display new data sets. Furthermore, we plan to support the VR system with updates, adding new features and thus further investigating interaction and interface design.

Footnotes

  1. Following the general HTC Vive and Oculus Rift VR setup guidelines.

  2. Simple system for Unity applications to write log entries to a CSV file: https://github.com/nicoversity/unity_log2csv.

  3. R script to analyze and visualize NASA Task Load Index (TLX) data: https://github.com/nicoversity/tlx-vis-r.

Acknowledgements

We would like to thank all participants for their time contributing to the study. We would also like to express our appreciation to the anonymous reviewers, who provided constructive feedback and comments toward the final manuscript.

References

  1. Abrash M (2014) What VR could, should, and almost certainly will be within two years. Presentation at the Steam Dev Days, January 15–16. https://www.youtube.com/watch?v=G-2dQoeqVVo. Accessed 17 May 2018
  2. Bachmann D, Weichert F, Rinkenauer G (2018) Review of three-dimensional human–computer interaction with focus on the Leap Motion controller. Sensors 18(7):2194. https://doi.org/10.3390/s18072194
  3. Bayyari A, Tudoreanu ME (2006) The impact of immersive virtual reality displays on the understanding of data visualization. In: Proceedings of the ACM symposium on virtual reality software and technology—VRST ’06, ACM, New York, NY, USA, pp 368–371. https://doi.org/10.1145/1180495.1180570
  4. Betella A, Bueno EM, Kongsantad W, Zucca R, Arsiwalla XD, Omedas P, Verschure PFMJ (2014) Understanding large network datasets through embodied interaction in virtual reality. In: Proceedings of the 2014 virtual reality international conference—VRIC ’14, ACM, New York, NY, USA, pp 1–7. https://doi.org/10.1145/2617841.2620711
  5. Biernacki M, Kennedy R, Dziuda Ł (2016) Simulator sickness and its measurement with Simulator Sickness Questionnaire (SSQ). Medycyna Pracy. https://doi.org/10.13075/mp.5893.00512
  6. Bouchard S, Robillard G, Renaud P (2007) Revising the factor structure of the Simulator Sickness Questionnaire. Ann Rev CyberTher Telemed 5:128–137
  7. Bouchard S, St-Jacques J, Renaud P, Wiederhold BK (2009) Side effects of immersions in virtual reality for people suffering from anxiety disorders. J Cyber Ther Rehabil 2(2):127–137
  8. Bouchard S, Robillard G, Renaud P, Bernier F (2011) Exploring new dimensions in the assessment of virtual reality induced side effects. J Comput Inf Technol 1(3):20–32
  9. Bowman DA, McMahan RP (2007) Virtual reality: how much immersion is enough? Computer 40(7):36–43. https://doi.org/10.1109/MC.2007.257
  10. Bowman DA, Coquillart S, Froehlich B, Hirose M, Kitamura Y, Kiyokawa K, Stuerzlinger W (2008) 3D user interfaces: new directions and perspectives. IEEE Comput Graph Appl 28(6):20–36. https://doi.org/10.1109/MCG.2008.109
  11. Caggianese G, Gallo L, Neroni P (2019) The Vive controllers vs. Leap Motion for interactions in virtual environments: a comparative evaluation. In: De Pietro G, Gallo L, Howlett RJ, Jain LC, Vlacic L (eds) Intelligent interactive multimedia systems and services—KES-IIMSS-18 2018. Springer, Cham, pp 24–33. https://doi.org/10.1007/978-3-319-92231-7_3
  12. Cardoso JCS (2016) Comparison of gesture, gamepad, and gaze-based locomotion for VR worlds. In: Proceedings of the 22nd ACM conference on virtual reality software and technology—VRST ’16, ACM, New York, NY, USA, pp 319–320. https://doi.org/10.1145/2993369.2996327
  13. Carmack J (2013) The Engadget interview: Oculus Rift’s John Carmack. Video interview, Oct 13. https://www.youtube.com/watch?v=AkasIFGpSHI. Accessed 17 May 2018
  14. Chen W, Plancoulaine A, Férey N, Touraine D, Nelson J, Bourdot P (2013) 6DoF navigation in virtual worlds: comparison of joystick-based and head-controlled paradigms. In: Proceedings of the 19th ACM symposium on virtual reality software and technology—VRST ’13, ACM, New York, NY, USA, p 111. https://doi.org/10.1145/2503713.2503754
  15. Csikszentmihalyi M (1988) The flow experience and its significance for human psychology. In: Csikszentmihalyi M, Csikszentmihalyi IS (eds) Optimal experience: psychological studies of flow in consciousness. Cambridge University Press, Cambridge, pp 15–35. https://doi.org/10.1017/CBO9780511621956.002
  16. Csikszentmihalyi M (2014) Flow: psychology, creativity, & optimal experience with Mihaly Csikszentmihalyi. Kanopy: Into the Classroom Media. https://www.kanopy.com/product/flow-psychology-creativity-optimal-experie. Accessed 17 May 2018
  17. Figueiredo L, Rodrigues E, Teixeira J, Techrieb V (2018) A comparative evaluation of direct hand and wand interactions on consumer devices. Comput Graph 77:108–121. https://doi.org/10.1016/j.cag.2018.10.006
  18. Gusai E, Bassano C, Solari F, Chessa M (2017) Interaction in an immersive collaborative virtual reality environment: a comparison between Leap Motion and HTC controllers. In: Battiato S, Farinella G, Leo M, Gallo G (eds) New trends in image analysis and processing—ICIAP 2017. Springer, Cham, pp 290–300. https://doi.org/10.1007/978-3-319-70742-6_27
  19. Hart SG (2006) NASA-Task Load Index (NASA-TLX); 20 years later. Proc Hum Factors Ergon Soc Ann Meet 50(9):904–908. https://doi.org/10.1177/154193120605000909
  20. Hart SG, Staveland LE (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol 52:139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
  21. Janssen M, Charalabidis Y, Zuiderwijk A (2012) Benefits, adoption barriers and myths of open data and open government. Inf Syst Manag 29(4):258–268. https://doi.org/10.1080/10580530.2012.716740
  22. Kennedy RS, Lane NE, Berbaum KS, Lilienthal MG (1993) Simulator Sickness Questionnaire: an enhanced method for quantifying simulator sickness. Int J Aviat Psychol 3(3):203–220. https://doi.org/10.1207/s15327108ijap0303_3
  23. Konrad R, Cooper EA, Wetzstein G (2016) Novel optical configurations for virtual reality: evaluating user preference and performance with focus-tunable and monovision near-eye displays. In: Proceedings of the 2016 CHI conference on human factors in computing systems—CHI ’16, ACM, New York, NY, USA, pp 1211–1220. https://doi.org/10.1145/2858036.2858140
  24. Kovarova A, Urbancok M (2014) Can virtual reality be better controlled by a smart phone than by a mouse and a keyboard? In: Proceedings of the 15th international conference on computer systems and technologies—CompSysTech ’14, ACM, New York, NY, USA, pp 317–324. https://doi.org/10.1145/2659532.2659608
  25. Lackey SJ, Salcedo JN, Szalma J, Hancock P (2016) The stress and workload of virtual reality training: the effects of presence, immersion and flow. Ergonomics 59(8):1060–1072. https://doi.org/10.1080/00140139.2015.1122234
  26. Lanman D, Fuchs H, Mine M, McDowall I, Abrash M (2014) Put on your 3D glasses now: the past, present, and future of virtual and augmented reality. In: ACM SIGGRAPH 2014 courses—SIGGRAPH ’14, ACM, New York, NY, USA, pp 12:1–12:173. https://doi.org/10.1145/2614028.2628332
  27. LaValle SM (2016) Virtual reality, online edn. http://vr.cs.uiuc.edu. Accessed 17 May 2018
  28. LaViola JJ Jr, Kruijff E, McMahan RP, Bowman D, Poupyrev IP (2017) 3D user interfaces: theory and practice, 2nd edn. Addison-Wesley Professional, Boston
  29. Lepouras G (2018) Comparing methods for numerical input in immersive virtual environments. Virtual Real 22(1):63–77. https://doi.org/10.1007/s10055-017-0312-5
  30. McMahan RP, Gorton D, Gresock J, McConnell W, Bowman DA (2006) Separating the effects of level of immersion and 3D interaction techniques. In: Proceedings of the ACM symposium on virtual reality software and technology—VRST ’06, ACM, New York, NY, USA, pp 108–111. https://doi.org/10.1145/1180495.1180518
  31. Medeiros D, Cordeiro E, Mendes D, Sousa M, Raposo A, Ferreira A, Jorge J (2016) Effects of speed and transitions on target-based travel techniques. In: Proceedings of the 22nd ACM conference on virtual reality software and technology—VRST ’16, ACM, New York, NY, USA, pp 327–328. https://doi.org/10.1145/2993369.2996348
  32. Montano Murillo RA, Subramanian S, Martinez Plasencia D (2017) Erg-O: ergonomic optimization of immersive virtual environments. In: Proceedings of the 30th annual ACM symposium on user interface software and technology—UIST ’17, ACM, New York, NY, USA, pp 759–771. https://doi.org/10.1145/3126594.3126605
  33. NASA (2018) NASA TLX paper and pencil version instruction manual. Human Performance Research Group, NASA Ames Research Center. https://humansystems.arc.nasa.gov/groups/TLX/tlxpaperpencil.php. Accessed 17 May 2018
  34. Olbrich M, Graf H, Keil J, Gad R, Bamfaste S, Nicolini F (2018) Virtual reality based space operations—a study of ESA’s potential for VR based training and simulation. In: Chen JYC, Fragomeni G (eds) Virtual, augmented and mixed reality: interaction, navigation, visualization, embodiment, and simulation—VAMR 2018. Springer, Cham, pp 438–451. https://doi.org/10.1007/978-3-319-91581-4_33
  35. Parkin S (2013) Oculus Rift brings virtual reality to verge of the mainstream | MIT Technology Review. Computer news. Web page, Sept 12. http://www.technologyreview.com/news/519801/can-oculus-rift-turn-virtual-wonder-into-commercial-reality/. Accessed 17 May 2018
  36. Rebenitsch L, Owen C (2016) Review on cybersickness in applications and visual displays. Virtual Real 20(2):101–125. https://doi.org/10.1007/s10055-016-0285-9
  37. Reski N, Alissandrakis A (2016) Change your perspective: exploration of a 3D network created from open data in an immersive virtual reality environment. In: The 9th international conference on advances in computer–human interactions—ACHI 2016, IARIA, Venice, Italy, pp 403–410. http://www.thinkmind.org/index.php?view=article&articleid=achi_2016_19_30_20107
  38. Rheinberg F, Vollmeyer R, Engeser S (2003) Die Erfassung des Flow-Erlebens [The assessment of flow experience]. In: Stiensmeier-Pelster J, Rheinberg F (eds) Diagnostik von Selbstkonzept, Lernmotivation und Selbstregulation [Diagnosis of motivation and self-concept]. Hogrefe, Göttingen, pp 261–279. https://nbn-resolving.org/urn:nbn:de:kobv:517-opus-6344
  39. Seibert J, Shafer DM (2018) Control mapping in virtual reality: effects on spatial presence and controller naturalness. Virtual Real 22(1):79–88. https://doi.org/10.1007/s10055-017-0316-1
  40. Slater M, Usoh M, Steed A (1995) Taking steps: the influence of a walking technique on presence in virtual reality. ACM Trans Comput Hum Interact (TOCHI) Spec Issue Virtual Real Softw Technol 2(3):201–219. https://doi.org/10.1145/210079.210084
  41. Streppel B, Pantförder D, Vogel-Heuser B (2018) Interaction in virtual environments—how to control the environment by using VR-glasses in the most immersive way. In: Chen JYC, Fragomeni G (eds) Virtual, augmented and mixed reality: interaction, navigation, visualization, embodiment, and simulation—VAMR 2018. Springer, Cham, pp 183–201
  42. Sutherland IE (1968) A head-mounted three dimensional display. In: Proceedings of the December 9–11, 1968, fall joint computer conference, part I—AFIPS ’68 (Fall, part I), ACM, New York, NY, USA, pp 757–764. https://doi.org/10.1145/1476589.1476686
  43. Tcha-Tokey K, Loup-Escande E, Christmann O, Richir S (2017) Effects on user experience in an edutainment virtual environment. In: Proceedings of the European conference on cognitive ergonomics 2017—ECCE 2017, ACM, New York, NY, USA, pp 1–8. https://doi.org/10.1145/3121283.3121284
  44. Tregillus S, Al Zayer M, Folmer E (2017) Handsfree omnidirectional VR navigation using head tilt. In: Proceedings of the 2017 CHI conference on human factors in computing systems—CHI ’17, ACM, New York, NY, USA, pp 4063–4068. https://doi.org/10.1145/3025453.3025521
  45. Vosinakis S, Koutsabasis P (2018) Evaluation of visual feedback techniques for virtual grasping with bare hands using Leap Motion and Oculus Rift. Virtual Real 22(1):47–62. https://doi.org/10.1007/s10055-017-0313-4
  46. Ward MO, Grinstein G, Keim D (2010) Interactive data visualization: foundations, techniques, and applications. A. K. Peters, Natick. https://dl.acm.org/citation.cfm?id=1893097
  47. Wegner K, Seele S, Buhler H, Misztal S, Herpers R, Schild J (2017) Comparison of two inventory design concepts in a collaborative virtual reality serious game. In: Extended abstracts publication of the annual symposium on computer-human interaction in play—CHI PLAY ’17 extended abstracts, ACM, New York, NY, USA, pp 323–329. https://doi.org/10.1145/3130859.3131300
  48. Wirth M, Gradl S, Sembdner J, Kuhrt S, Eskofier BM (2018) Evaluation of interaction techniques for a virtual reality reading room in diagnostic radiology. In: Proceedings of the 31st annual ACM symposium on user interface software and technology—UIST ’18, ACM, New York, NY, USA, pp 867–876. https://doi.org/10.1145/3242587.3242636
  49. Wolf K, Funk M, Khalil R, Knierim P (2017) Using virtual reality for prototyping interactive architecture. In: Proceedings of the 16th international conference on mobile and ubiquitous multimedia—MUM ’17, ACM, New York, NY, USA, pp 457–464. https://doi.org/10.1145/3152832.3156625
  50. Young MK, Gaylor GB, Andrus SM, Bodenheimer B (2014) A comparison of two cost-differentiated virtual reality systems for perception and action tasks. In: Proceedings of the ACM symposium on applied perception—SAP ’14, ACM, New York, NY, USA, pp 83–90. https://doi.org/10.1145/2628257.2628261

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. VRxAR Labs, Department of Computer Science and Media Technology, Linnæus University, Växjö, Sweden
