Abstract
Cybersecurity practitioners face the challenge of monitoring complex and large datasets. These can be visualized as time-varying node-link graphs, but such graphs still have complex topologies and very high rates of change in the attributes of their links (which represent network activity). It is natural, then, that the needs of the cybersecurity domain have driven many innovations in 2D visualization and related computer-assisted decision making. Here, we discuss the lessons learned while implementing user interactions for Virtual Data Explorer (VDE), a novel system for immersive visualization (in both Mixed and Virtual Reality) of complex time-varying graphs. VDE can be used with any dataset to render its topological layout and overlay that layout with a time-varying graph; VDE was inspired by the needs of cybersecurity professionals engaged in computer network defense (CND).
Immersive data visualization using VDE enables intuitive semantic zooming, where the semantic zoom levels are determined by the spatial position of the headset, the spatial position of handheld controllers, and user interactions (UIa) with those controllers. This spatially driven semantic zooming is quite different from most other network visualizations which have been attempted with time-varying graphs of the sort needed for CND, presenting a broad design space to be evaluated for overall user experience (UX) optimization. In this paper, we discuss these design choices, as informed by CND experts, with a particular focus on network topology abstraction with graph visualization, semantic zooming on increasing levels of network detail, and semantic zooming to show increasing levels of detail with textual labels.
Keywords
- User interactions
- Virtual reality
- Mixed reality
- Network visualization
- Topology visualization
- Data visualization
- Cybersecurity
1 Introduction
This work follows a large volume of prior research done on 3D user interactions [3, 6, 10, 24], immersive analytics [1, 4, 5, 18] and the combination of the two [9, 17, 21, 23]. Although the task-specific layout of an immersive data visualization is arguably the most important aspect determining its utility [15], non-intrusive and intuitive user interfaces (UI) and overall user experiences (UX) are also important in determining the usability and utility of an immersive data visualization. In this paper, we report on the applicability of various user interaction (UIa) methods for immersive analytics of node-link diagrams.
Work on Virtual Data Explorer (VDE, Fig. 1) started in 2015, initially as a fork of OpenGraphiti and then rebuilt from scratch as a Unity 3D project [14]. One of the factors that motivated the transfer away from OpenGraphiti at the time was its lack of support for user interactions in virtual reality, an omission that became particularly significant when the Oculus Touch controllers released in late 2016 made sufficiently precise user interactions possible with Unity 3D. User feedback solicited from early VDE users motivated various alterations and additions to the interactions implemented for virtual and mixed reality in VDE.
2 Objective
Encoding information into depth cues while visualizing data has been avoided in the past for a good reason: on a flat screen, it’s not helpful [19]. Nevertheless, recent studies have confirmed [23] that with equipment that provides the user with stereoscopic perception and parallax, three-dimensional shapes can be useful in providing users with insight into the visualized dataset [12]. Additionally, researchers have found that test subjects managed to gather data and to understand the cyber situation presented to them after only a few sessions, achieving high performance scores, even if the task seemed difficult to them on the first try [8].
The motivating factors for creating VDE were the challenges that cyber defense analysts, cyber defense incident responders, network operations specialists, and related professionals face while analyzing the datasets relevant to their tasks. Such datasets are often multidimensional but not intrinsically spatial. Consequently, analysts must either scale down the number of dimensions visible at a time for encoding into a 2D or 3D visualization, or they must combine multiple visualizations displaying different dimensions of that dataset into a dashboard. The inspiration for VDE was the hope that immersive visualization would enable the 3D encoding of data in ways better aligned to subject matter experts’ (SMEs’) natural understanding of their datasets’ relational layout, better reflecting their mental models of the multilevel hierarchical relationships of groups of entities expected to be present in a dataset and the dynamic interactions between these entities [13].
Therefore, the target audience for the visualizations created with VDE are the SMEs responsible for ensuring the security of networks and other assets. SMEs utilize a wide array of Computer Network Defense (CND) tools, such as Security Information & Event Management (SIEM) systems, which allow data from various sources to be processed and alerts to be handled [15]. CND tools allow analysts to monitor, detect, investigate, and report incidents that occur in the network, as well as provide an overview of the network state. To provide analysts with such capabilities, CND tools depend on the ability to query, process, summarize and display large quantities of diverse data which have fast and unexpected dynamics [2]. These tools can be thought of along the lines of the seven human-data interaction task levels defined by Shneiderman [22]:
1. Gaining an overview of the entire dataset,
2. Zooming in on an item or subsets of items,
3. Filtering out irrelevant items,
4. Getting details-on-demand for an item or subset of items,
5. Relating between items or subsets of items,
6. Keeping a history of actions, and
7. Allowing extraction of subsets of items and query parameters.
These task levels have been taken into account while developing VDE and most have been addressed with its capabilities. When appropriate, Shneiderman’s task levels are referred to by their sequential number later in this paper.
3 Virtual Data Explorer
VDE enables a user to stereoscopically perceive a spatial layout of a dataset in a VR or MR environment (e.g., the topology of a computer network), while the resulting visualization can be augmented with additional data, such as TCP/UDP/ICMP session counts between network nodes [16]. VDE allows its users to customize visualization layouts via two complementary text configuration files that are parsed by the VDE Server and the VDE Client.
To accommodate timely processing of large query results, data processing in VDE is separated into a server component (VDES). Thread-safe messaging is used extensively: most importantly, to keep the Client (VDEC) visualization in sync with changes in incoming data, but also for asynchronous data processing, for handling browser-based user interface actions, and in support of various other features.
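VDE itself is a Unity 3D (C#) project, but the server-to-client synchronization pattern described above, a producer thread enqueueing updates that the render loop drains on its own thread, can be sketched in a few lines of Python. This is an illustrative sketch only; the class and method names are hypothetical, not VDE's actual API:

```python
import queue

class VisualizationSync:
    """Sketch of the server-to-client pattern: a network thread enqueues
    data updates; the render loop drains the queue on its own thread, so
    shared state is only touched through the thread-safe queue."""

    def __init__(self):
        self.updates = queue.Queue()   # thread-safe by construction
        self.node_state = {}           # mutated only by the render thread

    def on_server_message(self, node_id, metric):
        # Called from the network thread: never mutate node_state here.
        self.updates.put((node_id, metric))

    def drain_pending(self, max_items=100):
        # Called once per frame from the render loop; bounded per frame
        # so a burst of updates cannot stall rendering.
        applied = 0
        while applied < max_items:
            try:
                node_id, metric = self.updates.get_nowait()
            except queue.Empty:
                break
            self.node_state[node_id] = metric
            applied += 1
        return applied
```

Bounding the per-frame drain is one way to keep frame times stable while the visualization catches up with bursts of incoming data.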
A more detailed description of VDE is available at [11].
3.1 Simulator Sickness
Various experiments have shown that applying certain limitations to a user’s ability to move in the virtual environment, such as limiting their view and other forms of constrained navigation, limits confusion and helps prevent simulator sickness while in VR [7]. These lessons were learned while developing VDE and adjusted later, as others reported success with the same or similar mitigation efforts [20]. Most importantly, if an immersed user can only move the viewpoint (e.g., their avatar) either forwards or backwards in the direction of the user’s gaze (or head direction), the effects of simulator sickness can be minimized or avoided altogether [12]. This form of constrained navigation in VR is known as “the rudder movement” [20].
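The rudder-movement constraint can be illustrated with a minimal sketch: translation is only ever applied along the current gaze direction, forward or backward. This Python sketch is purely for exposition; the function name, coordinate convention (y up, negative z into the scene), and angle parameters are assumptions, not VDE's implementation:

```python
import math

def rudder_move(position, gaze_yaw_deg, gaze_pitch_deg, amount):
    """Constrained 'rudder' navigation: the viewpoint may only translate
    along the current gaze direction (amount > 0 forward, < 0 backward).
    position is an (x, y, z) tuple; angles are in degrees."""
    yaw = math.radians(gaze_yaw_deg)
    pitch = math.radians(gaze_pitch_deg)
    # Unit vector of the gaze direction under the assumed convention:
    # yaw 0 / pitch 0 looks straight down the negative z axis.
    dx = math.cos(pitch) * math.sin(yaw)
    dy = math.sin(pitch)
    dz = -math.cos(pitch) * math.cos(yaw)
    x, y, z = position
    return (x + amount * dx, y + amount * dy, z + amount * dz)
```

Because no lateral (strafing) or vertical translation is possible independently of gaze, the vection cues that commonly trigger simulator sickness are reduced.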
3.2 Virtual or Mixed Reality
Although VDE was initially developed with Virtual Reality headsets (Oculus Rift DK2 and later CV1 with Oculus Touch), its interaction components were always kept modular so that once mixed reality headsets such as the Meta 2, Magic Leap, and HoloLens became available, their support could be integrated into the same codebase.
The underlying expectation for preferring MR to VR is the user’s ability to combine stereoscopically perceivable data visualizations rendered by an MR headset with relevant textual information presented by other sources in the user’s physical environment (SIEM, dashboard, or another tool), most likely on flat screens. This requirement was identified from early user feedback that trying to input text or define/refine data queries while in VR would be vastly inferior to the textual interfaces that users are already accustomed to operating while using conventional applications on a flat screen for data analysis. Hence, rather than spending time inventing 3D data-entry solutions for VR, it was decided to focus on creating and improving stereoscopically perceivable data layouts and to let users use their existing tools to control the selection of data that is then fed to the visualization.
A major advantage provided by the VR environment, relative to MR, is that VR allows users to move (fly) around in a larger-scale (overview) visualization of a dataset while becoming familiar with its layout(s) and/or while collaborating with others. However, once the user is familiar with the structure of their dataset, changing their position (by teleporting or flying in VR space) becomes less beneficial over time. Accordingly, as commodity MR devices became sufficiently performant, they were prioritized for development: first the Meta 2, later followed by support for the Magic Leap and HoloLens.
3.3 User Interface
In the early stages of VDE development on Unity 3D, efforts were made either to use existing VR-based menu systems (VRTK, later MRTK) or to design a native menu that would allow the user to control which visualization components are visible and/or interactive; to configure the connection to the VDE Server; to switch between layouts; and to exercise other control over the immersive environment. However, controlling VDE’s server and client behavior, including data selection and transfer, turned out to be more convenient when done through the VDES web-based interface in combination with existing conventional tools on a flat screen. For example, in the case of cybersecurity-related datasets, the data source could be a SIEM, log-correlation, netflow, or PCAP-analyzing environment.
3.4 Head-Up Display
Contextual information is displayed on a head-up display (HUD) that is perceived to be positioned a few meters away from the user in MR and about 30 m away in VR. The HUD smoothly follows the direction of the user’s head in order to remain in the user’s field of view (see Fig. 2). This virtual distance was chosen to allow a clear distinction between the HUD and the network itself, which is stereoscopically apparent as being nearer to the user.
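The smooth-follow behavior can be sketched as exponential smoothing toward an anchor point held at a fixed distance along the head direction. The Python sketch below is illustrative only; the function names and the smoothing parameter are hypothetical, not VDE's Unity code:

```python
def hud_target(head_pos, head_dir, distance):
    """Anchor point for the HUD: a fixed distance along the (unit)
    head direction from the head position."""
    return tuple(p + distance * d for p, d in zip(head_pos, head_dir))

def hud_follow(hud_pos, head_pos, head_dir, distance, smoothing=0.1):
    """Move the HUD a fraction of the way toward its anchor each frame
    (exponential smoothing); the slight lag reads as a smooth follow
    rather than a HUD rigidly glued to the user's head."""
    target = hud_target(head_pos, head_dir, distance)
    return tuple(h + smoothing * (t - h) for h, t in zip(hud_pos, target))
```

With `smoothing` well below 1, a quick head turn leaves the HUD briefly behind and it then glides back into view, which avoids the jarring effect of a head-locked overlay.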
3.5 User Interactions
The ability to interact with the visualization, namely to query information about a visual representation of a datapoint (e.g., a semi-transparent cube for a node, or a line for a relation between two nodes) using input devices (e.g., hand and finger tracking, or input controllers), is imperative. While gathering feedback from SMEs [12], this querying capability was found to be crucial for users’ immersion in the VR data visualization, allowing them to explore and to build their understanding of the visualized data.
The MR or VR system’s available input methods are used to detect whether the user is trying to grab something, point at a node, or point at an edge. In the case of MR headsets, these interactions are based on the user’s tracked hands (see Figs. 3 and 4), and in the case of VR headsets, pseudo-hands (see Figs. 5 and 6) are rendered based on hand-held input controllers.
A user can:
1. Point to select a visual representation of a data object, a node (for example, a cube or a sphere) or an edge, with a “laser” or with the dominant hand’s index finger, using either the virtual rendering of the hand or the user’s real hand-tracking results (in the case of MR headsets). Once selected, detailed information about the selected object (node or edge) is shown on a line of text rendered next to the user’s hand (Shneiderman Task Level 4).
2. Grab (or pinch) nodes and move (or throw) them around to better perceive their relations by observing the edges that originate or terminate in that node: humans perceive the terminal locations of moving lines better than those of static ones (Shneiderman Task Levels 3, 5).
3. Control a data visualization layout’s properties (shapes, curvature, etc.) with the controller’s analog sensors (Shneiderman Task Levels 1, 5).
4. Gesture with the non-dominant hand to trigger various functionalities. For example: starfish toggles the HUD; pinching with both hands scales the visualization; fist toggles edges; etc.
In addition to active gestures and hand recognition, the user’s position and gaze (rather than just their head direction) are used, if available, to decide which visualization sub-groups to focus on, to enable textual labels, to hide enclosures, to enable update routines, colliders, etc. (Shneiderman Task Levels 2, 3, 4, 5, 7). Therefore, depending on the user’s direction and location amongst the visualization components and on the user’s gaze (if eye-tracking is available), a visualization’s details are either visible or hidden, and if visible, then either interactive or not.
The reasons for such behavior are threefold:

1. Exposing the user to too many visual representations of the data objects will overwhelm them, even if occlusion is not a concern.
2. Having too many active objects may overwhelm the GPU/CPU of a standalone MR/VR headset, or even a computer rendering into a VR headset, due to the computational costs of colliders, joints, or other physics (see the “Optimizations” section below).
3. By adjusting their location (and gaze), the user can:
   (a) see an overview of the entire dataset (Shneiderman Task Level 1),
   (b) zoom in on an item or subset of items (Shneiderman Task Level 2),
   (c) filter irrelevant items (Shneiderman Task Level 3),
   (d) get details-on-demand for an item or subset of items (Shneiderman Task Level 4),
   (e) relate between items or subsets of items (Shneiderman Task Level 5).
Figures 7 and 8 show this behavior, while the video (https://coda.ee/HCII22) accompanying this paper makes understanding such MR interaction clearer than is possible from a screenshot, albeit less so than experiencing it with an MR headset.
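The spatially driven behavior described above can be sketched as a per-object policy that combines the user's distance and viewing angle. The Python sketch below is purely illustrative; the distance thresholds, the field-of-view cutoff, and the state names are hypothetical values chosen to show the idea, not VDE's actual parameters:

```python
import math

def view_angle_deg(user_pos, view_dir, obj_pos):
    """Angle between the (unit) view direction and the direction from
    the user to the object; 180 degrees means directly behind."""
    to_obj = [o - u for o, u in zip(obj_pos, user_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj))
    if norm == 0:
        return 0.0
    dot = sum(v * c for v, c in zip(view_dir, to_obj)) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def detail_level(user_pos, view_dir, obj_pos,
                 near=3.0, mid=10.0, fov_deg=60.0):
    """Hypothetical policy: objects behind the user or far outside the
    field of view are hidden; nearby in-view objects get labels and
    colliders; mid-range objects are rendered but not interactive."""
    if view_angle_deg(user_pos, view_dir, obj_pos) > fov_deg:
        return "hidden"
    dist = math.dist(user_pos, obj_pos)
    if dist <= near:
        return "interactive"   # labels, colliders, update routines on
    if dist <= mid:
        return "visible"       # rendered, but no physics or labels
    return "hidden"
```

Evaluating such a policy per sub-group rather than per node keeps the per-frame cost low even for large graphs.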
3.6 Textual Information
Text labels of nodes, edges, and groups are a significant issue: they are expensive to render due to their complex geometric shapes, and they also risk occluding objects that fall behind them. Accordingly, text is shown in VDE only when necessary, to the extreme that a label is made visible only when the user’s gaze is detected on a related object. Backgrounds are not used with text, in order to reduce its occlusive footprint.
3.7 Optimizations
The basis for VDE: less is more.
Occlusion of visual representations of data objects is a significant problem for 3D data visualizations on flat screens. In VR/MR environments, occlusion can be mostly mitigated by stereoscopic perception of the (semi-transparent) visualizations of data objects and by parallax, but may still be problematic [5].
While occlusion in MR/VR can be addressed by measures such as transparency, transparency adds significant overhead to the rendering process. To mitigate occlusion-related issues, VDE strikes a balance between the necessity of transparency of visualized objects and the number of components currently visible (toggling textual labels, reducing the complexity of objects that are farther from the user’s viewpoint, etc.), based on the current load (measured FPS); on objects’ positions relative to the user’s gaze (in view, not in view, behind the user); and on the user’s virtual distance from these objects. This XR-centric approach to semantic zooming provides a natural user experience, visually akin to the semantic zooming techniques used in online maps, which smoothly but dramatically change the extent of detail as a function of zoom level (showing only major highways or the smallest of roads, toggling the visibility of street names and point-of-interest markers).
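The load-based adjustment can be illustrated as a simple hysteresis policy on the measured frame rate: shed the costliest features first when FPS drops, and restore them only when there is comfortable headroom. The thresholds, the 72 FPS target, and the feature ordering below are assumptions made for this sketch, not VDE's actual values:

```python
def adjust_detail(measured_fps, target_fps=72,
                  visible_labels=True, sphere_nodes=True):
    """Hypothetical load-shedding policy, returning the new
    (visible_labels, sphere_nodes) state. Under load, text labels are
    dropped first, then high-poly sphere nodes fall back to cubes;
    features are restored in reverse order once FPS recovers. The 10%
    bands around the target prevent oscillation (hysteresis)."""
    if measured_fps < target_fps * 0.9:          # under load: shed detail
        if visible_labels:
            return (False, sphere_nodes)
        return (visible_labels, False)           # fall back to cubes
    if measured_fps > target_fps * 1.1:          # headroom: restore detail
        if not sphere_nodes:
            return (visible_labels, True)
        return (True, sphere_nodes)
    return (visible_labels, sphere_nodes)        # stable: no change
```

Changing one feature per evaluation, rather than all at once, keeps the visual transitions gradual, in the spirit of the map-style semantic zooming described above.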
Although the colors and shapes of the visual representations of data objects can be used to convey information about their properties, user feedback has confirmed that these should be used sparingly. Therefore, in most VDE layouts, the nodes (representing data objects) are visualized as transparent off-white cubes or spheres, the latter only if the available GPU is powerful enough. Displaying a cube versus a sphere may seem a trivial difference, but considering the sizes of some of the datasets visualized (>10,000 nodes and >10,000 edges), these complexities add up quickly and take a significant toll.
4 Conclusion
Immersive visualization of large, dynamic node-link diagrams requires careful consideration of visual comprehensibility and computational performance. While many node-link visualization idioms are well studied in 2D flat-screen visualizations, the opportunities and constraints presented by VR and MR environments are distinct. As the pandemic made a larger-scale study with many participants impossible, VDE instead underwent a more iterative review process, drawing input from representative users and domain expertise. The approach described herein reflects many iterations of performance testing and user feedback.
Optimizing user interactions for VDE presented the design challenge of providing an interface which intuitively offers an informative presentation of the node-link network both at a high-level “overview” zoom level and at a very zoomed-in “detail” view, with well-chosen levels of semantic zoom available along the continuum between these extremes. Constrained navigation further optimizes the user experience, limiting confusion and motion sickness. Dynamic highlighting, through the selection and controller-based movement of individual nodes, enhances the users’ understanding of the data.
References
Batch, A., Elmqvist, N.: The interactive visualization gap in initial exploratory data analysis. IEEE Trans. Visual Comput. Graph. 24(1), 278–287 (2018). https://doi.org/10.1109/TVCG.2017.2743990
Ben-Asher, N., Gonzalez, C.: Effects of cyber security knowledge on attack detection. Comput. Hum. Behav. 48, 51–61 (2015). https://doi.org/10.1016/j.chb.2015.01.039. https://www.sciencedirect.com/science/article/pii/S0747563215000539
Casallas, J.S., Oliver, J.H., Kelly, J.W., Merienne, F., Garbaya, S.: Using relative head and hand-target features to predict intention in 3d moving-target selection. In: 2014 IEEE Virtual Reality (VR), pp. 51–56 (2014). https://doi.org/10.1109/VR.2014.6802050
Dübel, S., Röhlig, M., Schumann, H., Trapp, M.: 2d and 3d presentation of spatial data: a systematic review. In: 2014 IEEE VIS International Workshop on 3DVis (3DVis), pp. 11–18 (2014). https://doi.org/10.1109/3DVis.2014.7160094
Elmqvist, N., Tsigas, P.: A taxonomy of 3d occlusion management for visualization. IEEE Trans. Visual Comput. Graphics 14(5), 1095–1109 (2008). https://doi.org/10.1109/TVCG.2008.59
Günther, T., Franke, I.S., Groh, R.: Aughanded virtuality - the hands in the virtual environment. In: 2015 IEEE Virtual Reality (VR), pp. 327–328 (2015). https://doi.org/10.1109/VR.2015.7223428
Johnson, D.M.: Introduction to and review of simulator sickness research (2005)
Kabil, A., Duval, T., Cuppens, N.: Alert characterization by non-expert users in a cybersecurity virtual environment: a usability study. In: De Paolis, L.T., Bourdot, P. (eds.) AVR 2020. LNCS, vol. 12242, pp. 82–101. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58465-8_6
Kabil, A., Duval, T., Cuppens, N., Comte, G.L., Halgand, Y., Ponchel, C.: Why should we use 3d collaborative virtual environments for cyber security? In: 2018 IEEE Fourth VR International Workshop on Collaborative Virtual Environments (3DCVE), pp. 1–2 (2018). https://doi.org/10.1109/3DCVE.2018.8637109
Kang, H.J., Shin, J.h., Ponto, K.: A comparative analysis of 3d user interaction: How to move virtual objects in mixed reality. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 275–284 (2020). https://doi.org/10.1109/VR46266.2020.00047
Kullman, K.: Creating useful 3d data visualizations: using mixed and virtual reality in cybersecurity (2020). https://coda.ee/MAVRIC, 3rd Annual MAVRIC Conference
Kullman, K., Ben-Asher, N., Sample, C.: Operator impressions of 3d visualizations for cybersecurity analysts. In: 18th European Conference on Cyber Warfare and Security. Coimbra, Portugal (2019)
Kullman, K., Cowley, J., Ben-Asher, N.: Enhancing cyber defense situational awareness using 3d visualizations. In: 13th International Conference on Cyber Warfare and Security, Washington, DC (2018)
Kullman, K.: Virtual data explorer. https://coda.ee/
Kullman, K., Buchanan, L., Komlodi, A., Engel, D.: Mental model mapping method for cybersecurity. In: HCI (2020)
Kullman, K., Engel, D.: Interactive stereoscopically perceivable multidimensional data visualizations for cybersecurity. J. Defence Secur. Technol. 4(3), 37–52 (2022). https://doi.org/10.46713/jdst.004.03
Lu, F., Davari, S., Lisle, L., Li, Y., Bowman, D.A.: Glanceable ar: evaluating information access methods for head-worn augmented reality. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 930–939 (2020). https://doi.org/10.1109/VR46266.2020.00113
Miyazaki, R., Itoh, T.: An occlusion-reduced 3d hierarchical data visualization technique. In: 2009 13th International Conference Information Visualisation, pp. 38–43 (2009). https://doi.org/10.1109/IV.2009.32
Munzner, T.: Visualization Analysis and Design. AK Peters Visualization Series. CRC Press (2015). https://books.google.de/books?id=NfkYCwAAQBAJ
Pruett, C.: Lessons from the frontlines: modern VR design patterns (2017). https://developer.oculus.com/blog/lessons-from-the-frontlines-modern-vr-design-patterns, Unity North American Vision VR/AR Summit
Roberts, J.C., Ritsos, P.D., Badam, S.K., Brodbeck, D., Kennedy, J., Elmqvist, N.: Visualization beyond the desktop-the next big thing. IEEE Comput. Graphics Appl. 34(6), 26–34 (2014). https://doi.org/10.1109/MCG.2014.82
Shneiderman, B.: The eyes have it: a task by data type taxonomy for information visualizations. In: Proceedings 1996 IEEE Symposium on Visual Languages, pp. 336–343 (1996). https://doi.org/10.1109/VL.1996.545307
Whitlock, M., Smart, S., Szafir, D.A.: Graphical perception for immersive analytics. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 616–625 (2020). https://doi.org/10.1109/VR46266.2020.00084
Yu, D., Liang, H.N., Fan, K., Zhang, H., Fleming, C., Papangelis, K.: Design and evaluation of visualization techniques of off-screen and occluded targets in virtual reality environments. IEEE Trans. Visual Comput. Graphics 26(9), 2762–2774 (2020). https://doi.org/10.1109/TVCG.2019.2905580
Acknowledgement
The authors thank Alexander Kott, Jennifer A. Cowley, Lee C. Trossbach, Matthew C. Ryan, Jaan Priisalu, and Olaf Manuel Maennel for their ideas and guidance. This research was partly supported by the Army Research Laboratory under Cooperative Agreement Number W911NF-17-2-0083 and in conjunction with the CCDC Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance, and Reconnaissance (C5ISR) Center. The material is based upon work supported by NASA under award number 80GSFC21M0002.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Kullman, K., Engel, D. (2022). User Interactions in Virtual Data Explorer. In: Schmorrow, D.D., Fidopiastis, C.M. (eds) Augmented Cognition. HCII 2022. Lecture Notes in Computer Science(), vol 13310. Springer, Cham. https://doi.org/10.1007/978-3-031-05457-0_26
Print ISBN: 978-3-031-05456-3
Online ISBN: 978-3-031-05457-0