Preparing the HoloLens for User Studies: An Augmented Reality Interface for the Spatial Adjustment of Holographic Objects in 3D Indoor Environments
Augmented reality (AR), the extension of the real physical world with holographic objects, provides numerous ways to influence how people perceive and interact with geographic space. Such holographic elements may, for example, improve orientation, navigation, and the mental representations of space generated through interaction with the environment. As AR hardware is still at an early development stage, scientific investigations of the effects of holographic elements on spatial knowledge and perception are fundamental for the development of user-oriented AR applications. However, the accurate and replicable positioning of holograms in real-world space, a highly relevant precondition for standardized scientific experiments on spatial cognition, is still an issue to be resolved. In this paper, we specify the technical causes of this limitation. Subsequently, we describe the development of a Unity-based AR interface capable of adding, selecting, placing and removing holograms. The capability to quickly reposition holograms compensates for the lack of hologram stability and enables the implementation of AR-based geospatial experiments and applications. To facilitate the implementation of other task-oriented AR interfaces, code examples are provided and commented.
Keywords: Augmented reality · AR cartography · HoloLens · Spatial cognition · Experimental methods
Orientation and navigation are fundamental aspects of everyday life. Estimating distances and angles, recalling and recognizing object locations and memorizing routes or landmarks are just a few examples of many basic tasks in outdoor as well as indoor environments (e.g., Keil et al. 2019; Plumert et al. 2005; Postma and De Haan 1996). To optimize the effectiveness and efficiency of these tasks, people often use geospatial media, such as maps or map-like representations. The academic fields of cartography and spatial cognition have increasingly addressed the question of how to integrate and use modern technologies—usually originating in IT- and entertainment industries—to develop proper geospatial media (e.g., Edler et al. 2019; Hruby 2019; Knust and Buchroithner 2014).
Going beyond traditional 2D print approaches, the portfolio of cartographic products has been strongly influenced by digital techniques since the establishment of the computer as a mass media device and, in addition, as a tool to create other mass media (e.g., Clarke et al. 2019; Taylor and Lauriault 2007; Müller 1997). Multimedia cartography—sometimes also referred to as “cybercartography” (Taylor 2005)—led to fundamental approaches of computer-based animated, interactive and multisensory (web) map applications (e.g., Kraak 1999; Peterson 1995; Krygier 1994). It is argued that animation techniques used in cartography have been influenced by the computer and video game industries (Edler et al. 2018a; Edler and Dickmann 2017; Ahlqvist 2011; Corbett and Wade 2005).
The development of computer-based animation techniques came along with appropriate software and hardware solutions. Higher software and hardware performance also allowed the further development of stable and detailed 3D visualization methods and techniques. For example, autostereoscopic displays made it possible to generate 3D depth effects, which were explored in several studies on visualization and user experiments in cartography (e.g., Edler and Dickmann 2015; Bröhmer et al. 2013; Buchroithner 2007). Moreover, the open availability of game engines, such as Unity and Unreal Engine, supports the creation of individual 3D landscapes that can be accessed with virtual reality (VR) headsets, in real time and from the ego perspective—thus creating an impression of immersion. The potential of VR-based visualization is currently under study (e.g., Cöltekin et al. 2019; Edler et al. 2018b; Hruby et al. 2019; Kersten et al. 2018).
Closely related to 3D visualizations in VR are 3D visualizations in augmented reality (AR). AR techniques allow static or animated objects to be projected into real environments, thus extending real physical environments. Representing an early development stage, AR visualization techniques can be based on so-called mid-air displays, sometimes also referred to as free-space displays (Dickmann 2013). A mid-air display projects graphical objects onto free projection surfaces, such as a barely visible wall of fog (“fog screen”) created by an installed blower (DiVerdi et al. 2008).
A famous example of an AR application that interacts strongly with space is the gaming app “Pokémon GO” (Zhao and Chen 2017). In this smartphone- or tablet-based game, users interact with audiovisually animated game characters that can be found in the real environment. In this way, the whole logic and process of the game is added like an additional information layer to the physical landscape. The smartphone or tablet is the ‘physical gateway’ to this augmentation. The camera of the device records the area in front of the user, and the recordings are augmented with virtual objects in real time. As demonstrated by de Almeida Pereira et al. (2017), this technique can also be used to augment physical 2D maps with 3D geographic information, such as height maps.
1.1 The Potential of AR Techniques for Experiencing Space
As both the Microsoft HoloLens and the HTC Vive Pro are capable of tracking head movements, they make it possible to create an impression of the permanent presence of holographic geospatial objects. Even if the user walks around in a defined area, commonly an indoor area, holograms remain in place and adapt to the user's location and viewing perspective. This permanent and adaptive holographic projection may lead to visualization approaches that bring additional advantages for the cognitive processing of the experienced geospatial area.
Empirical research in cartography, spatial cognition and experimental psychology has recently led to some recommendations for the construction of user cognition-oriented cartographic media. For example, user experiments reported that an additional layer of square grids improves memory performance for object locations (Bestgen et al. 2017; Kuchinke et al. 2016; Edler et al. 2014) and the estimation of longer linear distances in maps (Dickmann et al. 2019). The grid-based memory effect also occurs if the grid structure is physically reduced to indicated (“illusory”) lines (Dickmann et al. 2017), is given a depth offset (Edler et al. 2015) or is changed to a hexagonal pattern (Edler et al. 2018c). Other studies reported that reducing the visibility of some map areas can direct visual attention towards other map areas (Keil et al. 2018), and that the display of landmarks can improve route knowledge (Ruddle et al. 2011) and orientation (Li et al. 2014).
The above-mentioned cognition-based effects on spatial performance measures in maps are promising results, indicating that an extended communication of spatial information can benefit the map user in terms of map perception, orientation, navigation and the formation of spatial knowledge. Similar effects will likely occur in real 3D environments augmented by holographic spatial objects. These holographic layers could offer an additional (geometric) structure to support the cognitive processing of object locations, distance estimations and relative directions between objects.
Investigating such possible effects on the perception of spatial information raises new methodological challenges. The possibility to implement spatial models in AR applications has already been investigated and described (Wang et al. 2018). However, to take full advantage of the possibilities of AR for geospatial applications, technical limitations of the currently available AR devices must be addressed. These include the precise placement and stability of holograms in three-dimensional space, a crucial quality criterion for AR applications (Harders et al. 2008). Once stable solutions that guarantee high spatial precision have been found, AR devices can become valuable methodological tools in geospatial experiments focused on fundamental questions of spatial cognition in 3D environments. In user studies, they could be used to project holographic objects into the environment. Moreover, AR devices could assist experimental investigators in arranging the spatial layout of movable real-world objects used in their study. The projection of ‘virtual place markers’ can increase the precision of identical spatial object arrangements, which—from a methodological perspective—increases the comparability of acquired user data (between participants). Moreover, projected ‘virtual markers’ can support the analysis of user tasks, such as the identification and measurement of distortion errors, for example, in location memory tasks.
To exploit the possibilities of AR systems for geospatial user experiments, it is necessary to create technical methods that establish controlled procedures and standardize the placement of holographic objects in a real 3D setting. In the following sections, we describe the functionality of current AR systems, how technical factors of these AR systems affect the targeted placement of holograms, and additional requirements for geospatial applications and experiments. To address and resolve the described discrepancy between requirements and limitations, we present a self-developed AR interface application capable of (re-)placing holograms with high accuracy during runtime. Code examples are provided for transparency, better understanding, and replicability.
2 Hologram Placement and Display
In many VR and monitor-based 3D applications, positions of 3D objects are ‘hard coded’, i.e., their positions are predefined and cannot be changed. The advantage is that all participants see exactly the same arrangement of visual stimuli, which is often a highly relevant precondition for geospatial experiments. However, in AR-based applications, several technical limitations demand a more flexible approach for the placement of visual stimuli.
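The contrast between hard-coded and runtime-adjustable placement can be sketched as follows. The interface described in this paper is implemented in Unity; the Python sketch below, with hypothetical names, only illustrates the placement logic, not the actual implementation:

```python
# Illustrative sketch (hypothetical names): hard-coded vs. runtime-adjustable
# placement. In the actual Unity-based interface, the equivalent operation
# would translate a GameObject's transform.

class Hologram:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y, z) in the app's internal coordinates

# Hard-coded placement: positions are fixed at build time and cannot be
# corrected when the internal coordinate system shifts or drifts.
HARD_CODED = [Hologram("landmark_cube", (1.0, 0.0, 2.5))]

# Runtime-adjustable placement: the experimenter can translate a selected
# hologram to compensate for tracking drift during a session.
def reposition(hologram, offset):
    x, y, z = hologram.position
    dx, dy, dz = offset
    hologram.position = (x + dx, y + dy, z + dz)

h = HARD_CODED[0]
reposition(h, (0.0, 0.0, -0.5))  # nudge the hologram 0.5 m towards the user
print(h.position)                # (1.0, 0.0, 2.0)
```

The essential point is that the correction happens during runtime, so identical real-world stimulus arrangements can be restored even after the internal coordinate system has shifted.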
Head movements are registered by matching objects recorded by the cameras in real time to objects already represented in the spatial model. Calculating the relative position towards these objects then makes it possible to triangulate the current head position and rotation inside the 3D space. The advantage of pure inside-out tracking is that no additional hardware is required. Given that the Microsoft HoloLens is a standalone device (Evans et al. 2017), people can walk freely and use the device seamlessly in different rooms or even on different floors. However, inside-out tracking also has some serious disadvantages concerning the placement of static holograms. First, image analysis, the precondition for image-based tracking, requires a lot of processing power (Liu et al. 2013). As the processing power of a standalone device is naturally limited, this leads to only moderate tracking accuracy and occasional tracking lags. Second, the cameras need to identify at least some reference objects within their spatial range, which in our experience is approximately 5 m. Therefore, tracking lags regularly occur in large empty spaces, especially outdoors. Additionally, poor lighting conditions may negatively affect the capability to identify reference objects (Loesch et al. 2015). The mentioned tracking lags often lead to a distorted or shifted internal coordinate system. On these occasions, the positions of all ‘hard-coded’ holograms relative to real-world objects are also distorted or shifted and need to be readjusted. A third limitation of the Microsoft HoloLens concerning the placement of static holograms is code-based rather than tracking-based. Each time an application is started, the internal coordinate system is set relative to the position of the headset. As it is almost impossible to place the headset at the exact same position each time an application is started, ‘hard-coded’ static holograms cannot be placed reliably at the same real-world position twice.
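The third limitation, the session-dependent coordinate origin, can be illustrated with a minimal sketch (Python with hypothetical values; head rotation at startup is ignored for brevity, although in practice it distorts positions as well):

```python
# Sketch of the session-origin problem (hypothetical values). Each time the
# application starts, the HoloLens places its coordinate origin at the
# headset's current pose, so identical hard-coded app coordinates map onto
# different real-world positions across sessions.

def app_to_world(p_app, session_origin_world):
    """Convert app coordinates to real-world coordinates as a pure
    translation (startup head rotation is ignored for brevity)."""
    return tuple(a + o for a, o in zip(p_app, session_origin_world))

hologram_app = (1.0, 0.0, 2.0)      # hard-coded position in app coordinates

session_1_origin = (0.0, 0.0, 0.0)  # headset started at the room's reference point
session_2_origin = (0.3, 0.0, -0.4) # headset started 0.5 m away from it

print(app_to_world(hologram_app, session_1_origin))  # (1.0, 0.0, 2.0)
print(app_to_world(hologram_app, session_2_origin))  # (1.3, 0.0, 1.6)
# The resulting 0.5 m discrepancy between sessions is what the interface
# described below compensates for by manual repositioning at runtime.
```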
In contrast to the Microsoft HoloLens, the HTC Vive Pro uses a combination of inside-out and outside-in tracking. Outside-in tracking uses stationary sensors located around the tracked space to register head movements. In the case of the HTC Vive Pro, two SteamVR tracking infrared base stations (previously called Lighthouse) are placed in opposite corners of a tracked space with a maximum diagonal size of 5 m (HTC 2019). These base stations interact with photo sensors built into the headset and the hand controllers. By comparing the times at which different sensors perceive the signals of the base stations, the positions of the HMD and the controllers in 3D space can be triangulated. Additionally, inside-out tracking is used to generate a 3D model of the objects inside the tracked area. Similar to the Microsoft HoloLens, this 3D model is generated based on two cameras in the headset. Combining inside-out and outside-in tracking limits the mobility of the device, as the tracking area is confined to the range of the base stations. However, this technique has clear advantages in terms of tracking accuracy. First, the use of infrared base stations provides high tracking accuracy and precision (Ng et al. 2017). Second, the static positioning of the base stations also allows for much more accurate resets of the internal coordinate system after short tracking losses or lags. Therefore, the positions of static holograms relative to real-world objects need to be readjusted less often.
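The principle behind this kind of triangulation can be illustrated with a simplified 2D sketch (Python, hypothetical values). The real SteamVR system works in 3D with many photo sensors and sweep timings, but each base station's timed laser sweep essentially yields a bearing angle, and intersecting two bearing lines from stations at known positions fixes the tracked position:

```python
import math

# Simplified 2D sketch of outside-in triangulation (hypothetical values).
# Each base station's sweep yields the bearing angle at which the headset's
# sensors are hit; intersecting the two bearing rays gives the position.

def triangulate(p1, angle1, p2, angle2):
    """Intersect two rays starting at p1 and p2 with bearings angle1/angle2
    (radians, measured from the x-axis)."""
    x1, y1 = p1
    x2, y2 = p2
    d1x, d1y = math.cos(angle1), math.sin(angle1)
    d2x, d2y = math.cos(angle2), math.sin(angle2)
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule.
    det = d1x * (-d2y) - d1y * (-d2x)
    t = ((x2 - x1) * (-d2y) - (y2 - y1) * (-d2x)) / det
    return (x1 + t * d1x, y1 + t * d1y)

# Two base stations in opposite corners of a 4 m x 3 m tracked area.
base_a = (0.0, 0.0)
base_b = (4.0, 3.0)
# Headset at (2, 1): bearing from A is atan2(1, 2), from B it is atan2(-2, -2).
pos = triangulate(base_a, math.atan2(1, 2), base_b, math.atan2(-2, -2))
print(round(pos[0], 6), round(pos[1], 6))  # 2.0 1.0
```

Because the base stations themselves never move, the reference geometry used in this computation is constant, which is why the internal coordinate system can be reset accurately after tracking losses.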
3 Implementing Interactivity of Holograms
3.1 User Input
3.6 Reducing Interface Visibility
Furthermore, we illustrated which technical characteristics of current AR devices conflict with the identified requirements. Especially the stability of holograms was argued to be affected by tracking issues of current AR headsets. As a workaround, we described the development process of an AR interface capable of adding, removing and placing holograms precisely in real-world space. This interface makes it possible to perform standardized scientific experiments using AR hardware by manually correcting false hologram positions. To reduce interference with experimental visual stimuli, the visibility of the AR interface can be reduced to a minimum when it is not required. However, our proposed solution addresses only some limitations of current AR devices. The most crucial limitation, the incapability to use current AR devices in large-scale and outdoor environments, still remains. As long as highly accurate and reliable tracking cannot be provided by AR hardware (e.g., realized by a combination of inside-out and satellite tracking), the use of AR devices will be limited to spatially confined environments.
This study was supported by grants of the Deutsche Forschungsgemeinschaft (DFG) to FD (DI 771/11-1; Project 218150251). We would like to thank Melvin Sossna for supporting the development of the 3D model presented in Fig. 3.
- Buchroithner MF (2007) Echtdreidimensionalität in der Kartographie: Gestern, heute und morgen. Kartogr Nachrichten 57(5):239–248
- Dickmann F (2013) Freiraum-displays—Ein neues Medium für die Kartographie? Kartogr Nachrichten 63(2/3):89–92
- Edler D, Husar A, Keil J, Vetter M, Dickmann F (2018a) Virtual reality (VR) and open source software: a workflow for constructing an interactive cartographic VR environment to explore urban landscapes. Kartogr Nachrichten 68(1):3–11
- Edler D, Keil J, Dickmann F (2018c) Varianten interaktiver Karten in Video- und Computerspielen - eine Übersicht. Kartogr Nachrichten 68(2):57–65
- Evans G, Miller J, Pena MI, MacAllister A, Winer EH (2017) Evaluating the Microsoft HoloLens through an augmented reality assembly application. In: Sanders-Reed JN and Arthur JJ (eds) Degraded environments: sensing, processing, and display. SPIE, Bellingham, Washington: Proceedings of SPIE 10197. https://doi.org/10.1117/12.2262626
- Gruenefeld U, Hsiao D, Heuten W, Boll S (2017) EyeSee: beyond reality with Microsoft Hololens. In: Simeone AL (ed) Proceedings of the 5th symposium on spatial user interaction. ACM, New York, p 148. https://doi.org/10.1145/3131277.3134362
- HTC (2019) VIVE Pro HMD support. https://www.vive.com/us/support/vive-pro-hmd/. Accessed 12 Mar 2019
- Jarvenpaa HM, Makinen SJ (2008) An empirical study of the existence of the Hype Cycle: a case of DVD technology. IEEE international engineering management conference. IEEE, Piscataway, pp 1–5. https://doi.org/10.1109/IEMCE.2008.4617999
- Kersten T, Deggim S, Tschirschwitz F, Lindstaedt MU, Hinrichsen N (2018) Segeberg 1600—Eine Stadtrekonstruktion in virtual reality. Kartogr Nachrichten 68(4):183–191
- Kraak MJ (1999) Cartography and the use of animation. In: Cartwright W, Peterson MP, Gartner G (eds) Multimedia cartography. Springer, Berlin, pp 317–326
- Loesch B, Christen M, Wüest R, Nebiker S (2015) Geospatial augmented reality—Lösungsansätze mit natürlichen Markern für die Kartographie und die Geoinformationsvisualisierung im Außenraum. In: Kersten TP (ed) Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e.V. DGPF, Münster, pp 89–97
- Müller JC (1997) GIS, Multimedia und die Zukunft der Kartographie. Kartogr Nachrichten 47(2):41–51
- Ng AKT, Chan LKY, Lau HYK (2017) A low-cost lighthouse-based virtual reality head tracking system. In: International conference on 3D immersion (IC3D). IEEE, Brussels, pp 1–5. https://doi.org/10.1109/IC3D.2017.8251910
- Peterson MP (1995) Interactive and animated cartography. Prentice Hall, Englewood Cliffs
- Taylor DRF (2005) The theory and practice of cybercartography: an introduction. In: Taylor DRF (ed) Cybercartography: theory and practice. Elsevier, Amsterdam, pp 1–13
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.