
Designing Mixed Reality-Based Indoor Navigation for User Studies



Mixed reality (MR) is increasingly applied in indoor navigation. With the development of MR devices and indoor navigation algorithms, special attention has been paid to related cognitive issues, and many user studies are being conducted. This paper gives an overview of MR technology, devices, and the design of MR-based indoor navigation systems for user studies. We propose a theoretical framework consisting of spatial mapping, spatial localization, path generation, and instruction visualization, and summarize critical factors to be considered in the design process. Four approaches to constructing an MR-based indoor navigation system under different conditions are introduced and compared. The insights gained can help researchers select an optimal design approach of MR-based indoor navigation for their user studies.




People nowadays spend most of their time indoors (Klepeis et al. 2001). Among many indoor activities, humans often need to navigate to certain places. Yet indoor navigation is a complicated task and is regarded as more difficult than outdoor navigation (Bauer et al. 2015, 2016). People get lost more easily within complex public buildings (Fellner et al. 2017), such as universities,Footnote 1 libraries, retail and manufacturing sites, airports, and hospitals. Many factors contribute to the difficulty of identifying directions; for example, indoor structures vary among buildings, and walls may hinder the view (Aksoy et al. 2020; Holscher et al. 2007). Moreover, visitors are sometimes under time pressure, which has a negative effect on navigation. Bartling et al. (2021) demonstrated this effect in a study where people were asked to go to a specific room under time pressure for an interview.

However, due to the challenges of indoor positioning, e.g. the difficulty of receiving stable GNSS (global navigation satellite system) signals, indoor navigation assistance is still limited. Devices and technologies commonly used for indoor navigation include beacons, Wi-Fi, and visual positioning systems (VPS). Mixed reality (MR) technology, which augments the real world by displaying virtual holograms and thereby introduces additional information, is promising for indoor navigation. Despite some implementations of MR-based indoor navigation systems, many cognitive issues, such as attention distribution (Bolton et al. 2015), spatial perception (Keil et al. 2020), and spatial learning (Liu et al. 2021), remain open and must be addressed for better user experiences. Research has been conducted on these cognitive issues (Joshi et al. 2020; Liu et al. 2021; Rehman and Cao 2017). Developing MR applications with comprehensive functions, including MR-based indoor navigation applications, can be difficult (Rokhsaritalemi et al. 2020). However, not all functions are necessary for research purposes. Different approaches can be applied to create MR-based indoor navigation demos for specific research purposes, and a proper approach can accelerate both the development of the application and the research of MR-based indoor navigation.

In the following sections, we first briefly introduce indoor navigation and navigation assistance, MR technology, and available devices and software. The second section summarizes a framework for designing an MR-based navigation system and the critical factors to be considered, especially for the design of experimental systems involving user studies. The third section presents four approaches for research purposes and user studies (rather than commercial use or end-user applications), with examples, and highlights the pros and cons of each approach along with its applicability. Our findings are summarized, and future work is presented in the final section.

Indoor Navigation and Navigation Assistance

Many people find it difficult to navigate in public buildings, which are often complex in design. Visual access is limited within such buildings (Holscher et al. 2007). The symmetric structure of buildings makes it harder to distinguish floors and sections during navigation (Aksoy et al. 2020; Holscher et al. 2007), and people tend to assume that the layouts of different floors are the same (Carlson et al. 2010). The furniture and the functions of rooms usually help visitors identify floors or sections in such cases, but both can change easily and frequently, which makes it difficult for visitors to establish stable anchor points in indoor spaces. Besides, most people visit these buildings only a few times, and first-time visitors usually come for specific purposes under time pressure, which makes wayfinding even more stressful (Bartling et al. 2021). All these factors indicate that everyday indoor navigation is not a trivial task.

However, not much indoor navigation assistance is available (Joshi et al. 2020). The main bottlenecks are indoor positioning and accessibility calculation. Current options for indoor positioning include Bluetooth and Wi-Fi signals. MR technology is increasingly used in indoor navigation. MR devices obtain their 3D position from simultaneous localization and mapping (SLAM) and require no additional hardware; that is, MR performs spatial mapping and spatial localization without GNSS. MR technology has some typical difficulties, such as mapping transparent objects and displaying holograms under strong lighting, both of which are less problematic indoors. Therefore, MR is potentially suitable for indoor navigation, and some companies already provide MR-based navigation services (e.g. XRGO,Footnote 2 TangarFootnote 3).

MR for research purposes differs from MR for commercial use. For example, the MR device Microsoft HoloLens 2 supports eye control, an interesting and promising interaction method for common users, but its eye movement data is not readily accessible. Kapp et al. (2021) therefore developed ARETT, an easy-to-use toolkit for MR HMDs to obtain eye movement data for scientific research. Creating MR-based indoor navigation for research also differs from creating it for commercial applications. A commercial application must function properly for the entire indoor space and should be easy to use for common users, while for research purposes a predefined path can be enough, although it might require more manual settings from the researcher. The specific requirements for the navigation application vary with the research questions. Some research questions are tied to the current technology and become less problematic as the technology matures, e.g., the discomfort caused by the weight of the device. Other research questions are more fundamental and remain to be addressed, e.g., inattentional blindness in MR-based indoor navigation (Wang et al. 2021). For different research purposes, various development approaches are available for MR indoor navigation, and the workflow should be adjusted to the research aim to meet its requirements.

MR Technology and Research

The term mixed reality became popular when the HoloLens 1 was launched by Microsoft. Prior to that, mixed reality was used as an umbrella term spanning virtual reality and augmented reality (Milgram and Kishino 1994). Currently, augmented reality and mixed reality both refer to technology that displays virtual holograms and the real world simultaneously (Çöltekin et al. 2020). This paper uses mixed reality for both mixed reality (as Microsoft refers to it) and augmented reality. Current MR devices are mainly hand-held devices (HHDs, e.g., smartphones), head-mounted devices (HMDs), and head-up displays (HUDs).

Hand-Held Devices

Most smartphones and tablets support MR. HHD MR displays augmented virtual holograms on the screen (Fig. 1) and is quite attractive for consumers. Some indoor MR navigation apps are already available for smartphones (such as XRGOFootnote 4 and INDOARFootnote 5). However, with HHD MR, users need to switch their visual attention between the device and the environment (Stähli et al. 2021). The resulting high cognitive workload may lead users to ignore potential dangers. Besides, it is not practical for multi-tasking users to hold the smartphone in their hands all the time.

Fig. 1 Example of HHD MR: Google Maps “Live View” shown on a smartphone

HMD MR Devices

Microsoft HoloLens and Google Glass are among the most widely used HMD MR devices. Users wearing the goggles/helmet see the augmented virtual elements displayed on the lenses (Fig. 2). Other MR HMDs, such as the Acer,Footnote 6 HP,Footnote 7 and Lenovo ExplorerFootnote 8 headsets, are seldom used and rarely updated. HMD MR is valued in pedestrian navigation, and the associated cognitive issues are widely studied (Liu et al. 2021; Makimura et al. 2019; Thi Minh Tran and Parker 2020). Some studies aim to find solutions within the constraints of current technology, e.g., how to arrange virtual holograms within the limited field-of-view (FOV) (Kishishita et al. 2014), while other issues related to user behavior need long-term investigation, e.g., users paying too much attention to virtual holograms and ignoring events in the real world (Krupenia and Sanderson 2006; Wang et al. 2021).

Fig. 2 Example of HMD MR: Microsoft HoloLens 2


Head-Up Displays

HUDs have been installed in many cars in the form of MR dashboards, such as MBUX AR in Mercedes-Benz cars and Phiar. The augmented virtual elements are displayed either directly on the windshield (Fig. 3a) or on an extra screen with a real-time camera stream (Fig. 3b). A HUD performs spatial mapping differently from an HMD and is beyond the focus of this paper.

Fig. 3 Examples of HUDs (screenshots of a video by Phiar): a displayed on the windshield, b displayed on an extra screen with real-time camera stream

MR Software Options

Many software options are available to develop MR products, and new toolkits are continuously being developed. Here, we briefly introduce the most commonly used software and strongly encourage readers to explore their functionality and services. Unity and Unreal are commonly used and work across platforms. MR indoor navigation can be built with ARKit (for iOS), ARCore (for Android and iOS), or the Mixed Reality Toolkit (MRTK, for Windows, mixed reality HMDs, Android, and iOS). Many companies also provide software development kits to facilitate the development; WebXR is one example.Footnote 9 Navigation modules, such as Mapbox Vision AR for Android, are also available.


A Development Framework of MR-Based Navigation System

Given the current location of the user and his/her destination, an MR-based navigation system should be able to generate the navigation path and display it to the user. Figure 4 illustrates a framework for designing such a system.

Fig. 4 A general framework for designing an MR-based navigation system

Spatial Mapping

Spatial mapping prepares the model/map of the indoor environment needed to generate the path to an indoor location, which is usually beyond sight. The MR device maps the real-world surfaces in the nearby environment and has the potential to automatically generate a building information model (BIM) (Hübner et al. 2019). Standard models for better indoor navigation are also being developed; for example, IndoorGML version 1 was released in 2014, partially motivated by the urgent requirements of indoor navigation (IndoorGML OGC 2020). The destination needs to be defined by users or predefined by researchers.

Spatial Localization

Spatial localization determines the user’s current location. It can be based on visual/image markers, beacons, or a visual positioning system (VPS) (Badmin 2020).
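To illustrate marker-based localization, the device pose can be recovered by composing the marker's known pose in the building frame with the inverse of its observed pose in the device frame. The following is our own simplified 2D sketch in Python (function names are illustrative; real MR frameworks work with full 6-DoF poses):

```python
import math

def compose(a, b):
    """Compose two 2D poses (x, y, theta): result = a followed by b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of a 2D pose, so that compose(p, invert(p)) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def localize(marker_in_building, marker_in_device):
    """Device pose in the building frame from a single observed marker."""
    return compose(marker_in_building, invert(marker_in_device))
```

For example, if a marker known to sit at (5, 2) in the building frame is observed 2 m straight ahead of the device, `localize` places the device at (3, 2) facing the marker.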

Path Generation

Path generation is the process of creating a walkable path from the start point to the destination. Many algorithms are available for indoor navigation path planning, such as Dijkstra’s algorithm (Fan and Shi 2010) and A* (Wang and Lu 2012).
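As an illustration, a minimal A* planner on a 2D occupancy grid might look as follows (a simplified Python sketch of the general algorithm; a production system would plan on a 3D navigation mesh or an IndoorGML graph instead of a grid):

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()                     # tie-breaker so the heap never compares cells
    open_set = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, _, cur = heapq.heappop(open_set)
        if cur == goal:               # reconstruct the path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt))
    return None
```

With a uniform step cost, as here, A* reduces to Dijkstra's algorithm guided by the heuristic; weighted edges (stairs, doors) only change how `ng` is computed.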

Instruction Visualization

Once the path is generated, it is displayed together with navigation instructions visually (Huang et al. 2012; Liu et al. 2021) and/or audibly (Fellner et al. 2017; Huang et al. 2012). Instruction visualization, i.e., the process of determining which instruction to display and how to display it, is also crucial for MR-based indoor navigation systems (Cock et al. 2019; Liu et al. 2021; Liu and Meng 2020).
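A simple way to decide which instruction to display is to compare the user's heading with the direction to the next waypoint. The sketch below is illustrative Python; the thresholds and names are our own assumptions, not taken from any cited system:

```python
import math

def next_instruction(user_pos, user_heading, waypoint, arrive_radius=1.0):
    """Coarse instruction toward the next waypoint.

    user_heading is in radians; returns 'arrive', 'straight', 'left' or 'right'.
    """
    dx, dy = waypoint[0] - user_pos[0], waypoint[1] - user_pos[1]
    if math.hypot(dx, dy) < arrive_radius:
        return "arrive"
    # Signed angle between heading and the bearing to the waypoint, in (-pi, pi].
    rel = (math.atan2(dy, dx) - user_heading + math.pi) % (2 * math.pi) - math.pi
    if abs(rel) < math.radians(30):          # within a 60-degree forward cone
        return "straight"
    return "left" if rel > 0 else "right"
```

The returned label would then select the corresponding hologram (e.g., a turn arrow) to render at the waypoint.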

Design Factors for User Study

Many factors need to be considered when designing an MR indoor navigation system for user studies. For example, the study area should be easily accessible and safe for the users and ideally have constant lighting conditions to ensure stable visualization (holograms are difficult to see under strong lighting) and comparable results among users. Liu and Meng (2020) summarized relevant factors in this regard.

Besides, the interface design should ideally be inclusive, e.g., considering the needs of color-blind or visually impaired users (Qiu 2019). The interface or algorithm should also run smoothly without exceeding the computing power of the devices (Curtsson 2021).

Design Approaches of MR-Based Indoor Navigation Systems for User Studies

When choosing an approach for developing MR-based navigation, at least two factors need to be considered: (1) the spatial anchor and (2) path generation. A spatial anchor is a fixed coordinate system that is generated and tracked by MR and ensures that anchored holograms remain precisely located.Footnote 10 In an MR-based navigation system, the spatial anchor can be local (LA, stored on the device) or cloud-based (CA, stored in the cloud, e.g. Azure Spatial AnchorFootnote 11), and the path can be predefined (PP) or generated during navigation (GP). Therefore, four approaches are feasible for developing the system. They are compared regarding their requirements on materials, resources, and performance (Table 1). We also give examples of each approach.

Table 1 Comparison of four approaches

Local Anchor–Predefined Path Approach

In this approach, local spatial anchors and predefined paths are used. The spatial anchor is stored on the device and cannot be shared across multiple devices. It is loaded and manually anchored to a new position each time the software runs. In this case, an internet connection and a BIM are not required, but the anchor position may change, even if only slightly, between runs or when multiple devices are used in the research. Applying only one spatial anchor reduces the time and workload of anchoring but may increase localization errors. Therefore, the number of necessary anchors depends on the study area and the research questions.

A predefined path does not mean that only one path is available: multiple destinations/paths can be set during development. However, once deployed, the paths are fixed, and the visualization cannot be changed; for example, the path cannot be adjusted to avoid a passer-by. Besides, since all holograms are locked to one anchor, errors accumulate: the farther a hologram is from the anchor, the larger its misalignment. The anchor should therefore be placed in the middle of the study area rather than at the start point, and the study area should not be too big. This approach is quick to build and requires little coding. It is thus suitable for rapid assessment of interface and element designs and for fast feedback on cognitive issues. It is also convenient for research that must be done without internet access. This workflow suits beginners, small projects, and rapid prototyping. However, since it requires manual anchoring, it is unsuitable for research involving large user groups.
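The accumulation of misalignment can be estimated with simple geometry: a small angular error θ in the anchor's orientation shifts a hologram at distance d laterally by roughly d·sin θ. A back-of-the-envelope illustration (our own numbers, for intuition only):

```python
import math

def lateral_offset(distance_m, angular_error_deg):
    """Lateral misalignment (in metres) of a hologram placed distance_m away
    from an anchor whose orientation is off by angular_error_deg degrees."""
    return distance_m * math.sin(math.radians(angular_error_deg))

# A 1-degree anchoring error is barely visible at the anchor itself, but along
# a 40 m corridor it shifts the farthest hologram by about 0.7 m. Placing the
# anchor mid-path halves the worst-case distance and thus the worst-case offset.
```

This is why the text above recommends a central anchor and a study area that is not too large.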

Example. Building with Local Spatial Anchor and a Predefined Path

This example uses a local spatial anchor and a predefined path to create an indoor navigation demo, which was used to test spatial learning during navigation (Liu et al. 2021). It was built using Unity, MRTK, and HoloLens 1.

Figure 5 shows the workflow of this example. Although a BIM is not mandatory, the HoloLens is used to create a rough model of the study area and map the layout (Fig. 5a). This allows the researcher to place the holograms in the correct positions. A floor map also helps to determine the number of turns, the path length, etc.

Fig. 5 Workflow of the example LA-PP approach. a Spatial mapping result from HoloLens. b Prefabs viewed in Unity. c Overlay of the spatial mapping result to set the prefabs along the path. d Overview of the predefined path. e User’s view from the start point

The holograms should also be prepared (Fig. 5b). In this case, pictorial landmarks and arrows are used. MRTK provides basic GameObjects, such as cubes, spheres, and arrows. The pictorial landmarks are generated from png-format pictures: the .png files are used as the material of the basic GameObjects, whose size can be adapted. Scripts can also be attached to the GameObjects, which are then made into prefabs for re-use. For example, when users are moving around, the landmarks should always face them to remain identifiable; this can be realized with the billboard function provided by MRTK. The prefabs can be placed according to the model (Fig. 5c, d) or the floor map. At the beginning of the user study, the researcher needs to set the spatial anchor manually. It is recommended that the researcher walk through the whole study area and ensure the misalignment is acceptable. The user’s view is shown in Fig. 5e.
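The billboard behavior boils down to rotating the hologram around the vertical axis toward the user. A minimal sketch of the underlying computation (in Python for illustration; MRTK's actual Billboard script is a C# Unity component):

```python
import math

def billboard_yaw(hologram_pos, user_pos):
    """Yaw in degrees (rotation around the vertical axis) that turns a
    hologram's front toward the user. Only yaw is changed, so text stays
    upright. Coordinates follow the Unity convention: y up, z forward."""
    dx = user_pos[0] - hologram_pos[0]
    dz = user_pos[2] - hologram_pos[2]
    return math.degrees(math.atan2(dx, dz))
```

In the real component this yaw would be re-applied to the GameObject's transform every frame as the user walks.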

This demo used one spatial anchor at the middle point of the path (Fig. 6). The spatial anchor is a rectangle instead of a point, which allows both horizontal and vertical alignment with the real world and ensures the direction of the whole path is correct. The anchor was designed grey and transparent so that it would not affect the users’ navigation. This design proved effective, as most users did not notice the anchor.

Fig. 6 The location of the local spatial anchor on a predefined route
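The rectangular anchor effectively provides two or more reference points, which is exactly what is needed to fix both the translation and the yaw of the virtual coordinate system; a single point would fix only the translation. A 2D sketch of this alignment (illustrative Python; real MR anchoring solves the same problem in 3D):

```python
import math

def align_from_two_points(virtual_a, virtual_b, real_a, real_b):
    """Rigid 2D transform (yaw, tx, ty) mapping virtual coordinates onto real
    ones, determined by two corresponding points, e.g. two corners of the
    anchor rectangle. Maps v -> (cos(yaw)*vx - sin(yaw)*vy + tx, ...)."""
    yaw = (math.atan2(real_b[1] - real_a[1], real_b[0] - real_a[0])
           - math.atan2(virtual_b[1] - virtual_a[1], virtual_b[0] - virtual_a[0]))
    c, s = math.cos(yaw), math.sin(yaw)
    tx = real_a[0] - (c * virtual_a[0] - s * virtual_a[1])
    ty = real_a[1] - (s * virtual_a[0] + c * virtual_a[1])
    return yaw, tx, ty
```

Once yaw and translation are known, every hologram along the predefined path is placed by applying the same transform.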

Local Anchor–Generated Path Approach

Like the LA-PP approach, the LA-GP approach does not require an internet connection; the spatial anchor is not constant for each user and cannot be shared across devices. This means the visualization of the spatial anchor should also be large enough for accurate localization. This approach is likewise not ideal for studies with an extensive study area or large user groups. A BIM is required so that the location of the destination is known and can be used in path generation. This approach is more flexible and suitable for exploring interaction and dynamic situations, for example, how users find a preferred route or avoid obstacles.

Different from the LA-PP approach, in the LA-GP approach a path between two locations can be generated on the user’s command by a path generation algorithm. This allows users to set preferred destinations or avoid obstacles in real time. However, a BIM is needed to generate the path, and integrating the path generation algorithm into the demo requires higher coding ability.

One example of the LA-GP approach is the work of Qiu (2019) using Unity, HoloToolkit (the predecessor of MRTK), and HoloLens 1, with the aim of designing an indoor navigation system that can avoid obstacles in real time. The A* search algorithm and a BIM were used, and the indoor space was segmented into many nodes. Once the user sets a destination by hand gesture or voice control, the software generates a path. While the user is walking, the HoloLens constantly maps the spatial environment and checks whether there are obstacles (e.g., a passer-by) on the remaining path. If so, the path is re-calculated to avoid the obstacle.
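The re-planning logic of such a system can be sketched as follows (illustrative Python with a toy breadth-first planner standing in for A*; the grid size and function names are our own assumptions, not Qiu's implementation):

```python
from collections import deque

def bfs_path(start, goal, blocked, size=5):
    """Tiny breadth-first planner on a size x size grid; stands in for A*."""
    queue, came = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in came):
                came[nxt] = cur
                queue.append(nxt)
    return None

def replan_if_blocked(path, blocked, goal):
    """Re-plan from the current position only when spatial mapping reports an
    obstacle (e.g. a passer-by) on the remaining path; otherwise keep it."""
    if any(cell in blocked for cell in path):
        return bfs_path(path[0], goal, blocked)
    return path
```

Each mapping update only adds newly detected obstacles to `blocked`, so re-planning is triggered sparingly rather than every frame.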

Cloud Anchor–Predefined Path Approach

The resources required in the CA-PP approach are similar to those in the LA-PP approach: a BIM and a path generation algorithm are not necessary. However, it requires an internet connection. The holograms are anchored to an online anchor, e.g., an Azure Spatial Anchor, and a cloud spatial anchor module is needed to upload and download the anchor to/from the cloud. It therefore needs more code and higher coding capability compared to the LA-PP approach. The positions of the holograms remain the same across sessions, which spares the effort of setting the spatial anchor each time, and the spatial anchors can be shared by different devices, so multiple users can collaborate. Besides, since the paths are predefined, the holograms can be locked to one spatial anchor; in this case, once the spatial anchor is set, no internet connection is necessary. If multiple spatial anchors are used, however, a stable internet connection is required. The CA-PP approach is thus suitable for user studies that last a long time, involve many users, and have limited internet connectivity.
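The upload/download cycle of a cloud anchor can be summarized by the following toy sketch (plain Python; the class and method names are our own and only mimic the data flow of a real service such as Azure Spatial Anchors, whose actual SDK is asynchronous and C#-based):

```python
class CloudAnchorStore:
    """Toy stand-in for a cloud anchor service: a shared anchor_id -> pose map
    that outlives individual sessions and devices."""

    def __init__(self):
        self._anchors = {}

    def upload(self, anchor_id, pose):
        """Researcher's device: store the anchor once, in the first session."""
        self._anchors[anchor_id] = pose

    def download(self, anchor_id):
        """Any device, any later session: resolve the anchor without manual
        placement; returns None if the anchor was never uploaded."""
        return self._anchors.get(anchor_id)

store = CloudAnchorStore()
store.upload("corridor-start", (0.0, 0.0, 0.0))   # first run only
resolved = store.download("corridor-start")        # every later session
```

The key property this captures is that anchoring happens once, after which every session and device resolves the same pose.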

Figure 7 shows an example of using the CA-PP approach for building MR indoor navigation. In this demo, Unity, MRTK, and HoloLens 2 were used. The layout was measured by MR measuring apps on a smartphone (Fig. 7a). The holograms (Fig. 7b) and the path (Fig. 7c) are designed similarly to those in the LA-PP example. The main difference is that the spatial anchor is uploaded to the Azure platform on the first run; afterwards, it can be loaded in each session (Fig. 7d). Therefore, the spatial anchor does not have to be at the middle point and can be set at the start point or endpoint. Figure 7e shows the user’s view at the start point.

Fig. 7 Workflow of the CA-PP example. a MR measurements of the study area. b Prefabs viewed in Unity. c Predefined path. d Upload and download of the local spatial anchor to/from the cloud. e User’s view from the start point

Cloud Anchor–Generated Path Approach

The abovementioned three approaches can fulfill the requirements of most studies. However, the paths in those approaches are usually kept short to allow stable visualization, which cannot reveal cognitive issues that might only occur during more extended usage of MR-based indoor navigation (Curtsson 2021). Besides, some cognitive issues may only occur during intensive interactions among different users (Liu et al. 2021). The CA-GP approach has the advantage for applications in a bigger study area and with many users.

In this approach, a path generation algorithm and a BIM are needed, and a stable internet connection is necessary for a smooth navigation experience. Since the spatial anchors are saved online, no manual anchoring is needed in each session. A possible solution is to provide two user roles. The first role is for the researcher, who walks around, sets and uploads spatial anchors to the cloud, and edits their properties. The second role is for the users, who set destinations and navigate themselves. Takahiro Miyaura provides an example.Footnote 13 In this project, the user can interactively create different paths, upload them to the cloud, and later find the path to a specific destination.

Conclusion and Outlook

This paper introduces a general framework for using MR technology for indoor navigation assistance. To design an MR-based indoor navigation system for research, especially for user studies, we analyzed four approaches based on whether the spatial anchor is local or on the cloud and whether the path is predefined or generated in real time. We recommend that beginners/non-developers use a local spatial anchor and a predefined path, which is less flexible but demands little coding capability; we also recommend this approach for study areas without internet connection. However, the spatial anchor needs to be set manually, making the approach less suitable for large groups of users. For research involving many interactions, we recommend the local anchor with real-time path generation, since this approach is more flexible. For studies involving many users, a cloud anchor combined with a predefined path is feasible. To study cognitive issues after long walking distances or with intensive interaction among different MR users, cloud anchors should be used, and the path should be generated dynamically.

Current indoor navigation is mostly studied in simplified environments with a single building or connected buildings, which significantly restricts the transferability of research findings. User-friendly navigation services are needed for more realistic settings with complex buildings and integrated indoor and outdoor environments. This paper assists researchers in selecting a proper approach for the development of a research-oriented MR-based indoor navigation system in more general environments.

Building a fully functional commercial MR-based indoor navigation system is complex and requires a lot of effort. However, a workable research-oriented MR-based indoor navigation system is much easier to build. With limited but necessary functionality, such systems support the exploration of cognitive issues in MR-based navigation, improve our understanding, accelerate the application of MR technology, and improve MR user satisfaction.


  1. Steerpath Kiosk Maps (2021). Accessed 28 March 2022.

  2. XRGO (2021) XRGO|We connect the industry with X-Reality (AR, MR, VR). Accessed 28 March 2022.

  3. Tangar—Indoor navigation using Computer Vision and AR (2021) Home—Tangar—Indoor navigation using Computer Vision and AR. Accessed 28 March 2022.

  4. XRGO (2020) Augmented Reality Indoor Navigation App for iOS or Android | XRGO. Accessed 28 March 2022.

  5. INDOAR (2022) INDOAR for Museums | Guided Tours & Immersive Experiences with augmented reality | ViewAR. Accessed 28 March 2022.

  6. Acer (2022) Windows Mixed Reality Headset. Accessed 28 March 2022.

  7. HP (2022) HP Windows Mixed Reality Headset | Discover a new level of immersion—HP Store Schweiz. Accessed 28 March 2022.

  8. Lenovo (2022) Lenovo Explorer | Headset for Windows Mixed Reality | Lenovo UK. Accessed 28 March 2022.

  9. WebXR (2021) Immersive Web Developer Home. Accessed 28 March 2022.

  10. Microsoft (2021), Spatial anchors, Accessed 25 April 2022.

  11. Spatial Anchors (2022), Azure Spatial Anchors | Microsoft Azure. Accessed 28 March 2022.

  12. Nischita (2020), Anchoring Objects with Local Anchors and Persisting with HoloLens 2, Accessed 25 April 2022.

  13. Takahiro Miyaura (2022) WayFindingSamplesUsingASA. Accessed 28 March 2022.


  • Aksoy E, Aydin D, İskifoglu G (2020) Analysis of the correlation between layout and wayfinding decisions in hospitals. Megaron 4:509–520.

  • Badmin (2020) Indoor AR Navigation. Bitforge AG

  • Bartling M, Robinson AC, Resch B, Eitzinger A, Atzmanstorfer K (2021) The role of user context in the design of mobile map applications. Cartogr Geogr Inf Sci 48:432–448.


  • Bauer C, Ullmann M, Ludwig B (2015) Displaying landmarks and the user’s surroundings in indoor pedestrian navigation systems. J Ambient Intell Smart Environ.


  • Bauer C, Müller M, Ludwig B (2016) Indoor pedestrian navigation systems: is more than one landmark needed for efficient self-localization? In: Häkkila J, Ojala T (eds) Proceedings of the 15th International Conference on Mobile and Ubiquitous. ACM, New York, NY, United States, pp 75–79

  • Bolton A, Burnett G, Large DR (2015) An investigation of augmented reality presentations of landmark-based navigation using a head-up display. In: Burnett G (ed) Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. Association for Computing Machinery, New York, NY, United States, pp 56–63

  • Carlson LA, Hölscher C, Shipley T, Dalton R (2010) Getting lost in buildings. Curr Dir Psychol Sci 19:284–289.


  • de Cock L, Viaene P, Ooms K, van de Weghe N, Michels R, Wulf A, Vanhaeren N, de Maeyer P (2019) Comparing written and photo-based indoor wayfinding instructions through eye fixation measures and user ratings as mental effort assessments. J Eye Move Res 12:1–14.

  • Çöltekin A, Lochhead I, Madden M, Christophe S, Devaux A, Pettit C, Lock O, Shukla S, Herman L, Stachoň Z, Kubíček P, Snopková D, Bernardes S, Hedley N (2020) Extended reality in spatial sciences: a review of research challenges and future directions. IJGI 9:439.


  • Curtsson F (2021) Designing an augmented reality based navigation interface for large indoor spaces

  • Fan D, Shi P (2010) Improvement of Dijkstra's algorithm and its application in route planning. In: 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, pp 1901–1904

  • Fellner I, Huang H, Gartner G (2017) “Turn left after the wc, and use the lift to go to the 2nd floor”—generation of landmark-based route instructions for indoor navigation. IJGI 6:183.


  • Holscher C, Buchner SJ, Brosamle M, Meilinger T, Strube G (2007) Signs and maps—cognitive economy in the use of external aids for indoor navigation. In: 29th Annual Conference of the Cognitive Science Society (CogSci 2007), vol 29, pp 377–382

  • Huang H, Schmidt M, Gartner G (2012) Spatial knowledge acquisition with mobile maps, augmented reality and voice in the context of GPS-based pedestrian navigation: results from a field test. Cartogr Geogr Inf Sc 39:107–116.


  • Hübner P, Steven L, Weinmann M, Wursthorn S (2019) Evaluation of the Microsoft HoloLens for the mapping of indoor building environments

  • IndoorGML OGC (2020) IndoorGML-OGC standard for indoor spatial information. Accessed 1 March 2022

  • Joshi R, Hiwale A, Birajdar S, Gound R (2020) Indoor navigation with augmented reality. In: Kumar A, Mozar S (eds) ICCCE 2019: Proceedings of the 2nd International Conference on Communications and Cyber Physical Engineering, 1st edn. Springer Singapore, Singapore, pp 159–165

  • Kapp S, Barz M, Mukhametov S, Sonntag D, Kuhn J (2021) ARETT: augmented reality eye tracking toolkit for head mounted displays. Sensors (Basel).


  • Keil J, Korte A, Ratmer A, Edler D, Dickmann F (2020) Augmented reality (AR) and spatial cognition: effects of holographic grids on distance estimation and location memory in a 3D indoor scenario. PFG 88:165–172.


  • Kishishita N, Kiyokawa K, Orlosky J, Mashita T, Takemura H, Kruijff E (2014) Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp 177–186

  • Klepeis NE, Nelson WC, Ott WR, Robinson JP, Tsang AM, Switzer P, Behar JV, Hern SC, Engelmann WH (2001) The national human activity pattern survey (NHAPS): a resource for assessing exposure to environmental pollutants. J Expo Sci Environ Epidemiol 11:231–252.

  • Krupenia S, Sanderson PM (2006) Does a head-mounted display worsen inattentional blindness? Proc Hum Fact Ergon Soc Annu Meet 50:1638–1642

  • Liu B, Meng L (2020) Doctoral colloquium—towards a better user interface of augmented reality based indoor navigation application. In: 2020 6th International Conference of the Immersive Learning Research Network (iLRN). IEEE, pp 392–394

  • Liu B, Ding L, Meng L (2021) Spatial knowledge acquisition with virtual semantic landmarks in mixed reality-based indoor navigation. Cartogr Geogr Inf Sci 48:305–319

  • Makimura Y, Shiraiwa A, Nishiyama M, Iwai Y (2019) Visual effects of turning point and travel direction for outdoor navigation using head-mounted display. In: Chen JYC, Fragomeni G (eds) Virtual, augmented and mixed reality, vol 11574. Springer, Cham, pp 235–246

  • Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Trans Inf Syst 77:1321–1329

  • Qiu L (2019) A real-time obstacle-avoiding indoor navigation system in augmented reality. Master Thesis, Technical University of Munich

  • Rehman U, Cao S (2017) Augmented-reality-based indoor navigation: a comparative analysis of handheld devices versus Google Glass. IEEE Trans Hum Mach Syst 47:140–151

  • Rokhsaritalemi S, Sadeghi-Niaraki A, Choi S-M (2020) A review on mixed reality: current trends, challenges and prospects. Appl Sci 10:636

  • Stähli L, Giannopoulos I, Raubal M (2021) Evaluation of pedestrian navigation in smart cities. Environ Plan B Urban Anal City Sci 48:1728–1745

  • Thi Minh Tran T, Parker C (2020) Designing exocentric pedestrian navigation for AR head mounted displays. In: Bernhaupt R, Mueller F, Verweij D, Andres J, McGrenere J, Cockburn A, Avellino I, Goguey A, Bjørn P, Zhao S, Samson BP, Kocielnik R (eds) Extended abstracts of the 2020 CHI conference on human factors in computing systems. ACM, New York, NY, USA, pp 1–8

  • Wang M, Lu H (2012) Research on algorithm of intelligent 3D path finding in game development. In: 2012 International Conference on Industrial Control and Electronics Engineering. IEEE, pp 1738–1742

  • Wang Y, Wu Y, Chen C, Wu B, Ma S, Wang D, Li H, Yang Z (2021) Inattentional blindness in augmented reality head-up display-assisted driving. Int J Hum Comput Interact

Acknowledgements

This work is supported by the China Scholarship Council under Grant No. 201806040219 and Grant No. 202006040025. The authors appreciate the efforts of the anonymous reviewers and the editor.


Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information


Corresponding author

Correspondence to Bing Liu.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Liu, B., Ding, L., Wang, S. et al. Designing Mixed Reality-Based Indoor Navigation for User Studies. KN J. Cartogr. Geogr. Inf. 72, 129–138 (2022).


Keywords

  • Mixed reality
  • Indoor navigation
  • Development approaches
  • User study