
Interface of mixed reality: from the past to the future

  • Steven Szu-Chi Chen
  • Henry Duh
Review Paper

Abstract

Mixed reality (MR) is an emerging technology that could shape the future of our everyday lives through its unique approach to presenting information. Technology is changing rapidly: information can be presented on traditional computer screens following the WIMP (Windows, Icons, Menus, and Pointing) interface model, in virtual reality through a head-mounted display, or in MR, which presents information through a combination of virtual and physical elements. This paper classifies MR interfaces by applying a text mining method to a database of 4296 relevant research papers published over the last two decades. The classification reveals the trends relating to each topic and the relations between them. This paper reviews the earlier studies, discusses the recent developments in each topic area, and summarizes the advantages and disadvantages of the MR interface. Our objective is to assist researchers in understanding the trend for each topic and to allow them to focus on the research challenges where technological advancements in the MR interface are most needed.

Keywords

Augmented reality · Mixed reality · User interface · Human–computer interaction

Introduction

In this article, we use the term mixed reality (MR) to cover related applications from augmented reality (AR) to the whole virtuality continuum (Milgram and Kishino 1994). An interface, by definition, is part of a shared boundary where two or more components, systems, or subjects meet and interact with each other. Interfaces on traditional computer systems mostly follow the WIMP (Windows, Icons, Menus, and Pointing) interface model, but because MR blurs the line between the virtual and physical worlds and is no longer limited by the screen, designing an MR interface requires new modalities.

This paper focuses on classifying MR articles which are related to interfaces in order to narrow down and identify the research gaps. Under each category, previous studies with the highest citation counts are reviewed to discover the trends and are then linked to recent studies. Our objective is to assist researchers in understanding the trends for each topic and to allow them to focus on the research challenges where technological advancements in the MR interface are most needed.

MR interface

Analyzing scientific publications is one of the most common methods to identify research trends (Dey et al. 2005). Predictions can be made by measuring the annual growth of publications or citation counts. Usually the classification process is done manually and can be affected by personal experience. We use Latent Dirichlet Allocation (LDA), a generative probabilistic model, to automatically classify the articles and generate categories without such bias.

The interface and MR are both broad topics that draw on many technologies and research areas. However, because MR is a medium that requires an interface, cross-disciplinary research between MR and interface design has become a sub-discipline of MR.

Traditional interfaces on computer systems mostly follow the WIMP graphical user interface model, but due to the nature of MR, WIMP cannot be applied directly (Beaudouin-Lafon 2000). Our approach is to identify research trends by using a text-mining method that automatically classifies articles by their main topic into six sub-topics for analysis.

Data source

The database used for this analysis is Scopus, the largest abstract and citation database of peer-reviewed literature, with about 70,000 institutional profiles, 69 million items, 12 million author profiles and 1.4 billion cited references dating back to 1970 (Elsevier 2018). It therefore provides an extensive collection of the existing articles on MR.

Selection criteria

The keywords (“augmented reality” OR “mixed reality” AND interface) are used to retrieve articles related to augmented reality and mixed reality with an interface. A total of 4696 papers were retrieved on 04/09/2018. Restrictions on document type were added to discard unusable records such as conference reviews and books, as the former cannot be counted as articles and the latter usually duplicate the book-chapter document type. This left 4316 articles, of which only 4296 abstracts were accessible. We use the articles’ abstracts instead of the titles or keywords, as the abstracts provide more extensive information for topic generation and trend analysis.
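The paper does not include its retrieval scripts; the following Python sketch merely illustrates the document-type filtering described above on a hypothetical Scopus CSV export (the file name and the column labels “Document Type” and “Abstract” are assumptions and may differ from an actual export).

```python
import pandas as pd

# Hypothetical export of the Scopus search results.
records = pd.read_csv("scopus_export.csv")

# Drop document types that cannot be counted as articles.
excluded = {"Conference Review", "Book"}
records = records[~records["Document Type"].isin(excluded)]

# Keep only records whose abstract is available for topic modelling.
records = records[records["Abstract"].notna() & (records["Abstract"] != "")]

print(len(records), "articles with accessible abstracts")
```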

Text mining

We use Latent Dirichlet Allocation (LDA) (Blei et al. 2003), which is a generative probabilistic model for text mining, to identify the key topics in which most researchers are interested. This text mining method has been widely used in recommendation systems and in other research (Chen et al. 2017, 2018).

The stop words (Luhn 1957) we use are based on MATLAB’s general stop words list. Additional words such as “augmented”, “interface”, etc. were added to remove the core selection words, and “paper”, “research”, “result”, etc. were added to remove some of the most common words in academic papers related to computer science.
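The authors report using MATLAB’s stop-word list, and no code is published with the paper; the sketch below is only an illustrative Python/scikit-learn version of the described pipeline (custom stop words, a six-topic LDA model, and the highest-weight words per topic). The variable `abstracts` is a placeholder for the 4296 retrieved abstracts.

```python
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder: the list of retrieved abstracts (one string per article).
abstracts = ["..."]

# General stop words plus the core selection words and common academic terms.
custom_stop_words = list(ENGLISH_STOP_WORDS) + [
    "augmented", "mixed", "reality", "interface",
    "paper", "research", "result", "results",
]

vectorizer = CountVectorizer(stop_words=custom_stop_words, lowercase=True)
X = vectorizer.fit_transform(abstracts)

# Fit a six-topic LDA model, mirroring the six topics reported in the paper.
lda = LatentDirichletAllocation(n_components=6, random_state=0)
doc_topics = lda.fit_transform(X)   # per-document topic distribution

# Highest-weight words per topic, as used for the word clouds and labels.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {k + 1}: {', '.join(top)}")
```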

Topic generation

Figure 1 presents the results as word clouds showing the six identified topics, where the size of each word represents its weight in the topic distribution after convergence. We manually assign one word as a label for each topic: either the word with the highest weight or the word which best summarizes and represents the whole topic. Topics 1 and 4 simply use the keyword with the highest weighting. Topic 2 uses “education” to encompass everything related to learning. Although the articles in topic 3 are more focused on surgery, we believe “medical” is a label better suited to cover this topic comprehensively. For topic 5 we choose “technical” as the representation because some articles discuss new algorithms or the optimization of algorithms for tracking, and some discuss MR from a hardware aspect. For topic 6, we use “application” instead of “mobile” because not every article which discusses applications is implemented on a mobile device, but every application on a mobile device is counted as an application.
Fig. 1

Word clouds for MR interface

Of the 4296 articles on MR interfaces, the largest number were on applications (1349), followed by user (966), technical (811), design (415), education (407), and medical (348).

Trend analysis

All the retrieved articles fall into one of the six topics categorized by the LDA model. We further divide the articles into seven chronological periods (1997–1999, 2000–2002, 2003–2005, 2006–2008, 2009–2011, 2012–2014, and 2015–2017). Figure 2 plots the total number of publications over the seven time periods and shows a steady increase over the years. The number of publications related to MR interfaces increased more than tenfold from only 102 publications in 1997–1999 to 1107 publications in 2015–2017.
Fig. 2

Number of publications through the years
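Assuming the hypothetical `records` table and `doc_topics` matrix from the earlier sketches (and that their rows are aligned), the per-period counts underlying Figs. 2 and 3 could be reproduced along the following lines; the “Year” column name is an assumption.

```python
import numpy as np
import pandas as pd

# Dominant topic per article (topics numbered 1..6), assuming row alignment.
records["topic"] = doc_topics.argmax(axis=1) + 1

# Three-year periods matching the analysis (1997-1999, ..., 2015-2017).
bins = np.arange(1997, 2021, 3)                     # period edges
labels = [f"{b}-{b + 2}" for b in bins[:-1]]
records["period"] = pd.cut(records["Year"], bins=bins, right=False, labels=labels)

totals_per_period = records.groupby("period").size()          # basis of Fig. 2
topic_trends = records.groupby(["period", "topic"]).size()    # basis of Fig. 3
print(totals_per_period)
```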

Figure 3 shows the trends for each topic over the seven time periods. Publications on MR interfaces in the design and medical topics reached saturation in 2006. However, publications in the topics education, user, and application are still increasing steadily.
Fig. 3

Topic trends

User

This topic focuses on the interaction between the user and the objects or environment around them. Earlier studies investigated using “wearable computer systems” to implement MR to support users in their everyday lives in relation to their interactions with the world around them. Researchers discussed the potential of MR as a personal digital assistant, which provides a better way to manage significant amounts of additional information (Starner et al. 1997). Others designed a prototype to explore the urban environment and discussed the hardware requirements to match the users’ needs in display and tracking (Feiner et al. 1997). In their later studies, they proposed four different user interfaces and discussed the differences between indoor/outdoor use and MR/traditional screens (Höllerer et al. 1999).

User interfaces on traditional computer systems mostly follow the WIMP graphical user interface model, which made computers accessible to a broad audience for a variety of applications. But the WIMP model cannot be applied directly to MR. Therefore, several researchers tried to extend WIMP for MR by introducing the Instrumental Interaction model, which extends, generalizes, and operationalizes the principles of direct manipulation from an interface point of view, and demonstrates the descriptive, comparative and generative power of the model when used to analyze the WIMP (Beaudouin-Lafon 2000). Some researchers tried to go beyond WIMP and search for a 3D user interface metaphor as powerful as the 2D design, believing that the MR interface requires a broader design approach that integrates multiple user interface dimensions before a successor to the 2D user interface metaphor can emerge (Butz et al. 1999; Schmalstieg et al. 2002). Some researchers tried to apply view management to MR and focus on layout algorithms for more comfortable interaction (Bell et al. 2001). Others explored the possibility of interfaces from different approaches, such as MR desk (Koike et al. 2001) and MR book (Billinghurst et al. 2001a).

Following the interface design principles of direct manipulation, researchers focused on using gesture as input to interact with MR. The functions of MR are not naturally mapped to specific gestures. Researchers proposed a procedure for developing an intuitive, ergonomic gesture-based interface (Nielsen et al. 2003). The procedure includes the selection of gestures and testing the selected gestures’ learning rate, ergonomics, and intuitiveness from a user-centered view. A user might use two or three different gestures for the same instruction, which allows the developer some flexibility in choosing a gesture for a specific function. Gesture-based interfaces are suitable for different display devices such as tabletop screens, projection on walls, tablets, and even mobile devices. Users generally find that using gesture and posture to interact with MR is more enjoyable and efficient than using the mouse and keyboard. However, errors are more likely to occur when using a virtual keyboard. Compared with a real keyboard, users reported that the lack of tactile feedback during keypresses made text entry awkward, since it was difficult to determine key boundaries (Malik and Laszlo 2004).

Hand tracking or fingertip tracking for the direct manipulation of virtual objects is a crucial part of gesture-based interaction. In the research on hand tracking, due to technical issues, researchers used various methods for tracking, such as bare hands, hands with cloth gloves, and even gloved hands with markers, to find the method that resulted in the best efficiency with the least equipment. Researchers built a system that allows users to manipulate virtual objects with two fingers by wearing a glove with markers on it. They believed haptic feedback and the stencil buffer are the two key features for the natural and intuitive manipulation of objects in MR and that both greatly enhance the usability of the glove. Conversely, they believed the lack of fine-grained depth cues in their system made the interaction difficult, suggesting that depth cues might be another key feature for the natural and intuitive manipulation of virtual objects (Buchmann et al. 2004).

Although most of the papers discussing algorithms are under the technical topic and focus on tracking, a few under this topic discuss the recognition of hands without markers. Combining “wearable computer systems” with the algorithms proposed by researchers, users can interact with MR systems through a markerless fingertip-tracking gesture interface in real time (Lee et al. 2007). Other researchers also proposed algorithms for the real-time recognition of hands without markers. Their design is motivated by the limitations of a single consumer-grade camera; a colored glove is therefore designed to accommodate the camera limitations, self-occlusion, and algorithm performance (Wang and Popović 2009).
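The cited systems each rely on their own recognition algorithms, which are not reproduced here. Purely as an illustrative sketch of one common markerless approach, skin-colour segmentation followed by contour analysis, the snippet below locates a crude fingertip candidate with OpenCV; the colour thresholds and input file name are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("hand.png")                     # placeholder input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rough skin-colour range in HSV; real systems tune or learn these thresholds.
mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)      # assume largest blob is the hand
    # Topmost contour point as a crude fingertip candidate (hand pointing upward).
    x, y = hand[hand[:, :, 1].argmin()][0]
    cv2.circle(frame, (int(x), int(y)), 8, (0, 255, 0), 2)
```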

In addition to focusing on specific parts of MR to enhance the performance, the integration and application of the system are also important advancements in MR. Researchers designed a prototype of a wearable gestural interface, which only needs a projector and a camera that can be carried around as a wearable device, with several applications such as navigating through a map, taking a picture, and providing dynamic information on newspapers. The system uses its projector to provide information on the surface of objects as output and its camera is used to track the users’ fingers and receive gestures as input (Mistry et al. 2009).

Researchers studied the MR environment by creating a small room installed with multiple depth cameras and projectors to explore a variety of interactions and computational strategies related to the space that interactive displays inhabit. They proposed a prototype with a novel combination of multiple depth cameras and projectors to imbue normal walls and tables with interactivity, allowing users to interact on, above, and between interactive surfaces in a room-sized environment. With the opportunity of using different interaction techniques that facilitate the transitioning of content between interactive surfaces, this system offers the possibility of rich spatial interactions enabled by depth cameras (Wilson and Benko 2010).

Pseudo-transparency is an interesting idea for gesture interfaces on mobile devices. The system creates the illusion of the mobile device itself being semitransparent by overlaying an image of the users’ hand onto the screen and allows the user to operate the mobile device with their ten fingers simultaneously from both the front and back of the screen. Users gave mixed feedback on the usefulness of this system, showing that they preferred the image of the hand in some situations while wanting it to be disabled in others (Wigdor et al. 2007).

Smart glasses and head-mounted displays (HMDs) have been gaining traction as next-generation mainstream wearable devices (Hong et al. 2015). Since HMD systems are small, compact wearable platforms, their interfaces require new modalities. Multimodal fusion enables users to interact with computers through various input modalities such as speech, gesture, and eye gaze (Ismail and Sunar 2015). Speech recognition is usually discussed in another field of research regarding its accuracy. Gesture-based interfaces have generated numerous studies on interpretation (Nuernberger et al. 2016; Wozniak et al. 2016) and haptic feedback (Tatsumi et al. 2015; Tamaki et al. 2016). Eye-tracking is not a new concept, but combining it with MR to form a gaze-based interface is an area which has piqued researchers’ interest (Park et al. 2016; Jacob and Stellmach 2016; Lee et al. 2017).

Research on immersiveness (Chandler et al. 2015; Ates et al. 2015), engagement (Brondi et al. 2015; Qamar et al. 2015), and usability (Datcu et al. 2015; Sand et al. 2015) is focused on enhancing users’ experience and effectiveness in completing tasks. The pairing of virtual and physical objects (Huang et al. 2015) (Fig. 4) or the pairing of devices (Lin et al. 2017) are factors in creating seamless interaction for MR interfaces (Simões et al. 2015).
Fig. 4

Left: user moving object with gesture interface. Right: user’s view on moving object (Huang et al. 2015)

The articles under this topic all focus on developing MR technology to increase ease of use. The studies in this topic area start from more of a hardware aspect, after which researchers use existing hardware to study new methods and implement them in software. Only in recent studies have researchers tackled this problem from the users’ point of view using existing hardware and software.

Education

Topic 2 is clustered based on the word “learning”, where some of the articles involve measuring students’ performance on specific tasks and some evaluate the usability of interfaces from the participants’ experience.

Earlier studies focus more on the user experience of MR. Researchers created a model to evaluate users’ presence, the key aspect of the virtual experience, across virtual reality (VR) and the physical world. The model uses Focus, Locus, and Sensus to discuss three dimensions of experience in virtual worlds. Focus is the direct perception of currently present stimuli. Locus is the attention toward the virtual or the physical world. Sensus is the level of consciousness. This study provides a model which can enrich the understanding of the virtual experience from a psychological point of view (Waterworth and Waterworth 2001).

Different styles of interfaces might affect users’ behaviors for different reasons. A floating interface shown between users of a collaborative MR system, which allows them to see their partner at all times, motivates the users to engage with each other using natural communication such as gesture or voice. On the other hand, users’ performance benefits from the orientation of the shared object. An interface projected on the wall was preferred by participants, since it allows users to see the virtual object from a similar perspective (Kiyokawa et al. 2002).

After several previous studies confirmed the usability of MR in general, researchers explored the user experience of MR in a more specific field of research: education. Researchers have received very positive feedback by applying MR to educational exhibits in science centers and museums (Woods et al. 2004). Classrooms also benefit from MR when it turns the learning session into a “TV-show style game”, which increases the motivation of students significantly (Freitas and Campos 2008). Interfaces with interactive controls such as moving a marker, which correspond to MR systems, can increase the motivation of users and improve educational outcomes. Although presence and enjoyment both affect learning and are closely related to each other, previous computing experience was not correlated with either of these when using MR for educational purposes (Sylaiou et al. 2010). Researchers suggest that perceived usefulness and enjoyment have a similar effect on users’ attitude toward using image-based MR environments. It is claimed that MR environments can be particularly attractive for younger generations, who perceive them to be more like edutainment than pure learning (Wojciechowski and Cellary 2013).

Interaction is one of the reasons why MR thrives in the area of education. Several researchers explored the potential of MR for teaching science in primary school (Kerawalla et al. 2006). They suggest that an interface which is not flexible and controllable will result in children becoming less engaged when learning using MR compared to role-play. They concluded that their findings support previous studies which show that when users are given an opportunity to manipulate virtual objects at their will, this opportunity encourages users to reflect upon the implications of their actions, and is key to achieving changes in understanding (Shelton and Hedley 2002). Ease of use is another important factor that affects usability.

The usability of traditional ball-and-stick models (BSMs) and tangible user interfaces (TUIs) in chemistry education was compared. In the first design, even though it was relatively more difficult to manipulate molecules with the TUI than with the BSM, the usability of the two was not very different. With the re-design of the TUI, users commented on the improved ease of use, and the probability of using a similar system in an actual learning situation increased (Fjeld et al. 2007).

MR not only helps novices in learning, it can also be an aid to more experienced users. Compared with an LCD display, MR reduces both time and movements for maintenance and repair tasks. Mechanics rated the MR system highly in terms of intuitiveness and satisfaction, and indicated that they are willing to tolerate its shortcomings if it provides value. Moreover, MR not only helps educators teach instructional skills, it can even replace them (Henderson and Feiner 2011). A marker-based TUI MR system proved to be equally effective in teaching library skills or aiding users compared with human librarians (Chen and Tsai 2012).

Compared to an identical simulation using traditional mouse and keyboard controls, users who used their whole bodies to interact through the interface showed higher learning gains and more positive attitudes towards the virtual experience and the MR environment. Full-body interactions give users the opportunity to experience science phenomena from new perspectives, which changes the affective and motivational disposition of the learner (Lindgren et al. 2016) (Fig. 5).
Fig. 5

Enhancing learning and engagement through embodied interaction (Lindgren et al. 2016)

Due to the popularity of mobile devices nowadays, almost everyone carries a smart phone with them. With improved technology, an increasing number of smart phones are capable of running MR. Researchers discussed the cognitive load of learning anatomical concepts through MR on a mobile device (Küçük et al. 2016).

Using MR can create a simulated and immersive environment which affects teachers’ sense of presence and their virtual teaching performance (Ke et al. 2016). Compared with traditional computer screens, using simple forms of hands-on control on TUIs has a large impact on physical observation in educational games (Yannier et al. 2015), and enhances learning and enjoyment through experiencing physical phenomena in an MR environment (Yannier et al. 2016). Other factors such as higher levels of engagement and more positive attitudes towards learning science (Lindgren et al. 2016; Ferrer-Torregrosa et al. 2016), and a decrease in cognitive load in learning neuroanatomy (Küçük et al. 2016), demonstrate how MR impacts learning.

MR also aids skill training such as manual assembly with an intuitively enhanced bare-hand MR interface, which can provide suitable and appropriate guidance, so users can perform tasks more quickly and accurately (Wang et al. 2016a). Systems for training emergency management (Sebillo et al. 2016), and endourologic skills (Sweet 2017) meet the needs for training and assessing particular skills. An increase in task completion time is a tradeoff which might be worth investigating to decrease the mental demand of industrial robot programmers (Stadler et al. 2016), or to avoid shifts in focus and enable the fine-tuning execution of surgical tasks (Andersen et al. 2016).

The trend for MR interfaces under this topic starts with the user experience of general MR technology. After some technical improvements over time, researchers began to apply MR to education. They measure the usability of MR educational systems and compare presence, enjoyment, usefulness, attractiveness, motivation, engagement, etc. with traditional educational methods. The learning environment is a constant focus for researchers under this topic.

Medical

Studies on medical MR usually focus on surgery. MR can be used to support surgeons when performing surgery, and can also be used for surgical training. Therefore, the accuracy of image registration for navigation and guidance is discussed frequently under this topic. Different from the other topics, which mostly start from a more general discussion in MR then narrow down to the specific field of research, researchers started discussing how to use MR to train and support surgeons at the very beginning of this trend.

Computer-assisted surgery (CAS) is a revolutionary advancement in surgery. Not only has it made a great difference in high-precision surgical domains, it also benefits standard surgical procedures. General surgery, neurosurgery, orthopedic surgery, maxillofacial surgery, otolaryngology, and cardiovascular and thoracic surgery are some of the disciplines that use MR to navigate in their specific surgical fields (Shuhaiber 2004). In addition to supporting surgeons in navigation and increasing the surgeon’s precision, CAS is also a leading factor in the development of robotic surgery. But its real potential lies in the computer’s ability to offer MR to support surgical training, pre-operative planning, data visualization before and during the operation, and tool guidance during the operation (Megali et al. 2008). One of the reasons that CAS benefits from MR is the style in which information is presented. Accurate patient models are required for planning before surgery and navigating during the operation. Projecting the 3D model onto the patient is better than displaying it on a separate screen. A three-dimensional image, reconstructed from CT and MRI data obtained before the operation, is projected onto the patient’s head as the interface for guidance on the structure of organs and tumors. Researchers suggest that a system for neurosurgery which provides guidance during operative procedures is advantageous because surgical procedures can be navigated easily and accurately using MR in the surgical field (Iseki et al. 1997). Projecting anatomical information obtained from computer-generated three-dimensional pre- or intraoperative imaging studies could also lead to advances in microsurgery, ergonomics, solo surgery, and telesurgery (Marescaux et al. 2001).

Head-mounted-displays (HMDs) are used to display information or 3D models in MR to provide CAS. But user acceptance of HMDs in a surgical environment needs to be discussed. Accurate calibration of a head-mounted operation binocular can fulfill the accuracy requirements of CAS, since the performance of the HMD is not significantly affected by distortion correction while projecting the virtual object onto the real world (Birkfellner et al. 2002).

Other than surgery, researchers propose a hybrid in situ visualization method to improve the multi-sensory depth perception on a HMD, which aims to reduce invasive surgery by improving medical diagnosis. Although there are some restraints on the interface, visualization of the anatomical information in real time is achieved (Bichlmeier et al. 2007). There are several important limitations that might hinder the use of MR in minimally invasive procedures. A graphical user interface that presents real-time visualization of internal structures, which includes the complex deformation of the model during surgery, can reduce the error rate of validation on a phantom organ compared to current surgical margins (Haouchine et al. 2013).

Haptic interfaces are attractive due to their ability to safely interact with humans (Loureiro et al. 2003). The combination of tele-presence and MR-based systems can potentially motivate patients to exercise for longer periods of time when undergoing stroke therapy. The haptic interface of this system allows patients to shape their movements while correcting errors such as deviations from the ideal path. Furthermore, it has been shown that training in laparoscopic surgery with an MR simulator with a haptic interface is more effective than VR without a haptic interface. Researchers claim that MR offers better realism, haptic feedback, didactic value, and construct validity than VR (Botden et al. 2007). However, VR is still a valid training method, as has been proved previously (Botden and Jakimowicz 2009). Femoral palpation and needle insertion training simulation, in which the trainees’ own hands are rendered in real time without unrealistic occlusion, gained very positive feedback from the experts. They strongly agreed that both the location and tactile feel of the pulse produced by the haptic interface of MR are correct and realistic (Coles et al. 2011).

In contrast to haptic interfaces or robotic-assisted surgery, a touchless interface is an ideal solution for the operating room, which is a cleansed and sterilized environment, since it does not require any physical contact from the surgeon and can still provide the necessary control features (Ruppert et al. 2012). Researchers concluded that the Kinect is an efficient, low-cost, and accurate system for hand tracking and gesture recognition.

Inattentional blindness has a higher rate of occurrence when surgeons perform operations with MR compared to standard operations. Attentional tunneling is the specific term to describe this cognitive fixation on specific cues while ignoring alternative information or tasks in the MR environment. Researchers recommend further investigation into interface design for medical MR applications to prevent potential hazards (Dixon et al. 2013).

Using MR to plan surgical interventions can support the transformation between different spatial reference frames while visualizing medical images. The system assists users in developing the spatial reasoning skills needed for surgical planning, greatly improves the performance of non-clinicians, and significantly reduces the task completion time for clinicians (Abhari et al. 2015).

MR collaboration, which provides long-distance, virtual assistance, allows surgeons to engage in complex visual and verbal communication during the procedure (Davis et al. 2016).

The visualization of information is a key factor for MR to aid the surgical process. Depth perception can be improved by a seamless interface which switches between MR and VR, indicating the minimum distance between objects, which also facilitates surgical tasks (Choi et al. 2016). MR navigation systems for surgical purposes can provide image guidance for neurosurgical procedures (Fig. 6) (Wang et al. 2015; Besharati Tabrizi and Mahvash 2015) on different devices (Chen et al. 2015). They have the potential to optimize the workflow of bypass procedures by providing essential anatomical information, entirely integrated into the surgical field, and help surgeons perform minimally invasive procedures (Cabrilo et al. 2015). A different approach to visualization, such as the Magic Mirror, can provide anatomy education by allowing personalized in situ visualization of the anatomy on the user’s body in real time (Ma et al. 2016).
Fig. 6

3D image overlay evaluation. a, b Critical structure overlay in the front teeth and molar areas. c Intraoperative delivery of preplanned surgical plan. d Overlay error evaluation (Wang et al. 2015)

Motor rehabilitation can also be promoted by MR (dos Santos et al. 2016). Results support the clinical effectiveness of mixed reality interventions that satisfy the motor learning principles for upper limb rehabilitation in chronic stroke survivors (Colomer et al. 2016).

Application

Articles which fall under this topic discuss interfaces from different applications. Education and medical applications might fall into this category if the words “application” or “mobile” are used often in their abstracts.

Researchers started important discussions of human–computer interaction (HCI) regarding MR in the early years. A tangible user interface (TUI) is a common approach for MR to bridge the gap between the virtual and the real world, along with an awareness of human activities in both the foreground and background. Tangible bits is an interface which combines MR and TUI to allow users to grasp and manipulate digital information by coupling it with physical objects and architectural surfaces. The researchers used three prototypes to demonstrate the three key concepts of tangible bits: interactive surfaces, coupling of virtual and physical objects, and ambient media, and found the metaphor of light, shadow, and optics in general to be particularly compelling for interfaces spanning the virtual and physical space (Ishii and Ullmer 1997).

Other researchers also focused on coupling physical objects with a virtual form or representative actions to bridge virtual and physical worlds. Electronic tags were added to various physical objects to create an “invisible interface”. By leveraging the strengths and intuitiveness of the physical world with the advantages and strengths of computation, software and hardware implementation that supports this system can be extended and enhanced in a variety of ways to encompass more complex scenarios (Want et al. 1999). Similar to adding tags to physical objects, others used a different approach to creating an interface. Researchers added visual tags, often called “markers”, to physical objects which can be recognized by a camera. With camera-equipped mobile devices, scanning the visual tags generates digital information about physical objects (Rekimoto and Ayatsuka 2000). A toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes is validated so that even first-time users can build tangible interfaces and easily adapt applications to another technology (Klemmer et al. 2004).

Azuma et al. discussed MR thoroughly in their survey paper, covering display devices, tracking sensors and approaches, calibration, interfaces and visualization, and applications. They pointed out that the two main trends in MR interaction research either use heterogeneous devices to leverage the advantages of different displays, or integrate with the physical world through tangible interfaces. In this survey paper, the researchers introduced several applications with different approaches to interaction design and identified that research on the MR interface is limited to low-level perceptual issues. They believe that, in the future, there will be significant growth in interface research as MR systems with sufficient capabilities become more commonly available (Azuma et al. 2001). Another survey paper on MR technologies, systems and applications classified MR interfaces into four categories in terms of the differences in interaction. The four ways of interaction in MR are tangible interfaces, collaborative interfaces, hybrid interfaces, and the emerging multimodal interfaces (Carmigniani et al. 2011).

Ubiquitous computing is a concept in which computing is made to appear anytime and everywhere. Its aim is to augment the environments around the user with computational resources that provide information and services when and where desired (Weiser 1991), which shares the same idea as MR. There are three interaction themes on which the researchers focus: natural interfaces, context-aware applications, and automated capture and access. The researchers concluded that the goal of an interface is to provide many single-activity interactions that together promote unified and continuous interaction between humans and computational services (Abowd and Mynatt 2000). MR interfaces are designed to extend basic biological principles and communication patterns. Multimodal interfaces have the potential to enable multisensory perception through the fusion of different information sources (Papagiannakis et al. 2008). The affordances and limitations of the MR interface for learning, which enables “ubiquitous computing” models, and how it merges digital information with the real world to enhance students’ experiences and interactions have also been discussed by researchers (Dunleavy et al. 2009).

The interface of collaborative MR systems lets users see each other along with the virtual objects, which allows communication behaviors to be more like face-to-face collaboration than screen-based collaboration. MR combined with TUI can produce interfaces in which physical objects and interactions are as important as the virtual image presented. With the ability to enhance reality, its seamless interaction, the presence of spatial cues, and multiscale collaboration, MR systems can be used to explore the possibilities of interfaces for collaborative use (Billinghurst and Kato 2002). Urban planning and design using tangible workbenches can provide users with access to computational resources in an intuitive way, moreover engaging everyone in the task at hand (Underkoffler and Ishii 1999).

Games are one of the subtopics that have attracted the interest of researchers. Due to the nature of MR, which blurs the line between the virtual and real world, games that use MR are usually implemented on mobile devices or wearable computer systems, which are suitable for location-based design. For first-person shooter (FPS) games, the interface design is based around a single, first-person perspective screen, where the status information is at the bottom of the screen and the large top part shows monsters and architecture. The latency of the interface display increases the difficulty of the game, and the status information needs to be re-designed in order to be suitable for outdoor gaming (Thomas et al. 2000). Other location-based games usually have a map interface (Flintham et al. 2003). Using a traditional drop-down menu with the pen of a PDA is not intuitive; players prefer easy access and the freedom to use the system in innovative ways. Functions that can be performed in one or two clicks, and text replaced by symbols, are preferred by players (Schwabe and Göth 2005). Blurring the frame of a game can provide a more immersive experience. Exploring ambiguity in interface design in order to engage and even provoke users is a strategy for mobile interfaces to play with the relationships between different users (Benford et al. 2006).

The use of MR applications is of real interest in the field of medical education because they blend digital elements with the physical learning environment. In order to be of value, applications must be able to transfer information to the user (Barsom et al. 2016). Simulator training holds an important place in the current robotic training curriculum for future robotic surgeons (Kumar et al. 2015). Researchers claimed that MR technology is more educationally useful and less distracting compared with traditional training methods in the medical field (Dickey et al. 2016). Innovations in MR technology have the potential to significantly enhance performance in neurosurgery (Pelargos et al. 2017). Smart glasses and head-mounted displays have been adopted in the healthcare setting with several useful applications, including hands-free photo and video documentation, telemedicine, electronic health record retrieval and input, rapid diagnostic test analysis, education, and live broadcasting, but they still need to be tailored to fit the requirements of medical and surgical sub-specialties (Mitrasinovic et al. 2015).

The Internet of Things (IoT) and cognitive computing are part of Industry 4.0. The combination of the two has the potential to create highly scalable, adaptable and interactive IoT systems for buildings and is capable of addressing the challenges encountered in the realm of homes, smart cities and Industry 4.0 (Ploennigs et al. 2018). With the support of information and communication technology, smart cities can be leveraged as a pragmatic framework to support the needed transition toward sustainable urban development. Positioning is one of the key components of smart mobility, itself a key factor of a smart city, and is useful for locating facilities and generally improving spatial orientation with an MR interface (Shahrokni et al. 2015). Four key performance indicators (kilowatt-hours per square meter, carbon dioxide equivalents per capita, kilowatt-hours of primary energy per capita, and share of renewables as a percentage) on three levels (household, building, and district) across four interfaces are discussed to evaluate a smart city (Shahrokni et al. 2015).

Applications for collaboration (Galambos et al. 2015; Górski et al. 2015), comparisons of interfaces between technologies (Omar and Nehdi 2016), human factors evaluation (Aromaa and Väänänen 2016), pervasive computing (Grubert et al. 2017) (Fig. 7), techniques for a touchless MR interface (Brancati et al. 2015), and omnidirectional videos (Yu and Lakshman 2015) have also been the focus of research in recent years.
Fig. 7

Interface examples for the concept of Pervasive Augmented Reality (Grubert et al. 2017)

Technical

In order for MR to function, before overlaying a virtual image onto the real environment, the system needs a camera for tracking and for identifying the surfaces of objects. Gesture-based interfaces also require tracking of hands and fingers to receive input. Therefore, tracking can be seen as one of the most important issues that researchers have to deal with for MR to work. Some tracking methods and algorithms are discussed under this topic.

Although research on tracking started early, not every researcher studied this field due to technical limitations. The precise alignment of real and virtual coordinate frames for overlay, and capturing the 3D motion of a camera, including camera position estimates for each video frame were two major problems of MR in the nineties. Researchers suggest the motion capture of the camera is especially important for interactive MR application, as this technology can be used to create an MR environment where users can manipulate virtual objects at their will (Koller et al. 1997).

Researchers proposed a calibration-free video-based MR system that does not require metric information such as camera calibration parameters, 3D locations, or the dimensions of the objects in the environment. Their system only requires four fiducial points during system initialization, specified by the user, to track across frames in the video. Furthermore, their algorithm, which proved to be suitable for real-time implementation and imposes minimal hardware requirements, demonstrated fast and accurate merging of the image onto live video (Kutulakos and Vallino 1998).

Tangible user interfaces (TUI) on a tabletop workspace with both display and input mechanisms allow users to organize objects spatially and collaborate with each other easily. The physical objects of TUI not only act as an input device but also become embodiments of digital information. Researchers designed a MR system with TUI that tracks the objects’ positions and orientations on a tabletop display surface with high accuracy and low latency. The objects of the system are sensing tablets that are placed next to each other to form a sensing surface. Compared with pure visual tracking, sensing tablets allow the interface to quickly adjust multiple parameters and receive real-time feedback (Patten et al. 2001).

Other than hands, fingers, or some small objects, capturing and viewing on-site 3D graphical models for large outdoor objects also falls under the tracking domain. A 3D modeler with a gesture interface, which allows users to control a 3D constructive solid geometry modeler to build graphical objects of large physical artefacts in the physical world, helps users to control the modeler while tracking the users’ hands with gloves on them. Models which previously needed to be captured using manual, time-consuming, or expensive methods are improved by the new technology of wearable computer systems. By carrying a wearable backpack computer with pinch gloves, the system can be used to construct example models of real world structures. This system not only allows users to visually verify the object’s accuracy at creation time, it allows others to view the objects indoors in real-time or at a later date (Piekarski et al. 2001).

After the objects or surfaces have been tracked by the MR system to obtain the position and orientation of the user’s viewpoint, merging images and maintaining the correct registration of the real and virtual world is the next important step. 3D tracking and the estimation of images are the two major methods known for acquiring the user’s viewpoint in geometric registration. 3D tracking locates the user by electromagnetic, ultrasonic, or mechanical trackers. The estimation of images, on the other hand, estimates the user’s location from images captured by a camera at the user’s viewpoint. Researchers designed a stereoscopic video see-through prototype system that can produce MR with correct occlusions between real and virtual objects at nearly video rate. They proposed algorithms to estimate the camera parameters by determining the 3D positions of the markers after identifying them; the system then merges the real world with virtual objects after estimating the depth of the real scene (Kanbara et al. 2000). Algorithms for 3D model-based tracking, visual features, and orientation, which form a video see-through monocular markerless tracking MR system, demonstrate their usability in real-time tracking and have been tested on various image sequences and for various applications (Comport et al. 2006).
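Kanbara et al. and Comport et al. describe their own estimation algorithms, which are not reproduced here. Purely as an illustrative sketch of the same marker-based registration step, the snippet below uses OpenCV’s ArUco module (whose API names vary slightly across OpenCV versions) to detect fiducial markers and recover the camera pose with solvePnP; the camera intrinsics, marker size, and input image are assumptions.

```python
import cv2
import numpy as np

# Assumed camera intrinsics; in practice these come from prior calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)
s = 0.05  # assumed marker side length in metres

# 3D coordinates of one marker's corners in its own coordinate frame.
obj_pts = np.array([[-s / 2,  s / 2, 0], [ s / 2,  s / 2, 0],
                    [ s / 2, -s / 2, 0], [-s / 2, -s / 2, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")                    # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    for marker_corners in corners:
        # Recover rotation and translation of the camera relative to the marker;
        # this transform is what registers virtual content onto the real scene.
        ok, rvec, tvec = cv2.solvePnP(obj_pts, marker_corners.reshape(4, 2), K, dist)
        if ok:
            cv2.drawFrameAxes(frame, K, dist, rvec, tvec, s * 0.5)
```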

The lack of real-time visual feedback is the main difficulty of using Atomic Force Microscopy (AFM) for nanomanipulation. With the improvements in MR, researchers discussed the possibility of combining MR interfaces with haptic feedback to support AFM from an algorithm point of view. Their results indicate that nanopatterns can be accurately created and the nanoparticles can be easily manipulated with the MR interface (Li et al. 2004, 2005).

Face detection and head-pose estimation of a driver with an MR driver assistance system are used to observe the driver’s behavior and evaluate the driver’s awareness. Researchers proposed an algorithm for estimating the pose of a human head, which overcomes the difficulties inherent with varying lighting conditions in a moving car (Murphy-Chutorian and Trivedi 2010).

MR maintenance systems with smart human machine interfaces that provide maintenance information with sticky notes are also enriched with position information. The central element of this approach is an ontology-based context-aware framework, which aggregates and processes data from different sources (Flatt et al. 2015).

Tracking has continued to be a focus of research over recent years. The development of natural interfaces for human–robot interaction provides the user with an intuitive way to control and guide robots (Peppoloni et al. 2015). A real-time magnetic motion-tracking system using multiple identifiable, tiny, lightweight, wireless and occlusion-free markers can achieve reliable tracking with reasonable speed (Huang et al. 2015). Other tracking methods, such as fingertip detection (Jang et al. 2015) and tactile sensors (Yang et al. 2017) (Fig. 8), are discussed from a technical point of view concerning basic concepts, functional materials, sensing mechanisms, promising applications, performance optimization strategies, multifunctional sensing, and system integration. Eye tracking is discussed for the hands-free control of vision augmentations, such as optical zoom or field-of-view expansion (Orlosky et al. 2015).
Fig. 8

Diagram of wearable tactile sensors and relevant applications (Yang et al. 2017)

One of the issues in MR is how to naturally mediate reality with the virtual content as seen by users. Researchers treat color distortion with a semi-parametric model which separates the non-linear color distortion from the linear color shift to solve the problem of color rendering (Itoh et al. 2015). View independence for remote-collaboration MR systems allows the remote expert to have an independent view into the shared task space, which led to faster task completion times, more confidence from users, and a decrease in the amount of time spent communicating verbally during the task (Tait and Billinghurst 2015). An MR telepresence system which recreates the experience of a face-to-face conversation by projecting the user’s life-size virtual copy into the remote space proved its usability for solving a collaborative, physical task (Pejsa et al. 2016).

Design

Under the topic of design are studies that discuss either different designs of interfaces or different designs of experiments. Articles which fall under this topic have relatively lower citation counts compared with the articles in other topics. This is because most of the researchers propose new designs, which might be too specifically focused on their own goals and not general enough to apply to other research.

Early studies were still exploring the possibilities of MR; therefore, design and implementation focused more on the functions of interaction than on the interface itself. Researchers designed a spatially continuous hybrid workspace, where users can freely display, move, or attach digital data among their computers, tables, and walls, to explore the possibilities of interfaces (Rekimoto and Saitoh 1999).

Sound-based tracking is used for reacting to the impact position of the physical object along with dynamic graphics and sounds to create a tangible interface which combines physical objects and full body motion in physical spaces with digital augmentation. A variety of game modes are presented to discuss how augmentation and transformation of physical games can discover new engaging interactions between the real and virtual world (Ishii et al. 1999). As computers become more ubiquitous and invisible, interfaces that blur the line between the virtual and real world are needed for users to move easily between digital and physical domains. Researchers designed a transitional interface for viewing spatial data sets and also designed an interface which allows the seamless transition between the 2D and 3D views of a data set in a traditional office setting (Billinghurst et al. 2001b).

Applying activity theory to the design of an MR system with TUI enhances group work and allows users to interact with each other and with the models in a virtual three-dimensional setting. Researchers proposed some design guidelines such as: the binding between physical objects and virtual objects should be clear, tactile or haptic feedback is suggested, and visual feedback that is consistent with user expectation is required (Fjeld et al. 2002).

Toolkits can assist designers and programmers in relation to system development. Researchers believe that the difficulty of design exploration, not final content creation, is what has limited MR experience prototyping. They designed an MR toolkit for multimedia development which allows rapid prototyping and early experience testing, and deals with both technical and practical problems to allow designers to handle the complex relationships between the physical and virtual worlds (MacIntyre et al. 2004).

The movement of interfaces can be analyzed in terms of whether it is expected, sensed, and desired. Designers have to wrestle with the complex problem of matching physical form to the capabilities of sensors and the shifting requirements of applications. Therefore, researchers proposed a framework for designing sensing-based interaction which clarified the design tradeoffs, identified and explained likely problems with interaction, and sometimes helped inspire new interaction possibilities (Benford et al. 2005).

Tangible user interfaces (TUIs) allow users to organize objects spatially and collaborate with other users easily. The physical objects of a TUI not only act as input devices but can also become embodiments of digital information. Researchers designed an MR molecular modeling environment to support the fabrication of physical molecular models, which allows virtual 3D representations to be overlaid onto tangible molecular models. The physical models allow users to change the overlaid information easily, providing a powerful, intuitive interface for switching between human intent, physical objects and different representations of information (Gillet et al. 2004, 2005). Others further explored TUIs and discussed the design challenge of seamlessly extending the physical affordances of objects into the digital domain (Ullmer and Ishii 2000; Ishii 2008).

Researchers proposed a system that opportunistically annexes physical objects from a user’s current physical environment to provide the best-available haptic sensation for virtual objects, and validated the usability and utility of this method for defining haptic experiences (Hettiarachchi and Wigdor 2016) (Fig. 9).
Fig. 9

Design of a tangible user interface. Top: visual models designed to overlay on possible haptic models. Bottom: various visual models matched to physical objects based on the haptic models’ match for physical characteristics (Hettiarachchi and Wigdor 2016)

Modelling in MR environments using hands as the interface enables real-time context to be used as information input, making the iterative design process more efficient. Additionally, modelling at natural scale, directly over the real scene, prevents designers from focusing their attention on dimensional details and allows them to focus on the product itself and its relation with the environment (Arroyave-Tobón et al. 2015). For decision-support systems in the early design and discussion stages of urban design projects, this could prevent misinterpretation and misunderstanding between the different participants in the design process, especially in complex building situations (Schubert et al. 2015).

With an enhanced bare-hand interface, users can use natural gestures to manipulate and assemble virtual components to real components in an MR assembly motion simulation system (Wang et al. 2016b) or interact with a protein in an intuitive way, thereby making it appealing to computational chemists or structural biologists (Zheng and Waller 2017). A sound interface which uses the role of sound as the primary interface to convey game information and create engaging gaming experiences contributes significantly towards enhancing the immersion levels of users (Chatzidimitris et al. 2016).

By adapting the technology enhanced learning (TEL) methodology, researchers proposed a general architecture for building hyper activities and exercise books to promote learning for children (Di Fuccio et al. 2015).

Serious games have a serious purpose: to educate or to promote other types of activity. Some researchers discussed the design of a game with regard to its educational content, game mechanics, and user interface to engage players in learning about history while visiting the actual historical sites (Rodrigo et al. 2015). Others discussed using heuristic evaluation, which allows user interface and user experience experts to evaluate the software before it is deployed (Gordon et al. 2016).

Challenges and future work

Although the concept of MR was first introduced fifty years ago, the technology is still not advanced enough to meet researchers’ expectations. Furthermore, general guidelines for MR interface design have yet to be established.

There are still several technical issues that need to be overcome. Both hardware and software need to be improved before MR can be commonly used. The requirements for transferring information from a traditional display to a three-dimensional MR environment include specific display techniques, precise registration, and the handling of occlusion between virtual objects and the real world. Mobile devices and applications are currently the most popular trends in the MR research field. However, the computational power of mobile devices needs improvement, and performance and capacity need to be better balanced to advance the information provided. The disciplines of education and medicine are very different, but they have one thing in common: they both have a huge number of sub-disciplines or subjects.

Most of the studies in the education, medical, and application topics are of similar types, as they develop an application for a specific use. In order to discuss the requirements of interfaces for different applications with different purposes, cooperation between researchers and experts in specific disciplines is needed. HCI research in MR is needed, especially in the technical and design topics. Tangible user interfaces, gesture-based interfaces, haptic interfaces and sound interfaces can be regarded as immature technologies when applied to MR, which still lacks general guidelines. Although traditional graphical user interfaces could follow the WIMP model, MR cannot apply WIMP directly, as it is more of an environment surrounding the user than a screen in front of them. Lastly, interfaces should be optimized from the users’ point of view. Usability testing should measure presence, enjoyment, usefulness, attractiveness, motivation, engagement, etc., as these are all factors which might affect users’ performance and preferences.

Once the potential of MR has been fully explored and a framework of interface design guidelines has been proposed, researchers will finally be able to develop applications without worrying that the design of the interface is the cause of reduced usability.

References

1. Abhari, K., et al.: Training for planning tumour resection: augmented reality and human factors. IEEE Trans. Biomed. Eng. 62(6), 1466–1477 (2015)
2. Abowd, G.D., Mynatt, E.D.: Charting past, present, and future research in ubiquitous computing. ACM Trans. Comput. Hum. Interact. (TOCHI) 7(1), 29–58 (2000)
3. Andersen, D., et al.: Medical telementoring using an augmented reality transparent display. Surgery 159(6), 1646–1653 (2016)
4. Aromaa, S., Väänänen, K.: Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design. Appl. Ergon. 56, 11–18 (2016)
5. Arroyave-Tobón, S., Osorio-Gómez, G., Cardona-McCormick, J.F.: Air-modelling: a tool for gesture-based solid modelling in context during early design stages in AR environments. Comput. Ind. 66, 73–81 (2015)
6. Ates, H.C., Fiannaca, A., Folmer, E.: Immersive simulation of visual impairments using a wearable see-through display. In: Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 225–228. ACM (2015)
7. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B.: Recent advances in augmented reality. IEEE Comput. Graph. Appl. 21(6), 34–47 (2001)
8. Barsom, E., Graafland, M., Schijven, M.: Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 30(10), 4174–4183 (2016)
9. Beaudouin-Lafon, M.: Instrumental interaction: an interaction model for designing post-WIMP user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 446–453. ACM (2000)
10. Bell, B., Feiner, S., Höllerer, T.: View management for virtual and augmented reality. In: Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, pp. 101–110. ACM (2001)
11. Benford, S., et al.: The frame of the game: blurring the boundary between fiction and reality in mobile experiences. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 427–436. ACM (2006)
12. Benford, S., et al.: Expected, sensed, and desired: a framework for designing sensing-based interaction. ACM Trans. Comput. Hum. Interact. (TOCHI) 12(1), 3–30 (2005)
13. Besharati Tabrizi, L., Mahvash, M.: Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J. Neurosurg. 123(1), 206–211 (2015)
14. Bichlmeier, C., Wimmer, F., Heining, S.M., Navab, N.: Contextual anatomic mimesis hybrid in situ visualization method for improving multi-sensory depth perception in medical augmented reality. In: 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), pp. 129–138. IEEE (2007)
15. Billinghurst, M., Kato, H.: Collaborative augmented reality. Commun. ACM 45(7), 64–70 (2002)
16. Billinghurst, M., Kato, H., Poupyrev, I.: The MagicBook: a transitional AR interface. Comput. Graph. 25(5), 745–753 (2001a)
17. Billinghurst, M., Kato, H., Poupyrev, I.: The MagicBook-moving seamlessly between reality and virtuality. IEEE Comput. Graph. Appl. 21(3), 6–8 (2001b)
18. Birkfellner, W., et al.: A head-mounted operating binocular for augmented reality visualization in medicine-design and initial evaluation. IEEE Trans. Med. Imaging 21(8), 991–997 (2002)
19. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
20. Botden, S.M., Jakimowicz, J.J.: What is going on in augmented reality simulation in laparoscopic surgery? Surg. Endosc. 23(8), 1693 (2009)
21. Botden, S.M., Buzink, S.N., Schijven, M.P., Jakimowicz, J.J.: Augmented versus virtual reality laparoscopic simulation: what is the difference? World J. Surg. 31(4), 764–772 (2007)
22. Brancati, N., Caggianese, G., Frucci, M., Gallo, L., Neroni, P.: Touchless target selection techniques for wearable augmented reality systems. In: Intelligent Interactive Multimedia Systems and Services, pp. 1–9. Springer (2015)
23. Brondi, R., et al.: Evaluating the impact of highly immersive technologies and natural interaction on player engagement and flow experience in games. In: International Conference on Entertainment Computing, pp. 169–181. Springer (2015)
24. Buchmann, V., Violich, S., Billinghurst, M., Cockburn, A.: FingARtips: gesture based direct manipulation in Augmented Reality. In: Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 212–221. ACM (2004)
25. Butz, A., Hollerer, T., Feiner, S., MacIntyre, B., Beshers, C.: Enveloping users and computers in a collaborative 3D augmented reality. In: Proceedings of 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR'99), pp. 35–44. IEEE (1999)
26. Cabrilo, I., Schaller, K., Bijlenga, P.: Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg. 83(4), 596–602 (2015)
27. Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51(1), 341–377 (2011)
28. Chandler, T., et al.: Immersive analytics. In: Big Data Visual Analytics (BDVA), pp. 1–8. IEEE (2015)
29. Chatzidimitris, T., Gavalas, D., Michael, D.: SoundPacman: audio augmented reality in location-based games. In: 2016 18th Mediterranean Electrotechnical Conference (MELECON), pp. 1–6. IEEE (2016)
30. Chen, S., Duh, H.: Mixed reality in education: recent developments and future trends. In: 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT), pp. 367–371 (2018)
31. Chen, C.-M., Tsai, Y.-N.: Interactive augmented reality system for enhancing library instruction in elementary schools. Comput. Educ. 59(2), 638–652 (2012)
32. Chen, X., et al.: Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J. Biomed. Inform. 55, 124–131 (2015)
33. Chen, L., Day, T.W., Tang, W., John, N.W.: Recent developments and future challenges in medical mixed reality. In: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 123–135. IEEE (2017)
34. Choi, H., Cho, B., Masamune, K., Hashizume, M., Hong, J.: An effective visualization technique for depth perception in augmented reality-based surgical navigation. Int. J. Med. Robot. Comput. Assist. Surg. 12(1), 62–72 (2016)
35. Coles, T.R., John, N.W., Gould, D., Caldwell, D.G.: Integrating haptics with augmented reality in a femoral palpation and needle insertion training simulation. IEEE Trans. Haptics 4(3), 199–209 (2011)
36. Colomer, C., Llorens, R., Noé, E., Alcañiz, M.: Effect of a mixed reality-based intervention on arm, hand, and finger function on chronic stroke. J. Neuroeng. Rehabil. 13(1), 45 (2016)
37. Comport, A.I., Marchand, E., Pressigout, M., Chaumette, F.: Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Trans. Vis. Comput. Graph. 12(4), 615–628 (2006)
38. Datcu, D., Lukosch, S., Brazier, F.: On the usability and effectiveness of different interaction types in augmented reality. Int. J. Hum. Comput. Interact. 31(3), 193–209 (2015)
39. Davis, M.C., Can, D.D., Pindrik, J., Rocque, B.G., Johnston, J.M.: Virtual interactive presence in global surgical education: international collaboration through augmented reality. World Neurosurg. 86, 103–111 (2016)
40. Dey, A., Billinghurst, M., Lindeman, R.W., Swan II, J.E.: A systematic review of usability studies in augmented reality between 2005 and 2014. In: 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 49–50. IEEE (2016)
41. Di Fuccio, R., Ponticorvo, M., Di Ferdinando, A., Miglino, O.: Towards hyper activity books for children: connecting activity books and Montessori-like educational materials. In: Design for Teaching and Learning in a Networked World, pp. 401–406. Springer (2015)
42. Dickey, R.M., Srikishen, N., Lipshultz, L.I., Spiess, P.E., Carrion, R.E., Hakky, T.S.: Augmented reality assisted surgery: a urologic training tool. Asian J. Androl. 18(5), 732 (2016)
43. Dixon, B.J., Daly, M.J., Chan, H., Vescan, A.D., Witterick, I.J., Irish, J.C.: Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg. Endosc. 27(2), 454–461 (2013)
44. dos Santos, L.F., Christ, O., Mate, K., Schmidt, H., Krüger, J., Dohle, C.: Movement visualisation in virtual reality rehabilitation of the lower limb: a systematic review. Biomed. Eng. Online 15(3), 144 (2016)
45. Dunleavy, M., Dede, C., Mitchell, R.: Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. J. Sci. Educ. Technol. 18(1), 7–22 (2009)
46. Elsevier (2018). https://www.elsevier.com/solutions/scopus/content. Accessed 15 Jan 2018
47. Feiner, S., MacIntyre, B., Höllerer, T., Webster, A.: A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment. Pers. Technol. 1(4), 208–217 (1997)
48. Ferrer-Torregrosa, J., Jiménez-Rodríguez, M.Á., Torralba-Estelles, J., Garzón-Farinós, F., Pérez-Bermejo, M., Fernández-Ehrling, N.: Distance learning ECTS and flipped classroom in the anatomy learning: comparative study of the use of augmented reality, video and notes. BMC Med. Educ. 16(1), 230 (2016)
49. Fjeld, M., et al.: Tangible user interface for chemistry education: comparative evaluation and re-design. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 805–808. ACM (2007)
50. Fjeld, M., Lauche, K., Bichsel, M., Voorhorst, F., Krueger, H., Rauterberg, M.: Physical and virtual tools: activity theory applied to the design of groupware. Comput. Support. Coop. Work (CSCW) 11(1–2), 153–180 (2002)
51. Flatt, H., Koch, N., Röcker, C., Günter, A., Jasperneite, J.: A context-aware assistance system for maintenance applications in smart factories based on augmented reality and indoor localization. In: 2015 IEEE 20th Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–4. IEEE (2015)
52. Flintham, M., et al.: Where on-line meets on the streets: experiences with mobile mixed reality games. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 569–576. ACM (2003)
53. Freitas, R., Campos, P.: SMART: a SysteM of Augmented Reality for Teaching 2nd grade students. In: Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction, vol. 2, pp. 27–30. BCS Learning and Development Ltd. (2008)
54. Galambos, P., et al.: Design, programming and orchestration of heterogeneous manufacturing systems through VR-powered remote collaboration. Robot. Comput. Integr. Manuf. 33, 68–77 (2015)
55. Gillet, A., Sanner, M., Stoffler, D., Goodsell, D., Olson, A.: Augmented reality with tangible auto-fabricated models for molecular biology applications. In: IEEE Visualization, pp. 235–241. IEEE (2004)
56. Gillet, A., Sanner, M., Stoffler, D., Olson, A.: Tangible interfaces for structural molecular biology. Structure 13(3), 483–491 (2005)
57. Gordon, N., Brayshaw, M., Aljaber, T.: Heuristic evaluation for serious immersive games and M-instruction. In: International Conference on Learning and Collaboration Technologies, pp. 310–319. Springer (2016)
58. Górski, F., Buń, P., Wichniarek, R., Zawadzki, P., Hamrol, A.: Immersive city bus configuration system for marketing and sales education. Procedia Comput. Sci. 75, 137–146 (2015)
59. Grubert, J., Langlotz, T., Zollmann, S., Regenbrecht, H.: Towards pervasive augmented reality: context-awareness in augmented reality. IEEE Trans. Vis. Comput. Graph. 23(6), 1706–1724 (2017)
60. Haouchine, N., Dequidt, J., Peterlik, I., Kerrien, E., Berger, M.-O., Cotin, S.: Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 199–208. IEEE (2013)
61. Henderson, S., Feiner, S.: Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Trans. Vis. Comput. Graph. 17(10), 1355–1368 (2011)
62. Hettiarachchi, A., Wigdor, D.: Annexing reality: enabling opportunistic use of everyday objects as tangible proxies in augmented reality. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1957–1967. ACM (2016)
63. Höllerer, T., Feiner, S., Terauchi, T., Rashid, G., Hallaway, D.: Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system. Comput. Graph. 23(6), 779–785 (1999)
64. Hong, I., et al.: 18.1 A 2.71 nJ/pixel 3D-stacked gaze-activated object-recognition system for low-power mobile HMD applications. In: 2015 IEEE International Solid-State Circuits Conference (ISSCC), pp. 1–3. IEEE (2015)
65. Huang, Z., Li, W., Hui, P.: Ubii: towards seamless interaction between digital and physical worlds. In: Proceedings of the 23rd ACM International Conference on Multimedia, pp. 341–350. ACM (2015)
66. Huang, J., Mori, T., Takashima, K., Hashi, S., Kitamura, Y.: IM6D: magnetic tracking system with 6-DOF passive markers for dexterous 3D interaction and motion. ACM Trans. Graph. (TOG) 34(6), 217 (2015)
67. Iseki, H., et al.: Volumegraph (overlaid three-dimensional image-guided navigation). Stereotact. Funct. Neurosurg. 68(1–4), 18–24 (1997)
68. Ishii, H., Ullmer, B.: Tangible bits: towards seamless interfaces between people, bits and atoms. In: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 234–241. ACM (1997)
69. Ishii, H., Wisneski, C., Orbanes, J., Chun, B., Paradiso, J.: PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 394–401. ACM (1999)
70. Ishii, H.: Tangible bits: beyond pixels. In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, pp. xv–xxv. ACM (2008)
71. Ismail, A.W., Sunar, M.S.: Multimodal fusion: gesture and speech input in augmented reality environment. In: Computational Intelligence in Information Systems, pp. 245–254. Springer (2015)
72. Itoh, Y., Dzitsiuk, M., Amano, T., Klinker, G.: Semi-parametric color reproduction method for optical see-through head-mounted displays. IEEE Trans. Vis. Comput. Graph. 21(11), 1269–1278 (2015)
73. Jacob, R., Stellmach, S.: What you look at is what you get: gaze-based user interfaces. Interactions 23(5), 62–65 (2016)
74. Jang, Y., Noh, S.-T., Chang, H.J., Kim, T.-K., Woo, W.: 3D finger CAPE: clicking action and position estimation under self-occlusions in egocentric viewpoint. IEEE Trans. Vis. Comput. Graph. 21(4), 501–510 (2015)
75. Kanbara, M., Takemura, H., Yokoya, N., Okuma, T.: A stereoscopic video see-through augmented reality system based on real-time vision-based registration. In: Proceedings IEEE Virtual Reality 2000, p. 255. IEEE (2000)
76. Ke, F., Lee, S., Xu, X.: Teaching training in a mixed-reality integrated learning environment. Comput. Hum. Behav. 62, 212–220 (2016)
77. Kerawalla, L., Luckin, R., Seljeflot, S., Woolard, A.: "Making it real": exploring the potential of augmented reality for teaching primary school science. Virtual Real. 10(3–4), 163–174 (2006)
78. Kiyokawa, K., Billinghurst, M., Hayes, S.E., Gupta, A., Sannohe, Y., Kato, H.: Communication behaviors of co-located users in collaborative AR interfaces. In: Proceedings of the 1st International Symposium on Mixed and Augmented Reality, p. 139. IEEE Computer Society (2002)
79. Klemmer, S.R., Li, J., Lin, J., Landay, J.A.: Papier-Mâché: toolkit support for tangible input. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 399–406. ACM (2004)
80. Koike, H., Sato, Y., Kobayashi, Y.: Integrating paper and digital information on EnhancedDesk: a method for realtime finger tracking on an augmented desk system. ACM Trans. Comput. Hum. Interact. 8(4), 307–322 (2001)
81. Koller, D., Klinker, G., Rose, E., Breen, D., Whitaker, R., Tuceryan, M.: Real-time vision-based camera tracking for augmented reality applications. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 87–94. ACM (1997)
82. Küçük, S., Kapakin, S., Göktaş, Y.: Learning anatomy via mobile augmented reality: effects on achievement and cognitive load. Anat. Sci. Educ. 9(5), 411–421 (2016)
83. Kumar, A., Smith, R., Patel, V.R.: Current status of robotic simulators in acquisition of robotic surgical skills. Curr. Opin. Urol. 25(2), 168–174 (2015)
84. Kutulakos, K.N., Vallino, J.R.: Calibration-free augmented reality. IEEE Trans. Vis. Comput. Graph. 4(1), 1–20 (1998)
85. Lee, T., Hollerer, T.: Handy AR: markerless inspection of augmented reality objects using fingertip tracking (2007)
86. Lee, K.-R., Chang, W.-D., Kim, S., Im, C.-H.: Real-time "eye-writing" recognition using electrooculogram. IEEE Trans. Neural Syst. Rehabil. Eng. 25(1), 37–48 (2017)
87. Li, G., Xi, N., Yu, M., Fung, W.-K.: Development of augmented reality system for AFM-based nanomanipulation. IEEE/ASME Trans. Mechatron. 9(2), 358–365 (2004)
88. Li, G., Xi, N., Chen, H., Pomeroy, C., Prokos, M.: "Videolized" atomic force microscopy for interactive nanomanipulation and nanoassembly. IEEE Trans. Nanotechnol. 4(5), 605–615 (2005)
89. Lin, S., Cheng, H.F., Li, W., Huang, Z., Hui, P., Peylo, C.: Ubii: physical world interaction through augmented reality. IEEE Trans. Mob. Comput. 16(3), 872–885 (2017)
90. Lindgren, R., Tscholl, M., Wang, S., Johnson, E.: Enhancing learning and engagement through embodied interaction within a mixed reality simulation. Comput. Educ. 95, 174–187 (2016)
91. Loureiro, R., Amirabdollahian, F., Topping, M., Driessen, B., Harwin, W.: Upper limb robot mediated stroke therapy—GENTLE/s approach. Auton. Robots 15(1), 35–51 (2003)
92. Luhn, H.P.: A statistical approach to mechanized encoding and searching of literary information. IBM J. Res. Dev. 1(4), 309–317 (1957)
93. Ma, M., et al.: Personalized augmented reality for anatomy education. Clin. Anat. 29(4), 446–453 (2016)
94. MacIntyre, B., Gandy, M., Dow, S., Bolter, J.D.: DART: a toolkit for rapid design exploration of augmented reality experiences. In: Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, pp. 197–206. ACM (2004)
95. Malik, S., Laszlo, J.: Visual touchpad: a two-handed gestural input device. In: Proceedings of the 6th International Conference on Multimodal Interfaces, pp. 289–296. ACM (2004)
96. Marescaux, J., Smith, M.K., Fölscher, D., Jamali, F., Malassagne, B., Leroy, J.: Telerobotic laparoscopic cholecystectomy: initial clinical experience with 25 patients. Ann. Surg. 234(1), 1 (2001)
97. Megali, G., et al.: EndoCAS navigator platform: a common platform for computer and robotic assistance in minimally invasive surgery. Int. J. Med. Robot. Comput. Assist. Surg. 4(3), 242–251 (2008)
98. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77(12), 1321–1329 (1994)
99. Mistry, P., Maes, P., Chang, L.: WUW-wear Ur world: a wearable gestural interface. In: CHI'09 Extended Abstracts on Human Factors in Computing Systems, pp. 4111–4116. ACM (2009)
100. Mitrasinovic, S., et al.: Clinical and surgical applications of smart glasses. Technol. Health Care 23(4), 381–401 (2015)
101. Murphy-Chutorian, E., Trivedi, M.M.: Head pose estimation and augmented reality tracking: an integrated system and evaluation for monitoring driver awareness. IEEE Trans. Intell. Transp. Syst. 11(2), 300–311 (2010)
102. Nielsen, M., Störring, M., Moeslund, T.B., Granum, E.: A procedure for developing intuitive and ergonomic gesture interfaces for HCI. In: International Gesture Workshop, pp. 409–420. Springer (2003)
103. Nuernberger, B., Lien, K.-C., Höllerer, T., Turk, M.: Interpreting 2D gesture annotations in 3D augmented reality. In: 2016 IEEE Symposium on 3D User Interfaces (3DUI), pp. 149–158. IEEE (2016)
104. Omar, T., Nehdi, M.L.: Data acquisition technologies for construction progress tracking. Autom. Constr. 70, 143–155 (2016)
105. Orlosky, J., Toyama, T., Kiyokawa, K., Sonntag, D.: ModulAR: eye-controlled vision augmentations for head mounted displays. IEEE Trans. Vis. Comput. Graph. 1, 1–1 (2015)
106. Papagiannakis, G., Singh, G., Magnenat-Thalmann, N.: A survey of mobile and wireless technologies for augmented reality systems. Comput. Animat. Virtual Worlds 19(1), 3–22 (2008)
107. Park, S., Choi, S., Lee, J., Kim, M., Park, J., Yoo, H.-J.: 14.1 A 126.1 mW real-time natural UI/UX processor with embedded deep-learning core for low-power smart glasses. In: 2016 IEEE International Solid-State Circuits Conference (ISSCC), pp. 254–255. IEEE (2016)
108. Patten, J., Ishii, H., Hines, J., Pangaro, G.: Sensetable: a wireless object tracking platform for tangible user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 253–260. ACM (2001)
109. Pejsa, T., Kantor, J., Benko, H., Ofek, E., Wilson, A.: Room2Room: enabling life-size telepresence in a projected augmented reality environment. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing, pp. 1716–1725. ACM (2016)
110. Pelargos, P.E., et al.: Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery. J. Clin. Neurosci. 35, 1–4 (2017)
111. Peppoloni, L., Brizzi, F., Avizzano, C.A., Ruffaldi, E.: Immersive ROS-integrated framework for robot teleoperation. In: 2015 IEEE Symposium on 3D User Interfaces (3DUI), pp. 177–178. IEEE (2015)
112. Piekarski, W., Thomas, B.H.: Tinmith-Metro: new outdoor techniques for creating city models with an augmented reality wearable computer. In: Proceedings of the Fifth International Symposium on Wearable Computers, pp. 31–38. IEEE (2001)
113. Ploennigs, J., Ba, A., Barry, M.: Materializing the promises of cognitive IoT: how cognitive buildings are shaping the way. IEEE Internet Things J. 5(4), 2367–2374 (2018)
114. Qamar, A.M., Khan, A.R., Husain, S.O., Rahman, M.A., Baslamah, S.: A multi-sensory gesture-based occupational therapy environment for controlling home appliances. In: Proceedings of the 5th ACM International Conference on Multimedia Retrieval, pp. 671–674. ACM (2015)
115. Rekimoto, J., Ayatsuka, Y.: CyberCode: designing augmented reality environments with visual tags. In: Proceedings of DARE 2000 on Designing Augmented Reality Environments, pp. 1–10. ACM (2000)
116. Rekimoto, J., Saitoh, M.: Augmented surfaces: a spatially continuous work space for hybrid computing environments. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 378–385. ACM (1999)
117. Rodrigo, M., Caluya, N.R., Diy, W., Vidal, E.: Igpaw: Intramuros—design of an augmented reality game for Philippine history. In: Proceedings of the 23rd International Conference on Computers in Education (2015)
118. Ruppert, G.C.S., Reis, L.O., Amorim, P.H.J., de Moraes, T.F., da Silva, J.V.L.: Touchless gesture user interface for interactive image visualization in urological surgery. World J. Urol. 30(5), 687–691 (2012)
119. Sand, A., Rakkolainen, I., Isokoski, P., Kangas, J., Raisamo, R., Palovuori, K.: Head-mounted display with mid-air tactile feedback. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, pp. 51–58. ACM (2015)
120. Schmalstieg, D., et al.: The Studierstube augmented reality project. Presence Teleoperators Virtual Environ. 11(1), 33–54 (2002)
121. Schubert, G., Schattel, D., Tönnis, M., Klinker, G., Petzold, F.: Tangible mixed reality on-site: interactive augmented visualisations from architectural working models in urban design. In: International Conference on Computer-Aided Architectural Design Futures, pp. 55–74. Springer (2015)
122. Schwabe, G., Göth, C.: Mobile learning with a mobile game: design and motivational effects. J. Comput. Assist. Learn. 21(3), 204–216 (2005)
123. Sebillo, M., Vitiello, G., Paolino, L., Ginige, A.: Training emergency responders through augmented reality mobile interfaces. Multimed. Tools Appl. 75(16), 9609–9622 (2016)
124. Shahrokni, H., Årman, L., Lazarevic, D., Nilsson, A., Brandt, N.: Implementing smart urban metabolism in the Stockholm Royal Seaport: smart city SRS. J. Ind. Ecol. 19(5), 917–929 (2015)
125. Shelton, B.E., Hedley, N.R.: Using augmented reality for teaching earth-sun relationships to undergraduate geography students. In: The First IEEE International Workshop on Augmented Reality Toolkit, vol. 8. IEEE (2002)
126. Shuhaiber, J.H.: Augmented reality in surgery. Arch. Surg. 139(2), 170–174 (2004)
127. Simões, B., Prandi, F., De Amicis, R.: Creativity support in projection-based augmented environments. In: International Conference on Augmented and Virtual Reality, pp. 168–187. Springer (2015)
128. Stadler, S., Kain, K., Giuliani, M., Mirnig, N., Stollnberger, G., Tscheligi, M.: Augmented reality for industrial robot programmers: workload analysis for task-based, augmented reality-supported robot control. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 179–184. IEEE (2016)
129. Starner, T., et al.: Augmented reality through wearable computing. Presence Teleoperators Virtual Environ. 6(4), 386–398 (1997)
130. Sweet, R.M.: The CREST simulation development process: training the next generation. J. Endourol. 31(1), S69–S75 (2017)
131. Sylaiou, S., Mania, K., Karoulis, A., White, M.: Exploring the relationship between presence and enjoyment in a virtual museum. Int. J. Hum. Comput. Stud. 68(5), 243–253 (2010)
132. Tait, M., Billinghurst, M.: The effect of view independence in a collaborative AR system. Comput. Support. Coop. Work (CSCW) 24(6), 563–589 (2015)
133. Tamaki, E., Chan, T., Iwasaki, K.: UnlimitedHand: input and output hand gestures with less calibration time. In: Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 163–165. ACM (2016)
134. Tatsumi, H., Murai, Y., Sekita, I., Tokumasu, S., Miyakawa, M.: Cane walk in the virtual reality space using virtual haptic sensing: toward developing haptic VR technologies for the visually impaired. In: 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2360–2365. IEEE (2015)
135. Thomas, B., et al.: ARQuake: an outdoor/indoor augmented reality first person application. In: The Fourth International Symposium on Wearable Computers, pp. 139–146. IEEE (2000)
136. Ullmer, B., Ishii, H.: Emerging frameworks for tangible user interfaces. IBM Syst. J. 39(3.4), 915–931 (2000)
137. Underkoffler, J., Ishii, H.: Urp: a luminous-tangible workbench for urban planning and design. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 386–393. ACM (1999)
138. Wang, R.Y., Popović, J.: Real-time hand-tracking with a color glove. ACM Trans. Graph. (TOG) 28(3), 63 (2009)
139. Wang, J., et al.: Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. Comput. Med. Imaging Graph. 40, 147–159 (2015)
140. Wang, X., Ong, S., Nee, A.Y.-C.: Multi-modal augmented-reality assembly guidance based on bare-hand interface. Adv. Eng. Inform. 30(3), 406–421 (2016a)
141. Wang, X., Ong, S., Nee, A.: Real-virtual components interaction for assembly simulation and planning. Robot. Comput. Integr. Manuf. 41, 102–114 (2016b)
142. Want, R., Fishkin, K.P., Gujar, A., Harrison, B.L.: Bridging physical and virtual worlds with electronic tags. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 370–377. ACM (1999)
143. Waterworth, E.L., Waterworth, J.A.: Focus, locus, and sensus: the three dimensions of virtual experience. CyberPsychol. Behav. 4(2), 203–213 (2001)
144. Weiser, M.: The computer for the 21st century. Sci. Am. 265(3), 94–105 (1991)
145. Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C.: Lucid touch: a see-through mobile device. In: Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, pp. 269–278. ACM (2007)
146. Wilson, A.D., Benko, H.: Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, pp. 273–282. ACM (2010)
147. Wojciechowski, R., Cellary, W.: Evaluation of learners' attitude toward learning in ARIES augmented reality environments. Comput. Educ. 68, 570–585 (2013)
148. Woods, E., et al.: Augmenting the science centre and museum experience. In: Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 230–236. ACM (2004)
149. Wozniak, P., Vauderwange, O., Mandal, A., Javahiraly, N., Curticapean, D.: Possible applications of the LEAP motion controller for more interactive simulated experiments in augmented or virtual reality. In: Optics Education and Outreach IV, vol. 9946, p. 99460P. International Society for Optics and Photonics (2016)
150. Yang, T., Xie, D., Li, Z., Zhu, H.: Recent advances in wearable tactile sensors: materials, sensing mechanisms, and device performance. Mater. Sci. Eng. R Rep. 115, 1–37 (2017)
151. Yannier, N., Koedinger, K.R., Hudson, S.E.: Learning from mixed-reality games: is shaking a tablet as effective as physical observation? In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 1045–1054. ACM (2015)
152. Yannier, N., Hudson, S.E., Wiese, E.S., Koedinger, K.R.: Adding physical objects to an interactive game improves learning and enjoyment: evidence from EarthShake. ACM Trans. Comput. Hum. Interact. (TOCHI) 23(4), 26 (2016)
153. Yu, M., Lakshman, H., Girod, B.: A framework to evaluate omnidirectional video coding schemes. In: 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 31–36. IEEE (2015)
154. Zheng, M., Waller, M.P.: ChemPreview: an augmented reality-based molecular interface. J. Mol. Graph. Model. 73, 18–23 (2017)

Copyright information

© China Computer Federation (CCF) 2019

Authors and Affiliations

1. Department of Computer Science and IT, La Trobe University, Melbourne, Australia
