Spatial Mapping of Physical and Virtual Spaces as an Extension of Natural Mapping: Relevance for Interaction Design and User Experience

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9179)


Natural user interfaces are designed to be intuitive and quick to learn. Through natural mapping, they draw on users' prior knowledge and skills by employing spatial analogies, cultural standards, or biological effects. Virtual environments with high interaction fidelity also use rich spatial information in addition to natural mapping, e.g. stereoscopy or head-tracking. However, an additional factor for naturalism is the relationship between the perceived interaction spaces: we propose to examine the spatial mapping of the perceived physical and virtual spaces as an extension of natural mapping. As with natural mapping, a high degree of spatial mapping achieved through an isomorphic mapping should result in more intuitive interactions and reduce the required mental workload. However, the benefits of spatial mapping for user experience and task performance only become evident for complex spatial tasks. As a consequence, many tasks do not benefit from complex spatial information (e.g. stereoscopy or head-tracking).


Keywords: Natural mapping · User experience · Mental models · Spatial mapping

1 Introduction

In HCI research, a user-centered design approach aims to address the explicit and implicit needs of users in order to minimize barriers to technology use. Intuitive user interfaces allow users to apply prior knowledge and experience, making them easier to understand and master. For example, gestures and metaphors such as swipe, pinch, or roll are used to interact with smartphones; all of them are based on analogies to interacting with physical paper. Prior knowledge can stem from experience, such as learned conventions for arbitrary actions (e.g. pressing a button on a keyboard to close an application). Ideally, a natural mapping (NM) allows users to infer meaning from real-world experiences and analogies, whether symbolic (e.g. clicking 'x' to close an application) or natural (e.g. swiping the application off the screen), when interacting with technology [1]. Norman introduced various concepts to utilize these experiences, e.g. spatial analogies, cultural standards, perceptual effects, or biological effects [1]. From a cognitive psychology viewpoint, these experiences are stored as mental models [2], cognitive schemes [3], or scripts [4].

NM activates existing mental models and allows them to be transferred to the interaction at hand. As a consequence of this transfer, the interaction requires a lower mental workload, freeing cognitive resources for processing the actual content of the interaction. Furthermore, NM also allows mental models constructed during interaction with technology to be transferred to the physical world: virtual simulators (e.g. medical training or driving simulators) can be employed to prepare for real-world situations. The extent of these mappings can be set freely by interaction designers, and there have been many attempts to create authentic virtual counterparts of real-world interactions. However, studies showed that highly naturally mapped interactions (e.g. stereoscopic images or authentic game controllers) did not automatically enhance performance or user experience (UX), but were only effective for certain interactions [5, 6].

In addition to the mapping of input actions, virtual environments can differ greatly in their mapping of spatial relations: the user perceives both a physical space (e.g. a C.A.V.E. environment) and a virtual space (e.g. the virtual scene depicted by the C.A.V.E.) in which the interaction takes place. For example, a system with a high degree of (natural) spatial mapping may use an isomorphic mapping of distances, object sizes, and travel speeds as well as a subjective, head-tracked viewing perspective, resulting in a very natural overall experience. Furthermore, users make assumptions about possible interaction affordances based on their real-world experiences, drawing on existing mental models. Like NM, this spatial mapping should reduce the required mental workload. Even with very naturalistic input devices (e.g. gestures), systems with lower degrees of spatial mapping (e.g. video games) require more mental transformation processes during the interaction to reconcile the differing perceived spaces. Yet the mere availability of highly detailed spatial input and output information should not automatically benefit the interaction process.

In this paper, we discuss the spatial mapping of virtual and physical spaces and the impact of spatial mapping on task performance and user experience. Specifically, as with NM, we argue that the combination of perceived spatial multisensory stimuli has to be meaningful for the specific user task to show any benefits. We examine spatial relationships, because body-centered interaction [7] primarily aims to combine corresponding proprioceptive and exteroceptive sensations to create a sense of embodiment in a virtual environment [8].

2 Natural Mapping and Natural User Interfaces

Natural mapping is a specific form of input mapping [9] that focuses on intuitive controls for interactive systems. Natural user interfaces (NUI) are often described as direct, authentic, motion-controlled, gesture-based, or controller-less. Instead of relying on buttons, keyboards, joysticks, or mice, which require users to issue abstract and arbitrary input commands, they rely on intuitive, physical input methods. These methods are often modeled on their real-life counterparts and do not require the user to learn the controls before interacting with a video game or virtual environment.

Natural mapping originally refers to the proper, natural arrangement of the relations between controls, their manipulation, and the outcome of that manipulation [1]. The interactions are based on prior knowledge: physical and spatial analogies are used to imitate physical objects within a virtual context, e.g. 'buttons' that can be pressed, 'sliders' that can be dragged, and so on. Cultural standards give the user an idea of the outcome of an interaction, e.g. rotating an object clockwise or counterclockwise to increase or reduce a value. What we call 'intuitive' means that our cognitive system can adapt to the situation more easily: based on previous knowledge, mental models of the objects and the interaction are constructed.

2.1 Mental Models

Mental models (MM) are subjective, functional models of technical, physical, and social processes in complex situations. They are representations of the surrounding world and include relationships between its different parts [2]. MM include only reduced aspects of a situation: quantitative relationships are reduced to qualitative relations within the models [10], which relate to a specific object in the form of structural or functional analogies. MM are constructed to organize and structure knowledge through the processing of experiences. Schemes, frames, and scripts are similar, related concepts. Mental models are used in theories of media reception, such as text [11], film [12], or interactive media [13].

Two mechanisms provide information for constructing mental models: in a top-down process, existing knowledge and experiences from other knowledge domains are used as the basis of the MM. In a bottom-up process, situation-specific information is integrated into the model. Whenever new, situation-specific information becomes available, the model adapts to the new circumstances. Both the cognitive processing and the construction of MM are automatic processes.

The benefit of natural mapping comes from the inclusion of previous knowledge in the construction of mental models. NM allows for a transfer from other knowledge domains, thus enhancing the retrieval of existing mental models for the interaction or allowing for an easy top-down adaptation of new models [14]. As a result, fewer cognitive resources are needed for the interaction, and more resources are available to process the actual content of the media experience.

2.2 Task-Specific Benefits of Natural Mapping

Interaction techniques in virtual environments are typically divided into natural and magical techniques. Whereas natural interaction aims at high interaction fidelity and the simulation of real-world counterparts, magical techniques are intentionally less natural and focus on usability and performance [15]. Object selection and manipulation, travel/translation, system control, and symbolic input are key tasks within a virtual environment [15]. Depending on the context of the interaction or application, the focus may not be on NM at all. In productivity software such as engineering or office applications, the efficiency and precision of the controls are more important than intuitive interaction, favoring a magical or abstract technique (such as using a keyboard). Using previously learned hotkeys to accomplish a task may not be intuitive, but it is very efficient. As productivity software is aimed at experts, intuitive controls for novices are less important.

There are, however, tasks that clearly benefit from NM: tasks designed for novice users should be intuitive, allowing for a fast learning process. Furthermore, when the task involves sensorimotor transfer processes (e.g. medical training simulations), a NUI should employ natural input devices (e.g. a virtual scalpel) to achieve the best transfer results. Also, if the goal of the task is not performance-based but focuses on entertainment or is meant to provoke body movements (e.g. fitness, sports), NM can be employed effectively [16, 17]. These examples stress the importance of the task-specific context for NM. Depending on the complexity and the goal of the interaction, it may not be necessary to completely simulate the real-world counterpart of an interaction; a simplification may be sufficient.

2.3 Natural User Interfaces and Spatial Information

NUIs often aim for high naturalism, combining spatial input capabilities and multisensory output [18]. Bowman [19] emphasizes the precision problems of spatial input. Spatial tracking systems still lag far behind the modern computer mouse in precision (e.g. jitter), accuracy, and responsiveness (e.g. latency), and they have several basic disadvantages: (1) spatial input is often performed in the air rather than on a flat surface, (2) in-air human movements are often jittery because of natural body tremors, (3) pointing techniques using ray-casting (e.g. magic wands) amplify natural hand tremors, and (4) 3D spatial trackers usually do not stay in place when the user lets go of them [19].

Despite these problems, the fidelity of spatial input capabilities is unparalleled. NM allows for three-dimensional input, e.g. through gestures or tangible objects. For example, a virtual environment could allow users to play virtual golf with a real golf club whose position and movements, along with those of the player, are tracked by the system. The amount of spatial input information can vary greatly: the system could process the information at a basic level, registering the overall movement of the club as a single event, or it could process all available information (6 DOF of movement) for the interaction. In practice, most systems fall between these extremes. Interactions can be simplified to make them easier to perform (e.g. in video games such as Nintendo Wii Sports or Microsoft Tiger Woods PGA Tour 13 for Kinect) or maintained as complex sequences (e.g. for training simulators).
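The two extremes of input processing described above can be illustrated with a minimal sketch. The data structure and threshold are hypothetical and not taken from any of the systems discussed; the point is only the contrast between collapsing a tracked trajectory into one event and passing every sample through.

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """One tracked sample of the club: position (x, y, z) and orientation
    (yaw, pitch, roll). Illustrative only, not a real tracking API."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def simplified_swing(samples, speed_threshold=2.0):
    """Basic-level processing: collapse the whole trajectory into a single
    'hit' event once the club moves fast enough (crude per-sample velocity
    proxy along one axis; the threshold is an arbitrary assumption)."""
    for prev, curr in zip(samples, samples[1:]):
        if abs(curr.x - prev.x) > speed_threshold:
            return "hit"
    return "no_hit"

def full_fidelity_swing(samples):
    """High-fidelity processing: hand every 6-DOF sample to the physics
    simulation unchanged, as a training simulator would."""
    return samples
```

A Wii-Sports-style game only needs the output of `simplified_swing`, whereas a training simulator consumes the full sample list.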

Simplified interactions usually do not require highly elaborate previous knowledge or skill. Novice users may apply simple concepts from common knowledge (e.g. "swing the golf club and hit the ball"), and their assumptions about the interaction are based on these basic models. Complex interactions in virtual environments are rarely perceived to be as complex as their real-world counterparts [18], so for experts, even these are simplified. However, a seemingly complex system (e.g. a training simulator) may invoke the assumption of real-world complexity, resulting in frustration and poor user experience if these assumptions are not met. Novices, in contrast, may not notice the simplification because of their basic mental model of the real-world interaction.

To exploit the full benefits of complex spatial input, complex multisensory spatial output is required: if users cannot perceive spatial depth cues, they are unable to make precise spatial inputs. Visual depth cues can be classified into static and dynamic monocular cues and binocular cues [20, 21, 22]. Monocular cues constitute the majority of cues for human depth perception, e.g. occlusion, relative height in the visual field, relative size and brightness of objects, texture gradient, linear and aerial perspective, and shadows. Spatial cues requiring binocular vision are parallax and stereopsis, i.e. convergence and accommodation of the eyes [23]. In media technology, binocular spatial cues are primarily simulated through stereoscopy [24]: head-mounted displays, shutter or polarized glasses, or autostereoscopic techniques are used to present two separate stereo images, one for each eye. Combined, these visual spatial cues should allow displays to convey highly accurate spatial information. Furthermore, head-tracking can be used to ensure a correct subjective perspective of the virtual scene to maximize the effect.

Systems with a high degree of naturalism often combine high degrees of spatial input and output capabilities. The mapping of spatial relations within the system can also be designed differently, which we refer to as spatial mapping.

3 Spatial Mapping

We conceptualize spatial mapping (SM) as an extension of the natural mapping process in which spatial relationships are included in the mental models for a specific interaction in a virtual environment [25, 26, 27]. High (natural) SM is considered an isomorphic mapping of the perceived physical (real) interaction space and the virtual interaction space: distances and object sizes are identical in both perceived spaces. Building on the theory of NM, the high similarity of both spaces favors the transfer of mental models of the physical world to the virtual environment, and vice versa (Fig. 1).
Fig. 1. Left: system with low spatial mapping (system A), requiring the user to transform spatial information between the virtual and physical spaces. Right: system with high spatial mapping (system B), requiring no cognitive transformations. Source of images: [28].

Although NM with a given system can be quite authentic (e.g. using gesture input), the spatial relationships during the interaction can be mapped differently. For example, when playing virtual table tennis on a Nintendo Wii console, users control a racket with a naturally mapped input controller that enables movement in 6 DOF in front of a TV screen (system A). The user is represented by an avatar on the screen that mirrors their movements to a certain degree. Even with high NM, SM is low, because cognitive transformation processes are required to combine the physical and virtual perception spaces; furthermore, the system reduces relevant spatial information to compress the physical space needed for the interaction. System B could instead employ an isomorphic spatial mapping using a C.A.V.E.: there is no representation of the user other than their physical self, and all objects perceived in the environment have the same size and distance as in the real world. Only a few transformation processes are necessary, and more cognitive resources remain for processing the content of the interaction itself.
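The difference between systems A and B can be sketched as a coordinate transform between the two perceived spaces. The scale and offset values below are hypothetical calibration parameters, not measurements from the systems described; the sketch only shows that an isomorphic mapping is the identity transform, while a non-isomorphic one leaves an inverse transform for the user to approximate mentally.

```python
def to_virtual(p_physical, scale, offset):
    """Map a perceived physical-space point (x, y, z) into virtual-space
    coordinates via a uniform scale and translation (hypothetical model)."""
    return tuple(scale * c + o for c, o in zip(p_physical, offset))

def to_physical(p_virtual, scale, offset):
    """Inverse transform: roughly the mental work a user of a low-SM system
    must perform to relate on-screen positions back to body space."""
    return tuple((c - o) / scale for c, o in zip(p_virtual, offset))

# System B (C.A.V.E., isomorphic mapping): identity transform,
# so no cognitive transformation is needed.
assert to_virtual((1.0, 0.5, 2.0), scale=1.0, offset=(0.0, 0.0, 0.0)) == (1.0, 0.5, 2.0)

# System A (TV plus avatar): space is compressed and shifted; the user has
# to implicitly invert this mapping while interacting.
p_virtual = to_virtual((1.0, 0.5, 2.0), scale=0.25, offset=(0.0, 0.0, 3.0))
```

Under this toy model, the cognitive transformation cost of a system corresponds to how far its transform departs from the identity.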

3.1 Adequacy and Relevance of Spatial Mapping for Different Types of Tasks

In theory, the combination of spatial input and output technologies allows for very high levels of interaction fidelity in interface design [18]. In practice, NUIs are often seen as more engaging and interesting, but also as physically more exhausting. They can be implemented successfully for certain types of interaction, but may result in poor UX for others. An often-cited example [29, 30] is the NUI from the movie Minority Report [31], where the protagonist uses a gesture-based interaction system to search an audiovisual database. The system looks visually impressive, but the mapping of the input modalities is completely inadequate for the task of searching for information: it is exhausting to use and provides no essential benefit over a mouse and keyboard with a two-dimensional display. Had the task included detailed manipulation of several objects within a three-dimensional scene, the high degree of spatial information in the input actions could have been applied reasonably.

A high degree of detail in input and output modalities is the ideal precondition for a high degree of user experience. However, many interactions do not require high spatial mapping; it is simply not relevant and thus does not affect UX or task performance. An application may offer a visually rich stereoscopic presentation with highly natural body posture and gesture recognition as input modality, but when the user's task is merely to react to acoustic stimuli with a wave gesture, the additional spatial information is irrelevant to the task. It should not benefit UX; on the contrary, it could impair UX through side effects such as simulator sickness [32] or physical fatigue from the physical interaction with the system. As a result, the user may perceive the system as inadequate for the task. Simple tasks requiring just one or two spatial dimensions do not benefit from a high degree of spatial information, which only makes the interaction unnecessarily difficult.

This notion is also supported by Bowman [19], who argues that the mapping between input devices and actions in the interface is critical. He recommends reducing the number of DOFs the user is required to control, e.g. by using lower-DOF input devices or by ignoring some of the input DOFs.
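Ignoring input DOFs, as Bowman recommends, can be sketched as a simple filter. The DOF names and values below are illustrative, not taken from any specific device API.

```python
def constrain_dofs(pose, allowed):
    """Zero out every DOF the task does not need, keeping only those in
    `allowed`. `pose` maps DOF names to tracked values (hypothetical names)."""
    return {dof: (value if dof in allowed else 0.0) for dof, value in pose.items()}

# A one-dimensional steering task keeps only lateral translation,
# even when the tracker delivers a full 6-DOF pose:
raw = {"x": 0.4, "y": 1.2, "z": -0.3, "yaw": 15.0, "pitch": 2.0, "roll": 0.1}
steering = constrain_dofs(raw, allowed={"x"})
```

Discarding the unused dimensions at the input layer spares the user from having to control them deliberately.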

Three key components can be identified that characterize spatial mapping:
  • Degree of detail of spatial input modalities

  • Degree of detail of spatial output modalities

  • Interaction task requiring a certain degree of spatial input/output modalities

The interaction task can be simple, using only one-dimensional spatial information (e.g. movement along the x-axis only). Video games like Space Race [33] use two-way joysticks; the user's task is only to steer left or right while the avatar accelerates automatically. Complex spatial information is not necessary for the task.

More often, two-dimensional spatial information is required (e.g. interaction within the plane spanned by the x-axis and y-axis). Many modern video games include such interaction tasks by allowing inputs for left, right, up, and down; racing simulations as well as side-scrolling games or games with a bird's-eye view fall into this category. Binocular depth cues (i.e. stereoscopy) are not relevant to the task itself.

Many studies on stereoscopic presentation in games found no effect on performance or UX [5, 34, 35]. However, some studies [36, 37] report positive effects of stereoscopy on fun and enjoyment. These could be explained by a novelty effect, as players may enjoy stereoscopic technology simply because it is new to them. Even studies in VR simulators using scenes with simple selection tasks report no benefits of stereoscopic presentation [38, 39], which supports the assumptions made here.

Complex tasks involving three-dimensional spatial information require users to interact in a 3D space. It is not sufficient for a virtual environment to present complex three-dimensional scenes with high degrees of spatial information (e.g. shooter games, C.A.V.E.s); the user's task has to involve true three-dimensional interaction to make the available spatial detail meaningful. Studies using, for example, selection and manipulation of 3D objects in 3D space [40] show positive effects of stereoscopy and head-tracking on task performance and UX.

Overall, the design of the interaction task determines which degree of spatial information is relevant for the input and output modalities. "The more, the better" does not apply here: higher degrees of spatiality have to be meaningful for the user's task in order to significantly enhance UX or task performance. A truly isomorphic spatial mapping should therefore require a three-dimensional task to show any benefit over lesser degrees of spatial mapping, and the right combination of task and spatial information should yield the best results for UX and task performance.

3.2 User Studies

We conducted a series of studies with virtual environments using low and high degrees of spatial mapping to test the assumptions of this theory. A first study (N = 265) compared two systems, manipulating the degree of spatial mapping (high: stereoscopic presentation, isomorphic spatial relations, subjective perspective; low: monoscopic presentation, non-isomorphic spatial relations, objective perspective) and using two different user tasks [28, 41]. The task in system A (a power-wall setup with a VR table tennis simulation [42]) required three-dimensional interaction to manipulate objects within a virtual scene, whereas the task in system B (the racing simulation Gran Turismo 5 [43]) required only two-dimensional interaction. In both systems, UX (measured with the questionnaires MEC-SPQ [44], UEQ [45], and IMI [46]), task performance, and various user variables were recorded and analyzed. The results confirm our hypotheses: high spatial fidelity resulted in better UX and task performance only for users with the three-dimensional task. For users with the two-dimensional task, the additional spatial information enhanced neither performance nor UX, as it was rated inadequate and unnecessary by the participants.

A second study (N = 94) examined different spatial mappings in the video game The Elder Scrolls V: Skyrim [47]. Using an Oculus Rift HMD and a Razer Hydra controller, we manipulated stereoscopic presentation and natural input mapping. In all groups, the task required complex three-dimensional interaction (i.e. placing and navigating objects through a custom environment created for the experiment). We used the same measures as in the first study. Overall, the results confirmed that the high spatial mapping was rated as more adequate and relevant for the complex interaction task and yielded higher task performance than the lower degrees of spatial mapping.

4 Discussion and Implications

In this paper we introduced the concept of spatial mapping as an extension of natural mapping. SM refers to the mapping of spatial relations, object sizes and distances, and visual perspectives within a given virtual environment. High degrees of spatial mapping use an isomorphic mapping of perceived real-world spatial relations to the virtual world, enabling users to apply previous knowledge and skills from the real world. High degrees of spatial mapping reduce the cognitive workload of the interaction, as fewer transformation processes are required to learn it; furthermore, the transfer of mental models constructed within the virtual environment (e.g. in virtual training simulations using spatial tasks) to real-world applications should be easier as well. However, natural user interfaces must reflect the context of the user's tasks. High degrees of spatial information, for both input and output, have to be relevant to the interaction in order to enhance UX or task performance. For example, spatial depth cues provided by stereoscopic presentation or subjective head-tracking are only beneficial for complex three-dimensional tasks. A system may provide a very natural interaction with high interaction fidelity, but when only simple one- or two-dimensional interactions are required, it may prove no better, or even worse, than a more basic system.



The work presented has been partially funded by the German Research Foundation (DFG) as part of the research training group Connecting Virtual and Real Social Worlds (grant 1780).


References

  1. Norman, D.: The Design of Everyday Things: Revised and Expanded Edition. Basic Books, New York (2013)
  2. Johnson-Laird, P.N.: Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge University Press, Cambridge (1983)
  3. Anderson, R.C.: The notion of schemata and the educational enterprise: general discussion of the conference. In: Anderson, R.C., Montague, W.E. (eds.) Schooling and the Acquisition of Knowledge, pp. 415–431. Lawrence Erlbaum, Hillsdale (1977)
  4. Schank, R.C., Abelson, R.: Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Lawrence Erlbaum, Hillsdale (1977)
  5. Elson, M., van Looy, J., Vermeulen, L., Van den Bosch, F.: In the mind's eye: no evidence for an effect of stereoscopic 3D on user experience of digital games. In: ECREA ECC 2012 Preconference Experiencing Digital Games: Use, Effects & Culture of Gaming, Istanbul (2012)
  6. Lapointe, J.F., Savard, P., Vinson, N.G.: A comparative study of four input devices for desktop virtual walkthroughs. Comput. Hum. Behav. 27, 2186–2191 (2011)
  7. Slater, M., Usoh, M.: Body centered interaction in immersive virtual environments. In: Thalmann, M., Thalmann, D. (eds.) Artificial Life and Virtual Reality, pp. 125–148. John Wiley, Oxford (1994)
  8. Costa, M.R., Kim, S.Y., Biocca, F.: Embodiment and embodied cognition. In: 5th International Conference, VAMR 2013, Held as Part of HCI International 2013, Las Vegas, 21–26 July 2013, pp. 333–342 (2013)
  9. Steuer, J.: Defining virtual reality: dimensions determining telepresence. J. Commun. 42, 73–93 (1992)
  10. Johnson-Laird, P.N.: The history of mental models. In: Manktelow, K., Chung, M.C. (eds.) Psychology of Reasoning: Theoretical and Historical Perspectives, pp. 179–212. Psychology Press, New York (2004)
  11. Van Dijk, T.A., Kintsch, W.: Strategies of Discourse Comprehension. Academic Press, New York (1983)
  12. Ohler, P.: Kognitive Filmpsychologie. Verarbeitung und mentale Repräsentation narrativer Filme [Cognitive psychology of movies: processing and mental representation of narrative movies]. MAkS-Publikationen, Münster (1994)
  13. Wirth, W., Hartmann, T., Böcking, S., Vorderer, P., Klimmt, C., Schramm, H., Saari, T., Laarni, J., Ravaja, N., Gouveia, F.R., Biocca, F., Sacau, A., Jäncke, L., Baumgartner, T., Jäncke, P.: A process model of the formation of spatial presence experiences. Media Psychol. 9, 493–525 (2007)
  14. Tamborini, R., Skalski, P.: The role of presence in the experience of electronic games. In: Vorderer, P., Bryant, J. (eds.) Playing Video Games: Motives, Responses, and Consequences, pp. 225–240. Lawrence Erlbaum, Mahwah (2006)
  15. Bowman, D.A., Kruijff, E., LaViola, J.J., Poupyrev, I.: 3D User Interfaces: Theory and Practice. Pearson Education, Boston (2005)
  16. McGloin, R., Krcmar, M.: The impact of controller naturalness on spatial presence, gamer enjoyment, and perceived realism in a tennis simulation video game. Presence Teleoperators Virtual Environ. 20, 309–324 (2011)
  17. Skalski, P., Tamborini, R., Shelton, A., Buncher, M., Lindmark, P.: Mapping the road to fun: natural video game controllers, presence, and game enjoyment. New Media Soc. 13, 224–242 (2011)
  18. Bowman, D.A., McMahan, R.P., Ragan, E.D.: Questioning naturalism in 3D user interfaces. Commun. ACM 55, 78 (2012)
  19. Bowman, D.A.: 3D user interfaces. In: Soegaard, M., Dam, R.F. (eds.) The Encyclopedia of Human-Computer Interaction. The Interaction Design Foundation, Aarhus (2014)
  20. Gibson, J.J.: The Ecological Approach to Visual Perception. Houghton Mifflin, Boston (1979)
  21. Surdick, R.T., Davis, E.T., King, R.A., Hodges, L.F.: The perception of distance in simulated visual displays: a comparison of the effectiveness and accuracy of multiple depth cues across viewing distances. Presence 6, 513–531 (1997)
  22. Posner, M.I., Snyder, C.R., Davidson, B.J.: Attention and the detection of signals. J. Exp. Psychol. Gen. 109, 73–91 (1980)
  23. Hagendorf, H., Krummenacher, J., Müller, H.J., Schubert, T.: Wahrnehmung und Aufmerksamkeit [Perception and Attention]. Springer Medizin, Berlin (2011)
  24. King, R.D.: A brief history of stereoscopy. Wiley Interdiscip. Rev. Comput. Stat. 5, 334–340 (2013)
  25. Pietschmann, D.: Spatial mapping of input and output spaces in video games. In: Schröter, F. (ed.) Games, Cognition, and Emotion. Hamburg University, Hamburg (2013)
  26. Pietschmann, D., Liebold, B., Ohler, P.: Spatial mapping of mental interaction models and stereoscopic presentation. In: 2nd Conference on Research and Use of VR/AR Technologies (VAR²), Institute for Machine Tools and Production Processes (2013)
  27. Pietschmann, D., Liebold, B., Valtin, G., Ohler, P.: Taking space literally: reconceptualizing the effects of stereoscopic representation on user experience. Italian Journal of Game Studies 2 (2013)
  28. Pietschmann, D.: Relevanz räumlicher Informationen für die User Experience und Aufgabenleistung [Relevance of spatial information for user experience and task performance]. Springer, Wiesbaden (2015)
  29. Schmitz, M., Endres, C., Butz, A.: A survey of human-computer interaction design in science fiction movies. In: INTETAIN 2008: Proceedings of the 2nd International Conference on Intelligent Technologies for Interactive Entertainment, Article 7. ICST (2007)
  30. Underkoffler, J.: g-speak (point and touch interface demonstration). TED 2010: What the World Needs Now, Long Beach (2010)
  31. Spielberg, S.: Minority Report. 145 min. Twentieth Century Fox Film Corporation, USA (2002)
  32. Kennedy, R.S., Lane, N.E., Berbaum, K.S., Lilienthal, M.G.: Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3, 203–220 (1993)
  33. Atari Inc.: Space Race. Atari Inc., Sunnyvale (1973)
  34. Häkkinen, J., Pölönen, M., Takatalo, J., Nyman, G.: Simulator sickness in virtual display gaming: a comparison of stereoscopic and non-stereoscopic situations. In: 8th International Conference on Human Computer Interaction with Mobile Devices and Services, Helsinki (2006)
  35. Takatalo, J., Häkkinen, J., Kaistinen, J., Nyman, G.: User experience in digital games: differences between laboratory and home. Simul. Gaming 42, 656–673 (2010)
  36. Rajae-Joordens, R.J.E., Langendijk, E., Wilinski, P., Heynderickx, I.: Added value of a multi-view auto-stereoscopic 3D display in gaming applications. In: 12th International Display Workshops in Conjunction with Asia Display, Takamatsu (2005)
  37. LaViola, J.J., Litwiller, T.: Evaluating the benefits of 3D stereo in modern video games. In: CHI 2011: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2345–2354. ACM (2011)
  38. Davis, E.T., Hodges, L.F.: Human stereopsis, fusion, and stereoscopic virtual environments. In: Barfield, W., Furness, T.A. (eds.) Virtual Environments and Advanced Interface Design, pp. 145–174. Oxford University Press, Oxford (1995)
  39. McMahan, R.P., Gorton, D., Gresock, J., McConnell, W., Bowman, D.A.: Separating the effects of level of immersion and 3D interaction techniques. p. 108 (2006)
  40. Teather, R.J., Stuerzlinger, W.: Guidelines for 3D positioning techniques. In: Future Play 2007: Proceedings of the 2007 Conference on Future Play, p. 61. ACM (2007)
  41. Pietschmann, D., Rusdorf, S.: Matching levels of task difficulty for different modes of presentation in a VR table tennis simulation by using assistance functions and regression analysis. In: Shumaker, R., Lackey, S. (eds.) VAMR 2014, Part I. LNCS, vol. 8525, pp. 406–417. Springer, Heidelberg (2014)
  42. Rusdorf, S., Brunnett, G., Lorenz, M., Winkler, T.: Real time interaction with a humanoid avatar in an immersive table tennis simulation. IEEE Trans. Vis. Comput. Graph. 13, 15–25 (2007)
  43. Polyphony Digital Inc.: Gran Turismo 5. PlayStation 3. Sony Computer Entertainment America, Foster City (2010)
  44. Vorderer, P., Wirth, W., Gouveia, F.R., Biocca, F., Saari, T., Jäncke, F., Böcking, S., Schramm, H., Gysbers, A., Hartmann, T., Klimmt, C., Laarni, J., Ravaja, N., Sacau, A., Baumgartner, T., Jäncke, P.: MEC spatial presence questionnaire (MEC-SPQ): short documentation and instructions for application. Report to the European Community, Project Presence: MEC (IST-2001-37661) (2004)
  45. Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (ed.) USAB 2008. LNCS, vol. 5298, pp. 63–76. Springer, Heidelberg (2008)
  46. McAuley, E., Duncan, T., Tammen, V.V.: Psychometric properties of the Intrinsic Motivation Inventory in a competitive sport setting: a confirmatory factor analysis. Res. Q. Exerc. Sport 60, 48–58 (1989)
  47. Bethesda Game Studios: The Elder Scrolls V: Skyrim. PC. Bethesda Softworks LLC, Rockville (2011)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Institute for Media Research, Chemnitz University of Technology, Chemnitz, Germany
