Interaction for Immersive Analytics

  • Wolfgang Büschel
  • Jian Chen
  • Raimund Dachselt
  • Steven Drucker
  • Tim Dwyer
  • Carsten Görg
  • Tobias Isenberg
  • Andreas Kerren
  • Chris North
  • Wolfgang Stuerzlinger
Chapter · Part of the Lecture Notes in Computer Science book series (LNCS, volume 11190)

Abstract

In this chapter, we briefly review the development of natural user interfaces and discuss their role in providing human-computer interaction that is immersive in various ways. Then we examine some opportunities for how these technologies might be used to better support data analysis tasks. Specifically, we review and suggest some interaction design guidelines for immersive analytics. We also review some hardware setups for data visualization that are already archetypal. Finally, we look at some emerging system designs that suggest future directions.

Keywords

Natural user interfaces · Embodied interaction · Post-WIMP interfaces · Visual analytics · Data visualization

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Wolfgang Büschel (1)
  • Jian Chen (2)
  • Raimund Dachselt (1)
  • Steven Drucker (3)
  • Tim Dwyer (4)
  • Carsten Görg (5)
  • Tobias Isenberg (6)
  • Andreas Kerren (7)
  • Chris North (8)
  • Wolfgang Stuerzlinger (9)

  1. Technische Universität Dresden, Dresden, Germany
  2. The Ohio State University, Columbus, USA
  3. Microsoft Research, Redmond, USA
  4. Monash University, Melbourne, Australia
  5. University of Colorado, Colorado, USA
  6. Inria & Université Paris-Saclay, Paris, France
  7. Linnaeus University, Växjö, Sweden
  8. Virginia Tech, Blacksburg, USA
  9. Simon Fraser University, Burnaby, Canada