Human-Computer Interaction – INTERACT 2015, pp. 315–330

An Empirical Investigation of Gaze Selection in Mid-Air Gestural 3D Manipulation

  • Eduardo Velloso
  • Jayson Turner
  • Jason Alexander
  • Andreas Bulling
  • Hans Gellersen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9297)

Abstract

In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of 3D rigid bodies on monoscopic displays. We present the results of a user study with 12 participants in which we compared the performance of Gaze, a Raycasting technique (2D Cursor), and a Virtual Hand technique (3D Cursor) for selecting objects in two 3D mid-air interaction tasks. We also compared selection confirmation times for Gaze when selection is followed by manipulation and when it is not. Our results show that gaze selection is faster than and preferred over the 2D and 3D mid-air-controlled cursors, and that it is particularly well suited for tasks in which users constantly switch between several objects during manipulation. Further, selection confirmation times are longer when selection is followed by manipulation than when it is not.
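
As an illustrative sketch only (not the authors' implementation; the function names, tuple-based vectors, and sphere-shaped object bounds are all simplifying assumptions), the following Python shows the essence of the two manual baselines compared against Gaze: Raycasting picks the first object hit by a ray cast from the camera through a 2D cursor, while Virtual Hand picks the object containing a hand-controlled 3D cursor.

    import math
    from dataclasses import dataclass

    @dataclass
    class Sphere:
        name: str
        center: tuple   # (x, y, z) world-space position
        radius: float

    def _sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def _dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def raycast_select(origin, direction, objects):
        """2D Cursor: nearest object hit by the ray; `direction` must be unit length."""
        best, best_t = None, math.inf
        for obj in objects:
            oc = _sub(origin, obj.center)
            b = 2.0 * _dot(oc, direction)
            c = _dot(oc, oc) - obj.radius ** 2
            disc = b * b - 4.0 * c             # discriminant of t^2 + b*t + c (a == 1)
            if disc < 0.0:
                continue                       # ray misses this sphere
            t = (-b - math.sqrt(disc)) / 2.0   # nearer of the two intersections
            if 0.0 <= t < best_t:              # keep the closest hit in front of the camera
                best, best_t = obj, t
        return best

    def virtual_hand_select(hand, objects):
        """3D Cursor: object whose bounding sphere contains the hand position."""
        for obj in objects:
            d = _sub(hand, obj.center)
            if _dot(d, d) <= obj.radius ** 2:
                return obj
        return None

A gaze-based selection differs mainly in where the pointing signal comes from: the ray or screen point is derived from the eye tracker's gaze estimate rather than from hand position, leaving only a confirmation action (e.g., a hand gesture) to be performed manually.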

Keywords

3D user interfaces · Eye tracking · Mid-air gestures

Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  • Eduardo Velloso (1)
  • Jayson Turner (1)
  • Jason Alexander (1)
  • Andreas Bulling (2)
  • Hans Gellersen (1)
  1. School of Computing and Communications, InfoLab21, Lancaster University, Lancaster, UK
  2. Perceptual User Interfaces Group, Max Planck Institute for Informatics, Saarbrücken, Germany
