Gaze+touch vs. Touch: What’s the Trade-off When Using Gaze to Extend Touch to Remote Displays?

  • Ken Pfeuffer
  • Jason Alexander
  • Hans Gellersen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9297)

Abstract

Direct touch input is employed on many devices, but it is inherently restricted to displays within the user's reach. Gaze input as a mediator can extend touch to remote displays, using gaze for remote selection and touch for local manipulation, but at what cost and benefit? In this paper, we investigate the potential trade-off with four experiments that empirically compare remote Gaze+touch to standard touch. Our experiments investigate dragging, rotation, and scaling tasks. Results indicate that Gaze+touch is, compared to touch, (1) equally fast and more accurate for rotation and scaling, (2) slower and less accurate for dragging, and (3) able to select smaller targets. Our participants confirm this trend, and are positive about the relaxed finger placement of Gaze+touch. Our experiments provide detailed performance characteristics to consider for the design of Gaze+touch interaction on remote displays. We further discuss insights into strengths and drawbacks in contrast to direct touch.
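The division of labour described above, gaze for remote selection and touch for local manipulation, can be sketched as an event handler. This is an illustrative sketch only, not the authors' implementation; the class name, the 50 px selection radius, and the coordinate tuples are assumptions made for the example.

```python
# Illustrative sketch of Gaze+touch dragging (not the paper's implementation):
# at touch-down, the target under the gaze point is selected on the remote
# display; subsequent relative touch motion on the local surface drags it.

class GazeTouchDrag:
    """Selection by gaze, manipulation by relative touch motion."""

    def __init__(self, targets):
        self.targets = targets      # {name: (x, y)} positions on the remote display
        self.active = None          # target currently being dragged
        self.last_touch = None      # previous touch position on the local surface

    def touch_down(self, gaze, touch):
        # Select the target nearest the gaze point (hypothetical 50 px radius).
        for name, (x, y) in self.targets.items():
            if (gaze[0] - x) ** 2 + (gaze[1] - y) ** 2 <= 50 ** 2:
                self.active = name
                self.last_touch = touch
                return name
        return None

    def touch_move(self, touch):
        # Indirect manipulation: apply the local touch delta to the
        # gaze-selected remote target.
        if self.active is None:
            return
        dx = touch[0] - self.last_touch[0]
        dy = touch[1] - self.last_touch[1]
        x, y = self.targets[self.active]
        self.targets[self.active] = (x + dx, y + dy)
        self.last_touch = touch

    def touch_up(self):
        self.active = None
        self.last_touch = None
```

The key property this sketch captures is the indirection: the touch surface reports only relative motion, so finger placement is unconstrained, while the gaze point alone decides which remote object is acquired.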

Keywords

Gaze interaction · Eye-tracking · Multitouch · Multimodal UI


Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  1. School of Computing and Communications, InfoLab21, Lancaster University, Lancaster, UK
