Multimedia Tools and Applications, Volume 75, Issue 6, pp 2913–2929

Attention as an input modality for Post-WIMP interfaces using the viGaze eye tracking framework

  • Ioannis Giannopoulos
  • Johannes Schöning
  • Antonio Krüger
  • Martin Raubal

Abstract

Eye tracking is one of the most prominent modalities for tracking user attention during interaction with computational devices. Most current eye tracking frameworks focus on recording the user's gaze during website browsing or while other tasks are performed on a digital device; what they have in common is that they do not exploit gaze as an input modality. In this paper we describe the realization of a framework named viGaze. Its main goal is to provide an easy-to-use framework for exploiting eye gaze as an input modality in various contexts. To this end, it provides features for exploring explicit and implicit gaze-based interactions in complex virtual environments. The viGaze framework is flexible and can easily be extended to incorporate other input modalities typically used in Post-WIMP interfaces, such as gesture or foot input. We describe the key components of the viGaze framework and report on a user study conducted to test it. The study took place in a virtual retail environment, a challenging pervasive setting containing complex interactions that can be supported by gaze. Participants performed two gaze-based interactions with products on virtual shelves and triggered an interaction cycle between the products and an advertisement monitor placed on the shelf. We demonstrate how gaze can be used in Post-WIMP interfaces to steer users' attention to certain components of the system. We conclude by discussing the advantages provided by the viGaze framework and highlighting the potential of gaze-based interaction.
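The explicit gaze interactions described above hinge on detecting when a user's gaze dwells on a virtual object long enough to count as a deliberate selection rather than an incidental glance. The sketch below illustrates this general dwell-selection idea in Python; it is a minimal, hypothetical example under assumed names (DwellSelector, product_42, the per-sample update signature), not the actual viGaze API:

```python
import time

# Hypothetical sketch of dwell-based gaze selection; not the viGaze API.
DWELL_THRESHOLD = 0.8  # seconds of sustained fixation that count as a selection


class DwellSelector:
    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self.current_target = None   # object the gaze currently rests on
        self.fixation_start = None   # timestamp when that fixation began

    def update(self, target, timestamp):
        """Feed one hit-tested gaze sample; return the target once dwell is reached."""
        if target != self.current_target:
            # Gaze moved to a new object (or to empty space): restart the timer.
            self.current_target = target
            self.fixation_start = timestamp
            return None
        if target is not None and timestamp - self.fixation_start >= self.threshold:
            self.fixation_start = timestamp  # avoid re-triggering on every frame
            return target
        return None


# Example: simulate a gaze stream resting on a shelf product at 10 Hz.
selector = DwellSelector()
t0 = time.time()
for i in range(10):
    selected = selector.update("product_42", t0 + i * 0.1)
    if selected:
        print("gaze selection:", selected)  # fires once ~0.8 s has elapsed
```

The dwell threshold trades off speed against the well-known "Midas touch" problem in gaze interaction: too short and every passing glance triggers a selection, too long and the interface feels sluggish.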

Keywords

Prototype multimedia systems and platforms · Human-computer interaction · Eye tracking


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Ioannis Giannopoulos (1)
  • Johannes Schöning (2)
  • Antonio Krüger (3)
  • Martin Raubal (1)

  1. Institute of Cartography and Geoinformation, ETH Zurich HIL, Zurich, Switzerland
  2. Hasselt University - tUL - iMinds, Expertise Centre for Digital Media, Diepenbeek, Belgium
  3. DFKI GmbH, Campus D3 2, Saarbrücken, Germany
