Smooth Gaze: a framework for recovering tasks across devices using eye tracking

Original Article

Abstract

A user's task is often distributed across devices: for example, a student listens to a lecture while watching slides on a projected screen, takes notes on her laptop, and occasionally checks Twitter for comments on her smartphone. In scenarios like this, users move between heterogeneous devices and incur both physical and mental task-resumption overhead. To address this problem, we created Smooth Gaze, a framework that records the user's work state and resumes it seamlessly across devices by leveraging implicit gaze input. In particular, we propose two novel and intuitive techniques: smart watching, which detects which display and target region the user is looking at, and smart posting, which transfers and integrates content across devices. In addition, we designed and implemented SmoothReading, a cross-device reading system that captures content from secondary devices and uses eye tracking to generate annotations displayed on the primary device. A user study showed that the system supported information seeking and task resumption and improved users' overall reading experience.
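To make the smart watching idea concrete, below is a minimal Python sketch, not the authors' implementation: it assumes gaze samples arrive in a single shared coordinate space spanning all displays, and the names Region, GazeRouter, dwell_s, and on_switch are illustrative assumptions. A dwell threshold keeps brief glances between devices from triggering spurious switches.

# Hypothetical sketch of gaze-to-display routing; all names are
# illustrative, not taken from the Smooth Gaze implementation.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Region:
    """A rectangular display region in a shared gaze coordinate space."""
    device: str   # e.g. "projector", "laptop", "phone"
    name: str     # e.g. "slides", "notes"
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h


class GazeRouter:
    """Resolves which region the user is watching. A switch fires only
    after gaze has dwelt in a new region for dwell_s seconds."""

    def __init__(self, regions: List[Region], dwell_s: float = 0.3,
                 on_switch: Callable[[Region], None] = print):
        self.regions = regions
        self.dwell_s = dwell_s
        self.on_switch = on_switch
        self.current: Optional[Region] = None    # region currently "watched"
        self._candidate: Optional[Region] = None
        self._since = 0.0

    def feed(self, gx: float, gy: float, t: float) -> None:
        """Consume one timestamped gaze sample (gx, gy)."""
        hit = next((r for r in self.regions if r.contains(gx, gy)), None)
        if hit is not self._candidate:           # gaze entered a new region
            self._candidate, self._since = hit, t
        elif (hit is not None and hit is not self.current
              and t - self._since >= self.dwell_s):
            self.current = hit
            self.on_switch(hit)                  # e.g. resume state on hit.device


# Usage: three displays laid out side by side in one coordinate space.
regions = [Region("projector", "slides", 0, 0, 1920, 1080),
           Region("laptop", "notes", 2000, 0, 1440, 900),
           Region("phone", "twitter", 3500, 0, 400, 800)]
router = GazeRouter(regions,
                    on_switch=lambda r: print("watching:", r.device, r.name))
for t in (0.0, 0.1, 0.2, 0.4):                   # simulated samples on the laptop
    router.feed(2100.0, 300.0, t)                # fires once, at t = 0.4

In the full system, the switch callback would presumably hand off to the content-transfer layer (smart posting); here it only prints.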

Keywords

Visual attention · Active reading · Interaction design · User interface

Acknowledgments

We thank all the volunteers. This work was supported by the National Key Research & Development Program of China (No. 2016YFB1001403), the National Natural Science Foundation of China (Nos. 61772468 and 61572437), and the Zhejiang Provincial Natural Science Foundation of China (No. LY15F020030).

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Zhejiang University of Technology, Hangzhou, China
  2. Carnegie Mellon University, Pittsburgh, USA