
Is It in Your Eyes? Explorations in Using Gaze Cues for Remote Collaboration

  • Mark Billinghurst
  • Kunal Gupta
  • Katsutoshi Masai
  • Youngho Lee
  • Gun Lee
  • Kai Kunze
  • Maki Sugimoto
Chapter

Abstract

Previous research has shown that head-mounted displays (HMDs) and head-worn cameras (HWCs) are useful for remote collaboration. These systems can be especially helpful for remote assistance with physical tasks, where a remote expert can see the local user's workspace and provide feedback. However, an HWC often has a wide field of view, so it may be difficult for the remote expert to tell exactly where the local user is looking. In this chapter we explore how head-mounted eye-tracking can be used to convey gaze cues to a remote collaborator. We describe two prototypes we developed that integrate an eye-tracker with an HWC and a see-through HMD, and report results from user studies conducted with these systems. Overall, we found that showing gaze cues on the shared video appears to be better than providing the video on its own, and that combining gaze and pointing cues was the most effective interface for remote collaboration among the conditions tested. We also discuss the limitations of this work and present directions for future research.
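To make the idea concrete, the sketch below shows one way a local user's gaze point could be overlaid on the head-worn camera video that is streamed to a remote expert. This is a minimal illustration rather than the prototypes described in the chapter: it assumes the eye tracker reports gaze as normalized coordinates with the origin at the bottom-left (as Pupil Labs style trackers do), and it uses OpenCV in Python purely for video capture and drawing.

    # Minimal sketch (not the authors' implementation): overlay a gaze cue on the
    # head-worn camera video shared with a remote expert.
    # Assumes gaze is given as normalized (x, y) coordinates in [0, 1] with the
    # origin at the bottom-left of the camera image.

    import cv2


    def draw_gaze_cue(frame, gaze_norm, radius=12):
        """Draw a gaze cursor on a BGR video frame.

        gaze_norm: (x, y) in normalized camera coordinates, origin at bottom-left.
        """
        h, w = frame.shape[:2]
        x = int(gaze_norm[0] * w)
        y = int((1.0 - gaze_norm[1]) * h)  # flip y: image origin is top-left
        cv2.circle(frame, (x, y), radius, (0, 255, 255), 2)  # outer ring
        cv2.circle(frame, (x, y), 2, (0, 0, 255), -1)        # centre dot
        return frame


    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)  # stand-in for the head-worn camera stream
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Placeholder gaze sample; a real system would read this from the
            # head-worn eye tracker at each frame.
            gaze = (0.5, 0.5)
            cv2.imshow("Shared view with gaze cue", draw_gaze_cue(frame, gaze))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()

In an actual remote-assistance system the annotated frames would be transmitted to the remote expert's display, and pointing cues from the expert could be drawn onto the same shared video with the same mechanism.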

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Mark Billinghurst (1)
  • Kunal Gupta (2)
  • Katsutoshi Masai (3)
  • Youngho Lee (1, 4)
  • Gun Lee (2)
  • Kai Kunze (3)
  • Maki Sugimoto (3)
  1. Empathic Computing Laboratory, University of South Australia, Adelaide, Australia
  2. The HIT Lab NZ, University of Canterbury, Christchurch, New Zealand
  3. Keio University, Minato, Tokyo, Japan
  4. Mokpo National University, Mokpo, South Korea
