
GazeLens: Guiding Attention to Improve Gaze Interpretation in Hub-Satellite Collaboration

  • Khanh-Duy Le
  • Ignacio Avellino
  • Cédric Fleury
  • Morten Fjeld
  • Andreas Kunz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11747)

Abstract

In hub-satellite collaboration using video, interpreting gaze direction is critical for communication between hub coworkers sitting around a table and their remote satellite colleague. However, 2D video distorts images and makes this interpretation inaccurate. We present GazeLens, a video conferencing system that improves hub coworkers’ ability to interpret the satellite worker’s gaze. A 360° camera captures the hub coworkers and a ceiling camera captures artifacts on the hub table. The system combines these two video feeds in an interface. Lens widgets strategically guide the satellite worker’s attention toward specific areas of her/his screen, allowing hub coworkers to clearly interpret her/his gaze direction. Our evaluation shows that GazeLens (1) increases hub coworkers’ overall gaze interpretation accuracy by 25.8% compared to a conventional video conferencing system, (2) is particularly accurate for gazes toward physical artifacts on the hub table, and (3) improves hub coworkers’ ability to distinguish between gazes toward people and artifacts. We discuss how screen space can be leveraged to improve gaze interpretation.

Keywords

Remote collaboration · Telepresence · Gaze · Lens widgets

Supplementary material

Supplementary material 1 (mp4 14940 KB)


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Khanh-Duy Le (1)
  • Ignacio Avellino (2)
  • Cédric Fleury (3)
  • Morten Fjeld (1)
  • Andreas Kunz (4)

  1. Chalmers University of Technology, Gothenburg, Sweden
  2. ISIR, CNRS, Sorbonne Université, Paris, France
  3. LRI, Univ. Paris-Sud, CNRS, Inria, Université Paris-Saclay, Paris, France
  4. ETH Zurich, Zurich, Switzerland
