Social Impact of Enhanced Gaze Presentation Using Head Mounted Projection

  • David M. Krum
  • Sin-Hwa Kang
  • Thai Phan
  • Lauren Cairco Dukes
  • Mark Bolas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10291)


Abstract

Projected displays can present life-sized imagery of a virtual human character that can be seen by multiple observers. However, a typical projected display can render that virtual human from only a single viewpoint, regardless of whether head tracking is employed, so the character is rendered from an incorrect perspective for most individuals in a group of observers. The resulting perceptual miscues, such as the “Mona Lisa” effect, can cause the virtual human to appear to be simultaneously gazing and pointing at every observer in the room, regardless of their locations. This may be detrimental to training scenarios in which all trainees must accurately assess where the virtual human is looking or aiming a weapon. In this paper, we discuss our investigations into the presentation of eye gaze using REFLCT, a previously introduced head mounted projective display. REFLCT uses head tracked, head mounted projectors and retroreflective screens to present personalized, perspective correct imagery to multiple users, without the occlusion of a traditional head mounted display. We examined how head mounted projection for enhanced presentation of eye gaze might facilitate or otherwise affect social interactions during a multi-person guessing game of “Twenty Questions.”
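The perspective problem described above can be made concrete. A head-tracked display such as REFLCT renders each user's view with an off-axis projection computed from that user's tracked head position, whereas a conventional projected display effectively shows every observer the frustum of a single, fixed viewpoint. The following sketch (the helper `off_axis_frustum` is hypothetical, not from the paper) computes the asymmetric near-plane frustum extents for a tracked eye position relative to a flat screen, using similar triangles:

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric (off-axis) frustum extents at the near plane for a
    tracked eye at (x, y, z), z > 0, viewing a screen of size
    screen_w x screen_h centered at the origin in the z = 0 plane."""
    x, y, z = eye
    # Similar triangles: project the screen edges onto the near plane.
    scale = near / z
    left = (-screen_w / 2.0 - x) * scale
    right = (screen_w / 2.0 - x) * scale
    bottom = (-screen_h / 2.0 - y) * scale
    top = (screen_h / 2.0 - y) * scale
    return left, right, bottom, top

# A centered viewer gets a symmetric frustum ...
centered = off_axis_frustum((0.0, 0.0, 2.0), 2.0, 1.5, 0.1)
# ... while an off-center viewer needs an asymmetric one. Showing the
# centered rendering to every observer, wherever they stand, is what
# produces the "Mona Lisa" effect described above.
off_center = off_axis_frustum((0.5, 0.0, 2.0), 2.0, 1.5, 0.1)
```

In a per-user system, these extents would feed a standard asymmetric projection matrix (e.g., OpenGL's `glFrustum`) recomputed every frame from head-tracking data.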


Keywords: Enhanced gaze · Head mounted projection



Acknowledgments

The authors would like to thank Joshua Newth and Logan Olson for their early contributions to this project. This work was sponsored, in whole or in part, by the U.S. Army Research Laboratory (ARL) under contract number W911F-14-D-0005. The statements, opinions, and content included do not necessarily reflect the position or policy of the United States Government, and no official endorsement should be inferred.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • David M. Krum (1)
  • Sin-Hwa Kang (1, email author)
  • Thai Phan (1)
  • Lauren Cairco Dukes (2)
  • Mark Bolas (1)
  1. USC Institute for Creative Technologies, Playa Vista, USA
  2. Clemson University, Clemson, USA
