
Guidelines for Inclusive Avatars and Agents: How Persons with Visual Impairments Detect and Recognize Others and Their Activities

Part of the Lecture Notes in Computer Science book series (LNISA, volume 12376)


Realistic virtual worlds are used in video games, in virtual reality, and for remote meetings. In many cases, these environments include representations of other humans, either as stand-ins for real humans (avatars) or as artificial entities (agents). The presence and individual identity of such virtual characters are usually encoded through visual features, such as visibility at certain locations and appearance in terms of looks. For people with visual impairments (VI), this creates a barrier to detecting and identifying co-present characters and interacting with them. To improve the inclusiveness of such social virtual environments, we investigate which cues people with VI use to detect and recognize others and their activities in real-world settings. To this end, we conducted an online survey with fifteen participants (adults and children). Our findings indicate an increased reliance on multimodal information: vision for silhouette recognition; audio for recognition through pace, white cane, jewelry, breathing, voice, and keyboard typing; smell for fragrance, food odors, and airflow; tactile information for hair length, body size, the way of guiding or of holding the hand and arm, and the reactions of a guide dog. Environmental and social cues indicate whether somebody is present, e.g., a light turned on in a room or somebody answering a question. Many of these cues can already be implemented in virtual environments with avatars, and we summarize them in a set of guidelines.


  • Virtual reality
  • Accessible avatars and agents
  • Virtual environment
  • Blindness
  • Low vision
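The abstract notes that many of the reported non-visual cues can already be implemented for avatars. As a purely illustrative sketch (not part of the paper), the following Python snippet models a character's cue set per sensory channel and filters it by the senses available to a user; all names (`AvatarCueProfile`, `detectable_cues`) and example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AvatarCueProfile:
    """Hypothetical multimodal cue set for one virtual character,
    grouped by the channels reported in the survey."""
    audio: list = field(default_factory=list)          # e.g. footstep pace, voice, cane taps
    olfactory: list = field(default_factory=list)      # e.g. fragrance, food smell, airflow
    tactile: list = field(default_factory=list)        # e.g. hair length, guiding grip
    environmental: list = field(default_factory=list)  # e.g. a light turned on, answering a question

def detectable_cues(profile: AvatarCueProfile, senses: set) -> list:
    """Return the cues a user could perceive given their available senses."""
    channels = {
        "hearing": profile.audio,
        "smell": profile.olfactory,
        "touch": profile.tactile,
        "context": profile.environmental,
    }
    return [cue for sense, cues in channels.items() if sense in senses for cue in cues]

alice = AvatarCueProfile(
    audio=["footsteps: brisk pace", "voice: alto"],
    olfactory=["citrus fragrance"],
    tactile=["shoulder-height for guiding"],
    environmental=["office light switched on"],
)
print(detectable_cues(alice, {"hearing", "smell"}))
# ['footsteps: brisk pace', 'voice: alto', 'citrus fragrance']
```

A design like this lets a rendering engine route each cue to the matching output device (spatial audio, haptics, scent display) instead of relying on visual appearance alone.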





We thank the participants and the institutions that distributed our survey, especially IRSA and Ocens. This work was supported by the European Union's Horizon 2020 Programme under ERCEA grant no. 683008 AMPLIFY.

Author information



Corresponding author

Correspondence to Lauren Thevin.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Thevin, L., Machulla, T. (2020). Guidelines for Inclusive Avatars and Agents: How Persons with Visual Impairments Detect and Recognize Others and Their Activities. In: Miesenberger, K., Manduchi, R., Covarrubias Rodriguez, M., Peňáz, P. (eds) Computers Helping People with Special Needs. ICCHP 2020. Lecture Notes in Computer Science, vol. 12376. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58795-6

  • Online ISBN: 978-3-030-58796-3

  • eBook Packages: Computer Science (R0)