Abstract
Realistic virtual worlds are used in video games, in virtual reality, and for remote meetings. In many cases, these environments include representations of other humans, either as stand-ins for real humans (avatars) or as artificial entities (agents). The presence and individual identity of such virtual characters are usually encoded through visual features, such as visibility at a particular location and a distinctive visual appearance. For people with visual impairments (VI), this creates a barrier to detecting and identifying co-present characters and to interacting with them. To improve the inclusiveness of such social virtual environments, we investigate which cues people with VI use to detect and recognize others and their activities in real-world settings. To this end, we conducted an online survey with fifteen participants (adults and children). Our findings indicate an increased reliance on multimodal information: vision for silhouette recognition; audio for recognition through walking pace, white-cane sounds, jewelry, breathing, voice, and keyboard typing; smell for fragrance, food odors, and airflow; touch for hair length, body size, the way someone guides or holds one's hand and arm, and the reactions of a guide dog. Environmental and social cues indicate whether somebody is present, e.g., a light turned on in a room or somebody answering a question. Many of these cues can already be implemented in virtual environments with avatars; we summarize them in a set of guidelines.
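Several of the audio cues listed above translate directly to current platforms. As a minimal sketch, assuming a browser-based virtual environment and the standard Web Audio API, the TypeScript below attaches a spatialized, looping footstep sound to an avatar so that its presence, position, and pace become audible without any visual cue; the `Avatar` type and `attachFootstepCue` function are illustrative names, not from the paper.

```ts
// Illustrative sketch: an audible presence cue for an avatar,
// using HRTF spatialization from the Web Audio API.

interface Avatar {
  x: number; // avatar position in the virtual scene (meters)
  y: number;
  z: number;
}

async function attachFootstepCue(
  ctx: AudioContext,
  avatar: Avatar,
  footstepUrl: string // hypothetical URL of a footstep recording
): Promise<PannerNode> {
  // Fetch and decode the footstep recording.
  const response = await fetch(footstepUrl);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  // Loop the footsteps while the avatar is moving.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;

  // HRTF panning makes the sound localizable in 3D, so the avatar's
  // position (and, via the recording, its pace) becomes an audible cue.
  const panner = ctx.createPanner();
  panner.panningModel = "HRTF";
  panner.distanceModel = "inverse";
  panner.positionX.value = avatar.x;
  panner.positionY.value = avatar.y;
  panner.positionZ.value = avatar.z;

  source.connect(panner).connect(ctx.destination);
  source.start();
  return panner; // caller updates panner.position* as the avatar moves
}
```

Updating `panner.positionX/Y/Z` on every movement of the avatar keeps the cue localizable, analogous to how footsteps reveal a person's trajectory in the real world.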
Keywords
- Virtual reality
- Accessible avatars and agents
- Virtual environment
- Blindness
- Low vision
Acknowledgments
We thank the participants and the institutions that distributed our survey, especially IRSA and Ocens. This work was supported by the European Union's Horizon 2020 Program under ERCEA grant no. 683008 AMPLIFY.