3D Interaction Accessible to Visually Impaired Users: A Systematic Review
There is currently a large number of visually impaired people in Brazil and worldwide. Like any other citizens, they have rights, among them the right to education and to other services that hasten their social integration. With the advent of technology, three-dimensional virtual environments are increasingly being used in various areas. Often, however, these environments are not accessible to visually impaired users, creating a digital divide. In this context, a review of three-dimensional interactions accessible to visually impaired people may facilitate the work of researchers and developers building such accessible applications. This paper presents the results of such a systematic literature review.
Keywords: 3D interaction · Visual impairment · Virtual environments
According to IBGE, there is a large number of visually impaired people in Brazil, and in 2010 the World Health Organization estimated that, worldwide, there are 285 million people with severe visual impairment, of whom 39 million are completely blind.
The UN Declaration on the Rights of Disabled Persons states, among other rights:
Disabled persons have the inherent right to respect for their human dignity. They have the same fundamental rights as their fellow-citizens, which implies first and foremost the right to enjoy a decent life, as normal and full as possible;
Measures designed to enable them to become as self-reliant as possible;
The right to education and other services which will enable them to develop their capabilities and skills to the maximum and will hasten the processes of their social integration or reintegration.
With technological advances, new devices for three-dimensional (3D) interaction are being created or becoming more available and less costly, contributing to the popularization of virtual and augmented environments. Many of these environments, however, are not accessible to visually impaired users, creating a digital barrier and excluding these users from certain activities.
In this context, a review of interactions in three dimensions accessible to visually impaired people may facilitate the work of researchers and developers building such accessible applications. The objective of this paper is to present a systematic review, based on the method proposed by Kitchenham et al., identifying 3D interaction techniques accessible to visually impaired users as well as the input and output devices and senses explored in these techniques.
Before applying the method proposed by Kitchenham et al., we conducted an exploratory review of the related literature to identify the most frequent terms and keywords used in this context. We then created a review protocol with the following research questions:
What are the existing techniques and applications of 3D interaction accessible to visually disabled users?
What are the input and output devices used in these techniques?
How is feedback given to the user in these techniques and which senses does it explore?
The search was conducted in three databases relevant to the area: ACM Digital Library (http://dl.acm.org); IEEE Xplore (http://ieeexplore.ieee.org) and Springer (http://link.springer.com), using the following search string (adapted as needed to each engine): ((“Interact 3D” OR “augmented reality” OR “Ambient Intelligence” OR “virtual reality”) AND (“blind user” OR “visually impaired” OR “blind people”)).
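As an illustration, the string above can be assembled programmatically from its two term groups, which makes adapting it to each engine's syntax easier. The following sketch is only illustrative (the function name and quoting style are our own); it reproduces the protocol's query:

```python
# Sketch: composing the review's boolean search string from its two
# term groups. Each group is OR-joined, and the groups are AND-combined.

CONTEXT_TERMS = ["Interact 3D", "augmented reality",
                 "Ambient Intelligence", "virtual reality"]
USER_TERMS = ["blind user", "visually impaired", "blind people"]

def build_query(context_terms, user_terms):
    """Join each group with OR, then combine both groups with AND."""
    group = lambda terms: "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    return f"({group(context_terms)} AND {group(user_terms)})"

print(build_query(CONTEXT_TERMS, USER_TERMS))
```

Swapping the quoting or the boolean keywords in `group` is then enough to target an engine with a different query syntax.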
The inclusion criteria were:
Full text available in English in the selected databases;
Conducts and discusses some sort of experiment, with either visually impaired participants or somehow simulating such impairment.
The exclusion criteria were papers that:
Only discuss 3D interaction techniques not accessible to visually impaired users;
Only discuss 2D interaction techniques, even if they are accessible;
Discuss techniques that are accessible only to users with other types of disability, but not to visual disability.
One example of a paper discarded under these criteria discusses 3D interaction techniques accessible to visually disabled users, but reports no experiments exploring them with such users or simulating their impairment.
All papers returned by the search strings first had their title and abstract read in a first pass to verify whether they fit the inclusion and exclusion criteria. In a second pass, the remaining papers were read until it was clear whether or not they matched the criteria for inclusion. Finally, the selected papers were read in their entirety and the relevant information was extracted and tabulated. Mendeley Desktop 1.12.4 was used to help organize the papers and references.
The information extracted from each selected paper was: bibliographic information, filename, country where the research was conducted, year of publication, user senses explored, application, input and output devices, form of feedback, and a summary of its contents relevant to this review.
Navigation: aiding visually impaired users in navigating indoor, outdoor or virtual environments was the main concern of 20 out of the 35 papers included in this review [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. One of these papers actually proposes two different techniques, for a total of 21 distinct ones. Jain proposes a system to aid indoor navigation with two main components: modules that mark walls and a component that represents the user, consisting of a smartphone and a device attached to the waist. Vibration informs users whether or not they are following the correct path, and further information is supplied through sound and the smartphone's text-to-speech functionality. The waist component communicates with the phone via Bluetooth and with the wall modules via infrared.
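The paper does not spell out the guidance logic, but the core of such vibration feedback can be sketched as a comparison between the user's current heading and the bearing to the next path marker. Everything below (function name, tolerance, cue labels) is a hypothetical illustration, not Jain's implementation:

```python
# Hypothetical sketch of path-correctness vibration guidance: compare
# the user's heading with the bearing to the next waypoint and choose
# a vibration cue. The 20-degree tolerance is an assumed parameter.

def heading_cue(user_heading_deg, bearing_to_waypoint_deg, tolerance_deg=20.0):
    """Return 'on_path', 'veer_left' or 'veer_right' from the signed
    angular deviation, normalized into the range (-180, 180]."""
    deviation = (bearing_to_waypoint_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(deviation) <= tolerance_deg:
        return "on_path"   # e.g. a short confirmation pulse
    return "veer_right" if deviation > 0 else "veer_left"
```

The normalization step matters: it keeps the comparison correct when the path crosses the 0/360-degree boundary (e.g. a heading of 350 degrees versus a bearing of 5 degrees is only a 15-degree deviation).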
Gallo et al. describe an adaptation of the white canes used by visually disabled people, adding sensors (such as ultrasonic ones) to extend their exploration range and providing tactile feedback through vibration motors. An advantage of this system is that the way the cane is used does not change, so users gain exploration range without needing to relearn the skill.
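The essence of such a cane augmentation is a mapping from a range reading to a vibration intensity. The sketch below is an assumption-laden illustration of that idea, not Gallo et al.'s design: the range limits and the linear mapping are ours.

```python
# Hypothetical feedback mapping for a sensor-augmented white cane:
# an ultrasonic reading beyond the cane's physical reach becomes a
# vibration intensity that grows as the obstacle gets closer.

def vibration_intensity(distance_m, cane_reach_m=1.2, sensor_range_m=4.0):
    """Return a 0.0-1.0 motor intensity: 0 beyond the sensor's range,
    1 at or inside the physical cane's own reach, linear in between."""
    if distance_m >= sensor_range_m:
        return 0.0
    if distance_m <= cane_reach_m:
        return 1.0
    return (sensor_range_m - distance_m) / (sensor_range_m - cane_reach_m)
```

A nonlinear ramp (e.g. quadratic near the cane's reach) could make close obstacles more salient; the linear version is just the simplest choice.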
Shangguan et al. present an example of an outdoor navigation system that helps visually impaired users cross streets safely at crosswalks, using a smartphone's camera, orientation sensors and microphone as input devices and voice audio messages as feedback.
Finding objects: out of the selected papers, 4 discuss some sort of solution to aid visually impaired users in finding objects around them or along their way [27, 28, 29, 30]. Tang and Li, for instance, propose using a depth camera to locate objects and spatial audio as feedback.
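The core conversion in such a system takes an object's position in the depth image and renders it as directional sound. The sketch below is a hypothetical illustration of that idea (camera field of view, the constant-power pan and the distance-to-loudness mapping are all our assumptions, not Tang and Li's implementation):

```python
# Hypothetical depth-camera-to-stereo-audio mapping: the object's
# horizontal pixel position becomes an azimuth, rendered as a
# constant-power stereo pan; depth scales the loudness.
import math

def object_to_stereo(pixel_x, image_width, depth_m,
                     horizontal_fov_deg=60.0, max_depth_m=5.0):
    """Return (left_gain, right_gain) for a detected object."""
    # Azimuth: -fov/2 at the left image edge, +fov/2 at the right edge.
    azimuth = (pixel_x / image_width - 0.5) * horizontal_fov_deg
    # Pan position in [0, 1]: 0 = fully left, 1 = fully right.
    pan = (azimuth / (horizontal_fov_deg / 2) + 1.0) / 2.0
    # Loudness falls off linearly with distance, silent past max range.
    loudness = max(0.0, 1.0 - depth_m / max_depth_m)
    return (loudness * math.cos(pan * math.pi / 2),
            loudness * math.sin(pan * math.pi / 2))
```

A full spatial-audio renderer would use head-related transfer functions rather than a plain pan, but the pan already conveys the left/right direction that a finding-objects task needs.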
Object recognition: 3 papers address the problem of object recognition by visually impaired users [31, 32, 33]. Al-Khalifa and Al-Khalifa, for instance, identify objects using a smartphone camera and computer vision, adding an augmented sound layer over physical objects of interest. Pointing the smartphone at an object submits a query to a server requesting information about it, and the returned data is communicated to the user via audio, telling them what the object is along with any other relevant characteristics.
Object manipulation: we found 2 works related to the manipulation of virtual objects by the visually impaired [28, 34]. Niinimäki and Tahiroglu present a technique using Microsoft's Kinect as a sensor and providing both audio and haptic feedback through an active glove. Every object is surrounded by an exterior sphere and contains an interior cube. When users touch the sphere they begin receiving feedback, which increases in intensity as they approach the cube, until they reach it. Once they are "touching" the object with both hands, Kinect tracks their positions, which are used to manipulate the virtual object in space.
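The sphere-and-cube scheme amounts to a distance-based intensity function with two thresholds. The sketch below illustrates it under stated assumptions (the radii and the linear ramp are ours; real hand distances would come from Kinect tracking):

```python
# Hypothetical sketch of sphere/cube proximity feedback: silence
# outside the outer sphere, a ramp inside it, full intensity once
# the hand reaches the inner cube (approximated here by a radius).
import math

def feedback_intensity(hand_pos, object_center,
                       sphere_radius=0.5, cube_half=0.1):
    """Return a 0.0-1.0 feedback intensity for one tracked hand."""
    d = math.dist(hand_pos, object_center)
    if d >= sphere_radius:
        return 0.0          # hand has not touched the outer sphere yet
    if d <= cube_half:
        return 1.0          # hand has reached the inner cube: "touching"
    # Ramp up linearly as the hand moves from the sphere toward the cube.
    return (sphere_radius - d) / (sphere_radius - cube_half)
```

Running this per hand and requiring both intensities to reach 1.0 would reproduce the paper's "touching with both hands" condition for entering manipulation.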
Object exploration and analysis: 2 of the selected papers [35, 36] fit this classification. Ritterbusch et al. attempt to reduce the obstacles a visually impaired user faces in exploring certain objects, such as a map. They propose combining the feedback and input of a haptic device with 3D audio and show applications in three areas: architecture, mathematics and medicine. Buonamici et al. present a viability study for a novel system that maps a work of art into a virtual bas-relief with an audio description. Users' hand positions are tracked with Kinect while they explore this representation, so the system can tell which part of the audio description to play. Kinect was also used as a 3D scanner to build the objects' virtual representations.
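Selecting "which part of the audio description to play" reduces to testing the tracked hand position against labeled regions of the artwork. The sketch below is a hypothetical illustration of that mapping; the region names and bounds are invented for the example, not taken from Buonamici et al.:

```python
# Hypothetical region-to-audio mapping: the tracked hand position is
# tested against labeled rectangles of the (normalized) artwork plane,
# and the matching label selects the audio description segment.

def region_for_hand(hand_xy, regions):
    """regions: list of (label, (xmin, ymin, xmax, ymax)).
    Return the label whose box contains the hand, or None."""
    x, y = hand_xy
    for label, (xmin, ymin, xmax, ymax) in regions:
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return label
    return None

# Example regions in normalized [0, 1] artwork coordinates (invented).
ARTWORK_REGIONS = [("face", (0.4, 0.6, 0.6, 0.9)),
                   ("hands", (0.1, 0.2, 0.3, 0.5))]
```

In a real system the regions would follow the bas-relief's geometry rather than axis-aligned boxes, but the lookup structure is the same.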
Feeling object texture: this was the goal of 2 works included in this review. Ando et al. propose a device placed under the nail of one finger that detects collision with virtual objects (or augmented real objects, such as a line drawing) and offers vibration feedback. Bau and Poupyrev explore reverse electrovibration, applying weak electric signals to the user as feedback to aid in perceiving the texture of real objects, though these objects must be prepared beforehand.
Entertainment: Baldan et al. developed a virtual table tennis game accessible to visually disabled players that uses a smartphone as the paddle. While we are aware of a few other 3D games accessible to visually impaired users (including at least one first-person shooter), our literature search did not return any of them.
Braille reading: only Amemiya described work augmenting Braille text to aid in this task, using a device called Finger-Braille that fits the fingers similarly to a glove and can aid both in Braille reading and in navigating an environment augmented with RFID tags and a camera.
Spatial Perception: Khambadkar and Folmer use gesture-based interaction to aid visually impaired users in spatial perception, with a Kinect sensor attached to the user and synthesized voice audio as feedback. The system, called GIST, has two modes of operation: mapping and gesture. In mapping mode it builds a map of the environment using color and depth information from Kinect; gesture mode is then activated and the user is informed of it. Gestures are used for different tasks, such as telling whether another person is present in the environment or identifying how far away objects are and what color they are.
Others: Hermann et al. propose a system that helps identify head gestures directed at the user, such as shaking the head to mean "no". Its main contributions are two novel ways to represent these head gestures using sound: continuous sonification and event-based sonification.
Figure 3a shows which input devices were used in the selected papers and how often, with ubiquitous smartphones being used most often for both input and output, followed by other cameras and Kinect sensors. While haptic devices and active gloves are very useful in many of these applications, their relatively high cost and low availability probably explain why they are not explored more often. Figure 3b shows the same for output devices: mono audio was used most frequently by far, often as voice-based feedback with either synthesized or prerecorded voices, but other sound signals were frequent as well, and stereo audio was also used often, particularly when exploring 3D sound. Haptic devices were used more frequently for feedback than for input. Figure 3c shows the forms of feedback independent of device, with audio by far the most frequently used, and Fig. 3d shows that hearing, naturally, followed by proprioception, were the senses most often explored in these accessible techniques.
We presented the results of a systematic literature review on 3D interaction accessible to visually disabled persons. Most of the research effort we found was aimed at the task of navigation, particularly at augmenting real environments with information and helping users move through them. While we did find work aiding these users in exploring purely virtual environments, there is a clear deficit of research in this area. We hope this review helps, in some small measure, to foster more research in the area, showing possible applications, research gaps to be filled, and which senses and devices are most frequently and successfully explored, for those who might want to get started in this sort of research.
- 1. IBGE: Diretoria de Pesquisas, Departamento de População e Indicadores Sociais. Rio de Janeiro (2010)
- 2. WHO, World Health Organization: Global data on visual impairments 2010. Geneva, 17 p. Available at: <http://www.who.int/entity/blindness/GLOBALDATAFINALforweb.pdf>. Accessed: 21 Nov 2014 (2010)
- 3. UN: Declaration on the Rights of Disabled Persons. In: General Assembly of the United Nations, 9 Dec (1975)
- 4. White, G., Fitzpatrick, G., McAllister, G.: Toward accessible 3D virtual environments for the blind and visually impaired. In: Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, DIMEA 2008, vol. 349, pp. 134–141. ACM, New York (2008)
- 7. Jain, D.: Path-guided indoor navigation for the visually impaired using minimal building retrofitting. In: Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 225–232 (2014)
- 8. Gallo, S., Chapuis, D., Santos-Carreras, L., Kim, Y., Retornaz, P., Bleuler, H., Gassert, R.: Augmented white cane with multimodal haptic feedback. In: 2010 3rd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, pp. 149–155 (2010)
- 9. Shangguan, L., Yang, Z., Zhou, Z.: CrossNavi: enabling real-time crossroad navigation for the blind with commodity phones. In: UbiComp 2014 - Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014)
- 10. Amemiya, T., Yamashita, J., Hirota, K., Hirose, M.: Virtual leading blocks for the deaf-blind: a real-time way-finder by verbal-nonverbal hybrid interface and high-density RFID tag space. In: IEEE Virtual Reality, pp. 165–287 (2004)
- 11. Berretta, L., Soares, F., Ferreira, D.J., Nascimento, H.A.D., Cardoso, A., Lamounier, E.: Virtual environment manipulated by recognition of poses using Kinect: a study to help blind locomotion. In: 2013 XV Symposium on Virtual and Augmented Reality (SVR), pp. 10–16 (2013)
- 12. Chuang, C., Hsieh, J., Fan, K.: A smart handheld device navigation system based on detecting visual code. In: 2013 International Conference on Machine Learning and Cybernetics, vol. 1, pp. 1407–1412 (2013)
- 13. Fallah, N., Apostolopoulos, I., Bekris, K., Folmer, E.: The user as a sensor. In: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI 2012, p. 425 (2012)
- 14. Heller, F., Borchers, J.: AudioTorch: using a smartphone as directional microphone in virtual audio spaces. In: Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, pp. 483–488 (2014)
- 15. Jain, D.: Pilot evaluation of a path-guided indoor navigation system for visually impaired in a public museum. In: Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 273–274 (2014)
- 16. Joseph, S.L., Zhang, X., Dryanovski, I., Xiao, J., Yi, C., Tian, Y.: Semantic indoor navigation with a blind-user oriented augmented reality. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 3585–3591 (2013)
- 17. Magnusson, C., Molina, M., Grohn, K.R., Szymczak, D.: Pointing for non-visual orientation and navigation. In: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries - NordiCHI 2010, p. 735 (2010)
- 18. Magnusson, C., Waern, A., Grohn, K.R., Bjernryd, A., Bernhardsson, H., Jakobsson, A., Salo, J., Wallon, M., Hedvall, P.O.: Navigating the world and learning to like it. In: Proceedings of the 13th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI 2011, p. 285 (2011)
- 19. Paneels, S.A., Olmos, A., Blum, J.R., Cooperstock, J.R.: Listen to it yourself!: evaluating usability of what's around me? for the blind. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2107–2116 (2013)
- 20. Raposo, N., Rios, H., Lima, D., Gadelha, B., Castro, T.: An application of mobility aids for the visually impaired. In: Proceedings of the 13th International Conference on Mobile and Ubiquitous Multimedia - MUM 2014, pp. 180–189 (2014)
- 21. Ribeiro, F., Florencio, D., Chou, P.A., Zhang, Z.: Auditory augmented reality: object sonification for the visually impaired. In: 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), pp. 319–324 (2012)
- 22. Schneider, J., Strothotte, T.: Constructive exploration of spatial information by blind users. In: Proceedings of the Fourth International ACM Conference on Assistive Technologies - Assets 2000 (2000)
- 23. Soukaras, D.P., Chaniotis, I.K., Karagiannis, I.G., Stampologlou, I.S., Triantafyllou, C.A., Tselikas, N.D., Foukarakis, I.E., Boucouvalas, A.C.: Augmented audio reality mobile application specially designed for visually impaired people. In: 2012 16th Panhellenic Conference on Informatics, pp. 13–18 (2012)
- 24. Zollner, M., Huber, S., Jetter, H.C., Reiterer, H.: NAVI: a proof-of-concept of a mobile navigational aid for visually impaired based on the Microsoft Kinect. In: Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction - Volume Part IV, pp. 584–587 (2011)
- 25. Rodriguez-Sanchez, M.C., Moreno-Alvarez, M.A., Martin, E., Borromeo, S., Hernandez-Tamames, J.A.: Accessible smartphones for blind users: a case study for a wayfinding system. Expert Systems with Applications (2014)
- 27. Tang, T.J.J., Li, W.H.: An assistive EyeWear prototype that interactively converts 3D object locations into spatial audio. In: Proceedings of the 2014 ACM International Symposium on Wearable Computers - ISWC 2014, pp. 119–126 (2014)
- 28. Väänänen-Vainio-Mattila, K., Suhonen, K., Laaksonen, J., Kildal, J., Tahiroglu, K.: User experience and usage scenarios of audio-tactile interaction with virtual objects in a physical environment. In: Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces - DPPI 2013, p. 67 (2013)
- 29. Deville, B., Bologna, G., Pun, T.: Detecting objects and obstacles for visually impaired individuals using visual saliency. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility - ASSETS 2010, p. 253 (2010)
- 30. Dramas, F., Oriola, B., Katz, B.G., Thorpe, S.J., Jouffrais, C.: Designing an assistive device for the blind based on object localization and augmented auditory reality. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility - Assets 2008, p. 263 (2008)
- 32. Nanayakkara, S., Shilkrot, R.: EyeRing: a finger-worn input device for seamless interactions with our surroundings. In: AH 2013 - Proceedings of the 4th Augmented Human International Conference (2013)
- 33. Nanayakkara, S., Shilkrot, R., Maes, P.: EyeRing: a finger-worn assistant. In: CHI 2012 Extended Abstracts on Human Factors in Computing Systems, pp. 1961–1966 (2012)
- 34. Niinimäki, M., Tahiroglu, K.: AHNE: a novel interface for spatial interaction. In: CHI 2012 Extended Abstracts on Human Factors in Computing Systems, pp. 1031–1034 (2012)
- 37. Baldan, S., de Götzen, A., Serafin, S.: Mobile rhythmic interaction in a sonic tennis game. In: CHI 2013 Extended Abstracts on Human Factors in Computing Systems - CHI EA 2013, p. 2903 (2013)
- 38. Ando, H., Miki, T., Inami, M., Maeda, T.: SmartFinger: nail-mounted tactile display. In: ACM SIGGRAPH 2002 Conference Abstracts and Applications - SIGGRAPH 2002, p. 78 (2002)
- 39. Bau, O., Poupyrev, I., Le Goc, M., Galliot, L., Glisson, M.: REVEL: tactile feedback technology for augmented reality. In: ACM SIGGRAPH 2012 Emerging Technologies (2012)
- 40. Khambadkar, V., Folmer, E.: GIST: a gestural interface for remote nonvisual spatial perception. In: Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology - UIST 2013, pp. 301–310 (2013)
- 41. Hermann, T., Neumann, A., Zehe, S.: Head gesture sonification for supporting social interaction. In: Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound - AM 2012, pp. 82–89 (2012)