3D Interaction Accessible to Visually Impaired Users: A Systematic Review

  • Erico de Souza Veriscimo
  • João Luiz Bernardes Jr.
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9738)

Abstract

There is currently a large number of visually impaired people in Brazil and worldwide. Like any other citizens, they have rights, including the right to education and to other services that hasten the process of social integration. With the advent of new technologies, three-dimensional virtual environments are increasingly being used in many areas. Often, however, these environments are not accessible to visually impaired users, creating a digital divide. In this context, a review of three-dimensional interaction techniques accessible to visually impaired people may facilitate the work of researchers and developers building such accessible applications. This paper presents the results of such a systematic literature review.

Keywords

3D interaction · Visual impairment · Virtual environments

1 Introduction

According to IBGE [1], there is a large number of visually impaired people in Brazil, and in 2010 the World Health Organization estimated that, worldwide, there were 285 million people with severe visual disability, 39 million of whom were completely blind [2].

Like any other citizen, those with visual impairment also have rights. The United Nations established in 1975 a declaration of rights specific to people with some form of disability [3] and these rights include:
  • The inherent right to respect for their human dignity. Disabled persons have the same fundamental rights as their fellow-citizens, which implies first and foremost the right to enjoy a decent life, as normal and full as possible;

  • Measures designed to enable them to become as self-reliant as possible;

  • Right to education and other services which will enable them to develop their capabilities and skills to the maximum and will hasten the processes of their social integration or reintegration.

With technological advances, new devices for three-dimensional (3D) interaction are being created or are becoming more available and less costly, contributing to the popularization of virtual and augmented environments. Many of these environments, however, are not accessible to visually impaired users, creating a digital barrier and excluding these users from certain activities [4].

In this context, a review of interactions in three dimensions accessible to visually impaired people may facilitate the work of researchers and developers to build such accessible applications. The objective of this paper is to present a Systematic Review based on the method proposed by Kitchenham et al. [5] to identify 3D interaction techniques accessible to visually impaired users and the input and output devices and senses explored in these techniques.

2 Methodology

Before applying the method proposed by Kitchenham et al. [5], we conducted an exploratory review of the related literature to identify the most frequent terms and keywords used in this context. We then created a review protocol with the following information:

Our research questions were:
  1. What are the existing techniques and applications of 3D interaction accessible to visually impaired users?

  2. What are the input and output devices used in these techniques?

  3. How is feedback given to the user in these techniques and which senses does it explore?

The search was conducted in three databases relevant to the area: ACM Digital Library (http://dl.acm.org); IEEE Xplore (http://ieeexplore.ieee.org) and Springer (http://link.springer.com), using the following search string (adapted as needed to each engine): ((“Interact 3D” OR “augmented reality” OR “Ambient Intelligence” OR “virtual reality”) AND (“blind user” OR “visually impaired” OR “blind people”)).
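To make the query easier to adapt, the string can be assembled programmatically from its two term groups. The following is a minimal Python sketch of that assembly (illustrative only, not part of the original protocol):

# Illustrative sketch only: assembles the generic search string used in this
# review from its two term groups, so it can be adapted to each engine's syntax.

INTERACTION_TERMS = ["Interact 3D", "augmented reality",
                     "Ambient Intelligence", "virtual reality"]
USER_TERMS = ["blind user", "visually impaired", "blind people"]

def or_group(terms):
    """Quote each term and join the group with OR, wrapped in parentheses."""
    return "(" + " OR ".join('"{}"'.format(t) for t in terms) + ")"

def build_query():
    """Generic boolean query; each engine may require minor syntactic changes."""
    return "({} AND {})".format(or_group(INTERACTION_TERMS), or_group(USER_TERMS))

print(build_query())
# (("Interact 3D" OR "augmented reality" OR "Ambient Intelligence" OR
#   "virtual reality") AND ("blind user" OR "visually impaired" OR "blind people"))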

Papers returned by this string were then included in the review if they obeyed all of our inclusion criteria and none of the exclusion criteria. The inclusion criteria were:
  • Full text available in English in the selected databases;

  • Must conduct and discuss some sort of experiment with either visually impaired participants or somehow simulating such impairment.

And the exclusion criteria removed works that:
  • Only discuss 3D interaction techniques not accessible to visually impaired users.

  • Only discuss 2D interaction techniques, even if they are accessible.

  • Discuss techniques that are accessible only to users with other types of disability, but not to users with visual disability.

One example of a paper discarded due to these criteria is [6]: even though it discusses 3D interaction techniques accessible to visually impaired users, it reports no experiments in which those users (or users simulating that impairment) explored them.

All papers returned by the search strings first had their title and abstract read, in a first pass, to verify whether they fit the inclusion and exclusion criteria. In a second pass, each remaining paper was read until it was clear that it did not match the inclusion criteria or, otherwise, in its entirety. Finally, the selected papers were read in full and the relevant information was extracted from them and tabulated. Mendeley Desktop 1.12.4 was used to help organize the papers and references.

The information extracted from each selected paper was: bibliographic information, filename, country where the research was conducted, year of publication, user senses explored, application, input and output devices, form of feedback, and a summary of its contents relevant to this review.
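A minimal sketch of how one such tabulation record could be represented follows; the field names are ours, chosen to mirror the list above, and are not taken from the reviewed papers:

# Illustrative sketch: one possible record structure for the information
# extracted from each selected paper (field names are ours, not the authors').
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    bibliographic_info: str     # authors, title, venue
    filename: str               # local copy of the paper
    country: str                # where the research was conducted
    year: int                   # year of publication
    senses: List[str] = field(default_factory=list)          # e.g. ["hearing", "touch"]
    application: str = ""       # e.g. "navigation"
    input_devices: List[str] = field(default_factory=list)
    output_devices: List[str] = field(default_factory=list)
    feedback: List[str] = field(default_factory=list)        # e.g. ["vibration", "voice audio"]
    summary: str = ""           # short content summary relevant to the review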

3 Results

The initial search in the databases returned 330 unique references, with 210 from ACM, 23 from IEEE and 97 from Springer. The databases were searched in that order. 181 of that total were discarded in the first pass, applying the inclusion and exclusion criteria while reading only title and abstract. Out of the 71 remaining papers, 39 were discarded in the second pass and 35 were left (27 from ACM, 6 from IEEE and 2 from Springer), from which information was extracted. Figure 1a shows the distribution of papers over the years and Fig. 1b shows the distribution by country. No papers were found before the year 2000. Papers came from many different countries and from 27 different journals or conference proceedings, most contributing only one or two papers, with only the CHI and SIGACCESS proceedings contributing 3 papers each.
Fig. 1. Paper distribution

We classified the papers into 10 different types of application of accessible 3D interaction: navigation, finding objects, object recognition, object manipulation, object exploration and analysis, feeling the texture of objects, entertainment, Braille reading, spatial perception and other applications. Figure 2 shows the total number and percentage of papers that mentioned each of these applications. Navigation was the most frequent concern, mentioned in more than half the papers, particularly augmenting the real environment to aid visually impaired users in navigating inside it. Interacting with virtual objects (or augmenting real objects with information) was the second most frequent concern in this review, and we opted to subdivide this application further into more specific ways of interacting with these objects. Each application, along with a few representative papers, is summarized below.
Fig. 2. Total number and percentage of papers mentioning each application (Color figure online)

Navigation: aiding visually impaired users in navigating indoor, outdoor or virtual environments was the main concern and application of 20 out of the 35 papers included in this review [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]; [24] actually proposes two different techniques, for a total of 21 distinct ones. Jain [7], for instance, proposes a system to aid in indoor navigation with two main components: modules that mark walls and one that represents the user, consisting of a smartphone and a device attached to the waist. Vibration is used to inform users whether or not they are following the correct path, and information is also supplied through sound and the smartphone's text-to-speech functionality. The waist component communicates with the smartphone via Bluetooth and with the wall modules via infrared.

Gallo et al. [8] describe an adaptation of the white cane used by visually impaired people that extends its exploration range by adding sensors (such as ultrasonic sensors) and provides tactile feedback through vibration motors. An advantage of this system is that the way the cane is used does not change, so users gain exploration range without having to relearn this skill.
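The core idea of distance-dependent tactile feedback can be illustrated with a small sketch; the sensor range and thresholds below are assumptions of ours, not values reported by Gallo et al.:

# Illustrative sketch (not Gallo et al.'s implementation): mapping an ultrasonic
# range reading on an augmented cane to a vibration motor duty cycle, so that
# closer obstacles produce stronger tactile feedback.

MAX_RANGE_M = 3.0   # assumed sensor range; actual values depend on the hardware
MIN_RANGE_M = 0.3   # below this distance, vibrate at full strength

def vibration_duty_cycle(distance_m: float) -> float:
    """Return a motor duty cycle in [0, 1] from an obstacle distance in metres."""
    if distance_m >= MAX_RANGE_M:
        return 0.0                      # nothing within range: no feedback
    if distance_m <= MIN_RANGE_M:
        return 1.0                      # very close: full vibration
    # Linear ramp between the two thresholds (closer = stronger).
    return (MAX_RANGE_M - distance_m) / (MAX_RANGE_M - MIN_RANGE_M)

assert vibration_duty_cycle(3.5) == 0.0
assert vibration_duty_cycle(0.2) == 1.0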

Shangguan et al. [9] present an example of an outdoor navigation system that helps visually impaired users cross crosswalks safely, using a smartphone's camera, orientation sensors and microphone as input devices and voice audio messages as feedback.

Finding objects: out of the selected papers, 4 discuss some sort of solution to aid visually impaired users in finding objects around them or along their way [27, 28, 29, 30]. Tang and Li [27], for instance, propose the use of a depth camera to locate objects and spatial audio as feedback.

Object recognition: 3 papers attempt to aid visually impaired users with the problem of object recognition [31, 32, 33]. Al-Khalifa and Al-Khalifa [31], for instance, identify objects using a smartphone camera and computer vision and add an augmented sound layer over physical objects of interest. Pointing the smartphone at an object submits a query to a server requesting information about that object, and the returned data is communicated to the user through audio describing what the object is and any other relevant characteristics.

Object manipulation: we found 2 works related to the manipulation of virtual objects by visually impaired users [28, 34]. Niinimäki and Tahiroglu [34] present a technique using Microsoft's Kinect as a sensor and providing both audio and haptic feedback through an active glove. Every object is surrounded by an exterior sphere and contains an interior cube. When users touch this sphere they begin receiving feedback, which increases in intensity as they approach the cube, until they reach it. Once they are "touching" the object with both hands, the Kinect tracks their position, which is used to manipulate the virtual object in space.
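The proximity-based feedback described above can be sketched as follows; the radii are assumed values and, for simplicity, the inner cube is approximated by an inner sphere (this is not Niinimäki and Tahiroglu's code):

# Illustrative sketch: feedback intensity for a virtual object with an outer
# "contact" sphere and an inner target volume. Intensity is 0 outside the
# sphere and grows to 1 as the tracked hand approaches the inner boundary.
import math

def feedback_intensity(hand, centre, outer_r=0.30, inner_r=0.05):
    """Return audio/haptic intensity in [0, 1] from 3D hand and object positions (metres)."""
    d = math.dist(hand, centre)          # Euclidean distance (Python 3.8+)
    if d >= outer_r:
        return 0.0                       # hand has not touched the outer sphere yet
    if d <= inner_r:
        return 1.0                       # hand is "touching" the object itself
    # Linear increase from the outer sphere towards the inner volume.
    return (outer_r - d) / (outer_r - inner_r)

print(feedback_intensity((0.0, 0.0, 0.28), (0.0, 0.0, 0.0)))  # weak feedback
print(feedback_intensity((0.0, 0.0, 0.06), (0.0, 0.0, 0.0)))  # near-maximum feedback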

Object exploration and analysis: 2 of the selected papers [35, 36] fit this classification. Ritterbusch et al. [35] attempt to reduce the obstacles visually impaired users face in exploring certain objects, such as a map. They propose combining the feedback and input from a haptic device with 3D audio and show applications in three areas: architecture, mathematics and medicine. Buonamici et al. [36] present a feasibility study for a novel system that maps a work of art to a virtual bas-relief with an audio description. Users' hand positions are tracked with a Kinect while they explore this representation, so the system can tell which part of the audio description to play. The Kinect was also used as a 3D scanner to build the objects' virtual representations.
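A minimal sketch of selecting an audio segment from the tracked hand position might look like the following; the region names, bounds and audio files are hypothetical and are not taken from Buonamici et al.:

# Illustrative sketch: choosing which audio description segment to play from a
# tracked hand position over a virtual bas-relief. Regions are axis-aligned
# boxes in the relief's coordinate frame; names and bounds are hypothetical.

REGIONS = [
    # (name, x_min, x_max, y_min, y_max, audio_file)
    ("sky",        0.0, 1.0, 0.6, 1.0, "sky.wav"),
    ("foreground", 0.0, 1.0, 0.0, 0.3, "foreground.wav"),
    ("figure",     0.3, 0.7, 0.3, 0.6, "figure.wav"),
]

def audio_for_hand(x: float, y: float):
    """Return the audio clip for the first region containing the hand, or None."""
    for name, x0, x1, y0, y1, clip in REGIONS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return clip
    return None  # hand is outside any described region

print(audio_for_hand(0.5, 0.45))  # -> "figure.wav"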

Feeling object texture: this was the goal of 2 works included in this review. Ando et al. [38] propose a device placed under the nail of one finger to detect collisions with virtual objects (or augmented real objects, such as a line drawing) and provide vibration feedback. Bau and Poupyrev [39] explore reverse electrovibration, applying weak electric signals to the user as feedback, to aid in perceiving the texture of real objects, but these objects must be prepared beforehand.

Entertainment: Baldan et al. [37] developed a virtual table tennis game accessible to visually impaired players that uses a smartphone as the paddle. While we are aware of a few other 3D games accessible to visually impaired users (including at least one first-person shooter), our search of the literature did not return any of them.

Braille reading: only Amemiya et al. [10] described work on augmenting Braille text to aid in this task, using a glove-like device called Finger-Braille that fits over the fingers and can aid both in Braille reading and in navigating an environment augmented with RFID tags and a camera.

Spatial perception: Khambadkar and Folmer [40] use gesture-based interaction to aid visually impaired users in spatial perception, with a Kinect sensor attached to the user and synthesized voice audio as feedback. The system is called GIST and has two modes of operation, mapping and gesture. In mapping mode it creates a map of the environment using color and depth information from the Kinect; gesture mode is then activated and the user is informed of this. Gestures are then used for different tasks, such as telling whether another person is present in the environment, or identifying how far away objects are and what color they are.
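The two-mode flow can be sketched as below; the helper names and the toy depth map are ours and are not part of GIST:

# Illustrative sketch (not the GIST implementation): the mapping/gesture flow
# described above, with a gesture that queries how far away an object is.
from enum import Enum, auto

class Mode(Enum):
    MAPPING = auto()   # build a map of the environment from Kinect colour/depth data
    GESTURE = auto()   # interpret user gestures against that map

def distance_at(depth_map, row, col):
    """Distance (metres) stored in the mapped environment at a given cell."""
    return depth_map[row][col]

mode = Mode.MAPPING
depth_map = [[2.0, 2.1], [0.9, 1.0]]   # toy 2x2 "map" for illustration only
mode = Mode.GESTURE                     # mapping finished; the user is informed

if mode is Mode.GESTURE:
    # e.g. a pointing gesture at map cell (1, 0) would yield a spoken answer
    print(f"The object is about {distance_at(depth_map, 1, 0):.1f} metres away")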

Others: Hermann et al. [41] propose a system that helps identify head gestures directed at the user, such as shaking the head to mean "no". Its main contributions are two novel ways of representing these head gestures using sound: continuous sonification and event-based sonification.

Besides these applications we also extracted more information from the selected papers, summarized in Fig. 3.
Fig. 3. Occurrence of input and output devices, forms of feedback and explored senses

Figure 3a shows which input devices were used in the selected papers and how often, with ubiquitous smartphones being used most often for both input and output (Fig. 3b), followed by other cameras and the Kinect sensor. While haptic devices and active gloves are very useful in many of these applications, their relatively high cost and low availability are probably the reason why they are not explored more often. Figure 3b shows the same for output devices. Mono audio was used most frequently by far, often as voice-based feedback with synthesized or prerecorded voices, but other sound signals were frequent as well. Stereo audio was also used often, particularly when exploring 3D sound. Haptic devices were used more frequently for feedback than for input. Figure 3c shows the forms of feedback independent of device, with audio being by far the most frequently used, and Fig. 3d shows that hearing, naturally, followed by proprioception, were the senses most often explored in these accessible techniques.
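Occurrence counts like these can be tallied directly from the extraction records described in Sect. 2; the sketch below uses invented sample records purely for illustration:

# Illustrative sketch: tallying device and feedback occurrences (as in Fig. 3)
# from per-paper extraction records; the sample data is invented for illustration.
from collections import Counter

records = [
    {"input": ["smartphone"], "output": ["smartphone"], "feedback": ["voice audio"]},
    {"input": ["Kinect"],     "output": ["headphones"], "feedback": ["3D audio"]},
    {"input": ["smartphone", "camera"], "output": ["vibration motor"], "feedback": ["vibration"]},
]

def tally(records, field):
    """Count how many papers mention each item in the given field."""
    return Counter(item for r in records for item in set(r[field]))

print(tally(records, "input"))   # e.g. Counter({'smartphone': 2, 'Kinect': 1, 'camera': 1})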

4 Conclusion

We presented the results of a systematic literature review about 3D interaction accessible to visually impaired people. Most of the research effort we found was aimed at the task of navigation, particularly at augmenting real environments with information to help users move through them. While we did find work to aid these users in exploring purely virtual environments, there is a clear deficit of research in this area. We hope that this review helps, in some small measure, to foster more research in the area by showing possible applications, research gaps to be filled, and which senses and devices are most frequently and successfully explored, so as to help those who might want to get started in this sort of research.

References

  1. IBGE: Diretoria de Pesquisas, Departamento de População e Indicadores Sociais. Rio de Janeiro (2010)
  2. OMS, Organização Mundial da Saúde: Global data on visual impairments 2010. Geneva, 17 p. (2010). http://www.who.int/entity/blindness/GLOBALDATAFINALforweb.pdf. Accessed 21 Nov 2014
  3. ONU: Declaração de Direitos das Pessoas Deficientes. In: Assembléia Geral da Organização das Nações Unidas, 9 December 1975
  4. White, G., Fitzpatrick, G., McAllister, G.: Toward accessible 3D virtual environments for the blind and visually impaired. In: Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, DIMEA 2008, vol. 349, pp. 134–141. ACM, New York (2008)
  5. Kitchenham, B., Brereton, O., Budgen, D., Turner, M., Bailey, J., Linkman, S.: Systematic literature reviews in software engineering – a systematic literature review. Inf. Softw. Technol. 51(1), 7–15 (2009)
  6. Schätzle, S., Weber, B.: Towards vibrotactile direction and distance information for virtual reality and workstations for blind people. In: Antona, M., Stephanidis, C. (eds.) UAHCI 2015. LNCS, vol. 9176, pp. 148–160. Springer, Heidelberg (2015)
  7. Jain, D.: Path-guided indoor navigation for the visually impaired using minimal building retrofitting. In: Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 225–232 (2014)
  8. Gallo, S., Chapuis, D., Santos-Carreras, L., Kim, Y., Retornaz, P., Bleuler, H., Gassert, R.: Augmented white cane with multimodal haptic feedback. In: 2010 3rd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, pp. 149–155 (2010)
  9. Shangguan, L., Yang, Z., Zhou, Z.: CrossNavi: enabling real-time crossroad navigation for the blind with commodity phones. In: UbiComp 2014 - Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (2014)
  10. Amemiya, T., Yamashita, J., Hirota, K., Hirose, M.: Virtual leading blocks for the deaf-blind: a real-time way-finder by verbal-nonverbal hybrid interface and high-density RFID tag space. In: IEEE Virtual Reality, pp. 165–287 (2004)
  11. Berretta, L., Soares, F., Ferreira, D.J., Nascimento, H.A.D., Cardoso, A., Lamounier, E.: Virtual environment manipulated by recognition of poses using Kinect: a study to help blind locomotion in unfamiliar surroundings. In: 2013 XV Symposium on Virtual and Augmented Reality (SVR), pp. 10–16 (2013)
  12. Chuang, C., Hsieh, J., Fan, K.: A smart handheld device navigation system based on detecting visual code. In: 2013 International Conference on Machine Learning and Cybernetics, vol. 1, pp. 1407–1412 (2013)
  13. Fallah, N., Apostolopoulos, I., Bekris, K., Folmer, E.: The user as a sensor. In: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI 2012, p. 425 (2012)
  14. Heller, F., Borchers, J.: AudioTorch: using a smartphone as directional microphone in virtual audio spaces. In: Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, pp. 483–488 (2014)
  15. Jain, D.: Pilot evaluation of a path-guided indoor navigation system for visually impaired in a public museum. In: Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 273–274 (2014)
  16. Joseph, S.L., Zhang, X., Dryanovski, I., Xiao, J., Yi, C., Tian, Y.: Semantic indoor navigation with a blind-user oriented augmented reality. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics, no. 65789, pp. 3585–3591 (2013)
  17. Magnusson, C., Molina, M., Grohn, K.R., Szymczak, D.: Pointing for non-visual orientation and navigation. In: Proceedings 6th Nord Conference Human-Computer Interact. Extending Boundaries - Nord. 2010, p. 735 (2010)
  18. Magnusson, C., Waern, A., Grohn, K.R., Bjernryd, A., Bernhardsson, H., Jakobsson, A., Salo, J., Wallon, M., Hedvall, P.O.: Navigating the world and learning to like it. In: Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services - MobileHCI 2011, p. 285 (2011)
  19. Paneels, S.A., Olmos, A., Blum, J.R., Cooperstock, J.R.: Listen to it yourself!: evaluating usability of what's around me? for the blind. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2107–2116 (2013)
  20. Raposo, N., Rios, H., Lima, D., Gadelha, B., Castro, T.: An application of mobility aids for the visually impaired. In: Proceedings of the 13th International Conference on Mobile and Ubiquitous Multimedia - MUM 2014, pp. 180–189 (2014)
  21. Ribeiro, F., Florencio, D., Chou, P.A., Zhang, Z.: Auditory augmented reality: object sonification for the visually impaired. In: 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), pp. 319–324 (2012)
  22. Schneider, J., Strothotte, T.: Constructive exploration of spatial information by blind users. In: Proceedings of the Fourth International ACM Conference on Assistive Technologies - Assets 2000 (2000)
  23. Soukaras, D.P., Chaniotis, I.K., Karagiannis, I.G., Stampologlou, I.S., Triantafyllou, C.A., Tselikas, N.D., Foukarakis, I.E., Boucouvalas, A.C.: Augmented audio reality mobile application specially designed for visually impaired people. In: 2012 16th Panhellenic Conference on Informatics, pp. 13–18 (2012)
  24. Zollner, M., Huber, S., Jetter, H.C., Reiterer, H.: NAVI: a proof-of-concept of a mobile navigational aid for visually impaired based on the Microsoft Kinect. In: Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction - Volume Part IV, pp. 584–587 (2011)
  25. Rodriguez-Sanchez, M.C., Moreno-Alvarez, M.A., Martin, E., Borromeo, S., Hernandez-Tamames, J.A.: Accessible smartphones for blind users: a case study for a wayfinding system. Expert Systems with Applications (2014)
  26. Doush, I.A., Alshattnawi, S., Barhoush, M.: Non-visual navigation interface for completing tasks with a predefined order using mobile phone: a case study of pilgrimage. Int. J. Mobile Netw. Design Innov. 6(1), 1–13 (2015)
  27. Tang, T.J.J., Li, W.H.: An assistive EyeWear prototype that interactively converts 3D object locations into spatial audio. In: Proceedings of the 2014 ACM International Symposium on Wearable Computers - ISWC 2014, pp. 119–126 (2014)
  28. Vaananen-Vainio-Mattila, K., Suhonen, K., Laaksonen, J., Kildal, J., Tahiroglu, K.: User experience and usage scenarios of audio-tactile interaction with virtual objects in a physical environment. In: Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces - DPPI 2013, p. 67 (2013)
  29. Deville, B., Bologna, G., Pun, T.: Detecting objects and obstacles for visually impaired individuals using visual saliency. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility - ASSETS 2010, p. 253 (2010)
  30. Dramas, F., Oriola, B., Katz, B.G., Thorpe, S.J., Jouffrais, C.: Designing an assistive device for the blind based on object localization and augmented auditory reality. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility - Assets 2008, p. 263 (2008)
  31. Al-Khalifa, A.S., Al-Khalifa, H.S.: Do-It-Yourself object identification using augmented reality for visually impaired people. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part II. LNCS, vol. 7383, pp. 560–565. Springer, Heidelberg (2012)
  32. Nanayakkara, S., Shilkrot, R.: EyeRing: a finger-worn input device for seamless interactions with our surroundings. In: AH 2013 - Proceedings of the 4th Augmented Human International Conference (2013)
  33. Nanayakkara, S., Shilkrot, R., Maes, P.: EyeRing: a finger-worn assistant. In: CHI 2012 Extended Abstracts on Human Factors in Computing Systems, pp. 1961–1966 (2012)
  34. Niinimaki, M., Tahiroglu, K.: AHNE: a novel interface for spatial interaction. In: CHI 2012 Extended Abstracts on Human Factors in Computing Systems, pp. 1031–1034 (2012)
  35. Ritterbusch, S., Constantinescu, A., Koch, V.: Hapto-acoustic scene representation. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part II. LNCS, vol. 7383, pp. 644–650. Springer, Heidelberg (2012)
  36. Buonamici, F., Furferi, R., Governi, L., Volpe, Y.: Making blind people autonomous in the exploration of tactile models: a feasibility study. In: Antona, M., Stephanidis, C. (eds.) UAHCI 2015. LNCS, vol. 9176, pp. 82–93. Springer, Heidelberg (2015)
  37. Baldan, S., de Götzen, A., Serafin, S.: Mobile rhythmic interaction in a sonic tennis game. In: CHI 2013 Extended Abstracts on Human Factors in Computing Systems - CHI EA 2013, p. 2903 (2013)
  38. Ando, H., Miki, T., Inami, M., Maeda, T.: SmartFinger: nail-mounted tactile display. In: ACM SIGGRAPH 2002 Conference Abstracts and Applications - SIGGRAPH 2002, p. 78 (2002)
  39. Bau, O., Poupyrev, I., Le Goc, M., Galliot, L., Glisson, M.: REVEL: tactile feedback technology for augmented reality. In: ACM SIGGRAPH 2012 Emerging Technologies (2012)
  40. Khambadkar, V., Folmer, E.: GIST: a gestural interface for remote nonvisual spatial perception. In: Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology - UIST 2013, pp. 301–310 (2013)
  41. Hermann, T., Neumann, A., Zehe, S.: Head gesture sonification for supporting social interaction. In: Proceedings of the 7th Audio Most. Conf. A Conf. Interact. with Sound - AM 2012, pp. 82–89 (2012)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Erico de Souza Veriscimo (1)
  • João Luiz Bernardes Jr. (1)

  1. School of Arts, Sciences and Humanities – EACH, University of São Paulo, São Paulo, Brazil
