Vision Substitution Experiments with See ColOr
See ColOr is a mobility aid for visually impaired people that uses the auditory channel to represent portions of captured images in real time. A distinctive feature of the See ColOr interface is the simultaneous coding of colour and depth. Four main modules were developed in order to replicate a number of mechanisms present in the human visual system. In this work, we first present the main experiments carried out in the first years of the project, among them: the avoidance of obstacles, the recognition and localization of objects, the detection of edges, and the identification of coloured targets. Finally, we introduce new ongoing experiments in Colombia with blind persons, whose purpose is (1) to locate and touch a target; (2) to navigate and find a person; and (3) to find particular objects. Preliminary results are encouraging.
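The central idea of coding colour and depth simultaneously in sound can be illustrated with a minimal sketch. The mapping below is purely an assumption for illustration (hue to a chromatic pitch, depth to note duration); it is not the published See ColOr encoding, which uses its own instrument and rhythm scheme.

```python
# Illustrative sketch only: the function name and the exact hue/depth
# mappings are assumptions, not the See ColOr specification.

def sonify_pixel(hue_deg, depth_m, max_depth_m=5.0):
    """Map a pixel's hue (0-360 degrees) and depth (metres) to an
    audio cue: a pitch derived from the hue and a duration that
    shortens as the surface gets closer, signalling urgency."""
    # Quantise the hue circle onto 12 semitones of a chromatic scale
    semitone = int(hue_deg % 360) * 12 // 360
    freq_hz = 261.63 * 2 ** (semitone / 12)  # C4-based chromatic scale
    # Nearer surfaces -> shorter, more urgent notes
    clamped = min(max(depth_m, 0.0), max_depth_m)
    duration_s = 0.1 + 0.4 * (clamped / max_depth_m)
    return freq_hz, duration_s
```

A real interface would render such cues spatially (e.g. panned to the pixel's horizontal position) and continuously, but the two independent sound dimensions above capture why colour and depth can be conveyed at the same time without masking each other.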
Keywords: 3D vision, vision substitution, colour-depth sonification, human-computer interaction