UAHCI 2013: Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion, pp. 500–509
Multimodal Kinect-Supported Interaction for Visually Impaired Users
Abstract
This paper discusses Kreader, a proof-of-concept interface that lets blind or visually impaired users have text read to them. We use the Kinect device to track the user's body. All feedback is presented through auditory cues, while a minimal visual interface can optionally be turned on. Interface elements are organized as a list and placed egocentrically, in relation to the user's body: moving around the room does not change an element's position relative to the body. Visually impaired users can therefore rely on their "body-sense" to find elements. Two test sessions were used to evaluate Kreader. We find the results encouraging; they provide a solid foundation for future research into such an interface, which can be navigated by both sighted and visually impaired users.
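The paper itself contains no code, but the egocentric placement described above can be sketched concretely. The following minimal Python sketch is illustrative only: it assumes a torso joint position and facing angle have already been obtained from the Kinect skeleton stream, and all names and parameters (arc radius, selection threshold, element count) are our own assumptions rather than values from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def element_positions(torso: Vec3, facing_yaw: float, n_elements: int,
                      radius: float = 0.6) -> list[Vec3]:
    """Lay out n_elements on a horizontal arc in front of the torso.

    Recomputing this each frame from the current torso joint keeps the
    layout egocentric: elements follow the user through the room, so
    their positions relative to the body never change.
    """
    spread = math.radians(90.0)  # assumed total arc width of 90 degrees
    positions = []
    for i in range(n_elements):
        # t in [-0.5, 0.5]: offset of element i from the facing direction
        t = (i / (n_elements - 1) - 0.5) if n_elements > 1 else 0.0
        a = facing_yaw + t * spread
        positions.append(Vec3(torso.x + radius * math.sin(a),
                              torso.y,
                              torso.z + radius * math.cos(a)))
    return positions

def selected_element(hand: Vec3, elements: list[Vec3],
                     threshold: float = 0.12):
    """Return the index of the element closest to the hand joint,
    or None if no element is within the (assumed) 12 cm radius."""
    best, best_d = None, threshold
    for i, p in enumerate(elements):
        d = math.dist((hand.x, hand.y, hand.z), (p.x, p.y, p.z))
        if d < best_d:
            best, best_d = i, d
    return best
```

Because the layout is recomputed from the current torso pose on every frame, an element's position relative to the body stays fixed no matter where the user stands, which is what allows body position alone to locate it.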
Keywords
Interface Element · Text Reading · Test Participant · Impaired People · Blind User