How Speech Technologies Can Help People with Disabilities
Conference paper
Abstract
Neither eyes nor arms are necessary for human-machine speech communication. Both human and machine simply speak and/or listen, or the machine does so on a person's behalf. Speech communication can therefore help people who cannot use their eyes or arms. Beyond the visually impaired and the physically disabled, speech technology can also help the speech impaired and the hearing impaired, as well as elderly people. This paper reviews how speech technologies can be useful for different kinds of persons with disabilities.
Keywords
Persons with disabilities, assistive technologies, aids based on speech technologies: text-to-speech synthesis, speech and speaker recognition
Copyright information
© Springer International Publishing Switzerland 2014