A Multimodal Communication Aid for Persons with Cerebral Palsy Using Head Movement and Speech Recognition

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12377)

Abstract

In this study, we propose a multimodal communication aid for persons with cerebral palsy. The system supports interpersonal communication based on utterance recognition and head movement detection. To compensate for inaccurate utterances caused by oral motor impairment in persons with cerebral palsy, we implemented vowel string-based word prediction together with decision behavior detection via head movement measurement. The proposed system was tested by a participant with cerebral palsy, and the results were compared with those obtained using conventional communication aids such as transparent communication boards. Our results confirmed that the time required for communication with the proposed method was shorter than that required with the conventional tools.
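The core idea of vowel string-based word prediction is that a speaker with dysarthric articulation can often produce vowels more reliably than consonants, so candidate words are retrieved by matching only the vowel sequence of the utterance. The following is an illustrative sketch of this idea, not the authors' implementation; the romanized vocabulary and the simple exact-match lookup are assumptions for demonstration.

```python
# Sketch of vowel string-based word prediction over a romanized vocabulary.
# Only the vowel sequence of each word is compared, so a word remains
# retrievable even when its consonants are articulated inaccurately.

VOWELS = set("aiueo")

def vowel_string(word: str) -> str:
    """Extract the vowel sequence from a romanized word, e.g. 'gohan' -> 'oa'."""
    return "".join(ch for ch in word.lower() if ch in VOWELS)

def predict(utterance_vowels: str, vocabulary: list[str]) -> list[str]:
    """Return vocabulary words whose vowel sequence matches the recognized vowels."""
    return [w for w in vocabulary if vowel_string(w) == utterance_vowels]

# Hypothetical vocabulary for illustration.
vocab = ["arigatou", "ohayou", "mizu", "gohan"]
print(predict("iu", vocab))  # -> ['mizu']
```

In a full system the recognized vowel string would come from a speech recognizer constrained to vowels, and the candidate list would be ranked (e.g. by word frequency) before the user confirms a choice via head movement.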

Keywords

Cerebral palsy · Communication aid · Speech recognition · Gesture recognition

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Intelligent and Mechanical Interaction Systems, University of Tsukuba, Tsukuba, Japan
  2. Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan