Automatic Speech Recognition with Deep Neural Networks for Impaired Speech

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10077)

Abstract

Automatic speech recognition has reached almost human performance in some controlled scenarios. However, recognition of impaired speech is a difficult task for two main reasons: data is (i) scarce and (ii) heterogeneous. In this work we train different architectures on a database of dysarthric speech. A comparison between architectures shows that, even with a small database, hybrid DNN-HMM models outperform classical GMM-HMM models in terms of word error rate. A DNN improves the word error rate by 13% for subjects with dysarthria with respect to the best classical architecture, an improvement larger than the one obtained with other deep neural networks such as CNNs, TDNNs and LSTMs. All experiments were carried out with the Kaldi speech recognition toolkit, for which we adapted several recipes to deal with dysarthric speech and to work on the TORGO database. These recipes are publicly available.
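
Since the comparison above is stated in terms of word error rate (WER), a quick reference may help: WER is the word-level Levenshtein distance (substitutions, deletions and insertions) between the recognizer output and the reference transcript, normalized by the number of reference words. A minimal, self-contained sketch follows; the example transcripts at the bottom are hypothetical and not drawn from the paper.

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate: word-level Levenshtein distance (substitutions,
        deletions, insertions) divided by the number of reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i          # delete all i reference words
        for j in range(len(hyp) + 1):
            d[0][j] = j          # insert all j hypothesis words
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # Hypothetical example: one substitution over four reference words -> 25% WER
    print(wer("the cat sat down", "the bat sat down"))  # 0.25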

Notes

  1. Recipes are publicly available at https://github.com/cristinae/ASRdys.

  2. http://www.speech.cs.cmu.edu/cgi-bin/cmudict (a parsing sketch follows below).
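
Note 2 points to the CMU Pronouncing Dictionary, the kind of pronunciation lexicon an ASR dictionary is typically built from. Below is a minimal parsing sketch, assuming the standard CMUdict plain-text layout ('WORD PH1 PH2 ...' per line, comment lines starting with ';;;', and '(2)'-style suffixes marking pronunciation variants); the local file name in the usage comment is hypothetical.

    import re

    def load_cmudict(path: str) -> dict[str, list[list[str]]]:
        """Parse a CMUdict-style lexicon into {word: [pronunciations]},
        where each pronunciation is a list of ARPAbet phone symbols."""
        lexicon: dict[str, list[list[str]]] = {}
        with open(path, encoding="latin-1") as f:  # older CMUdict releases are Latin-1
            for line in f:
                if line.startswith(";;;") or not line.strip():
                    continue  # skip comments and blank lines
                head, *phones = line.split()
                word = re.sub(r"\(\d+\)$", "", head)  # drop variant index, e.g. READ(2)
                lexicon.setdefault(word, []).append(phones)
        return lexicon

    # Hypothetical usage, assuming the dictionary was saved locally as 'cmudict':
    # lexicon = load_cmudict("cmudict")
    # lexicon["SPEECH"] -> [['S', 'P', 'IY1', 'CH']]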


Acknowledgements

This work was supported by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund, contracts INNPACTO IPT-2012-0914-300000 and TEC2015-69266-P (MINECO/FEDER, UE).

Author information

Corresponding author

Correspondence to Cristina España-Bonet.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

España-Bonet, C., Fonollosa, J.A.R. (2016). Automatic Speech Recognition with Deep Neural Networks for Impaired Speech. In: Abad, A., et al. (eds.) Advances in Speech and Language Technologies for Iberian Languages. IberSPEECH 2016. Lecture Notes in Computer Science, vol 10077. Springer, Cham. https://doi.org/10.1007/978-3-319-49169-1_10

  • DOI: https://doi.org/10.1007/978-3-319-49169-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49168-4

  • Online ISBN: 978-3-319-49169-1

  • eBook Packages: Computer Science, Computer Science (R0)
