Exploring End-to-End Techniques for Low-Resource Speech Recognition

  • Vladimir Bataev
  • Maxim Korenevsky
  • Ivan Medennikov
  • Alexander Zatvornitskiy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11096)

Abstract

In this work we present a simple grapheme-based system for low-resource speech recognition, using the Babel dataset of Turkish spontaneous speech (80 h). We investigated the performance of different neural network architectures, including fully convolutional networks, recurrent networks, and a ResNet with GRU layers. Different features and normalization techniques are compared as well. We also propose a modification of the CTC loss that uses segmentation during training, which improves accuracy when decoding with a small beam size.

Our best model achieved a word error rate of 45.8%, which, to our knowledge, is the best reported result for end-to-end systems using only in-domain data for this task.
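The CTC criterion at the core of the system described above can be sketched as follows. This is not the authors' modified loss, only a minimal pure-Python illustration of the standard CTC forward (alpha) recursion over per-frame grapheme probabilities; the function name and toy inputs are our own.

```python
import math

def ctc_neg_log_likelihood(probs, target, blank=0):
    """Standard CTC forward pass (Graves et al., 2006).

    probs:  T rows of per-class probabilities (one row per acoustic frame)
    target: grapheme index sequence, without blanks
    Returns -log P(target | probs), summed over all valid alignments.
    """
    # Extend the target with blanks between and around labels: ^ a ^ b ^ ...
    ext = [blank]
    for s in target:
        ext.extend([s, blank])
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of all alignments that end in ext[s] at frame t
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]
    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                       # stay on the same symbol
            if s > 0:
                a += alpha[s - 1]              # advance from the previous symbol
            # Skip a blank, allowed only between two different non-blank labels
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new
    # Valid alignments end in the last label or the trailing blank
    total = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(total)

# Toy example: 2 frames, 2 classes (blank, 'a'), uniform frame posteriors.
# The three alignments (a,a), (^,a), (a,^) each have probability 0.25,
# so P(target) = 0.75 and the loss is -log(0.75).
loss = ctc_neg_log_likelihood([[0.5, 0.5], [0.5, 0.5]], [1])
print(loss)  # -> 0.2876820724517809
```

Training a grapheme-based system amounts to minimizing this quantity over the network's per-frame softmax outputs; the paper's modification adds segmentation information during training, which is not reproduced here.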

Keywords

Low-resource speech recognition · End-to-end speech recognition · Connectionist Temporal Classification

Notes

Acknowledgements

This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Vladimir Bataev (1)
  • Maxim Korenevsky (2, 3)
  • Ivan Medennikov (2, 3)
  • Alexander Zatvornitskiy (1, 2, 3)
  1. Speech Technology Center Ltd., St. Petersburg, Russia
  2. STC-Innovations Ltd., St. Petersburg, Russia
  3. ITMO University, St. Petersburg, Russia