
A Comparison of Language Model Training Techniques in a Continuous Speech Recognition System for Serbian

  • Branislav Popović
  • Edvin Pakoci
  • Darko Pekar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11096)

Abstract

In this paper, a number of language model training techniques are examined and utilized in a large vocabulary continuous speech recognition system for the Serbian language (more than 120,000 words), namely the Mikolov and Yandex RNNLM toolkits, TensorFlow-based GPU approaches, and the CUED-RNNLM approach. The baseline acoustic model is a chain sub-sampled time-delay neural network, trained using cross-entropy training and a sequence-level objective function on a database of about 200 h of speech. The baseline language model is a 3-gram model trained on the training part of the database transcriptions and the Serbian journalistic corpus (about 600,000 utterances), using the SRILM toolkit and the Kneser-Ney smoothing method, with a pruning value of 10⁻⁷ (previous best). The results are analyzed in terms of word and character error rates, as well as the perplexity of a given language model on the training and validation sets. A relative improvement of 22.4% (best word error rate of 7.25%) is obtained in comparison to the baseline language model.
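
As a rough illustration of the baseline configuration described above, the sketch below shows how a Kneser-Ney-smoothed 3-gram model with a 10⁻⁷ pruning threshold could be trained and scored for perplexity using the SRILM command-line tools; the file names (train.txt, dev.txt, baseline.lm) are placeholders introduced here for illustration and are not taken from the paper.

```python
# Minimal sketch of the baseline 3-gram setup (SRILM, interpolated
# Kneser-Ney smoothing, pruning threshold 1e-7). File names are
# illustrative placeholders, not from the paper.
import subprocess

def train_baseline_lm(train_text="train.txt", lm_out="baseline.lm"):
    """Train an interpolated Kneser-Ney 3-gram LM and prune it at 1e-7."""
    subprocess.run([
        "ngram-count",
        "-order", "3",
        "-kndiscount", "-interpolate",  # modified Kneser-Ney smoothing
        "-prune", "1e-7",               # pruning threshold used for the baseline
        "-text", train_text,
        "-lm", lm_out,
    ], check=True)

def report_perplexity(lm="baseline.lm", eval_text="dev.txt"):
    """Print SRILM's perplexity report for the LM on a held-out text."""
    subprocess.run([
        "ngram",
        "-order", "3",
        "-lm", lm,
        "-ppl", eval_text,
    ], check=True)

if __name__ == "__main__":
    train_baseline_lm()
    report_perplexity()
```

Lower perplexity on the validation set generally tracks, but does not guarantee, a lower word error rate, which is why the comparison of the RNNLM training techniques is reported on both measures.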

Keywords

Language modeling · RNNLM · LSTM · LVCSR

Acknowledgments

The work described in this paper was supported in part by the Ministry of Education, Science and Technological Development of the Republic of Serbia, within the project “Development of Dialogue Systems for Serbian and Other South Slavic Languages”, EUREKA project DANSPLAT, “A Platform for the Applications of Speech Technologies on Smartphones for the Languages of the Danube Region”, id E! 9944, and the Provincial Secretariat for Higher Education and Scientific Research, within the project “Central Audio-Library of the University of Novi Sad”, No. 114-451-2570/2016-02.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Branislav Popović (1, 3, 4)
  • Edvin Pakoci (1, 2)
  • Darko Pekar (1, 2)

  1. Department for Power, Electronic and Telecommunication Engineering, Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia
  2. AlfaNum Speech Technologies, Novi Sad, Serbia
  3. Department for Music Production and Sound Design, Academy of Arts, Alfa BK University, Belgrade, Serbia
  4. Computer Programming Agency Code85 Odžaci, Odžaci, Serbia
