On the Effectiveness of Neural Text Generation Based Data Augmentation for Recognition of Morphologically Rich Speech

  • Conference paper
Text, Speech, and Dialogue (TSD 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12284)

Abstract

Advanced neural network models have penetrated Automatic Speech Recognition (ASR) in recent years; in language modeling, however, many systems still rely partly or entirely on traditional Back-off N-gram Language Models (BNLM). The reason is the high cost and complexity of training and applying neural language models, which is usually feasible only by adding a second decoding pass (rescoring). In our recent work, we significantly improved the online performance of a conversational speech transcription system by transferring knowledge from a Recurrent Neural Network Language Model (RNNLM) to the single-pass BNLM through text-generation-based data augmentation. In the present paper, we analyze the amount of transferable knowledge and demonstrate that the neurally augmented LM (RNN-BNLM) can capture almost 50% of the knowledge of the RNNLM while dropping the second decoding pass and thus making the system real-time capable. We also systematically compare word and subword LMs and show that subword-based neural text augmentation can be especially beneficial in under-resourced conditions. In addition, we show that by using the RNN-BNLM in the first pass and adding a neural second pass, offline ASR results can be further improved significantly.
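
To make the augmentation idea concrete, the sketch below illustrates the general recipe of neural text generation for n-gram training data: sample sentences from a trained RNNLM and pool them with the original corpus before re-estimating the back-off model. This is a minimal PyTorch illustration under assumed hyperparameters and token IDs, not the authors' implementation (their stateful LSTM code is linked in the Notes below).

```python
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """Word- or subword-level LSTM language model (stand-in for the paper's RNNLM)."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        output, state = self.lstm(self.embed(tokens), state)
        return self.out(output), state

@torch.no_grad()
def generate_corpus(model, bos_id, eos_id, n_sentences, max_len=40, temperature=1.0):
    """Sample sentences from the (trained) RNNLM; the sampled text is later
    pooled with the original corpus to re-train the back-off n-gram LM."""
    model.eval()
    sentences = []
    for _ in range(n_sentences):
        token = torch.tensor([[bos_id]])
        state, sentence = None, []
        for _ in range(max_len):
            logits, state = model(token, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_id = int(torch.multinomial(probs, 1))
            if next_id == eos_id:
                break
            sentence.append(next_id)
            token = torch.tensor([[next_id]])
        sentences.append(sentence)
    return sentences

# Hypothetical usage: in practice the model is first trained on the
# conversational corpus; here it is randomly initialised for illustration.
model = RNNLM(vocab_size=1000)
augmented = generate_corpus(model, bos_id=1, eos_id=2, n_sentences=5)
print(augmented)  # lists of sampled token IDs, to be detokenised into text
```

The generated sentences would then be detokenised and passed, together with the original corpus, to a standard n-gram toolkit (for example SRILM) to train the augmented back-off model, here called the RNN-BNLM.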

Notes

  1. https://github.com/btarjan/stateful-LSTM-LM.
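
The repository name suggests a stateful LSTM, i.e. a language model whose hidden state is carried over between consecutive training batches of a continuous text stream rather than being reset. The self-contained loop below is an assumed reading of that idea with illustrative dimensions and random stand-in data, not the repository's actual code.

```python
import torch
import torch.nn as nn

# Illustrative sizes; the real model and data differ.
vocab_size, emb_dim, hidden_dim = 100, 32, 64
batch_size, seq_len = 4, 8

embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, vocab_size)
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

state = None  # (h, c) persists across batches instead of being reset
for step in range(10):
    # Stand-in for consecutive, contiguous chunks of the training text.
    inputs = torch.randint(vocab_size, (batch_size, seq_len))
    targets = torch.randint(vocab_size, (batch_size, seq_len))

    output, state = lstm(embed(inputs), state)
    state = tuple(s.detach() for s in state)  # keep the state, cut the gradient
    loss = criterion(head(output).reshape(-1, vocab_size), targets.reshape(-1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```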

Acknowledgements

The research was supported by the CAMEP (2018-2.1.3-EUREKA-2018-00014) and NKFIH FK-124413 projects.

Author information

Correspondence to Balázs Tarján.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Tarján, B., Szaszák, G., Fegyó, T., Mihajlik, P. (2020). On the Effectiveness of Neural Text Generation Based Data Augmentation for Recognition of Morphologically Rich Speech. In: Sojka, P., Kopeček, I., Pala, K., Horák, A. (eds) Text, Speech, and Dialogue. TSD 2020. Lecture Notes in Computer Science (LNAI), vol 12284. Springer, Cham. https://doi.org/10.1007/978-3-030-58323-1_47

  • DOI: https://doi.org/10.1007/978-3-030-58323-1_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58322-4

  • Online ISBN: 978-3-030-58323-1

  • eBook Packages: Computer Science, Computer Science (R0)
