Data Augmentation for Training of Noise Robust Acoustic Models

  • Conference paper
  • In: Analysis of Images, Social Networks and Texts (AIST 2016)

Abstract

In this paper we analyse ways to improve deep neural network based acoustic models with the help of data augmentation. These models are used for speech recognition in an a priori unknown, possibly noisy acoustic environment (office or home noise, street noise, babble, etc.) and are intended to handle both headset and distant-microphone recordings. We compare acoustic models trained on speech corpora with artificially added noises of different origins and with reverberation. On various test sets, the word recognition accuracy improvement over a baseline model trained on clean headset recordings reaches 45%. In real-life environments such as a meeting room or a noisy open space, the gain varies from 10 to 40%.
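The augmentation described above (mixing clean recordings with noise at a chosen signal-to-noise ratio and convolving them with a room impulse response to simulate reverberation) can be sketched as follows. This is a minimal illustration rather than the authors' actual pipeline: the file names, the 10 dB SNR, and the impulse response are hypothetical assumptions, and mono 16-bit WAV input is assumed.

```python
# Minimal sketch of additive-noise + reverberation data augmentation.
# File names, SNR and RIR are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def mix_at_snr(speech, noise, snr_db):
    """Add a noise recording to clean speech at the requested SNR (dB)."""
    # Loop the noise so it covers the whole utterance, then trim.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

def reverberate(speech, rir):
    """Convolve speech with a room impulse response, keeping the original length."""
    wet = fftconvolve(speech, rir)[:len(speech)]
    # Rescale to the original peak level to avoid clipping on write.
    return wet * (np.max(np.abs(speech)) / (np.max(np.abs(wet)) + 1e-12))

if __name__ == "__main__":
    # Hypothetical mono 16-bit WAV files.
    sr, clean = wavfile.read("clean_headset.wav")
    _, office_noise = wavfile.read("office_noise.wav")
    _, rir = wavfile.read("meeting_room_rir.wav")
    clean = clean.astype(np.float64)
    office_noise = office_noise.astype(np.float64)
    rir = rir.astype(np.float64)

    augmented = reverberate(mix_at_snr(clean, office_noise, snr_db=10.0), rir)
    wavfile.write("augmented.wav", sr, augmented.astype(np.int16))
```

Looping the noise to the utterance length and rescaling the reverberated signal back to the original peak level keep the augmented recordings at comparable levels across noise and reverberation conditions.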

Notes

  1. https://github.com/freerussianasr/recipes.

Acknowledgements

This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.579.21.0057 (ID RFMEFI57914X0057).

Author information

Corresponding author

Correspondence to Valentin Mendelev.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Prisyach, T., Mendelev, V., Ubskiy, D. (2017). Data Augmentation for Training of Noise Robust Acoustic Models. In: Ignatov, D., et al. Analysis of Images, Social Networks and Texts. AIST 2016. Communications in Computer and Information Science, vol 661. Springer, Cham. https://doi.org/10.1007/978-3-319-52920-2_2

  • DOI: https://doi.org/10.1007/978-3-319-52920-2_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-52919-6

  • Online ISBN: 978-3-319-52920-2

  • eBook Packages: Computer Science; Computer Science (R0)
