
Speech Enhancement Using Dynamical Variational AutoEncoder

  • Conference paper
Intelligent Information and Database Systems (ACIIDS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13996)


Abstract

This research addresses speech enhancement with a generative model. Many existing solutions, trained on a fixed set of interference or noise types, struggle to extract speech from a mixture containing unfamiliar noise. We use a class of generative models called the Dynamical Variational AutoEncoder (DVAE), which combines generative and temporal modeling to analyze the speech signal. Models in this class attend to the temporal behavior of the speech signal, then extract and enhance it. Moreover, we design a new architecture in the DVAE class, named Bi-RVAE, which is simpler than the other models yet achieves good results. Experimental results show that the DVAE class, including our proposed design, recovers high-quality speech. This class can enhance the speech signal before it is passed to downstream processing models.
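The paper itself provides no code here, but the idea of a DVAE — a variational autoencoder whose per-frame latent variables are conditioned on temporal context — can be sketched briefly. The following NumPy snippet is an illustrative, untrained forward pass in the spirit of a bidirectional recurrent VAE (the "Bi-RVAE" name suggests this shape); all dimensions, weight initializations, and function names are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper):
# F frequency bins, T time frames, H hidden units, Z latent dimensions.
F, T, H, Z = 16, 10, 32, 8

def rnn(x, W_in, W_h, h0):
    """Simple tanh RNN over the time axis; returns hidden states of shape (T, H)."""
    h, states = h0, []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W_h)
        states.append(h)
    return np.stack(states)

# Random (untrained) weights: forward/backward encoder RNNs, Gaussian
# posterior heads, and a decoder mapping latents back to spectrogram variances.
Wf_in, Wf_h = rng.normal(0, 0.1, (F, H)), rng.normal(0, 0.1, (H, H))
Wb_in, Wb_h = rng.normal(0, 0.1, (F, H)), rng.normal(0, 0.1, (H, H))
W_mu, W_logvar = rng.normal(0, 0.1, (2 * H, Z)), rng.normal(0, 0.1, (2 * H, Z))
W_dec = rng.normal(0, 0.1, (Z, F))

def birvae_forward(x):
    """One forward pass over a (T, F) power spectrogram."""
    h_fwd = rnn(x, Wf_in, Wf_h, np.zeros(H))              # left-to-right context
    h_bwd = rnn(x[::-1], Wb_in, Wb_h, np.zeros(H))[::-1]  # right-to-left context
    h = np.concatenate([h_fwd, h_bwd], axis=1)            # (T, 2H) bidirectional state
    mu, logvar = h @ W_mu, h @ W_logvar                   # per-frame Gaussian posterior
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization trick
    var_hat = np.exp(z @ W_dec)                           # decoded (positive) spectrogram variance
    return z, var_hat

z, var_hat = birvae_forward(rng.random((T, F)))
print(z.shape, var_hat.shape)  # (10, 8) (10, 16)
```

In an actual enhancement pipeline, weights like these would be trained on clean speech so that, at test time, the latent sequence captures speech structure while noise is explained separately.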



Acknowledgement

Hao D. Do was funded by Vingroup JSC and supported by the Ph.D. Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2022.TS.037.

Author information

Correspondence to Hao D. Do.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Do, H.D. (2023). Speech Enhancement Using Dynamical Variational AutoEncoder. In: Nguyen, N.T., et al. (eds.) Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science, vol. 13996. Springer, Singapore. https://doi.org/10.1007/978-981-99-5837-5_21


  • DOI: https://doi.org/10.1007/978-981-99-5837-5_21


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-5836-8

  • Online ISBN: 978-981-99-5837-5

  • eBook Packages: Computer Science; Computer Science (R0)
