Abstract
This research addresses speech enhancement with a generative model. Many existing solutions, trained on a fixed set of interference or noise types, struggle to extract speech from a mixture containing unseen noise. We use a class of generative models called Dynamical Variational AutoEncoders (DVAEs), which combine generative and temporal models to analyze the speech signal. This class of models attends to the temporal behavior of the speech signal, then extracts and enhances the speech. Moreover, we design a new architecture in the DVAE class, named Bi-RVAE, which is simpler than the other models yet achieves good results. Experimental results show that the DVAE class, including our proposed design, recovers speech with high quality. Such models could enhance the speech signal before passing it to downstream processing models.
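As context for the abstract's description, a DVAE couples a latent dynamical model with a VAE-style observation model: a latent state z_t evolves over time, and each observed frame x_t (e.g. a spectrogram frame) is generated from z_t. The sketch below is a toy illustration only, using plain NumPy and linear-Gaussian transitions; the actual DVAE/Bi-RVAE models in the paper parameterize these distributions with recurrent neural networks, and all names here are hypothetical.

```python
import numpy as np

def sample_toy_dvae(T=50, dz=2, dx=4, seed=0):
    """Sample from a toy linear-Gaussian DVAE:
        z_t = A @ z_{t-1} + process noise   (temporal/dynamical prior)
        x_t = C @ z_t     + obs. noise      (VAE-style observation model)
    Returns the latent trajectory Z (T, dz) and observations X (T, dx).
    """
    rng = np.random.default_rng(seed)
    A = 0.9 * np.eye(dz)                    # stable latent dynamics
    C = rng.standard_normal((dx, dz))       # decoder / observation map
    z = np.zeros(dz)
    Z, X = [], []
    for _ in range(T):
        z = A @ z + 0.1 * rng.standard_normal(dz)
        x = C @ z + 0.05 * rng.standard_normal(dx)
        Z.append(z)
        X.append(x)
    return np.array(Z), np.array(X)

Z, X = sample_toy_dvae()
print(Z.shape, X.shape)  # (50, 2) (50, 4)
```

The key point the abstract relies on is the temporal prior over z_t: unlike a standard VAE, which treats frames independently, the DVAE class models how speech evolves across frames, which is what lets it separate speech dynamics from unseen noise.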
Acknowledgement
Hao D. Do was funded by Vingroup JSC and supported by the Ph.D. Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2022.TS.037.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Do, H.D. (2023). Speech Enhancement Using Dynamical Variational AutoEncoder. In: Nguyen, N.T., et al. Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science(), vol 13996. Springer, Singapore. https://doi.org/10.1007/978-981-99-5837-5_21
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-5836-8
Online ISBN: 978-981-99-5837-5
eBook Packages: Computer Science, Computer Science (R0)