Abstract
A new neural network approach to extracting characteristic speech features is proposed. The features provide separable information about the pitch of a speech signal and its amplitude spectral envelope. They describe a speech signal compactly and allow machine learning models that take them as input to be trained effectively. An experimental application to the voice activity detection (VAD) problem shows the advantage of the proposed features over widely used features: the mel-spectrogram, the amplitude spectrum, and MFCCs. Experimental results show that a VAD based on the proposed features is more accurate while using a simpler architecture with far fewer trainable parameters. Performance was tested on noisy speech at different SNRs, and the results show that the proposed features are more robust to noise. The experimental results also indicate that the proposed VAD model outperforms the VAD from the WebRTC framework.
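As an illustration of the experimental setup summarized above, the following sketch shows how noisy test speech at a target SNR and the baseline features mentioned in the abstract (mel-spectrogram and MFCC) could be prepared with librosa. The file names, sampling rate, and frame parameters are assumptions chosen for illustration; this is not the authors' implementation.

# Illustrative sketch (assumed parameters, not the paper's code):
# mix clean speech with noise at a target SNR and compute the
# baseline mel-spectrogram and MFCC features used for comparison.
import numpy as np
import librosa

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db."""
    noise = np.resize(noise, speech.shape)            # match lengths
    p_speech = np.mean(speech ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise

def baseline_features(signal, sr=16000):
    """Log mel-spectrogram and MFCC baselines (assumed frame settings)."""
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sr, n_fft=512, hop_length=160, n_mels=40)
    log_mel = librosa.power_to_db(mel)
    mfcc = librosa.feature.mfcc(
        y=signal, sr=sr, n_mfcc=13, n_fft=512, hop_length=160)
    return log_mel, mfcc

# Example: evaluate at several SNRs, as in the noise-robustness test.
speech, sr = librosa.load("clean_speech.wav", sr=16000)   # hypothetical file
noise, _ = librosa.load("babble_noise.wav", sr=16000)     # hypothetical file
for snr in (0, 5, 10, 20):
    noisy = mix_at_snr(speech, noise, snr)
    log_mel, mfcc = baseline_features(noisy, sr)

The WebRTC baseline referenced in the abstract can be evaluated through the py-webrtcvad bindings, whose Vad.is_speech() call classifies 10, 20, or 30 ms frames of 16-bit mono PCM.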