A Continuous Vocoder Using Sinusoidal Model for Statistical Parametric Speech Synthesis
In our earlier work on statistical parametric speech synthesis, we proposed a source-filter vocoder using continuous F0 (contF0) in combination with Maximum Voiced Frequency (MVF), which was successfully used with deep learning. The advantage of a continuous vocoder in this scenario is that its parameters are simpler to model than those of conventional vocoders with discontinuous F0. However, our vocoder lacks some degree of naturalness and has not yet achieved the speech quality of well-known vocoders such as STRAIGHT or WORLD. Previous studies have shown that the human voice can be modelled effectively as a sum of sinusoids. In this paper, we first address the design of a continuous vocoder with a sinusoidal synthesis model that is applicable in statistical frameworks. The same three parameters extracted in the analysis stage of our previous model are also used in this study. To refine the output of the contF0 estimator, a post-processing step is applied to reduce unwanted voiced components in unvoiced speech sounds, resulting in a smoother contF0 track. During synthesis, a sinusoidal model with minimum phase is applied to reconstruct speech. Finally, we compared the voice quality of the proposed system to the STRAIGHT and WORLD vocoders. Objective and subjective evaluations show that the proposed vocoder achieves synthesis quality comparable to state-of-the-art vocoders while outperforming our previous continuous-F0-based source-filter vocoder.
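The post-processing of the contF0 track described above can be illustrated with a simple median smoother that suppresses isolated spurious voiced values in unvoiced regions. This is a minimal sketch under stated assumptions: the function name `smooth_contf0` and the kernel size are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def smooth_contf0(contf0, kernel=5):
    """Median-filter a continuous F0 track (Hz per frame).

    Illustrative sketch only: a sliding median removes short spikes
    (unwanted voiced components in unvoiced sounds) while leaving the
    underlying smooth contour largely unchanged. The kernel size is an
    assumed parameter, not taken from the paper.
    """
    contf0 = np.asarray(contf0, dtype=float)
    pad = kernel // 2
    # Edge-pad so the output has the same length as the input track.
    padded = np.pad(contf0, pad, mode="edge")
    return np.array([np.median(padded[i:i + kernel])
                     for i in range(len(contf0))])

# Example: a single spurious jump in the track is flattened out.
track = [100.0, 101.0, 250.0, 102.0, 103.0]
smoothed = smooth_contf0(track)
```

Here the outlier frame at 250 Hz is replaced by a value consistent with its neighbours, yielding the smoother contF0 track the abstract refers to.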
Keywords: Continuous vocoder · Speech synthesis · Sinusoidal model · ContF0
The research was partly supported by the VUK (AAL-2014-1-183), and by the EUREKA (DANSPLAT E!9944) projects. The Titan X GPU used for this research was donated by NVIDIA Corporation. We would like to thank the subjects for participating in the listening test.