Abstract
Deep neural networks (DNNs) have been applied successfully to voice conversion (VC). A DNN is particularly effective with large-scale training speech data, but on a small dataset it suffers from overfitting, which degrades performance. Recently, many VC methods based on generative adversarial networks (GANs) have been developed and have proved to be a promising approach. However, the quality and similarity of the generated speech are not yet entirely satisfactory: the converted speech often has vague content with buzzy noise and sounds robotic. In this paper, we study and modify the basic conditional generative adversarial network (cGAN). Focusing on establishing a robust framework for high-quality VC, the proposed method augments the cGAN with an L1 loss and a less explicit constraint, an identity loss. These constraints cooperate to generate speech of higher quality and similarity. We test our method on non-parallel datasets. An objective evaluation demonstrates that this approach achieves better performance and robustness than other GAN-based methods on the intra-gender task, and a subjective evaluation shows improved naturalness and similarity.
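The abstract describes combining the cGAN adversarial objective with an L1 loss and an identity loss. The sketch below shows how such a combined generator objective might look in PyTorch; the module interfaces, the loss weights lambda_l1 and lambda_id, the least-squares adversarial criterion, and the assumption of frame-aligned source/target acoustic features for the L1 term are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a cGAN generator objective with an added L1 term and an
# identity term, as outlined in the abstract. All names, shapes, and weights
# below are assumptions for illustration, not the authors' configuration.
import torch
import torch.nn as nn

adv_criterion = nn.MSELoss()   # least-squares adversarial loss (assumed choice)
l1_criterion = nn.L1Loss()

lambda_l1, lambda_id = 10.0, 5.0   # assumed trade-off weights


def generator_loss(G, D, src_feats, tgt_feats, tgt_cond):
    """Combined loss for one (source, target) feature pair.

    src_feats, tgt_feats: acoustic feature tensors (batch, frames, dims),
                          assumed frame-aligned for the L1 term
    tgt_cond:             conditioning input identifying the target speaker
    """
    fake = G(src_feats, tgt_cond)

    # 1) Adversarial term: push the discriminator to label converted speech as real.
    d_fake = D(fake, tgt_cond)
    loss_adv = adv_criterion(d_fake, torch.ones_like(d_fake))

    # 2) L1 term: keep converted features close to the target speaker's features.
    loss_l1 = l1_criterion(fake, tgt_feats)

    # 3) Identity term: converting the target speaker's own features toward the
    #    same target should leave them (nearly) unchanged.
    loss_id = l1_criterion(G(tgt_feats, tgt_cond), tgt_feats)

    return loss_adv + lambda_l1 * loss_l1 + lambda_id * loss_id
```

In this sketch the L1 term anchors the converted spectra to the target, while the identity term acts as the softer constraint mentioned in the abstract, discouraging the generator from altering inputs that already belong to the target speaker.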
Acknowledgements
This research is supported in part by the National Science and Technology Major Project for IND (investigational new drug) (Project No. 2018ZX09201-014), National Key Research and Development Project (Grant No. 2017YFC0820504), and the CETC Joint Advanced Research Foundation (Grant No. 6141B08080101, 6141B08010102).
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, L., Wang, Y., Liu, Y., Xiao, W., Xie, H. (2020). A Robust Framework for High-Quality Voice Conversion with Conditional Generative Adversarial Network. In: Sun, X., Wang, J., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2020. Communications in Computer and Information Science, vol 1252. Springer, Singapore. https://doi.org/10.1007/978-981-15-8083-3_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-8082-6
Online ISBN: 978-981-15-8083-3