
Two-Stage Sequence-to-Sequence Neural Voice Conversion with Low-to-High Definition Spectrogram Mapping

  • Sou Miyamoto
  • Takashi Nose
  • Kazuyuki Hiroshiba
  • Yuri Odagiri
  • Akinori Ito
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 110)

Abstract

In this study, we propose a two-stage voice conversion technique realized with two models: a U-Net and pix2pix. The U-Net performs low-dimensional feature conversion that takes the time direction into account, aiming to reproduce the intonation of a target speaker. We then introduce pix2pix for the task of spectrogram enhancement: it is trained to map low-definition spectrograms to high-definition ones (low-to-high spectrogram mapping), where the low-definition spectrogram is reconstructed from the low-dimensional mel-cepstrum converted by the U-Net and the high-definition spectrogram is extracted from natural speech. Objective evaluations showed that the proposed method is effective in improving mel-cepstral distortion (MCD) and log F0 RMSE. Subjective evaluations revealed that the proposed method improves speaker individuality while maintaining the same level of naturalness as the conventional method.
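The abstract describes a two-stage pipeline: a U-Net first converts low-dimensional mel-cepstral sequences along the time axis, and a pix2pix model then maps the spectrogram reconstructed from the converted mel-cepstrum to a high-definition one. The paper's exact architectures and hyperparameters are not given here, so the following PyTorch sketch is only illustrative: the module names (UNet1d, UNet2d, PatchDiscriminator), layer widths, kernel sizes, the tanh output scaling, and the L1 weight of 100 (the value used in the original pix2pix paper) are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Stage 1 (assumed shape of the method): a minimal 1-D U-Net mapping
    # source mel-cepstral sequences (batch, n_mcep, frames) to
    # target-speaker sequences, so temporal context informs each frame.
    class UNet1d(nn.Module):
        def __init__(self, n_mcep=40, ch=64):
            super().__init__()
            self.down1 = nn.Sequential(nn.Conv1d(n_mcep, ch, 4, 2, 1), nn.LeakyReLU(0.2))
            self.down2 = nn.Sequential(nn.Conv1d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2))
            self.up1 = nn.Sequential(nn.ConvTranspose1d(2 * ch, ch, 4, 2, 1), nn.ReLU())
            self.up2 = nn.ConvTranspose1d(2 * ch, n_mcep, 4, 2, 1)  # 2*ch: skip concat

        def forward(self, x):  # frame count must be a multiple of 4 in this sketch
            d1 = self.down1(x)
            d2 = self.down2(d1)
            u1 = self.up1(d2)
            return self.up2(torch.cat([u1, d1], dim=1))

    mcep = torch.randn(1, 40, 128)      # 40-dim mel-cepstra, 128 frames (dummy data)
    converted = UNet1d()(mcep)          # same shape, target-speaker features

    # Stage 2 generator: a 2-D U-Net over (freq, time) spectrogram "images",
    # trained pix2pix-style to map low- to high-definition spectrograms.
    class UNet2d(nn.Module):
        def __init__(self, ch=64):
            super().__init__()
            self.down1 = nn.Sequential(nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2))
            self.down2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2))
            self.up1 = nn.Sequential(nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1), nn.ReLU())
            self.up2 = nn.Sequential(nn.ConvTranspose2d(2 * ch, 1, 4, 2, 1), nn.Tanh())

        def forward(self, x):
            d1 = self.down1(x)
            d2 = self.down2(d1)
            u1 = self.up1(d2)
            return self.up2(torch.cat([u1, d1], dim=1))

    # Stage 2 discriminator: a patch-wise classifier that judges
    # (input, output) pairs, i.e. the low-definition spectrogram
    # concatenated with a real or generated high-definition one,
    # as in conditional GANs.
    class PatchDiscriminator(nn.Module):
        def __init__(self, ch=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, ch, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(2 * ch, 1, 4, 1, 1),   # patch-wise real/fake logits
            )

        def forward(self, lo, hi):
            return self.net(torch.cat([lo, hi], dim=1))

    # One generator update of the pix2pix objective: adversarial loss plus
    # a weighted L1 reconstruction loss (weight 100 is an assumption taken
    # from the original pix2pix paper, not from this work).
    G, D = UNet2d(), PatchDiscriminator()
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    lo = torch.rand(1, 1, 64, 128) * 2 - 1   # low-definition spectrogram in [-1, 1]
    hi = torch.rand(1, 1, 64, 128) * 2 - 1   # high-definition target in [-1, 1]
    fake = G(lo)
    pred = D(lo, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, hi)
    g_loss.backward()

The design point mirrored here is that the second stage is a conditional GAN: the discriminator sees the low-definition input together with either the real or the generated high-definition spectrogram, and the generator is trained with an adversarial term plus an L1 reconstruction term.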

Keywords

DNN-based voice conversion · U-Net · pix2pix · CNN · Two-stage conversion

Notes

Acknowledgment

Part of this work was supported by JSPS KAKENHI Grant Numbers JP16K13253 and JP17H00823.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Sou Miyamoto¹
  • Takashi Nose¹
  • Kazuyuki Hiroshiba²
  • Yuri Odagiri²
  • Akinori Ito¹

  1. Graduate School of Engineering, Tohoku University, Sendai-shi, Japan
  2. DWANGO Co., Ltd., Chuo-ku, Japan
