
WaveNet-Based Speech Synthesis Applied to Czech

A Comparison with the Traditional Synthesis Methods

Conference paper in Text, Speech, and Dialogue (TSD 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11107)

Abstract

WaveNet is a recently developed deep neural network for generating high-quality synthetic speech; it produces raw audio samples directly. This paper describes the first application of WaveNet-based speech synthesis to the Czech language. We used the basic WaveNet architecture. The durations of particular phones and the required fundamental frequency, both used for local conditioning, were estimated by additional LSTM networks. We conducted a MUSHRA listening test to compare WaveNet with two traditional synthesis methods: unit selection and HMM-based synthesis. Experiments were performed on four large speech corpora. Although, unlike the results reported in other studies, our implementation of WaveNet did not outperform the unit selection method, there is still considerable scope for improvement, whereas unit selection TTS has probably reached its quality limit.
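
The paper itself does not include code. As a purely illustrative aid, the sketch below shows what one WaveNet residual block with local conditioning might look like in TensorFlow, the framework mentioned in the notes. The layer sizes, kernel width, and the assumption that the conditioning features (the LSTM-predicted F0 and duration-derived phone features) are already upsampled to one vector per audio sample are hypothetical choices, not the authors' configuration.

# Illustrative sketch only (not the authors' code): one gated, dilated causal
# convolution block with local conditioning, after van den Oord et al. (2016).
import tensorflow as tf

def wavenet_block(x, cond, dilation, channels=64):
    # x    : (batch, time, channels) running audio representation
    #        (assumed to already have `channels` channels)
    # cond : (batch, time, cond_dim) local conditioning features, assumed
    #        upsampled to the audio sample rate (e.g. F0 and phone features)
    # Dilated causal convolutions for the filter and gate paths.
    f = tf.keras.layers.Conv1D(channels, 2, dilation_rate=dilation,
                               padding="causal")(x)
    g = tf.keras.layers.Conv1D(channels, 2, dilation_rate=dilation,
                               padding="causal")(x)
    # 1x1 convolutions inject the conditioning into both paths.
    f = f + tf.keras.layers.Conv1D(channels, 1)(cond)
    g = g + tf.keras.layers.Conv1D(channels, 1)(cond)
    # Gated activation unit: tanh(filter) * sigmoid(gate).
    z = tf.tanh(f) * tf.sigmoid(g)
    # 1x1 convolutions produce the residual and skip-connection outputs.
    residual = tf.keras.layers.Conv1D(channels, 1)(z) + x
    skip = tf.keras.layers.Conv1D(channels, 1)(z)
    return residual, skip

In the full model described by van den Oord et al., many such blocks with exponentially growing dilations are stacked, their skip outputs are summed and passed through further 1x1 convolutions, and a softmax over (typically mu-law quantised) sample values is predicted; synthesis then proceeds one sample at a time, conditioned on the duration and F0 trajectories.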

Notes

  1. TensorFlow is an open-source machine learning framework available at http://www.tensorflow.org.

Acknowledgment

This research was supported by the Czech Science Foundation (GACR), project No. GA16-04420S, and by a grant of the University of West Bohemia, project No. SGS-2016-039. Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the program “Projects of Large Research, Development, and Innovations Infrastructures” (CESNET LM2015042), is greatly appreciated.

Author information

Correspondence to Zdeněk Hanzlíček.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Hanzlíček, Z., Vít, J., Tihelka, D. (2018). WaveNet-Based Speech Synthesis Applied to Czech. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech, and Dialogue. TSD 2018. Lecture Notes in Computer Science, vol. 11107. Springer, Cham. https://doi.org/10.1007/978-3-030-00794-2_48

  • DOI: https://doi.org/10.1007/978-3-030-00794-2_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00793-5

  • Online ISBN: 978-3-030-00794-2

  • eBook Packages: Computer Science, Computer Science (R0)
