Adding Personality to Neutral Speech Synthesis Voices

  • Christopher G. Buchanan
  • Matthew P. Aylett
  • David A. Braude
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11096)

Abstract

A synthetic voice personifies the system using it. Previous work has shown that sub-corpora with different voice qualities (e.g. tense and lax) can be used to modify the perceived personality of a voice as well as to add expressive and emotional functionality. In this work we explore the use of LPC source/filter decomposition, together with modification of the residual, to artificially add voice quality sub-corpora to a voice without recording bespoke data. We evaluate this artificially enhanced voice against a baseline unit selection voice with pre-recorded sub-corpora. Although artificial modification impacts naturalness, it has three advantages: it adds emotional range to voices whose source data contains none, it mitigates the data sparsity caused by splitting a corpus into sub-corpora, and it produces significant effects on perceived emotion.
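The pipeline the abstract describes can be sketched in a few steps: estimate an LPC all-pole filter for a speech frame, inverse-filter to obtain the residual (an estimate of the glottal source), modify the residual, and resynthesize through the same filter. The sketch below uses numpy/scipy under illustrative assumptions; in particular, `add_breathiness` is a hypothetical residual transform standing in for the paper's voice-quality modifications, and the frame length and LPC order are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(frame, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin).

    Returns the inverse-filter polynomial a = [1, a_1, ..., a_order].
    The autocorrelation method guarantees a stable all-pole filter.
    """
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1 : n + order]
    a = np.array([1.0])
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this recursion step.
        acc = r[i] + np.dot(a[1:], r[i - 1 : 0 : -1])
        k = -acc / err
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]          # Levinson-Durbin order update
        err *= 1.0 - k * k
    return a

def decompose(frame, order=16):
    """Split a frame into an LPC filter and its residual (source estimate)."""
    a = lpc(frame, order)
    residual = lfilter(a, [1.0], frame)   # inverse (FIR whitening) filter
    return a, residual

def resynthesize(a, residual):
    """Drive the all-pole filter with a (possibly modified) residual."""
    return lfilter([1.0], a, residual)

def add_breathiness(residual, noise_gain=0.4, rng=None):
    """Hypothetical residual modification: mix in aspiration-like noise.

    An illustrative stand-in for voice-quality transforms, not the
    authors' actual method.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.standard_normal(len(residual)) * residual.std() * noise_gain
    return residual + noise
```

With an unmodified residual the analysis/synthesis round trip is an identity (the all-pole filter exactly inverts the whitening filter), which is what makes the residual a convenient place to inject voice-quality changes without re-estimating the spectral envelope.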

Keywords

Voice modification · Glottal signal modelling · Glottal vocoding · Speech synthesis · Unit selection · Expressive speech synthesis · Emotion · Prosody · Artificial personality

Acknowledgements

This work was supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 645378 (Aria VALUSPA).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Christopher G. Buchanan (1)
  • Matthew P. Aylett (1)
  • David A. Braude (1)

  1. CereProc Ltd., Edinburgh, UK
