Prosodic Reading Style Simulation for Text-to-Speech Synthesis

  • Oliver Jokisch
  • Hans Kruschke
  • Rüdiger Hoffmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3784)


Simulating different reading styles, mainly by adapting prosodic parameters, can improve the naturalness of synthetic speech and support more intelligent human-machine interaction. This article investigates the reading styles News and Tale as examples. For comparison, all examined texts contained the same genre-neutral paragraphs, which were read without a specific style instruction (Normal), but also faster, slower, rather monotonously, or more emotionally, yielding the corresponding artificial styles.

The intonation and duration patterns measured for each style control a diphone synthesizer (mapped contours); in addition, the patterns are used to train a neural network (NN) model.
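The mapping the NN model learns can be illustrated with a minimal sketch. This is not the authors' implementation: the feature layout, the toy data, and the network size are all invented here. The idea is the same, though: train a small feed-forward network to predict prosodic targets (here, F0 in Hz and syllable duration in ms) from linguistic features of each syllable.

```python
# Minimal sketch (not the paper's model): a tiny feed-forward network
# mapping per-syllable linguistic features to prosodic targets,
# trained by full-batch gradient descent on invented toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy features per syllable: [stress flag, position in phrase (0..1), phrase-final flag]
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.3, 0.0],
    [1.0, 0.6, 0.0],
    [0.0, 1.0, 1.0],
])
# Toy targets per syllable: [F0 in Hz, duration in ms]
Y = np.array([
    [180.0, 120.0],
    [150.0,  90.0],
    [170.0, 110.0],
    [130.0, 200.0],
])

# Normalize targets so one learning rate suits both output dimensions
Y_mean, Y_std = Y.mean(axis=0), Y.std(axis=0)
Yn = (Y - Y_mean) / Y_std

# One hidden layer with tanh, linear output layer
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

lr = 0.05
for step in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted (normalized) prosody
    E = P - Yn                        # prediction error
    # Backpropagation of the mean-squared error
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Denormalized predictions should approximate Y after training
pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * Y_std + Y_mean
print(np.round(pred, 1))
```

In a full system, the predicted F0 and duration values would then drive the diphone synthesizer in place of the directly mapped contours.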

In two separate listening tests, stimuli presented either as the original signal/style or with mapped or NN-generated prosodic contours were evaluated. The results show that both original utterances and artificial styles are largely perceived as their intended reading styles. Some reciprocal confusions indicate similarities between styles such as News and Fast, Tale and Slow, and Tale and Expressive; these confusions are more likely for synthetic speech. To produce a complex style such as Tale, different features of the prosodic variations Slow and Expressive are combined. The training method for the synthetic styles requires further improvement.
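The reciprocal confusions described above are read off a style confusion matrix tallied from listener responses. A small sketch, with entirely hypothetical responses, shows the bookkeeping:

```python
# Illustrative sketch (responses invented): tallying listening-test answers
# into an intended-vs-perceived style confusion matrix.
from collections import Counter

styles = ["News", "Tale", "Normal", "Fast", "Slow", "Expressive"]

# (intended style, perceived style) pairs from a hypothetical listening test
responses = [
    ("News", "News"), ("News", "Fast"), ("News", "News"),
    ("Tale", "Tale"), ("Tale", "Slow"), ("Tale", "Expressive"),
    ("Fast", "News"), ("Slow", "Tale"),
]

counts = Counter(responses)
matrix = [[counts[(i, p)] for p in styles] for i in styles]

# Print rows: intended style, columns: perceived style
print("intended\\perceived " + " ".join(f"{s:>10}" for s in styles))
for s, row in zip(styles, matrix):
    print(f"{s:>18} " + " ".join(f"{c:>10}" for c in row))
```

Off-diagonal mass in mirrored cells (e.g. News perceived as Fast and Fast perceived as News) is what the article calls a reciprocal confusion.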


Keywords: Dynamic Time Warping, Prosodic Feature, Synthetic Speech, Pause Duration, Original Speech





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Oliver Jokisch 1
  • Hans Kruschke 1
  • Rüdiger Hoffmann 1
  1. Laboratory of Acoustics and Speech Communication, Dresden University of Technology, Dresden, Germany
