A Dynamic Speech Breathing System for Virtual Characters

  • Ulysses Bernardet
  • Sin-hwa Kang
  • Andrew Feng
  • Steve DiPaola
  • Ari Shapiro
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking – speech breathing – into account. We believe that integrating dynamic speech breathing into virtual characters can contribute significantly to their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic, and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real time that control the virtual character’s anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing). The system is implemented in Python, offers a graphical user interface for easy parameter control, and simultaneously controls the visual and auditory aspects of speech breathing through the integration of the character animation system SmartBody [16] and the audio synthesis platform SuperCollider [12]. Beyond contributing to realism, the system allows for the flexible generation of a wide range of speech breathing behaviors that can convey information about the speaker, such as mood, age, and health.
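To make the idea concrete, the following is a minimal, purely illustrative Python sketch of how a text-driven, parameter-controlled breathing signal of this kind could be generated: the text is split into breath groups, and each group receives a quick inhalation followed by a slow exhalation whose duration scales with the number of words. All function and parameter names (breath_group_signal, words_per_second, rest_volume, etc.) are hypothetical placeholders and do not reflect the paper's actual implementation or the SmartBody and SuperCollider APIs.

import math
import re

def breath_group_signal(num_words, inhale_time=0.5, words_per_second=2.5,
                        rest_volume=0.4, peak_volume=1.0, dt=0.02):
    """Return (time, volume) samples for one breath group: fast inhale, slow exhale."""
    speak_time = num_words / words_per_second
    samples = []
    t = 0.0
    # Fast inhalation: half-cosine ramp from rest volume up to peak volume.
    while t < inhale_time:
        phase = t / inhale_time
        vol = rest_volume + (peak_volume - rest_volume) * (1 - math.cos(math.pi * phase)) / 2
        samples.append((t, vol))
        t += dt
    # Slow exhalation during speech: roughly linear decay back to rest volume.
    while t < inhale_time + speak_time:
        phase = (t - inhale_time) / speak_time
        samples.append((t, peak_volume - (peak_volume - rest_volume) * phase))
        t += dt
    return samples

def speech_breathing(text):
    """Concatenate breath-group signals for a text split at punctuation marks."""
    phrases = [p for p in re.split(r"[.,;:!?]", text) if p.strip()]
    signal, offset = [], 0.0
    for phrase in phrases:
        for t, vol in breath_group_signal(num_words=len(phrase.split())):
            signal.append((offset + t, vol))
        offset = signal[-1][0]
    return signal  # e.g. values that could drive thorax/abdomen deformation frame by frame

if __name__ == "__main__":
    sig = speech_breathing("Human speech production requires the dynamic "
                           "regulation of air through the vocal system.")
    print(f"{len(sig)} samples spanning {sig[-1][0]:.2f} seconds")

In the system described in the paper, such signals are additionally shaped by low- and high-level parameters and drive both the character's anatomy and the accompanying audio synthesis.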

Keywords

Speech breathing · Speaking · Breathing · Virtual character · Animation

References

  1.
  2.
  3.
  4.
  5. Bernardet, U., Schiphorst, T., Adhia, D., Jaffe, N., Wang, J., Nixon, M., Alemi, O., Phillips, J., DiPaola, S., Pasquier, P.: m+m: A novel middleware for distributed, movement based interactive multimedia systems. In: Proceedings of the 3rd International Symposium on Movement and Computing - MOCO 2016, pp. 1–21. ACM Press, New York (2016). http://dl.acm.org/citation.cfm?doid=2948910.2948942
  6. Gebhard, P., Schröder, M., Charfuelan, M., Endres, C., Kipp, M., Pammi, S., Rumpler, M., Türk, O.: IDEAS4Games: building expressive virtual characters for computer games. In: Prendinger, H., Lester, J., Ishizuka, M. (eds.) IVA 2008. LNCS, vol. 5208, pp. 426–440. Springer, Heidelberg (2008). doi:10.1007/978-3-540-85483-8_43
  7. Henderson, A., Goldman-Eisler, F., Skarbek, A.: Temporal Patterns of Cognitive Activity and Breath Control in Speech. Language and Speech 8(4), 236–242 (1965)
  8. Hixon, T.J., Goldman, M.D., Mead, J.: Kinematics of the Chest Wall during Speech Production: Volume Displacements of the Rib Cage, Abdomen, and Lung. Journal of Speech Language and Hearing Research 16(1), 78 (1973). http://jslhr.pubs.asha.org/article.aspx?doi=10.1044/jshr.1601.78
  9. Howard, I.S., Messum, P.: Modeling motor pattern generation in the development of infant speech production. In: 8th International Seminar on Speech Production, pp. 165–168 (2008)
  10. Huber, J.E., Stathopoulos, E.T.: Speech Breathing Across the Life Span and in Disease. In: The Handbook of Speech Production, pp. 11–33 (2015). http://dx.doi.org/10.1002/9781118584156.ch2
  11. Ladd, D.R.: Declination: a review and some hypotheses. Phonology 1(1), 53–74 (1984)
  12. McCartney, J.: Rethinking the Computer Music Language: SuperCollider. Computer Music Journal 26(4), 61–68 (2002). http://www.mitpressjournals.org/doi/10.1162/014892602320991383
  13. McFarland, D.H., Smith, A.: Effects of vocal task and respiratory phase on prephonatory chest wall movements. Journal of Speech and Hearing Research 35(5), 971–982 (1992). http://www.ncbi.nlm.nih.gov/pubmed/1447931
  14. Rickel, J., André, E., Badler, N., Cassell, J.: Creating Interactive Virtual Humans: Some Assembly Required. IEEE Intelligent Systems (2002)
  15. Sanders, B., Dilorenzo, P., Zordan, V., Bakal, D.: Toward anatomical simulation for breath training in mind/body medicine. In: Magnenat-Thalmann, N., Zhang, J.J., Feng, D.D. (eds.) Recent Advances in the 3D Physiological Human. Springer (2009). http://graphics.cs.ucr.edu/papers/sanders:2008:TAS.pdf
  16. Shapiro, A.: Building a character animation system. In: Motion in Games, pp. 98–109 (2011)
  17. Tsoli, A., Mahmood, N., Black, M.J.: Breathing life into shape. ACM Transactions on Graphics 33(4), 1–11 (2014)
  18. Veltkamp, R.C., Piest, B.: A physiological torso model for realistic breathing simulation. In: Magnenat-Thalmann, N. (ed.) 3DPH 2009. LNCS, vol. 5903, pp. 84–94. Springer, Heidelberg (2009). doi:10.1007/978-3-642-10470-1_8
  19. Winkworth, A.L., Davis, P.J., Adams, R.D., Ellis, E.: Breathing patterns during spontaneous speech. Journal of Speech and Hearing Research 38(1), 124–144 (1995). http://www.ncbi.nlm.nih.gov/pubmed/7731204
  20. Włodarczak, M., Heldner, M., Edlund, J.: Breathing in Conversation: An Unwritten History (2015)
  21. Zordan, V.B., Celly, B., Chiu, B., DiLorenzo, P.C.: Breathe easy. In: Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation - SCA 2004, p. 29. ACM, New York (2004)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ulysses Bernardet (1)
  • Sin-hwa Kang (2)
  • Andrew Feng (2)
  • Steve DiPaola (1)
  • Ari Shapiro (2)
  1. School of Interactive Arts and Technology, Simon Fraser University, Vancouver, Canada
  2. Institute for Creative Technologies, University of Southern California, Los Angeles, USA
