
Automated Gesturing for Embodied Agents

  • Goranka Zoric
  • Karlo Smid
  • Igor S. Pandzic
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4012)

Abstract

In this paper we present our recent results in automatic facial gesturing of graphically embodied animated agents. In the first case, a conversational agent is driven by speech in an automatic lip-sync process: the speech signal is analyzed and the corresponding lip movements are determined from it. The second method provides a virtual speaker capable of reading plain English text and rendering it as speech accompanied by appropriate facial gestures. The proposed statistical model for generating the virtual speaker's facial gestures can also be applied as an extension of the lip-synchronization process to obtain speech-driven facial gesturing. In that case the statistical model is triggered by the prosody of the input speech rather than by lexical analysis of the input text.
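As a rough illustration of the prosody-triggered gesturing idea described in the abstract, the following Python sketch shows one minimal way a statistical gesture model could be driven by speech prosody. It is not the authors' implementation: the gesture inventory, the probability table, the thresholds, and the names (classify_prosody, sample_gesture, GESTURE_PROBS) are all illustrative assumptions.

    # Minimal sketch (illustrative, not from the paper): a statistical facial-gesture
    # trigger driven by frame-level speech prosody (pitch and energy).
    import random

    # Hypothetical gesture inventory for the virtual speaker.
    GESTURES = ["nod", "eyebrow_raise", "blink", "head_tilt", "none"]

    # Assumed conditional probabilities P(gesture | prosodic class); values are made up.
    GESTURE_PROBS = {
        "stressed":   {"nod": 0.35, "eyebrow_raise": 0.30, "blink": 0.10, "head_tilt": 0.10, "none": 0.15},
        "unstressed": {"nod": 0.05, "eyebrow_raise": 0.05, "blink": 0.25, "head_tilt": 0.05, "none": 0.60},
    }

    def classify_prosody(pitch_hz: float, energy: float) -> str:
        """Crudely classify one speech frame by its prosody (thresholds are illustrative)."""
        return "stressed" if pitch_hz > 180.0 and energy > 0.5 else "unstressed"

    def sample_gesture(prosody_class: str) -> str:
        """Draw one facial gesture from the statistical model for the given prosodic class."""
        probs = GESTURE_PROBS[prosody_class]
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    if __name__ == "__main__":
        # Fake per-frame prosody stream: (pitch in Hz, normalized energy).
        frames = [(120.0, 0.2), (210.0, 0.8), (190.0, 0.6), (100.0, 0.1)]
        for pitch, energy in frames:
            cls = classify_prosody(pitch, energy)
            print(f"pitch={pitch:6.1f} energy={energy:.2f} -> {cls:10s} gesture={sample_gesture(cls)}")

In the same spirit, the lexical-analysis variant would replace classify_prosody with a classifier over the text being read, while keeping the same gesture-sampling step.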



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Goranka Zoric (1)
  • Karlo Smid (2)
  • Igor S. Pandzic (1)
  1. Department of Telecommunications, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
  2. Ericsson Nikola Tesla, Zagreb, Croatia
