APML, a Markup Language for Believable Behavior Generation

  • Berardina De Carolis
  • Catherine Pelachaud
  • Isabella Poggi
  • Mark Steedman
Part of the Cognitive Technologies book series (COGTECH)


Developing an embodied conversational agent able to exhibit humanlike behavior while communicating with other virtual or human agents requires enriching the agent's dialogue with non-verbal information. Our agent, Greta, is composed of two components: a Mind and a Body. Her Mind reflects her personality, her social intelligence, and her emotional reactions to events occurring in the environment. Her Body corresponds to her physical appearance and is able to display expressive behaviors. We designed a Mind-Body interface that takes as input a specification of a discourse plan in an XML language (DPML), enriches this plan with the communicative meanings to be attached to it, and produces an input to the Body in a new XML language (APML). Moreover, we have developed a language to describe facial expressions: it combines basic facial expressions with operators to create complex ones. The purpose of this chapter is to describe these languages and to illustrate our approach to generating the behavior of an agent that acts consistently with her goals and with the context of the interaction.
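To make the pipeline sketched above concrete, the following Python sketch parses a small APML-style fragment and lists the communicative tags attached to the text. This is only an illustration of the idea of meaning-bearing markup wrapped around dialogue text: the tag and attribute names used here (performative, affective, theme/rheme, emphasis) are assumptions modeled on the languages described in this chapter, not the exact APML DTD.

```python
import xml.etree.ElementTree as ET

# Hypothetical APML-style fragment (tag and attribute names are
# illustrative assumptions, not the exact APML specification).
APML_DOC = """<apml>
  <performative type="inform">
    <affective type="joy">
      <theme>Good morning,</theme>
      <rheme>your results <emphasis level="strong">are in</emphasis>.</rheme>
    </affective>
  </performative>
</apml>"""

def communicative_functions(doc):
    """Return (tag, attributes) pairs for every meaning-bearing element,
    skipping the document root. A Body component would map each pair to
    a non-verbal signal (facial expression, gaze, pitch accent)."""
    root = ET.fromstring(doc)
    return [(elem.tag, dict(elem.attrib))
            for elem in root.iter()
            if elem.tag != "apml"]

for tag, attrs in communicative_functions(APML_DOC):
    print(tag, attrs)
```

The point of the sketch is the division of labor the chapter describes: the Mind decides *what* to communicate (the tags and their attributes), while the Body decides *how* to render each communicative function as an expressive behavior.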


Keywords: Facial Expression Markup Language · Communicative Goal · Conversational Agent · Pitch Accent





Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Berardina De Carolis (1)
  • Catherine Pelachaud (2)
  • Isabella Poggi (3)
  • Mark Steedman (4)
  1. Dipartimento di Informatica, University of Bari, Bari, Italy
  2. LINC - Paragraphe, IUT of Montreuil - University of Paris, Paris, France
  3. Dipartimento di Educazione, University of Rome Three, Rome, Italy
  4. School of Informatics, University of Edinburgh, Edinburgh, UK
