Abstract
Embodied Conversational Agents (ECAs) are life-like CG characters that interact with human users in face-to-face conversation. Achieving natural multi-modal conversation requires sophisticated ECA systems composed of assemblies of diverse functions, which makes them difficult for an individual research group to develop. To address this problem, we are developing a Generic ECA Framework that integrates these assemblies with each other seamlessly. It consists of a low-level communication platform, a high-level protocol, and a set of API libraries. With such a common framework, ECAs can be prototyped rapidly and research results can be shared more easily. This paper presents the concepts of the framework, the protocol, and a script language that defines the behaviours of an ECA.
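The abstract describes a low-level communication platform over which the framework's function assemblies (speech recognition, dialogue management, animation, and so on) exchange messages. As a rough illustration only, the sketch below shows a minimal publish/subscribe blackboard in Python; the class and method names (`Blackboard`, `subscribe`, `publish`) and the message-type strings are hypothetical and do not reflect the framework's actual API, which is based on the OpenAIR routing protocol.

```python
# Hypothetical sketch of a blackboard-style message platform, in the
# spirit of the low-level communication layer described in the abstract.
# All names here are illustrative, not the framework's actual API.
from collections import defaultdict
from typing import Callable, Dict, List

class Blackboard:
    """Routes typed messages from producer components to subscribers."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, message_type: str, handler: Callable[[dict], None]) -> None:
        # A component registers interest in one message type.
        self._subscribers[message_type].append(handler)

    def publish(self, message_type: str, payload: dict) -> None:
        # Deliver the payload to every subscriber of this type.
        for handler in self._subscribers[message_type]:
            handler(payload)

if __name__ == "__main__":
    # Example: a speech-input component posts recognised text; an
    # animation component would consume it to drive the character.
    board = Blackboard()
    received = []
    board.subscribe("perception.speech", lambda msg: received.append(msg["text"]))
    board.publish("perception.speech", {"text": "hello"})
    print(received)  # ['hello']
```

The design point such a layer buys is decoupling: each function assembly only needs to agree on message types and payloads, not on each other's internals, which is what allows components from different research groups to be integrated.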
Copyright information
© 2008 Springer-Verlag London Limited
Cite this paper
Huang, HH., Cerekovic, A., Pandzic, I.S., Nakano, Y., Nishida, T. (2008). Scripting Human-Agent Interactions in a Generic ECA Framework. In: Ellis, R., Allen, T., Petridis, M. (eds) Applications and Innovations in Intelligent Systems XV. SGAI 2007. Springer, London. https://doi.org/10.1007/978-1-84800-086-5_8
Publisher Name: Springer, London
Print ISBN: 978-1-84800-085-8
Online ISBN: 978-1-84800-086-5
eBook Packages: Computer Science, Computer Science (R0)