
Journal on Multimodal User Interfaces, Volume 1, Issue 1, pp 41–48

An agent based multicultural tour guide system with nonverbal user interface

  • Hung-Hsuan Huang
  • Kateryna Tarasenko
  • Toyoaki Nishida
  • Aleksandra Cerekovic
  • Vjekoslav Levacic
  • Goranka Zoric
  • Igor S. Pandzic
  • Yukiko Nakano

Abstract

Advances in transportation and computer networks are making the world increasingly international and raising the frequency of communication between people who speak different languages and exhibit different nonverbal behaviors. For embodied conversational agent (ECA) systems to communicate well with their human users, their ability to accommodate cultural differences has become important. Although various excellent ECA systems have been developed and proposed, cross-cultural communication issues have seldom been addressed by researchers. This paper describes a short-term project exploring the possibility of rapidly building multicultural and multimodal ECA interfaces for a tour guide system by using a generic framework that connects their functional blocks.
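The "generic framework connecting functional blocks" mentioned above can be pictured as a blackboard-style message hub that routes typed messages between loosely coupled ECA components (sensing, dialogue management, animation). The sketch below is illustrative only; the class, message types, and handler names are assumptions, not the project's actual API.

```python
# Minimal sketch of a blackboard-style hub connecting ECA functional blocks.
# All names here (Blackboard, "input.speech", etc.) are hypothetical.
from collections import defaultdict
from typing import Callable, Dict, List

Message = Dict[str, str]


class Blackboard:
    """Routes each published message to the components subscribed to its type."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Message], None]]] = defaultdict(list)

    def subscribe(self, msg_type: str, handler: Callable[[Message], None]) -> None:
        # A component (e.g. a dialogue manager) registers interest in one message type.
        self._subscribers[msg_type].append(handler)

    def publish(self, msg_type: str, message: Message) -> None:
        # A component (e.g. a speech recognizer) emits a message; the hub fans it out.
        for handler in self._subscribers[msg_type]:
            handler(message)


# Example wiring: a speech-input component feeding a guide-agent component.
board = Blackboard()
log: List[str] = []
board.subscribe("input.speech", lambda m: log.append(f"guide agent heard: {m['text']}"))
board.publish("input.speech", {"text": "Where is the main hall?", "lang": "en"})
```

Because components interact only through message types, a culture-specific module (say, a Croatian or Japanese nonverbal-behavior generator) can be swapped in without changing the rest of the system.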

Keywords

Embodied conversational agent · Distributed system · User interface · Non-verbal interaction



Copyright information

© OpenInterface Association 2007

Authors and Affiliations

  • Hung-Hsuan Huang (1)
  • Kateryna Tarasenko (1)
  • Toyoaki Nishida (1)
  • Aleksandra Cerekovic (2)
  • Vjekoslav Levacic (2)
  • Goranka Zoric (2)
  • Igor S. Pandzic (2)
  • Yukiko Nakano (3)

  1. Graduate School of Informatics, Kyoto University, Japan
  2. Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
  3. Department of Computer, Information and Communication Sciences, Tokyo University of Agriculture & Technology, Japan
