AI & SOCIETY, Volume 23, Issue 2, pp 201–212

WOZ experiments for understanding mutual adaptation

  • Yong Xu
  • Kazuhiro Ueda
  • Takanori Komatsu
  • Takeshi Okadome
  • Takashi Hattori
  • Yasuyuki Sumi
  • Toyoaki Nishida
Original Article

Abstract

A robot that is easy to teach must not only be able to adapt to humans but must also be easy for humans to adapt to. To develop a robot with this mutual adaptation ability, we believe it is beneficial to first observe the mutual adaptation behaviors that occur in human–human communication. In this paper, we propose a human–human WOZ (Wizard-of-Oz) experimental setting that helps us observe and understand how the process of mutual adaptation unfolds between human beings in nonverbal communication. By analyzing the experimental results, we obtained three important findings: alignment-based action, symbol-emergent learning, and environmental learning.

Keywords

Hand gesture · Nonverbal communication · Human user · Mutual adaptation · Robot experiment


Copyright information

© Springer-Verlag London Limited 2007

Authors and Affiliations

  • Yong Xu (1)
  • Kazuhiro Ueda (2)
  • Takanori Komatsu (3)
  • Takeshi Okadome (4)
  • Takashi Hattori (4)
  • Yasuyuki Sumi (1)
  • Toyoaki Nishida (1)

  1. Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan
  2. Department of General System Studies, The University of Tokyo, Tokyo, Japan
  3. Department of Media Architecture, Future University-Hakodate, Hokkaido, Japan
  4. Innovative Communication Laboratory, NTT Communication Science Laboratories, Keihanna Science City, Kyoto, Japan
