
Applied Intelligence, Volume 32, Issue 2, pp 148–163

Controlling gaze with an embodied interactive control architecture

  • Yasser Mohammad
  • Toyoaki Nishida
Article

Abstract

Human-Robot Interaction (HRI) is a growing field of research that targets the development of robots that are easy to operate, more engaging, and more entertaining. Natural human-like behavior is considered by many researchers an important goal of HRI. Research in human-human communication has revealed that gaze control is one of the major interactive behaviors used by humans in close encounters. Human-like gaze control is therefore one of the important behaviors a robot should have in order to interact naturally with human partners. Developing natural, human-like gaze control that integrates easily with the robot's other behaviors requires a flexible robotic architecture. Most available robotic architectures were developed with autonomous robots in mind. Although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which adds further challenges to the design of the robotic architectures that support them. This paper reports the development and evaluation of two gaze controllers using a new cross-platform robotic architecture for HRI applications called EICA (the Embodied Interactive Control Architecture), which was designed to meet these challenges, emphasizing how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel Floating Point Genetic Algorithm (FPGA) for learning the parameters of the various processes of the gaze controller.
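The Floating Point Genetic Algorithm named above belongs to the well-known family of real-coded genetic algorithms, which evolve vectors of floating-point parameters directly rather than bit strings. The sketch below is a minimal illustration of that general technique, not the authors' implementation: the function name `fpga_optimize`, the choice of blend crossover and Gaussian mutation, and all parameter values are assumptions made for the example.

```python
import random

def fpga_optimize(fitness, bounds, pop_size=30, generations=100,
                  crossover_rate=0.9, mutation_rate=0.1, sigma=0.05):
    """Minimal real-coded (floating-point) GA sketch.

    fitness: callable mapping a parameter vector (list of floats) to a
             score to be maximized, e.g. similarity to human gaze behavior.
    bounds:  list of (low, high) pairs, one per parameter.
    """
    clip = lambda x, lo, hi: max(lo, min(hi, x))

    # Random initial population, sampled uniformly within the bounds.
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]

    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 5]  # keep the best 20% as parents
        children = list(elite)
        while len(children) < pop_size:
            a, b = random.sample(elite, 2)
            if random.random() < crossover_rate:
                # Blend (arithmetic) crossover: a random convex
                # combination of the two parents, gene by gene.
                w = random.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            else:
                child = list(a)
            # Gaussian mutation, scaled to each parameter's range and
            # clipped back into the feasible region.
            child = [clip(x + random.gauss(0, sigma * (hi - lo)), lo, hi)
                     if random.random() < mutation_rate else x
                     for x, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = children

    return max(pop, key=fitness)

# Toy usage: tune two hypothetical gaze-controller gains in [0, 1] against
# an illustrative fitness with a known optimum at (0.3, 0.7).
best = fpga_optimize(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2),
                     bounds=[(0.0, 1.0), (0.0, 1.0)])
```

The key property this sketch shares with any floating-point GA is that crossover and mutation operate on real values directly, so continuous controller parameters can be searched without the quantization introduced by binary encodings.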

Keywords

Robotic architectures · Action integration · HRI · Gaze control



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
