Controlling gaze with an embodied interactive control architecture
Human-Robot Interaction (HRI) is a growing field of research that aims to develop robots that are easier to operate, more engaging, and more entertaining. Natural, human-like behavior is considered by many researchers an important goal of HRI. Research on human-human communication has revealed that gaze control is one of the major interactive behaviors humans use in close encounters. Human-like gaze control is therefore one of the important behaviors a robot should have in order to interact naturally with human partners. Developing natural, human-like gaze control that integrates easily with the robot's other behaviors requires a flexible robotic architecture. Most available robotic architectures, however, were developed with autonomous robots in mind. Although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which places additional demands on the architectures that support them. This paper reports the development and evaluation of two gaze controllers built on a new cross-platform robotic architecture for HRI applications called EICA (the Embodied Interactive Control Architecture), which was designed to meet these challenges, emphasizing how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel Floating Point Genetic Algorithm (FPGA) for learning the parameters of the various processes of the gaze controller.
Keywords: Robotic architectures · Action integration · HRI · Gaze control
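To illustrate the class of optimizer the abstract refers to, the sketch below shows a generic floating-point genetic algorithm: real-valued chromosomes evolved by tournament selection, arithmetic (blend) crossover, Gaussian mutation, and elitism. This is a minimal illustration of the general technique, not the paper's actual FPGA; the function name, operators, and hyperparameters are assumptions, and the sphere function stands in for the gaze-controller parameter-fitting objective.

```python
import random

def fpga_optimize(fitness, dim, bounds, pop_size=30, generations=100,
                  crossover_rate=0.9, mutation_rate=0.1, sigma=0.1):
    """Minimize `fitness` over real-valued parameter vectors within `bounds`.

    Illustrative floating-point GA sketch; not the controller's actual learner.
    """
    lo, hi = bounds
    clip = lambda g: min(max(g, lo), hi)
    # Random initial population of floating-point chromosomes.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)
        children = [list(c) for c in ranked[:2]]   # elitism: keep the two best
        while len(children) < pop_size:
            # Tournament selection of two parents.
            p1 = min(random.sample(ranked, 3), key=fitness)
            p2 = min(random.sample(ranked, 3), key=fitness)
            if random.random() < crossover_rate:
                # Arithmetic (blend) crossover on real-valued genes.
                a = random.random()
                child = [a * g1 + (1 - a) * g2 for g1, g2 in zip(p1, p2)]
            else:
                child = list(p1)
            # Gaussian mutation, clipped back into the search bounds.
            child = [clip(g + random.gauss(0, sigma))
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Example: minimize a sphere function as a stand-in objective.
random.seed(0)
best = fpga_optimize(lambda x: sum(g * g for g in x), dim=3, bounds=(-1.0, 1.0))
```

Encoding chromosomes directly as floating-point vectors, rather than bit strings, avoids the precision loss and decoding step of binary GAs, which is the usual motivation for floating-point variants when tuning continuous process parameters.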