Robot Head Motion Control with an Emphasis on Realism of Neck–Eye Coordination during Object Tracking
Abstract
An important aspect of present-day humanoid robot research is to make such robots look realistic and human-like, both in appearance and in motion and mannerism. In this paper, we focus our study on advanced control leading to realistic motion coordination for a humanoid robot’s neck and eyes while tracking an object. The motivating application for such control is conversational robotics, in which a robot head “actor” should be able to detect and make eye contact with a human subject. In such a scenario, the 3D position and orientation of an object of interest in space should be tracked by the redundant head–eye mechanism partly through the neck and partly through the eyes. We propose an optimization approach, combined with real-time visual feedback, to generate realistic robot motion and make it robust. We also present experimental results showing that the neck–eye motion obtained from the proposed algorithm is realistic compared to the head–eye motion of humans.
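To make the redundancy-resolution idea concrete, the sketch below shows a minimal, illustrative version of splitting a desired gaze angle between a neck joint and an eye joint by minimizing a weighted quadratic cost subject to the gaze constraint. The weights, eye-range limit, and one-axis simplification are assumptions for illustration only, not the paper’s actual optimization formulation or parameters.

```python
import numpy as np

def split_gaze(theta, w_neck=4.0, w_eye=1.0, eye_limit=np.radians(35)):
    """Split a desired gaze (pan) angle between neck and eye joints.

    Minimizes w_neck*n**2 + w_eye*e**2 subject to n + e = theta, which
    has the closed-form solution below; the eye angle is then clamped
    to its mechanical range and the neck absorbs the remainder.
    (Weights and limit are illustrative placeholders.)
    """
    # Closed-form minimizer: the cheaper joint (eye) takes the larger share
    e = w_neck / (w_neck + w_eye) * theta
    e = float(np.clip(e, -eye_limit, eye_limit))  # respect eye joint range
    n = theta - e                                 # neck covers the rest
    return n, e

# A small target angle is tracked mostly by the eyes;
# a large one forces the neck to contribute more.
print(split_gaze(np.radians(10)))
print(split_gaze(np.radians(60)))
```

With a higher neck weight the eyes saccade toward small targets while the neck follows only for large gaze shifts, which is the qualitative human-like behavior the coordination scheme aims for.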
Keywords
Humanoid robots · Human–robot interaction · Eye–neck coordination · Robot head · Human tracking