
Design and Implementation of Personality of Humanoids in Human Humanoid Non-verbal Interaction

  • Hiroshi G. Okuno
  • Kazuhiro Nakadai
  • Hiroaki Kitano
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2718)

Abstract

Controlling robot behaviors has become increasingly important as active perception for robots, in particular active audition in addition to active vision, has made remarkable progress. We are studying how to create social humanoids that perform actions empowered by real-time audio-visual tracking of multiple talkers. In this paper, we present personality as a means of controlling non-verbal behaviors. It consists of two dimensions, dominance vs. submissiveness and friendliness vs. hostility, based on the Interpersonal Theory in psychology. The upper-torso humanoid SIG, equipped with a real-time audio-visual multiple-talker tracking system, is used as a testbed for social interaction. As a companion robot with a friendly personality, SIG turns toward a new sound source to show its attention, while with a hostile personality it turns away from the new source. As a receptionist robot with a dominant personality, SIG keeps its attention focused on the current customer, while with a submissive personality its attention to the current customer is interrupted by a newcomer.
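
The four behaviors described above reduce to a simple decision rule over the two personality axes. The following Python sketch is a minimal illustration of that rule, not the authors' implementation; the names `Personality`, `AttentionAction`, and `on_new_sound_source`, and the `engaged_with_talker` flag standing in for the receptionist scenario, are all assumptions introduced here.

```python
# Illustrative sketch of the personality-driven attention policy; not from the paper.
from dataclasses import dataclass
from enum import Enum, auto

class AttentionAction(Enum):
    TURN_TOWARD_NEW = auto()     # orient toward the new sound source (show attention)
    TURN_AWAY_FROM_NEW = auto()  # avert from the new sound source
    KEEP_CURRENT = auto()        # stay focused on the current talker

@dataclass
class Personality:
    dominance: float     # +1.0 = dominant ... -1.0 = submissive
    friendliness: float  # +1.0 = friendly ... -1.0 = hostile

def on_new_sound_source(p: Personality, engaged_with_talker: bool) -> AttentionAction:
    """Choose a non-verbal response when the tracker reports a new sound source."""
    if engaged_with_talker:
        # Receptionist scenario: dominance decides whether the current
        # interaction may be interrupted by the newcomer.
        if p.dominance > 0:
            return AttentionAction.KEEP_CURRENT
        return AttentionAction.TURN_TOWARD_NEW
    # Companion scenario: friendliness decides whether attention is shown.
    if p.friendliness > 0:
        return AttentionAction.TURN_TOWARD_NEW
    return AttentionAction.TURN_AWAY_FROM_NEW

# Example: a friendly companion (not engaged) turns toward the new talker,
# while a dominant receptionist keeps serving the current customer.
assert on_new_sound_source(Personality(0.0, 0.8), False) == AttentionAction.TURN_TOWARD_NEW
assert on_new_sound_source(Personality(0.9, 0.0), True) == AttentionAction.KEEP_CURRENT
```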

Keywords

Sound Source · Humanoid Robot · Visual Stream · Automatic Speech Recognition System · Human Robot Interaction



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Hiroshi G. Okuno 1, 2
  • Kazuhiro Nakadai 2
  • Hiroaki Kitano 2, 3

  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
  2. Kitano Symbiotic Systems Project, ERATO, National Institute of Advanced Industrial Science and Technology, Shibuya, Tokyo, Japan
  3. Sony Computer Science Laboratories, Inc., Shinagawa, Tokyo
