
International Journal of Social Robotics, Volume 5, Issue 2, pp 153–169

Guidelines for Contextual Motion Design of a Humanoid Robot

  • Jinyung Jung
  • Takayuki Kanda
  • Myung-Suk Kim

Abstract

The motion of a humanoid robot is one of the most intuitive communication channels for human-robot interaction. Previous studies have provided related knowledge for generating the speech-based motions of virtual agents on screens. However, physical humanoid robots share time and space with people, so numerous speechless situations arise in which the robot cannot be hidden from users. We therefore need to understand the appropriate roles of motion design for a humanoid robot across many different situations. We distilled this knowledge into motion-design guidelines based on iterative findings from design case studies and a literature review. The guidelines divide into two main roles, covering speech-based and speechless situations; the latter is further subdivided into idling, observing, listening, expecting, and mood-setting, which are distributed across different levels of intention. A series of experiments showed that our guidelines help create preferable motion designs for a humanoid robot. This study offers researchers a balanced perspective on speech-based and speechless situations, so that they can design the motions of a humanoid robot to satisfy users in more acceptable and pleasurable ways.
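The speech-based role and the five speechless sub-roles form a small taxonomy of interaction contexts. As a purely illustrative sketch (the paper presents design guidelines, not code; the class, function, and motion-style labels below are all hypothetical), one way such contexts could drive motion selection in a robot controller is an enumeration paired with a context-to-style lookup:

```python
# Hypothetical sketch: organizing the guideline categories in software.
# None of these identifiers or style descriptions come from the paper.
from enum import Enum, auto


class MotionContext(Enum):
    """Situations distinguished by the guidelines."""
    SPEECH_BASED = auto()   # robot is talking: motion supports the utterance
    IDLING = auto()         # speechless: no user currently engaged
    OBSERVING = auto()      # speechless: robot attends to its surroundings
    LISTENING = auto()      # speechless: a user is talking to the robot
    EXPECTING = auto()      # speechless: robot awaits a user action
    MOOD_SETTING = auto()   # speechless: motion shapes the ambience


def select_motion(context: MotionContext) -> str:
    """Map each context to a coarse motion style (invented labels)."""
    styles = {
        MotionContext.SPEECH_BASED: "co-speech gestures synced to the utterance",
        MotionContext.IDLING: "subtle, low-intensity self-motion",
        MotionContext.OBSERVING: "head and gaze tracking of salient objects",
        MotionContext.LISTENING: "nodding and gaze toward the speaker",
        MotionContext.EXPECTING: "ready posture oriented toward the user",
        MotionContext.MOOD_SETTING: "expressive, ambience-shaping movement",
    }
    return styles[context]


if __name__ == "__main__":
    # No user detected and the robot is silent -> idling behavior.
    print(select_motion(MotionContext.IDLING))
```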

Keywords

Contextual motion design · Guidelines · Humanoid robot

Notes

Acknowledgements

We thank Fumitaka Yamaoka, Takamasa Iio, Yusuke Okuno, and Shiga Miwa for their extensive help in making the scenario movies, and we also appreciate the technical assistance of Shunsuke Yoshida and Kazuhiko Shinozawa of ATR’s IRC Lab.


Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. Dept. of Industrial Design, Korea Advanced Institute of Science and Technology (KAIST), Yuseong-gu, South Korea
  2. Intelligent Robotics and Communications Lab., Advanced Telecommunications Research Institute International (ATR), Seika-cho, Japan
