International Journal of Social Robotics, Volume 7, Issue 5, pp 653–672

Capturing Expertise: Developing Interaction Content for a Robot Through Teleoperation by Domain Experts

  • Kanae Wada
  • Dylan F. Glas
  • Masahiro Shiomi
  • Takayuki Kanda
  • Hiroshi Ishiguro
  • Norihiro Hagita

Abstract

The development of humanlike service robots that interact socially raises a new question: how can we create good interaction content for such robots? Domain experts specializing in the target service have the knowledge needed to create such content. Yet, while they can easily engage in good face-to-face interactions, we found that it was difficult for them to prepare conversational content for a robot in written form. Instead, we propose involving experts as teleoperators in a short-cycle iterative development process in which the expert develops content, teleoperates a robot using that content, and then revises the content based on that interaction. We present a software system and design guidelines to support this iterative design process. To validate these solutions, we conducted a comparison experiment in the field, with a teleoperated robot acting as a guide at a tourist information center in Nara, Japan. The results showed that our system and guidelines enabled domain experts with no robotics background to create better interaction content and conduct better interactions than domain experts working without our system.
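
As an illustration only: the paper's software is not described on this page, but the kind of "interaction content" an expert might author and revise for a teleoperated guide robot can be pictured as labeled utterances, each with an optional gesture, grouped by topic, which the operator triggers during an interaction and the expert edits between sessions. The following minimal Python sketch assumes that structure; every class, field, and gesture name is hypothetical and not taken from the authors' system.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContentEntry:
    """One selectable behavior: a spoken utterance plus an optional gesture."""
    label: str             # short label shown on the operator's console
    utterance: str         # text sent to the robot's speech synthesizer
    gesture: str = "none"  # e.g. "point_east", "bow" (hypothetical gesture names)


@dataclass
class ContentSet:
    """Interaction content grouped by topic, editable between field sessions."""
    topics: Dict[str, List[ContentEntry]] = field(default_factory=dict)

    def add(self, topic: str, entry: ContentEntry) -> None:
        self.topics.setdefault(topic, []).append(entry)

    def revise(self, topic: str, label: str, new_utterance: str) -> None:
        """Replace an utterance after observing it in a real interaction."""
        for entry in self.topics.get(topic, []):
            if entry.label == label:
                entry.utterance = new_utterance
                return
        raise KeyError(f"no entry '{label}' under topic '{topic}'")


if __name__ == "__main__":
    content = ContentSet()
    content.add("greetings", ContentEntry("welcome", "Welcome to the tourist information center."))
    content.add("sights", ContentEntry("todaiji", "Todai-ji temple is a short walk to the east.", "point_east"))
    # After a session, the expert shortens a wording that confused visitors:
    content.revise("sights", "todaiji", "Todai-ji is just east of here, a short walk away.")
    for topic, entries in content.topics.items():
        print(topic, [e.label for e in entries])

The revise call stands in for the "revise the content based on that interaction" step of the short-cycle process described above.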

Keywords

Communication robots · Techniques · Field experiments

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  • Kanae Wada (1)
  • Dylan F. Glas (1)
  • Masahiro Shiomi (1)
  • Takayuki Kanda (1)
  • Hiroshi Ishiguro (2)
  • Norihiro Hagita (1)

  1. ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan
  2. Faculty of Science and Engineering, Osaka University, Toyonaka, Japan
