A Mixed-Initiative Approach to Interactive Robot Tutoring

  • Ingo Lütkebohle
  • Julia Peltason
  • Lars Schillingmann
  • Christof Elbrechter
  • Sven Wachsmuth
  • Britta Wrede
  • Robert Haschke
Chapter
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 76)

Abstract

Integrating the components described in the previous articles of this chapter, we introduce the Bielefeld “Curious Robot”, a system that acquires new knowledge and skills in direct human-robot interaction. This article focuses on the cognitive architecture of the overall system. We propose to combine (i) a communication layer based on a generic, human-accessible XML data format, (ii) multiple low-level sensor and control processes that publish their sensor information into the system and receive commands or parameterizations from higher-level deliberative processes, and (iii) high-level coordination processes based on hierarchical state machines. The effectiveness of the proposed approach is shown in an interactive tutoring scenario in which the “Curious Robot”, a bimanual robot system, learns to identify, grasp, and clear various everyday objects from a table. The system's ability to interact with laypersons is demonstrated in a user study.
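
To make the three-layer design concrete, the sketch below illustrates the pattern in Python. It is a minimal illustration under stated assumptions, not the chapter's actual implementation: the XML schema, the function make_percept_event, the class TutoringHSM, and all state and event names are hypothetical stand-ins for (i) the human-readable XML communication layer, (ii) a low-level sensor process publishing into the system, and (iii) a hierarchical state machine coordinating the tutoring dialog.

```python
# Minimal sketch of the three-layer pattern described in the abstract.
# All names (make_percept_event, TutoringHSM, state labels) are
# illustrative assumptions, not the chapter's actual API.

import xml.etree.ElementTree as ET

# (i) Communication layer: events are plain, human-readable XML documents.
def make_percept_event(label: str, x: float, y: float) -> str:
    """Encode a low-level vision percept as a generic XML event."""
    event = ET.Element("event", kind="percept", source="vision")
    obj = ET.SubElement(event, "object", label=label)
    ET.SubElement(obj, "position", x=str(x), y=str(y))
    return ET.tostring(event, encoding="unicode")

# (iii) High-level coordination: a hierarchical state machine whose
# "interact" super-state contains nested tutoring sub-states.
class TutoringHSM:
    # Transitions keyed by (super-state, sub-state) and event kind.
    TRANSITIONS = {
        ("interact", "await_object"): {"percept": ("interact", "ask_label")},
        ("interact", "ask_label"):    {"answer":  ("interact", "grasp")},
        ("interact", "grasp"):        {"done":    ("interact", "await_object")},
    }

    def __init__(self):
        self.state = ("interact", "await_object")

    def handle(self, xml_event: str) -> None:
        kind = ET.fromstring(xml_event).get("kind")
        nxt = self.TRANSITIONS.get(self.state, {}).get(kind)
        if nxt:  # events irrelevant to the current state are ignored here
            print(f"{self.state[1]} --{kind}--> {nxt[1]}")
            self.state = nxt

# (ii) A sensor process publishes a percept; the coordinator reacts.
hsm = TutoringHSM()
hsm.handle(make_percept_event("cup", 0.31, 0.12))  # await_object -> ask_label
```

In the actual system, the coordinator would subscribe to such events over the middleware rather than being called directly; the sketch only illustrates the division of labour between the communication, sensor/control, and coordination layers.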



Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  • Ingo Lütkebohle (1)
  • Julia Peltason (1)
  • Lars Schillingmann (1)
  • Christof Elbrechter (1)
  • Sven Wachsmuth (1)
  • Britta Wrede (1)
  • Robert Haschke (1)

  1. Cognitive Interaction Technology Excellence Cluster, Bielefeld University, Bielefeld, Germany
