Training a Robot via Human Feedback: A Case Study

  • W. Bradley Knox
  • Peter Stone
  • Cynthia Breazeal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8239)

Abstract

We present a case study of applying a framework for learning from numeric human feedback, TAMER, to a physically embodied robot. In doing so, we also provide the first demonstration that multiple behaviors can be trained with such feedback without algorithmic modification, and the first demonstration of a robot learning from free-form, human-generated feedback without any further guidance or evaluative feedback. We describe transparency challenges that are specific to a physically embodied robot learning from human feedback, along with adjustments that address these challenges.
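For readers unfamiliar with TAMER, its core loop is simple: the agent learns a predictive model Ĥ(s, a) of the trainer's numeric feedback by supervised learning and selects actions greedily with respect to that model. The following is a minimal illustrative sketch, not the authors' implementation; the class name, the linear model, and the learning rate are assumptions for illustration, and credit assignment for delayed feedback is omitted.

```python
import numpy as np

class TamerAgent:
    """Minimal TAMER-style learner (illustrative sketch only).

    Learns a linear model H_hat(s, a) of numeric human feedback via
    incremental supervised updates, and acts greedily w.r.t. H_hat.
    """

    def __init__(self, n_features, actions, lr=0.1):
        self.actions = actions                     # discrete action set (assumed)
        self.lr = lr                               # step size (assumed value)
        self.weights = {a: np.zeros(n_features) for a in actions}

    def predict(self, features, action):
        # Predicted human reward for taking `action` given state features.
        return float(self.weights[action] @ features)

    def select_action(self, features):
        # Greedy, myopic action selection: maximize predicted human reward.
        return max(self.actions, key=lambda a: self.predict(features, a))

    def update(self, features, action, human_feedback):
        # Supervised update toward the trainer's numeric feedback signal.
        error = human_feedback - self.predict(features, action)
        self.weights[action] += self.lr * error * features

# Hypothetical usage: 4 state features, two actions.
agent = TamerAgent(n_features=4, actions=["left", "right"])
s = np.array([1.0, 0.0, 0.5, -0.2])
a = agent.select_action(s)
agent.update(s, a, human_feedback=+1.0)  # trainer delivers positive feedback
```

Note that action selection here is deliberately myopic: TAMER treats the human's signal as a complete judgment of an action's quality rather than as a reward to be accumulated and discounted over time, which is why no value function appears in the sketch.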

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • W. Bradley Knox (1)
  • Peter Stone (2)
  • Cynthia Breazeal (1)

  1. Media Lab, Massachusetts Institute of Technology, Cambridge, USA
  2. Dept. of Computer Science, University of Texas at Austin, Austin, USA
