Autonomous Robots, Volume 37, Issue 4, pp 351–368

Integrating human observer inferences into robot motion planning

Abstract

Our goal is to enable robots to produce motion that is suitable for human–robot collaboration and co-existence. Most motion in robotics is purely functional, ideal when the robot is performing a task in isolation. In collaboration, however, the robot’s motion has an observer, watching and interpreting the motion. In this work, we move beyond functional motion, and introduce the notion of an observer into motion planning, so that robots can generate motion that is mindful of how it will be interpreted by a human collaborator. We formalize predictability and legibility as properties of motion that naturally arise from the inferences in opposing directions that the observer makes, drawing on action interpretation theory in psychology. We propose models for these inferences based on the principle of rational action, and derive constrained functional trajectory optimization techniques for planning motion that is predictable or legible. Finally, we present experiments that test our work on novice users, and discuss the remaining challenges in enabling robots to generate such motion online in complex situations.
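The abstract's core idea, that an observer interprets motion through the principle of rational action (efficient motion toward a goal is evidence for that goal), can be illustrated with a minimal sketch. This is not the paper's exact formulation: the straight-line cost function, the specific coordinates, and the function names here are illustrative assumptions.

```python
import math

def cost(a, b):
    # Stand-in trajectory cost: straight-line (Euclidean) distance.
    # A real planner would use a cost functional over full trajectories.
    return math.dist(a, b)

def goal_probabilities(start, current, goals):
    """Infer P(goal | motion observed so far), in the spirit of
    inference from the principle of rational action: each goal is
    scored by how efficient the observed motion is for reaching it,
    exp(-(cost so far + cheapest cost to go)) relative to the cheapest
    cost of going straight to the goal, then scores are normalized."""
    scores = []
    for g in goals:
        so_far = cost(start, current)    # cost already incurred
        to_go = cost(current, g)         # cheapest remaining cost
        optimal = cost(start, g)         # cheapest total cost from start
        scores.append(math.exp(-(so_far + to_go)) / math.exp(-optimal))
    total = sum(scores)
    return [s / total for s in scores]

# A reach that has veered toward the left goal is read as evidence for it.
probs = goal_probabilities(start=(0.0, 0.0),
                           current=(-0.5, 0.5),
                           goals=[(-1.0, 1.0), (1.0, 1.0)])
print(probs)  # higher probability on the first (left) goal
```

Under this reading, a predictable trajectory is one that matches what the observer expects given a known goal, while a legible trajectory is one that makes the intended goal's probability rise quickly as the motion unfolds.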

Keywords

Human–robot collaboration · Predictability · Legibility · Trajectory optimization · Action interpretation

Acknowledgments

We thank Geoff Gordon, Jodi Forlizzi, Hendrik Christiansen, Kenton Lee, Chris Dellin, Alberto Rodriguez, and the members of the Personal Robotics Lab for fruitful discussion and advice. This material is based upon work supported by NSF-IIS-0916557, NSF-EEC-0540865, ONR-YIP 2012, the Intel Embedded Computing ISTC, and the Intel Ph.D. Fellowship.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA