Autonomous Robots, Volume 28, Issue 4, pp 403–423

Effects of anticipatory perceptual simulation on practiced human-robot tasks

  • Guy Hoffman
  • Cynthia Breazeal
Article

Abstract

With the aim of attaining increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuropsychological principles of anticipation and perceptual simulation through top-down biasing. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp performing a repetitive human-robot collaborative task.
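
To make the mechanism concrete, here is a minimal sketch in Python (our own illustration of top-down biasing, not the paper's implementation; all names such as AnticipatoryPerception are hypothetical): the robot's anticipation of the human's next action primes the associated percepts, which are then recognized on weaker bottom-up evidence.

```python
# Minimal sketch of anticipatory perceptual simulation via top-down
# biasing (illustrative; all names are hypothetical, not the paper's).
from dataclasses import dataclass


@dataclass
class Percept:
    label: str          # e.g. the human action this percept signals
    bottom_up: float    # sensor evidence in [0, 1]


class AnticipatoryPerception:
    def __init__(self, bias_strength: float = 0.3):
        self.bias_strength = bias_strength
        self.expected = {}  # label -> anticipation weight from practice

    def anticipate(self, predicted: dict) -> None:
        """Prime the percepts associated with actions the robot
        expects next (weights built up over repeated practice)."""
        self.expected = dict(predicted)

    def recognize(self, percepts, threshold: float = 0.6):
        """Combine bottom-up evidence with top-down bias: a primed
        percept crosses the recognition threshold on less evidence."""
        return [p.label for p in percepts
                if p.bottom_up
                + self.bias_strength * self.expected.get(p.label, 0.0)
                >= threshold]


# After practice, the robot strongly expects the human to reach for red:
perception = AnticipatoryPerception()
perception.anticipate({"reach-red": 0.9})
print(perception.recognize([Percept("reach-red", 0.45),
                            Percept("reach-blue", 0.45)]))
# -> ['reach-red']: only the anticipated percept clears the threshold
```

The design point of such biasing is that anticipation does not replace perception; it only shifts how much sensory evidence each interpretation needs.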

In a human-subject study in which the robot works on a joint task with untrained subjects, we find teams using our approach to be significantly more efficient and fluent than teams using a comparable system without anticipatory perceptual simulation. We also show that the robot and the human improve their relative contributions at similar rates, which may play a part in the human’s “like-me” perception of the robot.
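
As a hedged illustration of how such team measures can be quantified (an assumed metric for exposition, not necessarily the study's exact measure), fluency can be approximated by the fraction of task time in which human and robot act concurrently, and efficiency by total completion time:

```python
# Illustrative fluency/efficiency metrics (an assumption for exposition,
# not necessarily the study's exact measures): fluency as the fraction
# of task time with concurrent human and robot activity.

def fluency_metrics(human_intervals, robot_intervals, task_time):
    """Activity intervals are (start, end) tuples in seconds."""
    def active(t, intervals):
        return any(start <= t < end for start, end in intervals)

    step = 0.1                              # sampling resolution (s)
    steps = round(task_time / step)
    concurrent = sum(1 for i in range(steps)
                     if active(i * step, human_intervals)
                     and active(i * step, robot_intervals))
    return {"concurrent_activity": concurrent / steps,  # fluency proxy
            "task_time": task_time}                     # efficiency proxy


print(fluency_metrics(human_intervals=[(0, 4), (6, 10)],
                      robot_intervals=[(2, 8)],
                      task_time=10.0))
# -> {'concurrent_activity': 0.4, 'task_time': 10.0}
```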

In self-reports, we find significant differences between the two conditions in the perceived team fluency, the team’s improvement over time, the robot’s contribution to efficiency and fluency, the robot’s intelligence, and the robot’s adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to it, such as gender and intelligence, and give it more credit for success, but we also find increased self-blame and self-deprecation in these subjects’ responses.
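
For readers unfamiliar with how such self-report differences are established, here is a sketch of the kind of analysis involved (the ratings and the choice of Welch's t-test are our assumptions for illustration, not the paper's reported statistics):

```python
# Illustrative only (assumed analysis, not the paper's reported tests):
# comparing per-subject Likert ratings of team fluency between the
# anticipatory and non-anticipatory conditions with Welch's t-test.
from scipy import stats

anticipatory = [6, 7, 6, 5, 7, 6, 7]       # hypothetical 1-7 ratings
non_anticipatory = [4, 5, 4, 3, 5, 4, 4]   # hypothetical 1-7 ratings

t, p = stats.ttest_ind(anticipatory, non_anticipatory, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> significant difference
```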

We believe this work lays the foundation for modeling and evaluating artificial practice for robots working in collaboration with humans.

Keywords: Human-robot interaction · Perceptual simulation · Anticipation · Human-robot teamwork · Joint practice · Human-subject studies · Cognitive models · Top-down bias · Priming


Supplementary material

Electronic supplementary material is available with the online version of this article (28.5 MB).


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. MIT Media Laboratory, Cambridge, MA, USA
