Effects of anticipatory perceptual simulation on practiced human-robot tasks
To attain increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuropsychological principles of anticipation and perceptual simulation through top-down biasing. We implemented an instantiation of this architecture on a non-anthropomorphic robotic lamp performing a repetitive human-robot collaborative task.
In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human improve their relative contributions at similar rates, which may play a part in the human’s “like-me” perception of the robot.
In self-report, we find significant differences between the two conditions in perceived team fluency, the team’s improvement over time, the robot’s contribution to team efficiency and fluency, the robot’s intelligence, and the robot’s adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to it, such as gender and intelligence, as well as credit for success; however, we also find increased self-blame and self-deprecation in these subjects’ responses.
We believe that this work lays the foundation for modeling and evaluating artificial practice for robots working in collaboration with humans.
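The core idea of anticipatory perceptual simulation can be illustrated with a small sketch (not the authors' implementation; all names, the action vocabulary, and the bias weight are hypothetical). A robot practicing a repetitive task learns action-to-action transitions and uses the anticipated next action as a top-down prior that biases an otherwise ambiguous bottom-up percept:

```python
# Illustrative sketch, assuming a discrete action vocabulary and a simple
# transition-count model of the practiced task (hypothetical, not the
# architecture described in the paper).
from collections import defaultdict

class AnticipatoryPerception:
    """Learns action transitions from practice and biases perception
    toward the anticipated next action (top-down biasing)."""

    def __init__(self, bias=0.3):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_action = None
        self.bias = bias  # arbitrary anticipatory prior weight

    def observe(self, action):
        """Record an observed action to learn the task's structure."""
        if self.last_action is not None:
            self.transitions[self.last_action][action] += 1
        self.last_action = action

    def anticipated(self):
        """Most likely next action given the last one, or None if unknown."""
        nexts = self.transitions.get(self.last_action)
        if not nexts:
            return None
        return max(nexts, key=nexts.get)

    def perceive(self, evidence):
        """Combine bottom-up evidence with the anticipatory prior."""
        scores = dict(evidence)
        guess = self.anticipated()
        if guess in scores:
            scores[guess] += self.bias
        return max(scores, key=scores.get)

# Practice a repetitive joint task, then perceive under ambiguity.
robot = AnticipatoryPerception()
for action in ["fetch", "hold", "weld"] * 5:
    robot.observe(action)

robot.last_action = "hold"
ambiguous = {"weld": 0.45, "fetch": 0.50}  # bottom-up evidence slightly favors "fetch"
print(robot.perceive(ambiguous))           # anticipation tips the decision to "weld"
```

Without the anticipatory prior, the robot would act on the slightly stronger bottom-up evidence alone; with practice, anticipation resolves the ambiguity in favor of the action the task structure predicts, mirroring how the paper's anticipatory condition outperforms the purely reactive baseline.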