Robot Programming by Demonstration

  • Aude Billard
  • Sylvain Calinon
  • Rüdiger Dillmann
  • Stefan Schaal


Robot programming by demonstration (PbD) has become a central topic of robotics that spans across general research areas such as human-robot interaction, machine learning, machine vision and motor control.

Robot PbD started about 30 years ago and has grown significantly during the past decade. The rationale for moving from purely preprogrammed robots to flexible, user-based interfaces for training robots to perform a task is threefold.

First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution: either by starting the search from the observed good solution (a local optimum), or, conversely, by eliminating known bad solutions from the search space. Imitation learning is thus a powerful tool for enhancing and accelerating learning in both animals and artifacts.
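As a toy illustration of the first point, the sketch below seeds a naive hill-climbing search once from scratch and once from an imperfect demonstrated solution; the warm start begins near the optimum and only needs local refinement. This is a minimal sketch, not a method from the chapter: the quadratic "reaching cost" and all names are hypothetical.

```python
import random

def hill_climb(cost, start, step=0.1, iters=200, seed=0):
    """Greedy local search: perturb the current solution, keep improvements."""
    rng = random.Random(seed)
    x, best = list(start), cost(start)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        c = cost(cand)
        if c < best:
            x, best = cand, c
    return x, best

# Hypothetical reaching cost: squared distance of a 2-D end point from the goal (1.0, 1.0).
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 1.0) ** 2

# Random initialization vs. an (imperfect) demonstration used as a warm start.
_, cost_random = hill_climb(cost, [0.0, 0.0])
_, cost_demo = hill_climb(cost, [0.9, 1.1])  # search starts from the observed solution
```

Starting from the demonstration, the search begins with a cost of 0.02 instead of 2.0, so the same budget of iterations is spent refining an already good solution rather than exploring.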

Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated (Fig. 59.1). Imitation learning is thus a natural means of interacting with a machine that would be accessible to lay people.
Fig. 59.1

Left: A robot learns how to make a chess move (namely, moving the queen forward) by generalizing across different demonstrations of the task performed in slightly different situations (different starting positions of the hand). The robot records its joint trajectories and learns to extract what-to-imitate, i.e., that the task constraints are reduced to a subpart of the motion located in a plane defined by the three chess pieces. Right: The robot reproduces the skill in a new context (a different initial position of the chess piece) by finding an appropriate controller that satisfies both the task constraints and the constraints imposed by its own body limitations (the how-to-imitate problem). Adapted from [59.1]
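The what-to-imitate extraction sketched in the caption is often cast statistically: coordinates that vary little across demonstrations are read as task constraints, while high-variance coordinates are left free. The following is a minimal sketch of that idea in plain Python; the demonstration data and function names are illustrative, not taken from [59.1].

```python
from statistics import pvariance

# Three hypothetical demonstrations of the same move, each a short sequence of
# 2-D hand positions over time. The y coordinate is reproduced identically in
# every trial (the motion stays in a plane), while x depends on where the hand
# started.
demos = [
    [(0.0, 0.5), (0.3, 0.5), (0.6, 0.5)],
    [(0.2, 0.5), (0.4, 0.5), (0.6, 0.5)],
    [(0.1, 0.5), (0.35, 0.5), (0.6, 0.5)],
]

def constraint_profile(demos):
    """Per-time-step, per-dimension variance across demonstrations.

    Low variance means the teacher reproduced that coordinate consistently,
    so it is interpreted as a task constraint to imitate tightly."""
    steps, dims = len(demos[0]), len(demos[0][0])
    return [[pvariance([d[t][k] for d in demos]) for k in range(dims)]
            for t in range(steps)]

profile = constraint_profile(demos)
# y is invariant across demonstrations at every step -> a constraint;
# x varies early on -> unconstrained until the final approach to the goal.
```

In [59.1] and related work this role is played by richer statistical models (Gaussian mixtures over the joint trajectories), but the underlying reading of low variance as constraint is the same.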

Third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in a rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions.

The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error learning. On the other hand, one expected that, being user-friendly, the methods would broaden the application of robots in human daily environments. Recent progress in the field, which we review in this chapter, shows that the field has made a leap forward toward these goals during the past decade, and we anticipate that these promises may be fulfilled very soon.

Section 59.1 presents a brief historical overview of robot programming by demonstration (PbD), introducing several issues that will be discussed later in this chapter. Section 59.2 reviews engineering approaches to robot PbD, with an emphasis on machine learning approaches that provide the robot with the ability to adapt the learned skill to different situations (Sect. 59.2.1). This section also discusses the different types of representation that one may use to encode a skill and presents incremental learning techniques to refine the skill progressively (Sect. 59.2.4). Section 59.2.3 emphasizes the importance of giving the teacher an active role during learning and presents different ways in which the user can convey cues to the robot to help it improve its learning. Section 59.2.4 discusses how PbD can be combined with other learning strategies to overcome some of its limitations. Section 59.3 reviews work that takes a more biological approach to robot PbD and develops models of either the cognitive or neural processes of imitation learning in primates. Finally, Sect. 59.4 lists various open issues in robot PbD that have so far been little explored by the field.





Keywords: artificial intelligence and simulation of behavior, augmented reality, biomimetic robotics, expectation maximization, elementary operators, Gaussian mixture model, Gaussian mixture regression, Gaussian processes, hidden Markov model, human–robot interaction, independent component analysis, locally weighted regression, machine learning, maximum likelihood, mirror neuron system, principal component analysis, programming by demonstration, radial basis function, receptive field weighted regression, reinforcement learning, virtual reality


  1. S. Calinon, F. Guenter, A. Billard: On learning, representing and generalizing a task in a humanoid robot, IEEE Trans. Syst. Man Cybern. 37(2), 286–298 (2007), Special issue on robot learning by observation, demonstration and imitation
  2. T. Lozano-Perez: Robot programming, Proc. IEEE 71(7), 821–841 (1983)
  3. B. Dufay, J.-C. Latombe: An approach to automatic robot programming based on inductive learning, Int. J. Robot. Res. 3(4), 3–20 (1984)
  4. A. Levas, M. Selfridge: A user-friendly high-level robot teaching system, Proc. IEEE International Conference on Robotics (1984) pp. 413–416
  5. A.B. Segre, G. DeJong: Explanation-based manipulator learning: Acquisition of planning ability through observation, IEEE Conference on Robotics and Automation (ICRA) (1985) pp. 555–560
  6. A.M. Segre: Machine Learning of Robot Assembly Plans (Kluwer Academic, Boston 1988)
  7. S. Muench, J. Kreuziger, M. Kaiser, R. Dillmann: Robot programming by demonstration (RPD) - Using machine learning and user interaction methods for the development of easy and comfortable robot programming systems, Proc. International Symposium on Industrial Robots (ISIR) (1994) pp. 685–693
  8. Y. Kuniyoshi, Y. Ohmura, K. Terada, A. Nagakubo, S. Eitoku, T. Yamamoto: Embodied basis of invariant features in execution and perception of whole-body dynamic actions: knacks and focuses of roll-and-rise motion, Robot. Auton. Syst. 48(4), 189–201 (2004)
  9. Y. Kuniyoshi, M. Inaba, H. Inoue: Teaching by showing: Generating robot programs by visual observation of human performance, Proc. International Symposium of Industrial Robots (1989) pp. 119–126
  10. Y. Kuniyoshi, M. Inaba, H. Inoue: Learning by watching: Extracting reusable task knowledge from visual observation of human performance, IEEE Trans. Robot. Autom. 10(6), 799–822 (1994)
  11. S.B. Kang, K. Ikeuchi: A robot system that observes and replicates grasping tasks, Proc. International Conference on Computer Vision (ICCV) (1995) pp. 1093–1099
  12. C.P. Tung, A.C. Kak: Automatic learning of assembly task using a DataGlove system, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (1995) pp. 1–8
  13. K. Ikeuchi, T. Suchiro: Towards an assembly plan from observation, Part I: Assembly task recognition using face-contact relations (polyhedral objects), Proc. IEEE International Conference on Robotics and Automation (ICRA), Vol. 3 (1992) pp. 2171–2177
  14. M. Ito, K. Noda, Y. Hoshino, J. Tani: Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Netw. 19(3), 323–337 (2006)
  15. T. Inamura, N. Kojo, M. Inaba: Situation recognition and behavior induction based on geometric symbol representation of multimodal sensorimotor patterns, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2006) pp. 5147–5152
  16. S. Liu, H. Asada: Teaching and learning of deburring robots using neural networks, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1993) pp. 339–345
  17. A. Billard, G. Hayes: DRAMA, a connectionist architecture for control and learning in autonomous robots, Adapt. Behav. 7(1), 35–64 (1999)
  18. M. Kaiser, R. Dillmann: Building elementary robot skills from human demonstration, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1996) pp. 2700–2705
  19. R. Dillmann, M. Kaiser, A. Ude: Acquisition of elementary robot skills from human demonstration, Proc. International Symposium on Intelligent Robotic Systems (SIRS) (1995) pp. 1–38
  20. J. Yang, Y. Xu, C.S. Chen: Hidden Markov model approach to skill learning and its application in telerobotics, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1993) pp. 396–402
  21. P.K. Pook, D.H. Ballard: Recognizing teleoperated manipulations, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1993) pp. 578–585
  22. G.E. Hovland, P. Sikka, B.J. McCarragher: Skill acquisition from human demonstration using a hidden Markov model, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1996) pp. 2706–2711
  23. S.K. Tso, K.P. Liu: Hidden Markov model for intelligent extraction of robot trajectory command from demonstrated trajectories, Proc. IEEE International Conference on Industrial Technology (ICIT) (1996) pp. 294–298
  24. C. Lee, Y. Xu: Online, interactive learning of gestures for human/robot interfaces, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1996) pp. 2982–2987
  25. G. Rizzolatti, L. Fadiga, L. Fogassi, V. Gallese: Resonance behaviors and mirror neurons, Archives Italiennes de Biologie 137(2-3), 85–100 (1999)
  26. J. Decety, T. Chaminade, J. Grezes, A.N. Meltzoff: A PET exploration of the neural mechanisms involved in reciprocal imitation, Neuroimage 15(1), 265–272 (2002)
  27. J. Piaget: Play, Dreams and Imitation in Childhood (Norton, New York 1962)
  28. J. Nadel, C. Guerini, A. Peze, C. Rivet: The evolving nature of imitation as a format for communication. In: Imitation in Infancy (Cambridge Univ. Press, Cambridge 1999) pp. 209–234
  29. M.J. Matarić: Sensory-motor primitives as a basis for imitation: Linking perception to action and biology to robotics. In: Imitation in Animals and Artifacts, ed. by C. Nehaniv, K. Dautenhahn (MIT Press, Cambridge 2002)
  30. S. Schaal: Nonparametric regression for learning nonlinear transformations. In: Prerational Intelligence in Strategies, High-Level Processes and Collective Behavior, ed. by H. Ritter, O. Holland (Kluwer Academic, Dordrecht 1999)
  31. A. Billard: Imitation: A means to enhance learning of a synthetic proto-language in an autonomous robot. In: Imitation in Animals and Artifacts, ed. by K. Dautenhahn, C. Nehaniv (MIT Press, Cambridge 2002) pp. 281–311
  32. K. Dautenhahn: Getting to know each other - Artificial social intelligence for autonomous robots, Robot. Auton. Syst. 16(2-4), 333–356 (1995)
  33. C. Nehaniv, K. Dautenhahn: Of hummingbirds and helicopters: An algebraic framework for interdisciplinary studies of imitation and its applications. In: Interdisciplinary Approaches to Robot Learning, ed. by J. Demiris, A. Birk (World Scientific, Singapore 2000) pp. 136–161
  34. C.L. Nehaniv: Nine billion correspondence problems and some methods for solving them, Proc. International Symposium on Imitation in Animals and Artifacts (AISB) (2003) pp. 93–95
  35. P. Bakker, Y. Kuniyoshi: Robot see, robot do: An overview of robot imitation, Proc. Workshop on Learning in Robots and Animals (AISB) (1996) pp. 3–11
  36. M. Ehrenmann, O. Rogalla, R. Zoellner, R. Dillmann: Teaching service robots complex tasks: Programming by demonstration for workshop and household environments, Proc. IEEE International Conference on Field and Service Robotics (FSR) (2001)
  37. M. Skubic, R.A. Volz: Acquiring robust, force-based assembly skills from human demonstration, IEEE Trans. Robot. Autom. (2000) pp. 772–781
  38. M. Yeasin, S. Chaudhuri: Toward automatic robot programming: Learning human skill from visual data, IEEE Trans. Syst. Man Cybern. B 30(1), 180–185 (2000)
  39. J. Zhang, B. Rössler: Self-valuing learning and generalization with application in visually guided grasping of complex objects, Robot. Auton. Syst. 47(2-3), 117–127 (2004)
  40. A. Kheddar: Teleoperation based on the hidden robot concept, IEEE Trans. Syst. Man Cybern. A 31(1), 1–13 (2001)
  41. R. Dillmann: Teaching and learning of robot tasks via observation of human performance, Robot. Auton. Syst. 47(2-3), 109–116 (2004)
  42. J. Aleotti, S. Caselli, M. Reggiani: Leveraging on a virtual environment for robot programming by demonstration, Robot. Auton. Syst. 47(2-3), 153–161 (2004)
  43. S. Ekvall, D. Kragic: Grasp recognition for programming by demonstration, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2005) pp. 748–753
  44. J. Aleotti, S. Caselli: Trajectory clustering and stochastic approximation for robot programming by demonstration, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2005) pp. 1029–1034
  45. A. Alissandrakis, C.L. Nehaniv, K. Dautenhahn, J. Saunders: Evaluation of robot imitation attempts: Comparison of the system's and the human's perspectives, Proc. ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI) (2006) pp. 134–141
  46. N. Delson, H. West: Robot programming by human demonstration: Adaptation and inconsistency in constrained motion, Proc. IEEE International Conference on Robotics and Automation (ICRA) (1996) pp. 30–36
  47. T. Sato, Y. Genda, H. Kubotera, T. Mori, T. Harada: Robot imitation of human motion based on qualitative description from multiple measurement of human and environmental data, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2003) pp. 2377–2384
  48. K. Ogawara, J. Takamatsu, H. Kimura, K. Ikeuchi: Extraction of essential interactions through multiple observations of human demonstrations, IEEE Trans. Ind. Electron. 50(4), 667–675 (2003)
  49. M.N. Nicolescu, M.J. Matarić: Natural methods for robot task learning: Instructive demonstrations, generalization and practice, Proc. International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS) (2003) pp. 241–248
  50. M. Pardowitz, R. Zoellner, S. Knoop, R. Dillmann: Incremental learning of tasks from user demonstrations, past experiences and vocal comments, IEEE Trans. Syst. Man Cybern. 37(2), 322–332 (2007), Special issue on robot learning by observation, demonstration and imitation
  51. B. Jansen, T. Belpaeme: A computational model of intention reading in imitation, Robot. Auton. Syst. 54(5), 394–402 (2006)
  52. S. Ekvall, D. Kragic: Learning task models from multiple human demonstrations, Proc. IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (2006) pp. 358–363
  53. H. Friedrich, S. Muench, R. Dillmann, S. Bocionek, M. Sassin: Robot programming by demonstration (RPD): Supporting the induction by human interaction, Mach. Learn. 23(2), 163–189 (1996)
  54. J. Saunders, C.L. Nehaniv, K. Dautenhahn: Teaching robots by moulding behavior and scaffolding the environment, Proc. ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI) (2006) pp. 118–125
  55. A. Alissandrakis, C.L. Nehaniv, K. Dautenhahn: Correspondence mapping induced state and action metrics for robotic imitation, IEEE Trans. Syst. Man Cybern. 37(2), 299–307 (2007), Special issue on robot learning by observation, demonstration and imitation
  56. A. Ude: Trajectory generation from noisy positions of object features for teaching robot paths, Robot. Auton. Syst. 11(2), 113–127 (1993)
  57. J. Yang, Y. Xu, C.S. Chen: Human action learning via hidden Markov model, IEEE Trans. Syst. Man Cybern. 27(1), 34–44 (1997)
  58. K. Yamane, Y. Nakamura: Dynamics filter - Concept and implementation of online motion generator for human figures, IEEE Trans. Robot. Autom. 19(3), 421–432 (2003)
  59. A.J. Ijspeert, J. Nakanishi, S. Schaal: Learning control policies for movement imitation and movement recognition, Neural Inform. Process. Syst. (NIPS) 15, 1547–1554 (2003)
  60. S. Vijayakumar, S. Schaal: Locally weighted projection regression: An O(n) algorithm for incremental real time learning in high dimensional spaces, Proc. International Conference on Machine Learning (ICML) (2000) pp. 288–293
  61. S. Vijayakumar, A. D'souza, S. Schaal: Incremental online learning in high dimensions, Neural Comput. 17(12), 2602–2634 (2005)
  62. N. Kambhatla: Local Models and Gaussian Mixture Models for Statistical Data Processing, Ph.D. Thesis (Oregon Graduate Institute of Science and Technology, Portland 1996)
  63. A. Shon, K. Grochow, R. Rao: Robotic imitation from human motion capture using Gaussian processes, Proc. IEEE/RAS International Conference on Humanoid Robots (Humanoids) (2005)
  64. K. Grochow, S.L. Martin, A. Hertzmann, Z. Popovic: Style-based inverse kinematics, Proc. ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) (2004) pp. 522–531
  65. K.F. MacDorman, R. Chalodhorn, M. Asada: Periodic nonlinear principal component neural networks for humanoid motion segmentation, generalization, and generation, Proc. International Conference on Pattern Recognition (ICPR), Vol. 4 (2004) pp. 537–540
  66. S. Calinon, A. Billard: What is the teacher's role in robot programming by demonstration? Toward benchmarks for improved learning, Interact. Stud. 8(3), 441–464 (2007), Special issue on psychological benchmarks in human-robot interaction
  67. D. Bullock, S. Grossberg: VITE and FLETE: Neural modules for trajectory formation and postural control. In: Volitional Control, ed. by W.A. Hersberger (Elsevier, Amsterdam 1989) pp. 253–297
  68. F.A. Mussa-Ivaldi: Nonlinear force fields: A distributed system of control primitives for representing and learning movements, IEEE International Symposium on Computational Intelligence in Robotics and Automation (1997) pp. 84–90
  69. P. Li, R. Horowitz: Passive velocity field control of mechanical manipulators, IEEE Trans. Robot. Autom. 15(4), 751–763 (1999)
  70. G. Schoener, C. Santos: Control of movement time and sequential action through attractor dynamics: A simulation study demonstrating object interception and coordination, Proc. International Symposium on Intelligent Robotic Systems (SIRS) (2001)
  71. T. Inamura, H. Tanie, Y. Nakamura: Keyframe compression and decompression for time series data based on continuous hidden Markov models, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2003) pp. 1487–1492
  72. T. Inamura, I. Toshima, Y. Nakamura: Acquiring motion elements for bidirectional computation of motion recognition and generation. In: Experimental Robotics VIII, Vol. 5, ed. by B. Siciliano, P. Dario (Springer, Berlin Heidelberg 2003) pp. 372–381
  73. T. Inamura, N. Kojo, T. Sonoda, K. Sakamoto, K. Okada, M. Inaba: Intent imitation using wearable motion capturing system with on-line teaching of task attention, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2005) pp. 469–474
  74. D. Lee, Y. Nakamura: Stochastic model of imitating a new observed motion based on the acquired motion primitives, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2006) pp. 4994–5000
  75. D. Lee, Y. Nakamura: Mimesis scheme using a monocular vision system on a humanoid robot, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2007) pp. 2162–2168
  76. S. Calinon, A. Billard: Learning of gestures by imitation in a humanoid robot. In: Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions, ed. by K. Dautenhahn, C.L. Nehaniv (Cambridge Univ. Press, Cambridge 2007) pp. 153–177
  77. A. Billard, S. Calinon, F. Guenter: Discriminative and adaptive imitation in uni-manual and bi-manual tasks, Robot. Auton. Syst. 54(5), 370–384 (2006)
  78. S. Calinon, A. Billard: Recognition and reproduction of gestures using a probabilistic framework combining PCA, ICA and HMM, Proc. International Conference on Machine Learning (ICML) (2005) pp. 105–112
  79. S. Calinon, F. Guenter, A. Billard: Goal-directed imitation in a humanoid robot, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2005) pp. 299–304
  80. T. Asfour, F. Gyarfas, P. Azad, R. Dillmann: Imitation learning of dual-arm manipulation tasks in humanoid robots, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2006) pp. 40–47
  81. J. Aleotti, S. Caselli: Robust trajectory learning and approximation for robot programming by demonstration, Robot. Auton. Syst. 54(5), 409–413 (2006)
  82. C.G. Atkeson: Using local models to control movement, Adv. Neural Inform. Process. Syst. (NIPS) (1990) pp. 316–323
  83. C.G. Atkeson, A.W. Moore, S. Schaal: Locally weighted learning for control, Artif. Intell. Rev. 11(1-5), 75–113 (1997)
  84. A.W. Moore: Fast, robust adaptive control by learning only forward models. In: Adv. Neural Inform. Process. Syst. (NIPS), Vol. 4 (Morgan Kaufmann, San Francisco 1992)
  85. S. Schaal, C.G. Atkeson: From isolation to cooperation: An alternative view of a system of experts. In: Adv. Neural Inform. Process. Syst. (NIPS), Vol. 8 (Morgan Kaufmann, San Francisco 1996) pp. 605–611
  86. S. Schaal, C.G. Atkeson: Constructive incremental learning from only local information, Neural Comput. 10(8), 2047–2084 (1998)
  87. M. Hersch, F. Guenter, S. Calinon, A.G. Billard: Learning dynamical system modulation for constrained reaching tasks, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2006) pp. 444–449
  88. A.J. Ijspeert, J. Nakanishi, S. Schaal: Movement imitation with nonlinear dynamical systems in humanoid robots, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2002) pp. 1398–1403
  89. C. Breazeal, M. Berlin, A. Brooks, J. Gray, A.L. Thomaz: Using perspective taking to learn from ambiguous demonstrations, Robot. Auton. Syst. 54(5), 385–393 (2006)
  90. Y. Sato, K. Bernardin, H. Kimura, K. Ikeuchi: Task analysis based on observing hands and objects by vision, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2002) pp. 1208–1213
  91. R. Zoellner, M. Pardowitz, S. Knoop, R. Dillmann: Towards cognitive robots: Building hierarchical task representations of manipulations from human demonstration, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2005) pp. 1535–1540
  92. M. Pardowitz, R. Zoellner, R. Dillmann: Incremental learning of task sequences with information-theoretic metrics, Proc. European Robotics Symposium (EUROS) (2005)
  93. M. Pardowitz, R. Zoellner, R. Dillmann: Learning sequential constraints of tasks from user demonstrations, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2005) pp. 424–429
  94. S. Calinon, A. Billard: Teaching a humanoid robot to recognize and reproduce social cues, Proc. IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (2006) pp. 346–351
  95. B. Scassellati: Imitation and mechanisms of joint attention: A developmental structure for building social skills on a humanoid robot, Lect. Notes Comput. Sci. 1562, 176–195 (1999)
  96. H. Kozima, H. Yano: A robot that learns to communicate with human caregivers, Proc. International Workshop on Epigenetic Robotics (2001)
  97. H. Ishiguro, T. Ono, M. Imai, T. Kanda: Development of an interactive humanoid robot Robovie - An interdisciplinary approach, Robot. Res. 6, 179–191 (2003)
  98. K. Nickel, R. Stiefelhagen: Pointing gesture recognition based on 3-D tracking of face, hands and head orientation, Proc. International Conference on Multimodal Interfaces (ICMI) (2003) pp. 140–146
  99. M. Ito, J. Tani: Joint attention between a humanoid robot and users in imitation game, Proc. International Conference on Development and Learning (ICDL) (2004)
  100. V.V. Hafner, F. Kaplan: Learning to interpret pointing gestures: Experiments with four-legged autonomous robots. In: Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience, ed. by S. Wermter, G. Palm, M. Elshaw (Springer, Berlin Heidelberg 2005) pp. 225–234
  101. C. Breazeal, D. Buchsbaum, J. Gray, D. Gatenby, B. Blumberg: Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots, Artificial Life 11(1-2), 31–62 (2005)
  102. P.F. Dominey, M. Alvarez, B. Gao, M. Jeambrun, A. Cheylus, A. Weitzenfeld, A. Martinez, A. Medrano: Robot command, interrogation and teaching via social interaction, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2005) pp. 475–480
  103. A.L. Thomaz, M. Berlin, C. Breazeal: Robot science meets social science: An embodied computational model of social referencing, Workshop Toward Social Mechanisms of Android Science (CogSci) (2005) pp. 7–17
  104. C. Breazeal, L. Aryananda: Recognition of affective communicative intent in robot-directed speech, Autonomous Robots 12(1), 83–104 (2002)
  105. H. Bekkering, A. Wohlschlaeger, M. Gattis: Imitation of gestures in children is goal-directed, Quart. J. Exp. Psychol. 53A(1), 153–164 (2000)
  106. M. Nicolescu, M.J. Matarić: Task learning through imitation and human-robot interaction. In: Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions, ed. by K. Dautenhahn, C.L. Nehaniv (Cambridge Univ. Press, Cambridge 2007) pp. 407–424
  107. J. Demiris, G. Hayes: Imitative learning mechanisms in robots and humans, Proc. European Workshop on Learning Robots, ed. by V. Klingspor (1996) pp. 9–16
  108. P. Gaussier, S. Moga, J.P. Banquet, M. Quoy: From perception-action loop to imitation processes: A bottom-up approach of learning by imitation, Appl. Artif. Intell. 7(1), 701–729 (1998)
  109. M. Ogino, H. Toichi, Y. Yoshikawa, M. Asada: Interaction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping, Robot. Auton. Syst. 54(5), 414–418 (2006)
  110. A. Billard, M. Matarić: Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture, Robot. Auton. Syst. 37(2), 145–160 (2001)
  111. W. Erlhagen, A. Mukovskiy, E. Bicho, G. Panin, C. Kiss, A. Knoll, H. van Schie, H. Bekkering: Goal-directed imitation for robots: A bio-inspired approach to action understanding and skill learning, Robot. Auton. Syst. 54(5), 353–360 (2006)
  112. A. Chella, H. Dindo, I. Infantino: A cognitive framework for imitation learning, Robot. Auton. Syst. 54(5), 403–408 (2006)
  113. S. Calinon, A. Billard: Incremental learning of gestures by imitation in a humanoid robot, Proc. ACM/IEEE International Conference on Human-Robot Interaction (HRI) (2007) pp. 255–262
  114. F. Guenter, M. Hersch, S. Calinon, A. Billard: Reinforcement learning for imitating constrained reaching movements, Adv. Robot. 21(13), 1521–1544 (2007), Special issue on imitative robots
  115. R.H. Cuijpers, H.T. van Schie, M. Koppen, W. Erlhagen, H. Bekkering: Goals and means in action observation: A computational approach, Neural Networks 19(3), 311–322 (2006)
  116. M.W. Hoffman, D.B. Grimes, A.P. Shon, R.P.N. Rao: A probabilistic model of gaze imitation and shared attention, Neural Networks 19(3), 299–310 (2006)
  117. Y. Demiris, B. Khadhouri: Hierarchical attentive multiple models for execution and recognition of actions, Robot. Auton. Syst. 54(5), 361–369 (2006)
  118. J. Peters, S. Vijayakumar, S. Schaal: Reinforcement learning for humanoid robotics, Proc. IEEE International Conference on Humanoid Robots (Humanoids) (2003)
  119. T. Yoshikai, N. Otake, I. Mizuuchi, M. Inaba, H. Inoue: Development of an imitation behavior in humanoid Kenta with reinforcement learning algorithm based on the attention during imitation, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2004) pp. 1192–1197
  120. D.C. Bentivegna, C.G. Atkeson, G. Cheng: Learning tasks from observation and practice, Robot. Auton. Syst. 47(2-3), 163–169 (2004)
  121. Y.K. Hwang, K.J. Choi, D.S. Hong: Self-learning control of cooperative motion for a humanoid robot, Proc. IEEE International Conference on Robotics and Automation (ICRA) (2006) pp. 475–480
  122. A. Billard, K. Dautenhahn: Grounding communication in autonomous robots: An experimental study, Robot. Auton. Syst. 24(1-2), 71–79 (1998), Special issue on scientific methods in mobile robotics
  123. S. Schaal: Is imitation learning the route to humanoid robots?, Trends Cognit. Sci. 3(6), 233–242 (1999)
  124. J.H. Maunsell, D.C. Van Essen: Functional properties of neurons in middle temporal visual area of the macaque monkey. II. Binocular interactions and sensitivity to binocular disparity, J. Neurophysiol. 49(5), 1148–1167 (1983)
  125. M. Arbib, T. Iberall, D. Lyons: Coordinated control program for movements of the hand, Exp. Brain Res. Suppl. 10, 111–129 (1985)
  126. D. Sternad, S. Schaal: Segmentation of endpoint trajectories does not imply segmented control, Exp. Brain Res. 124(1), 118–136 (1999)
  127. R.S. Sutton, S. Singh, D. Precup, B. Ravindran: Improved switching among temporally abstract actions, Adv. Neural Inform. Process. Syst. (NIPS) 11, 1066–1072 (1999)
  128. Y. Demiris, G. Hayes: Imitation as a dual route process featuring predictive and learning components: A biologically-plausible computational model. In: Imitation in Animals and Artifacts, ed. by K. Dautenhahn, C. Nehaniv (MIT Press, Cambridge 2002) pp. 327–361
  129. D.M. Wolpert, M. Kawato: Multiple paired forward and inverse models for motor control, Neural Networks 11(7-8), 1317–1329 (1998)
  130. C. Nehaniv, K. Dautenhahn: Imitation in Animals and Artifacts (MIT Press, Boston 2002)
    C. Nehaniv, K. Dautenhahn: Imitation in Animals and Artifacs (MIT Press, Boston 2002)Google Scholar
  131. 60.131.
    R.S. Sutton, A.G. Barto: Reinforcement learning: an introduction. In: Adaptive computation and machine learning, ed. by S. Editor (MIT Press, Cambridge 1998)Google Scholar
  132. 60.132.
    G. Rizzolatti, L. Fogassi, V. Gallese: Neurophysiological mechanisms underlying the understanding and imitation of action, Nature Rev. Neurosci. 2, 661–670 (2001)CrossRefGoogle Scholar
  133. 60.133.
    M. Iacoboni, R.P. Woods, M. Brass, H. Bekkering, J.C. Mazziotta, G. Rizzolatti: Cortical Mechanisms of Human Imitation, Science 286, 2526–2528 (1999)CrossRefGoogle Scholar
  134. 60.134.
    E. Oztop, M. Kawato, M.A. Arbib: Mirror neurons and imitation: A computationally guided review, Neural Networks 19(3), 254–321 (2006)CrossRefzbMATHGoogle Scholar
  135. 60.135.
    E. Sauser, A. Billard: Biologically Inspired Multimodal Integration: Interferences in a Human-Robot Interaction Game, Proc. IEEE/RSJ international Conference on Intelligent Robots and Systems (IROS) (2006) pp. 5619–5624Google Scholar
  136. 60.136.
    A.H. Fagg, M.A. Arbib: Modeling Parietal-Premotor Interactions in Primate Control of Grasping, Neural Networks 11(7), 1277–1303 (1998)CrossRefGoogle Scholar
  137. 60.137.
    E. Oztop, M.A. Arbib: Schema Design and Implementation of the Grasp-Related Mirror Neuron System, Biol. Cybernet. 87(2), 116–140 (2002)CrossRefzbMATHGoogle Scholar
  138. 60.138.
    M. Arbib, A. Billard, M. Iacoboni, E. Oztop: Mirror neurons, Imitation and (Synthetic) Brain Imaging, Neural Networks 13(8-9), 953–973 (2000)CrossRefGoogle Scholar
  139. 60.139.
    E. Oztop, M. Lin, M. Kawato, G. Cheng: Dexterous Skills Transfer by Extending Human Body Schema to a Robotic Hand, Proc. IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2006) pp. 82–87Google Scholar
  140. 60.140.
    E. Sauser, A. Billard: Parallel and Distributed Neural Models of the Ideomotor Principle: An Investigation of Imitative Cortical Pathways, Neural Networks 19(3), 285–298 (2006)CrossRefzbMATHGoogle Scholar
  141. 60.141.
    E.L. Sauser, A.G. Billard: Dynamic Updating of Distributed Neural Representations using Forward Models, Biol. Cybernet. 95(6), 567–588 (2006)CrossRefzbMATHMathSciNetGoogle Scholar
  142. 60.142.
    M. Brass, H. Bekkering, A. Wohlschlaeger, W. Prinz: Compatibility between observed and executed movements: Comparing symbolic, spatial and imitative cues, Brain Cognit. 44(2), 124–143 (2001)CrossRefGoogle Scholar

Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. Learning Algorithms and Systems Laboratory (LASA), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
  2. Learning Algorithms and Systems Laboratory (LASA), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
  3. Institut für Technische Informatik, Universität Karlsruhe, Karlsruhe, Germany
  4. Computer Science and Neuroscience, University of Southern California, Los Angeles, USA