Synthesising Novel Movements through Latent Space Modulation of Scalable Control Policies

  • Sebastian Bitzer
  • Ioannis Havoutis
  • Sethu Vijayakumar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5040)


We propose a novel methodology for learning and synthesising whole classes of high-dimensional movements from a limited set of demonstrated examples that satisfy some underlying 'latent' low-dimensional task constraints. We employ non-linear dimensionality reduction to extract a canonical latent space that captures the essential topology of the unobserved task space. In this latent space, we identify parametrisations of movements with control policies such that they are easily modulated to generate novel movements from the same class and are robust to perturbations. We evaluate our method on controlled simulation experiments with simple robots (reaching and periodic movement tasks) as well as on a data set of very high-dimensional human (punching) movements. We verify that, from only a few examples, we can generate a continuum of new movements from the demonstrated class in both the robotic and the human data.
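The core idea of the abstract — embed a few demonstrated trajectories in a low-dimensional latent space, then modulate latent coordinates to synthesise new movements of the same class — can be sketched minimally. The paper uses non-linear dimensionality reduction; as an assumption for brevity, linear PCA (via SVD) stands in here, and the demonstration data are synthetic:

```python
import numpy as np

# Illustrative sketch only: the paper employs non-linear dimensionality
# reduction, but linear PCA suffices to show latent-space modulation.

# Hypothetical demonstrations: 5 movements, 50 timesteps, 6 joints, each a
# smooth amplitude variation of a base trajectory, flattened to one row.
t = np.linspace(0.0, 1.0, 50)
base = np.stack([np.sin(np.pi * t * (j + 1) / 3) for j in range(6)], axis=1)
demos = np.stack([(base * (0.8 + 0.1 * k)).ravel() for k in range(5)])  # (5, 300)

# Extract a 2-D latent space from the centred demonstrations via SVD.
mean = demos.mean(axis=0)
_, _, Vt = np.linalg.svd(demos - mean, full_matrices=False)
latent = (demos - mean) @ Vt[:2].T  # (5, 2) latent coordinates

# Modulate the latent space: interpolating between two demonstrations
# yields a movement of the same class that was never demonstrated.
z_new = 0.5 * (latent[0] + latent[4])
novel = (z_new @ Vt[:2] + mean).reshape(50, 6)  # back to joint trajectories
print(novel.shape)  # (50, 6)
```

Because the synthetic demonstrations lie in the span of the first two principal components, the latent midpoint reconstructs exactly the average of the two end demonstrations; with the paper's non-linear embedding the interpolation instead follows the learned manifold.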


Keywords: Latent Space · Joint Space · Control Policy · Inverse Kinematics · Task Space





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Sebastian Bitzer (1)
  • Ioannis Havoutis (1)
  • Sethu Vijayakumar (1)
  1. Institute of Perception, Action and Behaviour, University of Edinburgh, Edinburgh, UK
