Mixture Models for the Analysis, Edition, and Synthesis of Continuous Time Series

Part of the Unsupervised and Semi-Supervised Learning book series (UNSESUL)


This chapter presents an overview of techniques used for the analysis, edition, and synthesis of continuous time series, with a particular emphasis on motion data. Mixture models allow time signals to be decomposed as a superposition of basis functions, providing a compact representation that aims to retain the essential characteristics of the signals. Various types of basis functions have been proposed, with developments originating from different fields of research, including computer graphics, human motion science, robotics, control, and neuroscience. Examples of applications with radial, Bernstein, and Fourier basis functions are presented, with associated source code to help the reader become familiar with these techniques.
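The core idea sketched in the abstract (representing a time signal as a weighted superposition of basis functions, with the weights obtained by regression) can be illustrated with a minimal example. The sketch below is not the chapter's accompanying source code; it is a hypothetical illustration that fits a toy 1-D trajectory with the three basis families mentioned above (radial, Bernstein, and Fourier), using ordinary least squares to estimate the weights. The choices of K=8 basis functions, the Gaussian bandwidth, and the toy signal are illustrative assumptions.

```python
import numpy as np
from math import comb

def rbf_basis(t, K):
    # Radial basis functions: Gaussians with centers evenly spread over [0, 1],
    # bandwidth tied to the spacing between centers (an illustrative choice).
    centers = np.linspace(0.0, 1.0, K)
    h = 1.0 / (K - 1)
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * h ** 2))

def bernstein_basis(t, K):
    # Bernstein polynomials of degree K-1 (the basis underlying Bezier curves).
    n = K - 1
    return np.stack([comb(n, k) * t**k * (1.0 - t) ** (n - k) for k in range(K)], axis=1)

def fourier_basis(t, K):
    # Truncated Fourier series: a constant term followed by cosine/sine pairs.
    cols = [np.ones_like(t)]
    k = 1
    while len(cols) < K:
        cols.append(np.cos(2.0 * np.pi * k * t))
        cols.append(np.sin(2.0 * np.pi * k * t))
        k += 1
    return np.stack(cols[:K], axis=1)

def reconstruct(basis, t, x, K):
    # Least-squares estimate of the superposition weights, then reconstruction:
    # x(t) ~= sum_k w_k * phi_k(t)
    Phi = basis(t, K)
    w, *_ = np.linalg.lstsq(Phi, x, rcond=None)
    return Phi @ w

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 200)
    x = np.sin(2.0 * np.pi * t) + 0.3 * t  # toy trajectory (assumed for illustration)
    for basis in (rbf_basis, bernstein_basis, fourier_basis):
        xr = reconstruct(basis, t, x, K=8)
        print(basis.__name__, "MSE:", float(np.mean((x - xr) ** 2)))
```

Note how the non-periodic linear trend is captured almost exactly by the radial and Bernstein bases, while the truncated Fourier basis leaves a small residual, which is one practical reason the choice of basis family matters for the signal at hand.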


Keywords: Gaussian mixture regression · Locally weighted regression · Movement primitives · Ergodic control · Basis functions · Fourier series · Bernstein polynomials



I would like to thank Prof. Michael Liebling for his help in the development of the ergodic control formulation applied to Gaussian mixture models and for his recommendations on the preliminary version of this chapter.

The research leading to these results has received funding from the European Commission’s Horizon 2020 Programme (H2020/2018-20) under the MEMMO Project (Memory of Motion, grant agreement 780684).



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

Idiap Research Institute, Martigny, Switzerland
