Unsupervised Learning of Finite Gaussian Mixture Models (GMMs): A Greedy Approach

  • Conference paper

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 89)

Abstract

In this work we propose a clustering algorithm that learns a finite Gaussian mixture model from multivariate data on-line, based on the expectation maximization (EM) approach. Convergence to the right number of components, as well as to their means and covariances, is achieved without requiring any careful initialization. Our method starts from a single mixture component covering the whole data set and splits it incrementally during the expectation maximization steps. Once the stopping criterion has been reached, the classical EM algorithm is run with the best selected mixture in order to optimize the solution. We show the effectiveness of the method in a series of simulated experiments and compare it with a state-of-the-art alternative technique on both synthetic data and real images, including experiments with the iCub humanoid robot.
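The abstract describes the algorithm only at a high level: start from a single Gaussian covering all the data, split components incrementally during EM, stop when a model-selection criterion no longer improves, and refine the winning mixture with classical EM. The sketch below illustrates that overall flow in Python (NumPy/SciPy). It is a minimal illustration under assumed choices, namely a BIC stopping rule and a heuristic that splits the heaviest component along its principal axis; these are placeholders, not the criteria defined in the paper.

import numpy as np
from scipy.stats import multivariate_normal


def em_step(X, weights, means, covs, n_iter=50):
    # Standard EM updates for a fixed number of components.
    n, d = X.shape
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = np.column_stack([
            w * multivariate_normal.pdf(X, mean=m, cov=c)
            for w, m, c in zip(weights, means, covs)
        ])
        resp /= resp.sum(axis=1, keepdims=True) + 1e-300
        # M-step: re-estimate weights, means and covariances.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        covs = []
        for k in range(len(nk)):
            diff = X - means[k]
            covs.append((resp[:, k, None] * diff).T @ diff / nk[k]
                        + 1e-6 * np.eye(d))
    return weights, means, covs


def log_likelihood(X, weights, means, covs):
    dens = sum(w * multivariate_normal.pdf(X, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))
    return np.log(dens + 1e-300).sum()


def bic(X, weights, means, covs):
    # Bayesian Information Criterion (assumed stopping rule; lower is better).
    k, d = len(weights), X.shape[1]
    n_params = (k - 1) + k * d + k * d * (d + 1) / 2
    return n_params * np.log(len(X)) - 2.0 * log_likelihood(X, weights, means, covs)


def split_component(weights, means, covs, k):
    # Replace component k by two children displaced along its principal axis.
    eigval, eigvec = np.linalg.eigh(covs[k])
    delta = np.sqrt(eigval[-1]) * eigvec[:, -1]
    w = weights[k] / 2.0
    new_weights = np.concatenate([np.delete(weights, k), [w, w]])
    new_means = np.vstack([np.delete(means, k, axis=0),
                           means[k] + delta, means[k] - delta])
    new_covs = [c for i, c in enumerate(covs) if i != k] + [covs[k], covs[k]]
    return new_weights, new_means, new_covs


def greedy_gmm(X, max_components=10):
    # Start from a single component covering the whole data set.
    weights = np.array([1.0])
    means = X.mean(axis=0, keepdims=True)
    covs = [np.cov(X.T) + 1e-6 * np.eye(X.shape[1])]
    weights, means, covs = em_step(X, weights, means, covs)
    best = (bic(X, weights, means, covs), weights, means, covs)
    while len(weights) < max_components:
        # Assumed heuristic: split the heaviest component.
        k = int(np.argmax(weights))
        cand = em_step(X, *split_component(weights, means, covs, k))
        score = bic(X, *cand)
        if score >= best[0]:
            break  # stopping criterion reached: splitting no longer helps
        best = (score, *cand)
        weights, means, covs = cand
    # Final refinement: classical EM on the best selected mixture.
    return em_step(X, *best[1:], n_iter=200)

The split heuristic, the stopping rule, and the iteration counts above are illustrative assumptions; the paper derives its own split and stopping criteria. The final call to em_step mirrors the last stage described in the abstract, where classical EM refines the best selected mixture.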




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Greggio, N., Bernardino, A., Santos-Victor, J. (2011). Unsupervised Learning of Finite Gaussian Mixture Models (GMMs): A Greedy Approach. In: Cetto, J.A., Ferrier, J.L., Filipe, J. (eds) Informatics in Control, Automation and Robotics. Lecture Notes in Electrical Engineering, vol 89. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19539-6_7

  • DOI: https://doi.org/10.1007/978-3-642-19539-6_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-19538-9

  • Online ISBN: 978-3-642-19539-6

  • eBook Packages: Engineering, Engineering (R0)
