A Gradient Entropy Regularized Likelihood Learning Algorithm on Gaussian Mixture with Automatic Model Selection

  • Zhiwu Lu
  • Jinwen Ma
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

In Gaussian mixture (GM) modeling, it is crucial to select an appropriate number of Gaussians for a given sample data set. In this paper, we propose a gradient entropy regularized likelihood (ERL) learning algorithm for Gaussian mixtures that solves this model selection problem within the framework of regularization theory. Simulation experiments demonstrate that the gradient ERL learning algorithm automatically selects an appropriate number of Gaussians during parameter learning on a sample data set and yields a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly.
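The mechanism described above can be sketched in code. This is a minimal illustrative implementation, not the authors' exact algorithm: it assumes the ERL objective adds a negative-entropy term λ Σ_k α_k log α_k on the mixing proportions to the average log-likelihood, so that gradient ascent drives the weights of redundant components toward zero — the automatic model-selection effect. The value of λ, the softmax parameterization of the mixing proportions, and the learning rate are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D sample drawn from two well-separated actual Gaussians.
x = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

K = 4            # deliberately over-specified number of components
lam = 0.5        # regularization factor (hypothetical choice)
lr = 0.02        # gradient-ascent step size (hypothetical choice)
mu = rng.normal(0.0, 1.0, K)   # component means
log_sigma = np.zeros(K)        # log of component standard deviations
beta = np.zeros(K)             # softmax logits for the mixing proportions

def erl(mu, log_sigma, beta):
    """Average log-likelihood plus the entropy regularizer lam * sum(a * log a)."""
    alpha = np.exp(beta - beta.max())
    alpha /= alpha.sum()
    sigma = np.exp(log_sigma)
    # Component densities p[n, k] and the mixture density mix[n].
    p = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    mix = p @ alpha
    J = np.log(mix).mean() + lam * np.sum(alpha * np.log(alpha))
    return J, alpha, p, mix

J0 = erl(mu, log_sigma, beta)[0]
for _ in range(2000):
    J, alpha, p, mix = erl(mu, log_sigma, beta)
    sigma = np.exp(log_sigma)
    r = alpha * p / mix[:, None]    # posterior responsibilities r[n, k]
    # Gradient w.r.t. the mixing proportions, chained through the softmax.
    g_alpha = (p / mix[:, None]).mean(axis=0) + lam * (np.log(alpha) + 1.0)
    g_beta = alpha * (g_alpha - alpha @ g_alpha)
    g_mu = (r * (x[:, None] - mu) / sigma ** 2).mean(axis=0)
    g_ls = (r * (((x[:, None] - mu) / sigma) ** 2 - 1.0)).mean(axis=0)
    mu += lr * g_mu
    log_sigma += lr * g_ls
    beta += lr * g_beta

J_final, alpha, _, _ = erl(mu, log_sigma, beta)
# Components whose mixing proportion has shrunk toward zero are effectively
# pruned; counting the remaining ones performs the model selection.
print("mixing proportions:", np.round(alpha, 3))
```

After training, components whose mixing proportion falls below a small threshold (the threshold itself is another assumption) are discarded, leaving roughly one surviving component per actual Gaussian.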

Keywords

Gaussian Mixture Model · Gaussian Component · Regularization Theory · Original Mixture · Regularization Factor

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Zhiwu Lu¹
  • Jinwen Ma²
  1. Institute of Computer Science & Technology, Peking University, Beijing, China
  2. Department of Information Science, School of Mathematical Sciences and LMAM, Peking University, Beijing, China
