A Gradient Entropy Regularized Likelihood Learning Algorithm on Gaussian Mixture with Automatic Model Selection
In Gaussian mixture (GM) modeling, it is crucial to select the number of Gaussians for a given sample data set. In this paper, we propose a gradient entropy regularized likelihood (ERL) learning algorithm for Gaussian mixtures that solves this problem under regularization theory. Simulation experiments demonstrate that the gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly.
Keywords: Gaussian Mixture Model, Gaussian Component, Regularization Theory, Original Mixture, Regularization Factor
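The idea described in the abstract, a likelihood augmented with an entropy penalty on the mixing weights so that redundant Gaussians are driven toward zero weight and can be pruned, can be sketched as follows. This is an illustrative reconstruction and not the paper's exact algorithm: the 1-D setting, the softmax parameterization of the mixing weights, and all hyper-parameter names (`gamma`, `lr`, `prune_tol`) are assumptions made for the sketch.

```python
import numpy as np

def erl_gmm_fit(X, K=6, gamma=0.1, lr=0.05, n_iter=300, prune_tol=1e-2, seed=0):
    """Sketch of gradient entropy-regularized likelihood (ERL) learning for a
    1-D Gaussian mixture.  Objective (maximized by gradient ascent):
        mean log-likelihood + gamma * sum_k alpha_k * log(alpha_k),
    i.e. a negative-entropy penalty on the mixing weights alpha_k, which
    pushes redundant components toward zero weight -- components whose
    weight falls below prune_tol are discarded (automatic model selection).
    Hyper-parameters here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n = len(X)
    beta = np.zeros(K)                       # softmax logits of mixing weights
    mu = rng.choice(X, size=K, replace=False).astype(float)
    log_var = np.full(K, np.log(np.var(X)))  # log-variance keeps var > 0

    for _ in range(n_iter):
        alpha = np.exp(beta - beta.max())
        alpha = np.clip(alpha / alpha.sum(), 1e-12, None)
        var = np.exp(log_var)
        # responsibilities r[i, k] = posterior of component k given x_i
        logpdf = (-0.5 * (X[:, None] - mu[None, :]) ** 2 / var
                  - 0.5 * np.log(2 * np.pi * var))
        logw = np.log(alpha) + logpdf
        logw -= logw.max(axis=1, keepdims=True)
        r = np.exp(logw)
        r /= r.sum(axis=1, keepdims=True)
        rk = r.sum(axis=0)

        # alpha_k * dJ/d alpha_k, then chain rule through the softmax
        g_alpha = rk / n + gamma * alpha * (np.log(alpha) + 1.0)
        g_beta = g_alpha - alpha * g_alpha.sum()
        beta += lr * K * g_beta
        # gradient steps for means and log-variances
        mu += lr * (r * (X[:, None] - mu)).sum(axis=0) / (n * var)
        g_lv = 0.5 * (r * ((X[:, None] - mu) ** 2 / var - 1)).sum(axis=0) / n
        log_var += lr * K * g_lv

    alpha = np.exp(beta - beta.max())
    alpha /= alpha.sum()
    keep = alpha > prune_tol                 # drop near-zero components
    return alpha[keep] / alpha[keep].sum(), mu[keep], np.exp(log_var[keep])
```

Starting from a deliberately overlarge `K`, the entropy term rewards concentrating the mixing weights on a few components, so the number of surviving Gaussians after pruning serves as the selected model order.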