Unsupervised Learning of Correlated Multivariate Gaussian Mixture Models Using MML
Mixture modelling, or unsupervised classification, is the problem of identifying and modelling components (or clusters, or classes) in a body of data. We consider here the application of the Minimum Message Length (MML) principle to mixture modelling of multivariate Gaussian distributions. Earlier work in MML mixture modelling covers the multinomial, Gaussian, Poisson, von Mises circular, and Student t distributions; in all of these applications the variables within a component are assumed to be uncorrelated with one another. In this paper, we propose a more general form of MML mixture modelling which allows the variables within a component to be correlated. Two MML approximations are used: the Wallace and Freeman (1987) approximation and Dowe's MMLD approximation (2002). The former is used to calculate the relative abundances (mixing proportions) of the components, and the latter to estimate the distribution parameters of each component of the mixture model. The proposed method is applied to the analysis of two real-world datasets, the well-known (Fisher) Iris dataset and a diabetes dataset. The modelling results are then compared, in terms of probability bit-costings, with those obtained using two other modelling criteria, AIC and BIC (which is identical to Rissanen's 1978 MDL); the proposed MML method performs better than both. Furthermore, the MML method also recovers the three underlying Iris species more closely than either AIC or BIC.
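As a hedged illustration of the model-selection trade-off the abstract describes (this is not the authors' MML method, and the synthetic data, seed, and parameter counts are assumptions for illustration only), the sketch below fits a full-covariance Gaussian, which models correlation between variables, and a diagonal-covariance Gaussian, which assumes uncorrelated variables, to synthetic correlated 2-D data, then scores both fits with AIC and BIC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with strongly correlated variables (illustrative only;
# the paper itself uses the Iris and diabetes datasets).
n = 200
cov_true = np.array([[1.0, 0.8],
                     [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov_true, size=n)
d = X.shape[1]

def gaussian_loglik(X, mean, cov):
    """Total log-likelihood of the rows of X under a multivariate Gaussian."""
    diff = X - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)  # per-row Mahalanobis^2
    return float(np.sum(-0.5 * (d * np.log(2 * np.pi) + logdet + quad)))

mean = X.mean(axis=0)

# Full-covariance (correlated) model: d mean + d(d+1)/2 covariance parameters.
cov_full = np.cov(X, rowvar=False, bias=True)  # bias=True -> MLE covariance
k_full = d + d * (d + 1) // 2
ll_full = gaussian_loglik(X, mean, cov_full)

# Diagonal (uncorrelated) model: d mean + d variance parameters.
cov_diag = np.diag(X.var(axis=0))
k_diag = 2 * d
ll_diag = gaussian_loglik(X, mean, cov_diag)

# AIC = 2k - 2 ln L ; BIC = k ln n - 2 ln L (penalty grows with parameters).
aic_full = 2 * k_full - 2 * ll_full
aic_diag = 2 * k_diag - 2 * ll_diag
bic_full = k_full * np.log(n) - 2 * ll_full
bic_diag = k_diag * np.log(n) - 2 * ll_diag

print(f"full: AIC={aic_full:.1f}  BIC={bic_full:.1f}")
print(f"diag: AIC={aic_diag:.1f}  BIC={bic_diag:.1f}")
```

With correlation this strong, the likelihood gain from the single extra covariance parameter far exceeds the AIC/BIC penalty, so the full-covariance model wins under both criteria; the paper's point is that an MML criterion can make this kind of trade-off, for correlated components within each mixture class, more accurately than AIC or BIC.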
Keywords: Unsupervised Classification; Mixture Modelling; Machine Learning; Knowledge Discovery and Data Mining; Minimum Message Length; MML; Classification; Clustering; Intrinsic Classification; Numerical Taxonomy; Information Theory; Statistical Inference
- 1. Agusta, Y., Dowe, D.L.: Clustering of Gaussian and t Distributions using Minimum Message Length. In: Proc. Int'l. Conf. Knowledge Based Computer Systems (KBCS-2002), Mumbai, India, pp. 289–299. Vikas Publishing House Pvt. Ltd. (2002)
- 5. Cheeseman, P., Stutz, J.: Bayesian Classification (AutoClass): Theory and Results. In: Advances in Knowledge Discovery and Data Mining, pp. 153–180. AAAI Press/MIT Press (1996)
- 9. Fisher, R.A.: The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, 179–188 (1936)
- 11. Fitzgibbon, L.J., Dowe, D.L., Allison, L.: Univariate Polynomial Inference by Monte Carlo Message Length Approximation. In: Proc. 19th International Conf. on Machine Learning (ICML 2002), Sydney, pp. 147–154. Morgan Kaufmann, San Francisco (2002)
- 13. Fraley, C., Raftery, A.E.: MCLUST: Software for Model-Based Cluster and Discriminant Analysis. Technical Report 342, Statistics Dept., University of Washington, Seattle, USA (1998)
- 16. Lam, E.: Improved approximations in MML. Honours Thesis, School of Computer Science and Software Engineering, Monash University, Clayton 3800, Australia (2000)
- 18. McLachlan, G.J., Peel, D., Basford, K.E., Adams, P.: The EMMIX software for the fitting of mixtures of Normal and t-components. Journal of Statistical Software 4 (1999)
- 26. Wallace, C.S.: An improved program for classification. In: Proc. 9th Australian Computer Science Conference (ACSC-9), vol. 8, pp. 357–366. Monash University, Australia (1986)
- 28. Wallace, C.S., Dowe, D.L.: Intrinsic classification by MML - the Snob program. In: Proc. 7th Australian Joint Conf. on Artificial Intelligence, pp. 37–44. World Scientific, Singapore (1994)
- 29. Wallace, C.S., Dowe, D.L.: MML Mixture Modelling of Multi-State, Poisson, von Mises Circular and Gaussian Distributions. In: Proc. 6th International Workshop on Artificial Intelligence and Statistics, Florida, pp. 529–536 (1997)