Abstract
Our main contribution is a novel model selection methodology, expectation minimization of description length (EMDL), based on the minimum description length (MDL) principle. EMDL addresses the combinatorial scalability issue in model selection for mixture models whose components may be of heterogeneous types, where the goal is to optimize the types of the components as well as their number. The key idea in EMDL is to iterate between computing the posterior over latent variables and minimizing the expected description length of both the observed data and the latent variables. This allows EMDL to find the optimal model in time linear in both the number of components and the number of available component types, even though the number of candidate models grows exponentially in these quantities. We prove that EMDL is compliant with the MDL principle and inherits its statistical benefits.
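The paper itself is not shown here, but the baseline that EMDL accelerates can be illustrated. The following is a hedged, minimal sketch (not the authors' EMDL algorithm) of MDL-style model selection over the number of components in a 1-D Gaussian mixture: EM fits each candidate k, and a simple two-part code length — negative log-likelihood plus a (p/2) log n parameter cost, a common rough approximation of stochastic complexity — scores each fit. Enumerating and fitting every candidate this way is exactly the combinatorial cost that grows quickly once component *types* also vary; the function and variable names (`em_gmm_1d`, `description_length`, `best_k`) are illustrative, not from the paper.

```python
import math
import random

def em_gmm_1d(x, k, iters=200):
    """Fit a k-component 1-D Gaussian mixture by EM; return (weights, means, variances, log-likelihood)."""
    n = len(x)
    xs = sorted(x)
    mean_all = sum(x) / n
    var_all = max(1e-6, sum((xi - mean_all) ** 2 for xi in x) / n)
    # Deterministic spread initialization: means at evenly spaced quantiles of the data.
    mu = [xs[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    var = [var_all] * k
    w = [1.0 / k] * k

    def pdf(xi, j):
        # Weighted Gaussian density of component j at point xi.
        return w[j] * math.exp(-(xi - mu[j]) ** 2 / (2 * var[j])) / math.sqrt(2 * math.pi * var[j])

    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for xi in x:
            dens = [pdf(xi, j) for j in range(k)]
            s = sum(dens) or 1e-300
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            w[j] = nj / n
            mu[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            var[j] = max(1e-6, sum(r[j] * (xi - mu[j]) ** 2 for r, xi in zip(resp, x)) / nj)

    loglik = sum(math.log(sum(pdf(xi, j) for j in range(k)) or 1e-300) for xi in x)
    return w, mu, var, loglik

def description_length(x, k):
    """Two-part code length: -log-likelihood + (p/2) log n, with p = 3k - 1 free parameters."""
    _, _, _, loglik = em_gmm_1d(x, k)
    p = 3 * k - 1  # k means, k variances, k - 1 free mixing weights
    return -loglik + 0.5 * p * math.log(len(x))

# Two well-separated clusters; the shortest description length should select k = 2.
rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(150)] + [rng.gauss(6.0, 1.0) for _ in range(150)]
best_k = min(range(1, 4), key=lambda k: description_length(data, k))
```

Note that this exhaustive fit-and-score loop runs EM once per candidate model; EMDL's contribution, per the abstract, is to fold the selection into the iteration itself so that the cost stays linear in the number of components and component types.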
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Fujimaki, R., Morinaga, S., Momma, M., Aoki, K., Nakata, T. (2009). Linear Time Model Selection for Mixture of Heterogeneous Components. In: Zhou, Z.H., Washio, T. (eds.) Advances in Machine Learning. ACML 2009. Lecture Notes in Computer Science, vol. 5828. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05224-8_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-05223-1
Online ISBN: 978-3-642-05224-8