Linear Time Model Selection for Mixture of Heterogeneous Components

  • Ryohei Fujimaki
  • Satoshi Morinaga
  • Michinari Momma
  • Kenji Aoki
  • Takayuki Nakata
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5828)

Abstract

Our main contribution is a novel model selection methodology, expectation minimization of description length (EMDL), based on the minimum description length (MDL) principle. EMDL addresses the combinatorial scalability issue in model selection for mixture models built from heterogeneous component types, where the goal is to optimize the type of each component as well as the number of components. The key idea in EMDL is to iterate between computing the posterior of the latent variables and minimizing the expected description length of both the observed data and the latent variables. This enables EMDL to find the optimal model in time linear in both the number of components and the number of available component types, even though the number of model candidates grows exponentially in these quantities. We prove that EMDL is compliant with the MDL principle and enjoys its statistical benefits.
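The iteration the abstract describes — alternate between an E-step that computes the posterior over latent assignments and an M-step that minimizes expected description length, selecting each component's type independently — can be sketched as follows. This is an illustrative toy, not the paper's actual EMDL derivation: the candidate types (Gaussian vs. Laplace), the crude code-length penalty (expected negative log-likelihood plus a 0.5·d·log n parameter code), and the function `emdl_sketch` are all assumptions made for the example. The point it demonstrates is the complexity claim: each iteration scores K components against T types, i.e. O(K·T) work, rather than enumerating all T^K type assignments.

```python
import numpy as np

# Two candidate component "types" (illustrative choice, not the paper's).
def gauss_logpdf(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)

def laplace_logpdf(x, mu, b):
    return -np.log(2 * b) - np.abs(x - mu) / b

TYPES = {"gauss": gauss_logpdf, "laplace": laplace_logpdf}

def emdl_sketch(x, K=2, iters=30, rng=None):
    """Toy EMDL-style loop on 1-D data: jointly fits K components and
    picks each component's type from TYPES (hypothetical helper)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    mu = rng.choice(x, K)                    # location per component
    scale = np.full(K, x.std() + 1e-9)       # scale per component
    pi = np.full(K, 1.0 / K)                 # mixing weights
    types = ["gauss"] * K
    for _ in range(iters):
        # E-step: posterior (responsibilities) of latent assignments
        # under the current component types and parameters.
        logp = np.stack([np.log(pi[k]) + TYPES[types[k]](x, mu[k], scale[k])
                         for k in range(K)])
        logp -= logp.max(axis=0)
        r = np.exp(logp)
        r /= r.sum(axis=0)                   # shape (K, n)
        # M-step: update parameters, then choose each component's type
        # independently by minimizing its expected description length,
        # approximated here as expected negative log-likelihood plus a
        # 0.5 * d * log(n_k) parameter-code penalty (d = 2 parameters).
        # This is K * T type evaluations, not T**K.
        nk = r.sum(axis=1) + 1e-12
        pi = nk / n
        for k in range(K):
            mu[k] = (r[k] * x).sum() / nk[k]
            scale[k] = np.sqrt((r[k] * (x - mu[k])**2).sum() / nk[k]) + 1e-9
            types[k] = min(
                TYPES,
                key=lambda t: -(r[k] * TYPES[t](x, mu[k], scale[k])).sum()
                              + 0.5 * 2 * np.log(max(nk[k], 2.0)))
    return types, mu, scale, pi
```

On data drawn from one Gaussian and one Laplace cluster, the loop fits both component locations and selects a type label for each component in a single pass of alternating updates; the per-iteration cost scales linearly in both K and the number of entries in `TYPES`.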

Keywords

heterogeneous mixture model · model selection · minimum description length

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Ryohei Fujimaki¹
  • Satoshi Morinaga¹
  • Michinari Momma¹
  • Kenji Aoki¹
  • Takayuki Nakata¹

  1. NEC Common Platform Software Research Laboratories