
Linear Time Model Selection for Mixture of Heterogeneous Components

  • Conference paper
Advances in Machine Learning (ACML 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5828)


Abstract

Our main contribution is a novel model selection methodology, expectation minimization of description length (EMDL), based on the minimum description length (MDL) principle. EMDL addresses the combinatorial scalability issue in model selection for mixture models whose components may be of heterogeneous types, where the goal is to optimize the type of each component as well as the number of components. The key idea in EMDL is to iterate between computing the posterior of the latent variables and minimizing the expected description length of both the observed data and the latent variables. This enables EMDL to compute the optimal model in time linear in both the number of components and the number of available component types, even though the number of candidate models grows exponentially in these quantities. We prove that EMDL is compliant with the MDL principle and therefore inherits its statistical benefits.
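The iteration described above can be illustrated with a minimal sketch. This is not the paper's implementation: the two component types (Gaussian and Laplace), the weighted-mean Laplace fit, and the `0.5 * n_params * log(n)` complexity penalty are all simplifying assumptions chosen to show the structural point, namely that the M-step evaluates each type for each component independently, costing O(K x T) per iteration instead of searching all T^K type assignments.

```python
import math
import numpy as np

# Hypothetical heterogeneous component types (illustrative, not from the paper):
# each fits its parameters by weighted maximum likelihood and reports log-densities.
class GaussianType:
    n_params = 2
    def fit(self, x, w):
        self.mu = np.average(x, weights=w)
        self.var = np.average((x - self.mu) ** 2, weights=w) + 1e-9
    def logpdf(self, x):
        return -0.5 * (np.log(2 * np.pi * self.var) + (x - self.mu) ** 2 / self.var)

class LaplaceType:
    n_params = 2
    def fit(self, x, w):
        # weighted mean used in place of the exact weighted median, for brevity
        self.loc = np.average(x, weights=w)
        self.b = np.average(np.abs(x - self.loc), weights=w) + 1e-9
    def logpdf(self, x):
        return -np.log(2 * self.b) - np.abs(x - self.loc) / self.b

def emdl_sketch(x, K, types=(GaussianType, LaplaceType), n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    R = rng.dirichlet(np.ones(K), size=n)   # initial responsibilities
    pi = np.full(K, 1.0 / K)
    comps = [types[0]() for _ in range(K)]
    for _ in range(n_iter):
        # M-step: for each component, try every available type independently
        # and keep the one minimizing expected description length -> O(K * T).
        for k in range(K):
            w = R[:, k]
            best, best_dl = None, np.inf
            for T in types:
                c = T()
                c.fit(x, w)
                # expected code length of data + crude parametric complexity term
                dl = -(w * c.logpdf(x)).sum() + 0.5 * c.n_params * math.log(n)
                if dl < best_dl:
                    best, best_dl = c, dl
            comps[k] = best
            pi[k] = w.sum() / n
        # E-step: posterior responsibilities of the latent assignments
        logp = np.stack([np.log(pi[k] + 1e-12) + comps[k].logpdf(x)
                         for k in range(K)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
    return comps, pi, R
```

Because the type choice for one component does not constrain the type choices for the others once the responsibilities are fixed, the inner loop over `types` is an independent per-component scan, which is what keeps the per-iteration cost linear rather than exponential.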




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fujimaki, R., Morinaga, S., Momma, M., Aoki, K., Nakata, T. (2009). Linear Time Model Selection for Mixture of Heterogeneous Components. In: Zhou, Z.H., Washio, T. (eds) Advances in Machine Learning. ACML 2009. Lecture Notes in Computer Science, vol 5828. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05224-8_8


  • DOI: https://doi.org/10.1007/978-3-642-05224-8_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05223-1

  • Online ISBN: 978-3-642-05224-8

  • eBook Packages: Computer Science (R0)
