
Kernel Trick Embedded Gaussian Mixture Model

  • Conference paper
Algorithmic Learning Theory (ALT 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2842)

Abstract

In this paper, we present a Gaussian Mixture Model (GMM) with the kernel trick embedded, called kernel GMM. The basic idea is to embed the kernel trick into the EM algorithm and derive a parameter estimation algorithm for the GMM in feature space. Kernel GMM can be viewed as a Bayesian kernel method. Unlike most classical kernel methods, the proposed method solves problems within a probabilistic framework; moreover, it handles nonlinear problems better than the traditional GMM. To avoid the high computational cost that most kernel methods incur on large-scale data sets, we also employ a Monte Carlo sampling technique to speed up kernel GMM, making it more practical and efficient. Experimental results on synthetic and real-world data sets demonstrate that the proposed approach performs satisfactorily.
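The abstract's idea of running EM for a GMM in kernel feature space can be sketched in a few lines, since with an isotropic per-component variance every E-step quantity reduces to kernel evaluations: the squared distance from a mapped point to a kernel-space mean is ||φ(x_i) − μ_c||² = K_ii − 2(Kr_c)_i/N_c + r_cᵀK r_c/N_c². The sketch below is an illustration under those simplifying assumptions (RBF kernel, shared isotropic variances, a fixed feature-dimension constant absorbed into the normaliser), not the paper's actual derivation; the helper names `rbf_kernel` and `kernel_gmm` are our own.

```python
# A minimal sketch (assumed, not from the paper) of EM for a Gaussian mixture
# fitted in kernel feature space, with an RBF kernel and isotropic variances
# so that all required quantities reduce to entries of the Gram matrix K.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_gmm(K, n_components=2, n_iter=50, seed=0):
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    # responsibilities r[i, c] = p(component c | x_i), randomly initialised
    r = rng.dirichlet(np.ones(n_components), size=n)
    for _ in range(n_iter):
        # M-step: effective component sizes and mixing weights
        Nc = r.sum(axis=0) + 1e-12              # (C,), guarded against collapse
        pi = Nc / n
        # squared feature-space distance to each kernel mean:
        # ||phi(x_i) - mu_c||^2 = K_ii - 2 (K r_c)_i / N_c + r_c^T K r_c / N_c^2
        cross = K @ r / Nc                      # (n, C)
        quad = np.einsum('ic,ij,jc->c', r, K, r) / Nc**2
        d2 = np.maximum(np.diag(K)[:, None] - 2 * cross + quad[None, :], 0.0)
        # isotropic variance per component (a simplifying assumption)
        var = (r * d2).sum(axis=0) / Nc + 1e-8
        # E-step: responsibilities from unnormalised Gaussian log-densities
        log_r = np.log(pi)[None, :] - 0.5 * d2 / var - 0.5 * np.log(var)[None, :]
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
    return r

# toy data: two concentric rings, which a GMM in input space separates poorly
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
radius = np.r_[np.ones(100), 3 * np.ones(100)]
X = np.c_[radius * np.cos(t), radius * np.sin(t)] + 0.1 * rng.standard_normal((200, 2))
resp = kernel_gmm(rbf_kernel(X, gamma=2.0), n_components=2)
labels = resp.argmax(axis=1)
```

The Monte Carlo speedup mentioned in the abstract would correspond, in this sketch, to subsampling the rows and columns of `K` before running the loop, since the O(n²) kernel sums dominate the cost.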




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wang, J., Lee, J., Zhang, C. (2003). Kernel Trick Embedded Gaussian Mixture Model. In: Gavaldá, R., Jantke, K.P., Takimoto, E. (eds) Algorithmic Learning Theory. ALT 2003. Lecture Notes in Computer Science (LNAI), vol 2842. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39624-6_14


  • DOI: https://doi.org/10.1007/978-3-540-39624-6_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20291-2

  • Online ISBN: 978-3-540-39624-6

  • eBook Packages: Springer Book Archive
