Model Selection Using a Class of Kernels with an Invariant Metric

  • Akira Tanaka
  • Masashi Sugiyama
  • Hideyuki Imai
  • Mineichi Kudo
  • Masaaki Miyakoshi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4109)

Abstract

Learning with kernel machines is widely recognized as a powerful tool in various fields of information science, such as pattern recognition and regression estimation. The efficacy of a model in a kernel machine depends on the distance between the unknown true function and the linear subspace of the reproducing kernel Hilbert space of the adopted kernel that is spanned by the training data. In this paper, we propose a framework for model selection in kernel-based learning machines that incorporates a class of kernels with an invariant metric.
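The geometric quantity in the abstract admits a concrete reading. With noise-free samples y_i = f(x_i), the reproducing property gives <f, k(., x_i)> = y_i, so the orthogonal projection P_S f of the true function f onto the subspace S spanned by k(., x_1), ..., k(., x_n) has squared norm y^T K^{-1} y, where K is the Gram matrix, and by orthogonality the squared distance is ||f - P_S f||^2 = ||f||^2 - y^T K^{-1} y. If every candidate kernel induces the same RKHS norm (the invariant metric), ||f||^2 is a common constant and these distances become comparable across kernels even though f is unknown. The sketch below computes y^T K^{-1} y for a few candidates; the Gaussian family, the helper names, and the toy target are illustrative assumptions, not the authors' construction.

    import numpy as np

    def gaussian_kernel(X1, X2, width):
        # Gram matrix of k(x, x') = exp(-||x - x'||^2 / (2 * width^2)).
        d2 = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-np.maximum(d2, 0.0) / (2.0 * width**2))

    def projection_norm_sq(K, y, ridge=1e-10):
        # ||P_S f||_H^2 = y^T K^{-1} y for noise-free samples y_i = f(x_i);
        # a small ridge stabilises the solve when K is ill-conditioned.
        n = K.shape[0]
        return float(y @ np.linalg.solve(K + ridge * np.eye(n), y))

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(30, 1))   # training inputs
    y = np.sin(2.0 * np.pi * X[:, 0])         # noise-free samples of a toy "true" function

    # Hypothetical candidate family: Gaussian kernels of several widths.
    for width in (0.05, 0.1, 0.2, 0.5):
        K = gaussian_kernel(X, X, width)
        print(f"width={width}: ||P_S f||^2 = {projection_norm_sq(K, y):.4f}")

Under the invariant-metric assumption, the candidate maximising this score is the one whose data-spanned subspace lies closest to f. For an arbitrary family such as the Gaussian widths above, ||f||_H differs from kernel to kernel, so the scores are not directly comparable; this is exactly the difficulty that a class of kernels with an invariant metric is meant to remove.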

Keywords

True Function, Reproducing Kernel Hilbert Space, Kernel Machine, Parametric Projection, Machine Learning Problem


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Akira Tanaka (1)
  • Masashi Sugiyama (2)
  • Hideyuki Imai (1)
  • Mineichi Kudo (1)
  • Masaaki Miyakoshi (1)

  1. Division of Computer Science, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
  2. Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
