Building a Sparse Kernel Classifier on Riemannian Manifold

  • Yanyun Qu
  • Zejian Yuan
  • Nanning Zheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4270)

Abstract

It is difficult to handle large datasets with kernel-based methods, since the number of basis functions required for an optimal solution equals the number of samples. We present an approach to building a sparse kernel classifier by constraining both the number of support vectors and the classifier function. The classifier is formulated on a Riemannian manifold, and a sparse greedy learning algorithm is used to solve the resulting problem. Experimental results on several classification benchmarks show that the proposed approach reduces the training and runtime complexity of kernel classifiers applied to large datasets without sacrificing classification accuracy.
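The abstract does not spell out the algorithm, but the sparse greedy idea it builds on can be sketched generically: instead of using every training sample as a basis function, greedily select a small subset of samples, at each step adding the one that most reduces the training residual. The sketch below is a minimal illustration of that generic scheme for a least-squares kernel classifier; the RBF kernel, the selection criterion, and all function names are illustrative assumptions, not the authors' exact Riemannian formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_greedy_classifier(X, y, max_basis=10, gamma=1.0):
    """Greedy forward selection of basis points for a least-squares
    kernel classifier with labels y in {-1, +1}.  At each step, the
    candidate sample whose inclusion gives the smallest squared
    residual on the training set is added to the basis."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    selected = []
    for _ in range(max_basis):
        best_err, best_j = np.inf, None
        for j in range(n):
            if j in selected:
                continue
            cols = selected + [j]
            A = K[:, cols]
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.sum((A @ w - y) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
    # Refit weights on the final basis.
    A = K[:, selected]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return selected, w

def predict(X_train, selected, w, X_test, gamma=1.0):
    # Decision rule uses only the selected basis points, so run-time
    # cost scales with max_basis rather than the training-set size.
    return np.sign(rbf_kernel(X_test, X_train[selected], gamma) @ w)
```

The point of the cap `max_basis` is exactly the complexity reduction the abstract claims: prediction touches only the retained basis points, so runtime no longer grows with the full training set.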

Keywords

Support Vector Machine, Support Vector, Riemannian Manifold, Relevance Vector Machine, Machine Learning Research


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yanyun Qu (1, 2)
  • Zejian Yuan (1)
  • Nanning Zheng (1)
  1. Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, P.R. China
  2. Computer Science Department, Xiamen University, Xiamen, P.R. China
