
Sparse Reductions for Fixed-Size Least Squares Support Vector Machines on Large Scale Data

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7818)


Abstract

Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large-scale classification and regression problems. FS-LSSVM solves an over-determined system of M linear equations in the primal by using Nyström approximations on a set of prototype vectors (PVs). This introduces sparsity in the model along with the ability to scale to large datasets. However, there exists no formal method for selecting the right value of M. In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps overcome the problem of selecting the right number of initial PVs, as the final model is highly sparse and depends on only a few appropriately selected prototype vectors (the SV set), which is a subset of the PVs. The first proposed method performs an iterative approximation of the L0-norm, which acts as a regularizer. The second method belongs to the category of threshold methods: we set a window and select the SV set from the correctly classified PVs lying closest to and farthest from the decision boundary in the case of classification. For regression, we obtain the SV set by selecting the PVs with the smallest mean squared error (MSE). Experiments on real-world datasets from the UCI repository illustrate that highly sparse models are obtained without a significant trade-off in error estimation, and that the approach scales to large datasets.
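As a concrete illustration of the two steps above, the sketch below (NumPy) first solves the over-determined FS-LSSVM primal system on Nyström features built from M prototype vectors, then adds a second level of sparsity via an iteratively reweighted approximation of the L0-norm. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the RBF kernel choice, and the uniformly random PV selection (FS-LSSVM normally selects PVs with an entropy-based criterion) are all illustrative, and the reweighting here acts on the primal coefficients for simplicity, whereas the paper's second sparsification level selects an SV subset of the PVs.

```python
# Minimal sketch (not the authors' code) of (1) the FS-LSSVM primal solve on
# Nystrom features and (2) an iteratively reweighted L0-style sparsification.
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_features(X, pvs, sigma, eps=1e-12):
    """Nystrom feature map built on M prototype vectors (PVs):
    phi_hat(x) = Lambda^{-1/2} U^T k(PVs, x), with (Lambda, U) the
    eigendecomposition of the M x M kernel matrix on the PVs."""
    lam, U = np.linalg.eigh(rbf_kernel(pvs, pvs, sigma))
    keep = lam > eps                               # drop numerically zero modes
    return rbf_kernel(X, pvs, sigma) @ (U[:, keep] / np.sqrt(lam[keep]))

def fs_lssvm_primal(Phi, y, gamma, n_iter=1, eps=1e-8):
    """Ridge solve of the over-determined primal system on the Nystrom
    features; with n_iter > 1, per-coefficient weights 1/(w_i^2 + eps)
    give an iteratively reweighted approximation of the L0-norm."""
    Phi1 = np.hstack([Phi, np.ones((len(Phi), 1))])  # append a bias column
    lam = np.ones(Phi1.shape[1])
    lam[-1] = 0.0                                    # never penalize the bias
    for _ in range(n_iter):
        A = Phi1.T @ Phi1 + np.diag(lam) / gamma     # regularized normal eqs.
        wb = np.linalg.solve(A, Phi1.T @ y)
        lam[:-1] = 1.0 / (wb[:-1] ** 2 + eps)        # L0-style reweighting
    return wb[:-1], wb[-1]

# Toy usage: random PVs stand in for the entropy-based selection normally
# used with FS-LSSVM; labels are in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
pvs = X[rng.choice(len(X), size=50, replace=False)]
Phi = nystrom_features(X, pvs, sigma=2.0)
w, b = fs_lssvm_primal(Phi, y, gamma=10.0)               # plain FS-LSSVM pass
ws, bs = fs_lssvm_primal(Phi, y, gamma=10.0, n_iter=25)  # sparsified refit
acc = (np.sign(Phi @ ws + bs) == y).mean()
# Coefficients driven below ~1e-4 are effectively pruned from the model.
print(f"nonzero coefficients: {(abs(ws) > 1e-4).sum()}/{len(ws)}, "
      f"accuracy: {acc:.3f}")
```

Each pass penalizes coefficient i by 1/(w_i^2 + eps), so coefficients that shrink are penalized ever more strongly and collapse toward zero; this is what makes the iterative reweighting behave like an L0 (counting) penalty while each subproblem remains a linear solve.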





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mall, R., Suykens, J.A.K. (2013). Sparse Reductions for Fixed-Size Least Squares Support Vector Machines on Large Scale Data. In: Pei, J., Tseng, V.S., Cao, L., Motoda, H., Xu, G. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2013. Lecture Notes in Computer Science, vol. 7818. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37453-1_14


  • DOI: https://doi.org/10.1007/978-3-642-37453-1_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37452-4

  • Online ISBN: 978-3-642-37453-1

  • eBook Packages: Computer Science, Computer Science (R0)
