New Least Squares Support Vector Machines Based on Matrix Patterns

Abstract

The support vector machine (SVM), an effective method for classification problems, seeks the optimal hyperplane that maximizes the margin between two classes; this hyperplane is obtained by solving a constrained optimization criterion via quadratic programming (QP), which incurs a high computational cost. The least squares support vector machine (LS-SVM), a variant of SVM, avoids this shortcoming by obtaining an analytical solution from a set of linear equations instead of a QP. Both SVM and LS-SVM operate directly on patterns represented as vectors, i.e., before SVM or LS-SVM can be applied, any non-vector pattern such as an image must first be vectorized by a technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Moreover, since the dimension d of the weight vector in SVM or LS-SVM with the linear kernel equals the dimension d1 × d2 of the original input pattern, the higher the dimension of a vector pattern, the more space is needed to store it. In this paper, inspired by feature extraction methods that operate directly on matrix patterns and by the advantages of LS-SVM, we propose a new classifier design method based on matrix patterns, called MatLSSVM, which can not only operate directly on original matrix patterns but also reduce the memory required for the weight vector from d1 × d2 to d1 + d2. However, like LS-SVM, MatLSSVM suffers from unclassifiable regions when extended to multi-class problems. Thus, following the fuzzy version of LS-SVM, a corresponding fuzzy version of MatLSSVM (MatFLSSVM) is further proposed to remove unclassifiable regions effectively in multi-class problems. Experimental results on several benchmark datasets show that the proposed methods are competitive in classification performance with LS-SVM, fuzzy LS-SVM (FLS-SVM), and the more recent MatPCA and MatFLDA. More importantly, the idea used here may provide a novel way of constructing learning models.
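The abstract describes the methods only at a high level; the sketch below (Python/NumPy, not the authors' code) is a minimal illustration of the two ideas it contrasts. It assumes (i) a linear-kernel LS-SVM trained by solving the standard LS-SVM linear system in a single step, and (ii) a matrix-pattern classifier with a bilinear decision function f(A) = u^T A v + b, with u in R^{d1} and v in R^{d2}, fitted by alternately fixing one weight vector and solving an LS-SVM-style system for the other. The bilinear form, the alternating scheme, and the function names (`lssvm_fit`, `matlssvm_fit`) are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (illustrative only) of the two ideas contrasted in the abstract:
#  (1) LS-SVM: the classifier follows from one linear system instead of a QP.
#  (2) A matrix-pattern variant: assumed bilinear decision function
#      f(A) = u^T A v + b, trained by alternating LS-SVM-style solves,
#      so only d1 + d2 weight components are stored instead of d1 * d2.
import numpy as np

def lssvm_fit(X, y, gamma=1.0):
    """Linear-kernel LS-SVM: solve [[0, y^T], [y, Omega + I/gamma]] [b; a] = [0; 1]."""
    n = X.shape[0]
    K = X @ X.T                               # linear kernel matrix
    Omega = (y[:, None] * y[None, :]) * K     # Omega_ij = y_i y_j K(x_i, x_j)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)             # one linear solve, no QP
    b, alpha = sol[0], sol[1:]
    w = (alpha * y) @ X                       # primal weight vector (dimension d)
    return w, b

def lssvm_predict(w, b, X):
    return np.sign(X @ w + b)

def matlssvm_fit(As, y, gamma=1.0, n_iter=10):
    """Alternating sketch for f(A) = u^T A v + b; stores only d1 + d2 weights."""
    d2 = As[0].shape[1]
    v = np.ones(d2) / np.sqrt(d2)             # arbitrary initialisation of v
    for _ in range(n_iter):
        Xu = np.stack([A @ v for A in As])    # with v fixed, each pattern becomes A v in R^{d1}
        u, b = lssvm_fit(Xu, y, gamma)
        Xv = np.stack([A.T @ u for A in As])  # with u fixed, each pattern becomes A^T u in R^{d2}
        v, b = lssvm_fit(Xv, y, gamma)
    return u, v, b

def matlssvm_predict(u, v, b, As):
    return np.sign(np.array([u @ A @ v + b for A in As]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy 8x8 "image" patterns from two classes with shifted means
    As = np.concatenate([rng.normal(0.5, 1.0, (50, 8, 8)),
                         rng.normal(-0.5, 1.0, (50, 8, 8))])
    y = np.concatenate([np.ones(50), -np.ones(50)])
    u, v, b = matlssvm_fit(As, y)
    print("weights stored:", u.size + v.size, "vs vectorized:", 8 * 8)
    print("training accuracy:", np.mean(matlssvm_predict(u, v, b, As) == y))
```

The final printout makes the abstract's storage argument concrete: for 8 × 8 patterns the matrix-based model keeps only d1 + d2 = 16 weight components, whereas a vectorized linear model would need d1 × d2 = 64.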

Author information

Corresponding author

Correspondence to Songcan Chen.

Cite this article

Wang, Z., Chen, S. New Least Squares Support Vector Machines Based on Matrix Patterns. Neural Process Lett 26, 41–56 (2007). https://doi.org/10.1007/s11063-007-9041-1
