Structural least square twin support vector machine for classification


Abstract

The least square twin support vector machine (LS-TSVM) obtains two non-parallel hyperplanes by directly solving two systems of linear equations rather than the two quadratic programming problems (QPPs) required by the conventional twin support vector machine (TSVM), which makes LS-TSVM computationally faster than TSVM. However, LS-TSVM ignores the structural information of the data, which may contain vital prior domain knowledge for training a classifier. In this paper, we incorporate this prior structural information into LS-TSVM to build a better classifier, called the structural least square twin support vector machine (S-LSTSVM). Because it incorporates the data distribution information into the model, S-LSTSVM achieves good generalization performance. Furthermore, S-LSTSVM requires less training time than other existing methods based on structural information, since it only solves two systems of linear equations. Experimental results on twelve benchmark datasets demonstrate that S-LSTSVM performs well. Finally, we apply it to Alzheimer's disease diagnosis to further demonstrate the advantage of our algorithm.
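The abstract describes the method only at a high level. The sketch below illustrates the general idea under two assumptions drawn from related structural SVM-type formulations rather than from this paper's equations: the structural information of each class is summarized by the sum of its within-class cluster covariance matrices, and that term enters each LS-TSVM problem as an extra regularizer of the form w^T Σ w. All names here (class_covariance, s_lstsvm_planes, the parameter lam) are illustrative; the published S-LSTSVM's exact penalty placement, clustering step, and kernelized version may differ.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def class_covariance(X, n_clusters=2):
    """Sum of within-cluster covariance matrices for one class.
    Ward hierarchical clustering is one common choice; the paper's
    exact clustering procedure may differ."""
    labels = AgglomerativeClustering(n_clusters=n_clusters,
                                     linkage="ward").fit_predict(X)
    sigma = np.zeros((X.shape[1], X.shape[1]))
    for k in np.unique(labels):
        Xk = X[labels == k]
        if len(Xk) > 1:                       # skip singleton clusters
            sigma += np.cov(Xk, rowvar=False)
    return sigma

def s_lstsvm_planes(A, B, c1=1.0, c2=1.0, lam=0.1, eps=1e-8):
    """Sketch: fit the two non-parallel hyperplanes (w1,b1), (w2,b2)
    by solving two (n+1)x(n+1) linear systems, with an assumed
    structural term lam * w' Sigma w added to each LS-TSVM problem."""
    m1, n = A.shape
    m2 = B.shape[0]
    E = np.hstack([A, np.ones((m1, 1))])      # augmented class +1 matrix [A e]
    F = np.hstack([B, np.ones((m2, 1))])      # augmented class -1 matrix [B e]
    S1 = np.zeros((n + 1, n + 1)); S1[:n, :n] = class_covariance(A)
    S2 = np.zeros((n + 1, n + 1)); S2[:n, :n] = class_covariance(B)
    R = eps * np.eye(n + 1)                   # small ridge for numerical stability

    # Plane 1 (close to class +1, far from class -1):
    # (F'F + (1/c1) E'E + lam*S1) z1 = -F'e
    z1 = np.linalg.solve(F.T @ F + (1.0 / c1) * (E.T @ E) + lam * S1 + R,
                         -F.T @ np.ones(m2))
    # Plane 2 (close to class -1, far from class +1):
    # (E'E + (1/c2) F'F + lam*S2) z2 = E'e
    z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * (F.T @ F) + lam * S2 + R,
                         E.T @ np.ones(m1))
    return (z1[:n], z1[n]), (z2[:n], z2[n])

def predict(x, planes):
    """Assign a sample to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = planes
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 1 if d1 <= d2 else -1
```

Whatever the exact formulation, the computational point of the abstract is visible here: each classifier reduces to one dense (n+1)-dimensional linear solve rather than a QPP, which is the source of the claimed speed advantage over other structural-information-based methods.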


Notes

  1. http://archive.ics.uci.edu/ml/datasets.html.

  2. http://adni.loni.ucla.edu/about/data-statistics/


Acknowledgments

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation. This work was supported in part by National Natural Science Foundation of China (No. 61153003), China Scholarship Fund (No. 201208110282), and Chinese Universities Scientific Fund (No. 2014QJ003).

Author information

Corresponding author

Correspondence to Yitian Xu.

About this article

Cite this article

Xu, Y., Pan, X., Zhou, Z. et al. Structural least square twin support vector machine for classification. Appl Intell 42, 527–536 (2015). https://doi.org/10.1007/s10489-014-0611-4
