
Support Vector Machines


Abstract

The support vector machine (SVM) is one of the most popular nonparametric classification algorithms. Grounded in computational learning theory, it is optimal in the sense of finding the maximum-margin separating hyperplane. This chapter is dedicated to SVM. We first introduce the SVM model. Training methods for classification, clustering, and regression using SVM are then introduced in detail. Associated topics such as model architecture optimization are also described.
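The chapter's body is not included here, but the flavor of the training methods the abstract mentions can be conveyed with a minimal sketch: a linear soft-margin SVM fit by subgradient descent on the regularized hinge loss. The data, step size, and regularization constant below are invented for illustration; this is not one of the chapter's algorithms, which cover more sophisticated solvers.

```python
def train_linear_svm(X, y, lam=0.01, eta=0.1, epochs=500):
    """Subgradient descent on lam/2*||w||^2 + average hinge loss."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            # shrink w toward zero (gradient of the regularization term)
            w = [wj * (1.0 - eta * lam) for wj in w]
            if margin < 1:  # point violates the margin: hinge loss is active
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data: the label is the sign of x0 - x1.
X = [[2.0, 0.0], [1.5, 0.5], [0.0, 2.0], [0.5, 1.5]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

On this toy set the learned hyperplane approaches the maximum-margin separator x0 = x1; kernelized and large-scale variants replace this naive loop with dual or decomposition methods.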



    Article  Google Scholar 

  30. Cheong, S., Oh, S. H., & Lee, S.-Y. (2004). Support vector machines with binary tree architecture for multi-class classification. Neural Information Processing - Letters and Reviews, 2(3), 47–51.

    Google Scholar 

  31. Chew, H. G., Bogner, R. E., & Lim, C. C. (2001). Dual-\(\nu \) support vector machine with error rate and training size biasing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1269–1272).

    Google Scholar 

  32. Chiang, J.-H., & Hao, P.-Y. (2003). A new kernel-based fuzzy clustering approach: Support vector clustering with cell growing. IEEE Transactions on Fuzzy Systems, 11(4), 518–527.

    Article  Google Scholar 

  33. Choi, Y.-S. (2009). Least squares one-class support vector machine. Pattern Recognition Letters, 30, 1236–1240.

    Article  Google Scholar 

  34. Chu, W., Ong, C. J., & Keerthy, S. S. (2005). An improved conjugate gradient method scheme to the solution of least squares SVM. IEEE Transactions on Neural Networks, 16(2), 498–501.

    Article  Google Scholar 

  35. Chu, W., & Keerthi, S. S. (2007). Support vector ordinal regression. Neural Computation, 19, 792–815.

    Article  MathSciNet  MATH  Google Scholar 

  36. Collobert, R., & Bengio, S. (2001). SVMTorch: Support vector machines for large-scale regression problems. Journal of Machine Learning Research, 1, 143–160.

    MathSciNet  MATH  Google Scholar 

  37. Collobert, R., Sinz, F., Weston, J., & Bottou, L. (2006). Large scale transductive SVMs. Journal of Machine Learning Research, 7, 1687–1712.

    MathSciNet  MATH  Google Scholar 

  38. Cortes, C., & Vapnik, V. (1995). Support vector networks. Machine Learning, 20, 1–25.

    MATH  Google Scholar 

  39. Cox, D., & O’Sullivan, F. (1990). Asymptotic analysis of penalized likelihood and related estimators. Annals of Statistics, 18, 1676–1695.

    Article  MathSciNet  MATH  Google Scholar 

  40. Crammer, K., & Singer, Y. (2001). On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2, 265–292.

    MATH  Google Scholar 

  41. Davenport, M. A., Baraniuk, R. G., & Scott, C. D. (2010). Tuning support vector machines for minimax and Neyman-Pearson classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10), 1888–1898.

    Article  Google Scholar 

  42. de Kruif, B. J., & de Vries, T. J. A. (2003). Pruning error minimization in least squares support vector machines. IEEE Transactions on Neural Networks, 14(3), 696–702.

    Article  Google Scholar 

  43. Dietterich, T., & Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2, 263–286.

    Article  MATH  Google Scholar 

  44. Downs, T., Gates, K. E., & Masters, A. (2001). Exact simplification of support vector solutions. Journal of Machine Learning Research, 2, 293–297.

    MATH  Google Scholar 

  45. Drineas, P., & Mahoney, M. W. (2005). On the Nystrom method for approximating a gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6, 2153–2175.

    MathSciNet  MATH  Google Scholar 

  46. Dufrenois, F., Colliez, J., & Hamad, D. (2009). Bounded influence support vector regression for robust single-model estimation. IEEE Transactions on Neural Networks, 20(11), 1689–1706.

    Article  Google Scholar 

  47. Ertekin, S., Bottou, L., & Giles, C. L. (2011). Nonconvex online support vector machines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2), 368–381.

    Article  Google Scholar 

  48. Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. (2008). LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9, 1871–1874.

    MATH  Google Scholar 

  49. Fan, R.-E., Chen, P.-H., & Lin, C.-J. (2005). Working set selection using second order information for training support vector machines. Journal of Machine Learning Research, 6, 1889–1918.

    MathSciNet  MATH  Google Scholar 

  50. Fei, B., & Liu, J. (2006). Binary tree of SVM: A new fast multiclass training and classification algorithm. IEEE Transactions on Neural Networks, 17(3), 696–704.

    Article  MathSciNet  Google Scholar 

  51. Ferris, M. C., & Munson, T. S. (2000). Interior point methods for massive support vector machines. Technical Report 00-05. Madison, WI: Computer Sciences Department, University of Wisconsin.

    Google Scholar 

  52. Fine, S., & Scheinberg, K. (2001). Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2, 243–264.

    MATH  Google Scholar 

  53. Flake, G. W., & Lawrence, S. (2002). Efficient SVM regression training with SMO. Machine Learning, 46, 271–290.

    Article  MATH  Google Scholar 

  54. Franc, V., & Sonnenburg, S. (2009). Optimized cutting plane algorithm for large-scale risk minimization. Journal of Machine Learning Research, 10, 2157–2192.

    MathSciNet  MATH  Google Scholar 

  55. Friess, T., Cristianini, N., & Campbell, C. (1998). The kernel-adatron algorithm: A fast and simple learning procedure for support vector machines. In Proceedings of the 15th International Conference on Machine Learning (pp. 188–196). Madison, WI.

    Google Scholar 

  56. Fung, G., & Mangasarian, O. (2001). Proximal support vector machines. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 77–86). San Francisco, CA.

    Google Scholar 

  57. Fung, G., & Mangasarian, O. (2001). Semi-supervised support vector machines for unlabeled data classification. Optimization Methods and Software, 15, 29–44.

    Article  MATH  Google Scholar 

  58. Gentile, C. (2001). A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2, 213–242.

    MathSciNet  MATH  Google Scholar 

  59. Girosi, F. (1998). An equivalence between sparse approximation and support vector machines. Neural Computation, 10, 1455–1480.

    Article  Google Scholar 

  60. Glasmachers, T., & Igel, C. (2006). Maximum-gain working set selection for SVMs. Journal of Machine Learning Research, 7, 1437–1466.

    MathSciNet  MATH  Google Scholar 

  61. Glasmachers, T., & Igel, C. (2008). Second-order SMO improves SVM online and active learning. Neural Computation, 20, 374–382.

    Article  MathSciNet  MATH  Google Scholar 

  62. Grinblat, G. L., Uzal, L. C., Ceccatto, H. A., & Granitto, P. M. (2011). Solving nonstationary classification problems with coupled support vector machines. IEEE Transactions on Neural Networks, 22(1), 37–51.

    Article  Google Scholar 

  63. Gu, B., Sheng, V. S., Tay, K. Y., Romano, W., & Li, S. (2015). Incremental support vector learning for ordinal regression. IEEE Transactions on Neural Networks and Learning Systems, 26(7), 1403–1416.

    Article  MathSciNet  Google Scholar 

  64. Gu, B., Sheng, V. S., Wang, Z., Ho, D., Osman, S., & Li, S. (2015). Incremental learning for \(\nu \)-support vector regression. Neural Networks, 67, 140–150.

    Article  MATH  Google Scholar 

  65. Gu, B., Wang, J.-D., Yu, Y.-C., Zheng, G.-S., Huang, Y. F., & Xu, T. (2012). Accurate on-line \(\nu \)-support vector learning. Neural Networks, 27, 51–59.

    Article  MATH  Google Scholar 

  66. Gunter, L., & Zhu, J. (2007). Efficient computation and model selection for the support vector regression. Neural Computation, 19, 1633–1655.

    Article  MathSciNet  MATH  Google Scholar 

  67. Haasdonk, B. (2005). Feature space interpretation of SVMs with indefinite kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4), 482–92.

    Article  Google Scholar 

  68. Haasdonk, B., & Pekalska, E. (2008). Indefinite kernel Fisher discriminant. In Proceedings of the 19th International Conference on Pattern Recognition (pp. 1–4). Tampa, FL.

    Google Scholar 

  69. Hammer, B., & Gersmann, K. (2003). A note on the universal approximation capability of support vector machines. Neural Processing Letters, 17, 43–53.

    Article  Google Scholar 

  70. Hao, P.-Y. (2010). New support vector algorithms with parametric insensitive/margin model. Neural Networks, 23, 60–73.

    Article  MATH  Google Scholar 

  71. Hao, P.-Y. (2017). Pair-\(\nu \)-SVR: A novel and efficient pairing \(\nu \)-support vector regression algorithm. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2503–2515.

    Article  MathSciNet  Google Scholar 

  72. Hastie, T., Rosset, S., Tibshirani, R., & Zhu, J. (2004). The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5, 1391–1415.

    MathSciNet  MATH  Google Scholar 

  73. Hsu, C.-W., & Lin, C.-J. (2002). A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13(2), 415–425.

    Article  Google Scholar 

  74. Huang, K., Jiang, H., & Zhang, X.-Y. (2017). Field support vector machines. IEEE Transactions on Emerging Topics in Computational Intelligence, 1(6), 454–463.

    Article  Google Scholar 

  75. Huang, K., Yang, H., King, I., & Lyu, M. R. (2008). Maxi-min margin machine: learning large margin classifiers locally and globally. IEEE Transactions on Neural Networks, 19(2), 260–272.

    Article  Google Scholar 

  76. Huang, K., Zheng, D., Sun, J., Hotta, Y., Fujimoto, K., & Naoi, S. (2010). Sparse learning for support vector classification. Pattern Recognition Letters, 31, 1944–1951.

    Article  Google Scholar 

  77. Huang, X., Shi, L., & Suykens, J. A. K. (2014). Support vector machine classifier with pinball loss. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 984–997.

    Article  Google Scholar 

  78. Hush, D., & Scovel, C. (2003). Polynomial-time decomposition algorithms for support vector machines. Machine Learning, 51, 51–71.

    Article  MATH  Google Scholar 

  79. Hush, D., Kelly, P., Scovel, C., & Steinwart, I. (2006). QP Algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7, 733–769.

    MathSciNet  MATH  Google Scholar 

  80. Ikeda, K., & Murata, N. (2005). Geometrical properties of Nu support vector machines with different norms. Neural Computation, 17, 2508–2529.

    Article  MathSciNet  MATH  Google Scholar 

  81. Ito, N., Takeda, A., & Toh, K.-C. (2017). A unified formulation and fast accelerated proximal gradient method for classification. Journal of Machine Learning Research, 18, 1–49.

    MathSciNet  MATH  Google Scholar 

  82. Jandel, M. (2010). A neural support vector machine. Neural Networks, 23, 607–613.

    Article  Google Scholar 

  83. Jayadeva, Khemchandani, R., & Chandra, S. (2007). Twin support vector machines for pattern classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5), 905–910.

    Google Scholar 

  84. Jiao, L., Bo, L., & Wang, L. (2007). Fast sparse approximation for least squares support vector machine. IEEE Transactions on Neural Networks, 18(3), 685–697.

    Article  Google Scholar 

  85. Joachims, T. (1999). Making large-scale SVM learning practical. In B. Scholkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in kernel methods - support vector learning (pp. 169–184). Cambridge: MIT Press.

    Google Scholar 

  86. Joachims, T. (1999). Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning (pp. 200–209). San Mateo: Morgan Kaufmann.

    Google Scholar 

  87. Joachims, T. (2006). Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 217–226).

    Google Scholar 

  88. Joachims, T., Finley, T., & Yu, C.-N. J. (2009). Cutting-plane training of structural SVMs. Machine Learning, 77, 27–59.

    Article  MATH  Google Scholar 

  89. Joachims, T., & Yu, C.-N. J. (2009). Sparse kernel SVMs via cutting-plane training. Machine Learning, 76, 179–193.

    Article  Google Scholar 

  90. Jung, K.-H., Lee, D., & Lee, J. (2010). Fast support-based clustering method for large-scale problems. Pattern Recognition, 43, 1975–1983.

    Article  MATH  Google Scholar 

  91. Kao, W.-C., Chung, K.-M., Sun, C.-L., & Lin, C.-J. (2004). Decomposition methods for linear support vector machines. Neural Computation, 16, 1689–1704.

    Article  MATH  Google Scholar 

  92. Karal, O. (2017). Maximum likelihood optimal and robust support vector regression with lncosh loss function. Neural Networks, 94, 1–12.

    Article  Google Scholar 

  93. Katagiri, S., & Abe, S. (2006). Incremental training of support vector machines using hyperspheres. Pattern Recognition Letters, 27, 1495–1507.

    Article  Google Scholar 

  94. Keerthi, S. S., Chapelle, O., & DeCoste, D. (2006). Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research, 7, 1493–1515.

    MathSciNet  MATH  Google Scholar 

  95. Keerthi, S. S., & DeCoste, D. (2005). A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6, 341–361.

    MathSciNet  MATH  Google Scholar 

  96. Keerthi, S. S., & Gilbert, E. G. (2002). Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning, 46, 351–360.

    Article  MATH  Google Scholar 

  97. Keerthi, S. S., & Shevade, S. K. (2003). SMO for least squares SVM formulations. Neural Computation, 15, 487–507.

    Article  MATH  Google Scholar 

  98. Keerthi, S. S., Shevade, S. K., Bhattacharyya, C., & Murthy, K. R. K. (2001). Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Computation, 13(3), 637–649.

    Article  MATH  Google Scholar 

  99. Khemchandan, R., Saigal, P., & Chandra, S. (2016). Improvements on \(\nu \)-twin support vector machine. Neural Networks, 79, 97–107.

    Article  Google Scholar 

  100. Klement, S., Anders, S., & Martinetz, T. (2013). The support feature machine: Classification with the least number of features and application to neuroimaging data. Neural Networks, 25(6), 1548–1584.

    MathSciNet  MATH  Google Scholar 

  101. Knebel, T., Hochreiter, S., & Obermayer, K. (2008). An SMO algorithm for the potential support vector machine. Neural Computation, 20, 271–287.

    Article  MathSciNet  MATH  Google Scholar 

  102. Kramer, K. A., Hall, L. O., Goldgof, D. B., Remsen, A., & Luo, T. (2009). Fast support vector machines for continuous data. IEEE Transactions on Systems, Man, and Cybernetics Part B, 39(4), 989–1001.

    Article  Google Scholar 

  103. Kressel, U. H.-G. (1999). Pairwise classification and support vector machines. In B. Scholkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in kernel methods - support vector learning (pp. 255–268). Cambridge: MIT Press.

    Google Scholar 

  104. Kuh, A., & De Wilde, P. (2007). Comments on pruning error minimization in least squares support vector machines. IEEE Transactions on Neural Networks, 18(2), 606–609.

    Article  Google Scholar 

  105. Laskov, P., Gehl, C., Kruger, S., & Muller, K.-R. (2006). Incremental support vector learning: Analysis, implementation and applications. Journal of Machine Learning Research, 7, 1909–1936.

    MathSciNet  MATH  Google Scholar 

  106. Lee, D., & Lee, J. (2007). Equilibrium-based support vector machine for semisupervised classification. IEEE Transactions on Neural Networks, 18(2), 578–583.

    Article  Google Scholar 

  107. Lee, K. Y., Kim, D.-W., Lee, K. H., & Lee, D. (2007). Density-induced support vector data description. IEEE Transactions on Neural Networks, 18(1), 284–289.

    Article  Google Scholar 

  108. Lee, Y.-J., Hsieh, W.-F., & Huang, C.-M. (2005). \(\varepsilon \)-SSVR: A smooth support vector machine for \(\varepsilon \)-insensitive regression. IEEE Transactions on Knowledge and Data Engineering, 17(5), 678–685.

    Article  Google Scholar 

  109. Lee, C.-P., & Lin, C.-J. (2014). Large-Scale Linear RankSVM. Neural Computation, 26, 781–817.

    Article  MathSciNet  MATH  Google Scholar 

  110. Lee, Y. J., & Mangasarian, O. L. (2001). RSVM: Reduced support vector machines. In Proceedings of the 1st SIAM International Conference on Data Mining (pp. 1–17). Chicago, IL.

    Google Scholar 

  111. Lee, Y. J., & Mangasarian, O. L. (2001). SSVM: A smooth support vector machine. Computational Optimization and Applications, 20(1), 5–22.

    Article  MathSciNet  MATH  Google Scholar 

  112. Li, B., Song, S., & Li, K. (2013). A fast iterative single data approach to training unconstrained least squares support vector machines. Neurocomputing, 115, 31–38.

    Article  Google Scholar 

  113. Liang, X. (2010). An effective method of pruning support vector machine classifiers. IEEE Transactions on Neural Networks, 21(1), 26–38.

    Article  Google Scholar 

  114. Liang, X., Chen, R.-C., & Guo, X. (2008). Pruning support vector machines without altering performances. IEEE Transactions on Neural Networks, 19(10), 1792–1803.

    Article  Google Scholar 

  115. Lin, C.-J. (2001). On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks, 12(6), 1288–1298.

    Article  Google Scholar 

  116. Lin, C.-J. (2002). Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks, 13(1), 248–250.

    Article  Google Scholar 

  117. Lin, C.-J., Weng, R. C., & Keerthi, S. S. (2008). Trust region Newton method for logistic regression. Journal of Machine Learning Research, 9, 627–650.

    MathSciNet  MATH  Google Scholar 

  118. Lin, Y.-L., Hsieh, J.-G., Wu, H.-K., & Jeng, J.-H. (2011). Three-parameter sequential minimal optimization for support vector machines. Neurocomputing, 74, 3467–3475.

    Article  Google Scholar 

  119. Loosli, G., & Canu, S. (2007). Comments on the core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 8, 291–301.

    MATH  Google Scholar 

  120. Loosli, G., Canu, S., & Ong, C. S. (2016). Learning SVM in Krein spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6), 1204–1216.

    Article  Google Scholar 

  121. Luo, L., Xie, Y., Zhang, Z., & Li, W.-J. (2015). Support matrix machines. In Proceedings of the 32nd International Conference on Machine Learning. Lille, France.

    Google Scholar 

  122. Luss, R., & d’Aspremont, A. (2007). Support vector machine classification with indefinite kernels. Advances in Neural Information Processing Systems (Vol. 20, pp. 953–960). Vancouver, Canada.

    Google Scholar 

  123. Ma, J., Theiler, J., & Perkins, S. (2003). Accurate online support vector regression. Neural Computation, 15(11), 2683–2703.

    Article  MATH  Google Scholar 

  124. Ma, Y., Liang, X., Kwok, J. T., Li, J., Zhou, X., & Zhang, H. (2018). Fast-solving quasi-optimal LS-S\(^3\)VM based on an extended candidate set. IEEE Transactions on Neural Networks and Learning Systems, 29(4), 1120–1131.

    Article  MathSciNet  Google Scholar 

  125. Mall, R., & Suykens, J. A. K. (2015). Very sparse LSSVM reductions for large-scale data. IEEE Transactions on Neural Networks and Learning Systems, 26(5), 1086–1097.

    Article  MathSciNet  Google Scholar 

  126. Manevitz, L. M., & Yousef, M. (2001). One-class SVMs for document classification. Journal of Machine Learning Research, 2, 139–154.

    MATH  Google Scholar 

  127. Mangasarian, O. L. (2000). Generalized support vector machines. In A. Smola, P. Bartlett, B. Scholkopf, & D. Schuurmans (Eds.), Advances in large margin classifiers (pp. 135–146). Cambridge: MIT Press.

    Google Scholar 

  128. Mangasarian, O. L. (2002). A finite Newton method for classification. Optimization Methods and Software, 17(5), 913–929.

    Article  MathSciNet  MATH  Google Scholar 

  129. Mangasarian, O. L., & Musicant, D. R. (1999). Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks, 10(5), 1032–1037.

    Article  Google Scholar 

  130. Mangasarian, O. L., & Musicant, D. R. (2001). Lagrangian support vector machines. Journal of Machine Learning Research, 1, 161–177.

    MathSciNet  MATH  Google Scholar 

  131. Mangasarian, O. L., & Wild, E. W. (2006). Multisurface proximal support vector classification via generalized eigenvalues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1), 69–74.

    Article  Google Scholar 

  132. Marchand, M., & Shawe-Taylor, J. (2002). The set covering machine. Journal of Machine Learning Research, 3, 723–746.

    MathSciNet  MATH  Google Scholar 

  133. Martin, M. (2002). On-line support vector machine regression. In Proceedings of the 13th European Conference on Machine Learning, LNAI (Vol. 2430, pp. 282–294). Berlin: Springer.

    Google Scholar 

  134. Melacci, S., & Belkin, M. (2011). Laplacian support vector machines trained in the primal. Journal of Machine Learning Research, 12, 1149–1184.

    MathSciNet  MATH  Google Scholar 

  135. Munoz, A., & de Diego, I. M. (2006). From indefinite to positive semidefinite matrices. In Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition (pp. 764–772).

    Google Scholar 

  136. Musicant, D. R., & Feinberg, A. (2004). Active set support vector regression. IEEE Transactions on Neural Networks, 15(2), 268–275.

    Article  Google Scholar 

  137. Nandan, M., Khargonekar, P. P., & Talathi, S. S. (2014). Fast SVM training using approximate extreme points. Journal of Machine Learning Research, 15, 59–98.

    MathSciNet  MATH  Google Scholar 

  138. Navia-Vazquez, A. (2007). Support vector perceptrons. Neurocomputing, 70, 1089–1095.

    Article  Google Scholar 

  139. Nguyen, D., & Ho, T. (2006). A bottom-up method for simplifying support vector solutions. IEEE Transactions on Neural Networks, 17(3), 792–796.

    Article  Google Scholar 

  140. Nguyen, D. D., Matsumoto, K., Takishima, Y., & Hashimoto, K. (2010). Condensed vector machines: Learning fast machine for large data. IEEE Transactions on Neural Networks, 21(12), 1903–1914.

    Article  Google Scholar 

  141. Niu, G., Dai, B., Shang, L., & Sugiyama, M. (2013). Maximum volume clustering: A new discriminative clustering approach. Journal of Machine Learning Research, 14, 2641–2687.

    MathSciNet  MATH  Google Scholar 

  142. Ojeda, F., Suykens, J. A. K., & Moor, B. D. (2008). Low rank updated LS-SVM classifiers for fast variable selection. Neural Networks, 21, 437–449.

    Article  MATH  Google Scholar 

  143. Omitaomu, O. A., Jeong, M. K., & Badiru, A. B. (2011). Online support vector regression with varying parameters for time-dependent data. IEEE Transactions on Systems, Man, and Cybernetics Part A, 41(1), 191–197.

    Article  Google Scholar 

  144. Ong, C. S., Mary, X., Canu, S., & Smola, A. J. (2004). Learning with nonpositive kernels. In Proceedings of the 21th International Conference on Machine Learning (pp. 639–646). Banff, Canada.

    Google Scholar 

  145. Orabona, F., Castellini, C., Caputo, B., Jie, L., & Sandini, G. (2010). On-line independent support vector machines. Pattern Recognition, 43(4), 1402–1412.

    Article  MATH  Google Scholar 

  146. Osuna, E., Freund, R., & Girosi, F. (1997). An improved training algorithm for support vector machines. In Proceedings of IEEE Workshop on Neural Networks for Signal Processing (pp. 276–285). New York.

    Google Scholar 

  147. Osuna, E., Freund, R., & Girosi, F. (1997). Support vector machines: Training and applications. Technical Report A.I. Memo No. 1602, MIT Artificial Intelligence Laboratory.

    Google Scholar 

  148. Peng, X. (2010). TSVR: An efficient twin support vector machine for regression. Neural Networks, 23(3), 365–372.

    Article  MATH  Google Scholar 

  149. Peng, X. (2010). Primal twin support vector regression and its sparse approximation. Neurocomputing, 73, 2846–2858.

    Article  Google Scholar 

  150. Peng, X. (2010). A \(\nu \)-twin support vector machine (\(\nu \)-TSVM) classifier and its geometric algorithms. Information Sciences, 180(20), 3863–3875.

    Article  MathSciNet  MATH  Google Scholar 

  151. Perez-Cruz, F., Navia-Vazquez, A., Rojo-Alvarez, J. L., & Artes-Rodriguez, A. (1999). A new training algorithm for support vector machines. In Proceedings of the 5th Bayona Workshop on Emerging Technologies in Telecommunications (pp. 116–120). Baiona, Spain.

    Google Scholar 

  152. Platt, J. (1999). Fast training of support vector machines using sequential minimal optimization. In B. Scholkopf, C. Burges, & A. Smola (Eds.), Advances in kernel methods - support vector learning (pp. 185–208). Cambridge: MIT Press.

  153. Pontil, M., & Verri, A. (1998). Properties of support vector machines. Neural Computation, 10, 955–974.

  154. Qi, Z., Tian, Y., & Shi, Y. (2015). Successive overrelaxation for Laplacian support vector machine. IEEE Transactions on Neural Networks and Learning Systems, 26(4), 674–683.

  155. Rahimi, A., & Recht, B. (2007). Random features for large-scale kernel machines. In Advances in neural information processing systems (pp. 1177–1184).

  156. Renjifo, C., Barsic, D., Carmen, C., Norman, K., & Peacock, G. S. (2008). Improving radial basis function kernel classification through incremental learning and automatic parameter selection. Neurocomputing, 72, 3–14.

  157. Rifkin, R., & Klautau, A. (2004). In defense of one-vs-all classification. Journal of Machine Learning Research, 5, 101–141.

  158. Roobaert, D. (2002). DirectSVM: A simple support vector machine perceptron. Journal of VLSI Signal Processing, 32, 147–156.

  159. Scheinberg, K. (2006). An efficient implementation of an active set method for SVMs. Journal of Machine Learning Research, 7, 2237–2257.

  160. Schleif, F.-M., & Tino, P. (2017). Indefinite core vector machine. Pattern Recognition, 71, 187–195.

  161. Scholkopf, B., Smola, A. J., Williamson, R. C., & Bartlett, P. L. (2000). New support vector algorithms. Neural Computation, 12(5), 1207–1245.

  162. Scholkopf, B., Herbrich, R., & Smola, A. J. (2001). A generalized representer theorem. In Proceedings of the 14th Annual Conference on Computational Learning Theory, LNCS (Vol. 2111, pp. 416–426). Berlin: Springer.

  163. Scholkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., & Williamson, R. (2001). Estimating the support of a high-dimensional distribution. Neural Computation, 13(7), 1443–1471.

  164. Scholkopf, B., Mika, S., Burges, C. J. C., Knirsch, P., Muller, K. R., Ratsch, G., et al. (1999). Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5), 1000–1017.

  165. Schraudolph, N., Yu, J., & Gunter, S. (2007). A stochastic quasi-Newton method for online convex optimization. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS) (pp. 433–440).

  166. Shalev-Shwartz, S., Singer, Y., & Srebro, N. (2007). Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning (ICML) (pp. 807–814). New York: ACM Press.

  167. Shao, Y.-H., & Deng, N.-Y. (2012). A coordinate descent margin based-twin support vector machine for classification. Neural Networks, 25, 114–121.

  168. Shao, Y.-H., Zhang, C.-H., Wang, X.-B., & Deng, N.-Y. (2011). Improvements on twin support vector machines. IEEE Transactions on Neural Networks, 22(6), 962–968.

  169. Shashua, A. A. (1999). On the equivalence between the support vector machine for classification and sparsified Fisher’s linear discriminant. Neural Processing Letters, 9(2), 129–139.

  170. Shashua, A., & Levin, A. (2002). Ranking with large margin principle: Two approaches. Advances in neural information processing systems (Vol. 15, pp. 937–944).

  171. Shen, X., Niu, L., Qi, Z., & Tian, Y. (2017). Support vector machine classifier with truncated pinball loss. Pattern Recognition, 68, 199–210.

  172. Shevade, S. K., Keerthi, S. S., Bhattacharyya, C., & Murthy, K. R. K. (2000). Improvements to the SMO algorithm for SVM regression. IEEE Transactions on Neural Networks, 11(5), 1188–1193.

  173. Shi, Y., Chung, F.-L., & Wang, S. (2015). An improved TA-SVM method without matrix inversion and its fast implementation for nonstationary datasets. IEEE Transactions on Neural Networks and Learning Systems, 26(9), 2005–2018.

  174. Shin, H., & Cho, S. (2007). Neighborhood property-based pattern selection for support vector machines. Neural Computation, 19, 816–855.

  175. Shilton, A., Palaniswami, M., Ralph, D., & Tsoi, A. (2005). Incremental training of support vector machines. IEEE Transactions on Neural Networks, 16, 114–131.

  176. Smola, A. J., & Scholkopf, B. (2000). Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning (pp. 911–918). Stanford University, CA.

  177. Smola, A. J., & Scholkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14(3), 199–222.

  178. Suykens, J. A. K., & Vandewalle, J. (1999). Least squares support vector machine classifiers. Neural Processing Letters, 9, 293–300.

  179. Suykens, J. A. K., Lukas, L., Van Dooren, P., De Moor, B., & Vandewalle, J. (1999). Least squares support vector machine classifiers: A large scale algorithm. In Proceedings of European Conference on Circuit Theory and Design (pp. 839–842).

  180. Suykens, J. A. K., Lukas, L., & Vandewalle, J. (2000). Sparse approximation using least squares support vector machines. In Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS) (Vol. 2, pp. 757–760). Geneva, Switzerland.

  181. Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., & Vandewalle, J. (2002). Least squares support vector machines. Singapore: World Scientific.

  182. Suykens, J. A. K., De Brabanter, J., Lukas, L., & Vandewalle, J. (2002). Weighted least squares support vector machines: Robustness and sparse approximation. Neurocomputing, 48, 85–105.

  183. Steinwart, I. (2003). Sparseness of support vector machines. Journal of Machine Learning Research, 4, 1071–1105.

  184. Takahashi, N., & Nishi, T. (2006). Global convergence of decomposition learning methods for support vector machines. IEEE Transactions on Neural Networks, 17(6), 1362–1369.

  185. Takahashi, N., Guo, J., & Nishi, T. (2008). Global convergence of SMO algorithm for support vector regression. IEEE Transactions on Neural Networks, 19(6), 971–982.

  186. Tan, Y., & Wang, J. (2004). A support vector machine with a hybrid kernel and minimal Vapnik-Chervonenkis dimension. IEEE Transactions on Knowledge and Data Engineering, 16(4), 385–395.

  187. Tax, D. M. J., & Duin, R. P. W. (1999). Support vector domain description. Pattern Recognition Letters, 20, 1191–1199.

  188. Tax, D. M. J. (2001). One-class classification: Concept-learning in the absence of counter-examples. Ph.D. dissertation. Delft, The Netherlands: Electrical Engineering, Mathematics and Computer Science, Delft University of Technology.

  189. Tax, D. M. J., & Duin, R. P. W. (2004). Support vector data description. Machine Learning, 54, 45–66.

  190. Teo, C. H., Smola, A., Vishwanathan, S. V., & Le, Q. V. (2007). A scalable modular convex solver for regularized risk minimization. In Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) (pp. 727–736).

  191. Teo, C. H., Vishwanathan, S. V. N., Smola, A., & Le, Q. (2010). Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11, 311–365.

  192. Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211–244.

  193. Tong, S., & Koller, D. (2001). Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2, 45–66.

  194. Tsang, I. W., Kwok, J. T., & Cheung, P.-M. (2005). Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6, 363–392.

  195. Tsang, I. W.-H., Kwok, J. T.-Y., & Zurada, J. M. (2006). Generalized core vector machines. IEEE Transactions on Neural Networks, 17(5), 1126–1140.

  196. Tsang, I. W., Kocsor, A., & Kwok, J. T. (2007). Simpler core vector machines with enclosing balls. In Proceedings of the 24th International Conference on Machine Learning (pp. 911–918). Corvalis, OR.

  197. Valizadegan, H., & Jin, R. (2007). Generalized maximum margin clustering and unsupervised kernel learning. Advances in neural information processing systems (Vol. 19, pp. 1417–1424). Cambridge: MIT Press.

  198. Vapnik, V. N. (1982). Estimation of dependences based on empirical data. New York: Springer.

  199. Vapnik, V. N. (1995). The nature of statistical learning theory. New York: Springer.

  200. Vapnik, V. N. (1998). Statistical learning theory. New York: Wiley.

  201. Vapnik, V., & Chapelle, O. (2000). Bounds on error expectation for support vector machines. Neural Computation, 12, 2013–2036.

  202. Vishwanathan, S. V. N., Smola, A. J., & Murty, M. N. (2003). SimpleSVM. In Proceedings of the 20th International Conference on Machine Learning (pp. 760–767). Washington, DC.

  203. Wang, Z., & Chen, S. (2007). New least squares support vector machines based on matrix patterns. Neural Processing Letters, 26, 41–56.

  204. Wang, G., Yeung, D.-Y., & Lochovsky, F. H. (2008). A new solution path algorithm in support vector regression. IEEE Transactions on Neural Networks, 19(10), 1753–1767.

  205. Wang, F., Zhao, B., & Zhang, C. (2010). Linear time maximum margin clustering. IEEE Transactions on Neural Networks, 21(2), 319–332.

  206. Wang, Z., Shao, Y.-H., Bai, L., & Deng, N.-Y. (2015). Twin support vector machine for clustering. IEEE Transactions on Neural Networks and Learning Systems, 26(10), 2583–2588.

  207. Warmuth, M. K., Liao, J., Ratsch, G., Mathieson, M., Putta, S., & Lemmen, C. (2003). Support vector machines for active learning in the drug discovery process. Journal of Chemical Information and Computer Sciences, 43(2), 667–673.

  208. Weston, J., & Watkins, C. (1999). Multi-class support vector machines. In M. Verleysen (Ed.), Proceedings of European Symposium on Artificial Neural Networks. Brussels: D. Facto Press.

  209. Williams, C. K. I., & Seeger, M. (2001). Using the Nystrom method to speed up kernel machines. In T. Leen, T. Dietterich, & V. Tresp (Eds.), Advances in neural information processing systems (Vol. 13, pp. 682–688). Cambridge: MIT Press.

  210. Wu, Q., & Zhou, D.-X. (2005). SVM soft margin classifiers: Linear programming versus quadratic programming. Neural Computation, 17, 1160–1187.

  211. Wu, M., Scholkopf, B., & Bakir, G. (2006). A direct method for building sparse kernel learning algorithms. Journal of Machine Learning Research, 7, 603–624.

  212. Xu, G., Hu, B.-G., & Principe, J. C. (2018). Robust C-loss kernel classifiers. IEEE Transactions on Neural Networks and Learning Systems, 29(3), 510–522.

  213. Xu, L., Neufeld, J., Larson, B., & Schuurmans, D. (2004). Maximum margin clustering. Advances in neural information processing systems (Vol. 17). Cambridge: MIT Press.

  214. Yang, H., Huang, K., King, I., & Lyu, M. R. (2009). Localized support vector regression for time series prediction. Neurocomputing, 72, 2659–2669.

  215. Yang, X., Lu, J., & Zhang, G. (2010). Adaptive pruning algorithm for least squares support vector machine classifier. Soft Computing, 14, 667–680.

  216. Yu, H., Yang, J., Han, J., & Li, X. (2005). Making SVMs scalable to large data sets using hierarchical cluster indexing. Data Mining and Knowledge Discovery, 11, 295–321.

  217. Zeng, X. Y., & Chen, X. W. (2005). SMO-based pruning methods for sparse least squares support vector machines. IEEE Transactions on Neural Networks, 16(6), 1541–1546.

  218. Zhang, T., & Oles, F. J. (2001). Text categorization based on regularized linear classification methods. Information Retrieval, 4(1), 5–31.

  219. Zhang, K., Tsang, I. W., & Kwok, J. T. (2009). Maximum margin clustering made practical. IEEE Transactions on Neural Networks, 20(4), 583–596.

  220. Zheng, J., & Lu, B.-L. (2011). A support vector machine classifier with automatic confidence and its application to gender classification. Neurocomputing, 74, 1926–1935.

  221. Zhou, S. (2016). Sparse LSSVM in primal using Cholesky factorization for large-scale problems. IEEE Transactions on Neural Networks and Learning Systems, 27(4), 783–795.

Author information

Corresponding author

Correspondence to Ke-Lin Du.

Copyright information

© 2019 Springer-Verlag London Ltd., part of Springer Nature

About this chapter

Cite this chapter

Du, K.-L., & Swamy, M. N. S. (2019). Support vector machines. In Neural networks and statistical learning. London: Springer. https://doi.org/10.1007/978-1-4471-7452-3_21
