Support Vector Machines

Chapter in Neural Networks and Statistical Learning

Abstract

SVM [12, 201] is one of the most popular nonparametric classification algorithms. It is an optimal classifier grounded in computational learning theory [200, 202]. The goal of SVM is to minimize the VC dimension by finding the hyperplane between classes with the maximal margin, where the margin is the distance from the closest point in each class to the separating hyperplane. SVM couples a general-purpose linear learning algorithm with a problem-specific kernel that computes the inner products of input data points in a feature space. The key idea of SVM is to map the training examples from the input space into a higher-dimensional feature space by means of a set of nonlinear kernel functions, so that the mapped training examples become linearly separable in that feature space. The hippocampus, a brain region critical for learning and memory, has been reported to possess a pattern-separation function similar to that of SVM [6].
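To make the margin idea concrete, the classical soft-margin formulation (a standard summary, not a formula reproduced from this chapter) finds the weight vector \(\mathbf{w}\) and bias \(b\) that solve

\[
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \ \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{N}\xi_i
\quad \text{subject to} \quad
y_i\big(\mathbf{w}^T\phi(\mathbf{x}_i)+b\big) \ge 1-\xi_i,\ \ \xi_i \ge 0,\ \ i=1,\ldots,N,
\]

where \(\phi\) is the feature map induced by the kernel \(k(\mathbf{x}_i,\mathbf{x}_j)=\phi(\mathbf{x}_i)^T\phi(\mathbf{x}_j)\), the slack variables \(\xi_i\) absorb margin violations, and \(C\) trades the margin width \(2/\|\mathbf{w}\|\) against training error.

As a minimal illustration (assuming scikit-learn and NumPy are available; the chapter itself prescribes no particular software), the following sketch trains a kernel SVM on data that are not linearly separable in the input space:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Two classes that no straight line separates in the input plane:
    # points inside versus outside a circle.
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)

    # The RBF kernel implicitly maps inputs into a high-dimensional feature
    # space in which a maximal-margin hyperplane can separate the classes;
    # C sets the trade-off between margin width and training errors.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)

    print("number of support vectors:", clf.support_vectors_.shape[0])
    print("training accuracy:", clf.score(X, y))

Only the training points closest to the decision boundary (the support vectors) determine the learned classifier, which is what makes the SVM solution sparse.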

References

  1. Adankon, M. M., Cheriet, M., & Biem, A. (2009). Semisupervised least squares support vector machine. IEEE Transactions on Neural Networks, 20(12), 1858–1870.
  2. Aiolli, F., & Sperduti, A. (2005). Multiclass classification with multi-prototype support vector machines. Journal of Machine Learning Research, 6, 817–850.
  3. Allwein, E. L., Schapire, R. E., & Singer, Y. (2000). Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1, 113–141.
  4. Amari, S., & Wu, S. (1999). Improving support vector machine classifiers by modifying kernel functions. Neural Networks, 12, 783–789.
  5. Angiulli, F., & Astorino, A. (2010). Scaling up support vector machines using nearest neighbor condensation. IEEE Transactions on Neural Networks, 21(2), 351–357.
  6. Baker, J. L. (2003). Is there a support vector machine hiding in the dentate gyrus? Neurocomputing, 52–54, 199–207.
  7. Ben-Hur, A., Horn, D., Siegelmann, H., & Vapnik, V. (2001). Support vector clustering. Journal of Machine Learning Research, 2, 125–137.
  8. Bo, L., Wang, L., & Jiao, L. (2007). Recursive finite Newton algorithm for support vector regression in the primal. Neural Computation, 19(4), 1082–1096.
  9. Bordes, A., Ertekin, S., Weston, J., & Bottou, L. (2005). Fast kernel classifiers with online and active learning. Journal of Machine Learning Research, 6, 1579–1619.
  10. Bordes, A., Bottou, L., & Gallinari, P. (2009). SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 10, 1737–1754.
  11. Bordes, A., Bottou, L., Gallinari, P., Chang, J., & Smith, S. A. (2010). Erratum: SGDQN is less careful than expected. Journal of Machine Learning Research, 11, 2229–2240.
  12. Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory (COLT) (pp. 144–152).
  13. Bottou, L., & Bousquet, O. (2007). The tradeoffs of large scale learning. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems (NIPS) (Vol. 20, pp. 161–168). Cambridge, MA: MIT Press.
  14. Burges, C. J. C. (1996). Simplified support vector decision rules. In Proceedings of the 13th International Conference on Machine Learning (pp. 71–77). Bari, Italy.
  15. Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2, 121–167.
  16. Cauwenberghs, G., & Poggio, T. (2001). Incremental and decremental support vector machine learning. In T. K. Leen, T. G. Dietterich, & V. Tresp (Eds.), Advances in neural information processing systems (Vol. 13, pp. 409–415). Cambridge, MA: MIT Press.
  17. Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: A library for support vector machines. Technical report. Taipei, Taiwan: Department of Computer Science and Information Engineering, National Taiwan University.
  18. Chang, C.-C., & Lin, C.-J. (2001). Training \(\nu \)-support vector classifiers: Theory and algorithms. Neural Computation, 13(9), 2119–2147.
  19. Chang, C.-C., & Lin, C.-J. (2002). Training \(\nu \)-support vector regression: Theory and algorithms. Neural Computation, 14, 1959–1977.
  20. Chang, M.-W., & Lin, C.-J. (2005). Leave-one-out bounds for support vector regression model selection. Neural Computation, 17, 1188–1222.
  21. Chang, K.-W., Hsieh, C.-J., & Lin, C.-J. (2008). Coordinate descent method for large-scale L2-loss linear support vector machines. Journal of Machine Learning Research, 9, 1369–1398.
  22. Chang, F., Guo, C.-Y., Lin, X.-R., & Lu, C.-J. (2010). Tree decomposition for large-scale SVM problems. Journal of Machine Learning Research, 11, 2935–2972.
  23. Chapelle, O., & Zien, A. (2005). Semi-supervised classification by low density separation. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (pp. 57–64).
  24. Chapelle, O. (2007). Training a support vector machine in the primal. Neural Computation, 19(5), 1155–1178.
  25. Chapelle, O., Sindhwani, V., & Keerthi, S. S. (2008). Optimization techniques for semi-supervised support vector machines. Journal of Machine Learning Research, 9, 203–233.
  26. Chen, P.-H., Fan, R.-E., & Lin, C.-J. (2006). A study on SMO-type decomposition methods for support vector machines. IEEE Transactions on Neural Networks, 17(4), 893–908.
  27. Chen, H., Tino, P., & Yao, X. (2009). Probabilistic classification vector machines. IEEE Transactions on Neural Networks, 20(6), 901–914.
  28. Cheng, S., & Shih, F. (2007). An improved incremental training algorithm for support vector machines using active query. Pattern Recognition, 40, 964–971.
  29. Cheong, S., Oh, S. H., & Lee, S.-Y. (2004). Support vector machines with binary tree architecture for multi-class classification. Neural Information Processing—Letters and Reviews, 2(3), 47–51.
  30. Chew, H. G., Bogner, R. E., & Lim, C. C. (2001). Dual-\(\nu \) support vector machine with error rate and training size biasing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1269–1272).
  31. Chiang, J.-H., & Hao, P.-Y. (2003). A new kernel-based fuzzy clustering approach: Support vector clustering with cell growing. IEEE Transactions on Fuzzy Systems, 11(4), 518–527.
  32. Choi, Y.-S. (2009). Least squares one-class support vector machine. Pattern Recognition Letters, 30, 1236–1240.
  33. Chu, W., Keerthi, S. S., & Ong, C. J. (2003). Bayesian trigonometric support vector classifier. Neural Computation, 15, 2227–2254.
  34. Chu, W., Ong, C. J., & Keerthi, S. S. (2005). An improved conjugate gradient scheme to the solution of least squares SVM. IEEE Transactions on Neural Networks, 16(2), 498–501.
  35. Chu, W., & Keerthi, S. S. (2007). Support vector ordinal regression. Neural Computation, 19, 792–815.
  36. Collobert, R., & Bengio, S. (2001). SVMTorch: Support vector machines for large-scale regression problems. Journal of Machine Learning Research, 1, 143–160.
  37. Collobert, R., Sinz, F., Weston, J., & Bottou, L. (2006). Large scale transductive SVMs. Journal of Machine Learning Research, 7, 1687–1712.
  38. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297.
  39. Cox, D., & O’Sullivan, F. (1990). Asymptotic analysis of penalized likelihood and related estimators. Annals of Statistics, 18, 1676–1695.
  40. Crammer, K., & Singer, Y. (2001). On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2, 265–292.
  41. Davenport, M. A., Baraniuk, R. G., & Scott, C. D. (2010). Tuning support vector machines for minimax and Neyman-Pearson classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10), 1888–1898.
  42. Dekel, O., Shalev-Shwartz, S., & Singer, Y. (2005). Smooth \(\varepsilon \)-insensitive regression by loss symmetrization. Journal of Machine Learning Research, 6, 711–741.
  43. Dietterich, T., & Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2, 263–286.
  44. Dinuzzo, F., Neve, M., De Nicolao, G., & Gianazza, U. P. (2007). On the representer theorem and equivalent degrees of freedom of SVR. Journal of Machine Learning Research, 8, 2467–2495.
  45. Dong, J.-X., Krzyzak, A., & Suen, C. Y. (2005). Fast SVM training algorithm with decomposition on very large data sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4), 603–618.
  46. Downs, T., Gates, K. E., & Masters, A. (2001). Exact simplification of support vector solutions. Journal of Machine Learning Research, 2, 293–297.
  47. Duan, K., Keerthi, S. S., & Poo, A. N. (2003). Evaluation of simple performance measures for tuning SVM hyperparameters. Neurocomputing, 51, 41–59.
  48. Dufrenois, F., Colliez, J., & Hamad, D. (2009). Bounded influence support vector regression for robust single-model estimation. IEEE Transactions on Neural Networks, 20(11), 1689–1706.
  49. Ertekin, S., Bottou, L., & Giles, C. L. (2011). Nonconvex online support vector machines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2), 368–381.
  50. Fan, R.-E., Chen, P.-H., & Lin, C.-J. (2005). Working set selection using second order information for training support vector machines. Journal of Machine Learning Research, 6, 1889–1918.
  51. Fei, B., & Liu, J. (2006). Binary tree of SVM: A new fast multiclass training and classification algorithm. IEEE Transactions on Neural Networks, 17(3), 696–704.
  52. Ferris, M. C., & Munson, T. S. (2000). Interior point methods for massive support vector machines. Technical Report 00-05. Madison, WI: Computer Sciences Department, University of Wisconsin.
  53. Fine, S., & Scheinberg, K. (2001). Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2, 243–264.
  54. Flake, G. W., & Lawrence, S. (2002). Efficient SVM regression training with SMO. Machine Learning, 46, 271–290.
  55. Forero, P. A., Cano, A., & Giannakis, G. B. (2010). Consensus-based distributed support vector machines. Journal of Machine Learning Research, 11, 1663–1707.
  56. Franc, V., & Sonnenburg, S. (2009). Optimized cutting plane algorithm for large-scale risk minimization. Journal of Machine Learning Research, 10, 2157–2192.
  57. Friess, T., Cristianini, N., & Campbell, C. (1998). The kernel-adatron algorithm: A fast and simple learning procedure for support vector machines. In Proceedings of the 15th International Conference on Machine Learning (pp. 188–196). Madison, WI.
  58. Fung, G., & Mangasarian, O. (2001). Proximal support vector machines. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 77–86). San Francisco, CA.
  59. Fung, G., & Mangasarian, O. (2001). Semi-supervised support vector machines for unlabeled data classification. Optimization Methods and Software, 15, 29–44.
  60. Gentile, C. (2001). A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2, 213–242.
  61. Van Gestel, T., Suykens, J., Baesens, B., Viaene, S., Vanthienen, J., Dedene, G., et al. (2004). Benchmarking least squares support vector machine classifiers. Machine Learning, 54(1), 5–32.
  62. Girosi, F. (1998). An equivalence between sparse approximation and support vector machines. Neural Computation, 10, 1455–1480.
  63. Glasmachers, T., & Igel, C. (2006). Maximum-gain working set selection for SVMs. Journal of Machine Learning Research, 7, 1437–1466.
  64. Glasmachers, T., & Igel, C. (2008). Second-order SMO improves SVM online and active learning. Neural Computation, 20, 374–382.
  65. Gonen, M., Tanugur, A. G., & Alpaydin, E. (2008). Multiclass posterior probability support vector machines. IEEE Transactions on Neural Networks, 19(1), 130–139.
  66. Grinblat, G. L., Uzal, L. C., Ceccatto, H. A., & Granitto, P. M. (2011). Solving nonstationary classification problems with coupled support vector machines. IEEE Transactions on Neural Networks, 22(1), 37–51.
  67. Gunter, L., & Zhu, J. (2007). Efficient computation and model selection for the support vector regression. Neural Computation, 19, 1633–1655.
  68. Guo, X. C., Yang, J. H., Wu, C. G., Wang, C. Y., & Liang, Y. C. (2008). A novel LS-SVMs hyper-parameter selection based on particle swarm optimization. Neurocomputing, 71, 3211–3215.
  69. Haasdonk, B. (2005). Feature space interpretation of SVMs with indefinite kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4), 482–492.
  70. Hammer, B., & Gersmann, K. (2003). A note on the universal approximation capability of support vector machines. Neural Processing Letters, 17, 43–53.
  71. Hao, P.-Y. (2010). New support vector algorithms with parametric insensitive/margin model. Neural Networks, 23, 60–73.
  72. Hastie, T., Rosset, S., Tibshirani, R., & Zhu, J. (2004). The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5, 1391–1415.
  73. Hsu, C.-W., & Lin, C.-J. (2002). A simple decomposition method for support vector machines. Machine Learning, 46, 291–314.
  74. Hsu, C.-W., & Lin, C.-J. (2002). A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13(2), 415–425.
  75. Hu, M., Chen, Y., & Kwok, J. T.-Y. (2009). Building sparse multiple-kernel SVM classifiers. IEEE Transactions on Neural Networks, 20(5), 827–839.
  76. Huang, K., Yang, H., King, I., & Lyu, M. R. (2008). Maxi-min margin machine: Learning large margin classifiers locally and globally. IEEE Transactions on Neural Networks, 19(2), 260–272.
  77. Huang, K., Zheng, D., Sun, J., Hotta, Y., Fujimoto, K., & Naoi, S. (2010). Sparse learning for support vector classification. Pattern Recognition Letters, 31, 1944–1951.
  78. Hush, D., & Scovel, C. (2003). Polynomial-time decomposition algorithms for support vector machines. Machine Learning, 51, 51–71.
  79. Hush, D., Kelly, P., Scovel, C., & Steinwart, I. (2006). QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7, 733–769.
  80. Ikeda, K., & Murata, N. (2005). Geometrical properties of \(\nu \) support vector machines with different norms. Neural Computation, 17, 2508–2529.
  81. Jandel, M. (2010). A neural support vector machine. Neural Networks, 23, 607–613.
  82. Jayadeva, Khemchandani, R., & Chandra, S. (2007). Twin support vector machines for pattern classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5), 905–910.
  83. Jiao, L., Bo, L., & Wang, L. (2007). Fast sparse approximation for least squares support vector machine. IEEE Transactions on Neural Networks, 18(3), 685–697.
  84. Joachims, T. (1999). Making large-scale SVM learning practical. In B. Scholkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in kernel methods: Support vector learning (pp. 169–184). Cambridge, MA: MIT Press.
  85. Joachims, T. (1999). Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning (pp. 200–209). San Mateo, CA: Morgan Kaufmann.
  86. Joachims, T. (2006). Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 217–226).
  87. Joachims, T., Finley, T., & Yu, C.-N. J. (2009). Cutting-plane training of structural SVMs. Machine Learning, 77, 27–59.
  88. Joachims, T., & Yu, C.-N. J. (2009). Sparse kernel SVMs via cutting-plane training. Machine Learning, 76, 179–193.
  89. Jung, K.-H., Lee, D., & Lee, J. (2010). Fast support-based clustering method for large-scale problems. Pattern Recognition, 43, 1975–1983.
  90. Kao, W.-C., Chung, K.-M., Sun, C.-L., & Lin, C.-J. (2004). Decomposition methods for linear support vector machines. Neural Computation, 16, 1689–1704.
  91. Katagiri, S., & Abe, S. (2006). Incremental training of support vector machines using hyperspheres. Pattern Recognition Letters, 27, 1495–1507.
  92. Keerthi, S. S., Chapelle, O., & DeCoste, D. (2006). Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research, 7, 1493–1515.
  93. Keerthi, S. S., & DeCoste, D. (2005). A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6, 341–361.
  94. Keerthi, S. S., & Gilbert, E. G. (2002). Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning, 46, 351–360.
  95. Keerthi, S. S., & Shevade, S. K. (2003). SMO for least squares SVM formulations. Neural Computation, 15, 487–507.
  96. Keerthi, S. S., Shevade, S. K., Bhattacharyya, C., & Murthy, K. R. K. (2001). Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Computation, 13(3), 637–649.
  97. Kitamura, T., Takeuchi, S., Abe, S., & Fukui, K. (2009). Subspace-based support vector machines for pattern classification. Neural Networks, 22, 558–567.
  98. Knebel, T., Hochreiter, S., & Obermayer, K. (2008). An SMO algorithm for the potential support vector machine. Neural Computation, 20, 271–287.
  99. Kramer, K. A., Hall, L. O., Goldgof, D. B., Remsen, A., & Luo, T. (2009). Fast support vector machines for continuous data. IEEE Transactions on Systems, Man, and Cybernetics Part B, 39(4), 989–1001.
  100. Kressel, U. H.-G. (1999). Pairwise classification and support vector machines. In B. Scholkopf, C. J. C. Burges, & A. J. Smola (Eds.), Advances in kernel methods: Support vector learning (pp. 255–268). Cambridge, MA: MIT Press.
  101. de Kruif, B. J., & de Vries, T. J. (2003). Pruning error minimization in least squares support vector machines. IEEE Transactions on Neural Networks, 14(3), 696–702.
  102. Kuh, A., & De Wilde, P. (2007). Comments on "Pruning error minimization in least squares support vector machines". IEEE Transactions on Neural Networks, 18(2), 606–609.
  103. Lanckriet, G. R. G., Cristianini, N., Bartlett, P., El Ghaoui, L., & Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5, 27–72.
  104. Laskov, P., Gehl, C., Kruger, S., & Muller, K.-R. (2006). Incremental support vector learning: Analysis, implementation and applications. Journal of Machine Learning Research, 7, 1909–1936.
  105. Lawrence, N., Seeger, M., & Herbrich, R. (2003). Fast sparse Gaussian process methods: The informative vector machine. In S. Becker, S. Thrun, & K. Obermayer (Eds.), Advances in neural information processing systems (Vol. 15, pp. 609–616). Cambridge, MA: MIT Press.
  106. Lee, Y. J., & Mangasarian, O. L. (2001). RSVM: Reduced support vector machines. In Proceedings of the 1st SIAM International Conference on Data Mining. Chicago, IL.
  107. Lee, Y. J., & Mangasarian, O. L. (2001). SSVM: A smooth support vector machine. Computational Optimization and Applications, 20(1), 5–22.
  108. Lee, Y.-J., Hsieh, W.-F., & Huang, C.-M. (2005). \(\varepsilon \)-SSVR: A smooth support vector machine for \(\varepsilon \)-insensitive regression. IEEE Transactions on Knowledge and Data Engineering, 17(5), 678–685.
  109. Lee, D., & Lee, J. (2007). Equilibrium-based support vector machine for semisupervised classification. IEEE Transactions on Neural Networks, 18(2), 578–583.
  110. Lee, K. Y., Kim, D.-W., Lee, K. H., & Lee, D. (2007). Density-induced support vector data description. IEEE Transactions on Neural Networks, 18(1), 284–289.
  111. Li, Y., & Long, P. M. (2002). The relaxed online maximum margin algorithm. Machine Learning, 46, 361–387.
  112. Li, B., Song, S., & Li, K. (2013). A fast iterative single data approach to training unconstrained least squares support vector machines. Neurocomputing, 115, 31–38.
  113. Liang, X., Chen, R.-C., & Guo, X. (2008). Pruning support vector machines without altering performances. IEEE Transactions on Neural Networks, 19(10), 1792–1803.
  114. Liang, X. (2010). An effective method of pruning support vector machine classifiers. IEEE Transactions on Neural Networks, 21(1), 26–38.
  115. Liang, Z., & Li, Y. (2009). Incremental support vector machine learning in the primal and applications. Neurocomputing, 72, 2249–2258.
  116. Lin, C.-J. (2001). On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks, 12(6), 1288–1298.
  117. Lin, C.-J. (2002). Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks, 13(1), 248–250.
  118. Lin, C.-J., Weng, R. C., & Keerthi, S. S. (2008). Trust region Newton method for logistic regression. Journal of Machine Learning Research, 9, 627–650.
  119. Lin, Y.-L., Hsieh, J.-G., Wu, H.-K., & Jeng, J.-H. (2011). Three-parameter sequential minimal optimization for support vector machines. Neurocomputing, 74, 3467–3475.
  120. Loosli, G., & Canu, S. (2007). Comments on the "Core vector machines: Fast SVM training on very large data sets". Journal of Machine Learning Research, 8, 291–301.
  121. Lopez, J., & Suykens, J. A. K. (2011). First and second order SMO algorithms for LS-SVM classifiers. Neural Processing Letters, 33, 31–44.
  122. Lu, Y., Roychowdhury, V., & Vandenberghe, L. (2008). Distributed parallel support vector machines in strongly connected networks. IEEE Transactions on Neural Networks, 19(7), 1167–1178.
  123. Manevitz, L. M., & Yousef, M. (2001). One-class SVMs for document classification. Journal of Machine Learning Research, 2, 139–154.
  124. Mangasarian, O. L., & Musicant, D. R. (1999). Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks, 10, 1032–1037.
  125. Mangasarian, O. L. (2000). Generalized support vector machines. In A. Smola, P. Bartlett, B. Scholkopf, & D. Schuurmans (Eds.), Advances in large margin classifiers (pp. 135–146). Cambridge, MA: MIT Press.
  126. Mangasarian, O. L., & Musicant, D. R. (2001). Lagrangian support vector machines. Journal of Machine Learning Research, 1, 161–177.
  127. Mangasarian, O. L. (2002). A finite Newton method for classification. Optimization Methods and Software, 17(5), 913–929.
  128. Mangasarian, O. L., & Wild, E. W. (2006). Multisurface proximal support vector classification via generalized eigenvalues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1), 69–74.
  129. Marchand, M., & Shawe-Taylor, J. (2002). The set covering machine. Journal of Machine Learning Research, 3, 723–746.
  130. Martin, M. (2002). On-line support vector machines for function approximation. Technical report. Universitat Politecnica de Catalunya, Departament de Llenguatges i Sistemes Informatics.
  131. Melacci, S., & Belkin, M. (2011). Laplacian support vector machines trained in the primal. Journal of Machine Learning Research, 12, 1149–1184.
  132. Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, Series A, 209, 415–446.
  133. Musicant, D. R., & Feinberg, A. (2004). Active set support vector regression. IEEE Transactions on Neural Networks, 15(2), 268–275.
  134. Navia-Vazquez, A., Gutierrez-Gonzalez, D., Parrado-Hernandez, E., & Navarro-Abellan, J. J. (2006). Distributed support vector machines. IEEE Transactions on Neural Networks, 17(4), 1091–1097.
  135. Nguyen, D., & Ho, T. (2006). A bottom-up method for simplifying support vector solutions. IEEE Transactions on Neural Networks, 17(3), 792–796.
  136. Ma, J., Theiler, J., & Perkins, S. (2003). Accurate online support vector regression. Neural Computation, 15(11), 2683–2703.
  137. Navia-Vazquez, A. (2007). Support vector perceptrons. Neurocomputing, 70, 1089–1095.
  138. Nguyen, D. D., Matsumoto, K., Takishima, Y., & Hashimoto, K. (2010). Condensed vector machines: Learning fast machine for large data. IEEE Transactions on Neural Networks, 21(12), 1903–1914.
  139. Ojeda, F., Suykens, J. A. K., & De Moor, B. (2008). Low rank updated LS-SVM classifiers for fast variable selection. Neural Networks, 21, 437–449.
  140. Omitaomu, O. A., Jeong, M. K., & Badiru, A. B. (2011). Online support vector regression with varying parameters for time-dependent data. IEEE Transactions on Systems, Man, and Cybernetics Part A, 41(1), 191–197.
  141. Orabona, F., Castellini, C., Caputo, B., Jie, L., & Sandini, G. (2010). On-line independent support vector machines. Pattern Recognition, 43(4), 1402–1412.
  142. Ortiz-Garcia, E. G., Salcedo-Sanz, S., Perez-Bellido, A. M., & Portilla-Figueras, J. A. (2009). Improving the training time of support vector regression algorithms through novel hyper-parameters search space reductions. Neurocomputing, 72, 3683–3691.
  143. Osuna, E., Freund, R., & Girosi, F. (1997). An improved training algorithm for support vector machines. In Proceedings of the IEEE Workshop on Neural Networks for Signal Processing (pp. 276–285). New York.
  144. Osuna, E., Freund, R., & Girosi, F. (1997). Training support vector machines: An application to face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 130–136).
  145. Osuna, E., Freund, R., & Girosi, F. (1997). Support vector machines: Training and applications. Technical Report A.I. Memo No. 1602. MIT Artificial Intelligence Laboratory.
  146. Panagiotakopoulos, C., & Tsampouka, P. (2011). The Margitron: A generalized perceptron with margin. IEEE Transactions on Neural Networks, 22(3), 395–407.
  147. Peng, X. (2010). TSVR: An efficient twin support vector machine for regression. Neural Networks, 23(3), 365–372.
  148. Peng, X. (2010). Primal twin support vector regression and its sparse approximation. Neurocomputing, 73, 2846–2858.
  149. Perez-Cruz, F., Navia-Vazquez, A., Rojo-Alvarez, J. L., & Artes-Rodriguez, A. (1999). A new training algorithm for support vector machines. In Proceedings of the 5th Bayona Workshop on Emerging Technologies in Telecommunications (pp. 116–120). Bayona, Spain.
  150. Perez-Cruz, F., Bousono-Calzon, C., & Artes-Rodriguez, A. (2005). Convergence of the IRWLS procedure to the support vector machine solution. Neural Computation, 17, 7–18.
  151. Platt, J. (1999). Fast training of support vector machines using sequential minimal optimization. In B. Scholkopf, C. Burges, & A. Smola (Eds.), Advances in kernel methods: Support vector learning (pp. 185–208). Cambridge, MA: MIT Press.
  152. Pontil, M., & Verri, A. (1998). Properties of support vector machines. Neural Computation, 10, 955–974.
  153. Renjifo, C., Barsic, D., Carmen, C., Norman, K., & Peacock, G. S. (2008). Improving radial basis function kernel classification through incremental learning and automatic parameter selection. Neurocomputing, 72, 3–14.
  154. Rifkin, R., & Klautau, A. (2004). In defense of one-vs-all classification. Journal of Machine Learning Research, 5, 101–141.
  155. Romero, E., & Toppo, D. (2007). Comparing support vector machines and feedforward neural networks with similar hidden-layer weights. IEEE Transactions on Neural Networks, 18(3), 959–963.
  156. Roobaert, D. (2002). DirectSVM: A simple support vector machine perceptron. Journal of VLSI Signal Processing, 32, 147–156.
  157. Scheinberg, K. (2006). An efficient implementation of an active set method for SVMs. Journal of Machine Learning Research, 7, 2237–2257.
  158. Scholkopf, B., Smola, A. J., Williamson, R. C., & Bartlett, P. L. (2000). New support vector algorithms. Neural Computation, 12(5), 1207–1245.
  159. Scholkopf, B., Herbrich, R., & Smola, A. J. (2001). A generalized representer theorem. In Proceedings of the 14th Annual Conference on Computational Learning Theory, LNCS (Vol. 2111, pp. 416–426). Berlin: Springer.
  160. Scholkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., & Williamson, R. (2001). Estimating the support of a high-dimensional distribution. Neural Computation, 13(7), 1443–1471.
  161. Segata, N., & Blanzieri, E. (2010). Fast and scalable local kernel machines. Journal of Machine Learning Research, 11, 1883–1926.
  162. Scholkopf, B., Mika, S., Burges, C. J. C., Knirsch, P., Muller, K. R., Ratsch, G., et al. (1999). Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5), 1000–1017.
  163. Schraudolph, N., Yu, J., & Gunter, S. (2007). A stochastic quasi-Newton method for online convex optimization. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS) (pp. 433–440).
  164. Shalev-Shwartz, S., Singer, Y., & Srebro, N. (2007). Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning (ICML) (pp. 807–814). New York: ACM Press.
  165. Shao, Y.-H., & Deng, N.-Y. (2012). A coordinate descent margin based-twin support vector machine for classification. Neural Networks, 25, 114–121.
  166. Shashua, A. (1999). On the equivalence between the support vector machine for classification and sparsified Fisher’s linear discriminant. Neural Processing Letters, 9(2), 129–139.
  167. Shashua, A., & Levin, A. (2002). Ranking with large margin principle: Two approaches. In Advances in neural information processing systems (Vol. 15, pp. 937–944).
  168. Shevade, S. K., Keerthi, S. S., Bhattacharyya, C., & Murthy, K. R. K. (2000). Improvements to the SMO algorithm for SVM regression. IEEE Transactions on Neural Networks, 11(5), 1188–1193.
  169. Shin, H., & Cho, S. (2007). Neighborhood property-based pattern selection for support vector machines. Neural Computation, 19, 816–855.
  170. Shilton, A., Palaniswami, M., Ralph, D., & Tsoi, A. (2005). Incremental training of support vector machines. IEEE Transactions on Neural Networks, 16, 114–131.
  171. Smola, A. J., Scholkopf, B., & Ratsch, G. (1999). Linear programs for automatic accuracy control in regression. In Proceedings of the 9th International Conference on Artificial Neural Networks (Vol. 2, pp. 575–580). Edinburgh, UK.
  172. Smola, A. J., & Scholkopf, B. (2000). Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning (pp. 911–918). Stanford University, CA.
  173. Smola, A. J., & Scholkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14(3), 199–222.
  174. Smola, A. J., Vishwanathan, S. V. N., & Le, Q. (2008). Bundle methods for machine learning. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems (Vol. 20). Cambridge, MA: MIT Press.
  175. Suykens, J. A. K., & Vandewalle, J. (1999). Least squares support vector machine classifiers. Neural Processing Letters, 9, 293–300.
  176. Suykens, J. A. K., Lukas, L., Van Dooren, P., De Moor, B., & Vandewalle, J. (1999). Least squares support vector machine classifiers: A large scale algorithm. In Proceedings of the European Conference on Circuit Theory and Design (pp. 839–842).
  177. Suykens, J. A. K., Lukas, L., & Vandewalle, J. (2000). Sparse approximation using least squares support vector machines. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (Vol. 2, pp. 757–760). Geneva, Switzerland.
  178. Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., & Vandewalle, J. (2002). Least squares support vector machines. Singapore: World Scientific.
  179. Suykens, J. A. K., De Brabanter, J., Lukas, L., & Vandewalle, J. (2002). Weighted least squares support vector machines: Robustness and sparse approximation. Neurocomputing, 48, 85–105.
  180. Steinwart, I. (2003). Sparseness of support vector machines. Journal of Machine Learning Research, 4, 1071–1105.
  181. Takahashi, N., & Nishi, T. (2006). Global convergence of decomposition learning methods for support vector machines. IEEE Transactions on Neural Networks, 17(6), 1362–1369.
  182. Takahashi, N., Guo, J., & Nishi, T. (2008). Global convergence of SMO algorithm for support vector regression. IEEE Transactions on Neural Networks, 19(6), 971–982.
  183. Tan, Y., & Wang, J. (2004). A support vector machine with a hybrid kernel and minimal Vapnik-Chervonenkis dimension. IEEE Transactions on Knowledge and Data Engineering, 16(4), 385–395.
  184. Tao, Q., Wu, G., Wang, F., & Wang, J. (2005). Posterior probability support vector machines for unbalanced data. IEEE Transactions on Neural Networks, 16(6), 1561–1573.
  185. Tao, Q., Chu, D., & Wang, J. (2008). Recursive support vector machines for dimensionality reduction. IEEE Transactions on Neural Networks, 19(1), 189–193.
  186. Tax, D. M. J., & Duin, R. P. W. (1999). Support vector domain description. Pattern Recognition Letters, 20, 1191–1199.
  187. Tax, D. M. J. (2001). One-class classification: Concept-learning in the absence of counter-examples. Ph.D. dissertation. Delft, The Netherlands: Electrical Engineering, Mathematics and Computer Science, Delft University of Technology.
  188. Tax, D. M. J., & Laskov, P. (2003). Online SVM learning: From classification to data description and back. In C. Molina et al. (Eds.), Proceedings of NNSP (pp. 499–508).
  189. Tax, D. M. J., & Duin, R. P. W. (2004). Support vector data description. Machine Learning, 54, 45–66.
  190. Teo, C. H., Smola, A., Vishwanathan, S. V., & Le, Q. V. (2007). A scalable modular convex solver for regularized risk minimization. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) (pp. 727–736). San Mateo, CA: Morgan Kaufmann.
  191. Teo, C. H., Vishwanathan, S. V. N., Smola, A., & Le, Q. (2010). Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11, 311–365.
  192. Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211–244.
  193. Tipping, M. E. (2000). The relevance vector machine. In S. A. Solla, T. K. Leen, & K.-R. Müller (Eds.), Advances in neural information processing systems (Vol. 12, pp. 652–658). San Mateo: Morgan Kaufmann.
  194. Tong, S., & Koller, D. (2001). Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2, 45–66.
  195. Tsang, I. W., Kwok, J. T., & Cheung, P.-M. (2005). Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6, 363–392.
  196. Tsang, I. W.-H., Kwok, J. T.-Y., & Zurada, J. M. (2006). Generalized core vector machines. IEEE Transactions on Neural Networks, 17(5), 1126–1140.
  197. Tsang, I. W., Kocsor, A., & Kwok, J. T. (2007). Simpler core vector machines with enclosing balls. In Proceedings of the 24th International Conference on Machine Learning (pp. 911–918). Corvallis, OR.
  198. Tzikas, D. G., Likas, A. C., & Galatsanos, N. P. (2009). Sparse Bayesian modeling with adaptive kernel learning. IEEE Transactions on Neural Networks, 20(6), 926–937.
  199. Valizadegan, H., & Jin, R. (2007). Generalized maximum margin clustering and unsupervised kernel learning. In Advances in neural information processing systems (Vol. 19, pp. 1417–1424). Cambridge, MA: MIT Press.
  200. Vapnik, V. N. (1982). Estimation of dependences based on empirical data. New York: Springer.
  201. Vapnik, V. N. (1995). The nature of statistical learning theory. New York: Springer.
  202. Vapnik, V. N. (1998). Statistical learning theory. New York: Wiley.
  203. Vapnik, V., & Chapelle, O. (2000). Bounds on error expectation for support vector machines. Neural Computation, 12, 2013–2036.
  204. Vincent, P., & Bengio, Y. (2002). Kernel matching pursuit. Machine Learning, 48, 165–187.
  205. Vishwanathan, S. V. N., Smola, A. J., & Murty, M. N. (2003). SimpleSVM. In Proceedings of the 20th International Conference on Machine Learning (pp. 760–767). Washington, DC.
  206. Wang, Z., & Chen, S. (2007). New least squares support vector machines based on matrix patterns. Neural Processing Letters, 26, 41–56.
  207. Wang, D., Yeung, D. S., & Tsang, E. C. C. (2007). Weighted Mahalanobis distance kernels for support vector machines. IEEE Transactions on Neural Networks, 18(5), 1453–1462.
  208. Wang, J.-S., & Chiang, J.-C. (2008). A cluster validity measure with outlier detection for support vector clustering. IEEE Transactions on Systems, Man, and Cybernetics Part B, 38(1), 78–89.
  209. Wang, G., Yeung, D.-Y., & Lochovsky, F. H. (2008). A new solution path algorithm in support vector regression. IEEE Transactions on Neural Networks, 19(10), 1753–1767.
  210. Wang, Y. C. F., & Casasent, D. (2008). New support vector-based design method for binary hierarchical classifiers for multi-class classification problems. Neural Networks, 21, 502–510.
  211. Wang, F., Zhao, B., & Zhang, C. (2010). Linear time maximum margin clustering. IEEE Transactions on Neural Networks, 21(2), 319–332.
  212. Warmuth, M. K., Liao, J., Ratsch, G., Mathieson, M., Putta, S., & Lemmem, C. (2003). Support vector machines for active learning in the drug discovery process. Journal of Chemical Information and Computer Sciences, 43(2), 667–673.
  213. Wen, W., Hao, Z., & Yang, X. (2008). A heuristic weight-setting strategy and iteratively updating algorithm for weighted least-squares support vector regression. Neurocomputing, 71, 3096–3103.
  214. Weston, J., & Watkins, C. (1999). Multi-class support vector machines. In M. Verleysen (Ed.), Proceedings of the European Symposium on Artificial Neural Networks. Brussels, Belgium: D. Facto Press.
  215. Weston, J., Elisseeff, A., Scholkopf, B., & Tipping, M. (2003). Use of the zero-norm with linear models and kernel methods. Journal of Machine Learning Research, 3, 1439–1461.
  216. Williams, C. K. I., & Seeger, M. (2001). Using the Nystrom method to speed up kernel machines. In T. Leen, T. Dietterich, & V. Tresp (Eds.), Advances in neural information processing systems (Vol. 13). Cambridge, MA: MIT Press.
  217. Williams, P., Li, S., Feng, J., & Wu, S. (2007). A geometrical method to improve performance of the support vector machine. IEEE Transactions on Neural Networks, 18(3), 942–947.
  218. Wu, Q., & Zhou, D.-X. (2005). SVM soft margin classifiers: Linear programming versus quadratic programming. Neural Computation, 17, 1160–1187.
  219. Wu, M., Scholkopf, B., & Bakir, G. (2006). A direct method for building sparse kernel learning algorithms. Journal of Machine Learning Research, 7, 603–624.
  220. Xu, L., Neufeld, J., Larson, B., & Schuurmans, D. (2004). Maximum margin clustering. In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances in neural information processing systems (Vol. 17). Cambridge, MA: MIT Press.
  221. Yang, H., Huang, K., King, I., & Lyu, M. R. (2009). Localized support vector regression for time series prediction. Neurocomputing, 72, 2659–2669.
  222. Yang, X., Lu, J., & Zhang, G. (2010). Adaptive pruning algorithm for least squares support vector machine classifier. Soft Computing, 14, 667–680.
  223. Yu, H., Yang, J., Han, J., & Li, X. (2005). Making SVMs scalable to large data sets using hierarchical cluster indexing. Data Mining and Knowledge Discovery, 11, 295–321.
  224. Zanghirati, G., & Zanni, L. (2003). A parallel solver for large quadratic programs in training support vector machines. Parallel Computing, 29, 535–551.
  225. Zanni, L., Serafini, T., & Zanghirati, G. (2006). Parallel software for training large scale support vector machines on multiprocessor systems. Journal of Machine Learning Research, 7, 1467–1492.
  226. Zeng, X. Y., & Chen, X. W. (2005). SMO-based pruning methods for sparse least squares support vector machines. IEEE Transactions on Neural Networks, 16(6), 1541–1546.
  227. Zhang, T., & Oles, F. J. (2001). Text categorization based on regularized linear classification methods. Information Retrieval, 4(1), 5–31.
  228. Zhang, K., Tsang, I. W., & Kwok, J. T. (2009). Maximum margin clustering made practical. IEEE Transactions on Neural Networks, 20(4), 583–596.
  229. Zhao, Y., & Sun, J. (2008). Robust support vector regression in the primal. Neural Networks, 21, 1548–1555.
  230. Zheng, J., & Lu, B.-L. (2011). A support vector machine classifier with automatic confidence and its application to gender classification. Neurocomputing, 74, 1926–1935.

Author information

Correspondence to Ke-Lin Du.

Copyright information

© 2014 Springer-Verlag London

About this chapter

Cite this chapter

Du, K.-L., & Swamy, M. N. S. (2014). Support Vector Machines. In Neural Networks and Statistical Learning. London: Springer. https://doi.org/10.1007/978-1-4471-5571-3_16

  • DOI: https://doi.org/10.1007/978-1-4471-5571-3_16

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5570-6

  • Online ISBN: 978-1-4471-5571-3

  • eBook Packages: Engineering
