
Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance

  • Chapter
Support Vector Machines: Theory and Applications

Part of the book series: Studies in Fuzziness and Soft Computing ((STUDFUZZ,volume 177))

Abstract

The chapter introduces the latest developments and results of the Iterative Single Data Algorithm (ISDA) for solving large-scale support vector machine (SVM) problems. First, the equivalence of the Kernel AdaTron (KA) method (which originates from a gradient ascent learning approach) and the Sequential Minimal Optimization (SMO) learning algorithm (which is based on an analytic quadratic programming step for a model without the bias term b) is shown for designing SVMs with positive definite kernels, for both nonlinear classification and nonlinear regression tasks. The chapter also introduces the classic Gauss-Seidel procedure and its derivative, the successive over-relaxation (SOR) algorithm, as viable (and usually faster) training algorithms, and proves a convergence theorem for these related iterative methods. The second part of the chapter presents the effects of, and the methods for, incorporating an explicit bias term b into the ISDA. The algorithms shown here iterate over a single training data point at a time (a.k.a. per-pattern learning), which makes the proposed ISDAs remarkably quick. The final solution in the dual domain is not an approximate one; it is the optimal set of dual variables that would have been obtained by any existing, proven QP solver, if only such solvers could handle huge data sets.
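
To make the per-pattern update concrete, the following is a minimal sketch of an ISDA-style iteration for the nonlinear classification case without an explicit bias term, written in Python/NumPy. It assumes a positive definite RBF kernel and the standard box-constrained SVM dual; the names (rbf_kernel, isda_classifier) and parameter values (gamma, C, omega) are illustrative choices, not the chapter's reference implementation. Setting omega = 1 corresponds to a Gauss-Seidel step, while omega in (1, 2) gives the successive over-relaxation variant mentioned above.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel, which is positive definite.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def isda_classifier(K, y, C=1.0, omega=1.0, n_epochs=100, tol=1e-4):
    """Per-pattern (single-data) coordinate ascent on the SVM dual without
    an explicit bias term b. omega = 1 is a Gauss-Seidel step; omega in
    (1, 2) gives successive over-relaxation (SOR)."""
    n = len(y)
    alpha = np.zeros(n)
    for epoch in range(n_epochs):
        max_change = 0.0
        for i in range(n):
            # Gradient of the dual objective with respect to alpha_i.
            grad_i = 1.0 - y[i] * np.sum(alpha * y * K[i])
            # Exact single-variable step scaled by the relaxation factor.
            delta = omega * grad_i / K[i, i]
            new_alpha = np.clip(alpha[i] + delta, 0.0, C)  # box constraint
            max_change = max(max_change, abs(new_alpha - alpha[i]))
            alpha[i] = new_alpha
        if max_change < tol:
            break
    return alpha

# Toy usage on XOR-like 2-D data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)
K = rbf_kernel(X, gamma=2.0)
alpha = isda_classifier(K, y, C=10.0, omega=1.5)
pred = np.sign(K @ (alpha * y))   # decision function without bias term
print("training accuracy:", np.mean(pred == y))
```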


Author information

Authors: V. Kecman, T.M. Huang, M. Vogt

Editor information

Editor: Lipo Wang


About this chapter

Cite this chapter

Kecman, V., Huang, TM., Vogt, M. Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance. In: Wang, L. (eds) Support Vector Machines: Theory and Applications. Studies in Fuzziness and Soft Computing, vol 177. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10984697_12

  • DOI: https://doi.org/10.1007/10984697_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-24388-5

  • Online ISBN: 978-3-540-32384-6

  • eBook Packages: Engineering, Engineering (R0)
