Selection of Meta-parameters for Support Vector Regression

  • Vladimir Cherkassky
  • Yunqian Ma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2415)

Abstract

We propose practical recommendations for selecting meta-parameters for SVM regression (that is, the ε-insensitive zone and the regularization parameter C). The proposed methodology advocates analytic parameter selection directly from the training data, rather than the resampling approaches commonly used in SVM applications. Good generalization performance of the proposed parameter selection is demonstrated empirically on several low-dimensional and high-dimensional regression problems. In addition, we compare the generalization performance of SVM regression (with the proposed choice of ε) with robust regression using the ‘least-modulus’ loss function (ε = 0). These comparisons indicate superior generalization performance of SVM regression.
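
The abstract does not state the analytic formulas themselves. The sketch below only illustrates this style of parameter selection, assuming the prescriptions commonly attributed to Cherkassky and Ma, namely C = max(|ȳ + 3σ_y|, |ȳ − 3σ_y|) and ε = 3·σ_noise·√(ln n / n); the k-nearest-neighbour noise estimate, the synthetic sinc data, and the fixed RBF kernel width are illustrative assumptions, not prescriptions taken from the paper. The script also compares the resulting SVM regression fit against least-modulus regression (ε = 0).

```python
# Minimal sketch (not the paper's verbatim procedure): analytic selection of the
# SVR meta-parameters C and epsilon directly from the training data, followed by
# a comparison against least-modulus regression (epsilon = 0).
# The formulas C = max(|ybar + 3*sigma_y|, |ybar - 3*sigma_y|) and
# epsilon = 3 * sigma_noise * sqrt(ln(n) / n) are assumptions based on the
# prescriptions commonly attributed to Cherkassky & Ma; the k-NN noise estimate
# and the synthetic data are likewise illustrative choices.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic 1-D regression problem: sinc target with additive Gaussian noise.
n = 200
X = rng.uniform(-5, 5, size=(n, 1))
y_clean = np.sinc(X).ravel()
y = y_clean + rng.normal(scale=0.1, size=n)

# --- analytic meta-parameter selection from the training data ---
y_mean, y_std = y.mean(), y.std()
C = max(abs(y_mean + 3 * y_std), abs(y_mean - 3 * y_std))

# Rough noise-level estimate from k-NN residuals (an assumption for this sketch).
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
sigma_noise = np.sqrt(np.mean((y - knn.predict(X)) ** 2))
epsilon = 3 * sigma_noise * np.sqrt(np.log(n) / n)

# --- compare SVR with the selected epsilon against least-modulus loss (eps = 0) ---
X_test = np.linspace(-5, 5, 500).reshape(-1, 1)
y_test = np.sinc(X_test).ravel()

for eps, label in [(epsilon, "analytic epsilon"), (0.0, "least-modulus (eps=0)")]:
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=0.5).fit(X, y)
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{label:24s}  C={C:.3f}  eps={eps:.4f}  test MSE={mse:.5f}")
```

Running the script prints the selected C and ε together with the test MSE of each model, so the two loss settings can be compared on a held-out grid.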

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Vladimir Cherkassky (1)
  • Yunqian Ma (1)

  1. Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis