
Predicting time series with support vector machines

  • K.-R. Müller
  • A. J. Smola
  • G. Rätsch
  • B. Schölkopf
  • J. Kohlmorgen
  • V. Vapnik
Part VII: Prediction, Forecasting and Monitoring
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)

Abstract

Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function, and we discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by 29%.
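
The abstract names two Support Vector cost functions: the ε-insensitive loss and Huber's robust loss. The Python sketch below is a hedged illustration only, not the authors' original setup: the Mackey-Glass integration step, the embedding dimension m, the noise level, and the SVR hyperparameters (C, epsilon, gamma) are illustrative assumptions, and scikit-learn's SVR stands in for the Support Vector regression described in the paper.

    import numpy as np
    from sklearn.svm import SVR

    def eps_insensitive(residual, eps=0.1):
        # Vapnik's epsilon-insensitive loss: zero inside the eps-tube, linear outside.
        return np.maximum(0.0, np.abs(residual) - eps)

    def huber(residual, sigma=1.0):
        # Huber's robust loss: quadratic near zero, linear in the tails.
        r = np.abs(residual)
        return np.where(r <= sigma, 0.5 * r**2, sigma * (r - 0.5 * sigma))

    def mackey_glass(n=2000, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
        # Simple Euler integration of the Mackey-Glass delay equation
        # (parameters are illustrative, not those used in the paper).
        x = np.full(n + tau, x0)
        for t in range(tau, n + tau - 1):
            x[t + 1] = x[t] + dt * (a * x[t - tau] / (1.0 + x[t - tau] ** 10) - b * x[t])
        return x[tau:]

    # Noisy series and delay embedding: predict x(t) from m past values.
    rng = np.random.default_rng(0)
    series = mackey_glass() + rng.normal(0.0, 0.05, size=2000)  # additive normal noise
    m = 6  # embedding dimension (an assumption)
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]

    # epsilon-SVR with an RBF kernel; scikit-learn minimizes the epsilon-insensitive
    # loss internally. Hyperparameter values are guesses, not the paper's.
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=1.0)
    model.fit(X[:1500], y[:1500])

    res = model.predict(X[1500:]) - y[1500:]
    print("test MSE:", np.mean(res ** 2))
    print("mean eps-insensitive loss:", eps_insensitive(res, eps=0.01).mean())
    print("mean Huber loss:", huber(res, sigma=0.05).mean())

Both loss functions are written out explicitly for comparison, but only the ε-insensitive case is actually optimized here, since scikit-learn's SVR does not expose Huber's loss.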

Keywords

Support Vector Machine · Radial Basis Function · Support Vector Regression · Radial Basis Function Network · Quadratic Programming Problem

References

  1. M. Aizerman, E. Braverman, L. Rozonoér (1964), Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837.
  2. C. M. Bishop (1995), Neural Networks for Pattern Recognition, Oxford University Press.
  3. B. E. Boser, I. M. Guyon, and V. N. Vapnik (1992), A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proc. of COLT'92, 144.
  4. C. Burges, B. Schölkopf (1997), Improving speed and accuracy of Support Vector Machines, NIPS'96.
  5. H. Drucker, C. Burges, L. Kaufman, A. Smola, V. Vapnik (1997), Linear support vector regression machines, NIPS'96.
  6. P. J. Huber (1972), Robust statistics: a review. Ann. Math. Statist., 43:1041.
  7. M. C. Mackey and L. Glass (1977), Science, 197:287–289.
  8. J. Moody and C. Darken (1989), Neural Computation, 1(2):281–294.
  9. K. Pawelzik, J. Kohlmorgen, K.-R. Müller (1996), Neural Comp., 8(2):342–358.
  10. K. Pawelzik, K.-R. Müller, J. Kohlmorgen (1996), Prediction of Mixtures, in ICANN'96, LNCS 1112, Springer, Berlin, 127–132, and GMD Tech. Rep. 1069.
  11. B. Schölkopf, C. Burges, V. Vapnik (1995), Extracting support data for a given task. KDD'95 (eds. U. Fayyad, R. Uthurusamy), AAAI Press, Menlo Park, CA.
  12. A. J. Smola, B. Schölkopf (1997), On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, to appear.
  13. A. S. Weigend, N. A. Gershenfeld (Eds.) (1994), Time Series Prediction: Forecasting the Future and Understanding the Past, Addison-Wesley.
  14. V. Vapnik (1995), The Nature of Statistical Learning Theory. Springer, New York.
  15. V. Vapnik, S. Golowich, A. Smola (1997), Support vector method for function approximation, regression estimation, and signal processing, NIPS'96.
  16. X. Zhang, J. Hutchinson (1994), Simple architectures on fast machines: practical issues in nonlinear time series prediction, in [13].

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • K.-R. Müller (1)
  • A. J. Smola (1)
  • G. Rätsch (1)
  • B. Schölkopf (2)
  • J. Kohlmorgen (1)
  • V. Vapnik (3)
  1. GMD FIRST, Berlin, Germany
  2. Max-Planck-Institut f. biol. Kybernetik, Germany
  3. AT&T Research, Holmdel, USA
