A Study on the Ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns

  • Sven F. Crone
  • Jose Guajardo
  • Richard Weber
Part of the IFIP International Federation for Information Processing book series (IFIPAICT, volume 217)


Recently, novel learning algorithms such as Support Vector Regression (SVR) and Neural Networks (NN) have received increasing attention in forecasting and time series prediction, offering attractive theoretical properties and successful applications in several real-world problem domains. Time series commonly combine regular and irregular patterns such as trends and cycles, seasonal variations, level shifts, outliers or pulses, and structural breaks. Conventional parametric statistical methods can forecast a particular combination of patterns through ex ante selection of an adequate model form and specific data preprocessing. The capability of semi-parametric methods from computational intelligence to predict basic time series patterns without model selection and preprocessing is therefore of particular relevance in evaluating their contribution to forecasting. This paper presents an empirical comparison of NN and SVR models using radial basis function (RBF) and linear kernel functions, analyzing their predictive power on five artificial time series: stationary, additive seasonality, linear trend, linear trend with additive seasonality, and linear trend with multiplicative seasonality. The results show that RBF SVR models have problems extrapolating trends, whereas NN and linear SVR models without data preprocessing provide robust accuracy across all patterns and clearly outperform the commonly used RBF SVR on trended time series.
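The five artificial patterns named in the abstract can be sketched as synthetic series. The levels, slopes, seasonal amplitudes, and noise variance used by the authors are not stated here, so the values below are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

# Sketch of the five artificial patterns compared in the study.
# All numeric parameters (level 100, slope 0.5, seasonal amplitude,
# noise std 1) are assumptions for illustration only.
rng = np.random.default_rng(0)
n = 120                                      # e.g. 10 years of monthly data
t = np.arange(n)
season = 10.0 * np.sin(2 * np.pi * t / 12)   # assumed annual seasonal cycle
noise = rng.normal(0.0, 1.0, n)

series = {
    "stationary":             100 + noise,
    "additive_seasonality":   100 + season + noise,
    "linear_trend":           100 + 0.5 * t + noise,
    "trend_add_seasonality":  100 + 0.5 * t + season + noise,
    # multiplicative: the seasonal swing scales with the level of the series
    "trend_mult_seasonality": (100 + 0.5 * t)
                              * (1 + 0.1 * np.sin(2 * np.pi * t / 12)) + noise,
}
```

Each array is a complete series of 120 observations, so the same in-sample/out-of-sample split can be applied uniformly across all five patterns.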


Keywords: Root Mean Square Error · Radial Basis Function · Support Vector Regression · Mean Absolute Percentage Error · Mean Absolute Error
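The error measures in the keyword list (RMSE, MAE, MAPE) are standard accuracy criteria for comparing forecasts; a minimal NumPy sketch, assuming the actual values contain no zeros (MAPE is undefined at zero actuals):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: penalizes large errors quadratically."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the errors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent; assumes no zero actuals."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
```

For example, forecasts of 110 and 180 against actuals of 100 and 200 are both off by 10 percent, so `mape([100, 200], [110, 180])` returns 10.0.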


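The paper's central finding, that RBF kernels struggle to extrapolate trends while linear kernels do not, can be illustrated with kernel ridge regression as a simple stand-in for SVR: both are kernel methods and exhibit the same extrapolation behavior. The kernel choices mirror the paper, but the series, the hyperparameters, and the substitution of kernel ridge for ε-insensitive SVR are assumptions made for this sketch:

```python
import numpy as np

t = np.arange(100, dtype=float)
y = 100 + 0.5 * t                  # noise-free linear trend (assumed values)
tr, te = t[:80], t[80:]            # fit on the first 80 points, forecast 20
y_tr = y[:80]

def rbf(a, b, gamma=0.1):
    # Gaussian RBF kernel: decays to ~0 far from the training inputs
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def lin(a, b):
    # linear kernel with a bias term, so an intercept can be fitted
    return a[:, None] * b[None, :] + 1.0

def kernel_ridge_forecast(kernel, lam):
    # closed-form kernel ridge fit: alpha = (K + lam*I)^-1 y
    K = kernel(tr, tr)
    alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y_tr)
    return kernel(te, tr) @ alpha

pred_rbf = kernel_ridge_forecast(rbf, lam=1e-2)
pred_lin = kernel_ridge_forecast(lin, lam=1e-4)
```

Out of sample, `pred_lin` tracks the trend (the true value at t = 99 is 149.5), while `pred_rbf` collapses toward zero: the RBF kernel's similarity to every training point vanishes beyond the observed range, so the trend cannot be extrapolated.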

Copyright information

© International Federation for Information Processing 2006

Authors and Affiliations

  • Sven F. Crone, Department of Management Science, Lancaster University, Lancaster, UK
  • Jose Guajardo, Department of Industrial Engineering, University of Chile, Santiago, Chile
  • Richard Weber, Department of Industrial Engineering, University of Chile, Santiago, Chile
