
Journal of Signal Processing Systems, Volume 65, Issue 3, pp 391–402

Online SVR Training by Solving the Primal Optimization Problem

  • Dominik Brugger
  • Wolfgang Rosenstiel
  • Martin Bogdan

Abstract

Online estimation of regression functions becomes important in the presence of drifts and rapid changes in the training data. In this article we propose a new online training algorithm for support vector regression (SVR), called Priona, which is based on the idea of computing approximate solutions to the primal optimization problem. For the solution of the primal SVR problem we investigated the trade-off between computation time and prediction accuracy for the gradient, diagonally scaled gradient, and Newton descent directions. The choice of a particular buffering strategy did not influence the performance of the algorithm. Because it uses a line search, Priona does not require a priori selection of a learning rate, which facilitates its practical application. On various benchmark data sets Priona is shown to achieve better prediction accuracy than the Norma and Silk online SVR algorithms. Further, tests on two artificial data sets show that the online SVR algorithms are able to track temporal changes and drifts of the regression function, provided the buffer size and learning rate are selected appropriately.
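The core idea of the abstract, minimizing the primal SVR objective directly by a descent method whose step size comes from a line search rather than a hand-tuned learning rate, can be sketched as follows. This is a minimal illustration under stated assumptions, not the Priona algorithm itself: the RBF kernel, the squared ε-insensitive loss, and all parameter values are choices made for the example, only the plain gradient direction is shown (not the diagonally scaled or Newton variants), and no buffering of online samples is implemented.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian RBF kernel via pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def primal_svr(K, y, lam=0.01, eps=0.05, max_iter=200, tol=1e-8):
    """Minimize  lam * b'Kb + sum_i max(0, |(Kb)_i - y_i| - eps)^2
    by gradient descent with a backtracking (Armijo) line search."""
    n = len(y)
    beta = np.zeros(n)

    def obj(b):
        r = K @ b - y
        slack = np.maximum(0.0, np.abs(r) - eps)
        return lam * (b @ K @ b) + np.sum(slack**2)

    for _ in range(max_iter):
        r = K @ beta - y
        slack = np.maximum(0.0, np.abs(r) - eps)
        # gradient of regularizer plus squared eps-insensitive loss
        grad = 2.0 * lam * (K @ beta) + K @ (2.0 * np.sign(r) * slack)
        if np.linalg.norm(grad) < tol:
            break
        # line search replaces an a priori learning rate: halve the step
        # until the Armijo sufficient-decrease condition holds
        t, f0 = 1.0, obj(beta)
        while obj(beta - t * grad) > f0 - 1e-4 * t * (grad @ grad) and t > 1e-12:
            t *= 0.5
        beta = beta - t * grad
    return beta

# toy demo: fit a noisy sine curve
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(40)
K = rbf_kernel(X, gamma=1.0)
beta = primal_svr(K, y)
pred = K @ beta
print("mean abs residual:", np.mean(np.abs(pred - y)))
```

An online variant would, as in the article, maintain only a fixed-size buffer of recent samples and recompute an approximate primal solution as each new sample arrives; the line search carries over unchanged, which is what removes the learning-rate selection problem.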

Keywords

Support vector regression · Online training · Primal optimization


Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Dominik Brugger (1)
  • Wolfgang Rosenstiel (2)
  • Martin Bogdan (3)
  1. Hertie-Institut für klinische Hirnforschung, Abteilung kognitive Neurologie, Universität Tübingen, Tübingen, Germany
  2. Technische Informatik, Universität Tübingen, Tübingen, Germany
  3. Technische Informatik, Universität Leipzig, Leipzig, Germany
