
Dynamical memory control based on projection technique for online regression

Soft Computing

Abstract

In this paper, a dynamical memory control strategy based on a projection technique is proposed for kernel-based online regression. When an instance is removed from the memory, its contribution is retained by projecting the regression function onto the subspace spanned by the remaining instances, rather than being simply discarded. The strategy consists of incremental and decremental controls. Under the incremental control, a new example is added to the memory only if it brings a significant change to the regression function; otherwise it is absorbed through the projection technique. The decremental control is applied when a new instance is to be added but the memory size has already reached a predefined budget. The proposed method is analyzed theoretically, and its performance is tested on four benchmark data sets.
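To make the mechanism concrete, the following is a minimal sketch in Python of a budgeted kernel online regressor in which an instance that is rejected or evicted has its contribution projected onto the span of the stored memory instead of being dropped. This is not the authors' exact algorithm: the class name ProjectedBudgetKRR, the Gaussian kernel, the novelty threshold nu, the learning rate eta, and the smallest-coefficient eviction rule are illustrative assumptions, not details taken from the paper.

import numpy as np


def rbf(a, b, gamma=1.0):
    # Gaussian kernel between two 1-D numpy arrays
    d = a - b
    return np.exp(-gamma * d.dot(d))


class ProjectedBudgetKRR:
    # Hypothetical budgeted kernel online regressor with projection-based removal.
    def __init__(self, budget=50, nu=1e-3, eta=0.1, gamma=1.0):
        self.budget = budget         # maximum memory size
        self.nu = nu                 # novelty threshold (incremental control)
        self.eta = eta               # learning rate for coefficient updates
        self.gamma = gamma
        self.X = []                  # stored instances (the memory)
        self.alpha = np.zeros(0)     # expansion coefficients
        self.K = np.zeros((0, 0))    # kernel matrix over the memory
        self.Kinv = np.zeros((0, 0))

    def predict(self, x):
        if not self.X:
            return 0.0
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        return float(self.alpha.dot(k))

    def _project_out(self, j):
        # Decremental control: remove instance j, but keep its contribution by
        # projecting phi(x_j) onto the span of the remaining instances.
        keep = [i for i in range(len(self.X)) if i != j]
        if not keep:
            self.X, self.alpha = [], np.zeros(0)
            self.K, self.Kinv = np.zeros((0, 0)), np.zeros((0, 0))
            return
        K_keep = self.K[np.ix_(keep, keep)]
        k_j = self.K[keep, j]
        # coefficients of the best approximation of phi(x_j) in the kept span
        b = np.linalg.solve(K_keep + 1e-10 * np.eye(len(keep)), k_j)
        self.alpha = self.alpha[keep] + self.alpha[j] * b
        self.X = [self.X[i] for i in keep]
        self.K = K_keep
        self.Kinv = np.linalg.inv(K_keep + 1e-10 * np.eye(len(keep)))

    def fit_one(self, x, y):
        err = y - self.predict(x)
        if not self.X:
            self.X = [x]
            self.alpha = np.array([self.eta * err])
            self.K = np.array([[rbf(x, x, self.gamma)]])
            self.Kinv = np.linalg.inv(self.K)
            return
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        a = self.Kinv.dot(k)                      # projection of phi(x) onto the memory span
        delta = rbf(x, x, self.gamma) - k.dot(a)  # residual of that projection
        if delta > self.nu:
            # Incremental control: the new example changes the function enough,
            # so store it explicitly.
            self.X.append(x)
            self.alpha = np.append(self.alpha, self.eta * err)
            n = len(self.X)
            K = np.zeros((n, n))
            K[:n - 1, :n - 1] = self.K
            K[:n - 1, n - 1] = K[n - 1, :n - 1] = k
            K[n - 1, n - 1] = rbf(x, x, self.gamma)
            self.K = K
            self.Kinv = np.linalg.inv(K + 1e-10 * np.eye(n))
            if n > self.budget:
                # Budget exceeded: evict the least influential instance and
                # fold its contribution back into the remaining memory.
                self._project_out(int(np.argmin(np.abs(self.alpha))))
        else:
            # Discard x, but spread its update over the memory via the projection.
            self.alpha = self.alpha + self.eta * err * a


# Usage on a toy stream (illustrative):
# model = ProjectedBudgetKRR(budget=30, nu=1e-2, gamma=0.5)
# for x, y in stream:
#     y_hat = model.predict(np.asarray(x, dtype=float))
#     model.fit_one(np.asarray(x, dtype=float), float(y))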



Acknowledgments

Zhang’s research was partially supported by the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China (10XNL007). Jiang’s research was partially supported by the NSFC (71071155).

Author information


Corresponding author

Correspondence to Bo Zhang.


About this article

Cite this article

Jiang, H., Zhang, B. Dynamical memory control based on projection technique for online regression. Soft Comput 17, 587–596 (2013). https://doi.org/10.1007/s00500-012-0929-y
