An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies

  • Yuri Kalnishkan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9925)

Abstract

The paper presents a competitive-prediction-style upper bound on the square loss of the Aggregating Algorithm for Regression with changing dependencies in the linear case. The algorithm competes with any sequence of linear predictors provided the sum of squared Euclidean norms of the differences of consecutive regression coefficient vectors grows at a sublinear rate.
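For context, the base Aggregating Algorithm for Regression (also known as the Vovk-Azoury-Warmuth forecaster; see Vovk, "Competitive on-line statistics", 2001) can be sketched as below. This is a minimal illustration of the stationary algorithm that the paper's changing-dependencies variant extends, not the paper's algorithm itself; the function name `aar_predictions` and the ridge parameter `a` are this sketch's own choices.

```python
import numpy as np

def aar_predictions(X, y, a=1.0):
    """Online predictions of the Aggregating Algorithm for Regression.

    At step t the learner sees x_t, predicts
        gamma_t = x_t^T (a I + sum_{s<=t} x_s x_s^T)^{-1} sum_{s<t} y_s x_s,
    and only then observes the outcome y_t.
    """
    T, n = X.shape
    A = a * np.eye(n)           # a*I plus the running sum of outer products
    b = np.zeros(n)             # running sum of y_s * x_s over past steps
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)     # AAR includes the current x_t in the matrix
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x           # outcome y_t becomes available after predicting
    return preds
```

Note the distinctive feature of AAR: the current signal x_t enters the matrix before the prediction is made, which is what yields the algorithm's logarithmic regret term in the stationary linear case.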

Acknowledgement

The author has been supported by the Leverhulme Trust through the grant RPG-2013-047 ‘Online self-tuning learning algorithms for handling historical information’. The author would like to thank Vladimir Vovk, Dmitry Adamskiy, and Vladimir V’yugin for useful discussions. Special thanks to Alexey Chernov, who helped to simplify the statement of the main result.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Computer Learning Research Centre and Department of Computer Science, Royal Holloway, University of London, Egham, UK