An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies
The paper presents an upper bound, in the style of competitive prediction, on the square loss of the Aggregating Algorithm for Regression with Changing Dependencies in the linear case. The algorithm is able to compete with any sequence of linear predictors provided the sum of squared Euclidean norms of the differences between consecutive regression coefficient vectors grows at a sublinear rate.
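In symbols, the sublinear-drift condition stated above can be sketched as follows (the notation is assumed, not taken from the paper: $\theta_t$ denotes the comparison sequence's regression coefficient vector at trial $t$, and $T$ is the number of trials):

```latex
% Assumed notation: \theta_t is the coefficient vector of the
% comparison linear predictor at trial t; T is the number of trials.
% The algorithm competes with any sequence \theta_1, \dots, \theta_T
% whose total squared drift is sublinear in T:
\[
  \sum_{t=2}^{T} \bigl\lVert \theta_t - \theta_{t-1} \bigr\rVert_2^2 = o(T).
\]
```

When the comparison sequence is constant ($\theta_1 = \dots = \theta_T$), the drift term vanishes and the setting reduces to ordinary competitive regression against a single fixed linear predictor.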
The author has been supported by the Leverhulme Trust through the grant RPG-2013-047 ‘Online self-tuning learning algorithms for handling historical information’. The author would like to thank Vladimir Vovk, Dmitry Adamskiy, and Vladimir V’yugin for useful discussions. Special thanks to Alexey Chernov, who helped to simplify the statement of the main result.