Ensembles of Nearest Neighbor Forecasts
Nearest neighbor forecasting models are attractive for their simplicity and their ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target one are also likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute the globally optimal number of neighbors on a validation set and then apply it to all incoming queries. For certain queries, however, this number may be suboptimal, producing forecasts that deviate substantially from the true realization.
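A minimal sketch of the forecasting scheme described above, assuming a delay-coordinate embedding of the series and a one-step-ahead target (the function name, parameters, and the choice of Euclidean distance are illustrative, not taken from the paper):

```python
import numpy as np

def knn_forecast(series, query, k, emb_dim):
    """Forecast the next value after `query` (a window of length
    `emb_dim`) from the outcomes of its k nearest neighbors in
    `series`."""
    # Library of delay vectors and their one-step-ahead outcomes.
    vectors = np.array([series[i:i + emb_dim]
                        for i in range(len(series) - emb_dim)])
    outcomes = np.array([series[i + emb_dim]
                         for i in range(len(series) - emb_dim)])
    # Euclidean distance from the query window to every library vector.
    dists = np.linalg.norm(vectors - np.asarray(query), axis=1)
    nearest = np.argsort(dists)[:k]
    # Predict the mean outcome of the k most similar windows.
    return outcomes[nearest].mean()
```

Here `k` is the number of neighbors that, in the common practice criticized above, would be tuned once on a validation set and then fixed for every query.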
To address this problem, we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for each individual query. We demonstrate that the forecasts of the ensembles improve significantly on those of the globally optimal single predictors.
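The ensemble idea can be illustrated as follows. Rather than committing to one globally tuned k, several nearest neighbor predictors with different neighborhood sizes are combined per query; the equal-weight average below is a simple stand-in for the per-query selection the paper proposes, not the authors' exact scheme (all names and parameters here are hypothetical):

```python
import numpy as np

def nn_predictions(series, query, k_values, emb_dim):
    """One-step forecasts of `query` for several neighborhood sizes k."""
    vectors = np.array([series[i:i + emb_dim]
                        for i in range(len(series) - emb_dim)])
    outcomes = np.array([series[i + emb_dim]
                         for i in range(len(series) - emb_dim)])
    # Rank all library windows once by distance to the query.
    order = np.argsort(np.linalg.norm(vectors - np.asarray(query), axis=1))
    # For each k, forecast the mean outcome of the k nearest windows.
    return {k: outcomes[order[:k]].mean() for k in k_values}

def ensemble_forecast(series, query, k_values, emb_dim):
    """Combine the k-specific forecasts instead of trusting a single
    globally fixed k (illustrative equal-weight averaging)."""
    preds = nn_predictions(series, query, k_values, emb_dim)
    return float(np.mean(list(preds.values())))
```

A per-query combination like this hedges against queries for which the globally validated k happens to be a poor choice, which is the failure mode the abstract identifies.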
Keywords: Root Mean Square Error, Single Predictor, Time Series Prediction, Chaotic Time Series, Good Single Predictor