
Ensembles of Nearest Neighbor Forecasts

  • Dragomir Yankov
  • Dennis DeCoste
  • Eamonn Keogh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)

Abstract

Nearest neighbor forecasting models are attractive for their simplicity and their ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target one are also likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute the globally optimal number of neighbors on a validation set and then apply it to all incoming queries. For certain queries, however, this number may be suboptimal, producing forecasts that deviate substantially from the true realization.
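
To make the basic model concrete, here is a minimal sketch of such a forecaster: the series is unfolded into delay-embedding vectors, and the forecast for a query is the average outcome of its k nearest neighbors. The function names, the Euclidean distance, and the mean aggregation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a k-nearest-neighbor forecaster (illustrative; not the
# authors' implementation). Assumes Euclidean distance and mean aggregation.
import numpy as np

def delay_embed(series, dim):
    """Unfold a 1-D series into delay vectors X and their successor values y."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = np.array([series[i + dim] for i in range(len(series) - dim)])
    return X, y

def knn_forecast(X, y, query, k):
    """Forecast the next value as the mean outcome of the k nearest neighbors."""
    dists = np.linalg.norm(X - query, axis=1)  # distance to every stored vector
    nearest = np.argsort(dists)[:k]            # indices of the k closest vectors
    return y[nearest].mean()
```

In the globally tuned setting described above, a single k would be chosen by minimizing forecast error on a validation set and then reused unchanged for every query.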

To address this problem, we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for each individual query. We demonstrate that the forecasts of the ensembles improve significantly on those of the globally optimal single predictors.
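
As an illustration of how per-query selection might work, the hedged sketch below scores each candidate number of neighbors by a leave-one-out test on the query's local neighborhood and forecasts with the locally best one; this particular selection rule is an assumption for exposition, not necessarily the authors' scheme. It reuses delay_embed and knn_forecast from the sketch above.

```python
# Hedged sketch of per-query neighbor selection (illustrative only; the paper's
# ensemble construction may differ). Each candidate k is scored by how well
# k-NN would have predicted the m training points closest to the query.
def per_query_forecast(X, y, query, ks=(1, 2, 4, 8, 16), m=10):
    d_query = np.linalg.norm(X - query, axis=1)
    local = np.argsort(d_query)[:m]        # the query's local neighborhood
    best_k, best_err = ks[0], float("inf")
    for k in ks:
        errs = []
        for idx in local:
            d = np.linalg.norm(X - X[idx], axis=1)
            d[idx] = np.inf                # leave the point itself out
            nn = np.argsort(d)[:k]
            errs.append((y[nn].mean() - y[idx]) ** 2)
        err = float(np.mean(errs))
        if err < best_err:
            best_k, best_err = k, err
    return knn_forecast(X, y, query, best_k)
```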

Keywords

Root Mean Square Error, Single Predictor, Time Series Prediction, Chaotic Time Series, Good Single Predictor


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Dragomir Yankov (1)
  • Dennis DeCoste (2)
  • Eamonn Keogh (1)

  1. University of California, Riverside, USA
  2. Yahoo! Research, Burbank, USA
