An Adaptive Sparse Grid Approach for Time Series Prediction

Conference paper
Part of the Lecture Notes in Computational Science and Engineering book series (LNCSE, volume 88)

Abstract

A real-valued, deterministic and stationary time series can be embedded in a (sometimes high-dimensional) real vector space. This leads to a one-to-one relationship between the embedded, time-dependent vectors in \({\mathbb{R}}^{d}\) and the states of the underlying, unknown dynamical system that determines the time series. The embedded data points are located on an m-dimensional manifold (or even fractal), the so-called attractor of the time series. Takens’ theorem then states that an upper bound for the embedding dimension d is given by d ≤ 2m + 1.

The task of predicting future values thus becomes, together with an estimate of the manifold dimension m, a scattered data regression problem in d dimensions. In contrast to most common regression algorithms, such as support vector machines (SVMs) or neural networks, which follow a data-based approach, we employ in this paper a sparse grid-based discretization technique. This allows us to efficiently handle huge amounts of training data in moderate dimensions. Extensions of the basic method lead to space- and dimension-adaptive sparse grid algorithms. These become useful if the attractor is located only in a small part of the embedding space or if its dimension was chosen too large.

We discuss the basic features of our sparse grid prediction method and give the results of numerical experiments for time series with both synthetic and real-life data.
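To make the embedding step concrete, the following Python sketch (an illustration under stated assumptions, not the authors' implementation) builds the delay-embedding vectors and the one-step-ahead prediction targets for a scalar series. The Hénon map as synthetic data, a unit delay, the series length and the embedding dimension d = 3 are choices made for this example; the helper names delay_embed and henon_x are likewise ours.

import numpy as np

def delay_embed(y, d, tau=1):
    # Row i of X is the embedded point (y[i], y[i+tau], ..., y[i+(d-1)*tau]);
    # the target t[i] is the next value y[i+(d-1)*tau+1] to be predicted.
    n = len(y) - (d - 1) * tau - 1
    X = np.column_stack([y[j * tau : j * tau + n] for j in range(d)])
    t = y[(d - 1) * tau + 1 : (d - 1) * tau + 1 + n]
    return X, t

def henon_x(n, a=1.4, b=0.3):
    # x-component of the Henon map, a common synthetic test series whose
    # attractor has a fractal dimension of roughly m = 1.26.
    x, y = 0.1, 0.0
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = x
    return out

series = henon_x(10000)
X, t = delay_embed(series, d=3)   # d = 3 respects the bound d <= 2m + 1
# (X, t) now defines a scattered data regression problem in d dimensions,
# which the paper discretizes with (adaptive) sparse grids rather than
# with data-based methods such as SVMs or neural networks.

Predicting a future value then amounts to evaluating the fitted regression function at the embedding vector formed from the most recent d observations.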

References

  1. H.-J. Bungartz and M. Griebel. Sparse grids. Acta Numerica, 13:147–269, 2004.
  2. M. Casdagli, T. Sauer, and J. Yorke. Embedology. Journal of Statistical Physics, 65:576–616, 1991.
  3. C. Chang and C. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
  4. C. Feuersänger. Sparse Grid Methods for Higher Dimensional Approximation. PhD thesis, Institute for Numerical Simulation, University of Bonn, 2010.
  5. C. Feuersänger and M. Griebel. Principal manifold learning by sparse grids. Computing, 85(4), 2009.
  6. J. Garcke. Maschinelles Lernen durch Funktionsrekonstruktion mit verallgemeinerten dünnen Gittern (Machine learning by function reconstruction with generalised sparse grids). PhD thesis, Institute for Numerical Simulation, University of Bonn, 2004.
  7. J. Garcke. A dimension adaptive combination technique using localised adaptation criteria. In H. Bock, X. Hoang, R. Rannacher, and J. Schlöder, editors, Modeling, Simulation and Optimization of Complex Processes, pages 115–125. Springer, 2012.
  8. J. Garcke, T. Gerstner, and M. Griebel. Intraday foreign exchange rate forecasting using sparse grids. In J. Garcke and M. Griebel, editors, Sparse Grids and Applications, pages 81–105. Springer, 2012.
  9. J. Garcke and M. Hegland. Fitting multidimensional data using gradient penalties and the sparse grid combination technique. Computing, 84(1–2):1–25, 2009.
  10. T. Gerstner and M. Griebel. Dimension-adaptive tensor-product quadrature. Computing, 71(1):65–87, 2003.
  11. G. Golub and W. Kahan. Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 2(2):205–224, 1965.
  12. P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D, 9:189–208, 1983.
  13. M. Griebel. Adaptive sparse grid multilevel methods for elliptic PDEs based on finite differences. Computing, 61(2):151–179, 1998.
  14. M. Griebel and M. Hegland. A finite element method for density estimation with Gaussian priors. SIAM Journal on Numerical Analysis, 47(6), 2010.
  15. M. Griebel and P. Oswald. Tensor product type subspace splitting and multilevel iterative methods for anisotropic problems. Advances in Computational Mathematics, 4:171–206, 1995.
  16. M. Griebel, M. Schneider, and C. Zenger. A combination technique for the solution of sparse grid problems. In P. de Groen and R. Beauwens, editors, Iterative Methods in Linear Algebra, pages 263–281. IMACS, Elsevier, North Holland, 1992.
  17. M. Hegland. Adaptive sparse grids. ANZIAM Journal, 44:C335–C353, 2003.
  18. M. Hénon. A two-dimensional mapping with a strange attractor. Communications in Mathematical Physics, 50:69–77, 1976.
  19. J. Huke. Embedding nonlinear dynamical systems: A guide to Takens’ theorem. Manchester Institute for Mathematical Sciences EPrint 2006.26, 2006.
  20. J. Imai and K. Tan. Minimizing effective dimension using linear transformation. In Monte Carlo and Quasi-Monte Carlo Methods 2002, pages 275–292. Springer, 2004.
  21. H. Kantz and T. Schreiber. Nonlinear Time Series Analysis. Cambridge University Press, 2nd edition, 2004.
  22. A. Krueger. Implementation of a fast box-counting algorithm. Computer Physics Communications, 98:224–234, 1996.
  23. L. Liebovitch and T. Toth. A fast algorithm to determine fractal dimensions by box counting. Physics Letters A, 141(8–9):386–390, 1989.
  24. B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, Massachusetts, 2002.
  25. F. Takens. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, volume 898 of Lecture Notes in Mathematics, pages 366–381. Springer, 1981.
  26. J. Theiler. Efficient algorithm for estimating the correlation dimension from a set of discrete points. Physical Review A, 36(9):4456–4462, 1987.
  27. A. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Mathematics Doklady, 4:1035–1038, 1963.
  28. G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, 1990.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

Institute for Numerical Simulation, University of Bonn, Bonn, Germany
