
Neural learning of chaotic dynamics


Abstract

In recent years, considerable progress has been made in modeling chaotic time series with neural networks. Most of this work concentrates on developing architectures and learning paradigms that minimize the prediction error. A more detailed analysis of modeling chaotic systems involves calculating the dynamical invariants that characterize a chaotic attractor. The features of the chaotic attractor are captured during learning only if the neural network learns these dynamical invariants. The two most important are the largest Lyapunov exponent, which indicates how far into the future predictions are possible, and the correlation or fractal dimension, which indicates how complex the dynamical system is. An additional useful quantity is the power spectrum of a time series, which also characterizes the dynamics of the system, and does so more thoroughly than the prediction error.
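
To make the first of these invariants concrete: for a one-dimensional map with a known derivative, the largest Lyapunov exponent is the orbit average of ln|f'(x_n)|. The following minimal Python sketch, added here as an illustration and not part of the original paper, estimates it for the logistic map at r = 4, where the exact value is ln 2 ≈ 0.693; the function name and parameter values are ours.

    import math

    # Minimal sketch: largest Lyapunov exponent of the logistic map
    # x_{n+1} = r * x_n * (1 - x_n), estimated as the orbit average of
    # ln|f'(x_n)| with f'(x) = r * (1 - 2x).
    def logistic_lyapunov(r=4.0, x0=0.1, n_transient=1000, n_iter=100000):
        x = x0
        for _ in range(n_transient):      # discard the transient
            x = r * x * (1.0 - x)
        acc = 0.0
        for _ in range(n_iter):
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
            x = r * x * (1.0 - x)
        return acc / n_iter

    print(logistic_lyapunov())  # ~0.693 (= ln 2) for r = 4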

In this paper, we introduce recurrent networks that are able to learn chaotic maps, and investigate whether the neural models also capture the dynamical invariants of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining that corresponds to the forgetting of the largest Lyapunov exponent during learning, and call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and avoids dynamical overtraining. As examples we use the Hénon map, the logistic map, and a real-world chaotic series corresponding to the concentration of one of the chemicals, as a function of time, in experiments on the Belousov-Zhabotinskii reaction in a well-stirred flow reactor.
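
For reference, the chaotic training series used in such experiments can be generated directly from the map equations. The Python sketch below, an illustration under stated assumptions rather than the paper's actual setup, produces a Hénon-map series and one-step-ahead training pairs; the classical parameter values a = 1.4, b = 0.3 are assumed, since the abstract does not state those used.

    import numpy as np

    # Minimal sketch: generate a Hénon-map time series,
    #   x_{n+1} = 1 - a * x_n^2 + y_n,   y_{n+1} = b * x_n,
    # and build one-step-ahead training pairs for a predictor network.
    def henon_series(n, a=1.4, b=0.3, x0=0.0, y0=0.0, n_transient=100):
        x, y = x0, y0
        for _ in range(n_transient):      # settle onto the attractor
            x, y = 1.0 - a * x * x + y, b * x
        out = np.empty(n)
        for i in range(n):
            x, y = 1.0 - a * x * x + y, b * x
            out[i] = x
        return out

    series = henon_series(5000)
    inputs, targets = series[:-1], series[1:]   # predict x_{n+1} from x_n

A network trained on such pairs can afterwards be iterated on its own output, so that the Lyapunov exponent and correlation dimension of the model orbit can be compared with those of the data.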


References

  1. J. Principe, A. Rathie, J. Kuo. Prediction of chaotic time series with neural networks and the issue of dynamic modeling. Bifurcation and Chaos, vol. 2, no. 4, p. 989, 1992.

  2. B. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, vol. 1, pp. 239–269, 1989.

  3. M. Casdagli. Nonlinear prediction of chaotic time series. Physica D, vol. 35, pp. 335–356, 1989.

  4. J. Eckmann, D. Ruelle. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys., vol. 57, pp. 617–656, 1985.

  5. M. Hénon. A two-dimensional mapping with a strange attractor. Comm. Math. Phys., vol. 50, p. 69, 1976.

  6. J. Roux, R. Simoyi, H. Swinney. Observation of a strange attractor. Physica D, vol. 8, pp. 257–266, 1983.


Cite this article

Deco, G., Schürmann, B. Neural learning of chaotic dynamics. Neural Process Lett 2, 23–26 (1995). https://doi.org/10.1007/BF02312352
