Historical Consistent Neural Networks: New Perspectives on Market Modeling, Forecasting and Risk Analysis

  • Hans-Georg Zimmermann
  • Christoph Tietz
  • Ralph Grothmann
Part of the Studies in Computational Intelligence book series (SCI, volume 410)

Abstract

From a mathematical point of view, neural networks allow the construction of models that can handle high-dimensional problems with a high degree of nonlinearity. In this chapter we deal with a special type of time-delay recurrent neural network. In these models we understand a part of the world as a large recursive system that is only partially observable. We model and forecast all observables jointly, which avoids the problem of open systems that the external drivers are unknown from the present time onward. This framework goes far beyond the paradigms of standard regression theory and allows us to forecast financial markets and to perform a new kind of risk analysis.
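The abstract compresses the architecture into a few sentences. As a rough illustration only, the following Python sketch assumes the historical consistent neural network (HCNN) formulation the authors publish elsewhere: a state recursion s_{t+1} = tanh(A s_t) whose first components are read off as the observables, teacher-forced to the data over the historical period and then run autonomously to produce forecasts. The random matrix A, the dimensions, and the ensemble size are placeholders, not a trained model.

```python
# Minimal sketch of an HCNN-style recursion, assuming s_{t+1} = tanh(A s_t)
# with the observables occupying the first m state components. Training
# (backpropagation through time) is omitted; the random matrix A merely
# stands in for an already trained state transition matrix.
import numpy as np

rng = np.random.default_rng(0)

def hcnn_path(A, s0, observations=None, horizon=0):
    """Iterate the state recursion.

    Over the historical part, the observable slice of the state is replaced
    by the actual observations (teacher forcing); afterwards the closed
    system runs freely, so no external drivers are needed for the forecast.
    """
    m = observations.shape[1] if observations is not None else 0
    s = s0.copy()
    outputs = []
    if observations is not None:
        for y_t in observations:
            s[:m] = y_t              # force the observed part of the state
            s = np.tanh(A @ s)       # shared state transition
            outputs.append(s[:m].copy())
    for _ in range(horizon):         # autonomous iteration: no external input
        s = np.tanh(A @ s)
        outputs.append(s[:m].copy())
    return np.array(outputs)

# Ensemble forecast: several networks with different hidden dynamics span a
# distribution of future paths (all sizes below are hypothetical).
n_state, n_obs, T, horizon = 20, 3, 50, 10
history = rng.standard_normal((T, n_obs)) * 0.1
members = []
for _ in range(30):
    A = rng.standard_normal((n_state, n_state)) * 0.2
    path = hcnn_path(A, np.zeros(n_state), observations=history, horizon=horizon)
    members.append(path[-horizon:])
ensemble = np.stack(members)              # (members, horizon, n_obs)
mean_forecast = ensemble.mean(axis=0)     # point forecast
spread = ensemble.std(axis=0)             # ensemble width as a risk proxy
print(mean_forecast.shape, spread.shape)
```

In the framework described in the chapter, each ensemble member fits the observed history while differing in its hidden dynamics, so the spread of the individual forecasts can be read as a measure of forecast uncertainty; the untrained sketch above only mimics that mechanism.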

Keywords

Recurrent Neural Network · Ensemble Forecast · State Transition Matrix · Memory Length · Individual Forecast



Copyright information

© Springer Berlin Heidelberg 2013

Authors and Affiliations

  • Hans-Georg Zimmermann (1)
  • Christoph Tietz (1)
  • Ralph Grothmann (1)
  1. Intelligent Systems and Control, Siemens AG, Corporate Technology, Munich, Germany
