
A Recurrent Neural Network for Time-series Modelling

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

This paper describes an architecture for discrete-time feedback neural networks. Analytical results are derived for networks with linear nodal activation functions, while simulations demonstrate the performance of recurrent networks with nonlinear (sigmoidal) activation functions. The need for models that can handle correlated noise sequences is pointed out, and it is shown that the recurrent networks can perform state estimation and accommodate models with coloured noise.
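
For readers who want a concrete picture of the kind of model the abstract describes, the following is a minimal sketch, not the authors' implementation: it assumes a small discrete-time state-space (feedback) network with sigmoidal hidden units and drives it with a first-order autoregressive (coloured-noise) input sequence. All names, dimensions and parameter values are illustrative assumptions.

# Minimal sketch of a discrete-time recurrent (state-space) network with
# sigmoidal hidden activations.  Illustrative only; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 1, 4, 1

# Weights: input-to-hidden, hidden-to-hidden (the feedback loop), hidden-to-output.
W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))
b = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(state, u):
    """One discrete time step: returns the new hidden state and the output."""
    state = sigmoid(W_in @ u + W_rec @ state + b)
    y = W_out @ state
    return state, y

# Coloured-noise input: a first-order AR process e_t = 0.8 e_{t-1} + w_t,
# whose samples are correlated rather than white.
T = 200
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.1)

state = np.zeros(n_hidden)
outputs = []
for t in range(T):
    state, y = step(state, e[t:t + 1])
    outputs.append(y[0])

print("final hidden state:", np.round(state, 3))
print("last few outputs:", np.round(outputs[-5:], 3))

The hidden state plays the role of the estimated system state, carried forward from step to step through the feedback weights; training such a network (for example by backpropagation through time) is what the paper studies, and is omitted here.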

Copyright information

© 1993 Springer-Verlag/Wien

About this paper

Cite this paper

Bulsari, A.B., Saxén, H. (1993). A Recurrent Neural Network for Time-series Modelling. In: Albrecht, R.F., Reeves, C.R., Steele, N.C. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7533-0_43

  • DOI: https://doi.org/10.1007/978-3-7091-7533-0_43

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-82459-7

  • Online ISBN: 978-3-7091-7533-0

  • eBook Packages: Springer Book Archive
