Discrete time neural networks

Abstract

Traditional feedforward neural networks are static structures that simply map input to output. To better reflect the dynamics of biological systems, time dependence is incorporated into the network by using Finite Impulse Response (FIR) linear filters to model the processes of axonal transport, synaptic modulation, and charge dissipation. Although a constructive proof establishes a theoretical equivalence between the classes of problems solvable by the FIR model and by the static structure, the FIR model retains certain practical and computational advantages. Adaptation of the network is achieved through an efficient gradient descent algorithm, which is shown to be a temporal generalization of the popular backpropagation algorithm for static networks. Applications of the network are discussed, with a detailed example of its use for time series prediction.

Cite this article

Wan, E.A. Discrete time neural networks. Appl Intell 3, 91–105 (1993). https://doi.org/10.1007/BF00871724
