Temporally Continuous vs. Clocked Networks

  • Barak A. Pearlmutter

Abstract

We discuss the advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continue with some “tricks of the trade” for continuous-time and recurrent neural networks.
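
To make the contrast concrete, here is a minimal numerical sketch (not code from the chapter) comparing a clocked update, in which the entire state is recomputed once per discrete tick, with a forward-Euler discretization of the common leaky-integrator continuous-time dynamics tau_i dy_i/dt = -y_i + sigma((W y + x)_i). The weights, inputs, time constants tau, and step size dt below are hypothetical illustration values.

    import numpy as np

    def step_clocked(y, x, W, sigma=np.tanh):
        # Clocked (discrete-time) update: the whole state is resampled once per tick.
        return sigma(W @ y + x)

    def step_continuous(y, x, W, tau, dt, sigma=np.tanh):
        # One forward-Euler step of leaky-integrator dynamics:
        #   tau_i * dy_i/dt = -y_i + sigma(sum_j W_ij y_j + x_i)
        dydt = (-y + sigma(W @ y + x)) / tau
        return y + dt * dydt

    rng = np.random.default_rng(0)
    n = 3
    W = 0.5 * rng.standard_normal((n, n))   # hypothetical fixed weights
    x = rng.standard_normal(n)              # constant external input
    tau = np.full(n, 2.0)                   # hypothetical per-unit time constants
    dt = 0.1                                # Euler step size (hypothetical)

    y_clocked = np.zeros(n)
    y_cont = np.zeros(n)
    for _ in range(100):
        y_clocked = step_clocked(y_clocked, x, W)        # jumps to a new value each tick
        y_cont = step_continuous(y_cont, x, W, tau, dt)  # relaxes smoothly toward it
    print("clocked state   :", y_clocked)
    print("continuous state:", y_cont)

As dt shrinks, the continuous state evolves smoothly between ticks while the clocked state changes only in discrete jumps; this is the distinction the abstract refers to.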

References

  [1] L. B. Almeida. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In Maureen Caudill and Charles Butler, editors, IEEE First International Conference on Neural Networks, volume 2, pages 609–618, San Diego, CA, June 21–24, 1987.
  [2] Pierre Baldi and Fernando Pineda. Contrastive learning and neural oscillations. Neural Computation, 3(4):526–545, 1991.
  [3] Sue Becker and Yann le Cun. Improving the convergence of back-propagation learning with second order methods. In David S. Touretzky, Geoffrey E. Hinton, and Terrence J. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann, 1989. Also published as Technical Report CRG-TR-88-5, Department of Computer Science, University of Toronto.
  [4] Hugues Bersini, Luis Gonzalez Sotelino, and Eric Decossaux. Hopfield net generation of trajectories in constrained environment. In this volume.
  [5] U. Bodenhausen. Learning internal representations of pattern sequences in a neural network with adaptive time-delays. In International Joint Conference on Neural Networks, San Diego, CA, June 1990. IEEE.
  [6] J. P. Crutchfield and B. S. McNamara. Equations of motion from a data series. Complex Systems, 1:417–452, 1987.
  [7] Shawn P. Day and Michael R. Davenport. Continuous-time temporal back-propagation with adaptable time delays. Available by ftp from archive.cis.ohio-state.edu as pub/neuroprose/day.temporal.ps.Z, August 1991.
  [8] Bert de Vries and Jose C. Principe. A theory for neural networks with time delays. In Lippmann et al. [21], pages 162–168.
  [9] R. Durbin and D. Willshaw. An analogue approach to the travelling salesman problem using an elastic net method. Nature, 326:689–691, 1987.
  [10] Yan Fang and Terrence J. Sejnowski. Faster learning for dynamic recurrent backpropagation. Neural Computation, 2(3):270–273, 1990.
  [11] Michael Gherrity. A learning algorithm for analog, fully recurrent neural networks. In IJCNN89 [14], pages 643–644.
  [12] Marco Gori, Yoshua Bengio, and Renato De Mori. BPS: A learning algorithm for capturing the dynamic nature of speech. In IJCNN89 [14], pages 417–423.
  [13] Geoffrey E. Hinton. Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation, 1(1):143–150, 1989.
  [14] International Joint Conference on Neural Networks, Washington, DC, June 18–22, 1989. IEEE.
  [15] Robert Jacobs. Increased rates of convergence through learning rate adaptation. Technical Report COINS 87-117, University of Massachusetts, Amherst, MA 01003, 1987.
  [16] Michael I. Jordan. Attractor dynamics and parallelism in a connectionist sequential machine. In Proceedings of the 1986 Cognitive Science Conference, pages 531–546. Lawrence Erlbaum Associates, 1986.
  [17] Arthur E. Bryson Jr. A steepest ascent method for solving optimum programming problems. Journal of Applied Mechanics, 29(2):247, 1962.
  [18] Gary Kuhn. A first look at phonetic discrimination using connectionist models with recurrent links. SCIMP working paper 82018, Institute for Defense Analysis, Princeton, New Jersey, April 1987.
  [19] Kevin J. Lang, Geoffrey E. Hinton, and Alex Waibel. A time-delay neural network architecture for isolated word recognition. Neural Networks, 3(1):23–43, 1990.
  [20] Alan Lapedes and Robert Farber. Nonlinear signal processing using neural networks: Prediction and system modelling. Technical report, Theoretical Division, Los Alamos National Laboratory, 1987.
  [21] Richard P. Lippmann, John E. Moody, and David S. Touretzky, editors. Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991.
  [22] Shawn R. Lockery, Yan Fang, and Terrence J. Sejnowski. A dynamic neural network model of sensorimotor transformations in the leech. Neural Computation, 2(3):274–282, 1990.
  [23] Shawn R. Lockery and W. B. Kristan Jr. Distributed processing of sensory information in the leech. I: Input-output relations of the local bending reflex. Journal of Neuroscience, 1990.
  [24] Shawn R. Lockery and W. B. Kristan Jr. Distributed processing of sensory information in the leech. II: Identification of interneurons contributing to the local bending reflex. Journal of Neuroscience, 1990.
  [25] Shawn R. Lockery, G. Wittenberg, W. B. Kristan Jr., N. Qian, and T. J. Sejnowski. Neural network analysis of distributed representations of sensory information in the leech. In Touretzky [37], pages 28–35.
  [26] M. B. Matthews. Neural network nonlinear adaptive filtering using the extended Kalman filter algorithm. In Proceedings of the International Neural Networks Conference, volume 1, pages 115–119, Paris, France, July 1990.
  [27] John E. Moody, Steve J. Hanson, and Richard P. Lippmann, editors. Advances in Neural Information Processing Systems 4. Morgan Kaufmann, 1992.
  [28] S. J. Nowlan and G. E. Hinton. Adaptive soft weight tying using Gaussian mixtures. In Moody et al. [27].
  [29] Barak A. Pearlmutter. Dynamic recurrent neural networks. Technical Report CMU-CS-90-196, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, December 1990.
  [30] Barak A. Pearlmutter. Two new learning procedures for recurrent networks. Neural Network Review, 3(3):99–101, 1990.
  [31] Fernando Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987.
  [32] A. J. Robinson and F. Fallside. Static and dynamic error propagation networks with application to speech coding. In Dana Z. Anderson, editor, Neural Information Processing Systems, pages 632–641, New York, New York, 1987. American Institute of Physics.
  [33] Juergen Schmidhuber. An O(n³) learning algorithm for fully recurrent networks. Technical Report FKI-151-91, Institut fuer Informatik, Muenchen, Germany, May 1991. Or ftp from flop.informatik.tu-muenchen.de as pub/fki/fki151.ps.Z.
  [34] Patrice Y. Simard, Jean Pierre Rayzs, and Bernard Victorri. Shaping the state space landscape in recurrent networks. In Lippmann et al. [21], pages 105–112.
  [35] G. Z. Sun, H. H. Chen, and Y. C. Lee. Green’s function method for fast on-line learning algorithm of recurrent neural networks. In Moody et al. [27].
  [36] D. W. Tank and J. J. Hopfield. Neural computation by concentrating information in time. Proceedings of the National Academy of Sciences, 84:1896–1900, 1987.
  [37] David S. Touretzky, editor. Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.
  [38] Tadasu Uchiyama, Katsunori Shimohara, and Yukio Tokunaga. A modified leaky integrator network for temporal pattern recognition. In IJCNN89 [14], pages 469–475.
  [39] R. L. Watrous, B. Laedendorf, and G. Kuhn. Complete gradient optimization of a recurrent network applied to bdg discrimination. Journal of the Acoustical Society of America, 1989, in press.
  [40] A. S. Weigend, D. E. Rumelhart, and B. A. Huberman. Generalization by weight-elimination with application to forecasting. In Lippmann et al. [21], pages 875–882.
  [41] R. J. Williams. Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report NU-CCS-89-27, College of Computer Science, Northeastern University, Boston, MA, 1989.
  [42] R. J. Williams. Some observations on the use of the extended Kalman filter as a recurrent network learning algorithm. Technical Report NU-CCS-92-1, College of Computer Science, Northeastern University, Boston, MA, 1992.
  [43] Ronald J. Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 2(4):490–501, 1990.
  [44] Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Technical Report ICS Report 8805, UCSD, La Jolla, CA 92093, November 1988.
  [45] Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270–280, 1989.
  [46] David Zipser. Subgrouping reduces complexity and speeds up learning in recurrent networks. In Touretzky [37], pages 638–641.

Copyright information

© Springer Science+Business Media New York 1993

Authors and Affiliations

  • Barak A. Pearlmutter, Yale University Department of Psychology, New Haven
