
Unsupervised Learning of Temporal Constancies by Pyramidal-Type Neurons.

  • Chapter
Mathematics of Neural Networks

Part of the book series: Operations Research/Computer Science Interfaces Series (ORCS, volume 8)


Abstract

An unsupervised learning principle is proposed for individual neurons with a complex synaptic structure and dynamical input. The learning goal is a neuronal response to temporal constancies: if some input patterns often occur in close temporal succession, the neuron should respond either to all of them or to none. It is shown that linear threshold neurons can achieve this goal if each synapse stores not only a weight but also a short-term memory trace. The online learning process requires no biologically implausible interactions. The sequence of temporal associations can be interpreted as a random walk on the state-transition graph of the input dynamics. In numerical simulations, the learning process proved robust to parameter changes.
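The mechanism described in the abstract, a linear threshold unit whose synapses each carry a weight and a short-term memory trace, can be illustrated with a minimal sketch. The code below is a hedged reconstruction, not the chapter's actual update rule (which this page does not reproduce): it assumes a Földiák-style exponential trace of recent presynaptic activity and a trace-gated Hebbian weight change; all parameter values, variable names, and the toy input dynamics are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs = 20
    w = rng.uniform(0.0, 0.1, n_inputs)   # synaptic weights (assumed initialization)
    trace = np.zeros(n_inputs)            # per-synapse short-term memory trace
    theta = 0.1                           # firing threshold (assumed value)
    eta, delta = 0.05, 0.2                # learning rate and trace decay (assumed)

    def step(x):
        """One online update with a binary input pattern x."""
        global w, trace
        y = float(w @ x > theta)                    # linear threshold response
        trace = (1.0 - delta) * trace + delta * x   # decaying memory of recent inputs
        # Trace-gated Hebbian change: patterns arriving in close temporal
        # succession pull the weights toward a common response.
        w += eta * y * (trace - w)
        return y

    # Toy dynamics: two groups of input lines; patterns from one group are
    # presented in short runs, mimicking a random walk that lingers in a state.
    groups = [rng.permutation(n_inputs)[:5] for _ in range(2)]
    for _ in range(400):
        g = groups[rng.integers(2)]
        for _ in range(5):
            x = np.zeros(n_inputs)
            x[rng.choice(g, size=3, replace=False)] = 1.0
            step(x)

    # Inspect the trained response to a representative pattern from each group;
    # within a temporally coherent group the response should be uniform.
    for g in groups:
        x = np.zeros(n_inputs)
        x[g[:3]] = 1.0
        print(float(w @ x > theta))

Presenting each group's patterns in short runs stands in for the random walk on the state-transition graph mentioned in the abstract; the trace then couples patterns within a run, so the unit tends toward an all-or-none response per group.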

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Eisele, M. (1997). Unsupervised Learning of Temporal Constancies by Pyramidal-Type Neurons. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_27

Download citation

  • DOI: https://doi.org/10.1007/978-1-4615-6099-9_27

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7794-8

  • Online ISBN: 978-1-4615-6099-9

  • eBook Packages: Springer Book Archive
