
Stochastic Neurons

Chapter in the book Neural Networks

Part of the book series: Physics of Neural Networks

Abstract

We now consider a simple generalization of the neural networks discussed in the previous chapter which permits a more powerful theoretical treatment. For this purpose we replace the deterministic evolution law (3.5) for the neural activity by a stochastic law, which does not assign a definite value to $s_i(t+1)$ but only gives the probabilities that $s_i(t+1)$ takes one of the values $+1$ or $-1$. We require that the value $s_i(t+1) = \pm 1$ occurs with probability $f(\pm h_i)$, where the activation function $f(h)$ must have the proper limiting values $f(h \to -\infty) = 0$ and $f(h \to +\infty) = 1$. Between these limits the activation function must rise monotonically, smoothly interpolating between 0 and 1. Such functions are often called sigmoidal functions. A standard choice, depicted in Fig. 4.1, is $f(h) = \left[1 + \mathrm{e}^{-2\beta h}\right]^{-1}$ [Li74, Li78], which satisfies the condition $f(h) + f(-h) = 1$. Among physicists this function is commonly called the Fermi function, because it describes the thermal energy distribution in a system of identical fermions. In this case the parameter $\beta$ has the meaning of an inverse temperature, $T = \beta^{-1}$. We shall also use this nomenclature in connection with neural networks, although this does not imply that the parameter $\beta^{-1}$ denotes a physical temperature at which the network operates. The rule (4.3) should rather be considered as a model for a stochastically operating network, conveniently designed to permit application of the powerful formal methods of statistical physics.1
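As a minimal numerical sketch of this rule, the following Python fragment updates all neurons in parallel, assigning $s_i(t+1) = +1$ with probability $f(h_i)$ and $-1$ otherwise. It assumes the Fermi function quoted above and Hebbian couplings $w_{ij} = \sigma_i \sigma_j / N$ for a single stored pattern; the function names and parameter values are illustrative choices, not taken from the book.

```python
import numpy as np

def fermi(h, beta):
    """Sigmoidal activation f(h) = 1 / (1 + exp(-2*beta*h)).
    Satisfies f(h) + f(-h) = 1; beta plays the role of an inverse temperature."""
    return 1.0 / (1.0 + np.exp(-2.0 * beta * h))

def stochastic_update(w, s, beta, rng):
    """One parallel update step: each neuron takes the value +1 with
    probability f(h_i), and -1 with probability f(-h_i) = 1 - f(h_i)."""
    h = w @ s                       # local fields h_i = sum_j w_ij s_j
    p_plus = fermi(h, beta)         # Prob[s_i(t+1) = +1]
    return np.where(rng.random(len(s)) < p_plus, 1, -1)

# Illustrative setup: a Hebbian network storing one random pattern.
rng = np.random.default_rng(0)
N = 100
pattern = rng.choice([-1, 1], size=N)
w = np.outer(pattern, pattern) / N  # Hebbian couplings w_ij = sigma_i sigma_j / N
np.fill_diagonal(w, 0.0)            # no self-couplings

s = pattern.copy()
for _ in range(10):
    s = stochastic_update(w, s, beta=2.0, rng=rng)
print("overlap with stored pattern:", s @ pattern / N)
```

At low temperature (large $\beta$) the network stays close to the stored pattern; as $\beta \to 0$ the updates degenerate into random coin flips, since $f(h) \to 1/2$ for all $h$.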



Notes

  1. A certain degree of stochastic behavior is in fact observed in biological neural networks. Neurons may spontaneously become active without an external stimulus, or even when the synaptic excitation does not exceed the activation threshold. This phenomenon does not, however, appear to be a simple thermal effect; rather, it is a consequence of the random emission of neurotransmitters at the synapses.


  2. Our discussion here actually applies to the general case of a single stored pattern $\sigma_i$, which can be seen as follows: the transformation $s_i \to \sigma_i s_i$ always maps this case into the ferromagnetic one. Such a transformation is called a gauge transformation, since it corresponds to a reinterpretation of the meaning of "up" and "down" at each lattice site. The transformed Hebbian synaptic connections are $w'_{ij} = \sigma_i \sigma_j w_{ij} = \frac{1}{N}\,\sigma_i^2 \sigma_j^2 = \frac{1}{N}$, i.e. those of a uniform ferromagnet.


  3. A second, much more restrictive approach can be taken to define the memory capacity. One may require error-free recall, which means that the stored patterns have to be attractors of the network, without any errors introduced by interference among them. The number of patterns that can be perfectly memorized is much smaller than $\alpha_c N$, namely $p_{\max} = N/(c \ln N)$, where $c = 2$ if a typical single pattern, and $c = 4$ if all the patterns together, are to be reproduced without error (see the numerical sketch following these notes). This can be derived using tools from probability theory [We85, Mc87].


  4. Note that these results strictly apply only in the thermodynamic limit N → ∞, i.e. for systems with infinitely many neurons.

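As referenced in note 3, a short numerical sketch makes the gap between the two capacity definitions concrete. The bound $p_{\max} = N/(c \ln N)$ is taken from note 3; the comparison value $\alpha_c \approx 0.138$ is the standard Hopfield-model result from the statistical-mechanics literature, quoted here as an assumption since this page gives no number for it.

```python
import math

def perfect_recall_capacity(N, c):
    """Maximum number of patterns for error-free recall, p_max = N / (c ln N);
    c = 2 for a typical single pattern, c = 4 for all patterns at once."""
    return N / (c * math.log(N))

for N in (100, 1000, 10000):
    p2 = perfect_recall_capacity(N, c=2)
    p4 = perfect_recall_capacity(N, c=4)
    print(f"N = {N:6d}: p_max(c=2) ~ {p2:7.1f}, p_max(c=4) ~ {p4:7.1f}, "
          f"alpha_c * N ~ {0.138 * N:7.1f}")
```

For $N = 1000$ this gives roughly 72 perfectly storable patterns ($c = 2$) against about 138 at the $\alpha_c$ bound, illustrating how much more restrictive error-free recall is.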



Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Müller, B., Reinhardt, J., Strickland, M.T. (1995). Stochastic Neurons. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57760-4_4


  • DOI: https://doi.org/10.1007/978-3-642-57760-4_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60207-1

  • Online ISBN: 978-3-642-57760-4

