Learning Process in an Asymmetric Threshold Network

  • Yann Le Cun
Part of the NATO ASI Series (volume 20)

Abstract

Threshold functions and related operators are widely used as basic elements of adaptive and associative networks [Nakano 72, Amari 72, Hopfield 82]. Numerous learning rules exist for finding a set of weights that achieves a particular correspondence between input-output pairs. But early work in the field showed that the number of threshold functions (or linearly separable functions) of N binary variables is small compared to the number of all possible Boolean mappings of N variables, especially when N is large. This is one of the main limitations of most neural network models in which the state is fully specified by the environment during learning: they can only learn linearly separable functions of their inputs. Moreover, a learning procedure that requires the outside world to specify the state of every neuron during the learning session can hardly be considered a general learning rule, because in real-world conditions only partial information about the “ideal” network state for each task is available from the environment. It is possible to use a set of so-called “hidden units” [Hinton, Sejnowski, Ackley 84], which have no direct interaction with the environment and can compute intermediate predicates. Unfortunately, the global response depends on the output of a particular hidden unit in a highly non-linear way; moreover, the nature of this dependence is influenced by the states of the other cells.
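The separability limitation described above can be made concrete with a small sketch (not from the paper; the weight grid, the AND-computing hidden unit, and all names below are illustrative choices). It enumerates the threshold functions of two binary inputs, showing that only 14 of the 16 Boolean functions are linearly separable, that XOR is one of the two exceptions, and that a single hidden unit computing an intermediate predicate restores XOR.

```python
from itertools import product

INPUTS = list(product([0, 1], repeat=2))  # the four input patterns

def threshold_unit(w1, w2, bias):
    """Truth table of one linear threshold unit on two binary inputs."""
    return tuple(int(w1 * x1 + w2 * x2 + bias > 0) for x1, x2 in INPUTS)

# Enumerate integer weights on a small grid; every linearly separable
# function of two variables is realizable with weights in {-2, ..., 2}.
separable = set()
for w1, w2, bias in product(range(-2, 3), repeat=3):
    separable.add(threshold_unit(w1, w2, bias))

print(f"{len(separable)} of {2 ** 4} Boolean functions of 2 inputs are separable")
# -> 14 of 16 (XOR and XNOR are the two that are not)

xor = tuple(x1 ^ x2 for x1, x2 in INPUTS)  # (0, 1, 1, 0)
print("XOR separable by one unit?", xor in separable)  # False

# One hidden unit computing AND, fed alongside the raw inputs to the
# output unit, makes XOR representable: XOR = OR(x1, x2) AND NOT AND(x1, x2).
def two_layer(x1, x2):
    hidden = int(x1 + x2 - 1.5 > 0)             # hidden unit: AND
    return int(x1 + x2 - 2 * hidden - 0.5 > 0)  # output: OR minus AND

print("Two-layer net:", tuple(two_layer(x1, x2) for x1, x2 in INPUTS))  # == xor
```

As the abstract notes, the difficulty is not representing such functions but learning them: the output's dependence on each hidden unit is non-linear and entangled with the states of the other cells, so a learning rule must somehow assign credit to units the environment never specifies.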

Keywords

Learning Rule, Associative Memory, Hidden Unit, Threshold Function, Separable Function


References

  1. [Amari S.I.]
    “Learning patterns and pattern sequences by self-organizing nets of threshold elements”. IEEE Trans. Computers, Vol. C-21, No. 11, Nov. 1972.
  2. [Duda R.O., Hart P.E.]
    “Pattern classification and scene analysis”. Wiley, 1973.
  3. [Hopfield J.J.]
    “Neural networks and physical systems with emergent collective computational abilities”. Proc. Nat. Acad. Sci. USA, Nov. 1982.
  4. [Hinton G., Sejnowski T., Ackley D.]
    “Boltzmann Machines: constraint satisfaction networks that learn”. CMU Tech. Rep. CS-84-119, May 1984.
  5. [Hinton G.]
    Private communication, 1985.
  6. [Kohonen T.]
    “Representation of associated data by matrix operators”. IEEE Trans. Computers, July 1973.
  7. [Kohonen T.]
    “An adaptive associative memory principle”. IEEE Trans. Computers, Apr. 1974.
  8. [Kohonen T.]
    “Self-organization and associative memory”. Springer, 1984.
  9. [Le Cun Y.]
    “A learning scheme for asymmetric threshold networks”. Proc. of Cognitiva 85, Paris, June 1985 (in French).
  10. [Minsky M., Papert S.]
    “Perceptrons”. M.I.T. Press, 1969.
  11. [Nakano K.]
    “Associatron, a model of associative memory”. IEEE Trans. Syst. Man Cyb., Vol. SMC-2, No. 3, July 1972.
  12. [Widrow B., Hoff M.E.]
    “Adaptive switching circuits”. 1960 IRE WESCON Conv. Record, Part 4, pp. 96-104, Aug. 1960.

Copyright information

© Springer-Verlag Berlin Heidelberg 1986

Authors and Affiliations

  • Yann Le Cun
  1. Ecole Supérieure d’Ingénieurs en Electrotechnique et Electronique, Paris, France
  2. Laboratoire de dynamique des réseaux, Paris, France