Abstract
Threshold functions and related operators are widely used as basic elements of adaptive and associative networks [Nakano 72, Amari 72, Hopfield 82]. Numerous learning rules exist for finding a set of weights that achieves a particular input-output correspondence. But early work in the field showed that the number of threshold functions (or linearly separable functions) of N binary variables is small compared to the number of all possible Boolean mappings of N variables, especially when N is large. This is one of the main limitations of most neural network models in which the state is fully specified by the environment during learning: they can only learn linearly separable functions of their inputs. Moreover, a learning procedure that requires the outside world to specify the state of every neuron during the learning session can hardly be considered a general learning rule, because in real-world conditions only partial information about the “ideal” network state for each task is available from the environment. It is possible to use a set of so-called “hidden units” [Hinton, Sejnowski, Ackley 84], which do not interact directly with the environment and can compute intermediate predicates. Unfortunately, the global response depends on the output of a particular hidden unit in a highly non-linear way, and the nature of this dependence is itself influenced by the states of the other cells.
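To make the linear-separability limitation concrete, here is a minimal sketch (not from the paper; the function names and the classic perceptron-style update are illustrative) of a single threshold unit trained on Boolean data. The mistake-driven rule finds separating weights for AND, which is linearly separable, but cannot for XOR, which is the standard example motivating hidden units that compute intermediate predicates.

```python
import numpy as np

def threshold_unit(w, b, x):
    """Linear threshold function: fires iff w.x + b > 0."""
    return int(np.dot(w, x) + b > 0)

def train_perceptron(samples, epochs=100, lr=1.0):
    """Perceptron-style rule: converges iff the target mapping
    is linearly separable in the raw inputs."""
    w, b = np.zeros(len(samples[0][0])), 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            y = threshold_unit(w, b, np.array(x))
            if y != t:                      # update only on mistakes
                w += lr * (t - y) * np.array(x)
                b += lr * (t - y)
                errors += 1
        if errors == 0:
            return w, b, True               # separating weights found
    return w, b, False                      # no convergence: not separable

# AND is linearly separable: the rule settles on a solution.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_data))           # (..., True)

# XOR is not linearly separable: no single threshold unit suffices,
# which is why hidden units computing intermediate predicates are needed.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(train_perceptron(xor_data))           # (..., False)
```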
References
Amari, S.: “Learning patterns and pattern sequences by self-organizing nets of threshold elements”. IEEE Trans. Computers, Vol. C-21, No. 11, Nov. 1972.
Duda, R.O., Hart, P.E.: “Pattern classification and scene analysis”. Wiley, 1973.
Hopfield, J.J.: “Neural networks and physical systems with emergent collective computational abilities”. Proc. Nat. Acad. Sci. USA, Nov. 1982.
Hinton, G.E., Sejnowski, T.J., Ackley, D.H.: “Boltzmann Machines: constraint satisfaction networks that learn”. CMU Tech. Rep. CS-84-119, May 1984.
Private communication, 1985.
Kohonen, T.: “Representation of associated data by matrix operators”. IEEE Trans. Computers, July 1973.
Kohonen, T.: “An adaptive associative memory principle”. IEEE Trans. Computers, Apr. 1974.
Kohonen, T.: “Self-organization and associative memory”. Springer, 1984.
Le Cun, Y.: “A learning scheme for asymmetric threshold network”. Proc. of Cognitiva 85, Paris, June 1985 (in French).
Minsky, M., Papert, S.: “Perceptrons”. M.I.T. Press, 1969.
Nakano, K.: “Associatron, a model of associative memory”. IEEE Trans. Syst. Man Cybern., Vol. SMC-2, No. 3, July 1972.
Widrow, B., Hoff, M.E.: “Adaptive switching circuits”. 1960 IRE WESCON Conv. Record, Part 4, pp. 96–104, Aug. 1960.
Copyright information
© 1986 Springer-Verlag Berlin Heidelberg
Cite this paper
Le Cun, Y. (1986). Learning Process in an Asymmetric Threshold Network. In: Bienenstock, E., Soulié, F.F., Weisbuch, G. (eds) Disordered Systems and Biological Organization. NATO ASI Series, vol 20. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-82657-3_24
DOI: https://doi.org/10.1007/978-3-642-82657-3_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-82659-7
Online ISBN: 978-3-642-82657-3