Representation of Uncertainty in Self-Organizing Neural Networks
The ability to represent multiple hypotheses about the classification of an ambiguous input pattern is an extremely useful network property. Yet self-organizing neural networks are typically designed to make only winner-take-all pattern classification decisions. Such winner-take-all decisions often turn out to be wrong when they are based on insufficient or ambiguous input data, and in a self-organizing neural network, incorrect decisions can impair or prevent the development of efficient codes for classifying the input environment. A new method is described that allows a neural network to maintain a representation of its own uncertainty. A key benefit of the new technique is that incorrect learning is avoided, so efficient representation structures can self-organize. A new anti-Hebbian inhibitory learning rule is combined with a variant of a Hebbian excitatory learning rule. Inhibitory learning permits the superposition of multiple simultaneous partial neural activations, under strictly regulated circumstances. An ambiguous input pattern then partially activates a set of neurons, each of which represents a hypothesis about the pattern’s likely classification. If disambiguating pattern information is subsequently added, only the neuron that codes the most likely hypothesis becomes fully active; the activations of the other neurons are suppressed. Efficient pattern codes can then self-organize in neural networks exposed to input environments which, like our own perceptual world, contain a great deal of ambiguity.
Keywords: Input Pattern · Learning Rule · Motion Parallax · Input Environment · Excitatory Connection
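The combination of Hebbian excitatory and anti-Hebbian inhibitory learning described above can be sketched numerically. The code below is a minimal illustration in that spirit, not the paper’s actual equations: the update rules, learning rates, normalization, and settling dynamics are all assumptions chosen for simplicity. Feedforward excitatory weights grow Hebbian-style with input/output co-activity, while lateral inhibitory weights grow anti-Hebbian-style with output co-activity, so that neurons which fire together come to inhibit one another and can share activation on ambiguous inputs.

```python
import numpy as np

# Illustrative sketch only (hypothetical rules, not the paper's equations).
# W: feedforward excitatory weights, trained with a Hebbian rule.
# L: lateral inhibitory weights, trained with an anti-Hebbian rule
#    (co-active output neurons strengthen their mutual inhibition).

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
W = rng.uniform(0.1, 0.3, size=(n_out, n_in))  # excitatory weights
L = np.zeros((n_out, n_out))                   # inhibitory weights

def settle(x, steps=30):
    """Iterate activations toward equilibrium: excitation minus lateral
    inhibition, with activations bounded in [0, 1]."""
    y = np.zeros(n_out)
    for _ in range(steps):
        y = np.clip(W @ x - L @ y, 0.0, 1.0)
    return y

def train_step(x, eta=0.2, mu=0.1):
    """One combined Hebbian / anti-Hebbian update on input pattern x."""
    global W, L
    y = settle(x)
    # Hebbian excitatory growth, with each row normalized so total
    # excitatory input per neuron stays constant.
    W += eta * np.outer(y, x)
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-9)
    # Anti-Hebbian inhibitory growth: co-activity increases inhibition.
    L += mu * np.outer(y, y)
    np.fill_diagonal(L, 0.0)  # no self-inhibition
    return y
```

With this setup, an input that overlaps two learned patterns drives partial activation in both corresponding neurons, since each receives some excitation while mutual inhibition keeps either from winning outright; adding disambiguating input components tips the balance toward one neuron.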