
Representation of Uncertainty in Self-Organizing Neural Networks

  • Jonathan A. Marshall

Abstract

The ability to represent multiple hypotheses about the classification of an ambiguous input pattern is an extremely useful network property. Yet typically, self-organizing neural networks are designed to make only winner-take-all pattern classification decisions. Such winner-take-all decisions often turn out to be wrong when based on insufficient or ambiguous input data. In a self-organizing neural network, incorrect decisions can impair or prevent the development of efficient codes for classifying the input environment. A new method is described that allows a neural network to maintain a representation of its own uncertainty. A key benefit of the new technique is that incorrect learning is avoided and efficient representation structures can thereby self-organize. A new anti-Hebbian inhibitory learning rule is combined with a variant of a Hebbian excitatory learning rule. Inhibitory learning permits the superposition of multiple simultaneous partial neural activations, under strictly regulated circumstances. An ambiguous input pattern then partially activates a set of neurons, each of which represents a hypothesis about the pattern’s likely classification. If disambiguating pattern information is subsequently added, only the neuron that codes the most likely hypothesis becomes fully active; the activations of other neurons are suppressed. Efficient pattern codes can then self-organize in neural networks exposed to input environments which, like our own perceptual world, contain a great deal of ambiguity.
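The mechanism the abstract describes can be illustrated with a minimal numerical sketch. The following is not the paper's model: the weight values, the inhibitory gain, and the leaky settling dynamics are illustrative assumptions. Here the lateral inhibitory weights are simply set proportional to the overlap of the neurons' excitatory weight vectors, standing in for what an anti-Hebbian rule would learn from co-activation statistics. The point is the behavior: an ambiguous input leaves two hypothesis neurons partially and equally active, while a disambiguated input drives one neuron to full activation and suppresses the other.

```python
import numpy as np

# Two neurons code overlapping three-feature patterns (values are illustrative).
W = np.array([[1.0, 1.0, 0.0],    # neuron A codes features 0 and 1
              [0.0, 1.0, 1.0]])   # neuron B codes features 1 and 2
W = W / np.linalg.norm(W, axis=1, keepdims=True)

# Lateral inhibition proportional to weight-vector overlap: a fixed stand-in
# for weights that an anti-Hebbian rule would acquire through experience.
L = W @ W.T
np.fill_diagonal(L, 0.0)

def settle(pattern, steps=60, rate=0.2, gain=2.0):
    """Leaky competitive settling: excitation minus learned lateral inhibition,
    rectified, integrated toward a steady state."""
    x = np.zeros(2)
    for _ in range(steps):
        drive = np.maximum(0.0, W @ pattern - gain * (L @ x))
        x = x + rate * (drive - x)
    return x

ambiguous = np.array([0.0, 1.0, 0.0])   # only the shared feature is present
full_A    = np.array([1.0, 1.0, 0.0])   # disambiguating evidence for A added

print(settle(ambiguous))  # both neurons partially, equally active
print(settle(full_A))     # neuron A fully active, neuron B suppressed
```

With the ambiguous input, the symmetric fixed point leaves both neurons at half their excitatory drive, a superposed representation of both hypotheses; adding the disambiguating feature breaks the symmetry, and the competition resolves to a winner-take-all state.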

Keywords

Input Pattern · Learning Rule · Motion Parallax · Input Environment · Excitatory Connection



Copyright information

© Springer Science+Business Media Dordrecht 1990

Authors and Affiliations

  • Jonathan A. Marshall
    1. Center for Research in Learning, Perception, and Cognition, and Minnesota Supercomputer Institute, University of Minnesota, Minneapolis, USA
