
ICANN ’94, pp. 314–317

Adaptive modulation of Receptive Fields in self-organizing networks

  • F. Firenze
  • P. Morasso
Conference paper

Abstract

Most self-organizing neural networks involve units which, similarly to biological neurons, have Receptive Fields (RFs) of finite size, defined as the regions in the feature space where they achieve non-vanishing activation. In some models, the widths of the RFs are slowly restricted during learning in order to achieve global convergence through weight “freezing” (Martinetz & Schulten, 1991); in other cases, they are adjusted, either in supervised mode (Reilly et al., 1982) or heuristically (Moody & Darken, 1989), with the aim of favouring the development of “locally tuned” units. We propose a new mechanism of adaptive modulation of the RFs, driven by the local density of the input data distribution, which can be coupled with many self-organizing models, making them more robust and flexible. In particular, we shall focus our attention on two interesting aspects: “self-stabilization” of learning parameters during “on-line” learning and function approximation with “adaptive resolution”.
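The abstract does not state the actual update rule, so the following is only a minimal illustrative sketch of the general idea: Gaussian RF units in a competitive (self-organizing) network whose widths are modulated by a crude local-density proxy, namely the running mean distance between each winning unit and the inputs it captures. The unit count, learning rates, and the density proxy are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Sketch only: Gaussian receptive fields (RFs) whose widths follow the local
# spacing of the data, so that dense regions yield narrow RFs and sparse
# regions yield wide ones. All constants below are illustrative assumptions.

rng = np.random.default_rng(0)

n_units, dim = 10, 2
centers = rng.uniform(0.0, 1.0, size=(n_units, dim))  # unit prototypes
sigmas = np.full(n_units, 0.3)                         # initial RF widths

eta_c, eta_s = 0.05, 0.05   # learning rates for centres and widths (assumed)

def activation(x):
    """Gaussian RF activations; they vanish far from each unit's centre."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigmas ** 2))

def train_step(x):
    a = activation(x)
    w = int(np.argmax(a))                      # winning (best-matching) unit
    dist = np.linalg.norm(x - centers[w])
    centers[w] += eta_c * (x - centers[w])     # standard competitive update
    sigmas[w] += eta_s * (dist - sigmas[w])    # RF width tracks local spacing

# toy run on a non-uniform 2-D distribution
data = np.vstack([rng.normal(0.2, 0.02, (500, dim)),   # dense cluster
                  rng.uniform(0.0, 1.0, (500, dim))])  # sparse background
for x in rng.permutation(data):
    train_step(x)

print("RF widths after training:", np.round(sigmas, 3))
```

After training on such data, units that settle in the dense cluster end up with much smaller widths than units covering the sparse background, which is the kind of density-driven “adaptive resolution” the abstract refers to.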

Keywords

Receptive Field · Adaptive Modulation · Radial Basis Function · Biological Cybernetics · Supervised Mode


References

  1. Bridle, J. (1989). Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Soulie, F. F., & Herault, J. (Eds.), Neurocomputing (NATO ASI Series F-68, pp. 227–236). Berlin: Springer-Verlag.
  2. Firenze, F., & Morasso, P. (1993). The capture effect model: a new approach to self-organized clustering. In Neuro-Nimes 93, Sixth Int. Conf. on Neural Networks and their Industrial and Cognitive Applications (pp. 65–74). Nimes.
  3. Grossberg, S. (1976). Adaptive pattern classification and universal recoding, I. Biological Cybernetics, 23.
  4. Martinetz, T., & Schulten, K. (1991). A “Neural-Gas” network learns topologies. In Kohonen, T., Makisara, K., Simula, O., & Kangas, J. (Eds.), Artificial Neural Networks (pp. 397–402). Amsterdam: North-Holland.
  5. Moody, J., & Darken, C. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1, 281–294.
  6. Poggio, T., & Girosi, F. (1990). Networks for approximation and learning. Proceedings of the IEEE, 78, 1481–1495.
  7. Reggia, J., D’Autrechy, C., Sutton, G., & Weinrich, M. (1992). A competitive distribution theory of neocortical dynamics. Neural Computation, 4, 287–317.
  8. Reilly, D., Cooper, L., & Elbaum, C. (1982). A neural model for category learning. Biological Cybernetics, 45, 35–41.

Copyright information

© Springer-Verlag London Limited 1994

Authors and Affiliations

  • F. Firenze (1)
  • P. Morasso (1)
  1. Department of Informatics, Systems and Communications, University of Genoa, Italy
