
Competitive learning by entropy minimization

  • Conference paper
Algorithmic Learning Theory (ALT 1992)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 743)


Abstract

In this paper, we present an entropy minimization method for competitive learning with a winner-take-all activation rule. In competitive learning, only one unit is turned on as the winner, while all the other units are turned off as losers. Learning can therefore be regarded mainly as a process of entropy minimization: if the entropy of the competitive layer is minimized, only one unit is on while all the other units are off; if the entropy is maximized, all the units are equally activated.
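As a rough illustration of this idea (not taken from the paper; the function name and the use of NumPy are our own), the sketch below treats the normalized activations of a competitive layer as a probability distribution and computes their entropy. A winner-take-all pattern, with one unit on and the rest off, gives an entropy near zero, while equal activation of all M units gives the maximum value log M.

```python
import numpy as np

def competitive_entropy(activations, eps=1e-12):
    """Entropy of a competitive layer, treating its normalized
    activations as a probability distribution over units."""
    a = np.asarray(activations, dtype=float)
    p = a / (a.sum() + eps)              # normalize activations to probabilities
    return -np.sum(p * np.log(p + eps))  # Shannon entropy

# Winner-take-all: one unit on, all others off -> entropy close to 0
print(competitive_entropy([1.0, 0.0, 0.0, 0.0]))

# All units equally activated -> maximum entropy, log(4) ~ 1.386
print(competitive_entropy([0.25, 0.25, 0.25, 0.25]))
```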

We applied this entropy minimization method to two problems: an autoencoder used as a feature detector, and the organization of internal representations for estimating the well-formedness of English sentences. For the autoencoder, we observed that networks trained with the entropy method classified the four input patterns clearly into two categories. For the sentence well-formedness problem, a feature of the input patterns was explicitly visible in the competitive hidden layer; in other words, an explicit internal representation was obtained. In both cases, multiple inhibitory connections were produced. The entropy minimization method is thus equivalent to competitive learning approaches based on mutual inhibition, while being simpler and easier to compute. In the formulation and experiments, supervised learning (the autoencoder) was used; however, the entropy method can be extended to fully unsupervised learning, where it may replace ordinary competitive learning with a winner-take-all activation rule.
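As a minimal sketch of how such a supervised objective could be assembled (our own construction; the patterns, architecture, penalty weight lam, and update rule here are assumptions, not the paper's), one can add the mean entropy of the competitive hidden layer to the autoencoder's reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Four illustrative input patterns (placeholders, not the paper's data).
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)

n_in, n_hidden = X.shape[1], 2          # two hidden units -> two categories
W_enc = rng.normal(scale=0.5, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.5, size=(n_hidden, n_in))
lam, eps = 0.1, 1e-12                   # entropy-penalty weight (assumed)

def objective(X):
    H = sigmoid(X @ W_enc)                              # competitive hidden layer
    X_hat = sigmoid(H @ W_dec)                          # reconstruction of inputs
    recon = np.mean((X - X_hat) ** 2)                   # autoencoder error
    P = H / (H.sum(axis=1, keepdims=True) + eps)        # per-pattern distribution
    entropy = -np.mean(np.sum(P * np.log(P + eps), axis=1))
    return recon + lam * entropy                        # entropy-penalized loss

print(objective(X))
```

Minimizing this objective, for example by gradient descent on W_enc and W_dec, pushes each hidden activation pattern toward a single dominant unit, which is the winner-take-all behaviour the entropy term is meant to enforce.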




Editor information

Shuji Doshita, Koichi Furukawa, Klaus P. Jantke, Toyaki Nishida


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kamimura, R. (1993). Competitive learning by entropy minimization. In: Doshita, S., Furukawa, K., Jantke, K.P., Nishida, T. (eds) Algorithmic Learning Theory. ALT 1992. Lecture Notes in Computer Science, vol 743. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57369-0_32


  • DOI: https://doi.org/10.1007/3-540-57369-0_32


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57369-2

  • Online ISBN: 978-3-540-48093-8

  • eBook Packages: Springer Book Archive
