Feature Discovery by Enhancement and Relaxation of Competitive Units

  • Ryotaro Kamimura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5326)

Abstract

In this paper, we introduce the new concepts of enhancement and relaxation to discover features in input patterns in competitive learning. We have previously introduced mutual information to realize competitive processes. However, because mutual information is an average over all input patterns and competitive units, it cannot be used to examine feature extraction in detail. To examine in more detail how a network is organized, we enhance and relax competitive units through individual elements of the network. With this procedure, we can estimate in more detail how the elements are organized. We applied the method to a simple artificial data set and the well-known Iris problem to show how well the method extracts the main features in input patterns. Experimental results showed that the method could extract the main features in input patterns more explicitly than the conventional SOM.
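
The paper itself defines the enhancement and relaxation procedure; as a rough illustration of the underlying idea only, the sketch below computes mutual information between input patterns and competitive units under inverse-Euclidean activations (one of the author's earlier formulations of information-theoretic competitive learning) and then scales a chosen unit's distances to mimic enhancing or relaxing that unit. The function names, the scaling factor alpha, and the toy data are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def unit_activations(X, W, eps=1e-12):
    """Normalized competitive-unit activations p(j|s) from inverse
    Euclidean distances between patterns X (S x L) and weights W (M x L).
    (An assumed formulation; the paper may define activations differently.)"""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)  # S x M distances
    a = 1.0 / (d + eps)
    return a / a.sum(axis=1, keepdims=True)

def mutual_information(P):
    """I = (1/S) * sum_s sum_j p(j|s) log(p(j|s)/p(j)), with uniform p(s)."""
    S = P.shape[0]
    pj = P.mean(axis=0)  # marginal firing probability p(j)
    return np.sum(P * np.log(P / pj)) / S

def enhanced_information(X, W, j, alpha, eps=1e-12):
    """Scale the distances of unit j by alpha before normalizing:
    alpha < 1 enhances the unit, alpha > 1 relaxes it (hypothetical scheme)."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    d[:, j] *= alpha
    a = 1.0 / (d + eps)
    P = a / a.sum(axis=1, keepdims=True)
    return mutual_information(P)

# Toy example: how much does enhancing each unit change mutual information?
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))   # 30 input patterns, 4 input units
W = rng.normal(size=(5, 4))    # 5 competitive units
base = mutual_information(unit_activations(X, W))
for j in range(5):
    print(j, enhanced_information(X, W, j, alpha=0.5) - base)
```

Units whose enhancement changes the mutual information most strongly are, under this assumed scheme, the ones carrying the main features of the input patterns.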

Keywords

Mutual Information, Input Pattern, Feature Discovery, Input Unit, Blind Deconvolution


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ryotaro Kamimura
  1. IT Education Center, Tokai University, Kanagawa, Japan
