Enhancing and Relaxing Competitive Units for Feature Discovery

Abstract

In this paper, we propose a new information-theoretic method called enhancement and relaxation to discover the main features in input patterns. We have previously shown that competitive learning is a process of mutual information maximization between input patterns and connection weights. However, because mutual information is an average over all input patterns and competitive units, it is not adequate for discovering detailed information on the roles of individual elements in a network. To extract this information, we enhance or relax competitive units through those elements, which changes the mutual information. The change in information is called enhanced information. Because enhanced information carries detailed information on the elements in a network, it can be used to discover features in input patterns. We applied the method to the symmetry data, the well-known Iris problem, and the classification of OECD countries. In all cases, we succeeded in extracting the main features of the input patterns.
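The abstract describes the computation only at a high level. Below is a minimal sketch of how enhanced information could be computed, assuming Gaussian competitive activations, equiprobable input patterns, and enhancement modelled as scaling one input element's contribution to each unit's distance; the function names, the scaling parameter alpha, and the activation form are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np


def unit_activations(X, W, sigma=1.0, scale=None):
    """Normalized competitive-unit activations p(j|s) for patterns X (S x L)
    and connection weights W (M x L). `scale` is an optional per-element
    vector: scale[k] > 1 enhances, scale[k] < 1 relaxes, the units
    through input element k (an assumed model of enhancement/relaxation)."""
    if scale is None:
        scale = np.ones(X.shape[1])
    # Per-element-scaled squared distances between every pattern and unit.
    d2 = (((X[:, None, :] - W[None, :, :]) ** 2) * scale).sum(axis=2)
    act = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian activations
    return act / act.sum(axis=1, keepdims=True)   # normalize to p(j|s)


def mutual_information(P):
    """I(units; patterns) where P[s, j] = p(j|s), patterns equiprobable."""
    pj = P.mean(axis=0)                           # marginal p(j)
    return (P * np.log(P / pj)).sum() / P.shape[0]


def enhanced_information(X, W, k, alpha=5.0, sigma=1.0):
    """Change in mutual information when element k is enhanced by alpha."""
    baseline = mutual_information(unit_activations(X, W, sigma))
    scale = np.ones(X.shape[1])
    scale[k] = alpha                              # enhance element k only
    return mutual_information(unit_activations(X, W, sigma, scale)) - baseline


# Toy usage: rank the input elements of random data by enhanced information.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                      # 30 patterns, 4 elements
W = X[rng.choice(30, size=3, replace=False)]      # 3 units seeded from data
scores = [enhanced_information(X, W, k) for k in range(4)]
```

Under these assumptions, elements with large enhanced information are candidates for the main features, since enhancing them changes the mutual information between patterns and units the most.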

Author information

Corresponding author

Correspondence to Ryotaro Kamimura.

About this article

Cite this article

Kamimura, R. Enhancing and Relaxing Competitive Units for Feature Discovery. Neural Process Lett 30, 37–57 (2009). https://doi.org/10.1007/s11063-009-9109-1
