
Input information maximization for improving self-organizing maps


Abstract

In this paper, we propose a new information-theoretic method called "input information maximization" to improve the performance of self-organizing maps. We consider outputs from input neurons by focusing on the winning neurons: the outputs are based on the differences between the input neurons and the corresponding winning neurons. We then compute the uncertainty of the input neurons by normalizing these outputs. Input information is defined as the decrease in the uncertainty of input neurons from its maximum value to the observed value. When input information increases, fewer input neurons tend to be activated; in the maximum state, only one input neuron is on and all the others are off. We applied the method to two data sets, namely, the Senate and voting attitude data sets. In both cases, the experimental results confirmed that quantization and topographic errors decreased as input information increased. In addition, a clearer class structure could be extracted by increasing input information. Compared with our previous methods for detecting the importance of input neurons, the present method proved better at producing faithful representations, with a much simpler computational procedure.
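For readers who prefer a computational reading of this definition, the following is a minimal NumPy sketch of how input information might be computed from a trained map. It is an illustration only, not the paper's exact formulation: the function name input_information, the Gaussian-style output exp(-diff), and the averaging of the normalized outputs over patterns are assumptions introduced here.

import numpy as np

def input_information(X, W, winners):
    """Illustrative sketch: input information as the decrease of the
    uncertainty of input neurons from its maximum value log(L).

    X       : (S, L) input patterns
    W       : (M, L) SOM weight vectors
    winners : (S,)   index of the winning neuron for each pattern
    """
    S, L = X.shape
    # outputs of input neurons, based on the difference between each
    # input and its corresponding winning neuron (assumed Gaussian form)
    diff = (X - W[winners]) ** 2                      # (S, L)
    out = np.exp(-diff)
    # normalize over input neurons and average over patterns to obtain
    # a firing probability for each input neuron
    p = out / out.sum(axis=1, keepdims=True)
    p_k = p.mean(axis=0)
    observed = -np.sum(p_k * np.log(p_k + 1e-12))     # observed uncertainty
    return np.log(L) - observed                       # decrease from the maximum log(L)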


Notes

  1. http://www1.ics.uci.edu/~mlearn/MLRepository.html.


Author information


Corresponding author

Correspondence to Ryotaro Kamimura.

Appendix: Information enhancement method

For the second data set, the voting data, we compared the results of input information maximization with those of one of our earlier methods for detecting the importance of input neurons. Before the present input information method, we had developed two information-theoretic methods for detecting the importance of input neurons, namely, information enhancement and information loss. The information enhancement method focuses on a specific component in a neural network and then computes mutual information; when the mutual information increases as a result of this focus on, or enhancement of, the component, the component is considered important. In the information loss method, on the other hand, a target component is ignored in the network before mutual information is computed; if the mutual information decreases, the component is considered important. At the present stage of our research, information loss has been incorporated into information enhancement. Here we used a new version of information enhancement, which we present in a simplified way, focusing only on the main part of the method.

The new version of information enhancement consists of repeating the standard information enhancement procedure; we present only its core part here. The output from the jth output neuron for the sth input pattern, when the kth input neuron is enhanced, is computed by

$$ v_{j,k}^s = \exp \Biggl( - \sum _{l=1}^L { { (x_l^s - w_{lj})^2 } \over {2 \sigma_{kl}^2} } \Biggr). $$
(17)

As is the case with the input information, the spread parameter σ is defined by using the parameter β

$$\sigma_{kl} = \begin{cases} 1/\beta, & k=l; \\ \beta, & \mbox{otherwise.} \end{cases} $$

However, for a fair comparison, we adjusted the spread parameter more carefully, because the parameter β was too large to examine the performance in detail. We aimed to obtain the best result in terms of the topographic error. For this purpose, the spread parameter σ is defined by another parameter γ (>0)

$$\sigma_{kl} = \begin{cases} 1/\gamma, & k=l; \\ \gamma, & \mbox{otherwise.} \end{cases} $$

The parameter γ was computed from the parameter β by

$$ \gamma=\alpha(\beta-1) + 1. $$
(18)

We changed the value of α from 0.005 to 1 in steps of 0.005 and took the result with the lowest topographic error.
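As a concrete illustration of Eqs. (17) and (18), the sketch below computes the enhanced outputs for a single enhanced input neuron k and a given α. The function name, the array shapes, and the assumption that the distances are summed over all input dimensions with dimension-specific spreads σ_{kl} are ours, not taken verbatim from the paper; the topographic-error evaluation used to select α is not shown.

import numpy as np

def enhanced_outputs(X, W, k, beta, alpha):
    """Enhanced outputs v_{j,k}^s of Eq. (17) when input neuron k is enhanced.

    X : (S, L) input patterns, W : (M, L) SOM weight vectors.
    """
    S, L = X.shape
    gamma = alpha * (beta - 1.0) + 1.0                 # Eq. (18)
    sigma = np.full(L, gamma)
    sigma[k] = 1.0 / gamma                             # smaller spread on the enhanced neuron
    # spread-weighted squared distances, summed over the input dimensions l
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2) / (2.0 * sigma ** 2)   # (S, M, L)
    return np.exp(-d2.sum(axis=2))                     # shape (S, M)

# alpha would be swept from 0.005 to 1 in steps of 0.005, keeping the value
# that yields the lowest topographic error (evaluation not shown here).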

By normalizing this output, we have the firing probability

$$ p(j \mid s ; k) = { {v_{j,k}^s} \over{\sum_{m=1}^M v_{m,k}^s} }. $$
(19)

By using this probability, we have mutual information for the kth input neuron

$$ MI(k)=\sum_{s=1}^S \sum _{j=1}^M p(s) p(j \mid s ; k) \log { {p(j \mid s ; k)} \over{p(j ; k)} }, $$
(20)

where

$$ p(j ; k) = {1 \over S} \sum_{s=1}^S p(j \mid s ;k). $$
(21)

By using this mutual information, we have the firing probability for input neurons

$$ p(k) = { {MI(k)} \over{\sum_{l=1}^L MI(l)} }. $$
(22)

We then substituted this firing rate p(k) into Eq. (15) to obtain the re-estimation rule.
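Putting Eqs. (19)–(22) together, the following sketch computes the mutual information MI(k) and the firing probability p(k) for every input neuron. It reuses the hypothetical enhanced_outputs function from the previous sketch and assumes equal pattern probabilities p(s) = 1/S, consistent with Eq. (21); the re-estimation rule of Eq. (15) is not reproduced in this excerpt.

import numpy as np

def input_neuron_importance(X, W, beta, alpha):
    """Firing probabilities p(k) of Eq. (22) from per-input-neuron mutual information."""
    S, L = X.shape
    mi = np.zeros(L)
    for k in range(L):
        v = enhanced_outputs(X, W, k, beta, alpha)     # Eq. (17), previous sketch
        p_js = v / v.sum(axis=1, keepdims=True)        # p(j | s; k), Eq. (19)
        p_j = p_js.mean(axis=0)                        # p(j; k),     Eq. (21)
        # Eq. (20) with p(s) = 1/S
        mi[k] = np.sum(p_js * np.log((p_js + 1e-12) / (p_j + 1e-12))) / S
    return mi / mi.sum()                               # p(k), Eq. (22)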


Cite this article

Kamimura, R. Input information maximization for improving self-organizing maps. Appl Intell 41, 421–438 (2014). https://doi.org/10.1007/s10489-014-0525-1
