Modification of the Growing Neural Gas Algorithm for Cluster Analysis

  • Fernando Canales
  • Max Chacón
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4756)

Abstract

In cluster analysis, a problem of great interest is to have methods that can represent the topology of the input space without requiring additional prior information about it. This has given rise to growing competitive neural methods, which determine the structure of the network autonomously during training. This work proposes a variation of the Growing Neural Gas (GNG) algorithm, called GNG with post-pruning (GNG-PP), together with a clustering method based on the search for topological neighborhoods in the graph generated by the former. These were combined in a three-phase process to cluster the S&P100 set, which belongs to the macroeconomic domain and has a high-dimensional feature space. The results are compared with those obtained by SOM, Growing Cell Structures (GCS), and a non-neural method. Evaluation was carried out with the kappa coefficient, using the GICS industrial classification as the evaluation set. The results show that the proposed methods generate the best clusterings, reaching a kappa coefficient of 0.5643 against the GICS classification.
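
The abstract does not reproduce the GNG-PP procedure itself. As background for the method it extends, the sketch below implements the standard GNG growth loop described by Fritzke (reference 4 below): winner adaptation, competitive Hebbian edge creation and aging, periodic unit insertion near the largest accumulated error, and error decay. All hyperparameter values here are illustrative assumptions, not the settings used in the paper, and the post-pruning stage of GNG-PP is not shown.

```python
import numpy as np

def gng(X, n_iter=20000, eps_b=0.05, eps_n=0.006, age_max=50,
        lam=100, alpha=0.5, beta=0.995, max_units=100, seed=0):
    """Sketch of the standard GNG growth loop (Fritzke, ref. 4).
    Hyperparameter values are illustrative assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    units = {0: X[rng.integers(len(X))].copy(),
             1: X[rng.integers(len(X))].copy()}
    error = {0: 0.0, 1: 0.0}
    edges = {}                                   # frozenset({i, j}) -> age
    next_id = 2

    def neighbours(i):
        return [next(iter(e - {i})) for e in edges if i in e]

    for t in range(1, n_iter + 1):
        x = X[rng.integers(len(X))]
        # winner s1 and runner-up s2 (squared Euclidean distance)
        d2 = {i: float(np.sum((w - x) ** 2)) for i, w in units.items()}
        s1, s2 = sorted(d2, key=d2.get)[:2]
        error[s1] += d2[s1]
        # adapt the winner and its topological neighbours; age the winner's edges
        units[s1] += eps_b * (x - units[s1])
        for n in neighbours(s1):
            units[n] += eps_n * (x - units[n])
            edges[frozenset((s1, n))] += 1
        # refresh (or create) the edge between winner and runner-up
        edges[frozenset((s1, s2))] = 0
        # remove edges that are too old, then units left without edges
        for e in [e for e, age in edges.items() if age > age_max]:
            del edges[e]
        for i in [i for i in list(units) if not neighbours(i)]:
            del units[i], error[i]
        # every lam steps, insert a new unit between the unit q with the
        # largest accumulated error and its worst neighbour f
        if t % lam == 0 and len(units) < max_units:
            q = max(error, key=error.get)
            f = max(neighbours(q), key=error.get)
            r, next_id = next_id, next_id + 1
            units[r] = 0.5 * (units[q] + units[f])
            del edges[frozenset((q, f))]
            edges[frozenset((q, r))] = edges[frozenset((f, r))] = 0
            error[q] *= alpha
            error[f] *= alpha
            error[r] = error[q]
        # decay all accumulated errors
        for i in error:
            error[i] *= beta
    return units, edges
```

The evaluation criterion named in the abstract is the kappa coefficient (interpreted on the Landis and Koch scale, reference 10 below). A minimal sketch of that comparison follows; it assumes clusters are first matched to GICS sectors by majority vote, which is an illustrative assumption since the abstract does not say how clusters were matched to sectors.

```python
from collections import Counter

def majority_map(clusters, gics):
    """Map each cluster id to the most frequent GICS sector among its members.
    This majority-vote matching is an assumption for illustration only."""
    votes = {}
    for c, g in zip(clusters, gics):
        votes.setdefault(c, Counter())[g] += 1
    return {c: cnt.most_common(1)[0][0] for c, cnt in votes.items()}

def kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two labelings."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# usage (hypothetical labelings):
# mapping = majority_map(clusters, gics)
# predicted = [mapping[c] for c in clusters]
# print(kappa(predicted, gics))   # the paper reports 0.5643 for its proposed method
```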

Keywords

clustering, vector quantization, GNG, S&P100

References

  1. Martinetz, T., Schulten, K.: A "neural-gas" network learns topologies. In: Artificial Neural Networks, pp. 397–402. Elsevier Science Publishers, Amsterdam, The Netherlands (1991)
  2. Vesanto, J., Alhoniemi, E.: Clustering of the self-organizing map. IEEE Transactions on Neural Networks 11(3), 586–600 (2000)
  3. Fritzke, B.: Growing self-organizing networks - Why? In: European Symposium on Artificial Neural Networks (ESANN 1996), Belgium, pp. 61–72 (1996)
  4. Fritzke, B.: A growing neural gas network learns topologies. In: Advances in Neural Information Processing Systems 7. MIT Press, Cambridge, MA, USA (1995)
  5. Kohonen, T.: Self-organized formation of topologically correct feature maps. Biological Cybernetics 43, 59–69 (1982)
  6. Martinetz, T., Berkovich, S., Schulten, K.: "Neural-gas" network for vector quantization and its application to time-series prediction. IEEE Transactions on Neural Networks 4(4), 558–569 (1993)
  7. Fritzke, B.: Growing cell structures - a self-organizing network for unsupervised and supervised learning. Neural Networks 7(9), 1441–1460 (1994)
  8. Martinetz, T.: Competitive Hebbian learning rule forms perfectly topology preserving maps. In: ICANN 1993, pp. 427–434 (1993)
  9. Inostroza-Ponta, M., Berretta, R., Mendes, A., Moscato, P.: An automatic graph layout procedure to visualize correlated data. In: IFIP 19th World Computer Congress, Artificial Intelligence in Theory and Practice, Santiago, Chile, August 21–24, pp. 179–188 (2006)
  10. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977)
  11. Fritzke, B.: Some competitive learning methods. Technical report, Systems Biophysics, Institute for Neural Computation, Ruhr-Universität Bochum, Germany (1997)
  12. Fritzke, B.: Kohonen feature maps and growing cell structures - a performance comparison. In: Advances in Neural Information Processing Systems 5, pp. 115–122 (1993)
  13. Kruskal, J.B.: Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29, 1–27 (1964)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Fernando Canales (1)
  • Max Chacón (1)

  1. Universidad de Santiago de Chile, Depto. de Ingeniería Informática, Avda. Ecuador No 3659, PO Box 10233, Santiago, Chile
