
Data Visualization and Analysis with Self-Organizing Maps in Learning Metrics

  • Samuel Kaski
  • Janne Sinkkonen
  • Jaakko Peltonen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2114)

Abstract

High-dimensional data can be visualized and analyzed with the Self-Organizing Map, a method for clustering data and visualizing it on a lower-dimensional display. Results depend on the (often Euclidean) distance measure of the data space. We introduce an improved metric that emphasizes important local directions by measuring changes in an auxiliary property of interest associated with the data points, for example their class. A Self-Organizing Map is computed in the new metric and used for visualizing and clustering the data. The trained map represents the directions of highest relevance for the property of interest. For data analysis it is especially beneficial that the importance of the original data variables can be assessed and visualized throughout the data space. We apply the method to analyze the bankruptcy risk of Finnish enterprises.
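The abstract's "improved metric" can be stated more concretely. In the learning-metrics approach the paper builds on, the local distance is the quadratic form d^2(x, x + dx) = dx^T J(x) dx, where J(x) = sum_c p(c|x) grad log p(c|x) grad log p(c|x)^T is the Fisher information matrix of the conditional distribution of the auxiliary class variable c. Directions along which p(c|x) changes rapidly are stretched, so the map spends its resolution on them. The Python sketch below is a minimal illustration of this idea, not the authors' implementation: the kernel-based estimate of p(c|x), the bandwidth h, the map grid, and the training schedule are all assumptions made for this example (the paper's keywords suggest the conditional densities would instead come from a Gaussian mixture model fitted with EM).

```python
# Hedged sketch: an online SOM whose winner search uses a Fisher-information
# ("learning") metric derived from an auxiliary class variable. Illustrative
# reconstruction only; the density estimator and all hyperparameters are
# assumptions, not the authors' reference implementation.

import numpy as np


def class_probs_and_grads(x, data, labels, n_classes, h=0.5):
    """Estimate p(c|x) with Gaussian kernels and return p and its gradients."""
    diffs = data - x                                     # (N, d)
    w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / h ** 2)
    w_sum = w.sum() + 1e-12
    grad_w = (w[:, None] * diffs) / h ** 2               # d w_i / d x
    grad_w_sum = grad_w.sum(axis=0)
    p = np.zeros(n_classes)
    grad_p = np.zeros((n_classes, x.size))
    for c in range(n_classes):
        mask = labels == c
        p[c] = w[mask].sum() / w_sum
        grad_p[c] = (grad_w[mask].sum(axis=0) - p[c] * grad_w_sum) / w_sum
    return p, grad_p


def fisher_metric(x, data, labels, n_classes, h=0.5):
    """Local metric J(x) = sum_c p(c|x) grad log p(c|x) grad log p(c|x)^T."""
    p, grad_p = class_probs_and_grads(x, data, labels, n_classes, h)
    J = np.zeros((x.size, x.size))
    for c in range(n_classes):
        if p[c] > 1e-12:
            g = grad_p[c] / p[c]                         # grad of log p(c|x)
            J += p[c] * np.outer(g, g)
    return J + 1e-6 * np.eye(x.size)                     # regularize for stability


def train_som_in_learning_metric(data, labels, n_classes,
                                 grid=(6, 6), n_iter=2000, seed=0):
    """Online SOM; winners are chosen by d^2(x, m) = (x - m)^T J(x) (x - m)."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    units = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    W = data[rng.choice(len(data), n_units)]             # init prototypes from data
    for t in range(n_iter):
        lr = 0.5 * (1 - t / n_iter)                      # decaying learning rate
        sigma = 2.0 * (1 - t / n_iter) + 0.5             # shrinking neighborhood
        x = data[rng.integers(len(data))]
        J = fisher_metric(x, data, labels, n_classes)
        d2 = np.einsum('kd,de,ke->k', x - W, J, x - W)   # metric distance to each unit
        winner = np.argmin(d2)
        grid_d2 = np.sum((units - units[winner]) ** 2, axis=1)
        hnb = np.exp(-grid_d2 / (2 * sigma ** 2))        # neighborhood function
        W += lr * hnb[:, None] * (x - W)                 # standard SOM adaptation
    return W


if __name__ == "__main__":
    # Tiny synthetic demo: the class depends on dims 0 and 1 only, dim 2 is noise,
    # so the learning metric should downweight the noise direction.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    prototypes = train_som_in_learning_metric(X, y, n_classes=2, n_iter=500)
    print(prototypes.shape)                              # (36, 3) prototype vectors
```

Evaluating the metric at the data point x rather than at each prototype is a common simplification: it keeps the winner search a single quadratic form per prototype instead of requiring geodesic distances in the changing metric.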

Keywords

Gaussian Mixture Model · Capital Structure · Data Space · Data Visualization · Fisher Information Matrix

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Samuel Kaski (1)
  • Janne Sinkkonen (1)
  • Jaakko Peltonen (1)

  1. Neural Networks Research Centre, Helsinki University of Technology (HUT), Finland
