Speed Up of the SAMANN Neural Network Retraining

  • Viktor Medvedev
  • Gintautas Dzemyda
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4029)


Sammon’s mapping is a well-known procedure for projecting data from a higher-dimensional space onto a lower-dimensional one. The original algorithm has a disadvantage: it lacks generalization, so new points cannot be added to the obtained map without recomputing the whole projection. The SAMANN neural network, which realizes Sammon’s algorithm, provides this generalization capability: once trained, it can project new data. This paper analyzes how to speed up the retraining of the SAMANN network when new data points appear. Two strategies for retraining the neural network that realizes the multidimensional data visualization are proposed and analyzed.
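To make the quality criterion behind Sammon's mapping concrete, the following is a minimal sketch (not the authors' implementation; the function name and NumPy-based layout are assumptions) of Sammon's stress, the error the projection and the SAMANN network try to minimize: the weighted mismatch between pairwise distances in the original space and in the low-dimensional map.

```python
import numpy as np

def sammon_stress(X_high, X_low):
    """Sammon's stress between a high-dimensional dataset and its projection.

    E = (1 / sum_{i<j} d*_ij) * sum_{i<j} (d*_ij - d_ij)^2 / d*_ij,
    where d*_ij are distances in the original space and d_ij in the projection.
    (Illustrative sketch, not the paper's code.)
    """
    def pdist(X):
        # All pairwise Euclidean distances as an n x n matrix.
        diff = X[:, None, :] - X[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    d_star = pdist(np.asarray(X_high, dtype=float))  # original-space distances
    d = pdist(np.asarray(X_low, dtype=float))        # projected distances
    iu = np.triu_indices(len(d_star), k=1)           # unique pairs i < j
    ds, dl = d_star[iu], d[iu]
    return ((ds - dl) ** 2 / ds).sum() / ds.sum()
```

A projection that preserves all pairwise distances exactly yields a stress of zero; any distortion increases it, with close pairs weighted more heavily because of the division by d*_ij.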


Keywords: Neural Network, Projection Error, Primary Dataset, Iris Dataset, Nonlinear Projection





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Viktor Medvedev, Institute of Mathematics and Informatics, Vilnius, Lithuania
  • Gintautas Dzemyda, Institute of Mathematics and Informatics, Vilnius, Lithuania
