A Multi-Signal Variant for the GPU-Based Parallelization of Growing Self-Organizing Networks

  • Giacomo Parigi
  • Angelo Stramieri
  • Danilo Pau
  • Marco Piastra
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 283)


Among the many possible approaches to parallelizing self-organizing networks, and growing self-organizing networks in particular, perhaps the most common is to produce an optimized, parallel implementation of the standard sequential algorithms reported in the literature. In this chapter we explore an alternative approach, based on a new algorithm variant specifically designed to match the large-scale, fine-grained parallelism of GPUs, in which multiple input signals are processed at once. Comparative tests have been performed, using both parallel and sequential implementations of the new variant, in particular for a growing self-organizing network that reconstructs surfaces from point clouds. The experimental results show that this approach harnesses more effectively the intrinsic parallelism that self-organizing network algorithms seem intuitively to suggest, obtaining better performance even with networks of smaller size.
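To give a rough sense of the multi-signal idea, the sketch below batches the best-matching-unit search, the step that dominates training in self-organizing networks, over many input signals at once. This is a minimal NumPy illustration under assumptions of our own (function name, two-winner output), not the chapter's actual GPU algorithm; on a GPU, each (signal, unit) distance would map naturally to one thread.

```python
import numpy as np

def batched_bmu(units: np.ndarray, signals: np.ndarray):
    """For each signal in a batch, return the two nearest units.

    units:   (n_units, dim) array of reference vectors
    signals: (n_signals, dim) batch of input signals processed at once
    """
    # Pairwise squared distances, shape (n_signals, n_units); the whole
    # batch is evaluated in one data-parallel pass instead of one
    # sequential search per signal.
    d2 = ((signals[:, None, :] - units[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)
    # Winner and second-best unit index for every signal in the batch.
    return order[:, 0], order[:, 1]
```

Processing signals in blocks like this changes the algorithm's semantics (several winners may be adapted before the network is updated), which is why the chapter treats the multi-signal scheme as a distinct variant rather than a mere implementation of the sequential algorithm.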


Keywords: Growing self-organizing networks · Graphics processing unit · Parallelism · Surface reconstruction · Topology preservation



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Giacomo Parigi (1), corresponding author
  • Angelo Stramieri (1)
  • Danilo Pau (2)
  • Marco Piastra (1)
  1. Computer Vision and Multimedia Lab, University of Pavia, Pavia, Italy
  2. Advanced System Technology, STMicroelectronics, Agrate Brianza, Italy
