The Journal of Supercomputing, Volume 74, Issue 5, pp 1863–1884

Efficient sparse matrix-delayed vector multiplication for discretized neural field model

Abstract

Computational models of the human brain provide an important tool for studying the principles behind brain function and disease. To achieve whole-brain simulation, models are formulated at the level of neuronal populations as systems of delayed differential equations. In this paper, we show that the integration of large systems of sparsely connected neural masses is similar to well-studied sparse matrix-vector multiplication; however, due to delayed contributions, it differs in the data access pattern to the vectors. To improve data locality, we propose a combination of node reordering and tiled schedules derived from the connectivity matrix of the particular system, which allows multiple integration steps to be performed within a tile. We present two schedules: one with serial processing of the tiles and one allowing parallel processing of the tiles. We evaluate the presented schedules, showing speedups of up to 2× on a single-socket CPU and 1.25× on a Xeon Phi accelerator.
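The delayed access pattern described above can be illustrated with a minimal sketch. Assuming a CSR-like sparse connectivity matrix augmented with an integer delay per nonzero entry and a ring buffer of past state vectors, each output accumulates contributions read from *different* time slices of the history, which is what breaks the locality of plain SpMV. All names here (`indptr`, `indices`, `weights`, `delays`, `history`) are illustrative, not taken from the paper:

```python
import numpy as np

def delayed_spmv(indptr, indices, weights, delays, history, t):
    """One sparse matrix-delayed vector product at time step t.

    history is a (horizon, n) ring buffer of past state vectors;
    delays holds the integer delay (in steps) of each nonzero entry,
    so row i computes y[i] = sum_j w[i,j] * x_j(t - d[i,j]).
    """
    horizon, n = history.shape
    y = np.zeros(n)
    for i in range(n):                          # row i of the connectivity matrix
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            j = indices[k]                      # source node of this connection
            d = delays[k]                       # edge-specific delay in steps
            acc += weights[k] * history[(t - d) % horizon, j]
        y[i] = acc
    return y
```

Unlike ordinary SpMV, where the input vector can be streamed once per product, here each nonzero indexes a different row of `history`, so consecutive nonzeros of the same row may touch distant memory; this irregular access is what the node reordering and tiled schedules in the paper are designed to mitigate.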

Keywords

Neural field · Sparse matrix · SpMV · Delay differential equations · Data locality

Notes

Acknowledgements

This work was supported by the European Regional Development Fund project “CERIT Scientific Cloud” (No. CZ.02.1.01/0.0/0.0/16_013/0001802). The author would like to thank Jiří Filipovič for constructive criticism of the manuscript.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. Faculty of Informatics, Masaryk University, Brno, Czech Republic