
Accelerating astrophysical particle simulations with programmable hardware (FPGA and GPU)

  • Special Issue Paper
Computer Science - Research and Development

Abstract

In a previous paper we showed that direct gravitational N-body simulations in astrophysics scale very well on moderately parallel supercomputers (of order 10–100 nodes). The best balance between computation and communication is reached when the nodes are accelerated by special-purpose hardware. In this paper we describe the implementation of particle-based astrophysical simulation codes on new types of accelerator hardware: field-programmable gate arrays (FPGA) and graphics processing units (GPU). In addition to direct gravitational N-body simulations we use the algorithmically similar “smoothed particle hydrodynamics” (SPH) method as a test application; the algorithms are used for astrophysical problems such as the evolution of galactic nuclei with central black holes and gravitational wave generation, and star formation in galaxies and galactic nuclei. We present the code performance on a single node using different kinds of special hardware (traditional GRAPE, FPGA, and GPU) and discuss some implementation aspects (e.g. accuracy). The results show that for real application codes GPU hardware is as fast as GRAPE at an order of magnitude lower price, and that FPGAs are useful for accelerating complex sequences of operations (such as SPH). We discuss future prospects and new cluster computers built with new generations of FPGA and GPU cards.
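To make concrete the kind of computation that GRAPE, FPGA, and GPU cards accelerate here, the sketch below shows a minimal CUDA version of the O(N²) direct-summation gravity kernel. This is an illustration only, not the code from the paper: the kernel name, the float4 (x, y, z, mass) data layout, and the Plummer softening parameter eps2 are assumptions made for this example.

    // Minimal illustrative CUDA kernel for O(N^2) direct-summation gravity.
    // NOT the authors' code: kernel name, float4 (x, y, z, mass) layout, and
    // the Plummer softening parameter eps2 are assumptions for this sketch.
    __global__ void direct_gravity(const float4 *pos, float4 *acc,
                                   int n, float eps2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float4 pi = pos[i];
        float ax = 0.0f, ay = 0.0f, az = 0.0f;

        // One thread per particle i; sum the contribution of every particle j.
        for (int j = 0; j < n; ++j) {
            float4 pj = pos[j];
            float dx = pj.x - pi.x;
            float dy = pj.y - pi.y;
            float dz = pj.z - pi.z;
            // Softened separation: eps2 > 0 removes the r -> 0 singularity
            // and makes the self term (j == i) contribute exactly zero.
            float r2   = dx*dx + dy*dy + dz*dz + eps2;
            float rinv = rsqrtf(r2);
            float mr3  = pj.w * rinv * rinv * rinv;   // m_j / r^3
            ax += mr3 * dx;
            ay += mr3 * dy;
            az += mr3 * dz;
        }
        acc[i] = make_float4(ax, ay, az, 0.0f);
    }

With softening eps2 > 0 the j = i term vanishes automatically, so the inner loop needs no branch; production GPU codes additionally stage particle positions in shared-memory tiles to approach GRAPE-like throughput. In SPH the 1/r³ interaction is replaced by a compactly supported smoothing kernel and its derivatives, a longer sequence of operations per particle pair, which is the kind of complex pipeline the paper finds FPGAs well suited to.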



Author information

Correspondence to R. Spurzem.


Cite this article

Spurzem, R., Berczik, P., Marcus, G. et al. Accelerating astrophysical particle simulations with programmable hardware (FPGA and GPU). Comp. Sci. Res. Dev. 23, 231–239 (2009). https://doi.org/10.1007/s00450-009-0081-9
