
Lobachevskii Journal of Mathematics, Volume 39, Issue 9, pp. 1207–1216

A New Parallel Intel Xeon Phi Hydrodynamics Code for Massively Parallel Supercomputers

  • I. M. Kulikov
  • I. G. Chernykh
  • A. V. Tutukov
Part 1 of the special issue “High Performance Data Intensive Computing”. Editors: V. V. Voevodin, A. S. Simonov, and A. V. Lapin

Abstract

This paper presents gooPhi, a new hydrodynamics code for simulating astrophysical flows on modern Intel Xeon Phi processors with the KNL architecture. A new vector numerical method, implemented as a program code for massively parallel architectures, is proposed. A detailed description of the method is given, and a parallel implementation of the code is described. A performance of 173 gigaflops and a speedup of 48 are obtained on a single Intel Xeon Phi processor, and 97 percent scalability is reached on 16 processors.
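The headline figures above (a speedup of 48 on one processor, 97 percent scalability across 16 processors) follow the standard definitions of speedup and strong-scaling efficiency. The sketch below shows how such figures are conventionally computed; the timing values are hypothetical, chosen only so the formulas reproduce numbers of the same magnitude as those reported in the abstract.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup of a parallel run relative to a serial baseline."""
    return t_serial / t_parallel


def scaling_efficiency(t_one_node: float, t_n_nodes: float, n_nodes: int) -> float:
    """Strong-scaling efficiency across n_nodes (1.0 = ideal linear scaling)."""
    return t_one_node / (n_nodes * t_n_nodes)


# Hypothetical timings for a single-processor vectorized run:
# a serial baseline of 480 s accelerated to 10 s gives a 48x speedup.
s = speedup(t_serial=480.0, t_parallel=10.0)

# Hypothetical strong scaling: a 160 s one-processor run finishing in
# ~10.3 s on 16 processors corresponds to ~97% scaling efficiency.
e = scaling_efficiency(t_one_node=160.0, t_n_nodes=10.31, n_nodes=16)

print(f"speedup = {s:.0f}x, efficiency = {e:.0%}")
```

Reporting efficiency rather than raw speedup makes multi-node results comparable across machine sizes, which is why the abstract quotes scalability as a percentage.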

Keywords and phrases

high performance computing, computational astrophysics, Intel Xeon Phi



Copyright information

© Pleiades Publishing, Ltd. 2018

Authors and Affiliations

  • I. M. Kulikov (1)
  • I. G. Chernykh (1)
  • A. V. Tutukov (2)
  1. Institute of Computational Mathematics and Mathematical Geophysics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
  2. Institute of Astronomy, Russian Academy of Sciences, Moscow, Russia
