  • Part 1. Special issue “High Performance Data Intensive Computing” Editors: V. V. Voevodin, A. S. Simonov, and A. V. Lapin

A New Parallel Intel Xeon Phi Hydrodynamics Code for Massively Parallel Supercomputers

Abstract

In this paper, a new hydrodynamics code called gooPhi for simulating astrophysical flows on modern Intel Xeon Phi processors with the KNL architecture is presented. A new vector numerical method, implemented as program code for massively parallel architectures, is proposed. A detailed description of the method is given, and a parallel implementation of the code is presented. A performance of 173 gigaflops and a 48-fold speedup are obtained on a single Intel Xeon Phi processor, and 97 percent scalability is reached on 16 processors.



Author information

Corresponding author

Correspondence to I. M. Kulikov.

Additional information

(Submitted by V. V. Voevodin)


About this article

Cite this article

Kulikov, I.M., Chernykh, I.G. & Tutukov, A.V. A New Parallel Intel Xeon Phi Hydrodynamics Code for Massively Parallel Supercomputers. Lobachevskii J Math 39, 1207–1216 (2018). https://doi.org/10.1134/S1995080218090135

Keywords and phrases

  • high performance computing
  • computational astrophysics
  • Intel Xeon Phi