
GPU-Accelerated Particle-in-Cell Code on Minsky

  • Andreas Herten
  • Dirk Brömmel
  • Dirk Pleiter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10524)

Abstract

Particle-in-cell (PIC) methods are widely used on today’s supercomputers. In this paper we consider JuSPIC, an application for which good scaling properties have been demonstrated on a 6 PFlop/s Blue Gene/Q system. We report on our efforts to port this application to emerging supercomputing architectures based on IBM POWER processors and NVIDIA graphics processing units.
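
For context on what a GPU port of a PIC code involves, the sketch below shows a minimal, illustrative CUDA kernel for the particle-push phase of a PIC time step. It assumes a uniform electric field, unified (managed) memory as available on Minsky nodes, and hypothetical names (Particle, push_particles, qm) that are not taken from JuSPIC; field interpolation and current deposition are omitted.

    // Minimal, illustrative PIC particle push in CUDA (not JuSPIC code).
    #include <cstdio>
    #include <cuda_runtime.h>

    struct Particle {
        float x, y, z;    // position
        float vx, vy, vz; // velocity
    };

    // Leapfrog-style update: accelerate by a (here: uniform) electric
    // field E, then drift. One thread handles one particle.
    __global__ void push_particles(Particle *p, int n, float dt,
                                   float qm, float3 E) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        p[i].vx += qm * E.x * dt;
        p[i].vy += qm * E.y * dt;
        p[i].vz += qm * E.z * dt;
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }

    int main() {
        const int n = 1 << 20;
        Particle *p;
        cudaMallocManaged(&p, n * sizeof(Particle));  // unified memory
        for (int i = 0; i < n; ++i) p[i] = Particle{0, 0, 0, 1, 0, 0};

        dim3 block(256), grid((n + block.x - 1) / block.x);
        push_particles<<<grid, block>>>(p, n, 1e-3f, -1.0f,
                                        make_float3(0.f, 0.f, 1.f));
        cudaDeviceSynchronize();

        printf("particle 0: x=%g vz=%g\n", p[0].x, p[0].vz);
        cudaFree(p);
        return 0;
    }

In a full PIC step this push would be preceded by interpolating fields from the grid to the particles and followed by depositing charge/current back onto the grid; those phases dominate the data-movement cost of a GPU port.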

Keywords

POWER8 · GPU acceleration · Performance analysis · Minsky · OpenPOWER · NVIDIA Tesla P100

Acknowledgements

This work has been carried out in the context of the POWER Acceleration and Design Center, a joint project between IBM, Forschungszentrum Jülich, and NVIDIA, as well as the NVIDIA Application Lab at Jülich, a joint project between Forschungszentrum Jülich and NVIDIA. We acknowledge the support of Jiri Kraus (NVIDIA) and many helpful discussions with him. Research leading to these results has in part been carried out on the Human Brain Project PCP Pilot Systems at the Jülich Supercomputing Centre, which received co-funding from the European Union (Grant Agreement No. 604102).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Forschungszentrum Jülich, JSC, Jülich, Germany
