
Prediction of the Inter-Node Communication Costs of a New Gyrokinetic Code with Toroidal Domain

  • Andreas Jocksch
  • Noé Ohana
  • Emmanuel Lanti
  • Aaron Scheinberg
  • Stephan Brunner
  • Claudio Gheller
  • Laurent Villard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10777)

Abstract

We consider the communication costs of gyrokinetic plasma physics simulations running at large scale. To this end, we apply virtual three-dimensional decompositions of the toroidal domain, combined with additional domain cloning, to existing simulations performed with the ORB5 code. For every virtual task (node), the communication volume and the number of communication partners per timestep are evaluated for both the particles and the structured mesh. From simple models of a modern computer network and the corresponding processing units, the scaling properties of a code using the new domain decompositions are then derived. The effectiveness of the suggested decomposition is demonstrated: for a typical simulation with \(2\cdot 10^9\) particles and a mesh of \(256\times 1024\times 512\) grid points, scaling to 2,800 nodes should be achievable.
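
As an illustration of how such an estimate can be assembled, the following Python sketch computes a per-node, per-timestep communication volume for a three-dimensional decomposition with domain cloning. Only the particle count, grid size and node count are taken from the abstract; the clone count, migration fraction and per-marker storage are assumptions chosen purely for illustration and do not reproduce the model used in the paper.

    # Back-of-the-envelope estimate of per-node communication volume for a
    # 3D (radial x poloidal x toroidal) domain decomposition with domain
    # cloning. Parameters marked "assumed" are illustrative only.

    N_PARTICLES = 2e9                  # total markers (from the abstract)
    GRID = (256, 1024, 512)            # grid points per dimension (from the abstract)
    NODES = 2800                       # target node count (from the abstract)
    CLONES = 4                         # assumed number of domain clones
    BYTES_PER_PARTICLE = 10 * 8        # assumed 10 doubles stored per marker
    BYTES_PER_GRID_POINT = 8           # one double per grid value

    DOMAINS = NODES // CLONES          # spatial subdomains per clone

    # Particles: assume a fraction of each node's markers migrates to a
    # neighbouring subdomain every timestep.
    particles_per_node = N_PARTICLES / NODES
    migration_fraction = 0.05          # assumed fraction migrating per timestep
    particle_bytes = particles_per_node * migration_fraction * BYTES_PER_PARTICLE

    # Mesh: the local partition of the structured grid is reduced over the clones.
    grid_points_per_domain = GRID[0] * GRID[1] * GRID[2] / DOMAINS
    grid_bytes = grid_points_per_domain * BYTES_PER_GRID_POINT

    print(f"particle migration per node and timestep: {particle_bytes / 1e6:.1f} MB")
    print(f"grid reduction per node and timestep:     {grid_bytes / 1e6:.1f} MB")

With these assumed values the sketch yields a few megabytes per node and timestep for each contribution, which is the kind of quantity the paper's network model weighs against bandwidth and latency.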

Keywords

Gyrokinetics · Particle in cell · Communication


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Andreas Jocksch (1)
  • Noé Ohana (2)
  • Emmanuel Lanti (2)
  • Aaron Scheinberg (2)
  • Stephan Brunner (2)
  • Claudio Gheller (1)
  • Laurent Villard (2)
  1. CSCS, Swiss National Supercomputing Centre, Lugano, Switzerland
  2. Swiss Plasma Center, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
