
Hybrid Parallelization of Particle in Cell Monte Carlo Collision (PIC-MCC) Algorithm for Simulation of Low Temperature Plasmas

  • Bhaskar Chaudhury
  • Mihir Shah
  • Unnati Parekh
  • Hasnain Gandhi
  • Paramjeet Desai
  • Keval Shah
  • Anusha Phadnis
  • Miral Shah
  • Mainak Bandyopadhyay
  • Arun Chakraborty
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 964)

Abstract

We illustrate the parallelization of a PIC code for kinetic simulation of low temperature plasmas on Intel multicore (Xeon) and manycore (Xeon Phi) architectures, and subsequently on an HPC cluster. The 2D-3v PIC-MCC algorithm described in this paper involves the computational solution of the Vlasov-Poisson equations, which provides the spatial and temporal evolution of the charged-particle velocity distribution functions in a plasma under the effect of self-consistent electromagnetic fields and collisions. Stringent numerical constraints on the total number of particles, the number of grid points, and the simulation time-scale associated with PIC codes make a serial CPU implementation computationally prohibitive for large problem sizes. We first describe a shared-memory parallelization using the OpenMP library and then propose a hybrid parallel scheme (OpenMP+MPI) for a distributed-memory system. The OpenMP-based PIC code has been executed on a Xeon processor and on Xeon Phi co-processors (Knights Corner and Knights Landing), and we compare our results against a serial implementation on an Intel Core i5 processor. Finally, we compare the results of the hybrid parallel code with those of the OpenMP-based parallel code. The hybrid strategy based on OpenMP and MPI, involving three-level parallelization (instruction-level, thread-level over many cores, and node-level across a cluster of Xeon processors), achieves a linear speedup on an HPC cluster with 4 nodes (64 cores in total). The results show that our particle-decomposition-based hybrid parallelization technique using private grids scales efficiently with increasing problem size and number of cores in the cluster.
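For reference, the Vlasov-Poisson system mentioned above, written here in its standard electrostatic form (the paper's exact formulation may differ), reads

\[
\frac{\partial f_s}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_s + \frac{q_s}{m_s}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f_s = \left(\frac{\partial f_s}{\partial t}\right)_{\mathrm{coll}}, \qquad \nabla^2\phi = -\frac{\rho}{\varepsilon_0}, \quad \mathbf{E} = -\nabla\phi,
\]

where \(f_s(\mathbf{x},\mathbf{v},t)\) is the velocity distribution function of species \(s\) and the right-hand side accounts for Monte Carlo collisions with the neutral background.

The private-grid particle decomposition highlighted in the abstract can be illustrated with a short sketch. The following C code is a hypothetical minimal example, not the authors' implementation: the grid dimensions, the cloud-in-cell (CIC) weighting, the critical-section thread reduction, and all identifiers are assumptions chosen for illustration. It shows the charge-deposition step, the classic write-conflict hotspot that per-thread private grids avoid, followed by an MPI reduction across nodes.

```c
/* Hypothetical sketch (not the authors' code): hybrid OpenMP+MPI charge
 * deposition with particle decomposition and per-thread private grids. */
#include <mpi.h>
#include <omp.h>
#include <stdlib.h>
#include <string.h>

#define NX 256
#define NY 256

typedef struct { double x, y; } Particle;   /* positions in grid-cell units */

/* Deposit unit charges onto a shared NX x NY density grid. Assumes
 * 0 <= x < NX-1 and 0 <= y < NY-1, and that MPI_Init was already called. */
void deposit_charge(const Particle *p, int n_local, double *rho)
{
    memset(rho, 0, NX * NY * sizeof(double));

    #pragma omp parallel
    {
        /* Private grid: each thread scatters into its own copy, so no
         * atomics or locks are needed on the shared array. */
        double *rho_priv = calloc(NX * NY, sizeof(double));

        #pragma omp for
        for (int i = 0; i < n_local; i++) {
            int ix = (int)p[i].x, iy = (int)p[i].y;
            double fx = p[i].x - ix, fy = p[i].y - iy;
            /* Bilinear (CIC) weighting to the four surrounding nodes. */
            rho_priv[iy * NX + ix]           += (1.0 - fx) * (1.0 - fy);
            rho_priv[iy * NX + ix + 1]       += fx * (1.0 - fy);
            rho_priv[(iy + 1) * NX + ix]     += (1.0 - fx) * fy;
            rho_priv[(iy + 1) * NX + ix + 1] += fx * fy;
        }

        /* Thread-level reduction of the private grids into the shared one. */
        #pragma omp critical
        for (int k = 0; k < NX * NY; k++)
            rho[k] += rho_priv[k];

        free(rho_priv);
    }

    /* Node-level reduction: sum the per-process grids across the cluster. */
    MPI_Allreduce(MPI_IN_PLACE, rho, NX * NY, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int n_local = 1000000;                       /* particles per MPI rank */
    Particle *p = malloc(n_local * sizeof(Particle));
    for (int i = 0; i < n_local; i++) {          /* random test positions  */
        p[i].x = (NX - 1) * ((double)rand() / ((double)RAND_MAX + 1.0));
        p[i].y = (NY - 1) * ((double)rand() / ((double)RAND_MAX + 1.0));
    }

    double *rho = malloc(NX * NY * sizeof(double));
    deposit_charge(p, n_local, rho);

    free(p);
    free(rho);
    MPI_Finalize();
    return 0;
}
```

The design point is that each OpenMP thread writes only to its own copy of the grid, so the scatter step needs no synchronization; the cost is one thread-level and one node-level reduction per time step, which stays cheap as long as the grid is small relative to the particle count.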


Acknowledgement

This work was carried out using the HPC facilities at DA-IICT and hardware received under the BRNS-PFRC project. We would also like to acknowledge the help received from Colfax's remote access program. We thank Siddarth Kamaria, Harshil Shah and Riddhesh Markandeya for their contributions towards the serial code development. Miral Shah thanks the Department of Atomic Energy, Govt. of India, for the junior research fellowship (JRF) received under the BRNS-PFRC project (No. 39/27/2015-BRNS/34081).


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Bhaskar Chaudhury (1) (corresponding author)
  • Mihir Shah (1)
  • Unnati Parekh (1)
  • Hasnain Gandhi (1)
  • Paramjeet Desai (1)
  • Keval Shah (1)
  • Anusha Phadnis (1)
  • Miral Shah (1)
  • Mainak Bandyopadhyay (2, 3)
  • Arun Chakraborty (2)

  1. Group in Computational Science and HPC, DA-IICT, Gandhinagar, India
  2. ITER-India, Institute for Plasma Research (IPR), Gandhinagar, India
  3. Homi Bhabha National Institute (HBNI), Mumbai, India
