
Parallel Programming in the Hybrid Model on the HPC Clusters

  • Conference paper
  • In: High Performance Computing, Smart Devices and Networks (CHSN 2022)
Abstract

Support for an ever-expanding range of distributed and parallel programming strategies, together with tools for performance analysis, is constantly evolving. We examine the message passing interface (MPI) and open multi-processing (OpenMP) standards, which provide software development tools covering a wide range of techniques not strictly limited to shared memory. The theoretical part gives a general overview of the current capabilities of MPI and OpenMP as application programming interfaces. The authors studied a high-performance computing (HPC) cluster environment, and this work provides results and representative examples. The study also shows that, to obtain favorable processing efficiency and to write correct code, regardless of the programming paradigm used, it is necessary to know the basic concepts of the underlying distributed and parallel architecture. Additionally, we investigate the need to adapt the tools to the specific problem being solved in order to improve processing efficiency. Finally, we show that failing to apply a critical section may lead to unpredictable results.
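The hybrid model and the critical-section issue mentioned in the abstract can be illustrated with a short sketch. The following C program is not taken from the paper; the file name and build commands are assumptions. It combines MPI processes with OpenMP threads: each rank lets its threads accumulate into a shared per-process counter, and an OpenMP critical section serializes that update. Removing the critical directive turns the update into a data race, which is the kind of unpredictable behaviour the abstract refers to.

/*
 * hybrid.c - minimal MPI+OpenMP sketch (illustrative only; file name and
 * build line are assumptions, not the authors' code).
 * Assumed build: mpicc -fopenmp hybrid.c -o hybrid
 * Assumed run:   mpirun -np 2 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;

    /* Request a thread level that allows OpenMP threads inside each rank. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local_sum = 0;

    #pragma omp parallel
    {
        long contribution = omp_get_thread_num() + 1;

        /* The critical section serializes the shared update; without it the
         * increment is a data race and the final value of local_sum is
         * unpredictable. */
        #pragma omp critical
        local_sum += contribution;
    }

    /* Combine the per-rank partial sums across all MPI processes. */
    long global_sum = 0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %ld\n", global_sum);

    MPI_Finalize();
    return 0;
}

In practice the same protection can often be obtained more cheaply with a reduction clause (for example, #pragma omp parallel reduction(+:local_sum)); the explicit critical section is used here only to make the synchronization point visible.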


Author information

Correspondence to Tomasz Rak.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Rak, T. (2024). Parallel Programming in the Hybrid Model on the HPC Clusters. In: Malhotra, R., Sumalatha, L., Yassin, S.M.W., Patgiri, R., Muppalaneni, N.B. (eds) High Performance Computing, Smart Devices and Networks. CHSN 2022. Lecture Notes in Electrical Engineering, vol 1087. Springer, Singapore. https://doi.org/10.1007/978-981-99-6690-5_15


  • DOI: https://doi.org/10.1007/978-981-99-6690-5_15

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-6689-9

  • Online ISBN: 978-981-99-6690-5

  • eBook Packages: Computer Science, Computer Science (R0)
