High Performance Computational Hydrodynamic Simulations: UPC Parallel Architecture as a Future Alternative
Developments in high performance computing (HPC) have transformed how computational hydrodynamic (CHD) simulations are performed today. To date, the message passing interface (MPI) remains the most common parallelism architecture and has been widely adopted in CHD simulations. However, it suffers from a communication bottleneck in some large-scale simulation cases: as the number of processors increases, delays during message passing can cause the total communication time to exceed the total simulation runtime. In this study, we utilise an alternative parallelism architecture, the partitioned global address space model as implemented in Unified Parallel C (PGAS-UPC), to develop our own UPC-CHD model with a two-step explicit scheme from the Lax-Wendroff family of predictor-corrector schemes. The model is evaluated on three incompressible, adiabatic, viscous 2D flow cases with moderate flow velocities. Model validation is achieved by the reasonably good agreement between the predicted and respective analytical values. We then compare the computational performance of UPC-CHD with that of MPI in its base design on an SGI UV-2000 server with up to 100 processors in this study. The former achieves a near 1:1 speedup, demonstrating its efficiency potential for very large-scale CHD simulations, while the latter experiences a slowdown beyond a certain processor count. Extending UPC-CHD remains our main objective, which can be achieved by the following additions: (a) inclusion of other numerical schemes to accommodate other types of fluid simulations, and (b) coupling UPC-CHD with Amazon Web Services (AWS) to further exploit its parallelism efficiency as a viable alternative.
Keywords: Parallel computing · Viscous incompressible laminar flow · MPI · UPC · Computational hydrodynamic (CHD) simulations
This research study is supported by internal core funding from the Nanyang Environment and Water Research Institute (NEWRI), Nanyang Technological University (NTU), Singapore. The first author is grateful to NTU for the 4-year Nanyang President Graduate Scholarship (NPGS) supporting his PhD study.