
Parallelization of Numerical Conjugate Heat Transfer Analysis in Parallel Plate Channel Using OpenMP

  • Research Article – Mechanical Engineering
  • Published in: Arabian Journal for Science and Engineering

Abstract

Conjugate heat transfer and fluid flow are common phenomena in parallel plate channels. The semi-implicit method for pressure-linked equations (SIMPLE) algorithm, formulated with the finite volume method (FVM), is a standard technique for solving the Navier–Stokes equations in such flow simulations, but it is computationally expensive. In this article, an in-house FVM code is developed for the numerical analysis of conjugate heat transfer and fluid flow across different problems. Around 90% of the code's total execution time is spent solving the pressure (P) correction equation; the remainder is spent in the U-velocity, V-velocity, and temperature (T) functions, which use the tri-diagonal matrix algorithm (TDMA). To speed up the numerical analysis, the FVM code is parallelized using the OpenMP paradigm. All four functions of the code (U, V, T, and P) are parallelized, and the parallel performance is analyzed for different fluid flows, grid sizes, and boundary conditions. Both nested and non-nested OpenMP parallelization are evaluated on computing machines with different configurations. The complete analysis shows that the flow Reynolds number (Re) has a significant impact on the sequential execution time of the FVM code but a negligible effect on speedup and parallel efficiency. For the conditions considered, OpenMP parallelization of the FVM code provides a maximum speedup of about 1.5.



Abbreviations

Ar: Aspect ratio of battery cell
L: Length of battery cell
k: Thermal conductivity
l_o: Length of extra outlet fluid domain
l_i: Length of extra inlet fluid domain
L_o: Dimensionless length of extra outlet fluid domain
L_i: Dimensionless length of extra inlet fluid domain
q′′′: Volumetric heat generation
\( \bar{q} \): Non-dimensional heat flux
\( \bar{S}_{q} \): Dimensionless volumetric heat generation
Pr: Prandtl number
Re: Reynolds number
T: Temperature
T_o: Maximum allowable temperature of battery cell
\( \bar{T} \): Non-dimensional temperature
u: Velocity along the axial direction
U: Non-dimensional velocity along the axial direction
u_∞: Free stream velocity
v: Velocity along the transverse direction
Q_r: Heat removed from surface (non-dimensional)
V: Non-dimensional velocity along the transverse direction
w: Half-width
\( \bar{W} \): Non-dimensional width
x: Axial direction
X: Non-dimensional axial direction
y: Transverse direction
Y: Non-dimensional transverse direction
α: Thermal diffusivity of fluid
ν: Kinematic viscosity of fluid
ρ: Density of fluid
ζ_cc: Conduction–convection parameter
μ: Dynamic viscosity

Subscripts

c: Center
f: Fluid domain
s: Solid domain (battery cell)
∞: Free stream
m: Mean


Author information


Correspondence to Asif Afzal or M. K. Ramis.


Cite this article

Afzal, A., Ansari, Z. & Ramis, M.K. Parallelization of Numerical Conjugate Heat Transfer Analysis in Parallel Plate Channel Using OpenMP. Arab J Sci Eng 45, 8981–8997 (2020). https://doi.org/10.1007/s13369-020-04640-1

