
Using deep learning to learn physics of conduction heat transfer

Abstract

In the present study, a deep neural network, an advanced type of artificial intelligence, is employed to learn the physics of conduction heat transfer in 2D geometries. A dataset of 44,160 samples is produced with the conventional finite volume method on a uniform 64 × 64 grid. The dataset comprises four geometries, the square, triangular, regular hexagonal, and regular octagonal, with random sizes and random Dirichlet boundary conditions. The dataset of solved problems is then introduced to a convolutional Deep Neural Network (DNN), which learns the physics of 2D heat conduction without being given the partial differential equation underlying conduction heat transfer. Two loss functions, based on the Mean Square Error (MSE) and the Mean of Maximum Square Errors (MMaSE), are introduced. The MMaSE is a new loss function, tailored to the physics of heat transfer. Of the images, 70%, 15%, and 15% are used for training, testing, and validation of the DNN during the training process, respectively. In the validation stage, 2D domains with random boundary conditions, which the DNN has never seen before, are introduced, and the DNN is asked to estimate the temperature distribution. The results show that DNNs are capable of learning physical problems without knowing the underlying fundamental governing equation. The error analysis for the various training methods is reported and discussed. The outcomes reveal that DNNs are capable of learning the physics, and that using MMaSE as a tailored loss function improves training quality: a DNN trained with MMaSE provides a better temperature distribution than a DNN trained with MSE. As the 2D heat equation is a Laplace equation, which arises in multiple areas of physics, the results of the present study indicate a new direction for future computational methods and advanced modeling of physical phenomena using a big dataset of observations.
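To make the dataset-generation step concrete: on a uniform grid with constant conductivity, the finite volume discretization of steady 2D conduction (the Laplace equation) reduces to the five-point stencil, which can be solved by simple Jacobi iteration. The sketch below builds one sample in the spirit of the paper, a 64 × 64 domain with random isothermal Dirichlet edges; the function names, tolerance, and iteration scheme are illustrative assumptions, not the authors' code.

```python
import numpy as np

def solve_laplace(boundary, n_iter=5000, tol=1e-6):
    """Jacobi iteration for steady 2D conduction (Laplace equation) on a
    uniform grid; `boundary` is a 2D array whose edge values hold the
    Dirichlet temperatures.  Illustrative stand-in for the paper's FVM solver."""
    T = boundary.astype(float).copy()
    # start the interior at the mean boundary temperature
    edge_mean = np.concatenate([T[0, :], T[-1, :], T[1:-1, 0], T[1:-1, -1]]).mean()
    T[1:-1, 1:-1] = edge_mean
    for _ in range(n_iter):
        T_new = T.copy()
        # five-point stencil: each interior node is the average of its neighbours
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                    + T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

# one sample on the paper's 64 x 64 grid, random isothermal Dirichlet edges (K)
rng = np.random.default_rng(0)
bc = np.zeros((64, 64))
bc[0, :]  = rng.uniform(280, 320)   # top edge
bc[-1, :] = rng.uniform(280, 320)   # bottom edge
bc[:, 0]  = rng.uniform(280, 320)   # left edge
bc[:, -1] = rng.uniform(280, 320)   # right edge
field = solve_laplace(bc)
```

The resulting `field` plays the role of one target image T; a collection of such solved fields, over random geometries and boundary values, forms the training data for the DNN.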


Abbreviations

T: Temperature (K)
T_i: Isothermal boundary condition on segment i of the domain boundary (K)
x: x-Cartesian coordinate (m)
y: y-Cartesian coordinate (m)
T: Temperature field, the actual temperature distribution image (target image)
P: Predicted temperature distribution image (K)
N: Number of images in a collection of images
R: Number of rows in an image
C: Number of columns in an image
β: Parameters of the Adam optimizer
β1: First parameter of the Adam optimizer
β2: Second parameter of the Adam optimizer
i: Index of a boundary-condition segment
n: Summation index over the images in a collection
r: Summation index over the rows of an image
c: Summation index over the columns of an image
SE: Square Error
MSE: Mean Square Errors
MMaSE: Mean of Maximum Square Errors
MaSE: Maximum of Square Errors
AE: Absolute Error
MAE: Mean Absolute Errors
MMaAE: Mean of Maximum Absolute Errors
MaAE: Maximum of Absolute Errors
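The error measures above can be read as follows (a plausible interpretation, since the exact formulas are not reproduced in this excerpt): SE is the per-pixel square error (T − P)², MSE averages SE over all N images and their R × C pixels, MaSE is the maximum SE within a single image, and MMaSE averages the per-image maxima over the collection. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def mse(T, P):
    """Mean Square Errors: average of (T - P)^2 over all N images,
    R rows, and C columns."""
    return np.mean((T - P) ** 2)

def mmase(T, P):
    """Mean of Maximum Square Errors: for each of the N images take the
    maximum square error over its R x C pixels (MaSE), then average the
    per-image maxima over the collection."""
    se = (T - P) ** 2                      # shape (N, R, C)
    return np.mean(np.max(se, axis=(1, 2)))

# toy collection: N = 2 target and predicted temperature images, 4 x 4 each
T = np.zeros((2, 4, 4))
P = np.zeros((2, 4, 4))
P[0, 1, 2] = 3.0   # a single 3 K error in image 0
P[1, 0, 0] = 1.0   # a single 1 K error in image 1
# mse(T, P)   -> (9 + 1) / 32 = 0.3125
# mmase(T, P) -> (9 + 1) / 2  = 5.0
```

MAE and MMaAE follow the same pattern with the square replaced by the absolute value. The example shows why MMaSE can be the sharper training signal: a single large local error dominates MMaSE but is diluted by the pixel average in MSE.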


Correspondence to Mohammad Bagher Tavakoli.


Cite this article

Edalatifar, M., Tavakoli, M.B., Ghalambaz, M. et al. Using deep learning to learn physics of conduction heat transfer. J Therm Anal Calorim 146, 1435–1452 (2021). https://doi.org/10.1007/s10973-020-09875-6


Keywords

  • Conduction heat transfer
  • Deep convolutional neural networks
  • Deep learning
  • Laplace equation
  • Large dataset