In the present study, a deep neural network, an advanced type of artificial intelligence, is employed to learn the physics of conduction heat transfer in 2D geometries. A dataset of 44,160 samples is produced using the conventional finite volume method on a uniform 64 × 64 grid. The dataset comprises four geometries, namely square, triangular, regular hexagonal, and regular octagonal, with random sizes and random Dirichlet boundary conditions. The dataset of solved problems is then introduced to a convolutional Deep Neural Network (DNN) to learn the physics of 2D heat transfer without knowing the partial differential equation underlying conduction heat transfer. Two loss functions, based on the Mean Square Errors (MSE) and the Mean of Maximum Square Errors (MMaSE), are introduced. The MMaSE is a new loss function tailored to the physics of heat transfer. Of the images, 70%, 15%, and 15% are used for training the DNN, testing the DNN, and validating the DNN during the training process, respectively. In the validation stage, 2D domains with random boundary conditions, which the DNN has never seen before, are introduced to the DNN, and the DNN is asked to estimate the temperature distribution. The results show that DNNs are capable of learning physical problems without knowing the underlying fundamental governing equation. The error analysis for the various training methods is reported and discussed. The outcomes reveal that DNNs are capable of learning physics, but that using MMaSE as a tailored loss function improves the training quality: a DNN trained with MMaSE provides a better temperature distribution than a DNN trained with MSE. As the 2D heat equation is a Laplace equation, which arises in multiple branches of physics, the results of the present study indicate a new direction for future computational methods and for advanced modeling of physical phenomena using big datasets of observations.
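Steady 2D conduction with constant conductivity reduces to the Laplace equation ∇²T = 0 with Dirichlet boundary temperatures. As an illustration of how one sample of such a dataset can be generated, the sketch below solves the equation by Jacobi iteration on a uniform 64 × 64 grid; this is a minimal stand-in, not the authors' finite volume solver, and the boundary-temperature range of 300–400 K is an assumption for the example.

```python
import numpy as np

def solve_laplace(T_bc, n=64, iters=5000, tol=1e-5):
    """Jacobi iteration for steady 2D conduction (Laplace equation)
    on a uniform n x n grid. T_bc = (top, bottom, left, right)
    Dirichlet boundary temperatures in kelvin (illustrative only)."""
    top, bottom, left, right = T_bc
    T = np.zeros((n, n))
    T[0, :], T[-1, :], T[:, 0], T[:, -1] = top, bottom, left, right
    for _ in range(iters):
        T_new = T.copy()
        # five-point stencil: each interior node becomes the mean of its neighbours
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                    + T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

# one sample with random Dirichlet boundary conditions, as described above
rng = np.random.default_rng(0)
sample = solve_laplace(tuple(rng.uniform(300.0, 400.0, size=4)))
```

By the maximum principle, every interior temperature of the converged field lies between the minimum and maximum boundary temperatures, which gives a quick sanity check on generated samples.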
- T i : Isothermal boundary conditions at the boundaries of the domain (K)
- x : x-Cartesian coordinate (m)
- y : y-Cartesian coordinate (m)
- T : Temperature field, the actual temperature distribution image (target image) (K)
- P : Predicted temperature distribution image (K)
- N : Number of images in a collection of images
- R : Number of rows in an image
- C : Number of columns in an image
- β : Parameters of the Adam optimizer
- β 1 : First parameter of the Adam optimizer
- β 2 : Second parameter of the Adam optimizer
- Segment of the boundary condition
- Summation index over a collection of images
- Summation index over all rows
- Summation index over all columns
- MSE : Mean Square Errors
- MMaSE : Mean of Maximum Square Errors
- Maximum of Square Errors
- Mean Absolute Errors
- Mean of Maximum Absolute Errors
- Maximum of Absolute Errors
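In terms of the nomenclature above (a collection of N images of R rows and C columns, target field T and prediction P), the two loss functions from the abstract can be sketched as follows. This NumPy version is an illustration of the definitions, not the authors' training code: MSE averages the squared error over every pixel of every image, while MMaSE takes the single worst squared pixel error of each image and averages those maxima over the N images.

```python
import numpy as np

def mse(T, P):
    """Mean Square Errors over all pixels of all images.
    T, P: arrays of shape (N, R, C), targets and predictions."""
    return float(np.mean((T - P) ** 2))

def mmase(T, P):
    """Mean of Maximum Square Errors: for each of the N images take
    the worst squared pixel error, then average over the collection."""
    per_image_max = np.max((T - P) ** 2, axis=(1, 2))
    return float(np.mean(per_image_max))
```

Because MMaSE is driven by each image's worst pixel, it penalizes localized hot-spot errors that a pixel-averaged MSE can wash out, which is consistent with the abstract's observation that the tailored loss improves training quality.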
Edalatifar, M., Tavakoli, M.B., Ghalambaz, M. et al. Using deep learning to learn physics of conduction heat transfer. J Therm Anal Calorim 146, 1435–1452 (2021). https://doi.org/10.1007/s10973-020-09875-6
- Conduction heat transfer
- Deep convolutional neural networks
- Deep learning
- Laplace equation
- Large dataset