
Improved Deep Neural Networks with Domain Decomposition in Solving Partial Differential Equations


Abstract

An improved neural network method based on domain decomposition is proposed for solving partial differential equations (PDEs); it is an extension of physics-informed neural networks (PINNs). Although recent research has shown that PINNs solve PDEs effectively, they still have difficulty with large-scale, complex problems, owing to their use of a single neural network and to gradient pathologies. The proposed approach performs the computation on sub-domains and improves the expressiveness of the neural networks in order to mitigate gradient pathologies. Our investigations show that, although the network structure and the loss function become more complicated, the proposed method outperforms classical PINNs in training effectiveness, computational accuracy, and computational cost.





Acknowledgements

This work is supported by the scientific research plan of universities in Xinjiang (Grant No. XJEDU2020Y001 and XJEDU2020I001), the Research Fund from Key Laboratory of Xinjiang Province (Grant No. 2020D04002) and the Natural Science Foundation of China (Grant No. 12001466, U19A2079, 11671345, 91852106, 92152301).

Author information


Corresponding author

Correspondence to Hui Xu.

Ethics declarations

Computational Environment

All our numerical experiments use the TensorFlow deep learning framework. The basic environment is as follows: Anaconda3-5.2.0 (Windows x64), conda 4.5.4, TensorFlow 1.15.0 (GPU), Python 3.6.5, and PyCharm 2021.3.1. All computations are performed on the GPU of a workstation with an Intel Core i5-6300HQ CPU and an NVIDIA GeForce GTX 950 GPU.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Code Snippets Required to Implement Algorithm 2

In this appendix, we show the source code fragments for the main steps that must be followed to implement Algorithm 2 in TensorFlow. The fragments are intended to illustrate the general procedure.

First, residual training points \(\{\mathbf{x}_{i}^{re}, t_{i}^{re}\}_{i=1}^{N_{re}}\) are generated over the whole computational domain using Latin hypercube sampling, while the boundary and initial points come from the boundary and initial conditions of the PDEs. Before training, the neural networks are initialized with Xavier initialization, as in the sketch below.

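A minimal sketch of these two steps, assuming the TensorFlow 1.x API (per the computational environment above) and the pyDOE package for Latin hypercube sampling; the bounds lb, ub and the count N_re are illustrative:

    import numpy as np
    import tensorflow as tf   # TensorFlow 1.x, as listed in the Computational Environment
    from pyDOE import lhs     # Latin hypercube sampling

    # Illustrative rectangular (x, t) domain and number of residual points.
    lb = np.array([-1.0, 0.0])            # lower bounds of (x, t)
    ub = np.array([1.0, 1.0])             # upper bounds of (x, t)
    N_re = 10000
    X_re = lb + (ub - lb) * lhs(2, N_re)  # rescale unit-cube samples to the domain

    def xavier_init(size):
        # Xavier (Glorot) initialization of one weight matrix of shape [in_dim, out_dim].
        in_dim, out_dim = size[0], size[1]
        xavier_stddev = np.sqrt(2.0 / (in_dim + out_dim))
        return tf.Variable(tf.truncated_normal([in_dim, out_dim],
                                               stddev=xavier_stddev,
                                               dtype=tf.float32),
                           dtype=tf.float32)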

Next, the network parameters are initialized: empty lists of weights and biases are defined, and a for loop over the number of layers of the network builds each weight matrix and bias vector and records them in those lists with the append function, as sketched below.

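A minimal sketch of this step, reusing the xavier_init helper above (the name initialize_NN and the example layer widths are illustrative):

    def initialize_NN(layers):
        # `layers` lists the width of every layer, e.g. [2, 50, 50, 50, 1].
        weights, biases = [], []
        for l in range(len(layers) - 1):
            W = xavier_init([layers[l], layers[l + 1]])
            b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float32),
                            dtype=tf.float32)
            weights.append(W)   # record the weight matrix of layer l
            biases.append(b)    # record the bias vector of layer l
        return weights, biases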

Finally, two simple neural networks U and V are constructed by introducing additional weights and biases, and the improved network is assembled as defined in Eq. (4); see the sketch below. Meanwhile, following the domain decomposition strategy proposed in Sect. 3.3, the computational domain is decomposed into four sub-domains, and suitable interface points \(\{\mathbf{x}_{q}^{if}, t_{q}^{if}\}_{q=1}^{N_{if}}\) are selected to construct the interface conditions according to Eq. (8); the functions implementing the corresponding interface conditions are then returned.
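Since Eq. (4) is not restated in this appendix, the following sketch assumes it takes the widely used form in which the two auxiliary one-layer networks U and V, built from the additional parameters (W_u, b_u) and (W_v, b_v), gate every hidden layer; consult Eq. (4) in the main text for the exact architecture:

    def improved_net(X, weights, biases, W_u, b_u, W_v, b_v):
        # Two simple one-layer networks built from the additional weights and biases.
        U = tf.tanh(tf.add(tf.matmul(X, W_u), b_u))
        V = tf.tanh(tf.add(tf.matmul(X, W_v), b_v))
        H = X
        for l in range(len(weights) - 1):
            Z = tf.tanh(tf.add(tf.matmul(H, weights[l]), biases[l]))
            H = (1.0 - Z) * U + Z * V          # point-wise gating by U and V
        return tf.add(tf.matmul(H, weights[-1]), biases[-1])   # linear output layer

The interface conditions of Eq. (8) would then be evaluated at the interface points shared by adjacent sub-domains, typically by penalizing the mismatch between the outputs and residuals of the neighboring sub-networks there; that code is not sketched here, since Eq. (8) is not reproduced in this appendix.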

Appendix B: Results of Networks with Different Depths and Widths

The parameters of the neural networks play a crucial role in the convergence of the loss function as well as in the prediction accuracy. Owing to the high expressiveness of deep neural networks, a network can approximate more complex functions as its depth and width increase. In the method proposed in this paper, the expressiveness of the networks is further improved by the introduction of the additional weights and biases. To analyze the proposed method, we test 9-layer and 10-layer networks with 50, 80, and 100 neurons per layer in turn. Experiments are conducted mainly on the Helmholtz equation and the Allen-Cahn equation, as detailed in Tables 16 and 17.

Table 16 Helmholtz equation: comparison of the relative \(L^{2}\)-error for each sub-domain at different widths and depths under Case 4
Table 17 Allen-Cahn equation: comparison of the relative \(L^{2}\)-error for each sub-domain at different widths and depths under Case 4

The extended numerical experiments on the Helmholtz and Allen-Cahn equations show that increasing the depth and width of the neural networks does not have much effect on the results. The relative \(L^{2}\)-error may decrease slightly, but it remains of the same order of magnitude. It is therefore also reasonable to use 8-layer neural networks with 50 neurons per layer in the proposed method.
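For reference, a minimal sketch of how such a comparison can be scripted; the relative \(L^{2}\)-error definition below is the standard one and is an assumption, and train_subdomain is a hypothetical training helper:

    import numpy as np

    def relative_l2_error(u_pred, u_exact):
        # Relative L2-error: ||u_pred - u_exact||_2 / ||u_exact||_2.
        return np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)

    for depth in (9, 10):
        for width in (50, 80, 100):
            # Interpreting an "N-layer" network as N hidden layers; input is (x, t).
            layers = [2] + [width] * depth + [1]
            # u_pred = train_subdomain(layers, ...)   # hypothetical training call
            # print(depth, width, relative_l2_error(u_pred, u_exact))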

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wu, W., Feng, X. & Xu, H. Improved Deep Neural Networks with Domain Decomposition in Solving Partial Differential Equations. J Sci Comput 93, 20 (2022). https://doi.org/10.1007/s10915-022-01980-y

