Solving PDEs by variational physics-informed neural networks: an a posteriori error analysis

We consider the discretization of elliptic boundary-value problems by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain. We define an a posteriori error estimator, made of a residual-type term, a loss-function term, and data oscillation terms. We prove that the estimator is both reliable and efficient in controlling the energy norm of the error between the exact and VPINN solutions. Numerical results are in excellent agreement with the theoretical predictions.


Introduction
The possibility of using deep-learning tools for solving complex physical models has attracted the attention of many scientists over the last few years. We have in mind in this paper models that are mathematically described by partial differential equations, supplemented by suitable boundary and initial conditions. In the most general setting, if no information on the model is available except the knowledge of some of its solutions, the model may be completely surrogated by one or more neural networks, trained by data (i.e., by the known solutions). However, in most situations of interest, the mathematical model is known (e.g., the Navier-Stokes equations describing an incompressible flow), and such information may be suitably exploited in training the network(s): one gets the so-called Physics-Informed Neural Networks (PINNs). This approach was first proposed in [13], and it inspired further works such as, e.g., [15] or [17], until the recent paper [9], which presents a very general framework for the solution of operator equations by deep neural networks. PINNs are trained by using the strong form of the differential equations, which are enforced at a set of points in the domain by suitably defining the loss function. In this sense, PINNs can be viewed as particular instances of least-squares/collocation methods.
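The collocation idea just recalled can be made concrete by a minimal sketch. The following Python fragment is an illustration only, not the construction of [13]: the one-parameter ansatz u_theta, the collocation points, and all names are hypothetical choices for the model problem -u'' = f on (0, 1).

```python
import numpy as np

# Hedged sketch of the PINN idea recalled above: the strong residual of
# -u'' = f on (0,1) is sampled at collocation points and squared into a loss.
# The one-parameter ansatz u_theta and all names are illustrative.

f = lambda x: 2.0 * np.ones_like(x)        # chosen so that u(x) = x(1-x) is exact

def d2u_theta(x, theta):                   # second derivative of u_theta = theta*x*(1-x)
    return -2.0 * theta * np.ones_like(x)

x_col = np.linspace(0.1, 0.9, 9)           # interior collocation points

def pinn_loss(theta):
    residual = -d2u_theta(x_col, theta) - f(x_col)   # strong-form residual
    return float(np.mean(residual ** 2))

print(pinn_loss(1.0), pinn_loss(0.5))
```

Here the loss vanishes exactly at theta = 1, i.e., when the ansatz reproduces the exact solution; in a genuine PINN the ansatz is a deep network and the minimization is performed by gradient descent.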
Based on the weak formulation of the differential model, the so-called Variational Physics-Informed Neural Networks (VPINNs), proposed in [6], enforce the equations by means of suitably chosen test functions, not necessarily represented by neural networks [7]; they are instances of least-squares/Petrov-Galerkin methods. While the construction of the loss function is generally less expensive for PINNs than for VPINNs, the latter allow for the treatment of models with less regular solutions, as well as an easier enforcement of boundary conditions. In addition, the error analysis for VPINNs takes advantage of the available results for the discretization of variational problems, in fulfilling the assumptions of the Lax-Richtmyer paradigm 'stability plus consistency imply convergence'. Actually, consistency results follow rather easily from the recently established approximation properties of neural networks in Sobolev spaces (see, e.g., [3], [5], [11], [8], [12], [4]), whereas the derivation of stability estimates for the neural network solution appears to be a less trivial task: indeed, a neural network is identified by its weights, which usually far outnumber the conditions enforced in its training. In other words, the training of a neural network is functionally an ill-posed problem.
In this respect, we considered in [1] a Petrov-Galerkin framework in which trial functions are defined by means of neural networks, whereas test functions are continuous, piecewise linear functions on a triangulation of the domain. Relying on an inf-sup condition between spaces of piecewise polynomial functions, we derived an a priori error estimate in the energy norm between the exact solution of an elliptic boundary-value problem and a high-order interpolant of a deep neural network which minimizes the loss function. Numerical results indicate that the error follows a similar behavior when the interpolation operator is turned off.
The purpose of the present paper is to perform an a posteriori error analysis for VPINNs, i.e., to get estimates on the error which only depend on the computed VPINN solution, rather than on the unknown exact solution. This is important to get practical and quantitative information on the quality of the approximation. After setting the model elliptic boundary-value problem in Sect. 2, and the corresponding VPINN discretization in Sect. 2.1, we define in Sect. 3 a computable residual-type error estimator, and prove that it is both reliable and efficient in controlling the energy error between the exact solution and the VPINN solution. Reliability means that the global error is bounded from above by a constant times the estimator; efficiency means that the estimator cannot over-estimate the energy error, since the latter is bounded from below by a constant times the former, up to data oscillation terms. The proposed estimator is obtained by summing up several terms: one is the classical residual-type estimator in finite elements, measuring the bulk error inside each element of the triangulation as well as the inter-element gradient jumps; another term accounts for the magnitude of the loss function after minimization is performed; the remaining terms measure data oscillations, i.e., the errors committed by locally projecting the equation's coefficients and right-hand side upon suitable polynomial spaces. The estimator can be written as a sum of elemental contributions, thereby allowing its use within an adaptive discretization strategy which refines the elements carrying the largest contributions to the estimator.

The model boundary-value problem
Let Ω ⊂ R^n be a bounded polygonal/polyhedral domain with Lipschitz boundary Γ = ∂Ω.

Let us consider the model elliptic boundary-value problem
-∇·(µ∇u) + β·∇u + σu = f in Ω, u = 0 on Γ, (1)
where µ, β, σ are the coefficients of the operator and f is a given right-hand side. Setting V = H^1_0(Ω), define the bilinear and linear forms
a(u, v) = ∫_Ω µ∇u·∇v + (β·∇u)v + σuv, (2)
F(v) = ∫_Ω f v; (3)
denote by α > 0 the coercivity constant of the form a, and by ∥a∥, ∥F∥ the continuity constants of the forms a and F. Problem (1) is formulated variationally as follows: find u ∈ V such that a(u, v) = F(v) for all v ∈ V. (4)
Remark 2.1 (Other boundary conditions). The forthcoming formulation of the discretized problem and the a posteriori error analysis can be extended without pain to cover the case of mixed Dirichlet-Neumann boundary conditions, namely u = g on Γ_D, µ∂_n u = ψ on Γ_N, with Γ_D ∪ Γ_N = Γ. We just consider homogeneous Dirichlet conditions to avoid an excess of technicalities.

The VPINN discretization
We aim at approximating the solution of Problem (1) by a generalized Petrov-Galerkin strategy.
To define the subset of V of trial functions, let us choose a fully-connected feed-forward neural network structure NN, with n input variables and 1 output variable, identified by the number of layers L, the layer widths N_ℓ, ℓ = 1, . . ., L, and the activation function ρ. Thus, each choice of the weights w ∈ R^N defines a mapping w_NN : x ↦ w(x, w), which we regard as restricted to the closed domain Ω̄; let us denote by W_NN the manifold containing all functions that can be generated by this neural network structure. We enforce the homogeneous Dirichlet boundary conditions by multiplying each w_NN by a fixed smooth function Φ ∈ V (we refer to [14] for a general strategy to construct this function); we assume that v_NN = Φw_NN belongs to V for any w_NN ∈ W_NN. In conclusion, our manifold of trial functions will be V_NN = {v_NN = Φw_NN : w_NN ∈ W_NN}.
To define the subspace of V of test functions, let us introduce a conforming, shape-regular triangulation T_h = {E} of Ω with meshsize h > 0, and let V_h ⊂ V be the linear subspace formed by the functions which are piecewise linear polynomials over the triangulation T_h. Furthermore, let us introduce computable approximations of the forms a and F by numerical quadratures. Precisely, for any E ∈ T_h, let {(ξ^E_ι, ω^E_ι) : ι ∈ I_E} be the nodes and weights of a quadrature formula of precision q ≥ 2 on E. Then, assuming that all data µ, β, σ, f are continuous in each element of the triangulation, we define the approximate forms a_h and F_h, obtained by applying these elemental quadratures to the integrals defining a and F. (5)-(6)
With these ingredients at hand, we would like to approximate the solution of Problem (4) by some u_NN ∈ V_NN satisfying
a_h(u_NN, v_h) = F_h(v_h) for all v_h ∈ V_h. (7)
In order to handle this problem by the neural network, let us introduce a basis in V_h, say V_h = span{ϕ_i : i ∈ I_h}, and for any w ∈ V let us define the residuals
r_{h,i}(w) = F_h(ϕ_i) − a_h(w, ϕ_i), i ∈ I_h, (8)
as well as the loss function
R²_h(w) = Σ_{i∈I_h} r²_{h,i}(w). (9)
Then, we search for a global minimum of the loss function in V_NN, i.e., we consider the following minimization problem:
find u_NN ∈ V_NN minimizing R²_h. (10)
Note that any solution u_NN of (7) annihilates the loss function, hence it is a solution of (10); such a solution may not be
unique, since the set of equations (7) may be underdetermined (in particular, for f = 0 one may obtain a non-zero u_NN, see [1, Sect. 6.3]). On the other hand, system (7) may be overdetermined, and admit no solution; in this case, the loss function will have strictly positive minima.
Remark 2.2 (Discretization with interpolation). In order to reduce and control the random effects related to the use of a network depending upon a large number of weights, in [1] we proposed to locally project the neural network upon a space of polynomials, before computing the loss function.
To be precise, we have considered a conforming, shape-regular partition T_H = {G} of Ω, which is equal to or coarser than T_h (i.e., each element E ∈ T_h is contained in an element G ∈ T_H) but compatible with T_h (i.e., its meshsize H > 0 satisfies H ≥ h). Let V_H ⊂ V be the linear subspace formed by the functions which are piecewise polynomials of degree k_int = q + 1 over the triangulation T_H, and let I_H : C^0(Ω̄) → V_H be the associated element-wise Lagrange interpolation operator.
Given a neural network function w_NN ∈ V_NN, let us denote by w_H = I_H w_NN ∈ V_H its piecewise polynomial interpolant. Then, the definition (8) of the local residuals is modified by replacing w_NN with w_H; consequently, the loss function takes the corresponding form, and we define a new approximation ũ^H_NN of the solution of Problem (4) by minimizing this modified loss function. In [1] we derived an a priori error estimate for the error ∥u − ũ^H_NN∥_V, and we documented the error decay as h → 0, which turns out to have a more regular behavior than that of the error ∥u − u_NN∥_V, although the latter is usually smaller.
The subsequent a posteriori error analysis could be extended to give a control on the error produced by ũ^H_NN as well. For the sake of simplicity, we do not pursue such a task here.
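The residuals (8) and the loss function (9) can be illustrated by a minimal one-dimensional sketch, assuming a Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet conditions, hat test functions on a uniform mesh, a 3-point Gauss rule, and a tiny one-hidden-layer tanh network as trial function; all sizes and names are illustrative, not those of the paper's implementation.

```python
import numpy as np

# Hedged 1D sketch of the residuals (8) and loss (9): -u'' = f on (0,1),
# u(0) = u(1) = 0, hat test functions on a uniform mesh, 3-point Gauss rule.
# The tiny one-hidden-layer tanh network and all names are illustrative.

rng = np.random.default_rng(0)
A, b, c = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)

def w_net(x):                       # raw network output w(x)
    return np.tanh(np.outer(x, A) + b) @ c

def dw_net(x):                      # its derivative in x
    return (1.0 - np.tanh(np.outer(x, A) + b) ** 2) @ (A * c)

def dv(x):                          # derivative of v = Phi * w, with Phi(x) = x(1-x)
    return (1.0 - 2.0 * x) * w_net(x) + x * (1.0 - x) * dw_net(x)

f = lambda x: np.pi**2 * np.sin(np.pi * x)      # manufactured right-hand side

nodes = np.linspace(0.0, 1.0, 11)               # mesh nodes, h = 0.1
gx = np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)])  # Gauss points
gw = np.array([5.0, 8.0, 5.0]) / 9.0                           # Gauss weights

def residual(i):
    """r_i = F_h(phi_i) - a_h(v, phi_i) for the hat function at node i."""
    r = 0.0
    for (xl, xr), slope in [((nodes[i - 1], nodes[i]), 1.0 / (nodes[i] - nodes[i - 1])),
                            ((nodes[i], nodes[i + 1]), -1.0 / (nodes[i + 1] - nodes[i]))]:
        xm, hw = 0.5 * (xl + xr), 0.5 * (xr - xl)
        xq = xm + hw * gx                        # quadrature points in the element
        phi = slope * (xq - xl) if slope > 0 else 1.0 + slope * (xq - xl)
        r += hw * np.sum(gw * (f(xq) * phi - dv(xq) * slope))
    return r

loss = sum(residual(i) ** 2 for i in range(1, len(nodes) - 1))
print(loss)
```

Minimizing this loss over the network parameters (A, b, c) would give the 1D analogue of the minimization problem (10).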

The a posteriori error estimator
In order to build an error estimator, let us first choose, for any E ∈ T_h and any k ≥ 0, a projection operator Π_{E,k} onto the space P_k(E) of polynomials of degree ≤ k on E. This allows us to introduce approximate bilinear and linear forms which are useful in the forthcoming derivation. Indeed, the coercivity of the form a allows us to bound the V-norm of the error as follows:
∥u − u_NN∥_V ≤ (1/α) sup_{v∈V, v≠0} a(u − u_NN, v)/∥v∥_V. (17)
We split the numerator as in (18), and we proceed to bound each term on the right-hand side.
The terms (I) and (II) account for the element-wise projection error upon polynomial spaces; they are estimated in the next two lemmas.
Lemma 3.1. The quantity (I) defined in (18) satisfies
Proof. Setting m_E(v) = (1/|E|) ∫_E v and using (14), we get
and we conclude using the bound
Lemma 3.2. The quantity (II) defined in (18) satisfies
with
Proof. It holds
where we have used again (14). We conclude as in the proof of Lemma 3.1.
Let us now focus on the quantity (III), which can be written as in (23); in turn, the quantity (V) can be written as in (24). The bound of (IV) is standard in finite-element a posteriori error analysis: it involves the local bulk residuals bulk_E(u_NN), defined in (25), and the interelement jumps at each edge e shared by two elements, say E_1 and E_2 with opposite unit normal vectors n_1 and n_2, namely
jump_e(u_NN) = Π_{E_1,q}(µ∇u_NN)·n_1 + Π_{E_2,q}(µ∇u_NN)·n_2; (26)
in addition, one defines jump_e(u_NN) = 0 if e ⊂ ∂Ω.
To derive the bound, the test function v_h in (23) is chosen as the Clément interpolant v_h = I^C_h v, for which (27) holds; one obtains the following result.
Lemma 3.3. The quantity (IV) defined in (23) satisfies
with bulk_E(u_NN) defined in (25) and jump_e(u_NN) defined in (26).
Proof. We refer, e.g., to [16] for more details.
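In one dimension the two classical contributions entering the bound of (IV) reduce to a few lines of code. The following hedged sketch assumes µ = 1, β = 0, σ = 0, with a generic piecewise-linear function in place of u_NN; the mesh, the data, and the edge length scale h_e are illustrative choices.

```python
import numpy as np

# Hedged 1D sketch (mu = 1, beta = 0, sigma = 0) of the two classical
# contributions entering the bound of (IV):
#   sum_E h_E^2 ||bulk_E||^2_{0,E}  +  sum_e h_e ||jump_e||^2_{0,e}.
# In 1D an "edge" is an interior node; mesh, data, and h_e are illustrative.

nodes = np.linspace(0.0, 1.0, 6)
u = np.sin(np.pi * nodes)                  # nodal values of a piecewise-linear function
f = lambda x: np.pi**2 * np.sin(np.pi * x)

h = np.diff(nodes)
grads = np.diff(u) / h                     # constant gradient on each element

# bulk residual: f + u_h'' = f inside each element (u_h'' = 0); midpoint quadrature
mids = 0.5 * (nodes[:-1] + nodes[1:])
bulk_sq = h**2 * (f(mids) ** 2 * h)        # h_E^2 * ||f||^2_{0,E}

# gradient jumps at interior nodes: grad(left element) - grad(right element)
jumps = grads[:-1] - grads[1:]
h_e = 0.5 * (h[:-1] + h[1:])               # illustrative local length scale
jump_sq = h_e * jumps**2

eta_res = float(np.sqrt(bulk_sq.sum() + jump_sq.sum()))
print(eta_res)
```

For this smooth interpolated function both contributions shrink as the mesh is refined, which is the behavior the estimator is designed to track.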
Before considering the quantity (VI), let us state a useful result of equivalence of norms.
Lemma 3.4. For any v_h ∈ V_h, let v = (v_i)_{i∈I_h} be the vector of its coefficients. There exist constants 0 < c_h ≤ C_h, possibly depending on h, such that
c_h ∥v_h∥_V ≤ ∥v∥ ≤ C_h ∥v_h∥_V for all v_h ∈ V_h, (30)
where ∥v∥ denotes the Euclidean norm of v.
Proof. The result expresses the equivalence of norms in finite-dimensional spaces. If the triangulation T_h is quasi-uniform, then one can prove by a standard reference-element argument that c_h ≃ h^{1−n/2}, whereas C_h ≃ h^{−n/2}.
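The h-dependence of the constants in Lemma 3.4 can be observed numerically. The following hedged 1D sketch (n = 1, uniform mesh of (0, 1), hat basis with homogeneous boundary values; all choices illustrative) computes the ratio between the H^1 norm of v_h and the Euclidean norm of its coefficient vector; for random, hence oscillatory, coefficient vectors this ratio grows like h^{-1/2} ≃ 1/c_h.

```python
import numpy as np

# Hedged 1D illustration of Lemma 3.4 (n = 1, uniform mesh of (0,1), hat
# basis, homogeneous boundary values): the ratio ||v_h||_{H^1} / |v| between
# the H^1 norm and the Euclidean norm of the coefficient vector grows like
# h^{-1/2} for random (oscillatory) coefficient vectors.

def norm_ratio(n_el, seed=1):
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    h = np.diff(nodes)
    v = np.random.default_rng(seed).normal(size=n_el - 1)   # interior coefficients
    vals = np.concatenate(([0.0], v, [0.0]))                # zero boundary values
    grads = np.diff(vals) / h
    semi = np.sum(grads**2 * h)                             # |v_h|_{H^1}^2, exact
    # L^2 part: exact elementwise integration of the squared linear interpolant
    l2 = np.sum(h / 3.0 * (vals[:-1]**2 + vals[:-1] * vals[1:] + vals[1:]**2))
    return float(np.sqrt(semi + l2) / np.linalg.norm(v))

ratios = [norm_ratio(n) for n in (8, 32, 128)]
print(ratios)
```

The ratio increases roughly by a factor 2 each time h is divided by 4, consistent with the scaling c_h ≃ h^{1/2} for n = 1.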
We are now able to bound the quantity (VI) in terms of the loss function introduced in (9), as follows.
Lemma 3.5. The quantity (VI) defined in (24) satisfies
We conclude by using (30) and observing that
We are left with the problem of bounding the terms (VII) and (VIII) in (24). They are similar to the terms (I) and (II), respectively, but reflect the presence of the quadrature formula introduced in (5) and (6). In the forthcoming analysis, it will be useful to introduce the following notation for the quadrature-based discrete (semi-)norm on C^0(E):
Let us start with the quantity (VII). Recalling that the adopted quadrature rule has precision q and test functions v_h are piecewise linear polynomials, it holds
On the one hand, recalling the assumption q ≥ 2 and inequality (33), one has
On the other hand, we first observe that, by the exactness of the quadrature rule and (14), we get
Hence,
Summarizing, we obtain the following result, which is analogous to that in Lemma 3.1.
Lemma 3.6. The quantity (VII) defined in (24) satisfies
The last term in (24), (VIII), can be written as
Concerning (VIIIa), by the exactness of the quadrature rule and the fact that ∇v_h is piecewise constant, one has
which easily gives
The terms (VIIIb) and (VIIIc) are similar to the term (VII) above, in which f is replaced by β·∇u_NN and σu_NN, respectively. Hence, they can be bounded as done for (VII). Summarizing, we obtain the following result, which is analogous to that in Lemma 3.2.
Lemma 3.7. The quantity (VIII) defined in (24) satisfies
with
At this point, we are ready to derive the announced a posteriori error estimates. In order to get an upper bound of the error, we concatenate (17), (18), (23), (24), and use the bounds given in Lemmas 3.1 to 3.7, arriving at the following result.
Theorem 3.8 (a posteriori upper bound of the error). Let u_NN ∈ V_NN satisfy (10). Then, the error u − u_NN can be estimated from above as follows:
where
Note that the global estimator η = η_res + η_loss + η_coef + η_rhs is the sum of four contributions: η_res is the classical residual-based estimator, η_loss measures how small the minimized loss function is, i.e., how well the discrete variational equations (7) are fulfilled, whereas η_coef and η_rhs reflect the error in approximating elementwise the coefficients of the operator and the right-hand side by polynomials of degrees related to the precision of the quadrature formula.
It is possible to derive from (43) an element-based a posteriori error estimator, which can be used to design an adaptive strategy of mesh refinement (see, e.g., [10]). To this end, from now on we assume that the basis {ϕ_i : i ∈ I_h} of V_h, introduced to define (8), is the canonical Lagrange basis associated with the nodes of the triangulation T_h. Given any E ∈ T_h, we introduce the elemental index set
where supp ϕ_i is the support of ϕ_i, and we define a local contribution to the term η_loss as follows:
which satisfies
With this definition at hand, we can introduce the following elemental error estimator.
Then, Theorem 3.8 can be re-formulated in terms of these quantities.
Corollary 3.10 (localized a posteriori error estimator). The error u − u_NN can be estimated as follows:
Inequality (47) guarantees the reliability of the proposed error estimator, namely, the estimator does provide a computable upper bound of the discretization error. The next result ensures that the estimator is also efficient, namely, it does not overestimate the error.
Theorem 3.11 (a posteriori lower bound of the error). Let u_NN ∈ V_NN satisfy (10). Then, the error u − u_NN can be locally estimated from below as follows: for any E ∈ T_h it holds
Proof. To derive (48), let us first consider the bulk contribution to the estimator. We apply a classical argument in a posteriori analysis, namely, we introduce a non-negative bubble function b_E ∈ V with support in E and such that ∥φ∥_{0,E} ≃ ∥b_E^{1/2} φ∥_{0,E} for all φ ∈ P_q(E). Using this property, we obtain the bulk bound (50). Let us now turn to the jump contribution to the estimator. Given an edge e ⊂ ∂E shared with the element E′, we introduce a non-negative bubble function b_e ∈ V, with support in E ∪ E′ and such that ∥φ∥_{0,e} ≃ ∥b_e^{1/2} φ∥_{0,e} and ∥b_e φ∥_{0,E} + h_E |b_e φ|_{1,E} ≲ h_E^{1/2} ∥φ∥_{0,e} for all φ ∈ P_q(E).
Let us extend the function jump_e(u_NN) onto E ∪ E′ to be constant in the normal direction to e, obtaining a polynomial of degree q in each element. We now recall that the exact solution has no gradient jumps across e, as well as ∇·(µ∇u) = −f + β·∇u + σu. We write u = u_NN + (u − u_NN) and we proceed as in the proof of (50), using now the bounds ∥w_e∥_{0,E_i} ≲ h_{E_i}^{1/2} ∥jump_e(u_NN)∥_{0,e} and |w_e|_{1,E_i} ≲ h_{E_i}^{−1/2} ∥jump_e(u_NN)∥_{0,e}, arriving at the jump bound. Together with (50), this gives the bound (48). In order to derive (49), we write (45) in terms of a function v^E_h, which is supported in D_E, and recalling (8), we have
By the left-hand inequality in (30), we obtain

Now we write the corresponding splitting; the first term can be bounded as done for the terms (I) and (VII) above, yielding the corresponding estimate. Similarly, the term a(u_NN, v^E_h) − a_h(u_NN, v^E_h) can be handled as done for the terms (III) and (VIII) above, thereby concluding the proof of (49).
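The local lower bound just proved justifies refining where the elemental estimators of Corollary 3.10 are largest. Below is a hedged sketch of a standard bulk-chasing (Dörfler) marking step, which is one common choice from the adaptivity literature, not a strategy prescribed by the present analysis; the parameter theta and the sample values are illustrative.

```python
import numpy as np

# Hedged sketch of an adaptive marking step driven by the elemental estimators
# eta(E): Doerfler (bulk) marking, a standard choice; theta and the sample
# values are illustrative, not taken from the paper.

def mark_doerfler(eta_local, theta=0.5):
    """Return indices of a minimal set of elements whose squared local
    estimators carry at least a fraction theta of the total squared estimator."""
    order = np.argsort(eta_local)[::-1]              # largest contributions first
    cumulative = np.cumsum(eta_local[order] ** 2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

eta_local = np.array([0.1, 0.7, 0.2, 0.05, 0.4])     # illustrative eta(E) values
marked = mark_doerfler(eta_local, theta=0.5)
print(sorted(marked.tolist()))                       # elements to refine
```

The marked elements would then be refined and the VPINN retrained, closing the usual solve, estimate, mark, refine loop.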

Numerical results
Let us consider the two-dimensional domain Ω = (0, 1)² and the Poisson problem (52), with the functions f and g such that the exact solution, represented in Fig. 1, is given in (53). Problem (52) is numerically solved by the VPINN discretization described in Section 2.1, extended to handle the nonhomogeneous Dirichlet condition as mentioned in Remark 2.1. The used VPINN is a feed-forward fully-connected neural network composed of an input layer with input dimension n = 2, three hidden layers with 50 neurons each, and an output layer with a single output variable; it thus contains 7851 trainable weights; furthermore, in all the layers except the output one, the activation function is the hyperbolic tangent. The VPINN output is modified as described in [1] to exactly impose the Dirichlet boundary conditions. Gaussian quadrature rules of order q = 3 are used in the definition of the loss function.
For ease of implementation, the orthogonal projection operators Π_{E,k} defined in Section 3 are mimicked by interpolation operators as follows. Let us initially consider the elemental Lagrange interpolation operator I_{E,k} : C^0(E) → P_k(E); then, to guarantee orthogonality to constants, the projection operator Π̃_{E,k} is defined as
Π̃_{E,k}ϕ = I_{E,k}ϕ + (1/|E|) ∫_E (ϕ − I_{E,k}ϕ),
where, in practice, the integral ∫_E (ϕ − I_{E,k}ϕ) can be computed with quadrature rules that are more accurate than the ones used in the other operations. In this work we use quadrature rules of order 7 in each element.
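A hedged 1D sketch of this mean-corrected interpolation operator follows; the element E = (0, 1), the degree k = 2, and the function ϕ are illustrative choices. Adding the mean of the interpolation error restores orthogonality of ϕ − Π̃_{E,k}ϕ to constants.

```python
import numpy as np

# Hedged 1D sketch of the corrected interpolation operator described above:
# Pi(phi) = I(phi) + (1/|E|) int_E (phi - I(phi)), so that the error
# phi - Pi(phi) has zero mean on E. Element, degree, and phi are illustrative.

xg, wg = np.polynomial.legendre.leggauss(8)      # accurate auxiliary quadrature
xg, wg = 0.5 * (xg + 1.0), 0.5 * wg              # mapped to E = (0, 1), |E| = 1

phi = lambda x: np.exp(x)                        # a generic smooth function on E
k = 2
xi = np.linspace(0.0, 1.0, k + 1)                # Lagrange nodes of degree k
coef = np.polyfit(xi, phi(xi), k)                # I_{E,k} phi as a polynomial
I_phi = lambda x: np.polyval(coef, x)

correction = np.sum(wg * (phi(xg) - I_phi(xg)))  # (1/|E|) int_E (phi - I phi)
Pi_phi = lambda x: I_phi(x) + correction

mean_error = np.sum(wg * (phi(xg) - Pi_phi(xg))) # mean of phi - Pi(phi) on E
print(abs(mean_error))
```

The residual mean is zero up to floating-point rounding, since the same quadrature evaluates both the correction and the check.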
The VPINN is trained on different meshes and the corresponding error estimators η = (Σ_{E∈T_h} η²(E))^{1/2} are computed. Once more, when exact integrals are involved, they are approximated with higher-order quadrature rules. The obtained results are shown in Fig. 2, where the values of the H¹ error and the a posteriori estimator are displayed for several meshes of stepsize h. Remarkably, the error estimators (red dots) behave very similarly to the corresponding energy errors (blue dots); moreover, coherently with the results discussed in [1], after an initial preasymptotic phase the two quantities decay at comparable rates as h decreases.
It is also interesting to note that the terms appearing in the a posteriori estimator (recall (43)) exhibit different behaviors during the training of a single VPINN. This phenomenon is highlighted in Fig. 3, where one can observe the evolution of the quantities η_rhs, η_coef, η_res, η_loss, η and |u − u_NN|_{1,Ω}. It can be observed that, during this training, while the value of the loss function decreases, the accuracy remains almost constant because other sources of error, independent of the neural network, prevail.

Conclusions
We considered the discretization of a model elliptic boundary-value problem by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain.
The scheme can be viewed as an instance of a least-square/Petrov-Galerkin method.
We introduced an a posteriori error estimator, which sums up four contributions: the equation residual (measuring the elemental bulk residuals and the edge jump terms, for approximated coefficients and right-hand side), the coefficients' oscillation, the right-hand side's oscillation, and a scaled value of the loss function. The latter term accounts for the inexact solution of the algebraic system arising from the discretization of the variational equations.
The main result of the paper is the proof that the estimator provides a global upper bound and a local lower bound for the energy norm of the error between the exact and VPINN solutions. In other words, the a posteriori estimator is both reliable and efficient. Numerical results show an excellent agreement with the theoretical predictions.
In a forthcoming paper, we will investigate the use of the proposed estimator to design an adaptive strategy of discretization.

Fig. 1 Graphical representation of the exact solution u(x, y) in (53)

Fig. 2 H¹ errors (blue dots) obtained by training the same VPINN on different meshes, and corresponding error estimators (red dots)

Fig. 3 Evolution, during the training of a single VPINN, of the quantities η_rhs, η_coef, η_res, η_loss, η and |u − u_NN|_{1,Ω}