A space-time certified reduced basis method for quasilinear parabolic partial differential equations

In this paper, we propose a certified reduced basis (RB) method for quasilinear parabolic problems. The method is based on a space-time variational formulation. We provide a residual-based a-posteriori error bound on the space-time level and the corresponding efficiently computable estimator for the certification of the method. We use the Empirical Interpolation Method (EIM) to guarantee an efficient offline-online computational procedure. The error of the EIM approximation is then rigorously incorporated into the certification procedure. The Petrov-Galerkin finite element discretization allows us to benefit from the Crank-Nicolson interpretation of the discrete problem and to use a POD-Greedy approach to construct reduced-basis spaces of small dimension. The method computes the reduced basis solution in a time-marching framework while the RB approximation error in a space-time norm is controlled by the estimator. The proposed method therefore incorporates a POD-Greedy approximation into a space-time certification.


Introduction
The certified reduced basis method is known as an efficient method for model order reduction of parametrized partial differential equations. The efficiency comes from the use of greedy search algorithms in the basis construction for the numerical approximation of the problem and from the a-posteriori control of the approximation error. The latter serves not only for a rigorous certification of the method, but also as the criterion in the greedy selection process, which provides incrementally good bases for the approximation and hence a further significant speed-up in multi-query numerical simulations (see, e.g., [1,2] for further details). The reduced basis method was successfully applied to linear [3,4,5] and nonlinear [6,7,8] parabolic problems, where the spatial differential operator is coercive [4,6] or inf-sup stable [5,7,8]. In general, there are two approaches [9] to reduced basis methods for unsteady problems: (1) first discretize, then estimate and reduce; (2) first estimate, then discretize and reduce. The first approach [3,6,10] is based on a time-marching problem in the offline phase. The error certification for the linear evolution problem is then based on the time-marching $L^2(\Omega)$ error bound [10], which is well suited to finite volume schemes. However, this bound grows extremely quickly with time in the case of a finite element discretization and is therefore not suitable for a broad class of problems. For the finite element discretization, the use of very specific (time-marching) norms for coercive problems results in time-marching energy error bounds suitable for long-term integration [3,6]. In this case, the error bound is given with respect to a "surrogate" norm, which is not a natural norm for the given problem and thus serves more as an error indicator. Since there is a time-marching problem behind the first approach, time is treated as an additional parameter.
Therefore the POD-Greedy procedure [10] is commonly used to construct the reduced-basis spaces, and the Empirical Interpolation Method (EIM) [11,12] is used to treat non-affine and nonlinear problems [6]. The second approach [4,5,7,8] starts from a space-time variational formulation, where the error bounds are then derived in the appropriate Bochner spaces with respect to natural space-time norms. In this approach time is treated as a variable, and thus it resembles the reduced-basis setting for elliptic problems [13]. The corresponding discretization of the space-time variational problem is usually chosen with space-time tensor product discrete spaces, coupling together the space and time structure of the equation. This coupling allows one to obtain the discrete solution in "one shot", i.e., by solving a single (and possibly huge) system of equations. The reduced-basis space is consequently constructed in the offline phase out of the space-time snapshots obtained, for example, with a Petrov-Galerkin discrete scheme. However, an appropriate choice of the discrete spaces in the Petrov-Galerkin scheme results in a time-marching interpretation (see, e.g., [5,7]) of the discrete problem. In this way, the time-marching procedure allows the use of the standard POD-Greedy approximation, treating time as a parameter, which leads to a reduced-basis time-marching problem, while the error certification is accomplished with the natural space-time norm error bound.
In this paper we treat quasilinear parabolic problems with the second approach. We propose a new $L^2(0, T; V)$ a-posteriori error bound, based on the space-time variational formulation of quasilinear parabolic PDEs with strongly monotone differential operators. This class of problems arises in important applications, such as the computation of magnetic fields in the presence of eddy currents in electrical machines [14]. We provide the Crank-Nicolson interpretation of the discrete Petrov-Galerkin problem and consequently use the POD-Greedy procedure to construct reduced-basis spaces of small dimension. A time-marching interpretation also allows us to treat the nonlinearity with the EIM. To the best of the authors' knowledge, there is no space-time analogue of the EIM procedure; the reduction to a time-marching scheme is therefore desirable in order to have an offline-online decomposition for our problem.
Moreover, the parameter separability in time, achieved with the EIM, leads to a significant speed-up factor. The error of the Empirical Interpolation Method is then also incorporated into the error bound and can be appropriately evaluated on the discrete level. The proposed reduced basis scheme is finally tested on a 1-D magnetoquasistatic approximation of Maxwell's equations.

Space-Time formulation
Let $V, H$ be separable Hilbert spaces with inner products $\langle \cdot, \cdot \rangle_V$, $\langle \cdot, \cdot \rangle_H$, induced norms $\|\psi\|_V := \sqrt{\langle \psi, \psi \rangle_V}$, $\|\psi\|_H := \sqrt{\langle \psi, \psi \rangle_H}$, and associated Gelfand triple $V \hookrightarrow H \hookrightarrow V'$ with associated duality pairing $\langle \cdot, \cdot \rangle_{V' \times V}$. The norm of $l \in V'$ is defined by $\|l\|_{V'} := \sup_{\psi \in V \setminus \{0\}} \langle l, \psi \rangle_{V' \times V} / \|\psi\|_V$. Let $\Omega \subset \mathbb{R}^d$ be the spatial domain and $I = (0, T]$ the time interval. We consider a quasilinear, bounded differential operator $A[\cdot] : V \to V'$ with induced quasilinear form
$$a[w](u, v) := \langle A[w]\,u, v \rangle_{V' \times V}. \qquad (1)$$
The quasilinear form $a[\cdot](\cdot, \cdot)$ is given by
$$a[w](u, v) = \int_\Omega \nu(x, w)\, \nabla u \cdot \nabla v \, dx, \qquad (2)$$
where the nonlinear coefficient $\nu : \Omega \times \mathbb{R} \to \mathbb{R}$ satisfies $0 < \nu_{LB} \le \nu(x, s) \le \nu_{UB} < \infty$. We assume that the form (1) is strongly monotone on $V$ with monotonicity constant $m_a > 0$, i.e.
$$a[u](u, u - v) - a[v](v, u - v) \ge m_a \|u - v\|_V^2 \quad \text{for all } u, v \in V, \qquad (3)$$
and Lipschitz continuous on $V$ with Lipschitz constant $L > 0$, i.e.
$$|a[u](u, w) - a[v](v, w)| \le L \|u - v\|_V \|w\|_V \quad \text{for all } u, v, w \in V. \qquad (4)$$
We note that if the coefficient $\nu(x, s)$ is not constant in $s$, then the differential operator need not be monotone (see [15] for a counterexample). If the nonlinearity depends, for example, on the gradient norm of the solution, i.e. $\nu(x, u) := \nu(x, |\nabla u|)$, then the strong monotonicity of the mapping $\nu(|\cdot|)\,\cdot : \mathbb{R}^d \to \mathbb{R}^d$ implies the strong monotonicity (3) of the quasilinear form with monotonicity constant $m_a = \nu_{LB}$. For further details see, e.g., [16] and the numerical example (39).
Then we consider the quasilinear parabolic initial value problem of finding $u(t) \in V$, $t \in I$ a.e., such that
$$\dot u(t) + A[u(t)]\,u(t) = f_o(t) \ \text{in } V', \qquad u(0) = u_o, \qquad (5)$$
where $u_o \in H$ is the initial condition, $\dot u := \frac{\partial u}{\partial t}$, and $f_o \in L^2(I; V')$.
We now define a space-time variational formulation of (5). We use as the trial space
$$\mathcal{X} := \{ w \in L^2(I; V) : \dot w \in L^2(I; V') \}.$$
The norm on $\mathcal{X}$ is the graph norm $\|w\|_{\mathcal{X}} := \|\dot w\|_{L^2(I; V')} + \|w\|_{L^2(I; V)}$. The test space is $\mathcal{Y} := L^2(I; V) \times H$ with elements $v = (v^{(1)}, v^{(2)})$. Coupling $L^2(I; V) \times H$ in the definition of $\mathcal{Y}$ is used to enforce the initial condition in a weak form. We then have the following weak formulation of the "truth" problem: find $u \in \mathcal{X}$ such that
$$B(u, v) = F(v) \quad \text{for all } v \in \mathcal{Y}, \qquad (6)$$
where
$$B(u, v) := \int_I \langle \dot u(t), v^{(1)}(t) \rangle_{V' \times V} + a[u(t)](u(t), v^{(1)}(t)) \, dt + \langle u(0), v^{(2)} \rangle_H,$$
$$F(v) := \int_I \langle f_o(t), v^{(1)}(t) \rangle_{V' \times V} \, dt + \langle u_o, v^{(2)} \rangle_H.$$
We note that (3) implies coercivity of the quasilinear form $a[\cdot](\cdot, \cdot)$ and (4) implies hemicontinuity. Altogether, the well-posedness of problem (6) follows (see [17], Theorem 30.A).

Petrov-Galerkin Truth Approximation
For our temporal discretization of (6) we partition the interval $I \equiv (0, T]$ into $K$ nonoverlapping subintervals $I_k := (t^{k-1}, t^k]$, $k = 1, \dots, K$, with $0 = t^0 < t^1 < \dots < t^K = T$ and time step $\Delta t$. We write $u_\delta \in \mathcal{X}_\delta$ for the discrete approximation of $u \in \mathcal{X}$, where the subscript $\delta = (\Delta t, h)$ indicates that the discrete spaces depend on both the temporal and spatial meshes of high resolution.
For our purposes we introduce the discrete trial space $\mathcal{X}_\delta \subset \mathcal{X}$ of functions that are piecewise linear in time with values in a conforming finite element space $V_h \subset V$, and the discrete test space $\mathcal{Y}_\delta \subset \mathcal{Y}$ of functions that are piecewise constant in time with values in $V_h$, paired with $V_h$ for the weak initial condition. With this choice of spaces the fully discrete "truth" approximation problem reads: find $u_\delta \in \mathcal{X}_\delta$ such that
$$B(u_\delta, v_\delta) = F(v_\delta) \quad \text{for all } v_\delta \in \mathcal{Y}_\delta. \qquad (9)$$
We will show that the Petrov-Galerkin space-time discrete formulation (9) can be interpreted as a (nonlinear) Crank-Nicolson scheme. Indeed, since the test space $\mathcal{Y}_\delta$ consists of piecewise constant polynomials in time, the problem can be solved via a time-marching procedure: for $k = 1, \dots, K$ find $u_\delta|_{I_k}$ such that
$$\int_{I_k} \langle \dot u_\delta(t), v_h \rangle + a[u_\delta(t)](u_\delta(t), v_h) \, dt = \int_{I_k} \langle f_o(t), v_h \rangle \, dt \quad \text{for all } v_h \in V_h. \qquad (10)$$
Since the trial space $\mathcal{X}_\delta$ consists of piecewise linear polynomials in time with the values $u_\delta^k := u_\delta(t^k)$ and $u_\delta^{k-1} := u_\delta(t^{k-1})$, we can represent $u_\delta$ on $I_k$ as the linear function
$$u_\delta(t) = u_\delta^{k-1} + \frac{t - t^{k-1}}{\Delta t} \left( u_\delta^k - u_\delta^{k-1} \right), \quad t \in I_k. \qquad (11)$$
Inserting the linear ansatz (11) into (10), one obtains
$$\langle u_\delta^k - u_\delta^{k-1}, v_h \rangle_H + \int_{I_k} a[u_\delta(t)](u_\delta(t), v_h) \, dt = \int_{I_k} \langle f_o(t), v_h \rangle \, dt. \qquad (12)$$
We test (12) against the basis functions of $V_h$ and apply the trapezoidal quadrature rule to the integrals. In this way we derive the standard Crank-Nicolson time-stepping scheme, which for $k = 1, \dots, K$ reads
$$\langle u_\delta^k - u_\delta^{k-1}, v_h \rangle_H + \frac{\Delta t}{2} \left( a[u_\delta^k](u_\delta^k, v_h) + a[u_\delta^{k-1}](u_\delta^{k-1}, v_h) \right) = \frac{\Delta t}{2} \langle f_o(t^k) + f_o(t^{k-1}), v_h \rangle, \qquad (13)$$
where the initial condition $u_\delta^0$ is obtained as an $H$-projection of $u_0$ onto $V_h$. Given the finite element ansatz, each time step leads to a system of nonlinear algebraic equations
$$\left( \mathbf{M} + \frac{\Delta t}{2} \mathbf{A}(\mathbf{u}^k) \right) \mathbf{u}^k = \left( \mathbf{M} - \frac{\Delta t}{2} \mathbf{A}(\mathbf{u}^{k-1}) \right) \mathbf{u}^{k-1} + \mathbf{f}^k, \qquad (14)$$
where $\mathbf{M}$ and $\mathbf{A}(\cdot)$ are the finite element mass and nonlinear stiffness matrices. The right-hand side $\mathbf{f}^k$ collects the trapezoidal approximation of the source term. The nonlinear algebraic equations (14) are then solved by applying Newton's method, i.e., by finding the root of
$$\mathbf{G}(\mathbf{u}^k) := \left( \mathbf{M} + \frac{\Delta t}{2} \mathbf{A}(\mathbf{u}^k) \right) \mathbf{u}^k - \left( \mathbf{M} - \frac{\Delta t}{2} \mathbf{A}(\mathbf{u}^{k-1}) \right) \mathbf{u}^{k-1} - \mathbf{f}^k. \qquad (15)$$
The Newton iteration then reads: starting with $\mathbf{u}_\delta^{k,(0)}$, for $z = 0, 1, \dots$ solve the linear equation
$$\mathbf{J}(\mathbf{u}_\delta^{k,(z)}) \, \delta\mathbf{u}_\delta^{k,(z)} = -\mathbf{G}(\mathbf{u}_\delta^{k,(z)}) \qquad (16)$$
to obtain $\delta\mathbf{u}_\delta^{k,(z)}$ and update the solution $\mathbf{u}_\delta^{k,(z+1)} = \mathbf{u}_\delta^{k,(z)} + \delta\mathbf{u}_\delta^{k,(z)}$. The system Jacobian matrix is given by
$$\mathbf{J}(\mathbf{u}) = \mathbf{M} + \frac{\Delta t}{2} \mathbf{A}'(\mathbf{u}), \qquad (17)$$
where $\mathbf{A}'(\mathbf{u})$ is the finite element representation of the Fréchet derivative $A'(u) : V \to V'$ of the nonlinear operator $A$.
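To make the time-marching structure concrete, the following is a minimal sketch of one Crank-Nicolson step with a Newton update for a 1-D instance of the scheme above. It is an illustrative implementation, not the authors' code: we assume a uniform P1 mesh on (0, 1) with homogeneous Dirichlet conditions, the gradient-dependent coefficient $\nu(s; \mu) = \exp(\mu s^2) + 1$ of the numerical example, and dense linear algebra for brevity.

```python
import numpy as np

# Sketch of the (nonlinear) Crank-Nicolson time stepping with Newton's method
# for a 1-D quasilinear heat equation u_t - (nu(|u_x|) u_x)_x = f with
# homogeneous Dirichlet conditions; nu, mesh and solver sizes are
# illustrative choices, not taken from the paper.

def nu(s, mu):            # nonlinear coefficient, cf. the numerical example
    return np.exp(mu * s**2) + 1.0

def dnu(s, mu):           # derivative nu'(s)
    return 2.0 * mu * s * np.exp(mu * s**2)

def fem_operators(u, mu, h):
    """Internal force A(u)u and tangent (Jacobian) matrix for P1 elements."""
    n = u.size                                 # interior nodes
    ue = np.concatenate(([0.0], u, [0.0]))     # Dirichlet padding
    g = np.diff(ue) / h                        # elementwise gradients
    flux = nu(np.abs(g), mu) * g               # nonlinear flux nu(|g|) g
    tang = nu(np.abs(g), mu) + dnu(np.abs(g), mu) * np.abs(g)
    Au = flux[:-1] - flux[1:]                  # (A(u)u)_i = flux_i - flux_{i+1}
    J = np.zeros((n, n))                       # assemble tridiagonal tangent
    for e in range(n + 1):                     # element e couples dofs e-1, e
        k = tang[e] / h
        for a_ in (e - 1, e):
            for b_ in (e - 1, e):
                if 0 <= a_ < n and 0 <= b_ < n:
                    J[a_, b_] += k if a_ == b_ else -k
    return Au, J

def crank_nicolson_step(u_old, f_old, f_new, dt, mu, h, M, tol=1e-8):
    """One CN step: M(u - u_old) + dt/2 (A(u)u + A(u_old)u_old) = dt/2 (f + f_old)."""
    Au_old, _ = fem_operators(u_old, mu, h)
    rhs_const = M @ u_old - 0.5 * dt * Au_old + 0.5 * dt * (f_old + f_new)
    u = u_old.copy()
    for _ in range(30):                        # Newton iteration
        Au, J = fem_operators(u, mu, h)
        G = M @ u + 0.5 * dt * Au - rhs_const
        if np.linalg.norm(G) < tol:
            break
        du = np.linalg.solve(M + 0.5 * dt * J, -G)
        u = u + du
    return u
```

Here the mass matrix `M` is the standard P1 tridiagonal matrix $\frac{h}{6}\,\mathrm{tridiag}(1, 4, 1)$; marching `crank_nicolson_step` over $k = 1, \dots, K$ reproduces the scheme (13)-(17) in this simplified setting.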

The Reduced Basis method

Reduced basis approximation
Let $\mathcal{X}, \mathcal{Y}$ be the trial and test Bochner spaces and let $\mathcal{D} \subset \mathbb{R}^p$ be the parameter domain. We consider parametric forms $B : \mathcal{X} \times \mathcal{Y} \times \mathcal{D} \to \mathbb{R}$ and $F : \mathcal{Y} \times \mathcal{D} \to \mathbb{R}$. The parametrized weak formulation of problem (6) then reads: find $u := u(\mu) \in \mathcal{X}$ such that
$$B(u, v; \mu) = F(v; \mu) \quad \text{for all } v \in \mathcal{Y}.$$
Let $V_N := \mathrm{span}\{\xi_1, \dots, \xi_N\} \subset V_h$ be a reduced-basis space provided by the POD-Greedy procedure (see, e.g., [10] and Algorithm 1 below). Then we introduce the corresponding reduced trial space $\mathcal{X}_{\Delta t, N}$ of functions that are piecewise linear in time with values in $V_N$, and analogously the reduced test space $\mathcal{Y}_{\Delta t, N}$. In our setting, the POD-Greedy algorithm iteratively constructs nested (Lagrangian) spaces $V_n$, $1 \le n \le N$, using an a-posteriori error estimator $\Delta(\mathcal{Y}; \mu)$ (see the next section for details on the a-posteriori error analysis), which predicts the expected approximation error for a given parameter $\mu$ in the space $\mathcal{Y} := \mathcal{Y}_{\Delta t, n}$. We require the expected approximation error to be less than the prescribed tolerance $\varepsilon_{RB}$. We initialize the algorithm with the choice of the first basis vector $\xi_1 := u_\delta^0 / \|u_\delta^0\|_V$, the normalized initial condition projected onto the finite element mesh. The snapshots $u_\delta(\mu)$ for the procedure are provided by the parametrized "truth" approximation (9). In each iteration $n$, Algorithm 1 performs the greedy selection
$$[\varepsilon_n, \mu_n] \leftarrow \arg\max_{\mu \in \mathcal{D}_{train}} \Delta(\mathcal{Y}_{\Delta t, n-1}, \mu),$$
computes the projection error trajectory $e_n^k := u_\delta^k(\mu_n) - P_{V_n} u_\delta^k(\mu_n)$, $k = 1, \dots, K$, and enriches the basis with the dominant POD mode of $\{e_n^k\}_{k=1}^K$. The EIM algorithm is initialized with an arbitrarily chosen sample point $(\mu_{\nu_1}, k_{M_1}) \in \mathcal{D} \times I$, and the associated quantities are then computed as in [11,12], yielding an affine (parameter-separable) approximation of the nonlinear form.
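The POD-Greedy loop just described can be sketched as follows. This is an illustrative outline, not the authors' implementation: the Euclidean inner product stands in for $\langle \cdot, \cdot \rangle_V$, and `truth_solve` and `error_estimator` are hypothetical callables standing in for the truth solver (9) and the estimator $\Delta$.

```python
import numpy as np

# Sketch of the POD-Greedy basis construction: at each step the parameter
# with the largest error estimate is selected, the projection error
# trajectory of its truth snapshot is compressed by POD, and the dominant
# mode enriches the basis.

def pod_greedy(train_set, truth_solve, error_estimator, xi1, eps_rb, n_max):
    basis = [xi1 / np.linalg.norm(xi1)]          # xi_1: normalized initial condition
    for _ in range(n_max):
        V = np.column_stack(basis)
        # greedy step: worst parameter over the training set
        errs = [error_estimator(mu, V) for mu in train_set]
        n_star = int(np.argmax(errs))
        if errs[n_star] < eps_rb:
            break
        U = truth_solve(train_set[n_star])       # snapshot trajectory, shape (N_h, K)
        E = U - V @ (V.T @ U)                    # projection error (orthonormal V)
        # POD: dominant left singular vector of the error trajectory
        zeta, _, _ = np.linalg.svd(E, full_matrices=False)
        new = zeta[:, 0]
        new = new - V @ (V.T @ new)              # re-orthonormalize for stability
        basis.append(new / np.linalg.norm(new))
    return np.column_stack(basis)
```

The returned columns form an orthonormal basis of $V_N$; in the actual method the orthonormalization and the projection $P_{V_n}$ are carried out in the $V$-inner product.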
Similarly, we obtain the affine decomposition for the right-hand side. We note that in the numerical scheme we use the reduced-basis solution $u_{N,M}^k$ as input for the interpolation scheme, assuming that $u_{N,M}^k$ approximates $u_\delta^k$ sufficiently well.
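A compact sketch of the EIM greedy construction and its online interpolation step may be helpful at this point. It is illustrative only: the snapshots are nodal vectors of the nonlinear coefficient sampled over a parameter-time training grid, and the max-norm is used for the point selection as in [11,12].

```python
import numpy as np

# Sketch of the Empirical Interpolation Method: snapshots of the nonlinear
# coefficient over the training grid are compressed into interpolation basis
# functions Q and "magic points" pts; online, the nonlinearity is evaluated
# only at the magic points.

def eim(snapshots, m_max, tol=1e-10):
    """snapshots: list of nodal vectors g(mu_i, t_k); returns (Q, pts)."""
    S = [s.astype(float) for s in snapshots]
    s0 = max(S, key=lambda s: np.abs(s).max())   # largest snapshot initializes
    p0 = int(np.argmax(np.abs(s0)))
    Q, pts = [s0 / s0[p0]], [p0]
    for _ in range(m_max - 1):
        B = np.column_stack(Q)

        def residual(s):                          # interpolation residual
            coef = np.linalg.solve(B[pts, :], s[pts])
            return s - B @ coef

        res = [residual(s) for s in S]
        worst = max(res, key=lambda r: np.abs(r).max())
        if np.abs(worst).max() < tol:
            break
        p = int(np.argmax(np.abs(worst)))         # next magic point
        Q.append(worst / worst[p])
        pts.append(p)
    return np.column_stack(Q), pts

def eim_interpolate(Q, pts, s):
    """Online: evaluate s only at the magic points and expand in the basis."""
    coef = np.linalg.solve(Q[pts, :], s[pts])
    return Q @ coef
```

By construction the interpolation matrix `Q[pts, :]` is unit lower triangular, so the online solve is cheap and well conditioned in practice.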
The reduced-basis approximation of problem (9) then reads: find $u_{N,M} := u_{N,M}(\mu) \in \mathcal{X}_{\Delta t, N}$ such that
$$B_M(u_{N,M}, v_N; \mu) = F(v_N; \mu) \quad \text{for all } v_N \in \mathcal{Y}_{\Delta t, N},$$
where $B_M$ denotes the form with the EIM-approximated nonlinearity. As for the truth problem, the reduced scheme admits a Crank-Nicolson time-marching interpretation, and the reduced counterpart of the Jacobian (17), required for Newton's method, follows analogously. With the EI-approximation of the nonlinearity, the resulting matrices inherit the parameter-separable structure. The proposed reduced numerical scheme thus contains parameter-separable matrices and allows for an efficient offline-online computational procedure.
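The offline-online principle behind the parameter-separable matrices can be illustrated as follows. This is a simplified linearized sketch, not the full scheme: in the actual method the EIM coefficients $\theta_q$ are re-evaluated in every Newton iteration, and the source term enters through the trapezoidal rule.

```python
import numpy as np

# Illustrative online assembly of one reduced Crank-Nicolson step from
# parameter-separable components: the small N x N matrices A_parts[q] are
# precomputed offline, so the online cost is independent of the finite
# element dimension N_h.

def assemble_reduced_system(A_parts, thetas, M_N, dt, u_old_N, b_N):
    """A_parts: Q precomputed reduced matrices; thetas: theta_q(mu) evaluated
    online; M_N: reduced mass matrix; b_N: reduced load. Returns the system
    matrix and right-hand side of a (linearized) CN step."""
    A_mu = sum(th * Aq for th, Aq in zip(thetas, A_parts))
    lhs = M_N + 0.5 * dt * A_mu
    rhs = (M_N - 0.5 * dt * A_mu) @ u_old_N + dt * b_N
    return lhs, rhs
```

Solving `lhs @ u_new = rhs` advances the reduced state; with $Q$ affine terms the assembly costs $O(QN^2)$ and the solve $O(N^3)$, independently of $N_h$.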

Reduced basis certification
An important ingredient of the reduced basis methodology is the verification of the error (certification of the reduced basis method). For this purpose we provide an a-posteriori error bound based on the residual, which allows a quick evaluation. We denote by $R_M(\cdot; \mu) \in \mathcal{Y}'$ the residual of the problem, defined naturally as
$$R_M(v; \mu) := F(v; \mu) - B_M(u_{N,M}(\mu), v; \mu), \quad v \in \mathcal{Y}. \qquad (29)$$
We have the following error bound. Assume that $u_0 \in V_N$. Then the error $e(\mu) = u(\mu) - u_{N,M}(\mu)$ of the reduced basis approximation is bounded by
$$\|e(\mu)\|_{L^2(I; V)} \le \frac{1}{m_a} \left( \|R_M(\cdot; \mu)\|_{\mathcal{Y}'} + \varepsilon_{EIM}(\mu) \right), \qquad (30)$$
with $\varepsilon_{EIM}(\mu)$ the approximation error of the nonlinearity.
Proof. Since in the case $e = 0$ there is nothing to show, we assume that $e \ne 0$. We have $\|e\|_{\mathcal{Y}} = \|e\|_{L^2(I; V)}$ by the assumption $u_0 \in V_N$. Using the strong monotonicity (3), the definition of the residual (29) and the EIM approximation error of the nonlinearity, we obtain the estimate
$$m_a \|e\|_{\mathcal{Y}}^2 \le B(u, e; \mu) - B(u_{N,M}, e; \mu) \le \left( \|R_M(\cdot; \mu)\|_{\mathcal{Y}'} + \varepsilon_{EIM}(\mu) \right) \|e\|_{\mathcal{Y}}. \qquad (31)$$
Dividing both sides by $\|e\|_{\mathcal{Y}}$ yields the result.
The computation of the dual norm of the residual $\|R_M(\cdot; \mu)\|_{\mathcal{Y}'}$ in (30) requires the knowledge of its Riesz representation $v_{\delta,R}(\mu) \in \mathcal{Y}_\delta$. Thanks to the Riesz representation theorem, on the discrete level it can be found from the equation
$$\langle v_{\delta,R}(\mu), v_\delta \rangle_{\mathcal{Y}} = R_M(v_\delta; \mu) \quad \text{for all } v_\delta \in \mathcal{Y}_\delta. \qquad (32)$$
Since the test space $\mathcal{Y}_\delta$ consists of piecewise constant polynomials in time, the problem (32) can be solved via the time-marching procedure for $k = 1, \dots, K$:
$$\int_{I_k} \langle v_R^k(\mu), v_h \rangle_V \, dt = \int_{I_k} R_M(v_h; \mu) \, dt \quad \text{for all } v_h \in V_h. \qquad (33)$$
We note that $v_R^k(\mu) := v_{\delta,R}(\mu)|_{I_k}$ is constant in time, hence the integration on the left-hand side of (33) is exact. For the right-hand side of (33) we represent $u_{N,M}(\mu) \in \mathcal{X}_{\Delta t, N}$ as the linear function (11) on $I_k$ and use it as an input for the residual (29). We then apply the trapezoidal quadrature rule for the approximate evaluation of the integral. We thus need to solve the following problems:
$$\Delta t \, \langle v_R^k(\mu), v_h \rangle_V = r^k(v_h; \mu) \quad \text{for all } v_h \in V_h, \qquad (34)$$
where the right-hand side is given by
$$r^k(v_h; \mu) := \frac{\Delta t}{2} \left( R_M(v_h; \mu)\big|_{t^{k-1}} + R_M(v_h; \mu)\big|_{t^k} \right). \qquad (35)$$
Therefore the computation of the Riesz representation leads to a sequence of $K$ uncoupled spatial problems in $V_h$. Moreover, the parameter-separability structure of the residual is transferred, thanks to the linearity of the Riesz isomorphism, to the parameter-separability of its Riesz representation $v_R^k(\mu)$. Therefore we have
$$v_R^k(\mu) = \sum_{q=1}^{Q} \theta_q^k(\mu) \, v_R^{k,q}, \qquad (36)$$
with parameter-independent components $v_R^{k,q} \in V_h$ and coefficient functions $\theta_q^k(\mu)$. Finally we state the formula for the residual norm as well as for the evaluation of the spatio-temporal norm of the solution.
Define the following parameter-independent matrices, assembled once in the offline phase:
$$\mathbf{G}^k_{q,q'} := \langle v_R^{k,q}, v_R^{k,q'} \rangle_V, \quad 1 \le q, q' \le Q, \qquad \mathbf{K}_{n,n'} := \langle \xi_n, \xi_{n'} \rangle_V, \quad 1 \le n, n' \le N.$$
Then for a given $\mu \in \mathcal{D}$, the norm of the residual $\|R_M(\cdot; \mu)\|_{\mathcal{Y}'}$ can be computed efficiently in the online phase via
$$\|R_M(\cdot; \mu)\|_{\mathcal{Y}'}^2 = \sum_{k=1}^{K} \Delta t \sum_{q,q'=1}^{Q} \theta_q^k(\mu) \, \theta_{q'}^k(\mu) \, \mathbf{G}^k_{q,q'}, \qquad (37)$$
and similarly the spatio-temporal norm of the reduced solution via
$$\|u_{N,M}(\mu)\|_{L^2(I; V)}^2 = \int_I \mathbf{u}_{N,M}(t; \mu)^T \mathbf{K} \, \mathbf{u}_{N,M}(t; \mu) \, dt. \qquad (38)$$
Proof.
Since $v_{\delta,R}(\mu)|_{I_k}$ is constant in time, the integration on $I_k$ is exact and we can compute the spatio-temporal norm of $v_{\delta,R}(\mu)$ as follows:
$$\|v_{\delta,R}(\mu)\|_{\mathcal{Y}}^2 = \sum_{k=1}^{K} \Delta t \, \|v_R^k(\mu)\|_V^2.$$
The isometry of the Riesz isomorphism implies $\|R_M(\cdot; \mu)\|_{\mathcal{Y}'} = \|v_{\delta,R}(\mu)\|_{\mathcal{Y}}$, and inserting the expansion (36) yields (37). Since $u_{N,M}(\mu)$ is piecewise linear in time with values in $V_N$, the integral $\int_I \|u_{N,M}(t; \mu)\|_V^2 \, dt$ can be evaluated exactly, subinterval by subinterval, from the reduced coefficients and the matrix $\mathbf{K}$, which yields (38).
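The offline-online splitting behind the proposition can be sketched as follows. This is illustrative only: `inner` plays the role of the $V$-inner-product matrix, and the small Gram matrices are assembled once in the offline phase.

```python
import numpy as np

# Sketch of the offline-online evaluation of the residual dual norm: with the
# parameter-separable Riesz representatives v_R^{k,q}, the norm on each time
# slab reduces to a small quadratic form in the coefficients theta_q(mu).

def offline_gram(v_parts, inner):
    """v_parts: list of Q Riesz representatives (nodal vectors) for one time
    slab; inner: V-inner-product matrix. Returns the Q x Q Gram matrix."""
    Q = len(v_parts)
    G = np.empty((Q, Q))
    for q in range(Q):
        for p in range(Q):
            G[q, p] = v_parts[q] @ inner @ v_parts[p]
    return G

def online_residual_norm(grams, thetas, dt):
    """grams[k]: Gram matrix of slab k; thetas[k]: theta_q(mu) coefficients.
    Returns sqrt(sum_k dt * theta_k^T G_k theta_k), cf. (37)."""
    total = sum(dt * th @ G @ th for G, th in zip(grams, thetas))
    return np.sqrt(total)
```

The online cost of `online_residual_norm` is $O(K Q^2)$, independent of the finite element dimension.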
We note that our a-posteriori error bound takes into account both the error of the reduced-basis approximation (31) and the error of the nonlinearity approximation. In practice it is computed with the reduced-basis solution on $\Omega_h \subset \Omega$, an appropriate discretization of the domain $\Omega$.

Numerical results
Set $f(x, t) := \sin(2\pi x)\sin(2\pi t) + \cos(2\pi x)\cos(4\pi t)$ and $\nu(s; \mu) = \exp(\mu s^2) + 1$. The form induced by the spatial nonlinear differential operator is then given by
$$a[w](u, v; \mu) = \int_\Omega \nu(|\nabla w|; \mu) \, \nabla u \cdot \nabla v \, dx. \qquad (41)$$
It is possible to show that if the function $g(s; \mu) = \nu(s; \mu)\, s$ is strongly monotone, then (41) satisfies the strong monotonicity condition (3) (see, e.g., [16]). Furthermore, the monotonicity constant is then given by $m_a = \inf_{\mu \in \mathcal{D}} \inf_{s \in \mathbb{R}^+} \nu(s; \mu)$, hence we have $m_a = 2$ for our problem. For the discretization of the weak form (9) of (39) we divide both $\Omega$ and $I$ into 100 and 500 subintervals, respectively, i.e., $N_h = 99$ and $K = 500$. The space $V_h$ consists of piecewise linear, continuous finite elements, and the trial and test spaces are chosen as described in section 2.2. We then solve the problem with the Crank-Nicolson scheme, applying Newton's method at each time step.
We iterate until the norm of the residual falls below the tolerance level, which we set to $10^{-8}$; the same tolerance is used for the RB Newton's method.
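The claim $m_a = 2$ can be checked numerically with a few lines (an illustrative sanity check only; the parameter range $\mu \in [0, 1]$ and the sampling grid are assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative sanity check that g(s; mu) = nu(s; mu) s with
# nu(s; mu) = exp(mu s^2) + 1 is strongly monotone with constant
# m_a = inf nu = 2 for mu >= 0 (mu-range and grid are assumptions).

def nu(s, mu):
    return np.exp(mu * s**2) + 1.0

def monotonicity_quotient(mu, s_grid):
    """Smallest difference quotient (g(s) - g(t))(s - t) / (s - t)^2."""
    g = nu(s_grid, mu) * s_grid
    ds = s_grid[:, None] - s_grid[None, :]
    dg = g[:, None] - g[None, :]
    mask = np.abs(ds) > 1e-12          # exclude the diagonal s = t
    return (dg[mask] * ds[mask] / ds[mask]**2).min()

s = np.linspace(-2.0, 2.0, 201)
q = min(monotonicity_quotient(mu, s) for mu in np.linspace(0.0, 1.0, 5))
print(q)  # close to 2, the claimed monotonicity constant
```

Since $g'(s; \mu) = \exp(\mu s^2)(1 + 2\mu s^2) + 1 \ge 2$ for $\mu \ge 0$, every difference quotient is at least 2, which the check confirms on the grid.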
We generate the RB-EIM model as follows: we start from the training set $\mathcal{D}_{train}$. The strategy is to balance the two contributions in the error bound for the specified tolerance level $\varepsilon_{RB}$; the corresponding convergence behaviour is plotted in fig. 2(a). Table 1 reports the effectivities of the estimator. We can see that the effectivities are bounded below by 1 and are not too large; thus the error estimator is reliable and there is no significant overestimation of the true error. We then plot the values of the estimator and the true error for $N_{max} = 7$, $M_{max} = 8$ for every parameter $\mu \in \mathcal{D}_{test}$ in fig. 2(b). Although the problem at hand is merely chosen to illustrate the methodology, we report the average CPU time comparison. The finite element method takes $\approx 0.47$ s to obtain the solution, while the RB method ($N_{max} = 7$, $M_{max} = 8$) takes $\approx 0.14$ s, resulting in a speedup factor of $\approx 3.35$.