1 Motivations and Objectives

The DGTD method is nowadays a very popular numerical method in the computational electromagnetics community. Most existing works are concerned with explicit DGTD methods relying on the use of a single global time step chosen so as to ensure the stability of the simulation. It is however well known that, when combined with an explicit time integration method and in the presence of an unstructured, locally refined mesh, a high order DGTD method suffers from a severe time step size restriction. An alternative approach that has been considered in [5, 7, 16] is to use a hybrid explicit-implicit (or locally implicit) time integration strategy. Such a strategy relies on a component splitting deduced from a partitioning of the mesh cells into two sets gathering respectively the coarse and the fine elements. The computational efficiency of this locally implicit DGTD method depends on the size of the set of fine elements, which directly influences the size of the sparse part of the matrix system to be solved at each time step. Therefore, an approach for reducing the size of the subsystem of globally coupled (i.e. implicit) unknowns is worth considering if one wants to solve very large-scale problems.

A particularly appealing solution in this context is given by the concept of hybridizable discontinuous Galerkin (HDG) method. The HDG method was first introduced by Cockburn et al. in [4] for a model elliptic problem and has been subsequently developed for a variety of PDE systems in continuum mechanics [13]. The essential ingredients of a HDG method are a local Galerkin projection of the underlying system of PDEs at the element level onto spaces of polynomials to parameterize the numerical solution in terms of the numerical trace; a judicious choice of the numerical flux to provide stability and consistency; and a global jump condition that enforces the continuity of the numerical flux to arrive at a global weak formulation in terms of the numerical trace. HDG methods are fully implicit and high-order accurate and, most importantly, they reduce the globally coupled unknowns to the approximate trace of the solution on element boundaries, thereby leading to a significant reduction in the number of degrees of freedom. HDG methods for the system of time-harmonic Maxwell equations have been proposed in [9, 10, 14]. So far, we have only developed an implicit HDG method for the time-domain Maxwell equations [3]. In view of devising a hybrid explicit-implicit HDG method, a preliminary step is therefore to elaborate on the principles of a fully explicit HDG formulation. Fully explicit HDG methods have been studied recently for the acoustic wave equation by Kronbichler et al. [8] and Stanglmeier et al. [15]. In [15] the authors present a fully explicit HDG method that is high order accurate in both space and time. In this paper we outline the formulation of this explicit HDG time-domain (HDGTD) method and present numerical results, including a preliminary assessment of its superconvergence properties. We adopt a low storage Runge-Kutta scheme [2] for the time integration of the semi-discrete HDG equations. This work is a first step towards the construction of a hybrid explicit-implicit HDG method for time-domain electromagnetics.
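Since the semi-discrete HDG equations form a system of ODEs \(dU/dt = L(U, t)\) for the vector \(U\) of elemental degrees of freedom, the low storage Runge-Kutta time integration mentioned above follows the usual 2N-storage update pattern. The following Python sketch only illustrates this pattern; the right-hand side operator rhs and the coefficient arrays A, B, C (to be taken from the scheme of [2]) are placeholders and are not part of the original text.

```python
import numpy as np

def lsrk_step(u, t, dt, rhs, A, B, C):
    """One step of a 2N-storage (low-storage) Runge-Kutta scheme.

    u       : vector of semi-discrete unknowns (degrees of freedom of E_h, H_h)
    rhs     : callable returning dU/dt = L(U, t) for the spatial HDG operator
    A, B, C : stage coefficients of the chosen low-storage scheme
    """
    k = np.zeros_like(u)                          # single extra storage register
    for a_i, b_i, c_i in zip(A, B, C):
        k = a_i * k + dt * rhs(u, t + c_i * dt)   # update the register
        u = u + b_i * k                           # update the solution
    return u
```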

2 Problem Statement and Notations

We consider the system of 3D time-domain Maxwell equations on a bounded polyhedral domain \(\varOmega \subset \mathbb{R}^3\):

$$\displaystyle \begin{aligned} \varepsilon\,\partial_t \mathbf{E} - \nabla\times\mathbf{H} &= -\mathbf{J}, \quad \text{in } \varOmega\times[0,T],\\ \mu\,\partial_t \mathbf{H} + \nabla\times\mathbf{E} &= 0, \quad \text{in } \varOmega\times[0,T], \end{aligned} $$
(1)

where \(\partial_t\) denotes the time derivative, \(\mathbf{J}\) is the current density, \(T\) is a final time, and \(\mathbf{E}(\mathbf{x}, t)\) and \(\mathbf{H}(\mathbf{x}, t)\) are the electric and magnetic fields. The dielectric permittivity \(\varepsilon\) and the magnetic permeability \(\mu\) vary in space, are time-invariant and are both positive functions. The boundary of \(\varOmega\) is defined as \(\partial\varOmega = \varGamma_m \cup \varGamma_a\) with \(\varGamma_m \cap \varGamma_a = \emptyset\). The boundary conditions are chosen as

$$\displaystyle \begin{aligned} \mathbf{n}\times\mathbf{E} &= 0, \quad \text{on } \varGamma_m,\\ \mathbf{n}\times\mathbf{E} + \sqrt{\mu/\varepsilon}\,\mathbf{n}\times(\mathbf{n}\times\mathbf{H}) &= \mathbf{n}\times\mathbf{E}^{\mathrm{inc}} + \sqrt{\mu/\varepsilon}\,\mathbf{n}\times(\mathbf{n}\times\mathbf{H}^{\mathrm{inc}}) =: \mathbf{g}^{\mathrm{inc}}, \quad \text{on } \varGamma_a. \end{aligned} $$
(2)

Here \(\mathbf{n}\) denotes the unit outward normal to \(\partial\varOmega\) and \((\mathbf{E}^{\mathrm{inc}}, \mathbf{H}^{\mathrm{inc}})\) a given incident field. The first boundary condition is often referred to as a metallic boundary condition and is applied on a perfectly conducting surface. The second relation is an absorbing boundary condition and takes here the form of the Silver-Müller condition. It is applied on a surface corresponding to an artificial truncation of a theoretically unbounded propagation domain. Finally, the system is supplemented with initial conditions: \(\mathbf{E}_0(\mathbf{x}) = \mathbf{E}(\mathbf{x}, 0)\) and \(\mathbf{H}_0(\mathbf{x}) = \mathbf{H}(\mathbf{x}, 0)\). For the sake of simplicity, we omit the volume source term \(\mathbf{J}\) in what follows.

We introduce now the notations and approximation spaces. We first consider a partition \(\mathcal{T}_h\) of \(\varOmega\) into a set of tetrahedra. Each non-empty intersection of two elements \(K^+\) and \(K^-\) is called an interface. We denote by \(\mathcal{F}_h^I\) the union of all interior interfaces of \(\mathcal{T}_h\), by \(\mathcal{F}_h^B\) the union of all boundary interfaces of \(\mathcal{T}_h\), and \(\mathcal{F}_h = \mathcal{F}_h^I \cup \mathcal{F}_h^B\). Note that \(\partial\mathcal{T}_h = \{\partial K : K \in \mathcal{T}_h\}\) represents all the element boundaries \(\partial K\) for all \(K \in \mathcal{T}_h\). As a result, an interior interface shared by two elements appears twice in \(\partial\mathcal{T}_h\), unlike in \(\mathcal{F}_h\) where it is counted once. For an interface \(F = \partial K^+ \cap \partial K^- \in \mathcal{F}_h^I\), let \(\mathbf{v}^\pm\) be the traces of \(\mathbf{v}\) on \(F\) from the interior of \(K^\pm\). On this interior face, we define mean values as \(\{\mathbf{v}\}_F = \frac{1}{2}(\mathbf{v}^+ + \mathbf{v}^-)\) and jumps as \(\llbracket \mathbf{v} \rrbracket_F = \mathbf{n}^+\times\mathbf{v}^+ + \mathbf{n}^-\times\mathbf{v}^-\), where the unit outward normal vector to \(K^\pm\) is denoted by \(\mathbf{n}^\pm\). For boundary faces these expressions are modified as \(\{\mathbf{v}\}_F = \mathbf{v}\) and \(\llbracket \mathbf{v} \rrbracket_F = \mathbf{n}\times\mathbf{v}\), since we assume \(\mathbf{v}\) is single-valued on the boundary.
In the following, we introduce the discontinuous finite element spaces and some basic operations on these spaces for later use. Let \(\mathbb{P}^{p_K}(K)\) denote the space of polynomial functions of degree at most \(p_K\) on the element \(K \in \mathcal{T}_h\). The discontinuous finite element space is introduced as

$$\displaystyle V_h = \left\{ \mathbf{v}\in [L^2(\varOmega)]^3 \,:\, \mathbf{v}|_K \in [\mathbb{P}^{p_K}(K)]^3,\ \forall K\in\mathcal{T}_h \right\}, $$
(3)

where L 2(Ω) is the space of square integrable functions on the domain Ω. The functions in V h are continuous inside each element and discontinuous across the interfaces between elements. In addition, we introduce a traced finite element space

$$\displaystyle M_h = \left\{ \boldsymbol{\eta}\in [L^2(\mathcal{F}_h)]^3 \,:\, \boldsymbol{\eta}|_F \in [\mathbb{P}^{p_F}(F)]^3,\ (\boldsymbol{\eta}\cdot\mathbf{n})|_F = 0,\ \forall F\in\mathcal{F}_h \right\}. $$
(4)

For two vectorial functions \(\mathbf{u}\) and \(\mathbf{v}\) in \([L^2(D)]^3\), we denote \((\mathbf{u},\mathbf{v})_D = \int_D \mathbf{u}\cdot\mathbf{v}\,dx\) provided \(D\) is a domain of \(\mathbb{R}^3\), and we denote \(\langle \mathbf{u},\mathbf{v}\rangle_F = \int_F \mathbf{u}\cdot\mathbf{v}\,ds\) if \(F\) is a two-dimensional face. Accordingly, for the mesh \(\mathcal{T}_h\) we have

$$\displaystyle (\mathbf{u},\mathbf{v})_{\mathcal{T}_h} = \sum_{K\in\mathcal{T}_h}(\mathbf{u},\mathbf{v})_K, \qquad \langle\mathbf{u},\mathbf{v}\rangle_{\partial\mathcal{T}_h} = \sum_{K\in\mathcal{T}_h}\langle\mathbf{u},\mathbf{v}\rangle_{\partial K}, \qquad \langle\mathbf{u},\mathbf{v}\rangle_{\mathcal{F}_h} = \sum_{F\in\mathcal{F}_h}\langle\mathbf{u},\mathbf{v}\rangle_{F}. $$

We set \(\mathbf{v}^t = \mathbf{n}\times(\mathbf{v}\times\mathbf{n})\), where \(\mathbf{v}^t\) and \(\mathbf{v}^n\) are the tangential and normal components of \(\mathbf{v}\) such that \(\mathbf{v} = \mathbf{v}^t + \mathbf{v}^n\).
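As a small illustration of the decomposition above, the identity \(\mathbf{v}^t = \mathbf{n}\times(\mathbf{v}\times\mathbf{n}) = \mathbf{v} - (\mathbf{v}\cdot\mathbf{n})\,\mathbf{n}\) can be checked numerically; the snippet below is only an illustration and is not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)                 # unit normal vector
v = rng.standard_normal(3)             # arbitrary vector

v_t = np.cross(n, np.cross(v, n))      # tangential component n x (v x n)
v_n = np.dot(v, n) * n                 # normal component (v . n) n

assert np.allclose(v_t + v_n, v)       # v = v^t + v^n
assert np.isclose(np.dot(v_t, n), 0.0) # v^t is orthogonal to n
```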

3 Principles and Formulation of the HDG Method

Following the classical DG approach, approximate solutions \((\mathbf{E}_h, \mathbf{H}_h)\) are sought, for all \(t \in [0, T]\), in the space \(V_h \times V_h\), satisfying for all \(K\) in \(\mathcal{T}_h\) and for all \(\mathbf{v} \in V_h\)

$$\displaystyle \begin{aligned} (\varepsilon\,\partial_t\mathbf{E}_h, \mathbf{v})_K - (\nabla\times\mathbf{H}_h, \mathbf{v})_K &= 0,\\ (\mu\,\partial_t\mathbf{H}_h, \mathbf{v})_K + (\nabla\times\mathbf{E}_h, \mathbf{v})_K &= 0. \end{aligned} $$
(5)

Applying Green’s formula to both equations of (5) introduces boundary terms, which are replaced by numerical traces \(\widehat{\mathbf{E}}_h\) and \(\widehat{\mathbf{H}}_h\) in order to ensure the connection between element-wise solutions and the global consistency of the discretization. This leads, for all \(t \in [0, T]\), to the formulation

$$\displaystyle \begin{aligned} (\varepsilon\,\partial_t\mathbf{E}_h, \mathbf{v})_K - (\mathbf{H}_h, \nabla\times\mathbf{v})_K - \langle \mathbf{n}\times\widehat{\mathbf{H}}_h, \mathbf{v}\rangle_{\partial K} &= 0,\\ (\mu\,\partial_t\mathbf{H}_h, \mathbf{v})_K + (\mathbf{E}_h, \nabla\times\mathbf{v})_K + \langle \mathbf{n}\times\widehat{\mathbf{E}}_h, \mathbf{v}\rangle_{\partial K} &= 0. \end{aligned} $$
(6)

It is straightforward to verify that \(\mathbf{n}\times\mathbf{v} = \mathbf{n}\times\mathbf{v}^t\) and \(\langle \mathbf{H}, \mathbf{n}\times\mathbf{v}\rangle = -\langle \mathbf{n}\times\mathbf{H}, \mathbf{v}\rangle\). Therefore, using numerical traces defined in terms of the tangential components \(\widehat{\mathbf{E}}_h^t\) and \(\widehat{\mathbf{H}}_h^t\), we can rewrite (6) as

$$\displaystyle \begin{aligned} (\varepsilon\,\partial_t\mathbf{E}_h, \mathbf{v})_K - (\mathbf{H}_h, \nabla\times\mathbf{v})_K + \langle \widehat{\mathbf{H}}_h^t, \mathbf{n}\times\mathbf{v}\rangle_{\partial K} &= 0,\\ (\mu\,\partial_t\mathbf{H}_h, \mathbf{v})_K + (\mathbf{E}_h, \nabla\times\mathbf{v})_K - \langle \widehat{\mathbf{E}}_h^t, \mathbf{n}\times\mathbf{v}\rangle_{\partial K} &= 0. \end{aligned} $$
(7)
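The two identities invoked just before (7) follow pointwise from \(\mathbf{n}\times\mathbf{v}^n = \mathbf{0}\) and from the scalar triple product; for completeness,

$$\displaystyle \mathbf{H}\cdot(\mathbf{n}\times\mathbf{v}) = \det(\mathbf{H}, \mathbf{n}, \mathbf{v}) = -\det(\mathbf{v}, \mathbf{n}, \mathbf{H}) = -(\mathbf{n}\times\mathbf{H})\cdot\mathbf{v}, $$

so that integrating over \(\partial K\) gives \(\langle \mathbf{H}, \mathbf{n}\times\mathbf{v}\rangle_{\partial K} = -\langle \mathbf{n}\times\mathbf{H}, \mathbf{v}\rangle_{\partial K}\).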

The hybrid variable \(\varLambda_h\) introduced in the setting of a HDG method [4] is here defined for all the interfaces of \(\mathcal{F}_h\) as

$$\displaystyle \varLambda_h := \widehat{\mathbf{H}}_h^t \in M_h, \quad \text{on every face } F \in \mathcal{F}_h. $$
(8)

We want to determine the fields \(\mathbf{E}_h\) and \(\mathbf{H}_h\) in each element \(K\) of \(\mathcal{T}_h\) by solving system (7) and assuming that \(\varLambda_h\) is known on all the faces of an element \(K\). We consider a numerical trace \(\widehat{\mathbf{E}}_h^t\) for all \(K\) given by

$$\displaystyle \widehat{\mathbf{E}}_h^t = \mathbf{E}_h^t + \tau_K\, \mathbf{n}\times(\varLambda_h - \mathbf{H}_h^t), \quad \text{on } \partial K, $$
(9)

where \(\tau_K\) is a local stabilization parameter which is assumed to be strictly positive. We recall that \(\widehat{\mathbf{H}}_h^t = \varLambda_h\) on \(\partial K\) (cf. (8)). The definitions of the hybrid variable (8) and numerical trace (9) are exactly those adopted in the context of the formulation of HDG methods for the 3D time-harmonic Maxwell equations [10, 11, 12, 14].

Following the HDG approach, when the hybrid variable Λ h is known for all the faces of the element K, the electromagnetic field can be determined by solving the local system (7) using (8) and (9).

From now on, we will denote by \(\mathbf{g}_h^{\mathrm{inc}}\) the \(L^2\) projection of \(\mathbf{g}^{\mathrm{inc}}\) onto \(M_h\). Summing the contributions of (7) over all the elements and enforcing the continuity of the tangential component of \(\widehat{\mathbf{E}}_h\), we can formulate a problem which is to find \((\mathbf{E}_h, \mathbf{H}_h, \varLambda_h) \in V_h \times V_h \times M_h\) such that for all \(t \in [0, T]\)

$$\displaystyle \begin{aligned} (\varepsilon\,\partial_t\mathbf{E}_h, \mathbf{v})_{\mathcal{T}_h} - (\mathbf{H}_h, \nabla\times\mathbf{v})_{\mathcal{T}_h} + \langle \varLambda_h, \mathbf{n}\times\mathbf{v}\rangle_{\partial\mathcal{T}_h} &= 0, \quad \forall \mathbf{v}\in V_h,\\ (\mu\,\partial_t\mathbf{H}_h, \mathbf{v})_{\mathcal{T}_h} + (\mathbf{E}_h, \nabla\times\mathbf{v})_{\mathcal{T}_h} - \langle \widehat{\mathbf{E}}_h^t, \mathbf{n}\times\mathbf{v}\rangle_{\partial\mathcal{T}_h} &= 0, \quad \forall \mathbf{v}\in V_h,\\ \langle \llbracket \widehat{\mathbf{E}}_h^t \rrbracket, \boldsymbol{\eta}\rangle_{\mathcal{F}_h^I} + \langle \mathbf{n}\times\widehat{\mathbf{E}}_h^t, \boldsymbol{\eta}\rangle_{\varGamma_m} + \langle \mathbf{n}\times\widehat{\mathbf{E}}_h^t + \sqrt{\mu/\varepsilon}\,\mathbf{n}\times(\mathbf{n}\times\varLambda_h), \boldsymbol{\eta}\rangle_{\varGamma_a} &= \langle \mathbf{g}_h^{\mathrm{inc}}, \boldsymbol{\eta}\rangle_{\varGamma_a}, \quad \forall \boldsymbol{\eta}\in M_h, \end{aligned} $$
(10)

where the last equation is called the conservativity condition, with which we ask the tangential component of \(\widehat{\mathbf{E}}_h\) to be weakly continuous across any interface between two neighboring elements.

We now reformulate the system with numerical fluxes. We can deduce from the third equation of (10) that

$$\displaystyle \varLambda_h = \frac{\tau^{+}\,\mathbf{H}_h^{t,+} + \tau^{-}\,\mathbf{H}_h^{t,-} + \llbracket \mathbf{E}_h \rrbracket}{\tau^{+} + \tau^{-}}, \quad \text{on every interior face } F \in \mathcal{F}_h^I, \ \text{with } \tau^{\pm} := \tau_{K^{\pm}}. $$
(11)

By replacing (11) in (9), we obtain the numerical trace \(\widehat{\mathbf{E}}_h^t\) expressed solely in terms of the local traces of \(\mathbf{E}_h\) and \(\mathbf{H}_h\), with

$$\displaystyle \widehat{\mathbf{E}}_h^t = \frac{(1/\tau^{+})\,\mathbf{E}_h^{t,+} + (1/\tau^{-})\,\mathbf{E}_h^{t,-} - \llbracket \mathbf{H}_h \rrbracket}{1/\tau^{+} + 1/\tau^{-}}, \quad \text{on every interior face } F \in \mathcal{F}_h^I. $$
(12)

Thus, the numerical traces (8) and (9) have been reformulated from the conservativity condition. This means that the conservativity condition is now included in the new formulation of the numerical fluxes and can be omitted from the global system of equations. Hence, the local system (6) takes the form of a classical DG formulation: for all \(K \in \mathcal{T}_h\) and all \(\mathbf{v} \in V_h\),

$$\displaystyle \begin{aligned} (\varepsilon\,\partial_t\mathbf{E}_h, \mathbf{v})_K - (\mathbf{H}_h, \nabla\times\mathbf{v})_K + \langle \widehat{\mathbf{H}}_h^t, \mathbf{n}\times\mathbf{v}\rangle_{\partial K} &= 0,\\ (\mu\,\partial_t\mathbf{H}_h, \mathbf{v})_K + (\mathbf{E}_h, \nabla\times\mathbf{v})_K - \langle \widehat{\mathbf{E}}_h^t, \mathbf{n}\times\mathbf{v}\rangle_{\partial K} &= 0, \end{aligned} $$
(13)

where the numerical fluxes are defined by (11) and (12).
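To make the structure of the eliminated traces (11)-(12) concrete, the following Python sketch evaluates them face by face, under the sign and jump conventions used above and assuming the tangential traces from both neighboring elements are available; the function name and data layout are hypothetical and not taken from the paper.

```python
import numpy as np

def hdg_face_traces(Et_p, Et_m, Ht_p, Ht_m, n_p, tau_p, tau_m):
    """Numerical traces on an interior face, following (11)-(12).

    Et_p, Et_m, Ht_p, Ht_m : tangential traces of E_h and H_h from K+ and K-
    n_p                    : unit normal outward of K+ (so n- = -n+)
    tau_p, tau_m           : local stabilization parameters of K+ and K-
    """
    jump_E = np.cross(n_p, Et_p - Et_m)   # [[E_h]] = n+ x E+ + n- x E-
    jump_H = np.cross(n_p, Ht_p - Ht_m)   # [[H_h]] = n+ x H+ + n- x H-

    # Hybrid variable (tangential trace of H), eq. (11)
    lam = (tau_p * Ht_p + tau_m * Ht_m + jump_E) / (tau_p + tau_m)

    # Tangential trace of E, eq. (12)
    y_p, y_m = 1.0 / tau_p, 1.0 / tau_m
    E_hat = (y_p * Et_p + y_m * Et_m - jump_H) / (y_p + y_m)
    return lam, E_hat
```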

Remark 3

Let \(Y_K = \sqrt{\varepsilon_K/\mu_K}\) be the local admittance associated with cell \(K\) and \(Z_K = 1/Y_K\) the corresponding local impedance. If we set \(\tau_K = Z_K\) in (11) and \(1/\tau_K = Y_K\) in (12), the obtained numerical traces coincide with those adopted in the classical upwind flux DGTD method [6].
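Concretely, with the conventions adopted above, substituting \(\tau^{\pm} = Z_{K^{\pm}}\) (equivalently \(1/\tau^{\pm} = Y_{K^{\pm}}\)) into (11) and (12) gives, on an interior face,

$$\displaystyle \varLambda_h = \frac{Z_{K^+}\mathbf{H}_h^{t,+} + Z_{K^-}\mathbf{H}_h^{t,-} + \llbracket \mathbf{E}_h\rrbracket}{Z_{K^+} + Z_{K^-}}, \qquad \widehat{\mathbf{E}}_h^t = \frac{Y_{K^+}\mathbf{E}_h^{t,+} + Y_{K^-}\mathbf{E}_h^{t,-} - \llbracket \mathbf{H}_h\rrbracket}{Y_{K^+} + Y_{K^-}}, $$

which is the impedance/admittance weighting characteristic of upwind fluxes.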

4 Numerical Results

In order to validate and study the numerical convergence of the proposed HDG method, we consider the propagation of an eigenmode in a closed cavity (\(\varOmega\) is the unit cube) with perfectly metallic walls. The frequency of the wave is \(f = \frac{\sqrt{3}}{2}\,c_0\), where \(c_0\) is the speed of light in vacuum. The electric permittivity and the magnetic permeability are set to the constant vacuum values. The exact time-domain solution is given in [6].

We start our study by assuming that the penalization parameter \(\tau\) is equal to 1. In order to ensure the stability of the method, numerical CFL conditions are determined for each value of the interpolation order \(p_K\). In our particular case, \(\varepsilon_K\) and \(\mu_K\) are constant and equal to 1, so that \(Y_K = Z_K = 1\); we have thus verified that, as stated in Remark 3, for \(\tau = 1\) the values of the CFL number correspond to those of the classical upwind flux-based DG method. In Table 1 we summarize the maximum \(\varDelta t\) obtained numerically to ensure the stability of the scheme.

Table 1 Numerically obtained values of \(\varDelta t_{\max}\)
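The values of \(\varDelta t_{\max}\) reported in Table 1 are determined empirically; one simple way to automate this search is to bisect on \(\varDelta t\) while checking, for each candidate value, that the discrete electromagnetic energy does not grow over a few periods. The sketch below only illustrates this procedure; the stability check is_stable(dt), which would wrap a run of the HDG solver, is a placeholder.

```python
def find_dt_max(dt_stable, dt_unstable, is_stable, rel_tol=1e-3, max_iter=50):
    """Bisection search for the largest stable time step.

    dt_stable   : a time step known to be stable
    dt_unstable : a time step known to be unstable
    is_stable   : callable(dt) -> bool, e.g. run the scheme over a few periods
                  and check that the discrete energy does not increase
    """
    for _ in range(max_iter):
        dt_mid = 0.5 * (dt_stable + dt_unstable)
        if is_stable(dt_mid):
            dt_stable = dt_mid        # stable: raise the lower bound
        else:
            dt_unstable = dt_mid      # unstable: lower the upper bound
        if dt_unstable - dt_stable < rel_tol * dt_stable:
            break
    return dt_stable
```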

Given these values of \(\varDelta t_{\max}\), the \(L^2\)-norm of the error is calculated for a uniform tetrahedral mesh with 3072 elements, which is constructed from a finite difference grid with \(n_x = n_y = n_z = 9\) points, each cell of this grid yielding 6 tetrahedra (\(8\times 8\times 8\) cells \(\times\, 6 = 3072\) elements). The wave is propagated in the cavity during a physical time \(t_{\max}\) corresponding to 8 periods (as shown in Fig. 1). Figure 2 depicts a comparison of the time evolution of the \(L^2\)-norm of the error between the solution obtained with the HDG method and a classical upwind flux-based DG method for \(p_K = 4\). An optimal convergence with order \(p_K + 1\) is obtained, as shown in Fig. 3.

Fig. 1

Time evolution of the exact and the numerical solution of \(E_x\) at point A(0.25, 0.25, 0.25) with a \(\mathbb{P}_4\) interpolation

Fig. 2

Time evolution of the \(L^2\)-norm of the error for a \(\mathbb{P}_4\) interpolation

Fig. 3

Numerical convergence order of the time explicit HDG method for τ = 1

Now, we keep the same case as previously and we assess the behavior of the HDG method for various values of the penalization parameter \(\tau\). We observe that, for any order of interpolation and for values of the parameter \(\tau \neq 1\), when \(\varDelta t\) is kept fixed to the values defined in Table 1, the electromagnetic energy increases in time. It is in fact necessary to decrease \(\varDelta t_{\max}\) for each value of \(\tau\) to ensure stability (see Table 2 and Fig. 4). For this example, the optimal cost is therefore obtained for \(\tau = 1\) (which yields the same cost as the upwind flux DG method); any other choice lengthens the simulation. In Fig. 5, we show the time evolution of the \(L^2\)-error for several values of \(\tau\), each computed with the corresponding maximal time step. In addition, Table 3 sums up the numerical results in terms of maximum \(L^2\) errors and convergence rates. It appears that the order of convergence is not affected when the stabilization parameter is varied away from 1 (with the associated CFL conditions).

Fig. 4

Variation of \(\varDelta t_{\max}\) as a function of \(\tau\)

Fig. 5

Time evolution of the \(L^2\)-error as a function of \(\tau\) for a fixed interpolation order

Table 2 Numerically obtained values of the CFL number as a function of the stabilization parameter \(\tau\) for a fixed interpolation order
Table 3 Maximum \(L^2\)-errors and convergence orders

5 Local Postprocessing

We define here, following the ideas of the local postprocessing developed in [1], new approximations \(\mathbf{E}_h^*\) and \(\mathbf{H}_h^*\) of the electric and magnetic fields, and expect that both \(\mathbf{E}_h^*\) and \(\mathbf{H}_h^*\) converge with order \(k+1\) in the \(H(\mathrm{curl})\)-norm, whereas \(\mathbf{E}_h\) and \(\mathbf{H}_h\) converge with order \(k\) in this norm (\(k\) denoting the local interpolation degree \(p_K\)). To postprocess \(\mathbf{E}_h\) and \(\mathbf{H}_h\), we first compute approximations \(\mathbf{p}_h^1\) and \(\mathbf{p}_h^2\) to the curl of \(\mathbf{E}\), \(\mathbf{p}^1(t^n) = \nabla\times\mathbf{E}(t^n)\), and to the curl of \(\mathbf{H}\), \(\mathbf{p}^2(t^n) = \nabla\times\mathbf{H}(t^n)\), by locally solving the systems below

$$\displaystyle (\mathbf{p}_h^1(t^n), \mathbf{v})_K = (\nabla\times\mathbf{E}_h(t^n), \mathbf{v})_K + \langle \mathbf{n}\times(\widehat{\mathbf{E}}_h^t(t^n) - \mathbf{E}_h^t(t^n)), \mathbf{v}\rangle_{\partial K}, \quad \forall \mathbf{v}\in [\mathbb{P}^{p_K}(K)]^3, $$

and,

$$\displaystyle (\mathbf{p}_h^2(t^n), \mathbf{v})_K = (\nabla\times\mathbf{H}_h(t^n), \mathbf{v})_K + \langle \mathbf{n}\times(\varLambda_h(t^n) - \mathbf{H}_h^t(t^n)), \mathbf{v}\rangle_{\partial K}, \quad \forall \mathbf{v}\in [\mathbb{P}^{p_K}(K)]^3. $$

We then find \(\mathbf{E}_h^*(t^n)\) and \(\mathbf{H}_h^*(t^n)\) in \([\mathbb{P}^{p_K+1}(K)]^3\) such that

$$\displaystyle \begin{aligned} (\nabla\times\mathbf{E}_h^*(t^n), \nabla\times\mathbf{w})_K &= (\mathbf{p}_h^1(t^n), \nabla\times\mathbf{w})_K, \quad \forall \mathbf{w}\in [\mathbb{P}^{p_K+1}(K)]^3,\\ (\mathbf{E}_h^*(t^n), \nabla q)_K &= (\mathbf{E}_h(t^n), \nabla q)_K, \quad \forall q\in \mathbb{P}^{p_K+2}(K), \end{aligned} $$

and,

$$\displaystyle \begin{aligned} (\nabla\times\mathbf{H}_h^*(t^n), \nabla\times\mathbf{w})_K &= (\mathbf{p}_h^2(t^n), \nabla\times\mathbf{w})_K, \quad \forall \mathbf{w}\in [\mathbb{P}^{p_K+1}(K)]^3,\\ (\mathbf{H}_h^*(t^n), \nabla q)_K &= (\mathbf{H}_h(t^n), \nabla q)_K, \quad \forall q\in \mathbb{P}^{p_K+2}(K). \end{aligned} $$

It is important to point out that we can compute \(\mathbf{E}_h^*\) and \(\mathbf{H}_h^*\) at any time step without advancing in time. Hence, the local postprocessing can be performed whenever higher accuracy is needed at particular time steps. The numerical results given in Table 4 show that a second order convergence rate is obtained for the post-processed solution.

Table 4 Errors and orders of convergence before and after postprocessing
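Since the postprocessing is purely local, its computational pattern is simply a loop over the elements with small dense solves at the selected time step; the Python sketch below illustrates this structure only, the element interface (local_postproc_system_E/H) being hypothetical.

```python
import numpy as np

def postprocess_fields(elements, t_n):
    """Element-by-element postprocessing at a chosen time t_n."""
    E_star, H_star = [], []
    for K in elements:
        # Each element is assumed to assemble the small dense matrix and
        # right-hand side of its local postprocessing systems.
        A_E, b_E = K.local_postproc_system_E(t_n)
        A_H, b_H = K.local_postproc_system_H(t_n)
        E_star.append(np.linalg.solve(A_E, b_E))   # independent local solves
        H_star.append(np.linalg.solve(A_H, b_H))
    return E_star, H_star
```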

6 Conclusion

In this paper we have presented an explicit HDG method to solve the system of Maxwell equations in 3D. The next step is to couple explicit and implicit HDG methods to treat the case of a locally refined mesh.