Appendix A: Proof of Proposition 2.1
We now give two (formal) proofs showing that the fluid limit of the kinetic system
$$ \forall q\in \{1,2\}:\quad \partial_t f_q^{\varepsilon}+v_q^{\varepsilon} \partial _xf_q^{\varepsilon}=\frac{1}{\varepsilon} \bigl(M_q^{\varepsilon }-f_q^{\varepsilon}\bigr)\quad \mbox{with }v_q^{\varepsilon }=(-1)^q\sqrt{ \frac{\nu}{\varepsilon}} $$
(112)
is the equation
$$ \partial_t\rho^{\varepsilon}+\partial_x\bigl(u \rho^{\varepsilon}\bigr)=\nu\partial _{xx}^2 \rho^{\varepsilon}+\mathcal{O}(\varepsilon), $$
(113)
and, thus, is the convection-diffusion equation (1), since the error of order ε in (113) can be neglected. One of the difficulties is linked to the fact that the kinetic velocity \(v_{q}^{\varepsilon }:=(-1)^{q}\sqrt{\frac{\nu}{\varepsilon}}\) depends on the collision time ε, which is not at all classical in the framework of kinetic theory.
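Before the formal proofs, the claimed limit can be illustrated numerically. The following sketch is not taken from the paper: it uses illustrative parameters, a constant velocity u for simplicity, and a basic explicit upwind/relaxation discretization. It advances the kinetic system (112) on a periodic grid and compares its density \(\rho^{\varepsilon}=f_1^{\varepsilon}+f_2^{\varepsilon}\) at time T with a direct upwind/central discretization of the convection-diffusion equation (113).

```python
import math

# Minimal numerical sketch (not the authors' scheme; illustrative parameters):
# advance the two-velocity kinetic system (112) with an explicit
# upwind/relaxation scheme on the periodic domain [0,1), assuming a constant
# velocity u, and compare the kinetic density rho = f_1 + f_2 with a direct
# upwind/central discretization of the convection-diffusion equation (113).
N, T = 128, 0.1
dx, dt = 1.0 / N, 1.0e-4
nu, eps, u = 0.1, 1.0e-2, 0.5
v = math.sqrt(nu / eps)                  # |v_q^eps|; v_1 = -v, v_2 = +v

x = [i * dx for i in range(N)]
rho0 = [1.0 + 0.5 * math.cos(2.0 * math.pi * xi) for xi in x]

# kinetic initial data: local Maxwellians M_q = rho/2 (1 + u/v_q)
f1 = [0.5 * r * (1.0 - u / v) for r in rho0]
f2 = [0.5 * r * (1.0 + u / v) for r in rho0]
rho_cd = rho0[:]                         # reference convection-diffusion solution

for _ in range(round(T / dt)):
    rho = [a + b for a, b in zip(f1, f2)]
    nf1, nf2, nrho = [], [], []
    for i in range(N):
        ip, im = (i + 1) % N, (i - 1) % N
        M1 = 0.5 * rho[i] * (1.0 - u / v)
        M2 = 0.5 * rho[i] * (1.0 + u / v)
        # upwind transport (v_1 < 0, v_2 > 0) plus relaxation toward M_q
        nf1.append(f1[i] + dt * (v * (f1[ip] - f1[i]) / dx + (M1 - f1[i]) / eps))
        nf2.append(f2[i] + dt * (-v * (f2[i] - f2[im]) / dx + (M2 - f2[i]) / eps))
        # reference: upwind convection (u > 0) plus central diffusion
        conv = u * (rho_cd[i] - rho_cd[im]) / dx
        diff = nu * (rho_cd[ip] - 2.0 * rho_cd[i] + rho_cd[im]) / dx ** 2
        nrho.append(rho_cd[i] + dt * (diff - conv))
    f1, f2, rho_cd = nf1, nf2, nrho

max_diff = max(abs(a + b - r) for a, b, r in zip(f1, f2, rho_cd))
```

With these values one observes agreement at the \(10^{-2}\) level, consistent with the \(\mathcal{O}(\varepsilon)\) modeling error in (113) plus the \(\mathcal{O}(\Delta x)\) numerical diffusion of the upwind discretizations.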
The first proof is based on a Chapman-Enskog expansion; the second on a Hilbert expansion. The proof based on the Chapman-Enskog expansion is easier than the one based on the Hilbert expansion. Moreover, the Chapman-Enskog expansion yields
$$\begin{aligned} f_q^{\varepsilon}(t,x) =& \frac{\rho^{\varepsilon }}{2}\biggl[1+(-1)^q \sqrt{\varepsilon} \biggl(\frac{u}{\sqrt{\nu}}-\sqrt {\nu}\frac{\partial_x\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \\ &{}+(-1)^q\varepsilon^{3/2} \biggl(\frac{u}{\sqrt{\nu }}\cdot \frac{\partial_x(u\rho^{\varepsilon})-\nu\partial_{xx}^2\rho ^{\varepsilon}}{\rho^{\varepsilon}}-\sqrt{\nu}\frac{\partial _{xx}^2(u\rho^{\varepsilon})}{\rho^{\varepsilon}}+\nu^{3/2} \frac {\partial_{xxx}^3\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \biggr] \\ &{}+\mathcal{O}\bigl(\varepsilon^2 \bigr). \end{aligned}$$
(114)
With the Hilbert expansion, we only obtain
$$ f_q^{\varepsilon}(t,x) = \frac{\rho^{\varepsilon}}{2} \biggl[1+(-1)^q\sqrt{\varepsilon} \biggl( \frac{u}{\sqrt{\nu}}-\sqrt{\nu}\frac {\partial_x\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \biggr]+\mathcal{O}\bigl( \varepsilon^{3/2}\bigr) $$
(115)
which is less accurate than (114).
The fact that the Chapman-Enskog approach is easier than the Hilbert approach is classical in kinetic theory. In fact, the compressible Navier-Stokes system (which is the fluid limit of the classical Boltzmann equation) is obtained with a Chapman-Enskog expansion and not with a Hilbert expansion, which is too complicated to give the result. Here, it is possible to obtain the fluid limit with a Hilbert expansion because the set of kinetic velocities is discrete and finite, which implies that the linear operators are simple 2×2 matrices. Moreover, it seems to us that the Hilbert expansion is better adapted than the Chapman-Enskog expansion to clearly justify the fluid limit (113) of the kinetic system (112), because the Hilbert approach is based on a sequence of PDEs that can be studied a posteriori (we do not attempt such a theoretical study in the present paper). Finally, the Hilbert expansion can also be seen as a (formal) justification of the Chapman-Enskog expansion since both expansions give the same result. That is why we also write the proof based on the Hilbert expansion.
Finally, let us note that in the following analysis we neglect any possible influence of boundary conditions on ∂Ω: in other words, we suppose that \(\varOmega\subset \mathbb{R}\) is periodic. Analyzing the influence of non-periodic boundary conditions on ∂Ω on the fluid limit of (112) is quite involved because of possible Knudsen layers in the vicinity of ∂Ω, where the distribution \(f_{q}^{\varepsilon}\) is not close to the Maxwellian \(M_{q}^{\varepsilon}\) even when ε≪1. As a consequence, when the boundary conditions are not periodic, we can only expect the fluid limit (113) to be valid far from the boundary ∂Ω.
A.1 Proof Based on a Chapman-Enskog Expansion
Let us suppose that the solution \(f_{q}^{\varepsilon}\) of (112) can be expanded with the Chapman-Enskog expansion
$$ f_q^{\varepsilon}=M_q^{\varepsilon}\cdot \bigl(1+\sqrt{ \varepsilon} g_{1,q}^{\varepsilon}+\varepsilon g_{2,q}^{\varepsilon}+ \varepsilon ^{3/2} g_{3,q}^{\varepsilon} \bigr)+\mathcal{O} \bigl(\varepsilon^2\bigr) $$
(116)
under the constraints
$$ \sum_{q\in\{1,2\}}M_q^{\varepsilon} \bigl(g_{1,q}^{\varepsilon}+\sqrt {\varepsilon}g_{2,q}^{\varepsilon} \bigr)=0 $$
(117)
and
$$ \sum_{q\in\{1,2\}}M_q^{\varepsilon}g_{3,q}^{\varepsilon}=0 $$
(118)
where \(g_{k,q}^{\varepsilon}\) is supposed to be of order one. We recall that the Maxwellian \(M_{q}^{\varepsilon}\) is given by
$$M_q^{\varepsilon}:=\frac{\rho^{\varepsilon}}{2} \biggl(1+\frac {u}{v_q^{\varepsilon}} \biggr)=\frac{\rho^{\varepsilon}}{2} \biggl[1+(-1)^q\sqrt{\frac{\varepsilon}{\nu}} \cdot u \biggr] $$
where \(\rho^{\varepsilon}:=f_{1}^{\varepsilon}+f_{2}^{\varepsilon}\) and verifies
$$ \sum _{q\in\{1,2\}} \left ( \begin{array}{c} 1\\ v_q^{\varepsilon} \end{array} \right ) M_q^{\varepsilon}= \left ( \begin{array}{c} \rho^{\varepsilon}\\ \rho^{\varepsilon} u \end{array} \right ). $$
(119)
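As a sanity check, the moment identities (119) can be verified symbolically; the following sketch (assuming sympy is available) confirms that the zeroth and first moments of \(M_q^{\varepsilon}\) are \(\rho^{\varepsilon}\) and \(\rho^{\varepsilon}u\), identically in ε:

```python
import sympy as sp

# Symbolic check of the moment identities (119) for the Maxwellian
# M_q = rho/2 (1 + u/v_q) with v_q = (-1)^q sqrt(nu/eps).
nu, eps = sp.symbols('nu varepsilon', positive=True)
rho, u = sp.symbols('rho u')
v = {q: (-1) ** q * sp.sqrt(nu / eps) for q in (1, 2)}
M = {q: rho / 2 * (1 + u / v[q]) for q in (1, 2)}

zeroth_residual = sp.simplify(M[1] + M[2] - rho)                # should vanish
first_residual = sp.simplify(v[1] * M[1] + v[2] * M[2] - rho * u)
```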
It is important to note that the constraint (117) is not classical in the framework of Chapman-Enskog expansions. Indeed, we should a priori impose
$$ \sum_{q\in\{1,2\}}M_q^{\varepsilon}g_{1,q}^{\varepsilon}=0 \quad \mbox{and}\quad\sum_{q\in\{1,2\}}M_q^{\varepsilon}g_{2,q}^{\varepsilon}=0. $$
(120)
Here, we replace (120) by (117) because the set of kinetic velocities \(\{ v_{q}^{\varepsilon}\}_{q\in\{1,2\}}\) depends on \(\sqrt{\varepsilon}\), which implies that \(M_{q}^{\varepsilon}g_{1,q}^{\varepsilon}\) contains a term of order \(\sqrt{\varepsilon}\) (and, thus, of the same order as \(\sqrt{\varepsilon}g_{2,q}^{\varepsilon}\)) since \(g_{1,q}^{\varepsilon}\) and \(g_{2,q}^{\varepsilon}\) are of order one.
By injecting expansion (116) into (112), we obtain:
- Order \(\sqrt{\varepsilon}^{-1}\): We obtain the equality
$$ M_q^{\varepsilon}g_{1,q}^{\varepsilon}=- \sqrt{\varepsilon }v_q^\varepsilon\partial_x M_q^{\varepsilon}=-(-1)^q\sqrt{\nu}\partial _x M_q^{\varepsilon} $$
(121)
that is to say
$$ M_q^{\varepsilon}g_{1,q}^{\varepsilon} =-(-1)^q\sqrt{\nu}\frac{\partial _x\rho^{\varepsilon}}{2}-\sqrt{\varepsilon} \frac{\partial_x(u\rho ^{\varepsilon})}{2} $$
(122)
since \(M_{q}^{\varepsilon}=\frac{\rho^{\varepsilon}}{2} [1+(-1)^{q}\sqrt{\frac{\varepsilon}{\nu}}\cdot u ]\).
- Order \(\sqrt{\varepsilon}^{0}\): We obtain the equality
$$ M_q^{\varepsilon}g_{2,q}^{\varepsilon}=- \bigl[\partial_t M_q^{\varepsilon}+\sqrt{ \varepsilon}v_q^\varepsilon\partial_x \bigl(M_q^{\varepsilon}g_{1,q}^{\varepsilon}\bigr) \bigr]. $$
(123)
- Order \(\sqrt{\varepsilon}\): We obtain the equality
$$ M_q^{\varepsilon}g_{3,q}^{\varepsilon}=- \bigl[\partial_t \bigl(M_q^{\varepsilon}g_{1,q}^{\varepsilon} \bigr)+\sqrt{\varepsilon }v_q^\varepsilon\partial_x \bigl(M_q^{\varepsilon}g_{2,q}^{\varepsilon }\bigr) \bigr]. $$
(124)
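The passage from (121) to (122) is a one-line expansion of \(\partial_x M_q^{\varepsilon}\); it can be checked symbolically (a sketch assuming sympy is available, with \(\rho^{\varepsilon}\) and u undefined functions of x):

```python
import sympy as sp

# Check that (121), M_q g_{1,q} = -(-1)^q sqrt(nu) d_x M_q, expands into
# (122): -(-1)^q sqrt(nu) d_x rho / 2 - sqrt(eps) d_x(u rho) / 2.
x = sp.Symbol('x')
nu, eps = sp.symbols('nu varepsilon', positive=True)
rho, u = sp.Function('rho')(x), sp.Function('u')(x)

residuals = []
for q in (1, 2):
    sgn = (-1) ** q
    M = rho / 2 * (1 + sgn * sp.sqrt(eps / nu) * u)
    lhs = -sgn * sp.sqrt(nu) * sp.diff(M, x)                      # (121)
    rhs = (-sgn * sp.sqrt(nu) * sp.diff(rho, x) / 2
           - sp.sqrt(eps) * sp.diff(u * rho, x) / 2)              # (122)
    residuals.append(sp.simplify(lhs - rhs))
```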
Moreover, by summing (112) over the set \(\{v_{q}^{\varepsilon}\}_{q\in\{1,2\}}\) and by injecting the expansion (116), we obtain
$$\begin{aligned} &\partial_t\rho^{\varepsilon}+\partial_x\bigl(u \rho^{\varepsilon}\bigr) \\ &\quad=-\partial_x \biggl[\sum_{q\in\{1,2\}}\bigl(v_q^\varepsilon-u \bigr)f_q^{\varepsilon} \biggr]\quad\biggl(\mathrm{since}\ \sum_{q\in\{1,2\}}uf_q^{\varepsilon}=u \rho^{\varepsilon}\ \mathrm{and}\ \sum_{q\in\{1,2\}}\frac{1}{\varepsilon}\bigl(M_q^{\varepsilon}-f_q^{\varepsilon} \bigr)=0\biggr) \\ &\quad=-\partial_x \biggl[\sum_{q\in\{1,2\}} \bigl(v_q^\varepsilon-u\bigr)M_q^{\varepsilon} \biggr]-\sqrt{\varepsilon}\partial_x \biggl[\sum_{q\in\{1,2\}}\bigl(v_q^\varepsilon-u \bigr)M_q^{\varepsilon}g_{1,q}^{\varepsilon} \biggr] \\ &\qquad{}- \varepsilon\partial_x \biggl[\sum_{q\in\{1,2\}} \bigl(v_q^\varepsilon-u\bigr)M_q^{\varepsilon}g_{2,q}^{\varepsilon} \biggr]+\mathcal{O}(\varepsilon) \\ &\quad=-\partial_x \biggl[\sum_{q\in\{1,2\}} \bigl(v_q^\varepsilon-u\bigr)M_q^{\varepsilon} \biggr]-\sqrt{\varepsilon}\partial_x \biggl[\sum_{q\in\{1,2\}}v_q^\varepsilon M_q^{\varepsilon}g_{1,q}^{\varepsilon} \biggr]-\varepsilon\partial_x \biggl[\sum_{q\in\{1,2\}}v_q^\varepsilon M_q^{\varepsilon}g_{2,q}^{\varepsilon} \biggr] \\ &\qquad{}+\sqrt{\varepsilon}\partial_x \biggl[u\sum_{q\in\{1,2\}}M_q^{\varepsilon}\bigl(g_{1,q}^{\varepsilon}+ \sqrt{\varepsilon}g_{2,q}^{\varepsilon}\bigr) \biggr]+\mathcal{O}(\varepsilon). \end{aligned}$$
(125)
By taking into account (119), (121) and (123), we obtain
$$\begin{aligned} \varepsilon\partial_x \biggl[\sum_{q\in\{1,2\} }v_q^\varepsilon M_q^{\varepsilon}g_{2,q}^{\varepsilon} \biggr] =& - \varepsilon\partial_x \biggl\{ \sum_{q\in\{1,2\} }v_q^\varepsilon \bigl[\partial_t M_q^{\varepsilon}+\sqrt{\varepsilon }v_q^\varepsilon\partial_x \bigl(M_q^{\varepsilon}g_{1,q}^{\varepsilon } \bigr) \bigr] \biggr\} \\ =& -\varepsilon\partial_{xt}^2 \bigl(u \rho^{\varepsilon }\bigr)+\varepsilon\partial_x \biggl[\sum _{q\in\{1,2\}}v_q^\varepsilon \sqrt{ \varepsilon}v_q^\varepsilon\partial_x \bigl(\sqrt{ \varepsilon }v_q^\varepsilon\partial_x M_q^{\varepsilon}\bigr) \biggr] \\ =& -\varepsilon\partial_{xt}^2 \bigl(u \rho^{\varepsilon }\bigr)+\varepsilon\nu\partial_{xxx}^3 \bigl(u\rho^{\varepsilon}\bigr)=\mathcal {O}(\varepsilon). \end{aligned}$$
Thus, by also taking into account (117), we obtain
$$\begin{aligned} \partial_t\rho^{\varepsilon}+\partial_x\bigl(u \rho^{\varepsilon}\bigr) =& -\sqrt{\varepsilon}\partial_x \biggl[ \sum_{q\in\{1,2\} }v_q^\varepsilon M_q^{\varepsilon}g_{1,q}^{\varepsilon} \biggr]+\mathcal {O}(\varepsilon)=\varepsilon\partial_x \biggl[\sum _{q\in\{1,2\} } \bigl(v_q^\varepsilon \bigr)^2\partial_x M_q^{\varepsilon} \biggr]+\mathcal{O}(\varepsilon) \\ =& \nu\partial_{xx}^2 \biggl(\sum _{q\in\{1,2\}} M_q^{\varepsilon} \biggr)+\mathcal{O}( \varepsilon)=\nu \partial_{xx}^2\rho^{\varepsilon}+ \mathcal{O}(\varepsilon) \end{aligned}$$
which gives (113) that is to say
$$ \partial_t\rho^{\varepsilon}+\partial_x\bigl(u \rho^{\varepsilon}\bigr)=\nu\partial _{xx}^2 \rho^{\varepsilon}+\mathcal{O}(\varepsilon). $$
(126)
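The key cancellation behind (126), namely that the \((-1)^q\) contributions cancel and the first-order flux produces exactly the diffusion term, can also be checked symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

# With M_q g_{1,q} given by (122), check that
# -sqrt(eps) d_x [ sum_q v_q M_q g_{1,q} ] = nu d_xx rho  holds exactly.
x = sp.Symbol('x')
nu, eps = sp.symbols('nu varepsilon', positive=True)
rho, u = sp.Function('rho')(x), sp.Function('u')(x)

flux = 0
for q in (1, 2):
    sgn = (-1) ** q
    vq = sgn * sp.sqrt(nu / eps)
    Mg1 = (-sgn * sp.sqrt(nu) * sp.diff(rho, x) / 2
           - sp.sqrt(eps) * sp.diff(u * rho, x) / 2)              # (122)
    flux += vq * Mg1

residual = sp.simplify(-sp.sqrt(eps) * sp.diff(flux, x) - nu * sp.diff(rho, x, 2))
```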
We deduce from (122), (123) and (126) that
$$\begin{aligned} M_q^{\varepsilon}g_{2,q}^{\varepsilon} =& - \partial_t M_q^{\varepsilon}+\nu\partial_{xx}^2M_q^{\varepsilon} \\ =& -\frac{\partial_t \rho^{\varepsilon}}{2}-(-1)^q\sqrt {\frac{\varepsilon}{\nu}}\cdot \frac{\partial_t(u\rho^{\varepsilon })}{2}+\nu\frac{\partial_{xx}^2\rho^{\varepsilon}}{2}+(-1)^q\sqrt {\varepsilon\nu} \cdot\frac{\partial_{xx}^2(u\rho^{\varepsilon})}{2} \\ =& \frac{\partial_x (u\rho^{\varepsilon} )}{2}+(-1)^q\sqrt{\frac{\varepsilon}{\nu}}u\cdot \frac{\partial_x(u\rho ^{\varepsilon})-\nu\partial_{xx}^2\rho^{\varepsilon}}{2}+(-1)^q\sqrt {\varepsilon\nu}\cdot \frac{\partial_{xx}^2(u\rho^{\varepsilon })}{2} \\ &{}+\mathcal{O}(\varepsilon) \end{aligned}$$
(127)
(we also use the fact that u(x) does not depend on the time t). This last equality encourages us to take
$$ M_q^{\varepsilon}g_{2,q}^{\varepsilon}= \frac{\partial _x (u\rho^{\varepsilon} )}{2}+(-1)^q\sqrt{\frac{\varepsilon }{\nu}}u\cdot \frac{\partial_x(u\rho^{\varepsilon})-\nu\partial _{xx}^2\rho^{\varepsilon}}{2}+(-1)^q\sqrt{\varepsilon\nu}\cdot\frac {\partial_{xx}^2(u\rho^{\varepsilon})}{2} $$
(128)
since the term of order ε in (127) is a term of order \(\varepsilon^{2}\) in (116) and, thus, of order ε in (126). We deduce from (122) and (128) that
$$\begin{aligned} M_q^{\varepsilon}\cdot\bigl(\sqrt{\varepsilon}g_{1,q}^{\varepsilon }+ \varepsilon g_{2,q}^{\varepsilon}\bigr) =&\frac{\rho^{\varepsilon}}{2} \biggl[-(-1)^q\sqrt{\varepsilon\nu}\frac{\partial_x\rho^{\varepsilon}}{\rho ^{\varepsilon}}+(-1)^q \frac{\varepsilon^{3/2}}{\sqrt{\nu}}u\cdot\frac {\partial_x(u\rho^{\varepsilon})-\nu\partial_{xx}^2\rho^{\varepsilon }}{\rho^{\varepsilon}}\\ &{}+(-1)^q\varepsilon^{3/2} \sqrt{\nu}\cdot\frac {\partial_{xx}^2(u\rho^{\varepsilon})}{\rho^{\varepsilon}} \biggr] \end{aligned}$$
which verifies the constraint (117). We deduce from (122), (124) and (128) that
$$M_q^{\varepsilon}g_{3,q}^{\varepsilon}=(-1)^q \sqrt{\nu}\frac{\partial _{tx}^2\rho^{\varepsilon}}{2}-(-1)^q\sqrt{\nu}\frac{\partial_{xx}^2 (u\rho^{\varepsilon} )}{2}+ \mathcal{O}(\sqrt{\varepsilon}) $$
which gives
$$\begin{aligned} M_q^{\varepsilon}g_{3,q}^{\varepsilon} =& (-1)^q\sqrt{\nu }\frac{\nu\partial_{xxx}^3\rho^{\varepsilon}-\partial_{xx}^2(u\rho ^{\varepsilon})}{2}-(-1)^q\sqrt{\nu} \frac{\partial_{xx}^2 (u\rho ^{\varepsilon} )}{2}+\mathcal{O}(\sqrt{\varepsilon}) \\ =& (-1)^q\nu^{3/2}\frac{\partial_{xxx}^3\rho^{\varepsilon }}{2}-(-1)^q \sqrt{\nu}\partial_{xx}^2 \bigl(u\rho^{\varepsilon} \bigr)+ \mathcal{O}(\sqrt{\varepsilon}) \end{aligned}$$
(129)
by using (126). This last equality encourages us to take
$$ M_q^{\varepsilon}g_{3,q}^{\varepsilon} = (-1)^q\nu^{3/2}\frac{\partial _{xxx}^3\rho^{\varepsilon}}{2}-(-1)^q\sqrt{ \nu}\partial_{xx}^2 \bigl(u\rho^{\varepsilon} \bigr) $$
(130)
since the term of order \(\sqrt{\varepsilon}\) in (129) is a term of order \(\varepsilon^{2}\) in (116) and, thus, of order ε in (126). Let us note that (130) verifies the constraint (118). Thus, by taking into account (126), we obtain
$$\begin{aligned} &M_q^{\varepsilon}\cdot\bigl(\sqrt{\varepsilon}g_{1,q}^{\varepsilon }+ \varepsilon g_{2,q}^{\varepsilon}+\varepsilon ^{3/2}g_{3,q}^{\varepsilon} \bigr) \\ &\quad= \frac{\rho^{\varepsilon }}{2}\biggl[1+(-1)^q\sqrt{\varepsilon} \biggl( \frac{u}{\sqrt{\nu}}-\sqrt {\nu}\frac{\partial_x\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \\ &\qquad{}+(-1)^q\frac{\varepsilon^{3/2}}{\sqrt{\nu}}u\cdot \frac{\partial_x(u\rho^{\varepsilon})-\nu\partial_{xx}^2\rho ^{\varepsilon}}{\rho^{\varepsilon}}+(-1)^q \varepsilon^{3/2}\sqrt{\nu }\frac{\partial_{xx}^2(u\rho^{\varepsilon})}{\rho^{\varepsilon}} \\ &\qquad{}+(-1)^q\varepsilon^{3/2}\nu^{3/2} \frac{\partial _{xxx}^3\rho^{\varepsilon}}{\rho^{\varepsilon}}-2(-1)^q\varepsilon ^{3/2}\sqrt{\nu} \frac{\partial_{xx}^2 (u\rho^{\varepsilon} )}{\rho^{\varepsilon}}\biggr] \end{aligned}$$
that is to say
$$\begin{aligned} &M_q^{\varepsilon}\cdot\bigl(\sqrt{\varepsilon}g_{1,q}^{\varepsilon }+ \varepsilon g_{2,q}^{\varepsilon}+\varepsilon ^{3/2}g_{3,q}^{\varepsilon} \bigr) \\ &\quad= \frac{\rho^{\varepsilon }}{2}\biggl[1+(-1)^q\sqrt{\varepsilon} \biggl( \frac{u}{\sqrt{\nu}}-\sqrt {\nu}\frac{\partial_x\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \\ &\qquad{}+(-1)^q\varepsilon^{3/2} \biggl(\frac{u}{\sqrt{\nu }}\cdot \frac{\partial_x(u\rho^{\varepsilon})-\nu\partial_{xx}^2\rho ^{\varepsilon}}{\rho^{\varepsilon}}-\sqrt{\nu}\frac{\partial _{xx}^2(u\rho^{\varepsilon})}{\rho^{\varepsilon}}+\nu^{3/2} \frac {\partial_{xxx}^3\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr)\biggr] \end{aligned}$$
which gives (114) by using (116).
A.2 Proof Based on a Hilbert Expansion
Let us suppose that the solution \(f_{q}^{\varepsilon}\) of (112) can be expanded with the Hilbert expansion
$$ f_q^{\varepsilon}=m_q^{\varepsilon}\cdot \bigl(g_{0,q}^{\varepsilon}+\sqrt {\varepsilon} g_{1,q}^{\varepsilon}+{ \sqrt{\varepsilon}}^2 g_{2,q}^{\varepsilon}+\cdots\bigr) $$
(131)
where
$$m_q^{\varepsilon}:=\frac{1}{2} \biggl(1+\frac{u}{v_q^{\varepsilon}} \biggr)=\frac{1}{2} \biggl[1+(-1)^q\sqrt{\frac{\varepsilon}{\nu}}\cdot u \biggr]. $$
The density \(\rho^{\varepsilon}:=f_{1}^{\varepsilon}+f_{2}^{\varepsilon}\) is given by
$$\rho^{\varepsilon}=\rho_{0}^{\varepsilon}+\sqrt{\varepsilon} \rho _{1}^{\varepsilon}+{\sqrt{\varepsilon}}^2 \rho_{2}^{\varepsilon}+\cdots $$
with
$$ \forall n:\quad \rho_n^{\varepsilon}=\sum_{q\in\{1,2\}}m_q^{\varepsilon }g_{n,q}^{\varepsilon}. $$
(132)
Moreover, the Maxwellian \(M_{q}^{\varepsilon}\), defined by
$$M_q^{\varepsilon}:=\frac{\rho^{\varepsilon}}{2} \biggl[1+\frac{u}{v_q^{\varepsilon}} \biggr]=\frac{\rho^{\varepsilon}}{2} \biggl[1+(-1)^q\sqrt{\frac{\varepsilon}{\nu}} \cdot u \biggr] $$
and whose density is equal to \(\rho^{\varepsilon}\), is given by
$$M_q^{\varepsilon}=m_q^{\varepsilon}\cdot\bigl( \rho_{0}^{\varepsilon}+\sqrt {\varepsilon} \rho_{1}^{\varepsilon}+{ \sqrt{\varepsilon}}^2 \rho _{2}^{\varepsilon}+\cdots \bigr). $$
In the sequel, we will prove that when the Hilbert expansion (131) is valid, the density \(\rho^{\varepsilon}\) is necessarily a solution of (113). Moreover, by computing \(g_{0,q}^{\varepsilon}\), \(g_{1,q}^{\varepsilon}\) and \(g_{2,q}^{\varepsilon}\), we will obtain (115).
Let us note that the difference between the Chapman-Enskog expansion (116) and the Hilbert expansion (131) can be underlined by comparing the constraints (117)-(118) with the relations (132), which are not constraints: the \(\rho_{n}^{\varepsilon}\) are unknowns which solve a sequence of PDEs (see below).
By injecting expansion (131) into (112), we obtain:
- Order \(\varepsilon^{-1}\): We obtain the equality
$$ g_{0,1}^{\varepsilon}=g_{0,2}^{\varepsilon}= \rho_0^{\varepsilon}(t,x). $$
(133)
- Order \((\sqrt{\varepsilon})^{n-1/2}\) (\(n\in \mathbb{N}\)): We obtain the following PDEs satisfied by \(\{g_{n,q}^{\varepsilon}\}_{n\geq0}\):
$$ (\sqrt{\varepsilon})^n\bigl[\partial_t \bigl(m_q^{\varepsilon}g_{n,q}^{\varepsilon } \bigr)+v_q^{\varepsilon}\partial_x\bigl(m_q^{\varepsilon}g_{n,q}^{\varepsilon } \bigr)\bigr]=m_q^{\varepsilon}(\sqrt{\varepsilon})^{n-1} \biggl(\sum_{k\in\{ 1,2\}}m_k^{\varepsilon}g_{n+1,k}^{\varepsilon}-g_{n+1,q}^{\varepsilon } \biggr). $$
(134)
We recall that \(|v_{q}^{\varepsilon}|=\mathcal{O}(1/\sqrt{\varepsilon})\), which implies that \((\sqrt{\varepsilon})^{n}v_{q}^{\varepsilon}\partial_{x}(m_{q}^{\varepsilon}g_{n,q}^{\varepsilon})\) and \(m_{q}^{\varepsilon}(\sqrt{\varepsilon})^{n-1} (\sum_{k\in\{1,2\}}m_{k}^{\varepsilon}g_{n+1,k}^{\varepsilon}-g_{n+1,q}^{\varepsilon} )\) are formally of the same order. Moreover, we keep the unsteady term \((\sqrt{\varepsilon})^{n}\partial_{t}(m_{q}^{\varepsilon}g_{n,q}^{\varepsilon})\) to obtain an initial value problem for \(m_{q}^{\varepsilon}g_{n,q}^{\varepsilon}\). The PDEs (134) can be written in the equivalent form
$$ \forall n\geq0:\quad\mathbf{A}^{\varepsilon}\cdot\mathcal{G}_{n+1}^{\varepsilon}= \mathcal{B}^{\varepsilon}\bigl(\mathcal{G}_n^{\varepsilon}\bigr) $$
(135)
where \(\mathcal{G}_{n}^{\varepsilon}=(g_{n,1}^{\varepsilon },g_{n,2}^{\varepsilon})^{T}\), \(\mathcal{B}^{\varepsilon }=(b_{1}^{\varepsilon},b_{2}^{\varepsilon})^{T}\) and where
$$ \left \{ \begin{array}{l@{\quad}l} \mathbf{A}^{\varepsilon}=\left ( \begin{array}{c@{\quad}c} -m_2^{\varepsilon}&m_2^{\varepsilon}\\ m_1^{\varepsilon}&-m_1^{\varepsilon} \end{array} \right ),&\mbox{(a)} \\ \displaystyle b_q^{\varepsilon}\bigl(\mathcal{G}_n^{\varepsilon} \bigr)=\frac{\sqrt {\varepsilon}}{m_q^{\varepsilon}}\cdot\bigl[\partial_t\bigl(m_q^{\varepsilon }g_{n,q}^{\varepsilon} \bigr)+v_q^{\varepsilon}\partial_x\bigl(m_q^{\varepsilon }g_{n,q}^{\varepsilon} \bigr)\bigr].&\mbox{(b)} \end{array} \right . $$
(136)
Since the matrix \(\mathbf{A}^{\varepsilon}\) is not invertible, we have to study the linear system (135) carefully. By applying the Fredholm alternative, we obtain the following result:
Lemma A.1
Let \(\mathcal{G}_{-1}^{\varepsilon}=0\) and
$$ \mathcal{G}_{0}^{\varepsilon}= \left ( \begin{array}{c} 1\\ 1 \end{array} \right ) \rho_0^{\varepsilon}. $$
(137)
Then, (135) has a unique solution under the constraints
$$ \forall n\geq0:\quad\partial_t\rho_{n}^{\varepsilon}+ \partial_x\bigl(u\rho _n^{\varepsilon}\bigr)= \mathcal{F}^{\varepsilon}\bigl(\mathcal {G}_{n-1}^{\varepsilon}\bigr) $$
(138)
where
$$ \forall n\geq0:\quad \mathcal{F}^{\varepsilon}\bigl(\mathcal {G}_{n-1}^{\varepsilon}\bigr):= \partial_x \biggl[\sum_{q\in\{1,2\} } \bigl(v_q^{\varepsilon}-u\bigr)m_q^{\varepsilon}b_q^{\varepsilon} \bigl(\mathcal {G}_{n-1}^{\varepsilon}\bigr) \biggr]. $$
(139)
Moreover, \(\{\mathcal{G}_{n}^{\varepsilon}\}_{n\geq1}\) is given by the recurrence relation
$$ \forall n \geq1:\quad\mathcal{G}_n^{\varepsilon}=-\mathcal {B}^{\varepsilon} \bigl(\mathcal{G}_{n-1}^{\varepsilon}\bigr)+\rho_n^{\varepsilon} \left ( \begin{array}{c} 1\\ 1 \end{array} \right ). $$
(140)
Thus, the construction process to obtain \(\{\mathcal {G}_{n}^{\varepsilon}\}_{n\geq0}\) is the following:
$$\begin{aligned} &\left \{ \begin{array}{l} \mbox{Firstly, we note that $\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_{-1}^{\varepsilon}\bigr)=0$ since $\mathcal{G}_{-1}^{\varepsilon}=0$};\\ \mbox{secondly, we compute $\rho_{0}^{\varepsilon}$ with (138)};\\ \mbox{thirdly, we compute $\mathcal{G}_{0}^{\varepsilon}$ with (137)}. \end{array} \right . \\ &\quad\rightarrow \left \{ \begin{array}{l} \mbox{Firstly, we compute $\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_{0}^{\varepsilon}\bigr)$ with (139)};\\ \mbox{secondly, we compute $\rho_{1}^{\varepsilon}$ with (138)};\\ \mbox{thirdly, we compute $\mathcal{G}_{1}^{\varepsilon}$ with (140)}. \end{array} \right . \rightarrow\cdots \\ &\quad\rightarrow \left \{ \begin{array}{l} \mbox{Firstly, we compute $\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_{n-1}^{\varepsilon}\bigr)$ with (139)};\\ \mbox{secondly, we compute $\rho_{n}^{\varepsilon}$ with (138)};\\ \mbox{thirdly, we compute $\mathcal{G}_{n}^{\varepsilon}$ with (140)}. \end{array} \right . \rightarrow\cdots \end{aligned}$$
(141)
By using (138), we obtain that
$$ \left \{ \begin{array}{l@{\quad}l} \displaystyle \partial_t\rho_0^{\varepsilon}+\partial_x\bigl(u\rho_0^{\varepsilon}\bigr)=0\quad \mbox{(constraint (138) with $n=0$)}, & \mbox{(a)}\\ \displaystyle \partial_t\rho_1^{\varepsilon}+\partial_x\bigl(u\rho_1^{\varepsilon }\bigr)=\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_0^{\varepsilon}\bigr)\quad\mbox{(constraint (138) with $n=1$)} & \mbox{(b)}\\ \displaystyle \partial_t\rho_2^{\varepsilon}+\partial_x\bigl(u\rho_2^{\varepsilon }\bigr)=\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon}\bigr)\quad\mbox {(constraint (138) with $n=2$)} & \mbox{(c)} \end{array} \right . $$
(142)
that is to say
$$ \partial_t \rho^{\varepsilon}+\partial_x\bigl(u\rho^{\varepsilon}\bigr)=\sqrt {\varepsilon}\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_0^{\varepsilon } \bigr)+\varepsilon\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon } \bigr)+\mathcal{O}(\varepsilon) $$
(143)
since \(\rho^{\varepsilon}=\rho_{0}^{\varepsilon}+\sqrt{\varepsilon}\rho_{1}^{\varepsilon}+\varepsilon \rho_{2}^{\varepsilon}+\mathcal{O}(\varepsilon^{3/2})\). Let us note that the term of order ε in (143) is obtained by supposing that \(\mathcal{F}^{\varepsilon}(\mathcal{G}_{n}^{\varepsilon})=\mathcal {O}(1/\sqrt{\varepsilon})\) (∀n≥0) because of the velocity \(v_{q}^{\varepsilon}\) in (139). Thus, we obtain (113) by using the following lemma:
Lemma A.2
We have
$$ \left \{ \begin{array} {l@{\quad}l} \displaystyle \mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_0^{\varepsilon } \bigr)=\frac{\nu}{\sqrt{\varepsilon}}\partial_{xx}^2\rho_0^{\varepsilon }+ \mathcal{O}(\sqrt{\varepsilon}), & {(\mathrm{a})} \\ \displaystyle \mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon } \bigr)=\frac{\nu}{\sqrt{\varepsilon}}\partial_{xx}^2\rho_1^{\varepsilon }+ \mathcal{O}(1). & {(\mathrm{b})} \end{array} \right . $$
(144)
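Identity (144)(a) can be verified symbolically from the definitions (136)(b) and (139); the sketch below (assuming sympy is available, with \(m_q^{\varepsilon}=\frac{1}{2}(1+u/v_q^{\varepsilon})\) and \(\partial_t\rho_0^{\varepsilon}\) replaced via (142)(a)) even identifies the \(\mathcal{O}(\sqrt{\varepsilon})\) remainder as \(-\sqrt{\varepsilon}\,\partial_x[u\partial_x(u\rho_0^{\varepsilon})]\):

```python
import sympy as sp

# F(G_0) from (139) with b_q from (136)(b), G_0 = (1,1) rho_0 and
# d_t rho_0 = -d_x(u rho_0) (constraint (142)(a)); we check the exact identity
# F(G_0) = (nu/sqrt(eps)) d_xx rho_0 - sqrt(eps) d_x[u d_x(u rho_0)].
x = sp.Symbol('x')
nu, eps = sp.symbols('nu varepsilon', positive=True)
rho0, u = sp.Function('rho_0')(x), sp.Function('u')(x)
rho0_t = -sp.diff(u * rho0, x)                  # (142)(a)

F = 0
for q in (1, 2):
    vq = (-1) ** q * sp.sqrt(nu / eps)
    m = (1 + u / vq) / 2                        # normalized so m_1 + m_2 = 1
    b = sp.sqrt(eps) / m * (m * rho0_t + vq * sp.diff(m * rho0, x))
    F += (vq - u) * m * b
F = sp.diff(F, x)

target = (nu / sp.sqrt(eps) * sp.diff(rho0, x, 2)
          - sp.sqrt(eps) * sp.diff(u * sp.diff(u * rho0, x), x))
residual = sp.simplify(sp.expand(F - target))
```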
Moreover, we have the following result which will allow us to obtain (115):
Lemma A.3
We have
$$ \left \{ \begin{array} {l@{\quad}l} \displaystyle g_{1,q}^{\varepsilon}= \sqrt{\varepsilon}\partial_x\bigl(u\rho _0^{\varepsilon} \bigr)-(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon }\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}+\rho_1^{\varepsilon}, & {(\mathrm{a})} \\ \displaystyle g_{2,q}^{\varepsilon}=-\nu \biggl[\partial_{xx}^2 \rho _0^{\varepsilon}-\frac{\partial_{xx}^2 (m_q^{\varepsilon}\rho _0^{\varepsilon} )}{m_q^{\varepsilon}} \biggr]-(-1)^q \sqrt{\nu}\frac {\partial_x (m_q^{\varepsilon}\rho_1^{\varepsilon} )}{m_q^{\varepsilon}}+\rho_2^{\varepsilon}+\mathcal{O}(\sqrt{ \varepsilon }). & {(\mathrm{b})} \end{array} \right . $$
(145)
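Formula (145)(a) follows from the recurrence (140) applied to \(b_q^{\varepsilon}(\mathcal{G}_0^{\varepsilon})\); a symbolic sketch (assuming sympy is available, with the normalization \(m_q^{\varepsilon}=\frac{1}{2}(1+u/v_q^{\varepsilon})\)):

```python
import sympy as sp

# Check that g_{1,q} = -b_q(G_0) + rho_1 (recurrence (140)) coincides with
# (145)(a), using d_t rho_0 = -d_x(u rho_0) from (142)(a).
x = sp.Symbol('x')
nu, eps = sp.symbols('nu varepsilon', positive=True)
rho0 = sp.Function('rho_0')(x)
rho1 = sp.Function('rho_1')(x)
u = sp.Function('u')(x)
rho0_t = -sp.diff(u * rho0, x)                  # (142)(a)

residuals = []
for q in (1, 2):
    sgn = (-1) ** q
    vq = sgn * sp.sqrt(nu / eps)
    m = (1 + u / vq) / 2
    b = sp.sqrt(eps) / m * (m * rho0_t + vq * sp.diff(m * rho0, x))
    g1 = -b + rho1                              # (140)
    stated = (sp.sqrt(eps) * sp.diff(u * rho0, x)
              - sgn * sp.sqrt(nu) * sp.diff(m * rho0, x) / m + rho1)
    residuals.append(sp.simplify(g1 - stated))
```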
Thus, by using (145), we obtain
$$\begin{aligned} m_q^{\varepsilon}\bigl(g_{0,q}^{\varepsilon}+\sqrt{ \varepsilon }g_{1,q}^{\varepsilon}+\varepsilon g_{2,q}^{\varepsilon} \bigr) =&\frac{1}{2} \biggl(1+(-1)^q\sqrt{\frac{\varepsilon}{\nu }}u \biggr) \\ &{}\times\biggl\{ \rho_0^{\varepsilon}+\sqrt{\varepsilon}\rho _1^{\varepsilon}+\varepsilon\rho_2^{\varepsilon}-(-1)^q \frac{\sqrt {\varepsilon\nu}}{m_q^{\varepsilon}}\partial_x \bigl[m_q^{\varepsilon } \bigl(\rho_0^{\varepsilon}+\sqrt{\varepsilon}\rho_1^{\varepsilon} \bigr) \bigr] \\ &{}+\varepsilon\partial_x\bigl(u\rho_0^{\varepsilon } \bigr)-\varepsilon\nu \biggl[\partial_{xx}^2 \rho_0-\frac{\partial_{xx}^2 (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}} \biggr]\biggr\} +\mathcal{O}\bigl( \varepsilon^{3/2}\bigr). \end{aligned}$$
By noting that \(\partial_{x}m_{q}^{\varepsilon}=(-1)^{q}\sqrt{\frac {\varepsilon}{\nu}}\cdot\frac{u'(x)}{2}\), we deduce from the previous equality that
$$\begin{aligned} &m_q^{\varepsilon}\bigl(g_{0,q}^{\varepsilon}+\sqrt{\varepsilon}g_{1,q}^{\varepsilon}+\varepsilon g_{2,q}^{\varepsilon}\bigr) \\ &\quad= \frac{1}{2} \biggl(1+(-1)^q\sqrt{\frac{\varepsilon}{\nu}}u \biggr) \\ &\qquad{}\times \biggl[\rho_0^{\varepsilon}+\sqrt{\varepsilon}\rho_1^{\varepsilon}+\varepsilon\rho_2^{\varepsilon}-(-1)^q \sqrt{\varepsilon\nu}\partial_x\bigl(\rho_0^{\varepsilon}+ \sqrt{\varepsilon}\rho_1^{\varepsilon}\bigr)-\varepsilon \rho_0^{\varepsilon}\frac{u'(x)}{2m_q^{\varepsilon}}+\varepsilon \partial_x\bigl(u\rho_0^{\varepsilon}\bigr) \biggr]+ \mathcal{O}\bigl(\varepsilon^{3/2}\bigr) \\ &\quad=\frac{1}{2} \bigl(\rho_0^{\varepsilon}+\sqrt{\varepsilon}\rho_1^{\varepsilon}+\varepsilon\rho_2^{\varepsilon} \bigr) \biggl(1+(-1)^q\sqrt{\frac{\varepsilon}{\nu}}u \biggr) \\ &\qquad{}-(-1)^q\frac{\sqrt{\varepsilon\nu}}{2}\partial_x\bigl(\rho_0^{\varepsilon}+\sqrt{\varepsilon}\rho_1^{\varepsilon} \bigr)-\frac{\varepsilon}{2} \biggl[\rho_0^{\varepsilon} \frac{u'(x)}{2m_q^{\varepsilon}}+u\partial_x\rho_0^{\varepsilon}-\partial_x\bigl(u\rho_0^{\varepsilon}\bigr) \biggr]+ \mathcal{O}\bigl(\varepsilon^{3/2}\bigr) \end{aligned}$$
that is to say
$$f_q^{\varepsilon}=\frac{\rho^{\varepsilon}}{2} \biggl(1+(-1)^q \sqrt{\frac {\varepsilon}{\nu}}u \biggr)-(-1)^q\frac{\sqrt{\varepsilon\nu}}{2}\partial _x\rho^{\varepsilon}-\frac{\varepsilon}{2} \biggl[ \rho_0^{\varepsilon}\frac {u'(x)}{2m_q^{\varepsilon}}+u\partial_x \rho_0^{\varepsilon}-\partial _x\bigl(u \rho_0^{\varepsilon}\bigr) \biggr]+\mathcal{O}\bigl( \varepsilon^{3/2}\bigr). $$
Since \(\rho_{0}^{\varepsilon}\frac{u'(x)}{2m_{q}^{\varepsilon}}+u\partial _{x}\rho_{0}^{\varepsilon}=\partial_{x}(u\rho_{0}^{\varepsilon})+\mathcal {O}(\sqrt{\varepsilon})\), we obtain
$$ f_q^{\varepsilon}(t,x)=\frac{\rho^{\varepsilon}}{2} \biggl[1+(-1)^q \sqrt {\varepsilon} \biggl(\frac{u(x)}{\sqrt{\nu}}-\sqrt{\nu}\frac{\partial _x\rho^{\varepsilon}}{\rho^{\varepsilon}} \biggr) \biggr]+\mathcal {O}\bigl(\varepsilon^{3/2}\bigr) $$
(146)
which is exactly the expansion (115).
It remains to prove Lemmas A.1, A.2 and A.3:
Proof of Lemma A.1
The matrix \(\mathbf{A}^{\varepsilon}\) is not invertible and its kernel is given by
$$\mathit{Ker} \mathbf{A}^{\varepsilon}= \bigl\{ X\in \mathbb{R}^2\mbox{ such that } X=\mu(1,1)^T,\mu\in \mathbb{R}\bigr\} . $$
Moreover, \(\mathbf{A}^{\varepsilon}\) admits the eigenvalue λ=−1, whose eigenspace is given by
$$\begin{aligned} \mathcal{E}^{\varepsilon,\lambda=-1} =& \bigl\{ X\in \mathbb{R}^2\mbox{ such that } X= \mu\bigl(m_2^{\varepsilon},-m_1^{\varepsilon} \bigr)^T,\mu\in \mathbb{R}\bigr\} \quad\mbox{(a)} \\ =& \biggl\{ X\in \mathbb{R}^2\mbox{ such that }\sum _{q\in\{1,2\}}X_qm_q^{\varepsilon}=0 \biggr\} . \quad \mbox{(b)} \end{aligned}$$
(147)
Let us note that \(\mathcal{E}^{\varepsilon,\lambda=-1}\) depends on ε (which is not the case of \(\mathit{Ker}\,\mathbf{A}^{\varepsilon}\)), that \(\mathit{Ker}\,\mathbf{A}^{\varepsilon}\oplus\mathcal{E}^{\varepsilon,\lambda=-1}=\mathbb{R}^{2}\) and that \(\mathcal{E}^{\varepsilon,\lambda=-1}\perp(m_{1}^{\varepsilon},m_{2}^{\varepsilon})^{T}\). The linear map \(X\mapsto\mathbf{A}^{\varepsilon}\cdot X\) defines a bijection from \(\mathcal{E}^{\varepsilon,\lambda=-1}\) onto \(\mathcal{E}^{\varepsilon,\lambda=-1}\). Thus, we can solve the linear system (135) if and only if
$$ \forall n\geq0:\quad\mathcal{B}^{\varepsilon}\bigl(\mathcal {G}_n^{\varepsilon}\bigr)\in\mathcal{E}^{\varepsilon,\lambda=-1}. $$
(148)
This corresponds to the Fredholm alternative in finite dimension. Thus, by using (147)(b), the vector \(\mathcal {B}^{\varepsilon}(\mathcal{G}_{n}^{\varepsilon})\) has to verify the constraint
$$ \forall n\geq0:\quad\sum _{q\in\{1,2\}}m_q^{\varepsilon }b_q^{\varepsilon} \bigl(\mathcal{G}_n^{\varepsilon}\bigr)=0 $$
(149)
that is to say
$$\forall n\geq0:\quad\sum_{q\in\{1,2\}} \bigl[\partial _t\bigl(m_q^{\varepsilon}g_{n,q}^{\varepsilon} \bigr)+v_q^{\varepsilon}\partial _x\bigl(m_q^{\varepsilon}g_{n,q}^{\varepsilon} \bigr) \bigr]=0 $$
which is equivalent to
$$ \forall n\geq0:\quad \partial_t\rho_n^{\varepsilon }+\partial_x \bigl(u\rho_n^{\varepsilon}\bigr)=-\partial_x \biggl[ \sum_{q\in\{ 1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}g_{n,q}^{\varepsilon} \biggr] $$
(150)
by using (132). Moreover, we have
$$\begin{aligned} &\forall n\geq0:\quad\mathbf{A}^{\varepsilon}\cdot\mathcal {G}_{n+1}^{\varepsilon}= \mathcal{B}^{\varepsilon}\bigl(\mathcal {G}_n^{\varepsilon}\bigr) \quad\mbox{and}\quad\mathcal{B}^{\varepsilon }\bigl(\mathcal{G}_n^{\varepsilon} \bigr)\in\mathcal{E}^{\varepsilon,\lambda =-1}\\ &\quad\Longrightarrow\quad\mathcal{G}_{n+1}^{\varepsilon}=- \mathcal {B}^{\varepsilon}\bigl(\mathcal{G}_n^{\varepsilon}\bigr)+ \mu_{n+1} \left ( \begin{array}{c} 1\\ 1 \end{array} \right ) \end{aligned}$$
where \(\mu_{n+1}\in \mathbb{R}\). Thus, by using (132) and (149), we obtain \(\rho_{n+1}^{\varepsilon}=\sum_{q\in\{1,2\}}m_{q}^{\varepsilon}g_{n+1,q}^{\varepsilon}=0+\mu _{n+1}\) which implies that \(\mathcal{G}_{n+1}^{\varepsilon}\) is given by (140). As a consequence, we have
$$\begin{aligned} -\partial_x \biggl[\sum_{q\in\{1,2\}} \bigl(v_q^{\varepsilon }-u\bigr)m_q^{\varepsilon}g_{n,q}^{\varepsilon} \biggr] =& -\partial_x \biggl\{ \sum_{q\in\{1,2\}} \bigl(v_q^{\varepsilon }-u\bigr)m_q^{\varepsilon} \bigl[-b_q^{\varepsilon}\bigl(\mathcal {G}_{n-1}^{\varepsilon} \bigr)+\rho_n^{\varepsilon} \bigr] \biggr\} \\ =& \mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_{n-1}^{\varepsilon} \bigr)+0 \end{aligned}$$
by using the fact that
$$\sum_{q\in\{1,2\}} \left ( \begin{array}{c} 1\\ v_q^{\varepsilon} \end{array} \right ) m_q^{\varepsilon}= \left ( \begin{array}{c} 1\\ u \end{array} \right ) $$
and Definition (139), which yields (138) by taking into account (150). Finally, we have proven that (135) admits a solution \(\{\mathcal{G}_{n}^{\varepsilon}\}_{n\geq0}\) under the constraints (138). Moreover, this solution is unique since (138) are linear PDEs which admit a unique solution. □
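The two linear-algebra facts used in this proof (kernel spanned by \((1,1)^{T}\), eigenvalue −1 with eigenvector \((m_2^{\varepsilon},-m_1^{\varepsilon})^{T}\)) can be spot-checked numerically. The sketch below uses the normalization \(m_q^{\varepsilon}=\frac{1}{2}(1+u/v_q^{\varepsilon})\), so that \(m_1^{\varepsilon}+m_2^{\varepsilon}=1\), which is what makes the eigenvalue equal to −1:

```python
import math
import random

# Random spot-check of Ker(A) = span{(1,1)} and A.(m2,-m1) = -(m2,-m1),
# with m_q = (1 + u/v_q)/2 and v_q = (-1)^q sqrt(nu/eps).
random.seed(1)
for _ in range(100):
    u = random.uniform(-1.0, 1.0)
    nu = random.uniform(0.1, 1.0)
    eps = random.uniform(0.01, 0.5)
    v = [(-1) ** q * math.sqrt(nu / eps) for q in (1, 2)]
    m1, m2 = ((1 + u / vq) / 2 for vq in v)
    A = [[-m2, m2], [m1, -m1]]
    kernel = [A[0][0] + A[0][1], A[1][0] + A[1][1]]          # A.(1,1)
    eig = [A[0][0] * m2 - A[0][1] * m1,
           A[1][0] * m2 - A[1][1] * m1]                      # A.(m2,-m1)
    assert abs(kernel[0]) < 1e-12 and abs(kernel[1]) < 1e-12
    assert abs(eig[0] + m2) < 1e-12 and abs(eig[1] - m1) < 1e-12
checked = True
```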
Proof of Lemmas A.2 and A.3
We first prove (144)(a) and (145)(a); then we prove (144)(b) and (145)(b). We have
$$\begin{aligned} \mathcal{F}^{\varepsilon}(\mathcal{G}_0) =&\partial_x \biggl[\sum_{q\in\{1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon }b_q^{\varepsilon}\bigl( \mathcal{G}_0^{\varepsilon}\bigr) \biggr]\\ =& \sqrt{\varepsilon} \partial_x \biggl[\sum_{q\in\{1,2\} } \bigl(v_q^{\varepsilon}-u\bigr)m_q^{\varepsilon} \partial_t\rho_0^{\varepsilon } \biggr]+\sqrt{ \varepsilon}\partial_x \biggl[\sum_{q\in\{1,2\} } \bigl(v_q^{\varepsilon}-u\bigr)v_q^{\varepsilon} \partial_x\bigl(m_q^{\varepsilon}\rho _0^{\varepsilon}\bigr) \biggr] \\ =&0+\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{ 1,2\}}{v_q^{\varepsilon}}^2 \partial_x\bigl(m_q^{\varepsilon}\rho _0^{\varepsilon}\bigr) \biggr]-\sqrt{\varepsilon} \partial_x \biggl[u\partial _x \biggl(\sum _{q\in\{1,2\}}v_q^{\varepsilon}m_q^{\varepsilon} \rho _0^{\varepsilon} \biggr) \biggr] \\ =&\frac{\nu}{\sqrt{\varepsilon}}\partial_x \biggl[\sum _{q\in\{1,2\}}\partial_x\bigl(m_q^{\varepsilon} \rho_0^{\varepsilon }\bigr) \biggr]-\sqrt{\varepsilon} \partial_x \bigl[u\partial_x\bigl(u\rho _0^{\varepsilon}\bigr) \bigr]=\frac{\nu}{\sqrt{\varepsilon }} \partial_{xx}^2\rho_0^{\varepsilon}+ \mathcal{O}(\sqrt{\varepsilon}) \end{aligned}$$
which gives (144)(a). Moreover, we have
$$\begin{aligned} b_q^{\varepsilon}\bigl(\mathcal{G}_0^{\varepsilon} \bigr) =&\sqrt {\varepsilon}\frac{\partial_t (m_q^{\varepsilon}\rho_0^{\varepsilon } )}{m_q^{\varepsilon}}+\sqrt{\varepsilon}v_q^{\varepsilon} \frac {\partial_x (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}=\sqrt{\varepsilon}\partial_t\rho _0^{\varepsilon}+(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon }\rho_0^{\varepsilon} )}{m_q^{\varepsilon}} \\ =&-\sqrt{\varepsilon}\partial_x\bigl(u\rho_0^{\varepsilon } \bigr)+(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon}\rho _0^{\varepsilon} )}{m_q^{\varepsilon}} \end{aligned}$$
by using (142)(a). We obtain (145)(a) by using (140). In the same way, we have
$$\begin{aligned} \mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon} \bigr) =& \partial_x \biggl[\sum_{q\in\{1,2\}} \bigl(v_q^{\varepsilon }-u\bigr)m_q^{\varepsilon}b_q^{\varepsilon}( \mathcal{G}_1) \biggr]\\ =&\sqrt{\varepsilon}\partial_x \biggl[\sum_{q\in\{ 1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}\partial _tg_{1,q}^{\varepsilon} \biggr]+\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)v_q^{\varepsilon}\partial _x\bigl(m_q^{\varepsilon}g_{1,q}^{\varepsilon} \bigr) \biggr] \\ =&\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{ 1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}\partial _tg_{1,q}^{\varepsilon} \biggr]+\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{1,2\}}{v_q^{\varepsilon}}^2 \partial_x\bigl(m_q^{\varepsilon }g_{1,q}^{\varepsilon} \bigr) \biggr]\\ &{}-\sqrt{\varepsilon}\partial_x \biggl[u \partial_x \biggl(\sum_{q\in\{1,2\}}v_q^{\varepsilon }m_q^{\varepsilon}g_{1,q}^{\varepsilon} \biggr) \biggr] \\ =&\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{ 1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}\partial _tg_{1,q}^{\varepsilon} \biggr]+\frac{\nu}{\sqrt{\varepsilon }}\partial_x \biggl[\sum _{q\in\{1,2\}}\partial_x\bigl(m_q^{\varepsilon }g_{1,q}^{\varepsilon} \bigr) \biggr]+\mathcal{O}(1) \\ =&\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{ 1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}\partial _tg_{1,q}^{\varepsilon} \biggr]+\frac{\nu}{\sqrt{\varepsilon }}\partial_{xx}^2 \rho_1^{\varepsilon}+\mathcal{O}(1). \end{aligned}$$
But, by using (145)(a), we also have
$$\begin{aligned} &\sqrt{\varepsilon}\partial_x \biggl[\sum _{q\in\{1,2\} }\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon}\partial_tg_{1,q}^{\varepsilon } \biggr] \\ &\quad= \sqrt{\varepsilon}\partial^2_{tx}\sum _{q\in\{1,2\}}\bigl(v_q^{\varepsilon}-u \bigr)m_q^{\varepsilon} \biggl[\sqrt {\varepsilon} \partial_x\bigl(u\rho_0^{\varepsilon} \bigr)-(-1)^q\sqrt{\nu}\frac {\partial_x (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}+\rho_1^{\varepsilon} \biggr] \\ &\quad= 0-\sqrt{\varepsilon\nu}\partial^2_{tx}\sum _{q\in\{1,2\}}(-1)^q\bigl(v_q^{\varepsilon}-u \bigr)\partial_x \bigl(m_q^{\varepsilon} \rho_0^{\varepsilon} \bigr)=\mathcal{O}(1). \end{aligned}$$
Thus, we can write that
$$\mathcal{F}^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon}\bigr)= \frac{\nu}{\sqrt {\varepsilon}}\partial_{xx}^2\rho_1^{\varepsilon}+ \mathcal{O}(1) $$
which gives (144)(b). Moreover, by taking into account (145)(a), we obtain
$$\begin{aligned} b_q^{\varepsilon}\bigl(\mathcal{G}_1^{\varepsilon} \bigr) =&\sqrt {\varepsilon}\frac{\partial_t (m_q^{\varepsilon}g_{1,q}^{\varepsilon } )}{m_q^{\varepsilon}}+\sqrt{\varepsilon}v_q^{\varepsilon} \frac {\partial_x (m_q^{\varepsilon}g_{1,q}^{\varepsilon} )}{m_q^{\varepsilon}} \\ =&\sqrt{\varepsilon}\partial_t \biggl(\sqrt{\varepsilon } \partial_x\bigl(u\rho_0^{\varepsilon} \bigr)-(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}+\rho _1^{\varepsilon} \biggr) \\ &{}+(-1)^q\sqrt{\nu}\frac{\partial_x [m_q^{\varepsilon } (\sqrt{\varepsilon}\partial_x(u\rho_0^{\varepsilon})-(-1)^q\sqrt {\nu}\frac{\partial_x (m_q^{\varepsilon}\rho _0^{\varepsilon} )}{m_q^{\varepsilon}}+\rho_1^{\varepsilon} ) ]}{m_q^{\varepsilon}} \\ =&\sqrt{\varepsilon}\partial_t\rho_1^{\varepsilon}- \nu \frac{\partial^2_{xx} (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}+(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon}\rho_1^{\varepsilon} )}{m_q^{\varepsilon }}+ \mathcal{O}(\sqrt{\varepsilon}) \\ =&\nu\partial_{xx}^2\rho_0^{\varepsilon}-\nu \frac{\partial ^2_{xx} (m_q^{\varepsilon}\rho_0^{\varepsilon} )}{m_q^{\varepsilon}}+(-1)^q\sqrt{\nu}\frac{\partial_x (m_q^{\varepsilon}\rho_1^{\varepsilon} )}{m_q^{\varepsilon }}+\mathcal{O}( \sqrt{\varepsilon}) \end{aligned}$$
by also using (142)(b) and (144)(a). Then, we obtain (145)(b) by using (140). □
Appendix B: The LBM Scheme Written as a Function of \(f_{q}\) when u(x)=0
When u(x)=0, the LBM scheme (35) is given by
$$ \left \{ \begin{array} {l} \displaystyle g_{1,i}^{n+1}=g_{1,i+1}^n \biggl(1-\frac{\eta}{2} \biggr)+g_{2,i+1}^n\frac{\eta}{2}, \\ \displaystyle g_{2,i}^{n+1}=g_{2,i-1}^n \biggl(1-\frac{\eta}{2} \biggr)+g_{1,i-1}^n \frac{\eta}{2}, \\ \displaystyle \rho_i^{n+1}=g_{1,i}^{n+1}+g_{2,i}^{n+1}. \end{array} \right . $$
(151)
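The update (151) is straightforward to experiment with. Below is a minimal sketch on a periodic grid (NumPy; the function and variable names are our own). One step conserves the total mass \(\sum_i\rho_i\), since the coefficients of each incoming distribution sum to one.

```python
import numpy as np

def lbm_step_u0(g1, g2, eta):
    """One step of the LBM scheme (151) when u(x)=0, on a periodic grid.

    g1 streams from cell i+1, g2 from cell i-1, with relaxation parameter eta.
    Returns the updated distributions and the density rho = g1 + g2.
    """
    g1_new = (1 - eta / 2) * np.roll(g1, -1) + (eta / 2) * np.roll(g2, -1)
    g2_new = (1 - eta / 2) * np.roll(g2, +1) + (eta / 2) * np.roll(g1, +1)
    return g1_new, g2_new, g1_new + g2_new
```

Starting from arbitrary distributions, the sum of the density over the grid is unchanged after a step.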
On the other hand, by using (20), we have \(g_{q}=f_{q}-\frac{\Delta t}{2\varepsilon}(M_{q}-f_{q})\), that is to say
$$ \left \{ \begin{array} {l} \displaystyle g_1=f_1 \biggl(1+\frac{1}{4C_d} \biggr)-\frac{f_2}{4C_d}, \\ \displaystyle g_2=f_2 \biggl(1+\frac{1}{4C_d} \biggr)-\frac{f_1}{4C_d} \end{array} \right . $$
(152)
since \(\varepsilon=C_{d}\Delta t\) and \(M_{1}=\frac{f_{1}+f_{2}}{2}\). By substituting (152) into (151), we obtain
$$A \left ( \begin{array}{c} f_1\\ f_2 \end{array} \right )_i^{n+1} =b $$
with
$$A= \left ( \begin{array}{c@{\quad}c} 1+\frac{1}{4C_d} & -\frac{1}{4C_d}\\ -\frac{1}{4C_d} & 1+\frac{1}{4C_d} \end{array} \right ) $$
and
$$b= \left ( \begin{array}{c} [f_{1,i+1}^n (1+\frac{1}{4C_d} )-\frac {f_{2,i+1}^n}{4C_d} ]\cdot (1-\frac{\eta}{2} )+ [f_{2,i+1}^n (1+\frac{1}{4C_d} )-\frac{f_{1,i+1}^n}{4C_d} ]\cdot\frac{\eta}{2}\\ {}[f_{2,i-1}^n (1+\frac{1}{4C_d} )-\frac {f_{1,i-1}^n}{4C_d} ]\cdot (1-\frac{\eta}{2} )+ [f_{1,i-1}^n (1+\frac{1}{4C_d} )-\frac{f_{2,i-1}^n}{4C_d} ]\cdot\frac{\eta}{2} \end{array} \right ). $$
By using the fact that
$$A^{-1}= \frac{1}{C_d+1/2}\left ( \begin{array}{c@{\quad}c} C_d+1/4 & 1/4\\ 1/4 & C_d+1/4 \end{array} \right ) $$
and that \(\eta=\frac{1}{C_{d}+1/2}\), we obtain
$$\begin{aligned} f_{1,i}^{n+1} =& \frac {C_d+1/4}{(C_d+1/2)^2} \biggl\{ C_d \biggl[f_{1,i+1}^n \biggl(1+\frac{1}{4C_d} \biggr)-\frac{f_{2,i+1}^n}{4C_d} \biggr]+\frac{1}{2} \biggl[f_{2,i+1}^n \biggl(1+\frac{1}{4C_d} \biggr)-\frac{f_{1,i+1}^n}{4C_d} \biggr] \biggr\} \\ &{}+\frac{1}{4(C_d+1/2)^2} \biggl\{ C_d \biggl[f_{2,i-1}^n \biggl(1+\frac{1}{4C_d} \biggr)- \frac{f_{1,i-1}^n}{4C_d} \biggr]\\ &{}+\frac{1}{2} \biggl[f_{1,i-1}^n \biggl(1+\frac{1}{4C_d} \biggr)-\frac{f_{2,i-1}^n}{4C_d} \biggr] \biggr\} \end{aligned}$$
that is to say
$$\begin{aligned} f_{1,i}^{n+1} =& \frac {4C_d+1}{4(C_d+1/2)^2} \biggl\{ f_{1,i+1}^n\cdot\frac {8C_d^2+2C_d-1}{8C_d}+f_{2,i+1}^n \cdot\frac{C_d+1/2}{4C_d} \biggr\} \\ &{}+\frac{1}{4(C_d+1/2)^2} \biggl\{ f_{2,i-1}^n \cdot\frac {8C_d^2+2C_d-1}{8C_d}+f_{1,i-1}^n\cdot\frac{C_d+1/2}{4C_d} \biggr\} . \end{aligned}$$
By noting that \(8C_{d}^{2}+2C_{d}-1=2(4C_{d}-1)\cdot(C_{d}+1/2)\), we finally obtain
$$f_{1,i}^{n+1}=\frac {f_{1,i+1}^n(16C_d^2-1)+f_{2,i+1}^n(4C_d+1)+f_{2,i-1}^n(4C_d-1)+f_{1,i-1}^n}{16C_d(C_d+\frac{1}{2})} $$
which gives (36)(a). We obtain (36)(b) by symmetry.
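The equivalence just derived can also be checked numerically: one step of the g-form scheme (151) followed by the inverse change of variables must coincide with the direct f-form update (36)(a). A sketch on a periodic grid (names are ours):

```python
import numpy as np

def check_f_update(Cd=0.8, N=16, seed=1):
    """Check that the f-form update (36)(a) reproduces one step of the
    g-form scheme (151) through the change of variables (152)."""
    eta = 1.0 / (Cd + 0.5)
    rng = np.random.default_rng(seed)
    f1, f2 = rng.random(N), rng.random(N)
    c = 1.0 / (4 * Cd)
    # Change of variables (152): g = A f.
    g1 = (1 + c) * f1 - c * f2
    g2 = (1 + c) * f2 - c * f1
    # One step of (151) on a periodic grid.
    g1n = (1 - eta / 2) * np.roll(g1, -1) + (eta / 2) * np.roll(g2, -1)
    g2n = (1 - eta / 2) * np.roll(g2, +1) + (eta / 2) * np.roll(g1, +1)
    # Recover f^{n+1} with the inverse matrix A^{-1} given in the text.
    k = 1.0 / (Cd + 0.5)
    f1n = k * ((Cd + 0.25) * g1n + 0.25 * g2n)
    # Direct update (36)(a).
    f1_direct = ((16 * Cd**2 - 1) * np.roll(f1, -1)
                 + (4 * Cd + 1) * np.roll(f2, -1)
                 + (4 * Cd - 1) * np.roll(f2, +1)
                 + np.roll(f1, +1)) / (16 * Cd * (Cd + 0.5))
    return np.max(np.abs(f1n - f1_direct))
```

The returned discrepancy is at round-off level for any \(C_{d}>0\).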
Appendix C: Proof of Property 3.1
The scheme (37) is equivalent to the scheme
$$\left \{ \begin{array}{l} \displaystyle g_{1,i+1}^{n}=g_{1,i}^{n+1}(1-\hat{\eta})+M_{1,i}^{n+1}\hat{\eta},\\ \displaystyle g_{2,i-1}^{n}=g_{2,i}^{n+1}(1-\hat{\eta})+M_{2,i}^{n+1}\hat{\eta} \end{array} \right . $$
that is to say to the scheme
$$\left \{ \begin{array} {l} \displaystyle g_{1,i+1}^{n}=g_{1,i}^{n+1} \biggl[1-\frac{\hat{\eta}}{2} \biggl(1+\frac{\Delta t}{\Delta x}u(x_i) \biggr) \biggr]+g_{2,i}^{n+1}\frac{\hat {\eta}}{2} \biggl(1- \frac{\Delta t}{\Delta x}u(x_i) \biggr), \\ \displaystyle g_{2,i-1}^{n}=g_{2,i}^{n+1} \biggl[1-\frac{\hat{\eta}}{2} \biggl(1-\frac{\Delta t}{\Delta x}u(x_i) \biggr) \biggr]+g_{1,i}^{n+1}\frac{\hat {\eta}}{2} \biggl(1+ \frac{\Delta t}{\Delta x}u(x_i) \biggr) \end{array} \right . $$
since \(M_{q,i}=\frac{g_{1,i}+g_{2,i}}{2} (1+(-1)^{q}\frac{\Delta t}{\Delta x}u(x_{i}) )\). We end the proof by noting that
$$\begin{aligned} &\left ( \begin{array}{c@{\quad}c} 1-\frac{\hat{\eta}}{2} (1+\frac{\Delta t}{\Delta x}u(x_i) )&\frac{\hat{\eta}}{2} (1-\frac{\Delta t}{\Delta x}u(x_i) )\\ \frac{\hat{\eta}}{2} (1+\frac{\Delta t}{\Delta x}u(x_i) )&1-\frac{\hat{\eta}}{2} (1-\frac{\Delta t}{\Delta x}u(x_i) ) \end{array} \right )^{-1}\\ &\quad = \left ( \begin{array}{c@{\quad}c} 1-\frac{\eta}{2} (1+\frac{\Delta t}{\Delta x}u(x_i) )&\frac{\eta}{2} (1-\frac{\Delta t}{\Delta x}u(x_i) )\\ \frac{\eta}{2} (1+\frac{\Delta t}{\Delta x}u(x_i) )&1-\frac{\eta}{2} (1-\frac{\Delta t}{\Delta x}u(x_i) ) \end{array} \right ) \end{aligned}$$
which comes from the fact that \(\hat{\eta}+\eta=\hat{\eta}\eta\).
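As a sanity check, this matrix identity is easy to verify numerically; in the sketch below, \(\hat{\eta}\) is taken as \(\eta/(\eta-1)\), which solves \(\hat{\eta}+\eta=\hat{\eta}\eta\):

```python
import numpy as np

def collision_matrix(e, lu):
    """The 2x2 matrix appearing in the proof, with relaxation parameter e
    and lu = (Delta t / Delta x) * u(x_i)."""
    return np.array([[1 - e / 2 * (1 + lu), e / 2 * (1 - lu)],
                     [e / 2 * (1 + lu), 1 - e / 2 * (1 - lu)]])

# If eta_hat + eta = eta_hat * eta, the inverse of the eta_hat matrix
# is the eta matrix, as claimed in the proof.
eta = 0.8
eta_hat = eta / (eta - 1.0)   # solves eta_hat + eta = eta_hat * eta
lu = 0.3
M_hat = collision_matrix(eta_hat, lu)
M = collision_matrix(eta, lu)
```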
Appendix D: Proof of Lemmas 4.1, 4.2 and 4.3
In the following proof, we first focus on the LBM∗ scheme. Then, we turn to the LBM scheme, which is harder to study.
Proof of Lemma 4.1
We deduce from the LBM∗ scheme (38) that \(\rho _{i}^{n+1}=g_{1,i+1}^{n}+g_{2,i-1}^{n}\) (n≥0). Thus, by applying (38) again, we find
$$\begin{aligned} \rho_{i}^{n+1} &= g_{1,i+2}^{n-1} \biggl[1-\frac{\eta }{2} \biggl(1+\frac{\Delta t}{\Delta x}u(x_{i+1}) \biggr) \biggr]+g_{2,i}^{n-1}\frac{\eta}{2} \biggl(1- \frac{\Delta t}{\Delta x}u(x_{i+1}) \biggr) \\ &\quad{}+g_{2,i-2}^{n-1} \biggl[1-\frac{\eta}{2} \biggl(1- \frac{\Delta t}{\Delta x}u(x_{i-1}) \biggr) \biggr]+g_{1,i}^{n-1} \frac{\eta}{2} \biggl(1+\frac{\Delta t}{\Delta x}u(x_{i-1}) \biggr)\quad \mbox{with } n\geq1. \end{aligned}$$
(153)
By noting that
$$ \left \{ \begin{array}{l} \displaystyle \rho_{i+1}^{n}=g_{1,i+2}^{n-1}+g_{2,i}^{n-1},\\ \displaystyle \rho_{i-1}^{n}=g_{1,i}^{n-1}+g_{2,i-2}^{n-1} \end{array} \right . \quad\mbox{with } n\geq1, $$
(154)
we deduce from (153) that
$$\begin{aligned} \rho_{i}^{n+1} =& \bigl(\rho_{i+1}^n-g_{2,i}^{n-1} \bigr) \biggl(1-\frac{\eta}{2} \biggr)+g_{2,i}^{n-1} \frac{\eta}{2}+\bigl(\rho _{i-1}^n-g_{1,i}^{n-1} \bigr) \biggl(1-\frac{\eta}{2} \biggr)+g_{1,i}^{n-1} \frac {\eta}{2} \\ &{} -\frac{\eta}{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_{i+1}) \rho_{i+1}^n-u(x_{i-1})\rho_{i-1}^n \bigr]\quad\mbox {with } n\geq1 \end{aligned}$$
that is to say
$$\begin{aligned} \rho_i^{n+1}&= \rho_{i+1}^n \biggl(1-\frac{\eta}{2} \biggr)+ \rho_{i-1}^n \biggl(1-\frac{\eta}{2} \biggr)+ \rho_i^{n-1}(\eta-1) \\ &\quad{}-\frac{\eta}{2}\cdot \frac {\Delta t}{\Delta x} \bigl[u(x_{i+1})\rho_{i+1}^n-u(x_{i-1}) \rho _{i-1}^n \bigr]\quad\mbox{with } n\geq1. \end{aligned}$$
(155)
By using the fact that \(\eta=\frac{1}{C_{d}+\frac{1}{2}}\), we obtain
$$\begin{aligned} &(2C_d+1)\rho_i^{n+1}=2C_d\bigl( \rho_{i+1}^n+\rho_{i-1}^n \bigr)+(1-2C_d)\rho _i^{n-1}-\frac{\Delta t}{\Delta x} \bigl[u(x_{i+1})\rho _{i+1}^n-u(x_{i-1}) \rho_{i-1}^n \bigr]\\ &\quad\mbox{with } n\geq1 \end{aligned}$$
that is to say
$$\begin{aligned} &\rho_i^{n+1}-\rho_i^{n-1}=2C_d \bigl(\rho_{i+1}^n-\rho_i^{n+1}-\rho _i^{n-1}+\rho_{i-1}^n\bigr)- \frac{\Delta t}{\Delta x} \bigl[u(x_{i+1})\rho _{i+1}^n-u(x_{i-1}) \rho_{i-1}^n \bigr]\\ &\quad\mbox{with } n\geq1 \end{aligned}$$
which is equivalent to
$$\begin{aligned} &\frac{\rho_{i}^{n+1}-\rho_i^{n-1}}{2\Delta t}=\frac{\nu}{\Delta x^2}\bigl(\rho_{i+1}^n- \rho_i^{n+1}-\rho_i^{n-1}+ \rho_{i-1}^n\bigr)-\frac {1}{2\Delta x} \bigl[u(x_{i+1}) \rho_{i+1}^n-u(x_{i-1})\rho_{i-1}^n \bigr]\\ &\quad\mbox{with } n\geq1. \end{aligned}$$
We conclude the proof by noting that
$$\left \{ \begin{array} {l} \displaystyle g_{1,i}^{0}= \rho_i^{0}\cdot \biggl[(1-\alpha)-\beta\frac {\Delta t}{\Delta x}u(x_i) \biggr], \\ \displaystyle g_{2,i}^{0}=\rho_i^{0} \cdot \biggl[\alpha+\beta\frac{\Delta t}{\Delta x}u(x_i) \biggr] \end{array} \right . $$
coupled to the LBM∗ scheme (38) implies that
$$\rho_i^{n=1}:=\alpha\rho_{i-1}^{0}+(1- \alpha)\rho_{i+1}^{0}-\beta\frac {\Delta t}{\Delta x} \bigl[ \rho_{i+1}^0u(x_{i+1})-\rho _{i-1}^0u(x_{i-1}) \bigr]. $$
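The three-level relation (155) can be confirmed numerically. The sketch below uses our reading of the LBM∗ update (38), consistent with (153) and (160), on a periodic grid; `lam_u[i]` stores \(\frac{\Delta t}{\Delta x}u(x_{i})\):

```python
import numpy as np

def lbm_star_step(g1, g2, eta, lam_u):
    """One step of the LBM* scheme (38) on a periodic grid."""
    a = 1 - eta / 2 * (1 + lam_u)   # weight of g1 coming from cell i+1
    b = eta / 2 * (1 - lam_u)       # weight of g2 coming from cell i-1
    g1n = a * np.roll(g1, -1) + b * np.roll(g2, +1)
    g2n = (1 - a) * np.roll(g1, -1) + (1 - b) * np.roll(g2, +1)
    return g1n, g2n

# Run two steps and check the Du Fort-Frankel type relation (155) on rho.
eta, N = 0.9, 12
rng = np.random.default_rng(2)
lam_u = 0.1 * np.sin(2 * np.pi * np.arange(N) / N)
g1, g2 = rng.random(N), rng.random(N)
rhos = [g1 + g2]
for _ in range(2):
    g1, g2 = lbm_star_step(g1, g2, eta, lam_u)
    rhos.append(g1 + g2)
r0, r1, r2 = rhos
rhs = (1 - eta / 2) * (np.roll(r1, -1) + np.roll(r1, +1)) + (eta - 1) * r0 \
    - eta / 2 * (np.roll(lam_u * r1, -1) - np.roll(lam_u * r1, +1))
```

The computed `r2` agrees with the right-hand side of (155) to round-off.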
We deduce from the LBM scheme (35) that
$$\rho_i^{n+1}=\bigl(g_{1,i+1}^n+g_{2,i-1}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+\bigl(g_{1,i-1}^n+g_{2,i+1}^n \bigr)\frac{\eta}{2}-\frac{\eta}{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_{i+1})\rho_{i+1}^n-u(x_{i-1}) \rho_{i-1}^n \bigr]. $$
Thus, by applying (35) again, we find
$$\begin{aligned} \rho_i^{n+1} =& \biggl[g_{1,i+2}^{n-1} \biggl(1-\frac{\eta }{2} \biggr)+g_{2,i+2}^{n-1} \frac{\eta}{2}-\frac{\rho_{i+2}^{n-1}}{2}\cdot\eta \frac{\Delta t}{\Delta x}u(x_{i+2}) +g_{2,i-2}^{n-1} \biggl(1-\frac{\eta}{2} \biggr)\\ &{}+g_{1,i-2}^{n-1}\frac{\eta }{2}+\frac{\rho_{i-2}^{n-1}}{2} \cdot\eta\frac{\Delta t}{\Delta x}u(x_{i-2}) \biggr] \biggl(1- \frac{\eta}{2} \biggr) \\ &{}+ \biggl[g_{1,i}^{n-1} \biggl(1-\frac{\eta}{2} \biggr)+g_{2,i}^{n-1}\frac{\eta}{2}-\frac{\rho_{i}^{n-1}}{2} \cdot\eta\frac {\Delta t}{\Delta x}u(x_{i}) +g_{2,i}^{n-1} \biggl(1-\frac{\eta}{2} \biggr)\\ &{}+g_{1,i}^{n-1} \frac{\eta }{2}+\frac{\rho_{i}^{n-1}}{2}\cdot\eta\frac{\Delta t}{\Delta x}u(x_{i}) \biggr]\frac{\eta}{2} \\ &{}-\frac{\eta}{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_{i+1}) \rho_{i+1}^n-u(x_{i-1})\rho_{i-1}^n \bigr]\quad\mbox {with } n\geq1 \end{aligned}$$
which is equivalent to
$$\begin{aligned} \rho_i^{n+1}&= \biggl[\bigl(g_{1,i+2}^{n-1}+g_{2,i}^{n-1} \bigr) \biggl(1-\frac{\eta}{2} \biggr)+\bigl(g_{1,i}^{n-1}+g_{2,i+2}^{n-1} \bigr)\frac{\eta}{2} \\ &\quad{}-\frac {\eta}{2}\cdot\frac{\Delta t}{\Delta x} \bigl(u(x_{i+2})\rho _{i+2}^{n-1}-u(x_i) \rho_i^{n-1}\bigr) \biggr] \biggl(1-\frac{\eta}{2} \biggr) \\ &\quad{}+ \biggl[\bigl(g_{1,i}^{n-1}+g_{2,i-2}^{n-1} \bigr) \biggl(1-\frac{\eta }{2} \biggr)+\bigl(g_{1,i-2}^{n-1}+g_{2,i}^{n-1} \bigr)\frac{\eta}{2} \\ &\quad{}-\frac{\eta}{2}\cdot \frac{\Delta t}{\Delta x} \bigl(u(x_i)\rho_i^{n-1}-u(x_{i-2}) \rho _{i-2}^{n-1}\bigr) \biggr] \biggl(1-\frac{\eta}{2} \biggr) \\ &\quad{}+\bigl(g_{1,i}^{n-1}+g_{2,i}^{n-1} \bigr) (\eta-1)-\frac{\eta}{2}\cdot \frac{\Delta t}{\Delta x}\bigl(u(x_{i+1}) \rho_{i+1}^n-u(x_{i-1})\rho _{i-1}^n \bigr)\quad\mbox{with } n\geq1. \end{aligned}$$
(156)
Moreover, we have
$$\begin{aligned} &\left \{ \begin{array} {l} \displaystyle \rho_{i+1}^n= \bigl(g_{1,i+2}^{n-1}+g_{2,i}^{n-1}\bigr) \biggl(1- \frac {\eta}{2} \biggr)+\bigl(g_{1,i}^{n-1}+g_{2,i+2}^{n-1} \bigr)\frac{\eta}{2}\\ \displaystyle\phantom{\rho_{i+1}^n=}{}-\frac{\eta }{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_{i+2})\rho _{i+2}^{n-1}-u(x_i) \rho_i^{n-1} \bigr], \\ \displaystyle \rho_{i-1}^n=\bigl(g_{1,i}^{n-1}+g_{2,i-2}^{n-1} \bigr) \biggl(1-\frac {\eta}{2} \biggr)+\bigl(g_{1,i-2}^{n-1}+g_{2,i}^{n-1} \bigr)\frac{\eta}{2}\\ \displaystyle\phantom{\rho_{i-1}^n=}{}-\frac{\eta }{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_i)\rho_i^{n-1}-u(x_{i-2})\rho _{i-2}^{n-1} \bigr] \end{array} \right . \\ & \quad\mbox{with }n\geq1 \end{aligned}$$
which allows us to obtain (155) by using (156). We conclude the proof as for the LBM∗ scheme by noting that
$$\left \{ \begin{array} {l} \displaystyle g_{1,i}^{0}= \rho_i^{0}\cdot \biggl[(1-\alpha)-\beta\frac {\Delta t}{\Delta x}u(x_i) \biggr], \\ \displaystyle g_{2,i}^{0}=\rho_i^{0} \cdot \biggl[\alpha+\beta\frac{\Delta t}{\Delta x}u(x_i) \biggr] \end{array} \right . $$
coupled to the LBM scheme (35) implies that
$$\begin{aligned} \rho_i^1 =& \biggl\{ \rho_{i+1}^{0} \cdot \biggl[(1-\alpha )-\beta\frac{\Delta t}{\Delta x}u(x_{i+1}) \biggr]+ \rho_{i-1}^{0}\cdot \biggl[\alpha+\beta\frac{\Delta t}{\Delta x}u(x_{i-1}) \biggr] \biggr\} \biggl(1-\frac{\eta}{2} \biggr) \\ &{}+ \biggl\{ \rho_{i-1}^{0}\cdot \biggl[(1-\alpha)-\beta \frac {\Delta t}{\Delta x}u(x_{i-1}) \biggr]+\rho_{i+1}^{0} \cdot \biggl[\alpha +\beta\frac{\Delta t}{\Delta x}u(x_{i+1}) \biggr] \biggr\} \frac{\eta}{2} \\ &{}-\frac{\eta}{2}\cdot\frac{\Delta t}{\Delta x} \bigl[u(x_{i+1}) \rho_{i+1}^0-u(x_{i-1})\rho_{i-1}^0 \bigr] \\ =& \biggl[\alpha \biggl(1-\frac{\eta}{2} \biggr)+(1-\alpha) \frac {\eta}{2} \biggr]\rho_{i-1}^0+ \biggl[(1-\alpha) \biggl(1-\frac{\eta}{2} \biggr)+\alpha\frac{\eta}{2} \biggr] \rho_{i+1}^0 \\ &{}-\frac{\Delta t}{\Delta x}\rho_{i+1}^0u(x_{i+1}) \biggl[\beta \biggl(1-\frac{\eta}{2} \biggr)-\beta\frac{\eta}{2}+ \frac{\eta}{2} \biggr]\\ &{}+\frac{\Delta t}{\Delta x}\rho_{i-1}^0u(x_{i-1}) \biggl[\beta \biggl(1-\frac{\eta}{2} \biggr)-\beta\frac{\eta}{2}+ \frac{\eta}{2} \biggr] \end{aligned}$$
that is to say
$$\rho_i^1=\xi\rho_{i-1}^0+(1-\xi) \rho_{i+1}^0-\gamma\frac{\Delta t}{\Delta x} \bigl[ \rho_{i+1}^0u(x_{i+1})-\rho_{i-1}^0u(x_{i-1}) \bigr] $$
where \(\xi=\frac{\eta}{2}+\alpha(1-\eta)\) and \(\gamma=\frac{\eta}{2}+\beta(1-\eta)\). □
Proof of Lemma 4.2
To prove Lemma 4.1 in the case of the LBM∗ scheme (38), we used (153) and (154), which come from applying the LBM∗ scheme in the cells i and i±1. Thus, to obtain the equivalence between the LBM∗ scheme (51) (obtained when u(x)=0) and the Du Fort-Frankel scheme (54) in the cell i=1, the LBM∗ scheme has to be applied when i=0, i=1 and i=2. When i=2, there is no difficulty in applying the LBM∗ scheme (51). Nevertheless, when i=0 and i=1, \(g_{2,-1}^{n}\) and \(g_{2,0}^{n}\) have to be defined. When the boundary conditions are periodic, \(g_{2,-1}^{n}\) and \(g_{2,0}^{n}\) are defined. But, when the boundary conditions are not periodic, \(g_{2,-1}^{n}\) and \(g_{2,0}^{n}\) are not defined a priori. We will define these quantities in such a way that the discrete Neumann boundary condition
$$ \rho_{i=0}^n=\rho_{i=1}^n $$
(157)
is satisfied. Let us apply the LBM∗ scheme (51) when i=0. We have
$$\rho_0^{n+1}=g_{1,1}^n+g_{2,-1}^n $$
which implies, by using (157) at the time \(t^{n+1}\), that
$$g_{2,-1}^n=\rho_{1}^{n+1}-g_{1,1}^n. $$
But we also have
$$g_{2,0}^{n+1}=g_{2,-1}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{1,1}^n\frac{\eta}{2}. $$
Thus, we have
$$g_{2,0}^{n+1}=\bigl(\rho_{1}^{n+1}-g_{1,1}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+g_{1,1}^n \frac{\eta}{2}. $$
Let us now apply the LBM∗ scheme (51) when i=1. We have
$$\rho_{1}^{n+1}=g_{1,2}^n+g_{2,0}^n. $$
This means that
$$\begin{aligned} g_{2,0}^{n+1} =&\bigl(g_{1,2}^n+g_{2,0}^n-g_{1,1}^{n} \bigr) \biggl(1-\frac{\eta}{2} \biggr)+g_{1,1}^n \frac{\eta}{2} \\ =& g_{1,2}^n \biggl(1-\frac{\eta}{2} \biggr)+g_{2,0}^n\frac {\eta}{2}+\bigl(g_{2,0}^n-g_{1,1}^n \bigr) (1-\eta). \end{aligned}$$
But we also have
$$g_{1,1}^{n+1}=g_{1,2}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{2,0}^n\frac{\eta}{2}. $$
Thus
$$g_{2,0}^{n+1}=g_{1,1}^{n+1}+ \bigl(g_{2,0}^n-g_{1,1}^n\bigr) (1- \eta) $$
which gives (53)(a). We conclude by noting that (53)(b) is a consequence of (52)(b) and (55).
Let us apply the LBM scheme (50) when i=0 and i=1. We have
$$\rho_0^{n+1}=\bigl(g_{1,1}^n+g_{2,-1}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+\bigl(g_{1,-1}^n+g_{2,1}^n \bigr)\frac{\eta}{2} $$
and
$$\rho_1^{n+1}=\bigl(g_{1,2}^n+g_{2,0}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+\bigl(g_{1,0}^n+g_{2,2}^n \bigr)\frac{\eta}{2}. $$
Thus, by taking into account (157) at the time \(t^{n+1}\), we obtain
$$ \bigl(g_{1,1}^n+g_{2,-1}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+\bigl(g_{1,-1}^n+g_{2,1}^n \bigr)\frac{\eta}{2}=\bigl(g_{1,2}^n+g_{2,0}^n \bigr) \biggl(1-\frac {\eta}{2} \biggr)+\bigl(g_{1,0}^n+g_{2,2}^n \bigr)\frac{\eta}{2}. $$
(158)
We also have
$$g_{1,1}^{n+1}=g_{1,2}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{2,2}^n\frac{\eta}{2} $$
and
$$g_{2,0}^{n+1}=g_{2,-1}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{1,-1}^n\frac{\eta}{2}. $$
Thus, we deduce from (158) that
$$g_{2,0}^{n+1}=g_{1,1}^{n+1}+ \bigl(g_{2,0}^n-g_{1,1}^n\bigr) \biggl(1-\frac{\eta }{2} \biggr)+\bigl(g_{1,0}^n-g_{2,1}^n \bigr)\frac{\eta}{2} $$
that is to say
$$\begin{aligned} g_{2,0}^{n+1} =& g_{1,1}^{n+1}+ \bigl(g_{2,0}^n-g_{1,1}^n\bigr) (1- \eta )+\bigl[\bigl(g_{2,0}^n-g_{1,1}^n \bigr)+\bigl(g_{1,0}^n-g_{2,1}^n\bigr) \bigr]\frac{\eta}{2} \\ =& g_{1,1}^{n+1}+\bigl(g_{2,0}^n-g_{1,1}^n \bigr) (1-\eta)+\bigl(\rho _0^n-\rho_1^n \bigr)\frac{\eta}{2} \end{aligned}$$
which gives (57)(b) by taking into account (157). By using (57)(b), we obtain
$$g_{1,0}^{n+1}+g_{2,0}^{n+1}=g_{1,0}^{n+1}+g_{1,1}^{n+1}+ \bigl(g_{2,0}^n-g_{1,1}^n\bigr) (1- \eta ). $$
Thus, by using (157) at the time \(t^{n+1}\), we obtain
$$g_{1,1}^{n+1}+g_{2,1}^{n+1}=g_{1,0}^{n+1}+g_{1,1}^{n+1}+ \bigl(g_{2,0}^n-g_{1,1}^n\bigr) (1- \eta ) $$
which gives (57)(a). Moreover, (57)(c,d) is a consequence of (52) and (55). Finally, we obtain that
$$\rho_i^1=\xi\rho_{i-1}^0+(1-\xi) \rho_{i+1}^0 $$
as in the periodic case. □
Proof of Lemma 4.3
The proof is similar to that of Lemma 4.2.
Let us apply the LBM∗ scheme (51) when i=0. We have
$$\rho_0^{n+1}=g_{1,1}^n+g_{2,-1}^n. $$
Thus, by applying the boundary condition
$$ \rho_{i=0}^n=\rho_{x_{\min}} $$
(159)
at the time \(t^{n+1}\), we obtain that
$$g_{2,-1}^n=\rho_{x_{\min}}-g_{1,1}^n. $$
But we also have
$$g_{2,0}^{n+1}=g_{2,-1}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{1,1}^n\frac{\eta}{2}. $$
Thus, we have
$$g_{2,0}^{n+1}=\bigl(\rho_{x_{\min}}-g_{1,1}^n \bigr) \biggl(1-\frac{\eta}{2} \biggr)+g_{1,1}^n \frac{\eta}{2} $$
that is to say
$$g_{2,0}^{n+1}=\frac{\rho_{x_{\min}}}{2}+ \biggl( \frac{\rho_{x_{\min }}}{2}-g_{1,1}^n \biggr) (1-\eta) $$
which gives (63)(a). We conclude the proof as in the periodic case.
Let us apply the LBM scheme (50) when i=0. We have
$$g_{1,0}^{n+1}=g_{1,1}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{2,1}^n\frac{\eta}{2} $$
which gives (65)(a). We obtain (65)(b) by applying (159) at the time \(t^{n+1}\). We conclude the proof as in the periodic case. □
Appendix E: Proof of Propositions 5.1 and 5.2
E.1 Proof of Proposition 5.1
To prove Proposition 5.1, we use the following lemma:
Lemma E.1
Let us define
$$\forall i:\quad \left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^0:=g_{1,i}^0-K_1,\\ \displaystyle \tilde{g}_{2,i}^0:=g_{2,i}^0-K_2 \end{array} \right . $$
and let us apply the LBM∗ scheme (38)(44) with the initial conditions \((\tilde{g}_{1,i}^{0},\tilde{g}_{2,i}^{0})\). Then, we have
$$\forall i,\ \forall n\geq0:\quad\tilde{\rho}_i^n= \rho_i^n-K $$
with \(\tilde{\rho}_{i}^{n}:=\tilde{g}_{1,i}^{n}+\tilde{g}_{2,i}^{n}\) and \(K:=K_{1}+K_{2}\).
Proof of Lemma E.1
We have by construction
$$\tilde{\rho}_i^0=\rho_i^0-K. $$
Moreover, we also have
$$\left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^1=a\tilde{g}_{1,i+1}^0+b\tilde{g}_{2,i-1}^0,\\ \displaystyle \tilde{g}_{2,i}^1=(1-a)\tilde{g}_{1,i+1}^0+(1-b)\tilde{g}_{2,i-1}^0 \end{array} \right . $$
with
$$ \left \{ \begin{array} {l} \displaystyle a=1-\frac{\eta}{2} \biggl(1+ \frac{\Delta t}{\Delta x}u_0 \biggr), \\ \displaystyle b=\frac{\eta}{2} \biggl(1-\frac{\Delta t}{\Delta x}u_0 \biggr). \end{array} \right . $$
(160)
Thus, we can write
$$\left \{ \begin{array} {l} \displaystyle \tilde{g}_{1,i}^1=ag_{1,i+1}^0+bg_{2,i-1}^0-(aK_1+bK_2), \\ \displaystyle \tilde{g}_{2,i}^1=(1-a)g_{1,i+1}^0+(1-b)g_{2,i-1}^0- \bigl[(1-a)K_1+(1-b)K_2\bigr] \end{array} \right . $$
that is to say
$$\left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^1=g_{1,i}^1-(aK_1+bK_2),\\ \displaystyle \tilde{g}_{2,i}^1=g_{2,i}^1-K+(aK_1+bK_2). \end{array} \right . $$
This allows us to obtain
$$\tilde{\rho}_i^1=\rho_i^1-K. $$
We now prove that
$$ \forall i:\quad \left \{ \begin{array}{l} \displaystyle \tilde{\rho}_i^{n-1}=\rho_i^{n-1}-K,\\ \displaystyle \tilde{\rho}_i^n=\rho_i^n-K \end{array} \right . $$
(161)
implies that
$$ \forall i:\quad\tilde{\rho}_i^{n+1}=\rho_i^{n+1}-K. $$
(162)
We know that \(\tilde{\rho}_{i}^{n-1}\), \(\tilde{\rho}_{i}^{n}\) and \(\tilde{\rho }_{i}^{n+1}\) are linked through the relation (45) applied to \(\tilde{\rho}\), that is to say
$$\frac{\tilde{\rho}_{i}^{n+1}-\tilde{\rho}_i^{n-1}}{2\Delta t}+\frac {u_0}{2\Delta x}\bigl(\tilde{\rho}_{i+1}^n- \tilde{\rho}_{i-1}^n\bigr)=\frac{\nu }{\Delta x^2}\bigl(\tilde{ \rho}_{i+1}^n-\tilde{\rho}_i^{n+1}- \tilde{\rho }_i^{n-1}+\tilde{\rho}_{i-1}^n \bigr). $$
Thus, by using (161), we can write that
$$\begin{aligned} &\frac{\tilde{\rho}_{i}^{n+1}-\rho_i^{n-1}+K}{2\Delta t}+\frac {u_0}{2\Delta x}\bigl(\rho_{i+1}^n- \rho_{i-1}^n\bigr)\\ &\quad=\frac{\nu}{\Delta x^2}\bigl(\rho _{i+1}^n-\rho_i^{n+1}- \rho_i^{n-1}+\rho_{i-1}^n\bigr)+ \frac{\nu}{\Delta x^2}\bigl(\rho_i^{n+1}-\tilde{ \rho}_i^{n+1}-K\bigr). \end{aligned}$$
On the other hand, we also have
$$\frac{\rho_i^{n+1}-\rho_i^{n-1}}{2\Delta t}+\frac{u_0}{2\Delta x}\bigl(\rho _{i+1}^n- \rho_{i-1}^n\bigr)=\frac{\nu}{\Delta x^2}\bigl( \rho_{i+1}^n-\rho _i^{n+1}- \rho_i^{n-1}+\rho_{i-1}^n\bigr) $$
by using (45) again. This allows us to write that
$$\bigl(\tilde{\rho}_{i}^{n+1}-\rho_i^{n+1}+K \bigr) \biggl(\frac{1}{2\Delta t}+\frac {\nu}{\Delta x^2} \biggr)=0 $$
which proves (162). We conclude by noting that (161) is verified when n=1. □
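Lemma E.1 states an affine invariance of the scheme that can be confirmed numerically; the sketch below uses the (a,b) form (160) on a periodic grid, with arbitrary admissible constants a and b of our choosing:

```python
import numpy as np

def step(g1, g2, a, b):
    """LBM* update in the (a, b) form (160), periodic grid, constant u0."""
    g1n = a * np.roll(g1, -1) + b * np.roll(g2, +1)
    g2n = (1 - a) * np.roll(g1, -1) + (1 - b) * np.roll(g2, +1)
    return g1n, g2n

rng = np.random.default_rng(3)
g1, g2 = rng.random(10), rng.random(10)
K1, K2 = 0.3, 0.5
t1, t2 = g1 - K1, g2 - K2       # shifted initial data, as in the lemma
a, b = 0.8, 0.1
for _ in range(5):
    g1, g2 = step(g1, g2, a, b)
    t1, t2 = step(t1, t2, a, b)
```

After any number of steps, the density of the shifted run equals the original density minus \(K=K_{1}+K_{2}\).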
Proof of Proposition 5.1
Let us define \(C:=\frac{C_{d}\Delta x}{\nu}|u_{0}|\geq0\). We have by construction \(\Delta t=C\frac{\Delta x}{|u_{0}|}\). Thus, we have
$$ \frac{\eta}{2}(1-C)\leq\frac{\eta}{2} \biggl(1+\frac{\Delta t}{\Delta x}u_0 \biggr)\leq\frac{\eta}{2}(1+C) $$
(163)
since \(\frac{\Delta t}{\Delta x}u_{0}=\pm C\) and η>0. Moreover, Condition (70)(a) is equivalent to
$$ C\in\bigl[0, \min(1,2C_d)\bigr]. $$
(164)
Since \(\frac{\eta}{2}(1\pm C)=\frac{1\pm C}{1+2C_{d}}\), by using (163) and (164), we obtain
$$0\leq\frac{\eta}{2} \biggl(1+\frac{\Delta t}{\Delta x}u_0 \biggr)\leq1. $$
In the same way, we obtain
$$0\leq\frac{\eta}{2} \biggl(1-\frac{\Delta t}{\Delta x}u_0 \biggr)\leq1. $$
As a consequence, we can write that a and b defined with (160) verify
$$ \left \{ \begin{array}{l} \displaystyle 0\leq a\leq1,\\ \displaystyle 0\leq b\leq1. \end{array} \right . $$
(165)
Let us now define
$$\left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^0:=g_{1,i}^0-K_1,\\ \displaystyle \tilde{g}_{2,i}^0:=g_{2,i}^0-K_2 \end{array} \right . $$
with
$$ \left \{ \begin{array} {l} \displaystyle K_1= \biggl[(1- \alpha)-\beta\frac{\Delta t}{\Delta x}u_0 \biggr]\max_j \rho_j^0, \\ \displaystyle K_2= \biggl[\alpha+\beta\frac{\Delta t}{\Delta x}u_0 \biggr]\max_j\rho_j^0. \end{array} \right . $$
(166)
We now suppose that α∈[0,1] and β=0 or β=min(1−α,α). In these two cases, we have
$$\left \{ \begin{array}{l} \displaystyle (1-\alpha)-\beta\frac{\Delta t}{\Delta x}u_0\geq0,\\ \displaystyle \alpha+\beta\frac{\Delta t}{\Delta x}u_0\geq0. \end{array} \right . $$
Thus, since \((g_{1,i}^{0},g_{2,i}^{0})\) is defined with (44), we obtain
$$\forall i:\quad \left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^0\leq0,\\ \displaystyle \tilde{g}_{2,i}^0\leq0. \end{array} \right . $$
And, by using the fact that
$$\left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^{n+1}=a\tilde{g}_{1,i+1}^n+b\tilde{g}_{2,i-1}^n,\\ \displaystyle \tilde{g}_{2,i}^{n+1}=(1-a)\tilde{g}_{1,i+1}^n+(1-b)\tilde{g}_{2,i-1}^n \end{array} \right . $$
and (165), we can write that
$$\forall i,\ \forall n\geq0:\quad \left \{ \begin{array}{l} \displaystyle \tilde{g}_{1,i}^n\leq0,\\ \displaystyle \tilde{g}_{2,i}^n\leq0 \end{array} \right . $$
which implies that
$$ \forall i,\ \forall n\geq0:\quad \tilde{\rho}_i^n\leq0. $$
(167)
On the other hand, by using Lemma E.1, we obtain that
$$\forall i,\ \forall n\geq0:\quad\tilde{\rho}_i^n= \rho_i^n-K $$
with \(K:=K_{1}+K_{2}=\max_{j}\rho_{j}^{0}\). By using (167), we obtain
$$\forall i,\ \forall n\geq0:\quad\rho_i^n\leq\max _j\rho_j^0. $$
We obtain
$$\forall i,\ \forall n\geq0:\quad \rho_i^n\geq\min _j\rho_j^0 $$
with the same approach.
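The maximum principle just proven can be observed numerically; the sketch below picks parameters for which condition (70) (hence (165)) holds and uses the initialization (44) with α=β=1/2 (a sketch, with values of our choosing):

```python
import numpy as np

def step_ab(g1, g2, a, b):
    """LBM* update in the (a, b) form (160), periodic grid."""
    g1n = a * np.roll(g1, -1) + b * np.roll(g2, +1)
    g2n = (1 - a) * np.roll(g1, -1) + (1 - b) * np.roll(g2, +1)
    return g1n, g2n

# Parameters chosen so that C = Cd*dx*|u0|/nu satisfies (164); then a, b in [0,1].
Cd, nu, dx, u0 = 1.0, 1.0, 0.1, 0.4
dt = Cd * dx * dx / nu
lam = dt / dx * u0
eta = 1.0 / (Cd + 0.5)
a = 1 - eta / 2 * (1 + lam)
b = eta / 2 * (1 - lam)
alpha = beta = 0.5
rho0 = np.random.default_rng(4).random(32)
g1 = rho0 * ((1 - alpha) - beta * lam)   # initialization (44)
g2 = rho0 * (alpha + beta * lam)
rho = g1 + g2
for _ in range(50):
    g1, g2 = step_ab(g1, g2, a, b)
    rho = g1 + g2
```

The density stays between the extrema of the initial data, as the proposition asserts.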
By using the stability result in \(L^{\infty}\) of the LBM∗ scheme (38)(44) and by using Lemma 4.1, we obtain the stability in \(L^{\infty}\) of the LFCCDF scheme (45)(47) under the condition (70).
By using the stability result in \(L^{\infty}\) of the LFCCDF scheme (45)(46) and by using Lemma 4.1 again, we obtain the stability in \(L^{\infty}\) of the LBM scheme (35)(44) under the condition (70).
When n≥2 and when \(\Delta t=C_{d}\frac{\Delta x^{2}}{\nu}\) (\(C_{d}\geq0\)), the LFCCDF scheme (45) is consistent and its consistency error is of order \(\Delta x^{2}\) [33]. Let us study the first iterate (47). We have
$$\begin{aligned} \rho_i^{1} =&\alpha\rho_{i-1}^{0}+(1- \alpha)\rho _{i+1}^{0}-\beta\frac{\Delta t}{\Delta x} \bigl[\rho _{i+1}^0u(x_{i+1})-\rho_{i-1}^0u(x_{i-1}) \bigr] \\ =&\rho_{\mathrm{exact}}(0,x_i)+\mathcal{O}\bigl(\Delta x^{\theta }\bigr) \\ =&\rho_{\mathrm{exact}}(\Delta t,x_i)+\mathcal{O}\bigl(\Delta t, \Delta x^{\theta}\bigr) \end{aligned}$$
with
$$\left \{ \begin{array}{l} \displaystyle (\alpha,\beta)\neq(1/2,1/2)\quad\Longrightarrow\quad\theta=1,\\ \displaystyle (\alpha,\beta)=(1/2,1/2)\quad\Longrightarrow\quad\theta=2 \end{array} \right . $$
where \(\rho_{\mathrm{exact}}\) is the exact solution of the convection-diffusion equation. Thus, the consistency error is of order \(\Delta x\) when (α,β)≠(1/2,1/2) and of order \(\Delta x^{2}\) when (α,β)=(1/2,1/2).
We obtain the convergence in \(L^{\infty}\) of the LFCCDF scheme by applying the Lax Theorem. Thus, by using Lemma 4.1 again, we also obtain the convergence in \(L^{\infty}\) of the LBM and LBM∗ schemes. □
E.2 Proof of Proposition 5.2
First, we prove the stability in \(L^{\infty}\) for any \(C_{d}\geq0\) of the LBM∗ scheme (51). Indeed, this scheme is simpler to analyze than the LBM scheme (50). Then, by applying Lemmas 4.1, 4.2 and 4.3 (and the Lax Theorem), we easily obtain the other results.
Since η∈]0,2], we deduce from (51) that
$$ \max_i\bigl(\bigl\vert g_{1,i}^{n+1} \bigr\vert ,\bigl\vert g_{2,i}^{n+1}\bigr\vert \bigr)\leq \max_i\bigl(\bigl\vert g_{1,i}^n \bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr) $$
(168)
which proves the unconditional stability in \(L^{\infty}\) as soon as the initial condition is bounded. Moreover, since \(\rho _{i}^{n}=g_{1,i}^{n}+g_{2,i}^{n}\), we have
$$\max_i\bigl\vert \rho_i^n\bigr\vert \leq2\max_i\bigl(\bigl\vert g_{1,i}^n \bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr). $$
Thus, we deduce from (168) that
$$\max_i\bigl\vert \rho_i^n\bigr\vert \leq2\max_i\bigl(\bigl\vert g_{1,i}^0 \bigr\vert ,\bigl\vert g_{2,i}^0\bigr\vert \bigr) $$
that is to say
$$ \max _i\bigl\vert \rho_i^n\bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\cdot\max _i\bigl\vert \rho_i^0\bigr\vert $$
(169)
by using the initial condition (52).
Since η∈ ]0,2], we deduce from (51) that
$$ \max_{i\geq1}\bigl(\bigl\vert g_{1,i}^{n+1} \bigr\vert ,\bigl\vert g_{2,i}^{n+1}\bigr\vert \bigr)\leq \max \Bigl[\bigl\vert g_{2,0}^n\bigr\vert ,\max _{i\geq1}\bigl(\bigl\vert g_{1,i}^n\bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr)\Bigr]. $$
(170)
Inequalities (168) and (170) differ because of the boundary term \(|g_{2,0}^{n}|\) in (170), which does not exist when the boundary conditions are periodic. The difficulty in obtaining the stability in \(L^{\infty}\) comes from this term. We deduce from the boundary condition (53)(a) that
$$\begin{aligned} g_{2,0}^{n+1} =& g_{1,1}^{n+1}+(1- \eta)g_{2,0}^n-(1-\eta)g_{1,1}^n \\ =& g_{1,1}^{n+1}+(1-\eta)\bigl[g_{1,1}^n+(1- \eta)g_{2,0}^{n-1}-(1-\eta )g_{1,1}^{n-1} \bigr]-(1-\eta)g_{1,1}^n \\ =& g_{1,1}^{n+1}+(1-\eta)^2g_{2,0}^{n-1}-(1- \eta)^2g_{1,1}^{n-1} \\ =& \cdots \\ =& g_{1,1}^{n+1}+(1-\eta)^{n+1}g_{2,0}^0-(1- \eta)^{n+1}g_{1,1}^0 \end{aligned}$$
that is to say
$$ g_{2,0}^{n+1}=g_{1,1}^{n+1}+(1- \eta)^{n+1}\bigl(g_{2,0}^0-g_{1,1}^0 \bigr). $$
(171)
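The closed form (171) follows by a finite induction on the recursion (53)(a); it can also be checked directly against an arbitrary interior history \(g_{1,1}^{n}\) (a small sketch, names are ours):

```python
import numpy as np

# Iterate the boundary rule (53)(a) against an arbitrary interior history
# g_{1,1}^n and compare with the closed form (171).
rng = np.random.default_rng(5)
eta = 0.7
g11 = rng.random(20)            # arbitrary values of g_{1,1}^n, n = 0..19
g20 = np.empty(20)              # g_{2,0}^n
g20[0] = rng.random()
for n in range(19):
    g20[n + 1] = g11[n + 1] + (1 - eta) * (g20[n] - g11[n])
```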
On the other hand, we have
$$g_{1,1}^{n+1}=g_{1,2}^n \biggl(1- \frac{\eta}{2} \biggr)+g_{2,0}^n\frac{\eta}{2}. $$
Thus, by using (171), we obtain
$$g_{2,0}^{n+1}\leq\max\bigl(\bigl\vert g_{1,2}^n \bigr\vert ,\bigl\vert g_{2,0}^n\bigr\vert \bigr)+|1- \eta|^{n+1}\cdot \bigl\vert g_{2,0}^0-g_{1,1}^0 \bigr\vert . $$
By injecting this inequality in (170), we find
$$\begin{aligned} & \max\Bigl[ \bigl\vert g_{2,0}^{n+1}\bigr\vert ,\max _{i\geq 1}\bigl(\bigl\vert g_{1,i}^{n+1}\bigr\vert ,\bigl\vert g_{2,i}^{n+1}\bigr\vert \bigr)\Bigr] \\ &\quad\leq\max \Bigl[\max \bigl(\bigl\vert g_{1,2}^n\bigr\vert ,\bigl\vert g_{2,0}^n\bigr\vert \bigr)+|1- \eta|^{n+1}\cdot \bigl\vert g_{2,0}^0-g_{1,1}^0 \bigr\vert ,\bigl\vert g_{2,0}^n\bigr\vert ,\max _{i\geq 1}\bigl(\bigl\vert g_{1,i}^n\bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr) \Bigr]. \end{aligned}$$
(172)
Let us now define
$$ G^n:=\max\Bigl[\bigl\vert g_{2,0}^n\bigr\vert ,\max_{i\geq1} \bigl(\bigl\vert g_{1,i}^n\bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr)\Bigr]. $$
(173)
We deduce from (172) that
$$G^{n+1}\leq G^n+|1-\eta|^{n+1}\cdot\bigl\vert g_{2,0}^0-g_{1,1}^0\bigr\vert $$
that is to say
$$ G^{n+1}\leq G^0+|1-\eta|S_n\cdot\bigl\vert g_{2,0}^0-g_{1,1}^0 \bigr\vert $$
(174)
where
$$ S_n:=\sum _{k=0}^n|1-\eta|^k. $$
(175)
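Since |1−η|<1 for η∈ ]0,2[, these partial sums are bounded by the geometric limit \(\frac{1}{1-|1-\eta|}\); a quick check:

```python
# Partial geometric sums S_n = sum_{k=0}^n |1-eta|^k for eta in ]0,2[:
# |1-eta| < 1, so S_n stays below the geometric limit 1/(1-|1-eta|).
def S(eta, n):
    r = abs(1 - eta)
    return sum(r ** k for k in range(n + 1))

bounds_ok = all(
    S(eta, n) <= 1 / (1 - abs(1 - eta)) + 1e-12
    for eta in (0.25, 1.0, 1.75)
    for n in range(60)
)
```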
Let us now suppose that η∈ ]0,2[, that is to say \(C_{d}>0\). By noting that \(S_{n}\leq\frac{1}{1-|\eta-1|}\) when η∈ ]0,2[, we obtain
$$ G^{n+1}\leq G^0+\frac{|1-\eta|}{1-|1-\eta|}\bigl\vert g_{2,0}^0-g_{1,1}^0\bigr\vert $$
(176)
which proves the unconditional stability in \(L^{\infty}\) as soon as the initial condition is bounded. Moreover, by applying the arguments used to obtain (169) in the periodic case, we deduce from (176) that
$$\max_{i\geq0}\bigl\vert \rho_i^{n+1} \bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\cdot \max _{i\geq0}\bigl\vert \rho_i^0\bigr\vert +2|2\alpha-1|\frac{|1-\eta|}{1-|1-\eta |}\bigl\vert \rho_1^0 \bigr\vert $$
that is to say
$$\max_{i\geq1}\bigl\vert \rho_i^{n+1} \bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\cdot \max _{i\geq1}\bigl\vert \rho_i^0\bigr\vert +2|2\alpha-1|\frac{|1-\eta|}{1-|1-\eta |}\bigl\vert \rho_1^0 \bigr\vert $$
when η∈]0,2[ since \(\rho_{0}^{n}=\rho_{1}^{n}\). When η=2, that is to say \(C_{d}=0\), we obtain
$$ \max_{i\geq1}\bigl\vert \rho_i^{n+1}\bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\cdot \max_{i\geq1}\bigl\vert \rho_i^0\bigr\vert $$
(177)
by using Lemma 7.2. At last, when \(\alpha =\frac{1}{2}\) and for any η∈]0,2], we have \(g_{2,0}^{0}=g_{1,1}^{0}\), which implies that \(G^{n+1}\leq G^{0}\) by using (174). Thus, (177) is also satisfied.
In the Dirichlet case, inequality (170) is still satisfied. Moreover, we deduce from the boundary condition (63)(a) that
$$\bigl\vert g_{2,0}^{n+1}\bigr\vert \leq \biggl(1- \frac{\eta}{2} \biggr)|\rho_{x_{\min}}|+|\eta -1|\cdot\bigl\vert g_{1,1}^n\bigr\vert . $$
Thus, by using (170), we obtain
$$\begin{aligned} & \max\Bigl[ \bigl\vert g_{2,0}^{n+1}\bigr\vert ,\max _{i\geq 1}\bigl(\bigl\vert g_{1,i}^{n+1}\bigr\vert ,\bigl\vert g_{2,i}^{n+1}\bigr\vert \bigr)\Bigr] \\ &\quad\leq\max \biggl[ \biggl(1-\frac{\eta }{2} \biggr)\vert \rho_{x_{\min}} \vert +|\eta-1|\cdot\bigl\vert g_{1,1}^n\bigr\vert , \bigl\vert g_{2,0}^n\bigr\vert ,\max _{i\geq1}\bigl(\bigl\vert g_{1,i}^n\bigr\vert ,\bigl\vert g_{2,i}^n\bigr\vert \bigr) \biggr]. \end{aligned}$$
(178)
We deduce from (178) that
$$ G^{n+1}\leq\max \biggl[ \biggl(1- \frac{\eta}{2} \biggr)|\rho_{x_{\min}}|+|\eta -1|G^n,G^n \biggr]. $$
(179)
where G
n is defined with (173). Thus, we have also
$$\begin{aligned} G^{n+1} \leq&\max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)|\rho _{x_{\min}}|+|\eta-1|\max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)| \rho_{x_{\min }}|+|\eta-1|G^{n-1},G^{n-1} \biggr],\\ &\phantom{\max\ } \biggl(1- \frac{\eta}{2} \biggr)|\rho _{x_{\min}}|+|\eta-1|G^{n-1},G^{n-1} \biggr] \end{aligned}$$
that is to say
$$\begin{aligned} G^{n+1} \leq&\max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)|\rho _{x_{\min}}|\cdot\bigl(1+\vert \eta-1\vert \bigr)+|\eta-1|^2G^{n-1},\\ &\phantom{\max\ }\biggl(1-\frac{\eta }{2} \biggr)|\rho_{x_{\min}}|+|\eta-1|G^{n-1},G^{n-1} \biggr]. \end{aligned}$$
The previous inequalities lead us to prove by induction that
$$\begin{aligned} G^{n+1} \leq& \max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)| \rho_{x_{\min }}|S_m+|\eta-1|^{m+1}G^{n-m}, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|S_{m-1}+| \eta-1|^mG^{n-m},\ldots, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|S_0+| \eta-1|G^{n-m},G^{n-m}\biggr] \end{aligned}$$
(180)
where \(S_{m}\) is defined by (175). We know that (180) is verified when m=0 and m=1. Let us now suppose that (180) is verified at rank m. By injecting (179) in (180), we obtain
$$\begin{aligned} G^{n+1} \leq& \max \biggl\{ \biggl(1-\frac{\eta}{2} \biggr)| \rho_{x_{\min }}|S_m+|\eta-1|^{m+1}\\ &\phantom{\max\ }\max \biggl[ \biggl(1- \frac{\eta}{2} \biggr)|\rho_{x_{\min }}|+|\eta-1|G^{n-m-1},G^{n-m-1} \biggr], \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|S_{m-1}+| \eta-1|^m\\ &\phantom{\max\ }\max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)| \rho_{x_{\min }}|+|\eta-1|G^{n-m-1},G^{n-m-1} \biggr],\ldots, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min}}|S_0+| \eta -1|\max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min}}|+| \eta -1|G^{n-m-1},G^{n-m-1} \biggr], \\ &\phantom{\max\ } \max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)|\rho _{x_{\min}}|+|\eta-1|G^{n-m-1},G^{n-m-1} \biggr]\biggr\} \end{aligned}$$
which gives
$$\begin{aligned} G^{n+1} \leq& \max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|\bigl(S_m+|\eta-1|^{m+1}\bigr)+|\eta-1|^{m+2}G^{n-m-1},\\ &\phantom{\max\ } \biggl(1-\frac{\eta }{2} \biggr)|\rho_{x_{\min}}|S_m+|\eta-1|^{m+1}G^{n-m-1}, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|\bigl(S_{m-1}+|\eta-1|^m\bigr)+|\eta-1|^{m+1}G^{n-m-1},\\ &\phantom{\max\ } \biggl(1-\frac{\eta }{2} \biggr)|\rho_{x_{\min}}|S_{m-1}+|\eta-1|^mG^{n-m-1},\ldots, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min}}|(S_0+|\eta -1|)+|\eta-1|^2G^{n-m-1},\\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|S_0+|\eta-1|G^{n-m-1}, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|+|\eta-1|G^{n-m-1},G^{n-m-1}\biggr] \end{aligned}$$
that is to say
$$\begin{aligned} G^{n+1} \leq& \max \biggl[ \biggl(1-\frac{\eta}{2} \biggr)| \rho_{x_{\min }}|S_{m+1}+|\eta-1|^{m+2}G^{n-m-1}, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min}}|S_m+| \eta -1|^{m+1}G^{n-m-1},\ldots, \\ &\phantom{\max\ } \biggl(1-\frac{\eta}{2} \biggr)|\rho_{x_{\min }}|S_0+| \eta-1|G^{n-m-1},G^{n-m-1}\biggr]. \end{aligned}$$
Thus, (180) is also verified at rank m+1, which proves (180) for any m∈{0,…,n−1}. Let us now suppose that η∈]0,2[, that is to say \(C_{d}>0\). By applying (180) at rank n−1, and by noting that \(S_{m}\leq\frac {1}{1-|\eta-1|}\) and \(|\eta-1|^{m}\leq1\) for any \(m\in \mathbb{N}\), we obtain
$$ G^{n+1}\leq\frac{1-\frac{\eta}{2}}{1-|\eta-1|}|\rho_{x_{\min}}|+G^0 $$
(181)
which proves the unconditional stability in \(L^{\infty}\) as soon as the initial condition is bounded. Moreover, by applying the arguments used to obtain (169) in the periodic case, we deduce from (181) that
$$\max_{i\geq0}\bigl\vert \rho_i^{n+1} \bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\max _{i\geq0}\bigl\vert \rho_i^0\bigr\vert +\frac{2 (1-\frac{\eta }{2} )}{1-|\eta-1|}|\rho_{x_{\min}}| $$
that is to say
$$\max_{i\geq1}\bigl\vert \rho_i^{n+1} \bigr\vert \leq2\max\bigl(\vert 1-\alpha \vert ,|\alpha|\bigr)\cdot \max \Bigl(\max_{i\geq1}\bigl\vert \rho_i^0 \bigr\vert ,|\rho_{x_{\min}}| \Bigr)+\frac {2 (1-\frac{\eta}{2} )}{1-|\eta-1|}| \rho_{x_{\min}}| $$
since \(\rho_{0}^{n}=\rho_{x_{\min}}\). When η=2, that is to say \(C_{d}=0\), we obtain (177) by using Lemma 7.2.
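As a sanity check of the Dirichlet estimate, one can iterate the max-recursion (179) with equality from an arbitrary \(G^{0}\) and verify that the bound (181) is never exceeded; η, \(|\rho_{x_{\min}}|\) and \(G^{0}\) below are arbitrary test values:

```python
# Iterate G^{n+1} = max[(1 - eta/2)*|rho| + |eta-1|*G^n, G^n]   (cf. (179))
# and check the uniform bound (181):
# G^{n+1} <= (1 - eta/2)/(1 - |eta-1|) * |rho| + G^0.
for eta in (0.3, 0.9, 1.0, 1.5, 1.9):   # eta in ]0,2[, i.e. C_d > 0
    rho = 2.0                            # stands for |rho_{x_min}|
    G = 0.5                              # arbitrary G^0
    bound = (1 - eta / 2) / (1 - abs(eta - 1)) * rho + G
    for n in range(200):
        G = max((1 - eta / 2) * rho + abs(eta - 1) * G, G)
        assert G <= bound + 1e-12
print("bound (181) verified")
```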
Appendix F: Proof of Propositions 6.1 and 6.2, and of Lemma 6.1
Proof of Proposition 6.1
We focus on the LBM∗ scheme (51) since this scheme is simpler than the LBM scheme (50). Then, by applying Lemmas 4.1 and 4.2, we obtain the results for the Du Fort-Frankel scheme and, then, for the LBM scheme (50) (this approach was also used in the proof of Proposition 5.2).
The LBM∗ scheme (51) implies that
$$\min_j\bigl(g_{1,j}^{n},g_{2,j}^{n} \bigr)\leq g_{q,i}^{n+1}\leq\max_j \bigl(g_{1,j}^{n},g_{2,j}^{n}\bigr)\quad \bigl(q\in\{1,2\}\bigr) $$
since η∈]0,2]. Thus, we have
$$ \min_j\bigl(g_{1,j}^0,g_{2,j}^0 \bigr)\leq g_{q,i}^{n+1}\leq\max_j \bigl(g_{1,j}^0,g_{2,j}^0\bigr)\quad \bigl(q\in\{1,2\}\bigr). $$
(182)
Thus, by using (52), we obtain
$$ \min\bigl[(1-\alpha),\alpha\bigr]\cdot\min_j \rho_j^0\leq g_{q,i}^{n}\leq\max \bigl[(1-\alpha),\alpha\bigr]\cdot\max_j \rho_j^0\quad\bigl(q\in\{1,2\}\bigr) $$
(183)
when α∈[0,1]. Since \(\rho_{i}^{n}=g_{1,i}^{n}+g_{2,i}^{n}\), we deduce from (183) that
$$ 2\min\bigl[(1- \alpha),\alpha\bigr]\cdot\min_j\rho_j^0 \leq\rho_i^n\leq2\max \bigl[(1-\alpha),\alpha\bigr]\cdot \max_j\rho_j^0. $$
(184)
Thus, we deduce from (184) that the discrete maximum principle (74) is verified when \(\alpha=\frac{1}{2}\).
The discrete maximum principle (74) cannot be deduced from (184) when \(\alpha\neq\frac{1}{2}\). Nevertheless, we now prove that (74) is still satisfied when α∈[0,1]. To obtain this result, we prove that
$$ \left \{ \begin{array}{l@{\quad}l} \displaystyle g_{1,i}^n=\sum_k\varGamma_k^ng_{1,i^1_k}^0+\sum_k\widetilde{\varGamma}_k^ng_{2,i^2_k}^0,&\mbox{(a)}\\ \displaystyle g_{2,i}^n=\sum_k\widetilde{\varGamma }_k^ng_{1,i^1_k}^0+\sum_k\varGamma_k^ng_{2,i^2_k}^0,&\mbox{(b)}\\ \displaystyle \sum_k\bigl(\varGamma_k^n+\widetilde{\varGamma}_k^n\bigr)=1,&\mbox{(c)}\\ \varGamma_k^n\geq0,&\mbox{(d)}\\ \displaystyle \widetilde{\varGamma}_k^n\geq0&\mbox{(e)} \end{array} \right . $$
(185)
where \(\{i^{1}_{k}\}_{k}\) and \(\{i^{2}_{k}\}_{k}\) are two sequences which depend on i, and where \(\{\varGamma_{k}^{n}\}_{k}\) and \(\{\widetilde{\varGamma}_{k}^{n}\}_{k}\) are two nonnegative real sequences. It is obvious that (185) is verified when n=1 since
$$\left \{ \begin{array} {l} \displaystyle g_{1,i}^{1}=g_{1,i+1}^0 \biggl(1-\frac{\eta}{2} \biggr)+g_{2,i-1}^0 \frac{\eta}{2}, \\ \displaystyle g_{2,i}^{1}=g_{2,i-1}^0 \biggl(1-\frac{\eta}{2} \biggr)+g_{1,i+1}^0\frac{\eta}{2}. \end{array} \right . $$
Let us suppose that (185) is satisfied at the rank n. Then, the LBM∗ scheme (51) can be written with
$$\left \{ \begin{array} {l} \displaystyle g_{1,i}^{n+1}= \biggl(1-\frac{\eta}{2} \biggr) \biggl(\sum_k \varGamma_k^ng_{1,i^1_k+1}^0+\sum _k\widetilde{\varGamma }_k^ng_{2,i^2_k+1}^0 \biggr)+\frac{\eta}{2} \biggl(\sum_k\widetilde {\varGamma}_k^ng_{1,i^1_k-1}^0+\sum _k\varGamma_k^ng_{2,i^2_k-1}^0 \biggr), \\ \displaystyle g_{2,i}^{n+1}=\frac{\eta}{2} \biggl(\sum _k\varGamma _k^ng_{1,i^1_k+1}^0+ \sum_k\widetilde{\varGamma }_k^ng_{2,i^2_k+1}^0 \biggr)+ \biggl(1-\frac{\eta}{2} \biggr) \biggl(\sum _k\widetilde{\varGamma}_k^ng_{1,i^1_k-1}^0+ \sum_k\varGamma _k^ng_{2,i^2_k-1}^0 \biggr) \end{array} \right . $$
that is to say with
$$ \left \{ \begin{array} {l} \displaystyle g_{1,i}^{n+1}=\sum _k \biggl[ \biggl(1-\frac{\eta }{2} \biggr) \varGamma_k^ng_{1,i^1_k+1}^0+ \frac{\eta}{2}\widetilde{\varGamma }_k^ng_{1,i^1_k-1}^0 \biggr]\\ \displaystyle\phantom{g_{1,i}^{n+1}=}{}+\sum_k \biggl[ \biggl(1-\frac{\eta }{2} \biggr)\widetilde{\varGamma}_k^ng_{2,i^2_k+1}^0+ \frac{\eta}{2}\varGamma _k^ng_{2,i^2_k-1}^0 \biggr], \\ \displaystyle g_{2,i}^{n+1}=\sum_k \biggl[\frac{\eta}{2}\varGamma _k^ng_{1,i^1_k+1}^0+ \biggl(1-\frac{\eta}{2} \biggr)\widetilde{\varGamma }_k^ng_{1,i^1_k-1}^0 \biggr]\\ \displaystyle\phantom{g_{1,i}^{n+1}=}{}+\sum_k \biggl[\frac{\eta}{2}\widetilde {\varGamma}_k^ng_{2,i^2_k+1}^0+ \biggl(1- \frac{\eta}{2} \biggr)\varGamma _k^ng_{2,i^2_k-1}^0 \biggr]. \end{array} \right . $$
(186)
Now, (186) can be written in the form (185)(a, b) at rank n+1 after reindexing the sequences. Moreover, we have
$$\sum_k \biggl[ \biggl(1-\frac{\eta}{2} \biggr)\varGamma_k^n+\frac{\eta }{2}\widetilde{ \varGamma}_k^n \biggr]+\sum_k \biggl[\frac{\eta }{2}\varGamma_k^n+ \biggl(1- \frac{\eta}{2} \biggr)\widetilde{\varGamma}_k^n \biggr]=\sum_k\bigl(\varGamma_k^n+ \widetilde{\varGamma}_k^n\bigr)=1 $$
and
$$\left \{ \begin{array} {l} \displaystyle \biggl(1-\frac{\eta}{2} \biggr)\varGamma_k^n\geq0, \\ \displaystyle \frac{\eta}{2}\widetilde{\varGamma}_k^n \geq0, \\ \displaystyle \frac{\eta}{2}\varGamma_k^n\geq0, \\ \displaystyle \biggl(1-\frac{\eta}{2} \biggr)\widetilde{\varGamma}_k^n \geq0 \end{array} \right . $$
since η∈]0,2]. Thus, (185) is satisfied for any n≥1. By using the fact that \(\rho_{i}^{n}=g_{1,i}^{n}+g_{2,i}^{n}\) and by using (52), (185)(a, b) implies that
$$\rho_i^{n}=\sum_k\bigl( \varGamma_k^n+\widetilde{\varGamma }_k^n \bigr) \bigl(g_{1,i^1_k}^0+g_{2,i^2_k}^0 \bigr)=\sum_k\bigl(\varGamma_k^n+ \widetilde {\varGamma}_k^n\bigr)\bigl[(1-\alpha) \rho_{i^1_k}^0+\alpha\rho_{i^2_k}^0\bigr] $$
Thus, because of (185)(c, d, e), \(\rho_{i}^{n}\) is a convex combination of \(\{\rho_{j}^{0}\}_{j}\) when α∈[0,1], which gives (74).
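The convex-combination argument can be illustrated by a direct simulation, assuming the periodic LBM∗ update written above, i.e. \(g_{1,i}^{n+1}=(1-\frac{\eta}{2})g_{1,i+1}^{n}+\frac{\eta}{2}g_{2,i-1}^{n}\) and its mirror, with initial data \(g_{1,i}^{0}=(1-\alpha)\rho_{i}^{0}\) and \(g_{2,i}^{0}=\alpha\rho_{i}^{0}\) from (52); the grid size, initial density and parameter values below are arbitrary test data:

```python
# Periodic LBM* iteration (update rule displayed above) and check of the
# discrete maximum principle (74): min_j rho_j^0 <= rho_i^n <= max_j rho_j^0.
import random

random.seed(1)
M = 16                                          # arbitrary grid size
rho0 = [random.uniform(0.0, 3.0) for _ in range(M)]  # arbitrary initial density

for eta in (0.5, 1.0, 1.7, 2.0):          # eta in ]0,2]
    for alpha in (0.0, 0.25, 0.5, 1.0):   # alpha in [0,1]
        g1 = [(1 - alpha) * r for r in rho0]
        g2 = [alpha * r for r in rho0]
        for n in range(50):
            g1n = [(1 - eta / 2) * g1[(i + 1) % M] + (eta / 2) * g2[(i - 1) % M]
                   for i in range(M)]
            g2n = [(eta / 2) * g1[(i + 1) % M] + (1 - eta / 2) * g2[(i - 1) % M]
                   for i in range(M)]
            g1, g2 = g1n, g2n
            rho = [g1[i] + g2[i] for i in range(M)]
            assert min(rho0) - 1e-12 <= min(rho)
            assert max(rho) <= max(rho0) + 1e-12
print("discrete maximum principle (74) verified")
```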
A priori, the proof in the periodic case when α∈[0,1] is not valid in the Neumann case because of the boundary conditions (53) at \(x=x_{\min}\). Nevertheless, when \(\alpha=\frac{1}{2}\), the boundary conditions (53) are given by
$$ \forall n\geq0:\quad g_{2,0}^{n}=g_{1,1}^{n} $$
(187)
since \(\alpha=\frac{1}{2}\Longrightarrow g_{1,1}^{0}=g_{2,1}^{0}=\frac{\rho _{1}^{0}}{2}\) and \(g_{2,0}^{0}=\frac{\rho_{1}^{0}}{2}\), that is to say \(g_{2,0}^{0}=g_{1,1}^{0}\). As a consequence, the proof in the periodic case with \(\alpha=\frac{1}{2}\) remains valid in the Neumann case. □
Proof of Lemma 6.1
In the Dirichlet case, the boundary conditions (63) at \(x=x_{\min}\) can be rewritten as
$$ \left \{ \begin{array}{l@{\quad}l} \displaystyle g_{2,i=0}^{n+1}=\frac{\rho_{x_{\min}}}{2}(2-\eta )+g_{1,i=1}^n(\eta-1),&\mbox{(a)}\\ \displaystyle g_{2,i=0}^{n=0}=\alpha\rho_{x_{\min}}.&\mbox{(b)} \end{array} \right . $$
(188)
We deduce from (188) that, when η∈[1,2], that is to say when \(C_{d}\in[0,\frac{1}{2}]\), we have
$$\min \biggl[\frac{\rho_{x_{\min}}}{2},g_{1,i=1}^n \biggr]\leq g_{2,i=0}^{n+1}\leq\max \biggl[\frac{\rho_{x_{\min}}}{2},g_{1,i=1}^n \biggr]. $$
Thus, the proof in the periodic case with \(\alpha=\frac{1}{2}\) can be applied. □
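For η∈[1,2], the bound used in this proof follows from the fact that the right-hand side of (188)(a) is a convex combination of \(\frac{\rho_{x_{\min}}}{2}\) and \(g_{1,1}^{n}\): the coefficients \(2-\eta\) and \(\eta-1\) are nonnegative and sum to 1. A minimal numerical check, with random test values:

```python
# For eta in [1,2], (188)(a) is a convex combination of rho_{x_min}/2 and
# g_{1,1}^n: the coefficients (2-eta) and (eta-1) are nonnegative and sum to 1,
# which gives the min/max bound used in the proof of Lemma 6.1.
import random

random.seed(2)
for _ in range(1000):
    eta = random.uniform(1.0, 2.0)
    rho_half = random.uniform(-2.0, 2.0)   # stands for rho_{x_min}/2
    g11 = random.uniform(-2.0, 2.0)        # stands for g_{1,1}^n
    g20 = (2 - eta) * rho_half + (eta - 1) * g11
    assert abs((2 - eta) + (eta - 1) - 1.0) < 1e-12
    assert min(rho_half, g11) - 1e-12 <= g20 <= max(rho_half, g11) + 1e-12
print("convex-combination bound verified for eta in [1,2]")
```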
Proof of Proposition 6.2
The proof is identical to the periodic case with \(\alpha=\frac{1}{2}\), with (182) replaced by
$$\min \biggl[\frac{\rho_{x_{\min}}}{2},\min_{j\geq 1} \bigl(g_{1,j}^{n},g_{2,j}^{n}\bigr) \biggr]\leq g_{1,i}^{n+1}\leq\max \biggl[\frac {\rho_{x_{\min}}}{2}, \max_{j\geq1}\bigl(g_{1,j}^{n},g_{2,j}^{n} \bigr) \biggr]. $$
□