1 Introduction

In this paper, we report on recent results obtained in [3] about the problem of rigorously deriving a hydrodynamic limit from the Boltzmann equation for inelastic hard spheres with small inelasticity. Our aim here is to give an account of the main aspects of our work [3] in a shorter, reader-friendly version that includes the main results as well as the main ideas and arguments. We shall only sketch the proofs of our results, referring the reader to [3] for complete versions and details.

1.1 The Problem

1.1.1 The Kinetic Model

We consider here the (freely cooling) Boltzmann equation, which provides a statistical description of identical smooth hard spheres undergoing binary inelastic collisions:

$$\begin{aligned} \partial _{t}F + v\cdot \nabla _{x} F=\mathcal {Q}_{\alpha }(F,F) \end{aligned}$$
(1.1)

supplemented with the initial condition \(F(0,x,v)=F^{\text {in}}(x,v)\), where \(F=F(t,x,v)\) is the density of the granular gas at position \(x \in \mathbb {T}_\ell ^{d}\) and velocity \(v \in {\mathbb {R}}^{d}\) at time \(t\geqslant 0.\) We consider here for simplicity the case of the flat torus

$$\begin{aligned} \mathbb {T}_{\ell }^{d}={\mathbb {R}}^{d}/(2\pi \,\ell \,{\mathbb {Z}})^{d} \end{aligned}$$
(1.2)

for some typical length-scale \(\ell >0\). The so-called restitution coefficient \(\alpha \) belongs to (0, 1] and the collision operator \(\mathcal {Q}_{\alpha }\) is defined in weak form as

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^{d}} \mathcal {Q}_{\alpha } (g,f)(v)\, \psi (v)\, \text {d}v = \frac{1}{2} \int _{{\mathbb {R}}^{2d}} f(v)\,g(v_{*})\,|v-v_{*}| \mathcal {A}_{\alpha }[\psi ](v,v_{*})\, \text {d}v_{*}\, \text {d}v, \end{aligned} \end{aligned}$$
(1.3)

where

$$\begin{aligned} \mathcal {A}_{\alpha }[\psi ](v,v_{*}) := \int _{{\mathbb {S}}^{d-1}}(\psi (v')+\psi (v_{*}')-\psi (v)-\psi (v_{*}))\, b(\sigma \cdot {\bar{q}})\, \text {d}{\sigma }, \end{aligned}$$
(1.4)

and the post-collisional velocities \((v',v_{*}')\) are given by

$$\begin{aligned} \begin{aligned} v'=v+\frac{1+\alpha }{4}\,(|q|\sigma -q),&\qquad v_{*}'=v_{*}-\frac{1+\alpha }{4}\,(|q|\sigma -q),\\ \text {where} \qquad q=v-v_{*},&\qquad {\bar{q}}=q/|q|. \end{aligned} \end{aligned}$$
(1.5)

Here, \(\text {d}\sigma \) denotes the Lebesgue measure on \({\mathbb {S}}^{d-1}\) and the angular part \(b=b(\sigma \cdot {\bar{q}})\) of the collision kernel appearing in (1.4) is a nonnegative measurable mapping which is integrable over \({\mathbb {S}}^{d-1}\). There is no loss of generality in assuming

$$\begin{aligned} \int _{{\mathbb {S}}^{d-1}}b(\sigma \cdot {\bar{q}})\, \text {d}{\sigma }=1, \qquad \forall \,{\bar{q}} \in {\mathbb {S}}^{d-1}. \end{aligned}$$

Notice that one can also give a strong formulation of the collision operator \(\mathcal {Q}_\alpha \) (see [3,  Appendix A]). This strong formulation is simpler in the elastic case (\(\alpha =1\)); we give it here for later use:

$$\begin{aligned} \mathcal {Q}_1(g,f)(v) = \int _{{\mathbb {R}}^d \times {\mathbb {S}}^{d-1}} \left( g(v_*') f(v') - g(v_*) f(v)\right) |v-v_*| \,b(\sigma \cdot {\bar{q}})\, \text {d}{\sigma }\, \text {d}v_*. \end{aligned}$$
(1.6)

The true definition actually involves pre-collisional velocities rather than the post-collisional velocities \(v'\) and \(v'_*\), but the two coincide in the elastic case, which explains formula (1.6).

The fundamental distinction between the classical elastic Boltzmann equation and the one associated with granular gases lies in the role of the parameter \(\alpha \in (0,1)\), the restitution coefficient, which we suppose constant. This coefficient is the ratio between the magnitudes, after and before the collision, of the normal component (along the line joining the centers of the two spheres at contact) of the relative velocity. The case \(\alpha = 1\) corresponds to perfectly elastic collisions where kinetic energy is conserved. However, when \(\alpha < 1\), part of the kinetic energy of the relative motion is lost since

$$\begin{aligned} |v'|^{2}+|v_*'|^{2}-|v|^{2}-|v_*|^{2}=-\frac{1-\alpha ^{2}}{4}|q|^{2}\,\left( 1-\sigma \cdot {\bar{q}}\right) \leqslant 0. \end{aligned}$$
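This identity, together with the conservation of momentum recalled below, can be checked directly on (1.5). The following minimal numerical sketch (our illustration, not taken from [3]; it assumes NumPy and all names in it are ours) verifies both in dimension \(d=3\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 3, 0.9
v, v_star = rng.normal(size=d), rng.normal(size=d)
sigma = rng.normal(size=d)
sigma /= np.linalg.norm(sigma)          # sigma uniform on the sphere S^{d-1}

q = v - v_star                          # relative velocity q = v - v_*
q_bar = q / np.linalg.norm(q)
w = 0.25 * (1 + alpha) * (np.linalg.norm(q) * sigma - q)
v_p, v_star_p = v + w, v_star - w       # post-collisional velocities (1.5)

# momentum is conserved: v' + v_*' = v + v_*
assert np.allclose(v_p + v_star_p, v + v_star)

# kinetic energy loss matches -(1 - alpha^2)/4 |q|^2 (1 - sigma . q_bar)
loss = v_p @ v_p + v_star_p @ v_star_p - v @ v - v_star @ v_star
assert np.isclose(loss, -(1 - alpha**2) / 4 * (q @ q) * (1 - sigma @ q_bar))
```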

Notice that the microscopic description (1.5) preserves the momentum

$$\begin{aligned} v'+v_*'=v+v_*\end{aligned}$$

and that taking \(\psi =1\) and then \(\psi =v\) in (1.3) yields the conservation of the macroscopic density and bulk velocity, defined as

$$\begin{aligned} \varvec{R}(t):=\int _{\mathbb {T}^{d}_{\ell } \times {\mathbb {R}}^{d}}F(t,x,v)\, \text {d}v\, \text {d}x \quad \text {and} \quad \varvec{U}(t) := \int _{\mathbb {T}^{d}_{\ell } \times {\mathbb {R}}^{d}}v F(t,x,v)\, \text {d}v\, \text {d}x, \end{aligned}$$

for any solution \(F(t,x,v)\) to (1.1):

$$\begin{aligned} \dfrac{\text {d}}{\text {d}t}\varvec{R}(t)= \frac{\text {d}}{\text {d}t}\varvec{U}(t)=0. \end{aligned}$$

Consequently, there is no loss of generality in assuming that

$$\begin{aligned} \varvec{R}(t)=\varvec{R}(0)=1, \qquad \varvec{U}(t)=\varvec{U}(0)=0, \qquad \forall \, t \geqslant 0. \end{aligned}$$

The main contrast between elastic and inelastic gases is that in the latter the granular temperature,

$$\begin{aligned} \varvec{T}(t):=\frac{1}{|\mathbb {T}^{d}_{\ell }|}\int _{{\mathbb {R}}^{d}\times \mathbb {T}^{d}_{\ell }}|v|^{2}F(t,x,v)\, \text {d}v\, \text {d}x \end{aligned}$$

is decreasing in time:

$$\begin{aligned} \dfrac{\text {d}}{\text {d}t}\varvec{T}(t)=-(1-\alpha ^{2})\mathcal {D}_{\alpha }(F(t),F(t)) \leqslant 0, \end{aligned}$$

where \(\mathcal {D}_{\alpha }(\cdot ,\cdot )\) denotes the normalised energy dissipation associated to \(\mathcal {Q}_{\alpha }\), see [16], given by

$$\begin{aligned} \mathcal {D}_{\alpha }(g,g):= \frac{\gamma _{b}}{4}\int _{\mathbb {T}^{d}_{\ell }}\frac{\text {d}x}{|\mathbb {T}^{d}_{\ell }|}\int _{{\mathbb {R}}^d \times {\mathbb {R}}^d}g(x,v)g(x,v_*)|v-v_*|^{3}\, \text {d}v\, \text {d}v_*, \end{aligned}$$
(1.7)

where \(\gamma _b\) is a positive constant depending only on the angular kernel b.

1.1.2 The Problem of Hydrodynamic Limits

To capture the hydrodynamic behaviour of the gas, we need to write the above equation in nondimensional form, introducing the dimensionless Knudsen number \(\varepsilon >0\), which is proportional to the mean free path between collisions. We then perform the classical Navier–Stokes rescaling of time and space (see [5]) to capture the hydrodynamic limit and introduce the particle density

$$\begin{aligned} F_{\varepsilon }(t,x,v):=F\left( \frac{t}{\varepsilon ^{2}},\frac{x}{\varepsilon },v\right) , \qquad t \geqslant 0. \end{aligned}$$
(1.8)

In this case, we choose for simplicity \(\ell =\varepsilon \) in (1.2), which ensures that \(F_{\varepsilon }\) is defined on \({\mathbb {R}}^{+}\times \mathbb {T}^{d}\times {\mathbb {R}}^{d}\) with \(\mathbb {T}^{d}:=\mathbb {T}_{1}^{d}\). Under such a scaling, \(F_{\varepsilon }\) satisfies the rescaled Boltzmann equation

$$\begin{aligned} \varepsilon ^{2}\partial _{t}F_{\varepsilon } + \varepsilon \,v\cdot \nabla _{x} F_{\varepsilon } =\mathcal {Q}_{\alpha }(F_{\varepsilon },F_{\varepsilon }) \quad \text {on} \quad \mathbb {T}^{d}\times {\mathbb {R}}^{d}, \end{aligned}$$
(1.9a)

supplemented with the initial condition

$$\begin{aligned} F_{\varepsilon }(0,x,v)=F^{\text {in}}_{\varepsilon }(x,v):=F^{\text {in}}(\tfrac{x}{\varepsilon }, v). \end{aligned}$$
(1.9b)
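Let us record the elementary computation behind (1.9a): writing \((\tau ,y)=(t/\varepsilon ^{2},x/\varepsilon )\), the chain rule gives

$$\begin{aligned} \varepsilon ^{2}\,\partial _{t}F_{\varepsilon }(t,x,v)=(\partial _{\tau }F)(\tau ,y,v), \qquad \varepsilon \,v\cdot \nabla _{x}F_{\varepsilon }(t,x,v)=(v\cdot \nabla _{y}F)(\tau ,y,v), \end{aligned}$$

so that summing and using (1.1), together with the fact that \(\mathcal {Q}_{\alpha }\) acts on the velocity variable only, yields \(\mathcal {Q}_{\alpha }(F,F)(\tau ,y,v)=\mathcal {Q}_{\alpha }(F_{\varepsilon },F_{\varepsilon })(t,x,v)\), which is (1.9a).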

Mass and momentum are still conserved under this scaling: if \(F_\varepsilon \) solves (1.9a), then

$$\begin{aligned} \frac{\text {d}}{\text {d}t} \varvec{R}_{\varepsilon }(t) = \frac{\text {d}}{\text {d}t} \varvec{U}_{\varepsilon }(t) =0 \end{aligned}$$

where \( \varvec{R}_{\varepsilon }(t):=\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}F_{\varepsilon }(t,x,v)\, \text {d}v\, \text {d}x\) and \( \varvec{U}_{\varepsilon }(t):=\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}F_{\varepsilon }(t,x,v)v\, \text {d}v\, \text {d}x, \) whereas the cooling of the granular gas is given by the equation

$$\begin{aligned} \frac{\text {d}}{\text {d}t}\varvec{T}_{\varepsilon }(t)= -\frac{1-\alpha ^{2}}{\varepsilon ^{2}}\mathcal {D}_{\alpha }(F_{\varepsilon }(t),F_{\varepsilon }(t)), \end{aligned}$$
(1.10)

where \(\varvec{T}_{\varepsilon }(t):=\frac{1}{|\mathbb {T}^{d}|}\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}|v|^{2}F_{\varepsilon }(t,x,v)\, \text {d}v\, \text {d}x\) and we recall that \(\mathcal {D}_\alpha \) is defined in (1.7). The conservation properties of the equation imply that there is no loss of generality assuming that

$$\begin{aligned} \varvec{R}_{\varepsilon }(t) = 1, \quad \varvec{U}_{\varepsilon }(t) = 0, \quad \forall \,\varepsilon >0,\, t \geqslant 0. \end{aligned}$$

In order to understand the free-cooling inelastic Boltzmann equation (1.9a)–(1.9b), we perform a self-similar change of variables, which allows us to introduce intermediate asymptotics and ensures that our equation has a non-trivial steady state (see [15,16,17] for more details). After this change of variables, we are led to study the equation

$$\begin{aligned} \varepsilon ^{2}\partial _{t} f_{\varepsilon }+\varepsilon v \cdot \nabla _{x} f_{\varepsilon }+ (1-\alpha )\,\nabla _{v}\cdot (v f_{\varepsilon }) = \mathcal {Q}_{\alpha }(f_{\varepsilon },f_{\varepsilon }), \end{aligned}$$
(1.11)

with initial condition

$$\begin{aligned} f_{\varepsilon }(0,x,v)=F_{\varepsilon }^{\text {in}}(x,v). \end{aligned}$$

Note that the drift term acts as an energy supply which prevents the total cooling down of the gas. It has been shown that there exists a spatially homogeneous steady state \(G_{\alpha }\) to (1.11). More specifically, there exists \(\alpha _{0} \in (0,1)\) (where \(\alpha _0\) is an explicit threshold value) such that for \(\alpha \in (\alpha _{0},1)\), there exists a unique distribution \(G_{\alpha }=G_\alpha (v)\) satisfying

$$\begin{aligned} (1-\alpha ) \nabla _{v}\cdot (v \, G_{\alpha })= \mathcal {Q}_{\alpha }(G_{\alpha },G_{\alpha }) \quad \text {with} \quad \int _{{\mathbb {R}}^{d}}G_{\alpha }(v) \begin{pmatrix} 1 \\ v \end{pmatrix} \, \text {d}v= \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{aligned}$$
(1.12)

Moreover, there exists some constant \(C>0\) independent of \(\alpha \) such that

$$\begin{aligned} \Vert G_{\alpha }-{\mathcal {M}}\Vert _{L^{1}_v(\langle v \rangle ^2)} \leqslant C (1-\alpha ) \end{aligned}$$
(1.13)

where \({\mathcal {M}}\) is the Maxwellian distribution

$$\begin{aligned} {\mathcal {M}}(v) :=(2\pi \vartheta _{1})^{-d/2}\exp \left( -\frac{|v|^{2}}{2\vartheta _{1}}\right) , \qquad v \in {\mathbb {R}}^{d}, \end{aligned}$$
(1.14)

for some explicit temperature \(\vartheta _{1} >0\). The Maxwellian distribution \({\mathcal {M}}\) is a steady solution for \(\alpha =1\), and its prescribed temperature \(\vartheta _{1}\) (which ensures that (1.13) holds) will play a role in the rest of the analysis.

It is important to emphasize that, in all that follows, the threshold values on \(\varepsilon \) and the various constants involved actually depend only on this initial choice of \(\vartheta _{1}\).

In order to reach some incompressible Navier-Stokes type equation in the limit \(\varepsilon \rightarrow 0\), we introduce the following fluctuation \(h_{\varepsilon }\) around the equilibrium \(G_{\alpha }\):

$$\begin{aligned} f_{\varepsilon }(t,x,v)=G_{\alpha }(v)+\varepsilon \,h_{\varepsilon }(t,x,v). \end{aligned}$$

Our problem boils down to studying the following equation on \(h_{\varepsilon }\):

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\partial _{t} h_{\varepsilon }+\dfrac{1}{\varepsilon } v \cdot \nabla _{x} h_{\varepsilon } =\dfrac{1}{\varepsilon ^{2}} {\mathscr {L}}_{\alpha }h_{\varepsilon } +\dfrac{1}{\varepsilon } \mathcal {Q}_{\alpha }(h_{\varepsilon },h_{\varepsilon })\\ &{} h_{\varepsilon }(t=0)=h_\varepsilon ^{{\text {in}}}:= \dfrac{1}{\varepsilon } (F_{\varepsilon }^{\text {in}} - G_\alpha ), \end{array}\right. } \end{aligned}$$
(1.15)

where \({\mathscr {L}}_{\alpha }\) is the linearized collision operator (local in the x-variable) defined as

$$\begin{aligned} {\mathscr {L}}_{\alpha }h:=\mathcal {Q}_{\alpha }(G_\alpha ,h)+\mathcal {Q}_{\alpha }(h,G_\alpha ) - (1-\alpha )\nabla _{v}\cdot (vh). \end{aligned}$$
(1.16)
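For the reader's convenience, let us sketch how (1.15) arises. Inserting \(f_{\varepsilon }=G_{\alpha }+\varepsilon \,h_{\varepsilon }\) into (1.11) and using the bilinearity of \(\mathcal {Q}_{\alpha }\) together with the steady state equation (1.12), the zero-order terms cancel and we are left with

$$\begin{aligned} \varepsilon ^{3}\partial _{t}h_{\varepsilon }+\varepsilon ^{2}\,v\cdot \nabla _{x}h_{\varepsilon }=\varepsilon \left[ \mathcal {Q}_{\alpha }(G_{\alpha },h_{\varepsilon })+\mathcal {Q}_{\alpha }(h_{\varepsilon },G_{\alpha })-(1-\alpha )\nabla _{v}\cdot (v\,h_{\varepsilon })\right] +\varepsilon ^{2}\,\mathcal {Q}_{\alpha }(h_{\varepsilon },h_{\varepsilon }); \end{aligned}$$

dividing by \(\varepsilon ^{3}\) yields exactly (1.15) with \({\mathscr {L}}_{\alpha }\) as in (1.16).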

We also denote by \({\mathscr {L}}_{1}\) the linearized operator around \(G_{1}={\mathcal {M}}\), that is,

$$\begin{aligned} {\mathscr {L}}_{1}h:=\mathcal {Q}_{1}({\mathcal {M}},h)+\mathcal {Q}_{1}(h,{\mathcal {M}}). \end{aligned}$$
(1.17)

From now on, we will always assume that

$$\begin{aligned} {\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}F^{\text {in}}_{\varepsilon }(x,v)\left( \begin{array}{c}1 \\ v \\ |v|^{2}\end{array}\right) \, \text {d}v \, \text {d}x} =\left( \begin{array}{c}1 \\ 0 \\ {E_\varepsilon }\end{array}\right) \quad {\text {with} \quad E_\varepsilon >0 \quad \text {and} \quad \frac{E_\varepsilon - d\vartheta _1}{\varepsilon } \xrightarrow [\varepsilon \rightarrow 0]{}0}.\nonumber \\ \end{aligned}$$
(1.18)

The choice of prescribing as initial energy some constant \(E_\varepsilon >0\) satisfying \(\varepsilon ^{-1} (E_\varepsilon -d\vartheta _1) \rightarrow 0\) as \(\varepsilon \rightarrow 0\) is natural for our problem because \(d\vartheta _1\) is the energy of the Maxwellian \({\mathcal {M}}\) introduced in (1.14) and because, as we shall see later on, the restitution coefficient \(\alpha \) is intended to tend to 1 as \(\varepsilon \) goes to 0 in our analysis (see Assumption 1.1). It is also worth noticing that assumptions (1.18) and (1.12) result in

$$\begin{aligned} \int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}h_\varepsilon ^{\text {in}}(x,v) \begin{pmatrix} 1 \\ v \end{pmatrix} \, \text {d}v \, \text {d}x=\begin{pmatrix} 0 \\ 0 \end{pmatrix}. \end{aligned}$$

Moreover, equation (1.15) preserves mass and vanishing momentum since, if \(h_\varepsilon \) solves (1.15), then one formally has

$$\begin{aligned} \begin{aligned} \frac{\text {d}}{\text {d}t} \int _{\mathbb {T}^d \times {\mathbb {R}}^{d}} h_\varepsilon (t,x,v) v \, \text {d}v \, \text {d}x&=-\frac{1-\alpha }{\varepsilon ^{2}}\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}} \nabla _v \cdot (v h_{\varepsilon }(t,x,v))\, v \, \text {d}v \, \text {d}x \\&= \frac{1-\alpha }{\varepsilon ^{2}}\int _{\mathbb {T}^d \times {\mathbb {R}}^{d}} h_\varepsilon (t,x,v) v \, \text {d}v \, \text {d}x. \end{aligned} \end{aligned}$$
(1.19)

Consequently, there is no loss of generality assuming that

$$\begin{aligned} \int _{\mathbb {T}^d \times {\mathbb {R}}^{d}}h_\varepsilon (t,x,v) \begin{pmatrix} 1 \\ v \end{pmatrix} \, \text {d}v \, \text {d}x =\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \forall \, t \geqslant 0. \end{aligned}$$
(1.20)

1.1.3 Relation Between the Restitution Coefficient and the Knudsen Number

The central underlying assumption in our study is the following relation between the restitution coefficient and the Knudsen number.

Assumption 1.1

The restitution coefficient \(\alpha (\cdot )\) is a continuous and decreasing function of the Knudsen number \(\varepsilon \) satisfying the scaling behaviour

$$\begin{aligned} \alpha (\varepsilon )=1-\varepsilon ^{2} (\lambda _0 + \eta (\varepsilon )) \end{aligned}$$
(1.21)

with \(\lambda _{0} \geqslant 0\) and some function \(\eta (\cdot )\) that tends to 0 as \(\varepsilon \) goes to 0. If \(\lambda _0=0\), we assume furthermore that there exists \(\varepsilon _\star >0\) such that \(\eta (\cdot )\) is positive on \((0, \varepsilon _\star )\).

Notice that under this assumption, the hypothesis made on the energy of the initial data in (1.18) implies that

$$\begin{aligned} \int _{\mathbb {T}^d \times {\mathbb {R}}^d} h_\varepsilon ^{\text {in}}(x,v) \,|v|^2 \, \text {d}v \, \text {d}x \xrightarrow [\varepsilon \rightarrow 0]{} 0. \end{aligned}$$
(1.22)

Indeed, using (1.18) and Assumption 1.1 combined with (1.13), we obtain

$$\begin{aligned} \int _{\mathbb {T}^d \times {\mathbb {R}}^d} h_\varepsilon ^{\text {in}}(x,v) |v|^2\, \text {d}v \, \text {d}x = \frac{1}{\varepsilon }\int _{\mathbb {T}^d \times {\mathbb {R}}^d} \left( F_\varepsilon ^{\text {in}}(x,v)- G_{\alpha (\varepsilon )}(v)\right) |v|^2\, \text {d}v \, \text {d}x \\ = \frac{E_\varepsilon - d \vartheta _1}{\varepsilon } + \frac{1}{\varepsilon }\int _{\mathbb {T}^d \times {\mathbb {R}}^d} \left( {\mathcal {M}}(v)- G_{\alpha (\varepsilon )}(v)\right) |v|^2\, \text {d}v \, \text {d}x \xrightarrow [\varepsilon \rightarrow 0]{} 0. \end{aligned}$$

Indeed, the first term tends to 0 by (1.18) while, by (1.13) and (1.21), the second one is bounded by \(C\varepsilon ^{-1}(1-\alpha (\varepsilon ))=C\varepsilon \,(\lambda _0+\eta (\varepsilon ))\rightarrow 0\).

Still under Assumption 1.1, we formally obtain that, letting \(\varepsilon \rightarrow 0\) in (1.15), \(h_\varepsilon \rightarrow \varvec{h}\) with \(\varvec{h} \in {\text {Ker}}\,{\mathscr {L}}_1\), where \({\mathscr {L}}_1\) is defined in (1.17). We recall that, when \({\mathscr {L}}_1\) is seen as an operator acting only in the velocity variable on the space \(L^2_v({\mathcal {M}}^{-1/2})\), then

$$\begin{aligned} {\text {Ker}}{\mathscr {L}}_1 = {\text {Span}} \{{\mathcal {M}}, v_1 {\mathcal {M}}, \dots , v_d {\mathcal {M}}, |v|^2 {\mathcal {M}}\} \end{aligned}$$

and the projection \(\varvec{\pi }_0\) onto \({\text {Ker}} {\mathscr {L}}_1\) is given by

$$\begin{aligned} \varvec{\pi }_{0}(g) : =\sum _{i=1}^{d+2}\left( \int _{ {\mathbb {R}}^{d} }g\,\Psi _{i}\,\text {d}v \right) \,\Psi _{i}\,{\mathcal {M}}, \end{aligned}$$
(1.23)

where

$$\begin{aligned} \Psi _{1}(v):=1, \quad \Psi _{i}(v):=\frac{1}{\sqrt{\vartheta _{1}}}v_{i-1}, \quad i=2,\ldots ,d+1 \quad \text {and} \quad \Psi _{d+2}(v):=\frac{|v|^{2}-d\vartheta _{1}}{\vartheta _{1}\sqrt{2d}}.\qquad \end{aligned}$$
(1.24)
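The family \((\Psi _{i})_{1\leqslant i \leqslant d+2}\) is orthonormal in \(L^{2}({\mathcal {M}}\,\text {d}v)\), which is precisely what makes (1.23) a projection. A quick Monte Carlo sanity check of this orthonormality (our illustration, assuming NumPy; not taken from [3]), here with \(d=3\) and \(\vartheta _{1}=1\):

```python
import numpy as np

rng = np.random.default_rng(1)
d, theta1, n = 3, 1.0, 2_000_000
v = rng.normal(scale=np.sqrt(theta1), size=(n, d))    # samples drawn from M

psi = [np.ones(n)]                                    # Psi_1
psi += [v[:, i] / np.sqrt(theta1) for i in range(d)]  # Psi_2, ..., Psi_{d+1}
psi.append((np.sum(v**2, axis=1) - d * theta1) / (theta1 * np.sqrt(2 * d)))

# Gram matrix (int Psi_i Psi_j M dv)_{i,j}, estimated by Monte Carlo
gram = np.array([[np.mean(p * q) for q in psi] for p in psi])
print(np.round(gram, 2))   # approximately the identity matrix of size d + 2
```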

We deduce formally that \(\varvec{h}\) takes the following form

$$\begin{aligned} \varvec{h}(t,x,v)=\left( \varrho (t,x)+u(t,x)\cdot v + \frac{1}{2}\theta (t,x)(|v|^{2}-d\vartheta _{1})\right) {\mathcal {M}}(v) \end{aligned}$$

with

$$\begin{aligned}&\varrho (t,x) := \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) \, \text {d}v, \quad u(t,x) := \frac{1}{\vartheta _1} \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) v \, \text {d}v, \nonumber \\&\quad \theta (t,x) := \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) \frac{|v|^2-d\vartheta _1}{\vartheta _1^2d}\, \text {d}v. \end{aligned}$$
(1.25)
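The formulas (1.25) are consistent with the above form of \(\varvec{h}\): using the Gaussian moments \(\int _{{\mathbb {R}}^{d}}v_{i}v_{j}\,{\mathcal {M}}\,\text {d}v=\vartheta _{1}\delta _{ij}\) and \(\int _{{\mathbb {R}}^{d}}(|v|^{2}-d\vartheta _{1})^{2}\,{\mathcal {M}}\,\text {d}v=2d\vartheta _{1}^{2}\), one readily checks that

$$\begin{aligned} \int _{{\mathbb {R}}^{d}}\varvec{h}\,\text {d}v=\varrho , \qquad \int _{{\mathbb {R}}^{d}}\varvec{h}\,v\,\text {d}v=\vartheta _{1}\,u, \qquad \int _{{\mathbb {R}}^{d}}\varvec{h}\,\left( |v|^{2}-d\vartheta _{1}\right) \text {d}v=d\vartheta _{1}^{2}\,\theta . \end{aligned}$$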

It is worth mentioning that a careful spectral analysis of the linearized collision operator \({\mathscr {L}}_{\alpha }\) defined in (1.16) shows that, unless one assumes \(1-\alpha \) to be at most of order \(\varepsilon ^{2}\), the eigenfunction associated to the energy dissipation would blow up and prevent any exponential stability for (1.15) from holding (see Theorem 2.1). Actually, in our study, we will require \(\lambda _0\) to be relatively small with respect to the spectral gap associated to the elastic linearized operator in order to ensure stability in the inelastic case. If one assumes \(\lambda _0=0\) (for example, one could take \(1-\alpha \) of order \(\varepsilon ^q\) with \(q>2\)), the effect of the inelasticity is too weak at the hydrodynamic scale and the expected model is the classical Navier–Stokes–Fourier system. In short, we are left with two cases:

Case 1 If \(\lambda _{0}=0\), the expected model is the classical Navier–Stokes–Fourier system.

Case 2 If \(0< \lambda _{0}< \infty \) is small enough (compared to some explicit quantities), the cumulative effect of inelasticity is visible at the hydrodynamic scale and we expect a model different from the Navier–Stokes–Fourier system, accounting for that effect.

In this nearly elastic regime, the energy dissipation in the system happens in a controlled fashion since the inelasticity parameter is compensated according to the number of collisions per unit of time. Other regimes can be considered depending on the rate at which kinetic energy is dissipated; for example, an interesting regime is the mono-kinetic one, which considers the extreme case of an infinite energy dissipation rate. In this case, the limit is formally described by enforcing a Dirac mass solution in the kinetic equation, yielding the pressureless Euler system (corresponding to sticky particles). Such a regime has been rigorously addressed in the one-dimensional framework in the interesting contribution [11]. It is an open question to extend such an analysis to higher dimensions, since the approach of [11] uses the so-called Bony functional, which is a tool specifically tailored to 1D kinetic equations.

1.2 Notations and Definitions

Let us introduce some useful notations for functional spaces. For any nonnegative weight function \(m\,:\,{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{+}\), we define, for all \(p \geqslant 1\), the space \(L^{p}(m)\) through the norm

$$\begin{aligned} \Vert f\Vert _{L^{p}(m)}:=\left( \int _{{\mathbb {R}}^{d}}|f(\xi )|^{p}m(\xi )^{p}\, \text {d}\xi \right) ^{1/p}. \end{aligned}$$

We also define, for \(p \geqslant 1\)

$$\begin{aligned} \mathbb {W}^{k,p}(m)=\left\{ f \in L^{p}(m)\;;\;\partial _{\xi }^{\beta }f \in L^{p}(m) \,\forall \, |\beta | \leqslant k\right\} \end{aligned}$$

with the usual norm, i.e., for \(k \in {\mathbb {N}}\):

$$\begin{aligned} \Vert f\Vert _{\mathbb {W}^{k,p}(m)}^{p}=\sum _{|\beta | \leqslant k}\Vert \partial _{\xi }^{\beta }f\Vert _{L^{p}(m)}^{p}. \end{aligned}$$

For \(m \equiv 1\), we simply denote the associated spaces by \(L^{p}\) and \(\mathbb {W}^{k,p}\). Notice that all the weights we consider here will depend only on velocity, i.e. \(m=m(v)\). We will also use the notation \(\langle \xi \rangle := \sqrt{1+|\xi |^2}\) for \(\xi \in {\mathbb {R}}^d\).

On the complex plane, for any \(a \in {\mathbb {R}}\), we set

$$\begin{aligned} {\mathbb {C}}_{a}:=\{z \in {\mathbb {C}}\;;\;\text {Re}\,z >-a\}, \qquad {\mathbb {C}}_{a}^{\star }:={\mathbb {C}}_{a}\setminus \{0\} \end{aligned}$$
(1.26)

and, for any \(r >0\), we set

$$\begin{aligned} \mathbb {D}(r)=\{z \in {\mathbb {C}}\;;\;|z| \leqslant r\}. \end{aligned}$$

We also introduce the following notion of hypo-dissipativity in a general Banach space \((X,\Vert \cdot \Vert )\). A closed (unbounded) linear operator \(A\,:\,{\mathscr {D}}(A) \subset X \rightarrow X\) is said to be hypo-dissipative on X if there exists a norm, denoted by \({\left| \left| \left| \cdot \right| \right| \right| }\), equivalent to the \(\Vert \cdot \Vert \)–norm such that A is dissipative on the space \((X,{\left| \left| \left| \cdot \right| \right| \right| })\), that is,

$$\begin{aligned} {\left| \left| \left| (\lambda -A)h \right| \right| \right| } \geqslant \lambda \,{\left| \left| \left| h \right| \right| \right| }, \qquad \forall \, \lambda >0,\,\;h \in {\mathscr {D}}(A). \end{aligned}$$

Given two Banach spaces X and Y, we denote by \(\Vert \cdot \Vert _{X \rightarrow Y}\) the operator norm on the space \({\mathscr {B}}(X,Y)\) of linear and continuous operators from X to Y.

Note also that, in what follows, for two positive quantities A and B, we write \(A \lesssim B\) if there exists a universal positive constant C (which is, in particular, independent of the parameters \(\alpha \) and \(\varepsilon \)) such that \(A \leqslant CB\).

1.3 Main Results

Both of our main results concern the solutions to (1.15). The first one is the following Cauchy theorem regarding the existence and uniqueness of close-to-equilibrium solutions to (1.15). The functional spaces at stake are \({L^{1}_{v}L^2_x}\)-based Sobolev spaces \(\mathcal {E}_{1} \hookrightarrow \mathcal {E}\) defined through

$$\begin{aligned} \mathcal {E}:={\mathbb {W}^{k,1}_{v}\mathbb {W}^{m,2}_{x}}(\langle v\rangle ^{q}), \quad \mathcal {E}_{1}:={\mathbb {W}^{k,1}_{v}\mathbb {W}^{m,2}_{x}}(\langle v\rangle ^{q+1}) \quad \text {with} \quad {m > d}, \quad m-1 \geqslant k \geqslant 0, \quad q \geqslant 3.\nonumber \\ \end{aligned}$$
(1.27)

Theorem 1.2

Under Assumption 1.1, for \(\varepsilon ,\lambda _0\) and \(\eta _0\) sufficiently small (with explicit bounds), if \(h^{{\hbox {in}}}_\varepsilon \in \mathcal {E}\) is such that

$$\begin{aligned} \Vert h^{{\hbox {in}}}_\varepsilon \Vert _{\mathcal {E}} \leqslant \eta _0, \end{aligned}$$

then the inelastic Boltzmann equation (1.15) has a unique solution

$$\begin{aligned} h_{\varepsilon } \in \mathcal {C}\big ([0,\infty ); \mathcal {E}\big ) \cap L^1\big ([0,\infty ); \mathcal {E}_{1}\big ) \end{aligned}$$

satisfying for any \(r \in (0,1)\),

$$\begin{aligned} \left\| h_\varepsilon (t)\right\| _{\mathcal {E}}\leqslant C \,\eta _0\,\exp \left( -(1-r){\lambda }_{\varepsilon }\,t\right) , \qquad \forall \, t >0 \end{aligned}$$

for some positive constant \(C=C(r) >0\) independent of \(\varepsilon \), and where \({\lambda }_{\varepsilon }\underset{\varepsilon \rightarrow 0}{\sim }\lambda _{0}+\eta (\varepsilon )\), with \(\lambda _0\) and \(\eta =\eta (\varepsilon )\) introduced in Assumption 1.1.

Remark 1.3

It is worth pointing out that the close-to-equilibrium solutions we construct are shown to decay with an exponential rate as close as we want to \({\lambda }_{\varepsilon } \sim \frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\) (which is the energy eigenvalue of the linearized operator, see Theorem 2.1 hereafter). The rate of convergence can thus be made uniform with respect to the Knudsen number \(\varepsilon \) (notice that if \(\lambda _0=0\), we obtain a rate of decay as close as we want to \(\eta (\varepsilon )\); we thus obtain a bound uniform in time but not a uniform rate of decay).

The estimates on the solution \(h_{\varepsilon }\) provided by Theorem 1.2 are enough to prove that the solution \(h_{\varepsilon }(t)\) converges towards some hydrodynamic solution \(\varvec{h}\) which depends on \((t,x)\) only through macroscopic quantities \((\varrho (t,x),u(t,x),\theta (t,x))\), which are solutions to a suitable modification of the incompressible Navier–Stokes system. This is done under an additional, only mildly restrictive, assumption on the initial datum. Before stating our main convergence result, we introduce the notation

$$\begin{aligned} {\mathscr {W}}_\ell := \left( \mathbb {W}^{\ell ,2}_x\left( \mathbb {T}^d\right) \right) ^{d+2}, \quad \ell \in {\mathbb {N}}\end{aligned}$$

and we furthermore assume that in the definition of the functional spaces (1.27), the following conditions are satisfied:

$$\begin{aligned} m > d, \quad m-1 \geqslant k \geqslant 1, \quad q \geqslant 5. \end{aligned}$$

Theorem 1.4

We suppose that the assumptions of Theorem 1.2 are satisfied. We assume furthermore that there exists \((\varrho _{0},u_{0},\theta _{0}) \in {\mathscr {W}}_m\) such that

$$\begin{aligned} \varvec{\pi }_{0}\left( h^{\text {in}}_\varepsilon \right) \xrightarrow [\varepsilon \rightarrow 0]{} h_{0}, \end{aligned}$$

where we recall that \(\varvec{\pi }_0\) is the projection onto the kernel of \({\mathscr {L}}_1\) defined in (1.23) and

$$\begin{aligned} h_{0}(x,v):=\left( \varrho _{0}(x)+u_{0}(x)\cdot v + \frac{1}{2}\theta _{0}(x)(|v|^{2}-d\vartheta _{1})\right) {\mathcal {M}}(v). \end{aligned}$$
(1.28)

Then, for any \(T >0\), the family of solutions \(\left\{ h_{\varepsilon } \right\} _{\varepsilon }\) constructed in Theorem 1.2 converges in some weak sense to a limit \(\varvec{h}=\varvec{h}(t,x,v)\) which is such that

$$\begin{aligned} \varvec{h}(t,x,v)=\left( \varrho (t,x)+u(t,x)\cdot v + \frac{1}{2}\theta (t,x)(|v|^{2}-d\vartheta _{1})\right) {\mathcal {M}}(v), \end{aligned}$$
(1.29)

where

$$\begin{aligned} (\varrho ,u,\theta ) \in \mathcal {C}\left( [0,T]\,;\,{\mathscr {W}}_{m-1}\right) \cap L^2\left( (0,T)\,;\,{\mathscr {W}}_m\right) \end{aligned}$$

is a solution to the following incompressible Navier–Stokes–Fourier system with forcing

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t}u-{\frac{{\nu }}{\vartheta _1}}\,\Delta _{x}u +{\vartheta _{1}}\,u\cdot \nabla _{x}\,u+\nabla _{x}p=\lambda _{0}u,\\ \partial _{t}\,\theta -\frac{\gamma }{\vartheta _{1}^{2}}\,\Delta _{x}\theta + \vartheta _{1}\,u\cdot \nabla _{x}\theta =\dfrac{\lambda _{0}\,{\bar{c}}}{2(d+2)}\sqrt{\vartheta _{1}}\,\theta ,\\ \text {div}_{x}u=0, \qquad \varrho + \vartheta _{1}\,\theta = 0, \end{array}\right. } \end{aligned}$$
(1.30)

subject to initial conditions \((\varrho _{\text {in}},u_{\text {in}},\theta _{\text {in}})\) defined by

$$\begin{aligned} u_{\text {in}}:=\mathcal {P}u_{0}, \quad \theta _{\text {in}}:=\frac{d}{d+2}\theta _{0}-\frac{2}{(d+2)\vartheta _{1}}\varrho _{0}, \quad \varrho _{\text {in}}:=-\vartheta _{1}\theta _{\text {in}} \end{aligned}$$
(1.31)

where \(\mathcal {P}\) is the Leray projection onto divergence-free vector fields and \((\varrho _0,u_0,\theta _0)\) have been introduced in (1.28). The viscosity \({\nu } >0\) and heat conductivity \(\gamma >0\) are explicit, and \(\lambda _{0}\) is the parameter appearing in Assumption 1.1. The parameter \({\bar{c}} >0\) depends on the collision kernel \(b(\cdot )\).
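To illustrate the role of the forcing term \(\lambda _{0}u\) in (1.30), let us record the formal energy balance obtained by testing the first equation against \(u\) (a simple observation of ours): since \(\text {div}_{x}u=0\), the transport term is skew-symmetric and \(\nabla _{x}p\) is orthogonal to divergence-free fields, so that

$$\begin{aligned} \frac{1}{2}\frac{\text {d}}{\text {d}t}\int _{\mathbb {T}^{d}}|u|^{2}\,\text {d}x=-\frac{\nu }{\vartheta _{1}}\int _{\mathbb {T}^{d}}|\nabla _{x}u|^{2}\,\text {d}x+\lambda _{0}\int _{\mathbb {T}^{d}}|u|^{2}\,\text {d}x. \end{aligned}$$

In the self-similar variables, the inelasticity thus acts as a linear amplification of the velocity field at rate \(\lambda _{0}\).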

Remark 1.5

The initial data that we consider here are actually quite general. Indeed, our assumption only requires that the macroscopic projection of \(h^{\text {in}}_\varepsilon \) converges towards some macroscopic distribution, and we do not make any assumption on the macroscopic quantities of this distribution. Namely, we do not suppose that the divergence-free and Boussinesq relations are satisfied by \((\varrho _0,u_0,\theta _0)\); the initial layer that could be created by this lack of assumptions is actually absorbed in our notion of weak convergence, which is quite particular and strongly related to the a priori estimates used in the proof of Theorem 1.2 (see Theorem 4.2 for more details on the type of convergence).

To prove Theorem 1.4, our approach is reminiscent of the program established in [5, 6, 9, 19], but simpler because our solutions are stronger than the renormalized ones used in [9]. It is based on computations and compactness arguments that were already used in the elastic case. Let us point out that, in our case, additional terms appear due to the inelasticity; they can be handled within the framework of Assumption 1.1. In Sect. 4, we present the proof but only mention its main steps and arguments (details can be found in [3, Sect. 6]).

2 Study of the Kinetic Linearized Problem

2.1 Main Result on the Linearized Operator

The first step in the proof of Theorem 1.2 is the spectral analysis of the linearized problem associated to (1.15). To that end, we introduce

$$\begin{aligned} \mathcal {G}_{\alpha ,\varepsilon }h:=-\frac{1}{\varepsilon } v \cdot \nabla _{x}h +\frac{1}{\varepsilon ^2}{\mathscr {L}}_{\alpha }h. \end{aligned}$$

We are going to state our main result on \(\mathcal {G}_{\alpha ,\varepsilon }\) in the space \(\mathcal {E}\) defined in (1.27). Our analysis actually allows us to treat even larger spaces (namely, we can obtain the same result under the softer constraints \(m \geqslant k \geqslant 0\) and \(q>2\)), but we only state the linear result in the space \(\mathcal {E}\) because it is the only one that will be used in the rest of the paper. Let us also recall that, in any reasonable space (in particular in \(\mathcal {E}\) and in the spaces \(\mathbb {Y}_j\), \(j = -1,0,1\), defined in (2.6)–(2.8)), the elastic operator has a spectral gap: there exists \(\mu _{\star } >0\) such that

$$\begin{aligned} {\mathfrak {S}}(\mathcal {G}_{1,\varepsilon })\cap {\mathbb {C}}_{\mu _\star }=\{0\} \end{aligned}$$
(2.1)

where 0 is an eigenvalue of algebraic multiplicity \(d+2\) of \(\mathcal {G}_{1,\varepsilon }\) associated to the eigenfunctions

$$\begin{aligned} \{{\mathcal {M}},v_{1}{\mathcal {M}},\dots ,v_d{\mathcal {M}},|v|^{2}{\mathcal {M}}\} \end{aligned}$$

(recall that \({\mathbb {C}}_{\mu _\star }\) is defined in (1.26)). This can be proven by an enlargement argument due to [10] based on the fact that in the Hilbert space

$$\begin{aligned} \mathcal {H}:= \mathbb {W}^{m,2}_{x,v} ({\mathcal {M}}^{-1/2}), \quad m>d \end{aligned}$$
(2.2)

a result of hypocoercivity has been proven in [7] (the constraint \(m \geqslant 1\) would actually be enough, but we will only make use of this result for \(m>d\) in the sequel). More precisely, introducing the second Hilbert space

$$\begin{aligned} \mathcal {H}_1 := \mathbb {W}^{m,2}_{x,v} ({\mathcal {M}}^{-1/2} \langle v \rangle ^{1/2}), \end{aligned}$$
(2.3)

there exist \(\mu _\star >0\) and a norm equivalent to the usual one uniformly in \(\varepsilon \) (to lighten notations, we still denote it by \(\Vert \cdot \Vert _\mathcal {H}\) and denote by \(\langle \cdot , \cdot \rangle _\mathcal {H}\) its associated scalar product) such that

$$\begin{aligned} \langle \mathcal {G}_{1,\varepsilon } h,h \rangle _\mathcal {H}\leqslant - \frac{\mu _\star }{\varepsilon ^2} \Vert (\mathbf {Id} - \varvec{\pi }_0) h\Vert ^2_{\mathcal {H}_1} - \mu _\star \Vert h\Vert ^2_{\mathcal {H}_1}. \end{aligned}$$
(2.4)

As we shall see in the following result, the scaling (1.21) in Assumption 1.1 is precisely the one which allows one to preserve exactly \(d+2\) eigenvalues in the neighborhood of zero for \(\mathcal {G}_{\alpha ,\varepsilon }\). Let us now state our main spectral result (see Fig. 1 for an illustration, where we have denoted \(\lambda _\varepsilon :=-\lambda _{d+2}(\varepsilon )\)):

Fig. 1 The set \({\mathbb {C}}_{\mu } \setminus \mathbb {D}(\mu _{\star }-\mu )\) and the eigenvalue \(-{\lambda }_{\varepsilon }\)

Theorem 2.1

Assume that Assumption 1.1 is met. For \(\mu \) close enough to \(\mu _{\star }\) defined in (2.1) (in an explicit way), there are some explicit \({\overline{\varepsilon }}>0\) and \({{\overline{\lambda }}}>0\) depending only on \(\chi :=\mu _\star -\mu \) such that, for all \(\varepsilon \in (0,{\overline{\varepsilon }})\) and \(\lambda _0 \in [0,{{\overline{\lambda }}})\), the linearized operator \(\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }\) has the following spectral property in \(\mathcal {E}\):

$$\begin{aligned} {\mathfrak {S}}(\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }) \cap {\mathbb {C}}_\mu =\{\lambda _{1}(\varepsilon ),\ldots ,\lambda _{d+2}(\varepsilon )\}, \end{aligned}$$
(2.5)

with

$$\begin{aligned} \lambda _{1}(\varepsilon )=0, \qquad \lambda _{j}(\varepsilon )=\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}, \qquad j=2,\ldots ,d+1, \end{aligned}$$

and

$$\begin{aligned} \lambda _{d+2}(\varepsilon )=-\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}+\text {O}(\varepsilon ^{2}) \quad \text {as} \quad \varepsilon \rightarrow 0. \end{aligned}$$

Remark 2.2

It is worth noticing that the eigenvalue \(\lambda _1(\varepsilon )=0\) corresponds to the mass conservation property of the operator \(\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }\). Concerning the intermediate eigenvalues \(\lambda _{j}(\varepsilon )\) for \(j=2,\ldots ,d+1\), as their explicit expression shows, they may be positive; this is due to the fact that the collision operator \(\mathcal {Q}_\alpha \) preserves momentum while the drift operator \(\nabla _v \cdot ( v \,\cdot )\) does not. However, using (1.19), one can prove that vanishing momentum is preserved by the whole operator \(\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }\); consequently, those eigenvalues will not affect the long-time analysis of our problem. Finally, the eigenvalue \(\lambda _{d+2}(\varepsilon )\) is directly linked to the fact that \(\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }\) does not preserve energy.
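To see where the intermediate eigenvalues come from, one can integrate the linearized equation \(\partial _{t}h=\mathcal {G}_{\alpha (\varepsilon ),\varepsilon }h\) against \(v\): the transport term vanishes by periodicity and the collision terms vanish by momentum conservation of \(\mathcal {Q}_{\alpha }\), leaving only the drift contribution

$$\begin{aligned} \frac{\text {d}}{\text {d}t}\int _{\mathbb {T}^{d}\times {\mathbb {R}}^{d}}h\,v\,\text {d}v\,\text {d}x=\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\int _{\mathbb {T}^{d}\times {\mathbb {R}}^{d}}h\,v\,\text {d}v\,\text {d}x, \end{aligned}$$

which is consistent with the value \(\lambda _{j}(\varepsilon )=(1-\alpha (\varepsilon ))\,\varepsilon ^{-2}\) for \(j=2,\ldots ,d+1\) and shows at once that zero momentum is propagated in time.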

We are going to prove Theorem 2.1 in two stages. First, we perform a perturbative argument (reminiscent of [21]) in an \(L^2_{v,x}\)-based Sobolev space, namely in

$$\begin{aligned} \mathbb {Y}:=\mathbb {W}^{s,2}_{v}\mathbb {W}^{\ell ,2}_{x} (\langle v \rangle ^{r}), \quad \ell \in {\mathbb {N}}, \, \, \, \, s \in {\mathbb {N}}^*, \, \, \, \, \ell \geqslant s+1, \, \, \, \, r>r^\star + \kappa +2 \end{aligned}$$
(2.6)

where

$$\begin{aligned} r^\star := 4\sqrt{\frac{\sigma _{1}}{\sigma _{0}}} + \frac{3}{2} \end{aligned}$$

with \(\sigma _0\) and \(\sigma _1\) defined in (2.9) and \(\kappa >d/2\). The key point of our approach is to see \(\mathcal {G}_{\alpha ,\varepsilon }\) as a perturbation of the elastic linearized operator \(\mathcal {G}_{1,\varepsilon }\). We then use an enlargement argument (from [10]) to extend the result from \(\mathbb {Y}\) to the space \(\mathcal {E}\) defined in (1.27).

Several remarks are in order:

  1. (i)

First, let us recall that the global equilibrium \(G_\alpha \) of our equation, defined in (1.12), has a fat exponential tail and, in particular, decays more slowly than a standard Maxwellian distribution (see [17]). As a consequence, we cannot rely on classical works on the elastic linearized operator, which are developed in spaces of type \(L^2_{v,x}({\mathcal {M}}^{-1/2})\) with \({\mathcal {M}}\) defined in (1.14). To overcome this difficulty, we exploit results coming from [10], in which an enlargement theory has been carried out. The results proven in [10] include a spectral analysis of the elastic Boltzmann operator \(\mathcal {G}_{1,1}\) in larger spaces (in particular of type \(L^2_{v,x}\)) with “soft weights” that can be polynomial or stretched exponential. In the same line of ideas, these results have been extended to the rescaled elastic operator \(\mathcal {G}_{1,\varepsilon }\) in [8].

  2. (ii)

Let us also point out that the perturbation at stake does not fall into the realm of the classical perturbation theory of unbounded operators, as described in [12], because the perturbation is not relatively bounded. Indeed, the domain of \(\mathcal {G}_{1,\varepsilon }\) in \(\mathbb {Y}\) is given by \(\mathbb {W}^{s+1,2}_{v}\mathbb {W}^{\ell +1,2}_{x} (\langle v \rangle ^{r+1})\) while, if one wants to be sharp in terms of rate, the best estimate in terms of functional spaces that we are able to get is

    $$\begin{aligned} \Vert \mathcal {G}_{\alpha ,\varepsilon } - \mathcal {G}_{1,\varepsilon }\Vert _{\mathbb {Y}_j \rightarrow \mathbb {Y}_{j-1}} ={1 \over \varepsilon ^2} \Vert {\mathscr {L}}_{\alpha } -{\mathscr {L}}_{1}\Vert _{\mathbb {Y}_j \rightarrow \mathbb {Y}_{j-1}} \lesssim \frac{1-\alpha }{\varepsilon ^2}, \quad j=0,1, \end{aligned}$$
    (2.7)

    where the spaces \(\mathbb {Y}_j\) are defined through

    $$\begin{aligned} \qquad \quad \mathbb {Y}_{-1} := \mathbb {W}^{s-1,2}_v\mathbb {W}^{\ell ,2}_x (\langle v \rangle ^{r-\kappa -2}), \quad \mathbb {Y}_0 := \mathbb {Y}, \quad \mathbb {Y}_1 := \mathbb {W}^{s+1,2}_v \mathbb {W}^{\ell ,2}_x ( \langle v \rangle ^{r+\kappa +2})\nonumber \\ \end{aligned}$$
    (2.8)

with \(\kappa >d/2\). These estimates (whose proofs can be found in [3, Lemma 3.3]) are a generalization and optimization of estimates obtained in [17] and are sharp in terms of rate; this sharpness is needed in our analysis since it allows us to deal with the case \(\lambda _0>0\) in Assumption 1.1.

  3. (iii)

As a consequence, we have to use refined perturbation arguments whose key insights come from [21]. Note however that we drastically simplify the analysis performed in [21] by remarking that the difference operator \(\mathcal {G}_{\alpha ,\varepsilon } - \mathcal {G}_{1,\varepsilon }\) does not involve any spatial derivative and that we “only” need to develop a spectral analysis of \(\mathcal {G}_{\alpha ,\varepsilon }\), without obtaining decay properties on the associated semigroup. As a consequence, we do not need to use a spectral mapping theorem, nor an iterated version of the Duhamel formula, and this is crucial in order to reach the optimal scaling (1.21) for our restitution coefficient.

  4. (iv)

Let us finally mention that we perform our perturbative argument in \(\mathbb {Y}\), which is an \(L^2_{v,x}\)-based Sobolev space, instead of performing it directly in \(\mathcal {E}\) (which is \(L^1_vL^2_x\)-based). This intermediate step seems necessary because, even if \({\mathscr {L}}_{\alpha }-{\mathscr {L}}_1\) satisfies nice estimates in \(L^1_v\), the use of Fubini's theorem is actually crucial to get the rate \((1-\alpha )/\varepsilon ^2\) in estimates of type (2.7).

2.2 Elements of Proof of Theorem 2.1

As mentioned above, the basis of the proof of this theorem is to see \({\mathscr {L}}_{\alpha }\) as a perturbation of \({\mathscr {L}}_{1}\).

We start by splitting \({\mathscr {L}}_{1}\) into two parts: one which has good regularizing properties (in the velocity variable) and another one which is dissipative. For any \(\delta >0\), one can write \({\mathscr {L}}_1= \mathcal {A}^{(\delta )} + \mathcal {B}_1^{(\delta )}\), with \(\mathcal {A}^{(\delta )}\) and \(\mathcal {B}_1^{(\delta )}\) defined through an appropriate mollification-truncation process (see [10,  Sect. 4.3.3] and [3,  Sect. 2.2] for the details). The elastic collision operator \({\mathscr {L}}_1\) reads (see the strong formulation of \(\mathcal {Q}_1\) in (1.6)):

$$\begin{aligned} {\mathscr {L}}_1 g = \int _{{\mathbb {R}}^d \times {\mathbb {S}}^{d-1}} b(\sigma \cdot {\bar{q}}) |v-v_*| ({\mathcal {M}}'_* g' +{\mathcal {M}}' g'_* - {\mathcal {M}}g_*)\, \text {d}{\sigma }\, \text {d}v_* - \int _{{\mathbb {R}}^{d}}{\mathcal {M}}_*|v-v_{*}|\, \text {d}v_{*} g \end{aligned}$$

where we have used the shorthand notations \(g = g(v)\), \(g_* = g(v_*)\), \(g' = g(v')\), \(g'_* = g(v'_*)\). We define

$$\begin{aligned} {\mathcal {A}}^{(\delta )} g&:= \int _{{\mathbb {R}}^d \times {\mathbb {S}}^{d-1}} \Theta _\delta \, b(\sigma \cdot {\bar{q}}) |v-v_*| ({\mathcal {M}}'_* g' +{\mathcal {M}}' g'_* - {\mathcal {M}}g_*)\, \text {d}{\sigma }\, \text {d}v_* \\ \mathcal {B}_1^{(\delta )} g&:= \int _{{\mathbb {R}}^d \times {\mathbb {S}}^{d-1}} (1-\Theta _\delta ) \, b(\sigma \cdot {\bar{q}}) |v-v_*| ({\mathcal {M}}'_* g' +{\mathcal {M}}' g'_* - {\mathcal {M}}g_*)\, \text {d}{\sigma }\, \text {d}v_*\\&\quad - g\int _{{\mathbb {R}}^{d}}{\mathcal {M}}_*|v-v_{*}|\, \text {d}v_{*} \end{aligned}$$

where \(\Theta _\delta = \Theta _\delta (v,v_*,\sigma )\) is an appropriate truncation function. The dissipativity property of \(\mathcal {B}_1^{(\delta )}\) comes from the fact that the truncation function \(\Theta _\delta \) is defined so that the first term is small as \(\delta \) goes to 0, together with the fact that there exist \(\sigma _0>0\) and \(\sigma _1>0\) such that

$$\begin{aligned} \sigma _0 \langle v \rangle \leqslant \int _{{\mathbb {R}}^{d}}{\mathcal {M}}_{*}|v-v_{*}|\, \text {d}v_{*} \leqslant \sigma _1 \langle v \rangle , \quad v \in {\mathbb {R}}^d. \end{aligned}$$
(2.9)
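The two-sided bound (2.9) says that the collision frequency \(\nu (v):=\int _{{\mathbb {R}}^{d}}{\mathcal {M}}_{*}|v-v_{*}|\,\text {d}v_{*}\) grows linearly in \(\langle v \rangle \). A small Monte Carlo illustration (ours, assuming NumPy; not taken from [3]) of the boundedness of the ratio \(\nu (v)/\langle v \rangle \) in dimension 3:

```python
import numpy as np

rng = np.random.default_rng(2)
d, theta1, n = 3, 1.0, 500_000
v_star = rng.normal(scale=np.sqrt(theta1), size=(n, d))  # samples drawn from M

for speed in (0.0, 1.0, 5.0, 20.0):
    v = np.zeros(d)
    v[0] = speed
    nu = np.mean(np.linalg.norm(v - v_star, axis=1))     # nu(v) by Monte Carlo
    print(speed, nu / np.hypot(1.0, speed))              # ratio stays bounded
```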

As a consequence, \( \mathcal {B}_1^{(\delta )}\) is going to be dissipative for \(\delta \) small enough. This leads to the following decomposition of \({\mathscr {L}}_{\alpha }\):

$$\begin{aligned} {\mathscr {L}}_{\alpha }=\mathcal {B}_{\alpha }^{(\delta )} + {\mathcal {A}}^{(\delta )}\,, \qquad \text {where} \quad \mathcal {B}_{\alpha }^{(\delta )}:=\underbrace{\mathcal {B}_{1}^{(\delta )}}_{\text {dissipative}} +\underbrace{\left[ {\mathscr {L}}_{\alpha }-{\mathscr {L}}_{1}\right] }_{\text {small as } \alpha \rightarrow 1} \end{aligned}$$
(2.10)

and then the following decomposition of \(\mathcal {G}_{\alpha ,\varepsilon }\):

$$\begin{aligned} \mathcal {G}_{\alpha ,\varepsilon }={\mathcal {A}}_{\varepsilon }^{(\delta )}+ \mathcal {B}_{\alpha ,\varepsilon }^{(\delta )}\,, \qquad \text {where} \qquad {\mathcal {A}}_{\varepsilon }^{(\delta )}:=\frac{1}{\varepsilon ^2} {\mathcal {A}}^{(\delta )}, \quad \mathcal {B}_{\alpha ,\varepsilon }^{(\delta )}:=\frac{1}{\varepsilon ^2} \mathcal {B}_{\alpha }^{(\delta )} -\frac{1}{\varepsilon } v\cdot \nabla _{x}\,.\nonumber \\ \end{aligned}$$
(2.11)

Our analysis of this splitting, and then of the spectrum of \(\mathcal {G}_{\alpha ,\varepsilon }\), relies on several elements: the nice properties of the above-mentioned splitting \({\mathscr {L}}_1= {\mathcal {A}}^{(\delta )} + \mathcal {B}_1^{(\delta )}\) coming from [10], some refined bilinear estimates on the collision operator coming from [1], new estimates on \(G_\alpha -{\mathcal {M}}\) that are reminiscent of estimates proven in [4] (see [3,  Lemma 2.3]), and also new estimates on \(\mathcal {Q}_\alpha - \mathcal {Q}_1\) (see [3,  Lemmas 2.1 and 2.2]). Concerning the latter point, we exploit ideas developed in [17], but our situation is more involved because we work in polynomially weighted spaces whereas, in [17], the authors were working with stretched exponential weights.

In the following lemma, we provide some regularization and hypodissipativity results on the splitting \(\mathcal {G}_{\alpha ,\varepsilon }={\mathcal {A}}_{\varepsilon }^{(\delta )}+ \mathcal {B}_{\alpha ,\varepsilon }^{(\delta )}\) (see [3,  Lemma 2.7 and Proposition 2.9]):

Lemma 2.3

There holds:

  1. (1)

    For any \(k \in {\mathbb {N}}\) and \(\delta >0,\) there are two positive constants \(C_{k,\delta },R_\delta >0\) such that \(\text {supp}\left( {\mathcal {A}}^{(\delta )}g\right) \subset B(0,R_{\delta })\) and

    $$\begin{aligned} {\Vert {\mathcal {A}}^{(\delta )}g\Vert _{\mathbb {W}^{k,2}_{v}({\mathbb {R}}^{d})}} \leqslant C_{k,\delta }\Vert g\Vert _{L^{1}_{v}(\langle v \rangle )}, \qquad \forall \, g \in L^{1}_v(\langle v \rangle ). \end{aligned}$$
    (2.12)
  2. (2)

    There exist \(\delta _0\), \(\alpha _0\), \(\nu _0\) such that for all \(\alpha \in (\alpha _0,1)\) and \(\delta \in (0,\delta _0)\),

the operator \(\mathcal {B}_{\alpha ,\varepsilon }^{(\delta )}+\nu _{0}\,\varepsilon ^{-2}\) is hypo-dissipative in \(\mathcal {E}\) and in \(\mathbb {Y}_{j}\) for \(j=-1,0,1\),

    where we recall that the spaces \(\mathcal {E}\) and \(\mathbb {Y}_j\) are respectively defined in (1.27) and (2.8).

In what follows, we suppose that Assumption 1.1 is satisfied. We introduce \(\varepsilon _0\) which is such that \(\alpha (\varepsilon _0) = \alpha _0\) (and thus \(\alpha (\varepsilon ) \in (\alpha _0,1)\) for all \(\varepsilon \in (0,\varepsilon _0)\)) and consider \(\delta \in (0,\delta _0)\), \(\varepsilon \in (0,\varepsilon _0)\). We will denote \(\mathcal {G}_\varepsilon := \mathcal {G}_{\alpha ,\varepsilon }\) as well as \({\mathcal {A}}_\varepsilon := {\mathcal {A}}_\varepsilon ^{(\delta )}\) and \(\mathcal {B}_\varepsilon := \mathcal {B}^{(\delta )}_{\alpha ,\varepsilon }\) but do not change the notations \(\mathcal {G}_{1,\varepsilon }\) and \(\mathcal {B}_{1,\varepsilon }\). The following corollary states immediate consequences of the previous lemma (we denote by \(\mathcal {R}(\cdot ,\mathcal {B}_{\varepsilon })\) the resolvent of the operator \(\mathcal {B}_\varepsilon \)):

Corollary 2.4

There holds:

  1. (1)

    For any \(i,j \in \{-1,0,1\}\), we have

    $$\begin{aligned} \Vert {\mathcal {A}}_\varepsilon \Vert _{\mathbb {Y}_{i}\rightarrow \mathbb {Y}_{j}} \lesssim \frac{1}{\varepsilon ^2}.\end{aligned}$$
  2. (2)

    If \(\nu >0\) is fixed, then for \(\varepsilon \) small enough (in terms of \(\nu _0\) and \(\nu \)) and \(j=-1,0,1\),

    $$\begin{aligned} \Vert \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\Vert _{\mathbb {Y}_j\rightarrow \mathbb {Y}_{j}} \lesssim \frac{1}{\text {Re}\,\lambda +\varepsilon ^{-2}{\nu _0}}\lesssim \varepsilon ^2, \qquad \forall \, \text {Re}\,\lambda >-\nu \,. \end{aligned}$$

The second key point needed to develop our perturbative argument is a good understanding of the spectrum of the operator \(\mathcal {G}_{1,\varepsilon }\). We give here some estimates on the associated resolvent, which are a consequence of a decay result for the associated semigroup (see [3, Theorem 2.12], which gives an improved version of [8, Theorem 2.1]):

Lemma 2.5

There exists \(\varepsilon _1 \in (0,\varepsilon _0)\) such that for \(j=-1,0,1\), for any \(\lambda \in {\mathbb {C}}_{\mu _{\star }}^{\star }\) and any \(\varepsilon \in (0,\varepsilon _1)\),

$$\begin{aligned} \Vert \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon })\Vert _{\mathbb {Y}_j\rightarrow \mathbb {Y}_{j}} \lesssim \max \bigg (\frac{1}{|\lambda |},\frac{1}{\text {Re}\lambda +\mu _{\star }}\bigg ) \end{aligned}$$

where \(\mu _\star \) has been defined in (2.1).

Let us now explain how we develop our perturbative argument to prove Theorem 2.1. The following proposition (which is an adaptation of [21,  Lemma 2.16]) is the first step in the development of the perturbative argument and its proof relies on Corollary 2.4 and Lemma 2.5.

Proposition 2.6

For all \(\lambda \in {\mathbb {C}}_{\mu _{\star }}^{\star }\), let

$$\begin{aligned} {\mathcal {J}}_{\varepsilon }(\lambda )=\left( \mathcal {G}_{\varepsilon } -\mathcal {G}_{1,\varepsilon }\right) \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\,\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon }). \end{aligned}$$

Then, for any \(\mu \in (0,\mu _{\star })\), there exists \(\varepsilon _2 \in (0,\varepsilon _1)\) such that for any \(\varepsilon \in (0,\varepsilon _2)\) and any \(\lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\),

$$\begin{aligned} \left\| {\mathcal {J}}_{\varepsilon }(\lambda )\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \frac{1}{\mu _\star -\mu }\,\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}. \end{aligned}$$
(2.13)

In addition, there exist \(\varepsilon _3 \in (0,\varepsilon _2)\) and \(\lambda _3>0\) such that for \(\varepsilon \in (0,\varepsilon _3)\) and \(\lambda _0 \in [0,\lambda _3)\) (where \(\lambda _0\) is defined in Assumption 1.1), \(\mathbf {Id}-{\mathcal {J}}_{\varepsilon }(\lambda )\) and \(\lambda -\mathcal {G}_{\varepsilon }\) are invertible in \(\mathbb {Y}\) with

$$\begin{aligned} \mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })=\Gamma _{\varepsilon }(\lambda )(\mathbf {Id}-{\mathcal {J}}_{\varepsilon }(\lambda ))^{-1}, \qquad \lambda \in {\mathbb {C}}_{\mu } \setminus \mathbb {D}(\mu _{\star }-\mu ), \end{aligned}$$
(2.14)

where \(\Gamma _{\varepsilon }(\lambda ):=\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })+\mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\,\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\). Finally, we have for \(\varepsilon \in (0,\varepsilon _3)\),

$$\begin{aligned} \Vert \mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \frac{1}{\mu _\star -\mu }, \qquad \lambda \in {\mathbb {C}}_{\mu } \setminus \mathbb {D}(\mu _{\star }-\mu ). \end{aligned}$$
(2.15)

Sketch of the proof

The estimate on \({\mathcal {J}}_{\varepsilon }(\lambda )\) can easily be deduced from (2.7), Corollary 2.4 and Lemma 2.5. First, fix \(\mu \in (0,\mu _\star )\) and notice that, from Corollary 2.4, there exists \(\varepsilon _2 \in (0,\varepsilon _1)\) (which depends on \(\nu _0\) and \(\mu \)) such that for any \(\text {Re}\,\lambda >-\mu \), we have:

$$\begin{aligned} \left\| {\mathcal {A}}_{\varepsilon }\,\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}_1} \lesssim 1. \end{aligned}$$

We can then deduce that for \(\mu \in (0,\mu _{\star })\), for any \(\text {Re}\,\lambda >-\mu \), \(|\lambda | \geqslant \mu _\star -\mu \),

$$\begin{aligned} \begin{aligned} \left\| {{\mathcal {J}}}_{\varepsilon }(\lambda )\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}}&\leqslant \left\| \mathcal {G}_{\varepsilon }-\mathcal {G}_{1,\varepsilon }\right\| _{\mathbb {Y}_{1}\rightarrow \mathbb {Y}}\,\Vert \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon })\Vert _{\mathbb {Y}_{1}\rightarrow \mathbb {Y}_{1}}\,\left\| {\mathcal {A}}_{\varepsilon }\,\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}_{1}}\\&\leqslant C \, \frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\,\,\max \left( \frac{1}{|\lambda |},\,\frac{1}{\text {Re}\lambda +\mu _{\star }}\right) \leqslant C \,\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}} \frac{1}{\mu _\star -\mu } \end{aligned} \end{aligned}$$
(2.16)

for some \(C>0\). Moreover, one can choose \(\varepsilon _3 \in (0,\varepsilon _2)\) and \(\lambda _3 >0\) depending on the difference \(\chi = \mu _{\star }-\mu \), so that, if \(\lambda _0 \in [0,\lambda _3)\) (recall that \(\lambda _0\) is defined in Assumption 1.1 and satisfies \((1-\alpha (\varepsilon ))\, \varepsilon ^{-2} = \lambda _0 + \eta (\varepsilon )\)), then

$$\begin{aligned} \rho (\varepsilon ):=\frac{C}{\mu _\star -\mu }\,\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}} < 1, \qquad \forall \, \varepsilon \in (0,\varepsilon _3). \end{aligned}$$
(2.17)

Under such an assumption, one sees that, for all \(\lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\), \(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda )\) is invertible in \(\mathbb {Y}\) with

$$\begin{aligned} (\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}=\sum _{p=0}^{\infty }\left[ {{\mathcal {J}}}_{\varepsilon }(\lambda )\right] ^{p}, \qquad \forall \, \varepsilon \in (0,\varepsilon _3). \end{aligned}$$

Let us then fix \(\varepsilon \in (0,\varepsilon _3)\) and \(\lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\). The range of \(\Gamma _{\varepsilon }(\lambda )\) is clearly included in \({\mathscr {D}}(\mathcal {B}_{\varepsilon })={\mathscr {D}}(\mathcal {G}_{1,\varepsilon })\). Then, writing \(\mathcal {G}_{\varepsilon }={\mathcal {A}}_{\varepsilon }+\mathcal {B}_{\varepsilon }\), we easily get that

$$\begin{aligned} (\lambda -\mathcal {G}_{\varepsilon })\Gamma _{\varepsilon }(\lambda )=\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ) \end{aligned}$$
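Indeed, applying the decomposition \(\lambda -\mathcal {G}_{\varepsilon }=(\lambda -\mathcal {B}_{\varepsilon })-{\mathcal {A}}_{\varepsilon }\) to the first term of \(\Gamma _{\varepsilon }(\lambda )\) and \(\lambda -\mathcal {G}_{\varepsilon }=(\lambda -\mathcal {G}_{1,\varepsilon })-(\mathcal {G}_{\varepsilon }-\mathcal {G}_{1,\varepsilon })\) to the second one, we obtain

$$\begin{aligned} (\lambda -\mathcal {G}_{\varepsilon })\Gamma _{\varepsilon }(\lambda )=\left[ \mathbf {Id}-{\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\right] +\left[ {\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-{{\mathcal {J}}}_{\varepsilon }(\lambda )\right] =\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ), \end{aligned}$$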

i.e. \(\Gamma _{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\) is a right-inverse of \((\lambda -\mathcal {G}_{\varepsilon }).\) To prove that \(\lambda -\mathcal {G}_{\varepsilon }\) is invertible, it is therefore enough to prove that it is one-to-one, which can be done, up to reducing the value of \(\varepsilon _3\), using (2.7) and Lemmas 2.3 and 2.5. Thus, for \(\varepsilon \in (0,\varepsilon _3)\), \({\mathbb {C}}_{\mu } \setminus \mathbb {D}(\mu _{\star }-\mu )\) is included in the resolvent set of \(\mathcal {G}_{\varepsilon }\), and this shows (2.14). To estimate \(\Vert \mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })\Vert _{\mathbb {Y} \rightarrow \mathbb {Y}}\), one simply notices that

$$\begin{aligned} \Vert (\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \leqslant \sum _{p=0}^{\infty }\Vert {{\mathcal {J}}}_{\varepsilon }(\lambda )\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}}^{p}\leqslant \frac{1}{1-\rho (\varepsilon )}, \quad \forall \, \lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\nonumber \\ \end{aligned}$$
(2.18)

from which, as soon as \(\lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\),

$$\begin{aligned} \Vert \mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}}\leqslant \frac{1}{1-\rho (\varepsilon )}\,\Vert \Gamma _{\varepsilon }(\lambda )\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}}\,. \end{aligned}$$

One checks, using the previous computations, that for \(\lambda \in {\mathbb {C}}_{\mu }\setminus \mathbb {D}(\mu _{\star }-\mu )\),

$$\begin{aligned} \Vert \Gamma _{\varepsilon }(\lambda )\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \varepsilon ^{2}+\Vert {\mathcal {A}}\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}}\Vert \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon })\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \end{aligned}$$
(2.19)

from which one deduces (2.15). This achieves the proof. \(\square \)

A first obvious consequence of Proposition 2.6 is that, for any \(\mu \in (0,\mu _{\star })\), there is \(\varepsilon _3>0\) depending only on \(\chi =\mu _{\star }-\mu \) such that

$$\begin{aligned} {\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap {\mathbb {C}}_\mu \subset \mathbb {D}(\mu _\star -\mu ), \qquad \forall \, \varepsilon \in (0,\varepsilon _3). \end{aligned}$$

We denote by \({\mathbf {P}}_{\varepsilon }\) (resp. \({\mathbf {P}}_0\)) the spectral projection associated to the set

$$\begin{aligned} {\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap {\mathbb {C}}_{\mu }={\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap \mathbb {D}(\mu _{\star }-\mu ) \quad (\text {resp.} \quad {\mathfrak {S}}(\mathcal {G}_{1,\varepsilon }) \cap {\mathbb {C}}_{\mu } = \{0\}). \end{aligned}$$

One can deduce then the following lemma whose proof is similar to [21, Lemma 2.17].

Lemma 2.7

For any \(\mu \in (0,\mu _\star )\) such that \(\mathbb {D}(\mu _\star -\mu ) \subset {\mathbb {C}}_\mu \), there exist \(\varepsilon _4 \in (0,\varepsilon _3)\) and \(\lambda _4 \in (0,\lambda _3)\) depending only on \(\chi =\mu _{\star }-\mu \) such that if \(\lambda _0 \in [0,\lambda _4)\) (where \(\lambda _0\) is defined in Assumption 1.1),

$$\begin{aligned} \left\| {\mathbf {P}}_{\varepsilon }-{\mathbf {P}}_{0}\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}} < 1, \qquad \forall \, \varepsilon \in (0,\varepsilon _4). \end{aligned}$$

In particular,

$$\begin{aligned} \mathrm{dim}\,\mathrm{Range}({\mathbf {P}}_{\varepsilon })=\mathrm{dim}\,\mathrm{Range}({\mathbf {P}}_{0})=d+2, \qquad \forall \, \varepsilon \in (0,\varepsilon _4). \end{aligned}$$
(2.20)

Sketch of the proof

Let \(\mu \in (0,\mu _\star )\) be close enough to \(\mu _\star \) so that \(\mathbb {D}(\mu _\star - \mu ) \subset {\mathbb {C}}_\mu \), and let \(0< r < \chi =\mu _{\star }-\mu \). We set \(\gamma _{r}:=\{z \in {\mathbb {C}}\;;\;|z|=r\}\); one has \(\gamma _{r} \subset {\mathbb {C}}_{\mu }^{\star }\). Recall that by definition

$$\begin{aligned} {\mathbf {P}}_{\varepsilon }:=\frac{1}{2i\pi }\oint _{\gamma _{r}}\mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })\, \text {d}\lambda , \qquad {\mathbf {P}}_{0}:=\frac{1}{2i\pi }\oint _{\gamma _{r}}\mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon })\, \text {d}\lambda . \end{aligned}$$

For \(\lambda \in \gamma _{r}\), set

$$\begin{aligned} \mathcal {Z}_{\varepsilon }(\lambda )=\mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon }) \end{aligned}$$

so that \(\Gamma _{\varepsilon }(\lambda )=\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })+\mathcal {Z}_{\varepsilon }(\lambda )\). Recall from (2.14) that, for \(\lambda \in \gamma _{r}\),

$$\begin{aligned} \mathcal {R}(\lambda ,\mathcal {G}_{\varepsilon })= & {} \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}+\mathcal {Z}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\\= & {} \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })+\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon }){{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}+\mathcal {Z}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1} \end{aligned}$$

where we wrote \((\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}=\mathbf {Id}+{{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\) to get the second equality. One also has

$$\begin{aligned} \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }) =\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })+\mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\left[ \mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })-\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\right] +\mathcal {Z}_{\varepsilon }(\lambda ). \end{aligned}$$
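Subtracting these two expansions and integrating over \(\gamma _{r}\), the contributions of \(\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\) and \(\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\) disappear since both resolvents are holomorphic inside \(\mathbb {D}(r)\), while the remaining terms recombine according to the following elementary algebra (a sketch, consistent with the identity \(\Gamma _{\varepsilon }(\lambda )=\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })+\mathcal {Z}_{\varepsilon }(\lambda )\)):

$$\begin{aligned} \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon }){{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}+\mathcal {Z}_{\varepsilon }(\lambda )\left[ (\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}-\mathbf {Id}\right] =\Gamma _{\varepsilon }(\lambda ){{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}. \end{aligned}$$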

One can then obtain (see the proof of [3, Lemma 3.8] for the details)

$$\begin{aligned} {\mathbf {P}}_{\varepsilon }-{\mathbf {P}}_{0}= & {} \frac{1}{2i\pi }\oint _{\gamma _{r}}\Gamma _{\varepsilon }(\lambda ){{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\, \text {d}\lambda \\&\quad +\frac{1}{2i\pi }\oint _{\gamma _{r}}\mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\left[ \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\right] \, \text {d}\lambda . \end{aligned}$$

The first part is estimated thanks to (2.16), (2.18) and (2.19) combined with Lemma 2.5:

$$\begin{aligned} \Vert \Gamma _{\varepsilon }(\lambda ){{\mathcal {J}}}_{\varepsilon }(\lambda )(\mathbf {Id}-{{\mathcal {J}}}_{\varepsilon }(\lambda ))^{-1}\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \frac{1}{r^2} \frac{1}{1-\rho (\varepsilon )} \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}. \end{aligned}$$

For the second part, notice first that from Lemma 2.5,

$$\begin{aligned} \left\| \mathcal {R}(\lambda ,\mathcal {G}_{1,\varepsilon }){\mathcal {A}}_{\varepsilon }\left[ \mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\right] \right\| _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \frac{1}{r}\,\left\| {\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-{\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}}. \end{aligned}$$

Then, for \(\lambda \in \gamma _{r}\), we have

$$\begin{aligned} {\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-{\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })={\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\left[ \mathcal {B}_{\varepsilon }-\mathcal {B}_{1,\varepsilon }\right] \mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon }) \end{aligned}$$

which implies that

$$\begin{aligned}&\left\| {\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })-{\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\right\| _{\mathbb {Y}\rightarrow \mathbb {Y}}\\&\quad \leqslant \Vert {\mathcal {A}}_{\varepsilon }\mathcal {R}(\lambda ,\mathcal {B}_{\varepsilon })\Vert _{\mathbb {Y}_{-1}\rightarrow \mathbb {Y}}\,\Vert \mathcal {B}_{\varepsilon }-\mathcal {B}_{1,\varepsilon }\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}_{-1}}\,\Vert \mathcal {R}(\lambda ,\mathcal {B}_{1,\varepsilon })\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \lesssim \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}. \end{aligned}$$

Proceeding as in the proof of Proposition 2.6, one can conclude that for any \(0<r<\chi =\mu _{\star }-\mu \),

$$\begin{aligned} \Vert {\mathbf {P}}_{\varepsilon }-{\mathbf {P}}_{0}\Vert _{\mathbb {Y}\rightarrow \mathbb {Y}} \leqslant \frac{C}{r} \,\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}} \left( \frac{1}{r (1-\rho (\varepsilon ))} +1 \right) =:\ell (\varepsilon ). \end{aligned}$$
(2.21)

Thanks to Assumption 1.1, one can find \(\varepsilon _4\) and \(\lambda _4\) depending only on \(\chi \) such that \(\ell (\varepsilon ) < 1\) for any \(\varepsilon \in (0,\varepsilon _4)\) and \(\lambda _0 \in [0,\lambda _4)\). In particular, we deduce (2.20) from [12, Paragraph I.4.6]. \(\square \)

With Lemma 2.7, we can now end the proof of Theorem 2.1.

Sketch of the proof of Theorem 2.1

The structure of \({\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap {\mathbb {C}}_{\mu }\) in the space \(\mathbb {Y}\) comes directly from Lemma 2.7 together with Proposition 2.6. To describe more precisely the spectrum, one first remarks that

$$\begin{aligned} {\mathfrak {S}}({\mathscr {L}}_{\alpha (\varepsilon )}) \cap {\mathbb {C}}_\mu \subset {\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap {\mathbb {C}}_\mu . \end{aligned}$$

This comes from the fact that for each eigenvalue of \({\mathscr {L}}_{\alpha (\varepsilon )}\), the eigenfunction depends only on \(v\) and thus remains an eigenfunction for the operator \(\mathcal {G}_\varepsilon \). Since, for \(\varepsilon \) small enough, the same perturbative argument as the one developed above implies that the spectral projection \(\Pi _{{\mathscr {L}}_{\alpha (\varepsilon )}}\) associated to \({\mathfrak {S}}({\mathscr {L}}_{\alpha (\varepsilon )}) \cap {\mathbb {C}}_{\mu }\) satisfies

$$\begin{aligned} \text {dim(Range}(\Pi _{{\mathscr {L}}_{\alpha (\varepsilon )}}))=\text {dim(Range}(\Pi _{{\mathscr {L}}_{1}}))=d+2=\text {dim(Range}({\mathbf {P}}_{\varepsilon })), \end{aligned}$$

we get that

$$\begin{aligned} {\mathfrak {S}}({\mathscr {L}}_{\alpha (\varepsilon )}) \cap {\mathbb {C}}_{\mu } = {\mathfrak {S}}(\mathcal {G}_{\varepsilon }) \cap {\mathbb {C}}_{\mu }\,, \end{aligned}$$
(2.22)

that is, the eigenvalues \(\lambda _{j}(\varepsilon )\) are actually eigenvalues of \({\mathscr {L}}_{\alpha (\varepsilon )}\). The expansion of the energy eigenvalue \(\lambda _{d+2}(\varepsilon )\) comes from [17]. The conservation of mass gives us that 0 is an eigenvalue for our problem. The intermediate eigenvalues \(\lambda _j(\varepsilon )\) for \(j=2, \dots , d+1\) are obtained thanks to the fact that

$$\begin{aligned} \int _{ {\mathbb {R}}^{d}}{\mathscr {L}}_{\alpha (\varepsilon )} \varphi (v)\,v_{i}\,\text {d}v =-\frac{1-\alpha (\varepsilon )}{\varepsilon ^2} \int _{{\mathbb {R}}^{d}}v_{i}\nabla \cdot (v\varphi (v))\,\text {d}v=\frac{1-\alpha (\varepsilon )}{\varepsilon ^2}\int _{{\mathbb {R}}^{d}}v_{i}\,\varphi (v)\,\text {d}v. \end{aligned}$$

Notice that all this allows us to find eigenfunctions (that depend only on v) in \(L^2_{v,x}(\langle v \rangle ^r)\). Using once more the splitting \({\mathscr {L}}_{\alpha }={\mathcal {A}}^{(\delta )}+\mathcal {B}_{\alpha }^{\delta }\) defined in (2.10) and the regularizing properties of \({\mathcal {A}}^{(\delta )}\), one can actually prove that our eigenfunctions lie in \(\mathbb {Y}\), which yields the conclusion of Theorem 2.1 in the space \(\mathbb {Y}\). To extend the result to the space \(\mathcal {E}\), we use an enlargement argument coming from [10]; we omit the details here and just mention that this argument is based on the splitting \(\mathcal {G}_\varepsilon = {\mathcal {A}}_\varepsilon +\mathcal {B}_\varepsilon \) introduced in (2.11). \(\square \)

3 Study of the Kinetic Nonlinear Problem

Let us recall that the spaces \(\mathcal {E}\) and \(\mathcal {E}_1\) are defined in (1.27). In this section, we assume that Assumption 1.1 is met and consider \(\varepsilon \in (0,{{\overline{\varepsilon }}})\), \(\lambda _0 \in \big [0,{{\overline{\lambda }}}\big ]\) where \({{\overline{\varepsilon }}}\) and \({{\overline{\lambda }}}\) are defined in Theorem 2.1. As in Sect. 2, to lighten the notations, we write \(\mathcal {G}_\varepsilon = \mathcal {G}_{\alpha (\varepsilon ),\varepsilon }\) as well as \(\mathcal {B}_\varepsilon =\mathcal {B}_{\alpha (\varepsilon ),\varepsilon }\).

3.1 Splitting of the Nonlinear Inelastic Boltzmann Equation

Now that the spectral analysis of the linearized operator \(\mathcal {G}_\varepsilon \) in the space \(\mathcal {E}\) has been performed, in order to prove Theorem 1.2, we are going to prove several a priori estimates for the solutions to (1.15). The crucial point in the analysis lies in the splitting of (1.15) into a system of two equations, mimicking a spectral enlargement method from a PDE perspective (see [18,  Sect. 2.3] and [8] for pioneering ideas on such a method). More precisely, using (2.11), the splitting amounts to looking for a solution of (1.15) of the form

$$\begin{aligned} h_{\varepsilon }(t)=h^{0}_{\varepsilon }(t)+h^{1}_{\varepsilon }(t) \end{aligned}$$

with \(h^{0}_{\varepsilon }\) solution to

$$\begin{aligned} \left\{ \begin{array}{ccl} \partial _{t} h^{0}_\varepsilon &{}=&{}\,\mathcal {B}_{\varepsilon }h^{0}_\varepsilon + \frac{1}{\varepsilon }\mathcal {Q}_{\alpha (\varepsilon )}(h^{0}_\varepsilon ,h^{0}_\varepsilon ) + \frac{1}{\varepsilon }\Big [\mathcal {Q}_{{\alpha (\varepsilon )}}(h^{0}_\varepsilon ,h^{1}_\varepsilon )+\mathcal {Q}_{{\alpha (\varepsilon )}}(h^{1}_\varepsilon ,h^{0}_\varepsilon )\Big ] \\ &{}&{} +{\Big [\mathcal {G}_{\varepsilon }h^{1}_\varepsilon -\mathcal {G}_{1,\varepsilon }h^{1}_\varepsilon \Big ] + \frac{1}{\varepsilon }\Big [\mathcal {Q}_{{\alpha (\varepsilon )}}(h^{1}_\varepsilon ,h^{1}_\varepsilon )-\mathcal {Q}_{1}(h^{1}_\varepsilon ,h^{1}_\varepsilon )\Big ]},\\ h^{0}_\varepsilon (0,x,v)&{}=&{}\,\!h_{\varepsilon }^{{\text {in}}}(x,v) \in \mathcal {E}, \end{array}\right. \end{aligned}$$
(3.1)

and \(h^{1}_\varepsilon \) solution to

$$\begin{aligned} \left\{ \begin{array}{ccl} \partial _{t} h^{1}_\varepsilon &{}=&{}{\mathcal {G}_{1,\varepsilon }h^{1}_\varepsilon } + \frac{1}{\varepsilon }\mathcal {Q}_{1}(h^{1}_\varepsilon ,h^{1}_\varepsilon ) + {\mathcal {A}}_\varepsilon h^{0}_\varepsilon ,\\ h^{1}_\varepsilon (0,x,v)&{}=&{}0. \end{array}\right. \end{aligned}$$
(3.2)
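As a consistency check, note that summing (3.1) and (3.2) and using \(\mathcal {G}_{\varepsilon }={\mathcal {A}}_{\varepsilon }+\mathcal {B}_{\varepsilon }\) (from (2.11)) together with the bilinearity of the collision operator, one recovers, at least formally,

$$\begin{aligned} \partial _{t}\big (h^{0}_\varepsilon +h^{1}_\varepsilon \big ) = \mathcal {G}_{\varepsilon }\big (h^{0}_\varepsilon +h^{1}_\varepsilon \big ) + \frac{1}{\varepsilon }\mathcal {Q}_{\alpha (\varepsilon )}\big (h^{0}_\varepsilon +h^{1}_\varepsilon ,h^{0}_\varepsilon +h^{1}_\varepsilon \big ), \end{aligned}$$

so that \(h_{\varepsilon }=h^{0}_\varepsilon +h^{1}_\varepsilon \) indeed solves (1.15) with initial datum \(h^{\text {in}}_{\varepsilon }\).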

In order to lighten the notations, in this section, we will write \(h^{\text {in}}\), \(h\), \(h^{0}\) and \(h^{1}\) instead of \(h^{\text {in}}_\varepsilon \), \(h_\varepsilon \), \(h^{0}_\varepsilon \) and \(h^{1}_\varepsilon \). The goal is to obtain nice nested a priori estimates on \(h^{0}\) and \(h^{1}\). Notice first that our splitting is more complicated than the one of [8] because it relies on perturbative considerations around the elastic case that come out in the equation satisfied by \(h^{0}\). As a consequence, our a priori estimates are more intricate and require the use of a nonstandard Gronwall lemma. Notice also that since the initial datum of \(h^{1}\) vanishes, we can study the equation on \(h^{1}\) in any functional space. In particular, we can study it in the Hilbert space \(\mathcal {H}= \mathbb {W}_{x,v}^{m,2}\left( {\mathcal {M}}^{-1/2}\right) \) in which we have a good understanding of the elastic linearized operator \(\mathcal {G}_{1,\varepsilon }\). Indeed, in this type of spaces, the symmetries of the collision operator \(\mathcal {Q}_1\) allow one to get some nice hypocoercive estimates (see (2.4)).

Remark 3.1

In [10], the authors treat the elastic case (\(\alpha =1\)) of the non-rescaled equation (\(\varepsilon =1\)) and they do not resort to such a splitting method to study the nonlinear equation; their approach is based on the use of a norm which is equivalent to the usual one and is such that \(\mathcal {G}_{1,1}\) is dissipative in this norm in large spaces. Such an approach is no longer usable when one wants to deal with rescaled equations and obtain uniform-in-\(\varepsilon \) estimates. Indeed, the definition of the equivalent norm in [10] does not take into account the different behaviors of the microscopic and macroscopic parts of the solution with respect to \(\varepsilon \): typically, the microscopic part of the solution vanishes as \(\varepsilon \rightarrow 0\) whereas the macroscopic one does not. Conversely, in the splitting method, the equation that defines \(h^{1}\) is treated thanks to hypocoercivity tricks that allow one to distinguish microscopic and macroscopic behaviors.

3.2 Estimating \(h^{0}\)

Concerning \(h^{0}\), let us first mention that the dissipativity properties of \(\mathcal {B}_{\varepsilon }\) stated in Lemma 2.3 can actually be improved a bit. More precisely, one can show that there exist norms on the spaces \(\mathcal {E}\) and \(\mathcal {E}_1\) that are equivalent to the standard ones (with multiplicative constants independent of \(\varepsilon \)) that we still denote \(\Vert \cdot \Vert _{\mathcal {E}}\) and \(\Vert \cdot \Vert _{\mathcal {E}_1}\) and that satisfy:

$$\begin{aligned} \frac{\text {d}}{\text {d}t}\Vert S_{\mathcal {B}_{\varepsilon }}(t)g\Vert _{\mathcal {E}} \leqslant -\frac{\nu _{0}}{\varepsilon ^2}\Vert S_{\mathcal {B}_{\varepsilon }}(t)g\Vert _{\mathcal {E}_{1}} \end{aligned}$$
(3.3)

where we have denoted by \(\left( S_{\mathcal {B}_{\varepsilon }}(t)\right) _{t\geqslant 0}\) the semigroup generated by \(\mathcal {B}_\varepsilon \) and \(\nu _0\) is defined in Lemma 2.3. Let us also introduce the Banach space \(\mathcal {E}_2\)

$$\begin{aligned} \mathcal {E}_{2}:={\mathbb {W}^{k+1,2}_{v}\mathbb {W}^{m,2}_{x}(\varvec{\varpi }_{q+{2\kappa }+2}), \quad \kappa > \frac{d}{2}} \end{aligned}$$

which satisfies the following continuous embeddings: \(\mathcal {H}\hookrightarrow \mathcal {E}_2 \hookrightarrow \mathcal {E}_1\) (recall that \(\mathcal {E}_1\) is defined in (1.27)). Let us point out that the spaces \(\mathcal {E}_1\) and \(\mathcal {E}_2\) allow us to get the following estimates (see [3,  Remark 3.5] and [1, 2]):

$$\begin{aligned} \Vert (\mathcal {Q}_\alpha -\mathcal {Q}_1)(g,f)\Vert _\mathcal {E}\lesssim (1-\alpha ) \Vert g\Vert _{\mathcal {E}_2} \Vert f\Vert _{\mathcal {E}_2} \quad \text {and} \quad \Vert \mathcal {Q}_\alpha (g,f)\Vert _{\mathcal {E}} \lesssim \Vert g\Vert _\mathcal {E}\Vert f\Vert _{\mathcal {E}_1} + \Vert g\Vert _{\mathcal {E}_1} \Vert f\Vert _\mathcal {E}\nonumber \\ \end{aligned}$$
(3.4)

where the multiplicative constants are uniform in \(\alpha \). One can then obtain the following proposition:

Proposition 3.2

Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty . \end{aligned}$$

For \(\nu \in (0,\nu _0)\) (where \(\nu _0\) is defined in Lemma 2.3), there exists an explicit \(\varepsilon _5 \in (0,{{\overline{\varepsilon }}})\) (where \({{\overline{\varepsilon }}}\) is defined in Theorem 2.1) such that for any \(\varepsilon \in (0,\varepsilon _5)\):

$$\begin{aligned} \begin{aligned} \Vert h^{0}(t)\Vert _{\mathcal {E}} \lesssim \Vert h^{\text {in}}\Vert _{\mathcal {E}}\,e^{-\frac{\nu }{\varepsilon ^2}t} + {\lambda }_{\varepsilon } \int ^{t}_{0}e^{-\frac{\nu }{\varepsilon ^2}(t-s)}\Vert h^{1}(s)\Vert _{\mathcal {H}}\,\text {d}s \end{aligned} \end{aligned}$$
(3.5)

where we recall that \(\lambda _\varepsilon \underset{\varepsilon \rightarrow 0}{\sim } \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}\) is defined in Theorem 2.1.

Sketch of the proof

Using (3.3) as well as (3.4) and recalling that \(h^{0}\) solves (3.1), we can compute the evolution of \(\Vert h^{0}(t)\Vert _\mathcal {E}\) and estimate it:

$$\begin{aligned} \frac{\text {d}}{\text {d}t} \Vert h^{0}(t)\Vert _{\mathcal {E}}&\leqslant -\frac{\nu _0}{\varepsilon ^2} \Vert h^{0}(t)\Vert _{\mathcal {E}_1} + \frac{C}{\varepsilon } \left( \Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{\mathcal {E}_1}\right) \Vert h^{0}(t)\Vert _{\mathcal {E}_1} \nonumber \\&\quad + C \, \frac{1-\alpha (\varepsilon )}{\varepsilon ^2} \Vert h^{1}(t)\Vert _{\mathcal {E}_2} + C \, \frac{1-\alpha (\varepsilon )}{\varepsilon } \Vert h^{1}(t)\Vert ^2_{\mathcal {E}_2}. \end{aligned}$$
(3.6)
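Let us point out the absorption mechanism behind this estimate: under the a priori bound by \(\Delta _0\) and thanks to the continuous embeddings \(\mathcal {H}\hookrightarrow \mathcal {E}_2\hookrightarrow \mathcal {E}_1\), the quadratic term in the first line of (3.6) can be controlled, schematically, as

$$\begin{aligned} \frac{C}{\varepsilon } \left( \Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{\mathcal {E}_1}\right) \Vert h^{0}(t)\Vert _{\mathcal {E}_1} \leqslant \frac{C\,\varepsilon \,\Delta _0}{\varepsilon ^{2}}\, \Vert h^{0}(t)\Vert _{\mathcal {E}_1} \leqslant \frac{\nu _0-\nu }{\varepsilon ^{2}}\,\Vert h^{0}(t)\Vert _{\mathcal {E}_1} \end{aligned}$$

as soon as \(C\,\varepsilon \,\Delta _0 \leqslant \nu _0-\nu \) (the constant \(C\) being allowed to change from line to line).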

Using that the embedding \(\mathcal {H}\hookrightarrow \mathcal {E}_2\) is continuous, recalling that \(h^0(0)=h^{\text {in}}\) and choosing \(\varepsilon _5\) small enough so that \(C \, \varepsilon _5 \, \Delta _0 \leqslant \nu _0-\nu \), we obtain

$$\begin{aligned} \begin{aligned} \Vert h^{0}(t)\Vert _{\mathcal {E}} \lesssim \Vert h^{\text {in}}\Vert _{\mathcal {E}}\,e^{-\frac{\nu }{\varepsilon ^2}t} + {\lambda }_{\varepsilon } \int ^{t}_{0}e^{-\frac{\nu }{\varepsilon ^2}(t-s)}\Vert h^{1}(s)\Vert _{\mathcal {H}}\,\text {d}s +\varepsilon \, {\lambda }_{\varepsilon } \int ^{t}_{0}e^{-\frac{\nu }{\varepsilon ^2}(t-s)}\Vert h^{1}(s)\Vert ^2_{\mathcal {H}}\,\text {d}s. \end{aligned} \end{aligned}$$

We then deduce (3.5) by assuming furthermore that \(\varepsilon _5 \Delta _0 \leqslant 1\). \(\square \)

3.3 Estimating \(h^{1}\)

We now study the equation satisfied by \(h^{1}\). Let us point out that getting estimates on \(h^{1}\) is trickier than in [8]. Indeed, in the latter paper, the idea is to estimate separately \({\mathbf {P}}_0 h^1\) and \((\mathbf {Id} -{\mathbf {P}}_0) h^{1}\) where \({\mathbf {P}}_{0}\) is the projector onto \({\text {Ker}} (\mathcal {G}_{1,\varepsilon })\) defined by

$$\begin{aligned} {\mathbf {P}}_{0}g:=\sum _{i=1}^{d+2}\left( \int _{\mathbb {T}^{d}\times {\mathbb {R}}^{d}}g\,\Psi _{i}\, \text {d}v\,\text {d}x \right) \,\Psi _{i}\,{\mathcal {M}}\end{aligned}$$
(3.7)

where the functions \(\Psi _i\) have been defined in (1.24), and, thanks to the properties of preservation of mass, momentum and energy of the whole equation, one could write that \({\mathbf {P}}_{0} h =0\) so that \({\mathbf {P}}_{0} h^{1}= -{\mathbf {P}}_{0} h^{0}\) and directly get an estimate on \({\mathbf {P}}_{0} h^{1}\) from the one on \(h^{0}\). In our case, the energy is no longer preserved, which induces additional difficulties. However, we keep the same strategy and start by estimating \({\mathbf {P}}_0 h^{1}\) (see Remark 3.4 for a comment on this choice of strategy).

For the sequel, we also introduce

$$\begin{aligned} \mathbb {P}_{0}h=\sum _{i=1}^{d+1}\left( \int _{\mathbb {T}^{d}\times {\mathbb {R}}^{d}}h\,\Psi _{i}\,\text {d}v\, \text {d}x\right) \,\Psi _{i}\,{\mathcal {M}}\,, \quad \Pi _{0}h=\left( \int _{\mathbb {T}^{d}\times {\mathbb {R}}^{d}}h\Psi _{d+2}\,\text {d}v\, \text {d}x\right) \,\Psi _{d+2}\,{\mathcal {M}}.\nonumber \\ \end{aligned}$$
(3.8)

Notice that thanks to the Cauchy-Schwarz inequality in velocity, one can easily prove that \(\mathbb {P}_0 \in {\mathscr {B}}(\mathcal {E},\mathcal {H})\). One can then obtain the following proposition:

Proposition 3.3

Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty . \end{aligned}$$

For \(\varepsilon \in (0,\varepsilon _5)\) (where \(\varepsilon _5\) is defined in Proposition 3.2),

$$\begin{aligned} \Vert {\mathbf {P}}_0 h^1(t)\Vert _\mathcal {E}&\lesssim \Vert h^0(t)\Vert _\mathcal {E}+ \Vert h^{\text {in}}\Vert _\mathcal {E}\, e^{-{{\overline{\lambda }}}_\varepsilon t} \nonumber \\&\quad + \lambda _\varepsilon \int _0^t e^{-{{\overline{\lambda }}}_\varepsilon (t-s)} \left( \Vert h^{0}(s)\Vert _\mathcal {E}+ \Vert (\mathbf {Id} -{\mathbf {P}}_0)h^{1}(s)\Vert _\mathcal {H}\right) \, \text {d}s \nonumber \\&\quad + \varepsilon \lambda _\varepsilon \int _0^t e^{-{{\overline{\lambda }}}_\varepsilon (t-s)} \Vert h^{1}(s)\Vert _\mathcal {H}\, \text {d}s \end{aligned}$$
(3.9)

where \({{\overline{\lambda }}}_\varepsilon = \lambda _\varepsilon + {\text {O}}(1-\alpha (\varepsilon ))\) with \(\lambda _\varepsilon \underset{\varepsilon \rightarrow 0}{\sim } \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}\) defined in Theorem 2.1.

Sketch of the proof

Due to the properties of preservation of mass and vanishing momentum of our equation, we have \(\mathbb {P}_0 h = 0\) which implies that \(\mathbb {P}_0 h^{1}= - \mathbb {P}_0 h^{0}\). Consequently, we easily get an estimate on \(\mathbb {P}_0 h^{1}\) using that \(\mathbb {P}_0 \in {\mathscr {B}}(\mathcal {E},\mathcal {H})\):

$$\begin{aligned} \Vert \mathbb {P}_0 h^{1}(t)\Vert _\mathcal {H}\lesssim \Vert h^{0}(t)\Vert _\mathcal {E}. \end{aligned}$$
(3.10)

It now remains to estimate \(\Pi _0 h^{1}\). To this end, we first notice that

$$\begin{aligned} \Pi _0 h^{1}={\mathbf {P}}_0 h^{1}- \mathbb {P}_0 h^{1}={\mathbf {P}}_0 h - {\mathbf {P}}_0 h^{0}- \mathbb {P}_0 h^{1}= \Pi _0 h-{\mathbf {P}}_0 h^{0}- \mathbb {P}_0 h^{1}\end{aligned}$$

where we used that \({\mathbf {P}}_0 h = \Pi _0 h\) due to the preservation of mass and vanishing momentum. Hence, using (3.5) and (3.10), we only need to estimate \(\Pi _0 h\) to get an estimate on \({\mathbf {P}}_0 h^{1}\). To this end, we start by computing the evolution of \(\Pi _0 h\):

$$\begin{aligned} \partial _t (\Pi _0 h) = \Pi _0 (\mathcal {G}_\varepsilon h) + \frac{1}{\varepsilon }\Pi _0 \mathcal {Q}_{\alpha (\varepsilon )} (h,h). \end{aligned}$$

By direct inspection, using the definition of \(\Pi _0\) given in (3.8) and the dissipation of energy (1.7) (see [3,  Lemmas 4.2 and 4.5]), we obtain: as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \Pi _0 (\mathcal {G}_\varepsilon h) = -{{\overline{\lambda }}}_\varepsilon \Pi _0 h + {\text {O}}\left( \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}\, \Vert (\mathbf {Id}- {\mathbf {P}}_0) h\Vert _{\mathcal {E}}\right) \end{aligned}$$

with

$$\begin{aligned} {{\overline{\lambda }}}_\varepsilon = \lambda _\varepsilon + {\text {O}} (1-\alpha (\varepsilon )) \underset{\varepsilon \rightarrow 0}{\sim } \lambda _\varepsilon . \end{aligned}$$

Similarly, we have by direct computation that

$$\begin{aligned} |\Pi _0 \mathcal {Q}_{\alpha (\varepsilon )} (h,h)| = (1-\alpha (\varepsilon )^2) |\mathcal {D}_{\alpha (\varepsilon )} (h,h)| \Psi _{d+2} \, {\mathcal {M}}\end{aligned}$$

so that, using Minkowski’s inequality to estimate \(\mathcal {D}_{\alpha (\varepsilon )} (h,h)\), we obtain

$$\begin{aligned} \Vert \Pi _0 \mathcal {Q}_{\alpha (\varepsilon )} (h,h)\Vert _\mathcal {E}\lesssim \varepsilon ^2 \Vert h\Vert ^2_\mathcal {E}. \end{aligned}$$
(3.11)

Gathering previous estimates, we are able to deduce that

$$\begin{aligned}&\Vert {\mathbf {P}}_0 h^1(t)\Vert _\mathcal {E}\lesssim \Vert h^0(t)\Vert _\mathcal {E}+ \Vert h^{\text {in}}\Vert _\mathcal {E}e^{-{{\overline{\lambda }}}_\varepsilon t}\\&\quad + \lambda _\varepsilon \int _0^t e^{-{{\overline{\lambda }}}_\varepsilon (t-s)} \left( \Vert h^{0}(s)\Vert _\mathcal {E}+ \Vert (\mathbf {Id}-{\mathbf {P}}_0) h^{1}(s)\Vert _\mathcal {H}\right) \, \text {d}s \\&\quad + \varepsilon \lambda _\varepsilon \int _0^t e^{-{{\overline{\lambda }}}_\varepsilon (t-s)} \left( \Vert h^{0}(s)\Vert _\mathcal {E}^2 + \Vert h^{1}(s)\Vert ^2_\mathcal {H}\right) \, \text {d}s. \end{aligned}$$

With this, inequality (3.9) holds by using \(\varepsilon _5 \Delta _0 \leqslant 1\) from the proof of Proposition 3.2. \(\square \)

Remark 3.4

A natural approach would have been to adapt the method of [8] by applying \({\mathbf {P}}_\varepsilon \) (the projector associated to the eigenvalues \(\lambda _j(\varepsilon )\) for \(j = 1, \dots , d+2\) of \(\mathcal {G}_\varepsilon \) around 0 that have been exhibited in Theorem 2.1) to our equation instead of \({\mathbf {P}}_0\). This implies that one would have had to estimate \(\Pi _\varepsilon h\) where \(\Pi _\varepsilon \) is the projector associated to the energy eigenvalue \(-\lambda _\varepsilon = \lambda _{d+2}(\varepsilon )\) defined in Theorem 2.1. On the one hand, it simplifies the approach because \(\Pi _\varepsilon \mathcal {G}_\varepsilon h = - \lambda _\varepsilon \Pi _\varepsilon h\) by definition. On the other hand, this projector is not explicit, contrary to \(\Pi _0\), and when applying \(\Pi _{\varepsilon }\) to the equation satisfied by \(h\),

$$\begin{aligned} \partial _{t} h= \mathcal {G}_{\varepsilon }h + \frac{1}{\varepsilon }\mathcal {Q}_{\alpha (\varepsilon )}(h,h), \end{aligned}$$

nothing guarantees that \(\Pi _{\varepsilon }\left[ \varepsilon ^{-1}\mathcal {Q}_{\alpha (\varepsilon )}(h,h)\right] \) remains of order 1 with respect to \(\varepsilon \), whereas we have seen in (3.11) that, due to the dissipation of kinetic energy, \(\Pi _{0}\left[ \varepsilon ^{-1}\mathcal {Q}_{\alpha (\varepsilon )}(h,h)\right] \) is actually of order \(\varepsilon \). This explains our choice of strategy.

Let us now focus on the estimate of \((\mathbf {Id} -{\mathbf {P}}_0) h^{1}\). We can proceed similarly to [8], using in particular that \({\mathbf {P}}_0 \mathcal {Q}_1 = 0\). Another crucial point is that the source term \({\mathcal {A}}_\varepsilon h^{0}\) can be bounded in \(\mathcal {H}\) using the fact that \({\mathcal {A}}_\varepsilon \in {\mathscr {B}}(\mathcal {E},\mathcal {H})\) (see Lemma 2.3). Moreover, it is important to mention that the \(\varepsilon ^{-2}\) rate induced by the bound on \({\mathcal {A}}_\varepsilon \) will be counterbalanced by the fact that the semigroup associated with \(\mathcal {B}_{\varepsilon }\) has an exponential decay rate of type \(e^{-\nu t/\varepsilon ^2}\) (see (3.3)). We recall that the Hilbert space \(\mathcal {H}_1\) is defined in (2.3) and is such that

$$\begin{aligned} \Vert \mathcal {Q}_1(g,g)\Vert _\mathcal {H}\lesssim \Vert g\Vert _\mathcal {H}\Vert g\Vert _{\mathcal {H}_1}. \end{aligned}$$
(3.12)

Proposition 3.5

Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty . \end{aligned}$$

For \(\mu \in (0,\mu _\star )\) (where \(\mu _\star \) is defined in (2.1)) and for \(\Delta _0\) small enough, we have that:

$$\begin{aligned} \begin{aligned} \Vert (\mathbf {Id}-{\mathbf {P}}_0) h^1(t)\Vert ^2_{\mathcal {H}} \lesssim \Delta _0^2 \int _0^t e^{-\mu (t-s)} \Vert h^1(s)\Vert _\mathcal {H}^2 \, \text {d}s + \frac{1}{\varepsilon ^2} \int _0^t e^{-\mu (t-s)} \Vert h^1(s)\Vert _\mathcal {H}\Vert h^0(s)\Vert _\mathcal {E}\, \text {d}s. \end{aligned}\nonumber \\ \end{aligned}$$
(3.13)

Sketch of the proof

From (3.2), the fact that \({\mathbf {P}}_0 \mathcal {Q}_1(g,g)=0\) and the fact that \({\mathbf {P}}_0\) commutes with \(\mathcal {G}_{1,\varepsilon }\), we can compute the evolution of \(\Phi (t) := (\mathbf {Id} -{\mathbf {P}}_0) h^{1}\):

$$\begin{aligned} \partial _t \Phi = \mathcal {G}_{1,\varepsilon } \Phi + \frac{1}{\varepsilon }\mathcal {Q}_1(h^1,h^1) + (\mathbf {Id} -{\mathbf {P}}_0) {\mathcal {A}}_\varepsilon h^0. \end{aligned}$$

We now use the hypocoercive norm on \(\mathcal {H}\) for \(\mathcal {G}_{1,\varepsilon }\) introduced in (2.4) and also denote by \(\Phi ^\perp \) the microscopic part of \(\Phi \), namely \(\Phi ^\perp := (\mathbf {Id} - \varvec{\pi }_0) \Phi \) where we recall that \(\varvec{\pi }_0\) is the projection onto the kernel of \({\mathscr {L}}_1\) that has been introduced in (1.23). We compute the evolution of \(\Vert \Phi (t)\Vert ^2_\mathcal {H}\):

$$\begin{aligned} \frac{1}{2} \frac{\text {d}}{\text {d}t} \Vert \Phi (t)\Vert ^2= & {} \langle \mathcal {G}_{1,\varepsilon } \Phi (t),\Phi (t) \rangle _\mathcal {H}+ \frac{1}{\varepsilon }\langle \mathcal {Q}_1(h^1(t),h^1(t)),\Phi ^\perp (t) \rangle _\mathcal {H}\\&+ \langle (\mathbf {Id} -{\mathbf {P}}_0) {\mathcal {A}}_\varepsilon h^0(t), \Phi (t) \rangle _\mathcal {H}. \end{aligned}$$

Notice that we have been able to replace \(\Phi \) by \(\Phi ^\perp \) in the second term due to the conservation laws satisfied by \(\mathcal {Q}_1\) and the fact that \(\varvec{\pi }_0\) is orthogonal in \(\mathcal {H}\). Then, from the properties of the hypocoercive norm (see (2.4)), using (3.12) and the facts that \({\mathbf {P}}_0 \in {\mathscr {B}}(\mathcal {H})\), \({\mathcal {A}}\in {\mathscr {B}}(\mathcal {E},\mathcal {H})\) (from Lemma 2.3) as well as the Cauchy-Schwarz inequality, we obtain that

$$\begin{aligned} \frac{1}{2} \frac{\text {d}}{\text {d}t} \Vert \Phi (t)\Vert _\mathcal {H}^2\leqslant & {} - \frac{\mu _\star }{\varepsilon ^2} \Vert \Phi ^\perp (t)\Vert _{\mathcal {H}_1}^2 - \mu _\star \Vert \Phi (t)\Vert _{\mathcal {H}_1}^2\\&\quad + \frac{C}{\varepsilon } \Vert h^1(t)\Vert _\mathcal {H}\Vert h^1(t)\Vert _{\mathcal {H}_1}\Vert \Phi ^\perp (t)\Vert _\mathcal {H}+ \frac{C}{\varepsilon ^2} \Vert h^0(t)\Vert _\mathcal {E}\Vert \Phi (t)\Vert _\mathcal {H}. \end{aligned}$$
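The third term can then be absorbed thanks to Young's inequality: using \(\Vert \Phi ^\perp (t)\Vert _\mathcal {H}\lesssim \Vert \Phi ^\perp (t)\Vert _{\mathcal {H}_1}\), one may write, for \(\mu \in (0,\mu _\star )\) (a sketch of the absorption step),

$$\begin{aligned} \frac{C}{\varepsilon } \Vert h^1(t)\Vert _\mathcal {H}\Vert h^1(t)\Vert _{\mathcal {H}_1}\Vert \Phi ^\perp (t)\Vert _{\mathcal {H}_1} \leqslant \frac{\mu _\star -\mu }{\varepsilon ^2}\,\Vert \Phi ^\perp (t)\Vert ^2_{\mathcal {H}_1} + \frac{C^2}{4(\mu _\star -\mu )}\,\Vert h^1(t)\Vert ^2_\mathcal {H}\Vert h^1(t)\Vert ^2_{\mathcal {H}_1}. \end{aligned}$$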

Inserting this bound in the previous inequality, we obtain

$$\begin{aligned}\begin{aligned} \frac{1}{2} \frac{\text {d}}{\text {d}t} \Vert \Phi (t)\Vert ^2_\mathcal {H}&\leqslant - \frac{\mu }{\varepsilon ^2} \Vert \Phi ^\perp (t)\Vert ^2_{\mathcal {H}_1} {-} \mu _\star \Vert \Phi (t)\Vert _{\mathcal {H}_1}^2 +C \Vert h^1(t)\Vert ^2_\mathcal {H}\Vert h^1(t)\Vert ^2_{\mathcal {H}_1} {+} \frac{C}{\varepsilon ^2} \Vert h^0(t)\Vert _\mathcal {E}\Vert \Phi (t)\Vert _\mathcal {H}\\&\leqslant - \mu _\star \Vert \Phi (t)\Vert _{\mathcal {H}_1}^2 +C \Vert h^1(t)\Vert ^2_\mathcal {H}\Vert h^1(t)\Vert ^2_{\mathcal {H}_1} + \frac{C}{\varepsilon ^2} \Vert h^0(t)\Vert _\mathcal {E}\Vert \Phi (t)\Vert _\mathcal {H}. \end{aligned}\end{aligned}$$

In the second term, we decompose \(h^1\) into two parts: \(h^1 = {\mathbf {P}}_0 h^1 + \Phi \) and use that \({\mathbf {P}}_0 = {\mathbf {P}}_0^2\) together with the fact that \({\mathbf {P}}_0 \in {\mathscr {B}}(\mathcal {E},\mathcal {H})\) to obtain

$$\begin{aligned} \Vert h^1(t)\Vert ^2_\mathcal {H}\Vert h^1(t)\Vert ^2_{\mathcal {H}_1} \lesssim \Delta _0^2 \left( \Vert h^1(t)\Vert _\mathcal {H}^2 + \Vert \Phi (t)\Vert ^2_{\mathcal {H}_1}\right) . \end{aligned}$$

We can thus conclude the proof by taking \(\Delta _0\) small enough and integrating the above differential inequality. Notice that the inequality stated in the Proposition holds for the equivalent “hypocoercive norm” introduced above and thus also holds for the usual norm on \(\mathcal {H}\) because of the equivalence (uniformly in \(\varepsilon \)) between those two norms. \(\square \)

Combining the estimates from Propositions 3.2 and 3.5, one can obtain the following:

Corollary 3.6

Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty . \end{aligned}$$

For \(\mu \in (0,\mu _\star )\) (where \(\mu _\star \) is defined in (2.4)), for \(\Delta _0\) small enough and for any \(\delta >0\), we have that:

$$\begin{aligned} \begin{aligned} \Vert (\mathbf {Id} -{\mathbf {P}}_0)h^1(t)\Vert ^2_{\mathcal {H}} \lesssim \frac{1}{\delta }\, \Vert h^{\text {in}}\Vert ^2_\mathcal {E}\, e^{-\mu t} + (\Delta _0^2+\delta +\lambda _\varepsilon ) \int _0^t e^{-\mu (t-s)} \Vert h^1(s)\Vert _\mathcal {H}^2 \, \text {d}s. \end{aligned}\nonumber \\ \end{aligned}$$
(3.14)

Remark 3.7

The fact that we are able to obtain a multiplicative constant that can be chosen small in front of the second term is very important to recover a decay for \(h^1\). Indeed, in Proposition 3.3, in the estimate of \({\mathbf {P}}_0h^1\), the term

$$\begin{aligned} \lambda _\varepsilon \int _0^t e^{-{{\overline{\lambda }}}_\varepsilon (t-s)} \Vert (\mathbf {Id} - {\mathbf {P}}_0)h^1(s)\Vert _\mathcal {H}\, \text {d}s \end{aligned}$$

is problematic when applying the Gronwall lemma if one hopes to recover some decay in time, but the extra small constant that appears in the estimate of \((\mathbf {Id} -{\mathbf {P}}_0)h^1\) in (3.14) allows us to circumvent this difficulty.

In the end, we are able to prove the following result:

Corollary 3.8

Let \(r \in (0,1)\). Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty \end{aligned}$$

where \(\Delta _0\) is small enough so that the conclusion of Corollary 3.6 holds. There exist \(\varepsilon _6 \in (0,\varepsilon _5)\) (where \(\varepsilon _5\) is defined in Proposition 3.2) and \(\lambda _6 \in (0, \lambda _4)\) (where \(\lambda _4\) is defined in Lemma 2.7) such that for any \(\varepsilon \in (0,\varepsilon _6)\) and any \(\lambda _0 \in [0,\lambda _6)\) (where \(\lambda _0\) is defined in Assumption 1.1),

$$\begin{aligned} \Vert h^1(t)\Vert _\mathcal {H}\leqslant C \, \Vert h^{\text {in}}\Vert _\mathcal {H}\, e^{-(1-r)\overline{\lambda }_\varepsilon t} \end{aligned}$$

where \({{\overline{\lambda }}}_\varepsilon \) has been introduced in Proposition 3.3 and the constant C depends on r, \(\Delta _0\), \(\mu _\star \) (defined in (2.4)) and \(\nu _0\) (defined in Lemma 2.3).

3.4 Estimates on the Kinetic Problem

Combining the previous corollary with Proposition 3.2, we are able to get our final a priori estimates on h in the space \(\mathcal {E}\):

Proposition 3.9

Let \(r \in (0,1)\). Assume that \(h^{0}\in \mathcal {E}\), \(h^{1}\in \mathcal {H}\) are such that

$$\begin{aligned} \sup _{t\geqslant 0}\big (\Vert h^{0}(t)\Vert _{\mathcal {E}} + \Vert h^{1}(t)\Vert _{{{\mathcal {H}}}} \big ) \leqslant \Delta _0 <\infty \end{aligned}$$

where \(\Delta _0\) is small enough so that the conclusion of Corollary 3.6 holds. There exist \(\varepsilon ^\dagger \in (0,\varepsilon _6)\) and \(\lambda ^\dagger \in (0, \lambda _6)\) (where \(\varepsilon _6\) and \(\lambda _6\) are defined in Corollary 3.8) such that for any \(\varepsilon \in (0,\varepsilon ^\dagger )\) and any \(\lambda _0 \in [0,\lambda ^\dagger )\) (where \(\lambda _0\) is defined in Assumption 1.1),

$$\begin{aligned} \Vert h(t)\Vert _\mathcal {E}\leqslant C \, \Vert h^{\text {in}}\Vert _\mathcal {E}\, e^{-(1-r) \lambda _\varepsilon t} \qquad \text {and} \quad \int _0^t \Vert h(s)\Vert _{\mathcal {E}_1} \, \text {d}s \leqslant C \, \Vert h^{\text {in}}\Vert _\mathcal {E}\min \left\{ 1 + t, 1 + \frac{1}{\lambda _\varepsilon } \right\} \end{aligned}$$

where \(\lambda _\varepsilon \underset{\varepsilon \rightarrow 0}{\sim } (1-\alpha (\varepsilon ))/\varepsilon ^2\) has been defined in Theorem 2.1 and the constant C depends on r, \(\Delta _0\), \(\mu _\star \) (defined in (2.1)) and \(\nu _0\) (defined in Lemma 2.3).

Remark 3.10

Notice that for a fixed \(\varepsilon >0\), the second a priori estimate shows that \(h=h_\varepsilon \) belongs to the space \(L^1([0,\infty ),\mathcal {E}_1)\). If one is interested in getting bounds on the family \(\{h_\varepsilon \}_\varepsilon \), we obtain the following: if \(\lambda _0>0\) (in Assumption 1.1), the family is bounded in \(L^1([0,\infty ),\mathcal {E}_1)\), and if \(\lambda _0=0\), then for any \(T>0\), the family is bounded in \(L^1([0,T),\mathcal {E}_1)\).

Thanks to the above a priori estimates, we can prove Theorem 1.2 by introducing a suitable iterative scheme that is stable and convergent. We refer to [3,  Sect. 5] for the details of the proof. We can actually prove the following more precise estimates (which will be useful in what follows) on \(h^0_\varepsilon \) and \(h^1_\varepsilon \) that are respectively solutions to (3.1) and (3.2):

$$\begin{aligned} \Vert h^0_\varepsilon \Vert _{L^\infty ([0,\infty )\,;\,\mathcal {E})} \lesssim 1 \quad \text {and} \quad \Vert h^0_\varepsilon \Vert _{L^1([0,\infty )\,;\,\mathcal {E}_1)} \lesssim \varepsilon ^2 \end{aligned}$$
(3.15)

as well as

$$\begin{aligned} \Vert h^1_\varepsilon \Vert _{L^\infty ([0,\infty )\,;\,\mathcal {H})} \lesssim 1 \quad \text {and} \quad \Vert h^1_\varepsilon \Vert _{L^2([0,\infty )\,;\,\mathcal {H}_1)} \lesssim 1 \end{aligned}$$
(3.16)

where we recall that the spaces \(\mathcal {H}\) and \(\mathcal {H}_1\) are respectively defined in (2.2) and (2.3). Notice that in the previous inequalities, the multiplicative constants only involve quantities related to the initial data of the problem and are independent of \(\varepsilon \).

4 Derivation of the Fluid Limit System

The Cauchy theory developed in the previous results gives all the a priori estimates that will allow us to prove Theorem 1.4. To this end, we make additional assumptions in the definition of the spaces \(\mathcal {E}\) and \(\mathcal {E}_1\); namely, in this section, those spaces are defined through:

$$\begin{aligned} \mathcal {E}:={\mathbb {W}^{k,1}_{v}\mathbb {W}^{m,2}_{x}}(\langle v\rangle ^{q}), \quad \mathcal {E}_{1}:={\mathbb {W}^{k,1}_{v}\mathbb {W}^{m,2}_{x}}(\langle v\rangle ^{q+1}) \quad \text {with} \quad m > d, \quad m-1 \geqslant k \geqslant 1, \quad q \geqslant 5. \end{aligned}$$
(4.1)

We assume that Assumption 1.1 is met, consider \(\varepsilon \), \(\lambda _0\) and \(\eta _0\) sufficiently small so that the conclusion of Theorem 1.2 holds in those spaces, and consider a family \(\{h_\varepsilon \}_\varepsilon \) of solutions to (1.15) constructed in this theorem that splits as \(h_\varepsilon = h^{0}_\varepsilon +h^{1}_\varepsilon \) with \(h^{0}_\varepsilon \) and \(h^{1}_\varepsilon \) defined in Sect. 3. We also fix \(T>0\) for the rest of the section.

4.1 Weak Convergence

We start with the following lemma, which in particular tells us that the microscopic part of \(h_\varepsilon \) vanishes in the limit \(\varepsilon \rightarrow 0\):

Lemma 4.1

For any \(0 \leqslant t_1 \leqslant t_2 \leqslant T\), there holds:

$$\begin{aligned} \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h_\varepsilon (\tau )\Vert _{\mathcal {E}} \, \text {d}\tau \lesssim \varepsilon \sqrt{t_2-t_1}, \end{aligned}$$
(4.2)

where we recall that \(\varvec{\pi }_0\) is the projection onto the kernel of \({\mathscr {L}}_1\) defined in (1.23).

Proof

We first remark that

$$\begin{aligned} \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h_\varepsilon (\tau )\Vert _{\mathcal {E}} \, \text {d}\tau \lesssim \left( \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h^0_\varepsilon (\tau )\Vert ^2_{\mathcal {E}} \, \text {d}\tau \right) ^{1/2} \sqrt{t_2-t_1} \\ + \left( \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h^1_\varepsilon (\tau )\Vert ^2_{\mathcal {H}_1} \, \text {d}\tau \right) ^{1/2} \sqrt{t_2-t_1}. \end{aligned}$$

The first term is estimated thanks to (3.15), which gives:

$$\begin{aligned} \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h^0_\varepsilon (\tau )\Vert ^2_{\mathcal {E}} \, \text {d}\tau \lesssim \Vert (\mathbf {Id} - \varvec{\pi }_0)h^0_\varepsilon \Vert _{L^\infty ((0,T) \, ; \, \mathcal {E})} \Vert (\mathbf {Id} - \varvec{\pi }_0)h^0_\varepsilon \Vert _{L^1((0,T) \, ; \, \mathcal {E}_1)} \lesssim \varepsilon ^2. \end{aligned}$$

Concerning the second one, we perform similar computations as in the proof of Proposition 3.5. We recall that \(h^1_\varepsilon \) solves (3.2) and consider a hypocoercive norm \(\Vert \cdot \Vert _\mathcal {H}\) on \(\mathcal {H}\) (see (2.4)). We then have for \(\mu \in (0,\mu _\star )\):

$$\begin{aligned} \frac{1}{2} \frac{\text {d}}{\text {d}t} \Vert h^1_\varepsilon (t)\Vert _\mathcal {H}^2\leqslant & {} - \frac{\mu }{\varepsilon ^2} \Vert (\mathbf {Id}- \varvec{\pi }_0) h^1_\varepsilon (t)\Vert _{\mathcal {H}_1}^2 - \mu _\star \Vert h^1_\varepsilon (t)\Vert _{\mathcal {H}_1}^2 \\&\quad +\,C \Vert h^1_\varepsilon (t)\Vert ^2_\mathcal {H}\Vert h^1_\varepsilon (t)\Vert ^2_{\mathcal {H}_1} + \frac{C}{\varepsilon ^2} \Vert h^0_\varepsilon (t)\Vert _\mathcal {E}\Vert h^1_\varepsilon (t)\Vert _\mathcal {H}\end{aligned}$$

from which we deduce that

$$\begin{aligned}&\frac{1}{\varepsilon ^2} \int _{t_1}^{t_2} \Vert (\mathbf {Id} - \varvec{\pi }_0)h^1_\varepsilon (\tau )\Vert ^2_{\mathcal {H}_1} \, \text {d}\tau \lesssim \Vert h^1_\varepsilon (t_1)\Vert ^2_{\mathcal {H}} \\&\quad + \int _{t_1}^{t_2} \Vert h^1_\varepsilon (\tau )\Vert ^2_\mathcal {H}\Vert h^1_\varepsilon (\tau )\Vert ^2_{\mathcal {H}_1} \, \text {d}\tau + \frac{1}{\varepsilon ^2}\int _{t_1}^{t_2} \Vert h^0_\varepsilon (\tau )\Vert _\mathcal {E}\Vert h^1_\varepsilon (\tau )\Vert _\mathcal {H}\, \text {d}\tau \lesssim 1 \end{aligned}$$

where we used (3.15) and (3.16) to get the last estimate. Therefore, as for \(h^0_\varepsilon \), one has

$$\begin{aligned} \int _{t_1}^{t_2}\Vert (\mathbf {Id} - \varvec{\pi }_0)h^1_\varepsilon (\tau )\Vert ^2_{\mathcal {H}_1} \, \text {d}\tau \lesssim \varepsilon ^2 \end{aligned}$$

and this yields the desired estimate. \(\square \)

Using estimates (3.15), (3.16) and (4.2), one can prove the following result of weak convergence (we refer to [3,  Theorem 6.4] for more details on the proof):

Theorem 4.2

Up to extraction of a subsequence, one has

$$\begin{aligned} {\left\{ \begin{array}{ll} \left\{ h^{0}_{\varepsilon }\right\} _{\varepsilon } \text {converges to} ~0 ~\text {strongly in } L^{1}((0,T);\,{\mathcal {E}_1}), \\ \left\{ h^{1}_{\varepsilon }\right\} _{\varepsilon } \text {converges to }~\varvec{h}~\text { weakly in } L^{2}\left( (0,T)\,;\mathcal {H}\right) , \end{array}\right. }\end{aligned}$$
(4.3)

where \(\varvec{h}=\varvec{\pi }_{0}(\varvec{h})\). In particular, there exist

$$\begin{aligned}&\varrho \in L^{2}\left( (0,T);\,\mathbb {W}^{m,2}_{x}(\mathbb {T}^{d})\right) , \quad u \in L^{2}\left( (0,T);\;\left( \mathbb {W}^{m,2}_{x}(\mathbb {T}^{d})\right) ^{d}\right) ,\\&\theta \in L^{2}\left( (0,T);\,\mathbb {W}^{m,2}_x(\mathbb {T}^{d})\right) , \end{aligned}$$

such that

$$\begin{aligned} \varvec{h}(t,x,v)=\left( \varrho (t,x)+u(t,x)\cdot v + \frac{1}{2}\theta (t,x)(|v|^{2}-d\vartheta _{1})\right) {\mathcal {M}}(v) \end{aligned}$$
(4.4)

where \({\mathcal {M}}\) is the Maxwellian distribution introduced in (1.14).

Remark 4.3

Recall that \((\varrho ,u,\theta )\) can be expressed in terms of \(\varvec{h}\) through the following equalities:

$$\begin{aligned} \varrho (t,x)= & {} \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) \, \text {d}v, \quad u(t,x) = \frac{1}{\vartheta _1} \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) v \, \text {d}v, \quad \nonumber \\ \theta (t,x)= & {} \int _{{\mathbb {R}}^d} \varvec{h}(t,x,v) \frac{|v|^2-d\vartheta _1}{\vartheta _1^2d}\, \text {d}v. \end{aligned}$$
(4.5)
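These formulas can be checked directly from the form (4.4) of \(\varvec{h}\). As a sketch, assuming as in (1.14) that \({\mathcal {M}}\) has unit mass, zero momentum and temperature \(\vartheta _1\) (so that \(\int _{{\mathbb {R}}^d} v\otimes v \,{\mathcal {M}}\, \text {d}v=\vartheta _1\,\mathbf {Id}\) and \(\int _{{\mathbb {R}}^d} (|v|^2-d\vartheta _1)^2\,{\mathcal {M}}\, \text {d}v=2d\vartheta _1^2\)), one computes for instance

$$\begin{aligned} \int _{{\mathbb {R}}^d} \varvec{h}\, v \, \text {d}v = \vartheta _1\, u \quad \text {and} \quad \int _{{\mathbb {R}}^d} \varvec{h}\, \frac{|v|^2-d\vartheta _1}{\vartheta _1^2d}\, \text {d}v = \frac{\theta }{2\vartheta _1^2d}\int _{{\mathbb {R}}^d}(|v|^2-d\vartheta _1)^2\,{\mathcal {M}}\, \text {d}v= \theta , \end{aligned}$$

the contributions of the other terms of \(\varvec{h}\) vanishing by parity or by the choice of the moments of \({\mathcal {M}}\).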

4.2 Limit System

As mentioned in the introduction, the path that we use to derive the limit system follows the same lines as in the elastic case. The main idea is to write equations satisfied by averages in velocity of \(h_\varepsilon \) and to study the convergence of each term. It is worth mentioning that with the notion of weak convergence presented above at hand, we can adopt an approach which is reminiscent of the program established in [5, 6] but simpler. In particular, we can adapt some of the main ideas of [9] regarding the delicate convergence of nonlinear terms. The detailed computations and arguments are included in [3,  Sect. 6]; we only mention the main steps and key points of the proof hereafter. In what follows, we will use the following notation: for \(g=g(x,v)\),

$$\begin{aligned} \langle g \rangle := \int _{{\mathbb {R}}^d} g(\cdot ,v) \, \text {d}v. \end{aligned}$$

4.2.1 Local Conservation Laws

We introduce

$$\begin{aligned} \varvec{A} (v) := v \otimes v - \frac{1}{d} |v|^2 \mathbf {Id} \quad \text {and} \quad p_\varepsilon := \Big \langle \frac{1}{d} |v|^2 h_\varepsilon \Big \rangle \end{aligned}$$
(4.6)

so that \( \Big \langle v \otimes v \, h_\varepsilon \Big \rangle = \Big \langle \varvec{A} \, h_\varepsilon \Big \rangle + p_\varepsilon \, \mathbf {Id}. \) We multiply equation (1.15) by 1, \(v_{i}\), \(\frac{1}{2}\,|v|^{2}\) and integrate in velocity to obtain

$$\begin{aligned}&\partial _{t}\Big \langle h_{\varepsilon }\Big \rangle +\frac{1}{\varepsilon }\text {div}_{x}\Big \langle v\,h_{\varepsilon }\Big \rangle =0, \end{aligned}$$
(4.7a)
$$\begin{aligned}&\partial _{t} \Big \langle v\,h_{\varepsilon }\Big \rangle +\frac{1}{\varepsilon } \text {Div}_{x}\Big \langle \varvec{A} \, h_{\varepsilon } \Big \rangle + \frac{1}{\varepsilon }\nabla _x p_\varepsilon =\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\Big \langle v\,h_{\varepsilon }\Big \rangle , \end{aligned}$$
(4.7b)
$$\begin{aligned}&\partial _{t}\Big \langle \tfrac{1}{2}|v|^{2}h_{\varepsilon }\Big \rangle +\frac{1}{\varepsilon }\text {div}_{x}\,\Big \langle \tfrac{1}{2}|v|^{2}v\,h_{\varepsilon }\Big \rangle \,=\frac{1}{\varepsilon ^{3}} {\mathscr {J}}_{\alpha (\varepsilon )}(f_{\varepsilon },f_{\varepsilon })+\frac{2(1-\alpha (\varepsilon ))}{\varepsilon ^{2}}\Big \langle \tfrac{1}{2}|v|^{2}h_{\varepsilon }\Big \rangle ,\nonumber \\ \end{aligned}$$
(4.7c)

where we recall that \(f_\varepsilon =G_{\alpha (\varepsilon )} + \varepsilon h_\varepsilon \) and where we have introduced

$$\begin{aligned} {\mathscr {J}}_{\alpha }(f,f):=\int _{{\mathbb {R}}^{d}}\left[ \mathcal {Q}_{\alpha }(f,f)-\mathcal {Q}_{\alpha }(G_{\alpha },G_{\alpha })\right] \,|v|^{2}\,\text {d}v. \end{aligned}$$

The goal is to study the convergence of each term in (4.7a)–(4.7b)–(4.7c). A first important remark to address this point is that thanks to the estimates recalled in (3.15)–(3.16), one can prove that for any function \(\psi = \psi (v)\) satisfying the bound \(|\psi (v)| \lesssim \langle v \rangle ^q\), we have the following convergence in the distributional sense:

$$\begin{aligned} \langle \psi \, h_\varepsilon \rangle \xrightarrow [\varepsilon \rightarrow 0]{} \langle \psi \, \varvec{h} \rangle \quad \text {in} \quad {\mathscr {D}}'_{t,x} \end{aligned}$$
(4.8)

where \(\varvec{h}\) is defined in (4.4) (see [3,  Lemma 6.6]).

Roughly speaking, the convergence of the terms in the LHS of (4.7a)–(4.7b)–(4.7c) is treated as in the elastic case. The RHS is going to be handled as a source term which takes into account the drift term and the dissipation of kinetic energy at the microscopic level. In this regard, using (4.8), we first remark that under Assumption 1.1,

$$\begin{aligned} \frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\Big \langle v\,h_{\varepsilon }\Big \rangle \xrightarrow [\varepsilon \rightarrow 0]{} {\vartheta _{1}}\lambda _0u \quad \text {in} \quad {{\mathscr {D}}'_{t,x}}, \end{aligned}$$
(4.9)

since \(\lambda _0=\lim _{\varepsilon \rightarrow 0^{+}}\varepsilon ^{-2}(1-\alpha (\varepsilon ))\) and from the definition of \(u\) in (4.5). We then present a convergence result for \(\varepsilon ^{-3}{\mathscr {J}}_{\alpha (\varepsilon )}(f_\varepsilon ,f_\varepsilon )\) in the following lemma, whose proof presents no major difficulty and is thus omitted; we just mention that it is based on Assumption 1.1 and on the estimates on \(h_\varepsilon \) coming from (3.15)–(3.16), and that it involves the dissipation of energy (1.7). We refer to [3,  Lemma 6.9] for more details.

Lemma 4.4

It holds that

$$\begin{aligned} \frac{1}{\varepsilon ^{3}}{\mathscr {J}}_{\alpha (\varepsilon )}(f_{\varepsilon },f_{\varepsilon }) \xrightarrow [\varepsilon \rightarrow 0]{} {{\mathcal {J}}}_{0} \quad \text {in} \quad {\mathscr {D}}'_{t,x}, \end{aligned}$$

where

$$\begin{aligned} {{\mathcal {J}}}_{0}(t,x):=-\lambda _{0}\,{\bar{c}}\,\vartheta _{1}^{\frac{3}{2}}\left( \varrho (t,x)+\frac{3}{4}\vartheta _{1}\,\theta (t,x)\right) \end{aligned}$$

for some positive constant \({\bar{c}}\) depending only on the angular kernel \(b(\cdot )\) and d and where \(\lambda _0\) is defined in Assumption 1.1.

4.2.2 Incompressibility Condition and Boussinesq Relation

Using (4.8) in equations (4.7a)–(4.7b), together with the fact that the restitution coefficient satisfies Assumption 1.1, we can easily obtain the incompressibility condition as well as the Boussinesq relation:

$$\begin{aligned} {\text {div}}_x u = 0 \quad \text {and} \quad \nabla _x (\varrho + \vartheta _1 \theta ) = 0 \end{aligned}$$
(4.10)

where we recall that \(\varrho \), u and \(\theta \) are defined in (4.5). Using furthermore that the global mass of \(h_\varepsilon \) vanishes (see (1.20)), we have that

$$\begin{aligned} 0=\int _{\mathbb {T}^d \times {\mathbb {R}}^d}h_\varepsilon (t,x,v) \, \text {d}v \, \text {d}x \xrightarrow [\varepsilon \rightarrow 0]{} \int _{\mathbb {T}^d} \varrho (t,x) \, \text {d}x \quad \text {in} \quad {\mathscr {D}}'_t \end{aligned}$$

and thus that \(\int _{\mathbb {T}^d} \varrho (t,x) \, \text {d}x =0\). This implies the following strengthened Boussinesq relation: for almost every \((t,x) \in (0,T) \times \mathbb {T}^d\),

$$\begin{aligned} \varrho + \vartheta _1(\theta - E) = 0 \quad \text {with} \quad E=E(t) := \int _{\mathbb {T}^d} \theta (t,x) \, \text {d}x. \end{aligned}$$
(4.11)
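Indeed, the second relation in (4.10) means that \(\varrho +\vartheta _1\theta \) depends only on \(t\); integrating over \(\mathbb {T}^d\) (with the normalization \(|\mathbb {T}^d|=1\), which we use implicitly) and recalling that \(\int _{\mathbb {T}^d}\varrho \, \text {d}x=0\), we get, as a sketch of the argument,

$$\begin{aligned} \varrho (t,x)+\vartheta _1\theta (t,x)=\int _{\mathbb {T}^d}\big (\varrho (t,y)+\vartheta _1\theta (t,y)\big )\, \text {d}y = \vartheta _1 E(t), \end{aligned}$$

which is exactly (4.11).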

Remark 4.5

Notice here that the derivation of the strong Boussinesq relation \(\varrho + \vartheta _1 \theta =0\) is not as straightforward as in the elastic case. In the elastic case, the classical Boussinesq relation \(\nabla _x (\varrho + \vartheta _1 \theta ) = 0\) straightforwardly implies the strong form of the Boussinesq relation because the two functions \(\varrho \) and \(\theta \) have zero spatial averages. This cannot be deduced directly in the granular context due to the dissipation of energy, and we will see later on how to obtain it (see Proposition 4.8).

4.2.3 Equations of motion and temperature

In order to identify the equations satisfied by u and \(\theta \), as in the elastic case, we start by studying the convergence of quantities that are related to

$$\begin{aligned} \varrho _\varepsilon (t,x):= & {} \int _{{\mathbb {R}}^d} h_\varepsilon (t,x,v) \, \text {d}v, \quad u_\varepsilon (t,x) := \frac{1}{\vartheta _1} \int _{{\mathbb {R}}^d} h_\varepsilon (t,x,v) v \, \text {d}v,\nonumber \\&\quad \theta _\varepsilon (t,x) := \int _{{\mathbb {R}}^d} h_\varepsilon (t,x,v) \frac{|v|^2-d\vartheta _1}{\vartheta _1^2d}\, \text {d}v. \end{aligned}$$
(4.12)

More precisely, we investigate the convergence of

$$\begin{aligned} \varvec{u}_\varepsilon := \exp \left( -t \frac{1-\alpha (\varepsilon )}{\varepsilon ^2}\right) \mathcal {P} u_\varepsilon \quad \text {and} \quad \varvec{\theta }_\varepsilon := \Big \langle \tfrac{1}{2} (|v|^2-(d+2)\vartheta _1) h_\varepsilon \Big \rangle \end{aligned}$$

where \(\mathcal {P}\) is the Leray projection on divergence-free vector fields. Notice that if we compare our approach to the elastic case, we have added the exponential term in the definition of \(\varvec{u}_\varepsilon \) in order to absorb the term on the RHS of (4.7b). We compute the evolution of \(\varvec{u}_\varepsilon \) and \(\varvec{\theta }_\varepsilon \) (by applying the Leray projector \(\mathcal {P}\) to (4.7b) and by making an appropriate linear combination of (4.7a) and (4.7c)) and obtain:

$$\begin{aligned} \partial _{t} \varvec{u}_{\varepsilon } =-\exp \left( -t\frac{1-\alpha (\varepsilon )}{\varepsilon ^{2}}\right) \mathcal {P}\left( \vartheta _{1}^{-1}\text {Div}_{x}\Big \langle \tfrac{1}{\varepsilon }\varvec{A}\,h_{\varepsilon } \Big \rangle \right) \end{aligned}$$
(4.13)

where \(\varvec{A}\) is defined in (4.6) and

$$\begin{aligned}&\partial _{t}\varvec{\theta }_{\varepsilon }+\frac{1}{\varepsilon }\text {div}_{x}\Big \langle \varvec{b}\,h_{\varepsilon }\Big \rangle =\frac{1}{\varepsilon ^{3}}{\mathscr {J}}_{\alpha (\varepsilon )}(f_{\varepsilon },f_{\varepsilon })+\frac{2(1-\alpha (\varepsilon ))}{\varepsilon ^{2}}\Big \langle \tfrac{1}{2}|v|^{2}h_{\varepsilon }\Big \rangle \nonumber \\&\quad \text {with} \quad \varvec{b}(v):=\frac{1}{2}\left( |v|^{2}-(d+2)\vartheta _{1}\right) . \end{aligned}$$
(4.14)

The study of the limit \(\varepsilon \rightarrow 0\) in those equations is more favorable because, compared to (4.7a)–(4.7b)–(4.7c), the gradient term in (4.7b) has been eliminated thanks to the Leray projector, and also because \(\varvec{A}\) and \(\varvec{b}\) belong to the range of \(\mathbf {Id} - \varvec{\pi }_0\), so that thanks to Lemma 4.1, we know that the quantities \(\varepsilon ^{-1} \text {Div}_{x}\Big \langle \varvec{A}\,h_{\varepsilon } \Big \rangle \) and \(\varepsilon ^{-1}\text {div}_{x}\Big \langle \varvec{b}\,h_{\varepsilon }\Big \rangle \) are bounded in \(\mathbb {W}^{m-1,2}_x\). Then, applying a refined version of the Aubin-Lions lemma [20,  Corollary 4], we are able to prove that up to the extraction of a subsequence, \(\{\varvec{u}_\varepsilon \}_\varepsilon \) and \(\{\varvec{\theta }_\varepsilon \}_\varepsilon \) converge strongly in \(L^1 \left( (0,T) \,;\, \mathbb {W}^{m-1,2}_x\right) \) respectively towards

$$\begin{aligned} \mathcal {P} u = u \quad \text {and} \quad \varvec{\theta }_0 := \Big \langle \tfrac{1}{2} (|v|^2-(d+2)\vartheta _1) \varvec{h} \Big \rangle = \frac{d \vartheta _1^2}{2} E - \frac{d+2}{2} \vartheta _1 \varrho \end{aligned}$$
(4.15)

where we used the incompressibility condition and the strong Boussinesq relation given in (4.10)–(4.11). We refer to [3, Lemma 6.10] for more details.
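For the reader's convenience, the last identity in (4.15) can be checked directly from the form (4.4) of \(\varvec{h}\): a sketch of the computation, using the Gaussian moments \(\int _{{\mathbb {R}}^d}|v|^2{\mathcal {M}}\, \text {d}v=d\vartheta _1\) and \(\int _{{\mathbb {R}}^d}|v|^4{\mathcal {M}}\, \text {d}v=d(d+2)\vartheta _1^2\) (under the unit-mass normalization of \({\mathcal {M}}\)), gives

$$\begin{aligned} \Big \langle \tfrac{1}{2} (|v|^2-(d+2)\vartheta _1) \varvec{h} \Big \rangle = \frac{d\vartheta _1^2}{2}\,\theta -\vartheta _1\,\varrho = \frac{d \vartheta _1^2}{2}\, E - \frac{d+2}{2}\, \vartheta _1\, \varrho , \end{aligned}$$

where the last equality uses (4.11) in the form \(\theta =E-\varrho /\vartheta _1\).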

4.2.4 About Initial Data

Recall that, in Theorem 4.2, the convergence of \({h}_{\varepsilon }\) to \(\varvec{h}\) given by (4.4) is known to hold only for a subsequence and, in particular, at initial time, different subsequences could converge towards different initial data, and therefore \((\varrho ,u,\theta )\) could be different solutions to the same system. In Theorem 1.4, the initial datum is prescribed by ensuring the convergence of \(\varvec{\pi }_0h_{\text {in}}^{\varepsilon }\) towards a single possible limit, where \(\varvec{\pi }_0\) is defined in (1.23) (recall that the initial data for \((\varrho ,u,\theta )\) is defined in (1.31)).

Using Lemma 4.1, one can apply the Arzelà-Ascoli theorem to get that \(\mathcal {P} u_\varepsilon \) and \(\varvec{\theta }_\varepsilon \) converge strongly in \(\mathcal {C}\big ([0,T] \, ; \, \mathbb {W}^{m-1,2}_x\big )\) towards respectively \(u\) and \(\varvec{\theta }_0\) defined in (4.15), which also belong to \(\mathcal {C}\big ([0,T] \, ; \, \mathbb {W}^{m-1,2}_x\big )\). We refer to [3,  Proposition 6.19] for more details.

4.2.5 Limit Equations

To get the limit equations, we need to study the convergence of the terms \(\varepsilon ^{-1} \mathcal {P} \text {Div}_{x}\Big \langle \varvec{A}\,h_{\varepsilon } \Big \rangle \) and \(\varepsilon ^{-1}\text {div}_{x}\Big \langle \varvec{b}\,h_{\varepsilon }\Big \rangle \) in (4.13) and (4.14). To this end, our approach relies on arguments coming from [9] (in particular, the tricky convergence of the nonlinear terms is treated thanks to a compensated compactness argument coming from [13]), the main difference being that we force the elastic collision operator to appear in our computations; we thus introduce terms that involve differences between the elastic and the inelastic collision operators. Those remainder terms vanish in the limit \(\varepsilon \rightarrow 0\) thanks to Assumption 1.1. We refer to [3,  Lemmas 6.12-6.13-6.14] for more details. In the end, writing \(\mathcal {P}\text {Div}_{x}(u \otimes u)=\text {Div}_{x}(u \otimes u) + \vartheta _1^{-1}\nabla _{x} p\) (see [14,  Proposition 1.6]), we obtain the following result:

Proposition 4.6

There are some constants \(\nu >0\) and \(\gamma >0\) such that the limit velocity \(u=u(t,x)\) in (4.4) satisfies

$$\begin{aligned} \partial _{t}u-\frac{\nu }{\vartheta _1}\,\Delta _{x}u + \vartheta _{1} \text {Div}_{x} \left( u\otimes u\right) +\nabla _{x}p=\lambda _0u \end{aligned}$$
(4.16)

where \(\lambda _0\) is defined in Assumption 1.1, while the limit temperature \(\theta =\theta (t,x)\) in (4.4) satisfies

$$\begin{aligned} \partial _{t}\theta -\frac{\gamma }{\vartheta _{1}^{2}}\,\Delta _{x}\theta + \vartheta _{1}\,u\cdot \nabla _{x}\theta =\frac{2}{(d+2)\vartheta _{1}^{2}}{{\mathcal {J}}}_{0}+\frac{2d\lambda _0}{d+2}E+\frac{2}{d+2}\frac{\text {d}}{\text {d}t}E, \end{aligned}$$
(4.17)

where we recall that \({{\mathcal {J}}}_0\) is defined in Lemma 4.4 and E is defined in (4.11).

Remark 4.7

The viscosity and heat conductivity coefficients \(\nu \) and \(\gamma \) are explicit and fully determined by the elastic linearized collision operator \({\mathscr {L}}_1\) (see [3,  Lemma C.1]). Notice also that, due to (4.10), \(\text {Div}_{x}(u\otimes u)=\left( u \cdot \nabla _{x}\right) u\) and (4.16) is nothing but a reinforced Navier-Stokes equation associated to a divergence-free source term given by \(\lambda _0u\), which can be interpreted as an energy supply/self-consistent force acting on the hydrodynamical system because of the self-similar rescaling.

To end the identification of the limit equations, we go back to the strengthened Boussinesq relation (4.11) and prove the following result:

Proposition 4.8

It holds that

$$\begin{aligned} E(t)=0, \quad t \in [0,T], \end{aligned}$$

where \(E=E(t)\) is defined in (4.11). Consequently, the limiting temperature \(\theta (t,x)\) in (4.4) satisfies

$$\begin{aligned} \partial _{t}\,\theta -\frac{\gamma }{\vartheta _{1}^{2}}\,\Delta _{x}\theta + \vartheta _{1}\,u\cdot \nabla _{x}\theta =\frac{\lambda _{0}\,{\bar{c}}}{2(d+2)}\sqrt{\vartheta _{1}}\,\theta . \end{aligned}$$
(4.18)

where \(\gamma \) is defined in Proposition 4.6, \(\lambda _0\) in Assumption 1.1 and \({{\bar{c}}}\) in Lemma 4.4. Moreover, the strong Boussinesq relation holds true:

$$\begin{aligned} \varrho +\vartheta _{1}\theta =0 \quad \text {on} \quad [0,T] \times \mathbb {T}^d. \end{aligned}$$
(4.19)

Proof

Using Lemma 4.4 and averaging equation (4.17) in position, it is easy to prove that

$$\begin{aligned} \frac{\text {d}}{\text {d}t} E(t) ={{\bar{c}}}_0 \, E(t) \end{aligned}$$

for some constant \({{\bar{c}}}_0 \in {\mathbb {R}}\).
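As a sketch of the computation behind this identity (under the normalization \(|\mathbb {T}^d|=1\), which we use implicitly): integrating (4.17) over \(\mathbb {T}^d\), the diffusion and transport terms vanish by periodicity and the incompressibility condition (4.10), while \(\int _{\mathbb {T}^d}\varrho \, \text {d}x=0\) gives \(\int _{\mathbb {T}^d}{{\mathcal {J}}}_{0}\, \text {d}x=-\frac{3}{4}\lambda _0\,{\bar{c}}\,\vartheta _1^{5/2}E\), so that

$$\begin{aligned} \frac{d}{d+2}\,\frac{\text {d}}{\text {d}t}E(t) = \frac{\lambda _0}{d+2}\left( 2d-\frac{3}{2}\,{\bar{c}}\,\sqrt{\vartheta _1}\right) E(t), \quad \text {that is,} \quad {{\bar{c}}}_0=\frac{\lambda _0}{d}\left( 2d-\frac{3}{2}\,{\bar{c}}\,\sqrt{\vartheta _1}\right) . \end{aligned}$$

Moreover, on the one hand, from (1.31), we have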

$$\begin{aligned} E(0) = \int _{\mathbb {T}^d} \theta (0,x) \, \text {d}x = - \frac{1}{\vartheta _1} \int _{\mathbb {T}^d} \varrho (0,x) \, \text {d}x. \end{aligned}$$
(4.20)

On the other hand, from the definition of \(\varvec{\theta }_0\) in (4.15), we also have

$$\begin{aligned} E(0) = \frac{2}{\vartheta _1^2 d} \int _{\mathbb {T}^d} \varvec{\theta }_0(0,x) \, \text {d}x + \frac{2}{\vartheta _1 d} \int _{\mathbb {T}^d} \varrho (0,x) \, \text {d}x. \end{aligned}$$
(4.21)

We also know that \(\varvec{\theta }_\varepsilon \) converges towards \(\varvec{\theta }_0\) in \(\mathcal {C}\big ([0,T] \, ; \, \mathbb {W}^{m-1,2}_x\big )\). Consequently, we deduce that

$$\begin{aligned} \int _{\mathbb {T}^d} \varvec{\theta }_0(0,x) \, \text {d}x = \lim _{\varepsilon \rightarrow 0} \int _{\mathbb {T}^d} \Big \langle \tfrac{|v|^2-(d+2) \vartheta _1}{2} h_\varepsilon (0,x) \Big \rangle \, \text {d}x = \lim _{\varepsilon \rightarrow 0} \int _{\mathbb {T}^d} \Big \langle \tfrac{1}{2} |v|^2 h_\varepsilon (0,x) \Big \rangle \, \text {d}x \end{aligned}$$

where we used (1.20) to get the last equality. From (1.22), we deduce that

$$\begin{aligned} \int _{\mathbb {T}^d} \varvec{\theta }_0(0,x) \, \text {d}x=0. \end{aligned}$$

Coming back to (4.20)–(4.21), we deduce that

$$\begin{aligned} E(0) = - \frac{1}{\vartheta _1} \int _{\mathbb {T}^d} \varrho (0,x) \, \text {d}x = \frac{2}{\vartheta _1 d} \int _{\mathbb {T}^d} \varrho (0,x) \, \text {d}x \end{aligned}$$

which implies that \(E(0)=0\) and hence, by the differential equation above, \(E(t)=0\) for every \(t \in [0,T]\). This concludes the proof. \(\square \)
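For completeness, let us indicate how (4.18) follows: since \(E\equiv 0\), the strong Boussinesq relation (4.19) gives \(\varrho =-\vartheta _1\theta \), so that, by direct substitution in the expression of \({{\mathcal {J}}}_0\) from Lemma 4.4 (a simple algebraic check),

$$\begin{aligned} {{\mathcal {J}}}_{0}=-\lambda _{0}\,{\bar{c}}\,\vartheta _{1}^{\frac{3}{2}}\left( -\vartheta _1\theta +\frac{3}{4}\vartheta _{1}\,\theta \right) =\frac{\lambda _0\,{\bar{c}}}{4}\,\vartheta _1^{\frac{5}{2}}\,\theta , \qquad \text {whence} \qquad \frac{2}{(d+2)\vartheta _1^2}\,{{\mathcal {J}}}_0=\frac{\lambda _{0}\,{\bar{c}}}{2(d+2)}\sqrt{\vartheta _{1}}\,\theta , \end{aligned}$$

and (4.17) with \(E\equiv 0\) reduces to (4.18).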

Gathering the results we obtained in Propositions 4.6 and 4.8, we are able to end the proof of Theorem 1.4.