1 Introduction

In this paper, we study the Cauchy problem in \({\mathcal {C}}^\infty\) for a class of weakly hyperbolic equations of third order. We are interested in Levi conditions, that is, in conditions on the lower-order terms which ensure the well posedness of the Cauchy problem.

In the case of strictly hyperbolic equations of order m (m a natural number), that is, when the m characteristic roots are real and distinct, Petrowski [30] (see also [24]) proved the well posedness of the Cauchy problem in \({\mathcal {C}}^\infty\) for any lower-order terms. Then, Oleinik [28] studied weakly hyperbolic equations of second order (that is, the two characteristic roots are real but may coincide) with \({\mathcal {C}}^\infty\) coefficients and lower-order terms and gave some sufficient conditions for the well posedness of the Cauchy problem. Nishitani [25] found necessary and sufficient conditions for second-order equations when there is only one space variable and the coefficients are analytic. In [6] and [7], second-order hyperbolic equations with coefficients depending only on t (in several space variables) were studied, both in \({\mathcal {C}}^\infty\) and in Gevrey classes; for \({\mathcal {C}}^\infty\) and for analytic coefficients, sufficient conditions on the lower-order terms for the well posedness of the Cauchy problem were given (we call them logarithmic conditions). We generalize these conditions to equations of third order with time-dependent non-smooth coefficients.

Although there are many papers on higher-order equations, only a few general results have been obtained (for a general framework see, e.g.,  [24]). We recall that a necessary and sufficient condition for the \({\mathcal {C}}^\infty\) well posedness has been obtained only for a few classes of operators, in particular operators with constant coefficients [16, 17, 34] (or, more generally, operators whose principal part has constant coefficients [14, 36]) and operators with characteristics of constant multiplicities [5, 13, 15].

Concerning third-order operators, which are the main topic of the present paper, many papers treat this subject (see, e.g., [1,2,3,4, 8, 10, 21,22,23, 26, 27, 37]). However, no completely satisfactory result has yet been obtained.

In particular, Wakabayashi [37] has studied this problem, obtaining results similar to ours. He considers operators with double characteristics and operators of third order. In this case, his sub-principal symbol is the same as ours, while the sub-sub-principal symbol (of order 1) is different. His conditions on the lower-order terms are similar to ours in one space variable; in general, his conditions imply our logarithmic conditions. There are two important differences between the two works: Wakabayashi assumes that the coefficients of the equations are analytic, while we assume that they are \({\mathcal {C}}^2\), and his conditions are pointwise while ours are integral. We do not know whether they are equivalent in several space variables when the coefficients are analytic. Finally, our conditions can be expressed simply in terms of the coefficients of the operator (at least if the coefficients are analytic).

In [8], homogeneous higher-order operators with coefficients depending only on the time variable and with finite degeneracy are considered, and a necessary and sufficient condition for the \({\mathcal {C}}^\infty\) well posedness is stated. This result has been extended in [10] to non-homogeneous equations, and in [32] and [33] to equations with principal part depending only on one space variable.

Recently, Nishitani [26, Theorem 6.1] considered third-order equations with analytic coefficients in one space variable and generalized the results of [8] and [32] to equations whose principal symbol depends on both t and x. We compare these results with our Theorem 2 in Example 3 of Sect. 7.

We have the following results:

Theorem 1

([7, 11, 12]) Let us consider a second-order equation

$$\begin{aligned} \partial _t^2 u + \sum _{j=1}^d a_j(t)\partial _t\partial _{x_j} u + \sum _{j,h=1}^d b_{jh}(t) \partial _{x_j}\partial _{x_h} u + c_0(t) \partial _t u + \sum _{j=1}^d c_j(t)\partial _{x_j} u + d(t) u = f \,. \end{aligned}$$

Suppose that the coefficients are real and \({\mathcal {C}}^\infty\) in t and do not depend on the space variables.

Suppose that the symbol of the principal part

$$\begin{aligned} L(t,\tau ,\xi ) = \tau ^2 + a(t,\xi )\tau + b(t, \xi ) \,, \end{aligned}$$

where \(a(t,\xi ) = \sum _{j=1}^d a_j(t)\xi _j\) and \(b(t,\xi ) = \sum _{j,h=1}^d b_{jh}(t)\xi _j\xi _h\), has real zeros in \(\tau\) for any \(\xi \in {\mathbb {R}}^d\), \(t\in [0,T]\):

$$\begin{aligned} \Delta (t,\xi ) = a^2(t,\xi ) - 4 b(t,\xi ) \geqslant 0 \end{aligned}$$

(weak hyperbolicity).

If

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _t\Delta (t,\xi )\bigr |}{\Delta (t,\xi )+1}\, \mathrm{d}t \leqslant C\log |\xi | \end{aligned}$$
(1.1)

for any \(\xi \in {\mathbb {R}}^d\) with \(|\xi | \geqslant C_1>1\) (this condition is automatically satisfied if the coefficients are analytic), and if

$$\begin{aligned} \int _0^T \frac{\Bigl |-\tfrac{1}{2}\, c_0(t) \sum _{j=1}^d a_j(t) \xi _j + \sum _{j=1}^d c_j(t) \xi _j - \tfrac{1}{2} \sum _{j=1}^d \partial _t a_j(t) \xi _j\Bigr |}{\sqrt{\Delta (t,\xi ) +1\,}}\, \mathrm{d}t \leqslant C \log |\xi | \end{aligned}$$
(1.2)

for any \(\xi\) with \(|\xi | \geqslant C_1>1\) (logarithmic condition), then the Cauchy problem is well posed in \({\mathcal {C}}^\infty\).
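As a simple illustration of conditions (1.1) and (1.2) (a model example of ours, not taken from [7, 11, 12]), consider, for an integer \(k\geqslant 1\), the equation \(\partial _t^2 u - t^{2k}\partial _x^2 u + t^{k-1}\partial _x u = 0\) in one space variable, so that \(\Delta (t,\xi ) = 4t^{2k}\xi ^2\); since \(a_1\equiv 0\) and \(c_0\equiv 0\), the numerator in (1.2) reduces to \(t^{k-1}|\xi |\). For \(|\xi |\geqslant 2\),

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _t\Delta (t,\xi )\bigr |}{\Delta (t,\xi )+1}\, \mathrm{d}t&= \int _0^T \frac{8k\,t^{2k-1}\xi ^2}{4t^{2k}\xi ^2+1}\, \mathrm{d}t = \log \bigl (4T^{2k}\xi ^2+1\bigr ) \leqslant C\log |\xi | \,, \\ \int _0^T \frac{t^{k-1}|\xi |}{\sqrt{4t^{2k}\xi ^2 +1\,}}\, \mathrm{d}t&= \frac{1}{2k}\,\log \Bigl (2T^k|\xi | + \sqrt{4T^{2k}\xi ^2+1\,}\Bigr ) \leqslant C \log |\xi | \,, \end{aligned}$$

so Theorem 1 applies and the corresponding Cauchy problem is well posed in \({\mathcal {C}}^\infty\).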

Now, we consider a third-order equation

$$\begin{aligned} \partial _t^3 u + \sum _{j=0}^2\sum _{|\alpha |\le 3-j} a_{j,\alpha }(t)\partial _t^j\partial _x^\alpha u = f \,, \end{aligned}$$
(1.3)

with initial conditions

$$\begin{aligned} u(0,x) = u_0 \,, \quad \partial _t u(0,x) = u_1 \,, \quad \partial _t^2 u(0,x) = u_2 \,. \end{aligned}$$
(1.4)

Let

$$\begin{aligned} L(t,\tau ,\xi )&\overset{\mathrm {def}}{=}\tau ^3 + \sum _{j+|\alpha |=3} a_{j,\alpha }(t)\tau ^j\xi ^\alpha, \\ M(t,\tau ,\xi )&\overset{\mathrm {def}}{=}\sum _{j+|\alpha |=2} a_{j,\alpha }(t) \tau ^j\xi ^\alpha, \\ N(t,\tau ,\xi )&\overset{\mathrm {def}}{=}\sum _{j+|\alpha |=1} a_{j,\alpha }(t) \tau ^j\xi ^\alpha, \\ p(t)&\overset{\mathrm {def}}{=}a_{0,0}(t), \end{aligned}$$

so that Eq. (1.3) can be rewritten as

$$\begin{aligned} L(t,\partial _t,\partial _x)u + M(t,\partial _t,\partial _x)u + N(t,\partial _t,\partial _x)u + p(t)u = f \,. \end{aligned}$$

We assume that the coefficients of L belong to \({\mathcal {C}}^2\bigl ([0,T]\bigr )\), those of M and N belong to \({\mathcal {C}}^1\bigl ([0,T]\bigr )\), whereas p(t) belongs to \(L^\infty \bigl ([0,T]\bigr )\).
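As a simple model (an illustration of ours, not one of the examples of Sect. 7), consider, in one space variable, the equation

$$\begin{aligned} \partial _t^3 u - t^2\,\partial _t\partial _x^2 u + \beta (t)\,\partial _x^2 u + \gamma (t)\,\partial _x u = f \,, \end{aligned}$$

with \(\beta ,\gamma \in {\mathcal {C}}^1\bigl ([0,T]\bigr )\); here \(L(t,\tau ,\xi ) = \tau ^3 - t^2\xi ^2\tau\), \(M(t,\tau ,\xi ) = \beta (t)\,\xi ^2\), \(N(t,\tau ,\xi ) = \gamma (t)\,\xi\) and \(p\equiv 0\). We will occasionally refer to this model below.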

The principal part \(L(t,\tau ,\xi )\), as a polynomial in \(\tau\), has only real roots:

$$\begin{aligned} \tau _1(t,\xi ) \leqslant \tau _2(t,\xi )\leqslant \tau _3(t,\xi ) \end{aligned}$$

for any \(t,\xi\) (weak hyperbolicity). This is equivalent to saying that the discriminant of L is nonnegative:

$$\begin{aligned} \Delta _L(t,\xi )&\overset{\mathrm {def}}{=}\bigl (\tau _1(t,\xi )-\tau _2(t,\xi )\bigr )^2 \bigl (\tau _2(t,\xi )-\tau _3(t,\xi )\bigr )^2 \bigl (\tau _3(t,\xi )-\tau _1(t,\xi )\bigr )^2 \\&= A_1^2(t,\xi ) A_2^2(t,\xi ) - 4 A_2^3(t,\xi ) - 4 A_1^3(t,\xi ) A_3(t,\xi ) \\&\qquad + 18 A_1(t,\xi ) A_2(t,\xi ) A_3(t,\xi ) - 27 A_3^2(t,\xi ) \geqslant 0 , \end{aligned}$$

where \(A_j(t,\xi )\) denotes the coefficient of \(\tau ^{3-j}\) in L, that is,

$$\begin{aligned} A_j(t,\xi ) \overset{\mathrm {def}}{=}\sum _{|\alpha |=j} a_{3-j,\alpha }(t) \, \xi ^\alpha \,, \qquad j=1,2,3 \,. \end{aligned}$$
(1.5)

We set also

$$\begin{aligned} \Delta _L^{(1)}(t,\xi )&\overset{\mathrm {def}}{=}\bigl (\tau _1(t,\xi )-\tau _2(t,\xi )\bigr )^2 + \bigl (\tau _2(t,\xi )-\tau _3(t,\xi )\bigr )^2 + \bigl (\tau _3(t,\xi )-\tau _1(t,\xi )\bigr )^2 \\&= 2\,[A_1^2(t,\xi ) - 3A_2(t,\xi )] \, . \end{aligned}$$

Note that, if \(\Delta _L({\overline{t}},{\overline{\xi }}) = 0\) and \(\Delta _L^{(1)}({\overline{t}},{\overline{\xi }}) \ne 0\), then L has a double root (for example \(\tau _1({\overline{t}},{\overline{\xi }})=\tau _2({\overline{t}},{\overline{\xi }})\) and \(\tau _1({\overline{t}},{\overline{\xi }})\ne \tau _3({\overline{t}},{\overline{\xi }})\)), whereas if \(\Delta _L^{(1)}({\overline{t}},{\overline{\xi }}) = 0\), then L has a triple root: \(\tau _1({\overline{t}},{\overline{\xi }}) = \tau _2({\overline{t}},{\overline{\xi }}) = \tau _3({\overline{t}},{\overline{\xi }})\).
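For instance, for \(L(t,\tau ,\xi ) = \tau ^3 - t^2\xi ^2\tau\) (one space variable, as in the model above) we have \(A_1 = A_3 = 0\) and \(A_2 = -t^2\xi ^2\), so that

$$\begin{aligned} \Delta _L(t,\xi ) = -4A_2^3(t,\xi ) = 4\,t^6\xi ^6 \,, \qquad \Delta _L^{(1)}(t,\xi ) = -6A_2(t,\xi ) = 6\,t^2\xi ^2 \,, \end{aligned}$$

and the three roots \(0,\pm t\xi\) coalesce into a triple root at \(t=0\), where \(\Delta _L^{(1)}\) vanishes. If instead \(L(t,\tau ,\xi ) = \tau \,(\tau -t\xi )^2\), then \(\Delta _L\equiv 0\) while \(\Delta _L^{(1)} = 2\,t^2\xi ^2\ne 0\) for \(t\xi \ne 0\), detecting the double root \(\tau =t\xi\).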

Note that (cf. Lemma A.1)

$$\begin{aligned} \Delta _L^{(1)}(t,\xi ) = \frac{9}{2} \, \Delta _{\partial L}(t,\xi ) \,, \end{aligned}$$

where

$$\begin{aligned} \Delta _{\partial L}(t,\xi ) = \bigl (\sigma _1(t,\xi ) - \sigma _2(t,\xi )\bigr )^2 \end{aligned}$$

is the discriminant of the polynomial

$$\begin{aligned} \partial _\tau L(t,\tau ,\xi ) \overset{\mathrm {def}}{=}3\,\tau ^2 + 2\,A_1(t,\xi ) \, \tau + A_2(t,\xi ) = 3\,\bigl (\tau -\sigma _1(t,\xi )\bigr ) \, \bigl (\tau -\sigma _2(t,\xi )\bigr ) \,. \end{aligned}$$
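For the reader's convenience, the identity can be checked directly from the relations \(\sigma _1+\sigma _2 = -\tfrac{2}{3}A_1\) and \(\sigma _1\sigma _2 = \tfrac{1}{3}A_2\):

$$\begin{aligned} \Delta _{\partial L}(t,\xi ) = \bigl (\sigma _1(t,\xi )+\sigma _2(t,\xi )\bigr )^2 - 4\,\sigma _1(t,\xi )\,\sigma _2(t,\xi ) = \frac{4}{9}\,\bigl [A_1^2(t,\xi ) - 3A_2(t,\xi )\bigr ] = \frac{2}{9}\,\Delta _L^{(1)}(t,\xi ) \,. \end{aligned}$$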

Notations

In the following, we set

$$\begin{aligned} {\mathcal {S}}_2&= \Bigl \{ \ (1,2) \ , \ (2,3) \ , \ (3,1) \ \Bigr \} \\ {\mathcal {S}}_3&= \Bigl \{ \ (1,2,3) \ , \ (2,3,1) \ , \ (3,1,2) \ \Bigr \} \, . \end{aligned}$$

Let \(f(t,\xi )\) and \(g(t,\xi )\) be positive functions; we will write \(f\lesssim g\) (or, equivalently, \(g\gtrsim f\)) to mean that there exists a positive constant C such that

$$\begin{aligned} f(t,\xi ) \leqslant C \, g(t,\xi ) \,, \quad \text {for any}\; (t,\xi )\in [0,T]\times {\mathbb {R}}^n \,. \end{aligned}$$

Similarly, we will write \(f\approx g\) to mean that \(f \lesssim g\) and \(g\lesssim f\).

These notations will make the formulas more readable and will allow us to focus only on the important terms of the estimates.

Consider the auxiliary polynomial

$$\begin{aligned} {\mathcal {L}}(t,\tau ,\xi ) \overset{\mathrm {def}}{=}L(t,\tau ,\xi ) - \partial _\tau ^2 L(t,\tau ,\xi ) \,, \end{aligned}$$
(1.6)

we can prove (see Lemma 2.1 below, applied with \(\varepsilon = 1/|\xi |\)) that its roots \(\lambda _j(t,\xi )\) are real and distinct for \(\xi \ne 0\), and that there exist positive constants \(C_1\) and \(C_2\) such that

$$\begin{aligned} \bigl |\lambda _j(t,\xi ) - \tau _j (t,\xi )\bigr |&\leqslant C_1, \\ \bigl |\lambda _j(t,\xi ) - \lambda _k(t,\xi )|&\geqslant C_2, \end{aligned}$$

for all \((t,\xi ) \in [0,T]\times {\mathbb {R}}^n\setminus \{0\}\) and \((j,k)\in {\mathcal {S}}_2\).
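For instance, for the model symbol \(L(t,\tau ,\xi ) = \tau ^3 - t^2\xi ^2\tau\) (one space variable) we have \(\partial _\tau ^2 L = 6\tau\), hence

$$\begin{aligned} {\mathcal {L}}(t,\tau ,\xi ) = \tau ^3 - \bigl (t^2\xi ^2+6\bigr )\,\tau \,, \qquad \lambda _1(t,\xi ) = -\sqrt{t^2\xi ^2+6\,} \,, \quad \lambda _2(t,\xi ) = 0 \,, \quad \lambda _3(t,\xi ) = \sqrt{t^2\xi ^2+6\,} \,, \end{aligned}$$

and, ordering the roots of L as \(\tau _1\leqslant \tau _2\leqslant \tau _3\), one may take \(C_1 = C_2 = \sqrt{6}\) in the above estimates.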

We denote by \(\mu _1\) and \(\mu _2\) the roots of \(\partial _\tau {\mathcal {L}}(t,\tau ,\xi )\).

Define the symbols

$$\begin{aligned} \check{M}(t,\tau ,\xi )&\overset{\mathrm {def}}{=}M(t,\tau ,\xi ) - \frac{1}{2} \partial _t\partial _\tau L(t,\tau ,\xi ), \end{aligned}$$
(1.7)
$$\begin{aligned} \check{N}(t,\tau ,\xi )&\overset{\mathrm {def}}{=}N(t,\tau ,\xi ) - \frac{1}{2} \partial _t\partial _\tau M(t,\tau ,\xi ) + \frac{1}{12} \partial _t^2\partial _\tau ^2 L(t,\tau ,\xi ) \, . \end{aligned}$$
(1.8)
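Explicitly, writing \(M(t,\tau ,\xi ) = B_0(t)\,\tau ^2 + B_1(t,\xi )\,\tau + B_2(t,\xi )\) with \(B_j(t,\xi ) = \sum _{|\alpha |=j} a_{2-j,\alpha }(t)\,\xi ^\alpha\) (a notation used only in this remark), the definitions (1.7) and (1.8) read

$$\begin{aligned} \check{M}(t,\tau ,\xi )&= M(t,\tau ,\xi ) - \partial _t A_1(t,\xi )\,\tau - \tfrac{1}{2}\,\partial _t A_2(t,\xi ) \,, \\ \check{N}(t,\tau ,\xi )&= N(t,\tau ,\xi ) - \partial _t B_0(t)\,\tau - \tfrac{1}{2}\,\partial _t B_1(t,\xi ) + \tfrac{1}{6}\,\partial _t^2 A_1(t,\xi ) \,. \end{aligned}$$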

We can now state our main result.

Theorem 2

Assume that

$$\begin{aligned} \int _{0}^{T} \sum _{(j,k)\in {\mathcal {S}}_{2}} \frac{\bigl |\partial _t \lambda _j (t,\xi ) - \partial _t \lambda _k (t,\xi )\bigr |}{\bigl |\lambda _j (t,\xi ) - \lambda _k (t,\xi )\bigr |} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(1.9)
$$\begin{aligned} \int _0^T \sum _{(j,k)\in {\mathcal {S}}_2} \frac{\bigl |\partial _t^2 \lambda _j (t,\xi ) - \partial _t^2 \lambda _k (t,\xi )\bigr |}{\bigl |\partial _t\lambda _j(t,\xi ) - \partial _t \lambda _k (t,\xi )\bigr |+1} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(1.10)
$$\begin{aligned} \int _{0}^{T} \sum _{j=1}^3 \frac{\bigl |\partial _t \check{M}\bigl (t,\lambda _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\check{M}\bigl (t,\lambda _j(t,\xi ),\xi \bigr )\bigl |+1} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(1.11)
$$\begin{aligned} \int _0^T \sum _{j=1}^2 \frac{\bigl |\partial _t \check{N}\bigl (t,\mu _{j}(t,\xi ),\xi \bigr )\bigr |}{\bigl |\check{N}\bigl (t,\mu _{j}(t,\xi ),\xi \bigr )\bigr |+1} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(1.12)
$$\begin{aligned} \int _0^T \sum _{(j,k,l)\in {\mathcal {S}}_3} \frac{\bigl |\check{M}\bigl (t,\lambda _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\lambda _j(t,\xi ) - \lambda _k(t,\xi )\bigr | \cdot \bigl |\lambda _j(t,\xi ) - \lambda _l(t,\xi )\bigr |} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(1.13)
$$\begin{aligned} \int _0^T \sum _{j=1}^2 \sqrt{ \frac{\bigl |\check{N}\bigl (t,\mu _{j}(t,\xi ),\xi \bigr )\bigr |}{\bigl |\mu _2(t,\xi ) - \mu _1(t,\xi )\bigr |} \, } \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,. \end{aligned}$$
(1.14)

Then, the Cauchy problem (1.3, 1.4) is well posed in \({\mathcal {C}}^\infty\).

In the following, we will say that a function \(f(t,\xi )\) verifies the logarithmic condition if

$$\begin{aligned} \int _{0}^{T} \bigl | f(t,\xi ) \bigr | \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,. \end{aligned}$$
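For instance, \(f(t,\xi ) = |\xi |/(t|\xi |+1)\) verifies the logarithmic condition, since

$$\begin{aligned} \int _0^T \frac{|\xi |}{t|\xi |+1}\, \mathrm{d}t = \log \bigl (T|\xi |+1\bigr ) \lesssim \log \bigl (1+|\xi |\bigr ) \,, \end{aligned}$$

whereas \(f(t,\xi ) = |\xi |^{1/2}\) does not.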

Remark 1.1

Conditions (1.9)–(1.12) are hypotheses on the regularity of the coefficients. Indeed, if the coefficients are analytic, then they are satisfied; see Sect. 4.

Conditions (1.13) and (1.14) are Levi conditions on the lower-order terms. They are necessary if the coefficients of the principal symbols are constant, see Sect. 6.

Remark 1.2

The hypotheses in Theorem 2 can be expressed in terms of the coefficients of the operator. This is possible either by writing the roots of \({\mathcal {L}}\) and \(\partial _\tau {\mathcal {L}}\) explicitly, or by transforming Hypotheses (1.9)–(1.14) into symmetric rational functions of the roots of \({\mathcal {L}}\). This will be developed in Sect. 3.

The plan of the paper is the following. In Sect. 2, we will prove Theorem 2. In Sect. 3, we give some different forms of the Levi conditions (1.13) and (1.14). In Sect. 4, we will show that if the coefficients are analytic, then (1.9)–(1.12) are satisfied. In Sect. 5, we give some sufficient pointwise conditions that are equivalent to ours in space dimension \(n=1\). We show also that the Levi conditions (1.13) and (1.14) are equivalent to the condition of good decomposition [13], which is necessary and sufficient for the well posedness for operators with characteristics of constant multiplicities [5, 13, 15]. In Sect. 6, we show that the Levi conditions (1.13) and (1.14) are necessary for the \({\mathcal {C}}^\infty\) well posedness if the coefficients of the principal symbols are constant. Finally, in Sect. 7, we give some examples.

In the proofs, for the sake of simplicity, we will omit the dependence on t and \(\xi\) in the notations.

2 Proof of Theorem 2

Lemma 2.1

([19]) Consider the polynomial

$$\begin{aligned} L_\varepsilon (t,\tau ,\xi ) = L(t,\tau ,\xi ) - \varepsilon ^2 |\xi |^2\partial _\tau ^2 L(t,\tau ,\xi ) \,. \end{aligned}$$

Its roots \(\tau _{j,\varepsilon } (t,\xi )\) are real and distinct, moreover,

$$\begin{aligned} \bigl |\tau _{j,\varepsilon } -\tau _j\bigr |&\lesssim \varepsilon |\xi |, &j = 1,2,3, \\ \bigl |\tau _{j,\varepsilon } - \tau _{k,\varepsilon }\bigr |&\gtrsim \varepsilon |\xi | ,&(j,k)\in {\mathcal {S}}_2 \, . \end{aligned}$$

Remark 2.2

By direct calculation (cf. [21,  par. 1020]), the discriminant of \(L_\varepsilon\) is given by

$$\begin{aligned} \Delta _{L_\varepsilon } = \Delta _L + \frac{1}{2} \varepsilon ^2 |\xi |^2 \, \Delta _{\partial _\tau L}^2 + 36\,\varepsilon ^4 |\xi |^4\,\Delta _{\partial _\tau L} + 864\,\varepsilon ^6 |\xi |^6 \,, \end{aligned}$$

where \(\Delta _{\partial _\tau L}\) is the discriminant of the polynomial \(\partial _\tau L\).

Similarly,

$$\begin{aligned} \Delta _{\partial _\tau L_\varepsilon } = \Delta _{\partial _\tau L} + 72\,\varepsilon ^2 |\xi |^2 \,. \end{aligned}$$
(2.1)
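Note that (2.1) is consistent with taking \(\Delta _{\partial _\tau L}\) to be the standard discriminant \(4A_1^2 - 12A_2 = 9\,\Delta _{\partial L}\) of the quadratic polynomial \(\partial _\tau L\): since \(\partial _\tau L_\varepsilon = 3\tau ^2 + 2A_1\tau + A_2 - 6\,\varepsilon ^2|\xi |^2\), we have

$$\begin{aligned} \Delta _{\partial _\tau L_\varepsilon } = 4A_1^2 - 12\,\bigl (A_2 - 6\,\varepsilon ^2|\xi |^2\bigr ) = \Delta _{\partial _\tau L} + 72\,\varepsilon ^2|\xi |^2 \,. \end{aligned}$$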

We will take \(\varepsilon = 1/ |\xi |\).

We consider

$$\begin{aligned} L_{j,\varepsilon }(t,\tau ,\xi )&= \tau - i\tau _{j,\varepsilon }(t,\xi ), &j = 1,2,3, \\ L_{jk,\varepsilon }(t,\tau ,\xi )&= \bigl (\tau - i\tau _{j,\varepsilon }(t,\xi )\bigr ) \bigl (\tau - i\tau _{k,\varepsilon }(t,\xi )\bigr ) \, ,&(j,k)\in {\mathcal {S}}_2 \, , \\ L_{123,\varepsilon }(t,\tau ,\xi )&= \bigl (\tau - i\tau _{1,\varepsilon }(t,\xi )\bigr ) \bigl (\tau - i\tau _{2,\varepsilon }(t,\xi )\bigr ) \bigl (\tau - i\tau _{3,\varepsilon }(t,\xi )\bigr ) \\&= L_\varepsilon (t,\tau ,i\xi ) = L(t,\tau ,i\xi ) + \varepsilon ^2 |\xi |^2 \partial _\tau ^2 L(t,\tau ,i\xi ) \, . \end{aligned}$$

We define also the operators

$$\begin{aligned} \widetilde{L}_{jh,\varepsilon }(t,\partial _t,\xi )&= \frac{1}{2} \bigl [ L_{j,\varepsilon }(t,\partial _t,\xi ) \circ L_{h,\varepsilon }(t,\partial _t,\xi ) + L_{h,\varepsilon }(t,\partial _t,\xi ) \circ L_{j,\varepsilon }(t,\partial _t,\xi ) \bigr ] \, , \end{aligned}$$
(2.2)

for any \((j,h)\in {\mathcal {S}}_2\), and

$$\begin{aligned} \widetilde{L}_{123,\varepsilon }(t,\partial _t,\xi )&= \frac{1}{6} \sum _{\begin{array}{c} j,h,l=1,2,3 \\ j\ne h\,,\,j\ne l\,,\,h\ne l \end{array}} L_{j,\varepsilon }(t,\partial _t,\xi ) \circ L_{h,\varepsilon }(t,\partial _t,\xi ) \circ L_{l,\varepsilon }(t,\partial _t,\xi ) \, . \end{aligned}$$
(2.3)

If \(\varepsilon =0\), we will write \(L_{j}\), \(L_{jh}\), \(\widetilde{L}_{jh}\), ..., instead of \(L_{j,0}\), \(L_{jh,0}\), \(\widetilde{L}_{jh,0}\), ....

Lemma 2.3

For any \((j,h)\in {\mathcal {S}}_2\), we have

$$\begin{aligned} \widetilde{L}_{jh,\varepsilon } = L_{jh,\varepsilon } + \frac{1}{2} \, \partial _t \partial _\tau L_{jh,\varepsilon } \,, \end{aligned}$$
(2.4)

and

$$\begin{aligned} L_{j,\varepsilon } \circ L_{h,\varepsilon } - \widetilde{L}_{jh,\varepsilon } = \frac{i}{2} \, (\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }') \,. \end{aligned}$$
(2.5)

Proof

As

$$\begin{aligned} L_{j,\varepsilon } \circ L_{h,\varepsilon }&= (\partial _t - i\tau _{j,\varepsilon }) \circ (\partial _t - i\tau _{h,\varepsilon }) = L_{jh,\varepsilon } - i \tau _{h,\varepsilon }' \, , \end{aligned}$$
(2.6)

we have

$$\begin{aligned} (\partial _t - i\tau _{j,\varepsilon })&\circ (\partial _t - i\tau _{h,\varepsilon }) + (\partial _t - i\tau _{h,\varepsilon }) \circ (\partial _t - i\tau _{j,\varepsilon }) \nonumber \\&= 2 L_{jh,\varepsilon } - i (\tau _{j,\varepsilon }' + \tau _{h,\varepsilon }') \nonumber \\&= 2 L_{jh,\varepsilon } + (\partial _t\partial _\tau L_{jh,\varepsilon }) \, , \end{aligned}$$
(2.7)

from which (2.4) follows.

Identity (2.5) follows from (2.6) and (2.7). \(\square\)

Note that,

$$\begin{aligned} L_{j,\varepsilon } v - L_{h,\varepsilon } v = -i(\tau _{j,\varepsilon } - \tau _{h,\varepsilon })v \,, \end{aligned}$$
(2.8)

whereas, from (2.7),

$$\begin{aligned} \begin{aligned} \widetilde{L}_{jh,\varepsilon }v-\widetilde{L}_{jl,\varepsilon }v&= L_{jh,\varepsilon }v-L_{jl,\varepsilon }v - \frac{i}{2} (\tau _{j,\varepsilon }' + \tau _{h,\varepsilon }')v + \frac{i}{2} (\tau _{j,\varepsilon }' + \tau _{l,\varepsilon }')v \\&= -i(\tau _{h,\varepsilon }-\tau _{l,\varepsilon }) \, L_{j,\varepsilon }v - \frac{i}{2} (\tau _{h,\varepsilon }' - \tau _{l,\varepsilon }')v \,. \end{aligned} \end{aligned}$$
(2.9)

Lemma 2.4

We have

$$\begin{aligned} \widetilde{L}_{123,\varepsilon }&= L_{123,\varepsilon } + \frac{1}{2} \, \partial _t \partial _\tau L_{123,\varepsilon } + \frac{1}{6} \, \partial _t^2 \partial _\tau ^2 L_{123,\varepsilon } \end{aligned}$$
(2.10)
$$\begin{aligned} L_{1,\varepsilon } \circ L_{2,\varepsilon } \circ L_{3,\varepsilon } - \widetilde{L}_{123,\varepsilon }&= \frac{i}{2} (\tau _1'-\tau _2') \, L_{3,\varepsilon } + \frac{i}{2} (\tau _2'-\tau _3') \, L_{1,\varepsilon } - \frac{i}{2} (\tau _3'-\tau _1') \, L_{2,\varepsilon }\nonumber \\&\quad - \frac{1}{3}\,i\,(\tau _3''-\tau _1'') - \frac{1}{3}\,i\,(\tau _3''-\tau _2'') \end{aligned}$$
(2.11)
$$\begin{aligned} \widetilde{L}_{123,\varepsilon } - \widetilde{L}_{123,0}&= 2 \, \varepsilon ^2 |\xi |^2 \sum _{j=1}^3 L_{j,\varepsilon } \, . \end{aligned}$$
(2.12)

Proof

We have

$$\begin{aligned} L_{1,\varepsilon } \circ L_{2,\varepsilon } \circ L_{3,\varepsilon }&= (\partial _t - i\tau _{1,\varepsilon }) \circ (\partial _t - i\tau _{2,\varepsilon }) \circ (\partial _t - i\tau _{3,\varepsilon }) \nonumber \\&= L_{123,\varepsilon } + (-i\tau _3') L_{1,\varepsilon } + \partial _t L_{23,\varepsilon } + (-i\tau _3'') \nonumber \\&= L_{123,\varepsilon } + (-i\tau _3') L_{1,\varepsilon } + (-i\tau _2') L_{3,\varepsilon } + (-i\tau _3') L_{2,\varepsilon } + (-i\tau _3'') \, . \end{aligned}$$
(2.13)

Summing over all permutations, we get (2.10).

Identity (2.11) follows from (2.13).

As (2.10) with \(\varepsilon =0\) gives

$$\begin{aligned} \widetilde{L}_{123,0} = L + \frac{1}{2} \, \partial _t \partial _\tau L + \frac{1}{6} \, \partial _t^2 \partial _\tau ^2 L \,, \end{aligned}$$
(2.14)

we get

$$\begin{aligned} \widetilde{L}_{123,\varepsilon } - \widetilde{L}_{123,0}&= L_{123,\varepsilon } - L + \frac{1}{2} \, \partial _t \partial _\tau [ L_{123,\varepsilon } - L] + \frac{1}{6} \, \partial _t^2 \partial _\tau ^2 [ L_{123,\varepsilon } - L ] \\&= \varepsilon ^2 |\xi |^2 \partial _\tau ^2 L = \varepsilon ^2 |\xi |^2 \partial _\tau ^2 L_{123,\varepsilon } = 2 \, \varepsilon ^2 |\xi |^2 \sum _{j=1}^3 L_{j,\varepsilon } \, . \end{aligned}$$

\(\square\)

We define an energy, after a Fourier transform with respect to the variables x (\(v= {\mathcal {F}}_x u\)):

$$\begin{aligned} E(t,\xi ) \overset{\mathrm {def}}{=}k(t,\xi ) \Biggl [ \ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v|^2 + {\mathcal {H}}^2(t,\xi ) \, \biggl [ \ \sum _{j=1}^3 |L_{j,\varepsilon } v|^2 + |v|^2 \ \biggr ] \Biggr ], \end{aligned}$$

where the weight \(k(t,\xi )\) is defined by

$$\begin{aligned} k(t,\xi ) \overset{\mathrm {def}}{=}\exp \Biggl [ -\eta \int _{-T}^t {\mathcal {K}}(s,\xi ) \, ds\Biggr ] \,, \end{aligned}$$

with

$$\begin{aligned} {\mathcal {K}}(t,\xi ) \overset{\mathrm {def}}{=}&\sum _{(j,h)\in {\mathcal {S}}_2} \frac{|\tau _{j,\varepsilon }'-\tau _{h,\varepsilon }'|}{|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|} + \sum _{(j,h)\in {\mathcal {S}}_2} \frac{|\tau _{j,\varepsilon }''-\tau _{h,\varepsilon }''|}{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|+1} \\&\quad + \sum _{j=1}^3 \frac{\bigl |\partial _t \check{M}(\tau _{j,\varepsilon })\bigr |}{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |+1} + \sum _{j=1}^2 \frac{\bigl |\partial _t \check{N}(\sigma _{j,\varepsilon })\bigr |}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1} \\&\quad + \sum _{(j,h,l)\in {\mathcal {S}}_3} \frac{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |}{|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|\cdot |\tau _{j,\varepsilon } - \tau _{l,\varepsilon }|} + \sum _{j=1}^2 \sqrt{ \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \,} + \log |\xi | \, , \\ {\mathcal {H}}(t,\xi )&\overset{\mathrm {def}}{=}1 + \sum _{(j,h)\in {\mathcal {S}}_2} \frac{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|}{|\tau _{j,\varepsilon } - \tau _{h,\varepsilon }|} \\&\quad + \sum _{(j,h,l)\in {\mathcal {S}}_3} \frac{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |}{|\tau _{j,\varepsilon } - \tau _{h,\varepsilon }| \cdot |\tau _{j,\varepsilon } - \tau _{l,\varepsilon }|} + \sum _{j=1}^2 \sqrt{ \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \,} \, , \end{aligned}$$

\(\check{M}\) and \(\check{N}\) are defined in (1.7) and (1.8).
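Note that, with the choice \(\varepsilon = 1/|\xi |\), the roots \(\tau _{j,\varepsilon }\) and \(\sigma _{j,\varepsilon }\) coincide with the roots \(\lambda _j\) of \({\mathcal {L}}\) and \(\mu _j\) of \(\partial _\tau {\mathcal {L}}\) introduced in Sect. 1, so that Hypotheses (1.9)–(1.14) amount precisely to the bound

$$\begin{aligned} \int _0^T {\mathcal {K}}(t,\xi ) \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,, \quad \text {for}\; |\xi |\geqslant 2 \,, \end{aligned}$$

which is what will be used, at the end of the proof, to estimate the weight \(k(t,\xi )\) from below.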

Differentiating the energy with respect to time, we get

$$\begin{aligned} E'(t,\xi )&= -\eta \, {\mathcal {K}}(t,\xi ) \, E(t,\xi ) \\&\qquad + k(t,\xi ) \Biggl [ \ 2 \, \sum _{(j,h)\in {\mathcal {S}}_2} \mathrm{Re}\left\langle \partial _t \widetilde{L}_{jh,\varepsilon }v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \\&\qquad + 2 \, {\mathcal {H}}(t,\xi ) \,{\mathcal {H}}'(t,\xi ) \, \biggl [ \ \sum _{j=1}^3 |L_{j,\varepsilon } v|^2 + |v|^2 \ \biggr ] \\&\qquad + {\mathcal {H}}^2(t,\xi ) \, \biggl [ \ \sum _{j=1}^3 2 \, \mathrm{Re}\left\langle \partial _t L_{j,\varepsilon } v \, , \, L_{j,\varepsilon } v \right\rangle + 2 \, \mathrm{Re}\left\langle \partial _t v \, , \, v \right\rangle \ \biggr ] \Biggr ] \, . \end{aligned}$$

Now we show that the second, third and fourth summands can be estimated by \(C\,{\mathcal {K}}(t,\xi ) \, E(t,\xi )\), for some suitable positive constant C.

2.1 Estimation of the terms \(2\; \mathrm{Re}\left\langle \partial _t \widetilde{L}_{jh,\varepsilon }v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

As

$$\begin{aligned} \partial _t \widetilde{L}_{jh,\varepsilon }v = (L_l \circ \widetilde{L}_{jh,\varepsilon }) v + i\,\tau _{l,\varepsilon } \, \widetilde{L}_{jh,\varepsilon } v, \end{aligned}$$

we have

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle \partial _t \widetilde{L}_{jh,\varepsilon }v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle&= 2 \, \mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle + 2 \, \mathrm{Re}\left\langle i\,\tau _{l,\varepsilon } \, \widetilde{L}_{jh,\varepsilon } v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \\&= 2 \, \mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \, . \end{aligned}$$

Define

$$\begin{aligned} \begin{aligned} \widetilde{M}&\overset{\mathrm {def}}{=}\check{M} + \frac{1}{2} \partial _t\partial _\tau \check{M} \\&= M - \frac{1}{2} \partial _t\partial _\tau L + \frac{1}{2} \partial _t\partial _\tau M - \frac{1}{4} \partial _t^2\partial _\tau ^2 L \,. \end{aligned} \end{aligned}$$
(2.15)

so that

$$\begin{aligned} \widetilde{L}_{123,0} + \widetilde{M} + \check{N} = L + M + N \,, \end{aligned}$$
(2.16)
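Indeed, summing (2.14), (2.15) and (1.8), the terms in \(\partial _t\partial _\tau L\) and \(\partial _t\partial _\tau M\) cancel, and the coefficient of \(\partial _t^2\partial _\tau ^2 L\) is

$$\begin{aligned} \frac{1}{6} - \frac{1}{4} + \frac{1}{12} = 0 \,. \end{aligned}$$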

hence

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle&= 2 \, \mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \\&\qquad + 2 \, \mathrm{Re}\left\langle \widetilde{L}_{123,\varepsilon } v - \widetilde{L}_{123,0} v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \\&\qquad + 2 \, \mathrm{Re}\left\langle Lv + Mv + Nv \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \\&\qquad - 2 \, \mathrm{Re}\left\langle \widetilde{M} v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle - 2 \, \mathrm{Re}\left\langle \check{N} v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \end{aligned}$$

2.1.1 Estimation of  \(2 \, \mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

First of all, we have

$$\begin{aligned} \Bigl |\mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \Bigr | \lesssim \bigl |(L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v\bigr | \, |\widetilde{L}_{jh,\varepsilon }v| \,. \end{aligned}$$

According to (2.11), \((L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v\) is a linear combination, with constant coefficients, of terms like \((\tau _{\alpha ,\varepsilon }'-\tau _{\beta ,\varepsilon }') \, L_{\gamma ,\varepsilon }v\), with \((\alpha ,\beta ,\gamma )\in {\mathcal {S}}_3\), and \((\tau _{\alpha ,\varepsilon }''-\tau _{\beta ,\varepsilon }'')v\), with \((\alpha ,\beta )\in {\mathcal {S}}_2\), hence:

$$\begin{aligned} \bigl |(L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v\bigr | \lesssim \sum _{(\alpha ,\beta ,\gamma )\in {\mathcal {S}}_3} |\tau _{\alpha ,\varepsilon }'-\tau _{\beta ,\varepsilon }'| \, |L_{\gamma ,\varepsilon }v| + \sum _{(\alpha ,\beta )\in {\mathcal {S}}_2} |\tau _{\alpha ,\varepsilon }''-\tau _{\beta ,\varepsilon }''| \, |v| \,. \end{aligned}$$

Concerning the first sum, from (2.9) and (2.8), we have

$$\begin{aligned} L_{\gamma ,\varepsilon }v&= i\,\frac{\widetilde{L}_{\gamma \alpha ,\varepsilon }v-\widetilde{L}_{\gamma \beta ,\varepsilon }v}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }} - \frac{1}{2} \, \frac{\tau _{\alpha ,\varepsilon }' - \tau _{\beta ,\varepsilon }'}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }} \, v \\&= i\,\frac{\widetilde{L}_{\gamma \alpha ,\varepsilon }v-\widetilde{L}_{\gamma \beta ,\varepsilon }v}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }} - \frac{i}{2} \, \frac{\tau _{\alpha ,\varepsilon }' - \tau _{\beta ,\varepsilon }'}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }} \, \frac{L_{\alpha ,\varepsilon }v - L_{\beta ,\varepsilon }v}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }} \, , \end{aligned}$$

hence

$$\begin{aligned} |\tau _{\alpha ,\varepsilon }'-\tau _{\beta ,\varepsilon }'| \, |L_{\gamma ,\varepsilon }v|&\lesssim \biggl |\frac{\tau _{\alpha ,\varepsilon }'-\tau _{\beta ,\varepsilon }'}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }}\biggr | \bigl [|\widetilde{L}_{\gamma \alpha ,\varepsilon }v| + |\widetilde{L}_{\gamma \beta ,\varepsilon }v|\bigr ] \\&\qquad + \biggl |\frac{\tau _{\alpha ,\varepsilon }'-\tau _{\beta ,\varepsilon }'}{\tau _{\alpha ,\varepsilon }-\tau _{\beta ,\varepsilon }}\biggr |^2 \, \bigl [|L_{\alpha ,\varepsilon }v| + |L_{\beta ,\varepsilon }v|\bigr ] \\&\lesssim {\mathcal {K}}(t,\xi ) \, \biggl [ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v| + {\mathcal {H}}(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v|\biggr ] \, . \end{aligned}$$

Concerning the second sum, from (2.8), we have

$$\begin{aligned} \bigl |(\tau _{\alpha ,\varepsilon }'' - \tau _{\beta ,\varepsilon }'' )v\bigr |&= \frac{|\tau _{\alpha ,\varepsilon }'' - \tau _{\beta ,\varepsilon }''|}{|\tau _{\alpha ,\varepsilon } - \tau _{\beta ,\varepsilon }|} \, \bigl |L_{\beta ,\varepsilon }v - L_{\alpha ,\varepsilon }v\bigr | \\&= \frac{|\tau _{\alpha ,\varepsilon }'' - \tau _{\beta ,\varepsilon }''|}{|\tau _{\alpha ,\varepsilon }' - \tau _{\beta ,\varepsilon }'| + 1} \, \frac{|\tau _{\alpha ,\varepsilon }' - \tau _{\beta ,\varepsilon }'| + 1}{|\tau _{\alpha ,\varepsilon } - \tau _{\beta ,\varepsilon }|} \, \bigl |L_{\beta ,\varepsilon }v - L_{\alpha ,\varepsilon }v\bigr | \\&\leqslant {\mathcal {K}}(t,\xi ) \, {\mathcal {H}}(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v| \, . \end{aligned}$$

Combining the above estimates, we get

$$\begin{aligned} \Bigl |\mathrm{Re}\left\langle (L_l \circ \widetilde{L}_{jh,\varepsilon }) v - \widetilde{L}_{123,\varepsilon } v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \Bigr | \lesssim {\mathcal {K}}(t,\xi ) \, \biggl [ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v|^2 + {\mathcal {H}}^2(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v|^2\biggr ] \,. \end{aligned}$$

2.1.2 Estimation of  \(2 \, \mathrm{Re}\left\langle \widetilde{L}_{123,\varepsilon } v - \widetilde{L}_{123,0} v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

Using (2.12), we have

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle \widetilde{L}_{123,\varepsilon } v - \widetilde{L}_{123,0} v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \lesssim \sum _{j=1}^3 |L_{j,\varepsilon }v|^2 + |\widetilde{L}_{jh,\varepsilon }v|^2 \,. \end{aligned}$$

2.1.3 Estimation of  \(2 \, \mathrm{Re}\left\langle Lv + Mv + Nv \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

As

$$\begin{aligned} Lv + Mv + Nv = -pv \,, \end{aligned}$$

we get

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle Lv + Mv + Nv \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle = - 2 \, \mathrm{Re}\left\langle pv \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \lesssim |v|^2 + |\widetilde{L}_{jh,\varepsilon }v|^2 \,. \end{aligned}$$

2.1.4 Estimation of  \(2 \, \mathrm{Re}\left\langle \widetilde{M} v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

To estimate \(\check{M} = M - \frac{1}{2} \partial _t\partial _\tau L\), we can use the Lagrange interpolation formula

$$\begin{aligned} \check{M} = \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _j(t,\xi ) \, L_{hl,\varepsilon } \,, \quad \text {where}\quad \ell _j(t,\xi ) \overset{\mathrm {def}}{=}\frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon })(\tau _{j,\varepsilon }-\tau _{l,\varepsilon })} \,. \end{aligned}$$
(2.17)

If \(\widetilde{M}\) were a linear combination of the \(\widetilde{L}_{hl,\varepsilon }\), we could estimate it directly.

Now we observe that \(\check{M} = \sum \ell _j L_{hl,\varepsilon }\) does not imply \(\widetilde{M} = \sum \ell _j \widetilde{L}_{hl,\varepsilon }\); instead, we have

$$\begin{aligned} \widetilde{M}&= \check{M} + \frac{1}{2} \partial _t\partial _\tau \check{M} \\&= \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _j L_{hl,\varepsilon } + \frac{1}{2} \, \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _j \partial _t\partial _\tau L_{hl,\varepsilon } + \frac{1}{2} \, \sum _{(j,h,l)\in {\mathcal {S}}_3} \partial _t \ell _j \partial _\tau L_{hl,\varepsilon } \\&= \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _j \widetilde{L}_{hl,\varepsilon } + \frac{1}{2} \, \sum _{(j,h,l)\in {\mathcal {S}}_3} \partial _t \ell _j [L_{h,\varepsilon } + L_{l,\varepsilon }] \, . \end{aligned}$$

We have

$$\begin{aligned} \partial _t \ell _j&= \partial _t \frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })} \\&= \frac{\partial _t\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })} - \frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })} \cdot \left( \frac{\tau _{j,\varepsilon }'-\tau _{h,\varepsilon }'}{\tau _{j,\varepsilon }-\tau _{h,\varepsilon }} + \frac{\tau _{j,\varepsilon }'-\tau _{l,\varepsilon }'}{\tau _{j,\varepsilon }-\tau _{l,\varepsilon }} \right) , \end{aligned}$$

and

$$\begin{aligned} \Biggl |\frac{\partial _t\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })}\Biggr |&= \frac{\bigl |\partial _t\check{M}(\tau _{j,\varepsilon })\bigr |}{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |+1} \cdot \frac{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |+1}{|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|\,|\tau _{j,\varepsilon }-\tau _{l,\varepsilon }|} \\&\lesssim \frac{\bigl |\partial _t\check{M}(\tau _{j,\varepsilon })\bigr |}{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |+1} \left( \frac{\bigl |\check{M}(\tau _{j,\varepsilon })\bigr |}{|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|\,|\tau _{j,\varepsilon }-\tau _{l,\varepsilon }|} + 1 \right) \end{aligned}$$

since \(|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|\geqslant C>0\). Hence,

$$\begin{aligned} \Biggl |\partial _t\frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })}\Biggr | \leqslant {\mathcal {K}}(t,\xi ) \, {\mathcal {H}}(t,\xi ) \,. \end{aligned}$$
(2.18)

Thus, we get

$$\begin{aligned} |\widetilde{M} v|&\lesssim {\mathcal {K}}(t,\xi ) \Biggl [ \ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v| + {\mathcal {H}}(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v| \Biggr ] , \end{aligned}$$

hence

$$\begin{aligned} \biggl |2 \, \mathrm{Re}\left\langle \widetilde{M} v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle \biggr |&\lesssim {\mathcal {K}}(t,\xi ) \Biggl [ \ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v|^2 + {\mathcal {H}}^2(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v|^2 \Biggr ] . \end{aligned}$$

2.1.5 Estimation of  \(2 \, \mathrm{Re}\left\langle \check{N} v \,, \, \widetilde{L}_{jh,\varepsilon }v \right\rangle\)

Using Lagrange’s interpolation formula:

$$\begin{aligned} \check{N}(\tau ) = \frac{\check{N}(\sigma _{1,\varepsilon })}{\sigma _{1,\varepsilon } - \sigma _{2,\varepsilon }} \, (\tau -\sigma _{2,\varepsilon }) + \frac{\check{N}(\sigma _{2,\varepsilon })}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }} \, (\tau -\sigma _{1,\varepsilon }) \,. \end{aligned}$$

Now, since \(\tau _{1,\varepsilon } \leqslant \sigma _{1,\varepsilon } \leqslant \tau _{2,\varepsilon } \leqslant \sigma _{2,\varepsilon } \leqslant \tau _{3,\varepsilon }\), we can find \(\theta _1,\theta _2\in [0,1]\) such that

$$\begin{aligned} \sigma _{j,\varepsilon } = \theta _j \, \tau _{1,\varepsilon } + (1 - \theta _j) \, \tau _{3,\varepsilon } \,, \end{aligned}$$

hence

$$\begin{aligned} (\tau -\sigma _{j,\varepsilon }) = \theta _j \, (\tau -\tau _{1,\varepsilon }) + (1 - \theta _j) \, (\tau -\tau _{3,\varepsilon }) \,, \end{aligned}$$

thus \(\check{N}(\tau )\) is a linear combination, with bounded coefficients, of terms of the form

$$\begin{aligned} \frac{\check{N}(\sigma _{j,\varepsilon })}{\sigma _{1,\varepsilon } - \sigma _{2,\varepsilon }} \, (\tau -\tau _{h,\varepsilon }) \,, \end{aligned}$$

\(j=1,2\), \(h=1,3\).

We get

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle \check{N} v \, , \, \widetilde{L}_{jh,\varepsilon }v \right\rangle&\lesssim \frac{\bigl |\check{N}(\sigma _{1,\varepsilon })\bigr | + \bigl |\check{N}(\sigma _{2,\varepsilon })\bigr |}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \Bigl [ |L_{1,\varepsilon }v| + |L_{3,\varepsilon }v| \Bigr ] \, |\widetilde{L}_{jh,\varepsilon }v| \\&\lesssim \sqrt{ \frac{\bigl |\check{N}(\sigma _{1,\varepsilon })\bigr | + \bigl |\check{N}(\sigma _{2,\varepsilon })\bigr |}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \,} \Biggl [ |\widetilde{L}_{jh,\varepsilon }v|^2 \\&\quad + \frac{\bigl |\check{N}(\sigma _{1,\varepsilon })\bigr | + \bigl |\check{N}(\sigma _{2,\varepsilon })\bigr |}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \Bigl [ |L_{1,\varepsilon }v|^2 + |L_{3,\varepsilon }v|^2 \Bigr ] \Biggr ] \\&\lesssim {\mathcal {K}}(t,\xi ) \Biggl [ \ \sum _{(j,h)\in {\mathcal {S}}_2} |\widetilde{L}_{jh,\varepsilon }v|^2 + {\mathcal {H}}^2(t,\xi ) \, \sum _{j=1}^3 |L_{j,\varepsilon } v|^2 \Biggr ] . \end{aligned}$$

2.2 Estimation of the term \(2 \, {\mathcal {H}}(t,\xi ) \,{\mathcal {H}}'(t,\xi )\)

Lemma 2.5

There exists \(C_H>0\) such that

$$\begin{aligned} {\mathcal {H}}'(t,\xi ) \leqslant C_H \, {\mathcal {K}}(t,\xi ) \, {\mathcal {H}}(t,\xi ) \,. \end{aligned}$$

Proof

For any \((j,h)\in {\mathcal {S}}_2\), we have

$$\begin{aligned} \partial _t \frac{\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'}{\tau _{j,\varepsilon } - \tau _{h,\varepsilon }}&= \frac{\tau _{j,\varepsilon }'' - \tau _{h,\varepsilon }''}{\tau _{j,\varepsilon } - \tau _{h,\varepsilon }} - \frac{(\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }')^2}{(\tau _{j,\varepsilon } - \tau _{h,\varepsilon })^2} \, , \end{aligned}$$

hence

$$\begin{aligned} \partial _t \frac{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|}{|\tau _{j,\varepsilon } - \tau _{h,\varepsilon }|}&\leqslant \frac{|\tau _{j,\varepsilon }'' - \tau _{h,\varepsilon }''|}{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|+1} \cdot \frac{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|+1}{|\tau _{j,\varepsilon } - \tau _{h,\varepsilon }|} + \frac{|\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'|^2}{|\tau _{j,\varepsilon } - \tau _{h,\varepsilon }|^2} \\&\lesssim {\mathcal {K}}(t,\xi ) \, {\mathcal {H}}(t,\xi ) \, . \end{aligned}$$

The terms \(\partial _t\frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }-\tau _{l,\varepsilon })}\) are estimated as in (2.18).

For any \((j,h)\in {\mathcal {S}}_2\), we have

$$\begin{aligned} \partial _t \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|}&= \frac{1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \cdot \frac{\check{N}(\sigma _{j,\varepsilon })}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |} \cdot \partial _t \check{N}(\sigma _{j,\varepsilon }) - \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \cdot \frac{\sigma _{2,\varepsilon }' - \sigma _{1,\varepsilon }'}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }} \\&= \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr | + 1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \Biggl [ \frac{\check{N}(\sigma _{j,\varepsilon })}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |} \cdot \frac{\partial _t \check{N}(\sigma _{j,\varepsilon })}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr | + 1} - \frac{\sigma _{2,\varepsilon }' - \sigma _{1,\varepsilon }'}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }} \Biggr ] \, , \end{aligned}$$

hence

$$\begin{aligned} \partial _t \sqrt{\frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|}\,}&= \frac{1}{2\sqrt{\dfrac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|}}\,} \, \partial _t \frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr | + 1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|} \\&= \frac{1}{2} \, \sqrt{\frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|}\,} \Biggl [ \frac{\check{N}(\sigma _{j,\varepsilon })}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |} \cdot \frac{\partial _t \check{N}(\sigma _{j,\varepsilon })}{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr | + 1} - \frac{\sigma _{2,\varepsilon }' - \sigma _{1,\varepsilon }'}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }} \Biggr ] . \end{aligned}$$

All the terms but the last can be estimated by \({\mathcal {K}}(t,\xi )\) or \({\mathcal {H}}(t,\xi )\). For the last, we remark that (cf. Lemma A.1)

$$\begin{aligned} (\tau _{1,\varepsilon }-\tau _{2,\varepsilon })^2+(\tau _{2,\varepsilon }-\tau _{3,\varepsilon })^2+(\tau _{3,\varepsilon }-\tau _{1,\varepsilon })^2 = \frac{9}{2} \, (\sigma _{2,\varepsilon }-\sigma _{1,\varepsilon })^2 \,, \end{aligned}$$

thus

$$\begin{aligned} \frac{\sigma _{2,\varepsilon }' - \sigma _{1,\varepsilon }'}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }}&= \frac{1}{2} \, \frac{[(\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon })^2]'}{(\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon })^2} = \frac{1}{2} \, \frac{\Bigl [ \sum _{j,h\in {\mathcal {S}}_2} (\tau _{j,\varepsilon }-\tau _{h,\varepsilon })^2 \Bigr ]'}{\sum _{j,h\in {\mathcal {S}}_2} (\tau _{j,\varepsilon }-\tau _{h,\varepsilon })^2} \\&= \sum _{j,h\in {\mathcal {S}}_2} \frac{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon }) \, (\tau _{j,\varepsilon }'-\tau _{h,\varepsilon }') }{\sum _{j,h\in {\mathcal {S}}_2} (\tau _{j,\varepsilon }-\tau _{h,\varepsilon })^2} \, , \end{aligned}$$

which gives

$$\begin{aligned} \biggl |\frac{\sigma _{2,\varepsilon }' - \sigma _{1,\varepsilon }'}{\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }} \biggr |&\leqslant \sum _{j,h\in {\mathcal {S}}_2} \frac{|\tau _{j,\varepsilon }'-\tau _{h,\varepsilon }'|}{|\tau _{j,\varepsilon }-\tau _{h,\varepsilon }|} \, , \end{aligned}$$

which can be estimated by \({\mathcal {K}}(t,\xi )\) or \({\mathcal {H}}(t,\xi )\).

Finally, we get

$$\begin{aligned} \partial _t \sqrt{\frac{\bigl |\check{N}(\sigma _{j,\varepsilon })\bigr |+1}{|\sigma _{2,\varepsilon } - \sigma _{1,\varepsilon }|}\,} \lesssim {\mathcal {K}}(t,\xi ) \, {\mathcal {H}}(t,\xi ) \,. \end{aligned}$$

\(\square\)

2.3 Estimation of the terms \(2 \, {\mathcal {H}}^2 \mathrm{Re}\left\langle \partial _t L_{j,\varepsilon }v \,, \, L_{j,\varepsilon }v \right\rangle\)

As

$$\begin{aligned} \partial _t (L_{j,\varepsilon } v)&= [L_{h,\varepsilon } + i \tau _{h,\varepsilon }] L_{j,\varepsilon } v \\&= (L_{h,\varepsilon } \circ L_{j,\varepsilon }) v + i \tau _{h,\varepsilon } L_{j,\varepsilon } v \, , \end{aligned}$$

we have

$$\begin{aligned} 2\mathrm{Re}\left\langle \partial _t L_{j,\varepsilon }v \, , \, L_{j,\varepsilon }v \right\rangle&= 2\mathrm{Re}\left\langle (L_{h,\varepsilon } \circ L_{j,\varepsilon }) v \, , \, L_{j,\varepsilon }v \right\rangle + 2\mathrm{Re}\left\langle i \tau _{h,\varepsilon } L_{j,\varepsilon } v \, , \, L_{j,\varepsilon }v \right\rangle \\&= 2\mathrm{Re}\left\langle (L_{h,\varepsilon } \circ L_{j,\varepsilon }) v \, , \, L_{j,\varepsilon }v \right\rangle \, , \end{aligned}$$

as \(\tau _{h,\varepsilon }\) is a real function, hence

$$\begin{aligned} 2 \, {\mathcal {H}}^2 \mathrm{Re}\left\langle \partial _t L_{j,\varepsilon } v \, , \, L_{j,\varepsilon } v \right\rangle&\leqslant {\mathcal {K}}\, \Bigl [ \bigl |(L_{h,\varepsilon } \circ L_{j,\varepsilon }) v\bigr |^2 + {\mathcal {H}}^2 \, |L_{j,\varepsilon }v|^2 \Bigr ] \, . \end{aligned}$$

Now, using (2.5) and (2.8), we have

$$\begin{aligned} (L_{j,\varepsilon } \circ L_{h,\varepsilon } - \widetilde{L}_{jh,\varepsilon }) v&= \frac{i}{2} \, (\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }')\, v \\&= -\frac{1}{2} \, \frac{\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'}{\tau _{j,\varepsilon } - \tau _{h,\varepsilon }} \, (L_{j,\varepsilon }v - L_{h,\varepsilon }v) \, , \end{aligned}$$

hence

$$\begin{aligned} \bigl |(L_{h,\varepsilon } \circ L_{j,\varepsilon }) v\bigr |^2&\leqslant 2\bigl |\widetilde{L}_{jh,\varepsilon }v\bigr |^2 +2\bigl |(L_{j,\varepsilon } \circ L_{h,\varepsilon } - \widetilde{L}_{jh,\varepsilon }) v\bigr |^2 \\&\leqslant 2\bigl |\widetilde{L}_{jh,\varepsilon }v\bigr |^2 + \Bigl |\frac{\tau _{j,\varepsilon }' - \tau _{h,\varepsilon }'}{\tau _{j,\varepsilon } - \tau _{h,\varepsilon }}\Bigr |^2 \Bigl (|L_{j,\varepsilon }v|^2 + |L_{h,\varepsilon }v|^2\Bigr ) \, . \end{aligned}$$

2.4 Estimation of the terms \(2\mathrm{Re}\left\langle \partial _t v \,, \, v \right\rangle\)

As

$$\begin{aligned} \partial _t v = L_{1,\varepsilon } v + i \tau _{1,\varepsilon } v \,, \end{aligned}$$

we have, since \(\tau _{1,\varepsilon }\) is real,

$$\begin{aligned} 2 \, \mathrm{Re}\left\langle \partial _t v \, , \, v \right\rangle&= 2 \, \mathrm{Re}\left\langle L_{1,\varepsilon } v \, , \, v \right\rangle + 2 \mathrm{Re}\left\langle i \tau _{1,\varepsilon } v \, , \, v \right\rangle \\&\leqslant |L_{1,\varepsilon } v|^2 + |v|^2 \, . \end{aligned}$$

Taking \(\eta\) sufficiently large, we arrive at the estimates

$$\begin{aligned} E'(t) \leqslant C \, E(t) \,, \end{aligned}$$

hence

$$\begin{aligned} E(t) \leqslant \exp \bigl ( C(t-t_0)\bigr ) \, E(t_0) \,. \end{aligned}$$

From this, taking into account the inequality

$$\begin{aligned} k(t,\xi ) \geqslant \bigl (2+|\xi |\bigr )^{-c_0} \,, \end{aligned}$$

which follows from Hypotheses (1.9)–(1.14), it follows that the Cauchy problem for the given equation is well posed in \({\mathcal {C}}^\infty\).

This concludes the proof of Theorem 2.

3 Equivalent forms of the Levi conditions

In this section, we give some alternative forms of the Levi conditions. In particular, we express these conditions in terms of the roots of L.

We recall that

$$\begin{aligned} \bigl |\tau _{j,\varepsilon }-\tau _{h,\varepsilon }\bigl |&\approx \bigl |\tau _j-\tau _h\bigl |+1, &\text {for any}\; (j,h)\in {\mathcal {S}}_2 \, , \end{aligned}$$
(3.1)

and

$$\begin{aligned} \bigl |\tau _j-\tau _{j,\varepsilon }\bigr |&\lesssim 1 \, ,&\text{for any}\; j=1,2,3 \,. \end{aligned}$$
(3.2)
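Both estimates follow from Lemma 2.1 with \(\varepsilon = 1/|\xi |\): in this case \(\bigl |\tau _{j,\varepsilon }-\tau _j\bigr |\lesssim 1\) and \(\bigl |\tau _{j,\varepsilon }-\tau _{h,\varepsilon }\bigr |\gtrsim 1\), hence

$$\begin{aligned} \bigl |\tau _{j,\varepsilon }-\tau _{h,\varepsilon }\bigr | \leqslant \bigl |\tau _j-\tau _h\bigr | + C \lesssim \bigl |\tau _j-\tau _h\bigr | + 1 \,, \qquad \bigl |\tau _j-\tau _h\bigr | + 1 \leqslant \bigl |\tau _{j,\varepsilon }-\tau _{h,\varepsilon }\bigr | + C + 1 \lesssim \bigl |\tau _{j,\varepsilon }-\tau _{h,\varepsilon }\bigr | \,. \end{aligned}$$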

Proposition 3.1

Hypothesis (1.13) is equivalent to the conditions

$$\begin{aligned} \int _0^T \sum _{(j,h,l)\in {\mathcal {S}}_3} \frac{\bigl |\check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\bigr |}{\Bigl (\bigl |\tau _j(t,\xi )-\tau _h(t,\xi )\bigr |+1\Bigr ) \Bigl (\bigl |\tau _j(t,\xi )-\tau _l(t,\xi )\bigr |+1\Bigr )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,, \end{aligned}$$
(3.3a)
$$\begin{aligned} \int _0^T \frac{\bigl |\partial _\tau \check{M}\bigl (t,\tau _1(t,\xi ),\xi \bigr )\bigr | + \bigl |\partial _\tau \check{M}\bigl (t,\tau _3(t,\xi ),\xi \bigr )\bigr |}{\bigl |\tau _1(t,\xi )-\tau _3(t,\xi )\bigr |+1} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,. \end{aligned}$$
(3.3b)

Proof

For the sake of simplicity, we omit the t and \(\xi\) variables.

We start by proving that (1.13) implies (3.3b).

By the Lagrange interpolation formula, we have

$$\begin{aligned} \check{M}(\tau ) = \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _{j,\varepsilon } \, L_{hl,\varepsilon } (\tau ) \,, \quad \text {where}\quad \ell _{j,\varepsilon } \overset{\mathrm {def}}{=}\frac{\check{M}(\tau _{j,\varepsilon })}{(\tau _{j,\varepsilon }-\tau _{h,\varepsilon })(\tau _{j,\varepsilon }-\tau _{l,\varepsilon })} \,. \end{aligned}$$

Differentiating with respect to \(\tau\):

$$\begin{aligned} \partial _\tau \check{M}(\tau )&= \sum _{(j,h,l)\in {\mathcal {S}}_3} \ell _{j,\varepsilon } \, \bigl [L_{h,\varepsilon }(\tau ) + L_{l,\varepsilon }(\tau )\bigr ] \\&= \sum _{(j,h,l)\in {\mathcal {S}}_3} \bigl [ \ell _{j,\varepsilon } + \ell _{h,\varepsilon } \bigr ] \, L_{l,\varepsilon }(\tau ) \, , \end{aligned}$$

hence \(\partial _\tau \check{M}(\tau )\) is a linear combination of \(L_{1,\varepsilon }(\tau )\), \(L_{2,\varepsilon }(\tau )\) and \(L_{3,\varepsilon }(\tau )\) with coefficients verifying the logarithmic condition.

To prove that \(\partial _\tau \check{M}(\tau )\) is a linear combination of only \(L_{1,\varepsilon }\) and \(L_{3,\varepsilon }\) we note that, since \(\tau _{1,\varepsilon } \leqslant \tau _{2,\varepsilon } \leqslant \tau _{3,\varepsilon }\), we can find \(\theta \in [0,1]\) such that

$$\begin{aligned} \tau _{2,\varepsilon } = \theta \, \tau _{1,\varepsilon } + (1 - \theta ) \, \tau _{3,\varepsilon } \,, \end{aligned}$$

hence

$$\begin{aligned} L_{2,\varepsilon } = \theta \, L_{1,\varepsilon } + (1 - \theta ) \, L_{3,\varepsilon } \,, \end{aligned}$$

and therefore

$$\begin{aligned} \partial _\tau \check{M}(\tau ) = b_{1,\varepsilon } \, L_{1,\varepsilon }(\tau ) + b_{3,\varepsilon } \, L_{3,\varepsilon }(\tau ) \,, \end{aligned}$$
(3.4)

where \(b_{1,\varepsilon }\) and \(b_{3,\varepsilon }\) are linear combinations of \(\ell _{1,\varepsilon }\), \(\ell _{2,\varepsilon }\) and \(\ell _{3,\varepsilon }\), hence they verify the logarithmic condition.

Substituting \(\tau = \tau _{1,\varepsilon }\) and \(\tau = \tau _{3,\varepsilon }\) in (3.4), we get

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr | + \bigl |\partial _\tau \check{M}(\tau _{3,\varepsilon })\bigr |}{\bigl |\tau _{1,\varepsilon }-\tau _{3,\varepsilon }\bigl |} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,. \end{aligned}$$
(3.5)

Since

$$\begin{aligned} \partial _\tau \check{M}(\tau _1)&= \partial _\tau \check{M}(\tau _{1,\varepsilon }) + \partial _\tau ^2 \check{M}(\tau _{1,\varepsilon }) \, (\tau _1 - \tau _{1,\varepsilon }) \end{aligned}$$

we get

$$\begin{aligned} \bigl |\partial _\tau \check{M}(\tau _1)\bigr |&\lesssim \bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr | + 1 \, , \end{aligned}$$

hence

$$\begin{aligned} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr |}{|\tau _3-\tau _1|+1}&\lesssim \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + 1 \, . \end{aligned}$$

An analogous estimate holds true with \(\tau _1\) and \(\tau _{1,\varepsilon }\) replaced by \(\tau _3\) and \(\tau _{3,\varepsilon }\). Combining such inequalities with (3.5), we get (3.3b).

To prove that (1.13) implies (3.3a), we remark that, since \(\check{M}\) is a polynomial of degree 2 in \(\tau\), it coincides with its Taylor expansion of order 2; hence, we have

$$\begin{aligned} \check{M}(\tau _1) = \check{M}(\tau _{1,\varepsilon }) + \partial _\tau \check{M}(\tau _{1,\varepsilon }) \, (\tau _1 - \tau _{1,\varepsilon }) + \frac{1}{2} \, \partial _\tau ^2 \check{M}(\tau _{1,\varepsilon }) \, (\tau _1 - \tau _{1,\varepsilon })^2 \,. \end{aligned}$$

Taking into account (3.1) and (3.2), we have

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _1)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _1|+1\bigr )} \lesssim \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }| \, |\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + 1 \,. \end{aligned}$$

An analogous estimate holds true for \(\check{M}(\tau _3)\).

To prove the estimate for \(\check{M}(\tau _2)\), we split the phase space \([0,T] \times {\mathbb {R}}^n\) into two sub-zones:

$$\begin{aligned} Z_1&= \Bigl \{ \ (t,\xi ) \in [0,T] \times {\mathbb {R}}^n \ \Bigm | \ |\tau _2-\tau _1| \leqslant |\tau _3-\tau _2| \ \Bigr \} \\ Z_2&= \Bigl \{ \ (t,\xi ) \in [0,T] \times {\mathbb {R}}^n \ \Bigm | \ |\tau _3-\tau _2| \leqslant |\tau _2-\tau _1| \ \Bigr \} \, . \end{aligned}$$

For \((t,\xi ) \in Z_1\), we write

$$\begin{aligned} \check{M}(\tau _2) = \check{M}(\tau _{1,\varepsilon }) + \partial _\tau \check{M}(\tau _{1,\varepsilon }) \, (\tau _2 - \tau _{1,\varepsilon }) + \frac{1}{2} \, \partial _\tau ^2 \check{M}(\tau _{1,\varepsilon }) \, (\tau _2 - \tau _{1,\varepsilon })^2 \,, \end{aligned}$$

and, since

$$\begin{aligned} |\tau _2 - \tau _{1,\varepsilon }| \lesssim |\tau _2 - \tau _1| + 1 \leqslant |\tau _3 - \tau _2| + 1 \,, \end{aligned}$$

we get

$$\begin{aligned}&\frac{\bigl |\check{M}(\tau _2)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \\&\qquad \lesssim \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} + \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |\, |\tau _2 - \tau _{1,\varepsilon }|}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \\&\quad \qquad + \frac{1}{2} \, \bigl |\partial _\tau ^2 \check{M}(\tau _{1,\varepsilon })\bigr | \, \frac{|\tau _2 - \tau _{1,\varepsilon }|^2}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \\&\qquad \lesssim \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} + \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _3-\tau _2|+1} + 1 \, . \end{aligned}$$

Now, using the fact that \((t,\xi ) \in Z_1\) if, and only if \(|\tau _3-\tau _1| \leqslant 2 |\tau _3-\tau _2|\), we can estimate the second term by \(\frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _3-\tau _1|+1}\). Finally, by (3.1), we get

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _2)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \lesssim \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }| \, |\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + 1 \,. \end{aligned}$$
(3.6)

For \((t,\xi ) \in Z_2\), we repeat the above calculation, with \(\tau _{1,\varepsilon }\) and \(\tau _{3,\varepsilon }\) exchanged, and we get

$$\begin{aligned}&\frac{\bigl |\check{M}(\tau _2)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \nonumber \\&\quad \lesssim \frac{\bigl |\check{M}(\tau _{3,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }| \, |\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _{3,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + 1 \,. \end{aligned}$$
(3.7)

Combining (3.6) and (3.7), we get

$$\begin{aligned}&\frac{\bigl |\check{M}(\tau _2)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigr ) \, \bigl (|\tau _3-\tau _2|+1\bigr )} \\&\quad \lesssim \frac{\bigl |\check{M}(\tau _{3,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }| \, |\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }| \, |\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _{1,\varepsilon })\bigr | + \bigl |\partial _\tau \check{M}(\tau _{3,\varepsilon })\bigr |}{|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + 1 \,. \end{aligned}$$

Thanks to (3.5) we see that (1.13) implies (3.3a).

Finally, we prove that (3.3a) and (3.3b) imply (1.13). Using Taylor expansion as before, we have

$$\begin{aligned} \check{M}(\tau _{j,\varepsilon }) = \check{M}(\tau _j) + \partial _\tau \check{M}(\tau _j) \, (\tau _{j,\varepsilon } - \tau _j) + \frac{1}{2} \, \partial _\tau ^2 \check{M}(\tau _j) \, (\tau _{j,\varepsilon } - \tau _j)^2 \,, \quad \text{for}\; j=1,2,3\,. \end{aligned}$$
(3.8)

Using (3.8) with \(j=1\), we have

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|}&\lesssim \frac{\bigl |\check{M}(\tau _1)\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr |\,|\tau _{1,\varepsilon } - \tau _1|}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} \\&\qquad + \frac{|\tau _{1,\varepsilon } - \tau _1|^2}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} \, , \end{aligned}$$

and, using (3.1) and (3.2), we get

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _{1,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{1,\varepsilon }|} \lesssim \frac{\bigl |\check{M}(\tau _1)\bigr |}{\bigl (|\tau _2-\tau _1|+1\bigl ) \,\bigl (|\tau _3-\tau _1|+1\bigl )} + \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr |}{|\tau _3-\tau _1|+1} + 1 \,. \end{aligned}$$

Analogously, exchanging \(\tau _1\) with \(\tau _3\) and \(\tau _{1,\varepsilon }\) with \(\tau _{3,\varepsilon }\):

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _{3,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{3,\varepsilon }|\,|\tau _{1,\varepsilon }-\tau _{3,\varepsilon }|} \lesssim \frac{\bigl |\check{M}(\tau _3)\bigr |}{\bigl (|\tau _3-\tau _2|+1\bigl ) \,\bigl (|\tau _3-\tau _1|+1\bigl )} + \frac{\bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _3-\tau _1|+1} + 1 \,. \end{aligned}$$

Using (3.8) with \(j=2\), (3.1) and (3.2), we have

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _{2,\varepsilon })\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }|}&\lesssim \frac{\bigl |\check{M}(\tau _2)\bigr |}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }|} + \frac{\bigl |\partial _\tau \check{M}(\tau _2)\bigr |\,|\tau _{2,\varepsilon } - \tau _2|}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }|} \\&\qquad + \frac{|\tau _{2,\varepsilon } - \tau _2|^2}{|\tau _{2,\varepsilon }-\tau _{1,\varepsilon }|\,|\tau _{3,\varepsilon }-\tau _{2,\varepsilon }|} \\&\lesssim \frac{\bigl |\check{M}(\tau _2)\bigr |}{\bigl (|\tau _3-\tau _2|+1\bigl ) \,\bigl (|\tau _2-\tau _1|+1\bigl )} + \frac{\bigl |\partial _\tau \check{M}(\tau _2)\bigr |}{\bigl (|\tau _3-\tau _2|+1\bigl ) \,\bigl (|\tau _2-\tau _1|+1\bigl )} + 1 \, . \end{aligned}$$

To estimate the second term, we note that, since \(\tau _1 \leqslant \tau _2 \leqslant \tau _3\), we can find \(\theta \in [0,1]\) such that

$$\begin{aligned} \tau _2 = \theta \, \tau _1 + (1 - \theta ) \, \tau _3 \,, \end{aligned}$$

hence, as \(\partial _\tau \check{M}\) is affine in \(\tau\),

$$\begin{aligned} \partial _\tau \check{M}(\tau _2) = \theta \, \partial _\tau \check{M}(\tau _1) + (1 - \theta ) \, \partial _\tau \check{M}(\tau _3) \,, \end{aligned}$$

and then, by the triangle inequality,

$$\begin{aligned} \bigl |\partial _\tau \check{M}(\tau _2)\bigr | \leqslant \bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr | \,. \end{aligned}$$

We note also that if \((t,\xi ) \in Z_1\), then \(|\tau _3-\tau _1| \leqslant 2 |\tau _3-\tau _2|\), whereas if \((t,\xi ) \in Z_2\), then \(|\tau _3-\tau _1| \leqslant 2 |\tau _2-\tau _1|\), thus

$$\begin{aligned} \bigl (|\tau _3-\tau _2|+1\bigl ) \,\bigl (|\tau _2-\tau _1|+1\bigl ) \gtrsim |\tau _3-\tau _1|+1 \,. \end{aligned}$$

Combining the above estimates, we get

$$\begin{aligned} \frac{\bigl |\partial _\tau \check{M}(\tau _2)\bigr |}{\bigl (|\tau _3-\tau _2|+1\bigr ) \,\bigl (|\tau _2-\tau _1|+1\bigr )} \lesssim \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr |+\bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _3-\tau _1|+1} \,. \end{aligned}$$

\(\square\)

Proposition 3.2

Hypothesis (1.14) is equivalent to the condition

$$\begin{aligned} \int _0^T \sum _{j=1}^2 \sqrt{\frac{\bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _2(t,\xi ) - \sigma _1(t,\xi )\bigr |+1}\,} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(3.9)

where \(\sigma _1(t,\xi )\) and \(\sigma _2(t,\xi )\) are the roots in \(\tau\) of \(\partial _\tau L(t,\tau ,\xi )\).

Proof

First of all, we remark that, since

$$\begin{aligned} \Bigl | \bigl |\check{N}(\mu _1)\bigr | - \bigl |\check{N}(\mu _2)\bigr |\Bigr | \lesssim |\mu _2-\mu _1| \,, \end{aligned}$$

condition (1.14) is equivalent to

$$\begin{aligned} \int _0^T \sqrt{ \frac{\bigl |\check{N}\bigl (t,\mu _{1}(t,\xi ),\xi \bigr )\bigr |}{\bigl |\mu _{2}(t,\xi ) - \mu _{1}(t,\xi )\bigr |} \, } \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,. \end{aligned}$$
(3.10)

Analogously, since

$$\begin{aligned} \Bigl | \bigl |\check{N}(\sigma _1)\bigr | - \bigl |\check{N}(\sigma _2)\bigr |\Bigr | \lesssim |\sigma _2-\sigma _1| \,, \end{aligned}$$

condition (3.9) is equivalent to

$$\begin{aligned} \int _0^T \sqrt{\frac{\bigl |\check{N}\bigl (t,\sigma _1(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _2(t,\xi ) - \sigma _1(t,\xi )\bigr |+1}\,} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,. \end{aligned}$$
(3.11)

Now, by (2.1), we see that

$$\begin{aligned} |\mu _2-\mu _1| \approx |\sigma _2-\sigma _1|+1 \,, \end{aligned}$$

whereas, by direct computation, we see that

$$\begin{aligned} |\mu _1-\sigma _1| \lesssim 1 \,, \end{aligned}$$

from which we deduce the equivalence of (3.10) and (3.11). \(\square\)

Remark 3.3

More generally, Hypothesis (1.14) is equivalent to each of the conditions

$$\begin{aligned} \int _0^T \sqrt{\frac{{\mathscr {N}}(t,\xi )}{{\mathscr {D}}(t,\xi )}\,} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$
(3.12)

where \({\mathscr {N}}\) can be any of the symbols

$$\begin{aligned}&\bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2 \, ,&\bigl |\check{N}\bigl (t,\mu _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2 \, , \\&\bigl |\check{N}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2,3 \, ,&\bigl |\check{N}\bigl (t,\lambda _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2,3 \, , \end{aligned}$$

and \({\mathscr {D}}\) can be any of the symbols

$$\begin{aligned}&\bigl |\sigma _2(t,\xi )-\sigma _1(t,\xi )\bigr |+1&\bigl |\mu _2(t,\xi )-\mu _1(t,\xi )\bigr | \\&\sqrt{\Delta _L^{(1)}(t,\xi ) \,} + 1&\sqrt{\Delta _{{\mathcal {L}}}^{(1)}(t,\xi ) \,} \\&\bigl |\tau _3(t,\xi )-\tau _1(t,\xi )\bigr |+1&\bigl |\lambda _3(t,\xi )-\lambda _1(t,\xi )\bigr | \, . \end{aligned}$$

Indeed, from Lemma A.1, we have

$$\begin{aligned} (\tau _{1}-\tau _{2})^2+(\tau _{2}-\tau _{3})^2+(\tau _{3}-\tau _{1})^2 = \frac{9}{2} \, (\sigma _{2}-\sigma _{1})^2 \,, \end{aligned}$$

hence

$$\begin{aligned} |\sigma _2-\sigma _1|+1 \approx \sqrt{\Delta _L^{(1)}\,} + 1 \,. \end{aligned}$$
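
For illustration, the identity from Lemma A.1 used above can be checked symbolically; in the sketch below, \(\sigma _1,\sigma _2\) are computed as the roots of \(\partial _\tau L\) for a monic cubic with prescribed roots \(\tau _1,\tau _2,\tau _3\).

```python
import sympy as sp

tau, t1, t2, t3 = sp.symbols('tau tau1 tau2 tau3', real=True)

# Monic cubic with roots tau1, tau2, tau3; sigma1, sigma2 are the roots of its tau-derivative
L = (tau - t1) * (tau - t2) * (tau - t3)
sigma1, sigma2 = sp.solve(sp.diff(L, tau), tau)

lhs = (t1 - t2)**2 + (t2 - t3)**2 + (t3 - t1)**2
rhs = sp.Rational(9, 2) * (sigma2 - sigma1)**2

print(sp.expand(lhs - rhs))   # 0
```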

Next, from the elementary inequality

$$\begin{aligned} \max (\alpha ,\beta ,\gamma ) \leqslant \sqrt{\alpha ^2+\beta ^2+\gamma ^2\,} \leqslant \sqrt{3\,} \, \max (\alpha ,\beta ,\gamma ) \,, \qquad \text {for any}\; \alpha ,\beta ,\gamma \geqslant 0 \,, \end{aligned}$$

we see that

$$\begin{aligned} |\tau _{3}-\tau _{1}| \leqslant \sqrt{(\tau _{3}-\tau _{1})^2+(\tau _{3}-\tau _{2})^2+(\tau _{2}-\tau _{1})^2\,} \leqslant \sqrt{3\,} \, |\tau _{3}-\tau _{1}|, \end{aligned}$$
(3.13)

and, consequently,

$$\begin{aligned} \sqrt{\Delta _L^{(1)}\,} + 1 \approx |\tau _{3}-\tau _{1}| + 1 \,. \end{aligned}$$

We prove that

$$\begin{aligned} |\mu _2-\mu _1| \approx \sqrt{\Delta _{{\mathcal {L}}}^{(1)}\,} \approx |\lambda _3-\lambda _1| \end{aligned}$$

in a similar way.

Since

$$\begin{aligned} \Bigl | \bigl |\check{N}(\sigma _1)\bigr | - \bigl |\check{N}(\tau _1)\bigr |\Bigr | \lesssim |\sigma _1-\tau _1| \,, \end{aligned}$$

and

$$\begin{aligned} |\sigma _1-\tau _1| \leqslant |\tau _3-\tau _1| \,, \end{aligned}$$

we see that

$$\begin{aligned} \frac{\bigl |\check{N}(\sigma _1)\bigr |}{|\tau _3-\tau _1|+1} \approx \frac{\bigl |\check{N}(\tau _1)\bigr |}{|\tau _3-\tau _1|+1} + 1 \,. \end{aligned}$$

We can prove the other equivalences in a similar way.

Remark 3.4

By similar arguments, we can prove that (3.3b) is equivalent to

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _\tau \check{M}\bigl (t,\sigma _1(t,\xi ),\xi \bigr )\bigr | + \bigl |\partial _\tau \check{M}\bigl (t,\sigma _2(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _1(t,\xi )-\sigma _2(t,\xi )\bigr |+1} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr ) \,, \end{aligned}$$
(3.14)

where \(\sigma _1\) and \(\sigma _2\) are the roots of \(\partial _\tau L(t,\tau ,\xi )\).

More generally, Hypothesis (3.3b) is equivalent to each of the conditions

$$\begin{aligned} \int _0^T \frac{{\mathscr {M}}(t,\xi )}{{\mathscr {D}}(t,\xi )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \end{aligned}$$

where \({\mathscr {M}}\) can be any of the symbols

$$\begin{aligned}&\bigl |\partial _\tau \check{M}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2 \, ,&\bigl |\partial _\tau \check{M}\bigl (t,\mu _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2 \, , \\&\bigl |\partial _\tau \check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2,3 \, ,&\bigl |\partial _\tau \check{M}\bigl (t,\lambda _j(t,\xi ),\xi \bigr )\bigr | \, , \quad \text {for}\; j=1,2,3\, , \end{aligned}$$

and \({\mathscr {D}}\) is as in the previous Remark.

4 The case of analytic coefficients

In this section, we show that if the coefficients are analytic, then Hypotheses (1.9)–(1.12) are satisfied.

Lemma 4.1

([7, 21, 29]) Let \(f_1,\dotsc ,f_d\) be analytic functions on an open set \({\mathscr {O}}\subset {\mathbb {C}}\), \(f_j\not \equiv 0\) for \(j = 1,\dotsc ,d\). For any \(\alpha = (\alpha _1,\dotsc ,\alpha _d)\in {\mathbb {R}}^d\) set

$$\begin{aligned} \varphi _\alpha (x):= \sum _{j=1}^d \alpha _j f_j(x) \,. \end{aligned}$$

Then, for any compact set \({\mathscr {K}} \subset {\mathscr {O}}\), there exists \(\nu \in {\mathbb {N}}\) such that either \(\varphi _\alpha (x) \equiv 0\) or \(\varphi _\alpha (x)\) has at most \(\nu\) zeros in \({\mathscr {K}}\), if counted with their multiplicity.

Proof

With no loss of generality, we can assume that \(f_1,\dotsc ,f_d\) are linearly independent.

If, for any \(k\in {\mathbb {N}}\), there exists \(\alpha ^{(k)}\) with \(\Vert \alpha ^{(k)}\Vert =1\) such that \(\varphi _{\alpha ^{(k)}}\) has at least k zeros in \({\mathscr {K}}\), then, by passing to a suitable subsequence, we can assume that \(\alpha ^{(k)}\) converges to some \(\alpha ^*\). If \(\varphi _{\alpha ^*}\) were not identically zero, then, by Hurwitz's theorem, the number of zeros of \(\varphi _{\alpha ^{(k)}}\) in \({\mathscr {K}}\) would be bounded for k large enough; hence, \(\varphi _{\alpha ^*}\) is identically zero. This contradicts the fact that the \(f_1,\dotsc ,f_d\) are linearly independent. \(\square\)

As any polynomial \(\Phi (t,\xi )\) in \(\xi\) with analytic coefficients can be regarded, for fixed \(\xi\), as a linear combination of its coefficient functions, we deduce from Lemma 4.1 the following corollary.

Corollary 4.2

Let \(\Phi (t,\xi )\) be a polynomial in \(\xi\) with analytic coefficients on an open set \({\mathscr {O}}\subset {\mathbb {C}}\).

Then, for any compact set \({\mathscr {K}} \subset {\mathscr {O}}\), there exists \(\nu \in {\mathbb {N}}\) such that either \(\Phi (t,\xi ) \equiv 0\) or \(\Phi (\cdot ,\xi )\) has at most \(\nu\) zeros in \({\mathscr {K}}\), if counted with their multiplicity.

Proposition 4.3

Let \(P(t,\tau ,\xi )\) be a third-order monic hyperbolic polynomial in \(\tau\), whose coefficients are polynomial in \(\xi\) and analytic in \(t\in {\mathscr {O}}\), \({\mathscr {O}}\) open set in \({\mathbb {C}}\). Assume that the roots in \(\tau\) of \(P(t,\tau ,\xi )\) are distinct for \(\xi\) large:

$$\begin{aligned} \tau _1(t,\xi )< \tau _2(t,\xi ) < \tau _3(t,\xi ) \,, \qquad \text {if}\; |\xi | \geqslant R\,. \end{aligned}$$

Let \(Q(t,\tau ,\xi )\) be another polynomial in \(\tau\) whose coefficients are polynomial in \(\xi\) and analytic in \(t\in {\mathscr {O}}\).

Then, for any compact set \({\mathscr {K}} \subset {\mathscr {O}}\), there exists \(\nu \in {\mathbb {N}}\) such that for any \(\xi \in {\mathbb {R}}^n\), with \(|\xi | \geqslant R\), the functions

$$\begin{aligned} t \mapsto Q\bigl (t,\tau _j(t,\xi ),\xi \bigr ) \,, \qquad j=1,2,3 \,, \end{aligned}$$

are either identically zero or have at most \(\nu\) zeros in \({\mathscr {K}}\).

Remark 4.4

If a root \(\tau _j(t,\xi )\) were a linear function of \(\xi\), then \(Q\bigl (t,\tau _j(t,\xi ),\xi \bigr )\) would be a polynomial in \(\xi\), and, by Corollary 4.2, we could easily get the result.

Unfortunately, in general, the roots \(\tau _j(t,\xi )\) are not linear functions of \(\xi\); hence, we cannot apply Corollary 4.2 directly.

Proof

First of all, we remark that, by the implicit function theorem, the roots \(\tau _j=\tau _j(t,\xi )\) are analytic functions for \(\xi\) large, thus so are the functions \(Q\bigl (t,\tau _j(t,\xi ),\xi \bigr )\), \(j=1,2,3\).

Let

$$\begin{aligned} \Phi _1(t,\xi )&= Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )+Q\bigl (t,\tau _2(t,\xi ),\xi \bigr )+Q\bigl (t,\tau _3(t,\xi ),\xi \bigr ) \\ \Phi _2(t,\xi )&= Q\bigl (t,\tau _1(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _2(t,\xi ),\xi \bigr ) \\&\qquad + Q\bigl (t,\tau _2(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _3(t,\xi ),\xi \bigr ) + Q\bigl (t,\tau _3(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _1(t,\xi ),\xi \bigr ) \\ \Phi _3(t,\xi )&= Q\bigl (t,\tau _1(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _2(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _3(t,\xi ),\xi \bigr ) \, . \end{aligned}$$

These functions are symmetric in the \(\tau _j\)’s and so they are polynomials in \(\xi\), with analytic coefficients in t.
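
As an illustration (with a hypothetical choice of L and Q, used only for this sketch), the individual roots involve \(\sqrt{1+t^2}\) and are not polynomial in \(\xi\), yet the symmetrized functions \(\Phi _1,\Phi _2,\Phi _3\) are.

```python
import sympy as sp

t, xi, tau = sp.symbols('t xi tau', real=True)

# Hypothetical example: roots 0, +-xi*sqrt(1 + t^2) are not polynomial in xi
L = tau**3 - (1 + t**2) * xi**2 * tau
Q = tau**2 + t * xi * tau + xi**2

roots = sp.solve(L, tau)
Qs = [Q.subs(tau, r) for r in roots]

Phi1 = sp.expand(sum(Qs))
Phi2 = sp.expand(Qs[0]*Qs[1] + Qs[1]*Qs[2] + Qs[2]*Qs[0])
Phi3 = sp.expand(Qs[0] * Qs[1] * Qs[2])

# The symmetrized quantities are polynomials in xi with coefficients analytic in t
for name, Phi in (("Phi1", Phi1), ("Phi2", Phi2), ("Phi3", Phi3)):
    print(name, "=", Phi, "| polynomial in xi:", Phi.is_polynomial(xi))
```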

For a given compact \({\mathscr {K}} \subset {\mathscr {O}}\), consider another compact \({\mathscr {L}} \subset {\mathscr {O}}\) such that \({\mathscr {K}} \subset {\mathring{\mathscr {L}}}\). By Corollary 4.2, there exists \(\nu \in {\mathbb {N}}\) such that, for any \(j=1,2,3\) and \(\xi \in {\mathbb {R}}^n\) either \(\Phi _j(\cdot ,\xi )\) is identically zero or \(\Phi _j(\cdot ,\xi )\) has at most \(\nu\) zeros in \({\mathscr {L}}\).

Assume at first that \(\Phi _3 \not \equiv 0\).

Let \(\xi _0\in {\mathbb {R}}^n\setminus \{0\}\) be fixed. If \(\Phi _3(\cdot ,\xi _0) \not \equiv 0\), then \(\Phi _3(\cdot ,\xi _0)\) has at most \(\nu\) zeros in \({\mathscr {L}}\); hence, also the \(Q\bigl (t,\tau _j(t,\xi _0),\xi _0\bigr )\)s have at most \(\nu\) zeros in \({\mathscr {L}}\). If \(\Phi _3(\cdot ,\xi _0) \equiv 0\) but one of the \(Q\bigl (t,\tau _j(t,\xi _0),\xi _0\bigr )\) is not identically zero, then it has a finite number \(\eta\) of zeros in \({\mathscr {L}}\). Since \(\Phi _3 \not \equiv 0\), we can find a sequence \(\{\xi ^{(k)}\}_{k\in {\mathbb {N}}}\) convergent to \(\xi _0\) and such that \(\Phi _3(\cdot ,\xi ^{(k)}) \not \equiv 0\). If k is large enough, by Rouché's theorem, \(Q\bigl (t,\tau _j(t,\xi ^{(k)}),\xi ^{(k)}\bigr )\) has at least \(\eta\) zeros in \({\mathscr {L}}\); since it also has at most \(\nu\) zeros in \({\mathscr {L}}\), we conclude that \(Q\bigl (t,\tau _j(t,\xi _0),\xi _0\bigr )\) has \(\eta \le \nu\) zeros in \({\mathscr {K}}\).

If \(\Phi _3(t,\xi ) \equiv 0\), then at least one of the \(Q\bigl (t,\tau _j(t,\xi ),\xi \bigr )\) (say \(Q\bigl (t,\tau _3(t,\xi ),\xi \bigr )\)) is identically zero, thus \(\Phi _2(\cdot ,\xi )\) reduces to \(Q\bigl (t,\tau _1(t,\xi ),\xi \bigr ) \, Q\bigl (t,\tau _2(t,\xi ),\xi \bigr )\).

We can repeat the same argument as before: If \(\Phi _2(\cdot ,\xi )\) is not identically zero, then \(Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )\) and \(Q\bigl (t,\tau _2(t,\xi ),\xi \bigr )\) have at most \(\nu\) zeros in \({\mathscr {K}}\), whereas if \(\Phi _2(\cdot ,\xi )\) is identically zero, at least one between \(Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )\) and \(Q\bigl (t,\tau _2(t,\xi ),\xi \bigr )\) vanishes identically, say \(Q\bigl (t,\tau _2(t,\xi ),\xi \bigr )\). In this case, \(\Phi _1(t,\xi ) = Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )\), and we get immediately that \(Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )\) is either identically zero or it has at most \(\nu\) zeros in \({\mathscr {K}}\). \(\square\)

Proposition 4.5

Let \(P(t,\tau ,\xi )\) be as in Proposition 4.3 and let \(Q(t,\tau ,\sigma ,\xi )\) be a symmetric polynomial in \((\tau ,\sigma )\) whose coefficients are polynomial in \(\xi\) and analytic in \(t\in {\mathscr {O}}\).

Then, for any compact set \({\mathscr {K}} \subset {\mathscr {O}}\), there exists \(\nu \in {\mathbb {N}}\) such that for any \(\xi\), the functions

$$\begin{aligned} t \mapsto Q\bigl (t,\tau _j(t,\xi ),\tau _k(t,\xi ),\xi \bigr ) \,, \qquad (j,k)\in {\mathcal {S}}_2 \,, \end{aligned}$$

are either identically zero or have at most \(\nu\) zeros in \({\mathscr {K}}\).

The proof is similar to that of Proposition 4.3 and is obtained by considering the functions

$$\begin{aligned} \Phi _1(t,\xi )&= Q_{1,2}(t,\xi )+Q_{2,3}(t,\xi )+Q_{3,1}(t,\xi ) \\ \Phi _2(t,\xi )&= Q_{1,2}(t,\xi ) \, Q_{2,3}(t,\xi ) + Q_{2,3}(t,\xi ) \, Q_{3,1}(t,\xi ) + Q_{3,1}(t,\xi ) \, Q_{1,2}(t,\xi ) \\ \Phi _3(t,\xi )&= Q_{1,2}(t,\xi ) \, Q_{2,3}(t,\xi ) \, Q_{3,1}(t,\xi ) \, , \end{aligned}$$

where for the sake of brevity, we have set

$$\begin{aligned} Q_{j,k}(t,\xi ) = Q\bigl (t,\tau _j(t,\xi ),\tau _k(t,\xi ),\xi \bigr ) \,. \end{aligned}$$

Proposition 4.6

Let

$$\begin{aligned} \Xi = \Bigl \{ \ \xi \in {\mathbb {R}}^n \ \Bigm | \ |\xi | \geqslant R \ \Bigr \} \,, \end{aligned}$$

for some \(R>0\) and let \(f:[0,T]\times \Xi \rightarrow {\mathbb {R}}\) be such that

  1. (1)

    f is Lipschitz continuous in t, uniformly with respect to \(\xi\), that is, there exists \(C_0>0\) such that

    $$\begin{aligned} \bigl |f(t_1,\xi )-f(t_2,\xi )\bigr | \leqslant C_0 |t_1-t_2| \,, \qquad \text {for any}\; t_1,t_2\in [0,T]\; \text {and}\; \xi \in \Xi \,; \end{aligned}$$
  2. (2)

    there exist positive constants \(A,C_1,C_2\) such that

    $$\begin{aligned}C_1 \, \bigl (1+|\xi |\bigr )^{-A} \leqslant f(t,\xi ) \leqslant C_2 \, \bigl (1+|\xi |\bigr )^A \end{aligned}$$

    for any \(t\in [0,T]\) and \(\xi \in \Xi\);

  3. (3)

    there exists \(\nu \in {\mathbb {N}}\) such that for any \(\xi \in \Xi\), there exists a partition \(0=t_1<t_2<\cdots<t_{\mu -1}<t_\mu =T\) of [0, T], with \(\mu \le \nu\) such that

    • \(f(\cdot ,\xi ) \in {\mathcal {C}}^1\bigl (\left]t_j,t_{j+1}\right[\bigr )\), for \(j=1,2,\dotsc ,\mu -1\);

    • \(\partial _t f(t,\xi )\ne 0\) for any \(t \in \left]t_j,t_{j+1}\right[\), for \(j=1,2,\dotsc ,\mu -1\).

Then, f satisfies the logarithmic condition

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _t f(t,\xi )\bigr |}{f(t,\xi )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,, \quad \text {for any}\; \xi \in \Xi \,. \end{aligned}$$

Proof

Let us fix \(\xi\). As \(\partial _t f(t,\xi )\) does not change sign in \(\left]t_j,t_{j+1}\right[\), we have

$$\begin{aligned} \int _{t_j}^{t_{j+1}} \frac{\bigl |\partial _t f(t,\xi )\bigr |}{f(t,\xi )} \, \mathrm{d}t = \biggl |\int _{t_j}^{t_{j+1}} \frac{\partial _t f(t,\xi )}{f(t,\xi )} \, \mathrm{d}t\biggr | = \biggl |\log f(t_{j+1},\xi ) - \log f(t_j,\xi )\biggr | \leqslant C^* \, \log \bigl (1+|\xi |\bigr ) \,, \end{aligned}$$

where \(C^*\) depends on \(A,C_1,C_2\). Hence,

$$\begin{aligned} \int _0^T \frac{\bigl |\partial _t f(t,\xi )\bigr |}{f(t,\xi )} \, \mathrm{d}t \leqslant \nu \, C^* \, \log \bigl (1+|\xi |\bigr ) \,, \end{aligned}$$

where \(\nu\) and \(C^*\) are independent of \(\xi\). \(\square\)
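
As a numerical sanity check, take the hypothetical choice \(f(t,\xi )=t^2+|\xi |^{-2}\) on \([0,1]\), which satisfies the three hypotheses above; the integral indeed grows like \(2\log |\xi |\).

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical example f(t, xi) = t^2 + |xi|^(-2) on [0, T]: Lipschitz in t,
# bounded between powers of (1 + |xi|), C^1 and strictly increasing on ]0, T[.
T = 1.0
for absxi in (1e2, 1e4, 1e6):
    f = lambda t, e=absxi**-2: t**2 + e
    df = lambda t: 2.0 * t
    integral, _ = quad(lambda t: abs(df(t)) / f(t), 0.0, T)
    print(f"|xi| = {absxi:.0e}:  integral = {integral:6.2f},  2*log(1+|xi|) = {2*np.log(1+absxi):6.2f}")
```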

4.1 Study of the condition (1.9)

Let

$$\begin{aligned} f(t,\xi ) = \bigl |\lambda _j (t,\xi ) - \lambda _h (t,\xi )\bigr | \,. \end{aligned}$$

The function \(f(t,\xi )\) never vanishes and its critical points verify

$$\begin{aligned} \partial _t \bigl (\lambda _j (t,\xi ) - \lambda _h (t,\xi )\bigr ) = 0 \,. \end{aligned}$$

Now, by the implicit function theorem, we have

$$\begin{aligned} \partial _t \lambda _{j}(t,\xi ) = - \frac{(\partial _t L_\varepsilon ) (\lambda _{j})}{(\partial _\tau L_\varepsilon ) (\lambda _{j})} \,, \end{aligned}$$
(4.1)

where for the sake of brevity, we write

$$\begin{aligned} (\partial _t L_\varepsilon ) (\lambda _{j})&= \partial _t L_\varepsilon (t,\tau ,\xi )\bigm |_{\tau = \lambda _{j}(t,\xi )} \, , \\ (\partial _\tau L_\varepsilon ) (\lambda _{j})&= \partial _\tau L_\varepsilon (t,\tau ,\xi )\bigm |_{\tau = \lambda _{j}(t,\xi )} \, . \end{aligned}$$

From (4.1), we get

$$\begin{aligned} \Bigl [\partial _t\bigl (\lambda _{j}(t,\xi ) - \lambda _{h}(t,\xi )\bigr )\Bigr ]^2 = \frac{Q(t,\lambda _j,\lambda _h,\xi )}{\bigl [(\partial _\tau L_\varepsilon ) (\lambda _{j})\bigr ]^2 \, \bigl [(\partial _\tau L_\varepsilon ) (\lambda _{h})\bigr ]^2} \,, \end{aligned}$$

where

$$\begin{aligned} Q(t,\tau ,\sigma ,\xi ) = \Bigl [(\partial _t L_\varepsilon ) (\tau ) \, (\partial _\tau L_\varepsilon ) (\sigma ) - (\partial _t L_\varepsilon ) (\sigma ) \, (\partial _\tau L_\varepsilon ) (\tau )\Bigr ]^2 \,. \end{aligned}$$

The polynomial Q verifies the hypothesis of Proposition 4.5, hence the number of zeros of the function \(t\mapsto \partial _t f(t,\xi )\) is bounded w.r.t. \(\xi \in \Xi\), and, applying Proposition 4.6 to f, we see that (1.9) holds true.
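
The algebra behind the last two displays is elementary; in the sketch below, the symbols At, Atau (resp. Bt, Btau) stand for \((\partial _t L_\varepsilon )(\lambda _j)\), \((\partial _\tau L_\varepsilon )(\lambda _j)\) (resp. the same quantities at \(\lambda _h\)); the names are introduced only for this check.

```python
import sympy as sp

# At, Atau stand for (d_t L_eps)(lambda_j), (d_tau L_eps)(lambda_j);
# Bt, Btau for the same quantities at lambda_h (names chosen only for this sketch).
At, Atau, Bt, Btau = sp.symbols('At Atau Bt Btau')

dlam_j = -At / Atau            # formula (4.1) at lambda_j
dlam_h = -Bt / Btau            # formula (4.1) at lambda_h

Q = (At * Btau - Bt * Atau)**2
print(sp.simplify((dlam_j - dlam_h)**2 - Q / (Atau**2 * Btau**2)))   # 0

# Q is symmetric under exchanging the two roots, as Proposition 4.5 requires
print(sp.simplify(Q - (Bt * Atau - At * Btau)**2))                   # 0
```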

4.2 Study of the condition (1.10)

Let

$$\begin{aligned} f(t,\xi ) = \bigl |\partial _t\lambda _{j}(t,\xi ) - \partial _t\lambda _{h}(t,\xi )\bigr | + 1 \,. \end{aligned}$$

If \(\partial _t f(t,\xi )\) changes sign at \(t_1\), then either \(\partial _t\lambda _{j}(t_1,\xi ) - \partial _t\lambda _{h}(t_1,\xi )=0\) or \(\partial _t^2\lambda _{j}(t_1,\xi ) - \partial _t^2\lambda _{h}(t_1,\xi )=0\).

The first case can be treated as before. For the second, from (4.1), we get

$$\begin{aligned} \partial _{tt}^2 \lambda _{j}(t,\xi ) = \frac{\psi (t,\lambda _j,\xi )}{\bigl [(\partial _\tau L_\varepsilon ) (\lambda _{j})\bigr ]^3} \,, \end{aligned}$$

where

$$\begin{aligned} \psi (t,\tau ,\xi )&= 2 \partial _{t\tau }^2 L_\varepsilon (\tau ) \, \partial _t L_\varepsilon (\tau ) \, \partial _\tau L_\varepsilon (\tau ) \\&\qquad - \partial _{\tau \tau }^2 L_\varepsilon (\tau ) \, \bigl [\partial _t L_\varepsilon (\tau )\bigr ]^2 - \partial _{tt}^2 L_\varepsilon (\tau ) \, \bigl [\partial _\tau L_\varepsilon (\tau )\bigr ]^2 \, , \end{aligned}$$

hence

$$\begin{aligned} \Bigl [\partial _{tt}^2\bigl (\lambda _{j}(t,\xi ) - \lambda _{h}(t,\xi )\bigr )\Bigr ]^2 = \frac{Q(t,\lambda _{j},\lambda _{h},\xi )}{\bigl [(\partial _\tau L_\varepsilon ) (\lambda _{j})\bigr ]^6 \, \bigl [(\partial _\tau L_\varepsilon ) (\lambda _{h})\bigr ]^6} \,, \end{aligned}$$

where

$$\begin{aligned} Q(t,\tau ,\sigma ,\xi ) = \Bigl [\psi (t,\tau ,\xi ) \, \bigl [\partial _\tau L_\varepsilon (\sigma )\bigr ]^3 - \psi (t,\sigma ,\xi ) \, \bigl [\partial _\tau L_\varepsilon (\tau )\bigr ]^3\Bigr ]^2 \,. \end{aligned}$$

The polynomial Q verifies the hypothesis of Proposition 4.5, hence the number of zeros of the function \(t\mapsto \partial _t f(t,\xi )\) is bounded w.r.t. \(\xi \in \Xi\), and, applying Proposition 4.6 to f, we see that (1.10) holds true.
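
The displayed formula for \(\partial _{tt}^2\lambda _{j}\) is the chain rule applied to (4.1) and does not use that \(L_\varepsilon\) is cubic; as a sanity check, here is a sympy sketch with the simple quadratic stand-in \(L=\tau ^2-(1+t^2)\), chosen only because its root \(\lambda =\sqrt{1+t^2}\) is explicit.

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)

# Simple stand-in with an explicit smooth root branch
L = tau**2 - (1 + t**2)
lam = sp.sqrt(1 + t**2)

Lt, Ltau = L.diff(t), L.diff(tau)
psi = (2 * L.diff(t, tau) * Lt * Ltau
       - L.diff(tau, 2) * Lt**2
       - L.diff(t, 2) * Ltau**2)

# (4.1): first derivative of the root
print(sp.simplify(lam.diff(t) - (-Lt / Ltau).subs(tau, lam)))        # 0
# Displayed formula: second derivative of the root
print(sp.simplify(lam.diff(t, 2) - (psi / Ltau**3).subs(tau, lam)))  # 0
```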

4.3 Study of the condition (1.11)

If M has real coefficients, we consider

$$\begin{aligned} f(t,\xi ) = \bigl |\check{M}\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr )\bigl |+1 \,. \end{aligned}$$

For fixed \(\xi\), the points where \(\partial _t f(\cdot ,\xi )\) changes sign are zeros in t of either \(\check{M}\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr )\) or \(\partial _t \check{M}\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr )\).

Now

$$\begin{aligned} \partial _t \check{M}\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr ) = \partial _t \check{M}(t,\tau ,\xi )\bigm |_{\tau = \lambda _{j}(t,\xi )} + \partial _\tau \check{M}(t,\tau ,\xi )\bigm |_{\tau = \lambda _{j}(t,\xi )} \partial _t \lambda _{j}(t,\xi ) \,, \end{aligned}$$

and, by (4.1), we see that \(\partial _t \check{M}\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr ) = 0\) if, and only if, \(Q\bigl (t,\lambda _{j}(t,\xi ),\xi \bigr )=0\), where

$$\begin{aligned} Q(t,\tau ,\xi ) = \partial _t \check{M}(t,\tau ,\xi ) \partial _\tau L_\varepsilon (t,\tau ,\xi ) - \partial _\tau \check{M}(t,\tau ,\xi ) \partial _t L_\varepsilon (t,\tau ,\xi ) \,. \end{aligned}$$

The polynomials \(\check{M}\) and Q verify the hypothesis of Proposition 4.3; hence, the number of zeros of the function \(t\mapsto \partial _t f(t,\xi )\) is bounded w.r.t. \(\xi \in \Xi\), and, applying Proposition 4.6 to f, we see that (1.11) holds true.

If the coefficients of \(\check{M}\) are complex, we consider the zeros of

$$\begin{aligned} \partial _t |\check{M}|^2 = 2\mathrm{Re}\bigl (\partial _t \check{M} \, \overline{\check{M}}\bigr ) = 2 \partial _t \mathrm{Re}(\check{M}) \, \mathrm{Re}(\check{M}) + 2 \partial _t \Im (\check{M}) \, \Im (\check{M}) \end{aligned}$$

and by the same argument, we get (1.11) again.

The proof that condition (1.12) is satisfied is similar to that of condition (1.11), so we omit it.

5 Pointwise Levi conditions

Throughout this section, we assume that the coefficients of the operator are analytic, and we express the Levi conditions (1.13) and (1.14) as pointwise conditions.

We have to distinguish three cases:

Case I:

\(\Delta _L \not \equiv 0\);

Case II:

\(\Delta _L \equiv 0\) and \(\Delta _L^{(1)} \not \equiv 0\);

Case III:

\(\Delta _L \equiv \Delta _L^{(1)} \equiv 0\).

Case I: \(\Delta _L \not \equiv 0\)

We consider at first the terms of order 2.

Proposition 5.1

Assume that

$$\begin{aligned} \bigl |\tau _k(t,\xi ) - \tau _l(t,\xi )\bigr | \, \Bigl |\check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\Bigr | \lesssim \sqrt{\Delta _L(t,\xi ) \,} + \Bigl |\partial _t \sqrt{\Delta _L(t,\xi ) \,}\Bigr | \,, \end{aligned}$$
(5.1)

for all \((j,k,l)\in {\mathcal {S}}_3\), then condition (1.13) is verified.

For the proof, we need the following lemma [20, Proposition 4.1].

Lemma 5.2

Let \(\Delta (t,\xi )\) be a homogeneous polynomial in \(\xi\) with coefficients analytic in t and assume that \(\Delta (t,\xi )\not \equiv 0\). Then:

  1. 1.

    there exists \(X\subset {\mathbb {S}}^n:= \bigl \{ \, \xi \in {\mathbb {R}}^n \, \bigm | \, |\xi |=1 \, \bigr \}\) such that \(\Delta (t,\xi )\not \equiv 0\) in \(\left]-\delta ,T+\delta \right[\) for any \(\xi \in X\), and the set \({\mathbb {S}}^n\setminus X\) is negligible with respect to the Hausdorff \((n-1)\)–measure;

  2. 2.

    for any \([a,b]\subset \left]-\delta ,T+\delta \right[\), we can find constants \(c_1,c_2>0\) and \(p,q\in {\mathbb {N}}\) such that for any \(\xi \in X\) and any \(\varepsilon \in (0,1/e]\) there exists \(A_{\xi ,\varepsilon }\subset [a,b]\) such that

    1. 1.

      \(A_{\xi ,\varepsilon }\) is a union of at most p disjoint intervals,

    2. 2.

      \(m(A_{\xi ,\varepsilon }) \leqslant \varepsilon\),

    3. 3.

      \(\min _{t \in [a,b]{\setminus } A_{\xi ,\varepsilon }} \Delta (t,\xi ) \geqslant c_1 \, \varepsilon ^{2q} \bigl \Vert \Delta (\cdot ,\xi )\bigr \Vert _{L^\infty ([a,b])}\)

    4. 4.

      \(\int \limits _{[a,b]{\setminus } A_{\xi ,\varepsilon }} \frac{\bigl |\Delta '(t,\xi )\bigr |}{\Delta (t,\xi )}\,\mathrm{d}t \leqslant c_2 \, \log \frac{1}{\varepsilon }\).
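
To fix ideas, here is a toy instance of Lemma 5.2 (for illustration only): with \(\Delta (t,\xi )=t^{2q}\,\xi ^2\) on \([a,b]=[0,1]\) and \(A_{\xi ,\varepsilon }=[0,\varepsilon ]\), items 3 and 4 can be read off directly.

```python
import sympy as sp

t, eps, xi = sp.symbols('t epsilon xi', positive=True)
q = 3                                   # a hypothetical vanishing order

# Toy case Delta(t, xi) = t^(2q) * xi^2 on [a, b] = [0, 1], with A_{xi, eps} = [0, eps]
Delta = t**(2 * q) * xi**2

# item 3: the minimum outside A_{xi, eps} equals eps^(2q) times the sup norm xi^2
print(sp.simplify(Delta.subs(t, eps) / (eps**(2 * q) * xi**2)))      # 1

# item 4: the logarithmic integral outside A_{xi, eps}
print(sp.integrate(sp.diff(Delta, t) / Delta, (t, eps, 1)))          # -6*log(epsilon) = 2q*log(1/eps)
```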

Proof

(Proof of Proposition 5.1) We will prove that (5.1) implies (3.3a) and (3.3b); hence, thanks to Proposition 3.1, we get (1.13).

Let \(\varepsilon =|\xi |^{-2}\), with \(|\xi |\) large enough, and let \(A_{\xi ,\varepsilon }\) be the set given by Lemma 5.2 with \(\Delta (t,\xi ) = \Delta _L(t,\xi )\); we have

$$\begin{aligned} \int _{A_{\xi ,\varepsilon }} \frac{\bigl |\check{M}(\tau _j)\bigr |}{\bigl (|\tau _j-\tau _k|+1\bigr ) \bigl (|\tau _j-\tau _l|+1\bigr )} \, \mathrm{d}t \lesssim \int _{A_{\xi ,\varepsilon }} |\xi |^2 \, \mathrm{d}t \lesssim \varepsilon \,|\xi |^2 = 1 \,. \end{aligned}$$
(5.2)

On the other side, as

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _j)\bigr |}{\bigl (|\tau _j-\tau _k|+1\bigr ) \bigl (|\tau _j-\tau _l|+1\bigr )} \leqslant \frac{|\tau _k-\tau _l| \, \bigl |\check{M}(\tau _j)\bigr |}{\sqrt{\Delta _L(t,\xi ) \,}} \,, \end{aligned}$$

assuming (5.1), thanks to Lemma 5.2, we have

$$\begin{aligned} \begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }}&\frac{\bigl |\check{M}(\tau _j)\bigr |}{\bigl (|\tau _j-\tau _k|+1\bigr ) \bigl (|\tau _j-\tau _l|+1\bigr )} \, \mathrm{d}t \\&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} 1 + \frac{\bigl |\partial _t \Delta _L(t,\xi )\bigr |}{\Delta _L(t,\xi )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,. \end{aligned} \end{aligned}$$
(5.3)

Combining (5.2) and (5.3), we get (3.3a).

Now we prove that (3.3b) holds true. As the roots of L are distinct for a.e. \((t,\xi )\), by the Lagrange interpolation formula, we have

$$\begin{aligned} \check{M}(t,\tau ,\xi ) = \ell _1(t,\xi ) \, L_{23}(t,\tau ,\xi ) + \ell _2(t,\xi ) \, L_{13}(t,\tau ,\xi ) + \ell _3(t,\xi ) \, L_{12}(t,\tau ,\xi ) \,, \end{aligned}$$
(5.4)

where the operators \(L_{jk}\) are the operators \(L_{jk,\varepsilon }\) defined in Sect. 2 with \(\varepsilon =0\), and the \(\ell _j\) are given by

$$\begin{aligned} \ell _j(t,\xi ):= \frac{\check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )}{\bigl (\tau _j(t,\xi )-\tau _h(t,\xi )\bigr ) \bigl (\tau _j(t,\xi )-\tau _l(t,\xi )\bigr )} \end{aligned}$$
(5.5)

hence, by Hypothesis (5.1), they verify

$$\begin{aligned} \bigl |\ell _j(t,\xi )\bigr | \lesssim 1 + \frac{\bigl |\partial _t \Delta _L(t,\xi )\bigr |}{\Delta _L(t,\xi )} \,. \end{aligned}$$
(5.6)
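
At the level of symbols (reading \(L_{jk}\) as \((\tau -\tau _j)(\tau -\tau _k)\)), the representation (5.4)–(5.5) is Lagrange interpolation of a polynomial of degree at most two at the three distinct roots; a quick sympy check with a generic quadratic:

```python
import sympy as sp

tau, t1, t2, t3, a, b, c = sp.symbols('tau tau1 tau2 tau3 a b c')

M = a * tau**2 + b * tau + c          # generic stand-in of degree <= 2 in tau

# ell_j = M(tau_j) / ((tau_j - tau_h)(tau_j - tau_l)), as in (5.5)
def ell(j, h, l):
    return M.subs(tau, j) / ((j - h) * (j - l))

rhs = (ell(t1, t2, t3) * (tau - t2) * (tau - t3)     # L_23
       + ell(t2, t1, t3) * (tau - t1) * (tau - t3)   # L_13
       + ell(t3, t1, t2) * (tau - t1) * (tau - t2))  # L_12

print(sp.simplify(M - rhs))           # 0: (5.4) holds whenever the roots are distinct
```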

On the other side, differentiating (5.4) w.r.t \(\tau\), we get

$$\begin{aligned} \partial _\tau \check{M}(\tau )&= \ell _1 \, (L_2+L_3) + \ell _2 \, (L_3+L_1) + \ell _3 \, (L_1+L_2) \\&= (\ell _2+\ell _3) \, L_1 + (\ell _3+\ell _1) \, L_2 + (\ell _1+\ell _2) \, L_3 \, , \end{aligned}$$

where the operators \(L_j\) are the operators \(L_{j,\varepsilon }\) defined in Sect. 2 with \(\varepsilon =0\).

Now, since \(\tau _1 \leqslant \tau _2 \leqslant \tau _3\), we can find \(\theta \in [0,1]\) such that

$$\begin{aligned} \tau _2 = \theta \, \tau _1 + (1 - \theta ) \, \tau _3 \,, \end{aligned}$$

hence

$$\begin{aligned} L_2 = \theta \, L_1 + (1 - \theta ) \, L_3 \,, \end{aligned}$$

consequently, we can write

$$\begin{aligned} \partial _\tau \check{M}(\tau ) = \widetilde{\ell }_3 \, L_1(\tau ) + \widetilde{\ell }_1 \, L_3(\tau ) \,, \end{aligned}$$
(5.7)

where \(\widetilde{\ell }_1\) and \(\widetilde{\ell }_3\) are linear combinations, with bounded coefficients, of \(\ell _1\), \(\ell _2\) and \(\ell _3\); hence, the \(\widetilde{\ell }_j\)s verify (5.6) too.

Let \(\varepsilon =|\xi |^{-1}\), with \(|\xi |\) large enough, and let \(A_{\xi ,\varepsilon }\) be as above, we have

$$\begin{aligned} \int _{A_{\xi ,\varepsilon }} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t&\lesssim \int _{A_{\xi ,\varepsilon }} |\xi | \, \mathrm{d}t \lesssim \varepsilon \,|\xi | = 1 \, , \end{aligned}$$

whereas, thanks to (5.7):

$$\begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \bigl |\widetilde{\ell }_1(t,\xi )\bigr | + \bigl |\widetilde{\ell }_3(t,\xi )\bigr | \, \mathrm{d}t \\&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} 1 + \frac{\bigl |\partial _t \Delta _L(t,\xi )\bigr |}{\Delta _L(t,\xi )} \, \mathrm{d}t \\&\lesssim \log \bigl (1+|\xi |\bigr )\, . \end{aligned}$$

Combining the above estimates, we get (3.3b). \(\square\)

Condition (5.1) means that if, for fixed \(\xi\), the function

$$\begin{aligned} t \mapsto \bigl (\tau _j(t,\xi ) - \tau _k(t,\xi )\bigr ) \, \bigl (\tau _j(t,\xi ) - \tau _l(t,\xi )\bigr ) \end{aligned}$$

vanishes of order \(\nu\) at \(t=t_0\), then the function

$$\begin{aligned} t \mapsto \check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr ) \end{aligned}$$

must vanish at order at least \(\nu -1\) at \(t=t_0\). Thus, in space dimension \(n=1\), Proposition 5.1 can be made more precise.

Proposition 5.3

In space dimension \(n=1\), (1.13) is equivalent to the following condition:

there exist \(t_1,\dotsc ,t_\nu \in \left]-\delta ,T+\delta \right[\) such that

$$\begin{aligned} \prod _{h=1}^\nu |t-t_h| \, \Bigl |\check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\Bigr | \lesssim \bigl |\tau _j(t,\xi )-\tau _k(t,\xi )\bigr | \, \bigl |\tau _j(t,\xi )-\tau _l(t,\xi )\bigr | \,, \end{aligned}$$
(5.8)

for any \((j,k,l)\in {\mathcal {S}}_3\).

Remark 5.4

In the special case \(\nu =1\) and \(t_1=0\), condition (5.8) reduces to

$$\begin{aligned} |t|\,\Bigl |\check{M}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\Bigr | \lesssim \bigl |\tau _j(t,\xi )-\tau _k(t,\xi )\bigr | \, \bigl |\tau _j(t,\xi )-\tau _l(t,\xi )\bigr | \,, \end{aligned}$$
(5.9)

for any \((j,k,l)\in {\mathcal {S}}_3\).

Proof

By the previous remark, we see that (5.8) is equivalent to (5.1) and implies (3.3a) and (3.3b).

Now we prove that (3.3a) implies (5.8) by contradiction. First of all, as the zeros of \(\Delta _L\) are isolated, we can decompose \(\left]-\delta ,T+\delta \right[\) into a finite number of contiguous subintervals, each containing exactly one zero of \(\Delta _L\). Thus, with no loss of generality, we can restrict to the case in which \(\Delta _L(t)\) vanishes only at \(t=0\); in this case, condition (5.8) reduces to (5.9).

Suppose that (5.9) is violated, hence, with no loss of generality, we have

$$\begin{aligned} \frac{\Bigl |\check{M}\bigl (t,\tau _3(t) \,\xi ,\xi \bigr )\Bigr |}{\bigl |\tau _3(t)-\tau _1(t)\bigl | \bigl |\tau _3(t)-\tau _2(t)\bigl |\,|\xi |^2} \gtrsim \frac{1}{t^m} \end{aligned}$$

for some \(m\geqslant 2\).

As \(\Delta _L(0)=0\) and \(\Delta _L(t)\ne 0\) for \(t\ne 0\), there exist \(r_1,r_2\) such that

$$\begin{aligned} \bigl |\tau _3(t)-\tau _1(t)\bigl | \gtrsim t^{r_1} \qquad \bigl |\tau _3(t)-\tau _2(t)\bigl | \gtrsim t^{r_2} \end{aligned}$$

and \(r \overset{\mathrm {def}}{=}\max (r_1,r_2) \geqslant 1\). Note that \(\min (r_1,r_2)>0\) if and only if \(t=0\) is a triple point.

For \(t\geqslant \varepsilon ^{1/r}\), \(\varepsilon =\frac{1}{|\xi |}\) and \(|\xi |\geqslant 1\), we have

$$\begin{aligned} \bigl |\tau _3(t)-\tau _1(t)\bigl |\,|\xi | \gtrsim 1 \qquad \bigl |\tau _3(t)-\tau _2(t)\bigl |\,|\xi | \gtrsim 1 \,, \end{aligned}$$

hence

$$\begin{aligned} \int _{\varepsilon ^{1/r}}^T&\frac{\Bigl |\check{M}\bigl (t,\tau _3(t) \,\xi ,\xi \bigr )\Bigr |}{\Bigl (\bigl |\tau _3(t)-\tau _1(t)\bigl |\,|\xi |+1\Bigr ) \Bigl (\bigl |\tau _3(t)-\tau _2(t)\bigl |\,|\xi |+1\Bigr )} \, \mathrm{d}t \\&\gtrsim \int _{\varepsilon ^{1/r}}^T \frac{1}{t^m} \, \frac{\bigl |\tau _3(t)-\tau _1(t)\bigl |\,|\xi |}{\bigl |\tau _3(t)-\tau _1(t)\bigl |\,|\xi |+1} \, \frac{\bigl |\tau _3(t)-\tau _2(t)\bigl |\,|\xi |}{\bigl |\tau _3(t)-\tau _2(t)\bigl |\,|\xi |+1} \, \mathrm{d}t \gtrsim \int _{\varepsilon ^{1/r}}^T \frac{1}{t^m} \, \mathrm{d}t \approx |\xi |^{\frac{m-1}{r}} \, . \end{aligned}$$

This shows that (3.3a) cannot hold true if (5.9) is violated. \(\square\)
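
The growth rate used in the last display can be checked for concrete, hypothetical values of m and r:

```python
import sympy as sp

t, xi = sp.symbols('t xi', positive=True)
m, r, T = 3, 2, 1                       # hypothetical values with m >= 2, r >= 1

lower = (1 / xi)**sp.Rational(1, r)     # epsilon^(1/r) with epsilon = 1/|xi|
growth = sp.integrate(t**(-m), (t, lower, T))

print(sp.simplify(growth))                                            # xi/2 - 1/2
print(sp.limit(growth / xi**sp.Rational(m - 1, r), xi, sp.oo))        # 1/(m-1): growth ~ |xi|^((m-1)/r)
```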

Now we consider the terms of order 1.

Proposition 5.5

Assume that

$$\begin{aligned} \Bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\Bigr | \lesssim \sqrt{ \Delta _{\partial _\tau L}(t,\xi ) \, } + \frac{\bigl (\partial _t \Delta _{\partial _\tau L}(t,\xi )\bigr )^2}{\bigl [ \Delta _{\partial _\tau L}(t,\xi ) \bigr ]^{3/2}} \,, \qquad j=1,2 \,, \end{aligned}$$
(5.10)

then condition (1.14) is verified.

Proof

First of all, we recall that, by Proposition 3.2, (1.14) is equivalent to (3.9), hence we will prove that (5.10) implies (3.9).

Let \(\varepsilon =|\xi |^{-1/2}\), with \(|\xi |\) large enough, and let \(A_{\xi ,\varepsilon }\) be the set given by Lemma 5.2 with \(\Delta (t,\xi ) = \Delta _{\partial _\tau L}(t,\xi )\); we have

$$\begin{aligned} \int _{A_{\xi ,\varepsilon }} \sqrt{\frac{\bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _2(t,\xi ) - \sigma _1(t,\xi )\bigr |+1}\,} \, \mathrm{d}t \lesssim \int _{A_{\xi ,\varepsilon }} |\xi |^{1/2} \, \mathrm{d}t \lesssim \varepsilon \,|\xi |^{1/2} = 1 \,. \end{aligned}$$

On the other side, assuming (5.10) and thanks to Lemma 5.2, we have

$$\begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }}&\sqrt{\frac{\bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _2(t,\xi ) - \sigma _1(t,\xi )\bigr |+1}\,} \, \mathrm{d}t \\&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} 1 + \frac{\bigl |\partial _t \Delta _{\partial _\tau L}(t,\xi )\bigr |}{\Delta _{\partial _\tau L}(t,\xi )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\, . \end{aligned}$$

Combining the above estimates, we get (3.9). \(\square\)

Condition (5.10) means that if, for fixed \(\xi\), the function \(t \mapsto \Delta _{\partial _\tau L}(t,\xi )\) vanishes of order \(2\nu\), with \(\nu >2\), at \(t=t_0\), then the function \(t \mapsto \check{N}\bigl (t,\tau _j(t,\xi ),\xi \bigr )\) must vanish at order at least \(\nu -2\) at \(t=t_0\). Thus, in space dimension \(n=1\), Proposition 5.5 can be made more precise.

Proposition 5.6

In space dimension \(n=1\), (5.10) is equivalent to (1.14) and can be written in the following form: there exist \(t_1,\dotsc ,t_\nu\) such that

$$\begin{aligned} \prod _{j=1}^\nu (t-t_j)^2 \, \Bigl |\check{N}\bigr (t,\sigma _1(t,\xi ),\xi \bigr )\Bigr | \lesssim \bigl |\sigma _2(t,\xi )-\sigma _1(t,\xi )\bigr | \,. \end{aligned}$$
(5.11)

Proof

We prove that if \(n=1\), (3.9) implies (5.11). As before, with no loss of generality, we can assume that \(\Delta _{\partial _\tau L}\) vanishes only at 0, and there exists \(r \geqslant 1\) such that

$$\begin{aligned} \bigl |\sigma _2(t)-\sigma _1(t)\bigl | \gtrsim t^r \,. \end{aligned}$$

Hence, for \(t\geqslant \varepsilon ^{1/r}\), \(\varepsilon =\frac{1}{|\xi |}\) and \(|\xi |\geqslant 1\), we have

$$\begin{aligned} \bigl |\sigma _2(t)-\sigma _1(t)\bigl |\,|\xi | \gtrsim 1 \,, \end{aligned}$$

and, consequently,

$$\begin{aligned} \frac{\bigl |\sigma _2(t)-\sigma _1(t)\bigl |\,|\xi |}{\bigl |\sigma _2(t)-\sigma _1(t)\bigl |\,|\xi | + 1} \gtrsim 1 \,. \end{aligned}$$

If (5.11) fails to hold, then there exists \(m\geqslant 3\) such that

$$\begin{aligned} \frac{\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )}{\sigma _2(t,\xi )-\sigma _1(t,\xi )} \approx \frac{1}{t^m} \quad \text {for}\; j=1 \;\text {or}\; j=2 \,, \end{aligned}$$

and we have

$$\begin{aligned} \int _{\varepsilon ^{1/r}}^T \sqrt{\frac{\bigl |\check{N}\bigl (t,\sigma _j(t,\xi ),\xi \bigr )\bigr |}{\bigl |\sigma _2(t,\xi )-\sigma _1(t,\xi )\bigr |+1} \,} \, \mathrm{d}t \gtrsim \int _{\varepsilon ^{1/r}}^T \frac{1}{t^{m/2}} \, \mathrm{d}t \approx |\xi |^{\frac{m/2-1}{r}} \,, \end{aligned}$$

thus (3.9) cannot be satisfied. \(\square\)

Case II: \(\Delta _L\equiv 0\) and \(\Delta _L^{(1)} \not \equiv 0\)

With no loss of generality, we can assume that \(\tau _1\equiv \tau _2\) and \(\tau _3\not \equiv \tau _1\).

Proposition 5.7

Assume that \(\tau _1\equiv \tau _2\) and \(\tau _3\not \equiv \tau _1\). If

$$\begin{aligned} \check{M}\bigr (t,\tau _1(t,\xi ),\xi \bigr ) \equiv 0 \end{aligned}$$
(5.12a)

and

$$\begin{aligned} \Bigl |Q\bigr (t,\tau _j(t,\xi ),\xi \bigr )\Bigr | \lesssim \sqrt{\Delta _L^{(1)}(t,\xi ) \,} + \Bigl |\partial _t \sqrt{\Delta _L^{(1)}(t,\xi ) \,}\Bigr | \,, \end{aligned}$$
(5.12b)

for \(j=1,3\), where \(Q(t,\tau ,\xi )= \frac{\check{M}(t,\tau ,\xi )}{\tau - \tau _1(t,\xi )}\), then condition (1.13) is verified.

Proof

The proof is similar to that of Proposition 5.1: We prove that (5.12a) and (5.12b) imply (3.3a) and (3.3b), hence, by Proposition 3.1, we get (1.13).

Note that, by (5.12a), (3.3a) with \(j=1\) or \(j=2\) is trivially satisfied, thus we need only to prove (3.3a) with \(j=3\).

Let \(\varepsilon =|\xi |^{-2}\) with \(|\xi |\) large enough, and let \(A_{\xi ,\varepsilon }\) be the set given by Lemma 5.2 with \(\Delta (t,\xi ) = \Delta _L^{(1)}(t,\xi )\); we have

$$\begin{aligned} \int _{A_{\xi ,\varepsilon }} \frac{\bigl |\check{M}(\tau _3)\bigr |}{\bigl (|\tau _3 - \tau _1|+1\bigr )^2} \, \mathrm{d}t \lesssim \int _{A_{\xi ,\varepsilon }} |\xi |^2 \, \mathrm{d}t \lesssim \varepsilon \,|\xi |^2 = 1 \,. \end{aligned}$$

On the other side, as

$$\begin{aligned} \frac{\bigl |\check{M}(\tau _3)\bigr |}{\bigl (|\tau _3 - \tau _1|+1\bigr )^2} \le \frac{\bigl |Q(\tau _3)\bigr |}{\sqrt{\Delta _L^{(1)} \,}} \,, \end{aligned}$$

assuming (5.12b), thanks to Lemma 5.2, we have

$$\begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \frac{\bigl |\check{M}(\tau _3)\bigr |}{\bigl (|\tau _3-\tau _1|+1\bigr )^2} \, \mathrm{d}t \lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} 1 + \frac{\bigl |\partial _t \Delta _L^{(1)}(t,\xi )\bigr |}{\Delta _L^{(1)}(t,\xi )} \, \mathrm{d}t \lesssim \log \bigl (1+|\xi |\bigr )\,. \end{aligned}$$

Now we prove that (3.3b) holds true.

As \(\tau _3\not \equiv \tau _1\), by Lagrange interpolation formula, we get

$$\begin{aligned} Q(\tau )&= \frac{Q(\tau _3)}{\tau _3-\tau _1} \, (\tau -\tau _1) + \frac{Q(\tau _1)}{\tau _1-\tau _3} \, (\tau -\tau _3) \end{aligned}$$

for a.e. \((t,\xi )\), hence

$$\begin{aligned} \check{M}(\tau )&= \frac{Q(\tau _3)}{\tau _3-\tau _1} \, L_{12} + \frac{Q(\tau _1)}{\tau _1-\tau _3} \, L_{13} \, , \end{aligned}$$
(5.13)

and, consequently,

$$\begin{aligned} \partial _\tau \check{M}(\tau )&= \frac{Q(\tau _3)}{\tau _3-\tau _1} \, (L_1+L_2) + \frac{Q(\tau _1)}{\tau _1-\tau _3} \, (L_1+L_3) \, . \end{aligned}$$
(5.14)

Let \(\varepsilon =|\xi |^{-1}\), with \(|\xi |\) large enough, and let \(A_{\xi ,\varepsilon }\) be as above, we have

$$\begin{aligned} \int _{A_{\xi ,\varepsilon }} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t&\lesssim \int _{A_{\xi ,\varepsilon }} |\xi | \, \mathrm{d}t \lesssim \varepsilon \,|\xi | = 1 \, , \end{aligned}$$

whereas, thanks to (5.14) and Lemma 5.2 we have

$$\begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \frac{\bigl |Q(\tau _1)\bigr | + \bigl |Q(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t \end{aligned}$$

hence, by (5.12b),

$$\begin{aligned} \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr | + \bigl |\partial _\tau \check{M}(\tau _3)\bigr |}{|\tau _1-\tau _3|+1} \, \mathrm{d}t&\lesssim \int _{\left]-\delta ,T+\delta \right[ \setminus A_{\xi ,\varepsilon }} 1 + \frac{\bigl |\partial _t \Delta _L^{(1)}(t,\xi )\bigr |}{\Delta _L^{(1)}(t,\xi )} \, \mathrm{d}t \\&\lesssim \log \bigl (1+|\xi |\bigr )\, . \end{aligned}$$

\(\square\)

Proposition 5.8

In space dimension \(n=1\), (1.13) is equivalent to (5.12a) and (5.12b).

Moreover, (5.12b) can be written in the following form: There exist \(t_1,\dotsc ,t_\nu\) such that

$$\begin{aligned} \prod _{j=1}^\nu |t-t_j| \, \Bigl |Q\bigr (t,\tau _k(t,\xi ),\xi \bigr )\Bigr | \lesssim \bigl |\tau _3(t,\xi )-\tau _1(t,\xi )\bigr | \,, \qquad k=1,3 \,. \end{aligned}$$
(5.15)

Remark 5.9

In the special case \(\nu =1\) and \(t_1=0\), condition (5.15) reduces to

$$\begin{aligned} |t| \, \Bigl |Q\bigr (t,\tau _k(t,\xi ),\xi \bigr )\Bigr | \lesssim \bigl |\tau _3(t,\xi )-\tau _1(t,\xi )\bigr | \,, \qquad k=1,3 \,. \end{aligned}$$
(5.16)

Proof

We prove that if \(\tau _1\equiv \tau _2\), then (5.12a) is necessary in order to have (1.13).

Indeed, if (5.12a) fails to hold, then \(\check{M}\bigl (t,\tau _1(t,\xi ),\xi \bigr ) \approx |\xi |^2\) and we have

$$\begin{aligned} \frac{\check{M}(\tau _1)}{\bigl (|\tau _1-\tau _2| + 1\bigr ) \bigl (|\tau _1-\tau _3| + 1\bigr )} = \frac{\check{M}(\tau _1)}{|\tau _1-\tau _3| + 1} \approx \frac{|\xi |^2}{|\xi | + 1} \,, \end{aligned}$$

hence (3.3a) cannot be verified.

As before, we can assume that \(\Delta _L^{(1)}\) vanishes only at 0, so that (5.15) reduces to (5.16). Moreover, we can assume that there exists \(r \geqslant 1\) such that for \(t\geqslant \varepsilon ^{1/r}\), \(\varepsilon =\frac{1}{|\xi |}\) and \(|\xi |\geqslant 1\) we have

$$\begin{aligned} \bigl |\tau _3(t)-\tau _1(t)\bigl |\,|\xi | \gtrsim 1 \,. \end{aligned}$$

If (5.16) fails to hold, then, with no loss of generality (the case \(k=3\) being analogous), there exists \(m\geqslant 2\) such that

$$\begin{aligned} \frac{\bigl |Q\bigl (t,\tau _1(t,\xi ),\xi \bigr )\bigr |}{\bigl |\tau _3(t,\xi )-\tau _1(t,\xi )\bigr |} \approx \frac{1}{t^m} \,, \end{aligned}$$

and, as

$$\begin{aligned} \check{M}(t,\tau ,\xi ) = Q(t,\tau ,\xi ) \, \bigl ( \tau - \tau _1(t,\xi ) \bigr ) \end{aligned}$$

we have

$$\begin{aligned} \partial _\tau \check{M}(t,\tau _1,\xi ) = Q(t,\tau _1,\xi ) \,, \end{aligned}$$

hence

$$\begin{aligned} \int _{\varepsilon ^{1/r}}^T \frac{\bigl |\partial _\tau \check{M}(\tau _1)\bigr |}{|\tau _3-\tau _1|+1} \, \mathrm{d}t = \int _{\varepsilon ^{1/r}}^T \frac{\bigl |Q(\tau _1)\bigr |}{|\tau _3-\tau _1|+1} \, \mathrm{d}t \gtrsim \int _{\varepsilon ^{1/r}}^T \frac{1}{t^m} \, \mathrm{d}t \approx |\xi |^{\frac{m-1}{r}} \,. \end{aligned}$$

Thus, (3.3b) cannot be satisfied. \(\square\)

Note that we do not need to assume \(n=1\) to prove the necessity of (5.12a).

The term of order 1 is treated as in the previous case.

Case III: \(\Delta _L\equiv \Delta _L^{(1)}\equiv 0\) (operators with triple characteristics of constant multiplicity)

Proposition 5.10

If \(\Delta _L\equiv \Delta _L^{(1)}\equiv 0\), then L has a unique triple root:

$$\begin{aligned} L(t,\tau ,\xi ) = \bigl (\tau -\tau _1(t,\xi )\bigr )^3 \,, \end{aligned}$$
(5.17)

and Hypothesis (1.9) and (1.10) are satisfied. Moreover, Hypothesis (1.11), (1.12), (1.13) and (1.14) are satisfied if, and only if,

$$\begin{aligned} \check{M}(\tau _1) \equiv 0 \,, \quad \partial _\tau \check{M}(\tau _1) \equiv 0 \,, \quad \check{N}(\tau _1) \equiv 0 . \end{aligned}$$
(5.18)

Proof

It is clear that if \(\Delta _L\equiv \Delta _L^{(1)}\equiv 0\), then L reduces to (5.17).

If L is as in (5.17), we have

$$\begin{aligned} {\mathcal {L}}(t,\tau ,\xi ) = \bigl (\tau -\tau _1(t,\xi )\bigr )^3 - 6\,\bigl (\tau -\tau _1(t,\xi )\bigr ) \end{aligned}$$

and

$$\begin{aligned} \lambda _1(t,\xi ) = \tau _1(t,\xi ) \,, \qquad \lambda _2(t,\xi ) = \tau _1(t,\xi )+\sqrt{6\,} \,, \qquad \lambda _3(t,\xi ) = \tau _1(t,\xi )-\sqrt{6\,} \,, \end{aligned}$$

and it is clear that (1.9) and (1.10) are satisfied.

It is also clear that (5.18) holds true if, and only if,

$$\begin{aligned} \check{M}(t,\tau ,\xi ) = m_0(t) \, \bigl (\tau -\tau _1(t,\xi )\bigr )^2 \quad \text {and}\quad \check{N}(t,\tau ,\xi ) = n_0(t) \, \bigl (\tau -\tau _1(t,\xi )\bigr ) \,. \end{aligned}$$

Thus, (1.11), (1.12), (1.13) and (1.14) hold true.

Conversely, if \(\check{M}(\tau _1)\not \equiv 0\), then (3.3a) is not satisfied since

$$\begin{aligned} \int _0^T \bigl |\check{M}\bigl (t,\tau _1(t,\xi ),\xi \bigr )\bigr | \, \mathrm{d}t \approx |\xi |^2 \,. \end{aligned}$$

Analogously, if \(\partial _\tau \check{M}(\tau _1)\not \equiv 0\), then (3.3b) is not satisfied since

$$\begin{aligned} \int _0^T \Bigl |\partial _\tau \check{M}\bigl (t,\tau _1(t,\xi ),\xi \bigr )\Bigr | \, \mathrm{d}t \approx |\xi | \,. \end{aligned}$$

Finally, if \(\check{N}(\tau _1)\not \equiv 0\), then (3.9) is not satisfied since

$$\begin{aligned} \int _0^T \sqrt{ \Bigl |\check{N}\bigl (t,\tau _1(t,\xi ),\xi \bigr )\Bigr |\,} \, \mathrm{d}t \approx |\xi |^{1/2} \,. \end{aligned}$$

\(\square\)

Conditions (5.18) correspond to the condition of good decomposition [13]: The operator P can be written as

$$\begin{aligned} P = L_1^3 + m_0(t) \, L_1^2 + n_0(t) \, L_1 + p_0 \,, \end{aligned}$$
(5.19)

where \(L_1^3 = L_1\circ L_1\circ L_1\) and \(L_1^2 = L_1\circ L_1\).

To see this, following [35], we have to check two conditions. The first is

$$\begin{aligned}&\text {the principal symbol of}\ P - L_1^3 \ \text {is divisible by} \ (\tau -\tau _1)^2 \, , \end{aligned}$$
(5.20)

and the second condition is

$$\begin{aligned}&\text {the principal symbol of}\ P - (L_1^3 + m_0(t) \, L_1^2) \ \text {is divisible by} \ (\tau -\tau _1) \, . \end{aligned}$$
(5.21)

As \(\tau _1\equiv \tau _2\equiv \tau _3\), the operator \(\widetilde{L}_{123,\varepsilon }\) in (2.3) reduces to \(L_1^3\) if \(\varepsilon =0\). According to (2.16), we have

$$\begin{aligned} P = L_1^3 + \check{M} + \frac{1}{2} \partial _t\partial _\tau \check{M} + \check{N} + p \,, \end{aligned}$$

thus (5.20) holds true if, and only if, the first two conditions in (5.18) hold true. In this case,

$$\begin{aligned} \check{M}(t,\tau ,\xi ) = m_0(t) \, L_{1,1}(t,\tau ,\xi ) = m_0(t) \, \bigl (\tau - \tau _1(t,\xi )\bigr )^2 \,. \end{aligned}$$

As \(\tau _1\equiv \tau _2\), the operator \(\widetilde{L}_{12,\varepsilon }\) in (2.2) reduces to \(L_1^2\) if \(\varepsilon =0\). By (2.4) with \(\varepsilon =0\), we have

$$\begin{aligned} L_1^2&= L_{11} + \frac{1}{2} \, \partial _t \partial _\tau L_{11} \, , \end{aligned}$$

and, multiplying by \(m_0\), we get

$$\begin{aligned} m_0\,L_1^2&= \check{M} + \frac{1}{2} \, \partial _t \partial _\tau \check{M} - m_0'(t) \, L_1 \, , \end{aligned}$$

hence

$$\begin{aligned} P = L_1^3 + m_0\,L_1^2 + m_0'(t) \, L_1 + \check{N} + p \,, \end{aligned}$$

thus (5.21) holds true if, and only if, the third condition in (5.18) holds true.

6 Operators with constant coefficients principal part

In this section, we prove the following proposition.

Proposition 6.1

Assume that the coefficients are analytic and those of the principal symbol are constant.

Then, Hypothesis (1.9), (1.10), (1.11) and (1.12) are satisfied, whereas (1.13) and (1.14) are necessary and sufficient for the \({\mathcal {C}}^\infty\) well posedness.

Proof

If the coefficients are analytic, then Hypothesis (1.9), (1.10), (1.11) and (1.12) are satisfied (cf. Sect. 4).

Now, we recall that if the coefficients are constant, the necessary and sufficient condition for the \({\mathcal {C}}^\infty\) well posedness is well known, see [16, 17, 34]:

$$\begin{aligned} \text {there exists}\; C>0\; \text {such that}\; \tau ^3 + \sum _{j+|\alpha | \le 3} a_{j,\alpha }(t)\,\tau ^j\xi ^\alpha \ne 0 \;\; \text {if}\; \xi \in {\mathbb {R}}^n \;\text {and}\; |\Im \tau |>C \,. \end{aligned}$$
(6.1)

Various equivalent conditions have been given. According to [34], (6.1) is equivalent to the following conditions (cf. [31]): there exist bounded functions \({\mathsf {m}}_1,{\mathsf {m}}_2,{\mathsf {m}}_3,{\mathsf {n}}_1,{\mathsf {n}}_2\) such that

$$\begin{aligned} M(t,\tau ,\xi )&= {\mathsf {m}}_1(t,\xi ) \, \bigl (\tau - \tau _2(t,\xi )\bigr ) \bigl (\tau - \tau _3(t,\xi )\bigr ) + {\mathsf {m}}_2(t,\xi ) \, \bigl (\tau - \tau _3(t,\xi )\bigr ) \bigl (\tau - \tau _1(t,\xi )\bigr )\\& \quad + {\mathsf {m}}_3(t,\xi ) \, \bigl (\tau - \tau _1(t,\xi )\bigr ) \bigl (\tau - \tau _2(t,\xi )\bigr ) \end{aligned}$$
(6.2)
$$\begin{aligned} N(t,\tau ,\xi )&= {\mathsf {n}}_1(t,\xi ) \, \bigl (\tau - \sigma _2(t,\xi )\bigr ) + {\mathsf {n}}_2(t,\xi ) \, \bigl (\tau - \sigma _1(t,\xi )\bigr ) \end{aligned}$$
(6.3)

for a.e. \((t,\xi )\).

We recall that the above conditions are also necessary and sufficient for the \({\mathcal {C}}^\infty\) well posedness when the coefficients of the lower-order terms are variable [14, 36] (even when they are only \({\mathcal {C}}^\infty\)).

We have proved in Sect. 2 that our conditions are sufficient for the well posedness, whereas conditions (6.2) and (6.3) are necessary; hence, it remains to prove that if (6.2) and (6.3) hold true, then (1.13) and (1.14) are satisfied.

We prove at first that (6.2) implies (1.13). We have to distinguish three cases as in the previous section.

If \(\Delta _L\not \equiv 0\), then (6.2) is (5.4) and the \({\mathsf {m}}_j\) are given by (5.5). The boundedness of the \({\mathsf {m}}_j\) is equivalent to (5.1); hence, by Proposition 5.1, (1.13) holds true.

If \(\Delta _L\equiv 0\) and \(\Delta _{\partial _\tau L}\not \equiv 0\), then, with no loss of generality, we can assume that \(\tau _1\equiv \tau _2\) and \(\tau _3\not \equiv \tau _1\). In this case, the right-hand side of (6.2) is divisible by \(\tau -\tau _1\), hence M must verify (5.12a). Moreover, (6.2) reduces to (5.13), thus the boundedness of the \({\mathsf {m}}_j\) is equivalent to (5.12b). By Proposition 5.7, we get (1.13).

If \(\Delta _{\partial _\tau L}\equiv 0\), then \(\tau _1\equiv \tau _2\equiv \tau _3\) and the right-hand side of (6.2) is divisible by \((\tau -\tau _1)^2\), hence M must verify the first two conditions in (5.18), and, by Proposition 5.10, we get (1.13).

Now we prove that (6.3) implies (1.14).

From (6.3), we have

$$\begin{aligned} N(\sigma _1) = {\mathsf {n}}_2 \, (\sigma _1 - \sigma _2) \,, \qquad N(\sigma _2) = {\mathsf {n}}_1 \, (\sigma _2 - \sigma _1) \,, \end{aligned}$$

from which we deduce that if \(\Delta _{\partial _\tau L}\not \equiv 0\), then (5.10) is satisfied; by Proposition 5.5, (1.14) holds true. On the other side, if \(\Delta _{\partial _\tau L}\equiv 0\), then the third condition in (5.18) holds true, hence, by Proposition 5.10, we get (1.14). \(\square\)

7 Some examples

In this section, we discuss some examples. In particular, we compare our Theorem 2 with Theorem 6.1 in [26] (see Example 3 below).

First of all, we remark that any positive or negative result for second-order equations can give a positive or negative result for third-order equations, just replacing the unknown function u by its time derivative \(\partial _t u\).

Example 1

In [9], a function \(a(t)\in {\mathcal {C}}^\infty\) is constructed, verifying \(a(0) = 0\), \(a(t) > 0\) for \(t > 0\), and having an infinite number of oscillations as \(t\rightarrow 0^+\), so that condition (1.1) is not verified and the Cauchy problem for

$$\begin{aligned} \partial _t^2 u - a(t) \, \partial _x^2 u = 0 \end{aligned}$$

is not well posed in \({\mathcal {C}}^\infty\). The same function can be used to show that the Cauchy problem for the equation

$$\begin{aligned} \partial _t^3 u - a(t) \, \partial _t\partial _x^2 u = 0 \end{aligned}$$

may be ill-posed. This shows that some control on the oscillations of the coefficients of the principal symbol is needed, as required by Hypothesis (1.9) and (1.10).

Example 2

It is well known that the Cauchy problem for the equation

$$\begin{aligned} \partial _t^2 u - \partial _x u = 0 \end{aligned}$$

is not well posed in \({\mathcal {C}}^\infty\) (see, e.g., [18]). Thus, the Cauchy problem for the equation

$$\begin{aligned} \partial _t^3 u - \partial _t\partial _x u = 0 \end{aligned}$$

is not well posed in \({\mathcal {C}}^\infty\) (see, e.g., [18]). This shows that some Levi conditions on the lower-order terms are needed.

Example 3

Consider the homogeneous equation

$$\begin{aligned} \partial _t^3 u + a_1(t,x)\partial _t^2\partial _x u + a_2(t,x)\partial _t\partial _x^2 u + a_3(t,x)\,\partial _x^3 u = 0 \,. \end{aligned}$$

Assume, for simplicity, that \(a_1(t,x) \equiv 0\) and that \(a_2\) and \(a_3\) are analytic. The hyperbolicity assumption is then

$$\begin{aligned} \Delta _L(t,x) = -4\,a_2^3(t,x)-27\,a_3^2(t,x) \geqslant 0 \,. \end{aligned}$$
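
This is the classical discriminant of the reduced cubic, written per unit \(\xi\); a one-line sympy check (the extra factor \(\xi ^6\) below is just the homogeneity of the full discriminant in \(\tau\)):

```python
import sympy as sp

tau, xi, a2, a3 = sp.symbols('tau xi a_2 a_3')

# Principal symbol in one space variable with a_1 = 0
L = tau**3 + a2 * tau * xi**2 + a3 * xi**3

print(sp.factor(sp.discriminant(L, tau)))   # -xi**6*(4*a_2**3 + 27*a_3**2)
```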

Assume also that \(a_2\) and \(a_3\) vanish at the origin, so that we have a triple characteristic root, whereas \(\Delta _L(t,x)>0\) for \((t,x)\ne (0,0)\), so that the characteristic roots are simple for \((t,x)\ne (0,0)\).

According to [8], if \(a_2(t,x)=a_2(t)\) and \(a_3(t,x)=a_3(t)\) depend only on the time variable t, a necessary condition for the \({\mathcal {C}}^\infty\) well posedness is the following

$$\begin{aligned} -a_2^3(t) \lesssim \Delta (t) \,. \end{aligned}$$
(7.1)

Condition (7.1) is sufficient also if \(a_2(t,x)=a_2(x)\) and \(a_3(t,x)=a_3(x)\) depend only on the space variable x [32], whereas if \(a_2\) and \(a_3\) depend on both variables, the following condition should also be considered [26, Theorem 6.1]:

$$\begin{aligned} \bigl |\partial _t a_3\bigr | \lesssim \sqrt{|a_2|\,} \, \bigl |\partial _t a_2\bigr | \,. \end{aligned}$$

In order to compare the above results with Theorem 2, we first recall (see Sect. 4 for details) that, if the coefficients are analytic, then conditions (1.9)–(1.12) are satisfied.

Next, as the equation is homogeneous and \(a_1\equiv 0\), we have

$$\begin{aligned} \check{M}(t,\tau ,\xi ) = - \frac{1}{2} \, a_2'(t)\,\xi ^2 \end{aligned}$$

and \(\check{N}(t,\tau ,\xi )\equiv 0\) so that (1.14) is trivially satisfied.

To prove the equivalence between (7.1) and (1.13), we recall (cf. [8]) that if \(a_1\equiv 0\) then condition (7.1) is equivalent to the following condition

$$\begin{aligned} \tau _j^2(t) + \tau _k^2(t) \lesssim \bigl (\tau _j(t) - \tau _k(t)\bigr )^2 \,, \qquad \text{for}\; j\ne k \,, \end{aligned}$$
(7.2)

or, if the coefficients are analytic and the discriminant \(\Delta (t)\) vanishes only at \(t=0\), to the following:

$$\begin{aligned} |t|^2 \Bigl [\bigl (\tau _j'(t)\bigr )^2 + \bigl (\tau _k'(t)\bigr )^2\Bigr ] \lesssim \bigl (\tau _j(t) - \tau _k(t)\bigr )^2 \,, \qquad \text {for}\; j\ne k \,. \end{aligned}$$
(7.3)

On the other side, thanks to Vieta's formulas, as \(a_1\equiv 0\), we have

$$\begin{aligned} a_2(t) = - \frac{1}{2} \, \bigl [ \tau _1^2(t) + \tau _2^2(t) + \tau _3^2(t) \bigr ] \,. \end{aligned}$$
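
A quick sympy check of this instance of Vieta's formulas (on the constraint \(\tau _1+\tau _2+\tau _3=0\), i.e. \(a_1\equiv 0\)):

```python
import sympy as sp

tau, t1, t2, t3 = sp.symbols('tau tau1 tau2 tau3')

expanded = sp.expand((tau - t1) * (tau - t2) * (tau - t3))
a2 = expanded.coeff(tau, 1)                     # coefficient of tau, i.e. a_2 when a_1 = 0

# On the constraint tau1 + tau2 + tau3 = 0 we recover the displayed identity
check = (a2 + (t1**2 + t2**2 + t3**2) / 2).subs(t3, -t1 - t2)
print(sp.simplify(check))                       # 0
```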

Combining (7.2) and (7.3), we see that (7.1) is equivalent to (5.9), which is equivalent to condition (1.13).