1 Introduction

Let us consider the following backward parabolic operator

$$\begin{aligned} L=\partial _t+ \sum _{j,k = 1}^n \partial _{x_j} \left( a_{j,k} (t,x) \partial _{x_k} \right) +\sum _{ j = 1}^n b_j(t,x)\partial _{x_j}+c(t,x), \end{aligned}$$
(1)

where all the coefficients are assumed to be defined in \([0,T] \times {\mathbb R}^n\), measurable and bounded; \(( a_{j,k}(t,x))_{j,k}\) is a real symmetric matrix for all \((t,x)\in [0,T]\times {\mathbb R}^n\) and there exists \(\lambda _0\in (0,1]\) such that

$$\begin{aligned} \sum _{j, k = 1}^n a_{j,k} (t,x) \xi _j \xi _k\ge \lambda _0|\xi |^2, \end{aligned}$$
(2)

for all \((t,x)\in [0,T]\times {\mathbb R}^n\) and \(\xi \in {\mathbb R}^n\).

Given a functional space \({\mathcal {H}}\), we say that the operator L has the \({\mathcal {H}}\)–uniqueness property if, whenever \(u\in {\mathcal {H}}\), \(Lu=0\) in \([0,T]\times {\mathbb R}^n\) and \(u(0,x)=0\) in \({\mathbb R}^n\), then \(u=0\) in \([0,T]\times {\mathbb R}^n\).

In the present paper, we are interested in the \({\mathcal {H}}\)–uniqueness property for the operator L defined in (1), when

$$\begin{aligned} {\mathcal {H}}= H^1\left( (0,T), L^2\left( {\mathbb R}^n\right) \right) \cap L^2\left( (0,T), H^2\left( {\mathbb R}^n\right) \right) \end{aligned}$$
(3)

(let us remark that this choice for \({\mathcal {H}}\) is, in some sense, natural, since, from elliptic regularity results, the domain of the operator \(-\sum _{j,k = 1}^n \partial _{x_j} \left( a_{j,k} (t,x) \partial _{x_k} \right)\) in \(L^2\left( \mathbb R^n\right)\) is \(H^2\left( \mathbb R^n\right)\), for all \(t\in [0,T]\)).

It is well known that, in dealing with the uniqueness property for partial differential operators, one of the main issues is the regularity of the coefficients. For example, in the case of elliptic operators, the uniqueness property in the case of Lipschitz continuous coefficients was proved by Hörmander in [14] (see [17] for a more refined result), while a famous non-uniqueness counterexample, for an elliptic operator having Hölder continuous coefficients, is due to Pliś (see [16]).

In [9, 10], we investigated the problem of finding the minimal regularity assumptions on the coefficients \(a_{j,k}\) ensuring the \({\mathcal {H}}\)–uniqueness property for (1). Namely, we proved the \({\mathcal {H}}\)–uniqueness property for (1) when the coefficients \(a_{j,k}\) are Lipschitz continuous in x and the regularity in t is given in terms of a modulus of continuity \(\mu\), i.e.,

$$\begin{aligned} \sup _{\begin{array}{c} s_1,\,s_2\in [0,T],\\ x\in {\mathbb {R}}^n \end{array}}\frac{|a_{j,k}(s_1,x)-a_{j,k}(s_2,x)|}{\mu (|s_1-s_2|)}\le C, \end{aligned}$$

where \(\mu\) satisfies the so-called Osgood condition

$$\begin{aligned} \int _0^1{1\over \mu (s)}\, \mathrm{{d}}s=+\infty . \end{aligned}$$

A counterexample in [9], similar to the one of Pliś quoted above, shows that, as far as the regularity of the \(a_{j,k}\) with respect to t is concerned, the Osgood condition is sharp: given any non-Osgood modulus of continuity \(\mu\), it is possible to construct a backward parabolic operator of the form (1), with coefficients that are \(C^\infty\) in x and \(\mu\)-continuous in t, for which the \({\mathcal {H}}\)–uniqueness property does not hold.

It is interesting to remark that, in this counterexample, the coefficients are in fact \(C^\infty\) in t for \(t\not =0\), and the Osgood continuity fails only at \(t=0\).

The loss of regularity of the coefficients at a single point has been widely studied, e.g., for the well-posedness of the Cauchy problem for second-order hyperbolic operators of the type

$$\begin{aligned} P=\partial ^2_t- \sum _{j,k = 1}^n \partial _{x_j} (a_{j,k} (t,x) \partial _{x_k} )+\sum _{ j = 1}^n b_j(t,x)\partial _{x_j}+c(t,x), \end{aligned}$$

under condition (2). For this class of operators, well-posedness in Sobolev spaces holds when the coefficients are log-Lipschitz continuous with respect to t; there exist counterexamples to this property when the Lipschitz continuity fails only at \(t=0\); and, finally, well-posedness in Sobolev spaces can be recovered by adding a control on the Lipschitz constant of the \(a_{j,k}\)’s as t goes to 0 (the literature on this kind of problem is vast; see, e.g., [4,5,6,7,8, 13, 18]).

In this paper, we show that if the loss of the Osgood continuity is properly controlled as t goes to 0, then the \({\mathcal {H}}\)–uniqueness property for (1) remains valid. Our hypothesis reads as follows: given a modulus of continuity \(\mu\) satisfying the Osgood condition, we assume that the coefficients \(a_{j,k}\) are Hölder continuous with respect to t on [0, T], and for all \(t\in (0,T]\)

$$\begin{aligned} \sup _{\begin{array}{c} s_1,\,s_2\in [t,T],\\ x\in {\mathbb {R}}^n \end{array}}\frac{|a_{j,k}(s_1,x)-a_{j,k}(s_2,x)|}{\mu (|s_1-s_2|)}\le C t^{-\beta }, \end{aligned}$$
(4)

where \(0<\beta <1\). The coefficients \(a_{j,k}\) are assumed to be globally Lipschitz continuous in x. Under these hypotheses, we prove that the \({\mathcal {H}}\)–uniqueness property holds for (1). As in [9, 10], the uniqueness result is a consequence of a Carleman estimate with a weight function shaped on the modulus of continuity \(\mu\). The weight function is obtained as the solution of a specific second-order ordinary differential equation. In the previous results cited above, the corresponding o.d.e. is autonomous. Here, on the contrary, the time-dependent control (4) leads to a non-autonomous o.d.e. Moreover, the “Osgood singularity” of the \(a_{j,k}\) at \(t=0\) introduces a number of new technical difficulties which are not present in the fully Osgood-regular situation considered before.

The result is sharp in the following sense: we exhibit a counterexample in which the coefficients \(a_{j,k}\) are Hölder continuous with respect to t on [0, T] and, for all \(t\in (0,T]\) and all \(\epsilon >0\),

$$\begin{aligned} \sup _{\begin{array}{c} s_1,\,s_2\in [t,T],\\ x\in {\mathbb {R}}^n \end{array}}\frac{|a_{j,k}(s_1,x)-a_{j,k}(s_2,x)|}{|s_1-s_2|}\le C t^{-(1+\epsilon )}, \end{aligned}$$
(5)

and the operator (1) does not have the \({\mathcal {H}}\)–uniqueness property. The borderline case \(\epsilon =0\) in (5) is considered in [11]. In that situation, only a very particular uniqueness result holds and the problem remains essentially open.

2 Main result

We start with the definition of modulus of continuity.

Definition 1

A function \(\mu : [0,\, 1]\rightarrow [0,\,1]\) is a modulus of continuity if it is continuous, concave, strictly increasing and \(\mu (0)=0\), \(\mu (1)=1\).

Remark 1

Let \(\mu\) be a modulus of continuity. Then

  • for all \(s\in [0,\, 1]\), \(\mu (s)\ge s\);

  • on \((0,\, 1]\), the function \(s\mapsto \frac{\mu (s)}{s}\) is decreasing;

  • the limit \(\lim _{s\rightarrow 0^+} \frac{\mu (s)}{s}\) exists;

  • on \([1,\, +\infty )\), the function \(\sigma \mapsto \sigma \mu (\frac{1}{\sigma })\) is increasing;

  • on \([1,\, +\infty )\), the function \(\sigma \mapsto \frac{1}{\sigma ^2 \mu (\frac{1}{\sigma })}\) is decreasing.
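Each of these properties follows directly from Definition 1; as an illustration, here is a one-line sketch of the first one (a verification added for the reader's convenience):

```latex
% Concavity of \mu, together with \mu(0)=0 and \mu(1)=1, gives, for s\in[0,1],
\mu(s)=\mu\bigl(s\cdot 1+(1-s)\cdot 0\bigr)\ \ge\ s\,\mu(1)+(1-s)\,\mu(0)\ =\ s .
```

The remaining properties are obtained similarly from concavity and monotonicity.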

Definition 2

Let \(\mu\) be a modulus of continuity and let \(\varphi :I \rightarrow B\), where I is an interval in \({\mathbb {R}}\) and B is a Banach space. We say that \(\varphi\) belongs to \(C^\mu (I,B)\) if \(\varphi \in L^\infty (I,B)\) and

$$\begin{aligned} \Vert \varphi \Vert _{C^\mu (I,B)}= \Vert \varphi \Vert _{L^\infty (I,B)}+ \sup _{\begin{array}{c} t,s\in I\\ 0<|t-s|<1 \end{array}} \frac{\Vert \varphi (t)-\varphi (s)\Vert _B}{\mu (|t-s|)}<+\infty . \end{aligned}$$

Remark 2

Let \(\alpha \in (0,1)\) and \(\mu (s)=s^\alpha\). Then, \(C^\mu (I,B)\) is \(C^{0,\alpha }(I,B)\), the space of Hölder-continuous functions. Let \(\mu (s)=s\). Then, \(C^\mu (I,B)\) is Lip(I, B), the space of bounded Lipschitz-continuous functions.

We introduce the notion of Osgood modulus of continuity.

Definition 3

Let \(\mu\) be a modulus of continuity. \(\mu\) satisfies the Osgood condition if

$$\begin{aligned} \int _0^1 \frac{1}{\mu (s)}\, \mathrm{{d}}s=+\infty . \end{aligned}$$
(6)

Remark 3

Examples of moduli of continuity satisfying the Osgood condition are \(\mu (s)=s\) and \(\mu (s)= s\log (e+\frac{1}{s}-1)\).
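Both examples can be checked directly against (6); the following verification sketch (added here for completeness) also shows that Hölder moduli fail the Osgood condition:

```latex
% For \mu(s)=s:
\int_0^1\frac{ds}{s}=+\infty .
% For \mu(s)=s\log\left(e+\frac{1}{s}-1\right): since e+\frac1s-1\le\frac{e}{s} on (0,1],
% we have \mu(s)\le s\left(1+\log\frac1s\right); the substitution u=1+\log\frac1s then gives
\int_0^1\frac{ds}{s\left(1+\log\frac{1}{s}\right)}=\int_1^{+\infty}\frac{du}{u}=+\infty .
% By contrast, \mu(s)=s^\alpha with \alpha\in(0,1) does not satisfy the Osgood condition:
\int_0^1 s^{-\alpha}\,ds=\frac{1}{1-\alpha}<+\infty .
```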

We state our main result.

Theorem 1

Let L be the operator

$$\begin{aligned} L=\partial _t+\sum _{j,k=1}^n \partial _{x_j}\left( a_{j,k}(t,x)\partial _{x_k}\right) +\sum _{j=1}^n b_j(t,x)\partial _{x_j}+c(t,x), \end{aligned}$$
(7)

where all the coefficients are supposed to be complex valued, defined in \([0,\,T]\times {\mathbb {R}}^n\), measurable and bounded. Let \((a_{j,k}(t,x))_{j,k}\) be a real symmetric matrix and suppose there exists \(\lambda _0\in (0,\,1]\) such that

$$\begin{aligned} \sum _{j,k=1}^n a_{j,k}(t,x)\xi _j\xi _k\ge \lambda _0|\xi |^2, \end{aligned}$$
(8)

for all \((t,x)\in [0,\,T]\times {\mathbb {R}}^n\) and for all \(\xi \in {\mathbb {R}}^n\). Under this condition, L is a backward parabolic operator. Let \({\mathcal {H}}\) be the space of functions such that

$$\begin{aligned} \mathcal H=H^1\left( (0,T), L^2\left( {\mathbb {R}}^n\right) \right) \cap L^2 \left( (0,T), H^2\left( {\mathbb {R}}^n\right) \right) . \end{aligned}$$
(9)

Let \(\mu\) be a modulus of continuity satisfying the Osgood condition. Suppose that there exist \(\alpha \in (0,\,1)\) and \(C>0\) such that,

  i)

    for all \(j,k=1,\dots , n\),

    $$\begin{aligned} a_{j,k}\in C^{0,\alpha }\left( [0,T], L^\infty \left( {\mathbb {R}}^n\right) \right) \cap L^\infty \left( [0,T], Lip\left( {\mathbb {R}}^n\right) \right) ; \end{aligned}$$
    (10)
  ii)

    for all \(j,k=1,\dots , n\) and for all \(t\in (0,T]\),

    $$\begin{aligned} \sup _{\begin{array}{c} s_1,\,s_2\in [t,T],\\ x\in {\mathbb {R}}^n \end{array}}\frac{|a_{j,k}(s_1,x)-a_{j,k}(s_2,x)|}{\mu (|s_1-s_2|)}\le C t^{\alpha -1}. \end{aligned}$$
    (11)

Then L has the \(\mathcal H\)-uniqueness property, i.e., if \(u\in \mathcal H\), \(Lu=0\) in \([0,T]\times {\mathbb {R}}^n\) and \(u(0,x)=0\) in \({\mathbb {R}}^n\), then \(u=0\) in \([0,T]\times {\mathbb {R}}^n\).

Remark 4

The hypothesis (10), in particular the Hölder regularity with respect to t, is due to a technical requirement in the proof of the Carleman estimate from which the main result is deduced. It does not seem easy to replace it with different or weaker conditions.

3 Weight function and Carleman estimate

Defining

$$\begin{aligned} \phi (t)= \int _{\frac{1}{t}}^1 \frac{1}{\mu (s)}\, \mathrm{{d}}s, \end{aligned}$$
(12)

the function \(\phi\) is a strictly increasing \(C^1\) function on \([1,+\infty )\), with values in \([0,+\infty )\), and, by the Osgood condition, it is bijective. Moreover, for all \(t\in [1,+\infty )\),

$$\begin{aligned} \phi '(t)= \frac{1}{t^2\mu \left( \frac{1}{t}\right) }. \end{aligned}$$

We remark that \(\phi '(1)=1\) and \(\phi '\) is decreasing in \([1, +\infty )\), so that \(\phi\) is a concave function. Moreover, we notice also that \(\phi ^{-1}:[0,+\infty )\rightarrow [1,+\infty )\) and, for all \(s\in [0,+\infty )\),

$$\begin{aligned} \phi ^{-1}(s)\ge 1+s. \end{aligned}$$

We define

$$\begin{aligned} \psi _\gamma (\tau )= \phi ^{-1}\left( \gamma \int _0^{\frac{\tau }{\gamma } }{\left( T-s\right) ^{\alpha -1}}\, \mathrm{{d}}s\right) , \end{aligned}$$
(13)

where \(\tau \in [0,\gamma T]\). By definition,

$$\begin{aligned} \phi (\psi _\gamma (\tau ))=\gamma \int _0^{\frac{\tau }{\gamma } }{(T-s)^{\alpha -1}}\, \mathrm{{d}}s \end{aligned}$$

and

$$\begin{aligned} \phi '(\psi _\gamma (\tau ))\psi '_\gamma (\tau )=\left( T-\frac{\tau }{\gamma }\right) ^{\alpha -1}. \end{aligned}$$

Then

$$\begin{aligned} \psi '_\gamma (\tau )=\left( T-\frac{\tau }{\gamma }\right) ^{\alpha -1} \cdot (\psi _\gamma (\tau ))^2\mu \left( \frac{1}{\psi _\gamma (\tau )}\right) , \end{aligned}$$

i.e., \(\psi _\gamma\) is a solution to the differential equation

$$\begin{aligned} u'(\tau )= \left( T-\frac{\tau }{\gamma }\right) ^{\alpha -1} \, u^2(\tau )\, \mu \left( \frac{1}{u(\tau )}\right) . \end{aligned}$$

Finally we set, for \(\tau \in [0, \gamma T]\),

$$\begin{aligned} \Phi _\gamma (\tau )= \int _0^\tau \psi _\gamma (\sigma )\,\mathrm{{d}}\sigma . \end{aligned}$$
(14)

Note that, with this definition, \(\Phi _\gamma '(\tau )= \psi _\gamma (\tau )\) and

$$\begin{aligned} \Phi ''_\gamma (\tau )= \left( T-\frac{\tau }{\gamma }\right) ^{\alpha -1}\, (\Phi _\gamma ' (\tau ))^2 \, \mu \left( \frac{1}{\Phi _\gamma '(\tau )}\right) . \end{aligned}$$
(15)

In particular, for \(t\in (0,\frac{T}{2}]\),

$$\begin{aligned} \Phi ''_\gamma \left( \gamma (T-t)\right) = t^{\alpha -1} \, \Phi _\gamma ' (\gamma (T-t))\frac{\mu \left( \frac{1}{\Phi _\gamma '(\gamma (T-t))}\right) }{\frac{1}{\Phi _\gamma '(\gamma (T-t))}}\ge t^{\alpha -1} \ge \left( \frac{T}{2}\right) ^{\alpha -1}, \end{aligned}$$
(16)

since \(\Phi _\gamma ' (\gamma (T-t))= \psi _\gamma (\gamma (T-t))\ge 1\) and \(\frac{\mu (s)}{s}\ge 1\) for all \(s\in (0,1]\).

We can now state the Carleman estimate.

Theorem 2

Under the hypotheses of Theorem 1, there exist \(\gamma _0>0\) and \(C>0\) such that

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} e^{\frac{2}{\gamma } \Phi _\gamma (\gamma (T-t))}\Vert \partial _tu+\sum _{j,k=1}^n\partial _{x_j}\left( a_{j,k}(t,x)\partial _{x_k}u\right) \Vert ^2_{L^2}\, \mathrm{{d}}t}\\ \displaystyle {\ge C\gamma ^{\frac{1}{2}} \int _0^{\frac{T}{2}} e^{\frac{2}{\gamma } \Phi _\gamma (\gamma (T-t))} \left( \Vert \nabla _x u\Vert ^2_{L^2 }+\gamma ^{\frac{1}{2}}\Vert u\Vert ^2_{L^2 }\right) \, \mathrm{{d}}t} \end{array} \end{aligned}$$
(17)

for all \(\gamma >\gamma _0\) and for all \(u\in C^\infty _0\left( {\mathbb {R}}^{n+1}\right)\) such that \({\mathrm{Supp}} \,u\subseteq \left[ 0,\frac{T}{2}\right] \times {\mathbb {R}}^n\).

The way of obtaining the \({\mathcal {H}}\)-uniqueness property from the inequality (17) is a standard procedure, the details of which can be found in [9, Par. 3.4].

4 Proof of the Carleman estimate

4.1 Littlewood–Paley decomposition

We will use the so-called Littlewood–Paley theory. We refer to [2, 3, 15] and [1] for the details. Let \(\psi \in C^{\infty }([0,+\infty ), {\mathbb {R}})\) be such that \(\psi\) is non-increasing and

$$\begin{aligned} \psi (t)=1\quad \text {for}\quad 0\le t\le \frac{11}{10}, \qquad \psi (t)=0\quad \text {for}\quad t\ge \frac{19}{10}. \end{aligned}$$

We set, for \(\xi \in {\mathbb {R}}^n\),

$$\begin{aligned} \chi (\xi )=\psi (|\xi |), \qquad \varphi (\xi )= \chi (\xi )-\chi (2\xi ). \end{aligned}$$
(18)

Given a tempered distribution u, the dyadic blocks are defined by

$$\begin{aligned} u_0= & {} \Delta _0 u= \chi (D)u ={\mathcal F}^{-1}(\chi (\xi )\hat{u}(\xi )), \\ u_j= & {} \Delta _j u= \varphi (2^{-j}D)u={\mathcal F}^{-1}(\varphi (2^{-j}\xi )\hat{u}(\xi ))\quad \text {if} \quad j\ge 1, \end{aligned}$$

where we have denoted by \({\mathcal F}^{-1}\) the inverse of the Fourier transform. We introduce also the operator

$$\begin{aligned} S_ku=\sum _{j=0}^{k} \Delta _ju= {\mathcal F}^{-1}\left( \chi \left( 2^{-k}\xi \right) \hat{u}(\xi )\right) . \end{aligned}$$

We recall some well-known facts on the Littlewood–Paley decomposition.

Proposition 1

([8, Prop. 3.1]) Let \(s\in {\mathbb {R}}\). A tempered distribution u is in \(H^s\) if and only if, for all \(j\in {\mathbb {N}}\), \(\Delta _ju\in L^2\) and

$$\begin{aligned} \sum _{j=0}^{+\infty } 2^{2js}\Vert \Delta _ju \Vert _{L^2}^2<+\infty . \end{aligned}$$

Moreover, there exists \(C>1\), depending only on n and s, such that, for all \(u\in H^s\),

$$\begin{aligned} \frac{1}{C} \sum _{j=0}^{+\infty } 2^{2js}\Vert \Delta _ju \Vert _{L^2}^2\le \Vert u\Vert ^2_{H^s}\le C \sum _{j=0}^{+\infty } 2^{2js}\Vert \Delta _ju \Vert _{L^2}^2. \end{aligned}$$
(19)

Proposition 2

([12, Lemma 3.2]). A bounded function a is a Lipschitz-continuous function if and only if

$$\begin{aligned} \sup _{k\in {\mathbb {N}}}\Vert \nabla (S_ k a)\Vert _{L^\infty }<+\infty . \end{aligned}$$

Moreover, there exists \(C>0\), depending only on n, such that, for all \(a\in Lip\) and for all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \Vert \Delta _k a\Vert _{L^\infty }\le C\, 2^{-k}\, \Vert a\Vert _\mathrm{{Lip}}\qquad \text {and}\qquad \Vert \nabla (S_k a)\Vert _{L^\infty }\le C\, \Vert a\Vert _\mathrm{{Lip}}, \end{aligned}$$
(20)

where \(\Vert a\Vert _\mathrm{{Lip}}=\Vert a\Vert _{L^\infty }+\Vert \nabla a\Vert _{L^\infty }\).

4.2 Modified Bony’s paraproduct

Definition 4

Let \(m\in {\mathbb {N}}\setminus \{0\}\), \(a\in L^\infty\) and \(s\in {\mathbb {R}}\). For all \(u\in H^s\), we define

$$\begin{aligned} T^m_a u= S_{m-1}aS_{m+1}u + \sum _{k=m-1}^{+\infty } S_k a \Delta _{k+3} u. \end{aligned}$$

We recall some known facts on the modified Bony paraproduct.

Proposition 3

([15, Prop. 5.2.1 and Th. 5.2.8]). Let \(m\in {\mathbb {N}}\setminus \{0\}\), \(a\in L^\infty\) and \(s\in {\mathbb {R}}\).

Then \(T^m_a\) maps \(H^s\) into \(H^s\) and there exists \(C>0\) depending only on n, m and s, such that, for all \(u\in H^s\),

$$\begin{aligned} \Vert T^m_a u\Vert _{H^s} \le C \Vert a\Vert _{L^\infty }\, \Vert u\Vert _{H^s}. \end{aligned}$$
(21)

Let \(m\in {\mathbb {N}}\setminus \{0\}\) and let \(a\in Lip\).

Then \(a-T^m_a\) maps \(L^2\) into \(H^1\) and there exists \(C'>0\) depending only on n, m, such that, for all \(u\in L^2\),

$$\begin{aligned} \Vert au -T^m_a u\Vert _{H^{1}} \le C' \Vert a\Vert _{Lip}\, \Vert u\Vert _{L^2}. \end{aligned}$$
(22)

Proposition 4

([8, Cor. 3.12]) Let \(a\in Lip\). Suppose that, for all \(x\in {\mathbb {R}}^n\), \(a(x)\ge \lambda _0>0\).

Then, there exists \(m\in {\mathbb {N}}\setminus \{0\}\), depending on \(\lambda _0\) and \(\Vert a\Vert _\mathrm{{Lip}}\), such that for all \(u\in L^2\),

$$\begin{aligned} \mathrm{Re}\, \left\langle T^m_au, u\right\rangle _{L^2, L^2}\ge \frac{\lambda _0}{2}\Vert u\Vert ^2_{L^2}. \end{aligned}$$
(23)

A similar result remains valid when u is a vector-valued function and a is replaced by a positive-definite matrix \((a_{j,k})_{j,k}\).

Proposition 5

([8, Prop. 3.8 and Prop. 3.11] and [10, Prop. 3.8]) Let \(m\in {\mathbb {N}}\setminus \{0\}\) and \(a\in Lip\). Let \((T^m_a)^*\) be the adjoint operator of \(T^m_a\).

Then, there exists \(C>0\) depending only on n and m such that for all \(u\in L^2\),

$$\begin{aligned} \Vert (T^m_a-(T^m_a)^*)\partial _{x_j}u\Vert _{L^2}\le C \Vert a\Vert _\mathrm{{Lip}} \Vert u\Vert _{L^2}. \end{aligned}$$
(24)

We end this subsection with a property which will be needed in the proof of the Carleman estimate.

Proposition 6

([10, Prop. 3.8]) Let \(m\in {\mathbb {N}}\setminus \{0\}\) and let \(a\in Lip\). Denote by \(\left[ \Delta _h, T^m_a\right]\) the commutator between \(\Delta _h\) and \(T^m_a\).

Then, there exists \(C>0\) depending only on n and m such that for all \(u\in H^1\),

$$\begin{aligned} \left( \sum _{h=0}^{+\infty } \Vert \partial _{x_j}\left( \left[ \Delta _h, T^m_a\right] \partial _{x_k}u\right) \Vert ^2_{L^2}\right) ^{\frac{1}{2}}\le C \Vert a\Vert _\mathrm{{Lip}}\Vert u\Vert _{H^1}. \end{aligned}$$
(25)

4.3 Approximated Carleman estimate

Setting

$$\begin{aligned} v(t,x)= e^{\frac{1}{\gamma } \Phi _\gamma (\gamma (T-t))} u(t,x), \end{aligned}$$

the Carleman estimate (17) becomes: there exist \(\gamma _0>0\), \(C>0\) such that

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv+\sum _{j,k=1}^n\partial _{x_j}( a_{j,k}(t,x)\partial _{x_k}v)+ \Phi '_\gamma (\gamma (T-t))v \Vert ^2_{L^2}\, {\rm{d}}t} \\ \displaystyle {\ge C\gamma ^{\frac{1}{2}} \int _0^{\frac{T}{2}} \left(\Vert \nabla _x v\Vert ^2_{L^2}+\gamma ^{\frac{1}{2}}\Vert v\Vert ^2_{L^2}\right)\, {\rm{d}}t,} \end{array} \end{aligned}$$
(26)

for all \(\gamma >\gamma _0\) and for all \(v\in C^\infty _0\left( {\mathbb {R}}^{n+1}\right)\) such that \({\mathrm{Supp}} \,v\subseteq \left[ 0,\frac{T}{2}\right] \times {\mathbb {R}}^n\).

First of all, using Proposition 4, we fix a value for m in such a way that

$$\begin{aligned} \mathrm{Re}\, \sum _{j,k}\left\langle T^m_{a_{j,k}}\partial _{x_k}v, \partial _{x_j}v\right\rangle _{L^2, L^2}\ge \frac{\lambda _0}{2}\Vert \nabla _x v\Vert ^2_{L^2}, \end{aligned}$$
(27)

for all \(v\in C^\infty _0\left( {\mathbb {R}}^{n+1}\right)\) such that \({\mathrm{Supp}} \,v\subseteq \left[ 0,\frac{T}{2}\right] \times {\mathbb {R}}^n\). Next, we use Proposition 3: in particular, from (22) we deduce that (26) will be a consequence of

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v)+ \Phi '_\gamma (\gamma (T-t))v \Vert ^2_{L^2}\, dt} \\ \displaystyle {\ge C\gamma ^{\frac{1}{2}} \int _0^{\frac{T}{2}} (\Vert \nabla _x v\Vert ^2_{L^2}+\gamma ^{\frac{1}{2}}\Vert v\Vert ^2_{L^2})\, dt,} \end{array} \end{aligned}$$
(28)

since the difference between (26) and (28) can be absorbed by the right-hand side of (28), possibly with different values of C and \(\gamma _0\). By a similar argument, using (19) and (25), (28) will follow from

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}}\sum _{h=0}^{+\infty }\Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}\left( T^m_{a_{j,k}}\partial _{x_k}v_h\right) + \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, \mathrm{{d}}t}\\ \displaystyle {\ge C\gamma ^{\frac{1}{2}} \int _0^{\frac{T}{2}} \sum _{h=0}^{+\infty }\left( \Vert \nabla _x v_h\Vert ^2_{L^2}+\gamma ^{\frac{1}{2}}\Vert v_h\Vert ^2_{L^2}\right) \, \mathrm{{d}}t,} \end{array} \end{aligned}$$
(29)

where we have denoted by \(v_h\) the dyadic block \(\Delta _h v\).

We fix our attention on each of the terms

$$\begin{aligned} \int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}\left( T^m_{a_{j,k}}\partial _{x_k}v_h\right) + \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, \mathrm{{d}}t. \end{aligned}$$

We have

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, {\rm{d}}t}\\ \displaystyle {= \int _0^{\frac{T}{2}} \Big ( \Vert \partial _tv_h\Vert ^2_{L^2}+\Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h) + \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}}\\ \quad \displaystyle {+ \gamma \Phi ''_\gamma (\gamma (T-t))\Vert v_h \Vert ^2_{L^2} +2\,\mathrm{Re}\, \big \langle \partial _tv_h, \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\big \rangle _{L^2, L^2} \Big )\, {\rm{d}}t.} \end{array} \end{aligned}$$
(30)

Let us consider the last term in (30). We define, for \(\varepsilon \in (0, \frac{T}{2}]\),

$$\begin{aligned} {\tilde{a}}_{j,k, \varepsilon }(t,x)=\left\{ \begin{array}{ll} a_{j,k} (T,x),\qquad &{} \text {if}\ \ t\ge T\ \text {and}\ x\in {\mathbb {R}}^n ,\\ a_{j,k} (t,x),\qquad &{} \text {if}\ \ \varepsilon \le t\le T\ \text {and}\ x\in {\mathbb {R}}^n ,\\ a_{j,k} (\varepsilon ,x),\qquad &{} \text {if}\ \ t<\varepsilon \ \text {and}\ x\in {\mathbb {R}}^n , \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} a_{j,k,\varepsilon }(t,x)= \int _{-\varepsilon }^{\varepsilon } \rho _\varepsilon (s) {\tilde{a}}_{j,k,\varepsilon }(t-s, x)\,ds, \end{aligned}$$

where \(\rho \in C^\infty _0({\mathbb {R}})\) with \({\mathrm{Supp}} \,\rho \subseteq [-1,\,1]\), \(\int _{\mathbb {R}}\rho (s)ds=1\), \(\rho (s)\ge 0\) and \(\rho _\varepsilon (s)= \frac{1}{\varepsilon } \rho (\frac{s}{\varepsilon })\). With a straightforward computation, from (10) and (11), we obtain

$$\begin{aligned} |a_{j,k}(t,x)-a_{j,k, \varepsilon }(t,x)|\le C\, \min \left\{ \varepsilon ^\alpha ,\, t^{\alpha -1}\, \mu (\varepsilon )\right\} , \end{aligned}$$
(31)

and

$$\begin{aligned} |\partial _t a_{j,k, \varepsilon }(t,x)|\le C\min \left\{ \varepsilon ^{\alpha -1},\, t^{\alpha -1}\,\frac{ \mu (\varepsilon )}{\varepsilon }\right\} , \end{aligned}$$
(32)

for all \(j,k=1,\dots , n\) and for all \((t,x)\in \left[ 0, \frac{T}{2}\right] \times {\mathbb {R}}^n\). We deduce

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}}2\,\mathrm{Re}\, \left\langle \partial _tv_h, \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\right\rangle _{L^2, L^2} \, \mathrm{{d}}t}\\ \displaystyle {=-2\,\mathrm{Re}\, \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \left\langle \partial _{x_j} \partial _tv_h,\; T^m_{a_{j,k}} \partial _{x_k} v_h\right\rangle _{L^2, L^2} \, \mathrm{{d}}t}\\ \displaystyle {=-2\,\mathrm{Re}\, \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \left\langle \partial _{x_j} \partial _tv_h,\; (T^m_{a_{j,k}}-T^m_{a_{j,k,\varepsilon }}) \partial _{x_k} v_h\right\rangle _{L^2, L^2} \, \mathrm{{d}}t}\\ \quad {-2\,\mathrm{Re}\, \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \left\langle \partial _{x_j} \partial _tv_h,\; T^m_{a_{j,k,\varepsilon }} \partial _{x_k} v_h\right\rangle _{L^2, L^2} \, \mathrm{{d}}t}. \end{array} \end{aligned}$$

Now, \(T^m_{a_{j,k}}-T^m_{a_{j,k,\varepsilon }}= T^m_{a_{j,k}-a_{j,k,\varepsilon }}\) and, from (21) and (31),

$$\begin{aligned} \begin{array}{ll} \displaystyle { \Vert \left( T^m_{a_{j,k}}-T^m_{a_{j,k,\varepsilon }}\right) \partial _{x_k} v_h\Vert _{L^2}}&{}\displaystyle {=\Vert T^m_{a_{j,k}-a_{j,k,\varepsilon }}\partial _{x_k} v_h\Vert _{L^2}}\\ &{}\displaystyle {\le C\, \min \{\varepsilon ^\alpha , t^{\alpha -1}\mu (\varepsilon )\} \Vert \partial _{x_k} v_h\Vert _{L^2}.} \end{array} \end{aligned}$$

Moreover \(\Vert \partial _{x_j}v_h\Vert _{L^2}\le 2^{h+1} \Vert v_h\Vert _{L^2}\) and \(\Vert \partial _{x_j}\partial _t v_h\Vert _{L^2}\le 2^{h+1} \Vert \partial _t v_h\Vert _{L^2}\), so that

$$\begin{aligned} \begin{array}{ll} \displaystyle {\Big \vert 2\,\mathrm{Re}\, \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} \partial _tv_h,\; \left( T^m_{a_{j,k}}-T^m_{a_{j,k,\varepsilon }}\right) \partial _{x_k} v_h\big \rangle _{L^2, L^2} \, dt\Big \vert }\\ \qquad \displaystyle {\le 2C \, \int _0^{\frac{T}{2}} \min \left\{ \varepsilon ^\alpha , t^{\alpha -1}\mu (\varepsilon )\right\} \sum _{j,k=1}^n \Vert \partial _{x_j}\partial _t v_h\Vert _{L^2}\Vert \partial _{x_k}v_h\Vert _{L^2}\,dt}\\ \qquad \displaystyle {\le \frac{C}{N} \, \int _0^{\frac{T}{2}} \Vert \partial _t v_h\Vert ^2_{L^2}\,dt +CN\, 2^{4(h+1)} \, \int _0^{\frac{T}{2}} \min \left\{ \varepsilon ^\alpha , t^{\alpha -1}\mu (\varepsilon )\right\} \Vert v_h\Vert ^2_{L^2}\, \mathrm{{d}}t}, \end{array} \end{aligned}$$

where C depends only on n, m and \(\Vert a_{j,k}\Vert _{L^\infty }\) and \(N>0\) can be chosen arbitrarily.
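The splitting in the last step uses the weighted Young inequality, recalled here with a free parameter \(N>0\) (the identification of a and b below is ours):

```latex
2ab\ \le\ \frac{a^{2}}{N}+N\,b^{2},\qquad a,b\ge 0,\ N>0,
% applied, up to constants, with a=\Vert\partial_t v_h\Vert_{L^2} and
% b=2^{2(h+1)}\min\{\varepsilon^\alpha,\,t^{\alpha-1}\mu(\varepsilon)\}\,\Vert v_h\Vert_{L^2},
% using also that \min\{\varepsilon^\alpha,\,t^{\alpha-1}\mu(\varepsilon)\} is bounded.
```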

Similarly

$$\begin{aligned} \begin{array}{ll} \displaystyle {-2\,\mathrm{Re}\, \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} \partial _tv_h,\; T^m_{a_{j,k,\varepsilon }} \partial _{x_k} v_h\big \rangle _{L^2, L^2} \, \mathrm{{d}}t}\\ \qquad \quad \displaystyle {=\int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} v_h,\; T^m_{ \partial _t a_{j,k,\varepsilon }} \partial _{x_k} v_h\big \rangle _{L^2, L^2} \, \mathrm{{d}}t}\\ \qquad \quad \quad \displaystyle {+ \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} v_h,\; \left( T^m_{a_{j,k,\varepsilon }}-\left( T^m_{a_{j,k,\varepsilon }}\right) ^*\right) \partial _{x_k} \partial _t v_h\big \rangle _{L^2, L^2} \, \mathrm{{d}}t}. \end{array} \end{aligned}$$

From (21) and (32), we have

$$\begin{aligned} \begin{array}{ll} \displaystyle {\Big \vert \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} v_h,\; T^m_{ \partial _t a_{j,k,\varepsilon }} \partial _{x_k} v_h\big \rangle _{L^2, L^2} \, dt\Big \vert }\\ \qquad \displaystyle { \le C\, 2^{2(h+1)} \int _0^{\frac{T}{2}} \min \{\varepsilon ^{\alpha -1}, t^{\alpha -1}\frac{\mu (\varepsilon )}{\varepsilon }\} \Vert v_h\Vert ^2_{L^2}\, dt}, \end{array} \end{aligned}$$

and, from (24),

$$\begin{aligned} \begin{array}{ll} \displaystyle {\Big \vert \int _0^{\frac{T}{2}} \sum _{j,k=1}^n \big \langle \partial _{x_j} v_h,\; (T^m_{a_{j,k,\varepsilon }}-(T^m_{a_{j,k,\varepsilon }})^*) \partial _{x_k} \partial _t v_h\big \rangle _{L^2, L^2} \, dt\Big \vert }\\ \qquad \displaystyle {\le C \int _0^{\frac{T}{2}} \Vert \nabla v_h\Vert _{L^2} \Vert \partial _t v_h\Vert _{L^2}\, dt}\\ \qquad \displaystyle {\le \frac{C}{N} \int _0^{\frac{T}{2}} \Vert \partial _t v_h\Vert _{L^2}^2\,dt + CN \, 2^{2(h+1)} \int _0^{\frac{T}{2}} \Vert v_h\Vert _{L^2}^2\,dt}, \end{array} \end{aligned}$$

where C depends only on n, m and \(\Vert a_{j,k}\Vert _{Lip}\) and \(N>0\) can be chosen arbitrarily.

As a conclusion, from (30), we finally obtain

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, {\rm{d}}t}\\ \qquad \displaystyle {\ge \int _0^{\frac{T}{2}} \Big ( \Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h) + \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2} }\\ \qquad\quad \displaystyle {+\,\gamma \Phi ''_\gamma (\gamma (T-t))\Vert v_h \Vert ^2_{L^2} -C \big ( 2^{4(h+1)} \min \{\varepsilon ^{\alpha }, t^{\alpha -1}\mu (\varepsilon )\} }\\ \qquad\quad \displaystyle {+\, 2^{2(h+1)}(\min \{\varepsilon ^{\alpha -1}, t^{\alpha -1}\frac{\mu (\varepsilon )}{\varepsilon }\} + 1)\big ) \Vert v_h \Vert ^2_{L^2}\Big )\, {\rm{d}}t.}\end{array} \end{aligned}$$
(33)

4.4 End of the proof

We start by considering (33) for \(h=0\), and we fix \(\varepsilon =\frac{1}{2}\). Recalling (16), we have

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_0+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_0)+ \Phi '_\gamma (\gamma (T-t))v_0 \Vert ^2_{L^2}\, dt}\\ \qquad \qquad \displaystyle {\ge \int _0^{\frac{T}{2}} (\gamma \Phi ''_\gamma (\gamma (T-t))- C')\Vert v_0 \Vert ^2_{L^2}\, dt }\\ \qquad \qquad \displaystyle {\ge \int _0^{\frac{T}{2}} (\gamma (\frac{T}{2})^{\alpha -1}- C')\Vert v_0 \Vert ^2_{L^2} \, dt}. \end{array} \end{aligned}$$

Choosing a suitable \(\gamma _0\), we have that, for all \(\gamma >\gamma _0\),

$$\begin{aligned} \int _0^{\frac{T}{2}} \Vert \partial _tv_0+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_0)+ \Phi '_\gamma (\gamma (T-t))v_0 \Vert ^2_{L^2}\, dt \ge \frac{\gamma }{2} \int _0^{\frac{T}{2}} \Vert v_0 \Vert ^2_{L^2}\, dt. \end{aligned}$$
(34)

We consider (33) for \(h\ge 1\). Choosing \(\varepsilon =2^{-2h}\), we have

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \displaystyle {\ge \int _0^{\frac{T}{2}} \Big ( \Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h) + \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2} }\\ \quad \displaystyle {+\big (\gamma \Phi ''_\gamma (\gamma (T-t))-C ( 2^{4h} \min \{2^{-2h\alpha }, t^{\alpha -1}\mu (2^{-2h})\} +2^{2h})\big ) \Vert v_h \Vert ^2_{L^2}\Big )\, dt}\\ \displaystyle {\ge \int _0^{\frac{T}{2}} \Big ( \big(\Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\Vert _{L^2} - \Phi '_\gamma (\gamma (T-t))\Vert v_h \Vert _{L^2} \big )^2 }\\ \quad \displaystyle {+\big (\gamma \Phi ''_\gamma (\gamma (T-t))-C ( 2^{4h} \min \{2^{-2h\alpha }, t^{\alpha -1}\mu (2^{-2h})\} +2^{2h})\big ) \Vert v_h \Vert ^2_{L^2}\Big )\, dt.}\\ \end{array} \end{aligned}$$

From (27) it is possible to deduce that

$$\begin{aligned} \Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\Vert _{L^2}\ge \frac{\lambda _0}{8}\, 2^{2h} \Vert v_h\Vert _{L^2}. \end{aligned}$$
(35)
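Although not spelled out in the text, a sketch of the deduction of (35) from (27) can be given using the spectral localization of \(v_h\), which for \(h\ge 1\) yields the lower Bernstein bound \(\Vert \nabla _x v_h\Vert _{L^2}\ge 2^{h-1}\Vert v_h\Vert _{L^2}\):

```latex
% Integration by parts (up to a sign), Cauchy--Schwarz and (27):
\Bigl\|\sum_{j,k=1}^n\partial_{x_j}\bigl(T^m_{a_{j,k}}\partial_{x_k}v_h\bigr)\Bigr\|_{L^2}
\,\|v_h\|_{L^2}
\ \ge\ \mathrm{Re}\,\sum_{j,k=1}^n\bigl\langle T^m_{a_{j,k}}\partial_{x_k}v_h,
\partial_{x_j}v_h\bigr\rangle_{L^2,L^2}
\ \ge\ \frac{\lambda_0}{2}\,\|\nabla_x v_h\|^2_{L^2}
\ \ge\ \frac{\lambda_0}{8}\,2^{2h}\,\|v_h\|^2_{L^2}.
```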

Suppose first that

$$\begin{aligned} \Phi '_\gamma (\gamma (T-t))\le \frac{\lambda _0}{16} 2^{2h}. \end{aligned}$$

From (35) we have

$$\begin{aligned} \Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\Vert _{L^2} - \Phi '_\gamma (\gamma (T-t))\Vert v_h \Vert _{L^2} \ge \frac{\lambda _0}{16}\, 2^{2h} \Vert v_h \Vert _{L^2} \end{aligned}$$

and then, using also (16), we obtain

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \displaystyle {\ge \int _0^{\frac{T}{2}} \Big ( \big(\Vert \sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)\Vert _{L^2} - \Phi '_\gamma (\gamma (T-t))\Vert v_h \Vert _{L^2} \big )^2 }\\ \quad \displaystyle {+\big (\gamma \Phi ''_\gamma (\gamma (T-t))-C ( 2^{4h} \min \{2^{-2h\alpha }, t^{\alpha -1}\mu (2^{-2h})\} +2^{2h})\big ) \Vert v_h \Vert ^2_{L^2}\Big )\, dt} \\ \displaystyle {\ge \int _0^{\frac{T}{2}} \Big ( (\frac{\lambda _0}{16}\, 2^{2h} )^2 +\gamma (\frac{T}{2})^{\alpha -1}-C ( 2^{(4-2\alpha ) h} )\Big ) \Vert v_h \Vert ^2_{L^2}\, dt.} \end{array} \end{aligned}$$

Then, there exist \(\gamma _0>0\) and \(C>0\) such that, for all \(\gamma > \gamma _0\) and for all \(h\ge 1\),

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \displaystyle {\ge C \int _0^{\frac{T}{2}} (\gamma + \gamma ^{\frac{1}{2}}2^{2h}) \Vert v_h \Vert ^2_{L^2}\, dt.} \end{array} \end{aligned}$$
(36)

Suppose finally that

$$\begin{aligned} \Phi '_\gamma (\gamma (T-t))\ge \frac{\lambda _0}{16} 2^{2h}. \end{aligned}$$

From (15), the fact that \(\lambda _0 \le 1\) and the properties of the modulus of continuity \(\mu\) we obtain

$$\begin{aligned} \begin{array}{ll} \displaystyle {\Phi ''_\gamma (\gamma (T-t))}&{}\displaystyle {=t^{\alpha -1} (\Phi _\gamma ' (\gamma (T-t)))^2 \, \mu (\frac{1}{\Phi _\gamma '(\gamma (T-t))})}\\ &{}\displaystyle {\ge t^{\alpha -1} (\frac{\lambda _0}{16})^2 2^{4h}\mu (\frac{16}{\lambda _0} 2^{-2h}) \ge t^{\alpha -1} (\frac{\lambda _0}{16})^2 2^{4h}\mu (2^{-2h})} \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ll} \displaystyle {\Phi ''_\gamma (\gamma (T-t))}&{}\displaystyle {=t^{\alpha -1} (\Phi _\gamma ' (\gamma (T-t)))^2 \, \mu (\frac{1}{\Phi _\gamma '(\gamma (T-t))})}\\ &{}\displaystyle {= t^{\alpha -1} \, \Phi _\gamma ' (\gamma (T-t))\frac{\mu (\frac{1}{\Phi _\gamma '(\gamma (T-t))})}{\frac{1}{\Phi _\gamma '(\gamma (T-t))}}\ge (\frac{T}{2})^{\alpha -1}.} \end{array} \end{aligned}$$

Consequently

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \displaystyle {\ge \int _0^{\frac{T}{2}} \big (\gamma \Phi ''_\gamma (\gamma (T-t))-C ( 2^{4h} \min \{2^{-2h\alpha }, t^{\alpha -1}\mu (2^{-2h})\} +2^{2h})\big ) \Vert v_h \Vert ^2_{L^2}\, dt}\\ \displaystyle {\ge \int _0^{\frac{T}{2}} \Big(\frac{\gamma }{2} \big(t^{\alpha -1} (\frac{\lambda _0}{16})^2 2^{4h}\mu (2^{-2h})+ (\frac{T}{2})^{\alpha -1}\big ) -C \big ( t^{\alpha -1}2^{4 h}\mu (2^{-2h}) +2^{2h} \big )\Big) \Vert v_h \Vert ^2_{L^2}\, dt.} \end{array} \end{aligned}$$

Then, there exist \(\gamma _0>0\) and \(C>0\) such that, for all \(\gamma > \gamma _0\) and for all \(h\ge 1\),

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \qquad \qquad \qquad \qquad \displaystyle {\ge C \gamma \int _0^{\frac{T}{2}} (1 + 2^{2h}) \Vert v_h \Vert ^2_{L^2}\, dt.} \end{array} \end{aligned}$$
(37)

In conclusion, from (34), (36) and (37), there exist \(\gamma _0>0\) and \(C>0\) such that, for all \(\gamma > \gamma _0\) and for all \(h\in {\mathbb {N}}\),

$$\begin{aligned} \begin{array}{ll} \displaystyle {\int _0^{\frac{T}{2}} \Vert \partial _tv_h+\sum _{j,k=1}^n\partial _{x_j}( T^m_{a_{j,k}}\partial _{x_k}v_h)+ \Phi '_\gamma (\gamma (T-t))v_h \Vert ^2_{L^2}\, dt}\\ \qquad \qquad \qquad \qquad \displaystyle {\ge C \int _0^{\frac{T}{2}} (\gamma + \gamma ^{\frac{1}{2}}2^{2h}) \Vert v_h \Vert ^2_{L^2}\, dt} \end{array} \end{aligned}$$
(38)

and (29) follows. The proof is complete.

5 A counterexample

Theorem 3

There exists

$$\begin{aligned} l\in \big (\bigcap _{\alpha \in [0,1[} C^{0,\alpha }({\mathbb {R}})\big )\cap C^\infty ({\mathbb {R}}\setminus \{0\}) \end{aligned}$$

with

$$\begin{aligned}&\frac{1}{2}\le l(t)\le \frac{3}{2},\qquad \text {for all}\ \ t\in {\mathbb {R}}, \end{aligned}$$
(39)
$$\begin{aligned}&|l'(t)|\le C_\varepsilon |t|^{-(1+\varepsilon )},\qquad \text {for all}\ \ \varepsilon >0 \ \ \text {and}\ \ t\in {\mathbb {R}}\setminus \{0\}, \end{aligned}$$
(40)

and there exist \(u, \; b_1, \; b_2, \; c\in C^\infty _b({\mathbb {R}}_t\times {\mathbb {R}}^2_x)\), with

$$\begin{aligned} {\mathrm{Supp}} \,u=\{(t,x)\in {\mathbb {R}}_t\times {\mathbb {R}}^2_x\,\big | \, t\ge 0\}, \end{aligned}$$

such that

$$\begin{aligned} \partial _t u +\partial ^2_{x_1} u+ l\partial ^2_{x_2}u+b_1\partial _{x_1}u+b_2\partial _{x_2}u+cu=0 \qquad \text {in}\ \ {\mathbb {R}}_t\times {\mathbb {R}}^2_x . \end{aligned}$$

Remark 5

In fact, the function l constructed in the proof will satisfy

$$\begin{aligned} \sup _{t\not =0} (\frac{|t|}{1+|\log |t||})|l'(t)|<+\infty . \end{aligned}$$
(41)

From (41) it is easy to obtain (40).
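Indeed, the function \(l\) constructed below satisfies \(l'=0\) outside \((a_1,0)\subset (-1,0)\); since \(|\log |t||\le C_\varepsilon |t|^{-\varepsilon }\) for \(0<|t|<1\) and every \(\varepsilon >0\), (41) gives

$$\begin{aligned} |l'(t)|\le C\, \frac{1+|\log |t||}{|t|}\le C\, \frac{1+C_\varepsilon |t|^{-\varepsilon }}{|t|}\le C_\varepsilon |t|^{-(1+\varepsilon )}, \end{aligned}$$

which is (40).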

Proof

We will follow the proof of Theorem 1 in [16] (see also Theorem 3 in [9]). Let \(A, \; B, \; C,\; J\) be four \(C^\infty\) functions defined on \({\mathbb {R}}\), with

$$\begin{aligned} 0\le A(s),\, B(s),\; C(s)\le 1 \quad \text {and}\quad -2\le J(s)\le 2,\quad \text {for all} \ \ s\in {\mathbb {R}}, \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ll} A(s)=1,\quad \text {for}\ \ s\le \frac{1}{5},\qquad \qquad &{} A(s)=0,\quad \text {for}\ \ s\ge \frac{1}{4},\\ B(s)=0,\quad \text {for}\ \ s\le 0 \ \ \text {or}\ \ s\ge 1,\qquad \qquad &{} B(s)=1,\quad \text {for}\ \ \frac{1}{6}\le s\le \frac{1}{2},\\ C(s)=0,\quad \text {for}\ \ s\le \frac{1}{4},\qquad \qquad &{} C(s)=1,\quad \text {for}\ \ s\ge \frac{1}{3},\\ J(s)=-2,\quad \text {for}\ \ s\le \frac{1}{6} \ \ \text {or}\ \ s\ge \frac{1}{2},\qquad \qquad &{} J(s)=2,\quad \text {for}\ \ \frac{1}{5}\le s\le \frac{1}{3}. \end{array} \end{aligned}$$

Let \((a_n)_n,\; (z_n)_n\) be two real sequences such that

$$\begin{aligned}&-1<a_n<a_{n+1}, \quad \text {for all} \ \ n\ge 1, \quad \text {and}\quad \lim _n a_n=0, \end{aligned}$$
(42)
$$\begin{aligned}&1<z_n<z_{n+1}, \quad \text {for all} \ \ n\ge 1, \quad \text {and}\quad \lim _n z_n=+\infty . \end{aligned}$$
(43)

We define

$$\begin{aligned} \begin{array}{c} r_n= a_{n+1}-a_n,\\ q_1=0 \quad \text {and}\quad \displaystyle {q_n=\sum _{k=2}^n z_kr_{k-1},} \quad \text {for}\ \ n\ge 2,\\ p_n=(z_{n+1}-z_n)r_n. \end{array} \end{aligned}$$

We require

$$\begin{aligned} p_n>1,\quad \text {for all}\ \ n\ge 1. \end{aligned}$$
(44)

We set

$$\begin{aligned} A_n(t)= & {} A(\frac{t-a_n}{r_n}),\quad B _n(t)= B(\frac{t-a_n}{r_n}), \\ C_n(t)= & {} C(\frac{t-a_n}{r_n}),\quad J_n(t)= J(\frac{t-a_n}{r_n}). \end{aligned}$$

We define

$$\begin{aligned} \begin{array}{ll} v_n(t,x_1)= \exp (-q_n-z_n(t-a_n))\cos \sqrt{z_n} \,x_1,\\ w_n(t,x_2)= \exp (-q_n-z_n(t-a_n)+J_n(t)p_n)\cos \sqrt{z_n} \,x_2, \end{array} \\ \begin{array}{ll} u(t, x_1,x_2)\\ =\left\{ \begin{array}{ll} v_1(t, x_1),&{}\text {for} \ \ t\le a_1,\\ A_n(t)v_n(t,x_1)+B_n(t)w_n(t,x_2)+C_n(t)v_{n+1}(t,x_1), &{} \text {for} \ \ a_n\le t\le a_{n+1},\\ 0,&{} \text {for} \ \ t\ge 0.\end{array} \right. \end{array} \end{aligned}$$
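Let us remark that, by the definition of \(q_n\), one has \(q_{n+1}=q_n+z_{n+1}r_n\), so that on \([a_n,a_{n+1}]\) the function \(v_{n+1}\) can be written with the same reference exponential as \(v_n\):

$$\begin{aligned} v_{n+1}(t,x_1)= \exp (-q_n-z_{n+1}(t-a_n))\cos \sqrt{z_{n+1}} \,x_1. \end{aligned}$$

In particular, u is continuous at each node \(t=a_n\): on both adjacent intervals only the term \(v_n\) survives there, since \(A_n(a_n)=1\), \(B_n(a_n)=C_n(a_n)=0\), while \(A_{n-1}(a_n)=B_{n-1}(a_n)=0\) and \(C_{n-1}(a_n)=1\).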

The condition

$$\begin{aligned} \lim _n \, \exp (-q_n+2p_n) z_{n+1}^\alpha p_n^\beta r_n^{-\gamma }=0, \qquad \text {for all}\ \ \alpha ,\,\beta ,\,\gamma >0, \end{aligned}$$
(45)

implies that \(u\in C^\infty _b({\mathbb {R}}_t\times {\mathbb {R}}^2_x)\).
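Indeed, each derivative of the generic term in u produces at most polynomial factors in \(z_{n+1}\), \(p_n\) and \(r_n^{-1}\), while on \([a_n,a_{n+1}]\) the amplitudes are controlled by the quantity appearing in (45); for instance, since \(z_n(t-a_n)\ge 0\) and \(J_n\le 2\),

$$\begin{aligned} |w_n(t,x_2)|\le \exp (-q_n-z_n(t-a_n)+2p_n)\le \exp (-q_n+2p_n), \qquad \text {for}\ \ a_n\le t\le a_{n+1}. \end{aligned}$$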

We define

$$\begin{aligned} l(t)=\left\{ \begin{array}{ll} 1,&{}\text {for} \ \ t\le a_1\ \ \text {or}\ \ t\ge 0,\\ 1-J'_n(t)p_nz_n^{-1},\qquad &{} \text {for} \ \ a_n\le t\le a_{n+1}. \end{array} \right. \end{aligned}$$
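This choice of l is dictated by the requirement that \(w_n\) solve the equation on \([a_n,a_{n+1}]\): a direct computation, using \(\partial _tw_n=(-z_n+J'_n(t)p_n)w_n\) and \(\partial ^2_{x_2}w_n=-z_nw_n\), gives

$$\begin{aligned} \partial _tw_n-\partial ^2_{x_1}w_n-l(t)\partial ^2_{x_2}w_n =\big (-z_n+J'_n(t)p_n+l(t)z_n\big )w_n, \end{aligned}$$

which vanishes precisely when \(l(t)=1-J'_n(t)p_nz_n^{-1}\); on the other hand, \(v_n\) and \(v_{n+1}\) are annihilated by \(\partial _t-\partial ^2_{x_1}\) for any choice of l.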

The function l is of class \(C^\infty ({\mathbb {R}}\setminus \{0\})\). The condition

$$\begin{aligned} \sup _n \{p_nr_n^{-1} z_n^{-1}\}\le \frac{1}{2\Vert J'\Vert _{L^\infty }} \end{aligned}$$
(46)

implies (39), i.e., the operator

$$\begin{aligned} L=\partial _t-\partial ^2_{x_1}-l(t)\partial ^2_{x_2} \end{aligned}$$

is a parabolic operator. Moreover, l is in \(\bigcap _{\alpha \in [0,1[} C^{0,\alpha }({\mathbb {R}})\) if

$$\begin{aligned} \sup _n\{p_nr_n^{-1-\alpha }z_n^{-1} \}<+\infty , \qquad \text {for all}\ \ \alpha \in [0,1[. \end{aligned}$$
(47)

Finally, we define

$$\begin{aligned} \begin{array}{l} \displaystyle {b_1=-\frac{Lu}{u^2+(\partial _{x_1}u)^2+(\partial _{x_2}u)^2}\partial _{x_1}u,} \\ \displaystyle {b_2=-\frac{Lu}{u^2+(\partial _{x_1}u)^2+(\partial _{x_2}u)^2}\partial _{x_2}u,} \\ \displaystyle {c=-\frac{Lu}{u^2+(\partial _{x_1}u)^2+(\partial _{x_2}u)^2}u.} \end{array} \end{aligned}$$
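With these definitions the equation is satisfied by construction: wherever \(u^2+(\partial _{x_1}u)^2+(\partial _{x_2}u)^2\not =0\),

$$\begin{aligned} Lu+b_1\partial _{x_1}u+b_2\partial _{x_2}u+cu =Lu-\frac{Lu}{u^2+(\partial _{x_1}u)^2+(\partial _{x_2}u)^2} \big ((\partial _{x_1}u)^2+(\partial _{x_2}u)^2+u^2\big )=0. \end{aligned}$$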

As in [16] and [9], the functions \(b_1,\, b_2, \, c\) are in \(C^\infty _b({\mathbb {R}}_t\times {\mathbb {R}}^2_x)\) if

$$\begin{aligned} \lim _n \, \exp (-p_n) z_{n+1}^\alpha p_n^\beta r_n^{-\gamma }=0, \qquad \text {for all}\ \ \alpha ,\,\beta ,\,\gamma >0. \end{aligned}$$
(48)

We choose, for \(j_0\ge 2\),

$$\begin{aligned} a_n=-{e^{-\sqrt{\log (n+j_0)}}},\qquad \qquad z_n=(n+j_0)^3. \end{aligned}$$
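By the mean value theorem applied to \(f(s)=-e^{-\sqrt{\log (s+j_0)}}\), for every \(n\ge 1\) there exists \(\sigma _n\in (n,n+1)\) such that

$$\begin{aligned} r_n=f(n+1)-f(n)=f'(\sigma _n) =e^{-\sqrt{\log (\sigma _n+j_0)}}\, \frac{1}{2(\sigma _n+j_0)\sqrt{\log (\sigma _n+j_0)}}. \end{aligned}$$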

With this choice, (42) and (43) are satisfied and we have

$$\begin{aligned} r_n\sim e^{-\sqrt{\log (n+j_0)}}\, \frac{1}{(n+j_0)\sqrt{\log (n+j_0)}}, \end{aligned}$$

where, for sequences \((f_n)_n,\, (g_n)_n\), \(f_n\sim g_n\) means \(\lim _n \frac{f_n}{g_n} = \lambda\), for some \(\lambda >0\). Similarly

$$\begin{aligned} p_n\sim e^{-\sqrt{\log (n+j_0)}}\, \frac{n+j_0}{\sqrt{\log (n+j_0)}} \end{aligned}$$

and condition (44) is verified, for a suitable fixed \(j_0\). Moreover, for \(j_0\) suitably large, we have

$$\begin{aligned} q_n = \sum _{k=2}^n z_kr_{k-1}\ge z_nr_{n-1}\ge \lambda (n+j_0)^{\frac{7}{4}} \end{aligned}$$

and

$$\begin{aligned} p_n\le \lambda (n+j_0)^{\frac{5}{4}} \end{aligned}$$

for some \(\lambda >0\). Finally

$$\begin{aligned} p_n r_n^{-1} z_n^{-1}\sim \frac{1}{n+j_0}. \end{aligned}$$
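For instance, condition (47) follows from the previous asymptotics: for every \(\alpha \in [0,1[\),

$$\begin{aligned} p_n\, r_n^{-1-\alpha }\, z_n^{-1} =\big (p_n\, r_n^{-1}\, z_n^{-1}\big )\, r_n^{-\alpha } \sim \frac{1}{n+j_0}\, \Big ( e^{\sqrt{\log (n+j_0)}}\, (n+j_0)\sqrt{\log (n+j_0)}\Big )^{\alpha }, \end{aligned}$$

which tends to 0, since \(e^{\alpha \sqrt{\log (n+j_0)}}\) grows slower than any positive power of \(n+j_0\).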

As a consequence, (45), (46), (47) and (48) are satisfied for a suitable fixed \(j_0\). It remains to check (41). We have

$$\begin{aligned} |l'(t)|\le \Vert J''\Vert _{L^\infty } p_n\, r_n^{-2}\,z_n^{-1},\qquad \text {for} \ \ a_n\le t\le a_{n+1} \end{aligned}$$

and consequently

$$\begin{aligned} \begin{array}{ll} \displaystyle {\sup _{t\not =0} (\frac{|t|}{1+|\log |t||})|l'(t)| }&{}=\ \displaystyle { \sup _n \sup _{t\in [a_n,a_{n+1}]} (\frac{|t|}{1+|\log |t||})|l'(t)|}\\ &{}\le \ \displaystyle {\sup _n \, (\frac{|a_n|}{1-\log |a_n|}) \Vert J''\Vert _{L^\infty } p_n\, r_n^{-2}\,z_n^{-1}}\\ &{}\le C. \end{array} \end{aligned}$$
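The last inequality holds since \(|a_n|=e^{-\sqrt{\log (n+j_0)}}\) and \(1-\log |a_n|=1+\sqrt{\log (n+j_0)}\), while, by the previous asymptotics,

$$\begin{aligned} p_n\, r_n^{-2}\, z_n^{-1}=\big (p_n\, r_n^{-1}\, z_n^{-1}\big )\, r_n^{-1} \sim \sqrt{\log (n+j_0)}\; e^{\sqrt{\log (n+j_0)}}, \end{aligned}$$

so that the product behaves like \(\frac{\sqrt{\log (n+j_0)}}{1+\sqrt{\log (n+j_0)}}\), which is bounded.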

The conclusion of the theorem is reached by simply exchanging t with \(-t\). \(\square\)