1 Introduction and main result

Let us consider for \((t,x) \in [0,T] \times {\mathbb {R}}\) the Cauchy problem in the unknown \(u=u(t,x)\):

$$\begin{aligned} {\left\{ \begin{array}{ll} P(t, x, D_t, D_x) u(t,x)= f(t, x) \\ u(0,x) = g(x) \end{array}\right. }, \end{aligned}$$
(1.1)

where

$$\begin{aligned} P(t, x, D_t, D_x) = D_t + a_p(t)D^{p}_{x} + \displaystyle \sum _{j=0}^{p-1}a_j(t,x)D^{j}_{x}, \end{aligned}$$
(1.2)

with \(D=\frac{1}{i}\partial ,\) \(p\ge 2\), \(a_p\in C([0,T],{\mathbb {R}}), a_p(t) \ne 0\) for \(t \in [0,T]\), and \(a_j \in C([0,T],C^\infty ({\mathbb {R}};{\mathbb {C}}))\), \(j = 0, \ldots , p-1\). The operator P is known in the literature as a \(p\)-evolution operator, cf. [27], and p is called the evolution degree. The well-posedness of (1.1), (1.2) has been investigated in various functional settings for arbitrary p, cf. [3,4,5, 9]. Further results concern special values of p which correspond to classes of operators of particular interest in Mathematical Physics, cf. [6,7,8, 10, 20, 22] for the case \(p=2\) and [1] for the case \(p=3\). The condition that \(a_p\) is real-valued means that the principal symbol of P (in the sense of Petrowski) has the real characteristic \(\tau =-a_p(t)\xi ^p\); this guarantees that operator (1.2) satisfies the assumptions of the Lax–Mizohata theorem. The presence of complex-valued coefficients in the lower-order terms of (1.2) plays a crucial role in the analysis of problem (1.1) in all the above-mentioned papers. In fact, when the coefficients \(a_j(t,x)\), \(j = 0, \ldots , p-1,\) are real valued and of class \({\mathcal {B}}^\infty \) with respect to x (that is, uniformly bounded together with all their x-derivatives), it is well known that problem (1.1) is well-posed in \(L^2({\mathbb {R}})\) (and in the \(L^2\)-based Sobolev spaces \(H^{m}\), \(m\in {\mathbb {R}}\)). On the contrary, if any of the coefficients \(a_j(t,x)\) is complex valued, then in order to obtain well-posedness either in \(L^2({\mathbb {R}})\) or in \(H^{\infty }({\mathbb {R}})=\cap _{m\in {\mathbb {R}}}H^m({\mathbb {R}})\), some decay conditions at infinity on the imaginary parts of the coefficients \(a_j\) are needed (see [4, 20]).

Sufficient conditions for well-posedness in \(L^2\) and \(H^\infty \) have been given in [8] and [22] for the case \(p=2\), and in [3] for larger p. Considering the Cauchy problem (1.1) in the framework of the weighted Sobolev–Kato spaces \(H^m= H^{(m_1,m_2)}\), with \(m=(m_1,m_2) \in {\mathbb {R}}^2\), defined as

$$\begin{aligned} H^{m}({\mathbb {R}})=\{u \in {\mathscr {S}}'({\mathbb {R}}) : \langle x \rangle ^{m_2} \langle D_{x} \rangle ^{m_1} u \in L^{2}({\mathbb {R}}) \}, \end{aligned}$$
(1.3)

where \(\langle x\rangle ^{m_{2}}\langle D_{x}\rangle ^{m_{1}}\) denotes the operator with symbol \(\langle x\rangle ^{m_2}\langle \xi \rangle ^{m_{1}}\), and assuming the coefficients \(a_j\) to be polynomially bounded, the second and the third author obtained in [5] well-posedness also in the Schwartz space \({\mathscr {S}}({\mathbb {R}})\) of smooth and rapidly decreasing functions and in the dual space \({\mathscr {S}}'({\mathbb {R}})\). We recall that \({\mathscr {S}}({\mathbb {R}}) = \cap _{m \in {\mathbb {R}}^2} H^{m}({\mathbb {R}})\) and \({\mathscr {S}}'({\mathbb {R}}) = \cup _{m \in {\mathbb {R}}^2} H^{m}({\mathbb {R}})\). In short, the above-mentioned results can be summarized as follows: if

(1.4)

problem (1.1) is well-posed in:

  • \(L^{2}({\mathbb {R}})\), \(H^{m}({\mathbb {R}})\) for every \(m \in {\mathbb {R}}^2\) when \(\sigma > 1\);

  • \(H^{\infty }({\mathbb {R}})\), \({\mathscr {S}}({\mathbb {R}})\) when \(\sigma = 1\). In general, a finite loss of regularity of the solution with respect to the initial data is observed in the case \(\sigma =1\).

Now we want to consider the case when an estimate of form (1.4) for \(j=p-1\) holds for some \(\sigma \in (0,1).\) In this situation, there are no results in the literature for p-evolution operators of arbitrary order. In [10, 22] the case \(p=2\), which corresponds to Schrödinger-type equations, is considered assuming \(0<\sigma <1\) and \(C_\beta = C^{|\beta |+1} \beta !^{s_0}\) for some \(s_0 \in (1, 1/(1-\sigma ))\) in (1.4). The authors find well-posedness results in certain Gevrey spaces of order \(\theta \) with \(s_0 \le \theta <1/(1-\sigma )\), namely in the class

$$\begin{aligned} {\mathcal {H}}^\infty _\theta = \bigcup _{\rho >0}H^m_{\rho ;\theta },\quad H^m_{\rho ;\theta } := \{u \in L^2 : e^{\rho \langle D \rangle ^{1/\theta }}u \in H^m \},\quad m\in {\mathbb {R}}. \end{aligned}$$
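In other words, with \({\widehat{u}}\) denoting the Fourier transform of u, membership in \(H^m_{\rho ;\theta }\) amounts to an exponential decay of \({\widehat{u}}\):

$$\begin{aligned} u \in H^m_{\rho ;\theta } \iff \langle \xi \rangle ^{m} e^{\rho \langle \xi \rangle ^{1/\theta }} {\widehat{u}}(\xi ) \in L^{2}({\mathbb {R}}), \end{aligned}$$

so that, roughly speaking, u is Gevrey regular of order \(\theta \) and the constant \(\rho \) plays the role of a Gevrey radius.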

In both papers, starting from data \(f, g\) in \(H^m_{\rho ;\theta }\) for some \(\rho >0\), the authors obtain a solution in \(H^m_{\rho -\delta ;\theta }\) for some \(\delta >0\) such that \(\rho -\delta >0.\) This means a sort of loss of regularity in the constant \(\rho \) which governs the Gevrey behavior. We also notice that the condition \(s_0 \le \theta <1/(1-\sigma )\) means that the rate of decay of the coefficients of P imposes a restriction on the spaces \({\mathcal {H}}^\infty _\theta \) in which problem (1.1) is well posed. The case \(\theta >s_0=1/(1-\sigma )\) is investigated in [7], where the authors prove that a decay condition as \(|x|\rightarrow \infty \) on a datum in \(H^m\), \(m\ge 0\), produces a solution with (at least locally) the same regularity as the data, but with a different behavior at infinity. In the recent paper [6] the role of data with exponential decay on the regularity of the solution has also been analyzed for 2-evolution equations in arbitrary space dimension, in the frame of Gelfand–Shilov-type spaces, which can be seen as the global counterpart of classical Gevrey spaces, cf. Sect. 2.1. In particular, it is proved there that, starting from data with an exponential decay at infinity, one can find a solution with the same Gevrey regularity as the data but with a possible exponential growth at infinity in x. Moreover, this holds for every \(\theta >s_0\). Finally, the result in [6] is proved under an assumption more general than (1.4), which allows the coefficients \(a_1,\ a_0\) to admit an algebraic growth at infinity, namely

(1.5)
(1.6)

Recently, we started to consider the case \(p=3\) in a Gevrey setting in one space dimension under assumption (1.4) with \(j=p-1\) and \(\sigma \in (0,1)\). This case is of particular interest because linear 3-evolution equations can be regarded as linearizations of relevant physical semilinear models such as the KdV and KdV–Burgers equations and their generalizations, see for instance [21, 24,25,26, 30]. There are some results concerning KdV-type equations with coefficients not depending on \((t,x)\) in the Gevrey setting, see [16,17,18]. Our aim is to treat the more general case of variable coefficients. The present paper and [1] are devoted to establishing the linear theory, which is a preliminary step toward the treatment of the semilinear case. In a future paper we shall consider the case when the coefficients \(a_j\) may also depend on u, following the approach developed in [2] in the \(H^\infty \) setting.

Also in the case \(p=3\), a condition of the form (1.4) with \(j=2\) on the coefficient \(a_2\) for some \(\sigma \in (0,1)\), namely

(1.7)

is in general enough to destroy well-posedness both in \(H^\infty \) and in \({\mathscr {S}}\), since the necessary condition for \(H^\infty \) well-posedness

(1.8)

proved in [4] is no longer satisfied. Namely, well-posedness in \(H^\infty \) or in \({\mathscr {S}}\) may fail due to an infinite loss of regularity or of decay. To give an idea of the latter phenomenon, consider the following initial value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} D_t u+D^3_x u +a_2(t,x)D_x^2 u +a_1(t,x)D_x u +a_0(t,x)u =0 \\ u(0,x) = e^{-\langle x \rangle ^{1-\sigma }} \end{array}\right. }, \quad (t,x) \in [0,T]\times {\mathbb {R}}, \end{aligned}$$
(1.9)

where

$$\begin{aligned} a_2(t,x)&= i(t-1)(1-\sigma ) x \langle x \rangle ^{-\sigma -1},\\ a_1(t,x)&= 2(t-1)(1-\sigma ) [\langle x \rangle ^{-\sigma -1}-(\sigma +1)x^2 \langle x \rangle ^{-\sigma -3}], \\ a_0(t,x)&= i \langle x \rangle ^{1-\sigma }+i(t-1)(1-\sigma ^2)[3x \langle x \rangle ^{-\sigma -3}-(\sigma +3)x^3 \langle x \rangle ^{-\sigma -5}]. \end{aligned}$$

Notice that the coefficients \(a_j\) are analytic and satisfy conditions (1.4) for \(j=2\) and (1.5), (1.6) for \(j=0,1\). Moreover the initial datum belongs to \({\mathscr {S}}({\mathbb {R}})\) since \(\sigma \in (0,1)\). It is easy to verify that problem (1.9) admits the solution

$$\begin{aligned} u(t,x)= e^{(t-1)\langle x \rangle ^{1-\sigma }} \notin C([0,T], {\mathscr {S}}({\mathbb {R}})), \end{aligned}$$

if \(T \ge 1\). Analogously, \(u \notin C([0,T], H^\infty ({\mathbb {R}}))\). More precisely, we notice that the solution has the same regularity as the initial datum, but it grows exponentially for \(|x| \rightarrow \infty \) when \(t > 1.\) This motivates us to study the effect of an exponential decay of the data on the solution of (1.1).
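Let us sketch the computation behind this example. Setting \(\varphi (t,x):= (t-1)\langle x \rangle ^{1-\sigma }\), we have \(u = e^{\varphi }\), \(u(0,x) = e^{-\langle x \rangle ^{1-\sigma }}\), and the coefficients above can be written as \(a_2 = i\partial _x\varphi \), \(a_1 = 2\partial ^2_x\varphi \), \(a_0 = i\partial _t\varphi - i\partial ^3_x\varphi \). Then

$$\begin{aligned} D_t u = -i(\partial _t\varphi ) u, \quad D_x u = -i(\partial _x\varphi ) u, \quad D^2_x u = -\big (\partial ^2_x\varphi +(\partial _x\varphi )^2\big ) u, \quad D^3_x u = i\big (\partial ^3_x\varphi +3\,\partial _x\varphi \,\partial ^2_x\varphi +(\partial _x\varphi )^3\big ) u, \end{aligned}$$

and, summing up and cancelling the terms involving \(\partial _t\varphi \), \(\partial ^3_x\varphi \) and \((\partial _x\varphi )^3\), we are left with

$$\begin{aligned} D_t u+D^3_x u +a_2D_x^2 u +a_1D_x u +a_0u = i\big (3\,\partial _x\varphi \,\partial ^2_x\varphi - \partial _x\varphi \,\partial ^2_x\varphi - 2\,\partial _x\varphi \,\partial ^2_x\varphi \big )u = 0. \end{aligned}$$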

In the recent paper [1] we proved a result of well-posedness in the space \({\mathcal {H}}^\infty _{\theta }\) for problem (1.1) which extends to the case \(p=3\) the results obtained in [10, 22] for the case \(p=2\) (at least in one space dimension). As in the latter case, also in [1] a loss of regularity in the index \(\rho \) appears. However, the previous example suggests that this loss can be avoided by assuming that the initial data admit a suitable exponential decay. The price to pay is a considerable loss of decay, which may produce solutions admitting an exponential growth. In view of the considerations above, it is quite natural to analyze problem (1.1) when the initial data belong to Gelfand–Shilov spaces, cf. Sect. 2.1 for the definition.

In order to state our main result we need to recall the definition of Gevrey-type \({\textbf {{SG}}}\)-symbol classes and of Gelfand–Shilov Sobolev spaces.

Given \(\mu ,\, \nu \, \ge 1\), \(m = (m_1, m_2) \in {\mathbb {R}}^{2}\), we denote by \({\textbf {{SG}}}^{m_1,m_2}_{\mu , \nu }({\mathbb {R}}^{2})\) (or by \({\textbf {{SG}}}^{m}_{\mu , \nu }({\mathbb {R}}^{2})\)) the space of all functions \(p \in C^\infty ({\mathbb {R}}^{2})\) for which there exist \(C,C_1 > 0\) such that

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}p(x,\xi )| \le C_1 C^{|\alpha +\beta |}\alpha !^{\mu }\beta !^{\nu } \langle \xi \rangle ^{m_1 -|\alpha |} \langle x \rangle ^{m_2 - |\beta |}, \quad x,\xi \in {\mathbb {R}}, \alpha , \beta \in {\mathbb {N}}, \end{aligned}$$

see also Definition 2. In the case \(\mu =\nu \) we write \({\textbf {{SG}}}^{m_1,m_2}_{\mu }({\mathbb {R}}^{2})\) instead of \({\textbf {{SG}}}^{m_1,m_2}_{\mu , \mu }({\mathbb {R}}^{2})\). In the following we shall obtain our results via energy estimates; hence, we need to introduce the Gelfand–Shilov Sobolev spaces \(H^{m}_{\rho ;s,\theta }({\mathbb {R}})\) defined, for \(m = (m_1, m_2)\), \(\rho = (\rho _1, \rho _2) \in {\mathbb {R}}^{2}\) and \(\theta , s > 1\), by

$$\begin{aligned} H^{m}_{\rho ; s,\theta } ({\mathbb {R}}) = \{ u \in {\mathscr {S}}'({\mathbb {R}}) : \langle x \rangle ^{m_2} \langle D \rangle ^{m_1} e^{\rho _2 \langle x \rangle ^{\frac{1}{s}}} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} u \in L^{2}({\mathbb {R}}) \}, \end{aligned}$$

where \(e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}\) is the Fourier multiplier with symbol \(e^{\rho _1 \langle \xi \rangle ^{\frac{1}{\theta }}}\). When \(\rho = (0, 0)\) we recover the usual notion of weighted Sobolev spaces (1.3).

Our pseudodifferential approach allows us to consider more general 3-evolution operators of the form

$$\begin{aligned} P(t, D_t, x, D_x) = D_t + a_3(t, D_x) + a_2(t, x, D_x) + a_1(t, x, D_x) + a_0(t, x, D_x), \end{aligned}$$
(1.10)

\(t \in [0, T],\ x \in {\mathbb {R}},\) where \(a_3(t,D_x)\) is a pseudodifferential operator with symbol \(a_3(t,\xi )\in {\mathbb {R}}\), while, for \(j=0,1,2\), \(a_j(t,x,D_x)\) are pseudodifferential operators with symbols \(a_j(t,x,\xi )\in {\mathbb {C}}\). Notice that (1.2) in the case \(p=3\) is a particular case of (1.10). Our main result reads as follows.

Theorem 1

Let \(P(t, x, D_t, D_x)\) be an operator as in (1.10) and assume that there exist \(C_{a_3}, R_{a_3} > 0\) and \(\sigma \in (0, 1)\) such that the following conditions hold:

  1. (i)

    \(a_3 \in C([0, T], {\textbf {{SG}}}^{3, 0}_{1}({\mathbb {R}}^2))\), \(a_3\) is real valued and

    $$\begin{aligned} |\partial _{\xi } a_3(t, \xi )| \ge C_{a_3}|\xi |^{2}, \qquad \forall |\xi | \ge R_{a_3}, \,\, \forall t \in [0, T]; \end{aligned}$$
  2. (ii)

    ;

  3. (iii)

    ;

  4. (iv)

    .

Let \(s, \theta > 1\) such that \(s_0 \le s < \frac{1}{1-\sigma }\) and \(\theta > s_0\). Let \(f\in C([0,T]; H^{m}_{\rho ; s, \theta }({\mathbb {R}}))\) and \(g\in H^{m}_{\rho ; s, \theta }({\mathbb {R}})\), where \(m=(m_1,m_2),\rho =(\rho _1,\rho _2) \in {\mathbb {R}}^2\) and \(\rho _2 > 0\). Then the Cauchy problem (1.1) admits a solution \(u\in C([0,T]; H^{m}_{(\rho _1,-{\tilde{\delta }}); s,\theta }({\mathbb {R}}))\) for every \({\tilde{\delta }}>0\), which satisfies the following energy estimate

$$\begin{aligned} \Vert u(t) \Vert ^{2}_{H^{m}_{(\rho _1, -{\tilde{\delta }}); s, \theta }} \le C \left( \Vert g \Vert ^{2}_{H^m_{\rho ; s, \theta }} + \int _{0}^{t} \Vert f (\tau ) \Vert ^{2}_{H^{m}_{\rho ; s, \theta }} d\tau \right) , \end{aligned}$$
(1.11)

for all \(t\in [0,T]\) and for some \(C>0\).

Remark 1

We notice that the solution obtained in Theorem 1 has the same Gevrey regularity as the initial data, but it may lose the decay exhibited at \(t=0\) and admit an exponential growth for \(|x| \rightarrow \infty \) when \(t>0\). Moreover, the loss \(\rho _2+{{\tilde{\delta }}}\), for an arbitrary \({{\tilde{\delta }}}>0\), in the behavior at infinity is independent of \(\theta \), s and \(\rho _1.\) Both these phenomena had already been observed in the case \(p=2\), see [6].

Remark 2

Let us compare Theorem 1 with the recent result obtained in [1]. In the latter paper, taking \(a_0\) uniformly bounded, \(a_1 \sim \langle x\rangle ^{-\sigma /2} \) and \( a_2 \sim \langle x\rangle ^{-\sigma }\) for some \(\sigma \in (1/2,1)\), and the Cauchy data \(f(t),g\in H^{(m_1,0)}_{(\rho _1,0);s,\theta }\) with \( s_0<\theta <1/(2(1-\sigma ))\), we prove the existence of a unique solution \(u \in C([0,T], H^{(m_1,0)}_{(\rho _1',0);s,\theta }({\mathbb {R}}))\), for some \(\rho _1' \in (0,\rho _1)\), i.e., a solution less regular than the data. Theorem 1 in the present paper shows that if the data \(f(t),g\in H^{(m_1,m_2)}_{(\rho _1,\rho _2);s,\theta }\) with \(\rho _2>0\), \(\theta >s_0\) and \(s_0 \le s< 1/(1-\sigma )\), then there exists a solution \(u \in C([0,T], H^{m}_{(\rho _1,-{\tilde{\delta }});s,\theta })\), \(\forall {\tilde{\delta }}>0\), i.e., a solution with the same index \(\rho _1\) as the data, but with a possibly worse behavior at infinity: in particular, this solution may grow exponentially for \(|x|\rightarrow \infty \). Concerning the assumptions, with respect to the existing literature, in particular [1, 22], in our result we allow:

  • A polynomial growth of exponent \(1-\sigma \in (0,1)\) for the coefficients \(a_1\) and \(a_0\);

  • An arbitrary Gevrey regularity index \(\theta >s_0\) both for the data and for the solution, without any upper bound: namely there is no relation between the rate of decay of the data and the Gevrey regularity of the solution.

Remark 3

Part of the recent literature on p-evolution equations is focused on the search for necessary conditions for the well-posedness of problem (1.1) in various functional settings, see [4, 13, 20]. Necessary conditions are usually expressed in an integral form as in (1.8) rather than via pointwise decay estimates as in (1.7). As far as we know, the only result of this type in the Gevrey setting concerns the case \(p=2\), see [13]. Our purpose is to investigate this problem in the near future for general p.

In order to guide the reader through the next sections, we briefly outline the strategy of the proof of Theorem 1. Let

$$\begin{aligned} iP = \partial _{t} + ia_3(t,D) + \sum _{j = 0}^{2} ia_j(t,x,D) = \partial _{t} + ia_{3}(t,D) + A(t,x,D). \end{aligned}$$

Noticing that \(a_3(t,\xi )\) is real valued, we have

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert u(t) \Vert ^{2}_{L^{2}}&= 2\mathrm{Re}\, (\partial _{t} u(t), u(t))_{L^{2}} \\&= 2\mathrm{Re}\, (iPu(t), u(t))_{L^2} - 2\mathrm{Re}\, (ia_3(t,D) u(t), u(t))_{L^2} \\&\quad - 2\mathrm{Re}\,(Au(t), u(t))_{L^2} \\&\le \Vert Pu(t) \Vert ^{2}_{L^{2}} + \Vert u(t) \Vert ^{2}_{L^{2}} - \,((A+A^{*})u(t), u(t))_{L^2}. \end{aligned}$$
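The last inequality follows from three elementary observations: \(2\mathrm{Re}\,(Au(t), u(t))_{L^2} = ((A+A^{*})u(t), u(t))_{L^2}\), the term involving \(a_3\) vanishes because \(a_3(t,\xi )\) is real valued, and the term involving Pu is estimated via the Cauchy–Schwarz and Young inequalities; in formulas (with the standard normalization of the Fourier transform),

$$\begin{aligned} \mathrm{Re}\, (ia_3(t,D) u(t), u(t))_{L^2} = \frac{1}{2\pi }\, \mathrm{Re} \int _{{\mathbb {R}}} i\, a_3(t,\xi )\, |\widehat{u(t)}(\xi )|^{2}\, d\xi = 0, \qquad 2\mathrm{Re}\, (iPu(t), u(t))_{L^2} \le 2\Vert Pu(t)\Vert _{L^{2}}\Vert u(t)\Vert _{L^{2}} \le \Vert Pu(t) \Vert ^{2}_{L^{2}} + \Vert u(t) \Vert ^{2}_{L^{2}}. \end{aligned}$$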

Since \((A+A^{*})(t) \in {\textbf {{SG}}}^{2, 1-\sigma }({\mathbb {R}}^{2})\) we cannot derive an energy inequality in \(L^2\) from the estimate above. The idea is then to conjugate the operator \({ iP}\) by a suitable pseudodifferential operator \(e^{\Lambda }(t,x,D)\) in order to obtain

$$\begin{aligned} (iP)_{\Lambda } := e^{\Lambda } (iP) \{e^{\Lambda }\}^{-1} = \partial _{t} + ia_3(t,D) + A_{\Lambda }(t,x,D), \end{aligned}$$

where \(A_{\Lambda }\) still has symbol \(A_\Lambda (t,x,\xi ) \in {\textbf {{SG}}}^{2, 1-\sigma }({\mathbb {R}}^2)\) but with \(\mathrm{Re}\, A_{\Lambda } \ge 0\). In this way, with the aid of the Fefferman–Phong inequality (see [14]) and the sharp Gårding inequality (see Theorem 1.7.15 of [28]), we obtain the estimate from below

$$\begin{aligned} \mathrm{Re}\, (A_{\Lambda } v(t), v(t))_{L^{2}} \ge -c \Vert v(t) \Vert ^{2}_{L^{2}}, \end{aligned}$$

and therefore for the solution v of the Cauchy problem associated with the operator \(P_\Lambda \) we get

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert v(t) \Vert ^{2}_{L^{2}} \le C(\Vert (iP)_{\Lambda }v(t) \Vert ^{2}_{L^{2}} + \Vert v(t) \Vert ^{2}_{L^{2}}). \end{aligned}$$

Gronwall's inequality then gives the desired energy estimate for the conjugated operator \((iP)_{\Lambda }\). By standard arguments in the energy method we then obtain that the Cauchy problem associated with \(P_{\Lambda }\)

$$\begin{aligned} {\left\{ \begin{array}{ll} P_{\Lambda }(t,D_t, x, D_x) v(t,x) = e^{\Lambda }(t,x,D_x) f(t,x), \quad (t,x) \in [0,T] \times {\mathbb {R}}\\ v(0,x)= e^{\Lambda }(0,x, D_x)g(x), \quad x \in {\mathbb {R}}, \end{array}\right. } \end{aligned}$$
(1.12)

is well-posed in the weighted Sobolev spaces \(H^{m}({\mathbb {R}})\) defined in (1.3). Finally, we derive the existence of a solution of (1.1) from the well-posedness of (1.12). In fact, if u solves (1.1) then \(v=e^{\Lambda }u\) solves (1.12), and if v solves (1.12) then \(u=\{e^{\Lambda }\}^{-1}v\) solves (1.1). In this step the continuous mapping properties of \(e^{\Lambda }\) and \(\{e^{\Lambda }\}^{-1}\) will play an important role.
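In formulas, recalling that \((iP)_{\Lambda } = e^{\Lambda }(iP)\{e^{\Lambda }\}^{-1}\) and using that \(\{e^{\Lambda }\}^{-1}\) is an inverse of \(e^{\Lambda }\): if \(Pu = f\) and \(u(0,\cdot ) = g\), then \(v := e^{\Lambda }u\) satisfies

$$\begin{aligned} (iP)_{\Lambda } v = e^{\Lambda } (iP) \{e^{\Lambda }\}^{-1} e^{\Lambda } u = e^{\Lambda } (iP) u = i\, e^{\Lambda } f, \qquad v(0,x) = e^{\Lambda }(0,x,D_x) g(x), \end{aligned}$$

so that \(P_{\Lambda }v = e^{\Lambda }f\) and v solves (1.12); the converse implication is obtained by exchanging the roles of \(e^{\Lambda }\) and \(\{e^{\Lambda }\}^{-1}\).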

The construction of the operator \(e^\Lambda \) will be the core of the proof. The function \(\Lambda (t,x,\xi )\) will be of the form

$$\begin{aligned} \Lambda (t, x, \xi ) = k(t) \langle x \rangle ^{1-\sigma }_{h} + \lambda _2(x,\xi ) + \lambda _1(x,\xi ), \quad t \in [0,T],\, x, \xi \in {\mathbb {R}}, \end{aligned}$$

where \(\lambda _1, \lambda _2 \in {\textbf {{SG}}}^{0, 1-\sigma }_{\mu } ({\mathbb {R}}^2)\), \(k \in C^1([0,T]; {\mathbb {R}})\) is a non-increasing function and \(\langle x \rangle _{h}=\sqrt{h^2+x^2}\) for some \(h \ge 1\), both to be chosen later on. The role of the terms \(\lambda _1, \lambda _2, k\) will be the following:

  • The transformation with \(\lambda _2\) will change the terms of order 2 into the sum of a positive operator of the same order plus a remainder of order 1;

  • The transformation with \(\lambda _1\) will not change the terms of order 2, but it will turn the terms of order 1 into the sum of a positive operator of order 1 plus a remainder of order 0, with some growth with respect to x;

  • The transformation with \(k(t)\langle x \rangle _{h}^{1-\sigma }\) will correct this remainder term, making it positive.

The precise definitions of \(\lambda _2\) and \(\lambda _1\) will be given in Sect. 4. Since \(\Lambda \) admits an algebraic growth in the x variable, \(e^{\Lambda }\) exhibits an exponential growth; this is the reason why we need to work with pseudodifferential operators of infinite order.

The paper is organized as follows. In Sect. 2 we recall some basic definitions and properties of Gelfand–Shilov spaces and the calculus for pseudodifferential operators of infinite order that we will use in the next sections. Section 3 is devoted to proving a result of spectral invariance for pseudodifferential operators with Gevrey regular symbols, which is new in the literature and interesting in its own right. In this paper the spectral invariance will be used to prove the continuity properties of the inverse \(\{e^{\Lambda }(t,x,D)\}^{-1}\). In Sect. 4 we introduce the functions \(\lambda _1,\lambda _2\) mentioned above and prove the invertibility of the operator \(e^{{\tilde{\Lambda }}}(x,D)\), \({\tilde{\Lambda }}=\lambda _1+\lambda _2\). In Sect. 5 we perform the change of variable and the conjugation of the operator \(\textit{iP}\). Section 6 concerns the choice of the parameters appearing in the definition of \(\Lambda \) in order to obtain a positive operator on \(L^2({\mathbb {R}})\). Finally, in Sect. 7, we give the proof of Theorem 1.

2 Gelfand–Shilov spaces and pseudodifferential operators of infinite order on \({\mathbb {R}}^n\)

2.1 Gelfand–Shilov spaces

Given \(s, \theta \ge 1\) and \(A, B > 0\) we say that a smooth function f belongs to \({\mathcal {S}}^{\theta , A}_{s, B} ({\mathbb {R}}^{n})\) if there is a constant \(C > 0\) such that

$$\begin{aligned} |x^{\beta } \partial ^{\alpha }_{x}f(x)| \le C A^{|\alpha |} B^{|\beta |}\alpha !^{\theta } \beta !^{s}, \end{aligned}$$

for every \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\) and \(x \in {\mathbb {R}}^{n}\). The norm

$$\begin{aligned} \Vert f \Vert _{\theta , s, A, B} \,= \sup _{\overset{x \in {\mathbb {R}}^{n}}{\alpha , \beta \in {\mathbb {N}}^{n}_{0}}} |x^{\beta } \partial ^{\alpha }_{x}f(x)|A^{-|\alpha |}B^{-|\beta |}\alpha !^{-\theta }\beta !^{-s} \end{aligned}$$

turns \({\mathcal {S}}^{\theta , A}_{s, B}({\mathbb {R}}^{n})\) into a Banach space. We define

$$\begin{aligned} {\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n}) = \bigcup _{A,B >0} {\mathcal {S}}^{\theta , A}_{s, B} ({\mathbb {R}}^{n}) \end{aligned}$$

and we can equip it with the inductive limit topology of the Banach spaces \({\mathcal {S}}^{\theta , A}_{s, B}({\mathbb {R}}^{n})\). The spaces \({\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n})\) have been originally introduced in the book [15], see [29]. We also consider the projective version, that is

$$\begin{aligned} \Sigma ^{\theta }_{s} ({\mathbb {R}}^{n}) = \bigcap _{A, B > 0} {\mathcal {S}}^{\theta , A}_{s, B} ({\mathbb {R}}^{n}) \end{aligned}$$

equipped with the projective limit topology. When \(\theta = s\) we simply write \({\mathcal {S}}_{\theta }\), \(\Sigma _{\theta }\) instead of \({\mathcal {S}}^{\theta }_{\theta }, \Sigma ^{\theta }_{\theta }\). We can also define, for \(C, \varepsilon > 0\), the normed space \({\mathcal {S}}^{\theta , \varepsilon }_{s, C}({\mathbb {R}}^{n})\) given by all functions \(f \in C^\infty ({\mathbb {R}}^n)\) such that

$$\begin{aligned} \Vert f\Vert _{s,\theta }^{\varepsilon , C}:= \sup _{{\mathop {\alpha \in {\mathbb {N}}^{n}_{0}}\limits ^{x \in {\mathbb {R}}^{n}}}}C^{-|\alpha |} \alpha !^{-\theta } e^{\varepsilon |x|^{\frac{1}{s}}}|\partial ^{\alpha }_{x}f(x)| <\infty , \end{aligned}$$

and we have (with equivalent topologies)

$$\begin{aligned} {\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n}) = \bigcup _{C,\varepsilon> 0} {\mathcal {S}}^{\theta , \varepsilon }_{s, C}({\mathbb {R}}^{n}) \quad \text {and} \quad \Sigma ^{\theta }_{s}({\mathbb {R}}^{n}) = \bigcap _{C,\varepsilon > 0} {\mathcal {S}}^{\theta , \varepsilon }_{s, C}({\mathbb {R}}^{n}). \end{aligned}$$

The following inclusions are continuous (for every \(\varepsilon > 0\))

$$\begin{aligned} \Sigma ^{\theta }_{s} ({\mathbb {R}}^{n}) \subset {\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n}) \subset \Sigma ^{\theta +\varepsilon }_{s+\varepsilon } ({\mathbb {R}}^{n}). \end{aligned}$$

All the previous spaces can be written in terms of the Gelfand–Shilov Sobolev spaces \(H^m_{\rho ;s,\theta },\) with \(\rho , m \in {\mathbb {R}}^2\), defined in the Introduction. Namely, we have

$$\begin{aligned} {\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n})= \bigcup _{{\mathop {\rho _j>0, j=1,2}\limits ^{\rho \in {\mathbb {R}}^2}}}H^m_{\rho ;s,\theta }({\mathbb {R}}^{n}), \qquad \Sigma ^{\theta }_{s}({\mathbb {R}}^{n})= \bigcap _{{\mathop {\rho _j >0, j=1,2}\limits ^{\rho \in {\mathbb {R}}^2}}}H^m_{\rho ;s,\theta }({\mathbb {R}}^{n}). \end{aligned}$$
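As a concrete illustration, consider the initial datum \(g(x) = e^{-\langle x \rangle ^{1-\sigma }}\) of the example (1.9): since g extends holomorphically to a strip around the real axis, Cauchy's estimates give, for some \(C>0\) and \(c \in (0,1)\),

$$\begin{aligned} |\partial ^{\alpha }_{x} g(x)| \le C^{|\alpha |+1}\, \alpha !\, e^{-c\langle x \rangle ^{1-\sigma }}, \quad \alpha \in {\mathbb {N}}_0,\ x \in {\mathbb {R}}, \end{aligned}$$

so, by the characterization above, \(g \in {\mathcal {S}}^{\theta }_{s}({\mathbb {R}})\) for every \(\theta \ge 1\) and \(s \ge 1/(1-\sigma )\), and hence \(g \in H^{m}_{\rho ; s,\theta }({\mathbb {R}})\) for suitable \(\rho \) with positive components.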

From now on we shall denote by \(({\mathcal {S}}^{\theta }_{s})' ({\mathbb {R}}^{n})\), \((\Sigma ^{\theta }_{s})' ({\mathbb {R}}^{n})\) the respective dual spaces.

Concerning the action of the Fourier transform \(\mathcal {F}\) we have the following isomorphisms

$$\begin{aligned}&{\mathcal {F}}: \Sigma ^{\theta }_{s} ({\mathbb {R}}^{n}) \rightarrow \Sigma ^{s}_{\theta } ({\mathbb {R}}^{n}), \quad {\mathcal {F}}: {\mathcal {S}}^{\theta }_{s}({\mathbb {R}}^{n}) \rightarrow {\mathcal {S}}^{s}_{\theta }({\mathbb {R}}^{n}), \\&\quad {\mathcal {F}}: H^{(m_1,m_2)}_{(\rho _1,\rho _2);s,\theta }({\mathbb {R}}^{n}) \rightarrow H^{(m_2,m_1)}_{(\rho _2,\rho _1);\theta ,s}({\mathbb {R}}^{n}). \end{aligned}$$

2.2 Pseudodifferential operators of infinite order

We start by defining the symbol classes of infinite order.

Definition 1

Let \(\tau \in {\mathbb {R}}\), \(\kappa , \theta , \mu , \nu > 1\) and \(C ,c> 0\).

  1. (i)

    We denote by \({\textbf {{SG}}}^{\tau , \infty }_{\mu ,\nu ;\kappa }({\mathbb {R}}^{2n}; C, c)\) the Banach space of all functions \(p\in C^\infty ({\mathbb {R}}^{2n})\) satisfying the following condition:

    $$\begin{aligned} \Vert p\Vert _{C,c}:=\sup _{\alpha , \beta \in {\mathbb {N}}^n_0} C^{-|\alpha + \beta |} \alpha !^{-\mu } \beta !^{-\nu }\sup _{x,\xi \in {\mathbb {R}}^n}\langle \xi \rangle ^{-\tau +|\alpha |}\langle x \rangle ^{|\beta |}e^{-c|x|^{\frac{1}{\kappa }}}|\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x}p(x,\xi )| <\infty . \end{aligned}$$

    We set \({\textbf {{SG}}}^{\tau , \infty }_{\mu ,\nu ;\kappa }({\mathbb {R}}^{2n}):=\bigcup _{C,c>0}{\textbf {{SG}}}^{\tau , \infty }_{\mu ,\nu ;\kappa }({\mathbb {R}}^{2n}; C, c)\) with the topology of inductive limit of the Banach spaces \({\textbf {{SG}}}^{\tau , \infty }_{\mu ,\nu ;\kappa }({\mathbb {R}}^{2n}; C,c)\).

  2. (ii)

    We denote by \({\textbf {{SG}}}^{\infty , \tau }_{\mu ,\nu ;\theta }({\mathbb {R}}^{2n}; C,c)\) the Banach space of all functions \(p\in C^\infty ({\mathbb {R}}^{2n})\) satisfying the following condition:

    $$\begin{aligned} \Vert p\Vert ^{C,c}:=\sup _{\alpha , \beta \in {\mathbb {N}}^{n}_{0}}C^{-|\alpha + \beta |} \alpha !^{-\mu } \beta !^{-\nu }\sup _{x, \xi \in {\mathbb {R}}^{n}} \langle \xi \rangle ^{|\alpha |} \langle x \rangle ^{-\tau + |\beta |}e^{-c|\xi |^{\frac{1}{\theta }}} |\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x}p(x,\xi )| <\infty . \end{aligned}$$

    We set \({\textbf {{SG}}}^{\infty , \tau }_{\mu ,\nu ; \theta }({\mathbb {R}}^{2n}) :=\bigcup _{C,c>0} {\textbf {{SG}}}^{\infty , \tau }_{\mu ,\nu ;\theta }({\mathbb {R}}^{2n}; C,c)\) with the topology of inductive limit of the spaces \({\textbf {{SG}}}^{\infty , \tau }_{\mu ,\nu ; \theta }({\mathbb {R}}^{2n}; C,c)\).

We also need the following symbol classes of finite order.

Definition 2

Let \(\mu , \nu \ge 1\), \(m = (m_1, m_2) \in {\mathbb {R}}^{2}\) and \(C > 0\). We denote by \({\textbf {{SG}}}^{m}_{\mu ,\nu }({\mathbb {R}}^{2n}; C)\) the Banach space of all functions \(p\in C^\infty ({\mathbb {R}}^{2n})\) satisfying the following condition:

$$\begin{aligned} \sup _{\alpha , \beta \in {\mathbb {N}}^{n}_{0}}C^{-|\alpha + \beta |} \alpha !^{-\mu } \beta !^{-\nu }\sup _{x, \xi \in {\mathbb {R}}^{n}} \langle \xi \rangle ^{-m_1+|\alpha |} \langle x \rangle ^{-m_2 + |\beta |}|\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}p(x,\xi )| <\infty . \end{aligned}$$

We set \({\textbf {{SG}}}^{m}_{\mu , \nu }({\mathbb {R}}^{2n}):=\bigcup _{C>0}{\textbf {{SG}}}^{m}_{\mu ,\nu }({\mathbb {R}}^{2n}; C)\).

Finally we say that \(p \in {\textbf {{SG}}}^{m}({\mathbb {R}}^{2n})\) if for any \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\) there is \(C_{\alpha , \beta } > 0\) satisfying

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}p(x,\xi )| \le C_{\alpha ,\beta } \langle \xi \rangle ^{m_1 -|\alpha |} \langle x \rangle ^{m_2 - |\beta |}, \quad x,\xi \in {\mathbb {R}}^{n}. \end{aligned}$$

When \(\mu = \nu \) we write \({\textbf {{SG}}}^{m}_{\mu }({\mathbb {R}}^{2n})\), \({\textbf {{SG}}}^{\tau ,\infty }_{\mu ,\kappa }({\mathbb {R}}^{2n})\), \({\textbf {{SG}}}^{\infty ,\tau }_{\mu ,\theta }({\mathbb {R}}^{2n})\) instead of \({\textbf {{SG}}}^{m}_{\mu , \mu }({\mathbb {R}}^{2n})\), \({\textbf {{SG}}}^{\tau , \infty }_{\mu ,\mu ; \kappa }({\mathbb {R}}^{2n})\), \({\textbf {{SG}}}^{\infty , \tau }_{\mu , \mu ; \theta }({\mathbb {R}}^{2n})\).
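A basic example: the weights themselves are finite-order \({\textbf {{SG}}}\) symbols of analytic type, since

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \big (\langle \xi \rangle ^{m_1} \langle x \rangle ^{m_2}\big )| \le C^{|\alpha +\beta |+1}\, \alpha !\, \beta !\, \langle \xi \rangle ^{m_1 -|\alpha |} \langle x \rangle ^{m_2 - |\beta |}, \quad x,\xi \in {\mathbb {R}}^{n}, \end{aligned}$$

that is, \(\langle \xi \rangle ^{m_1} \langle x \rangle ^{m_2} \in {\textbf {{SG}}}^{(m_1,m_2)}_{1,1}({\mathbb {R}}^{2n}) \subset {\textbf {{SG}}}^{(m_1,m_2)}_{\mu ,\nu }({\mathbb {R}}^{2n})\) for every \(\mu , \nu \ge 1\).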

As usual, given a symbol \(p(x,\xi )\) we shall denote by \(p(x, D)\) or by \(\text {op} (p)\) the pseudodifferential operator defined as standard by

$$\begin{aligned} p(x, D)u(x) = (2\pi )^{-n}\int _{{\mathbb {R}}^{n}} e^{i x \cdot \xi } p(x,\xi )\, {\widehat{u}}(\xi )\, d\xi , \end{aligned}$$

where u belongs to some suitable function space depending on the assumptions on p and \({\widehat{u}}\) denotes the Fourier transform of u. We have the following continuity results.

Proposition 1

Let \(\tau \in {\mathbb {R}}\), \(s > \mu \ge 1\), \(\nu \ge 1\) and \(p \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; s}({\mathbb {R}}^{2n})\). Then for every \(\theta >\nu \) the pseudodifferential operator \(p(x, D)\) with symbol \(p(x,\xi )\) is continuous on \(\Sigma ^{\theta }_{s}({\mathbb {R}}^{n})\) and it extends to a continuous map on \((\Sigma ^{\theta }_{s})'({\mathbb {R}}^{n})\).

Proposition 2

Let \(\tau \in {\mathbb {R}}\), \(\theta > \nu \ge 1\), \(\mu \ge 1\) and \(p \in {\textbf {{SG}}}^{\infty , \tau }_{\mu , \nu ; \theta }({\mathbb {R}}^{2n})\). Then for every \(s >\mu \) the operator \(p(x, D)\) is continuous on \(\Sigma ^{\theta }_{s}({\mathbb {R}}^{n})\) and it extends to a continuous map on \((\Sigma ^{\theta }_{s})'({\mathbb {R}}^{n})\).

The proofs of Propositions 1 and 2 can be obtained by following the argument in the proofs of [6, Proposition 2.3] and [1, Proposition 1]. We leave the details to the reader.

Now we define the notion of asymptotic expansion and recall some fundamental results, which can be found in Appendix A of [6]. For \(t_1, t_2 \ge 0\) set

$$\begin{aligned} Q_{t_1,t_2} = \{(x,\xi ) \in {\mathbb {R}}^{2n} : \langle x \rangle< t_1 \,\, \text {and} \,\, \langle \xi \rangle < t_2 \} \end{aligned}$$

and \(Q^{e}_{t_1, t_2} = {\mathbb {R}}^{2n} {\setminus } Q_{t_1, t_2}\). When \(t_1 = t_2 = t\) we simply write \(Q_t\) and \(Q^{e}_{t}\).

Definition 3

We say that:

  1. (i)

    \(\sum \nolimits _{j \ge 0} a_j \in \mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\) if \(a_j(x, \xi ) \in C^{\infty }({\mathbb {R}}^{2n})\) and there are \(C, c, B > 0\) satisfying

    $$\begin{aligned} |\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x} a_j(x, \xi )| \le C^{|\alpha | + |\beta | + 2j + 1} \alpha !^{\mu } \beta !^{\nu } j!^{\mu + \nu -1} \langle \xi \rangle ^{\tau - |\alpha | - j} \langle x \rangle ^{-|\beta | - j} e^{c|x|^{\frac{1}{\kappa }}}, \end{aligned}$$

    for every \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\), \(j \ge 0\) and \((x, \xi ) \in Q^{e}_{B(j)}\), where \(B(j) := Bj^{\mu + \nu - 1}\);

  2. (ii)

    \(\sum \nolimits _{j \ge 0} a_j \in \mathrm{FSG}^{\infty , \tau }_{\mu , \nu ; \theta }\) if \(a_j(x, \xi ) \in C^{\infty }({\mathbb {R}}^{2n})\) and there are \(C, c, B > 0\) satisfying

    $$\begin{aligned} |\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x} a_j(x, \xi )| \le C^{|\alpha | + |\beta | + 2j + 1} \alpha !^{\mu } \beta !^{\nu } j!^{\mu + \nu -1} \langle \xi \rangle ^{- |\alpha | - j} e^{c|\xi |^{\frac{1}{\theta }}} \langle x \rangle ^{ \tau -|\beta | - j}, \end{aligned}$$

    for every \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\), \(j \ge 0\) and \((x, \xi ) \in Q^{e}_{B(j)}\);

  3. (iii)

    \(\sum \nolimits _{j \ge 0} a_j \in \mathrm{FSG}^{m}_{\mu , \nu }\) if \(a_j(x, \xi ) \in C^{\infty }({\mathbb {R}}^{2n})\) and there are \(C, B > 0\) satisfying

    $$\begin{aligned} |\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x} a_j(x, \xi )| \le C^{|\alpha | + |\beta | + 2j + 1} \alpha !^{\mu } \beta !^{\nu } j!^{\mu + \nu -1} \langle \xi \rangle ^{m_1 - |\alpha | - j} \langle x \rangle ^{m_2 -|\beta | - j}, \end{aligned}$$

    for every \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\), \(j \ge 0\) and \((x, \xi ) \in Q^{e}_{B(j)}\).

Definition 4

Let \(\sum \nolimits _{j \ge 0} a_j\), \(\sum \nolimits _{j \ge 0} b_j\) in \(\mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\). We say that \(\sum \nolimits _{j \ge 0} a_j \sim \sum \nolimits _{j \ge 0} b_j\) in \(\mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\) if there are \(C, c, B > 0\) satisfying

$$\begin{aligned}&|\partial ^{\alpha }_{\xi }\partial ^{\beta }_{x} \sum _{j < N} (a_j - b_j) (x, \xi )| \\&\quad \le C^{|\alpha | + |\beta | + 2N + 1} \alpha !^{\mu } \beta !^{\nu } N!^{\mu + \nu - 1} \langle \xi \rangle ^{\tau - |\alpha | - N} \langle x \rangle ^{-|\beta | - N} e^{c|x|^{\frac{1}{\kappa }}}, \end{aligned}$$

for every \(\alpha , \beta \in {\mathbb {N}}^{n}_{0}\), \(N \ge 1\) and \((x, \xi ) \in Q^{e}_{B(N)}\). Analogous definitions for the classes \(\mathrm{FSG}^{\infty , \tau }_{\mu , \nu ; \theta }\), \(\mathrm{FSG}^{m}_{\mu , \nu }\).

Remark 4

If \(\sum \nolimits _{j \ge 0} a_j \in \mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\), then \(a_0 \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }\). Given \(a \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }\) and setting \(b_0 = a\), \(b_j = 0\), \(j \ge 1\), we have \(a = \sum \nolimits _{j \ge 0}b_j\). Hence we can consider \({\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }\) as a subset of \(\mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\).

Proposition 3

Given \(\sum \nolimits _{j \ge 0} a_j \in \mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa },\) there exists \(a \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }\) such that \(a \sim \sum \nolimits _{j \ge 0} a_j\) in \(\mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }\). Analogous results for the classes \(\mathrm{FSG}^{\infty , \tau }_{\mu , \nu ; \theta }\) and \(\mathrm{FSG}^{m}_{\mu , \nu }\).

Proposition 4

Let \(a \in {\textbf {{SG}}}^{0, \infty }_{\mu , \nu ; \kappa }\) such that \(a \sim 0\) in \(\mathrm{FSG}^{0, \infty }_{\mu , \nu ; \kappa }\). If \(\kappa > \mu + \nu - 1\), then \(a \in {\mathcal {S}}_\delta ({\mathbb {R}}^{2n})\) for every \(\delta \ge \mu + \nu - 1\). Analogous results for the classes \(\mathrm{FSG}^{\infty , \tau }_{\mu , \nu ; \theta }\) and \(\mathrm{FSG}^{m}_{\mu , \nu }\).

Concerning the symbolic calculus and the continuous mapping properties on the Gelfand–Shilov Sobolev spaces we have the following results, cf. [6, Propositions A.12 and A.13].

Theorem 2

Let \(p \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n})\), \(q \in {\textbf {{SG}}}^{\tau ', \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n})\) with \(\kappa > \mu + \nu - 1\). Then the \(L^{2}\) adjoint \(p^{*}\) and the composition \(p\circ q\) have the following structure:

\(p^{*}(x,D) = a(x, D) + r(x,D)\) where \(r\in {\mathcal {S}}_{\mu +\nu -1}({\mathbb {R}}^{2n})\), \(a \in {\textbf {{SG}}}^{\tau , \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n})\), and

$$\begin{aligned} a(x,\xi ) \sim \sum _{\alpha } \frac{1}{\alpha !} \partial ^{\alpha }_{\xi }D^{\alpha }_x \overline{p(x,\xi )} \,\, \text {in} \,\, \mathrm{FSG}^{\tau , \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n}); \end{aligned}$$

\(p(x,D)\circ q(x, D) = b(x,D) + s(x,D)\), where \(s \in {\mathcal {S}}_{\mu +\nu -1}({\mathbb {R}}^{2n})\), \(b \in {\textbf {{SG}}}^{\tau +\tau ', \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n})\) and

$$\begin{aligned} b(x, \xi ) \sim \sum _{\alpha } \frac{1}{\alpha !} \partial ^{\alpha }_{\xi }p(x,\xi ) D^{\alpha }_xq(x,\xi ) \,\, \text {in} \,\, \mathrm{FSG}^{\tau +\tau ', \infty }_{\mu , \nu ; \kappa }({\mathbb {R}}^{2n}). \end{aligned}$$

Analogous results for the classes \({\textbf {{SG}}}^{\infty , \tau }_{\mu , \nu ; \theta }({\mathbb {R}}^{2n})\) and \({\textbf {{SG}}}^{m}_{\mu ,\nu }({\mathbb {R}}^{2n})\).
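For instance, spelling out the terms with \(|\alpha | \le 1\) in the expansion of the composition (recall \(D_x = -i\partial _x\)):

$$\begin{aligned} b(x, \xi ) \sim p(x,\xi )\, q(x,\xi ) - i\, \nabla _{\xi }p(x,\xi ) \cdot \nabla _{x}q(x,\xi ) + \cdots , \end{aligned}$$

where each term of index \(\alpha \) gains a factor \(\langle \xi \rangle ^{-|\alpha |}\langle x \rangle ^{-|\alpha |}\) with respect to the product \(pq\), consistently with Definition 3.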

Theorem 3

Let \(p \in {\textbf {{SG}}}^{m'}_{\mu , \nu }({\mathbb {R}}^{2n})\) for some \(m' \in {\mathbb {R}}^{2}\). Then for every \(m, \rho \in {\mathbb {R}}^{2}\) and \(s,\theta \) such that \(\min \{s, \theta \} > \mu + \nu - 1\) the operator \(p(x, D)\) maps \(H^{m}_{\rho ; s, \theta }({\mathbb {R}}^{n})\) into \(H^{m-m'}_{\rho ; s, \theta }({\mathbb {R}}^{n})\) continuously.

A simple application of the Faà di Bruno formula gives us the following result.

Proposition 5

If \(\lambda \in {\textbf {{SG}}}^{0,\frac{1}{\kappa }}_{\mu }({\mathbb {R}}^{2n})\), then \(e^{\lambda } \in {\textbf {{SG}}}^{0, \infty }_{\mu ; \kappa }({\mathbb {R}}^{2n})\). If \(\lambda \in {\textbf {{SG}}}^{\frac{1}{\theta }, 0}_{\mu }({\mathbb {R}}^{2n})\), then \(e^{\lambda } \in {\textbf {{SG}}}^{\infty , 0}_{\mu ; \theta }({\mathbb {R}}^{2n})\).
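We sketch the argument for the first statement and for the x-derivatives only; the \(\xi \)-derivatives, which only improve the estimates, and the second statement are treated in the same way. By the Faà di Bruno formula, for \(\beta \ne 0\),

$$\begin{aligned} \partial ^{\beta }_{x} e^{\lambda } = e^{\lambda } \sum _{j=1}^{|\beta |} \frac{1}{j!} \sum _{\beta _1+\dots +\beta _j = \beta ,\ \beta _i \ne 0} \frac{\beta !}{\beta _1!\cdots \beta _j!}\, \prod _{i=1}^{j} \partial ^{\beta _i}_{x}\lambda , \end{aligned}$$

and each product is bounded by \(C^{|\beta |+j}\big (\prod _{i}\beta _i!^{\mu }\big )\langle x \rangle ^{j/\kappa - |\beta |}\). Using \(\prod _i \beta _i! \le \beta !\), \(e^{\lambda } \le e^{C\langle x \rangle ^{1/\kappa }}\) and \(\langle x \rangle ^{j/\kappa } \le C^{j} j!\, e^{\varepsilon |x|^{1/\kappa }}\) (the factor j! being compensated by the \(1/j!\) in the sum), we obtain \(|\partial ^{\beta }_{x} e^{\lambda }| \le C^{|\beta |+1}\beta !^{\mu } \langle x \rangle ^{-|\beta |} e^{c|x|^{\frac{1}{\kappa }}}\), which is the estimate required by Definition 1(i) with \(\tau = 0\).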

We conclude this section by proving the following theorem.

Theorem 4

Let \(\rho , m \in {\mathbb {R}}^{2}\) and \(s, \theta , \mu > 1\) with \(\min \{s,\theta \} > 2\mu - 1\). Let \(\lambda \in {\textbf {{SG}}}^{0, \frac{1}{\kappa }}_{\mu }({\mathbb {R}}^{2n})\). Then:

  1. (i)

    If \(\kappa >s\), the operator \(e^\lambda (x,D):H^{m}_{\rho ; s, \theta } ({\mathbb {R}}^{n})\longrightarrow H^{m}_{\rho -\delta e_2; s, \theta }({\mathbb {R}}^{n})\) is continuous for every \(\delta >0\), where \(e_2 = (0, 1)\);

  2. (ii)

    If \(\kappa =s \), the operator \(e^{\lambda }(x,D):H^{m}_{\rho ; s, \theta } ({\mathbb {R}}^{n})\longrightarrow H^{m}_{\rho -\delta e_2; s, \theta }({\mathbb {R}}^{n})\) is continuous for every

    $$\begin{aligned} \delta >C(\lambda ):= \sup _{(x,\xi ) \in {\mathbb {R}}^{2n}} \displaystyle \frac{\lambda (x,\xi )}{\langle x \rangle ^{1/s}}. \end{aligned}$$

Proof

  1. (i)

    Let \(\phi \) be a Gevrey function of index \(\mu \) such that, for a positive constant K, \(\phi (x)=1\) for \(|x|<K/2\), \(\phi (x)=0\) for \(|x|>K\) and \(0\le \phi (x)\le 1\) for every \(x\in {\mathbb {R}}^n\). We split the symbol \(e^{\lambda (x,\xi )}\) as

    $$\begin{aligned} e^{\lambda (x,\xi )} = \phi (x)e^{\lambda (x,\xi )} +(1-\phi (x))e^{\lambda (x,\xi )} = a_1(x,\xi )+ a_2(x,\xi ). \end{aligned}$$
    (2.1)

    Since \(\phi \) has compact support and \(\lambda \) has order zero with respect to \(\xi \), we have \(a_1 \in {\textbf {{SG}}}_\mu ^{0,0}\). On the other hand, given any \(\delta >0\) and choosing K large enough, since \(\kappa >s\) we may write \(|\lambda (x,\xi )|\langle x\rangle ^{-1/s}<\delta \) on the support of \(a_2(x,\xi )\). Hence we obtain

    $$\begin{aligned} a_2(x,\xi )=e^{\delta \langle x\rangle ^{1/s}}(1-\phi (x))e^{\lambda (x,\xi )-\delta \langle x\rangle ^{1/s}}, \end{aligned}$$

    with \((1-\phi (x))e^{\lambda (x,\xi )-\delta \langle x\rangle ^{1/s}}\) of order (0, 0) because \(\lambda (x,\xi )-\delta \langle x\rangle ^{1/s}<0\) on the support of \((1-\phi (x))\). Thus, (2.1) becomes

    $$\begin{aligned} e^{\lambda (x,\xi )}= a_1(x,\xi )+e^{\delta \langle x\rangle ^{1/s}}{{\tilde{a}}}_2(x,\xi ), \end{aligned}$$

    with \(a_1\) and \({{\tilde{a}}}_2\) of order (0, 0). Since, by Theorem 3, the operators \(a_1(x,D)\) and \({{\tilde{a}}}_2(x,D)\) map \(H^m_{\rho ;s,\theta }\) continuously into itself, we obtain (i). The proof of (ii) follows a similar argument and can be found in [6, Theorem 2.4].\(\square \)

3 Spectral invariance for SG-\(\Psi \)DO with Gevrey estimates

Let \(p \in {\textbf {{SG}}}^{0,0}({\mathbb {R}}^{2n})\); then \(p(x, D)\) extends to a continuous operator on \(L^{2}({\mathbb {R}}^{n})\). Suppose that \(p(x,D): L^{2}({\mathbb {R}}^{n}) \rightarrow L^{2}({\mathbb {R}}^{n})\) is bijective. The question is to determine whether or not the inverse \(p^{-1}\) is also an \({\textbf {{SG}}}\) operator of order (0, 0). This is known as the spectral invariance problem and it has an affirmative answer, see [11].

Following the ideas presented in [11, pp. 51–57], we will prove that the symbol of \(p^{-1}\) satisfies Gevrey estimates, whenever the symbol \(p \in {\textbf {{SG}}}^{0,0}_{\mu , \nu }({\mathbb {R}}^{2n})\). This is an important step in the study of the continuous mapping properties of \(\{e^{\Lambda }(x,D)\}^{-1}\) on Gelfand–Shilov Sobolev spaces \(H^m_{\rho ;s,\theta }\).

Theorems 5, 6, 7 here below can be found in [31, Chapters 20, 21].

Theorem 5

Let X, Y be separable Hilbert spaces. Then a bounded operator \(A: X \rightarrow Y\) is Fredholm if and only if there are a bounded operator \(B: Y \rightarrow X\) and compact operators \(K_1: X \rightarrow X\), \(K_2: Y \rightarrow Y\) such that

$$\begin{aligned} BA = I_X - K_1, \qquad AB = I_{Y} - K_2. \end{aligned}$$

Theorem 6

Let X, Y, Z be separable Hilbert spaces, and let \(A: X \rightarrow Y\), \(B: Y \rightarrow Z\) be Fredholm operators. Then:

  • \(B \circ A : X \rightarrow Z\) is Fredholm and \(i(BA) = i(B) + i(A)\), where \(i(\cdot )\) stands for the index of a Fredholm operator;

  • \(Y = N(A^{t}) \oplus R(A)\).

Remark 5

Let X be a Hilbert space and \(K: X \rightarrow X\) be a compact operator. Then \(I-K\) is Fredholm and \(i(I-K) = 0\).

Theorem 7

Let \(p \in {\textbf {{SG}}}^{m_1, m_2}({\mathbb {R}}^{2n})\) be such that \(p(x,D): H^{s_1+m_1,s_2+m_2}({\mathbb {R}}^{n}) \rightarrow H^{s_1,s_2}({\mathbb {R}}^{n})\) is Fredholm for some \(s_1, s_2 \in {\mathbb {R}}\). Then p is \({\textbf {{SG}}}\)-elliptic, that is, there exist \(C,R>0\) such that

$$\begin{aligned} |p(x,\xi )| \ge C \langle \xi \rangle ^{m_1} \langle x \rangle ^{m_2} \qquad \textit{for}\quad (x,\xi ) \in Q_R^e. \end{aligned}$$

Theorem 8

Let \(p \in {\textbf {{SG}}}^{m_1, m_2}_{\mu ,\nu }({\mathbb {R}}^{2n})\) be \({\textbf {{SG}}}\)-elliptic. Then there is \(q \in {\textbf {{SG}}}^{-m_1, -m_2}_{\mu ,\nu }({\mathbb {R}}^{2n})\) such that

$$\begin{aligned} p(x,D) \circ q(x,D) = I + r_1(x,D), \qquad q(x,D) \circ p(x,D) = I + r_2(x,D), \end{aligned}$$

where \(r_1, r_2 \in {\mathcal {S}}_{\mu +\nu -1}({\mathbb {R}}^{2n})\).

Proof

See [28, Theorem 6.3.16]. \(\square \)

In order to prove the main result of this section, we need the following technical lemma.

Lemma 1

Let \(A : L^{2}({\mathbb {R}}^n) \rightarrow L^{2}({\mathbb {R}}^n)\) be a bounded operator such that A and \(A^{*}\) map \(L^2({\mathbb {R}}^n)\) into \(\Sigma _{r}({\mathbb {R}}^n)\) continuously. Then the Schwartz kernel of A belongs to \(\Sigma _{r}({\mathbb {R}}^{2n})\).

Proof

Since \(\Sigma _{r}({\mathbb {R}}^{2n}) \subset L^{2}({\mathbb {R}}^{2n})\) is a nuclear Fréchet space (cf. [12]), by [19, Propositions 2.1.7 and 2.1.8] the operator A is defined by a kernel H(x, y) and we can write

$$\begin{aligned} H(x,y) = \sum _{j \in {\mathbb {N}}_{0}} a_j f_j(x) g_j(y) = \sum _{j \in {\mathbb {N}}_{0}} {\tilde{a}}_{j} {\tilde{f}}_{j}(x) {\tilde{g}}_{j}(y), \end{aligned}$$

where \(a_{j}, {\tilde{a}}_{j} \in {\mathbb {C}}\), \({\tilde{f}}_{j}(x), g_{j}(y) \in \Sigma _{r}({\mathbb {R}}^{n})\), \(f_{j}(x), {\tilde{g}}_{j}(y) \in L^{2}({\mathbb {R}}^{n})\), \(\sum \nolimits _{j}|a_j| < \infty \), \(\sum \nolimits _{j}|{\tilde{a}}_{j}| < \infty \), \({\tilde{f}}_{j}(x), g_{j}(y)\) converge to zero in \(\Sigma _{r}({\mathbb {R}}^{n})\) and \(f_{j}(x), {\tilde{g}}_{j}(y)\) converge to zero in \(L^{2}({\mathbb {R}}^{n})\).

We now use the following characterization: \(H \in \Sigma _{r}({\mathbb {R}}^{2n})\) if and only if

$$\begin{aligned} \sup _{\alpha , \beta \in {\mathbb {N}}^{n}_{0}} \left\| \dfrac{x^{\alpha }y^{\beta }H(x,y)}{C^{|\alpha |+|\beta |}\alpha !^r\beta !^r} \right\| _{L^{2}}< \infty \quad \text {and} \quad \sup _{\alpha , \beta \in {\mathbb {N}}^{n}_{0}} \left\| \dfrac{\xi ^{\alpha }\eta ^{\beta }{\widehat{H}}(\xi ,\eta )}{C^{|\alpha |+|\beta |}\alpha !^r\beta !^r} \right\| _{L^{2}} < \infty \end{aligned}$$

for every \(C>0\), and prove that both the latter conditions hold. Note that

$$\begin{aligned} \Vert y^{\beta }H(x,y) \Vert ^{2}_{L^{2}}&= \iint \left| \sum _{j \in {\mathbb {N}}_{0}} a_j f_j(x) y^{\beta } g_j(y)\right| ^{2} \mathrm{d}x\,\mathrm{d}y \\&= \iint \left| \sum _{j \in {\mathbb {N}}_{0}} a_j^{\frac{1}{2}} f_j(x) a_j^{\frac{1}{2}} y^{\beta } g_j(y)\right| ^{2} \mathrm{d}x\,\mathrm{d}y \\&\le \int \sum _{j \in {\mathbb {N}}_{0}} |a_j^{\frac{1}{2}} f_j(x)|^{2} \mathrm{d}x \int \sum _{j \in {\mathbb {N}}_{0}} |a_j^{\frac{1}{2}} y^{\beta } g_j(y)|^{2} \mathrm{d}y \\&= \sum _{j \in {\mathbb {N}}_{0}} |a_j| \Vert f_j\Vert ^{2}_{L^{2}} \sum _{j \in {\mathbb {N}}_{0}} |a_j| \Vert y^{\beta }g_j\Vert ^{2}_{L^{2}}. \end{aligned}$$

Since \(g_j\) converges to zero in \(\Sigma _r({\mathbb {R}}^{n})\), we have

$$\begin{aligned} \Vert y^{\beta } g_{j}(y) \Vert _{L^2} = \left\| \dfrac{y^{\beta } g_{j}(y)}{C^{|\beta |}\beta !^{r}} \right\| _{L^2} C^{|\beta |} \beta !^{r} \le C^{|\beta |} \beta !^{r} \sup _{j \in {\mathbb {N}}_{0}} \left\| \dfrac{y^{\beta } g_{j}(y)}{C^{|\beta |}\beta !^{r}} \right\| _{L^2}, \end{aligned}$$

for every \(C>0\), and therefore

$$\begin{aligned} \Vert y^{\beta }H(x,y) \Vert _{L^{2}} \le \left( \sum _{j \in {\mathbb {N}}_0} |a_j|\right) \sup _{j \in {\mathbb {N}}_{0}} \Vert f_j \Vert _{L^2} \sup _{j \in {\mathbb {N}}_{0}} \left\| \dfrac{y^{\beta } g_{j}(y)}{C^{|\beta |}\beta !^{r}} \right\| _{L^2} C^{|\beta |} \beta !^{r}. \end{aligned}$$

Hence

$$\begin{aligned} \sup _{\beta \in {\mathbb {N}}^{n}_{0}}\ \left\| \dfrac{y^{\beta } H(x,y)}{C^{|\beta |} \beta !^{r}} \right\| _{L^2}< \infty \iff \sup _{N \in {\mathbb {N}}_{0}}\ \left\| \dfrac{ \langle y \rangle ^{N} H(x,y)}{C^{N} N!^{r}} \right\| _{L^2} < \infty , \end{aligned}$$

for every \(C >0\). Using the representation \(\sum \nolimits {\tilde{a}}_{j} {\tilde{f}}_{j}(x) {\tilde{g}}_{j}(y)\), analogously we can obtain

$$\begin{aligned} \sup _{\alpha \in {\mathbb {N}}^{n}_{0}}\ \left\| \dfrac{x^{\alpha } H(x,y)}{C^{|\alpha |} \alpha !^{r}} \right\| _{L^2}< \infty \iff \sup _{N \in {\mathbb {N}}_{0}}\ \left\| \dfrac{ \langle x \rangle ^{N} H(x,y)}{C^{N} N!^{r}} \right\| _{L^2} < \infty , \end{aligned}$$

for every \(C>0\).

Now note that, for every \(N \in {\mathbb {N}}_0\), \(x, y \in {\mathbb {R}}^{n}\),

$$\begin{aligned} \langle x, y \rangle ^{N} = (\langle x \rangle ^{2} + |y|^{2})^{\frac{N}{2}} \le (\langle x \rangle + \langle y \rangle )^{N} \le 2^{N-1}(\langle x \rangle ^N + \langle y \rangle ^N). \end{aligned}$$

Therefore, for every \(C > 0\),

$$\begin{aligned} \left\| \frac{\langle x, y \rangle ^{N} H(x, y)}{C^{N} N!^{r}} \right\| _{L^{2}} \le \left\| \frac{\langle x \rangle ^{N} H(x, y)}{C^{N}_{1} N!^{r}} \right\| _{L^{2}} + \left\| \frac{\langle y \rangle ^{N} H(x, y)}{C^{N}_{1} N!^{r}} \right\| _{L^{2}}, \end{aligned}$$

where \(C_1 = 2^{-1}C\). Hence, for every \(C > 0\),

$$\begin{aligned} \sup _{N \in {\mathbb {N}}_0} \left\| \frac{\langle x, y \rangle ^{N} H(x, y)}{C^{N} N!^{r}} \right\| _{L^{2}} < \infty . \end{aligned}$$

Since the Fourier transformation is an isomorphism on \(L^{2}\) and on \(\Sigma _{r}\), we have

$$\begin{aligned} {\widehat{H}}(\xi ,\eta ) = \sum _{j \in {\mathbb {N}}_{0}} a_j {\widehat{f}}_j(\xi ) {\widehat{g}}_j(\eta ) = \sum _{j \in {\mathbb {N}}_{0}} {\tilde{a}}_{j} \widehat{{\tilde{f}}}_{j}(\xi ) \widehat{{\tilde{g}}}_{j}(\eta ), \end{aligned}$$

where \(a_{j}, {\tilde{a}}_{j} \in {\mathbb {C}}\), \(\widehat{{\tilde{f}}}_{j}(\xi ), {\widehat{g}}_j(\eta ) \in \Sigma _{r}({\mathbb {R}}^{n})\), \({\widehat{f}}_j(\xi ), \widehat{{\tilde{g}}}_{j}(\eta ) \in L^{2}({\mathbb {R}}^{n})\), \(\sum \nolimits _{j}|a_j| < \infty \), \(\sum \nolimits _{j}|{\tilde{a}}_{j}| < \infty \), \(\widehat{{\tilde{f}}}_{j}(\xi ), {\widehat{g}}_j(\eta )\) converge to zero in \(\Sigma _{r}({\mathbb {R}}^{n})\) and \({\widehat{f}}_j(\xi ), \widehat{{\tilde{g}}}_{j}(\eta )\) converge to zero in \(L^{2}({\mathbb {R}}^{n})\). In an analogous way as before we get, for every \(C > 0\),

$$\begin{aligned} \sup _{N \in {\mathbb {N}}_0} \left\| \frac{\langle \xi , \eta \rangle ^{N} {\widehat{H}}(\xi , \eta )}{C^{N} N!^{r}} \right\| _{L^{2}} < \infty . \end{aligned}$$

Hence \(H \in \Sigma _{r}({\mathbb {R}}^{2n})\). \(\square \)

Theorem 9

Let \(p \in {\textbf {{SG}}}^{0,0}_{\mu ,\nu }({\mathbb {R}}^{2n})\) such that \(p(x,D): L^2({\mathbb {R}}^n) \rightarrow L^2({\mathbb {R}}^n)\) is bijective. Then \(\{p(x,D)\}^{-1}: L^2({\mathbb {R}}^n) \rightarrow L^2({\mathbb {R}}^n)\) is a pseudodifferential operator given by a symbol \({\tilde{p}}= q + {\tilde{k}}\) where \(q \in {\textbf {{SG}}}^{0,0}_{\mu ,\nu }({\mathbb {R}}^{2n})\) and \({\tilde{k}} \in \Sigma _{r}({\mathbb {R}}^{2n})\) for every \(r > \mu + \nu - 1\).

Proof

Since \(p(x,D): L^2({\mathbb {R}}^n) \rightarrow L^2({\mathbb {R}}^n)\) is bijective, then p(xD) is Fredholm and

$$\begin{aligned} i(p(x, D)) = dim N(p(x,D)) - dim N(p^{t}(x,D)) = 0, \end{aligned}$$

where N denotes the kernel of the operators.

Therefore, by Theorem 7, p is \({\textbf {{SG}}}\)-elliptic and, by Theorem 8, there is \(q \in {\textbf {{SG}}}^{0,0}_{\mu , \nu }({\mathbb {R}}^{2n})\) such that

$$\begin{aligned} q(x,D) \circ p(x,D) = I + r(x,D), \qquad p(x,D) \circ q(x,D) = I + s(x,D), \end{aligned}$$

for some \(r, s \in {\mathcal {S}}_{\mu +\nu -1}({\mathbb {R}}^{2n})\). In particular r(x, D), s(x, D) are compact operators on \( L^2({\mathbb {R}}^n)\). By Theorem 5, q(x, D) is a Fredholm operator and we have

$$\begin{aligned} i(q(x,D)) = i(q(x,D)) + i(p(x,D)) = i(q(x,D) \circ p(x,D)) = i(I + r(x,D)) = 0. \end{aligned}$$

Note that \(N(q(x,D))\) and \(N(q^{t}(x,D))\) are subspaces of \({\mathcal {S}}_{\mu +\nu -1}({\mathbb {R}}^{n})\). Indeed, let \(f \in N(q)\) and \(g \in N(q^t)\); then

$$\begin{aligned} 0&= p(x,D) \circ q(x,D) f = (I+s(x,D))f \implies f = -s(x,D)f, \\ 0&= p^t(x,D) \circ q^t(x,D) g = (q(x,D) \circ p(x,D))^t g = (I+r(x,D))^t g \implies g = -r^{t}(x,D)g. \end{aligned}$$

Since \(L^2({\mathbb {R}}^n)\) is a separable Hilbert space and \(N(q(x,D))\) is closed, we have the following decompositions

$$\begin{aligned} L^2 = N(q) \oplus N(q)^{\bot }, \qquad L^2 = N(q^t) \oplus R_{L^2}(q), \end{aligned}$$

where \(R_{L^2}(q)\) denotes the range of \(q(x,D)\) as an operator on \(L^2({\mathbb {R}}^{n})\).

Let \(\pi : L^2 \rightarrow N(q)\) be the projection of \(L^2\) onto N(q) with null space \(N(q)^{\bot }\), \(F: N(q) \rightarrow N(q^t)\) an isomorphism and \(i: N(q^t) \rightarrow L^2\) the inclusion. Set \(Q = i\circ F \circ \pi \). Then \(Q: L^2 \rightarrow L^2\) is bounded and its image is contained in \(N(q^t) \subset {\mathcal {S}}_{\mu +\nu -1}\). It is not difficult to see that \(Q^{*} = {\tilde{i}} \circ F^{*} \circ \pi _{N(q^{t})}\), where \({\tilde{i}}\) is the inclusion of N(q) into \(L^2\) and \(\pi _{N(q^{t})}\) is the orthogonal projection of \(L^2\) onto \(N(q^{t})\). Since \({\mathcal {S}}_{\mu +\nu -1} \subset \Sigma _{r}\), Lemma 1 implies that Q is given by a kernel in \(\Sigma _{r}\).

We will now show that \(q + Q\) is a bijective parametrix of p. Indeed, let \(u = u_1 + u_2 \in N(q)\oplus N(q)^{\bot }\) be such that \((q + Q) u = 0\). Then \(0 = q u_2 + (i\circ F)u_1 \in R_{L^2}(q) \oplus N(q^t)\). Hence \(q u_2 = 0\) and \(i\circ F u_1 = 0\), which implies that \(u = 0\). In order to prove that \(q+Q\) is surjective, consider \(f = f_1 + f_2 \in R_{L^2}(q) \oplus N(q^t)\). There exist \(u_1 \in L^2\) and \(u_2 \in N(q)\) such that \(q u_1 = f_1\) and \(F u_2 = f_2\). Now write \(u_1 = v_1 + v_2 \in N(q) \oplus N(q)^{\bot }\). Then \(q(u_1) = q(v_2)\) and therefore \((q+Q)(v_2+u_2) = f_1 + f_2 = f\). Finally notice that

$$\begin{aligned}&p(x,D) \circ (q(x,D) + Q) = I + s(x,D) + p(x,D) \circ Q = I + s'(x,D), \\&(q(x,D) + Q) \circ p(x,D) = I + r(x,D) + Q \circ p(x,D) = I + r'(x,D), \end{aligned}$$

where \(r', s' \in \Sigma _{r}({\mathbb {R}}^{2n})\).

Now set \({\tilde{q}} = q(x,D) + Q\). Therefore \({\tilde{q}} \circ p(x,D) = I + r'(x,D): L^2 \rightarrow L^2\) is bijective. Set \(k = -(I+r')^{-1} \circ r'\). Then \((I+r')(I+k) = I\) and \(k= - r' - r'k\). Observe that

$$\begin{aligned} k^t = -\{r'\}^t - k^t\{r'\}^t = -\{r'\}^t+\{r'\}^t\{(I+r')^{-1}\}^{t}\{r'\}^t. \end{aligned}$$

Hence \(k, k^t\) map \(L^2\) into \(\Sigma _{r}\) and by Lemma 1 we have that k is given by a kernel in \(\Sigma _{r}({\mathbb {R}}^{2n})\).

To finish the proof, it is enough to notice that

$$\begin{aligned} p^{-1} \circ {\tilde{q}}^{-1} = ({\tilde{q}} \circ p)^{-1} = (I+r')^{-1} = I+k \implies p^{-1} = (I+k) \circ {\tilde{q}}. \end{aligned}$$

\(\square \)
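To read off the statement from the last identity, observe that

$$\begin{aligned} \{p(x,D)\}^{-1} = (I+k) \circ {\tilde{q}} = q(x,D) + Q + k \circ {\tilde{q}}, \end{aligned}$$

where \({\tilde{K}} := Q + k \circ {\tilde{q}}\), together with its adjoint, maps \(L^2({\mathbb {R}}^{n})\) into \(\Sigma _{r}({\mathbb {R}}^{n})\); hence, by Lemma 1, \({\tilde{K}}\) is given by a kernel in \(\Sigma _{r}({\mathbb {R}}^{2n})\), that is, by a symbol \({\tilde{k}} \in \Sigma _{r}({\mathbb {R}}^{2n})\), and \({\tilde{p}} = q + {\tilde{k}}\) as claimed.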

4 Change of variables

4.1 Definition and properties of \(\lambda _2\) and \(\lambda _1\)

Let \(M_2, M_1>0\) and \(h \ge 1\) be constants to be chosen later on. We define

$$\begin{aligned} \lambda _2(x, \xi )&= M_2 w\left( \frac{\xi }{h}\right) \int _{0}^{x} \langle y \rangle ^{-\sigma } \mathrm{d}y, \quad (x, \xi ) \in {\mathbb {R}}^2, \end{aligned}$$
(4.1)
$$\begin{aligned} \lambda _1(x, \xi )&= M_1 w\left( \frac{\xi }{h}\right) \langle \xi \rangle ^{-1}_{h} \int _{0}^{x} \langle y \rangle ^{-\frac{\sigma }{2}} \psi \left( \frac{\langle y \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \mathrm{d}y, \quad (x, \xi ) \in {\mathbb {R}}^2, \end{aligned}$$
(4.2)

where

$$\begin{aligned} w (\xi ) = {\left\{ \begin{array}{ll} 0, \qquad \qquad \qquad \,\,\, |\xi | \le 1\\ -\text {sgn}(\partial _{\xi }a_3(t, \xi )) , \quad |\xi | > R_{a_3} \end{array}\right. }, \quad \psi (y) = {\left\{ \begin{array}{ll} 1, \quad |y| \le \frac{1}{2} \\ 0, \quad |y| \ge 1 \end{array}\right. }, \end{aligned}$$

\(|\partial ^{\alpha } w(\xi )| \le C_{w}^{\alpha + 1} \alpha !^{\mu }\), \(|\partial ^{\beta } \psi (y)| \le C_{\psi }^{\beta + 1}\beta !^{\mu }\) for some \(\mu > 1\) which will be chosen later. Notice that by the assumption (i) of Theorem 1 the function \(w(\xi )\) is constant for \(\xi \ge R_{a_3}\) and for \(\xi \le -R_{a_3}.\)

Lemma 2

Let \(\lambda _2(x, \xi )\) be as in (4.1). Then there exists \(C>0\) such that for all \(\alpha , \beta \in {\mathbb {N}}\) and \((x,\xi ) \in {\mathbb {R}}^2\):

  1. (i)

    \(|\lambda _2(x, \xi )| \le \frac{M_2}{1-\sigma } \langle x \rangle ^{1-\sigma };\)

  2. (ii)

    \(|\partial ^{\beta }_{x}\lambda _2(x, \xi )| \le M_2 C^{\beta } \beta ! \langle x \rangle ^{1-\sigma -\beta }\), for \(\beta \ge 1\);

  3. (iii)

    \(| \partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}\lambda _2(x, \xi )| \le M_2 C^{\alpha +\beta +1} \alpha !^{\mu } \beta ! \chi _{E_{h, R_{a_3}}} (\xi ) \langle \xi \rangle ^{-\alpha }_{h}\langle x \rangle ^{1-\sigma - \beta }\), for \(\alpha \ge 1, \beta \ge 0\),

where \(E_{h, R_{a_3}} = \{ \xi \in {\mathbb {R}}:h \le |\xi | \le R_{a_3}h\}\). In particular \(\lambda _2 \in {\textbf {{SG}}}^{0, 1-\sigma }_{\mu }({\mathbb {R}}^2)\).

Proof

First note that

$$\begin{aligned} |\lambda _2(x, \xi )| = M_2 \left| w\left( \frac{\xi }{h}\right) \right| \int _{0}^{|x|} \langle y \rangle ^{-\sigma } \mathrm{d}y \le M_2 \int _{0}^{\langle x \rangle } y^{-\sigma } \mathrm{d}y = \frac{M_2}{1-\sigma } \langle x \rangle ^{1-\sigma }. \end{aligned}$$

For \(\beta \ge 1\)

$$\begin{aligned} |\partial ^{\beta }_{x} \lambda _2(x, \xi )| \le M_2 \left| w\left( \frac{\xi }{h}\right) \right| |\partial ^{\beta - 1}_{x} \langle x \rangle ^{-\sigma }| \le M_2 C^{\beta -1} (\beta - 1)! \langle x \rangle ^{1-\sigma - \beta }. \end{aligned}$$

For \(\alpha \ge 1\)

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \lambda _2(x, \xi )|&\le M_2 h^{-\alpha } \left| w^{(\alpha )} \left( \frac{\xi }{h}\right) \right| \int ^{\langle x \rangle }_{0} y^{-\sigma } \mathrm{d}y \\&\le \frac{M_2}{1-\sigma } C^{\alpha + 1}_{w} \langle R_{a_3} \rangle ^{\alpha } \alpha !^{\mu } \chi _{E_{h, R_{a_3}}} (\xi ) \langle \xi \rangle _{h}^{-\alpha } \langle x \rangle ^{1-\sigma }. \end{aligned}$$

Finally, for \(\alpha , \beta \ge 1\)

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \lambda _2(x, \xi )|&\le M_2 h^{-\alpha } \left| w^{(\alpha )} \left( \frac{\xi }{h}\right) \right| \partial ^{\beta -1}_{x} \langle x \rangle ^{-\sigma } \\&\le M_2 C^{\alpha + 1}_{w} \langle R_{a_3} \rangle ^{\alpha } C^{\beta - 1} \alpha !^{\mu } (\beta - 1)! \chi _{E_{h, R_{a_3}}} (\xi ) \langle \xi \rangle _{h}^{-\alpha } \langle x \rangle ^{1-\sigma - \beta }. \end{aligned}$$

\(\square \)

For the function \(\lambda _1\) we can prove the following alternative estimates.

Lemma 3

Let \(\lambda _1(x,\xi )\) be as in (4.2). Then there exists \(C>0\) such that for all \(\alpha , \beta \ge 0\) and \((x,\xi ) \in {\mathbb {R}}^2\):

  1. (i)  \(|\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \lambda _1(x, \xi )| \le M_1 C^{\alpha + \beta + 1} (\alpha ! \beta !)^{\mu } \langle \xi \rangle ^{-1 - \alpha }_{h} \langle x \rangle ^{1-\frac{\sigma }{2} - \beta };\)

  2. (ii)  \(|\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \lambda _1(x, \xi )| \le M_1 C^{\alpha + \beta + 1} (\alpha ! \beta !)^{\mu } \langle \xi \rangle ^{- \alpha }_{h} \langle x \rangle ^{1-\sigma - \beta }.\)

In particular \(\lambda _1 \in {\textbf {{SG}}}^{0, 1-\sigma }_{\mu }({\mathbb {R}}^2)\).

Proof

Denote by \(\chi _{\xi }(x)\) the characteristic function of the set \(\{x \in {\mathbb {R}}:\langle x \rangle ^{\sigma } \le \langle \xi \rangle ^{2}_{h}\}\). For \(\alpha = \beta = 0\) we have

$$\begin{aligned} |\lambda _1(x, \xi )|&\le M_1 \left| w\left( \frac{\xi }{h}\right) \right| \langle \xi \rangle ^{-1}_{h} \int ^{\langle x \rangle }_{0} y^{-\frac{\sigma }{2}} \mathrm{d}y \le \frac{2}{2-\sigma } M_1 \langle \xi \rangle ^{-1}_{h} \langle x \rangle ^{1-\frac{\sigma }{2}}, \end{aligned}$$

and

$$\begin{aligned} |\lambda _1(x, \xi )| \le M_1 \left| w\left( \frac{\xi }{h}\right) \right| \int ^{\langle x \rangle }_{0} \langle \xi \rangle ^{-1}_h \langle y \rangle ^{-\frac{\sigma }{2}} \chi _{\xi }(y) \mathrm{d}y \le \frac{M_1}{1-\sigma } \langle x \rangle ^{1-\sigma }, \end{aligned}$$

since \(\langle \xi \rangle ^{-1}_{h} \le \langle y \rangle ^{-\frac{\sigma }{2}}\) on the support of \(\chi _{\xi }\).

For \(\alpha \ge 1\), with the aid of Faà di Bruno formula, we have

$$\begin{aligned} |\partial ^{\alpha }_{\xi }\lambda _1(x, \xi )|&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} h^{-\alpha _1} \left| w^{(\alpha _1)} \left( \frac{\xi }{h} \right) \right| \partial ^{\alpha _2}_{\xi }\langle \xi \rangle ^{-1}_{h} \\&\quad \times \left| \int _{0}^{x} \langle y \rangle ^{-\frac{\sigma }{2}} \partial ^{\alpha _3}_{\xi } \psi \left( \frac{\langle y \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \mathrm{d}y \right| \\&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} C^{\alpha _1+1}_{w} \langle R_{a_3} \rangle ^{\alpha _3} \alpha _1!^{\mu } \langle \xi \rangle ^{-\alpha _1}_{h} C^{\alpha _2}\alpha _2! \langle \xi \rangle ^{-1 - \alpha _2}_{h} \\&\quad \times \int _{0}^{\langle x\rangle } \langle y \rangle ^{-\frac{\sigma }{2}} \chi _{\xi }(y) \sum _{j = 1}^{\alpha _3} \frac{\left| \psi ^{(j)} \left( \frac{\langle y \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \right| }{j!} \sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \frac{\alpha _3!}{\gamma _1!\dots \gamma _j!} \prod _{\ell = 1}^{j} \partial ^{\gamma _\ell }_{\xi } \langle \xi \rangle ^{-2}_{h} \langle y \rangle ^{\sigma } \mathrm{d}y \\&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} C^{\alpha _1+1}_{w} \langle R_{a_3} \rangle ^{\alpha _3} \alpha _1!^{\mu } \langle \xi \rangle ^{-\alpha _1}_{h} C^{\alpha _2}\alpha _2! \langle \xi \rangle ^{-1 - \alpha _2}_{h} \\&\quad \times \int _{0}^{\langle x\rangle } \langle y \rangle ^{-\frac{\sigma }{2}} \chi _{\xi }(y) \sum _{j = 1}^{\alpha _3} C^{j+1}_{\psi } j!^{\mu - 1} \sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \frac{\alpha _3!}{\gamma _1!\dots \gamma _j!} \prod _{\ell = 1}^{j} C^{\gamma _\ell + 1} \gamma _{\ell }! \langle \xi \rangle ^{- \gamma _\ell }_{h} \mathrm{d}y \\&\le M_1 C_{\{w, \psi , \sigma , R_{a_3}\}}^{\alpha +1} \alpha !^{\mu } \langle \xi \rangle ^{-1 - \alpha }_{h} \langle x \rangle ^{1-\frac{\sigma }{2}}, \end{aligned}$$

and

$$\begin{aligned} |\partial ^{\alpha }_{\xi }\lambda _1(x, \xi )|&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} h^{-\alpha _1} \left| w^{(\alpha _1)} \left( \frac{\xi }{h} \right) \right| \partial ^{\alpha _2}_{\xi }\langle \xi \rangle ^{-1}_{h} \\&\quad \times \left| \int _{0}^{x} \langle y \rangle ^{-\frac{\sigma }{2}} \partial ^{\alpha _3}_{\xi } \psi \left( \frac{\langle y \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \mathrm{d}y \right| \\&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} C^{\alpha _1+1}_{w} \langle R_{a_3} \rangle ^{\alpha _3} \alpha _1!^{\mu } \langle \xi \rangle ^{-\alpha _1}_{h} C^{\alpha _2}\alpha _2! \langle \xi \rangle ^{- \alpha _2}_{h} \\&\quad \times \int _{0}^{\langle x\rangle } \langle \xi \rangle ^{-1}_{h} \langle y \rangle ^{-\frac{\sigma }{2}} \chi _{\xi }(y) \sum _{j = 1}^{\alpha _3} \frac{\left| \psi ^{(j)} \left( \frac{\langle y \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \right| }{j!} \sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \frac{\alpha _3!}{\gamma _1!\dots \gamma _j!} \prod _{\ell = 1}^{j} \partial ^{\gamma _\ell }_{\xi } \langle \xi \rangle ^{-2}_{h} \langle y \rangle ^{\sigma } \mathrm{d}y \\&\le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} C^{\alpha _1+1}_{w} \langle R_{a_3} \rangle ^{\alpha _3} \alpha _1!^{\mu } \langle \xi \rangle ^{-\alpha _1}_{h} C^{\alpha _2}\alpha _2! \langle \xi \rangle ^{- \alpha _2}_{h} \\&\quad \times \int _{0}^{\langle x\rangle } \langle y \rangle ^{-\sigma } \chi _{\xi }(y) \sum _{j = 1}^{\alpha _3} C^{j+1}_{\psi } j!^{\mu - 1}\sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \frac{\alpha _3!}{\gamma _1!\dots \gamma _j!} \prod _{\ell = 1}^{j} C^{\gamma _\ell + 1} \gamma _{\ell }! \langle \xi \rangle ^{- \gamma _\ell }_{h} \mathrm{d}y \\&\le M_1 C_{\{w, \psi , \sigma , R_{a_3}\}}^{\alpha +1} \alpha !^{\mu } \langle \xi \rangle ^{- \alpha }_{h} \langle x \rangle ^{1-\sigma }. \end{aligned}$$

For \(\beta \ge 1\) we have

$$\begin{aligned} |\partial ^{\beta }_{x} \lambda _1(x, \xi )|&\le M_1 \langle \xi \rangle ^{-1}_h \chi _{\xi } (x)\sum _{\beta _1 + \beta _2 = \beta - 1} \frac{(\beta -1)!}{\beta _1!\beta _2!} \partial ^{\beta _1}_{x} \langle x \rangle ^{-\frac{\sigma }{2}} \sum _{j=1}^{\beta _2} \frac{\left| \psi ^{(j)} \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \right| }{j!}\\&\quad \times \sum _{\delta _1 + \dots + \delta _j = \beta _2} \frac{\beta _2!}{\delta _1! \dots \delta _j!} \prod _{\ell = 1}^{j} \langle \xi \rangle ^{-2}_{h}\partial ^{\delta _\ell }_{x} \langle x \rangle ^{\sigma } \\&\le M_1 \langle \xi \rangle ^{-1}_h \chi _{\xi } (x)\sum _{\beta _1 + \beta _2 = \beta - 1} \frac{(\beta -1)!}{\beta _1!\beta _2!}\\&\qquad \times C^{\beta _1 + 1} \beta _1!^{\mu } \langle x \rangle ^{-\frac{\sigma }{2} - \beta _1} \sum _{j=1}^{\beta _2} C_{\psi }^{j+1}j!^{\mu - 1} \\&\quad \times \sum _{\delta _1 + \dots +\delta _j = \beta _2} \frac{\beta _2!}{\delta _1! \dots \delta _j!} \prod _{\ell = 1}^{j} C^{\delta _\ell + 1} \delta _{\ell }! \langle x \rangle ^{-\delta _\ell } \\&\le M_1 C_{\psi }^{\alpha + \beta + 1} (\beta - 1)!^{\mu } \langle \xi \rangle ^{-1}_h \chi _{\xi } (x) \langle x \rangle ^{1-\frac{\sigma }{2} - \beta } \\&\le M_1 C_{\psi }^{\alpha + \beta + 1} (\beta - 1)!^{\mu } \langle x \rangle ^{1-\sigma - \beta }. \end{aligned}$$

Finally, for \(\alpha , \beta \ge 1\) we have

$$\begin{aligned}&|\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \lambda _{1}(x, \xi )| \le M_1 \sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} h^{-\alpha _1}\\&\qquad \times \left| w^{(\alpha _1)} \left( \frac{\xi }{h} \right) \right| \partial ^{\alpha _2}_{\xi }\langle \xi \rangle ^{-1}_{h} \sum _{\beta _1 + \beta _2 = \beta - 1} \frac{(\beta -1)!}{\beta _1!\beta _2!} \\&\qquad \times \partial ^{\beta _1}_{x} \langle x\rangle ^{-\frac{\sigma }{2}} \left| \partial ^{\alpha _3}_{\xi } \partial ^{\beta _2}_{x} \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \right| \\&\quad \le M_1 \chi _{\xi }(x)\sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} h^{-\alpha _1} \\&\qquad \times \left| w^{(\alpha _1)} \left( \frac{\xi }{h} \right) \right| \partial ^{\alpha _2}_{\xi }\langle \xi \rangle ^{-1}_{h} \sum _{\beta _1 + \beta _2 = \beta - 1} \frac{(\beta -1)!}{\beta _1!\beta _2!} \partial ^{\beta _1}_{x} \langle x\rangle ^{-\frac{\sigma }{2}} \\&\quad \times \sum _{j=1}^{\alpha _3 + \beta _2} \frac{\left| \psi ^{(j)} \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}}\right) \right| }{j!} \\&\qquad \times \sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \sum _{\delta _1 + \dots +\delta _j = \beta _2} \frac{\alpha _3!}{\gamma _1! \dots \gamma _j!} \frac{\beta _2!}{\delta _1! \dots \delta _j!} \prod _{\ell = 1}^{j} \partial ^{\gamma _{\ell }}_{\xi } \langle \xi \rangle ^{-2}_{h} \partial ^{\delta _{\ell }}_{x} \langle x \rangle ^{\sigma } \\&\quad \le M_1 \chi _{\xi }(x)\sum _{\alpha _1 + \alpha _2 + \alpha _3 = \alpha } \dfrac{\alpha !}{\alpha _1!\alpha _2!\alpha _3!} C^{\alpha _1+1}_{w} \alpha _1!^{\mu } \langle R_{a_3} \rangle ^{\alpha _1} \langle \xi \rangle ^{-\alpha _1}_{h} C^{\alpha _2 + 1} \alpha _2! \langle \xi \rangle ^{-1 - \alpha _2}_{h} \\&\quad \times \sum _{\beta _1 + \beta _2 = \beta - 1} \frac{(\beta -1)!}{\beta _1!\beta _2!} C^{\beta _1+1} \beta _1! \langle x\rangle ^{-\frac{\sigma }{2}-\beta _1} \sum _{j=1}^{\alpha _3 + \beta _2} C^{j+1}_{\psi } j!^{\mu -1} \\&\qquad \times \sum _{\gamma _1 + \dots + \gamma _j = \alpha _3} \sum _{\delta _1 + \dots + \delta _j = \beta _2}\frac{\alpha _3!}{\gamma _1! \dots \gamma _j!} \frac{\beta _2!}{\delta _1! \dots \delta _j!}\\&\qquad \times \prod _{\ell = 1}^{j} C^{\gamma _\ell + 1} \gamma _\ell ! \langle \xi \rangle ^{-2 - \gamma _\ell }_{h} C^{\delta _\ell + 1} \delta _{\ell }! \langle x \rangle ^{\sigma - \delta _{\ell }} \\&\quad \le M_1 \chi _{\xi }(x) C^{\alpha + \beta + 1}_{\{w, \sigma , \psi , R_{a_3}\}} \alpha !^{\mu } (\beta -1)!^{\mu } \langle \xi \rangle ^{-1 - \alpha }_{h} \langle x \rangle ^{1-\frac{\sigma }{2} - \beta } \\&\quad \le M_1 C^{\alpha + \beta + 1}_{\{w,\sigma , \psi , R_{a_3}\}} \alpha !^{\mu } (\beta -1)!^{\mu } \langle \xi \rangle ^{- \alpha }_{h} \langle x \rangle ^{ 1-\sigma - \beta }. \end{aligned}$$

\(\square \)

4.2 Invertibility of \(e^{{\tilde{\Lambda }}}\), \({\tilde{\Lambda }} = \lambda _2 + \lambda _1\)

In this section we construct an inverse for the operator \(e^{{{\tilde{\Lambda }}}}(x,D)\) with \({\tilde{\Lambda }}(x, \xi ) = \lambda _2(x,\xi ) + \lambda _1(x, \xi )\) and we prove that the inverse acts continuously on Gelfand–Shilov–Sobolev spaces. By Lemmas 2 and 3 we have \({\tilde{\Lambda }} \in {\textbf {{SG}}}^{0, 1-\sigma }_{\mu }({\mathbb {R}}^{2})\). Therefore, by Proposition 5, \(e^{{\tilde{\Lambda }}} \in {\textbf {{SG}}}^{0, \infty }_{\mu ; 1/(1-\sigma )}({\mathbb {R}}^{2})\). To construct the inverse of \(e^{{\tilde{\Lambda }}}(x,D)\) we need to use the \(L^2\) adjoint of \(e^{-{\tilde{\Lambda }}}(x,D)\), defined (since \({\tilde{\Lambda }}\) is real-valued) as an oscillatory integral by

$$\begin{aligned} {}^{R}e^{-{\tilde{\Lambda }}}u(x) = (2\pi )^{-1}\int \int e^{i(x-y)\xi } \, e^{-{\tilde{\Lambda }}(y, \xi )}\, u(y) \, \mathrm{d}y \, \mathrm{d}\xi , \qquad u \in {\mathscr {S}}({\mathbb {R}}). \end{aligned}$$

Assuming \(\mu > 1\) such that \(1/(1-\sigma ) > 2\mu -1\), by results from the calculus, we may write

$$\begin{aligned} ^{R}e^{-{\tilde{\Lambda }}} = a_1(x,D) + r_1(x,D), \end{aligned}$$

where \(a_1 \sim \sum _{\alpha } \frac{1}{\alpha !} \partial ^{\alpha }_{\xi }D^{\alpha }_{x}e^{-{\tilde{\Lambda }}}\) in \(\mathrm{FSG}^{0, \infty }_{\mu ; 1/(1-\sigma )}({\mathbb {R}}^{2})\), \(r_1 \in {\mathcal {S}}_{2\mu - 1}({\mathbb {R}}^{2})\), and

$$\begin{aligned} e^{{\tilde{\Lambda }}} \circ ^{R}e^{-{\tilde{\Lambda }}} = e^{{\tilde{\Lambda }}}\circ a_1(x,D) + e^{{\tilde{\Lambda }}}\circ r_1(x,D) = a_2(x,D) + r_2(x,D) + e^{{\tilde{\Lambda }}}\circ r_1(x,D), \end{aligned}$$

where

$$\begin{aligned} a_2 \sim \sum _{\alpha , \beta } \frac{1}{\alpha ! \beta !} \partial ^{\alpha }_{\xi }e^{{\tilde{\Lambda }}}\partial ^{\beta }_{\xi }D^{\alpha +\beta }_{x}e^{-{\tilde{\Lambda }}} = \sum _{\gamma } \frac{1}{\gamma !} \partial ^{\gamma }_{\xi }(e^{{\tilde{\Lambda }}}D^{\gamma }_{x}e^{-{\tilde{\Lambda }}}) \,\,\text {in}\,\, \mathrm{FSG}^{0, \infty }_{\mu ; 1/(1-\sigma )}({\mathbb {R}}^2) \end{aligned}$$

and \(r_2 \in {\mathcal {S}}_{2\mu -1}({\mathbb {R}}^{2})\). Therefore

$$\begin{aligned} e^{{\tilde{\Lambda }}} \circ ^{R}e^{-{\tilde{\Lambda }}} = a(x,D) + r(x,D), \end{aligned}$$

where \(a \sim \sum _{\gamma } \frac{1}{\gamma !} \partial ^{\gamma }_{\xi }(e^{{\tilde{\Lambda }}}D^{\gamma }_{x}e^{-{\tilde{\Lambda }}})\) in \(\mathrm{FSG}^{0, \infty }_{\mu ; 1/(1-\sigma )}({\mathbb {R}}^{2})\) and \(r \in {\mathcal {S}}_{2\mu - 1}({\mathbb {R}}^{2})\).
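The regrouping of the double sum in the expansion of \(a_2\) above is simply Leibniz' rule: for every fixed \(\gamma \),

$$\begin{aligned} \frac{1}{\gamma !}\, \partial ^{\gamma }_{\xi }\big (e^{{\tilde{\Lambda }}}D^{\gamma }_{x}e^{-{\tilde{\Lambda }}}\big ) = \sum _{\alpha + \beta = \gamma } \frac{1}{\alpha ! \beta !}\, \partial ^{\alpha }_{\xi }e^{{\tilde{\Lambda }}}\, \partial ^{\beta }_{\xi }D^{\alpha +\beta }_{x}e^{-{\tilde{\Lambda }}}. \end{aligned}$$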

Now let us study more carefully the asymptotic expansion

$$\begin{aligned} \sum _{\gamma \ge 0} \frac{1}{\gamma !} \partial ^{\gamma }_{\xi }(e^{{\tilde{\Lambda }}}D^{\gamma }_{x}e^{-{\tilde{\Lambda }}}) = \sum _{\gamma \ge 0} r_{1, \gamma }. \end{aligned}$$

Note that

$$\begin{aligned} e^{{\tilde{\Lambda }}(x,\xi )} D^{\gamma }_{x}e^{-{\tilde{\Lambda }}(x,\xi )}&= \sum _{j = 1}^{\gamma } \frac{(-1)^{j}}{j!} \sum _{\gamma _1 + \dots + \gamma _j = \gamma } \frac{{\gamma !}}{\gamma _1! \dots \gamma _j!} \prod _{\ell = 1}^{j} D^{\gamma _\ell }_{x} {\tilde{\Lambda }}(x, \xi ), \end{aligned}$$

hence, for \(\alpha , \beta \ge 0\),

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} r_{1, \gamma }|&\le \frac{1}{\gamma !} \sum _{j = 1}^{\gamma } \frac{1}{j!} \sum _{\gamma _1 + \dots + \gamma _j = \gamma } \frac{{\gamma !}}{\gamma _1! \dots \gamma _j!} \\&\quad \times \sum _{\alpha _1 + \dots + \alpha _j = \alpha + \gamma } \sum _{\beta _1 + \dots + \beta _j = \beta } \frac{(\alpha +\gamma )!}{\alpha _1! \dots \alpha _j!} \frac{\beta !}{\beta _1!\dots \beta _j!} \\&\quad \times \prod _{\ell = 1}^{j} |\partial ^{\alpha _\ell }_{\xi }\partial ^{\beta _\ell + \gamma _\ell }_{x} {\tilde{\Lambda }}(x, \xi )| \\&\le \frac{1}{\gamma !} \sum _{j = 1}^{\gamma } \frac{1}{j!} \sum _{\gamma _1 + \dots + \gamma _j = \gamma } \frac{{\gamma !}}{\gamma _1! \dots \gamma _j!} \\&\quad \times \sum _{\alpha _1 + \dots + \alpha _j = \alpha + \gamma } \sum _{\beta _1 + \dots + \beta _j = \beta } \frac{(\alpha +\gamma )!}{\alpha _1! \dots \alpha _j!} \frac{\beta !}{\beta _1!\dots \beta _j!} \\&\quad \times \prod _{\ell = 1}^{j}C_{{\tilde{\Lambda }}}^{\alpha _\ell + \beta _\ell + \gamma _\ell + 1} \alpha _{\ell }!^{\mu }(\beta _\ell + \gamma _{\ell })!^{\mu } \langle \xi \rangle _{h}^{-\alpha _\ell }\langle x \rangle ^{1-\sigma - \beta _\ell - \gamma _\ell } \\&\le C^{\alpha + \beta + 2\gamma + 1} \alpha !^{\mu } \beta !^{\mu } \gamma !^{2\mu - 1} \langle \xi \rangle _{h}^{-\alpha - \gamma } \sum _{j = 1}^{\gamma } \frac{\langle x \rangle ^{(1-\sigma )j - \beta - \gamma }}{j!}. \end{aligned}$$

In the following we shall consider the sets

$$\begin{aligned} Q_{t_1, t_2; h} = \{(x, \xi ) \in {\mathbb {R}}^{2} : \langle x \rangle< t_1 \,\, \text {and} \,\, \langle \xi \rangle _{h} < t_2 \} \end{aligned}$$

and \(Q_{t_1, t_2; h}^{e} = {\mathbb {R}}^{2} {\setminus } Q_{t_1, t_2; h}\). When \(t_1 = t_2 = t\) we simply write \(Q_{t; h}\) and \(Q^{e}_{t; h}\).

Let \(\psi (x, \xi ) \in C^{\infty }({\mathbb {R}}^{2})\) such that \(\psi \equiv 0\) on \(Q_{2; h}\), \(\psi \equiv 1\) on \(Q^{e}_{3; h}\), \(0 \le \psi \le 1\) and

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \psi (x, \xi )| \le C^{\alpha + \beta + 1}_{\psi }\alpha !^{\mu } \beta !^{\mu }, \end{aligned}$$

for every \(x, \xi \in {\mathbb {R}}\) and \(\alpha , \beta \in {\mathbb {N}}_0\). Now set \(\psi _0 \equiv 1\) and, for \(j \ge 1\),

$$\begin{aligned} \psi _j (x, \xi ) := \psi \left( \dfrac{x}{R(j)}, \dfrac{\xi }{R(j)} \right) , \end{aligned}$$

where \(R(j) := R j^{2\mu -1}\) and \(R > 0\) is a large constant. Let us recall that

  • \((x, \xi ) \in Q^{e}_{3R(j)} \implies \left( \dfrac{x}{R(j)}, \dfrac{\xi }{R(j)} \right) \in Q^{e}_{3} \implies \psi _i(x, \xi ) = 1\), for \(i \le j\);

  • \((x, \xi ) \in Q_{R(j)} \implies \left( \dfrac{x}{R(j)}, \dfrac{\xi }{R(j)} \right) \in Q_{2} \implies \psi _i(x, \xi ) = 0\), for \(i \ge j\).
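For instance, the first implication can be checked directly: if \((x, \xi ) \in Q^{e}_{3R(j)}\) then \(\langle x \rangle \ge 3R(j)\) or \(\langle \xi \rangle _{h} \ge 3R(j)\); in the first case, for \(i \le j\) and \(R(i) \ge 1\),

$$\begin{aligned} \Big \langle \frac{x}{R(i)} \Big \rangle \ge \frac{\langle x \rangle }{R(i)} \ge \frac{3R(j)}{R(i)} \ge 3, \end{aligned}$$

and in the second case the same computation with \(\langle \xi \rangle _{h}\) in place of \(\langle x \rangle \) applies; in both cases \(\left( \frac{x}{R(i)}, \frac{\xi }{R(i)} \right) \in Q^{e}_{3}\), hence \(\psi _i(x, \xi ) = 1\).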

Defining \(b(x,\xi ) = \sum _{j \ge 0} \psi _{j}(x,\xi ) r_{1, j}(x,\xi )\) we have that \(b \in {\textbf {{SG}}}^{0,\infty }_{\mu ;\frac{1}{1-\sigma }}({\mathbb {R}}^{2})\) and

$$\begin{aligned} b(x,\xi ) \sim \sum _{j \ge 0} r_{1, j}(x,\xi ) \,\, \text {in} \,\, \mathrm{FSG}^{0, \infty }_{\mu ;\frac{1}{1-\sigma }}({\mathbb {R}}^{2}). \end{aligned}$$

We will show that \(b \in {\textbf {{SG}}}^{0,0}_{\mu }({\mathbb {R}}^{2})\). Indeed, first we write

$$\begin{aligned} b(x,\xi ) = 1 + \sum _{j \ge 1} \psi _{j}(x,\xi ) r_{1,j}(x,\xi ) = 1 + \sum _{j \ge 0} \psi _{j+1}(x,\xi ) r_{1, j+1}(x,\xi ). \end{aligned}$$

On the support of \(\partial ^{\alpha _1}_{\xi }\partial ^{\beta _1}_{x} \psi _{j+1}\) we have

$$\begin{aligned} \langle x \rangle \le 3R(j+1) \quad \text {and} \quad \langle \xi \rangle _{h} \le 3 R(j+1), \end{aligned}$$

whenever \(\alpha _1 + \beta _1 \ge 1\). Hence

$$\begin{aligned}&|\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \sum _{j \ge 0} \psi _{j+1} r_{1, j+1} (x,\xi ) |\\&\quad \le \sum _{j \ge 0} \sum _{\overset{\alpha _1+\alpha _2 = \alpha }{\beta _1+\beta _2 = \beta }} \frac{\alpha !}{\alpha _1!\alpha _2!} \frac{\beta !}{\beta _1!\beta _2!} |\partial ^{\alpha _1}_{\xi }\partial ^{\beta _1}_{x} \psi _{j+1}(x,\xi )| |\partial ^{\alpha _2}_{\xi } \partial ^{\beta _2}_{x} r_{1, j+1}(x,\xi )| \\&\quad \le \sum _{j \ge 0} \sum _{\overset{\alpha _1+\alpha _2 = \alpha }{\beta _1+\beta _2 = \beta }} \frac{\alpha !}{\alpha _1!\alpha _2!} \frac{\beta !}{\beta _1!\beta _2!} \frac{1}{R(j+1)^{(\alpha _1 + \beta _1)}} C_{\psi }^{\alpha _1+\beta _1 +1} \alpha _1!^{\mu }\beta _1!^{\mu } \\&\qquad \times C^{\alpha _2+\beta _2 + 2(j+1) +1 } \alpha _2!^{\mu } \beta _2!^{\mu } (j+1)!^{2\mu -1} \\&\qquad \times \langle \xi \rangle ^{-\alpha _2-(j+1)}_{h} \langle x \rangle ^{-\beta _2-(j+1)} \sum _{\ell = 1}^{j+1} \frac{\langle x \rangle ^{(1-\sigma )\ell }}{\ell !} \\&\quad \le \sum _{j \ge 0} \sum _{\overset{\alpha _1+\alpha _2 = \alpha }{\beta _1+\beta _2 = \beta }} \frac{\alpha !}{\alpha _1!\alpha _2!} \frac{\beta !}{\beta _1!\beta _2!} \frac{1}{R(j+1)^{(\alpha _1+\beta _1)}} C_{\psi }^{\alpha _1+\beta _1 +1} \alpha _1!^{\mu }\beta _1!^{\mu } \\&\qquad \times C^{\alpha _2+\beta _2 + 2(j+1) +1 } \alpha _2!^{\mu } \beta _2!^{\mu } (j+1)!^{2\mu -1} \langle \xi \rangle ^{-\alpha _2-(j+1)}_{h}\\&\qquad \times \langle x \rangle ^{-\sigma - \beta _2} \sum _{\ell = 1}^{j+1} \frac{\langle x \rangle ^{(1-\sigma )(\ell -1) - j}}{\ell !} \\&\quad \le {\tilde{C}}^{\alpha +\beta +1}(\alpha !\beta !)^{\mu } \langle \xi \rangle ^{-1 - \alpha }_{h} \langle x \rangle ^{-\sigma - \beta }\\&\qquad \times \sum _{j \ge 0} C^{2j} (j+1)!^{2\mu -1} \langle \xi \rangle ^{-j}_{h} \sum _{\ell = 0}^{j+1} \frac{\langle x \rangle ^{(1-\sigma )(\ell -1) - j}}{\ell !}. \end{aligned}$$

We also have that

$$\begin{aligned} \langle x \rangle \ge R(j+1) \quad \text {or} \quad \langle \xi \rangle _{h} \ge R(j+1) \end{aligned}$$

holds true on the support of \(\partial ^{\alpha _1}_{\xi }\partial ^{\beta _1}_{x} \psi _{j+1}\). If \(\langle \xi \rangle _{h} \ge R(j+1)\), then

$$\begin{aligned} \langle \xi \rangle ^{-j}_{h} \le R^{-j}(j+1)^{-j(2\mu -1)} \le R^{-j} (j+1)!^{-(2\mu -1)}. \end{aligned}$$

On the other hand, since we are assuming \(\mu > 1\) such that \(2\mu -1 < \frac{1}{1-\sigma }\), and since the exponent \((1-\sigma )(\ell -1) - j\) is non-positive, if \(\langle x \rangle \ge R(j+1)\) we obtain

$$\begin{aligned} \langle x \rangle ^{(1-\sigma )(\ell -1) - j}&\le R^{(1-\sigma )(\ell -1) - j}\{(j+1)^{2\mu -1} \}^{(1-\sigma )(\ell -1) - j} \\&\le R^{-\sigma j} (j+1)^{\ell - 1 - j(2\mu -1)} \\&\le R^{-\sigma j} (j+1)^{\ell - 1} (j+1)!^{-(2\mu -1)}, \end{aligned}$$

where in the last step we used \((j+1)! \le (j+1)^{j}\).

Enlarging \(R >0\) if necessary, we can infer that \(\sum _{j \ge 1} \psi _{j} r_{1, j} \in {\textbf {{SG}}}^{-1, -\sigma }_{\mu }({\mathbb {R}}^{2})\).

In an analogous way it is possible to prove that \(\sum _{j \ge k} \psi _{j} r_{1, j} \in {\textbf {{SG}}}^{-k, -\sigma k}_{\mu }({\mathbb {R}}^{2})\). Hence, we may conclude

$$\begin{aligned} b(x,\xi ) - \sum _{j < k}r_{1,j}(x,\xi ) \in {\textbf {{SG}}}^{-k, -\sigma k}_{\mu }({\mathbb {R}}^{2}), \quad k \in {\mathbb {N}}, \end{aligned}$$

that is, \(b \sim \sum _{j} r_{1,j}\) in \({\textbf {{SG}}}^{0,0}_{\mu }({\mathbb {R}}^{2})\).

Since \(a \sim \sum r_{1,j}\) in \(\mathrm{FSG}^{0, \infty }_{\mu ;1/(1-\sigma )}({\mathbb {R}}^{2})\) and \(b \sim \sum r_{1,j}\) in \(\mathrm{FSG}^{0,\infty }_{\mu ;1/(1-\sigma )}({\mathbb {R}}^{2})\), we have \(a - b \in {\mathcal {S}}_{2\mu -1}({\mathbb {R}}^{2})\). Thus we may write

$$\begin{aligned} e^{{\tilde{\Lambda }}}(x,D) \circ ^{R}e^{-{\tilde{\Lambda }}} = I + {\tilde{r}}(x,D) + {\bar{r}}(x,D) =I+r(x,D), \end{aligned}$$

where \({\tilde{r}} \in {\textbf {{SG}}}^{-1, -\sigma }_{\mu }({\mathbb {R}}^{2})\), \({\tilde{r}} \sim \sum _{\gamma \ge 1} r_{1, \gamma }\) in \({\textbf {{SG}}}^{-1, -\sigma }_{\mu }({\mathbb {R}}^{2})\) and \({\bar{r}} \in {\mathcal {S}}_{2\mu -1}({\mathbb {R}}^{2})\). In particular \(r \in {\textbf {{SG}}}^{-1, -\sigma }_{2\mu -1}({\mathbb {R}}^2)\), therefore we obtain

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} r(x,\xi )|&\le C_{\alpha , \beta } \langle \xi \rangle ^{-1 -\alpha }_{h} \langle x \rangle ^{-\sigma -\beta } \\&\le C_{\alpha , \beta } h^{-1} \langle \xi \rangle ^{-\alpha }_{h} \langle x \rangle ^{-\sigma - \beta }. \end{aligned}$$

This implies that the \((0,0)-\)seminorms of r are bounded by \(Ch^{-1}\), with \(C>0\) independent of h. Choosing h large enough, we obtain that \(I + r(x,D)\) is invertible on \(L^{2}({\mathbb {R}})\) and its inverse \((I+r(x,D))^{-1}\) is given by the Neumann series \(\sum _{j \ge 0}(-r(x,D))^{j}\).
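Indeed, since the \(L^2({\mathbb {R}})\)-boundedness results for operators with symbols in \({\textbf {{SG}}}^{0,0}\) (of Calderón–Vaillancourt type) give an operator norm controlled by finitely many \((0,0)\)-seminorms of the symbol, we get, for a suitable constant \(C>0\) independent of h,

$$\begin{aligned} \Vert r(x,D)\Vert _{{\mathcal {L}}(L^{2}({\mathbb {R}}))} \le C h^{-1} < 1 \quad \text {for } h > C, \end{aligned}$$

so that the Neumann series converges in \({\mathcal {L}}(L^{2}({\mathbb {R}}))\) and \(\Vert (I+r(x,D))^{-1}\Vert _{{\mathcal {L}}(L^{2}({\mathbb {R}}))} \le (1-Ch^{-1})^{-1}\).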

By Theorem 9 we have

$$\begin{aligned} (I+r(x,D))^{-1}= q(x,D) + k(x,D), \end{aligned}$$

where \(q \in {\textbf {{SG}}}^{0,0}_{2\mu -1}({\mathbb {R}}^{2})\), \(k \in \Sigma _{\delta }({\mathbb {R}}^{2})\) for every \(\delta > 2(2\mu -1) - 1 = 4\mu - 3 \). Choosing \(\mu >1\) close enough to 1, we have that \(\delta \) can be chosen arbitrarily close to 1. Hence, by Theorem 3, for every fixed \(s>1, \theta >1\), we can find \(\mu >1\) such that

$$\begin{aligned} (I+r(x,D))^{-1} : H^{m'}_{\rho '; s, \theta }({\mathbb {R}}) \rightarrow H^{m'}_{\rho '; s, \theta }({\mathbb {R}}) \end{aligned}$$

is continuous for every \(m', \rho ' \in {\mathbb {R}}^2\). Analogously one can show the existence of a left inverse of \(e^{{\tilde{\Lambda }}}\) with the same properties. Summing up, we obtain the following result.

Lemma 4

Let \(s, \theta > 1\) and take \(\mu > 1\) such that \(\min \{s, \theta \} > 4\mu -3\). For \(h > 0\) large enough, the operator \(e^{{\tilde{\Lambda }}}(x,D)\) is invertible on \(L^2({\mathbb {R}})\) and on \(\Sigma _{\min \{s,\theta \}}({\mathbb {R}})\) and its inverse is given by

$$\begin{aligned} \{e^{{\tilde{\Lambda }}}(x,D)\}^{-1} = {}^{R}e^{-{\tilde{\Lambda }}} \circ (I+r(x,D))^{-1}= {}^{R}e^{-{\tilde{\Lambda }}} \circ \sum _{j \ge 0} (-r(x,D))^{j}, \end{aligned}$$

where \(r \in {\textbf {{SG}}}^{-1, -\sigma }_{2\mu - 1}({\mathbb {R}}^{2})\) and \(r \sim \sum _{\gamma \ge 1} \frac{1}{\gamma !} \partial ^{\gamma }_{\xi }(e^{{\tilde{\Lambda }}} D^{\gamma }_{x} e^{-{\tilde{\Lambda }}})\) in \({\textbf {{SG}}}^{-1, -\sigma }_{2\mu -1}({\mathbb {R}}^{2})\). Moreover, the symbol of \((I+r(x,D))^{-1}\) belongs to \({\textbf {{SG}}}^{0,0}_{\delta }({\mathbb {R}}^2)\) for every \(\delta >4\mu -3\) and it maps continuously \(H^{m'}_{\rho ';s,\theta }({\mathbb {R}})\) into itself for any \(\rho ', m' \in {\mathbb {R}}^2\).

We conclude this section writing \(\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1}\) in a more precise way. From the asymptotic expansion of the symbol \(r(x,\xi )\) we may write

$$\begin{aligned} \{e^{{\tilde{\Lambda }}}(x,D)\}^{-1} = ^{R}e^{-{\tilde{\Lambda }}} \circ (I - r(x,D) + (r(x,D))^2 + q_{-3}(x,D)), \end{aligned}$$

where \(q_{-3}\) denotes an operator with symbol in \({\textbf {{SG}}}^{-3, -3\sigma }_{\delta }({\mathbb {R}}^{2})\) for every \(\delta >4\mu -3\). Now note that

$$\begin{aligned} r = i\partial _{\xi } \partial _{x}{\tilde{\Lambda }} + \frac{1}{2}\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} - [\partial _x {\tilde{\Lambda }}]^{2}) + q_{-3} = q_{-1} + q_{-2} + q_{-3}. \end{aligned}$$
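Indeed, \(q_{-1}\) and \(q_{-2}\) are just the \(\gamma = 1\) and \(\gamma = 2\) terms of the expansion of r given in Lemma 4:

$$\begin{aligned} \partial _{\xi }\big (e^{{\tilde{\Lambda }}}D_{x}e^{-{\tilde{\Lambda }}}\big ) = \partial _{\xi }\big (-D_{x}{\tilde{\Lambda }}\big ) = i\partial _{\xi }\partial _{x}{\tilde{\Lambda }}, \qquad \frac{1}{2}\partial ^{2}_{\xi }\big (e^{{\tilde{\Lambda }}}D^{2}_{x}e^{-{\tilde{\Lambda }}}\big ) = \frac{1}{2}\partial ^{2}_{\xi }\big (\partial ^{2}_{x}{\tilde{\Lambda }} - [\partial _{x}{\tilde{\Lambda }}]^{2}\big ), \end{aligned}$$

while \(q_{-3} := r - q_{-1} - q_{-2}\) has order \((-3, -3\sigma )\) since \(r \sim \sum _{\gamma \ge 1} r_{1, \gamma }\).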

Hence

$$\begin{aligned} (r(x,D))^2= & {} (q_{-1} + q_{-2} + q_{-3})(x,D) \circ (q_{-1} + q_{-2} + q_{-3})(x,D) \\= & {} q_{-1}(x,D) \circ q_{-1}(x,D) + q_{-3}(x,D) \\= & {} \text {op}\left\{ -[\partial _{\xi } \partial _{x} {\tilde{\Lambda }} ]^{2} + q_{-3} \right\} \end{aligned}$$

for a new element \(q_{-3}\) in the same space. We finally obtain:

$$\begin{aligned}&\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1} \nonumber \\&\quad = ^{R}e^{-{\tilde{\Lambda }}} \circ \left[ I +\text {op}\left( - i\partial _{\xi } \partial _{x} {\tilde{\Lambda }} - \frac{1}{2}\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} - [\partial _x {\tilde{\Lambda }}]^{2}) - [\partial _{\xi } \partial _{x} {\tilde{\Lambda }} ]^{2} + q_{-3} \right) \right] ,\nonumber \\ \end{aligned}$$
(4.3)

where \(q_{-3} \in {\textbf {{SG}}}^{-3, -3\sigma }_{\delta }({\mathbb {R}}^{2})\). Since we deal with operators whose order does not exceed 3, in the next sections we will frequently use formula (4.3) for the inverse of \(e^{{\tilde{\Lambda }}}(x,D)\).

5 Conjugation of iP

In this section we will perform the conjugation of iP by the operator \(e^{ \rho _1 \langle D \rangle ^{\frac{1}{\theta }} } \circ e^{\Lambda }(t, x, D)\) and its inverse, where \(\Lambda (t,x,\xi )=k(t)\langle x\rangle ^{1-\sigma }_h+{{\tilde{\Lambda }}}(x,\xi )\) and \(k\in C^1([0,T];{\mathbb {R}})\) is non-increasing with \(k(T)\ge 0\). Since the arguments below also involve derivatives with respect to t, these will be denoted by \(D_t\), whereas the symbol D in the notation for pseudodifferential operators will always correspond to derivatives with respect to x.

More precisely, we will compute

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}&\circ&e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \circ e^{{\tilde{\Lambda }}}(x,D) \circ (iP)(t, x, D_t, D_x) \\&\circ&\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1} \circ e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} \circ e^{-\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} , \end{aligned}$$

where \(\rho _1 \in {\mathbb {R}}\) and \(P(t, x, D_t, D_x)\) is given by (1.10). As we discussed before, the role of this conjugation is to make the lower-order terms of the conjugated operator positive.
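Schematically, for a single term \(c(x)D^{j}_{x}\) and a real-valued, t-independent weight \(\lambda = \lambda (x)\) (a simplified model of the conjugations performed below), one has

$$\begin{aligned} e^{\lambda (x)}\, c(x) D^{j}_{x}\, e^{-\lambda (x)} = c(x)D^{j}_{x} + i j\, c(x)\, \partial _{x}\lambda (x)\, D^{j-1}_{x} + \text {terms of order at most } j-2 \text { in } D_{x}, \end{aligned}$$

so each conjugation produces, one order lower, a correction proportional to \(\partial _x \lambda \); this is the mechanism generating the terms \(-\partial _{\xi }a_3\partial _{x}\lambda _2\), \(-\partial _{\xi }a_3\partial _{x}\lambda _1\) and \(-k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h}\) in the computations below.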

Since the operator \(^{R}e^{-{\tilde{\Lambda }}}\) appears in the inverse \(\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1}\), we need the following technical lemma.

Lemma 5

Let \({\tilde{\Lambda }} \in {\textbf {{SG}}}^{0, 1-\sigma }_{\mu }({\mathbb {R}}^2)\) and \(a \in {\textbf {{SG}}}^{m_1, m_2}_{1, s_0}({\mathbb {R}}^2)\), with \(\mu > 1\) such that \(1/(1-\sigma ) > \mu + s_0 - 1\) and \(s_0 > \mu \). Then, for \(M \in {\mathbb {N}}\),

$$\begin{aligned}&e^{{\tilde{\Lambda }}}(x,D) \circ a(x,D) \circ ^{R}e^{-{\tilde{\Lambda }}} \\&\quad = a(x,D) + \text {op}\left( \sum _{1 \le \alpha + \beta < M} \frac{1}{\alpha ! \beta !} \partial ^{\alpha }_{\xi } \{ \partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} D^{\beta }_{x} a D^{\alpha }_{x} e^{-{\tilde{\Lambda }}}\} + q_{M} \right) + r(x,D), \end{aligned}$$

where \(q_M \in {\textbf {{SG}}}^{m_1 - M, m_2 - M\sigma }_{\mu , s_0}({\mathbb {R}}^2)\) and \(r \in {\mathcal {S}}_{\mu +s_0 -1}({\mathbb {R}}^{2})\).

Proof

Since \(e^{\pm {\tilde{\Lambda }}} \in {\textbf {{SG}}}^{0, \infty }_{\mu ; \frac{1}{1-\sigma }} ({\mathbb {R}}^{2})\) and \(a \in {\textbf {{SG}}}^{m_1, m_2}_{1, s_0} ({\mathbb {R}}^{2})\), by results from the calculus, we obtain

$$\begin{aligned} ^{R}e^{-{\tilde{\Lambda }}} = a_1(x,D) + r_1(x,D) \quad \mathrm{and}\quad e^{{\tilde{\Lambda }}} (x,D) \circ a(x,D) = a_2(x,D) + r_2(x,D), \end{aligned}$$

where \(a_1 \in {\textbf {{SG}}}^{0, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}({\mathbb {R}}^{2})\), \(a_2 \in {\textbf {{SG}}}^{m_1, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}({\mathbb {R}}^{2}),\) \(r_1, r_2 \in {\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})\) and

$$\begin{aligned}&a_1 \sim \sum _{\alpha } \frac{1}{\alpha !} \partial ^{\alpha }_{\xi } D^{\alpha }_{x} e^{-{\tilde{\Lambda }}} \,\, \text {in} \,\, \mathrm{FSG}^{0, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}({\mathbb {R}}^{2}), \\&\quad a_2 \sim \sum _{\beta } \frac{1}{\beta !} \partial ^{\beta }_{\xi }e^{{\tilde{\Lambda }}} D^{\beta }_{x} a \,\, \text {in} \,\, \mathrm{FSG}^{m_1, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}({\mathbb {R}}^{2}). \end{aligned}$$

Hence

$$\begin{aligned} e^{{\tilde{\Lambda }}} \circ a(x,D) \circ ^{R}e^{-{\tilde{\Lambda }}}= & {} a_2(x,D) \circ a_1(x,D) + a_2(x,D) \circ r_1(x,D) \\&+ r_2(x,D) \circ a_1(x,D) + r_2(x,D) \circ r_1(x,D) \\= & {} a_3(x,D) + r_3(x,D)+ a_2(x,D) \circ r_1(x,D) \\&+ r_2(x,D) \circ a_1(x,D) + r_2(x,D) \circ r_1(x,D), \end{aligned}$$

with \(a_2(x,D) \circ a_1(x,D) = a_3(x,D) + r_3(x,D)\), where \(a_3 \in {\textbf {{SG}}}^{m_1, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}({\mathbb {R}}^{2})\), \(r_3 \in {\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})\) and

$$\begin{aligned} a_3&\sim \sum _{\gamma , \alpha , \beta } \frac{1}{\alpha !\beta !\gamma !} \, \partial ^{\gamma }_{\xi } \{\partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} D^{\beta }_{x}a\} \partial ^{\alpha }_{\xi }D^{\alpha + \gamma }_{x}e^{-{\tilde{\Lambda }}} = \sum _{\alpha , \beta } \frac{1}{\alpha ! \beta !} \, \partial ^{\alpha }_{\xi } \{ \partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} D^{\beta }_{x} a D^{\alpha }_{x} e^{-{\tilde{\Lambda }}}\} \\&=\displaystyle \sum _{j\ge 0}\sum _{\alpha +\beta =j}\frac{1}{\alpha ! \beta !} \, \partial ^{\alpha }_{\xi } \{ \partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} D^{\beta }_{x} a D^{\alpha }_{x} e^{-{\tilde{\Lambda }}}\}=:\sum _{j\ge 0} c_j \,\, \text {in} \,\, \mathrm{FSG}^{m_1, \infty }_{\mu , s_0; \frac{1}{1-\sigma }}. \end{aligned}$$

Thus

$$\begin{aligned} e^{{\tilde{\Lambda }}}(x,D) \circ a(x,D) \circ ^{R}e^{-{\tilde{\Lambda }}} = a_3(x,D) + r(x,D), \end{aligned}$$

for some \(r \in {\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})\).

Now let us study the asymptotic expansion of \(a_3\). For \(\alpha , \beta \in {\mathbb {N}}_0\) we have (omitting the dependence \((x,\xi )\)):

$$\begin{aligned} \partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} \partial ^{\beta }_{x} a \partial ^{\alpha }_{x} e^{-{\tilde{\Lambda }}}&= \partial ^{\beta }_{x} a \sum _{h = 1}^{\beta } \frac{1}{h!} \sum _{\beta _1 + \dots + \beta _h = \beta } \frac{\beta !}{\beta _1! \dots \beta _h!} \prod _{\ell = 1}^{h} \partial ^{\beta _\ell }_{\xi } {\tilde{\Lambda }} \\&\quad \times \sum _{k = 1}^{\alpha } \frac{1}{k!} \sum _{\alpha _1 + \dots + \alpha _k = \alpha } \frac{\alpha !}{\alpha _1! \dots \alpha _k!} \prod _{\ell = 1}^{k} \partial ^{\alpha _\ell }_{x} (-{\tilde{\Lambda }}). \end{aligned}$$

Therefore, by Faà di Bruno formula, for \(\gamma , \delta \in {\mathbb {N}}_0\), we have

$$\begin{aligned}&\partial ^{\gamma + \alpha }_{\xi } \partial ^{\delta }_{x} \{\partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} \partial ^{\beta }_{x} a \partial ^{\alpha }_{x} e^{-{\tilde{\Lambda }}}\}\\&\quad = \sum _{\gamma _1 + \gamma _2 + \gamma _3 = \gamma + \alpha } \sum _{\delta _1 + \delta _2 + \delta _3 = \delta } \frac{(\gamma + \alpha )!}{\gamma _1! \gamma _2! \gamma _3!} \frac{\delta !}{\delta _1! \delta _2! \delta _3!} \,\, \partial ^{\gamma _1}_{\xi } \partial ^{\beta + \delta _1}_{x} a \\&\qquad \times \partial ^{\gamma _2}_{\xi } \partial ^{\delta _2}_{x} \left( \sum _{h = 1}^{\beta } \frac{1}{h!}\sum _{\beta _1 + \dots + \beta _h = \beta } \frac{\beta !}{\beta _1! \dots \beta _h!} \prod _{\ell = 1}^{h} \partial ^{\beta _\ell }_{\xi } {\tilde{\Lambda }} \right) \\&\qquad \times \partial ^{\gamma _3}_{\xi } \partial ^{\delta _3}_{x} \left( \sum _{k = 1}^{\alpha } \frac{1}{k!} \sum _{\alpha _1 + \dots + \alpha _k = \alpha } \frac{\alpha !}{\alpha _1! \dots \alpha _k!} \prod _{\ell = 1}^{k} \partial ^{\alpha _\ell }_{x} (-{\tilde{\Lambda }}) \right) \\&\quad = \sum _{\gamma _1 + \gamma _2 + \gamma _3 = \gamma + \alpha } \sum _{\delta _1 + \delta _2 + \delta _3 = \delta } \frac{(\gamma + \alpha )!}{\gamma _1! \gamma _2! \gamma _3!} \frac{\delta !}{\delta _1! \delta _2! \delta _3!} \,\, \partial ^{\gamma _1}_{\xi } \partial ^{\beta + \delta _1}_{x} a \\&\qquad \times \sum _{h = 1}^{\beta } \frac{1}{h!}\sum _{\beta _1 + \dots + \beta _h = \beta } \frac{\beta !}{\beta _1! \dots \beta _h!} \sum _{\theta _1 + \dots + \theta _h = \gamma _2} \sum _{\sigma _1 + \dots + \sigma _h = \delta _2} \frac{\gamma _2!}{\theta _1! \dots \theta _h!} \frac{\delta _2!}{\sigma _1! \dots \sigma _h!} \\&\qquad \times \prod _{\ell = 1}^{h} \partial ^{\theta _\ell + \beta _\ell }_{\xi } \partial ^{\sigma _\ell }_{x} {\tilde{\Lambda }} \\&\qquad \times \sum _{k = 1}^{\alpha } \frac{1}{k!}\sum _{\alpha _1 + \dots + \alpha _k = \alpha } \frac{\alpha !}{\alpha _1! \dots \alpha _k!} \sum _{\theta _1 + \dots + \theta _k = \gamma _3} \sum _{\sigma _1 + \dots + \sigma _k = \delta _3} \frac{\gamma _3!}{\theta _1! \dots \theta _k!} \frac{\delta _3!}{\sigma _1! \dots \sigma _k!} \\&\qquad \times \prod _{\ell = 1}^{k} \partial ^{\theta _\ell }_{\xi } \partial ^{\alpha _\ell + \sigma _\ell }_{x} (-{\tilde{\Lambda }}), \end{aligned}$$

hence

$$\begin{aligned}&|\partial ^{\gamma +\alpha }_{\xi } \partial ^{\delta }_{x}(\partial ^{\beta }_{\xi } e^{{\tilde{\Lambda }}} D^{\beta }_{x}a D^{\alpha }_{x}e^{-{\tilde{\Lambda }}})| \\&\quad \le \sum _{\overset{\gamma _1 + \gamma _2 + \gamma _3 = \gamma + \alpha }{\delta _1 + \delta _2 + \delta _3 = \delta }} \frac{(\gamma + \alpha )!}{\gamma _1! \gamma _2! \gamma _3!} \frac{\delta !}{\delta _1! \delta _2! \delta _3!} C^{\gamma _1 + \beta + \delta _1 + 1}_{a} \gamma _1!^{\mu }(\beta +\gamma _1)!^{s_0} \\&\qquad \times \langle \xi \rangle ^{m_1 - \gamma _1} \langle x \rangle ^{m_2 - \beta - \delta _1} \\&\qquad \times \sum _{h= 1}^{\beta } \frac{1}{h!}\sum _{\beta _1 + \dots + \beta _h = \beta } \frac{\beta !}{\beta _1! \dots \beta _h!} \sum _{\theta _1 + \dots + \theta _h = \gamma _2} \sum _{\sigma _1 + \dots + \sigma _h = \delta _2} \frac{\gamma _2!}{\theta _1! \dots \theta _h!} \frac{\delta _2!}{\sigma _1! \dots \sigma _h!} \\&\qquad \times \prod _{\ell = 1}^{h} C^{\theta _\ell + \beta _\ell + \sigma _\ell + 1}_{{\tilde{\Lambda }}} (\theta _\ell + \beta _\ell )!^{\mu }\sigma _{\ell }!^{\mu } \langle \xi \rangle ^{-\theta _\ell - \beta _\ell } \langle x \rangle ^{1-\sigma - \sigma _{\ell }} \\&\qquad \times \sum _{k = 1}^{\alpha } \frac{1}{k!}\sum _{\alpha _1 + \dots + \alpha _k = \alpha } \frac{\alpha !}{\alpha _1! \dots \alpha _k!} \sum _{\theta _1 + \dots + \theta _k = \gamma _3} \sum _{\sigma _1 + \dots + \sigma _k = \delta _3} \frac{\gamma _3!}{\theta _1! \dots \theta _k!} \frac{\delta _3!}{\sigma _1! \dots \sigma _k!} \\&\qquad \times \prod _{\ell = 1}^{k} C^{\theta _\ell + \beta _\ell + \sigma _\ell + 1}_{{\tilde{\Lambda }}} (\theta _\ell )!^{\mu }(\alpha _\ell +\sigma _{\ell })!^{\mu } \langle \xi \rangle ^{-\theta _\ell } \langle x \rangle ^{1-\sigma - \alpha _\ell - \sigma _{\ell }} \\&\quad \le C_1^{\gamma + \delta + 2(\alpha + \beta ) + 1}\gamma !^{\mu }\delta !^{s_0}(\alpha + \beta )!^{\mu + s_0} \langle \xi \rangle ^{m_1 - \gamma -(\alpha + \beta )} \langle x \rangle ^{m_2 - \delta -(\alpha + \beta )} \\&\qquad \times \sum _{k = 1}^{\alpha } \frac{ \langle x \rangle ^{k(1-\sigma )}}{k!} \sum _{h =1}^{\beta } \frac{ \langle x \rangle ^{h(1-\sigma )} }{h!} \\&\quad \le C_1^{\gamma + \delta + 2(\alpha + \beta ) + 1}\gamma !^{\mu }\delta !^{s_0}(\alpha + \beta )!^{\mu + s_0} \langle \xi \rangle ^{m_1 - \gamma -(\alpha + \beta )} \langle x \rangle ^{m_2 - \delta -(\alpha + \beta )} \\&\quad \quad \times 2^{\alpha +\beta } \sum _{k = 1}^{\alpha +\beta } \frac{ \langle x \rangle ^{k(1-\sigma )}}{k!}. \end{aligned}$$

The above estimate implies

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} c_j(x,\xi )| \le C^{\alpha + \beta + 2j + 1}\alpha !^{\mu }\beta !^{s_0}(j)!^{\mu + s_0-1} \langle \xi \rangle ^{m_1 - \alpha - j} \langle x \rangle ^{m_2 - \beta - j} \sum _{k = 1}^{j} \frac{ \langle x \rangle ^{k(1-\sigma )} }{k!}, \end{aligned}$$

for every \(j \ge 0\), \(\alpha , \beta \in {\mathbb {N}}_0\) and \(x,\xi \in {\mathbb {R}}\).

Let \(\psi (x, \xi ) \in C^{\infty }({\mathbb {R}}^{2})\) such that \(\psi \equiv 0\) on \(Q_{2}\), \(\psi \equiv 1\) on \(Q^{e}_{3}\), \(0 \le \psi \le 1\) and

$$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x} \psi (x, \xi )| \le C^{\alpha + \beta + 1}\alpha !^{\mu } \beta !^{s_0}, \end{aligned}$$

for every \(x, \xi \in {\mathbb {R}}\) and \(\alpha , \beta \in {\mathbb {N}}_0\). Now set \(\psi _0 \equiv 1\) and, for \(j \ge 1\),

$$\begin{aligned} \psi _j (x, \xi ) := \psi \left( \dfrac{x}{R(j)}, \dfrac{\xi }{R(j)} \right) , \end{aligned}$$

where \(R(j) = R j^{s_0 + \mu - 1}\), for a large constant \(R > 0\).

Setting \(b(x,\xi ) = \sum _{j \ge 0} \psi _{j}(x,\xi )c_{j}(x,\xi )\) we have \(b \in {\textbf {{SG}}}^{m_1, \infty }_{\mu , s_0}({\mathbb {R}}^{2})\) and

$$\begin{aligned} b(x,\xi ) \sim \sum _{j \ge 0} c_j(x,\xi ) \,\, \text {in} \,\, \mathrm{FSG}^{m_1, \infty }_{\mu , s_0}({\mathbb {R}}^{2}). \end{aligned}$$

By arguments similar to the ones used in Sect. 4.2 we can prove that

$$\begin{aligned} \sum _{j \ge k} \psi _{j}(x,\xi ) c_{j}(x,\xi ) \in {\textbf {{SG}}}^{m_1-k, m_2 - \sigma k}_{\mu ,s_0}({\mathbb {R}}^{2}), \quad k \in {\mathbb {N}}_{0}. \end{aligned}$$

Hence

$$\begin{aligned} b(x,\xi ) - \sum _{j < k} c_j(x,\xi ) \in {\textbf {{SG}}}^{m_1-k, m_2 - \sigma k}_{\mu , s_0}, \quad k \in {\mathbb {N}}. \end{aligned}$$

Since \(1/(1-\sigma ) > \mu + s_0 -1\) we can conclude that \(b-a_3 \in {\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})\) and we obtain

$$\begin{aligned} e^{{\tilde{\Lambda }}} \circ a \circ ^{R}e^{-{\tilde{\Lambda }}} = b(x,D) + {\tilde{r}}(x,D), \end{aligned}$$

where \({\tilde{r}} \in {\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})\). This concludes the proof. \(\square \)

5.1 Conjugation by \(e^{{\tilde{\Lambda }}}\)

We start by noting that \(e^{{\tilde{\Lambda }}} \partial _t \{e^{{\tilde{\Lambda }}}\}^{-1} = \partial _t\), since \({\tilde{\Lambda }}\) does not depend on t.

  • Conjugation of \(ia_3(t,D)\).

    Since \(a_3\) does not depend on x, applying Lemma 5, we have

    $$\begin{aligned} e^{{\tilde{\Lambda }}}(x,D) (ia_3)(t,D) ^{R}(e^{-{\tilde{\Lambda }}})= ia_3(t,D) + s(t,x,D) + r_3(t,x,D), \end{aligned}$$

    with

    $$\begin{aligned} s \sim \sum _{j \ge 1} \frac{1}{j!} \partial ^{j}_{\xi } \{e^{{\tilde{\Lambda }}} ia_3 D^{j}_{x}e^{-{\tilde{\Lambda }}}\} \in \mathrm{FSG}^{2, -\sigma }_{\mu , s_0}({\mathbb {R}}^{2}),\quad r_3 \in C([0,T],{\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2})). \end{aligned}$$

    Hence, using (4.3) we can write s(t, x, D) more explicitly and we obtain

    $$\begin{aligned}&e^{{\tilde{\Lambda }}} (ia_3)\{e^{{\tilde{\Lambda }}}\}^{-1}\\&\quad = \text {op}\left( ia_3 - \partial _{\xi }(a_3 \partial _{x}{\tilde{\Lambda }}) + \frac{i}{2}\partial ^{2}_{\xi }[a_3(\partial ^{2}_{x}{\tilde{\Lambda }} - (\partial _{x}{\tilde{\Lambda }})^2)] + a^{(0)}_{3} + r_3\right) \\&\quad \circ \left[ I +\text {op}\left( - i\partial _{\xi } \partial _{x} {\tilde{\Lambda }} - \frac{1}{2}\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} - [\partial _x {\tilde{\Lambda }}]^{2}) - [\partial _{\xi } \partial _{x} {\tilde{\Lambda }} ]^{2} + q_{-3} \right) \right] \\&\quad = ia_3 - \text {op}\left( \partial _{\xi }(a_3 \partial _{x}{\tilde{\Lambda }}) + \frac{i}{2}\partial ^{2}_{\xi }\{a_3(\partial ^{2}_{x}{\tilde{\Lambda }} - \{\partial _{x}{\tilde{\Lambda }}\}^2)\} - a_3\partial _{\xi }\partial _{x}{\tilde{\Lambda }} + i\partial _{\xi }a_3\partial _{\xi }\partial ^{2}_{x}{\tilde{\Lambda }} \right) \\&\qquad + \text {op}\left( i\partial _{\xi }(a_3 \partial _{x}{\tilde{\Lambda }})\partial _{\xi }\partial _{x}{\tilde{\Lambda }} - \frac{i}{2}a_3\{\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} + [\partial _x {\tilde{\Lambda }}]^{2}) + 2[\partial _{\xi } \partial _{x} {\tilde{\Lambda }} ]^{2}+ r_0 + {\bar{r}}\right) \\&\quad = ia_3 - \text {op}\left( \partial _{\xi }a_3 \partial _{x}{\tilde{\Lambda }} + \frac{i}{2}\partial ^{2}_{\xi }\{a_3[\partial ^{2}_{x}{\tilde{\Lambda }} - (\partial _{x}{\tilde{\Lambda }})^2]\} + i\partial _{\xi }a_3\partial _{\xi }\partial ^{2}_{x}{\tilde{\Lambda }}\right) \\&\qquad +\text {op}\left( i\partial _{\xi }(a_3 \partial _{x}{\tilde{\Lambda }})\partial _{\xi }\partial _{x}{\tilde{\Lambda }} - \frac{i}{2}a_3\{\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} + [\partial _x {\tilde{\Lambda }}]^{2}) + 2(\partial _{\xi } \partial _{x} {\tilde{\Lambda }} )^{2}\}+ r_0+ {\bar{r}}\right) \\&\quad = ia_3 - \text {op}\left( \partial _{\xi }a_3 \partial _{x}\lambda _2 + \partial _{\xi }a_3\partial _{x}\lambda _1 + \frac{i}{2}\partial ^{2}_{\xi }\{a_3(\partial ^{2}_{x}\lambda _2 - \{\partial _{x}\lambda _2\}^2)\} + i\partial _{\xi }a_3\partial _{\xi }\partial ^{2}_{x}\lambda _2 \right) \\&\qquad +\text {op}\left( i\partial _{\xi }(a_3 \partial _{x}\lambda _2)\partial _{\xi }\partial _{x}\lambda _2 - \frac{i}{2}a_3\{\partial ^{2}_{\xi }(\partial ^{2}_{x}\lambda _2 + [\partial _x \lambda _2]^{2}) + 2[\partial _{\xi } \partial _{x} \lambda _2 ]^{2}\} + r_0+ {\bar{r}}\right) , \end{aligned}$$

    for some symbols \(a^{(0)}_{3} \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\mu , s_0}({\mathbb {R}}^2))\), \(r_0 \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\delta }({\mathbb {R}}^{2}))\) and, since we may assume \(\delta < \mu +s_0-1\), \({\bar{r}} \in C([0,T];{\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^2))\).

  • Conjugation of \(ia_2(t,x,D)\).

    By Lemma 5 with \(M=2\) and using (4.3) we get

    $$\begin{aligned}&e^{{\tilde{\Lambda }}}(x,D) (ia_2)(t,x,D)\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1}\\&\quad = \text {op}\left( ia_2 - \partial _{\xi }\{a_2 \partial _{x}{\tilde{\Lambda }}\} + \partial _{\xi }{\tilde{\Lambda }}\partial _{x}a_2 + a^{(0)}_{2} + r_2 \right) \\&\quad \circ \left[ I -\text {op}\left( i\partial _{\xi } \partial _{x} {\tilde{\Lambda }} + \frac{1}{2}\partial ^{2}_{\xi }(\partial ^{2}_{x}{\tilde{\Lambda }} - [\partial _x {\tilde{\Lambda }}]^{2}) + [\partial _{\xi } \partial _{x} {\tilde{\Lambda }} ]^{2} + q_{-3}\right) \right] \\&\quad = ia_2(t,x,D) + \text {op}( - \partial _{\xi }\{a_2 \partial _{x}{\tilde{\Lambda }}\} + \partial _{\xi }{\tilde{\Lambda }}\partial _{x}a_2 + a_2\partial _{\xi }\partial _{x} {\tilde{\Lambda }} + r_0+ {\bar{r}})\\&\quad = ia_2(t,x,D) + \text {op}(- \partial _{\xi }a_2 \partial _{x}{\tilde{\Lambda }} + \partial _{\xi }{\tilde{\Lambda }}\partial _{x}a_2 + r_0 + {\bar{r}} ) \\&\quad = ia_2(t,x,D) + \text {op}\left( - \partial _{\xi }a_2 \partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 + r_0 +{\bar{r}} \right) , \end{aligned}$$

    for some symbols \(a^{(0)}_{2} \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\mu , s_0})\), \(r_0 \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\delta }({\mathbb {R}}^{2}))\) and \({\bar{r}} \in C([0,T];\Sigma _{\mu +s_0-1} ({\mathbb {R}}^{2}))\).

  • Conjugation of \(ia_1(t,x,D)\):

    $$\begin{aligned} e^{{\tilde{\Lambda }}}(x,D) ia_1(t,x,D) \{e^{{\tilde{\Lambda }}}(x,D)\}^{-1}&= \text {op}(ia_1 + a^{(0)}_{1} + r_1)\circ \sum _{j\ge 0}(-r(t, x,D))^{j} \\&=\text {op}( ia_1+ {\tilde{r}}_{0} + {\tilde{r}}), \end{aligned}$$

    where \(a^{(0)}_{1} \in C([0,T]; {\textbf {{SG}}}^{0,1-2\sigma }_{\mu , s_0}({\mathbb {R}}^2))\), \(\tilde{r_0} \in C([0,T]; {\textbf {{SG}}}^{0,1-2\sigma }_{\delta }({\mathbb {R}}^{2}))\), \({\tilde{r}} \in C([0,T];\Sigma _{\mu +s_0-1} ({\mathbb {R}}^{2}))\).

  • Conjugation of \(ia_0(t,x,D)\):

    $$\begin{aligned} e^{{\tilde{\Lambda }}}(x,D) ia_0(t,x,D)\{e^{{\tilde{\Lambda }}}(x,D)\}^{-1}&= \text {op}(ia_0 + a^{(0)}_{0} + r_0) \sum _{j\ge 0}(-r(t,x,D))^{j} \\&= \text {op}( ia_0+ \tilde{{\tilde{r}}}_{0} + {\tilde{r}}_1), \end{aligned}$$

    where \(a^{(0)}_{0} \in C([0,T]; {\textbf {{SG}}}^{0,1-2\sigma }_{\mu , s_0})\), \(\tilde{{\tilde{r}}}_0 \in C([0,T]; {\textbf {{SG}}}^{-1,1-2\sigma }_{\delta }({\mathbb {R}}^{2}))\) and \({\tilde{r}}_1 \in C([0,T];\Sigma _{\mu +s_0-1} ({\mathbb {R}}^{2}))\).

Summing up we obtain

$$\begin{aligned}&e^{{\tilde{\Lambda }}}(x,D) (iP(t,x,D_t,D_x)) \{e^{{\tilde{\Lambda }}}(x,D)\}^{-1} = \partial _{t} + ia_3(t,D) + ia_2(t,x,D) \nonumber \\&\quad - \text {op}(\partial _{\xi }a_3\partial _{x}\lambda _2) + ia_1(t,x,D) -\text {op}( \partial _{\xi }a_3\partial _{x}\lambda _1 + \partial _{\xi }a_2\partial _{x}\lambda _2 - \partial _{\xi }\lambda _2\partial _{x}a_2 +ib_1)\nonumber \\&\quad + ia_0(t,x,D) + r_{\sigma }(t,x,D) + r_{0}(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$
(5.1)

where

$$\begin{aligned} b_1 \in C([0,T]; {\textbf {{SG}}}^{1,-2\sigma }_{\mu , s_0}({\mathbb {R}}^{2})),\ b_1(t,x,\xi )\in {\mathbb {R}},\ b_1\ \mathrm{does\ not\ depend\ on}\ \lambda _1, \end{aligned}$$
(5.2)

and

$$\begin{aligned}&r_{0} \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\delta }({\mathbb {R}}^{2})), \; r_{\sigma } \in C([0,T]; {\textbf {{SG}}}^{0, 1-2\sigma }_{\delta }({\mathbb {R}}^{2})),\\&{\bar{r}} \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2})). \end{aligned}$$

5.2 Conjugation by \(e^{k(t)\langle x \rangle ^{1-\sigma }_{h}}\)

Let us recall that we are assuming that \(k \in C^{1}([0,T]; {\mathbb {R}})\), \(k'(t)\le 0\) and \(k(t) \ge 0\) for every \(t\in [0,T]\). Following the same ideas as in the proof of Lemma 5, one can prove the following result.

Lemma 6

Let \(a \in C([0,T],{\textbf {{SG}}}^{m_1, m_2}_{\mu , s_0}({\mathbb {R}}^2))\), where \(1< \mu < s_0\) and \(\mu + s_0 - 1 < \frac{1}{1-\sigma }\). Then

$$\begin{aligned} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \, a(t,x,D) \, e^{-k(t)\langle x \rangle ^{1-\sigma }_{h} } = a(t,x,D) + b(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

for some symbols \(b \sim \sum _{j \ge 1} \frac{1}{j!} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}}\, \partial ^{j}_{\xi } a\, D^{j}_{x} e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}}\) in \({\textbf {{SG}}}^{m_1 - 1, m_2 -\sigma }_{\mu , s_0}({\mathbb {R}}^{2})\) and \({\bar{r}} \in C([0,T],{\mathcal {S}}_{\mu +s_0-1}({\mathbb {R}}^{2}))\).
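For instance, for \(a = ia_3(t,\xi )\) the \(j = 1\) term of the expansion of b is

$$\begin{aligned} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}}\, \partial _{\xi }(ia_3)\, D_{x} e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} = - k(t)\, \partial _{\xi }a_3\, \partial _{x}\langle x \rangle ^{1-\sigma }_{h}, \end{aligned}$$

which is precisely the second term appearing in the conjugation of \(ia_3(t,D)\) below.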

Let us perform the conjugation by \(e^{k(t)\langle x \rangle ^{1-\sigma }_{h}}\) of the operator \(e^{{\tilde{\Lambda }}} (iP) \{e^{{\tilde{\Lambda }}}\}^{-1}\) in (5.1) with the aid of Lemma 6.

  • Conjugation of \( \partial _t\): \(e^{k(t)\langle x \rangle ^{1-\sigma }_{h} } \, \partial _{t} \, e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} = \partial _{t} - k'(t)\langle x \rangle ^{1-\sigma }_{h}\).

  • Conjugation of \(ia_3(t,D)\):

    $$\begin{aligned}&e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \, ia_3(t,D) e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}}\\&\quad = ia_3 +\text {op}(- k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h}) \\&\qquad + \text {op}\left( \frac{i}{2}\partial ^{2}_{\xi }a_3 \{k(t)\partial ^2_{x}\langle x \rangle ^{1-\sigma }_{h} - k^{2}(t) [\partial _{x}\langle x \rangle ^{1-\sigma }_{h}]^2 \}\right) + a^{(0)}_{3} + r_{3}, \end{aligned}$$

    where \(a^{(0)}_{3} \in C([0,T]; {\textbf {{SG}}}^{0, -3\sigma }_{\mu , s_0}({\mathbb {R}}^{2}))\) and \(r_{3} \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2}))\).

  • Conjugation of \(\text {op}(ia_2 -\partial _{\xi }a_3 \partial _{x}\lambda _2)\):

    $$\begin{aligned}&e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \text {op}(ia_2 -\partial _{\xi }a_3\partial _{x}\lambda _2) \,e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}}\\&\quad = ia_2 +\text {op}(-\partial _{\xi }a_3\partial _{x}\lambda _2 ) +\text {op}(- k(t)\partial _{\xi }a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} \\&\qquad - ik(t)\partial _{\xi }\{\partial _{\xi }a_3\partial _{x}\lambda _2\} \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + a^{(0)}_{2}+ r_2), \end{aligned}$$

    where \(a^{(0)}_{2} \in C([0,T]; {\textbf {{SG}}}^{0, -2\sigma }_{\mu , s_0}({\mathbb {R}}^{2}))\) and \(r_2 \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2}))\).

  • Conjugation of \(i(a_1+a_0)(t,x,D)\):

    $$\begin{aligned} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} i(a_1+a_0)(t,x,D)e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} = \text {op}(ia_1 + ia_0 + a_{1,0}+ r_1), \end{aligned}$$

    where \(r_1 \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2}))\) and

    $$\begin{aligned} a_{1, 0} \sim \sum _{j \ge 1} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \frac{1}{j!} \partial ^{j}_{\xi }i(a_1+a_0) D^{j}_{x} e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} \,\, \text {in} \,\, {\textbf {{SG}}}^{0, 1-2\sigma }_{\mu , s_0}({\mathbb {R}}^{2}). \end{aligned}$$

    It is not difficult to verify that the following estimate holds

    $$\begin{aligned} | a_{1,0} (t,x,\xi )| \le \max \{1, k(t)\} C_{T} \langle x \rangle ^{1 - 2\sigma }_{h}, \end{aligned}$$
    (5.3)

    where \(C_{T}\) depends on \(a_1\) and does not depend on k(t).

  • Conjugation of \(\text {op}(- \partial _{\xi }a_3\partial _{x}\lambda _1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 + ib_1)\): taking into account (i) of Lemma 3

    $$\begin{aligned} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}}&\text {op}(- \partial _{\xi }a_3\partial _{x}\lambda _1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 + ib_1) e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} \\&= \text {op}( -\partial _{\xi }a_3\partial _{x}\lambda _1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 + ib_1 + r_0 + {\bar{r}}), \end{aligned}$$

    where \(r_0 \in C([0,T]; {\textbf {{SG}}}^{0, 0}_{\mu , s_0}({\mathbb {R}}^{2}))\) and \({\bar{r}} \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2}))\).

  • Conjugation of \(r_{\sigma }(t,x,D)\):

    $$\begin{aligned} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} \,r_{\sigma }(t,x,D) \, e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} = r_{\sigma ,1}(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

    where \({\bar{r}} \in C([0,T]; \Sigma _{\mu +s_0-1}({\mathbb {R}}^{2}))\), \(r_{\sigma ,1}\in C([0,T]; {\textbf {{SG}}}^{0, 1-2\sigma }_{\delta }({\mathbb {R}}^{2}))\) and the estimate

    $$\begin{aligned} |r_{\sigma ,1}(t,x,\xi )| \le C_{T,{\tilde{\Lambda }}} \langle x \rangle ^{1-2\sigma }_{h} \end{aligned}$$
    (5.4)

    holds with \(C_{T,{\tilde{\Lambda }}}\) independent of k(t).

Gathering all the previous computations we may write

$$\begin{aligned}&e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{\Lambda } (iP) \{e^{\Lambda }\}^{-1}e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}} \nonumber \\&\quad = \partial _{t} + \text {op}(ia_3 -\partial _{\xi }a_3\partial _{x}\lambda _2 + ia_2- k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h}) \nonumber \\&\qquad + \text {op}(-\partial _{\xi }a_3\partial _{x}\lambda _1 + ia_1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 - k(t)\partial _{\xi }a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} ) \nonumber \\&\qquad +\text {op}( ib_1 + ic_1 +ia_0 - k'(t)\langle x \rangle ^{1-\sigma }_{h} + a_{1,0} + r_{\sigma ,1})\nonumber \\&\qquad + r_0(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$
(5.5)

where \(b_1\) satisfies (5.2),

$$\begin{aligned} c_1 \in C([0,T]; {\textbf {{SG}}}^{1, -2\sigma }_{\mu , s_0}({\mathbb {R}}^{2})),\ c_1(t,x,\xi )\in {\mathbb {R}},\ c_1\ \mathrm{does\ not\ depend\ on}\ \lambda _1, \end{aligned}$$
(5.6)

(but \(c_1\) depends on \(\lambda _2\) and \(k(t)\)), \(a_{1,0}\) as in (5.3), \(r_{\sigma ,1}\) as in (5.4), and for some new operators

$$\begin{aligned} r_0 \in C([0,T]; {\textbf {{SG}}}^{0,0}_{\delta }({\mathbb {R}}^{2})),\; {\bar{r}} \in C([0,T]; {\mathcal {S}}_{\mu + s_0 -1}({\mathbb {R}}^{2})). \end{aligned}$$

5.3 Conjugation by \(e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}\)

Since we are considering \(\theta > s_0\) and \(\mu > 1\) arbitrarily close to 1, we may assume that all the previous smoothing remainder terms have symbols in \(\Sigma _{\theta }({\mathbb {R}}^{2})\).

Lemma 7

Let \(a \in {\textbf {{SG}}}^{m_1, m_2}_{\mu , s_0}\), where \(1< \mu < s_0\) and \(\mu + s_0 - 1 < \theta \). Then

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \, a(x,D) \, e^{-\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } = a(x,D) + b(x,D) + r(x,D), \end{aligned}$$

for some symbols \(b \sim \sum _{j \ge 1} \frac{1}{j!} \partial ^{j}_{\xi }e^{\rho _1 \langle \xi \rangle ^{\frac{1}{\theta }}}\, D^{j}_{x}a \, e^{-\rho _1 \langle \xi \rangle ^{\frac{1}{\theta }}}\) in \({\textbf {{SG}}}^{m_1 - (1-\frac{1}{\theta }), m_2 -1}_{\mu , s_0}({\mathbb {R}}^{2})\) and \(r \in {\mathcal {S}}_{\mu +s_0 -1}({\mathbb {R}}^{2})\).
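For instance, the \(j = 1\) term in the expansion of b is

$$\begin{aligned} \partial _{\xi } e^{\rho _1 \langle \xi \rangle ^{\frac{1}{\theta }}}\, D_{x}a(x,\xi )\, e^{-\rho _1 \langle \xi \rangle ^{\frac{1}{\theta }}} = \frac{\rho _1}{\theta }\, \xi \, \langle \xi \rangle ^{\frac{1}{\theta }-2}\, D_{x}a(x,\xi ), \end{aligned}$$

which has order \((m_1 - (1-\frac{1}{\theta }), m_2 - 1)\) and carries a factor \(\rho _1\); this is the origin of the factor \(|\rho _1|\) in the estimates (5.7) and (5.8) below.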

Let us now conjugate by \(e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}\) the operator \(e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{\tilde{\Lambda }} (iP) \{e^{\tilde{\Lambda }}\}^{-1}e^{-k(t)\langle x \rangle ^{1-\sigma }_{h}}\) in (5.5). First of all we observe that, since \(a_3\) does not depend on x, we simply have

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \, ia_3(t,D) \, e^{-\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} = ia_3(t,D). \end{aligned}$$
  • Conjugation of \(\text {op}\left( -\partial _{\xi }a_3\partial _{x}\lambda _2 + ia_2 - k(t)\partial _{\xi }a_3\partial _x \langle x \rangle ^{1-\sigma }_{h}\right) \):

    $$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \,\text {op}&( -\partial _{\xi }a_3\partial _{x}\lambda _2 + ia_2 - k(t)\partial _{\xi }a_3\partial _x \langle x \rangle ^{1-\sigma }_{h} )\, e^{-\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \\&= \text {op}(-\partial _{\xi }a_3\partial _{x}\lambda _2 + ia_2 - k(t)\partial _{\xi }a_3\partial _x \langle x \rangle ^{1-\sigma }_{h}) + (a_{2,\rho _1} + {\bar{r}})(t,x,D), \end{aligned}$$

    where \(a_{2,\rho _1} \in C([0,T], {\textbf {{SG}}}^{1+\frac{1}{\theta }, -1}_{\mu , s_0}({\mathbb {R}}^{2}))\), \({\bar{r}} \in C([0,T],\Sigma _{\theta }({\mathbb {R}}^{2}))\) and the following estimate holds

    $$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}a_{2,\rho _1}(t,x,\xi )| \le \max \{1,k(t)\} |\rho _1| C_{\lambda _2, T}^{\alpha +\beta +1}\alpha !^{\mu }\beta !^{s_0} \langle \xi \rangle ^{1+\frac{1}{\theta } -\alpha } \langle x \rangle ^{-\sigma - \beta }. \end{aligned}$$
    (5.7)

    In particular

    $$\begin{aligned} |a_{2,\rho _1}(t,x,\xi )| \le \max \{1,k(t)\} |\rho _1| C_{\lambda _2, T} \langle \xi \rangle ^{1+\frac{1}{\theta }}_{h} \langle x \rangle ^{-\sigma }. \end{aligned}$$
  • Conjugation of \( \text {op}(- \partial _{\xi }a_3\partial _{x}\lambda _1 + ia_1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 - k(t)\partial _{\xi }a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + ib_1 + ic_1): \) the conjugation of this term is given by

    $$\begin{aligned} \text {op}(- \partial _{\xi }a_3\partial _{x}\lambda _1 +&ia_1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2 - k(t)\partial _{\xi }a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + ib_1 + ic_1) \\&+ a_{1, \rho _1}(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

    where \({\bar{r}} \in C([0,T],\Sigma _{\theta } ({\mathbb {R}}^{2}))\) and \(a_{1,\rho _1}\) satisfies the following estimate

    $$\begin{aligned} |\partial ^{\alpha }_{\xi } \partial ^{\beta }_{x}a_{1,\rho _1}(t,x,\xi )| \le \max \{1,k(t)\} |\rho _1| C_{{\tilde{\Lambda }}, T}^{\alpha +\beta +1}\alpha !^{\mu }\beta !^{s_0} \langle \xi \rangle ^{\frac{1}{\theta } -\alpha } \langle x \rangle ^{-\sigma - \beta }. \end{aligned}$$
    (5.8)

    In particular

    $$\begin{aligned} |a_{1,\rho _1}(t,x,\xi )| \le \max \{1,k(t)\} |\rho _1| C_{{\tilde{\Lambda }}, T} \langle \xi \rangle ^{\frac{1}{\theta }}_{h} \langle x \rangle ^{-\sigma }. \end{aligned}$$
  • Conjugation of \(\text {op}(ia_0 - k'(t)\langle x \rangle ^{1-\sigma }_{h} + a_{1,0} + r_{\sigma ,1})\):

    $$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \text {op}(ia_0 -&k'(t)\langle x \rangle ^{1-\sigma }_{h} + a_{1,0} + r_{\sigma ,1}) e^{-\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} \\&= \text {op}( ia_0 - k'(t)\langle x \rangle ^{1-\sigma }_{h} + a_{1,0} + r_{\sigma ,1}) + r_0(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

    where \(r_0 \in C([0,T]; {\textbf {{SG}}}^{(0,0)}_{\delta }({\mathbb {R}}^{2}))\) and \({\bar{r}} \in C([0,T]; \Sigma _{\theta }({\mathbb {R}}^{2}))\).

Summing up, from the previous computations we obtain

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}&e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}}(iP) \{ e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}} \}^{-1} = \partial _{t} + ia_3(t,D)\\&+ \text {op}\left( -\partial _{\xi }a_3\partial _{x}\lambda _2 + ia_2 - k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} +a_{2,\rho _1} \right) \\&+ \text {op}\left( - \partial _{\xi }a_3\partial _{x}\lambda _1+ ia_1 - \partial _{\xi }a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}a_2\right. \\&\left. - k(t)\partial _{\xi }a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + ib_1 + ic_1 + a_{1,\rho _1}\right) \\&+\text {op}( ia_0- k'(t)\langle x \rangle ^{1-\sigma }_{h} + a_{1,0} + r_{\sigma ,1}) + r_0(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

with \(a_{2,\rho _1}\) as in (5.7), \(b_1\) as in (5.2), \(c_1\) as in (5.6), \(a_{1,\rho _1}\) as in (5.8), \(a_{1,0}\) as in (5.3), \(r_{\sigma ,1}\) as in (5.4), and for some operators

$$\begin{aligned} r_0 \in C([0,T]; {\textbf {{SG}}}^{(0,0)}_{\delta }({\mathbb {R}}^{2})), \quad {\bar{r}} \in C([0,T]; \Sigma _{\theta }({\mathbb {R}}^{2})). \end{aligned}$$
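Let us also note that, since \(\frac{1}{\theta }<1\), the remainders \(a_{2,\rho _1}\) and \(a_{1,\rho _1}\) produced by the conjugation with \(e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}}\) have \(\xi \)-order \(1+\frac{1}{\theta }<2\) and \(\frac{1}{\theta }<1\), respectively, by (5.7) and (5.8); in the next section they will be absorbed, for h large, by means of the elementary inequalities

$$\begin{aligned} \langle \xi \rangle ^{1+\frac{1}{\theta }}_{h} = \langle \xi \rangle ^{-(1-\frac{1}{\theta })}_{h}\langle \xi \rangle ^{2}_{h} \le h^{-(1-\frac{1}{\theta })}\langle \xi \rangle ^{2}_{h}, \qquad \langle \xi \rangle ^{\frac{1}{\theta }}_{h} \le h^{-(1-\frac{1}{\theta })}\langle \xi \rangle _{h}, \end{aligned}$$

which follow from \(\langle \xi \rangle _{h} \ge h\).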

6 Estimates from below

In this section we choose \(M_2, M_1\) and k(t) so that the Fefferman–Phong and sharp Gårding inequalities can be applied and yield the desired energy estimate for the conjugated problem. By the computations of the previous section we have

$$\begin{aligned}&e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}}(x,D) (iP) \{ e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}}(x,D) \}^{-1}\\&\quad = \partial _{t} + ia_3(t,D) + \sum _{j=0}^2 {\tilde{a}}_j(t,x,D)+ r_0(t,x,D) + {\bar{r}}(t,x,D), \end{aligned}$$

where \({\tilde{a}}_2, {\tilde{a}}_1\) denote the parts of \(\xi \)-order 2 and 1, respectively, and \({\tilde{a}}_0\) denotes the part of \(\xi \)-order 0, which however has a positive order (at most \(1-\sigma \)) with respect to x. The real parts are given by

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_2= & {} -\partial _{\xi }a_3\partial _{x}\lambda _2 - \mathrm{Im}\, a_2 - k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\, a_{2,\rho _1}, \\ \mathrm{Im}\, {\tilde{a}}_2= & {} \mathrm{Re}\, a_2 + \mathrm{Im}\, a_{2,\rho _1}, \\ \mathrm{Re}\, {\tilde{a}}_1= & {} - \partial _{\xi }a_3\partial _{x}\lambda _1 - \mathrm{Im}\,a_1 - \partial _{\xi }\mathrm{Re}\,a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}\mathrm{Re}\,a_2 \\&- k(t)\partial _{\xi }\mathrm{Re}\,a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\,a_{1,\rho _1}, \\ \mathrm{Re}\, {\tilde{a}}_0= & {} -\mathrm{Im} a_0- k'(t)\langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\,{a}_{1,0} + \mathrm{Re}\,{r}_{\sigma ,1}. \end{aligned}$$

Since the Fefferman–Phong inequality applies only to real-valued symbols, we need to decompose \(i\, \mathrm{Im}\, {\tilde{a}}_2\) into its Hermitian and anti-Hermitian parts:

$$\begin{aligned} i \mathrm{Im} {\tilde{a}}_2=\displaystyle \frac{i \mathrm{Im} {\tilde{a}}_2+(i \mathrm{Im} {\tilde{a}}_2)^*}{2}+\frac{i \mathrm{Im} {\tilde{a}}_2-(i \mathrm{Im} {\tilde{a}}_2)^*}{2}=t_1+t_2, \end{aligned}$$

where \(2\mathrm{Re}\langle t_2(t,x,D)u,u\rangle =0\) and \(t_1(t,x,\xi )=-\displaystyle \sum \nolimits _{\alpha \ge 1}\frac{i}{2\alpha !}\partial _\xi ^\alpha D_x^\alpha \mathrm{Im} {\tilde{a}}_2(t,x,\xi )\). Observe that, using (5.7),

$$\begin{aligned} |t_1(t,x,\xi )|&\le C_{a_2} \langle \xi \rangle \langle x \rangle ^{-1} + \max \{1, k(t)\}|\rho _1|C_{\lambda _2} \langle \xi \rangle ^{\frac{1}{\theta }} \langle x \rangle ^{-\sigma - 1} \nonumber \\&\le \{ C_{a_2} + h^{-(1-\frac{1}{\theta })}\max \{1, k(0)\}|\rho _1|C_{\lambda _2} \} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}}. \end{aligned}$$
(6.1)
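For completeness, we recall why the anti-Hermitian part \(t_2\) gives no contribution to the energy: since \(t_2(t,x,D)^{*} = -t_2(t,x,D)\) by construction, for every \(u \in {\mathscr {S}}({\mathbb {R}})\) we have

$$\begin{aligned} 2\, \mathrm{Re}\, \langle t_2(t,x,D)u,u\rangle = \langle t_2(t,x,D)u,u\rangle + \langle t_2(t,x,D)^{*}u,u\rangle = \langle (t_2(t,x,D)+t_2(t,x,D)^{*})u,u\rangle = 0. \end{aligned}$$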

In this way we may write

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }}} &e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}}(x,D) (iP) \{ e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{k(t)\langle x \rangle ^{1-\sigma }_{h}} e^{{\tilde{\Lambda }}}(x,D) \}^{-1} \nonumber \\= & {} \partial _{t} + ia_3(t,D) + (\mathrm{Re}\, {\tilde{a}}_2 + t_2 + t_1 + {\tilde{a}}_{1} + {\tilde{a}}_0)(t,x,D) + {\tilde{r}}_0(t,x,D),\nonumber \\ \end{aligned}$$
(6.2)

where \({\tilde{r}}_0\) has symbol of order (0, 0).

Now we are ready to choose \(M_2, M_1\) and k(t). The function k(t) will be of the form \(k(t)=K(T-t)\), \(t \in [0,T]\), for a positive constant K to be chosen. In the following computations we shall consider \(|\xi | > hR_{a_3}\). Observe that \(2|\xi |^{2} \ge \langle \xi \rangle ^{2}_{h}\) whenever \(|\xi | \ge h > 1\). For \(\mathrm{Re}\, {\tilde{a}}_2\) we have:

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_2&= M_2 |\partial _{\xi } a_3| \langle x \rangle ^{-\sigma } - \mathrm{Im}\, a_2 - k(t)\partial _{\xi }a_3 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\, a_{2,\rho _1} \\&\ge M_2 C_{a_3} |\xi |^{2} \langle x \rangle ^{-\sigma } - C_{a_2} \langle \xi \rangle ^{2}_h \langle x \rangle ^{-\sigma } \\&\quad - {\tilde{C}}_{a_3}k(0)(1-\sigma )\langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma }_{h} - \max \{1,k(0)\} C_{\lambda _2, \rho _1} \langle \xi \rangle ^{1+\frac{1}{\theta }}_{h} \langle x \rangle ^{-\sigma } \\&\ge (M_2 \frac{C_{a_3}}{2} - C_{a_2} -{\tilde{C}}_{a_3} k(0)(1-\sigma ) - \max \{1,k(0)\}C_{\lambda _2, \rho _1} \langle \xi \rangle ^{-(1-\frac{1}{\theta })}_{h} ) \langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma } \\&\ge (M_2 \frac{C_{a_3}}{2} - C_{a_2} - {\tilde{C}}_{a_3}k(0)(1-\sigma ) - \max \{1,k(0)\} C_{\lambda _2, \rho _1} h^{-(1-\frac{1}{\theta })} ) \langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma } \\&=(M_2 \frac{C_{a_3}}{2} - C_{a_2} - {\tilde{C}}_{a_3}KT(1-\sigma ) - \max \{1,KT\} C_{\lambda _2, \rho _1} h^{-(1-\frac{1}{\theta })} ) \langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma } \end{aligned}$$

For \(\mathrm{Re}\,{\tilde{a}}_1\), we have:

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_1&= M_1 |\partial _{\xi }a_3| \langle \xi \rangle ^{-1}_{h} \langle x \rangle ^{-\frac{\sigma }{2}} \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) - \mathrm{Im}\,a_1 - \partial _{\xi }\mathrm{Re}\,a_2\partial _{x}\lambda _2 + \partial _{\xi }\lambda _2\partial _{x}\mathrm{Re}\,a_2 \\&\quad - k(t)\partial _{\xi }\mathrm{Re}\,a_2 \partial _{x} \langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\,a_{1,\rho _1} \\&\ge M_1 C_{a_3} |\xi |^{2} \langle \xi \rangle ^{-1}_{h} \langle x \rangle ^{-\frac{\sigma }{2}} \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) - C_{a_1}\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} - {\tilde{C}}_{a_2, \lambda _2} \langle \xi \rangle _{h} \langle x \rangle ^{-\sigma } \\&\quad -Ck(0) (1-\sigma ) \langle \xi \rangle _{h} \langle x \rangle ^{-\sigma }_{h} - \max \{1,k(0)\}C_{{\tilde{\Lambda }}, \rho _1} \langle \xi \rangle ^{\frac{1}{\theta }}_{h} \langle x \rangle ^{-\sigma }_{h} \\&\ge M_1 \frac{C_{a_3}}{2} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) - C_{a_1}\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} - {\tilde{C}}_{a_2, \lambda _2}\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&\quad -Ck(0) (1-\sigma ) \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}}_{h} \langle x \rangle ^{-\frac{\sigma }{2}} - \max \{1,k(0)\}C_{{\tilde{\Lambda }}, \rho _1} \langle \xi \rangle ^{- (1-\frac{1}{\theta })}_{h} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&= (M_1 \frac{C_{a_3}}{2} - C_{a_1} - {\tilde{C}}_{a_2, \lambda _2} - C KT(1-\sigma )\langle x \rangle ^{-\frac{\sigma }{2} }_{h}) \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&\quad - \max \{1,KT\}C_{{\tilde{\Lambda }}, \rho _1} \langle \xi \rangle ^{-(1-\frac{1}{\theta })}_{h} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&\qquad - M_1 \frac{C_{a_3}}{2} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \left( 1 - \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) \right) . \end{aligned}$$

Since \(\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \le \sqrt{2}\) on the support of \(1 - \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) \), we may conclude

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_1&\ge (M_1 \frac{C_{a_3}}{2} - C_{a_1} - {\tilde{C}}_{a_2, \lambda _2} - CKT(1-\sigma )h^{-\frac{\sigma }{2}})\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&\quad - \max \{1,KT\}C_{{\tilde{\Lambda }}, \rho _1} h^{- (1-\frac{1}{\theta })} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} - M_1 \frac{C_{a_3}}{2} \sqrt{2}. \end{aligned}$$
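The bound \(\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \le \sqrt{2}\) used above amounts to the fact that on the support of \(1 - \psi \left( \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \right) \) the argument of the cut-off function \(\psi \) (coming from the definition of \(\lambda _1\)) stays bounded from below by \(\frac{1}{2}\); indeed,

$$\begin{aligned} \frac{\langle x \rangle ^{\sigma }}{\langle \xi \rangle ^{2}_{h}} \ge \frac{1}{2} \quad \Longleftrightarrow \quad \langle \xi \rangle ^{2}_{h} \le 2 \langle x \rangle ^{\sigma } \quad \Longleftrightarrow \quad \langle \xi \rangle _{h}\langle x \rangle ^{-\frac{\sigma }{2}} \le \sqrt{2}, \end{aligned}$$

so that the last term in the previous estimate for \(\mathrm{Re}\, {\tilde{a}}_1\) is bounded by the constant \(M_1 \frac{C_{a_3}}{2}\sqrt{2}\).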

Taking (6.1) into account we obtain

$$\begin{aligned} \mathrm{Re}\, ({\tilde{a}}_1 + t_1)&\ge (M_1 \frac{C_{a_3}}{2} - C_{a_2} - C_{a_1} - {\tilde{C}}_{a_2, \lambda _2} - CKT(1-\sigma )h^{-\frac{\sigma }{2}})\langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} \\&\quad - \max \{1,KT\}(C_{{\tilde{\Lambda }}, \rho _1} +|\rho _1|C_{\lambda _2} )h^{- (1-\frac{1}{\theta })} \langle \xi \rangle _{h} \langle x \rangle ^{-\frac{\sigma }{2}} - M_1 \frac{C_{a_3}}{2} \sqrt{2}. \end{aligned}$$

For \(\mathrm{Re}\, {\tilde{a}}_0\), we have:

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_{0}&= - \mathrm{Im} a_0- k'(t)\langle x \rangle ^{1-\sigma }_{h} + \mathrm{Re}\,{a}_{1,0} + \mathrm{Re}\,{r}_{\sigma ,1} \\&= - \mathrm{Im} a_0+ ( -k'(t) - \max \{1,k(0)\}C_T \langle x \rangle ^{-\sigma }_{h} - C_{T, {\tilde{\Lambda }}} \langle x \rangle ^{-\sigma }_{h} )\langle x \rangle ^{1-\sigma }_{h} \\&\ge ( -C_{a_0}+K - \max \{1,KT\}C_Th^{-\sigma } - C_{T, {\tilde{\Lambda }}}h^{-\sigma } )\langle x \rangle ^{1-\sigma }_{h}. \end{aligned}$$
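In the last inequality we used \(k(t)=K(T-t)\), hence \(-k'(t)=K\), together with \(\langle x \rangle ^{-\sigma }_{h} \le h^{-\sigma }\) and an estimate of the form \(-\mathrm{Im}\, a_0 \ge -C_{a_0}\langle x \rangle ^{1-\sigma }_{h}\) (which holds in particular when \(\mathrm{Im}\, a_0\) is bounded by \(C_{a_0}\)), so that

$$\begin{aligned} -\mathrm{Im}\, a_0 - k'(t)\langle x \rangle ^{1-\sigma }_{h} \ge -C_{a_0}\langle x \rangle ^{1-\sigma }_{h} + K\langle x \rangle ^{1-\sigma }_{h} = (K - C_{a_0})\langle x \rangle ^{1-\sigma }_{h}. \end{aligned}$$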

Finally, let us proceed with the choices of \(M_1, M_2\) and K. First we choose K larger than \(\max \{C_{a_0}, 1/T\}\); then we set \(M_2\) such that \(M_2 \frac{C_{a_3}}{2} - C_{a_2} - {\tilde{C}}_{a_3}KT(1-\sigma ) > 0\); finally we take \(M_1\) such that \(M_1 \frac{C_{a_3}}{2} - C_{a_2} - C_{a_1} - {\tilde{C}}_{a_2, \lambda _2} > 0\) (the choice of \(M_2\), \(M_1\) determines \({\tilde{\Lambda }}\)). Enlarging the parameter h we may assume

$$\begin{aligned}&KTC_{\lambda _2, \rho _1} h^{-\left( 1-\frac{1}{\theta }\right) }< \frac{1}{4}(M_2 \frac{C_{a_3}}{2} - C_{a_2} - {\tilde{C}}_{a_3}KT(1-\sigma )), \\&CKT(1-\sigma )h^{-\frac{\sigma }{2}} + KT(C_{{\tilde{\Lambda }}, \rho _1} + |\rho _1|C_{\lambda _2} )h^{- (1-\frac{1}{\theta })}\\&\quad< \frac{1}{4}(M_1 \frac{C_{a_3}}{2} - C_{a_2} - C_{a_1} - {\tilde{C}}_{a_2, \lambda _2} ),\\&KTC_Th^{-\sigma } + C_{T, {\tilde{\Lambda }}}h^{-\sigma } <\frac{K-C_{a_0}}{4}. \end{aligned}$$
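For instance, for \(\mathrm{Re}\, {\tilde{a}}_2\): setting \(A:= M_2 \frac{C_{a_3}}{2} - C_{a_2} - {\tilde{C}}_{a_3}KT(1-\sigma ) >0\) and recalling that \(\max \{1,KT\}=KT\) because \(K > 1/T\), the last estimate for \(\mathrm{Re}\, {\tilde{a}}_2\) and the first of the conditions above give

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_2 \ge \left( A - KT C_{\lambda _2, \rho _1} h^{-(1-\frac{1}{\theta })}\right) \langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma } \ge \frac{3}{4}\, A\, \langle \xi \rangle ^{2}_{h} \langle x \rangle ^{-\sigma } \ge 0, \end{aligned}$$

and the other two lower bounds follow in the same way from the second and third conditions.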

With these choices we thus obtain \(\mathrm{Re}\, {\tilde{a}}_2 \ge 0\), \(\mathrm{Re}\, ({\tilde{a}}_1 + t_1) + M_1 \frac{C_{a_3}}{2} \sqrt{2} \ge 0\) and \(\mathrm{Re}\, {\tilde{a}}_0 \ge 0\). Let us also remark that the choices of \(M_2, M_1\) and k(t) do not depend on \(\rho _1\) and \(\theta \).

7 Proof of Theorem 1

Let us denote

$$\begin{aligned} {\tilde{P}}_{\Lambda }:= & {} e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{\Lambda }(t,x,D) \,P\, \{ e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{\Lambda }(t,x,D) \}^{-1}. \end{aligned}$$

By (6.2), with the choices of \(M_2,M_1,k(t)\) in the previous section, we get

$$\begin{aligned} i{{\tilde{P}}}_{\Lambda }= & {} \partial _{t} + ia_3 (t,D) + (\mathrm{Re}\, {\tilde{a}}_2 + t_2)(t,x,D)\\&+( {\tilde{a}}_{1} + t_1)(t,x,D)+ {\tilde{a}}_{0}(t,x,D) + {\tilde{r}}_{0}(t,x,D), \end{aligned}$$

with

$$\begin{aligned} \mathrm{Re}\, {\tilde{a}}_2 \ge 0,\; \mathrm{Re}\, ({\tilde{a}}_1 +t_1) + M_1 \frac{C_{a_3}}{2} \sqrt{2} \ge 0,\; \mathrm{Re}\, {\tilde{a}}_0 \ge 0. \end{aligned}$$
(7.1)

The Fefferman–Phong inequality applied to \(\mathrm{Re}\, {\tilde{a}}_2\) and the sharp Gårding inequality applied to \({\tilde{a}}_1 + t_1+M_1 \frac{C_{a_3}}{2} \sqrt{2}\) and to \({\tilde{a}}_0\) give

$$\begin{aligned} \frac{d}{dt}\Vert v(t) \Vert ^{2}_{L^2} \le c \left( \Vert v(t) \Vert ^{2}_{L^2} + \Vert i{{\tilde{P}}}_{\Lambda } v(t) \Vert ^{2}_{L^2} \right) , \quad t \in [0,T], \end{aligned}$$

for a positive constant c. Now, applying Gronwall's inequality, we come to the following energy estimate:

$$\begin{aligned} \Vert v(t) \Vert ^{2}_{L^2} \le C \left( \Vert v(0) \Vert ^{2}_{L^2} + \int _{0}^{t} \Vert i{{\tilde{P}}}_{\Lambda } v(\tau ) \Vert ^{2}_{L^2} d\tau \right) ,\quad t\in [0,T], \end{aligned}$$

for every \(v(t,x) \in C^{1}([0,T]; {\mathscr {S}}({\mathbb {R}}))\). By standard arguments, this estimate provides well-posedness of the Cauchy problem associated with \({{\tilde{P}}}_{\Lambda }\) in \(H^m({\mathbb {R}})\) for every \(m=(m_1,m_2)\in {\mathbb {R}}^2\): namely, for every \({{\tilde{f}}}\in C([0,T]; H^m({\mathbb {R}}))\) and \({{\tilde{g}}}\in H^m({\mathbb {R}})\), there exists a unique \(v \in C([0,T]; H^m({\mathbb {R}}))\) such that \({{\tilde{P}}}_\Lambda v = {{\tilde{f}}}\), \(v(0) = {{\tilde{g}}}\) and

$$\begin{aligned} \Vert v(t) \Vert ^{2}_{H^m} \le C\left( \Vert \tilde{g} \Vert ^{2}_{H^m} + \int _{0}^{t} \Vert {{\tilde{f}}}(\tau ) \Vert ^{2}_{H^m} d\tau \right) ,\quad t\in [0,T]. \end{aligned}$$
(7.2)
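For the reader's convenience, we briefly sketch the standard argument behind this step: since \(\Vert v(t)\Vert _{H^m}=\Vert \langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1} v(t)\Vert _{L^2}\), it suffices to apply the \(L^2\) energy estimate to the function w defined below, which solves the Cauchy problem for the conjugated operator:

$$\begin{aligned} w:=\langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1} v, \qquad \langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1}\, {{\tilde{P}}}_{\Lambda }\, \{\langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1}\}^{-1} w = \langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1} {{\tilde{f}}}, \qquad w(0)= \langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1} {{\tilde{g}}}, \end{aligned}$$

observing that the conjugated operator \(\langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1}\, {{\tilde{P}}}_{\Lambda }\, \{\langle x \rangle ^{m_2}\langle D_{x} \rangle ^{m_1}\}^{-1}\) can be estimated by the same arguments used for \({{\tilde{P}}}_{\Lambda }\).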

Let us now turn back to our original Cauchy problem (1.10), (1.1). Fixing data \(f \in C([0,T], H^m_{\rho ;s,\theta }({\mathbb {R}}))\) and \(g \in H^m_{\rho ;s,\theta }({\mathbb {R}})\) for some \(m, \rho \in {\mathbb {R}}^2\) with \(\rho _2 >0\) and positive \(s,\theta \) such that \(\theta > s_0\), we can define \(\Lambda \) as at the beginning of Sect. 6 with \(\mu >1\) such that \(s_0 > 2\mu -1\) and \(M_1,M_2, k(0)\) such that (7.1) holds. Then by Theorem 4 we get

$$\begin{aligned} f_{\rho _1,\Lambda }:= e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{\Lambda }(t,x,D)f \in C([0,T], H^m_{(0,\rho _2-{{\bar{\delta }}} ); s, \theta }({\mathbb {R}})) \end{aligned}$$

and

$$\begin{aligned} g_{\rho _1,\Lambda }:= e^{\rho _1 \langle D \rangle ^{\frac{1}{\theta }} } e^{\Lambda }(0,x,D)g \in H^m_{(0,\rho _2-{{\bar{\delta }}} ); s, \theta }({\mathbb {R}}) \end{aligned}$$

for every \({{\bar{\delta }}} >0\), because \(1/(1-\sigma )>s\). Since \(\bar{\delta }\) can be taken arbitrarily small, we have that \(f_{\rho _1,\Lambda } \in C([0,T], H^m)\) and \(g_{\rho _1,\Lambda } \in H^m\). Hence the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} {\tilde{P}}_{\Lambda } v = f_{\rho _1,\Lambda } \\ v(0) = g_{\rho _1,\Lambda } \end{array}\right. } \end{aligned}$$

admits a unique solution \(v \in C([0,T], H^m)\cap C^{1}([0,T], H^{(m_1-3, m_2-1+1/\sigma )})\) satisfying energy estimate (7.2). Taking now \(u= \{ e^{\Lambda }(t,x,D)\} ^{-1} e^{-\rho _1\langle D \rangle ^{1/\theta }}v\), we easily see that u solves the Cauchy problem (1.1), satisfies

$$\begin{aligned} e^{\rho _1\langle D\rangle ^{1/\theta }}e^{K(T-t)\langle x\rangle _h^{1-\sigma }}e^{{{\tilde{\Lambda }}}(x,D)}u\in H^m({\mathbb {R}}) \end{aligned}$$

and it is the unique solution with this property. Namely,   \( u \in C([0,T], H^m_{(\rho _1,-{\tilde{\delta }}); s, \theta }({\mathbb {R}})) \cap C^{1}([0,T], H^{(m_1-3,m_2)}_{(\rho _1 ,-{\tilde{\delta }}); s, \theta }({\mathbb {R}})) \) for every \({\tilde{\delta }}>0\). Moreover, from (7.2) we get

$$\begin{aligned}\Vert u(t)\Vert ^{2}_{H_{(\rho _1,-{\tilde{\delta }}); s, \theta }^m}&\le C \Vert v(t) \Vert ^{2}_{H^m} \le C\left( \Vert g_{\rho _1, \Lambda } \Vert ^{2}_{H^m} + \int _{0}^{t} \Vert f_{\rho _1, \Lambda } (\tau ) \Vert ^{2}_{H^m} d\tau \right) \\&\le C\left( \Vert g \Vert ^{2}_{H^m_{\rho ;s,\theta }} + \int _{0}^{t} \Vert f(\tau ) \Vert ^{2}_{H^{m}_{\rho ;s,\theta }} d\tau \right) , \end{aligned}$$

which gives (1.11). This concludes the proof. \(\square \)

Remark 6

Notice that the argument in the proof of Theorem 1, and in particular the energy estimate (1.11), implies that the solution of problem (1.1) is unique in the space of all functions u such that

$$\begin{aligned} e^{\rho _1\langle D\rangle ^{1/\theta }}e^{K(T-t)\langle x\rangle _h^{1-\sigma }}e^{{{\tilde{\Lambda }}}(x,D)}u\in H^m ({\mathbb {R}}). \end{aligned}$$

In general we cannot conclude that it is unique in \(C([0,T], H^m_{(\rho _1, -{\tilde{\delta }}); s, \theta }({\mathbb {R}}))\).

Remark 7

In our main result we assume that the symbol of the leading term \(a_3(t,D)\) is independent of x. This assumption is crucial in the proof. As a matter of fact, if \(a_3\) depended on x, even with a decay like \(\langle x \rangle ^{-m}\) for \(m\gg 0\), the conjugation of this term with the operator \(e^{\rho _1 \langle D \rangle ^{1/\theta }}\) would give

$$\begin{aligned} e^{\rho _1 \langle D \rangle ^{1/\theta }} (ia_3(t,x,D)) e^{-\rho _1 \langle D \rangle ^{1/\theta }} = ia_3(t,x,D_x) + \text {op }\left( \rho _1 \partial _\xi \langle \xi \rangle ^{\frac{1}{\theta }}\cdot \partial _x a_3 \right) + \text {l.o.t}. \end{aligned}$$

We observe that

$$\begin{aligned} \rho _1 \partial _\xi \langle \xi \rangle ^{\frac{1}{\theta }}\cdot \partial _x a_3 (t,x,\xi ) \sim \langle \xi \rangle ^{2+\frac{1}{\theta }} \langle x \rangle ^{-m-1}. \end{aligned}$$
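Indeed, counting the orders, \(\partial _\xi \langle \xi \rangle ^{\frac{1}{\theta }}\) has order \(\frac{1}{\theta }-1\) in \(\xi \), while \(\partial _x a_3\) has order 3 in \(\xi \) and, under the assumed decay, behaves like \(\langle x \rangle ^{-m-1}\) in x:

$$\begin{aligned} \partial _\xi \langle \xi \rangle ^{\frac{1}{\theta }} \sim \langle \xi \rangle ^{\frac{1}{\theta }-1}, \qquad \partial _x a_3(t,x,\xi ) \sim \langle \xi \rangle ^{3} \langle x \rangle ^{-m-1}. \end{aligned}$$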

Hence, since \(2+\frac{1}{\theta } >2\), this remainder term cannot be controlled by the lower-order terms, whose order in \(\xi \) does not exceed 2, no matter how fast \(a_3\) decays in x. Notice that this marks a difference with the \(H^\infty \) framework, where a dependence on x in the leading term can be allowed under a suitable decay assumption with respect to x, cf. [2, Sect. 4].

Remark 8

With a considerable additional technical effort one can consider 3-evolution equations in higher space dimension, that is for \(x \in {\mathbb {R}}^n\), \(n >1\). At the moment, results of this type exist only for the case \(p=2\), cf. [6, 10, 22]. When passing to higher dimension, the main difficulty lies in the choice of the functions \(\lambda _1, \lambda _2\) defining the change of variables, which must satisfy certain partial differential inequalities, see also [2, Sect. 4]; these may be non-trivial for \(p >2\). In this paper we prefer to restrict to the one space dimensional case, both because the content is already quite technical and because the main physical models to which our results could be applied fall within this case.