1 Introduction

In this paper we consider the following Cauchy problem for Klein–Gordon equations with time-dependent potential:

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{tt}-\Delta u + M(t) u= 0,\;\; (t,x)\in (0,\infty )\times {\mathbb {R}}^n,\\ u(0,x) = u_0(x),\;\; u_t(0,x) =u_1(x), \;\; x\in {\mathbb {R}}^n, \end{array}\right. } \end{aligned}$$
(1)

where \(M = M(t)\) is real-valued. A large amount of work has been devoted to (1). In particular, we take up the recent works [1, 2, 3, 6], which studied the asymptotic properties of the energy as t tends to infinity. For a positive continuous function \(p=p(t)\) we introduce the following energy for the solution of (1):

$$\begin{aligned} E(u;p)(t):=\frac{1}{2}\left( \Vert \nabla u(t,\cdot )\Vert _{L^2}^2 +\Vert u_t(t,\cdot )\Vert _{L^2}^2 +p(t)\Vert u(t,\cdot )\Vert _{L^2}^2 \right) . \end{aligned}$$
(2)

In [1, 2] the authors studied the following scale-invariant model for the coefficient in the potential

$$\begin{aligned} M(t) = \frac{\mu ^2}{(1+t)^2} \end{aligned}$$
(3)

with a positive number \(\mu \), and they showed the following energy estimate:

$$\begin{aligned} E(u;p)(t)\lesssim E(u;p)(0) \end{aligned}$$
(4)

with

$$\begin{aligned} p(t)={\left\{ \begin{array}{ll} (1+t)^{-1} &{}\quad {\hbox {for}}\, \mu >1/2,\\ (1+t)^{-1} (\log (e+t))^{-2} &{}\quad {\hbox {for}}\, \mu =1/2, \\ (1+t)^{-1-\sqrt{1-4\mu ^2}} &{} \quad {\hbox {for}}\, \mu <1/2, \end{array}\right. } \end{aligned}$$
(5)

where \(f\lesssim g\) for positive functions f and g denotes that there exists a positive constant C such that the estimate \(f\le C g\) is valid. Moreover, \(f \simeq g\) denotes that f and g satisfy both \(f \lesssim g\) and \(g \lesssim f\). Thus we observe that the influence of the potential on the energy is quite different depending on whether \(0<\mu <1/2\) or \(\mu >1/2\); a potential satisfying the former condition is called “non-effective.” Generally, the time-dependent potential M(t)u is “non-effective” if \(\limsup _{t\rightarrow \infty }(1+t)\int _t^\infty M(s)\,{\hbox {d}}s <1/4\). The notion of effective and non-effective coefficients originated in [13, 14] for the classification of time-dependent dissipations for dissipative wave equations

$$\begin{aligned} \tilde{u}_{tt}-\Delta \tilde{u} +2b(t) \tilde{u}_t=0, \end{aligned}$$
(6)

which is identified with the Klein–Gordon equation of (1) by

$$\begin{aligned} \tilde{u}=\exp \left( -\int ^t_0 b(s)\,{\hbox {d}}s \right) u\quad \hbox {and}\,M=-b'-b^2. \end{aligned}$$
(7)
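The identity behind (7) holds for arbitrary smooth functions, not only for solutions: a direct computation gives \(\tilde{u}_{tt}-\Delta \tilde{u}+2b\tilde{u}_t=\exp \left( -\int _0^t b(s)\,{\hbox {d}}s\right) \left( u_{tt}-\Delta u+Mu\right) \) with \(M=-b'-b^2\). The following sketch checks this numerically by finite differences at a fixed frequency, where the Laplacian contributes \(|\xi |^2\); the choices \(b(t)=\nu /(1+t)\), \(u(t)=\sin 3t+t^2\) and all numerical parameters are illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative (hypothetical) choices: b(t) = nu/(1+t) and a generic smooth
# profile u(t); the identity should hold for ANY smooth u at fixed frequency.
nu = 0.1
xi2 = 4.0  # |xi|^2 at a fixed frequency

def b(t):
    return nu / (1.0 + t)

def B(t):
    # antiderivative of b with B(0) = 0, so exp(-B(t)) = (1+t)^(-nu)
    return nu * math.log(1.0 + t)

def u(t):
    return math.sin(3.0 * t) + t * t

def M(t):
    # M = -b' - b^2 with b = nu/(1+t) gives M(t) = (nu - nu^2)/(1+t)^2
    return (nu - nu * nu) / (1.0 + t) ** 2

def utilde(t):
    return math.exp(-B(t)) * u(t)

def d1(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

t0 = 1.7
lhs = d2(utilde, t0) + xi2 * utilde(t0) + 2.0 * b(t0) * d1(utilde, t0)
rhs = math.exp(-B(t0)) * (d2(u, t0) + xi2 * u(t0) + M(t0) * u(t0))
residual = abs(lhs - rhs)
```

Both sides agree up to the finite-difference error, which confirms the sign convention \(M=-b'-b^2\) in (7).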

It is known that the “oscillations” of variable coefficients have a crucial influence on energy estimates for hyperbolic equations. For instance, it is shown in [11, 12] that the energy to the solution of the Cauchy problem for the wave equation with time-dependent propagation speed

$$\begin{aligned} u_{tt}-a(t)\Delta u=0 \end{aligned}$$
(8)

can be unbounded as \(t\rightarrow \infty \) if a(t) oscillates very fast, even though a(t) is bounded and strictly positive. More precisely, the energy is not bounded in general if the estimate \(|a'(t)|\lesssim (1+t)^{-\beta }\) holds only with \(\beta < 1\); on the other hand, the energy is bounded uniformly with respect to t if \(\beta >1\). Moreover, it is proved in [11] that the energy is also uniformly bounded with respect to t if \(|a'(t)|\lesssim (1+t)^{-1}\) and \(|a''(t)|\lesssim (1+t)^{-2}\) with \(a\in C^2([0,\infty ))\). Here the oscillations in the coefficient a(t) satisfying \(|a'(t)|\lesssim (1+t)^{-\beta }\) with \(\beta <1\) and \(\beta \ge 1\) are called “very fast” and “very slow,” respectively. Thus we expect that if the oscillations in the coefficient are very slow, then the asymptotic behavior of the energy is the same as in the case without any oscillations. The notion of oscillations can be introduced to dissipative wave equation (6) with

$$\begin{aligned} \frac{a'(t)}{2a(t)^2}= b\left( \int ^t_0 a(s)\,{\hbox {d}}s \right) \end{aligned}$$
(9)

and to the Klein–Gordon equation of (1) with (7). Energy estimates with very slowly oscillating coefficients in the dissipation \(b(t)u_t\) and in the potential M(t)u, described by \(|b(t)|\lesssim (1+t)^{-1}\) and \(|M(t)|\lesssim (1+t)^{-2}\), were studied in [13, 14] and [6], respectively.

Generally, very fast oscillating coefficients may destroy estimates that are valid for slowly oscillating coefficients (see [4]). However, some additional assumptions on the coefficient preserve the estimates even though the oscillations become very fast. Indeed, the energy of the solution of (8) is bounded uniformly with respect to t, although the oscillations of a(t) are very fast, if \(a\in C^m([0,\infty ))\) with \(m\ge 2\) and there exist constants \(a_\infty >0\) and \(\alpha \in [0,1)\) such that

$$\begin{aligned} \int ^t_0|a(s)-a_\infty |\,{\hbox {d}}s=O (t^\alpha ) \quad (t\rightarrow \infty ), \end{aligned}$$
(10)

here (10) is called the stabilization property for (8) (see [5, 7, 9]). Corresponding stabilization properties were studied in [8] for dissipative wave equations (6) with non-effective dissipation and in [3] for Klein–Gordon equations of (1) with effective potential. Briefly, the aim of the present paper is to prove energy estimate (4) for the solutions of (1) with a non-effective and very fast oscillating coefficient in the potential by introducing a suitable stabilization property.

This paper is organized as follows. In Sect. 2 we give the main theorem and corresponding examples. In Sect. 3 we introduce the strategy of the proof and some estimates to be used in the proof. In Sect. 4 we prove some estimates of the micro-energy in restricted phase spaces. In Sect. 5 we give the proof of our main theorem. In Sect. 6 we give some concluding remarks, and Sect. 7 is an appendix.

2 Main result

In this paper we restrict ourselves to the following special structure of the coefficient in the potential:

$$\begin{aligned} M(t) = \frac{\mu ^2}{(1+t)^2} + \delta (t) \end{aligned}$$
(11)

as a perturbed model of the scale-invariant potential (3). For the perturbation \(\delta =\delta (t)\) we introduce the following hypotheses:

Hypothesis 1

(Non-effective condition) The principal part of M(t) is non-effective, that is, \(\mu \) satisfies

$$\begin{aligned} 0<\mu <\frac{1}{2}. \end{aligned}$$
(12)

Hypothesis 2

(Oscillation condition) There exists a real number \(\beta \) satisfying \(\beta <1\) such that

$$\begin{aligned} \left| \delta (t)\right| \lesssim (1+t)^{-2\beta }. \end{aligned}$$
(13)

Hypothesis 3

(Stabilization condition) There exists a real number \(\gamma \) satisfying \(\gamma >1\) such that

$$\begin{aligned} \left| \int _t^\infty \delta (s)\,{\hbox {d}}s\right| \lesssim (1+t)^{-\gamma }. \end{aligned}$$
(14)

Then our main theorem is given as follows:

Theorem 1

Let \(\delta \in C^0([0,\infty ))\) and Hypotheses 1, 2 and 3 be fulfilled. If the following condition holds:

$$\begin{aligned} 2\beta {\left\{ \begin{array}{ll} \ge -\gamma + 3 &{} \textit{ for }\; \gamma \not =2,\\ > 1 &{} \textit{ for }\; \gamma =2, \end{array}\right. } \end{aligned}$$
(15)

then energy estimate (4) with (5) is established.

Remark 1

By Hypotheses 1 and 3 we see that

$$\begin{aligned} \limsup _{t\rightarrow \infty }(1+t)\left| \int _t^\infty M(s)\,{\hbox {d}}s\right| \le \mu ^2 + \limsup _{t\rightarrow \infty }(1+t)\left| \int _t^\infty \delta (s)\,{\hbox {d}}s\right| <\frac{1}{4}. \end{aligned}$$

It follows that M(t)u is non-effective.

Remark 2

Hypotheses 2 and 3 do not require the asymptotic behavior \(\delta (t)=O(t^{-2})\) (\(t\rightarrow \infty \)). Hence, (11) is not always a small perturbation of the scale-invariant model (3) in the sense of the \(L^\infty \) norm. In other words, M(t) is allowed to possess very fast oscillations if one reduces the Klein–Gordon equation to (6) and (8) by (7) and (9). Indeed, we will introduce some examples of \(\delta (t)\) to which Theorem 1 applies. These examples satisfy

$$\begin{aligned} \liminf _{t\rightarrow \infty }t^{2\beta }\delta (t)<0. \end{aligned}$$
(16)

It follows that for any \(T>0\) there exists \(T_1>T\) such that \(M(T_1)<0\). Here we note that negative time-dependent potentials appear in some cosmological models (see [15]).

Example 1

We define M(t) by \(M(t)= \mu ^2(1+t)^{-2} + \delta (t)\) with

$$\begin{aligned} \delta (t)= \frac{\sin ((1+t)^{\kappa })}{(1+t)^{2\beta }} , \end{aligned}$$

where \(\beta \), \(\mu \) and \(\kappa \) are real numbers satisfying \(0<\mu <1/2\), \(\kappa >0\) and \(1-\kappa /4\le \beta <1\). Thus Hypotheses 1 and 2 are satisfied. By Lemma 7 from “Appendix” we have

$$\begin{aligned} \left| \int _t^\infty \delta (s)\, {\hbox {d}}s \right| =\frac{1}{\kappa } \left| \int _{(1+t)^{\kappa }}^\infty \frac{\sin \theta }{\theta ^{1+\frac{2\beta -1}{\kappa }}}\, {\hbox {d}}\theta \right| \lesssim (1+t)^{-(2\beta +\kappa -1)}. \end{aligned}$$

Thus Hypothesis 3 is satisfied with \(\gamma =2\beta +\kappa -1 \ge 1+\kappa /2>1\). Therefore, Theorem 1 is applicable provided (15) holds, that is, if

$$\begin{aligned} \beta {\left\{ \begin{array}{ll} \ge 1-\dfrac{\kappa }{4} &{}\quad {\hbox {for}}\; 2\beta +\kappa \ne 3, \\ > \dfrac{1}{2} &{}\quad {\hbox {for}}\; 2\beta +\kappa =3. \end{array}\right. } \end{aligned}$$
(17)
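The passage from (15) to (17) is elementary arithmetic: with \(\gamma =2\beta +\kappa -1\), the inequality \(2\beta \ge -\gamma +3=4-2\beta -\kappa \) is equivalent to \(\beta \ge 1-\kappa /4\), and \(\gamma =2\) is equivalent to \(2\beta +\kappa =3\). A minimal sketch checking this equivalence on a grid of dyadic parameter values (the grid itself is an arbitrary choice):

```python
# Check that (15) with gamma = 2*beta + kappa - 1 is equivalent to (17),
# scanning a grid of dyadic parameter values (exact in floating point).
def cond_15(beta, kappa):
    gamma = 2 * beta + kappa - 1
    if gamma != 2:
        return 2 * beta >= -gamma + 3
    return 2 * beta > 1

def cond_17(beta, kappa):
    if 2 * beta + kappa != 3:
        return beta >= 1 - kappa / 4
    return beta > 0.5

grid = [(i / 16, j / 16) for i in range(1, 16) for j in range(1, 64)]
mismatches = [(b, k) for (b, k) in grid if cond_15(b, k) != cond_17(b, k)]
```

No mismatch occurs, so (17) is exactly the specialization of (15) to this example.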

Example 2

Let \(\{t_j\}_{j=1}^\infty =\{2\pi j\}_{j=1}^\infty \). We define M(t) by \(M(t):= \mu ^2(1+t)^{-2} + \delta (t)\) with

$$\begin{aligned} \delta (t):= {\left\{ \begin{array}{ll} t_j^{-2\beta } \sin \left( t_j^{\kappa }(t-t_j)\right) &{} t\in \left[ t_j,t_j+2\pi t_j^{-\kappa }\right] ,\\ 0 &{} t\in [0,\infty )\setminus \bigcup _{j=1}^\infty \left[ t_j,t_j+2\pi t_j^{-\kappa }\right] , \end{array}\right. } \end{aligned}$$

where \(\beta \), \(\mu \) and \(\kappa \) are real numbers satisfying \(0<\mu <1/2\), \(\kappa >0\) and \(1-(1+\kappa )/4 \le \beta <1\). Thus Hypotheses 1 and 2 are satisfied. Here we note that \(t_j+2\pi t_j^{-\kappa }<t_{j+1}\). For an arbitrary real number t satisfying \(t\ge t_1\) we take \(N\in {\mathbb {N}}\) satisfying \(t_{N}\le t < t_{N+1}\). Then we have

$$\begin{aligned} |\delta (t)|\le t_N^{-2\beta } =\left( \frac{N+1}{N}\right) ^{2\beta } t_{N+1}^{-2\beta } \simeq (1+t)^{-2\beta }. \end{aligned}$$

Moreover, if we note \(\int ^{t_j+2\pi t_j^{-\kappa }}_{t_j}\delta (s)\,{\hbox {d}}s=\int ^{t_{j+1}}_{t_j}\delta (s)\,{\hbox {d}}s=0\) for any j, we have

$$\begin{aligned} \left| \int ^\infty _t \delta (s)\,{\hbox {d}}s\right|&=\left| \int ^{t_{N+1}}_t \delta (s)\,{\hbox {d}}s\right| \le \int ^{t_{N}+2\pi t_{N}^{-\kappa }}_{t_{N}} |\delta (s)|\,{\hbox {d}}s \le 4 t_{N}^{-(2\beta +\kappa )} \\&=4 \left( \frac{N+1}{N}\right) ^{2\beta +\kappa } t_{N+1}^{-(2\beta +\kappa )} \simeq (1+t)^{-(2\beta +\kappa )}. \end{aligned}$$

Thus Hypothesis 3 is satisfied with \(\gamma =2\beta +\kappa \ge (3+\kappa )/2>1\). Therefore, Theorem 1 is applicable provided (15) holds, that is, if

$$\begin{aligned} \beta {\left\{ \begin{array}{ll} \ge 1-\dfrac{1+\kappa }{4} &{}\quad {\hbox {for}}\; 2\beta +\kappa \ne 2, \\ > \dfrac{1}{2} &{}\quad {\hbox {for}}\; 2\beta +\kappa = 2. \end{array}\right. } \end{aligned}$$
(18)
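The two facts used above — the integral of \(\delta \) over each bump vanishes, since \(\sin (t_j^{\kappa }(t-t_j))\) runs through exactly one period, while the integral of \(|\delta |\) over a bump equals \(4t_j^{-(2\beta +\kappa )}\) — can be checked numerically. The parameter values \(\kappa =1\), \(\beta =0.6\) below are illustrative assumptions satisfying \(1-(1+\kappa )/4\le \beta <1\).

```python
import math

# Illustrative (hypothetical) parameters: kappa = 1, beta = 0.6, which
# satisfy 1 - (1 + kappa)/4 = 0.5 <= beta < 1 as required in Example 2.
kappa, beta = 1.0, 0.6

def t_j(j):
    return 2.0 * math.pi * j

def delta(t):
    # delta vanishes outside the bumps [t_j, t_j + 2*pi*t_j^(-kappa)]
    j = int(t // (2.0 * math.pi))
    if j < 1:
        return 0.0
    tj = t_j(j)
    if tj <= t <= tj + 2.0 * math.pi * tj ** (-kappa):
        return tj ** (-2.0 * beta) * math.sin(tj ** kappa * (t - tj))
    return 0.0

def bump_integrals(j, n=200000):
    # midpoint rule for the signed and absolute integrals over one bump
    tj = t_j(j)
    h = 2.0 * math.pi * tj ** (-kappa) / n
    vals = [delta(tj + (i + 0.5) * h) for i in range(n)]
    return sum(vals) * h, sum(abs(v) for v in vals) * h

signed, absolute = bump_integrals(3)
bound = 4.0 * t_j(3) ** (-(2.0 * beta + kappa))  # the value used in the text
```

The signed integral is numerically zero and the absolute integral matches \(4t_j^{-(2\beta +\kappa )}\), as used in the estimate above.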

3 Strategy of the proof and some estimates

3.1 Reduction to the dissipative wave equation

The basic strategy for the proof of our main theorem is the reduction to dissipative wave equation (6). The change of variables \(u(t,x)=\eta (t)w(t,x)\) with \(\eta =\eta (t)\in C^2([0,\infty ))\) transforms the Klein–Gordon equation into the following wave equation with dissipation and a potential term:

$$\begin{aligned} w_{tt}-\Delta w+\frac{2\eta '(t)}{\eta (t)}w_t +\left( \frac{\eta ''(t)}{\eta (t)}+M(t)\right) w=0. \end{aligned}$$
(19)

If we take \(\eta \) as the solution to the following Liouville-type equation:

$$\begin{aligned} \eta ''(t)+M(t)\eta (t)=0, \end{aligned}$$
(20)

then (19) corresponds to (6) with \(b=\eta '/\eta \). Hence, we may apply the arguments of [8] for treating dissipative wave equations with very fast oscillations. The idea was introduced in [6] for very slowly oscillating potentials. Thus the main task in the proof of our main theorem is to develop their method for very fast oscillating potentials.

Let T be a nonnegative number and \(\{q_k(t)\}_{k=1}^\infty \) be defined by

$$\begin{aligned} q_k(t):= {\left\{ \begin{array}{ll} M(t) &{}\quad {\hbox {for}}\; k=1,\\ \sum _{j=1}^{k-1} \left( \int _t^\infty q_j(s)\,{\hbox {d}}s \right) \left( \int _t^\infty q_{k-j}(s)\,{\hbox {d}}s \right) &{}\quad {\hbox {for}} \; k\ge 2. \end{array}\right. } \end{aligned}$$
(21)

Then a solution to (20) is represented formally as follows:

$$\begin{aligned} \eta (t) =\exp \left( \sum _{j=1}^\infty \int ^t_T \int ^\infty _\tau q_j(s)\,{\hbox {d}}s \,{\hbox {d}}\tau \right) \end{aligned}$$
(22)

(see Lemma 8 in Sect. 7). Therefore, we shall consider the following problems to realize our strategy:

  1. (i)

    Uniform convergence of \(\eta (t)\) on any compact interval of \([0,\infty )\).

  2. (ii)

    Estimates of the oscillations and the stabilization to \(b(t)=\eta '(t)/\eta (t)\).
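For the unperturbed scale-invariant potential \(M(t)=\mu ^2(1+t)^{-2}\), every \(q_k\) keeps the form \(c_k(1+t)^{-2}\) with \(\int _t^\infty c_j(1+s)^{-2}\,{\hbox {d}}s=c_j(1+t)^{-1}\), so the recursion (21) collapses to \(c_1=\mu ^2\) and \(c_k=\sum _{j=1}^{k-1}c_jc_{k-j}\). A sketch checking numerically that \(c_k=\gamma _{k-1}\mu ^{2k}\) with the Catalan numbers \(\gamma _k\) and that the series sums to \(\nu =(1-\sqrt{1-4\mu ^2})/2\); the value \(\mu =0.3\) is an arbitrary choice.

```python
import math

mu = 0.3  # arbitrary value with 0 < mu < 1/2

# For M(t) = mu^2*(1+t)^(-2) each q_k has the form c_k*(1+t)^(-2), and the
# recursion (21) reduces to c_1 = mu^2, c_k = sum_{j=1}^{k-1} c_j*c_{k-j}.
K = 12
c = [0.0, mu ** 2]  # 1-based: c[k] is the coefficient of q_k
for k in range(2, K + 1):
    c.append(sum(c[j] * c[k - j] for j in range(1, k)))

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

max_err = max(abs(c[k] - catalan(k - 1) * mu ** (2 * k)) for k in range(1, K + 1))

# the full series sums to nu*(1+t)^(-2) with nu = (1 - sqrt(1 - 4*mu^2))/2
nu = (1.0 - math.sqrt(1.0 - 4.0 * mu ** 2)) / 2.0
partial = sum(c[1:])
```

This is exactly the computation carried out in closed form in Lemma 1 below.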

3.2 Convergence of \(\eta (t)\)

Let \(\mu \) be a real number satisfying (12), and let \(\gamma _k\) be the kth Catalan number, defined by

$$\begin{aligned} \gamma _k:=\frac{(2k)!}{k!(k+1)!} \;\; (k=0,1,\ldots ). \end{aligned}$$

If we define \(\nu \) and \(\mu _k(t)\) by

$$\begin{aligned} \nu :=\frac{1-\sqrt{1-4\mu ^2}}{2} \end{aligned}$$
(23)

and

$$\begin{aligned} \mu _k(t):=\gamma _{k-1} \mu ^{2k}(1+t)^{-2}, \end{aligned}$$

then we have the following lemma:

Lemma 1

The following equalities are established:

$$\begin{aligned} \sum _{k=1}^\infty \mu _k(t) =\nu (1+t)^{-2} \end{aligned}$$
(24)

and

$$\begin{aligned} \sum _{j=1}^{k-1} \left( \int ^\infty _t \mu _j(s)\,{\hbox {d}}s \right) \left( \int ^\infty _t \mu _{k-j}(s)\,{\hbox {d}}s \right) =\mu _k(t) \end{aligned}$$
(25)

for any \(k \ge 2\).

Proof

By the generating function of the Catalan numbers:

$$\begin{aligned} \sum _{j=0}^\infty \gamma _{j}r^{j}=\frac{1-\sqrt{1-4r}}{2r} \end{aligned}$$
(26)

(see Lemma 9 in Sect. 7), we have

$$\begin{aligned} \sum _{k=1}^\infty \mu _k(t) =\mu ^2 \sum _{j=0}^\infty \gamma _{j} \mu ^{2j}(1+t)^{-2} =\frac{1-\sqrt{1-4\mu ^{2}}}{2}(1+t)^{-2}. \end{aligned}$$

By the equalities \(\int ^\infty _t \mu _k(s)\,{\hbox {d}}s=(1+t)\mu _k(t)\) and

$$\begin{aligned} \gamma _{k-1}=\sum _{j=1}^{k-1}\gamma _{j-1} \gamma _{k-j-1}, \end{aligned}$$

it follows that

$$\begin{aligned} \sum _{j=1}^{k-1} \mu _j(t) \mu _{k-j}(t) =\gamma _{k-1}\mu ^{2k}(1+t)^{-4} = \mu _{k}(t) (1+t)^{-2} \end{aligned}$$
(27)

for \(k\ge 2\). Hence we have

$$\begin{aligned} \sum _{j=1}^{k-1} \left( \int ^\infty _t \mu _j(s)\,{\hbox {d}}s \right) \left( \int ^\infty _t \mu _{k-j}(s)\,{\hbox {d}}s \right) =(1+t)^2\sum _{j=1}^{k-1} \mu _j(t)\mu _{k-j}(t) =\mu _k(t). \end{aligned}$$

\(\square \)

Remark 3

If \(\delta (t)=0\) and \(0<\mu <1/2\), then \(q_k(t)=\mu _k(t)\), and thus \(\eta (t)=(1+t)^\nu \) and \(b(t)=\nu (1+t)^{-1}\) with \(0<\nu <1/2\).
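Remark 3 can be checked directly: \(\eta (t)=(1+t)^\nu \) gives \(\eta ''(t)=\nu (\nu -1)(1+t)^{\nu -2}\), so \(\eta ''+\mu ^2(1+t)^{-2}\eta =0\) holds exactly when \(\nu (1-\nu )=\mu ^2\), which is the algebraic content of (23). A numerical sketch, where \(\mu =0.3\) is an arbitrary choice:

```python
import math

mu = 0.3  # arbitrary value in (0, 1/2)
nu = (1.0 - math.sqrt(1.0 - 4.0 * mu ** 2)) / 2.0

# algebraic identity behind Remark 3: eta(t) = (1+t)^nu solves (20) for the
# scale-invariant potential exactly when nu*(1 - nu) = mu^2
identity_err = abs(nu * (1.0 - nu) - mu ** 2)

def eta(t):
    return (1.0 + t) ** nu

def residual(t, h=1e-4):
    # central difference for eta'' + M(t)*eta with M(t) = mu^2*(1+t)^(-2)
    eta2 = (eta(t + h) - 2.0 * eta(t) + eta(t - h)) / (h * h)
    return eta2 + mu ** 2 / (1.0 + t) ** 2 * eta(t)

res = max(abs(residual(t)) for t in (0.5, 1.0, 5.0, 25.0))
```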

Let \(\sigma _k(t)\) with \(k=1,2,\ldots \) be the error of \(q_k(t)\) from \(\mu _k(t)\), that is,

$$\begin{aligned} \sigma _k(t):=q_k(t)-\mu _k(t). \end{aligned}$$

Then we have the following lemmas:

Lemma 2

For any positive real number \(\rho \) there exist constants \(T\ge 0\) and \(\omega _0>0\) such that

$$\begin{aligned} |\sigma _k(t)| \le \omega _0 ((1+\rho )^k -1 ) (1+t)^{-\gamma +1} \mu _k(t) \;\;(k=1,2,\ldots ) \end{aligned}$$
(28)

and

$$\begin{aligned} \left| \int ^\infty _t\sigma _k(s)\,{\hbox {d}}s\right| \le \omega _0 ((1+\rho )^k -1 )(1+t)^{-\gamma +2} \mu _k(t) \;\;(k=1,2,\ldots ) \end{aligned}$$
(29)

for any \(t\ge T\).

Proof

By (14) we can define a finite positive constant \(\rho _0\) by

$$\begin{aligned} \rho _0=\max \left\{ \rho \mu ^2, \sup _{t\ge 0}\left\{ (1+t)^{\gamma }\left| \int ^\infty _t\delta (s)\,{\hbox {d}}s\right| \right\} \right\} . \end{aligned}$$

For a given positive real number \(\rho \) we define \(\omega _0\) and T by

$$\begin{aligned} \omega _0:=\frac{\rho _0}{\rho \mu ^{2}}\quad {\hbox {and}}\, T:=\omega _0^{\frac{1}{\gamma -1}}-1. \end{aligned}$$
(30)

Then for any \(t\ge T\) we have

$$\begin{aligned} \omega _0(1+t)^{-\gamma +1} \le \omega _0(1+T)^{-\gamma +1} = 1 \end{aligned}$$
(31)

and

$$\begin{aligned} \left| \int ^\infty _t \sigma _1(s)\,{\hbox {d}}s\right| =\left| \int ^\infty _t \delta (s)\,{\hbox {d}}s\right| \le \rho _0 (1+t)^{-\gamma } = \omega _0 \rho (1+t)^{-\gamma +2} \mu _1(t) \end{aligned}$$

for any \(t\ge T\). It follows that (29) is valid for \(k=1\). Let \(j\ge 2\) and suppose that (29) is valid for \(k=1,\ldots ,j-1\). Then we have

$$\begin{aligned}&\left| \left( \int ^\infty _t \mu _{j-k}(s)\,{\hbox {d}}s \right) \left( \int ^\infty _t \sigma _k(s)\,{\hbox {d}}s \right) + \left( \int ^\infty _t \mu _{k}(s)\,{\hbox {d}}s \right) \left( \int ^\infty _t \sigma _{j-k}(s)\,{\hbox {d}}s \right) \right| \\&\quad =\left| (1+t)\mu _{j-k}(t)\left( \int ^\infty _t \sigma _k(s)\,{\hbox {d}}s \right) + (1+t)\mu _{k}(t)\left( \int ^\infty _t \sigma _{j-k}(s)\,{\hbox {d}}s \right) \right| \\&\quad \le \omega _0 ((1+\rho )^k+(1+\rho )^{j-k}-2 ) (1+t)^{-\gamma +3} \mu _{k}(t)\mu _{j-k}(t) \end{aligned}$$

and

$$\begin{aligned}&\left| \left( \int ^\infty _t \sigma _{k}(s)\,{\hbox {d}}s \right) \left( \int ^\infty _t \sigma _{j-k}(s)\,{\hbox {d}}s \right) \right| \\&\le \omega _0^2 ((1+\rho )^k-1 ) ((1+\rho )^{j-k}-1) (1+t)^{-2\gamma +4} \mu _k(t)\mu _{j-k}(t). \end{aligned}$$

Therefore, by (25), (27), (31) and the equality

$$\begin{aligned} (1+\rho )^k + (1+\rho )^{j-k}-2 + ((1+\rho )^k-1)((1+\rho )^{j-k}-1) =(1+\rho )^j -1 \end{aligned}$$

we have

$$\begin{aligned} |\sigma _j(t)|&=|q_j(t)-\mu _j(t)| \\&=\left| \sum _{k=1}^{j-1} \left( \int ^\infty _t (\mu _k(s)+\sigma _k(s) )\,{\hbox {d}}s \right) \left( \int ^\infty _t (\mu _{j-k}(s)+\sigma _{j-k}(s) )\,{\hbox {d}}s \right) -\mu _j(t) \right| \\&\le \omega _0 \sum _{k=1}^{j-1} ((1+\rho )^k+(1+\rho )^{j-k}-2 ) (1+t)^{-\gamma +3}\mu _{k}(t)\mu _{j-k}(t) \\&\quad +\omega _0^2\sum _{k=1}^{j-1}((1+\rho )^k-1)((1+\rho )^{j-k}-1) (1+t)^{-2\gamma +4} \mu _k(t)\mu _{j-k}(t) \\&\le \omega _0 ((1+\rho )^j-1) (1+t)^{-\gamma +1}\mu _j(t) \end{aligned}$$

for any \(t\ge T\). Moreover, we have

$$\begin{aligned} \left| \int ^\infty _t\sigma _j(s)\,{\hbox {d}}s\right|&\le \omega _0 ((1+\rho )^j-1)\int ^\infty _t (1+s)^{-\gamma +1}\mu _j(s)\,{\hbox {d}}s \\&= \frac{\omega _0}{\gamma } ((1+\rho )^j-1)(1+t)^{-\gamma +2}\mu _j(t) \\&\le \omega _0 ((1+\rho )^j-1)(1+t)^{-\gamma +2}\mu _j(t). \end{aligned}$$

Thus (28) and (29) are valid for \(k=j\). Consequently, (28) and (29) are valid for any \(k\in {\mathbb {N}}\). \(\square \)

By Lemmas 1 and 2 we have the following two propositions, which ensure the convergence of \(\eta (t)\).

Proposition 1

The following estimates are established on \([T,\infty )\):

$$\begin{aligned} \left| \sum _{j=1}^\infty q_j(t) - M(t)\right| \lesssim (1+t)^{-2} \end{aligned}$$
(32)

and

$$\begin{aligned} \left| \sum _{j=1}^\infty \int ^\infty _t q_j(s)\,{\hbox {d}}s -\nu (1+t)^{-1}\right| \lesssim (1+t)^{-\gamma }. \end{aligned}$$
(33)

Proof

Let \(\rho >0\) satisfy \((1+\rho )\mu ^2<1/4\). By (21), (26), (31), Lemmas 1 and 2 we have

$$\begin{aligned} \left| \sum _{j=1}^\infty q_j(t)-M(t)\right|&= \left| \sum _{j=2}^\infty q_j(t)\right| =\left| \sum _{j=2}^\infty \mu _j(t) + \sum _{j=2}^\infty \sigma _j(t)\right| \\&\le \nu (1+t)^{-2} + \omega _0\sum _{j=2}^\infty ((1+\rho )^j -1)(1+t)^{-\gamma +1}\mu _j(t) \\&\le \left( \nu + \omega _0\sum _{j=2}^\infty \gamma _{j-1} ((1+\rho )\mu ^{2})^j (1+t)^{-\gamma +1} \right) (1+t)^{-2} \\&\le \left( \nu + \frac{1-\sqrt{1-4(1+\rho )\mu ^2}}{2}\right) (1+t)^{-2} \\&\lesssim (1+t)^{-2}. \end{aligned}$$

Moreover, by (14), Lemmas 1 and 2 we have

$$\begin{aligned}&\left| \sum _{j=1}^\infty \int ^\infty _t q_j(s)\,{\hbox {d}}s - \nu (1+t)^{-1}\right| =\left| \sum _{j=1}^\infty \int ^\infty _t \sigma _j(s)\,{\hbox {d}}s\right| \\&\le \left| \int ^\infty _t \delta (s)\,{\hbox {d}}s\right| +\omega _0\sum _{j=2}^\infty ((1+\rho )^j -1) (1+t)^{-\gamma +2} \mu _j(t) \\&\lesssim (1+t)^{-\gamma } +\omega _0 \frac{1-\sqrt{1-4(1+\rho )\mu ^2}}{2}(1+t)^{-\gamma } \\&\simeq (1+t)^{-\gamma }. \end{aligned}$$

Thus the proof is completed. \(\square \)

We note that \(\eta (t)\) is the solution to the following initial value problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \eta ''(t)+M(t)\eta (t)=0,\;\; t\in (T,\infty ),\\ (\eta (T),\eta '(T))=(1,\tilde{\eta }), \end{array}\right. } \end{aligned}$$
(34)

where T was defined by (30) and \(\tilde{\eta }=\sum _{j=1}^\infty \int ^\infty _T q_j(s) \,{\hbox {d}}s\). Let us continue the solution \(\eta =\eta (t)\) on the interval [0, T) as the solution to the backward Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \eta ''(t)+M(t)\eta (t)=0,\;\; t\in [0,T),\\ (\eta (T),\eta '(T))=(1,\tilde{\eta }). \end{array}\right. } \end{aligned}$$
(35)

We define b(t) and \(\sigma (t)\) on \([0,\infty )\) by

$$\begin{aligned} b(t):=\frac{\eta '(t)}{\eta (t)} \end{aligned}$$
(36)

and

$$\begin{aligned} \sigma (t):=b(t)-\nu (1+t)^{-1}. \end{aligned}$$
(37)

Then we have the following proposition.

Proposition 2

If Hypotheses 1, 2 and 3 are fulfilled, then the following estimates are established:

$$\begin{aligned} \left| b'(t)\right| \lesssim (1+t)^{-2\beta } \end{aligned}$$
(38)

and

$$\begin{aligned} |\sigma (t)|\lesssim (1+t)^{-\gamma }. \end{aligned}$$
(39)

It follows that

$$\begin{aligned} \eta (t) \simeq (1+t)^{\nu }, \end{aligned}$$
(40)

and that for any \(\nu _0\) and \(\nu _1\) satisfying \(0<\nu _0<\nu < \nu _1 \le 1/2\) there exists \(T_0\ge 0\) such that

$$\begin{aligned} \nu _0(1+t)^{-1} \le b(t) \le \nu _1(1+t)^{-1} \end{aligned}$$
(41)

for any \(t\ge T_0\).

Proof

Estimates (38) and (39) are trivial on the finite interval [0, T]. Suppose that \(t\ge T\). Then by Proposition 1, (13), (22) and (77) we have

$$\begin{aligned} \left| b'(t)\right|&=\left| \frac{\eta ''(t)}{\eta (t)}-\frac{\eta '(t)^2}{\eta (t)^2}\right| =\left| -M(t)-\left( \sum _{j=1}^\infty \int _t^\infty q_j(s){\hbox {d}}s \right) ^2\right| =\left| \sum _{k=1}^\infty q_k(t)\right| \\&\lesssim |M(t)|+(1+t)^{-2} \lesssim (1+t)^{-2\beta } \end{aligned}$$

and

$$\begin{aligned} |\sigma (t)| =\left| \sum _{j=1}^\infty \int _t^\infty q_j(s){\hbox {d}}s -\nu (1+t)^{-1}\right| \lesssim (1+t)^{-\gamma }. \end{aligned}$$

Moreover, by (33), (36), (37) and (39) we have

$$\begin{aligned} \eta (t)=\eta (T)\exp \left( \nu \int ^t_T(1+s)^{-1}\,{\hbox {d}}s+\int ^t_T\sigma (s)\,{\hbox {d}}s \right) \simeq (1+t)^\nu \end{aligned}$$

and

$$\begin{aligned} b(t)(1+t)=\nu +\sigma (t)(1+t) {\left\{ \begin{array}{ll} \le \nu _1 \\ \ge \nu _0 \end{array}\right. } \end{aligned}$$

for any \(t \ge T_0\) with \(\min \{\nu _1-\nu ,\nu -\nu _0\}\ge \sup _{t\ge T_0}\{|\sigma (t)|(1+t)\}\). Thus the proof is complete. \(\square \)

4 Construction of the fundamental solution

4.1 Micro-energy and zones

For the nonnegative constants T and \(T_0\) in Propositions 1 and 2, estimate (4) is trivial on the finite interval \([0,T_1]\) with \(T_1=\max \{T,T_0\}\) by application of the usual energy method. Thus we can suppose that \(T_1=0\) from now on without loss of generality.

By partial Fourier transformation with respect to x and denoting \(v(t,\xi )=\widehat{u}(t,\xi )\), (1) is reduced to the following Cauchy problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{tt} +|\xi |^2 v + M(t)v=0,\;\; (t,\xi )\in (0,\infty )\times {\mathbb {R}}^n,\\ v(0,\xi )=\widehat{u}_0(\xi ),\;\; v_t(0,\xi )=\widehat{u}_1(\xi ),\;\; \xi \in {\mathbb {R}}^n. \end{array}\right. } \end{aligned}$$
(42)

For a large positive number N, which will be chosen later, we divide the extended phase space \([0,\infty )\times \mathbb {R}^n\) into three zones: the pseudo-differential zone \(Z_{\varPsi }=Z_{\varPsi }(N)\), the hyperbolic zone \(Z_{H}=Z_{H}(N)\) and the intermediate zone \(Z_{I}=Z_{I}(N)\) as follows:

$$\begin{aligned}&Z_{\varPsi }(N):=\{(t,\xi ) \in [0,\infty ) \times {\mathbb {R}}^n \;;\; (1+t)|\xi | \le N\},\\&Z_{H}(N):=\left\{ (t,\xi ) \in [0,\infty ) \times {\mathbb {R}}^n \;;\; (1+t)^{2-\gamma }|\xi |\ge N \right\} , \\&Z_{I}(N):=\left\{ (t,\xi )\in [0,\infty ) \times {\mathbb {R}}^n \;;\; (1+t)^{2-\gamma }|\xi |\le N\le (1+t) |\xi |\right\} {.} \end{aligned}$$

Here we note that \(2-\gamma \le 2\beta -1<1\) by Hypotheses 2 and 3 and (15). It follows that \(Z_I\not \subseteq Z_H\). We define \(\theta _1=\theta _1(r)\) and \(\theta _2=\theta _2(r)\) on \([0,\infty )\) by

$$\begin{aligned} \theta _1(r):={\left\{ \begin{array}{ll} \infty &{}\quad {\hbox {for}}\,\, r=0, \\ \max \left\{ Nr^{-1}-1,0 \right\} &{}\quad {\hbox {for}}\,\, r>0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \theta _2(r):={\left\{ \begin{array}{ll} \infty &{}\quad {\hbox {for}}\,\, r=0, \\ \max \left\{ (Nr^{-1})^{\frac{1}{2-\gamma }}-1,0 \right\} &{}\quad {\hbox {for}}\,\, r>0\, {\hbox {and}}\, \gamma \ne 2, \\ \max \left\{ (Nr^{-1})^{\frac{1}{2\beta -1}}-1,0 \right\} &{}\quad {\hbox {for}}\,\, r>0\, {\hbox {and}}\, \gamma =2. \end{array}\right. } \end{aligned}$$

Then \((\theta _1(|\xi |),\xi )\) and \((\theta _2(|\xi |),\xi )\) denote the separating hypersurfaces between \(Z_\varPsi \) and \(Z_I\), and \(Z_I\) and \(Z_H\), respectively. Indeed, the zones can be represented as follows:

$$\begin{aligned}&Z_{\varPsi }(N) =\{(t,\xi ) \in [0,\infty ) \times {\mathbb {R}}^n \;;\; 0 \le t \le \theta _1(|\xi |)\},\\&Z_{H}(N) = {\left\{ \begin{array}{ll} \left\{ (t,\xi ) \in [0,\infty ) \times {\mathbb {R}}^n \;;\; t \ge \theta _2(|\xi |) \right\} &{}\quad {\hbox {for}}\; \gamma \le 2,\\ \left\{ (t,\xi ) \in [0,\infty ) \times {\mathbb {R}}^n \;;\; 0 \le t \le \theta _2(|\xi |) \right\} &{}\quad {\hbox {for}}\; \gamma> 2, \end{array}\right. } \\&Z_{I}(N) = {\left\{ \begin{array}{ll} \left\{ (t,\xi )\in [0,\infty ) \times {\mathbb {R}}^n \;;\; \theta _1(|\xi |) \le t \le \theta _2(|\xi |) \right\} &{}\quad {\hbox {for}}\; \gamma \le 2,\\ \left\{ (t,\xi )\in [0,\infty ) \times {\mathbb {R}}^n \;;\; t\ge \theta _1(|\xi |) \; {\hbox {and}}\; t\ge \theta _2(|\xi |) \right\} &{}\quad {\hbox {for}}\, \gamma > 2\; . \end{array}\right. } \end{aligned}$$
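Away from the separating hypersurfaces, membership in the zones via the defining inequalities must agree with the description via \(\theta _1\) and \(\theta _2\). A sketch checking this on a grid for \(\gamma \ne 2\); the values of N, \(\gamma \) and the grid are arbitrary choices, the \(\max \{\cdot ,0\}\) truncation (which only keeps the curves nonnegative) is omitted, and grid points numerically on a separating curve are skipped.

```python
# Consistency check of the zone descriptions for gamma != 2: away from the
# separating curves, the inequality description of Z_Psi and Z_H must agree
# with the description via theta_1 and theta_2.
N = 5.0
EPS = 1e-9  # skip points that sit (numerically) on a separating curve

def theta1(r):
    return N / r - 1.0

def theta2(r, gamma):
    return (N / r) ** (1.0 / (2.0 - gamma)) - 1.0

mismatches = 0
ts = [i * 0.25 for i in range(400)]
rs = [i * 0.125 for i in range(1, 200)]
for gamma in (1.5, 3.0):
    for t in ts:
        for r in rs:
            if abs((1.0 + t) * r - N) > EPS:
                if ((1.0 + t) * r <= N) != (t <= theta1(r)):
                    mismatches += 1
            g = (1.0 + t) ** (2.0 - gamma) * r
            if abs(g - N) > EPS:
                via_theta = (t >= theta2(r, gamma)) if gamma < 2 else (t <= theta2(r, gamma))
                if (g >= N) != via_theta:
                    mismatches += 1
```

The check reflects that \(t\mapsto (1+t)^{2-\gamma }|\xi |\) is increasing for \(\gamma <2\) (so \(Z_H\) lies above \(\theta _2\)) and decreasing for \(\gamma >2\) (so \(Z_H\) lies below \(\theta _2\)).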

Let \(\chi \in C^\infty ({\mathbb {R}}_+)\) be such that \(\chi '(r) \le 0\), \(\chi (r)=1\) for \(r \le 1\), \(\chi (3/2)=1/2\) and \(\chi (r)=0 \) for \(r \ge 2\). We define the micro-energy \(U(t,\xi )\) by

$$\begin{aligned} U(t,\xi ): = ( h(t,\xi ) v(t,\xi ), v_t(t,\xi ) - b(t)v(t,\xi ) )^T, \end{aligned}$$
(43)

where

$$\begin{aligned} h(t,\xi ): = (1+t)^{-1} \chi (N^{-1}(1+t)|\xi |) + i|\xi |( 1- \chi (N^{-1}(1+t)|\xi |) ). \end{aligned}$$
(44)

Then we have the following lemma.

Lemma 3

We have that \(h(t,\xi )=(1+t)^{-1}\) in \(Z_{\varPsi }(N)\) and \(|h(t,\xi )|\simeq |\xi |\) in \(Z_I(N)\cup Z_H(N)\). Moreover, there exists a positive constant \(N_0\) such that for any \(N\ge N_0\) the following estimate holds:

$$\begin{aligned} |U(t,\xi )|^2 \simeq |\xi |^2\left| v(t,\xi )\right| ^2 +\left| v_t(t,\xi )\right| ^2 +(1+t)^{-2}\left| v(t,\xi )\right| ^2 \end{aligned}$$
(45)

uniformly with respect to \((t,\xi )\).

Proof

We have \(h(t,\xi )=(1+t)^{-1}\) in \(Z_{\varPsi }(N)\) by (44). Let \((t,\xi )\in Z_I(N)\cup Z_H(N)\), that is, \(N^{-1}(1+t)|\xi |\ge 1\). If \(1 \le N^{-1}(1+t)|\xi |\le 3/2\), then we have

$$\begin{aligned} |h(t,\xi )| {\left\{ \begin{array}{ll} \ge \frac{1}{2}(1+t)^{-1} \ge \frac{1}{3N}|\xi |, \\ \le \sqrt{(1+t)^{-2}+\frac{1}{4}|\xi |^2} \le \sqrt{\frac{1}{N^2}+\frac{1}{4}}|\xi |. \end{array}\right. } \end{aligned}$$

If \(N^{-1}(1+t)|\xi |\ge 3/2\), then we have

$$\begin{aligned} |h(t,\xi )| {\left\{ \begin{array}{ll} \ge \frac{1}{2}|\xi |, \\ \le \sqrt{\frac{1}{4}(1+t)^{-2}+|\xi |^2} \le \sqrt{\frac{1}{9N^2}+1}|\xi |. \end{array}\right. } \end{aligned}$$

Therefore, by (41), the Cauchy–Schwarz inequality and Proposition 2, we have in \(Z_\varPsi (N)\) that

$$\begin{aligned} |U(t,\xi )|^2&= (1+t)^{-2}|v|^2 + |v_t|^2 + b(t)^2|v|^2 - 2b(t)\mathfrak {R}(v_t\overline{v}) \\&\ge ((1+t)^{-2}- b(t)^2)|v|^2 + \frac{1}{2}|v_t|^2 \\&\ge (1-\nu _1^2)(1+t)^{-2}|v|^2 + \frac{1}{2}|v_t|^2 \\&\ge \frac{1}{2}(1-\nu _1^2)(1+t)^{-2}|v|^2 +\frac{1}{2N^2}(1-\nu _1^2)|\xi |^2|v|^2 +\frac{1}{2}|v_t|^2 \end{aligned}$$

and

$$\begin{aligned} |U(t,\xi )|^2&\le (1+t)^{-2}|v|^2 +2|v_t|^2 +2b(t)^2|v|^2 \\&\le (1+2\nu _1^2)(1+t)^{-2}|v|^2 +2|v_t|^2 \\&\le |\xi |^2|v|^2+(1+2\nu _1^2)(1+t)^{-2}|v|^2 +2|v_t|^2. \end{aligned}$$

On the other hand, in \(Z_I(N)\cup Z_H(N)\) with \(N\ge 2/3\) we have

$$\begin{aligned} |U(t,\xi )|^2&\ge \frac{1}{9N^2}|\xi |^2|v|^2 +|v_t|^2 + b(t)^2|v|^2 -\frac{36\nu _1^2}{36\nu _1^2+1}|v_t|^2 -\frac{36\nu _1^2+1}{36\nu _1^2}b(t)^2|v|^2 \\&= \frac{1}{18N^2}|\xi |^2|v|^2 +\frac{1}{36\nu _1^2+1}|v_t|^2 +\left( \frac{1}{18N^2}|\xi |^2 -\frac{1}{36\nu _1^2}b(t)^2 \right) |v|^2 \\&\ge \frac{1}{18N^2}|\xi |^2|v|^2 +\frac{1}{36\nu _1^2+1}|v_t|^2 +\frac{1}{36}(1+t)^{-2}|v|^2 \end{aligned}$$

and

$$\begin{aligned} |U(t,\xi )|^2&\le 2|\xi |^2|v|^2 +2|v_t|^2+2b(t)^2|v|^2 \\&\le 2|\xi |^2|v|^2 +2|v_t|^2+2\nu _1^2(1+t)^{-2}|v|^2. \end{aligned}$$

Thus (45) is proved. \(\square \)

For the micro-energy \(U=U(t,\xi )\) defined by (43) we define \(V=V(t,\xi )\) by

$$\begin{aligned} V(t,\xi ):=\eta (t)^{-1} U(t,\xi ). \end{aligned}$$
(46)

Then we have the following lemma.

Lemma 4

The vector V is a solution to the following first-order system : 

$$\begin{aligned} \partial _t V =A V, \,\,\, A=A(t,\xi )= \begin{pmatrix} \frac{h_t(t,\xi )}{h(t,\xi )} &{} h(t,\xi ) \\ -\frac{|\xi |^2}{h(t,\xi )} &{} -2b(t) \end{pmatrix}. \end{aligned}$$
(47)

Proof

The proof is straightforward from the definitions of U and \(\eta \) and from (36). \(\square \)

We shall consider the fundamental solution \(E=E(t,s,\xi )\) to (47), that is, the solution to

$$\begin{aligned} \partial _tE= A(t,\xi )E\,,\quad E(s,s,\xi )=I. \end{aligned}$$
(48)

4.2 Considerations in the pseudo-differential zone \(Z_\varPsi (N)\)

We shall prove the following statement.

Proposition 3

Assume Hypothesis 1. The fundamental solution to (48) satisfies the following estimate:

$$\begin{aligned} \Vert E(t,0,\xi )\Vert \lesssim (1+t)^{-2\nu } \end{aligned}$$

uniformly for \((t,\xi )\in Z_\varPsi \).

Proof

Let \((t,\xi )\in Z_{\varPsi }\), that is, \(0 \le t \le \theta _1(|\xi |)\). We consider (48) with

$$\begin{aligned} A= \begin{pmatrix} -(1+t)^{-1} &{} (1+t)^{-1} \\ -(1+t)|\xi |^2 &{} -2b(t) \end{pmatrix}. \end{aligned}$$

If we put \(E(t,0,\xi )=(e_{ij}(t,\xi ))_{i,j=1,2}\), then we can write for \(j=1,2\) the following system of coupled integral equations of Volterra type:

$$\begin{aligned} e_{1j}(t,\xi )= & {} (1+t)^{-1}\left( \delta _{1j}+\int ^t_0 e_{2j}(\tau ,\xi )\,{\hbox {d}}\tau \right) , \end{aligned}$$
(49)
$$\begin{aligned} e_{2j}(t,\xi )= & {} \eta (t)^{-2} \left( \delta _{2j}-|\xi |^2\int ^t_0 (1+\tau )\eta (\tau )^2 e_{1j}(\tau ,\xi )\,{\hbox {d}}\tau \right) . \end{aligned}$$
(50)

By substituting (50) into (49) and integrating by parts we get

$$\begin{aligned} e_{1j}(t,\xi )&=(1+t)^{-1} \left( \delta _{1j}+\delta _{2j}\int _0^t \eta (\tau )^{-2}\,{\hbox {d}}\tau \right) \\&\quad - |\xi |^2 (1+t)^{-1} \int _0^t \eta (\tau )^{-2} \int _0^\tau (1+s) \eta (s)^{2} e_{1j}(s,\xi )\,{\hbox {d}}s \,{\hbox {d}}\tau \\&=(1+t)^{-1} \left( \delta _{1j}+\delta _{2j}\int _0^t \eta (\tau )^{-2}\,{\hbox {d}}\tau \right) \\&\quad - |\xi |^2(1+t)^{-1} \int _0^t \left( \int _{\tau }^t \eta (s)^{-2}\,{\hbox {d}}s \right) (1+\tau )\eta (\tau )^{2} e_{1j}(\tau ,\xi ) \,{\hbox {d}}\tau . \end{aligned}$$

We define \(f_j(t,\xi )\) by

$$\begin{aligned} f_j(t,\xi ):= \eta (t)^{2}|e_{1j}(t,\xi )|. \end{aligned}$$

By (40), there exists a constant \(C \ge 1\) such that

$$\begin{aligned} C^{-1}(1+t)^{\nu } \le \eta (t) \le C(1+t)^{\nu }. \end{aligned}$$

Then we have

$$\begin{aligned} f_j(t,\xi )&\le (1+t)^{-1}\eta (t)^2 \left( 1+\int _0^t \eta (\tau )^{-2}\,{\hbox {d}}\tau \right) \\&\quad +|\xi |^2 (1+t)^{-1}\eta (t)^{2} \int _0^t \left( \int _{\tau }^t \eta (s)^{-2}\,{\hbox {d}}s \right) (1+\tau ) f_j(\tau ,\xi ) \,{\hbox {d}}\tau \\&\le C^2 (1+t)^{-1+2\nu } \left( 1+C^2 \int _0^t (1+\tau )^{-2\nu }\,{\hbox {d}}\tau \right) \\&\quad +C^4 |\xi |^2 (1+t)^{-1+2\nu } \int _0^t \left( \int _{\tau }^t (1+s)^{-2\nu }\,{\hbox {d}}s \right) (1+\tau ) f_j(\tau ,\xi ) \,{\hbox {d}}\tau \\&\le C^2 (1+t)^{-1+2\nu } \left( 1+\frac{C^2}{1-2\nu }(1+t)^{1-2\nu } \right) +\frac{C^4}{1-2\nu } |\xi |^2 \int _0^t (1+\tau ) f_j(\tau ,\xi ) \,{\hbox {d}}\tau \\&\le C_1 +C_2|\xi |^2 \int _0^t (1+\tau ) f_j(\tau ,\xi ) \,{\hbox {d}}\tau , \end{aligned}$$

where \(C_1=C^2+C^4/(1-2\nu )\) and \(C_2=C^4/(1-2\nu )\). By Gronwall’s inequality we conclude

$$\begin{aligned} f_j(t,\xi ) \le C_1 \exp \left( C_2|\xi |^2 \int _0^t (1+\tau ) \,{\hbox {d}}\tau \right) \le C_1 \exp \left( \frac{C_2 N^2}{2}\right) . \end{aligned}$$
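The final inequality is where the zone enters: in \(Z_\varPsi (N)\) one has \((1+t)|\xi |\le N\), so the exponent stays bounded uniformly:

```latex
% In the pseudodifferential zone (1+t)|xi| <= N, hence
|\xi|^2 \int_0^t (1+\tau)\,\mathrm{d}\tau
  \le \frac{|\xi|^2 (1+t)^2}{2}
  \le \frac{N^2}{2}.
```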

Thus we get \(|e_{1j}(t,\xi )| \lesssim \eta (t)^{-2}\). Moreover, by (50) we have

$$\begin{aligned} |e_{2j}(t,\xi )| \lesssim \eta (t)^{-2} \left( 1 + |\xi |^2 \int _0^t (1+\tau ) {\hbox {d}}\tau \right) \lesssim \eta (t)^{-2}. \end{aligned}$$

Summarizing the above estimates and using (40), we conclude the proof. \(\square \)

4.3 Considerations in the hyperbolic zone \(Z_H(N)\)

We shall prove the following statement.

Proposition 4

Assume Hypotheses 1, 2 and 3. The fundamental solution \(E(t,s,\xi )\) to (48) satisfies the following estimate:

$$\begin{aligned} \Vert E(t,s,\xi )\Vert \lesssim \frac{(1+s)^\nu }{(1+t)^\nu } \end{aligned}$$
(51)

uniformly for \((t,\xi ),(s,\xi )\in Z_H(N)\) with \(s \le t\) and \(N\ge \nu _1/\sqrt{2}\).

Proof

Estimate (51) is trivial for \(s \le t \le 2^{1/(\gamma -1)}-1\). Hence, we suppose that \(t \ge 2^{1/(\gamma -1)}-1\) from now on. Then we see that

$$\begin{aligned} N^{-1}(1+t)|\xi | \ge 2N^{-1}(1+t)^{2-\gamma }|\xi | \ge 2. \end{aligned}$$

It follows that the coefficient matrix of (48) takes the form

$$\begin{aligned} A= \begin{pmatrix} 0 &{} i|\xi | \\ i|\xi | &{} -2b(t) \end{pmatrix}. \end{aligned}$$

Let \(M_0\) be the diagonalizer of the principal part with respect to powers of \(|\xi |\) of A given by

$$\begin{aligned} M_0:=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 &{} -1 \\ 1 &{} 1 \end{pmatrix}. \end{aligned}$$
(52)

We put \(E_1=E_1(t,s,\xi ):=M_0^{-1}E(t,s,\xi )\). Then (48) is reduced to

$$\begin{aligned} \partial _t E_1=A_1 E_1, \quad E_1(s,s,\xi )=M_0^{-1}, \end{aligned}$$
(53)

where

$$\begin{aligned} A_1: = M_0^{-1}A M_0 =\begin{pmatrix} i|\xi |-b(t) &{} -b(t) \\ -b(t) &{} -i|\xi |-b(t) \end{pmatrix}. \end{aligned}$$
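This conjugation is a routine \(2\times 2\) computation; for the reader's convenience:

```latex
% With M_0 from (52), M_0^{-1} = (1/\sqrt{2}) [[1,1],[-1,1]], and
M_0^{-1}
\begin{pmatrix} 0 & i|\xi| \\ i|\xi| & -2b(t) \end{pmatrix}
M_0
= \begin{pmatrix} i|\xi| - b(t) & -b(t) \\ -b(t) & -i|\xi| - b(t) \end{pmatrix}.
```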

Let \(M_1=M_1(t,\xi )\) be the diagonalizer of the principal part of \(A_1\) given by

$$\begin{aligned} M_1: =\begin{pmatrix} 1 &{} -\frac{(A_1)_{12}}{(A_1)_{11}-(A_1)_{22}} \\ -\frac{(A_1)_{21}}{(A_1)_{22}-(A_1)_{11}} &{} 1 \end{pmatrix} =\begin{pmatrix} 1 &{} -\frac{i b(t)}{2|\xi |} \\ \frac{i b(t)}{2|\xi |} &{} 1 \end{pmatrix}. \end{aligned}$$

By (41) we have

$$\begin{aligned} \det M_1 = 1- \frac{b(t)^2}{4|\xi |^2} \ge 1- \frac{\nu _1^2}{4 N^2} \ge \frac{1}{2} \end{aligned}$$

for \((1+t)|\xi | \ge N\), that is, for \((t,\xi )\in Z_H(N)\cup Z_I(N)\) with \(N\ge \nu _1/\sqrt{2}\). It follows that \(M_1\) is invertible. Moreover, we have

$$\begin{aligned} \Vert M_1(t,\xi )\Vert \lesssim \max _{i,j=1,2}\{|(M_1)_{i,j}(t,\xi )|\} \le \max \left\{ 1,\frac{\nu _1}{2N}\right\} =1. \end{aligned}$$
(54)

We put

$$\begin{aligned} E_2=E_2(t,s,\xi ):=M_1^{-1}(t,\xi )E_1(t,s,\xi ). \end{aligned}$$

Then (53) is reduced to

$$\begin{aligned} \partial _t E_2 = A_2(t,\xi ) E_2, \quad E_2(s,s,\xi )=M_1^{-1}(s,\xi )M_0^{-1}, \end{aligned}$$
(55)

where

$$\begin{aligned} A_2=A_2(t,\xi ):=M_1^{-1} A_1 M_1 - M_1^{-1} (\partial _t M_1). \end{aligned}$$
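Here \(M_1^{-1}\) exists by the lower bound on \(\det M_1\), and the explicit formulas entering the computation of \(A_2\) are

```latex
% Inverse and time derivative of the diagonalizer M_1:
M_1^{-1} = \frac{1}{\det M_1}
\begin{pmatrix} 1 & \dfrac{i b(t)}{2|\xi|} \\ -\dfrac{i b(t)}{2|\xi|} & 1 \end{pmatrix},
\qquad
\partial_t M_1 =
\begin{pmatrix} 0 & -\dfrac{i b'(t)}{2|\xi|} \\ \dfrac{i b'(t)}{2|\xi|} & 0 \end{pmatrix}.
```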

Here we note the following representations:

$$\begin{aligned} (A_2)_{11} =\overline{(A_2)_{22}} =i|\xi |-b(t) -\frac{i}{\det M_1} \left( \frac{b(t)^2}{2|\xi |}+\frac{b(t)b'(t)}{4|\xi |^2}\right) \end{aligned}$$

and

$$\begin{aligned} (A_2)_{21} =\overline{(A_2)_{12}} =\frac{1}{\det M_1} \left( -\frac{b(t)^3}{4|\xi |^2}+\frac{i b'(t)}{2|\xi |}\right) . \end{aligned}$$

By (38), (41) and noting that \(\beta< 1 < \gamma \) we have

$$\begin{aligned} \begin{aligned} |(A_2)_{21}|&\lesssim |\xi |^{-1} (|\xi |^{-1}(1+t)^{-3} + (1+t)^{-2\beta }) \\&\lesssim |\xi |^{-1} ((1+t)^{-\gamma -1} + (1+t)^{-2\beta } ) \simeq |\xi |^{-1}(1+t)^{-2\beta } \end{aligned} \end{aligned}$$
(56)

in \(Z_H\). Moreover, for \((t,\xi ),(s,\xi )\in Z_H\) and \(s \le t\), by (36) we have

$$\begin{aligned} \left| \exp \left( \int ^t_s (A_{2})_{jj}(\tau ,\xi )\,{\hbox {d}}\tau \right) \right| =\exp \left( -\int ^t_s b(\tau )\,{\hbox {d}}\tau \right) =\frac{\eta (s)}{\eta (t)} \end{aligned}$$
(57)

for \(j=1,2\). We define \(\varPhi _2(t,s,\xi )\) by

$$\begin{aligned} \varPhi _2(t,s,\xi ):= \begin{pmatrix} \exp \left( \int ^t_s (A_{2})_{11}(\tau ,\xi )\,{\hbox {d}}\tau \right) &{} 0 \\ 0 &{} \exp \left( \int ^t_s (A_{2})_{22}(\tau ,\xi )\,{\hbox {d}}\tau \right) \end{pmatrix}, \end{aligned}$$

and we put \(E_3(t,s,\xi ):=\varPhi _2^{-1}(t,s,\xi )E_2(t,s,\xi )\). Then (55) is reduced to

$$\begin{aligned} \partial _t E_3 =R_3(t,s,\xi ) E_3,\quad E_3(s,s,\xi )=M_1^{-1}(s,\xi )M_0^{-1}, \end{aligned}$$
(58)

where

$$\begin{aligned} R_3(t,s,\xi )&:=\varPhi _2^{-1}(t,s,\xi )A_2(t,\xi )\varPhi _2(t,s,\xi ) -\varPhi _2^{-1}(t,s,\xi )(\partial _t \varPhi _2)(t,s,\xi ) \\&=\begin{pmatrix} 0 &{} \overline{r_3(t,s,\xi )} \\ r_3(t,s,\xi ) &{} 0 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} r_3(t,s,\xi ) =(A_2)_{21}(t,\xi )\exp \left( 2i\int _s^t \mathfrak {I}\{(A_2)_{11}(\tau ,\xi )\}\,{\hbox {d}}\tau \right) . \end{aligned}$$

Here \(E_3(t,s,\xi )\) can be represented as a Peano–Baker series in the form

$$\begin{aligned} E_3(t,s,\xi ) =&\, \left( I +\int _s^t R_3(t_1,s,\xi )\, {\hbox {d}}t_1 \right. \\&\left. +\sum _{k=2}^\infty \int _s^t \int _s^{t_1} \cdots \int _s^{t_{k-1}} R_3(t_1,s,\xi ) \cdots R_3(t_k,s,\xi )\, {\hbox {d}}t_k \cdots {\hbox {d}}t_1 \right) E_3(s,s,\xi ). \end{aligned}$$
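Each iterated integral above is taken over a simplex, and symmetrizing the domain of integration yields the classical factorial bound; this standard step is stated here for completeness:

```latex
% The k-th term integrates over the simplex s <= t_k <= ... <= t_1 <= t,
% which is a 1/k! fraction of the cube [s,t]^k, whence:
\left\| \int_s^t\!\int_s^{t_1}\!\cdots\int_s^{t_{k-1}}
  R_3(t_1,s,\xi)\cdots R_3(t_k,s,\xi)\,\mathrm{d}t_k\cdots\mathrm{d}t_1 \right\|
  \le \frac{1}{k!}\left(\int_s^t \Vert R_3(\tau,s,\xi)\Vert\,\mathrm{d}\tau\right)^{k},
```

so that summing over \(k\) gives \(\Vert E_3(t,s,\xi )\Vert \le \Vert E_3(s,s,\xi )\Vert \exp \left( \int ^t_s \Vert R_3(\tau ,s,\xi )\Vert \,{\hbox {d}}\tau \right) \).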

Therefore, by (56) and the equalities \(\Vert R_3(t,s,\xi )\Vert =|r_3(t,s,\xi )|=|(A_{2})_{21}(t,\xi )|\) there exists a positive constant C such that

$$\begin{aligned} \left\| E_3(t,s,\xi ) \right\|&\le \left\| E_3(s,s,\xi ) \right\| \exp \left( \int ^t_s \Vert R_3(\tau ,s,\xi )\Vert \,{\hbox {d}}\tau \right) \\&\lesssim \exp \left( C|\xi |^{-1} \int ^t_s (1+\tau )^{-2\beta } \,{\hbox {d}}\tau \right) . \end{aligned}$$

We define \(\phi (t,s):=\int ^t_s (1+\tau )^{-2\beta } \,{\hbox {d}}\tau \). We now estimate \(|\xi |^{-1}\phi (t,s)\) in \(Z_H(N)\). If \(\gamma < 2\), then by (15) we have \(2\beta -1\ge -\gamma +2>0\). It follows that

$$\begin{aligned} |\xi |^{-1}\phi (t,s) \le |\xi |^{-1}\phi (\infty ,\theta _2) = \frac{(1+\theta _2)^{-2\beta -\gamma +3}}{N(2\beta -1)} \le \frac{1}{N(2\beta -1)}. \end{aligned}$$

If \(\gamma > 2\) and \(\beta <1/2\), then we have

$$\begin{aligned} |\xi |^{-1}\phi (t,s)&\le |\xi |^{-1}\phi (\theta _2,0) \le \frac{|\xi |^{-1}(1+\theta _2)^{-2\beta +1}}{1-2\beta } \\&=\frac{(1+\theta _2)^{-2\beta -\gamma +3}}{N(1-2\beta )} \le \frac{1}{N(1-2\beta )}. \end{aligned}$$

If \(\gamma > 2\) and \(\beta \ge 1/2\), then by (15) we have

$$\begin{aligned} |\xi |^{-1}\phi (t,s)&\le |\xi |^{-1}\int ^{\theta _2}_0 (1+\tau )^{\gamma -3} \,{\hbox {d}}\tau \\&\le \frac{|\xi |^{-1}}{\gamma -2}(1+\theta _2)^{\gamma -2} =\frac{1}{N(\gamma -2)}. \end{aligned}$$

If \(\gamma = 2\) and \(|\xi |\le N\), then by (15) we have

$$\begin{aligned} |\xi |^{-1}\phi (t,s)&\le |\xi |^{-1}\phi (\infty ,\theta _2) =\frac{1}{2\beta -1}|\xi |^{-1}(1+\theta _2)^{-2\beta +1} = \frac{1}{N(2\beta -1)}. \end{aligned}$$

If \(\gamma = 2\) and \(|\xi |\ge N\), then by (15) we have

$$\begin{aligned} |\xi |^{-1}\phi (t,s) \le N^{-1}\phi (\infty ,0) = \frac{1}{N(2\beta -1)}. \end{aligned}$$

Thus we have \(\Vert E_3(t,s,\xi )\Vert \lesssim 1\) uniformly in \(Z_H\). Consequently, by (54) and (57) we obtain

$$\begin{aligned} \left\| E(t,s,\xi ) \right\|&=\left\| M_0 M_1(t,\xi ) \varPhi _2(t,s,\xi ) E_3(t,s,\xi ) \right\| \\&\simeq \left\| \varPhi _2(t,s,\xi ) E_3(t,s,\xi ) \right\| = \frac{\eta (s)}{\eta (t)}\left\| E_3(t,s,\xi ) \right\| \\&\lesssim \frac{\eta (s)}{\eta (t)}. \end{aligned}$$

Thus the proof of Proposition 4 is concluded by (40). \(\square \)

4.4 Considerations in the intermediate zone \(Z_I(N)\)

We shall prove the following statement.

Proposition 5

Assume Hypotheses 1 and 3. The fundamental solution \(E=E(t,s,\xi )\) to (48) satisfies the following estimate:

$$\begin{aligned} \Vert E(t,s,\xi )\Vert \lesssim \frac{(1+s)^\nu }{(1+t)^\nu } \end{aligned}$$
(59)

uniformly for \((t,\xi ),(s,\xi )\in Z_I\) with \(s \le t\).

We shall introduce some lemmas in order to prove Proposition 5.

Lemma 5

Estimate (59) holds uniformly for

$$\begin{aligned} \theta _1(|\xi |) \le s \le t \le \tilde{\theta }_1(|\xi |):=2\theta _1(|\xi |)+1. \end{aligned}$$

Proof

Let \(\theta _1 \le t \le \tilde{\theta }_1\), that is, \(1\le N^{-1}(1+t)|\xi | \le 2\). By Lemma 3 and the estimate

$$\begin{aligned} \left| \frac{h_t(t,\xi )}{h(t,\xi )}\right| \lesssim |\xi |^{-1}(|\xi |(1+t)^{-1}+|\xi |^2) \simeq (1+t)^{-1}, \end{aligned}$$

there exists a positive constant C such that

$$\begin{aligned} \Vert A(t,\xi )\Vert \le C(1+t)^{-1}. \end{aligned}$$

It follows that

$$\begin{aligned} \int ^t_s \Vert A(\tau ,\xi )\Vert \,{\hbox {d}}\tau \le C(1+\theta _1)^{-1}\int ^{\tilde{\theta }_1}_{\theta _1}\,{\hbox {d}}\tau =C \end{aligned}$$

for \(\theta _1 \le s \le t \le \tilde{\theta }_1\). Therefore, in the same way as in the estimate of \(E_3=E_3(t,s,\xi )\) in \(Z_H\), we have \(\Vert E(t,s,\xi )\Vert \lesssim 1\) uniformly for \((t,\xi ),(s,\xi )\in Z_I\). Moreover, by (40) and the estimates

$$\begin{aligned} 1 \le \frac{(1+t)^\nu }{(1+s)^\nu } \le \frac{\left( 1+\widetilde{\theta }_1\right) ^\nu }{(1+\theta _1)^\nu } = 2^\nu \end{aligned}$$

we obtain (59), which is what we wanted to prove. \(\square \)

We define \(B=B(t,\xi )\) by

$$\begin{aligned} B(t,\xi ):=\begin{pmatrix} 0 &{} i|\xi | \\ i|\xi | &{} -2\nu (1+t)^{-1} \end{pmatrix}. \end{aligned}$$

Let us consider the fundamental solution \(\mathcal {E}(t,s,\xi )\) to

$$\begin{aligned} \partial _t \mathcal {E}(t,s,\xi ) = B(t,\xi ) \mathcal {E}(t,s,\xi ),\;\; \mathcal {E}(s,s,\xi )=I. \end{aligned}$$
(60)

Then we have the following lemma.

Lemma 6

Assume Hypothesis 1. The fundamental solution to (60) satisfies the following estimates:

$$\begin{aligned} \Vert \mathcal { E}(t,s,\xi )\Vert \lesssim \frac{(1+s)^\nu }{(1+t)^\nu }\quad {\hbox {and}} \, \Vert \mathcal {E}^{-1}(t,s,\xi )\Vert \lesssim \frac{(1+t)^\nu }{(1+s)^\nu } \end{aligned}$$
(61)

uniformly for \((t,\xi ),(s,\xi )\in Z_I\).

Proof

We put \(\mathcal {E}_1=\mathcal { E}_1(t,s,\xi ):=M_0^{-1}\mathcal {E}(t,s,\xi )\), where \(M_0\) was defined in (52). Then (60) is reduced to

$$\begin{aligned} \partial _t \mathcal { E}_1 = B_1(t,\xi ) \mathcal { E}_1,\quad \mathcal {E}_1(s,s,\xi )=M_0^{-1}, \end{aligned}$$
(62)

where

$$\begin{aligned} B_1:=M_0^{-1}B M_0=\begin{pmatrix} i|\xi |-\nu (1+t)^{-1} &{} -\nu (1+t)^{-1} \\ -\nu (1+t)^{-1} &{} -i|\xi |-\nu (1+t)^{-1} \end{pmatrix}. \end{aligned}$$

Let \(\widetilde{M}_1\) be the diagonalizer of the principal part of \(B_1\) given by

$$\begin{aligned} \widetilde{M}_1:= \begin{pmatrix} 1 &{} -\frac{(B_1)_{12}}{(B_1)_{11}-(B_1)_{22}} \\ -\frac{(B_1)_{21}}{(B_1)_{22}-(B_1)_{11}} &{} 1 \end{pmatrix} =\begin{pmatrix} 1 &{} -\frac{i\nu (1+t)^{-1}}{2|\xi |} \\ \frac{i\nu (1+t)^{-1}}{2|\xi |} &{} 1 \end{pmatrix}. \end{aligned}$$

Here we see that \(\widetilde{M}_1\) is invertible and \(\det \widetilde{M}_1\ge 1/2\) in \(Z_H(N) \cup Z_I(N)\) with \(N\ge \nu /\sqrt{2}\). We put \(\mathcal {E}_2=\mathcal {E}_2(t,s,\xi ):=\widetilde{M}_1^{-1}(t,\xi )\mathcal {E}_1(t,s,\xi )\). Then (62) is reduced to

$$\begin{aligned} \partial _t \mathcal {E}_2 = B_2(t,\xi ) \mathcal { E}_2,\quad \mathcal { E}_2(s,s,\xi )=\widetilde{M}_1^{-1}(s,\xi )M_0^{-1}, \end{aligned}$$
(63)

where

$$\begin{aligned} B_2=B_2(t,\xi )&:=\widetilde{M}_1^{-1} B_1 \widetilde{M}_1 -\widetilde{M}_1^{-1} (\partial _t \widetilde{M}_1),\\ (B_2)_{11}&=\overline{(B_2)_{22}} =i|\xi |-\nu (1+t)^{-1} -\frac{i}{\det \widetilde{M}_1} \left( \frac{\nu ^2(1+t)^{-2}}{2|\xi |}-\frac{\nu ^2(1+t)^{-3}}{4|\xi |^2}\right) \end{aligned}$$

and

$$\begin{aligned} (B_2)_{21} =\overline{(B_2)_{12}} =\frac{1}{\det \widetilde{M}_1} \left( -\frac{\nu ^3(1+t)^{-3}}{4|\xi |^2}-\frac{i \nu (1+t)^{-2}}{2|\xi |}\right) . \end{aligned}$$

Here we note that for \(\theta _1\le s < t\) we have

$$\begin{aligned} \left| \exp \left( \int ^t_s (B_{2})_{jj}(\tau ,\xi )\,{\hbox {d}}\tau \right) \right| =\frac{(1+s)^{\nu }}{(1+t)^{\nu }} \end{aligned}$$
(64)

for \(j=1,2\) and

$$\begin{aligned} |(B_2)_{21}| \lesssim |\xi |^{-2}(1+t)^{-3}+|\xi |^{-1}(1+t)^{-2} \lesssim |\xi |^{-1}(1+t)^{-2}. \end{aligned}$$

We define \(\widetilde{\varPhi }_2(t,s,\xi )\) by

$$\begin{aligned} \widetilde{\varPhi }_2(t,s,\xi ):= \begin{pmatrix} \exp \left( \int ^t_s (B_{2})_{11}(\tau ,\xi )\,{\hbox {d}}\tau \right) &{} 0 \\ 0 &{} \exp \left( \int ^t_s (B_{2})_{22}(\tau ,\xi )\,{\hbox {d}}\tau \right) \end{pmatrix}, \end{aligned}$$

and we put \(\mathcal {E}_3(t,s,\xi ) :=\widetilde{\varPhi }_2^{-1}(t,s,\xi )\mathcal {E}_2(t,s,\xi )\). Then (63) is reduced to

$$\begin{aligned} \partial _t \mathcal {E}_3 =\widetilde{R}_3(t,s,\xi ) \mathcal {E}_3,\quad \mathcal {E}_3(s,s,\xi )=\widetilde{M}_1^{-1}(s,\xi )M_0^{-1}, \end{aligned}$$
(65)

where

$$\begin{aligned} \widetilde{R}_3(t,s,\xi )&:=\widetilde{\varPhi }_2^{-1}(t,s,\xi )B_2(t,\xi )\widetilde{\varPhi }_2(t,s,\xi ) -\widetilde{\varPhi }_2^{-1}(t,s,\xi )(\partial _t \widetilde{\varPhi }_2)(t,s,\xi ) \\&=\begin{pmatrix} 0 &{} \overline{\widetilde{r}_3(t,s,\xi )} \\ \widetilde{r}_3(t,s,\xi ) &{} 0 \end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned} \widetilde{r}_3(t,s,\xi ) =(B_2)_{21}(t,\xi )\exp \left( 2i\int _s^t \mathfrak {I}\{(B_2)_{11}(\tau ,\xi )\}\,{\hbox {d}}\tau \right) . \end{aligned}$$

Therefore, there exists a positive constant C such that

$$\begin{aligned} \left\| \mathcal {E}_3(t,s,\xi ) \right\|&\le \left\| \mathcal {E}_3(s,s,\xi ) \right\| \exp \left( \int ^t_s \Vert \widetilde{R}_3(\tau ,s,\xi )\Vert \,{\hbox {d}}\tau \right) \\&\lesssim \exp \left( C|\xi |^{-1} \int ^\infty _{\theta _1} (1+\tau )^{-2} \,{\hbox {d}}\tau \right) \\&= \exp \left( C|\xi |^{-1} (1+\theta _1)^{-1}\right) \lesssim 1. \end{aligned}$$

Consequently, by (64) we have

$$\begin{aligned} \Vert \mathcal {E}(t,s,\xi )\Vert \simeq \big \Vert \widetilde{\varPhi }_2(t,s,\xi ) \mathcal {E}_3(t,s,\xi )\big \Vert \lesssim \frac{(1+s)^\nu }{(1+t)^\nu }. \end{aligned}$$

Analogously, by the equality

$$\begin{aligned} \left| \exp \left( -\int ^t_s (B_{2})_{jj}(\tau ,\xi )\,{\hbox {d}}\tau \right) \right| =\frac{(1+t)^\nu }{(1+s)^\nu } \end{aligned}$$
(66)

we have \(\Vert \mathcal { E}^{-1}(t,s,\xi )\Vert \lesssim (1+t)^\nu /(1+s)^\nu \). Thus the proof is complete. \(\square \)

Proof of Proposition 5

For \(\tilde{\theta }_1 \le s \le t \le \theta _2\) we consider the fundamental solution \(E=E(t,s,\xi )\) to (48) with

$$\begin{aligned} A= \begin{pmatrix} 0 &{} i|\xi | \\ i|\xi | &{} -2b(t) \end{pmatrix}. \end{aligned}$$

We shall prove the equivalence between \(\Vert E(t,s,\xi )\Vert \) and \(\Vert \mathcal {E}(t,s,\xi )\Vert \) by using the stabilization condition in Hypothesis 3. We define \(\varLambda _0=\varLambda _0(t)\) by

$$\begin{aligned} \varLambda _0(t) :=\begin{pmatrix} 1 &{} 0 \\ 0 &{} \psi (t) \end{pmatrix}, \quad \psi (t)=\exp \left( 2\int ^\infty _t \sigma (\tau )\,{\hbox {d}}\tau \right) . \end{aligned}$$

We put \(\widetilde{E}_1=\widetilde{E}_1(t,s,\xi ):=\varLambda _0^{-1}(t)E(t,s,\xi )\). Then (48) is reduced to

$$\begin{aligned} \partial _t \widetilde{E}_1=\widetilde{A}_1(t,\xi )\widetilde{E}_1, \;\; \widetilde{E}_1(s,s,\xi )=\varLambda _0^{-1}(s), \end{aligned}$$
(67)

where

$$\begin{aligned} \widetilde{A}_1 :=\begin{pmatrix} 0 &{} i|\xi |\psi (t) \\ i|\xi |\psi (t)^{-1} &{} -2\nu (1+t)^{-1} \end{pmatrix}. \end{aligned}$$
(68)
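Representation (68) follows from \(\widetilde{A}_1=\varLambda _0^{-1}A\varLambda _0-\varLambda _0^{-1}(\partial _t \varLambda _0)\). The sketch below assumes the decomposition \(b(t)=\nu (1+t)^{-1}+\sigma (t)\) from the stabilization condition, which is how \(\sigma \) enters:

```latex
% psi'(t) = -2 sigma(t) psi(t), so the derivative term is diagonal:
\varLambda_0^{-1} A \varLambda_0
  = \begin{pmatrix} 0 & i|\xi|\,\psi(t) \\ i|\xi|\,\psi(t)^{-1} & -2b(t) \end{pmatrix},
\qquad
\varLambda_0^{-1}(\partial_t \varLambda_0)
  = \begin{pmatrix} 0 & 0 \\ 0 & -2\sigma(t) \end{pmatrix},
```

and the \((2,2)\) entry of \(\widetilde{A}_1\) is therefore \(-2b(t)+2\sigma (t)=-2\nu (1+t)^{-1}\).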

We put \(\widetilde{E}_2=\widetilde{E}_2(t,s,\xi ):=\mathcal {E}^{-1}(t,s,\xi )\widetilde{E}_1(t,s,\xi )\). Then (67) is reduced to

$$\begin{aligned} \partial _t \widetilde{E}_2 = \widetilde{A}_2(t,s,\xi ) \widetilde{E}_2, \;\; \widetilde{E}_2(s,s,\xi )=\varLambda _0^{-1}(s), \end{aligned}$$
(69)

where

$$\begin{aligned} \widetilde{A}_2(t,s,\xi )&:=\mathcal { E}^{-1}(t,s,\xi )(\widetilde{A}_1(t,\xi )-B(t,\xi ))\mathcal { E}(t,s,\xi ) \\&=\mathcal {E}^{-1}(t,s,\xi ) \begin{pmatrix} 0 &{} i|\xi |(\psi (t)-1) \\ i|\xi |(\psi (t)^{-1}-1) &{} 0 \end{pmatrix} \mathcal { E}(t,s,\xi ). \end{aligned}$$

By (39) we have \(\lim _{t\rightarrow \infty }\psi (t)=1\). It follows that

$$\begin{aligned} \left| \psi (t)-1\right| \lesssim \left| 2\int ^\infty _t \sigma (\tau )\,{\hbox {d}}\tau \right| \lesssim (1+t)^{-\gamma +1}. \end{aligned}$$
(70)

Analogously, we have

$$\begin{aligned} \left| \psi (t)^{-1}-1\right| \lesssim (1+t)^{-\gamma +1}. \end{aligned}$$
(71)

Therefore, by Lemma 6, (70) and (71) we have

$$\begin{aligned} \int ^t_s \big \Vert \widetilde{A}_2(\tau ,s,\xi )\big \Vert \,{\hbox {d}}\tau \lesssim |\xi |\widetilde{\phi }(t,s), \quad \widetilde{\phi }(t,s):=\int ^{t}_{s} (1+\tau )^{-\gamma +1} \,{\hbox {d}}\tau . \end{aligned}$$

If \(\gamma <2\), then we have

$$\begin{aligned} |\xi |\widetilde{\phi }(t,s) \le |\xi |\widetilde{\phi }(\theta _2,0) \le \frac{|\xi |(1+\theta _2)^{2-\gamma }}{2-\gamma } =\frac{N}{2-\gamma }. \end{aligned}$$

If \(\gamma >2\) and \(|\xi |\le N\), then we have

$$\begin{aligned} |\xi |\widetilde{\phi }(t,s) \le N\widetilde{\phi }(\infty ,0) =\frac{N}{\gamma -2}. \end{aligned}$$

If \(\gamma >2\) and \(|\xi |\ge N\), then we have

$$\begin{aligned} |\xi |\widetilde{\phi }(t,s) \le |\xi |\widetilde{\phi }(\infty ,\theta _2) =\frac{N}{\gamma -2}. \end{aligned}$$

If \(\gamma =2\), then we have

$$\begin{aligned} |\xi |\int ^{t}_{s} (1+\tau )^{-1} \,{\hbox {d}}\tau&\le |\xi |\int ^{t}_{s} (1+\tau )^{2\beta -2} \,{\hbox {d}}\tau \le |\xi |\int ^{\theta _2}_{\theta _1} (1+\tau )^{2\beta -2} \,{\hbox {d}}\tau \\&\le \frac{|\xi |(1+\theta _2)^{2\beta -1}}{2\beta -1} =\frac{N}{2\beta -1}. \end{aligned}$$

Therefore, in the same way as in the estimate of \(E_3=E_3(t,s,\xi )\) in \(Z_H\) we have

$$\begin{aligned} \big \Vert \widetilde{E}_2(t,s,\xi )\big \Vert \lesssim \big \Vert \widetilde{E}_2(s,s,\xi )\big \Vert \lesssim 1. \end{aligned}$$

Consequently, by Lemma 6 we have

$$\begin{aligned} \Vert E(t,s,\xi )\Vert =\big \Vert \varLambda _0(t) \mathcal {E}(t,s,\xi ) \widetilde{E}_2(t,s,\xi ) \big \Vert \lesssim \frac{(1+s)^\nu }{(1+t)^\nu }. \end{aligned}$$

Thus the proof of Proposition 5 is concluded. \(\square \)

5 Proof of Theorem 1

5.1 In the case \(\gamma \le 2\)

Let \(\gamma \le 2\). We note that if \(\gamma \le 2\), then \(Z_\varPsi (N) \cup Z_I(N) \subset \{(t,\xi )\,;\, |\xi | \le N\}\). If \((t,\xi )\in Z_\varPsi (N)\), then by Lemma 3, Proposition 3 and (40) we have

$$\begin{aligned}&|\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2 + p(t)|v(t,\xi )|^2 \\&\le (1+t)^{2\nu } (|\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2 + (1+t)^{-2}|v(t,\xi )|^2) \\&\simeq (1+t)^{2\nu } |U(t,\xi )|^2 \simeq (1+t)^{4\nu } \left| V(t,\xi )\right| ^2 \\&=(1+t)^{4\nu } \left| E(t,0,\xi )V(0,\xi )\right| ^2 \\&\lesssim \left| V(0,\xi )\right| ^2. \end{aligned}$$

Moreover, by Propositions 3, 4 and 5 we have

$$\begin{aligned} \Vert E(t,0,\xi )\Vert =\Vert E(t,\theta _1,\xi ) E(\theta _1,0,\xi )\Vert \lesssim (1+t)^{-\nu }(1+\theta _1)^{-\nu } \end{aligned}$$

for \((t,\xi )\in Z_I(N)\),

$$\begin{aligned} \Vert E(t,0,\xi )\Vert =\Vert E(t,\theta _2,\xi )E(\theta _2,\theta _1,\xi )E(\theta _1,0,\xi )\Vert \lesssim (1+t)^{-\nu }(1+\theta _1)^{-\nu } \end{aligned}$$

for \((t,\xi )\in Z_H(N)\cap \{(t,\xi )\,;\,|\xi |\le N\}\) and

$$\begin{aligned} \Vert E(t,0,\xi )\Vert \lesssim (1+t)^{-\nu } \end{aligned}$$

for \((t,\xi )\in Z_H(N)\cap \{(t,\xi )\,;\,|\xi |\ge N\}\). Therefore, if \((t,\xi )\in Z_I \cup (Z_H(N)\cap \{(t,\xi )\,;\,|\xi |\le N\})\), then we have

$$\begin{aligned} |\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2&\lesssim |U(t,\xi )|^2 \simeq (1+t)^{2\nu } \left| V(t,\xi )\right| ^2 \\&=(1+t)^{2\nu } \left| E(t,0,\xi )V(0,\xi )\right| ^2 \\&\lesssim (1+\theta _1)^{-2\nu } \left| V(0,\xi )\right| ^2. \end{aligned}$$

Hence,

$$\begin{aligned} p(t)|v(t,\xi )|^2&\le p(\theta _1)|v(t,\xi )|^2 = N^{-2} (1+\theta _1)^{2\nu } |\xi |^2|v(t,\xi )|^2 \\&\lesssim |V(0,\xi )|^2. \end{aligned}$$

On the other hand, if \((t,\xi )\in Z_H(N)\cap \{(t,\xi )\,;\,|\xi |\ge N\}\), then we have

$$\begin{aligned} |\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2&\lesssim \left| V(0,\xi )\right| ^2, \end{aligned}$$

and thus

$$\begin{aligned} p(t)|v(t,\xi )|^2 \le |v(t,\xi )|^2 \le N^{-2} |\xi |^2|v(t,\xi )|^2 \lesssim |V(0,\xi )|^2. \end{aligned}$$

Consequently, the following estimate is established uniformly with respect to \((t,\xi )\):

$$\begin{aligned} |\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2 + p(t)|v(t,\xi )|^2 \lesssim |V(0,\xi )|^2. \end{aligned}$$
(72)

Parseval’s identity and (2) conclude energy estimate (4).

5.2 In the case \(\gamma >2\)

Let \(\gamma >2\). We note that estimate (72) in \(Z_\varPsi (N) \cup (Z_I(N) \cap \{(t,\xi )\,;\, |\xi |\le N\})\) is proved in exactly the same way as in the proof for \(\gamma \le 2\). By Propositions 4 and 5 we have

$$\begin{aligned} \Vert E(t,0,\xi )\Vert =\Vert E(t,\theta _2,\xi ) E(\theta _2,0,\xi )\Vert \lesssim (1+t)^{-\nu } \end{aligned}$$

for \((t,\xi )\in Z_I(N)\cap \{(t,\xi )\,;\,|\xi |\ge N\}\), and

$$\begin{aligned} \Vert E(t,0,\xi )\Vert \lesssim (1+t)^{-\nu } \end{aligned}$$

for \((t,\xi )\in Z_H(N)\). Therefore, analogously to the corresponding estimate in the case \(\gamma \le 2\) we have

$$\begin{aligned} |\xi |^2 |v(t,\xi )|^2 + |v_t(t,\xi )|^2 \lesssim \left| V(0,\xi )\right| ^2 \end{aligned}$$

for \((t,\xi )\in (Z_I(N)\cap \{(t,\xi )\,;\,|\xi |\ge N\}) \cup Z_H(N)\), and thus

$$\begin{aligned} p(t)|v(t,\xi )|^2 \lesssim |V(0,\xi )|^2. \end{aligned}$$

Consequently, estimate (72) is established uniformly with respect to \((t,\xi )\), and thus energy estimate (4) is concluded.

6 Concluding remarks

Klein–Gordon equation (1) can be identified with dissipative wave equation (6) by means of (7). Hence, one may expect that the previous results for (6) and (8) transfer directly to Klein–Gordon equations. However, such a procedure is not straightforward. Indeed, it is not easy to see whether the corresponding oscillation and stabilization conditions on b, which were introduced in the previous papers, are satisfied by the solution of nonlinear equation (7).

The optimality of assumption (15) is an open problem. If we succeed in reducing our problem to the previous results for dissipative wave equation (6) in [8] or for the wave equation with variable propagation speed (8) in [9], we may expect that if \(\delta \in C^m([0,\infty ))\) with \(m\ge 0\), then assumption (15) can be weakened to

$$\begin{aligned} \beta \ge \frac{1}{2}\left( -\gamma +3 - \frac{m(\gamma -1)}{m+2}\right) \end{aligned}$$
(73)

under suitable assumptions on the derivatives \(\delta ^{(k)}(t)\) (\(k=1,\ldots ,m\)). Moreover, we may also expect that estimate (4) does not hold in general if \(\beta < -\gamma +2\).

In [6] the authors considered the following general model of the coefficient in the potential:

$$\begin{aligned} M(t) =\frac{\mu ^2}{g(t)^2 (1+t)^2 } \end{aligned}$$
(74)

with a non-effective potential having very slow oscillations. One may expect to extend (74) to very fast oscillations. However, the argument in the proof of Theorem 1 applies only in the case \(g(t)\equiv 1\).