1 Introduction and formulation of the main results

We consider the following equation:

$$ \rho (x ) \bigl(\rho (x )y^{ (k )} \bigr)^{{'} } +\sum_{j=0}^{k-1}r_{j} (x )y^{ (k-j )}+r_{k}(x) y=f (x ), $$
(1)

where \(k=1, 2, \ldots \) , \(x\in {R}= (-\infty ,+\infty )\), and \(f (x )\in L_{2} :=L_{2} (R )\). In what follows, we assume that \(\rho (x )>0\) is (\(k+1 \)) times continuously differentiable, \(r_{j}=r_{j} (x )\) (\(j=\overline{1,k-1}\)) is (\(k-j \)) times continuously differentiable, and \(r_{k}=r_{k} (x )\) is a continuous function. Equation (1) is posed on the whole axis, and its coefficients may be unbounded functions; hence it is a singular differential equation.

Let \(L_{0} \) be a differential operator from \(C_{{\mathrm{0}}}^{ (k+1 )} (R )\) to \(L_{2} \), which is defined by the following formula:

$$ L_{0} y=\rho (x ) \bigl(\rho (x )y^{ (k )} \bigr)^{{'} } +\sum_{j=0}^{k-1}r_{j} (x )y^{ (k-j )} +r_{k} (x )y . $$

Since the coefficients ρ and \(r_{j} \) (\(j=\overline{0,k} \)) are smooth functions, the operator \(L_{0} \) is a closable operator (see [1, Sect. 6 of Chap. 2]). We denote by L the closure of \(L_{0} \).

A function \(y (x )\) is called a solution of differential equation (1) if there exists a sequence \(\{ y_{m} (x) \} _{m=1}^{\infty } \subseteq C_{0}^{ (k+1 )} (R )\) such that \(y_{m} \to y\) and \(Ly_{m} \to f\) in the norm of \(L_{2} \) as \(m\to \infty \). It is clear that \(y\in L_{2} \).

The following more general equation

$$ \rho _{0} (x )y^{ (k+1 )} +\tilde{r}_{0} (x )y^{ (k )} +\sum_{j=1}^{k-1}r_{j} y^{ (k-j )} +r_{k} (x )y=f (x ), \quad \rho _{0} (x )>0, $$

can be reduced to equation (1). Indeed, setting \(\rho (x )=\sqrt{\rho _{0} (x )} \) and \(r_{0} =\tilde{r}_{0} -\rho \rho '\), we obtain

$$\begin{aligned}& \rho _{0} (x )y^{ (k+1 )} +\tilde{r}_{0} (x )y^{ (k )} +\sum_{j=1}^{k-1}r_{j} (x )y^{ (k-j )} +r_{k} (x )y \\& \quad =\rho (x ) \bigl(\rho (x )y^{ (k )} \bigr)^{{'} } + \bigl( \tilde{r}_{0} (x )-\rho (x )\rho ' (x ) \bigr)y^{ (k )} +\sum_{j=1}^{k-1}r_{j} (x )y^{ (k-j )} +r_{k} (x )y \\& \quad =\rho (x ) \bigl(\rho (x )y^{ (k )} \bigr)^{{'} } +\sum _{j=0}^{k-1}r_{j} (x )y^{ (k-j )} +r_{k} (x )y , \end{aligned}$$

which is the left-hand side of equation (1).
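For instance (a purely illustrative choice, not used elsewhere in the paper), if \(\rho _{0} (x )= (1+x^{2} )^{2} \), then

$$ \rho (x )=1+x^{2} ,\qquad r_{0} (x )=\tilde{r}_{0} (x )-2x \bigl(1+x^{2} \bigr), $$

since \(\rho (x )\rho ' (x )=2x (1+x^{2} )\), and the remaining coefficients are unchanged.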

We study equation (1) in the case that the intermediate coefficients \(r_{j} (x )\) (\(j=\overline{1,k-1}\)) may be unbounded and their growth does not depend on the extreme coefficients \(\rho _{0} (x )\) and \(r_{k} (x )\). The case where the \(r_{j} (x )\) (\(j=\overline{1,k-1}\)) are bounded, or are unbounded but controlled by the extreme coefficients \(\rho _{0} (x )\) and \(r_{k} (x )\), has been studied systematically; for more details, see [2–4].

A number of problems in stochastic analysis and stochastic differential equations lead to singular elliptic equations, ordinary differential equations, and systems of such equations with unbounded intermediate coefficients. Typical representatives are the stationary Ornstein–Uhlenbeck equation (see [5]) and the Fokker–Planck–Kolmogorov equation (see [6]). In the case \(k=1\), equation (1) is the simplest model of the Brownian motion of particles with a covariance determined by the function \(\rho (x )\), and \(r_{0} (x )\) is called the drift coefficient.

For applications of equation (1) to various practical processes, it is important to investigate the well-posedness of equation (1) with coefficients \(\rho (x )\) and \(r_{j} (x )\) (\(j=\overline{0,k-1} \)) from wider classes. In the case that the intermediate coefficients do not depend on the potential and the diffusion coefficient and can grow linearly, the well-posedness of second-order singular elliptic equations was studied in [7–10]. Well-posedness conditions for second-order and third-order one-dimensional differential equations with rapidly growing intermediate coefficients were obtained in [11–16]. However, in [11–16] a weak-oscillation condition is imposed on the intermediate and leading coefficients. In this paper, we obtain sufficient conditions for the existence and uniqueness of a solution \(y (x )\) of (1). Moreover, for the solution we prove the following estimate:

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} +\sum_{j=1}^{k} \bigl\Vert r_{j} y^{ (k-j )} \bigr\Vert _{2} \le C \Vert f \Vert _{2} . $$

Using this estimate, we obtained compactness conditions for operators \(\theta (x )L^{-1} \) and \(\theta (x )\frac{d^{\alpha } }{dx^{\alpha } } L^{-1} \) (\(\alpha =\overline{1,k-1}\)).

The difference between this result and the results in [7–16] is that equation (1) is of higher order, the coefficients \(r_{j} (x )\) (\(j=\overline{0,k-1} \)) can grow rapidly, and all coefficients can be fluctuating (see Example 4.1). In addition, the leading coefficient can tend to zero at infinity; in other words, certain degenerate equations are covered. Note that criteria for the existence of positive periodic solutions of differential equations with an indefinite singularity and of pseudo almost periodic solutions of iterative functional differential equations were obtained in [17] and [18], respectively.

We introduce the following notation:

$$\begin{aligned}& T_{u,v,s} (x )= \biggl[ \int _{0}^{x}u^{2} (t)\,dt \biggr]^{1/2} \biggl[ \int _{x}^{+\infty }t^{2(s-1)} v^{-2} (t)\,dt \biggr]^{1/2} ,\quad x>0, \\& M_{u,v,s} (\tau )= \biggl[ \int _{\tau }^{0}u^{2} (t)\,dt \biggr]^{1/2} \biggl[ \int _{-\infty }^{\tau }t^{2(s-1)} v^{-2} (t)\,dt \biggr]^{1/2} ,\quad \tau < 0, \\& \gamma _{u,v,s} =\max \Bigl(\sup_{x>0} T_{u,v,s} (x ), \sup_{\tau < 0} M_{u,v,s} (\tau ) \Bigr), \end{aligned}$$

where \(u=u (x )\) and \(v=v (x )\ne 0\) are real continuous functions and s is a positive integer. The following statements are the main results of this paper.
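Before stating them, note that conditions of the form \(\gamma _{u,v,s} <\infty \) below are checked by estimating the two suprema above. As a purely illustrative aid (not part of the paper's argument), the following Python sketch estimates \(T_{u,v,s}\), \(M_{u,v,s}\), and \(\gamma _{u,v,s}\) numerically for an assumed sample weight \(r_{0} (x )=1+x^{4} \); the truncation bounds and the grid are arbitrary choices of the sketch.

```python
import numpy as np
from scipy.integrate import quad

# Purely illustrative sketch (not from the paper): estimate T_{u,v,s}(x),
# M_{u,v,s}(tau), and gamma_{u,v,s} numerically for sample weights u, v.

def T(u, v, s, x, upper=1.0e3):
    """Approximate T_{u,v,s}(x) for x > 0; the improper integral is truncated at `upper`."""
    left, _ = quad(lambda t: u(t) ** 2, 0.0, x)
    right, _ = quad(lambda t: t ** (2 * (s - 1)) / v(t) ** 2, x, upper, limit=200)
    return np.sqrt(left * right)

def M(u, v, s, tau, lower=-1.0e3):
    """Approximate M_{u,v,s}(tau) for tau < 0; the improper integral is truncated at `lower`."""
    left, _ = quad(lambda t: u(t) ** 2, tau, 0.0)
    right, _ = quad(lambda t: t ** (2 * (s - 1)) / v(t) ** 2, lower, tau, limit=200)
    return np.sqrt(left * right)

# Sample weights (assumed for this illustration): u = 1 and v = sqrt(r0), r0(x) = 1 + x^4.
u = lambda t: 1.0
v = lambda t: np.sqrt(1.0 + t ** 4)
s = 2

xs = np.linspace(0.1, 50.0, 200)
gamma = max(max(T(u, v, s, x) for x in xs), max(M(u, v, s, -x) for x in xs))
print(f"numerical estimate of gamma_(1, sqrt(r0), {s}) over the grid: {gamma:.4f}")
```

In this sample the product of the two bracketed integrals stays bounded in x, so the grid maximum stabilizes, which is the numerical signature of \(\gamma _{u,v,s} <\infty \).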

Theorem 1.1

Let \(\rho (x )>0\) be (\(k+1\)) times and \(r_{j} (x )\) (\(j=\overline{1,k-1}\)) be (\(k-j\)) times continuously differentiable, let \(r_{k} (x )\) be a continuous function, and let the following conditions be satisfied:

(a) \(r_{0} \ge 1\), \(\gamma _{1,\sqrt{r_{0} } ,k} <\infty \);

(b) there exists a point \(x_{0} \in R\) such that \(\sup_{x< x_{0} } [\rho (x ) \exp \int _{x_{0} }^{x} \frac{r_{0} (t )}{\rho ^{2} (t )}\,dt ]< \infty \);

(c) \(\max_{j=\overline{1,k}} \gamma _{r_{j} ,\sqrt{r_{0} } ,j} <\infty \).

Then, for any \(f\in L_{2} \), equation (1) has a unique solution y and

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} +\sum_{j=1}^{k-1} \bigl\Vert r_{j} y^{ (k-j )} \bigr\Vert _{2} + \Vert r_{k} y \Vert _{2} \le C \Vert f \Vert _{2} $$
(2)

holds, where \(\Vert \cdot \Vert _{2} \) is the norm of \(L_{2} \).

Changing the variable \(x\) to \(-x\) in Theorem 1.1, we obtain the following result.

Theorem 1.2

Let \(\rho (x )>0\) be (\(k+1\)) times and \(r_{j} (x )\) (\(j=\overline{1,k-1}\)) be (\(k-j\)) times continuously differentiable, let \(r_{k} (x )\) be a continuous function, and let the following conditions be satisfied:

(a) \(r_{0} (x)\le -1\), \(\gamma _{1,\sqrt{|r_{0} |} ,k} <\infty \);

(b) there exists a point \(x_{1} \in R\) such that \(\sup_{x>x_{1} } [\rho (x ) \exp \int _{x_{1} }^{x} \frac{r_{0} (t )}{\rho ^{2} (t )}\,dt ]< \infty \);

(c) \(\max_{j=\overline{1,k}} \gamma _{r_{j} ,\sqrt{|r_{0} |} ,j} <\infty \).

Then, for any \(f\in L_{2} \), equation (1) has a unique solution y and

$$ \bigl\Vert \sqrt{ \vert r_{0} \vert } y^{ (k )} \bigr\Vert _{2} +\sum _{j=1}^{k-1} \bigl\Vert r_{j} y^{ (k-j )} \bigr\Vert _{2} + \Vert r_{k} y \Vert _{2} \le C \Vert f \Vert _{2} $$
(3)

holds.

Theorem 1.3

Suppose that the coefficients ρ and \(r_{j}\) (\(j=\overline{0,k} \)) satisfy the conditions of Theorem 1.1 or Theorem 1.2, the function \(\theta (x )\) is continuous, and for some \(j\in [1, k ]\) the equality

$$ \max \Bigl(\lim_{x\to +\infty } T_{\theta ,\sqrt{|r_{0} |} , j} (x ), \lim_{\tau \to -\infty } M_{\theta ,\sqrt{|r_{0} |} , j} (\tau ) \Bigr)=0 $$
(4)

holds. Then \(\theta (x)\frac{d^{k-j} }{dx^{k-j} } L^{-1} \) (\(\theta (x)L^{-1} \) for \(j=k\)) is a compact operator in \(L_{2} \).

2 Some auxiliary statements

Let \(C_{+}^{(s)} (0,+\infty )= \{ m(x)\in C^{ (s )} (0,+\infty ): \exists \tau >0: m (x )=0 \ \forall x>\tau \} \) (\(s\in N \)). The next lemma is a particular case of Theorem 2.1 in [19].

Lemma 2.1

Suppose that the functions \(g(x)\), \(v(x)\ne 0\) (\(x>0\)) are continuous, and for a natural number s,

$$ \tilde{T}_{g,v,s} =\sup_{x>0} T_{g,v,s} (x)< \infty $$

holds. Then, for any \(y\in C_{+}^{(s)} (0,+\infty )\),

$$ \biggl( \int _{0}^{+\infty } \bigl\vert g (x )y (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2} \le C \biggl( \int _{0}^{+\infty } \bigl\vert v (x )y^{ (s )} (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2} $$
(5)

holds. Moreover, if C is the smallest constant for which inequality (5) is valid, then

$$ T_{0, g, v, s} \le C\le 2\tilde{T}_{g, v, s}, $$
(6)

where

$$ T_{0,g,v,s} =\sup_{x>0} \biggl[ \int _{0}^{x}g^{2} (t)\,dt \biggr]^{1/2} \biggl[ \int _{x}^{+\infty }(t-x)^{2(s-1)} v^{-2} (t)\,dt \biggr]^{1/2} . $$

Remark 2.1

If \(s=1\) and C is the smallest constant for which inequality (5) is valid, then, instead of (6), the inequalities \(\tilde{T}_{ g, v, s} \le C\le 2\tilde{T}_{g, v, s} \) hold (see [20]).

Lemma 2.2

Suppose that the functions \(u(x)\), \(h(x)\ne 0\) (\(x<0\)) are continuous and \(\tilde{M}_{u,h,s} =\sup_{x<0} M_{u,h,s} (x) <\infty \) for a natural number s. Then, for any \(y\in C_{-}^{(s)} (-\infty , 0 )= \{ \eta \in C^{ (s )} (-\infty , 0 ): \exists \tau <0: \eta (x )=0 \ \forall x<\tau \} \),

$$ \biggl( \int _{-\infty }^{0} \bigl\vert u (x )y (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2} \le C_{1} \biggl( \int _{-\infty }^{0} \bigl\vert h (x )y^{ (s )} (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2} $$
(7)

holds. Moreover, if \(C_{1} \) is the smallest constant for which (7) is valid, then

$$ M_{0,u,h,s} \le C_{1} \le 2\tilde{M}_{u,h,s} , $$

where

$$ M_{0,u,h,s} =\sup_{\tau < 0} \biggl[ \int _{ \tau }^{0}u^{2} (t)\,dt \biggr]^{1/2} \biggl[ \int _{-\infty }^{ \tau }(\tau -t)^{2(s-1)} h^{-2} (t)\,dt \biggr]^{1/2} . $$

Proof

Changing the variable \(x\mapsto -x\) in Lemma 2.1 (for \(y\in C_{-}^{(s)} (-\infty ,0 )\) set \(\hat{y} (x )=y (-x )\), \(g (x )=u (-x )\), \(v (x )=h (-x )\); then \(\hat{y}\in C_{+}^{(s)} (0,+\infty )\) and \(T_{g,v,s} (x )=M_{u,h,s} (-x )\)), we obtain the desired result. □

Remark 2.2

If \(s=1\) and \(C_{1} \) is the smallest constant for which inequality (7) is valid, then the inequalities \(\tilde{M}_{u,h,s} \le C_{1} \le 2\tilde{M}_{u,h,s} \) hold.

The following statement is proved by application of Lemmas 2.1 and 2.2.

Lemma 2.3

Let continuous functions \(u(x)\), \(v(x)\ne 0\) (\(x \in R\)) satisfy the conditions \(\tilde{T}_{u,v,s} <\infty \), \(\tilde{M}_{u,v,s} <\infty \) for some natural number s. Then, for any \(y\in C_{0}^{ (s )} (R )\),

$$ \biggl( \int _{-\infty }^{+\infty } \bigl\vert u (x )y (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2} \le C_{2} \biggl( \int _{- \infty }^{+\infty } \bigl\vert v (x )y^{ (s )} (x ) \bigr\vert ^{2}\,dx \biggr)^{1/2}. $$
(8)

Moreover, if \(C_{2} \) is the smallest constant for which inequality (8) is valid, then

$$ \min (T_{0, u,v,s} ,M_{0, u,v,s} )\le C_{2} \le 2 \gamma _{u,v,s} . $$
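Indeed, splitting the integral at the origin and applying (5) on \((0,+\infty )\) and (7) on \((-\infty ,0 )\) to the restrictions of y, we get

$$ \int _{-\infty }^{+\infty } \vert uy \vert ^{2}\,dx= \int _{-\infty }^{0} \vert uy \vert ^{2}\,dx+ \int _{0}^{+\infty } \vert uy \vert ^{2}\,dx\le \bigl(\max (C_{1} ,C ) \bigr)^{2} \int _{-\infty }^{+\infty } \bigl\vert vy^{ (s )} \bigr\vert ^{2}\,dx, $$

so that \(C_{2} \le \max (C,C_{1} )\le 2\max (\tilde{T}_{u,v,s} ,\tilde{M}_{u,v,s} )=2\gamma _{u,v,s} \).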

Remark 2.3

If \(s=1\) and \(C_{2} \) is the smallest constant for which inequality (8) is valid, then

$$ \min (\tilde{T}_{u,v,s} ,\tilde{M}_{u,v,s} )\le C_{2} \le 2\gamma _{u,v,s} . $$

3 On a two-term differential operator

Let \(l_{0} \) be a differential operator from the set \(C_{0}^{ (k+1 )} (R )\) to \(L_{2} \), which is defined by

$$ l_{0} y=\rho (x ) \bigl(\rho (x )y^{ (k )} \bigr)^{{'} } +r_{0} (x )y^{ (k )} . $$

We denote its closure by l.

Lemma 3.1

Let \(\rho (x )>0\) and the function \(r_{0} (x )\) satisfy condition (a) of Theorem 1.1. Then the operator l is invertible, and for \(y\in D (l )\), the inequality

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} + \Vert y \Vert _{2} \le C \Vert ly \Vert _{2} $$
(9)

holds.

Proof

Let \(y\in C_{0}^{ (k+1 )} (R )\). Integrating by parts, we get that

$$ \bigl(l_{0} y,y^{ (k )} \bigr)= \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2}^{2} . $$
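In more detail, the first term of \(l_{0} y\) gives no contribution after integration over R: for \(y\in C_{0}^{ (k+1 )} (R )\),

$$ \int _{-\infty }^{+\infty }\rho \bigl(\rho y^{ (k )} \bigr)^{{'} } y^{ (k )}\,dx= \int _{-\infty }^{+\infty } \bigl(\rho y^{ (k )} \bigr)^{{'} } \bigl(\rho y^{ (k )} \bigr)\,dx=\frac{1}{2} \bigl[ \bigl(\rho y^{ (k )} \bigr)^{2} \bigr]_{-\infty }^{+\infty }=0, $$

so only the term \(\int _{R}r_{0} \vert y^{ (k )} \vert ^{2}\,dx= \Vert \sqrt{r_{0} } y^{ (k )} \Vert _{2}^{2} \) remains.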

By Hölder’s inequality,

$$ \bigl\vert \bigl(l_{0} y,y^{ (k )} \bigr) \bigr\vert \le \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} \biggl\Vert \frac{1}{\sqrt{r_{0} } } l_{0} y \biggr\Vert _{2} . $$

Hence,

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} \le \biggl\Vert \frac{1}{\sqrt{r_{0} } } l_{0} y \biggr\Vert _{2}. $$
(10)

According to Lemma 2.3, we have that

$$ \Vert y \Vert _{2} \le 2\gamma _{1,\sqrt{r_{0} } ,k} \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} . $$

Therefore,

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} + \Vert y \Vert _{2} \le C \Vert l_{0} y \Vert _{2}, $$
(11)

where \(C=2 \gamma _{1,\sqrt{r_{0} } ,k} +1\).

Now let \(y\in D (l )\). Then there is a sequence \(\{ y_{m} \} \subseteq C_{0}^{ (k+1 )} (R )\) such that \(\Vert y_{m} -y \Vert _{2} \to 0\), \(\Vert l_{0} y_{m} -ly \Vert _{2} \to 0\) as \(m\to \infty \). So, it follows that

$$ \bigl\vert \Vert y_{m} \Vert _{2} - \Vert y \Vert _{2} \bigr\vert \to 0,\qquad \bigl\vert \Vert l_{0} y_{m} \Vert _{2} - \Vert ly \Vert _{2} \bigr\vert \to 0 \quad (m\to \infty ). $$
(12)

According to (11), we have

$$ \bigl\Vert \sqrt{r_{0} } y_{m}^{ (k )} -\sqrt{r_{0} } y_{s}^{ (k )} \bigr\Vert _{2} + \Vert y_{m} -y_{s} \Vert _{2} \le C \Vert l_{0} y_{m} -l_{0} y_{s} \Vert _{2},\quad s, m \in N. $$
(13)

We denote by \(W_{2,r_{0} }^{k} (R )\) the completion of \((C_{0}^{ (k )} (R ), \Vert \cdot \Vert _{\mathop{W}_{2,r_{0} }^{k} } )\), where \(\Vert y \Vert _{W_{2,r_{0}}^{k} } = \Vert \sqrt{r_{0} } y^{ (k )} \Vert _{2} + \Vert y \Vert _{2} \). From (12) and (13) it follows that \(\{ y_{m} \} \) is a Cauchy sequence in the space \(W_{2,r_{0} }^{k} (R )\). Consequently, there is \(y\in W_{2,r_{0} }^{k} (R )\) such that \(\{ y_{m} \} \) converges to y in the norm of \(W_{2,r_{0} }^{k} (R )\). Using (11) and (12), we obtain that for y inequality (9) holds. According to (9), l is invertible. Similarly, by (10),

$$ \bigl\Vert \sqrt{r_{0} } y^{ (k )} \bigr\Vert _{2} \le \biggl\Vert \frac{1}{\sqrt{r_{0} } } ly \biggr\Vert _{2},\quad y\in D(l). $$
(14)

 □

Inequality (9) gives \(D (l )\subseteq W_{2,r_{0} }^{k} (R )\). For \(y\in D (l )\), set \(z=y^{ (k )} \) and \(\tilde{L}z=\rho (x ) (\rho (x )z )^{{'} } +r_{0} (x )z\).

Lemma 3.2

Suppose that \(\rho (x )\) and \(r_{0} (x )\) satisfy the conditions of Lemma 3.1. Then \(\tilde{L}\) is a closed operator in \(L_{2} \).

Proof

Let \(\{ z_{n} \} _{n=1}^{\infty } \subseteq D (\tilde{L} )\) be such that \(\Vert z_{n} -z \Vert _{2} \to 0\), \(\Vert \tilde{L}z_{n} -w \Vert _{2} \to 0\) as \(n\to \infty \). By the definition of \(\tilde{L}\), there is a sequence \(\{ y_{n} \} _{n=1}^{\infty } \subseteq D (l )\) such that \(y_{n}^{ (k )} =z_{n} \) and

$$ \bigl\Vert y_{n}^{ (k )} -z \bigr\Vert _{2} \to 0,\qquad \Vert ly_{n} -w \Vert _{2} \to 0 \quad (n\to \infty ). $$

By Lemma 3.1, \(y_{n} \) (\(n\in N\)) satisfies (9). Hence, \(\{ y_{n} \} _{n=1}^{\infty } \subseteq W_{2,r_{0} }^{k} (R )\) is a Cauchy sequence. Therefore, there exists \(y\in L_{2} \) such that

$$ \bigl\Vert y_{n}^{ (k )} -z \bigr\Vert _{2} + \Vert y_{n} -y \Vert _{2} \to 0 \quad (n\to \infty ). $$
(15)

So

$$ \Vert y_{n} -y \Vert _{2} \to 0,\qquad \bigl\Vert y_{n}^{ (k )} -z \bigr\Vert _{2} \to 0,\qquad \Vert ly_{n} -w \Vert _{2} \to 0 \quad (n \to \infty ). $$

Since the operator l and the generalized differentiation operator are closed, we have \(y\in D (l )\), \(z=y^{ (k )} \in D (\tilde{L} )\) and

$$ w=\tilde{L}z. $$
(16)

Thus, \(\tilde{L}\) is a closed operator. □

Lemma 3.3

If functions \(\rho (x )\) and \(r_{0} (x )\) satisfy the conditions of Lemma 3.1, then

$$ R (l )=R (\tilde{L} ). $$

The proof follows from the following equalities:

$$\begin{aligned} R (\tilde{L} ) =& \bigl\{ w\in L_{2} : \exists z \in D ( \tilde{L} ), w=\tilde{L}z \bigr\} \\ =& \bigl\{ w\in L_{2} : \exists y\in D (l ), y^{ (k )} =z \in D (\tilde{L} ), w=ly \bigr\} \\ =& \bigl\{ w\in L_{2} : \exists y\in D (l ), w=ly \bigr\} =R (l ). \end{aligned}$$

Lemma 3.4

Suppose that the functions \(\rho (x )\) and \(r_{0} (x )\) satisfy conditions (a) and (b) of Theorem 1.1. Then l is invertible and its inverse \(l^{-1} \) is bounded.

Proof

By Lemma 3.1, l has an inverse \(l^{-1} \). Since l is a closed operator, using (9), we deduce that \(R (l )\) is a closed set. By Lemma 3.3, it suffices to prove \(R (\tilde{L} )=L_{2} \). If \(R (\tilde{L} )\ne L_{2} \), then, according to [1, p. 284], there is a nonzero element \(v (x )\in L_{2} \) such that

$$ (\tilde{L}z,v )= \bigl(z,\tilde{L}^{*} v \bigr)=0 $$

(where \(\tilde{L}^{*} \) is the adjoint of \(\tilde{L}\)) for any \(z\in D (\tilde{L} )\). Since \(C_{0}^{(1)} (R)\subseteq D (\tilde{L} )\), the set \(D (\tilde{L} )\) is dense in \(L_{2} \). Therefore,

$$ \tilde{L}^{*} v=-\rho (x ) \bigl(\rho (x )v \bigr)^{{'} } +r_{0} (x )v=0. $$

This implies that \(v(x)\) is continuously differentiable and

$$ v (x )=\frac{C}{\rho (x )} \exp \int _{x_{0} }^{x} \frac{r_{0} }{\rho ^{2} }\,dt. $$
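Indeed, setting \(w=\rho v\), the equation \(\tilde{L}^{*} v=0\) becomes a first-order equation for w:

$$ w'=\frac{r_{0} }{\rho ^{2} } w,\qquad w (x )=C\exp \int _{x_{0} }^{x}\frac{r_{0} }{\rho ^{2} }\,dt, $$

and \(v=w/\rho \) gives the displayed formula.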

Since \(v\ne 0\), we have \(C\ne 0\). Taking into account condition (b) of Theorem 1.1, we have that \(\vert v (x ) \vert \ge \frac{|C|}{K} >0\) for all \(x< x_{0} \), where

$$ K=\sup_{x< x_{0} } \rho (x )\exp \int _{x_{0} }^{x}\frac{r_{0} (t )}{\rho ^{2} (t )}\,dt . $$

Hence \(v\notin L_{2} \). This is a contradiction. □

Remark 3.1

If in Lemma 3.4 the condition \(r_{0} \ge 1\) is replaced with the condition \(r_{0} \ge \delta >0\), then the lemma remains valid.

4 Proofs of the main results

Proof of Theorem 1.1

Set \(x=mt\), \(\hat{y} (t )=y (mt )\), \(\hat{\rho } (t )=\rho (mt )\), \(\hat{r}_{j} (t )=r_{j} (mt ) \) (\(j= \overline{0,k}\)), \(\hat{f} (t )=f (mt )m^{- (k+1 )} \) (\(m>0\)). Then equation (1) changes to

$$\begin{aligned} P_{m} \hat{y} =&\hat{\rho } (t ) \bigl(\hat{\rho } (t ) \hat{y}^{ (k )} (t ) \bigr)^{{'} } \\ &{}+\sum_{j=0}^{k-1}m^{- (j+1 )} \hat{r}_{j} (t ) \hat{y}^{ (k-j )} (t )+m^{- (k+1 )} \hat{r}_{k} (t )\hat{y} (t )=\hat{f}. \end{aligned}$$
(17)

Let \(\hat{l}\) be the closure of \(\hat{l}_{0} \), where \(\hat{l}_{0}: D(\hat{l}_{0} )\to L_{2} \) is defined by

$$ \hat{l}_{0} \hat{y}=\hat{\rho } (t ) \bigl(\hat{\rho } (t ) \hat{y}^{ (k )} (t ) \bigr)^{{'} } +m^{-1} \hat{r}_{0} (t )\hat{y}^{ (k )} ,\qquad D ( \hat{l}_{0} )=C_{0}^{ (k+1 )} (R ). $$

By the conditions of the theorem, we can choose a number m so that

$$ m\ge \max \Bigl(2, 8 \max_{j= \overline{1, k}} \gamma _{\hat{r}_{j} , \sqrt{\hat{r}_{0} } , j} \Bigr). $$
(18)

Then, according to condition (c) of the theorem, Lemma 2.3, and estimate (14), we obtain that, for any \(\hat{y}\in D (\hat{l} )\),

$$\begin{aligned} \sum_{j=1}^{k-1} \biggl\Vert \frac{1}{m^{j+1} } \hat{r}_{j} \hat{y}^{ (k-j )} \biggr\Vert _{2} + \biggl\Vert \frac{1}{m^{k+1} } \hat{r}_{k} \hat{y} \biggr\Vert _{2} \le& \frac{2}{m} \Biggl(\sum_{ \theta =0}^{k-1} \frac{1}{m^{\theta } } \Biggr) \max_{j=\overline{1, k}} \gamma _{\hat{r}_{j} , \sqrt{ \hat{r}_{0} } , j} \biggl\Vert \frac{1}{m} \sqrt{ \hat{r}_{0} } \hat{y}^{ (k )} \biggr\Vert _{2} \\ \le& \frac{1}{2} \biggl\Vert \frac{1}{m} \sqrt{\hat{r}_{0} } \hat{y}^{ (k )} \biggr\Vert _{2} \le \frac{1}{2} \Vert \hat{l} \hat{y} \Vert _{2}. \end{aligned}$$
(19)

By (19) and Lemma 3.1, we get that

$$\begin{aligned} \Vert S\hat{y} \Vert _{2} =& \Biggl\Vert \sum _{j=1}^{k-1}m^{-(j+1)} \hat{r}_{j} (t ) \hat{y}^{ (k-j )} (t )+m^{- (k+1 )} \hat{r}_{k} (t ) \hat{y} (t ) \Biggr\Vert _{2} \\ \le& \frac{1}{2} \Vert \hat{l}\hat{y} \Vert _{2} ,\quad \hat{y}\in D (\hat{l} ). \end{aligned}$$
(20)

According to Lemma 3.4 and Remark 3.1, the operator \(\hat{l}\) is invertible, and its inverse \(\hat{l}^{-1} \) is defined on the whole of \(L_{2} \). Then, by inequality (20) and the well-known statement on small perturbations [21, Chap. 4, Theorem 1.16], the operator

$$ P_{m} \hat{y}=\hat{l}\hat{y}+\sum_{j=1}^{k-1}m^{-(j+1)} \hat{r}_{j} (t ) \hat{y}^{ (k-j )} (t )+m^{- (k+1 )} \hat{r}_{k} (t )\hat{y} (t ) $$

is also closed and invertible, and the inverse operator \(P_{m}^{-1} \) is defined on the whole space \(L_{2} \). So, it follows that, for each \(\hat{f}\in L_{2} \), \(\hat{y}=P_{m}^{-1} \hat{f}\in D (P_{m} )\) and ŷ is a solution of equation (17). By (19), we deduce that

$$ \biggl\Vert \frac{1}{m} \sqrt{\hat{r}_{0} } \hat{y}^{ (k )} \biggr\Vert _{2} +\sum _{j=1}^{k-1} \biggl\Vert \frac{1}{m^{j+1} } \hat{r}_{j} \hat{y}^{ (k-j )} \biggr\Vert _{2} + \biggl\Vert \frac{1}{m^{k+1} } \hat{r}_{k} \hat{y} \biggr\Vert _{2} \le C \Vert \hat{f} \Vert _{2}. $$
(21)

Using the substitution \(t=m^{-1} x\), we obtain that the function \(y (x )=\hat{y} (\frac{1}{m} x )\) is a solution of equation (1). Inequality (21) implies (2). □

Proof of Theorem 1.3

Let the conditions of Theorem 1.1 be satisfied. Without loss of generality, we assume that \(\theta (x)\) is a real function. Let

$$ Q_{j} = \biggl\{ \theta (x)\frac{d^{k-j} y}{dx^{k-j} } :y\in D(L),\ \Vert Ly \Vert _{2} \le 1 \biggr\} \quad (j= \overline{1,k} ). $$

By Theorem 1.1 and (2), for any \(y\in C_{0}^{(k+1)} (R)\) with \(\Vert Ly \Vert _{2} \le 1\), we obtain

$$ \Vert z \Vert _{2} = \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{2} \le C_{1} \cdot \bigl\Vert \sqrt{r_{0} } y^{(k)} \bigr\Vert _{2} \le C C_{1} \Vert Ly \Vert _{2} \le C_{2} . $$

These inequalities are valid for any \(y\in D(L)\) such that \(\Vert Ly \Vert _{2} \le 1\), since L is a closed operator. Therefore, \(Q_{j} \) is bounded in \(L_{2} \). Let us show that \(Q_{j} \) is compact in \(L_{2} \). By the Fréchet–Kolmogorov theorem, it suffices to show that, for each \(\varepsilon >0\), there is a number \(N_{\varepsilon } \) such that, for any \(y\in C_{0}^{(k+1)} (R)\) with \(\Vert Ly \Vert _{2} \le 1\) and any \(N\ge N_{\varepsilon } \), the following inequality

$$ \Vert z \Vert _{L_{2} (R \backslash [-N,N] )} = \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{L_{2} (R\backslash [-N,N] )} < \varepsilon $$
(22)

holds. We have that

$$ \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{L_{2} (R\backslash [-N,N] )} \le \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{L_{2} (- \infty ,-N )} + \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{L_{2} (N,+\infty )}. $$
(23)

According to Lemma 2.1,

$$\begin{aligned}& \int _{N}^{+\infty } \bigl\vert \theta (t)y^{(k-j)} (t ) \bigr\vert ^{2}\,dt \\& \quad = \int _{0}^{+\infty } \bigl\vert \theta (t+N)y^{(k-j)} (t+N ) \bigr\vert ^{2}\,dt \\& \quad \le \sup_{x>0} \biggl[ \int _{0}^{x}\theta ^{2} (t+N ) \,dt \cdot \int _{x}^{+\infty }(t+N)^{2(j-1)} r_{0}^{-1} (t+N )\,dt \biggr] \times \\& \qquad {}\times \int _{0}^{+\infty } \bigl\vert \sqrt{r{}_{0} (t+N )} y^{(k)} (t+N ) \bigr\vert ^{2}\,dt \\& \quad =\sup_{x\geq N} \biggl( \int _{0}^{x}\theta ^{2} (t )\,dt \cdot \int _{x}^{+\infty } t^{2(j-1)} r_{0}^{-1} (t )\,dt \biggr)\cdot \int _{N}^{+\infty } \bigl\vert \sqrt{r{}_{0} (t )} y^{(k)} (t ) \bigr\vert ^{2}\,dt \\& \quad \le \sup_{x\geq N} \biggl( \int _{0}^{x}\theta ^{2} (t )\,dt \cdot \int _{x}^{+\infty } t^{2(j-1)} r_{0}^{-1} (t )\,dt \biggr)\cdot \int _{0}^{+\infty } \bigl\vert \sqrt{r{}_{0} (t )} y^{(k)} (t ) \bigr\vert ^{2}\,dt. \end{aligned}$$
(24)

Similarly, using Lemma 2.2, we obtain

$$\begin{aligned}& \int _{-\infty }^{-N} \bigl\vert \theta (t)y^{(k-j)} (t) \bigr\vert ^{2}\,dt \\& \quad \leq \sup_{\tau \leq -N} \biggl( \int _{\tau }^{0} \theta ^{2} (t )\,dt \cdot \int _{-\infty }^{\tau } t^{2(j-1)} r_{0}^{-1} (t )\,dt \biggr)\cdot \int _{-\infty }^{0} \bigl\vert \sqrt{r{}_{0} (t )} y^{(k)} (t ) \bigr\vert ^{2}\,dt. \end{aligned}$$
(25)

Set

$$ A_{\theta , r_{0}, j } (N)=\max \Bigl( \sup_{x\geq N} T_{ \theta , \sqrt{r_{0}}, j} (x), \sup_{\tau \leq -N} M_{ \theta , \sqrt{r_{0}}, j} ( \tau ) \Bigr). $$

By virtue of (23), (24), (25), and (2), we have that

$$ \bigl\Vert \theta y^{(k-j)} \bigr\Vert _{L_{2} ( R\backslash [-N,N] ) } \leq A_{\theta , r_{0}, j } (N). $$

Taking into account this inequality and condition (4), we see that, for given \(\varepsilon >0\), the number \(N_{\varepsilon } \) can be chosen so that inequality (22) holds for all \(y\in C_{0}^{(k+1)} (R)\) with \(\Vert Ly \Vert _{2} \le 1\) and all \(N\ge N_{\varepsilon } \). □

Example 4.1

We consider the equation

$$ \tilde{L}_{0}y=\rho _{0} (x) \bigl( \rho _{0} (x) y^{(3)} \bigr)^{{'} } + r(x) y^{(3)} - g(x) y'- h(x) y=f (x ), $$
(26)

where

$$\begin{aligned}& \rho _{0} (x)=\textstyle\begin{cases} (1+x^{4} ) (2-\sin ^{8} 10x^{6} ), & x< 0, \\ (1+x^{11} )^{-1} (1+x^{3} \cos ^{2} 7x^{10}), & x\ge 0, \end{cases}\displaystyle \\& r(x)= \bigl[9+x^{2} \bigl(4-\sin ^{10} x^{8} \bigr) \bigr]^{4}, \qquad g(x)=\sqrt{x}\cos ^{4} \bigl(3 \exp x^{2} \bigr),\qquad h(x)= 6\sin \bigl(\exp \vert x \vert \bigr). \end{aligned}$$

We denote by \(\tilde{L}\) the closure of the operator \(\tilde{L}_{0}\) \(( D(\tilde{L}_{0})= C_{0}^{(4)} (R) )\) corresponding to (26). It is easy to check that \(\gamma _{1,\sqrt{r} ,3} <\infty \), \(\gamma _{g, \sqrt{r}, 2} <\infty \), and

$$ \sup_{x< 0} \biggl[\rho _{0} (x)\exp \int _{0}^{x} \frac{r(t) }{\rho _{0}^{2}(t)}\,dt \biggr]< \infty $$

hold. Hence, by Theorem 1.1, equation (26) has a unique solution for any \(f(x)\in L_{2} \). By direct calculation we obtain that

$$\begin{aligned}& \max \Bigl(\lim_{x\to +\infty } T_{(|x| +1)^{ \alpha } , \sqrt{r}, 3} (x ), \lim _{\tau \to -\infty } M_{(|\tau | +1)^{\alpha } , \sqrt{r}, 3} (\tau ) \Bigr)=0\quad \mbox{if } \alpha < 1 , \quad \mbox{and}\\& \max \Bigl(\lim_{x\to +\infty } T_{(|x| +1)^{ \beta } , \sqrt{r} ,2} (x ), \lim _{\tau \to -\infty } M_{(|\tau | +1)^{\beta } , \sqrt{r} ,2} (\tau ) \Bigr)=0\quad \mbox{if } \beta < 2. \end{aligned}$$

Therefore, by Theorem 1.3, the operators \((|x| +1)^{\alpha } \tilde{L}^{-1} \) and \((|x| +1)^{\beta } \frac{d}{dx} \tilde{L}^{-1} \) are compact in \(L_{2} \) for \(\alpha <1 \) and \(\beta <2 \), where \(\tilde{L}^{-1}\) is the inverse of \(\tilde{L}\). Note that the coefficients of (26) are fluctuating and \(\rho _{0} (x)\) tends to zero as \(x\to +\infty \) and is unbounded as \(x\to -\infty \).
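As a purely illustrative complement (not part of the verification above), the following Python sketch bounds the two suprema on the half-line \(x>0\) numerically. The oscillating factors are replaced by the envelope bounds \(g^{2} (t )\le t\) (since \(\cos ^{8} \le 1\)) and \(r (t )\ge (9+3t^{2} )^{4} \) (since \(4-\sin ^{10} \ge 3\)), which can only enlarge T; the negative half-line is treated analogously, and the truncation bound and grid are assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative numerical check of the half-line conditions used in Example 4.1.
r_lower = lambda t: (9.0 + 3.0 * t ** 2) ** 4   # lower envelope of r(t), since 4 - sin^10 >= 3
g_sq_upper = lambda t: t                        # upper envelope of g(t)^2 = t cos^8(3 exp t^2)

def T_envelope(u_sq, s, x, upper=1.0e4):
    """Upper bound for T_{u, sqrt(r), s}(x) obtained from the envelopes above."""
    left, _ = quad(u_sq, 0.0, x)
    right, _ = quad(lambda t: t ** (2 * (s - 1)) / r_lower(t), x, upper, limit=200)
    return np.sqrt(left * right)

xs = np.linspace(0.1, 100.0, 300)
sup_T_1 = max(T_envelope(lambda t: 1.0, 3, x) for x in xs)   # relates to gamma_{1, sqrt(r), 3}
sup_T_g = max(T_envelope(g_sq_upper, 2, x) for x in xs)      # relates to gamma_{g, sqrt(r), 2}
print(f"sup_x T_(1, sqrt(r), 3)(x) <= {sup_T_1:.3e}")
print(f"sup_x T_(g, sqrt(r), 2)(x) <= {sup_T_g:.3e}")
```

Both grid suprema stay bounded (in fact they decay for large x), which is consistent with the finiteness of the two quantities claimed above.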

Example 4.2

We consider the following higher-order equation:

$$\begin{aligned} \tilde{l}_{0} y =& \bigl(1+x^{2} \bigr)^{s} \bigl( \bigl(1+x^{2} \bigr)^{s} y^{ (k )} \bigr)^{{'} } + \bigl[2- \bigl(11k+3x^{2} \bigr)^{2k} \bigr]y^{ (k )} \\ &{}+\sum_{j=1}^{k-1} \bigl[(-1)^{j} +2jx^{2} \bigr]^{k-\frac{j}{2} } y^{ (k-j )} -\frac{5}{3+4x^{2} } y=f (x ), \end{aligned}$$
(27)

where \(x\in R\), k and s are natural numbers, and \(f\in L_{2} \). Let \(\tilde{l}\) be the closure of the operator \(\tilde{l}_{0}\) with \(D(\tilde{l}_{0})= C_{0}^{(k+1)} (R)\). Direct calculations show that all conditions of Theorem 1.2 are satisfied. Hence, equation (27) is uniquely solvable and its solution y satisfies the following estimate:

$$ \bigl\Vert \sqrt{ \bigl(11k+3x^{2} \bigr)^{2k} -2} y^{ (k )} \bigr\Vert _{2} +\sum _{j=1}^{k-1} \bigl\Vert \bigl[(-1)^{j} +2jx^{2} \bigr]^{k-\frac{j}{2} } y^{ (k-j )} \bigr\Vert _{2} + \Vert y \Vert _{2} \le C \Vert f \Vert _{2} . $$

Moreover, for continuous functions \(\theta _{j} (x)\) (\(j= \overline{1, k}\)) such that \(\vert \theta _{j} (x) \vert \le (1+x^{2} )^{\omega _{j} } \) with \(\omega _{j} < k-\frac{j}{2} \), the equality

$$ \max \Bigl(\lim_{x\to +\infty } T_{\theta _{j} , \sqrt{(11k+3x^{2} )^{2k} -2}, j} (x ), \lim _{\tau \to -\infty } M_{\theta _{j} , \sqrt{(11k+3\tau ^{2} )^{2k} -2}, j} (\tau ) \Bigr)=0 $$

holds. Then, according to Theorem 1.3, the operators \((1+x^{2} )^{\alpha } \tilde{l}^{-1} \) (\(\alpha <\frac{k}{2} \)) and \(\theta _{j} (x)\frac{d^{k-j} }{dx^{k-j} } \tilde{l}^{-1} \) (\(j=\overline{1,k-1}\)) are compact in \(L_{2}\), where \(\tilde{l}^{-1}\) is the inverse of .