1 Introduction and preliminaries

Consider the first-order linear difference equation

$$ \Delta x(n)+p(n)x(n-k)=0,\quad n\geq0, $$
(1)

where Δ denotes the forward difference operator, i.e., \(\Delta x(n)=x(n+1)-x(n)\), \(\{p(n)\}_{n=0}^{\infty}\) is a nonnegative sequence of reals, and k is a natural number. By a solution of the difference equation (1) we mean a sequence of real numbers \(\{x(n)\} _{n=-k}^{\infty}\) which satisfies Eq. (1) for all \(n\geq0\). A solution \(\{x(n)\} _{n=-k}^{\infty}\) of the difference equation (1) is said to be oscillatory if the terms of the sequence \(\{x(n)\}_{n=-k}^{\infty}\) are neither eventually positive nor eventually negative. Otherwise, the solution \(\{x(n)\}_{n=-k}^{\infty}\) is said to be nonoscillatory.
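As a brief illustration (a minimal Python sketch; the function name and the sample parameters below are ours, not taken from the references), a solution of Eq. (1) is determined by the \(k+1\) initial values \(x(-k),\dots,x(0)\) and can be computed forward from the recurrence \(x(n+1)=x(n)-p(n)x(n-k)\); counting sign changes over a finite horizon can then hint at oscillatory behavior, although a finite computation cannot prove it.

```python
def solve_difference_equation(p, k, x_init, N):
    """Iterate Eq. (1) in the form x(n+1) = x(n) - p(n) * x(n-k) for n = 0, ..., N-1.

    p      : callable with p(n) >= 0 (the coefficient sequence)
    k      : positive integer delay
    x_init : the k + 1 initial values x(-k), ..., x(0)
    N      : number of forward steps
    Returns the list [x(-k), ..., x(N)]; the entry x[j] holds x(j - k).
    """
    assert len(x_init) == k + 1, "need the k + 1 initial values x(-k), ..., x(0)"
    x = list(x_init)
    for n in range(N):
        # x(n) is stored at index n + k, and x(n - k) at index n
        x.append(x[n + k] - p(n) * x[n])
    return x

# Constant coefficient above the critical value k^k / (k+1)^(k+1) of condition (6),
# so every solution of Eq. (1) oscillates; a finite run merely illustrates this.
k = 2
x = solve_difference_equation(lambda n: 0.20, k, [1.0] * (k + 1), 200)
print("sign changes:", sum(1 for a, b in zip(x, x[1:]) if a * b < 0))
```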

In the last few decades, the oscillatory behavior of the solutions to difference equations has been extensively studied. See, for example, [3–9, 13, 14, 16, 18, 20, 21, 23–30] and the references cited therein. For the general theory of difference equations, the reader is referred to the monographs [1, 2, 13, 19]. In 1981, Domshlak [8] considered the case where \(k=1\). In 1989, Erbe and Zhang [9] proved that all solutions of Eq. (1) oscillate if

$$\begin{aligned}& \beta:=\liminf_{n\rightarrow\infty}p(n)>0\quad\textit{and}\quad \limsup_{n\rightarrow\infty}p(n)>1-\beta \end{aligned}$$
(2)
$$\begin{aligned}& \quad\textit{or}\quad \liminf_{n\rightarrow\infty}p(n)> \frac{k^{k}}{(k+1)^{k+1}}, \end{aligned}$$
(3)
$$\begin{aligned}& \quad\textit{or}\quad A:=\limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n}p(i)>1, \end{aligned}$$
(4)

while Ladas, Philos, and Sficas [18] improved condition (3) as follows:

$$ \alpha:=\liminf_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)> \biggl( \frac {k}{k+1} \biggr) ^{k+1}. $$
(5)

Note that this condition is sharp in the sense that the fraction on the right-hand side cannot be improved. Indeed, when \(p(n)\) is a constant, say \(p(n)=p\), the sum in (5) equals \(kp\), and the condition reduces to

$$ p>\frac{k^{k}}{(k+1)^{k+1}}, $$
(6)

which is a necessary and sufficient condition [21] for the oscillation of all solutions to Eq. (1). Moreover, concerning the constant \(\frac{k^{k}}{(k+1)^{k+1}}\) in (3), it should be emphasized that, as shown in [9], if

$$ \sup p(n)< \frac{k^{k}}{(k+1)^{k+1}}, $$
(7)

then Eq. (1) has a nonoscillatory solution. In 1990, Ladas [16] conjectured that Eq. (1) has a nonoscillatory solution if

$$ \sum_{i=n-k}^{n-1}p(i)\leq \biggl( \frac{k}{k+1} \biggr) ^{k+1}\quad \text{for all large }n. $$

However, this conjecture is not correct, and a counter-example was given in 1994 by Yu, Zhang, and Wang [30]. Moreover, in 1999 Tang and Yu [28] showed that Eq. (1) has a nonoscillatory solution if the so-called “corrected Ladas conjecture”

$$ \sum_{i=n-k}^{n}p(i)\leq \biggl( \frac{k}{k+1} \biggr) ^{k+1}\quad\text{for all large }n $$
(8)

is satisfied. In 2017, Karpuz [14] improved the above result by replacing condition (8) with

$$ \sum_{i=n-k}^{n}p(i)\leq \biggl( \frac{k}{k+1} \biggr) ^{k}\quad\text{for all large }n, $$
(9)

which is a weaker condition. For the case where the above-mentioned conditions (4) and (5) are not satisfied, in 2004 Stavroulakis [25] and in 2006 Chatzarakis and Stavroulakis [6] derived the following sufficient oscillation conditions for Eq. (1):

$$\begin{aligned}& \limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)>1-\frac{\alpha^{2}}{4}, \end{aligned}$$
(10)
$$\begin{aligned}& \limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)>1-\alpha^{k}, \end{aligned}$$
(11)
$$\begin{aligned}& \limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)>1-\frac{\alpha^{2}}{ 2(2-\alpha)}, \end{aligned}$$
(12)

where α is as in (5) and satisfies \(0<\alpha\leq ( \frac{k}{k+1} ) ^{k+1}\). Also Chen and Yu [7] derived the following oscillation condition:

$$ \limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)>1-\frac{1-\alpha -\sqrt{ 1-2\alpha-\alpha^{2}}}{2}. $$
(13)

In 2001, Shen and Stavroulakis [23] established several oscillation conditions which, in the case of the difference equation with \(k=1\)

$$ \Delta x(n)+p(n)x(n-1)=0, $$
(14)

reduce to the sufficient oscillation condition

$$ \limsup_{n\rightarrow\infty}p(n)> \biggl( \frac{1+\sqrt{1-4\alpha }}{2} \biggr) ^{2},\quad \textit{where } 0\leq\alpha\leq1/4. $$
(15)

Remark 1.1

([23])

Observe that when \(\alpha=1/4\), condition (15) reduces to

$$ \limsup_{n\rightarrow\infty}p(n)>1/4, $$
(16)

which cannot be improved in the sense that the lower bound \(1/4\) cannot be replaced with a smaller number. Indeed, by condition (7), we see that Eq. (14) has a nonoscillatory solution if \(\sup p(n)<1/4\). Note, however, that even in the critical case where \(\lim_{n\rightarrow\infty}p(n)=1/4\), Eq. (14) can be either oscillatory or nonoscillatory. For example, if \(p(n)=\frac{1}{4}+\frac{c}{n^{2}}\), then Eq. (14) is oscillatory if \(c>1/4\) and nonoscillatory if \(c<1/4\) (the Kneser-like theorem [8]).
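Indeed, substituting \(\alpha=1/4\) into the right-hand side of (15) gives

$$ \biggl( \frac{1+\sqrt{1-4\cdot\frac{1}{4}}}{2} \biggr) ^{2}= \biggl( \frac{1}{2} \biggr) ^{2}=\frac{1}{4}, $$

which is precisely the lower bound in (16).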

Recall that the difference equation (1) is the discrete analogue of the delay differential equation

$$ x^{\prime}(t)+p(t)x(t-\tau)=0,\quad t\geq t_{0}, $$
(17)

where \(p:[t_{0},\infty)\rightarrow \mathbb{R}^{+}\) is a nonnegative continuous function and τ is a positive constant. In 1972, Ladas, Lakshmikantham, and Papadakis [17] proved that all solutions of Eq. (17) oscillate if

$$ \mathcal{A}:=\limsup_{t\rightarrow\infty} \int_{t-\tau}^{t}p(s)\,ds>1, $$
(18)

while in 1982 Koplatadze and Canturija [15] established the following result: If

$$ \mathfrak{a}:=\liminf_{t\rightarrow\infty} \int_{t-\tau}^{t}p(s)\,ds>\frac{1}{e}, $$
(19)

then all solutions of Eq. (17) oscillate. If

$$ \mathcal{A}=\limsup_{t\rightarrow\infty} \int_{t-\tau}^{t}p(s)\,ds< \frac{1}{e}, $$

or more generally,

$$ \int_{t-\tau}^{t}p(s)\,ds\leq\frac{1}{e}\quad\text{for all large }t, $$

then Eq. (17) has a nonoscillatory solution. Observe that when p is a positive constant, the above condition (19) reduces to

$$ p\tau>\frac{1}{e}, $$
(20)

which is a necessary and sufficient condition [13] for all solutions of the delay differential equation

$$ x^{\prime}(t)+px(t-\tau)=0,\quad p,\tau>0, $$

to oscillate. Note that \(( \frac{k}{k+1} ) ^{k}= ( \frac{1}{1+\frac{1}{k}} ) ^{k}\downarrow\frac{1}{e}\) as \(k\rightarrow\infty\), and therefore conditions (5) and (6) can be interpreted as the discrete analogues of (19) and (20), respectively.
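For instance, the first terms of this decreasing sequence are

$$ \biggl( \frac{1}{2} \biggr) ^{1}=0.5,\qquad \biggl( \frac{2}{3} \biggr) ^{2}\approx0.4444,\qquad \biggl( \frac{3}{4} \biggr) ^{3}\approx0.4219,\qquad \biggl( \frac{9}{10} \biggr) ^{9}\approx0.3874, $$

while \(1/e\approx0.3679\).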

Very recently, Garab et al. [11] essentially improved the above condition (19) by replacing it with

$$ \mathcal{A}=\limsup_{t\rightarrow\infty} \int_{t-\tau}^{t}p(s)\,ds>\frac{1}{e}, $$
(21)

under the additional assumptions that p is a bounded and uniformly continuous function such that

$$ \mathfrak{a}:=\liminf_{t\rightarrow\infty} \int_{t-\tau }^{t}p(s)\,ds>0,\quad\text{and}\quad \int_{t-\tau}^{t}p(s)\,ds\textit{ is slowly varying at infinity}. $$

Our aim in this paper is to extend this result to the discrete case. To this end, we assume that the sequence \(A(n)=\sum_{i=n-k}^{n-1}p(i)\) is slowly varying at infinity (see [11]), i.e., for every \(\lambda\in \mathbb{N}\), \(\lim_{n\rightarrow\infty}[A(n+\lambda)-A(n)]=0\). It should be mentioned that the idea of obtaining sharp oscillation conditions in the continuous-time case by considering slowly varying coefficients originated with Pituk [22]. Moreover, the continuous-time result has recently been generalized in [10] and [12] to a variable delay and to several variable delays, respectively. In forthcoming work, the present authors will study discrete analogues of these problems.
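To make this definition concrete, here is a small numerical sketch in Python (our own illustration; the function names are ours, and the coefficient below is the one used later in Example 3.1 with an arbitrarily chosen \(\sigma=0.06\in(\frac{1}{27},\frac{1}{9})\)). It probes the slow-variation condition by evaluating \(\max \vert A(n+\lambda)-A(n)\vert \) over finite windows; such a check is only indicative and proves nothing about the limit.

```python
import math

def A(p, k, n):
    """A(n) = sum_{i=n-k}^{n-1} p(i), as defined in the text."""
    return sum(p(i) for i in range(n - k, n))

def max_shift_deviation(p, k, lam, n_start, n_end):
    """Finite-range probe of lim_{n->oo} [A(n + lam) - A(n)] = 0:
    the maximum of |A(n + lam) - A(n)| over n_start <= n <= n_end."""
    return max(abs(A(p, k, n + lam) - A(p, k, n)) for n in range(n_start, n_end + 1))

# Coefficient of Example 3.1 with an arbitrary sigma in (1/27, 1/9).
sigma = 0.06
p = lambda n: 1 / 9 + sigma * math.cos(math.pi * math.sqrt(n) / 2) ** 2

# The deviation should shrink as the window moves to the right (illustrative only).
for n0 in (10**2, 10**4, 10**6):
    print(n0, max_shift_deviation(p, k=2, lam=3, n_start=n0, n_end=n0 + 500))
```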

2 Main result

Our main result is the following theorem.

Theorem 2.1

Assume that \(\{p(n)\}_{n\in \mathbb{N}}\) is a nonnegative and bounded sequence such that

$$\liminf_{n\rightarrow\infty}\sum_{i=n-k}^{n-1}p(i)>0.$$

Assume also that \(A(n)=\sum_{i=n-k}^{n-1}p(i)\) is slowly varying at infinity and

$$ B:=\limsup_{n\rightarrow\infty}A(n)=\limsup _{n\rightarrow\infty}\sum_{i=n-k}^{n-1}p(i)> \biggl( \frac{k}{k+1} \biggr) ^{k+1}. $$
(22)

Then all solutions of Eq. (1) oscillate.

To prove this theorem, we need the following lemma.

Lemma 2.1

([6])

Assume that \(x(n) \) is an eventually positive solution of Eq. (1) and that \(A(n)=\sum_{i=n-k}^{n-1}p(i)\ge P>0 \) for large n. Then

$$ x(n)>\frac{P^{2}}{2(2-P)}x(n-k) \quad\textit{for large } n. $$

Now we give a proof of Theorem 2.1.

Proof of Theorem 2.1

Suppose, for the sake of contradiction, that the assumptions of Theorem 2.1 hold and Eq. (1) has a nonoscillatory solution \(x(n)\). Without loss of generality, we may assume that \(x(n)\) is eventually positive (otherwise consider the solution \(-x(n)\)); since \(\Delta x(n)=x(n+1)-x(n)=-p(n)x(n-k)\leq0\) for all large n, the solution \(x(n)\) is eventually nonincreasing. Moreover, since \(\liminf_{n\rightarrow\infty}A(n)>0\), there exists \(P\in(0,1)\) such that \(A(n)\geq P\) for all large n, and hence, by virtue of Lemma 2.1, the ratio \(x(n-k)/x(n)\) is bounded for all large n; therefore there exists a positive number K such that

$$ K:=\sup_{n\in\mathbb{N} }\frac{x(n-k)}{x(n)}< \infty, $$
(23)

and by the monotonicity of \(x(n)\), we get

$$ \frac{x(n-k)}{x(n+1)}=\frac{x(n-k)}{x(n)} \frac{x(n)}{x(n+1)}\leq K\frac{x(n) }{x(n+1)}\leq K\frac{x(n-k+1)}{x(n+1)}\leq K^{2}. $$
(24)

In view of (22), there exists a strictly increasing sequence \(\{\delta(n)\}_{n\in\mathbb{N}}\) of natural numbers such that

$$ \lim_{n\rightarrow\infty}A\bigl(\delta(n)\bigr)=B. $$
(25)

Introduce the sequence

$$ y(n)^{(m)}=\frac{x(\delta(n)+m)}{x(\delta(n)+1)}\quad\text{for }n,m\in \mathbb{N}. $$

Observe that, since \(x(n)\) is nonincreasing, \(y(n)^{(m)}\leq1\) for all \(m\in\mathbb{N}\).

Furthermore, since \(x(n)\) is an eventually positive solution of Eq. (1), we have

$$ x\bigl(\delta(n)+m+1\bigr)-x\bigl(\delta(n)+m\bigr)+p\bigl( \delta(n)+m\bigr)x\bigl(\delta(n)+m-k\bigr)=0, $$
(26)

and hence

$$ y(n)^{(m+1)}-y(n)^{(m)}+p\bigl(\delta(n)+m \bigr)y(n)^{(m-k)}=0. $$
(27)

From (24) and (26), for every \(n,m\in\mathbb{N}\), we obtain

$$\begin{aligned} x\bigl(\delta(n)+m+1\bigr)-x\bigl(\delta(n)+m\bigr) =&-p\bigl(\delta(n)+m\bigr) \frac{x(\delta (n)+m-k)}{x(\delta(n)+m+1)}x\bigl(\delta(n)+m+1\bigr) \\ \geq&-MK^{2}x\bigl(\delta(n)+m+1\bigr), \end{aligned}$$

where \(M:=\sup p(n)<\infty\). Setting \(L=MK^{2}\), the last inequality gives \(y(n)^{(m+1)}-y(n)^{(m)}\geq-Ly(n)^{(m+1)}\), or \((L+1)y(n)^{(m+1)}\geq y(n)^{(m)}\), which, by iteration and since \(y(n)^{(1)}=1\), yields \((L+1)^{m-1}y(n)^{(m)}\geq y(n)^{(1)}=1\), and therefore

$$ \frac{1}{(L+1)^{m-1}}\leq y(n)^{(m) }\leq1. $$
(28)

Set \(q(n)^{(m)}=p(\delta(n)+m)\).

As shown above, for each \(m\in\mathbb{N}\), \(y(n)^{(m)}\) is a bounded sequence, and since p is bounded, \(q(n)^{(m)}\) is also bounded. In view of the above, by the Bolzano–Weierstrass theorem and Cantor’s diagonal argument, it follows that there exists a strictly increasing sequence \(\{s(n)\}_{n\in\mathbb{N}}\) of natural numbers such that, for all \(m\in \mathbb{N}\),

$$ \lim_{n\rightarrow\infty}y\bigl(s(n)\bigr)^{(m)}=:y(m) \quad\text{and}\quad \lim_{n\rightarrow\infty}q\bigl(s(n)\bigr)^{(m)}=:q(m). $$

By virtue of (27), for every \(m,n\in\mathbb{N}\), we get \(y(s(n))^{(m+1)}-y(s(n))^{(m)}+q(s(n))^{(m)}y(s(n))^{(m-k)}=0\), and taking \(n\rightarrow\infty\), we have

$$ y(m+1)-y(m)+q(m)y(m-k)=0. $$
(29)

That is, the positive sequence \(y(m)\) is a (nonoscillatory) solution of Eq. (29). Now, taking into account (25) and the fact that \(A(n)\) is slowly varying at infinity, for \(m\in\mathbb{N}\), we have

$$\begin{aligned} \sum_{i=m-k}^{m-1}q(i) & =\sum _{i=m-k}^{m-1}\lim_{n\rightarrow\infty }q \bigl(s(n)\bigr)^{(i)}=\lim_{n\rightarrow\infty}\sum _{i=m-k}^{m-1}q\bigl(s(n)\bigr)^{(i)} \\ &= \lim_{n\rightarrow\infty}\sum_{i=m-k}^{m-1}p \bigl(\delta\bigl(s(n)\bigr)+i\bigr) =\lim_{n\rightarrow\infty}\sum _{i=\delta(s(n))+m-k}^{\delta (s(n))+m-1}p(i) \\ &= \lim_{n\rightarrow\infty}A\bigl(\delta\bigl(s(n)\bigr)+m\bigr)=\lim _{n\rightarrow\infty }A\bigl(\delta(n)\bigr)=B. \end{aligned}$$

That is, for every \(m\in\mathbb{N}\), \(\sum_{i=m-k}^{m-1}q(i)\) is equal to the constant B, and therefore

$$ \liminf_{m\rightarrow\infty}\sum_{i=m-k}^{m-1}q(i)=B> \biggl( \frac {k}{k+1} \biggr) ^{k+1}. $$

Hence, by condition (5), all solutions of Eq. (29) oscillate, which contradicts the fact that \(y(m)\) is a positive solution of Eq. (29). The proof is complete. □

Remark 2.2

Note that in the special case that \(k=1\), i.e., in the case of the difference equation

$$ \Delta x(n)+p(n)x(n-1)=0, $$

condition (22) reduces to \(\limsup_{n\rightarrow\infty}p(n)>1/4\), that is, to condition (16), which is sharp and cannot be further improved. Observe, however, that condition (16) is obtained from condition (15) only when \(\alpha=1/4\). In general, that is, without the assumption that \(A(n)\) is slowly varying, \(\limsup_{n\rightarrow\infty}A(n)>1/4\) does not seem to imply oscillation of all solutions, which indicates that our main result improves oscillation condition (15) even in the case \(k=1\).

3 Example

In this section we present an example which illustrates the significance of our result. It should be pointed out that in this example all conditions of Theorem 2.1 are satisfied, and therefore all solutions oscillate, while none of the conditions presented in Sect. 1 is satisfied.

Example 3.1

Consider the difference equation

$$ \Delta x(n)+p(n)x(n-2)=0,\quad n\geq0, $$
(30)

where \(p(n)=\frac{1}{9}+\sigma\cos^{2}(\frac{\pi}{2}\sqrt{n})\) with \(\sigma\in(\frac{1}{27},\frac{1}{9})\). It is clear that \(\frac{1}{9}\le p(n)<\frac{2}{9}\) and Eq. (30) is a special case of Eq. (1) with \(k=2\).

First we show that \(p(n)\) is slowly varying at infinity. It suffices to show that the function \(f:[0,+\infty)\rightarrow \mathbb{R}\) with \(f(x)=\frac{1}{9}+\sigma\cos^{2}(\frac{\pi}{2}\sqrt{x})\) is slowly varying at infinity. To this end, we use the characterization from [11, 22]: a continuous function \(f:[0,+\infty)\rightarrow \mathbb{R}\) is slowly varying at infinity if and only if there exists a natural number \(l_{1}\) such that on \([l_{1},+\infty)\) the function f can be decomposed into the sum of a continuous function which has a finite limit as \(x\rightarrow+\infty\) and a continuously differentiable function whose derivative vanishes at infinity. That is, there exist functions \(g,h:[l_{1},+\infty)\rightarrow \mathbb{R}\) such that \(f(x)=g(x)+h(x)\) on \([l_{1},+\infty)\), where g is continuous and \(\lim_{x\rightarrow+\infty}g(x)\) is finite, and h is continuously differentiable with \(\lim_{x\rightarrow+\infty}h^{\prime}(x)=0\). Clearly, \(f(x)\) satisfies these conditions, since the term \(\frac{1}{9}\) is constant and the derivative of \(\sigma\cos^{2}(\frac{\pi}{2}\sqrt{x})\) vanishes at infinity, as the computation below shows. Note also that, since \(p(n)\) is slowly varying at infinity, \(A(n)\) is slowly varying as well.
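Indeed, for the derivative of \(\sigma\cos^{2}(\frac{\pi}{2}\sqrt{x})\), the chain rule gives, for \(x>0\),

$$ \frac{d}{dx} \biggl[ \sigma\cos^{2} \biggl( \frac{\pi}{2}\sqrt{x} \biggr) \biggr] =-2\sigma\cos \biggl( \frac{\pi}{2}\sqrt{x} \biggr) \sin \biggl( \frac{\pi}{2}\sqrt{x} \biggr) \frac{\pi}{4\sqrt{x}}=-\frac{\pi\sigma\sin ( \pi\sqrt{x} ) }{4\sqrt{x}}\rightarrow0\quad\text{as }x\rightarrow+\infty. $$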

Furthermore, by the choice of σ, we have

$$ B=\limsup_{n\rightarrow\infty}A(n)=\limsup_{n\rightarrow\infty}\sum _{i=n-k}^{n-1}p(i)=\frac{2}{9}+2 \sigma> \biggl( \frac{2}{3} \biggr) ^{3}. $$

That is, all conditions of Theorem 2.1 are satisfied and therefore all solutions of Eq. (30) oscillate.

Next we show that this conclusion cannot be derived from any of the known results mentioned in Sect. 1. First we compare our criterion with conditions (2), (3), and (4). Observe that

$$ \beta=\liminf_{n\rightarrow\infty}p(n)=1/9>0\quad\text{and}\quad\limsup _{n\rightarrow\infty}p(n)< 2/9< 1-1/9. $$

Moreover,

$$ \beta=\frac{1}{9}< \frac{4}{27}=\frac{2^{2}}{3^{3}} $$

and also, by the choice of σ,

$$ \limsup_{n\rightarrow\infty}\sum_{i=n-2}^{n}p(i)=3/9+3 \sigma< 1. $$

Therefore none of conditions (2), (3), and (4) is satisfied. Observe also that

$$ \alpha=\liminf_{n\rightarrow\infty}\sum_{i=n-2}^{n-1}p(i)=2/9< ( 2/3 ) ^{3}, $$

and therefore condition (5) is not satisfied.

Further, we compare with conditions (10)–(13). As we have seen, \(\alpha =2/9<(2/3)^{3}\) and \(B=2/9+2\sigma<4/9\). Observe that

$$\begin{gathered} B< 4/9< 80/81=1-\alpha^{2}/4, \\ B< 4/9< 77/81=1-\alpha^{2}, \\ B< 4/9< \frac{71}{72}=1- \frac{\alpha^{2}}{2(2-\alpha)},\end{gathered} $$

and

$$ B< 4/9< \frac{11+\sqrt{41}}{18}=1-\frac{1-\alpha-\sqrt{ 1-2\alpha-\alpha^{2}}}{2}, $$

and therefore none of conditions (10), (11), (12), and (13) is satisfied.
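Finally, a short Python sketch (our own finite-range sanity check, with the arbitrary choice \(\sigma=0.06\in(\frac{1}{27},\frac{1}{9})\)) estimates the quantities compared above over a large but finite window; the printed extrema only approximate the corresponding limits superior and inferior and are, of course, not a proof.

```python
import math

sigma = 0.06                       # any sigma in (1/27, 1/9); here 1/27 ~ 0.0370 and 1/9 ~ 0.1111
k = 2
p = lambda n: 1 / 9 + sigma * math.cos(math.pi * math.sqrt(n) / 2) ** 2
A = lambda n: p(n - 2) + p(n - 1)  # A(n) = sum_{i=n-2}^{n-1} p(i)

window = range(10**5, 10**5 + 20000)          # finite window used as a proxy for n -> infinity
A_vals = [A(n) for n in window]
p_vals = [p(n) for n in window]

print("B     ~ max A(n) =", max(A_vals), " (exact limsup: 2/9 + 2*sigma =", 2 / 9 + 2 * sigma, ")")
print("alpha ~ min A(n) =", min(A_vals), " (exact liminf: 2/9 ~ 0.2222)")
print("threshold (2/3)^3 =", (2 / 3) ** 3)
print("limsup p(n) ~", max(p_vals), "  liminf p(n) ~", min(p_vals), "  4/27 =", 4 / 27)
print("limsup of sum_{i=n-2}^{n} p(i) ~", max(A(n) + p(n) for n in window))
```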