1 Introduction

1.1 Some notations

Throughout the paper \({\mathbb {N}}\) denotes the set of all positive integers, \([a,b]\) is a closed non-degenerate interval of the real line (i.e., \(a<b\)), \((a,b)\) is an open interval of the real line, \(C[a, b]\) is the space of all continuous functions on the interval \([a,b]\), \(C^{(k)}[a,b]\), \(k\in {\mathbb {N}}\), is the space of all k-times continuously differentiable functions on the interval \([a,b]\), and \(D^{(k)}(a,b)\), \(k\in {\mathbb {N}}\), is the space of all k-times differentiable functions on the interval \((a,b)\).

1.2 A few words on solvability

The solvability of various types of equations (differential, partial differential, difference, functional, integral, etc.) is a topic of great popularity for a wide audience (see, e.g., [1,2,3,4,5,6,7,8]). Many mathematicians and scientists appreciate such results, as well as proofs that use closed-form formulas for solutions to equations in establishing theoretical or practical results. Interest in solving equations has been renewed, especially in the last few decades, by the appearance of computer algebra systems, which can help in finding solutions to some equations. We have devoted part of our research to solving some equations, with a focus on difference equations and systems of difference equations (see, e.g., [9,10,11,12,13,14,15,16,17,18,19] and numerous references therein). Among these papers on solvable difference equations and systems, some are devoted to additive-type difference equations [9, 13, 16, 18], some to product-type ones [10,11,12, 14], and some to various representations of general solutions to difference equations [15, 17,18,19]. For some interesting applications of classes of solvable difference equations and related topics, see, e.g., [20,21,22,23,24,25,26] and the references therein.

Closed-form formulas for solutions to some solvable difference or differential equations can be used to transform them into integral-type equations (in the case of difference equations, the notion of integral essentially refers to a sum); such transformations are frequently employed for some additive-type difference equations. It is often easier to deal with integral equations than with differential ones, so solvability methods can be useful. Here we deal with a classical problem for which, due to the additive form of the differential inequality considered, this type of transformation will also be used in the proof.

1.3 The problem which motivated this research and some related history

While working on the problems in our papers [25] and [26], in order to transform a linear difference equation into an “integral”-type form, we recalled the following problem, which can be found in the well-known book by Polya and Szegö [27] (for the English translation, see [28, p. 157]).

Problem 1

Let \(f\in C^{(2)}[a,b]\) be such that

  1. (a)

    \(f(a)=f(b)=0\),

  2. (b)

    \(f(x)>0\) for \(x\in (a,b)\),

  3. (c)

    \(f''(x)+f(x)>0\) for \(x\in (a,b)\).

Show that \(b-a>\pi \).
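The role of condition (c) can be illustrated numerically (this illustration is ours, not part of the original problem): for the family \(f_b(x)=\sin (\pi x/b)\) on \([0,b]\), conditions (a) and (b) always hold, while \(f_b''+f_b=(1-\pi ^{2}/b^{2})f_b\), so condition (c) holds precisely when \(b>\pi \). A minimal sketch:

```python
import math

def check_conditions(b, n=1000):
    """Check conditions (a)-(c) of Problem 1 for f(x) = sin(pi*x/b) on [0, b].

    Conditions (a), (b) hold by construction; since f''(x) = -(pi/b)**2 * f(x),
    condition (c) reads (1 - (pi/b)**2) * f(x) > 0.  Returns True iff it holds
    at all sampled interior points.
    """
    c = math.pi / b
    for k in range(1, n):
        x = b * k / n                      # interior sample point
        f = math.sin(c * x)
        assert f > 0                       # condition (b) holds on (0, b)
        if -c * c * f + f <= 0:            # condition (c) fails here
            return False
    return True

# Condition (c) forces b > pi: it fails for b < pi and holds for b > pi.
print(check_conditions(3.0))   # b = 3.0 < pi  -> False
print(check_conditions(3.5))   # b = 3.5 > pi  -> True
```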

Book [27] cites article [29] as the original source of the problem. In fact, [29] poses a problem from geometry concerning the convexity of a curve given in polar coordinates, which implies the statement formulated in Problem 1. Our extensive, though of course not exhaustive, literature search revealed the interesting fact that the original problem attracted considerable attention from experts of that time. The first answer to the problem was given by Hadamard in [30], the second by Poincaré in [31], the third by Roux in [32], the fourth by Duporcq in [33], and the fifth by Le Roux in [34]. The solution to Problem 1 given in [27] (i.e., in [28, p. 367]) is relatively simple, but it is somewhat contrived and relies on a few tricks. Such tricks can usually be found in books on differential equations that cover basic Sturm theory, that is, results on the zeros of solutions to ordinary differential equations (see, for example, [35]). Some other results on differential inequalities and related topics, such as integral inequalities, can be found in [1] and [36].

1.4 One of our aims and the main idea for achieving it

Since condition (c) in Problem 1 involves a linear differential inequality with constant coefficients, it is natural to try to find a proof of Problem 1 based on the solvability of the corresponding linear differential equation with constant coefficients (it is well known that differential equations with constant coefficients are solvable [1, 4,5,6,7, 35]). The idea is to regard the differential inequality as a nonhomogeneous differential equation and employ one of the solvability methods for dealing with such equations. The fact that the differential equation is of second order is crucial for our consideration, and it simplifies the situation considerably. Namely, the differential equation is not only theoretically but also practically solvable, since the associated characteristic polynomial is of second degree and hence solvable by radicals (cf. the Abel–Ruffini theorem [37]). Hence, one of our aims is to present a solution to Problem 1 which is of interest to those working on solvability theory.

2 Main results

Here we give a detailed, elegant, and direct proof of a slight generalization of the result in Problem 1 by using the solvability of the corresponding linear differential equation. The proof could be known, but we have not managed to find it in the literature so far, which is one of the reasons for writing this note. The proof is in fact a slight modification of our original solution to Problem 1, which we obtained a long time ago but have not published until now. Besides, we could not find in the literature a detailed discussion of the estimate for the distance between the zeros, which is another reason for writing this note. The proof of our first theorem naturally explains why the distance between the zeros cannot equal π, whereas for the case \(b-a>\pi \) we will construct functions satisfying all the conditions.

We also want to point out that the coefficient of the term \(f(x)\) in condition (c) can be replaced by an arbitrary positive number, written here as \(\omega ^{2}\) with \(\omega >0\), at the expense of changing the lower bound for the distance between the zeros a and b of the function f. Finally, we also show that the estimate for the distance between the zeros is best possible.

So, we prove the following result.

Theorem 1

Let \(\omega >0\) and \(f\in C^{(2)}[a,b]\) be such that

  1. (a)

    \(f(a)=f(b)=0\),

  2. (b)

    \(f(x)>0\) for \(x\in (a,b)\),

  3. (c)

    \(f''(x)+\omega ^{2}f(x)>0\) for \(x\in (a,b)\).

Then the following inequality holds:

$$\begin{aligned} b-a>\frac{\pi }{\omega }, \end{aligned}$$
(1)

and the lower bound \(\frac{\pi }{\omega }\) for the distance \(b-a\) is best possible.

Proof

Assume to the contrary that inequality (1) does not hold, that is, that the following holds:

$$\begin{aligned} b-a\le \frac{\pi }{\omega }. \end{aligned}$$
(2)

Let

$$\begin{aligned} \varepsilon (x):=f''(x)+\omega ^{2}f(x). \end{aligned}$$
(3)

Then obviously \(\varepsilon \in C[a,b]\), and relation (3) can be regarded as a nonhomogeneous differential equation of second order with constant coefficients. Together with the conditions \(f(a)=f(b)=0\), it constitutes a standard boundary value problem.

Since the general solution to the corresponding homogeneous differential equation

$$ f''(x)+\omega ^{2}f(x)=0 $$

has the form

$$ f_{h}(x)=c_{1}\sin \omega x+c_{2}\cos \omega x, $$

we can find the general solution to the nonhomogeneous equation (3) by the Lagrange method of variation of constants (see [38,39,40] or any of the books [1, 4,5,6,7, 35]).

So, let

$$\begin{aligned} f(x)=c_{1}(x)\sin \omega x+c_{2}(x)\cos \omega x,\quad x \in [a,b]. \end{aligned}$$
(4)

Then we must have

$$\begin{aligned} & c_{1}'(x)\sin \omega x+c_{2}'(x) \cos \omega x=0, \\ &\omega c_{1}'(x)\cos \omega x-\omega c_{2}'(x)\sin \omega x=\varepsilon (x), \end{aligned}$$

from which it follows that

$$ c_{1}'(x)=\frac{\varepsilon (x)\cos \omega x}{\omega }\quad \text{and}\quad c_{2}'(x)=-\frac{\varepsilon (x)\sin \omega x}{\omega }, $$

and consequently

$$\begin{aligned} c_{1}(x)=c_{1}(a)+ \int _{a}^{x}\frac{\varepsilon (t)\cos \omega t}{ \omega }\,dt\quad \text{and}\quad c_{2}(x)=c_{2}(a)- \int _{a}^{x}\frac{ \varepsilon (t)\sin \omega t}{\omega }\,dt. \end{aligned}$$
(5)

By using (5) in (4) we see that the general solution to equation (3) is

$$\begin{aligned} f(x)=c_{1}\sin \omega x+c_{2}\cos \omega x+ \frac{1}{\omega } \int _{a} ^{x}\varepsilon (t)\sin \omega (x-t) \,dt \end{aligned}$$
(6)

(here we simply write \(c_{1}\) and \(c_{2}\) instead of \(c_{1}(a)\) and \(c_{2}(a)\)).
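As a sanity check (ours, with arbitrarily chosen sample values \(\omega =2\), \(a=0\), \(c_{1}=0.7\), \(c_{2}=-0.3\), \(\varepsilon (t)=1+t^{2}\)), formula (6) can be verified numerically: evaluating the integral by the trapezoidal rule and forming a central second difference should recover \(f''+\omega ^{2}f=\varepsilon \):

```python
import math

def f6(x, a=0.0, w=2.0, c1=0.7, c2=-0.3, eps=lambda t: 1.0 + t * t, n=2000):
    """Evaluate formula (6): f(x) = c1 sin(wx) + c2 cos(wx)
    + (1/w) * int_a^x eps(t) sin(w(x-t)) dt (composite trapezoidal rule)."""
    h = (x - a) / n
    g = lambda t: eps(t) * math.sin(w * (x - t))
    s = (g(a) + g(x)) / 2.0 + sum(g(a + k * h) for k in range(1, n))
    return c1 * math.sin(w * x) + c2 * math.cos(w * x) + s * h / w

# Check that f'' + w^2 f = eps at a sample point, via central differences.
w, x, h = 2.0, 1.3, 1e-4
fxx = (f6(x + h) - 2.0 * f6(x) + f6(x - h)) / (h * h)
residual = fxx + w * w * f6(x) - (1.0 + x * x)
print(abs(residual) < 1e-3)   # True: (6) solves f'' + w^2 f = eps
```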

Since the function f must satisfy the conditions in (a), it follows that

$$ \begin{aligned} &c_{1}\sin \omega a+c_{2}\cos \omega a=0, \\ &c_{1}\sin \omega b+c_{2} \cos \omega b=- \frac{1}{\omega } \int _{a}^{b}\varepsilon (t)\sin \omega (b-t) \,dt. \end{aligned} $$
(7)

There are two cases to be considered:

$$ (1)\quad b-a=\frac{\pi }{\omega }\quad \text{and}\quad (2)\quad b-a\in \biggl(0,\frac{ \pi }{\omega }\biggr). $$

Case \(b-a=\frac{\pi }{\omega }\). Since \(b=a+\frac{\pi }{ \omega }\), from the second equality in (7), we have

$$ c_{1}\sin \omega \biggl(a+\frac{\pi }{\omega } \biggr)+c_{2}\cos \omega \biggl(a+\frac{\pi }{\omega } \biggr)=- \frac{1}{\omega } \int _{a}^{a+\frac{\pi }{\omega }}\varepsilon (t)\sin \omega \biggl(a+\frac{ \pi }{\omega }-t \biggr)\,dt, $$

from which, along with some calculation and the first equality in (7), it follows that

$$\begin{aligned} 0=-c_{1}\sin \omega a-c_{2}\cos \omega a=- \frac{1}{\omega } \int _{a} ^{a+\frac{\pi }{\omega }}\varepsilon (t)\sin \omega \biggl(a+\frac{ \pi }{\omega }-t \biggr)\,dt. \end{aligned}$$
(8)

However, since \(\varepsilon (t)>0\) by (c) and \(\sin \omega (a+\frac{\pi }{\omega }-t )=\sin \omega (t-a)>0\) for \(t\in (a,a+\frac{\pi }{\omega })\), the integrand in (8) is positive on this interval, so we have

$$ -\frac{1}{\omega } \int _{a}^{a+\frac{\pi }{\omega }}\varepsilon (t) \sin \omega \biggl(a+\frac{\pi }{\omega }-t \biggr)\,dt< 0, $$

which contradicts (8).

Case \(b-a\in (0,\frac{\pi }{\omega })\). By solving the linear system (7), we obtain

$$\begin{aligned}& c_{1}=\frac{\cos \omega a}{\omega \sin \omega (a-b)} \int _{a}^{b} \varepsilon (t)\sin \omega (b-t) \,dt, \end{aligned}$$
(9)
$$\begin{aligned}& c_{2}=\frac{-\sin \omega a}{\omega \sin \omega (a-b)} \int _{a}^{b} \varepsilon (t)\sin \omega (b-t) \,dt. \end{aligned}$$
(10)

Note that since \(0< b-a<\frac{\pi }{\omega }\), we have

$$ \sin \omega (a-b)\ne 0, $$

so that the values of the constants \(c_{1}\) and \(c_{2}\) in (9) and (10) are well defined in this case.

Employing (9) and (10) in (6), we obtain

$$\begin{aligned} f(x)=\frac{1}{\omega } \biggl( \int _{a}^{x}\varepsilon (t)\sin \omega (x-t) \,dt+\frac{ \sin \omega (x-a)}{\sin \omega (a-b)} \int _{a}^{b}\varepsilon (t) \sin \omega (b-t) \,dt \biggr). \end{aligned}$$
(11)

Since \(f\in C[a,b]\), it is integrable. Hence, by integrating (11) over the interval \([a,b]\) and using condition (b) together with the fact that the integral of a positive continuous function over a nondegenerate interval is positive, we have

$$\begin{aligned} 0< \int _{a}^{b}f(x)\,dx={}&\frac{1}{\omega } \int _{a}^{b} \int _{a}^{x}\varepsilon (t)\sin \omega (x-t) \,dt\,dx \\ & {} -\frac{1}{\omega } \int _{a}^{b}\varepsilon (t)\sin \omega (b-t) \,dt \int _{a}^{b}\frac{\sin \omega (x-a)}{\sin \omega (b-a)}\,dx. \end{aligned}$$
(12)

From inequality (12), by using the Fubini theorem (see, for example, [41, 42]), some standard algebraic and integral calculations and use of some trigonometric addition formulas, we have

$$\begin{aligned} 0 &< \frac{1}{\omega } \biggl( \int _{a}^{b} \int _{a}^{x}\varepsilon (t) \sin \omega (x-t) \,dt\,dx- \int _{a}^{b}\varepsilon (t)\sin \omega (b-t) \,dt \int _{a}^{b}\frac{\sin \omega (x-a)}{\sin \omega (b-a)}\,dx \biggr) \\ &=\frac{1}{\omega } \biggl( \int _{a}^{b}\varepsilon (t) \int _{t}^{b} \sin \omega (x-t)\,dx\,dt- \int _{a}^{b}\varepsilon (t)\sin \omega (b-t) \,dt \frac{(1-\cos \omega (b-a))}{\omega \sin \omega (b-a)} \biggr) \\ &=\frac{1}{\omega } \biggl( \int _{a}^{b}\varepsilon (t) \frac{1-\cos \omega (b-t)}{\omega }\,dt-\frac{(1-\cos \omega (b-a))}{\omega \sin \omega (b-a)} \int _{a}^{b}\varepsilon (t)\sin \omega (b-t) \,dt \biggr) \\ &=\frac{1}{\omega ^{2}} \int _{a}^{b}\varepsilon (t) \frac{\sin \omega (b-a)(1- \cos \omega (b-t))-\sin \omega (b-t)(1-\cos \omega (b-a))}{\sin \omega (b-a)}\,dt \\ &=\frac{1}{\omega ^{2}\sin \omega (b-a)} \int _{a}^{b}\varepsilon (t) \bigl( \sin \omega (b-a)-\sin \omega (b-t)-\sin \omega (t-a)\bigr)\,dt \\ &=\frac{1}{\omega ^{2}\sin \omega (b-a)} \int _{a}^{b}\varepsilon (t) \biggl(\sin \omega (b-a)-2\sin \frac{\omega (b-a)}{2}\cos \frac{\omega (b+a-2t)}{2} \biggr)\,dt \\ &=\frac{2}{\omega ^{2}\sin \omega (b-a)} \int _{a}^{b}\varepsilon (t) \sin \frac{\omega (b-a)}{2} \biggl(\cos \frac{\omega (b-a)}{2}-\cos \frac{\omega (b+a-2t)}{2} \biggr)\,dt \\ &=-\frac{4}{\omega ^{2}\sin \omega (b-a)} \int _{a}^{b}\varepsilon (t) \sin \frac{\omega (b-a)}{2}\sin \frac{\omega (b-t)}{2}\sin \frac{\omega (t-a)}{2}\,dt=:I_{a,b}. \end{aligned}$$
(13)

However, since ε and all the sine factors in the last integrand are positive on the interval \((a,b)\), while \(\sin \omega (b-a)>0\) because \(b-a\in (0,\frac{\pi }{\omega })\), it follows that

$$ I_{a,b}< 0, $$

which contradicts the inequality in (13). Hence, assumption (2) is not possible, implying that inequality (1) must hold, as claimed.
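The product formula used in the last two steps of (13), namely \(\sin A-\sin B-\sin C=-4\sin \frac{A}{2}\sin \frac{B}{2}\sin \frac{C}{2}\) whenever \(A=B+C\) (here \(A=\omega (b-a)\), \(B=\omega (b-t)\), \(C=\omega (t-a)\)), can be spot-checked numerically; the following sketch is ours:

```python
import math, random

random.seed(1)
for _ in range(1000):
    # A = w(b-a), B = w(b-t), C = w(t-a); note A = B + C.
    B = random.uniform(0.0, 3.0)
    C = random.uniform(0.0, 3.0)
    A = B + C
    lhs = math.sin(A) - math.sin(B) - math.sin(C)
    rhs = -4.0 * math.sin(A / 2) * math.sin(B / 2) * math.sin(C / 2)
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```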

Now we show that the lower bound \(\pi /\omega \) is best possible. To do this, we construct a one-parameter family of functions \(f_{\delta }\), where δ is a small positive number, such that, for each fixed δ, \(f_{\delta }\) satisfies conditions (a)–(c) on an interval \([\widehat{a},\widehat{b}]\) with \(\widehat{b}-\widehat{a}=\frac{\pi }{\omega }+\delta \). Let \(\delta \in (0,\frac{ \pi }{2\omega })\),

$$ \widehat{a}=a-\frac{\delta }{2},\quad \text{and}\quad \widehat{b}=a+ \frac{ \pi }{\omega }+\frac{\delta }{2}. $$

Then clearly \(\widehat{b}-\widehat{a}=\frac{\pi }{\omega }+\delta \).

We choose \(f_{\delta }\) in the following form:

$$ f_{\delta }(x)=\sin (cx+d), $$

and choose constants c and d such that \(f_{\delta }(\widehat{a})=f _{\delta }(\widehat{b})=0\).

A simple calculation shows that

$$ f_{\delta }(x)=\sin \biggl(\frac{\omega \pi }{\pi +\omega \delta } \biggl(x+ \frac{\delta }{2}-a \biggr) \biggr). $$

It is clear that

$$\begin{aligned} f_{\delta }(x)>0,\quad x\in (\widehat{a},\widehat{b}). \end{aligned}$$
(14)

We also have

$$\begin{aligned} f_{\delta }''(x)+\omega ^{2}f_{\delta }(x)={}&\sin \biggl(\frac{\omega \pi }{\pi +\omega \delta } \biggl(x+\frac{\delta }{2}-a \biggr) \biggr) \biggl(\omega ^{2}- \biggl(\frac{\omega \pi }{\pi +\omega \delta } \biggr) ^{2} \biggr) \\ ={}&f_{\delta }(x)\omega ^{2} \biggl(1- \biggl( \frac{\pi }{\pi +\omega \delta } \biggr)^{2} \biggr). \end{aligned}$$
(15)

Using the fact that \(\frac{\pi }{\pi +\omega \delta }\in (0,1)\) and inequality (14) in (15), it follows that

$$ f_{\delta }''(x)+\omega ^{2}f_{\delta }(x)>0,\quad x\in (\widehat{a}, \widehat{b}), $$

which means that condition (c) holds.

Hence, the function \(f_{\delta }\) satisfies conditions (a)–(c) on the interval \([\widehat{a},\widehat{b}]\), whose length is \(\frac{\pi }{\omega }+\delta \). Since \(\delta \in (0,\frac{\pi }{2\omega })\) is arbitrary, the length of the interval can be arbitrarily close to \(\frac{\pi }{\omega }\), which means that the lower bound in (1) is indeed best possible, finishing the proof of the theorem. □
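The construction of \(f_{\delta }\) can also be checked numerically; the following sketch (ours, with sample values \(a=0.5\), \(\omega =2\), \(\delta =0.3\)) verifies conditions (a)–(c) and the interval length \(\pi /\omega +\delta \):

```python
import math

def f_delta_check(a=0.5, w=2.0, delta=0.3, n=2000):
    """Check conditions (a)-(c) for f_delta(x) = sin(c(x + delta/2 - a)),
    c = w*pi/(pi + w*delta), on [a_hat, b_hat] of length pi/w + delta."""
    c = w * math.pi / (math.pi + w * delta)
    a_hat = a - delta / 2
    b_hat = a + math.pi / w + delta / 2
    f = lambda x: math.sin(c * (x + delta / 2 - a))
    # (a): zeros at the endpoints.
    assert abs(f(a_hat)) < 1e-12 and abs(f(b_hat)) < 1e-12
    # (b) and (c) at interior sample points; since f'' = -c^2 f,
    # f'' + w^2 f = (w^2 - c^2) f, positive because c < w.
    for k in range(1, n):
        x = a_hat + (b_hat - a_hat) * k / n
        assert f(x) > 0
        assert -c * c * f(x) + w * w * f(x) > 0
    return b_hat - a_hat

length = f_delta_check()
print(abs(length - (math.pi / 2.0 + 0.3)) < 1e-12)   # interval length = pi/w + delta
```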

If we omit the condition that the second derivative of f is continuous on the interval \([a,b]\), then the above proof cannot be applied, since the function \(\varepsilon (x)\) defined in (3) need not be integrable. Hence, in this case another method has to be used, which may be less constructive than the one given in the proof of Theorem 1.

There is also a result related to Theorem 1 which does not require continuity of the derivatives on the closed interval \([a,b]\). It can be found, for example, in [43]. The following relative of Theorem 1 can be proved by a modification of the proof presented therein.

Theorem 2

Let \(f\in C[a,b]\cap D^{(2)}(a,b)\) be such that

  1. (a)

    \(f(a)=f(b)=0\),

  2. (b)

    \(f(x)>0\) for \(x\in (a,b)\),

  3. (c)

    the following inequality holds:

    $$ \bigl(p(x)f'(x)\bigr)'+f(x)>0 $$

    for every \(x\in (a,b)\) and for some \(p\in C[a,b]\cap D(a,b)\) such that

    $$\begin{aligned} p(x)\ge \frac{1}{\omega ^{2}}\quad \textit{for }x\in [a,b], \end{aligned}$$
    (16)

    for some \(\omega >0\).

Then the following inequality holds:

$$\begin{aligned} b-a\ge \frac{\pi }{\omega }. \end{aligned}$$
(17)

Proof

Let \(g(x)=\ln f(x)\), which is well defined on \((a,b)\) by condition (b). From condition (a) and the continuity of f on \([a,b]\), it follows that

$$ \lim_{x\to a+0}g(x)=\lim_{x\to b-0}g(x)=-\infty . $$

Further, we have

$$\begin{aligned} \limsup_{x\to a+0}g'(x)=+\infty \quad \text{and} \quad \liminf_{x \to b-0}g'(x)=-\infty . \end{aligned}$$
(18)

Indeed, assume that \(\limsup_{x\to a+0}g'(x)<+\infty\). Then, by the continuity of \(g'\) on \((a,b)\),

$$ \sup_{x\in(a,\frac{a+b}{2}]}g'(x)=:L< +\infty . $$

Let \(L_{1}:=\max \{1, L\}\).

By using the Lagrange mean value theorem, we have

$$\begin{aligned} g(x)=g(x_{0})+(x-x_{0})g'(\zeta ) \end{aligned}$$
(19)

for every \(x_{0},x\in (a,b)\) and some ζ inside the interval with endpoints \(x_{0}\) and x. Hence, for \(x_{0}=\frac{a+b}{2}\) and every \(x\in (a,x_{0})\), we would have

$$ g(x)\ge g(x_{0})-(x_{0}-x)L_{1}>g(x_{0})-(x_{0}-a)L_{1}>- \infty , $$

which would be a contradiction with the fact \(\lim_{x\to a+0}g(x)=- \infty \).

On the other hand, if it were \(\liminf_{x\to b-0}g'(x)>-\infty\), then we would have

$$ \inf_{x\in[\frac{a+b}{2},b)}g'(x)=M>-\infty. $$

Let \(M_{1}=\min \{-1,M\}\), then from (19) with \(x_{0}=\frac{a+b}{2}\) and every \(x\in (x_{0},b)\), we would have

$$ g(x)\ge g(x_{0})+(x-x_{0})M_{1}>g(x_{0})+(b-x_{0})M_{1}>- \infty , $$

which would be a contradiction with the fact \(\lim_{x\to b-0}g(x)=- \infty \).

From (18) along with the continuity of function p and condition (16), we have

$$\begin{aligned} \limsup_{x\to a+0}p(x)g'(x)=+\infty \quad \text{and}\quad \liminf_{x\to b-0}p(x)g'(x)=-\infty . \end{aligned}$$
(20)

Further, by a direct calculation and use of conditions (b), (c), and (16) (the latter gives \(1/p(x)\le \omega ^{2}\)), we have

$$ \bigl(p(x)g'(x)\bigr)'=\frac{(p(x)f'(x))'}{f(x)}- \frac{(p(x)g'(x))^{2}}{p(x)}>-1-\bigl( \omega p(x)g'(x) \bigr)^{2} $$

for \(x\in (a,b)\).

Let

$$\begin{aligned} F(x)=\frac{1}{\omega }\arctan \bigl(\omega p(x)g'(x)\bigr). \end{aligned}$$
(21)

Then, by applying the Lagrange mean value theorem to the function F in (21) on an interval \([t,s]\subset (a,b)\), \(t< s\), we get

$$\begin{aligned} F(t)-F(s)=\frac{(pg')'(\zeta )}{1+(\omega (pg')(\zeta ))^{2}}(t-s)< s-t< b-a. \end{aligned}$$
(22)

By letting \(t\to a+0\) and \(s\to b-0\) in the inequality in (22), using (20) and the fact that \(\omega >0\), it follows that

$$ \frac{\pi }{\omega }=\limsup_{t\to a+0}F(t)-\liminf _{s\to b-0}F(s) \le b-a, $$

from which inequality (17) follows, finishing the proof of the theorem. □
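The mechanism of the proof can be illustrated on the borderline example \(f(x)=\sin x\), \(p\equiv 1\), \(\omega =1\), \([a,b]=[0,\pi ]\) (our illustration, not part of the proof): there \(g'(x)=\cot x\) and \(F(x)=\arctan (\cot x)=\pi /2-x\), so \(F(t)-F(s)\to \pi =b-a\) as \(t\to a+0\), \(s\to b-0\):

```python
import math

# For f(x) = sin(x), p(x) = 1, w = 1 on [a, b] = [0, pi]:
# g(x) = ln f(x), g'(x) = cos(x)/sin(x), and
# F(x) = arctan(p(x) g'(x)) = arctan(cot x) = pi/2 - x on (0, pi).
def F(x):
    return math.atan(math.cos(x) / math.sin(x))

t, s = 1e-6, math.pi - 1e-6
gap = F(t) - F(s)                  # approaches pi = b - a as t -> a+0, s -> b-0
print(abs(gap - math.pi) < 1e-5)   # True
```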

From Theorem 2 we obtain the following corollary.

Corollary 1

Let \(\omega >0\) and \(f\in C[a,b]\cap D ^{(2)}(a,b)\) be such that

  1. (a)

    \(f(a)=f(b)=0\),

  2. (b)

    \(f(x)>0\) for \(x\in (a,b)\),

  3. (c)

    \(f''(x)+\omega ^{2}f(x)>0\) for \(x\in (a,b)\).

Then inequality (17) holds and the lower bound \(\frac{\pi }{ \omega }\) for the distance \(b-a\) is best possible.

Proof

The first part of the corollary follows directly by choosing \(p(x)\equiv \frac{1}{\omega ^{2}}\) in Theorem 2. That the estimate in (17) is best possible can be shown as in the proof of Theorem 1. □

Remark 1

If condition (c) in Corollary 1 is replaced by the following one:

$$ f''(x)+\omega ^{2}f(x)\ge 0\quad \text{for }x\in (a,b), $$

then it is easy to see that the function

$$ f(x)=\sin \omega (x-a) $$

also satisfies conditions (a) and (b) on the interval \([a,a+\frac{\pi }{\omega }]\), so that the lower bound \(\frac{\pi }{\omega }\) is attained in this case.

However, if

$$ f''(x)+\omega ^{2}f(x)>0 $$

for \(x\in (a,a+\frac{\pi }{\omega })\), it is not clear if the lower bound \(\frac{\pi }{\omega }\) in Corollary 1 is achieved.
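The attainment in the relaxed case can be confirmed numerically (a sketch of ours, with sample values \(a=1\), \(\omega =3\)): \(f(x)=\sin \omega (x-a)\) satisfies \(f''+\omega ^{2}f=0\ge 0\) together with (a) and (b) on an interval of length exactly \(\pi /\omega \):

```python
import math

def remark1_check(a=1.0, w=3.0, n=1000):
    """For f(x) = sin(w(x - a)) on [a, a + pi/w]:
    f'' + w^2 f = 0, so the relaxed condition f'' + w^2 f >= 0 holds,
    and the distance between the zeros equals exactly pi/w."""
    b = a + math.pi / w
    f = lambda x: math.sin(w * (x - a))
    assert abs(f(a)) < 1e-12 and abs(f(b)) < 1e-12     # condition (a)
    for k in range(1, n):
        x = a + (b - a) * k / n
        assert f(x) > 0                                 # condition (b)
        # f''(x) = -w^2 f(x), hence f'' + w^2 f = 0 >= 0 identically
    return b - a

print(abs(remark1_check() - math.pi / 3.0) < 1e-12)    # True: b - a = pi/w
```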

Remark 2

Theorem 1 can be obtained from the statement in Problem 1. Indeed, assuming the statement is proved, instead of the function f, the function

$$ f_{1/\omega }(x):=f \biggl(\frac{x}{\omega } \biggr) $$

can be considered on the interval \([\omega a, \omega b]\).

From \(f(a)=f(b)=0\), we have

$$ f_{1/\omega }(\omega a)=f_{1/\omega }(\omega b)=0. $$

From \(f(x)>0\), \(x\in (a,b)\), we have

$$ f_{1/\omega }(x)>0\quad \text{for }x\in (\omega a,\omega b). $$

Finally, if \(f''(x)+\omega ^{2}f(x)>0\) for \(x\in (a,b)\), then we have

$$ (f_{1/\omega })''(x)+f_{1/\omega }(x)= \frac{f''(x/\omega )+\omega ^{2}f(x/\omega )}{\omega ^{2}}>0 $$

for \(x\in (\omega a,\omega b)\). Hence, applying the statement in Problem 1 to \(f_{1/\omega }\) on \([\omega a,\omega b]\) yields \(\omega b-\omega a>\pi \), that is, \(b-a>\frac{\pi }{\omega }\), so Theorem 1 follows.
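The scaling identity underlying this argument can be spot-checked by finite differences (our sketch; the sample function \(f(x)=e^{-x}\sin x\) is arbitrary):

```python
import math

w = 2.0
f = lambda x: math.exp(-x) * math.sin(x)        # any smooth sample function
fs = lambda x: f(x / w)                          # f_{1/w}(x) = f(x/w)

def d2(g, x, h=1e-5):
    """Central second difference approximating g''(x)."""
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / (h * h)

x = 1.7
lhs = d2(fs, x) + fs(x)                          # (f_{1/w})''(x) + f_{1/w}(x)
rhs = (d2(f, x / w) + w * w * f(x / w)) / (w * w)
print(abs(lhs - rhs) < 1e-4)                     # True: the scaling identity holds
```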