1 Introduction

In recent years, many works have been devoted to the solvability of boundary value problems for linear functional differential equations (see, for example, [1–19]). We consider the Cauchy problem for equations with a singularity at the initial point. The main results are Theorems 1 and 2, where we obtain sharp solvability conditions.

Similar existence results for other boundary value problems and other functional equations are established in [2, 4–19]. In the Volterra case, the Cauchy problem is considered in [20–22] for some classes of nonlinear singular functional differential equations. Solvability conditions for boundary value problems with weighted initial conditions are established in [23–33].

We do not impose any restrictions on the growth or the sign of the singular coefficient. The constants in our solvability conditions are the best possible in the considered classes of functional operators; these sharp conditions cannot be derived from the contraction mapping principle.

In the next section we formulate the main results and give an example for a singular differential equation with deviating argument. Further, in Section 3, the Fredholm property of the considered singular problems is proved. In Section 4, the existence results are proved.

We use the following standard notation:

  • \(\mathbb{R}=(-\infty,\infty)\);

  • L is the Banach space of measurable functions \(z:[0,1]\to \mathbb{R}\) such that

    $$\Vert z \Vert _{\mathbf{L}}= \int_{0}^{1} \bigl\vert z(s) \bigr\vert \,ds< + \infty; $$
  • C is the Banach space of continuous functions \(x:[0,1]\to \mathbb{R}\) with the norm

    $$\Vert x \Vert _{\mathbf{C}}=\max_{t\in[0,1]} \bigl\vert x(t) \bigr\vert ; $$
  • AC is the Banach space of absolutely continuous functions \(x:[0,1]\to\mathbb{R}\) with any of two equivalent norms

    $$\Vert x \Vert _{\mathbf{AC}}= \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds\quad\text{or}\quad \Vert x \Vert _{\mathbf{AC}}= \bigl\vert x(0) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds. $$
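The equivalence of these two norms can be seen directly: since \(x(1)=x(0)+\int_{0}^{1}\dot{x}(s)\,ds\), we have

$$\bigl\vert x(1) \bigr\vert \leqslant \bigl\vert x(0) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds \quad\text{and}\quad \bigl\vert x(0) \bigr\vert \leqslant \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds, $$

so each of the two norms does not exceed twice the other.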

Definition 1

An operator \(T:\mathbf{C}\to\mathbf{L}\) is called positive if it maps nonnegative functions from the space C into a.e. nonnegative functions from L.

Definition 2

We will say that a boundary value problem has the Fredholm property if the corresponding operator of this problem is a Noether operator with index zero.

2 Main results

Let \(p:(0,1]\to\mathbb{R}\) be a positive measurable function such that

$$ \int_{\varepsilon}^{1} p(s) \,ds< +\infty\quad \text{for every $ \varepsilon \in(0,1)$},\qquad \lim_{\varepsilon\to0+} \int_{\varepsilon}^{1} p(t) \,dt=\infty. $$
(2.1)
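For instance, the function \(p(t)=1/t\) satisfies conditions (2.1); here the tail integral has the closed form \(\int_{\varepsilon}^{1}\frac{ds}{s}=-\ln\varepsilon\). A small numerical sketch (the choice of p is ours, for illustration only):

```python
import math

# Illustrative coefficient p(t) = 1/t on (0, 1]: the tail integral
# over [eps, 1] equals -ln(eps), so it is finite for every
# eps in (0, 1) but diverges as eps -> 0+.
def tail_integral(eps):
    return -math.log(eps)  # = integral of 1/s over [eps, 1]

for eps in (0.1, 0.01, 0.001):
    assert math.isfinite(tail_integral(eps))
# divergence as eps -> 0+:
assert tail_integral(1e-12) > tail_integral(1e-6) > tail_integral(1e-3)
```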

We consider the singular ordinary differential equation

$$ (\mathcal{L}_{k} x) (t)\equiv\dot{x}(t)+k p(t)x(t)=f(t), \quad t\in (0,1],\quad k\ne0, $$
(2.2)

and the functional differential equation

$$ (\mathcal{L}_{k} x) (t)=(Tx) (t)+f(t),\quad t\in[0,1], $$
(2.3)

where \(T:\mathbf{C}\to\mathbf{L}\) is a linear bounded operator.

Definition 3

A function \(x\in\mathbf{AC}\) is called a solution of (2.3) if x satisfies equality (2.3) for a.a. \(t\in[0,1]\).

Let \(T^{+}:\mathbf{C}\to\mathbf{L}\), \(T^{-}:\mathbf{C}\to\mathbf{L}\) be positive linear operators with the norms

$$ \bigl\Vert T^{+} \bigr\Vert _{\mathbf{C}\to\mathbf{L}}=\mathcal{T}^{+},\qquad \bigl\Vert T^{-} \bigr\Vert _{\mathbf{C}\to\mathbf{L}}=\mathcal{T}^{-}. $$
(2.4)

Theorem 1

Let \(k>0\). Then the Cauchy problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(T^{+}x)(t)-(T^{-}x)(t)+f(t),& t\in[0,1],\\ x(0)=0, \end{cases} $$
(2.5)

has a unique solution for all \(f\in\mathbf{L}\) if

$$ \mathcal{T}^{+}\leqslant1, \qquad \mathcal{T}^{-}\leqslant2 \sqrt {1- \mathcal{T}^{+}}. $$
(2.6)
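Condition (2.6) is elementary to verify for given operator norms; a minimal sketch (the function name and sample values are ours):

```python
import math

# Check the solvability condition (2.6) of Theorem 1 for given
# norms Tp = ||T+|| and Tm = ||T-||.
def condition_2_6(Tp, Tm):
    return Tp <= 1 and Tm <= 2 * math.sqrt(1 - Tp)

assert condition_2_6(1.0, 0.0)        # boundary point of (2.6)
assert condition_2_6(0.0, 2.0)        # boundary point: Tm = 2*sqrt(1)
assert not condition_2_6(0.75, 1.1)   # 2*sqrt(0.25) = 1 < 1.1
```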

Theorem 2

Let \(k<0\). Then the Cauchy problem with the additional boundary value condition

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(T^{+}x)(t)-(T^{-}x)(t)+f(t),& t\in[0,1],\\ x(0)=0,\quad x(1)=c, \end{cases} $$
(2.7)

has a unique solution for all \(f\in\mathbf{L}\), \(c\in\mathbb{R}\) if

$$ \mathcal{T}^{-}\leqslant1, \qquad \mathcal{T}^{+}\leqslant2 \sqrt {1- \mathcal{T}^{-}}. $$
(2.8)

The nonsingular case \(k=0\) was considered in [16].

Theorem 3

[16]

The Cauchy problem

$$ \textstyle\begin{cases} \dot{x}(t)=(T^{+}x)(t)-(T^{-}x)(t)+f(t),& t\in[0,1],\\ x(0)=c, \end{cases} $$

has a unique solution for all \(f\in\mathbf{L}\), \(c\in\mathbb{R}\) if

$$ \mathcal{T}^{+}< 1, \qquad \mathcal{T}^{-}< 1+ 2\sqrt{1-\mathcal{T}^{+}}. $$

Remark 1

From the proofs of Theorems 1, 2, it will follow that inequalities (2.6) and (2.8) are unimprovable in the following sense. If at least one of them is not fulfilled, there exist positive operators \(T^{+}\), \(T^{-}\) such that equalities (2.4) hold and problem (2.5) or (2.7) has no unique solution.

Remark 2

We do not impose any restrictions on the growth order of the coefficient p at the singular point.

Example 1

Suppose that \(p:(0,1]\to(0,+\infty)\) satisfies conditions (2.1).

By Theorems 1, 2 and Remark 1, we get the following solvability condition.

If \(q\in\mathbf{L}\) is a nonnegative function such that

$$ \int_{0}^{1} q(s) \,ds \leqslant2, $$

then

  1. (i)

    the Cauchy problem

    $$ \textstyle\begin{cases} \dot{x}(t)+p(t)x(t)=-q(t)x(h(t))+f(t),& t\in[0,1],\\ x(0)=0, \end{cases} $$
    (2.9)

    has a unique absolutely continuous solution for every measurable function \(h:[0,1]\to[0,1]\) and for every \(f\in\mathbf{L}\);

  2. (ii)

    the Cauchy problem with additional boundary condition

    $$ \textstyle\begin{cases} \dot{x}(t)-p(t)x(t)=q(t)x(h(t))+f(t),& t\in[0,1],\\ x(0)=0,\quad x(1)=c, \end{cases} $$
    (2.10)

    has a unique absolutely continuous solution for every measurable function \(h:[0,1]\to[0,1]\) and for every \(f\in\mathbf{L}\), \(c\in \mathbb{R}\).

For every \(Q>2\), there exist a nonnegative function \(q\in\mathbf{L}\),

$$ \int_{0}^{1} q(s) \,ds=Q, $$

and a measurable function \(h:[0,1]\to[0,1]\) such that problem (2.9) (or problem (2.10)) has no unique solution.
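The bound \(\int_{0}^{1}q(s)\,ds\leqslant2\) in Example 1 is exactly condition (2.6): for problem (2.9) take \((T^{+}x)(t)\equiv0\) and \((T^{-}x)(t)=q(t)x(h(t))\), so that

$$\mathcal{T}^{+}=0,\qquad \mathcal{T}^{-}= \bigl\Vert T^{-} \bigr\Vert _{\mathbf{C}\to\mathbf{L}}= \int_{0}^{1} q(s) \,ds\leqslant2=2\sqrt{1-\mathcal{T}^{+}}. $$

Similarly, for problem (2.10) we have \(\mathcal{T}^{-}=0\) and \(\mathcal{T}^{+}=\int_{0}^{1}q(s)\,ds\leqslant2\), so condition (2.8) of Theorem 2 is fulfilled.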

3 Fredholm boundary value problems

Definition 4

A locally absolutely continuous function \(x:(0,1]\to\mathbb{R}\) is called a solution of equation (2.2) if x satisfies (2.2) almost everywhere.

Every solution to (2.2) has a representation

$$ x(t)=x_{1}(t) \biggl(x(1)- \int_{t}^{1}\frac{f(s)}{x_{1}(s)} \,ds \biggr),\quad t \in(0,1], $$
(3.1)

where

$$ x_{1}(t)=e^{k\int_{t}^{1}p(s) \,ds},\quad t\in(0,1]. $$

It is obvious that for \(k>0\) the function \(x_{1}\) decreases on \((0,1]\) and \(\lim_{t\to0+}x_{1}(t)=\infty\); for \(k<0\) the function \(x_{1}\) increases on \((0,1]\), \(\lim_{t\to 0+}x_{1}(t)=0\).
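For the model coefficient \(p(t)=1/t\) (our illustrative choice), \(x_{1}(t)=e^{-k\ln t}=t^{-k}\), which makes this behavior explicit:

```python
# x1(t) = t**(-k) for the illustrative coefficient p(t) = 1/t.
def x1(t, k):
    return t ** (-k)

# k > 0: x1 decreases on (0, 1] and x1(t) -> infinity as t -> 0+.
assert x1(0.5, 1.0) > x1(1.0, 1.0)
assert x1(1e-6, 1.0) > 1e5

# k < 0: x1 increases on (0, 1] and x1(t) -> 0 as t -> 0+.
assert x1(0.5, -1.0) < x1(1.0, -1.0)
assert x1(1e-6, -1.0) < 1e-5
```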

Definition 5

For \(k<0\), denote by \(\mathbf{D}_{k}\) the set of all solutions to (2.2) for all \(f\in\mathbf{L}\).

Lemma 1

\(\mathbf{D}_{k}\), \(k<0\), is a Banach space with respect to the norm

$$ \Vert x \Vert _{\mathbf{D}_{k}}= \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert (\mathcal{L}_{k} x) (s) \bigr\vert \,ds. $$

Proof

Equality (3.1) gives a one-to-one correspondence \({\mathcal {J}}\) between \(\mathbf{D}_{k}\) and \(\mathbf{L}\times\mathbb{R}\):

$$ \begin{gathered} \bigl({\mathcal {J}} \{f,c\}\bigr) (t)= x_{1}(t) \biggl(c- \int_{t}^{1}\frac {f(s)}{x_{1}(s)} \,ds \biggr),\quad t \in(0,1],f\in\mathbf{L}, c\in\mathbb{R}, \\ {\mathcal {J}}^{-1} x=\bigl\{ x(1), \mathcal{L}_{k} x\bigr\} ,\quad x\in\mathbf{D}_{k}. \end{gathered} $$

The space \(\mathbf{L}\times\mathbb{R}\) is a Banach space with respect to the norm

$$ \bigl\Vert \{f,c\} \bigr\Vert _{\mathbf{L}\times\mathbb{R}}= \vert c \vert + \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds. $$

From this the lemma follows. □

Lemma 2

If \(k<0\), then for every \(x\in\mathbf{D}_{k}\) there exists a finite limit \(x(0+)\), and \(x(0+)=0\).

Extend all elements of \(\mathbf{D}_{k}\), \(k<0\), at \(t=0\) continuously.

Lemma 3

If \(k<0\), the embedding \(x\to x\) from \(\mathbf{D}_{k}\) into AC is bounded.

Proofs of Lemmas 2, 3

Let \(x\in\mathbf{D}_{k}\), \(k<0\), be a solution to (2.2). Then

$$ \begin{gathered} \Vert x \Vert _{\mathbf{D}_{k}}= \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds, \\ \Vert x \Vert _{\mathbf{AC}}= \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds= \bigl\vert x(1) \bigr\vert + \int _{0}^{1} \bigl\vert f(s)-k p(s)x(s) \bigr\vert \,ds \\ \hphantom{\Vert x \Vert _{\mathbf{AC}}}\leqslant \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds+ \int_{0}^{1} \vert k \vert p(s) \bigl\vert x(s) \bigr\vert \,ds \\ \hphantom{\Vert x \Vert _{\mathbf{AC}}} \leqslant \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds+ \int_{0}^{1} \vert k \vert p(s) x_{1}(s) \,ds \bigl\vert x(1) \bigr\vert + \vert k \vert \int_{0}^{1} p(t)x_{1}(t) \int_{t}^{1} \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds \,dt \\ \hphantom{\Vert x \Vert _{\mathbf{AC}}}= \bigl\vert x(1) \bigr\vert + \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds+ \bigl\vert x(1) \bigr\vert + \vert k \vert \int_{0}^{1} p(t)x_{1}(t) \int_{t}^{1} \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds \,dt. \end{gathered} $$

Changing the order of integration in the last integral (it is possible by the Fubini theorem since all integrands are nonnegative), we get

$$ \begin{gathered} \vert k \vert \int_{0}^{1} p(t)x_{1}(t) \int_{t}^{1} \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds \,dt \\ \quad = \int_{0}^{1} \int_{0}^{s} \vert k \vert p(t)x_{1}(t) \,dt \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds= \int _{0}^{1}\bigl(x_{1}(s)-x_{1}(0+) \bigr)\frac{ \vert f(s) \vert }{x_{1}(s)} \,ds= \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds. \end{gathered} $$

Therefore,

$$ \Vert x \Vert _{\mathbf{AC}}\leqslant2 \bigl\vert x(1) \bigr\vert +2 \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds \leqslant2 \Vert x \Vert _{\mathbf{D}_{k}}, $$
(3.2)

and for every \(x\in\mathbf{D}_{k}\), we have \(\dot{x}\in\mathbf{L}\). So, there exists a finite limit \(x(0+)\). Extend elements of \(\mathbf {D}_{k}\) at \(t=0\) continuously. For every \(x\in\mathbf{D}_{k}\), there exists \(f\in\mathbf{L}\) such that

$${k}p(t)x(t)=f(t)-\dot{x}(t),\quad t\in[0,1], $$

where the right-hand side is integrable on \([0,1]\). So,

$$\int_{0}^{1} p(t) \bigl\vert x(t) \bigr\vert \,dt< +\infty, $$

Since \(\int_{0}^{1}p(t)\,dt=\infty\) by (2.1), this is possible for the continuous function x only if \(x(0)=0\). Therefore,

$$\lim_{t\to0} x(t)=0\quad \text{for all }x\in\mathbf{D}_{k}. $$

Inequality (3.2) means that the space \(\mathbf{D}_{k}\) is continuously embedded into AC. □

Let \(k>0\). From (3.1) it follows that a solution of (2.2) can have a finite limit at \(t=0+\) only if

$$ x(1)= \int_{0}^{1}\frac{f(s)}{x_{1}(s)} \,ds. $$
(3.3)

In this case a solution has the representation

$$ x(t)= \int_{0}^{t}\frac{x_{1}(t)}{x_{1}(s)}f(s) \,ds,\quad t\in(0,1], $$
(3.4)

and, obviously, has the zero limit at \(t=0+\). Indeed, it follows from (3.4) that

$$\bigl\vert x(t) \bigr\vert \leqslant \int_{0}^{t}\frac{x_{1}(t)}{x_{1}(s)} \bigl\vert f(s) \bigr\vert \,ds\leqslant \int _{0}^{t} \bigl\vert f(s) \bigr\vert \,ds, \quad t\in(0,1]. $$

Hence, \(\lim_{t\to0+} x(t) =0\) by the absolute continuity of the Lebesgue integral.
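As a concrete illustration (the data are our choice, not from the text): for \(k=1\), \(p(t)=1/t\), \(f\equiv1\), we have \(x_{1}(t)=1/t\), and representation (3.4) gives \(x(t)=\frac{1}{t}\int_{0}^{t}s\,ds=t/2\), which solves (2.2) and vanishes at \(0+\). A numerical check:

```python
# Illustrative data: k = 1, p(t) = 1/t, f = 1.
# Then x1(t) = exp(integral of 1/s over [t, 1]) = 1/t, and (3.4) gives
#   x(t) = (1/t) * integral of s over [0, t] = t/2.
def x(t):
    return t / 2

# x solves (2.2): x'(t) + p(t)*x(t) = 1, with x'(t) = 1/2.
for t in (0.001, 0.25, 0.5, 1.0):
    assert abs(0.5 + (1 / t) * x(t) - 1.0) < 1e-12

# zero limit at t -> 0+:
assert abs(x(1e-9)) < 1e-8
```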

Definition 6

For \(k>0\), denote by \(\mathbf{D}_{k}\) the space of all solutions of (2.2) satisfying condition (3.3) for all \(f\in \mathbf{L}\) and extended by zero at \(t=0\).

Lemma 4

The space \(\mathbf{D}_{k}\), \(k>0\), is a Banach space with respect to the norm

$$ \Vert x \Vert _{\mathbf{D}_{k}}= \int_{0}^{1} \bigl\vert (\mathcal{L}_{k} x) (s) \bigr\vert \,ds. $$

Proof

Equality (3.4) gives a one-to-one correspondence \({\mathcal {J}}:\mathbf{D}_{k}\to\mathbf{L}\):

$$ \begin{gathered} ({\mathcal {J}} f) (t)= x_{1}(t) \int_{0}^{t}\frac{f(s)}{x_{1}(s)} \,ds,\quad t\in(0,1],\quad f\in\mathbf{L}, \\ {\mathcal {J}}^{-1} x= \mathcal{L}_{k} x,\quad x\in \mathbf{D}_{k}. \end{gathered} $$
(3.5)

Since the space L is a Banach space, the lemma is proved. □

Lemma 5

If \(k>0\), the space \(\mathbf{D}_{k}\) is continuously embedded in AC.

Proof

We will show that the embedding \(x\to x:\mathbf{D}_{k}\to\mathbf{AC}\) is bounded. If \(f\in\mathbf{L}\) and \(x={\mathcal {J}} f\in\mathbf{D}_{k}\) is defined by (3.5), then

$$ \begin{gathered} \Vert x \Vert _{\mathbf{D}_{k}}= \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds, \\ \Vert x \Vert _{\mathbf{AC}}= \bigl\vert x(0) \bigr\vert + \int_{0}^{1} \bigl\vert \dot{x}(s) \bigr\vert \,ds\leqslant \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds+ \int_{0}^{1} \bigl(-\dot{x}_{1}(t)\bigr) \int_{0}^{t} \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds \,dt. \end{gathered} $$

Changing the integration order in the last integral, we have

$$ \begin{aligned} \int_{0}^{1} \bigl(-\dot{x}_{1}(t)\bigr) \int_{0}^{t} \frac{ \vert f(s) \vert }{ x_{1}(s) } \,ds \,dt &= \int_{0}^{1} \int_{s}^{1} \bigl(-\dot{x}_{1}(t)\bigr) \,dt \frac{ \vert f(s) \vert }{x_{1}(s)} \,ds \\ &= \int_{0}^{1} \frac{x_{1}(s)-1}{x_{1}(s)} \bigl\vert f(s) \bigr\vert \,ds\leqslant \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds. \end{aligned} $$

Therefore, the space \(\mathbf{D}_{k}\) is continuously embedded into AC:

$$ \Vert x \Vert _{\mathbf{AC}}\leqslant2 \int_{0}^{1} \bigl\vert f(s) \bigr\vert \,ds = 2 \Vert x \Vert _{\mathbf{D}_{k}}. $$

 □

Remark 3

For every \(k\ne0\), the sets \(\mathbf{D}_{k}\) and

$$ \widetilde{\mathbf{AC}}_{0}\equiv\bigl\{ y\in\mathbf{AC}: y(0)=0, py \in \mathbf{L}\bigr\} $$

coincide.

Proof

If \(x\in\mathbf{D}_{k}\), then \(x(0)=0\), \(x\in\mathbf{AC}\), and for some \(f\in\mathbf{L}\) the equality

$$p(t) x(t)=\frac{1}{k} \bigl(f(t)-\dot{x}(t) \bigr), \quad t\in(0,1], $$

holds. This implies that \(x\in\widetilde{\mathbf{AC}}_{0}\). Conversely, if \(x\in\widetilde{\mathbf{AC}}_{0}\), then for all k the function

$$f(t)=\dot{x}(t)+k p(t) x(t), \quad t\in(0,1], $$

is integrable; therefore, x belongs to \(\mathbf{D}_{k}\). □

So, we proved the following assertion.

Lemma 6

For every \(k\ne0\), the space \(\mathbf{D}_{k}\) is the Banach space of all absolutely continuous solutions to equation (2.2) for all \(f\in\mathbf{L}\). The space \(\mathbf{D}_{k}\) is embedded into the space AC continuously. Every solution to functional differential equation (2.3) belongs to the space \(\mathbf{D}_{k}\).

Now, for \(k<0\), consider the boundary value problem in the space \(\mathbf{D}_{k}\)

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(Tx)(t)+f(t),& t\in[0,1],\\ x(0)=0, x(1)=c, \end{cases} $$
(3.6)

where \(f\in\mathbf{L}\), \(c\in\mathbb{R}\).

Lemma 7

The boundary value problem (3.6) has the Fredholm property.

Proof

Let \(k<0\). The space \(\mathbf{D}_{k}\) is continuously embedded into AC by Lemma 3.

From (3.1) it follows that (3.6) is equivalent to the equation in the space \(\mathbf{D}_{k}\):

$$ x(t)=x_{1}(t)c- \int_{t}^{1}\frac{x_{1}(t)((Tx)(s)+f(s))}{x_{1}(s)} \,ds,\quad t\in(0,1], $$

which can be rewritten as

$$ x(t)=(\Lambda Tx) (t) + g, $$
(3.7)

where

$$\begin{gathered} (\Lambda z) (t)=- \int_{t}^{1}\frac{x_{1}(t)}{x_{1}(s)} z(s) \,ds,\quad t\in [0,1],\quad z\in\mathbf{L}, \\ g(t)=c x_{1}(t)-(\Lambda f) (t),\quad t\in[0,1], \end{gathered} $$

\(\Lambda:\mathbf{L}\to\mathbf{D}_{k}\) is a linear bounded operator, \(g\in\mathbf{D}_{k}\).

Since the space \(\mathbf{D}_{k}\) is continuously embedded in AC, which in turn is continuously embedded in C, the linear operator Λ acts from L into C and is bounded. So we can consider (3.7) in the space C: every solution from C is a solution from \(\mathbf{D}_{k}\) and vice versa.

By Lemma 2 in [34], the operator \(I-\Lambda T:\mathbf {C}\to\mathbf{C}\) (where \(I:\mathbf{C}\to\mathbf{C}\) is the identity operator) has the Fredholm property. We can also prove this directly. Every linear bounded operator \(T:\mathbf{C}\to \mathbf{L}\) is weakly completely continuous ([35], VI.4.5). Therefore, the operator \(\Lambda T:\mathbf{C}\to\mathbf{C}\) is weakly completely continuous, and the product of such operators, \((\Lambda T)^{2}:\mathbf{C}\to\mathbf{C}\), is completely continuous ([35], VI.7.5). By Nikolsky’s theorem ([36], p. 504), the operator \(I-\Lambda T\) has the Fredholm property. Thus, problem (3.6) has the Fredholm property. □

Corollary 8

If \(k<0\), problem (3.6) has a unique solution \(x\in\mathbf {D}_{k}\) for every \(f\in\mathbf{L}\), \(c\in\mathbb{R}\) if and only if the homogeneous problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(Tx)(t),& t\in[0,1],\\ x(0)=0,\quad x(1)=0, \end{cases} $$

has only the trivial solution in the space \(\mathbf{D}_{k}\).

Similarly, for \(k>0\), using the continuity of the embedding of the space \(\mathbf{D}_{k}\) into AC, we prove the following assertions.

Lemma 9

Let \(k>0\). The Cauchy problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(Tx)(t)+f(t),& t\in[0,1],\\ x(0)=0, \end{cases} $$
(3.8)

in the space \(\mathbf{D}_{k}\) has the Fredholm property.

Corollary 10

Let \(k>0\). The Cauchy problem (3.8) has a unique solution \(x\in\mathbf{D}_{k}\) for every \(f\in\mathbf{L}\) if and only if the homogeneous Cauchy problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=(Tx)(t),& t\in[0,1],\\ x(0)=0, \end{cases} $$

has only the trivial solution in the space \(\mathbf{D}_{k}\).

4 Proofs of main results

By Lemma 6, we can consider problems (2.5) and (2.7) in the spaces \(\mathbf{D}_{k}\). By Corollaries 8 and 10, to prove Theorems 1, 2, it is sufficient to prove that homogeneous problems (2.5) and (2.7) (for \(f=\mathbf{0}\), \(c=0\)) have only the trivial solution.

To prove Theorem 1, we need the following lemma, which can be proved similarly to the analogous assertions from [37–39].

Lemma 11

Suppose that nonnegative numbers \(\mathcal{T}^{+}\geqslant0\), \(\mathcal {T}^{-}\geqslant0\) are given. Let \(k>0\).

Problem (2.5) in the space \(\mathbf{D}_{k}\) has a unique solution for all linear positive operators \(T^{+}\), \(T^{-}:\mathbf{C}\to \mathbf{L}\) satisfying (2.4) if and only if the Cauchy problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=p_{1}(t)x(t_{1})+p_{2}(t)x(t_{2}),& t\in[0,1],\\ x(0)=0, \end{cases} $$
(4.1)

has only the trivial solution for all \(0\leqslant t_{1}< t_{2}\leqslant1\) and for all functions \(p_{1}\), \(p_{2}\in\mathbf{L}\) such that the conditions

$$ p_{1}+p_{2}=p^{+}-p^{-},\qquad -p^{-}(t) \leqslant p_{i}(t)\leqslant p^{+}(t),\quad t\in[0,1],\quad i=1,2, $$
(4.2)

are fulfilled for some nonnegative functions \(p^{+}\), \(p^{-}\in\mathbf {L}\) with the norm

$$ \bigl\Vert p^{+} \bigr\Vert _{\mathbf{L}}= \mathcal{T}^{+}, \qquad \bigl\Vert p^{-} \bigr\Vert _{\mathbf {L}}=\mathcal{T}^{-}. $$
(4.3)

Proof of Theorem 1

Consider problem (4.1), which is equivalent to the equation

$$ x(t)= \int_{0}^{t}\frac{x_{1}(t)}{x_{1}(s)}\bigl(p_{1}(s)x(t_{1})+p_{2}(s)x(t_{2}) \bigr) \,ds,\quad t\in[0,1], $$

in the space C. This equation has only the trivial solution if and only if

$$ \Delta=\left \vert \begin{matrix} 1-\int_{0}^{t_{1}}\frac{x_{1}(t_{1})}{x_{1}(s)}p_{1}(s) \,ds & -\int_{0}^{t_{1}}\frac {x_{1}(t_{1})}{x_{1}(s)}p_{2}(s) \,ds\\ -\int_{0}^{t_{2}} \frac{x_{1}(t_{2})}{x_{1}(s)}p_{1}(s) \,ds & 1-\int_{0}^{t_{2}} \frac{x_{1}(t_{2})}{x_{1}(s)}p_{2}(s) \,ds \end{matrix} \right \vert \ne0. $$

For all \(p_{1}\), \(p_{2}\) satisfying (4.2), (4.3), we have

$$ \Delta=\left \vert \begin{matrix} 1-\int_{0}^{t_{1}} \frac{x_{1}(t_{1})}{x_{1}(s)}p_{1}(s) \,ds & 1-\int_{0}^{t_{1}} \frac{x_{1}(t_{1})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds\\ -\int_{0}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}p_{1}(s) \,ds & 1-\int_{0}^{t_{2}} \frac{x_{1}(t_{2})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds \end{matrix} \right \vert . $$

Our aim is to determine for which \(\mathcal{T}^{+}\), \(\mathcal{T}^{-}\) the inequality \(\Delta>0\) is fulfilled for all \(0\leqslant t_{1}< t_{2}\leqslant1\) and for all \(p_{1}\), \(p^{+}\), \(p^{-}\) such that (4.2), (4.3) hold.

For \(p_{1}=0\), we have

$$ \Delta= 1- \int_{0}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}\bigl(p^{+}(s)-p^{-}(s) \bigr) \,ds. $$

Now, for sufficiently small \(t_{2}>0\) and fixed \(p^{+}\), \(p^{-}\), we have \(\Delta>0\). Therefore, the inequality \(\Delta\ne0\) is fulfilled for all admissible parameters if and only if \(\Delta>0\) for all admissible parameters.

For \(p_{1}=0\) and for fixed \(t_{2}\), Δ is minimal if the function \(p^{+}\in\mathbf{L}\) is ‘concentrated’ at the point \(s=t_{2}\), and the function \(p^{-}\in\mathbf{L}\) is ‘concentrated’ at the point \(s=0\). Therefore, the inequality

$$ \bigl\Vert p^{+} \bigr\Vert =\mathcal{T}^{+}\leqslant1 $$
(4.4)

is necessary for the inequality \(\Delta>0\) for all possible parameters.

We have

$$ \Delta=\alpha+ \int_{0}^{t_{1}} \biggl(\beta\frac{x_{1}(t_{2})}{x_{1}(s)}-\alpha \frac {x_{1}(t_{1})}{x_{1}(s)} \biggr)p_{1}(s) \,ds +\beta \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}p_{1}(s) \,ds, $$

where \(\alpha= 1-\int_{0}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds>0\), \(\beta= 1-\int_{0}^{t_{1}}\frac{x_{1}(t_{1})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds>0\), since inequality (4.4) holds. So, for fixed \(t_{1}< t_{2}\), \(p^{+}\), \(p^{-}\in\mathbf{L}\), the functional Δ takes its minimum if

$$p_{1}(t)=-p^{-}(t),\quad t\in[t_{1},t_{2}], $$

and

$$p_{1}(t)=p^{+}(t),\quad t\in[0,t_{1}),\quad \text{or}\quad p_{1}(t)=-p^{-}(t),\quad t\in[0,t_{1}). $$

If

$$p_{1}(t)=-p^{-}(t),\quad t\in[0,t_{2}], $$

it is easy to see that \(\Delta>0\) for any other parameters.

Consider the case

$$ p_{1}(t)= \textstyle\begin{cases} p^{+}(t),& t\in[0,t_{1}],\\ -p^{-}(t),& t\in(t_{1},t_{2}]. \end{cases} $$

Then

$$ \begin{aligned} \Delta&= \biggl(1- \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}p^{+}(s) \,ds \biggr) \biggl( 1- \int_{0}^{t_{1}}\frac{x_{1}(t_{2})}{x_{1}(s)}p^{+}(s) \,ds \biggr) \\ &\quad {}+ \int_{0}^{t_{1}}\frac{x_{1}(t_{1})}{x_{1}(s)}p^{-}(s) \,ds \biggl( \frac {x_{1}(t_{2})}{x_{1}(t_{1})}- \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{2})}{x_{1}(s)}p^{-}(s) \,ds \biggr). \end{aligned} $$

Now, Δ takes its minimal nonpositive value if the functions \(p^{+}\) and \(p^{-}\) are ‘concentrated’ at \(s=t_{1}\) on the interval \([0,t_{1}]\) and at \(s=t_{2}\) on the interval \([t_{1},t_{2}]\).

Using notation

$$ \begin{gathered} \mathcal{T}^{+}_{1}= \int_{0}^{t_{1}}p^{+}(s) \,ds,\qquad\mathcal{T}^{-}_{1}= \int _{0}^{t_{1}}p^{-}(s) \,ds, \\ \mathcal{T}^{+}_{2}= \int_{t_{1}}^{t_{2}}p^{+}(s) \,ds,\qquad\mathcal {T}^{-}_{2}= \int_{t_{1}}^{t_{2}}p^{-}(s) \,ds, \\ \mathcal{T}^{+}_{1}+\mathcal{T}^{+}_{2}\leqslant\mathcal{T}^{+}, \qquad \mathcal{T}^{-}_{1}+\mathcal{T}^{-}_{2}\leqslant \mathcal{T}^{-}, \end{gathered} $$

we get

$$ \Delta>\Delta_{1}\equiv\bigl(1-\mathcal{T}_{1}^{+}\bigr) \bigl(1-\mathcal {T}_{2}^{+}\bigr)+\mathcal{T}_{1}^{-} \biggl( \frac{x_{1}(t_{2})}{x_{1}(t_{1})}-\mathcal {T}_{2}^{-} \biggr). $$

It is easy to see that \(\Delta_{1}\) takes its minimal value in the case of

$$\mathcal{T}^{+}_{1}=0,\qquad\mathcal{T}^{+}_{2}=\mathcal{T}^{+}, \qquad\mathcal {T}^{-}_{1}=\mathcal{T}^{-}_{2}=\mathcal{T}^{-}/2, \qquad\frac{x_{1}(t_{2})}{x_{1}(t_{1})}=0. $$

Then

$$ \Delta_{1}=1-\mathcal{T}^{+}-\bigl(\mathcal{T}^{-}\bigr)^{2}/4. $$
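The extremal value \(1-\mathcal{T}^{+}-(\mathcal{T}^{-})^{2}/4\) can also be confirmed by a brute-force grid search over the splits \(\mathcal{T}^{\pm}_{1}+\mathcal{T}^{\pm}_{2}=\mathcal{T}^{\pm}\) (a numerical sketch, not part of the proof; the sample values of \(\mathcal{T}^{+}\), \(\mathcal{T}^{-}\) are ours):

```python
# Minimize Delta_1 = (1 - T1p)(1 - T2p) + T1m * (r - T2m) over the
# splits T1p + T2p = Tp and T1m + T2m = Tm (where the minimum occurs),
# with the ratio r = x1(t2)/x1(t1) pushed to its infimum r = 0,
# and compare with the claimed extremal value 1 - Tp - Tm**2 / 4.
Tp, Tm = 0.5, 1.0   # sample norms satisfying (2.6)
n = 50
best = float('inf')
for i in range(n + 1):
    a = Tp * i / n                  # T1+ ; then T2+ = Tp - a
    for j in range(n + 1):
        c = Tm * j / n              # T1- ; then T2- = Tm - c
        d1 = (1 - a) * (1 - (Tp - a)) + c * (0.0 - (Tm - c))
        best = min(best, d1)
expected = 1 - Tp - Tm ** 2 / 4
assert abs(best - expected) < 1e-9
```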

Therefore, \(\Delta>0\) for all possible parameters if and only if inequalities (2.6) are fulfilled. Note that if the non-strict inequalities (2.6) are fulfilled, then \(\Delta>0\) for all admissible integrable \(p_{1}\), \(p^{+}\), \(p^{-}\) and all \(0\leqslant t_{1}< t_{2}\leqslant1\). □

Now consider problem (2.7) for \(k<0\).

To prove Theorem 2, we need an analog of Lemma 11 (it can be proved similarly to appropriate statements from [37, 38]).

Lemma 12

Suppose nonnegative numbers \(\mathcal{T}^{+}\geqslant0\), \(\mathcal {T}^{-}\geqslant0\) are given. Let \(k<0\).

Problem (2.7) in the space \(\mathbf{D}_{k}\) has a unique solution for all linear positive operators \(T^{+}\), \(T^{-}:\mathbf{C}\to \mathbf{L}\) satisfying (2.4) if and only if the boundary value problem

$$ \textstyle\begin{cases} (\mathcal{L}_{k} x)(t)=p_{1}(t) x(t_{1})+p_{2}(t) x(t_{2}),& t\in[0,1],\\ x(0)=0, x(1)=0, \end{cases} $$
(4.5)

has in the space \(\mathbf{D}_{k}\) only the trivial solution for all \(0\leqslant t_{1}< t_{2}\leqslant1\) and for all functions \(p_{1}\), \(p_{2}\in\mathbf{L}\) such that the conditions

$$ p_{1}+p_{2}=p^{+}-p^{-},\qquad -p^{-}(t) \leqslant p_{i}(t)\leqslant p^{+}(t),\quad t\in[0,1],\quad i=1,2, $$
(4.6)

are fulfilled for some nonnegative functions \(p^{+}\), \(p^{-}\in\mathbf {L}\) with the norm

$$ \bigl\Vert p^{+} \bigr\Vert _{\mathbf{L}}= \mathcal{T}^{+}, \qquad \bigl\Vert p^{-} \bigr\Vert _{\mathbf {L}}=\mathcal{T}^{-}. $$
(4.7)

Proof of Theorem 2

Problem (4.5) is equivalent to the equation

$$ x(t)=- \int_{t}^{1}\frac{x_{1}(t)}{x_{1}(s)}\bigl(p_{1}(s)x(t_{1})+p_{2}(s)x(t_{2}) \bigr) \,ds,\quad t\in(0,1], $$

in the space C. This equation has only the trivial solution if and only if

$$ \Delta=\left \vert \begin{matrix} 1+\int_{t_{1}}^{1}\frac{x_{1}(t_{1})}{x_{1}(s)}p_{1}(s) \,ds & \int _{t_{1}}^{1}\frac{x_{1}(t_{1})}{x_{1}(s)}p_{2}(s) \,ds\\ \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}p_{1}(s) \,ds & 1+\int _{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}p_{2}(s) \,ds \end{matrix} \right \vert \ne0. $$
(4.8)

For all \(p_{1}\), \(p_{2}\) satisfying (4.6), (4.7), we have

$$ \Delta=\left \vert \begin{matrix} 1+\int_{t_{1}}^{1}\frac{x_{1}(t_{1})}{x_{1}(s)}p_{1}(s) \,ds & 1+\int _{t_{1}}^{1}\frac{x_{1}(t_{1})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds\\ \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}p_{1}(s) \,ds & 1+\int _{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}(p^{+}(s)-p^{-}(s)) \,ds \end{matrix} \right \vert . $$

Now we have to determine for which \(\mathcal{T}^{+}\), \(\mathcal{T}^{-}\) the inequality \(\Delta>0\) is fulfilled for all functions \(p_{1}\), \(p^{+}\), \(p^{-}\) satisfying (4.6), (4.7) and for all \(0\leqslant t_{1}< t_{2}\leqslant1\).

If \(p_{1}=0\), then

$$ \Delta= 1+ \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}\bigl(p^{+}(s)-p^{-}(s) \bigr) \,ds. $$

For sufficiently small \(1-t_{2}>0\), we have \(\Delta>0\). Therefore, the condition \(\Delta\ne0\) in (4.8) can be replaced by \(\Delta>0\).

For fixed \(t_{2}\), the value of Δ is minimal if the function \(p^{+}\in\mathbf{L}\) is ‘concentrated’ at \(s=0\), and the function \(p^{-}\in\mathbf{L}\) is ‘concentrated’ at \(s=t_{2}\). Therefore, the condition

$$ \bigl\Vert p^{-} \bigr\Vert _{\mathbf{L}}=\mathcal{T}^{-} \leqslant1 $$
(4.9)

is necessary to provide \(\Delta>0\) for all possible parameters.

We have

$$ \Delta=\alpha+ \int_{t_{2}}^{1} \biggl(\alpha\frac{x_{1}(t_{1})}{x_{1}(s)}- \beta \frac{x_{1}(t_{2})}{x_{1}(s)} \biggr)p_{1}(s) \,ds +\alpha \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{1})}{x_{1}(s)}p_{1}(s) \,ds, $$

where

$$ \begin{gathered} \alpha\equiv1+ \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}\bigl(p^{+}(s)-p^{-}(s) \bigr) \,ds>0, \\ \beta\equiv1+ \int_{t_{1}}^{1}\frac{x_{1}(t_{1})}{x_{1}(s)}\bigl(p^{+}(s)-p^{-}(s) \bigr) \,ds>0, \end{gathered} $$

if inequality (4.9) holds. Hence, for fixed \(t_{1}< t_{2}\), \(p^{+}\), \(p^{-}\), the functional Δ takes its minimal value if

$$p_{1}(t)=-p^{-}(t),\quad t\in[t_{1},t_{2}], $$

and

$$p_{1}(t)=p^{+}(t),\quad t\in[t_{2},1]\quad \text{or}\quad p_{1}(t)=-p^{-}(t),\quad t\in[t_{2},1]. $$

If

$$p_{1}(t)=-p^{-}(t),\quad t\in[t_{1},1], $$

then it is easy to see that \(\Delta>0\) for any other parameters. Consider the case

$$ p_{1}(t)= \textstyle\begin{cases} -p^{-}(t),& t\in[t_{1},t_{2}],\\ p^{+}(t),& t\in(t_{2},1]. \end{cases} $$

Then

$$ \begin{aligned} \Delta&= \biggl(1- \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{1})}{x_{1}(s)}p^{-}(s) \,ds \biggr) \biggl( 1- \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}p^{-}(s) \,ds \biggr) \\ &\quad {}+ \int_{t_{2}}^{1}\frac{x_{1}(t_{2})}{x_{1}(s)}p^{+}(s) \,ds \biggl( \frac {x_{1}(t_{1})}{x_{1}(t_{2})}- \int_{t_{1}}^{t_{2}}\frac{x_{1}(t_{1})}{x_{1}(s)}p^{+}(s) \,ds \biggr). \end{aligned} $$

Now Δ takes its nonpositive minimum if the functions \(p^{+}\) and \(p^{-}\) are ‘concentrated’ at \(s=t_{1}\) on the interval \([t_{1},t_{2}]\) and at the point \(s=t_{2}\) on the interval \([t_{2},1]\).

Using the notation

$$ \begin{gathered} \mathcal{T}^{+}_{1}= \int_{t_{1}}^{t_{2}}p^{+}(s) \,ds,\qquad\mathcal {T}^{-}_{1}= \int_{t_{1}}^{t_{2}}p^{-}(s) \,ds, \\ \mathcal{T}^{+}_{2}= \int_{t_{2}}^{1}p^{+}(s) \,ds,\qquad\mathcal{T}^{-}_{2}= \int _{t_{2}}^{1}p^{-}(s) \,ds, \end{gathered} $$

we get

$$ \Delta>\Delta_{1}\equiv\bigl(1-\mathcal{T}_{1}^{-}\bigr) \bigl(1-\mathcal {T}_{2}^{-}\bigr)+\mathcal{T}_{2}^{+} \biggl( \frac{x_{1}(t_{1})}{x_{1}(t_{2})}-\mathcal {T}_{1}^{+} \biggr). $$

It is easy to see that \(\Delta_{1}\) takes its minimal value if, for example,

$$\mathcal{T}^{-}_{1}=0,\qquad\mathcal{T}^{-}_{2}=\mathcal{T}^{-}, \qquad\mathcal {T}^{+}_{1}=\mathcal{T}^{+}_{2}=\mathcal{T}^{+}/2, \qquad\frac{x_{1}(t_{1})}{x_{1}(t_{2})}=0. $$

Then

$$ \Delta_{1}=1-\mathcal{T}^{-}-\bigl(\mathcal{T}^{+}\bigr)^{2}/4. $$

Hence, \(\Delta>0\) for all possible parameters if and only if inequalities (2.8) are fulfilled. □