1 Introduction

Let \(A=(a_{ij})_{n\times n}\) be a square matrix of order n, and let \(\alpha:[0,1]\rightarrow \mathbb{R}\) be a function of bounded variation. Define \(\int _{0}^{1}u(t)\,d\alpha (t)=(\int _{0}^{1}u_{1}(t)\,d\alpha (t),\int _{0}^{1}u_{2}(t)\,d\alpha (t),\ldots , \int _{0}^{1}u_{n}(t)\,d\alpha (t))^{T}\), where \(\int _{0}^{1}u_{i}(t)\,d \alpha (t)\) denotes the Riemann–Stieltjes integral of \(u_{i}\) with respect to α and \(u^{T}\) denotes the transpose of the row vector u. Set \(h=\int _{0}^{1}t\,d\alpha (t)\) and \(B=hA\).

In this paper, we will study the existence of solutions for the following integral boundary value problem at resonance in \(\mathbb{R} ^{n}\):

$$ \textstyle\begin{cases} -u''(t)=f(t,u(t),u'(t)),\quad t\in (0,1), \\ u(0)=0,\qquad u(1)=A\int _{0}^{1}u(t)\,d\alpha (t), \end{cases} $$
(1.1)

under the following assumptions:

  1. (H1)

    B is a diagonalizable matrix, and \(\det (I-B)=0\);

  2. (H2)

    \(\int _{0}^{1}t(1-t)\,d\alpha (t)\neq0\);

  3. (H3)

    \(f:[0,1]\times \mathbb{R}^{2n}\rightarrow \mathbb{R}^{n}\) satisfies the Carathéodory conditions.

Under condition (\(H_{1}\)), the associated linear problem \(-u''(t)=0\), \(u(0)=0\), \(u(1)=A\int _{0}^{1}u(t)\,d\alpha (t)\) has a nontrivial solution \(u(t)=\psi t\) with \(\psi \in \operatorname{Ker}(I-B)\). This means that problem (1.1) is a resonant integral boundary value problem (IBVP).
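As a numerical illustration (not part of the analysis), condition (H1) and the nontrivial solution \(u(t)=\psi t\) can be checked for a simple choice of data. The following sketch assumes \(\alpha (t)=t\) (so \(h=1/2\)) and \(A=2I_{2}\), which gives \(B=hA=I_{2}\) and \(\det (I-B)=0\).

```python
import numpy as np

# Assumed data: alpha(t) = t, so d(alpha) = dt and h = ∫_0^1 t dt = 1/2;
# A = 2*I_2 gives B = h*A = I_2, hence det(I - B) = 0 (resonance, (H1)).
A = 2.0 * np.eye(2)
h = 0.5                                  # h = ∫_0^1 t d(alpha)(t)
B = h * A
assert abs(np.linalg.det(np.eye(2) - B)) < 1e-12      # (H1) holds

# Any psi in Ker(I - B) = R^2 gives a nontrivial solution u(t) = psi*t of
# -u'' = 0, u(0) = 0, u(1) = A ∫_0^1 u(t) d(alpha)(t).
psi = np.array([1.0, -3.0])
N = 100000
ts = (np.arange(N) + 0.5) / N            # midpoint quadrature nodes
integral = (ts[:, None] * psi[None, :]).mean(axis=0)  # ≈ ∫_0^1 psi*t dt
assert np.allclose(A @ integral, psi, atol=1e-9)      # u(1) = psi
```

Here the quadrature is exact up to rounding because the integrand is linear in t.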

Integral boundary value problems of this form arise in different areas of applied mathematics and physics such as heat conduction, thermoelasticity, underground water flow, and plasma physics. Moreover, integral boundary value problems constitute a very important class of problems based on the fact that two-point, three-point, multi-point and nonlocal boundary value problems can be treated as special cases of Riemann–Stieltjes integral boundary value problems. As a result, the existence of solutions for such problems has received great attention (see [1, 5,6,7,8,9, 11, 12, 14, 15, 23, 26,27,28]). It is well known that, when \(n = 1\), the existence theory of integral boundary value problems for ordinary differential equations or fractional differential equations has been well studied; we refer the reader to [4, 10, 17, 20, 21, 24, 25, 29,30,31,32,33,34,35, 37] for some recent results at non-resonance and to [2, 3, 16, 18, 22, 23, 27, 36] for results at resonance. When \(n\geq 2\) and A is not a diagonal matrix, IBVP (1.1) becomes a system of ordinary differential equations with coupled boundary conditions. Differential systems with coupled integral boundary conditions can be applied to reaction–diffusion phenomena, interaction problems and Lotka–Volterra models. Recently, there have been many papers addressing the existence of solutions for differential systems of coupled integral boundary value problems; see, for example, [1, 3, 5,6,7, 9, 11,12,13,14].

To the best of our knowledge, the solvability of problem (1.1) at resonance has not been considered before. The main purpose of this paper is to establish an existence result for problem (1.1) when \(n\geq 2\). Our main method is based on the coincidence degree theory of Mawhin and the theory of matrix diagonalization in linear algebra.

We end this section by recalling some notations and abstract results from coincidence degree theory.

Let X and Y be two real Banach spaces, \(L: \operatorname{dom}L \subset X \rightarrow Y\) be a linear Fredholm operator of index zero, and \(P: X\rightarrow X\) and \(Q: Y\rightarrow Y\) be two continuous projectors such that

$$ \operatorname{Im} P=\operatorname{ker} L,\qquad \operatorname{ker} Q=\operatorname{Im} L,\qquad X= \operatorname{ker} L\oplus \operatorname{ker} P,\qquad Y=\operatorname{Im} L\oplus \operatorname{Im} Q. $$

It follows from the above equalities that the reduced operator

$$ L|_{\operatorname{dom}L\cap \operatorname{Ker}P}: \operatorname{dom}L\cap \operatorname{ker} P\rightarrow \operatorname{Im} L $$

is invertible. We denote its inverse by \(K_{P}\) (the generalized inverse operator of L). If Ω is an open bounded subset of X such that \(\operatorname{dom}L\cap \varOmega \neq\emptyset \), the mapping \(N: X\rightarrow Y\) will be called L-compact on Ω̅ if \(QN(\overline{ \varOmega })\) is bounded and \(K_{P}(I-Q)N: \overline{\varOmega }\rightarrow X\) is compact.

We make use of the following result from Mawhin [19].

Theorem 1.1

([19] (Mawhin continuation theorem))

Let \(L: \operatorname{dom}L \subset X \rightarrow Y\) be a Fredholm operator of index zero and N be L-compact on Ω̅. The equation \(L\varphi =N\varphi \) has at least one solution in \(\operatorname{dom}L \cap \overline{ \varOmega }\) if the following conditions are satisfied:

  1. (1)

    \(L\varphi \neq \lambda N\varphi \) for every \((\varphi, \lambda ) \in [(\operatorname{dom}L\backslash \operatorname{ker}L)\cap \partial \varOmega ] \times (0,1)\);

  2. (2)

    \(N\varphi \notin \operatorname{Im}L\) for every \(\varphi \in \operatorname{ker}L \cap \partial \varOmega \);

  3. (3)

    \(\operatorname{deg} (JQN|_{\operatorname{ker}L},\varOmega \cap \operatorname{ker}L,0 ) \neq 0\), where \(J:\operatorname{Im} Q\rightarrow \operatorname{Ker} L\) is some isomorphism.

2 Preliminaries

We use the classical spaces \(X=C^{1}([0,1], \mathbb{R}^{n})\) and \(Y=L^{1}([0,1], \mathbb{R}^{n})\). For \(u\in X\), we use the norm \(\|u\|_{X}=\max \{\|u\|_{\infty }, \|u'\|_{\infty }\}\), where \(\|u\|_{\infty }=\max_{t\in [0,1]}\{|u_{1}(t)|, |u_{2}(t)|, \ldots,|u_{n}(t)|\}\), and denote the norm in \(L^{1}([0,1], \mathbb{R}^{n})\) by \(\|u\|_{1}=\max_{1\leq i\leq n}\int _{0} ^{1}|u_{i}(t)|\,dt\). We also use the Sobolev space defined by

$$ X_{0}=\bigl\{ u\in X: u' \text{ is absolutely continuous on } [0,1] \text{ and } u''\in Y \bigr\} . $$

We define L to be the linear operator from \(\operatorname{dom}L\subset X\) to Y with

$$ \operatorname{dom}L= \biggl\{ u\in X_{0}: u(0)=0, u(1)=A \int _{0}^{1}u(t)\,d\alpha (t) \biggr\} $$

and, for \(u\in \operatorname{dom}L\), \(Lu=-u''\). Let \(N: X\rightarrow Y\) be the nonlinear operator defined by

$$ (Nu) (t)=f\bigl(t,u(t),u'(t)\bigr),\quad t\in [0,1]. $$

Thus, problem (1.1) can be written as \(Lu=Nu\).

Lemma 2.1

The following results hold:

  1. (1)

    \(\operatorname{Ker} L=\{u\in X: u(t)=\psi t, \psi \in \operatorname{Ker}(I-B)\subset \mathbb{R}^{n}\}\).

  2. (2)

    \(\operatorname{Im} L=\{v\in Y: \varphi (v)\in \operatorname{Im}(I-B)\}\), where \(\varphi:Y\rightarrow \mathbb{R} ^{n}\) is a linear operator defined by

    $$ \varphi (v)=A \int _{0}^{1} \int _{0}^{1}G(t,s)v(s)\,ds\,d\alpha (t), $$
    (2.1)

    where

    $$ G(t,s)=\textstyle\begin{cases} t(1-s),& 0\leq t\leq s\leq 1, \\ s(1-t),&0\leq s\leq t\leq 1. \end{cases} $$
    (2.2)

Proof

(1) For \(u\in \operatorname{Ker} L\), we obtain \(-u''=0\). Then \(u(t)=\psi t+\psi _{1}\) with \(\psi,\psi _{1} \in \mathbb{R}^{n}\). Taking into account the boundary conditions \(u(0)=0\) and \(u(1)=A\int _{0} ^{1}u(t)\,d\alpha (t)\), we conclude that \(\psi _{1}=0\) and \(\psi =A\int _{0}^{1}\psi t\,d\alpha (t)=A(h\psi )=B\psi \). Thus, \(\psi \in \operatorname{Ker}(I-B)\). Conversely, if \(u(t)=\psi t\) with \(\psi \in \operatorname{Ker}(I-B)\), then \(u\in \operatorname{Ker} L\).

(2) For \(v\in \operatorname{Im} L\), there exists \(u\in \operatorname{dom}L\) such that \(-u''(t)=v(t)\). Thus,

$$ u(t)=u(0) (1-t)+u(1)t+ \int _{0}^{1}G(t,s)v(s)\,ds. $$
(2.3)

Using the boundary conditions \(u(0)=0\) and \(u(1)=A\int _{0}^{1}u(t)\,d \alpha (t)\), it follows from (2.3) that

$$\begin{aligned} u(1)& =A \int _{0}^{1}u(t)\,d\alpha (t)=A \int _{0}^{1} \biggl(u(1)t+ \int _{0}^{1}G(t,s)v(s)\,ds \biggr)\,d \alpha (t) \\ & =A\bigl(h u(1)\bigr)+\varphi (v)=B u(1)+\varphi (v). \end{aligned}$$

This implies that \(\varphi (v)=(I-B)u(1)\). Thus, \(v\in \{v\in Y: \varphi (v)\in \operatorname{Im}(I-B)\}\). Conversely, if \(v\in Y\) and \(\varphi (v)\in \operatorname{Im}(I-B)\), let

$$ u(t)=\xi t+ \int _{0}^{1}G(t,s)v(s)\,ds, $$

where \(\xi \in \mathbb{R}^{n}\) satisfies

$$ (I-B)\xi =\varphi (v). $$
(2.4)

Then \(-u''(t)=v(t)\), \(u(0)=0\), \(u(1)=\xi \) and

$$ A \int _{0}^{1}u(t)\,d\alpha (t)=A\xi \int _{0}^{1} t\,d\alpha (t) +\varphi (v)=B\xi + \varphi (v)=\xi. $$

Thus, \(u(1)=A\int _{0}^{1}u(t)\,d\alpha (t)\) and \(v\in \operatorname{Im} L\). This completes the proof. □
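The role of the Green's function (2.2) can be illustrated numerically in the scalar case. The sketch below (an illustration only, assuming \(n=1\) and \(v(s)=\sin (\pi s)\)) verifies that \(\int _{0}^{1}G(t,s)v(s)\,ds\) reproduces the known solution \(\sin (\pi t)/\pi ^{2}\) of \(-u''=v\), \(u(0)=u(1)=0\).

```python
import numpy as np

# Scalar illustration of the Green's function (2.2): the function
# u(t) = ∫_0^1 G(t,s) v(s) ds solves -u'' = v with u(0) = u(1) = 0;
# for v(s) = sin(pi*s) the exact solution is u(t) = sin(pi*t)/pi^2.
N = 200000
s = (np.arange(N) + 0.5) / N             # midpoint quadrature nodes

def u(t):
    G = np.where(t <= s, t * (1 - s), s * (1 - t))   # G(t, s) from (2.2)
    return np.mean(G * np.sin(np.pi * s))            # ≈ ∫_0^1 G(t,s) v(s) ds

for t in (0.0, 0.25, 0.5, 0.9):
    assert abs(u(t) - np.sin(np.pi * t) / np.pi**2) < 1e-7
```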

Recall that a matrix is diagonalizable over the field \(\mathbb{R}\) if and only if its minimal polynomial is a product of distinct linear factors over \(\mathbb{R}\). Thus, it follows from \((H_{1})\) that the minimal polynomial of the matrix B can be written as

$$ \mu _{B}(x)=(1-x)g(x), $$
(2.5)

where \(1-x\) and \(g(x)\) are relatively prime polynomials. Hence, there exist polynomials \(a(x)\) and \(b(x)\) such that

$$ (1-x)a(x)+g(x)b(x)=1. $$

From this, we conclude that \((I-B)a(B)+g(B)b(B)=I\). Thus,

$$ \begin{aligned} &\operatorname{Im} (I-B) =\operatorname{Ker} g(B), \\ &\operatorname{Ker} (I-B) =\operatorname{Im} g(B), \\ &\mathbb{R}^{n} = \operatorname{Im} (I-B)\oplus \operatorname{Im} g(B). \end{aligned} $$
(2.6)

Moreover, by (2.5), we have \((I-B)g(B)=0\), that is,

$$ g(B)B=g(B). $$
(2.7)

Consequently, we deduce that

$$ g(B)g(B)=g(1)g(B), $$
(2.8)

where \(g(1)\neq0\) follows from the fact that \(1-x\) and \(g(x)\) are relatively prime.
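The identities (2.6)–(2.8) can be verified directly for a concrete diagonalizable matrix. The following sketch assumes \(B=\left({\scriptsize\begin{matrix}0&1\\1&0\end{matrix}}\right)\), for which \(B^{2}=I\), so the minimal polynomial is \(1-x^{2}=(1-x)(1+x)\), that is, \(g(x)=1+x\) and \(g(1)=2\).

```python
import numpy as np

# Assumed example: B = [[0,1],[1,0]] satisfies B^2 = I, so its minimal
# polynomial is 1 - x^2 = (1 - x)(1 + x), i.e. g(x) = 1 + x, g(1) = 2.
B = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
gB = I + B                               # g(B)

assert np.allclose((I - B) @ gB, 0)      # (I - B) g(B) = 0, cf. (2.5)
assert np.allclose(gB @ B, gB)           # (2.7): g(B) B = g(B)
assert np.allclose(gB @ gB, 2 * gB)      # (2.8): g(B)^2 = g(1) g(B)

# (2.6): here Ker(I - B) = Im g(B) is spanned by (1, 1)^T, while
# Im(I - B) = Ker g(B) is spanned by (1, -1)^T.
assert np.allclose((I - B) @ np.array([1.0, 1.0]), 0)
assert np.allclose(gB @ np.array([1.0, -1.0]), 0)
```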

Example 2.1

When \(B^{m}=B\) with \(2\leq m\leq n\), the minimal polynomial of B is

$$ \mu _{B}(x)=x-x^{m}=(1-x) \bigl(x+x^{2}+\cdots +x^{m-1}\bigr)=(1-x)g(x). $$

Thus, we have

$$ g(1)=m-1. $$

When \(B^{m}=I\) with \(2\leq m\leq n\), the minimal polynomial of B can be explicitly given by

$$ \mu _{B}(x)=1-x^{m}=(1-x) \bigl(1+x+x^{2}+\cdots +x^{m-1}\bigr)=(1-x)g(x). $$

Thus, we obtain

$$ g(1)=m. $$

Lemma 2.2

L is a Fredholm operator of index zero.

Proof

We define an operator \(Q: Y\rightarrow Y\) by

$$ (Qv) (t)=kg(B)\varphi (v),\quad v\in Y, $$

where φ is given in (2.1) and \(k=\frac{2h}{g(1)\int _{0}^{1}t(1-t)\,d\alpha (t)}\). Note that if \(w(t)=\psi \) with \(\psi \in \mathbb{R}^{n}\), we have

$$\begin{aligned} (Qw) (t)&= kg(B)A\psi \int _{0}^{1} \int _{0}^{1}G(t,s)\,ds\,d\alpha (t) \\ &= kg(B)A\psi \int _{0}^{1}\frac{t(1-t)}{2}\,d\alpha (t)= \frac{h}{g(1)}g(B)A \psi \\ &= \frac{1}{g(1)}g(B)B\psi =\frac{1}{g(1)}g(B)\psi. \end{aligned}$$

Hence,

$$\begin{aligned} \bigl(Q^{2}v\bigr) (t)&= \frac{1}{g(1)}g(B) \bigl(kg(B)\varphi (v)\bigr) \\ &= \frac{k}{g(1)}g(B)g(B)\varphi (v) \\ &= kg(B)\varphi (v)=(Qv) (t). \end{aligned}$$

Therefore, the map Q is a continuous linear projector. Moreover, by (2.6) and Lemma 2.1, we have

$$ v\in \operatorname{Ker} Q\quad \Leftrightarrow\quad \varphi (v)\in \operatorname{Ker} g(B) \quad\Leftrightarrow\quad \varphi (v)\in \operatorname{Im}(I-B) \quad\Leftrightarrow\quad v\in \operatorname{Im} L. $$

This means that \(\operatorname{Ker} Q=\operatorname{Im} L\). For \(v\in Y\), \(v-Qv \in \operatorname{Ker} Q=\operatorname{Im} L\). Therefore, \(Y=\operatorname{Im} L+ \operatorname{Im} Q\); moreover, \(\operatorname{Im} L\cap \operatorname{Im} Q=\{0\}\), since \(v\in \operatorname{Im} L\cap \operatorname{Im} Q\) implies \(v=Qv=0\). Hence, \(Y=\operatorname{Im} L\oplus \operatorname{Im} Q\). Combining the previous results with the fact that ImL is closed, we conclude that L is a Fredholm operator of index zero. □
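The projection property \(Q^{2}=Q\) can also be checked numerically. The sketch below is an illustration with assumed data \(\alpha (t)=t\) and \(A=\left({\scriptsize\begin{matrix}0&2\\2&0\end{matrix}}\right)\), so that \(h=1/2\), \(B=\left({\scriptsize\begin{matrix}0&1\\1&0\end{matrix}}\right)\), \(g(x)=1+x\), \(g(1)=2\), and \(k=2h/(g(1)\int _{0}^{1}t(1-t)\,dt)=3\).

```python
import numpy as np

# Numerical check that Q v = k g(B) phi(v) is a projector, assuming
# alpha(t) = t and A = [[0,2],[2,0]]: then h = 1/2, B = [[0,1],[1,0]],
# g(x) = 1 + x, g(1) = 2, and k = 2h/(g(1) * 1/6) = 3.
A = np.array([[0.0, 2.0], [2.0, 0.0]])
B = 0.5 * A
gB = np.eye(2) + B
k = 3.0

N = 2000
nodes = (np.arange(N) + 0.5) / N                       # midpoint nodes
Gm = np.minimum.outer(nodes, nodes) * (1 - np.maximum.outer(nodes, nodes))

def phi(vs):
    # phi(v) = A ∫_0^1 ∫_0^1 G(t,s) v(s) ds d(alpha)(t) with alpha(t) = t
    inner = (Gm @ vs) / N              # ∫_0^1 G(t,s) v(s) ds at each t-node
    return A @ inner.mean(axis=0)      # then integrate over t

def Q(vs):
    return k * gB @ phi(vs)            # Qv is a constant vector in R^2

v = np.stack([nodes, np.ones_like(nodes)], axis=1)     # v(s) = (s, 1)
Qv = Q(v)
QQv = Q(np.tile(Qv, (N, 1)))           # apply Q to the constant function Qv
assert np.allclose(QQv, Qv, atol=1e-4)                 # Q^2 = Q
```

The tolerance absorbs the midpoint-rule quadrature error; the identity itself holds exactly, as shown in the proof.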

In what follows, we make the following assumption on the matrix B:

  1. (H4)

    There exists \(l\in \mathbb{R}\) such that \(l(I-B)(I-B)=(I-B)\).

Example 2.2

When \(B^{2}=B\), we can take \(l=1\) such that

$$ (I-B) (I-B)=I-2B+B^{2}=I-B. $$

If \(B^{2}=I\), we can take \(l=\frac{1}{2}\) such that

$$ l(I-B) (I-B)=\frac{1}{2}\bigl(I-2B+B^{2}\bigr)= \frac{1}{2}(2I-2B)=I-B. $$
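Both cases of Example 2.2 can be checked with concrete matrices; the sketch below uses assumed examples \(B=\left({\scriptsize\begin{matrix}1&1\\0&0\end{matrix}}\right)\) (a projection, \(B^{2}=B\)) and \(B=\left({\scriptsize\begin{matrix}0&1\\1&0\end{matrix}}\right)\) (an involution, \(B^{2}=I\)).

```python
import numpy as np

I = np.eye(2)

# Case B^2 = B (a projection): (H4) holds with l = 1.
B = np.array([[1.0, 1.0], [0.0, 0.0]])   # assumed example with B @ B = B
assert np.allclose(B @ B, B)
assert np.allclose(1.0 * (I - B) @ (I - B), I - B)    # (H4), l = 1

# Case B^2 = I (an involution): (H4) holds with l = 1/2.
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # assumed example with B @ B = I
assert np.allclose(B @ B, I)
assert np.allclose(0.5 * (I - B) @ (I - B), I - B)    # (H4), l = 1/2
```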

Lemma 2.3

Assuming that (H4) holds,

$$ l(I-B)\psi =\psi,\quad \forall \psi \in \operatorname{Im}(I-B). $$

Proof

Let \(\psi \in \operatorname{Im}(I-B)\), so that \(\psi =(I-B)\psi _{1}\) for some \(\psi _{1}\in \mathbb{R}^{n}\). Using condition (H4), we have

$$ l(I-B)\psi =l(I-B) (I-B)\psi _{1}=(I-B)\psi _{1}=\psi. $$

 □

Lemma 2.4

If Ω is an open bounded subset such that \(\operatorname{dom}L \cap \varOmega \neq\emptyset \), then N is L-compact on Ω̅.

Proof

Define the linear operator \(P: X\rightarrow X\) by

$$ (Pu) (t)=\frac{1}{g(1)}g(B)u(1)t. $$

Then we have

$$\begin{aligned} \bigl(P^{2}u\bigr) (t)&=\frac{1}{g(1)}g(B) \biggl( \frac{1}{g(1)}g(B)u(1) \biggr)t \\ &= \frac{1}{g^{2}(1)}g(B)g(B)u(1)t=\frac{1}{g(1)}g(B)u(1)t=(Pu) (t). \end{aligned}$$

This shows that P is a continuous projection operator. We now assert that \(\operatorname{Im} P= \operatorname{Ker} L\). In fact, if \(v\in \operatorname{Im} P\), there is \(u\in X\) such that

$$ v(t)=Pu(t)=\frac{1}{g(1)}g(B)u(1)t. $$

Thus, it follows from (2.6) that \(v\in \operatorname{Ker} L\). Conversely, if \(v\in \operatorname{Ker} L\), we have

$$ v(t)=\psi t,\quad \psi \in \operatorname{Ker}(I-B). $$

On account of the second identity in (2.6), there exists \(\psi _{1} \in \mathbb{R}^{n}\), such that \(\psi =g(B)\psi _{1}\). Taking \(u(t)=g(1)\psi _{1}\), we then have

$$ Pu(t)=\frac{1}{g(1)}g(B)u(1)t=g(B)\psi _{1}t=\psi t, $$

which implies that \(v\in \operatorname{Im} P\). Thus, we conclude that \(\operatorname{Im} P=\operatorname{Ker} L\), and consequently,

$$ X=\operatorname{Ker} L\oplus \operatorname{Ker} P. $$

Therefore, the generalized inverse \(K_{P}: \operatorname{Im} L\rightarrow \operatorname{dom}L\cap \operatorname{Ker} P\) can be given by

$$ (K_{P}v) (t)= \int _{0}^{1}G(t,s)v(s)\,ds+l\biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (v)t, $$
(2.9)

where the constant l is given in (H4). Note that, since \(G(1,s)=0\) for all \(s\in [0,1]\), it follows from (2.9) and (2.8) that

$$ P(K_{P}v) (t)=\frac{l}{g(1)}g(B) \biggl(I-\frac{1}{g(1)}g(B) \biggr)\varphi (v)t=0. $$

Hence, \(K_{P}v\in \operatorname{Ker} P\). For \(v\in \operatorname{Im} L\), we know that

$$ (K_{P}v) (1)=l\biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (v) $$

and

$$\begin{aligned} & A \int _{0}^{1}(K_{P}v) (t)\,d\alpha (t) \\ &\quad= A \int _{0}^{1} \biggl( \int _{0}^{1}G(t,s)v(s)\,ds+l\biggl(I- \frac{1}{g(1)}g(B)\biggr) \varphi (v)t \biggr) \,d\alpha (t) \\ &\quad= \varphi (v)+lA\biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (v) \int _{0}^{1} t\,d\alpha (t) \\ &\quad= \varphi (v)+lA\biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (v)h =\varphi (v)+lB\biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (v) \\ &\quad= \varphi (v)+l(B-I+I) \biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (v) \\ &\quad= \varphi (v)-l(I-B) \biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (v) +l \biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (v) \\ &\quad= \varphi (v)-l(I-B)\varphi (v) +l\biggl(I-\frac{1}{g(1)}g(B)\biggr) \varphi (v) =l\biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (v). \end{aligned}$$

Therefore, \((K_{P}v)(1)=A\int _{0}^{1}(K_{P}v)(t)\,d\alpha (t)\), and consequently, \(K_{P}\) is well defined. Furthermore, if \(u\in \operatorname{dom}L \cap \operatorname{Ker} P\), then, using (2.7) and Lemma 2.3, we have

$$\begin{aligned} &(K_{P}Lu) (t)\\ &\quad = - \int _{0}^{1}G(t,s)u''(s) \,ds+l\biggl(I-\frac{1}{g(1)}g(B)\biggr)\varphi (Lu)t \\ &\quad= u(t)-u(1)t+ l\biggl(I-\frac{1}{g(1)}g(B)\biggr)A \biggl( \int _{0}^{1} \bigl(u(t)-u(1)t\bigr)\,d \alpha (t) \biggr)t \\ &\quad= u(t)-u(1)t+ l\biggl(I-\frac{1}{g(1)}g(B)\biggr) \biggl(A \int _{0}^{1}u(t)\,d\alpha (t)-Ahu(1) \biggr)t \\ &\quad= u(t)-u(1)t+ l\biggl(I-\frac{1}{g(1)}g(B)\biggr) (I-B)u(1)t \\ &\quad= u(t)-u(1)t+ l(I-B)u(1)t \\ &\quad= u(t). \end{aligned}$$

This shows that \(K_{P}=(L|_{\operatorname{dom}L\cap \operatorname{Ker} P})^{-1}\) and that \(LK_{P}v(t)=v(t)\), \(v\in \operatorname{Im} L\). For \(v\in \operatorname{Im} L\), differentiating (2.9) yields

$$ (K_{P}v)'(t)= \int _{t}^{1}(1-s)v(s)\,ds- \int _{0}^{t}sv(s)\,ds+l\biggl(I- \frac{1}{g(1)}g(B)\biggr)A \int _{0}^{1} \int _{0}^{1}G(t,s)v(s)\,ds\,d\alpha (t). $$

Notice that

$$\begin{aligned} &\max_{t\in [0,1]} \biggl\vert \int _{0}^{1}G(t,s)v(s)\,ds \biggr\vert \leq \max _{t\in [0,1]} \int _{0}^{1}t(1-t) \bigl\vert v(s) \bigr\vert \,ds \leq \frac{ \Vert v \Vert _{1}}{4}, \\ &\max_{t\in [0,1]} \biggl\vert \int _{0}^{1}G'_{t}(t,s)v(s) \,ds \biggr\vert = \max_{t\in [0,1]} \biggl\vert - \int _{0}^{t}sv(s)\,ds + \int _{t}^{1}(1-s)v(s)\,ds \biggr\vert \\ &\phantom{\max_{t\in [0,1]} \biggl\vert \int _{0}^{1}G'_{t}(t,s)v(s) \,ds \biggr\vert }\leq \max_{t\in [0,1]} \biggl(t \int _{0}^{t} \bigl\vert v(s) \bigr\vert \,ds+(1-t) \int _{t} ^{1} \bigl\vert v(s) \bigr\vert \,ds \biggr) \\ &\phantom{\max_{t\in [0,1]} \biggl\vert \int _{0}^{1}G'_{t}(t,s)v(s) \,ds \biggr\vert }\leq \max_{t\in [0,1]} \biggl(t \int _{0}^{1} \bigl\vert v(s) \bigr\vert \,ds+(1-t) \int _{0} ^{1} \bigl\vert v(s) \bigr\vert \,ds \biggr)= \Vert v \Vert _{1}, \end{aligned}$$

and

$$\begin{aligned} \biggl\vert \int _{0}^{1} \int _{0}^{1}G(t,s)v(s)\,ds\,d\alpha (t) \biggr\vert &\leq \int _{0}^{1} \int _{0}^{1}t(1-t) \bigl\vert v(s) \bigr\vert \,ds\,d \Biggl(\bigvee_{0}^{t}(\alpha ) \Biggr)\\ & = \int _{0}^{1}t(1-t)\,d \Biggl(\bigvee _{0}^{t}(\alpha ) \Biggr) \cdot \Vert v \Vert _{1}, \end{aligned}$$

where \(\bigvee_{0}^{t}(\alpha )\) denotes the total variation of α on \([0,t]\) defined by

$$ \bigvee_{0}^{t}(\alpha )=\sup _{\mathcal{P}}\sum_{i=1}^{m} \bigl\vert \alpha (t_{i})-\alpha (t_{i-1}) \bigr\vert , $$

where the supremum runs over the set of all partitions

$$ \mathcal{P}=\bigl\{ P=\{t_{0}, \ldots, t_{m}\} | P \text{ is a partition of } [0,t] \bigr\} . $$
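For a piecewise monotone integrator, the total variation is already attained in the limit of fine uniform partitions, so it can be approximated by summing absolute increments. The sketch below assumes \(\alpha (t)=|t-1/2|\), whose total variation on \([0,1]\) is \(1/2+1/2=1\).

```python
import numpy as np

# Approximate the total variation by summing |increments| over a fine
# uniform partition; assumed example: alpha(t) = |t - 1/2|, which is
# piecewise monotone with total variation 1/2 + 1/2 = 1 on [0, 1].
alpha = lambda t: np.abs(t - 0.5)
t = np.linspace(0.0, 1.0, 100001)        # fine partition of [0, 1]
tv = np.sum(np.abs(np.diff(alpha(t))))
assert abs(tv - 1.0) < 1e-9
```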

Let \(\|\cdot \|_{*}\) be the max-norm of matrices defined by

$$ \Vert A \Vert _{*}=\max_{1\leq i\leq n, 1\leq j\leq m} \vert a_{ij} \vert ,\quad \text{for } A=(a_{ij})_{n\times m}, $$

and \(\|\cdot \|_{\mathbb{R}^{n}}\) be the maximum norm in \(\mathbb{R} ^{n}\). Then we have

$$ \Vert Av \Vert _{\mathbb{R}^{n}}\leq m \Vert A \Vert _{*} \Vert v \Vert _{\mathbb{R}^{m}},\quad \text{for } A=(a_{ij})_{n\times m} \text{ and } v\in \mathbb{R} ^{m}. $$
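As a quick sanity check of this norm inequality (each entry of Av is a sum of m products, each bounded by \(\|A\|_{*}\|v\|_{\mathbb{R}^{m}}\)), the following sketch tests a random \(3\times 4\) example.

```python
import numpy as np

# Check of ||A v||_{R^n} <= m * ||A||_* * ||v||_{R^m} on a random 3x4
# matrix (here m = 4 columns) with the max-norms defined in the text.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
v = rng.normal(size=4)
lhs = np.max(np.abs(A @ v))                       # ||A v||_{R^n}
rhs = 4 * np.max(np.abs(A)) * np.max(np.abs(v))   # m ||A||_* ||v||_{R^m}
assert lhs <= rhs
```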

Thus,

$$ \Vert K_{P}v \Vert _{\infty }\leq \Biggl( \frac{1}{4}+n \vert l \vert \cdot \biggl\Vert \biggl(I- \frac{1}{g(1)}g(B)\biggr)A \biggr\Vert _{*} \int _{0} ^{1}t(1-t)\,d \Biggl(\bigvee _{0}^{t}(\alpha ) \Biggr) \Biggr) \Vert v \Vert _{1} $$

and

$$ \bigl\Vert K'_{P}v \bigr\Vert _{\infty }\leq \Biggl(1+n \vert l \vert \cdot \biggl\Vert \biggl(I-\frac{1}{g(1)}g(B) \biggr)A \biggr\Vert _{*} \int _{0}^{1}t(1-t) \,d \Biggl(\bigvee _{0}^{t}(\alpha ) \Biggr) \Biggr) \Vert v \Vert _{1}. $$

Consequently, we conclude that

$$ \Vert K_{P}v \Vert _{X}\leq M \Vert v \Vert _{1}, $$

where \(M=1+n|l| \|(I-\frac{1}{g(1)}g(B))A\|_{*}\int _{0}^{1}t(1-t)\,d (\bigvee_{0}^{t}(\alpha ) )\). It is easy to see that

$$ (QNu) (t)=kg(B)A \int _{0}^{1} \int _{0}^{1}G(t,s)f\bigl(s,u(s), u'(s) \bigr)\,ds\,d \alpha (t)\in \mathbb{R}^{n} $$

and

$$\begin{aligned} & K_{P}(I-Q)Nu(t)\\ &\quad=K_{P}Nu(t)-K_{P}QNu(t) \\ &\quad= \int _{0}^{1}G(t,s)Nu(s)\,ds+l\biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (Nu)t \\ &\qquad{} - \int _{0}^{1}G(t,s)\,ds\cdot QNu(t)-l\biggl(I- \frac{1}{g(1)}g(B)\biggr)\varphi (QNu)t \\ &\quad= \int _{0}^{1}G(t,s)Nu(s)\,ds-\frac{t(1-t)}{2} QNu(t) \\ &\qquad{} +l\biggl(I-\frac{1}{g(1)}g(B)\biggr) \bigl(\varphi (Nu)-\varphi (QNu) \bigr)t. \end{aligned}$$

By a standard argument, we can show that \(QN(\overline{\varOmega })\) is bounded and \(K_{P}(I-Q)N(\overline{\varOmega })\) is compact. Thus, N is L-compact on Ω̅. □

The above results (Lemmas 2.1, 2.2, and 2.4) can be made concrete for a specific matrix. In the following, we suppose that the diagonalizable matrix B satisfies \(B^{2}=I\) and \(\dim \operatorname{Ker}(I-B)=k\). Then there exists a set of linearly independent vectors \(\{\eta _{1},\eta _{2},\ldots,\eta _{n}\} \) such that

$$ BC= C \begin{pmatrix} I_{k} &0 \\ 0&-I_{n-k} \end{pmatrix},\quad C=(\eta _{1},\eta _{2},\ldots, \eta _{n}), \eta _{i}= \begin{pmatrix} \eta _{1i} \\ \eta _{2i} \\ \vdots \\ \eta _{ni} \end{pmatrix}, $$

where \(\eta _{i}\) \((i=1,2,\ldots, k)\) is an eigenvector of B with eigenvalue 1 and \(\eta _{i}\) \((i=k+1,k+2,\ldots, n)\) is an eigenvector of B with eigenvalue −1. Moreover, we shall suppose that

$$ \begin{pmatrix} \eta _{11} & \eta _{12} &\cdots & \eta _{1k} \\ \eta _{21} & \eta _{22} &\cdots & \eta _{2k} \\ \vdots & \vdots &\cdots & \vdots \\ \eta _{k1} & \eta _{k2} &\cdots & \eta _{kk} \end{pmatrix}=I_{k}, \qquad \begin{pmatrix} \eta _{k+1, k+1} & \eta _{k+1,k+2} &\cdots & \eta _{k+1,n} \\ \eta _{k+2,k+1} & \eta _{k+2,k+2} &\cdots & \eta _{k+2,n} \\ \vdots & \vdots &\cdots & \vdots \\ \eta _{n,k+1} & \eta _{n,k+2} &\cdots & \eta _{n, n} \end{pmatrix}=I_{n-k}. $$
(2.10)

Take \(C^{-1}=(c_{ij})_{n\times n}\),

$$ D_{1}= \begin{pmatrix} c_{k+1,1} & c_{k+1,2} &\cdots & c_{k+1,k} \\ c_{k+2,1} & c_{k+2,2} &\cdots & c_{k+2,k} \\ \vdots & \vdots & \cdots & \vdots \\ c_{n,1} & c_{n,2} &\cdots & c_{n,k} \end{pmatrix},\qquad D_{2}= \begin{pmatrix} c_{k+1,k+1} & c_{k+1,k+2} &\cdots & c_{k+1,n} \\ c_{k+2,k+1} & c_{k+2,k+2} &\cdots & c_{k+2,n} \\ \vdots & \vdots & \cdots & \vdots \\ c_{n,k+1} & c_{n,k+2} &\cdots & c_{n,n} \end{pmatrix}. $$

It follows from \(C^{-1}C=I\) and (2.10) that, for \(l,m\in \{1, 2, \ldots,k\}\),

$$ \sum_{i=1}^{n}c_{li}\eta _{im}=\delta _{lm}=\textstyle\begin{cases} 1 & \text{if } l=m, \\ 0 & \text{if } l\neq m. \end{cases} $$
(2.11)

Based on the notation above, (2.4) can be rewritten as

$$ C \begin{pmatrix} 0 &0 \\ 0&2I_{n-k} \end{pmatrix}C^{-1}\xi =\frac{1}{h}C \begin{pmatrix} I_{k} &0 \\ 0&-I_{n-k} \end{pmatrix}C^{-1} \int _{0}^{1} \int _{0}^{1}G(t,s)v(s)\,ds\,d\alpha (t). $$

Thus,

$$ \begin{pmatrix} 0 &0 \\ 0&2I_{n-k} \end{pmatrix}C^{-1}\xi =\frac{1}{h} \begin{pmatrix} I_{k} &0 \\ 0&-I_{n-k} \end{pmatrix}C^{-1} \int _{0}^{1} \int _{0}^{1}G(t,s)v(s)\,ds\,d\alpha (t). $$

Then the above matrix equation reduces to

$$ \textstyle\begin{cases} \int _{0}^{1}\int _{0}^{1}G(t,s)\sum_{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)=0,\quad i=1,2,\ldots, k, \\ -\int _{0}^{1}\int _{0}^{1}G(t,s)\sum_{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)=2h\sum_{j=1}^{n}c_{i j}\xi _{j},\quad i=k+1,k+2,\ldots, n, \end{cases} $$

or

$$ \textstyle\begin{cases} \int _{0}^{1}\int _{0}^{1}G(t,s)\sum_{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)=0,\quad i=1,2,\ldots, k, \\ -\int _{0}^{1}\int _{0}^{1}G(t,s)(D_{1}, D_{2})v(s)\,ds\,d\alpha (t)=2h(D _{1}, D_{2})\xi. \end{cases} $$
(2.12)

Consequently,

$$ \operatorname{Im} L= \Biggl\{ v\in Y: \int _{0}^{1} \int _{0}^{1}G(t,s) \sum _{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)=0, i=1,2,\ldots , k \Biggr\} . $$

By (2.10), we know \(\operatorname{det}(D_{2})\neq0\). From the second part of (2.12), we infer that

$$ \begin{pmatrix} \xi _{k+1} \\ \xi _{k+2} \\ \vdots \\ \xi _{n} \end{pmatrix} =-D_{2}^{-1}D_{1} \begin{pmatrix} \xi _{1} \\ \xi _{2} \\ \vdots \\ \xi _{k} \end{pmatrix} - \frac{1}{2h} \int _{0}^{1} \int _{0}^{1}G(t,s) \bigl(D_{2}^{-1}D_{1}, I_{n-k}\bigr)v(s)\,ds\,d\alpha (t). $$
(2.13)

Define the linear operator \(P: X\rightarrow X\) by

$$ (Pu) (t)=u_{1}(1)\eta _{1}t+u_{2}(1)\eta _{2}t+\cdots +u_{k}(1)\eta _{k}t,\quad u\in X. $$

It follows from the left formula in (2.10) that P is a continuous projection operator with \(\operatorname{Im} P= \operatorname{Ker} L\).

Based on the assumption (H2), we define a linear operator \(Q: Y\rightarrow Y\) by

$$ Qv=\sum_{i=1}^{k}\gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)\cdot \eta _{i}, \quad v\in Y, $$
(2.14)

where \(\gamma = \frac{2}{\int _{0}^{1}t(1-t)\,d\alpha (t)}\). For given \(v\in Y\), we take

$$ \alpha _{i}= \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}v _{j}(s)\,ds\,d \alpha (t),\quad i=1,2,\ldots,k. $$

Then (2.14) reduces to

$$ (Qv) (t)=\gamma \Biggl(\alpha _{1},\alpha _{2},\ldots,\alpha _{k}, \sum_{m=1}^{k}\alpha _{m}\eta _{k+1,m}, \sum_{m=1}^{k} \alpha _{m}\eta _{k+2,m},\ldots,\sum _{m=1}^{k}\alpha _{m}\eta _{n,m} \Biggr)^{T}. $$

With the help of (2.11), we have

$$\begin{aligned} & \frac{1}{\gamma }\sum_{j=1}^{n}c_{i j}(Qv)_{j}(s) \\ &\quad= c_{i 1}\alpha _{1}+c_{i 2}\alpha _{2}+ \cdots +c_{i k}\alpha _{k}+c_{i k+1} \sum _{m=1}^{k}\alpha _{m}\eta _{k+1,m} \\ &\qquad{} +c_{i k+2} \sum_{m=1}^{k}\alpha _{m}\eta _{k+2,m}+\cdots +c_{i n} \sum _{m=1}^{k}\alpha _{m}\eta _{n,m} \\ &\quad= \alpha _{1}(c_{i 1}+c_{i k+1}\eta _{k+1,1}+c_{i k+2}\eta _{k+2,1} + \cdots +c_{i n} \eta _{n,1}) \\ &\qquad{} +\alpha _{2}(c_{i 2}+c_{i k+1}\eta _{k+1,2}+c_{i k+2}\eta _{k+2,2} + \cdots +c_{i n} \eta _{n,2}) \\ &\qquad{} +\cdots + \alpha _{k}(c_{i k}+c_{i k+1}\eta _{k+1,k}+c_{i k+2}\eta _{k+2,k} + \cdots +c_{i n} \eta _{n,k}) \\ &\quad= \alpha _{1}\delta _{i1}+\alpha _{2}\delta _{i2}+\cdots + \alpha _{k}\delta _{ik}=\alpha _{i},\quad i=1,2, \ldots,k. \end{aligned}$$

Consequently,

$$\begin{aligned} \bigl(Q^{2}v\bigr) (t)&= \sum_{i=1}^{k} \gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}(Qv)_{j}(s)\,ds \,d\alpha (t)\cdot \eta _{i} \\ &= \gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\,ds\,d\alpha (t) \Biggl(\sum _{i=1}^{k}\gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)\cdot \eta _{i} \Biggr) \\ &= \gamma \int _{0}^{1}\frac{t(1-t)}{2}\,d\alpha (t) \Biggl( \sum_{i=1}^{k}\gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}v_{j}(s)\,ds\,d \alpha (t)\cdot \eta _{i} \Biggr) \\ &= (Qv) (t). \end{aligned}$$

This implies that Q is a continuous projection operator. Obviously, \(\operatorname{Ker} Q=\operatorname{Im} L\) holds from the linear independence of the vectors \(\{\eta _{1},\eta _{2},\ldots,\eta _{k}\}\).

From (2.12), it follows that the generalized inverse \(K_{P}: \operatorname{Im} L\rightarrow \operatorname{dom}L\cap \operatorname{Ker} P\) can be defined by

$$ (K_{P}v) (t)= \int _{0}^{1}G(t,s)v(s)\,ds+ \delta \cdot t, $$

where \(\delta \in \mathbb{R}^{n}\) is given by

$$ \delta = \biggl(\underbrace{0,\ldots,0} _{k}, -\frac{1}{2h} \int _{0}^{1} \int _{0}^{1}G(t,s) \bigl(D_{2}^{-1}D_{1}, I_{n-k}\bigr)v(s)\,ds\,d\alpha (t) \biggr) ^{T}. $$

Similar to the proof of Lemma 2.4, we obtain

$$ \Vert K_{P}v \Vert _{X}\leq \Biggl(1+ \frac{n}{2 \vert h \vert } \bigl\Vert \bigl(D_{2}^{-1}D_{1}, I_{n-k}\bigr) \bigr\Vert _{*} \int _{0}^{1}t(1-t)\,d \Biggl(\bigvee _{0}^{t} ( \alpha ) \Biggr) \Biggr) \Vert v \Vert _{1}. $$
(2.15)

3 Main result

In this section, we use Theorem 1.1 to prove the existence of solutions to (1.1). For this purpose, we use the following assumptions:

  1. (H5)

    There exist nonnegative functions \(a, b, c\in L^{1}[0,1]\) such that, for all \(u,v\in \mathbb{R}^{n}\) and \(t\in [0,1]\),

    $$ \bigl\vert f(t,u,v) \bigr\vert \leq a(t) \Vert u \Vert _{\mathbb{R}^{n}}+b(t) \Vert v \Vert _{\mathbb{R}^{n}}+c(t). $$
  2. (H6)

    There exists a constant \(\varLambda >0\) such that, for each \(u\in \operatorname{dom}L\), if \(\|u'(t)\|_{\mathbb{R}^{n}}>\varLambda \) for all \(t\in [0,1]\), then

    $$ g(B)A \int _{0}^{1} \int _{0}^{1}G(t,s)f\bigl(s,u(s),u'(s) \bigr)\,ds\,d\alpha (t)\neq0. $$
  3. (H7)

    There exists a constant \(\varLambda _{1}>0\) such that either for any \(\psi \in \mathbb{R}^{n}\) with \(\psi =B\psi \) and \(\|\psi \|_{ \mathbb{R}^{n}}>\varLambda _{1}\),

    $$ (\psi,QNu)\leq 0, $$
    (3.1)

    or for any \(\psi \in \mathbb{R}^{n}\) with \(\psi =B\psi \) and \(\|\psi \|_{\mathbb{R}^{n}}>\varLambda _{1}\),

    $$ (\psi,QNu)\geq 0, $$
    (3.2)

    where \(u(t)=\psi t\), and \((\cdot,\cdot )\) denotes the scalar product in \(\mathbb{R}^{n}\).

Theorem 3.1

Let the assumptions (H1)–(H7) hold. Then (1.1) has at least one solution in X provided that \((n\|g(B)\|_{*}+M|g(1)|)( \|a\|_{1}+\|b\|_{1})<|g(1)|\).

Proof

Set

$$ \varOmega _{1}=\bigl\{ u\in \operatorname{dom}L\backslash \operatorname{Ker} L: Lu=\lambda Nu \text{ for some } \lambda \in [0,1]\bigr\} . $$

Suppose that \(u\in \varOmega _{1}\), and \(Lu=\lambda Nu\). Then \(\lambda \neq0\) and \(Nu\in \operatorname{Im}L=\operatorname{Ker}Q\) so that

$$ g(B)A \int _{0}^{1} \int _{0}^{1}G(t,s)f\bigl(s,u(s),u'(s) \bigr)\,ds\,d\alpha (t)=0. $$

Thus, from (H6), there is \(t_{0}\in [0,1]\) such that \(\|u'(t_{0})\|_{\mathbb{R}^{n}}\leq \varLambda \). By the absolute continuity of u, for \(t\in [0,1]\), we have

$$ \bigl\vert u(t) \bigr\vert = \biggl\vert u(0)+ \int _{0}^{t}u'(s)\,ds \biggr\vert \leq \bigl\Vert u' \bigr\Vert _{\infty } $$

and

$$ \bigl\vert u'(t) \bigr\vert = \biggl\vert u'(t_{0})+ \int _{t_{0}}^{t}u''(s)\,ds \biggr\vert \leq \varLambda + \bigl\Vert u'' \bigr\Vert _{1}. $$
(3.3)

This yields

$$ \Vert Pu \Vert _{X}=\max \bigl\{ \Vert Pu \Vert _{\infty }, \bigl\Vert (Pu)' \bigr\Vert _{\infty } \bigr\} \leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \bigl\Vert u(1) \bigr\Vert _{\mathbb{R}^{n}}\leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \bigl\Vert u' \bigr\Vert _{\infty }. $$
(3.4)

Moreover, if \(u\in \operatorname{dom}L\), then \((I-P)u\in \operatorname{dom}L\cap \operatorname{Ker} P\) and \(LPu=0\). Hence, by (2.7) and Lemma 2.3,

$$\begin{aligned} \bigl\Vert (I-P)u \bigr\Vert _{X}&= \bigl\Vert K_{P}L(I-P)u \bigr\Vert _{X}\leq M \bigl\Vert L(I-P)u \bigr\Vert _{1} \\ &= M \Vert Lu \Vert _{1}=M \bigl\Vert u'' \bigr\Vert _{1}\leq M \Vert Nu \Vert _{1}. \end{aligned}$$
(3.5)

Using (3.3), (3.4) and (3.5), we conclude that

$$\begin{aligned} \Vert u \Vert _{X}&= \bigl\Vert Pu+(I-P)u \bigr\Vert _{X}\leq \Vert Pu \Vert _{X}+ \bigl\Vert (I-P)u \bigr\Vert _{X} \\ &\leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \bigl(\varLambda + \bigl\Vert u'' \bigr\Vert _{1}\bigr)+M \bigl\Vert u'' \bigr\Vert _{1} \\ &= \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \varLambda + \biggl(\frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } +M \biggr) \bigl\Vert u'' \bigr\Vert _{1} \\ &\leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \varLambda + \biggl(\frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } +M \biggr) \int _{0}^{1} \bigl\vert f\bigl(s, u(s),u'(s)\bigr) \bigr\vert \,ds \\ &\leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \varLambda + \biggl(\frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } +M \biggr) \bigl( \Vert a \Vert _{1} \Vert u \Vert _{\infty }+ \Vert b \Vert _{1} \bigl\Vert u' \bigr\Vert _{\infty }+ \Vert c \Vert _{1}\bigr) \\ &\leq \frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } \varLambda + \biggl(\frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } +M \biggr) \Vert c \Vert _{1}+ \biggl(\frac{n \Vert g(B) \Vert _{*}}{ \vert g(1) \vert } +M \biggr) \bigl( \Vert a \Vert _{1}+ \Vert b \Vert _{1}\bigr) \Vert u \Vert _{X}. \end{aligned}$$

The last inequality allows us to deduce that

$$ \Vert u \Vert _{X}\leq \frac{ n \Vert g(B) \Vert _{*}\varLambda + (n \Vert g(B) \Vert _{*}+M \vert g(1) \vert ) \Vert c \Vert _{1}}{ \vert g(1) \vert -(n \Vert g(B) \Vert _{*}+M \vert g(1) \vert )( \Vert a \Vert _{1}+ \Vert b \Vert _{1})}. $$

Thus, \(\varOmega _{1}\) is bounded. Let

$$ \varOmega _{2}=\{u\in \operatorname{Ker} L: Nu\in \operatorname{Im}L\}. $$

For \(u\in \varOmega _{2}\), we have \(u(t)=\psi t\) with \(\psi \in \operatorname{Ker}(I-B)\). Since \(Nu\in \operatorname{Im}L=\operatorname{Ker}Q\), we have \(QNu=0\), that is,

$$ g(B)A \int _{0}^{1} \int _{0}^{1}G(t,s)f\bigl(s,u(s),u'(s) \bigr)\,ds\,d\alpha (t)=0. $$

Hence, from (H6), we can show that

$$ \Vert u \Vert _{X}= \Vert u \Vert _{\infty }= \Vert \psi \Vert _{\mathbb{R}^{n}}\leq \varLambda. $$

Therefore, \(\varOmega _{2}\) is a bounded set in X.

Let \(J(\psi )=\psi t\) be the isomorphism from \(\operatorname{Im} Q\) onto \(\operatorname{Ker} L\). We first show that the set of u in KerL such that

$$ -\lambda u+(1-\lambda )JQNu=0 $$

with \(\lambda \in [0,1]\) is bounded if (3.1) holds. This means that (with \(\psi = B\psi \) and \(u=\psi t\))

$$ -\lambda \psi t+(1-\lambda )QN(\psi t)t=0, $$

or

$$ -\lambda \psi +(1-\lambda )QN(\psi t)=0. $$

If \(\lambda =0\), we have \(QN(\psi t)=0\), that is,

$$ g(B)A \int _{0}^{1} \int _{0}^{1}G(t,s)f(s,\psi s,\psi )\,ds\,d\alpha (t)=0. $$

Thus, \(\|\psi \|_{\mathbb{R}^{n}}\leq \varLambda \) follows from (H6). Otherwise, if \(\lambda \in (0,1]\) and \(\|\psi \|_{\mathbb{R}^{n}}>\varLambda _{1}\), then, in view of (3.1) in (H7), we have

$$ 0< \lambda \Vert \psi \Vert ^{2}_{\mathbb{R}^{n}}=(1-\lambda ) \bigl( \psi,QN(\psi t)\bigr) \leq 0. $$

a contradiction. Thus, \(\|u\|_{X}=\|\psi \|_{\mathbb{R}^{n}}\leq \varLambda _{1}\). Using the same argument as above, we can conclude that the set of \(u\in \operatorname{Ker} L\) such that

$$ \lambda u+(1-\lambda )JQNu=0 $$

with \(\lambda \in [0,1]\) is bounded if (3.2) holds. Therefore, the set

$$ \varOmega _{3}=\bigl\{ u \in \operatorname{Ker} L: \mu \lambda u+(1- \lambda )JQNu=0, \lambda \in [0,1] \bigr\} $$

is bounded if conditions (H6) and (H7) are satisfied, where

$$ \mu =\textstyle\begin{cases} -1 & \text{if (3.1) holds}; \\ 1 & \text{if (3.2) holds}. \end{cases} $$

Finally, the proof of this theorem is now an easy consequence of Lemmas 2.1, 2.2, and 2.3 and Theorem 1.1. Let Ω be a bounded open subset of X such that \(\bigcup_{i=1}^{3}\overline{\varOmega _{i}}\subset \varOmega \). Then, by the above arguments, we have

  1. (i)

    \(Lx\neq \lambda Nx\), for every \((x,\lambda )\in [( \operatorname{dom}L\backslash \operatorname{Ker} L)\cap \partial \varOmega ]\times (0,1)\),

  2. (ii)

    \(Nx\notin \operatorname{Im}L\) for \(x\in \operatorname{Ker} L\cap \partial \varOmega \),

  3. (iii)

    \(H(x,\lambda )=\mu \lambda x+(1-\lambda )JQNx\neq 0\) for \(x\in \operatorname{Ker} L\cap \partial \varOmega \) and \(\lambda \in [0,1]\), by the boundedness of \(\varOmega _{3}\). Hence, by the homotopy property of degree,

    $$ \deg (JQN|_{\operatorname{Ker} L}, \operatorname{Ker} L\cap \varOmega,0)=\deg ( \mu I, \operatorname{Ker} L\cap \varOmega,0)\neq 0. $$

Then, by Theorem 1.1, \(Lu=Nu\) has at least one solution in \(\operatorname{dom}L\cap \overline{\varOmega }\) so that the IBVP (1.1) has at least one solution. □

For the special case of a diagonalizable matrix B satisfying \(B^{2}=I\) and \(\dim \operatorname{Ker}(I-B)=k\), we make the following assumption:

  1. (H8)

    There exists a constant \(\varLambda >0\) such that, for each \(u\in \operatorname{dom}L\), if \(|u_{1}'(t)|>\varLambda \) for all \(t\in [0,1]\) or \(|u_{2}'(t)|>\varLambda \) for all \(t\in [0,1]\), or … , or \(|u_{k}'(t)|>\varLambda \) for all \(t\in [0,1]\), then

    $$ (QNu) (t)=\sum_{i=1}^{k}\gamma \int _{0}^{1} \int _{0}^{1}G(t,s)\sum _{j=1}^{n}c_{i j}f_{j} \bigl(s,u(s),u'(s)\bigr)\,ds\,d\alpha (t)\cdot \eta _{i} \neq0. $$

Theorem 3.2

Let the assumptions (H2), (H3), (H5), (H7), and (H8) hold. Then (1.1) has at least one solution in X provided that \((\|\sum_{i=1}^{k}\eta _{i}\|_{\mathbb{R}^{n}}+M _{1})(\|a\|_{1}+\|b\|_{1})<1\), where \(M_{1}= (1+\frac{n}{2h} \|(D _{2}^{-1}D_{1}, I_{n-k})\|_{*}\int _{0}^{1}t(1-t)\,d (\bigvee_{0}^{t}(\alpha ) ) )\).

Proof

For \(u\in \varOmega _{1}\), we have \(Nu\in \operatorname{Im}L\), so \(QNu=0\). Then, from (H8), for each \(i=1,2,\ldots,k\) there is \(t_{i}\in [0,1]\) such that \(|u_{i}'(t_{i})|\leq \varLambda \). Since \(u_{i}(0)=0\) and \(u_{i}\) is absolutely continuous, for \(t\in [0,1]\), we have

$$ \bigl\vert u_{i}(t) \bigr\vert = \biggl\vert u_{i}(0)+ \int _{0}^{t}u_{i}'(s)\,ds \biggr\vert \leq \bigl\Vert u' \bigr\Vert _{\infty },\quad i=1,2,\ldots,k, $$

and

$$ \bigl\vert u_{i}'(t) \bigr\vert = \biggl\vert u_{i}'(t_{i})+ \int _{t_{i}}^{t}u_{i}''(s) \,ds \biggr\vert \leq \varLambda + \bigl\Vert u'' \bigr\Vert _{1},\quad i=1,2,\ldots,k. $$

This yields

$$\begin{aligned} \Vert Pu \Vert _{X}&= \max \bigl\{ \Vert Pu \Vert _{\infty }, \bigl\Vert (Pu)' \bigr\Vert _{\infty }\bigr\} = \Vert Pu \Vert _{\infty } \\ &\leq \max_{1\leq i\leq k}\bigl\{ \bigl\vert u_{i}(1) \bigr\vert \bigr\} \cdot \Biggl\Vert \sum_{i=1} ^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} \leq \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}}\cdot \bigl\Vert u' \bigr\Vert _{\infty }. \end{aligned}$$

Again, if \(u\in \operatorname{dom}L\), then \((I-P)u\in \operatorname{dom}L\cap \operatorname{Ker} P\) and \(LPu=0\). Then, by (2.15),

$$\begin{aligned} \bigl\Vert (I-P)u \bigr\Vert _{X}&= \bigl\Vert K_{P}L(I-P)u \bigr\Vert _{X}\leq M_{1} \bigl\Vert L(I-P)u \bigr\Vert _{1} \\ &= M_{1} \Vert Lu \Vert _{1}=M_{1} \bigl\Vert u'' \bigr\Vert _{1}\leq M_{1} \Vert Nu \Vert _{1}. \end{aligned}$$

Using the above three inequalities, we conclude that

$$\begin{aligned} \Vert u \Vert _{X}&= \bigl\Vert Pu+(I-P)u \bigr\Vert _{X}\leq \Vert Pu \Vert _{X}+ \bigl\Vert (I-P)u \bigr\Vert _{X} \\ &\leq \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}}\bigl(\varLambda + \bigl\Vert u'' \bigr\Vert _{1}\bigr)+M_{1} \bigl\Vert u'' \bigr\Vert _{1} = \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} \varLambda \\ &{}+ \Biggl( \Biggl\Vert \sum _{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} +M_{1} \Biggr) \bigl\Vert u'' \bigr\Vert _{1} \\ &\leq \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} \varLambda + \Biggl( \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} +M_{1} \Biggr) \int _{0}^{1} \bigl\vert f\bigl(s, u(s),u'(s)\bigr) \bigr\vert \,ds \\ &\leq \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} \varLambda + \Biggl( \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} +M_{1} \Biggr) \Vert c \Vert _{1}\\ &{}+ \Biggl( \Biggl\Vert \sum_{i=1}^{k}\eta _{i} \Biggr\Vert _{\mathbb{R}^{n}} +M _{1} \Biggr) \bigl( \Vert a \Vert _{1}+ \Vert b \Vert _{1}\bigr) \Vert u \Vert _{X}. \end{aligned}$$

The last inequality allows us to deduce that

$$ \Vert u \Vert _{X}\leq \frac{ \Vert \sum_{i=1}^{k}\eta _{i} \Vert _{\mathbb{R} ^{n}} \varLambda + ( \Vert \sum_{i=1}^{k}\eta _{i} \Vert _{\mathbb{R} ^{n}} +M_{1} ) \Vert c \Vert _{1}}{1- ( \Vert \sum_{i=1}^{k}\eta _{i} \Vert _{\mathbb{R}^{n}} +M_{1} )( \Vert a \Vert _{1}+ \Vert b \Vert _{1})}. $$

Thus, \(\varOmega _{1}\) is bounded. The rest of the proof repeats that of Theorem 3.1. □

Example 3.1

Consider the differential system

$$ \textstyle\begin{cases} -x''_{1}(t)=\sin x_{2}(t)x'_{2}(t)+\frac{1}{5}\arctan x_{2}(t)+ \frac{1}{5}x'_{1}(t)+t, \\ -x''_{2}(t)=\sin x_{1}(t)x_{3}(t)-\cos x'_{3}(t) -\frac{1}{20}x'_{2}(t)- \frac{1}{20}x_{2}(t)+e^{t}, \\ -x''_{3}(t)=\arctan x_{3}^{2}(t)+\cos x_{1}(t)+\cos x_{2}(t)+ \frac{1}{20}x_{2}(t)+\frac{1}{10}x'_{2}(t), \\ x_{1}(0)=x_{2}(0)=x_{3}(0)=0,\qquad x_{1}(1)=2\int _{0}^{1}x_{1}(t)\,dt, \\ x_{2}(1)=2\int _{0}^{1}x_{3}(t)\,dt,\qquad x_{3}(1)=2\int _{0}^{1}x_{2}(t)\,dt. \end{cases} $$
(3.6)

Here \(f_{i}:[0,1]\times \mathbb{R}^{6}\rightarrow \mathbb{R}\), \(i=1,2,3\) are defined, respectively, by

$$\begin{aligned} \begin{aligned} &f_{1}(t,x,y)= \sin x_{2}y_{2}+\frac{1}{5}\arctan x_{2}+ \frac{1}{5}y_{1}+t, \\ &f_{2}(t,x,y)= \sin x_{1}x_{3}-\cos y_{3} -\frac{1}{20}y_{2}-\frac{1}{20}x_{2}+e^{t}, \\ &f_{3}(t,x,y)= \arctan x_{3}^{2}+\cos x_{1}+\cos x_{2}+ \frac{1}{20}x_{2}+ \frac{1}{10}y_{2}, \end{aligned} \end{aligned}$$
(3.7)

where \(x=(x_{1},x_{2},x_{3})^{T}, y=(y_{1},y_{2},y_{3})^{T}\in \mathbb{R}^{3}\).

Take \(\alpha (t)=t\),

$$ A= \begin{pmatrix} 2&0&0 \\ 0&0 &2 \\ 0&2&0 \end{pmatrix} $$

and

$$ f(t,x,y)= \begin{pmatrix} f_{1}(t,x,y) \\ f_{2}(t,x,y) \\ f_{3}(t,x,y) \end{pmatrix}. $$

Then \(h=\int _{0}^{1}t\,d\alpha (t)=\frac{1}{2}\), \(B=Ah=\frac{1}{2}A\), \(B^{2}=I\),

$$\begin{aligned} &C= \begin{pmatrix} 1&0&0 \\ 0&1 &-1 \\ 0&1&1 \end{pmatrix}=(\eta _{1},\eta _{2},\eta _{3}), \qquad C^{-1}= \begin{pmatrix} 1&0&0 \\ 0&\frac{1}{2} &\frac{1}{2} \\ 0&-\frac{1}{2}&\frac{1}{2} \end{pmatrix}=(c_{ij})_{3\times 3}, \\ &D_{2}=\frac{1}{2},\qquad D_{1}=\biggl(0, - \frac{1}{2}\biggr),\qquad n=3, \qquad k=2, \\ &(Qv) (t)= 12 \int _{0}^{1} \int _{0}^{1}G(t,s)v_{1}(s)\,ds\,d\alpha (t)\cdot \eta _{1} \\ &\phantom{(Qv) (t)=}{}+ 6 \int _{0}^{1} \int _{0}^{1}G(t,s) \bigl(v_{2}(s)+v_{3}(s) \bigr)\,ds\,d\alpha (t)\cdot \eta _{2} \\ &\phantom{(Qv) (t)}= \begin{pmatrix} 12\int _{0}^{1}\int _{0}^{1}G(t,s)v_{1}(s)\,ds\,d\alpha (t) \\ 6\int _{0}^{1}\int _{0}^{1}G(t,s)(v_{2}(s)+v_{3}(s))\,ds\,d\alpha (t) \\ 6\int _{0}^{1}\int _{0}^{1}G(t,s)(v_{2}(s)+v_{3}(s))\,ds\,d\alpha (t) \end{pmatrix}. \end{aligned}$$
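These identities can be verified directly; the following sketch in exact rational arithmetic (with the matrices hard-coded from the example) checks that \(B^{2}=I\), \(C^{-1}C=I\), and \(C^{-1}BC=\operatorname{diag}(1,1,-1)\), so that \(\dim \operatorname{Ker}(I-B)=2=k\).

```python
from fractions import Fraction as F

def matmul(X, Y):
    # multiply two 3x3 matrices of Fractions
    return [[sum(X[i][l] * Y[l][j] for l in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
A  = [[F(2), F(0), F(0)], [F(0), F(0), F(2)], [F(0), F(2), F(0)]]
h  = F(1, 2)                                 # h = \int_0^1 t dt = 1/2
B  = [[h * a for a in row] for row in A]     # B = hA
# columns of C are eta_1, eta_2, eta_3
C  = [[F(1), F(0), F(0)], [F(0), F(1), F(-1)], [F(0), F(1), F(1)]]
Ci = [[F(1), F(0), F(0)], [F(0), F(1, 2), F(1, 2)], [F(0), F(-1, 2), F(1, 2)]]

assert matmul(B, B) == I3                    # B^2 = I
assert matmul(Ci, C) == I3                   # C^{-1}C = I
D = matmul(Ci, matmul(B, C))                 # C^{-1}BC
assert D == [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(-1)]]
```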

It follows from (3.7) that

$$ \bigl\vert f(t,x,y) \bigr\vert \leq \frac{1}{20} \vert x \vert + \frac{1}{5} \vert y \vert +5. $$

Noting that \(M_{1}= 1+\frac{n}{2h} \|(D_{2}^{-1}D_{1}, I_{n-k})\| _{*}\int _{0}^{1}t(1-t)\,d (\bigvee_{0}^{t}(\alpha ) )\), we have \(M_{1}=\frac{3}{2}\). Then we obtain \((\|\sum_{i=1}^{2}\eta _{i} \|_{\mathbb{R}^{3}}+M_{1})(\|a\|_{1}+\|b\|_{1})= (1+\frac{3}{2} ) (\frac{1}{20}+\frac{1}{5} )=\frac{5}{8}<1\). Therefore, (H5) and the smallness condition of Theorem 3.2 are satisfied.
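This arithmetic can be checked in exact fractions (the value \(\|(D_{2}^{-1}D_{1}, I_{n-k})\|_{*}=1\) for the row \((0,-1,1)\) is taken as given, since the norm \(\|\cdot \|_{*}\) is defined earlier in the paper):

```python
from fractions import Fraction as F

n, h = 3, F(1, 2)
norm_star = F(1)      # assumed: ||(D_2^{-1}D_1, I_{n-k})||_* = 1 for the row (0, -1, 1)
int_t_1mt = F(1, 6)   # \int_0^1 t(1-t) dt, since alpha(t) = t
M1 = 1 + F(n) / (2 * h) * norm_star * int_t_1mt
assert M1 == F(3, 2)

eta_sum_norm = F(1)   # ||eta_1 + eta_2||_{R^3} = ||(1,1,1)^T|| in the max norm
a1, b1 = F(1, 20), F(1, 5)
assert (eta_sum_norm + M1) * (a1 + b1) == F(5, 8)   # < 1, as required
```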

Take \(\varLambda =200\). Then, if \(|u_{1}'(t)|\geq \varLambda \) for all \(t\in [0,1]\), writing \(x=u(t)\) and \(y=u'(t)\), we have

$$\begin{aligned} & \bigl\vert f_{1}(t,x,y) \bigr\vert \\ &\quad= \biggl\vert \sin x_{2}y_{2}+\frac{1}{5}\arctan x_{2}+\frac{1}{5}y_{1}+t \biggr\vert \geq \frac{1}{5} \vert y_{1} \vert -4\geq 36 \end{aligned}$$

and, if \(|u_{2}'(t)|\geq \varLambda \) for all \(t\in [0,1]\), we have

$$\begin{aligned} & \bigl\vert (f_{2}+f_{3}) (t,x,y) \bigr\vert \\ &\quad=\biggl\vert \sin x_{1}x_{3}-\cos y_{3}+e^{t}+ \arctan x_{3}^{2}+\cos x_{1}+ \cos x_{2}+\frac{1}{20}y_{2} \biggr\vert \geq \frac{1}{20} \vert y_{2} \vert -9\geq 1>0. \end{aligned}$$
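The \(f_{1}\) estimate above admits a quick numeric check (treating \(\sin x_{2}y_{2}\) as \(\sin (x_{2}y_{2})\), hence bounded by 1): the terms of \(f_{1}\) not involving \(y_{1}\) are bounded by \(1+\frac{\pi }{10}+1<4\), and \(\frac{\varLambda }{5}-4=36\).

```python
import math

Lam = 200  # the constant Lambda chosen above
# terms of f_1 not involving y_1: |sin(x_2 y_2)| <= 1, (1/5)|arctan x_2| <= pi/10, |t| <= 1
bounded_part = 1 + (1 / 5) * (math.pi / 2) + 1
assert bounded_part < 4       # so |f_1| >= |y_1|/5 - 4
assert Lam / 5 - 4 == 36      # the stated lower bound when |y_1| >= Lam
```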

Thus, for each \(u\in \operatorname{dom}L\), if \(|u_{1}'(t)|\geq \varLambda \) for all \(t\in [0,1]\) or \(|u_{2}'(t)|\geq \varLambda \) for all \(t\in [0,1]\), then, writing \(v_{i}(s)=f_{i}(s,u(s),u'(s))\), we conclude that

$$ (QNu) (t) = \begin{pmatrix} 12\int _{0}^{1}\int _{0}^{1}G(t,s)v_{1}(s)\,ds\,d\alpha (t) \\ 6\int _{0}^{1}\int _{0}^{1}G(t,s)(v_{2}(s)+v_{3}(s))\,ds\,d\alpha (t) \\ 6\int _{0}^{1}\int _{0}^{1}G(t,s)(v_{2}(s)+v_{3}(s))\,ds\,d\alpha (t) \end{pmatrix}\neq0. $$

Hence, (H8) holds.
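The factors 12 and 6 in Q are the normalizations that make Q a projection; assuming \(G(t,s)=\min \{t,s\}(1-\max \{t,s\})\) (the Green's function of \(-u''=v\), \(u(0)=u(1)=0\), as in Sect. 2), they come from \(\int _{0}^{1}\int _{0}^{1}G(t,s)\,ds\,dt=\frac{1}{12}\), which a midpoint-rule quadrature confirms:

```python
def G(t, s):
    # assumed Green's function of -u'' = v, u(0) = u(1) = 0
    return min(t, s) * (1 - max(t, s))

N = 400
step = 1.0 / N
nodes = [(i + 0.5) * step for i in range(N)]          # midpoint nodes
val = sum(G(t, s) for t in nodes for s in nodes) * step * step
assert abs(12 * val - 1.0) < 1e-3                     # 12 * (1/12) = 1
```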

Let \(\psi =(c_{1},c_{2},c_{2})^{T}\in \operatorname{Ker} (I-B)=\{ c_{1}(1,0,0)+ c _{2}(0,1,1): c_{1},c_{2}\in \mathbb{R}\}\). Then we have

$$\begin{aligned} &f_{1}(t,\psi t,\psi )= \sin c_{2}^{2}t+ \frac{1}{5}\arctan c_{2}t+\frac{1}{5}c_{1}+t, \\ &f_{2}(t,\psi t,\psi )= \sin c_{1}c_{2}t^{2}- \cos c_{2} -\frac{1}{20}c_{2}-\frac{t}{20}c_{2}+e ^{t}, \\ &f_{3}(t,\psi t,\psi )= \arctan c_{2}^{2}t^{2}+ \cos c_{1}t+\cos c_{2}t+ \frac{t}{20}c_{2}+ \frac{1}{10}c_{2}, \end{aligned}$$

and

$$\begin{aligned} & c_{1}f_{1}(t,\psi t, \psi )+c_{2} \bigl(f_{2}(t,\psi t, \psi )+f_{3}(t,\psi t, \psi )\bigr) \\ &\quad= c_{1} \biggl( \sin c_{2}^{2}t+ \frac{1}{5}\arctan c_{2}t+\frac{1}{5}c_{1}+t \biggr) \\ &\qquad{} +c_{2} \biggl(\frac{1}{20}c_{2}+\sin c_{1}c_{2}t^{2}-\cos c_{2} +e^{t} +\arctan c_{2}^{2}t^{2}+\cos c_{1}t+\cos c_{2}t \biggr) \\ &\quad\geq \frac{1}{5}c_{1}^{2}-3 \vert c_{1} \vert +\frac{1}{20}c_{2}^{2}-10 \vert c_{2} \vert \geq \frac{1}{20}\max \bigl\{ c_{1}^{2}, c_{2}^{2}\bigr\} -13\max \bigl\{ \vert c_{1} \vert , \vert c_{2} \vert \bigr\} . \end{aligned}$$
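The substitution \(u=\psi t\) in (3.7) can be spot-checked numerically; for instance, for \(f_{2}\), with arbitrary sample values of \(t\), \(c_{1}\), \(c_{2}\) (a sanity check only, not part of the proof):

```python
import math

def f2(t, x, y):
    # f_2 from (3.7), reading sin x_1 x_3 as sin(x_1 x_3)
    return math.sin(x[0] * x[2]) - math.cos(y[2]) - y[1] / 20 - x[1] / 20 + math.exp(t)

t, c1, c2 = 0.4, 0.7, -1.3
psi = (c1, c2, c2)                        # psi in Ker(I - B)
x = tuple(c * t for c in psi)             # x = psi * t
lhs = f2(t, x, psi)
# the closed form of f_2(t, psi t, psi) displayed above
rhs = math.sin(c1 * c2 * t**2) - math.cos(c2) - c2 / 20 - t * c2 / 20 + math.exp(t)
assert abs(lhs - rhs) < 1e-12
```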

Note that \(\|\psi \|_{\mathbb{R}^{3}}=\max \{|c_{1}|, |c_{2}|\}\) and \(12\int _{0}^{1}\int _{0}^{1}G(t,s)\,ds\,dt=1\). Hence

$$ \bigl(\psi,QN(\psi t)\bigr)=12 \int _{0}^{1} \int _{0}^{1}G(t,s)\bigl[c_{1}f_{1}+c_{2}(f_{2}+f_{3}) \bigr]\,ds\,dt\geq \frac{1}{20} \Vert \psi \Vert ^{2}_{\mathbb{R}^{3}}-13 \Vert \psi \Vert _{\mathbb{R}^{3}}>0, \quad\text{if } \Vert \psi \Vert _{\mathbb{R}^{3}}> 260, $$

we see that condition (H7) is satisfied. It follows from Theorem 3.2 that the problem (3.6) has at least one solution.