1 Introduction

Impulsive differential equations have gained considerable importance due to their varied applications in many problems of physics, chemistry, biology, applied sciences and engineering. For details and explanations, we refer the reader to Refs. [1–9]. In particular, great interest has been shown by many authors in the subject of impulsive boundary value problems (IBVPs), and a variety of results for IBVPs equipped with different kinds of boundary conditions have been obtained; for instance, see [10–28] and the references cited therein.

However, there are almost no papers on second-order n-dimensional impulsive systems, especially on multi-parameter second-order n-dimensional impulsive singular Neumann systems. In this paper, we introduce this new problem and discuss the existence of infinitely many positive solutions.

Consider the n-dimensional nonlinear second-order impulsive Neumann system

$$ \begin{aligned} &{-}\mathbf{x}^{\prime\prime}(t)+ M\mathbf{x}(t)=\lambda { \mathbf{g}}(t)\mathbf{f} \bigl(t,\mathbf{x}(t) \bigr),\quad t\in J, t\neq t_{k}, \\ &{-}\Delta {\mathbf{x}}^{\prime}|_{t=t_{k}}=\mu { \mathbf{I}}_{k} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr),\quad k=1,2,\ldots ,m, \end{aligned} $$
(1.1)

with the following boundary conditions:

$$ {\mathbf{x}}^{\prime}(0)=\mathbf{x}^{\prime}(1)=0, $$
(1.2)

where λ and μ are positive parameters and M is a positive constant, \(J=[0,1]\), \(t_{k} \in \mathrm{R}\), \(k =1,2,\ldots ,m\), \(m \in \mathrm{N}\) satisfy \(0< t_{1}< t_{2}<\cdots <t_{k}<\cdots <t_{m}<1\). In addition,

$$\begin{aligned}& {\mathbf{x}}=[x_{1},x_{2},\ldots ,x_{n}]^{\top }, \\& {\mathbf{g}}(t)=\operatorname{diag} \bigl[g_{1}(t),g_{2}(t), \ldots ,g_{n}(t) \bigr], \\& {\mathbf{f}}(t,\mathbf{x})= \bigl[f_{1}(t,\mathbf{x}),\ldots ,f_{i}(t,\mathbf{x}),\ldots ,f_{n}(t,\mathbf{x}) \bigr]^{\top }, \\& {\mathbf{I}}_{k} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr)= \bigl[I_{k}^{1} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr), \ldots ,I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr),\ldots ,I_{k}^{n} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \bigr]^{\top }, \\& -\Delta {\mathbf{x}}^{\prime}|_{t=t_{k}}= \bigl[-\Delta x_{1}^{\prime}|_{t=t_{k}}, - \Delta x_{2}^{\prime}|_{t=t_{k}},\ldots ,-\Delta x_{n}^{\prime}|_{t=t_{k}} \bigr]^{\top }, \end{aligned}$$

here

$$ f_{i}(t,\mathbf{x})=f_{i}(t,x_{1},\ldots ,x_{i},\ldots , x_{n}), \qquad I_{k} ^{i}(t_{k},\mathbf{x})=I_{k}^{i}(t_{k},x_{1}, \ldots ,x_{i},\ldots , x _{n}). $$

Therefore, system (1.1) means that

$$ \textstyle\begin{cases} -x_{1}^{\prime\prime}(t)+ Mx_{1}(t)=\lambda g_{1}(t)f_{1}(t,x_{1}(t),x_{2}(t),\dots ,x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{1}^{\prime}|_{t=t_{k}}=\mu I_{k}^{1}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \\ -x_{2}^{\prime\prime}(t)+ Mx_{2}(t)=\lambda g_{2}(t)f_{2}(t,x_{1}(t),x_{2}(t),\ldots ,x_{n}(t)),&t\in J,t\neq t_{k}, \\ -\Delta x_{2}^{\prime}|_{t=t_{k}}=\mu I_{k}^{2}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \\ \cdots , \\ -x_{n}^{\prime\prime}(t)+ Mx_{n}(t)=\lambda g_{n}(t)f_{n}(t,x_{1}(t),x_{2}(t), \ldots ,x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{n}^{\prime}|_{t=t_{k}}=\mu I_{k}^{n}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \end{cases} $$
(1.3)

where \(-\Delta x_{i}^{\prime}|_{t=t_{k}}=x_{i}^{\prime}((t_{k})^{+})-x_{i} ^{\prime}((t_{k})^{-})\) and in which \(x_{i}^{\prime}((t_{k})^{+})\) and \(x _{i}^{\prime}((t_{k})^{-})\) denote the right-hand limit and left-hand limit of \(x_{i}^{\prime}(t)\) at \(t=t_{k}\), respectively.

Similarly, (1.2) means that

$$ \textstyle\begin{cases} x_{1}^{\prime}(0)=x_{1}^{\prime}(1)=0, \\ x_{2}^{\prime}(0)=x_{2}^{\prime}(1)=0, \\ \cdots , \\ x_{n}^{\prime}(0)=x_{n}^{\prime}(1)=0. \end{cases} $$
(1.4)

By a solution x to system (1.1)–(1.2), we understand a vector-valued function \(\mathbf{x}=[x_{1},x_{2},\dots ,x_{n}]^{\top } \in C^{2}(J,R^{n})\) which satisfies (1.1) and (1.2) for \(t\in J\). In addition, for each \(i=1,2,\dots ,n\), \(k =1,2,\ldots ,m\), \(x_{i}(t _{k}^{ +})\) and \(x_{i}(t_{k}^{-})\) exist and \(x_{i}(t)\) is absolutely continuous on each interval \((0,t_{1}]\) and \((t_{k},t_{k+1}]\). A solution is positive if, for each \(i=1,2,\dots ,n\), \(x_{i}(t)\geq 0\) for all \(t\in J\) and at least one component of x is positive on J.

For the case \(n=1\), \(\lambda =1\) and \(\mathbf{I}_{k}\equiv 0\), \(k=1,2,\ldots ,m\), system (1.1)–(1.2) reduces to the problem studied by Sun, Cho and O’Regan in [29]. By using a cone fixed point theorem, the authors obtained some sufficient conditions for the existence of positive solutions in Banach spaces. Very recently, in the case \(n=1\), \(M=0\), \(\lambda =1\) and \(\mathbf{I}_{k}\equiv 0\), \(k=1,2,\ldots ,m\), Sovrano and Zanolin [30] presented a multiplicity result of positive solutions for system (1.1)–(1.2) by applying the shooting method. For other excellent results on Neumann boundary value problems, we refer the reader to the references [31–42].

Here we emphasize that our problem is new in the sense of the multi-parameter second-order n-dimensional impulsive singular Neumann systems introduced here. To the best of our knowledge, the existence of single or multiple positive solutions for the multi-parameter second-order n-dimensional impulsive singular Neumann system (1.1)–(1.2) has not yet been studied, especially the existence of infinitely many positive solutions for system (1.1)–(1.2). In consequence, the main results of the present work will be a useful contribution to the existing literature on the topic of second-order n-dimensional impulsive singular Neumann systems. The results on the existence of infinitely many positive solutions for the given problem are new, though they are proved by applying the well-known method based on the fixed point index theory in cones and the inequality technique.

Throughout this paper, we use \(i=1,2,\dots ,n\), unless otherwise stated.

Let the components of g, f and \(\mathbf{I}_{k}\) satisfy the following conditions:

\((H_{1})\) :

\(g_{i}(t)\in L^{p}[0,1]\) for some \(p\in [1,+\infty )\), and there exists \(N_{i}>0\) such that \(g_{i}(t)\geq N_{i}\) a.e. on J;

\((H_{2})\) :

for every \(g_{i}(t),i=1,2,\ldots ,n\), there exists a sequence \(\{t_{j}^{\prime}\}_{j=1}^{\infty } \) such that \(t_{1}^{\prime}<\delta \), where \(\delta =\min \{t_{1},\frac{1}{2}\} \), \(t_{j}^{\prime} \downarrow t_{0}^{\prime}>0 \) and \(\lim_{t\rightarrow t_{j}^{\prime}} g_{i}(t) =+\infty\) for all \(j=1, 2,\ldots \) ;

\((H_{3})\) :

\(f_{i}(t,\mathbf{x})\in C(J\times R_{+}^{n}, R_{+})\), \(I_{k} ^{i}(t_{k},\mathbf{x}(t_{k}))\in C(J\times R_{+}^{n}, R_{+})\), where \(R_{+}=[0,+\infty )\) and \(R_{+}^{n}=\prod_{i=1}^{n}R_{+}\).

Remark 1.1

It is not difficult to see that condition \((H _{2})\) plays an important role in the proof of Theorem 3.1, and there are many functions satisfying \((H_{2})\); for details, see Example 3.1.

Remark 1.2

From the proof of the main results reported by Sovrano and Zanolin [30], it is not difficult to see that \(f(t,u)>0\) for \(u>0\) is an important condition there, although we obtain the multiplicity of positive solutions in terms of the parameters λ and μ without using it; for details, see Theorem 3.1.

The plan of this article is as follows. In Sect. 2, we collect some well-known results to be used in the subsequent sections and present several new properties of Green’s function, which plays a pivotal role in obtaining the main results given in Sect. 3. In the final section, we give an example of a family of diagonal matrix functions \(\mathbf{g}(t)\) such that \((H_{2})\) holds.

2 Preliminaries

Let \(J^{\prime}=J\setminus \{ t_{1},t_{2},\ldots ,t_{m} \} \) and \(E=C[0,1]\). We define \(PC_{1}[0,1]\) in E by

$$ PC_{1}[0,1]= \bigl\{ x\in E:x^{\prime}(t)\in C(t_{k}, t_{k+1}), \exists x ^{\prime} \bigl(t_{k}^{-} \bigr), x^{\prime} \bigl(t_{k}^{+} \bigr), k=1,2, \ldots ,m \bigr\} . $$
(2.1)

Then \(PC_{1}[0,1]\) is a real Banach space with the norm

$$ \Vert x \Vert _{PC_{1}}=\max \bigl\{ \Vert x \Vert _{\infty }, \bigl\Vert x^{\prime} \bigr\Vert _{\infty } \bigr\} , $$

where \(\Vert x \Vert _{\infty }=\sup_{t\in J}\vert x(t) \vert , \Vert x^{\prime} \Vert _{\infty }=\sup_{t\in J}\vert x^{\prime}(t) \vert \).

Let \(PC_{1}^{n}[0,1]=\underbrace{PC_{1}[0,1]\times \cdots \times PC _{1}[0,1]}_{n}\), and, for any \(\mathbf{x}=[x_{1},x_{2},\dots ,x_{n}]^{ \top }\in PC_{1}^{n}[0,1]\),

$$ \Vert {\mathbf{x}} \Vert =\sum_{i=1}^{n} \Vert x_{i} \Vert _{PC_{1}}. $$
(2.2)

Then \((PC_{1}^{n}[0,1],\Vert \cdot \Vert )\) is a real Banach space.

Suppose that \(G(t,s)\) is the Green’s function of the boundary value problem

$$ -x_{i}^{\prime\prime}(t)+Mx_{i}(t)=0,\qquad x_{i}^{\prime}(0)=x_{i}^{\prime}(1)=0, $$

then

$$ G(t,s)= \frac{1}{\gamma \sinh \gamma } \textstyle\begin{cases} \cosh \gamma (1-t)\cosh \gamma s, &0\leq s\leq t\leq 1, \\ \cosh \gamma (1-s)\cosh \gamma t, &0\leq t\leq s\leq 1, \end{cases} $$
(2.3)

where \(\cosh t = \frac{e^{t}+e^{-t}}{2}\), \(\sinh t= \frac{e^{t}-e^{-t}}{2}\), \(\gamma =\sqrt{M}\).

It is obvious that

$$ A=\frac{1}{\gamma \sinh \gamma }\leq G(t,s)\leq \frac{\cosh \gamma }{ \gamma \sinh \gamma }=B,\quad \forall t,s\in J, $$
(2.4)

and then we have

$$ A\leq G(s,s)\leq B,\quad \forall s\in J. $$
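The two-sided bound (2.4) can also be checked numerically. The following Python sketch is only an illustration: the value \(M=4\) (so \(\gamma =2\)) and the grid resolution are our own sample choices.

```python
import math

def G(t, s, M):
    """Green's function (2.3) of -x'' + M x = 0 with x'(0) = x'(1) = 0."""
    g = math.sqrt(M)
    c = 1.0 / (g * math.sinh(g))
    if s <= t:
        return c * math.cosh(g * (1.0 - t)) * math.cosh(g * s)
    return c * math.cosh(g * (1.0 - s)) * math.cosh(g * t)

# Check A <= G(t, s) <= B from (2.4) on a uniform grid for the sample M = 4.
M = 4.0
gamma = math.sqrt(M)
A = 1.0 / (gamma * math.sinh(gamma))
B = math.cosh(gamma) / (gamma * math.sinh(gamma))
grid = [k / 200.0 for k in range(201)]
vals = [G(t, s, M) for t in grid for s in grid]
assert A - 1e-12 <= min(vals) and max(vals) <= B + 1e-12
```

On this grid the minimum is attained at the corners \((t,s)=(1,0)\) and \((0,1)\), where \(G=A\), and the maximum at \(t=s=0\) and \(t=s=1\), where \(G=B\).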

Lemma 2.1

For any \(\theta \in (t_{0}^{\prime},\delta )\), we have

$$ \frac{\cosh\gamma \theta }{\gamma \sinh \gamma }\leq G(t,s)\leq \frac{ \cosh \gamma \theta \cosh \gamma (1-\theta )}{\gamma \sinh \gamma }, \quad \forall t\in [\theta,1], s\in J. $$
(2.5)

Proof

Equation (2.5) follows easily from the definition of \(G(t,s)\); we omit the details. □

To establish the existence of positive solutions to system (1.1)–(1.2), for a fixed \(\theta \in (t_{0}^{\prime},\delta )\), we construct the cone \(\mathbf{K}_{\theta }\) in \(PC_{1}^{n}[0,1]\) by

$$\begin{aligned} {\mathbf{K}}_{\theta }&= \Biggl\{ \mathbf{x}=(x_{1},x_{2}, \dots ,x_{n}) \in PC _{1}^{n}[0,1]:x_{i}(t) \geq 0, \\ &\quad {} i=1,2,\ldots ,n, t\in J,\min_{t\in [\theta ,1]} \sum _{i=1}^{n}x_{i}(t) \geq \sigma \Vert \textbf{x} \Vert \Biggr\} , \end{aligned}$$
(2.6)

where

$$ \sigma =\frac{\cosh \gamma \theta }{\rho \gamma \sinh \gamma }, $$
(2.7)

here ρ is defined by

$$ \rho =\max \{B,\sinh \gamma \}, $$
(2.8)

and it is easy to see \(\mathbf{K}_{\theta }\) is a closed convex cone of \(PC_{1}^{n}[0,1]\).

Let \(\{\theta_{j}\}_{j=1}^{\infty }\) be such that \(t_{j+1}^{\prime}<\theta _{j}<t_{j}^{\prime}\), \(j=1,2,\ldots \) . Then we get \(0<\cdots <t_{j+1}^{\prime}< \theta_{j}<t_{j}^{\prime}<\cdots <t_{3}^{\prime}<\theta_{2}<t_{2}^{\prime}<\theta_{1}<t _{1}^{\prime}<\delta \leq t_{1}<t_{2}<\cdots <t_{m}<1\), and then, for any \(j\in \textrm{N}\), we can define the cone \(\mathbf{K}_{\theta_{j}}\) by

$$ {\mathbf{K}}_{\theta_{j}}= \Biggl\{ \mathbf{x} \in PC_{1}^{n}[0,1]: x_{i}(t) \geq 0,t\in J, i=1,2,\ldots ,n, \min _{t\in [\theta_{j},1]} \sum_{i=1}^{n}x_{i}(t) \geq \sigma_{j}\Vert \textbf{x} \Vert \Biggr\} , $$
(2.9)

where

$$ \sigma_{j}=\frac{\cosh \gamma \theta_{j}}{\rho \gamma \sinh \gamma }, $$
(2.10)

here ρ is defined by (2.8), and

$$ \theta_{j}\in \bigl(t_{j+1}^{\prime},t_{j}^{\prime} \bigr),\quad j=1,2,\ldots . $$
(2.11)

It is easy to see \(\mathbf{K}_{\theta_{j}}\) is also a closed convex cone of \(PC_{1}^{n}[0,1]\).

Also, for a positive number τ, define \(\mathbf{K}_{\tau \theta_{j}}\) by

$$ {\mathbf{K}}_{\tau \theta_{j}}= \bigl\{ \mathbf{x}\in {\mathbf{K}}_{\theta_{j}}: \Vert \textbf{x} \Vert < \tau \bigr\} . $$

Remark 2.1

It is obvious that \(0<\sigma ,\sigma_{j} <1\) by the definition of σ and \(\sigma_{j}\).
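This fact is also easy to confirm numerically. The following Python sketch (the sampled ranges of γ and θ are our own choices) evaluates σ from (2.7) and (2.8) over a grid of parameter values.

```python
import math

def sigma(gamma, theta):
    """sigma from (2.7), with rho = max{B, sinh(gamma)} as in (2.8)."""
    B = math.cosh(gamma) / (gamma * math.sinh(gamma))
    rho = max(B, math.sinh(gamma))
    return math.cosh(gamma * theta) / (rho * gamma * math.sinh(gamma))

# Sweep sample parameter values gamma in (0, 5], theta in (0, 1):
# sigma always lies strictly between 0 and 1.
samples = [(g / 10.0, th / 20.0) for g in range(1, 51) for th in range(1, 20)]
assert all(0.0 < sigma(g, th) < 1.0 for g, th in samples)
```

Indeed, if \(\rho =B\) then \(\rho \gamma \sinh \gamma =\cosh \gamma \), so \(\sigma =\cosh \gamma \theta /\cosh \gamma <1\); if \(\rho =\sinh \gamma \), then \(\gamma \sinh^{2}\gamma \geq \cosh \gamma \geq \cosh \gamma \theta \) gives the same conclusion.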

Lemma 2.2

If \((H_{1})\)–\((H_{3})\) hold, then system (1.1)–(1.2) has a unique solution \(\mathbf{x}=[x_{1},x_{2},\ldots , x_{n}]^{\top } \in PC_{1}^{n}[0,1]\), in which \(x_{i}(t)\) is given by

$$ x_{i}(t) =\lambda \int_{0}^{1}G(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr). $$
(2.12)
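As a quick consistency check of (2.12), note that in the special case \(\lambda =1\), \(\mu =0\), \(g_{i}\equiv f_{i}\equiv 1\), the unique solution of \(-x^{\prime\prime}+Mx=1\), \(x^{\prime}(0)=x^{\prime}(1)=0\) is the constant \(x\equiv 1/M\), so \(\int_{0}^{1}G(t,s)\,ds=1/M\) for every t. The Python sketch below verifies this by quadrature; the value \(M=4\), the midpoint rule and the grid of test points are our own sample choices.

```python
import math

def G(t, s, M):
    """Neumann Green's function (2.3) for -x'' + M x = 0."""
    g = math.sqrt(M)
    c = 1.0 / (g * math.sinh(g))
    a, b = (s, t) if s <= t else (t, s)
    return c * math.cosh(g * (1.0 - b)) * math.cosh(g * a)

def x_from_representation(t, M, n=4000):
    """Right-hand side of (2.12) with lambda = 1, mu = 0, g_i = f_i = 1,
    approximated by the midpoint rule."""
    h = 1.0 / n
    return sum(G(t, (k + 0.5) * h, M) * h for k in range(n))

# For f = g = 1 and no impulses, the solution of -x'' + Mx = 1 with
# x'(0) = x'(1) = 0 is the constant 1/M, so (2.12) must return 1/M.
M = 4.0
for t in (0.0, 0.3, 0.7, 1.0):
    assert abs(x_from_representation(t, M) - 1.0 / M) < 1e-4
```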

Proof

We use the fact that system (1.1)–(1.2) is equivalent to system (1.3)–(1.4). Therefore, system (1.1)–(1.2) has a unique solution x if and only if, for each \(i=1,2,\ldots ,n\), the problem

$$ \textstyle\begin{cases} -x_{i}^{\prime\prime}(t)+ Mx_{i}(t)=\lambda g_{i}(t)f_{i}(t,x_{1}(t), x_{2}(t),\ldots , x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{i}^{\prime}|_{t=t_{k}}=\mu I_{k}^{i}(t_{k}, x_{1}(t_{k}), x_{2}(t_{k}),\ldots , x_{n}(t_{k})),& k=1,2,\ldots ,m, \\ x_{i}^{\prime}(0)=x_{i}^{\prime}(1)=0,\end{cases} $$
(2.13)

has a unique solution \(x_{i}\), which is given by (2.12).

Next, by a proof which is similar to that of Lemma 2.4 in [40], we can show that (2.12) holds. This finishes the proof of Lemma 2.2. □

Let \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to PC_{1}^{n}[0,1]\) be a map with components \((T_{\lambda \mu }^{1},\ldots ,T_{\lambda \mu }^{i},\ldots ,T_{\lambda \mu }^{n})\). We understand that \(\mathbf{T}_{\lambda \mu }\mathbf{x}=(T_{\lambda \mu }^{1}{\mathbf{x}},\ldots ,T _{\lambda \mu }^{i}{\mathbf{x}},\ldots ,T_{\lambda \mu }^{n}{\mathbf{x}})^{ \top }\), where

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t) &=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\quad {}+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr),\quad i=1,2,\ldots ,n. \end{aligned}$$
(2.14)

Remark 2.2

It follows from Lemma 2.2 and the definition of \(\mathbf{T}_{\lambda \mu }\) that

$$ {\mathbf{x}}=[x_{1},x_{2},\dots ,x_{n}]^{\top } \in PC_{1}^{n}[0,1] $$

is a solution of the system (1.1)–(1.2) if and only if \(\mathbf{x}=[x _{1},x_{2},\ldots ,x_{n}]^{\top }\) is a fixed point of operator \(\textbf{T}_{\lambda \mu }\).

Lemma 2.3

Assume that \((H_{1})\)–\((H_{3})\) hold. Then \(\mathbf{T}_{\lambda \mu }(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}} \) and \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous.

Proof

To show that \(\mathbf{T}_{\lambda \mu }(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta _{j}} \) and that \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to{\mathbf{K}}_{\theta_{j}}\) is completely continuous, it suffices, for \(i=1,2,\ldots , n\), to prove that \(T_{\lambda \mu }^{i}(\mathbf{K} _{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}} \) and that \(T_{\lambda \mu } ^{i}: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous.

Firstly, we prove that \(T_{\lambda \mu }^{i}(\mathbf{K}_{\theta_{j}}) \subset {\mathbf{K}}_{\theta_{j}}\). For \(t\in [\theta_{j},1]\), it follows from (2.5) and (2.14) that

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}\textbf{x} \bigr) (t) &=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t _{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq B \Biggl[\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds +\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr]. \end{aligned}$$
(2.15)

It is obvious that

$$ G_{t}^{\prime}(t,s)= \frac{1}{\sinh \gamma } \textstyle\begin{cases} -\sinh \gamma (1-t)\cosh \gamma s,&0\leq s\leq t\leq 1, \\ \sinh \gamma (1-s)\cosh \gamma t,&0\leq t\leq s\leq 1, \end{cases} $$
(2.16)

and

$$ \max_{t,s\in J,t\neq s} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert \leq \sinh \gamma . $$
(2.17)
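Estimate (2.17) can be checked numerically for concrete parameter values. The sketch below uses the sample value \(\gamma =2\) (our own choice) and evaluates \(G_{t}^{\prime}\) from (2.16) on a grid avoiding the diagonal \(t=s\), where the derivative is discontinuous.

```python
import math

def G_t(t, s, gamma):
    """Partial derivative (2.16) of the Green's function with respect to t (t != s)."""
    if s <= t:
        return -math.sinh(gamma * (1.0 - t)) * math.cosh(gamma * s) / math.sinh(gamma)
    return math.sinh(gamma * (1.0 - s)) * math.cosh(gamma * t) / math.sinh(gamma)

# Check (2.17) for the sample value gamma = 2: |G_t(t, s)| <= sinh(gamma)
# over a grid of points with t != s.
gamma = 2.0
grid = [k / 199.0 for k in range(200)]
sup = max(abs(G_t(t, s, gamma)) for t in grid for s in grid if t != s)
assert sup <= math.sinh(gamma) + 1e-12
```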

By (2.14) and (2.17), we have

$$\begin{aligned} \bigl\vert \bigl(T_{\lambda \mu }^{i}\textbf{x} \bigr)^{\prime}(t) \bigr\vert &= \Biggl\vert \lambda \int_{0}^{1}G_{t}^{\prime}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G_{t}^{\prime}(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr\vert \\ &\leq \lambda \int_{0}^{1} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m} \bigl\vert G_{t}^{\prime}(t,t_{k}) \bigr\vert I_{k}^{i} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \\ &\leq \sinh \gamma \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] . \end{aligned}$$
(2.18)

For any \(t\in J\), combined with (2.15) and (2.18), we have

$$ \bigl\Vert T_{\lambda \mu }^{i}\textbf{x} \bigr\Vert _{PC_{1}} \leq \rho \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] . $$
(2.19)

Then, by (2.5), (2.6) and (2.19)

$$\begin{aligned} \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&=\min_{t\in [\theta_{j},1]} \Biggl[ \lambda \int_{0}^{1}G(t,s) g _{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \frac{\cosh \gamma \theta_{j}}{\gamma \sinh \gamma } \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \frac{\cosh \gamma \theta_{j}}{\rho \gamma \sinh \gamma } \rho \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \sigma_{j} \bigl\Vert T_{\lambda \mu }^{i} \textbf{x} \bigr\Vert _{PC_{1}}. \end{aligned}$$
(2.20)

This shows that \(T_{\lambda \mu }^{i}(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}}\).

Next, by using arguments similar to those of Lemmas 5 and 6 in [16], one can prove that the operator \(T_{\lambda \mu }^{i}: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous. So the proof of Lemma 2.3 is complete. □

To obtain some of the norm inequalities in our main results, we employ the famous Hölder inequality.

Lemma 2.4

(Hölder)

Let \(e\in L^{p}[a,b]\) with \(p>1\), \(h\in L^{q}[a,b]\) with \(q>1\) and \(\frac{1}{p}+\frac{1}{q}=1\). Then \(eh\in L^{1}[a,b]\) and

$$ \Vert eh \Vert _{1}\le \Vert e \Vert _{p}\Vert h \Vert _{q}. $$

Let \(e\in L^{1}[a,b]\), \(h\in L^{\infty }[a,b]\). Then \(eh\in L^{1}[a,b]\) and

$$ \Vert eh \Vert _{1}\le \Vert e \Vert _{1}\Vert h \Vert _{\infty }. $$
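As a small numerical illustration of Lemma 2.4 (with our own sample choices \(p=q=2\), \(e(s)=s\) and \(h(s)=e^{s}\) on \([0,1]\), and a midpoint-rule quadrature):

```python
import math

def midpoint(f, n=2000):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    dx = 1.0 / n
    return sum(f((k + 0.5) * dx) * dx for k in range(n))

def lp_norm(f, p):
    return midpoint(lambda s: abs(f(s)) ** p) ** (1.0 / p)

# Hoelder with p = q = 2 for the sample pair e(s) = s, h(s) = exp(s):
# ||e h||_1 <= ||e||_2 ||h||_2.
e = lambda s: s
h = lambda s: math.exp(s)
lhs = midpoint(lambda s: abs(e(s) * h(s)))   # integral of s*exp(s) = 1 exactly
rhs = lp_norm(e, 2) * lp_norm(h, 2)          # sqrt(1/3) * sqrt((e^2 - 1)/2)
assert lhs <= rhs
```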

Finally, we state the well-known fixed point index theorem in [43].

Lemma 2.5

Let E be a real Banach space and let K be a cone in E. For \(r>0\), we define \(K_{r}= \{ x\in K:\Vert x \Vert < r \} \). Assume that \(T:\bar{K}_{r}\rightarrow K\) is completely continuous such that \(Tx\neq x\) for \(x\in \partial K_{r}= \{ x\in K:\Vert x \Vert =r \} \).

(i) If \(\Vert Tx \Vert \geq \Vert x \Vert \) for \(x\in \partial K_{r}\), then \(\mathbf{i}(T,K_{r},K)=0\).

(ii) If \(\Vert Tx \Vert \leq \Vert x \Vert \) for \(x\in \partial K_{r}\), then \(\mathbf{i}(T,K_{r},K)=1\).

3 Main result

In this section, we establish the solvable intervals of the positive parameters λ and μ for the existence of infinitely many positive solutions for system (1.1)–(1.2) by using Lemma 2.4 and Lemma 2.5.

For ease of expression, we introduce the following notation:

$$\begin{aligned}& \bigl(f_{0}^{\tau } \bigr)^{i}=\max \biggl\{ \max _{t\in J}\frac{f_{i}(t, \mathbf{x})}{\tau }: 0\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad F_{0}^{ \tau }=\max_{1\leq i\leq n} \bigl(f_{0}^{\tau } \bigr)^{i}; \\& \bigl(f_{\sigma_{j}\tau }^{\tau } \bigr)^{i}=\min \biggl\{ \min _{t\in [\theta_{j},1]}\frac{f_{i}(t,\mathbf{x})}{\tau }: \sigma _{j}\tau \leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad F_{\sigma_{j}\tau } ^{\tau }= \min_{1\leq i\leq n} \bigl(f_{\sigma_{j}\tau }^{\tau } \bigr)^{i}; \\& \bigl(I_{0}^{\tau }(k) \bigr)^{i}= \max \biggl\{ \max _{t\in J}\frac{I_{k} ^{i}(t,\mathbf{x})}{\tau }: 0\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad \mathbf{I}_{0}^{\tau }(k)=\max _{1\leq i\leq n} \bigl(I_{0}^{\tau }(k) \bigr)^{i}; \\& \bigl(I_{\sigma_{j}\tau }^{\tau }(k) \bigr)^{i}=\min \biggl\{ \min _{t\in [\theta_{j},1]}\frac{I_{k}^{i}(t,\mathbf{x})}{\tau }: \sigma_{j}\tau \leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad \mathbf{I}_{\sigma _{j}\tau }^{\tau }(k)= \min_{1\leq i\leq n} \bigl(I_{\sigma_{j}\tau } ^{\tau }(k) \bigr)^{i}, \end{aligned}$$

where \(i=1,2,\ldots ,n\), \(j=1,2,\ldots \) , and

$$ D=\max \bigl\{ \Vert G \Vert _{q}\Vert g_{i} \Vert _{p}, \Vert G \Vert _{1}\Vert g_{i} \Vert _{\infty }, B\Vert g_{i} \Vert _{1} \bigr\} ,\qquad \rho_{0}=\min \biggl\{ 1,\frac{A}{\cosh \gamma } \biggr\} . $$

We consider the following three cases for \(g_{i}(t)\in L^{p}[0,1]\): \(p>1\), \(p=1\) and \(p=\infty \). The case \(p>1\) is treated in the following theorem, which is our main result.

Theorem 3.1

Assume that \((H_{1})\)–\((H_{3})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j}, \quad j=1,2, \ldots . $$
(3.1)

For each natural number j, we assume that f and \(\mathbf{I} _{k}\) satisfy

(\(H_{4}\)):

\(F_{0}^{r_{j}}\leq L, F_{0}^{R_{j}}\leq L\) and for any \(k\in \{1,2,\ldots ,m\}\), \(\mathbf{I}_{0}^{r_{j}}(k)\leq L\), \(\mathbf{I} _{0}^{R_{j}}(k)\leq L\), where

$$ L< \min \biggl\{ \frac{1}{n\lambda \rho_{0}D},\frac{1}{n\mu mA} \biggr\} ; $$
(3.2)
(\(H_{5}\)):

\(F_{\sigma_{j}\eta_{j}}^{\eta_{j}}\geq l\) and, for any \(k\in \{1,2,\ldots ,m\}\), \(\mathbf{I}_{\sigma_{j}\eta_{j}}^{\eta_{j}}(k) \geq l\), where \(l>0\).

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1} ^{\infty }\), \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\), with \(\Vert {\mathbf{x}}_{j}^{(1)} \Vert >\sigma_{j}\eta_{j}\).

Proof

Let \(\lambda_{0}=\sup \{\lambda_{j}\}\) with \(\lambda_{j}=\frac{1}{2AN_{i}(1-\theta_{j})l}\), and \(\mu_{0}=\sup \{\mu_{j}\}\) with \(\mu_{j}=\frac{1}{2Aml}\), \(j=1,2,\ldots \) . Then, for any \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), (2.14) and Lemma 2.3 imply that \(\textbf{T}_{\lambda \mu }\) and \(T_{\lambda \mu }^{i}\) (\(i=1,2,\ldots ,n\)) are all completely continuous.

Let \(t\in J\), \(\mathbf{x}\in \partial {\mathbf{K}}_{r_{j}\theta_{j}}\). Then \(\Vert {\mathbf{x}} \Vert = r_{j}\).

Therefore, for any \(\mathbf{x}\in \partial {\mathbf{K}}_{r_{j}\theta_{j}}\), it follows from \((H_{4})\) that

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t _{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq \lambda \int_{0}^{1}G(s,s)g_{i}(s)Lr_{j}\,ds+ \mu \sum_{k=1} ^{m}G(t,t_{k})Lr_{j} \\ &\leq \lambda L\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}r_{j}+ \mu LmBr_{j} \\ &< \frac{r_{j}}{n}= \frac{\Vert {\mathbf{x}} \Vert }{n}. \end{aligned}$$
(3.3)

Moreover, by (2.4), (2.5), (2.14), (2.16) and \((H_{4})\),

$$\begin{aligned} \bigl\vert \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr)^{\prime}(t) \bigr\vert &\leq \lambda \int_{0}^{1} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\quad {}+ \mu \sum_{k=1}^{m} \bigl\vert G_{t}^{\prime}(t,t_{k}) \bigr\vert I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq \sinh \gamma \Biggl[\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \Biggl[\lambda \int_{0}^{1}G(t,s)g_{i}(s)f_{i} \bigl(s, \mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}AI_{k}^{i} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \Biggl[\lambda \int_{0}^{1}\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu A\sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \bigl(\lambda Lr_{j}\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}+ \mu AmLr_{j} \bigr) \\ &< \frac{r_{j}}{n}=\frac{\Vert {\mathbf{x}} \Vert }{n}. \end{aligned}$$
(3.4)

Consequently, from (3.3) and (3.4), we have

$$ \Vert {\mathbf{T}}_{\lambda \mu }{\mathbf{x}} \Vert =\sum_{i=1}^{n} \bigl\Vert T_{\lambda \mu }^{i}\textbf{x} \bigr\Vert _{PC_{1}}\leq \Vert \textbf{x} \Vert ,\quad \forall \textbf{x}\in \partial \textbf{K}_{r_{j}\theta_{j}}. $$
(3.5)

Then, by Lemma 2.5, we get

$$ {\mathbf{i}}(\mathbf{T}_{\lambda \mu }, \textbf{K}_{r_{j}\theta_{j}}, \textbf{K}_{\theta_{j}})=1. $$
(3.6)

Similarly, for \(\textbf{x}\in \partial \textbf{K}_{R_{j}\theta_{j}}\), we have \(\Vert T_{\lambda \mu }\textbf{x} \Vert \leq \Vert \textbf{x} \Vert \), and it follows from Lemma 2.5 that

$$ {\mathbf{i}}(\mathbf{T}_{\lambda \mu }, \textbf{K}_{R_{j}\theta_{j}}, \textbf{K}_{\theta_{j}})=1. $$
(3.7)

On the other hand, letting

$$ {\mathbf{x}}\in {\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}= \Biggl\{ \mathbf{x} \in {\mathbf{K}}_{\theta_{j}}:\Vert {\mathbf{x}} \Vert < \eta_{j},\min_{t\in [\theta_{j},1]}\sum_{i=1}^{n}x_{i}(t)> \sigma_{j}\eta _{j} \Biggr\} , $$

then \(\Vert \textbf{x} \Vert \leq \eta_{j}\). Hence, similarly to the proof of (3.5), we have

$$ \Vert {\mathbf{T}}_{\lambda \mu }{\mathbf{x}} \Vert \leq \eta_{j}. $$
(3.8)

Furthermore, for \(\textbf{x}\in \bar{\textbf{K}}_{\sigma_{j}\eta_{j} \theta_{j}}^{\eta_{j}}\), we have \(\Vert {\mathbf{x}} \Vert \leq \eta_{j}, \min_{t\in [\theta_{j},1]}\sum_{i=1}^{n}x_{i}(t)\geq \sigma_{j}\eta _{j}\), and then it follows from \((H_{5})\) that

$$\begin{aligned} \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&= \min_{t\in [\theta_{j},1]} \Biggl[ \lambda \int_{0}^{1}G(t,s)g _{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ & \geq A\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+A\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \\ & \geq AN_{i}\lambda \int_{\theta_{j}}^{1}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+A \mu ml\eta _{j} \\ & \geq A N_{i}\lambda (1-\theta_{j})l\eta_{j}+A \mu ml\eta_{j} \\ & > A N_{i}\lambda_{0}(1-\theta_{j})l \eta_{j}+A\mu_{0}ml\eta_{j} \\ & \geq A N_{i}\lambda_{j}(1-\theta_{j})l \eta_{j}+A\mu_{j} ml\eta_{j} \\ & = \frac{\eta_{j}}{2}+\frac{\eta_{j}}{2} \\ & =\eta_{j}=\Vert \textbf{x} \Vert , \end{aligned}$$

which shows that

$$ \min_{t\in [\theta_{j},1]}\sum_{i=1}^{n} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)\geq \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)> \Vert {\mathbf{x}} \Vert . $$
(3.9)

Letting \(\mathbf{x}_{0}=(x_{0}^{1},\ldots ,x_{0}^{i},\ldots ,x_{0}^{n})\) and \(\mathbf{F}(t,\mathbf{x})=(1-t)\mathbf{T}_{\lambda \mu }{\mathbf{x}}+t \textbf{x}_{0}\), where \(x_{0}^{i}\equiv \frac{\sigma_{j}\eta_{j}+\eta _{j}}{2}\), \(i=1,2,\ldots ,n\), then \(\mathbf{F}: J\times \bar{\textbf{K}} _{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}\rightarrow \textbf{K}_{ \theta_{j}}\) is completely continuous, and from the analysis above, we obtain, for \((t,\textbf{x})\in J\times \bar{\mathbf{K}}_{\sigma_{j}\eta _{j}\theta_{j}}^{\eta_{j}}\),

$$ {\mathbf{F}}(t,\mathbf{x})\in \bar{\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}} ^{\eta_{j}}. $$
(3.10)

Therefore, for \(t\in J, \mathbf{x}\in \bar{\mathbf{K}}_{\sigma_{j}\eta_{j} \theta_{j}}^{\eta_{j}}\), we have \(\mathbf{F}(t,\mathbf{x})\neq {\mathbf{x}}\). Hence, by the normality property and the homotopy invariance property of the fixed point index, we obtain

$$ {\mathbf{i}} \bigl(\mathbf{T}_{\lambda \mu }, \textbf{K}_{\sigma_{j}\eta_{j}\theta _{j}}^{\eta_{j}}, \textbf{K}_{\theta_{j}} \bigr)=\mathbf{i} \bigl(\mathbf{x}_{0}, \textbf{K}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}, \textbf{K}_{ \theta_{j}} \bigr)=1. $$
(3.11)

Consequently, by the solution property of the fixed point index, \(\mathbf{T}_{\lambda \mu }\) has a fixed point \(\textbf{x}_{j}^{(1)}\) and \(\textbf{x}_{j}^{(1)}\in \bar{\textbf{K}}_{\sigma_{j}\eta_{j}\theta _{j}}^{\eta_{j}}\). By Lemma 2.2 and (2.14), it follows that \(\textbf{x}_{j}^{(1)}\) is a solution to system (1.1)–(1.2), and

$$ \bigl\Vert {\mathbf{x}}_{j}^{(1)} \bigr\Vert > \sigma_{j}\eta_{j}. $$

On the other hand, from (3.6), (3.7) and (3.11) together with the additivity of the fixed point index, we get

$$\begin{aligned}& \mathbf{i} \bigl(\mathbf{T}_{\lambda \mu },\mathbf{K}_{R_{j}\theta_{j}}/ \bigl( \bar{ \mathbf{K}}_{r_{j}\theta_{j}}\cup \bar{ \mathbf{K}}_{\sigma_{j}\eta _{j}\theta_{j}}^{\eta_{j}} \bigr),\mathbf{K}_{\theta_{j}} \bigr) \\& \quad =\mathbf{i}(\mathbf{T}_{\lambda \mu },\mathbf{K}_{R_{j}\theta_{j}}, \mathbf{K}_{ \theta_{j}})-\mathbf{i} \bigl(\mathbf{T}_{\lambda \mu }, \bar{ \mathbf{K}}_{\sigma _{j}\eta_{j}\theta_{j}}^{\eta_{j}},\mathbf{K}_{\theta_{j}} \bigr)- \mathbf{i}( \mathbf{T}_{\lambda \mu },\bar{\mathbf{K}}_{r_{j}\theta_{j}}, \mathbf{K}_{\theta _{j}}) =1-1-1=-1. \end{aligned}$$
(3.12)

Hence, by the solution property of the fixed point index, \(\mathbf{T} _{\lambda \mu }\) has a fixed point \(\mathbf{x}_{j}^{(2)}\) and \(\mathbf{x} _{j}^{(2)}\in {\mathbf{K}}_{R_{j}\theta_{j}}/(\bar{\mathbf{K}}_{r_{j}\theta_{j}}\cup \bar{\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}})\). Since \(j\in \mathrm{N}\) was arbitrary, the proof is complete. □

The following corollary deals with the case \(p=\infty \).

Corollary 3.1

Assume that for each natural number j, \((H_{1})\)–\((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j},\quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1} ^{\infty }\) and \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\).

Proof

Replace \(\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}\) by \(\Vert G \Vert _{1}\Vert g_{i} \Vert _{\infty }\) and repeat the argument of Theorem 3.1. □

Finally, we consider the case of \(p=1\).

Corollary 3.2

Assume that for each natural number j, \((H_{1})\)–\((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j}, \quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1} ^{\infty }\) and \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\).

Proof

Replace \(\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}\) by \(B\Vert g_{i} \Vert _{1}\) and repeat the argument of Theorem 3.1; this yields Corollary 3.2. □

Corollary 3.3

Assume that for each natural number j, \((H_{1})\)–\((H_{3})\) and \((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j}, \quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has one infinite family of positive solutions.

Remark 3.1

Some ideas of the n-dimensional system are from [44].

Remark 3.2

Some ideas of the existence of denumerably many positive solutions are from [45].

Remark 3.3

From the proof of Theorem 3.1, it is not difficult to see that \((H_{2})\) plays an important role in the proof that system (1.1)–(1.2) has two infinite families of positive solutions. As an example, we consider a family of diagonal matrix functions \(\mathbf{g}(t)\) as follows.

Example 3.1

We will check that there exists a family of diagonal matrix functions \(\mathbf{g}(t)\) satisfying condition \((H_{2})\).

For ease of the discussion, an example of the case \(n=2\) is given as follows. Define \(\mathbf{g}(t)\) by

$$ {\mathbf{g}}(t)= \left ( \textstyle\begin{array}{@{}c@{\quad }c@{}} g_{1}(t) & 0 \\ 0 & g_{2}(t) \end{array}\displaystyle \right ) , $$

where \(g_{1}(t)\) and \(g_{2}(t)\) are singular at the points \(t_{j}^{\prime}\), \(j=1,2,\ldots\) , given by

$$ t_{j}^{\prime}=\frac{2}{5}-\frac{1}{10}\sum_{i=1}^{j} \frac{1}{(2i-1)^{4}},\quad j=1,2,\ldots\,. $$
(3.13)

It follows from (3.13) that

$$\begin{aligned}& t_{1}^{\prime}=\frac{2}{5}-\frac{1}{10}= \frac{3}{10}, \\& t_{j}^{\prime}-t_{j+1}^{\prime}= \frac{1}{10(2j+1)^{4}}, \quad j=1,2,\ldots \,, \end{aligned}$$

and from \(\sum_{j=1}^{\infty }\frac{1}{(2j-1)^{4}}=\frac{\pi ^{4}}{96}\), we have

$$ t_{0}^{\prime}=\lim_{j\rightarrow \infty }t_{j}^{\prime}= \frac{2}{5}- \frac{1}{10}\sum_{j=1}^{\infty } \frac{1}{(2j-1)^{4}} = \frac{2}{5}-\frac{1}{10}\cdot \frac{\pi^{4}}{96}=\frac{2}{5}-\frac{\pi ^{4}}{960}>\frac{1}{10}. $$
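As a quick numerical sanity check, outside the formal argument, the defining sum in (3.13), the gap formula, and the lower bound on the limit can be verified in Python (the helper name `t_prime` is ours, not from the text):

```python
import math

def t_prime(j):
    """t'_j = 2/5 - (1/10) * sum_{i=1}^{j} 1/(2i-1)^4, as in (3.13)."""
    return 0.4 - 0.1 * sum(1.0 / (2 * i - 1) ** 4 for i in range(1, j + 1))

# t'_1 = 3/10 and the gap formula t'_j - t'_{j+1} = 1/(10(2j+1)^4)
assert abs(t_prime(1) - 0.3) < 1e-12
for j in range(1, 50):
    assert abs((t_prime(j) - t_prime(j + 1)) - 1 / (10 * (2 * j + 1) ** 4)) < 1e-12

# the limit t'_0 = 2/5 - pi^4/960 lies strictly above 1/10
t0 = 0.4 - math.pi ** 4 / 960
assert abs(t_prime(10 ** 4) - t0) < 1e-12
assert t0 > 0.1
print(round(t0, 6))  # prints 0.298532
```

In particular, all singular points \(t_{j}^{\prime}\) stay inside \((\frac{1}{10},\frac{3}{10}]\subset (0,1)\).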

Let

$$ \tau_{1}=\frac{\sqrt{2}}{3} \biggl(\frac{\pi^{2}}{4}-1 \biggr),\qquad \tau_{2}=-\sqrt{2}e \biggl(\frac{\pi^{2}}{4}-1 \biggr). $$
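These constants are chosen so that, in the computations (3.14) and (3.15) below, the singular parts contribute exactly \(+3\) and \(-e^{-1}\), respectively; a minimal numerical check of the two identities used there:

```python
import math

c = math.pi ** 2 / 4 - 1  # value of the bracketed series sum in (3.14)-(3.15)

tau1 = math.sqrt(2) / 3 * c
tau2 = -math.sqrt(2) * math.e * c

# sqrt(2)/tau1 * c = 3  and  sqrt(2)/tau2 * c = -1/e
assert abs(math.sqrt(2) / tau1 * c - 3) < 1e-12
assert abs(math.sqrt(2) / tau2 * c + math.exp(-1)) < 1e-12
```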

Consider the functions

$$\begin{aligned}& g_{1}(t)=\sum_{j=1}^{\infty }g_{j}^{(1)}(t), \quad t\in J, \\& g_{2}(t)=\sum_{j=1}^{\infty }g_{j}^{(2)}(t), \quad t\in J, \end{aligned}$$

where

$$\begin{aligned}& g_{j}^{(1)}(t)= \textstyle\begin{cases} \frac{j+2}{(j+1)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in [0,\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2} ),\\ \frac{1}{\tau_{1}\sqrt{t_{j}^{\prime}-t}},&t\in [\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime} ), \\ \frac{1}{\tau_{1}\sqrt{t-t_{j}^{\prime}}},& t\in [t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2} ], \\ \frac{j+2}{(j+1)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in (\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1 ], \end{cases}\displaystyle \end{aligned}$$

and

$$\begin{aligned}& g_{j}^{(2)}(t)= \textstyle\begin{cases} \frac{2}{(2j-2)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in [0,\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2} ), \\ \frac{1}{\tau_{2}\sqrt{t_{j}^{\prime}-t}},& t\in [\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime} ), \\ \frac{1}{\tau_{2}\sqrt{t-t_{j}^{\prime}}},& t\in [t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2} ], \\ \frac{2}{(2j-2)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in (\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1 ]. \end{cases}\displaystyle \end{aligned}$$
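A small bookkeeping check behind the first equality in (3.14): each of the two constant pieces of \(g_{j}^{(1)}\) integrates to \(\frac{j+2}{2(j+1)!}\), so together they contribute \(\frac{j+2}{(j+1)!}\). A hedged numerical confirmation (the helper `t_prime` reimplements (3.13) and is not from the text):

```python
import math

def t_prime(j):
    # t'_j from (3.13)
    return 0.4 - 0.1 * sum(1.0 / (2 * i - 1) ** 4 for i in range(1, j + 1))

for j in range(2, 20):  # j >= 2, so that t'_{j-1} > t'_j > t'_{j+1}
    a = (t_prime(j) + t_prime(j + 1)) / 2  # right end of the left constant piece
    b = (t_prime(j) + t_prime(j - 1)) / 2  # left end of the right constant piece
    left = a * (j + 2) / (math.factorial(j + 1) * (t_prime(j) + t_prime(j + 1)))
    right = (1 - b) * (j + 2) / (math.factorial(j + 1) * (2 - t_prime(j) - t_prime(j - 1)))
    # the two constant pieces together integrate to (j+2)/(j+1)!
    assert abs(left + right - (j + 2) / math.factorial(j + 1)) < 1e-12
```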

From \(\sum_{j=1}^{\infty }\frac{j+2}{(j+1)!}=2e-3\), \(\sum_{j=1}^{\infty }\frac{2}{(2j-2)!}=e+e^{-1}\) and \(\sum_{j=1}^{\infty }\frac{1}{(2j-1)^{2}}=\frac{\pi^{2}}{8}\), we have

$$\begin{aligned} &\begin{aligned}[b] \sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(1)}(t)\,dt&= \sum_{j=1}^{\infty } \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac{j+2}{(j+1)!(t _{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac{j+2}{(j+1)!(2-t _{j}^{\prime}-t_{j-1}^{\prime})}\,dt \\ &\quad {}+ \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}^{t_{j}^{\prime}}\frac{1}{\tau_{1}\sqrt{t _{j}^{\prime}-t}}\,dt+ \int_{t_{j}^{\prime}}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{ \tau_{1}\sqrt{t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty }\frac{j+2}{(j+1)!} + \frac{\sqrt{2}}{\tau _{1}}\sum_{j=1}^{\infty } \Bigl( \sqrt{t_{j}^{\prime}-t_{j+1}^{\prime}} +\sqrt{t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}}\sum_{j=1}^{\infty } \biggl( \frac{1}{(2j+1)^{2}} +\frac{1}{(2j-1)^{2}} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}} \biggl(\frac{\pi^{2}}{8}-1+\frac{\pi ^{2}}{8} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}} \biggl(\frac{\pi^{2}}{4}-1 \biggr) \\ &=2e-3+3=2e, \end{aligned} \end{aligned}$$
(3.14)
$$\begin{aligned} &\begin{aligned}[b] \sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(2)}(t)\,dt&= \sum_{j=1}^{\infty } \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac{2}{(2j-2)!(t _{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac{2}{(2j-2)!(2-t _{j}^{\prime}-t_{j-1}^{\prime})}\,dt \\ &\quad {}+ \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}^{t_{j}^{\prime}}\frac{1}{\tau_{2}\sqrt{t _{j}^{\prime}-t}}\,dt+ \int_{t_{j}^{\prime}}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{\tau_{2}\sqrt{t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty }\frac{2}{(2j-2)!}+ \frac{\sqrt{2}}{\tau_{2}}\sum_{j=1}^{\infty } \Bigl( \sqrt{t_{j}^{\prime}-t_{j+1}^{\prime}}+\sqrt{t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}}\sum_{j=1}^{\infty } \biggl(\frac{1}{(2j+1)^{2}}+\frac{1}{(2j-1)^{2}}\biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}} \biggl(\frac{\pi^{2}}{8}-1+ \frac{\pi^{2}}{8} \biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}} \biggl(\frac{\pi^{2}}{4}-1 \biggr) \\ &=e+e^{-1}-e^{-1}=e. \end{aligned} \end{aligned}$$
(3.15)
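The three classical series values quoted before (3.14)–(3.15) can be confirmed numerically; a minimal Python sanity check, outside the formal argument:

```python
import math

# sum_{j>=1} (j+2)/(j+1)! = 2e - 3, since (j+2)/(j+1)! = 1/j! + 1/(j+1)!
s1 = sum((j + 2) / math.factorial(j + 1) for j in range(1, 40))
assert abs(s1 - (2 * math.e - 3)) < 1e-12

# sum_{j>=1} 2/(2j-2)! = e + 1/e, twice the even part of the series for e
s2 = sum(2 / math.factorial(2 * j - 2) for j in range(1, 40))
assert abs(s2 - (math.e + math.exp(-1))) < 1e-12

# sum_{j>=1} 1/(2j-1)^2 = pi^2/8 (slowly convergent; partial sum to 10^6 terms)
s3 = sum(1 / (2 * j - 1) ** 2 for j in range(1, 10 ** 6))
assert abs(s3 - math.pi ** 2 / 8) < 1e-6
```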

Thus, from (3.14) and (3.15), it is easy to see that

$$\begin{aligned}& \int_{0}^{1}g_{1}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty }g_{j} ^{(1)}(t)\,dt =\sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(1)}(t)\,dt=2e< \infty , \\& \int_{0}^{1}g_{2}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty }g_{j} ^{(2)}(t)\,dt =\sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(2)}(t)\,dt=e< \infty . \end{aligned}$$

Therefore \(g_{1}(t),g_{2}(t)\in L^{1}[0,1]\), which shows that condition \((H_{2})\) holds.