1 Introduction

As shown in [1], the periodic solutions of difference equations, the solutions of boundary value problems and the steady-state solutions of many neural networks can all be reduced to solutions of the following nonlinear algebraic system:

$$ Bu=f(u), $$
(1)

where \(B=(b_{ij})_{n\times n}\) is a real symmetric n by n matrix and \(f(u)=(f_{1}(u_{1}),f_{2}(u_{2}),\ldots,f_{n}(u_{n}))^{T}\) for \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\in R^{n}\), where \(f_{k}:R\rightarrow R\) for each k.

A column vector \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\in R^{n}\) is said to be a solution of (1) if substituting u into (1) yields an identity. A vector \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\) is said to be positive if \(u_{i}>0\) for all \(i\in \{ 1,2,\ldots,n \} \), negative if \(u_{i}<0\) for all \(i\in \{ 1,2,\ldots,n \} \), and zero-free if \(u_{i}\neq 0\) for all \(i\in \{ 1,2,\ldots,n \} \).

The existence of solutions of (1) has been studied under various assumptions on B: if B in (1) is a real positive definite matrix, the existence of zero-free solutions of (1) is studied in [2–4]; if B in (1) is a real symmetric matrix with a positive eigenvalue, the existence of non-trivial solutions of (1) is discussed in [1, 5, 6]; if B in (1) is a symmetric non-negative matrix, the existence of positive and negative solutions of (1) is explored in [7].

In this paper, assuming that B is a real symmetric matrix whose entries satisfy some additional conditions, we use variational methods (see [8]) to establish existence criteria for positive and negative solutions of (1). Our proofs are elementary. Furthermore, we provide examples showing that our conditions are new and our results are sharp.

2 Preliminaries

In this section, we give some notations and definitions.

Let

$$ \Omega^{+}= \bigl\{ u=(u_{1},u_{2}, \ldots,u_{n})^{T}\in R^{n}:u_{k}\geq 0\mbox{ for } k\in \{ 1,2,\ldots,n \} \bigr\} $$

and

$$ \Omega^{-}= \bigl\{ v=(v_{1},v_{2}, \ldots,v_{n})^{T}\in R^{n}:v_{k}\leq 0\mbox{ for }k\in \{ 1,2,\ldots,n \} \bigr\} . $$

Consider the function \(I:\Omega^{+}\rightarrow R\) defined by

$$ I(u)=\frac{1}{2}u^{T}Bu-\sum_{k=1}^{n} \int_{0}^{u_{k}}f_{k}(s)\,ds. $$
(2)

Since

$$ \frac{\partial I(u)}{\partial u_{k}}= ( Bu ) _{k}-f_{k}(u _{k}),\quad k \in \{ 1,2,\ldots,n \}, $$

a column vector \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\) is a positive solution of (1) if and only if u is a critical point of function I in the interior of \(\Omega^{+}\).
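The identity between critical points of I and solutions of (1) is easy to check numerically. The following minimal Python/NumPy sketch (not part of the original argument) uses the matrix B of Example 3.1 below and the sample nonlinearity \(f_{k}(s)=s^{3}\), for which \(\int_{0}^{z}f_{k}(s)\,ds=z^{4}/4\), and compares a finite-difference gradient of I with \(Bu-f(u)\); the evaluation point is arbitrary.

```python
import numpy as np

# Minimal sketch (not from the paper): with the sample choice f_k(s) = s^3,
# the antiderivative is F_k(z) = z^4/4, so I(u) = (1/2) u^T B u - sum_k u_k^4/4.
# We check that a finite-difference gradient of I matches (B u)_k - f_k(u_k).
B = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])   # the symmetric matrix of Example 3.1 below

f = lambda z: z**3                   # sample nonlinearity f_k(s) = s^3
F = lambda z: z**4 / 4               # F_k(z) = int_0^z f_k(s) ds

def I(u):
    return 0.5 * u @ B @ u - F(u).sum()

u = np.array([0.7, 1.3, 0.4])        # an arbitrary point in the interior of Omega^+
analytic = B @ u - f(u)              # (B u)_k - f_k(u_k)

h = 1e-6
numeric = np.array([(I(u + h * e) - I(u - h * e)) / (2 * h) for e in np.eye(3)])
print(np.allclose(analytic, numeric, atol=1e-6))   # True
```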

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric n by n matrix. For convenience, for some \(\{ i_{1},i_{2},\ldots, i_{k_{0}} \} \subset \{ 1,2,\ldots,n \} \), let

$$ \overline{B}(i_{1},i_{2},\ldots,i_{k_{0}})= \sum _{i,j\in \{ i_{1},i_{2},\ldots,i_{k_{0}} \} }b_{ij}, $$

and let \(\lambda_{\max }\) be the maximum eigenvalue of B and \(\lambda_{\min } \) the minimum eigenvalue of B.
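For concreteness, the quantities \(\overline{B}(i_{1},i_{2},\ldots,i_{k_{0}})\), \(\lambda_{\max }\) and \(\lambda_{\min }\) can be computed as follows; this is an illustrative helper, not part of the paper, and it uses 0-based indices and the matrix of Example 3.1 below.

```python
import numpy as np

# Illustrative helper (not from the paper): B_bar(S) sums the entries of B
# whose row and column indices both lie in S; eigvalsh returns the eigenvalues
# of the symmetric matrix B. Indices are 0-based here.
def B_bar(B, S):
    S = list(S)
    return B[np.ix_(S, S)].sum()

def extreme_eigenvalues(B):
    eigs = np.linalg.eigvalsh(B)
    return eigs.max(), eigs.min()      # (lambda_max, lambda_min)

B = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])     # the matrix of Example 3.1 below
print(B_bar(B, [0, 1, 2]))             # 3.0: the sum of all entries of B
print(extreme_eigenvalues(B))          # approximately (1.0, -2.0)
```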

3 Superlinear case

In this section, we are concerned with \(f_{1},\ldots,f_{n}\) that are ‘superlinear’ near 0.

Theorem 3.1

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix such that \(b_{ij}\geqslant 0\) for all \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\), \(\max \{ b_{11},b_{12} \} >0\), \(\max \{ b_{i+1,i},b_{i+1,i+1} \} >0\) for \(i\in \{ 1,2,\ldots,n-1 \} \), and there exists some \(\{ i_{1},i_{2},\ldots,i _{k_{0}} \} \subset \{ 1,2,\ldots,n \} \) such that \(\overline{B}(i_{1},i_{2},\ldots,i_{k_{0}})>0\). Assume furthermore that \(f_{k}\in C ( [ 0,+\infty ) ,R ) \) for each k and

(\(\mathrm{G}_{1}\)):

There exist constants \(a_{1}>\frac{1}{2}\lambda_{\max }\), \(a_{2}>0 \) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\geq a_{1}z^{2}-a_{2}\quad \textit{for } z\geq M \textit{ and } k\in \{ 1,2,\ldots,n \} . $$
(\(\mathrm{G}_{2}\)):

For each \(k\in \{ 1,2,\ldots,n \} \),

$$ \lim_{z\rightarrow 0^{+}}\frac{\int_{0}^{z}f_{k}(s)\,ds}{z ^{2}}=0. $$

Then system (1) has at least one positive real solution.

Proof

First, we prove that I is bounded from above in \(\Omega^{+}\). According to (\(\mathrm{G}_{1}\)), if

$$ a=\max_{1\leq k\leq n} \biggl\{ \biggl\vert \int_{0}^{z}f_{k}(s)\,ds-a _{1}z^{2}+a_{2}\biggr\vert +a_{2}:0 \leq z\leq M \biggr\} , $$

then for any \(z\geq 0\) and \(k\in \{ 1,2,\ldots,n \} \),

$$ \int_{0}^{z}f_{k}(s)\,ds\geq a_{1}z^{2}-a. $$

For any \(u\in \Omega^{+}\),

$$ I(u)\leq \frac{1}{2}\lambda_{\max }\Vert u\Vert ^{2}-a_{1}\sum_{k=1}^{n}u_{k}^{2}+na= \biggl( \frac{1}{2}\lambda_{\max }-a_{1} \biggr) \Vert u\Vert ^{2}+na. $$
(3)

Since \(a_{1}>\frac{1}{2}\lambda_{\max }\), for any \(u\in \Omega^{+}\),

$$ I(u)\leq na. $$

That is, I is bounded from above in \(\Omega^{+}\).

Claim 1

Let \(c_{0}=\sup_{u\in \Omega ^{+}} I(u)\), then \(c_{0}>0\).

By (\(\mathrm{G}_{2}\)), there exist constants \(\delta >0\) and \(\beta \in ( 0,\frac{1}{2k_{0}}\overline{B}(i_{1},i_{2},\ldots,i_{k_{0}}) ) \) such that for any \(0\leq z\leq \delta \),

$$ \int_{0}^{z}f_{k}(s)\,ds\leq \beta z^{2},\quad k=1,2,\ldots,n. $$
(4)

Choose \(v=(v_{1},v_{2},\ldots,v_{n})^{T}\in \Omega^{+}\), where \(v_{i}= \delta \) if \(i\in \{ i_{1},i_{2},\ldots,i_{k_{0}} \} \) and \(v_{i}=0\) if \(i\notin \{ i_{1},i_{2},\ldots, i_{k_{0}} \} \). By (2) and (4),

$$\begin{aligned} I(v) =&\frac{1}{2}\delta^{2}\overline{B}(i_{1},i_{2}, \ldots,i_{k_{0}})- \sum_{i=1}^{n} \int_{0}^{v_{i}}f_{i}(s)\,ds \\ =&\frac{1}{2}\delta^{2}\overline{B}(i_{1},i_{2}, \ldots,i_{k_{0}})- \sum_{i\in \{ i_{1},i_{2},\ldots,i_{k_{0}}\} } \int_{0}^{ \delta }f_{i}(s)\,ds \\ \geq &\frac{1}{2}\delta^{2}\overline{B}(i_{1},i_{2}, \ldots,i_{k_{0}}) -\sum_{i\in \{ i_{1},i_{2},\ldots,i_{k_{0}}\}} \beta \delta^{2} \\ \geq & \biggl( \frac{1}{2}\overline{B}(i_{1},i_{2}, \ldots,i_{k_{0}})-k _{0}\beta \biggr) \delta^{2} \\ >&0. \end{aligned}$$

This shows that \(c_{0}=\sup_{u\in \Omega ^{+}}I(u)>0\).

By the definition of \(c_{0}\), there is a sequence \(\{ u^{ ( i ) } \} \subset \Omega^{+}\) such that \(\lim_{i\rightarrow \infty } I ( u^{ ( i ) } ) =c_{0}\). It is easy to see that there is a positive number p such that for \(i\geqslant 1\),

$$ -p\leq I \bigl( u^{ ( i ) } \bigr) . $$
(5)

By (3) and (5),

$$ -p\leq I \bigl( u^{ ( i ) } \bigr) \leq \biggl( \frac{1}{2} \lambda_{\max}-a_{1} \biggr) \bigl\Vert u^{ ( i ) }\bigr\Vert ^{2}+na. $$

It follows that

$$ \bigl\Vert u^{ ( i ) }\bigr\Vert ^{2}\leq \biggl( a_{1}- \frac{1}{2}\lambda_{\max} \biggr) ^{-1} ( na+p ) . $$

Thus \(\{ u^{ ( i ) } \} \) is a bounded sequence in \(\Omega^{+}\). Consequently, it has a convergent subsequence \(\{ u^{ ( i_{j} ) } \} \). Let \(\{ u^{ ( i _{j} ) } \} \) tend to \(u^{ ( 0 ) }= ( u_{1} ^{ ( 0 ) },u_{2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) } ) ^{T}\) as \(j\rightarrow \infty \). Then \(u^{ ( 0 ) } \in \Omega^{+}\) and \(I ( u^{ ( 0 ) } ) =c_{0}\). Note that \(I ( 0 ) =0\) and \(c_{0}>0\). Clearly, there exists \(i_{0}\in \{ 1,2,\ldots,n \} \) such that \(u_{i_{0}}^{ ( 0 ) }>0\).

Claim 2

\(u^{ ( 0 ) }\) is a positive critical point of function I.

Consider first the case \(i_{0}\in \{ 3,4,\ldots,n-1 \} \). We begin by proving that \(u_{i_{0}+1}^{ ( 0 ) },u_{i_{0}+2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) }\) are greater than zero. Assume that \(u_{i_{0}+1}^{ ( 0 ) }=0\). Then

$$ c_{0}=I \bigl( u^{ ( 0 ) } \bigr) =\frac{1}{2}\sum _{i\neq i_{0}+1,j\neq i_{0}+1} b _{ij}u_{i}^{ ( 0 ) }u_{j}^{ ( 0 ) }- \sum_{i\neq i_{0}+1} \int_{0}^{u_{i}^{ ( 0 ) }}f_{i}(s)\,ds. $$
(6)

For \(r\geq 0\), let

$$ I_{1} ( r ) =\frac{1}{2}b_{i_{0}+1,i_{0}+1}r^{2}+b_{i_{0}+1,i _{0}}u_{i_{0}}^{ ( 0 ) }r+r \sum_{j\neq i_{0},j\neq i_{0}+1} b _{i_{0}+1,j}u_{j}^{ ( 0 ) }- \int_{0}^{r}f_{i_{0}+1}(s)\,ds. $$
(7)

We assert that there exists a positive constant \(r_{1}\) such that \(I_{1} ( r_{1} ) >0\). Since \(\max \{ b_{i_{0}+1,i_{0}}, b _{i_{0}+1,i_{0}+1} \} >0\), we consider two cases. If \(b_{i_{0}+1,i_{0}}>0\), then

$$ \lim_{r\rightarrow 0^{+}}\frac{\frac{1}{2}b_{i_{0}+1,i _{0}+1}r^{2}+b_{i_{0}+1,i_{0}}u_{i_{0}}^{ ( 0 ) }r}{r^{2}}=+ \infty . $$
(8)

By (7), (8) and (\(\mathrm{G}_{2}\)), the assertion holds. If \(b_{i_{0}+1,i_{0}+1}>0\), note that \(b_{ij}\geqslant 0\) for \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\); then

$$ I_{1} ( r ) \geq \frac{1}{2}b_{i_{0}+1,i_{0}+1}r^{2}- \int _{0}^{r}f_{i_{0}+1}(s)\,ds. $$
(9)

By (9) and (\(\mathrm{G}_{2}\)), the assertion holds. In conclusion, there exists a positive constant \(r_{1}\) such that \(I_{1} ( r_{1} ) >0\).

Choosing

$$ \overline{u}^{ ( 0 ) }= \bigl( u_{1}^{ ( 0 ) },\ldots,u _{i_{0}}^{ ( 0 ) },r_{1},u_{i_{0}+2}^{ ( 0 ) }, \ldots,u _{n}^{ ( 0 ) } \bigr) ^{T}, $$

we see that \(\overline{u}^{ ( 0 ) }\in \Omega^{+}\) and

$$ I \bigl( \overline{u}^{ ( 0 ) } \bigr) =I \bigl( u^{ ( 0 ) } \bigr) +I_{1} ( r_{1} ) >I \bigl( u^{ ( 0 ) } \bigr) =c_{0}, $$

which is contrary to the definition of \(c_{0}\). Thus \(u_{i_{0}+1}^{ ( 0 ) }>0\). Similarly, \(u_{i_{0}+2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) }\) are greater than zero.

It can be deduced that \(u_{i_{0}-1}^{ ( 0 ) }>0\) by replacing \(i_{0}+1\) with \(i_{0}-1\) in the procedure above. Similarly, \(u_{2}^{ ( 0 ) },\ldots,u_{i_{0}-2}^{ ( 0 ) }\) are greater than zero.

Finally, we prove that \(u_{1}^{ ( 0 ) }>0\). Assume that \(u_{1}^{ ( 0 ) }=0\). Since \(u_{2}^{ ( 0 ) }>0\), we see that

$$ c_{0}=I \bigl( u^{ ( 0 ) } \bigr) =\frac{1}{2}\sum _{i\neq 1,j\neq 1} b _{ij}u_{i}^{ ( 0 ) }u_{j}^{ ( 0 ) }- \sum_{i\neq 1} \int_{0}^{u_{i}^{ ( 0 ) }}f_{i}(s)\,ds\mbox{.} $$
(10)

For \(r\geq 0\), let

$$ I_{2} ( r ) =\frac{1}{2}b_{11}r^{2}+b_{12}u_{2}^{ ( 0 ) }r+r \sum_{j\neq 1,j\neq 2} b_{1,j}u_{j}^{ ( 0 ) }- \int_{0}^{r}f_{1}(s)\,ds. $$
(11)

We assert that there exists a positive constant \(r_{2}\) such that \(I_{2} ( r_{2} ) >0\). Since \(\max \{ b_{11},b_{12} \} >0\), we consider two cases. If \(b_{12}>0\), then

$$ \lim_{r\rightarrow 0^{+}}\frac{\frac{1}{2}b_{11}r^{2}+b_{12}u_{2}^{ ( 0 ) }r}{r^{2}}=+\infty . $$
(12)

By (11), (12) and (\(\mathrm{G}_{2}\)), the assertion holds. If \(b_{11}>0\), note that \(b_{ij}\geqslant 0\) for \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\); then

$$\begin{aligned} I_{2} ( r ) =&\frac{1}{2}b_{11}r^{2}+b_{12}u_{2}^{ ( 0 ) }r+r \sum_{j\neq 1,j\neq 2} b_{1,j}u_{j}^{ ( 0 ) }- \int_{0}^{r}f_{1}(s)\,ds \\ \geq &\frac{1}{2}b_{11}r^{2}- \int_{0}^{r}f_{1}(s)\,ds. \end{aligned}$$

By (\(\mathrm{G}_{2}\)), the assertion holds. In conclusion, there exists a positive constant \(r_{2}\) such that \(I_{2} ( r_{2} ) >0\).

Choosing \(\overline{u}^{ ( 0 ) }= ( r_{2},u_{2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) } ) ^{T}\), we have \(\overline{u}^{ ( 0 ) }\in \Omega^{+}\) and

$$ I \bigl( \overline{u}^{ ( 0 ) } \bigr) =I \bigl( u^{ ( 0 ) } \bigr) +I_{2} ( r_{2} ) >I \bigl( u^{ ( 0 ) } \bigr) =c_{0}, $$

which is contrary to the definition of \(c_{0}\). Thus \(u_{1}^{ ( 0 ) }>0\).

The remaining cases \(i_{0}\in \{ 1,2,n \} \) are handled analogously. Therefore \(u^{ ( 0 ) }\) belongs to the interior of \(\Omega^{+}\) and \(I ( u^{ ( 0 ) } ) =\sup_{u\in \Omega ^{+}} I(u)\). It follows that \(u^{ ( 0 ) }\) is a positive critical point of I, that is, \(u^{ ( 0 ) }= ( u_{1}^{ ( 0 ) },u_{2}^{ ( 0 ) },\ldots,u_{n} ^{ ( 0 ) } ) ^{T}\) is a positive solution of (1).

The proof is completed. □
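The proof above is nonconstructive, but its variational idea can be illustrated numerically: since the positive solution is obtained by maximizing I over \(\Omega^{+}\), a projected gradient ascent is a natural numerical analogue. The following Python/NumPy sketch is not the method of the paper; it uses the data of Example 3.1 below with \(r=3\), and the step size, iteration count and starting point are ad hoc choices. In this instance the iteration converges to the positive solution \((1,1,1)^{T}\).

```python
import numpy as np

# Illustrative sketch (not the paper's method): maximize I over Omega^+ by
# projected gradient ascent, with B and f_k(s) = s^3 from Example 3.1 (r = 3).
B = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])
f = lambda u: u**3

u = np.array([0.5, 0.6, 0.7])        # an arbitrary starting point in Omega^+
eta = 0.05                           # step size (chosen ad hoc)
for _ in range(2000):
    # gradient of I is B u - f(u); step uphill, then project back onto Omega^+
    u = np.maximum(u + eta * (B @ u - f(u)), 0.0)

print(u)                              # close to (1, 1, 1) in this run
print(np.linalg.norm(B @ u - f(u)))   # residual of (1): close to 0
```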

Theorem 3.2

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix which satisfies the conditions in Theorem  3.1. Assume furthermore that \(f_{k}\in C( ( -\infty,0 ] ,R ) \) for each k and

(\(\mathrm{G}_{1}^{\prime }\)):

There exist constants \(a_{1}>\frac{1}{2} \lambda_{\max }\), \(a_{2}>0\) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\geq a_{1}z^{2}-a_{2}\quad \textit{for } z\leq -M\textit{ and } k\in \{ 1,2,\ldots,n \} . $$
(\(\mathrm{G}_{2}^{\prime }\)):

For each \(k\in \{ 1,2,\ldots,n \} \),

$$ \lim_{z\rightarrow 0^{-}}\frac{\int_{0}^{z}f_{k}(s)\,ds}{z^{2}}=0. $$

Then system (1) has at least one negative real solution.

Consider the function \(\overline{I}:\Omega^{-}\rightarrow R\) defined by

$$ \overline{I}(v)=\frac{1}{2}v^{T}Bv-\sum _{k=1}^{n} \int_{0}^{v_{k}}f _{k}(s)\,ds. $$
(13)

Let \(u=-v\); then finding a critical point of \(\overline{I}\) in the interior of \(\Omega^{-}\) is equivalent to finding a critical point of the function J defined by (14) in the interior of \(\Omega^{+}\), where

$$ J(u)=\frac{1}{2}u^{T}Bu-\sum_{k=1}^{n} \int_{0}^{-u_{k}}f_{k}(s)\,ds. $$
(14)

Similar to the proof of Theorem 3.1, it can be proved that J has a critical point in the interior of \(\Omega^{+}\); the corresponding vector \(v=-u\) then lies in the interior of \(\Omega^{-}\) and is a negative solution of (1), so the conclusion of Theorem 3.2 is true.

According to Theorems 3.1 and 3.2, the following can be directly obtained.

Corollary 3.1

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix which satisfies conditions in Theorem  3.1. Assume furthermore that \(f_{k}\in C ( R,R ) \) for each k and

(\(\mathrm{G}_{1}^{\prime\prime }\)):

There exist constants \(a_{1}> \frac{1}{2}\lambda_{\max }\), \(a_{2}>0\) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\geq a_{1}z^{2}-a_{2}\quad \textit{for }\vert z\vert \geq M\textit{ and }k\in \{ 1,2,\ldots,n \} . $$
(\(\mathrm{G}_{2}^{\prime\prime }\)):

For each \(k\in \{ 1,2,\ldots,n \} \),

$$ \lim_{z\rightarrow 0} \frac{\int_{0}^{z}f_{k}(s)\,ds}{z^{2}}=0. $$

Then system (1) has at least one positive real solution and one negative real solution.

Example 3.1

The system

$$ \left( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array}\displaystyle \right) \left( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\displaystyle \right) = \left( \textstyle\begin{array}{c} x_{1}^{r} \\ x_{2}^{r} \\ x_{3}^{r} \end{array}\displaystyle \right),\quad r\in (1,+\infty ), $$
(15)

is one of the form (1), where

$$ B= \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array}\displaystyle \right ),\quad n=3, $$

and \(f_{k}(s)=s^{r}\), \(k=1,2,3\). Since the main diagonal elements of B are negative, B is neither a non-negative matrix nor a positive definite matrix; it is, however, a real symmetric matrix with \(b_{ij}>0\) for \(i,j=1,2,3\) with \(i\neq j\), \(\max \{ b_{11},b_{12} \} =\max \{ b_{21},b_{22} \} =\max \{ b_{32},b_{33} \} =1\), and the sum of all its elements is 3. The eigenvalues of B are 1, −2 and −2. Let \(a_{1}=1\), \(a_{2}=1\) and \(M=(r+1)^{\frac{1}{r-1}}\); then for any \(z\geq M\) and \(k\in \{ 1,2,3 \} \), we have

$$\begin{aligned} \int_{0}^{z}f_{k}(s)\,ds =& \frac{z^{r+1}}{r+1} \\ =&\frac{z^{r-1}}{r+1}\times z^{2} \\ \geq &z^{2} \\ \geq &a_{1}z^{2}-a_{2}. \end{aligned}$$

For each \(k\in \{ 1,2,3 \} \),

$$ \lim_{z\rightarrow 0^{+}}\frac{\int_{0}^{z}f_{k}(s)\,ds}{z ^{2}}=\lim_{z\rightarrow 0^{+}} \frac{\frac{z^{r+1}}{r+1}}{z ^{2}}=0. $$

That is, (15) satisfies all the conditions of Theorem 3.1. Thus, (15) has at least one positive solution.

Indeed, \((1,1,1)^{T}\) is the unique positive solution of (15). Let \((x_{1},x_{2},x_{3})^{T}\) be any positive solution of (15); then

$$ \left \{ \textstyle\begin{array}{l@{\qquad}l} -x_{1}+x_{2}+x_{3}=x_{1}^{r}, \\ x_{1}-x_{2}+x_{3}=x_{2}^{r}, \\ x_{1}+x_{2}-x_{3}=x_{3}^{r}, \end{array}\displaystyle \right . $$
(16)

We assert that \(x_{1}=x_{2}=x_{3}\). By the first and second equations of (16) and the mean value theorem,

$$ -2x_{1}+2x_{2}=x_{1}^{r}-x_{2}^{r}=r \theta_{1}^{r-1}(x_{1}-x_{2}), $$
(17)

where \(\theta_{1}\) is a certain positive number between \(x_{1}\) and \(x_{2}\). If \(x_{1}\neq x_{2}\), then (17) gives \(r\theta_{1}^{r-1}=-2\), which contradicts the fact that r and \(\theta_{1}\) are positive. Similarly, by the second and third equations of (16) and the mean value theorem, \(x_{2}=x_{3}\). In conclusion, \(x_{1}=x_{2}=x_{3}\). Let \(x_{1}=x_{2}=x _{3}=t\). By (15), \(t=1\). Thus \((1,1,1)^{T}\) is the unique positive solution of (15) and Theorem 3.1 is sharp.
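These computations are easy to confirm numerically. The following Python/NumPy sketch (a verification aid, not part of the paper) checks that \((1,1,1)^{T}\) satisfies (15) for several sample values of r; the chosen exponents are arbitrary.

```python
import numpy as np

# Verification sketch for Example 3.1: (1,1,1)^T solves (15) for each sample r,
# since every row of B sums to 1 = 1^r.
B = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])
x = np.ones(3)
for r in (1.5, 2.0, 3.0, 7.0):              # sample exponents r in (1, +infinity)
    print(r, np.allclose(B @ x, x**r))      # True for every r
print(np.linalg.eigvalsh(B))                # approximately [-2, -2, 1]
```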

Example 3.2

Consider system (15) with \(r=3\). It is easy to verify that (15) then satisfies all the conditions of Corollary 3.1. Thus, by Corollary 3.1, the system has at least one positive solution and at least one negative solution. From the discussion in Example 3.1, \((1,1,1)^{T}\) is the unique positive solution. Since \(r=3\), the same argument shows that \((-1,-1,-1)^{T}\) is the unique negative solution. This example shows that Corollary 3.1 is sharp.
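A quick numerical check of both signs for \(r=3\) (a verification sketch, not part of the paper):

```python
import numpy as np

# Verification sketch for Example 3.2 (r = 3): both (1,1,1)^T and (-1,-1,-1)^T
# satisfy (15), in line with Corollary 3.1.
B = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])
for x in (np.ones(3), -np.ones(3)):
    print(np.allclose(B @ x, x**3))   # True, True
```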

Example 3.3

The system

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & \frac{3}{2} & 0 & 0 \\ \frac{3}{2} & -2 & \frac{13}{4} & 0 \\ 0 & \frac{13}{4} & -2 & \frac{3}{2} \\ 0 & 0 & \frac{3}{2} & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\displaystyle \right ) =\left ( \textstyle\begin{array}{c} x_{1}^{3} \\ \frac{1}{2}x_{2}^{3} \\ \frac{1}{2}x_{3}^{3} \\ x_{4}^{3} \end{array}\displaystyle \right ) $$
(18)

is one of the form (1), where

$$ B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & \frac{3}{2} & 0 & 0 \\ \frac{3}{2} & -2 & \frac{13}{4} & 0 \\ 0 & \frac{13}{4} & -2 & \frac{3}{2} \\ 0 & 0 & \frac{3}{2} & -2 \end{array}\displaystyle \right ),\quad n=4, $$

\(f_{1}(s)=f_{4}(s)=s^{3}\), and \(f_{2}(s)=f_{3}(s)=\frac{1}{2}s^{3}\). Since the main diagonal elements of B are negative, B is neither a non-negative matrix nor a positive definite matrix; it is, however, a real symmetric matrix with \(b_{ij}\geq 0\) for \(i,j=1,2,3,4\) with \(i\neq j\), \(\max \{ b_{11},b_{12} \} =\max \{ b_{21},b_{22} \} =\max \{ b_{43},b_{44} \} = \frac{3}{2}\), \(\max \{ b_{32},b_{33} \} =\frac{13}{4}\), and the sum of all its elements is \(\frac{9}{2}\). The eigenvalues of B are \(-\frac{3}{8}+\frac{1}{8}\sqrt{313}\), \(-\frac{3}{8}-\frac{1}{8} \sqrt{313}\), \(-\frac{29}{8}+\frac{1}{8}\sqrt{313}\) and \(- \frac{29}{8}-\frac{1}{8}\sqrt{313}\). Let \(a_{1}=1\), \(a_{2}=3\) and \(M=3\); it is easy to verify that (18) satisfies all the conditions of Corollary 3.1. Thus, (18) has at least one positive solution and at least one negative solution. Indeed, all the solutions of (18) are \((0,0,0,0)^{T}\), \((1,2,2,1)^{T}\) and \((-1,-2,-2,-1)^{T}\). This example shows that Corollary 3.1 is sharp.
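The stated solutions and eigenvalues can be confirmed numerically (a verification sketch, not part of the paper):

```python
import numpy as np

# Verification sketch for Example 3.3: the stated vectors solve (18), and the
# extreme eigenvalues of B match (-3 +- sqrt(313))/8 and (-29 +- sqrt(313))/8.
B = np.array([[-2.0, 1.5,  0.0,  0.0],
              [ 1.5, -2.0, 3.25, 0.0],
              [ 0.0, 3.25, -2.0, 1.5],
              [ 0.0, 0.0,  1.5, -2.0]])
c = np.array([1.0, 0.5, 0.5, 1.0])          # coefficients in f_k(s) = c_k * s^3

solutions = [np.zeros(4),
             np.array([ 1.0,  2.0,  2.0,  1.0]),
             np.array([-1.0, -2.0, -2.0, -1.0])]
for x in solutions:
    print(np.allclose(B @ x, c * x**3))      # True for all three

print(np.linalg.eigvalsh(B))                 # approx [-5.84, -2.59, -1.41, 1.84]
print((-3 + np.sqrt(313)) / 8)               # lambda_max, approx 1.8365
```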

4 Sublinear case

In this section, we are concerned with \(f_{1},\ldots,f_{n}\) that are ‘sublinear’ near 0.

Theorem 4.1

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix with \(b_{ij}\leq 0\) for all \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\). Assume furthermore that \(f_{k}\in C ( [ 0,+\infty ) ,R ) \) for each k and

(\(\mathrm{G}_{3}\)):

There exist constants \(a_{1}<\frac{1}{2}\lambda_{\min }\), \(a_{2}>0 \) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\leq a_{1}z^{2}+a_{2}\quad \textit{for } z>M \textit{ and } k\in \{ 1,2,\ldots,n \} . $$
(\(\mathrm{G}_{4}\)):

For each \(k\in \{ 1,2,\ldots,n \} \),

$$ \lim_{z\rightarrow 0^{+}}\frac{\int_{0}^{z}f_{k}(s)\,ds}{z ^{2}}=+\infty . $$

Then system (1) has at least one positive real solution.

Proof

Let I be defined by (2). First, we prove that I is bounded from below in \(\Omega^{+}\). By (\(\mathrm{G}_{3}\)), if

$$ a=\max_{1\leq k\leq n} \biggl\{ \biggl\vert \int_{0}^{z}f_{k}(s)\,ds-a _{1}z^{2}-a_{2}\biggr\vert +a_{2}:0 \leq z\leq M \biggr\} , $$

then for any \(z\geq 0\) and \(k\in \{ 1,2,\ldots,n \} \),

$$ \int_{0}^{z}f_{k}(s)\,ds\leq a_{1}z^{2}+a. $$

For any \(u\in \Omega^{+}\),

$$ I(u)\geq \frac{1}{2}\lambda_{\min }\Vert u\Vert ^{2}-a_{1}\sum_{k=1}^{n}u_{k}^{2}-na= \biggl( \frac{1}{2}\lambda_{\min }-a_{1} \biggr) \Vert u\Vert ^{2}-na. $$
(19)

Since \(a_{1}<\frac{1}{2}\lambda_{\min }\), for any \(u\in \Omega^{+}\),

$$ I(u)\geq -na. $$

That is, I is bounded from below in \(\Omega^{+}\).

Claim 3

Let \(c_{0}=\inf_{u\in \Omega ^{+}}I(u)\), then \(c_{0}<0\).

By (\(\mathrm{G}_{4}\)), there exist constants \(\delta >0\) and \(\beta > \frac{1}{2n}\sum_{i,j=1}^{n} b_{ij}\) such that for any \(0\leq z\leq \delta \),

$$ \int_{0}^{z}f_{k}(s)\,ds\geq \beta z^{2},\quad k=1,2,\ldots,n. $$
(20)

Choosing \(v=(\delta,\delta,\ldots,\delta )^{T}\in \Omega^{+}\), by (2) and (20),

$$\begin{aligned} I(v) =&\frac{1}{2}\delta^{2}\sum_{i,j=1}^{n} b _{ij}-\sum_{i=1}^{n} \int_{0}^{\delta }f_{i}(s)\,ds \\ \leq &\frac{1}{2}\delta^{2}\sum_{i,j=1}^{n} b _{ij}-\sum_{i=1}^{n} \beta \delta^{2} \\ =& \Biggl( \frac{1}{2}\sum_{i,j=1}^{n} b_{ij}- \sum_{i=1}^{n} \beta \Biggr) \delta^{2} \\ < &0. \end{aligned}$$

This shows that \(c_{0}=\inf_{u\in \Omega ^{+}}I(u)<0\).

By the definition of \(c_{0}\), there is a sequence \(\{ u^{ ( i ) } \} \subset \Omega^{+}\) such that \(\lim_{i\rightarrow \infty } I ( u^{ ( i ) } ) =c_{0}\). It is easy to see that there is a positive number p such that for any \(i\geq 1\),

$$ I \bigl( u^{ ( i ) } \bigr) \leq p. $$
(21)

By (19) and (21),

$$ \biggl( \frac{1}{2}\lambda_{\min }-a_{1} \biggr) \bigl\Vert u^{ ( i ) }\bigr\Vert ^{2}-na\leq I \bigl( u^{ ( i ) } \bigr) \leq p. $$

It follows that

$$ \bigl\Vert u^{ ( i ) }\bigr\Vert ^{2}\leq \biggl( \frac{1}{2} \lambda_{\min }-a_{1} \biggr) ^{-1} ( na+p ) . $$

Thus \(\{ u^{ ( i ) } \} \) is a bounded sequence in \(\Omega^{+}\). Consequently, it has a convergent subsequence \(\{ u^{ ( i_{j} ) } \} \). Let \(\{ u^{ ( i _{j} ) } \} \) tend to \(u^{ ( 0 ) }= ( u_{1} ^{ ( 0 ) },u_{2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) } ) ^{T}\) as \(j\rightarrow \infty \). Then \(u^{ ( 0 ) } \in \Omega^{+}\) and \(I ( u^{ ( 0 ) } ) =c_{0}\). Note that \(I ( 0 ) =0\) and \(c_{0}<0\), therefore \(u^{ ( 0 ) }\neq 0\).

Claim 4

\(u^{ ( 0 ) }\) is a zero-free critical point of functional I.

Assume that there exists \(i_{0}\in \{ 1,2,\ldots,n \} \) such that \(u_{i_{0}}^{ ( 0 ) }=0\); then

$$ c_{0}=I \bigl( u^{ ( 0 ) } \bigr) =\frac{1}{2}\sum _{i\neq i_{0},j\neq i_{0}}b _{ij}u_{i}^{ ( 0 ) }u_{j}^{ ( 0 ) }- \sum_{i\neq i_{0}} \int_{0}^{u_{i}^{ ( 0 ) }}f_{i}(s)\,ds. $$
(22)

For \(r\geq 0\), let

$$ I_{3} ( r ) =\frac{1}{2}b_{i_{0},i_{0}}r^{2}+r\sum _{j\neq i_{0}} b _{i_{0},j}u_{j}^{ ( 0 ) }- \int_{0}^{r}f_{i_{0}}(s)\,ds. $$

Since \(b_{i_{0},j}\leq 0\) for \(j\in \{ 1,2,\ldots,n \} \) with \(j\neq i_{0}\), by (\(\mathrm{G}_{4}\)),

$$ \lim_{r\rightarrow 0^{+}}\frac{I_{3} ( r ) }{r ^{2}}=\lim_{r\rightarrow 0^{+}} \biggl( \frac{1}{2}b_{i _{0},i_{0}}+\frac{\sum_{j\neq i_{0}}b_{i_{0},j}u_{j}^{ ( 0 ) }}{r}-\frac{\int_{0}^{r}f_{i_{0}}(s)\,ds}{r^{2}} \biggr) =-\infty, $$
(23)

then there exists a positive constant \(r_{3}\) such that \(I_{3} ( r _{3} ) <0\).

Choosing \(\overline{u}^{ ( 0 ) }= ( u_{1}^{ ( 0 ) },\ldots,u_{i_{0}-1}^{ ( 0 ) },r_{3},u_{i_{0}+1}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) } ) ^{T}\), we have \(\overline{u} ^{ ( 0 ) }\in \Omega^{+}\) and

$$ I \bigl( \overline{u}^{ ( 0 ) } \bigr) =I \bigl( u^{ ( 0 ) } \bigr) +I_{3} ( r_{3} ) < I \bigl( u^{ ( 0 ) } \bigr) =c_{0}, $$

which is contrary to the definition of \(c_{0}\). Thus \(u^{ ( 0 ) }\) belongs to the interior of \(\Omega^{+}\) and \(I ( u^{ ( 0 ) } ) =\inf_{u\in \Omega ^{+}}I(u)\). It follows that \(u^{ ( 0 ) }\) is a positive critical point of I, that is, \(u^{ ( 0 ) }= ( u_{1}^{ ( 0 ) },u_{2}^{ ( 0 ) },\ldots,u_{n}^{ ( 0 ) } ) ^{T}\) is a positive solution of (1).

The proof is completed. □
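As in the superlinear case, the argument is nonconstructive, but here I is minimized over \(\Omega^{+}\), so a projected gradient descent is the natural numerical analogue. The following Python/NumPy sketch is not the method of the paper; it uses the data of Example 4.2 below, and the starting point, step size and iteration count are ad hoc choices. In this instance the iteration converges to the positive solution \((1,1,1)^{T}\).

```python
import numpy as np

# Illustrative sketch (not the paper's method): minimize I over Omega^+ by
# projected gradient descent, with B and f_k(s) = s^(1/3) - 5 s^3 as in
# Example 4.2 below.
B = np.array([[-2.0, -1.0, -1.0],
              [-1.0, -2.0, -1.0],
              [-1.0, -1.0, -2.0]])
f = lambda u: np.cbrt(u) - 5 * u**3

u = np.full(3, 0.5)                  # an arbitrary starting point in Omega^+
eta = 0.05                           # step size (chosen ad hoc)
for _ in range(2000):
    # gradient of I is B u - f(u); step downhill, then project back onto Omega^+
    u = np.maximum(u - eta * (B @ u - f(u)), 0.0)

print(u)                              # close to (1, 1, 1) in this run
print(np.linalg.norm(B @ u - f(u)))   # residual of (1): close to 0
```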

Theorem 4.2

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix with \(b_{ij}\leq 0\) for all \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\). Assume furthermore that \(f_{k}\in C( (-\infty,0 ],R ) \) for each k and

(\(\mathrm{G}_{3}^{\prime }\)):

There exist constants \(a_{1}<\frac{1}{2} \lambda_{\min }\), \(a_{2}>0\) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\leq a_{1}z^{2}+a_{2}\quad \textit{for } z< -M \textit{ and } k\in \{ 1,2,\ldots,n \} . $$
(\(\mathrm{G}^{\prime}_{4}\)):

For each \(k\in \{ 1,2,\ldots,n \} \),

$$ \lim_{z\rightarrow 0^{-}}\frac{\int_{0}^{z}f_{k}(s)\,ds}{z ^{2}}=+\infty . $$

Then system (1) has at least one negative real solution.

Let J be defined by (14). Similar to the proof of Theorem 4.1, it can be proved that J has a critical point in the interior of \(\Omega^{+}\); the corresponding vector \(v=-u\) then lies in the interior of \(\Omega^{-}\) and is a negative solution of (1), so the conclusion of Theorem 4.2 is true.

According to Theorems 4.1 and 4.2, the following can be directly obtained.

Corollary 4.1

Let \(B=(b_{ij})_{n\times n}\) be a real symmetric matrix with \(b_{ij}\leq 0\) for all \(i,j\in \{ 1,2,\ldots,n \} \) with \(i\neq j\). Assume furthermore that \(f_{k}\in C ( R,R ) \) for each k and

(\(\mathrm{G}^{\prime\prime}_{3}\)):

There exist constants \(a_{1}< \frac{1}{2}\lambda_{\min }\), \(a_{2}>0\) and \(M>0\) such that

$$ \int_{0}^{z}f_{k}(s)\,ds\leq a_{1}z^{2}+a_{2}\quad \textit{for }\vert z\vert >M \textit{ and } k\in \{ 1,2,\ldots,n \}. $$
(\(\mathrm{G}^{\prime\prime}_{4}\)):

For each \(k\in \{ 1,2,\ldots,n \}\),

$$ \lim_{z\rightarrow 0} \frac{\int_{0}^{z}f_{k}(s)\,ds}{z^{2}}=+\infty. $$

Then system (1) has at least one positive real solution and at least one negative real solution.

Example 4.1

The system

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\displaystyle \right ) = \left ( \textstyle\begin{array}{c} f_{1}(x_{1}) \\ f_{2}(x_{2}) \\ f_{3}(x_{3}) \end{array}\displaystyle \right ), $$
(24)

where

$$ f_{k}(s)= \textstyle\begin{cases} -s-2,&s\leq -1, \\ s^{\frac{1}{3}},&\vert s\vert < 1, \\ -s+2,&s\geq 1, \end{cases}\displaystyle \quad \text{for } k=1,2,3. $$

(24) is one of the form (1), where

$$ B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{array}\displaystyle \right ),\quad n=3. $$

B is not a positive definite matrix but a positive semidefinite matrix. The eigenvalues of B are \(0, 3\) and 3; therefore \(\lambda_{\min }=0\). Let \(a_{1}=-\frac{1}{4}\), \(a_{2}=4\) and \(M=8\); it is easy to verify that (24) satisfies all the conditions of Corollary 4.1. Thus, (24) has at least one positive solution and at least one negative solution.

Indeed, let \((x_{1},x_{2},x_{3})^{T}\) be a positive solution of (24); we assert that \(\min \{ x_{1},x_{2},x_{3} \} \geq 1\).

Without loss of generality, let \(\min \{ x_{1},x_{2},x_{3} \} =x_{1}\). If \(0<\min \{ x_{1},x_{2},x_{3} \} <1\), then by (24),

$$ 2x_{1}-x_{2}-x_{3}=x_{1}^{\frac{1}{3}}. $$

It follows that \(x_{1}>2x_{1}-x_{1}^{\frac{1}{3}}=x_{2}+x_{3}\geq 2x_{1}\), which is contrary to \(\min \{ x_{1},x_{2},x_{3} \} =x_{1}>0\). This shows \(\min \{ x_{1},x_{2},x_{3} \} \geq 1\).

Since \(x_{1},x_{2},x_{3}\geq 1\), any positive solution of (24) satisfies

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\displaystyle \right ) =\left ( \textstyle\begin{array}{c} -x_{1}+2 \\ -x_{2}+2 \\ -x_{3}+2 \end{array}\displaystyle \right ). $$
(25)

It follows that

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 3 & -1 & -1 \\ -1 & 3 & -1 \\ -1 & -1 & 3 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\displaystyle \right ) =\left ( \textstyle\begin{array}{c} 2 \\ 2 \\ 2 \end{array}\displaystyle \right ). $$
(26)

The unique solution of (26) is \((2,2,2)^{T}\). Thus \((2,2,2)^{T}\) is the unique positive solution of (24). Similarly, \((-2,-2,-2)^{T}\) is the unique negative solution of (24). This example shows that Corollary 4.1 is sharp.
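The computations above can be confirmed numerically (a verification sketch, not part of the paper); the piecewise nonlinearity is implemented directly.

```python
import numpy as np

# Verification sketch for Example 4.1: with the piecewise nonlinearity f_k,
# both (2,2,2)^T and (-2,-2,-2)^T solve (24).
B = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])

def f(s):
    # f_k(s) = -s-2 for s <= -1, s^(1/3) for |s| < 1, -s+2 for s >= 1
    return np.where(s <= -1, -s - 2,
           np.where(s >= 1, -s + 2, np.cbrt(s)))

for x in (np.full(3, 2.0), np.full(3, -2.0)):
    print(np.allclose(B @ x, f(x)))   # True, True
```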

Example 4.2

The system

$$ \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -2 & -1 & -1 \\ -1 & -2 & -1 \\ -1 & -1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\displaystyle \right ) =\left ( \textstyle\begin{array}{c} f_{1}(x_{1}) \\ f_{2}(x_{2}) \\ f_{3}(x_{3}) \end{array}\displaystyle \right ), $$
(27)

where

$$ f_{k}(s)=s^{\frac{1}{3}}-5s^{3}\quad \mbox{for }k=1,2,3. $$

(27) is one of the form (1), where

$$ B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -2 & -1 & -1 \\ -1 & -2 & -1 \\ -1 & -1 & -2 \end{array}\displaystyle \right ),\quad n=3. $$

B is a negative definite matrix. The eigenvalues of B are \(-4,-1\) and −1. Let \(a_{1}=-\frac{17}{4}\), \(a_{2}>0\) and \(M=2\); it is easy to verify that system (27) satisfies all the conditions of Corollary 4.1. Thus, (27) has at least one positive solution and at least one negative solution. Indeed, all the solutions of (27) are \(( 0,0,0 ) ^{T}\), \((1,1,1)^{T}\) and \((-1,-1,-1)^{T}\). This example shows that Corollary 4.1 is sharp.
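Again, the stated solutions can be confirmed numerically (a verification sketch, not part of the paper):

```python
import numpy as np

# Verification sketch for Example 4.2: the three stated vectors solve (27),
# where f_k(s) = s^(1/3) - 5 s^3, and B is negative definite.
B = np.array([[-2.0, -1.0, -1.0],
              [-1.0, -2.0, -1.0],
              [-1.0, -1.0, -2.0]])
f = lambda s: np.cbrt(s) - 5 * s**3

for x in (np.zeros(3), np.ones(3), -np.ones(3)):
    print(np.allclose(B @ x, f(x)))   # True for all three

print(np.linalg.eigvalsh(B))          # [-4, -1, -1]: B is negative definite
```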