1 Introduction

Equilibrium problems, first studied by Blum and Oettli [1], provide a unified framework for fixed point problems, variational inequalities, complementarity problems, and optimization problems. It is well known that the vector equilibrium problem is a vital extension of the equilibrium problem, which contains vector variational inequalities, vector complementarity problems, and vector optimization problems as special cases. In the past decades, various kinds of vector equilibrium problems and their applications have been introduced and studied; see [2–9] and the references therein. Recently, Chang et al. [10] and Kumam et al. [5] studied a generalized mixed equilibrium problem,

$$F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\geq0, \quad\forall y\in K. $$

It is very general in the sense that it includes fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, and equilibrium problems as special cases.

Error bounds, which play a critical role in algorithm design, can be used to measure how far an approximate solution is from the solution set and to analyze the convergence rates of various methods. Recently, various error bounds have been presented for variational inequalities in [11–18]. Error bound results have been established for the weak vector variational inequality (WVVI) in [12–15, 19]. Xu and Li [15] obtained error bounds for a weak vector variational inequality with cone constraints by a method of image space analysis. By using a scalarization approach of Konnov [20], Li and Mastroeni [13] established error bounds for two kinds of (WVVI) with set-valued mappings. Using a regularized gap function and a D-gap function together with a projection operator method, Charitha and Dutta [12] obtained error bounds for (WVVI). Sun and Chai [14] studied error bounds for generalized vector variational inequalities by means of regularized gap functions. Very recently, a global error bound for a weak vector variational inequality was established by the nonlinear scalarization method in Li [19].

However, to the best of our knowledge, an error bound of the generalized mixed vector equilibrium problem (GMVE) has never been investigated. In this paper, motivated by ideas in Sun and Chai [14] and Yamashita et al. [18], we introduce a scalar gap function for (GMVE). Then an error bound of (GMVE) is presented. As an application of an error bound for (GMVE), we also get error bounds of (GVVI) and (VVI), respectively.

This paper is organized as follows: In Section 2, we first recall some basic definitions. In Section 3, we introduce scalar gap functions for (GMVE), (GVVI), and (VVI). By using these gap functions, we obtain some error bound results for (GMVE), (GVVI), and (VVI), respectively.

2 Mathematical preliminaries

Throughout this paper, let \(\mathbb{R}^{n}\) be the n-dimensional Euclidean space and \(\mathbb{R}^{n}_{+}=\{x=(x_{1},x_{2},\ldots,x_{n}):x_{j}\geq0, j=1,2,\ldots,n\}\). The norms and inner products of all finite dimensional spaces are denoted by \(\|\cdot\|\) and \(\langle\cdot,\cdot\rangle\), respectively. Let \(K\subseteq\mathbb{R}^{n}\) be a nonempty closed convex set. Let \(A_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) (\(i=1,2,\ldots,m\)) be a vector-valued mapping, \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(g_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) (\(i=1,2,\ldots,m\)) be real-valued functions. For abbreviation, we put

$$A:=(A_{1},A_{2},\ldots,A_{m}),\qquad F:=(F_{1},F_{2},\ldots,F_{m}),\qquad g:=(g_{1},g_{2}, \ldots,g_{m}) $$

and for any \(x,v\in\mathbb{R}^{n}\)

$$\bigl\langle A(x),v \bigr\rangle := \bigl( \bigl\langle A_{1}(x),v \bigr\rangle , \bigl\langle A_{2}(x),v \bigr\rangle ,\ldots, \bigl\langle A_{m}(x),v \bigr\rangle \bigr). $$

In this paper, we consider the generalized mixed vector equilibrium problem (GMVE) of finding \(x\in K\) such that

$$ F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\notin{-} \operatorname{int} \mathbb {R}^{m}_{+},\quad \forall y\in K. $$
(1)

Denote by \(S_{GMVE}\) the solution set of (GMVE).
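Condition (1) says that x solves (GMVE) precisely when no \(y\in K\) makes every component of \(F(x,y)+\langle A(x), y-x\rangle+g(y)-g(x)\) strictly negative. As a minimal numerical sketch (with hypothetical data for \(n=1\), \(m=2\), and K replaced by a finite grid, so this is an approximation, not an exact test), the membership check can be written as:

```python
import numpy as np

# Hypothetical data: n = 1, m = 2, K = [-1, 1]
F = lambda x, y: np.array([y**2 - x**2, y**4 - x**4])
A = lambda x: np.array([x, 2.0 * x])
g = lambda x: np.array([x**2, x**4])
K_grid = np.linspace(-1.0, 1.0, 201)

def residual(x, y):
    # F(x, y) + <A(x), y - x> + g(y) - g(x), a vector in R^2
    return F(x, y) + A(x) * (y - x) + g(y) - g(x)

def solves_gmve(x, tol=1e-9):
    # x satisfies (1) iff the residual never lies in -int R^2_+,
    # i.e. for every y at least one component is >= 0
    return all(residual(x, y).max() >= -tol for y in K_grid)
```

With this data, `solves_gmve(0.0)` holds, while `solves_gmve(0.5)` fails (take y = 0, where both components are negative).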

If \(m=1\), our problem is to find \(x \in K\) such that

$$ F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\geq0,\quad \forall y \in K. $$
(2)

Then this problem reduces to a generalized mixed equilibrium problem [5].

If \(F=0\), our problem is to find \(x \in K\) such that

$$ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\notin{-} \operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall y\in K. $$
(3)

Then this problem reduces to a generalized vector variational inequality problem (GVVI) [14].

In the case of \(F=0\) and \(g=0\), (GMVE) reduces to the vector variational inequality problem (VVI), introduced and studied by Giannessi [21], of finding \(x\in K\) such that

$$ \bigl\langle A(x), y-x \bigr\rangle \notin{-}\operatorname{int} \mathbb{R}^{m}_{+},\quad \forall y\in K. $$
(4)

In the case of \(A\equiv0\) and \(g=0\), (GMVE) is a vector equilibrium problem, of finding \(x \in K\) such that

$$ F(x,y)\notin{-}\operatorname{int} \mathbb{R}^{m}_{+}, \quad \forall y\in K. $$
(5)

In the case of \(A\equiv0\) and \(F=0\), (GMVE) is a vector optimization problem, finding \(x \in K\) such that

$$ g(y)-g(x)\notin{-}\operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall y\in K. $$
(6)

For \(i=1,2,\ldots,m\), we denote the generalized mixed vector equilibrium problems (GMVE) associated with \(F_{i}\), \(A_{i}\) and \(g_{i}\) as \((GMVE)^{i}\), the generalized vector variational inequality problems (GVVI) associated with \(A_{i}\) and \(g_{i} \) as \((GVVI)^{i}\), and the vector variational inequality problems (VVI) associated with \(A_{i}\) as \((VVI)^{i}\), respectively. The solution sets of \((GMVE)^{i}\), \((GVVI)^{i}\), and \((VVI)^{i}\) will be denoted by \(S_{GMVE}^{i}\), \(S_{GVVI}^{i}\), and \(S_{VVI}^{i}\), respectively.

In this paper, we investigate gap functions and error bounds of (GMVE), (GVVI), and (VVI). We first recall some notation and definitions, which will be used in the sequel.

Definition 2.1

A real-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex (resp. concave) over \(\mathbb{R}^{n}\), if

$$\begin{aligned} &f \bigl(\lambda x +(1-\lambda)y \bigr)\leq\lambda f(x) + (1-\lambda)f(y) \\ &\quad \bigl(\mbox{resp. } f \bigl(\lambda x +(1-\lambda)y \bigr)\geq\lambda f(x) + (1- \lambda)f(y) \bigr), \end{aligned}$$

for every \(x, y \in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\).

Definition 2.2

A vector-valued function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is strongly monotone over \(\mathbb{R}^{n}\) with modulus \(\kappa> 0\), if for any \(x,y \in\mathbb{R}^{n}\),

$$\bigl\langle h(y)-h(x), y-x \bigr\rangle \geq\kappa\|y-x\|^{2}. $$
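For example, \(h(x)=x+x^{3}\) is strongly monotone on \(\mathbb{R}\) with modulus \(\kappa=1\), since \(\langle h(y)-h(x), y-x\rangle=(y-x)^{2}(1+x^{2}+xy+y^{2})\geq\|y-x\|^{2}\). A sampled numerical sanity check (a sketch, not a proof):

```python
import numpy as np

def h(x):
    # strongly monotone on R with modulus kappa = 1
    return x + x**3

kappa = 1.0
rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    # <h(y) - h(x), y - x> >= kappa * ||y - x||^2
    ok &= (h(y) - h(x)) * (y - x) >= kappa * (y - x) ** 2 - 1e-9
```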

Definition 2.3

A real-valued function \(\vartheta: \mathbb{R}^{n} \rightarrow\mathbb{R}\) is said to be a scalar-valued gap function of (GMVE) (resp. (GVVI) and (VVI)), if it satisfies the following conditions:

  1. (i)

    \(\vartheta(x)\geq0\), for any \(x\in K \),

  2. (ii)

    \(\vartheta(x_{0})=0 \) if and only if \(x_{0} \in K\) is a solution of (GMVE) (resp. (GVVI) and (VVI)).

3 Main results

Notice that Sun and Chai [14] introduced a new scalar-valued gap function of (GVVI) without any scalarization approach; the gap function discussed in [14] is simpler from a computational point of view. Following the approach of Sun and Chai [14] and Yamashita et al. [18], we construct the functions \(\vartheta_{\alpha}, \psi_{\alpha}, \phi_{\alpha}:K\rightarrow\mathbb{R}\) for \(\alpha> 0\),

$$\begin{aligned}& \vartheta_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} - \alpha\varphi (x,y) \Bigr\} , \end{aligned}$$
(7)
$$\begin{aligned}& \psi_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} -\alpha\varphi(x,y) \Bigr\} , \end{aligned}$$
(8)

and

$$ \phi_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min_{1\leq i\leq m} \bigl\langle A_{i}(x), x-y \bigr\rangle -\alpha \varphi(x,y) \Bigr\} , $$
(9)

respectively, where \(F_{i},\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\), \(i=1,2,\ldots,m\), are real-valued functions. In the following, let φ be a continuously differentiable function which has the following property with the associated constants \(\gamma, \beta>0\).

  1. (P)

    For all \(x,y \in\mathbb{R}^{n}\),

    $$ \beta\|x-y\|^{2}\leq\varphi(x,y)\leq(\gamma- \beta) \|x-y\|^{2},\quad \gamma\geq 2\beta>0. $$
    (10)

For example, let \(\kappa>0\) and let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) be defined by

$$\varphi(x,y)=\kappa \|x-y\|^{2},\quad x,y\in\mathbb{R}^{n}. $$

Then φ satisfies condition (P), with \(\gamma=2\kappa \) and \(\beta=\kappa\).
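A quick numerical sanity check of property (P) for this choice (a sketch; here both inequalities in (10) hold with equality, since \(\beta=\kappa\) and \(\gamma-\beta=\kappa\)):

```python
import numpy as np

kappa = 1.5
beta, gamma = kappa, 2.0 * kappa          # constants from condition (P)
phi = lambda x, y: kappa * np.dot(x - y, x - y)

rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d2 = np.dot(x - y, x - y)
    # beta * ||x - y||^2 <= phi(x, y) <= (gamma - beta) * ||x - y||^2
    ok &= beta * d2 - 1e-9 <= phi(x, y) <= (gamma - beta) * d2 + 1e-9
```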

For any \(i=1,2,\ldots,m\), we suppose that \(F_{i}\) satisfies the following conditions:

  1. (A1)

\(F_{i}\) is a convex function in the second variable on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\).

  2. (A2)

For any \(x,y\in\mathbb{R}^{n}\), \(F_{i}(x,y)=0\) if and only if \(x=y\).

  3. (A3)

    For any \(x,y,z\in K\), \(F_{i}(x,y)+F_{i}(y,z)\leq F_{i}(x,z)\).
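One family satisfying (A1)–(A3) is \(F_{i}(x,y)=f_{i}(y)-f_{i}(x)\) with \(f_{i}\) convex and strictly increasing (a hypothetical instance; with this choice (A3) holds with equality). A sampled numerical sanity check, taking \(f_{i}=\exp\):

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.exp                       # convex and strictly increasing on R
F = lambda x, y: f(y) - f(x)     # hypothetical bifunction F_i(x, y) = f(y) - f(x)

ok = True
for _ in range(1000):
    x, y, z = rng.uniform(-3.0, 3.0, size=3)
    lam = rng.uniform()
    # (A1): convexity in the second argument
    ok &= F(x, lam * y + (1 - lam) * z) <= lam * F(x, y) + (1 - lam) * F(x, z) + 1e-9
    # (A2): F(x, x) = 0, and F(x, y) != 0 for x != y (f is injective)
    ok &= F(x, x) == 0.0
    ok &= (x == y) or abs(F(x, y)) > 0.0
    # (A3): F(x, y) + F(y, z) <= F(x, z) (equality for this family)
    ok &= F(x, y) + F(y, z) <= F(x, z) + 1e-9
```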

Theorem 3.1

If \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex in the second variable and \(g_{i}\) is convex over \(\mathbb {R}^{n}\) for any \(i=1,2,\ldots,m\), then the function \(\vartheta_{\alpha}\), with \(\alpha> 0\), defined by (7) is a gap function for (GMVE).

Proof

(i) It is clear that for any \(x\in K\),

$$\vartheta_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} - \alpha\varphi (x,y) \Bigr\} \geq0 $$

follows simply by taking \(y=x\) on the right hand side of the expression for \(\vartheta_{\alpha}(x)\).

(ii) If there exists \(x_{0}\in K\) such that \(\vartheta_{\alpha}(x_{0})=0\), then

$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},y)+ \bigl\langle A_{i}(x_{0}), x_{0}-y \bigr\rangle +g_{i}(x_{0})-g_{i}(y) \bigr\} \leq\alpha \varphi(x_{0},y),\quad \forall y\in K. $$

For arbitrary \(x\in K\) and \(\kappa\in(0,1)\), let \(y=x+\kappa(x_{0}-x)\). Since K is convex, we get \(y\in K\) and

$$\begin{aligned} &\min_{1\leq i\leq m}\bigl\{ -F_{i} \bigl(x_{0},x+ \kappa(x_{0}-x) \bigr)+ \bigl\langle A_{i}(x_{0}), x_{0}- \bigl(x+\kappa(x_{0}-x) \bigr) \bigr\rangle +g_{i}(x_{0})-g_{i} \bigl(x+\kappa(x_{0}-x) \bigr)\bigr\} \\ &\quad\leq\alpha\varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr). \end{aligned}$$

Since \(F_{i}\) is convex in the second variable over \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\), we have

$$\begin{aligned} &\min_{1\leq i\leq m} \bigl\{ -\kappa F_{i}(x_{0},x_{0})-(1- \kappa ) F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}), (1-\kappa) (x_{0}-x) \bigr\rangle \\ &\qquad{}+g_{i}(x_{0})- \kappa g_{i}(x_{0})-(1-\kappa)g_{i}(x) \bigr\} \\ &\quad\leq\min_{1\leq i\leq m} \bigl\{ -F_{i} \bigl(x_{0},x+ \kappa(x_{0}-x) \bigr)+ \bigl\langle A_{i}(x_{0}), (1-\kappa) (x_{0}-x) \bigr\rangle +g_{i}(x_{0})-g_{i} \bigl(x+\kappa(x_{0}-x) \bigr) \bigr\} \\ &\quad\leq\alpha\varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr). \end{aligned}$$
(11)

For the function \(\varphi(x,y)\), by using (10), we have

$$ \varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr) \leq(1- \kappa)^{2}(\gamma-\beta)\|x_{0}-x\|^{2}. $$
(12)

By the property (A2) of the function \(F_{i}\), we obtain \(F_{i}(x_{0},x_{0})=0\).

Hence, from (11) and (12), we get

$$\begin{aligned} &(1-\kappa)\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \\ &\quad\leq\alpha(\gamma-\beta) (1-\kappa)^{2}\|x-x_{0} \|^{2}. \end{aligned}$$

So,

$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \leq\alpha(\gamma- \beta) (1-\kappa)\|x-x_{0}\|^{2}. $$

Taking the limit as \(\kappa\rightarrow1\), we obtain

$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \leq0. $$

Then, for any \(x\in K\), there exists \(1\leq i_{0} \leq m\) such that

$$-F_{i_{0}}(x_{0},x)+ \bigl\langle A_{i_{0}}(x_{0}),x_{0}-x \bigr\rangle +g_{i_{0}}(x_{0})-g_{i_{0}}(x)\leq0. $$

This means that

$$F(x_{0},x)+ \bigl\langle A(x_{0}), x-x_{0} \bigr\rangle +g(x)-g(x_{0})\notin-\operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall x\in K. $$

Thus, \(x_{0}\in S_{GMVE}\).

Conversely, if \(x_{0}\in S_{GMVE}\), then for any \(y\in K\) there exists \(1\leq i_{0}\leq m\) such that

$$F_{i_{0}}(x_{0},y)+ \bigl\langle A_{i_{0}}(x_{0}), y-x_{0} \bigr\rangle +g_{i_{0}}(y)-g_{i_{0}}(x_{0}) \geq0. $$

This means that

$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},y)+ \bigl\langle A_{i}(x_{0}), x_{0}-y \bigr\rangle +g_{i}(x_{0})-g_{i}(y) \bigr\} \leq0\quad \mbox{for any }y\in K. $$

So,

$$\vartheta_{\alpha}(x_{0})\leq0. $$

Since \(\vartheta_{\alpha}(x_{0})\geq0\) by part (i),

$$\vartheta_{\alpha}(x_{0})= 0. $$

This completes the proof. □

By a similar method, we obtain the following results for (GVVI) and (VVI), respectively.

Corollary 3.1

The function \(\psi_{\alpha}\), with \(\alpha> 0\), defined by (8) is a gap function for (GVVI).

Corollary 3.2

The function \(\phi_{\alpha}\), with \(\alpha> 0\), defined by (9) is a gap function for (VVI).
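To see the construction at work, \(\vartheta_{\alpha}\) in (7) can be approximated by maximizing over a fine grid of K. The sketch below uses hypothetical data (\(n=1\), \(m=2\), \(\varphi(x,y)=\kappa_{0}\|x-y\|^{2}\)) and checks properties (i) and (ii) of Definition 2.3 numerically:

```python
import numpy as np

K_grid = np.linspace(-1.0, 1.0, 2001)   # grid for K = [-1, 1]
alpha, kappa0 = 0.1, 1.0                # alpha > 0; phi(x, y) = kappa0 * (x - y)**2

# Hypothetical data: each F_i is convex in y with F_i(x, x) = 0, each g_i convex
F = lambda x, y: np.array([y**2 - x**2, y**4 - x**4])
A = lambda x: np.array([x, 2.0 * x])
g = lambda x: np.array([x**2, x**4])

def theta(x):
    # theta_alpha(x) = sup_y { min_i { -F_i(x,y) + <A_i(x), x-y> + g_i(x) - g_i(y) }
    #                          - alpha * phi(x, y) }, approximated on the grid
    return max(
        (-F(x, y) + A(x) * (x - y) + g(x) - g(y)).min() - alpha * kappa0 * (x - y) ** 2
        for y in K_grid
    )
```

On this data x = 0 solves (GMVE), and indeed `theta(0.0)` vanishes (up to grid precision) while `theta(x)` is strictly positive away from 0, in line with Definition 2.3.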

Now, by using the gap function \({\vartheta_{\alpha}}(x)\), we obtain an error bound result for (GMVE).

Theorem 3.2

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\), \(F_{i}\) is convex in the second variable over \(\mathbb {R}^{n}\times\mathbb{R}^{n}\), and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq \emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then for any \(x\in K\),

$$d(x,S_{GMVE})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}} \sqrt {\vartheta_{\alpha}(x)}, $$

where \(d(x,S_{GMVE})\) denotes the distance from the point x to the solution set \(S_{GMVE}\).

Proof

By (7), we get

$$\vartheta_{\alpha}(x)\geq\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} -\alpha\varphi(x,y)\quad \mbox{for any }y \in K. $$

Since \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), all the problems \((GMVE)^{i}\) have a common solution, say \(x_{0}\in K\). Obviously, \(x_{0}\in S_{GMVE}\) and

$$\vartheta_{\alpha}(x)\geq\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,x_{0})+ \bigl\langle A_{i}(x), x-x_{0} \bigr\rangle +g_{i}(x)-g_{i}(x_{0}) \bigr\} -\alpha\varphi(x,x_{0}). $$

Without loss of generality, we assume that

$$\begin{aligned} &{-}F_{1}(x,x_{0})+ \bigl\langle A_{1}(x), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \\ &\quad=\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,x_{0})+ \bigl\langle A_{i}(x), x-x_{0} \bigr\rangle +g_{i}(x)-g_{i}(x_{0}) \bigr\} . \end{aligned}$$

Since \(A_{1}\) is strongly monotone with modulus \(\kappa_{1}\geq\kappa\), we obtain

$$\vartheta_{\alpha}(x)\geq-F_{1}(x,x_{0})+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0})+\kappa\|x-x_{0} \|^{2}-\alpha\varphi(x,x_{0}). $$

It follows from the property (P) of the function φ that

$$\begin{aligned} \vartheta_{\alpha}(x)\geq{}&{-}F_{1}(x,x_{0})+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \\ &{}+\kappa\|x-x_{0} \|^{2}-\alpha(\gamma-\beta)\|x-x_{0}\|^{2}. \end{aligned}$$
(13)

By \(x_{0}\in S_{GMVE}^{1}\), we get

$$ F_{1}(x_{0},x)+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \geq0. $$
(14)

For the function \(F_{1}\), by using (A3), we get from (13) and (14)

$$\vartheta_{\alpha}(x)=\vartheta_{\alpha}(x)+F_{1}(x,x)\geq \vartheta _{\alpha}(x)+F_{1}(x,x_{0})+F_{1}(x_{0},x) \geq \bigl[\kappa-\alpha(\gamma-\beta) \bigr]\| x-x_{0} \|^{2}. $$

Namely,

$$\vartheta_{\alpha}(x)\geq \bigl[\kappa-\alpha(\gamma-\beta) \bigr] \|x-x_{0}\|^{2}. $$

Then

$$\|x-x_{0}\|\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt {\vartheta_{\alpha}(x)}, $$

which means that

$$d(x,S_{GMVE})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt {\vartheta_{\alpha}(x)}. $$

This completes the proof. □

The following example shows that, in general, the conditions of Theorem 3.2 can be achieved.

Example 3.1

Let \(n=1\), \(m=2\), and \(K=[-1,1]\subseteq\mathbb{R}\). Define \(A_{1}, A_{2}, g_{1}, g_{2}: \mathbb{R}\rightarrow\mathbb{R}\), \(F_{1}, F_{2}: \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) by

$$\begin{aligned}& A_{1}(x)= x,\qquad A_{2}(x)=x^{3} + 2x,\qquad g_{1}(x)=2x^{2},\qquad g_{2}(x)=3x^{4}, \\& F_{1}(x,y)=-x^{2}+y^{2},\qquad F_{2}(x,y)=-3x^{4}+3y^{4}. \end{aligned}$$

Then

$$F(x,y)= \bigl(-x^{2}+y^{2},-3x^{4}+3y^{4} \bigr),\qquad A(x)= \bigl(x,x^{3}+2x \bigr),\qquad g(x)= \bigl(2x^{2}, 3x^{4} \bigr). $$

Obviously, \(F_{1}(x,y)\) and \(F_{2}(x,y)\) are convex in the second variable, \(A_{1}\) and \(A_{2}\) are strongly monotone over K with moduli \(\kappa_{1}=1\) and \(\kappa_{2}=2\), respectively, and \(g_{1}\) and \(g_{2}\) are convex over \(\mathbb{R}\). On the other hand, by direct calculation, we have

$$\bigcap^{2}_{i=1}S_{GMVE}^{i}= \{0\}. $$

Thus, the conditions of Theorem 3.2 are satisfied.
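The error bound of Theorem 3.2 can be checked numerically on Example 3.1. Take \(\varphi(x,y)=\|x-y\|^{2}\) (so \(\beta=1\), \(\gamma=2\)), \(\kappa=\min(\kappa_{1},\kappa_{2})=1\), and \(\alpha=0.5<\kappa/(\gamma-\beta)\); then \(d(x,S_{GMVE})=|x|\) should be dominated by \(\sqrt{\vartheta_{\alpha}(x)/(\kappa-\alpha(\gamma-\beta))}\). A grid-based sketch (suprema approximated over a mesh of K, so this illustrates rather than proves the bound):

```python
import numpy as np

K_grid = np.linspace(-1.0, 1.0, 2001)
alpha, kappa, gmb = 0.5, 1.0, 1.0   # gmb = gamma - beta; note kappa > alpha * gmb

# Data of Example 3.1 (with S_GMVE = {0})
F = lambda x, y: np.array([-x**2 + y**2, -3 * x**4 + 3 * y**4])
A = lambda x: np.array([x, x**3 + 2 * x])
g = lambda x: np.array([2 * x**2, 3 * x**4])

def theta(x):
    # gap function (7) with phi(x, y) = (x - y)**2, approximated on the grid
    return max(
        (-F(x, y) + A(x) * (x - y) + g(x) - g(y)).min() - alpha * (x - y) ** 2
        for y in K_grid
    )

# Error bound: d(x, S_GMVE) = |x| <= sqrt(theta(x) / (kappa - alpha * gmb)) on K
bound_holds = all(
    abs(x) <= np.sqrt(max(theta(x), 0.0) / (kappa - alpha * gmb)) + 1e-6
    for x in np.linspace(-1.0, 1.0, 21)
)
```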

Similarly, by using gap functions \(\psi_{\alpha}\) and \(\phi_{\alpha}\), we can also obtain error bound results for (GVVI) and (VVI), respectively.

Corollary 3.3

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\) and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{ GVVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),

$$d(x,S_{GVVI})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt{\psi _{\alpha}(x)}, $$

where \(d(x,S_{GVVI})\) denotes the distance from the point x to the solution set \(S_{GVVI}\).

Corollary 3.4

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\). Further assume that \(\bigcap^{m}_{i=1}S_{VVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),

$$d(x,S_{VVI})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt{\phi _{\alpha}(x)}, $$

where \(d(x,S_{VVI})\) denotes the distance from the point x to the solution set \(S_{VVI}\).

Remark 3.1

(i) In [14], there are some mistakes in the proof of Theorem 3.2, which lead to the requirement that \(g_{i}\), \(i=1,2,\ldots,m\), be Lipschitz. Hence, we give a modified error bound for (GVVI) in Corollary 3.3 without the Lipschitz assumption.

(ii) In [12], Charitha and Dutta established error bounds for (VVI) by using the projection operator method and strong monotonicity assumptions, whereas our method seems simpler from a computational point of view, since it involves no scalarization parameters.

(iii) Under the conditions of Theorem 3.2, the strong assumption that \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\) implies that \(S_{GMVE}\) is a singleton rather than a general set. In fact, by \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), there exists \(x\in K\) such that \(x\in\bigcap^{m}_{i=1}S_{GMVE}^{i}\), namely, for every \(i=1,2,\ldots, m\),

$$ F_{i}(x,y)+ \bigl\langle A_{i}(x), y-x \bigr\rangle +g_{i}(y)-g_{i}(x)\geq0,\quad \forall y\in K. $$
(15)

It is clear that \(x\in S_{GMVE}\). If \(S_{GMVE}\) is not a singleton, there exists \(x'\in S_{GMVE}\) with \(x'\neq x\). Therefore, for each \(y\in K\) there exists \(j\in\{1,2,\ldots, m\}\) (depending on y) such that

$$ F_{j} \bigl(x',y \bigr)+ \bigl\langle A_{j} \bigl(x' \bigr), y-x' \bigr\rangle +g_{j}(y)-g_{j} \bigl(x' \bigr)\geq0. $$
(16)

In what follows, let j denote the index corresponding to \(y=x\).

Thus, from (15), we have

$$ F_{j} \bigl(x,x' \bigr)+ \bigl\langle A_{j}(x), x'-x \bigr\rangle +g_{j} \bigl(x' \bigr)-g_{j}(x)\geq0. $$
(17)

From (16), taking \(y=x\), we have

$$ F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j} \bigl(x' \bigr), x-x' \bigr\rangle +g_{j}(x)-g_{j} \bigl(x' \bigr)\geq0. $$
(18)

According to (17) and (18), we get

$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j}(x), x'-x \bigr\rangle + \bigl\langle A_{j} \bigl(x' \bigr), x-x' \bigr\rangle \geq0. $$
(19)

However, by the properties (A2) and (A3) of the function \(F_{j}\),

$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)\leq0. $$
(20)

As \(A_{j}\) is strongly monotone with modulus \(\kappa_{j}>0\), we have

$$ \bigl\langle A_{j}(x)-A_{j} \bigl(x' \bigr), x-x' \bigr\rangle \geq\kappa_{j} \bigl\| x-x'\bigr\| ^{2}>0. $$
(21)

By combining (20) and (21), we have

$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j}(x)-A_{j} \bigl(x' \bigr), x'-x \bigr\rangle < 0. $$
(22)

This, however, contradicts (19). Hence \(S_{GMVE}\) is a singleton.

Now we ask: how can we establish error bounds for \(S_{GMVE}\) in terms of the gap function \(\vartheta_{\alpha}\), under mild assumptions, so that \(S_{GMVE}\) need not be a singleton? This problem may be interesting and valuable in vector optimization.