1 Introduction

In recent years, the second-order cone \(\mathcal{L}^n_+:= \{(x_0,x^{n-1}) \in {\mathbb {R}}\times {\mathbb {R}}^{n-1}: x_0 \ge \Vert x^{n-1}\Vert \}\), also known as the Lorentz cone, has attracted much attention from researchers in optimization, particularly in conic optimization. Many optimization problems can be reformulated as conic ones. There are computationally stable numerical algorithms for solving various such problems, including complementarity problems. The literature on the subject is vast and widely accessible; see, e.g., Alizadeh and Goldfarb [1], and a recent work [6], by Hao et al., which is related to the mixed complementarity problem over the second-order cone.

The Lorentz cone has a particularly regular structure: it is a self-dual, irreducible cone whose base is isometric to the Euclidean unit ball in \({\mathbb {R}}^{n-1}\). In the context of Euclidean Jordan algebras, \(\mathcal{L}^n_+\) is the symmetric cone (of squares) of the spin algebra \(\mathcal{L}^n\) it generates. We will not pursue this direction here.

There are several known important versions of the extended Lorentz cone, including the Bishop–Phelps cone [5] and the extended second-order cone (ESOC), which was recently developed by S. Z. Németh and his co-authors, see [10,11,12,13,14]. The Lyapunov rank of a cone K, denoted by \(\beta (K)\) (see its definition in the next section), is an invariant which shows that the Lorentz cone and ESOC are generally not linearly isomorphic. It was introduced and studied by F. Alizadeh et al. in [16] under the name of bilinearity rank. The Lyapunov rank of \(\mathcal{L}^n_+\) was computed in [16] and [4], and equals \(\frac{n^2-n}{2}\). Orlitzky, in [15], showed that the latter quantity is the maximum value the Lyapunov rank can attain for a proper cone in \({\mathbb {R}}^n\). Sznajder, in [18], showed that the ESOC is irreducible and computed its Lyapunov rank, which is generally lower than \(\frac{n^2-n}{2}\).

In this article, we study another extension of the Lorentz cone, called the monotone extended second-order cone (MESOC) [3]. There are three main results related to MESOC here:

  • computing its Lyapunov rank, which, in general, turns out to be much lower than the upper bound indicated in [15],

  • proving that MESOC (in contrast to ESOC) is a reducible cone,

  • showing that a closed convex set is an isotonic projection set with respect to MESOC if and only if it is a cylinder (in an ambient space).

In [3], an application of MESOC to portfolio optimization has been presented and possible other applications have been suggested.

The paper is organized as follows: In Sect. 2, we collect the necessary definitions and provide examples of monotone cones. The main concept related to a cone K on which the paper relies is the complementarity set of K. In Sect. 3, we identify the dual cone of MESOC and investigate the structure of its complementarity set; we also formulate and prove there the results listed above. In Sect. 4, based on the work done in [13] and [14], we study the properties of the mixed complementarity problem (MiCP). By exploring the relationship between the mixed complementarity problem and the nonlinear complementarity problem derived in [13], and by using the isotonicity of MESOC obtained in Sect. 3, we generate a fixed point iteration sequence (called Picard iteration by some authors), which converges to a solution of the MiCP on a general closed and convex cone. The convergence of this iteration is order-based, rather than based on the usual contraction mapping principle, although the preprint [9] and the example in the final section suggest that in certain situations it may be implicitly related to such a principle. The final section presents a concrete MiCP, for which we exhibit a solution, in exact numbers, by using the above iteration.

2 Preliminaries

Denote the canonical unit vectors of \({\mathbb {R}}^n\) by \(e^1,\ldots ,e^n\) and let \(e=e^1+\cdots +e^n\). Any vector \(z\in {\mathbb {R}}^n\) is considered to be a column vector and can be uniquely written as \(z=(z_1,\ldots , z_n)^\top :=z_1e^1+\cdots +z_ne^n\). In particular, \(e=(1,\ldots ,1)^\top \).

The canonical inner product of any two vectors \(x,y\in {\mathbb {R}}^n\) is defined as

$$\begin{aligned} \langle x,y\rangle :=x^\top y=x_1y_1+\cdots +x_ny_n. \end{aligned}$$

We identify \({\mathbb {R}}^p\times {\mathbb {R}}^q\) with \({\mathbb {R}}^{p+q}\) through \((x,y)=(x^\top ,y^\top )^\top \).

We call the set

$$\begin{aligned} {\mathcal {H}}(u,\alpha ):=\{x\in {\mathbb {R}}^n:\;\langle x,u\rangle =\alpha \} \end{aligned}$$

an affine hyperplane with the normal \(u\in {\mathbb {R}}^n\setminus \{0\}\) and the corresponding sets

$$\begin{aligned} {\mathcal {H}}_-(u,\alpha ):= & {} \{x\in {\mathbb {R}}^n:\langle x,u\rangle \le \alpha \},\\ {\mathcal {H}}_+(u,\alpha ):= & {} \{x\in {\mathbb {R}}^n:\langle x,u\rangle \ge \alpha \}, \end{aligned}$$

closed half-spaces. An affine hyperplane through the origin will simply be called a hyperplane.

A nonempty set \(K\subseteq {\mathbb {R}}^n\) is a cone if \(\alpha x \in K\) for any \(x\in K\) and any \(\alpha > 0\). A set K is a convex cone (i.e., a cone which is a convex set) if and only if \(\alpha x + \beta y \in K\) for any \(x,y\in K\) and any \(\alpha ,\beta > 0\).

A cone K is called closed if it is a closed set, and pointed if \(K \cap -K =\{0\}\).

The dual cone of a cone K is given by

$$\begin{aligned} K^* := \{y \in {\mathbb {R}}^n :\langle x, y\rangle \ge 0, \forall x\in K\}. \end{aligned}$$

We define the following set, which is vital for our further considerations

$$\begin{aligned} C(K):= \{(x, y): x\in K, y\in K^*,\, x \perp y\}, \end{aligned}$$

called the complementarity set of K, where \(x\perp y\) means \(\langle x,y\rangle = 0\).

A cone \(K\subseteq {\mathbb {R}}^n\) is called simplicial if there is a basis \(\{u^i:1\le i\le n\}\) of \({\mathbb {R}}^n\) such that

$$\begin{aligned} K=\left\{ \alpha _1 u^1+\dots +\alpha _n u^n\,:\,\alpha _i\ge 0,\,1\le i\le n\right\} \!\!. \end{aligned}$$

The vectors \(u^i\), \(1\le i\le n\) are called the generators of K. It is known that the dual of a simplicial cone is also simplicial.

We present two examples of complementarity sets; the second one will be used later.

Example 1

Define the monotone cone \({\mathbb {R}}_{\ge }^n\) as

$$\begin{aligned} {\mathbb {R}}_{\ge }^n :=\{x\in {\mathbb {R}}^n:x_1\ge x_2\ge \cdots \ge x_n\}. \end{aligned}$$

It is easy to check that its dual cone \(({\mathbb {R}}_{\ge }^n)^*\) is given by

$$\begin{aligned} ({\mathbb {R}}_{\ge }^n)^*=\left\{ y\in {\mathbb {R}}^n: \sum _{i=1}^{j} y_i\ge 0,~ j=1,2,\ldots ,n-1,~ \sum _{i=1}^{n}y_i=0 \right\} \!\!. \end{aligned}$$

It is an important object, also known as the Schur cone (see [17], Example 7.4), since it induces the so-called Schur ordering, which plays an important role in the theory of majorization, see [7].

The complementarity set \(C({\mathbb {R}}_{\ge }^n)\) of the cone \({\mathbb {R}}_{\ge }^n\) is described as

$$\begin{aligned} C({\mathbb {R}}_{\ge }^n)= & {} \Big \{(x,y):x\in {\mathbb {R}}_{\ge }^n,y\in ({\mathbb {R}}_{\ge }^n)^*,~ (x_i-x_{i+1})\sum _{j=1}^{i} y_j=0,\\&\forall i=1,2,\ldots ,n-1\Big \}. \end{aligned}$$

Example 2

  We define the monotone nonnegative cone \({\mathbb {R}}_{\ge +}^n\) as:

$$\begin{aligned} {\mathbb {R}}_{\ge +}^n :=\{x\in {\mathbb {R}}^n: x_1\ge x_2 \ge \cdots \ge x_n\ge 0\}. \end{aligned}$$

Its dual cone is given by:

$$\begin{aligned} ({\mathbb {R}}_{\ge +}^n)^*=\left\{ y\in {\mathbb {R}}^n: \sum _{i=1}^{j}y_i\ge 0,\, j=1,2,\ldots ,n\right\} \!, \end{aligned}$$

and the complementarity set of  \({\mathbb {R}}_{\ge +}^n\) is equal to

$$\begin{aligned} C({\mathbb {R}}_{\ge +}^n)= & {} \left\{ (x,y): x\in {\mathbb {R}}_{\ge +}^n,\,y\in ({\mathbb {R}}_{\ge +}^n)^*, \Big (x_j=x_{j+1} \text { or } \sum _{i=1}^{j}y_i=0\right. \!, \\&\left. \text { }\forall j=1,2,\ldots ,n-1\Big ), \text { and }\Big (x_n=0 \,\text { or } \sum _{i=1}^{n}y_i=0\Big )\right\} \!. \end{aligned}$$

Both \({\mathbb {R}}^n_{\ge +}\) and \(({\mathbb {R}}^n_{\ge +})^*\) are simplicial cones.
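The membership conditions above are straightforward to test numerically. The following Python sketch (the helper names are ours and appear nowhere in the text) checks membership in \({\mathbb {R}}^n_{\ge +}\) and in its dual, and verifies one concrete pair from the complementarity set described in Example 2.

```python
import numpy as np

def in_monotone_nonneg(x, tol=1e-12):
    """x_1 >= x_2 >= ... >= x_n >= 0."""
    x = np.asarray(x, dtype=float)
    return np.all(np.diff(x) <= tol) and x[-1] >= -tol

def in_dual_monotone_nonneg(y, tol=1e-12):
    """All partial sums y_1 + ... + y_j are nonnegative."""
    return np.all(np.cumsum(np.asarray(y, dtype=float)) >= -tol)

# A pair from C(R^4_{>=+}): x is constant on a leading block and y is
# supported where x "drops", so every condition of Example 2 holds.
x = np.array([3.0, 3.0, 1.0, 0.0])
y = np.array([2.0, -2.0, 0.0, 5.0])   # partial sums: 2, 0, 0, 5 >= 0
assert in_monotone_nonneg(x) and in_dual_monotone_nonneg(y)
print(np.dot(x, y))                   # 0.0: the pair is complementary
```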

Recall [13] that the extended second-order cone (ESOC) is defined by

$$\begin{aligned} \text {ESOC}=\{(x,u)\in {\mathbb {R}}^p\times {\mathbb {R}}^q:\,x\ge \Vert u\Vert e\} \end{aligned}$$

and its dual cone is given as

$$\begin{aligned} \text {(ESOC)}^*=\{(x,u)\in {\mathbb {R}}^p\times {\mathbb {R}}^q:\,\langle x,e\rangle \ge \Vert u\Vert ,\,x\ge 0\}, \end{aligned}$$

where p and q are nonnegative integers.

A matrix \(A\in {\mathbb {R}}^{n\times n}\) is called Lyapunov-like on K, if

$$\begin{aligned} \langle Ax,y\rangle = 0, \text { }\forall (x,y) \in C(K). \end{aligned}$$
(1)

Define a vector space \({{\,\mathrm{LL}\,}}(K)\) as the set of all Lyapunov-like matrices on K and denote its dimension as \(\beta (K)\), which we call the Lyapunov rank (or bilinearity rank) of K.

For an arbitrary closed convex set \(C \subseteq {\mathbb {R}}^m\), we define the mapping \(P_C\), the metric projection onto C:

$$\begin{aligned} {\mathbb {R}}^m\ni x\mapsto P_Cx:=\arg \min \{\Vert y-x\Vert : y\in C\}. \end{aligned}$$

Since C is closed and convex, \(P_C\) is a single-valued mapping, well defined from \({\mathbb {R}}^m\) onto C. We also note that the projection \(P_C\) is nonexpansive, i.e., for any \(x,y\in {\mathbb {R}}^m\),

$$\begin{aligned} \Vert P_C(x)-P_{C}(y)\Vert \le \Vert x-y\Vert . \end{aligned}$$
(2)

For any pointed closed convex cone \(K\subset {\mathbb {R}}^m\), a mapping \(F: {\mathbb {R}}^m \rightarrow {\mathbb {R}}^m\) is called K-isotone if for any \(x, y \in K\), \(x\le _{K} y\) implies \(F(x)\le _{K} F(y)\); here \(x\le _{K} y\) means \(y-x \in K\). If the projection \(P_C\) is K-isotone, then the closed convex set \(C \subseteq {\mathbb {R}}^m\) is called a K-isotone projection set.

Finally, for a proper closed convex cone \(K \subset {\mathbb {R}}^m\) and a mapping \(F: {\mathbb {R}}^m \rightarrow {\mathbb {R}}^m\), we define the complementarity problem CP(K,F) as the problem of finding an \(x^* \in K\) such that \(F(x^*) \in K^*\) and \(x^* \perp F(x^*)\). In other words, we seek an \(x^*\) such that \((x^*,F(x^*)) \in C(K)\).

3 Main Results

The first topic we are interested in is the complementarity problem based on the monotone extended second-order cone, which we introduce below. Let p and q be two nonnegative integers. The monotone extended second-order cone (informally MESOC) is defined as follows:

$$\begin{aligned} L:=\{(x,u)\in {\mathbb {R}}^p\times {\mathbb {R}}^q:x_1\ge x_2\ge \cdots \ge x_p\ge \Vert u\Vert \}. \end{aligned}$$
(3)

In order to find solutions of a complementarity problem, we first need to find the dual cone and the complementarity set of this cone. Although a considerable part of the characterization has already been presented in [3], for the sake of completeness we include it here.

Proposition 3.1

Let p and q be two nonnegative integers. Then, the dual cone of a monotone extended second-order cone L in (3) is

$$\begin{aligned} {\small M:=\left\{ (x,u)\in {\mathbb {R}}^p\times {\mathbb {R}}^q:\sum _{i=1}^{j}x_i\ge 0, \forall j\in \{1,\ldots ,p-1\}, \sum _{i=1}^{p}x_i\ge \Vert u\Vert \right\} }, \end{aligned}$$
(4)

that is, \(M=L^*\).

Proof

First, we show that \(M\subseteq L^*\). Let \((x,u)\in L\) and \((y,v)\in M\). Using Abel’s summation formula, we have

$$\begin{aligned} \langle (x,u),(y,v)\rangle= & {} x^Ty+u^Tv= \sum _{i=1}^{p-1}(x_i-x_{i+1})\sum _{j=1}^{i}y_j+x_p\sum _{i=1}^{p}y_i+u^Tv\\\ge & {} \Vert u\Vert \Vert v\Vert +u^Tv\ge 0. \end{aligned}$$

So, we have \(M\subseteq L^*\). Now, we show that \(L^*\subseteq M\). To this end, let \((y,v)\in L^*\) and \(e=(1,1,\ldots ,1) \in {\mathbb {R}}^p\). It is obvious that \((\Vert v\Vert e,-v) \in L\). Suppose first that \(v\ne 0\); then

$$\begin{aligned} \langle (\Vert v\Vert e,-v), (y,v)\rangle \ge 0\Leftrightarrow \Vert v\Vert \sum _{i=1}^{p}y_i-\Vert v\Vert ^2\ge 0. \end{aligned}$$

Hence, \(\displaystyle \sum \nolimits _{i=1}^{p}y_i\ge \Vert v\Vert .\) When \(v=0\), then  \((e,0)\in L\) and \( (y,0)\in L^*\) imply that \(\displaystyle \sum \nolimits _{i=1}^{p}y_i\ge 0=\Vert v\Vert .\)

We also have \(\Big ((\underbrace{1,\ldots ,1}_{i},\underbrace{0,\ldots ,0}_{p-i}),(\underbrace{0,0,\ldots ,0}_q)\Big )\in L\) for every \(i<p\), and \((y,v)\in L^*\), which implies that

$$\begin{aligned} \sum _{j=1}^{i}y_j\ge 0, \text { }\forall i\in \{1,2,\ldots ,p-1\}. \end{aligned}$$

Thus, \((y,v)\in M\), so \(L^*\subseteq M\). Altogether, we have \(L^* = M\). \(\square \)
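As a quick numerical sanity check of Proposition 3.1, one can sample elements of L and of M and confirm that their inner products are nonnegative. The sketch below is ours; it merely parametrizes the defining inequalities (3) and (4) with random data.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 4, 3

def sample_L():
    """(x, u) with x_1 >= ... >= x_p >= ||u||, cf. (3)."""
    u = rng.normal(size=q)
    x = np.linalg.norm(u) + np.cumsum(rng.random(p))[::-1]
    return np.concatenate([x, u])

def sample_M():
    """(y, v) with nonnegative partial sums of y and sum(y) >= ||v||, cf. (4)."""
    v = rng.normal(size=q)
    partial = np.concatenate([rng.random(p - 1),
                              [np.linalg.norm(v) + rng.random()]])
    y = np.diff(np.concatenate([[0.0], partial]))   # cumsum(y) == partial
    return np.concatenate([y, v])

vals = [sample_L() @ sample_M() for _ in range(1000)]
print(min(vals) >= -1e-10)   # True: <(x,u),(y,v)> >= 0 for every sample
```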

After finding the dual of the monotone extended second-order cone, we describe its complementarity set. In order to do so, we need the inequality established in Lemma 3.1.

Lemma 3.1

For every \((x,u) \in L\) and \((y,v) \in M\), we have

$$\begin{aligned} \langle x,y \rangle \ge \Vert u\Vert \sum _{i=1}^{p}y_i\ge \Vert u\Vert \Vert v\Vert \,. \end{aligned}$$

Proof

First, we prove that \(\langle x,y\rangle \ge \Vert u\Vert \sum _{i=1}^{p}y_i\). Since \((x,u)\in L\) and \((y,v)\in M\), it follows that \( x_1\ge x_2\ge \cdots \ge x_p\ge \Vert u\Vert \), \( \sum _{i=1}^{j}y_i\ge 0\) for all \(j\in \{1,\ldots ,p-1\}\) and \(\sum _{i=1}^{p}y_i\ge \Vert v\Vert \ge 0. \) Thus, by backward induction,

$$\begin{aligned} \begin{aligned} \sum _{i=1}^{p}y_i&=y_1+y_2+\cdots +y_p \ge 0\\ \implies&(x_p-\Vert u\Vert )\sum _{i=1}^{p-1}y_i + (x_p-\Vert u\Vert )y_p\ge 0\\ \implies&(x_{p-1}-\Vert u\Vert )\sum _{i=1}^{p-2}y_i + (x_{p-1}-\Vert u\Vert )y_{p-1}+(x_p-\Vert u\Vert )y_p\ge 0\\ \implies&(x_{p-2}-\Vert u\Vert )\sum _{i=1}^{p-3}y_i +(x_{p-2}-\Vert u\Vert )y_{p-2}\\&+ (x_{p-1}-\Vert u\Vert )y_{p-1}+(x_p-\Vert u\Vert )y_p\ge 0\\ \cdots&\\ \implies&(x_1-\Vert u\Vert )y_1+(x_2-\Vert u\Vert )y_2+\cdots +(x_p-\Vert u\Vert )y_p\ge 0\\ \iff&\langle x,y\rangle \ge \Vert u\Vert \sum _{i=1}^{p}y_i\,. \end{aligned} \end{aligned}$$

Finally, since \(\langle x,y\rangle \ge \Vert u\Vert \sum _{i=1}^{p}y_i\) and \(\sum _{i=1}^{p}y_i\ge \Vert v\Vert \), we have

$$\begin{aligned} \langle x,y \rangle \ge \Vert u\Vert \sum _{i=1}^{p}y_i\ge \Vert u\Vert \Vert v\Vert \,. \end{aligned}$$

\(\square \)

By using Lemma 3.1, we find the complementarity set of the monotone extended second-order cone.

Proposition 3.2

Let \((x,u,y,v)\in C(L)\). If \(u\ne 0\) and \(v\ne 0\), then

$$\begin{aligned} C(L)= & {} \Bigg \{(x,u,y,v): (x,u)\in L,\text { }(y,v)\in M,\text { }\\ \langle x,y\rangle= & {} \Vert u\Vert \sum _{i=1}^{p}y_i,\text { }~ \sum _{i=1}^{p}y_i=\Vert v\Vert ,\text { and } \exists \lambda> 0 ~\text { such that } v=-\lambda u\Bigg \}\\= & {} \Bigg \{(x,u,y,v): (x,u)\in L, \text { }(y,v)\in M, \text { }(x_i-x_{i+1})\sum _{j=1}^{i}y_j=0,\text { }\\&\forall i=1,\ldots ,p-1, \text { } x_p=\Vert u\Vert ,\text { }\sum _{i=1}^{p}y_i=\Vert v\Vert , \text { and } \exists \lambda >0 ~\text { such that } v=-\lambda u\Bigg \}. \end{aligned}$$

Proof

Let

$$\begin{aligned} S:= & {} \Bigg \{(x,u,y,v): (x,u)\in L,\text { }(y,v)\in M,\text { }\\ \langle x,y\rangle= & {} \Vert u\Vert \sum _{i=1}^{p}y_i,\text { }~ \sum _{i=1}^{p}y_i=\Vert v\Vert ,\text { and }\exists \lambda > 0 ~\text { such that } v=-\lambda u\Bigg \}. \end{aligned}$$

Now, our task is to show that \(C(L)=S\). First, we need to prove that \(C(L)\subseteq S\). For arbitrary \((x,u,y,v)\in C(L)\), by using Lemma 3.1, we have

$$\begin{aligned} \begin{aligned} 0&= \langle (x,u),(y,v)\rangle = \langle x,y\rangle +\langle u,v\rangle \\&\ge \Vert u\Vert \sum _{i=1}^{p}y_i + \langle u,v\rangle \\&\ge \Vert u\Vert \Vert v\Vert + \langle u,v\rangle \ge 0\,. \end{aligned} \end{aligned}$$

Hence, all the inequalities above must be equalities, that is,

$$\begin{aligned} \begin{aligned} 0&=\langle x,y\rangle +\langle u,v\rangle = \Vert u\Vert \sum _{i=1}^{p}y_i + \langle u,v\rangle \\&= \Vert u\Vert \Vert v\Vert + \langle u,v\rangle = 0\,. \end{aligned} \end{aligned}$$

Thus,

$$\begin{aligned} \langle x,y\rangle =\Vert u\Vert \sum _{i=1}^{p}y_i=\Vert u\Vert \Vert v\Vert . \end{aligned}$$
(5)

Therefore,

$$\begin{aligned} \Vert u\Vert \sum _{i=1}^{p}y_i=\Vert u\Vert \Vert v\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert u\Vert \Vert v\Vert +\langle u,v\rangle =0. \end{aligned}$$
(6)

From (5) we get \(\langle x,y\rangle =\Vert u\Vert \sum _{i=1}^{p}y_i\) and, subsequently, \(\sum _{i=1}^{p}y_i=\Vert v\Vert \). From the equality case in the Cauchy–Schwarz inequality, equation (6) implies that \(\exists \lambda >0, v=-\lambda u\). Thus, \(C(L)\subseteq S\).

Now, for the converse inclusion \(S\subseteq C(L)\): for any \((x,u,y,v)\in S\), there exists \(\lambda >0\) such that \(v = -\lambda u \), \((x,u)\in L\), \((y,v)\in M\), \(x^Ty=\Vert u\Vert \sum _{i=1}^{p}y_i\), and \(\sum _{i=1}^{p}y_i=\Vert v\Vert \). Thus,

$$\begin{aligned} \langle (x,u),(y,v)\rangle = \langle x,y\rangle +\langle u,v\rangle = \Vert u\Vert \Vert v\Vert + \langle u,v\rangle = 0. \end{aligned}$$

Therefore, \((x,u,y,v)\in C(L)\). Hence, \(S\subseteq C(L)\).

Finally, we have

$$\begin{aligned} C(L)= & {} S=\Bigg \{(x,u,y,v): (x,u)\in L,\text { }(y,v)\in M,\text { }\nonumber \\ \langle x,y\rangle= & {} \Vert u\Vert \sum _{i=1}^{p}y_i,\text { }~ \sum _{i=1}^{p}y_i=\Vert v\Vert ,\text { and }\exists \lambda > 0 ~\text { such that } v=-\lambda u\Bigg \}. \end{aligned}$$
(7)

Moreover,

$$\begin{aligned} \begin{aligned} \Vert u\Vert \sum _{i=1}^{p}y_i&= \langle x,y\rangle \\&= y_1(x_1-x_2)+(y_1+y_2)(x_2-x_3)+\cdots \\&\quad +(y_1+y_2+\cdots +y_{p-1})(x_{p-1}-x_p)+(y_1+y_2+\cdots +y_p)x_p \end{aligned} \end{aligned}$$

if and only if

$$\begin{aligned} (\Vert u\Vert -x_p)\sum _{i=1}^{p}y_i= & {} y_1(x_1-x_2)+(y_1+y_2)(x_2-x_3)\\&+\cdots +(y_1+y_2+\cdots +y_{p-1})(x_{p-1}-x_p). \end{aligned}$$

In the equation above, it is obvious that the LHS (left-hand side) is nonpositive and the RHS (right-hand side) is nonnegative, thus both must be equal to 0. Since the components of the sum in the RHS are all nonnegative, each component must be equal to 0. Hence, from equation (7) it follows that

$$\begin{aligned} C(L)= & {} \Bigg \{(x,u,y,v): (x,u)\in L, \text { }(y,v)\in M, \text { }(x_i-x_{i+1})\sum _{j=1}^{i}y_j=0,\text { }\\&\forall i=1,\ldots ,p-1, \text { } x_p=\Vert u\Vert ,\text { }\sum _{i=1}^{p}y_i=\Vert v\Vert , \text { and }\exists \lambda >0 ~\text { such that } v=-\lambda u\Bigg \}. \end{aligned}$$

Now the proof is complete. \(\square \)
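To make the characterization concrete, the following sketch (our own construction) builds a pair in C(L) by using the special case in which all components of x equal \(\Vert u\Vert \), so that the conditions \((x_i-x_{i+1})\sum _{j=1}^{i}y_j=0\) hold trivially, and then checks the orthogonality numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 3

u = rng.normal(size=q)
lam = 2.0                          # any lambda > 0
v = -lam * u                       # v = -lambda * u, as in Proposition 3.2
x = np.full(p, np.linalg.norm(u))  # x_1 = ... = x_p = ||u||, so (x,u) in L

# y must have nonnegative partial sums and sum(y) = ||v||.
partial = np.concatenate([rng.random(p - 1), [np.linalg.norm(v)]])
y = np.diff(np.concatenate([[0.0], partial]))

lhs = np.dot(x, y) + np.dot(u, v)
print(abs(lhs) < 1e-12)            # True: ((x,u),(y,v)) lies in C(L)
```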

Lemma 3.2

Let \(A\in {\mathbb {R}}^{p\times p}\). Then, \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}^p_{\ge +})\) if and only if it is of the form

$$\begin{aligned} A=\left[ \begin{array}{cccccc} a-\sum _{i=2}^pa_i &{} a_{2} &{} a_{3} &{} \cdots &{} \cdots &{} a_{p}\\ &{} a-\sum _{i=3}^pa_i &{} a_{3} &{} \cdots &{} \cdots &{} a_{p}\\ &{} &{} a-\sum _{i=4}^pa_i &{} \cdots &{} \cdots &{} a_{p}\\ &{} &{} &{} \ddots &{} \vdots &{} \vdots \\ &{} \mathbf{0} &{} &{} &{} a-a_p &{} a_{p}\\ &{} &{} &{} &{} &{} a \end{array} \right] \!, \end{aligned}$$
(8)

where \(a,a_2,a_3,\ldots ,a_p\in {\mathbb {R}}\) are arbitrary.

Proof

Let \(e^i\in {\mathbb {R}}^p\), \(1\le i\le p\) be the canonical unit vectors in \({\mathbb {R}}^p\) and \(e^{p+1}\) be the zero vector in \({\mathbb {R}}^p\). Denote \(u^i:=\sum _{k=1}^i e^k\in {\mathbb {R}}^p_{\ge +}\)  and  \(v^i:=e^i-e^{i+1}\in ({\mathbb {R}}^p_{\ge +})^*\), for  \(1\le i\le p\) (see Example 2). Then, \(\langle u^i,v^j\rangle =\delta _{ij}\), where \(\delta _{ij}\) is the Kronecker symbol, that is, \(\delta _{ii}=1\) and \(\delta _{ij}=0\), for \(i\ne j\). It follows that \((u^i,v^j)\in C({\mathbb {R}}^p_{\ge +})\), whenever \(i\ne j\) (as can also be seen from Example 2). Hence, if \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}^p_{\ge +})\) and \(i\ne j\), then

$$\begin{aligned} \langle Au^i,v^j\rangle =\sum _{k=1}^i (a_{jk}-a_{j+1,k})=0, \end{aligned}$$
(9)

where we set \(a_{p+1,k}:=0\). By using equation (9), we get

$$\begin{aligned} \sum _{\ell =j}^p\langle Au^i,v^{\ell }\rangle =\sum _{k=1}^i a_{jk}=0,\quad \text{ if }\quad j>i. \end{aligned}$$
(10)

By equation (10), we get

$$\begin{aligned} a_{ji}=\sum _{k=1}^i a_{jk}-\sum _{k=1}^{i-1} a_{jk}=0, \quad \text{ if }\quad j>i. \end{aligned}$$
(11)

By using again equation (9), we get

$$\begin{aligned} a_{ji}-a_{j+1,i}=\sum _{k=1}^i (a_{jk}-a_{j+1,k}) -\sum _{k=1}^{i-1} (a_{jk}-a_{j+1,k})=0,\quad \text{ if }\quad j+1<i.\nonumber \\ \end{aligned}$$
(12)

Equations (9), (11) and (12) imply that A is of the form (8). Now, suppose that A is of the form (8). From Example 2, any element \((x,y)\in C({\mathbb {R}}^p_{\ge +})\) can be written in the form

$$\begin{aligned} (x,y)=\left( \sum _{i\in I}\alpha _i u^i,\sum _{j\in J}\beta _j v^j\right) ;\quad \alpha _i,\beta _j\ge 0, \end{aligned}$$
(13)

for some \(I,J\subseteq \{1,2,\dots ,p\}\) with \(I\cup J=\{1,2,\dots ,p\}\) and \(I\cap J=\varnothing \), because \(\{u^i:1\le i\le p\}\subseteq {\mathbb {R}}^p_{\ge +}\) and \(\{v^i:1\le i\le p\}\subseteq ({\mathbb {R}}^p_{\ge +})^*\) are the generators of the simplicial cones \({\mathbb {R}}^p_{\ge +}\) and \(({\mathbb {R}}^p_{\ge +})^*\), respectively, and \(x\perp y\). As \(\langle Au^i,v^j\rangle =0\) for \(i\ne j\), by considering the derivation of equations (9), (10), (11) and (12) in the reverse order, equation (13) implies that \(\langle Ax,y\rangle =0\). Hence, \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}^p_{\ge +})\). \(\square \)
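A numerical cross-check of Lemma 3.2 (ours, not part of the original argument): since the forward direction of the proof uses only the constraints \(\langle Au^i,v^j\rangle =0\), \(i\ne j\), stacking these bilinear constraints and computing the null-space dimension of the resulting linear system should return \(\beta ({\mathbb {R}}^p_{\ge +})=p\).

```python
import numpy as np

p = 5
e = np.eye(p)
U = [np.sum(e[:, :i + 1], axis=1) for i in range(p)]                 # u^i = e^1 + ... + e^i
V = [e[:, i] - (e[:, i + 1] if i + 1 < p else 0) for i in range(p)]  # v^i = e^i - e^{i+1}

rows = [np.outer(V[j], U[i]).ravel()          # encodes <A u^i, v^j> = 0
        for i in range(p) for j in range(p) if i != j]
rank = np.linalg.matrix_rank(np.array(rows))
print(p * p - rank)   # null-space dimension = beta(R^p_{>=+}) = p
```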

Theorem 3.1

For the monotone extended second-order cone (3), any Lyapunov-like transformation T is of the form

$$\begin{aligned} T=\left[ \begin{array}{cccccc|ccc} a-\sum _{j=2}^pa_{j} &{} a_{2} &{} a_{3} &{} \cdots &{} \cdots &{} a_{p} &{} c_1 &{} \cdots &{} c_q\\ &{} a-\sum _{j=3}^pa_{j} &{} a_{3} &{} \cdots &{} \cdots &{} a_{p} &{} c_1 &{} \cdots &{} c_q \\ &{} &{} a-\sum _{j=4}^pa_{j} &{} \cdots &{} \cdots &{} a_{p} &{} c_1 &{} \cdots &{} c_q \\ &{} &{} &{} \ddots &{} \vdots &{} \vdots &{} \vdots &{} &{} \vdots \\ &{} \mathbf{0} &{} &{} &{} a-a_{p} &{} a_{p} &{} c_1 &{} \cdots &{} c_q \\ &{} &{} &{} &{} &{} a &{} c_1 &{} \cdots &{} c_q \\ \hline &{} &{} &{} &{} &{} c_1 &{} a &{} &{} *\\ &{} \mathbf{0} &{} &{} &{} &{} \vdots &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} c_q &{} -* &{} &{} a\\ \end{array} \right] , \end{aligned}$$
(14)

where \(a,a_2,a_3,\ldots ,a_p,c_1,\ldots ,c_q\in {\mathbb {R}}\) are arbitrary and the blocks marked \(*\) and \(-*\) in the lower-right \(q\times q\) block form an arbitrary skew-symmetric matrix. Hence, the Lyapunov rank of L is given by

$$\begin{aligned} \beta (L)=p+\frac{q(q+1)}{2}\,. \end{aligned}$$

Proof

Recall that the complementarity set for the monotone extended second-order cone L is

$$\begin{aligned} C(L)=\{((x,u),(y,v)) \in L\times M: (x,u)\perp (y,v)\}. \end{aligned}$$

We partition the above set in the following way:

$$\begin{aligned} C(L):=C_1(L)\cup C_2(L)\cup C_3(L)\cup C_4(L), \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&C_1(L):=\{(x,0,y,0)\in C(L)\},\\&C_2(L):=\{(x,0,y,v)\in C(L):v\ne 0\},\\&C_3(L):=\{(x,u,y,v)\in C(L):u\ne 0\ne v\},\\&C_4(L):=\{(x,u,y,0)\in C(L):u\ne 0\}. \end{aligned} \end{aligned}$$

Since \(x=0 \Rightarrow u=0\) and \(y=0 \Rightarrow v=0\), for any Lyapunov-like transformation on L we only need to consider the case of  \(x\ne 0 \ne y\). Let T be any element of \({{\,\mathrm{LL}\,}}(L)\), so it has the following block form:

$$\begin{aligned} \left[ \begin{array}{cc} A &{} B \\ C &{} D \end{array} \right] : {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^p\times {\mathbb {R}}^q, \end{aligned}$$

where \(A\in {\mathbb {R}}^{p\times p},B\in {\mathbb {R}}^{p\times q}, C\in {\mathbb {R}}^{q\times p}\), and \(D\in {\mathbb {R}}^{q\times q}\). Take any \((x,u,y,v) \in C(L)\). Then, (1) implies

$$\begin{aligned}&\langle Ax,y\rangle + \langle Bu,y\rangle + \langle Cx,v\rangle + \langle Du,v\rangle =0,\\&\langle Ax,y\rangle - \langle Bu,y\rangle - \langle Cx,v\rangle + \langle Du,v\rangle =0, \end{aligned}$$

where the latter equation comes from the former one by substituting \(-u\) for u and \(-v\) for v. By adding and subtracting the above equations, we get

$$\begin{aligned} \begin{aligned}&\langle Ax,y\rangle + \langle Du,v\rangle =0,\\&\langle Bu,y\rangle + \langle Cx,v\rangle =0. \end{aligned} \end{aligned}$$
(15)

By using an element \((x,0,y,0)\in L\times M\) in \(C_1(L)\), with \(x\in {\mathbb {R}}_{\ge +}^p\) and \(y\in ({\mathbb {R}}_{\ge +}^p)^*\), we get \(\langle Ax,y\rangle = 0\), which implies that \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}_{\ge +}^p)\).

Now, we will determine the structures of matrices B and C. By using elements in \(C_2(L)\), from the second equation in (15), we get

$$\begin{aligned} \langle Cx,v\rangle = \langle B0,y\rangle + \langle Cx,v\rangle = 0. \end{aligned}$$

Suppose that \(Ca^i\ne 0\) for some \(i<p\), where \(a^i:=e^1+\cdots +e^i\in {\mathbb {R}}^p\); let \(v:=\frac{Ca^i}{\Vert Ca^i\Vert }\) and \(y:=e^j\) with \(j>i\), so that \(\langle y, e\rangle = 1 = \Vert v\Vert \). Hence, \((a^i,0,e^j,v)\in C_2(L)\). Then, \(0=\langle Cx,v\rangle =\langle Ca^i,v\rangle =\Vert Ca^i\Vert \), which is a contradiction. Hence, \(Ca^i = 0\) for every \(i<p\). Then, for certain \(c_1,\ldots ,c_q\in {\mathbb {R}}\) we have

$$\begin{aligned} C=\left[ \begin{array}{c|c}{} \mathbf{0} &{}\begin{array}{ccc}c_1\\ \vdots \\ c_q\end{array}\end{array}\right] _{q\times p}. \end{aligned}$$

If \(C=0\), the second equation in (15) demonstrates that \(\langle Bu,y \rangle =0\) for all \((x,u,y,v)\in C_3(L)\). It is easy to verify that \((e,-v,e^i,v)\in C_3(L)\), where v is an arbitrary unit vector in \({\mathbb {R}}^q\). Hence, \(\langle B(-v),e^i\rangle = 0\), for all \(1 \le i \le p\), thus \(Bv=0\). In consequence, \(B=0\).

If \(C \ne 0\), first we need to find the structure of matrix B. We have \(\langle Bu,y\rangle =0\) for any \((x,u,y,0)\in C_4(L)\).

Let \(u^i\) denote the i-th canonical unit vector in \({\mathbb {R}}^q\) and, for any \(n>m\), let \(y^{m,n}:=e^m-e^n \in {\mathbb {R}}^p\). Since \(\left( e,u^i,y^{m,n},0\right) \in C_4(L)\),

$$\begin{aligned} \langle Bu^i,y^{m,n}\rangle =0. \end{aligned}$$

Therefore,

$$\begin{aligned} B=\begin{bmatrix} b_1 &{} b_2 &{} \cdots &{}b_q \\ \vdots &{} \vdots &{} &{} \vdots \\ b_1 &{} b_2 &{} \cdots &{}b_q \end{bmatrix}_{p\times q}\,. \end{aligned}$$

For \(i=1,\ldots ,q\) and \(j=1,\ldots ,p\), we have \((e,u^i,e^j,-u^i) \in C_3(L)\) and subsequently,

$$\begin{aligned} \langle Bu^i,e^j\rangle +\langle Ce,-u^i\rangle =0. \end{aligned}$$

It readily implies \(b_i=c_i\). Hence,

$$\begin{aligned} B=\begin{bmatrix} c_1 &{} c_2 &{} \cdots &{}c_q \\ \vdots &{} \vdots &{} &{} \vdots \\ c_1 &{} c_2 &{} \cdots &{}c_q \end{bmatrix}_{p\times q}\,. \end{aligned}$$

As \((e,u,\frac{1}{p}e, -u) \in C_3(L)\) for all u with \(\Vert u\Vert =1\), by using (15), we have

$$\begin{aligned} \left\langle Ae,\frac{1}{p}e\right\rangle +\langle Du,-u\rangle =0. \end{aligned}$$
(16)

Let \(\displaystyle a:=\frac{\langle Ae,e\rangle }{p}\). Then, (16) implies

$$\begin{aligned} \left\langle \left( \frac{D+D^T}{2}-aI\right) u,u\right\rangle =0, \end{aligned}$$

and hence

$$\begin{aligned} D+D^T=2aI. \end{aligned}$$
(17)

Obviously, \((e,-u^1,e^1,u^1) \in C(L)\) and using the first equation in (15) gives

$$\begin{aligned} \langle Ae,e^1\rangle -\langle Du^1,u^1\rangle =0, \end{aligned}$$

which implies that \(d_{11}=\sum _{j}a_{1j}\). Thus, (17) implies that \(d_{11}=a\) and hence, \(\sum _{j=1}^pa_{1j}=a\).

By changing \(e^1\) to \(e^2\) (which is permissible, since \((e,-u^1,e^2,u^1)\in C(L)\) as well), we have \(\sum _{j=2}^pa_{2j}=d_{22}=a\). By repeating this process, we obtain that \(d_{ii}=\sum _{j=1}^pa_{ij}=a\), for all \(1\le i \le p\).

Therefore, by equation (17), the fact that \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}^p_{\ge +})\) (shown above), and Lemma 3.2, any Lyapunov-like transformation on L has the form (14).

Now, we want to show that any transformation T, which can be represented in the form (14), is Lyapunov-like on L, so let T be given as above. Then, we have

$$\begin{aligned} \langle T(x,u),(y,v)\rangle = \langle Ax,y\rangle +\langle Du,v\rangle +\langle Bu,y\rangle + \langle Cx,v\rangle . \end{aligned}$$
(18)

We wish to show that for any \((x,u,y,v) \in C(L)\), the RHS in the above equation is zero. We will perform a case-by-case analysis.

Case 1.  For any \((x,u,y,v):=(x,0,y,0)\in C_1(L)\), the RHS of (18) is equal to zero: indeed, \((x,y)\in C\left( {\mathbb {R}}^p_{\ge +}\right) \) and we have already shown that \(A\in {{\,\mathrm{LL}\,}}({\mathbb {R}}^p_{\ge +})\), so it suffices to use Lemma 3.2 again.

Case 2.  For any \((x,u,y,v):=(x,0,y,v)\in C_2(L)\), the RHS of (18) is \((c_1v_1+\cdots +c_qv_q)x_p\). Suppose that \(x_p\ne 0\). Then, since \((x,y)\in C\left( {\mathbb {R}}^p_{\ge +}\right) \), from Example 2 we get \(y_1+\dots +y_p=0\). Hence, \((y,v)\in M\) and (4) implies \(v=0\), which contradicts \((x,0,y,v)\in C_2(L)\). Thus, \(x_p=0\) and therefore the RHS of (18) is zero.

Case 3.  Take an arbitrary \((x,u,y,v) \in C_3(L)\). Proposition 3.2 indicates that for some \(\lambda >0\) one has \(v=-\lambda u\), thus

$$\begin{aligned} \begin{aligned} \langle Ax,y\rangle +\langle Du,v\rangle&= \langle Ax,y\rangle +\left\langle \frac{D+D^T}{2} u,v\right\rangle =\langle Ax,y\rangle + a\langle u,v\rangle \\&=\langle z,y\rangle +a\langle u,v\rangle \\&=\sum _{i=1}^{p-1}\left[ (z_i-z_{i+1})\sum _{j=1}^iy_j\right] +z_p\sum _{i=1}^{p}y_i+a\langle u,v\rangle , \end{aligned} \end{aligned}$$
(19)

where \(z:=Ax\), that is, \(z_i=\sum _{j=1}^ia_jx_i+\sum _{k=i+1}^pa_kx_k\), for any \(1\le i\le p-1\), and \(z_p=\sum _{k=1}^pa_kx_p\), with the convention \(a_1:=a-\sum _{j=2}^pa_j\) (so that \(\sum _{j=1}^pa_j=a\)). Then, for any \(1\le i\le p-1\), it is easy to check that \(z_i-z_{i+1}=\sum _{j=1}^ia_j(x_i-x_{i+1})\). By inserting these equalities and the formula for \(z_p\) into equation (19), and by using Proposition 3.2, we obtain \(\langle Ax,y\rangle +\langle Du,v\rangle =0\). It remains to show that

$$\begin{aligned} \langle Bu,y\rangle + \langle Cx,v\rangle =0. \end{aligned}$$

By using the full power of Proposition 3.2, including \(v=-\lambda u\) for some \(\lambda >0\), we have

$$\begin{aligned} \langle Bu,y\rangle = \sum _{i=1}^q(c_iu_i)\cdot \sum _{i=1}^py_i=\Vert v\Vert \sum _{i=1}^q(c_iu_i) \end{aligned}$$

and

$$\begin{aligned} \langle Cx,v\rangle = x_p\sum _{i=1}^q(c_iv_i)=\Vert u\Vert \sum _{i=1}^q(c_iv_i). \end{aligned}$$

Then,

$$\begin{aligned} \langle Bu,y\rangle + \langle Cx,v\rangle&= \Vert v\Vert \sum _{i=1}^q(c_iu_i)+ \Vert u\Vert \sum _{i=1}^q(c_iv_i) \\&=\lambda \Vert u\Vert \sum _{i=1}^q(c_iu_i)-\lambda \Vert u\Vert \sum _{i=1}^q(c_iu_i)=0. \end{aligned}$$

Case 4.  For any \((x,u,y,v):=(x,u,y,0)\in C_4(L)\), the RHS of (18) is \((c_1u_1+\cdots +c_qu_q)(y_1+\cdots +y_p)\). Suppose that \(y_1+\dots +y_p\ne 0\). Then, since \((x,y)\in C({\mathbb {R}}^p_{\ge +})\), from Example 2 we get \(x_p=0\). Hence, \((x,u)\in L\) and (3) imply \(u=0\), which contradicts \((x,u,y,0)\in C_4(L)\). Thus, \(y_1+\dots +y_p=0\) and therefore the RHS of (18) is zero.

In conclusion, the RHS of (18) is zero for any \((x,u,y,v)\in C_1(L)\cup C_2(L)\cup C_3(L)\cup C_4(L)=C(L)\). Therefore, \(T\in {{\,\mathrm{LL}\,}}(L)\). By the definition of the Lyapunov rank, its value for the cone L equals the number of independent parameters in (14), which is  \(p+\frac{q(q+1)}{2}\). \(\square \)
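As an illustration of Theorem 3.1 (a sketch of ours, which reuses the constant-x pairs from Proposition 3.2), one can generate a random transformation of the form (14) and verify that it annihilates sampled complementarity pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 4, 3

# Random parameters of the form (14): a, a_2..a_p, c_1..c_q and a skew block.
a = rng.normal()
a_coef = rng.normal(size=p)          # a_coef[j] plays the role of a_{j+1}; a_coef[0] unused
c = rng.normal(size=q)
S = rng.normal(size=(q, q)); S = S - S.T   # skew-symmetric part of D

A = np.zeros((p, p))
for i in range(p):
    A[i, i] = a - a_coef[i + 1:].sum()
    A[i, i + 1:] = a_coef[i + 1:]
B = np.tile(c, (p, 1))               # every row equals (c_1, ..., c_q)
C = np.zeros((q, p)); C[:, -1] = c   # only the last column is nonzero
D = a * np.eye(q) + S
T = np.block([[A, B], [C, D]])

def c3_pair():
    """A pair in C_3(L): x constant equal to ||u||, v = -lambda*u."""
    u = rng.normal(size=q); lam = rng.random() + 0.1
    x = np.full(p, np.linalg.norm(u))
    partial = np.concatenate([rng.random(p - 1), [lam * np.linalg.norm(u)]])
    y = np.diff(np.concatenate([[0.0], partial]))
    return np.concatenate([x, u]), np.concatenate([y, -lam * u])

errs = [abs((T @ z) @ w) for z, w in (c3_pair() for _ in range(200))]
print(max(errs) < 1e-10)   # True: <T(x,u),(y,v)> = 0 on the sampled pairs
```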

After calculating the Lyapunov rank of MESOC, we prove our second main result, namely that this cone is reducible. Recall that a cone K in \({\mathbb {R}}^m\) is reducible if it can be expressed as a sum \(K=K_1+K_2\), where \(K_1, K_2 \ne \{0\}\) are cones with \(\mathrm{span}(K_1) \cap \mathrm{span}(K_2)= \{0\}\). Otherwise, it is called irreducible.

Theorem 3.2

For the monotone extended second-order cone L defined in (3), one has \(L=L_1+L_2\), where

$$\begin{aligned} L_1:=\mathrm{cone}\,\Big \{ (\underbrace{1,\ldots ,1}_p, m_1, \ldots , m_q): m_1^2+ \cdots +m_q^2 \le 1 \Big \} \end{aligned}$$

and

$$\begin{aligned} L_2&:=\mathrm{cone}\,\Big \{(\underbrace{1,0,\ldots ,0}_p,\underbrace{0, \ldots ,0}_q), (\underbrace{1,1,\ldots ,0}_p,\underbrace{0, \ldots ,0}_q), \\&\quad \ldots , (\underbrace{1,1,\ldots ,1,0}_p,\underbrace{0, \ldots ,0}_q) \Big \}. \end{aligned}$$

In particular, L is a reducible cone.

Proof

First, we show the inclusion \(L\subseteq L_1+L_2\).

An arbitrary element \((x_1, \ldots , x_p, u_1, \ldots , u_q) \in L\), by the definition of L, can be represented as \((\sum _{i=1}^p a_i, \ldots , a_1+a_2, a_1, u_1, \ldots , u_q)\), where \(a_i \ge 0\) for \(i=2,\ldots ,p\) and \(a_1 \ge \Vert (u_1, \ldots ,u_q)\Vert \). Hence,

$$\begin{aligned}&(x_1, \ldots , x_p, u_1, \ldots , u_q)\\&\quad = \left( \sum _{i=1}^p a_i, \sum _{i=1}^{p-1} a_i, \ldots , a_1, u_1, \ldots , u_q\right) \\&\quad = (a_1, \ldots , a_1,u_1, \ldots , u_q) + (\underbrace{a_2, \ldots , a_2,0}_p,\underbrace{0, \ldots , 0}_q) + \cdots \\&\qquad + (\underbrace{a_p,0, \ldots ,0}_p, \underbrace{0, \ldots , 0}_q)\\&\quad = (a_1, \ldots , a_1,u_1, \ldots , u_q)+ a_2(\underbrace{1, \ldots , 1,0}_p,\underbrace{0, \ldots , 0}_q)+ \cdots \\&\qquad + a_p(\underbrace{1,0, \ldots ,0}_p, \underbrace{0, \ldots , 0}_q). \end{aligned}$$

Obviously, \(a_2(\underbrace{1, \ldots , 1,0}_p,\underbrace{0, \ldots , 0}_q)+ \cdots + a_p(\underbrace{1,0, \ldots ,0}_p, \underbrace{0, \ldots , 0}_q) \in L_2\).

Now, we show that \((a_1, \ldots , a_1,u_1, \ldots , u_q) \in L_1\). It is trivial when \(a_1=0\), so we assume that \(a_1>0\). Thus, we have

$$\begin{aligned} \displaystyle (a_1, \ldots , a_1,u_1, \ldots , u_q)=a_1\left( 1, \ldots , 1,\frac{u_1}{a_1}, \ldots , \frac{u_q}{a_1}\right) \!. \end{aligned}$$

As \(a_1 \ge \Vert (u_1, \ldots ,u_q)\Vert \), we get

$$\begin{aligned} a_1\ge \sqrt{u_1^2+ \cdots + u_q^2}\, \Longleftrightarrow \, 1 \ge \sqrt{\left( \frac{u_1}{a_1}\right) ^2 + \cdots + \left( \frac{u_q}{a_1}\right) ^2 }, \end{aligned}$$

which, by the definition of \(L_1\), gives that \((a_1, \ldots , a_1,u_1, \ldots , u_q) \in L_1\).

Hence, we showed that an arbitrary element \((x_1, \ldots , x_p, u_1, \ldots , u_q) \in L\) can be represented as the sum of two elements, which are

$$\begin{aligned} (a_1, \ldots , a_1,u_1, \ldots , u_q) \in L_1~ \end{aligned}$$

and

$$\begin{aligned} a_2(\underbrace{1, \ldots , 1,0}_p,\underbrace{0, \ldots , 0}_q)+ \cdots + a_p(\underbrace{1,0, \ldots ,0}_p, \underbrace{0, \ldots , 0}_q) \in L_2. \end{aligned}$$

Now, for the inclusion \(L_1+L_2 \subseteq L\): observe that \(L_1 \subseteq L\) and \(L_2 \subseteq L\). From the convexity of the cone L, it follows that \(L_1+L_2 \subseteq L+L=L\).

This concludes the proof of the equality \(L=L_1+L_2\). Obviously, the cones \(L_1, L_2 \ne \{0\}\) and \(\mathrm{span}(L_1) \cap \mathrm{span}(L_2)= \{0\}\), so L is reducible. \(\square \)
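The decomposition in the proof is constructive, so it is easy to check numerically; the sketch below (ours) splits a randomly generated element of L into its \(L_1\) and \(L_2\) parts and confirms that they add up to the original element.

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 4, 3

u = rng.normal(size=q)
gaps = rng.random(p - 1)                       # a_2, ..., a_p >= 0
a1 = np.linalg.norm(u) + rng.random()          # a_1 >= ||u||
x = a1 + np.concatenate([np.cumsum(gaps)[::-1], [0.0]])  # x_p = a_1, x_{p-1} = a_1 + a_2, ...

part1 = np.concatenate([np.full(p, a1), u])    # a_1*(1,...,1, u/a_1) in L_1
part2 = np.concatenate([x - a1, np.zeros(q)])  # sum of a_i*(1,...,1,0,...,0) terms in L_2

assert np.allclose(np.concatenate([x, u]), part1 + part2)
print(np.all(np.diff(x) <= 1e-12) and x[-1] >= np.linalg.norm(u))  # True: (x,u) in L
```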

For the sake of completeness, we quote the following three results, which will help us prove Theorem 3.4, where a characterization of the L-isotone projection sets \(K\subseteq {\mathbb {R}}^p\times {\mathbb {R}}^q\) is given.

Theorem 3.3

(see [8]) The closed convex set \(C\subset {\mathbb {R}}^m\) with nonempty interior is a K-isotone projection set if and only if it is of the form

$$\begin{aligned} C=\bigcap _{i\in {\mathbb {N}}} {\mathcal {H}}_-(u^i,\alpha _i), \end{aligned}$$

where each affine hyperplane \({\mathcal {H}}(u^i,\alpha _i)\) is tangent to C and is a K-isotone projection set.

The following two lemmas are from [13].

Lemma 3.3

Let \(K\subset {\mathbb {R}}^m\) be a closed convex cone and \({\mathcal {H}}\subset {\mathbb {R}}^m\) be a hyperplane with a unit normal vector \(a\in {\mathbb {R}}^m\). Then, \({\mathcal {H}}\) is a K-isotone projection set if and only if

$$\begin{aligned} \langle x,y\rangle \ge \langle a,x\rangle \langle a,y\rangle , \end{aligned}$$

for any \(x\in K\) and \(y\in K^*\).

Lemma 3.4

Let \(z\in {\mathbb {R}}^m\), \(K\subset {\mathbb {R}}^m\) be a closed convex cone and \(C\subset {\mathbb {R}}^m\) be a nonempty closed convex set. Then, C is a K-isotone projection set if and only if \(C+z\) is a K-isotone projection set.

Finally, by using the above three results, we derive an isotonicity property of MESOC, which we will use to solve complementarity problems on the MESOC.

Theorem 3.4

Let L be the MESOC corresponding to the dimensions p and q, with \(q>1\). The closed convex set with nonempty interior \(K\subseteq {\mathbb {R}}^p\times {\mathbb {R}}^q\) is an L-isotone projection set if and only if \(K={\mathbb {R}}^p\times C\), for some closed convex set with nonempty interior \(C\subseteq {\mathbb {R}}^q\).

Proof

First, suppose that \(K={\mathbb {R}}^p\times C\), where \(C\subseteq {\mathbb {R}}^q\) is a nonempty closed convex set with nonempty interior. Let \((x,u),(y,v)\in {\mathbb {R}}^p\times {\mathbb {R}}^q\) be such that \((x,u)\le _{L}(y,v)\), thus \((y-x,v-u)\in L\), i.e.,

$$\begin{aligned} y_1-x_1\ge y_2-x_2\ge \cdots \ge y_p-x_p\ge \Vert v-u\Vert . \end{aligned}$$
(20)

Since C is a closed and convex set in \({\mathbb {R}}^q\), by the nonexpansivity (2) of \(P_C\), we have

$$\begin{aligned} \Vert v-u\Vert \ge \Vert P_Cv-P_Cu\Vert , \end{aligned}$$

which together with (20) yields

$$\begin{aligned} y_1-x_1\ge y_2-x_2\ge \cdots \ge y_p-x_p\ge \Vert P_Cv-P_Cu\Vert . \end{aligned}$$

Thus,

$$\begin{aligned} (y,P_Cv)-(x,P_Cu)\in L \end{aligned}$$

and therefore we have

$$\begin{aligned} P_K(x,u)=(x,P_Cu)\le _{L}(y,P_Cv)=P_K(y,v). \end{aligned}$$

In conclusion, K is an L-isotone projection set.

Conversely, suppose that the closed convex set \(K\subseteq {\mathbb {R}}^p\times {\mathbb {R}}^q\) with nonempty interior is an L-isotone projection set. If \(p=1\), then it has been proved in [8] that \(K={\mathbb {R}}^p\times C\), where C is a nonempty, closed and convex subset of \({\mathbb {R}}^q\) with nonempty interior. Therefore, assume that \(p>1\). By Theorem 3.3 and Lemma 3.4, we need to show that for any tangent hyperplane \({\mathcal {H}}\) of K with unit normal \(\gamma =(a,u)\), we have \(a=0\). From Lemma 3.3, we have

$$\begin{aligned} \langle \zeta ,\xi \rangle \ge \langle \gamma ,\zeta \rangle \langle \gamma ,\xi \rangle , \end{aligned}$$
(21)

for any \(\zeta :=(x,v)\in L\) and \(\xi :=(y,w)\in L^*\). Let \(x\in {\mathbb {R}}^p_+\) and \(v\in {\mathbb {R}}^q\). Then, by equation (3) and Proposition 3.1, it is easy to check that \(\zeta :=(\Vert v\Vert e,v)\in L\), \(\xi :=(\Vert v\Vert x,-\langle e,x\rangle v)\in L^*\) and \(\langle \zeta ,\xi \rangle =0\). Hence, condition (21) implies

$$\begin{aligned} 0\ge (\langle a,e\rangle \Vert v\Vert +\langle u,v\rangle )(\langle a,x\rangle \Vert v\Vert -\langle e,x\rangle \langle u,v\rangle ). \end{aligned}$$
(22)

If in (22) \(x=e\) and we choose \(v\ne 0\) such that \(\langle u,v\rangle =0\), then \(0\ge \langle a,e\rangle ^2\Vert v\Vert ^2\), and hence \(\langle a,e\rangle =0\). Thus, (22) becomes

$$\begin{aligned} 0\ge \langle u,v\rangle (\langle a,x\rangle \Vert v\Vert -\langle e,x\rangle \langle u,v\rangle ). \end{aligned}$$
(23)

First, suppose that \(u\ne 0\). Let \(v^n\in {\mathbb {R}}^q\) be a sequence of points such that \(\Vert v^n\Vert =1\), \(\langle u,v^n\rangle >0\) and \(\lim _{n\rightarrow +\infty }\langle u,v^n\rangle =0\). Let n be an arbitrary positive integer. If in (23) we choose \(\lambda >0\) sufficiently large such that \(x:=a+\lambda e\ge 0\) and \(v=v^n\), we get \(0\ge \langle u,v^n\rangle (\Vert a\Vert ^2-\lambda p\langle u,v^n\rangle ),\) or equivalently \(\Vert a\Vert ^2\le \lambda p\langle u,v^n\rangle \). By letting \(n\rightarrow +\infty \) in the last inequality, we obtain \(\Vert a\Vert ^2\le 0\), or equivalently \(a=0\).

Next, suppose that \(u=0\). Since (au) is a unit vector, it follows that \(a\ne 0\). Let \((x,y)\in C({\mathbb {R}}^p_{\ge +})\) and \(w\in {\mathbb {R}}^q\) such that \(\langle y,e\rangle \ge \Vert w\Vert \). Then, by (3) and Proposition 3.1, it is easy to check that \(\zeta :=(x,0)\in L\), \(\xi :=(y,w)\in L^*\) and \(\langle \zeta ,\xi \rangle =0\). Hence, inequality (21) implies

$$\begin{aligned} 0\ge \langle a,x\rangle \langle a,y\rangle , \end{aligned}$$

for any \((x,y)\in C({\mathbb {R}}^p_{\ge +})\) with \(\langle x,y\rangle =0\). From Example 2, we can choose \(x=e^1+\cdots +e^r\) and \(y=e^s-e^{s+1}\), where \(r, s\in \{1,\ldots ,p\}\), and we set \(e^{p+1}:=0\). Hence, \((a_1+\cdots +a_r)(a_s-a_{s+1})\le 0\), where we set \(a_{p+1}:=0\). Take now \(r=1\) and for \(s=1, \ldots ,p\), add the inequalities \(a_1(a_1-a_2)\le 0, \ldots , a_1(a_p-a_{p+1})\le 0\), to obtain (by the telescoping effect) \(a_1\cdot a_1 \le 0\), which gives \(a_1=0\). Similarly, for \(r=2\) and \(s=2, \ldots ,p\), add the inequalities \((0+a_2)(a_2-a_3) \le 0, \ldots , a_2(a_p-a_{p+1}) \le 0\), to get \(a_2=0\). Acting similarly (with \(r=3\), and so on), we get \(a_3=0\), up to \(a_p=0\). Thus, \(a=0\). But this contradicts \(a\ne 0\), so the case \(u=0\) cannot hold. \(\square \)

It is well known that for the nonlinear complementarity problem NCP(F,K), \(x^*\) is a solution if and only if \(x^*\) is a fixed point of the mapping \(K\ni x\mapsto P_K(x-F(x))\). For an arbitrary sequence \(\{x^n\}\) generated by the fixed point iteration process

$$\begin{aligned} x^{n+1}=P_{K}(x^n-F(x^n)), \end{aligned}$$
(24)

if the mapping F is continuous and the sequence \(\{x^n\}\) converges to \(x^*\in K\), then \(x^*\) is a fixed point of the mapping \(K\ni x\mapsto P_K(x-F(x))\), and hence \(x^*\) is a solution of the nonlinear complementarity problem NCP(F,K).

4 Mixed Complementarity Problem

Facchinei and Pang defined the mixed complementarity problem (\(\mathrm{MiCP}\)) on the nonnegative orthant (see Sect. 9.4.2 in [2]). It is not only equivalent to a linearly constrained variational inequality problem (this relationship is also known as the Karush–Kuhn–Tucker (KKT) system of the variational inequality), but it can also be viewed as an NCP for a particular nonpointed cone. Németh and Zhang [13] considered the MiCP defined on an arbitrary closed and convex cone. In Theorem 3.4, we have already shown that a cylinder is an isotone projection set with respect to MESOC. It is therefore natural to use isotonicity with respect to MESOC as a tool for solving the MiCP.

For the sake of completeness, we quote below Lemma 4 in [14].

Lemma 4.1

Let \(K={\mathbb {R}}^p\times C\), where C is an arbitrary nonempty closed and convex cone in \({\mathbb {R}}^q\). Consider mappings \(G: {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^p\), \(H: {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^q\) and \(F=(G;H): {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^p\times {\mathbb {R}}^q\). Then, the nonlinear complementarity problem NCP(F,K) is equivalent to the mixed complementarity problem MiCP(G,H,C,p,q) defined as

$$\begin{aligned} G(x,u)=0,\,\,\,C\ni u\perp H(x,u)\in C^*. \end{aligned}$$

Proof

It is standard and follows from the definition of the nonlinear complementarity problem NCP(F,K), by noting that \(K^* = \{0\}\times C^*\). \(\square \)

By using the notations of Lemma 4.1, the fixed point iteration (24) can be rewritten as:

$$\begin{aligned} x^{n+1}= & {} x^n-G(x^n,u^n), \nonumber \\ u^{n+1}= & {} P_C(u^n-H(x^n,u^n)). \end{aligned}$$
(25)
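A minimal sketch of iteration (25) follows; everything except the iteration scheme itself is our own illustrative choice: C is taken to be the nonnegative orthant (so that \(P_C\) is a simple clamp) and G, H are small affine maps that do not come from the text.

```python
import numpy as np

def picard_micp(G, H, proj_C, x0, u0, iters=200):
    """Fixed point iteration (25): x <- x - G(x,u),  u <- P_C(u - H(x,u))."""
    x, u = np.array(x0, float), np.array(u0, float)
    for _ in range(iters):
        x, u = x - G(x, u), proj_C(u - H(x, u))   # both updates use the old (x, u)
    return x, u

# Illustrative data (our choice): p = q = 2, C = R^2_+, affine G and H.
G = lambda x, u: 0.5 * x - 0.1 * u - np.array([1.0, 0.5])
H = lambda x, u: u - 0.1 * x + np.array([0.2, -0.3])
proj_C = lambda w: np.maximum(w, 0.0)

x_star, u_star = picard_micp(G, H, proj_C, np.zeros(2), np.zeros(2))
print(np.round(G(x_star, u_star), 8), np.round(u_star * H(x_star, u_star), 8))
# at a solution G vanishes and u is complementary to H
```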

For the sake of self-containment, we quote below Proposition 2 in [14].

Proposition 4.1

Let \(L\subseteq {\mathbb {R}}^m\) be a pointed closed convex cone, \(K \subseteq {\mathbb {R}}^m\) be a closed convex cone and \(F: {\mathbb {R}}^m \rightarrow {\mathbb {R}}^m\) be a continuous mapping. Consider the sequence \(\{x^n\}_{n\in {\mathbb {N}}}\) defined by (24). Suppose that the mappings \(P_K\) and \(I - F\) are L-isotone and \(x^0 = 0 \le _L x^1\). Let

$$\begin{aligned} \varOmega :=K\cap L\cap F^{-1}(L)=\{x\in K\cap L: F(x)\in L\} \end{aligned}$$

and

$$\begin{aligned} \varGamma :=\{x\in K\cap L:P_K(x-F(x))\le _L x\}. \end{aligned}$$

Then, \(\varnothing \ne \varOmega \subset \varGamma \) and the sequence \(\{x^n\}\) converges to \(x^*\), which is a solution of NCP(F,K). Moreover, \(x^*\) is a lower L-bound of \(\varOmega \) and the L-least element of \(\varGamma \).

The following theorem provides sufficient conditions for the solvability of the mixed complementarity problem MiCP(G,H,C,p,q).

Theorem 4.1

Let L be the monotone extended second-order cone corresponding to p and q. For an arbitrary cone \(K={\mathbb {R}}^p\times C\), where C is a closed convex cone, denote its dual cone by \(K^*\). Let \(F=(G;H): {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^p\times {\mathbb {R}}^q\) be such that \(I-F\) is L-isotone, where I denotes the identity mapping, and let \(G: {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^p\) and \(H: {\mathbb {R}}^p\times {\mathbb {R}}^q\rightarrow {\mathbb {R}}^q\) be two continuous mappings. Consider the sequence \(\{(x^n,u^n)\}_{n\in {\mathbb {N}}}\) defined by (25), where \(x^0=0\in {\mathbb {R}}^p\) and \(u^0=0\in {\mathbb {R}}^q\). Let \(x,y\in {\mathbb {R}}^p\) and \(u,v\in {\mathbb {R}}^q\). Suppose that the system of inequalities

$$\begin{aligned} y_i-x_i\ge y_{i+1}-x_{i+1}\ge \Vert v-u\Vert ;\quad 1\le i\le p-1 \end{aligned}$$
(26)

implies the system of inequalities

$$\begin{aligned} \begin{array}{rcl} y_i-x_i-(G(y,v)_i-G(x,u)_i)&{}\ge &{} y_{i+1}-x_{i+1}-(G(y,v)_{i+1}-G(x,u)_{i+1})\\ &{}\ge &{}\Vert v-u-(H(y,v)-H(x,u))\Vert ; \end{array}\nonumber \\ \end{aligned}$$
(27)

\(1\le i\le p-1\), and that \(x^1_i\ge x^1_{i+1}\ge \Vert u^1\Vert \); \(1\le i\le p-1\) (in particular, this holds when \(-G(0,0)_i\ge -G(0,0)_{i+1}\ge \Vert H(0,0)\Vert \); \(1\le i\le p-1\)). Let

$$\begin{aligned} \varOmega :=\{(x,u)\in {\mathbb {R}}^p\times C: x_1\ge \cdots \ge x_p\ge \Vert u\Vert ,G(x,u)_1\ge \cdots \ge G(x,u)_p \ge \Vert H(x,u)\Vert \} \end{aligned}$$

and

$$\begin{aligned} \varGamma:= & {} \{(x,u)\in {\mathbb {R}}^p\times C:x_1\ge \cdots \ge x_p\ge \Vert u\Vert ,G(x,u)_1 \ge \cdots \ge G(x,u)_p\\\ge & {} \Vert u-P_C(u-H(x,u))\Vert \}. \end{aligned}$$

Then, \(\varnothing \ne \varOmega \subseteq \varGamma \), the sequence \(\{(x^n,u^n)\}\) is convergent, and its limit \((x^*,u^*)\) is a solution of MiCP(G,H,C,p,q). Moreover, \((x^*,u^*)\) is a lower L-bound of \(\varOmega \) and the L-least element of \(\varGamma \).

Proof

Following the definition of the monotone extended second-order cone, we have

$$\begin{aligned} \varOmega =K\cap L\cap F^{-1}(L)=\{z\in K\cap L: F(z)\in L\} \end{aligned}$$

and

$$\begin{aligned} \varGamma =\{z\in K\cap L:P_K(z-F(z))\le _Lz\}, \end{aligned}$$

where \(z=(x,u)\). Theorem 3.4 implies that \(P_K\) is L-isotone. Since (26) implies (27), \(I-F\) is L-isotone. Meanwhile, \(x_1^1\ge x_2^1\ge \cdots \ge x_p^1\ge \Vert u^1\Vert \) implies that \((x^0,u^0)=(0,0)\le _L(x^1,u^1)\). Then, by using Proposition 4.1, we have that \(\varnothing \ne \varOmega \subset \varGamma \), the sequence \(\{(x^n,u^n)\}\) is convergent, and its limit \((x^*,u^*)\) is a solution of MiCP(G,H,C,p,q). Moreover, \((x^*,u^*)\) is a lower L-bound of \(\varOmega \) and the L-least element of \(\varGamma \). \(\square \)

5 Numerical Example

Let L be the monotone extended second-order cone with \(p=q=2\), and let \(K={\mathbb {R}}^2\times C\), where \(C=\{(u_1,u_2)\in {\mathbb {R}}^2: u_1\ge u_2\ge 0\}\). Let \(f_1(x,u)= \frac{1}{10}x_1-\frac{1}{20}x_2 +\frac{1}{20}\Vert u\Vert +1\) and \(f_2(x,u)=\frac{1}{5}x_1-\frac{3}{20}x_2+\frac{1}{20}\Vert u\Vert -\frac{3}{5}\). We first check that \(f_1(x,u)\) and \(f_2(x,u)\) are L-monotone. Define \(\omega ^1:=(2,1, \frac{1}{3},\frac{1}{6})\) and \(\omega ^2:=(2,1,\frac{1}{6},\frac{1}{3})\); it is easy to check that  \(\omega ^1, \omega ^2 \in L\). Then, for two arbitrary vectors \((x,u), (y,v) \in {\mathbb {R}}^2\times {\mathbb {R}}^2\) such that \((x,u)\le _L (y,v)\), by using the definition of the MESOC, we have that \(y_1-x_1\ge y_2-x_2\ge \Vert v-u\Vert \ge \Vert u\Vert -\Vert v\Vert \). Hence,

$$\begin{aligned} f_1(y,v)-f_1(x,u)= & {} \frac{1}{10}(y_1-x_1)-\frac{1}{20}(y_2-x_2)-\frac{1}{20}(\Vert u\Vert -\Vert v\Vert )\ge 0,\\ f_2(y,v)-f_2(x,u)= & {} \frac{1}{5}(y_1-x_1)-\frac{3}{20}(y_2-x_2)-\frac{1}{20}(\Vert u\Vert -\Vert v\Vert )\ge 0. \end{aligned}$$

Since \(\omega ^1, \omega ^2, (y,v)-(x,u) \in L\), by using the convexity of L, if we have \((x,u)\le _L(y,v)\), then

$$\begin{aligned} (f_1(y,v)-f_1(x,u))\omega ^1 + (f_2(y,v)-f_2(x,u))\omega ^2\in L, \end{aligned}$$

which is equivalent to the following inequality:

$$\begin{aligned} f_1(x,u)\omega ^1+f_2(x,u)\omega ^2\le _L f_1(y,v)\omega ^1+f_2(y,v)\omega ^2. \end{aligned}$$

Thus, the mapping \(f_1\omega ^1+f_2\omega ^2\) is L-isotone. Now, we define functions G and H as follows:

$$\begin{aligned} G(x,u):= & {} \Bigg (\frac{2}{5}x_1+\frac{2}{5}x_2-\frac{1}{5}\Vert u\Vert -\frac{4}{5},-\frac{3}{10}x_1+\frac{6}{5}x_2-\frac{1}{10}\Vert u\Vert -\frac{2}{5}\Bigg ),\\ H(x,u):= & {} \Bigg (u_1-\frac{1}{15}x_1+\frac{1}{24}x_2-\frac{1}{40}\Vert u\Vert -\frac{7}{30},u_2-\frac{1}{12}x_1+\frac{7}{120}x_2-\frac{1}{40}\Vert u\Vert +\frac{1}{30}\Bigg ). \end{aligned}$$

Hence, we get

$$\begin{aligned} (x-G,u-H)=f_1\omega ^1+f_2\omega ^2=\Bigg (2f_1+2f_2,f_1+f_2,\frac{1}{3}f_1+\frac{1}{6}f_2,\frac{1}{6}f_1+\frac{1}{3}f_2\Bigg ) \end{aligned}$$

is L-isotone. Next, we check that all the conditions of Theorem 4.1 are satisfied, starting with the initial condition. We have,

$$\begin{aligned} -G(0,0,0,0)=\left( \frac{4}{5},\frac{2}{5}\right) \quad {\mathrm{and}}\quad \Vert H(0,0,0,0)\Vert =\sqrt{\left( -\frac{7}{30}\right) ^2+\left( \frac{1}{30}\right) ^2}=\frac{\sqrt{2}}{6}. \end{aligned}$$

Evidently, \(-G(0,0,0,0)_1\ge -G(0,0,0,0)_2\ge \Vert H(0,0,0,0)\Vert .\) Now, consider a vector \(({\hat{x}},{\hat{u}}):=(30,12,4,3)\in K\), which yields

$$\begin{aligned} G({\hat{x}},{\hat{u}})=\left( 15,\frac{9}{2}\right) \quad \mathrm{and}\quad H({\hat{x}},{\hat{u}})=\left( \frac{257}{120},\frac{133}{120}\right) \!. \end{aligned}$$

Moreover, we have that \(G({\hat{x}},{\hat{u}})_1\ge G({\hat{x}},{\hat{u}})_2\ge \Vert H({\hat{x}},{\hat{u}})\Vert \), which implies that \(({\hat{x}},{\hat{u}})\in \varOmega \). Hence, \(\varOmega \ne \varnothing \). Next, we solve the mixed complementarity problem MiCP(G,H,C,p,q). If an element \((x,u)\) is a solution of MiCP(G,H,C,p,q), then

$$\begin{aligned} x-G(x,u) = (2f_1+2f_2,f_1+f_2)\text { where } f_i=f_i(x,u),i=1,2, \end{aligned}$$

and \(G(x,u)=0\). Thus, we have \(x_1=2f_1+2f_2\) and \(x_2=f_1+f_2\). Moreover,

$$\begin{aligned} x_1=\frac{1}{3}\Vert u\Vert +\frac{4}{3}\text { and }x_2=\frac{1}{6}\Vert u\Vert +\frac{2}{3}. \end{aligned}$$
(28)

Meanwhile, we have \(u\perp H(x,u)\), which implies

$$\begin{aligned} \langle u,H(x,u)\rangle =u_1\left( u_1-\frac{1}{3}f_1-\frac{1}{6}f_2\right) +u_2\left( u_2-\frac{1}{6}f_1-\frac{1}{3}f_2\right) =0. \end{aligned}$$

Then,

$$\begin{aligned} \Vert u\Vert ^2=u_1^2+u_2^2=\left( \frac{1}{3}u_1+\frac{1}{6}u_2\right) f_1+\left( \frac{1}{6}u_1+\frac{1}{3}u_2\right) f_2. \end{aligned}$$
(29)

We now find all the nonzero solutions on the boundary of C. For the first case, suppose that \(u_1=u_2 > 0\), so that \(\Vert u\Vert =\sqrt{2}u_1=\sqrt{2}u_2\) and, by using (29),

$$\begin{aligned} u_1=u_2=\frac{1}{4}\left( f_1+f_2\right) \!. \end{aligned}$$

By using the definitions of \(f_1\) and \(f_2\) as well as (28), we get

$$\begin{aligned} u_1=u_2=\frac{48+2\sqrt{2}}{287}. \end{aligned}$$

Thus, the solution of MiCP(GHCpq) is

$$\begin{aligned} (x,u)=\left( \frac{384+16\sqrt{2}}{287},\frac{192+8\sqrt{2}}{287},\frac{48+2\sqrt{2}}{287},\frac{48+2\sqrt{2}}{287}\right) \!. \end{aligned}$$

For the second case, we consider \(u_2=0\), which implies that \(\Vert u\Vert =u_1\). Hence, equation (29) is equivalent to

$$\begin{aligned} u_1^2-\left( \frac{1}{3}f_1+\frac{1}{6}f_2\right) u_1=0. \end{aligned}$$

Since \(u_1\ne 0\), we have

$$\begin{aligned} u_1=\frac{1}{3}f_1+\frac{1}{6}f_2. \end{aligned}$$

By using the definitions of \(f_1\) and \(f_2\), and (28) again, we have \(u_1=\frac{212}{691}\), which implies that \(u=\left( \frac{212}{691},0\right) \). Thus,

$$\begin{aligned} (x,u)=\left( \frac{992}{691},\frac{496}{691},\frac{212}{691},0\right) \!. \end{aligned}$$

Consider (0, 0, 0, 0) as a starting point in the fixed point iteration process (25). We have

$$\begin{aligned} \begin{aligned} x^{n+1}&=x^n-G(x^n,u^n)\\&=(2f_1(x^n,u^n)+2f_2(x^n,u^n),f_1(x^n,u^n)+f_2(x^n,u^n)), \\ u^{n+1}&=P_C(u^n-H(x^n,u^n))\\&=P_C\left( \frac{1}{3}f_1(x^n,u^n)+\frac{1}{6}f_2(x^n,u^n),\frac{1}{6}f_1(x^n,u^n)+\frac{1}{3}f_2(x^n,u^n)\right) \!. \end{aligned} \end{aligned}$$
(30)

From the above equations, we get \(x^{n+1}_1=2x^{n+1}_2\). Moreover, since the starting point is (0, 0, 0, 0), it follows by induction that \(x_1^i\ge x_2^i\ge 0\) for any \(i\in {\mathbb {N}}\). Define the set S as follows:

$$\begin{aligned} S:=\left\{ (x,u)\in {\mathbb {R}}^2\times {\mathbb {R}}^2:0\le x_1<\frac{992}{691},0\le x_2<\frac{496}{691},0\le u_1<\frac{212}{691},u_2=0 \right\} \!. \end{aligned}$$

We want to show that \((x^n,u^n)\in S\) for any \(n\in {\mathbb {N}}\). We will prove it by induction. First, we have \((x^0,u^0)\in S\). Suppose next that  \(0\le x_1^n < \frac{992}{691}\), \(0\le x_2^n < \frac{496}{691}\), \(0\le u_1^n < \frac{212}{691}\) and \(u_2^n=0\), so that \(\Vert u^n\Vert =u_1^n\). Since \(x_1^n=2x_2^n\), we have

$$\begin{aligned} \begin{aligned} 0&<x_1^{n+1}=2f_1(x^n,u^n)+2f_2(x^n,u^n)\\&= \frac{3}{5}x_1^n-\frac{2}{5}x_2^n+\frac{1}{5}u_1^n+\frac{4}{5}\\&< \frac{3}{5}\cdot \frac{992}{691}-\frac{2}{5}\cdot \frac{496}{691}+\frac{1}{5}\cdot \frac{212}{691}+\frac{4}{5}= \frac{992}{691}. \end{aligned} \end{aligned}$$

Similarly,

$$\begin{aligned} \begin{aligned} 0&<x_2^{n+1}=f_1(x^n,u^n)+f_2(x^n,u^n)\\&= \frac{3}{10}x_1^n-\frac{1}{5}x_2^n+\frac{1}{10}u_1^n+\frac{2}{5}\\&< \frac{3}{10}\cdot \frac{992}{691}-\frac{1}{5}\cdot \frac{496}{691}+\frac{1}{10}\cdot \frac{212}{691}+\frac{2}{5}=\frac{496}{691}. \end{aligned} \end{aligned}$$

Meanwhile, we also have

$$\begin{aligned} u^n-H(x^n,u^n)=\left( \frac{1}{3}f_1(x^n,u^n)+\frac{1}{6}f_2(x^n,u^n),\frac{1}{6}f_1(x^n,u^n)+\frac{1}{3}f_2(x^n,u^n)\right) \!. \end{aligned}$$

Obviously, \((u^n-H(x^n,u^n))_1>0\); moreover,

$$\begin{aligned} \begin{array}{rcl} (u^n-H(x^n,u^n))_1 &{} < &{} \displaystyle \frac{1}{3}\left( \frac{1}{10}\cdot \frac{992}{691}-\frac{1}{20}\cdot \frac{496}{691}+\frac{1}{20}\cdot \frac{212}{691}+1\right) \\ \\ &{} &{} \displaystyle +\frac{1}{6}\left( \frac{1}{5}\cdot \frac{992}{691}-\frac{3}{20}\cdot \frac{496}{691}+\frac{1}{20}\cdot \frac{212}{691}-\frac{3}{5}\right) =\frac{212}{691} \end{array} \end{aligned}$$

and \(0<(u^n-H(x^n,u^n))_2\). It is easy to check that the projection of this point onto C, which satisfies \(0\le u^{n+1}_1 <\frac{212}{691}\) and \(u_2^{n+1}=0\), must lie on the ray \(\{(u_1,u_2):u_1\ge 0,u_2=0\}\). This is equivalent to

$$\begin{aligned} u^{n+1}=(u^{n+1}_1,u^{n+1}_2)=P_C\left( u^n-H(x^n,u^n)\right) = \left( \frac{1}{3}f_1(x^n,u^n)+\frac{1}{6}f_2(x^n,u^n),0\right) \!. \end{aligned}$$

Thus, the system of equations (30) is equivalent to

$$\begin{aligned} x_1^{n+1}= & {} \frac{3}{5}x_1^n-\frac{2}{5}x_2^n+\frac{1}{5}u_1^n+\frac{4}{5} , \nonumber \\ x_2^{n+1}= & {} \frac{3}{10}x_1^n-\frac{1}{5}x_2^n+\frac{1}{10}u_1^n+\frac{2}{5}, \nonumber \\ u_1^{n+1}= & {} \frac{1}{15}x_1^n-\frac{1}{24}x_2^n+\frac{1}{40}u_1^n+\frac{7}{30}\!. \end{aligned}$$
(31)

Moreover, we have \(x_1^n=2x_2^n\), so (31) is equivalent to

$$\begin{aligned} x_1^{n+1}= & {} 2x_2^{n+1}, \nonumber \\ x_2^{n+1}= & {} \frac{2}{5}x_2^n+\frac{1}{10}u_1^n+\frac{2}{5}, \nonumber \\ u_1^{n+1}= & {} \frac{11}{120}x_2^n+\frac{1}{40}u_1^n+\frac{7}{30}. \end{aligned}$$
(32)

The last two lines in (32) can be aggregated as follows

$$\begin{aligned} \left[ \begin{array}{c} x_2^{n+1}\\ \quad \\ u_1^{n+1} \end{array} \right] =\left[ \begin{array}{rr} \frac{2}{5} &{} \frac{1}{10}\\ \quad \\ \frac{11}{120} &{} \frac{1}{40} \end{array} \right] \left[ \begin{array}{c} x_2^n\\ \quad \\ u_1^n \end{array}\right] + \left[ \begin{array}{c} \frac{2}{5} \\ \quad \\ \frac{7}{30} \end{array}\right] . \end{aligned}$$

One easily verifies that the above \(2 \times 2\) matrix has two real eigenvalues whose absolute values are less than 1, so it is a convergent matrix. Hence, the above process converges to the unique fixed point \(\left[ x_2^*~ u_1^*\right] '\) of the above equation, regardless of the starting point \(\left[ x_2^0~ u_1^0\right] ' \in {\mathbb {R}}^2.\) Explicitly,

$$\begin{aligned} \left[ \begin{array}{c} x_2^*\\ \quad \\ u_1^* \end{array} \right] = \left[ \begin{array}{rr} \frac{3}{5} &{} -\frac{1}{10}\\ \quad \\ -\frac{11}{120} &{} \frac{39}{40} \end{array} \right] ^{-1} \cdot \left[ \begin{array}{c} \frac{2}{5} \\ \quad \\ \frac{7}{30} \end{array}\right] = \left[ \begin{array}{c} \frac{496}{691} \\ \quad \\ \frac{212}{691} \end{array}\right] . \end{aligned}$$

Bearing in mind that \(x_1^{n+1}= 2x_2^{n+1}\) and \(u_2^n=0\), we have the convergence:

$$\begin{aligned} (x^n,u^n)=(x_1^n,x_2^n,u_1^n, u_2^n) \rightarrow (x_1^*,x_2^*,u_1^*,0)= \left( \frac{992}{691}, \frac{496}{691}, \frac{212}{691}, 0 \right) \!, \end{aligned}$$

which coincides with one of the solutions obtained above on the boundary of C.
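For readers who wish to reproduce the reduced iteration (32) numerically, the short script below (ours) checks that the \(2\times 2\) matrix is convergent and recovers the limit \((x_2^*,u_1^*)=(496/691,212/691)\), both by iterating and by exact rational arithmetic.

```python
import numpy as np
from fractions import Fraction as F

M = np.array([[2/5, 1/10],
              [11/120, 1/40]])
b = np.array([2/5, 7/30])

print(np.max(np.abs(np.linalg.eigvals(M))) < 1)   # True: convergent matrix

w = np.zeros(2)                                   # (x_2, u_1), starting from 0
for _ in range(200):
    w = M @ w + b
print(np.allclose(w, [496/691, 212/691]))         # True

# exact fixed point (I - M)^{-1} b via Cramer's rule with rational arithmetic
A = [[F(3, 5), F(-1, 10)], [F(-11, 120), F(39, 40)]]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
x2 = ( A[1][1]*F(2, 5) - A[0][1]*F(7, 30)) / det
u1 = (-A[1][0]*F(2, 5) + A[0][0]*F(7, 30)) / det
print(x2, u1)                                     # 496/691 212/691
```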

Remark 5.1

We remark that \(f_1\omega ^1+f_2\omega ^2\) is not ESOC-isotone. Indeed, suppose that \(f_1\omega ^1+f_2\omega ^2\) were ESOC-isotone. Then, for any \((x,u)\le _{\mathrm{ESOC}}(y,v)\), since \(\omega ^1,\,\omega ^2\in \mathrm{ESOC}\), we would have

$$\begin{aligned} f_1(x,u)\omega ^1+f_2(x,u)\omega ^2\le _{\mathrm{ESOC}} f_1(y,v)\omega ^1+f_2(y,v)\omega ^2 \end{aligned}$$
(33)

and it is equivalent to

$$\begin{aligned} (f_1(y,v)-f_1(x,u))\omega ^1 + (f_2(y,v)-f_2(x,u))\omega ^2\in \text {ESOC}. \end{aligned}$$

For arbitrary \((x^*,u^*),(y^*,v^*)\in {\mathbb {R}}^p\times {\mathbb {R}}^q\), such that \(y^*_1-x^*_1=\Vert u^*-v^*\Vert =\Vert u^*\Vert -\Vert v^*\Vert >0\), \(y^*_2-x^*_2=2\Vert u^*-v^*\Vert =2(\Vert u^*\Vert -\Vert v^*\Vert )>0\), it is obvious that \((x^*,u^*)\le _{\mathrm{ESOC}}(y^*,v^*)\). Since \(f_1(x,u)= \frac{1}{10}x_1-\frac{1}{20}x_2 +\frac{1}{20}\Vert u\Vert +1\) and \(f_2(x,u)=\frac{1}{5}x_1-\frac{3}{20}x_2+\frac{1}{20}\Vert u\Vert -\frac{3}{5}\),

$$\begin{aligned} \begin{aligned} f_1(y^*,v^*)-f_1(x^*,u^*)&=\frac{1}{10}(y^*_1-x^*_1)-\frac{1}{20}(y^*_2-x^*_2)-\frac{1}{20}(\Vert u^*\Vert -\Vert v^*\Vert )\\&=\frac{1}{10}(\Vert u^*\Vert -\Vert v^*\Vert )-\frac{2}{20}(\Vert u^*\Vert -\Vert v^*\Vert )-\frac{1}{20}(\Vert u^*\Vert -\Vert v^*\Vert )\\&=-\frac{1}{20}(\Vert u^*\Vert -\Vert v^*\Vert )< 0,\\ f_2(y^*,v^*)-f_2(x^*,u^*)&=\frac{1}{5}(y^*_1-x^*_1)-\frac{3}{20}(y^*_2-x^*_2)-\frac{1}{20}(\Vert u^*\Vert -\Vert v^*\Vert )\\&=\frac{1}{5}(\Vert u^*\Vert -\Vert v^*\Vert )-\frac{6}{20}(\Vert u^*\Vert -\Vert v^*\Vert )-\frac{1}{20}(\Vert u^*\Vert -\Vert v^*\Vert )\\&=-\frac{3}{20}(\Vert u^*\Vert -\Vert v^*\Vert )< 0, \end{aligned} \end{aligned}$$

contradicting (33). Indeed, recall that both \(f_1\) and \(f_2\) are MESOC-monotone (which has been proved in the numerical example) but not ESOC-monotone; that is, \(f_1(y,v)-f_1(x,u)\) and \(f_2(y,v)-f_2(x,u)\) need not be nonnegative for all \((x,u)\le _\mathrm{ESOC}(y,v)\). Since both \(f_1(y^*,v^*)-f_1(x^*,u^*)\) and \(f_2(y^*,v^*)-f_2(x^*,u^*)\) are negative, by using the convexity of ESOC and the fact that \(\omega ^1,\,\omega ^2\in \text {MESOC}\subseteq \text {ESOC}\), we have

$$\begin{aligned} -(f_1(y^*,v^*)-f_1(x^*,u^*))\omega ^1 - (f_2(y^*,v^*)-f_2(x^*,u^*))\omega ^2\in \text {ESOC}. \end{aligned}$$

Meanwhile, if \(f_1\omega ^1+f_2\omega ^2\) were ESOC-isotone, then

$$\begin{aligned} (f_1(y^*,v^*)-f_1(x^*,u^*))\omega ^1 + (f_2(y^*,v^*)-f_2(x^*,u^*))\omega ^2\in \text {ESOC}. \end{aligned}$$

Since \(\omega ^1\) and \(\omega ^2\) are linearly independent, this contradicts the pointedness of ESOC. Thus, \(f_1\omega ^1+f_2\omega ^2\) is not ESOC-isotone.

6 Concluding Remarks

In this paper, we study the monotone extended second-order cone (MESOC) as a new generalization of the Lorentz cone \({\mathcal {L}}^n_+\). This cone is different both from \({\mathcal {L}}^n_+\) and the previously introduced extended second-order cone (ESOC) [10,11,12,13,14, 18] in many aspects, but bears similarities too. Both MESOC and ESOC are cones in \({\mathbb {R}}^p\times {\mathbb {R}}^q\). MESOC is sub-dual as is ESOC, but for \(p>1\) it is not self-dual like \({\mathcal {L}}^n_+\). Both ESOC and MESOC become \({\mathcal {L}}^{q+1}_+\) when \(p=1\). Contrary to \({\mathcal {L}}^n_+\), for \(q=1\) both ESOC and MESOC are polyhedral. MESOC and ESOC are symmetric cones for \(p=1\) only, that is, if and only if they are Lorentz cones. Contrary to \({\mathcal {L}}^n_+\) and ESOC, MESOC is reducible. For both ESOC and MESOC the cylinders \({\mathbb {R}}^p\times C\), where C is an arbitrary closed convex set with nonempty interior in \({\mathbb {R}}^q\), are isotone projection sets. In fact, these cylinders are isotone projection sets with respect to any intersection of ESOC with \(U\times {\mathbb {R}}^q\), where U is an arbitrary closed convex cone in \({\mathbb {R}}^p\) (the proof is similar to the first part of the proof of Theorem 3.4). Contrary to ESOC, any isotone projection set with respect to MESOC is such a cylinder.

We determined the bilinearity rank of MESOC and used the MESOC-isotonicity of the projection onto a cylinder to solve general mixed complementarity problems. We illustrated the corresponding iterative method by a numerical example with exact numbers. Although the iteration principle for MESOC is similar to the corresponding one for ESOC, we remark that there are mixed complementarity problems which can be solved iteratively via MESOC, while the same iterative scheme cannot be used via ESOC, because the corresponding ESOC-isotonicity condition fails (only the MESOC-isotonicity holds). This is due to the fact that although MESOC is a subset of ESOC, MESOC-isotonicity of mappings does not imply their ESOC-isotonicity. This point is illustrated in Remark 5.1.