Linear Complementarity Problems on Extended Second Order Cones

Abstract

In this paper, we study linear complementarity problems on extended second order cones. We convert a linear complementarity problem on an extended second order cone into a mixed complementarity problem on the non-negative orthant. We state necessary and sufficient conditions for a point to be a solution of the converted problem. We also present solution strategies for this problem, such as Newton's method and the Levenberg–Marquardt algorithm. Finally, we present some numerical examples.

Introduction

Although research in cone complementarity problems (see the definition at the beginning of the Preliminaries) goes back only a few decades, the underlying concept of complementarity is much older, having first been introduced by Karush [1]. The concept of complementarity problems appears to have first been considered by Dantzig and Cottle in a technical report [2], for the non-negative orthant. In 1968, Cottle and Dantzig [3] restated the linear programming problem, the quadratic programming problem and the bimatrix game problem as complementarity problems, which inspired research in this field (see [4,5,6,7,8]).

The complementarity problem is a cross-cutting area of research with a wide range of applications in economics, finance and other fields. Earlier works in cone complementarity problems present the theory for a general cone but the practical applications for the non-negative orthant only (similarly to the books [8, 9]). These are related to equilibrium in economics, engineering, physics, finance and traffic. Examples in economics are Walrasian price equilibrium models, price oligopoly models, Nash–Cournot production/distribution models, models of invariant capital stock, Markov perfect equilibria, models of decentralized economy and perfect competition equilibrium, and models with individual markets of production factors. Engineering and physics applications include frictional contact problems, elastoplastic structural analysis and nonlinear obstacle problems. An example in finance is the discretization of the differential complementarity formulation of the Black–Scholes model for American options [10]. An application to congested traffic networks is the prediction of steady-state traffic flows. In recent years, several applications have emerged in which the complementarity problems are defined by cones essentially different from the non-negative orthant, such as positive semidefinite cones, second order cones and direct products of these cones (and, for mixed complementarity problems, linear subspaces as well). Recent applications of second order cone complementarity problems are in elastoplasticity [11, 12], robust game theory [13, 14] and robotics [15]. All these applications come from the Karush–Kuhn–Tucker conditions of second order conic optimization problems.

Németh and Zhang extended the concept of second order cone in [16] to the extended second order cone. Their extension seems to be the most natural extension of second order cones. Sznajder showed that the extended second order cones in [16] are irreducible cones (i.e., they cannot be written as a direct product of simpler cones) and calculated the Lyapunov rank of these cones [17]. The applications of second order cones and the elegant way of extending them suggest that the extended second order cones will be important from both theoretical and practical points of view. Although conic optimization problems with respect to extended second order cones can be reformulated as conic optimization problems with respect to second order cones, we expect that, for several such problems, using the particular inner structure of the extended second order cones provides a more efficient way of solving them than solving the transformed conic optimization problem with respect to second order cones. Indeed, one such particular problem is the projection onto an extended second order cone, which is much easier to solve directly than by solving the reformulated second order conic optimization problem [18].

Until now, the extended second order cones of Németh and Zhang were used as a working tool only for finding the solutions of mixed complementarity problems on general cones [16] and variational inequalities for cylinders whose base is a general convex set [19]. The applications above for second order cones show the importance of these cones and motivate considering conic optimization and complementarity problems on extended second order cones. As another motivation, we suggest the application to mean-variance portfolio optimization problems [20, 21] described in Sect. 3.

The paper is structured as follows: in Sect. 2, we illustrate the main terminology and definitions used in this paper. In Sect. 3, we present an application of extended second order cones to portfolio optimization problems. In Sect. 4, we introduce the notion of mixed implicit complementarity problem as an implicit complementarity problem on the direct product of a cone and a Euclidean space. In Sect. 5, we reformulate the linear complementarity problem as a mixed (implicit, mixed implicit) complementarity problem on the non-negative orthant (MixCP).

Our main result is Theorem 5.1, which discusses the connections between an ESOCLCP and mixed (implicit, mixed implicit) complementarity problems. In particular, using the definitions of Fischer–Burmeister (FB) regularity and of the stationarity of a point, we prove in Theorem 5.2 that, under some mild conditions, a point is a solution of a mixed complementarity problem if it satisfies specific conditions related to FB regularity and stationarity. This theorem can be used to determine whether a point is a solution of the mixed complementarity problem converted from an ESOCLCP. In Sect. 6, we use Newton's method and the Levenberg–Marquardt algorithm to find a solution of the aforementioned MixCP. In Sect. 7, we provide an example of a linear complementarity problem on an extended second order cone. Based on the above, we convert this linear complementarity problem into a mixed complementarity problem on the non-negative orthant and use the aforementioned algorithms to solve it. A solution of this mixed complementarity problem provides a solution of the corresponding ESOCLCP.

As a first step, in this paper we study linear complementarity problems on extended second order cones (ESOCLCP). We show that an ESOCLCP can be transformed into a mixed (implicit, mixed implicit) complementarity problem on the non-negative orthant. We give conditions under which a point is a solution of the reformulated MixCP problem, and in this way we provide conditions for a point to be a solution of an ESOCLCP.

Preliminaries

Let m be a positive integer, \(F{:}\,{\mathbb {R}}^m\rightarrow {\mathbb {R}}^m\) be a mapping and \(y=F(x)\). The classical complementarity problem [22]

$$\begin{aligned} x \ge 0,\quad y\ge 0, \quad \hbox {and} \quad \langle x, y \rangle = 0, \end{aligned}$$

where \(\ge \) denotes the componentwise order induced by the non-negative orthant and \(\langle \cdot ,\cdot \rangle \) is the canonical scalar product in \({\mathbb {R}}^m\), was later extended to more general cones K, as follows:

$$\begin{aligned} x\in K,\quad y\in K^*, \quad \hbox {and} \quad \langle x, y \rangle = 0, \end{aligned}$$

where \(K^*\) is the dual of K [23].

Let \(k,\ell ,\hat{\ell }\) be non-negative integers such that \(m=k+\ell \).

Recall the definitions of the mutually dual extended second order cone \(L(k,\ell )\) and \(M(k,\ell )\) in \(\mathbb {R}^m\equiv {\mathbb {R}}^k\times {\mathbb {R}}^\ell \):

$$\begin{aligned} L(k,\ell )&= \left\{ (x,u) \in \mathbb {R}^k\times \mathbb {R}^\ell : x \ge \Vert u\Vert e\right\} , \end{aligned}$$
(1)
$$\begin{aligned} M(k,\ell )&= \left\{ (x,u) \in \mathbb {R}^k\times \mathbb {R}^\ell : e^\top x\ge \Vert u\Vert ,\ x\ge 0\right\} , \end{aligned}$$
(2)

where \(e=(1, \ldots , 1)^\top \in \mathbb {R}^k \). If there is no ambiguity about the dimensions, then we simply denote \(L(k,\ell )\) and \(M(k,\ell )\) by L and M, respectively.
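For illustration, the membership tests in (1) and (2) can be sketched numerically as follows (a minimal Python/NumPy sketch; the tolerance handling and the sample data are our own choices, not part of the original formulation):

```python
import numpy as np

def in_L(x, u, tol=1e-12):
    # (x, u) in L(k, l): every component of x dominates ||u||, i.e. x >= ||u|| e.
    return bool(np.all(x >= np.linalg.norm(u) - tol))

def in_M(y, v, tol=1e-12):
    # (y, v) in M(k, l): e^T y >= ||v|| and y >= 0 componentwise.
    return bool(y.sum() >= np.linalg.norm(v) - tol and np.all(y >= -tol))

x = np.array([2.0, 3.0])
u = np.array([1.0, 1.0])          # ||u|| = sqrt(2)
print(in_L(x, u), in_M(x, u))     # True True
```

Since L and M are mutually dual, any \((x,u)\in L\) and \((y,v)\in M\) satisfy \(\langle (x,u),(y,v)\rangle \ge 0\).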

Denote by \(\langle \cdot ,\cdot \rangle \) the canonical scalar product in \({\mathbb {R}}^m\) and by \(\Vert \cdot \Vert \) the corresponding Euclidean norm. The notation \(x\perp y\) means that \(\langle x,y\rangle =0\), where \(x,y\in {\mathbb {R}}^m\).

Let \(K\subset {\mathbb {R}}^m\) be a nonempty closed convex cone and \(K^*\) its dual.

Definition 2.1

The set

$$\begin{aligned}{{\mathrm{{\mathcal {C}}}}}(K):=\left\{ (x,y)\in K\times K^*:x\perp y\right\} \end{aligned}$$

is called the complementarity set of K.

Definition 2.2

Let \(F{:}\,{\mathbb {R}}^m\rightarrow {\mathbb {R}}^m\). Then, the complementarity problem \({{\mathrm{CP}}}(F,K)\) is defined by:

$$\begin{aligned} {{\mathrm{CP}}}(F,K):\,(x,F(x))\in {{\mathrm{{\mathcal {C}}}}}(K). \end{aligned}$$
(3)

The solution set of \({{\mathrm{CP}}}(F,K)\) is denoted by \({{\mathrm{SOL-CP}}}(F,K)\):

$$\begin{aligned} {{\mathrm{SOL-CP}}}(F,K) = \left\{ x\in {\mathbb {R}}^m: (x, F(x)) \in {{\mathrm{{\mathcal {C}}}}}(K)\right\} . \end{aligned}$$

If T is a matrix, \(r\in {\mathbb {R}}^m\) and F is defined by \(F(x)=Tx+r\), then \({{\mathrm{CP}}}(F,K)\) is denoted by \({{\mathrm{LCP}}}(T,r,K)\) and is called linear complementarity problem. The solution set of \({{\mathrm{LCP}}}(T,r,K)\) is denoted by \({{\mathrm{SOL-LCP}}}(T,r,K)\).
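For the non-negative orthant, a candidate solution of \({{\mathrm{LCP}}}(T,r,K)\) can be checked directly against the definition; the data T, r, x below are an illustrative example of ours, not taken from the paper:

```python
import numpy as np

def is_lcp_solution(T, r, x, tol=1e-10):
    # x in SOL-LCP(T, r, R^m_+): x >= 0, y = Tx + r >= 0 and <x, y> = 0.
    y = T @ x + r
    return bool(np.all(x >= -tol) and np.all(y >= -tol) and abs(x @ y) <= tol)

T = np.eye(2)
r = np.array([-1.0, 1.0])
x = np.array([1.0, 0.0])          # y = Tx + r = (0, 1): complementary to x
print(is_lcp_solution(T, r, x))   # True
```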

Definition 2.3

Let \(G,F{:}\,{\mathbb {R}}^m\rightarrow {\mathbb {R}}^m\). Then, the implicit complementarity problem \({{\mathrm{ICP}}}(F,G,K)\) is defined by

$$\begin{aligned} {{\mathrm{ICP}}}(F,G,K):\,(G(x),F(x))\in {{\mathrm{{\mathcal {C}}}}}(K). \end{aligned}$$
(4)

The solution set of \({{\mathrm{ICP}}}(F,G,K)\) is denoted by \({{\mathrm{SOL-ICP}}}(F,G,K)\):

$$\begin{aligned} {{\mathrm{SOL-ICP}}}(F,G,K) = \left\{ x\in {\mathbb {R}}^m: (G(x), F(x)) \in {{\mathrm{{\mathcal {C}}}}}(K)\right\} . \end{aligned}$$

Let \(m,k,\ell \) be non-negative integers such that \(m=k+\ell \), \(\varLambda \subset {\mathbb {R}}^k\) be a nonempty closed convex cone and \(K=\varLambda \times {\mathbb {R}}^\ell \). Denote by \(\varLambda ^*\) the dual of \(\varLambda \) in \({\mathbb {R}}^k\) and by \(K^*\) the dual of K in \({\mathbb {R}}^k\times {\mathbb {R}}^\ell \). It is easy to check that \(K^*=\varLambda ^*\times \{0\}\).

Definition 2.4

Consider the mappings \(F_1:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k\) and \(F_2:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^{\hat{\ell }}\). The mixed complementarity problem \({{\mathrm{MixCP}}}(F_1,F_2,\varLambda )\) is defined by

$$\begin{aligned} {{\mathrm{MixCP}}}(F_1,F_2,\varLambda ):\left\{ \begin{array}{l} F_2(x,u)=0\\ (x,F_1(x,u))\in {{\mathrm{{\mathcal {C}}}}}(\varLambda ). \end{array} \right. \end{aligned}$$
(5)

The solution set of \({{\mathrm{MixCP}}}(F_1,F_2,\varLambda )\) is denoted by \({{\mathrm{SOL-MixCP}}}(F_1,F_2,\varLambda )\):

$$\begin{aligned} {{\mathrm{SOL-MixCP}}}(F_1,F_2,\varLambda ) =\left\{ (x,u)\in {\mathbb {R}}^k\times {\mathbb {R}}^\ell : F_2(x,u)=0,\ (x, F_1(x,u)) \in {{\mathrm{{\mathcal {C}}}}}(\varLambda )\right\} . \end{aligned}$$

Definition 2.5

[8, Definition 3.7.29] A matrix \(\varPi \in {\mathbb {R}}^{n\times n}\) is said to be an \(S_0\) matrix if the system of linear inequalities

$$\begin{aligned} \varPi x \ge 0,\quad 0\ne x\ge 0 \end{aligned}$$

has a solution.
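A rigorous test of the \(S_0\) property amounts to a linear feasibility problem; the crude sampled check below only illustrates the definition on two small examples (it is not a decision procedure):

```python
import numpy as np

def looks_S0(Pi, samples=200, tol=1e-9):
    # Search for x >= 0, x != 0 with Pi @ x >= 0. Since the inequalities are
    # positively homogeneous, it suffices to sample the standard simplex.
    rng = np.random.default_rng(0)
    n = Pi.shape[0]
    for _ in range(samples):
        x = rng.random(n)
        x /= x.sum()              # x >= 0 and x != 0
        if np.all(Pi @ x >= -tol):
            return True
    return False

print(looks_S0(np.eye(2)))                          # True: x = e works
print(looks_S0(np.array([[1., -2.], [-2., 1.]])))   # False: no such x exists
```

For the second matrix, \(x=(a,1-a)\) with \(a\in [0,1]\) would need \(3a-2\ge 0\) and \(1-3a\ge 0\) simultaneously, which is impossible.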

The proof of our next result follows immediately from \(K^*= \varLambda ^*\times \{0\}\) and the definitions of \({{\mathrm{CP}}}(F,K)\) and \({{\mathrm{MixCP}}}(F_1,F_2,\varLambda )\).

Proposition 2.1

Consider the mappings

$$\begin{aligned}F_1:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k,\quad F_2:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^\ell .\end{aligned}$$

Define the mapping

$$\begin{aligned}F{:}\,{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k\times {\mathbb {R}}^\ell \end{aligned}$$

by

$$\begin{aligned}F(x,u)=(F_1(x,u),F_2(x,u)).\end{aligned}$$

Then,

$$\begin{aligned}(x,u)\in {{\mathrm{SOL-CP}}}(F,K)\iff (x,u)\in {{\mathrm{SOL-MixCP}}}(F_1,F_2,\varLambda ).\end{aligned}$$

Definition 2.6

[24, Schur complement] For a matrix \(\varPi =\left( {\begin{matrix} P & Q\\ R & S\end{matrix}}\right) \) with P nonsingular, the Schur complement of P in \(\varPi \) is

$$\begin{aligned} \left( \varPi /P \right) = S - RP^{-1}Q. \end{aligned}$$
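A quick numerical illustration with assumed data, which also checks the classical identity \(\det \varPi =\det P\,\det (\varPi /P)\):

```python
import numpy as np

P = np.array([[4.0, 1.0], [1.0, 3.0]])
Q = np.array([[1.0], [0.0]])
R = np.array([[0.0, 2.0]])
S = np.array([[5.0]])

Pi = np.block([[P, Q], [R, S]])
schur = S - R @ np.linalg.inv(P) @ Q   # (Pi / P)

# det(Pi) = det(P) * det(Pi / P)
print(np.isclose(np.linalg.det(Pi), np.linalg.det(P) * np.linalg.det(schur)))  # True
```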

Definition 2.7

[25, Definition 4.6.2]

  1. (i)

Let I be an open subset with \( I \subset {\mathbb {R}}^m\) and \( f: I \rightarrow {\mathbb {R}}^m\). We say that f is a Lipschitz function if there is a constant \(\lambda >0\) such that

    $$\begin{aligned} \left\| f(x) - f\left( x'\right) \right\| \le \lambda \left\| x - x'\right\| \quad \forall x, x' \in I. \end{aligned}$$
    (6)
  2. (ii)

    We say that f is locally Lipschitz if for every \( x \in I\), there exists \( \varepsilon > 0 \) such that f is Lipschitz on \(I\cap B_{\varepsilon }(x)\), where \(B_{\varepsilon }(x)=\{y\in {\mathbb {R}}^m:\Vert y-x\Vert \le \varepsilon \}\).

An Application of Extended Second Order Cones to Portfolio Optimization Problems

Consider the following portfolio optimization problem:

$$\begin{aligned} \min _{w}\left\{ w^{\top }\varSigma w:\,r^{\top }w \ge R,\,e^\top w=1\right\} , \end{aligned}$$

where \(\varSigma \in \mathbb {R}^{n\times n}\) is the covariance matrix of the asset returns, \(r\in {\mathbb {R}}^n\) is the vector of expected asset returns, \(e=(1, \ldots , 1)^\top \in \mathbb {R}^n\), \(w\in {\mathbb {R}}^n\) is the vector of asset allocation weights for the portfolio and R is the required return of the portfolio.

In order to guarantee a diversified allocation of the fund across different assets in the market, a new constraint can reasonably be introduced: \(\Vert w\Vert \le \xi ,\) where \( \xi \) limits the concentration of the fund allocation. If short selling is allowed, then components of w can be negative. The introduction of this constraint guarantees that the fund will not be allocated into only a few assets.

Since the covariance matrix \(\varSigma \) can be decomposed into \(\varSigma = U^{\top }U\), the problem can be rewritten as

$$\begin{aligned} \min _{w,\xi ,y}\left\{ y:\,r^{\top }w \ge R,\,\Vert Uw\Vert \le y,\,\Vert w\Vert \le \xi ,\,e^\top w = 1\right\} . \end{aligned}$$

The constraint \(\Vert Uw\Vert \le y\) is a relaxation of the constraint \(\Vert U\Vert \Vert w\Vert \le y\), where \(\Vert U\Vert =\max _{\Vert x\Vert \le 1}{\Vert Ux\Vert }\). Replacing the former constraint by the latter yields the strengthened problem:

$$\begin{aligned} \min _{w,\xi ,y}\left\{ y:\,r^{\top }w \ge R,\,\Vert w\Vert e \le \left( \xi , \frac{y}{\Vert U\Vert }\right) ^{\top },\,e^\top w=1\right\} . \end{aligned}$$

The minimal value of the objective of the original problem is at most as large as the minimal value of the objective for this latter problem. The second constraint of the latter portfolio optimization problem means that the point \(\left( \xi , y/\Vert U\Vert ,w\right) ^{\top }\) belongs to the extended second order cone L(2, n). Hence, the strengthened problem is a conic optimization problem with respect to an extended second order cone.
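The membership asserted for the second constraint can be checked numerically; the sketch below uses assumed data U and w (three assets) and verifies that a feasible choice of \((\xi ,y)\) places \((\xi , y/\Vert U\Vert , w)\) in L(2, 3):

```python
import numpy as np

U = np.array([[0.20, 0.05, 0.00],      # assumed factor of the covariance matrix
              [0.00, 0.15, 0.05],
              [0.00, 0.00, 0.10]])
w = np.array([0.5, 0.3, 0.2])          # candidate weights, e^T w = 1
normU = np.linalg.norm(U, 2)           # operator norm max_{||x|| <= 1} ||Ux||

y = normU * np.linalg.norm(w) + 0.01   # feasible for ||U|| ||w|| <= y
xi = np.linalg.norm(w) + 0.01          # feasible for ||w|| <= xi

# (xi, y/||U||, w) in L(2, 3) means (xi, y/||U||) >= ||w|| componentwise.
x_part = np.array([xi, y / normU])
print(np.all(x_part >= np.linalg.norm(w)))   # True
```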

Mixed Implicit Complementarity Problems

Let \(m,k,\ell ,\hat{\ell }\) be non-negative integers such that \(m=k+\ell \), \(\varLambda \subset {\mathbb {R}}^k\) be a nonempty, closed, convex cone and \(K=\varLambda \times {\mathbb {R}}^\ell \). Denote by \(\varLambda ^*\) the dual of \(\varLambda \) in \({\mathbb {R}}^k\) and by \(K^*\) the dual of K in \({\mathbb {R}}^k\times {\mathbb {R}}^\ell \).

Definition 4.1

Consider the mappings

$$\begin{aligned}F_1,G_1:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k,\quad F_2:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^{\hat{\ell }}.\end{aligned}$$

The mixed implicit complementarity problem \({{\mathrm{MixICP}}}(F_1,F_2,G_1,\varLambda )\) is defined by

$$\begin{aligned} {{\mathrm{MixICP}}}(F_1,F_2,G_1,\varLambda ):\left\{ \begin{array}{l} F_2(x,u)=0\\ (G_1(x,u),F_1(x,u))\in {{\mathrm{{\mathcal {C}}}}}(\varLambda ). \end{array} \right. \end{aligned}$$
(7)

The solution set of the mixed implicit complementarity problem \({{\mathrm{MixICP}}}(F_1,F_2,G_1,\varLambda )\) is denoted by \({{\mathrm{SOL-MixICP}}}(F_1,F_2,G_1,\varLambda )\):

$$\begin{aligned}&{{\mathrm{SOL-MixICP}}}( F_1,F_2,G_1,\varLambda ) \\&\quad =\left\{ (x,u)\in {\mathbb {R}}^k\times {\mathbb {R}}^\ell : F_2(x,u)=0,\ (G_1(x,u), F_1(x,u)) \in {{\mathrm{{\mathcal {C}}}}}(\varLambda )\right\} . \end{aligned}$$

The proof of our next result follows immediately from \(K^*=\varLambda ^*\times \{0\}\) and the definitions of \({{\mathrm{ICP}}}(F,G,K)\) and \({{\mathrm{MixICP}}}(F_1,F_2,G_1,\varLambda )\).

Proposition 4.1

Consider the mappings \(F_1,G_1:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k,\) \(F_2,G_2:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^\ell .\) Define the mappings \(F,G:{\mathbb {R}}^k\times {\mathbb {R}}^\ell \rightarrow {\mathbb {R}}^k\times {\mathbb {R}}^\ell \) by \(F(x,u)=(F_1(x,u),F_2(x,u)),\) \(G(x,u)=(G_1(x,u),G_2(x,u)),\) respectively. Then,

$$\begin{aligned}(x,u)\in {{\mathrm{SOL-ICP}}}(F,G,K)\iff (x,u)\in {{\mathrm{SOL-MixICP}}}(F_1,F_2,G_1,\varLambda ).\end{aligned}$$

Main Results

The linear complementarity problem is the dual problem of a quadratic optimization problem and has a wide range of applications in various areas. One of the most famous applications is the portfolio optimization problem, first introduced by Markowitz [20]; see the application of the extended second order cone to this problem presented in Sect. 3.

Proposition 5.1

Let \(x,y\in {\mathbb {R}}^k\) and \(u,v\in {\mathbb {R}}^\ell {\setminus }\{0\}\).

  1. (i)

    \((x,0,y,v)\in {{\mathrm{{\mathcal {C}}}}}(L)\) if and only if \(e^\top y\ge \Vert v\Vert \) and \((x,y)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \).

  2. (ii)

\((x,u,y,0)\in {{\mathrm{{\mathcal {C}}}}}(L)\) if and only if \(x\ge \Vert u\Vert e\) and \((x,y)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \).

  3. (iii)

\((x,u,y,v):=((x,u),(y,v))\in {{\mathrm{{\mathcal {C}}}}}(L)\) if and only if there exists a \(\lambda >0\) such that \(v=-\lambda u\), \(e^\top y=\Vert v\Vert \) and \((x-\Vert u\Vert e,y)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \).

Proof

Items (i) and (ii) are easy consequences of the definitions of L, M and the complementarity set of a nonempty closed convex cone.

Item (iii) follows from Proposition 1 of [18]. For the sake of completeness, we reproduce its proof here. First, assume that there exists \(\lambda >0\) such that \(v=-\lambda u\), \(e^\top y=\Vert v\Vert \) and \((x-\Vert u\Vert e,y)\in {{\mathrm{{\mathcal {C}}}}}({\mathbb {R}}^k_+)\). Thus, \((x,u)\in L\) and \((y,v)\in M\). On the other hand,

$$\begin{aligned}\langle (x,u),(y,v)\rangle =x^\top y+u^\top v=\Vert u\Vert e^\top y-\lambda \Vert u\Vert ^2=\Vert u\Vert \Vert v\Vert -\lambda \Vert u\Vert ^2=0.\end{aligned}$$

Thus, \((x,u,y,v)\in C(L)\).

Conversely, if \((x,u,y,v)\in C(L)\), then \((x,u)\in L\), \((y,v)\in M\) and

$$\begin{aligned}0=\langle (x,u),(y,v)\rangle =x^\top y+u^\top v\ge \Vert u\Vert e^\top y+ u^\top v\ge \Vert u\Vert \Vert v\Vert +u^\top v\ge 0.\end{aligned}$$

This implies the existence of a \(\lambda >0\) such that \(v=-\lambda u\), \(e^\top y=\Vert v\Vert \) and \((x-\Vert u\Vert e)^\top y=0\). It follows that \((x-\Vert u\Vert e,y)\in {{\mathrm{{\mathcal {C}}}}}({\mathbb {R}}^k_+)\). \(\square \)
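The characterization in item (iii) can also be verified numerically. The sketch below builds a point of the complementarity set of L from the recipe of the proposition, with illustrative choices of \(u,\lambda ,y\) and of the slack s:

```python
import numpy as np

u = np.array([1.0, 0.0])
lam = 2.0
v = -lam * u                            # v = -lambda u, so ||v|| = 2
y = np.array([2.0, 0.0])                # y >= 0 with e^T y = ||v||
s = np.array([0.0, 3.0])                # s >= 0 with s^T y = 0
x = np.linalg.norm(u) * np.ones(2) + s  # x - ||u|| e = s is complementary to y

in_L = np.all(x >= np.linalg.norm(u))                      # (x, u) in L
in_M = y.sum() >= np.linalg.norm(v) and np.all(y >= 0)     # (y, v) in M
orthogonal = np.isclose(x @ y + u @ v, 0.0)                # <(x,u),(y,v)> = 0
print(bool(in_L and in_M and orthogonal))   # True
```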

Theorem 5.1

Denote \(z=(x,u)\), \(\hat{z}=(x-\Vert u\Vert e,u)\), \(\tilde{z}=(x-te,u,t)\) and \(r=(p,q)\) with \(x,p\in {\mathbb {R}}^k\), \(u,q\in {\mathbb {R}}^\ell \) and \(t\in {\mathbb {R}}\). Let \(T=\left( {\begin{matrix}A & B\\ C & D \end{matrix}}\right) \) with \(A\in {\mathbb {R}}^{k\times k}\), \(B\in {\mathbb {R}}^{k\times \ell }\), \(C\in {\mathbb {R}}^{\ell \times k}\) and \(D\in {\mathbb {R}}^{\ell \times \ell }\). The square matrices T, A and D are assumed to be nonsingular.

  1. (i)

    Suppose \(u=0\). We have

    $$\begin{aligned}&z\in {{\mathrm{SOL-LCP}}}(T,r,L) \\&\quad \iff x\in {{\mathrm{SOL-LCP}}}(A,p,{\mathbb {R}}^k_+)\quad \text{ and }\quad e^\top (Ax+p)\ge \Vert Cx+q\Vert . \end{aligned}$$
  2. (ii)

    Suppose \(Cx+Du+q=0\). Then,

$$\begin{aligned}z\in {{\mathrm{SOL-LCP}}}(T,r,L)\iff z\in {{\mathrm{SOL-MixCP}}}\left( F_1,F_2,{\mathbb {R}}^k_+\right) \quad \text{ and }\quad x\ge \Vert u\Vert e,\end{aligned}$$

    where \(F_1(x,u)=Ax+Bu+p\) and \(F_2(x,u)=0\).

  3. (iii)

    Suppose \(u\ne 0\) and \(Cx+Du+q\ne 0\). We have

    $$\begin{aligned}z\in {{\mathrm{SOL-LCP}}}(T,r,L)\iff z\in {{\mathrm{SOL-MixICP}}}\left( F_1,F_2,G_1,{\mathbb {R}}^k_+\right) ,\end{aligned}$$

    where

    $$\begin{aligned}F_2(x,u)=\left( \Vert u\Vert C+ue^\top A\right) x+ue^\top (Bu+p)+\Vert u\Vert (Du+q),\end{aligned}$$

    \(G_1(x,u)=x-\Vert u\Vert e\) and \(F_1(x,u)=Ax+Bu+p\).

  4. (iv)

    Suppose \(u\ne 0\) and \(Cx+Du+q\ne 0\). We have

    $$\begin{aligned}z\in {{\mathrm{SOL-LCP}}}(T,r,L)\iff \hat{z}\in {{\mathrm{SOL-MixCP}}}\left( F_1,F_2,{\mathbb {R}}^k_+\right) ,\end{aligned}$$

    where

    $$\begin{aligned}F_2(x,u)=\left( \Vert u\Vert C+ue^\top A\right) (x+\Vert u\Vert e)+ue^\top (Bu+p)+\Vert u\Vert (Du+q)\end{aligned}$$

    and \(F_1(x,u)=A(x+\Vert u\Vert e)+Bu+p\).

  5. (v)

    Suppose \(u\ne 0\), \(Cx+Du+q\ne 0\) and \(\Vert u\Vert C+ue^\top A\) is a nonsingular matrix. We have

    $$\begin{aligned}z\in {{\mathrm{SOL-LCP}}}(T,r,L)\iff \hat{z}\in {{\mathrm{SOL-ICP}}}\left( F_1,F_2,{\mathbb {R}}^k_+\right) ,\end{aligned}$$

    where

    $$\begin{aligned}F_1(u)=A\left( \left( \Vert u\Vert C+ue^\top A\right) ^{-1}\left( ue^\top (Bu+p)+\Vert u\Vert (Du+q)\right) \right) +Bu+p\end{aligned}$$

    and

    $$\begin{aligned}F_2(u)=\left( \Vert u\Vert C+ue^\top A\right) ^{-1}\left( ue^\top (Bu+p)+\Vert u\Vert (Du+q)\right) .\end{aligned}$$
  6. (vi)

    Suppose \(u\ne 0\), \(Cx+Du+q\ne 0\). We have

    $$\begin{aligned}z\in {{\mathrm{SOL-LCP}}}(T,r,L)\iff \exists t>0\end{aligned}$$

    such that

    $$\begin{aligned}\tilde{z}\in {{\mathrm{SOL-MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) ,\end{aligned}$$

    where

    $$\begin{aligned}\widetilde{F}_1(x,u,t)=A(x+te)+Bu+p\end{aligned}$$

    and

    $$\begin{aligned} \widetilde{F}_2(x,u,t) = \begin{pmatrix} \left( tC+ue^\top A\right) (x+te)+ue^\top (Bu+p)+t(Du+q) \\ t^2 - \Vert u\Vert ^2 \end{pmatrix}. \end{aligned}$$
    (8)

Proof

  1. (i)

    We have that \(z\in {{\mathrm{SOL-LCP}}}(T,r,L)\) is equivalent to \((x,0,Ax+p,Cx+q)\in {{\mathrm{{\mathcal {C}}}}}(L)\) or, by item (i) of Proposition 5.1, to \((x,Ax+p)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \) and \(e^\top (Ax+p)\ge \Vert Cx+q\Vert \).

  2. (ii)

    We have that \(z\in {{\mathrm{SOL-LCP}}}(T,r,L)\) is equivalent to \((x,u,Ax+Bu+p,0)\in {{\mathrm{{\mathcal {C}}}}}(L)\) or, by item (ii) of Proposition 5.1, to \((x,Ax+Bu+p)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \) and \(x\ge \Vert u\Vert \), or to

    $$\begin{aligned}z\in {{\mathrm{SOL-MixCP}}}\left( F_1,F_2,{\mathbb {R}}^k_+\right) \quad \text{ and }\quad x\ge \Vert u\Vert ,\end{aligned}$$

    where \(F_1(x,u)=Ax+Bu+p\) and \(F_2(x,u)=0\).

  3. (iii)

    Suppose that \(z\in {{\mathrm{SOL-LCP}}}(T,r,L)\). Then, \((x,u,y,v)\in {{\mathrm{{\mathcal {C}}}}}(L)\), where \(y=Ax+Bu+p\) and \(v=Cx+Du+q\). Then, by item (iii) of Proposition 5.1, we have that \(\exists \lambda >0\) such that

    $$\begin{aligned}&\displaystyle Cx+Du+q=v=-\lambda u, \end{aligned}$$
    (9)
    $$\begin{aligned}&\displaystyle e^\top (Ax+Bu+p)=e^\top y=\Vert v\Vert =\Vert Cx+Du+q\Vert =\lambda \Vert u\Vert , \end{aligned}$$
    (10)
    $$\begin{aligned} (G_1(x,u),F_1(x,u))&= (x-\Vert u\Vert e,Ax+Bu+p)\nonumber \\&= (x-\Vert u\Vert e,y)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) . \end{aligned}$$
    (11)

    From Eq. (9), we obtain \(\Vert u\Vert (Cx+Du+q)=-\lambda \Vert u\Vert u\), which by Eq. (10) implies \(\Vert u\Vert (Cx+Du+q)=-ue^\top (Ax+Bu+p)\), which after some algebra gives

    $$\begin{aligned} F_2(x,u)=0. \end{aligned}$$
    (12)

    From Eqs. (11) and (12), we obtain that \(z\in {{\mathrm{SOL-MixICP}}}(F_1,F_2,G_1,{\mathbb {R}}^k_+)\).

    Conversely, suppose that \(z\in {{\mathrm{SOL-MixICP}}}(F_1,F_2,G_1,{\mathbb {R}}^k_+)\). Then,

    $$\begin{aligned} \Vert u\Vert v+ue^\top y=\Vert u\Vert (Cx+Du+q)+ue^\top (Ax+Bu+p)=F_2(x,u)=0 \end{aligned}$$
    (13)

    and

    $$\begin{aligned} (x-\Vert u\Vert e,y)=(x-\Vert u\Vert e,Ax+Bu+p)=(G_1(x,u),F_1(x,u))\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) , \end{aligned}$$
    (14)

    where \(v=Cx+Du+q\) and \(y=Ax+Bu+p\). Equations (14) and (13) imply

    $$\begin{aligned} v=-\lambda u, \end{aligned}$$
    (15)

    where

    $$\begin{aligned} \lambda =\left( e^\top y\right) /\Vert u\Vert >0. \end{aligned}$$
    (16)

    Equations (15) and (16) imply

    $$\begin{aligned} e^\top y=\Vert v\Vert . \end{aligned}$$
    (17)

    By item (iii) of Proposition 5.1, Eqs. (15), (17) and (14) imply

    $$\begin{aligned}(x,u,y,v)\in {{\mathrm{{\mathcal {C}}}}}(L)\end{aligned}$$

    and therefore \(z\in {{\mathrm{SOL-LCP}}}(T,r,L)\).

  4. (iv)

    It is a simple reformulation of item (iii) by using the change of variables

    $$\begin{aligned}(x,u)\mapsto (x-\Vert u\Vert e,u).\end{aligned}$$
  5. (v)

    Again, it is a simple reformulation of item (iv), by using that \(\Vert u\Vert C+ue^\top A\) is a nonsingular matrix.

  6. (vi)

    Suppose that \(z\in {{\mathrm{SOL-LCP}}}(T,r,L)\). Then, \((x,u,y,v)\in {{\mathrm{{\mathcal {C}}}}}(L)\), where \(y=Ax+Bu+p\) and \(v=Cx+Du+q\). Let \(t = \Vert u\Vert \). Then, by item (iii) of Proposition 5.1, we have that \(\exists \lambda >0\) such that

    $$\begin{aligned}&Cx+Du+q=v=-\lambda u, \end{aligned}$$
    (18)
    $$\begin{aligned}&e^\top (Ax+Bu+p)=e^\top y=\Vert v\Vert =\Vert Cx+Du+q\Vert =\lambda t, \end{aligned}$$
    (19)
    $$\begin{aligned}&\left( \tilde{z},\widetilde{F}_1(x,u,t)\right) =\left( x-te,Ax+Bu+p\right) =(x-te,y)\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) ,\qquad \end{aligned}$$
    (20)

    where \(\tilde{z} = (x-te,u,t)\). From Eq. (18), we get \(t(Cx+Du+q)=-t \lambda u\), which, by Eq. (19), implies \(t(Cx+Du+q)=-ue^\top (Ax+Bu+p)\), which after some algebra gives

    $$\begin{aligned} \widetilde{F}_2(x,u,t)=0. \end{aligned}$$
    (21)

    From Eqs. (20) and (21), we obtain that \(z\in {{\mathrm{SOL-MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) \). \(\square \)

Note that item (vi) makes \(\widetilde{F}_1(x,u,t)\) and \(\widetilde{F}_2(x,u,t)\) smooth functions by adding the variable t. These smooth functions make the smooth Newton's method applicable to the mixed complementarity problem.
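A direct transcription of the smoothed maps of item (vi), Eq. (8), reads as follows; the data A, B, C, D, p, q in the demonstration are illustrative, and the check only confirms that the last component of \(\widetilde{F}_2\) vanishes at \(t=\Vert u\Vert \):

```python
import numpy as np

def F_tilde(x, u, t, A, B, C, D, p, q):
    # Smoothed maps of Theorem 5.1 (vi), Eq. (8).
    e = np.ones(len(x))
    F1 = A @ (x + t * e) + B @ u + p
    top = (t * C + np.outer(u, e) @ A) @ (x + t * e) \
          + u * (e @ (B @ u + p)) + t * (D @ u + q)
    F2 = np.concatenate([top, [t**2 - u @ u]])
    return F1, F2

x = np.array([1.0, 2.0])
u = np.array([0.6, 0.8])
t = np.linalg.norm(u)                  # the extra variable t at t = ||u||
A = np.eye(2); B = np.zeros((2, 2)); C = np.zeros((2, 2)); D = np.eye(2)
p = np.zeros(2); q = np.zeros(2)

F1, F2 = F_tilde(x, u, t, A, B, C, D, p, q)
print(np.isclose(F2[-1], 0.0))         # True: the last row enforces t = ||u||
```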

The conversion of an \({{\mathrm{LCP}}}\) on extended second order cones into a \({{\mathrm{MixCP}}}\) problem defined on the non-negative orthant is useful, because the latter can be studied by using the Fischer–Burmeister function. In order to investigate the existence of a solution of the \({{\mathrm{MixCP}}}\), we introduce the scalar Fischer–Burmeister C-function (see [26, 27]):

$$\begin{aligned} \psi _{FB}(a,b) = \sqrt{a^2+b^2} - (a+b) \quad \forall (a,b) \in \mathbb {R}^2. \end{aligned}$$
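The defining C-function property, \(\psi _{FB}(a,b)=0\) if and only if \(a\ge 0\), \(b\ge 0\) and \(ab=0\), can be checked on a few points:

```python
import numpy as np

def psi_fb(a, b):
    # Scalar Fischer-Burmeister function.
    return np.hypot(a, b) - (a + b)

print(psi_fb(0.0, 2.0))    # 0.0: complementary pair
print(psi_fb(3.0, 0.0))    # 0.0: complementary pair
print(psi_fb(1.0, 1.0))    # sqrt(2) - 2 != 0: both strictly positive
print(psi_fb(-1.0, 0.0))   # 2.0 != 0: negative argument
```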

Obviously, \(\psi _{FB}^2(a,b)\) is a continuously differentiable function on \(\mathbb {R}^2\). The equivalent FB-based equation formulation for the \({{\mathrm{MixCP}}}\) problem is:

$$\begin{aligned} 0= \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}(x,u,t) = \begin{pmatrix} \psi \left( x_1,\widetilde{F}_1^1\left( x,u,t\right) \right) \\ \vdots \\ \psi \left( x_k,\widetilde{F}_1^k\left( x,u,t\right) \right) \\ \widetilde{F}_2\left( x,u,t\right) \end{pmatrix}, \end{aligned}$$
(22)

with the associated merit function:

$$\begin{aligned} \theta ^{{{\mathrm{MixCP}}}}_{FB}\left( x,u,t\right) =\frac{1}{2}\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( x,u,t\right) ^{\mathrm{T}} \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( x,u,t\right) . \end{aligned}$$
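As a toy illustration of how this reformulation pairs with Newton's method (anticipating Sect. 6), consider the one-dimensional problem \({{\mathrm{CP}}}(F,{\mathbb {R}}_+)\) with \(F(x)=x-1\), whose unique solution is \(x=1\); the sketch below is our own minimal example, not the algorithm of the paper:

```python
import numpy as np

def psi(a, b):
    return np.hypot(a, b) - (a + b)

# Newton's method on psi(x, F(x)) = 0 with F(x) = x - 1; the derivative
# formula is valid away from (x, F(x)) = (0, 0), which the iterates avoid.
x = 5.0
for _ in range(50):
    F = x - 1.0
    r = np.hypot(x, F)
    grad = (x / r - 1.0) + (F / r - 1.0) * 1.0   # dF/dx = 1
    x -= psi(x, F) / grad
print(abs(x - 1.0) < 1e-8)   # True
```

Here \(\mathrm{grad}=(x+F)/r-2\le \sqrt{2}-2<0\), so the Newton step is always well defined.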

We continue by calculating the Jacobian matrix of the associated merit function. If \(i \in \{1,\ldots ,k\}\) is such that \((x_i,\widetilde{F}_1^i) \ne (0,0)\), then the differential with respect to \(z = (x,u,t)\in {\mathbb {R}}^{m+1}\) is

$$\begin{aligned} \frac{\partial \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\right) _i}{\partial z} = \left( \frac{x_i}{\sqrt{x_i^2+\left( \widetilde{F}_1^i(x,u,t)\right) ^2}}-1\right) e^i +\left( \frac{\widetilde{F}_1^i(x,u,t)}{\sqrt{x_i^2+\left( \widetilde{F}_1^i(x,u,t)\right) ^2}}-1\right) \frac{\partial \widetilde{F}_1^i(x,u,t)}{\partial z}, \end{aligned}$$

where \(e^i\) denotes the i-th canonical unit vector. The differential with respect to \(z_j\) with \(j\ne i\) is

$$\begin{aligned} \frac{\partial \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\right) _i}{\partial z_j} = \left( \frac{\widetilde{F}_1^i(x,u,t)}{\sqrt{x_i^2+\left( \widetilde{F}_1^i(x,u,t)\right) ^2}}-1\right) \frac{\partial \widetilde{F}_1^i(x,u,t)}{\partial z_j}. \end{aligned}$$

Obviously, the differential with respect to \(z_j\) with \(j > k\) is equal to zero. Note that if \((x_i,\widetilde{F}_1^i) = (0,0)\), then \(\frac{\partial \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\right) _i}{\partial z}\) will be a generalized gradient of a composite function, i.e., a closed unit ball B(0, 1). However, this case will not occur in our paper. As for the components \(\widetilde{F}_2(x,u,t)\) with \(i \in \{ k+1,\ldots ,m+1\}\), the Jacobian matrix is much simpler, since

$$\begin{aligned} \frac{\partial \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\right) _i}{\partial z}= \frac{\partial \widetilde{F}_2^i(x,u,t)}{\partial z}. \end{aligned}$$

Therefore, the Jacobian matrix for the associated merit function is:

$$\begin{aligned} \mathcal {A} = \begin{pmatrix} D_a+D_bJ_x\widetilde{F}_1(x,u,t) & D_bJ_{(u,t)}\widetilde{F}_1(x,u,t)\\ J_x\widetilde{F}_2(x,u,t) & J_{(u,t)}\widetilde{F}_2(x,u,t) \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} D_a= {{\mathrm{diag}}}\left( \frac{x_i}{\sqrt{x_i^2 + \widetilde{F}_1^i(x,u,t)^2}} - 1 \right) , \qquad D_b={{\mathrm{diag}}}\left( \frac{\widetilde{F}_1^i(x,u,t)}{\sqrt{x_i^2 + \widetilde{F}_1^i(x,u,t)^2}} - 1 \right) ,\quad i=1, \ldots , k. \end{aligned}$$
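The gradient formula above can be sanity-checked by finite differences; in the sketch below, the affine component \(\widetilde{F}_1^1(z)=c^\top z+d\) and the point z are hypothetical data for the first FB component with \(k=1\), \(m+1=3\):

```python
import numpy as np

def psi(a, b):
    return np.hypot(a, b) - (a + b)

c = np.array([1.0, -2.0, 0.5])         # hypothetical gradient of F1^1
d = 0.3
z = np.array([0.7, 0.4, 1.1])          # z = (x_1, u, t), away from the kink

F1 = c @ z + d
r = np.hypot(z[0], F1)
e1 = np.array([1.0, 0.0, 0.0])         # first canonical unit vector
grad = (z[0] / r - 1.0) * e1 + (F1 / r - 1.0) * c   # formula from the text

h = 1e-6                               # forward finite differences
num = np.array([(psi((z + h * E)[0], c @ (z + h * E) + d)
                 - psi(z[0], F1)) / h for E in np.eye(3)])
print(np.allclose(grad, num, atol=1e-4))   # True
```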

Define the following index sets:

$$\begin{aligned} \begin{array}{lcl} {{\mathrm{{\mathcal {C}}}}}\equiv \left\{ i: x_i \ge 0,\ \widetilde{F}_1^i(x,u,t)\ge 0,\ x_i \widetilde{F}_1^i(x,u,t) = 0\right\} & & \mathrm{complementarity\;index} \\ \mathcal {R} \equiv \left\{ 1, \ldots , k\right\} {\setminus } {{\mathrm{{\mathcal {C}}}}}& & \mathrm{residual\;index} \\ \mathcal {P} \equiv \left\{ i\in \mathcal {R} : x_i> 0 ,\ \widetilde{F}_1^i(x,u,t) > 0\right\} & & \mathrm{positive\;index} \\ \mathcal {N} \equiv \mathcal {R}{\setminus }\mathcal {P} & & \mathrm{negative\;index} \\ \end{array} \end{aligned}$$

Definition 5.1

A point \((x,u,t) \in {\mathbb {R}}^{m+1} \) is called FB regular for the merit function \( \theta ^{{{\mathrm{MixCP}}}}_{FB}\) (or for the \( {{\mathrm{MixCP}}}( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+ )\)) if the partial Jacobian matrix \( J_x\widetilde{F}_1(x,u,t) \) of \(\widetilde{F}_1(x,u,t)\) with respect to x is nonsingular and if, for every \(w \in {\mathbb {R}}^k\), \(w\ne 0 \), with

$$\begin{aligned} w_{{{\mathrm{{\mathcal {C}}}}}}=0, \quad w_{\mathcal {P}}>0, \quad w_{\mathcal {N}}<0, \end{aligned}$$

there exists a nonzero vector \( v \in {\mathbb {R}}^k \) such that

$$\begin{aligned} v_{{{\mathrm{{\mathcal {C}}}}}}=0, \quad v_{\mathcal {P}}\ge 0, \quad v_{\mathcal {N}}\le 0, \end{aligned}$$
(23)

and

$$\begin{aligned} w^{\mathrm{T}}\left( \varPi (x,u,t)/J_x\widetilde{F}_1(x,u,t)\right) v \ge 0, \end{aligned}$$
(24)

where

$$\begin{aligned} \varPi (x,u,t) \equiv \begin{pmatrix} J_x\widetilde{F}_1(x,u,t) & J_{(u,t)}\widetilde{F}_1(x,u,t)\\ J_x\widetilde{F}_2(x,u,t) & J_{(u,t)}\widetilde{F}_2(x,u,t) \end{pmatrix} \in {\mathbb {R}}^{(m+1)\times (m+1)}, \end{aligned}$$

and \( \varPi (x,u,t)/J_x\widetilde{F}_1(x,u,t) \) is the Schur complement of \( J_x\widetilde{F}_1(x,u,t)\) in \( \varPi (x,u,t) \).

In our case, for the \( {{\mathrm{MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+ \right) \), the Jacobian matrices are:

$$\begin{aligned} J\widetilde{F}_1(x,u,t) \equiv \left( \widetilde{A}\,\,\widetilde{B}\right) \end{aligned}$$

and

$$\begin{aligned} J\widetilde{F}_2 (x,u,t) \equiv \left( \widetilde{C}\,\,\widetilde{D}\right) \end{aligned}$$

where

$$\begin{aligned} \widetilde{A}&= A, \qquad \widetilde{B} = \left( B\,\,Ae\right), \qquad \widetilde{C} = \begin{pmatrix} tC+ue^\top A \\ 0 \end{pmatrix},\\ \widetilde{D}&= \begin{pmatrix} e^\top \left( A(x+te)+Bu+p\right) I + {{\mathrm{diag}}}(e^{\top }Bu) + tD &{} Cx+2tCe+ue^\top Ae+Du \\ -2u^\top &{} 2t \end{pmatrix}. \end{aligned}$$

In our case, if the Jacobian matrix block \( J_x\widetilde{F}_1(x,u,t) = A\) is nonsingular, then the Schur complement \( \varPi (x,u,t)/J_x\widetilde{F}_1(x,u,t) \) is

$$\begin{aligned} \left( \varPi (x,u,t)/J_x\widetilde{F}_1(x,u,t)\right) = \widetilde{D}-\widetilde{C}\widetilde{A}^{-1}\widetilde{B}. \end{aligned}$$
(25)
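Formula (25) is a one-line computation once the four blocks are available. A sketch in numpy (solving with \(\widetilde{A}\) instead of forming its inverse explicitly, which is the standard numerical choice; the generic block names are assumptions):

```python
import numpy as np

def schur_complement(A, B, C, D):
    """Schur complement of the block A in [[A, B], [C, D]],
    i.e. D - C A^{-1} B, assuming A is nonsingular."""
    return D - C @ np.linalg.solve(A, B)
```

For example, with \(A = 2I_2\), \(B = (2, 4)^\top\), \(C = (1, 1)\) and \(D = (5)\), the Schur complement is \(5 - (1 + 2) = 2\).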

Proposition 5.2

If the matrices \(\widetilde{A}\) and \(\widetilde{D}\) are nonsingular for any \(z \in {\mathbb {R}}^{m+1}\), then the Jacobian matrix \(\mathcal {A}\) for the associated merit function is nonsingular.

Proof

It is easy to check that

$$\begin{aligned} \mathcal {A} = \begin{pmatrix} D_a+D_b\widetilde{A} &{} D_b\widetilde{B}\\ \widetilde{C} &{} \widetilde{D} \end{pmatrix}. \end{aligned}$$

\(\mathcal {A}\) is a nonsingular matrix if and only if the sub-matrix \(D_a+D_b\widetilde{A}\) and its Schur complement are nonsingular, and they are nonsingular if and only if the matrices \(\widetilde{A}\) and \(\widetilde{D}\) are nonsingular. \(\square \)

The following theorem is [8, Theorem 9.4.4]. For the sake of completeness, we provide a proof here.

Theorem 5.2

A point \((x, u, t)\in {\mathbb {R}}^{m+1}\) is a solution of the \( {{\mathrm{MixCP}}}(\widetilde{F}_1, \widetilde{F}_2,{\mathbb {R}}^k_+)\) if and only if \((x, u, t)\) is an FB regular point of \( \theta _{FB}^{{{\mathrm{MixCP}}}}\) and \((x, u, t)\) is a stationary point of \( \theta _{FB}^{{{\mathrm{MixCP}}}}\).

Proof

Suppose that \(z^{*}=\left( x^{*},u^{*},t^{*}\right) \in {{\mathrm{SOL-MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) \). Then, \(z^{*}\) is a global minimum and hence a stationary point of \( \theta _{FB}^{{{\mathrm{MixCP}}}}\). Moreover, \((x^{*},\widetilde{F}_1(z^{*}))\in {{\mathrm{{\mathcal {C}}}}}\left( {\mathbb {R}}^k_+\right) \), so \(\mathcal {P}=\mathcal {N}=\emptyset \). Therefore, the FB regularity of \(z^{*}\) holds vacuously, because there is no nonzero vector w satisfying the sign conditions \(w_{{{\mathrm{{\mathcal {C}}}}}}=0\), \(w_{\mathcal {P}}>0\), \(w_{\mathcal {N}}<0\). Conversely, suppose that \( z^*=(x^*, u^*, t^*)\) is FB regular and a stationary point of \( \theta _{FB}^{{{\mathrm{MixCP}}}}\). It follows that \(\nabla \theta _{FB}^{{{\mathrm{MixCP}}}}\left( z^*\right) = 0\), i.e.:

$$\begin{aligned} \mathcal {A}^\top \mathbb {F}_{FB}^{{{\mathrm{MixCP}}}} = \begin{pmatrix} D_a+D_bJ_x\widetilde{F}_1\left( z^*\right) &{} J_x\widetilde{F}_2\left( z^*\right) \\ D_bJ_{\left( u,t\right) }\widetilde{F}_1\left( z^*\right) &{} J_{\left( u,t\right) }\widetilde{F}_2\left( z^*\right) \end{pmatrix} \mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}=0, \end{aligned}$$

where

$$\begin{aligned}&D_a= {{\mathrm{diag}}}\left( \frac{x_i^*}{\sqrt{(x_i^*)^2 + \widetilde{F}_1^i\left( z^*\right) ^2}} - 1 \right), \qquad D_b={{\mathrm{diag}}}\left( \frac{\widetilde{F}_1^i\left( z^*\right) }{\sqrt{(x_i^*)^2 + \widetilde{F}_1^i\left( z^*\right) ^2}} - 1 \right),\\&i=1, \ldots , k. \end{aligned}$$

Hence, for any \(w \in {\mathbb {R}}^{m+1}\), we have

$$\begin{aligned} w^{\top } \begin{pmatrix} D_a+D_bJ_x\widetilde{F}_1\left( z^*\right) &{} J_x\widetilde{F}_2\left( z^*\right) \\ D_bJ_{(u,t)}\widetilde{F}_1\left( z^*\right) &{} J_{(u,t)}\widetilde{F}_2\left( z^*\right) \end{pmatrix} \mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}=0. \end{aligned}$$
(26)

Assume that \(z^{*}\) is not a solution of \({{\mathrm{MixCP}}}\). Then, we have that the index set \(\mathcal {R}\) is not empty. Define \(v\equiv D_b\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\). We have

$$\begin{aligned} v_{\mathcal {C}} =0, \qquad v_{\mathcal {P}} >0, \qquad v_{\mathcal {N}} <0. \end{aligned}$$

Take w with

$$\begin{aligned} w_{\mathcal {C}} =0, \qquad w_{\mathcal {P}} >0, \qquad w_{\mathcal {N}} <0. \end{aligned}$$

From the definition of \(D_a\) and \(D_b\), we know that \(D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\) and \(D_b\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\) have the same sign. Therefore,

$$\begin{aligned} w^{\top }\left( D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\right) = w^{\top }_{\mathcal {C}}\left( D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\right) _{\mathcal {C}} + w^{\top }_{\mathcal {P}}\left( D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\right) _{\mathcal {P}} +w^{\top }_{\mathcal {N}}\left( D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\right) _{\mathcal {N}} > 0.\nonumber \\ \end{aligned}$$
(27)

By the regularity of \(J\widetilde{F}_1\left( z\right) ^{\top }\), we have

$$\begin{aligned} w^{\top }J\widetilde{F}_1\left( z\right) ^{\top }\left( D_a\mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}\right) = w^{\top } J\widetilde{F}_1\left( z\right) ^{\top }w \ge 0. \end{aligned}$$
(28)

The inequalities (27) and (28) together contradict condition (26). Hence, \( \mathcal {R} = \emptyset \). It means that \(z^{*}\) is a solution of \({{\mathrm{MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k\right) \). \(\square \)

Algorithms

For solving a complementarity problem, many different algorithms are available. Common approaches include numerical methods for systems of nonlinear equations (such as Newton’s method [28]), the interior point method (Karmarkar’s algorithm [29]), the projection iterative method [30] and the multi-splitting method [31]. In the previous sections, we provided sufficient conditions for using FB regularity and stationarity to identify a solution of the \({{\mathrm{MixCP}}}\) problem. In this section, we find a solution of the \({{\mathrm{LCP}}}\) by finding a solution of the \({{\mathrm{MixCP}}}\) into which it was converted. One convenient way to do this is Newton’s method, as follows:

Algorithm

(Newton’s method)

  • Given initial data \(z^0 \in {\mathbb {R}}^{m+1}\) and tolerance \(r = 10^{-7}\).

  • Step 1: Set \(k = 0\).

  • Step 2: If \(\Vert \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) \Vert \le r\), then stop.

  • Step 3: Find a direction \(d^k \in {\mathbb {R}}^{m+1}\) such that

    $$\begin{aligned} \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) + \mathcal {A}^{\top }\left( z^k\right) d^k = 0. \end{aligned}$$
  • Step 4: Set \(z^{k+1} := z^k + d^k \) and \(k := k + 1\), go to Step 2.
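The steps above are ordinary Newton iteration on the equation \(\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}(z)=0\). A generic sketch (the callables `F` and `Jac` are assumptions, standing for \(\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\) and the Jacobian \(\mathcal {A}^{\top }\) of the concrete problem):

```python
import numpy as np

def newton(F, Jac, z0, r=1e-7, max_iter=100):
    """Steps 1-4 of the Newton scheme: stop when ||F(z)|| <= r,
    otherwise solve Jac(z) d = -F(z) and set z := z + d."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Fz = F(z)
        if np.linalg.norm(Fz) <= r:
            break
        d = np.linalg.solve(Jac(z), -Fz)   # Step 3: Jac(z) d = -F(z)
        z = z + d                          # Step 4: update the iterate
    return z
```

As a smoke test on a toy smooth system (not the FB system of the paper), solving \(z_0^2 = 4\), \(z_1 = 1\) from \(z^0 = (3, 0)\) converges to \((2, 1)\).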

If the Jacobian matrix \(\mathcal {A}^{\top }\) is nonsingular, then the direction \(d^k \in {\mathbb {R}}^{m+1}\) can be found at each step. The following theorem, based on an idea similar to the one used in [32], shows that Newton’s method can efficiently solve the \({{\mathrm{LCP}}}\) on an extended second order cone (i.e., solve the problem within polynomial time), by finding a solution of the \({{\mathrm{MixCP}}}\):

Theorem 6.1

Suppose that the Jacobian matrix \(\mathcal {A}\) is nonsingular. Then, Newton’s method for \({{\mathrm{MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) \) converges at least quadratically to

$$\begin{aligned} z^* \in {{\mathrm{SOL-MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) , \end{aligned}$$

if it starts with initial data \(z^0\) sufficiently close to \(z^*\).

Proof

Suppose that the starting point \(z^0\) is close to the solution \(z^*\), and that \(\mathcal {A}\) is Lipschitz continuous. Then there exist \(\rho>0\), \(\beta _1>0\), \(\beta _2 >0\) such that, for all \(z_1, z_2\) with \(||z_1 - z^* ||< \rho \) and \(||z_2 - z^* ||< \rho \), there holds \(||\mathcal {A}^{-1}(z_1)||< \beta _1\) and \(||\mathcal {A} \left( z_1\right) - \mathcal {A} \left( z_2\right) ||\le \beta _2||z_1 - z_2||\). By the definition of Newton’s method, we have

$$\begin{aligned} \left\Vert z^{k+1} - z^* \right\Vert&= \left\Vert z^k - z^* - \mathcal {A}^{-1}\left( z^k\right) \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) \right\Vert \\&= \left\Vert \mathcal {A}^{-1}\left( z^k\right) \left[ \mathcal {A}\left( z^k\right) \left( z^k - z^*\right) - \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) - \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^*\right) \right) \right] \right\Vert , \end{aligned}$$

because \(\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^*\right) =0\) when \(z^* \in {{\mathrm{SOL-MixCP}}}\). By Taylor’s theorem, we have

$$\begin{aligned} \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) - \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^*\right) = \int _{0}^{1} \mathcal {A}\left( z^{k} + s\left( z^* - z^k\right) \right) \left( z^k - z^*\right) ds, \end{aligned}$$

so

$$\begin{aligned}&\left\Vert \mathcal {A}\left( z^k\right) \left( z^k - z^*\right) - \left( \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) - \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^*\right) \right) \right\Vert \\&\quad = \left\Vert \int _{0}^{1}\left[ \mathcal {A}\left( z^k\right) - \mathcal {A} \left( z^k + s\left( z^* - z^k\right) \right) \right] ds\left( z^k - z^*\right) \right\Vert \\&\quad \le \int _{0}^{1}\left\Vert \mathcal {A}\left( z^k\right) - \mathcal {A} \left( z^k + s\left( z^* - z^k\right) \right) \right\Vert ds\left\Vert z^k - z^*\right\Vert \\&\quad \le \left\Vert z^k - z^*\right\Vert ^2 \int _{0}^{1}\beta _2 s\,ds = \frac{1}{2}\beta _2 \left\Vert z^k - z^*\right\Vert ^2. \end{aligned}$$

Since \(||z^{k} - z^* ||< \rho \), we also have \(||\mathcal {A}^{-1}(z^k)||< \beta _1\), and therefore

$$\begin{aligned} ||z^{k+1} - z^* ||\le \frac{1}{2}\beta _1 \beta _2 ||z^{k} - z^*||^2. \end{aligned}$$

\(\square \)

Another widely used algorithm was proposed by Levenberg and Marquardt [33]. The Levenberg–Marquardt algorithm can approach second order convergence speed without requiring the Jacobian matrix to be nonsingular. We can approximate the Hessian matrix by:

$$\begin{aligned} \mathcal {H}(z)=\mathcal {A}^{\top }(z)\mathcal {A}(z), \end{aligned}$$

and the gradient by:

$$\begin{aligned} \mathcal {G}(z)=\mathcal {A}^{\top }(z)\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}(z). \end{aligned}$$

Hence, the update step will be

$$\begin{aligned} z^{k+1} = z^k - \left[ \mathcal {A}^{\top }\left( z^k\right) \mathcal {A}\left( z^k\right) +\mu \mathbb {I}\right] ^{-1}\mathcal {A}^{\top }\left( z^k\right) \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) . \end{aligned}$$

As we can see, the Levenberg–Marquardt algorithm is a quasi-Newton method for an unconstrained problem. When \(\mu \) equals zero, the update step is just Newton’s method with the approximated Hessian matrix. The Levenberg–Marquardt algorithm typically needs more iterations than Newton’s method to find a solution, but it also works for a singular Jacobian. The greater the parameter \(\mu \), the slower the convergence becomes. The Levenberg–Marquardt algorithm is as follows:

Algorithm

(Levenberg–Marquardt)

  • Given initial data \(z^0 \in {\mathbb {R}}^{m+1}\), \(\mu = 0.005\) and \(r = 10^{-7}\).

  • Step 1: Set \(k = 0\).

  • Step 2: If \(\Vert \mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) \Vert \le r\), stop.

  • Step 3: Find a direction \(d^k \in {\mathbb {R}}^{m+1}\) such that

    $$\begin{aligned} \mathcal {A}\left( z^k\right) ^{\top }\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\left( z^k\right) + \left[ \mathcal {A}^{\top }\left( z^k\right) \mathcal {A}\left( z^k\right) +\mu \mathbb {I}\right] d^k = 0. \end{aligned}$$
  • Step 4: Set \(z^{k+1} := z^k + d^k \) and \(k := k + 1\), go to Step 2.
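The same generic interface as for Newton’s method also covers the Levenberg–Marquardt step, which replaces the Newton solve by the regularized normal equations above; a sketch (again, the callables `F` and `Jac` are assumptions standing for \(\mathbb {F}^{{{\mathrm{MixCP}}}}_{FB}\) and \(\mathcal {A}\)):

```python
import numpy as np

def levenberg_marquardt(F, Jac, z0, mu=0.005, r=1e-7, max_iter=200):
    """Solve (A^T A + mu I) d = -A^T F(z) and update z := z + d.
    Unlike the Newton step, this linear system is solvable even for a
    singular A, since A^T A + mu I is positive definite for mu > 0."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Fz = F(z)
        if np.linalg.norm(Fz) <= r:
            break
        A = Jac(z)
        H = A.T @ A + mu * np.eye(len(z))   # approximate Hessian
        g = A.T @ Fz                        # gradient of the merit function
        z = z + np.linalg.solve(H, -g)      # Step 3 + Step 4
    return z
```

On the same toy system \(z_0^2 = 4\), \(z_1 = 1\), the iteration from \(z^0 = (3, 0)\) converges to \((2, 1)\), slightly more slowly than the pure Newton step because of the damping term \(\mu \mathbb {I}\).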

Theorem 6.2

[34] Without the nonsingularity assumption on the Jacobian matrix \(\mathcal {A}\), Levenberg–Marquardt algorithm for \({{\mathrm{MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) \) converges at least quadratically to

$$\begin{aligned} z^* \in {{\mathrm{SOL-MixCP}}}\left( \widetilde{F}_1,\widetilde{F}_2,{\mathbb {R}}^k_+\right) , \end{aligned}$$

if it starts with initial data \(z^0\) sufficiently close to \(z^*\).

The proof is omitted.

A Numerical Example

In this section, we will provide a numerical example for \({{\mathrm{LCP}}}\) on extended second order cones. Let L(3, 2) be an extended second order cone defined by (1). Following the notation in Theorem 5.1, let \(z=(x,u)\), \(\hat{z}=(x-\Vert u\Vert ,u)\), \(\tilde{z}=(x-t,u,t)\) and \(r=(p,q) = \left( (-55,-26,50)^{\top }, (-19,-26)^{\top } \right) \) with \(x,p\in {\mathbb {R}}^3\) , \(u,q\in {\mathbb {R}}^2\) and \(t\in {\mathbb {R}}\). Consider

$$\begin{aligned} T=\begin{pmatrix}A &{} B\\ C &{} D \end{pmatrix} = \left( \begin{array}{r@{\quad }r@{\quad }r@{\quad }r@{\quad }r} 26 &{} 15 &{} 3 &{} 51 &{} -42 \\ -7 &{} -39 &{} -16 &{} -17 &{} 18 \\ 32 &{} 23 &{} 40 &{} -38 &{} 46 \\ 6 &{} -22 &{} -28 &{} -17 &{} 27 \\ -38 &{} -25 &{} 24 &{} 47 &{} -16 \end{array} \right) , \end{aligned}$$

with \(A\in {\mathbb {R}}^{3\times 3}\), \(B\in {\mathbb {R}}^{3\times 2}\), \(C\in {\mathbb {R}}^{2\times 3}\) and \(D\in {\mathbb {R}}^{2\times 2}\). It is easy to check that the square matrices T, A and D are nonsingular. By item (vi) of Theorem 5.1, we can reformulate this \({{\mathrm{LCP}}}\) as a smooth \({{\mathrm{MixCP}}}\) problem. We use the Levenberg–Marquardt algorithm to find a solution of the FB-based equation formulation (22) of the \({{\mathrm{MixCP}}}\) problem. The convergence point is:

$$\begin{aligned} \tilde{z}^*&= (x - t,u,t) \\&= \left( \left( 0, \frac{439}{660}, 0\right) ^{\top }, \left( \frac{341}{1460},\frac{724}{2683}\right) ^{\top }, \frac{1271}{3582} \right) . \end{aligned}$$

We need to check the FB regularity of \(\tilde{z}^*\). It is easy to show that the partial Jacobian matrix of \(\widetilde{F}_1\left( \tilde{z}^*\right) \)

$$\begin{aligned} J_x\widetilde{F}_1\left( \tilde{z}^*\right) = \widetilde{A} = \left( \begin{array}{r@{\quad }r@{\quad }r} 26 &{} 15 &{} 3 \\ -7 &{} -39 &{} -16 \\ 32 &{} 23 &{} 40 \end{array} \right) \end{aligned}$$

is nonsingular. Moreover, we have that

$$\begin{aligned} x - t = \left( 0, \frac{439}{660}, 0\right) ^{\top } \ge 0, \qquad \widetilde{F}_1\left( \tilde{z}^*\right) = \left( \frac{3626}{145}, 0, \frac{12{,}148}{185}\right) ^{\top } \ge 0, \end{aligned}$$

and therefore

$$\begin{aligned} \left<x - t, \widetilde{F}_1\left( \tilde{z}^*\right) \right> = 0. \end{aligned}$$

That is, \((x - t, \widetilde{F}_1\left( \tilde{z}^*\right) )\in {{\mathrm{{\mathcal {C}}}}}({\mathbb {R}}^3_+)\), so the index sets \(\mathcal {P}=\mathcal {N}=\emptyset \). The matrix \(\widetilde{A} \) is invertible. In addition, we can calculate the Schur complement of \(\varPi (\tilde{z}^*) \) with respect to \( J_x\widetilde{F}_1\left( \tilde{z}^*\right) \):

$$\begin{aligned} \left( \varPi \left( \tilde{z}^*\right) /J_x\widetilde{F}_1\left( \tilde{z}^*\right) \right) = \widetilde{D} - \widetilde{C}\widetilde{A}^{-1}\widetilde{B} = \left( \begin{array}{r@{\quad }r@{\quad }r} \frac{3991}{58} &{} \frac{11{,}387}{95} &{} -\frac{7203}{268} \\ \frac{15{,}910}{93} &{} \frac{5185}{163} &{} -\frac{5941}{248} \\ -\frac{341}{740} &{} -\frac{741}{1373} &{} \frac{1271}{1791} \end{array} \right) . \end{aligned}$$

The FB regularity of \(\tilde{z}^{*}\) holds, as there is no nonzero vector w satisfying the required sign conditions. Then, we compute the gradient of the merit function, which is

$$\begin{aligned} \mathcal {A}^\top \mathbb {F}_{FB}^{{{\mathrm{MixCP}}}}&= \begin{pmatrix} D_a+D_bJ_x\widetilde{F}_1\left( \tilde{z}^*\right) &{} J_{x}\widetilde{F}_2\left( \tilde{z}^*\right) \\ D_bJ_{(u,t)}\widetilde{F}_1\left( \tilde{z}^*\right) &{} J_{(u,t)}\widetilde{F}_2\left( \tilde{z}^*\right) \end{pmatrix} \mathbb {F}_{FB}^{{{\mathrm{MixCP}}}} \\&= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} -\frac{598}{605} &{} 7 &{} 0 &{} \frac{4844}{349} &{} \frac{345}{1238} &{} 0 \\ -\frac{32}{21{,}195} &{} 39 &{} 0 &{} -\frac{3946}{491} &{} -\frac{4031}{441} &{} 0 \\ 0 &{} 16 &{} -\frac{413}{415} &{} -\frac{26}{7} &{} \frac{1754}{111} &{} 0 \\ -\frac{33}{12{,}610} &{} 7 &{} 0 &{} \frac{12{,}462}{139} &{} \frac{78{,}767}{701} &{} -\frac{341}{740} \\ -\frac{32}{21{,}195} &{} 39 &{} 0 &{} \frac{13{,}790}{131} &{} \frac{9451}{105} &{} -\frac{741}{1373} \\ 0 &{} 16 &{} 0 &{} -\frac{3341}{135} &{} -\frac{3233}{190} &{} \frac{1271}{1791} \end{array} \right) \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix} =0. \end{aligned}$$

Hence, \(\tilde{z}^*\) is a stationary point of \(\theta ^{{{\mathrm{MixCP}}}}_{FB}\). By Theorem 5.2, we conclude that \(\tilde{z}^*\) is a solution of the \({{\mathrm{MixCP}}}\) problem. By item (vi) of Theorem 5.1, we have that

$$\begin{aligned} z&= (x,u) \\&=\left( \left( \frac{1271}{3582}, \frac{1072}{1051}, \frac{1271}{3582}\right) ^{\top }, \left( \frac{341}{1480},\frac{724}{2683}\right) ^{\top }\right) \end{aligned}$$

is a solution of the \({{\mathrm{LCP}}}(T,r,L)\) problem.
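The nonsingularity of \(T\), \(A\) and \(D\) used throughout this example can be confirmed numerically. A minimal numpy check (the block partitioning follows the definition of \(T\) above):

```python
import numpy as np

# Data of the example: T = [[A, B], [C, D]] with A 3x3, B 3x2, C 2x3, D 2x2.
T = np.array([[ 26,  15,   3,  51, -42],
              [ -7, -39, -16, -17,  18],
              [ 32,  23,  40, -38,  46],
              [  6, -22, -28, -17,  27],
              [-38, -25,  24,  47, -16]], dtype=float)
A, B = T[:3, :3], T[:3, 3:]
C, D = T[3:, :3], T[3:, 3:]

# All three determinants are nonzero, so T, A and D are nonsingular.
for M in (T, A, D):
    assert abs(np.linalg.det(M)) > 1e-8
```

In particular, \(\det A = -31{,}211\) and \(\det D = -997\), so the partial Jacobian \(J_x\widetilde{F}_1 = \widetilde{A} = A\) is indeed invertible.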

Conclusions

In this paper, we studied how to solve a linear complementarity problem on an extended second order cone. By checking the stationarity and FB regularity of a point, we can verify whether it is a solution of the mixed complementarity problem. The conversion of a linear complementarity problem into a mixed complementarity problem reduces the complexity of the original problem. The connection between a linear complementarity problem on an extended second order cone and a mixed complementarity problem on a non-negative orthant will be useful for further research on applications to practical problems, such as portfolio selection and signal processing.

References

  1. Karush, W.: Minima of functions of several variables with inequalities as side conditions. Master Thesis, University of Chicago (1939)

  2. Dantzig, G.B., Cottle, R.W.: Positive (semi-) definite matrices and mathematical programming. Tech. Rep., California Univ Berkeley Operations Research Center (1963)

  3. Cottle, R.W., Dantzig, G.B.: Complementary pivot theory of mathematical programming. Linear Algebra Appl. 1(1), 103–125 (1968)

  4. Mangasarian, O.L.: Linear complementarity problems solvable by a single linear program. Math. Program. 10(1), 263–270 (1976)

  5. Garcia, C.B.: Some classes of matrices in linear complementarity theory. Math. Program. 5(1), 299–310 (1973)

  6. Borwein, J.M., Dempster, M.A.H.: The linear order complementarity problem. Math. Oper. Res. 14(3), 534–558 (1989)

  7. Alizadeh, F., Goldfarb, D.: Second-order cone programming. Math. Program. 95(1), 3–51 (2003)

  8. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. II. Springer, New York (2003)

  9. Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)

  10. Jaillet, P., Lamberton, D., Lapeyre, B.: Variational inequalities and the pricing of American options. Acta Appl. Math. 21(3), 263–289 (1990)

  11. Yonekura, K., Kanno, Y.: Second-order cone programming with warm start for elastoplastic analysis with von Mises yield criterion. Optim. Eng. 13(2), 181–218 (2012). https://doi.org/10.1007/s11081-011-9144-4

  12. Zhang, L.L., Li, J.Y., Zhang, H.W., Pan, S.H.: A second order cone complementarity approach for the numerical solution of elastoplasticity problems. Comput. Mech. 51(1), 1–18 (2013). https://doi.org/10.1007/s00466-012-0698-6

  13. Luo, G., An, X., Xia, J.: Robust optimization with applications to game theory. Appl. Anal. 88(8), 1183–1195 (2009). https://doi.org/10.1080/00036810903157196

  14. Nishimura, R., Hayashi, S., Fukushima, M.: Robust Nash equilibria in \(N\)-person non-cooperative games: uniqueness and reformulation. Pac. J. Optim. 5(2), 237–259 (2009)

  15. Andreani, R., Friedlander, A., Mello, M.P., Santos, S.A.: Box-constrained minimization reformulations of complementarity problems in second-order cones. J. Global Optim. 40(4), 505–527 (2008). https://doi.org/10.1007/s10898-006-9109-x

  16. Németh, S.Z., Zhang, G.: Extended Lorentz cones and mixed complementarity problems. J. Glob. Optim. 62(3), 443–457 (2015)

  17. Sznajder, R.: The Lyapunov rank of extended second order cones. J. Glob. Optim. 66(3), 585–593 (2016)

  18. Ferreira, O.P., Németh, S.Z.: How to project onto extended second order cones (2016). arXiv:1610.08887v2

  19. Németh, S.Z., Zhang, G.: Extended Lorentz cones and variational inequalities on cylinders. J. Optim. Theory Appl. 168(3), 756–768 (2016). https://doi.org/10.1007/s10957-015-0833-6

  20. Markowitz, H.: Portfolio selection. J. Finance 7(1), 77–91 (1952)

  21. Roy, A.D.: Safety first and the holding of assets. Econ. J. Econ. Soc. 20(3), 431–449 (1952)

  22. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. I. Springer, New York (2003)

  23. Karamardian, S.: Generalized complementarity problem. J. Optim. Theory Appl. 8, 161–168 (1971)

  24. Zhang, F.: The Schur Complement and Its Applications, vol. 4. Springer Science & Business Media, Berlin (2006)

  25. Sohrab, H.H.: Basic Real Analysis, vol. 231. Springer, New York (2003)

  26. Fischer, A.: A special Newton-type optimization method. Optimization 24(3–4), 269–284 (1992)

  27. Fischer, A.: A Newton-type method for positive-semidefinite linear complementarity problems. J. Optim. Theory Appl. 86(3), 585–608 (1995)

  28. Atkinson, K.E.: An Introduction to Numerical Analysis. Wiley, London (2008)

  29. Karmarkar, N.: A new polynomial-time algorithm for linear programming. In: Proceedings of the 16th Annual ACM Symposium on Theory of Computing, pp. 302–311. ACM (1984)

  30. Mangasarian, O.L.: Solution of symmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 22(4), 465–485 (1977)

  31. O’Leary, D.P., White, R.E.: Multi-splittings of matrices and parallel solution of linear systems. SIAM J. Algebraic Discrete Methods 6(4), 630–640 (1985)

  32. Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming, vol. 228. Springer, New York (2015)

  33. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963)

  34. Yamashita, N., Fukushima, M.: On the rate of convergence of the Levenberg–Marquardt method. In: Alefeld, G., Chen, X. (eds.) Topics in Numerical Analysis, pp. 239–249. Springer, New York (2001)


Acknowledgements

The authors are grateful to the referees for their helpful comments.

Author information

Correspondence to Sándor Zoltán Németh.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Németh, S.Z., Xiao, L. Linear Complementarity Problems on Extended Second Order Cones. J Optim Theory Appl 176, 269–288 (2018). https://doi.org/10.1007/s10957-018-1220-x


Keywords

  • Complementarity problem
  • Extended second order cone
  • Conic optimization

Mathematics Subject Classification

  • 90C33
  • 90C25