### Keywords

LCP; Splitting method; SOR method

Splitting methods were originally proposed as a generalization of the classical SOR method for solving a system of linear equations [8,25], and in the late 1970s they were extended to the linear complementarity problem (LCP; cf. Linear complementarity problem) [1,2, Chap. 5], [10,13,18]. These methods are iterative and are best suited for problems in which exploitation of sparsity is important, such as large sparse linear programs and the discretization of certain elliptic boundary value problems with obstacles.

In the LCP, we seek an *x* = (*x*_{1}, …, *x*_{n}) ∊ **R**^{n} solving the following system of nonlinear equations:

\[ x = \max\{l, \min\{u,\; x - (Mx + q)\}\}, \tag{1} \]

where *M* = [*m*_{ij}]_{i,j=1,…,n} ∊ **R**^{n×n}, *q* = (*q*_{1}, …, *q*_{n}) ∊ **R**^{n}, and the lower bound *l* = (*l*_{1}, …, *l*_{n}) and upper bound *u* = (*u*_{1}, …, *u*_{n}) are given. (Here max and min are understood to be taken componentwise, and we allow *l*_{i} = −∞ or *u*_{i} = ∞ for some *i*. The case of *l*_{i} = 0 and *u*_{i} = ∞ for all *i* corresponds to the standard LCP.)

In the splitting methods, we express *M* = *B* + *C* for some *B* ∊ **R**^{n×n} and *C* ∊ **R**^{n×n}; then, starting with any *x* ∊ **R**^{n}, we iteratively update *x* by solving the following equation for *x*′:

\[ x' = \max\{l, \min\{u,\; x' - (Bx' + Cx + q)\}\}, \tag{2} \]

and then replacing *x* with *x*′. Thus, at each iteration, we effectively replace *M* and *q* in the original problem by, respectively, *B* and *Cx* + *q*, obtaining a subproblem of the form (1) which we then solve to obtain the new iterate.

The key to the splitting methods lies in the choice of *B*. We should choose *B* to be a good approximation of *M*, so that the methods have rapid convergence, and, at the same time, such that *x*′ is easy to compute at each iteration (e.g., *B* is diagonal or upper/lower triangular). The best known choice, corresponding to the SOR method of C. Hildreth [3,7], is

\[ B = \omega^{-1} D + L, \tag{3} \]

where *D* and *L* denote, respectively, the diagonal and the strictly lower-triangular part of *M*, and ω ∊ (0, 2) is a relaxation parameter (see [2, p. 397], [13]). For this choice of *B*, and assuming *M* has positive diagonal entries, the components of *x*′ can be computed using an *n*-step backsolve:

\[ x'_i = \max\Big\{ l_i, \min\Big\{ u_i,\; x_i - \frac{\omega}{m_{ii}} \Big( \sum_{j<i} m_{ij} x'_j + \sum_{j \ge i} m_{ij} x_j + q_i \Big) \Big\} \Big\}, \qquad i = 1, \dots, n. \]

In the case where *l*_{i} = −∞ and *u*_{i} = ∞ for all *i*, the above iteration reduces to the classical SOR method for solving the system of linear equations *Mx* + *q* = 0. More generally, we can choose *B* to be block-lower/upper-triangular with *p* diagonal blocks, *p* ≤ *n*, with the diagonal and triangular blocks possibly coming from *M*. Then, each block of components of *x*′ can be computed recursively by solving an LCP of dimension equal to the block size. Other choices of *B* are discussed in [2, Chap. 5], [13] and below. Computation with the (block) SOR method for solving sparse linear programs and LCPs with symmetric positive definite *M* is investigated in [1,4,14,16].
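The componentwise SOR update above can be sketched in NumPy. This is a minimal illustration, not the article's pseudocode; the function name `projected_sor` and its defaults are assumptions. Because `x` is overwritten in place during the sweep, `M[i] @ x` automatically uses the already-updated components *x*′_{j} for *j* < *i*:

```python
import numpy as np

def projected_sor(M, q, l, u, omega=1.2, iters=200):
    """Projected SOR sweeps for the box-constrained LCP (1), i.e. the
    splitting (2) with B = D/omega + L. Illustrative sketch; M is
    assumed to have positive diagonal entries and omega in (0, 2)."""
    x = np.clip(np.zeros(len(q)), l, u)
    for _ in range(iters):
        for i in range(len(q)):
            # residual: updated x'_j for j < i, old x_j for j >= i
            r = M[i] @ x + q[i]
            x[i] = min(u[i], max(l[i], x[i] - omega * r / M[i, i]))
    return x
```

With `l = 0` and `u = +inf` this is the projected SOR method for the standard LCP; removing the clipping recovers classical SOR for *Mx* + *q* = 0.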

An important special case arises in finding the least 2-norm solution *y* of a system of linear inequalities *Ay* ≤ *b*, where *A* ∊ **R**^{n×m} and *b* ∊ **R**^{n} are given, with *A* having nonzero rows. (Here, ^{⊺} denotes the transpose.) Specifically, by attaching nonnegative Lagrange multipliers (cf. also Lagrangian multipliers methods for convex programming) *x* = (*x*_{1}, …, *x*_{n}) to the constraints *Ay* ≤ *b*, we obtain the following dual problem in *x*:

\[ \min_{x \ge 0} \; \tfrac{1}{2} \| A^{\top} x \|^2 + b^{\top} x, \]

whose solution, via *y* + *A*^{⊺}*x* = 0 [7], solves the LCP (1) with *M* = *AA*^{⊺}, *q* = *b* and *l*_{i} = 0, *u*_{i} = ∞ for all *i* [7, p. 4]. In this case, *M* is symmetric positive semidefinite with positive diagonal entries, and the *x*′ computed in the SOR method is alternatively given by the formula

\[ x'_i = x_i + \max\Big\{ -x_i,\; \omega\, \frac{A_i y^i - b_i}{\|A_i\|^2} \Big\}, \qquad y^{i+1} = y^i - (x'_i - x_i) A_i^{\top}, \qquad i = 1, \dots, n, \]

where *y*^{1} = − *A*^{⊺}*x* and *A*_{i} denotes the *i*th row of *A*. The above iteration is reminiscent of the *Agmon–Motzkin–Fourier relaxation method* for solving the inequalities *Ay* ≤ *b* and, in fact, differs from the latter only in that the term − *x*_{i}, rather than zero, appears inside the max.
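The dual iteration can be sketched as follows; the function name `hildreth_sor` and its defaults are illustrative assumptions, not the article's code. Maintaining the running vector *y* = − *A*^{⊺}*x* means each component update costs only O(*m*) operations:

```python
import numpy as np

def hildreth_sor(A, b, omega=1.0, iters=500):
    """Hildreth-style SOR sweeps on the dual of the least 2-norm
    problem min ||y||^2/2 s.t. Ay <= b (illustrative sketch).
    A is assumed to have nonzero rows."""
    n, m = A.shape
    x = np.zeros(n)
    y = np.zeros(m)                    # y = -A^T x (x starts at zero)
    row_norm2 = (A * A).sum(axis=1)    # m_ii = ||A_i||^2
    for _ in range(iters):
        for i in range(n):
            # the term -x[i] inside the max distinguishes this from
            # the Agmon-Motzkin-Fourier relaxation method
            step = max(-x[i], omega * (A[i] @ y - b[i]) / row_norm2[i])
            x[i] += step
            y -= step * A[i]           # maintain y = -A^T x
    return x, y
```

On termination, `y` approximates the least 2-norm point satisfying *Ay* ≤ *b*, and `x` the corresponding multipliers solving the LCP with *M* = *AA*^{⊺}, *q* = *b*.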

In the case where *M* is symmetric (not necessarily positive semidefinite) and the function

\[ f(x) = \tfrac{1}{2} x^{\top} M x + q^{\top} x \tag{4} \]

is bounded below on the box *l* ≤ *x* ≤ *u*, it is known that the *x* generated by the splitting method (2) converges to a solution of the LCP at a linear rate (in the root sense [17]), provided that (*B*, *C*) is a *regular Q-splitting* in the sense that *B* − *C* is positive definite and, for every *x*, there exists a solution *x*′ to (2) [12, Thm. 3.2]. (Earlier results of this kind, which further assumed that *M* is positive semidefinite or nondegenerate, can be found in [2, Chap. 5], [5,11,19,20] and references therein.) For the SOR method, corresponding to *B* given by (3) with ω ∊ (0, 2), it can be verified that (*B*, *C*) is a regular Q-splitting provided *M* has positive diagonal entries. The proof of the above convergence result uses two key facts about the LCP: namely, that *f*(*x*) assumes only a finite number of values on the solution set, and that the distance to the solution set from any point *x* near the solution set is of the order of the 'residual' at *x*, defined to be the difference between the two sides of (1). In addition, the function *f*(*x*) can be used in a line-search strategy to accelerate convergence of the splitting methods [2, Sec. 5.5].
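Both quantities used in this analysis, the merit function (4) and the residual of (1), are cheap to compute. The sketch below (with assumed example data, not from the article) also illustrates why *f* works as a merit function: each coordinate update of a projected Gauss–Seidel sweep (ω = 1) exactly minimizes *f* over *x*_{i} ∊ [*l*_{i}, *u*_{i}], so *f* never increases along the iterates when *M* is symmetric with positive diagonal:

```python
import numpy as np

def merit(M, q, x):
    # merit function (4), for symmetric M: f(x) = x'Mx/2 + q'x
    return 0.5 * x @ (M @ x) + q @ x

def natural_residual(M, q, l, u, x):
    # difference between the two sides of (1); zero exactly at solutions
    return x - np.clip(x - (M @ x + q), l, u)

# Assumed example data: projected Gauss-Seidel sweeps on a 2x2 LCP.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
l = np.zeros(2)
u = np.full(2, np.inf)
x = np.array([2.0, 2.0])
f_vals = [merit(M, q, x)]
for _ in range(30):
    for i in range(len(x)):
        # each update minimizes f over x_i in [l_i, u_i]
        x[i] = max(l[i], min(u[i], x[i] - (M[i] @ x + q[i]) / M[i, i]))
    f_vals.append(merit(M, q, x))
```

A line-search acceleration would use `merit` to choose a step along the direction *x*′ − *x*, and `natural_residual` as a practical stopping test.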

In the case where *M* is not symmetric but is positive semidefinite, it is known that the *x* generated by the splitting method (2) converges to a solution of the LCP at a linear rate (in the root sense), provided that *B* − *M* is symmetric positive definite [2, Thm. 5.6.1], [24, Cor. 5.3]. One choice of *B* that satisfies this assumption is

\[ B = M - L - L^{\top} + \widehat{D}, \]

where *L* denotes the strictly lower-triangular part of *M* and \( \widehat{D} \) is any *n* × *n* diagonal matrix such that \( \widehat{D} - L - L^{\top} \) is positive definite. This choice of *B* is upper-triangular, and hence the corresponding *x*′ can be computed in the order of *n*^{2} arithmetic operations using an *n*-step backsolve [22, Sec. 6], [24]. Computationally, the asymmetry of *M* makes it difficult to incorporate line-search strategies, since no 'natural' merit function analogous to (4) is known. As a result, on problems where *M* is highly asymmetric, such as the LCP formulation of linear programs, the convergence of the splitting methods can be slow. Thus, accelerating convergence of the splitting methods on asymmetric problems remains a challenge. In this direction, we point out related methods based on projection or operator splitting (see [6,21] and references therein). These methods are applicable to the case where *M* is positive semidefinite (not necessarily symmetric), and the major part of their iterations also involves solving a matrix-splitting equation of the form (2), except that the solution *x*′ must undergo additional transformations to yield the new iterate *x*. These methods, which may be viewed as a hybrid form of the splitting methods, admit some forms of line search and show good promise in computation.
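One iteration of (2) with the upper-triangular choice *B* = *M* − *L* − *L*^{⊺} + \( \widehat{D} \) can be sketched as below; the function name and the vector `dhat` are illustrative assumptions. Since *B* is upper-triangular, components are solved from the last to the first (the *n*-step backsolve):

```python
import numpy as np

def splitting_step_upper(M, q, l, u, x, dhat):
    """One pass of (2) with B = M - L - L^T + diag(dhat), where L is
    the strictly lower-triangular part of M. Illustrative sketch;
    diag(dhat) - L - L^T is assumed positive definite, so that
    B - M is symmetric positive definite as the theory requires."""
    n = len(q)
    L = np.tril(M, -1)                 # strictly lower-triangular part
    B = M - L - L.T + np.diag(dhat)    # upper-triangular
    rhs = (M - B) @ x + q              # C x + q with C = M - B
    xp = x.copy()
    for i in range(n - 1, -1, -1):     # backsolve: last component first
        # exact solution of the i-th component equation of (2):
        # the B[i, i] * xp[i] term in r cancels in the update below
        r = B[i] @ xp + rhs[i]
        xp[i] = max(l[i], min(u[i], xp[i] - r / B[i, i]))
    return xp
```

Each pass costs on the order of *n*^{2} arithmetic operations; repeating it yields the splitting iteration for asymmetric positive semidefinite *M*.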

In summary, building on the early work of Hildreth, H.B. Keller and others, splitting methods have been well developed over the last twenty years for solving the LCP (1) when the matrix *M* is either symmetric or positive semidefinite. Computationally, these methods are best suited to the case where *M* is symmetric, possibly with some sparsity structure (e.g., *M* = *AA*^{⊺} with *A* sparse), and the function (4) is used in a line-search strategy to accelerate convergence. Extensions of these methods to problems where the box *l* ≤ *x* ≤ *u* is replaced by a general polyhedral set, including as special cases the *extended linear/quadratic programming problem* of R.T. Rockafellar and R.J.-B. Wets and the quadratic program formulation of the LCP with a *row sufficient matrix*, have also been studied [2, Sec. 5.5], [6,12,22,23]. Inexact computation of *x*′ is discussed in [2, Sec. 5.7], [9,12,15]. Acceleration of the methods in the case where *M* is not symmetric remains an open issue. In fact, if *M* is neither symmetric nor positive semidefinite, convergence of the splitting methods is known only for the case where *M* is an *H-matrix* with positive diagonal entries and *B* is likewise, with the comparison matrix of *B* having a contractive property [2, p. 418], [18,19]. Thus, even if *M* is a *P-matrix*, it is not known whether the splitting methods converge for some practical choice of *B*.