Reference Work Entry

Encyclopedia of Optimization

pp 3659-3662

Splitting Method for Linear Complementarity Problems

  • Paul Tseng, Department of Mathematics, University of Washington

Article Outline

Keywords

See also

References

Keywords

LCP; Splitting method; SOR method


Splitting methods were originally proposed as a generalization of the classical SOR method for solving a system of linear equations [8,25], and in the late 1970s they were extended to the linear complementarity problem (LCP; cf. Linear complementarity problem) [1,2, Chap. 5], [10,13,18]. These methods are iterative and are best suited for problems in which exploiting sparsity is important, such as large sparse linear programs and the discretization of certain elliptic boundary value problems with obstacles.

To describe the splitting methods, we formulate the LCP (with bound constraints) as the problem of finding an $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ solving the following system of nonlinear equations:
$$ x = \max\left[ l, \min[ u, x - (Mx + q)]\right], $$
(1)
where $M = [m_{ij}]_{i,j=1,\ldots,n} \in \mathbb{R}^{n\times n}$, $q = (q_1, \ldots, q_n) \in \mathbb{R}^n$, and the lower bound $l = (l_1, \ldots, l_n)$ and upper bound $u = (u_1, \ldots, u_n)$ are given. (Here max and min are understood to be taken componentwise, and we allow $l_i = -\infty$ or $u_i = \infty$ for some $i$. The case of $l_i = 0$ and $u_i = \infty$ for all $i$ corresponds to the standard LCP.) In the splitting methods, we express
$$ M = B + C $$
for some $B \in \mathbb{R}^{n\times n}$ and $C \in \mathbb{R}^{n\times n}$; then, starting with any $x \in \mathbb{R}^n$, we iteratively update $x$ by solving the following equation for $x'$:
$$ x^{\prime} = \max\left[ l, \min[ u, x^{\prime} - (Bx^{\prime} + C x + q)]\right], $$
(2)
and then replacing $x$ with $x'$. Thus, at each iteration, we effectively replace $M$ and $q$ in the original problem by, respectively, $B$ and $Cx + q$, and solve the resulting problem to obtain the new iterate.
A key to the performance of the splitting methods lies in the choice of the matrix $B$. We should choose $B$ to be a good approximation of $M$, so that the methods converge rapidly, and at the same time such that $x'$ is easy to compute at each iteration (e. g., $B$ diagonal or upper/lower triangular). The best-known choice, corresponding to the SOR method of C. Hildreth [3,7], is
$$ B = \frac{1}{\omega} D + L, $$
(3)
where $D$ and $L$ denote, respectively, the diagonal and the strictly lower-triangular part of $M$, and $\omega \in (0, 2)$ is a relaxation parameter (see [2, p. 397], [13]). For this choice of $B$, and assuming $M$ has positive diagonal entries, the components of $x'$ can be computed by an $n$-step backsolve:
$$ x^{\prime}_{i} = \max\left[ l_{i}, \min\left[ u_{i},\; x_{i} - \frac{\omega}{m_{ii}} \left(\sum_{j< i} m_{ij} x^{\prime}_{j} + \sum_{j\ge i} m_{ij} x_{j} + q_{i}\right)\right]\right], \quad i = 1,\ldots, n. $$
In the case where $l_i = -\infty$ and $u_i = \infty$ for all $i$, the above iteration reduces to the classical SOR method for solving the system of linear equations $Mx + q = 0$. More generally, we can choose $B$ to be block-lower/upper-triangular, e. g.,
$$ B = \begin{pmatrix} B_{11} & & & \\ B_{21} & B_{22} & & \\ \vdots & \vdots & \ddots & \\ B_{p1} & B_{p2} & \cdots & B_{pp} \end{pmatrix} $$
for some $1 < p \le n$, with the diagonal and triangular blocks possibly coming from $M$. Then each block of components of $x'$ can be computed recursively by solving an LCP of dimension equal to the block size. Other choices of $B$ are discussed in [2, Chap. 5], [13] and below. Computation with the (block) SOR method for solving sparse linear programs and LCPs with symmetric positive definite $M$ is investigated in [1,4,14,16].
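As an illustration, the componentwise SOR backsolve above can be sketched as follows (a minimal NumPy sketch with dense matrices, not tuned for the large sparse problems the method targets; the function name and stopping test are ours):

```python
import numpy as np

def projected_sor(M, q, l, u, omega=1.0, max_sweeps=500, tol=1e-10):
    """Projected SOR for the LCP x = max(l, min(u, x - (Mx + q))).

    Implements the splitting B = (1/omega) D + L, with D and L the
    diagonal and strictly lower-triangular parts of M; M must have
    positive diagonal entries and omega should lie in (0, 2).
    """
    x = np.zeros(len(q), dtype=float)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(len(q)):
            # Components j < i of x already hold the new values x'_j,
            # so M[i] @ x + q[i] is exactly the sum in the backsolve.
            r = M[i] @ x + q[i]
            x[i] = min(u[i], max(l[i], x[i] - omega * r / M[i, i]))
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```

For the standard LCP one takes $l = 0$ and $u = +\infty$ componentwise.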
An original application of the SOR method is to the solution of convex quadratic programs of the form
$$ \begin{cases} \displaystyle \min&\frac{1}{2} y^{\top} y \\ \text{s.t.} &Ay \le b, \end{cases} $$
where $A \in \mathbb{R}^{n\times m}$ and $b \in \mathbb{R}^n$ are given, with $A$ having nonzero rows. (Here $^\top$ denotes the transpose.) Specifically, by attaching nonnegative Lagrange multipliers (cf. also Lagrangian multipliers methods for convex programming) $x = (x_1, \ldots, x_n)$ to the constraints $Ay \le b$, we obtain the following dual problem in $x$:
$$ \max_{x\ge 0} \left\{ \min_{y} \left\{ \frac{1}{2}y^{\top}y + x^{\top}(Ay - b)\right\}\right\} \\ = \max_{x\ge 0} \left\{ - \frac{1}{2} x^{\top} AA^{\top} x - x^{\top}b \right\} $$
whose optimal solution, related to the optimal solution of the original problem by $y + A^\top x = 0$ [7], solves the LCP (1) with $M = AA^\top$, $q = b$ and $l_i = 0$, $u_i = \infty$ for all $i$ [7, p. 4]. In this case, $M$ is symmetric positive semidefinite with positive diagonal entries, and the $x'$ computed in the SOR method is alternatively given by the formula:
$$ \begin{aligned} \Delta_{i} &= \max\left[ - x_{i}, \frac{\omega}{A_{i} A_{i}^{\top}} \left(A_{i} y^{i} - b_{i} \right)\right], \\ x^{\prime}_{i} &= x_{i} + \Delta_{i}, \\ y^{i + 1} &= y^{i} - A_{i}^{\top} \Delta_{i}, \qquad i = 1,\ldots, n, \end{aligned} $$
where $y^1 = -A^\top x$ and $A_i$ denotes the $i$th row of $A$. The above iteration is reminiscent of the Agmon–Motzkin–Fourier relaxation method for solving the inequalities $Ay \le b$ and, in fact, differs from the latter only in that the term $-x_i$, rather than zero, appears inside the max.
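In code, one sweep of this dual SOR (Hildreth's) iteration might look like the following sketch (the function name and convergence test are ours; we take $y^1 = 0$ by starting from $x = 0$):

```python
import numpy as np

def hildreth(A, b, omega=1.0, max_sweeps=1000, tol=1e-10):
    """SOR (Hildreth) method for min (1/2) y^T y  s.t.  A y <= b.

    Maintains the primal iterate y = -A^T x alongside the dual
    multipliers x >= 0; each sweep updates every multiplier once.
    """
    x = np.zeros(A.shape[0])
    y = np.zeros(A.shape[1])                 # y = -A^T x with x = 0
    row_norms = np.einsum('ij,ij->i', A, A)  # A_i A_i^T for each row i
    for _ in range(max_sweeps):
        biggest = 0.0
        for i in range(A.shape[0]):
            # Delta_i = max(-x_i, (omega / A_i A_i^T)(A_i y^i - b_i))
            delta = max(-x[i], omega * (A[i] @ y - b[i]) / row_norms[i])
            x[i] += delta
            y -= A[i] * delta                # y^{i+1} = y^i - A_i^T Delta_i
            biggest = max(biggest, abs(delta))
        if biggest < tol:
            break
    return y, x
```

For example, minimizing $\frac{1}{2}\|y\|^2$ subject to $y_1 \ge 1$, $y_2 \ge 1$ (i. e., $A = -I$, $b = (-1, -1)$) yields $y = (1, 1)$.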
Convergence of the splitting methods, despite their relatively long history, was more fully analyzed only in the last ten years. In particular, if M is symmetric (not necessarily positive semidefinite) and the function
$$ f(x) = \frac{1}{2} x^{\top} Mx + q^{\top} x $$
(4)
is bounded below on the box $l \le x \le u$, then it is known that the $x$ generated by the splitting method (2) converges to a solution of the LCP at a linear rate (in the root sense [17]), provided that $(B, C)$ is a regular Q-splitting in the sense that

BC is positive definite and for every x there exists a solution x′ to (2)

[12, Thm. 3.2]. (Earlier results of this kind, which further assumed $M$ to be positive semidefinite or nondegenerate, can be found in [2, Chap. 5], [5,11,19,20] and references therein.) For the SOR method, corresponding to $B$ given by (3) with $\omega \in (0, 2)$, it can be verified that $(B, C)$ is a regular Q-splitting provided $M$ has positive diagonal entries. The proof of the above convergence result uses two key facts about the LCP: namely, that $f(x)$ assumes only a finite number of values on the solution set, and that the distance from any point $x$ near the solution set to the solution set is on the order of the 'residual' at $x$, defined to be the difference between the two sides of (1). In addition, the function $f(x)$ can be used in a line-search strategy to accelerate convergence of the splitting methods [2, Sec. 5.5].
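For the SOR choice (3) this verification is short: when $M$ is symmetric, $B - C = 2B - M = (2/\omega - 1)D + L - L^\top$, whose symmetric part $(2/\omega - 1)D$ is positive definite exactly when $\omega \in (0, 2)$ and $M$ has positive diagonal entries. A quick numerical confirmation (the test matrix here is arbitrary):

```python
import numpy as np

# Check that the SOR splitting B = (1/omega) D + L gives B - C
# positive definite (C = M - B) for a symmetric M with positive
# diagonal entries and several omega in (0, 2).
rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
M = S + S.T                                    # symmetric test matrix
np.fill_diagonal(M, np.abs(np.diag(M)) + 1.0)  # force positive diagonal

D = np.diag(np.diag(M))
L = np.tril(M, -1)
for omega in (0.5, 1.0, 1.5, 1.9):
    B = D / omega + L
    BmC = 2 * B - M              # B - C with C = M - B
    sym = (BmC + BmC.T) / 2      # B - C is PD iff its symmetric part is
    assert np.linalg.eigvalsh(sym).min() > 0
```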
If $M$ is not symmetric but positive semidefinite, it is known that the $x$ generated by the splitting method (2) converges to a solution of the LCP at a linear rate (in the root sense), provided that

BM is symmetric positive definite

[2, Thm. 5.6.1], [24, Cor. 5.3]. One choice of B that satisfies the above assumption is
$$ B = M + \widehat{D} - L - L^{\top}, $$
where $L$ denotes the strictly lower-triangular part of $M$ and $\widehat{D}$ is any $n \times n$ diagonal matrix such that $\widehat{D} - L - L^{\top}$ is positive definite. This choice of $B$ is upper-triangular, and hence the corresponding $x'$ can be computed in on the order of $n^2$ arithmetic operations by an $n$-step backsolve [22, Sec. 6], [24]. Computationally, the asymmetry of $M$ makes it difficult to incorporate line-search strategies, since no 'natural' merit function analogous to (4) is known. As a result, on problems where $M$ is highly asymmetric, such as the LCP formulation of linear programs, the convergence of the splitting methods can be slow. Thus, accelerating the convergence of the splitting methods on asymmetric problems remains a challenge. In this direction, we point out related methods based on projection or operator splitting (see [6,21] and references therein). These methods are applicable when $M$ is positive semidefinite (not necessarily symmetric), and the major part of each of their iterations also involves solving a matrix-splitting equation of the form (2), except that the solution $x'$ must undergo additional transformations to yield the new iterate $x$. These methods, which may be viewed as a hybrid form of the splitting methods, admit some forms of line search and show good promise in computation.
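For an upper-triangular $B$ with positive diagonal, one step of (2) can be sketched as follows (a hypothetical helper of ours: in each coordinate, the scalar problem reduces to projecting the root of $B_{ii} x'_i + s_i = 0$ onto $[l_i, u_i]$, where $s_i$ collects the already-computed components $j > i$ and $(Cx + q)_i$):

```python
import numpy as np

def splitting_step_upper(B, c, l, u):
    """Solve x' = max(l, min(u, x' - (B x' + c))) by an n-step backsolve,
    for upper-triangular B with positive diagonal; c stands for C x + q,
    the vector held fixed within one iteration of the splitting method.
    """
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Components j > i of x already hold the new values x'_j.
        s = B[i, i + 1:] @ x[i + 1:] + c[i]
        # Scalar LCP in x'_i: root of B_ii x'_i + s = 0, projected
        # onto [l_i, u_i].
        x[i] = min(u[i], max(l[i], -s / B[i, i]))
    return x
```

The full method then repeats `x = splitting_step_upper(B, C @ x + q, l, u)` until the change in `x` is small.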

In summary, building on the early work of Hildreth, H.B. Keller and others, splitting methods have been well developed over the last twenty years to solve the LCP (1) when the matrix $M$ is either symmetric or positive semidefinite. Computationally, these methods are best suited to the case where $M$ is symmetric, possibly with sparsity structure (e. g., $M = AA^\top$ with $A$ sparse), and the function (4) is used in a line-search strategy to accelerate convergence. Extensions of these methods to problems where the box $l \le x \le u$ is replaced by a general polyhedral set, including as special cases the extended linear-quadratic programming problem of R.T. Rockafellar and R.J-B. Wets and the quadratic program formulation of the LCP with a row sufficient matrix, have also been studied [2, Sec. 5.5], [6,12,22,23]. Inexact computation of $x'$ is discussed in [2, Sec. 5.7], [9,12,15]. Acceleration of the methods in the case where $M$ is not symmetric remains an open issue. In fact, if $M$ is neither symmetric nor positive semidefinite, convergence of the splitting methods is known only for the case where $M$ is an H-matrix with positive diagonal entries and $B$ is likewise, with the comparison matrix of $B$ having a contractive property [2, p. 418], [18,19]. Thus, even if $M$ is a P-matrix, it is not known whether the splitting methods converge for some practical choice of $B$.

See also

Linear Complementarity Problem

Copyright information

© Springer-Verlag 2008