Abstract
A new error bound for the linear complementarity problem when the matrix involved is a B-matrix is presented, which improves the corresponding result in (Li et al. in Electron. J. Linear Algebra 31(1):476-484, 2016). In addition, some sufficient conditions under which the new bound is sharper than that in (García-Esnaola and Peña in Appl. Math. Lett. 22(7):1071-1075, 2009) are provided.
1 Introduction
Given an \(n\times n\) real matrix M and \(q\in R^{n}\), the linear complementarity problem (LCP) is to find a vector \(x\in R^{n}\) satisfying

\[x\geqslant 0,\qquad Mx+q\geqslant 0,\qquad x^{T}(Mx+q)=0, \tag{1}\]
or to show that no such vector x exists. We denote this problem (1) by \(\operatorname{LCP}(M, q)\). The \(\operatorname{LCP}(M, q)\) arises in many applications such as finding Nash equilibrium point of a bimatrix game, the network equilibrium problem, the contact problem and the free boundary problem for journal bearing etc.; for details, see [3–5].
It is well known that the \(\operatorname{LCP}(M, q)\) has a unique solution for any vector \(q\in R^{n}\) if and only if M is a P-matrix [4]. Here a matrix M is called a P-matrix if all its principal minors are positive. For the \(\operatorname{LCP}(M, q)\), one of the interesting problems is to estimate

\[\max_{d\in[0,1]^{n}}\bigl\Vert \bigl(I-D+DM\bigr)^{-1}\bigr\Vert_{\infty}, \tag{2}\]

which can be used to bound the error \(\|x-x^{*}\|_{\infty}\) [6], that is,

\[\bigl\Vert x-x^{*}\bigr\Vert_{\infty}\leqslant \max_{d\in[0,1]^{n}}\bigl\Vert \bigl(I-D+DM\bigr)^{-1}\bigr\Vert_{\infty}\bigl\Vert r(x)\bigr\Vert_{\infty},\]
where \(x^{*}\) is the solution of the \(\operatorname{LCP}(M, q)\), \(r(x)=\min\{ x,Mx+q\}\), \(D=\operatorname{diag}(d_{i})\) with \(0\leqslant d_{i} \leqslant1\) for each \(i\in N\), \(d=[d_{1},d_{2},\ldots,d_{n}]^{T}\in[0,1]^{n}\), and the min operator \(r(x)\) denotes the componentwise minimum of two vectors.
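For intuition, the residual \(r(x)=\min\{x,Mx+q\}\) vanishes exactly at a solution of the \(\operatorname{LCP}(M,q)\). The following NumPy sketch checks this on a small P-matrix; the matrix M and vector q here are made-up illustrations, not taken from the paper.

```python
import numpy as np

# A small P-matrix (all principal minors positive) and a vector q,
# chosen only to illustrate the residual r(x) = min{x, Mx + q}.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
q = np.array([-2.0, 3.0])

def residual(x, M, q):
    """Componentwise minimum r(x) = min{x, Mx + q}."""
    return np.minimum(x, M @ x + q)

# x* = (1, 0) satisfies x* >= 0, Mx* + q = (0, 3) >= 0 and x*^T(Mx* + q) = 0,
# so it solves LCP(M, q); its residual is the zero vector.
x_star = np.array([1.0, 0.0])
print(residual(x_star, M, q))        # -> [0. 0.]

# Any non-solution leaves a nonzero residual.
print(residual(np.zeros(2), M, q))   # -> [-2.  0.]
```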
When the matrix M for the \(\operatorname{LCP}(M, q)\) belongs to P-matrices or some subclass of P-matrices, various bounds for (2) were proposed; e.g., see [2, 6–15] and the references therein. Recently, García-Esnaola and Peña in [2] provided an upper bound for (2) when M is a B-matrix as a subclass of P-matrices. Here, a matrix \(M=[m_{ij}]\in R^{n, n}\) is called a B-matrix [16] if for each \(i\in N=\{1,2,\ldots,n\}\),

\[\sum_{k=1}^{n}m_{ik}>0 \quad\text{and}\quad \frac{1}{n}\sum_{k=1}^{n}m_{ik}>m_{ij}\quad\text{for all } j\neq i.\]
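The defining conditions of a B-matrix (positive row sums, and each row mean exceeding every off-diagonal entry of that row [16]) are easy to test numerically; the helper below is a small sketch of ours, not from the paper.

```python
import numpy as np

def is_b_matrix(M):
    """Check Peña's B-matrix conditions [16]: for each row i,
    sum_k m_ik > 0 and (1/n) * sum_k m_ik > m_ij for all j != i."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    for i in range(n):
        row_sum = M[i].sum()
        if row_sum <= 0:
            return False
        mean = row_sum / n
        off_diag = np.delete(M[i], i)
        if not np.all(mean > off_diag):
            return False
    return True

print(is_b_matrix([[2.0, 1.0], [0.0, 3.0]]))   # -> True
print(is_b_matrix([[1.0, 2.0], [0.0, 1.0]]))   # -> False (row mean 1.5 is not > 2)
```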
Theorem 1
[2], Theorem 2.2
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form

\[M=B^{+}+C, \tag{3}\]

where

\[B^{+}=[b_{ij}]=[m_{ij}-r_{i}^{+}],\qquad C=[c_{ij}]=[r_{i}^{+}], \tag{4}\]

and \(r_{i}^{+}=\max\{0,m_{ij}\mid j\neq i\}\). Then

\[\max_{d\in[0,1]^{n}}\bigl\Vert \bigl(I-D+DM\bigr)^{-1}\bigr\Vert_{\infty}\leqslant \frac{n-1}{\min\{\beta,1\}}, \tag{5}\]
where \(\beta=\min_{i\in N}\{\beta_{i}\}\) and \(\beta_{i}=b_{ii}-\sum_{j\neq i}|b_{ij}|\).
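The quantities of Theorem 1 are directly computable. The sketch below (helper names are ours) builds the splitting \(M=B^{+}+C\) with \(b_{ij}=m_{ij}-r_{i}^{+}\), checks that \(B^{+}\) is SDD, and evaluates the bound, assuming (5) has the form \(\frac{n-1}{\min\{\beta,1\}}\) as in [2].

```python
import numpy as np

def b_plus_decomposition(M):
    """Split M = B^+ + C as in (3)-(4): r_i^+ = max{0, m_ij : j != i},
    b_ij = m_ij - r_i^+, and every entry of row i of C equals r_i^+."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    r_plus = np.array([max(0.0, np.delete(M[i], i).max()) for i in range(n)])
    return M - r_plus[:, None], np.tile(r_plus[:, None], (1, n))

def theorem1_bound(M):
    """(n-1)/min{beta, 1} with beta = min_i (b_ii - sum_{j != i} |b_ij|),
    assuming the bound (5) of [2] has this form."""
    B, _ = b_plus_decomposition(M)
    n = B.shape[0]
    beta_i = np.diag(B) - (np.abs(B).sum(axis=1) - np.abs(np.diag(B)))
    beta = beta_i.min()
    assert beta > 0, "B^+ must be SDD, i.e., M must be a B-matrix"
    return (n - 1) / min(beta, 1.0)

# Illustrative B-matrix: r^+ = (1, 0), so B^+ = [[1, 0], [0, 3]] and beta = 1.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B_plus, C = b_plus_decomposition(M)
print(B_plus)               # B^+ row i is row i of M shifted down by r_i^+
print(theorem1_bound(M))    # -> 1.0, i.e., (2-1)/min{1, 1}
```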
It is not difficult to see that the bound (5) can be inaccurate when \(\min_{i\in N}\{b_{ii}-\sum_{j\neq i}|b_{ij}|\}\) is very small; for details, see [17, 18]. To overcome this problem, Li et al. in [1] gave the following bound for (2) when M is a B-matrix, which improves those provided by Li and Li in [17, 18].
Theorem 2
[1], Theorem 2.4
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form \(M=B^{+}+C\), where \(B^{+}=[b_{ij}]\) is the matrix of (4). Then

\[\max_{d\in[0,1]^{n}}\bigl\Vert \bigl(I-D+DM\bigr)^{-1}\bigr\Vert_{\infty}\leqslant \sum_{i=1}^{n}\frac{n-1}{\min\{\bar{\beta}_{i},1\}}\prod_{j=1}^{i-1}\frac{b_{jj}}{\bar{\beta}_{j}}, \tag{6}\]
where \(\bar{\beta}_{i}=b_{ii}-\sum_{j=i+1}^{n}|b_{ij}|l_{i}(B^{+})\) with \(l_{k}(B^{+})=\max_{k\leq i\leq n} \{\frac{1}{|b_{ii}|}\sum_{j=k,\atop j\neq i}^{n}|b_{ij}| \}\), and \(\prod_{j=1}^{i-1}\frac{b_{jj}}{\bar{\beta}_{j}}=1\) if \(i=1\).
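The quantities \(l_{k}(B^{+})\) and \(\bar{\beta}_{i}\) in Theorem 2 translate directly into code; the helpers below are a sketch with our own names (indices are 1-based in the paper, 0-based internally).

```python
import numpy as np

def l_k(B, k):
    """l_k(B^+) = max_{k <= i <= n} (1/|b_ii|) * sum_{j=k, j != i}^{n} |b_ij|,
    with k given 1-based as in the paper."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    return max(sum(abs(B[i, j]) for j in range(k - 1, n) if j != i) / abs(B[i, i])
               for i in range(k - 1, n))

def bar_beta(B):
    """bar_beta_i = b_ii - sum_{j=i+1}^{n} |b_ij| * l_i(B^+), i = 1..n."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    # Row i (0-based) corresponds to paper index i+1, so it uses l_{i+1} here.
    return [B[i, i] - sum(abs(B[i, j]) for j in range(i + 1, n)) * l_k(B, i + 1)
            for i in range(n)]

# Illustrative SDD matrix (unit diagonal keeps the arithmetic transparent).
B = np.array([[1.0, 0.5, 0.4],
              [0.3, 1.0, 0.6],
              [0.2, 0.2, 1.0]])
print(bar_beta(B))   # l_1 = 0.9, l_2 = 0.6, l_3 = 0 -> roughly [0.19, 0.64, 1.0]
```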
In this paper, we further improve error bounds for the \(\operatorname{LCP}(M,q)\) when M belongs to the class of B-matrices. The rest of this paper is organized as follows. In Section 2 we present a new error bound for (2) and prove that this bound is better than those in Theorems 1 and 2. In Section 3, some numerical examples are given to illustrate the theoretical results obtained.
2 Main result
In this section, an upper bound for (2) is provided when M is a B-matrix. Firstly, some definitions, notation and lemmas which will be used later are given as follows.
A matrix \(A=[a_{ij}]\in C^{n,n}\) is called a strictly diagonally dominant (SDD) matrix if \(|a_{ii}|>\sum_{j\neq i}^{n}|a_{ij}|\) for all \(i=1,2,\ldots,n\). A matrix \(A=[a_{ij}]\in R^{n,n}\) is called a nonsingular M-matrix if its inverse is nonnegative and all its off-diagonal entries are nonpositive [3]. In [16] it was proved that a B-matrix has positive diagonal elements, and that a real matrix A is a B-matrix if and only if it can be written in the form (3) with \(B^{+}\) being an SDD matrix. Given a matrix \(A=[a_{ij}]\in C^{n,n}\), let
Lemma 1
[19], Theorem 14
Let \(A=[a_{ij}]\) be an \(n\times n\) row strictly diagonally dominant M-matrix. Then
where \(u_{i}(A)=\frac{1}{|a_{ii}|}\sum_{j=i+1}^{n}|a_{ij}|\), \(l_{k}(A)=\max_{k\leq i\leq n} \{\frac{1}{|a_{ii}|}\sum_{j=k,\atop j\neq i}^{n}|a_{ij}| \}\), \(\prod_{j=1}^{i-1}\frac{1}{1-u_{j}(A)l_{j}(A)}=1\) if \(i=1\), and \(m_{ki}(A)\) is defined as in (7).
Lemma 2
[17], Lemma 3
Let \(\gamma> 0\) and \(\eta\geqslant0\). Then, for any \(x\in[0,1]\),

\[\frac{1}{1-x+\gamma x}\leqslant \frac{1}{\min\{\gamma,1\}}\]

and

\[\frac{\eta x}{1-x+\gamma x}\leqslant \frac{\eta}{\gamma}.\]
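Both inequalities of Lemma 2, as we read them from [17], follow from the fact that \(1-x+\gamma x\) is a convex combination of 1 and γ, hence at least \(\min\{\gamma,1\}\), together with \(\gamma x\leqslant 1-x+\gamma x\) for \(x\in[0,1]\). A quick numerical spot-check (a sanity check on a grid, not a proof):

```python
import numpy as np

# Spot-check the two inequalities of Lemma 2 over a grid of (gamma, eta, x).
for gamma in [0.1, 0.5, 1.0, 2.0, 10.0]:
    for eta in [0.0, 0.3, 1.0, 5.0]:
        for x in np.linspace(0.0, 1.0, 101):
            denom = 1.0 - x + gamma * x   # convex combination of 1 and gamma
            assert 1.0 / denom <= 1.0 / min(gamma, 1.0) + 1e-12
            assert eta * x / denom <= eta / gamma + 1e-12
print("Lemma 2 inequalities hold on the grid")
```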
Lemma 3
[18], Lemma 5
Let \(A=[a_{ij}]\) with \(a_{ii}>\sum_{j=i+1}^{n}|a_{ij}|\) for each \(i\in N\). Then, for any \(x_{i}\in[0,1]\),

\[\frac{1-x_{i}+a_{ii}x_{i}}{1-x_{i}+ \bigl(a_{ii}-\sum_{j=i+1}^{n}|a_{ij}| \bigr)x_{i}}\leqslant \frac{a_{ii}}{a_{ii}-\sum_{j=i+1}^{n}|a_{ij}|}.\]
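A quick way to see an inequality of this type (our sketch, with \(a=a_{ii}\) and \(c=a_{ii}-\sum_{j=i+1}^{n}|a_{ij}|>0\), so \(a>c>0\)):

```latex
% For x \in [0,1] with a > c > 0,
\[
\frac{1-x+ax}{1-x+cx} \;=\; 1 + \frac{(a-c)\,x}{1-x+cx},
\]
% and x \mapsto x/(1-x+cx) is nondecreasing on [0,1], since
% \frac{d}{dx}\Bigl[\frac{x}{1-x+cx}\Bigr] = \frac{1}{(1-x+cx)^{2}} > 0.
% Hence the maximum is attained at x = 1, giving 1 + (a-c)/c = a/c.
```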
Lemmas 2 and 3 will be used in the proofs of the following lemma and Theorem 3.
Lemma 4
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form \(M=B^{+}+C\), where \(B^{+}=[b_{ij}]\) is the matrix of (4), and let \(B_{D}^{+}=I-D+DB^{+}=[\tilde{b}_{ij}]\), where \(D=\operatorname{diag}(d_{i})\) with \(0\leqslant d_{i} \leqslant1\). Then
and
where \(w_{i}(B_{D}^{+})\), \(m_{ij}(B_{D}^{+})\) are defined as in (7), and
Proof
Note that
Since \(B^{+}\) is SDD, \(b_{ii}-\sum_{k=j+1,\atop k\neq i}^{n}|b_{ik}|> |b_{ij}|\) for each \(i\neq j\). Hence, by Lemma 2 and (7), it follows that
Furthermore, it follows from (7), (8) and Lemma 2 that for each \(i\neq j\) (\(j< i\leqslant n\))
The proof is completed. □
By Lemmas 1, 2, 3 and 4, we give the following bound for (2) when M is a B-matrix.
Theorem 3
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form \(M=B^{+}+C\), where \(B^{+}=[b_{ij}]\) is the matrix of (4). Then

\[\max_{d\in[0,1]^{n}}\bigl\Vert \bigl(I-D+DM\bigr)^{-1}\bigr\Vert_{\infty}\leqslant \sum_{i=1}^{n}\frac{n-1}{\min\{\widehat{\beta}_{i},1\}}\prod_{j=1}^{i-1}\frac{b_{jj}}{\bar{\beta}_{j}}, \tag{9}\]
where \(\widehat{\beta}_{i}=b_{ii}-\sum_{k=i+1}^{n}|b_{ik}|\cdot v_{ki}(B^{+})\), with \(v_{ki}(B^{+})\) defined in Lemma 4, \(\bar{\beta}_{i}\) defined in Theorem 2, and \(\prod_{j=1}^{i-1}\frac{b_{jj}}{\bar{\beta}_{j}}=1\) if \(i=1\).
Proof
Let \(M_{D}=I-D+DM\). Then
where \(B_{D}^{+}=I-D+DB^{+}=[\tilde{b}_{ij}]\) and \(C_{D}=DC\). As in the proof of Theorem 2.2 in [2], we find that \(B_{D}^{+}\) is an SDD M-matrix with positive diagonal entries and that
Next, we give an upper bound for \(\|(B^{+}_{D} )^{-1} \|_{\infty}\). By Lemma 1, we have
where
and
with \(w_{l}(B_{D}^{+})=\max_{h\neq l} \{\frac{|b_{lh}|d_{l}}{1-d_{l}+b_{ll}d_{l}-\sum_{s=h+1,\atop s\neq l}^{n}|b_{ls}|d_{l}} \}\).
By Lemmas 2 and 4, we can easily see that, for each \(i\in N\),
and that, for each \(k\in N\),
Furthermore, according to Lemma 3 and (13), it follows that, for each \(j\in N\),
By (11), (12) and (14), we have
The conclusion follows from (10) and (15). □
A comparison of the bounds in Theorems 2 and 3 is established as follows.
Theorem 4
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form \(M=B^{+}+C\), where \(B^{+}=[b_{ij}]\) is the matrix of (4). Let \(\bar{\beta}_{i}\) and \(\widehat{\beta}_{i}\) be defined in Theorems 2 and 3, respectively. Then
Proof
Note that
and that \(B^{+}\) is an SDD matrix. It follows that, for each \(i\neq j\) (\(j< i\leqslant n\)),
Hence, for each \(i\in N\)
which implies that
This completes the proof. □
We remark here that, if \(\bar{\beta}_{i}<1\) for all \(i\in N\), then
which yields
Next, it is proved that the bound (9) given in Theorem 3 can improve the bound (5) in Theorem 1 (Theorem 2.2 in [2]) in some cases.
Theorem 5
Let \(M=[m_{ij}]\in R^{n, n}\) be a B-matrix with the form \(M=B^{+}+C\), where \(B^{+}=[b_{ij}]\) is the matrix of (4). Let β, \(\bar{\beta}_{i}\) and \(\widehat{\beta}_{i}\) be defined in Theorems 1, 2 and 3, respectively, and let \(\alpha=1+\sum_{i=2}^{n}\prod_{j=1}^{i-1}\frac{b_{jj}}{\bar{\beta}_{j}}\) and \(\widehat{\beta}=\min_{i\in N}\{\widehat{\beta}_{i}\}\). If one of the following conditions holds:
(i) \(\widehat{\beta}>1\) and \(\alpha<\frac{1}{\beta}\);

(ii) \(\widehat{\beta}<1\) and \(\alpha\beta<\widehat{\beta}\),
then
Proof
When \(\widehat{\beta}>1\) and \(\alpha<\frac{1}{\beta}\), we can easily get
Similarly, for \(\widehat{\beta}<1\) and \(\alpha\beta<\widehat{\beta}\), the conclusion can be proved directly. □
3 Numerical examples
Two examples are given to show that the bound in Theorem 3 is sharper than those in Theorems 1 and 2.
Example 1
Consider the family of B-matrices in [17]:
where \(k\geqslant1\). Then \(M_{k}=B_{k}^{+}+C_{k}\), where
By computations, we have \(\beta=\frac{1}{10(k+1)}\), \(\bar{\beta}_{1}=\bar{\beta}_{2}=\frac{90k+91}{100k+100}\), \(\bar{\beta}_{3}=0.99\), \(\bar{\beta}_{4}=1\), \(\widehat{\beta}_{1}=\frac{820k+828}{900k+900}\), \(\widehat{\beta}_{2}=0.99\), \(\widehat{\beta}_{3}=1\) and \(\widehat{\beta}_{4}=1\). It is then easy to verify that \(M_{k}\) satisfies condition (ii) of Theorem 5. Hence, by Theorem 1 (Theorem 2.2 in [2]), we have
It is obvious that
By Theorem 2, we find that, for any \(k\geqslant1\),
By Theorem 3, we find that, for any \(k\geqslant1\),
In particular, when \(k=1\),
and the bound (5) in Theorem 1 is
When \(k=2\),
and the bound (5) in Theorem 1 is
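The growth of Theorem 1's bound in this example can be reproduced from the quoted value \(\beta=\frac{1}{10(k+1)}\), assuming, as in [2], that bound (5) equals \(\frac{n-1}{\min\{\beta,1\}}\) with \(n=4\) (a sketch of ours; the matrices \(M_{k}\) themselves are given in [17]):

```python
from fractions import Fraction

def beta(k):
    """beta = min_i beta_i for M_k of Example 1, as quoted in the text."""
    return Fraction(1, 10 * (k + 1))

def bound_theorem1(k, n=4):
    """(n-1)/min{beta, 1}, assuming bound (5) of [2] has this form."""
    return (n - 1) / min(beta(k), Fraction(1))

# The bound grows like 30(k+1), while the quoted bar_beta_i and hat_beta_i
# of Example 1 all stay close to 1.
for k in [1, 2, 10]:
    print(k, bound_theorem1(k))   # -> 60, 90, 330 respectively
```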
Example 2
Consider the following family of B-matrices:
where \(\frac{\sqrt{5}-1}{2}< a<1\) and \(\frac{2-a^{2}}{1+a}< k<1\). Then \(M_{k}=B_{k}^{+}+C\), where C is the null matrix.
By simple computations, we can get
It is not difficult to verify that \(M_{k}\) satisfies the condition (i) of Theorem 5. Thus, the bound (6) of Theorem 2 (Theorem 2.4 in [1]) is
which is larger than the bound
given by (5) in Theorem 1 (Theorem 2.2 in [2]). However, by Theorem 3 we can get
which is smaller than the bound (5) in Theorem 1, i.e.,
In particular, when \(a=\frac{4}{5}\) and \(k=\frac{8}{9}\), the bounds in Theorems 1 and 2 are, respectively,
and
while the bound (9) in Theorem 3 is
These two examples show that the bound in Theorem 3 is sharper than those in Theorems 1 and 2.
References
Li, CQ, Gan, MT, Yang, SR: A new error bound for linear complementarity problems for B-matrices. Electron. J. Linear Algebra 31(1), 476-484 (2016)
García-Esnaola, M, Peña, JM: Error bounds for linear complementarity problems for B-matrices. Appl. Math. Lett. 22(7), 1071-1075 (2009)
Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
Cottle, RW, Pang, JS, Stone, RE: The Linear Complementarity Problem. Academic Press, San Diego (1992)
Murty, KG: Linear Complementarity, Linear and Nonlinear Programming. Heldermann, Berlin (1988)
Chen, XJ, Xiang, SH: Perturbation bounds of P-matrix linear complementarity problems. SIAM J. Optim. 18(4), 1250-1265 (2007)
Chen, TT, Li, W, Wu, X, Vong, S: Error bounds for linear complementarity problems of MB-matrices. Numer. Algorithms 70(2), 341-356 (2015)
Chen, XJ, Xiang, SH: Computation of error bounds for P-matrix linear complementarity problems. Math. Program. 106(3), 513-525 (2006)
Dai, PF: Error bounds for linear complementarity problems of DB-matrices. Linear Algebra Appl. 434(3), 830-840 (2011)
Dai, PF, Li, YT, Lu, CJ: Error bounds for linear complementarity problems for SB-matrices. Numer. Algorithms 61(1), 121-139 (2012)
Dai, PF, Lu, CJ, Li, YT: New error bounds for the linear complementarity problem with an SB-matrix. Numer. Algorithms 64(4), 741-757 (2013)
García-Esnaola, M, Peña, JM: Error bounds for linear complementarity problems involving \(B^{S}\)-matrices. Appl. Math. Lett. 25(10), 1379-1383 (2012)
García-Esnaola, M, Peña, JM: Error bounds for the linear complementarity problem with a Σ-SDD matrix. Linear Algebra Appl. 438(3), 1339-1346 (2013)
García-Esnaola, M, Peña, JM: B-Nekrasov matrices and error bounds for linear complementarity problems. Numer. Algorithms 72(2), 435-445 (2016)
Li, CQ, Dai, PF, Li, YT: New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices. Numer. Algorithms 74(4), 997-1009 (2017)
Peña, JM: A class of P-matrices with applications to the localization of the eigenvalues of a real matrix. SIAM J. Matrix Anal. Appl. 22(4), 1027-1037 (2001)
Li, CQ, Li, YT: Note on error bounds for linear complementarity problems for B-matrices. Appl. Math. Lett. 57, 108-113 (2016)
Li, CQ, Li, YT: Weakly chained diagonally dominant B-matrices and error bounds for linear complementarity problems. Numer. Algorithms 73(4), 985-998 (2016)
Yang, Z, Zheng, B, Lian, X: A new upper bound for \(\|A^{-1}\|_{\infty}\) of a strictly α-diagonally dominant M-matrix. Adv. Numer. Anal. 2013, 980615 (2013)
Acknowledgements
This work is partly supported by National Natural Science Foundations of China (11601473, 31600299), Young Talent fund of University Association for Science and Technology in Shaanxi, China (20160234), the Research Foundation of Baoji University of Arts and Sciences (ZK2017021), and CAS ‘Light of West China’ Program.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Gao, L., Li, C. An improved error bound for linear complementarity problems for B-matrices. J Inequal Appl 2017, 144 (2017). https://doi.org/10.1186/s13660-017-1414-z