Abstract
We present a Newton-type hybrid proximal extragradient primal–dual interior point method for solving smooth monotone mixed complementarity problems. Dual variables for the nonnegativity constraints are introduced. The ergodic complexity of the method is \(\mathcal {O}(1/k^{3/2})\). The method performs two types of iterations: underrelaxed hybrid proximal extragradient iterations and short-step primal–dual interior point iterations.
Additional information
Communicated by Andreas Fischer.
M. R. Sicre is supported by Conselho Nacional de Pesquisa CNPq Grant 406250/2013-8 and Fundação de Amparo à Pesquisa do Estado da Bahia (FAPESB) Grant 022/2009-PPP. B. F. Svaiter is supported by CNPq Grants 302962/2011-5, 474944/2010-7, 480101/2008-6, 303583/2008-8, Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro (Faperj) Grants E-26/102.940/2011, E-26/102.821/2008 and PRONEX Optimization.
Appendix
In this section we prove Lemma 4 and a technical result.
Proof of Lemma 4.
Proof
Since \(y, s>0\), \(\mu , \nu >0\) and A is positive semi-definite, pre-multiplying the last \({m}\) equations of (20) by \(S^{-1}\) we obtain an equivalent linear system of equations, which has a positive definite coefficient matrix and, therefore, a unique solution. Thus, the first claim of the lemma holds.
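The positive definiteness argument can be illustrated numerically. The sketch below assumes a schematic form of the reduced system: after eliminating \(d^s\), the coefficient matrix becomes \(\nu I + A\) plus a positive diagonal term \(SY^{-1}\) on the \(y\)-block. All data (\(A\), \(y\), \(s\), \(\nu\)) are illustrative toy values, not taken from system (20) of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2  # toy dimensions: x-block of size n, y-block of size m

# A symmetric positive semi-definite matrix (monotone linear part), toy data.
B = rng.standard_normal((n + m, n + m))
A = B @ B.T

nu = 0.5
y = rng.uniform(0.5, 1.5, m)
s = rng.uniform(0.5, 1.5, m)

# Schematic reduced coefficient matrix: nu*I + A plus the positive
# diagonal S Y^{-1} acting on the y-block only (obtained by pre-multiplying
# the last m equations by S^{-1} and eliminating d^s).
M = nu * np.eye(n + m) + A
M[n:, n:] += np.diag(s / y)

# Positive definiteness of the (symmetrized) matrix implies that the
# Newton system has a unique solution.
eigmin = np.linalg.eigvalsh((M + M.T) / 2).min()
print(eigmin > 0)  # True
```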
(a)
Left-multiply both sides of (20) by \((d^x,d^y,0)^T\) and use the assumption that A is positive semi-definite and the Cauchy–Schwarz inequality to conclude that
$$\begin{aligned} \nu \Vert (d^x,d^y)\Vert ^2-\mu \langle {d^y},{d^s}\rangle \le \langle {(d^x,d^y)},{(b^x,b^y)}\rangle \le \dfrac{\nu \Vert (d^x,d^y)\Vert ^2}{2} +\dfrac{\Vert (b^x,b^y)\Vert ^2}{2\nu }. \end{aligned}$$We have that
$$\begin{aligned} \mu \Vert D^yd^s\Vert +\mu \langle {d^y},{d^s}\rangle&\le \frac{\mu }{2}\sum _{i=1}^n \left( \frac{s_i}{y_i}(d^y_i)^2+\frac{y_i}{s_i}(d^s_i)^2\right) +\mu \langle {d^y},{d^s}\rangle \\&=\frac{1}{2}\left\| (\mu YS)^{-1/2}\left( \mu Sd^y+\mu Yd^s\right) \right\| ^2\\&\le \frac{\Vert \mu Sd^y+\mu Yd^s\Vert ^2}{2(1-\Vert \mu Ys-e\Vert )} =\frac{\Vert b^s\Vert ^2}{2(1-\Vert \mu Ys-e\Vert )} \end{aligned}$$where the first inequality follows from the fact that the 2-norm is bounded by the 1-norm and the relation \(2ab\le a^2+b^2\) for any \(a,b\in \mathbb {R}\), the second inequality follows from the relations \(0<1-\Vert \mu Ys-e\Vert \le \mu y_i s_i \) for \(i=1,2,\ldots ,n\), and the last equality follows from the definition of the Newton step. To end the proof, combine the above inequalities.
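The chain of inequalities in item (a) can be checked on toy data. The sketch below assumes a point satisfying the proximity condition \(\Vert \mu Ys-e\Vert <1\) and a direction \((d^y,d^s)\) satisfying the last block of Newton equations \(\mu Sd^y+\mu Yd^s=b^s=-(\mu Ys-e)\); all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 4, 1.0

# Toy point near the central path so that ||mu*Y*s - e|| < 1.
y = rng.uniform(0.8, 1.2, n)
s = 1.0 / (mu * y) + rng.uniform(-0.1, 0.1, n) / n
prox = np.linalg.norm(mu * y * s - 1.0)
assert prox < 1.0

# Last block of the Newton system: mu*S*d^y + mu*Y*d^s = b^s = -(mu*Ys - e).
bs = -(mu * y * s - 1.0)
dy = 0.1 * rng.standard_normal(n)   # an arbitrary d^y
ds = (bs / mu - s * dy) / y         # d^s determined by the block equation

# Left-hand side of item (a): mu*||D^y d^s|| + mu*<d^y, d^s>,
# with D^y = diag(d^y), so D^y d^s is the componentwise product.
lhs = mu * np.linalg.norm(dy * ds) + mu * np.dot(dy, ds)
rhs = np.dot(bs, bs) / (2.0 * (1.0 - prox))
print(lhs <= rhs + 1e-12)  # True
```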
(b)
Define \((y_\alpha ,s_\alpha )=(y,s)+\alpha (d^y,d^s)\) for \(\alpha \ge 0\) and observe that, since \(\mu Sd^y+\mu Yd^s=b^s=-(\mu Ys-e)\), it holds
$$\begin{aligned} \mu Y_\alpha s_\alpha -e =(1-\alpha ) (\mu Ys-e) + \alpha ^2\mu D^yd^s. \end{aligned}$$In view of the assumptions and item (a), we have that \(\mu \Vert D^yd^s\Vert <1\). Therefore,
$$\begin{aligned} \Vert \mu Y_\alpha s_\alpha -e\Vert \le (1-\alpha ) \Vert \mu Ys-e\Vert + \alpha ^2\mu \Vert D^yd^s\Vert<1-\alpha +\alpha ^2<1\ \ \forall \,\alpha \in [0,1]. \end{aligned}$$Thus, the components of \((y_\alpha ,s_\alpha )\), which are continuous functions of \(\alpha \), do not vanish for any \(\alpha \in [0,1]\). Since \((y_0,s_0)=(y,s)>0\), we conclude that \((y_{\alpha =1},s_{\alpha =1})=(y+d^y,s+d^s)>0\).
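The identity \(\mu Y_\alpha s_\alpha -e=(1-\alpha )(\mu Ys-e)+\alpha ^2\mu D^yd^s\) underlying item (b) is elementary to verify componentwise; the sketch below checks it on toy data, with \(d^s\) chosen so that \(\mu Sd^y+\mu Yd^s=-(\mu Ys-e)\) as in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, alpha = 4, 1.0, 0.7

y = rng.uniform(0.8, 1.2, n)
s = rng.uniform(0.8, 1.2, n)
dy = 0.1 * rng.standard_normal(n)
# d^s chosen so that mu*S*d^y + mu*Y*d^s = -(mu*Ys - e).
ds = (-(mu * y * s - 1.0) / mu - s * dy) / y

# (y_alpha, s_alpha) = (y, s) + alpha*(d^y, d^s).
ya, sa = y + alpha * dy, s + alpha * ds

# Both sides of the identity, componentwise (D^y = diag(d^y)).
lhs = mu * ya * sa - 1.0
rhs = (1.0 - alpha) * (mu * y * s - 1.0) + alpha**2 * mu * dy * ds
print(np.allclose(lhs, rhs))  # True
```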
Next we prove a technical lemma.
Lemma 8
If \(\alpha _1,\dots ,\alpha _k>0\) and \(\sum _{i=1}^k\alpha _i^2\le a, \) then
Proof
Define \(f:\mathbb {R}^k_{++}\rightarrow \mathbb {R}\) and \(\alpha ^*\in \mathbb {R}^k_{++}\),
Observe that f is strictly convex in \(\mathbb {R}^k_{++}\), \(\nabla f(\alpha ^*)=0\) and \(\sum _{i=1}^k(\alpha _i^*)^2=a\); therefore, if \( \alpha _1,\dots ,\alpha _k>0\) and \( \sum _{i=1}^k\alpha _i^2\le a\), then it holds
Sicre, M.R., Svaiter, B.F. A \(\mathcal {O}(1/k^{3/2})\) hybrid proximal extragradient primal–dual interior point method for nonlinear monotone mixed complementarity problems. Comp. Appl. Math. 37, 1847–1876 (2018). https://doi.org/10.1007/s40314-017-0425-1
Keywords
- Hybrid proximal extragradient
- Interior point methods
- Primal–dual
- Monotone complementarity problem
- Complexity