Accelerated schemes for a class of variational inequalities

  • Full Length Paper
  • Series B
  • Published in: Mathematical Programming

Abstract

We propose a novel stochastic method, the stochastic accelerated mirror-prox (SAMP) method, for solving a class of monotone stochastic variational inequalities (SVIs). The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method. The developed SAMP method computes weak solutions with the optimal iteration complexity for SVIs. In particular, if the operator in the SVI consists of the stochastic gradient of a smooth function, the iteration complexity of the SAMP method can be accelerated in terms of its dependence on the Lipschitz constant of the smooth function. For SVIs with bounded feasible sets, the iteration-complexity bound of the SAMP method depends on the diameter of the feasible set. For unbounded SVIs, we adopt the modified gap function introduced by Monteiro and Svaiter for solving monotone inclusion problems, and show that the iteration complexity of the SAMP method depends on the distance from the initial point to the set of strong solutions. It is worth noting that our study also significantly improves several existing complexity results for solving deterministic variational inequality problems. We demonstrate the advantages of the SAMP method over some existing algorithms through preliminary numerical experiments.


Notes

  1. When the maximum absolute values of P and Q are different, it is recommended to introduce weights \(\omega _x\) and \(\omega _y\) and set \(\Vert u\Vert :=\sqrt{\omega _x\Vert x\Vert _1^2 + \omega _y\Vert y\Vert _1^2}\) and \(\Vert \eta \Vert _*:=\sqrt{\Vert \eta _x\Vert _1^2/\omega _x + \Vert \eta _y\Vert _1^2/\omega _y}\). See “mixed setups” in Section 5 of [32] for the detailed derivation of the best values of the weights \(\omega _x\) and \(\omega _y\).

  2. See the proof of Theorem 2 for the definition of the perturbation term in the SAMP algorithm, and Theorem 5.2 in [28] for the definition of the perturbation term in the MP algorithm.
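As a concrete illustration (not part of the paper), the weighted norm and dual norm in Note 1 can be evaluated numerically. The sketch below follows the note's formulas exactly as written; the function names are hypothetical:

```python
from math import sqrt

def l1(v):
    """Entrywise l1-norm of a vector given as a list of floats."""
    return sum(abs(t) for t in v)

def primal_norm(x, y, wx, wy):
    """||u|| = sqrt(wx * ||x||_1^2 + wy * ||y||_1^2), as in Note 1."""
    return sqrt(wx * l1(x) ** 2 + wy * l1(y) ** 2)

def dual_norm(eta_x, eta_y, wx, wy):
    """||eta||_* = sqrt(||eta_x||_1^2 / wx + ||eta_y||_1^2 / wy), as in Note 1."""
    return sqrt(l1(eta_x) ** 2 / wx + l1(eta_y) ** 2 / wy)
```

For example, with unit weights, `primal_norm([1.0, -1.0], [2.0], 1.0, 1.0)` returns \(\sqrt{8}\), since both ℓ1-norms equal 2.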

References

  1. Auslender, A., Teboulle, M.: Interior projection-like methods for monotone variational inequalities. Math. Program. 104, 39–68 (2005)

  2. Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16, 697–725 (2006)

  3. Ben-Tal, A., Nemirovski, A.: Non-Euclidean restricted memory level method for large-scale convex optimization. Math. Program. 102, 407–456 (2005)

  4. Bennett, K.P., Mangasarian, O.L.: Robust linear programming discrimination of two linearly inseparable sets. Optim. Methods Softw. 1, 23–34 (1992)

  5. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  6. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)

  7. Burachik, R.S., Iusem, A.N., Svaiter, B.F.: Enlargement of monotone operators with applications to variational inequalities. Set-Valued Anal. 5, 159–180 (1997)

  8. Chen, X., Wets, R.J.-B., Zhang, Y.: Stochastic variational inequalities: residual minimization smoothing sample average approximations. SIAM J. Optim. 22, 649–673 (2012)

  9. Chen, X., Ye, Y.: On homotopy-smoothing methods for box-constrained variational inequalities. SIAM J. Control Optim. 37, 589–616 (1999)

  10. Chen, Y., Lan, G., Ouyang, Y.: Optimal primal-dual methods for a class of saddle point problems. SIAM J. Optim. 24, 1779–1814 (2014)

  11. Dang, C.D., Lan, G.: On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators. Comput. Optim. Appl. 60, 277–310 (2015)

  12. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 1. Springer, Berlin (2003)

  13. Fang, S.C., Peterson, E.: Generalized variational inequalities. J. Optim. Theory Appl. 38, 363–383 (1982)

  14. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization i: a generic algorithmic framework. SIAM J. Optim. 22, 1469–1492 (2012)

  15. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. 156, 59–99 (2015)

  16. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)

  17. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, Berlin (2009)

  18. Jacob, L., Obozinski, G., Vert, J.-P.: Group lasso with overlap and graph lasso. In: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, pp. 433–440. (2009)

  19. Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53, 1462–1475 (2008)

  20. Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst. 1, 17–58 (2011)

  21. Kien, B., Yao, J.-C., Yen, N.D.: On the solution existence of pseudomonotone variational inequalities. J. Glob. Optim. 41, 135–145 (2008)

  22. Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)

  23. Koshal, J., Nedic, A., Shanbhag, U.V.: Regularized iterative stochastic approximation methods for stochastic variational inequality problems. IEEE Trans. Autom. Control 58, 594–609 (2013)

  24. Lan, G.: An optimal method for stochastic composite optimization. Math. Program. 133(1), 365–397 (2012)

  25. Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of mirror descent stochastic approximation method. Math. Program. 134, 425–458 (2012)

  26. Lin, G.-H., Fukushima, M.: Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: a survey. Pac. J. Optim. 6, 455–482 (2010)

  27. Minty, G.J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29, 341–346 (1962)

  28. Monteiro, R.D., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20, 2755–2787 (2010)

  29. Monteiro, R.D., Svaiter, B.F.: Complexity of variants of Tseng’s modified F-B splitting and Korpelevich’s methods for hemivariational inequalities with applications to saddle-point and convex optimization problems. SIAM J. Optim. 21, 1688–1720 (2011)

  30. MOSEK ApS: The MOSEK Optimization Toolbox for MATLAB Manual, Version 6.0 (Revision 135). MOSEK ApS, Denmark (2012)

  31. Nemirovski, A.: Information-based complexity of linear operator equations. J. Complex. 8, 153–175 (1992)

  32. Nemirovski, A.: Prox-method with rate of convergence \({O}(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex–concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004)

  33. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)

  34. Nemirovski, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics. Wiley, New York (1983)

  35. Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109, 319–344 (2007)

  36. Nesterov, Y.: Universal gradient methods for convex optimization problems. Math. Program. 152, 381–404 (2015)

  37. Nesterov, Y., Vial, J.P.: Homogeneous analytic center cutting plane methods for convex problems and variational inequalities. SIAM J. Optim. 9, 707–728 (1999)

  38. Nesterov, Y.E.: A method for unconstrained convex minimization problem with the rate of convergence \(O(1/k^2)\). Doklady SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.

  39. Nesterov, Y.E.: Smooth minimization of nonsmooth functions. Math. Program. 103, 127–152 (2005)

  40. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  41. Shapiro, A.: Monte Carlo sampling methods. In: Shapiro, A., Ruszczyński, A. (eds.) Handbooks in Operations Research and Management Science, vol. 10, pp. 353–425. (2003)

  42. Shapiro, A., Xu, H.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57, 395–418 (2008)

  43. Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6, 59–70 (1999)

  44. Solodov, M.V., Svaiter, B.F.: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 25, 214–230 (2000)

  45. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58, 267–288 (1996)

  46. Tseng, P.: On accelerated proximal gradient methods for convex–concave optimization. Manuscript (2008)

  47. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106, 25–57 (2006)

  48. Wang, M., Lin, G., Gao, Y., Ali, M.M.: Sample average approximation method for a class of stochastic variational inequality problems. J. Syst. Sci. Complex. 24, 1143–1153 (2011)

  49. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning with application to clustering with side-information. Adv. Neural Inf. Process. Syst. 15, 505–512 (2003)

  50. Xu, H., Zhang, D.: Stochastic Nash equilibrium problems: sample average approximation and applications. Comput. Optim. Appl. 55, 597–645 (2013)

  51. Yousefian, F., Nedić, A., Shanbhag, U.V.: A regularized smoothing stochastic approximation (RSSA) algorithm for stochastic variational inequality problems. In: Proceedings of the 2013 Winter Simulation Conference: Simulation: Making Decisions in a Complex World, IEEE Press, pp. 933–944. (2013)

  52. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67, 301–320 (2005)

Author information

Correspondence to Guanghui Lan.

Additional information

Yunmei Chen is partially supported by NSF Grants DMS-1115568, IIP-1237814 and DMS-1319050. Guanghui Lan is partially supported by NSF Grants CMMI-1637473, CMMI-1637474, DMS-1319050 and ONR Grant N00014-16-1-2802. Part of the research was done while Yuyuan Ouyang was a Ph.D. student at the Department of Mathematics, University of Florida, and Yuyuan Ouyang is partially supported by AFRL Mathematical Modeling Optimization Institute.


Cite this article

Chen, Y., Lan, G. & Ouyang, Y. Accelerated schemes for a class of variational inequalities. Math. Program. 165, 113–149 (2017). https://doi.org/10.1007/s10107-017-1161-4
