
Mathematical Programming, Volume 165, Issue 1, pp. 113–149

Accelerated schemes for a class of variational inequalities

  • Yunmei Chen
  • Guanghui Lan
  • Yuyuan Ouyang

Full Length Paper, Series B

Abstract

We propose a novel stochastic method, the stochastic accelerated mirror-prox (SAMP) method, for solving a class of monotone stochastic variational inequalities (SVI). The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method. The SAMP method computes weak solutions with the optimal iteration complexity for SVIs. In particular, if the operator in the SVI consists of the stochastic gradient of a smooth function, the iteration complexity of the SAMP method can be accelerated in terms of its dependence on the Lipschitz constant of that function. For SVIs with bounded feasible sets, the bound on the iteration complexity of the SAMP method depends on the diameter of the feasible set. For unbounded SVIs, we adopt the modified gap function introduced by Monteiro and Svaiter for solving monotone inclusion problems, and show that the iteration complexity of the SAMP method depends on the distance from the initial point to the set of strong solutions. It is worth noting that our study also significantly improves a few existing complexity results for solving deterministic variational inequality problems. We demonstrate the advantages of the SAMP method over some existing algorithms through preliminary numerical experiments.
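To make the mirror-prox idea underlying this line of work concrete, the following is a minimal sketch of the classical deterministic extragradient method of Korpelevich (a Euclidean special case of mirror-prox), applied to a monotone variational inequality over a ball. This is not the SAMP method of the paper: the feasible set, operator, and step size below are illustrative assumptions, and the acceleration and stochastic components are omitted.

```python
import numpy as np

def project(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius} (example feasible set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def extragradient(F, x0, step, iters=100):
    """Basic extragradient iteration for VI(F, X):
    find x* in X with <F(x*), x - x*> >= 0 for all x in X.
    Converges for monotone, Lipschitz F when step < 1/L."""
    x = x0
    for _ in range(iters):
        y = project(x - step * F(x))   # extrapolation step at the current point
        x = project(x - step * F(y))   # correction step using the operator at y
    return x

# Example operator: F(x) = A x with a small symmetric part, so F is monotone
# (here even strongly monotone); the strong solution is x* = 0.
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
F = lambda x: A @ x
sol = extragradient(F, np.array([0.9, -0.5]), step=0.2, iters=500)
```

Each iteration takes two operator evaluations and two projections; mirror-prox generalizes the projection to a Bregman prox-mapping, which is what the stochastic and accelerated variants discussed in the abstract build upon.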

Keywords

Stochastic variational inequalities · Stochastic programming · Mirror-prox method · Extragradient method

Mathematics Subject Classification

90C25 · 90C15 · 62L20 · 68Q25

References

  1. Auslender, A., Teboulle, M.: Interior projection-like methods for monotone variational inequalities. Math. Program. 104, 39–68 (2005)
  2. Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16, 697–725 (2006)
  3. Ben-Tal, A., Nemirovski, A.: Non-Euclidean restricted memory level method for large-scale convex optimization. Math. Program. 102, 407–456 (2005)
  4. Bennett, K.P., Mangasarian, O.L.: Robust linear programming discrimination of two linearly inseparable sets. Optim. Methods Softw. 1, 23–34 (1992)
  5. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
  6. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)
  7. Burachik, R.S., Iusem, A.N., Svaiter, B.F.: Enlargement of monotone operators with applications to variational inequalities. Set-Valued Anal. 5, 159–180 (1997)
  8. Chen, X., Wets, R.J.-B., Zhang, Y.: Stochastic variational inequalities: residual minimization smoothing sample average approximations. SIAM J. Optim. 22, 649–673 (2012)
  9. Chen, X., Ye, Y.: On homotopy-smoothing methods for box-constrained variational inequalities. SIAM J. Control Optim. 37, 589–616 (1999)
  10. Chen, Y., Lan, G., Ouyang, Y.: Optimal primal-dual methods for a class of saddle point problems. SIAM J. Optim. 24, 1779–1814 (2014)
  11. Dang, C.D., Lan, G.: On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators. Comput. Optim. Appl. 60, 277–310 (2015)
  12. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 1. Springer, Berlin (2003)
  13. Fang, S.C., Peterson, E.: Generalized variational inequalities. J. Optim. Theory Appl. 38, 363–383 (1982)
  14. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework. SIAM J. Optim. 22, 1469–1492 (2012)
  15. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. 156, 59–99 (2015)
  16. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
  17. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, Berlin (2009)
  18. Jacob, L., Obozinski, G., Vert, J.-P.: Group lasso with overlap and graph lasso. In: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, pp. 433–440 (2009)
  19. Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53, 1462–1475 (2008)
  20. Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst. 1, 17–58 (2011)
  21. Kien, B., Yao, J.-C., Yen, N.D.: On the solution existence of pseudomonotone variational inequalities. J. Glob. Optim. 41, 135–145 (2008)
  22. Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
  23. Koshal, J., Nedić, A., Shanbhag, U.V.: Regularized iterative stochastic approximation methods for stochastic variational inequality problems. IEEE Trans. Autom. Control 58, 594–609 (2013)
  24. Lan, G.: An optimal method for stochastic composite optimization. Math. Program. 133(1), 365–397 (2012)
  25. Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of mirror descent stochastic approximation method. Math. Program. 134, 425–458 (2012)
  26. Lin, G.-H., Fukushima, M.: Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: a survey. Pac. J. Optim. 6, 455–482 (2010)
  27. Minty, G.J.: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 29, 341–346 (1962)
  28. Monteiro, R.D., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20, 2755–2787 (2010)
  29. Monteiro, R.D., Svaiter, B.F.: Complexity of variants of Tseng's modified F-B splitting and Korpelevich's methods for hemivariational inequalities with applications to saddle-point and convex optimization problems. SIAM J. Optim. 21, 1688–1720 (2011)
  30. MOSEK ApS: The MOSEK Optimization Toolbox for MATLAB Manual, version 6.0 (revision 135). MOSEK ApS, Denmark (2012)
  31. Nemirovski, A.: Information-based complexity of linear operator equations. J. Complex. 8, 153–175 (1992)
  32. Nemirovski, A.: Prox-method with rate of convergence \({O}(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex–concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004)
  33. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)
  34. Nemirovski, A., Yudin, D.: Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics. Wiley, New York (1983)
  35. Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109, 319–344 (2007)
  36. Nesterov, Y.: Universal gradient methods for convex optimization problems. Math. Program. 152, 381–404 (2015)
  37. Nesterov, Y., Vial, J.P.: Homogeneous analytic center cutting plane methods for convex problems and variational inequalities. SIAM J. Optim. 9, 707–728 (1999)
  38. Nesterov, Y.E.: A method for unconstrained convex minimization problem with the rate of convergence \(O(1/k^2)\). Doklady AN SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.
  39. Nesterov, Y.E.: Smooth minimization of nonsmooth functions. Math. Program. 103, 127–152 (2005)
  40. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
  41. Shapiro, A.: Monte Carlo sampling methods. In: Shapiro, A., Ruszczyński, A. (eds.) Handbooks in Operations Research and Management Science, vol. 10, pp. 353–425 (2003)
  42. Shapiro, A., Xu, H.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57, 395–418 (2008)
  43. Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6, 59–70 (1999)
  44. Solodov, M.V., Svaiter, B.F.: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 25, 214–230 (2000)
  45. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.), 267–288 (1996)
  46. Tseng, P.: On accelerated proximal gradient methods for convex–concave optimization. Manuscript (2008)
  47. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106, 25–57 (2006)
  48. Wang, M., Lin, G., Gao, Y., Ali, M.M.: Sample average approximation method for a class of stochastic variational inequality problems. J. Syst. Sci. Complex. 24, 1143–1153 (2011)
  49. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning with application to clustering with side-information. Adv. Neural Inf. Process. Syst. 15, 505–512 (2003)
  50. Xu, H., Zhang, D.: Stochastic Nash equilibrium problems: sample average approximation and applications. Comput. Optim. Appl. 55, 597–645 (2013)
  51. Yousefian, F., Nedić, A., Shanbhag, U.V.: A regularized smoothing stochastic approximation (RSSA) algorithm for stochastic variational inequality problems. In: Proceedings of the 2013 Winter Simulation Conference: Simulation: Making Decisions in a Complex World, IEEE Press, pp. 933–944 (2013)
  52. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67, 301–320 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2017

Authors and Affiliations

  1. Department of Mathematics, University of Florida, Gainesville, FL, USA
  2. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA
  3. Department of Mathematical Sciences, Clemson University, Clemson, SC, USA
