
Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms

  • Quoc Tran-Dinh
  • Tianxiao Sun
  • Shu Lu
Full Length Paper, Series A

Abstract

We study a class of monotone inclusions, called "self-concordant inclusions", which covers three fundamental convex optimization formulations as special cases. We develop a new generalized Newton-type framework to solve this inclusion. Our framework subsumes full-step, damped-step, and path-following methods as specific instances, while allowing inexact computation of the generalized Newton directions. We prove the local quadratic convergence of both the full-step and damped-step algorithms. Then, we propose a new two-phase inexact path-following scheme for solving this monotone inclusion which achieves an \(\varepsilon \)-solution within an \({\mathcal {O}}(\sqrt{\nu }\log (1/\varepsilon ))\) worst-case iteration-complexity, where \(\nu \) is the barrier parameter and \(\varepsilon \) is the desired accuracy. As byproducts, we customize our scheme to solve three convex problems: the convex–concave saddle-point problem, the nonsmooth constrained convex program, and the nonsmooth convex program with linear constraints. We also provide three numerical examples to illustrate our theory and to compare with existing methods.
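The paper's generalized scheme itself is not reproduced here. As a minimal point of reference only, the sketch below shows the classical damped-step Newton method for minimizing a standard self-concordant function, which corresponds to the special case \(A = 0\) of an inclusion of the form \(0 \in F(x) + A(x)\), with \(F\) the gradient of a self-concordant function and \(A\) maximally monotone; this form, the function names, and the test problem are our illustrative assumptions, not the authors' code. The damped step size \(1/(1+\lambda )\), driven by the Newton decrement \(\lambda \), is what underlies the global decrease and local quadratic convergence the abstract refers to.

```python
import numpy as np

def damped_newton(grad, hess, x0, tol=1e-8, max_iter=100):
    """Damped-step Newton method for a standard self-concordant f.

    Illustrative sketch only: uses the classical step size 1/(1 + lam),
    where lam = sqrt(g^T H^{-1} g) is the Newton decrement at x.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        d = np.linalg.solve(H, g)      # Newton direction H^{-1} g
        lam = np.sqrt(d @ g)           # Newton decrement sqrt(g^T H^{-1} g)
        if lam < tol:                  # lam bounds suboptimality near x*
            break
        x -= d / (1.0 + lam)           # damped step keeps x in dom(f)
    return x

# Example (assumed test problem): f(x) = c^T x - sum(log(x)) is
# self-concordant on x > 0, with unique minimizer x_i = 1/c_i.
c = np.array([1.0, 2.0, 4.0])
x_star = damped_newton(lambda x: c - 1.0 / x,
                       lambda x: np.diag(1.0 / x ** 2),
                       x0=np.ones(3))
# x_star ≈ [1.0, 0.5, 0.25]
```

In this classical setting the iterates remain in the domain of \(f\) because each damped step stays inside a Dikin ellipsoid of radius less than one; the paper's two-phase path-following scheme additionally tracks a central path governed by the barrier parameter \(\nu \).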

Keywords

Self-concordant inclusion · Generalized Newton-type methods · Path-following schemes · Monotone inclusion · Constrained convex programming · Saddle-point problems

Mathematics Subject Classification

90C25 · 90C06 · 90-08

Notes

Acknowledgements

This work was supported in part by NSF Grant No. 1619884 (USA).


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

1. Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill (UNC), Chapel Hill, USA
