Mathematical Programming, Volume 158, Issue 1–2, pp 501–546

A proximal method for composite minimization

  • A. S. Lewis
  • S. J. Wright
Full Length Paper, Series A

Abstract

We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
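As a sketch of the kind of subproblem the abstract alludes to (the notation \(h\), \(c\), \(x\), \(d\), and \(\mu\) below is ours, introduced for illustration): for a composite objective \(h(c(x))\), with \(h\) convex or prox-regular (possibly extended-valued) and \(c\) a smooth vector function, a linearized and regularized subproblem at a current iterate \(x\) takes the form
\[ \min_{d}\; h\bigl(c(x) + \nabla c(x)\,d\bigr) + \tfrac{\mu}{2}\,\|d\|^{2}, \]
where \(\mu > 0\) is a regularization (proximal) parameter. A local solution \(d\) of this subproblem supplies the step whose properties underlie the global convergence and active-manifold identification results described above.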

Keywords

Prox-regular functions · Polyhedral convex functions · Sparse optimization · Global convergence · Active constraint identification

Mathematics Subject Classification

49M37 · 90C30

Acknowledgments

We acknowledge the support of NSF Grants 0430504 and DMS-0806057. We are grateful for the comments of two referees, which were most helpful in revising earlier versions. We thank Mr. Taedong Kim for obtaining computational results for the formulation (6.4).


Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2015

Authors and Affiliations

  1. School of ORIE, Cornell University, Ithaca, USA
  2. Computer Sciences Department, University of Wisconsin, Madison, USA
