
Linearized Proximal Algorithms with Adaptive Stepsizes for Convex Composite Optimization with Applications

Applied Mathematics & Optimization

Abstract

We propose an inexact linearized proximal algorithm with an adaptive stepsize, together with its globalized version based on a backtracking line search, to solve a convex composite optimization problem. Under the assumptions of local weak sharp minima of order \(p\ (p\ge 1)\) for the outer convex function and a quasi-regularity condition for the inclusion problem associated with the inner function, we establish superlinear/quadratic convergence results for the proposed algorithms. Compared with the linearized proximal algorithms with a constant stepsize proposed in Hu et al. (SIAM J Optim 26(2):1207–1235, 2016), our algorithms enjoy broader applicability and higher convergence rates, and the analysis in the present paper deviates significantly from that of Hu et al. (2016). Numerical applications to the nonnegative inverse eigenvalue problem and the wireless sensor network localization problem indicate that the proposed algorithms are more efficient and robust, outperforming the algorithms in Hu et al. (2016) as well as some popular algorithms for the relevant problems.
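To fix ideas, a linearized proximal algorithm for \(\min _x h(c(x))\), with \(h\) convex and \(c\) smooth, repeatedly minimizes the partially linearized model \(h(c(x_k)+c'(x_k)d)+\Vert d\Vert ^2/(2v_k)\) in \(d\), where \(v_k>0\) is the stepsize. The sketch below is our own minimal illustration, not the algorithm of the paper: it specializes to \(h=\frac{1}{2}\Vert \cdot \Vert ^2\), for which the subproblem has a closed-form (Levenberg–Marquardt-type) solution, and it globalizes the iteration with a generic Armijo-style backtracking test on \(v\); the function names and the acceptance criterion are assumptions made for this example.

```python
import numpy as np

def prox_linear_step(c, J, x, v):
    """Minimize h(c(x) + J(x) d) + ||d||^2 / (2 v) over d for h(y) = 0.5 ||y||^2:
    the minimizer solves the damped normal equations (J^T J + I / v) d = -J^T c(x)."""
    r, A = c(x), J(x)
    return -np.linalg.solve(A.T @ A + np.eye(x.size) / v, A.T @ r)

def lpa_backtracking(c, J, x0, v0=1.0, rho=0.5, sigma=1e-4, tol=1e-10, max_iter=50):
    """Linearized proximal iteration with a backtracking stepsize.
    The sufficient-decrease test below is a generic Armijo-style criterion
    chosen for illustration, not necessarily the paper's exact test."""
    F = lambda z: 0.5 * float(np.dot(c(z), c(z)))  # F(x) = h(c(x)) with h = 0.5 ||.||^2
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        v = v0
        d = prox_linear_step(c, J, x, v)
        # backtrack: shrink v (a more conservative step) until F decreases enough
        while F(x + d) > F(x) - sigma * float(np.dot(d, d)) / (2.0 * v):
            v *= rho
            d = prox_linear_step(c, J, x, v)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy instance: c(x) = 0 on the unit circle intersected with the line x1 = x2.
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(lpa_backtracking(c, J, [2.0, 0.5]))  # approx. (0.7071, 0.7071)
```

For a general convex \(h\) the subproblem has no closed form and must be solved, possibly inexactly, by an inner solver; the \(\frac{1}{2}\Vert \cdot \Vert ^2\) case is chosen here only to keep the sketch self-contained.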


Notes

  1. This was missed in the statements of [19, Algorithms 10, 17 and 19].

  2. \(\eta ({\bar{x}})\) and \(\beta ({\bar{x}})\) are the local weak sharp minima modulus of order 2 and the quasi-regular modulus around the involved point \({{\bar{x}}}\), respectively.

  3. CVX, designed by Michael Grant and Stephen Boyd, is a MATLAB-based modeling system for convex optimization. Detailed information is available from the website http://cvxr.com/cvx/.

References

  1. Bapat, R.B., Raghavan, T.E.S.: Nonnegative matrices and applications. In: Encyclopedia of Mathematics and Its Applications, vol. 15. Cambridge University Press, Cambridge (1997)

  2. Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. In: Classics in Applied Mathematics, vol. 9. SIAM, Philadelphia (1994)

  3. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Belmont (1999)

  4. Biswas, P., Liang, T.-C., Toh, K.-C., Ye, Y., Wang, T.-C.: Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Trans. Autom. Sci. Eng. 3(4), 360–371 (2006)

  5. Biswas, P., Ye, Y.: Semidefinite programming for ad hoc wireless sensor network localization. In: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pp. 46–54 (2004)

  6. Bonnans, J.F., Ioffe, A.: Second-order sufficiency and quadratic growth for nonisolated minima. Math. Oper. Res. 20, 801–817 (1995)

  7. Burke, J.V.: Descent methods for composite nondifferentiable optimization problems. Math. Program. 33(3), 260–279 (1985)

  8. Burke, J.V.: An exact penalization viewpoint of constrained optimization. SIAM J. Control Optim. 29, 968–998 (1991)

  9. Burke, J.V., Ferris, M.C.: Weak sharp minima in mathematical programming. SIAM J. Control Optim. 31(5), 1340–1359 (1993)

  10. Burke, J.V., Ferris, M.C.: A Gauss-Newton method for convex composite optimization. Math. Program. 71, 179–194 (1995)

  11. Burke, J.V., Poliquin, R.A.: Optimality conditions for non-finite valued convex composite functions. Math. Program. 57, 103–120 (1992)

  12. Chu, M.T.: Inverse eigenvalue problems. SIAM Rev. 40, 1–39 (1998)

  13. Chu, M.T., Golub, G.H.: Structured inverse eigenvalue problems. Acta Numer. 11, 1–71 (2002)

  14. Drusvyatskiy, D., Lewis, A.S.: Error bounds, quadratic growth, and linear convergence of proximal methods. Math. Oper. Res. 43, 919–948 (2018)

  15. Ferreira, O.P., Gonçalves, M.L.N., Oliveira, P.R.: Convergence of the Gauss-Newton method for convex composite optimization under a majorant condition. SIAM J. Optim. 23, 1757–1783 (2013)

  16. Fletcher, R.: Practical Methods of Optimization. Wiley, New York (1987)

  17. Fuhrmann, P.A., Helmke, U.: Nonnegative matrices and graph theory. In: The Mathematics of Networks of Linear Systems, pp. 411–466. Springer (2015)

  18. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins University Press, Baltimore (2013)

  19. Hu, Y.H., Li, C., Yang, X.Q.: On convergence rates of linearized proximal algorithms for convex composite optimization with applications. SIAM J. Optim. 26(2), 1207–1235 (2016)

  20. Hu, Y.H., Yang, X.Q., Sim, C.-K.: Inexact subgradient methods for quasi-convex optimization problems. Eur. J. Oper. Res. 240, 315–327 (2015)

  21. Ji, X., Zha, H.Y.: Sensor positioning in wireless ad-hoc sensor networks using multidimensional scaling. In: IEEE INFOCOM, vol. 4, pp. 2652–2661 (2004)

  22. Lewis, A.S., Wright, S.J.: A proximal method for composite minimization. Math. Program. 158, 501–546 (2016)

  23. Li, C., Ng, K.F.: Majorizing functions and convergence of the Gauss-Newton method for convex composite optimization. SIAM J. Optim. 18, 613–642 (2007)

  24. Li, C., Wang, X.H.: On convergence of the Gauss-Newton method for convex composite optimization. Math. Program. 91(2), 349–356 (2002)

  25. Li, G.: Global error bounds for piecewise convex polynomials. Math. Program. 137(1–2), 37–64 (2013)

  26. Luo, Z.Q., Ma, W.-K., So, A.M.-C., Ye, Y., Zhang, S.Z.: Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process. Mag. 27(3), 20–34 (2010)

  27. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)

  28. Pang, J.-S.: Error bounds in mathematical programming. Math. Program. 79, 299–332 (1997)

  29. Perfect, H.: Methods of constructing certain stochastic matrices. Duke Math. J. 20, 395–404 (1953)

  30. Perfect, H.: Methods of constructing certain stochastic matrices. II. Duke Math. J. 22, 305–311 (1955)

  31. Qi, L., Sun, J.: A nonsmooth version of Newton's method. Math. Program. 58(1–3), 353–367 (1993)

  32. Rockafellar, R.T.: First- and second-order epi-differentiability in nonlinear programming. Trans. Am. Math. Soc. 307(1), 75–108 (1988)

  33. Saxe, J.B.: Embeddability of weighted graphs in k-space is strongly NP-hard. In: Proceedings of the 17th Allerton Conference on Communications, Control, and Computing, pp. 480–489 (1979)

  34. Studniarski, M., Ward, D.E.: Weak sharp minima: characterizations and sufficient conditions. SIAM J. Control Optim. 38(1), 219–236 (1999)

  35. Suleĭmanova, H.R.: Stochastic matrices with real characteristic numbers. Doklady Akad. Nauk SSSR (NS) 66, 343–345 (1949)

  36. Sun, W.Y., Yuan, Y.X.: Optimization Theory and Methods. Springer, New York (2006)

  37. Womersley, R.S.: Local properties of algorithms for minimizing nonsmooth composite functions. Math. Program. 32(1), 69–89 (1985)

  38. Zhao, Z., Bai, Z.J., Jin, X.Q.: A Riemannian inexact Newton-CG method for constructing a nonnegative matrix with prescribed realizable spectrum. Numer. Math. 140, 827–855 (2018)

  39. Zheng, X.Y., Ng, K.F.: Strong KKT conditions and weak sharp solutions in convex-composite optimization. Math. Program. 126, 259–279 (2011)

  40. Zheng, X.Y., Yang, X.Q.: Weak sharp minima for semi-infinite optimization problems with applications. SIAM J. Optim. 18, 573–588 (2007)


Acknowledgements

The authors are grateful to the editor and the anonymous reviewers for their valuable comments and suggestions toward the improvement of this paper. Yaohua Hu's work was supported in part by the National Natural Science Foundation of China (12222112, 12071306, 11871347, 32170655), Project of Educational Commission of Guangdong Province of China (2021KTSCX103, 2019KZDZX1007), and Natural Science Foundation of Shenzhen (JCYJ20190808173603590). Chong Li's work was supported in part by the National Natural Science Foundation of China (11971429, 12071441). Jinhua Wang's work was supported in part by the National Natural Science Foundation of China (Grant 12171131). Xiaoqi Yang's work was supported in part by the Research Grants Council of Hong Kong (PolyU 15216518).

Author information


Corresponding author

Correspondence to Jinhua Wang.

Ethics declarations

Competing Interests

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proposition A. Let \(c>0\), \(q>1\), and let \(X^*\subseteq \mathbb {R}^{n}\) be closed. Let \(\{x_{k}\}\subseteq \mathbb {R}^{n}\) be a sequence satisfying

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert \le c\, \textrm{d}(x_{k},X^{*}) \quad \text {and}\quad \textrm{d}(x_{k+1},X^{*})\le c\, \textrm{d}^q(x_{k},X^{*}) \quad \text {for each } k\ge 0. \end{aligned}$$
(4.4)

If \( \textrm{d}(x_{0},X^{*})< \left( \frac{1}{c}\right) ^\frac{1}{q-1}\), then \(\{x_{k}\}\) converges to a point \(x^*\in X^*\) with order \(q\).

Proof

Assume that \( \textrm{d}(x_{0},X^{*})< \left( \frac{1}{c}\right) ^\frac{1}{q-1}\), and set \(\tau := c \textrm{d}^{q-1}(x_{0},X^{*})\). Then \(\tau <1\), and

$$\begin{aligned} c\textrm{d}^{q-1}(x_{k},X^{*})\le \tau \quad \hbox {and}\quad \textrm{d}(x_{k+1},X^{*})\le \tau \textrm{d}(x_{k},X^{*}) \quad \hbox {for each } k\ge 0 \end{aligned}$$
(4.5)

because, applying the second inequality of (4.4) recursively, \(\textrm{d}(x_{k},X^{*})\le c^{\frac{q^k-1}{q-1}} \textrm{d}^{q^k}(x_{0},X^{*})= c^{\frac{q^k-1}{q-1}}\left( \frac{\tau }{c}\right) ^\frac{q^k}{q-1}\le \left( \frac{\tau }{c}\right) ^{\frac{1}{q-1}} \) for each k. In particular, we have that \(\textrm{d}(x_{k},X^{*})\rightarrow 0\).

Now fix \(k\ge 1\). By the triangle inequality and (4.5), \( \textrm{d}(x_{k},X^{*})\le \textrm{d}(x_{k+1},X^{*})+\Vert x_{k+1}-x_k\Vert \le \tau \textrm{d}(x_{k},X^{*})+\Vert x_{k+1}-x_k\Vert \). It follows that \(\textrm{d}(x_{k},X^{*})\le \frac{1}{1-\tau }\Vert x_{k+1}-x_k\Vert \) for each \(k\ge 0\). Thus, using (4.4) and this bound at index \(k-1\), we obtain

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert \le c \textrm{d}( x_{k},X^{*})\le c^2\textrm{d}^{q}( x_{k-1},X^{*})\le \frac{ c^2}{(1-\tau )^{q}} \Vert x_{k}-x_{k-1}\Vert ^{q}. \end{aligned}$$

Set \(c_k:=\frac{ c^2}{(1-\tau )^{q}} \Vert x_{k}-x_{k-1}\Vert ^{q-1}\). Then

$$\begin{aligned} c_{k+1}\le \left( \frac{ c^2}{(1-\tau )^{q}}\right) ^{q-1} \Vert x_{k}-x_{k-1}\Vert ^{(q-1)^2} c_k\quad \text {and}\quad \Vert x_{k+1}-x_k\Vert \le c_k\Vert x_{k}-x_{k-1}\Vert . \end{aligned}$$
(4.6)

Since \(\textrm{d}(x_{k},X^{*})\rightarrow 0\), it follows from (4.4) that \(\Vert x_{k}-x_{k-1}\Vert \rightarrow 0\), and so \(c_k\rightarrow 0\). This, together with (4.6), implies that \(\{x_k\}\) is a Cauchy sequence, which therefore converges to a point \(x^*\in X^*\) (as \(X^*\) is closed and \(\textrm{d}(x_{k},X^{*})\rightarrow 0\)). Furthermore, without loss of generality, we may assume that \(c_{k+1}\le c_k\le \frac{1}{2}\) (see the first inequality in (4.6)). Write \(d_{k}:=x_{k+1}-x_k\) for simplicity. Then (4.6) implies that \( \Vert d_{k+j}\Vert \le c_k^j\Vert d_k\Vert \) for each \(j\ge 1\). Therefore, \( \frac{\sum _{j=1}^{\infty }\Vert d_{k+j}\Vert }{\Vert d_{k}\Vert }\le \frac{c_k}{1-c_k}\rightarrow 0 \), and so \( \lim _{k\rightarrow \infty }\frac{\Vert \sum _{j=0}^{\infty } d_{k+j}\Vert }{\Vert d_{k} \Vert }=1, \) because

$$\begin{aligned} 1-\frac{\sum _{j=1}^{\infty }\Vert d_{k+j}\Vert }{\Vert d_{k}\Vert }\le \frac{\Vert \sum _{j=0}^{\infty } d_{k+j}\Vert }{\Vert d_{k}\Vert }\le 1+\frac{\sum _{j=1}^{\infty }\Vert d_{k+j}\Vert }{\Vert d_{k}\Vert }. \end{aligned}$$

Consequently, we conclude that

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{\Vert x_{k+1}-x^*\Vert }{\Vert {x_{k}-x^*}\Vert ^{q}}=\limsup _{k\rightarrow \infty }\frac{\Vert \sum _{j=1}^{\infty }d_{k+j}\Vert }{\Vert \sum _{j=0}^{\infty }d_{k+j}\Vert ^{q}}=\limsup _{k\rightarrow \infty }\frac{\Vert d_{k+1}\Vert }{\Vert d_{k}\Vert ^{q}}\le \left( \frac{1}{1-\tau }\right) ^{q}c^2, \end{aligned}$$

which means that \(\{x_k\}\) converges to \(x^*\) with order \(q\). \(\square \)
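As a quick numerical sanity check of Proposition A (our illustration, not part of the paper), take \(X^*=\{0\}\subseteq \mathbb {R}\), \(c=2\), \(q=2\), and the scalar sequence \(x_{k+1}=x_k^2\); both inequalities in (4.4) then hold, and \(x_0=0.4\) satisfies the hypothesis \(\textrm{d}(x_0,X^*)<(1/c)^{1/(q-1)}=0.5\). The snippet below verifies (4.4) at each step and displays the bounded \(q\)-order ratio.

```python
# Sanity check of Proposition A with X* = {0} in R, c = 2, q = 2, x_{k+1} = x_k^2.
# Both inequalities in (4.4) hold: |x_{k+1} - x_k| = x_k (1 - x_k) <= 2 x_k and
# d(x_{k+1}, X*) = x_k^2 <= 2 x_k^2; moreover x_0 = 0.4 < (1/c)^{1/(q-1)} = 0.5.
c, q = 2.0, 2.0
x = 0.4
for k in range(6):
    x_next = x ** 2
    assert abs(x_next - x) <= c * abs(x)       # first inequality of (4.4)
    assert abs(x_next) <= c * abs(x) ** q      # second inequality of (4.4)
    # q-order ratio d(x_{k+1}, X*) / d(x_k, X*)^q stays bounded (here it equals 1)
    print(f"k={k}: d(x_k, X*) = {x:.3e}, ratio = {x_next / x ** q:.3f}")
    x = x_next
```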

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hu, Y., Li, C., Wang, J. et al. Linearized Proximal Algorithms with Adaptive Stepsizes for Convex Composite Optimization with Applications. Appl Math Optim 87, 52 (2023). https://doi.org/10.1007/s00245-022-09957-x

