
Convergence rates analysis of a multiobjective proximal gradient method

  • Original Paper
  • Published in: Optimization Letters 17, 333–350 (2023)

Abstract

Many descent algorithms for multiobjective optimization have been developed over the last two decades. Tanabe et al. (Comput Optim Appl 72(2):339–361, 2019) proposed a proximal gradient method for multiobjective optimization, which solves problems in which each objective function is the sum of a continuously differentiable function and a closed, proper, and convex one. Under reasonable assumptions, the accumulation points of the sequences generated by this method are known to be Pareto stationary. However, that paper did not establish convergence rates. Here, we show global convergence rates for the multiobjective proximal gradient method that match those known in scalar optimization. More specifically, using merit functions to measure complexity, we present convergence rates for non-convex (\(O(\sqrt{1 / k})\)), convex (\(O(1 / k)\)), and strongly convex (\(O(r^k)\) for some \(r \in (0, 1)\)) problems. We also extend the so-called Polyak-Łojasiewicz (PL) inequality to multiobjective optimization and establish a linear convergence rate (\(O(r^k)\) for some \(r \in (0, 1)\)) for multiobjective problems satisfying such inequalities.
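For context, the following display recalls the classical scalar PL inequality (Polyak [21]; see also Karimi et al. [18]), which the article extends to the multiobjective setting; the precise multiobjective form is given in the paper itself, not here. Suppose \(F :\mathbf {R}^n \rightarrow \mathbf {R}\) is differentiable with an \(L\)-Lipschitz gradient and minimum value \(F^*\); the PL inequality with constant \(\mu > 0\) reads

\[ \frac{1}{2}\left\Vert \nabla F(x)\right\Vert ^2 \ge \mu \left( F(x) - F^* \right) \quad \text{for all } x \in \mathbf {R}^n, \]

under which gradient descent with step size \(1 / L\) satisfies \(F(x^k) - F^* \le (1 - \mu / L)^k \left( F(x^0) - F^* \right)\), i.e., a linear rate \(O(r^k)\) with \(r = 1 - \mu / L\).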


Notes

  1. We say that \(h :\mathbf {R}^n \rightarrow \mathbf {R}\cup \{ \infty \}\) has a convexity parameter \(\varsigma \in \mathbf {R}\) if \(h(\alpha x + (1 - \alpha )y) \le \alpha h(x) + (1 - \alpha )h(y) - (1 / 2)\alpha (1 - \alpha ) \varsigma \left\Vert x - y\right\Vert ^2\) holds for all \(x, y \in \mathbf {R}^n\) and \(\alpha \in [0, 1]\). When \(\varsigma > 0\), we call h strongly convex. Note that this definition allows non-convex cases, i.e., \(\varsigma < 0\) can also be considered.
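As an illustrative example of this definition (not taken from the article): the quadratic \(h(x) = (\varsigma / 2)\left\Vert x\right\Vert ^2\) has convexity parameter exactly \(\varsigma\). Indeed, the norm identity \(\left\Vert \alpha x + (1 - \alpha )y\right\Vert ^2 = \alpha \left\Vert x\right\Vert ^2 + (1 - \alpha )\left\Vert y\right\Vert ^2 - \alpha (1 - \alpha )\left\Vert x - y\right\Vert ^2\) gives

\[ h(\alpha x + (1 - \alpha )y) = \alpha h(x) + (1 - \alpha )h(y) - \frac{1}{2}\alpha (1 - \alpha )\varsigma \left\Vert x - y\right\Vert ^2, \]

so the defining inequality holds with equality for every \(\varsigma \in \mathbf {R}\), whether positive (strongly convex) or negative (non-convex).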

References

  1. Beck, A.: First-order methods in optimization. Society for Industrial and Applied Mathematics (2017). https://doi.org/10.1137/1.9781611974997

  2. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2(1), 183–202 (2009). https://doi.org/10.1137/080716542


  3. Bello-Cruz, Y., Melo, J.G., Serra, R.V.: A proximal gradient splitting method for solving convex vector optimization problems. Optimization (2020). https://doi.org/10.1080/02331934.2020.1800699


  4. Bertsekas, D.P.: Nonlinear programming, 2nd edn. Athena Scientific, Belmont, Mass (1999)


  5. Boţ, R.I., Grad, S.M.: Inertial forward-backward methods for solving vector optimization problems. Optimization 67(7), 959–974 (2018). https://doi.org/10.1080/02331934.2018.1440553


  6. Bonnel, H., Iusem, A.N., Svaiter, B.F.: Proximal methods in vector optimization. SIAM J. Optim. 15(4), 953–970 (2005). https://doi.org/10.1137/S1052623403429093


  7. Calderón, L., Diniz-Ehrhardt, M.A., Martínez, J.M.: On high-order model regularization for multiobjective optimization. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1719408


  8. Carrizo, G.A., Lotito, P.A., Maciel, M.C.: Trust region globalization strategy for the nonconvex unconstrained multiobjective optimization problem. Math. Program. 159(1–2), 339–369 (2016). https://doi.org/10.1007/s10107-015-0962-6


  9. Custódio, A.L., Madeira, J.F., Vaz, A.I., Vicente, L.N.: Direct multisearch for multiobjective optimization. SIAM J. Optim. 21(3), 1109–1140 (2011). https://doi.org/10.1137/10079731X


  10. Fliege, J., Graña Drummond, L.M., Svaiter, B.F.: Newton’s method for multiobjective optimization. SIAM J. Optim. 20(2), 602–626 (2009). https://doi.org/10.1137/08071692X


  11. Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 51(3), 479–494 (2000). https://doi.org/10.1007/s001860000043


  12. Fliege, J., Vaz, A.I., Vicente, L.N.: Complexity of gradient descent for multiobjective optimization. Optim. Methods Softw. 34(5), 949–959 (2019). https://doi.org/10.1080/10556788.2018.1510928


  13. Fukuda, E.H., Graña Drummond, L.M.: A survey on multiobjective descent methods. Pesquisa Operacional 34(3), 585–620 (2014). https://doi.org/10.1590/0101-7438.2014.034.03.0585


  14. Fukushima, M., Mine, H.: A generalized proximal point algorithm for certain non-convex minimization problems. Int. J. Syst. Sci. 12(8), 989–1000 (1981). https://doi.org/10.1080/00207728108963798


  15. Graña Drummond, L.M., Iusem, A.N.: A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28(1), 5–29 (2004). https://doi.org/10.1023/B:COAP.0000018877.86161.8b


  16. Grapiglia, G.N., Yuan, J., Yuan, Y.X.: On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization. Math. Program. 152(1–2), 491–520 (2015). https://doi.org/10.1007/s10107-014-0794-9


  17. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49(4), 263–265 (1952). https://doi.org/10.1142/9789812796936_0018


  18. Karimi, H., Nutini, J., Schmidt, M.: Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In: P. Frasconi, N. Landwehr, G. Manco, J. Vreeken (eds.) Machine Learning and Knowledge Discovery in Databases, pp. 795–811. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-46128-1_50

  19. Lucambio Pérez, L.R., Prudente, L.F.: Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 28(3), 2690–2720 (2018). https://doi.org/10.1137/17M1126588


  20. Nesterov, Y.: Introductory lectures on convex optimization: A basic course. Kluwer Academic Publishers, Dordrecht (2004). https://doi.org/10.1007/978-1-4419-8853-9

  21. Polyak, B.: Gradient methods for minimizing functionals (in Russian). Zh. Vychisl. Mat. Mat. Fiz. 3(4), 643–653 (1963). https://doi.org/10.1016/0041-5553(63)90382-3


  22. Sion, M.: On general minimax theorems. Pacific J. Math. 8(1), 171–176 (1958). https://doi.org/10.2140/pjm.1958.8.171


  23. Tanabe, H., Fukuda, E.H., Yamashita, N.: Proximal gradient methods for multiobjective optimization and their applications. Comput. Optim. Appl. 72(2), 339–361 (2019). https://doi.org/10.1007/s10589-018-0043-x


  24. Tanabe, H., Fukuda, E.H., Yamashita, N.: New merit functions for multiobjective optimization and their properties. arXiv:2010.09333 (2022)


Acknowledgements

This work was supported by the Grant-in-Aid for Scientific Research (C) (17K00032 and 19K11840) and Grant-in-Aid for JSPS Fellows (20J21961) from the Japan Society for the Promotion of Science. We are also grateful to the anonymous referees for their useful comments.

Author information


Corresponding author

Correspondence to Hiroki Tanabe.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tanabe, H., Fukuda, E.H. & Yamashita, N. Convergence rates analysis of a multiobjective proximal gradient method. Optim Lett 17, 333–350 (2023). https://doi.org/10.1007/s11590-022-01877-7
