Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization

  • Original Paper
  • Published in Optimization Letters

Abstract

In this paper, we analyze the worst-case complexity of trust-region methods for solving unconstrained smooth multiobjective optimization problems. We focus in particular on the method proposed by Villacorta et al. [J Optim Theory Appl 160:865–889, 2014]. When the component functions are convex (respectively, strongly convex), we derive a complexity bound of \({\mathcal {O}}(\epsilon ^{-1})\) (respectively, \({\mathcal {O}}(\log \epsilon ^{-1})\)) for driving a criticality measure below a given positive threshold \(\epsilon\). The derived complexity bounds recover those of classical trust-region methods for solving (strongly) convex smooth unconstrained single-objective problems.
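For readers unfamiliar with criticality measures in the multiobjective setting, the following is a minimal sketch of the measure commonly used in multiobjective descent methods, going back to Fliege and Svaiter [4]; whether the paper uses exactly this quantity is an assumption based on the abstract. For component functions \(f_1,\dots,f_m\), one defines
\[
\omega(x) \;=\; - \min_{\|d\|\le 1}\ \max_{i=1,\dots,m} \nabla f_i(x)^{\top} d ,
\]
which satisfies \(\omega(x)\ge 0\) for all \(x\), with \(\omega(x)=0\) exactly when \(x\) is Pareto critical. In these terms, the stated bounds say that at most \({\mathcal {O}}(\epsilon ^{-1})\) iterations (respectively, \({\mathcal {O}}(\log \epsilon ^{-1})\) iterations in the strongly convex case) are needed to reach an iterate whose criticality measure is at most \(\epsilon\).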

Data availability

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

References

  1. Eichfelder, G.: Adaptive Scalarization Methods in Multiobjective Optimization. Springer-Verlag, Berlin Heidelberg (2008)

  2. Jahn, J.: Vector Optimization: Theory, Applications, and Extensions. Springer, New York (2011)

  3. Miettinen, K.: Nonlinear Multiobjective Optimization, vol. 12. Kluwer Academic, Boston (1999)

  4. Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 51, 479–494 (2000)

  5. Fliege, J., Graña Drummond, L.M., Svaiter, B.F.: Newton’s method for multiobjective optimization. SIAM J. Optim. 20(2), 602–626 (2009)

  6. Carrizo, G.A., Lotito, P.A., Maciel, M.C.: Trust region globalization strategy for the nonconvex unconstrained multiobjective optimization problem. Math. Program. 159, 339–369 (2016)

  7. Qu, S., Goh, M., Liang, B.: Trust region methods for solving multiobjective optimisation. Optim. Methods Softw. 28(4), 796–811 (2013)

  8. Thomann, J., Eichfelder, G.: A trust-region algorithm for heterogeneous multiobjective optimization. SIAM J. Optim. 29, 1017–1047 (2019)

  9. Villacorta, K.D.V., Oliveira, P.R., Soubeyran, A.: A trust-region method for unconstrained multiobjective problems with applications in satisficing processes. J. Optim. Theory Appl. 160(3), 865–889 (2014)

  10. Fukuda, E.H., Graña Drummond, L.M.: A survey on multiobjective descent methods. Pesqui. Oper. 34(3), 585–620 (2014)

  11. Nesterov, Y.: Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, Dordrecht (2004)

  12. Gratton, S., Sartenaer, A., Toint, Ph.L.: Recursive trust-region methods for multiscale nonlinear optimization. SIAM J. Optim. 19, 414–444 (2008)

  13. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Adaptive cubic regularisation methods for unconstrained optimization part II: worst-case function-evaluation complexity. Math. Program. 130, 295–319 (2011)

  14. Garmanjani, R., Júdice, D., Vicente, L.N.: Trust-region methods without using derivatives: worst case complexity and the non-smooth case. SIAM J. Optim. 26, 1987–2011 (2016)

  15. Garmanjani, R.: A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization. Optimization 71, 1709–1719 (2022)

  16. Toint, Ph.L.: Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization. Optim. Methods Softw. 28, 82–95 (2013)

  17. Grapiglia, G.N., Yuan, J., Yuan, Y.: On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization. Optim. Methods Softw. 31, 591–604 (2016)

  18. Fliege, J., Vaz, A.I.F., Vicente, L.N.: Complexity of gradient descent for multiobjective optimization. Optim. Methods Softw. 34(5), 949–959 (2019)

  19. Ferreira, O.P., Louzeiro, M.S., Prudente, L.F.: Iteration-complexity and asymptotic analysis of steepest descent method for multiobjective optimization on Riemannian manifolds. J. Optim. Theory Appl. 184, 507–533 (2020)

  20. Calderón, L., Diniz-Ehrhardt, M.A., Martínez, J.M.: On high-order model regularization for multiobjective optimization. Optim. Methods Softw. 37, 175–191 (2022)

  21. Custódio, A.L., Diouane, Y., Garmanjani, R., Riccietti, E.: Worst-case complexity bounds of directional direct-search methods for multiobjective optimization. J. Optim. Theory Appl. 188, 73–93 (2021)

  22. Liu, S., Vicente, L.N.: The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning. Ann. Oper. Res. (2021)

  23. Grapiglia, G.N., Yuan, J., Yuan, Y.: On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization. Math. Program. 152, 491–520 (2015)

  24. Cocchi, G., Lapucci, M.: An augmented Lagrangian algorithm for multi-objective optimization. Comput. Optim. Appl. 77, 29–56 (2020)

  25. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust-Region Methods. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2000)

  26. Nesterov, Y.: How to make the gradients small. Optima 88, 10–11 (2012)

  27. Calafiore, G.C., El Ghaoui, L.: Optimization Models. Control systems and optimization series. Cambridge University Press (2014)

Acknowledgements

The author is very grateful to two anonymous referees for very helpful and constructive comments, which significantly improved the contributions and presentation of the paper.

Author information

Corresponding author

Correspondence to R. Garmanjani.

Ethics declarations

Conflict of interest

The author declares no conflict of interest related to this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Support for the author was provided by Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) under the projects PTDC/MAT-APL/28400/2017, UIDP/MAT/00297/2020, and UIDB/MAT/00297/2020.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Garmanjani, R. Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization. Optim Lett 17, 1161–1179 (2023). https://doi.org/10.1007/s11590-022-01932-3
