
Globally convergent Newton-type methods for multiobjective optimization

Abstract

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed for convex problems, whereas the second combines second-order information of the objective functions with ingredients of the steepest descent method. A key point of our approach is to impose safeguard strategies on the search directions. These strategies consist of conditions that, at each iteration, prevent the search direction from being too close to orthogonal to the multiobjective steepest descent direction, and that require the lengths of the two directions to be proportional. To fulfill these safeguard conditions, we adopt the classical technique of modifying the Hessians, if necessary, by adding multiples of the identity. For our first Newton-type method, we also show that, under convexity assumptions, the local superlinear rate of convergence (quadratic, when the Hessians of the objectives are Lipschitz continuous) to a local efficient point of the given problem is recovered. Global convergence of both methods is established by first proving the global convergence of a general algorithm and then showing that the new methods are instances of it. Numerical experiments illustrate the practical advantages of the proposed Newton-type schemes.
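The safeguard idea can be made concrete with a small sketch. The Python fragment below is not the authors' algorithm (the precise conditions and constants in the paper may differ): it assumes the two safeguards take the common form of a non-orthogonality test between the candidate Newton-type direction d and the multiobjective steepest descent direction v, together with a proportionality test on their lengths, and it uses the standard Cholesky-based device of adding multiples of the identity to a Hessian. The helper names modify_hessian and safeguarded_direction and the constants theta, beta, sigma0 are hypothetical.

import numpy as np

def modify_hessian(H, sigma0=1e-3, growth=10.0, max_tries=60):
    # Classical modified-Newton device: add multiples of the identity
    # until the matrix admits a Cholesky factorization, i.e., becomes
    # positive definite.
    n = H.shape[0]
    sigma = 0.0
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(H + sigma * np.eye(n))
            return H + sigma * np.eye(n)
        except np.linalg.LinAlgError:
            sigma = sigma0 if sigma == 0.0 else growth * sigma
    raise RuntimeError("Hessian could not be made positive definite")

def safeguarded_direction(d, v, theta=0.1, beta=1e-6):
    # d: candidate Newton-type direction; v: multiobjective steepest
    # descent direction (both are descent directions). Accept d only if
    # (i) its angle with v is bounded away from 90 degrees and (ii) its
    # length is proportional to that of v; otherwise fall back to v.
    nd, nv = np.linalg.norm(d), np.linalg.norm(v)
    angle_ok = np.dot(d, v) >= theta * nd * nv   # non-orthogonality
    length_ok = nd >= beta * nv                  # length proportionality
    return d if (angle_ok and length_ok) else v

In a Newton-type iteration one would build the objectives' Hessians, repair them with modify_hessian when they fail to be positive definite, solve the Newton subproblem for d, and pass the result through safeguarded_direction before the line search; safeguards of this kind are what allow such methods to fit the general globally convergent scheme described in the abstract.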

Data availability statement

The codes supporting the numerical experiments are freely available in the GitHub repository https://github.com/lfprudente/newtonMOP.

Acknowledgements

This work was funded by FAPEG (Grants PRONEM-201710267000532, PPP03/15-201810267001725) and CNPq (Grants 302666/2017-6, 408123/2018-4, 424860/2018-0).

Author information

Corresponding author

Correspondence to M. L. N. Gonçalves.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Gonçalves, M.L.N., Lima, F.S. & Prudente, L.F. Globally convergent Newton-type methods for multiobjective optimization. Comput Optim Appl 83, 403–434 (2022). https://doi.org/10.1007/s10589-022-00414-7
