
Combined gradient methods for multiobjective optimization

  • Original Research
  • Journal of Applied Mathematics and Computing

Abstract

In this paper, combined gradient methods are designed to solve multiobjective optimization problems. Exploiting the special structure of the problem, the methods use only the gradient of each objective function and combine these gradients through combining parameters to obtain a search direction; the Hessian matrix of each objective function is never required. Under the assumption that the gradients of the objective functions are linearly independent, we prove that the methods always produce a subsequence converging to a local Pareto point of the problem, and we analyze their worst-case iteration complexity. Numerical results are reported to show the effectiveness of the algorithms.





Acknowledgements

This work has been partially supported by the National Natural Science Foundation of China (Grant No. 11371253), the Hainan Natural Science Foundation (Grant No. 120MS029), and the Science Foundation of the Hunan Provincial Education Department (Grant No. 18A351).

Author information

Corresponding author

Correspondence to Peng Wang.



About this article


Cite this article

Wang, P., Zhu, D. Combined gradient methods for multiobjective optimization. J. Appl. Math. Comput. 68, 2717–2741 (2022). https://doi.org/10.1007/s12190-021-01636-4

