
An accelerated proximal gradient method for multiobjective optimization

Published in: Computational Optimization and Applications

Abstract

This paper presents an accelerated proximal gradient method for multiobjective optimization, in which each objective function is the sum of a continuously differentiable, convex function and a closed, proper, convex function. Extending first-order methods to multiobjective problems without scalarization has been widely studied, but providing accelerated methods with rigorous proofs of their convergence rates remains an open problem. Our proposed method is a multiobjective generalization of the accelerated proximal gradient method for scalar optimization, also known as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The key to this successful extension is solving a subproblem with terms exclusive to the multiobjective case. This approach allows us to establish a global convergence rate of \(O(1/k^2)\) for the proposed method, using a merit function to measure the complexity. Furthermore, we present an efficient way to solve the subproblem via its dual representation, and we confirm the validity of the proposed method through numerical experiments.
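To make the scalar method being generalized concrete, the following is a minimal sketch of classical FISTA for a single composite objective \(F = f + g\). It illustrates the well-known scalar algorithm, not the authors' multiobjective method; the inputs `grad_f`, `prox_g`, and the Lipschitz constant `L`, as well as the example problem data, are assumptions of this sketch.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=500):
    """Classical FISTA sketch for min_x f(x) + g(x), where f is convex with an
    L-Lipschitz gradient and g is closed, proper, convex with a cheap prox."""
    x_prev = x0.copy()
    y = x0.copy()  # extrapolated point y^1 = x^0
    t = 1.0        # momentum parameter t_1 = 1
    for _ in range(n_iter):
        # Proximal gradient step taken at the extrapolated point y.
        x = prox_g(y - grad_f(y) / L, 1.0 / L)
        # Momentum update: t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        # Extrapolation: y^{k+1} = x^k + ((t_k - 1) / t_{k+1}) (x^k - x^{k-1}).
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Hypothetical usage on an l1-regularized least-squares problem
# min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 with synthetic data.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
L = np.linalg.norm(A, 2) ** 2  # squared spectral norm = Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - lam * tau, 0.0)
x_opt = fista(grad_f, prox_g, np.zeros(100), L)
```

In the multiobjective method of the paper, the single proximal step above is replaced by a subproblem that couples all objectives, containing the "terms exclusive to the multiobjective case" mentioned in the abstract, and is solved efficiently via its dual representation, while a momentum recursion of the same type drives the \(O(1/k^2)\) rate.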



Data availability

No datasets were generated or analysed in this study, as our work is purely theoretical and mathematical.

Code availability

The source code used for the numerical experiments is available at https://github.com/zalgo3/zfista.


Funding

This work was supported by Grants-in-Aid for Scientific Research (C) (21K11769 and 19K11840) and a Grant-in-Aid for JSPS Fellows (20J21961) from the Japan Society for the Promotion of Science.

Author information

Correspondence to Hiroki Tanabe.


About this article


Cite this article

Tanabe, H., Fukuda, E.H. & Yamashita, N. An accelerated proximal gradient method for multiobjective optimization. Comput Optim Appl 86, 421–455 (2023). https://doi.org/10.1007/s10589-023-00497-w
