Descent algorithm for nonsmooth stochastic multiobjective optimization

Abstract

An algorithm for solving the expectation formulation of stochastic nonsmooth multiobjective optimization problems is proposed. The method extends the classical stochastic gradient algorithm to multiobjective optimization, using the properties of a common descent vector defined in the deterministic context. Mean-square and almost-sure convergence of the algorithm are proven, and its efficiency is illustrated and assessed on an academic example.
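
As an illustration of the kind of update the abstract refers to, the sketch below shows one possible stochastic multi-gradient step: at each iteration, noisy (sub)gradient estimates of the objectives are combined into a common descent vector, taken here as the minimum-norm element of their convex hull, and a Robbins-Monro step with decreasing step size is applied. This is a minimal sketch, not the authors' implementation: the two-objective setting, the toy objectives, the noise model, the 1/k step size and the helper names (common_descent_direction, smgd, toy_stoch_grads) are illustrative assumptions.

    import numpy as np

    def common_descent_direction(g1, g2):
        # Minimum-norm element of the convex hull of {g1, g2}: minimize
        # ||a*g1 + (1-a)*g2||^2 over a in [0, 1]. Its opposite is a descent
        # direction common to both objectives whenever it is nonzero.
        diff = g1 - g2
        denom = diff @ diff
        if denom < 1e-12:                      # (sub)gradients almost coincide
            a = 0.5
        else:
            a = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
        return a * g1 + (1.0 - a) * g2

    def smgd(x0, stoch_grads, n_iter=5000, seed=0):
        # Stochastic multi-gradient descent with a decreasing (1/k) step size.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, n_iter + 1):
            g1, g2 = stoch_grads(x, rng)       # noisy (sub)gradient estimates
            d = common_descent_direction(g1, g2)
            x = x - d / k
        return x

    def toy_stoch_grads(x, rng):
        # Gradients of f1(x) = ||x - 1||^2 and f2(x) = ||x + 1||^2, plus noise.
        noise = rng.normal(scale=0.1, size=(2,) + x.shape)
        return 2.0 * (x - 1.0) + noise[0], 2.0 * (x + 1.0) + noise[1]

    if __name__ == "__main__":
        print(smgd(np.zeros(2), toy_stoch_grads))   # near a Pareto-stationary point

On this toy problem the noiseless objectives are ||x - (1, 1)||^2 and ||x + (1, 1)||^2, so the iterates are expected to settle near the segment joining (1, 1) and (-1, -1), which is the Pareto set.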

Author information

Corresponding author

Correspondence to Fabrice Poirion.

About this article

Cite this article

Poirion, F., Mercier, Q. & Désidéri, J. Descent algorithm for nonsmooth stochastic multiobjective optimization. Comput Optim Appl 68, 317–331 (2017). https://doi.org/10.1007/s10589-017-9921-x

Keywords

  • Multiobjective optimization
  • Stochastic
  • Nonsmooth
  • Almost sure convergence