
Convergence of Inexact Quasisubgradient Methods with Extrapolation

Journal of Optimization Theory and Applications

Abstract

In this paper, we investigate an inexact quasisubgradient method with extrapolation for solving a quasiconvex optimization problem over a closed, convex and bounded constraint set. We establish convergence in objective values, iteration complexity, and rate of convergence for the proposed method under a Hölder condition and a weak sharp minima condition. When both the diminishing stepsize and the extrapolation stepsize decay as power functions, we obtain explicit iteration complexities. When the diminishing stepsize decays as a power function and the extrapolation stepsize is decreasing but no smaller than a power function, the diminishing stepsize yields a rate of convergence \({\mathcal {O}}\left( \tau ^{k^{s}}\right)\) (with \(s \in (0,1)\)) to an optimal solution or to a ball around the optimal solution set, which is faster than \({\mathcal {O}}\left( {1}/{k^\beta }\right)\) for every \(\beta >0\). With a geometrically decreasing extrapolation stepsize, we obtain a linear rate of convergence to a ball around the optimal solution set for both the constant stepsize and the dynamic stepsize. Our numerical tests show that the method with extrapolation is substantially more efficient than the method without extrapolation, in terms of the number of iterations needed to reach an approximate optimal solution.
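The scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm; it is a hypothetical Python implementation of the generic pattern: extrapolate from the two most recent iterates, take a step along a normalized (quasi)subgradient, and project back onto a closed, convex, bounded set. The stepsize sequences `steps` and `extrap`, the ball constraint, and the toy objective are all illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Projection onto a closed Euclidean ball, a simple instance of the
    # closed, convex and bounded constraint set assumed in the abstract.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def quasisubgradient_extrapolation(f, quasisubgrad, x0, steps, extrap,
                                   n_iter=200, radius=1.0):
    # Sketch of a quasisubgradient method with extrapolation:
    #   y_k     = x_k + beta_k (x_k - x_{k-1})          (extrapolation)
    #   x_{k+1} = P_C(y_k - v_k g_k),  g_k a unit quasisubgradient at y_k
    # `steps(k)` supplies v_k and `extrap(k)` supplies beta_k, e.g. the
    # power-function sequences discussed in the abstract.
    x_prev = x0.copy()
    x = x0.copy()
    best = f(x)
    for k in range(n_iter):
        y = x + extrap(k) * (x - x_prev)
        g = quasisubgrad(y)
        g = g / max(np.linalg.norm(g), 1e-12)  # normalize to unit length
        x_prev, x = x, project_ball(y - steps(k) * g, radius)
        best = min(best, f(x))
    return x, best

# Toy quasiconvex objective: f(x) = ||x - a|| (convex, hence quasiconvex);
# away from the minimizer its gradient serves as a quasisubgradient.
a = np.array([0.3, -0.2])
f = lambda x: np.linalg.norm(x - a)
g = lambda x: (x - a) if np.linalg.norm(x - a) > 1e-12 else np.zeros_like(x)

x_star, best = quasisubgradient_extrapolation(
    f, g, x0=np.array([1.0, 1.0]),
    steps=lambda k: 1.0 / (k + 1) ** 0.8,   # diminishing power stepsize
    extrap=lambda k: 0.3 / (k + 1),         # decaying extrapolation stepsize
)
print(best)  # small: the best objective value approaches 0
```

The normalization of the quasisubgradient mirrors the standard practice for quasiconvex minimization, where only the direction of the quasisubgradient is informative; the concrete stepsize exponents above are placeholders, not the values analyzed in the paper.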



Acknowledgements

We would like to thank the reviewer for many constructive comments and suggestions, and Professor I.V. Konnov for his suggestions on an early version of the paper, in particular for suggesting the construction of a nonsmooth quasiconvex optimization problem (see Sect. 6.2). These comments and suggestions have improved the presentation of the paper. The first author was supported in part by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC Ref No. 15234216).

Author information


Correspondence to Xiaoqi Yang.

Additional information


This paper is dedicated to the 85th birthday of Professor Franco Giannessi.

Communicated by: Liqun Qi.


About this article


Cite this article

Yang, X., Zu, C. Convergence of Inexact Quasisubgradient Methods with Extrapolation. J Optim Theory Appl 193, 676–703 (2022). https://doi.org/10.1007/s10957-022-02014-1

