Abstract
Quasi-convex optimization plays a pivotal role in many fields, including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the quantitative convergence theory, including the iteration complexity and convergence rates, of various subgradient methods for solving quasi-convex optimization problems in a unified framework. In particular, we consider a sequence satisfying a general (inexact) basic inequality, and establish a global convergence theorem and the iteration complexity under the constant, diminishing, and dynamic stepsize rules. More importantly, we establish linear (or sublinear) convergence rates of the sequence under the additional assumptions of weak sharp minima of Hölderian order and upper-bounded noise. These convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard, inexact, and conditional subgradient methods, for solving quasi-convex optimization problems under the Hölder condition and/or the weak sharp minima of Hölderian order.
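The iteration analyzed in the paper is the projected subgradient step x_{k+1} = P_X(x_k − v_k g_k), where g_k is a unit quasi-subgradient of f at x_k and the stepsize v_k follows a constant, diminishing, or dynamic rule. The following is a minimal Python sketch of this scheme, not the authors' exact algorithm: it assumes f is differentiable and quasi-convex (so the normalized gradient serves as a quasi-subgradient) and that the constraint set X is a box (so projection is a coordinate-wise clip); the test function f(x) = ln(1 + ‖x − a‖²), the box bounds, and all helper names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions noted above; not the authors' exact scheme)
# of the projected subgradient method  x_{k+1} = P_X(x_k - v_k * g_k / ||g_k||).
import numpy as np

def projected_subgradient(grad, project, x0, stepsize, num_iters=1000):
    """grad: (quasi-)gradient oracle; project: projection onto X;
    stepsize: callable k -> v_k (constant, diminishing, or dynamic rule)."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = grad(x)
        norm_g = np.linalg.norm(g)
        if norm_g == 0.0:                               # stationary point: stop
            break
        x = project(x - stepsize(k) * g / norm_g)       # normalized subgradient step
    return x

# Illustrative quasi-convex test problem: f(x) = ln(1 + ||x - a||^2) is an
# increasing transform of a convex function, hence quasi-convex; X = [-5, 5]^2.
a = np.array([2.0, -1.0])
grad_f = lambda x: 2.0 * (x - a) / (1.0 + np.dot(x - a, x - a))
proj_box = lambda x: np.clip(x, -5.0, 5.0)

x_final = projected_subgradient(grad_f, proj_box, x0=[0.0, 0.0],
                                stepsize=lambda k: 1.0 / (k + 1))  # diminishing rule
print(x_final)  # approaches the minimizer a = (2, -1)
```

With the diminishing rule v_k = 1/(k + 1), the iterates approach the minimizer a. The paper's results quantify how fast such schemes converge when f additionally satisfies the Hölder condition or the weak sharp minima of Hölderian order, that is, f(x) − f* ≥ η·dist(x, X*)^p for all x in X.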
Acknowledgements
Y. Hu was supported in part by the National Natural Science Foundation of China (11871347), Natural Science Foundation of Guangdong (2019A1515011917), Natural Science Foundation of Shenzhen (JCYJ20190808173603590, JCYJ20170817100950436, JCYJ20170818091621856) and Interdisciplinary Innovation Team of Shenzhen University. C. K. W. Yu was supported in part by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (UGC/FDS14/P02/17).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Hu, Y., Li, J. & Yu, C.K.W. Convergence rates of subgradient methods for quasi-convex optimization problems. Comput Optim Appl 77, 183–212 (2020). https://doi.org/10.1007/s10589-020-00194-y