
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions

  • Yaohua Hu
  • Carisa Kwok Wai Yu
  • Xiaoqi Yang
Article

Abstract

The sum of ratios problem has a variety of important applications in economics and management science, but it is difficult to solve globally. In this paper, we consider the problem of minimizing the sum of a number of nondifferentiable quasi-convex component functions over a closed and convex set. The sum of quasi-convex component functions is not necessarily quasi-convex, and so this study goes beyond quasi-convex optimization. Exploiting the structure of the sum-minimization problem, we propose a new incremental quasi-subgradient method for this problem and investigate its convergence to a global optimal value/solution under the constant, diminishing, or dynamic stepsize rules, a homogeneity assumption, and the Hölder condition. To economize on the cost of computing subgradients of a large number of component functions, we further propose a randomized incremental quasi-subgradient method, in which only one component function is randomly selected to construct the subgradient direction at each iteration. Convergence is obtained in terms of function values and iterates with probability 1. The proposed incremental quasi-subgradient methods are applied to the quasi-convex feasibility problem, the sum of ratios problem, and the multiple Cobb–Douglas production efficiency problem; the numerical results show that the proposed methods are efficient for solving the large-scale sum of ratios problem.
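The cyclic scheme described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' algorithm: it cycles through the component functions, takes one normalized subgradient step per component, and projects back onto the feasible set. The helper names (`subgrads`, `project`, `steps`) and the toy instance (a sum of absolute-value components over a box, whose minimizer is the median of the anchor points) are assumptions for illustration; the paper's method uses quasi-subgradients and specific stepsize rules analyzed under the Hölder condition.

```python
import numpy as np

def incremental_quasi_subgradient(subgrads, project, x0, steps, n_iters):
    """Illustrative cyclic incremental scheme (not the paper's exact method):
    one normalized subgradient step per component per cycle, followed by
    projection onto the feasible set. `subgrads[i](x)` is assumed to return
    a (quasi-)subgradient of the i-th component at x."""
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for k in range(n_iters):
        alpha = steps(k)              # diminishing stepsize, e.g. c / sqrt(k+1)
        for i in range(m):            # cycle through the m components
            g = subgrads[i](x)
            norm = np.linalg.norm(g)
            if norm > 0:              # step along the normalized direction
                x = project(x - alpha * g / norm)
    return x

# Toy instance: minimize sum_i |x - a_i| over the box [-10, 10].
# Each component is convex, hence quasi-convex; the minimizer is the median.
anchors = [1.0, 2.0, 9.0]
subgrads = [lambda x, a=a: np.sign(x - a) for a in anchors]
project = lambda x: np.clip(x, -10.0, 10.0)

x_star = incremental_quasi_subgradient(
    subgrads, project, x0=np.array([8.0]),
    steps=lambda k: 1.0 / np.sqrt(k + 1), n_iters=200)
# x_star oscillates near the median of the anchors, i.e. near 2.0
```

The randomized variant replaces the inner `for i in range(m)` loop with a single uniformly sampled component index per iteration, so each iteration costs one subgradient evaluation instead of m.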

Keywords

Quasi-convex programming · Sum-minimization problem · Sum of ratios problem · Subgradient method · Incremental approach


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Shenzhen Key Laboratory of Advanced Machine Learning and Applications, College of Mathematics and Statistics, Shenzhen University, Shenzhen, People’s Republic of China
  2. Department of Mathematics and Statistics, The Hang Seng University of Hong Kong, Shatin, Hong Kong
  3. Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
