Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
The sum of ratios problem has a variety of important applications in economics and management science, but it is difficult to solve globally. In this paper, we consider the problem of minimizing the sum of a number of nondifferentiable quasi-convex component functions over a closed and convex set. The sum of quasi-convex component functions is not necessarily quasi-convex, so this study goes beyond quasi-convex optimization. Exploiting the structure of the sum-minimization problem, we propose a new incremental quasi-subgradient method for this problem and investigate its convergence to a global optimal value/solution under constant, diminishing, or dynamic stepsize rules, a homogeneity assumption, and the Hölder condition. To reduce the cost of computing subgradients of a large number of component functions, we further propose a randomized incremental quasi-subgradient method, in which only one component function is randomly selected to construct the subgradient direction at each iteration. Convergence of function values and iterates is obtained with probability 1. The proposed incremental quasi-subgradient methods are applied to solve the quasi-convex feasibility problem, the sum of ratios problem, and the multiple Cobb–Douglas production efficiency problem, and the numerical results show that the proposed methods are efficient for solving the large-scale sum of ratios problem.
Keywords: Quasi-convex programming · Sum-minimization problem · Sum of ratios problem · Subgradient method · Incremental approach
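The incremental scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm: it assumes differentiable quasi-convex components (so the normalized gradient serves as a quasi-subgradient), a diminishing stepsize rule, and a simple projection onto a box; all function and variable names are illustrative.

```python
import numpy as np

def incremental_quasi_subgradient(grads, x0, project, n_passes=200, c=1.0):
    """Incremental quasi-subgradient sketch.

    Each pass cycles through all component functions, stepping along the
    normalized gradient of one component at a time (a quasi-subgradient
    direction for a differentiable quasi-convex component), then projecting
    back onto the constraint set. Uses the diminishing stepsize c / k.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_passes + 1):
        alpha = c / k                     # diminishing stepsize rule
        for grad in grads:                # one incremental pass
            d = grad(x)
            nrm = np.linalg.norm(d)
            if nrm > 0:
                x = project(x - alpha * d / nrm)
    return x

# Toy sum-of-ratios instance: minimize (x+1)/(x+2) + (x+3)/(x+4) on [0, 10].
# Each ratio of affine functions is quasi-convex (here, in fact, increasing),
# so the minimizer of the sum is x = 0.
grads = [lambda x: np.array([1.0 / (x[0] + 2.0) ** 2]),
         lambda x: np.array([1.0 / (x[0] + 4.0) ** 2])]
project = lambda x: np.clip(x, 0.0, 10.0)  # projection onto the box [0, 10]
x_star = incremental_quasi_subgradient(grads, np.array([5.0]), project)
```

The randomized variant from the abstract would replace the inner loop over all components with a single randomly drawn component per iteration, which saves subgradient evaluations when the number of components is large.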