Abstract
The problem of minimizing a function F over a set Ω is approximated by a sequence of problems where F and Ω are replaced by F(n) and Ω(n), respectively. We show in which manner the convergence rates of the conditional gradient and projected gradient methods are influenced by the approximation. In particular, it becomes evident how the convergence theory for infinite dimensional problems such as control problems explains the behavior of finite dimensional implementations.
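To fix ideas, the conditional gradient (Frank–Wolfe) method referred to above iterates x_{k+1} = x_k + t_k(s_k − x_k), where s_k minimizes the linearized objective over Ω. The following is a minimal illustrative sketch, not the paper's construction: the set Ω, the quadratic objective, and the open-loop step size t_k = 2/(k+2) are assumptions chosen for a concrete finite dimensional example (Ω taken as the unit simplex, where the linear minimization oracle returns a vertex).

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, iters=200):
    """Conditional gradient (Frank-Wolfe) sketch.

    x_{k+1} = x_k + t_k * (s_k - x_k), with the classical open-loop
    step t_k = 2/(k+2); `lmo` is the linear minimization oracle over Omega.
    """
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))              # argmin_{s in Omega} <grad F(x), s>
        t = 2.0 / (k + 2.0)
        x = x + t * (s - x)           # convex combination stays in Omega
    return x

# Illustrative problem: minimize F(x) = ||x - c||^2 over the unit simplex.
c = np.array([0.2, 0.5, 0.3])         # optimum lies in Omega, so F* = 0
grad = lambda x: 2.0 * (x - c)
# Over the simplex the LMO picks the vertex of the most negative gradient entry.
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x = conditional_gradient(grad, lmo, np.array([1.0, 0.0, 0.0]))
```

Replacing Ω and F by discretized surrogates Ω(n), F(n) in this scheme amounts to swapping `lmo` and `grad` for approximate oracles, which is the kind of perturbation whose effect on the convergence rate the paper analyzes.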
Additional information
Communicated by A. V. Balakrishnan
Partially supported by NSF Grant #ECS-8005958.
Dunn, J.C., Sachs, E. The effect of perturbations on the convergence rates of optimization algorithms. Appl Math Optim 10, 143–157 (1983). https://doi.org/10.1007/BF01448383