Summary
We study a family of gradient algorithms for solving quadratic optimization problems in which the step length γ_k is chosen according to a particular procedure. To carry out the study, we rewrite the algorithms in a normalized form and establish a connection with the theory of optimum experimental design. We report the results of a numerical study showing that some of the proposed algorithms are extremely efficient.
Copyright information
© 2009 Springer Science+Business Media LLC
Cite this chapter
Haycroft, R., Pronzato, L., Wynn, H.P., Zhigljavsky, A. (2009). Studying Convergence of Gradient Algorithms Via Optimal Experimental Design Theory. In: Pronzato, L., Zhigljavsky, A. (eds) Optimal Design and Related Areas in Optimization and Statistics. Springer Optimization and Its Applications, vol 28. Springer, New York, NY. https://doi.org/10.1007/978-0-387-79936-0_2
Print ISBN: 978-0-387-79935-3
Online ISBN: 978-0-387-79936-0
eBook Packages: Mathematics and Statistics (R0)