Studying Convergence of Gradient Algorithms Via Optimal Experimental Design Theory

  • R. Haycroft
  • L. Pronzato
  • H. P. Wynn
  • A. Zhigljavsky
Part of the Springer Optimization and Its Applications book series (SOIA, volume 28)


We study the family of gradient algorithms for solving quadratic optimization problems in which the step length γk is chosen according to a particular procedure. To carry out the study, we rewrite the algorithms in a normalized form and make a connection with the theory of optimum experimental design. We present the results of a numerical study showing that some of the proposed algorithms are extremely efficient.
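The family studied here contains, as its best-known member, classical steepest descent with exact line search: for f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, the gradient is g_k = Ax_k − b and the exact-line-search (Cauchy) step length is γ_k = gᵀg / gᵀAg. The sketch below, a hypothetical illustration rather than the authors' own procedure, shows this one member of the family on a small diagonal quadratic:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimize f(x) = 0.5 x'Ax - b'x for symmetric positive definite A.

    Uses the exact line-search (Cauchy) step length
    gamma_k = (g'g) / (g'Ag); other members of the family of
    gradient algorithms differ only in how gamma_k is chosen.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = A @ x - b                      # gradient of the quadratic
        if np.linalg.norm(g) < tol:        # stop when gradient is small
            break
        gamma = (g @ g) / (g @ (A @ g))    # exact line-search step
        x = x - gamma * g
    return x, k

# Illustrative problem (not from the paper): condition number 10.
A = np.diag([1.0, 2.0, 10.0])
b = np.array([1.0, 1.0, 1.0])
x_star, iters = steepest_descent(A, b, np.ones(3))
```

Changing the rule for γ_k (e.g. relaxed or two-point step-size choices) changes the convergence behaviour dramatically, which is precisely what the normalized form and the experimental-design connection are used to analyze.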





Copyright information

© Springer Science+Business Media LLC 2009

Authors and Affiliations

  • R. Haycroft (1)
  • L. Pronzato (2)
  • H. P. Wynn (3)
  • A. Zhigljavsky (4)
  1. Cardiff University, School of Mathematics, Cardiff, UK
  2. Laboratoire I3S, CNRS – UNSA, Les Algorithmes – Bât. Euclide B, Sophia Antipolis, France
  3. London School of Economics and Political Science, London, UK
  4. Cardiff University, School of Mathematics, Cardiff, UK
