Optimization Letters, Volume 7, Issue 6, pp 1047–1059

An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost

Original Paper

Abstract

We consider gradient algorithms for minimizing a quadratic function in \({\mathbb{R}^n}\) with large n. We suggest a particular sequence of step-sizes and demonstrate that the resulting gradient algorithm has a convergence rate comparable with that of the Conjugate Gradient method and other methods based on the use of Krylov spaces. When the matrix is large and sparse, the proposed algorithm can be more efficient than the Conjugate Gradient algorithm in terms of computational cost, since k iterations of the proposed algorithm require only O(log k) inner products.
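
The authors' actual step-size sequence is not reproduced in this preview, but the mechanism it exploits can be sketched: a gradient iteration \(x_{k+1} = x_k - \gamma_k (A x_k - b)\) whose step-sizes are fixed in advance costs one matrix-vector product per iteration and no inner products. The Python sketch below illustrates this with a stand-in step-size construction, the reciprocals of Chebyshev points on the spectrum interval [m, M], whose empirical distribution is the arcsine law mentioned in the keywords. The construction, the function names, and the assumption that the spectrum bounds m and M are known are ours, not the paper's.

    import numpy as np

    def gradient_quadratic(A, b, x0, steps):
        # Gradient iteration x_{k+1} = x_k - gamma_k * (A x_k - b) for the
        # quadratic f(x) = 0.5 x^T A x - b^T x.  The step-sizes gamma_k are
        # fixed in advance, so each iteration costs one matrix-vector
        # product and no inner products.
        x = np.asarray(x0, dtype=float).copy()
        for gamma in steps:
            g = A @ x - b        # gradient of f at x
            x = x - gamma * g
        return x

    def arcsine_like_steps(m, M, k):
        # Hypothetical step-size construction (not the authors' sequence):
        # reciprocals of the k Chebyshev points on [m, M], where m and M
        # bound the spectrum of A.  The empirical distribution of these
        # points converges to the arcsine law on [m, M] as k grows.
        j = np.arange(1, k + 1)
        t = 0.5 * (M + m) + 0.5 * (M - m) * np.cos((2 * j - 1) * np.pi / (2 * k))
        return 1.0 / t

    # Toy usage: a random symmetric positive-definite matrix with spectrum
    # contained in [1, 100], so m = 1 and M = 100 are known exactly here.
    rng = np.random.default_rng(0)
    n = 200
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T
    b = rng.standard_normal(n)
    x = gradient_quadratic(A, b, np.zeros(n), arcsine_like_steps(1.0, 100.0, 50))
    print(np.linalg.norm(A @ x - b))  # residual norm after 50 fixed-step iterations

In the paper the bounds on the spectrum are presumably not given but estimated, which is where the O(log k) inner products enter (the keyword "estimation of leading eigenvalues" points in this direction); the sketch sidesteps this by assuming m and M are known.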

Keywords

Quadratic optimization · Gradient algorithms · Conjugate gradient · Arcsine distribution · Fibonacci numbers · Estimation of leading eigenvalues



Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  • Anatoly Zhigljavsky (1)
  • Luc Pronzato (2)
  • Elena Bukina (2)

  1. School of Mathematics, Cardiff University, Cardiff, UK
  2. Laboratoire I3S, CNRS-UNS, Sophia Antipolis, France
