
Computational Optimization and Applications, Volume 50, Issue 3, pp. 597–617

Gradient algorithms for quadratic optimization with fast convergence rates

  • Luc Pronzato
  • Anatoly Zhigljavsky

Abstract

We propose a family of gradient algorithms for minimizing a quadratic function f(x) = (Ax, x)/2 − (x, y) in ℝ^d or in a Hilbert space, with simple rules for choosing the step-size at each iteration. We show that when the step-sizes are generated by a dynamical system whose ergodic distribution has the arcsine density on a subinterval of the spectrum of A, the asymptotic rate of convergence of the algorithm can approach the (tight) bound on the rate of convergence of a conjugate gradient algorithm stopped before d iterations, where d ≤ ∞ is the space dimension.
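As a concrete illustration of the step-size rule described above, the following minimal Python sketch (not the authors' code) minimizes f(x) = (Ax, x)/2 − (x, y). It assumes one natural reading of the abstract: the inverse step-sizes 1/γ_k follow the arcsine density on a subinterval [a, b] of the spectrum of A, and they are generated by the logistic map z ↦ 4z(1 − z), whose ergodic density on [0, 1] is the arcsine law (cf. the "Logistic map" keyword below). The function name arcsine_gradient and all parameter choices are hypothetical.

    import numpy as np

    def arcsine_gradient(A, y, x0, a, b, n_iter=500, z0=0.3):
        """Gradient iteration x <- x - g/lam for f(x) = (Ax, x)/2 - (x, y),
        where lam is driven by the logistic map z -> 4 z (1 - z), whose
        ergodic (invariant) density on [0, 1] is the arcsine density.
        [a, b] is assumed to be a subinterval of the spectrum of A."""
        x, z = np.asarray(x0, dtype=float).copy(), z0
        for _ in range(n_iter):
            g = A @ x - y                # gradient of the quadratic
            z = 4.0 * z * (1.0 - z)      # logistic-map iterate in [0, 1]
            lam = a + (b - a) * z        # arcsine-distributed point in [a, b]
            x -= g / lam                 # step-size 1/lam
        return x

    # Illustration on a synthetic problem with known spectrum [1, 100]
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
    A = Q @ np.diag(np.linspace(1.0, 100.0, 50)) @ Q.T
    y = rng.standard_normal(50)
    x = arcsine_gradient(A, y, np.zeros(50), a=1.0, b=100.0)
    print(np.linalg.norm(A @ x - y))     # residual after 500 iterations

Here [a, b] is taken equal to the whole spectrum for simplicity; the result stated in the abstract concerns the choice of a proper subinterval, which governs the asymptotic convergence rate.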

Keywords

Chebyshev polynomials · Conjugate gradient · Krylov space · Logistic map · Quadratic operator · Steepest descent



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Laboratoire I3S, CNRS–UNS, Sophia Antipolis Cedex, France
  2. School of Mathematics, Cardiff University, Cardiff, UK
