Acta Applicandae Mathematicae, Volume 127, Issue 1, pp. 117–136

Estimation of Spectral Bounds in Gradient Algorithms

  • L. Pronzato
  • A. Zhigljavsky
  • E. Bukina


Abstract

We consider the solution of linear systems of equations Ax=b, with A a symmetric positive-definite matrix in ℝ^{n×n}, through Richardson-type iterations or, equivalently, the minimization of the convex quadratic function (1/2)(Ax,x)−(b,x) with a gradient algorithm. Using step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after k<n iterations, k→∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and must therefore be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. The purpose of this paper is to analyse the properties of estimators of m and M based on moments of probability measures ν_k defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behaviour of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the sequence of step-sizes corresponds to i.i.d. random variables; (ii) it is generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral-bound estimators based on the first moment of ν_k. The second option requires a more careful choice of spectral-bound estimators, but is shown to produce much smaller fluctuations in the rate of convergence of the algorithm.
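The scheme described above can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it assumes the step-sizes are taken as γ_k = 1/λ_k with the λ_k arcsine-distributed on [m, M] (sampled either i.i.d. or through the golden-ratio low-discrepancy sequence u_k = frac(kφ)), and it takes the j-th moment of ν_k to be the Rayleigh-type ratio (g_k, A^j g_k)/(g_k, g_k), whose first moment always lies in [m, M]; the names `arcsine_points`, `nu_moment`, and `richardson` are illustrative.

```python
import numpy as np

def arcsine_points(u, m, M):
    """Map uniforms in [0,1) through the inverse CDF of the arcsine
    distribution, rescaled from [0,1] to [m, M]."""
    return m + (M - m) * np.sin(np.pi * u / 2.0) ** 2

def nu_moment(A, g, j):
    """Assumed j-th moment of the measure nu_k attached to a gradient g:
    the Rayleigh-type ratio (g, A^j g)/(g, g).  For j=1 this lies in
    [m, M] and gives a crude inner estimate of the spectral bounds."""
    v = g.copy()
    for _ in range(j):
        v = A @ v
    return float(g @ v / (g @ g))

def richardson(A, b, x0, m, M, n_iter=300, mode="golden", seed=0):
    """Gradient iteration x_{k+1} = x_k - g_k / lambda_k, with lambda_k
    arcsine-distributed on [m, M]: either i.i.d. samples or driven by
    the golden-ratio sequence u_k = frac(k * phi)."""
    if mode == "iid":
        u = np.random.default_rng(seed).random(n_iter)
    else:
        phi = (np.sqrt(5.0) - 1.0) / 2.0
        u = np.mod(np.arange(1, n_iter + 1) * phi, 1.0)
    lam = arcsine_points(u, m, M)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(n_iter):
        g = A @ x - b            # gradient of (1/2)(Ax, x) - (b, x)
        x = x - g / lam[k]       # step-size 1/lambda_k
    return x

# Usage on a small SPD system with known spectrum {1, 2, 5, 10}.
eigs = np.array([1.0, 2.0, 5.0, 10.0])
A = np.diag(eigs)
b = np.ones(4)
x = richardson(A, b, np.zeros(4), m=1.0, M=10.0)
x_star = b / eigs                      # exact solution of the diagonal system
g0 = A @ np.zeros(4) - b               # gradient at the starting point
mu1 = nu_moment(A, g0, 1)              # first moment, lies in [m, M]
```

With the golden-ratio sequence the run is deterministic, which mirrors the abstract's point that the low-discrepancy option produces much smaller fluctuations in the convergence rate than i.i.d. sampling.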


Keywords

Estimation of leading eigenvalues · Arcsine distribution · Gradient algorithms · Conjugate gradient · Fibonacci numbers

Mathematics Subject Classification

65F10 · 65F15



Acknowledgements

The authors are very grateful to the referees for their useful comments.



Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. Laboratoire I3S, Bât. Euclide, Les Algorithmes, CNRS/Université de Nice-Sophia Antipolis, Sophia Antipolis cedex, France
  2. School of Mathematics, Cardiff University, Cardiff, UK
