A numerical study of optimized sparse preconditioners

BIT Numerical Mathematics

Abstract

Preconditioning strategies based on incomplete factorizations and polynomial approximations are studied through extensive numerical experiments. We are concerned with the question of the optimal rate of convergence that can be achieved for these classes of preconditioners.

Our conclusion is that the well-known Modified Incomplete Cholesky factorization (MIC), cf. e.g. Gustafsson [20], and the polynomial preconditioning based on Chebyshev polynomials, cf. Johnson, Micchelli and Paul [22], have optimal order of convergence when applied to matrix systems derived by discretization of the Poisson equation. Thus, for the discrete two-dimensional Poisson equation with n unknowns, O(n^{1/4}) and O(n^{1/2}) appear to be the optimal rates of convergence for the Conjugate Gradient (CG) method using incomplete factorizations and polynomial preconditioners, respectively. The results obtained for polynomial preconditioners agree with the basic theory of CG, which implies that such preconditioners cannot improve the asymptotic convergence rate.
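
To make the observed scaling for the polynomial case concrete, the following sketch (our illustration, not code from the paper; the helper names poisson2d, chebyshev_apply and pcg are hypothetical) runs preconditioned CG on the 5-point discrete Poisson matrix with a fixed-degree Chebyshev polynomial preconditioner, assuming the exact extreme eigenvalues of the 5-point Laplacian are available. Under these assumptions the printed iteration counts should roughly double when n is quadrupled, consistent with an O(n^{1/2}) rate.

```python
# A minimal sketch (not the authors' code) of Chebyshev polynomial
# preconditioning for CG on the 2D discrete Poisson equation, meant only to
# illustrate how the iteration count grows with the number of unknowns n.
# All helper names are ours; exact eigenvalue bounds of the 5-point
# Laplacian are assumed to be available.
import numpy as np
import scipy.sparse as sp

def poisson2d(m):
    """5-point finite-difference Laplacian on an m-by-m grid; n = m^2 unknowns."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def chebyshev_apply(A, r, lmin, lmax, degree):
    """z ~= A^{-1} r via `degree` steps of Chebyshev iteration with zero start,
    i.e. z = p(A) r for a fixed polynomial p (in the spirit of [22])."""
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    z = np.zeros_like(r)
    res = r.copy()
    d = res / theta
    for _ in range(degree):
        z = z + d
        res = res - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * res
        rho = rho_new
    return z

def pcg(A, b, apply_prec, tol=1e-6, maxiter=20000):
    """Textbook preconditioned CG; returns the number of iterations used."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * nb:
            return k
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return maxiter

if __name__ == "__main__":
    for m in (32, 64, 128):
        A, n = poisson2d(m), m * m
        b = np.ones(n)
        # Exact extreme eigenvalues of the 5-point Laplacian on the m-by-m grid.
        lmin = 4.0 * (1.0 - np.cos(np.pi / (m + 1)))
        lmax = 4.0 * (1.0 + np.cos(np.pi / (m + 1)))
        its = pcg(A, b, lambda r: chebyshev_apply(A, r, lmin, lmax, degree=4))
        print(f"n = {n:6d}: {its:4d} CG iterations (degree-4 Chebyshev prec.)")
```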

By optimizing the preconditioners with respect to certain criteria, we observe a reduction in the number of CG iterations, but the rates of convergence remain unchanged.
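
The next sketch (again ours and hypothetical, reusing the poisson2d and pcg helpers from the block above) illustrates the incomplete-factorization case: an MIC(0)-type preconditioner M = (D + L)D^{-1}(D + L)^T for the 5-point stencil in natural ordering, in the spirit of Gustafsson [20], where the fill dropped during factorization is lumped onto the diagonal pivots. The perturbation parameter zeta is one example of a quantity that can be tuned; under these assumptions, tuning it changes the constants, but the iteration counts should still grow roughly like O(n^{1/4}).

```python
# A companion sketch (ours, not the paper's code): an MIC(0)-type
# preconditioner M = (D + L) D^{-1} (D + L)^T for the 5-point Laplacian with
# natural ordering, where L is the strict lower triangle of A and the fill
# dropped during the factorization is lumped onto the diagonal pivots
# (in the spirit of Gustafsson [20]). `zeta` is a hypothetical diagonal
# perturbation parameter. Reuses poisson2d and pcg from the previous sketch.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mic0_preconditioner(A, m, zeta=0.0):
    """Return a function r -> M^{-1} r with M = (D + L) D^{-1} (D + L)^T."""
    n = A.shape[0]
    diag = A.diagonal()
    a_w, a_e = A.diagonal(-1), A.diagonal(1)   # couplings a[i, i-1], a[i, i+1]
    a_s, a_n = A.diagonal(-m), A.diagonal(m)   # couplings a[i, i-m], a[i, i+m]
    d = np.zeros(n)
    for i in range(n):
        d[i] = diag[i] * (1.0 + zeta)          # perturbed diagonal start
        if i >= 1:                             # eliminate west coupling
            fill_n = a_n[i - 1] if i - 1 < n - m else 0.0
            d[i] -= a_w[i - 1] * (a_e[i - 1] + fill_n) / d[i - 1]
        if i >= m:                             # eliminate south coupling
            fill_e = a_e[i - m] if i - m < n - 1 else 0.0
            d[i] -= a_s[i - m] * (a_n[i - m] + fill_e) / d[i - m]
    L = sp.tril(A, -1)
    lower = (sp.diags(d) + L).tocsr()          # D + L
    upper = (sp.diags(d) + L.T).tocsr()        # D + L^T

    def apply_prec(r):
        y = spla.spsolve_triangular(lower, r, lower=True)
        return spla.spsolve_triangular(upper, d * y, lower=False)
    return apply_prec

if __name__ == "__main__":
    for m in (32, 64, 128):
        A, n = poisson2d(m), m * m
        b = np.ones(n)
        its = pcg(A, b, mic0_preconditioner(A, m, zeta=1.0 / m**2))
        print(f"n = {n:6d}: {its:4d} CG iterations (MIC(0), zeta = 1/m^2)")
```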


References

1. E. Arge, M. Dæhlen, and A. Tveito, Box spline interpolation; a computational study, J. Comput. Appl. Math., 44 (1992), pp. 303–329.

2. S. F. Ashby, Polynomial preconditioning for conjugate gradient methods, Ph.D. thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, Illinois, 1987. (Report No. UIUCDCS-R-87-1355.)

3. S. F. Ashby, Minimax polynomial preconditioning for Hermitian linear systems, SIAM J. Matrix Anal., 12 (1991), pp. 766–789.

4. S. F. Ashby, M. J. Holst, T. A. Manteuffel, and P. E. Saylor, The role of the inner product in stopping criteria for conjugate gradient iterations, Report UCRL-JC-112586, Comp. & Math. Research Division, Lawrence Livermore National Lab., 1992.

5. S. F. Ashby, T. A. Manteuffel, and J. S. Otto, A comparison of adaptive Chebyshev and least squares polynomial preconditioning for Hermitian positive definite linear systems, SIAM J. Sci. Stat. Comput., 13 (1992), pp. 1–29.

6. S. F. Ashby, T. A. Manteuffel, and P. E. Saylor, Adaptive polynomial preconditioning for Hermitian linear systems, BIT, 29 (1989), pp. 583–609.

7. S. F. Ashby, T. A. Manteuffel, and P. E. Saylor, A taxonomy for conjugate gradient methods, SIAM J. Numer. Anal., 27 (1990), pp. 1542–1568.

8. O. Axelsson and G. Lindskog, On the eigenvalue distribution of a class of preconditioning methods, Numer. Math., 48 (1986), pp. 479–498.

9. O. Axelsson and G. Lindskog, On the rate of convergence of the preconditioned conjugate gradient method, Numer. Math., 48 (1986), pp. 499–523.

10. P. N. Brown and A. C. Hindmarsh, Matrix-free methods for stiff systems of ODE's, SIAM J. Numer. Anal., 23 (1986), pp. 610–638.

11. T. F. Chan, Fourier analysis of relaxed incomplete factorization preconditioners, SIAM J. Sci. Stat. Comput., 12 (1991), pp. 668–680.

12. T. F. Chan and H. C. Elman, Fourier analysis of iterative methods for elliptic problems, SIAM Review, 31 (1989), pp. 20–49.

13. P. Concus, G. H. Golub, and D. O'Leary, A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations, in Sparse Matrix Computations, J. R. Bunch and D. J. Rose, eds., Academic Press, 1976, pp. 309–332.

14. S. D. Conte and C. de Boor, Elementary Numerical Analysis, McGraw-Hill, 1981.

15. J. E. Dennis Jr. and H. Wolkowicz, Sizing and least-change secant methods, SIAM J. Numer. Anal., 30 (1993), pp. 1291–1314.

16. J. M. Donato and T. C. Chan, Fourier analysis of incomplete factorization preconditioners for three-dimensional anisotropic problems, SIAM J. Sci. Stat. Comput., 13 (1992), pp. 319–338.

17. P. F. Dubois, A. Greenbaum, and G. H. Rodrigue, Approximating the inverse of a matrix for use in iterative algorithms on vector processors, Computing, 22 (1979), pp. 257–268.

18. A. Greenbaum, Comparison of splittings used with the conjugate gradient algorithm, Numer. Math., 33 (1979), pp. 181–194.

19. A. Greenbaum and G. H. Rodrigue, Optimal preconditioners of a given sparsity pattern, BIT, 29 (1989), pp. 610–634.

20. I. Gustafsson, A class of first order factorization methods, BIT, 18 (1978), pp. 142–156.

21. A. Jennings, Influence of the eigenvalue spectrum on the convergence rate of the conjugate gradient method, J. Inst. Maths. Applics., 20 (1977), pp. 61–72.

22. O. G. Johnson, C. A. Micchelli, and G. Paul, Polynomial preconditioners for conjugate gradient calculations, SIAM J. Numer. Anal., 20 (1983), pp. 362–376.

23. I. E. Kaporin, New convergence results and preconditioning strategies for the conjugate gradient method, Preprint, Dept. of Comp. Math. and Cyb., Moscow State University, 1992.

24. The Mathworks, Pro-Matlab User's Guide, The Mathworks, 1990.

25. J. A. Meijerink and H. A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comp., 31 (1977), pp. 148–162.

26. D. P. O'Leary, Yet another polynomial preconditioner for the conjugate gradient algorithm, Linear Algebra Appl., 154/56 (1991), pp. 377–388.

27. G. Pini and G. Gambolati, Is a simple diagonal scaling the best preconditioner for conjugate gradients on supercomputers?, Adv. Water Resources, 13 (1990), pp. 147–153.

28. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C. The Art of Scientific Computing, Cambridge University Press, 1988.

29. Z. Strakoš, On the real convergence rate of the conjugate gradient method, Linear Algebra Appl., 154/56 (1991), pp. 535–549.

30. A. van der Sluis and H. A. van der Vorst, The rate of convergence of conjugate gradients, Numer. Math., 48 (1986), pp. 543–560.

31. R. Winther, Some superlinear convergence results for the conjugate gradient method, SIAM J. Numer. Anal., 17 (1980), pp. 14–17.


Additional information

Supported by The Norwegian Research Council for Science and the Humanities (NAVF) under grants no. 413.90/002 and 412.93/005.

Supported by The Royal Norwegian Council for Scientific and Industrial Research (NTNF) through program no. STP.28402: Toolkits in industrial mathematics.

Cite this article

Bruaset, A.M., Tveito, A. A numerical study of optimized sparse preconditioners. BIT 34, 177–204 (1994). https://doi.org/10.1007/BF01955867
