Direct Methods for Linear Equations

  • James M. Ortega
Part of the Frontiers of Computer Science book series (FCOS)

Abstract

We now begin the study of the solution of linear systems of equations by direct methods. In Sections 2.1 and 2.2 we assume that the coefficient matrix is full, and we study Gaussian elimination, Choleski factorization, and the orthogonal reduction methods of Givens and Householder. In Section 2.1, we deal only with vector computers and then consider the same basic algorithms for parallel computers in Section 2.2. In Section 2.3 we treat the same algorithms, as well as others, for banded systems.
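The factorizations named above are developed in the chapter in vector- and parallel-oriented forms; purely as a minimal point of reference, the sketch below shows the basic serial, unblocked Choleski factorization A = L L^T of a symmetric positive definite matrix in NumPy. The function name choleski and the small test matrix are illustrative choices, not taken from the text.

    import numpy as np

    def choleski(A):
        """Basic unblocked Choleski factorization: returns lower-triangular L
        with A = L @ L.T, assuming A is symmetric positive definite."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.zeros((n, n))
        for j in range(n):
            # Diagonal entry: subtract the squares of the already-computed row j of L.
            L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
            # Entries below the diagonal in column j: one inner product each.
            for i in range(j + 1, n):
                L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
        return L

    # Quick check on a small symmetric positive definite matrix.
    A = np.array([[4.0, 2.0, 2.0],
                  [2.0, 5.0, 3.0],
                  [2.0, 3.0, 6.0]])
    L = choleski(A)
    print(np.allclose(L @ L.T, A))  # prints True

On vector and parallel machines the inner products above would instead be organized as vector operations or distributed updates, which is the concern of Sections 2.1 and 2.2.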

Keywords

Expense, Sine, Triad, iPSC


References and Extensions 2.3

  1. Stone, H. [1973]. “An Efficient Parallel Algorithm for the Solution of a Tridiagonal Linear System of Equations,” J. ACM 20, 27–38.
  2. Voigt, R. [1977]. “The Influence of Vector Computer Architecture on Numerical Algorithms,” in Kuck et al. [1977], pp. 229–244.
  3. Stone, H. [1975]. “Parallel Tridiagonal Equation Solvers,” ACM Trans. Math. Software 1, 289–307.
  4. Hockney, R. [1965]. “A Fast Direct Solution of Poisson’s Equation Using Fourier Analysis,” J. ACM 12, 95–113.
  5. Hockney, R., and Jesshope, C. [1981]. Parallel Computers: Architecture, Programming and Algorithms, Adam Hilger, Ltd., Bristol.
  6. Johnsson, L. [1987a]. “Communication Efficient Basic Linear Algebra Computations on Hypercube Architectures,” J. Par. Dist. Comp. 4, 133–172.
  7. Johnsson, L. [1987b]. “Solving Tridiagonal Systems on Ensemble Architectures,” SIAM J. Sci. Stat. Comput. 8, 354–392.
  8. Lakshmivarahan, S., and Dhall, S. [1987]. “A Lower Bound on the Communication Complexity in Solving Linear Tridiagonal Systems on Cube Architectures,” in Heath [1987], pp. 560–568.
  9. Traub, J. [1974]. “Iterative Solution of Tridiagonal Systems on Parallel or Vector Computers,” in Traub [1974], pp. 49–82.
  10. Heller, D. [1976]. “Some Aspects of the Cyclic Reduction Algorithm for Block Tridiagonal Linear Systems,” SIAM J. Numer. Anal. 13, 484–496.
  11. Swarztrauber, P. [1979]. “A Parallel Algorithm for Solving General Tridiagonal Equations,” Math. Comput. 33, 185–199.
  12. Sameh, A., and Kuck, D. [1978]. “On Stable Parallel Linear System Solvers,” J. ACM 25, 81–91.
  13. Wang, H. [1981]. “A Parallel Method for Tridiagonal Equations,” ACM Trans. Math. Software 7, 170–183.
  14. Gao, G. [1986]. “A Pipelined Solution Method of Tridiagonal Linear Equation Systems,” Proc. 1986 Int. Conf. Parallel Processing, pp. 84–91.
  15. Opsahl, T., and Parkinson, D. [1986]. “An Algorithm for Solving Sparse Sets of Linear Equations with an Almost Tridiagonal Structure on SIMD Computers,” Proc. 1986 Int. Conf. Parallel Processing, pp. 369–374.
  16. van der Sluis, A., and van der Vorst, H. [1986]. “The Rate of Convergence of Conjugate Gradients,” Numer. Math. 48, 543–560.
  17. Wang, H. [1981]. “A Parallel Method for Tridiagonal Equations,” ACM Trans. Math. Software 7, 170–183.
  18. Sameh, A., and Kuck, D. [1978]. “On Stable Parallel Linear System Solvers,” J. ACM 25, 81–91.
  19. Lawrie, D., and Sameh, A. [1984]. “The Computation and Communication Complexity of a Parallel Banded System Solver,” ACM Trans. Math. Software 10, 185–195.
  20. Meier, U. [1985]. “A Parallel Partition Method for Solving Banded Systems of Linear Equations,” Parallel Comput. 2, 33–43.
  21. Dongarra, J., and Eisenstat, S. [1984]. “Squeezing the Most out of an Algorithm in CRAY FORTRAN,” ACM Trans. Math. Software 10, 221–230.
  22. Johnsson, L. [1985b]. “Solving Narrow Banded Systems on Ensemble Architectures,” ACM Trans. Math. Software 11, 271–288.
  23. Dongarra, J., and Johnsson, L. [1987]. “Solving Banded Systems on a Parallel Processor,” Parallel Comput. 5, 219–246.
  24. Gannon, D., and Van Rosendale, J. [1984]. “On the Impact of Communication Complexity in the Design of Parallel Numerical Algorithms,” IEEE Trans. Comput. 33, 1180–1194.
  25. Noor, A., Kamel, H., and Fulton, R. [1978]. “Substructuring Techniques: Status and Projections,” Comput. Structures 8, 621–632.
  26. George, A., and Liu, J. [1981]. Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, New Jersey.
  27. George, A. [1977]. “Numerical Experiments Using Dissection Methods to Solve n by n Grid Problems,” SIAM J. Numer. Anal. 14, 161–179.
  28. George, A., and Liu, J. [1981]. Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, New Jersey.
  29. George, A., Poole, W., and Voigt, R. [1978]. “Analysis of Dissection Algorithms for Vector Computers,” Comput. Math. Appl. 4, 287–304.
  30. Gannon, D. [1980]. “A Note on Pipelining a Mesh Connected Multiprocessor for Finite Element Problems by Nested Dissection,” Proc. 1980 Int. Conf. Parallel Processing, pp. 197–204.
  31. Ashcraft, C. [1985a]. “Parallel Reduction Methods for the Solution of Banded Systems of Equations,” General Motors Research Lab. Report No. GMR-5094.
  32. Cleary, A., Harrar, D., and Ortega, J. [1986]. “Gaussian Elimination and Choleski Factorization on the FLEX/32,” Applied Mathematics Report RM-86-13, University of Virginia.
  33. Gannon, D. [1986]. “Restructuring Nested Loops on the Alliant Cedar Cluster: A Case Study of Gaussian Elimination of Banded Matrices,” Center for Supercomputing Research and Development Report No. 543, University of Illinois.
  34. Bjorstad, P. [1987]. “A Large Scale, Sparse, Secondary Storage, Direct Linear Equation Solver for Structural Analysis and its Implementation on Vector and Parallel Architectures,” Parallel Comput. 5, 3–12.
  35. Schreiber, R. [1986]. “On Systolic Array Methods for Band Matrix Factorizations,” BIT 26, 303–316.
  36. Kapur, R., and Browne, J. [1984]. “Techniques for Solving Block Tridiagonal Systems on Reconfigurable Array Computers,” SIAM J. Sci. Stat. Comput. 5, 701–719.
  37. Saad, Y., and Schultz, M. [1987]. “Parallel Direct Methods for Solving Banded Linear Systems,” Lin. Alg. Appl. 88, 623–650.
  38. George, A., and Liu, J. [1981]. Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, New Jersey.
  39. George, A., Heath, M., and Liu, J. [1986]. “Parallel Cholesky Factorization on a Shared Memory Multiprocessor,” Lin. Alg. Appl. 77, 165–187.
  40. Liu, J. [1987]. “Reordering Sparse Matrices for Parallel Elimination,” Department of Computer Science Report No. CS-87-01, York University, Ontario, Canada.
  41. George, A., Heath, M., Liu, J., and Ng, E. [1987a]. “Solution of Sparse, Positive Definite Systems on a Shared-Memory Multiprocessor,” Oak Ridge National Laboratory Report No. ORNL/TM-10260.
  42. George, A., Heath, M., Ng, E., and Liu, J. [1987b]. “Symbolic Cholesky Factorization on a Local-Memory Multiprocessor,” Parallel Comput. 5, 85–96.
  43. Lewis, J., and Simon, H. [1986]. “The Impact of Hardware Gather/Scatter on Sparse Gaussian Elimination,” Proc. 1986 Int. Conf. Parallel Processing, pp. 366–368.
  44. Duff, I. [1986]. “Parallel Implementation of Multifrontal Schemes,” Parallel Comput. 3, 193–209.
  45. Dave, A., and Duff, I. [1987]. “Sparse Matrix Calculations on the CRAY-2,” Parallel Comput. 5, 55–64.
  46. Liu, J. [1986]. “Computational Models and Task Scheduling for Parallel Sparse Cholesky Factorization,” Parallel Comput. 3, 327–342.
  47. Liu, J. [1987]. “Reordering Sparse Matrices for Parallel Elimination,” Department of Computer Science Report No. CS-87-01, York University, Ontario, Canada.
  48. Greenbaum, A. [1986b]. “Solving Sparse Triangular Linear Systems Using Fortran with Parallel Extensions on the NYU Ultracomputer Prototype,” New York University Ultracomputer Note No. 99.
  49. Wing, O., and Huang, J. [1977]. “A Parallel Triangulation Process of Sparse Matrices,” Proc. 1977 Int. Conf. Parallel Processing, pp. 207–214.
  50. Wing, O., and Huang, J. [1980]. “A Computational Model of Parallel Solutions of Linear Equations,” IEEE Trans. Comput. 29, 632–638.
  51. Alaghband, G., and Jordan, H. [1985]. “Multiprocessor Sparse L/U Decomposition with Controlled Fill-In,” ICASE Report No. 85-48, NASA Langley Research Center.

Copyright information

© Springer Science+Business Media New York 1988

Authors and Affiliations

  • James M. Ortega
  1. University of Virginia, Charlottesville, USA
