
Introduction to Numerical Methods for Solving Linear Systems

Krylov Subspace Methods for Linear Systems

Part of the book series: Springer Series in Computational Mathematics (SSCM, volume 60)

Abstract

Numerical methods for solving linear systems are classified into two groups: direct methods and iterative methods. Direct methods solve a linear system within a finite number of arithmetic operations; the best-known direct method is the LU decomposition. Iterative methods produce a sequence of approximate solutions and are roughly classified into stationary iterative methods and Krylov subspace methods; multigrid methods form another important class of iterative methods. This chapter describes the principles of direct methods and stationary iterative methods and gives a brief introduction to the theory of Krylov subspace methods. A brief explanation of multigrid methods is also given.
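
The contrast between the two classes can be illustrated with a short sketch (not part of the chapter; the matrix, right-hand side, and stopping tolerance below are arbitrary example data). The same small system \(A\boldsymbol{x} = \boldsymbol{b}\) is solved once by a direct LU factorization and once by the stationary Jacobi iteration, which converges here because the example matrix is strictly diagonally dominant.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Arbitrary example system (strictly diagonally dominant, so Jacobi converges).
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([2.0, 4.0, 10.0])

    # Direct method: LU decomposition gives the solution in a finite number of operations.
    x_direct = lu_solve(lu_factor(A), b)

    # Stationary iterative method: Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k).
    d = np.diag(A)                      # diagonal entries of A
    R = A - np.diag(d)                  # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(1000):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < 1e-12:
            x = x_new
            break
        x = x_new

    print("LU solution:    ", x_direct)
    print("Jacobi solution:", x)

The direct solve terminates after a fixed amount of work, whereas the Jacobi loop produces successively better approximations until the update falls below the tolerance.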


Notes

  1. In general, given a vector space \(\mathbb{V}\), a map \(\Vert \cdot \Vert : \mathbb{V} \rightarrow \mathbb{R}\) is called a norm if it satisfies (Nv1)–(Nv3) for all \(\boldsymbol{x}, \boldsymbol{y} \in \mathbb{V}\) and \(\alpha \in \mathbb{K}\) (\(\mathbb{K} = \mathbb{R}\) or \(\mathbb{C}\)); the standard norm axioms are recalled, for reference, after these notes.

  2. “\(\sup\)” can be replaced with “\(\max\)”.

  3. A real square matrix Q is called an orthogonal matrix if \(Q^\top Q = I\), where I is the identity matrix (a small numerical check is sketched after these notes).

  4. Using MATLAB notation, “all the leading principal minors” of A are defined by det(A(1:1,1:1)), det(A(1:2,1:2)), ..., det(A(1:n,1:n)) (a NumPy version of this computation is sketched after these notes).

  5. After the proposal of the CG method, Stiefel proposed the Conjugate Residual (CR) method in 1955; he is also known for the Stiefel manifold, an extension of the unit circle.

  6. A matrix \(G \in \mathbb{C}^{N\times N}\) is called Hermitian positive definite if G is Hermitian and \(\boldsymbol{v}^{\text{H}} G \boldsymbol{v} > 0\) for all nonzero \(\boldsymbol{v} \in \mathbb{C}^N\), or equivalently, if G is Hermitian and all its eigenvalues are positive. It follows that if G is Hermitian positive definite, then \(G^{-1}\) is also Hermitian positive definite (a numerical check of this is sketched after these notes). For positive definiteness, see also Sect. 1.4.
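
For reference, the conditions (Nv1)–(Nv3) cited in Note 1 are not reproduced on this page; presumably they are the standard norm axioms, which in the chapter's notation would read, for all \(\boldsymbol{x}, \boldsymbol{y} \in \mathbb{V}\) and \(\alpha \in \mathbb{K}\):

    (Nv1) \(\Vert \boldsymbol{x}\Vert \ge 0\), and \(\Vert \boldsymbol{x}\Vert = 0\) if and only if \(\boldsymbol{x} = \boldsymbol{0}\);
    (Nv2) \(\Vert \alpha \boldsymbol{x}\Vert = |\alpha|\,\Vert \boldsymbol{x}\Vert\);
    (Nv3) \(\Vert \boldsymbol{x} + \boldsymbol{y}\Vert \le \Vert \boldsymbol{x}\Vert + \Vert \boldsymbol{y}\Vert\).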
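
As a small illustration of Note 3 (a sketch with an arbitrary angle, not taken from the chapter), a plane rotation matrix is orthogonal, and the defining relation \(Q^\top Q = I\) can be checked numerically:

    import numpy as np

    # A 2-by-2 plane rotation is a standard example of an orthogonal matrix (arbitrary angle).
    theta = np.pi / 6
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q^T Q = I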
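
A NumPy counterpart of the MATLAB notation in Note 4 (a minimal sketch; the matrix below is arbitrary example data) computes all leading principal minors as follows:

    import numpy as np

    def leading_principal_minors(A):
        """Return [det(A[:1,:1]), det(A[:2,:2]), ..., det(A[:n,:n])]."""
        n = A.shape[0]
        return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

    # Arbitrary symmetric positive definite example: all leading principal minors are positive.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(leading_principal_minors(A))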
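
The equivalence stated in Note 6 can also be checked numerically. The sketch below (arbitrary example data, not from the chapter) tests Hermitian positive definiteness via the eigenvalues and observes that the inverse inherits the property:

    import numpy as np

    def is_hermitian_positive_definite(G, tol=1e-12):
        """Check that G is Hermitian and that all its (real) eigenvalues are positive."""
        if not np.allclose(G, G.conj().T):
            return False
        return bool(np.all(np.linalg.eigvalsh(G) > tol))

    # Arbitrary Hermitian positive definite example (eigenvalues 1 and 4).
    G = np.array([[2.0, 1.0 + 1.0j],
                  [1.0 - 1.0j, 3.0]])

    print(is_hermitian_positive_definite(G))                 # True
    print(is_hermitian_positive_definite(np.linalg.inv(G)))  # True: G^{-1} is also HPD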

Author information

Corresponding author

Correspondence to Tomohiro Sogabe.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Sogabe, T. (2022). Introduction to Numerical Methods for Solving Linear Systems. In: Krylov Subspace Methods for Linear Systems. Springer Series in Computational Mathematics, vol 60. Springer, Singapore. https://doi.org/10.1007/978-981-19-8532-4_1
