Abstract
Zhou et al. and Huang et al. have proposed the modified shift-splitting (MSS) and the generalized modified shift-splitting (GMSS) preconditioners for non-symmetric saddle point problems, respectively. They used a symmetric positive definite and skew-symmetric splitting of the (1, 1)-block of the saddle point problem. In this paper, we use a positive definite and skew-symmetric splitting instead and present a new modified shift-splitting (NMSS) method for solving large sparse linear systems in saddle point form whose (1, 1)-block has a dominant positive definite part. We investigate the convergence and semi-convergence properties of this method for nonsingular and singular saddle point problems. We also use the NMSS method as a preconditioner for the GMRES method. The numerical results show that if the (1, 1)-block has a dominant positive definite part, the NMSS-preconditioned GMRES method yields better performance than other preconditioned GMRES methods such as GMSS, MSS, Uzawa-HSS and PU-STS. Moreover, the NMSS preconditioner is designed for non-symmetric saddle point problems with either symmetric or non-symmetric (1, 1)-blocks.
1 Introduction
Consider the following non-symmetric saddle point linear system:
$$\begin{aligned} {{{\mathscr {A}}}}u\equiv \left( \begin{array}{cc} {A} &{} {B} \\ {-B^\mathrm{T} } &{} {0} \end{array}\right) \left( \begin{array}{c} {x} \\ {y} \end{array}\right) =\left( \begin{array}{c} {f} \\ {-g} \end{array}\right) \equiv b, \end{aligned}$$(1)
where \(A\in {\mathbb {R}}^{n\times n}\) is positive definite (symmetric or non-symmetric); \(B\in {\mathbb {R}}^{n\times m}(m\le n)\) is a rectangular matrix of rank \(r\le m\); \(f\in {\mathbb {R}}^{n}\) and \(g\in {\mathbb {R}}^{m}\) are the given vectors.
In general, matrices A and B in \({{\mathscr {A}}}\) are large and sparse. System (1) is important and arises in a variety of scientific and engineering applications, such as computational fluid dynamics, constrained optimization, mixed or hybrid finite elements approximations of second-order elliptic problems, see [1, 7, 15].
In recent years, many studies have focused on solving large linear systems in saddle point form. Iterative methods are used for solving saddle point problems (1) when the matrix blocks A and B are large and sparse. Some of these methods, such as Uzawa [7], inexact Uzawa [16] and the Hermitian and skew-Hermitian splitting method [2, 18, 21], have been presented. These stationary methods use much less memory than Krylov subspace methods, which are otherwise very efficient. Unfortunately, for saddle point problems (1), Krylov subspace methods converge slowly and require good preconditioners to accelerate their convergence.
Different preconditioners based on the matrix splitting of the (1, 1)-block A have been proposed. For example, Bai and Zhang [6] proposed a regularized conjugate gradient method for symmetric positive definite system of linear equations by shifting the coefficient matrix. Shift-splitting preconditioner has been presented by Bai et al. [5] for non-Hermitian positive definite system of linear equations, to accelerate the convergence of the Krylov subspace methods. Cao et al. applied shift-splitting preconditioner and a local shift-splitting preconditioner to solve symmetric saddle point problems and extended it to generalized shift-splitting preconditioner for non-symmetric saddle point problems [10, 13]. Also, Shen et al. used generalized shift-splitting preconditioners for solving nonsingular and singular generalized saddle point problems [23].
Moreover, the semi-convergence of the shift-splitting iteration method and the spectral analysis of the shift-splitting preconditioned saddle point matrix have been studied by Cao et al. [11] and Ren et al. [22], respectively. Cao et al. used the generalized shift-splitting matrix as a preconditioner and analyzed the eigenvalue distribution of the preconditioned saddle point matrix [12]. Zhou et al. [26] and Huang et al. [17], respectively, proposed the modified shift-splitting (MSS) and generalized modified shift-splitting (GMSS) preconditioners for solving non-Hermitian saddle point problems. They used a symmetric and skew-symmetric splitting of the (1, 1)-block A to construct these preconditioners. In addition, Dou et al. [14] presented the fast shift-splitting (FSS) preconditioner for non-symmetric saddle point problems. Recently, a general class of shift-splitting (GCSS) preconditioners has been proposed for non-Hermitian saddle point problems arising from time-harmonic eddy current problems by Cao [9].
In this paper, we work on the saddle point problems (1) in which the (1, 1)-block A has a dominant positive definite part, i.e., we can split A as \(A=P+S\),
where P is a positive definite matrix, S is a skew-symmetric matrix and, in some matrix norm \(\Vert .\Vert \), \(\Vert P\Vert \gg \Vert S\Vert \); see [3]. We present new modified shift-splitting (NMSS) preconditioners for this type of saddle point problem (1). The convergence of the iteration method produced by these preconditioners is investigated. We apply these preconditioners to both singular and nonsingular saddle point problems (1). Also, we study the eigenvalue distribution of the NMSS-preconditioned matrix. Finally, practical numerical examples are presented to show the effectiveness of the NMSS preconditioners.
2 New modified shift-splitting method
Assume that \(A=P+S\) is the splitting of the (1, 1)-block A of the coefficient matrix \({{\mathscr {A}}}\) in (1), where P is a positive definite matrix and S is a skew-symmetric matrix. In this study, we choose \(P=L+D+U^\mathrm{T}\) and \(S=U-U^\mathrm{T}\) as the positive definite and skew-symmetric splitting of A, where D is the diagonal part of A, and L and U are the strictly lower and strictly upper triangular parts of A, respectively. Let
$$\begin{aligned} \mathscr {M}=\frac{1}{2}\left( \begin{array}{cc} {\alpha I+2P} &{} {B} \\ {-B^\mathrm{T} } &{} {\beta I} \end{array}\right) \quad \text {and}\quad \mathscr {N}=\mathscr {M}-\mathscr {A}=\frac{1}{2}\left( \begin{array}{cc} {\alpha I-2S} &{} {-B} \\ {B^\mathrm{T} } &{} {\beta I} \end{array}\right) , \end{aligned}$$
where \(\alpha ,\ \beta >0\) are two constants, and I is the unit matrix with appropriate dimension. This splitting gives the following new modified shift-splitting (NMSS) iteration method for saddle point problem (1).
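To make the construction concrete, the splitting can be sketched in a few lines of NumPy (a dense toy implementation; the function name and the use of dense blocks are ours, not from the paper):

```python
import numpy as np

def nmss_splitting(A, B, alpha, beta):
    """NMSS splitting of the saddle point matrix [[A, B], [-B^T, 0]]
    into M - N, with P = L + D + U^T (positive definite) and
    S = U - U^T (skew-symmetric), so that A = P + S."""
    n, m = B.shape
    D = np.diag(np.diag(A))   # diagonal part of A
    L = np.tril(A, -1)        # strictly lower triangular part of A
    U = np.triu(A, 1)         # strictly upper triangular part of A
    P = L + D + U.T
    S = U - U.T
    In, Im = np.eye(n), np.eye(m)
    M = 0.5 * np.block([[alpha * In + 2.0 * P, B],
                        [-B.T, beta * Im]])
    N = 0.5 * np.block([[alpha * In - 2.0 * S, -B],
                        [B.T, beta * Im]])
    return M, N
```

By construction, \(\mathscr {M}-\mathscr {N}\) reproduces the saddle point matrix \({{{\mathscr {A}}}}\).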
2.1 NMSS iteration method
Given an initial guess \({u^{(0)}}^\mathrm{T}=\left( {x^{(0)}}^\mathrm{T},\ {y^{(0)}}^\mathrm{T} \right) \),
for \(k=0,1,2,\ldots \) until convergence, compute \({u^{(k+1)}}^\mathrm{T}=\left( {x^{(k+1)}}^\mathrm{T},\ {y^{(k+1)}}^\mathrm{T} \right) \) as follows:
Consequently, the NMSS iteration method can be expressed in fixed-point form as \(u^{(k+1)}=\varGamma u^{(k)}+\mathscr {M}^{-1}b\), where \(\varGamma =\mathscr {M}^{-1}\mathscr {N}\) is the iteration matrix.
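As an illustration, the stationary scheme \(\mathscr {M}u^{(k+1)}=\mathscr {N}u^{(k)}+b\) can be run directly (a dense NumPy sketch; the function name, stopping rule and toy sizes below are ours):

```python
import numpy as np

def nmss_iteration(M, N, b, u0=None, tol=1e-9, maxit=5000):
    """Stationary NMSS iteration: solve M u^{k+1} = N u^k + b
    until the relative step size drops below tol."""
    u = np.zeros_like(b) if u0 is None else u0.copy()
    for k in range(1, maxit + 1):
        u_next = np.linalg.solve(M, N @ u + b)
        if np.linalg.norm(u_next - u) <= tol * max(1.0, np.linalg.norm(u_next)):
            return u_next, k
        u = u_next
    return u, maxit
```

In practice, \(\mathscr {M}\) would be factored once (or the reduced system solved iteratively) rather than re-solved each sweep.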
In the NMSS iteration method, or when using \({{{\mathscr {M}}}} \) as a preconditioner for Krylov subspace methods, we need to solve systems of linear equations of the form \({{{\mathscr {M}}}}z=r\). Let \(r^\mathrm{T}=(r_1^\mathrm{T},r_2^\mathrm{T})\) and \(z^\mathrm{T}=(z_1^\mathrm{T},z_2^\mathrm{T})\), where \(r_1,z_1\in {\mathbb {R}}^n\) and \(r_2,z_2\in {\mathbb {R}}^m\).
An easy computation shows that (6) is equivalent to the following equations:
An approximate solution of the linear system (7) can be obtained by the conjugate gradient method (for symmetric P) or the Lanczos method (for non-symmetric P). Alternatively, linear system (7) can be solved by a direct method.
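Since the displayed reduced system is not reproduced here, the following block-elimination reading is our sketch: eliminating \(z_2\) from \(\mathscr {M}z=r\) leaves an \(n\times n\) system for \(z_1\) (which would be solved by CG or Lanczos as above; we use a dense direct solve for clarity):

```python
import numpy as np

def apply_nmss_preconditioner(P, B, alpha, beta, r1, r2):
    """Solve M z = r with M = 0.5*[[alpha*I + 2P, B], [-B^T, beta*I]].
    The second block row gives z2 = (2 r2 + B^T z1)/beta; substituting
    it into the first block row yields an n-by-n system for z1."""
    n = P.shape[0]
    G = alpha * np.eye(n) + 2.0 * P + (B @ B.T) / beta
    z1 = np.linalg.solve(G, 2.0 * r1 - (2.0 / beta) * (B @ r2))
    z2 = (2.0 * r2 + B.T @ z1) / beta
    return z1, z2
```

The matrix \(G=\alpha I+2P+\beta ^{-1}BB^\mathrm{T}\) inherits a positive definite symmetric part from P, so the reduced solve is well posed.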
3 Convergence of NMSS iteration method
In this section, we investigate the convergence behavior of the NMSS method when the saddle point system (1) is nonsingular. As is known, the NMSS method is convergent if and only if \(\rho (\varGamma )<1\).
Let us assume that \(\lambda \) is an eigenvalue of the iteration matrix \(\varGamma \) of the NMSS method and \(u=[x^\mathrm{T},y^\mathrm{T}]^\mathrm{T}\) is the corresponding eigenvector; then we have
$$\begin{aligned} \mathscr {N}u=\lambda \mathscr {M}u, \end{aligned}$$(8)
which is equivalent to
We can write (9) as the following two equations:
$$\begin{aligned} (\alpha I-2S)x-By=\lambda \left[ (\alpha I+2P)x+By\right] , \end{aligned}$$(10)
$$\begin{aligned} B^\mathrm{T}x+\beta y=\lambda \left( -B^\mathrm{T}x+\beta y\right) . \end{aligned}$$(11)
If \(\lambda =1\) is substituted in (8), we obtain \({{{\mathscr {A}}}}u=0\), which contradicts the nonsingularity of \({{{\mathscr {A}}}}\). Also, suppose that \(x=0\). We conclude from (11) and \(\lambda \ne 1\) that \(y=0\). But this contradicts u being an eigenvector; hence \(x\ne 0\), and the next lemma follows immediately.
Lemma 3.1
Let A be positive definite, B be of full column rank and \(\alpha ,\beta >0\) be given constants. If \(\lambda \) is an eigenvalue of the iteration matrix \(\varGamma \) and \(u=(x^\mathrm{T},y^\mathrm{T})^\mathrm{T}\) is the eigenvector of \(\ \varGamma \) corresponding to \(\lambda \), then \(\lambda \ne 1\) and \(x\ne 0\).
Lemma 3.2
[20] Both roots of the complex equation \(\lambda ^2-\phi \lambda +\psi =0\) are less than one in modulus if and only if \(|\phi -{\bar{\phi }}\psi |+|\psi |^2<1\), where \({\bar{\phi }}\) denotes the conjugate complex of \(\phi \) .
Theorem 3.3
Let \(A\in {\mathbb {R}}^{n\times n}\) be positive definite, \(B\in {\mathbb {R}}^{n\times m}\) be of full column rank and \(\alpha ,\ \beta >0\) be given constants. If \(\lambda \) is an eigenvalue of the iteration matrix \({\varGamma }\) and \(u=[x^\mathrm{T},y^\mathrm{T}]^\mathrm{T}\) is the eigenvector of \(\varGamma \) corresponding to \(\lambda \), then the NMSS iteration method converges to the unique solution of problem (1) if and only if the parameters \(\alpha \) and \( \beta \) satisfy the following conditions:
1. If \(c=0\), then
$$\begin{aligned} \alpha >\displaystyle \frac{b^2-|a|^2}{a_1}; \end{aligned}$$(12)
2. If \(c\ne 0\), then
$$\begin{aligned} \beta a_1(\alpha a_1+|a|^2-b^2)-c(a_2-b)^2>0, \end{aligned}$$(13)
where \(a=\dfrac{x^*Px}{x^*x}=a_1+ia_2\), \(ib=\dfrac{x^*Sx}{x^*x}\), and \(c=\dfrac{x^*BB^\mathrm{T}x}{x^*x}=\dfrac{\Vert B^\mathrm{T}x\Vert _2^2}{x^*x}\).
Proof
By Lemma 3.1, we know that \(\lambda \ne 1\). Moreover, we can obtain from (11) that
$$\begin{aligned} y=\frac{1+\lambda }{\beta (\lambda -1)}B^\mathrm{T}x. \end{aligned}$$(14)
By substituting (14) in (10), we have
$$\begin{aligned} (\alpha I-2S)x-\lambda (\alpha I+2P)x=\frac{(1+\lambda )^2}{\beta (\lambda -1)}BB^\mathrm{T}x. \end{aligned}$$(15)
Since \(x\ne 0\), by multiplying (15) on the left by \(\displaystyle \frac{x^*}{x^*x}\), we obtain
$$\begin{aligned} \alpha -\frac{2x^*Sx}{x^*x}-\lambda \alpha -\frac{2\lambda x^*Px}{x^*x}=\frac{(1+\lambda )^2}{\beta (\lambda -1)}\cdot \frac{x^*BB^\mathrm{T}x}{x^*x}. \end{aligned}$$(16)
Let
$$\begin{aligned} a=\frac{x^*Px}{x^*x}=a_1+ia_2,\quad ib=\frac{x^*Sx}{x^*x},\quad c=\frac{x^*BB^\mathrm{T}x}{x^*x}=\frac{\Vert B^\mathrm{T}x\Vert _2^2}{x^*x}. \end{aligned}$$
Then (16) is simplified to
$$\begin{aligned} \beta (\lambda -1)\left[ \alpha (\lambda -1)+2ib+2\lambda a\right] +c(1+\lambda )^2=0. \end{aligned}$$(17)
1. If \(c=0\) (i.e., \(B^\mathrm{T}x=0\)), then (17) is reduced to
$$\begin{aligned} \alpha (\lambda -1) +2ib+2\lambda a=0, \end{aligned}$$
which gives
$$\begin{aligned} \lambda =\frac{ \alpha -2ib}{\alpha +2a}. \end{aligned}$$
Thus, \(|\lambda |<1\) if and only if
$$\begin{aligned} \alpha >\frac{b^2-|a|^2}{a_1}. \end{aligned}$$(18)
2. If \(c\ne 0\) (i.e., \(B^\mathrm{T}x\ne 0\)), then by arranging (17) in terms of \(\lambda \), we obtain the following quadratic equation:
$$\begin{aligned} (\alpha \beta +2\beta a+c)\lambda ^{2} +(-2\alpha \beta +2i\beta b-2\beta a+2c)\lambda +(\alpha \beta -2i\beta b+c)=0. \end{aligned}$$(19)We divide (19) by \((\alpha \beta +2\beta a+c)\ne 0\), then
$$\begin{aligned} \lambda ^{2} -\phi \lambda +\psi =0 , \end{aligned}$$(20)where
$$\begin{aligned} \phi =2\frac{\beta (\alpha +a_{1} +ia_{2} )-i\beta b-c}{\beta (\alpha +2a_{1} +2ia_{2} )+c}\quad \text {and} \quad \psi =\frac{\beta (\alpha -2ib)+c}{\beta (\alpha +2a_{1} +2ia_{2} )+c} . \end{aligned}$$
By Lemma 3.2, the roots of equation (20) satisfy \(\left| \lambda \right| <1\) if and only if \(\left| \phi -\bar{\phi }\psi \right| +\left| \psi \right| ^{2} <1\). Some computations show that this condition is equivalent to
$$\begin{aligned} \beta a_1(\alpha a_1+|a|^2-b^2)-c(a_2-b)^2>0. \end{aligned}$$(21)
Thus, if the condition (21) holds, then the NMSS iteration method must be convergent. \(\square \)
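For case 1, the modulus bound can be checked directly; writing \(a=a_1+ia_2\), the omitted algebra is
$$\begin{aligned} |\lambda |<1&\iff |\alpha -2ib|^2<|\alpha +2a|^2\\ &\iff \alpha ^2+4b^2<(\alpha +2a_1)^2+4a_2^2\\ &\iff 4b^2<4\alpha a_1+4|a|^2\\ &\iff \alpha >\frac{b^2-|a|^2}{a_1}\quad (\text {since } a_1>0). \end{aligned}$$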
Remark 3.4
In Theorem 3.3, if the (1, 1)-block A has a dominant positive definite part, then \(|a|\gg |b|\). We conclude that (12) holds for all \(\alpha >0\). On the other hand, there is no restriction on \(\beta \) beyond positivity. Therefore, in this case, the iteration method is unconditionally convergent.
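This remark can be probed numerically: for a toy matrix with a boosted diagonal (so that \(\Vert P\Vert \gg \Vert S\Vert \) under the triangular splitting of Sect. 2), the spectral radius of \(\varGamma =\mathscr {M}^{-1}\mathscr {N}\) stays below one for several choices of \((\alpha ,\beta )\). The construction and parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 8, 4
# Strong diagonal => dominant positive definite part under P = L + D + U^T
A = n * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)
P, S = L + D + U.T, U - U.T

rhos = []
for alpha, beta in [(0.5, 0.5), (1.0, 1.0), (2.0, 0.5), (0.5, 2.0)]:
    M = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, B], [-B.T, beta * np.eye(m)]])
    N = 0.5 * np.block([[alpha * np.eye(n) - 2 * S, -B], [B.T, beta * np.eye(m)]])
    # spectral radius of the iteration matrix M^{-1} N
    rhos.append(max(abs(np.linalg.eigvals(np.linalg.solve(M, N)))))

print(all(r < 1 for r in rhos))
```

A small experiment, not a proof; the theorem gives the precise conditions.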
4 Semi-convergence of the NMSS iteration method for singular saddle point problems
Let B in (1) be rank deficient, i.e., rank\((B)= r (r< m)\). Since B is rank deficient, \( {{{\mathscr {A}}}} \) is singular and we study the semi-convergence of the NMSS iteration method for solving the singular saddle point problems (1). According to [8], iteration scheme (9) is semi-convergent if and only if the following two conditions are met.
(i) Elementary divisors associated with \(\lambda = 1 \in \sigma (\varGamma )\) are linear, i.e., \(\mathrm{rank}\; (I-\varGamma )^{2} =\mathrm{rank}\; (I-\varGamma )\), or equivalently, \( \mathrm{index}(I - \varGamma ) = 1\).
(ii) If \(\lambda \in \sigma (\varGamma )\) with \(|\lambda |=1\), then \(\lambda =1\), i.e., \(\nu (\varGamma )<1\), where \(\sigma (\varGamma )\) denotes the spectrum of \(\varGamma \) and \(\nu (\varGamma ) =\max \left\{ |\lambda | : \lambda \in \sigma (\varGamma ),\lambda \ne 1\right\} \) is the pseudo-spectral radius of \(\varGamma \).
For the first condition of semi-convergence, we present the following theorem, which can be proved in the same way as Theorem 4.1 in [14].
Theorem 4.1
Let \(A\in {\mathbb {R}} ^{n\times n} \) be positive definite and \(B\in {\mathbb {R}} ^{n\times m} \) be rank deficient. Suppose that \(\alpha ,\ \beta >0\) and \(\varGamma \) is the iteration matrix of the NMSS iteration method. Then \(\mathrm{rank}\; (I-\varGamma )^{2} =\mathrm{rank}\; (I-\varGamma )\), i.e., \(\mathrm{index}(I-\varGamma )=1\).
In what follows, the second condition of semi-convergence will be studied. Let \(B=U\; \left[ \begin{array}{cc} {B_{r} }&{0} \end{array}\right] \; V^\mathrm{T} \) be the singular value decomposition of B, where
with \(U\in {\mathbb {R}} ^{n\times n} \; \text {and}\; V\in {\mathbb {R}} ^{m\times m} \) being two orthogonal matrices and \(\sigma _{i} \; \left( i=1,\ldots ,r\right) \) being singular values of B.
We define
and consider the block diagonal matrix
which is an \(\left( n+m\right) \times \left( n+m\right) \) orthogonal matrix. The iteration matrix \(\varGamma \) is similar to the matrix \(\hat{\varGamma } =Q ^\mathrm{T}\varGamma Q\); hence, \(\varGamma \) has the same spectrum as \({\hat{\varGamma }}\). Now, we transform \({\hat{\varGamma }}\) by a further similarity into a form in which its eigenvalues can be analyzed; therefore
i.e.,
Let \({\tilde{\varGamma }} =\left( \begin{array}{cc} {\alpha I+2{\tilde{P}} } &{} {B_{r} } \\ {-B_r^\mathrm{T} } &{} {\beta I} \end{array}\right) ^{-1} \left( \begin{array}{cc} {\alpha I-2{\tilde{S}} } &{} {-B_{r} } \\ {B_{r}^\mathrm{T} } &{} {\beta I} \end{array}\right) .\) Then \({\hat{\varGamma }} =\left( \begin{array}{cc} {{\tilde{\varGamma }} } &{} {0} \\ {0} &{} {I} \end{array}\right) .\) Matrix \({\tilde{\varGamma }} \) can be viewed as the iteration matrix of the NMSS iteration method applied to
Because \({\tilde{A}}=UAU^\mathrm{T}\) is positive definite and \(B_r\) has full rank, (23) is nonsingular. Letting \({\tilde{u}}=\left( {\tilde{x}}^\mathrm{T},{\tilde{y}}^\mathrm{T}\right) ^\mathrm{T}\) be an eigenvector of \({\tilde{\varGamma }}\), the conditions of Theorem 3.3 can be expressed for the new nonsingular system (23) with iteration matrix \({\tilde{\varGamma }}\) as follows:
when \({\tilde{c}}=0\): \(\alpha >\displaystyle \frac{{\tilde{b}}^2-|{\tilde{a}}|^2}{{\tilde{a}}_1}\); and when \({\tilde{c}}\ne 0\): \(\beta {\tilde{a}}_1(\alpha {\tilde{a}}_1+|{\tilde{a}}|^2-{\tilde{b}}^2)-{\tilde{c}}({\tilde{a}}_2-{\tilde{b}})^2>0\),
where
Then, under the above conditions, \(\rho ({{\tilde{\varGamma }}})<1\) and the second condition of semi-convergence is satisfied. These results are summarized in the following theorem.
Theorem 4.2
Let \(A\in {\mathbb {R}} ^{n\times n} \) be positive definite and \(B\in {\mathbb {R}} ^{n\times m}\) be rank deficient. Assume that \(\alpha \, ,\ \beta \, >0\) and \(\varGamma \) is the iteration matrix of the NMSS iteration method. Then \(\nu (\varGamma )<1\) if and only if the following conditions are satisfied:
1. If \({\tilde{c}}=0\), then
$$\begin{aligned} \alpha >\displaystyle \frac{{\tilde{b}}^2-|{\tilde{a}}|^2}{{\tilde{a}}_1}. \end{aligned}$$(24)
2. If \({\tilde{c}}\ne 0\), then
$$\begin{aligned} \beta {\tilde{a}}_1(\alpha {\tilde{a}}_1+|{\tilde{a}}|^2-{\tilde{b}}^2)-{\tilde{c}}({\tilde{a}}_2-{\tilde{b}})^2>0. \end{aligned}$$(25)
Using Theorems 4.1 and 4.2, we conclude the semi-convergence of the NMSS iteration method for singular saddle point problem (1).
5 Preconditioning properties
In the preceding sections, we studied the convergence and semi-convergence of the NMSS method as an iteration method. However, similar to other shift-splitting methods, we do not expect fast convergence of the NMSS method in actual implementations. Therefore, we focus on the preconditioner generated by this method, i.e., \(\mathcal{P}_\mathrm{NMSS}.\) We use this preconditioner to accelerate the convergence of the GMRES method as a Krylov subspace method. Also, we study the eigenvalue distribution of the preconditioned matrix \({{{\mathscr {P}}}}_\mathrm{NMSS}^{-1}{{{\mathscr {A}}}}.\)
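As an illustration of this use, the splitting matrix \(\mathscr {M}\) can be passed to SciPy's GMRES as a preconditioner (a dense toy problem of our own construction; dimensions and parameter values are ours, not from the paper's examples):

```python
import numpy as np
import scipy.linalg as sla
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n, m = 50, 20
A = n * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # dominant positive definite part
B = rng.standard_normal((n, m))
script_A = np.block([[A, B], [-B.T, np.zeros((m, m))]])
b = script_A @ np.ones(n + m)                          # exact solution: all ones

alpha = beta = 1.0
D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)
P = L + D + U.T
M_mat = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, B],
                        [-B.T, beta * np.eye(m)]])

lu, piv = sla.lu_factor(M_mat)     # factor the preconditioner once
prec = LinearOperator((n + m, n + m),
                      matvec=lambda r: sla.lu_solve((lu, piv), r))

x, info = gmres(script_A, b, M=prec)
print(info, np.linalg.norm(script_A @ x - b) / np.linalg.norm(b))
```

In a realistic setting, \(\mathscr {M}z=r\) would of course be applied via the sparse reduced system of Sect. 2 rather than a dense LU factorization.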
Lemma 5.1
Let \(A\in {\mathbb {R}}^{n\times n}\) be positive definite, \(B\in {\mathbb {R}}^{n\times m}\) be of full column rank and \(\alpha ,\beta >0\) be given constants. Assume that \(A=P+S\) is a positive definite and skew-symmetric splitting with dominant positive definite part. Let \(a,\ b\), and c be defined as in Theorem 3.3 and let \(\alpha ,\beta >0\) satisfy (12) or (13). Then all eigenvalues of the NMSS-preconditioned matrix \(\mathcal{P}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) are located in a circle centered at (1, 0) with radius strictly less than 1.
Proof
Suppose that \(\mu \) and \(\lambda \) are eigenvalues of the NMSS-preconditioned matrix \({{{\mathscr {P}}}}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) and of the NMSS iteration matrix \(\varGamma \), respectively. With respect to the relation between \({{{\mathscr {P}}}}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) and \(\varGamma \), i.e.,
$$\begin{aligned} {{{\mathscr {P}}}}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}=\mathscr {M}^{-1}(\mathscr {M}-\mathscr {N})=I-\varGamma , \end{aligned}$$
we have \( \lambda =1-\mu \). If \(\alpha \) and \(\beta \) satisfy (12) or (13), then \(|\lambda |<1\). Thus, we obtain
$$\begin{aligned} |1-\mu |<1, \end{aligned}$$
and the lemma follows. \(\square \)
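The relation \(\lambda =1-\mu \) also gives a cheap numerical check of the lemma: every eigenvalue \(\mu \) of \(\mathscr {M}^{-1}\mathscr {A}\) should satisfy \(|1-\mu |<1\). A toy verification (the matrix construction and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 12, 5
A = n * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # dominant positive definite part
B = rng.standard_normal((n, m))
script_A = np.block([[A, B], [-B.T, np.zeros((m, m))]])

alpha = beta = 1.0
D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)
P = L + D + U.T
M_mat = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, B],
                        [-B.T, beta * np.eye(m)]])

mu = np.linalg.eigvals(np.linalg.solve(M_mat, script_A))
print(max(abs(1.0 - mu)))   # radius of the eigenvalue cluster around (1, 0)
```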
Theorem 5.2
Under the hypotheses of Lemma 5.1, if \(\mu \) is an eigenvalue of the NMSS-preconditioned matrix \(\mathcal{P}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) and \(u=[x^\mathrm{T},y^\mathrm{T}]^\mathrm{T}\) is its associated eigenvector, then we have:
1. \(\mu \ne 0 \) and \(x\ne 0\).
2. If \(y=0\), then \(x\in \mathrm{null}(B^\mathrm{T})\) and \(\mu \rightarrow 1\) as \(\alpha \rightarrow 0_+\), where \(a,\ b\), and c are defined as in Theorem 3.3.
3. If \(y\ne 0\), then \(\mu \rightarrow 2\) as \(\beta \rightarrow 0_+\).
Proof
Let \(\mu \) be the eigenvalue of the preconditioned matrix \(\mathcal{P}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) and \( \left[ \begin{array}{c} x\\ y \end{array} \right] \) be its associated eigenvector. Therefore,
which is equivalent to
Using (26), the following equations are implied:
We solve for y in (28) and substitute it into (27); then we have
Multiplying both sides of (29) by \(\displaystyle \frac{x^*}{x^*x}\) gives
We use the notations of Theorem 3.3 and rewrite (30) as follows:
By collecting terms in (31), we obtain
Proof of the first part immediately follows from Lemma 3.1. For the second statement, we set \(y=0\), then (27) and (28) become
and
Equation (34) implies either \(\mu =2\) or \(B^\mathrm{T}x=0\). For \(\mu =2\), (33) becomes
We multiply (35) from the left by \(\displaystyle \frac{x^*}{x^*x}\) and obtain
This contradicts the positive definiteness of A and the positivity of \(\alpha \); hence \(\mu \ne 2\), and consequently \(B^\mathrm{T}x=0\), i.e., \(x\in \mathrm{null}(B^\mathrm{T})\). From (31), we derive
If \(\alpha \rightarrow 0_+\), then \( \mu =\displaystyle 1+\frac{ib}{a}\). Also, since A has a dominant positive definite part, we have \(|a|\gg |b|\) for all \(x\in {\mathbb {C}}^n\); hence \(\mu \) tends to 1. To prove the third part, since \(y\ne 0\), we conclude that \(B^\mathrm{T}x\ne 0\) and therefore \(c>0\). We solve the quadratic equation (32); its roots are as follows:
Now, if \(\beta \rightarrow 0_+\), so \(\mu \rightarrow 2\), which completes the proof. \(\square \)
Now, we study the eigenvalue distribution of the NMSS-preconditioned matrix \({{{\mathscr {P}}}}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) in the singular case. The proofs of the following lemma and theorem are similar to those of the nonsingular case, so we state them without proof.
Lemma 5.3
Let \(A\in {\mathbb {R}}^{n\times n}\) be positive definite, \(B\in {\mathbb {R}}^{n\times m}\) with \(\mathrm{rank}(B)=r<m< n\), and \(\alpha ,\beta >0\) be given constants. Assume that \(A=P+S\) is a positive definite and skew-symmetric splitting of A with dominant positive definite part. Let \({\tilde{a}},\tilde{ b}\), and \({\tilde{c}}\) be defined as in Theorem 4.2 and let \(\alpha ,\beta >0\) satisfy (24) or (25). Then all eigenvalues of the NMSS-preconditioned matrix \(\mathcal{P}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) are located in a circle centered at \((1,\,0)\) with radius 1.
Theorem 5.4
Under the hypotheses of Lemma 5.3, if \(\mu \ne 0\) is an eigenvalue of the NMSS-preconditioned matrix \(\mathcal{P}_\mathrm{NMSS}^{-1}{{\mathscr {A}}}\) and \(u=[x^\mathrm{T},y^\mathrm{T}]^\mathrm{T}\) is its associated eigenvector, then we have:
1. \(x\ne 0\).
2. If \(y=0\), then \(x\in \mathrm{null}(B^\mathrm{T})\) and \(\mu \rightarrow 1\) as \(\alpha \rightarrow 0_+\), where \({\tilde{a}},\ {\tilde{b}}\), and \({\tilde{c}}\) are defined as in Theorem 4.2.
3. If \(y\ne 0\), then \(\mu \rightarrow 2\) as \(\beta \rightarrow 0_+\).
6 Numerical results
In this section, we present two examples to illustrate the effectiveness of the NMSS preconditioner for saddle point problem (1) arising from a model Stokes problem. We use left preconditioning with GMRES as the Krylov subspace method. We compare the elapsed CPU time in seconds (CPU) and the number of iterations (IT) of the NMSS preconditioner with GMRES without preconditioning and with GMRES using the GMSS preconditioner [17] and the Uzawa-HSS and PU-STS preconditioners [19, 24, 25]. In these examples, all optimal parameters are determined experimentally, based on the least number of iterations of the method. We choose the right-hand side vector b so that \(\mathcal{U}=(1,\ldots ,1)^\mathrm{T}\) is the exact solution of (1). We run the examples with the zero initial vector and terminate when \(ERR=\Vert b-\mathcal{A}\mathcal{U}^{(k)}\Vert _2/\Vert b\Vert _2\le 10^{-9}\) is satisfied. All examples are performed in Matlab on a computer with an Intel Core i7 CPU (2.0 GHz) and 8 GB memory.
Example 6.1
We consider the following nonsingular saddle point problem:
where
Here, \(\otimes \) denotes the Kronecker product and \(h=\displaystyle \frac{1}{p+1}\) is the discretization mesh size. We choose w such that the (1, 1)-block A in (1) has a dominant positive definite part. This feature decreases the number of iterations of the GMRES method when NMSS is used as its preconditioner. Saddle point problem (1) with the matrices given in (37) has been studied in [19]. As for the matrix Q in the Uzawa-HSS and PU-STS methods, we choose \(Q = B^\mathrm{T} ({\text {diag}} (A ))^{-1} B\).
For this example, we have \(n = 2p^2\) and \(m = p^2\). Hence, the total number of variables is \(m+n = 3p^2\). We test three values of \(\nu \), namely \(\nu =1,\ 0.1\ {\text {and}}\ 0.01\). For each \(\nu \), four different values of p are used, i.e., \(p = 16,\ 24,\ 32,\ 64\). In Tables 1, 2 and 3, we list numerical results on different uniform grids with \(\nu = 1\), \(\nu = 0.1\) and \(\nu = 0.01\), respectively. In these tables, No Pr. denotes the GMRES method without preconditioning. \({{{\mathscr {P}}}}_\mathrm{GMSS}\), \({{{\mathscr {P}}}}_{Uzawa-HSS}\) and \({{{\mathscr {P}}}}_{PU-STS}\), respectively, denote the GMRES method with left GMSS, left Uzawa-HSS and left PU-STS preconditioning.
From Tables 1, 2 and 3, we observe that GMRES without preconditioning is very slow and that the new NMSS preconditioner is faster than the GMSS preconditioner. The numerical results also show that the number of iterations of the new method is much smaller than those of the Uzawa-HSS and PU-STS methods when they are used as preconditioners for the GMRES method.
Figure 1 shows the eigenvalue distribution of the matrix \(\mathcal{A}\) and of the NMSS, GMSS, MSS, Uzawa-HSS and PU-STS preconditioned matrices, respectively. We can see that for the NMSS, GMSS and MSS preconditioned matrices, the eigenvalues are well clustered around (1, 0) and (2, 0); in particular, most of the eigenvalues of the NMSS-preconditioned matrix are clustered near (1, 0).
Example 6.2
We consider the following singular saddle point problem:
where
where \(\otimes \) is the Kronecker product and \(h=\displaystyle \frac{1}{p+1}\) is the discretization mesh size. We choose w such that the (1, 1)-block A in (1) has a dominant positive definite part for \(\nu =1,\ 0.1\ \text {and}\ 0.01\). This decreases the number of iterations of the GMRES method when NMSS is used as its preconditioner. Saddle point problem (1) with the matrices given in (38) has been studied in [19].
For this example, we have \(n = 2p^2\) and \(m = p^2+2\). As in Example 6.1, we test three values of \(\nu \), i.e., \(\nu =1,\ 0.1\ \mathrm{{and}}\ 0.01\), and for each \(\nu \), four different values of p are used, i.e., \(p = 16,\ 24,\ 32,\ 64\). In Tables 4, 5 and 6, we list numerical results on different uniform grids with \(\nu = 1\), \(\nu = 0.1\) and \(\nu = 0.01\), respectively.
According to these tables, in which we compare the NMSS preconditioner with the GMSS and MSS preconditioners, we can see that for the singular system, GMRES without preconditioning is also very slow and the new NMSS preconditioner is faster than the GMSS and MSS preconditioners.
Figure 2 gives the eigenvalue distribution of the matrix \({{{\mathscr {A}}}}\) and of the NMSS, GMSS and MSS preconditioned matrices, respectively. This figure shows that, except for zero, the eigenvalues are clustered near (1, 0) and (2, 0) as in the nonsingular case. With the parameters chosen in this example, the eigenvalues of the NMSS-preconditioned matrix are more strongly clustered than those of the other preconditioned matrices.
7 Conclusion
In this work, we presented a new preconditioner based on the positive definite and skew-symmetric splitting of the (1, 1)-block A of the saddle point problem (1). The convergence and semi-convergence of the NMSS method for solving nonsingular and singular saddle point problems were investigated, respectively. The numerical results show that if the (1, 1)-block A in saddle point problem (1) has a dominant positive definite part, then the NMSS preconditioner performs better than the GMSS, MSS, Uzawa-HSS and PU-STS preconditioners. However, if the (1, 1)-block A has no dominant positive definite part, we should not expect such good results. Moreover, this new preconditioner can be used when the (1, 1)-block A is symmetric or non-symmetric, while for GMSS and MSS, A must be non-symmetric.
References
Bai, Z.-Z.: Structured preconditioners for nonsingular matrices of block two-by-two structures. Math. Comput. 75, 791–815 (2006)
Bai, Z.-Z.; Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.-Z.; Golub, G.H.; Lu, L.-Z.; Yi, J.-F.: Block triangular and skew-Hermitian splitting methods for positive definite linear systems. SIAM J. Sci. Comput. 26, 844–863 (2005)
Bai, Z.-Z.; Parlett, B.N.; Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)
Bai, Z.-Z.; Yin, J.-F.; Su, Y.-F.: A shift-splitting preconditioner for non-Hermitian positive definite matrices. J. Comput. Math. 24, 539–552 (2006)
Bai, Z.-Z.; Zhang, S.-L.: A regularized conjugate gradient method for symmetric positive definite system of linear equations. J. Comput. Math. 20, 437–448 (2002)
Benzi, M.; Golub, G.H.; Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Berman, A.; Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
Cao, Y.: A general class of shift-splitting preconditioners for non-Hermitian saddle point problems with applications to time-harmonic eddy current models. Comput. Math. Appl. 77, 1124–1143 (2019)
Cao, Y.; Du, J.; Niu, Q.: Shift-splitting preconditioners for saddle point problems. J. Comput. Appl. Math. 272, 239–250 (2014)
Cao, Y.; Miao, S.-X.: On semi-convergence of the generalized shift-splitting iteration method for singular non-symmetric saddle point problems. Comput. Math. Appl. 71, 1503–1511 (2016)
Cao, Y.; Miao, S.-X.; Ren, Z.-R.: On preconditioned generalized shift-splitting iteration methods for saddle point problems. Comput. Math. Appl. 74, 859–872 (2017)
Cao, Y.; Li, S.; Yao, L.-Q.: A class of generalized shift-splitting preconditioners for non-symmetric saddle point problems. Appl. Math. Lett. 49, 20–27 (2015)
Dou, Q.-Y.; Yin, J.-F.; Liao, Z.-Y.: A fast shift-splitting iteration method for non-symmetric saddle point problems. East Asian J. Appl. Math. 7, 172–191 (2017)
Elman, H.C.: Preconditioning for the steady-state Navier-Stokes equations with low viscosity. SIAM J. Sci. Comput. 20, 1299–1316 (1999)
Elman, H.C.; Golub, G.H.: Inexact and preconditioned Uzawa algorithms for saddle point problems. SIAM J. Numer. Anal. 31, 1645–1661 (1994)
Huang, Z.-G.; Wang, L.-G.; Xu, Z.; Cui, J.-J.: The generalized modified shift-splitting preconditioners for non-symmetric saddle point problems. Appl. Math. Comput. 299, 95–118 (2017)
Jiang, M.-Q.; Cao, Y.: On local Hermitian and skew-Hermitian splitting iteration methods for generalized saddle point problems. J. Comput. Appl. Math. 231, 973–982 (2009)
Liang, Z.-Z.; Zhang, G.-F.: PU-STS method for non-Hermitian saddle point problems. Appl. Math. Lett. 46, 1–6 (2015)
Miller, J.H.: On the location of zeros of certain classes of polynomials with applications to numerical analysis. IMA J. Appl. Math. 8, 397–406 (1971)
Pour, H.N.; Goughery, H.S.: Generalized accelerated Hermitian and skew-Hermitian splitting methods for saddle point problems. Numer. Math. Theor. Meth. Appl. 10, 167–185 (2017)
Ren, Z.-R.; Cao, Y.; Niu, Q.: Spectral analysis of the generalized shift-splitting preconditioned saddle point problem. J. Comput. Appl. Math. 311, 539–550 (2017)
Shen, Q.-Q.; Shi, Q.: Generalized shift-splitting preconditioners for nonsingular and singular generalized saddle point problems. Comput. Math. Appl. 72, 632–641 (2016)
Yang, A.-L.; Li, X.; Wu, Y.-J.: On semi-convergence of the Uzawa-HSS method for singular saddle point problems. Appl. Math. Comput. 252, 88–98 (2015)
Yang, A.-L.; Wu, Y.-J.: The Uzawa-HSS method for saddle point problems. Appl. Math. Lett. 38, 38–42 (2014)
Zhou, S.-W.; Yang, A.-L.; Dou, Y.; Wu, Y.-J.: The modified shift-splitting preconditioners for non-symmetric saddle point problems. Appl. Math. Lett. 59, 109–114 (2016)
Ardeshiry, M., Goughery, H.S. & Pour, H.N. New modified shift-splitting preconditioners for non-symmetric saddle point problems. Arab. J. Math. 9, 245–257 (2020). https://doi.org/10.1007/s40065-019-0256-6