New modified shift-splitting preconditioners for non-symmetric saddle point problems

Zhou et al. and Huang et al. have proposed the modified shift-splitting (MSS) preconditioner and the generalized modified shift-splitting (GMSS) preconditioner, respectively, for non-symmetric saddle point problems. Both are based on a symmetric positive definite and skew-symmetric splitting of the (1, 1)-block of the saddle point matrix. In this paper, we instead use a positive definite and skew-symmetric splitting and present a new modified shift-splitting (NMSS) method for solving large sparse linear systems in saddle point form whose (1, 1)-block has a dominant positive definite part. We investigate the convergence and semi-convergence properties of this method for nonsingular and singular saddle point problems, and we also use the NMSS method as a preconditioner for the GMRES method. The numerical results show that if the (1, 1)-block has a dominant positive definite part, the NMSS-preconditioned GMRES method performs better than other preconditioned GMRES methods such as GMSS, MSS, Uzawa-HSS and PU-STS. Moreover, the NMSS preconditioner is applicable to non-symmetric saddle point problems with either symmetric or non-symmetric (1, 1)-blocks.


Introduction
Consider the following non-symmetric saddle point linear system (1), where A ∈ R^{n×n} is positive definite (symmetric or non-symmetric), B ∈ R^{n×m} (m ≤ n) is a rectangular matrix of rank r ≤ m, and f ∈ R^n and g ∈ R^m are given vectors. In general, the matrices A and B in the coefficient matrix 𝒜 of (1) are large and sparse. System (1) arises in a variety of scientific and engineering applications, such as computational fluid dynamics, constrained optimization, and mixed or hybrid finite element approximations of second-order elliptic problems; see [1,7,15].
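The display equation defining system (1) did not survive extraction. Based on the block dimensions given here and the MSS/GMSS literature the paper builds on, it presumably has the standard form below; the sign convention on the second block row and on g is an assumption:

```latex
\mathcal{A}u \equiv
\begin{bmatrix} A & B \\ -B^{T} & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} f \\ -g \end{bmatrix}
\equiv b,
\qquad A \in \mathbb{R}^{n \times n},\quad B \in \mathbb{R}^{n \times m}.
\tag{1}
```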
In recent years, many studies have focused on solving large linear systems in saddle point form. Iterative methods are used for solving saddle point problems (1) when the matrix blocks A and B are large and sparse. Methods such as Uzawa [7], inexact Uzawa [16] and the Hermitian and skew-Hermitian splitting method [2,18,21] have been presented. These stationary methods use much less memory than Krylov subspace methods, while Krylov subspace methods are typically more efficient. Unfortunately, for saddle point problems (1), Krylov subspace methods converge slowly on their own and require good preconditioners to accelerate convergence.
Various preconditioners based on matrix splittings of the (1, 1)-block A have been proposed. For example, Bai and Zhang [6] proposed a regularized conjugate gradient method for symmetric positive definite systems of linear equations by shifting the coefficient matrix. A shift-splitting preconditioner was presented by Bai et al. [5] for non-Hermitian positive definite systems of linear equations to accelerate the convergence of Krylov subspace methods. Cao et al. applied a shift-splitting preconditioner and a local shift-splitting preconditioner to symmetric saddle point problems and extended them to a generalized shift-splitting preconditioner for non-symmetric saddle point problems [10,13]. Also, Shen et al. used generalized shift-splitting preconditioners for solving nonsingular and singular generalized saddle point problems [23].
Moreover, the semi-convergence of the shift-splitting iteration method and the spectral analysis of the shift-splitting preconditioned saddle point matrix have been studied by Cao et al. [11] and Ren et al. [22], respectively. Cao et al. used the generalized shift-splitting matrix as a preconditioner and analyzed the eigenvalue distribution of the preconditioned saddle point matrix [12]. Zhou et al. [26] and Huang et al. [17] proposed the modified shift-splitting (MSS) and generalized modified shift-splitting (GMSS) preconditioners, respectively, for solving non-Hermitian saddle point problems. They used a symmetric and skew-symmetric splitting of the (1, 1)-block A to construct these preconditioners. In addition, Dou et al. [14] presented fast shift-splitting (FSS) preconditioners for non-symmetric saddle point problems. Recently, a general class of shift-splitting (GCSS) preconditioners has been proposed by Cao [9] for non-Hermitian saddle point problems arising from time-harmonic eddy current problems.
In this paper, we work on saddle point problems (1) in which the (1, 1)-block A has a dominant positive definite part, i.e., we can split A as A = P + S, where P is a positive definite matrix, S is a skew-symmetric matrix, and ‖P‖ ≫ ‖S‖ in some matrix norm; see [3]. We present new modified shift-splitting (NMSS) preconditioners for this type of saddle point problem (1). The convergence of the iteration method produced by these preconditioners is investigated. We apply these preconditioners to both singular and nonsingular saddle point problems (1). Also, we study the eigenvalue distribution of the NMSS-preconditioned matrix. Finally, practical numerical examples are presented to show the effectiveness of the NMSS preconditioners.

New modified shift-splitting method

Assume that A = P + S is the splitting of the (1, 1)-block A of the coefficient matrix 𝒜 in (1), where P is a positive definite matrix and S is a skew-symmetric matrix. In this study, we choose P = L + D + Uᵀ and S = U − Uᵀ as the positive definite and skew-symmetric splitting of A, where D is the diagonal of A, and L and U are its strictly lower and strictly upper triangular parts, respectively. Let M(α, β) and N(α, β) be the corresponding shift-splitting matrices, where α, β > 0 are two constants and I is the identity matrix of appropriate dimension. This splitting yields the following new modified shift-splitting (NMSS) iteration method for the saddle point problem (1).
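The displays defining the splitting matrices were lost. By analogy with the MSS splitting of Zhou et al. (with the symmetric part H replaced by the positive definite part P), the NMSS splitting presumably reads as follows; this reconstruction is an assumption, though it is consistent with the solve step and the convergence analysis later in the paper:

```latex
\mathcal{A} = \mathcal{M}(\alpha,\beta) - \mathcal{N}(\alpha,\beta),
\qquad
\mathcal{M}(\alpha,\beta) = \frac{1}{2}
\begin{bmatrix} \alpha I + 2P & 2B \\ -2B^{T} & \beta I \end{bmatrix},
\quad
\mathcal{N}(\alpha,\beta) = \frac{1}{2}
\begin{bmatrix} \alpha I - 2S & 0 \\ 0 & \beta I \end{bmatrix},
```

so that the stationary iteration takes the fixed-point form

```latex
u^{(k+1)} = \Gamma u^{(k)} + \mathcal{M}(\alpha,\beta)^{-1} b,
\qquad
\Gamma = \mathcal{M}(\alpha,\beta)^{-1}\mathcal{N}(\alpha,\beta).
```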

NMSS iteration method
Given an initial guess u^(0) with (u^(0))ᵀ = ((x^(0))ᵀ, (y^(0))ᵀ), for k = 0, 1, 2, . . . until convergence, compute u^(k) with (u^(k))ᵀ = ((x^(k))ᵀ, (y^(k))ᵀ) as follows. Consequently, the NMSS iteration method can be expressed in fixed-point form. In the NMSS iteration method, or when using M as a preconditioner for Krylov subspace methods, we need to solve a system of linear equations of the form Mz = r. Let rᵀ = (r₁ᵀ, r₂ᵀ) and zᵀ = (z₁ᵀ, z₂ᵀ), where r₁, z₁ ∈ Rⁿ and r₂, z₂ ∈ Rᵐ. An easy computation shows that (6) is equivalent to the reduced equations (7). The approximate solution of the linear system (7) can be obtained by the conjugate gradient method (for symmetric P) or the Lanczos method (for non-symmetric P). In addition, linear system (7) can be solved by direct methods.
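As a concrete illustration of the solve Mz = r, here is a minimal NumPy sketch under the assumption that the NMSS matrix has the MSS-like form M = ½[[αI + 2P, 2B], [−2Bᵀ, βI]] (this form is an assumption, since the display equations were lost). Eliminating z₂ reduces Mz = r to a single n×n system playing the role of (7); for clarity a direct solve is used in place of CG/Lanczos.

```python
import numpy as np

def apply_nmss_preconditioner(P, B, alpha, beta, r1, r2):
    """Solve M z = r with the (assumed) NMSS matrix
       M = 1/2 * [[alpha*I + 2P, 2B], [-2B^T, beta*I]].
    The second block row gives z2 = (2/beta) * (r2 + B^T z1); substituting
    it into the first block row yields one n-by-n system in z1."""
    n = P.shape[0]
    # Reduced coefficient matrix: alpha*I + 2P + (4/beta) * B B^T
    K = alpha * np.eye(n) + 2 * P + (4.0 / beta) * (B @ B.T)
    # Reduced right-hand side: 2 r1 - (4/beta) * B r2
    z1 = np.linalg.solve(K, 2 * r1 - (4.0 / beta) * (B @ r2))
    z2 = (2.0 / beta) * (r2 + B.T @ z1)
    return z1, z2

# usage sketch on random data (P positive definite by diagonal dominance)
rng = np.random.default_rng(1)
n, m = 6, 3
P = 3 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)
alpha, beta = 0.5, 0.7
z1, z2 = apply_nmss_preconditioner(P, B, alpha, beta, r1, r2)
```

In the paper the reduced system is solved by CG (symmetric P) or Lanczos (non-symmetric P); the direct solve above is only for exposition.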

Convergence of NMSS iteration method
In this section, we investigate the convergence behavior of the NMSS method when the saddle point system (1) is nonsingular. As is well known, the NMSS method is convergent if and only if ρ(Γ) < 1. Assume that λ is an eigenvalue of the iteration matrix Γ of the NMSS method and u = [xᵀ, yᵀ]ᵀ is a corresponding eigenvector; then we have (8), which is equivalent to (9). We can write (9) as (10)–(11). If λ = 1 is substituted in (8), we obtain 𝒜u = 0, which contradicts the nonsingularity of 𝒜. Also, suppose that x = 0. We conclude from (11) and λ ≠ 1 that y = 0. But this is impossible; hence x ≠ 0, and the next lemma immediately follows.
2. If c ≠ 0, then condition (13) holds.

Proof By Lemma 3.1, we know that λ ≠ 1. Moreover, from (11) we obtain the expression (14) for y. Substituting (14) into (10) yields (15), and hence (16). Let a, b and c be as defined above; then (16) simplifies to (17). Thus, |λ| < 1 if and only if the stated conditions hold. If c ≠ 0 (i.e., Bᵀx ≠ 0), then by arranging (17) in terms of λ we obtain the quadratic equation (19). Dividing (19) by (αβ + 2βa + c) ≠ 0 gives (20), where φ and ψ denote the resulting coefficients. By Lemma 3.2, a necessary and sufficient condition for the roots of equation (20) to satisfy |λ| < 1 is |φ − φ̄ψ| + |ψ|² < 1. Some computation shows that this condition is equivalent to (21). Thus, if condition (21) holds, then the NMSS iteration method is convergent.

Remark 3.4 In Theorem 3.3, if the (1, 1)-block A has a dominant positive definite part, then |a| ≫ |b|. We conclude that (12) holds for all α > 0. On the other hand, there is no restriction on β except positivity. Therefore, in this case, the iteration method is convergent unconditionally.
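The unconditional-convergence claim of Remark 3.4 can be checked numerically. The sketch below builds a random saddle point problem whose (1, 1)-block has a strongly dominant positive definite part, forms the NMSS splitting matrices M and N (their explicit form is an assumption reconstructed by analogy with MSS, as noted earlier), and verifies ρ(Γ) < 1 for Γ = M⁻¹N.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 8

# (1,1)-block A = P + S with ||P|| >> ||S||: strongly positive definite P,
# small skew-symmetric S (the dominance assumption of Remark 3.4).
G = rng.standard_normal((n, n))
P = 5 * np.eye(n) + 0.2 * (G + G.T) / 2
W = rng.standard_normal((n, n))
S = 0.01 * (W - W.T)
B = rng.standard_normal((n, m))          # full column rank (generic)

alpha, beta = 0.1, 0.1
# Assumed NMSS splitting matrices (MSS form with H replaced by P).
M = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, 2 * B],
                    [-2 * B.T, beta * np.eye(m)]])
N = 0.5 * np.block([[alpha * np.eye(n) - 2 * S, np.zeros((n, m))],
                    [np.zeros((m, n)), beta * np.eye(m)]])

Gamma = np.linalg.solve(M, N)            # iteration matrix Γ = M^{-1} N
rho = max(abs(np.linalg.eigvals(Gamma)))
```

For this splitting one can also verify directly that M − N reproduces the saddle point matrix, i.e., that it is a genuine splitting of 𝒜.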

Semi-convergence of the NMSS iteration method for singular saddle point problems
Let B in (1) be rank deficient, i.e., rank(B) = r < m. Since B is rank deficient, the saddle point matrix 𝒜 is singular, and we study the semi-convergence of the NMSS iteration method for solving the singular saddle point problem (1). According to [8], the iteration scheme (9) is semi-convergent if and only if the following two conditions are met.
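The two conditions, whose display was lost, are the standard semi-convergence conditions for a stationary iteration u^{(k+1)} = Γu^{(k)} + c with a singular coefficient matrix (stated here in the usual form; the notation is a reconstruction):

```latex
\text{(i)}\quad \operatorname{ind}(I - \Gamma) = 1,
\qquad
\text{(ii)}\quad \vartheta(\Gamma) := \max\{\,|\lambda| : \lambda \in \sigma(\Gamma),\ \lambda \neq 1\,\} < 1,
```

i.e., the eigenvalue 1 of Γ is semisimple and all other eigenvalues of Γ lie strictly inside the unit circle.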
For the first condition of semi-convergence, the following theorem is presented. It can be proved in the same way as Theorem 4.1 in [14].
In what follows, the second condition of semi-convergence is studied.
Consider the singular value decomposition of B, with U ∈ Rⁿˣⁿ and V ∈ Rᵐˣᵐ orthogonal matrices and σᵢ (i = 1, . . . , r) the nonzero singular values of B. We define P̂ and consider the corresponding block diagonal orthogonal matrix; hence, Γ has the same spectrum as Γ̂. Now we transform Γ̂ by a similarity into a new form whose eigenvalues can be clustered. The matrix Γ̂ can be viewed as the iteration matrix of the NMSS iteration method applied to the reduced system (23). Because Ã = U A Uᵀ is positive definite and B_r has full rank, system (23) is nonsingular. Let ũ = [x̃ᵀ, ỹᵀ]ᵀ be an eigenvector of Γ̂; then the relations in Theorem 3.3 can be expressed for the new nonsingular system (23) with iteration matrix Γ̂. Then, under the above conditions, the spectral radius of the iteration matrix of the reduced nonsingular system is less than 1 and the second condition of semi-convergence is satisfied. These results are summarized in the following theorem.
2. If c̃ ≠ 0, then the analogous condition holds. Using Theorems 4.1 and 4.2, we conclude the semi-convergence of the NMSS iteration method for the singular saddle point problem (1).

Preconditioning properties
In the preceding sections, we studied the convergence and semi-convergence of the NMSS method as an iteration method. However, as with other shift-splitting methods, we do not expect fast convergence of the NMSS method in actual implementations. Therefore, we focus on the preconditioner generated by this method, i.e., P_NMSS. We use this preconditioner to accelerate the convergence of the GMRES method as a Krylov subspace method. We also study the eigenvalue distribution of the preconditioned matrix P_NMSS^{-1} 𝒜.

Lemma 5.1 Let A ∈ Rⁿˣⁿ be positive definite, B ∈ Rⁿˣᵐ be of full column rank, and α, β > 0 be given constants. Assume that A = P + S is a dominant positive definite and skew-symmetric splitting. Let a, b, and c be defined as in Theorem 3.3 and let α, β > 0 satisfy (12) or (13). Then all eigenvalues of the NMSS-preconditioned matrix P_NMSS^{-1} 𝒜 are located in a circle centered at (1, 0) with radius strictly less than 1.

Theorem 5.2 Under the hypotheses of Lemma 5.1, let μ be an eigenvalue of the NMSS-preconditioned matrix P_NMSS^{-1} 𝒜 and u = [xᵀ, yᵀ]ᵀ its associated eigenvector. Then:
1. μ ≠ 0 and x ≠ 0.
2. If y = 0, then x ∈ null(Bᵀ) and μ → 1 as α → 0⁺, where a, b, and c are defined as in Theorem 3.3.
3. If y ≠ 0, then μ → 2 as β → 0⁺.

Proof Let μ be an eigenvalue of the preconditioned matrix P_NMSS^{-1} 𝒜 and [xᵀ, yᵀ]ᵀ its associated eigenvector. Therefore (26) holds. Using (26), equations (27) and (28) follow. Solving (28) for y and substituting into (27) gives (29). Multiplying both sides of (29) by x*/(x*x) gives (30). Using the notation of Theorem 3.3, we rewrite (30) as (31). Collecting terms in (31), we obtain (32). The first part follows immediately from Lemma 3.1. For the second statement, we set y = 0; then (27) and (28) become (33) and (34), and (34) implies either μ = 2 or Bᵀx = 0. If μ = 2, then (33) becomes (35). Multiplying (35) from the left by x*/(x*x), we obtain (36), which contradicts the positive definiteness of A and the positivity of α. Hence μ ≠ 2, which implies Bᵀx = 0, i.e., x ∈ null(Bᵀ). From (31), we derive an expression for μ; if α → 0⁺, then μ = 1 + i b/a. Since A has a dominant positive definite part, |a| ≫ |b| for all x ∈ Cⁿ, so μ tends to 1. To prove the third part, since y ≠ 0 we conclude that Bᵀx ≠ 0, hence c > 0. Solving the quadratic equation (32) gives its roots, and letting β → 0⁺ yields μ → 2, which completes the proof.

Now, we study the eigenvalue distribution of the NMSS-preconditioned matrix P_NMSS^{-1} 𝒜 in the singular case. We give the following lemma and theorem; their proofs are similar to the nonsingular case and are therefore omitted. In particular, the analogous third statement again holds: if y ≠ 0, then μ → 2 as β → 0⁺.
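Lemma 5.1 can be illustrated numerically. Under the assumed form of P_NMSS (the splitting matrix M reconstructed earlier), P_NMSS^{-1} 𝒜 = I − M⁻¹N, so each eigenvalue μ satisfies |μ − 1| = |λ| for an eigenvalue λ of Γ, and the whole spectrum lies in the circle centered at (1, 0) with radius ρ(Γ) < 1. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 8
G = rng.standard_normal((n, n))
P = 5 * np.eye(n) + 0.2 * (G + G.T) / 2   # dominant positive definite part
W = rng.standard_normal((n, n))
S = 0.01 * (W - W.T)                      # small skew-symmetric part
B = rng.standard_normal((n, m))
Acal = np.block([[P + S, B], [-B.T, np.zeros((m, m))]])

alpha, beta = 0.1, 0.1
# Assumed NMSS preconditioner P_NMSS = M(alpha, beta).
P_nmss = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, 2 * B],
                         [-2 * B.T, beta * np.eye(m)]])

# eigenvalues of the preconditioned matrix and their distance to (1, 0)
mu = np.linalg.eigvals(np.linalg.solve(P_nmss, Acal))
radius = max(abs(mu - 1.0))
```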

Numerical results
In this section, we present two examples to illustrate the effectiveness of the NMSS preconditioner for saddle point problems (1) arising from a model Stokes problem. We use left preconditioning with GMRES as the Krylov subspace method. We compare the elapsed CPU time in seconds (CPU) and the number of iterations (IT) of the NMSS preconditioner with GMRES without preconditioning and with GMRES preconditioned by the GMSS [17], Uzawa-HSS and PU-STS preconditioners [19,24,25]. In these examples, all of the optimal parameters are determined experimentally, chosen to minimize the number of iterations. We choose the right-hand side vector b so that u = (1, . . . , 1)ᵀ is the exact solution of (1). All runs start from the zero initial vector and are terminated when ERR = ‖b − 𝒜u^(k)‖₂ / ‖b‖₂ ≤ 10⁻⁹ is satisfied. All experiments are performed in Matlab on a computer with an Intel Core i7 CPU (2.0 GHz) and 8 GB memory.

Example 6.1 We consider the nonsingular saddle point problem (1) with the matrices given in (37), where ⊗ is the Kronecker product and h = 1/(p + 1) is the discretization mesh size. We choose w such that the (1, 1)-block A in (1) has a dominant positive definite part; this feature decreases the number of iterations of the GMRES method when NMSS is used as its preconditioner. The saddle point problem (1) with the matrices given in (37) has been studied in [19]. The matrix Q in the Uzawa-HSS and PU-STS methods is chosen accordingly. For this example, we have n = 2p² and m = p², so the total number of variables is m + n = 3p². We test three values of ν, namely ν = 1, 0.1 and 0.01; for each ν, four values of p are used: p = 16, 24, 32, 64. In Tables 1, 2 and 3, we list numerical results on different uniform grids with ν = 1, ν = 0.1 and ν = 0.01, respectively. In these tables, "No Pr." denotes the GMRES method without preconditioning. From Tables 1, 2 and 3, we observe that GMRES without preconditioning is very slow and that the new NMSS preconditioner is faster than the GMSS preconditioner.
Numerical results show that the number of iterations of the new method is much smaller than that of the Uzawa-HSS and PU-STS methods when they are used as preconditioners for the GMRES method. Figure 1 shows the eigenvalue distributions of the matrix 𝒜 and of the NMSS-, GMSS-, MSS-, Uzawa-HSS- and PU-STS-preconditioned matrices, respectively. For the NMSS-, GMSS- and MSS-preconditioned matrices, the eigenvalues are well clustered around (1, 0) and (2, 0); in particular, most of the eigenvalues of the NMSS-preconditioned matrix are clustered near (1, 0).
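The experimental setup can be sketched with SciPy's GMRES on a small random stand-in problem (the Stokes matrices (37) were not recoverable from the text, so the matrices below are illustrative, not the paper's): the NMSS preconditioner, in its assumed splitting form, is applied through a LinearOperator, mirroring the preconditioned runs reported in the tables.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 40, 10
# Illustrative saddle point system with a dominant positive definite part.
G = rng.standard_normal((n, n))
P = 5 * np.eye(n) + 0.2 * (G + G.T) / 2
W = rng.standard_normal((n, n))
S = 0.01 * (W - W.T)
B = rng.standard_normal((n, m))
Acal = np.block([[P + S, B], [-B.T, np.zeros((m, m))]])
b = Acal @ np.ones(n + m)                 # exact solution u = (1, ..., 1)^T

alpha, beta = 0.1, 0.1
# Assumed NMSS preconditioner matrix M(alpha, beta); dense solve for clarity.
M = 0.5 * np.block([[alpha * np.eye(n) + 2 * P, 2 * B],
                    [-2 * B.T, beta * np.eye(m)]])
M_inv = LinearOperator((n + m, n + m), matvec=lambda r: np.linalg.solve(M, r))

u, info = gmres(Acal, b, M=M_inv)
rel_res = np.linalg.norm(b - Acal @ u) / np.linalg.norm(b)
```

In practice the action of M⁻¹ would be applied via the reduced system (7) rather than a dense factorization.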

Example 6.2
We consider the singular saddle point problem (1) with the matrices given in (38), where ⊗ is the Kronecker product and h = 1/(p + 1) is the discretization mesh size. We choose w such that the (1, 1)-block A in (1) has a dominant positive definite part, which decreases the number of iterations of the GMRES method when NMSS is used as its preconditioner. The saddle point problem (1) with the matrices given in (38) has been studied in [19].
For this example, we have n = 2p² and m = p² + 2. As in Example 6.1, we test three values of ν, namely ν = 1, 0.1 and 0.01, and for each ν, four values of p are used: p = 16, 24, 32, 64. In Tables 4, 5 and 6, we list numerical results on different uniform grids with ν = 1, ν = 0.1 and ν = 0.01, respectively. According to these tables, we compare the NMSS preconditioner with the GMSS and MSS preconditioners. From Tables 4, 5 and 6, we can see that for the singular system, GMRES without preconditioning is also very slow and the new NMSS preconditioner is faster than the GMSS and MSS preconditioners. Figure 2 gives the eigenvalue distributions of the matrix 𝒜 and of the NMSS-, GMSS- and MSS-preconditioned matrices, respectively. This figure shows that, except for zero, the eigenvalues are clustered near (1, 0) and (2, 0), as in the nonsingular case. For the parameters chosen in this example, the eigenvalues of the NMSS-preconditioned matrix are more strongly clustered than those of the other preconditioned matrices.

Conclusion
In this work, we have presented a new preconditioner based on the positive definite and skew-symmetric splitting of the (1, 1)-block A of the saddle point problem (1). The convergence and semi-convergence of the NMSS method for solving nonsingular and singular saddle point problems, respectively, have been investigated. The numerical results show that if the (1, 1)-block A in the saddle point problem (1) has a dominant positive definite part, then the NMSS preconditioner performs better than the GMSS, MSS, Uzawa-HSS and PU-STS preconditioners. However, if the (1, 1)-block A has no dominant positive definite part, good results should not be expected. Moreover, this new preconditioner can be used when the (1, 1)-block A is either symmetric or non-symmetric, while for GMSS and MSS, A must be non-symmetric.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.