Introduction and preliminaries

Consider the generalized Sylvester matrix equation

$$ AV+ BW= EVF+C, $$
(1.1)

where A, E ∈ Rm × p, B ∈ Rm × q, F ∈ Rn × n, and C ∈ Rm × n, while V ∈ Rp × n and W ∈ Rq × n are the matrices to be determined. A real matrix P ∈ Rn × n is called a generalized reflection matrix if PT = P and P2 = I. An n × n matrix A is said to be a reflexive matrix with respect to the generalized reflection matrix P if A = PAP; for more details, see [1, 2]. The symbol A ⊗ B stands for the Kronecker product of the matrices A and B. The vectorization of an m × n matrix A, denoted by vec(A), is the mn × 1 column vector obtained by stacking the columns of A on top of one another: \( vec(A)={\left({a}_1^T\kern0.5em {a}_2^T\dots {a}_n^T\right)}^T \). We use tr(A) and AT to denote the trace and the transpose of the matrix A, respectively. In addition, we define the inner product of two matrices A, B as 〈A, B〉 = tr(BTA). The matrix norm of A induced by this inner product is the Frobenius norm, denoted by ‖A‖, so that 〈A, A〉 = ‖A‖2.
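These conventions can be checked numerically. The following NumPy sketch (all variable names are ours, for illustration only) verifies the column-stacking order of vec, the trace inner product, and the standard identity vec(AXB) = (BT ⊗ A)vec(X), which underlies the Kronecker formulation used later in the convergence analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# vec(A): stack the columns of A on top of one another (column-major order).
vec = lambda M: M.flatten(order="F")

# <A, B> = tr(B^T A); the induced norm is the Frobenius norm.
inner = np.trace(B.T @ A)
assert np.isclose(inner, np.sum(A * B))
assert np.isclose(np.trace(A.T @ A), np.linalg.norm(A, "fro") ** 2)

# Kronecker identity behind the vectorized form of the matrix equation:
# vec(A X B) = (B^T kron A) vec(X).
X = rng.standard_normal((4, 5))
Bm = rng.standard_normal((5, 2))
assert np.allclose(vec(A @ X @ Bm), np.kron(Bm.T, A) @ vec(X))
```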

The reflexive matrices with respect to a generalized reflection matrix P ∈ Rn × n have many special properties and are widely used in engineering and scientific computations [2, 3]. Several authors have studied the reflexive solutions of different forms of linear matrix equations; see, for example, [4,5,6,7]. Ramadan et al. [8] considered explicit and iterative methods for solving the generalized Sylvester matrix equation. Dehghan and Hajarian [9] constructed an iterative algorithm to solve the generalized coupled Sylvester matrix equations (AY − ZB, CY − ZD) = (E, F) over reflexive matrices. Also, Dehghan and Hajarian [10] proposed three iterative algorithms for solving the linear matrix equation A1X1B1 + A2X2B2 = C over reflexive (anti-reflexive) matrices. Yin et al. [11] presented an iterative algorithm to solve the general coupled matrix equations \( \sum \limits_{j=1}^q{A}_{ij}{X}_j{B}_{ij}={M}_i\left(i=1,2,\cdots, p\right) \) and their optimal approximation problem over generalized reflexive matrices. Li [12] presented an iterative algorithm for obtaining the generalized (P, Q)-reflexive solution of the quaternion matrix equation \( \sum \limits_{l=1}^u{A}_l{XB}_l+\sum \limits_{s=1}^v{C}_s\tilde{X}{D}_s=F \). In [13], Dong and Wang presented necessary and sufficient conditions for the existence of the {P, Q, k + 1}-reflexive (anti-reflexive) solution to the system of matrix equations AX = C, XB = D. In [14], Nacevska found necessary and sufficient conditions for the generalized reflexive and anti-reflexive solution of the system of equations ax = b and xc = d in a ring with involution. Moreover, Hajarian [15] established the matrix form of the biconjugate residual (BCR) algorithm for computing the generalized reflexive (anti-reflexive) solutions of the generalized Sylvester matrix equation \( \sum \limits_{i=1}^s{A}_i{XB}_i+\sum \limits_{j=1}^t{C}_j{YD}_j=M \).
Liu [16] established some conditions for the existence and the representations of the Hermitian reflexive, anti-reflexive, and non-negative definite reflexive solutions to the matrix equation AX = B with respect to a generalized reflection matrix P by using the Moore-Penrose inverse. Dehghan and Shirilord [17] presented a generalized MHSS approach for solving large sparse Sylvester equations with non-Hermitian and complex symmetric positive definite/semi-definite matrices based on the MHSS method. Dehghan and Hajarian [18] proposed two algorithms for solving the generalized coupled Sylvester matrix equations over reflexive and anti-reflexive matrices. Dehghan and Hajarian [19] established two iterative algorithms for solving the system of generalized Sylvester matrix equations over generalized bisymmetric and skew-symmetric matrices. Hajarian and Dehghan [20] established two gradient iterative methods, extending the Jacobi and Gauss–Seidel iterations, for solving the generalized Sylvester-conjugate matrix equation A1XB1 + A2XB2 + C1YD1 + C2YD2 = E over reflexive and Hermitian reflexive matrices. Dehghan and Hajarian [21] proposed two iterative algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of the Sylvester matrix equation AX + XB = C. Hajarian [22] obtained an iterative algorithm for solving the coupled Sylvester-like matrix equations. El-Shazly [23] studied the perturbation estimates of the maximal solution for the matrix equation \( X+{A}^T\sqrt{X^{-1}}A=P \). Khader [24] presented a numerical method for solving the fractional Riccati differential equation (FRDE). Balaji [25] presented a Legendre wavelet operational matrix method for solving the nonlinear fractional-order Riccati differential equation. The generalized Sylvester matrix equation has numerous applications in control theory, signal processing, filtering, model reduction, and decoupling techniques for ordinary and partial differential equations (see [26,27,28,29]).

In this paper, we investigate the reflexive solutions of the generalized Sylvester matrix equation AV + BW = EVF + C. The paper is organized as follows: First, in the “Iterative algorithm for solving AV + BW = EVF + C” section, an iterative algorithm for obtaining reflexive solutions of this problem is derived, and the complexity of the proposed algorithm is presented. In the “Convergence analysis for the proposed algorithm” section, the convergence analysis for the proposed algorithm is given; we also show that the least Frobenius norm reflexive solutions can be obtained when special initial reflexive matrices are chosen. Finally, in the “Numerical examples” section, four numerical examples are considered to demonstrate the performance of the proposed algorithm.

Iterative algorithm for solving AV + BW = EVF + C

In this part, we consider the following problem:

Problem 2.1. For given matrices A, B, E, C ∈ Rm × n, F ∈ Rn × n, and two generalized reflection matrices P, S of size n, find the matrices \( V\in {R}_r^{n\times n}(P) \) and \( W\in {R}_r^{n\times n}(S) \) such that

$$ AV+ BW= EVF+C $$
(2.1)

where the subspace \( {R}_r^{n\times n}(P) \) is defined by \( {R}_r^{n\times n}(P)=\left\{Q\in {R}^{n\times n}:Q= PQP\right\} \), and P is a generalized reflection matrix (P2 = I, PT = P).
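Since reflexivity is central to everything that follows, a small numerical illustration may help. The sketch below (using an assumed signature matrix as P, our choice for illustration) checks the splitting of an arbitrary matrix into its reflexive and anti-reflexive parts; this is exactly the projection used repeatedly in the proofs of the next sections.

```python
import numpy as np

n = 4
# A generalized reflection matrix: symmetric and involutory (P^T = P, P^2 = I).
# A signature matrix is the simplest example.
P = np.diag([1.0, -1.0, 1.0, -1.0])
assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(n))

rng = np.random.default_rng(1)
Q = rng.standard_normal((n, n))

# (Q + PQP)/2 lies in R_r^{n x n}(P); (Q - PQP)/2 is the anti-reflexive part.
Q_ref = (Q + P @ Q @ P) / 2
Q_anti = (Q - P @ Q @ P) / 2
assert np.allclose(Q_ref, P @ Q_ref @ P)       # reflexive part
assert np.allclose(Q_anti, -P @ Q_anti @ P)    # anti-reflexive part
assert np.allclose(Q_ref + Q_anti, Q)          # the split is exact
# The two parts are orthogonal in the trace inner product:
assert abs(np.trace(Q_anti.T @ Q_ref)) < 1e-12
```

The trace-orthogonality of the two parts is what allows the anti-reflexive terms to drop out of the trace computations in the proofs below.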

An iterative algorithm for solving the consistent Problem 2.1

In this subsection, an iterative algorithm is proposed for solving Problem 2.1, assuming that this problem is consistent.

(Algorithm 2.1 is given as a figure in the original publication; its update formulas are restated in the proofs that follow.)
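As a guide for readers, the following NumPy sketch reconstructs the iteration from the update formulas that appear verbatim in the proofs of Theorem 2.1 and Lemma 3.1. This is our reading of the algorithm, not the authors' code; the function names, stopping rule, and the demo problem are ours.

```python
import numpy as np

def reflexive_part(M, J):
    """Project M onto the reflexive subspace {Q : Q = J Q J}."""
    return (M + J @ M @ J) / 2

def solve_reflexive(A, B, E, F, C, P, S, V1, W1, tol=1e-10, maxit=500):
    """Sketch of Algorithm 2.1, reconstructed from the update formulas
    in the proofs of Theorem 2.1 and Lemma 3.1 (names are ours)."""
    V, W = V1.copy(), W1.copy()
    R = C - A @ V - B @ W + E @ V @ F                # residual of AV + BW = EVF + C
    Pk = reflexive_part(A.T @ R - E.T @ R @ F.T, P)  # direction for V
    Qk = reflexive_part(B.T @ R, S)                  # direction for W
    for _ in range(maxit):
        denom = np.linalg.norm(Pk) ** 2 + np.linalg.norm(Qk) ** 2
        if np.linalg.norm(R) < tol or denom == 0.0:
            break
        alpha = np.linalg.norm(R) ** 2 / denom
        V = V + alpha * Pk
        W = W + alpha * Qk
        R_new = C - A @ V - B @ W + E @ V @ F
        beta = np.linalg.norm(R_new) ** 2 / np.linalg.norm(R) ** 2
        Pk = reflexive_part(A.T @ R_new - E.T @ R_new @ F.T, P) + beta * Pk
        Qk = reflexive_part(B.T @ R_new, S) + beta * Qk
        R = R_new
    return V, W, np.linalg.norm(R)

# Demo on a small consistent problem (random data, for illustration only).
rng = np.random.default_rng(2)
m, n = 4, 3
P = np.diag([1.0, -1.0, 1.0])                 # generalized reflection matrices
S = np.diag([-1.0, 1.0, 1.0])
A, B, E = (rng.standard_normal((m, n)) for _ in range(3))
F = rng.standard_normal((n, n))
V_true = reflexive_part(rng.standard_normal((n, n)), P)
W_true = reflexive_part(rng.standard_normal((n, n)), S)
C = A @ V_true + B @ W_true - E @ V_true @ F  # consistent by construction
V, W, res = solve_reflexive(A, B, E, F, C, P, S,
                            np.zeros((n, n)), np.zeros((n, n)))
```

Note that the directions Pk and Qk are built from reflexive projections, so the iterates stay reflexive whenever the initial guesses are; this is the content of Theorem 2.1 below.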

In the next theorem, we prove that the iterates {Vk + 1} and {Wk + 1} generated by Algorithm 2.1 remain reflexive, so that they yield reflexive solutions of the matrix Eq. (2.1).

Theorem 2.1 The iterates {Vk + 1} and {Wk + 1} generated by Algorithm 2.1 are reflexive with respect to the generalized reflection matrices P and S, respectively; hence, on convergence they yield reflexive solutions of the matrix Eq. (2.1).

Proof We prove this theorem by induction as follows:

For k = 1,

$$ {\displaystyle \begin{array}{l}{PV}_2P={PV}_1P+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{PP}_1P\\ {}={V}_1+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\frac{1}{2}\left[{PA}^T{R}_1P+{P}^2{A}^T{R}_1{P}^2-{PE}^T{R}_1{F}^TP-{P}^2{E}^T{R}_1{F}^T{P}^2\right]\\ {}={V}_1+\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\frac{1}{2}\left[{PA}^T{R}_1P+{A}^T{R}_1-{PE}^T{R}_1{F}^TP-{E}^T{R}_1{F}^T\right]={V}_2\end{array}} $$

Assume that PVkP = Vk, i.e.,

$$ {PV}_kP={PV}_{k-1}P+\frac{{\left\Vert {R}_{k-1}\right\Vert}^2}{{\left\Vert {P}_{k-1}\right\Vert}^2+{\left\Vert {Q}_{k-1}\right\Vert}^2}{PP}_{k-1}P={V}_{k-1}+\frac{{\left\Vert {R}_{k-1}\right\Vert}^2}{{\left\Vert {P}_{k-1}\right\Vert}^2+{\left\Vert {Q}_{k-1}\right\Vert}^2}{P}_{k-1}={V}_k $$

Now, \( {\displaystyle \begin{array}{l}{PV}_{k+1}P={PV}_kP+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}{PP}_kP\\ {}={V}_k+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}\left(\frac{1}{2}\left[{PA}^T{R}_kP+{P}^2{A}^T{R}_k{P}^2-{PE}^T{R}_k{F}^TP-{P}^2{E}^T{R}_k{F}^T{P}^2\right]+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {R}_{k-1}\right\Vert}^2}{PP}_{k-1}P\right)\\ {}={V}_k+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {P}_k\right\Vert}^2+{\left\Vert {Q}_k\right\Vert}^2}\left(\frac{1}{2}\left[{PA}^T{R}_kP+{A}^T{R}_k-{PE}^T{R}_k{F}^TP-{E}^T{R}_k{F}^T\right]+\frac{{\left\Vert {R}_k\right\Vert}^2}{{\left\Vert {R}_{k-1}\right\Vert}^2}{P}_{k-1}\right)={V}_{k+1}\cdot \end{array}} \)

Similarly, we can prove that Wk + 1 is reflexive with respect to the generalized reflection matrix S.

The complexity of the proposed iterative algorithm

Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We define the complexity as a numerical function T(n), running time versus the input size n. The time complexity of an algorithm is most commonly expressed using big O notation, an asymptotic notation for the growth of the running time. A theoretical and rather crude measure of efficiency is the number of floating point operations (flops) needed to implement the algorithm, where a “flop” is an arithmetic operation: +, −, ×, or /. In this subsection, we count the flops of the proposed Algorithm 2.1 for the Sylvester matrix equation AV + BW = EVF + C.

The flop counts for step 3:

The residual R1 requires 4mn(2n − 1) + 3mn flops, computing the matrix P1 requires 4n2(2m − 1) + 4n2(2n − 1) + 2mn(2n − 1) + 4n2 flops, and computing the matrix Q1 requires 2n2(2m − 1) + n2(2n − 1) + mn(2n − 1) + 2n2 flops.

The flop counts for step 5:

Computing Vk + 1 requires [6n2 + 2mn + 5] flops, Wk + 1 requires [6n2 + 2mn + 5] flops, Rk + 1 requires [4mn(2n − 1) + 4n2 + 6mn + 5] flops, Qk + 1 requires [2n2(2m − 1) + n2(2n − 1) + mn(2n − 1) + 4n2 + 4mn + 3] flops, and Pk + 1 requires [4n2(2m − 1) + 3n2(2n − 1) + 2mn(2n − 1) + 6n2 + 4mn + 3] flops.

Thus, the total count of Algorithm 2.1 is:

$$ {\displaystyle \begin{array}{l}k\left[6{n}^2\left(2m-1\right)+4{n}^2\left(2n-1\right)+7 mn\left(2n-1\right)+26{n}^2+18 mn+21\right]\\ {}+6{n}^2\left(2m-1\right)+5{n}^2\left(2n-1\right)+7 mn\left(2n-1\right)+6{n}^2+3 mn\approx k\left[12{n}^2m+8{n}^3+14{mn}^2\right]\\ {}+12{n}^2m+10{n}^3+14{mn}^2\end{array}} $$

where k represents the number of iterations needed to find the reflexive solutions of Eq. (2.1). We can conclude that the flop count of Algorithm 2.1 is O(n3) per iteration (taking m of the same order as n).
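The leading-order estimate above is easy to encode as a quick cost model (a sketch following the counts stated in this subsection; the cubic growth for m proportional to n is then immediate):

```python
def total_flops(m, n, k):
    """Approximate total flop count of Algorithm 2.1 after k iterations,
    using the leading-order terms derived above."""
    per_iteration = 12 * n**2 * m + 8 * n**3 + 14 * m * n**2
    setup = 12 * n**2 * m + 10 * n**3 + 14 * m * n**2
    return k * per_iteration + setup

# With m proportional to n, doubling n multiplies the cost by about 2^3 = 8:
ratio = total_flops(200, 200, 10) / total_flops(100, 100, 10)
```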

Convergence analysis for the proposed algorithm

In this section, first, we present two lemmas which are important tools for the convergence of Algorithm 2.1.

Lemma 3.1 Assume that the sequences {Ri}, {Pi}, and {Qi} are generated by Algorithm 2.1. If there exists an integer s > 1 such that \( {R}_i\ne \mathbf{0} \) for all i = 1, 2, …, s, then we have

$$ tr\left({R}_j^T{R}_i\right)=0\kern0.50em and\ tr\left({P}_j^T{P}_i+{Q}_j^T{Q}_i\right)=0,i,j=1,2,\dots, s,i\ne j $$
(3.1)

Proof In view of the fact that tr(Y) = tr(YT) for an arbitrary square matrix Y, we only need to prove that

$$ tr\left({R}_j^T{R}_i\right)=0, tr\left({P}_j^T{P}_i+{Q}_j^T{Q}_i\right)=0,\mathrm{for}\ 1\le i<j\le s $$
(3.2)

We prove the conclusion (3.2) by induction through the following two steps.

Step 1: First, we show that

$$ tr\left({R}_{i+1}^T{R}_i\right)=0\ \mathrm{and}\ tr\left({P}_{i+1}^T{P}_i+{Q}_{i+1}^T{Q}_i\right)=0,i=1,2,\dots, s $$
(3.3)

To prove (3.3), we also use induction.

For i = 1, noting that P1 = PP1P, and Q1 = SQ1S, from the iterative Algorithm 2.1, we can write \( tr\left({R}_2^T{R}_1\right)= tr\left({\left[{R}_1-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left({AP}_1-{EP}_1F+{BQ}_1\right)\right]}^T{R}_1\right) \)

$$ {\displaystyle \begin{array}{l}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2} tr\left({P}_1^T{A}^T{R}_1-{F}^T{P}_1^T{E}^T{R}_1+{Q}_1^T{B}^T{R}_1\right)\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2} tr\left({P}_1^T{A}^T{R}_1-{P}_1^T{E}^T{R}_1{F}^T+{Q}_1^T{B}^T{R}_1\right)\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]\right.\right.\\ {}+{Q}_1^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]+{Q}_1^T\left[\frac{B^T{R}_1-{SB}^T{R}_1S}{2}\right]\\ {}\left.\left.+{P}_1^T\left[\frac{A^T{R}_1-{PA}^T{R}_1P-{E}^T{R}_1{F}^T+{PE}^T{R}_1{F}^TP}{2}\right]\right)\right]\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]\right.\right.\\ {}\left.\left.+{Q}_1^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\right]\\ {}={\left\Vert {R}_1\right\Vert}^2-\frac{{\left\Vert {R}_1\right\Vert}^2}{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}\left[ tr\left({P}_1^T{P}_1+{Q}_1^T{Q}_1\Big)\right)\right]=0.\end{array}} $$
(3.4)

Similarly, we can write

$$ {\displaystyle \begin{array}{c}\begin{array}{l} tr\left({P}_2^T{P}_1+{Q}_2^T{Q}_1\right)= tr\left({\left[\frac{A^T{R}_2+{PA}^T{R}_2P-{E}^T{R}_2{F}^T-{PE}^T{R}_2{F}^TP}{2}+\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}{P}_1\right]}^T{P}_1\right.\\ {}\left.+{\left[\frac{B^T{R}_2+{SB}^T{R}_2S}{2}+\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}{Q}_1\right]}^T{Q}_1\right)\end{array}\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({P}_1^T{P}_1+{Q}_1^T{Q}_1\right)+ tr\left(\begin{array}{l}{P}_1^T\left[\frac{A^T{R}_2+{PA}^T{R}_2P-{E}^T{R}_2{F}^T-{PE}^T{R}_2{F}^TP}{2}\right]\\ {}+{Q}_1^T\left[\frac{B^T{R}_2+{SB}^T{R}_2S}{2}\right]\end{array}\right)\\ {}\begin{array}{l}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({P}_1^T\left[\frac{A^T{R}_2+{A}^T{R}_2-{E}^T{R}_2{F}^T-{E}^T{R}_2{F}^T}{2}\right]\right.\\ {}\left.+{Q}_1^T\left[\frac{B^T{R}_2+{B}^T{R}_2}{2}\right]\right)\end{array}\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({R}_2^T\left[{AP}_1-{EP}_1F+{BQ}_1\right]\right)\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+ tr\left({R}_2^T\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({R}_1-{R}_2\right)\right)\\ {}=\frac{{\left\Vert {R}_2\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2}\left({\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2\right)+\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({R}_2^T{R}_1\right)-{\left\Vert {R}_2\right\Vert}^2=0.\end{array}} $$
(3.5)

Now, assume that (3.3) holds for 1 ≤ i ≤ t − 1 < s. Noting that Pt = PPtP and Qt = SQtS, we then have for i = t

$$ {\displaystyle \begin{array}{c} tr\left({R}_{t+1}^T{R}_t\right)= tr\left({\left[{R}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left\{{AP}_t-{EP}_tF+{BQ}_t\right\}\right]}^T{R}_t\right)\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2} tr\left({P}_t^T{A}^T{R}_t-{F}^T{P}_t^T{E}^T{R}_t+{Q}_t^T{B}^T{R}_t\right)\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2} tr\left({P}_t^T{A}^T{R}_t-{P}_t^T{E}^T{R}_t{F}^T+{Q}_t^T{B}^T{R}_t\right)\\ {}\begin{array}{l}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[ tr\left({P}_t^T\left[\frac{A^T{R}_t+{PA}^T{R}_tP-{E}^T{R}_t{F}^T-{PE}^T{R}_t{F}^TP}{2}\right]\right.\right.\\ {}\left.\left.+{Q}_t^T\left[\frac{B^T{R}_t+{SB}^T{R}_tS}{2}\right]\right)\right]\end{array}\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[ tr\left({P}_t^T\left({P}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2}{P}_{t-1}\right)+{Q}_t^T\left({Q}_t-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2}{Q}_{t-1}\right)\right)\right]\\ {}={\left\Vert {R}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}\left[{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2-\frac{{\left\Vert {R}_t\right\Vert}^2}{{\left\Vert {R}_{t-1}\right\Vert}^2} tr\left({P}_t^T{P}_{t-1}+{Q}_t^T{Q}_{t-1}\right)\right]=0.\end{array}} $$
(3.6)

Also, we have

$$ {\displaystyle \begin{array}{c} tr\left({P}_{t+1}^T{P}_t+{Q}_{t+1}^T{Q}_t\right)= tr\left({\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{P}_t\right]}^T{P}_t\right.\\ {}\left.+{\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{Q}_t\right]}^T{Q}_t\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left(\begin{array}{l}{P}_t^T\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}\right]\\ {}+{Q}_t^T\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}\right]\end{array}\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left({R}_{t+1}^T\left[{AP}_t-{EP}_tF+{BQ}_t\right]\right)\\ {}\begin{array}{l}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+ tr\left({R}_{t+1}^T\frac{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}\left({R}_t-{R}_{t+1}\right)\right)\\ {}=\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({P}_t^T{P}_t+{Q}_t^T{Q}_t\right)+\frac{{\left\Vert {P}_t\right\Vert}^2+{\left\Vert {Q}_t\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2} tr\left({R}_{t+1}^T{R}_t\right)-{\left\Vert {R}_{t+1}\right\Vert}^2=0.\end{array}\end{array}} $$
(3.7)

Therefore, the conclusion (3.3) holds for i = t. Hence, (3.3) holds by the principle of induction.

Step 2: In this step, we show that for 1 ≤ i ≤ s − 1

$$ tr\left({R}_{i+l}^T{R}_i\right)=0\ \mathrm{and}\ tr\left({P}_{i+l}^T{P}_i+{Q}_{i+l}^T{Q}_i\right)=0 $$
(3.8)

for l = 1, 2, …, s − i. The case l = 1 has been proven in Step 1. Assume that (3.8) holds for l ≤ ν.

Now, we prove that \( tr\left({R}_{i+\nu +1}^T{R}_i\right)=0 \) and \( tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)=0 \) through the following two substeps.

Substep 2.1: In this substep, we show that

$$ tr\left({R}_{\nu +2}^T{R}_1\right)=0 $$
(3.9)
$$ tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right)=0 $$
(3.10)

By Algorithm 2.1 and the induction assumptions, we have

$$ {\displaystyle \begin{array}{c} tr\left({R}_{\nu +2}^T{R}_1\right)= tr\left({\left[{R}_{\nu +1}-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2}\left({AP}_{\nu +1}-{EP}_{\nu +1}F+{BQ}_{\nu +1}\right)\right]}^T{R}_1\right)\\ {}= tr\left({R}_{\nu +1}^T{R}_1\right)-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T\left[{A}^T{R}_1-{E}^T{R}_1{F}^T\right]+{Q}_{\nu +1}^T{B}^T{R}_1\right)\\ {}\begin{array}{l}=-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T\left[\frac{A^T{R}_1-{E}^T{R}_1{F}^T+{PA}^T{R}_1P-{PE}^T{R}_1{F}^TP}{2}\right]\right.\\ {}\left.+{Q}_{\nu +1}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\end{array}\\ {}=-\frac{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{{\left\Vert {P}_{\nu +1}\right\Vert}^2+{\left\Vert {Q}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T{P}_1+{Q}_{\nu +1}^T{Q}_1\right)=0,\end{array}} $$

and \( tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \)

$$ {\displaystyle \begin{array}{l}= tr\left({\left[\frac{A^T{R}_{\nu +2}+{PA}^T{R}_{\nu +2}P-{E}^T{R}_{\nu +2}{F}^T-{PE}^T{R}_{\nu +2}{F}^TP}{2}+\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{P}_{\nu +1}\right]}^T{P}_1\right.\\ {}\left.+{\left[\frac{B^T{R}_{\nu +2}+{SB}^T{R}_{\nu +2}S}{2}+\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2}{Q}_{\nu +1}\right]}^T{Q}_1\right)\\ {}=\frac{{\left\Vert {R}_{\nu +2}\right\Vert}^2}{{\left\Vert {R}_{\nu +1}\right\Vert}^2} tr\left({P}_{\nu +1}^T{P}_1+{Q}_{\nu +1}^T{Q}_1\right)+ tr\left({R}_{\nu +2}^T\left[{AP}_1-{EP}_1F+{BQ}_1\right]\right)\\ {}=\frac{{\left\Vert {P}_1\right\Vert}^2+{\left\Vert {Q}_1\right\Vert}^2}{{\left\Vert {R}_1\right\Vert}^2} tr\left({R}_{\nu +2}^T\left({R}_1-{R}_2\right)\right)=0\end{array}} $$

Thus, (3.9) and (3.10) hold.

Substep 2.2: By Algorithm 2.1 and the induction assumptions, we can write

$$ {\displaystyle \begin{array}{c} tr\left({R}_{i+\nu +1}^T{R}_i\right)= tr\left({\left[{R}_{i+\nu }-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2}\left({AP}_{i+\nu }-{EP}_{i+\nu }F+{BQ}_{i+\nu}\right)\right]}^T{R}_i\right)\\ {}= tr\left({R}_{\nu +1}^T{R}_i\right)-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[{A}^T{R}_i-{E}^T{R}_i{F}^T\right]+{Q}_{i+\nu}^T{B}^T{R}_i\right)\\ {}\begin{array}{l}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[\frac{A^T{R}_i-{E}^T{R}_i{F}^T+{PA}^T{R}_iP-{PE}^T{R}_i{F}^TP}{2}\right]\right.\\ {}\left.+{Q}_{i+\nu}^T\left[\frac{B^T{R}_i+{SB}^T{R}_iS}{2}\right]\right)\end{array}\\ {}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T\left[{P}_i-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2}{P}_{i-1}\right]+{Q}_{i+\nu}^T\left[{Q}_i-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2}{Q}_{i-1}\right]\right)\\ {}=-\frac{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2}\left[ tr\left({P}_{i+\nu}^T{P}_i+{Q}_{i+v}^T{Q}_i\right)-\frac{{\left\Vert {R}_i\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\right]\\ {}=\frac{{\left\Vert {R}_i\right\Vert}^2{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2\left({\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2\right)} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\end{array}} $$
(3.11)

Also, we have

$$ {\displaystyle \begin{array}{c} tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)\\ {}= tr\left({\left[\frac{A^T{R}_{i+\nu +1}+{PA}^T{R}_{i+\nu +1}P-{E}^T{R}_{i+\nu +1}{F}^T-{PE}^T{R}_{i+\nu +1}{F}^TP}{2}+\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{P}_{i+\nu}\right]}^T{P}_i\right.\\ {}\left.+{\left[\frac{B^T{R}_{i+\nu +1}+{SB}^T{R}_{i+\nu +1}S}{2}+\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2}{Q}_{i+\nu}\right]}^T{Q}_i\right)\\ {}=\frac{{\left\Vert {R}_{i+\nu +1}\right\Vert}^2}{{\left\Vert {R}_{i+\nu}\right\Vert}^2} tr\left({P}_{i+\nu}^T{P}_i+{Q}_{i+\nu}^T{Q}_i\right)+ tr\left({R}_{i+\nu +1}^T\left[{AP}_i-{EP}_iF+{BQ}_i\right]\right)\\ {}=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2} tr\left({R}_{i+\nu +1}^T\left[{R}_i-{R}_{i+1}\right]\right)=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2} tr\left({R}_{i+\nu +1}^T{R}_i\right)\\ {}=\frac{{\left\Vert {P}_i\right\Vert}^2+{\left\Vert {Q}_i\right\Vert}^2}{{\left\Vert {R}_i\right\Vert}^2}\frac{{\left\Vert {R}_i\right\Vert}^2{\left\Vert {R}_{i+\nu}\right\Vert}^2}{{\left\Vert {R}_{i-1}\right\Vert}^2\left({\left\Vert {P}_{i+\nu}\right\Vert}^2+{\left\Vert {Q}_{i+\nu}\right\Vert}^2\right)} tr\left({P}_{i+\nu}^T{P}_{i-1}+{Q}_{i+\nu}^T{Q}_{i-1}\right)\end{array}} $$
(3.12)

Repeating the above process with (3.11) and (3.12), we obtain, for certain scalars α and β,

\( tr\left({R}_{i+\nu +1}^T{R}_i\right)=\alpha tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \), and \( tr\left({P}_{i+\nu +1}^T{P}_i+{Q}_{i+\nu +1}^T{Q}_i\right)=\beta tr\left({P}_{\nu +2}^T{P}_1+{Q}_{\nu +2}^T{Q}_1\right) \).

Combining these two relations with (3.10) implies that (3.8) holds for l = ν + 1.

From steps 1 and 2, the conclusion (3.1) holds by the principle of induction.

Lemma 3.2 Let Problem 2.1 be consistent over reflexive matrices, and let V∗ and W∗ be arbitrary reflexive solutions of Problem 2.1. Then, for any initial reflexive matrices V1 and W1, we have

$$ tr\left({\left({V}^{\ast }-{V}_i\right)}^T{P}_i+{\left({W}^{\ast }-{W}_i\right)}^T{Q}_i\right)={\left\Vert {R}_i\right\Vert}^2\ for\ i=1,2,\dots $$
(3.13)

where the sequences {Ri}, {Pi}, {Qi}, {Vi}, and {Wi} are generated by Algorithm 2.1.

Proof We prove the conclusion (3.13) by induction as follows.

For i = 1, noting that V∗ − V1 = P(V∗ − V1)P and W∗ − W1 = S(W∗ − W1)S, we have

$$ {\displaystyle \begin{array}{c} tr\left({\left({V}^{\ast }-{V}_1\right)}^T{P}_1+{\left({W}^{\ast }-{W}_1\right)}^T{Q}_1\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP}{2}\right]+{\left({W}^{\ast }-{W}_1\right)}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S}{2}\right]\right)\\ {}\begin{array}{l}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[\frac{A^T{R}_1+{PA}^T{R}_1P-{E}^T{R}_1{F}^T-{PE}^T{R}_1{F}^TP+{A}^T{R}_1-{PA}^T{R}_1P-{E}^T{R}_1{F}^T}{2}\right.\right.\\ {}\left.+\frac{PE^T{R}_1{F}^TP}{2}\right]\left.+{\left({W}^{\ast }-{W}_1\right)}^T\left[\frac{B^T{R}_1+{SB}^T{R}_1S+{B}^T{R}_1-{SB}^T{R}_1S}{2}\right]\right)\end{array}\\ {}= tr\left({\left({V}^{\ast }-{V}_1\right)}^T\left[{A}^T{R}_1-{E}^T{R}_1{F}^T\right]+{\left({W}^{\ast }-{W}_1\right)}^T\left[{B}^T{R}_1\right]\right)\\ {}\begin{array}{l}= tr\left({R}_1^T\left[A\left({V}^{\ast }-{V}_1\right)-E\left({V}^{\ast }-{V}_1\right)F+B\left({W}^{\ast }-{W}_1\right)\right]\right)\\ {}= tr\left({R}_1^T\left[C-{AV}_1+{EV}_1F-{BW}_1\right]\right)={\left\Vert {R}_1\right\Vert}^2.\end{array}\end{array}} $$
(3.14)

Assume that the conclusion (3.13) holds for i = t. Now, for i = t + 1, we have

$$ {\displaystyle \begin{array}{l} tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T{P}_{t+1}+{\left({W}^{\ast }-{W}_{t+1}\right)}^T{Q}_{t+1}\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T\left[\frac{A^T{R}_{t+1}+{PA}^T{R}_{t+1}P-{E}^T{R}_{t+1}{F}^T-{PE}^T{R}_{t+1}{F}^TP}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{P}_t\right]\right.\\ {}\left.+{\left({W}^{\ast }-{W}_{t+1}\right)}^T\left[\frac{B^T{R}_{t+1}+{SB}^T{R}_{t+1}S}{2}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}{Q}_t\right]\right)\\ {}= tr\left({\left({V}^{\ast }-{V}_{t+1}\right)}^T\left[{A}^T{R}_{t+1}-{E}^T{R}_{t+1}{F}^T\right]+{\left({W}^{\ast }-{W}_{t+1}\right)}^T\left[{B}^T{R}_{t+1}\right]\right.\\ {}+\frac{{\left\Vert {R}_{t+1}\right\Vert}^2}{{\left\Vert {R}_t\right\Vert}^2}\left[{\left({V}^{\ast }-{V}_{t+1}\right)}^T{P}_t+{\left({W}^{\ast }-{W}_{t+1}\right)}^T{Q}_t\right]\\ {}= tr\left({R}_{t+1}^T\left[C-{AV}_{t+1}+{EV}_{t+1}F-{BW}_{t+1}\right]\right)={\left\Vert {R}_{t+1}\right\Vert}^2\end{array}} $$
(3.15)

Hence, Lemma 3.2 holds for all i = 1, 2, … by the principle of induction.

Theorem 3.1 Assume that Problem 2.1 is consistent over reflexive matrices. Then, for any arbitrary initial reflexive matrices \( {V}_1\in {R}_r^{n\times n}(P) \) and \( {W}_1\in {R}_r^{n\times n}(S) \), reflexive solutions of Problem 2.1 can be obtained by Algorithm 2.1 within a finite number of iteration steps in the absence of roundoff errors.

Proof Assume that \( {R}_i\ne \mathbf{0} \) for i = 1, 2, …, mn. From Lemma 3.2, we get \( {P}_i\ne \mathbf{0} \) or \( {Q}_i\ne \mathbf{0} \) for i = 1, 2, …, mn. Therefore, we can compute Rmn + 1, Vmn + 1 and Wmn + 1 by Algorithm 2.1. Also from Lemma 3.1, we have

$$ tr\left({R}_{mn+1}^T{R}_i\right)=0\ \mathrm{for}\ i=1,2,\dots, mn, $$
(3.16)

and

$$ tr\left({R}_i^T{R}_j\right)=0\ \mathrm{for}\ i,j=1,2,\dots, mn,\left(i\ne j\right). $$
(3.17)

Therefore, the set {R1, R2, …, Rmn} is an orthogonal basis of the matrix space Rm × n, which implies that \( {R}_{mn+1}=\mathbf{0} \), i.e., Vmn + 1 and Wmn + 1 are reflexive solutions of Problem 2.1. Hence, the proof is completed.
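Lemma 3.1 and Theorem 3.1 can be spot-checked numerically. The sketch below runs the updates of Algorithm 2.1, as reconstructed from the formulas in the proofs (the data and names are ours, chosen only for illustration), on a tiny consistent problem, recording every residual; in exact arithmetic the residuals are mutually trace-orthogonal and vanish within mn steps, and in floating point both properties hold to high accuracy.

```python
import numpy as np

# Spot check of Lemma 3.1 and Theorem 3.1 on a tiny consistent problem.
rng = np.random.default_rng(3)
m, n = 3, 2
P = np.diag([1.0, -1.0])
S = np.diag([-1.0, 1.0])
refl = lambda M, J: (M + J @ M @ J) / 2   # projection onto the reflexive subspace

A, B, E = (rng.standard_normal((m, n)) for _ in range(3))
F = rng.standard_normal((n, n))
V_true = refl(rng.standard_normal((n, n)), P)
W_true = refl(rng.standard_normal((n, n)), S)
C = A @ V_true + B @ W_true - E @ V_true @ F   # consistent by construction

V = np.zeros((n, n)); W = np.zeros((n, n))
R = C - A @ V - B @ W + E @ V @ F
Pk = refl(A.T @ R - E.T @ R @ F.T, P)
Qk = refl(B.T @ R, S)
residuals = [R]
for _ in range(m * n):                         # Theorem 3.1: at most mn steps
    if np.linalg.norm(R) < 1e-12:
        break
    a = np.linalg.norm(R) ** 2 / (np.linalg.norm(Pk) ** 2 + np.linalg.norm(Qk) ** 2)
    V = V + a * Pk
    W = W + a * Qk
    R = C - A @ V - B @ W + E @ V @ F
    b = np.linalg.norm(R) ** 2 / np.linalg.norm(residuals[-1]) ** 2
    Pk = refl(A.T @ R - E.T @ R @ F.T, P) + b * Pk
    Qk = refl(B.T @ R, S) + b * Qk
    residuals.append(R)

# Largest pairwise trace inner product among distinct residuals (Lemma 3.1).
max_cross = max(abs(np.trace(Ri.T @ Rj))
                for i, Ri in enumerate(residuals)
                for Rj in residuals[i + 1:])
final_res = np.linalg.norm(residuals[-1])
```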

To obtain the least Frobenius norm reflexive solution pair of Problem 2.1, we first present the following lemma.

Lemma 3.3 [4] Assume that the consistent system of linear equations Ax = b has a solution x ∈ R(AT); then x is the unique least Frobenius norm solution of the system of linear equations.

Theorem 3.2 Suppose that Problem 2.1 is consistent over reflexive matrices. Let the initial iteration matrices be \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), where G and \( \tilde{G} \) are arbitrary, or, in particular, \( {V}_1=\mathbf{0} \) and \( {W}_1=\mathbf{0} \). Then the reflexive solutions V and W obtained by Algorithm 2.1 are the least Frobenius norm reflexive solutions of Eq. (2.1).

Proof The solvability of the matrix Eq. (2.1) over reflexive matrices is equivalent to the solvability of the system of equations

$$ \left\{\begin{array}{l} AV- EVF+ BW=C,\\ {} APVP- EPVPF+ BSWS=C.\end{array}\right. $$
(3.18)

The system of equations (3.18) is in turn equivalent to

$$ \left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)\left(\begin{array}{c} vec(V)\\ {} vec(W)\end{array}\right)=\left(\begin{array}{c} vec(C)\\ {} vec(C)\end{array}\right) $$
(3.19)
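For small dimensions, the system (3.19) can be assembled and solved directly. The sketch below (our construction; the data are random and for illustration only) builds the block coefficient matrix and recovers a solution with numpy.linalg.lstsq, which returns the minimum-norm solution of a consistent underdetermined system; by the symmetry of (3.18) under V → PVP, W → SWS, that minimum-norm solution is itself reflexive, illustrating Theorem 3.2.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 2
P = np.diag([1.0, -1.0])
S = np.diag([-1.0, 1.0])
refl = lambda M, J: (M + J @ M @ J) / 2
A, B, E = (rng.standard_normal((m, n)) for _ in range(3))
F = rng.standard_normal((n, n))
V_true = refl(rng.standard_normal((n, n)), P)
W_true = refl(rng.standard_normal((n, n)), S)
C = A @ V_true + B @ W_true - E @ V_true @ F   # consistent by construction

vec = lambda M: M.flatten(order="F")
I = np.eye(n)
# Block coefficient matrix of Eq. (3.19).
K = np.block([
    [np.kron(I, A) - np.kron(F.T, E),             np.kron(I, B)],
    [np.kron(P, A @ P) - np.kron(F.T @ P, E @ P), np.kron(S, B @ S)],
])
rhs = np.concatenate([vec(C), vec(C)])
x, *_ = np.linalg.lstsq(K, rhs, rcond=None)    # minimum-norm solution
V = x[:n * n].reshape((n, n), order="F")
W = x[n * n:].reshape((n, n), order="F")
```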

Now, for arbitrary matrices G and \( \tilde{G} \), we can write

$$ {\displaystyle \begin{array}{c}\left(\begin{array}{c} vec\left({A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP\right)\\ {} vec\left({B}^TG+{SB}^T\tilde{G}S\right)\end{array}\right)\\ {}=\left(\begin{array}{cc}\left(I\otimes {A}^T\right)-\left(F\otimes {E}^T\right)& \left(P\otimes {PA}^T\right)-\left( PF\otimes {PE}^T\right)\\ {}\left(I\otimes {B}^T\right)& \left(S\otimes {SB}^T\right)\end{array}\right)\ \left(\begin{array}{c} vec(G)\\ {} vec\left(\tilde{G}\right)\end{array}\right)\\ {}={\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\left(\begin{array}{c} vec(G)\\ {} vec\left(\tilde{G}\right)\end{array}\right)\\ {}\in R\left({\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\right)\end{array}} $$

If we take \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), then all Vk and Wk generated by Algorithm 2.1 satisfy

$$ \left(\begin{array}{c} vec\left({V}_k\right)\\ {} vec\left({W}_k\right)\end{array}\right)\in R\left({\left(\begin{array}{cc}\left(I\otimes A\right)-\left({F}^T\otimes E\right)& \left(I\otimes B\right)\\ {}\left(P\otimes AP\right)-\left({F}^TP\otimes EP\right)& \left(S\otimes BS\right)\end{array}\right)}^T\right) $$

Applying Lemma 3.3 with the initial iteration matrices \( {V}_1={A}^TG+{PA}^T\tilde{G}P-{E}^T{GF}^T-{PE}^T\tilde{G}{F}^TP \) and \( {W}_1={B}^TG+{SB}^T\tilde{G}S \), where G and \( \tilde{G} \) are arbitrary, or in particular \( {V}_1=\mathbf{0} \) and \( {W}_1=\mathbf{0} \), shows that the reflexive solutions V and W obtained by Algorithm 2.1 are the least Frobenius norm reflexive solutions of Eq. (2.1), which completes the proof.

Numerical examples

In this section, four numerical examples are presented to illustrate the performance and effectiveness of the proposed algorithm. All algorithms were implemented in our own MATLAB programs and run on a Pentium IV PC.

Example 4.1

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccc}3& 2& 4& 1\\ {}0& -2& 1& 3\\ {}5& 2& 3& 2\\ {}2& 1& 3& 4\\ {}2& 0& 2& 0\end{array}\right),B=\left(\begin{array}{cccc}5& 0& 2& 3\\ {}-5& 0& 4& 1\\ {}3& 4& 5& 2\\ {}3& 2& 2& 3\\ {}0& 3& 4& 6\end{array}\right),E=\left(\begin{array}{cccc}-3& 2& 4& 0\\ {}2& 0& -3& 2\\ {}3& 2& 3& 0\\ {}3& 4& 3& 0\\ {}3& 0& 3& 2\end{array}\right)\\ {}F=\left(\begin{array}{cccc}3& -4& 5& 1\\ {}2& -4& 1& 3\\ {}-4& 2& 2& 1\\ {}-3& 0& -2& -12\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccc}84& -46& 49& 81\\ {}-13& 19& 11& 8\\ {}29& 70& 18& 15\\ {}26& 53& 29& 8\\ {}61& 35& -24& 68\end{array}\right)\end{array}} $$

Choosing the initial matrices V1 = W1 = 0 and applying Algorithm 2.1, we obtain the reflexive solutions of the matrix Eq. (2.1) after 28 iterations as follows:

$$ {\displaystyle \begin{array}{l}{V}_{28}=\left(\begin{array}{cccc}1.0000& 3.0000& 0.0000& 0.0000\\ {}-2.0000& 2.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 2.0000& 1.0000\\ {}0.0000& 0.0000& 4.0000& 2.0000\end{array}\right)\in {R}_r^{4\times 4}(P)\\ {}\mathrm{and}\ {W}_{28}=\left(\begin{array}{cccc}2.0000& 1.0000& 0.0000& 0.0000\\ {}3.0000& 3.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 4.0000& 2.0000\\ {}0.0000& 0.0000& -1.0000& 3.0000\end{array}\right)\in {R}_r^{4\times 4}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -1& 0\\ {}0& 0& 0& -1\end{array}\right) \), with the corresponding residual

‖R28‖ = ‖C − (AV28 + BW28 − EV28F)‖ = 6.8125 × 10−10. Moreover, it can be verified that PV28P = V28 and SW28S = W28. Table 1 lists the number of iterations k and the norm of the corresponding residual:

Table 1 The number of iterations and norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation Example 4.1
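As a quick independent check of Example 4.1, the reported pair, rounded to integers, satisfies the equation exactly in integer arithmetic; a NumPy verification sketch (matrices transcribed from the example):

```python
import numpy as np

# data transcribed from Example 4.1
A = np.array([[3, 2, 4, 1], [0, -2, 1, 3], [5, 2, 3, 2], [2, 1, 3, 4], [2, 0, 2, 0]])
B = np.array([[5, 0, 2, 3], [-5, 0, 4, 1], [3, 4, 5, 2], [3, 2, 2, 3], [0, 3, 4, 6]])
E = np.array([[-3, 2, 4, 0], [2, 0, -3, 2], [3, 2, 3, 0], [3, 4, 3, 0], [3, 0, 3, 2]])
F = np.array([[3, -4, 5, 1], [2, -4, 1, 3], [-4, 2, 2, 1], [-3, 0, -2, -12]])
C = np.array([[84, -46, 49, 81], [-13, 19, 11, 8], [29, 70, 18, 15],
              [26, 53, 29, 8], [61, 35, -24, 68]])

# reported solution (rounded) and the generalized reflection matrix P = S
V28 = np.array([[1, 3, 0, 0], [-2, 2, 0, 0], [0, 0, 2, 1], [0, 0, 4, 2]])
W28 = np.array([[2, 1, 0, 0], [3, 3, 0, 0], [0, 0, 4, 2], [0, 0, -1, 3]])
P = np.diag([1, 1, -1, -1])

print(np.array_equal(A @ V28 + B @ W28 - E @ V28 @ F, C))                  # True
print(np.array_equal(P @ V28 @ P, V28), np.array_equal(P @ W28 @ P, W28))  # True True
```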

Now let \( \hat{V}=\left(\begin{array}{cccc}1& 1& 0& 0\\ {}-1& -1& 0& 0\\ {}0& 0& -2& 1\\ {}0& 0& 3& -1\end{array}\right),\hat{W}=\left(\begin{array}{cccc}1& -1& 0& 0\\ {}1& -1& 0& 0\\ {}0& 0& 1& 2\\ {}0& 0& -2& 1\end{array}\right) \).

By applying Algorithm 2.1 to the generalized Sylvester matrix equation \( A\overline{V}+B\overline{W}=E\overline{V}F+\overline{C} \) with the initial pair \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \), we obtain the least Frobenius norm solution \( {\overline{V}}^{\ast },{\overline{W}}^{\ast } \) of the generalized Sylvester matrix Eq. (2.1) as follows

$$ {\overline{V}}^{\ast }={\overline{V}}_{29}=\left(\begin{array}{cccc}-0.0000& 2.0000& 0.0000& 0.0000\\ {}-1.0000& 3.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 4.0000& 0.0000\\ {}0.0000& 0.0000& 1.0000& 3.0000\end{array}\right), $$

\( {\overline{W}}^{\ast }={\overline{W}}_{29}=\left(\begin{array}{cccc}1.0000& 2.0000& 0.0000& 0.0000\\ {}2.0000& 4.0000& 0.0000& 0.0000\\ {}0.0000& 0.0000& 3.0000& -0.0000\\ {}0.0000& 0.0000& 1.0000& 2.0000\end{array}\right) \), with the corresponding residual

$$ \left\Vert {R}_{29}\right\Vert =\left\Vert \overline{C}-\left(A{\overline{V}}_{29}+B{\overline{W}}_{29}-E{\overline{V}}_{29}F\right)\right\Vert =5.0896\times {10}^{-11}. $$

Table 2 lists the number of iterations k and the norm of the corresponding residual with \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \).

Table 2 The number of iterations and norm of the corresponding residual for Example 4.1 with \( {\overline{V}}_1={\overline{W}}_1=\mathbf{0} \)

Example 4.2

(Special case) Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccc}-0.2& 1& 0& 0\\ {}1& -0.1& 0& 0\\ {}0& 0& -0.3& 0\\ {}0& 0& 0& 0.4\end{array}\right),E=\left(\begin{array}{cccc}2& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& 3& 0\\ {}0& 0& 0& 2\end{array}\right),B=\left(\begin{array}{cccc}-2& 1& 0& 0\\ {}-1& 4& 0& 0\\ {}0& 0& -3& 0\\ {}0& 0& 0& 3\end{array}\right)\\ {}F=\left(\begin{array}{cccc}2& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -0.2& 0\\ {}0& 0& 0& 4\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& -0.2& 0& 0\\ {}0& 0& 1& 0\\ {}0& 0& -0.1& 3\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we obtain the reflexive solutions of the matrix Eq. (2.1) after 7 iterations with tolerance ε = 10−10 as follows:

$$ {\displaystyle \begin{array}{l}{V}_7=\left(\begin{array}{cccc}-0.177130& -0.016697& 0.000000& 0.000000\\ {}0.041119& 0.014553& 0.000000& 0.000000\\ {}0.000000& 0.000000& 0.033003& 0.000000\\ {}0.000000& 0.000000& -0.008300& -0.341520\end{array}\right)\in {R}_r^{4\times 4}(P)\ \mathrm{and}\\ {}{W}_7=\left(\begin{array}{cccc}-0.085183& 0.005417& 0.000000& 0.000000\\ {}0.044574& -0.040468& 0.000000& 0.000000\\ {}0.000000& 0.000000& -0.330030& 0.000000\\ {}0.000000& 0.000000& -0.031125& 0.134810\end{array}\right)\in {R}_r^{4\times 4}(S)\end{array}} $$

where \( P=\left(\begin{array}{cccc}1& 0& 0& 0\\ {}0& 1& 0& 0\\ {}0& 0& -1& 0\\ {}0& 0& 0& -1\end{array}\right) \) and S = P.

It can be verified that PV7P = V7 and SW7S = W7. Moreover, the corresponding residual is ‖R7‖ = ‖C − (AV7 + BW7 − EV7F)‖ = 8.1907 × 10−10.

Example 4.3

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccccc}2& -1& -3& 3& -1& 0\\ {}3& -1& 1& -2& 0& 2\\ {}3& 2& 0& 1& 4& -3\\ {}2& -1& 3& 2& 0& -3\\ {}4& -1& 2& -2& 3& 1\\ {}2& -1& 0& 3& -2& 1\end{array}\right),E=\left(\begin{array}{cccccc}4& -3& 1& 2& 0& -2\\ {}0& 2& 4& 1& -2& 3\\ {}-2& 0& 4& 3& 2& -3\\ {}1& -2& 4& 2& 0& 3\\ {}3& -2& 1& 4& 1& 2\\ {}1& 2& 0& -2& -3& 4\end{array}\right),B=\left(\begin{array}{cccccc}1& 0& -2& -1& 3& 4\\ {}2& -3& 3& 4& 0& 2\\ {}1& 0& 3& -2& 2& -3\\ {}2& -1& 4& 0& -3& 1\\ {}0& -1& 0& 2& 3& 1\\ {}2& -1& 3& -3& 4& 1\end{array}\right)\\ {}F=\left(\begin{array}{cccccc}3& 2& 0& -1& -2& 1\\ {}1& -1& 4& 2& 1& -3\\ {}-3& 2& -1& 0& 2& 1\\ {}0& -1& 2& -3& 1& 4\\ {}2& -1& 4& 0& 2& 3\\ {}2& -1& 3& 2& -3& 1\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccccc}0& -11& 1& 22& -30& 14\\ {}-37& 19& -122& -22& -4& 28\\ {}65& -21& 4& 32& -77& -9\\ {}12& 7& -77& 5& -36& -65\\ {}-11& 33& -67& 60& -52& -43\\ {}-37& 19& -68& -39& 61& 3\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we obtain the reflexive solutions of the matrix Eq. (2.1) after 108 iterations with tolerance ε = 10−10 as follows:

$$ {\displaystyle \begin{array}{l}{V}_{108}=\left(\begin{array}{cccccc}1& 3& -2& 0& 0& 0\\ {}1& 3& -2& 0& 0& 0\\ {}1& 3& 2& 0& 0& 0\\ {}0& 0& 0& 2& 1& -2\\ {}0& 0& 0& 2& 1& -2\\ {}0& 0& 0& 2& 1& 2\end{array}\right)\in {R}_r^{6\times 6}(P)\ \mathrm{and}\\ {}{W}_{108}=\left(\begin{array}{cccccc}2& -1& 3& 0& 0& 0\\ {}2& -1& 3& 0& 0& 0\\ {}2& -1& -3& 0& 0& 0\\ {}0& 0& 0& 2& -1& 3\\ {}0& 0& 0& 2& -1& 3\\ {}0& 0& 0& 2& -1& -3\end{array}\right)\in {R}_r^{6\times 6}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{cccccc}1& 0& 0& 0& 0& 0\\ {}0& 1& 0& 0& 0& 0\\ {}0& 0& 1& 0& 0& 0\\ {}0& 0& 0& -1& 0& 0\\ {}0& 0& 0& 0& -1& 0\\ {}0& 0& 0& 0& 0& -1\end{array}\right) \), with the corresponding residual

‖R108‖ = ‖C − (AV108 + BW108 − EV108F)‖ = 3.0452 × 10−10. Moreover, it can be verified that PV108P = V108 and SW108S = W108. Table 3 lists the number of iterations k and the norm of the corresponding residual:

Table 3 The number of iterations and norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation Example 4.3

Example 4.4

Consider the generalized Sylvester matrix equation AV + BW = EVF + C where

$$ {\displaystyle \begin{array}{l}A=\left(\begin{array}{cccccc}2& -1& -3& 3& -1& 0\\ {}5& -2& 0& 5& 1& -3\\ {}3& -1& 1& -2& 0& 6\\ {}1& 5& -3& -2& 0& 3\\ {}3& 2& 0& -5& 4& -3\\ {}2& -1& 3& -5& 0& -3\\ {}4& -1& 2& -2& 3& 1\\ {}2& -1& 0& 3& -2& 1\end{array}\right),E=\left(\begin{array}{cccccc}4& -3& 1& 2& 0& -2\\ {}0& 2& 4& 1& -2& 3\\ {}-2& 0& 4& 3& 2& -3\\ {}3& -2& 5& 4& 3& 0\\ {}1& -2& 4& 2& 0& 3\\ {}1& -3& 2& 4& 0& 2\\ {}3& -2& 1& 4& 1& 2\\ {}-1& 2& 0& -2& -3& 4\end{array}\right),B=\left(\begin{array}{cccccc}1& 0& -2& -1& 3& 4\\ {}3& -5& -3& 2& 4& 1\\ {}2& -3& 3& 4& 0& 2\\ {}1& 0& 3& -2& 2& -3\\ {}-4& 5& -3& 4& -1& 0\\ {}2& -1& 4& 0& -3& 1\\ {}0& -1& 0& 2& 3& 1\\ {}2& -1& 3& -3& 4& 1\end{array}\right),\\ {}F=\left(\begin{array}{cccccc}3& 2& 0& -1& -2& 1\\ {}1& -1& 4& 2& 1& -3\\ {}-3& 2& -1& 0& 2& 1\\ {}0& -1& 2& -3& 1& 4\\ {}2& -1& 4& 0& 2& 3\\ {}2& -1& 3& 2& -3& 1\end{array}\right)\ \mathrm{and}\ C=\left(\begin{array}{cccccc}-11& -51& -58& 29& -15& 68\\ {}-27& 85& -124& 52& -26& -82\\ {}59& -93& 109& 39& -6& 28\\ {}-18& 22& -24& -14& -5& 86\\ {}-59& 125& -144& -41& -68& -11\\ {}-40& 56& -105& -5& -89& -27\\ {}-43& 43& -110& 15& -53& -10\\ {}-54& 144& -109& 14& 20& -75\end{array}\right)\end{array}} $$

Choosing the initial matrices \( {V}_1={W}_1=\mathbf{0} \) and applying Algorithm 2.1, we obtain the reflexive solutions of the matrix Eq. (2.1) after 96 iterations with tolerance ε = 10−12 as follows:

$$ {\displaystyle \begin{array}{l}{V}_{96}=\left(\begin{array}{cccccc}1& 3& -2& 0& 0& 0\\ {}2& -2& 1& 0& 0& 0\\ {}-2& 1& -3& 0& 0& 0\\ {}0& 0& 0& 4& 1& -2\\ {}0& 0& 0& -3& -2& -1\\ {}0& 0& 0& -1& 4& 1\end{array}\right)\in {R}_r^{6\times 6}(P)\ \mathrm{and}\\ {}{W}_{96}=\left(\begin{array}{cccccc}2& -1& 3& 0& 0& 0\\ {}-3& 2& 1& 0& 0& 0\\ {}-3& 4& 1& 0& 0& 0\\ {}0& 0& 0& 2& -1& -3\\ {}0& 0& 0& 4& 2& 1\\ {}0& 0& 0& 1& -3& 2\end{array}\right)\in {R}_r^{6\times 6}(S)\end{array}} $$

where \( P=S=\left(\begin{array}{cccccc}1& 0& 0& 0& 0& 0\\ {}0& 1& 0& 0& 0& 0\\ {}0& 0& 1& 0& 0& 0\\ {}0& 0& 0& -1& 0& 0\\ {}0& 0& 0& 0& -1& 0\\ {}0& 0& 0& 0& 0& -1\end{array}\right) \), with the corresponding residual ‖R96‖ = ‖C − (AV96 + BW96 − EV96F)‖ = 3.0152 × 10−12.

Moreover, it can be verified that PV96P = V96 and SW96S = W96. Table 4 lists the number of iterations k and the norm of the corresponding residual.

Table 4 The number of iterations and norm of the corresponding residual for the reflexive solution of the generalized Sylvester matrix equation Example 4.4

Conclusions

In this paper, an iterative method for solving the generalized Sylvester matrix equation over reflexive matrices was derived. With this iterative method, the solvability of the generalized Sylvester matrix equation can be determined automatically. Moreover, whenever the matrix equation is consistent, reflexive solutions can be obtained within finitely many iteration steps for any initial reflexive matrices. In addition, both the complexity and the convergence analysis of the proposed algorithm were presented. Furthermore, the least Frobenius norm reflexive solutions were obtained when special initial reflexive matrices are chosen. Finally, four numerical examples were presented to support the theoretical results and to illustrate the effectiveness of the proposed method.