1 Introduction

In this paper, \({\mathbb {C}}_{m, n}\) denotes the set of \(m\times n\) matrices with complex entries; \(A^{*}\), \({{\mathcal {R}}}\left( {A} \right) \), \(\text {rk}\left( A \right) \) and \(A^{\dagger }\) denote the conjugate transpose, range space, rank and Moore–Penrose inverse of A, respectively. It is well known that \(X= A^{\dagger }\) is the unique solution of \( AXA = A\), \(XAX=X\), \((AX)^{*}=AX \) and \( (XA)^{*}=XA \) [4, 24]. We also denote

$$\begin{aligned} P_{A}=AA^{\dagger } \hbox { and } F_{A}=I_{m}-A A^{\dagger }. \end{aligned}$$

The smallest positive integer k satisfying \(\text {rk}\left( A^{k+1} \right) =\text {rk}\left( A^{k} \right) \) is called the index of \(A\in {\mathbb {C}}_{n,n}\) and is denoted by \({\text {Ind}}(A)\). The Drazin inverse of \(A\in {\mathbb {C}}_{n,n}\) with \({\text {Ind}}(A)=k\) is denoted by \(A^{D}\); it is the unique solution of \( AXA^{k}=A^{k}\), \(XAX =X\) and \(AX=XA\) [4, 24]. In particular, when \(k=1\), it is called the group inverse of A and is denoted by \(X=A^{\#}\). Therefore, the Drazin inverse can be regarded as a generalization of the group inverse.

Let \(A\in {\mathbb {C}}_{n, n}\) with \({\text {Ind}}(A)=k\), and denote \(A^{N}=A-AA^{D}A\). Greville [9] proved that

$$\begin{aligned} X=A^{D}+A^{N} \end{aligned}$$
(1)

is the unique solution of

$$\begin{aligned} AX = XA, \ {A}^{l+1}X={A}^{l}, \ A{X}^{l+1}={X}^{l}, \ A-X={A}^{l}{X}^{l}(A-X), \end{aligned}$$

in which \(l\ge k\). In [1], this solution is denoted by \(A^\mathrm{(S)}\); we call it the G-S inverse of A. Obviously, when \(k=1\), \(A^{\mathrm{{(S)}}}=A^\#\). Therefore, the G-S inverse is also a generalization of the group inverse.
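For readers who wish to experiment with these objects numerically, the following sketch (an illustration added here, not part of the original development) computes the Drazin and G-S inverses with NumPy and checks Greville's four equations. The helper names mat_index and drazin, as well as the test matrix, are ours, and the Drazin inverse is obtained from the standard representation \(A^{D}=A^{l}\left( A^{2l+1}\right) ^{\dagger }A^{l}\) for \(l\ge {\text {Ind}}(A)\).

```python
import numpy as np

def mat_index(A):
    """Smallest positive integer k with rk(A^{k+1}) = rk(A^k)."""
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def drazin(A):
    """Drazin inverse via A^D = A^l (A^{2l+1})^+ A^l with l = Ind(A)."""
    l = mat_index(A)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

# a test matrix of index 2 (ours, chosen only for illustration)
A = np.array([[2., 0, 1, 0],
              [0, 3, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
k = mat_index(A)
AD = drazin(A)
X = AD + (A - A @ AD @ A)            # G-S inverse A^(S) = A^D + A^N, cf. (1)

Al, Xl = np.linalg.matrix_power(A, k), np.linalg.matrix_power(X, k)
print(np.allclose(A @ X, X @ A))              # AX = XA
print(np.allclose(A @ Al @ X, Al))            # A^{l+1} X = A^l
print(np.allclose(A @ Xl @ X, Xl))            # A X^{l+1} = X^l
print(np.allclose(A - X, Al @ Xl @ (A - X)))  # A - X = A^l X^l (A - X)
```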

In [1], it is shown that the G-S inverse is a kind of S-inverse. The S-inverse is a special generalized inverse: “for every \(\lambda \) and every vector x, x is a \(\lambda \)-vector of A of grade p if and only if it is a \(\lambda ^\dag \)-vector of X of grade p” [1]. Moreover, A is an S-inverse of X as well. When \(k=1\), since \(A^\#=A^{\mathrm{(S)}}\), the group inverse is also an S-inverse. When the index of A is 1, \(A^\#\) is the unique S-inverse of A in \(A\{1\} \cup A\{2\}\), in which \(A \{ 1 \} = \{ X: AXA=A \}\) and \(A\{2\}=\{X: XAX=X\}\) [1]. It is worth noting that the Drazin inverse is a generalized group inverse, but it is not an S-inverse of A; it is an \(S'\)-inverse of A [1]. There are also some interesting properties of the G-S inverse:

  1. (1)

although we call it an inverse, the G-S inverse is neither a \(\{1\}\)-inverse nor a \(\{2\}\)-inverse;

  2. (2)

    \((A^{\text { {(S)}}})^{\text {{(S)}}}=A\);

  3. (3)

    \(\text {rk} (A) =\text {rk} (A^{\text { {(S)}}})\);

  4. (4)

    \({\text {Ind}}(A)={\text {Ind}}(A^{\text { {(S)}}})\).

More properties of the G-S inverse can be found in [1, 9].

In [2], Baksalary and Trenkler define the core inverse of a square matrix: let \(A\in {\mathbb {C}}^{n,n}\) with \({\text {Ind}}(A)=1\), then the solution of

$$\begin{aligned} AX = AA^{\dagger }, \ {{\mathcal {R}}}\left( {X} \right) \subseteq {{\mathcal {R}}}\left( {A} \right) \end{aligned}$$

is unique; it is denoted as \(A^{\textcircled {\#}}\). Subsequently, the definition of the core inverse was extended to larger classes of matrices. For example, let \(A\in {\mathbb {C}}_{n,n}\) with \({\text {Ind}}(A)=k\). Prasad and Mohana introduce the core-EP inverse \({A^{\textcircled {\dag }}}\) of A and prove that \({A^{\textcircled {\dag }}} =A^k\left( {\left( {A^*} \right) ^kA^{k + 1}} \right) ^ \dag \left( {A^*} \right) ^k\) [21]; Baksalary and Trenkler introduce the BT inverse \(A^{\diamond }\) of A and prove that \(A^{\diamond }=(AP_{A})^{\dag }\) [3]; Malik and Thome introduce the DMP inverse \(A^{D,{\dagger }}\) and prove that \(A^{D,{\dagger }}=A^D AA^\dag \) [15]; Mehdipour and Salemi introduce the CMP inverse \(A^{C,{\dagger }}\) of A and prove that \(A^{C,{\dagger }}=A^\dagger AA^D AA^\dagger \) [16]. It is easy to check that, when \(k=1\), \({A^{\textcircled {\#}} =A^{\textcircled {\dag }} =A^{\diamond } =A^{{D,\dagger }} =A^{C,{\dagger }}}\); that is, the above generalized inverses are all generalized core inverses. Furthermore, Wang and Chen [26] introduce a generalized group inverse: the solution of \(AX^{2} = X\) and \({AX=A^{\textcircled {\dag }}}A\) is unique, is called the WG inverse of A, and is denoted as \({X=A^{\textcircled {W}}}\).
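As a numerical illustration (ours, not part of the original text), the closed forms quoted above can be evaluated directly with NumPy. The index is found by comparing ranks of successive powers, the Drazin inverse uses the standard representation \(A^{D}=A^{l}\left( A^{2l+1}\right) ^{\dagger }A^{l}\), and for the WG inverse we use the closed form \(A^{\textcircled {W}}=\left( A^{\textcircled {\dag }}\right) ^{2}A\); the latter is consistent with the defining equations of Wang and Chen but is an assumption here, so the sketch verifies those defining equations explicitly.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    """Prasad-Mohana formula A^{core-dag} = A^k ((A^*)^k A^{k+1})^+ (A^*)^k."""
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

def drazin(A):
    l = mat_index(A)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

A = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])
A_mp, A_D, A_cep = np.linalg.pinv(A), drazin(A), core_ep(A)

A_bt  = np.linalg.pinv(A @ A @ A_mp)     # BT inverse  (A P_A)^+
A_dmp = A_D @ A @ A_mp                   # DMP inverse A^D A A^+
A_cmp = A_mp @ A @ A_D @ A @ A_mp        # CMP inverse A^+ A A^D A A^+
A_wg  = A_cep @ A_cep @ A                # WG inverse, assumed form (A^{core-dag})^2 A

# the assumed WG formula satisfies the two defining equations of the WG inverse
print(np.allclose(A @ A_wg @ A_wg, A_wg))    # A X^2 = X
print(np.allclose(A @ A_wg, A_cep @ A))      # A X  = A^{core-dag} A
```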

A variety of generalized inverses have been established successively, arousing scholars' interest in their properties, algorithms, applications, etc. For example, generalized inverses are one of the tools for characterizing special matrices. Let \(A \in {\mathbb {C}}_{n, n}\) with \({\text {Ind}}(A)=k\). Then A is EP if and only if \(AA^\dag =A^\dag A\) [24]; it is i-EP if and only if \(A^k(A^k)^\dag =(A^k)^\dag A^k\) [14, 23]; and it is a WG matrix if and only if \({AA^{\textcircled {W}}=A^{\textcircled {W}} A}\) [27]. Generalized inverses are also one of the main tools for constructing and characterizing partial orders. For example,

  1. (1)

    \(A\mathop \le \limits ^{-}B\): \(A, B\in {\mathbb {C}}_{m, n}\), \(\text {rk}(B-A)=\text {rk}(B)-\text {rk}(A)\), [10];

  2. (2)

    \(A\mathop \le \limits ^{*}B\): \(A, B\in {\mathbb {C}}_{m, n}\), \(AA^{*}=BA^{*}\) and \(A^{*}A=A^{*}B\), [7].

More results about generalized inverses and their applications can be seen in [1, 4, 12, 13, 18, 19, 22, 24, 28].

In this paper, matrix decompositions are mainly used to study the properties and characterizations of a new generalized inverse, which we call the C-S inverse. A new binary relation is introduced via the C-S inverse, and its relationship with the star partial order is discussed. Based on this binary relation, a new partial order (the C-S partial order) is introduced. Characterizations of this partial order and of the star partial order are given by using the core-EP decomposition, and it is proved that the C-S partial order is a special kind of star partial order. Finally, the C-S partial order is applied to characterize special matrices such as EP matrices.

2 Preliminaries

In this section, we present some preliminary results.

Theorem 1

([25], Core-EP decomposition) Let \(A\in {\mathbb {C}}_{n, n}\), \({\text {Ind}}(A)=k\) and \({rk}\left( A^k\right) =t\). Then there exist \( {{A}}_{1}\) and \({{A}}_{2}\), which satisfy

$$\begin{aligned} A= {{A}}_{1}+ {{A}}_{2}, \end{aligned}$$

where \( {{A}}_{1}\in {\mathbb {C}}^{{\texttt {CM}}}_{n}\), \({ {{A}}_{2}^{k}}=0\) and \({ {{A}}_{1}^{*}} {{A}}_{2}= {{A}}_{2} {{A}}_{1}=0\). Furthermore, there exists a unitary matrix U satisfying

$$\begin{aligned} {{A}}_{1}=U \left[ \begin{matrix} T &{}S \\ 0 &{}0 \end{matrix} \right] U^{*} { and } {{A}}_{2}=U \left[ \begin{matrix} 0 &{}0 \\ 0 &{}N \end{matrix} \right] U^{*}, \end{aligned}$$
(2)

where \(T\in {\mathbb {C}}_{t, t}\) is nonsingular and N is nilpotent.

By applying the above decomposition, Ferreyra, Levis and Thome [8] get that

$$\begin{aligned} A^{k}&= U\left[ \begin{matrix} T^{k} &{}{\widetilde{T}} \\ 0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(3)
$$\begin{aligned} A^k\left( A^k\right) ^{\dagger }&= U \left[ \begin{matrix} I_t &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(4)

where

$$\begin{aligned} {\widetilde{T}}= {\sum \limits _{i =0}^{k-1} {T^{i}SN^{k-1-i}} } . \end{aligned}$$
(5)
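A numerical sketch of this decomposition may also be helpful; it is ours and not part of the original text. Since the first \(t=\text {rk}\left( A^k\right) \) left singular vectors of \(A^{k}\) span \({{\mathcal {R}}}\left( A^{k}\right) \), a unitary U as in Theorem 1 can be read off from the full SVD of \(A^{k}\); the function names and the test matrix below are ours.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep_decomposition(A):
    """Return U, T, S, N, k, t with U^* A U = [[T, S], [0, N]] as in Theorem 1."""
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    t = np.linalg.matrix_rank(Ak)
    U, _, _ = np.linalg.svd(Ak)      # first t columns of U span R(A^k)
    B = U.conj().T @ A @ U
    return U, B[:t, :t], B[:t, t:], B[t:, t:], k, t

A = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])
U, T, S, N, k, t = core_ep_decomposition(A)
n = A.shape[0]

print(np.allclose((U.conj().T @ A @ U)[t:, :t], 0))        # lower-left block vanishes
print(np.allclose(np.linalg.matrix_power(N, k), 0))        # N is nilpotent

# verify (3)-(5)
T_tilde = sum(np.linalg.matrix_power(T, i) @ S @ np.linalg.matrix_power(N, k - 1 - i)
              for i in range(k))
Ak = np.linalg.matrix_power(A, k)
top = np.block([[np.linalg.matrix_power(T, k), T_tilde],
                [np.zeros((n - t, t)), np.zeros((n - t, n - t))]])
P = np.zeros((n, n)); P[:t, :t] = np.eye(t)
print(np.allclose(Ak, U @ top @ U.conj().T))                          # (3)
print(np.allclose(Ak @ np.linalg.pinv(Ak), U @ P @ U.conj().T))       # (4)
```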

Of course, the core-EP decomposition can also be used to characterize generalized inverses. For example, characterizations of \(A^{\dag }\), \(A^{D}\) and \(A^{\textcircled {\dag }}\) are given in [8, 25]:

$$\begin{aligned} A^{\dagger }&=U \left[ \begin{matrix} T^{*}\triangle &{}-T^{*}\triangle SN^{\dagger } \\ \left( I_{n-t}-N^\dag N\right) S^{*}\triangle &{}N^{\dagger }-\left( I_{n-t}-N^\dag N\right) S^{*}\triangle SN^{\dagger } \end{matrix} \right] U^{*}; \end{aligned}$$
(6)
$$\begin{aligned} A^{D}&=U \left[ \begin{matrix} T^{-1} &{}T^{-(k+1)}{\widetilde{T}} \\ 0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(7)
$$\begin{aligned} A^{\textcircled {\dag }}&=U\left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(8)

where \(\triangle =\left( TT^{*}+S\left( I_{n-t}-N^\dag N\right) S^{*}\right) ^{-1}\). Furthermore, by applying (1) and (7), we get a characterization of the G-S inverse:

$$\begin{aligned} A^{\text {{(S)}}} = U \left[ \begin{matrix} T^{-1} &{}\left( T^{-(k+1)} -T^{-(k-1)}\right) {\widetilde{T}}+S \\ 0 &{}N \end{matrix} \right] U^{*}. \end{aligned}$$
(9)

Theorem 2

([8, 25]) Let \(A\in {\mathbb {C}}_{n, n}\) and \({\text {Ind}}(A)=k\), then

$$\begin{aligned} AA^{\textcircled {\dag }}=P_{A^{k}},\ A^{\textcircled {\dag }} =A^{D}P_{A^{k}}. \end{aligned}$$
(10)

Theorem 3

([28], EP-nilpotent decomposition) Let \(A\in {\mathbb {C}}_{n, n}\), \({\text {Ind}}(A)=k\) and \({rk}\left( A^k\right) =t\). Then, there exist \({\widehat{A}}_{1}\) and \({{\widehat{A}}}_{2}\), which satisfy

$$\begin{aligned} A={{\widehat{A}}}_{1}+{{\widehat{A}}}_{2}, \end{aligned}$$

where \({\widehat{A}}_{1}\in {\mathbb {C}}^{{\texttt {EP}}}_{n}\), \({\widehat{A}}_{2}^{k+1}=0\) and \({\widehat{A}}_{2}{\widehat{A}}_{1}=0\). Furthermore, there exists a unitary matrix U satisfying

$$\begin{aligned} {{\widehat{A}}}_{1}=U \left[ \begin{matrix} T &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*} { and } {{\widehat{A}}}_{2}=U \left[ \begin{matrix} 0 &{}S \\ 0 &{}N \end{matrix} \right] U^{*}, \end{aligned}$$
(11)

where \(T\in {\mathbb {C}}_{t, t}\) is nonsingular, and N is nilpotent.

Theorem 4

( [20]) Let \(A\in {\mathbb {C}}_{n, n}\) and \(\text {rk}({A})=r\). Then A is EP if and only if there exists a unitary matrix U satisfying

$$\begin{aligned} A=U \left[ \begin{matrix} T &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(12)

where \(T\in {\mathbb {C}}_{r, r}\) is invertible.

Theorem 5

( [27]) Let \(A\in {\mathbb {C}}_{n,n}\), \(\text {Ind}(A)=k\) and \({rk}\left( A^k\right) =t\). Then, A is i-EP if and only if there exists a unitary matrix U, such that

$$\begin{aligned} A=U\left[ \begin{matrix} T &{} 0 \\ 0 &{} N \\ \end{matrix} \right] U^*, \end{aligned}$$
(13)

where \(T\in {\mathbb {C}}_{t, t}\) is nonsingular, and N is nilpotent with \(\text {Ind}(N)=k\).

3 The C-S Inverse

In this section, we give the definition of the C-S inverse and study its properties, as well as its differences from, and connections with, several other generalized inverses.

3.1 Definition of the C-S Inverse

Theorem 6

Let \(A\in {\mathbb {C}}_{n, n}\) and \({\text {Ind}}(A)=k\). Then, the solution of

$$\begin{aligned} XA^{k+1}=A^{k}, \left( A^{k}{X}^{k}\right) ^{*}=A^{k}{X}^{k}, \ A-X={A}^{k}{X}^{k}(A-X) \end{aligned}$$
(14)

is unique. Furthermore, there exists a unitary matrix U such that

$$\begin{aligned} X = U\left[ \begin{matrix} T ^{-1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} =A^{\textcircled {\dag }}+A_2, \end{aligned}$$
(15)

where \(T\in {\mathbb {C}}_{\text {rk}\left( A^k\right) , \text {rk}\left( A^k\right) }\) is nonsingular, N is nilpotent, and \(A_2=A-AA^{\textcircled {\dag }}A\).

Proof

Let the core-EP decomposition of A be as shown in (2). Substituting (3) and (15) in (14), we have

$$\begin{aligned} XA^{k+1}-A^{k}&= U\left[ \begin{matrix} T^{-1}T^{k+1} &{}T^{-1}T{\widetilde{T}} \\ 0 &{}0 \end{matrix} \right] U^{*} - U\left[ \begin{matrix} T^{k} &{}{\widetilde{T}} \\ 0 &{}0 \end{matrix} \right] U^{*} =0, \end{aligned}$$
(16)
$$\begin{aligned} A^{k}{X}^{k}&= U\left[ \begin{matrix} T^{k} &{}{\widetilde{T}} \\ 0 &{}0 \end{matrix} \right] U^{*} U\left[ \begin{matrix} T^{-k} &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*} = U\left[ \begin{matrix} I _{\text {rk}(A^k)} &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*} \nonumber \\&=\left( A^{k}{X}^{k}\right) ^{*}, \nonumber \\ {A}^{k}{X}^{k}(A-X)&= U\left[ \begin{matrix} I _{\text {rk}(A^k)} &{}0 \\ 0 &{}0 \end{matrix} \right] \left[ \begin{matrix} T-T^{-1} &{}S \\ 0 &{}N-N \end{matrix} \right] U^{*} = U\left[ \begin{matrix} T-T^{-1} &{}S \\ 0 &{}0 \end{matrix} \right] U^{*} \nonumber \\&= A-X . \end{aligned}$$
(17)

Therefore, we get that (14) is consistent, and (15) is one of the solutions.

Next, we apply the EP-nilpotent decomposition to study uniqueness of the solution of (14). Let the EP-nilpotent decomposition of A be as in (11). Write

$$\begin{aligned} X =&F_{{\widehat{A}}_1}XF_{{\widehat{A}}_1} + P_{{\widehat{A}}_1}XF_{{\widehat{A}}_1} + P_{{\widehat{A}}_1}X P_{{\widehat{A}}_1} +F_{{\widehat{A}}_1} XP_{{\widehat{A}}_1}. \end{aligned}$$
(18)

Since \({\widehat{A}}_{2}{\widehat{A}}_{1}=0\) and \({\widehat{A}}_{2}^{k+1}=0\), by applying (11) it is easy to check that

$$\begin{aligned} A^{k}&= \left( {\widehat{A}}_{1}+{\widehat{A}}_{2}\right) ^{k}={{\widehat{A}}}_{1}^{k}+{\sum \limits _{i =0}^{k-1} {{{\widehat{A}}}_{1}^{i}{{\widehat{A}}}_{2}^{k-i}} }, \end{aligned}$$
(19)
$$\begin{aligned} A^{k+1}&= \left( {\widehat{A}}_{1}+{\widehat{A}}_{2}\right) ^{k+1} ={{\widehat{A}}}_{1}^{k+1}+{\sum \limits _{i =1}^{k} {{{\widehat{A}}}_{1}^{i}{{\widehat{A}}}_{2}^{k+1-i}} }, \end{aligned}$$
(20)
$$\begin{aligned} {\widehat{A}}_{1}&= {A}^{k+1}\left( {{\widehat{A}}}_{1}^{k}\right) ^{\dagger }, \end{aligned}$$
(21)
$$\begin{aligned} F_{{\widehat{A}}_1}A^{k}&=0 . \end{aligned}$$
(22)

Since \(XA^{k+1}=A^k\), by applying (21), we get

$$\begin{aligned} P_{{\widehat{A}}_1}XP_{{\widehat{A}}_1} =P_{{\widehat{A}}_1}X{A}^{k+1}\left( {{\widehat{A}}}_{1}^{k}\right) ^{\dagger }{{\widehat{A}}}_{1}^{\dagger } =P_{{\widehat{A}}_1}A^{k}\left( {{\widehat{A}}}_{1}^{k}\right) ^{\dagger }{{\widehat{A}}}_{1}^{\dagger } ={{\widehat{A}}}_{1}^{\dagger }. \end{aligned}$$
(23)

By applying (21) and (22) we get

$$\begin{aligned} F_{{\widehat{A}}_1}XP_{{\widehat{A}}_1}&=F_{{\widehat{A}}_1}XA^{k+1}\left( {{\widehat{A}}}_{1}^{k}\right) ^{\dagger }{{\widehat{A}}}_{1}^{\dagger } =F_{{\widehat{A}}_1}A^{k} \left( {{\widehat{A}}}_{1}^{k}\right) ^{\dagger }{{\widehat{A}}}_{1}^{\dagger } =0. \end{aligned}$$
(24)

Since \( A-X={A}^{k}{X}^{k}(A-X)\), by applying (22), we get

$$\begin{aligned} F_{{\widehat{A}}_1}(A-X)F_{{\widehat{A}}_1} =F_{{\widehat{A}}_1}A^{k}X^{k}(A-X)F_{{\widehat{A}}_1} =0. \end{aligned}$$

It follows that

$$\begin{aligned} F_{{\widehat{A}}_1}XF_{{\widehat{A}}_1} = F_{{\widehat{A}}_1}AF_{{\widehat{A}}_1}. \end{aligned}$$
(25)

Therefore, by applying (23), (24) and (25) to (18), we get

$$\begin{aligned} X = {{\widehat{A}}}_{1}^{\dagger }+P_{{\widehat{A}}_1}XF_{{\widehat{A}}_1} +F_{{\widehat{A}}_1}AF_{{\widehat{A}}_1}. \end{aligned}$$
(26)

Furthermore, denote

$$\begin{aligned} X=X_{1}+X_{2}\,{and}\,X_2 = X_{21}+ X_{22}, \end{aligned}$$
(27)

in which \(X_{1}={{\widehat{A}}}_{1}^{\dagger }\), \(X_{21}=P_{{\widehat{A}}_1}XF_{{\widehat{A}}_1}\) and \(X_{22}=F_{{\widehat{A}}_1}AF_{{\widehat{A}}_1} \).

Since \({\widehat{A}}_{1}\) is EP, we have \(F_{{\widehat{A}}_1}{\widehat{A}}_1^\dag =0\), so \(X_{2}X_{1}=0\) and

$$\begin{aligned} X^{k} = {X}_{1}^{k}+{\sum \limits _{i=0}^{k-1}{X}_{1}^{i}{X}_{2}^{k-i}}. \end{aligned}$$
(28)

By applying \(X_{22}^k=X_{22}X_{21}=0\), we get

$$\begin{aligned}&A^{k}X^{k}= {\widehat{A}}_{1}^{k}\left( {\widehat{A}}_{1}^{\dagger }\right) ^{k}+ {\widehat{A}}_{1}^{k}{\sum \limits _{i =0}^{k-1} {X_1^{i}X_{21}X_{22}^{k-1-i}} } . \end{aligned}$$

Since \(A^{k}X^{k}\) and \({{\widehat{A}}}_{1}^{k}({{\widehat{A}}}_{1}^{\dagger })^{k}\) are Hermitian, \({\widehat{A}}_{1}^{k}{\sum \limits _{i =0}^{k-1} {X_1^{i}X_{21}X_{22}^{k-1-i}} }\) is Hermitian. Since \({\mathcal {R}}\left( {\widehat{A}}_{1}^{k}{\sum \limits _{i =0}^{k-1} {X_1^{i}X_{21}X_{22}^{k-1-i}} }\right) \subseteq {\mathcal {R}}\left( {\widehat{A}}_1 \right) \), \({\mathcal {R}}\left( \left( {\widehat{A}}_{1}^{k}{\sum \limits _{i =0}^{k-1} {X_1^{i}X_{21}X_{22}^{k-1-i}} }\right) ^*\right) \subseteq {\mathcal {R}}\left( F_{{\widehat{A}}_1} \right) \) and \({\mathcal {R}}\left( {\widehat{A}}_1 \right) \cap {\mathcal {R}}\left( F_{{\widehat{A}}_1} \right) =\{0\}\), we get

$$\begin{aligned}&{\widehat{A}}_{1}^{k}{\sum \limits _{i =0}^{k-1} {X_1^{i}X_{21}X_{22}^{k-1-i}} }\\&= {\widehat{A}}_{1}X_{21}+{\widehat{A}}_{1}^2X_{21}X_{22} +\cdots +{\widehat{A}}_{1}^{k-1}X_{21}X_{22}^{k-2}+{\widehat{A}}_{1}^{k}X_{21}X_{22}^{k-1} =0. \end{aligned}$$

Since \({\widehat{A}}_1\) is group invertible, by applying \(X_{22}^k=0\) and \( {\mathcal {R}}\left( X_{21}\right) \subseteq {\mathcal {R}}\left( {\widehat{A}}_1 \right) \), we get \(X_{21}=0\), that is,

$$\begin{aligned} P_{{\widehat{A}}_1}XF_{{\widehat{A}}_1}=0. \end{aligned}$$
(29)

Since \({\widehat{A}}_1\) and \({\widehat{A}}_2\) are unique, by applying (23), (24), (25) and (29) to (18), we get that the solution of (14) is unique. \(\square \)

Definition 1

Let \(A\in {\mathbb {C}}_{n, n}\) with \({\text {Ind}}(A)=k\). The C-S inverse of A is defined as the solution of (14) and is denoted as \(A^{\textcircled {S}}\).

Remark 1

From (15), we see that \(A^{\textcircled {S}}=A^{\textcircled {\#}}\) when \(k=1\). Thus, the C-S inverse is another generalization of the core inverse.
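For completeness, here is a small numerical sketch (ours, not part of the original text) that builds the C-S inverse from (15), with \(A^{\textcircled {\dag }}\) computed by the Prasad–Mohana formula quoted in Sect. 1, and checks the three equations of (14) together with the rank equality of Theorem 7 below; the function names and the test matrix are ours.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

def cs_inverse(A):
    """C-S inverse via (15): A^{circ-S} = A^{core-dag} + (A - A A^{core-dag} A)."""
    Q = core_ep(A)
    return Q + (A - A @ Q @ A)

# a test matrix of index 2 (ours)
A = np.array([[2., 0, 1, 0], [0, 3, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])
k, X = mat_index(A), cs_inverse(A)
Ak, Xk = np.linalg.matrix_power(A, k), np.linalg.matrix_power(X, k)
H = Ak @ Xk

print(np.allclose(X @ np.linalg.matrix_power(A, k + 1), Ak))   # X A^{k+1} = A^k
print(np.allclose(H, H.conj().T))                              # (A^k X^k)^* = A^k X^k
print(np.allclose(A - X, H @ (A - X)))                         # A - X = A^k X^k (A - X)
print(np.linalg.matrix_rank(X) == np.linalg.matrix_rank(A))    # rk(A^{circ-S}) = rk(A)
```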

3.2 Properties and Characterizations of the C-S Inverse

Let \(A\in {\mathbb {C}}_{n, n}\) with \({\text {Ind}}(A)=k\), let the core-EP decomposition of A be as in Theorem 1, and let the C-S inverse of A be as in (15). It is obvious that \(\text {rk}({A})= \text {rk}({T}) +\text {rk}({N})\) and \(\text {rk}\left( {A^{\textcircled {S}}}\right) = \text {rk}\left( {T^{-1}}\right) +\text {rk}({N})\). Therefore, we have Theorem 7.

Theorem 7

Let \(A\in {\mathbb {C}}_{n, n}\) with \({\text {Ind}}(A)=k\), then

$$\begin{aligned} \text {rk}\left( {A^{\textcircled {S}}}\right) = \textrm{rk}\left( A\right) . \end{aligned}$$

The following example shows that \(A^{\textcircled {S}}\) is not the same as \(A^{\dagger }\), \(A^{D}\), \(A^{\textcircled {\dag }}\), \(A^{\diamond } \), \(A^{D,{\dagger }}\), \(A^{C,{\dagger }}\), \(A^{{\dagger },D}\), \(A^{\textcircled {W}}\) and \(A^{\mathrm{{(S)}}}\).

Example 1

Let \(A ={ \left[ \begin{matrix} 1 &{}0 &{}1 &{}0 \\ 0 &{}1 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] }\). Then,

$$\begin{aligned} A^{\textcircled {S}} = { \left[ \begin{matrix} 1 &{}0 &{}0 &{}0 \\ 0 &{}1 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] } \end{aligned}$$

and

$$\begin{aligned} A^{\text {{(S)}}}&={ \left[ \begin{matrix} 1 &{}0 &{}1 &{}0 \\ 0 &{}1 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] }, \ A^{\dagger } ={ \left[ \begin{matrix} 0.5 &{}0 &{}0 &{}0 \\ 0 &{}1 &{}-1 &{}0 \\ 0.5 &{}0 &{}0 &{}0 \\ 0 &{}0 &{}1 &{}0 \end{matrix} \right] },\ A^{D} ={ \left[ \begin{matrix} 1 &{}0 &{}1 &{}1 \\ 0 &{}1 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] }, \\ A^{\textcircled {\dag }}&= { \left[ \begin{matrix} 1 &{}0 &{}0 &{}0 \\ 0 &{}1 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] },\ A^{D,{\dagger }}= { \left[ \begin{matrix} 1 &{} 0 &{} 1 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{matrix} \right] },\ A^{{\dagger },D}= { \left[ \begin{matrix} 0.5 &{} 0 &{} 0.5 &{} 0.5\\ 0 &{} 1 &{} 0 &{} 1\\ 0.5 &{} 0 &{} 0.5 &{} 0.5\\ 0 &{} 0 &{} 0 &{} 0 \end{matrix} \right] }, \\ A^{C,{\dagger }}&= {\left[ \begin{matrix} 0.5 &{} 0 &{} 0.5 &{} 0\\ 0 &{} 1&{} 0 &{} 0\\ 0.5 &{} 0 &{} 0.5 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{matrix} \right] },\ A^{\diamond } = { \left[ \begin{matrix} 0.5 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0.5 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{matrix} \right] },\ A^{\textcircled {W}} ={ \left[ \begin{matrix} 1 &{}0 &{}1 &{}0 \\ 0 &{}1 &{}0 &{}1 \\ 0 &{}0 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}0 \end{matrix} \right] }. \end{aligned}$$
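The matrices displayed in Example 1 can be reproduced numerically. The sketch below (ours) uses the closed forms quoted in Sect. 1 together with (1) and (15); the Drazin inverse is again computed from the representation \(A^{D}=A^{l}\left( A^{2l+1}\right) ^{\dagger }A^{l}\), which is an assumption of this sketch rather than a formula from the text.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

def drazin(A):
    l = mat_index(A)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

A = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])
A_mp, A_D, A_cep = np.linalg.pinv(A), drazin(A), core_ep(A)

A_cs  = A_cep + (A - A @ A_cep @ A)     # C-S inverse, (15)
A_gs  = A_D + (A - A @ A_D @ A)         # G-S inverse, (1)
A_dmp = A_D @ A @ A_mp                  # DMP inverse
A_cmp = A_mp @ A @ A_D @ A @ A_mp       # CMP inverse

print(np.round(A_cs, 4))                # matches the displayed A^{circ-S}
print(np.allclose(A_gs, A))             # for this A, A^{(S)} coincides with A
print(np.round(A_mp, 4))                # matches the displayed A^{dagger}
```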

Theorem 8

Let \(A\in {\mathbb {C}}_{n, n}\) and \({\text {Ind}}(A)=k\). Then, the following conditions are equivalent:

  1. (1)

    \(X =A^{\textcircled {S}}\);

  2. (2)

    \(A^{k} X^{k} = AA^{\textcircled {\dag }}\), \(A^{k} X^{k+1} = A^{\textcircled {\dag }}\), \(A-X=A^{k} X^{k}(A-X)\);

  3. (3)

    \(A^{k} X^{k} = P_{A^{k}}\), \( A^{k} X^{k+1} = A^{D}P_{A^{k}}\), \( A-X=A^{k} X^{k}(A-X)\);

  4. (4)

    \( P_{A^{k}} X = A^{\textcircled {\dag }}\), \( F_{A^{k}} (A-X)=0\);

  5. (5)

    \(F_{A^{k}} A +2 P_{A^{k}} X=A^{\textcircled {\dag }}+X\).

Proof

Let A be of the form (2); then the C-S inverse of A is of the form (15).

“(1)\(\Rightarrow \)(2)” By applying (8) and (16), we get \(A^{k} X^{k} = AA^{\textcircled {\dag }}\). By applying (15) and (16), we get \(A^{k} X^{k+1} = U \left[ {\begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix}} \right] U^{*}\); it then follows from (8) that \(A^{k} X^{k+1} = A^{\textcircled {\dag }}\). Furthermore, by applying (15), we get \( A-A^{\textcircled {S}} =U \left[ {\begin{matrix} T-T^{-1} &{}S \\ 0 &{}0 \end{matrix}} \right] U^{*}\), and applying (16) then gives \(A-X=A^{k} X^{k}(A-X)\).

“(2)\(\Rightarrow \)(1)” Write

$$\begin{aligned} X = U\left[ \begin{matrix} X_{1} &{}X_{2} \\ X_{3} &{}X_{4} \end{matrix} \right] U^{*}, \end{aligned}$$
(30)

in which \(X_{1}\in {\mathbb {C}}_{t, t}\). By applying \(A^{k} X^{k} = AA^{\textcircled {\dag }}\), \(A^{k} X^{k+1} = A^{\textcircled {\dag }}\) and \(A-X=A^{k} X^{k}(A-X)\), we have

$$\begin{aligned} 0&=(I-AA^{\textcircled {\dag }})(A-X) = U \left[ \begin{matrix} 0 &{} 0 \\ -X_3 &{} N-X_4 \end{matrix} \right] U^*, \\ 0&=AA^{\textcircled {\dag }} X-A^{\textcircled {\dag }} = U\left[ \begin{matrix} X_1- T^{-1} &{} X_2 \\ 0 &{} 0 \end{matrix} \right] U^*. \end{aligned}$$

It follows that \(X_{1} =T^{-1} \), \(X_{2}=0\), \(X_{3}=0\) and \(X_{4}=N\). Therefore, \(X=A^{\textcircled {S}}\).

“(2)\(\Leftrightarrow \)(3)” By applying (10), we get that (2) and (3) are equivalent.

“(1)\(\Rightarrow \)(4)” By using (4) and (15), we have

$$\begin{aligned} P_{A^{k}} X&= U \left[ \begin{matrix} I_{\text {rk}(A^k)} &{}0 \\ 0 &{}0 \end{matrix} \right] \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} = U \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*}= A^{\textcircled {\dag }}, \\ F_{A^{k}} (A-X)&=U \left[ \begin{matrix} 0 &{}0 \\ 0 &{} I_{n-\text {rk}(A^k)} \end{matrix} \right] \left[ \begin{matrix} T-T^{-1} &{}S \\ 0 &{}0 \end{matrix} \right] U^{*} =0. \end{aligned}$$

“(4)\(\Rightarrow \)(5)” For \( P_{A^{k}} X = A^{\textcircled {\dag }}\) and \( F_{A^{k}} (A-X)=0\), we have \( P_{A^{k}} X + F_{A^{k}} (A-X)= A^{\textcircled {\dag }}\), that is, \(F_{A^{k}} A +2 P_{A^{k}} X=A^{\textcircled {\dag }}+X\).

“(5)\(\Rightarrow \)(1)” Let X be of the form (30). Then,

$$\begin{aligned}&F_{A^{k}} A +2 P_{A^{k}} X-A^{\textcircled {\dag }}-X\\&= U \left[ \begin{matrix} 0 &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} + U\left[ \begin{matrix} 2 X_{1} &{}2X_{2} \\ 0 &{}0 \end{matrix} \right] U^{*} - U \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix} \right] U^{*} -U\left[ \begin{matrix} X_{1} &{}X_{2} \\ X_{3} &{}X_{4} \end{matrix} \right] U^{*} \\&=U\left[ \begin{matrix} X_{1} -T^{-1} &{}X_{2} \\ X_{3} &{}N-X_{4} \end{matrix} \right] U^{*}. \end{aligned}$$

From \(F_{A^{k}} A +2 P_{A^{k}} X=A^{\textcircled {\dag }}+X\), it follows that \(X_{1} =T^{-1} \), \(X_{2}=0\), \(X_{3}=0\) and \(X_{4}=N\). Therefore, we get \(X=A^{\textcircled {S}}\). \(\square \)
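The equivalent conditions of Theorem 8 can also be checked by machine for a concrete matrix. The sketch below (ours) evaluates conditions (2)-(5) for \(X=A^{\textcircled {S}}\) using the matrix of Example 1 below; every printed value should be True.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

A = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])
k = mat_index(A)
Q = core_ep(A)                              # A^{core-dag}
X = Q + (A - A @ Q @ A)                     # A^{circ-S}, by (15)
Ak, Xk = np.linalg.matrix_power(A, k), np.linalg.matrix_power(X, k)
P = Ak @ np.linalg.pinv(Ak)                 # P_{A^k}
F = np.eye(A.shape[0]) - P                  # F_{A^k}

print(np.allclose(Ak @ Xk, A @ Q), np.allclose(Ak @ Xk @ X, Q))   # (2): A^k X^k and A^k X^{k+1}
print(np.allclose(Ak @ Xk, P))                                    # (3): A^k X^k = P_{A^k}
print(np.allclose(P @ X, Q), np.allclose(F @ (A - X), 0))         # (4)
print(np.allclose(F @ A + 2 * P @ X, Q + X))                      # (5)
```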

Theorem 9

Let \(A\in {\mathbb {C}}_{n, n}\) and \({\text {Ind}}(A)=k\). Then,

  1. (1)

    \(A^{\textcircled {S}}=0\) \(\Leftrightarrow \) \(A=0\);

  2. (2)

    \(A^{\textcircled {S}}=A\) \(\Leftrightarrow \) \(T^{-1}=T\), \(S=0\);

  3. (3)

    \(A^{\textcircled {S}}=A^{*}\) \(\Leftrightarrow \) T is unitary, \(S=0\), \(N=0\);

  4. (4)

    \(A^{\textcircled {S}}=P_{A}\) \(\Leftrightarrow \) \(T=I_{\textrm{rk}(A^k)}\), \(N=0\).

Proof

Let the core-EP decomposition of A be of the form (2). Then,

(1) Since \(A^{\textcircled {S}}=0\), the nonsingular block T must be vacuous, that is, \(\text {rk}(A^k)=0\), and \(N=0\). Therefore, \(A=0\).

(2) Since \(A^{\textcircled {S}}=A\), \( U\left[ \begin{matrix} T^{ - 1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} = U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^{*} \). Therefore, \(T^{-1}=T\) and \(S=0\).

(3) Since \(A^{\textcircled {S}}=A^{*}\), \(U\left[ \begin{matrix} T^{ - 1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} = U\left[ \begin{matrix} T^{*} &{}0 \\ S^{*} &{}N^{*} \end{matrix} \right] U^{*}\), that is, \(T^{-1}=T^{*}\), \(S=0\) and \(N=N^{*}\). Therefore, T is unitary, \(S=0\) and N is Hermitian.

Since N is Hermitian, if k were greater than 1, then applying \(N^k=0\) would give \(N^{k-1} N^*=0\), hence \(N^{k-1} N^\dag =0\), \(N^{k-1} N^\dag N=0\) and \(N^{k-1}=0\). Therefore, the index of N would be at most \(k-1\), which contradicts \({\text {Ind}}(A)=k\). So, \(N=0\).

(4) Since \(A^{\textcircled {S}}=P_{A}\), \(A^{\textcircled {S}}=AA^{\dag }\). Furthermore, we get \( U\left[ \begin{matrix} T^{ - 1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} = U\left[ \begin{matrix} I_{\text {rk}(A^k)} &{}0 \\ 0 &{}NN^{\dag } \end{matrix} \right] U^{*}\). It follows that \(T=I_{\text {rk}(A^k)}\) and \(N=0\). \(\square \)

Theorem 10

Let \(A\in {\mathbb {C}}_{n, n}\). Then, the following conditions are equivalent:

  1. (1)

    \(A^{\textcircled {S}}=A^{\dag }\);

  2. (2)

    \(A^{\textcircled {S}}=A^D\);

  3. (3)

    A is EP.

Proof

Let \(A\in {\mathbb {C}}_{n, n}\), \({\text {Ind}}(A)=k\) and the core-EP decomposition of A be as shown in (2).

“(1)\(\Rightarrow \)(3)” Applying (6) and (15), we get that \(A^{\textcircled {S}}=A^{\dag }\) is equivalent to \( T^{*}\triangle =T^{-1}\), \(-T^{*}\triangle SN^{\dagger }=0\), \(\left( I_{n-t}-N^\dag N\right) S^{*}\triangle =0\) and \(N^{\dagger }-\left( I_{n-t}-N^\dag N\right) S^{*}\triangle SN^{\dagger }=N\).

Since T and \(\triangle \) are nonsingular, from \(-T^{*}\triangle SN^{\dagger }=0\) and \(\left( I_{n-t}-N^\dag N\right) S^{*}\triangle =0\) we get \(SN^{\dagger }=0\) and \(S\left( I_{n-t}-N^\dag N\right) =0\). Therefore, \(S=0\). Substituting \(S=0\) in \(N^{\dagger }-\left( I_{n-t}-N^\dag N\right) S^{*}\triangle SN^{\dagger }=N\), we get \(N^{\dagger }=N\). Since N is nilpotent, it follows that \(N=NN^{\dagger }N=N^{3}=N^{5}=\cdots =0\). Since \(S=0\) and \(N=0\), by Theorem 4 it follows that A is EP.

“(2)\(\Rightarrow \)(3)” Since \( \text {rk} \left( A^D\right) = \textrm{rk} \left( T\right) \), from Theorem 7 we get that \( \text {rk}\left( {A^{\textcircled {S}}}\right) =\text {rk} \left( A^D\right) \) is equivalent to \(N=0\). Substituting \(N=0\) in (7), we get

$$\begin{aligned} A^D =U \left[ \begin{matrix} T^{-1} &{}T^{-2} S \\ 0 &{}0 \end{matrix} \right] U^{*}. \end{aligned}$$
(31)

If \(A^{\textcircled {S}}=A^D\), applying (15) and (31) we get \(S=0\). By Theorem 4, it follows that A is EP.

“(3)\(\Rightarrow \)(1) and (2)” When A is EP, the two equalities in (1) and (2) can trivially be derived applying (12). \(\square \)
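Theorem 10 is easy to confirm numerically: for an EP matrix the three inverses coincide, while for the non-EP matrix of Example 1 they do not. The sketch below is ours; the symmetric (hence EP) test matrix E, the helper names, and the Drazin representation \(A^{D}=A^{l}\left( A^{2l+1}\right) ^{\dagger }A^{l}\) are our own choices and assumptions.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

def cs_inverse(A):
    Q = core_ep(A)
    return Q + (A - A @ Q @ A)

def drazin(A):
    l = mat_index(A)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

E = np.array([[1., 1, 0], [1, 1, 0], [0, 0, 0]])        # Hermitian, hence EP
print(np.allclose(E @ np.linalg.pinv(E), np.linalg.pinv(E) @ E))          # EP check
print(np.allclose(cs_inverse(E), np.linalg.pinv(E)),
      np.allclose(cs_inverse(E), drazin(E)))                              # True True

A = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])   # not EP
print(np.allclose(cs_inverse(A), np.linalg.pinv(A)),
      np.allclose(cs_inverse(A), drazin(A)))                              # False False
```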

4 Applications of the C-S Inverse

In this section, we discuss several applications of the C-S inverse.

4.1 A Binary Relation Based on the C-S Inverse

In this subsection, based on the C-S inverse, we introduce a binary relation:

$$\begin{aligned} A\mathop \le \limits ^{\textcircled {S}}B: A, B\in {\mathbb {C}}_{n, n}, A\left( A^{\textcircled {S}}\right) ^{*} =B\left( A^{\textcircled {S}}\right) ^{*} \hbox { and } \left( A^{\textcircled {S}}\right) ^{*}A =\left( A^{\textcircled {S}}\right) ^{*}B. \end{aligned}$$
(32)

Theorem 11

Let \(A, B\in {\mathbb {C}}_{n, n}\). Then, \(A\mathop \le \limits ^{\textcircled {S}}B\) if and only if there exists a unitary matrix U such that

$$\begin{aligned} A =U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^{*} \,{ and }\, B =U\left[ \begin{matrix} T &{}S \\ 0 &{}B_{4} \end{matrix} \right] U^{*}, \end{aligned}$$
(33)

where T is invertible, N is nilpotent, and \(N\mathop \le \limits ^{*}B_{4}\).

Proof

“\(\Rightarrow \)” Let the core-EP decomposition of A be as shown in (2), and let \(A^{\textcircled {S}}\) be of the form (15). Write

$$\begin{aligned} B = U\left[ \begin{matrix} B_{1} &{}B_{2} \\ B_{3} &{}B_{4} \end{matrix} \right] U^{*}. \end{aligned}$$
(34)

Applying \(A\left( A^{\textcircled {S}}\right) ^{*}=B\left( A^{\textcircled {S}}\right) ^{*}\),

$$\begin{aligned} A(A^{\textcircled {S}})^{*} =U\left[ \begin{matrix} T\left( T^{ - 1}\right) ^{*} &{}SN^{*} \\ 0 &{}NN^{*} \end{matrix} \right] U^{*} \hbox { and } B\left( A^{\textcircled {S}}\right) ^{*} =U\left[ \begin{matrix} B_{1}\left( T^{ - 1}\right) ^{*} &{}B_{2}N^{*} \\ B_{3}\left( T^{ - 1}\right) ^{*} &{}B_{4}N^{*} \end{matrix} \right] U^{*} \end{aligned}$$

gives

$$\begin{aligned} SN^{*}=B_{2}N^{*}, \ NN^{*}=B_{4}N^{*}, \end{aligned}$$
(35)
$$\begin{aligned} T\left( T^{ - 1}\right) ^{*}=B_{1}\left( T^{ - 1}\right) ^{*}, \ B_{3}\left( T^{ - 1}\right) ^{*}=0 . \end{aligned}$$
(36)

From (36), we have

$$\begin{aligned} B_{1}=T \text {and}\, B_{3}=0 . \end{aligned}$$
(37)

Since \((A^{\textcircled {S}})^{*}A=(A^{\textcircled {S}})^{*}B\), by applying (37) we get

$$\begin{aligned}&\left( A^{\textcircled {S}}\right) ^{*}\!A {=}U\left[ \begin{matrix} \left( T^{ {-}1}\right) ^{*}T &{}\left( T^{ {-} 1}\right) ^{*}S \\ 0 &{}N^{*}N \end{matrix} \right] U^{*} \hbox { and } \\&(A^{\textcircled {S}})^{*}B {=}U\! \left[ \begin{matrix} \left( T^{ - 1}\right) ^{*}T &{}\left( T^{ {-} 1}\right) ^{*}B_{2} \\ 0 &{}N^{*}B_{4} \end{matrix} \right] U^{*}. \end{aligned}$$

It follows that

$$\begin{aligned} N^{*}N=N^{*}B_{4}, \end{aligned}$$
(38)
$$\begin{aligned} \left( T^{ - 1}\right) ^{*}S=\left( T^{ - 1}\right) ^{*}B_{2} . \end{aligned}$$
(39)

Since T is nonsingular, by applying (39), we have

$$\begin{aligned} B_{2}=S . \end{aligned}$$
(40)

By applying (37) and (40) to (34) we get

$$\begin{aligned} B =U \left[ \begin{matrix} T &{}S\\ 0 &{}B_{4} \end{matrix} \right] U^{*}. \end{aligned}$$
(41)

Therefore, by applying (2) and (41), we get (33). Furthermore, by applying (35) and (38), we get \(N\mathop \le \limits ^{*}B_{4}\).

“\(\Leftarrow \)” Let A and B be as in (33). Then

$$\begin{aligned} \left\{ \begin{aligned} A\left( A^{\textcircled {S}}\right) ^{*}&{=} U\left[ \begin{matrix} T\left( T^{ - 1}\right) ^{*} &{}SN^{*} \\ 0 &{}NN^{*} \end{matrix} \right] U^{*}, B\left( A^{\textcircled {S}}\right) ^{*} {=} U\left[ \begin{matrix} T\left( T^{ {-} 1}\right) ^{*} &{}SN^{*} \\ 0 &{}B_{4}N^{*} \end{matrix} \right] U^{*}, \\ \left( A^{\textcircled {S}}\right) ^{*}A&{=} U\!\left[ \begin{matrix} \left( T^{ {-} 1}\right) ^{*}T &{}\left( T^{ - 1}\right) ^{*}S \\ 0 &{}N^{*}N \end{matrix} \right] U^{*},\ \left( A^{\textcircled {S}}\right) ^{*}B {=} U\!\left[ \begin{matrix} \left( T^{ {-} 1}\right) ^{*}T &{}\left( T^{ - 1}\right) ^{*}S \\ 0 &{}N^{*}B_{4} \end{matrix} \right] U^{*}. \end{aligned}\right. \end{aligned}$$

It follows from \(N\mathop \le \limits ^{*}B_{4}\) that

$$\begin{aligned} A\left( A^{\textcircled {S}}\right) ^{*}=B\left( A^{\textcircled {S}}\right) ^{*} \hbox { and } \left( A^{\textcircled {S}}\right) ^{*}A=\left( A^{\textcircled {S}}\right) ^{*}B. \end{aligned}$$

Therefore, \(A\mathop \le \limits ^{\textcircled {S}}B\). \(\square \)
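As an illustration of Theorem 11 (ours, not part of the original text), one can build A and B directly in the block form (33) with \(N\mathop \le \limits ^{*}B_{4}\) and confirm the relation (32) numerically; all block matrices and function names below are our own choices.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def cs_inverse(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    Q = Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash
    return Q + (A - A @ Q @ A)                       # (15)

def le_circ_s(A, B):
    """Test the relation (32)."""
    Xs = cs_inverse(A).conj().T
    return np.allclose(A @ Xs, B @ Xs) and np.allclose(Xs @ A, Xs @ B)

T  = np.array([[2., 1], [0, 3]])                     # nonsingular
S  = np.array([[1., 0, 0], [0, 1, 0]])
N  = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])    # nilpotent
B4 = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 1]])    # chosen so that N <=* B4

A = np.block([[T, S], [np.zeros((3, 2)), N]])        # A as in (33) with U = I
B = np.block([[T, S], [np.zeros((3, 2)), B4]])

print(np.allclose(N @ N.conj().T, B4 @ N.conj().T),
      np.allclose(N.conj().T @ N, N.conj().T @ B4))  # N <=* B4: True True
print(le_circ_s(A, B))                               # True, as Theorem 11 predicts
```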

Theorem 12

The binary relation “ \(\mathop \le \limits ^{\textcircled {S}}\) ” is anti-symmetric.

Proof

Let A and B be as in (33) and \(A\mathop \le \limits ^{\textcircled {S}}B\). Then,

$$\begin{aligned} \left\{ \begin{aligned} B\left( B^{\textcircled {S}}\right) ^{*}&=U\left[ \begin{matrix} T\left( T^{ - 1}\right) ^{*} &{}S{B}_{4}^{*} \\ 0 &{}B_{4}{B}_{4}^{*} \end{matrix} \right] U^{*}, A\left( B^{\textcircled {S}}\right) ^{*} =U\left[ \begin{matrix} T\left( T^{ - 1}\right) ^{*} &{}S{B}_{4}^{*} \\ 0 &{}N {B}_{4}^{*} \end{matrix} \right] U^{*}, \\ \left( B^{\textcircled {S}}\right) ^{*}B&=U\left[ \begin{matrix} \left( T^{ - 1}\right) ^{*}T &{}\left( T^{ - 1}\right) ^{*}S \\ 0 &{}{B}_{4}^{*}B_{4} \end{matrix} \right] U^{*}, \left( B^{\textcircled {S}}\right) ^{*}A =U\left[ \begin{matrix} \left( T^{ - 1}\right) ^{*}T &{}\left( T^{ - 1}\right) ^{*}S \\ 0 &{}{B}_{4}^{*}N \end{matrix} \right] U^{*}. \end{aligned} \right. \end{aligned}$$

Since \(B\mathop \le \limits ^{\textcircled {S}}A\), we have

$$\begin{aligned} B_{4}{B}_{4}^{*}=N{B}_{4}^{*} \hbox { and } {B}_{4}^{*}B_{4}={B}_{4}^{*}N. \end{aligned}$$
(42)

It follows from (42) that \(B_{4}\mathop \le \limits ^{*}N\). Since also \(N\mathop \le \limits ^{*}B_{4}\), we have \(N=B_{4}\), and then (33) gives \(A=B\). Therefore, the binary relation “ \( \mathop \le \limits ^{\textcircled {S}} \) ” is anti-symmetric. \(\square \)

Remark 2

The binary relation “ \( \mathop \le \limits ^{\textcircled {S}}\) ” is not transitive as the following example shows.

Example 2

Let

$$\begin{aligned} A =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}1 \\ 0 &{}0 &{}0 \end{matrix} \right] ,\ B =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}1 \\ 0 &{}-1 &{}0 \end{matrix} \right] \hbox { and } C =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}1 &{}1 \end{matrix} \right] . \end{aligned}$$

It is easy to get

$$\begin{aligned} A^{\textcircled {S}} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}1 \\ 0 &{}0 &{}0 \end{matrix} \right] , \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{aligned} A(A^{\textcircled {S}})^{*}&=B(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}1 &{}0 \\ 0 &{}1 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] , \ (A^{\textcircled {S}})^{*}A =(A^{\textcircled {S}})^{*}B =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}1 \end{matrix} \right] , \\ B(B^{\textcircled {S}})^{*}&=C(B^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] , \ (B^{\textcircled {S}})^{*}B =(B^{\textcircled {S}})^{*}C =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] . \end{aligned}\right. \end{aligned}$$

Therefore, we have \(A\mathop \le \limits ^{\textcircled {S}}B\) and \(B\mathop \le \limits ^{\textcircled {S}}C\). Furthermore, since

$$\begin{aligned} C(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}1 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}1 &{}0 \end{matrix} \right] \hbox { and } (A^{\textcircled {S}})^{*}C =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] , \end{aligned}$$

we get that \(A\mathop \le \limits ^{\textcircled {S}}C\) is not true.

Corollary 1

Let \(A\mathop \le \limits ^{\textcircled {S}}B\). Then, \(AA^{\textcircled {\dag }}A=AA^{\textcircled {\dag }}B\).

Proof

Let A and B be as in (33). Then,

$$\begin{aligned} \left\{ \begin{aligned} AA^{\textcircled {\dag }}A&=U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix} \right] \left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^{*} =U\left[ \begin{matrix} T &{}S \\ 0 &{}0 \end{matrix} \right] U^{*}, \\ AA^{\textcircled {\dag }}B&=U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}0 \end{matrix} \right] \left[ \begin{matrix} T &{}S \\ 0 &{}B_{4} \end{matrix} \right] U^{*} =U\left[ \begin{matrix} T &{}S \\ 0 &{}0 \end{matrix} \right] U^{*}. \end{aligned}\right. \end{aligned}$$

Therefore, \(AA^{\textcircled {\dag }}A=AA^{\textcircled {\dag }}B\). \(\square \)

Corollary 2

Let \(A\mathop \le \limits ^{\textcircled {S}}B\). Then, \(\text {rk}\left( {A} \right) \le \text {rk}\left( {B} \right) \).

Proof

Let A and B be as in (33). Then,

$$\begin{aligned} \left\{ \begin{aligned} \text {rk}\left( {A} \right)&=\text {rk}\left( {T} \right) +\textrm{rk}\left( {N} \right) , \\ \text {rk}\left( {B} \right)&=\text {rk}\left( {T} \right) +\text {rk}\left( {B}_{4} \right) . \end{aligned} \right. \end{aligned}$$
(43)

For \(A\mathop \le \limits ^{\textcircled {S}}B\), by applying Theorem 11, we have \(N\mathop \le \limits ^{*}B_{4}\). Therefore, \(\text {rk}\left( {N} \right) \le \text {rk}\left( {B}_{4} \right) \). It follows from (43) that \(\text {rk}\left( {A} \right) \le \text {rk}\left( {B} \right) \). \(\square \)

Theorem 13

Let \(A\mathop \le \limits ^{*} B\), and A be as in Theorem 1. Then,

$$\begin{aligned} {B}=U \left[ \begin{matrix} T &{} S \\ B_{3} &{}B_{4} \end{matrix} \right] U^{*}, \end{aligned}$$
(44)

where \( N^{*}B_3=0\), \(N\mathop \le \limits ^{*} B_{4}\) and \(NS^{*}= B_3T^{*}+B_{4}S^{*}\).

Proof

Let the core-EP decomposition of A be as in Theorem 1, and B be of the form (34). Then,

$$\begin{aligned} \left\{ \begin{aligned} AA^{*}&{=}U \left[ \begin{matrix} TT^{*}+SS^{*} &{}SN^{*} \\ NS^{*} &{}NN^{*} \end{matrix} \right] U^{*}, \ BA^{*} {=}U \left[ \begin{matrix} B_1T^{*}+B_2S^{*} &{}B_2N^{*} \\ B_3T^{*}+B_{4}S^{*} &{}B_{4}N^{*} \end{matrix} \right] U^{*}, \\ A^{*}A&{=}U \left[ \begin{matrix} T^{*}T &{}T^{*}S \\ S^{*}T &{}S^{*}S+N^{*}N \end{matrix} \right] U^{*}, \ A^{*}B {=}U \left[ \begin{matrix} T^{*}B_1 &{}T^{*}B_2 \\ S^{*}B_1 + N^{*}B_{3} &{}S^{*}B_2+N^{*}B_{4} \end{matrix} \right] U^{*}. \end{aligned} \right. \end{aligned}$$

From \(A\mathop \le \limits ^{*} B\), we have \(AA^{*}=BA^{*}\) and \(A^{*}A=A^{*}B\). From \(A^{*}A=A^{*}B\), we have

$$\begin{aligned} B_1 =T,\ B_2&=S, \ N^{*}B_3=0 \hbox { and } N^{*}N =N^{*}B_4. \end{aligned}$$
(45)

By using (45) and \(AA^{*}=BA^{*}\), we get

$$\begin{aligned} AA^{*}&=U \left[ \begin{matrix} TT^{*}+SS^{*} &{}SN^{*} \\ NS^{*} &{}NN^{*} \end{matrix} \right] U^{*} \hbox {\ and } BA^{*} =U \left[ \begin{matrix} TT^{*}+SS^{*} &{}SN^{*} \\ B_3T^{*}+B_{4}S^{*} &{}B_{4}N^{*} \end{matrix} \right] U^{*}. \end{aligned}$$

Therefore,

$$\begin{aligned} B_{4}N^{*}=NN^{*} \hbox {\ and } NS^{*}= B_3T^{*}+B_{4}S^{*} . \end{aligned}$$
(46)

By applying (45) and (46), we get

$$\begin{aligned} B_1 =T,\ B_2&=S, \ N^{*}B_3=0,\ N\mathop \le \limits ^{*} B_{4} \hbox { and } NS^{*}= B_3T^{*}+B_{4}S^{*} . \end{aligned}$$

\(\square \)

Next, we discuss the relationship between the above binary relation and the star partial order on \({\mathbb {C}}_{n, n}\).

Remark 3

Comparing Theorems 11 and 13, we get

  1. (1)

Applying \(A\mathop \le \limits ^{\textcircled {S}}B\) gives \(B_3=0\), but it does not give \( NS^{*}= B_{4}S^{*}\). Therefore, \(A\mathop \le \limits ^{\textcircled {S}}B\) does not imply \(A\mathop \le \limits ^{ *}B\) (see Example 3);

  2. (2)

Since \(A\mathop \le \limits ^{ *}B\) only requires \(N^{*}B_3=0\), the block \(B_3\) need not be 0. Therefore, \(A\mathop \le \limits ^{ *}B\) does not imply \(A\mathop \le \limits ^{\textcircled {S}}B\) (see Example 4).

Example 3

Let

$$\begin{aligned} A =\left[ \begin{matrix} 1 &{}1 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] =U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^*\hbox {\ and } B =\left[ \begin{matrix} 1 &{}1 &{}1 \\ 1 &{}0 &{}-1 \\ 0 &{}0 &{}0 \end{matrix} \right] =U\left[ \begin{matrix} T &{}S \\ B_3 &{}B_4 \end{matrix} \right] U^*, \end{aligned}$$

in which \(U=I_3\), \(T=1\), \(S=\left[ \begin{matrix}1&1\end{matrix} \right] \), \(N=\left[ \begin{matrix} 0 &{}0\\ 0 &{}0 \end{matrix} \right] \), \(B_3=\left[ \begin{matrix} 1 \\ 0 \end{matrix} \right] \) and \(B_4=\left[ \begin{matrix}0&{}-1 \\ 0 &{}0 \end{matrix} \right] \). Then,

$$\begin{aligned} A^{\textcircled {S}} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] . \end{aligned}$$

It is easy to check that \(N\mathop \le \limits ^{*} B_{4}\), \(N^{*}B_3=0\) and \(NS^{*}= B_3T^{*}+B_{4}S^{*}\), that is,

$$\begin{aligned} A\mathop \le \limits ^{*} B. \end{aligned}$$

And since

$$\begin{aligned} A(A^{\textcircled {S}})^{*} = \left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] \hbox { and \ } B(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] , \end{aligned}$$
(47)

we get \(A(A^{\textcircled {S}})^{*} \ne B(A^{\textcircled {S}})^{*}\). It follows that \(A \mathop \le \limits ^{\textcircled {S}} B\) does not hold.

Example 4

Let

$$\begin{aligned} A =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] =U\left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^*\hbox { and \ } B =\left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}1 \\ 0 &{}0 &{}0 \end{matrix} \right] =U\left[ \begin{matrix} T &{}S \\ B_3 &{}B_4 \end{matrix} \right] U^*, \end{aligned}$$

in which \(U=I_3\), \(T=1\), \(S=\left[ \begin{matrix}0&1\end{matrix} \right] \), \(N=\left[ \begin{matrix} 0 &{}0\\ 0 &{}0 \end{matrix} \right] \), \(B_3=\left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \) and \(B_4=\left[ \begin{matrix}0&{}1 \\ 0 &{}0 \end{matrix} \right] \). Then,

$$\begin{aligned} A^{\textcircled {S}} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] . \end{aligned}$$

It is easy to check that \(NS^{*}\ne B_3T^{*}+B_{4}S^{*}\), that is, A is not below B under the partial order “ \(\mathop \le \limits ^{*}\).”

And by applying

$$\begin{aligned} A(A^{\textcircled {S}})^{*}&= B(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] \hbox { and \ } (A^{\textcircled {S}})^{*}A = (A^{\textcircled {S}})^{*}B= \left[ \begin{matrix} 1 &{}0 &{}1 \\ 0 &{}0 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] , \end{aligned}$$

we get \(A \mathop \le \limits ^{\textcircled {S}} B\).
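Examples 3 and 4 can be checked by machine as well. The following sketch (ours) implements the star order and the relation (32) as simple numerical tests and reproduces the conclusions of both examples; the function names are ours.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def cs_inverse(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    Q = Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash
    return Q + (A - A @ Q @ A)

def le_star(A, B):
    As = A.conj().T
    return np.allclose(A @ As, B @ As) and np.allclose(As @ A, As @ B)

def le_circ_s(A, B):
    Xs = cs_inverse(A).conj().T
    return np.allclose(A @ Xs, B @ Xs) and np.allclose(Xs @ A, Xs @ B)

# Example 3: the star order holds, the circled-S relation fails
A3 = np.array([[1., 1, 1], [0, 0, 0], [0, 0, 0]])
B3 = np.array([[1., 1, 1], [1, 0, -1], [0, 0, 0]])
print(le_star(A3, B3), le_circ_s(A3, B3))       # True False

# Example 4: the circled-S relation holds, the star order fails
A4 = np.array([[1., 0, 1], [0, 0, 0], [0, 0, 0]])
B4 = np.array([[1., 0, 1], [0, 0, 1], [0, 0, 0]])
print(le_star(A4, B4), le_circ_s(A4, B4))       # False True
```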

4.2 C-S Partial Order

Based on Sect. 4.1, we will introduce a new partial order on \({\mathbb {C}}_{n, n}\).

Lemma 1

Let \(N_5\) be a nilpotent matrix of order q, \(N_{5}\mathop \le \limits ^{*}C_{4}\) and \(C_{4}{S}_{3}^{*}=N_{5}{S}_{3}^{*}\). Then,

$$\begin{aligned} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] \mathop \le \limits ^{*} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right] , \end{aligned}$$
(48)

in which \(T_1\) is an invertible matrix of appropriate order.

Proof

Since \(N_{5}\mathop \le \limits ^{*}C_{4}\), we have \({N}_{5}^{*}N_5 ={N}_{5}^{*}C_{4}\) and \(N_{5}{N}_{5}^{*}=C_{4}{N}_{5}^{*}\); moreover, \(C_{4}{S}_{3}^{*}=N_{5}{S}_{3}^{*}\) by assumption. From

$$\begin{aligned} \left\{ \begin{aligned} \left[ \begin{matrix} {T}_{1}^{*} &{}0 \\ {S}_{3}^{*} &{}{N}_{5}^{*} \end{matrix} \right] \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right]&= \left[ \begin{matrix} {T}_{1}^{*}T_{1} &{}{T}_{1}^{*}S_{3} \\ {S}_{3}^{*}T_{1} &{}{S}_{3}^{*}S_{3}+{N}_{5}^{*}N_{5} \end{matrix} \right] , \\ \left[ \begin{matrix} {T}_{1}^{*} &{}0 \\ {S}_{3}^{*} &{}{N}_{5}^{*} \end{matrix} \right] \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right]&= \left[ \begin{matrix} {T}_{1}^{*}T_{1} &{}{T}_{1}^{*}S_{3} \\ {S}_{3}^{*}T_{1} &{}{S}_{3}^{*}S_{3}+{N}_{5}^{*}C_{4} \end{matrix} \right] , \end{aligned} \right. \end{aligned}$$

it follows that

$$\begin{aligned} \left[ \begin{matrix} {T}_{1}^{*}T_{1} &{}{T}_{1}^{*}S_{3} \\ {S}_{3}^{*}T_{1} &{}{S}_{3}^{*}S_{3}+{N}_{5}^{*}N_{5} \end{matrix} \right] = \left[ \begin{matrix} {T}_{1}^{*}T_{1} &{}{T}_{1}^{*}S_{3} \\ {S}_{3}^{*}T_{1} &{}{S}_{3}^{*}S_{3}+{N}_{5}^{*}C_{4} \end{matrix} \right] . \end{aligned}$$

Therefore,

$$\begin{aligned} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] ^{*}\left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right]&=\left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] ^{*} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right] . \end{aligned}$$
(49)

Similarly, by applying \(N_{5}{N}_{5}^{*}=C_{4}{N}_{5}^{*}\), \(C_{4}{S}_{3}^{*}=N_{5}{S}_{3}^{*}\) and

$$\begin{aligned} \left\{ \begin{aligned} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] \left[ \begin{matrix} {T}_{1}^{*} &{}0 \\ {S}_{3}^{*} &{}{N}_{5}^{*} \end{matrix} \right]&= \left[ \begin{matrix} T_{1}{T}_{1}^{*}+S_{3}{S}_{3}^{*} &{}S_{3}{N}_{5}^{*} \\ N_{5}{S}_{3}^{*} &{}N_{5}{N}_{5}^{*} \end{matrix} \right] , \\ \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right] \left[ \begin{matrix} {T}_{1}^{*} &{}0 \\ {S}_{3}^{*} &{}{N}_{5}^{*} \end{matrix} \right]&= \left[ \begin{matrix} T_{1}{T}_{1}^{*}+S_{3}{S}_{3}^{*} &{}S_{3}{N}_{5}^{*} \\ C_{4}{S}_{3}^{*} &{}C_{4}{N}_{5}^{*} \end{matrix} \right] , \end{aligned} \right. \end{aligned}$$

we have

$$\begin{aligned} \left[ \begin{matrix} T_{1}{T}_{1}^{*}+S_{3}{S}_{3}^{*} &{}S_{3}{N}_{5}^{*} \\ N_{5}{S}_{3}^{*} &{}N_{5}{N}_{5}^{*} \end{matrix} \right] = \left[ \begin{matrix} T_{1}{T}_{1}^{*}+S_{3}{S}_{3}^{*} &{}S_{3}{N}_{5}^{*} \\ C_{4}{S}_{3}^{*} &{}C_{4}{N}_{5}^{*} \end{matrix} \right] , \end{aligned}$$

that is,

$$\begin{aligned} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] ^{*}&= \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right] \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] ^{*}. \end{aligned}$$
(50)

By applying (49) and (50), we get (48). \(\square \)

Based on (32), we introduce a new binary relation “ \( \mathop \le \limits ^{\textcircled {c\!s}}\) ” on \({\mathbb {C}}_{n, n}\):

Definition 2

Let \( A, B\in {\mathbb {C}}_{n, n}\). The binary relation “ \(\mathop \le \limits ^{\textcircled {c\!s}}\) ” is defined as

$$\begin{aligned} A\mathop \le \limits ^{\textcircled {c\!s}}B: \ A\mathop \le \limits ^{\textcircled {S}}B \hbox { and } BA^{*}AA^{\textcircled {\dag }}=AA^{*}AA^{\textcircled {\dag }} . \end{aligned}$$
(51)

Theorem 14

The binary relation “ \(\mathop \le \limits ^{\textcircled {c\!s}}\) ” is transitive on \({\mathbb {C}}_{n, n}\).

Proof

Let A, B and \(C\in {\mathbb {C}}_{n, n}\), \(A\mathop \le \limits ^{\textcircled {c\!s}}B\), \({\textrm{Ind}}(A)=k\), \(\text {rk}\left( A^k\right) =t\), \({\text {Ind}}(B)=l\), \(\text {rk}\left( B^l\right) =t+p\) and \(B\mathop \le \limits ^{\textcircled {c\!s}}C\). Write \(q=n-\textrm{rk}\left( B^l\right) \). Next, we prove \(A\mathop \le \limits ^{\textcircled {c\!s}}C\), that is, \(A\mathop \le \limits ^{\textcircled {S}}C\) and \( CA^{*}AA^{\textcircled {\dag }}=AA^{*}AA^{\textcircled {\dag }} \).

Since \(A\mathop \le \limits ^{\textcircled {S}}B\) and \(B\mathop \le \limits ^{\textcircled {S}}C\), by applying the core-EP decomposition and Theorem 11, we have

$$\begin{aligned} A =U\left[ \begin{matrix} T &{}S_{1} &{}S_{2} \\ 0 &{}N_{1} &{}N_{2} \\ 0 &{}N_{3} &{}N_{4} \end{matrix} \right] U^{*},\ B =U\left[ \begin{matrix} T &{}S_{1} &{}S_{2} \\ 0 &{} T_1 &{}S_{3} \\ 0 &{}0 &{}N_{5} \end{matrix} \right] U^{*} \hbox { and } C =U\left[ \begin{matrix} T &{}S_{1} &{}S_{2} \\ 0 &{}T _{1} &{}S_{3} \\ 0 &{}0 &{} C_{4} \end{matrix} \right] U^{*}, \end{aligned}$$
(52)

where \(\left[ \begin{matrix} N_{1} &{}N_{2} \\ N_{3} &{}N_{4} \end{matrix} \right] \mathop \le \limits ^{*} \left[ \begin{matrix} T _{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix} \right] \), \(T_1\in {\mathbb {C}}_{p, p}\) is nonsingular, \(N_{5}\) is nilpotent, and \(N_{5}\mathop \le \limits ^{*}C_{4}\).

By applying (52), we have

$$\begin{aligned} \left\{ \begin{aligned} BB^{*}BB^{\textcircled {\dag }} =U\left[ \begin{matrix} TT^{*}+S_{1}{S}_{1}^{*}+S_{2}{S}_{2}^{*} &{}S_{1}T_{1}^{*}+S_{2}{S}_{3}^{*} &{}0 \\ T_{1}{S}_{1}^{*}+S_{3}{S}_{2}^{*} &{}T_{1}T_{1}^{*}+S_{3}{S}_{3}^{*} &{}0 \\ N_{5}{S}_{2}^{*} &{}N_{5}{S}_{3}^{*} &{}0 \end{matrix} \right] U^{*}, \\ CB^{*}BB^{\textcircled {\dag }} =U\left[ \begin{matrix} TT^{*}+S_{1}{S}_{1}^{*}+S_{2}{S}_{2}^{*} &{}S_{1}T_{1}^{*}+S_{2}{S}_{3}^{*} &{}0 \\ T_{1}{S}_{1}^{*}+S_{3}{S}_{2}^{*} &{}T_{1}T_{1}^{*}+S_{3}{S}_{3}^{*} &{}0 \\ C_{4}{S}_{2}^{*} &{}C_{4}{S}_{3}^{*} &{}0 \end{matrix} \right] U^{*}. \end{aligned} \right. \end{aligned}$$

Since \(B\mathop \le \limits ^{\textcircled {c\!s}}C\), by (51) we have \(BB^{*}BB^{\textcircled {\dag }}=CB^{*}BB^{\textcircled {\dag }}\). Therefore,

$$\begin{aligned} N_{5}{S}_{2}^{*}=C_{4}{S}_{2}^{*} \hbox { and } N_{5}{S}_{3}^{*}=C_{4}{S}_{3}^{*}. \end{aligned}$$
(53)

For \(N_{5}\mathop \le \limits ^{*}C_{4}\) and \(N_{5}{S}_{3}^{*}=C_{4}{S}_{3}^{*}\), by applying Lemma 1, we get \(\left[ {\begin{matrix} T_{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix}} \right] \mathop \le \limits ^{*} \left[ {\begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix}} \right] \). Since the star partial order is transitive, it follows from \(\left[ {\begin{matrix} N_{1} &{}N_{2} \\ N_{3} &{}N_{4} \end{matrix}} \right] \mathop \le \limits ^{*} \left[ {\begin{matrix} T _{1} &{}S_{3} \\ 0 &{}N_{5} \end{matrix}} \right] \) that

$$\begin{aligned} \left[ \begin{matrix} N_{1} &{}N_{2} \\ N_{3} &{}N_{4} \end{matrix} \right] \mathop \le \limits ^{*} \left[ \begin{matrix} T_{1} &{}S_{3} \\ 0 &{}C_{4} \end{matrix} \right] . \end{aligned}$$
(54)

By applying (52), it is easy to check that

$$\begin{aligned} AA^{*}AA^{\textcircled {\dag }}&=U\left[ \begin{matrix} TT^{*}+S_{1}{S}_{1}^{*}+S_{2}{S}_{2}^{*} &{}0 &{}0 \\ N_{1}{S}_{1}^{*}+N_{2}{S}_{2}^{*} &{}0 &{}0 \\ N_{3}{S}_{1}^{*}+N_{4}{S}_{2}^{*} &{}0 &{}0 \end{matrix} \right] U^{*}, \end{aligned}$$
(55)
$$\begin{aligned} BA^{*}AA^{\textcircled {\dag }}&=U\left[ \begin{matrix} TT^{*}+S_{1}{S}_{1}^{*}+S_{2}{S}_{2}^{*} &{}0 &{}0 \\ T_{1}{S}_{1}^{*}+S_{3}{S}_{2}^{*} &{}0 &{}0 \\ N_{5}{S}_{2}^{*} &{}0 &{}0 \end{matrix} \right] U^{*} \hbox { and } CA^{*}AA^{\textcircled {\dag }} =U\left[ \begin{matrix} TT^{*}+S_{1}{S}_{1}^{*}+S_{2}{S}_{2}^{*} &{}0 &{}0 \\ T_{1}{S}_{1}^{*}+S_{3}{S}_{2}^{*} &{}0 &{}0 \\ C_{4}{S}_{2}^{*} &{}0 &{}0 \end{matrix} \right] U^{*}. \end{aligned}$$
(56)

Then, from (53) and \(AA^{*}AA^{\textcircled {\dag }}=BA^{*}AA^{\textcircled {\dag }}\) (the second condition in \(A\mathop \le \limits ^{\textcircled {c\!s}}B\)), we get \(N_{1}{S}_{1}^{*}+N_{2}{S}_{2}^{*} =T_{1}{S}_{1}^{*}+S_{3}{S}_{2}^{*}\), \(N_{3}{S}_{1}^{*}+N_{4}{S}_{2}^{*} =N_{5}{S}_{2}^{*}\) and \(N_{5}{S}_{2}^{*}=C_{4}{S}_{2}^{*}\). It follows from (55) and (56) that

$$\begin{aligned} AA^{*}AA^{\textcircled {\dag }} = CA^{*}AA^{\textcircled {\dag }}. \end{aligned}$$
(57)

From (52), (54) and (57), we get \(A\mathop \le \limits ^{\textcircled {c\!s}}C\). Therefore, the binary relation “ \(\mathop \le \limits ^{\textcircled {c\!s}}\) ” is transitive. \(\square \)

Theorem 15

The binary relation “ \(\mathop \le \limits ^{\textcircled {{c\!s}} }\) ” is a partial order on \({\mathbb {C}}_{n, n}\). We call it the C-S partial order.

Proof

Reflexivity is trivial, and transitivity follows from Theorem 14. For anti-symmetry, let A, \(B \in {\mathbb {C}}_{n, n}\), \(A\mathop \le \limits ^{\textcircled {c\!s}}B\) and \(B\mathop \le \limits ^{\textcircled {c\!s}}A\). Then \(A\mathop \le \limits ^{\textcircled {S}}B\) and \(B\mathop \le \limits ^{\textcircled {S}}A\), and it follows from Theorem 12 that \(A=B\). Therefore, the binary relation “ \( \mathop \le \limits ^{\textcircled {c\!s}} \) ” is anti-symmetric. \(\square \)

Theorem 16

Let \(A\mathop \le \limits ^{\textcircled {{c\!s}}}B\). Then, \(A\mathop \le \limits ^{*}B\).

Proof

Let A, \(B \in {\mathbb {C}}_{n, n}\), and A be below B under the C-S partial order. By applying (51), we have \(A\mathop \le \limits ^{\textcircled {S}}B\) and \(BA^{*}AA^{\textcircled {\dag }}=AA^{*}AA^{\textcircled {\dag }}\). Since \(A\mathop \le \limits ^{\textcircled {S}}B\), A and B have the forms as shown in (33). Then,

$$\begin{aligned} \left\{ \begin{aligned} BA^{*}AA^{\textcircled {\dag }}&= U \left[ \begin{matrix} TT^{*}+SS^{*} &{}0 \\ B_{4}S^{*} &{}0 \end{matrix} \right] U^{*}, \\ AA^{*}AA^{\textcircled {\dag }}&= U \left[ \begin{matrix} TT^{*}+SS^{*} &{}0 \\ NS^{*} &{}0 \end{matrix} \right] U^{*}, \end{aligned} \right. \end{aligned}$$
(58)

and

$$\begin{aligned} \left\{ \begin{aligned} AA^{*}&=U \left[ \begin{matrix} TT^{*}+SS^{*} &{}SN^{*} \\ NS^{*} &{}NN^{*} \end{matrix} \right] U^{*}, BA^{*} =U \left[ \begin{matrix} TT^{*}+SS^{*} &{}SN^{*} \\ B_{4}S^{*} &{}B_{4}N^{*} \end{matrix} \right] U^{*}, \\ A^{*}A&=U \left[ \begin{matrix} T^{*}T &{}T^{*}S \\ S^{*}T &{}S^{*}S+N^{*}N \end{matrix} \right] U^{*}, A^{*}B =U \left[ \begin{matrix} T^{*}T &{}T^{*}S \\ S^{*}T &{}S^{*}S+N^{*}B_{4} \end{matrix} \right] U^{*}. \end{aligned} \right. \end{aligned}$$
(59)

From (58), \(BA^{*}AA^{\textcircled {\dag }}=AA^{*}AA^{\textcircled {\dag }}\) and \(N\mathop \le \limits ^{*}B_{4}\), we have

$$\begin{aligned} NN^{*}&=B_{4}N^{*}, \ N^{*}N =N^{*}B_{4} \ \hbox {and } B_{4}S^{*} = NS^{*}. \end{aligned}$$
(60)

Therefore, by applying (60) to (59) we get \(AA^{*}=BA^{*}\) and \(A^{*}A=A^{*}B\), that is, \(A\mathop \le \limits ^{*}B\). \(\square \)

Remark 4

In Theorem 16, we see that if \(A\mathop \le \limits ^{\textcircled {c\!s}}B\), then \(A\mathop \le \limits ^{*}B\). However, the converse does not hold (see Example 5).

Example 5

Let

$$\begin{aligned} A =\left[ \begin{matrix} 1 &{}1 &{}1 \\ 0 &{}0 &{}1 \\ 0 &{}0 &{}0 \end{matrix} \right] \hbox { and } B =\left[ \begin{matrix} 1 &{}1 &{}1 \\ 0 &{}0 &{}1 \\ 1 &{}-1 &{}0 \end{matrix} \right] . \end{aligned}$$

From

$$\begin{aligned} AA^{*}=BA^{*} =\left[ \begin{matrix} 3 &{}1 &{}0 \\ 1 &{}1 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] \hbox { and } A^{*}A=A^{*}B =\left[ \begin{matrix} 1 &{}1 &{}1 \\ 1 &{}1 &{}1 \\ 1 &{}1 &{}2 \end{matrix} \right] , \end{aligned}$$

it follows that \(A\mathop \le \limits ^{*}B\). For

$$\begin{aligned} A^{\textcircled {S}} =\left[ \begin{matrix} 1 &{}0 &{}0 \\ 0 &{}0 &{}1 \\ 0 &{}0 &{}0 \end{matrix} \right] , \end{aligned}$$

we get

$$\begin{aligned} A(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}1 &{}0 \\ 0 &{}1 &{}0 \\ 0 &{}0 &{}0 \end{matrix} \right] \ne B(A^{\textcircled {S}})^{*} =\left[ \begin{matrix} 1 &{}1 &{}0 \\ 0 &{}1 &{}0 \\ 1 &{}0 &{}0 \end{matrix} \right] . \end{aligned}$$

Therefore, A is not below B under the C-S partial order.
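Example 5 can be confirmed numerically in the same way (the sketch is ours): the star order holds, the second condition of (51) holds as well, but the relation (32) fails, so A is not below B under the C-S partial order.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def core_ep(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    return Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash

def cs_inverse(A):
    Q = core_ep(A)
    return Q + (A - A @ Q @ A)

A = np.array([[1., 1, 1], [0, 0, 1], [0, 0, 0]])
B = np.array([[1., 1, 1], [0, 0, 1], [1, -1, 0]])
As, Xs, Q = A.conj().T, cs_inverse(A).conj().T, core_ep(A)

print(np.allclose(A @ As, B @ As), np.allclose(As @ A, As @ B))   # star order: True True
print(np.allclose(B @ As @ A @ Q, A @ As @ A @ Q))                # B A^* A A^{core-dag} = A A^* A A^{core-dag}: True
print(np.allclose(A @ Xs, B @ Xs))                                # relation (32) fails: False
```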

4.3 Characterizations of Special Matrices

Generalized inverses are one of the tools for characterizing special matrices. In this subsection, we use the C-S inverse to characterize special matrices such as EP and i-EP matrices.

Theorem 17

Let \(A \in {\mathbb {C}}_{n, n}\). Then, the following conditions are equivalent:

  1. (1)

    A is i-EP;

  2. (2)

    \(AA^{\textcircled {S}} = A^{\textcircled {S}}A\).

Proof

“(1)\(\Leftarrow \)(2)” Let \(A \in {\mathbb {C}}_{n, n}\). Write \({\text {Ind}}(A)=k\) and \(\text {rk}(A^k)=t\). By applying the core-EP decomposition of A, we have

$$\begin{aligned} \left\{ \begin{aligned} AA^{\textcircled {S}}&=U \left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}N \end{matrix} \right] U^{*} =U \left[ \begin{matrix} I_{t} &{}SN \\ 0 &{}N^{2} \end{matrix} \right] U^{*}, \\ A^{\textcircled {S}}A&=U \left[ \begin{matrix} T^{-1} &{}0 \\ 0 &{}N \end{matrix} \right] \left[ \begin{matrix} T &{}S \\ 0 &{}N \end{matrix} \right] U^{*} =U \left[ \begin{matrix} I_{t} &{}T^{-1}S \\ 0 &{}N^{2} \end{matrix} \right] U^{*}. \end{aligned} \right. \end{aligned}$$
(61)

Applying (61) and \(AA^{\textcircled {S}}= A^{\textcircled {S}}A\) gives

$$\begin{aligned} S=TSN. \end{aligned}$$

Post-multiplying \(N^{k-1}\) on \(S-TSN=0\) gives \(SN^{k-1}-TSN^{k}=0\). It follows from \(N^k=0\) that \(SN^{k-1}=0\). Post-multiplying \(N^{k-2}\) on \(S-TSN=0\) gives \(SN^{k-2}-TSN^{k-1}=0\), that is, \(SN^{k-2}=0\). Continuing in this way, we get \(SN=0\). It follows from \(S-TSN=0\) that \(S=0\). Therefore, by Theorem 5, A is i-EP.

(1)\( \Rightarrow \)(2) Obviously. \(\square \)
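A quick numerical check of Theorem 17 (ours): for a block-diagonal test matrix of the form \(T\oplus N\), which is i-EP by Theorem 5, the C-S inverse commutes with the matrix, while for the matrix of Example 1, which is not i-EP, it does not. The test matrices and helper names are ours, and the i-EP test uses the characterization \(A^k(A^k)^{\dagger }=(A^k)^{\dagger }A^k\) quoted in Sect. 1.

```python
import numpy as np

def mat_index(A):
    k, Ak = 1, A
    while np.linalg.matrix_rank(A @ Ak) != np.linalg.matrix_rank(Ak):
        Ak, k = A @ Ak, k + 1
    return k

def cs_inverse(A):
    k = mat_index(A)
    Ak = np.linalg.matrix_power(A, k)
    Ash = np.linalg.matrix_power(A.conj().T, k)
    Q = Ak @ np.linalg.pinv(Ash @ np.linalg.matrix_power(A, k + 1)) @ Ash
    return Q + (A - A @ Q @ A)

def is_i_ep(A):
    """i-EP test: A^k (A^k)^+ = (A^k)^+ A^k."""
    Ak = np.linalg.matrix_power(A, mat_index(A))
    return np.allclose(Ak @ np.linalg.pinv(Ak), np.linalg.pinv(Ak) @ Ak)

A1 = np.array([[2., 0, 0, 0], [0, 3, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])  # i-EP (S = 0)
A2 = np.array([[1., 0, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]])  # not i-EP (S != 0)
for M in (A1, A2):
    X = cs_inverse(M)
    print(is_i_ep(M), np.allclose(M @ X, X @ M))   # the two booleans agree (Theorem 17)
```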

We have seen that \(\left( A^{\text { {(S)}}}\right) ^{\mathrm{{(S)}}}=A\) for any square matrix A, but in general \(\left( A^{\textcircled {S}}\right) ^{\textcircled {S}}\ne A \). So under what conditions are they equal? In other words, for what kind of matrix A does \(\left( A^{\textcircled {S}}\right) ^{\textcircled {S}}=A \) hold?

Theorem 18

Let \(A \in {\mathbb {C}}_{n, n}\). Then, the following conditions are equivalent:

  1. (1)

    A is i-EP;

  2. (2)

    \(\left( A^{\textcircled {S}}\right) ^{\textcircled {S}}= A \).

Proof

Let the decomposition of A be as in (2). Then,

$$\begin{aligned} (A^{\textcircled {S}})^{\textcircled {S}} =U\left[ \begin{matrix} T &{}0 \\ 0 &{}N \end{matrix} \right] U^{*}. \end{aligned}$$

If \((A^{\textcircled {S}})^{\textcircled {S}}=A\), we get \(S=0\). By applying Theorem 5, we get that A is i-EP.

Conversely, if A is i-EP, then by applying Theorem 5 it is easy to check that \((A^{\textcircled {S}})^{\textcircled {S}}=A\). \(\square \)

Example 1 shows that, in general, the C-S inverse equals neither the Moore–Penrose inverse nor the G-S inverse. Next, we consider the cases when the C-S inverse equals the G-S inverse and the Moore–Penrose inverse, respectively.

Theorem 19

Let \(A \in {\mathbb {C}}_{n, n}\). Then, the following conditions are equivalent:

  1. (1)

    A is i-EP;

  2. (2)

    \( A^{\textcircled {S}} = A^{\text {{(S)}}} \).

Proof

Let the decomposition of A be as in (2). If A is i-EP, then \(S=0\). By applying (9), it is easy to check that

$$\begin{aligned} A^{\textcircled {S}} = A^{\text {{(S)}}} =U\left[ \begin{matrix} T &{}0 \\ 0 &{}N \end{matrix} \right] U^{*}. \end{aligned}$$

Conversely, let \( A^{\textcircled {S}} = A^{\mathrm{{(S)}}}\). By applying (9) and (15), we get

$$\begin{aligned} T^{-(k+1)}\left( {\sum \limits _{i =0}^{k-1} {T^{i}SN^{k-1-i}} } \right) +S -T^{-(k-1)}\left( {\sum \limits _{i =0}^{k-1} {T^{i}SN^{k-1-i}} } \right) =0, \end{aligned}$$

that is, \(T^{-k-1}SN^{k-1} +T^{-k}SN^{k-2}+\cdots +T^{-3}SN +T^{-2}S +S - T^{-k+1}SN^{k-1}-T^{-k+2}SN^{k-2}-\cdots -T^{-1}SN -S =0\). After simplifying the equation, we get

$$\begin{aligned} \left( T^{-k-1}-T^{-k+1}\right) SN^{k-1}&+ \left( T^{-k}-T^{-k+2}\right) SN^{k-2} +\cdots \nonumber \\&+ \left( T^{-3}-T^{-1}\right) SN + T^{-2}S =0. \end{aligned}$$
(62)

Post-multiplying \(N^{k-1}\) on (62), we get

$$\begin{aligned} \left( T^{-k-1}-T^{-k+1}\right) SN^{2k-2} +\cdots + \left( T^{-3}-T^{-1}\right) SN^k + T^{-2}SN^{k-1} =0. \end{aligned}$$

Since T is nonsingular, by applying \(N^k=0\), we get \(SN^{k-1}=0\). Therefore,

$$\begin{aligned} \left( T^{-k}-T^{-k+2}\right) SN^{k-2} +\cdots + \left( T^{-3}-T^{-1}\right) SN + T^{-2}S =0. \end{aligned}$$

In the same way, by applying \(SN^{k-1}=0\), we get \(SN^{k-2}=0\); by applying \(SN^{k-2}=0\), we get \(SN^{k-3}=0\); \(\ldots \); by applying \(SN^2=0\), we get \(SN =0\). Therefore, from (62), we get \(T^{-2}S=0\). Since T is nonsingular, \(S=0\). It follows from Theorem 5 that A is i-EP. \(\square \)

Theorem 20

Let \(A \in {\mathbb {C}}_{n, n}\). Then, the following conditions are equivalent:

  1. (1)

    A is EP;

  2. (2)

    \( A^{\textcircled {S}} = A^\dag \).

Proof

“(1)\(\Leftarrow \)(2)” Let the decomposition of A and the C-S inverse of A be as in Theorem 6. If \( A^{\textcircled {S}} =A^\dag \), then

$$\begin{aligned} 0 = A^{\textcircled {S}} A A^{\textcircled {S}}-A^{\textcircled {S}} = U\left[ \begin{matrix} 0 &{}T^{-1}SN \\ 0 &{}N^3-N \end{matrix} \right] U^{*}. \end{aligned}$$

Therefore, \(N^3=N\). Since N is a nilpotent matrix, we get \(N=0\). By applying \(N=0\) and \(\left( A^{\textcircled {S}}A\right) ^*=A^{\textcircled {S}}A\), we get

$$\begin{aligned} U\left[ \begin{matrix} I_{\text {rk}(A^k)} &{}T^{-1}S \\ 0 &{}0 \end{matrix} \right] U^{*} = U\left[ \begin{matrix} I_{\text {rk}(A^k)} &{}0 \\ (T^{-1}S)^*&{}0 \end{matrix} \right] U^{*}. \end{aligned}$$

Since T is invertible, it follows that \(S=0\).

For \(S=0\) and \(N=0\), by applying Theorem 4, we get that A is EP.

“(1)\(\Rightarrow \)(2)” If A is EP, by applying Theorem 4, it is easy to check that \( A^{\textcircled {S}} = A^\dag \). \(\square \)