1 Introduction

Let \({\mathcal {H}}\) be a Hilbert space and \({\mathcal {B}}({\mathcal {H}})\) be the algebra of all bounded linear operators acting on \(\mathcal H\). For any \(A\in {\mathcal {B}}({\mathcal {H}})\) we will denote by \({\mathcal {N}} (A)\) and \({\mathcal {R}}(A)\) the kernel and the range of A, respectively.

Recall that \(B\in {\mathcal {B}}({\mathcal {H}})\) is an inner inverse of A if \(ABA=A\). The latter equality implies that AB is a projection onto \({\mathcal {R}}(A)\), and so A can only be inner invertible if its range is closed. As it happens, this condition is also sufficient and, if it holds, infinitely many inner inverses of A exist (unless, of course, A is invertible, in which case \(A^{-1}\) is the unique inner inverse of A). The additional requirements that A be also an inner inverse of B and that the projections AB, BA be orthogonal single out a unique inner inverse, the so-called Moore-Penrose inverse of A, usually denoted by \(A^\dagger \).
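
In the finite-dimensional case these defining properties are easy to verify numerically. The following sketch (in NumPy, with a randomly generated matrix as a stand-in for A; nothing here is specific to the operators studied below) checks that the pseudoinverse returned by `numpy.linalg.pinv` satisfies them:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # a generic matrix standing in for A
B = np.linalg.pinv(A)            # candidate Moore-Penrose inverse

# B is an inner inverse of A, and A is an inner inverse of B ...
assert np.allclose(A @ B @ A, A)
assert np.allclose(B @ A @ B, B)
# ... and the projections AB, BA are Hermitian, hence orthogonal
assert np.allclose((A @ B).conj().T, A @ B)
assert np.allclose((B @ A).conj().T, B @ A)
```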

The Drazin inverse \(A^D\) of A exists, by definition, if the nested families

$$\begin{aligned} {\mathcal {R}}(A) \supseteq {\mathcal {R}}(A^2)\supseteq \ldots \supseteq {\mathcal {R}}(A^k)\supseteq \ldots \end{aligned}$$

and

$$\begin{aligned} {\mathcal {N}}(A) \subseteq {\mathcal {N}}(A^2)\subseteq \ldots \subseteq {\mathcal {N}}(A^k)\subseteq \ldots \end{aligned}$$

stabilize. The smallest non-negative integer k for which \(\mathcal R(A^k)={\mathcal {R}}(A^{k+1})\) and \({\mathcal {N}}(A^k)=\mathcal N(A^{k+1})\) is called the index of A, and \(A^D\) is defined uniquely as the operator commuting with A, for which A is an inner inverse, and such that \(A^{k+1}A^D=A^k\).

Observe that for Drazin invertible A the range of \(A^k\) is closed. In particular, \({\mathcal {R}}(A)\) is closed if \(k\le 1\); the respective Drazin inverse is then an inner inverse of A (which in general it is not), called the group inverse of A and denoted by \(A^\#\). So, the existence of \(A^\#\) implies the existence of \(A^\dagger \), but not the other way around.

The equality \(A^\#=A^\dagger \) holds if and only if A is a so-called EP operator, that is, if the (closed) ranges of A and \(A^*\) coincide. The EP stands for “Equal Principal” or “Equal Projections”.

Yet another notion of invertibility was introduced more recently by Baksalary and Trenkler in [2] (for matrices) and by Rakić, Dinčić, and Djordjević in [10] (for Hilbert space operators). Namely, an inner inverse of A is called its core inverse, denoted by \(A^{\tiny \textcircled{\#}}\), if

$$\begin{aligned} {\mathcal {R}}(A^{\tiny \textcircled{\#}})={\mathcal {R}}(A) \;\, \text{ and }\;\, {\mathcal {N}}(A^{\tiny \textcircled{\#}})={\mathcal {N}}(A^*). \end{aligned}$$
(1)

It was proved in [10] that \(A^{\tiny \textcircled{\#}}\) exists only simultaneously with \(A^\#\) and that the two coincide exactly when \(A^\#=A^\dagger \), i.e., exactly for EP operators A (Theorems 3.2 and 3.10 of [10], respectively).

In this paper, we take a closer look at the operators from the von Neumann algebra \({\mathcal {W}}^*(P,Q)\) generated by two orthogonal projections P, Q in \({\mathcal {B}}({\mathcal {H}})\). A constructive criterion for Moore-Penrose invertibility of operators \(A\in \mathcal W^*(P,Q)\) goes back to [11], see also [4, Theorem 8.1]. Drazin inversion was dealt with in [3] and [4, Sect. 9]. Since the index of Drazin invertible operators \(A\in {\mathcal {W}}^*(P,Q)\) was also computed there, implicitly the group invertibility criterion and formulas for \(A^\#\) were established there as well. For convenience of reference we state them explicitly in Sect. 2, along with the required notation. The rest of that section is devoted to core invertibility. The EP property of operators from \({\mathcal {W}}^*(P,Q)\) is studied in Sect. 3. Some examples are examined in Sect. 4.

To conclude this introduction, let us mention our paper [5], in which the previously known results on the operators in \({\mathcal {W}}^*(P,Q)\) mentioned above (and some more) are stated and discussed from a unifying point of view. The main idea we were trying to convey in [5] was that any reasonable question about operators from the algebra generated by P and Q can be answered based on the structural description of the (PQ) pair going back to the “two subspaces” theory by Halmos [9]. This paper is yet another illustration of this point.

2 The core inverse

Our main tool is the explicit formula for operators in \(\mathcal W^*(P,Q)\) established in [8]; see also [4, Sect. 7]. To introduce the pertinent language, recall first of all Halmos’ “two subspaces” approach according to which \({\mathcal {H}}\) is represented as the orthogonal sum of

$$\begin{aligned} M_{00}={\mathcal {R}}(P)\cap {\mathcal {R}}(Q), \; M_{01}=\mathcal R(P)\cap {\mathcal {N}}(Q),\; M_{0}=\mathcal R(P)\ominus \left( M_{00}\oplus M_{01}\right) \end{aligned}$$

and

$$\begin{aligned} M_{10}={\mathcal {N}}(P)\cap {\mathcal {R}}(Q),\; M_{11}=\mathcal N(P)\cap {\mathcal {N}}(Q), \; M_{1}=\mathcal N(P)\ominus \left( M_{10}\oplus M_{11}\right) . \end{aligned}$$

The compression H of \(P(I-Q)P\) onto \(M_0\) and that of \((I-P)Q(I-P)\) onto \(M_1\) are then unitarily similar. Identifying \(M_1\) with \(M_0\) via this unitary similarity, operators \(A\in {\mathcal {W}}^*(P,Q)\) take the form

$$\begin{aligned} A=(\alpha _{00},\alpha _{01},\alpha _{10},\alpha _{11})\oplus \Phi _A(H). \end{aligned}$$
(2)

Here \(\alpha _{ij}\in {\mathbb {C}}\ (i,j=0,1)\), \((\alpha _{00},\alpha _{01},\alpha _{10},\alpha _{11})\) is an abbreviated notation for \(\alpha _{00}I_{M_{00}}\oplus \alpha _{01}I_{M_{01}}\oplus \alpha _{10}I_{M_{10}}\oplus \alpha _{11}I_{M_{11}}\), and the entries of the matrix \(\Phi _A=\begin{bmatrix}\phi _{00} &{} \phi _{01} \\ \phi _{10} &{} \phi _{11}\end{bmatrix}\) are arbitrary \(L^\infty \)-functions on the spectrum \(\sigma (H)\) of the operator H. In particular,

$$\begin{aligned} P=(1,1,0,0)\oplus \begin{bmatrix} I &{}\,\, 0 \\ 0 &{}\,\, 0\end{bmatrix} \end{aligned}$$
(3)

and

$$\begin{aligned} Q=(1,0,1,0)\oplus \begin{bmatrix} I-H &{}\,\, \sqrt{H(I-H)}\, \\ \sqrt{H(I-H)} &{}\,\, H \,\end{bmatrix}. \end{aligned}$$
(4)

For a given \(A\in {\mathcal {W}}^*(P,Q)\), partition \(\sigma (H)\) into the subsets

$$\begin{aligned} \Delta _r(A)=\{t \in \sigma (H):{\text {rank}}\Phi _A(t)=r\},\quad r=0,1,2, \end{aligned}$$

denote by \(M^{(r)}\) the respective spectral subspaces of \(M_0\), and by \(H_r\) the compressions of H onto \(M^{(r)}\). Then (2) can be refined to

$$\begin{aligned} A=(\alpha _{00},\alpha _{01},\alpha _{10},\alpha _{11})\oplus 0_{M^{(0)}\oplus M^{(0)}}\oplus \Phi _A(H_1)\oplus \Phi _A(H_2). \end{aligned}$$
(5)

The Drazin invertibility criterion and the formulas for both \(A^D\) and the index of A from [3, 4] yield the following.

Theorem 1

Let \(A\in {\mathcal {W}}^*(P,Q)\) be represented in the form (5). Then for \(A^\#\) to exist it is necessary and sufficient that \(\det \Phi _A\) and \({\text {Tr}}\Phi _A\) are separated from zero on \(\Delta _2(A)\) and \(\Delta _1(A)\), respectively. Under these conditions

$$\begin{aligned} A^\#=(\alpha _{00}^\dagger ,\alpha _{01}^\dagger ,\alpha _{10}^\dagger ,\alpha _{11}^\dagger )\oplus 0_{M^{(0)}\oplus M^{(0)}}\oplus \frac{1}{({\text {Tr}}\Phi _A)^2}\Phi _A(H_1)\oplus \Phi _A^{-1}(H_2). \end{aligned}$$
(6)

Here and below for any \(z\in {\mathbb {C}}\) we let \(z^\dagger =z^{-1}\) if \(z\ne 0\) while \(0^\dagger =0\). This notation is justified by the simple fact that \(z^\dagger \) is indeed the Moore-Penrose inverse of z in the (trivial) one-dimensional case. We also recall that a function f is said to be separated from zero on some set E if the infimum of \(\left| f\right| \) on E is strictly positive.
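
On \(\Delta _1(A)\), formula (6) inverts the rank one block of \(\Phi _A\) simply by dividing it by \(({\text {Tr}}\Phi _A)^2\). As a pointwise sanity check, the following NumPy sketch (with an arbitrary rank one 2-by-2 matrix of nonzero trace; the specific vectors are illustrative) confirms that this recipe produces the group inverse:

```python
import numpy as np

# arbitrary rank one 2x2 matrix Phi = x y* with Tr(Phi) = y* x != 0
x = np.array([1.0 + 2.0j, -0.5j])
y = np.array([0.3 - 1.0j, 2.0 + 0.7j])
Phi = np.outer(x, y.conj())
tr = np.trace(Phi)
assert abs(tr) > 1e-12           # the "Tr separated from zero" hypothesis

X = Phi / tr**2                  # rank one block of formula (6)

assert np.allclose(Phi @ X @ Phi, Phi)   # inner inverse
assert np.allclose(X @ Phi @ X, X)       # outer inverse
assert np.allclose(Phi @ X, X @ Phi)     # commutes: group inverse axioms hold
```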

Before stating the respective result for the core inverse, let us introduce one more notation: \(\phi _A=\sum _{i,j=0}^1\left| \phi _{ij}\right| ^2\). In other words, \(\phi _A=\left\| \Phi _A\right\| _F^2\), where \(\left\| \cdot \right\| _F\) denotes the Frobenius norm.

Theorem 2

Let \(A\in {\mathcal {W}}^*(P,Q)\) be represented in the form (5). Then for \(A^{\tiny \textcircled{\#}}\) to exist it is necessary and sufficient that \(\det \Phi _A\) and \({\text {Tr}}\Phi _A\) are separated from zero on \(\Delta _2(A)\) and \(\Delta _1(A)\), respectively. Under these conditions

$$\begin{aligned} A^{\tiny \textcircled{\#}}=(\alpha _{00}^\dagger ,\alpha _{01}^\dagger ,\alpha _{10}^\dagger ,\alpha _{11}^\dagger )\oplus 0_{M^{(0)}\oplus M^{(0)}}\oplus \Psi _A(H_1)\oplus \Phi _A^{-1}(H_2), \end{aligned}$$
(7)

where

$$\begin{aligned} \Psi _A=\frac{1}{\phi _A{\text {Tr}}\Phi _A}\Phi _A\Phi _A^*. \end{aligned}$$
(8)

Proof

The existence conditions are the same as in Theorem 1 due to [10, Theorem 3.8]. When they hold, the right hand side of (7) is defined correctly. Checking that it matches the definition of \(A^{\tiny \textcircled{\#}}\) can be done block-wise, and the only not quite trivial part is that \(\Psi _A(H_1)\) is indeed the core inverse of \(\Phi _A(H_1)\).

To show that \(\Psi _A(H_1)\) is an inner inverse of \(\Phi _A(H_1)\), we have to verify the equality \(\Phi _A\Psi _A\Phi _A=\Phi _A\) when \({\text {rank}}\Phi _A=1\). In terms of solely \(\Phi _A\), the equality reads \(\Phi _A\Phi _A\Phi _A^*\Phi _A=(\phi _A {\text {Tr}}\Phi _A)\Phi _A\), and as both sides are invariant under unitary similarity, we may assume that \(\Phi _A\) is lower triangular. Due to the rank one condition, we may even assume that \(\Phi _A=\begin{bmatrix}u &{}\,\, 0 \\ v &{}\,\, 0\end{bmatrix}\) with \(|u|^2+|v|^2 >0\). This reduces the verification of the equality to a simple computation. Furthermore, the operator \((\phi _A{\text {Tr}}\Phi _A)(H_1)\) is invertible and commutes with the blocks of \(\Phi _A(H_1)\) and \(\Phi _A^*(H_1)\). From (8) it therefore follows that

$$\begin{aligned} {\mathcal {R}}(\Psi _A(H_1))={\mathcal {R}}(\Phi _A(H_1)\Phi _A^*(H_1))\ (={\mathcal {R}}(\Phi _A(H_1))) \end{aligned}$$

and \({\mathcal {N}}(\Psi _A(H_1))={\mathcal {N}}(\Phi _A(H_1)\Phi _A^*(H_1))\ (={\mathcal {N}}(\Phi _A^*(H_1)))\). \(\square \)
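
The rank one identity \(\Phi _A\Phi _A\Phi _A^*\Phi _A=(\phi _A {\text {Tr}}\Phi _A)\Phi _A\) used in the proof can also be confirmed numerically; a NumPy sketch with an arbitrary rank one 2-by-2 matrix (the vectors below are illustrative, any choice works):

```python
import numpy as np

# arbitrary rank one 2x2 matrix Phi = x y*
x = np.array([0.8 - 0.2j, 1.5 + 1.0j])
y = np.array([-1.1 + 0.4j, 0.6j])
Phi = np.outer(x, y.conj())

phi_A = np.linalg.norm(Phi, 'fro')**2        # squared Frobenius norm
lhs = Phi @ Phi @ Phi.conj().T @ Phi
rhs = phi_A * np.trace(Phi) * Phi
assert np.allclose(lhs, rhs)                 # the rank one identity

# hence Psi_A = Phi Phi* / (phi_A Tr Phi) is an inner inverse when Tr Phi != 0
tr = np.trace(Phi)
Psi = Phi @ Phi.conj().T / (phi_A * tr)
assert np.allclose(Phi @ Psi @ Phi, Phi)
```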

Corollary 3

Let \(A\in {\mathcal {W}}^*(P,Q)\) be group- (and thus core-) invertible. Then

$$\begin{aligned} A^{\tiny \textcircled{\#}}=A^\# \end{aligned}$$
(9)

if and only if \(\Phi _A(t)\) is a normal matrix for \(t\in \Delta _1(A)\).

Proof

From (6) and (7) with the use of (8) it is clear that (9) holds if and only if

$$\begin{aligned} \Phi _A=\frac{{\text {Tr}}\Phi _A}{\phi _A}\Phi _A\Phi _A^* \;\, \text{ on }\;\, \Delta _1(A). \end{aligned}$$

So, we need to check when a rank one 2-by-2 matrix X is such that

$$\begin{aligned} X=\frac{{\text {Tr}}X}{\left\| X\right\| _F^2}XX^*. \end{aligned}$$
(10)

If X is normal, then \(X=U\textrm{diag}(\lambda ,0)U^*\) with a unitary matrix U and a nonzero number \(\lambda \), and since the trace and the Frobenius norm are unitarily invariant, equality (10) amounts to the identity \(\lambda =(\lambda /|\lambda |^2)|\lambda |^2\). Conversely, suppose (10) holds. As the right-hand side is a scalar multiple of a Hermitian matrix, so is the X on the left, which implies that X is normal.

(Incidentally, an alternative argument can be based on the idea used in the previous proof. Namely, since all the ingredients in (10) are unitarily invariant, we may without loss of generality suppose that \(X=\begin{bmatrix}u &{}\,\, 0 \\ v &{}\,\, 0\end{bmatrix}\) with \(|u|^2+|v|^2 >0\). Then (10) takes the form

$$\begin{aligned} \begin{bmatrix}u &{}\,\, 0 \\ v &{}\,\, 0\end{bmatrix} = \frac{u}{\left| u\right| ^2+\left| v\right| ^2}\begin{bmatrix}\left| u\right| ^2 &{}\,\, u{\overline{v}} \\ {\overline{u}}v &{}\,\, \left| v\right| ^2 \end{bmatrix}. \end{aligned}$$

Obviously, this holds if and only if \(v=0\).) \(\square \)
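
A quick numerical illustration of this dichotomy: for rank one 2-by-2 matrices, equality (10) holds in the normal case and fails in the non-normal triangular case with \(v\ne 0\). A NumPy sketch (the specific values of \(\lambda \), u, v are illustrative):

```python
import numpy as np

def satisfies_10(X, tol=1e-10):
    """Check equality (10): X = (Tr X / ||X||_F^2) X X*."""
    rhs = np.trace(X) / np.linalg.norm(X, 'fro')**2 * (X @ X.conj().T)
    return np.allclose(X, rhs, atol=tol)

def is_normal(X, tol=1e-10):
    return np.allclose(X @ X.conj().T, X.conj().T @ X, atol=tol)

# rank one normal: U diag(lambda, 0) U* with nonzero lambda -> (10) holds
lam = 2.0 - 1.0j
c, s = np.cos(0.7), np.sin(0.7)
U = np.array([[c, -s], [s, c]])          # a real rotation, hence unitary
X_normal = U @ np.diag([lam, 0.0]) @ U.conj().T
assert is_normal(X_normal) and satisfies_10(X_normal)

# rank one non-normal: lower triangular [[u, 0], [v, 0]] with v != 0
X_bad = np.array([[1.0 + 0.5j, 0.0], [2.0, 0.0]])
assert not is_normal(X_bad) and not satisfies_10(X_bad)
```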

Along with the core inverse, the authors of [10] have also introduced its dual. Namely, an inner inverse of the operator A is called its dual core inverse, denoted by \(A_{\tiny \textcircled{\#}}\), if in place of (1)

$$\begin{aligned} {\mathcal {R}}(A_{\tiny \textcircled{\#}})={\mathcal {R}}(A^*) \;\, \text{ and }\;\, {\mathcal {N}}(A_{\tiny \textcircled{\#}})={\mathcal {N}}(A). \end{aligned}$$
It is easy to check that the analogue of Theorem 2 holds almost literally; one just needs to replace \(A^{\tiny \textcircled{\#}}\) with \(A_{\tiny \textcircled{\#}}\) in (7) and switch the positions of \(\Phi _A\) and \(\Phi _A^*\) in (8). Consequently, Corollary 3 with \(A^{\tiny \textcircled{\#}}\) replaced by \(A_{\tiny \textcircled{\#}}\) holds as well.

3 EP property

When combined with [10, Theorem 3.10] mentioned in the Introduction, Corollary 3 implies that a group invertible operator \(A\in {\mathcal {W}}^*(P,Q)\) possesses the EP property if and only if \(\Phi _A\) is normal on \(\Delta _1(A)\). However, we would like to treat the EP property of operators in \({\mathcal {W}}^*(P,Q)\) without any additional a priori conditions. The main step in this direction is to figure out what the normality of \(\Phi _A\) on \(\Delta _1(A)\) is equivalent to in this setting.

Proposition 4

Let \(A\in {\mathcal {W}}^*(P,Q)\). Then \({\mathcal {N}}(A)={\mathcal {N}}(A^*)\) if and only if the matrix-function \(\Phi _A\) from its representation (2) is normal on \(\Delta _1(A)\).

Proof

We will use the kernel description of A in terms of its representation (5). Recall therefore that \(M^{(1)}\) is the spectral subspace of H corresponding to the \(\Delta _1(A)\) part of its spectrum and that \(H_1\) is the compression of \(P(I-Q)P\) to \(M^{(1)}\). According to [11, Theorem 2.1] (or [4, Theorem 7.5]),

$$\begin{aligned} \mathcal N(A)=\oplus \{M_{ij}:\alpha _{ij}=0\}\oplus \left( M^{(0)}\oplus M^{(0)}\right) \oplus {\mathcal {N}}(\Phi _A(H_1)), \end{aligned}$$
(11)

with the last summand equal to the image (in \(M^{(1)}\oplus M^{(1)}\)) of \(M^{(1)}\) under the mapping \([\theta \chi _1, -\chi _0]^\top (H_1)\). Here

$$\begin{aligned} \chi _0=\sqrt{\frac{\left| \phi _{00}\right| ^2+\left| \phi _{10}\right| ^2}{\phi _A}}, \quad \chi _1=\sqrt{\frac{\left| \phi _{01}\right| ^2+\left| \phi _{11}\right| ^2}{\phi _A}}, \end{aligned}$$
(12)

and

$$\begin{aligned} \theta =\eta /\left| \eta \right| \,\text { with }\, \eta =\overline{\phi _{00}}\phi _{01}+\overline{\phi _{10}}\phi _{11} \end{aligned}$$
(13)

(as in [11], \(\theta (t)\) is unimodular with an arbitrarily chosen argument for values of t where \(\eta (t)=0\)).

It will be convenient to think of the last summand in (11) as \({\mathcal {R}} (B)\), where

$$\begin{aligned} B=\begin{bmatrix} \theta \chi _1 &{}\,\, 0 \\ -\chi _0 &{}\,\, 0\end{bmatrix}(H_1) \in {\mathcal {B}}(M^{(1)}\oplus M^{(1)}). \end{aligned}$$
(14)

Since

$$\begin{aligned} A^*=(\overline{\alpha _{00}},\overline{\alpha _{01}},\overline{\alpha _{10}},\overline{\alpha _{11}})\oplus 0_{M^{(0)}\oplus M^{(0)}}\oplus \Phi _A^*(H_1)\oplus \Phi _A^*(H_2), \end{aligned}$$
(15)

along with (11) we have

$$\begin{aligned} \mathcal N(A^*)=\oplus \{M_{ij}:\alpha _{ij}=0\}\oplus \left( M^{(0)}\oplus M^{(0)}\right) \oplus {\mathcal {R}}({\tilde{B}}), \end{aligned}$$

where

$$\begin{aligned} {\tilde{B}}=\begin{bmatrix} {\tilde{\theta }}\tilde{\chi _1} &{}\,\, 0 \\ -\tilde{\chi _0} &{}\,\, 0\end{bmatrix}(H_1), \end{aligned}$$
(16)
$$\begin{aligned} \tilde{\chi _0}=\sqrt{\frac{\left| \phi _{00}\right| ^2+\left| \phi _{01}\right| ^2}{\phi _A}}, \quad \tilde{\chi _1}=\sqrt{\frac{\left| \phi _{10}\right| ^2+\left| \phi _{11}\right| ^2}{\phi _A}}, \end{aligned}$$

and

$$\begin{aligned} {\tilde{\theta }}={\tilde{\eta }}/\left| {\tilde{\eta }}\right| \, \text { with } \,{\tilde{\eta }}=\overline{\phi _{10}}\phi _{00}+\overline{\phi _{11}}\phi _{01}. \end{aligned}$$

From (11) and (15) we see that \(\mathcal N(A)={\mathcal {N}}(A^*)\) if and only if the ranges of B and \({\tilde{B}}\) coincide.

We claim that \({\mathcal {R}}({\tilde{B}}) = {\mathcal {R}}({B})\) if and only if \({\tilde{B}}=B\). We actually show that if \(\mathcal R({\tilde{B}})\subseteq {\mathcal {R}}({B})\), then necessarily \({\tilde{B}}=B\). To prove this, we employ Douglas’ lemma [7, Theorem 1], which says that \({\mathcal {R}}({\tilde{B}})\subseteq \mathcal R({B})\) if and only if \({\tilde{B}}{\tilde{B}}^*\le c BB^*\) for some \(c>0\). With the use of (14), (16), the latter inequality can be rewritten as

$$\begin{aligned} \begin{bmatrix} {\tilde{\chi _1}}^2 &{}\,\,\, -\tilde{\chi _0}\tilde{\chi _1}{\tilde{\theta }} \\ -\tilde{\chi _0}\tilde{\chi _1}/{\tilde{\theta }} &{}\,\,\, {\tilde{\chi _0}}^2\end{bmatrix} \le c\begin{bmatrix} \chi _1^2 &{}\,\,\, -\chi _0\chi _1\theta \\ -\chi _0\chi _1/\theta &{}\,\,\, \chi _0^2\end{bmatrix} \end{aligned}$$
(17)

pointwise on \(\Delta _1(A)\). The matrices involved in (17) are 2-by-2 rank one Hermitian idempotents. A moment’s thought reveals that (17) holds if and only if these matrices are equal, i.e.,

$$\begin{aligned} \chi _j=\tilde{\chi _j}\ (j=0,1) \, \text { and } \, \theta ={\tilde{\theta }}. \end{aligned}$$
(18)

But then \(B={\tilde{B}}\), as claimed.

At this point we have proved that \({\mathcal {N}}(A)={\mathcal {N}}(A^*)\) if and only if (18) holds. To show that (18) is equivalent to the normality of \(\Phi _A\) on \(\Delta _1(A)\), we invoke the following criterion (a generalization of which to n-by-n matrices can be found as Lemma 5.1 in [6]): a 2-by-2 matrix is normal if and only if \(|\phi _{01}|=|\phi _{10}|\) and either \(\phi _{00}=\phi _{11}\) or \(2\arg (\phi _{11}-\phi _{00})=\arg \phi _{01}+\arg \phi _{10}\). The first two conditions in (18) are equivalent to \(\left| \phi _{01}\right| =\left| \phi _{10}\right| \), while the last one means that \({\overline{\eta }}{\tilde{\eta }}\) is positive on \(\Delta _1(A)\). But

$$\begin{aligned} {\overline{\eta }}{\tilde{\eta }}=\phi _{00}\overline{\phi _{11}}(\left| \phi _{01}\right| ^2+\left| \phi _{10}\right| ^2)+\phi _{00}^2 \overline{\phi _{01}}\overline{{\phi _{10}}}+\overline{\phi _{11}}^2\phi _{01}\phi _{10}. \end{aligned}$$
(19)

On \(\Delta _1(A)\) we have \(\phi _{00}\phi _{11}=\phi _{01}\phi _{10}\), and the sum of the last two terms in (19) is just \(\phi _{00}\overline{\phi _{11}}(\left| \phi _{00}\right| ^2+\left| \phi _{11}\right| ^2)\). So, (19) simplifies further to \({\overline{\eta }}{\tilde{\eta }}=\phi _{00}\overline{\phi _{11}}\phi _A\) and its positivity is equivalent to \(\arg \phi _{00}=\arg \phi _{11}\). This in turn can be rewritten as \(\arg (\phi _{00}\phi _{11})=2\arg (\phi _{00}-\phi _{11})\). It remains to take into account that \(\arg (\phi _{00}\phi _{11})=\arg (\phi _{01}\phi _{10})\). \(\square \)
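
The simplification \({\overline{\eta }}{\tilde{\eta }}=\phi _{00}\overline{\phi _{11}}\phi _A\) on \(\Delta _1(A)\) can be double-checked numerically; the following NumPy sketch generates the entries of an arbitrary rank one matrix (so that \(\phi _{00}\phi _{11}=\phi _{01}\phi _{10}\) holds automatically; the numbers are illustrative):

```python
import numpy as np

# entries of a rank one 2x2 matrix Phi = x y*
x = np.array([1.2 - 0.3j, -0.4 + 0.9j])
y = np.array([0.7 + 0.5j, 1.1 - 0.2j])
p00, p01 = x[0] * y[0].conj(), x[0] * y[1].conj()
p10, p11 = x[1] * y[0].conj(), x[1] * y[1].conj()
assert np.isclose(p00 * p11, p01 * p10)            # rank one condition

eta = p00.conj() * p01 + p10.conj() * p11          # eta of (13)
eta_t = p10.conj() * p00 + p11.conj() * p01        # tilde-eta
phi_A = abs(p00)**2 + abs(p01)**2 + abs(p10)**2 + abs(p11)**2

# the simplification used in the proof:
assert np.allclose(eta.conj() * eta_t, p00 * p11.conj() * phi_A)
```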

From the proof of Proposition 4 it can be seen that for operators \(A\in {\mathcal {W}}^*(P,Q)\) the kernels of A and \(A^*\) are nested (i.e., one is contained in the other) only when they coincide.

Recall now that for any \(X\in {\mathcal {B}}({\mathcal {H}})\) the closure of \({\mathcal {R}}(X)\) is nothing but \({\mathcal {N}}(X^*)^\perp \). So, X is an EP operator if and only if its range is closed and \(\mathcal N(X^*)={\mathcal {N}}(X)\). A range-closedness criterion for operators in \({\mathcal {W}}^*(P,Q)\) is known [11, Sect. 2] (see also [4, Theorem 7.7]) and reads as follows.

Proposition 5

Let \(A\in {\mathcal {W}}^*(P,Q)\). Then \({\mathcal {R}}(A)\) is closed if and only if \(\det \Phi _A\) and \(\left\| \Phi _A\right\| _F\) are separated from zero on \(\Delta _2(A)\) and \(\Delta _1(A)\), respectively.

Combining Propositions 4 and 5, we arrive at the following.

Theorem 6

Let \(A\in {\mathcal {W}}^*(P,Q)\). Then A is an EP operator if and only if the determinant of \(\Phi _A\) is separated from zero on \(\Delta _2(A)\) and \(\Phi _A\) is a normal matrix on \(\Delta _1(A)\) whose Frobenius norm is separated from zero on \(\Delta _1(A)\).

4 Examples

Let \(T \in {\mathcal {B}}({\mathcal {H}})\) be a skew projection, that is, suppose \(T^2=T \ne T^*\), let \(\alpha , \beta \) be complex numbers, and consider the operator \(A=T+\alpha T^*+\beta I\). If \(\alpha =0\), then A is easily seen to be core invertible. Indeed, since \(\sigma (T)=\{0,1\}\), we have \(\sigma (A)=\sigma (T+\beta I)=\{\beta , 1+\beta \}\), which shows that A is invertible whenever \(\beta \notin \{0,-1\}\). If \(\beta =0\), then \(A^2=A\), and if \(\beta =-1\), then \(A^2=-A\). In both cases A is Drazin invertible with index \(k=1\), hence group invertible and thus core invertible. To tackle the case \(\alpha \ne 0\), we employ Theorem 2.

Let P and Q be the orthogonal projections onto \({\mathcal {R}}(T)\) and \({\mathcal {N}}(T)\), respectively, and represent these two operators in the form (3) and (4). It is well known from [1] (also cited as Proposition 1.6 in [4]) that then \(\Vert PQ\Vert <1\) and that T may be written as \(T=(I-PQ)^{-1}P(I-PQ)\). Thus, T and hence also A are in \({\mathcal {W}}^*(P,Q)\).

Theorem 7

Let \(A=T+\alpha T^*+\beta I\) be represented in the form (5). If

(i) \(\alpha \ne 0\), \(\alpha +\beta \ne 0\), \(\beta \ne -1\), and \(\alpha /((\alpha +\beta )(1+\beta ))\) belongs to \(\sigma (H)\) but is not an isolated point of \(\sigma (H)\)

or

(ii) \(\alpha =-1-2\beta \), \(\beta \notin \{-1,-1/2\}\), and \(\alpha /((\alpha +\beta )(1+\beta ))\) belongs to \(\sigma (H)\) and is an isolated point of \(\sigma (H)\),

then \(A^{\tiny \textcircled{\#}}\) does not exist. In all other cases \(A^{\tiny \textcircled{\#}}\) exists.

Proof

We know that H, the compression of \(I-PQ\) to \(M_0\), is selfadjoint with \(\sigma (H) \subseteq [0,1]\) and that 1 cannot be an isolated point of \(\sigma (H)\). Since \(\Vert PQ\Vert <1\), the operator H is invertible, and thus actually \(\sigma (H) \subseteq (0,1]\). Straightforward computation using (3) and (4) gives

$$\begin{aligned} \Phi _T(H)= \begin{bmatrix} I &{}\,\,\,\, -\sqrt{H^{-1}-I}\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}, \end{aligned}$$

from which we obtain

$$\begin{aligned} \Phi _A(H)= \begin{bmatrix} (1+\alpha +\beta )I &{}\,\,\,\, -\sqrt{H^{-1}-I}\, \\ -\alpha \sqrt{H^{-1}-I} &{}\,\,\,\, \beta I\,\end{bmatrix}. \end{aligned}$$

Let first \(\alpha =0\). This case was already settled above, but let us examine what Theorem 2 gives. We have

$$\begin{aligned} \Phi _A(t)= \begin{bmatrix} 1+\beta &{}\,\,\,\, -\sqrt{1/t-1}\, \\ 0 &{}\,\,\,\, \beta \,\end{bmatrix}, \end{aligned}$$
(20)

which implies that \(\det \Phi _A(t)=\beta (1+\beta )\). Consequently, if \(\beta \notin \{0,-1\}\), then \(\Delta _2(A)=\sigma (H)\) and \(\Delta _1(A)=\emptyset \), and since \(\det \Phi _A\) is separated from zero on \(\Delta _2(A)\), Theorem 2 tells us that \(A^{\tiny \textcircled{\#}}\) exists. On the other hand, if \(\beta \in \{0,-1\}\), then \(\Delta _2(A)=\emptyset \) and \(\Delta _1(A)=\sigma (H)\), and because \(\textrm{Tr}\,\Phi _A(t)=1+2\beta \ne 0\), we deduce from Theorem 2 that \(A^{\tiny \textcircled{\#}}\) exists.

Let now \(\alpha \ne 0\) and thus

$$\begin{aligned} \Phi _A(t)= \begin{bmatrix} 1+\alpha +\beta &{}\,\,\,\, -\sqrt{1/t-1}\, \\ -\alpha \sqrt{1/t-1} &{}\,\,\,\, \beta \,\end{bmatrix}. \end{aligned}$$
(21)

Then \(\det \Phi _A(t)=(1+\alpha +\beta )\beta -\alpha (1/t-1)\), which equals zero if and only if

$$\begin{aligned} \frac{1}{t}=\frac{(\alpha +\beta )(1+\beta )}{\alpha }. \end{aligned}$$

This cannot happen if \(\alpha +\beta =0\) or \(\beta =-1\). It follows that in these cases \(\Delta _2(A)=\sigma (H)\) with \(\det \Phi _A\) being separated from zero on this set and \(\Delta _1(A)=\emptyset \). By Theorem 2, \(A^{\tiny \textcircled{\#}}\) exists.

We are left with the case \(\alpha +\beta \ne 0\) and \(\beta \ne -1\). Then the zero of \(\det \Phi _A\) is \(t_0=\alpha /((\alpha +\beta )(1+\beta ))\). If \(t_0 \notin \sigma (H)\), then \(\Delta _2(A)=\sigma (H)\) and \(\Delta _1(A)= \emptyset \), and as \(\det \Phi _A\) is separated from zero on \(\Delta _2(A)\), we conclude from Theorem 2 that \(A^{\tiny \textcircled{\#}}\) exists. So assume \(t_0 \in \sigma (H)\). Then \(\Delta _2(A)=\sigma (H) \setminus \{t_0\}\). If \(t_0\) is not an isolated point of \(\sigma (H)\), then \(\det \Phi _A\) is not separated from zero on \(\Delta _2(A)\) and hence \(A^{\tiny \textcircled{\#}}\) does not exist by Theorem 2. This is exactly case (i) of the theorem. Thus, suppose \(t_0\) is an isolated point of \(\sigma (H)\). Then \(\det \Phi _A\) is separated from zero on \(\Delta _2(A)=\sigma (H) \setminus \{t_0\}\). Since 1 is not an isolated point of \(\sigma (H)\), we conclude that \(t_0 \ne 1\). Therefore

$$\begin{aligned} \frac{1}{t_0}-1=\frac{(1+\alpha +\beta )\beta }{\alpha } \end{aligned}$$

cannot be zero. It follows that \(1+\alpha +\beta \ne 0\) and \(\beta \ne 0\), and this implies that \(\Delta _1(A)=\{t_0\}\). We have \(\textrm{Tr}\, \Phi _A(t_0)=1+\alpha +2\beta \). If this is nonzero, Theorem 2 gives the existence of \(A^{\tiny \textcircled{\#}}\). The same theorem implies that \(A^{\tiny \textcircled{\#}}\) does not exist if \(1+\alpha +2\beta =0\). In summary, this last case is the situation where \(\alpha \ne 0\), \(\alpha +\beta \ne 0\), \(\beta \ne -1\), \(1+\alpha +2\beta =0\), and \(t_0\) is an isolated point of \(\sigma (H)\). This is equivalent to saying that \(\alpha =-1-2\beta \) with \(\beta \notin \{-1,-1/2,0\}\) and that \(t_0\) is an isolated point of \(\sigma (H)\), which is exactly case (ii) of the theorem. \(\square \)

It follows in particular that \(T+\alpha T^*\) (\(\alpha \ne 0\)) is core invertible if and only if \(1 \notin \sigma (H)\) and that the so-called Buckholtz operator \(T+T^*-I\) is always core invertible. Here are two concrete examples of the cases (i) and (ii) of Theorem 7.

Example 8

Let H be the operator defined by \(Hz=\frac{8}{9}z\) on \({\mathbb {C}}\) and consider the projection

$$\begin{aligned} T=\begin{bmatrix} I &{}\,\,\,\, -\sqrt{H^{-1}-I}\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}= \begin{bmatrix} 1 &{}\,\,\,\, -\sqrt{2}/4\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix} \end{aligned}$$

on \({\mathbb {C}}^2\). Take \(\alpha =-1/2\), \(\beta =-1/4\) and put

$$\begin{aligned} A=T+\alpha T^*+\beta I= \begin{bmatrix} \frac{1}{4}I &{}\,\,\,\, -\sqrt{H^{-1}-I}\, \\[1ex] \frac{1}{2}\sqrt{H^{-1}-I} &{}\,\,\,\, - \frac{1}{4}I\,\end{bmatrix} =\frac{1}{8}\begin{bmatrix} 2 &{}\,\,\,\, -2\sqrt{2}\, \\ \sqrt{2} &{}\,\,\,\, -2\,\end{bmatrix}. \end{aligned}$$

Since \(\alpha /((\alpha +\beta )(1+\beta ))=8/9\) and \(\sigma (H)=\{8/9\}\), we are in case (ii) of Theorem 7. We have \(\Delta _2(A)=\emptyset \), \(\Delta _1(A)=\{8/9\}\), \(\textrm{Tr}\,\Phi _A(8/9)=0\), and hence, in perfect agreement with Theorems 2 and 7, \(A^{\tiny \textcircled{\#}}\) does not exist.

To see this without having recourse to the theorems, note that the Jordan canonical form of A is \(\begin{bmatrix} 0 &{}\,\,\,\, 1\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}\), which implies that A is Drazin invertible with index \(k=2\). However, core or group invertibility is equivalent to Drazin invertibility with index \(k \le 1\). \(\square \)
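
The nilpotency of this particular A is a one-line computation; a NumPy check:

```python
import numpy as np

# the operator A of Example 8
A = np.array([[2.0, -2.0 * np.sqrt(2.0)],
              [np.sqrt(2.0), -2.0]]) / 8.0

assert np.any(A != 0)                          # A is nonzero ...
assert np.allclose(A @ A, np.zeros((2, 2)))    # ... with A^2 = 0, so the index is k = 2
assert np.linalg.matrix_rank(A) == 1
```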

Example 9

Consider the operator H given by \((Hf)(x)=\frac{1+x^2}{2} f(x)\) on \(L^2(-1,1)\). Clearly \(\sigma (H)=[1/2,1]\). Define the projection T on the space \({\mathcal {H}}=L^2(-1,1) \oplus L^2(-1,1)\) by

$$\begin{aligned} T=\begin{bmatrix} I &{}\,\,\,\, -\sqrt{H^{-1}-I}\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}= \begin{bmatrix} I &{}\,\,\,\, -M\left( \sqrt{\frac{1-x^2}{1+x^2}}\right) \, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}, \end{aligned}$$

where \(M(\varphi )\) stands for the operator of multiplication by \(\varphi \) on \(L^2(-1,1)\). We let \(\alpha =1\), \(\beta =0\) and define

$$\begin{aligned} A=T+\alpha T^*+\beta I = \begin{bmatrix} 2I &{}\,\,\,\, -M\left( \sqrt{\frac{1-x^2}{1+x^2}}\right) \, \\ -M\left( \sqrt{\frac{1-x^2}{1+x^2}}\right) &{}\,\,\,\, 0\,\end{bmatrix}. \end{aligned}$$
(22)

Since \(\alpha /((\alpha +\beta )(1+\beta ))=1\) belongs to \(\sigma (H)\) and is not an isolated point of \(\sigma (H)\), we have case (i) of Theorem 7. Thus, \(A^{\tiny \textcircled{\#}}\) does not exist.

In fact, the range of A is not closed, which implies that not even the Moore-Penrose inverse of A exists. Indeed, for A given by (22), we have \(\det \Phi _A(t)=1/t-1\) and this is not separated from zero on \(\sigma (H)\). Thus, the conditions of Proposition 5 fail. \(\square \)

Theorem 10

Let \(A=T+\alpha T^*+\beta I\) be represented in the form (5) and suppose that neither (i) nor (ii) of Theorem 7 holds. If

(iii) \(\alpha =0\) and \(\beta \in \{0,-1\}\)

or

(iv) \(\alpha \ne 0\), \(|\alpha |\ne 1\), \(\alpha +\beta \ne 0\), \(\beta \ne -1\), \(1+\alpha +2\beta \ne 0\), and \(\alpha /((\alpha +\beta )(1+\beta ))\) belongs to \(\sigma (H)\) and is an isolated point of \(\sigma (H)\),

then A is not an EP operator. In all other cases A is EP.

Proof

Let first \(\alpha =0\). Then \(\Phi _A(t)\) is given by (20). From the proof of Theorem 7 and from Theorem 6 we infer that A is EP if \(\beta \notin \{0,-1\}\). However, if (iii) holds and thus \(\beta \in \{0,-1\}\), then \(\Delta _2(A)=\emptyset \) and \(\Delta _1(A)=\sigma (H)\), and as \(\sigma (H)\) contains a point \(t \ne 1\) and the matrix \(\Phi _A(t)\) is not normal for this t, Theorem 6 implies that A is not EP.

So suppose \(\alpha \ne 0\) and \(\Phi _A(t)\) is the matrix (21). The proofs of Theorem 7 and Theorem 6 show that A is EP if \(\alpha +\beta =0\) or \(\beta =-1\). Thus, let \(\alpha +\beta \ne 0\) and \(\beta \ne -1\). If \(t_0 \notin \sigma (H)\), we see again from the proof of Theorem 7 and from Theorem 6 that A is EP. If \(t_0 \in \sigma (H)\) is not isolated, we are in case (i). There remains the case where \(t_0\) is in \(\sigma (H)\) and is an isolated point of \(\sigma (H)\). From the proof of Theorem 7 we know that then \(t_0 \ne 1\), \(1+\alpha +\beta \ne 0\), \(\beta \ne 0\), and \(\Delta _1(A)=\{t_0\}\). If \(\textrm{Tr}\, \Phi _A(t_0)=0\), we have case (ii). Otherwise \(A^{\tiny \textcircled{\#}}\) exists. We have

$$\begin{aligned} \Phi _A(t_0)= \begin{bmatrix} 1+\alpha +\beta &{}\,\,\,\, -\sqrt{1/t_0-1}\, \\ -\alpha \sqrt{1/t_0-1} &{}\,\,\,\, \beta \,\end{bmatrix} =: \begin{bmatrix} s &{}\,\,\,\, w\, \\ \alpha w &{}\,\,\,\, \beta \,\end{bmatrix}. \end{aligned}$$

It follows that

$$\begin{aligned} \Phi _A(t_0)\Phi _A(t_0)^*= & {} \begin{bmatrix} |s|^2+w^2 &{}\,\,\,\, s{\overline{\alpha }}w+w{\overline{\beta }}\, \\ \alpha w {\overline{s}}+\beta w &{}\,\,\,\, |\alpha |^2 w^2+|\beta |^2\,\end{bmatrix},\\ \Phi _A(t_0)^*\Phi _A(t_0)= & {} \begin{bmatrix} |s|^2+|\alpha |^2w^2 &{}\,\,\,\, {\overline{s}}w+{\overline{\alpha }}w\beta \, \\ w s +{\overline{\beta }}\alpha w &{}\,\,\,\, w^2+|\beta |^2\,\end{bmatrix}. \end{aligned}$$

Straightforward inspection reveals that \(\Phi _A(t_0)\) is normal if and only if \(|\alpha |=1\), which leads to (iv). \(\square \)

Example 11

Consider \(T=\begin{bmatrix} 1 &{}\,\,\,\, 1\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix}\) on \({\mathbb {C}}^2\). In the case (iii) we have the two operators T (\(\alpha =\beta =0\)) and \(T-I\) (\(\alpha =0\), \(\beta =-1\)). Elementary linear algebra gives

$$\begin{aligned}{} & {} T^\dagger =\begin{bmatrix} 1/2 &{}\,\,\,\, 0\, \\ 1/2 &{}\,\,\,\, 0\,\end{bmatrix}, \quad T^\#=\begin{bmatrix} 1 &{}\,\,\,\, 1\, \\ 0 &{}\,\,\,\, 0\,\end{bmatrix},\\{} & {} (T-I)^\dagger =\begin{bmatrix} 0 &{}\,\,\,\, 0\, \\ 1/2 &{}\,\,\,\, -1/2\,\end{bmatrix}, \quad (T-I)^\#=\begin{bmatrix} 0 &{}\,\,\,\, 1\, \\ 0 &{}\,\,\,\, -1\,\end{bmatrix}, \end{aligned}$$

and we see that, in agreement with Theorem 10, these two operators are not EP.

Now let T be as in Example 8, choose \(\alpha =-16/7\), \(\beta =1\), and put \(A=T+\alpha T^*+\beta I\). This is case (iv). Matlab returns

$$\begin{aligned} A^\dagger \approx \begin{bmatrix} -0.1536 &{}\,\,\,\, 0.4345\, \\ -0.1901 &{}\,\,\,\, 0.5377 \end{bmatrix},\quad A^\# \approx \begin{bmatrix} -0.5600 &{}\,\,\,\, -0.6930\, \\ 1.5839 &{}\,\,\,\, 1.9600 \end{bmatrix}, \end{aligned}$$

which shows that A is indeed not EP. \(\square \)
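
These values can also be reproduced without Matlab. The sketch below rebuilds A from the formula for T in Example 8, computes \(A^\dagger \) with `numpy.linalg.pinv`, uses the rank one shortcut \(A^\#=A/(\textrm{Tr}\,A)^2\) (valid since \(\textrm{Tr}\,A\ne 0\)), and checks that \(A^\dagger \ne A^\#\), i.e., that A is not EP:

```python
import numpy as np

s = np.sqrt(2.0) / 4.0
T = np.array([[1.0, -s], [0.0, 0.0]])      # the projection T of Example 8
alpha, beta = -16.0 / 7.0, 1.0
A = T + alpha * T.conj().T + beta * np.eye(2)

A_mp = np.linalg.pinv(A)                   # Moore-Penrose inverse
A_gp = A / np.trace(A) ** 2                # group inverse: A is rank one, Tr A != 0

assert np.linalg.matrix_rank(A) == 1
assert np.allclose(A @ A_gp @ A, A) and np.allclose(A @ A_gp, A_gp @ A)
assert np.isclose(A_mp[0, 0], -0.1536, atol=1e-3)
assert np.isclose(A_gp[0, 0], -0.5600, atol=1e-3)
assert not np.allclose(A_mp, A_gp)         # A^dagger != A^#, hence A is not EP
```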