1 Introduction

Consider the spaces \(l^p=l^p(\mathbb {Z})\) of two-sided infinite sequences \(x=(x_i)_{i\in \mathbb {Z}}\) of complex numbers equipped with the respective p-norms

$$\begin{aligned} \Vert x\Vert _p=\Vert (x_i)\Vert _p&=\left( \sum _{i\in \mathbb {Z}}|x_i|^p\right) ^{1/p}&\text { for } p\in [1,\infty )\\ \Vert x\Vert _\infty =\Vert (x_i)\Vert _\infty&=\sup \{|x_i|:i\in \mathbb {Z}\}&\text { for } p\in \{0,\infty \}, \end{aligned}$$

where \(l^0\) denotes the subspace of \(l^\infty \) of all sequences \((x_i)\) whose entries tend to zero as \(i\rightarrow \pm \infty \).

Roughly speaking, bounded linear operators \(A\in {\mathcal {L}}(l^p)\) on these \(l^p\) can be regarded as infinite matrices, and in view of this interpretation the definition of band-dominated operators is straightforward and easily understood: As a start, notice that every sequence \(a=(a_i)\in l^\infty \) defines an operator of multiplication \(aI:(x_i)\mapsto (a_ix_i)\) whose matrix representation is a diagonal matrix having the numbers \(a_i\) as entries along its main diagonal. Next, introduce the family of shift operators \(V_\alpha :(x_i)\mapsto (x_{i-\alpha })\) for every \(\alpha \in \mathbb {Z}\). Now, all finite combinations \(\sum _\alpha a_\alpha V_\alpha \) of these generators are referred to as band operators and form a (non-closed) algebra \({BO}\) in the Banach algebra \({\mathcal {L}}(l^p)\) of all bounded linear operators on the respective \(l^p\). Taking the closure of this algebra in \({\mathcal {L}}(l^p)\) clearly yields a Banach algebra, which is denoted by \({{\mathcal {A}}_{l^p}}\). Its elements are called band-dominated operators. In particular, the infinite matrix representations of band operators have only finitely many non-zero diagonals, i.e. have zero entries except in a “band” of finite “bandwidth”, whereas band-dominated operators may have infinitely many non-zero diagonals which decay towards zero away from the main diagonal.
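To make this matrix picture concrete, here is a small Python sketch (purely illustrative and not part of the theory; the diagonals and the truncation size are arbitrarily chosen) which assembles a finite section of a band operator \(\sum _\alpha a_\alpha V_\alpha \) from its diagonals and displays the resulting band structure.

```python
import numpy as np

def band_matrix_section(diagonals, n):
    """Finite section (indices -n..n) of the band operator sum_alpha a_alpha V_alpha.

    `diagonals` maps an offset alpha to a function a_alpha(i) returning the i-th entry
    of the multiplication operator a_alpha I.  Since (a_alpha V_alpha x)_i = a_alpha(i) x_{i-alpha},
    the matrix entry in row i and column i - alpha is a_alpha(i).
    """
    idx = list(range(-n, n + 1))
    M = np.zeros((2 * n + 1, 2 * n + 1), dtype=complex)
    for alpha, a in diagonals.items():
        for row, i in enumerate(idx):
            j = i - alpha
            if -n <= j <= n:
                M[row, j + n] = a(i)
    return M

# Hypothetical band operator with three diagonals (bandwidth 1).
diags = {0: lambda i: 2.0, 1: lambda i: 1.0 / (1 + abs(i)), -1: lambda i: 0.5}
print(np.round(band_matrix_section(diags, 3).real, 2))
```

The printed \(7\times 7\) matrix has non-zero entries only on the three diagonals \(\alpha \in \{-1,0,1\}\), i.e. within a band of bandwidth one around the main diagonal.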

It is well known that \({{\mathcal {A}}_{l^p}}\) is not only a Banach algebra, but also forms a closed universe w.r.t. invertibility and the Fredholm property. More precisely, if \(A\in {{\mathcal {A}}_{l^p}}\) is invertible, resp. Fredholm, then its inverse and (at least some of) its regularizers belong to \({{\mathcal {A}}_{l^p}}\), too. Whenever \(p\in \{0\}\cup (1,\infty )\) this actually holds for every regularizer (see e.g. [9, 10]). A main observation of [11] is that a semi-Fredholm operator \(A\in {{\mathcal {A}}_{l^p}}\) is automatically Fredholm.

The recent paper [3] studied whether \({{\mathcal {A}}_{l^p}}\) is also self-contained w.r.t. one-sided invertibility. The affirmative answer (under certain mild restrictions) was then applied to further interesting related classes of operators such as Wiener, E-modulated and slant-dominated operators. In Sect. 2 of the present work we repeat and extend this result on band-dominated operators from [3] in order to present a more complete picture. In particular, all the cases \(p\in \{0\}\cup [1,\infty ]\) can be treated in a uniform way here:

Theorem 1.1

Let \(A\in {{\mathcal {A}}_{l^p}}\). Then A is invertible from the left (resp. right) if and only if its lower norm \(\nu (A)=\inf \{\Vert Ax\Vert :\Vert x\Vert =1\}\) (resp. \(\nu (A^*)\)) is positive. In this case there exists a left (right) inverse B in \({{\mathcal {A}}_{l^p}}\) which is also a generalized inverse for A. The latter means that \(ABA=A\) and \(BAB=B\), such that \(I-AB\) is a projection parallel to the range \({{\,\mathrm{im}\,}}A\), i.e. onto a complement of \({{\,\mathrm{im}\,}}A\), and \(I-BA\) is a projection onto the kernel \(\ker A\).

Note again that B is a Fredholm regularizer and the two projections are compact (cf. [11]). For an introduction to generalized invertibility see e.g. [4, Chapter 4].
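For readers less familiar with generalized inverses, the following Python sketch (illustrative only; the matrix is an arbitrary rank-deficient example and the Moore–Penrose pseudoinverse merely serves as one concrete generalized inverse) checks the two defining identities and the two induced projections in finite dimensions.

```python
import numpy as np

# An arbitrary rank-deficient 3x3 matrix (rank 2) as a finite stand-in for A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
B = np.linalg.pinv(A)                                 # one particular generalized inverse

I = np.eye(3)
print(np.allclose(A @ B @ A, A), np.allclose(B @ A @ B, B))   # ABA = A and BAB = B
P, Q = I - B @ A, I - A @ B
print(np.allclose(P @ P, P), np.allclose(Q @ Q, Q))   # both I-BA and I-AB are projections
print(np.allclose(A @ P, 0))                          # I-BA projects onto ker A (since A(I-BA) = 0)
print(np.allclose(Q @ A, 0))                          # I-AB vanishes on im A, i.e. is parallel to im A
```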

The Wiener algebra Although \({BO}\) is one and the same operator algebra independent of the underlying space \(l^p\), its closure, the algebra \({{\mathcal {A}}_{l^p}}\) of band-dominated operators, depends on the choice of p. An important and fruitful superset of \({BO}\) which, on the one hand, is still p-invariant and, on the other hand, is beautifully self-contained in many regards is the so-called Wiener algebra \({\mathcal {W}}\). It is defined as the closure of \({BO}\) w.r.t. the norm \(\Vert \cdot \Vert _{\mathcal {W}}\) given by

$$\begin{aligned} \Vert A\Vert _{\mathcal {W}}{:=}\sum _{k}\Vert a_k\Vert _\infty \quad \text {for}\quad A=\sum _ka_kV_k\quad \text {with all}\quad a_k\in l^\infty . \end{aligned}$$

Equipped with \(\Vert \cdot \Vert _{\mathcal {W}}\), \({\mathcal {W}}\) is a Banach algebra and it is well known that:

  1. \({\mathcal {W}}\) is a subalgebra of \({{\mathcal {A}}_{l^p}}\) for every p with \(\Vert A\Vert _{{\mathcal {L}}(l^p)}\le \Vert A\Vert _{{\mathcal {W}}}\), i.e. Wiener operators and their adjoints act as band-dominated operators on all \(l^p\) (cf. [9, Section 2.5]).

  2. If \(A\in {\mathcal {W}}\) is invertible on one \(l^p\)-space then it is invertible on every \(l^p\)-space and the inverse \(A^{-1}\) belongs to \({\mathcal {W}}\), i.e. \({\mathcal {W}}\) is inverse closed (cf. [9, Theorem 2.5.2]).

  3. If \(A\in {\mathcal {W}}\) is Fredholm on one \(l^p\)-space then A is Fredholm on every \(l^p\)-space. In this case the Fredholm index is the same on all these spaces. Moreover there exists a \(B\in {\mathcal {W}}\) which is a Fredholm regularizer for A on all \(l^p\) (see [10, Theorem 3 and Corollary 25], as well as [1, 2, 6, 7] for a presentation of the history of these results).
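As a small numerical illustration of the Wiener norm and of item 1 above, the following Python sketch (with hypothetical diagonals; the suprema are approximated on a finite index range) computes \(\Vert A\Vert _{\mathcal {W}}\) for an operator given by three diagonals and confirms \(\Vert Ax\Vert _2\le \Vert A\Vert _{\mathcal {W}}\Vert x\Vert _2\) for a finitely supported test sequence.

```python
import numpy as np

# Hypothetical diagonals a_alpha(i) of A = sum_alpha a_alpha V_alpha.
diags = {0: lambda i: 2.0, 1: lambda i: 1.0 / (1 + abs(i)), -1: lambda i: 0.5}
idx = range(-200, 201)

# Wiener norm: sum over the diagonals of sup_i |a_alpha(i)| (suprema taken over idx).
wiener = sum(max(abs(a(i)) for i in idx) for a in diags.values())

# Apply A to a finitely supported test sequence x via (Ax)_i = sum_alpha a_alpha(i) x_{i-alpha}.
rng = np.random.default_rng(0)
x = {i: rng.standard_normal() for i in range(-20, 21)}
Ax = {i: sum(a(i) * x.get(i - alpha, 0.0) for alpha, a in diags.items()) for i in idx}

norm2 = lambda v: np.sqrt(sum(t * t for t in v.values()))
# Item 1 above (for p = 2): the l^2 operator norm of A is dominated by the Wiener norm.
print(wiener, norm2(Ax) <= wiener * norm2(x))
```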

Thus one can talk about \(A\in {\mathcal {W}}\), its invertibility, its Fredholm property, its inverse \(A^{-1}\) and its adjoint \(A^*\) independently of the underlying space \(l^p\), \(p\in \{0\}\cup [1,\infty ]\). This can be enriched by the following:

Theorem 1.2

Let \(A\in {\mathcal {W}}\). If A is left (resp. right) invertible on one \(l^p\) then it is left (right) invertible on all \(l^p\). In this case, there exists a \(B\in {\mathcal {W}}\) which is a one-sided inverse and also a generalized inverse for A on all spaces \(l^p\).

Theorem 1.2 already appeared in large parts in [3] and, in a sense, triggered the present work. Actually, one can push this further and show the following in Sect. 3:

Theorem 1.3

Let \(A\in {\mathcal {W}}\) be semi-Fredholm. Then

  (a) \(\ker A\) is the same on all \(l^p\) and there is a subspace K which serves as a complement of \({{\,\mathrm{im}\,}}A\) in all \(l^p\), resp. In particular, \(\ker A\) and K are subspaces of \(l^1\).

  (b) There exists a generalized inverse \(B\in {\mathcal {W}}\), hence \(I-BA\in {\mathcal {W}}\) is a projection onto \(\ker A\) and \(I-AB\in {\mathcal {W}}\) is a projection parallel to \({{\,\mathrm{im}\,}}A\).

Thus, besides 1. - 3. above, also one-sided invertibility, the kernel and the cokernel of \(A\in {\mathcal {W}}\), as well as suitable generalized inverses, are independent of the underlying space.

Generalized sequence spaces, the Hilbert space case and the Moore–Penrose inverse In fact, the above-mentioned properties 1. - 3. of \({\mathcal {W}}\) have been proved in the cited literature for the much more general setting of Wiener operators on the spaces \(l^p(\mathbb {Z}^N,X)\) of X-valued generalized sequences \((x_i)_{i\in \mathbb {Z}^N}\subset X\), with X being a Banach space, not necessarily finite dimensional. Also Theorems 1.1, 1.2 and 1.3 extend to this setting and remain true for Fredholm operators \(A\in {{\mathcal {A}}_{l^p}}\), resp. \(A\in {\mathcal {W}}\). This is the goal of Sect. 4. In the Hilbert space case a very particular generalized inverse, the Moore–Penrose inverse, is available; it offers a deeper understanding and will be discussed in this last section as well. Finally, a criterion for one-sided invertibility based on finite discretizations is given.

2 One-sided invertible band-dominated operators

Proposition 2.1

Let \(A\in {{\mathcal {A}}_{l^p}}\). The following are equivalent:

  1. A is invertible from the left.

  2. \(A^*\) is invertible from the right.

  3. \(\nu (A)>0\).

In this case there exists a left inverse \(B\in {{\mathcal {A}}_{l^p}}\) which is also a regularizer and a generalized inverse for A, s.t. \(I-AB\) is a compact projection parallel to \({{\,\mathrm{im}\,}}A\).

Proof

If A is left invertible then \(\nu (A)>0\), and if B is a left inverse to A then \(B^*\) is a right inverse to \(A^*\). Further, if \(A^*\) is right invertible then \((A^*)^*\) is left invertible. Since the lower norms \(\nu (A)\) and \(\nu ((A^*)^*)\) coincide (see e.g. [11, Section 2.3]) we have 1. \(\Rightarrow \) 2. \(\Rightarrow \) 3. Now, let \(\nu (A)>0\). Then, by [11, Corollary 2.3], A is semi-Fredholm and further, by [11, Theorem 4.3], it is Fredholm. [10, Theorem 21] yields a generalized inverse \(B\in {{\mathcal {A}}_{l^p}}\), i.e. \(ABA=A\) and \(BAB=B\). The relation \(A(I-BA)=A-ABA=0\) together with \(\nu (A)>0\) implies that \(I-BA=0\), i.e. B is a left inverse.

Finally, \((I-AB)^2=I-2AB+ABAB=I-2AB+AB=I-AB\), i.e. \(I-AB\) is a projection, and \((I-AB)A=A-ABA=0\), hence it is parallel to the range \({{\,\mathrm{im}\,}}A\) and maps onto a complement of \({{\,\mathrm{im}\,}}A\) (which is of finite dimension since A is Fredholm), thus \(I-AB\) is compact. \(\square \)

The analogous symmetric result reads as follows:

Proposition 2.2

Let \(A\in {{\mathcal {A}}_{l^p}}\). The following are equivalent:

  1. A is invertible from the right.

  2. \(A^*\) is invertible from the left.

  3. \(\nu (A^*)>0\).

In this case there exists a right inverse \(B\in {{\mathcal {A}}_{l^p}}\) which is also a regularizer and a generalized inverse for A, s.t. \(I-BA\) is a compact projection onto \(\ker A\).

Proof

If B is a right inverse to A then \(B^*\) is a left inverse to \(A^*\). The latter implies that \(\nu (A^*)>0\). Now, let \(\nu (A^*)>0\). Then [11, Corollary 2.3 and Theorem 4.3] apply to A and show again that A is Fredholm. [10, Theorem 21] yields a generalized inverse \(B\in {{\mathcal {A}}_{l^p}}\). Moreover, A is surjective: it is Fredholm and \(\nu (A^*)>0\) makes \(A^*\) injective, so the cokernel of A is trivial. Since \((I-AB)A=0\) and A is surjective, we find that \(I-AB=0\), i.e. B is a right inverse. The rest easily follows as above. \(\square \)

This finishes the proof of Theorem 1.1.

Remark 1

For all \(p\in \{0\}\cup (1,\infty )\), \({{\mathcal {A}}_{l^p}}\) includes the ideal of all compact operators (cf. [7, 9, 10]). This implies that, in these cases, if there is one regularizer \(B\in {{\mathcal {A}}_{l^p}}\) then all regularizers are band-dominated. To check this, let \(C\in {\mathcal {L}}(l^p)\) be s.t. \(I-CA\) is compact. Then \(C= B+(CA-I)B+C(I-AB)\in {{\mathcal {A}}_{l^p}}\), since the operators \(CA-I\) and \(I-AB\) are compact.

3 Operators in the Wiener class

We continue with the interesting operators A in the Wiener class \({\mathcal {W}}\) which, as already mentioned in the introduction, enjoy a number of properties independent of the underlying space. However, in some situations and for some parts of the proofs it is useful to indicate on which \(l^p\)-space \(A\in {\mathcal {W}}\) is considered; this is done by writing \(A=A_p\) in what follows.

3.1 One-sided invertibility

Lemma 3.1

If \(A,{\tilde{A}}\in {\mathcal {W}}\) coincide on one \(l^p\) then they coincide on all \(l^p\).

Proof

For \(n\in \mathbb {N}\) let \(P_n\) denote the canonical projection which truncates \((x_i)\in l^p\) by the rule

$$\begin{aligned} P_n:(x_i)\mapsto (\ldots ,0,x_{-n},\ldots ,x_0,\ldots ,x_n,0,\ldots ) \end{aligned}$$

and \(Q_n{:=}I-P_n\). Assume that \(A={\tilde{A}}\) on \(l^p\) but \((A-{\tilde{A}})x\ne 0\) for some \(x\in l^r\) with another exponent r. Then there is an \(m\in \mathbb {N}\) such that \(P_m(A-{\tilde{A}})x\ne 0\). Since \(\Vert P_m(A-{\tilde{A}})Q_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \) for arbitrary band-dominated operators (cf. [9, Theorem 2.1.6]), there exists an n with \(P_m(A-{\tilde{A}})P_nx\ne 0\). Consequently, \((A-{\tilde{A}})P_nx\ne 0\) for the element \(P_nx\in l^p\), a contradiction. \(\square \)

We continue with the proof of Theorem 1.2 which states that \({\mathcal {W}}\) is one-sided inverse closed:

Proof

Let \(A=A_p\in {\mathcal {W}}\) be left invertible on \(l^p\) and let \(D\in {{\mathcal {A}}_{l^p}}\) be a left inverse given by Theorem 1.1. Further, choose a band operator C with \(\Vert D-C\Vert \Vert A\Vert <1/2\). Then

$$\begin{aligned} \Vert I-CA\Vert =\Vert I-DA+(D-C)A\Vert \le \Vert D-C\Vert \Vert A\Vert <1/2, \end{aligned}$$

i.e. \(CA=I-(I-CA)\) is invertible by a Neumann series argument. Since \(CA\in {\mathcal {W}}\), also \((CA)^{-1}\in {\mathcal {W}}\). Thus, \(B=(CA)^{-1}C\in {\mathcal {W}}\) is a left inverse of A as \(BA=(CA)^{-1}CA=I\). Moreover \(ABA=AI=A\) and \(BAB=IB=B\), i.e. B is a generalized inverse.

This Wiener operator B and the equation \(BA=I\), hence the left invertibility of A, translate to all \(l^p\)-spaces by the previous lemma. The case of right invertible A is analogous. \(\square \)
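The construction in this proof is completely explicit and can be mimicked in finite dimensions; the following Python sketch (illustrative, with a random injective matrix A and a slightly perturbed left inverse playing the role of the band approximation C) checks that once \(\Vert I-CA\Vert <1\), the operator \(B=(CA)^{-1}C\) is again a left inverse and a generalized inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))          # injective (almost surely), hence left invertible
D = np.linalg.pinv(A)                    # an exact left inverse: D @ A = I

# C plays the role of the approximating band operator: close to D, but no left inverse itself.
C = D + 1e-3 * rng.standard_normal(D.shape)
I = np.eye(3)
print(np.linalg.norm(I - C @ A, 2) < 1)  # so CA is invertible by a Neumann series argument

B = np.linalg.solve(C @ A, C)            # B = (CA)^{-1} C
print(np.allclose(B @ A, I))             # B is again a left inverse of A ...
print(np.allclose(A @ B @ A, A), np.allclose(B @ A @ B, B))   # ... and a generalized inverse
```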

Notice that in general not all one-sided inverses of \(A\in {\mathcal {W}}\) belong to \({\mathcal {W}}\): For example, let \(P=\chi _{\mathbb {Z}_+}I\) be the projection which maps

$$\begin{aligned} (\ldots ,x_{-n},\ldots ,x_{-1},x_0,x_1,\ldots ,x_n,\ldots ) \mapsto (\ldots ,0,\ldots ,0,x_0,x_1,\ldots ,x_n,\ldots ) \end{aligned}$$

and \(Q=I-P\). For \(A=Q+V_{1}P\in {\mathcal {W}}\) the operator B which is defined by \(x=(x_i)\mapsto Bx{:=}Qx+PV_{-1}x+x_0y\), with \(y{:=}((|n|+1)^{-1})_{n\in \mathbb {Z}}\in l^2\setminus l^1\) is a left inverse on \(l^2\) which does not belong to \({\mathcal {W}}\).

Remark 2

As already mentioned in the introduction, the study and the results of the present work are related to the results on \({{\mathcal {A}}_{l^p}}\) and \({\mathcal {W}}\) obtained in [3]. More precisely, the following was already shown there by different methods:

  • \({{\mathcal {A}}_{l^p}}\) with \(1<p<\infty \) is one-sided inverse closed [3, Corollary 3.7].

  • The characterization of one-sided invertibility of \(A\in {{\mathcal {A}}_{l^p}}\), \(1<p<\infty \) in terms of its lower norm [3, Theorem 3.9].

  • \({\mathcal {W}}\) is one-sided inverse closed [3, Theorem 3.10].

  • The p-invariance (\(1<p<\infty \)) of one-sided invertibility of \(A\in {\mathcal {W}}\) [3, Corollary 3.11].

3.2 On kernels and cokernels

We continue with the study of the kernels of Fredholm operators \(A\in {\mathcal {W}}\).

Proposition 3.2

Let \(A\in {\mathcal {W}}\) be right invertible. Then the kernels \(\ker A\) coincide on all \(l^p\).

Proof

Recall that \(A=A_1\) is Fredholm on \(l^1\) and set \(K{:=}\ker A_1\) and \(k{:=}\dim K\). Since \(A=A_p\) is right invertible and Fredholm on all \(l^p\) and the index of \(A_p\) is the same on all \(l^p\), we find that \(\dim \ker A_p=k\) on all \(l^p\). Since \(K\subset l^1\subset l^p\) for all p, we moreover have \(K\subset \ker A_p\), and the equality of the dimensions yields the assertion. \(\square \)

Proposition 3.3

Let \(A\in {\mathcal {W}}\) be left invertible. Then there is a finite dimensional subspace \(K\subset l^1\) which serves as a complement of \({{\,\mathrm{im}\,}}A\) in all \(l^p\), resp.

Proof

\(A=A_p\) is Fredholm on all \(l^p\) and has a left inverse/generalized inverse \(B\in {\mathcal {W}}\) by Theorem 1.2. In particular, \(A_pB\) is a projection onto \({{\,\mathrm{im}\,}}A_p\) and \(I-A_pB\) projects onto a complement K of \({{\,\mathrm{im}\,}}A_p\) in \(l^p\). Now, the relation \(B(I-A_pB)=B-BA_pB=0\) shows \(K\subset \ker B\), and conversely \(Bx=0\) implies \(x=(I-A_pB)x\in K\), so K is the kernel of B. Since B is right invertible (A being a right inverse), Proposition 3.2 applied to B and K shows that K is independent of p and is included in \(l^1\). \(\square \)

Thus, we get the following picture for one-sided invertible \(A\in {\mathcal {W}}\):

Corollary 3.4

Let \(A\in {\mathcal {W}}\) be one-sided invertible. Then \(\ker A\subset l^1\) is the same on all \(l^p\) and there is a finite dimensional space \(K\subset l^1\) which serves as complement of \({{\,\mathrm{im}\,}}A\) in all \(l^p\), resp.

Clearly the next question is whether this extends to Fredholm operators \(A\in {\mathcal {W}}\) in general. Here is the proof of the affirmative answer, which was already stated in Theorem 1.3a):

Proof

By [10, Lemma 24] we can choose a Fredholm operator \(S_k\in {\mathcal {W}}\) with \({{\,\mathrm{ind}\,}}S_k =k=-{{\,\mathrm{ind}\,}}(A)\). Then \(AS_k\in {\mathcal {W}}\) is Fredholm of index 0 and, by [10, Corollary 12], there is a decomposition \(AS_k=W+S\) with an invertible W and a compact S of finite rank which fulfills \(\Vert S-P_nSP_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \). Since the set of invertible operators is open we find for sufficiently large n that \({\tilde{W}}{:=}W+(S-P_nSP_n)\) is still invertible. Since \(P_nSP_n\) in the decomposition \(AS_k={\tilde{W}}+P_nSP_n\) belongs to \({\mathcal {W}}\), also \({\tilde{W}}\in {\mathcal {W}}\) hence \({\tilde{W}}^{-1}\in {\mathcal {W}}\). This gives \(AB=I+T\) with \(B{:=}S_k{\tilde{W}}^{-1}\in {\mathcal {W}}\) and a finite rank operator \(T{:=}P_nSP_n{\tilde{W}}^{-1}\in {\mathcal {W}}\). By completely analogous arguments starting with \(S_k A\) one gets \({\hat{B}}A=I+{\hat{T}}\), with \({\hat{B}}\in {\mathcal {W}}\) and a finite rank operator \({\hat{T}}\in {\mathcal {W}}\) with \({\hat{T}}={\hat{T}}P_n\) (without loss of generality with the same n). Next, define \({\tilde{V}}=V_{2n+1}\) and further \({\tilde{P}}:(x_i)\mapsto (\chi _{M}(i)x_i)\) where \(M{:=}\{i\in \mathbb {Z}: i\ge -n\}\),

$$\begin{aligned} U{:=}(I-{\tilde{P}})+{\tilde{P}}{\tilde{V}}^{-1} \quad \text {and}\quad U^\star {:=}(I-{\tilde{P}})+{\tilde{V}}{\tilde{P}}. \end{aligned}$$
(1)

Then \(UU^\star =I\), \(U^\star U=I-P_n\), and \({{\,\mathrm{im}\,}}T \subset {{\,\mathrm{im}\,}}P_n=\ker U\) as well as \({{\,\mathrm{im}\,}}U^\star =\ker P_n\subset \ker {\hat{T}}\).

We get \(UABU^\star =I\), thus \(UA\in {\mathcal {W}}\) is Fredholm and right invertible, hence \(\ker (UA)\subset l^1\) is independent of p by Proposition 3.2. We particularly conclude that \(\ker A_\infty \subset \ker (UA_\infty )\subset l^1\). Therefore this \(\ker A_\infty \) is included in \(\ker A_p\) for every p. Since the converse inclusion is obvious, we find that \(\ker A_p\) is the same for every p.

Further \(U{\hat{B}}AU^\star =I\), hence \(AU^\star \in {\mathcal {W}}\) is Fredholm and left invertible, and \({\hat{K}}\subset l^1\) shall denote the common p-invariant complement of the ranges \({{\,\mathrm{im}\,}}AU^\star \) given by Proposition 3.3. Now consider \({{\,\mathrm{im}\,}}A_1\), which is a superset of \({{\,\mathrm{im}\,}}A_1U^\star \), set \(K_1{:=}{{\,\mathrm{im}\,}}A_1 \cap {\hat{K}}\) and fix a decomposition \({\hat{K}}=K_1\oplus K\). Then \({{\,\mathrm{im}\,}}A_1={{\,\mathrm{im}\,}}A_1U^\star \oplus K_1\), thus \(l^1={{\,\mathrm{im}\,}}A_1 \oplus K\). For arbitrary p, we still have \(l^p={{\,\mathrm{im}\,}}A_pU^\star \oplus {\hat{K}}\), and \({{\,\mathrm{im}\,}}A_p\) contains \({{\,\mathrm{im}\,}}A_pU^\star \) and the subspace \(K_p{:=}{{\,\mathrm{im}\,}}A_p \cap {\hat{K}}\) of \({\hat{K}}\), with \({{\,\mathrm{im}\,}}A_p={{\,\mathrm{im}\,}}A_pU^\star \oplus K_p\). From \({{\,\mathrm{im}\,}}A_1\subset {{\,\mathrm{im}\,}}A_p\) we conclude \(K_1\subset K_p\). Since neither the index of \(A_p\) nor its kernel dimension depend on p, by the above, the codimension of \({{\,\mathrm{im}\,}}A_p\) is p-invariant as well, so that these spaces \(K_p\) actually coincide with \(K_1\). Thus K serves as a complement for all \({{\,\mathrm{im}\,}}A_p\). \(\square \)

3.3 On generalized inverses

Having proved the p-invariance of the kernel and of a complement of the range \({{\,\mathrm{im}\,}}A\) of \(A\in {\mathcal {W}}\), we finally turn to corresponding p-invariant projections in \({\mathcal {W}}\) onto these spaces and to a generalized inverse of A in \({\mathcal {W}}\). We start with an auxiliary result:

Lemma 3.5

Let R be a bounded linear operator on \(l^0\) such that its range \({{\,\mathrm{im}\,}}R\) is a subspace of \(l^1\subset l^0\) of finite dimension. Then \(R\in {\mathcal {W}}\).

Proof

Choose a basis \(y_1,\ldots ,y_m\) in \({{\,\mathrm{im}\,}}R\). Then for every x there is a unique decomposition \(Rx=\sum _{j=1}^m\alpha _jy_j\). Further choose functionals \(f_1,\ldots ,f_m\) on \({{\,\mathrm{im}\,}}R\) with \(f_k(y_j)=\delta _{jk}\), where \(\delta _{jk}\) denotes the Kronecker delta, and extend them to \(l^0\) as \(g_k{:=}f_k\circ R\). Then, for all x,

$$\begin{aligned} \sum _{k=1}^m g_k(x)y_k=\sum _{k=1}^m f_k(Rx)y_k=\sum _{k=1}^m f_k\left( \sum _{j=1}^m\alpha _jy_j\right) y_k=\sum _{k=1}^m \alpha _ky_k=Rx. \end{aligned}$$

Thus we have a representation of R as a finite sum, where the functionals \(g_k\) are bounded on \(l^0\) and hence, since the dual of \(l^0\) is \(l^1\), can be interpreted as dual products with certain sequences \((g^{(k)}_i)_{i\in \mathbb {Z}}\in l^1\). To check that \(R\in {\mathcal {W}}\) it suffices to show that the summands of the form \(G:x\mapsto g(x)y\), with \(g=(g_i)\) and \(y=(y_i)\) of \(l^1\)-type, belong to \({\mathcal {W}}\): The entries of the canonical matrix representation of G are bounded by \(|y_i||g_j|\), hence the elements on the kth diagonal are bounded by \((|y_i||g_{i+k}|)_i\), resp. Consequently, the Wiener norm \(\Vert G\Vert _{{\mathcal {W}}}\) can be estimated as follows:

$$\begin{aligned} \sum _k\sup _i|y_i||g_{i+k}|\le \sum _k\sum _i|y_i||g_{i+k}| =\sum _i|y_i|\sum _k|g_{i+k}|=\Vert y\Vert _1\Vert g\Vert _1 \end{aligned}$$

which yields the assertion. \(\square \)
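The estimate at the end of this proof is easy to observe numerically; the following Python sketch (illustrative, with two arbitrarily chosen \(l^1\)-type sequences truncated to a finite window) computes the Wiener norm of the rank-one operator \(G:x\mapsto g(x)y\) diagonal by diagonal and compares it with the bound \(\Vert y\Vert _1\Vert g\Vert _1\).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
idx = np.arange(-n, n + 1)

# Two (truncated) l^1-type sequences and the rank-one operator G: x -> g(x) y with matrix (y_i g_j).
g = rng.standard_normal(idx.size) / (1 + np.abs(idx)) ** 2
y = rng.standard_normal(idx.size) / (1 + np.abs(idx)) ** 2
G = np.outer(y, g)

# Wiener norm of the truncation: sum over all diagonals of the sup of the moduli of their entries.
wiener = sum(np.max(np.abs(np.diag(G, k))) for k in range(-2 * n, 2 * n + 1))

bound = np.abs(y).sum() * np.abs(g).sum()
print(wiener <= bound, wiener, bound)    # the estimate ||G||_W <= ||y||_1 ||g||_1 from the proof
```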

Now we can close this section with the proof of Theorem 1.3b):

Proof

Consider \(A=A_0\) on \(l^0\). In view of Theorem 1.3a) we have \(\ker A_0\subset l^1\) and a complement \(K\subset l^1\) of \({{\,\mathrm{im}\,}}A_0\) in \(l^0\). Choose a projection \(P_1\) from \(l^0\) onto K parallel to \({{\,\mathrm{im}\,}}A_0\) and a projection \(P_2\) onto \(\ker A_0\). By Lemma 3.5 both belong to \({\mathcal {W}}\), and as in its proof there are representations

$$\begin{aligned} P_1x=\sum _{j=1}^k f_j(x)y_j \quad \text {and}\quad P_2x=\sum _{j=1}^l g_j(x)z_j \end{aligned}$$

with \(k=\dim {{\,\mathrm{coker}\,}}A\), \(l=\dim \ker A\) and \(f_j,g_j,y_j,z_j\) of \(l^1\)-type. Then the operator R

$$\begin{aligned} Rx{:=}\sum _{j=1}^{\min \{k,l\}} g_j(x)y_j \end{aligned}$$

is compact, belongs to \({\mathcal {W}}\) by Lemma 3.5 and has full rank \(\min \{k,l\}\).

Since \(g_j\circ P_2=g_j\) and \(P_1y_j=y_j\) we have \(R=P_1R=RP_2=P_1RP_2\). By construction, \(A+R\) is Fredholm with \(\dim \ker (A+R)=0\) if \(k\ge l\) (resp. \(\dim {{\,\mathrm{coker}\,}}(A+R)=0\) if \(k\le l\)), thus \(A+R\) is one-sided invertible. Let D be a one-sided inverse (and generalized inverse) in \({\mathcal {W}}\) for \(A+R\), which exists by Theorem 1.2, and define \(B{:=}(I-P_2)D(I-P_1) \in {\mathcal {W}}\). Since \(A=(I-P_1)A=A(I-P_2)=(I-P_1)A(I-P_2)\) and \(0=(I-P_1)R=R(I-P_2)=(I-P_1)R(I-P_2)\) it follows

$$\begin{aligned} ABA&=A(I-P_2)D(I-P_1)A=(A+R)(I-P_2)D(I-P_1)(A+R)\\&=(I-P_1)(A+R)D(A+R)(I-P_2)\\&=(I-P_1)(A+R)(I-P_2)=(I-P_1)A(I-P_2)=A,\\ BAB&=(I-P_2)D(I-P_1)A(I-P_2)D(I-P_1)\\&=(I-P_2)D(I-P_1)(A+R)(I-P_2)D(I-P_1). \end{aligned}$$

If D is a left (resp. right) inverse for \(A+R\) then the latter coincides with

$$\begin{aligned} (I-P_2)D(A+R)(I-P_2)D(I-P_1)&=(I-P_2)D(I-P_1)=B\text { (or}\\ (I-P_2)D(I-P_1)(A+R)D(I-P_1)&=(I-P_2)D(I-P_1)=B\text {, resp.)} \end{aligned}$$

This finishes the proof of Theorem 1.3b). \(\square \)

4 Generalization to \(l^p(\mathbb {Z}^N,X)\)

4.1 \(l^p(\mathbb {Z}^N,X)\) with a Banach space X

We now turn our attention to the generalizations of the above concepts and results to the spaces \(l^p(\mathbb {Z}^N,X)\) of Banach space valued generalized sequences \((x_i)_{i\in \mathbb {Z}^N}\subset X\), where \(N\in \mathbb {N}\) and X is a Banach space. The p-norms are naturally extended as

$$\begin{aligned} \Vert x\Vert _p=\Vert (x_i)\Vert _p&=\left( \sum _{i\in \mathbb {Z}^N}\Vert x_i\Vert _{X}^p\right) ^{1/p}&\text { for } p\in [1,\infty )\\ \Vert x\Vert _\infty =\Vert (x_i)\Vert _\infty&=\sup \{\Vert x_i\Vert _{X}:i\in \mathbb {Z}^N\}&\text { for } p\in \{0,\infty \}. \end{aligned}$$

Clearly in this setting the operators of multiplication are of the form aI with bounded \(a=(a_i)_{i\in \mathbb {Z}^N}\), where \(a_i\in {\mathcal {L}}(X)\), the shifts are \(V_\alpha :(x_i)\mapsto (x_{i-\alpha })\) for every \(\alpha \in \mathbb {Z}^N\), and then the definitions of \({{\mathcal {A}}_{l^p}}\) and \({\mathcal {W}}\) are identical. The definition of \(P_n\) is naturally extended by \(P_n=\chi _{[-n,n]^N}I\).

As long as \(N=1\) and X is of finite dimension, nothing changes, and reusing the above arguments verbatim still gives Theorems 1.1, 1.2 and 1.3. Essentially, the argument which gets lost in the more general situation is the automatic Fredholm property of semi-Fredholm band-dominated operators from [11]. However, supposing additionally that A is Fredholm, the proofs in Sects. 2 and 3.1 still work and immediately yield:

Theorem 4.1

Let \(A\in {{\mathcal {A}}_{l^p}}\) be Fredholm. Then the assertions of Theorem 1.1 hold.

Theorem 4.2

Let \(A\in {\mathcal {W}}\) be Fredholm. Then the assertions of Theorem 1.2 hold.

For the extension of Theorem 1.3 for Fredholm \(A\in {\mathcal {W}}\) one starts as in Sect. 3.2 in order to find the relations \(AB=I+T\) and \({\hat{B}}A=I+{\hat{T}}\) with \(B,{\hat{B}}\in {\mathcal {W}}\) and finite rank operators \(T,{\hat{T}}\in {\mathcal {W}}\) such that \({{\,\mathrm{im}\,}}T\subset {{\,\mathrm{im}\,}}P_n\) and \(\ker P_n\subset \ker {\hat{T}}\).

Only the definitions of U and \(U^\star \) require a modification. For this introduce the operators \(C_j:l^p(\mathbb {Z}^N,X)\rightarrow X\), \((x_i)\mapsto x_j\), as well as \(D_j:X \rightarrow l^p(\mathbb {Z}^N,X)\) which map \(x\in X\) to \((x_i)\) with \(x_j=x\) and \(x_i=0\) for all \(i\ne j\). Further define the set \(H{:=}\{-n,\ldots ,n\}^N\subset \mathbb {Z}^N\) and two subspaces of X by

$$\begin{aligned} X_1{:=}{{\,\mathrm{span}\,}}\Big (\bigcup _{j\in H} C_j({{\,\mathrm{im}\,}}T)\Big ),\quad X_2{:=}\bigcap _{j\in H} \ker {\hat{T}} D_j. \end{aligned}$$

Note that \(X_1\) is of finite dimension and \(X_2\) of finite codimension in X. Choose a decomposition \(X=Y\oplus X_2\) and further \(X=Y\oplus ((X_1\cap X_2) \oplus Z)\) with Z being of finite codimension. Thus there is a bounded finite rank projection R of X parallel to Z onto \(Y\oplus (X_1\cap X_2)\). Next, set \(\alpha {:=}(2n+1,0,\ldots ,0)\in \mathbb {Z}^N\), define the shift \({\tilde{V}}=V_\alpha \) and the projection \({\tilde{P}}:(x_i)\mapsto (\chi _{M}(i)Rx_i)\) where

$$\begin{aligned} M{:=}\{i=(i_k)\in \mathbb {Z}^N: i_1\ge -n,\ |i_k|\le n,\ k=2,\ldots ,N\}. \end{aligned}$$

Now set \(U{:=}(I-{\tilde{P}})+{\tilde{P}}{\tilde{V}}^{-1}\) and \(U^\star {:=}(I-{\tilde{P}})+{\tilde{V}}{\tilde{P}}\). Then, again, \(UU^\star =I\), \(U^\star U=I-P_n{\tilde{P}}\), U is Fredholm, and \({{\,\mathrm{im}\,}}T \subset {{\,\mathrm{im}\,}}P_n{\tilde{P}}=\ker U\) as well as \({{\,\mathrm{im}\,}}U^\star =\ker P_n{\tilde{P}}\subset \ker {\hat{T}}\). The rest of the proof in Sects. 3.2 and 3.3 remains unchanged and thus

Theorem 4.3

Let \(A\in {\mathcal {W}}\) be Fredholm. Then the assertions of Theorem 1.3 hold.

4.2 \(l^p(\mathbb {Z}^N,X)\) with a Hilbert space X

If X is a Hilbert space then \(l^2(\mathbb {Z}^N,X)\) is a Hilbert space. Let \(A\in {\mathcal {L}}(l^2)\) have closed range. Then a particular generalized inverse, the Moore–Penrose inverse \(A^+\), exists and is unique, and there are several ways to characterize it (see e.g. [5]):

For example, \(A^+\) is determined by the four Moore–Penrose equations

$$\begin{aligned} A^+AA^+=A^+,\quad AA^+A=A,\quad (AA^+)^\star =AA^+,\quad (A^+A)^\star = A^+A. \end{aligned}$$
(2)

Here \(A^\star \) denotes the Hilbert space adjoint, which coincides with the formal adjoint

$$\begin{aligned} \sum _k V_{-k} a_k^\star I \quad \text {for every band-dominated }\quad A=\sum _k a_k V_k\in {\mathcal {A}}_{l^2}. \end{aligned}$$

Equivalently, \(A^+\) is the Moore–Penrose inverse of A if and only if \(AA^+\) is the orthogonal projection onto \({{\,\mathrm{im}\,}}A\) and \(I-A^+A\) is the orthogonal projection onto \(\ker A\). Furthermore, \(A^+\) is also given by the following uniform limits

$$\begin{aligned} A^+=\lim _{\delta \searrow 0}(A^\star A+\delta I)^{-1}A^\star =\lim _{\delta \searrow 0}A^\star (AA^\star +\delta I)^{-1}. \end{aligned}$$
(3)

They particularly simplify if A is left (or right) invertible: if \(\nu (A)>0\) then (whenever \(\Vert x\Vert =1\))

$$\begin{aligned} \Vert A^\star Ax\Vert =\Vert A^\star Ax\Vert \Vert x\Vert \ge (A^\star Ax,x)=(Ax,Ax)=\Vert Ax\Vert ^2\ge (\nu (A))^2, \end{aligned}$$

i.e. \(\nu (A^\star A)>0\). Together with \((A^\star A)^\star =A^\star A\) this yields invertibility. Thus, \((A^\star A)^{-1}A^\star \) (or \(A^\star (AA^\star )^{-1}\), respectively) exists and equals \(A^+\). More generally, for all operators A with closed range, due to e.g. [5, Theorem 2.1.5], the following holds

$$\begin{aligned} A^+=(A^\star A)^+A^\star =A^\star (AA^\star )^+. \end{aligned}$$
(4)
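The characterizations (2), (3) and (4) are easily tested in finite dimensions; the following Python sketch (illustrative, for a random real rank-deficient matrix, where the adjoint is just the transpose) verifies the four Moore–Penrose equations, formula (4), and the convergence of the regularized inverses in (3).

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 4)) @ rng.standard_normal((4, 6))   # rank-deficient, real: A* = A.T
Ap = np.linalg.pinv(A)

# The four Moore-Penrose equations (2):
print(np.allclose(Ap @ A @ Ap, Ap), np.allclose(A @ Ap @ A, A),
      np.allclose((A @ Ap).T, A @ Ap), np.allclose((Ap @ A).T, Ap @ A))

# Formula (4): A^+ = (A*A)^+ A*.
print(np.allclose(np.linalg.pinv(A.T @ A) @ A.T, Ap))

# The regularized limit (3): (A*A + delta I)^{-1} A*  ->  A^+  as delta -> 0.
for delta in (1e-2, 1e-4, 1e-6):
    approx = np.linalg.solve(A.T @ A + delta * np.eye(A.shape[1]), A.T)
    print(delta, np.linalg.norm(approx - Ap))
```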

If \(A\in {\mathcal {A}}_{l^2}\) is semi-Fredholm, i.e. has closed range and finite dimensional kernel (resp. cokernel) then \(C=A^\star A\) (resp. \(C=AA^\star \)) is Fredholm and \(C^+\) is a regularizer for C hence belongs to \({\mathcal {A}}_{l^2}\) by Remark 1. Equation (4) yields \(A^+\in {\mathcal {A}}_{l^2}\), which extends Theorems 1.1 and 4.1:

Theorem 4.4

Let \(A\in {\mathcal {A}}_{l^2}\) be semi-Fredholm. Then the assertions of Theorem 1.1 hold for \(B=A^+\in {\mathcal {A}}_{l^2}\).

Now, let \(A\in {\mathcal {W}}\) be semi-Fredholm. Then \(C=A^\star A\in {\mathcal {W}}\) (resp. \(C=AA^\star \in {\mathcal {W}}\)) is Fredholm; Theorem 4.3 yields a generalized inverse \(D\in {\mathcal {W}}\) for C and shows that \(\ker C\subset l^1\). Let R be the orthogonal projection onto \(\ker C\). Then \(RC=R^\star C^\star =(CR)^\star =0\). This implies that \({{\,\mathrm{im}\,}}C\subset {{\,\mathrm{im}\,}}(I-R)\). As \(\dim {{\,\mathrm{coker}\,}}C=\dim \ker C^\star =\dim \ker C=\dim {{\,\mathrm{im}\,}}R=\dim {{\,\mathrm{coker}\,}}(I-R)\), even \({{\,\mathrm{im}\,}}C={{\,\mathrm{im}\,}}(I-R)\). With an orthonormal basis \((y_k)\) of \(\ker C\subset l^1\) we get

$$\begin{aligned} Rx=\sum \langle Rx,y_k\rangle y_k=\sum \langle x,R^\star y_k\rangle y_k=\sum \langle x,Ry_k\rangle y_k=\sum \langle x,y_k\rangle y_k. \end{aligned}$$

As in the proof of Lemma 3.5 it follows that \(R\in {\mathcal {W}}\). Define another generalized inverse \(B{:=}(I-R)D(I-R)\in {\mathcal {W}}\) of C. Indeed,

$$\begin{aligned} BCB&=(I-R)D(I-R)C(I-R)D(I-R)\\&=(I-R)DCD(I-R)=(I-R)D(I-R)=B\\ CBC&=C(I-R)D(I-R)C=CDC=C. \end{aligned}$$

Notice that \(C-CBC=0\) yields \(\ker (I-CB)\supset {{\,\mathrm{im}\,}}C={{\,\mathrm{im}\,}}(I-R)\), hence \((I-CB)(I-R)=0\) which implies \(I-R=CB(I-R)=CB\), due to the definition of B. Furthermore, \({{\,\mathrm{im}\,}}(I-BC) \subset \ker C =\ker (I-R)\), hence \((I-R)(I-BC)=0\) which implies \(I-R=(I-R)BC=BC\). Consequently \(I-CB=I-BC=R\), thus \(C^+=B\in {\mathcal {W}}\) and, with Eq. (4):

Theorem 4.5

If \(A\in {\mathcal {W}}\) is semi-Fredholm then \(A^+\in {\mathcal {W}}\). Furthermore, \(A^+\) is a generalized inverse on every \(l^p(\mathbb {Z}^N,X)\) and yields projections \(I-A^+A\in {\mathcal {W}}\) onto \(\ker A\) and \(AA^+\in {\mathcal {W}}\) onto \({{\,\mathrm{im}\,}}A\).

In summary, if X is a Hilbert space, Theorem 1.1 for band-dominated operators on \(l^2(\mathbb {Z}^N,X)\), as well as Theorems 1.2 and 1.3 for Wiener class operators on \(l^p(\mathbb {Z}^N,X)\) remain true with the particular generalized inverse \(B=A^+\) and without imposing the additional condition ‘A is Fredholm’ as in Sect. 4.1.

4.3 A further criterion for one-sided invertibility based on finite matrices

Here we finally return to the classical situation \(X=\mathbb {C}\). The previous formulas reveal a result which is already known from [3]:

Corollary 4.6

\(A\in {\mathcal {W}}\) is left (right) invertible if and only if \(A^\star A\) (resp. \(AA^\star \)) is invertible.

This opens the door for a subsequent study and a characterization of one-sided invertibility in terms of finite matrices. Consider \(C=A^\star A\) on \(l^2\) and its compressions \(C_n=P_nCP_n\) to the finite dimensional subspaces \({{\,\mathrm{im}\,}}P_n\). These \(C_n\) can be regarded as finite square matrices acting on finite vectors with the Euclidean norm.

Proposition 4.7

For \(A\in {\mathcal {W}}\), \(C=A^\star A\) is invertible if and only if its compressions \((P_nCP_n)\) to the spaces \({{\,\mathrm{im}\,}}P_n\) are uniformly invertible.

Proof

From e.g. [12] it is well known that if these \(C_n\) are invertible and their inverses have uniformly bounded norms (this is sometimes called stability of the sequence \((C_n)\)) then C is invertible. Actually, this can also be seen directly. Recall that \(\Vert Q_nx\Vert \rightarrow 0\) on \(l^2\) as \(n\rightarrow \infty \), assume that \(\nu (C)=0\) and fix \(\epsilon >0\). Choose x, \(\Vert x\Vert =1\), s.t. \(\Vert Cx\Vert \le \epsilon \) and n so large that \(\Vert Q_nx\Vert \le \min \{1/2,\epsilon /\Vert C\Vert \}\). Then \(\Vert P_nx\Vert \ge 1/2\) and \(\nu (C_n)\Vert P_nx\Vert \le \Vert C_nP_nx\Vert \le \Vert CP_nx\Vert \le \Vert Cx\Vert +\Vert C\Vert \Vert Q_nx\Vert \le 2\epsilon \), hence \(\nu (C_n)\le 4\epsilon \). Since \(\epsilon \) was chosen arbitrarily this yields \(\liminf _n \nu (P_nCP_n)=0\), contradicting the uniform invertibility. Thus, the self-adjoint C is left invertible hence invertible. Conversely, let C be invertible. Then for every \(x\in {{\,\mathrm{im}\,}}R\), \(\Vert x\Vert =1\), with R being an arbitrary self-adjoint projection, we have

$$\begin{aligned} \Vert RCRx\Vert&=\Vert RA^\star Ax\Vert =\Vert RA^\star Ax\Vert \Vert x\Vert \\&\ge (R^\star A^\star Ax,x)=(Ax,ARx)=\Vert Ax\Vert ^2\ge (\nu (A))^2, \end{aligned}$$

hence the restriction RCR is bounded below and due to its self-adjointness even invertible on \({{\,\mathrm{im}\,}}R\) with a bound on \(\Vert (RCR)^{-1}\Vert \) which is independent of R. Applying this to all \(R=P_n\) we get the uniform invertibility of the operators \(C_n\). \(\square \)

The uniform invertibility of a sequence of matrices w.r.t. the norm \(\Vert \cdot \Vert _2\), particularly if they are self-adjoint, can be characterized and checked by various tools. E.g. it means that there exists \(c>0\) such that all eigenvalues of all \(P_nCP_n\) are larger than c. In fact, as these matrices are positive, one only has to consider the smallest eigenvalues. This is also equivalent to having a \(c>0\) such that \(\nu (P_nCP_n)\ge c\) for all n. Another equivalent characterization is the uniform boundedness of the condition numbers of all these matrices.

Band-dominated operators A, and in particular \(A\in {\mathcal {W}}\), fulfill both \(\Vert P_nAQ_{2n}\Vert \rightarrow 0\) and \(\Vert Q_{2n}AP_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \) (cf. e.g. [9, Theorem 2.1.6]). Hence \(\Vert P_nA^\star P_{2n}AP_n-P_nA^\star AP_n\Vert \), \(\Vert P_nAP_{2n}A^\star P_n-P_nAA^\star P_n\Vert \) tend to 0, which with Proposition 4.7 and Corollary 4.6 gives the following characterization of one-sided invertibility:

Theorem 4.8

Let \(A\in {\mathcal {W}}\). Then A is left (resp. right) invertible if and only if the sequence of positive matrices \((P_nA^\star P_{2n}AP_n)\) (resp. \((P_nAP_{2n}A^\star P_n)\)) is uniformly invertible w.r.t. the Euclidean norm.

Unlike the criterion in [3, Theorem 3.14], these matrices have the remarkable advantage that their construction only requires finitely many entries of the infinite matrix A; hence their invertibility can be checked effectively by finite computations, e.g. with one of the above-mentioned tools for \(p=2\).
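As a proof of concept, the following Python sketch (illustrative; it uses the operator \(A=Q+V_1P\) from Sect. 3.1, which is an isometry on \(l^2\) and thus left invertible, but not surjective) forms the matrices \(P_nA^\star P_{2n}AP_n\) and \(P_nAP_{2n}A^\star P_n\) of Theorem 4.8 and monitors their smallest eigenvalues.

```python
import numpy as np

def A_entry(i, j):
    # A = Q + V_1 P from Sect. 3.1: identity on the negative coordinates,
    # forward shift on the non-negative ones (left invertible, not right invertible).
    return 1.0 if (i < 0 and j == i) or (i >= 1 and j == i - 1) else 0.0

def smallest_eigenvalue(n, adjoint=False):
    rows = range(-2 * n, 2 * n + 1)      # index range of P_{2n}
    cols = range(-n, n + 1)              # index range of P_n
    M = np.array([[A_entry(j, i) if adjoint else A_entry(i, j) for j in cols] for i in rows])
    # P_n A* P_{2n} A P_n = (P_{2n} A P_n)^* (P_{2n} A P_n), and analogously with A, A* swapped.
    return np.linalg.eigvalsh(M.T @ M).min()

for n in (2, 5, 10, 20):
    print(n, smallest_eigenvalue(n), smallest_eigenvalue(n, adjoint=True))
# First column: bounded away from zero (left invertibility); second column: (numerically) zero.
```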

We point out that the results of this final Sect. 4.3 apply to \(A\in {\mathcal {W}}\) on all \(l^p\) with \(p\in \{0\}\cup [1,\infty ]\). Moreover, they actually hold on \(l^2\) with literally the same proofs for all band-dominated A in the superset \({\mathcal {A}}_{l^2}\) of \({\mathcal {W}}\).

Further notice that the “uniform” in the conditions on the compressions is essential, as the simple final example

$$\begin{aligned} A{:=}aI \text { with } a=(\ldots ,1,1,1,1/2,1/3,1/4,\ldots ,1/n,\ldots )\in l^\infty (\mathbb {Z}) \end{aligned}$$

shows: the respective compressions are all invertible, but their smallest eigenvalues tend to zero, so they are not uniformly invertible, whereas A itself is not invertible on any \(l^p\).
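A quick numerical look at this example (Python, illustrative; the placement of the entries of a relative to the index 0 is one concrete reading of the displayed sequence) shows that the finite sections of \(C=A^\star A=a^2I\) are all invertible while their smallest eigenvalues tend to zero.

```python
import numpy as np

# One concrete reading of the sequence a: a_i = 1 for i <= 0 and a_i = 1/(i+1) for i >= 1.
def a(i):
    return 1.0 if i <= 0 else 1.0 / (i + 1)

for n in (5, 50, 500):
    C_n = np.diag([a(i) ** 2 for i in range(-n, n + 1)])   # finite section of C = A*A = a^2 I
    eigs = np.linalg.eigvalsh(C_n)
    # Each section is invertible (all eigenvalues positive), but the smallest one tends to zero;
    # the sections are therefore not uniformly invertible, matching the fact that A is not invertible.
    print(n, eigs.min() > 0, eigs.min())
```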