1 Introduction

Let \(\mathscr {X}\) and \(\mathscr {Y}\) be complex Banach spaces and let \(\mathcal {B}(\mathscr {X}, \mathscr {Y})\) be the Banach space of all bounded linear operators from \(\mathscr {X}\) to \(\mathscr {Y}\) (of course, if \(\mathscr {X}=\mathscr {Y}\), then we write \(\mathcal {B}(\mathscr {X})\) instead of \(\mathcal {B}(\mathscr {X},\mathscr {X})\)). A non-empty set \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is reflexive if an operator \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) belongs to \(\mathcal {M}\) if and only if \(Tx\in \overline{\mathcal {M} x}\), for all \(x\in \mathscr {X}\). It is not hard to see that every finite set of operators is reflexive; see [3, Proposition 2.2]. If \(M\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) and \(\Lambda \subseteq \mathbb {C}\) is non-empty, then \(\Lambda \cdot M=\{ \lambda M;\; \lambda \in \Lambda \}\) is a reflexive set if and only if \(\Lambda \) is closed (see [3, Proposition 2.5]). In particular, every one-dimensional space of operators is reflexive.
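The role played by the closedness of \(\Lambda \) can be illustrated numerically. The following sketch (not from the paper; the matrix \(M\), the test vector, and the sampling grid are arbitrary choices) takes \(\Lambda \) to be the open unit disc and \(T=M\), a boundary multiple: \(Tx\) is approximated arbitrarily well by vectors \(\lambda Mx\) with \(\lambda \in \Lambda \), so \(T\in \textrm{Ref}(\Lambda \cdot M)\setminus \Lambda \cdot M\) and \(\Lambda \cdot M\) is not reflexive.

```python
import numpy as np

# For a single matrix M and a scalar set Lambda, T is "locally in" Lambda.M
# when Tx lies in the closure of {lam * M x : lam in Lambda} for every x.
# With Lambda the open unit disc and T = M (i.e. lam = 1, a boundary point),
# Tx is approximated arbitrarily well, although T itself is not in Lambda.M.

def dist_to_orbit(T, M, x, lams):
    """Distance from Tx to the sampled orbit {lam * M x : lam in lams}."""
    Mx = M @ x
    return min(np.linalg.norm(T @ x - lam * Mx) for lam in lams)

M = np.array([[1.0, 2.0], [0.0, 3.0]])
T = M  # corresponds to lam = 1, a boundary point of the open disc

# scalars lam with |lam| < 1 approaching the boundary point 1
lams = [1.0 - 10.0 ** (-k) for k in range(1, 8)]

x = np.array([1.0, -1.0])
d = dist_to_orbit(T, M, x, lams)
print(d)  # shrinks with finer sampling: T is locally in Lambda.M
```

The same computation with \(\Lambda \) replaced by its closure puts \(T\) into \(\Lambda \cdot M\) itself, in line with [3, Proposition 2.5].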

Reflexivity was introduced by Halmos for subalgebras of \(\mathcal {B}(\mathscr {H})\), where \(\mathscr {H}\) is a Hilbert space. Loginov and Shulman [8, 9] extended reflexivity to linear subspaces of \(\mathcal {B}(\mathscr {H})\) which are not necessarily algebras (see [6, Preliminaries]). In [3], we studied the reflexivity of arbitrary sets of operators; more precisely, no algebraic structure is assumed on the set under consideration. In [3, Section 4], we focused on the reflexivity of convex sets of operators. In this paper, we continue that study. Our main interest is in the question of whether the convex hull of a finite set of operators is reflexive. We are able to give an affirmative answer for the convex hull of three (or fewer) operators. However, the general problem remains open. The presented results are proved for flat sets of operators (for the definition, see Sect. 2.3), of which convex sets are a particular case.

The paper is organized as follows. In Sect. 2, we introduce notation and terminology and prove some preliminary results. If the set of operators contains only operators with high rank, then it is reflexive. This is proved in Sect. 3. The assertion follows from known results related to the reflexivity of linear spaces of operators with a high rank (see [4, 6, 7, 10]) and our main tool (Theorem 3.1) which gives a sufficient condition for a subset of a reflexive set to be reflexive. Section 4 is devoted to sets of operators determined by rank-one operators, and in the last section, we give a characterization of two-dimensional reflexive flat sets of operators.

2 Preliminaries

The dual space of a complex Banach space \(\mathscr {X}\) is denoted by \(\mathscr {X}^*\) and the pairing between these two Banach spaces is given by \(\langle x, \xi \rangle =\xi (x)\), for all \(x\in \mathscr {X},\,\xi \in \mathscr {X}^*\). For an operator \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\), we denote by \(\mathscr {R}(T)\) its range and by \(\mathscr {N}(T)\) its kernel. If \(\mathscr {R}(T)\) is a finite-dimensional subspace of \(\mathscr {Y}\), then T is a finite rank operator and we denote its rank, that is, the dimension of \(\mathscr {R}(T)\), by \(\textrm{rk}(T)\). For arbitrary \(0\ne f\in \mathscr {Y}\) and \(0\ne \xi \in \mathscr {X}^*\), the rank-one operator \(f\otimes \xi \) is given by \((f\otimes \xi )x=\langle x,\xi \rangle f\), for all \(x\in \mathscr {X}\). Note that \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) has rank \(k\in \mathbb {N}\) if and only if there exist linearly independent vectors \(f_1,\ldots , f_k\in \mathscr {Y}\) and linearly independent functionals \(\xi _1,\ldots ,\xi _k\in \mathscr {X}^*\), such that \(T=f_1\otimes \xi _1+\cdots +f_k\otimes \xi _k\).
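In the finite-dimensional case, the rank-one operators above have a concrete matrix realization, which the following sketch records (an illustration only; in the paper \(\xi \) lives in \(\mathscr {X}^*\), while here the pairing \(\langle x,\xi \rangle \) is realized as a dot product on \(\mathbb {C}^d\)).

```python
import numpy as np

# In C^d the rank-one operator f (x) xi acts as x -> <x, xi> f; with the
# pairing realized as a dot product, its matrix is the outer product of f
# and xi, which visibly has rank one.

f = np.array([1.0, 2.0])
xi = np.array([3.0, -1.0])
F = np.outer(f, xi)  # matrix of the rank-one operator f (x) xi

x = np.array([1.0, 1.0])
same = np.allclose(F @ x, np.dot(x, xi) * f)  # (f (x) xi)x = <x, xi> f
rank = np.linalg.matrix_rank(F)
print(same, rank)
```

Summing \(k\) such outer products with independent \(f_i\) and \(\xi _i\) produces, in the same way, a matrix of rank \(k\), matching the decomposition stated above.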

2.1 Reflexivity

For a non-empty set \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) and a vector \(x\in \mathscr {X}\), let \(\overline{\mathcal {M} x}\) be the closure of the orbit \(\mathcal {M} x=\{ Mx;\; M\in \mathcal {M}\}\subseteq \mathscr {Y}\). An operator \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) is locally in \(\mathcal {M}\) if \(Tx\in \overline{\mathcal {M} x}\), for all \(x\in \mathscr {X}\). The set of all those operators that are locally in \(\mathcal {M}\) is called the reflexive cover of \(\mathcal {M}\) and is denoted by \(\textrm{Ref}(\mathcal {M})\). Thus

$$\begin{aligned} \textrm{Ref}(\mathcal {M})=\bigcap _{x\in \mathscr {X}}\{ T\in \mathcal {B}(\mathscr {X},\mathscr {Y});\quad Tx \in \overline{\mathcal {M} x}\}. \end{aligned}$$

Hence, an operator T is in \(\textrm{Ref}(\mathcal {M})\) if and only if, for every \(x\in \mathscr {X}\) and every \(\varepsilon >0\), there exists an operator \(M_{x,\varepsilon }\in \mathcal {M}\), such that \(\Vert (T-M_{x,\varepsilon })x\Vert <\varepsilon \). In the following lemma, we show that \(\textrm{Ref}(\mathcal {M})\) is closed in the strong operator topology. Note, however, that \(\textrm{Ref}(\mathcal {M})\) is not closed in the weak operator topology, in general (see [3, p. 756]).

Lemma 2.1

The reflexive cover of a non-empty set is closed in the strong operator topology.

Proof

Let \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) be a non-empty set. Suppose that \(\bigl ( T_j\bigr )_{j\in J}\subseteq \textrm{Ref}(\mathcal {M})\) is a net that converges to \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) in the strong operator topology. Let \(x\in \mathscr {X}\) and \(\varepsilon >0\) be arbitrary. Then, there exists an index \(j_{\varepsilon }\in J\), such that \(\Vert Tx-T_j x\Vert <\frac{\varepsilon }{2}\), for all \(j\in J\) with \(j>j_{\varepsilon }\). Let \(j>j_{\varepsilon }\) be arbitrary. Since \(T_j\in \textrm{Ref}(\mathcal {M})\), there exists \(M_{x,\varepsilon }\in \mathcal {M}\), such that \(\Vert (T_j-M_{x,\varepsilon })x\Vert <\frac{\varepsilon }{2}\). Hence, \(\Vert (T-M_{x,\varepsilon })x\Vert \le \Vert (T-T_j)x\Vert +\Vert (T_j-M_{x,\varepsilon })x\Vert <\varepsilon \), that is, \(T\in \textrm{Ref}(\mathcal {M})\). \(\square \)

Hadwin [5] introduced algebraic reflexivity. The algebraic reflexive cover of \(\mathcal {M}\) is \(\textrm{Ref}_a(\mathcal {M})=\bigcap \nolimits _{x\in \mathscr {X}}\{ T\in \mathcal {B}(\mathscr {X},\mathscr {Y}); \quad Tx \in \mathcal {M} x\}\), that is, an operator T is in \(\textrm{Ref}_a(\mathcal {M})\) if and only if, for every \(x\in \mathscr {X}\), there exists \(M_x\in \mathcal {M}\), such that \(Tx=M_x x\). It is clear that \(\textrm{Ref}_a(\mathcal {M})\subseteq \textrm{Ref}(\mathcal {M})\) and these sets are equal if \(\mathcal {M} x\) is a closed subset of \(\mathscr {Y}\), for every \(x\in \mathscr {X}\). For instance, if \(\mathcal {M}\) is a finite set or a finite-dimensional subspace of \(\mathcal {B}(\mathscr {X},\mathscr {Y})\), then \(\mathcal {M} x\) is closed for every \(x\in \mathscr {X}\) and, therefore, \(\textrm{Ref}_a(\mathcal {M})= \textrm{Ref}(\mathcal {M})\).
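For a finite set, the orbit \(\mathcal {M} x\) is a finite (hence closed) subset of \(\mathscr {Y}\), so membership in \(\textrm{Ref}_a(\mathcal {M})=\textrm{Ref}(\mathcal {M})\) can be tested pointwise. The following sketch (matrices and the random seed are arbitrary choices, not from the paper) checks the local-membership condition at a sample vector and shows that the midpoint of two operators already fails it, consistent with the reflexivity of finite sets.

```python
import numpy as np

# For a finite set {M1, M2}, T is locally in the set iff Tx lies in the
# finite orbit {M1 x, M2 x} for every x. Random test vectors give a cheap
# necessary check; the midpoint (M1 + M2)/2 fails at a generic x.

rng = np.random.default_rng(0)
M1 = np.diag([1.0, 2.0])
M2 = np.diag([3.0, -1.0])
T = 0.5 * (M1 + M2)  # not in {M1, M2}

def locally_in_at(T, ops, x, tol=1e-9):
    """Is Tx within tol of Mx for some M in ops?"""
    return any(np.linalg.norm(T @ x - M @ x) <= tol for M in ops)

x = rng.standard_normal(2)
hit = locally_in_at(T, [M1, M2], x)
print(hit)  # fails at a generic x, so T is not locally in {M1, M2}
```

A single failing vector certifies \(T\notin \textrm{Ref}_a(\{ M_1,M_2\})\); verifying membership, by contrast, would require the condition at every \(x\).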

It is not hard to see that \(\mathcal {M} \subseteq \textrm{Ref}_a(\mathcal {M})\subseteq \textrm{Ref}(\mathcal {M})\). Moreover, one has \(\textrm{Ref}\bigl (\textrm{Ref}(\mathcal {M})\bigr )=\textrm{Ref}(\mathcal {M})\) and, similarly, \(\textrm{Ref}_a\bigl (\textrm{Ref}_a(\mathcal {M})\bigr )=\textrm{Ref}_a(\mathcal {M})\). A set \(\mathcal {M} \subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is said to be reflexive if \(\textrm{Ref}(\mathcal {M})=\mathcal {M}\). If \(\textrm{Ref}_a(\mathcal {M})=\mathcal {M}\), then \(\mathcal {M}\) is said to be algebraically reflexive. Of course, every reflexive set is algebraically reflexive.

Lemma 2.2

Let \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) be a non-empty set. If \(A\in \mathcal {B}(\mathscr {Y})\) and \(B\in \mathcal {B}(\mathscr {X})\) are invertible operators, then \(\textrm{Ref}(A\mathcal {M} B)=A\textrm{Ref}(\mathcal {M})B\). In particular, \(\mathcal {M}\) is reflexive if and only if \(A\mathcal {M} B\) is reflexive.

Proof

Assume that \(T\in \textrm{Ref}(\mathcal {M})\). Let \(x\in \mathscr {X}\) and \(\varepsilon >0\) be arbitrary. By the definition of the reflexive cover, applied at the vector Bx, there exists \(M_{x,\varepsilon }\in \mathcal {M}\), such that \(\Vert (T-M_{x,\varepsilon })Bx\Vert <\frac{\varepsilon }{\Vert A\Vert }\). It follows that \(\Vert (ATB-AM_{x,\varepsilon }B)x\Vert \le \Vert A\Vert \Vert (T-M_{x,\varepsilon })Bx\Vert <\varepsilon \). Since \(AM_{x,\varepsilon }B\in A\mathcal {M} B\), we conclude that \(ATB\in \textrm{Ref}(A\mathcal {M} B)\). We have proved that \(A\textrm{Ref}(\mathcal {M})B\subseteq \textrm{Ref}(A\mathcal {M} B) \). A similar inclusion holds if we replace A by \(A^{-1}\) and B by \(B^{-1}\), that is, \(A^{-1}\textrm{Ref}(\mathcal {M})B^{-1}\subseteq \textrm{Ref}(A^{-1}\mathcal {M} B^{-1})\), which gives \(\textrm{Ref}(\mathcal {M})\subseteq A\textrm{Ref}(A^{-1}\mathcal {M} B^{-1})B\). This last inclusion holds for all non-empty sets, and hence, we may apply it to \(A\mathcal {M} B\). Then, we obtain \(\textrm{Ref}(A\mathcal {M} B)\subseteq A\textrm{Ref}(\mathcal {M})B\). This proves the equality \(\textrm{Ref}(A\mathcal {M} B)=A\textrm{Ref}(\mathcal {M})B\). Of course, it follows from the equality that \(\mathcal {M}\) is reflexive if and only if \(A\mathcal {M} B\) is reflexive. \(\square \)

If \(\mathscr {X}_1\) and \(\mathscr {X}_2\) are complex Banach spaces, then let \(\mathscr {X}_1\oplus \mathscr {X}_2\) be the direct sum of \(\mathscr {X}_1\) and \(\mathscr {X}_2\) equipped with the norm \(\Vert x_1\oplus x_2\Vert =\Vert x_1\Vert +\Vert x_2\Vert \). For non-empty sets \(\mathcal {M}_i\subseteq \mathcal {B}(\mathscr {X}_i,\mathscr {Y}_i)\) \((i=1,2)\), let \(\mathcal {M}_1\oplus \mathcal {M}_2=\{ M_1\oplus M_2;\; M_1\in \mathcal {M}_1,\; M_2\in \mathcal {M}_2\}\subseteq \mathcal {B}(\mathscr {X}_1\oplus \mathscr {X}_2,\mathscr {Y}_1\oplus \mathscr {Y}_2)\).

Lemma 2.3

Let \(\mathcal {M}_i\subseteq \mathcal {B}(\mathscr {X}_i,\mathscr {Y}_i)\) \((i=1,2)\) be non-empty sets. Then, \(\textrm{Ref}(\mathcal {M}_1\oplus \mathcal {M}_2)=\textrm{Ref}(\mathcal {M}_1)\oplus \textrm{Ref}(\mathcal {M}_2)\). In particular, \(\mathcal {M}_1\oplus \mathcal {M}_2\) is reflexive if and only if \(\mathcal {M}_1\) and \(\mathcal {M}_2\) are reflexive.

Proof

Let \(T\in \textrm{Ref}(\mathcal {M}_1\oplus \mathcal {M}_2)\) be arbitrary. Then, with respect to the decompositions \(\mathscr {X}_1\oplus \mathscr {X}_2\) and \(\mathscr {Y}_1\oplus \mathscr {Y}_2\), the operator T has the operator matrix \(\left[ \begin{array}{ll} T_{11} & T_{12}\\ T_{21} & T_{22} \end{array} \right] \). For arbitrary \(x_1\oplus x_2\in \mathscr {X}_1\oplus \mathscr {X}_2\) and \(\varepsilon >0\), there exists an operator \(M_1\oplus M_2\in \mathcal {M}_1\oplus \mathcal {M}_2\) (which depends on \(x_1\oplus x_2\) and \(\varepsilon \)), such that \(\Vert (T-M_1\oplus M_2)(x_1\oplus x_2)\Vert <\varepsilon \). It follows that:

$$\begin{aligned} \Vert T_{11}x_1+T_{12}x_2-M_1 x_1\Vert<\varepsilon \quad \text {and}\quad \Vert T_{21}x_1+T_{22}x_2-M_2 x_2\Vert <\varepsilon . \end{aligned}$$
(2.1)

If \(x_1=0\) and \(x_2\in \mathscr {X}_2\) is arbitrary, then (2.1) implies that \(T_{12}=0\) and \(T_{22}\in \textrm{Ref}(\mathcal {M}_2)\). Similarly, if \(x_1\in \mathscr {X}_1\) is arbitrary and \(x_2=0\), then (2.1) implies \(T_{11}\in \textrm{Ref}(\mathcal {M}_1)\) and \(T_{21}=0\). We conclude that \(T=T_{11}\oplus T_{22}\in \textrm{Ref}(\mathcal {M}_1)\oplus \textrm{Ref}(\mathcal {M}_2)\).

To prove the opposite inclusion, assume that \(T\in \textrm{Ref}(\mathcal {M}_1)\oplus \textrm{Ref}(\mathcal {M}_2)\). Then, \(T=T_1\oplus T_2\), where \(T_1\in \textrm{Ref}(\mathcal {M}_1)\) and \(T_2\in \textrm{Ref}(\mathcal {M}_2)\). For arbitrary \(x_1\oplus x_2\in \mathscr {X}_1\oplus \mathscr {X}_2\) and \(\varepsilon >0\), there exist \(M_{x_1,\varepsilon }\in \mathcal {M}_1\) and \(M_{x_2,\varepsilon }\in \mathcal {M}_2\), such that \(\Vert (T_1-M_{x_1,\varepsilon })x_1\Vert <\varepsilon \) and \(\Vert (T_2-M_{x_2,\varepsilon })x_2\Vert <\varepsilon \). It follows that \(\Vert (T-M_{x_1,\varepsilon }\oplus M_{x_2,\varepsilon })(x_1\oplus x_2)\Vert <2\varepsilon \). We conclude that \(T\in \textrm{Ref}(\mathcal {M}_1\oplus \mathcal {M}_2)\). \(\square \)
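The vanishing of the off-diagonal blocks in the first half of the proof can be observed numerically. The following sketch (the \(2\times 2\) blocks are arbitrary choices, not from the paper) takes \(x_1=0\): every \(M_1\oplus M_2\) maps \(0\oplus x_2\) into \(0\oplus \mathscr {Y}_2\), while an operator with a nonzero block \(T_{12}\) sends it to a vector with nonzero \(\mathscr {Y}_1\)-component, so such an operator cannot be locally in \(\mathcal {M}_1\oplus \mathcal {M}_2\).

```python
import numpy as np

# Taking x1 = 0 in (2.1): a block upper-triangular T with T12 != 0 keeps a
# fixed positive distance from the orbit of M1 (+) M2 at 0 (+) x2, since the
# Y1-component of T(0 (+) x2) is T12 x2 while that of (M1 (+) M2)(0 (+) x2)
# is always 0.

M1 = np.eye(2)
M2 = 2.0 * np.eye(2)
T12 = np.eye(2)  # nonzero off-diagonal block

T = np.block([[M1, T12], [np.zeros((2, 2)), M2]])               # not block diagonal
M = np.block([[M1, np.zeros((2, 2))], [np.zeros((2, 2)), M2]])  # M1 (+) M2

x = np.concatenate([np.zeros(2), np.ones(2)])  # x1 = 0, x2 = (1, 1)

dist = np.linalg.norm(T @ x - M @ x)  # equals ||T12 x2|| > 0
print(dist)
```

Here the sets \(\mathcal {M}_i\) are singletons, so the orbit at \(x\) is a single point and the distance is computed exactly.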

2.2 Flat subsets of \(\mathbb {C}^n\)

In this paper, we will work with a special class of closed subsets \(\varvec{\Lambda }\subseteq \mathbb {C}^n\) called flat sets. A flat set \(\varvec{\Lambda }\) is determined by a complex matrix \(\varvec{C}\in \mathbb {M}_{m\times n}\) and an m-tuple of closed sets \(\Lambda _j\subseteq \mathbb {C}\) as follows:

$$\begin{aligned} \varvec{\Lambda }=\{ \varvec{\lambda }=(\lambda _1,\ldots ,\lambda _n)^{\intercal }\in \mathbb {C}^n;\quad \varvec{C}\varvec{\lambda } \in \Lambda _1\times \cdots \times \Lambda _m\}. \end{aligned}$$

In other words, \(\varvec{\Lambda }\) is a flat set if and only if it is the preimage of \(\Lambda _1\times \cdots \times \Lambda _m\) with respect to the linear transformation \(\varvec{C}:\mathbb {C}^n\rightarrow \mathbb {C}^m\). Since \(\Lambda _1\times \cdots \times \Lambda _m\) is a closed subset of \(\mathbb {C}^m\) and \(\varvec{C}\) is a continuous transformation, every flat set is closed. It is clear that \(\Lambda _1\times \cdots \times \Lambda _m\) itself is a flat set. In particular, every closed subset of \(\mathbb {C}\) is flat. The empty subset of \(\mathbb {C}^n\) is flat. Another obvious example of a flat set is any linear subspace of \(\mathbb {C}^n\). Indeed, every linear subspace \(\varvec{\Lambda }\) of \(\mathbb {C}^n\) is the kernel of a matrix, say \(\varvec{C}\in \mathbb {M}_{n\times n}\). Hence, \(\varvec{\Lambda }\) is determined by \(\varvec{C}\) and \(\{ 0\}^n\).
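Membership in a flat set is a purely componentwise condition on \(\varvec{C}\varvec{\lambda }\), which the following sketch makes explicit (an illustration only; the matrix \(\varvec{C}\) and the strip \(\Lambda _1\) are arbitrary choices, and each closed set \(\Lambda _j\) is represented by an indicator function).

```python
import numpy as np

# A flat set is Lambda = {lam in C^n : C lam in Lambda_1 x ... x Lambda_m};
# membership reduces to checking C @ lam componentwise against the closed
# sets Lambda_j, given here as indicator functions.

def in_flat_set(C, indicators, lam):
    """Check whether C @ lam lands componentwise in the sets Lambda_j."""
    w = np.asarray(C) @ np.asarray(lam, dtype=complex)
    return all(ind(w_j) for ind, w_j in zip(indicators, w))

# Example: the "strip" {lam in C^2 : Re(lam_1 + lam_2) in [0, 1]}
C = np.array([[1.0, 1.0]])                     # a 1 x 2 matrix
indicators = [lambda z: 0.0 <= z.real <= 1.0]  # Lambda_1 closed in C

inside = in_flat_set(C, indicators, [0.25, 0.25j])  # Re(0.25 + 0.25i) = 0.25
outside = in_flat_set(C, indicators, [2.0, 0.0])    # Re(2) = 2
print(inside, outside)
```

The closedness of every flat set corresponds, in this picture, to each indicator describing a closed subset of \(\mathbb {C}\).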

Proposition 2.4

Let \(\varvec{\Lambda }\subseteq \mathbb {C}^n\) be a flat set.

  1. (i)

    If \(\varvec{\mu }=(\mu _1,\ldots ,\mu _n)^{\intercal }\in \mathbb {C}^n\), then \(\varvec{\Lambda }-\varvec{\mu }=\{ \varvec{\lambda }-\varvec{\mu }=(\lambda _1-\mu _1,\ldots ,\lambda _n-\mu _n)^{\intercal }\in \mathbb {C}^n;\) \(\varvec{\lambda }=(\lambda _1,\ldots ,\lambda _n)^{\intercal }\in \varvec{\Lambda }\}\) is a flat set.

  2. (ii)

    Set \(\varvec{\Theta }=\{ \varvec{\theta }\in \mathbb {C}^p;\; \varvec{A} \varvec{\theta }\in \varvec{\Lambda }\}\) is flat, for an arbitrary matrix \(\varvec{A}\in \mathbb {M}_{n\times p}\).

  3. (iii)

    If \(\varvec{B}\in \mathbb {M}_{n\times n}\) is invertible, then \(\varvec{\Sigma }=\varvec{B} \varvec{\Lambda }\) is a flat set.

  4. (iv)

    The intersection of finitely many flat sets in \(\mathbb {C}^n\) is a flat set.

  5. (v)

    If \(\varvec{\Lambda }_k\subseteq \mathbb {C}^{n_k}\) \((k=1,\ldots , q)\) are flat sets, then \(\varvec{\Lambda }=\varvec{\Lambda }_1\oplus \cdots \oplus \varvec{\Lambda }_q\subseteq \mathbb {C}^n\), where \(n=n_1+\cdots +n_q\), is a flat set.

Proof

To prove (i)–(iii), assume that \(\varvec{\Lambda }\) is determined by a matrix \(\varvec{C}\in \mathbb {M}_{m\times n}\) and closed subsets \(\Lambda _j\subseteq \mathbb {C}\) \((j=1,\ldots ,m)\).

  1. (i)

    Let \(\varvec{C}\varvec{\mu }=(\mu _{1}',\ldots ,\mu _{m}')^{\intercal }\). If \(\varvec{\lambda }\in \varvec{\Lambda }\), then \(\varvec{C}(\varvec{\lambda }-\varvec{\mu })= \varvec{C}\varvec{\lambda }-\varvec{C}\varvec{\mu }\in (\Lambda _1-\mu _{1}')\times \cdots \times (\Lambda _m-\mu _{m}')\), that is, \(\varvec{\Lambda }-\varvec{\mu }\) is contained in the flat set that is determined by \(\varvec{C}\) and the sets \(\Lambda _j-\mu _{j}'\) \((j=1,\ldots , m)\). On the other hand, if \(\varvec{\nu }\) is in the flat set that is determined by \(\varvec{C}\) and the sets \(\Lambda _j-\mu _{j}'\), then \(\varvec{C}\varvec{\nu }\in (\Lambda _1-\mu _{1}')\times \cdots \times (\Lambda _m-\mu _{m}')\) and therefore \(\varvec{C}(\varvec{\nu }+\varvec{\mu })\in \Lambda _1\times \cdots \times \Lambda _m\), which means that \(\varvec{\lambda }=\varvec{\nu }+\varvec{\mu }\in \varvec{\Lambda }\). Hence, \( \varvec{\nu }= \varvec{\lambda }-\varvec{\mu }\in \varvec{\Lambda }-\varvec{\mu }\). This proves that \(\varvec{\Lambda }-\varvec{\mu }\) is a flat set.

  2. (ii)

    If \(\varvec{\theta }\in \mathbb {C}^p\) is such that \(\varvec{C}\varvec{A}\varvec{\theta }\in \Lambda _1\times \cdots \times \Lambda _m\), then \(\varvec{A}\varvec{\theta }\in \varvec{\Lambda }\), by the definition of \(\varvec{\Lambda }\), and therefore \(\varvec{\theta }\in \varvec{\Theta }\). On the other hand, if \(\varvec{\theta }\in \varvec{\Theta }\), then \(\varvec{C}\varvec{A}\varvec{\theta }\in \Lambda _1\times \cdots \times \Lambda _m\). Hence, \(\varvec{\Theta }\) is determined by the matrix \(\varvec{C}\varvec{A}\) and closed subsets \(\Lambda _j\subseteq \mathbb {C}\) \((j=1,\ldots ,m)\).

  3. (iii)

    Let \(\varvec{B}\in \mathbb {M}_{n\times n}\) be an invertible matrix. If \(\varvec{\sigma }\in \mathbb {C}^n\) is such that \(\varvec{C}\varvec{B}^{-1}\varvec{\sigma }\in \Lambda _1\times \cdots \times \Lambda _m\), then \(\varvec{B}^{-1}\varvec{\sigma }\in \varvec{\Lambda }\), by the definition of \(\varvec{\Lambda }\). It follows that \(\varvec{\sigma }\in \varvec{\Sigma }\). On the other hand, if \(\varvec{\sigma }\in \varvec{\Sigma }\), then there exists \(\varvec{\lambda }\in \varvec{\Lambda }\), such that \(\varvec{\sigma }=\varvec{B}\varvec{\lambda }\). Hence, \(\varvec{C}\varvec{B}^{-1}\varvec{\sigma }=\varvec{C}\varvec{\lambda }\in \Lambda _1\times \cdots \times \Lambda _m\). Thus, \(\varvec{\Sigma }\) is determined by the matrix \(\varvec{C}\varvec{B}^{-1}\) and closed subsets \(\Lambda _j\subseteq \mathbb {C}\) \((j=1,\ldots ,m)\). For (iv) and (v), it is enough to consider only the case of two flat sets.

  4. (iv)

    Assume that \(\varvec{\Lambda }'\) is determined by a matrix \(\varvec{C}'\in \mathbb {M}_{m'\times n}\) and closed sets \(\Lambda _{j}'\subseteq \mathbb {C}\) and \(\varvec{\Lambda }''\) is determined by \(\varvec{C}''\in \mathbb {M}_{m''\times n}\) and closed sets \(\Lambda _{j}''\subseteq \mathbb {C}\). Let \(\varvec{C}=\left[ \begin{array}{c} \varvec{C}' \\ \varvec{C}''\end{array}\right] \), that is, \(\varvec{C}\in \mathbb {M}_{(m'+m'')\times n}\). We claim that \(\varvec{\Lambda }'\cap \varvec{\Lambda }''\) is determined by \(\varvec{C}\) and \(\Lambda _1'\times \cdots \times \Lambda _{m'}'\times \Lambda _{1}''\times \cdots \times \Lambda _{m''}''\). It is clear that \(\varvec{C}\varvec{\lambda }= \left[ \begin{array}{c} \varvec{C}'\varvec{\lambda }\\ \varvec{C}''\varvec{\lambda }\end{array}\right] \in \mathbb {C}^{m'+m''}\), for all \(\varvec{\lambda }\in \mathbb {C}^n\). Assume that \(\varvec{\lambda }\in \varvec{\Lambda }'\cap \varvec{\Lambda }''\). Then, \(\varvec{C}'\varvec{\lambda }\in \Lambda _1'\times \cdots \times \Lambda _{m'}'\) and \(\varvec{C}''\varvec{\lambda }\in \Lambda _1''\times \cdots \times \Lambda _{m''}''\). Hence, \(\varvec{C}\varvec{\lambda }\in \Lambda _1'\times \cdots \times \Lambda _{m'}'\times \Lambda _1''\times \cdots \times \Lambda _{m''}''\). This shows that the intersection \(\varvec{\Lambda }'\cap \varvec{\Lambda }''\) is a subset of the flat set that is determined by \(\varvec{C}\) and \(\Lambda _1'\times \cdots \times \Lambda _{m'}'\times \Lambda _1''\times \cdots \times \Lambda _{m''}''\). On the other hand, if \(\varvec{\lambda }\) is in that flat set, then \(\varvec{C}\varvec{\lambda }\in \Lambda _1'\times \cdots \times \Lambda _{m'}'\times \Lambda _1''\times \cdots \times \Lambda _{m''}''\), which means that \(\varvec{C}'\varvec{\lambda }\in \Lambda _1'\times \cdots \times \Lambda _{m'}'\) and \(\varvec{C}''\varvec{\lambda }\in \Lambda _1''\times \cdots \times \Lambda _{m''}''\), that is, \(\varvec{\lambda } \in \varvec{\Lambda }'\cap \varvec{\Lambda }''\).

  5. (v)

    Assume that \(\varvec{\Lambda }_j\) \((j=1,2)\) is determined by a matrix \(\varvec{C}_j\in \mathbb {M}_{m_j\times n_j}\) and closed subsets \(\Lambda _{1}^{(j)},\ldots ,\Lambda _{m_j}^{(j)}\) of \(\mathbb {C}\). Let \(\varvec{C}=\varvec{C}_1\oplus \varvec{C}_2\). This is a matrix of dimension \((m_1+m_2)\times (n_1+n_2)\). We claim that \(\varvec{\Lambda }_1\oplus \varvec{\Lambda }_2\) is a flat set determined by \(\varvec{C}\) and \(\Lambda _{1}^{(1)}\times \cdots \times \Lambda _{m_1}^{(1)}\times \Lambda _{1}^{(2)}\times \cdots \times \Lambda _{m_2}^{(2)}\). It is clear that for \(\varvec{\lambda }\in \varvec{\Lambda }_1\oplus \varvec{\Lambda }_2\), there exist \(\varvec{\lambda }_1\in \varvec{\Lambda }_1\) and \(\varvec{\lambda }_2\in \varvec{\Lambda }_2\), such that \(\varvec{\lambda }=\varvec{\lambda }_1\oplus \varvec{\lambda }_2\). Hence, \(\varvec{C}\varvec{\lambda }=\varvec{C}_1\varvec{\lambda }_1\oplus \varvec{C}_2\varvec{\lambda }_2\in \Lambda _{1}^{(1)}\times \cdots \times \Lambda _{m_1}^{(1)}\times \Lambda _{1}^{(2)}\times \cdots \times \Lambda _{m_2}^{(2)}\). On the other hand, if \(\varvec{\lambda }\in \mathbb {C}^{n_1+n_2}\) is such that \(\varvec{C}\varvec{\lambda }\in \Lambda _{1}^{(1)}\times \cdots \times \Lambda _{m_1}^{(1)}\times \Lambda _{1}^{(2)}\times \cdots \times \Lambda _{m_2}^{(2)}\), let \(\varvec{\lambda }_1\in \mathbb {C}^{n_1}\) and \(\varvec{\lambda }_2\in \mathbb {C}^{n_2}\) be such that \(\varvec{\lambda }=\varvec{\lambda }_1\oplus \varvec{\lambda }_2\). It follows that \(\varvec{C}\varvec{\lambda }=\varvec{C}_1\varvec{\lambda }_1\oplus \varvec{C}_2\varvec{\lambda }_2\in \Lambda _{1}^{(1)}\times \cdots \times \Lambda _{m_1}^{(1)}\times \Lambda _{1}^{(2)}\times \cdots \times \Lambda _{m_2}^{(2)}\) and, therefore, \(\varvec{C}_j\varvec{\lambda }_j\in \Lambda _{1}^{(j)}\times \cdots \times \Lambda _{m_j}^{(j)}\), for \(j=1,2\). 
By the definition of \(\varvec{\Lambda }_j\), we have \(\varvec{\lambda }_j\in \varvec{\Lambda }_j\) \((j=1,2)\), which gives \(\varvec{\lambda }\in \varvec{\Lambda }_1\oplus \varvec{\Lambda }_2\).

\(\square \)

Example

We have already observed that subspaces of \(\mathbb {C}^n\) are flat sets; in particular, every hyperplane is a flat set. Every hyperplane separates \(\mathbb {C}^n\) into two halfspaces. Halfspaces are flat sets. Indeed, let \(\varvec{0}\ne \varvec{C}=[c_1,\ldots , c_n]\in \mathbb {M}_{1\times n}\) and let \(\Lambda =\{ z\in \mathbb {C};\; \textrm{Re}(z)\ge 0\}\). Then, the flat set determined by \(\varvec{C}\) and \(\Lambda \) is the halfspace \(\varvec{\Lambda }=\{ (\lambda _1,\ldots ,\lambda _n)^{\intercal } \in \mathbb {C}^n;\; \textrm{Re}(c_1\lambda _1+\cdots +c_n\lambda _n)\ge 0\}\).

Recall that a convex polytope in \(\mathbb {C}^n\) is the intersection of a finite family of halfspaces. By Proposition 2.4, any convex polytope in \(\mathbb {C}^n\) is a flat set. For instance, the convex hull of the standard basis \((\varvec{e}_1,\ldots , \varvec{e}_n)\) in \(\mathbb {C}^n\) is determined by the matrix \(\varvec{C}=\left[ \begin{array}{cccc} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1\\ 1 & 1 & \cdots & 1\\ \end{array}\right] \in \mathbb {M}_{(n+1)\times n}\) and the sets \(\Lambda _j=[0,1]\), for \(j=1,\ldots ,n\), and \(\Lambda _{n+1}=\{ 1\}\). \(\square \)
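The simplex of the closing example admits a direct numerical check (a sketch only; the tolerance and the test points are arbitrary choices): stack the identity on top of the all-ones row and test the \(n+1\) componentwise conditions.

```python
import numpy as np

# The convex hull of the standard basis of C^n as a flat set: C is the
# identity stacked over the all-ones row, Lambda_j = [0, 1] for the first n
# rows and Lambda_{n+1} = {1} for the last.

def in_simplex(lam, tol=1e-12):
    lam = np.asarray(lam, dtype=complex)
    n = lam.size
    C = np.vstack([np.eye(n), np.ones((1, n))])  # (n+1) x n matrix from the text
    w = C @ lam
    coords_ok = all(abs(z.imag) <= tol and -tol <= z.real <= 1 + tol
                    for z in w[:n])              # each coordinate in [0, 1]
    sum_ok = abs(w[n] - 1) <= tol                # coordinates sum to 1
    return coords_ok and sum_ok

yes = in_simplex([0.2, 0.3, 0.5])   # a convex combination of e_1, e_2, e_3
no = in_simplex([0.7, 0.7, -0.4])   # affine (sums to 1) but not convex
print(yes, no)
```

The last row enforces the affine constraint \(\lambda _1+\cdots +\lambda _n=1\); the first \(n\) rows cut the result down to the convex hull.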

2.3 Finite-dimensional sets of operators

We will say that a non-empty set \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is finite-dimensional, if \(\textrm{span}(\mathcal {M})\), the closed linear span of \(\mathcal {M}\), is a finite-dimensional subspace of \(\mathcal {B}(\mathscr {X},\mathscr {Y})\). If \(\dim \bigl (\textrm{span}(\mathcal {M})\bigr )=n\ge 1\), then we will say that \(\mathcal {M}\) is an n-dimensional set of operators. For instance, let \(M_1, \ldots , M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) be arbitrary operators and let \(\varvec{\Lambda }\subseteq \mathbb {C}^n\) be an arbitrary non-empty set. Denote by \(\varvec{M}=[M_1, \ldots , M_n]\) the \(1\times n\) operator matrix. Then, \( \varvec{\Lambda }\cdot \varvec{M}=\{ \varvec{\lambda }\cdot \varvec{M}=\lambda _1 M_1+\cdots +\lambda _n M_n;\; \varvec{\lambda }=(\lambda _1,\ldots ,\lambda _n)^{\intercal }\in \varvec{\Lambda }\}\) is a finite-dimensional set. Actually, all finite-dimensional sets of operators are of this form.

Lemma 2.5

Let \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) be a finite-dimensional set. Then, there exists \(\varvec{M}=[M_1,\ldots , M_n]\), with linearly independent operators \(M_1, \ldots , M_n\), and a non-empty set \(\varvec{\Lambda }\subseteq \mathbb {C}^n\), such that \(\mathcal {M}= \varvec{\Lambda }\cdot \varvec{M}\). The set \(\mathcal {M}\) is closed if and only if \(\varvec{\Lambda }\) is a closed set.

Proof

Let \(\mathcal {M}\ne \{ 0\}\) be an arbitrary finite-dimensional set. Suppose that \((M_1,\ldots ,M_n)\) is a basis of \(\textrm{span}(\mathcal {M})\) and denote \(\varvec{M}=[M_1, \ldots , M_n]\). For every \(T\in \mathcal {M}\), there exists a unique \(\varvec{\lambda }\in \mathbb {C}^n\), such that \(T=\varvec{\lambda }\cdot \varvec{M}\). Let \(\varvec{\Lambda }=\{ \varvec{\lambda }\in \mathbb {C}^n;\; \text {there exists}\; T\in \mathcal {M}\;\text {such that}\; T=\varvec{\lambda }\cdot \varvec{M}\}\). It is easily seen that \(\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\).

Assume that \(\varvec{\Lambda }\) is a closed set. Let \(\bigl (T_k\bigr )_{k=1}^{\infty }\subseteq \varvec{\Lambda }\cdot \varvec{M}\) be a Cauchy sequence and let \(T\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) be its limit. For each \(k\in \mathbb {N}\), there exists \(\varvec{\lambda }^{(k)}=(\lambda _{1}^{(k)},\ldots ,\lambda _{n}^{(k)})^{\intercal }\in \varvec{\Lambda }\), such that \(T_k=\lambda _{1}^{(k)}M_1+\cdots +\lambda _{n}^{(k)} M_n\). Since \(M_1,\ldots , M_n\) are linearly independent operators, for each \(1\le i\le n\), there exists, by the Hahn–Banach theorem, a functional \(\Phi _i\in \mathcal {B}(\mathscr {X},\mathscr {Y})^*\), such that \(\langle M_i,\Phi _i\rangle =\Vert M_i\Vert \) and \(\langle M_j,\Phi _i\rangle =0\) for \(j\ne i\). Let \(\varepsilon >0\) be arbitrary. Then, there exists an index \(k_\varepsilon \), such that \(\Vert T_k-T_l\Vert <\varepsilon \) for all \(k, l\ge k_\varepsilon \). It follows that:

$$\begin{aligned} |\lambda _{i}^{(k)}-\lambda _{i}^{(l)}| \Vert M_i\Vert =| \langle (\lambda _{1}^{(k)}-\lambda _{1}^{(l)})M_1+\cdots +(\lambda _{n}^{(k)}-\lambda _{n}^{(l)})M_n,\Phi _i\rangle |\le \Vert \Phi _i\Vert \Vert T_k-T_l\Vert <\Vert \Phi _i\Vert \varepsilon , \end{aligned}$$

for all i. Hence, \(\bigl ( \varvec{\lambda }^{(k)}\bigr )_{k=1}^{\infty }\) is a Cauchy sequence in \(\varvec{\Lambda }\). Let \(\varvec{\lambda }=(\lambda _1,\ldots , \lambda _n)^{\intercal }\) be its limit; since \(\varvec{\Lambda }\) is closed, \(\varvec{\lambda }\in \varvec{\Lambda }\). It follows that \(\lim \nolimits _{k\rightarrow \infty }\Vert \varvec{\lambda }^{(k)}\cdot \varvec{M}-\varvec{\lambda }\cdot \varvec{M}\Vert =0\). Hence, for every \(\varepsilon >0\), there exists an index \(k_\varepsilon \), such that \(\Vert T-\varvec{\lambda }\cdot \varvec{M}\Vert \le \Vert T-T_k\Vert +\Vert \varvec{\lambda }^{(k)}\cdot \varvec{M}-\varvec{\lambda }\cdot \varvec{M}\Vert <\varepsilon ,\) for all \(k\ge k_\varepsilon \). We may conclude that \(T=\varvec{\lambda }\cdot \varvec{M}\in \varvec{\Lambda }\cdot \varvec{M}\).

Suppose now that \(\varvec{\Lambda }\) is a non-empty subset of \(\mathbb {C}^n\), such that \(\varvec{\Lambda }\cdot \varvec{M}\) is a closed set of operators. Let \(\bigl (\varvec{\lambda }^{(k)}\bigr )_{k=1}^{\infty }\subseteq \varvec{\Lambda }\), \(\varvec{\lambda }^{(k)}=(\lambda _{1}^{(k)},\ldots ,\lambda _{n}^{(k)})^{\intercal }\), be a Cauchy sequence and let \(\varvec{\lambda }=(\lambda _{1},\ldots ,\lambda _{n})^{\intercal }\in \mathbb {C}^n\) be its limit. Then, \(\lim \nolimits _{k\rightarrow \infty } \lambda _{j}^{(k)}=\lambda _j\) for every \(1\le j\le n\). Denote \(T_{k}=\varvec{\lambda }^{(k)}\cdot \varvec{M}\) and \(T= \varvec{\lambda }\cdot \varvec{M}\). Note that \(T_{k}\in \varvec{\Lambda }\cdot \varvec{M}\). It follows that:

$$\begin{aligned} \begin{aligned} \Vert T_{k}-T\Vert&=\Vert (\lambda _{1}^{(k)}-\lambda _1)M_1+\cdots +(\lambda _{n}^{(k)}-\lambda _n)M_n\Vert \\&\le \bigl (|\lambda _{1}^{(k)}-\lambda _1|+\cdots +|\lambda _{n}^{(k)}-\lambda _n|\bigr ) \max \{ \Vert M_j\Vert ;\; 1\le j\le n\}\xrightarrow {k\rightarrow \infty } 0. \end{aligned} \end{aligned}$$

Since \( \varvec{\Lambda }\cdot \varvec{M}\) is closed, it follows that \(T \in \varvec{\Lambda }\cdot \varvec{M}\), that is, there exists \(\varvec{\mu }=(\mu _1,\ldots ,\mu _n)^{\intercal }\in \varvec{\Lambda }\), such that \(T=\varvec{\mu }\cdot \varvec{M}\). Then, \(\lambda _1 M_1+\cdots +\lambda _n M_n= \mu _1 M_1+\cdots +\mu _n M_n\) and, therefore, \(\varvec{\lambda }=\varvec{\mu }\in \varvec{\Lambda }\), since \(M_1,\ldots , M_n\) are linearly independent. Hence, \(\varvec{\Lambda }\) is a closed set. \(\square \)
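The coordinate map underlying the proof of Lemma 2.5 can be sketched numerically (an illustration with arbitrarily chosen \(2\times 2\) matrices, not from the paper): once a basis \(M_1,\ldots ,M_n\) of \(\textrm{span}(\mathcal {M})\) is fixed, every \(T\in \mathcal {M}\) has unique coordinates \(\varvec{\lambda }\) with \(T=\varvec{\lambda }\cdot \varvec{M}\), recoverable here by least squares on the vectorized basis.

```python
import numpy as np

# Recover the unique coordinates lam with T = lam_1 M_1 + ... + lam_n M_n
# by solving a least-squares problem on the vectorized basis matrices
# (exact when the M_i are linearly independent and T is in their span).

def coordinates(T, basis):
    """Solve T = sum_i lam_i * M_i for lam."""
    A = np.column_stack([M.ravel() for M in basis])
    lam, _, _, _ = np.linalg.lstsq(A, T.ravel(), rcond=None)
    return lam

M1 = np.array([[1.0, 0.0], [0.0, 0.0]])
M2 = np.array([[0.0, 1.0], [1.0, 0.0]])
T = 2.0 * M1 - 3.0 * M2

lam = coordinates(T, [M1, M2])
print(lam)  # recovers the coefficients (2, -3)
```

Collecting these coordinate vectors over all \(T\in \mathcal {M}\) produces exactly the set \(\varvec{\Lambda }\) constructed in the proof.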

Now, we introduce flat sets of operators. A non-empty finite-dimensional set of operators \(\mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is flat if there exist \(\varvec{M}=[M_1, \ldots , M_n]\), where \(M_1,\ldots , M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\), and a flat set \(\varvec{\Lambda }\subseteq \mathbb {C}^n\) which is determined by a matrix \(\varvec{C}\in \mathbb {M}_{m\times n}\) and closed sets \(\Lambda _j\subseteq \mathbb {C}\) \((j=1,\ldots ,m)\), such that \(\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\). Note that this definition does not assume that \(M_1,\ldots , M_n\) are linearly independent. We will say that a flat set \(\mathcal {M}\ne \{ 0\}\) is regular if there exist a flat set \(\varvec{\Lambda }\) and linearly independent operators \(M_1,\ldots , M_n\), such that \(\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\). In the following lemma, we give an equivalent condition for the regularity of a flat set of operators.

Lemma 2.6

A finite-dimensional set \(\{ 0\}\ne \mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is a regular flat set if and only if there exists a flat set \(\varvec{\Gamma }\subseteq \mathbb {C}^k\), which is determined by a matrix \(\varvec{D}\in \mathbb {M}_{m\times k}\) and closed sets \(\Gamma _j\subseteq \mathbb {C}\) \((j=1, \ldots , m)\), and operators \(N_1,\ldots ,N_k\in \mathcal {B}(\mathscr {X},\mathscr {Y})\), such that \(\mathcal {M}=\varvec{\Gamma }\cdot \varvec{N}\), where \(\varvec{N}=[N_1,\ldots , N_k]\), and

$$\begin{aligned} \varvec{\gamma }\in \mathscr {N}(\varvec{D})\quad \text {whenever}\quad \varvec{\gamma }\cdot \varvec{N}=\varvec{0}. \end{aligned}$$
(2.2)

Proof

If \(\mathcal {M}\) is a regular flat set of operators, then \(\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\), where \(\varvec{\Lambda }\subseteq \mathbb {C}^n\) is a flat set and \(\varvec{M}=[M_1,\ldots ,M_n]\) with \(M_1,\ldots , M_n\) linearly independent. Hence, \(\varvec{\lambda }\cdot \varvec{M}=\varvec{0}\) implies \(\varvec{\lambda }=\varvec{0}\in \mathscr {N}(\varvec{C})\), where \(\varvec{C}\) is the matrix which determines \(\varvec{\Lambda }\), so that (2.2) holds with \(\varvec{N}=\varvec{M}\) and \(\varvec{D}=\varvec{C}\).

Assume now that \(\mathcal {M}=\varvec{\Gamma }\cdot \varvec{N}\), where \(\varvec{N}=[N_1,\ldots ,N_k]\) with \(N_1,\ldots ,N_k\in \mathcal {B}(\mathscr {X},\mathscr {Y})\), and \(\varvec{\Gamma }\subseteq \mathbb {C}^k\) is a flat set determined by a matrix \(\varvec{D}\in \mathbb {M}_{m\times k}\) and closed sets \(\Gamma _j\subseteq \mathbb {C}\) \((j=1,\ldots ,m)\), such that (2.2) is fulfilled. By Lemma 2.5, there exist \(\varvec{M}=[M_1,\ldots , M_n]\), with linearly independent operators \(M_1, \ldots , M_n\), and a non-empty set \(\varvec{\Lambda }\subseteq \mathbb {C}^n\), such that \(\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\). Of course, \(1\le n\le k\). Since \(\varvec{\Gamma }\cdot \varvec{N}=\mathcal {M}=\varvec{\Lambda }\cdot \varvec{M}\) we have \(\textrm{span}\{ N_1,\ldots , N_k\}= \textrm{span}\{ M_1,\ldots ,M_n\}\). Hence, if \(\varvec{\gamma }\in \mathbb {C}^k\), then there exists a unique \(\varvec{\lambda }_{\varvec{\gamma }}\in \mathbb {C}^n\), such that \(\varvec{\gamma }\cdot \varvec{N}=\varvec{\lambda }_{\varvec{\gamma }}\cdot \varvec{M}\). It is not hard to see that \(\varvec{\gamma }\mapsto \varvec{\lambda }_{\varvec{\gamma }}\) is a linear map from \(\mathbb {C}^k\) onto \(\mathbb {C}^n\). Hence, there is a matrix \(\varvec{B}\in \mathbb {M}_{n\times k}\), such that \(\varvec{\gamma }\cdot \varvec{N}=\varvec{B}\varvec{\gamma }\cdot \varvec{M}\). Matrix \(\varvec{B}\) is surjective. Indeed, if \(\varvec{\lambda }\in \mathbb {C}^n\), then \(\varvec{\lambda }\cdot \varvec{M}\in \textrm{span}\{ M_1,\ldots ,M_n\}\) and, therefore, there exists \(\varvec{\gamma }\in \mathbb {C}^k\), such that \(\varvec{\gamma }\cdot \varvec{N}=\varvec{\lambda }\cdot \varvec{M}\) which gives \(\varvec{B}\varvec{\gamma }=\varvec{\lambda }\). If \(\varvec{\gamma }\in \varvec{\Gamma }\), then \(\varvec{B}\varvec{\gamma }\in \varvec{\Lambda }\), by the definition of \(\varvec{\Lambda }\) (see the proof of Lemma 2.5). 
Hence, \(\varvec{B}(\varvec{\Gamma })=\varvec{\Lambda }\).

Assume that \(\varvec{\gamma }\in \mathscr {N}(\varvec{B})\). Then, \(\varvec{\gamma }\cdot \varvec{N}=\varvec{0}\cdot \varvec{M}=\varvec{0}\cdot \varvec{N}\) and, therefore, \(\varvec{D}\varvec{\gamma }=\varvec{D} \varvec{0}=\varvec{0}\), by (2.2). Hence, \(\mathscr {N}(\varvec{B})\subseteq \mathscr {N}(\varvec{D})\). It follows that there exists a matrix \(\varvec{C}\in \mathbb {M}_{m\times n}\), such that \(\varvec{C} \varvec{B}=\varvec{D}\). We claim that \(\varvec{\Lambda }\) is a flat set determined by \(\varvec{C}\) and sets \(\Lambda _j=\Gamma _j\) \((j=1,\ldots ,m)\). Suppose that \(\varvec{\lambda }\in \varvec{\Lambda }\). Then, there exists \(\varvec{\gamma }\in \varvec{\Gamma }\), such that \(\varvec{\lambda }= \varvec{B}\varvec{\gamma }\). Hence, \(\varvec{C}\varvec{\lambda }=\varvec{C}\varvec{B}\varvec{\gamma }=\varvec{D}\varvec{\gamma }\in \Lambda _1\times \cdots \times \Lambda _m\). On the other hand, if \(\varvec{\lambda }\in \mathbb {C}^n\) is such that \(\varvec{C}\varvec{\lambda }\in \Lambda _1\times \cdots \times \Lambda _m\), then there exists \(\varvec{\gamma }\in \mathbb {C}^k\), such that \(\varvec{B}\varvec{\gamma }=\varvec{\lambda }\). Since \(\varvec{D}\varvec{\gamma }=\varvec{C}\varvec{B}\varvec{\gamma }=\varvec{C}\varvec{\lambda }\in \Lambda _1\times \cdots \times \Lambda _m\), we see that \(\varvec{\gamma }\in \varvec{\Gamma }\). This gives that \(\varvec{\lambda }=\varvec{B}\varvec{\gamma }\in \varvec{\Lambda }\). \(\square \)

2.4 Separating vectors and locally linearly dependent operators

Let \(\{ 0\} \ne \mathcal {S}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) be a subspace. A vector \(x\in \mathscr {X}\) is separating for \(\mathcal {S}\) if \(\theta _x:S\mapsto Sx\) is an injective mapping from \(\mathcal {S}\) to \(\mathscr {Y}\) (see [6]). If \(\{ 0\}\ne \mathcal {M}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) is a finite-dimensional set of operators, then x is a separating vector for \(\mathcal {M}\) if it is a separating vector for \(\textrm{span}(\mathcal {M})\).

Lemma 2.7

A vector \(x\in \mathscr {X}\) is separating for a finite-dimensional space \(\{ 0\} \ne \mathcal {S}\subseteq \mathcal {B}(\mathscr {X},\mathscr {Y})\) if and only if \(\dim (\mathcal {S} x)=\dim (\mathcal {S})\). In particular, x is separating for linearly independent operators \(M_1,\ldots , M_n\) if and only if \(M_1 x, \ldots , M_n x\) are linearly independent.

Proof

Let \(\dim (\mathcal {S})=k\) and let \((S_1,\ldots , S_k)\) be a basis of \(\mathcal {S}\). Assume that \(x\in \mathscr {X}\) is a separating vector for \(\mathcal {S}\). It is clear that \(\dim (\mathcal {S} x)\le k\). Let \(\alpha _1,\ldots , \alpha _k\in \mathbb {C}\) be such that \(\alpha _1 S_1 x+\cdots +\alpha _k S_k x=0\). Since \(\theta _x(\alpha _1 S_1+\cdots +\alpha _k S_k)= \alpha _1 S_1 x+\cdots +\alpha _k S_k x\) and \(\theta _x\) is injective, we have \(\alpha _1 S_1+\cdots +\alpha _k S_k=0\) which gives \(\alpha _1=\cdots =\alpha _k=0\). Thus, \(S_1 x,\ldots , S_k x\) are linearly independent and, therefore, \(\dim (\mathcal {S} x)=k\).

Suppose now that \(x\in \mathscr {X}\) is such that \(\dim (\mathcal {S} x)=k\). Since \(\mathcal {S} x=\textrm{span}\{ S_1 x,\ldots , S_k x\}\), we see that \(S_1 x, \ldots , S_k x\) are linearly independent vectors. Let \(\alpha _1,\ldots , \alpha _k\in \mathbb {C}\) be such that \( \theta _x(\alpha _1 S_1+\cdots +\alpha _k S_k)=0\). Then, \(\alpha _1 S_1 x+\cdots +\alpha _k S_k x=0\) and, therefore, \(\alpha _1=\cdots =\alpha _k=0\). Hence, \(\theta _x\) is an injective mapping.

Let \(M_1,\ldots , M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) be linearly independent and let \(\mathcal {S}\) be the linear span of these operators. Since \((M_1, \ldots , M_n)\) is a basis of \(\mathcal {S}\), a vector x is separating for \(\mathcal {S}\) if and only if \(M_1 x, \ldots , M_n x\) are linearly independent. \(\square \)
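In the finite-dimensional case, the criterion of Lemma 2.7 can be tested numerically: a vector separates the span of linearly independent operators exactly when their images at that vector are linearly independent. A minimal numpy sketch with two hypothetical \(2\times 2\) matrices (the operators passed in are assumed linearly independent, so that \(\dim (\mathcal {S})\) equals their number):

```python
import numpy as np

# Hypothetical operators on C^2: M1 = E_11 and M2 = E_12 + E_21.
M1 = np.array([[1.0, 0.0], [0.0, 0.0]])
M2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def is_separating(x, ops):
    # x separates span(ops) iff dim(span(ops) x) = len(ops), i.e. iff the
    # images ops[0] x, ..., ops[-1] x are linearly independent (Lemma 2.7)
    images = np.column_stack([M @ x for M in ops])
    return np.linalg.matrix_rank(images) == len(ops)

print(is_separating(np.array([1.0, 0.0]), [M1, M2]))  # True:  images e1, e2
print(is_separating(np.array([0.0, 1.0]), [M1, M2]))  # False: M1 x = 0
```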

Let \(M_1, \ldots , M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\). Denote \(\varvec{M}=[M_1,\ldots , M_n]\) and \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ M_1,\ldots , M_n\}\). It is said that \(M_1, \ldots , M_n\) are locally linearly dependent (briefly, LLD) if there is no separating vector for \(\mathcal {S}_{\varvec{M}}\), that is, if the vectors \(M_1 x, \ldots , M_n x\) are linearly dependent, for every \(x\in \mathscr {X}\). Of course, if \(M_1, \ldots , M_n\) are linearly dependent, then they are locally linearly dependent. The converse does not hold, in general. Aupetit [1, Theorem 4.2.9] proved that \(\mathcal {S}_{\varvec{M}}\) contains a non-zero operator whose rank is at most \(n-1\) if \(M_1,\ldots ,M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) are LLD. We will need the following corollary of Aupetit's result, which is a special case of [4, Theorem 2.3]. However, we will include an elementary proof that relies on Aupetit's theorem.

Corollary 2.8

Linearly independent operators \(M_1, M_2\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) are LLD if and only if there exist \(0\ne f\in \mathscr {Y}\) and linearly independent functionals \(\xi _1, \xi _2\in \mathscr {X}^*\), such that \(M_1=f\otimes \xi _1\) and \(M_2=f\otimes \xi _2\).

Proof

It is obvious that \(M_1=f\otimes \xi _1\) and \(M_2=f\otimes \xi _2\) are LLD. Hence, we have to prove the opposite implication. Denote \(\varvec{M}=[M_1,M_2]\) and \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ M_1,M_2\}\). Assume that \(M_1\) and \(M_2\) are LLD. It follows that \(\dim (\mathcal {S}_{\varvec{M}} x)\le 1\), for all \(x\in \mathscr {X}\), which means that any two operators in \(\mathcal {S}_{\varvec{M}}\) are LLD. Since \(M_1\) and \(M_2\) are linearly independent, by [1, Theorem 4.2.9], there exists a rank-one operator A in \(\mathcal {S}_{\varvec{M}}\). Hence, there exist \(0\ne f\in \mathscr {Y}\) and \(0\ne \xi \in \mathscr {X}^*\), such that \(A=f\otimes \xi \). Let \(u\in \mathscr {X}\) be such that \(\langle u,\xi \rangle =1\) and let \(0\ne B\in \mathcal {S}_{\varvec{M}}\) be arbitrary. Since Au and Bu are linearly dependent and \(Au=f\ne 0\), there exists a scalar \(\kappa (u)\), such that \(Bu=\kappa (u)Au=\kappa (u)f\). Let \(y\in \mathscr {N}(A)\) be arbitrary. Then, \(A(u+y)=Au=f\ne 0\). Since \(B(u+y)\) and \( A(u+y)\) are linearly dependent, there exists \(\kappa (u+y)\in \mathbb {C}\), such that \(B(u+y)=\kappa (u+y)A(u+y)=\kappa (u+y)f\). On the other hand, \(B(u+y)=Bu+By=\kappa (u)f+By\). Hence, \(By=\bigl ( \kappa (u+y)-\kappa (u)\bigr ) f\). An arbitrary vector \(x\in \mathscr {X}\) can be written as \(x=\alpha u+y\), where \(\alpha \in \mathbb {C}\) and \(y\in \mathscr {N}(A)=\mathscr {N}(\xi )\). It follows that \(Bx=\alpha Bu+By=\bigl ((\alpha -1) \kappa (u)+\kappa (u+y)\bigr )f\). This shows that B is a rank-one operator with the range spanned by f, that is, \(B=f\otimes \eta \) for some \(\eta \in \mathscr {X}^*\). In particular, \(M_1=f\otimes \xi _1\) and \(M_2=f\otimes \xi _2\) for some \(\xi _1, \xi _2\in \mathscr {X}^*\). Since \(M_1\) and \(M_2\) are linearly independent, the functionals \(\xi _1\) and \(\xi _2\) are linearly independent, as well. \(\square \)
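The easy direction of Corollary 2.8 can be illustrated numerically for matrices: two rank-one operators with a common range vector \(f\) send every vector into the line spanned by \(f\), so their images are always dependent. The data below are hypothetical, chosen only for the sketch:

```python
import numpy as np

f = np.array([1.0, 2.0, 0.0])      # common range vector in C^3
xi1 = np.array([1.0, 0.0])         # linearly independent functionals on C^2
xi2 = np.array([0.0, 1.0])
M1 = np.outer(f, xi1)              # f ⊗ ξ1 as a 3×2 matrix
M2 = np.outer(f, xi2)              # f ⊗ ξ2

# M1, M2 are linearly independent as operators ...
assert np.linalg.matrix_rank(np.column_stack([M1.ravel(), M2.ravel()])) == 2
# ... yet LLD: M1 x and M2 x are both multiples of f, for every x
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert np.linalg.matrix_rank(np.column_stack([M1 @ x, M2 @ x])) <= 1
```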

3 Reflexivity of finite-dimensional sets of operators with high rank

For operators \(M_1, \ldots , M_n\in \mathcal {B}(\mathscr {X},\mathscr {Y})\), let \(\varvec{M}=[M_1, \ldots , M_n]\) and \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ M_1,\ldots ,M_n\}\).

Theorem 3.1

Assume that \(M_1, \ldots , M_n\) are linearly independent and that \(\mathcal {S}_{\varvec{M}}\) has a separating vector. Let \(\varvec{\Lambda }_1\), \(\varvec{\Lambda }_2\) be non-empty closed subsets of \(\mathbb {C}^n\), such that \(\varvec{\Lambda }_1\subseteq \varvec{\Lambda }_2\). If \(\varvec{\Lambda }_{2}\cdot \varvec{M}\) is a reflexive set, then \(\varvec{\Lambda }_{1}\cdot \varvec{M}\) is a reflexive set, as well.

Proof

Since \(\varvec{\Lambda }_1\subseteq \varvec{\Lambda }_2\), we have \(\varvec{\Lambda }_{1}\cdot \varvec{M} \subseteq \varvec{\Lambda }_{2}\cdot \varvec{M}\) which implies \(\textrm{Ref}(\varvec{\Lambda }_{1}\cdot \varvec{M})\subseteq \textrm{Ref}(\varvec{\Lambda }_{2}\cdot \varvec{M})= \varvec{\Lambda }_{2}\cdot \varvec{M}\). Hence, if \(T\in \textrm{Ref}(\varvec{\Lambda }_{1}\cdot \varvec{M})\), then \(T\in \varvec{\Lambda }_{2}\cdot \varvec{M}\), which means that there exists \(\varvec{\lambda }=(\lambda _1, \ldots , \lambda _n)^{\intercal } \in \varvec{\Lambda }_2\), such that \(T=\varvec{\lambda }\cdot \varvec{M}=\lambda _1 M_1+\cdots +\lambda _n M_n\). Let \(x\in \mathscr {X}\) be a separating vector for \(\mathcal {S}_{\varvec{M}}\). Hence, vectors \(M_1 x, \ldots , M_n x\in \mathscr {Y}\) are linearly independent. It follows from \(T\in \textrm{Ref}(\varvec{\Lambda }_{1}\cdot \varvec{M})\) that for every \(\varepsilon >0\), there exists \(\varvec{\lambda }^{(\varepsilon )}=(\lambda _{1}^{(\varepsilon )}, \ldots , \lambda _{n}^{(\varepsilon )})^{\intercal } \in \varvec{\Lambda }_1\), such that \(\Vert Tx-(\lambda _{1}^{(\varepsilon )}M_1 x+\cdots +\lambda _{n}^{(\varepsilon )}M_n x)\Vert <\varepsilon \), that is, \(\Vert (\lambda _1-\lambda _{1}^{(\varepsilon )}) M_1 x+\cdots +(\lambda _n-\lambda _{n}^{(\varepsilon )}) M_n x\Vert <\varepsilon .\) Since vectors \(M_1 x, \ldots , M_n x\) are linearly independent, for every \(j\in \{ 1, \ldots , n\}\), the distance \(d_j=\textrm{dist}\bigl ( M_j x,\textrm{span}\{ M_i x;\; i\ne j\}\bigr )\) is positive and, by the Hahn-Banach theorem, there exists \(\eta _j \in \mathscr {Y}^*\), such that \(\Vert \eta _j\Vert =1\), \(\langle M_j x,\eta _j\rangle =d_j\) and \(\langle M_i x,\eta _j\rangle =0\) if \(i\ne j\).
It follows that \(| \lambda _j-\lambda _{j}^{(\varepsilon )}| d_j = | \langle (\lambda _1-\lambda _{1}^{(\varepsilon )}) M_1 x+\cdots +(\lambda _n-\lambda _{n}^{(\varepsilon )}) M_n x,\eta _j\rangle |\le \Vert (\lambda _1-\lambda _{1}^{(\varepsilon )}) M_1 x+\cdots +(\lambda _n-\lambda _{n}^{(\varepsilon )}) M_n x\Vert <\varepsilon .\) We may conclude that \(\varvec{\lambda }^{(\varepsilon )} \rightarrow \varvec{\lambda }\) as \(\varepsilon \rightarrow 0\). Since \(\varvec{\Lambda }_1\) is a closed set, we have \(\varvec{\lambda }\in \varvec{\Lambda }_1\), which gives \(T\in \varvec{\Lambda }_{1}\cdot \varvec{M}\). \(\square \)
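In the Euclidean case \(\mathscr {Y}=\mathbb {C}^q\), functionals \(\eta _j\) that annihilate the images \(M_i x\) \((i\ne j)\) can be produced explicitly from the Moore-Penrose pseudo-inverse. A minimal numpy sketch with hypothetical image vectors \(M_1x\), \(M_2x\) (not taken from the paper):

```python
import numpy as np

# columns are the (assumed linearly independent) images M_1 x, M_2 x in C^3
V = np.column_stack([[1.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
eta = np.linalg.pinv(V)            # row j pairs to 1 with M_j x, to 0 with the rest
assert np.allclose(eta @ V, np.eye(2))
# normalizing a row keeps the annihilation property and gives a norm-one
# functional that is still bounded below on M_j x, as in the proof
eta1 = eta[0] / np.linalg.norm(eta[0])
print(abs(eta1 @ V[:, 1]) < 1e-12, eta1 @ V[:, 0] > 0)   # -> True True
```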

Larson [6, Lemma 2.4] showed that \(\mathcal {S}_{\varvec{M}}\) is a reflexive space if there is no non-zero finite-rank operator in \(\mathcal {S}_{\varvec{M}}\). Li and Pan [7, Theorem 2] improved this by showing that \(\mathcal {S}_{\varvec{M}}\) is reflexive if every non-zero operator in \(\mathcal {S}_{\varvec{M}}\) has rank greater than or equal to \(2n-1\). Finally, Meshulam and Šemrl proved that \(\mathcal {S}_{\varvec{M}}\) is reflexive if every non-zero operator in \(\mathcal {S}_{\varvec{M}}\) has rank larger than n. This assertion is stated as a slightly more general theorem in the abstract of [10]; the proof follows from several results stated in that paper. Using the Meshulam-Šemrl result, we can deduce the following corollary from Theorem 3.1.

Corollary 3.2

If every non-zero operator in \(\mathcal {S}_{\varvec{M}}\) has rank larger than n, then \(\varvec{\Lambda }\cdot \varvec{M}\) is a reflexive set, for every non-empty closed set \(\varvec{\Lambda }\subseteq \mathbb {C}^n\).

Proof

By Meshulam and Šemrl's theorem [10], \(\mathcal {S}_{\varvec{M}}\) is a reflexive space. Moreover, since \(\mathcal {S}_{\varvec{M}}\) contains no non-zero operator of rank at most \(n-1\), the operators \(M_1, \ldots , M_n\) are not LLD, by Aupetit's theorem [1, Theorem 4.2.9], that is, \(\mathcal {S}_{\varvec{M}}\) has a separating vector. Hence, by Theorem 3.1, \(\varvec{\Lambda }\cdot \varvec{M}\) is reflexive. \(\square \)

4 Finite-dimensional sets determined by rank-one operators

In this section, we will consider finite-dimensional sets of operators which are determined by rank-one operators. There is no loss of generality if we work with matrices. Let \(p, q\in \mathbb {N}\) and let \(\mathscr {X}=\mathbb {C}^p\), \(\mathscr {Y}=\mathbb {C}^q\). Since all norms on a finite-dimensional vector space are equivalent, we will assume that these are Euclidean spaces. However, we will identify the dual space of \(\mathbb {C}^p\) with \(\mathbb {C}^p\) itself through the bilinear form \(\langle \varvec{x},\varvec{y}\rangle =\varvec{y}^{\intercal } \varvec{x}\), for \(\varvec{x},\varvec{y}\in \mathbb {C}^p\).

If \(\varvec{x}\in \mathbb {C}^p\) and \(\varvec{u}\in \mathbb {C}^q\), then \(\varvec{u} \varvec{x}^{\intercal }\) is a rank-one matrix in \(\mathbb {M}_{q\times p}\). Denote by \((\varvec{e}_1,\ldots ,\varvec{e}_p)\), respectively, by \((\varvec{f}_1,\ldots ,\varvec{f}_q)\), the standard basis of \(\mathbb {C}^p\), respectively, of \(\mathbb {C}^q\). For \(i\in \{ 1,\ldots , q\}\) and \(j\in \{ 1,\ldots , p\}\), let \(\varvec{E}_{ij}=\varvec{f}_i \varvec{e}_{j}^{{\intercal }}\), that is, \(\varvec{E}_{ij}\) is a \(q\times p\) matrix whose entries are 0 except the entry at the position \((i,j)\), which is 1. Throughout this section, let \(\varvec{M}\) denote the \(1\times pq\) operator matrix \([\varvec{E}_{11},\ldots ,\varvec{E}_{1p},\varvec{E}_{21},\ldots ,\varvec{E}_{qp}]\). In what follows, we will always use this lexicographic order. Let \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\). For a non-empty set \(\mathcal {E}\subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\), we say that \(\textrm{span}(\mathcal {E})\) is a standard subspace of \(\mathcal {S}_{\varvec{M}}\).
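In coordinates, the matrices \(\varvec{E}_{ij}=\varvec{f}_i\varvec{e}_j^{\intercal }\) and the action \(\varvec{E}_{ij}\varvec{x}=x_j\varvec{f}_i\) can be sketched as follows (0-based indices and the sizes \(q=2\), \(p=3\) are hypothetical choices for the illustration):

```python
import numpy as np

p, q = 3, 2
f = np.eye(q)                      # standard basis of C^q (rows)
e = np.eye(p)                      # standard basis of C^p (rows)
# E[(i, j)] = f_i e_j^T: all zeros except a 1 at position (i, j)
E = {(i, j): np.outer(f[i], e[j]) for i in range(q) for j in range(p)}

x = np.array([5.0, 7.0, 11.0])
print(E[(1, 2)] @ x)               # E_ij x = x_j f_i  ->  [ 0. 11.]
```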

Proposition 4.1

For every \(\varvec{\Lambda }=\Lambda _{11}\times \cdots \times \Lambda _{qp}\), where each \(\Lambda _{ij}\) is a non-empty closed subset of \(\mathbb {C}\), the finite-dimensional set \(\varvec{\Lambda }\cdot \varvec{M}\) is reflexive.

Proof

Let \(\varvec{T}\in \textrm{Ref}(\varvec{\Lambda }\cdot \varvec{M})\). Of course, there exists \(\varvec{\alpha }=(\alpha _{11},\alpha _{12},\ldots , \alpha _{qp})^{\intercal }\in \mathbb {C}^{qp}\), such that \(\varvec{T}=\sum _{i=1}^{q}\sum _{j=1}^{p}\alpha _{ij} \varvec{E}_{ij}\). For every vector \(\varvec{e}_k\) from the standard basis, we have

$$\begin{aligned} \varvec{T} \varvec{e}_k=\sum _{i=1}^{q}\sum _{j=1}^{p}\alpha _{ij}(\varvec{f}_i \varvec{e}_{j}^{{\intercal }})\varvec{e}_k= \sum _{i=1}^{q}\alpha _{ik} \varvec{f}_i. \end{aligned}$$
(4.1)

On the other hand, since \(\varvec{T} \varvec{e}_k\in \overline{(\varvec{\Lambda } \cdot \varvec{M}) \varvec{e}_k}\), for every \(\varepsilon >0\), there exists \(\varvec{\lambda }^{(\varvec{e}_k)}=(\lambda _{11}^{(\varvec{e}_k)},\ldots , \lambda _{qp}^{(\varvec{e}_k)})^{\intercal } \in \varvec{\Lambda }\), which depends on \(\varepsilon \), such that \(\Vert (\varvec{T}- \varvec{\lambda }^{(\varvec{e}_k)}\cdot \varvec{M}) \varvec{e}_k\Vert <\varepsilon \). Combining this inequality with (4.1) and

$$\begin{aligned} (\varvec{\lambda }^{(\varvec{e}_k)}\cdot \varvec{M}) \varvec{e}_k= \sum _{i=1}^{q}\sum _{j=1}^{p}\lambda _{ij}^{(\varvec{e}_k)}(\varvec{f}_i \varvec{e}_{j}^{{\intercal }})\varvec{e}_k= \sum _{i=1}^{q}\lambda _{ik}^{(\varvec{e}_k)} \varvec{f}_i, \end{aligned}$$

we obtain \( \left\| \sum _{i=1}^{q}(\alpha _{ik}-\lambda _{ik}^{(\varvec{e}_k)}) \varvec{f}_i\right\| <\varepsilon \). Let \(l\in \{1,\ldots ,q\}\) be arbitrary. Then

$$\begin{aligned} \left| \alpha _{lk}-\lambda _{lk}^{(\varvec{e}_k)}\right| = \left| \varvec{f}_{l}^{{\intercal }}\left( \sum _{i=1}^{q}\left( \alpha _{ik}-\lambda _{ik}^{(\varvec{e}_k)}\right) \varvec{f}_i\right) \right| \le \left\| \sum _{i=1}^{q}\left( \alpha _{ik}-\lambda _{ik}^{(\varvec{e}_k)}\right) \varvec{f}_i\right\| <\varepsilon . \end{aligned}$$

The numbers \(\lambda _{lk}^{(\varvec{e}_k)}\), which depend on \(\varepsilon \), are in \(\Lambda _{lk}\), which is a closed set. Hence, \(\alpha _{lk}\in \Lambda _{lk}\). Since this holds for all \(1\le k\le p\) and \(1\le l\le q\), we have \(\varvec{\alpha }\in \varvec{\Lambda }\) and, therefore, \(\varvec{T}\in \varvec{\Lambda }\cdot \varvec{M}\). \(\square \)

The following is an immediate consequence of Proposition 4.1.

Corollary 4.2

Every standard subspace of \(\mathcal {S}_{\varvec{M}}\) is reflexive.

Proof

Let \(\mathcal {E} \subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) be a non-empty set. Define \(\varvec{\Lambda }=\Lambda _{11}\times \cdots \times \Lambda _{qp}\) as follows. If \(\varvec{E}_{ij}\in \mathcal {E}\), then let \(\Lambda _{ij}=\mathbb {C}\), and let \(\Lambda _{ij}=\{ 0\}\) if \(\varvec{E}_{ij}\not \in \mathcal {E}\). It is clear that \(\textrm{span}(\mathcal {E})=\varvec{\Lambda }\cdot \varvec{M}\). \(\square \)

For some standard subspaces \(\textrm{span}(\mathcal {E})\) of \(\mathcal {S}_{\varvec{M}}\), we can show that every flat subset \(\varvec{\Lambda }\cdot \varvec{M}\) of \(\textrm{span}(\mathcal {E})\) is reflexive. A subset \(\mathcal {R}\) of \(\{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) is a row if there exists \(i_0\), such that \(\mathcal {R}\subseteq \{ \varvec{E}_{i_0 1},\ldots ,\varvec{E}_{i_0 p}\}\). Similarly, a subset \(\mathcal {Q}\) of \(\{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) is a column if there exists \(j_0\), such that \(\mathcal {Q}\subseteq \{ \varvec{E}_{1 j_0},\ldots ,\varvec{E}_{q j_0}\}\). Of course, when we work with a row or a column, there is no loss of generality if we assume that \(\mathcal {R}= \{ \varvec{E}_{i_0 1},\ldots ,\varvec{E}_{i_0 p}\}\) or \(\mathcal {Q}= \{ \varvec{E}_{1 j_0},\ldots ,\varvec{E}_{q j_0}\}\).

Proposition 4.3

Let \(\mathcal {R}= \{ \varvec{E}_{i_0 1},\ldots ,\varvec{E}_{i_0 p}\}\) and \(\mathcal {Q}= \{ \varvec{E}_{1 j_0},\ldots ,\varvec{E}_{q j_0}\}\). Denote \(\varvec{R}=[ \varvec{E}_{i_0 1},\ldots ,\varvec{E}_{i_0 p}]\) and \(\varvec{Q}=[ \varvec{E}_{1 j_0},\ldots ,\varvec{E}_{q j_0}]\). If \(\varvec{\Lambda }\subseteq \mathbb {C}^p\) is a flat set, then \(\varvec{\Lambda }\cdot \varvec{R}\) is reflexive. On the other hand, \(\varvec{\Lambda }\cdot \varvec{Q}\) is reflexive, for every non-empty closed set \(\varvec{\Lambda }\subseteq \mathbb {C}^q\).

Proof

Let \(\mathcal {S}_{\varvec{R}}=\textrm{span}\{\varvec{E}_{i_0 1},\ldots ,\varvec{E}_{i_0 p}\}\) and assume that \(\varvec{\Lambda }\subseteq \mathbb {C}^p\) is a flat set determined by non-empty closed sets \(\Lambda _i\subseteq \mathbb {C}\) \((i=1,\ldots ,m)\) and a matrix \(\varvec{C}=[c_{ij}]\in \mathbb {M}_{m\times p}\). Suppose that \(\varvec{T}\in \textrm{Ref}(\varvec{\Lambda }\cdot \varvec{R})\). Since \(\varvec{\Lambda }\cdot \varvec{R}\subseteq \mathcal {S}_{\varvec{R}}\) and \(\mathcal {S}_{\varvec{R}}\) is reflexive, by Corollary 4.2, we have \(\varvec{T}\in \mathcal {S}_{\varvec{R}}\). Hence, there exists \(\varvec{\alpha }=(\alpha _1,\ldots ,\alpha _p)^{\intercal }\in \mathbb {C}^p\), such that \(\varvec{T}=\alpha _1 \varvec{E}_{i_0 1}+\cdots +\alpha _p \varvec{E}_{i_0 p}\). For every \(i=1,\ldots , m\), let \( \varvec{u}_i=c_{i1}\varvec{e}_1+\cdots +c_{ip}\varvec{e}_p\). Then

$$\begin{aligned} \varvec{T} \varvec{u}_i=(\alpha _1 \varvec{E}_{i_0 1}+\cdots +\alpha _p \varvec{E}_{i_0 p}) \varvec{u}_i= (\alpha _1c_{i1}+\cdots +\alpha _p c_{ip}) \varvec{f}_{i_0}. \end{aligned}$$

Fix i and let \(\varepsilon >0\) be arbitrary. Then, there exists \(\varvec{\lambda }^{(\varvec{u}_i)}=(\lambda _{1}^{(\varvec{u}_i)},\ldots ,\lambda _{p}^{(\varvec{u}_i)})^{\intercal }\in \varvec{\Lambda }\), which depends on \(\varepsilon \), such that \(\Vert (\varvec{T}-\varvec{\lambda }^{(\varvec{u}_i)}\cdot \varvec{R}) \varvec{u}_i\Vert <\varepsilon \). Since

$$\begin{aligned} (\varvec{\lambda }^{(\varvec{u}_i)}\cdot \varvec{R})\varvec{u}_i= (\lambda _{1}^{(\varvec{u}_i)} \varvec{E}_{i_0 1}+\cdots +\lambda _{p}^{(\varvec{u}_i)} \varvec{E}_{i_0 p}) \varvec{u}_i= (\lambda _{1}^{(\varvec{u}_i)} c_{i1}+\cdots +\lambda _{p}^{(\varvec{u}_i)} c_{ip}) \varvec{f}_{i_0}, \end{aligned}$$

we see that

$$\begin{aligned} | (\alpha _1c_{i1}+\cdots +\alpha _p c_{ip})-(\lambda _{1}^{(\varvec{u}_i)} c_{i1}+ \cdots +\lambda _{p}^{(\varvec{u}_i)} c_{ip})|=\Vert (\varvec{T}-\varvec{\lambda }^{(\varvec{u}_i)}\cdot \varvec{R})\varvec{u}_i\Vert <\varepsilon . \end{aligned}$$

The numbers \(\lambda _{1}^{(\varvec{u}_i)} c_{i1}+\cdots +\lambda _{p}^{(\varvec{u}_i)} c_{ip}\), which depend on \(\varepsilon \), are in \(\Lambda _i\), which is a closed set. Since \(\varepsilon \) can be arbitrarily small, we conclude that \(\alpha _1c_{i1}+\cdots +\alpha _p c_{ip}\in \Lambda _i\). This holds for every \(i=1, \ldots , m\). Thus, \(\varvec{\alpha }\in \varvec{\Lambda }\) and, therefore, \(\varvec{T}\in \varvec{\Lambda }\cdot \varvec{R}\).

For the second assertion, note that \(\mathcal {S}_{\varvec{Q}}=\textrm{span}\{\varvec{E}_{1 j_0},\ldots ,\varvec{E}_{q j_0}\}\) is reflexive, by Corollary 4.2. Since \(\varvec{E}_{i j_0}\varvec{x}=x_{j_0} \varvec{f}_i\) \((i=1,\ldots ,q)\), every vector \(\varvec{x}=(x_1,\ldots ,x_p)^{\intercal }\in \mathbb {C}^p\) with \(x_{j_0}\ne 0\) is separating for \(\mathcal {S}_{\varvec{Q}}\). Hence, by Theorem 3.1, \(\varvec{\Lambda }\cdot \varvec{Q}\) is reflexive, for every non-empty closed set \(\varvec{\Lambda }\subseteq \mathbb {C}^q\). \(\square \)

Two-dimensional non-reflexive spaces are characterized in [2, Theorem 3.10]. The following example is a consequence of that characterization.

Example

Let \(\mathcal {E}\subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\). If \(\mathcal {E}\) contains a triple \(\{ \varvec{E}_{ij},\) \( \varvec{E}_{i+k,j},\) \(\varvec{E}_{i+k, j+l}\}\) or a triple \(\{ \varvec{E}_{ij}, \varvec{E}_{i,j+l},\varvec{E}_{i+k, j+l}\}\), then there exists a non-reflexive two-dimensional subspace of the standard subspace \(\textrm{span}(\mathcal {E})\). To see this, assume that \(\{ \varvec{E}_{ij}, \varvec{E}_{i,j+l},\varvec{E}_{i+k, j+l}\}\subseteq \mathcal {E}\) (the case \(\{ \varvec{E}_{ij}, \varvec{E}_{i+k,j},\varvec{E}_{i+k, j+l}\} \subseteq \mathcal {E}\) can be treated similarly). Let \(\mathcal {M}\subseteq \textrm{span}(\mathcal {E})\) be the two-dimensional space spanned by \(\varvec{E}_{ij}+\varvec{E}_{i+k,j+l}\) and \(\varvec{E}_{i, j+l}\). It is clear that \(\varvec{E}_{ij}\not \in \mathcal {M}\). However, \(\varvec{E}_{ij}\in \textrm{Ref}(\mathcal {M})\). To check this, choose an arbitrary \(\varvec{x}=(x_1,\ldots , x_p)^{\intercal }\in \mathbb {C}^p\). Then, \(\varvec{E}_{ij}\varvec{x}=\varvec{f}_i \varvec{e}_{j}^{{\intercal }}(x_1 \varvec{e}_1+\cdots +x_p \varvec{e}_p)=x_j \varvec{f}_i\). Hence, if \(x_{j+l}=0\), then \(\varvec{E}_{ij}\varvec{x}=x_j \varvec{f}_i=\bigl ( \varvec{E}_{ij}+\varvec{E}_{i+k,j+l}\bigr ) \varvec{x}\), and if \(x_{j+l}\ne 0\), then \(\varvec{E}_{ij}\varvec{x}=x_j \varvec{f}_i=\frac{x_{j}}{x_{j+l}}\varvec{E}_{i,j+l} \varvec{x}\). Since \(\mathcal {M}\subsetneq \textrm{Ref}(\mathcal {M})\), we have proven that \(\mathcal {M}\) is not reflexive. \(\square \)
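The defect \(\varvec{E}_{ij}\in \textrm{Ref}(\mathcal {M}){\setminus }\mathcal {M}\) can be verified numerically in the smallest case \(q=p=2\); with 0-based indices the space is \(\mathcal {M}=\textrm{span}\{\varvec{E}_{00}+\varvec{E}_{11}, \varvec{E}_{01}\}\) and the reflexive-closure element is \(\varvec{T}=\varvec{E}_{00}\). A sketch:

```python
import numpy as np

E = {(a, b): np.outer(np.eye(2)[a], np.eye(2)[b]) for a in range(2) for b in range(2)}
A, B = E[(0, 0)] + E[(1, 1)], E[(0, 1)]    # spanning operators of M
T = E[(0, 0)]

# T is not in M = span{A, B}: the three matrices are linearly independent
assert np.linalg.matrix_rank(np.column_stack([A.ravel(), B.ravel(), T.ravel()])) == 3

# ... but T x lies in M x for every x: solve T x = a (A x) + b (B x) in the
# least-squares sense and check that the residual vanishes
rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    c, *_ = np.linalg.lstsq(np.column_stack([A @ x, B @ x]), T @ x, rcond=None)
    assert np.allclose((c[0] * A + c[1] * B) @ x, T @ x)
```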

For a non-empty set \(\mathcal {E} \subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\), let

$$\begin{aligned} \varvec{P}_\mathcal {E}=\sum _{\varvec{E}_{ij}\in \mathcal {E}}\varvec{E}_{ij}. \end{aligned}$$

Thus, \(\varvec{P}_\mathcal {E}\in \mathbb {M}_{q\times p}\) is a 0-1 matrix with 1 at the position (ij) if and only if \(\varvec{E}_{ij}\in \mathcal {E}\). We will say that \(\mathcal {E}\) (and, consequently, \(\varvec{P}_\mathcal {E}\)) is a twisted diagonal if

$$\begin{aligned} \begin{aligned}&\text {whenever there is a 1 at the position}\; (i_0,j_0)\; \text {of}\; \varvec{P}_\mathcal {E},\; \text {there is either}\\&\text {no other 1 in the}\; i_0\text {-th row or no other 1 in the}\; j_0\text {-th column of}\; \varvec{P}_\mathcal {E}. \end{aligned} \end{aligned}$$
(4.2)
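Condition (4.2) is easy to test mechanically for a concrete 0-1 matrix; a small sketch (the sample patterns below are hypothetical):

```python
import numpy as np

def is_twisted_diagonal(P):
    """Check condition (4.2): every 1 at (i0, j0) is either the only 1 in
    row i0 or the only 1 in column j0 of the 0-1 matrix P."""
    q, p = P.shape
    for i in range(q):
        for j in range(p):
            if P[i, j] == 1:
                alone_in_row = P[i, :].sum() == 1
                alone_in_col = P[:, j].sum() == 1
                if not (alone_in_row or alone_in_col):
                    return False
    return True

print(is_twisted_diagonal(np.array([[1, 1, 0], [0, 0, 1], [0, 0, 1]])))  # True
print(is_twisted_diagonal(np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]])))  # False
```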

Examples of twisted diagonals are rows and columns from Proposition 4.3. A twisted diagonal \(\mathcal {E}\) is maximal if it is not properly contained in a larger twisted diagonal. There is no loss of generality if we confine ourselves to maximal twisted diagonals. In the following picture, we show two examples of matrix patterns that correspond to maximal twisted diagonals (a light square means 0 and a dark one means 1).

[Figure a: two matrix patterns corresponding to the maximal twisted diagonals \(\varvec{P}_{\mathcal {E}_1}\) and \(\varvec{P}_{\mathcal {E}_2}\)]

Note that \(\varvec{P}_{\mathcal {E}_1}\) is a direct sum of rows and columns and \(\varvec{P}_{\mathcal {E}_2}\) can be obtained from \(\varvec{P}_{\mathcal {E}_1}\) by permutation of rows and columns.

Lemma 4.4

Let \(\mathcal {E}\subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) be a maximal twisted diagonal. Then, there exist subsets \(\mathcal {E}_1,\ldots ,\mathcal {E}_k\) of \( \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\), each of which is either a row or a column, and permutation matrices \(\varvec{U}\in \mathbb {M}_{q\times q}\) and \(\varvec{V}\in \mathbb {M}_{p\times p}\), such that \(\varvec{U}\varvec{P}_{\mathcal {E}}\varvec{V}=\varvec{P}_{\mathcal {E}_1}\oplus \cdots \oplus \varvec{P}_{\mathcal {E}_k}\).

Proof

Let \(\mathcal {E}\) be a maximal twisted diagonal and let \(\varvec{P}_\mathcal {E}=[\rho _{ij}]\in \mathbb {M}_{q\times p}\) be the corresponding 0-1-matrix. By maximality, there is at least one 1 in each row and in each column of \(\varvec{P}_\mathcal {E}\). Hence, for \(i=1\), there exist indices \(j_1, \ldots , j_{l}\in \{ 1,\ldots , p\}\), where \(l\ge 1\), such that \(\rho _{1 j_1}=\cdots =\rho _{1 j_{l}}=1\) and \(\rho _{1 j}=0\) if j is not among the listed indices. If necessary, we may permute columns to get \(j_1=1,\ldots , j_{l}=l\). We have to distinguish three cases. If \(l=p>1\), then, by (4.2), none of these columns contains another 1 and, by maximality, \(q=1\); hence, \(\varvec{P}_\mathcal {E}\) is a \(1\times p\) matrix with all entries equal to 1 and we are done. Assume that \(1<l<p\). Then, \(\rho _{ij}=0\), for all pairs \((i,j)\), such that \(2\le i\le q\) and \(1\le j\le l\), and for all pairs \((1,l+1),\ldots ,(1,p)\). Let \(\mathcal {E}_1=\{ \varvec{E}_{11},\ldots , \varvec{E}_{1l}\}\) and consider \(\varvec{P}_{\mathcal {E}_1}\) as a \(1\times l\) matrix with all entries equal to 1. It follows that \(\varvec{U}_1 \varvec{P}_\mathcal {E} \varvec{V}_1=\varvec{P}_{\mathcal {E}_1}\oplus \varvec{P}_{\mathcal {E}'}\), where \(\varvec{P}_{\mathcal {E}'}\in \mathbb {M}_{(q-1)\times (p-l)}\) is the 0-1-matrix corresponding to \(\mathcal {E}'=\mathcal {E}{\setminus } \mathcal {E}_1\) and \(\varvec{U}_1\), respectively \(\varvec{V}_1\), is a suitable permutation of rows, respectively columns. The third case is \(l=1\), that is, the first row of \(\varvec{P}_{\mathcal {E}}\) contains only one 1 (which is in the first column after a suitable permutation of columns). Let \(i_1, \ldots , i_t\in \{ 1,\ldots , q\}\) be the indices, such that \(\rho _{i_1 1}=\cdots =\rho _{i_t 1}=1\) and all other entries in the first column are 0. If necessary, we permute rows to get \(i_1=1,\ldots ,i_t=t\). If \(t=q\), then we are done: \(\varvec{P}_\mathcal {E}\) is a \(q\times 1\) matrix with all entries equal to 1.
Suppose that \(1\le t<q\). Then, \(\rho _{ij}=0\), for all pairs \((i,j)\), such that \(1\le i\le t\) and \(2\le j\le p\), and for all pairs \((t+1,1),\ldots , (q,1)\). Let \(\mathcal {E}_1=\{ \varvec{E}_{11},\ldots , \varvec{E}_{t1}\}\), that is, \(\varvec{P}_{\mathcal {E}_1}\) is a \(t\times 1\) matrix with all entries equal to 1. It follows that \(\varvec{U}_2\varvec{P}_\mathcal {E} \varvec{V}_2=\varvec{P}_{\mathcal {E}_1}\oplus \varvec{P}_{\mathcal {E}'}\), where \(\varvec{P}_{\mathcal {E}'}\in \mathbb {M}_{(q-t)\times (p-1)}\) is the 0-1-matrix corresponding to \(\mathcal {E}'=\mathcal {E}{\setminus } \mathcal {E}_1\) and \(\varvec{U}_2, \varvec{V}_2\) are suitable permutation matrices. We now repeat the same procedure with the smaller matrix \(\varvec{P}_{\mathcal {E}'}\). \(\square \)
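The recursive procedure from the proof can be sketched as a greedy algorithm that repeatedly splits off either a \(1\times l\) row block or a \(t\times 1\) column block. The function `split_twisted_diagonal` and the sample matrix below are hypothetical illustrations, valid only for 0-1 matrices that are maximal twisted diagonals:

```python
import numpy as np

def split_twisted_diagonal(P):
    """Return row/column orders and block shapes so that reordering P gives
    a direct sum of all-ones 1 x l rows and t x 1 columns (cf. Lemma 4.4)."""
    q, p = P.shape
    row_order, col_order, blocks = [], [], []
    rows_left, cols_left = list(range(q)), list(range(p))
    while rows_left:
        i = rows_left[0]
        ones = [j for j in cols_left if P[i, j] == 1]
        if len(ones) > 1:                      # a 1 x l row block
            row_order.append(i); rows_left.remove(i)
            for j in ones:
                col_order.append(j); cols_left.remove(j)
            blocks.append((1, len(ones)))
        else:                                  # a t x 1 column block
            j = ones[0]
            rows = [r for r in rows_left if P[r, j] == 1]
            for r in rows:
                row_order.append(r); rows_left.remove(r)
            col_order.append(j); cols_left.remove(j)
            blocks.append((len(rows), 1))
    return row_order, col_order, blocks

P = np.array([[0, 0, 1], [1, 1, 0], [0, 0, 1]])   # a maximal twisted diagonal
ro, co, blocks = split_twisted_diagonal(P)
print(blocks)                  # -> [(2, 1), (1, 2)]
print(P[np.ix_(ro, co)])       # block diagonal: a 2x1 column and a 1x2 row of ones
```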

Let \(\mathcal {E}\subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) be a twisted diagonal and let \(\mathcal {E}_1,\ldots ,\mathcal {E}_k\), \(\varvec{U}\in \mathbb {M}_{q\times q}\), and \(\varvec{V}\in \mathbb {M}_{p\times p}\) have the same meaning as in Lemma 4.4. We say that a flat set \(\mathcal {M}\subseteq \textrm{span}(\mathcal {E})\) splits if \(\varvec{U} \mathcal {M} \varvec{V}=\mathcal {M}_1\oplus \cdots \oplus \mathcal {M}_k\), where \(\mathcal {M}_j\subseteq \textrm{span}(\mathcal {E}_j)\) are flat subsets, for all \(j=1,\ldots , k\).

Proposition 4.5

Let \(\mathcal {E}\subseteq \{ \varvec{E}_{11},\ldots ,\varvec{E}_{qp}\}\) be a twisted diagonal. If \(\mathcal {M}\subseteq \textrm{span}(\mathcal {E})\) is a flat subset that splits, then it is reflexive.

Proof

Reflexivity of each \(\mathcal {M}_j\) follows from Proposition 4.3. By Lemma 2.3, \(\mathcal {M}_1\oplus \cdots \oplus \mathcal {M}_k\) is reflexive and, therefore, \(\mathcal {M}\) is reflexive, by Lemma 2.2. \(\square \)

5 Reflexivity of two-dimensional sets of operators

Let \(\varvec{M}=[M_1,M_2]\), where \(M_1, M_2\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) are linearly independent operators, and let \(\varvec{\Lambda }\subseteq \mathbb {C}^2\) be a non-empty closed set. In this section, we consider the reflexivity of \(\varvec{\Lambda }\cdot \varvec{M}=\{ \lambda _1 M_1+\lambda _2 M_2;\; (\lambda _1,\lambda _2)^{\intercal }\in \varvec{\Lambda }\}\). Let \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ M_1,M_2\}\). By [2, Theorem 3.10], the space \(\mathcal {S}_{\varvec{M}}\) is not reflexive if and only if there exist linearly independent \(\xi _1, \xi _2\in \mathscr {X}^*\) and \(f_1, f_2\in \mathscr {Y}\), such that \(\mathcal {S}_{\varvec{M}}=\textrm{span}\{ f_1\otimes \xi _1, f_1\otimes \xi _2+f_2\otimes \xi _1\}\). However, the following lemma shows that the \(\mathbb {R}\)-linear span of the operators \(f_1\otimes \xi _1\) and \(f_1\otimes \xi _2+f_2\otimes \xi _1\) is reflexive.

Lemma 5.1

Let \(\xi _1, \xi _2\in \mathscr {X}^*\) and \(f_1, f_2\in \mathscr {Y}\) be linearly independent. Denote \(M_1=f_1 \otimes \xi _1\), \(M_2=f_1\otimes \xi _2+f_2\otimes \xi _1\), and \(\varvec{M}=[M_1,M_2]\). Then, the \(\mathbb {R}\)-linear space \(\mathbb {R}^2\cdot \varvec{M}\) is reflexive.

Proof

It is clear that \((\mathbb {R}^2\cdot \varvec{M})x\) is a closed subset of \(\mathscr {Y}\), for every \(x\in \mathscr {X}\). Hence, \(\textrm{Ref}(\mathbb {R}^2\cdot \varvec{M})=\textrm{Ref}_a(\mathbb {R}^2\cdot \varvec{M})\). Let \(T\in \textrm{Ref}(\mathbb {R}^2\cdot \varvec{M})\) be arbitrary. If \(x\in \mathscr {X}\), then there exists \(\varvec{\lambda }^{(x)}=\bigl (\lambda _{1}^{(x)},\lambda _{2}^{(x)}\bigr )^{\intercal }\in \mathbb {R}^2\), such that

$$\begin{aligned} \begin{aligned} Tx&=\bigl ( \lambda _{1}^{(x)} f_1\otimes \xi _1+ \lambda _{2}^{(x)}(f_1\otimes \xi _2+f_2\otimes \xi _1)\bigr )x\\&=\bigl ( \lambda _{1}^{(x)}\langle x,\xi _1\rangle +\lambda _{2}^{(x)}\langle x,\xi _2\rangle \bigr ) f_1+ \lambda _{2}^{(x)}\langle x,\xi _1\rangle f_2. \end{aligned} \end{aligned}$$
(5.1)

Hence, \(\mathscr {R}(T)\subseteq \textrm{span}\{ f_1, f_2\}\) which means that there exist functionals \(\eta _1, \eta _2\in \mathscr {X}^*\), such that

$$\begin{aligned} Tx=\langle x,\eta _1\rangle f_1+\langle x,\eta _2\rangle f_2,\qquad \text {for all}\; x\in \mathscr {X}. \end{aligned}$$
(5.2)

It follows from (5.2) and (5.1) that:

$$\begin{aligned} \langle x,\eta _1\rangle =\lambda _{1}^{(x)}\langle x,\xi _1\rangle +\lambda _{2}^{(x)}\langle x,\xi _2\rangle \end{aligned}$$
(5.3)

and

$$\begin{aligned} \langle x,\eta _2\rangle =\lambda _{2}^{(x)}\langle x,\xi _1\rangle , \end{aligned}$$
(5.4)

for all \(x\in \mathscr {X}\). Suppose that \(x\in \mathscr {N}(\xi _1)\cap \mathscr {N}(\xi _2)\). Then, (5.3) gives \(x\in \mathscr {N}(\eta _1)\). It follows that there exist \(\alpha ,\beta \in \mathbb {C}\), such that \(\eta _1=\alpha \xi _1+\beta \xi _2\). Similarly, it follows from (5.4) that \(\eta _2=\gamma \xi _1\) for some number \(\gamma \). Thus, \( T=\alpha f_1\otimes \xi _1+\beta f_1\otimes \xi _2+\gamma f_2\otimes \xi _1\). Equations (5.3) and (5.4) can be rewritten as

$$\begin{aligned} \langle x,(\alpha -\lambda _{1}^{(x)})\xi _1+(\beta -\lambda _{2}^{(x)})\xi _2\rangle =0 \qquad \text {and}\qquad \langle x,(\gamma -\lambda _{2}^{(x)})\xi _1\rangle =0. \end{aligned}$$
(5.5)

Let \(e_1,e_2\in \mathscr {X}\) be such that \(\langle e_1,\xi _1\rangle =1=\langle e_2,\xi _2\rangle \) and \(\langle e_1,\xi _2\rangle =0=\langle e_2,\xi _1\rangle \). If we put \(x=e_1\) into (5.5), then we get \(\alpha =\lambda _{1}^{(e_1)}\) and \(\gamma =\lambda _{2}^{(e_1)}\). Similarly, \(x=e_2\) gives \(\beta =\lambda _{2}^{(e_2)}\). This shows that \(\alpha , \beta \), and \(\gamma \) are real numbers. Let \(u=e_1+ie_2\). Equation (5.1) gives

$$\begin{aligned} Tu=\bigl ( \lambda _{1}^{(u)}\langle u,\xi _1\rangle +\lambda _{2}^{(u)}\langle u,\xi _2\rangle \bigr ) f_1+ \lambda _{2}^{(u)}\langle u,\xi _1\rangle f_2 =\bigl ( \lambda _{1}^{(u)}+i \lambda _{2}^{(u)}\bigr ) f_1+ \lambda _{2}^{(u)} f_2. \end{aligned}$$
(5.6)

On the other hand

$$\begin{aligned} Tu=(\alpha f_1\otimes \xi _1+\beta f_1\otimes \xi _2+\gamma f_2\otimes \xi _1) u =(\alpha +i\beta )f_1+\gamma f_2. \end{aligned}$$
(5.7)

Comparison of (5.6) and (5.7) gives \(\alpha =\lambda _{1}^{(u)}\) and \(\beta =\gamma =\lambda _{2}^{(u)}\). Thus, \(T= \lambda _{1}^{(u)} M_1+\lambda _{2}^{(u)} M_2\in \mathbb {R}^2\cdot \varvec{M}\). \(\square \)
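The computations in the proof can be checked numerically in the concrete case \(\mathscr {X}=\mathscr {Y}=\mathbb {C}^2\), identifying \(f_1, f_2\) with the standard basis and \(\xi _1, \xi _2\) with the dual basis (so that \(e_1=f_1\), \(e_2=f_2\) and \(\langle x,\xi _j\rangle \) is the \(j\)-th coordinate of \(x\)). The following is a minimal sketch under this identification; the particular values of \(\lambda _1, \lambda _2\) and the test vector are illustrative choices, not part of the argument.

```python
import numpy as np

# Standard basis of C^2 plays the role of f_1, f_2 (and of e_1, e_2);
# <x, xi_j> is then just the j-th coordinate of x.
f1 = np.array([1, 0], dtype=complex)
f2 = np.array([0, 1], dtype=complex)

# M1 = f_1 (x) xi_1 and M2 = f_1 (x) xi_2 + f_2 (x) xi_1 as 2x2 matrices.
M1 = np.outer(f1, f1)
M2 = np.outer(f1, f2) + np.outer(f2, f1)

lam1, lam2 = 0.3, 0.7          # real scalars, as in Lemma 5.1
T = lam1 * M1 + lam2 * M2

# Check (5.1): Tx = (lam1<x,xi1> + lam2<x,xi2>) f1 + lam2<x,xi1> f2.
x = np.array([2.0 + 1j, -1.0], dtype=complex)
rhs = (lam1 * x[0] + lam2 * x[1]) * f1 + lam2 * x[0] * f2
assert np.allclose(T @ x, rhs)

# Check (5.6) against (5.7) for u = e_1 + i*e_2: both sides equal
# (lam1 + i*lam2) f1 + lam2 f2, forcing alpha = lam1 and beta = gamma = lam2.
u = f1 + 1j * f2
assert np.allclose(T @ u, (lam1 + 1j * lam2) * f1 + lam2 * f2)
print("checks passed")
```

In this finite-dimensional picture, \(M_2\) is the matrix \(\left[ \begin{array}{ll} 0 & 1\\ 1 & 0 \end{array}\right] \), and the comparison of (5.6) with (5.7) is the observation that the two expressions for \(Tu\) coincide coordinatewise.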

Now, we are ready for a description of two-dimensional reflexive sets of operators.

Theorem 5.2

Let \(M_1, M_2\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) be linearly independent operators and let \(\varvec{M}=[M_1, M_2]\). The set \(\varvec{\Lambda } \cdot \varvec{M}\) is reflexive for every non-empty closed set \(\varvec{\Lambda } \subseteq \mathbb {C}^2\) except if either \(M_1, M_2\) are rank-one operators with the same range or \(M_1=f_1 \otimes \xi _1\) and \(M_2=f_1\otimes \xi _2+f_2\otimes \xi _1\), where \(f_1, f_2\in \mathscr {Y}\) and \(\xi _1, \xi _2\in \mathscr {X}^*\) are linearly independent.

  1. (i)

    If \(M_1, M_2\) are rank-one operators with the same range, then \(\varvec{\Lambda }\cdot \varvec{M}\) is reflexive for every flat set \(\varvec{\Lambda }\subseteq \mathbb {C}^2\).

  2. (ii)

    If \(M_1=f_1 \otimes \xi _1\) and \(M_2=f_1\otimes \xi _2+f_2\otimes \xi _1\), where \(f_1, f_2\in \mathscr {Y}\) and \(\xi _1, \xi _2\in \mathscr {X}^*\) are linearly independent, then \(\varvec{\Lambda } \cdot \varvec{M}\) is reflexive for every non-empty closed set \(\varvec{\Lambda } \subseteq \mathbb {R}^2\).

Proof

Assume that \(M_1, M_2\) are neither rank-one operators with the same range nor \(M_1=f_1 \otimes \xi _1\) and \(M_2=f_1\otimes \xi _2+f_2\otimes \xi _1\), with \(f_1, f_2\in \mathscr {Y}\) and \(\xi _1, \xi _2\in \mathscr {X}^*\) linearly independent. Then, \(\mathcal {S}_{\varvec{M}}\) is a reflexive space, by [2, Theorem 3.10], and it has a separating vector, by Corollary 2.8. Hence, by Theorem 3.1, \(\varvec{\Lambda } \cdot \varvec{M}\) is reflexive for every non-empty closed set \(\varvec{\Lambda } \subseteq \mathbb {C}^2\).

  1. (i)

    If \(M_1, M_2\) are rank-one operators with the same range, then \(\varvec{\Lambda }\cdot \varvec{M}\) is reflexive for every flat set \(\varvec{\Lambda }\subseteq \mathbb {C}^2\), by Proposition 4.3 (i).

  2. (ii)

    If \(M_1=f_1 \otimes \xi _1\) and \(M_2=f_1\otimes \xi _2+f_2\otimes \xi _1\), where \(f_1, f_2\in \mathscr {Y}\) and \(\xi _1, \xi _2\in \mathscr {X}^*\) are linearly independent, then \(\mathbb {R}^2 \cdot \varvec{M}\) is reflexive, by Lemma 5.1. By Corollary 2.8, \(\mathcal {S}_{\varvec{M}}\) has a separating vector. Hence, \(\varvec{\Lambda } \cdot \varvec{M}\) is reflexive for every non-empty closed set \(\varvec{\Lambda } \subseteq \mathbb {R}^2\), by Theorem 3.1.

\(\square \)

Corollary 5.3

The convex hull of any three operators in \(\mathcal {B}(\mathscr {X}, \mathscr {Y})\) is a reflexive set.

Proof

Let \(M_1, M_2, M_3\in \mathcal {B}(\mathscr {X},\mathscr {Y})\) be arbitrary operators and let \(\mathcal {C}\) be their convex hull. Since \(\mathcal {C}\) is reflexive if and only if \(\mathcal {C}-M_3\) is reflexive, we may assume that \(M_3=0\). If \(M_1\) and \(M_2\) are linearly dependent, say \(M_2=\lambda M_1\), then \(\mathcal {C}=\Lambda M_1\), where \(\Lambda =\{ t_1+t_2\lambda \in \mathbb {C};\; t_1, t_2, t_1+t_2\in [0,1]\}\). Since \(\Lambda \) is compact, and hence closed, \(\mathcal {C}\) is reflexive, by [3, Proposition 2.5]. Assume, therefore, that \(M_1\) and \(M_2\) are linearly independent and let \(\varvec{M}=[M_1,M_2]\). Then, \(\mathcal {C}=\varvec{\Lambda }\cdot \varvec{M}\), where \(\varvec{\Lambda }\subseteq \mathbb {R}^2\) is the flat set determined by the sets \(\Lambda _1=\Lambda _2=[0,1]\), \(\Lambda _3=\{ 1\}\), and the matrix \(\varvec{C}=\left[ \begin{array}{ll} 1 & 0\\ 0 & 1 \\ 1 & 1 \end{array}\right] \) (see Sect. 2.2). The reflexivity of \(\mathcal {C}\) follows by Theorem 5.2. \(\square \)
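The translation step of the proof, i.e., reducing to \(M_3=0\) and rewriting each point of the hull in the two-parameter form \(t_1(M_1-M_3)+t_2(M_2-M_3)\) with \(t_1, t_2\ge 0\), \(t_1+t_2\le 1\), can be illustrated numerically. The following is a minimal sketch with illustrative \(2\times 2\) matrices standing in for \(M_1, M_2, M_3\); the argument itself does not depend on the choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 2x2 matrices standing in for M1, M2, M3.
M1, M2, M3 = [rng.standard_normal((2, 2)) for _ in range(3)]

# A point of the convex hull C: t1*M1 + t2*M2 + t3*M3 with t_i >= 0, sum 1.
t1, t2 = 0.2, 0.5
t3 = 1.0 - t1 - t2
point = t1 * M1 + t2 * M2 + t3 * M3

# After translating by M3 (i.e., assuming M3 = 0), the same point becomes
# t1*(M1 - M3) + t2*(M2 - M3) with t1, t2 >= 0 and t1 + t2 <= 1 -- the
# two-parameter form Lambda . [M1', M2'] used in the proof.
shifted = t1 * (M1 - M3) + t2 * (M2 - M3)
assert np.allclose(point - M3, shifted)
print("translation reduction verified")
```

This is why nothing is lost in assuming \(M_3=0\): translation by a fixed operator carries the hull of \(\{M_1, M_2, M_3\}\) onto the hull of \(\{M_1-M_3,\, M_2-M_3,\, 0\}\) and preserves reflexivity.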