1 Introduction

The calculus of rows, columns, and block matrices, where the entries are unbounded operators, was studied in the paper of Möller and Szafraniec [13]; see also [8]. However, already in the case of \(2 \times 2\) blocks, it is natural that one of the entries is multivalued; see [2, 6, 11, 18]. Even earlier, in [3], there was a suggestion of a \(2 \times 2\) block matrix where two of its entries were multivalued; the corresponding situation was considered in [5], where such blocks are shown to appear naturally as selfadjoint extensions of certain symmetric relations. The present paper offers an attempt to formalize the operational calculus for block matrices whose entries are all linear relations, along the lines in which linear relations between two Hilbert spaces are treated in [1]. The basic idea in the context of linear relations is to first develop the notions of a row and a column consisting of a sequence of linear relations. This approach was taken in the study of matrices with unbounded operators in [13]. Rows and columns can be considered as the building blocks in the definition of a block matrix whose entries are linear relations. To keep the presentation simple, it suffices to concentrate on the special case of \(2\times 2\) blocks.

To explain the notions of row, column, and block, let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Recall that one can form a sum of \(H_1\) and \(H_2\) in different ways. The componentwise sum \(H_1 \, {\widehat{+}} \,H_2\) is defined as

$$\begin{aligned} H_1 \, {\widehat{+}} \,H_2=\big \{\, \{h_1+h_2, k_1+k_2\} :\, \{h_1, k_1\} \in H_1, \,\, \{h_2, k_2\} \in H_2\,\big \}, \end{aligned}$$
(1.1)

and the usual sum \(H_1+H_2\), as in the case of operators, is defined as

$$\begin{aligned} H_1+H_2=\big \{ \, \{h,k_1+k_2\} :\, \{h,k_1\} \in H_1, \,\, \{h,k_2\} \in H_2\,\big \}. \end{aligned}$$
(1.2)

Of course, when \(H_{1}\) and \(H_{2}\) are operators, these definitions make sense for their graphs (identifying an operator with its graph). The main point is that one often encounters a slightly different setting, as the Hilbert spaces in which these relations are defined, or to which they map, may differ: either \(H_1\) is a linear relation from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and \(H_2\) is a linear relation from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\), or \(H_1\) is a linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}_1\) and \(H_2\) is a linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}_2\). In the first case one needs the notion of a row of relations, which extends the componentwise sum, while in the second case one needs the notion of a column of relations, which extends the usual sum. In particular, these notions play a role when introducing \(2\times 2\) blocks of linear relations:

$$\begin{aligned} \begin{pmatrix} H_{11} &\quad H_{12} \\ H_{21} &\quad H_{22} \end{pmatrix}: \begin{pmatrix}{{\mathfrak{H}}}_{1} \\ {{\mathfrak{H}}}_{2} \end{pmatrix} \rightarrow \begin{pmatrix}{{\mathfrak{H}}}_{1} \\ {{\mathfrak{H}}}_{2} \end{pmatrix}, \end{aligned}$$

where each \(H_{ij}\) is a linear relation from \({{\mathfrak{H}}}_{j}\) to \({{\mathfrak{H}}}_{i}\). It turns out that one may speak of a column of two rows or of a row of two columns.
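All of these notions are purely linear-algebraic, so they can be made concrete in a finite-dimensional model, where a linear relation from \(\mathbb{C}^n\) to \(\mathbb{C}^m\) is simply a subspace of \(\mathbb{C}^{n+m}\), stored as a matrix whose columns span the graph. The following numpy sketch computes the two sums (1.1) and (1.2) under this identification; it is an illustration only, and the helper names are ours, not taken from the literature.

```python
import numpy as np
from scipy.linalg import null_space

# A relation H from C^n to C^m is a subspace of C^(n+m), stored as a
# matrix whose columns span the graph: the first n rows hold the domain
# component h, the last m rows the range component k.

def cw_sum(B1, B2):
    """Componentwise sum (1.1): the span of the union of the two graphs."""
    return np.hstack([B1, B2])

def usual_sum(n, B1, B2):
    """Usual sum (1.2): pairs {h, k1 + k2} with {h, k1} in H1, {h, k2} in H2."""
    k1 = B1.shape[1]
    # parameters (u, v) with matching domain components: B1[:n] u = B2[:n] v
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1] + B2[n:] @ N[k1:]])

# Example: H1 = graph of the identity on C^2, H2 = {0} x span{e1}:
B1 = np.vstack([np.eye(2), np.eye(2)])
B2 = np.array([[0.], [0.], [1.], [0.]])
print(cw_sum(B1, B2).shape[1])        # 3 generators: H1 ^+ H2 is 3-dimensional
print(usual_sum(2, B1, B2).round(3))  # dom(H1 + H2) = dom H1 cap dom H2 = {0}
```

The example already shows the asymmetry between the two sums: the componentwise sum enlarges the relation, while the usual sum only lives over the intersection of the domains.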

The operational calculus for these notions concerns their linear structure. In particular, the notions of row, column, and block will be characterized among all linear relations. When adjoints are involved, one is confronted with the occurrence of the various Hilbert spaces, so one needs to keep track of the relevant spaces. Recall that for the componentwise sum the adjoint behaves as one would expect, but for the usual sum things are, of course, different. These facts have their counterparts in the behavior of adjoints of rows, columns, and blocks of linear relations. In addition, it is observed that multiplication of rows, columns, and blocks is a complicated process, because one has to keep track not only of the domains, but also of the multivalued parts.

The present paper is of an expository nature. It gives a brief survey of the above notions, along the lines of [13] where the case of operators was treated, but now in the context of linear relations; see also [4, 12, 14, 15, 16]. However, the appearance of multivalued parts in the entries of the block produces some new obstacles in this study. Finally, it is mentioned that different special cases of linear block relations with multivalued entries appear in the literature and that various notions and results given here can be applied, e.g., in the context of the sum of selfadjoint (or maximal sectorial) operators or relations, see [7, 8], in the extension theory for nondensely defined symmetric operators and symmetric relations, see, e.g., [6], and, for instance, in the study of linear relations whose domain and range are orthogonal, see [3, 5]. There is also a connection to the recent papers [19] and [24], which will be explained.

2 Linear relations and their adjoints

Let H be a linear relation from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\), i.e., H is a linear subspace of the product space \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\). Its multivalued part \(\text{mul}\,H=\{k \in {{\mathfrak{K}}}:\, \{0,k\} \in H\}\) gives rise to the purely multivalued relation \(H_{\text{mul}}=\{0\} \times \text{mul}\,H\). The closure \(\overline{H}\) of H is a subspace of \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), and the relation H is closed if and only if \(H=\overline{H}\). The adjoint \(H^*\) of H is defined as a relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\) by

$$\begin{aligned} H^*:= \bigl \{\{f, f^\prime \}\in{{\mathfrak{K}}}\times{{\mathfrak{H}}}:\, (f^\prime ,h)=(f,h^\prime )\,\,\,\, \text{for all}\,\,\,\{h, h^\prime \}\in H \bigr \}. \end{aligned}$$

Then it is clear that

$$\begin{aligned} H^{*}=(JH)^{\perp } =J H^{\perp }, \end{aligned}$$
(2.1)

where the orthogonal complements refer to the componentwise inner product of \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) and \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), respectively. Here J from \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\) to \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) is defined by

$$\begin{aligned} J\{f,f'\}=\{f',-f\}, \quad \{f,f'\} \in{{\mathfrak{H}}}\times{{\mathfrak{K}}}. \end{aligned}$$

Clearly, if H and K are relations with \(H \subset K\) then \(K^* \subset H^{*}\). It also follows from (2.1) that \(H^{*}\) is a closed linear relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\). Note that (2.1) gives \(J^{-1}H^*=H^\perp \), i.e.

$$\begin{aligned} (J^{-1}H^*)^\perp =H^{\perp \perp }=\overline{H}. \end{aligned}$$

Since \(J^{-1}\) is the flip-flop operator from \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) to \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), the left-hand side coincides with \(H^{**}\) and hence \(H^{**}=\overline{H}\). The orthogonal decompositions

$$\begin{aligned}{{\mathfrak{H}}}=\overline{\text{dom}\,}H^{**} \oplus \text{mul}\,H^{*} \quad \text{ and } \quad{{\mathfrak{K}}}=\overline{\text{dom}\,}H^* \oplus \text{mul}\,H^{**} \end{aligned}$$
(2.2)

are now clear from (2.1). For a brief introduction to these notions, see [2].
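In the finite-dimensional model from the Introduction the identity (2.1) can be implemented directly: take an orthonormal basis of \(H^{\perp}\) and apply the flip J. The following sketch is again an illustration with our own helper names.

```python
import numpy as np
from scipy.linalg import null_space

def adjoint(n, B):
    """H* = J H^perp, cf. (2.1), for a relation H from C^n to C^m
    spanned by the columns of the (n+m) x k matrix B."""
    N = null_space(B.conj().T)          # orthonormal basis of H^perp
    return np.vstack([N[n:], -N[:n]])   # apply J{f, f'} = {f', -f}

def same_subspace(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

# H from C^2 to C^2 spanned by {e1, e1} and {0, e2}, so mul H = span{e2}:
B = np.array([[1., 0.], [0., 0.], [1., 0.], [0., 1.]])
Bstar = adjoint(2, B)                        # here mul H* = (dom H)^perp
print(same_subspace(adjoint(2, Bstar), B))   # True: H** = closure(H) = H
```

Since every subspace of a finite-dimensional space is closed, the model confirms \(H^{**}=\overline{H}\), and the decompositions (2.2) can be read off in the same way. The following additive decomposition for linear relations is an operator-theoretic version of the Lebesgue decomposition of measures; see [9, Theorem 4.1].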

Theorem 2.1

Let H be a relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and let Q be the orthogonal projection in \({{\mathfrak{K}}}\) onto \(\overline{\text{dom}\,}H^*\). Then H admits the following sum decomposition

$$\begin{aligned} H=QH+(I-Q)H, \end{aligned}$$
(2.3)

where the relations QH and \((I-Q)H\) have the following properties:

(a) QH is a closable operator;

(b) \(\text{clos}\,((I-Q)H)=\overline{\text{dom}\,}H \times \text{mul}\,H^{**}\).

The closure of the so-called regular part QH is (the graph of) an operator, while the closure of the so-called singular part \((I-Q)H\) is a closed singular relation, i.e., the product of two closed subspaces. The Lebesgue decomposition (2.3) of a relation H from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) gives rise to a componentwise direct sum decomposition when \(\text{mul}\,H=\text{mul}\,H^{**}\); see [10, Theorem 3.10, Corollary 3.14].

Proposition 2.1

Let H be a relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and let Q be the orthogonal projection in \({{\mathfrak{K}}}\) onto \(\overline{\text{dom}\,}H^*\). Assume that

$$\begin{aligned} \text{mul}\,H=\text{mul}\,H^{**}, \end{aligned}$$
(2.4)

so that \({{\mathfrak{K}}}\) can be decomposed as \({{\mathfrak{K}}}=\overline{\text{dom}\,}H^* \oplus \text{mul}\,H\). Then \(QH \subset H\) and the relation H has the following direct sum decomposition

$$\begin{aligned} H=QH \, \widehat{+} \,\bigl (\{0\} \times \text{mul}\,H\bigr ), \end{aligned}$$
(2.5)

where QH is a closable operator from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and \(\{0\} \times \text{mul}\,H\) is a purely multivalued relation with range \(\text{mul}\,H\). Moreover, if the relation H is closed, then (2.4) is automatically satisfied and the operator QH is closed.

If \(\text{mul}\,H=\text{mul}\,H^{**}\), then \(H_{\text{s}}=QH\) is the operator part of H, so that (2.5) reads \(H=H_{\text{s}} \, \widehat{+} \,H_{\text{mul}}\). Clearly, if H is closed then \(\text{mul}\,H=\text{mul}\,H^{**}\), so that a closed linear relation always has a closed orthogonal operator part. Note that \(H^{**}\) is a closed linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and it follows from (2.2) that \((H^{**})_{\text{s}}\) maps \(\text{dom}\,H^{**}\) into \(\overline{\text{dom}\,}H^*\). Likewise, \(H^{*}\) is a closed linear relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\), so that \((H^{*})_{\text{s}}\) maps \(\text{dom}\,H^{*}\) into \(\overline{\text{dom}\,}H^{**}\).
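In the finite-dimensional model the decomposition (2.5) is straightforward to compute: \(\text{mul}\,H\) is read off from the graph, Q is the projection onto \((\text{mul}\,H)^{\perp}\), and \(H_{\text{s}}=QH\). The sketch below (our helper names, continuing the earlier snippets) verifies (2.5) for a closed relation.

```python
import numpy as np
from scipy.linalg import null_space, orth

def mul_part(n, B):
    """mul H = {k : {0, k} in H}, from a graph-spanning matrix B."""
    return B[n:] @ null_space(B[:n])

def operator_part(n, B):
    """H_s = QH with Q the projection onto (mul H)^perp, cf. (2.5)."""
    m = B.shape[0] - n
    M = mul_part(n, B)
    U = orth(M) if M.shape[1] else np.zeros((m, 0))
    Q = np.eye(m) - U @ U.conj().T      # projection onto (mul H)^perp
    return np.vstack([B[:n], Q @ B[n:]])

def same_subspace(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

# H from C^2 to C^2 spanned by {e1, e1} and {0, e2}: mul H = span{e2}.
B = np.array([[1., 0.], [0., 0.], [1., 0.], [0., 1.]])
Hs, M = operator_part(2, B), mul_part(2, B)
zero_times_mul = np.vstack([np.zeros((2, M.shape[1])), M])
# H equals the componentwise sum of H_s and {0} x mul H, i.e. (2.5):
print(same_subspace(B, np.hstack([Hs, zero_times_mul])))   # True
```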

Lemma 2.1

Let H be a linear relation from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). The following statements are equivalent:

(i) \((H^{**})_{\text{s}}\) is a bounded operator;

(ii) \((H^{*})_{\text{s}}\) is a bounded operator;

(iii) \(\text{dom}\,H^{**}\) is closed;

(iv) \(\text{dom}\,H^*\) is closed.

Proof

Observe that \(\text{dom}\,(H^{**})_{\text{s}}=\text{dom}\,H^{**}\) and \(\text{dom}\,(H^{*})_{\text{s}}=\text{dom}\,H^{*}\). The operators \((H^{**})_{\text{s}}\) and \((H^{*})_{\text{s}}\) are closed, so by the closed graph theorem each of them is bounded if and only if its domain is closed; moreover, the domains \(\text{dom}\,H^{**}\) and \(\text{dom}\,H^{*}\) are simultaneously closed. \(\square \)

Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Then one can form a sum of \(H_1\) and \(H_2\) in different ways. The componentwise sum \(H_1 \, \widehat{+} \,H_2\) is defined as in (1.1), and the usual sum \(H_1+H_2\), as in the case of operators, is defined as in (1.2). The adjoints of these sums behave in the usual way. The following statements are straightforward to check.

Lemma 2.2

Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Then

$$\begin{aligned} (H_1 \, \widehat{+} \,H_2)^*=H_1^* \cap H_2^*, \quad H_1^*+H_2^* \subset (H_1+H_2)^*. \end{aligned}$$
(2.6)
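Both formulas in (2.6) can be tested in the finite-dimensional model. There the inclusion in (2.6) is automatically an equality, since all subspaces are closed and \((\text{dom}\,H_1 \cap \text{dom}\,H_2)^{\perp}=(\text{dom}\,H_1)^{\perp}+(\text{dom}\,H_2)^{\perp}\); strictness is an infinite-dimensional phenomenon. A sketch with our helper names:

```python
import numpy as np
from scipy.linalg import null_space

def adjoint(n, B):
    N = null_space(B.conj().T)          # (2.1): H* = J H^perp
    return np.vstack([N[n:], -N[:n]])

def usual_sum(n, B1, B2):
    k1 = B1.shape[1]
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1] + B2[n:] @ N[k1:]])

def intersect(A, B):
    kA = A.shape[1]
    N = null_space(np.hstack([A, -B]))
    return A @ N[:kA]

def same(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

rng = np.random.default_rng(0)
n = m = 3
B1 = rng.standard_normal((n + m, 2))
B2 = np.hstack([B1[:, :1], rng.standard_normal((n + m, 3))])  # overlapping graphs
# (H1 ^+ H2)* = H1* cap H2*:
print(same(adjoint(n, np.hstack([B1, B2])),
           intersect(adjoint(n, B1), adjoint(n, B2))))        # True
# H1* + H2* = (H1 + H2)*, an equality in this closed setting:
print(same(usual_sum(m, adjoint(n, B1), adjoint(n, B2)),
           adjoint(n, usual_sum(n, B1, B2))))                 # True
```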

The inclusion in (2.6) is actually an equality if \(H_2 \in \mathbf{B}({{\mathfrak{H}}},{{\mathfrak{K}}})\), the class of everywhere defined bounded linear operators from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\); in fact, there is equality under slightly more general circumstances.

Corollary 2.1

Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\) and assume that

$$\begin{aligned} \text{dom}\,H_1 \subset \text{dom}\,H_2, \quad \text{dom}\,H_2^{**}\, \text{ is } \text{ closed }, \quad \text{ and } \quad \overline{\text{mul}\,}H_2=\text{mul}\,H_2^{**}. \end{aligned}$$
(2.7)

Then

$$\begin{aligned} H_1^*+H_2^* = (H_1+H_2)^*. \end{aligned}$$
(2.8)

Proof

To see (2.8), let \(\{f,g\} \in (H_1+H_2)^*\). Then

$$\begin{aligned} (g,h)=(f,k_1)+(f,k_2) \quad \text{ for } \text{ all } \quad \{h,k_1+k_2\} \in H_1+H_2. \end{aligned}$$
(2.9)

In particular, with the choice \(h=0\), \(k_1=0\), and \(k_2\in \text{mul}\,H_2\) one concludes, by the assumption \(\overline{\text{mul}\,}H_2=\text{mul}\,H_2^{**}\), that

$$\begin{aligned} f \in (\text{mul}\,H_2)^\perp = (\text{mul}\,H_2^{**})^\perp =\overline{\text{dom}\,}H_2^*. \end{aligned}$$

Under the assumption that \( \text{dom}\,H_2^{**}\) is closed, one has that \(\overline{\text{dom}\,}H_2^*=\text{dom}\,H_2^*\); see Lemma 2.1. Thus one has \(f \in \text{dom}\,H_2^*\) and hence there exists \(\varphi \in{{\mathfrak{H}}}\) such that \(\{f,\varphi \} \in H_2^*\). Now let \(\{h,k_1\} \in H_1\). Then the assumption \(\text{dom}\,H_1 \subset \text{dom}\,H_2\) shows \(\{h,k_2\} \in H_2\) for some \(k_2 \in{{\mathfrak{K}}}\). Therefore \((f,k_2)=(\varphi ,h)\), and (2.9) gives

$$\begin{aligned} (g,h)=(f,k_1)+(\varphi , h) \quad \text{ or } \quad (g-\varphi ,h)=(f,k_1) \end{aligned}$$

for all \(\{h,k_1\} \in H_1\), so that \(\{f, g-\varphi \} \in H_1^*\). Hence \(\{f,g\} \in H_1^*+H_2^*\). This establishes the inclusion \((H_1+H_2)^* \subset H_1^*+H_2^*\). \(\square \)

The equality in (2.8) has recently received attention in [17, 18, 19, 20, 21, 22, 23, 24]. The results in the present paper are closely related and can be used in these considerations; cf. Proposition 9.1.

3 Rows and columns

Rows and columns are the analogs of the componentwise sum and the usual sum for a pair of relations in (1.1) and (1.2).

Rows. Let \({{\mathfrak{H}}}_1\), \({{\mathfrak{H}}}_2\), and \({{\mathfrak{K}}}\) be Hilbert spaces. Let \(R_1\) be a linear relation from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and let \(R_2\) be a linear relation from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\). Then the row \((R_1\,;\, R_2)\) of \(R_1\) and \(R_2\), as a linear relation from \({{\mathfrak{H}}}_1 \oplus{{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\), is defined by

$$\begin{aligned} (R_1\,;\,R_2) := \left\{ \left\{ \begin{pmatrix} h_1 \\ h_2 \end{pmatrix}, k_1+k_2 \right\} :\, \{h_1,k_1\} \in R_1, \,\, \{h_2,k_2\} \in R_2 \,\right\} . \end{aligned}$$
(3.1)

Observe that

$$\begin{aligned} \text{dom}\,(R_1 \,;\, R_2)&=\text{dom}\,R_1 \times \text{dom}\,R_2,\\ \text{ker}\,(R_1\,;\,R_2)&=\{ h_1 \oplus h_2:\, \{h_1,k_1\} \in R_1, \,\{h_2,-k_1\} \in R_2\}, \\ \text{ran}\,(R_1\,;\,R_2)&=\text{ran}\,R_1 + \text{ran}\,R_2, \\ \text{mul}\,(R_1\,;\,R_2)&=\text{mul}\,R_1 + \text{mul}\,R_2. \end{aligned}$$
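In the finite-dimensional model a row is obtained by embedding the two graphs over the orthogonal sum of the domain spaces and adding the range components; the formulas for \(\text{dom}\) and \(\text{mul}\) above are then visible in the generators. A short sketch (helper name ours):

```python
import numpy as np

def row(n1, n2, B1, B2):
    """Row (3.1): R1 from C^n1 to C^m and R2 from C^n2 to C^m, given by
    graph-spanning matrices B1 and B2, acting over C^n1 (+) C^n2."""
    k1, k2 = B1.shape[1], B2.shape[1]
    top = np.block([[B1[:n1], np.zeros((n1, k2))],
                    [np.zeros((n2, k1)), B2[:n2]]])
    return np.vstack([top, np.hstack([B1[n1:], B2[n2:]])])

# R1 = identity graph on C^1 and R2 = {0} x C^1 (purely multivalued):
B1 = np.array([[1.], [1.]])
B2 = np.array([[0.], [1.]])
print(row(1, 1, B1, B2))
# Generators {(h1, 0), h1} and {(0, 0), 1}: dom = dom R1 x dom R2 and
# mul = mul R1 + mul R2, matching the formulas above.
```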

Notice that the definition (3.1) associates to a row \((R_1\,;\,R_2)\) a unique linear relation from \({{\mathfrak{H}}}_1 \oplus{{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\). However, the same relation can clearly be generated by different choices of the entries \(R_1\) and \(R_2\). In this sense rows \((R_1\,;\,R_2)\) which generate the same linear relation can be used to introduce an equivalence relation in the set of all rows \((R_1\,;\,R_2)\) of linear relations \(R_1\) from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and \(R_2\) from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\).

The row of \(R_1\) and \(R_2\) resembles a componentwise sum of linear relations once the domain spaces of \(R_1\) and \(R_2\) are combined orthogonally in the above way.

The linear relations from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\) that can be written as a row of linear relations will now be characterized. Let R be a linear relation from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\). By identifying \({\mathfrak{H}}_1 \oplus \{0\}\) with \({\mathfrak{H}}_1\) and \(\{0\} \oplus {\mathfrak{H}}_2\) with \({\mathfrak{H}}_2\), one obtains that \(R{\upharpoonright }\,_{{\mathfrak{H}}_1}\) and \(R{\upharpoonright }\,_{{\mathfrak{H}}_2}\) are linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Hence the linear relation R induces a row:

$$\begin{aligned} (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} ), \end{aligned}$$
(3.2)

and it is clear that \((R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} ) \subset R\).

Lemma 3.1

Let R be a linear relation from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\). Then the following statements are equivalent:

(i) R is a row;

(ii) \(\text{dom}\,R=(\text{dom}\,R) \cap {\mathfrak{H}}_1 \oplus (\text{dom}\,R) \cap {\mathfrak{H}}_2\);

(iii) \(R = (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} )\).

Proof

(i) \(\Rightarrow \) (ii) This is clear from (3.1).

(ii) \(\Rightarrow \) (iii) It suffices to show that \(R \subset (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} )\). Thus let

$$\begin{aligned} \left\{ \begin{pmatrix} h_1 \\ h_2 \end{pmatrix}, k_1+k_2 \right\} \in R. \end{aligned}$$

By assumption (ii), both \(h_1\) and \(h_2\) belong to \(\text{dom}\,R\). Hence there exist elements \(\ell _1, \ell _2 \in {\mathfrak{K}}\) such that

$$\begin{aligned} \left\{ \begin{pmatrix} h_1 \\ 0\end{pmatrix}, \ell _1 \right\} \in R, \quad \left\{ \begin{pmatrix} 0 \\ h_2 \end{pmatrix}, \ell _2 \right\} \in R, \end{aligned}$$

and \(k_1+k_2-\ell _1-\ell _2 \in \text{mul}\,R\). It is clear that elements of the form \(\{0,\varphi \} \in R\) belong to both \(R{\upharpoonright }\,_{{\mathfrak{H}}_1}\) and \(R{\upharpoonright }\,_{{\mathfrak{H}}_2}\). In particular, \(\{h_1, \ell _1 + (k_1+k_2-\ell _1-\ell _2)\} \in R{\upharpoonright }\,_{{\mathfrak{H}}_1}\) and \(\{h_2, \ell _2\} \in R{\upharpoonright }\,_{{\mathfrak{H}}_2}\), so that the above element belongs to \((R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2})\). This completes the argument.

(iii) \(\Rightarrow \) (i) This implication is clear. \(\square \)

Moreover, if \(R_1'\) is a linear relation from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and \(R_2'\) is a linear relation from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), such that \(R_1 \subset R_1'\) and \(R_2 \subset R_2'\), then by (3.1) it is clear that the inclusions are preserved in the sense of the row

$$\begin{aligned} (R_1 \,; \, R_2 ) \subset (R_1' \,;\, R_2' ). \end{aligned}$$

Columns. Now let \({\mathfrak{H}}\), \({\mathfrak{K}}_1\), and \({\mathfrak{K}}_2\) be Hilbert spaces. Let \(C_1\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and let \(C_2\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\). Then the column \(\text{col}\,(C_1\,;\,C_2)\) of \(C_1\) and \(C_2\), as a linear relation from \({\mathfrak{H}}\) to the orthogonal sum \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\), is defined by

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} := \left\{ \left\{ h, \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} \right\} :\, \{h,k_1\} \in C_1, \,\, \{h,k_2\} \in C_2 \,\right\} . \end{aligned}$$
(3.3)

Observe that

$$\begin{aligned} \text{dom}\,\text{col}\,(C_1\,;\,C_2)&=\text{dom}\,C_1 \cap \text{dom}\,C_2,\\ \text{ker}\,\text{col}\,(C_1\,;\,C_2)&=\text{ker}\,C_1 \cap \text{ker}\,C_2, \\ \text{ran}\,\text{col}\,(C_1\,;\,C_2)&=\{ k_1 \oplus k_2:\, \{h,k_1\} \in C_1, \,\{h,k_2\} \in C_2\}, \\ \text{mul}\,\text{col}\,(C_1\,;\,C_2)&=\text{mul}\,C_1 \times \text{mul}\,C_2. \end{aligned}$$
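In the finite-dimensional model a column is computed by matching the domain components of the two graphs, which makes the formula \(\text{dom}\,\text{col}\,(C_1\,;\,C_2)=\text{dom}\,C_1 \cap \text{dom}\,C_2\) directly visible. A sketch under the same representation as before (helper name ours):

```python
import numpy as np
from scipy.linalg import null_space

def column(n, B1, B2):
    """Column (3.3): pairs {h, (k1, k2)} with {h, k1} in C1, {h, k2} in C2."""
    k1 = B1.shape[1]
    # parameters (u, v) producing the same domain element h
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1], B2[n:] @ N[k1:]])

# C1 = identity graph on C^1 and C2 = {0} x C^1, so dom C1 cap dom C2 = {0}:
B1 = np.array([[1.], [1.]])
B2 = np.array([[0.], [1.]])
print(np.abs(column(1, B1, B2)).round(3))
# Only {0, (0, k2)} survives: dom col(C1;C2) = {0}, while
# mul col(C1;C2) = mul C1 x mul C2 = {0} x C, as in the formulas above.
```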

Again the definition (3.3) associates to a column \(\text{col}\,(C_1\,;\,C_2)\) a unique linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) and it determines an equivalence relation in the set of all columns \(\text{col}\,(C_1\,;\,C_2)\) of linear relations \(C_1\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and \(C_2\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\).

The column of \(C_1\) and \(C_2\) resembles a usual sum of linear relations once the range spaces of \(C_1\) and \(C_2\) are combined orthogonally in the above way.

The linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) that can be written as a column of linear relations will now be characterized. Let C be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\). The orthogonal projections from \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) to \({\mathfrak{K}}_1\) and \({\mathfrak{K}}_2\) are denoted by \(P_1\) and \(P_2\), respectively. Then \(P_1C\) and \(P_2C\) are linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Hence the linear relation C induces a column:

$$\begin{aligned} \begin{pmatrix} P_1 C \\ P_2 C \end{pmatrix} = \left\{ \left\{ h, \begin{pmatrix} P_1 k \\ P_2 k \end{pmatrix} \right\} :\, \{h,k\} \in C \,\right\} , \end{aligned}$$
(3.4)

and it is clear that \(C \subset \text{col}\,(P_1 C ; P_2 C)\).

Lemma 3.2

Let C be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\). Then the following statements are equivalent:

(i) C is a column;

(ii) \(\text{mul}\,C=(\text{mul}\,C) \cap {\mathfrak{K}}_1 \oplus (\text{mul}\,C) \cap {\mathfrak{K}}_2\);

(iii) \(C = \text{col}\,(P_1 C \,;\, P_2 C)\).

Proof

(i) \(\Rightarrow \) (ii) This is clear from (3.3).

(ii) \(\Rightarrow \) (iii) It suffices to show that \(\text{col}\,(P_1 C ; P_2 C) \subset C\). Thus let

$$\begin{aligned} \left\{ h, \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} \right\} \in \begin{pmatrix} P_1 C \\ P_2 C \end{pmatrix}. \end{aligned}$$

Then \(\{h,k_1\} \in P_1C\) and \(\{h,k_2\} \in P_2C\), so that there exist elements \(\alpha _1 \in {\mathfrak{K}}_1\) and \(\alpha _2 \in {\mathfrak{K}}_2\) for which

$$\begin{aligned} \left\{ h, \begin{pmatrix} k_1 \\ \alpha _2 \end{pmatrix} \right\} , \left\{ h, \begin{pmatrix} \alpha _1 \\ k_2 \end{pmatrix} \right\} \in C. \end{aligned}$$

Observe that

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix}\alpha _{1} -k_1 \\ k_2-\alpha _{2} \end{pmatrix} \right\} \in C, \end{aligned}$$

which by assumption leads to

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ k_2-\alpha _{2} \end{pmatrix} \right\} \in C. \end{aligned}$$

Thus, in particular, it follows that

$$\begin{aligned} \left\{ h , \begin{pmatrix} k_{1} \\ k_{2} \end{pmatrix} \right\} =\left\{ h , \begin{pmatrix} k_{1} \\ \alpha _{2} \end{pmatrix} \right\} +\left\{ 0, \begin{pmatrix} 0 \\ k_2-\alpha _{2} \end{pmatrix} \right\} \in C, \end{aligned}$$

which completes the argument.

(iii) \(\Rightarrow \) (i) This implication is clear. \(\square \)

Moreover, if \(C_1'\) is a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and \(C_2'\) is a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), such that \(C_1 \subset C_1'\) and \(C_2 \subset C_2'\), then by (3.3) it is clear that the inclusions are preserved in the sense of the column

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \subset \begin{pmatrix} C_1' \\ C_2' \end{pmatrix}. \end{aligned}$$
(3.5)

In this situation the left-hand side is called a restriction of the right-hand side. Note that any column can be reduced in the sense that \(C_1\) and \(C_2\) may be restricted to their common domain \({\mathfrak{D}}={\text{dom}}\,C_1 \cap {\text{dom}}\,C_2\):

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = \begin{pmatrix} C_1{\upharpoonright }\,_{\mathfrak{D}} \\ C_2{\upharpoonright }\,_{\mathfrak{D}} \end{pmatrix}. \end{aligned}$$

4 Adjoints of rows and columns

In terms of adjoints, the operations of rows and columns behave formally like componentwise sums and usual sums of linear relations; cf. Lemma 2.2. But recall that the definition of an adjoint relation depends on the Hilbert spaces in which the original relation is defined. Hence for rows and columns of a pair of linear relations one has to keep track of which Hilbert spaces are involved when taking adjoints. The formulas (4.1) and (4.2) in the next result can be found in [8, Proposition 2.1]; they are proved here for completeness.

Proposition 4.1

Let \(R_1\) and \(R_2\) be linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Then

$$\begin{aligned} (R_1\,;\,R_2)^* =\begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}. \end{aligned}$$
(4.1)

Moreover, let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Then

$$\begin{aligned} (C_1^* \,;\, C_2^*) \subset \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*. \end{aligned}$$
(4.2)

Moreover, there is equality in (4.2) if and only if \((\text{col}\,(C_1\,;\,C_2))^*\) is a row from \({\mathfrak{K}}_1\oplus {\mathfrak{K}}_2\) to \({\mathfrak{H}}\).

Proof

In order to prove the equality (4.1), let

$$\begin{aligned} \left\{ f , \begin{pmatrix} g_{1} \\ g_{2} \end{pmatrix} \right\} \in (R_1\,;\,R_2)^*. \end{aligned}$$

By (3.1) this is equivalent to

$$\begin{aligned} (k_1,f)+(k_2,f)=(h_1,g_1)+(h_2,g_2) \end{aligned}$$

for all \(\{h_1,k_1\} \in R_1\) and \(\{h_2,k_2\} \in R_2\), or to

$$\begin{aligned} \{f,g_1\} \in R_1^* \quad \text{ and } \quad \{f,g_2\} \in R_2^*, \end{aligned}$$

or, in other words,

$$\begin{aligned} \left\{ f , \begin{pmatrix} g_{1} \\ g_{2} \end{pmatrix} \right\} \in \begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}. \end{aligned}$$

This shows the equality in (4.1).

In order to prove (4.2), consider the element

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g_1+g_2 \right\} \in (C_1^*\,;\,C_2^*), \end{aligned}$$

where \(\{f_1,g_1\} \in C_1^*\) and \(\{f_2,g_2\} \in C_2^*\). For any element in \(\text{col}\,(C_1\,;\,C_2)\) of the form

$$\begin{aligned} \left\{ h , \begin{pmatrix} k_{1} \\ k_{2} \end{pmatrix} \right\} \quad \text{ with } \quad \{h,k_1 \} \in C_1, \,\, \{h,k_2\} \in C_2, \end{aligned}$$

it then follows that

$$\begin{aligned} (g_1,h)+(g_2,h)-(f_1,k_1)-(f_2,k_2)=0. \end{aligned}$$

Therefore one sees that

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g_1+g_2 \right\} \in \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*, \end{aligned}$$

which shows the inclusion (4.2).

If there is equality in (4.2), then \((\text{col}\,( C_1\,;\,C_2))^*\) is a row. Conversely, assume that \((\text{col}\,( C_1\,;\,C_2))^*\) is a row, i.e.,

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*= (R_1 \,;\, R_2), \end{aligned}$$
(4.3)

for some linear relations \(R_1\) and \(R_2\) from \({\mathfrak{K}}_1\) to \({\mathfrak{H}}\) and from \({\mathfrak{K}}_2\) to \({\mathfrak{H}}\), respectively. Then it follows from (4.1) that

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \subset \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^{**} = \begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}, \end{aligned}$$

which gives \(C_1 \subset R_1^*\) and \(C_2 \subset R_2^*\). Hence \(R_1 \subset R_1^{**} \subset C_1^*\) and \(R_2 \subset R_2^{**} \subset C_2^*\). Taking into account (4.2) and (4.3), this leads to

$$\begin{aligned} (C_1^* \,;\, C_2^*) \subset (R_1\,;\,R_2) \subset (C_1^* \,;\, C_2^*), \end{aligned}$$

which gives equality in (4.2). \(\square \)
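Formula (4.1) involves no closures, so it can be confirmed numerically in the finite-dimensional model, where the adjoint (2.1), the row (3.1), and the column (3.3) are all directly computable. In that model equality also holds in (4.2), since every subspace is closed; a genuinely strict inclusion requires infinite dimensions (see the example at the end of this section). A sketch with our helper names:

```python
import numpy as np
from scipy.linalg import null_space

def adjoint(n, B):
    N = null_space(B.conj().T)          # (2.1): H* = J H^perp
    return np.vstack([N[n:], -N[:n]])

def row(n1, n2, B1, B2):
    k1, k2 = B1.shape[1], B2.shape[1]
    top = np.block([[B1[:n1], np.zeros((n1, k2))],
                    [np.zeros((n2, k1)), B2[:n2]]])
    return np.vstack([top, np.hstack([B1[n1:], B2[n2:]])])

def column(n, B1, B2):
    k1 = B1.shape[1]
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1], B2[n:] @ N[k1:]])

def same(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

rng = np.random.default_rng(1)
n1, n2, m = 2, 2, 3
R1 = rng.standard_normal((n1 + m, 3))                # R1 from C^2 to C^3
R2 = rng.standard_normal((n2 + m, 2))                # R2 from C^2 to C^3
lhs = adjoint(n1 + n2, row(n1, n2, R1, R2))          # (R1;R2)*
rhs = column(m, adjoint(n1, R1), adjoint(n2, R2))    # col(R1*;R2*)
print(same(lhs, rhs))   # True: formula (4.1)
```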

In [13] the term formal adjoint of a row (of a column) is used for the right-hand side of (4.1) (respectively, for the left-hand side of (4.2)). Although the case of equality in (4.2) has been characterized, it is useful to have some concrete sufficient conditions; cf. (2.7) in Corollary 2.1.

Lemma 4.1

Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Assume that \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\), and that \(\text{dom}\,C_2^{**}\) is closed. Then

$$\begin{aligned} (C_1^* \,;\, C_2^*) =\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*. \end{aligned}$$
(4.4)

Proof

It suffices to show that the right-hand side of (4.2) is contained in the left-hand side under the given assumptions. To see this inclusion, let

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g \right\} \in \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*. \end{aligned}$$

This means that

$$\begin{aligned} (g,h)=(f_1,k_1)+(f_2,k_2) \quad \text{ for } \text{ all } \quad \left\{ h, \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} \right\} \in \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}. \end{aligned}$$
(4.5)

The choice \(h=0\), \(k_1=0\), and \(k_2\in \text{mul}\,C_2\) together with \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\) leads to

$$\begin{aligned} f_2 \in (\text{mul}\,C_2)^\perp = (\text{mul}\,C_2^{**})^\perp =\overline{\text{dom}\,}C_2^*. \end{aligned}$$

By assumption \(\text{dom}\,C_2^{**}\) is closed and thus also \(\text{dom}\,C_2^{*}\) is closed; see Lemma 2.1. One concludes that, in fact, \(f_2 \in \text{dom}\,C_2^*\). Thus, there exists an element \(\varphi \) such that \(\{f_2, \varphi \} \in C_2^*\). This means that

$$\begin{aligned} (f_2, k_2)=(\varphi , h) \end{aligned}$$
(4.6)

holds for all \(\{h,k_2\}\in C_2\). Now by combining (4.5), (4.6), and the condition \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), one obtains

$$\begin{aligned} (g-\varphi ,h)=(f_1,k_1) \quad \text{ for } \text{ all } \quad \{h,k_1\} \in C_1. \end{aligned}$$

Thus \(\{f_1, g-\varphi \} \in C_1^*\) and since \(\{f_2, \varphi \} \in C_2^*\), one therefore obtains

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g \right\} = \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g-\varphi +\varphi \right\} \in (C_1^*\,;\,C_2^*). \end{aligned}$$

This completes the proof. \(\square \)

The first statement in the following corollary is clear by itself, but it is an automatic consequence of the above reasoning as well.

Corollary 4.1

(a) If the linear relations \(C_1\) and \(C_2\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively, are closed, then the column \(\text{col}\,(C_1\,;\,C_2)\) is closed.

(b) If the linear relations \(R_1\) and \(R_2\) from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively, are closed, while \(\text{dom}\,R_1^* \subset \text{dom}\,R_2^*\) and \(\text{dom}\,R_2^*\) is closed, then the row \((R_1\,;\,R_2)\) is closed.

Proof

(a) Apply (4.1) in Proposition 4.1 with \(R_1=C_1^*\) and \(R_2=C_2^*\). Then one sees that

$$\begin{aligned} \begin{pmatrix} C_1^{**} \\ C_2^{**} \end{pmatrix} = (C_1^*\,;\,C_2^*)^*. \end{aligned}$$

This implies that the column \(\text{col}\,( C_1^{**} \,;\, C_2^{**})\) is closed; in particular, the column of closed linear relations \(C_1\) and \(C_2\) is closed.

(b) To see this, one applies Lemma 4.1 with \(C_1=R_1^*\) and \(C_2=R_2^*\).

\(\square \)

A repeated application of Proposition 4.1 and Lemma 4.1 leads to the following straightforward results.

Corollary 4.2

(a) Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Then

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^{**} \subset \begin{pmatrix} C_1^{**} \\ C_2^{**} \end{pmatrix}. \end{aligned}$$
(4.7)

If, in addition, \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\), and \(\text{dom}\,C_2^{**}\) is closed, then equality holds in (4.7).

(b) Let \(R_1\) and \(R_2\) be linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Then

$$\begin{aligned} (R_1^{**}\,;\,R_2^{**}) \subset (R_1\,;\,R_2)^{**}. \end{aligned}$$
(4.8)

If, in addition, \(\text{dom}\,R_1^* \subset \text{dom}\,R_2^*\) and \(\text{dom}\,R_2^*\) is closed, then equality holds in (4.8).

There are some useful special cases for equality in (4.4).

Corollary 4.3

Let \(\text{dom}\,C_1 \subset \text{dom}\,C_2\) and let \(C_2\) be densely defined and bounded (i.e., \(\Vert g\Vert \le c \Vert f\Vert \) for some constant \(c \ge 0\) and all \(\{f,g\} \in C_2\)). Then \(C_2\) is an operator and (4.4) holds.

Proof

Since \(C_2\) is bounded, it is an operator and, by assumption, it is densely defined. Then \(C_2^{**} \in \mathbf{B}({\mathfrak{H}}, {\mathfrak{K}}_2)\) and hence \(C_2^{*} \in \mathbf{B}({\mathfrak{K}}_2, {\mathfrak{H}})\), \(\text{mul}\,C_2^{**}=\text{mul}\,C_2=\{0\}\), and \(\text{dom}\,C_2^{**}\) is closed. \(\square \)

Corollary 4.4

Let \(C_1\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and let \(C_2= \mathfrak{M}\times \mathfrak{N}\) be a singular relation, where \(\mathfrak{M}\) is a linear subspace of \({\mathfrak{H}}\) and \(\mathfrak{N}\) is a closed linear subspace of \({\mathfrak{K}}_2\). Assume that \(\text{dom}\,C_1 \subset \mathfrak{M}\). Then

$$\begin{aligned} \begin{pmatrix} C_1 \\ {\mathfrak{M}}\times {\mathfrak{N}}\end{pmatrix}^* =( C_1^* \,;\, {\mathfrak{N}}^\perp \times {\text{mul}}\,C_1^{*} ). \end{aligned}$$

Proof

By the definition in (3.3) one observes that

$$\begin{aligned} \begin{pmatrix} C_1 \\ \mathfrak{M}\times \mathfrak{N}\end{pmatrix} = \begin{pmatrix} C_1 \\ \text{dom}\,C_1 \times \mathfrak{N}\end{pmatrix}, \end{aligned}$$

and note that \(({\text{dom}}\,C_1 \times {\mathfrak{N}})^*={\mathfrak{N}}^\perp \times {\text{mul}}\,C_1^*\). Now apply Lemma 4.1. \(\square \)

Observe that in Corollary 4.4 one has \({\mathfrak{M}}^\perp ={\text{mul}}\,C_1^{*}\) if and only if \(\overline{\text{dom}\,}C_1=\overline{\mathfrak{M}}\). However, under the condition \(\text{dom}\,C_1 \subset \mathfrak{M}\) also the following equality holds:

$$\begin{aligned} \begin{pmatrix} C_1 \\ \mathfrak{M}\times \mathfrak{N}\end{pmatrix}^* = (C_1^* \,;\, \mathfrak{N}^\perp \times \mathfrak{M}^\perp ). \end{aligned}$$

Indeed, \(\text{dom}\,C_1 \subset \mathfrak{M}\) implies \({\mathfrak{M}}^\perp \subset {\text{mul}}\,C_1^{*}\) and hence the inclusion

$$\begin{aligned} {\mathfrak{N}}^\perp \times {\mathfrak{M}}^\perp \subset {\mathfrak{N}}^\perp \times {\text{mul}}\,C_1^{*} \end{aligned}$$

can be strict. However, \({\text{mul}}\,(C_1^* \,;\, {\mathfrak{N}}^\perp \times {\mathfrak{M}}^\perp ) ={\text{mul}}\,C_1^{*}+{\mathfrak{M}}^\perp ={\text{mul}}\,C_1^{*}\), and thus these two different rows generate the same linear relation.

There are simple examples where a strict inclusion in (4.2) can occur. For instance, if \(B_1,B_2\in \mathbf{B}({\mathfrak{H}})\) are two positive operators such that

$$\begin{aligned} \text{ran}\,B_1\cap \text{ran}\,B_2=\{0\}, \quad \text{ker}\,B_1=\text{ker}\,B_2=\{0\}, \end{aligned}$$

and one takes \(C_1=B_1^{-1}\) and \(C_2=B_2^{-1}\), then the inclusion in (4.2) is strict. Namely, \(\text{dom}\,C_1\cap \text{dom}\,C_2=\{0\}\) and hence the adjoint of \(\text{col}\,(C_1\,;\,C_2)\) has multivalued part equal to \({\mathfrak{H}}\), while \((C_1^*\,;\,C_2^*)=(B_1^{-1}\,;\,B_2^{-1})\) is an operator. Note that such operators \(B_1\) and \(B_2\) exist only when \({\mathfrak{H}}\) is infinite-dimensional: their ranges are dense, so in finite dimensions they would be all of \({\mathfrak{H}}\).

5 Block relations

Let each of the Hilbert spaces \({\mathfrak{H}}\) and \({\mathfrak{K}}\) be decomposed into two orthogonal components

$$\begin{aligned} {\mathfrak{H}}={\mathfrak{H}}_{1} \oplus {\mathfrak{H}}_{2} \quad \text{ and } \quad {\mathfrak{K}}={\mathfrak{K}}_{1} \oplus {\mathfrak{K}}_{2}, \end{aligned}$$

and let

$$\begin{aligned} E_{ij} : {\mathfrak{H}}_{j} \rightarrow {\mathfrak{K}}_{i}, \quad i,j=1,2, \end{aligned}$$

be linear relations; these four relations form a \(2 \times 2\) block of relations

$$\begin{aligned} \begin{bmatrix} E_{11} &\quad E_{12} \\ E_{21} &\quad E_{22} \end{bmatrix}, \end{aligned}$$

abbreviated by \([E_{ij}]\). The set of all such blocks is denoted by \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Every block of relations \([E_{ij}]\) in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) leads to a row of columns or, alternatively, a column of rows. This way every block of relations \([E_{ij}]\) generates a unique linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) as follows from the next lemma.

Lemma 5.1

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Then

$$\begin{aligned} \begin{pmatrix} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} \end{pmatrix} = \begin{pmatrix} ( E_{11} \,;\, E_{12} ) \\ (E_{21} \,; \,E_{22}) \end{pmatrix}. \end{aligned}$$
(5.1)

Proof

By the definition of a column in (3.3) a typical element in the right-hand side of (5.1) is of the form

$$\begin{aligned} \left\{ f, \begin{pmatrix} \gamma _{1} \\ \gamma _{2} \end{pmatrix} \right\} \quad \text{ with } \quad \begin{matrix} \{f, \gamma _{1}\} \in (E_{11}\,;\, E_{12}), \\ \{f, \gamma _{2}\} \in (E_{21} \,;\,E_{22} ). \end{matrix} \end{aligned}$$

Recall that by the definition of a row in (3.1) one has \(\{f, \gamma _{1}\} \in (E_{11} \,;\, E_{12})\) if and only if

$$\begin{aligned} \{f, \gamma _1\} =\left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \alpha _1+ \beta _1 \right\} \quad \text{ with } \quad \{f_1, \alpha _1\} \in E_{11} \quad \text{ and } \quad \{f_2, \beta _1 \} \in E_{12}, \end{aligned}$$

and, similarly, \(\{f, \gamma _{2}\} \in (E_{21} \,;\, E_{22})\) if and only if

$$\begin{aligned} \{f, \gamma _2\} =\left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \alpha _2+ \beta _2 \right\} \quad \text{ with } \quad \{f_1, \alpha _2\} \in E_{21} \quad \text{ and } \quad \{f_2, \beta _2 \} \in E_{22}. \end{aligned}$$

Thus, in fact, a typical element in the right-hand side of (5.1) is of the form

$$\begin{aligned} \left\{ f, \begin{pmatrix} \gamma _{1} \\ \gamma _{2} \end{pmatrix} \right\} =\left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} \alpha _1+ \beta _1 \\ \alpha _2 +\beta _2 \end{pmatrix} \right\} \quad \text{ with } \quad \begin{matrix} \{f_1, \alpha _1\} \in E_{11}, \, \{f_2, \beta _1 \} \in E_{12}, \\ \{f_1, \alpha _2\} \in E_{21}, \, \{f_2, \beta _2 \} \in E_{22}, \end{matrix} \end{aligned}$$

and the last four conditions can be written in the equivalent form

$$\begin{aligned} \left\{ f_1, \begin{pmatrix} \alpha _{1} \\ \alpha _{2} \end{pmatrix} \right\} \in \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}, \quad \left\{ f_2, \begin{pmatrix} \beta _{1} \\ \beta _{2} \end{pmatrix} \right\} \in \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}. \end{aligned}$$

Thus, by the definition of a row in (3.1), a typical element in the right-hand side of (5.1) is a typical element in the left-hand side of (5.1). \(\square \)

Definition 5.1

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Then the linear relation E from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by the block is defined by (5.1) and denoted by

$$\begin{aligned} E= \begin{pmatrix} E_{11} &\quad E_{12} \\ E_{21} &\quad E_{22} \end{pmatrix}. \end{aligned}$$

The relation E is called the block relation corresponding to the block \([E_{ij}]\).

The proof of Lemma 5.1 shows that the unique linear relation E generated by the block \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) can be expressed equivalently by the formula

$$\begin{aligned} E= \left\{ \left\{ \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, \begin{pmatrix} \alpha _{1} + \beta _{1} \\ \alpha _{2} + \beta _{2} \end{pmatrix} \right\} :\, \begin{matrix} \{f_{1}, \alpha _{1}\} \in E_{11}, \,\{f_{2}, \beta _{1}\} \in E_{12} \\ \{f_{1}, \alpha _{2}\} \in E_{21}, \,\{f_{2}, \beta _{2}\} \in E_{22} \end{matrix} \right\} . \end{aligned}$$
(5.2)

This is the direct way to think of the block \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) as a linear relation E from \({\mathfrak{H}}\) to \({\mathfrak{K}}\). It agrees with the well-known definition in the case that each relation \(E_{ij}\) is (the graph of) an everywhere defined bounded linear operator from \({\mathfrak{H}}_j\) to \({\mathfrak{K}}_i\).
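In the finite-dimensional model the block relation of Definition 5.1, equivalently formula (5.2), is obtained by composing the row and column constructions, and the identity (5.1) of Lemma 5.1 can be confirmed numerically. A sketch with our helper names:

```python
import numpy as np
from scipy.linalg import null_space

def row(n1, n2, B1, B2):
    k1, k2 = B1.shape[1], B2.shape[1]
    top = np.block([[B1[:n1], np.zeros((n1, k2))],
                    [np.zeros((n2, k1)), B2[:n2]]])
    return np.vstack([top, np.hstack([B1[n1:], B2[n2:]])])

def column(n, B1, B2):
    k1 = B1.shape[1]
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1], B2[n:] @ N[k1:]])

def same(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

rng = np.random.default_rng(3)
n1 = n2 = m1 = m2 = 2
# E_ij is a relation from H_j = C^2 to K_i = C^2, stored as a 4 x 2 graph:
E11, E12, E21, E22 = (rng.standard_normal((4, 2)) for _ in range(4))
row_of_columns = row(n1, n2, column(n1, E11, E21), column(n2, E12, E22))
column_of_rows = column(n1 + n2, row(n1, n2, E11, E12), row(n1, n2, E21, E22))
print(same(row_of_columns, column_of_rows))   # True: the identity (5.1)
```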

Let \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}}, {\mathfrak{K}})\) and let E and F be the block relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by them. The blocks are said to satisfy the inclusion \([E_{ij}] \subset [F_{ij}]\) if \(E_{ij} \subset F_{ij}\) for all i, j. In particular,

$$\begin{aligned}{}[E_{ij}] = [F_{ij}] \quad \Longleftrightarrow \quad E_{ij} = F_{ij} \quad \text{for all } i,j=1,2. \end{aligned}$$

It clearly follows from (5.2) that

$$\begin{aligned}{}[E_{ij}] \subset [F_{ij}] \quad \Rightarrow \quad E \subset F;\qquad [E_{ij}] = [F_{ij}] \quad \Rightarrow \quad E = F. \end{aligned}$$
(5.3)

The reverse implications in (5.3) do not hold, since different blocks \([E_{ij}]\) in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) can generate the same linear relation E. Definition 5.1 leads to linear relations from the Hilbert space \({\mathfrak{H}}={\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to the Hilbert space \({\mathfrak{K}}={\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) with distinguishing properties.

Lemma 5.2

Let \({\mathfrak{H}}\) and \({\mathfrak{K}}\) be Hilbert spaces, let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\), and let E be the block relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it. Then the following statements hold:

$$\begin{aligned} \text{dom}\,E=(\text{dom}\,E) \cap {\mathfrak{H}}_1 \oplus (\text{dom}\,E) \cap {\mathfrak{H}}_2, \end{aligned}$$
(5.4)
$$\begin{aligned} \text{mul}\,E=(\text{mul}\,E) \cap {\mathfrak{K}}_1 \oplus (\text{mul}\,E) \cap {\mathfrak{K}}_2. \end{aligned}$$
(5.5)

Proof

To prove (5.4), it suffices to show that its left-hand side is contained in its right-hand side. Observe that if \(f =f_1 \oplus f_2 \in \text{dom}\,E\), then there exist elements \(\alpha _i\) and \(\beta _i\), \(i=1,2\), such that

$$\begin{aligned} \{f_{1}, \alpha _{1}\} \in E_{11}, \,\{f_{2}, \beta _{1}\} \in E_{12}, \, \{f_{1}, \alpha _{2}\} \in E_{21}, \,\{f_{2}, \beta _{2}\} \in E_{22}. \end{aligned}$$

But then it follows immediately from (5.2) that

$$\begin{aligned} \left\{ \begin{pmatrix} f_{1} \\ 0 \end{pmatrix}, \begin{pmatrix} \alpha _{1} \\ \alpha _{2} \end{pmatrix} \right\} \in E, \quad \left\{ \begin{pmatrix} 0 \\ f_{2} \end{pmatrix}, \begin{pmatrix} \beta _{1} \\ \beta _{2} \end{pmatrix} \right\} \in E, \end{aligned}$$

in other words \(f_1, f_2 \in \text{dom}\,E\).

To prove (5.5), it suffices to show that its left-hand side is contained in its right-hand side. Observe that if \(g =g_1 \oplus g_2 \in \text{mul}\,E\), then there exist elements \(\alpha _i\) and \(\beta _i\) such that \(g_i=\alpha _i+\beta _i\), \(i=1,2\), and

$$\begin{aligned} \{0, \alpha _{1}\} \in E_{11}, \,\{0, \beta _{1}\} \in E_{12}, \, \{0, \alpha _{2}\} \in E_{21}, \,\{0, \beta _{2}\} \in E_{22}. \end{aligned}$$

But then it follows immediately from (5.2) that

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \alpha _{1} +\beta _1 \\ 0 \end{pmatrix} \right\} \in E, \quad \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ \alpha _2+\beta _{2} \end{pmatrix} \right\} \in E, \end{aligned}$$

in other words \(g_1, g_2 \in \text{mul}\,E\). \(\square \)

It will be shown that the properties (5.4) and (5.5) in Lemma 5.2 characterize the block relations in Definition 5.1. The arguments will be based on the following notions and observations.

Definition 5.2

Let E be a linear relation from \({\mathfrak{H}}={\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}={\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) and define the block \([\mathcal{E}_{ij}]\) by

$$\begin{aligned} \mathcal{E}_{ij}= P_i E{\upharpoonright }\,{\mathfrak{H}}_j, \end{aligned}$$
(5.6)

where \(P_{i}\) is the orthogonal projection from \({\mathfrak{K}}\) onto \({\mathfrak{K}}_{i}\).

Observe that Definition 5.2 agrees with the definitions in (3.2) and (3.4). It is clear that all relations \(\mathcal{E}_{ij}\) from \({\mathfrak{H}}_j\) to \({\mathfrak{K}}_i\) in (5.6) are linear, and that they satisfy

$$\begin{aligned} \text{dom}\,\mathcal{E}_{11}=\text{dom}\,\mathcal{E}_{21}, \quad \text{dom}\,\mathcal{E}_{12}=\text{dom}\,\mathcal{E}_{22}, \end{aligned}$$
(5.7)
$$\begin{aligned} \text{mul}\,\mathcal{E}_{11}=\text{mul}\,\mathcal{E}_{12}, \quad \text{mul}\,\mathcal{E}_{21}=\text{mul}\,\mathcal{E}_{22}. \end{aligned}$$
(5.8)

Thus (5.7) describes the domains of the columns of the block \([\mathcal{E}_{ij}]\), while (5.8) describes the multivalued parts of the rows of the block \([\mathcal{E}_{ij}]\). The block \([\mathcal{E}_{ij}]\) is uniquely determined by E; however, in general the linear relation \(\mathcal{E}\) generated by the block \([\mathcal{E}_{ij}]\) differs from the original relation E. This becomes clear from the next lemma, which also leads to the main characterization of all linear relations E that are generated by some \(2 \times 2\) block \([E_{ij}]\) of linear relations.

Lemma 5.3

Let E be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), let \([\mathcal{E}_{ij}]\) be the block induced by E in (5.6), and let \(\mathcal{E}\) be the linear relation generated by \([\mathcal{E}_{ij}]\). Then

(a) the identity for the domain (5.4) implies \(E \subset \mathcal{E}\);

(b) the identity for the multivalued part (5.5) implies \(E \supset \mathcal{E}\).

Proof

(a) In order to show that \(E \subset \mathcal{E}\), let

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in E. \end{aligned}$$
    (5.9)

In view of (5.4), there exist \(\alpha _{1}\in {\mathfrak{K}}_1\) and \(\alpha _{2}\in {\mathfrak{K}}_2\) such that

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0\end{pmatrix}, \begin{pmatrix} \alpha _{1} \\ \alpha _{2} \end{pmatrix} \right\} \in E, \end{aligned}$$

and then, with \(\beta _{1}=g_1-\alpha _{1}\in {\mathfrak{K}}_1\) and \(\beta _{2}=g_2-\alpha _{2} \in {\mathfrak{K}}_2\), it follows that

    $$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ f_2 \end{pmatrix}, \begin{pmatrix} \beta _{1} \\ \beta _{2} \end{pmatrix} \right\} \in E. \end{aligned}$$

    From (5.6) it is clear that

    $$\begin{aligned} \{f_1,\alpha _{1}\} \in \mathcal{E}_{11}, \quad \{f_1, \alpha _{2}\} \in \mathcal{E}_{21}, \quad \{f_2,\beta _{1}\} \in \mathcal{E}_{12}, \quad \{f_2, \beta _{2}\} \in \mathcal{E}_{22}, \end{aligned}$$

    which implies that the element in (5.9) belongs to \(\mathcal{E}\); see (5.2).

(b) In order to show that \(\mathcal{E}\subset E\), let

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in \mathcal{E}. \end{aligned}$$
    (5.10)

Hence, by definition, there exist \(\alpha _{1} \in {\mathfrak{K}}_1\), \(\alpha _2 \in {\mathfrak{K}}_2\), \(\beta _{1} \in {\mathfrak{K}}_1\), and \(\beta _2 \in {\mathfrak{K}}_2\), such that

$$\begin{aligned} g_1=\alpha _{1}+\beta _{1} \quad \text{ and } \quad g_2=\alpha _{2}+\beta _{2} \end{aligned}$$
(5.11)

and

$$\begin{aligned} \{f_1, \alpha _{1}\} \in \mathcal{E}_{11}, \quad \{f_1, \alpha _{2}\} \in \mathcal{E}_{21}, \quad \{f_2, \beta _{1}\} \in \mathcal{E}_{12}, \quad \{f_2, \beta _{2}\} \in \mathcal{E}_{22}. \end{aligned}$$

Again, by definition, there exist \(\alpha _{2}', \alpha _{1}'\) such that

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1} \\ \alpha _{2}' \end{pmatrix} \right\} \in E \quad \text{ and } \quad \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1}' \\ \alpha _{2} \end{pmatrix} \right\} \in E, \end{aligned}$$

which implies that

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1}-\alpha _{1}' \\ \alpha _{2}'-\alpha _{2} \end{pmatrix} \right\} \in E. \end{aligned}$$

Hence, by the assumption (5.5), one has also

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1}-\alpha _{1}' \\ 0\end{pmatrix} \right\} \in E \quad \text{ and } \quad \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ \alpha _{2}'-\alpha _{2} \end{pmatrix} \right\} \in E. \end{aligned}$$

Thus, it follows that

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1} \\ \alpha _{2} \end{pmatrix} \right\} =\left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix} , \begin{pmatrix} \alpha _{1} \\ \alpha _{2}'\end{pmatrix} \right\} +\left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ \alpha _{2}-\alpha _{2}'\end{pmatrix} \right\} \in E. \end{aligned}$$
(5.12)

Likewise, with \(\beta _{2}'\) obtained in the same way, it follows that

$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ f_2 \end{pmatrix} , \begin{pmatrix} \beta _{1} \\ \beta _{2} \end{pmatrix} \right\} =\left\{ \begin{pmatrix} 0 \\ f_2 \end{pmatrix} , \begin{pmatrix} \beta _{1} \\ \beta _{2}' \end{pmatrix} \right\} + \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ \beta _{2}-\beta _{2}' \end{pmatrix} \right\} \in E. \end{aligned}$$
(5.13)

Combining (5.11), (5.12), and (5.13) one sees that the element in (5.10) belongs to the relation E. \(\square \)

As a consequence, the following result is an extension of the characterizations in Lemmas 3.1 and 3.2.

Theorem 5.1

Let E be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), let \([\mathcal{E}_{ij}]\) be the block induced by E in (5.6), and let \(\mathcal{E}\) be the linear relation generated by \([\mathcal{E}_{ij}]\). Then the following statements are equivalent:

(i) E is a linear relation generated by some block in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\);

(ii) (5.4) and (5.5) are satisfied;

(iii) \(E= \mathcal{E}\).

Proof

(i) \(\Rightarrow \) (ii) This implication is a consequence of Lemma 5.2.

(ii) \(\Rightarrow \) (iii) This implication is a consequence of Lemma 5.3.

(iii) \(\Rightarrow \) (i) This is clear. \(\square \)

Thus the identities (5.4) and (5.5) characterize the relations that are generated by blocks of relations, as in (5.1). Likewise, the identities (5.7) and (5.8) characterize the blocks that are induced by linear relations, as in (5.6).
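Theorem 5.1 can be illustrated in the finite-dimensional model: starting from a block-generated relation E, compute the induced block \([\mathcal{E}_{ij}]\) of (5.6) and check that it generates E again. The sketch below uses the subspace representation and helper names of the earlier snippets.

```python
import numpy as np
from scipy.linalg import null_space

def row(n1, n2, B1, B2):
    k1, k2 = B1.shape[1], B2.shape[1]
    top = np.block([[B1[:n1], np.zeros((n1, k2))],
                    [np.zeros((n2, k1)), B2[:n2]]])
    return np.vstack([top, np.hstack([B1[n1:], B2[n2:]])])

def column(n, B1, B2):
    k1 = B1.shape[1]
    N = null_space(np.hstack([B1[:n], -B2[:n]]))
    return np.vstack([B1[:n] @ N[:k1], B1[n:] @ N[:k1], B2[n:] @ N[k1:]])

def block(n1, n2, E11, E12, E21, E22):
    """Block relation of Definition 5.1: a row of two columns."""
    return row(n1, n2, column(n1, E11, E21), column(n2, E12, E22))

def induced_entry(n1, n2, m1, i, j, B):
    """The entry P_i E restricted to H_j, cf. (5.6)."""
    other = slice(n1, n1 + n2) if j == 1 else slice(0, n1)
    BN = B @ null_space(B[other])        # domain component confined to H_j
    dom = slice(0, n1) if j == 1 else slice(n1, n1 + n2)
    ran = slice(n1 + n2, n1 + n2 + m1) if i == 1 else slice(n1 + n2 + m1, None)
    return np.vstack([BN[dom], BN[ran]])

def same(A, B, tol=1e-10):
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(A) == r(B) == r(np.hstack([A, B]))

rng = np.random.default_rng(2)
n1 = n2 = m1 = m2 = 2
E11, E12, E21, E22 = (rng.standard_normal((4, 2)) for _ in range(4))
E = block(n1, n2, E11, E12, E21, E22)
F11, F12, F21, F22 = (induced_entry(n1, n2, m1, i, j, E)
                      for i in (1, 2) for j in (1, 2))
print(same(E, block(n1, n2, F11, F12, F21, F22)))   # True: (i) <=> (iii) in Theorem 5.1
```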

Lemma 5.4

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and let E be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it, as in (5.1). Let \([\mathcal{E}_{ij}]\) be the block induced by E as in (5.6) and let \(\mathcal{E}\) be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by \([\mathcal{E}_{ij}]\) as in (5.1). Then

(a) the identities for the domains

    $$\begin{aligned} \text{dom}\,E_{11}=\text{dom}\,E_{21} \quad \text{ and } \quad \text{dom}\,E_{12}=\text{dom}\,E_{22} \end{aligned}$$
    (5.14)

    imply \([E_{ij}] \subset [\mathcal{E}_{ij}]\);

(b) the identities for the multivalued parts

    $$\begin{aligned} \text{mul}\,E_{11}=\text{mul}\,E_{12} \quad \text{ and } \quad \text{mul}\,E_{21}=\text{mul}\,E_{22} \end{aligned}$$
    (5.15)

    imply \([E_{ij}] \supset [\mathcal{E}_{ij}] \).

Proof

(a) Assume that (5.14) holds. Let \(\{f_1,g_1\} \in E_{11}\). Then by assumption there exists \(g_2' \in {\mathfrak{K}}_2\) such that \(\{f_1,g_2'\} \in E_{21}\). Hence, one sees from (5.2) that

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2' \end{pmatrix} \right\} \in E, \end{aligned}$$

    which shows that \(\{f_1, g_1\} \in \mathcal{E}_{11}\). Thus \(E_{11} \subset \mathcal{E}_{11}\). The other inclusions follow in the same way.

(b) Assume that (5.15) holds. Let \(\{f_1,g_1\} \in \mathcal{E}_{11}\). Then there exists \(g_2 \in {\mathfrak{K}}_2\) such that

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in E, \end{aligned}$$

    which implies that \(g_1=g_1'+g_1''\) and \(g_2=g_2'+g_2''\) such that

    $$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1'+g_1'' \\ g_2'+g_2'' \end{pmatrix} \right\} \end{aligned}$$

    with \(\{f_1, g_1'\} \in E_{11}\), \(\{f_1, g_2'\} \in E_{21}\), \(\{0, g_1''\} \in E_{12}\), and \(\{0, g_2''\} \in E_{22}\). By assumption \(\{0, g_1''\} \in E_{11}\), which shows that \(\{f_1,g_1\} \in E_{11}\). Thus \(\mathcal{E}_{11} \subset E_{11}\). The other inclusions follow in the same way.

\(\square \)

Theorem 5.2

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and let E be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it. Let \([\mathcal{E}_{ij}]\) be the block induced by E and let \(\mathcal{E}\) be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by \([\mathcal{E}_{ij}]\). Then the following statements are equivalent:

(i) \(E_{ij}= P_i F{\upharpoonright }\,{\mathfrak{H}}_j\) for some linear relation F from \({\mathfrak{H}}\) to \({\mathfrak{K}}\);

(ii) (5.14) and (5.15) are satisfied;

(iii) \([E_{ij}]=[\mathcal{E}_{ij}]\).

Proof

(i) \(\Rightarrow \) (ii) Recall that (5.7) and (5.8) hold for any block defined as in (5.6).

(ii) \(\Rightarrow \) (iii) This implication is a consequence of Lemma 5.4.

(iii) \(\Rightarrow \) (i) This is clear. \(\square \)

For \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \(\lambda \in \mathbb{C}\) define the sum \([E_{ij}]+[F_{ij}]\) and the scalar multiplication \(\lambda [E_{ij}]\) via the corresponding operations on the entries:

$$\begin{aligned}{}[E_{ij}]+[F_{ij}]:=[E_{ij}+F_{ij}]; \qquad \lambda [E_{ij}]:=[\lambda E_{ij}], \quad i,j=1,2. \end{aligned}$$

When equipped with these operations the space \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) has a linear structure. On the other hand, the formula

$$\begin{aligned}{}[E_{ij}] \sim [{\widetilde{E} }_{ij}] \quad \Longleftrightarrow \quad E={\widetilde{E} } \end{aligned}$$

defines an equivalence relation in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). The sum and the scalar multiplication of blocks are compatible with the corresponding operations on the linear relations generated by them.

Lemma 5.5

Assume that \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}}, {\mathfrak{K}})\) and let \([G_{ij}]=[E_{ij}] + [F_{ij}]\) and \([H_{ij}]=\lambda [E_{ij}]\). Let E, F, G, and H be the linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by these blocks, respectively. Then

$$\begin{aligned} G=E+F, \quad H=\lambda E. \end{aligned}$$

Proof

Since \([G_{ij}]=[E_{ij} + F_{ij}]\), it follows from (5.2) that the linear relation G satisfies

$$\begin{aligned} G= \left\{ \left\{ \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, \begin{pmatrix} \alpha _{1} + \beta _{1} \\ \alpha _{2} + \beta _{2} \end{pmatrix} \right\} :\, \begin{matrix} \{f_{1}, \alpha _{1}\} \in E_{11}+F_{11}, \,\{f_{2}, \beta _{1}\} \in E_{12}+F_{12}, \\ \{f_{1}, \alpha _{2}\} \in E_{21}+F_{21}, \,\{f_{2}, \beta _{2}\} \in E_{22}+F_{22} \end{matrix} \right\} , \end{aligned}$$

so that, by the definition of the sum of relations, the linear relation G is the set of all

$$\begin{aligned} \left\{ \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, \begin{pmatrix} \alpha _{1} + \beta _{1} \\ \alpha _{2} + \beta _{2} \end{pmatrix} \right\}&= \left\{ \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, \begin{pmatrix} \alpha _{1}' + \alpha _1''+\beta _{1}' + \beta _1'' \\ \alpha _{2}' +\alpha _2'' + \beta _{2}' + \beta _2'' \end{pmatrix} \right\} \\&= \left\{ \begin{pmatrix} f_{1} \\ f_{2}\end{pmatrix}, \begin{pmatrix} \alpha _{1}' +\beta _{1}' + \alpha _1''+ \beta _1'' \\ \alpha _{2}' +\beta _{2}' +\alpha _2'' + \beta _2'' \end{pmatrix} \right\} , \end{aligned}$$

where

$$\begin{aligned} \begin{matrix} \{f_{1}, \alpha _{1}'\} \in E_{11}, \, \{f_1, \alpha _1''\} \in F_{11} ,\\ \{f_{1}, \alpha _{2}'\} \in E_{21}, \, \{f_1, \alpha _2''\} \in F_{21}, \end{matrix} \,\, \begin{matrix} \{f_{2}, \beta _{1}'\} \in E_{12}, \, \{f_2, \beta _1''\} \in F_{12}, \\ \{f_{2}, \beta _{2}'\} \in E_{22}, \, \{f_2, \beta _2''\} \in F_{22}. \end{matrix} \end{aligned}$$

Therefore it is now clear, by the definition of the sum of relations and the formula (5.2), that the linear relation G is equal to the sum \(E+F\). For the scalar multiplication the argument is similar. \(\square \)

6 Adjoints of block relations

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block of the form (5.1). Then the adjoints of the entries give rise to the following \(2 \times 2\) block of relations

$$\begin{aligned} \begin{bmatrix} E_{11}^* &\quad E_{21}^* \\ E_{12}^* &\quad E_{22}^* \end{bmatrix}, \end{aligned}$$
(6.1)

where

$$\begin{aligned} E_{ij}^{*} : {\mathfrak{K}}_{i} \rightarrow {\mathfrak{H}}_{j}, \quad i,j=1,2. \end{aligned}$$

The block relation from \({\mathfrak{K}}\) to \({\mathfrak{H}}\) corresponding to the \(2 \times 2\) block in (6.1) is denoted by

$$\begin{aligned} \begin{pmatrix} E_{11}^* &\quad E_{21}^* \\ E_{12}^* &\quad E_{22}^* \end{pmatrix}, \end{aligned}$$
(6.2)

cf. Definition 5.1. The relation in (6.2) is sometimes called the formal adjoint of the block relation generated by \([E_{ij}]\). In general, there is the following inclusion result.

Proposition 6.1

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block as in (5.1). Then

$$\begin{aligned} \begin{pmatrix} E_{11}^* &\quad E_{21}^* \\ E_{12}^* &\quad E_{22}^* \end{pmatrix} \subset \begin{pmatrix} E_{11} &\quad E_{12} \\ E_{21} &\quad E_{22} \end{pmatrix}^*. \end{aligned}$$
(6.3)

Proof

By Lemma 5.1 and Definition 5.1 the left-hand side of (6.3) may be written as

$$\begin{aligned} \begin{pmatrix} E_{11}^* &{}\quad E_{21}^* \\ E_{12}^* &{}\quad E_{22}^* \end{pmatrix} = \begin{pmatrix} \left( E_{11}^* \,;\, E_{21}^* \right) \\ \left( E_{12}^* \,;\, E_{22}^* \right) \end{pmatrix}. \end{aligned}$$

Now observe that

$$\begin{aligned} \left( E_{11}^* \,;\, E_{21}^* \right) \subset \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}^* \quad \text{ and } \quad \left( E_{12}^* \,;\, E_{22}^* \right) \subset \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}^*, \end{aligned}$$

due to Proposition 4.1. These two inclusions may be combined by (3.5), which gives

$$\begin{aligned} \begin{pmatrix} E_{11}^* &{}\quad E_{21}^* \\ E_{12}^* &{}\quad E_{22}^* \end{pmatrix} \subset \begin{pmatrix} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}^* \\ \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}^* \end{pmatrix}. \end{aligned}$$

Next observe that by Proposition 4.1

$$\begin{aligned} \begin{pmatrix} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}^* \\ \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}^* \end{pmatrix} = \begin{pmatrix} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}; \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} \end{pmatrix}^* = \begin{pmatrix} E_{11} &{}\quad E_{12} \\ E_{21} &{}\quad E_{22} \end{pmatrix}^*, \end{aligned}$$

from which the desired result follows. \(\square \)
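
The inclusion (6.3) can also be observed in the finite model of the earlier sketch, with the caveat that over GF(2) the standard bilinear form \(\langle u,v\rangle = \sum u_i v_i\) stands in for the inner product; the formal calculus of adjoints is the same. The sketch below (ad hoc names; row, col, and block as before) verifies (6.3) for all combinations of the three model relations and counts the cases, if any, in which the inclusion is strict.

```python
# Brute-force check of the inclusion (6.3) over GF(2), where the standard
# bilinear form <u,v> = sum u_i v_i stands in for the inner product.
# Helper names are ad hoc; row, col, block as in the sketch for Lemma 5.5.
from itertools import product

add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % 2

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

def block(E11, E12, E21, E22):
    return row(col(E11, E21), col(E12, E22))

def adjoint(E, dim_k, dim_h):
    """E*: all pairs {k, h} with <g, k> = <f, h> for every {f, g} in E."""
    vecs = lambda d: list(product((0, 1), repeat=d))
    return {(k, h) for k in vecs(dim_k) for h in vecs(dim_h)
            if all((dot(g, k) + dot(f, h)) % 2 == 0 for (f, g) in E)}

ID   = {((0,), (0,)), ((1,), (1,))}
ZERO = {((0,), (0,)), ((1,), (0,))}
MUL  = {((0,), (0,)), ((0,), (1,))}

star = lambda X: adjoint(X, 1, 1)                     # entrywise adjoints
strict = 0
for E11, E12, E21, E22 in product([ID, ZERO, MUL], repeat=4):
    formal = block(star(E11), star(E21), star(E12), star(E22))  # the block (6.1)
    full = adjoint(block(E11, E12, E21, E22), 2, 2)   # adjoint of the block
    assert formal <= full                             # inclusion (6.3)
    strict += formal < full
print(f"inclusion (6.3) holds in all 81 cases; strict in {strict} of them")
```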

Corollary 6.1

Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block as in (5.1). Then there is equality in (6.3) if and only if

$$\begin{aligned} \left( E_{11}^* \,;\, E_{21}^* \right) = \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}^* \quad \text{ and } \quad \left( E_{12}^* \,;\, E_{22}^* \right) = \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}^*. \end{aligned}$$

Corollary 6.1 combined with Proposition 4.1 gives the necessary and sufficient condition stated in item (a) of the next corollary, while Lemma 4.1 yields the sufficient conditions stated in item (b).

Corollary 6.2

Let \([E_{ij}]\) be a block as in (5.1). Assume that, up to interchange of A and B, the entries of each column \(\text{col}\,(A;B)\) in \([E_{ij}]\) satisfy one of the following:

  1. (a)

    the criterion in Proposition 4.1, i.e., the adjoint \(\text{col}\,(A;B)^*\) is a row;

  2. (b)

    the column \(\text{col}\,(A;B)\) satisfies the (sufficient) conditions in Lemma 4.1.

Then there is equality in (6.3).

7 Multiplication of rows and columns

The product of linear relations is well defined. If the linear relations are given as rows, columns, or blocks, then their components may also be multiplied entrywise. In general these formal products differ from the product in the relation sense. This will be considered here for rows and columns.
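
In the finite GF(2) model of the earlier sketches the product in the relation sense is a one-line set comprehension, which may help to fix the definition before the domain and multivalued-part subtleties enter (names ad hoc):

```python
# The product AR in the relation sense, for the finite GF(2) model of the
# earlier sketches (names ad hoc).
def compose(A, R):
    """AR = { {f, g} : {f, h} in R and {h, g} in A for some h }."""
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

ID  = {((0,), (0,)), ((1,), (1,))}     # identity on GF(2)
MUL = {((0,), (0,)), ((0,), (1,))}     # purely multivalued: {0} x GF(2)
print(compose(ID, MUL) == MUL)         # True: composing with I changes nothing
print(compose(MUL, MUL) == MUL)        # True: mul parts propagate through products
```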

Lemma 7.1

Let \(R_1\) be a linear relation from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and let \(R_2\) be a linear relation from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\). Let \(R=(R_1 \,;\, R_2)\) and let A be a linear relation from \({\mathfrak{K}}\) to \({\mathfrak{K}}'\). Then

$$\begin{aligned} (AR_1 \,;\, AR_2) \subset AR. \end{aligned}$$
(7.1)

Moreover, the following statements are equivalent:

  1. (i)

    there is equality in (7.1);

  2. (ii)

    there is the implication

    $$\begin{aligned} \left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in R \\ h \in \text{dom}\,A \end{array} \right. \quad \Rightarrow \quad \left\{ \begin{array}{l} \{f_1,h_1\}\in R_{1}, \{f_2,h_2\} \in R_{2} \\ h-h_1-h_2 \in \text{ker}\,A \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,A. \end{array} \right. \end{aligned}$$

Proof

To prove (7.1), let \(\{f,g\} \in (AR_1 \,;\, AR_2)\). This means \(f=f_1 \oplus f_2\) and

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g_1+g_2 \right\} \in (AR_1 \,;\, AR_2), \end{aligned}$$

where \(\{f_1,g_1\} \in AR_1\), \(\{f_2,g_2\} \in AR_2\), and \(g=g_1+g_2\). Then there exist elements \(h_1\) and \(h_2\) in \({\mathfrak{K}}\) such that

$$\begin{aligned} \{f_1,h_1\} \in R_1, \quad \{h_1, g_1\} \in A, \quad \{f_2,h_2\} \in R_2, \quad \{h_2, g_2\} \in A. \end{aligned}$$
(7.2)

Consequently,

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, h_1+h_2 \right\} \in (R_1 \,;\, R_2) \quad \text{ and } \quad \{h_1+h_2, g_1+ g_2\} \in A, \end{aligned}$$
(7.3)

which implies that

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, g_1+g_2 \right\} \in A(R_1 \,;\, R_2). \end{aligned}$$

Thus (7.1) has been shown.

(i) \(\Rightarrow \) (ii) Assume that there is equality in (7.1). In order to prove (ii), let \(\{f_1\oplus f_2,h\}\in (R_1 \,;\, R_2)\) for some \(h\in \text{dom}\,A\). Then \(\{h,g\}\in A\) for some \(g\in {\mathfrak{K}}'\) and \(\{f_1\oplus f_2,g\}\in AR=(AR_1 \,;\, AR_2)\). By (3.1) this means that \(\{f_1,g_1\}\in AR_1\) and \(\{f_2,g_2\}\in AR_2\) for some \(g_1,g_2\in {\mathfrak{K}}'\) with \(g_1+g_2=g\). Hence, there exist \(h_1,h_2\in {\mathfrak{K}}\) which satisfy all the conditions in (7.2) and (7.3). In particular, one sees that \(h_1,h_2\in \text{dom}\,A\) and now \(\{h,g\}\in A\) and (7.3) show that \(\{h-h_1-h_2,0\} \in A\).

(ii) \(\Rightarrow \) (i) It suffices to demonstrate the inclusion \(AR \subset (AR_1 \,;\, AR_2)\). To do this, assume that \(\{f_1\oplus f_2,g\}\in AR\). Then \(\{f_1\oplus f_2,h\}\in R\) and \(\{h,g\}\in A\) for some \(h\in {\mathfrak{K}}\). In particular, \(h\in \text{dom}\,A\) and, by assumption, there exist \(h_1,h_2\in \text{dom}\,A\) such that

$$\begin{aligned} \{f_1,h_1\}\in R_1, \quad \{f_2,h_2\} \in R_2, \quad \text{ and } \quad h-h_1-h_2\in \text{ker}\,A. \end{aligned}$$

Then \(\{h_1,g_1\}\in A\), \(\{h_2,g_2\} \in A\) for some \(g_1,g_2\in {\mathfrak{K}}'\), so that \(\{f_1,g_1\}\in AR_1\), \(\{f_2,g_2\} \in AR_2\), and

$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix},g_1+g_2\right\} \in (AR_1 \,;\, AR_2), \quad \{h_1+h_2,g_1+g_2\}\in A. \end{aligned}$$
(7.4)

Hence \(\{h,g\}\in A\) yields \(\{h-h_1-h_2,g-g_1-g_2\}\in A\) and \(\{0,g-g_1-g_2\}\in A\). Now observe that

$$\begin{aligned} \text{mul}\,A \subset \text{mul}\,AR_{1}+\text{mul}\,AR_{2}=\text{mul}\,(AR_{1}\,;\, AR_{2}), \end{aligned}$$

so that \(\{0,g-g_1-g_2\}\in (AR_1 \,;\, AR_2)\). Then the first statement in (7.4) shows that \(\{f_1\oplus f_2,g\}\in (AR_1 \,;\, AR_2)\). \(\square \)
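
A toy GF(2) instance, in the notation of the earlier sketches, shows that the inclusion (7.1) can indeed be strict when condition (ii) fails; here \(R_1 = R_2 = I\) on GF(2) and A is the zero relation with \(\text{dom}\,A = \{0\}\) (all names ad hoc):

```python
# Toy GF(2) example where the inclusion (7.1) is strict: R1 = R2 = I and
# dom A = {0}, so condition (ii) of Lemma 7.1 fails.  Ad hoc helpers as in
# the earlier sketches.
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

ID = {((0,), (0,)), ((1,), (1,))}
A  = {((0,), (0,))}                            # zero relation with dom A = {0}

lhs = row(compose(A, ID), compose(A, ID))      # (A R1 ; A R2)
rhs = compose(A, row(ID, ID))                  # AR with R = (R1 ; R2)
print(lhs < rhs)   # True: {{1,1}, 0} is in AR, but 1 does not lie in dom A
```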

In special cases the condition (ii) in Lemma 7.1 becomes easier to verify.

Corollary 7.1

Let \(R=(R_1 \,;\, R_2)\) and A be as in Lemma 7.1. Then:

  1. (a)

    if R is an operator, in particular, if \(R_{1} \in \mathbf{B}({\mathfrak{H}}_{1}, {\mathfrak{K}})\) and \(R_{2} \in \mathbf{B}({\mathfrak{H}}_{2}, {\mathfrak{K}})\), then there is equality in (7.1) if and only if

    $$\begin{aligned} R_1f_1+ R_2f_2 \in \text{dom}\,A \quad \Longrightarrow \quad R_1f_1,\, R_2f_2\in \text{dom}\,A; \end{aligned}$$
  2. (b)

    if \(\text{ran}\,R_1\cup \text{ran}\,R_2\subset \text{dom}\,A\), in particular, if \(A \in \mathbf{B}({\mathfrak{K}}, {\mathfrak{K}}')\), then there is equality in (7.1).

Lemma 7.2

Let B be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\). Let \(C_1\) be a linear relation from \({\mathfrak{K}}\) to \(\mathfrak{L}_{1}\), let \(C_2\) be a linear relation from \({\mathfrak{K}}\) to \(\mathfrak{L}_{2}\), and let \(C=\text{col}\,(C_1\,;\,C_2)\). Then

$$\begin{aligned} CB \subset \begin{pmatrix} C_1 B \\ C_2 B \end{pmatrix}. \end{aligned}$$
(7.5)

Moreover, the following statements are equivalent:

  1. (i)

    there is equality in (7.5);

  2. (ii)

    there is the inclusion

    $$\begin{aligned} \text{mul}\,B \cap \text{dom}\,C \subset \text{mul}\,B \cap \text{ker}\,C_1+\text{mul}\,B \cap \text{ker}\,C_2. \end{aligned}$$

Proof

It is straightforward to see (7.5). To see the equivalence of the statements (i) and (ii), observe that

$$\begin{aligned} \text{mul}\,CB=\left\{ \begin{pmatrix} \varphi _1 \\ \varphi _2 \end{pmatrix}:\, \psi \in \text{mul}\,B,\, \{\psi , \varphi _1\} \in C_1, \, \{\psi , \varphi _2\} \in C_2 \,\right\} . \end{aligned}$$
(7.6)

(i) \(\Rightarrow \) (ii) If there is equality in (7.5), then CB is a column. By Lemma 3.2 this is equivalent to

$$\begin{aligned} \text{mul}\,CB=(\text{mul}\,CB) \cap \mathfrak{L}_1 \oplus (\text{mul}\,CB) \cap \mathfrak{L}_2. \end{aligned}$$

To show the inclusion in (ii), let \(\psi \in \text{mul}\,B \cap \text{dom}\,C\). Then there exist elements \(\varphi _1\) and \(\varphi _2\) so that \(\{\psi , \varphi _1\} \in C_1\) and \(\{\psi , \varphi _2\} \in C_2\), or

$$\begin{aligned} \left\{ \psi , \begin{pmatrix} \varphi _1 \\ \varphi _2 \end{pmatrix} \right\} \in C, \quad \text{ which } \text{ implies } \quad \left\{ 0, \begin{pmatrix} \varphi _1 \\ \varphi _2 \end{pmatrix} \right\} \in CB. \end{aligned}$$

By assumption, \(\varphi _1 \in (\text{mul}\,CB)\cap \mathfrak{L}_1\) and \(\varphi _2 \in (\text{mul}\,CB)\cap \mathfrak{L}_2\). Hence there exist \(\psi ', \psi '' \in \text{mul}\,B\) such that

$$\begin{aligned} \left\{ \psi ', \begin{pmatrix} \varphi _1 \\ 0 \end{pmatrix} \right\} \in C \quad \text{ and } \quad \left\{ \psi '', \begin{pmatrix} 0 \\ \varphi _2 \end{pmatrix} \right\} \in C. \end{aligned}$$

Therefore \(\psi ' \in \text{ker}\,C_2\), \(\psi '' \in \text{ker}\,C_1\), and

$$\begin{aligned} \left\{ \psi -\psi ' -\psi '',\begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\} \in C, \end{aligned}$$

which shows \(\zeta = \psi -\psi ' -\psi '' \in \text{ker}\,C=\text{ker}\,C_1 \cap \text{ker}\,C_2\). Since \(\psi ', \psi '' \in \text{mul}\,B\), also \(\zeta = \psi -\psi ' -\psi '' \in \text{mul}\,B\). Hence,

$$\begin{aligned} \psi = \psi ' +\psi '' + \zeta \in \text{mul}\,B \cap \text{ker}\,C_1 + \text{mul}\,B \cap \text{ker}\,C_2. \end{aligned}$$

Therefore (ii) holds.

(ii) \(\Rightarrow \) (i) It suffices to show that CB is a column, as in this case one has equality in (7.5), since

$$\begin{aligned} CB=\begin{pmatrix} P_1 CB \\ P_2 CB \end{pmatrix} =\begin{pmatrix} C_1 B \\ C_2 B \end{pmatrix} \end{aligned}$$

by Lemma 3.2. Thus, again by Lemma 3.2, one needs to show

$$\begin{aligned} \text{mul}\,CB \subset (\text{mul}\,CB) \cap \mathfrak{L}_1 \oplus (\text{mul}\,CB) \cap \mathfrak{L}_2, \end{aligned}$$

as the reverse inclusion is clear. Let \(\varphi \in \text{mul}\,CB\), which, by (7.6), gives

$$\begin{aligned} \{0,\psi \} \in B, \quad \{\psi , \varphi \} \in C, \quad \text{ i.e. } \quad \left\{ \psi , \begin{pmatrix} \varphi _1 \\ \varphi _2 \end{pmatrix} \right\} \in C, \end{aligned}$$

where \(\psi \in \text{mul}\,B \cap \text{dom}\,C\). By assumption \(\psi =\psi ' +\psi ''\), where \(\psi ',\psi ''\in \text{mul}\,B\) with

$$\begin{aligned} \{\psi ',0\} \in C_1,\quad \{\psi '',0\} \in C_2. \end{aligned}$$

Since

$$\begin{aligned} \{ \psi '+\psi '', \varphi _1 \} \in C_1, \quad \{ \psi '+\psi '' ,\varphi _2 \} \in C_2, \end{aligned}$$

this leads to

$$\begin{aligned} \{ \psi '', \varphi _1 \} \in C_1, \quad \{ \psi ' ,\varphi _2 \} \in C_2, \end{aligned}$$

and

$$\begin{aligned} \left\{ \psi '', \begin{pmatrix} \varphi _1 \\ 0 \end{pmatrix} \right\} \in C \quad \text{ and } \quad \left\{ \psi ', \begin{pmatrix} 0 \\ \varphi _2 \end{pmatrix} \right\} \in C, \end{aligned}$$

so that

$$\begin{aligned} \left\{ 0, \begin{pmatrix} \varphi _1 \\ 0 \end{pmatrix} \right\} \in CB \quad \text{ and } \quad \left\{ 0, \begin{pmatrix} 0 \\ \varphi _2 \end{pmatrix} \right\} \in CB. \end{aligned}$$

This shows the required inclusion. \(\square \)
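
Again a toy GF(2) instance, with the ad hoc helpers of the earlier sketches, exhibits strictness in (7.5): for a purely multivalued B and \(C_1 = C_2 = I\) condition (ii) fails, since \(\text{mul}\,B \cap \text{dom}\,C = \mathrm{GF}(2)\) while \(\text{ker}\,C_1 = \text{ker}\,C_2 = \{0\}\).

```python
# Toy GF(2) example where the inclusion (7.5) is strict: B purely
# multivalued, C1 = C2 = I, so condition (ii) of Lemma 7.2 fails.
# Ad hoc helpers as in the earlier sketches.
def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

ID = {((0,), (0,)), ((1,), (1,))}
B  = {((0,), (0,)), ((0,), (1,))}              # mul B = GF(2), dom B = {0}

lhs = compose(col(ID, ID), B)                  # CB: both components from one psi
rhs = col(compose(ID, B), compose(ID, B))      # col(C1B ; C2B): independent choices
print(lhs < rhs)   # True: {0, (1,0)} lies in the right-hand side only
```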

In special cases the condition (ii) in Lemma 7.2 becomes easier to verify. Observe that if \(\text{ker}\,C_{2}=\{0\}\), then (ii) reads \(\text{mul}\,B \cap \text{dom}\,C \subset \text{ker}\,C_1\), and if \(\text{ker}\,C_{1}=\{0\}\), then (ii) reads \(\text{mul}\,B \cap \text{dom}\,C \subset \text{ker}\,C_2\).

Corollary 7.2

Let B and \(C=\text{col}\,(C_1\,;\,C_2)\) be as in Lemma 7.2. Then:

  1. (a)

    if B is an operator, in particular, if \(B \in \mathbf{B}({\mathfrak{H}},{\mathfrak{K}})\), then there is equality in (7.5);

  2. (b)

    if \(C_{1} \in \mathbf{B}({\mathfrak{K}}, \mathfrak{L}_{1})\) and \(C_{2} \in \mathbf{B}({\mathfrak{K}}, \mathfrak{L}_{2})\), then there is equality in (7.5) if and only if

    $$\begin{aligned} \text{mul}\,B \subset \text{mul}\,B \cap \text{ker}\,C_1+\text{mul}\,B \cap \text{ker}\,C_2. \end{aligned}$$

In the case where a row and a column are multiplied there are no obstacles.

Lemma 7.3

Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively, and let \(C=\text{col}\,(C_1; C_2)\) be their column. Let \(R_1\) be a linear relation from \({\mathfrak{K}}_1\) to \(\mathfrak{L}\), let \(R_2\) be a linear relation from \({\mathfrak{K}}_2\) to \(\mathfrak{L}\), and let \(R=(R_1; R_2)\) be their row. Then

$$\begin{aligned} RC=R_1C_1+R_2C_2. \end{aligned}$$

Proof

\((\subset )\) Let \(\{f,g\} \in R C\). Then it follows that \(\{f,h\} \in C\) and \(\{h,g\} \in R\) for some \(h=h_1 \oplus h_2\), or

$$\begin{aligned} \left\{ f, \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \right\} \in \begin{pmatrix} C_1\\ C_2 \end{pmatrix}, \quad \left\{ \begin{pmatrix} h_1 \\ h_2 \end{pmatrix}, g_1+g_2 \right\} \in (R_1\,;\,R_2), \end{aligned}$$

where \(\{h_1,g_1 \} \in R_1\) and \(\{h_2,g_2 \} \in R_2\), while \(g=g_1+g_2\). This implies

$$\begin{aligned} \{f,g\}=\{f,g_1+g_2\} \quad \text{ with } \quad \{f,g_1\} \in R_1C_1, \,\, \{f,g_2\} \in R_2C_2, \end{aligned}$$
(7.7)

so that \(\{f,g\} \in R_1C_1+R_2C_2\).

\((\supset )\) Let \(\{f,g\} \in R_1C_1+R_2C_2\). This implies that (7.7) holds. Hence there exist elements \(h_1 \in {\mathfrak{K}}_1\) and \(h_2 \in {\mathfrak{K}}_2\) such that

$$\begin{aligned} \{f,h_1\} \in C_1, \,\, \{h_1,g_1\} \in R_1, \,\, \{f,h_2\} \in C_2, \,\, \{h_2,g_2\} \in R_2. \end{aligned}$$

However, this can be written as

$$\begin{aligned} \left\{ f, \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \right\} \in \begin{pmatrix} C_1\\ C_2 \end{pmatrix}, \quad \left\{ \begin{pmatrix} h_1 \\ h_2 \end{pmatrix}, g_1+g_2 \right\} \in (R_1\,;\,R_2), \end{aligned}$$

and \( \{f,g\}=\{f,g_1+g_2\} \in RC\). \(\square \)
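
Since Lemma 7.3 is an unconditional equality, it can be confirmed over the whole finite model at once; the following sketch (ad hoc names as before) checks \(RC = R_1C_1 + R_2C_2\) for all \(3^4\) combinations of the three one-dimensional model relations.

```python
# Brute-force check of Lemma 7.3 over GF(2): RC = R1C1 + R2C2 for all
# combinations of the three model relations.  Ad hoc helpers as before.
from itertools import product

add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

def rsum(A, B):
    return {(f, add(k1, k2)) for (f, k1) in A for (g, k2) in B if f == g}

rels = [{((0,), (0,)), ((1,), (1,))},     # identity
        {((0,), (0,)), ((1,), (0,))},     # zero operator
        {((0,), (0,)), ((0,), (1,))}]     # purely multivalued

for C1, C2, R1, R2 in product(rels, repeat=4):
    assert compose(row(R1, R2), col(C1, C2)) == rsum(compose(R1, C1),
                                                     compose(R2, C2))
print("Lemma 7.3 verified for all 81 combinations")
```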

8 Multiplication of block relations

Multiplication of block relations is a complicated operation. Lemmas 7.1 and 7.2 describe the difficulties to be encountered. Therefore, it may be helpful to consider the case of blocks of operators first.

Lemma 8.1

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of operators, and let E and F be the linear operators generated by these two blocks. Then the domain \(\text{dom}\,EF\) consists of all \(h=(h_{1}, h_{2}) \in {\mathfrak{H}}_{1} \times {\mathfrak{H}}_{2}\) for which

  1. (a)

    \(h_{1} \in \text{dom}\,F_{11} \cap \text{dom}\,F_{21}\);

  2. (b)

    \(h_{2} \in \text{dom}\,F_{12} \cap \text{dom}\,F_{22}\);

  3. (c)

    \(F_{11}h_{1} +F_{12}h_{2} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);

  4. (d)

    \(F_{21}h_{1} +F_{22}h_{2} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\).

Moreover, if \(h=(h_{1}, h_{2}) \in \text{dom}\,EF\), then

$$\begin{aligned} EFh= \begin{pmatrix} E_{11}(F_{11}h_{1}+F_{12}h_{2}) + E_{12}(F_{21}h_{1}+F_{22}h_{2}) \\ E_{21}(F_{11}h_{1}+F_{12}h_{2}) + E_{22}(F_{21}h_{1}+F_{22}h_{2}) \end{pmatrix}. \end{aligned}$$

Proof

Let \(h \in \text{dom}\,EF\), so that \(h \in \text{dom}\,F\) and \(Fh \in \text{dom}\,E\). The condition \(h \in \text{dom}\,F\) means that (a) and (b) hold, in which case

$$\begin{aligned} Fh=\begin{pmatrix} F_{11}h_{1} +F_{12}h_{2} \\ F_{21}h_{1} +F_{22}h_{2} \end{pmatrix}. \end{aligned}$$

Moreover, the condition \(Fh \in \text{dom}\,E\) means that (c) and (d) hold, in which case EFh is as in the statement of the lemma. \(\square \)

Corollary 8.1

In the situation of Lemma 8.1, let \(h=(h_{1}, h_{2}) \in {\mathfrak{H}}_{1} \times {\mathfrak{H}}_{2}\) satisfy

  1. (a)

    \(h_{1} \in \text{dom}\,F_{11} \cap \text{dom}\,F_{21}\);

  2. (b)

    \(h_{2} \in \text{dom}\,F_{12} \cap \text{dom}\,F_{22}\);

  3. (c)

    \(F_{11}h_{1} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);

  4. (d)

    \(F_{12}h_{2} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);

  5. (e)

    \(F_{21}h_{1} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\);

  6. (f)

    \(F_{22}h_{2} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\).

Then \(h \in \text{dom}\,EF\) and

$$\begin{aligned} EFh= \begin{pmatrix} (E_{11}F_{11}+ E_{12}F_{21})h_{1}+(E_{11}F_{12} +E_{12} F_{22})h_{2} \\ (E_{21}F_{11}+ E_{22}F_{21})h_{1}+(E_{21}F_{12}+ E_{22}F_{22})h_{2} \end{pmatrix}. \end{aligned}$$

The message of Lemma 8.1 and Corollary 8.1 is that the domain of the operator EF contains the domain of the operator generated by the block which results after formally multiplying the blocks, and that

$$\begin{aligned} \begin{pmatrix} E_{11}F_{11}+E_{12}F_{21} &{} E_{11}F_{12}+E_{12}F_{22} \\ E_{21}F_{11}+E_{22}F_{21} &{} E_{21}F_{12}+E_{22}F_{22} \end{pmatrix} \subset EF. \end{aligned}$$
(8.1)

The domain of the left-hand side is in general smaller than the domain of EF, since every entry, including each individual summand, of the block on the left-hand side imposes its own domain restriction. This reflects the content of condition (ii) in Lemma 7.1 and, of course, the criterion stated in Corollary 7.1 (a).

Now the case of linear relations will be taken up. Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these blocks. Since the entries may now be multivalued, the following arguments must take this extra complication into account.

The next lemma is based on Lemmas 7.1, 7.2, and 7.3. It is simple to state, but it contains a lot of information, as will be shown below.

Lemma 8.2

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Then the following equality holds:

$$\begin{aligned} EF =\begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) +\begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22}). \end{aligned}$$
(8.2)

Proof

The key is the interpretation of the relations E and F as a row and a column, respectively; cf. Lemma 5.1. Thus E will be written as a row of columns:

$$\begin{aligned} E=(E^{(1)}\,;\,E^{(2)}), \; \text{ where } \; E^{(1)}=\begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}, \quad E^{(2)}=\begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}, \end{aligned}$$

and, likewise, F will be written as a column of rows:

$$\begin{aligned} F=\begin{pmatrix} F_{(1)}\\ F_{(2)}\end{pmatrix}, \; \text{ where } \; F_{(1)}=( F_{11} \,;\, F_{12} ), \quad F_{(2)}=( F_{21} \,;\, F_{22}). \end{aligned}$$

Then an application of Lemma 7.3 leads to

$$\begin{aligned} EF&=(E^{(1)}\,;\,E^{(2)})\begin{pmatrix} F_{(1)}\\ F_{(2)}\end{pmatrix} =E^{(1)}F_{(1)}+E^{(2)}F_{(2)} \\&=\begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) +\begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22}), \end{aligned}$$

which gives the identity (8.2). \(\square \)
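
The identity (8.2) is likewise unconditional and can be tested over the full finite model; the sketch below (ad hoc helpers as in the earlier sections) checks it for all \(3^8\) choices of the entries.

```python
# Brute-force check of the identity (8.2) over GF(2), with all block entries
# drawn from the three model relations.  Ad hoc helpers as before.
from itertools import product

add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

def rsum(A, B):
    return {(f, add(k1, k2)) for (f, k1) in A for (g, k2) in B if f == g}

block = lambda E11, E12, E21, E22: row(col(E11, E21), col(E12, E22))

rels = [{((0,), (0,)), ((1,), (1,))},
        {((0,), (0,)), ((1,), (0,))},
        {((0,), (0,)), ((0,), (1,))}]

for E11, E12, E21, E22 in product(rels, repeat=4):
    for F11, F12, F21, F22 in product(rels, repeat=4):
        EF = compose(block(E11, E12, E21, E22), block(F11, F12, F21, F22))
        rhs = rsum(compose(col(E11, E21), row(F11, F12)),
                   compose(col(E12, E22), row(F21, F22)))
        assert EF == rhs                       # the identity (8.2)
print("identity (8.2) verified for all 3^8 entry combinations")
```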

It is instructive to describe all the elements in the relation EF. By Lemma 8.2 they are of the form

$$\begin{aligned} \left\{ \begin{pmatrix} h_{1} \\ h_{2} \end{pmatrix}, \begin{pmatrix} k_{1}' +k_{1}''\\ k_{2}'+k_{2}'' \end{pmatrix} \right\} \in EF, \end{aligned}$$

due to the sum in the right-hand side of (8.2), where

$$\begin{aligned} \left\{ \begin{pmatrix} h_{1} \\ h_{2} \end{pmatrix}, \begin{pmatrix} k_{1}' \\ k_{2}' \end{pmatrix} \right\} \in \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ), \quad \left\{ \begin{pmatrix} h_{1} \\ h_{2} \end{pmatrix}, \begin{pmatrix} k_{1}'' \\ k_{2}'' \end{pmatrix} \right\} \in \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22} ). \end{aligned}$$

Hence there exist elements \(l'\) and \(l''\) such that

$$\begin{aligned} \left\{ \begin{pmatrix} h_{1} \\ h_{2} \end{pmatrix}, l' \right\} \in ( F_{11} \,;\, F_{12} ), \quad \left\{ l', \begin{pmatrix} k_{1}' \\ k_{2}' \end{pmatrix} \right\} \in \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} , \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{pmatrix} h_{1} \\ h_{2} \end{pmatrix}, l'' \right\} \in ( F_{21} \,;\, F_{22} ), \quad \left\{ l'', \begin{pmatrix} k_{1}''\\ k_{2}'' \end{pmatrix} \right\} \in \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} . \end{aligned}$$

Note that this description, provided by the identity in (8.2), agrees with the one in Lemma 8.1 for the case of operators.

Parallel to the operator case, one may ask how the linear relation EF is related to the linear relation \(E \star F\) generated by the formal product \([E_{ij}]\cdot [F_{ij}]\) of the blocks:

$$\begin{aligned} E \star F = \begin{pmatrix} E_{11}F_{11}+E_{12}F_{21} &{} E_{11}F_{12}+E_{12}F_{22} \\ E_{21}F_{11}+E_{22}F_{21} &{} E_{21}F_{12}+E_{22}F_{22} \end{pmatrix}. \end{aligned}$$
(8.3)

Recall that this block can be seen as a row of columns, or as a column of rows; cf. Lemma 5.1. The following result shows that the relationship between the product EF and the formal product \(E \star F\) is fuzzy.

Lemma 8.3

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Then for both \(X=EF\) and \(X=E \star F\) the following inclusions hold:

$$\begin{aligned}&\left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} F_{11} \,;\, \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}F_{12} \right) +\left( \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} F_{21} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}F_{22} \right) \nonumber \\&\quad \subset X \subset \begin{pmatrix} E_{11}( F_{11} \,;\, F_{12}) + E_{12}( F_{21} \,;\, F_{22}) \\ E_{21}( F_{11} \,;\, F_{12} )+E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix}. \end{aligned}$$
(8.4)

Proof

It is straightforward to obtain the above inclusions. First consider the case \(X=EF\). It follows from (8.2) and Lemmas 7.1 and 5.5 that

$$\begin{aligned} EF&=\begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) +\begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22})\nonumber \\&\supset \left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} F_{11} \,;\, \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}F_{12} \right) +\left( \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} F_{21} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}F_{22} \right) \end{aligned}$$
(8.5)

which proves the first inclusion in (8.4) for \(X=EF\). Similarly, it follows from (8.2), Lemma 7.2 (see also (5.3)), and Lemma 5.5 that

$$\begin{aligned} EF&=\begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) +\begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22})\nonumber \\&\subset \begin{pmatrix} E_{11}( F_{11} \,;\, F_{12} ) \\ E_{21}( F_{11} \,;\, F_{12} ) \end{pmatrix} +\begin{pmatrix} E_{12}( F_{21} \,;\, F_{22}) \\ E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix}\nonumber \\&= \begin{pmatrix} E_{11}( F_{11} \,;\, F_{12}) + E_{12}( F_{21} \,;\, F_{22}) \\ E_{21}( F_{11} \,;\, F_{12} )+E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix}, \end{aligned}$$
(8.6)

which proves the second inclusion in (8.4) for \(X=EF\).

Next consider the case that \(X=E \star F\). By rewriting (8.3) as a sum (cf. Lemma 5.5) and then applying Lemma 7.2 to each column (cf. Lemma 5.1) it is seen that (see also (5.3))

$$\begin{aligned} E \star F&=\begin{pmatrix} E_{11}F_{11}&{}\quad E_{11}F_{12} \\ E_{21}F_{11}&{}\quad E_{21}F_{12} \end{pmatrix} +\begin{pmatrix} E_{12}F_{21} &{}\quad E_{12}F_{22} \\ E_{22}F_{21} &{}\quad E_{22}F_{22} \end{pmatrix}\nonumber \\&\supset \left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} F_{11} \,;\, \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}F_{12} \right) +\left( \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} F_{21} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}F_{22}\right) , \end{aligned}$$
(8.7)

which proves the first inclusion in (8.4) for \(X=E \star F\). To prove the other inclusion rewrite the first identity in (8.7) and then apply Lemma 7.1 to each entry (see also (5.3)) to obtain

$$\begin{aligned} E \star F&=\begin{pmatrix} ( E_{11}F_{11} \,;\, E_{11}F_{12}) + (E_{12} F_{21} \,;\, E_{12}F_{22}) \\ (E_{21} F_{11} \,;\, E_{21}F_{12} )+(E_{22} F_{21} \,;\, E_{22}F_{22}) \end{pmatrix} \nonumber \\&\subset \begin{pmatrix} E_{11}( F_{11} \,;\, F_{12}) + E_{12}( F_{21} \,;\, F_{22}) \\ E_{21}( F_{11} \,;\, F_{12} )+E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix}, \end{aligned}$$
(8.8)

which proves the second inclusion in (8.4) for \(X=E \star F\). \(\square \)

Observe that if one of the inclusions in (8.4) of Lemma 8.3 holds as an equality, then these two products are related to each other by an inclusion. More precisely:

  1. (a)

    if equality prevails in (8.5) or in (8.8), then \(EF\subset E\star F\);

  2. (b)

    if equality prevails in (8.6) or in (8.7), then \(E\star F\subset EF\).

If, in particular, F is an operator, then one has case (b), i.e.,

$$\begin{aligned} \text{mul}\,F=\{0\} \quad \Longrightarrow \quad E \star F \subset EF. \end{aligned}$$

Namely, in this case the condition (ii) of Lemma 7.2 needed for equality in (7.5) is automatically satisfied and, therefore, the inclusions in (8.6) and (8.7) both hold as equalities, hence \(E \star F \subset EF\). This observation covers the case of operators as indicated in (8.1).
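
That the relationship is genuinely fuzzy can be seen already in the one-dimensional GF(2) model: in the following sketch (ad hoc names as before) the product EF feeds a single intermediate element into both output components, while the formal product (8.3) computes its entries independently, so that EF is strictly smaller than \(E \star F\).

```python
# Toy GF(2) example where EF and E*F differ.  With E11 = E21 = I,
# E12 = E22 = 0, F11 = F12 purely multivalued and F21 = F22 = I, the product
# EF forces equal output components, while the formal product (8.3) does
# not; hence EF is strictly contained in E*F.  Ad hoc helpers as before.
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

def rsum(A, B):
    return {(f, add(k1, k2)) for (f, k1) in A for (g, k2) in B if f == g}

block = lambda E11, E12, E21, E22: row(col(E11, E21), col(E12, E22))

ID   = {((0,), (0,)), ((1,), (1,))}
ZERO = {((0,), (0,)), ((1,), (0,))}
MUL  = {((0,), (0,)), ((0,), (1,))}

E11, E12, E21, E22 = ID, ZERO, ID, ZERO
F11, F12, F21, F22 = MUL, MUL, ID, ID

EF = compose(block(E11, E12, E21, E22), block(F11, F12, F21, F22))
entry = lambda Ei1, Ei2, F1j, F2j: rsum(compose(Ei1, F1j), compose(Ei2, F2j))
EstarF = block(entry(E11, E12, F11, F21), entry(E11, E12, F12, F22),
               entry(E21, E22, F11, F21), entry(E21, E22, F12, F22))
print(EF < EstarF)   # True: the formal product is strictly larger here
```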

To guarantee the equality \(EF=E\star F\), it suffices to find conditions such that the implications (a) and (b) above hold simultaneously. Such conditions are made explicit in the next proposition; they guarantee equalities in (8.6) and (8.8).

Proposition 8.1

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Assume that

$$\begin{aligned} \left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in (F_{11}\,;\,F_{12}) \\ h \in \text{dom}\,E_{i1} \end{array} \right. \quad \Rightarrow \quad \left\{ \begin{array}{l} \{f_1,h_1\}\in F_{11}, \{f_2,h_2\} \in F_{12} \\ h-h_1-h_2 \in \ \text{ker}\,E_{i1} \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,E_{i1} \end{array} \right. \end{aligned}$$
(8.9)

with \(i=1,2\);

$$\begin{aligned} \left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in (F_{21}\,;\,F_{22}) \\ h \in \text{dom}\,E_{i2} \end{array} \right. \quad \Rightarrow \quad \left\{ \begin{array}{l} \{f_1,h_1\}\in F_{21}, \{f_2,h_2\} \in F_{22} \\ h-h_1-h_2 \in \ \text{ker}\,E_{i2} \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,E_{i2} \end{array} \right. \end{aligned}$$
(8.10)

with \(i=1,2\);

$$\begin{aligned}&\text{mul}\,(F_{11} \,;\, F_{12}) \cap (\text{dom}\,E_{11} \cap \text{dom}\,E_{21}) \nonumber \\&\quad \subset \text{mul}\,(F_{11} \,;\, F_{12}) \cap \text{ker}\,E_{11} + \text{mul}\,(F_{11} \,;\, F_{12}) \cap \text{ker}\,E_{21}, \end{aligned}$$
(8.11)
$$\begin{aligned}&\text{mul}\,(F_{21} \,;\, F_{22}) \cap (\text{dom}\,E_{12} \cap \text{dom}\,E_{22}) \nonumber \\&\quad \subset \text{mul}\,(F_{21} \,;\, F_{22}) \cap \text{ker}\,E_{12} + \text{mul}\,(F_{21} \,;\, F_{22}) \cap \text{ker}\,E_{22}. \end{aligned}$$
(8.12)

Then \(EF=E \star F\).

Proof

For the proof of this result it is shown that under the given conditions equalities prevail in (8.6) and (8.8). The first two conditions together with Lemma 7.1 show that

$$\begin{aligned} \begin{pmatrix} E_{11}( F_{11} \,;\, F_{12} ) \\ E_{21}( F_{11} \,;\, F_{12} ) \end{pmatrix} =\begin{pmatrix} (E_{11}F_{11} \,;\, E_{11}F_{12} ) \\ (E_{21} F_{11} \,;\, E_{21}F_{12} ) \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} \begin{pmatrix} E_{12}( F_{21} \,;\, F_{22}) \\ E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix} =\begin{pmatrix} (E_{12}F_{21} \,;\, E_{12}F_{22}) \\ (E_{22} F_{21} \,;\, E_{22}F_{22}) \end{pmatrix}. \end{aligned}$$

Thus, equality holds in (8.8). The last two conditions together with Lemma 7.2 show that

$$\begin{aligned} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) =\begin{pmatrix} E_{11}( F_{11} \,;\, F_{12} ) \\ E_{21}( F_{11} \,;\, F_{12} ) \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22}) =\begin{pmatrix} E_{12}( F_{21} \,;\, F_{22}) \\ E_{22}( F_{21} \,;\, F_{22}) \end{pmatrix}. \end{aligned}$$

Thus, equality holds in (8.6). Now \(EF=E\star F\) follows from (a) and (b). \(\square \)

Corollary 8.2

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. In addition, assume that \(F_{ij} \in \mathbf{B}({\mathfrak{H}}_{j},{\mathfrak{K}}_{i})\) and

$$\begin{aligned}&F_{11}h_{1} +F_{12}h_{2} \in \text{dom}\,E_{i1} \quad \Longrightarrow \quad F_{11}h_{1}, F_{12}h_{2} \in \text{dom}\,E_{i1}, \quad i=1,2,\\&F_{21}h_{1} +F_{22}h_{2} \in \text{dom}\,E_{i2} \quad \Longrightarrow \quad F_{21}h_{1}, F_{22}h_{2} \in \text{dom}\,E_{i2}, \quad i=1,2. \end{aligned}$$

Then \(EF=E \star F\).

Proof

This follows from Corollary 7.1 (a). \(\square \)

Corollary 8.3

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. In addition, assume that \(E_{ij} \in \mathbf{B}({\mathfrak{K}}_{j},\mathfrak{L}_{i})\) and

$$\begin{aligned}&\text{mul}\,(F_{11} \,;\, F_{12}) \subset \text{mul}\,(F_{11} \,;\, F_{12}) \cap \text{ker}\,E_{11} + \text{mul}\,(F_{11} \,;\, F_{12}) \cap \text{ker}\,E_{21},\\&\text{mul}\,(F_{21} \,;\, F_{22}) \subset \text{mul}\,(F_{21} \,;\, F_{22}) \cap \text{ker}\,E_{12} + \text{mul}\,(F_{21} \,;\, F_{22}) \cap \text{ker}\,E_{22}. \end{aligned}$$

Then \(EF=E \star F\).

Proof

This follows from Corollary 7.1 (b). \(\square \)

Another set of sufficient conditions, similar in spirit but guaranteeing equalities in (8.5) and (8.7), is given in the next proposition.

Proposition 8.2

Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Assume that

$$\begin{aligned}&\left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in (F_{11}\,;\,F_{12}) \\ h \in \text{dom}\,E_{11}\cap \text{dom}\,E_{21} \end{array} \right. \,\, \Rightarrow \,\,\,\, \left\{ \begin{array}{l} \{f_1,h_1\}\in F_{11}, \{f_2,h_2\} \in F_{12} \\ h-h_1-h_2 \in \ \text{ker}\,E_{11}\cap \text{ker}\,E_{21} \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,E_{11}\cap \text{dom}\,E_{21}; \end{array} \right. \end{aligned}$$
(8.13)
$$\begin{aligned}&\left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in (F_{21}\,;\,F_{22}) \\ h \in \text{dom}\,E_{12}\cap \text{dom}\,E_{22} \end{array} \right. \,\, \Rightarrow \,\,\,\, \left\{ \begin{array}{l} \{f_1,h_1\}\in F_{21}, \{f_2,h_2\} \in F_{22} \\ h-h_1-h_2 \in \ \text{ker}\,E_{12}\cap \text{ker}\,E_{22} \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,E_{12}\cap \text{dom}\,E_{22}; \end{array} \right. \end{aligned}$$
(8.14)
$$\begin{aligned}&\text{mul}\,F_{1i} \cap (\text{dom}\,E_{11} \cap \text{dom}\,E_{21}) \nonumber \\&\quad \subset \text{mul}\,F_{1i} \cap \text{ker}\,E_{11} + \text{mul}\,F_{1i}\cap \text{ker}\,E_{21}, \quad \text{with } i=1,2; \end{aligned}$$
(8.15)
$$\begin{aligned}&\text{mul}\,F_{2i} \cap (\text{dom}\,E_{12} \cap \text{dom}\,E_{22}) \nonumber \\&\quad \subset \text{mul}\,F_{2i} \cap \text{ker}\,E_{12} + \text{mul}\,F_{2i} \cap \text{ker}\,E_{22}, \quad \text{with } i=1,2. \end{aligned}$$
(8.16)

Then \(EF=E \star F\).

Proof

The first two conditions together with Lemma 7.1 show that

$$\begin{aligned} \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}( F_{11} \,;\, F_{12} ) = \left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} F_{11} \,;\, \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}F_{12} \right) \end{aligned}$$

and

$$\begin{aligned} \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}( F_{21} \,;\, F_{22}) = \left( \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} F_{21} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}F_{22} \right) . \end{aligned}$$

Thus, equality holds in (8.5). The last two conditions together with Lemma 7.2 show that

$$\begin{aligned} \left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} F_{11} \,;\, \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix}F_{12} \right) = \left( \begin{pmatrix} E_{11} F_{11} \\ E_{21} F_{11} \end{pmatrix} \,;\, \begin{pmatrix} E_{11} F_{12} \\ E_{21} F_{12} \end{pmatrix} \right) \end{aligned}$$

and

$$\begin{aligned} \left( \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} F_{21} \,;\, \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix}F_{22} \right) = \left( \begin{pmatrix} E_{12}F_{21} \\ E_{22}F_{21} \end{pmatrix} \,;\, \begin{pmatrix} E_{12} F_{22} \\ E_{22}F_{22} \end{pmatrix} \right) . \end{aligned}$$

Thus, equality holds in (8.7). Now \(EF=E\star F\) follows from (a) and (b). \(\square \)

It is of interest to compare the conditions in Propositions 8.1 and 8.2. For instance, notice that

$$\begin{aligned} \text{mul}\,(F_{11} \,;\, F_{12})=\text{mul}\,F_{11}+ \text{mul}\,F_{12} \quad \text{ and } \quad \text{mul}\,(F_{21} \,;\, F_{22})=\text{mul}\,F_{21} + \text{mul}\,F_{22}. \end{aligned}$$

Therefore the condition (8.15) implies the condition (8.11) and the condition (8.16) implies the condition (8.12).

9 An application for the product of block relations

In this section the product of two simple-looking blocks of linear relations is treated by means of the general results in the previous two sections. For this purpose, the following version of the distributive law for linear relations will be useful.

Lemma 9.1

Let H be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), and let K and G be linear relations from \({\mathfrak{K}}\) to \(\mathfrak{L}\). Then:

  1. (a)

    \((K+G)H\subset KH+GH\);

  2. (b)

    if \(G\in \mathbf{B}({\mathfrak{K}},\mathfrak{L})\) and \(G(\text{mul}\,H) \subset \text{mul}\,K\), then

    $$\begin{aligned} (K+G)H=KH+GH. \end{aligned}$$

Proof

  1. (a)

    Let \(\{f,g\}\in (K+G)H\). Then for some \(k\in {\mathfrak{K}}\) one has \(\{f,k\}\in H\), \(\{k,g_1\}\in G\), and \(\{k,g_2\}\in K\) with \(g=g_1+g_2\). Hence \(\{f,g_1\}\in GH\), \(\{f,g_2\}\in KH\), and \(\{f,g\}=\{f,g_1+g_2\}\in KH+GH\), which proves the stated inclusion.

  2. (b)

    It suffices to prove \(KH+GH\subset (K+G)H\). Let \(\{f,g\}\in KH+GH\). Then for some \(\ell _1,\ell _2\in \mathfrak{L}\) one has \(\{f,\ell _1\}\in GH\) and \(\{f,\ell _2\}\in KH\) with \(\ell _1+\ell _2=g\). Hence, \(\{f,k_1\}\in H\), \(\{k_1,\ell _1\}\in G\) with \(\ell _1=G k_1\) and \(\{f,k_2\}\in H\), \(\{k_2,\ell _2\}\in K\) for some \(k_1,k_2\in {\mathfrak{K}}\). This implies that \(e=k_1-k_2\in \text{mul}\,H\) and by assumption this gives that \(Ge\in \text{mul}\,K\). Therefore, \(\{k_2,\ell _2+Ge\}\in K\), \(\{k_2,G(k_1-e)\} \in G\), and since \(\{f,k_2\}\in H\) this leads to

    $$\begin{aligned} \{f,g\}=\{f,Gk_1+\ell _2\}=\{f,G(k_1-e)+(\ell _2+Ge)\} \in (K+G)H. \end{aligned}$$

    This proves \(KH+GH\subset (K+G)H\) and thus \((K+G)H=KH+GH\) holds.

\(\square \)
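
Both parts of Lemma 9.1 can be checked on the finite GF(2) model; the sketch below (ad hoc names as in the earlier sections) confirms the inclusion of part (a) for all combinations of the three model relations and the equality of part (b) whenever G is an everywhere defined operator with \(G(\text{mul}\,H) \subset \text{mul}\,K\).

```python
# Brute-force check of Lemma 9.1 over GF(2): the inclusion (a) holds for all
# combinations of the three model relations, and equality (b) holds whenever
# G is an everywhere defined operator with G(mul H) contained in mul K.
# Ad hoc helpers as in the earlier sketches.
from itertools import product

add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def rsum(A, B):
    return {(f, add(k1, k2)) for (f, k1) in A for (g, k2) in B if f == g}

rels = [{((0,), (0,)), ((1,), (1,))},     # identity
        {((0,), (0,)), ((1,), (0,))},     # zero operator
        {((0,), (0,)), ((0,), (1,))}]     # purely multivalued

mul = lambda R: {g for (f, g) in R if f == (0,)}
dom = lambda R: {f for (f, g) in R}
is_operator = lambda R: mul(R) == {(0,)}

for K, G, H in product(rels, repeat=3):
    lhs = compose(rsum(K, G), H)                     # (K + G)H
    rhs = rsum(compose(K, H), compose(G, H))         # KH + GH
    assert lhs <= rhs                                # part (a)
    if (is_operator(G) and dom(G) == {(0,), (1,)}    # G everywhere defined ...
            and all(g in mul(K) for (f, g) in G if f in mul(H))):
        assert lhs == rhs                            # ... gives part (b)
print("Lemma 9.1 verified for all 27 combinations")
```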

The next result has useful applications elsewhere; see [19, 24]. One can prove it directly via the definition of blocks and the product of the corresponding linear relations. Here the result is derived by applying some general formulas and identities presented in Sect. 7.

Proposition 9.1

Let S be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) and let T be a linear relation from \({\mathfrak{K}}\) to \({\mathfrak{H}}\). Then

$$\begin{aligned} \begin{pmatrix} T &{}\quad -I \\ I &{}\quad S \end{pmatrix} \begin{pmatrix} S &{}\quad I \\ -I &{}\quad T \end{pmatrix} =\begin{pmatrix} TS+I &{}\quad 0 \\ 0 &{}\quad ST+I \end{pmatrix}. \end{aligned}$$
(9.1)

Proof

To rewrite the product in a different form apply Lemma 9.1 with the choices

$$\begin{aligned} K=\begin{pmatrix} T &{}\quad 0 \\ 0 &{}\quad S \end{pmatrix}, \quad G=\begin{pmatrix} 0 &{}\quad -I \\ I &{} \quad 0 \end{pmatrix}, \quad H=\begin{pmatrix} S &{}\quad I \\ -I &{}\quad T \end{pmatrix}, \end{aligned}$$

and note that G is everywhere defined and bounded. Then the left-hand side of (9.1) can be rewritten in the form \((K+G)H\). Observe that

$$\begin{aligned} \text{mul}\,H=\text{mul}\,S\oplus \text{mul}\,T \quad \text{ and } \quad G(\text{mul}\,S\oplus \text{mul}\,T)=\text{mul}\,T\oplus \text{mul}\,S=\text{mul}\,K. \end{aligned}$$

Therefore G satisfies the conditions in Lemma 9.1 (b) and thus \((K+G)H=KH+GH\). Next the products KH and GH are calculated. For GH it is clear from the definition of a block relation that

$$\begin{aligned} GH=\begin{pmatrix} I &{}\quad -T\\ S &{}\quad I\end{pmatrix}. \end{aligned}$$

For KH first observe that \(\text{mul}\,H=\text{mul}\,S\oplus \text{mul}\,T\) and that

$$\begin{aligned} \text{mul}\,S\oplus \{0\} \subset \text{ker}\,(0\,;\, S), \quad \{0\}\oplus \text{mul}\,T\subset \text{ker}\,(T\,;\, 0). \end{aligned}$$

Hence the condition (ii) on \(\text{mul}\,H\cap \text{dom}\,K\) in Lemma 7.2 is satisfied and it follows that

$$\begin{aligned} KH=\begin{pmatrix} (T\,;\, 0) \\ (0\,;\, S)\end{pmatrix}H =\begin{pmatrix} (T\,;\, 0)H \\ (0\,;\, S)H \end{pmatrix}. \end{aligned}$$

Next observe that \(\text{dom}\,H=\text{dom}\,S\oplus \text{dom}\,T\) and then apply Lemma 7.3 to the entries \((T\,;\, 0)H\) and \((0\,;\, S)H\) to obtain

$$\begin{aligned} KH=\begin{pmatrix} T(S\,;\, I{\upharpoonright }\,\text{dom}\,T)+0(-I{\upharpoonright }\,\text{dom}\,S\,;\, T) \\ 0(S\,;\, I{\upharpoonright }\,\text{dom}\,T)+S(-I{\upharpoonright }\,\text{dom}\,S\,;\, T) \end{pmatrix} =\begin{pmatrix} T(S\,;\, I{\upharpoonright }\,\text{dom}\,T) \\ S(-I{\upharpoonright }\,\text{dom}\,S\,;\, T) \end{pmatrix}. \end{aligned}$$

Here each entry satisfies the condition in Lemma 7.1 (ii) and hence

$$\begin{aligned} KH=\begin{pmatrix} TS &{}\quad T\\ -S &{}\quad ST \end{pmatrix}. \end{aligned}$$

This combined with the expression for GH leads to

$$\begin{aligned} KH+GH=\begin{pmatrix} TS+I &{}\quad T-T\\ S-S &{}\quad ST+I \end{pmatrix}. \end{aligned}$$

Since \(T-T=\text{dom}\,T\times \text{mul}\,T\) and \(S-S=\text{dom}\,S\times \text{mul}\,S\), and since one has the inclusions \(\text{mul}\,T\subset \text{mul}\,TS\) and \(\text{mul}\,S\subset \text{mul}\,ST\), this last block formula corresponds to the same linear relation as the one on the right-hand side of (9.1). \(\square \)
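
The identity (9.1) can also be confirmed on the finite GF(2) model; note that \(-I = I\) over GF(2), so the signs are trivial there and the test exercises precisely the bookkeeping of domains and multivalued parts that the proof is about (ad hoc names as before).

```python
# Check of the identity (9.1) over GF(2); there -I = I, so the signs are
# trivial and the test exercises the domain and multivalued-part bookkeeping.
# S and T run over the three model relations; ad hoc helpers as before.
from itertools import product

add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def compose(A, R):
    return {(f, g) for (f, h) in R for (h2, g) in A if h2 == h}

def row(R1, R2):
    return {(f1 + f2, add(g1, g2)) for (f1, g1) in R1 for (f2, g2) in R2}

def col(C1, C2):
    return {(f, g1 + g2) for (f, g1) in C1 for (h, g2) in C2 if f == h}

def rsum(A, B):
    return {(f, add(k1, k2)) for (f, k1) in A for (g, k2) in B if f == g}

block = lambda A11, A12, A21, A22: row(col(A11, A21), col(A12, A22))

ID   = {((0,), (0,)), ((1,), (1,))}
ZERO = {((0,), (0,)), ((1,), (0,))}
MUL  = {((0,), (0,)), ((0,), (1,))}

for S, T in product([ID, ZERO, MUL], repeat=2):
    lhs = compose(block(T, ID, ID, S), block(S, ID, ID, T))   # -I = I here
    rhs = block(rsum(compose(T, S), ID), ZERO,
                ZERO, rsum(compose(S, T), ID))
    assert lhs == rhs                                          # the identity (9.1)
print("identity (9.1) verified for all nine choices of S and T")
```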

Remark 9.1

The arguments in the above proof are based on the general results in the previous sections. It is instructive to look at the difficulties that may appear if, for instance, one starts directly from the formula (8.2). In the circumstances of Proposition 9.1 the right-hand side of the identity (8.2) takes the form

$$\begin{aligned} \begin{pmatrix} T \\ I \end{pmatrix}( S \,;\, I ) +\begin{pmatrix} -I \\ S \end{pmatrix}( -I \,;\, T). \end{aligned}$$
(9.2)

In order to proceed, note that the domain of the sum is contained in \(\text{dom}\,S\times \text{dom}\,T\), which gives

$$\begin{aligned} \begin{pmatrix} T \\ I \end{pmatrix}( S \,;\, I{\upharpoonright }\,\text{dom}\,T ) +\begin{pmatrix} -I \\ S \end{pmatrix}( -I{\upharpoonright }\,\text{dom}\,S \,;\, T). \end{aligned}$$

Now the condition in Lemma 7.1 (ii) is satisfied and one can write

$$\begin{aligned} \left( \begin{pmatrix} T \\ I \end{pmatrix} S \,;\, \begin{pmatrix} T \\ I \end{pmatrix} \right) +\left( \begin{pmatrix} I \\ -S \end{pmatrix} \,;\, \begin{pmatrix} -I \\ S \end{pmatrix}T\right) . \end{aligned}$$

However, the following inclusions

$$\begin{aligned} \begin{pmatrix} T \\ I \end{pmatrix} S \subset \begin{pmatrix} TS \\ S \end{pmatrix}, \quad \begin{pmatrix} -I \\ S \end{pmatrix}T \subset \begin{pmatrix} -T \\ ST \end{pmatrix}, \end{aligned}$$

are, in general, strict by Lemma 7.2: this is the case if, for instance, \(\text{ker}\,T=\{0\}\), \(\text{mul}\,S\cap \text{dom}\,T\ne \{0\}\) and \(\text{ker}\,S=\{0\}\), \(\text{mul}\,T\cap \text{dom}\,S\ne \{0\}\), respectively. Thus a separate treatment of the summands in (9.2) does not lead to the desired identity (9.1) in Proposition 9.1.