Abstract
Columns and rows are operations for pairs of linear relations in Hilbert spaces, modelled on the componentwise sum and the usual sum of such pairs. Matrices whose entries are linear relations between the underlying component spaces are introduced via the row and column operations. The main purpose here is to formalize the operational calculus for block matrices whose entries are all linear relations. Each block of relations generates a unique linear relation between the Cartesian products of the initial and final Hilbert spaces; this relation has particular properties, which will be characterized. Special attention is paid to the formal matrix multiplication of two blocks of linear relations and to its connection with the usual product of the unique linear relations generated by them. In the present general setting these two products need not be connected to each other without additional conditions.
1 Introduction
The calculus of rows, columns, and block matrices, where the entries are unbounded operators, was studied in the paper of Möller and Szafraniec [13]; see also [8]. However, already in the case of \(2 \times 2\) blocks, it is natural for one of the entries to be multivalued; see [2, 6, 11, 18]. Even earlier, in [3], there was a suggestion of a \(2 \times 2\) block matrix where two of its entries were multivalued; the corresponding situation was considered in [5], where such blocks are shown to appear naturally as selfadjoint extensions of certain symmetric relations. The present paper offers an attempt to formalize the operational calculus for block matrices whose entries are all linear relations, along the lines of the treatment of linear relations between two Hilbert spaces in [1]. The basic idea in the context of linear relations is to first develop the notions of a row and a column consisting of a sequence of linear relations. This approach was taken in the study of matrices with unbounded operators in [13]. Rows and columns can be considered as the building blocks in the definition of a block matrix whose entries are linear relations. To keep the presentation simple, it suffices to concentrate on the special case of \(2\times 2\) blocks.
To explain the notions of row, column, and block, let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Recall that one can form a sum of \(H_1\) and \(H_2\) in different ways. The componentwise sum \(H_1 \, {\widehat{+}} \,H_2\) is defined as

$$\begin{aligned} H_1 \, {\widehat{+}} \, H_2 = \left\{ \{h_1+h_2, k_1+k_2\} :\, \{h_1,k_1\} \in H_1, \ \{h_2,k_2\} \in H_2 \right\} , \end{aligned}$$(1.1)
and the usual sum \(H_1+H_2\), like in the case of operators, is defined as

$$\begin{aligned} H_1+H_2 = \left\{ \{h, k_1+k_2\} :\, \{h,k_1\} \in H_1, \ \{h,k_2\} \in H_2 \right\} . \end{aligned}$$(1.2)
Of course, when \(H_{1}\) and \(H_{2}\) are operators, then these definitions make sense for their graphs (identifying graph and operator). The main point here is that in many situations one encounters a slightly different setting, as the Hilbert spaces in which these relations are defined or to which they map may be different: either \(H_1\) is a linear relation from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and \(H_2\) is a linear relation from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\), or \(H_1\) is a linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}_1\) and \(H_2\) is a linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}_2\). In the first case one needs the notion of row of relations, which extends the componentwise sum, while in the second case one needs the notion of column of relations, which extends the usual sum. In particular, these notions play a role when introducing \(2\times 2\) blocks of linear relations:
$$\begin{aligned} \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}, \end{aligned}$$

where each \(H_{ij}\) is a linear relation from \({{\mathfrak{H}}}_{j}\) to \({{\mathfrak{H}}}_{i}\). It turns out that one may speak of a column of two rows or of a row of two columns.
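The two sums above can be compared concretely in a finite-dimensional model, where a linear relation from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) is simply a subspace of \(\mathbb{R}^{n+m}\) spanned by the columns of a matrix. The following sketch (the helper names `span` and `usual_sum` are ours, not from the literature) contrasts the componentwise sum and the usual sum for an operator \(H_1\) and a purely multivalued relation \(H_2\):

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def usual_sum(G1, G2, n):
    """H1 + H2 = {{h, k1 + k2} : {h, k1} in H1, {h, k2} in H2}.

    G1, G2: matrices whose columns span the graphs in R^n x R^m;
    the first n rows carry the domain component h.
    """
    r1 = G1.shape[1]
    # coefficient pairs (x, y) with matching domain components: G1_h x = G2_h y
    _, s, Vt = np.linalg.svd(np.hstack([G1[:n], -G2[:n]]))
    null = Vt[int((s > TOL).sum()):]
    gens = [np.concatenate([G1[:n] @ v[:r1],
                            G1[n:] @ v[:r1] + G2[n:] @ v[r1:]]) for v in null]
    return span(gens, G1.shape[0])

# H1 = graph of h -> 2h on R;  H2 = {0} x R, purely multivalued.
H1 = span([[1.0, 2.0]], 2)
H2 = span([[0.0, 1.0]], 2)

componentwise = span(list(np.hstack([H1, H2]).T), 2)   # H1 ^+ H2
usual = usual_sum(H1, H2, n=1)                         # H1 + H2

# The componentwise sum is all of R x R, while the usual sum collapses to
# {0} x R, because dom H2 = {0} forces h = 0.
print(componentwise.shape[1], usual.shape[1])   # 2 1
```

The contrast is exactly the one exploited later: the componentwise sum ignores matching of domain elements, while the usual sum only sees the common domain.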
The operational calculus for these notions concerns their linear structure. In particular, the notions of row, column, and block will be characterized among all linear relations. When adjoints are involved, one is confronted with the occurrence of the various Hilbert spaces, so one needs to keep track of the relevant spaces. Recall that for the componentwise sum, the adjoint behaves as one would expect, but for the usual sum things are of course different. These facts have their counterparts in the behavior of adjoints for rows, columns, and blocks of linear relations. In addition, it is observed that multiplication of rows, columns, and blocks is a complicated process, because one has to keep track not only of the domains, but also of the multivalued parts.
The present paper is of an expository nature. It gives a brief survey of the above notions, along the lines of [13] where the case of operators was treated, but now in the context of linear relations, see also [4, 12, 14,15,16]. However, the appearance of multivalued parts in the entries of the block produces some new obstacles in this study. Finally, it is mentioned that different special cases of linear block relations with multivalued entries appear in the literature and that various notions and results given here can be applied, e.g., in the context of the sum of selfadjoint (or maximal sectorial) operators or relations, see [7, 8], in the extension theory for nondensely defined symmetric operators and symmetric relations, see, e.g., [6], and, for instance, in the study of linear relations whose domain and range are orthogonal, see [3, 5]. Also there is a connection to the recent papers [19] and [24], which will be explained.
2 Linear relations and their adjoints
Let H be a linear relation from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\), i.e., H is a linear subspace of the product space \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\). Its multivalued part \(H_{\text{mul}\,}=\{0\} \times \text{mul}\,H\), where \(\text{mul}\,H=\{k \in {{\mathfrak{K}}}:\, \{0,k\} \in H\}\), is a purely multivalued relation. The closure \(\overline{H}\) of H is a subspace of \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), and the relation H is closed if and only if \(H=\overline{H}\). The adjoint \(H^*\) of H is defined as a relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\) by

$$\begin{aligned} H^*=\left\{ \{f,g\} \in {{\mathfrak{K}}}\times{{\mathfrak{H}}}:\, (f,k)=(g,h) \, \text{ for all } \, \{h,k\} \in H \right\} . \end{aligned}$$
Then it is clear that

$$\begin{aligned} H^*=(JH)^\perp =J(H^\perp ), \end{aligned}$$(2.1)
where the orthogonal complements refer to the componentwise inner product of \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) and \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), respectively. Here J from \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\) to \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) is defined by

$$\begin{aligned} J\{h,k\}=\{k,-h\}, \quad \{h,k\} \in {{\mathfrak{H}}}\times{{\mathfrak{K}}}. \end{aligned}$$
Clearly, if H and K are relations with \(H \subset K\) then \(K^* \subset H^{*}\). It also follows from (2.1) that \(H^{*}\) is a closed linear relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\). Note that (2.1) gives \(J^{-1}H^*=H^\perp \), i.e.

$$\begin{aligned} (J^{-1}H^*)^\perp =(H^\perp )^\perp =\overline{H}. \end{aligned}$$
Since \(J^{-1}\) is the flip-flop operator from \({{\mathfrak{K}}}\times{{\mathfrak{H}}}\) to \({{\mathfrak{H}}}\times{{\mathfrak{K}}}\), the left-hand side coincides with \(H^{**}\) and hence \(H^{**}=\overline{H}\). The orthogonal decompositions

$$\begin{aligned} {{\mathfrak{H}}}=\overline{\text{dom}\,}H \oplus \text{mul}\,H^*, \quad {{\mathfrak{K}}}=\overline{\text{dom}\,}H^* \oplus \text{mul}\,H^{**} \end{aligned}$$(2.2)
are now clear from (2.1). For a brief introduction to these notions, see [2]. The following additive decomposition for linear relations is an operator-theoretic version of the Lebesgue decomposition of measures; see [9, Theorem 4.1].
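Before turning to that decomposition, the adjoint construction above can be carried out literally in a finite-dimensional model: represent a relation by a matrix whose columns span its graph, apply the flip \(J\{h,k\}=\{k,-h\}\), and take the orthogonal complement. A sketch under these assumptions (the helper names are ours); since every subspace of a finite-dimensional space is closed, \(H^{**}=\overline{H}=H\) here:

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def complement(B, dim):
    """Orthonormal basis for the orthogonal complement of span(B) in R^dim."""
    U, s, _ = np.linalg.svd(B, full_matrices=True)
    return U[:, int((s > TOL).sum()):]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

def adjoint(G, n):
    """H^* = (JH)^perp with J{h, k} = {k, -h}; first n rows of G carry h."""
    JH = np.vstack([G[n:], -G[:n]])
    return complement(JH, G.shape[0])

# H from R^2 to R with dom H = span{e1} and e1 -> 3.
H = span([[1.0, 0.0, 3.0]], 3)
Hstar = adjoint(H, n=2)          # subspace of R^1 x R^2
Hstarstar = adjoint(Hstar, n=1)  # back in R^2 x R^1

print(same_span(Hstarstar, H))   # True: H** equals the (closed) relation H

# mul H^* = (dom H)^perp: the adjoint picks up a multivalued part exactly
# on the orthogonal complement of the domain.
_, s, Vt = np.linalg.svd(Hstar[:1])
mul_Hstar = span([Hstar[1:] @ v for v in Vt[int((s > TOL).sum()):]], 2)
dom_H = span([[1.0, 0.0]], 2)
print(same_span(mul_Hstar, complement(dom_H, 2)))   # True
```

The second check is the finite-dimensional shadow of the decomposition \({\mathfrak{H}}=\overline{\text{dom}\,}H \oplus \text{mul}\,H^*\).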
Theorem 2.1
Let H be a relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and let Q be the orthogonal projection in \({{\mathfrak{K}}}\) onto \(\overline{\text{dom}\,}H^*\). Then H allows the following sum decomposition

$$\begin{aligned} H=QH+(I-Q)H, \end{aligned}$$(2.3)
where the relations QH and \((I-Q)H\) have the following properties:
-
(a)
QH is a closable operator;
-
(b)
\(\text{clos}\,((I-Q)H)=\overline{\text{dom}\,}H \times \text{mul}\,H^{**}\).
The closure of the so-called regular part QH is (the graph of) an operator, while the closure of the so-called singular part \((I-Q)H\) is a closed singular relation, i.e., the product of two closed subspaces. The so-called Lebesgue decomposition (2.3) for a relation H from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) gives rise to a componentwise direct sum decomposition when \(\text{mul}\,H=\text{mul}\,H^{**}\); see [10, Theorem 3.10, Corollary 3.14].
Proposition 2.1
Let H be a relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and let Q be the orthogonal projection in \({{\mathfrak{K}}}\) onto \(\overline{\text{dom}\,}H^*\). Assume that

$$\begin{aligned} \text{mul}\,H=\text{mul}\,H^{**}, \end{aligned}$$(2.4)
so that \({{\mathfrak{K}}}\) can be decomposed as \({{\mathfrak{K}}}=\overline{\text{dom}\,}H^* \oplus \text{mul}\,H\). Then \(QH \subset H\) and the relation H has the following direct sum decomposition

$$\begin{aligned} H=QH \, \widehat{+} \, \left( \{0\} \times \text{mul}\,H\right) , \end{aligned}$$(2.5)
where QH is a closable operator from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and \(\{0\} \times \text{mul}\,H\) is a purely multivalued relation in \(\text{mul}\,H\). Moreover, if the relation H is closed, then (2.4) is automatically satisfied and the operator QH is closed.
If \(\text{mul}\,H=\text{mul}\,H^{**}\), then \(H_{\text{s}}=QH\) is the operator part of H, so that (2.5) reads \(H=H_{\text{s}} \, \widehat{+} \,H_{\text{mul}}\). Clearly, if H is closed then \(\text{mul}\,H=\text{mul}\,H^{**}\), so that a closed linear relation always has a closed orthogonal operator part. Note that \(H^{**}\) is a closed linear relation from \({{\mathfrak{H}}}\) to \({{\mathfrak{K}}}\) and it follows from (2.2) that \((H^{**})_{\text{s}}\) takes \(\text{dom}\,H^{**}\) to \(\overline{\text{dom}\,}H^*\). Likewise, \(H^{*}\) is a closed linear relation from \({{\mathfrak{K}}}\) to \({{\mathfrak{H}}}\), so that \((H^{*})_{\text{s}}\) takes \(\text{dom}\,H^{*}\) to \(\overline{\text{dom}\,}H^{**}\).
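The split into operator part and multivalued part can be traced numerically in the finite-dimensional model used before (helper names ours): \(\text{mul}\,H\) consists of the range components of graph elements with vanishing domain component, and for a closed H the projection Q acts onto \((\text{mul}\,H)^\perp =\overline{\text{dom}\,}H^*\). A sketch:

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

# H from R to R^2: H = {{t, (t, s)} : t, s real}, a closed relation with a
# nontrivial multivalued part.
n, m = 1, 2
H = span([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]], n + m)

# mul H = range components of graph elements whose domain component vanishes.
_, s, Vt = np.linalg.svd(H[:n])
mul_H = span([H[n:] @ v for v in Vt[int((s > TOL).sum()):]], m)

# Q projects onto (mul H)^perp; for closed H this is the closure of dom H*.
Q = np.eye(m) - mul_H @ mul_H.T

# Operator part QH: apply I (+) Q to the graph generators.
QH = span([np.concatenate([g[:n], Q @ g[n:]]) for g in H.T], n + m)

# QH is the graph of the operator t -> (t, 0) ...
print(same_span(QH, span([[1.0, 1.0, 0.0]], n + m)))            # True
# ... and H = QH ^+ ({0} x mul H), a componentwise sum decomposition.
zero_mul = span([np.concatenate([np.zeros(n), g]) for g in mul_H.T], n + m)
print(same_span(np.hstack([QH, zero_mul]), H))                  # True
```

In this closed finite-dimensional example \(\text{mul}\,H=\text{mul}\,H^{**}\) holds automatically, so QH is exactly the orthogonal operator part \(H_{\text{s}}\).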
Lemma 2.1
Let H be a linear relation from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). The following statements are equivalent:
-
(i)
\((H^{**})_{\text{s}}\) is a bounded operator;
-
(ii)
\((H^{*})_{\text{s}}\) is a bounded operator;
-
(iii)
\(\text{dom}\,H^{**}\) is closed;
-
(iv)
\(\text{dom}\,H^*\) is closed.
Proof
Observe that \(\text{dom}\,(H^{**})_{\text{s}}=\text{dom}\,H^{**}\) and \(\text{dom}\,(H^{*})_{\text{s}}=\text{dom}\,H^{*}\), and that these spaces are simultaneously closed. \(\square \)
Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Then one can form a sum of \(H_1\) and \(H_2\) in different ways. The componentwise sum \(H_1 \, \widehat{+} \,H_2\) is defined as (1.1), and the usual sum \(H_1+H_2\), like in the case of operators, is defined as (1.2). The adjoints of these sums behave in the usual way. The following statements are straightforward to check.
Lemma 2.2
Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\). Then

$$\begin{aligned} (H_1 \, \widehat{+} \, H_2)^*=H_1^* \cap H_2^* \quad \text{and} \quad H_1^*+H_2^* \subset (H_1+H_2)^*. \end{aligned}$$(2.6)
The inclusion in (2.6) is actually an equality if \(H_2 \in \mathbf{B}({{\mathfrak{H}}},{{\mathfrak{K}}})\); in fact, there is equality under slightly more general circumstances.
Corollary 2.1
Let \(H_1\) and \(H_2\) be linear relations from a Hilbert space \({{\mathfrak{H}}}\) to a Hilbert space \({{\mathfrak{K}}}\) and assume that

$$\begin{aligned} \text{dom}\,H_1 \subset \text{dom}\,H_2, \quad \overline{\text{mul}\,}H_2=\text{mul}\,H_2^{**}, \quad \text{dom}\,H_2^{**} \, \text{ is closed}. \end{aligned}$$(2.7)
Then

$$\begin{aligned} (H_1+H_2)^*=H_1^*+H_2^*. \end{aligned}$$(2.8)
Proof
To see (2.8), let \(\{f,g\} \in (H_1+H_2)^*\). Then

$$\begin{aligned} (f,k_1+k_2)=(g,h) \quad \text{for all} \quad \{h,k_1\} \in H_1, \ \{h,k_2\} \in H_2. \end{aligned}$$(2.9)
In particular, with the choice \(h=0\), \(k_1=0\), and \(k_2\in \text{mul}\,H_2\) one concludes, by the assumption \(\overline{\text{mul}\,}H_2=\text{mul}\,H_2^{**}\), that

$$\begin{aligned} f \perp \text{mul}\,H_2^{**}, \quad \text{i.e.,} \quad f \in \overline{\text{dom}\,}H_2^*. \end{aligned}$$
Under the assumption that \( \text{dom}\,H_2^{**}\) is closed, one has that \(\overline{\text{dom}\,}H_2^*=\text{dom}\,H_2^*\); see Lemma 2.1. Thus one has \(f \in \text{dom}\,H_2^*\) and hence there exists \(\varphi \in{{\mathfrak{H}}}\) such that \(\{f,\varphi \} \in H_2^*\). Now let \(\{h,k_1\} \in H_1\). Then the assumption \(\text{dom}\,H_1 \subset \text{dom}\,H_2\) shows \(\{h,k_2\} \in H_2\) for some \(k_2 \in{{\mathfrak{K}}}\). Therefore \((f,k_2)=(\varphi ,h)\), and (2.9) gives

$$\begin{aligned} (f,k_1)=(g-\varphi ,h) \end{aligned}$$
for all \(\{h,k_1\} \in H_1\), so that \(\{f, g-\varphi \} \in H_1^*\). Hence \(\{f,g\} \in H_1^*+H_2^*\). This establishes the inclusion \((H_1+H_2)^* \subset H_1^*+H_2^*\). \(\square \)
The equality in (2.8) has received attention in [17,18,19,20,21,22,23,24] recently. The results in the present paper are closely related and can be used in these considerations; cf. Proposition 9.1.
3 Rows and columns
Rows and columns are the analogs of the componentwise sum and the usual sum for a pair of relations in (1.1) and (1.2).
Rows. Let \({{\mathfrak{H}}}_1\), \({{\mathfrak{H}}}_2\), and \({{\mathfrak{K}}}\) be Hilbert spaces. Let \(R_1\) be a linear relation from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and let \(R_2\) be a linear relation from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\). Then the row \((R_1; R_2)\) of \(R_1\) and \(R_2\) as a linear relation from \({{\mathfrak{H}}}_1 \oplus{{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\) is defined by

$$\begin{aligned} (R_1\,;\,R_2)=\left\{ \{h_1 \oplus h_2, k_1+k_2\} :\, \{h_1,k_1\} \in R_1, \ \{h_2,k_2\} \in R_2 \right\} . \end{aligned}$$(3.1)
Observe that

$$\begin{aligned} \text{dom}\,(R_1\,;\,R_2)=\text{dom}\,R_1 \oplus \text{dom}\,R_2, \quad \text{mul}\,(R_1\,;\,R_2)=\text{mul}\,R_1+\text{mul}\,R_2. \end{aligned}$$
Notice that the definition (3.1) associates to a row \((R_1\,;\,R_2)\) a unique linear relation from \({{\mathfrak{H}}}_1 \oplus{{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\). However, the same relation can clearly be generated by different choices of the entries \(R_1\) and \(R_2\). In this sense rows \((R_1\,;\,R_2)\) which generate the same linear relation can be used to introduce an equivalence relation in the set of all rows \((R_1\,;\,R_2)\) of linear relations \(R_1\) from \({{\mathfrak{H}}}_1\) to \({{\mathfrak{K}}}\) and \(R_2\) from \({{\mathfrak{H}}}_2\) to \({{\mathfrak{K}}}\).
The row of \(R_1\) and \(R_2\) resembles a componentwise sum of linear relations once the domain spaces of \(R_1\) and \(R_2\) are combined orthogonally in the above way.
The linear relations from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\) that can be written as a row of linear relations will now be characterized. Let R be a linear relation from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\), then by identifying \({\mathfrak{H}}_1 \oplus \{0\}\) with \({\mathfrak{H}}_1\) and \(\{0\} \oplus {\mathfrak{H}}_2\) with \({\mathfrak{H}}_2\), one obtains that \(R{\upharpoonright }\,_{{\mathfrak{H}}_1}\) and \(R{\upharpoonright }\,_{{\mathfrak{H}}_2}\) are linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Hence the linear relation R induces a row:

$$\begin{aligned} (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} ), \end{aligned}$$(3.2)
and it is clear that \((R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} ) \subset R\).
Lemma 3.1
Let R be a linear relation from \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}\). Then the following statements are equivalent:
-
(i)
R is a row;
-
(ii)
\(\text{dom}\,R=(\text{dom}\,R) \cap {\mathfrak{H}}_1 \oplus (\text{dom}\,R) \cap {\mathfrak{H}}_2\);
-
(iii)
\(R = (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} )\).
Proof
(i) \(\Rightarrow \) (ii) This is clear from (3.1).
(ii) \(\Rightarrow \) (iii) It suffices to show that \(R \subset (R{\upharpoonright }\,_{{\mathfrak{H}}_1} \,;\, R{\upharpoonright }\,_{{\mathfrak{H}}_2} )\). Thus let
By assumption, both \(h_1\) and \(h_2\) belong to \(\text{dom}\,R\). Hence, there exist elements such that
and \(k_1+k_2-\ell _1-\ell _2 \in \text{mul}\,R\). It is clear that elements of the form \(\{0,\varphi \} \in R\) belong to both \(R{\upharpoonright }\,_{{\mathfrak{H}}_1}\) and \(R{\upharpoonright }\,_{{\mathfrak{H}}_2}\). This completes the argument.
(iii) \(\Rightarrow \) (i) This implication is clear. \(\square \)
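Lemma 3.1 can be checked in the finite-dimensional model used before: a row is obtained by embedding the generators of \(R_1\) and \(R_2\) into \({\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\), and the domain then splits as in statement (ii). A sketch (helper names ours):

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

def row(G1, G2, n1, n2):
    """(R1 ; R2) from R^{n1} (+) R^{n2} to R^m, via embedded generators."""
    m = G1.shape[0] - n1
    gens = [np.concatenate([g[:n1], np.zeros(n2), g[n1:]]) for g in G1.T]
    gens += [np.concatenate([np.zeros(n1), g[:n2], g[n2:]]) for g in G2.T]
    return span(gens, n1 + n2 + m)

# R1 = identity on R; R2 = {0} x R, purely multivalued.
R1 = span([[1.0, 1.0]], 2)
R2 = span([[0.0, 1.0]], 2)
R = row(R1, R2, 1, 1)

# dom (R1 ; R2) = dom R1 (+) dom R2 = R (+) {0}: the domain splits as in
# Lemma 3.1 (ii).
dom_R = span(list(R[:2].T), 2)
print(same_span(dom_R, span([[1.0, 0.0]], 2)))   # True

# mul (R1 ; R2) = mul R1 + mul R2 = R.
_, s, Vt = np.linalg.svd(R[:2])
mul_R = span([R[2:] @ v for v in Vt[int((s > TOL).sum()):]], 1)
print(mul_R.shape[1])                            # 1
```

Note that the multivalued parts add up without any orthogonality, in contrast with the behavior of domains.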
Moreover, if \(R_1'\) is a linear relation from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and \(R_2'\) is a linear relation from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), such that \(R_1 \subset R_1'\) and \(R_2 \subset R_2'\), then by (3.1), it is clear that the extensions are preserved in the sense of the row

$$\begin{aligned} (R_1\,;\,R_2) \subset (R_1'\,;\,R_2'). \end{aligned}$$
Columns. Now let \({\mathfrak{H}}\), \({\mathfrak{K}}_1\), and \({\mathfrak{K}}_2\) be Hilbert spaces. Let \(C_1\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and let \(C_2\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\). Then the column \(\text{col}\,(C_1\,;\,C_2)\) of \(C_1\) and \(C_2\), as a linear relation from \({\mathfrak{H}}\) to the orthogonal sum \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\), is defined by

$$\begin{aligned} \text{col}\,(C_1\,;\,C_2)=\left\{ \{h, k_1 \oplus k_2\} :\, \{h,k_1\} \in C_1, \ \{h,k_2\} \in C_2 \right\} . \end{aligned}$$(3.3)
Observe that

$$\begin{aligned} \text{dom}\,\text{col}\,(C_1\,;\,C_2)=\text{dom}\,C_1 \cap \text{dom}\,C_2, \quad \text{mul}\,\text{col}\,(C_1\,;\,C_2)=\text{mul}\,C_1 \oplus \text{mul}\,C_2. \end{aligned}$$
Again the definition (3.3) associates to a column \(\text{col}\,(C_1\,;\,C_2)\) a unique linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) and it determines an equivalence relation in the set of all columns \(\text{col}\,(C_1\,;\,C_2)\) of linear relations \(C_1\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and \(C_2\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\).
The column of \(C_1\) and \(C_2\) resembles a usual sum of linear relations once the range spaces of \(C_1\) and \(C_2\) are combined orthogonally in the above way.
The linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) that can be written as a column of linear relations will now be characterized. Let C be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\). The orthogonal projections from \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) to \({\mathfrak{K}}_1\) and \({\mathfrak{K}}_2\) are denoted by \(P_1\) and \(P_2\), respectively. Then \(P_1C\) and \(P_2C\) are linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Hence the linear relation C induces a column:

$$\begin{aligned} \text{col}\,(P_1C\,;\,P_2C), \end{aligned}$$(3.4)
and it is clear that \(C \subset \text{col}\,(P_1 C ; P_2 C)\).
Lemma 3.2
Let C be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\). Then the following statements are equivalent:
-
(i)
C is a column;
-
(ii)
\(\text{mul}\,C=(\text{mul}\,C) \cap {\mathfrak{K}}_1 \oplus (\text{mul}\,C) \cap {\mathfrak{K}}_2\);
-
(iii)
\(C = \text{col}\,(P_1 C ; P_2 C)\).
Proof
(i) \(\Rightarrow \) (ii) This is clear from (3.3).
(ii) \(\Rightarrow \) (iii) It suffices to show that \(\text{col}\,(P_1 C ; P_2 C) \subset C\). Thus let
Then \(\{h,k_1\} \in P_1C\) and \(\{h,k_2\} \in P_2C\), so that there exist elements, for which
Observe that
which by assumption leads to
Thus, in particular, it follows that
which completes the argument.
(iii) \(\Rightarrow \) (i) This implication is clear. \(\square \)
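Lemma 3.2 has a parallel finite-dimensional illustration (helper names ours): a column is assembled by pairing graph elements of \(C_1\) and \(C_2\) that share the same domain element, and the multivalued part then splits orthogonally as in statement (ii):

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

def column(G1, G2, n):
    """col(C1 ; C2): pair graph elements of C1 and C2 sharing the same h."""
    r1 = G1.shape[1]
    _, s, Vt = np.linalg.svd(np.hstack([G1[:n], -G2[:n]]))
    null = Vt[int((s > TOL).sum()):]
    gens = [np.concatenate([G1[:n] @ v[:r1], G1[n:] @ v[:r1], G2[n:] @ v[r1:]])
            for v in null]
    return span(gens, G1.shape[0] + G2.shape[0] - n)

# C1 = identity on R; C2 = the relation whose graph is all of R x R
# (dom C2 = R and mul C2 = R).
C1 = span([[1.0, 1.0]], 2)
C2 = span([[1.0, 2.0], [0.0, 1.0]], 2)
C = column(C1, C2, 1)

# col(C1 ; C2) = {{h, (h, k)} : h, k real} ...
print(same_span(C, span([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]], 3)))   # True
# ... and, as in Lemma 3.2 (ii), mul C = {0} (+) R splits orthogonally.
_, s, Vt = np.linalg.svd(C[:1])
mul_C = span([C[1:] @ v for v in Vt[int((s > TOL).sum()):]], 2)
print(same_span(mul_C, span([[0.0, 1.0]], 2)))                     # True
```

The pairing over a common domain element is precisely what makes the column the analog of the usual sum.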
Moreover, if \(C_1'\) is a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and \(C_2'\) is a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), such that \(C_1 \subset C_1'\) and \(C_2 \subset C_2'\), then by (3.3), it is clear that the extensions are preserved in the sense of the column

$$\begin{aligned} \text{col}\,(C_1\,;\,C_2) \subset \text{col}\,(C_1'\,;\,C_2'). \end{aligned}$$
Conversely, the left-hand side is called a restriction of the right-hand side. Note that any column can be reduced in the sense that \(C_1\) and \(C_2\) may be restricted to their common domain \({\mathfrak{D}}={\text{dom}}\,C_1 \cap {\text{dom}}\,C_2\):

$$\begin{aligned} \text{col}\,(C_1\,;\,C_2)=\text{col}\,(C_1{\upharpoonright }\,_{{\mathfrak{D}}}\,;\,C_2{\upharpoonright }\,_{{\mathfrak{D}}}). \end{aligned}$$
4 Adjoints of rows and columns
In terms of adjoints, the operations of rows and columns behave formally like componentwise sums and usual sums of linear relations; cf. Lemma 2.2. But recall that the definition of an adjoint relation depends on the Hilbert spaces in which the original relation is defined. Hence for rows and columns of a pair of linear relations one has to keep track of which Hilbert spaces are involved when taking adjoints. The formulas (4.1) and (4.2) in the next result can be found in [8, Proposition 2.1] and are proved here for completeness.
Proposition 4.1
Let \(R_1\) and \(R_2\) be linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Then

$$\begin{aligned} (R_1\,;\,R_2)^*=\begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}. \end{aligned}$$(4.1)
Moreover, let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Then

$$\begin{aligned} (C_1^*\,;\,C_2^*) \subset \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*. \end{aligned}$$(4.2)
Moreover, there is equality in (4.2) if and only if \((\text{col}\,(C_1\,;\,C_2))^*\) is a row from \({\mathfrak{K}}_1\oplus {\mathfrak{K}}_2\) to \({\mathfrak{H}}\).
Proof
In order to prove the equality (4.1), let

$$\begin{aligned} \{f, g_1 \oplus g_2\} \in (R_1\,;\,R_2)^*. \end{aligned}$$
By (3.1) this is equivalent to

$$\begin{aligned} (f,k_1+k_2)=(g_1 \oplus g_2, h_1 \oplus h_2)=(g_1,h_1)+(g_2,h_2) \end{aligned}$$
for all \(\{h_1,k_1\} \in R_1\) and \(\{h_2,k_2\} \in R_2\), or to

$$\begin{aligned} (f,k_1)=(g_1,h_1) \quad \text{and} \quad (f,k_2)=(g_2,h_2), \end{aligned}$$
or, in other words,

$$\begin{aligned} \{f,g_1\} \in R_1^* \quad \text{and} \quad \{f,g_2\} \in R_2^*, \quad \text{i.e.,} \quad \{f, g_1 \oplus g_2\} \in \begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}. \end{aligned}$$
This shows the equality in (4.1).
In order to prove (4.2), consider the element

$$\begin{aligned} \{f_1 \oplus f_2, g_1+g_2\} \in (C_1^*\,;\,C_2^*), \end{aligned}$$
where \(\{f_1,g_1\} \in C_1^*\) and \(\{f_2,g_2\} \in C_2^*\). For any element in \(\text{col}\,(C_1\,;\,C_2)\) of the form

$$\begin{aligned} \{h, k_1 \oplus k_2\}, \quad \{h,k_1\} \in C_1, \ \{h,k_2\} \in C_2, \end{aligned}$$
it then follows that

$$\begin{aligned} (f_1 \oplus f_2, k_1 \oplus k_2)=(f_1,k_1)+(f_2,k_2)=(g_1,h)+(g_2,h)=(g_1+g_2,h). \end{aligned}$$
Therefore one sees that

$$\begin{aligned} \{f_1 \oplus f_2, g_1+g_2\} \in \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*, \end{aligned}$$
which shows the inclusion (4.2).
If there is equality in (4.2), then \((\text{col}\,( C_1\,;\,C_2))^*\) is a row. Conversely, assume that \((\text{col}\,( C_1\,;\,C_2))^*\) is a row, i.e.,

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*=(R_1\,;\,R_2) \end{aligned}$$(4.3)
for some linear relations \(R_1\) and \(R_2\) from \({\mathfrak{K}}_1\) to \({\mathfrak{H}}\) and from \({\mathfrak{K}}_2\) to \({\mathfrak{H}}\), respectively. Then it follows from (4.1) that

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \subset \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^{**}=(R_1\,;\,R_2)^*=\begin{pmatrix} R_1^* \\ R_2^* \end{pmatrix}, \end{aligned}$$
which gives \(C_1 \subset R_1^*\) and \(C_2 \subset R_2^*\). Hence \(R_1 \subset R_1^{**} \subset C_1^*\) and \(R_2 \subset R_2^{**} \subset C_2^*\). Taking into account (4.2) and (4.3), this leads to

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*=(R_1\,;\,R_2) \subset (C_1^*\,;\,C_2^*) \subset \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*, \end{aligned}$$
which gives equality in (4.2). \(\square \)
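Proposition 4.1 can be verified in the finite-dimensional model used before (helper names ours): the adjoint is computed by the flip-and-complement recipe of Sect. 2 and compared with the column, respectively the row, of the adjoints. Since every subspace is closed in finite dimensions, the hypotheses of Lemma 4.1 are automatic, so in this model the inclusion (4.2) also becomes an equality (in general it can be strict):

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def complement(B, dim):
    """Orthonormal basis for the orthogonal complement of span(B) in R^dim."""
    U, s, _ = np.linalg.svd(B, full_matrices=True)
    return U[:, int((s > TOL).sum()):]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

def adjoint(G, n):
    """H^* = (JH)^perp with J{h, k} = {k, -h}; first n rows of G carry h."""
    return complement(np.vstack([G[n:], -G[:n]]), G.shape[0])

def row(G1, G2, n1, n2):
    """(R1 ; R2): embed the generators into R^{n1} (+) R^{n2}."""
    gens = [np.concatenate([g[:n1], np.zeros(n2), g[n1:]]) for g in G1.T]
    gens += [np.concatenate([np.zeros(n1), g[:n2], g[n2:]]) for g in G2.T]
    return span(gens, G1.shape[0] + n2)

def column(G1, G2, n):
    """col(C1 ; C2): pair graph elements sharing the same domain element."""
    r1 = G1.shape[1]
    _, s, Vt = np.linalg.svd(np.hstack([G1[:n], -G2[:n]]))
    null = Vt[int((s > TOL).sum()):]
    gens = [np.concatenate([G1[:n] @ v[:r1], G1[n:] @ v[:r1], G2[n:] @ v[r1:]])
            for v in null]
    return span(gens, G1.shape[0] + G2.shape[0] - n)

# Entries from R to R: an operator and a purely multivalued relation.
R1 = span([[1.0, 2.0]], 2)     # h -> 2h
R2 = span([[0.0, 1.0]], 2)     # {0} x R

# (4.1): the adjoint of the row is the column of the adjoints.
lhs = adjoint(row(R1, R2, 1, 1), n=2)
rhs = column(adjoint(R1, 1), adjoint(R2, 1), n=1)
print(same_span(lhs, rhs))     # True

# (4.2): here the closure obstructions disappear and equality holds.
lhs2 = adjoint(column(R1, R2, 1), n=1)
rhs2 = row(adjoint(R1, 1), adjoint(R2, 1), 1, 1)
print(same_span(lhs2, rhs2))   # True
```

The strictness of (4.2) is thus a genuinely infinite-dimensional phenomenon; see the example with two positive operators at the end of Sect. 4.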
In [13] the term formal adjoint of a row (of the column) is used for the right-hand side of (4.1) (respectively, for the left-hand side of (4.2)). Although the case of equality in (4.2) has been characterized, it is useful to have some sufficient concrete conditions; cf. (2.7) in Corollary 2.1.
Lemma 4.1
Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Assume that \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\), and that \(\text{dom}\,C_2^{**}\) is closed. Then

$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*=(C_1^*\,;\,C_2^*). \end{aligned}$$(4.4)
Proof
It suffices to show that the right-hand side of (4.2) is contained in the left-hand side under the given assumptions. To see this inclusion, let

$$\begin{aligned} \{f_1 \oplus f_2, g\} \in \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^*. \end{aligned}$$
This means that

$$\begin{aligned} (f_1,k_1)+(f_2,k_2)=(g,h) \quad \text{for all} \quad \{h,k_1\} \in C_1, \ \{h,k_2\} \in C_2. \end{aligned}$$(4.5)
The choice \(h=0\), \(k_1=0\), and \(k_2\in \text{mul}\,C_2\) together with \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\) leads to

$$\begin{aligned} f_2 \perp \text{mul}\,C_2^{**}, \quad \text{i.e.,} \quad f_2 \in \overline{\text{dom}\,}C_2^*. \end{aligned}$$
By assumption \(\text{dom}\,C_2^{**}\) is closed and thus also \(\text{dom}\,C_2^{*}\) is closed; see Lemma 2.1. One concludes that, in fact, \(f_2 \in \text{dom}\,C_2^*\). Thus, there exists an element \(\varphi \) such that \(\{f_2, \varphi \} \in C_2^*\). This means that

$$\begin{aligned} (f_2,k_2)=(\varphi ,h) \end{aligned}$$(4.6)
holds for all \(\{h,k_2\}\in C_2\). Now by combining (4.5), (4.6), and the condition \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), one obtains

$$\begin{aligned} (f_1,k_1)=(g-\varphi ,h) \quad \text{for all} \quad \{h,k_1\} \in C_1. \end{aligned}$$
Thus \(\{f_1, g-\varphi \} \in C_1^*\) and since \(\{f_2, \varphi \} \in C_2^*\), one therefore obtains

$$\begin{aligned} \{f_1 \oplus f_2, g\}=\{f_1 \oplus f_2, (g-\varphi )+\varphi \} \in (C_1^*\,;\,C_2^*). \end{aligned}$$
This completes the proof. \(\square \)
The first statement in the following corollary is clear by itself, but it is an automatic consequence of the above reasoning as well.
Corollary 4.1
-
(a)
If the linear relations \(C_1\) and \(C_2\) from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively, are closed, then the column \(\text{col}\,(C_1\,;\,C_2)\) is closed.
-
(b)
If the linear relations \(R_1\) and \(R_2\) from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively, are closed, while \(\text{dom}\,R_1^* \subset \text{dom}\,R_2^*\) and \(\text{dom}\,R_2^*\) is closed, then the row \((R_1\,;\,R_2)\) is closed.
Proof
-
(a)
Apply (4.1) in Proposition 4.1 with \(R_1=C_1^*\) and \(R_2=C_2^*\). Then one sees that
$$\begin{aligned} \begin{pmatrix} C_1^{**} \\ C_2^{**} \end{pmatrix} = (C_1^*\,;\,C_2^*)^*. \end{aligned}$$This implies that the column \(\text{col}\,( C_1^{**} \,;\, C_2^{**})\) is closed; in particular, the column of closed linear relations \(C_1\) and \(C_2\) is closed.
-
(b)
To see this, one applies Lemma 4.1 with \(C_1=R_1^*\) and \(C_2=R_2^*\).
\(\square \)
A repeated application of Proposition 4.1 and Lemma 4.1 leads to the following straightforward results.
Corollary 4.2
-
(a)
Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), respectively. Then
$$\begin{aligned} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}^{**} \subset \begin{pmatrix} C_1^{**} \\ C_2^{**} \end{pmatrix}. \end{aligned}$$(4.7)If, in addition, \(\text{dom}\,C_1 \subset \text{dom}\,C_2\), \(\overline{\text{mul}\,}C_2=\text{mul}\,C_2^{**}\), and \(\text{dom}\,C_2^{**}\) is closed, then equality holds in (4.7).
-
(b)
Let \(R_1\) and \(R_2\) be linear relations from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\), respectively. Then
$$\begin{aligned} (R_1^{**}\,;\,R_2^{**}) \subset (R_1\,;\,R_2)^{**}. \end{aligned}$$(4.8)If, in addition, \(\text{dom}\,R_1^* \subset \text{dom}\,R_2^*\) and \(\text{dom}\,R_2^*\) is closed, then equality holds in (4.8).
There are some useful special cases for equality in (4.4).
Corollary 4.3
Let \(\text{dom}\,C_1 \subset \text{dom}\,C_2\) and let \(C_2\) be densely defined and bounded (i.e. \(\Vert g\Vert \le C \Vert f\Vert \) for all \(\{f,g\} \in C_2\)). Then \(C_2\) is an operator and (4.4) holds.
Proof
Since \(C_2\) is bounded, it is an operator and, by assumption, it is densely defined. Then \(C_2^{**} \in \mathbf{B}({\mathfrak{H}}, {\mathfrak{K}}_2)\) and hence \(C_2^{*} \in \mathbf{B}({\mathfrak{K}}_2, {\mathfrak{H}})\), \(\text{mul}\,C_2^{**}=\text{mul}\,C_2=\{0\}\), and \(\text{dom}\,C_2^{**}\) is closed. \(\square \)
Corollary 4.4
Let \(C_1\) be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and let \(C_2= \mathfrak{M}\times \mathfrak{N}\) be a singular relation, where \(\mathfrak{M}\) is a linear subspace of \({\mathfrak{H}}\) and \(\mathfrak{N}\) is a closed linear subspace of \({\mathfrak{K}}_2\). Assume that \(\text{dom}\,C_1 \subset \mathfrak{M}\). Then

$$\begin{aligned} \begin{pmatrix} C_1 \\ {\mathfrak{M}}\times {\mathfrak{N}} \end{pmatrix}^*=(C_1^*\,;\,{\mathfrak{N}}^\perp \times {\mathfrak{M}}^\perp ). \end{aligned}$$
Proof
By the definition in (3.3) one observes that

$$\begin{aligned} \text{col}\,(C_1\,;\,{\mathfrak{M}}\times {\mathfrak{N}})=\text{col}\,(C_1\,;\,{\text{dom}}\,C_1 \times {\mathfrak{N}}), \end{aligned}$$
and note that \(({\text{dom}}\,C_1 \times {\mathfrak{N}})^*={\mathfrak{N}}^\perp \times {\text{mul}}\,C_1^*\). Now apply Lemma 4.1. \(\square \)
Observe that in Corollary 4.4 one has \({\mathfrak{M}}^\perp ={\text{mul}}\,C_1^{**}\) if and only if \(\overline{\text{dom}\,}C_1=\overline{\mathfrak{M}}\). However, under the condition \(\text{dom}\,C_1 \subset \mathfrak{M}\) also the following equality holds:
Indeed, \(\text{dom}\,C_1 \subset \mathfrak{M}\) implies \({\mathfrak{M}}^\perp \subset {\text{mul}}\,C_1^{**}\) and hence the inclusion
can be strict. However, \({\text{mul}}\,(C_1^* \,;\, {\mathfrak{N}}^\perp \times {\mathfrak{M}}^\perp ) ={\text{mul}}\,C_1^{**}+{\mathfrak{M}}^\perp ={\text{mul}}\,C_1^{**}\), and thus these two different rows generate the same linear relation.
There are simple examples where a strict inclusion in (4.2) can occur. For instance, if \(B_1,B_2\in \mathbf{B}({\mathfrak{H}})\) are two positive operators such that

$$\begin{aligned} \ker B_1=\ker B_2=\{0\} \quad \text{and} \quad {\text{ran}}\,B_1 \cap {\text{ran}}\,B_2=\{0\}, \end{aligned}$$
and one takes \(C_1=B_1^{-1}\) and \(C_2=B_2^{-1}\), then the inclusion in (4.2) is strict. Namely, \(\text{dom}\,C_1\cap \text{dom}\,C_2=\{0\}\) and hence the adjoint of \(\text{col}\,(C_1\,;\,C_2)\) has multivalued part equal to \({\mathfrak{H}}\), while \((C_1^*\,;\,C_2^*)=(B_1^{-1}\,;\,B_2^{-1})\) is an operator.
5 Block relations
Let each of the Hilbert spaces \({\mathfrak{H}}\) and \({\mathfrak{K}}\) be decomposed into two orthogonal components

$$\begin{aligned} {\mathfrak{H}}={\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2, \quad {\mathfrak{K}}={\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2, \end{aligned}$$
and let

$$\begin{aligned} E_{ij} \, \text{ from } \, {\mathfrak{H}}_j \, \text{ to } \, {\mathfrak{K}}_i, \quad i,j=1,2, \end{aligned}$$
be linear relations; these four relations form a \(2 \times 2\) block of relations

$$\begin{aligned} \begin{pmatrix} E_{11} & E_{12} \\ E_{21} & E_{22} \end{pmatrix}, \end{aligned}$$
abbreviated by \([E_{ij}]\). The set of all such blocks is denoted by \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Every block of relations \([E_{ij}]\) in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) leads to a row of columns or, alternatively, a column of rows. This way every block of relations \([E_{ij}]\) generates a unique linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) as follows from the next lemma.
Lemma 5.1
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Then

$$\begin{aligned} \left( \begin{pmatrix} E_{11} \\ E_{21} \end{pmatrix} ; \begin{pmatrix} E_{12} \\ E_{22} \end{pmatrix} \right) =\begin{pmatrix} (E_{11}\,;\,E_{12}) \\ (E_{21}\,;\,E_{22}) \end{pmatrix}. \end{aligned}$$(5.1)
Proof
By the definition of a column in (3.3) a typical element in the right-hand side of (5.1) is of the form
Recall that by the definition of a row in (3.1) one has \(\{f, \gamma _{1}\} \in (E_{11} \,;\, E_{12})\) if and only if
and, similarly, \(\{f, \gamma _{2}\} \in (E_{21} \,;\, E_{22})\) if and only if
Thus, in fact, a typical element in the right-hand side of (5.1) is of the form
and the last four conditions can be written in the equivalent form
Thus, by the definition of a row in (3.1), a typical element in the right-hand side of (5.1) is a typical element in the left-hand side of (5.1). \(\square \)
Definition 5.1
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). Then the linear relation E from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by the block is defined by (5.1) and denoted by

$$\begin{aligned} E=\begin{pmatrix} E_{11} & E_{12} \\ E_{21} & E_{22} \end{pmatrix}. \end{aligned}$$
The relation E is called the block relation corresponding to the block \([E_{ij}]\).
The proof of Lemma 5.1 shows that the unique linear relation E generated by the block \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) can be expressed equivalently by the formula

$$\begin{aligned} E=\left\{ \left\{ f_1 \oplus f_2, (\alpha _1+\beta _1) \oplus (\alpha _2+\beta _2)\right\} :\, \{f_1,\alpha _1\} \in E_{11}, \ \{f_2,\beta _1\} \in E_{12}, \ \{f_1,\alpha _2\} \in E_{21}, \ \{f_2,\beta _2\} \in E_{22} \right\} . \end{aligned}$$(5.2)
This is the direct way to think of the block \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) as a linear relation E from \({\mathfrak{H}}\) to \({\mathfrak{K}}\). It agrees with the well-known definition in the case that each relation \(E_{ij}\) is (the graph of) an everywhere defined bounded linear operator from \({\mathfrak{H}}_j\) to \({\mathfrak{K}}_i\).
Let \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}}, {\mathfrak{K}})\) and let E and F be the block relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by them. The blocks are said to satisfy the inclusion \([E_{ij}] \subset [F_{ij}] \) if \(E_{ij} \subset F_{ij}\) for all i, j. It clearly follows from (5.2) that

$$\begin{aligned} [E_{ij}] \subset [F_{ij}] \ \Rightarrow \ E \subset F \quad \text{and} \quad [E_{ij}]=[F_{ij}] \ \Rightarrow \ E=F. \end{aligned}$$(5.3)
The reverse implications in (5.3) do not hold, since different blocks \([E_{ij}]\) in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) can generate the same linear relation E. Definition 5.1 leads to linear relations from the Hilbert space \({\mathfrak{H}}={\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to the Hilbert space \({\mathfrak{K}}={\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) with distinguishing properties.
Lemma 5.2
Let \({\mathfrak{H}}\) and \({\mathfrak{K}}\) be Hilbert spaces, let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\), and let E be the block relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it. Then the following statements hold:

$$\begin{aligned} \text{dom}\,E=(\text{dom}\,E) \cap {\mathfrak{H}}_1 \oplus (\text{dom}\,E) \cap {\mathfrak{H}}_2 \end{aligned}$$(5.4)

and

$$\begin{aligned} \text{mul}\,E=(\text{mul}\,E) \cap {\mathfrak{K}}_1 \oplus (\text{mul}\,E) \cap {\mathfrak{K}}_2. \end{aligned}$$(5.5)
Proof
To prove (5.4), it suffices to show that its left-hand side is contained in its right-hand side. Observe that if \(f =f_1 \oplus f_2 \in \text{dom}\,E\), then there exist elements \(\alpha _i\) and \(\beta _i\), \(i=1,2\), such that
But then it follows immediately from (5.2) that
in other words \(f_1, f_2 \in \text{dom}\,E\).
To prove (5.5), it suffices to show that its left-hand side is contained in its right-hand side. Observe that if \(g =g_1 \oplus g_2 \in \text{mul}\,E\), then there exist elements \(\alpha _i\) and \(\beta _i\), such that \(g_i=\alpha _i+\beta _i\), \(i=1,2\), and
But then it follows immediately from (5.2) that
in other words \(g_1, g_2 \in \text{mul}\,E\). \(\square \)
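The formula (5.2) and the decomposition properties (5.4) and (5.5) can be tested concretely in the finite-dimensional model used throughout (all four component spaces taken to be \(\mathbb{R}\); the function names are ours), with one purely multivalued entry:

```python
import numpy as np

TOL = 1e-10

def span(gens, dim):
    """Orthonormal basis (columns) for the span of `gens` inside R^dim."""
    A = np.array(gens, dtype=float).reshape(-1, dim).T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > TOL]

def same_span(A, B):
    """True if two column spans coincide."""
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

def block_relation(E11, E12, E21, E22, n1, n2):
    """Relation generated by a 2x2 block, following (5.2): elements
    {f1 (+) f2, (a1 + b1) (+) (a2 + b2)} with {f1, a1} in E11, {f2, b1} in E12,
    {f1, a2} in E21, {f2, b2} in E22."""
    r = [G.shape[1] for G in (E11, E12, E21, E22)]
    m1, m2 = E11.shape[0] - n1, E21.shape[0] - n1
    # consistency of f1 (shared by E11, E21) and f2 (shared by E12, E22)
    top = np.hstack([E11[:n1], np.zeros((n1, r[1])), -E21[:n1], np.zeros((n1, r[3]))])
    bot = np.hstack([np.zeros((n2, r[0])), E12[:n2], np.zeros((n2, r[2])), -E22[:n2]])
    _, s, Vt = np.linalg.svd(np.vstack([top, bot]))
    gens = []
    for v in Vt[int((s > TOL).sum()):]:
        x11, x12, x21, x22 = np.split(v, np.cumsum(r)[:-1])
        f = np.concatenate([E11[:n1] @ x11, E12[:n2] @ x12])
        g = np.concatenate([E11[n1:] @ x11 + E12[n2:] @ x12,
                            E21[n1:] @ x21 + E22[n2:] @ x22])
        gens.append(np.concatenate([f, g]))
    return span(gens, n1 + n2 + m1 + m2)

E11 = span([[1.0, 1.0]], 2)   # identity on R
E12 = span([[1.0, 0.0]], 2)   # zero operator
E21 = span([[1.0, 0.0]], 2)   # zero operator
E22 = span([[0.0, 1.0]], 2)   # purely multivalued {0} x R

E = block_relation(E11, E12, E21, E22, 1, 1)

# dom E22 = {0} forces f2 = 0, so E = {{(f1, 0), (f1, s)} : f1, s real}.
print(same_span(E, span([[1.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]], 4)))  # True

# (5.4): dom E = span{e1} (+) {0} splits along H1 (+) H2 ...
dom_E = span(list(E[:2].T), 2)
print(same_span(dom_E, span([[1.0, 0.0]], 2)))   # True
# ... and (5.5): mul E = {0} (+) R splits along K1 (+) K2.
_, s, Vt = np.linalg.svd(E[:2])
mul_E = span([E[2:] @ v for v in Vt[int((s > TOL).sum()):]], 2)
print(same_span(mul_E, span([[0.0, 1.0]], 2)))   # True
```

The multivalued entry \(E_{22}\) shrinks the domain of the block relation and at the same time contributes its full multivalued part, which is exactly the bookkeeping that Lemma 5.2 records.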
It will be shown that the properties (5.4) and (5.5) in Lemma 5.2 characterize the block relations in Definition 5.1. The arguments will be based on the following notions and observations.
Definition 5.2
Let E be a linear relation from \({\mathfrak{H}}={\mathfrak{H}}_1 \oplus {\mathfrak{H}}_2\) to \({\mathfrak{K}}={\mathfrak{K}}_1 \oplus {\mathfrak{K}}_2\) and define the block \([\mathcal{E}_{ij}]\) by

$$\begin{aligned} \mathcal{E}_{ij}=P_i\left( E{\upharpoonright }\,_{{\mathfrak{H}}_j}\right) , \quad i,j=1,2, \end{aligned}$$(5.6)
where \(P_{i}\) is the orthogonal projection from \({\mathfrak{K}}\) onto \({\mathfrak{K}}_{i}\).
Observe that Definition 5.2 agrees with the definitions in (3.2) and (3.4). It is clear that all relations \(\mathcal{E}_{ij}\) from \({\mathfrak{H}}_j\) to \({\mathfrak{K}}_i\) in (5.6) are linear, and that they satisfy

$$\begin{aligned} \text{dom}\,\mathcal{E}_{1j}=\text{dom}\,\mathcal{E}_{2j}=(\text{dom}\,E) \cap {\mathfrak{H}}_j, \quad j=1,2, \end{aligned}$$(5.7)

and

$$\begin{aligned} \text{mul}\,\mathcal{E}_{i1}+\text{mul}\,\mathcal{E}_{i2}=P_i\, \text{mul}\,E, \quad i=1,2. \end{aligned}$$(5.8)
Thus (5.7) describes the domains of the columns of the block \([\mathcal{E}_{ij}]\), while (5.8) describes the multivalued parts of the rows of the block \([\mathcal{E}_{ij}]\). The block \([\mathcal{E}_{ij}]\) is uniquely determined by E, however, in general the linear relation \(\mathcal{E}\) generated by the block \([\mathcal{E}_{ij}]\) differs from the original relation E. This becomes clear from the next lemma, which also leads to the main characterization of all linear relations E that are generated by some \(2 \times 2\) block \([E_{ij}]\) of linear relations.
Lemma 5.3
Let E be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), let \([\mathcal{E}_{ij}]\) be the block induced by E in (5.6), and let \(\mathcal{E}\) be the linear relation generated by \([\mathcal{E}_{ij}]\). Then
- (a) the identity for the domain (5.4) implies \(E \subset \mathcal{E}\);
- (b) the identity for the multivalued part (5.5) implies \(E \supset \mathcal{E}\).
Proof
- (a) In order to show that \(E \subset \mathcal{E}\), let
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in E. \end{aligned}$$(5.9) In view of (5.4), there exist \(\alpha _{1}\in {\mathfrak{K}}_1\) and \(\alpha _{2}\in {\mathfrak{K}}_2\) such that
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0\end{pmatrix}, \begin{pmatrix} \alpha _{1} \\ \alpha _{2} \end{pmatrix} \right\} \in E, \end{aligned}$$and then, with \(\beta _{1}=g_1-\alpha _{1}\in {\mathfrak{K}}_1\) and \(\beta _{2}=g_2-\alpha _{2} \in {\mathfrak{K}}_2\), it follows that
$$\begin{aligned} \left\{ \begin{pmatrix} 0 \\ f_2 \end{pmatrix}, \begin{pmatrix} \beta _{1} \\ \beta _{2} \end{pmatrix} \right\} \in E. \end{aligned}$$From (5.6) it is clear that
$$\begin{aligned} \{f_1,\alpha _{1}\} \in \mathcal{E}_{11}, \quad \{f_1, \alpha _{2}\} \in \mathcal{E}_{21}, \quad \{f_2,\beta _{1}\} \in \mathcal{E}_{12}, \quad \{f_2, \beta _{2}\} \in \mathcal{E}_{22}, \end{aligned}$$which implies that the element in (5.9) belongs to \(\mathcal{E}\); see (5.2).
- (b) In order to show that \(\mathcal{E}\subset E\), let
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in \mathcal{E}. \end{aligned}$$(5.10)
Hence, by definition, there exist \(\alpha _{1} \in {\mathfrak{K}}_1\), \(\alpha _2 \in {\mathfrak{K}}_2\), \(\beta _{1} \in {\mathfrak{K}}_1\), and \(\beta _2 \in {\mathfrak{K}}_2\), such that
and
Again, by definition, there exist \(\alpha _{2}', \alpha _{1}'\) such that
which implies that
Hence, by the assumption (5.5), one has also
Thus, it follows that
Likewise, it follows that
Combining (5.11), (5.12), and (5.13) one sees that the element in (5.10) belongs to the relation E. \(\square \)
As a consequence, the following result is an extension of the characterizations in Lemmas 3.1 and 3.2.
Theorem 5.1
Let E be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), let \([\mathcal{E}_{ij}]\) be the block induced by E in (5.6), and let \(\mathcal{E}\) be the linear relation generated by \([\mathcal{E}_{ij}]\). Then the following statements are equivalent:
- (i) E is a linear relation generated by some block in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\);
- (ii) E satisfies the identities (5.4) and (5.5);
- (iii) \(E= \mathcal{E}\).
Proof
(i) \(\Rightarrow \) (ii) This implication is a consequence of Lemma 5.2.
(ii) \(\Rightarrow \) (iii) This implication is a consequence of Lemma 5.3.
(iii) \(\Rightarrow \) (i) This is clear. \(\square \)
Thus the identities (5.4) and (5.5) characterize the relations that are generated by blocks of relations, as in (5.1). Likewise, the identities (5.7) and (5.8) characterize the blocks that are induced by linear relations, as in (5.6).
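The characterization in Theorem 5.1 can be checked mechanically in a GF(2) toy model, where every linear relation is a finite set of pairs. The Python sketch below (all helper names are ad hoc, not from the paper) builds a relation from a block with two purely multivalued entries and confirms that it is recovered from its induced block.

```python
# Sanity check of Theorem 5.1 in a GF(2) toy model: a relation E that is
# generated by a block (here with two purely multivalued entries) is
# recovered from its induced block, E = calE.  Helper names are ad hoc.

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def row(R1, R2):   # (R1 ; R2): concatenated domains, summed values
    return frozenset({(f1 + f2, xor(g1, g2)) for f1, g1 in R1 for f2, g2 in R2})

def col(C1, C2):   # col(C1 ; C2): common domain, stacked values
    return frozenset({(f, g1 + g2) for f, g1 in C1 for h, g2 in C2 if f == h})

def block_relation(E11, E12, E21, E22):
    """The relation generated by the 2x2 block, cf. (5.1)-(5.2)."""
    return row(col(E11, E21), col(E12, E22))

idn  = frozenset({((0,), (0,)), ((1,), (1,))})   # identity on GF(2)
zer  = frozenset({((0,), (0,)), ((1,), (0,))})   # zero operator
mult = frozenset({((0,), (0,)), ((0,), (1,))})   # purely multivalued {0} x GF(2)

E = block_relation(idn, mult, zer, mult)

def induced_entry(E, i, j):                      # cf. (5.6)
    return frozenset({((f[j],), (g[i],)) for f, g in E if f[1 - j] == 0})

calE = block_relation(induced_entry(E, 0, 0), induced_entry(E, 0, 1),
                      induced_entry(E, 1, 0), induced_entry(E, 1, 1))
assert calE == E   # (i) <=> (iii) of Theorem 5.1 on this example
```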
Lemma 5.4
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and let E be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it, as in (5.1). Let \([\mathcal{E}_{ij}]\) be the block induced by E as in (5.6) and let \(\mathcal{E}\) be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by \([\mathcal{E}_{ij}]\) as in (5.1). Then
- (a) the identities for the domains
$$\begin{aligned} \text{dom}\,E_{11}=\text{dom}\,E_{21} \quad \text{ and } \quad \text{dom}\,E_{12}=\text{dom}\,E_{22} \end{aligned}$$(5.14) imply \([E_{ij}] \subset [\mathcal{E}_{ij}]\);
- (b) the identities for the multivalued parts
$$\begin{aligned} \text{mul}\,E_{11}=\text{mul}\,E_{12} \quad \text{ and } \quad \text{mul}\,E_{21}=\text{mul}\,E_{22} \end{aligned}$$(5.15) imply \([E_{ij}] \supset [\mathcal{E}_{ij}]\).
Proof
- (a) Assume that (5.14) holds. Let \(\{f_1,g_1\} \in E_{11}\). Then by assumption there exists \(g_2' \in {\mathfrak{K}}_2\) such that \(\{f_1,g_2'\} \in E_{21}\). Hence, one sees from (5.2) that
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2' \end{pmatrix} \right\} \in E, \end{aligned}$$which shows that \(\{f_1, g_1\} \in \mathcal{E}_{11}\). Thus \(E_{11} \subset \mathcal{E}_{11}\). The other inclusions follow in the same way.
- (b) Assume that (5.15) holds. Let \(\{f_1,g_1\} \in \mathcal{E}_{11}\). Then there exists \(g_2 \in {\mathfrak{K}}_2\) such that
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} \in E, \end{aligned}$$which implies that there are decompositions \(g_1=g_1'+g_1''\) and \(g_2=g_2'+g_2''\) such that
$$\begin{aligned} \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \begin{pmatrix} g_1'+g_1'' \\ g_2'+g_2'' \end{pmatrix} \right\} \end{aligned}$$with \(\{f_1, g_1'\} \in E_{11}\), \(\{f_1, g_2'\} \in E_{21}\), \(\{0, g_1''\} \in E_{12}\), and \(\{0, g_2''\} \in E_{22}\). By assumption \(\{0, g_1''\} \in E_{11}\), which shows that \(\{f_1,g_1\} \in E_{11}\). Thus \(\mathcal{E}_{11} \subset E_{11}\). The other inclusions follow in the same way.
\(\square \)
Theorem 5.2
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and let E be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by it. Let \([\mathcal{E}_{ij}]\) be the block induced by E and let \(\mathcal{E}\) be the linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by \([\mathcal{E}_{ij}]\). Then the following statements are equivalent:
- (i) \(E_{ij}= P_i F{\upharpoonright }\,{\mathfrak{H}}_j\) for some linear relation F from \({\mathfrak{H}}\) to \({\mathfrak{K}}\);
- (ii) the identities (5.7) and (5.8) hold;
- (iii) \([E_{ij}]=[\mathcal{E}_{ij}]\).
Proof
(i) \(\Rightarrow \) (ii) Recall that (5.7) and (5.8) hold for the relation defined in (5.6).
(ii) \(\Rightarrow \) (iii) This implication is a consequence of Lemma 5.4.
(iii) \(\Rightarrow \) (i) This is clear. \(\square \)
For \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}})\) and \(\lambda \in \mathbb{C}\) define the sum \([E_{ij}]+[F_{ij}]\) and the scalar multiplication \(\lambda [E_{ij}]\) via the corresponding operations on the entries:
When equipped with these operations the space \(\mathbf{M}({\mathfrak{H}})\) has a linear structure. On the other hand, the formula
defines an equivalence relation in \(\mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\). The sum and scalar multiplication of blocks are preserved by the linear relations generated by them.
Lemma 5.5
Assume that \([E_{ij}], [F_{ij}] \in \mathbf{M}({\mathfrak{H}}, {\mathfrak{K}})\) and let \([G_{ij}]=[E_{ij}] + [F_{ij}]\) and \([H_{ij}]=\lambda [E_{ij}]\). Let E, F, G, and H be the linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) generated by these blocks, respectively. Then
Proof
Since \([G_{ij}]=[E_{ij} + F_{ij}]\), it follows from (5.2) that the linear relation G satisfies
so that, by the definition of the sum of relations, the linear relation G is the set of all
where
Therefore it is now clear, by the definition of the sum of relations and the formula (5.2), that the linear relation G is equal to the sum \(E+F\). For the scalar multiplication the argument is similar. \(\square \)
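Lemma 5.5 admits the same kind of finite sanity check: adding two blocks entrywise generates the sum of the relations they generate. A Python sketch over GF(2) (all helper names are ad hoc; the two blocks chosen generate the identity and the coordinate swap on GF(2)^2):

```python
# GF(2) illustration of Lemma 5.5: adding two blocks entrywise generates
# the (relation) sum E + F of the relations they generate.  Ad hoc names.

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def row(R1, R2):
    return frozenset({(f1 + f2, xor(g1, g2)) for f1, g1 in R1 for f2, g2 in R2})

def col(C1, C2):
    return frozenset({(f, g1 + g2) for f, g1 in C1 for h, g2 in C2 if f == h})

def add(A, B):     # the usual sum of relations: common domain, summed values
    return frozenset({(f, xor(ga, gb)) for f, ga in A for h, gb in B if f == h})

def block_relation(E11, E12, E21, E22):
    return row(col(E11, E21), col(E12, E22))

idn = frozenset({((0,), (0,)), ((1,), (1,))})   # identity on GF(2)
zer = frozenset({((0,), (0,)), ((1,), (0,))})   # zero operator

E = block_relation(idn, zer, zer, idn)          # identity on GF(2)^2
F = block_relation(zer, idn, idn, zer)          # the coordinate swap

# Entrywise sums of the blocks, then the relation the summed block generates:
G = block_relation(add(idn, zer), add(zer, idn), add(zer, idn), add(idn, zer))
assert G == add(E, F)                           # Lemma 5.5
```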
6 Adjoints of block relations
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block of the form (5.1). Then the adjoints of the entries give rise to the following \(2 \times 2\) block of relations
where
The block relation from \({\mathfrak{K}}\) to \({\mathfrak{H}}\) corresponding to the \(2 \times 2\) block in (6.1) is denoted by
cf. Definition 5.1. The relation in (6.2) is sometimes called the formal adjoint of the block relation generated by \([E_{ij}]\). In general, there is the following inclusion result.
Proposition 6.1
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block as in (5.1). Then
Proof
By Lemma 5.1 and Definition 5.1 the left-hand side of (6.3) may be written as
Now observe that
due to Proposition 4.1. These two inclusions may be combined by (3.5), which gives
Next observe that by Proposition 4.1
from which the desired result follows. \(\square \)
Corollary 6.1
Let \([E_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) be a block as in (5.1). Then there is equality in (6.3) if and only if
Corollary 6.1 combined with Proposition 4.1 gives the necessary and sufficient condition stated in item (a) of the next corollary, while Lemma 4.1 yields sufficient conditions stated in item (b).
Corollary 6.2
Let \([E_{ij}]\) be a block as in (5.1). Assume that, up to interchange of A and B, the entries of each column \(\text{col}\,(A;B)\) in \([E_{ij}]\) satisfy one of the following:
- (a) the criterion in Proposition 4.1, i.e., the adjoint \(\text{col}\,(A;B)^*\) is a row;
- (b) each column \(\text{col}\,(A;B)\) satisfies the (sufficient) conditions in Lemma 4.1.
Then there is equality in (6.3).
7 Multiplication of rows and columns
The product of linear relations is well defined. If the linear relations are given as rows, columns, or blocks, then their components may also be multiplied in a similar way. In general these formal products differ from the product in the relation sense. This will be considered here for rows and columns.
Lemma 7.1
Let \(R_1\) be a linear relation from \({\mathfrak{H}}_1\) to \({\mathfrak{K}}\) and let \(R_2\) be a linear relation from \({\mathfrak{H}}_2\) to \({\mathfrak{K}}\). Let \(R=(R_1 \,;\, R_2)\) and let A be a linear relation from \({\mathfrak{K}}\) to \({\mathfrak{K}}'\). Then
Moreover, the following statements are equivalent:
- (i) there is equality in (7.1);
- (ii) there is the implication
$$\begin{aligned} \left\{ \begin{array}{l} \{f_1\oplus f_2,h\}\in R \\ h \in \text{dom}\,A \end{array} \right. \quad \Rightarrow \quad \left\{ \begin{array}{l} \{f_1,h_1\}\in R_{1}, \{f_2,h_2\} \in R_{2} \\ h-h_1-h_2 \in \text{ker}\,A \\ \text{ for } \text{ some } h_{1}, h_{2} \in \text{dom}\,A. \end{array} \right. \end{aligned}$$
Proof
To prove (7.1), let \(\{f,g\} \in (AR_1 \,;\, AR_2)\). This means \(f=f_1 \oplus f_2\) and
where \(\{f_1,g_1\} \in AR_1\), \(\{f_2,g_2\} \in AR_2\), and \(g=g_1+g_2\). Then there exist elements \(h_1\) and \(h_2\) in \({\mathfrak{K}}\) such that
Consequently,
which implies that
Thus (7.1) has been shown.
(i) \(\Rightarrow \) (ii) Assume that there is equality in (7.1). In order to prove (ii), let \(\{f_1\oplus f_2,h\}\in (R_1 \,;\, R_2)\) for some \(h\in \text{dom}\,A\). Then \(\{h,g\}\in A\) for some \(g\in {\mathfrak{K}}'\) and \(\{f_1\oplus f_2,g\}\in AR=(AR_1 \,;\, AR_2)\). By (3.1) this means that \(\{f_1,g_1\}\in AR_1\) and \(\{f_2,g_2\}\in AR_2\) for some \(g_1,g_2\in {\mathfrak{K}}'\) with \(g_1+g_2=g\). Hence, there exist \(h_1,h_2\in {\mathfrak{K}}\) which satisfy all the conditions in (7.2) and (7.3). In particular, one sees that \(h_1,h_2\in \text{dom}\,A\) and now \(\{h,g\}\in A\) and (7.3) show that \(\{h-h_1-h_2,0\} \in A\).
(ii) \(\Rightarrow \) (i) It suffices to demonstrate the inclusion \(AR \subset (AR_1 \,;\, AR_2)\). To do this, assume that \(\{f_1\oplus f_2,g\}\in AR\). Then \(\{f_1\oplus f_2,h\}\in R\) and \(\{h,g\}\in A\) for some \(h\in {\mathfrak{K}}\). In particular, \(h\in \text{dom}\,A\) and, by assumption, there exist \(h_1,h_2\in \text{dom}\,A\) such that
Then \(\{h_1,g_1\}\in A\), \(\{h_2,g_2\} \in A\) for some \(g_1,g_2\in {\mathfrak{K}}'\), so that \(\{f_1,g_1\}\in AR_1\), \(\{f_2,g_2\} \in AR_2\), and
Hence \(\{h,g\}\in A\) yields \(\{h-h_1-h_2,g-g_1-g_2\}\in A\) and \(\{0,g-g_1-g_2\}\in A\). Now observe that
so that \(\{0,g-g_1-g_2\}\in (AR_1 \,;\, AR_2)\). Then the first statement in (7.4) shows that \(\{f_1\oplus f_2,g\}\in (AR_1 \,;\, AR_2)\). \(\square \)
In special cases the condition (ii) in Lemma 7.1 becomes easier to verify.
Corollary 7.1
Let \(R=(R_1 \,;\, R_2)\) and A be as in Lemma 7.1. Then:
- (a) if R is an operator, in particular, if \(R_{1} \in \mathbf{B}({\mathfrak{H}}_{1}, {\mathfrak{K}})\) and \(R_{2} \in \mathbf{B}({\mathfrak{H}}_{2}, {\mathfrak{K}})\), then there is equality in (7.1) if and only if
$$\begin{aligned} R_1f_1+ R_2f_2 \in \text{dom}\,A \quad \Longrightarrow \quad R_1f_1,\, R_2f_2\in \text{dom}\,A; \end{aligned}$$
- (b) if \(\text{ran}\,R_1\cup \text{ran}\,R_2\subset \text{dom}\,A\), in particular, if \(A \in \mathbf{B}({\mathfrak{K}}, {\mathfrak{K}}')\), then there is equality in (7.1).
Lemma 7.2
Let B be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\). Let \(C_1\) be a linear relation from \({\mathfrak{K}}\) to \(\mathfrak{L}_{1}\), let \(C_2\) be a linear relation from \({\mathfrak{K}}\) to \(\mathfrak{L}_{2}\), and let \(C=\text{col}\,(C_1\,;\,C_2)\). Then
Moreover, the following statements are equivalent:
- (i) there is equality in (7.5);
- (ii) there is the inclusion
$$\begin{aligned} \text{mul}\,B \cap \text{dom}\,C \subset \text{mul}\,B \cap \text{ker}\,C_1+\text{mul}\,B \cap \text{ker}\,C_2. \end{aligned}$$
Proof
It is straightforward to see (7.5). To see the equivalence of the statements (i) and (ii), observe that
(i) \(\Rightarrow \) (ii) If there is equality in (7.5), then CB is a column. By Lemma 3.2 this is equivalent to
To show the inclusion in (ii), let \(\psi \in \text{mul}\,B \cap \text{dom}\,C\). Then there exist elements \(\varphi _1\) and \(\varphi _2\) so that \(\{\psi , \varphi _1\} \in C_1\) and \(\{\psi , \varphi _2\} \in C_2\), or
By assumption, \(\varphi _1 \in (\text{mul}\,CB)\cap {\mathfrak{K}}_1\) and \(\varphi _2 \in (\text{mul}\,CB)\cap {\mathfrak{K}}_2\). Hence there exist \(\psi ', \psi '' \in \text{mul}\,B\), such that
Therefore \(\psi ' \in \text{ker}\,C_2\), \(\psi '' \in \text{ker}\,C_1\), and
which shows \(\zeta = \psi -\psi ' -\psi '' \in \text{ker}\,C=\text{ker}\,C_1 \cap \text{ker}\,C_2\). Since \(\psi ', \psi '' \in \text{mul}\,B\), also \(\zeta = \psi -\psi ' -\psi '' \in \text{mul}\,B\). Hence,
Therefore (ii) holds.
(ii) \(\Rightarrow \) (i) It suffices to show that CB is a column, as in this case one has equality in (7.5), since
by Lemma 3.2. Thus by Lemma 3.2, one needs to show
as the reverse inclusion is clear. Let \(\varphi \in \text{mul}\,CB\), which, by (7.6), gives
where \(\psi \in \text{mul}\,B \cap \text{dom}\,C\). By assumption \(\psi =\psi ' +\psi ''\), where \(\psi ',\psi ''\in \text{mul}\,B\) with
Since
this leads to
and
so that
This shows the required inclusion. \(\square \)
In special cases the condition (ii) in Lemma 7.2 becomes easier to verify. Observe that if \(\text{ker}\,C_{2}=\{0\}\), then (ii) reads \(\text{mul}\,B \cap \text{dom}\,C \subset \text{ker}\,C_1\), and if \(\text{ker}\,C_{1}=\{0\}\), then (ii) reads \(\text{mul}\,B \cap \text{dom}\,C \subset \text{ker}\,C_2\).
Corollary 7.2
Let B and \(C=\text{col}\,(C_1\,;\,C_2)\) be as in Lemma 7.2. Then:
- (a) if B is an operator, in particular, if \(B \in \mathbf{B}({\mathfrak{H}},{\mathfrak{K}})\), then there is equality in (7.5);
- (b) if \(C_{1} \in \mathbf{B}({\mathfrak{K}}, \mathfrak{L}_{1})\) and \(C_{2} \in \mathbf{B}({\mathfrak{K}}, \mathfrak{L}_{2})\), then there is equality in (7.5) if and only if
$$\begin{aligned} \text{mul}\,B \subset \text{mul}\,B \cap \text{ker}\,C_1+\text{mul}\,B \cap \text{ker}\,C_2. \end{aligned}$$
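A GF(2) toy example (ad hoc helper names, not from the paper) shows how the inclusion (7.5) becomes strict for a multivalued B: the product CB only produces values reachable through a single intermediate element, while the formal column may combine different intermediate elements.

```python
# GF(2) example for Lemma 7.2 / Corollary 7.2 (b): for a multivalued B the
# inclusion of C B in col(C1 B ; C2 B) can be strict.  Ad hoc helper names.

def col(C1, C2):
    return frozenset({(f, g1 + g2) for f, g1 in C1 for h, g2 in C2 if f == h})

def compose(A, B):
    return frozenset({(f, g) for f, h in B for k, g in A if h == k})

idn  = frozenset({((0,), (0,)), ((1,), (1,))})                    # C1 = C2 = identity
full = frozenset({((a,), (b,)) for a in (0, 1) for b in (0, 1)})  # B = GF(2) x GF(2)

C  = col(idn, idn)                # h -> (h, h)
CB = compose(C, full)             # values (h, h) only: 4 pairs
formal = col(compose(idn, full), compose(idn, full))   # all (g1, g2): 8 pairs

# mul B = GF(2) is not contained in mul B cap ker C1 + mul B cap ker C2 = {0},
# so condition (ii) of Lemma 7.2 fails and the inclusion is strict:
assert CB < formal
```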
In the case where a row and a column are multiplied there are no obstacles.
Lemma 7.3
Let \(C_1\) and \(C_2\) be linear relations from \({\mathfrak{H}}\) to \({\mathfrak{K}}_1\) and from \({\mathfrak{H}}\) to \({\mathfrak{K}}_2\), and let \(C=\text{col}\,(C_1; C_2)\) be their column. Let \(R_1\) be a linear relation from \({\mathfrak{K}}_1\) to \(\mathfrak{L}\), let \(R_2\) be a linear relation from \({\mathfrak{K}}_2\) to \(\mathfrak{L}\), and let \(R=(R_1; R_2)\) be their row. Then
Proof
\((\subset )\) Let \(\{f,g\} \in R C\). Then it follows that \(\{f,h\} \in C\) and \(\{h,g\} \in R\) for some \(h=h_1 \oplus h_2\), or
where \(\{h_1,g_1 \} \in R_1\) and \(\{h_2,g_2 \} \in R_2\), while \(g=g_1+g_2\). This implies
so that \(\{f,g\} \in R_1C_1+R_2C_2\).
\((\supset )\) Let \(\{f,g\} \in R_1C_1+R_2C_2\). This implies that (7.7) holds. Hence there exist elements \(h_1 \in {\mathfrak{H}}_1\) and \(h_2 \in {\mathfrak{H}}_2\) such that
However, this can be written as
and \( \{f,g\}=\{f,g_1+g_2\} \in RC\). \(\square \)
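Lemma 7.3 needs no side conditions, and a finite check reflects this: even with multivalued factors, a row times a column agrees with the sum of the entrywise products. A GF(2) sketch (ad hoc helper names, not from the paper):

```python
# GF(2) check of Lemma 7.3: a row times a column always satisfies
# R C = R1 C1 + R2 C2, even for multivalued entries.  Ad hoc names.

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def row(R1, R2):
    return frozenset({(f1 + f2, xor(g1, g2)) for f1, g1 in R1 for f2, g2 in R2})

def col(C1, C2):
    return frozenset({(f, g1 + g2) for f, g1 in C1 for h, g2 in C2 if f == h})

def compose(A, B):
    return frozenset({(f, g) for f, h in B for k, g in A if h == k})

def add(A, B):
    return frozenset({(f, xor(ga, gb)) for f, ga in A for h, gb in B if f == h})

idn  = frozenset({((0,), (0,)), ((1,), (1,))})
full = frozenset({((a,), (b,)) for a in (0, 1) for b in (0, 1)})

# C1, C2 : H -> K1, K2 and R1, R2 : K1, K2 -> L, with multivalued factors.
C1, C2 = full, idn
R1, R2 = idn, full

lhs = compose(row(R1, R2), col(C1, C2))
rhs = add(compose(R1, C1), compose(R2, C2))
assert lhs == rhs
```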
8 Multiplication of block relations
Multiplication of block relations is a complicated operation. Lemmas 7.1 and 7.2 describe the difficulties to be encountered. Therefore, it may be helpful to consider the case of blocks of operators first.
Lemma 8.1
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of operators, and let E and F be the linear operators generated by these two blocks. Then the domain \(\text{dom}\,EF\) consists of all \(h=(h_{1}, h_{2}) \in {\mathfrak{H}}_{1} \times {\mathfrak{H}}_{2}\) for which
- (a) \(h_{1} \in \text{dom}\,F_{11} \cap \text{dom}\,F_{21}\);
- (b) \(h_{2} \in \text{dom}\,F_{12} \cap \text{dom}\,F_{22}\);
- (c) \(F_{11}h_{1} +F_{12}h_{2} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);
- (d) \(F_{21}h_{1} +F_{22}h_{2} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\).
Moreover, if \(h=(h_{1}, h_{2}) \in \text{dom}\,EF\), then
Proof
Let \(h \in \text{dom}\,EF\), so that \(h \in \text{dom}\,F\) and \(Fh \in \text{dom}\,E\). The condition \(h \in \text{dom}\,F\) means that (a) and (b) hold, in which case
Moreover, the condition \(Fh \in \text{dom}\,E\) means that (c) and (d) hold, in which case EFh is as in the statement of the lemma. \(\square \)
Corollary 8.1
Let \(h=(h_{1}, h_{2}) \in {\mathfrak{H}}_{1} \times {\mathfrak{H}}_{2}\) satisfy
- (a) \(h_{1} \in \text{dom}\,F_{11} \cap \text{dom}\,F_{21}\);
- (b) \(h_{2} \in \text{dom}\,F_{12} \cap \text{dom}\,F_{22}\);
- (c) \(F_{11}h_{1} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);
- (d) \(F_{12}h_{2} \in \text{dom}\,E_{11} \cap \text{dom}\,E_{21}\);
- (e) \(F_{21}h_{1} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\);
- (f) \(F_{22}h_{2} \in \text{dom}\,E_{12} \cap \text{dom}\,E_{22}\).
Then \(h \in \text{dom}\,EF\) and
The message of Lemma 8.1 and Corollary 8.1 is that the domain of the operator EF contains the domain of the operator generated by the block which results after formally multiplying the blocks, and that
The domain of the left-hand side is more restrictive than the domain of EF as one has to take care of all the entries, including the summands, in the block on the left-hand side. This reflects the contents of condition (ii) in Lemma 7.1 and, of course, the criterion stated in Corollary 7.1 (a).
Now the case of linear relations will be taken up. Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these blocks. Since multivalued operators are allowed, the following arguments must take care of this extra complication.
The next lemma is based on Lemmas 7.1, 7.2, and 7.3. Its statement is simple, but it contains a lot of information, as will be shown below.
Lemma 8.2
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Then the following equality holds
Proof
The key is the interpretation of the relations E and F as a row and a column, respectively; cf. Lemma 5.1. Thus E will be written as a row of columns:
and, likewise, F will be written as a column of rows:
Then an application of Lemma 7.3 leads to
which gives the identity (8.2). \(\square \)
It is instructive to describe all the elements in the relation EF. By Lemma 8.2 they are of the form
due to the sum in the right-hand side of (8.2), where
Hence there exist elements \(l'\) and \(l''\) such that
and
Note that this description, provided by the identity in (8.2), agrees with the one in Lemma 8.1 for the case of operators.
Parallel to the operator case, one may ask how the linear relation EF is related to the linear relation \(E \star F\) generated by the formal product \([E_{ij}]\cdot [F_{ij}]\) of the relations \(E_{ij}\) and \(F_{ij}\):
Recall that this block can be seen as a row of columns, or as a column of rows; cf. Lemma 5.1. The following result shows that the relationship between the product EF and the formal product \(E \star F\) is fuzzy.
Lemma 8.3
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Then for both \(X=EF\) and \(X=E \star F\) the following inclusions hold:
Proof
It is straightforward to obtain the above inclusions. First consider the case that \(X=EF\). It follows from (8.2), Lemmas 7.1 and 5.5 that
which proves the first inclusion in (8.4) for \(X=EF\). Similarly, it follows from (8.2), Lemma 7.2 (see also (5.3)), and Lemma 5.5 that
which proves the second inclusion in (8.4) for \(X=EF\).
Next consider the case that \(X=E \star F\). By rewriting (8.3) as a sum (cf. Lemma 5.5) and then applying Lemma 7.2 to each column (cf. Lemma 5.1) it is seen that (see also (5.3))
which proves the first inclusion in (8.4) for \(X=E \star F\). To prove the other inclusion rewrite the first identity in (8.7) and then apply Lemma 7.1 to each entry (see also (5.3)) to obtain
which proves the second inclusion in (8.4) for \(X=E \star F\). \(\square \)
Observe that if one of the inclusions in (8.4) of Lemma 8.3 holds as an equality, then these two products are related to each other by an inclusion. More precisely:
- (a) if equality prevails in (8.5) or in (8.8), then \(EF\subset E\star F\);
- (b) if equality prevails in (8.6) or in (8.7), then \(E\star F\subset EF\).
If, in particular, F is an operator, then one has case (b), i.e.,
Namely, in this case the condition (ii) of Lemma 7.2 needed for equality in (7.5) is automatically satisfied and, therefore, the inclusions in (8.6) and (8.7) both hold as equalities, hence \(E \star F \subset EF\). This observation covers the case of operators as indicated in (8.1).
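This situation can be made concrete in a GF(2) toy model (ad hoc helper names, not from the paper): F below is generated by an operator block, so case (b) applies and \(E\star F\subset EF\), while a domain restriction in one entry of E makes the inclusion strict, because the formal product must respect the domain of every summand separately.

```python
# GF(2) example around (8.4): with F generated by an operator block one has
# E*F contained in EF (case (b)), and the inclusion can be strict because
# the formal product tracks every summand's domain.  Ad hoc helper names.

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def row(R1, R2):
    return frozenset({(f1 + f2, xor(g1, g2)) for f1, g1 in R1 for f2, g2 in R2})

def col(C1, C2):
    return frozenset({(f, g1 + g2) for f, g1 in C1 for h, g2 in C2 if f == h})

def compose(A, B):
    return frozenset({(f, g) for f, h in B for k, g in A if h == k})

def add(A, B):
    return frozenset({(f, xor(ga, gb)) for f, ga in A for h, gb in B if f == h})

def block_relation(E11, E12, E21, E22):
    return row(col(E11, E21), col(E12, E22))

idn = frozenset({((0,), (0,)), ((1,), (1,))})   # identity on GF(2)
zer = frozenset({((0,), (0,)), ((1,), (0,))})   # zero operator, full domain
d0  = frozenset({((0,), (0,))})                 # zero operator, domain {0}

F11, F12, F21, F22 = idn, idn, zer, zer
E11, E12, E21, E22 = d0, zer, d0, zer

F = block_relation(F11, F12, F21, F22)          # (h1, h2) -> (h1 + h2, 0)
E = block_relation(E11, E12, E21, E22)          # defined only for k1 = 0
EF = compose(E, F)

# entries of the formal product [E_ij][F_ij], then the relation it generates:
G11 = add(compose(E11, F11), compose(E12, F21))
G12 = add(compose(E11, F12), compose(E12, F22))
G21 = add(compose(E21, F11), compose(E22, F21))
G22 = add(compose(E21, F12), compose(E22, F22))
EstarF = block_relation(G11, G12, G21, G22)

# h = (1,1) gives F h = (0,0) in dom E, so (1,1) is in dom EF; but the
# summand F11 h1 = 1 escapes dom E11, so (1,1) is not in dom E*F:
assert EstarF < EF
```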
To guarantee the equality \(EF=E\star F\), it suffices to find conditions such that the implications (a) and (b) above hold simultaneously. Such conditions are made explicit in the next proposition; they guarantee equalities in (8.6) and (8.8).
Proposition 8.1
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Assume that
with \(i=1,2\);
with \(i=1,2\);
Then \(EF=E \star F\).
Proof
For the proof of this result it is shown that under the given conditions equalities prevail in (8.6) and (8.8). The first two conditions together with Lemma 7.1 show that
and
Thus, equality holds in (8.8). The last two conditions together with Lemma 7.2 show that
and
Thus, equality holds in (8.6). Now \(EF=E\star F\) follows from (a) and (b). \(\square \)
Corollary 8.2
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. In addition, assume that \(F_{ij} \in \mathbf{B}({\mathfrak{H}},{\mathfrak{K}})\) and
Then \(EF=E \star F\).
Proof
This follows from Corollary 7.1 (a). \(\square \)
Corollary 8.3
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. In addition, assume that \(E_{ij} \in \mathbf{B}({\mathfrak{K}},\mathfrak{L})\) and
Then \(EF=E \star F\).
Proof
This follows from Corollary 7.1 (b). \(\square \)
Another set of sufficient conditions which are similar, but actually guarantee equalities in (8.5) and (8.7), is given in the next proposition.
Proposition 8.2
Let \([F_{ij}] \in \mathbf{M}({\mathfrak{H}},{\mathfrak{K}})\) and \([E_{ij}] \in \mathbf{M}({\mathfrak{K}},\mathfrak{L})\) be blocks of linear relations, and let E and F be the linear relations generated by these two blocks. Assume that
Then \(EF=E \star F\).
Proof
The first two conditions together with Lemma 7.1 show that
and
Thus, equality holds in (8.5). The last two conditions together with Lemma 7.2 show that
and
Thus, equality holds in (8.7). Now \(EF=E\star F\) follows from (a) and (b). \(\square \)
It is of interest to compare the conditions in Propositions 8.1 and 8.2. For instance, notice that
Therefore the condition (8.15) implies the condition (8.11) and the condition (8.16) implies the condition (8.12).
9 An application for the product of block relations
In this section the product of two simple-looking blocks of linear relations is treated by means of the general results in the previous two sections. For this purpose, the following version of the distributive law for linear relations will be useful.
Lemma 9.1
Let H be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\), and let K and G be linear relations from \({\mathfrak{K}}\) to \(\mathfrak{L}\). Then:
- (a) \((K+G)H\subset KH+GH\);
- (b) if \(G\in \mathbf{B}({\mathfrak{K}},\mathfrak{L})\) and \(G(\text{mul}\,H) \subset \text{mul}\,K\), then
$$\begin{aligned} (K+G)H=KH+GH. \end{aligned}$$
Proof
- (a) Let \(\{f,g\}\in (K+G)H\). Then for some \(k\in {\mathfrak{K}}\) one has \(\{f,k\}\in H\), \(\{k,g_1\}\in G\), and \(\{k,g_2\}\in K\) with \(g=g_1+g_2\). Hence \(\{f,g_1\}\in GH\), \(\{f,g_2\}\in KH\), and \(\{f,g\}=\{f,g_1+g_2\}\in KH+GH\), which proves the stated inclusion.
- (b) It suffices to prove \(KH+GH\subset (K+G)H\). Let \(\{f,g\}\in KH+GH\). Then for some \(\ell _1,\ell _2\in \mathfrak{L}\) one has \(\{f,\ell _1\}\in GH\) and \(\{f,\ell _2\}\in KH\) with \(\ell _1+\ell _2=g\). Hence, \(\{f,k_1\}\in H\), \(\{k_1,\ell _1\}\in G\) with \(\ell _1=G k_1\), and \(\{f,k_2\}\in H\), \(\{k_2,\ell _2\}\in K\) for some \(k_1,k_2\in {\mathfrak{K}}\). This implies that \(e=k_1-k_2\in \text{mul}\,H\) and by assumption this gives that \(Ge\in \text{mul}\,K\). Therefore, \(\{k_2,\ell _2+Ge\}\in K\), \(\{k_2,G(k_1-e)\} \in G\), and since \(\{f,k_2\}\in H\) this leads to
$$\begin{aligned} \{f,g\}=\{f,Gk_1+\ell _2\}=\{f,G(k_1-e)+(\ell _2+Ge)\} \in (K+G)H. \end{aligned}$$This proves \(KH+GH\subset (K+G)H\) and thus \((K+G)H=KH+GH\) holds.
\(\square \)
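Both parts of Lemma 9.1 can be probed in a GF(2) toy model (ad hoc helper names, not from the paper): with K the identity the condition \(G(\text{mul}\,H)\subset \text{mul}\,K\) fails and the inclusion in (a) is strict, while enlarging the multivalued part of K restores the equality of (b).

```python
# GF(2) illustration of Lemma 9.1: (K+G)H is contained in KH+GH always,
# with equality when G is an everywhere defined operator and
# G(mul H) is contained in mul K.  Ad hoc helper names.

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

def compose(A, B):
    return frozenset({(f, g) for f, h in B for k, g in A if h == k})

def add(A, B):
    return frozenset({(f, xor(ga, gb)) for f, ga in A for h, gb in B if f == h})

idn  = frozenset({((0,), (0,)), ((1,), (1,))})
full = frozenset({((a,), (b,)) for a in (0, 1) for b in (0, 1)})

H, G = full, idn          # mul H = GF(2); G is everywhere defined

# K = idn: G(mul H) = GF(2) is not inside mul K = {0} -> strict inclusion (a)
lhs = compose(add(idn, G), H)
rhs = add(compose(idn, H), compose(G, H))
assert lhs < rhs

# K = full: mul K = GF(2) absorbs G(mul H) -> equality as in part (b)
assert compose(add(full, G), H) == add(compose(full, H), compose(G, H))
```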
The next result has useful applications elsewhere; see [19, 24]. One can prove it directly via the definition of blocks and the product of the corresponding linear relations. Here the result is derived by applying some general formulas and identities presented in Sect. 7.
Proposition 9.1
Let S be a linear relation from \({\mathfrak{H}}\) to \({\mathfrak{K}}\) and let T be a linear relation from \({\mathfrak{K}}\) to \({\mathfrak{H}}\). Then
Proof
To rewrite the product in a different form apply Lemma 9.1 with the choices
and note that G is everywhere defined and bounded. Then the left-hand side of (9.1) can be rewritten in the form \((K+G)H\). Observe that
Therefore G satisfies Lemma 9.1 (b) and thus \((K+G)H=KH+GH\). Next the products KH and GH are calculated. For GH it is clear from the definition of a block relation that
For KH first observe that \(\text{mul}\,H=\text{mul}\,S\oplus \text{mul}\,T\) and that
Hence the condition for \(\text{mul}\,H\cap \text{dom}\,K\) in Lemma 7.2 (ii) is satisfied and it follows that
Next observe that \(\text{dom}\,H=\text{dom}\,S\oplus \text{dom}\,T\) and then apply Lemma 7.3 to the entries \((T\,;\, 0)H\) and \((0\,;\, S)H\) to obtain
Here each entry satisfies the condition in Lemma 7.1 (ii) and hence
This combined with the expression for GH leads to
Since \(T-T=\text{dom}\,T\times \text{mul}\,T\) and \(S-S=\text{dom}\,S\times \text{mul}\,S\), and since one has the inclusions \(\text{mul}\,T\subset \text{mul}\,TS\) and \(\text{mul}\,S\subset \text{mul}\,ST\), this last block formula corresponds to the same linear relation as the one on the right-hand side of (9.1). \(\square \)
Remark 9.1
The arguments in the above proof are based on the general results in the previous sections. It is instructive to look at the difficulties that may appear if, for instance, one starts directly with the formula (8.2). In the circumstances of Proposition 9.1 the left-hand side of the identity (8.2) takes the form
In order to proceed, note that the domain of the sum is contained in \(\text{dom}\,S\times \text{dom}\,T\), which gives
Now the condition in Lemma 7.1 (ii) is satisfied and one can write
However, the following inclusions
are, in general, strict by Lemma 7.2: this is the case if, for instance, \(\text{ker}\,T=\{0\}\), \(\text{mul}\,S\cap \text{dom}\,T\ne \{0\}\) and \(\text{ker}\,S=\{0\}\), \(\text{mul}\,T\cap \text{dom}\,S\ne \{0\}\), respectively. Thus a separate treatment of the summands in (9.2) does not lead to the desired identity (9.1) in Proposition 9.1.
References
Arens, R.: Operational calculus of linear relations. Pac. J. Math. 11, 9–23 (1961)
Behrndt, J., Hassi, S., de Snoo, H.S.V.: Boundary Value Problems, Weyl Functions, and Differential Operators. Monographs in Mathematics, vol. 108. Birkhäuser, Basel (2020)
Coddington, E.A.: Extension theory of formally normal and symmetric subspaces. Memoirs of the American Mathematical Society, vol. 134. American Mathematical Society, Providence (1973)
Engel, K.J.: Matrix representation of linear operators on product spaces. Suppl. Rend. Circ. Mat. Palermo 56, 219–224 (1998)
Hassi, S., Labrousse, J.-P., de Snoo, H.S.V.: Selfadjoint extensions of relations whose domain and range are orthogonal. Methods Funct. Anal. Topol. 26, 39–62 (2020)
Hassi, S., Malamud, M.M., de Snoo, H.S.V.: On Kreĭn’s extension theory of nonnegative operators. Math. Nachr. 274(275), 40–73 (2004)
Hassi, S., Sandovici, A., de Snoo, H.S.V.: Factorized sectorial relations, their maximal sectorial extensions, and form sums. Banach J. Math. Anal. 13, 538–564 (2019)
Hassi, S., Sandovici, A., de Snoo, H.S.V., Winkler, H.: Extremal extensions for the sum of nonnegative selfadjoint relations. Proc. Am. Math. Soc. 135, 3193–3204 (2007)
Hassi, S., Sebestyén, Z., de Snoo, H.S.V., Szafraniec, F.H.: A canonical decomposition for linear operators and linear relations. Acta Math. Hung. 115, 281–307 (2007)
Hassi, S., de Snoo, H.S.V., Szafraniec, F.H.: Componentwise and canonical decompositions of linear relations. Dissert. Math. 465 (2009)
Hassi, S., de Snoo, H.S.V., Szafraniec, F.H.: Infinite-dimensional perturbations, maximally nondensely defined symmetric operators, and some matrix representations. Indag. Math. 23, 1087–1117 (2012)
Jin, G., Chen, A.: Some basic properties of block operator matrices. (2014). arXiv:1403.7732v1
Möller, M., Szafraniec, F.H.: Adjoints and formal adjoints of matrices of unbounded operators. Proc. Am. Math. Soc. 136, 2165–2176 (2008)
Nagel, R.: Towards a ’matrix theory’ for unbounded operator matrices. Math. Z. 201, 57–68 (1989)
Nagel, R.: The spectrum of unbounded operator matrices with non-diagonal domain. J. Funct. Anal. 89, 291–302 (1990)
Ota, S., Schmüdgen, K.: Some selfadjoint \(2 \times 2\) operator matrices associated with closed operators. Integr. Equ. Oper. Theory 45, 475–484 (2003)
Popovici, D., Sebestyén, Z.: Operators which are adjoint to each other. Acta Sci. Math. (Szeged) 80, 175–194 (2014)
Sandovici, A.: A range matrix-type criterion for the self-adjointness of symmetric linear relations. Acta Math. Hung. 158, 27–35 (2019)
Sandovici, A.: On the adjoint of linear relations in Hilbert spaces. Mediterr. J. Math. 17, 68 (2020)
Sebestyén, Z., Tarcsay, Z.: Characterizations of selfadjoint operators. Studia Sci. Math. Hung. 4(50), 423–435 (2013)
Sebestyén, Z., Tarcsay, Z.: Characterizations of essentially self-adjoint and skew-adjoint operators. Studia Sci. Math. Hung. 3(52), 371–385 (2015)
Sebestyén, Z., Tarcsay, Z.: Adjoint of sums and products of operators in Hilbert spaces. Acta Sci. Math. (Szeged) 82, 175–191 (2016)
Sebestyén, Z., Tarcsay, Z.: On the adjoint of Hilbert space operators. Linear Multilinear Algebra 67, 625–645 (2019)
Tarcsay, Z., Sebestyén, Z.: Range-kernel characterizations of operators which are adjoint of each other. Adv. Oper. Theory (2020). https://doi.org/10.1007/s43036-020-00068-4
Communicated by Lajos Molnar.
Dedicated to our friend Franek Szafraniec on the occasion of his eightieth birthday.
Hassi, S., Labrousse, JP. & de Snoo, H. Operational calculus for rows, columns, and blocks of linear relations. Adv. Oper. Theory 5, 1193–1228 (2020). https://doi.org/10.1007/s43036-020-00085-3