On Equivalence and Linearization of Operator Matrix Functions with Unbounded Entries

In this paper we present equivalence results for several types of unbounded operator functions. A generalization of the concept of equivalence after extension is introduced and used to prove equivalence and linearization results for classes of unbounded operator functions. Further, we deduce methods of finding equivalences for operator matrix functions that utilize equivalences of the entries. Finally, a method of finding equivalences and linearizations for a general class of operator matrix polynomials is presented.


Introduction
Spectral properties of unbounded operator matrices are of major interest in operator theory and its applications [Tre08]. Important examples are systems of partial differential equations with λ-dependent coefficients or boundary conditions [Nag90, Tre01, AL95, ELT17, ET17]. A concept of equivalence can be used to compare spectral properties of different operator functions, and the problem of classifying bounded analytic operator functions modulo equivalence has been studied intensely [GKL78, dB78, BGKR08, KvdMR81]. The properties preserved by equivalence include the spectrum, and for holomorphic operator functions there is a one-to-one correspondence between their Jordan chains [KL92, Proposition 1.2]. Our aim is to generalize some of the results in those articles and to study a concept of equivalence for classes of operator functions whose values are unbounded linear operators. A prominent result in this direction is the equivalence between an operator matrix and its Schur complements [ALMS94, Shk95, Tre08].
In this paper, we consider systems described by $n \times n$ operator matrix functions and study a concept of equivalence when some of the entries are Schur complements, polynomials, or can be written as products of operator functions. Examples of this type are the operator matrix functions with quadratic polynomial entries that were studied in [APT02] and functions with rational and polynomial entries in plasmonics [MRW+14]. In order to extend previous results to cases with unbounded entries, we generalize in Definition 2.2 the concept of equivalence after extension from [GKL78]. This new concept can be used to compare spectral properties of two unbounded operator functions, but also to determine the correspondence between their domains and when two operator functions are simultaneously closed. Our main results are (i) equivalence results for operator matrix functions containing unbounded Schur complement entries (Theorem 3.4) and polynomial entries (Theorem 3.11), and (ii) a systematic approach to linearizing operator matrix functions with polynomial entries (Theorem 4.1 together with the algorithm in Proposition 4.9 or Proposition 4.10).
Throughout this paper, $H$, with or without subscripts, tildes, hats, or primes, denotes a complex Banach space. Moreover, $\mathcal{L}(H, \tilde H)$ denotes the collection of linear (not necessarily bounded) operators between $H$ and $\tilde H$. The space of everywhere defined bounded operators between $H$ and $\tilde H$ is denoted $\mathcal{B}(H, \tilde H)$, and we use the notation $\mathcal{L}(H) := \mathcal{L}(H, H)$ and $\mathcal{B}(H) := \mathcal{B}(H, H)$. For convenience, the product Banach space of $d$ identical copies of $H$ is denoted $H^d$. The domain of an operator $A \in \mathcal{L}(H, \tilde H)$ is denoted $\mathcal{D}(A)$, and if $A$ is closable the closure of $A$ is denoted $\overline{A}$. In the following, we denote for a linear operator $A$ the spectrum and resolvent set by $\sigma(A)$ and $\rho(A)$, respectively. The point spectrum $\sigma_p(A)$, continuous spectrum $\sigma_c(A)$, and residual spectrum $\sigma_r(A)$ are defined as in [EE87, Section I.1].
Let $\Omega \subset \mathbb{C}$ be a non-empty open set and let $T : \Omega \to \mathcal{L}(H, H')$ denote an operator function. Then the spectrum of $T$ is
$$\sigma(T) := \{\lambda \in \Omega : 0 \in \sigma(T(\lambda))\}.$$
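In finite dimensions this definition reduces to familiar linear algebra: for the pencil $T(\lambda) = A - \lambda I$ the set $\{\lambda : 0 \in \sigma(T(\lambda))\}$ consists exactly of the eigenvalues of $A$. A minimal numerical sketch (the matrix $A$ and the helper `in_spectrum` are illustrative, not from the paper):

```python
import numpy as np

# For the operator function T(λ) = A - λI, the spectrum
# σ(T) = {λ : 0 ∈ σ(T(λ))} is the set of eigenvalues of A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def in_spectrum(lam, tol=1e-10):
    # 0 ∈ σ(T(λ)) iff T(λ) = A - λI has an eigenvalue at 0.
    return min(abs(np.linalg.eigvals(A - lam * np.eye(2)))) < tol

assert in_spectrum(2.0) and in_spectrum(3.0) and not in_spectrum(1.0)
```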

Unless otherwise stated, a $2 \times 2$ operator matrix function $T$ with entries $A$, $B$, $C$, $D$ is considered on its natural domain
$$\mathcal{D}(T(\lambda)) := \big(\mathcal{D}(A(\lambda)) \cap \mathcal{D}(C(\lambda))\big) \oplus \big(\mathcal{D}(B(\lambda)) \cap \mathcal{D}(D(\lambda))\big), \qquad \lambda \in \Omega,$$
[Tre08, Section 2.2].

The paper is organized as follows. In Section 2 we generalize concepts of equivalence in order to study functions whose values are unbounded operators. In particular, the concept of equivalence after operator function extension is defined, which enables us to show an equivalence for pairs of unbounded operator functions. We provide natural generalizations of results that are well known for bounded operator functions. Further, we show how an equivalence for an entry of an operator matrix function can be used to find an equivalence for the full operator matrix function.
Section 3 contains three subsections, one for each of the studied types of equivalence: Schur complements [Tre08, Nag89, ALMS94, ELT17], products of operator functions [GKL78], and operator polynomials [KL78, Mar88]. Each subsection is structured similarly: first an equivalence for the class of operator functions is presented, and then we show how this equivalence can be used to prove equivalences for operator matrix functions.
In Section 4 we use the results from Section 3 to find equivalences between a class of operator matrix functions and operator matrix polynomials. Moreover, we discuss two different ways of finding linear equivalences (linearizations) of operator matrix polynomials. The section is concluded with an example of how the results from Section 3 and Section 4 can be used jointly to linearize operator matrix functions.

Equivalence and equivalence after operator function extension
In this section we introduce the concepts used to classify unbounded operator functions up to equivalence. These concepts were used to study bounded operator functions in [GKL78, BGKR05]. Let $S_\Omega$ and $T_\Omega$ denote the restrictions of $S$ and $T$ to $\Omega$. Then
$$\sigma(T_\Omega) = \sigma(S_\Omega), \quad \sigma_p(T_\Omega) = \sigma_p(S_\Omega), \quad \sigma_c(T_\Omega) = \sigma_c(S_\Omega), \quad \sigma_r(T_\Omega) = \sigma_r(S_\Omega).$$
Gohberg et al. [GKL78] and Bart et al. [BGKR05] studied a generalization of equivalence called equivalence after extension. Here, we introduce a more general definition of equivalence after extension, which for clarity we call equivalence after operator function extension.
Definition 2.2. Let $S : \Omega_S \to \mathcal{L}(H, H')$ and $T : \Omega_T \to \mathcal{L}(\hat H, \hat H')$ denote operator functions with domains $\mathcal{D}(S)$ and $\mathcal{D}(T)$, respectively. Assume that there are invertible operator functions $W_S : \Omega \to \mathcal{L}(\check H_S)$ and $W_T : \Omega \to \mathcal{L}(\check H_T)$ such that $S \oplus W_S$ and $T \oplus W_T$ are equivalent on $\Omega$. Then $S$ and $T$ are said to be equivalent after operator function extension on $\Omega$. The operator functions $S$ and $T$ are said to be equivalent after one-sided operator function extension on $\Omega$ if either $\check H_S$ or $\check H_T$ can be chosen as $\{0\}$. If $\check H_T$ can be chosen as $\{0\}$, then we say that $S$ is after $W_S$-extension equivalent to $T$ on $\Omega$.
The definition of equivalence after extension in [BGKR05] corresponds in Definition 2.2 to the case where $W_S(\lambda)$ and $W_T(\lambda)$ are the identity operators on the respective extension spaces for all $\lambda \in \Omega$. We allow $W_S$ and $W_T$ to be unbounded operator functions and can therefore study a concept of equivalence for a larger class of unbounded operator function pairs $S$ and $T$.
In particular, the equivalence results for Schur complements and polynomial problems presented in Section 3.1 and Section 3.3, respectively, cannot be described by an equivalence after extension with the identity operator. In the equivalence results for products of operator functions in Section 3.2, the operator function $W$ is bounded (in fact $W(\lambda) = I$ for all $\lambda \in \mathbb{C}$). Thus, in that case the standard definition of equivalence after extension is sufficient as well.
Proposition 2.1 shows that two equivalent unbounded operator functions have the same spectral properties and it provides the correspondence between the domains. In the following proposition, those results are extended to include operator functions that are equivalent after operator function extension.
Proposition 2.3. Assume that $S : \Omega_S \to \mathcal{L}(H, H')$ and $T : \Omega_T \to \mathcal{L}(\hat H, \hat H')$ are equivalent after operator function extension on $\Omega \subset \Omega_S \cap \Omega_T$. Let $W_S : \Omega \to \mathcal{L}(\check H_S)$ and $W_T : \Omega \to \mathcal{L}(\check H_T)$ denote the invertible operator functions such that $S(\lambda) \oplus W_S(\lambda)$ is equivalent to $T(\lambda) \oplus W_T(\lambda)$ for $\lambda \in \Omega$, and let $E$, $F$ be the operator functions in the equivalence relation (2.1). Define the projection $\pi_H : H \oplus \check H_S \to H$ as $\pi_H(u \oplus v) = u$ and let $\tau_H$ denote the natural embedding of $H$ into $H \oplus \check H_S$ given by $\tau_H u = u \oplus 0_{\check H_S}$. Then for $\lambda \in \Omega$ we have the relation
$$\mathcal{D}(S(\lambda)) = \pi_H F^{-1}(\lambda)\big(\mathcal{D}(T(\lambda)) \oplus \mathcal{D}(W_T(\lambda))\big),$$
and the operator $S(\lambda)$ is closed (closable) if and only if $T(\lambda)$ is closed (closable). The closure of a closable operator $S(\lambda)$ satisfies
$$\mathcal{D}(\overline{S(\lambda)}) = \pi_H F^{-1}(\lambda)\big(\mathcal{D}(\overline{T(\lambda)}) \oplus \mathcal{D}(\overline{W_T(\lambda)})\big),$$
and we then have
$$\sigma(T_\Omega) = \sigma(S_\Omega), \quad \sigma_p(T_\Omega) = \sigma_p(S_\Omega), \quad \sigma_c(T_\Omega) = \sigma_c(S_\Omega), \quad \sigma_r(T_\Omega) = \sigma_r(S_\Omega),$$
where $S_\Omega$ and $T_\Omega$ denote the restrictions of $S$ and $T$ to $\Omega$.
Proof. From Definition 2.2 it follows that the claimed relations hold for $\lambda \in \Omega$. The result then follows from Proposition 2.1 and the fact that the closure of a block diagonal operator coincides with the closures of its blocks.
Below we show how an equivalence for an entry of an operator matrix function can be used to find an equivalence for the full operator matrix function. A general operator matrix function $\mathcal{S} : \Omega \to \mathcal{L}\big(\bigoplus_{i=1}^n H_i, \bigoplus_{i=1}^n H'_i\big)$ defined on its natural domain can be represented entrywise. However, any entry $S(\lambda) := S_{j,i}(\lambda)$ can be moved to the upper left corner by changing the order of the spaces, which results in an equivalent problem. Hence, it is sufficient to study the $2 \times 2$ system given in (2.3).

Lemma 2.4. Assume that (2.4) holds. Then $\mathcal{S}$ is equivalent to $\mathcal{T}$ on $\Omega$, with operator matrix functions $E(\lambda)$ and $F(\lambda)$ in the equivalence relation (2.1).

Proof. Under the assumption (2.4), the lemma follows immediately by verifying $\mathcal{S}(\lambda) = E(\lambda)\mathcal{T}(\lambda)F(\lambda)$.
Remark 2.5. The condition (2.4) is satisfied in the trivial case $\tilde E = 0$, $\tilde F = 0$, and for the problems we study in Section 3. A similar result also holds when (2.4) is not satisfied, but then the $(2,2)$-entry of $\mathcal{T}(\lambda)$ is not of the same form.

Equivalences for classes of operator matrix functions
In this section, we study Schur complements, operator functions given by products of operator functions, and operator polynomials. Each type is treated similarly: first an equivalence after operator function extension is shown, which is then, together with Lemma 2.4, applied to an operator matrix function.
Remark 3.1. Assume that $S(\lambda) \oplus W(\lambda)$ is equivalent to $T(\lambda)$ for $\lambda \in \Omega$ and let $\mathcal{S}$ be defined as in (2.3). For the equivalence relation between $\mathcal{T}$ and $\mathcal{S}$ we want to keep the block $S(\lambda) \oplus W(\lambda)$ intact, so that Lemma 2.4 can be applied directly. Therefore, an equivalence after $W$-extension of $S(\lambda)$ is given with respect to the structure (3.1).

3.1. Schur complements. Let $D : \Omega_D \to \mathcal{L}(\tilde H)$ denote an operator function with domain $\mathcal{D}(D(\lambda))$ for $\lambda \in \Omega_D \subset \mathbb{C}$. Assume that $\Omega' \subset \Omega_D \cap \rho(D)$ is non-empty and let the Schur complement $S : \Omega' \to \mathcal{L}(H, H')$ for $\lambda \in \Omega'$ be defined as
(3.2) $\quad S(\lambda) := A(\lambda) - B(\lambda)D(\lambda)^{-1}C(\lambda)$,
where $A(\lambda) \in \mathcal{L}(H, H')$, $B(\lambda) \in \mathcal{L}(\tilde H, H')$, $C(\lambda) \in \mathcal{L}(H, \tilde H)$, and $\mathcal{D}(D(\lambda)) \subset \mathcal{D}(B(\lambda))$. The claims in the following lemma are standard results for Schur complements [Shk95], [Tre08, Theorem 2.2.18], formulated in terms of an equivalence after operator function extension. For the convenience of the reader we provide a short proof.
Lemma 3.2. Let $S(\lambda)$ denote the operator defined in (3.2), assume that $C(\lambda)$ is densely defined in $H$, and that $D^{-1}(\lambda)C(\lambda)$ is bounded on $\mathcal{D}(C(\lambda))$ for all $\lambda \in \Omega'$. Define the operator matrix function $T$ on its natural domain as
$$T(\lambda) := \begin{pmatrix} A(\lambda) & B(\lambda) \\ C(\lambda) & D(\lambda) \end{pmatrix}.$$
Then $S$ is after $D$-extension equivalent to $T$ on $\Omega'$, where the operator matrix functions $E$ and $F$ in the equivalence relation (2.1) are
$$E(\lambda) := \begin{pmatrix} I & -B(\lambda)D(\lambda)^{-1} \\ 0 & I \end{pmatrix}, \qquad F(\lambda) := \begin{pmatrix} I & 0 \\ -\overline{D(\lambda)^{-1}C(\lambda)} & I \end{pmatrix}.$$
The operator $T(\lambda)$ is closable if and only if $S(\lambda)$ is closable.

Proof. The operator matrices $E(\lambda)$ and $F(\lambda)$ are bounded, and $\overline{D(\lambda)^{-1}C(\lambda)} = D(\lambda)^{-1}C(\lambda)$ on $\mathcal{D}(S(\lambda))$. The result then follows from the factorization
$$\begin{pmatrix} S(\lambda) & 0 \\ 0 & D(\lambda) \end{pmatrix} = E(\lambda)T(\lambda)F(\lambda).$$

Remark 3.3. If $D$ is unbounded, $S$ and $T$ are not equivalent after extension. However, they are equivalent after $D$-extension.
The domain and the closure are not explicitly stated in the equivalences in the remaining part of the article, but they can be derived using the relations in Proposition 2.3.
Theorem 3.4. Let $S$, $E$, and $F$ denote the operator functions on $\Omega' \supset \Omega$ defined in Lemma 3.2, and let the operator matrix functions $\mathcal{S}$ and $\mathcal{T}$ be defined on their natural domains with respect to the structure (3.1). Then $\mathcal{S}$ is after $D$-extension, with respect to the structure (3.1), equivalent to $\mathcal{T}$ on $\Omega$, where the operator matrix functions $\mathcal{E}$ and $\mathcal{F}$ in the equivalence relation (2.1) for $\lambda \in \Omega$ are built from $E(\lambda)$ and $F(\lambda)$ as in Lemma 2.4.

Proof. From Lemma 3.2 it follows that $S(\lambda) \oplus D(\lambda) = E(\lambda)T(\lambda)F(\lambda)$. By using Lemma 2.4 with $\tilde E = 0$ and $\tilde F = 0$, the proposed $\mathcal{E}(\lambda)$ and $\mathcal{F}(\lambda)$ are obtained.
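In finite dimensions, the $D$-extension equivalence of Lemma 3.2 and Theorem 3.4 is the classical block-LDU factorization $\operatorname{diag}(S, D) = E\,T\,F$. A numerical sketch, assuming $D$ is invertible (all matrices are randomly generated illustrations, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))  # D generically invertible

T = np.block([[A, B], [C, D]])
S = A - B @ np.linalg.solve(D, C)  # Schur complement (3.2) at a fixed λ
I2, Z = np.eye(2), np.zeros((2, 2))
E = np.block([[I2, -B @ np.linalg.inv(D)], [Z, I2]])
F = np.block([[I2, Z], [-np.linalg.solve(D, C), I2]])

# S ⊕ D = E T F, hence det T = det S · det D: T is singular iff S is.
assert np.allclose(np.block([[S, Z], [Z, D]]), E @ T @ F)
assert np.isclose(np.linalg.det(T), np.linalg.det(S) * np.linalg.det(D))
```

Since $E$ and $F$ are invertible, $0 \in \sigma(T)$ if and only if $0 \in \sigma(S \oplus D)$, which for invertible $D$ reduces to $0 \in \sigma(S)$.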

3.2. Products of operator functions.
Assume that for some $n \in \mathbb{N}$ the operator function $M : \Omega' \to \mathcal{B}(H_n, H_0)$ can be written as the product
(3.3) $\quad M(\lambda) = M_1(\lambda)M_2(\lambda)\cdots M_n(\lambda)$, $\quad M_i : \Omega' \to \mathcal{B}(H_i, H_{i-1})$.
The following lemma is a straightforward generalization of a result in [GKL78].
Lemma 3.5. Let $M$ denote the operator function (3.3) and set $H := H_1 \oplus \cdots \oplus H_{n-1}$. Then $M$ is after $I_H$-extension equivalent to $T$, where the operator matrix functions $E : \Omega' \to \mathcal{B}(H_0 \oplus H)$ and $F : \Omega' \to \mathcal{B}(H \oplus H_n)$ appear in the equivalence relation (2.1).

Proof. For $n = 2$ the equivalence result is used in the proof of [GKL78, Theorem 4.1], and the claims in the lemma follow by applying that equivalence iteratively.
Remark 3.6. Consider the operator function (3.3) with $n = 2$ and write $M(\lambda)$ in the form $M(\lambda) = -M_1(\lambda)(-I_{H_1})^{-1}M_2(\lambda)$. Then Lemma 3.2 can be used to obtain the same equivalence result as in Lemma 3.5. Doing this iteratively for $n > 2$ shows that Lemma 3.5 is a consequence of Lemma 3.2. However, $M(\lambda)$ is an important case that has been studied separately (see e.g. [GKL78, Theorem 4.1]).
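The reduction in Remark 3.6 can be checked numerically: writing $M(\lambda) = -M_1(\lambda)(-I)^{-1}M_2(\lambda)$ and applying the Schur complement factorization gives $\operatorname{diag}(M_1M_2, -I) = E\,T\,F$ for one concrete choice of extension. A finite-dimensional sketch for $n = 2$ (all matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.7
# Illustrative factors M1(λ), M2(λ), evaluated at a sample point λ.
M1 = rng.standard_normal((2, 2)) + lam * rng.standard_normal((2, 2))
M2 = rng.standard_normal((2, 2)) + lam * rng.standard_normal((2, 2))

I2, Z = np.eye(2), np.zeros((2, 2))
# M(λ) = -M1(λ)(-I)^{-1}M2(λ) is a Schur complement of T(λ).
T = np.block([[Z, M1], [M2, -I2]])
E = np.block([[I2, M1], [Z, I2]])
F = np.block([[I2, Z], [M2, I2]])

# diag(M1 M2, -I) = E T F: the product M1 M2 is singular iff T(λ) is.
assert np.allclose(np.block([[M1 @ M2, Z], [Z, -I2]]), E @ T @ F)
```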
Below we show how Lemma 3.5 can be applied to an operator matrix function.
Theorem 3.7. Let $M$, $E$, and $F$ denote the operator functions on $\Omega' \supset \Omega$ defined in Lemma 3.5, and let the operator matrix function $\mathcal{M} : \Omega \to \mathcal{L}(H_n \oplus \tilde H, H_0 \oplus \tilde H')$ be defined on its natural domain with respect to the structure (3.1). Then $\mathcal{M}$ is after $I_H$-extension, with respect to the structure (3.1), equivalent to the corresponding operator matrix function $\mathcal{T}$, with operator matrix functions $\mathcal{E} : \Omega \to \mathcal{B}(H_0 \oplus H \oplus \tilde H')$ and $\mathcal{F} : \Omega \to \mathcal{B}(H \oplus H_n \oplus \tilde H)$ in the equivalence relation (2.1).

Proof. The claims follow by combining the extension in Lemma 3.5 with Lemma 2.4 for the case $\tilde E(\lambda) = 0$, $\tilde F(\lambda) = 0$. This derivation is similar to the proof of Theorem 3.4.

3.3. Operator polynomials. Let $l \in \{0, \ldots, d\}$ and consider the operator polynomial $P : \mathbb{C} \to \mathcal{L}(H)$,
(3.4) $\quad P(\lambda) := \sum_{i=0}^{d} \lambda^i P_i$,
where $P_i \in \mathcal{B}(H)$ for $i \neq l$. For $l = 0$, a linearization is in principle given in [GKL78, p. 112]. Only bounded operator coefficients are considered in that paper, but the operator matrix functions $E$ and $F$ in the equivalence relation (2.1) are independent of $P_0$; hence they remain bounded also when $P_0$ is unbounded. However, the method in [GKL78] cannot be used directly if $P_i$ is unbounded for some $i > 0$. The following example illustrates the problem for a quadratic polynomial.
Example 3.8. Consider the operator polynomial $P : \mathbb{C} \to \mathcal{L}(H)$ with coefficients $A \in \mathcal{L}(H)$ unbounded and $B \in \mathcal{B}(H)$. Then the method in [GKL78] is not applicable for finding an equivalent linear problem after extension, since $E(\lambda)$ and $E(\lambda)^{-1}$ would be unbounded for all $\lambda$. However, for all $\lambda \neq 0$, an equivalent spectral problem is $S(\lambda) := P(\lambda)/\lambda$. By extending $S(\lambda)$ by $-\lambda I_H$, an equivalent problem is obtained, and as a consequence $P(\lambda) \oplus W(\lambda) = E(\lambda)(T - \lambda)F(\lambda)$ with $W(\lambda) = -\lambda$. Using this method, the obtained $T$ has the same entries as the operator given in [GKL78, p. 112], but the functions $E(\lambda)$, $F(\lambda)$ are bounded for $\lambda \neq 0$.

Inspired by the previous example, we show how an equivalence can be found regardless of which coefficient $P_i$ in Lemma 3.9 is unbounded. Note that Lemma 3.9 is the standard companion block linearization for operator polynomials, formulated as an equivalence after extension.
Lemma 3.9. Let $P$ denote the operator polynomial defined in (3.4) and assume that $P_d$ is invertible. For $i < d$ set $\hat P_i := P_d^{-1}P_i$ and $\hat P_d := I_H$. Let $\Omega' := \mathbb{C}$ if $l = 0$, and $\Omega' := \mathbb{C}\setminus\{0\}$ otherwise. Define the companion operator matrix $T \in \mathcal{L}(H^d)$ on its natural domain, and define the operator matrix function $W : \Omega' \to \mathcal{L}(H^{\max(d-1,l)})$. Then $P$ and $T - \lambda$ are equivalent after operator function extension on $\Omega'$. The operator matrix functions in the equivalence relation (2.1) are for $\lambda \in \Omega'$ defined in the following steps: for $l < d$, define the operator matrix functions $E_\alpha$ and $F_\alpha$, whereas for $l = d-1$ define $E_\alpha(\lambda) := -P_d$ and $F_\alpha(\lambda) := \lambda^{d-1}I_H$.
where for $l \ge d-1$ we use the convention that the $0$-row/column vanishes. If $l = d$, we additionally define operators $E_\gamma \in \mathcal{B}(H, H^d)$ and $F_\gamma \in \mathcal{B}(H^d, H)$. Then, for all $\lambda \in \Omega'$, the operator matrix functions $E$ and $F$ in the equivalence relation (2.1) are given in terms of the operators defined above.

Proof. For $l = 0$, the result follows in principle from [GKL78, p. 112]. Hence, we show the claim for $l > 0$ and $\Omega' = \mathbb{C}\setminus\{0\}$. Define for all $\lambda \in \Omega'$ the operator function $S$ by $S(\lambda) := P(\lambda)/\lambda^l$. Assume $l < d$; then, apart from the sum $\sum_{k=0}^{l-1} P_k/\lambda^{l-k}$, $S$ is a polynomial in $\lambda$ and only the zeroth-order term $P_l$ can be unbounded. Then, from [GKL78, p. 112] it can be seen that $S$ is after $I_{H^{d-1-l}}$-extension equivalent to a linear operator function. From the resulting identity it follows, by Theorem 3.4, that $P_d \oplus (T - \lambda)$ is equivalent to $\tilde T(\lambda)$.
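In finite dimensions, the companion linearization underlying Lemma 3.9 states that the eigenvalues of the companion block matrix $T$ are exactly the solutions of $\det P(\lambda) = 0$. A sketch for a monic quadratic matrix polynomial, so that $\hat P_i = P_d^{-1}P_i = P_i$ (coefficients are randomly generated illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
P0, P1 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
P2 = np.eye(2)  # monic: P_d is invertible and P_d^{-1} P_i = P_i

# Companion linearization T for P(λ) = P0 + λ P1 + λ² P2.
I2, Z = np.eye(2), np.zeros((2, 2))
T = np.block([[Z, I2], [-P0, -P1]])

# Every eigenvalue λ of T satisfies det P(λ) = 0.
for lam in np.linalg.eigvals(T):
    assert abs(np.linalg.det(P0 + lam * P1 + lam**2 * P2)) < 1e-8
```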
Example 3.10. In Lemma 3.9, the result is rather different when $l = d$, even though $T$ has the same entries. In this case the equivalence holds only after both $P(\lambda)$ and $T - \lambda$ have been extended with an operator function, and the following example shows that this extension in general cannot be avoided. Let $A \in \mathcal{L}(H)$ with $A$ invertible, let $B \in \mathcal{B}(H)$, and define $P : \mathbb{C}\setminus\{0\} \to \mathcal{L}(H)$ accordingly. If $A$ is bounded, then $P(\lambda)$ is equivalent to $T - \lambda$ with $T = -A^{-1}B$, but this equivalence does not hold if $A$ is unbounded. However, these operator functions are equivalent on $\mathbb{C}\setminus\{0\}$ after operator function extension, as can be seen from Lemma 3.9 applied for $\lambda \in \mathbb{C}\setminus\{0\}$.

Theorem 3.11. Let $P$, $E$, $F$, and $W$ denote the operator functions on $\Omega' \supset \Omega$ defined in Lemma 3.9 and let $\hat P_i$, $i = 1, \ldots, d$, denote the operators in that lemma. The operator matrix function $\mathcal{P} : \Omega \to \mathcal{L}(H \oplus \tilde H, H \oplus \tilde H')$ is defined on its natural domain. Assume that $Q_i \in \mathcal{B}(H, \tilde H)$ for $i \neq l$, and if $l = d$ assume that $\overline{P_d^{-1}X(\lambda)} \in \mathcal{B}(\tilde H, H)$ for all $\lambda \in \Omega$. Define for all $\lambda \in \Omega$ the operator matrix function $\mathcal{T} : \Omega \to \mathcal{L}(H^d \oplus \tilde H, H^d \oplus \tilde H')$ on its natural domain. Then, with respect to (3.1), $\mathcal{P}$ and $\mathcal{T}$ are equivalent after operator function extension on $\Omega$. The operator matrix functions in the equivalence relation (2.1) are for $\lambda \in \Omega$ defined in the following steps: if $l < d$, define the operator matrix function $\tilde E_\alpha : \Omega \to \mathcal{L}(H^{d-l}, \tilde H)$, where $\tilde E_\alpha(\lambda) := 0$ for $l = d-1$; if $l > 0$, define the operator matrix function $\tilde E_\beta : \Omega \to \mathcal{B}(H^l, \tilde H)$. The operator matrices $\tilde E : \Omega \to \mathcal{B}(H^{\max(d,l+1)}, \tilde H)$ and $\tilde F : \Omega \to \mathcal{B}(\tilde H, H^{\max(d,l+1)})$ are then defined as in (3.5); in particular, $\tilde E(\lambda) := \tilde E_\alpha(\lambda)$ and $\tilde F(\lambda) := 0$ for $l = 0$. Finally, define the operator matrices $E(\lambda)$ and $F(\lambda)$ in the equivalence relation (2.1).

Proof. Similar to the proof of Theorem 3.4, where Lemma 3.9 with (3.5) is used in Lemma 2.4. Note that $\overline{P_d^{-1}X(\lambda)} = P_d^{-1}X(\lambda)$ on $\mathcal{D}(X(\lambda))$.
Remark 3.12. Theorem 3.11 requires $Q$ to be an operator polynomial. For a general $Q$, an equivalence is obtained by using the equivalence given in Lemma 3.9 together with Lemma 2.4 with $\tilde E := 0$ and $\tilde F := 0$.

Linearization of classes of operator matrix functions
In Section 3 we considered three types of operator functions. One vital property distinguishes operator functions of the forms (3.2) and (3.3) from operator polynomials (3.4): for polynomials the equivalence is to a linear operator function (Lemma 3.9), whereas a similar result does not hold in general for (3.2) and (3.3).
If $A$, $B$, $C$, and $D$ in (3.2) and $M_1, \ldots, M_n$ in (3.3) are operator polynomials, then Lemma 3.2 and Lemma 3.5, respectively, can be used to find an equivalence after operator function extension to an operator matrix polynomial. Hence, if the entries in an $n \times n$ operator matrix function are either products of polynomials or Schur complements, then Theorem 3.4 and Theorem 3.7 can be used iteratively to find an equivalence to an operator matrix polynomial. An example of this form is considered in Section 4.3.

Consider the $n \times n$ operator matrix polynomial $\mathcal{P}$ in (4.1), with entries $P_{j,i}(\lambda) := \sum_{k=0}^{d_{j,i}} \lambda^k P^{(k)}_{j,i}$, where $P^{(k)}_{j,i} \in \mathcal{L}(H_i, H_j)$. There are different ways to formulate (4.1) that highlight different methods to linearize the operator matrix polynomial. By using the notation $P^{(k)}_{j,i} := 0$ for $k > d_{j,i}$ and $d := \max_{j,i} d_{j,i}$, it follows that $\mathcal{P}$ can be written in the form (4.2), where the problem is written as a single operator polynomial. This makes it possible to utilize Lemma 3.9, provided certain conditions hold, and it is the most commonly used formulation; see e.g. [APT02]. For the original formulation (4.1), Theorem 3.11 can instead be applied iteratively to each column, which results in a linear function. In Theorem 4.1 we present the linearization obtained using this method, and in Section 4.2 we present a systematic approach to linearize operator matrix polynomials that relies on Theorem 4.1.

Theorem 4.1. Let the linear operator matrix function $T - \lambda$ be defined blockwise, where $T_{j,i} \in \mathcal{L}(H_i^{d_i}, H_j^{d_j})$ are the operator matrices obtained by applying Theorem 3.11 to the columns of (4.1); for $i \neq j$ the off-diagonal operator matrices are defined analogously. Then the operator matrices $E(\lambda)$ and $F(\lambda)$ in the equivalence relation (2.1) are block operator matrices built from these.

Proof. The claims follow from applying Theorem 3.11 to each column in (4.1). However, for columns $2, \ldots, n$, reordering of the diagonal blocks as in (2.3) is needed in order to apply Theorem 3.11 directly.
Remark 4.2. In Theorem 4.1 the operator matrix functions $E$ and $F$ in the equivalence relation (2.1) are not specified for the case $l_i = d_i$. The reason is that $E(\lambda)$ and $F(\lambda)$ then depend on the order in which Theorem 3.11 is applied to the columns, and they are very complicated, albeit possible to determine.
Remark 4.3. For operator polynomials it is common to consider equivalence after extension to a non-monic linear operator pencil $T - \lambda S$ [GKL78]. In Theorem 4.1 the condition that $P_{i,i}$ is invertible for $i = 1, \ldots, n$ can be dropped if the matrix block in the equivalence is non-monic. However, the reduction of a non-monic pencil to an operator is, as pointed out by Kato [Kat95, VII, Section 6.1], non-trivial; see also Example 3.10.
There are both advantages and disadvantages of using Theorem 4.1 instead of Lemma 3.9 for operator matrix polynomials. One advantage is that $\mathcal{P}_d$ does not have to be invertible. Furthermore, for unbounded operator functions Theorem 4.1 can handle more cases, since it allows $l_i \neq l_j$, while in Lemma 3.9 $P_l$ is unbounded for at most one $l \in \{0, \ldots, d\}$. However, a disadvantage of this method is that the highest degree in each column has to be in the diagonal. Importantly, if both methods are applicable to $\mathcal{P}$, then the linearizations obtained from Theorem 4.1 and Lemma 3.9 are the same up to ordering of the spaces. Even if the conditions on $\mathcal{P}$ in Lemma 3.9 and/or Theorem 4.1 are not satisfied, an equivalent operator matrix function $\hat{\mathcal{P}}$ that satisfies these conditions can in many cases still be found. For example, Lemma 3.9 cannot be applied if the highest column degrees $d_i$ are not all equal. However, for $\lambda \in \Omega \setminus \{0\}$ an equivalent operator matrix function $\hat{\mathcal{P}}$ is obtained by multiplying each column by a suitable power of $\lambda$, so that the highest degree is the same in each column, unless one column is identically $0$. However, the coefficient of the highest-order term, $\hat{\mathcal{P}}_d$, might still be non-invertible, and the boundedness condition might not be satisfied. Even if all conditions are satisfied, this method increases the size of the linearization and introduces false solutions at $0$. This is connected to the column reduction concept for matrix polynomials discussed, for example, in [NP93]. Due to these problems, which restrict the use of Lemma 3.9, and the difficulties that can occur when trying to find a suitable equivalent problem, we prefer to use the results in Theorem 4.1. Therefore, we develop a method that for a given operator matrix polynomial $\mathcal{P}$ provides an equivalent operator matrix polynomial $\hat{\mathcal{P}}$ for which the conditions in Theorem 4.1 are satisfied.
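The degree-equalization step above, multiplying a column by a power of $\lambda$, and the false solutions at $0$ that it introduces can be seen already for a $2 \times 2$ scalar matrix polynomial. A sketch (the polynomial entries are illustrative):

```python
import numpy as np

# P(λ) has column degrees 2 and 1; multiplying the second column by λ
# equalizes the column degrees but multiplies det P(λ) by λ, which
# introduces a false solution at λ = 0.
def P(lam):
    return np.array([[lam**2 + 1.0, lam + 2.0],
                     [lam - 1.0,    3.0]])

def P_hat(lam):  # column-equalized version
    Q = P(lam).copy()
    Q[:, 1] *= lam
    return Q

lam = 0.5
assert np.isclose(np.linalg.det(P_hat(lam)), lam * np.linalg.det(P(lam)))
assert np.isclose(np.linalg.det(P_hat(0.0)), 0.0)  # false solution at 0
```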

4.2. Column reduction of operator matrix polynomials. Theorem 4.1 is only applicable when the diagonal entries in (4.1) are of strictly higher degree than the rest of the entries in the same column. The aim of this subsection is, for a given operator matrix polynomial $\mathcal{P}$, to find a sequence of transformations that yields an equivalent operator matrix polynomial where the diagonal entries have the highest degrees.
One type of column reduction algorithm for polynomial matrices was considered in [NP93], but the column reduction algorithms presented in this section differ from it also in the finite-dimensional case. Naturally, new challenges emerge in the infinite-dimensional case and when some of the operators are unbounded. This can be seen in the following example, which also illustrates that it is not necessary to have an equivalence in each step.

Example 4.4. The operator matrix function $\tilde K_1 \mathcal{P}$ has the highest degree in the diagonal in the first two columns, but not in the last column. Let $\tilde K_3$ denote a further operator matrix function; then for $\tilde K_3 \tilde K_1 \mathcal{P}$ the third column has the highest degree in the diagonal. However, in the first column the diagonal entry is no longer of strictly higher degree than the rest of the column, and we therefore apply one more operator matrix function. In order to justify the formal steps above, we first state some conditions on $\mathcal{P}$: assume that $A$ and $L$ are invertible and that $CL^{-1}$, $(D - HL^{-1}J)A^{-1}$, and $HL^{-1}$ are bounded. The domain of $\mathcal{P}$ is chosen accordingly. Let $E : \mathbb{C} \to \mathcal{B}(H_1 \oplus H_2 \oplus H_3)$ be defined as $E(\lambda) := \hat K_1 \tilde K_3(\lambda) \tilde K_1$, where
$$E(\lambda) = \begin{pmatrix} I_{H_1} & -CL^{-1} & (D - HL^{-1}J)A^{-1} \\ & I_{H_2} & -\lambda HL^{-1} + (D - HL^{-1}J)A^{-1}CL^{-1} \\ & & I_{H_3} \end{pmatrix}.$$
Define $\hat{\mathcal{P}} : \mathbb{C} \to \mathcal{L}(H_1 \oplus H_2 \oplus H_3)$, $\mathcal{D}(\hat{\mathcal{P}}) = \mathcal{D}(\mathcal{P})$, as $\hat{\mathcal{P}}(\lambda) := E(\lambda)\mathcal{P}(\lambda)$. The operator matrix polynomial $\hat{\mathcal{P}}$ has the highest degrees in the diagonal. Furthermore, since $E(\lambda)$ is bounded and invertible for $\lambda \in \mathbb{C}$, it follows that $\mathcal{P}$ and $\hat{\mathcal{P}}$ are equivalent on $\mathbb{C}$.
Example 4.4 indicates that in the general case it is not feasible to obtain a closed formula for the final equivalent operator matrix polynomial. Instead, algorithms that follow the steps in Example 4.4 are developed below for bounded operator matrix polynomials. These algorithms also work for classes of operator matrix functions with unbounded entries, as in Example 4.4, and in each case it is possible to check whether one of the algorithms is applicable.
Let $\mathcal{P}$ denote the operator matrix polynomial (4.1) and assume that for $i \neq j$ there exist operator polynomials $K_{j,i}(\mathcal{P})$ and $R_{j,i}(\mathcal{P})$ such that $P_{j,i} = K_{j,i}(\mathcal{P})P_{i,i} + R_{j,i}(\mathcal{P})$, where $\deg R_{j,i}(\mathcal{P}) < \deg P_{i,i}$. A sufficient condition for the existence of these operators is that $P^{(d_{i,i})}_{i,i}$ is invertible.
The dependence on $\mathcal{P} : \mathbb{C} \to \mathcal{B}(H)$ is written out explicitly since we want to use $K_{j,i}(\mathcal{P}) : \mathbb{C} \to \mathcal{B}(H_i, H_j)$ in the algorithms. Define $\mathcal{K}_{j,i}(\mathcal{P}) : \mathbb{C} \to \mathcal{B}(H)$ as in (4.4). Multiplying an operator matrix polynomial $\mathcal{P}$ from the left with $\mathcal{K}_{j,i}(\mathcal{P})$ will be called reduction of the $i$-th column in the $j$-th row. Additionally, a column in $\mathcal{P}$ is said to be reduced if in that column the highest degree is in the diagonal of $\mathcal{P}$. When we, in the algorithms presented below, reduce the $(j,i)$-entry in $\mathcal{P}$, the condition that $P_{j,i} = K_{j,i}(\mathcal{P})P_{i,i} + R_{j,i}(\mathcal{P})$ has a solution with $\deg R_{j,i}(\mathcal{P}) < \deg P_{i,i}$ is not stated explicitly. Moreover, the notation $\mathcal{K}_{l:k,i}(\mathcal{P}) := \mathcal{K}_{l,i}(\mathcal{P}) \cdots \mathcal{K}_{k,i}(\mathcal{P})$ is used; the factors $\mathcal{K}_{j,i}(\mathcal{P})$ commute, so $\mathcal{K}_{l:k,i}(\mathcal{P})$ is independent of the order of multiplication. For convenience, the notation $\mathcal{K}_i(\mathcal{P}) := \mathcal{K}_{1:n,i}(\mathcal{P})$ is used. For example, the first column in the operator function $\hat{\mathcal{P}} := \mathcal{K}_1(\mathcal{P})\mathcal{P}$ is reduced. The entries in $\hat{\mathcal{P}}$ satisfy the conditions $\deg P_{1,1} > \deg R_{j,1}(\mathcal{P})$ and $\hat P_{j,i} := P_{j,i} - K_{j,1}(\mathcal{P})P_{1,i}$.
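The division $P_{j,i} = K_{j,i}(\mathcal{P})P_{i,i} + R_{j,i}(\mathcal{P})$ with $\deg R_{j,i}(\mathcal{P}) < \deg P_{i,i}$ can be carried out by cancelling leading coefficients when $P^{(d_{i,i})}_{i,i}$ is invertible. A sketch of a single cancellation step for two quadratic $2 \times 2$ matrix polynomials (coefficients are illustrative; when the degrees differ, the step is iterated and $K_{j,i}$ becomes a polynomial):

```python
import numpy as np

rng = np.random.default_rng(3)
# P_ji(λ) = A0 + λ A1 + λ² A2 and P_ii(λ) = B0 + λ B1 + λ² B2,
# with the leading coefficient B2 (generically) invertible.
A = [rng.standard_normal((2, 2)) for _ in range(3)]
B = [rng.standard_normal((2, 2)) for _ in range(3)]

# K cancels the leading coefficient, so R := P_ji - K P_ii has lower degree.
K = A[2] @ np.linalg.inv(B[2])
R = [A[k] - K @ B[k] for k in range(3)]

assert np.allclose(R[2], 0)  # leading coefficient cancels: deg R < deg P_ii
lam = 1.3  # spot-check P_ji(λ) = K P_ii(λ) + R(λ)
Pji = sum(lam**k * A[k] for k in range(3))
Pii = sum(lam**k * B[k] for k in range(3))
assert np.allclose(Pji, K @ Pii + R[0] + lam * R[1])
```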
With the notation above, the operator functions defined in Example 4.4 read $E := (\mathcal{K}_1 \circ \mathcal{K}_3 \circ \mathcal{K}_1)(\mathcal{P})$ and $\hat{\mathcal{P}} := (\mathcal{K}_1 \circ \mathcal{K}_3 \circ \mathcal{K}_1)(\mathcal{P})\mathcal{P}$.
Definition 4.5. Let $\mathcal{P} : \mathbb{C} \to \mathcal{L}\big(\bigoplus_{i=1}^n H_i\big)$ denote an operator matrix function with operator polynomial entries $P_{j,i} : \mathbb{C} \to \mathcal{L}(H_i, H_j)$, and define its $\mathbb{R}^{n \times n}$ degree matrix $D(\mathcal{P})$ with entries $d_{j,i} := \deg P_{j,i}$ and its difference matrix $\Delta(\mathcal{P})$ with entries $\Delta(\mathcal{P})_{j,i} := d_{j,i} - d_{i,i}$. Define the functions
(4.6) $\quad f(x, y, z) := \begin{cases} \max(x, y + z), & y \ge 0, \\ x, & y < 0, \end{cases}$
and
(4.7) $\quad f_0(x, y, z, w) := f(x, y, z) - f(0, w, z)$.
ii) $f_0$ is non-decreasing in its first and second arguments.
ii) The function $f(x, y, z)$ is non-decreasing in $x$ and $y$, which implies the same property for $f_0$.
The case $\deg \hat P_{j,i} < \max\{\deg P_{j,i}, \deg K_{j,1}(\mathcal{P})P_{1,i}\}$ in (4.5) can only occur if $\deg P_{j,i} = \deg K_{j,1}(\mathcal{P})P_{1,i}$, and even then it is non-generic. Therefore, in the following we assume that $\deg \hat P_{j,i} = \max\{\deg P_{j,i}, \deg K_{j,1}(\mathcal{P})P_{1,i}\}$. The degree matrix of $\hat{\mathcal{P}}$ can then be expressed in terms of $f$, defined in (4.6), and the entries $\hat\delta_{j,i} := \Delta(\mathcal{P})_{j,i} = d_{j,i} - d_{i,i}$ from Definition 4.5. Moreover, $m_{(x,y)}$ denotes a value that is less than or equal to $\min(x, y)$. It then follows that the difference matrix of $\hat{\mathcal{P}}$ has entries $m_{(\hat\delta_{j,1}, -1)}$ in the first column and entries $f_0(\hat\delta_{j,i}, \hat\delta_{j,1}, \hat\delta_{1,i}, \hat\delta_{i,1})$ elsewhere, where $f_0$ is given by (4.7). Hence, the difference matrix $\Delta(\mathcal{K}_i(\mathcal{P})\mathcal{P})$ can be computed using only the difference matrix $\Delta(\mathcal{P})$, apart from column $i$, where an upper estimate is found. This knowledge of the difference matrix is sufficient for the presented algorithms.
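The bookkeeping functions (4.6) and (4.7) are straightforward to make executable; a sketch (the sample values are illustrative):

```python
def f(x, y, z):
    # (4.6): for y >= 0 the degree can rise to y + z; for y < 0 it is unchanged.
    return max(x, y + z) if y >= 0 else x

def f0(x, y, z, w):
    # (4.7): f0(x, y, z, w) = f(x, y, z) - f(0, w, z).
    return f(x, y, z) - f(0, w, z)

assert f(3, 1, 2) == 3      # max(3, 1 + 2)
assert f(1, 2, 2) == 4      # max(1, 2 + 2)
assert f(5, -1, 9) == 5     # y < 0: unchanged
assert f0(3, 1, 2, 0) == 1  # f(3, 1, 2) - f(0, 0, 2) = 3 - 2
```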
Lemma 4.7. Let $\mathcal{P}$ be the operator matrix polynomial (4.1). Assume $\Delta(\mathcal{P})_{j,i} < 0$ for all $j, i \le k-1$ with $j \neq i$, and $\Delta(\mathcal{P})_{k,i} \le \delta$ for $i \le k-1$. Define the operator matrix polynomial $\hat{\mathcal{P}} := E\mathcal{P}$, where $E = (\mathcal{K}_{k,k-1} \circ \cdots \circ \mathcal{K}_{k,1})^{\delta+1}(\mathcal{P})$. Then $\Delta(\hat{\mathcal{P}})_{j,i} < 0$ for $j \neq i$, $i \le k-1$, and $j \le k$.
Then $\Delta(\hat{\mathcal{P}})_{j,i} < 0$ for $i, j \le k$ and $j \neq i$.
Assume q " k´1. Then we show the conditions (4.8), (4.9) for P 1 p`1 :" K 2:k,1 pP k`1 p qP k`1 p . This is done similarly as for q ă k´1 with the exception that i ą 1, which implies that only one case has to be considered in (4.9).
In conclusion, $\Delta(\mathcal{P}^{k-1}_{d-2})_{j,i} \le 0$ holds for $k \ge j > i$ due to condition (4.9), and for $j < i \le k$ the inequality holds since $f_0$ is non-decreasing in its first two arguments. By definition we have $\hat{\mathcal{P}} = \mathcal{K}_{1:k,k-1} \circ \cdots \circ \mathcal{K}_{1:k,1}(\mathcal{P}^{k-1}_{d-2})\mathcal{P}^{k-1}_{d-2}$, which satisfies the required conditions.
The following propositions present two algorithms that, for a given operator matrix polynomial $\mathcal{P}$, generate an equivalent operator matrix polynomial $\hat{\mathcal{P}}$ where the highest degrees are in the diagonal. The algorithm in Proposition 4.9 usually preserves a greater number of the original operator polynomial entries and exploits the structure of $\mathcal{P}$. However, it is only applicable when $H_i \cong H_j$ for $i, j \in \{1, \ldots, n\}$.
In the algorithms presented in Propositions 4.9 and 4.10, $J_{i,j}$ denotes the operator matrix that permutes rows $i$ and $j$.
(2) If k " n, set P 1 k :" P k and E 1 k :" E k . Else, let i ě k be the least index such that ∆pP k q i,k ě ∆pP k q l,k for all l ě k. Set P 1 k :" K k`1:n,k pJ k,i P k qJ k,i P k and E 1 k :" K k`1:n,k pJ k,i P k qJ k,i E k .
(3) Set q P k :" J 1,k P 1 k J 1,k and q E k :" J 1,k E 1 k . (4) Let J be the operator matrix that permutes the 2, . . . , k diagonal operators in q P k to obtain r P k :" J q P k J´1, which satisfies ∆p r P k q i,1 ď ∆p r P k q j,1 for all j ą i ą 1 and define r E k :" J q E k . (5) Obtain p E and p P k by applying Lemma 4.8 on r P k and set p E k :" p E r E k . (6) Set P k`1 :" J 1,k J´1 p P k JJ 1,k and E k`1 " J 1,k J´1 p E k . (7) If k " n set p P :" P k`1 , E :" E k`1 and terminate. Else set k :" k`1 and return to p2q.
By applying the algorithm to $\mathcal{P}$, we obtain an operator matrix function $\hat{\mathcal{P}} : \mathbb{C} \to \mathcal{L}(H_1^n)$ and an invertible $E : \mathbb{C} \to \mathcal{B}(H_1^n)$ such that $E(\lambda)\mathcal{P}(\lambda) = \hat{\mathcal{P}}(\lambda)$.

Proof. The result holds trivially for $k = 1$, and the proof for $k > 1$ is by induction.
In the inductive step we show that $\mathcal{P}_k = E_k\mathcal{P}$ and $\Delta(\mathcal{P}_k)_{j,i} < \Delta(\mathcal{P}_k)_{i,i}$ for all $j \in \{1, \ldots, n\}$, $i \in \{1, \ldots, k-1\}$, and $j \neq i$. Assume that the induction hypothesis holds for $k \ge 1$. By applying step (2) it follows that $\mathcal{P}'_k = E'_k\mathcal{P}$. Further, since $\Delta(J_{k,i}\mathcal{P}_k)_{k,k} \ge \Delta(J_{k,i}\mathcal{P}_k)_{l,k}$, the condition $\Delta(J_{k,i}\mathcal{P}_k)_{j,i} < 0$ for $j > k$ and $i \le k$ implies the condition $\Delta(\mathcal{P}'_k)_{j,i} < 0$ for $j > k$ and $i \le k$. After step (3) we have $\check{\mathcal{P}}_k = \check E_k\mathcal{P}J_{1,k}$, and the inequality $\Delta(\check{\mathcal{P}}_k)_{j,i} < \Delta(\check{\mathcal{P}}_k)_{i,i}$ holds for all $j \in \{1, \ldots, n\}$ and $i \in \{2, \ldots, k\}$, since the $k$-th column is swapped with column one. The existence of $J$ in step (4) is obvious, and from the definitions $\tilde{\mathcal{P}}_k = \tilde E_k\mathcal{P}J_{1,k}J^{-1}$ and $\Delta(\tilde{\mathcal{P}}_k)_{j,i} < \Delta(\tilde{\mathcal{P}}_k)_{i,i}$ for all $j \in \{1, \ldots, n\}$ and $i \in \{2, \ldots, k\}$. By construction, $\tilde{\mathcal{P}}_k$ satisfies the assumptions of Lemma 4.8. That lemma then implies that $\hat{\mathcal{P}}_k = \hat E_k\mathcal{P}J_{1,k}J^{-1}$ and $\Delta(\hat{\mathcal{P}}_k)_{j,i} < \Delta(\hat{\mathcal{P}}_k)_{i,i}$ for all $j \in \{1, \ldots, n\}$ and $i \in \{1, \ldots, k\}$.

Hence, $\hat{\mathcal{P}}_k$ satisfies the desired condition for $\mathcal{P}_{k+1}$, but the equivalence is $\hat{\mathcal{P}}_k = \hat E_k\mathcal{P}J_{1,k}J^{-1}$. Step (6) restores an equivalence of the desired type, $\mathcal{P}_{k+1} = E_{k+1}\mathcal{P}$, and since $J_{1,k}J^{-1}$ is a permutation operator matrix of the first $k$ rows, the condition $\Delta(\hat{\mathcal{P}}_k)_{j,i} < \Delta(\hat{\mathcal{P}}_k)_{i,i}$ for all $j \in \{1, \ldots, n\}$, $i \in \{1, \ldots, k\}$, and $i \neq j$ implies the same conditions for $\mathcal{P}_{k+1}$. Hence, the result follows by induction.
(2) Obtain $E$ and $\mathcal{P}'_k$ by applying Lemma 4.7 to $\mathcal{P}_k$ and set $E'_k := EE_k$.
(3) Set q P k :" J 1,k P 1 k J 1,k and q E k :" J 1,k E 1 k . (4) Let J be the operator matrix that permutes the 2, . . . , k diagonal operators in q P k to obtain r P k :" J q P k J´1, which satisfies ∆p r P k q i,1 ď ∆p r P k q j,1 for all j ą i ą 1 and define r E k :" J q E k . (5) Obtain p E and p P k by applying Lemma 4.8 on r P k and set p E k :" p E r E k . (6) Set P k`1 :" J 1,k J´1 p P k JJ 1,k and E k`1 " J 1,k J´1 p E k . (7) If k " n set p P :" P k`1 , E :" E k`1 and terminate. Else set k :" k`1 and return to p2q.
By applying the algorithm to $\mathcal{P}$, we obtain an operator matrix function $\hat{\mathcal{P}} : \mathbb{C} \to \mathcal{L}(H_1 \oplus \cdots \oplus H_n)$ and an invertible $E : \mathbb{C} \to \mathcal{B}(H_1 \oplus \cdots \oplus H_n)$ such that $E(\lambda)\mathcal{P}(\lambda) = \hat{\mathcal{P}}(\lambda)$.

Here $\mathcal{P}$ is an operator matrix polynomial, but in the last two columns the highest degree is not strictly in the diagonal. Hence, an equivalent problem has to be found. Applying the algorithm given in Proposition 4.10 to $\mathcal{P}$ results in the equivalent operator function $\hat{\mathcal{P}} := \mathcal{K}_{4,3}(\mathcal{P})\mathcal{P}$, where $G = C_2 + D_2A$, $\mathcal{D}(G) = \mathcal{D}(C_2)$, $D_B := D_2B_2 + D_1B + D_0$, $\mathcal{D}(D_B) = \mathcal{D}(D_0)$, and $K := D_1 + D_2B$. In $\hat{\mathcal{P}}$ the highest degrees are in the diagonal, and at most one coefficient in $G\lambda^2 + (C_1 + KA)\lambda + C_0$ and $P_1\lambda + P_0$ is unbounded. Hence, Theorem 4.1 can be applied. Define $\hat G := (D_2Q)^{-1}G$, $\hat K := (D_2Q)^{-1}K$, $\hat C_i := (D_2Q)^{-1}C_i$, and $\hat D_B := (D_2Q)^{-1}D_B$. Let $\mathcal{W}$ denote the function defined in Theorem 4.1. Then $\hat{\mathcal{P}}(\lambda)$ is after $\mathcal{W}(\lambda)$-extension equivalent to $T - \lambda$ on $\Omega$, where the operator matrix $T \in \mathcal{L}(H^4 \oplus \tilde H^3)$ is defined on its natural domain. In conclusion, $\mathcal{S}(\lambda)$ is after $I_H \oplus D(\lambda) \oplus \mathcal{W}(\lambda)$-extension equivalent to $T - \lambda$ for all $\lambda \in \Omega$. Hence, Proposition 2.3 yields that the spectral properties of $T$ and of $\mathcal{S}$ coincide.