Conditional Reducibility of Certain Unbounded Nonnegative Hamiltonian Operator Functions

Let $J$ and $\mathfrak{J}$ be operators on a Hilbert space $\mathcal{H}$ which are both self-adjoint and unitary and satisfy $J\mathfrak{J}=-\mathfrak{J}J$. We consider an operator function $\mathfrak{A}$ on $[0,1]$ of the form $\mathfrak{A}(t)=\mathfrak{S}+\mathfrak{B}(t)$, $t\in[0,1]$, where $\mathfrak{S}$ is a closed densely defined Hamiltonian ($=\mathfrak{J}$-skew-self-adjoint) operator on $\mathcal{H}$ with $i\mathbb{R}\subset\rho(\mathfrak{S})$ and $\mathfrak{B}$ is a function on $[0,1]$ whose values are bounded operators on $\mathcal{H}$ and which is continuous in the uniform operator topology. We assume that for each $t\in[0,1]$ the operator $\mathfrak{A}(t)$ is a closed densely defined nonnegative ($=J$-accretive) Hamiltonian operator with $i\mathbb{R}\subset\rho(\mathfrak{A}(t))$. In this paper we give sufficient conditions on $\mathfrak{S}$ under which $\mathfrak{A}$ is conditionally reducible, which means that, with respect to a natural decomposition of $\mathcal{H}$, $\mathfrak{A}$ is diagonalizable into a $2\times 2$ block operator matrix function such that the spectra of the two operator functions on the diagonal are contained in the open right and left half-planes of the complex plane. The sufficient conditions involve bounds on the resolvent of $\mathfrak{S}$ and interpolation of Hilbert spaces.


Introduction
In [2] and [3] the following problem is considered: under what conditions is a continuous function whose values are bounded nonnegative Hamiltonian operators conditionally reducible, in particular, when does it admit a spectral diagonalization with respect to a fixed fundamental decomposition? In this paper we extend the results from [2] to functions on [0, 1] whose values are closed densely defined nonnegative Hamiltonian operators of the form described in the abstract. Throughout this note we use the theory of operators in Krein spaces and J-spaces, where J is a signature operator: J = J* = J⁻¹ (hence self-adjoint and unitary); see, for example, [1,4,9].
Let H be a Hilbert space and let H_r and H_ℓ be two subspaces of H. A closed densely defined operator A on H is called conditionally (H_r, H_ℓ)-reducible if H is the orthogonal sum of H_r and H_ℓ,

H = H_r ⊕ H_ℓ, (1.1)

and there exists a bounded and boundedly invertible operator V on H such that with respect to the decomposition (1.1) the operator B := V⁻¹AV is a diagonal 2 × 2 block matrix,

B = diag(B_r, B_ℓ), (1.2)

whose diagonal entries B_r and B_ℓ are closed densely defined operators on H_r and H_ℓ with σ(B_r) ⊂ C_r and σ(B_ℓ) ⊂ C_ℓ, where C_r and C_ℓ stand for the open right and left half-planes of C. The operator V will be called the diagonalizing operator. If V is the identity operator, then we say that A is (H_r, H_ℓ)-reducible. Diagonalization problems of matrix functions with spectral constraints on the diagonal entries can be found in [27] and, more recently, in [14], where further references can be found. Conditional diagonalization of bounded block operator matrices is studied, besides in [2,3] mentioned above, in [15–17].
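The definition above can be illustrated in finite dimensions. The sketch below is a hypothetical numerical toy model (the matrices are illustrations, not taken from the paper): A = V B V⁻¹ with B = diag(B_r, B_ℓ) as in (1.2), where σ(B_r) lies in the open right half-plane and σ(B_ℓ) in the open left half-plane.

```python
import numpy as np

# Toy model of conditional (H_r, H_l)-reducibility: A = V B V^{-1} with
# B = diag(B_r, B_l) as in (1.2).  All matrices are hypothetical examples.
B_r = np.array([[2.0, 1.0],
                [0.0, 3.0]])               # spectrum {2, 3}: right half-plane
B_l = np.array([[-1.0, 0.0],
                [4.0, -5.0]])              # spectrum {-1, -5}: left half-plane
B = np.block([[B_r, np.zeros((2, 2))],
              [np.zeros((2, 2)), B_l]])

rng = np.random.default_rng(0)
V = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # bounded, boundedly invertible

A = V @ B @ np.linalg.inv(V)               # A is conditionally reducible by V

# V^{-1} A V is block diagonal again, with the required spectral constraints
B_back = np.linalg.inv(V) @ A @ V
assert np.allclose(B_back, B)
assert all(ev.real > 0 for ev in np.linalg.eigvals(B_r))
assert all(ev.real < 0 for ev in np.linalg.eigvals(B_l))
```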
If A is a closed densely defined operator on a Banach space X, a subspace L of X will be called A-invariant if L ∩ dom A is dense in L and A(L ∩ dom A) ⊂ L. For example, regarding (1.2), the subspaces H_r and H_ℓ are B-invariant.
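In finite dimensions (where the density condition is automatic), invariance of the coordinate subspaces under a block-diagonal operator can be checked directly; a minimal sketch with a hypothetical block-diagonal B:

```python
import numpy as np

# For the block-diagonal B of (1.2), the coordinate subspaces H_r and H_l
# are B-invariant.  Hypothetical finite-dimensional example.
B = np.block([[np.array([[2.0, 1.0], [0.0, 3.0]]), np.zeros((2, 2))],
              [np.zeros((2, 2)), np.array([[-1.0, 0.0], [0.0, -4.0]])]])

# H_r = span{e_0, e_1}: B maps every vector of H_r back into H_r
for e in np.eye(4)[:2]:
    image = B @ e
    assert np.allclose(image[2:], 0)   # no component outside H_r
```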
Let G be a Hilbert space and denote by H the Hilbert space which is the orthogonal direct sum of two copies of G:

H = G ⊕ G. (1.3)

In H we consider the signature operators J and 𝔍 represented by the 2 × 2 block matrices … is Hamiltonian. If also [C_3] holds, then V can be chosen continuous on [0, 1] in the strong (uniform) operator topology as well.
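The displayed matrices (1.4) are lost in this excerpt, so the following is only a plausible illustration, not the paper's form (1.4): one standard anticommuting pair of signature operators on G ⊕ G (here G = C²) is J = diag(I, −I) and 𝔍 = [[0, −iI], [iI, 0]].

```python
import numpy as np

# A hypothetical anticommuting pair of signature operators on G + G
# (the exact form (1.4) is not recoverable from this excerpt).
n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[I, Z], [Z, -I]])             # J = J* = J^{-1}
Jf = np.block([[Z, -1j * I], [1j * I, Z]])  # second signature operator ("fraktur J")

for S in (J, Jf):
    assert np.allclose(S, S.conj().T)          # self-adjoint
    assert np.allclose(S @ S, np.eye(2 * n))   # unitary involution
assert np.allclose(J @ Jf, -Jf @ J)            # anticommutation J Jf = -Jf J
```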
The theorem will be proved in the next section, Sect. 2. The proof involves block operator matrix representations of operators and operator functions relative to various decompositions of the Hilbert space H such as (1.3) and (1.5). If [C_1] holds we construct a bounded and boundedly invertible operator function … [18], see also [28], and involves the existence of an integral of the resolvent of S. The set of conditions considered in Sect. 4 comes from [24] (and in a weaker form from [11]) and involves the theory of interpolating Hilbert spaces. It is not clear whether or not the conditions of one set imply the conditions of the other. In Sect. 5 we discuss and prove some results related to conditions [C_1] and [C_2]. Finally, in Sect. 6 we give an example in which S is a differential operator and B(t) is a multiplication operator, see Example 6.3.

Remark 1.3. In the definition of a nonnegative Hamiltonian operator there is no need to consider signature operators J and 𝔍 of the form (1.4): Consider two signature operators J and 𝔍 on a Hilbert space H such that Re J𝔍 = 0. Let A be a closed densely defined operator on H which is J-dissipative (that is, Im JA ≥ 0) and 𝔍-self-adjoint (hence Im 𝔍A = 0). Then there exist an orthogonal decomposition H = G_1 ⊕ G_2 of H in which the summands are J- as well as 𝔍-neutral subspaces of H and a unitary mapping T : G_1 → G_2. Thus if we identify G_1 and G_2 = T G_1 as the same space G, then J and 𝔍 are given by (1.4) and −iA is a nonnegative Hamiltonian. The proof of this result and some examples are given in Sect. 6. Theorem 1.2 also holds with this seemingly more general definition.
Let H = H_+ ⊕ H_− (2.1) be the fundamental J-decomposition of H and denote by P_± the projections onto H_±. Then … and the projections have the 2 × 2 block matrix form … The space H can be written as the direct sums … This follows from [1, Theorem 1.4.5], which implies P_± L_±(t) = H_±. We carry out the proof of the first equality: … Since also H_+ ∩ L_−(t) = {0}, the sum is direct. This completes the proof.

For each t ∈ [0, 1] the bounded operators U_±(t) = P_±(t)|_{H_±} are boundedly invertible operators from H_± onto L_±(t). We give a proof for U_+(t): It is surjective, because … The closed graph theorem implies that U_+(t)⁻¹ is bounded. This completes the proof. It follows that the operator U(t) = P_+(t)P_+ + P_−(t)P_− : H → H, or in 2 × 2 block matrix form …, is bounded and has a bounded inverse for each t ∈ [0, 1]. Both U(t) and U(t)⁻¹ are diagonal and therefore condition … is a diagonal operator on H_+ ⊕ JH_−. We consider the operator W defined by the 2 × 2 block matrix … It readily follows that … It follows that A_1(t) is Hamiltonian and conditionally (G, G)-reducible with diagonalizing operator V(t) = U(t)W if we can construct an operator function U with the properties …

To construct such a U we use that, by [1, Theorem 1.8.2 and Proposition 1.8.7 b)], the subspace L_+(t), since it is maximal J-nonnegative, is the graph of a bounded operator K(t) : H_+ → H_− (called the angular operator of L_+(t)): … With respect to the fundamental J-decomposition (2.1) the operator P_+(t) has a matrix representation of the form … in which X(t) : H_+ → H_+ and Y(t) : H_− → H_+ are bounded operators, t ∈ [0, 1]. In fact, … and, since U_+(t) is a bijection from H_+ onto L_+(t) and P_+ L_+(t) = H_+, we see that X(t) is a bijection on H_+ and hence boundedly invertible on H_+, t ∈ [0, 1].
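The angular-operator description of a J-nonnegative subspace can be illustrated in the simplest case. The following is a hypothetical sketch in C² with J = diag(1, −1): a J-nonnegative line is the graph {(x, Kx)} of a scalar contraction K with |K| ≤ 1.

```python
import numpy as np

# In C^2 with J = diag(1, -1) and fundamental decomposition H_+ + H_-,
# the graph of a contraction K : H_+ -> H_- is a J-nonnegative subspace.
# Hypothetical one-dimensional illustration of the angular operator.
J = np.diag([1.0, -1.0])

def j_inner(x, y):
    """Indefinite inner product [x, y]_J = (J x, y)."""
    return y.conj() @ (J @ x)

K = 0.6  # |K| <= 1: a contraction from H_+ to H_-
for x_plus in np.linspace(-2, 2, 9):
    v = np.array([x_plus, K * x_plus])   # point on the graph of K
    # [v, v]_J = |x|^2 - |K x|^2 = (1 - K^2) |x|^2 >= 0
    assert j_inner(v, v).real >= 0
```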
The equality P_+(t)² = P_+(t) with P_+(t) given by (2.3) yields the equality … has the properties (I) and (II):

(I) Since H_± are 𝔍-neutral subspaces, the operator 𝔍 admits the matrix representation … where G : H_− → H_+ is the unitary operator defined by … By assumption [C_2], the subspaces L_±(t) are 𝔍-neutral, whence the equalities … which will be proved below. The equalities (2.6), (2.8) and G* = G⁻¹ imply … It remains to prove (2.6)–(2.8). We denote the inner product and the norm on H by ( · , · )_H and ‖ · ‖_H. For all x_+, y_+ ∈ H_+ we have … and, since L_+(t) is 𝔍-neutral, … Hence 𝔍P_−(t) = P_+(t)*𝔍 and this implies (2.7). If we replace P_+(t) and 𝔍 in the equality (2.7), written as I − P_+(t) = 𝔍P_+(t)*𝔍, by the right-hand side of (2.3) and the right-hand side of (2.5), we obtain the equality … and if we equate the entries in the upper right-hand corner of the matrices on the left and on the right we obtain (2.8).
(II) follows from the equalities (2.4), (2.3), X(t) = I − Y(t)K(t) and the fact that I − K(t)Y(t) is a bijection on H_−. Indeed, they imply … and …

Finally we assume that the conditions … are contractions for all t ∈ [0, 1]. We give the proof for X(t)⁻¹Y(t); the proof for K(t) follows from (2.2). The equalities in (2.9) and (2.10) and the equalities … imply … Hence if X is continuous on [0, 1] in the strong operator topology, then so is …

In particular, Lemma 3.1 holds for B(t) ≡ 0 and then it is apparently well known, see [11, (3.3)] and [26, (3.1)]; a proof can be found in [12, … Since B is continuous on [0, 1], the right-hand side converges to 0 as (λ, t) → (μ, s). This proves (a).
For (λ, t) ∈ R \ R_1 we have |λ| > γM and therefore … This proves (i). Moreover, with γ = 2γ_1 we have … As to S_α^±, note that it is the union of the singleton {0} and the rays … Thus to prove that S_α^± ⊂ ρ(A(t)), we only have to show that each such ray belongs to ρ(A(t)). For λ = β tan ϕ + iβ ∈ R_ϕ^± we have on account of (ii) that … and λ ∈ ρ(A(t)). This proves (3.1) and completes the proof of (iii).

(e) We prove (iv). Set c = γ tan α; then 0 ≤ c < 1 and, by (3.2), (3.3) and …

Sufficient Conditions (II)
Let (V, ‖ · ‖_V) and (W, ‖ · ‖_W) be Banach spaces continuously embedded in a Hausdorff topological vector space, so that the sum V + W is well defined. The K-method introduced in [23] associates with these two spaces a family of Banach spaces, indexed by two parameters θ ∈ (0, 1) and p ∈ [1, ∞), in the following way: Define for x ∈ V + W the functions … Then (V, W)_{θ,p} := {x ∈ V + W : ‖x‖_{θ,p,V,W} < ∞} is a vector space and ‖x‖_{θ,p,V,W} is a norm on this space which makes it a Banach space. It is called an interpolation space because of the property that if T is a bounded operator on V + W such that T|_V and T|_W are bounded operators on V and W respectively, then T|_{(V,W)_{θ,p}} is a bounded operator on (V, W)_{θ,p}. There are other ways of defining the interpolation space (V, W)_{θ,p}, for example via the trace method, complex interpolation and via the domains of operators; see, for example, [12,19,20,22]. In some of the references given below such equivalent definitions are used. If X and Y are Banach spaces, then we mean by the equality X = Y that the spaces X and Y coincide as vector spaces and that their norms are equivalent. By way of example, if X = (V, W)_{θ,p} and V_1 and W_1 are Banach spaces, then the following implication holds: … (4.1)

To formulate the main result of this section we consider the case where θ = 1/2, p = 2 and the Banach spaces V and W are Hilbert spaces associated with a closed densely defined operator T in a Hilbert space (H, ( · , · )_H) with 0 ∈ ρ(T): Denote by H_1(T) the vector space dom T equipped with the Hilbert space norm ‖x‖_1 = ‖Tx‖_H, x ∈ H_1(T). Then the inclusion map (= identity mapping) ι : H_1(T) → H is continuous … and it is injective and has a dense range. For the inclusion map with these properties we use the notation H_1(T) ↪ H.
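The displayed functions of the K-method are omitted in the excerpt above; in the standard Peetre formulation (which is what [23] introduces, stated here for reference) they read:

```latex
K(s, x; \mathcal{V}, \mathcal{W})
  = \inf \bigl\{ \|v\|_{\mathcal{V}} + s \|w\|_{\mathcal{W}} :
      x = v + w,\ v \in \mathcal{V},\ w \in \mathcal{W} \bigr\},
  \qquad s > 0,

\|x\|_{\theta,p,\mathcal{V},\mathcal{W}}
  = \left( \int_0^\infty
      \bigl( s^{-\theta} K(s, x; \mathcal{V}, \mathcal{W}) \bigr)^p
      \,\frac{ds}{s} \right)^{1/p}.
```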
Denote by H_{−1}(T) the dual space of H_1(T), that is, the space of linear mappings … These equalities and the identification of H with its dual imply that ι can be viewed as an inclusion map from H to H_{−1}(T). Since ι is continuous, injective and has a dense range, the inclusion map ι is continuous, has a dense range and is injective. It follows that H_{−1}(T) is the (Hilbert space) completion of H with respect to the norm … For the third equality we used that ran T = H. To summarize we have … For x ∈ H these inequalities become … These inequalities imply that there exist contractive operators S and T such that …

In the proof of the theorem below we use the following simple lemma. …

Proof. By (i), Remark 3.2 and Lemma 3.1 (iii) and (iv), there are positive real numbers α > 0, γ > 0 and … such that … and … The complement in C of the set on the left of (4.5) is the union Ω_α^ℓ ∪ Ω_α^r of the open sets … The first set and its boundary Γ_α^ℓ = ∂Ω_α^ℓ belong to C_ℓ, and the second set and its boundary Γ_α^r = ∂Ω_α^r belong to C_r. The boundaries are connected sets contained in D_α ∪ S_α^+ ∪ S_α^− and hence are part of ρ(A(t)) for all t ∈ [0, 1]. Following [24] we consider the integrals … The orientation of Γ_α^r is from above to below and the orientation of Γ_α^ℓ is from below to above. For each t ∈ [0, 1] these integrals define bounded linear operators P_±(t) from H_1(A(t)) to H. By Lemma 4.3, item (ii) implies … By [24, Theorem 4.1] this equality implies that P_+(t) and P_−(t) can be extended to bounded linear operators on H, also denoted by P_+(t) and P_−(t), having the following properties: P_±(t) are projections on H: P_±(t)² = P_±(t), P_+(t) + P_−(t) = I; the ranges L_±(t) := P_±(t)H are A(t)-invariant; and L_+(t) is J-nonnegative and L_−(t) is J-nonpositive. On [24, page 38] it is observed that P_±(t) are the Riesz projections corresponding to the parts of the spectrum of A(t) that lie in C_{r/ℓ}, hence σ(A(t)|_{L_±(t)}) ⊂ C_{r/ℓ}. Thus [C_1] holds.
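The interpolation position of H between H_1(T) and H_{−1}(T) is reflected in the elementary inequality ‖x‖_H² = (Tx, T⁻¹x)_H ≤ ‖Tx‖_H ‖T⁻¹x‖_H, valid for self-adjoint T with 0 ∈ ρ(T) by Cauchy–Schwarz. A finite-dimensional numerical check with a hypothetical diagonal T:

```python
import numpy as np

# For self-adjoint T with 0 in rho(T), the H-norm interpolates between
# the norms of H_1(T) and H_{-1}(T):
#   ||x||_H^2 = (Tx, T^{-1}x)_H <= ||Tx||_H * ||T^{-1}x||_H.
# Hypothetical diagonal T on C^3.
T = np.diag([1.0, 2.0, 5.0])
T_inv = np.linalg.inv(T)

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(3)
    n_H = np.linalg.norm(x) ** 2          # squared H-norm
    n_1 = np.linalg.norm(T @ x)           # norm of H_1(T)
    n_m1 = np.linalg.norm(T_inv @ x)      # norm of H_{-1}(T)
    assert n_H <= n_1 * n_m1 + 1e-12
```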
We prove [C_2]. The operator A(t) is Hamiltonian and hence the operators ±iA(t) are 𝔍-dissipative. We first consider the operator iA(t). Then, by what has been proved above with J replaced by 𝔍, we find that the subspace L_+(t) is 𝔍-nonnegative and the subspace L_−(t) is 𝔍-nonpositive. Now we consider the operator −iA(t). Define the projection operators Q_±(t) by the integrals in (4.7) with A(t) replaced by −A(t). Then, also by what has been proved above with J replaced by 𝔍, the subspace Q_+(t)H is 𝔍-nonnegative and the subspace Q_−(t)H is 𝔍-nonpositive. If in the integrals for Q_±(t) we change the variable z to −z, we find that Q_±(t) = P_∓(t). Hence the subspaces L_±(t) = Q_∓(t)H are 𝔍-nonnegative as well as 𝔍-nonpositive, that is, they are 𝔍-neutral. This proves [C_2].

Finally, we prove [C_3]. For t, s ∈ [0, 1] we have, on account of (4.6), that … Since the integral is finite, the continuity of t → P_+(t) follows from the continuity of B. The continuity of t → P_−(t) can be proved in the same way.
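The Riesz projections P_±(t) onto the spectral parts in C_r and C_ℓ can be illustrated numerically. The sketch below uses the eigendecomposition of a hypothetical 3 × 3 matrix rather than the contour integrals (4.7), which give the same projections in this diagonalizable finite-dimensional situation:

```python
import numpy as np

# Spectral (Riesz) projections P_+ and P_- onto the parts of the spectrum
# in the right/left open half-planes, computed via eigendecomposition
# (a finite-dimensional stand-in for the contour integrals (4.7)).
A = np.array([[1.0, 2.0,  0.0],
              [0.0, 3.0,  1.0],
              [0.0, 0.0, -2.0]])   # eigenvalues 1, 3 in C_r and -2 in C_l
w, V = np.linalg.eig(A)
V_inv = np.linalg.inv(V)

P_plus = V @ np.diag((w.real > 0).astype(float)) @ V_inv
P_minus = V @ np.diag((w.real < 0).astype(float)) @ V_inv

assert np.allclose(P_plus + P_minus, np.eye(3))   # P_+ + P_- = I
assert np.allclose(P_plus @ P_plus, P_plus)       # projections
assert np.allclose(A @ P_plus, P_plus @ A)        # ranges are A-invariant
```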
Remark 4.5. In [25] and [26] versions appear of [24, Theorem 4.1], used in the foregoing proof, in which the operators are assumed maximal J-dissipative. We note that the operator iA(t) in Theorem 4.4 is also maximal J-dissipative. This follows from the fact that 0 ∈ ρ(A(t)). Indeed, assume that for some t ∈ [0, 1], iA(t) has a proper J-dissipative extension D on H, in graph notation iA(t) ⫋ D. Then (iA(t))⁻¹ is a bounded operator and (iA(t))⁻¹ ⫋ D⁻¹ implies that 0 is an eigenvalue of D. Then, by [1, Corollary 2.2.16], 0 is also an eigenvalue of JD*, hence an eigenvalue of D* and, since D* ⊂ −iA(t)*, an eigenvalue of A(t)*. It follows that 0 ∉ ρ(A(t)*) = ρ(A(t))*, which is in contradiction with 0 ∈ ρ(A(t)). Thus iA(t) is maximal J-dissipative for all t ∈ [0, 1].

Indeed, by [24, Lemma 3.14]: (ii′) ⇒ (ii). Condition (ii′) appears in [11]. According to the proof of Lemma 4.3, condition (ii′) implies that … By [11, Theorem 3.2] and the remark following its proof, this equality and Theorem 4.4(i) imply that the integrals in (4.7) define bounded projections P_±(t) on H with P_+(t) + P_−(t) = I whose ranges L_±(t) are A(t)-invariant and satisfy σ(A(t)|_{L_±(t)}) ⊂ C_{r/ℓ}. That L_±(t) are J-nonnegative/J-nonpositive follows from the assumption that iA(t) is J-dissipative; see the proof of …

Before proving the implication (iii) ⇒ (ii), we note that … x ∈ L and hence L = L^⊥. Since (iii) implies P_M = P_L^*, in the above the roles of L and M can be interchanged to obtain that M = M^⊥. Hence (ii) holds.
Proof. If L is neutral, then it is semi-definite. We will show the converse implication. Assume L is semi-definite. Set B := A|_L. From 0 ∈ σ_c(A) it follows that 0 ∈ σ_c(B). Let … where L^∘ = L ∩ L^⊥ and L_1 is the Hilbert space orthogonal complement of L^∘ in L. Our aim is to show that L_1 = {0}. Assume it is not. For x ∈ L^∘ and y ∈ L we have (Bx, y)_K = (x, By)_K = 0; hence the neutral subspace L^∘ is B-invariant. Thus with respect to (5.2) the operator B has the matrix representation … Here B_1 is a bounded self-adjoint operator in L_1, which is a Hilbert space or the antispace of a Hilbert space (depending on L being nonnegative or nonpositive), and σ(B_1) ⊆ C_+ ∪ C_− ∪ {0}. It follows that the spectrum of B_1 is real, thus σ(B_1) = {0} and therefore B_1 = 0. Hence, by the assumption that L_1 is not trivial, the range of B is not dense in L, which contradicts the earlier conclusion that 0 ∈ σ_c(B). This implies that L = L^∘ is neutral.
If Γ ⊂ C, then Γ* := {λ : λ* ∈ Γ}. The last fact follows from [1, 1.1.25] applied to the decomposition (5.5). It implies that L_+ is maximal J-nonnegative and this, by [1, Theorem 1.8.11], implies that L_+^{⊥J} is maximal J-nonpositive. The following example shows that there is a nonnegative Hamiltonian A such that the direct sum decomposition H = L_+ ∔ L_−, in which the summands are A-invariant and J-neutral, need not be unique.

Proof of Remark 1.3 and Some Examples
We say that a pair (J, 𝔍) of signature operators on a Hilbert space is admissible if Re J𝔍 = 0 or, equivalently, J𝔍 = −𝔍J. In Remark 1.3 it is assumed that (J, 𝔍) is an admissible pair of signature operators.
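The stated equivalence is elementary: for self-adjoint J and 𝔍 one has (J𝔍)* = 𝔍J, so Re J𝔍 = (J𝔍 + 𝔍J)/2, and this vanishes exactly when J and 𝔍 anticommute. A minimal numerical check with a hypothetical admissible pair built from Pauli matrices:

```python
import numpy as np

# Admissibility: Re(J Jf) = (J Jf + (J Jf)*)/2 = (J Jf + Jf J)/2, so
# Re(J Jf) = 0 is equivalent to J Jf = -Jf J.  Hypothetical pair on C^2.
J = np.array([[1.0, 0.0], [0.0, -1.0]])   # sigma_3
Jf = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_1

prod = J @ Jf
re_part = (prod + prod.conj().T) / 2
assert np.allclose(re_part, 0)            # Re(J Jf) = 0
assert np.allclose(J @ Jf, -Jf @ J)       # equivalent anticommutation
```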